# Perpendicular transport and magnetization processes in magnetic multilayers with strongly and weakly coupled magnetic layers.
## 1 Introduction
Since the discovery of the giant magnetoresistance (GMR) phenomenon in magnetic multilayers there has been a great deal of interest in studying them by both theoretical and experimental methods. The reason for this interest is the possibility, already partially realized, of practical applications as magnetic sensors, recording heads and magnetic memory elements.
The standard system exhibiting the GMR is a trilayer (*i.e.* two magnetic layers separated by a non-magnetic spacer) with the thickness of the spacer chosen so as to produce antiferromagnetic coupling between the magnetic layers. While such a system, due to its simplicity, is convenient for theoretical treatment, it presents some problems in practical applications. The main problem is the high switching field which is usually necessary to rotate the magnetizations (overcoming the antiferromagnetic coupling) and produce the GMR. One way to deal with this problem is to use somewhat more complex *spin-engineered* structures. Widely known structures of this type are spin-valve systems, in which one of the magnetic moments is fixed by the strong exchange coupling due to an additional antiferromagnetic layer (*e.g.* MnFe or CoO). There is, however, also a different approach, in which a system composed of three magnetic layers is used. Two of them are strongly antiferromagnetically coupled, forming the so-called artificial antiferromagnetic subsystem (AAF), and the third one — the detection layer — is only weakly coupled (or just decoupled). Such a setup was proposed both for laboratory measurements and, more practically, as an angular velocity meter. A similar system (a superlattice with strong and weak exchange couplings) was also studied theoretically (on the *ab initio* level); the thicknesses involved were, however, small due to numerical limitations.
The aim of the present paper is to perform thorough studies of the transport and magnetic properties of the systems in question and to relate them to the corresponding magnetic structure phase diagrams.
## 2 The model and the method of calculations
We consider a system consisting of three magnetic layers separated by two non-magnetic spacers, *i.e.* a structure of the $`F_1/S_1/F_2/S_2/F_3`$ type, where $`F_i`$ stands for a ferro- and $`S_i`$ for a paramagnetic layer. In order to describe collinear configurations we employ a tight-binding hamiltonian with two hybridized bands and spin-dependent on-site potentials (see Ref. for details). We restrict ourselves for simplicity to the case of a simple cubic structure. The following values of the model parameters have been chosen: $`E_F=0`$, $`t_s=1`$, $`t_d=0.2`$, $`V_{sd}=1`$, $`ϵ_{i\sigma }^s=0`$ and $`ϵ_{i\uparrow }^d=1`$ for all the layers, and $`ϵ_{i\downarrow }^d=0.2`$ within magnetic layers and $`1`$ elsewhere, where $`E_F`$ is the Fermi energy, $`t_\alpha `$ (with $`\alpha `$ being the band index — $`s,d`$) are the hopping integrals, $`V_{sd}`$ is the $`sd`$ intra-atomic hybridization and $`ϵ_{i\sigma }^\alpha `$ are the on-site potentials (where $`\sigma `$ is $`\uparrow `$ for majority- and $`\downarrow `$ for minority-spin carriers). The above set of parameters enables us to mimic the essential features of the electronic structure of Co/Cu multilayers (*i.e.* the spin-polarized density of states is qualitatively reproduced — in particular, the majority *d*-bands in the magnetic and the (non-polarized) *d*-bands in the paramagnetic layers are matched perfectly, which closely resembles the situation in Co/Cu systems).
The conductance is computed from the Kubo formula with the help of the recursion Green function technique . The only difference in comparison with Ref. is that the hybridization in the lead wires (attached to the multilayer from both sides for the transport calculations) has been taken into account this time. The GMR has been defined as
$$\mathrm{GMR}=\frac{\mathrm{\Gamma }^{\uparrow \downarrow \downarrow }}{\mathrm{\Gamma }^{\uparrow \downarrow \uparrow }}-1,$$
(1)
where the arrows show the orientations of the magnetic moments. Note that without the first magnetic layer this definition would be identical to the usual one. Additionally, the thermopower (TEP) has also been calculated, from the following formula (see e.g. Ref. )
$$S=-\frac{\pi ^2k_B^2T}{3\left|e\right|}\frac{d}{dE}\mathrm{ln}\mathrm{\Gamma }\left(E\right).$$
(2)
We define, as in Ref. , the “giant magneto-TEP-effect” GMTEP analogously to Eq. (1) (with $`\mathrm{\Gamma }`$ replaced by $`S`$).
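As an illustration of Eqs. (1) and (2), the sketch below evaluates the TEP by a central finite difference of $`\mathrm{ln}\mathrm{\Gamma }`$ at the Fermi energy and forms the GMTEP ratio. The Lorentzian $`\mathrm{\Gamma }(E)`$ and all numerical values are toy assumptions standing in for the recursion Green function results, not the paper's actual conductances.

```python
import numpy as np

# Mott formula, Eq. (2): S = -(pi^2 k_B^2 T / 3|e|) d ln(Gamma)/dE at E_F.
# Reduced units: k_B = |e| = 1 (assumption); Gamma(E) is a toy Lorentzian.
KB, T, E_F = 1.0, 0.05, 0.0

def gamma(E, config):
    """Toy conductance for the two collinear configurations (assumed shape)."""
    centre = {"udd": 0.3, "udu": 0.5}[config]   # 'udd' = up-down-down, etc.
    return 1.0 / (1.0 + (E - centre) ** 2)

def thermopower(config, dE=1e-4):
    # central finite difference of ln Gamma at the Fermi energy
    dlnG = (np.log(gamma(E_F + dE, config))
            - np.log(gamma(E_F - dE, config))) / (2 * dE)
    return -(np.pi ** 2 * KB ** 2 * T / 3.0) * dlnG

S_udd, S_udu = thermopower("udd"), thermopower("udu")
print(f"GMTEP = {S_udd / S_udu - 1.0:.3f}")     # ratio defined analogously to Eq. (1)
```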
For studying the magnetization processes we employ a phenomenological expression, not unlike the one introduced in Ref. ($`\mathrm{\Theta }_i`$ is the angle between the *i*-th magnetic moment and the external field $`B`$)
$`E(\mathrm{\Theta }_1,\mathrm{\Theta }_2,\mathrm{\Theta }_3)`$ $`=`$ $`-J_{12}\mathrm{cos}\left(\mathrm{\Theta }_1-\mathrm{\Theta }_2\right)-J_{23}\mathrm{cos}\left(\mathrm{\Theta }_2-\mathrm{\Theta }_3\right)-J_{13}\mathrm{cos}\left(\mathrm{\Theta }_1-\mathrm{\Theta }_3\right)`$ (3)
$`-B{\displaystyle \underset{i=1}{\overset{3}{\sum }}}t_i\mathrm{cos}\left(\mathrm{\Theta }_i\right)/t+{\displaystyle \underset{i=1}{\overset{3}{\sum }}}E_A(\mathrm{\Theta }_i),`$
where the first three terms describe the bilinear exchange coupling between the magnetic layers, the next three are the Zeeman energy terms ($`t_i`$ being the $`i`$-th layer thickness and $`t`$ the overall thickness of all the magnetic layers) and $`E_A(\mathrm{\Theta }_i)`$ is the crystalline anisotropy, equal to $`t_iK\mathrm{sin}^2(2\mathrm{\Theta }_i)/4t`$ in the cubic and $`-t_iD\mathrm{cos}^2\mathrm{\Theta }_i/t`$ in the uniaxial case. We assume that the external magnetic field is applied along an in-plane crystallographic axis, which can be either an easy or a hard axis depending on the sign of the anisotropy constants. The magnetic moments are confined to the layer planes, which corresponds to strong shape anisotropy. Expression (3) was then minimized numerically with respect to the $`\mathrm{\Theta }_i`$-s by taking, starting from an initial configuration, small steps in the direction opposite to the energy gradient. All the extremal points found in this way were additionally checked against the stability condition (*i.e.* the positivity of all the leading principal minors of $`M_{ij}=\partial ^2E/(\partial \mathrm{\Theta }_i\,\partial \mathrm{\Theta }_j)`$) in order to eliminate saddle points. Using Eq. (3) one can write (for $`B`$ equal to 0)
$`J_{12}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left[E\left(\pi 00\right)+E\left(0\pi 0\right)-E\left(000\right)-E\left(00\pi \right)\right],`$
$`J_{23}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left[E\left(00\pi \right)+E\left(0\pi 0\right)-E\left(000\right)-E\left(\pi 00\right)\right],`$ (4)
$`J_{13}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\left[E\left(00\pi \right)+E\left(\pi 00\right)-E\left(000\right)-E\left(0\pi 0\right)\right].`$
Therefore, knowing the energies of the collinear configurations from the model calculations (based on the two-band tight-binding hamiltonian), we are able to determine the exchange coupling constants.
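A minimal numerical sketch of the two procedures just described — steepest-descent minimization of Eq. (3) with a finite-difference Hessian (stability) check, and recovery of the coupling constants from the four collinear energies via Eq. (4) — is given below. The coupling, anisotropy, field and thickness values are illustrative assumptions, not the computed ones.

```python
import numpy as np
from itertools import product

J12, J23, J13 = -1.0, 0.1, 0.02        # J12 < 0: strong AF coupling (assumed)
t = np.array([8.0, 3.0, 3.0])          # magnetic layer thicknesses in ML
T_SUM = t.sum()
K, B = 0.2, 0.3                        # cubic anisotropy and field (assumed)

def energy(th):
    """Eq. (3) with the cubic anisotropy term."""
    e = -(J12 * np.cos(th[0] - th[1]) + J23 * np.cos(th[1] - th[2])
          + J13 * np.cos(th[0] - th[2]))
    e -= B * np.sum(t * np.cos(th)) / T_SUM                  # Zeeman term
    e += np.sum(t * K * np.sin(2 * th) ** 2) / (4 * T_SUM)   # anisotropy
    return e

def grad(th, h=1e-6):
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3); d[i] = h
        g[i] = (energy(th + d) - energy(th - d)) / (2 * h)
    return g

def minimize(th, step=0.05, tol=1e-10, max_iter=200_000):
    for _ in range(max_iter):          # small steps against the energy gradient
        g = grad(th)
        if g @ g < tol:
            break
        th = th - step * g
    return th

def is_minimum(th, h=1e-5):
    # finite-difference Hessian; Sylvester: all leading principal minors > 0
    M = np.zeros((3, 3))
    for i, j in product(range(3), repeat=2):
        di = np.zeros(3); di[i] = h
        dj = np.zeros(3); dj[j] = h
        M[i, j] = (energy(th + di + dj) - energy(th + di - dj)
                   - energy(th - di + dj) + energy(th - di - dj)) / (4 * h * h)
    return all(np.linalg.det(M[:k, :k]) > 0 for k in (1, 2, 3))

th = minimize(np.array([0.0, np.pi, 0.1]))
print("minimum:", np.round(th, 3), "stable:", is_minimum(th))

# Eq. (4): the couplings from the four collinear energies at B = 0
B = 0.0
E = {c: energy(np.array(c)) for c in
     [(0, 0, 0), (np.pi, 0, 0), (0, np.pi, 0), (0, 0, np.pi)]}
print("J12 =", 0.25 * (E[(np.pi, 0, 0)] + E[(0, np.pi, 0)]
                       - E[(0, 0, 0)] - E[(0, 0, np.pi)]))
```

The recovered $`J_{12}`$ equals the input value exactly, since both the anisotropy term (which vanishes at collinear angles) and the Zeeman term (at $`B=0`$) drop out of Eq. (4).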
From now on we will be using reduced values of the magnetic field, $`b=B/|J_{12}|`$, and of the anisotropy constants, $`k=K/|J_{12}|`$ and $`d=D/|J_{12}|`$. We will also assume, if not stated explicitly otherwise, the magnetic layer thicknesses to be 8, 3 and 3 ML (monolayers), respectively, in order to keep the length ratios as in Ref. . The first spacer thickness will be set to 3 ML in order to achieve the needed antiferromagnetic coupling between the first two magnetic layers.
## 3 Results
Figure 1 presents the exchange coupling constants ($`J`$-s) plotted against the thickness of the second spacer ($`ns_2`$). As mentioned above, for the chosen thicknesses we obtain strong antiferromagnetic coupling ($`J_{12}`$) between the first two magnetic layers, while $`J_{23}`$ and $`J_{13}`$ oscillate around zero. In all three cases the period of oscillations is about 3 ML, which is close to the theoretically predicted value (2.8 ML) coming from the stationary spanning vector placed at $`(0,\pi )`$ (and equivalent positions) in the two-dimensional Brillouin zone. The second period (8 ML), originating from the hole pocket placed at $`(\pi ,\pi )`$, does not seem to appear in the present context. Note however that it can become visible under some circumstances, as in Ref. where the tunnelling conductance has been considered. The GMR and $`J_{23}`$ for the same system have been plotted in Fig. 2. It can be noted that the GMR asymptotically tends to oscillate with the same period but in opposite phase to $`J_{23}`$. This is in agreement with our previous findings , but it is still not clear to what extent this correlation is universal. The values of the GMR are strongly reduced in comparison with the trilayer case. This can be easily understood if we note that, due to the fixed antiferromagnetic alignment of the first two magnetic layers, there is no non-scattering channel in any of the configurations involved (see Eq. 1). Figure 3, where the on-site potentials for the *d*-bands ($`ϵ_{i\sigma }^d`$) have been schematically plotted, shows that there exist at least two scattering interfaces in each case. Based on the number of interfaces one can qualitatively predict that the $`\uparrow \downarrow \downarrow `$ down- and $`\uparrow \downarrow \uparrow `$ up-spin electron channels have higher conductances than the remaining two. This is indeed clearly visible in Fig. 4, where the computed conductances are shown. As already discussed, there is no obvious highest-conductance channel. Instead, we have two higher- and two lower-conducting channels, close to each other within the pairs. As a consequence the sign of the GMR is determined by all the channels, and can be changed by manipulating some parameters (*e.g.* the thicknesses of the layers). This is the case in Fig. 5, where we have plotted the GMR and $`J_{23}`$ for a system with the thickness of the second magnetic layer set to 5 ML (instead of 3 ML as in Fig. 2). The GMR oscillations have a small but clearly negative bias. The asymptotic opposite-phase correlation with respect to $`J_{23}`$ is again clear in this case.
The GMTEP, calculated for the same set of parameters as in Fig. 2, has been plotted in Fig. 6. In agreement with the findings of Ref., the oscillations are quite pronounced and have the same period as the GMR but exhibit a negative bias. Asymptotically they seem to have roughly the same phase as the GMR.
For studying the magnetization processes we have chosen the parameters as in Fig. 1 with the second spacer thickness ($`ns_2`$) set to 7 ML (the second ferromagnetic maximum of $`J_{23}`$). As already stated, two cases have been taken into account, *i.e.* the cubic and uniaxial anisotropies.
In the first case the anisotropy term ($`t_ik\mathrm{sin}^2(2\mathrm{\Theta }_i)/4t`$) gives rise to four potential wells placed at the following in-plane crystallographic axes: \[10\], \[01\], \[1̄0\] and \[01̄\] for $`k>0`$, and \[11\], \[11̄\], \[1̄1̄\] and \[1̄1\] for $`k<0`$. Figure 7a exemplifies some phase diagrams for various initial configurations. We have chosen the simplest ones, *i.e.* the collinear configurations with the relative alignment of magnetic moments favoured by the interlayer exchange coupling. It is, however, in principle possible to stabilize also non-collinear ones, provided that the anisotropy is strong enough. The phase diagrams have been obtained by taking subsequent scans along $`b`$ for different $`k`$ values. Dotted horizontal lines are thus only guides to the eye. The diagrams exhibit a rich structure with a number of configurations (phases) occurring during the magnetization process (including the non-collinear configurations). The transitions between them can be either of the first or the second kind, that is, they manifest themselves as discontinuities in the magnetization or in its first derivative. Some exemplary hysteresis loops are presented in Fig. 7b. As expected, stronger anisotropy produces a richer structure. For positive, and sufficiently large, values of $`k`$ some flat regions, typical for exchange-biased systems, occur. Note however that there is no $`\uparrow \downarrow \uparrow `$ to $`\uparrow \downarrow \downarrow `$ transition in the first diagram of Fig. 7a (but see below).
For the case of uniaxial anisotropy ($`-t_id\mathrm{cos}^2(\mathrm{\Theta }_i)/t`$) there exist only two potential wells, that is \[10\] and \[1̄0\] for $`d>0`$, and \[01\] and \[01̄\] for $`d<0`$. The phase diagrams (Fig. 8a) are somewhat less complicated now, due to the simpler energy landscape. The saturation curve for the upper part of the first diagram (with the $`\uparrow \downarrow \uparrow `$ initial configuration) has been obtained analytically (*i.e.* from the stability condition) because of the weak stability of the $`\uparrow \downarrow \uparrow `$ configuration (by which we mean that the existing energy minimum is very shallow), which makes it difficult to perform a reliable numerical minimization. The same precautions have been applied to the first hysteresis loop in Fig. 8b. Note that this time the flat regions are present already for small values of $`d`$. For $`d>0.51`$ there exists, in the first diagram, the above-mentioned $`\uparrow \downarrow \uparrow `$ to $`\uparrow \downarrow \downarrow `$ transition. Only positive values of $`d`$ are presented since the opposite case is trivial — there is practically no hysteresis.
## 4 Conclusions
Within the microscopic two-band tight-binding model we have performed calculations of the interlayer exchange coupling, the current-perpendicular-to-plane conductance and the thermopower for a system consisting of three magnetic layers separated by paramagnetic ones. With the thicknesses chosen so as to produce strong antiferromagnetic coupling between the first two magnetic layers, we found the interlayer exchange coupling, the GMR and the GMTEP all to oscillate, as functions of the second spacer thickness, with a period originating from one of the extremal spanning vectors of the Fermi surface. Additionally, using the phenomenological approach, we have computed magnetic phase diagrams and commented on their relevance to the magnetoresistance. We found that the phase diagrams exhibit a rich structure and that there are flat regions in the hysteresis loops, typical for exchange-biased spin-valves. It has also been found that for the systems under consideration, in contrast to conventional trilayers, it is possible to obtain a negative (inverse) perpendicular GMR by merely changing the thicknesses of particular layers.
## 5 Acknowledgments
The KBN grants 2PO3B-118-14 and 2PO3B-117-14 are gratefully acknowledged. We also thank Poznań Computing and Networking Center for the computing time.
# Kinematics of the Local Universe.
## 1 Introduction
The determination of the value of the Hubble constant, $`H_0`$, is one of the classical tasks of observational cosmology. In the framework of the expanding space paradigm it provides a measure of the distance scale in FRW universes and its reciprocal gives the time scale. This problem has been approached in various ways. A review of the recent determinations of the value of $`H_0`$ shows that most methods provide values in the range $`H_0\approx 55\mathrm{\dots }75`$ (for brevity we omit the units; all $`H_0`$ values are in $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$): the Virgo cluster yields $`55\pm 7`$ and clusters from the Hubble diagram with relative distances to Virgo $`57\pm 7`$ (Federspiel et al. Federspiel98 (1998)), type Ia supernovae give $`60\pm 10`$ (Branch Branch98 (1998)) or $`65\pm 7`$ (Riess et al. Riess98 (1998)), the Tully-Fisher relation in the I-band yields $`69\pm 5`$ (Giovanelli et al. Giovanelli97 (1997)) and $`55\pm 7`$ in the B-band (Theureau et al. 1997b , value and errors combined from the diameter and magnitude relations), the red giant branch tip gives $`60\pm 11`$ (Salaris & Cassisi Salaris98 (1998)), gravitational lens time delays $`64\pm 13`$ (Kundić et al. Kundic97 (1997)) and the ‘sosies’ galaxy method $`60\pm 10`$ (Paturel et al. Paturel98 (1998)). The Sunyaev-Zeldovich effect has given lower values, $`49\pm 29`$ by Cooray (Cooray98 (1998)) and $`47_{-15}^{+23}`$ by Hughes & Birkinshaw (Hughes98 (1998)), but the uncertainties in these results are large due to various systematic effects (Cen, Cen98 (1998)). Surface brightness fluctuation studies provide a higher value of $`87\pm 11`$ (Jensen et al., Jensen99 (1999)), but most methods seem to fit in the range 55 - 75 stated above. An important comparison with these local values may become possible once the cosmic microwave background anisotropy probes (MAP and Planck) and galaxy redshift surveys (2dF and SDSS) offer us a multitude of high resolution data (Eisenstein et al., Eisenstein98 (1998)). Note that most of the errors cited here, as well as those given in the present paper, are $`1\sigma `$ errors.
The present line of research has its roots in the work of Bottinelli et al. (Bottinelli86 (1986)), where $`H_0`$ was determined using spiral galaxies in the field. They used the direct Tully-Fisher relation (Tully & Fisher Tully77 (1977)):
$$M\propto \mathrm{log}V_{\mathrm{max}},$$
(1)
where $`M`$ is the absolute magnitude in a given band and $`\mathrm{log}V_{\mathrm{max}}`$ is the maximum rotational velocity measured from the hydrogen 21 cm line width of each galaxy. Gouguenheim (Gouguenheim69 (1969)) was the first to suggest that such a relation might exist as a distance indicator.
Bottinelli et al. (Bottinelli86 (1986)) paid particular attention to the elimination of the so-called Malmquist bias. In general terms, the determination of $`H_0`$ is subject to the Malmquist bias of the $`2^{\mathrm{nd}}`$ kind: the inferred value of $`H_0`$ depends on the distribution of the derived distances $`r`$ for each true distance $`r^{*}`$ (Teerikorpi Teerikorpi97 (1997)). Consider the expectation value of the derived distance $`r`$ at a given true distance $`r^{*}`$:
$$E(r|r^{*})=\int _0^{\mathrm{\infty }}dr\,r\,P(r|r^{*}).$$
(2)
The integral is done over the derived distances $`r`$. For example, consider a strict magnitude limit: for each true distance the derived distances are exposed to an upper cut-off. Hence the expectation value for the derived distance $`r`$ at $`r^{*}`$ is too small and thus $`H_0`$ will be overestimated.
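A toy Monte Carlo makes this concrete: draw galaxies uniformly in a volume with a gaussian luminosity function, impose a strict magnitude limit, and derive distances assuming every galaxy has the mean absolute magnitude. The recovered Hubble ratios come out biased high. All numbers below (true $`H_0`$, luminosity function, limit) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H0_TRUE, N = 60.0, 200_000                       # km/s/Mpc, assumed

r_true = 100.0 * rng.random(N) ** (1 / 3)        # uniform in a 100 Mpc sphere
M = rng.normal(-20.0, 1.0, N)                    # gaussian luminosity function
m = M + 25.0 + 5.0 * np.log10(r_true)            # apparent magnitudes

sel = m < 13.25                                  # strict magnitude limit
r_derived = 10.0 ** ((m[sel] + 20.0 - 25.0) / 5.0)  # assume M = M_0 = -20
v = H0_TRUE * r_true[sel]                        # quiet Hubble flow

print(f"<H0> from derived distances: {np.mean(v / r_derived):.1f}"
      f" (true value {H0_TRUE})")
```

Near the limit only overluminous galaxies survive, so their derived distances are too small and the mean Hubble ratio exceeds the true value.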
Observationally, the direct Tully-Fisher relation takes the form:
$$X=\mathrm{slope}\times p+\mathrm{cst},$$
(3)
where we have adopted a shorthand $`p`$ for $`\mathrm{log}V_{\mathrm{max}}`$ and $`X`$ denotes either the absolute magnitude $`M`$ or $`\mathrm{log}D`$, where $`D`$ labels the absolute linear size of a galaxy in kpc. In the direct approach the slope is determined from the linear regression of $`X`$ against $`p`$. The resulting direct Tully-Fisher relation can be expressed as
$$E(X|p)=ap+b.$$
(4)
Consider now the observed average of $`X`$ at each $`p`$, $`\langle X\rangle _p`$, as a function of the true distance. The limit in $`x`$ (the observational counterpart of $`X`$) cuts off progressively more and more of the distribution function of $`X`$ for a constant $`p`$. Assuming $`X=\mathrm{log}D`$ one finds:
$$\langle X\rangle _p\ge E(X|p),$$
(5)
The inequality gives a practical measure of the Malmquist bias, depending primarily on $`p`$, $`r^{*}`$, $`\sigma _X`$ and $`x_{\mathrm{lim}}`$. The equality holds only when the $`x`$-limit cuts the luminosity function $`\mathrm{\Phi }(X)`$ insignificantly.
That the direct relation is inevitably biased by its nature forces one either to look for an unbiased subsample or to find an appropriate correction for the bias. The former was the strategy chosen by Bottinelli et al. (Bottinelli86 (1986)), where the method of normalized distances was introduced. This is the method chosen also by the KLUN project. KLUN (Kinematics of the Local Universe) is based on a large sample, which consists of 5171 galaxies of Hubble types T=1-8 distributed on the whole celestial sphere (cf. e.g. Paturel Paturel94 (1994), Theureau et al. 1997b ).
Sandage (1994a , 1994b ) has also studied the latter approach. By recognizing that the Malmquist bias depends not only on the imposed $`x`$-limit but also on the rotational velocities and distances, he introduced the triple-entry correction method, which has consistently predicted values of $`H_0`$ supporting the long cosmological distance scale. As a practical example of this approach to the Malmquist bias cf. e.g. Federspiel et al. (Federspiel94 (1994)).
Bottinelli et al. (Bottinelli86 (1986)) found $`H_0=72\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ using the method of normalized distances, i.e. using a sample cleaned of galaxies suffering from the Malmquist bias. This value was based on the de Vaucouleurs calibrator distances. If, instead, the Sandage-Tammann calibrator distances were used, Bottinelli et al. (Bottinelli86 (1986)) found $`H_0=63\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ (or $`H_0=56\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ if using the old ST calibration). One appreciates the debilitating effect of the Malmquist bias by noting that when it is ignored the de Vaucouleurs calibration yields much larger values: $`H_0\approx 100\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$.
Theureau et al. (1997b ), by following the guidelines set out by Bottinelli et al. (Bottinelli86 (1986)), determined the value of $`H_0`$ using the KLUN sample. $`H_0`$ was determined not only using magnitudes but also diameters, because the KLUN sample is constructed to be complete in angular diameters rather than magnitudes (the completeness limit is estimated to be $`D_{25}=1\stackrel{\prime }{.}6`$). Left with 400 unbiased galaxies (about ten times more than Bottinelli et al. (Bottinelli86 (1986)) were able to use) reaching up to $`2000\mathrm{-}3000\,\mathrm{km}\,\mathrm{s}^{-1}`$, they found, using the most recent calibration based on HST observations of extragalactic cepheids:
* $`H_0=53.4\pm 5.0\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ from the magnitude relation, and
* $`H_0=56.7\pm 4.9\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ from the diameter relation.
They also discussed in their Sect. 4.2 how these results change if the older calibrations were used. For example, the de Vaucouleurs calibration would increase these values by 11 %. We expect that a similar effect would be observed also in the present case.
In the present paper we ask whether the results of Theureau et al. (1997b ) could be confirmed by implementing the inverse Tully-Fisher relation:
$$p=a'X+b'.$$
(6)
This problem has special importance because of the “unbiased” nature that has often been ascribed to the inverse Tully-Fisher relation as a distance indicator, and because of the large number of galaxies available, contrary to the direct approach where one is constrained to the so-called unbiased plateau (cf. Bottinelli et al. Bottinelli86 (1986); Theureau et al. 1997b ). The fact that the inverse relation has its own particular biases has received increasing attention over the years (Fouqué et al. Fouque90 (1990), Teerikorpi Teerikorpi90 (1990), Willick Willick91 (1991), Teerikorpi Teerikorpi93 (1993), Ekholm & Teerikorpi Ekholm94 (1994), Freudling et al. Freudling95 (1995), Ekholm & Teerikorpi Ekholm97 (1997), Teerikorpi et al. Teerikorpi99 (1999) and, of course, the present paper).
## 2 Outlining the approach
As noted in the introduction the KLUN project approaches the problem of the determination of the value of $`H_0`$ using field galaxies with photometric distances. Such an approach reduces to three steps
1. construction of a relative kinematical distance scale,
2. construction of a relative redshift-independent distance scale, and
3. establishment of an absolute calibration.
Below we comment on the first two steps. In particular we further develop the concept of a relevant inverse slope which may differ from the theoretical slope, but is still the slope to be used. The third step is addressed in Sect. 6. It is hoped that this review clarifies the methodological basis of the KLUN project and also makes the notation used more familiar.
### 2.1 The kinematical distance scale
The first step takes its simplest form by assuming the strictly linear Hubble law:
$$R_{\mathrm{kin}}=V_\mathrm{o}/H_0^{\mathrm{in}},$$
(7)
where $`V_\mathrm{o}`$ is the radial velocity inferred from the observed redshifts and $`H_0^{\mathrm{in}}`$ is some input value for the Hubble constant. Because $`V_\mathrm{o}`$ reflects the true kinematical distance $`R_{\mathrm{kin}}^{*}`$ via the true Hubble constant $`H_0^{*}`$
$$R_{\mathrm{kin}}^{*}=V_\mathrm{o}/H_0^{*},$$
(8)
one recognizes that Eq. 7 sets up a relative distance scale:
$$d_{\mathrm{kin}}=\frac{R_{\mathrm{kin}}}{R_{\mathrm{kin}}^{*}}=\frac{H_0^{*}}{H_0^{\mathrm{in}}}.$$
(9)
In other words, $`\mathrm{log}d_{\mathrm{kin}}`$ is known up to an additive constant.
In a more realistic case one ought to consider also the peculiar velocity field. In KLUN one assumes that peculiar velocities are governed mainly by the Virgo supercluster.
In KLUN the kinematical distances are inferred from the $`V_\mathrm{o}`$’s by implementing the spherically symmetric model of Peebles (Peebles76 (1976)), valid in the linear regime. In the adopted form of this model (for the equations to be solved cf. e.g. Bottinelli et al. Bottinelli86 (1986), Ekholm Ekholm96 (1996)) the centre of the peculiar velocity field is marked by the pair of giant ellipticals M86/87, positioned at some unknown true distance $`R^{*}`$ which is used to normalize the kinematical distance scale: the centre is at a distance $`d_{\mathrm{kin}}=1`$.
The required cosmological velocities $`V_{\mathrm{cor}}`$ (observed velocities corrected for peculiar motions) are calculated as
$$V_{\mathrm{cor}}=C_1\times d_{\mathrm{kin}},$$
(10)
where the constant $`C_1`$ defines the linear recession velocity of the centre of the system assumed to be at rest with respect to the quiescent Hubble flow:
$$C_1=V_\mathrm{o}(\mathrm{Vir})+V_{\mathrm{inf}}^{\mathrm{LG}}.$$
(11)
$`V_\mathrm{o}(\mathrm{Vir})`$ is the presumed velocity of the centre and $`V_{\mathrm{inf}}^{\mathrm{LG}}`$ is the presumed infall velocity of the Local Group into the centre of the system.
### 2.2 The redshift-independent distances
The direct Tully-Fisher relation is quite sensitive to the sampling of the luminosity function. On the other hand, when implementing the inverse Tully-Fisher relation (Eq. 6) under ideal conditions, it does not matter how we sample $`X`$ (Schechter Schechter80 (1980)) in order to obtain an unbiased estimate of the inverse parameters and, furthermore, the expectation value $`E(r|r^{*})`$ is also unbiased (Teerikorpi Teerikorpi84 (1984)). However, we should sample all $`\mathrm{log}V_{\mathrm{max}}`$ for each constant true $`X`$ in the sample. This theoretical prerequisite is often tacitly assumed in practice. For more formal treatments of the inverse relation cf. Teerikorpi (Teerikorpi84 (1984), Teerikorpi90 (1990), Teerikorpi97 (1997)) and e.g. Hendry & Simmons (Hendry94 (1994)) or Rauzy & Triay (Rauzy96 (1996)).
In the inverse approach the distance indicator is
$$X=A'p+\mathrm{cst}.,$$
(12)
where $`A'=1/a'`$, following the notation adopted by Ekholm & Teerikorpi (Ekholm97 (1997); hereafter ET97). The inverse regression slope $`a'`$ is expected to fulfill
$$\langle p\rangle _X=E(p|X)=a'X+\mathrm{cst}.$$
(13)
$`\langle p\rangle _X`$ is the observed average $`p`$ for a given $`X`$. Eq. 13 tells us that in order to find the correct $`a'`$ one must sample the distribution function $`\varphi _X(p)`$ in such a way that $`\langle p\rangle _X=(p_0)_X`$, where $`(p_0)_X`$ is the central value of the underlying distribution function. $`\varphi _X(p)`$ is presumed to be symmetric about $`(p_0)_X`$ for all $`X`$. ET97 demonstrated how, under these ideal conditions, the derived $`\mathrm{log}H_0`$ as a function of the kinematical distance should run horizontally as the adopted slope approaches the ideal, theoretical slope.
In practice the parameters involved are subject to uncertainties, in which case one should use instead of the unknown theoretical slope a slope which we call the relevant inverse slope. We would like to clarify in accurate terms the meaning of this slope which differs from the theoretical slope and which has been more heuristically discussed by Teerikorpi et al. (Teerikorpi99 (1999)). The difference between the theoretical and the relevant slope can be expressed in the following formal way. Define the observed parameters as
$$X_\mathrm{o}=X+ϵ_x+ϵ_{\mathrm{kin}},$$
(14)
$$p_\mathrm{o}=p+ϵ_p,$$
(15)
where $`X`$ is inferred from $`x`$ with a measurement error $`ϵ_x`$ and the kinematical distance $`d_{\mathrm{kin}}`$ has an error $`ϵ_{\mathrm{kin}}`$ due to uncertainties in the kinematical distance scale. $`ϵ_p`$ is the observational error on $`p`$. We make use of the formal definition of the slope of the linear regression of $`y`$ against $`x`$, with
$$\mathrm{Cov}(x,y)=\frac{\sum \left(x-\langle x\rangle \right)\left(y-\langle y\rangle \right)}{N-1}.$$
(16)
The theoretical slope $`a_\mathrm{t}'`$ is then
$$a_\mathrm{t}'=\frac{\mathrm{Cov}(X,p)}{\mathrm{Cov}(X,X)},$$
(17)
while the observed slope is
$$a_\mathrm{o}'=\frac{\mathrm{Cov}(X_\mathrm{o},p_\mathrm{o})}{\mathrm{Cov}(X_\mathrm{o},X_\mathrm{o})}\approx \frac{\mathrm{Cov}(X,p)+\mathrm{Cov}(ϵ_x+ϵ_{\mathrm{kin}},ϵ_p)}{\mathrm{Cov}(X,X)+\sigma _x^2+\sigma _{\mathrm{kin}}^2}.$$
(18)
We call the slope $`a_\mathrm{o}'`$ relevant if it verifies, for all $`X_\mathrm{o}`$ (cf. Eq. 13),
$$\langle p_\mathrm{o}\rangle _{X_\mathrm{o}}=E(p|X_\mathrm{o})=a_\mathrm{o}'X_\mathrm{o}+\mathrm{cst}.$$
(19)
This definition means that the average observed value of $`p_\mathrm{o}`$ at each fixed value of $`X_\mathrm{o}`$ (derived from observations and the kinematical distance scale) is correctly predicted by Eq. 19. Note also that in the case of the diameter relation, $`ϵ_x`$, $`ϵ_{\mathrm{kin}}`$ and $`ϵ_p`$ are only weakly correlated. Thus the difference between the relevant slope and the theoretical slope is dominated by $`\sigma _x^2+\sigma _{\mathrm{kin}}^2`$. In the special case where the galaxies are in one cluster (*i.e.* at the same true distance), the dispersion $`\sigma _{\mathrm{kin}}`$ vanishes. In order to make the relevant slope more tangible, we demonstrate in Appendix A how it indeed is the one to be used for the determination of $`H_0`$.
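A small simulation illustrates the attenuation expressed by Eq. 18: gaussian errors added to $`X`$ drag the observed inverse slope below the theoretical one by exactly the predicted variance ratio. The dispersions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, N = 0.5, 50_000
X = rng.normal(1.4, 0.28, N)                     # true log linear diameters
p = a_true * X + rng.normal(0.0, 0.06, N)        # inverse relation with scatter

sigma_x, sigma_kin = 0.10, 0.12                  # assumed error budgets
X_obs = X + rng.normal(0, sigma_x, N) + rng.normal(0, sigma_kin, N)

a_obs = np.cov(X_obs, p)[0, 1] / np.var(X_obs, ddof=1)
atten = np.var(X, ddof=1) / (np.var(X, ddof=1) + sigma_x**2 + sigma_kin**2)
print(f"theoretical {a_true}, observed {a_obs:.3f}, "
      f"Eq. 18 prediction {a_true * atten:.3f}")
```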
Finally, selection in $`p`$ and the type effect may also affect the derived slope, making it even shallower. Theureau et al. (1997a ) showed that a type effect exists, seen as degenerate values of $`p`$ for each constant linear diameter $`X`$: early Hubble types rotate faster than late types. In addition, based on an observational program of 2700 galaxies with the Nançay radiotelescope, Theureau (Theureau98 (1998)) warned that the detection rate in HI varies continuously from early to late types and that on average $`10\%`$ of the objects remain unsuccessfully observed. The influence of such a selection, which concerns principally the extreme values of the distribution function $`\varphi (p)`$, was discussed analytically by Teerikorpi et al. (Teerikorpi99 (1999)).
## 3 A straightforward derivation of $`\mathrm{log}H_0`$
### 3.1 The sample
The KLUN sample is – according to Theureau et al. (1997b ) – complete up to $`B_\mathrm{T}^\mathrm{c}=13\stackrel{m}{.}25`$, where $`B_\mathrm{T}^\mathrm{c}`$ is the corrected total B-band magnitude, and down to $`\mathrm{log}D_{25}^\mathrm{c}=1.2`$, where $`D_{25}^\mathrm{c}`$ is the corrected angular B-band diameter. The KLUN sample was subjected to the exclusion of low-latitude ($`|b|\le 15\mathrm{°}`$) and face-on ($`\mathrm{log}R_{25}\le 0.07`$) galaxies. The centre of the spherically symmetric peculiar velocity field was positioned at $`l=284\mathrm{°}`$ and $`b=74\mathrm{°}`$. The constant $`C_1`$ needed in Eq. 10 for the cosmological velocities was chosen to be $`1200\,\mathrm{km}\,\mathrm{s}^{-1}`$, with $`V_\mathrm{o}(\mathrm{Vir})=980\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`V_{\mathrm{inf}}^{\mathrm{LG}}=220\,\mathrm{km}\,\mathrm{s}^{-1}`$ (cf. Eq. 11). After the exclusion of triple-valued solutions to the Peebles model, and after the photometric completeness limits cited were imposed on the remaining sample, one was left with 1713 galaxies for the magnitude sample and 2822 galaxies for the diameter sample.
### 3.2 The inverse slopes and calibration of zero-points
Theureau et al. (1997a ) derived a common inverse diameter slope $`a'\approx 0.50`$ and inverse magnitude slope $`a'\approx -0.10`$ for all Hubble types considered, i.e. T=1-8. These slopes were also shown to obey a simple mass-luminosity model (cf. Theureau et al. 1997a ).
With these estimates for the inverse slope the relation can be calibrated. At this point of derivation we ignore the effects of type-dependence and possible selection in $`\mathrm{log}V_{\mathrm{max}}`$. The calibration was done by forcing the slope to the calibrator sample of 15 field galaxies with cepheid distances, mostly from the HST programs (Theureau et al. 1997b , cf. their Table 1.). The absolute zero-point is given by
$$b_{\mathrm{cal}}'=\frac{\sum \left(\mathrm{log}V_{\mathrm{max}}-a'X\right)}{N_{\mathrm{cal}}},$$
(20)
where the adopted inverse slope $`a'=0.50`$ yields $`b_{\mathrm{cal}}'=1.450`$ and $`a'=-0.10`$ yields $`b_{\mathrm{cal}}'=0.117`$. In Fig. 1 we show the calibration for the diameter relation and in Fig. 2 for the magnitude relation.
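In code, the forced calibration of Eq. 20 is a one-liner: the zero-point is the mean residual of the calibrators about the adopted field slope. The calibrator values below are hypothetical placeholders, not the actual 15 galaxies of Table 1.

```python
import numpy as np

a_prime = 0.50                                        # adopted inverse slope
logV_cal = np.array([2.05, 2.20, 1.95, 2.30, 2.10])   # hypothetical log V_max
X_cal = np.array([1.20, 1.55, 1.05, 1.70, 1.35])      # hypothetical log D_linear

b_cal = np.mean(logV_cal - a_prime * X_cal)           # Eq. 20
print(f"b'_cal = {b_cal:.3f}")
```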
### 3.3 $`H_0`$ without type corrections
ET97 discussed in some detail problems which hamper the determination of the Hubble constant $`H_0`$ when one applies the inverse Tully-Fisher relation. They concluded that once the relevant inverse slope is found, the average $`\mathrm{log}H_0`$ shows no tendencies as a function of the distance. Or, in terms of the method of normalized distances of Bottinelli et al. (Bottinelli86 (1986)), the unbiased plateau extends to all distances. ET97 also noted how one might simultaneously fine-tune the inverse slope and get an unbiased estimate for $`\mathrm{log}H_0`$. The resulting $`\mathrm{log}H_0`$ vs. kinematical distance diagram for the inverse diameter relation is given in Fig. 3 and for the magnitude relation in Fig. 4. Application of the parameters given in the previous section yields $`\mathrm{log}H_0=1.92`$, corresponding to $`H_0=83.2\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, for the diameter sample and $`\mathrm{log}H_0=1.857`$, or $`H_0=71.9\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, for the magnitude sample. These averages are shown as horizontal, solid straight lines. In panels (a) individual points are plotted and in panels (b) the averages for bins of $`1000\,\mathrm{km}\,\mathrm{s}^{-1}`$ are given as circles.
Consider first the diameter relation. One clearly sees how the average follows a horizontal line up to $`9000\,\mathrm{km}\,\mathrm{s}^{-1}`$. At larger distances, the observed behaviour of $`H_0`$ probably reflects some selection in $`\mathrm{log}V_{\mathrm{max}}`$, in the sense that there is an upper cut-off value for $`\mathrm{log}V_{\mathrm{max}}`$. Note also the mild downward tendency between $`1000\,\mathrm{km}\,\mathrm{s}^{-1}`$ and $`5000\,\mathrm{km}\,\mathrm{s}^{-1}`$. Comparison of Fig. 4 with Fig. 3 shows how $`\mathrm{log}H_0`$ from magnitudes and from diameters follow each other quite well, as expected (ignoring, of course, the vertical shift in the averages). Note how the growing tendency of $`\mathrm{log}H_0`$ beyond $`9000\,\mathrm{km}\,\mathrm{s}^{-1}`$ is absent in the magnitude sample because of the limiting magnitude: the sample is less deep. This suggests that the possible selection bias in $`\mathrm{log}V_{\mathrm{max}}`$ does not affect the magnitude sample.
One might, at face value, be content with the slopes adopted as well as with the derived value of $`H_0`$. The observed behaviour is what ET97 argued to be the prerequisite for an unbiased estimate of the Hubble constant: non-horizontal trends disappear. It is – however – rather disturbing to note that the values of $`H_0`$ obtained via this straightforward application of the inverse relation are significantly larger than those reported by Theureau et al. (1997b ). The inverse diameter relation predicts some 50 percent larger and the magnitude relation some 30 percent larger values than the corresponding direct relations. In what follows, we try to understand this discrepancy.
## 4 Is there selection in $`\mathrm{log}V_{\mathrm{max}}`$?
The first explanation that comes to mind is that the apparently well-behaving slope $`a'=0.5`$ ($`a'=-0.1`$) is incorrect because of some selection effect and is thus not relevant in the sense discussed in Sect. 2.2 and in Appendix A. The relevant slope brings about an unbiased estimate for the Hubble parameter (or the Hubble constant, if one possesses an ideal calibrator sample) if the distribution function of $`\mathrm{log}V_{\mathrm{max}}`$, $`\varphi (p)_X`$, is completely and correctly sampled for each $`X`$. Fig. 3 showed some preliminary indications that this may not be the case as regards the diameter sample.
Teerikorpi et al. (Teerikorpi99 (1999)) discussed the effect and significance of a strict upper and/or lower cut-off on $`\varphi (p)_X`$. For example, an upper cut-off in $`\varphi (p)_X`$ should yield a too large value of $`H_0`$ and, furthermore, a too shallow slope. Their analytical calculations, assuming gaussianity of $`\varphi (p)_X`$, show that this kind of selection has only a minuscule effect unless the cut-offs are considerable. Because the selection does not seem to be significant, we do not expect much improvement in $`H_0`$.
There is, however, another effect which may alter the slope. As mentioned in Sect. 2.2, the type-dependence of the zero-point should be taken into account. Because the selection function may depend on the morphological type, it also affects the type corrections. This is clearly seen when one considers how the type corrections are actually calculated. As in Theureau et al. (1997b ), galaxies are shifted to a common Hubble type 6 by applying a correction term $`\mathrm{\Delta }b'=b'(T)-b'(6)`$ to the individual $`\mathrm{log}V_{\mathrm{max}}`$ values, where
$$b'(T)=\langle \mathrm{log}V_{\mathrm{max}}\rangle _T-a'\langle X\rangle _T.$$
(21)
Different morphological types do not have identical spatial occupation, as shown in Fig. 5 for Hubble types 1 and 8 by the dashed lines corresponding to forced solutions using the common slope $`a'=0.5`$. Strict upper and lower cut-offs would influence the extreme types more. Hence we must first look more carefully at whether the samples suffer from selection in $`\mathrm{log}V_{\mathrm{max}}`$.
The inverse Tully-Fisher diagram for the diameter sample is given in Fig. 5. The least squares fit ($`a'=0.576`$, $`b'=1.259`$) is shown as a solid line. One finds evidence for both an upper and a lower cut-off in the $`\mathrm{log}V_{\mathrm{max}}`$-distribution, the former being quite conspicuous. The dotted lines are positioned at $`\mathrm{log}V_{\mathrm{max}}=2.55`$ and $`\mathrm{log}V_{\mathrm{max}}=1.675`$ to guide the eye. Fig. 5 hints that the slope $`a'=0.5`$ adopted in Sect. 3 may not be impeccable and thus questions the validity of the “naïve” derivation of $`H_0`$, at least in the case of the diameter sample.
In the case of diameter samples, Teerikorpi et al. (Teerikorpi99 (1999)) discussed how the cut-offs should demonstrate themselves in a $`\mathrm{log}H_0`$ vs. $`\mathrm{log}d_{\mathrm{norm}}`$ diagram, where $`\mathrm{log}d_{\mathrm{norm}}=\mathrm{log}D_{25}+\mathrm{log}d_{\mathrm{kin}}`$, which in fact is the log of $`D_{\mathrm{linear}}`$ up to a constant. We call $`d_{\mathrm{norm}}`$ “normalized” in analogy to the method of normalized distances, where the kinematical distances were normalized in order to reveal the underlying bias. That is exactly what is done also here.
Consider the differential behaviour of $`\mathrm{log}H_0`$ as a function of the normalized distance. The differential average $`\mathrm{log}H_0`$ was calculated as follows. The abscissa was divided into intervals of 0.01 starting at the minimum $`\mathrm{log}d_{\mathrm{norm}}`$ in the sample. If a bin contained at least 5 galaxies the average was calculated. In Fig. 6 the inverse parameters $`a'=0.5`$ and $`b'=1.450`$ were used. It is seen that the values of $`\mathrm{log}H_0`$ have turning points around $`\mathrm{log}d_{\mathrm{norm}}\approx 2`$ and at $`\mathrm{log}d_{\mathrm{norm}}\approx 1.45`$. The most striking feature is – however – the general decreasing tendency of $`\mathrm{log}H_0`$ between these two points. Now, according to ET97, a downward tendency of $`\mathrm{log}H_0`$ as a function of distance corresponds to $`A/A'>1`$, i.e. the adopted slope $`A`$ is too shallow ($`A'`$ is the relevant slope).
Closer inspection of Fig. 1 shows that a steeper slope might provide a better fit to the calibrator sample. One is thus tempted to ask what happens if one adopts for the field sample the slope giving the best fit to the calibrator sample. As such a solution we adopt the straightforward linear regression yielding $`a'=0.749`$ and $`b'=1.101`$, shown in Fig. 7. It is interesting to note that when these parameters are used, the downward tendency between $`\mathrm{log}d_{\mathrm{norm}}\approx 1.45`$ and $`\mathrm{log}d_{\mathrm{norm}}\approx 2`$ disappears, as can be seen in Fig. 8. From here on we refer to this interval as the “unbiased inverse plateau”. The value of $`\mathrm{log}H_0`$ in this plateau is still rather high.
In the case of the magnitude sample we study the behaviour of the differential average $`\mathrm{log}H_0`$ as a function of a “normalized” distance modulus:
$$\mu _{\mathrm{norm}}=B_\mathrm{T}^\mathrm{c}-5\mathrm{log}d_{\mathrm{kin}}.$$
(22)
The $`\mu _{\mathrm{norm}}`$ axis was divided into intervals of 0.05 and again, if a bin contained more than five points, the average was calculated. As suspected in view of Fig. 4, Fig. 9 reveals no significant indications of a selection in $`\mathrm{log}V_{\mathrm{max}}`$. The points follow quite well the straight line also shown. The line, however, is tilted, telling us that the input slope $`a'=-0.10`$ may not be the relevant one.
As already noted, the type corrections may have some influence on the slopes. In the next section we derive the appropriate type corrections for the zero-points, using galaxies residing in the unbiased plateau ($`\mathrm{log}d_{\mathrm{norm}}\in [1.45,2.0]`$) for the diameter sample and the whole magnitude sample, and rederive the slopes.
## 5 Type corrections and the value of $`H_0`$
The zero-points needed for the type corrections are calculated using Eq. 21. It was pointed out in Sect. 2.2 that $`\mathrm{log}H_0`$ should run horizontally in order to obtain an unbiased estimate for $`H_0`$. In this section we look for such a slope. Because the type corrections depend on the adopted slope, this fine-tuning of the slope must be carried out in an iterative manner. This process consists of finding the type corrections $`\mathrm{\Delta }b'(\mathrm{T})`$ for each test slope $`a'`$. Corrections are made for both the field and calibrator samples. The process is repeated until a horizontal $`\mathrm{log}H_0`$ run is found.
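The sketch below mimics this iteration on synthetic data: for each test slope it computes the per-type zero-points of Eq. 21, shifts $`\mathrm{log}V_{\mathrm{max}}`$ to the common type 6, and bisects on the slope until the run of $`\mathrm{log}H_0`$ with kinematical distance is flat. The synthetic sample, the absorbed distance constant and the bisection bracket are all assumptions; only the loop structure follows the procedure described above.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000
T = rng.integers(1, 9, N)                        # Hubble types 1..8
log_dkin = rng.uniform(0.5, 2.0, N)              # relative kinematical distances
X = log_dkin - 0.6 + rng.normal(0, 0.28, N)      # log linear diameter proxy
p = 0.54 * X + 0.02 * (6 - T) + rng.normal(0, 0.06, N)  # type-shifted zero-points

def delta_b(a):
    b = {t: p[T == t].mean() - a * X[T == t].mean() for t in range(1, 9)}
    return np.array([b[t] - b[6] for t in T])    # Eq. 21, relative to type 6

def trend(a):
    p6 = p - delta_b(a)                          # type-corrected log V_max
    log_r = (p6 - p6.mean()) / a - (X - log_dkin)  # predicted log distance + cst
    logH0 = log_dkin - log_r                     # log H0 up to a constant
    return np.polyfit(log_dkin, logH0, 1)[0]     # slope of the log H0 run

a_lo, a_hi = 0.3, 0.9                            # bisection bracket (assumed)
while a_hi - a_lo > 1e-3:
    a = 0.5 * (a_lo + a_hi)
    # ET97: a downward log H0 trend means the adopted slope is too shallow
    a_lo, a_hi = (a, a_hi) if trend(a) < 0 else (a_lo, a)
print(f"relevant slope ~ {0.5 * (a_lo + a_hi):.3f}")   # recovers ~0.54
```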
Consider first the diameter sample. When the criteria for the unbiased inverse plateau were imposed on the sample, 2142 galaxies were left. For this subsample the iteration yielded $`a'=0.54`$ (the straight line in Fig. 10 is the least squares fit, with a slope 0.003) and, when the corresponding type corrections given in Table 1 were applied to the calibrator sample and the slope forced onto it, one found $`b_{\mathrm{cal}}'(6)=1.325`$. The result is shown in Fig. 10. The given inverse parameters predict an average $`\mathrm{log}H_0=1.897`$ (or $`H_0=78.9\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$).
We treated the magnitude sample of 1713 galaxies in a similar fashion. The resulting best fit is shown in Fig. 11. The relevant slope is $`a'=-0.115`$ (the least squares fit yields a slope 0.0004). The corresponding type corrections are given in Table 1. The forced calibration gives $`b_{\mathrm{cal}}'(6)=-0.235`$. From this sample we find an average $`\mathrm{log}H_0=1.869`$ (or $`H_0=72.4\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$). In both cases the inverse estimates for the Hubble constant ($`H_0\approx 80`$ for the diameter relation and $`H_0\approx 70`$ for the magnitude relation) are considerably larger than the corresponding estimates using the direct Tully-Fisher relation ($`H_0\approx 55`$).
## 6 $`H_0`$ corrected for a calibrator selection bias
The values of $`H_0`$ from the direct and inverse relations still disagree, even after we have taken into account the selection in $`\mathrm{log}V_{\mathrm{max}}`$, made the type corrections and used the relevant slope. There is – however – a serious possibility left to explain the discrepancy. The calibrator sample used may not meet the theoretical requirements of the inverse relation. In order to transform the relative distance scale into an absolute one, a properly chosen sample of calibrating galaxies is needed. What does “properly chosen” mean? Consider first the direct relation, for which it is essential to possess a calibrator sample which is volume-limited for each $`p_{\mathrm{cal}}`$. This means that for a given $`p_{\mathrm{cal}}`$ one has $`X_{\mathrm{cal}}`$ drawn from the complete part of the gaussian distribution function $`G(X;\langle X\rangle _p,\sigma _{X_p})`$, where the average $`\langle X\rangle _p=ap+b`$. If $`\sigma _{X_p}`$ is constant for all $`p`$ and the direct slope $`a`$ has been correctly derived from the unbiased field sample, it will, when forced onto the calibrator sample, bring about the correct calibrating zero-point.
As regards the calibration of the inverse relation, the sample described above does not necessarily guarantee a successful calibration. As pointed out by Teerikorpi et al. (Teerikorpi99 (1999)), though the calibrator sample is complete in the direct sense, nothing has been said about how the $`p_{\mathrm{cal}}`$’s relate to the corresponding cosmic distribution of $`p`$’s from which the field sample was drawn. $`p_{\mathrm{cal}}`$ should reflect the cosmic average $`p_0`$. If not, the relevant field slope, when forced onto the calibrator sample, will bring about a biased estimate for $`H_0`$. Teerikorpi (Teerikorpi90 (1990)) already recognized that this could be a serious problem. He studied, however, clusters of galaxies, where a nearby (calibrator) cluster obeys a different slope than a distant cluster. Teerikorpi et al. (Teerikorpi99 (1999)) developed these ideas further and showed how this problem may be met also when using field galaxies. The mentioned bias when using the relevant slope can be corrected for, but this is a rather complicated task. For the theoretical background of the “calibrator selection bias” consult Teerikorpi et al. (Teerikorpi99 (1999)).
One may – as pointed out by Teerikorpi et al. (Teerikorpi99 (1999)) – use instead of the relevant slope the calibrator slope which also predicts a biased estimate for $`H_0`$ but which can be corrected for in a rather straightforward manner. For the diameter relation the average correction term reads as
$$\mathrm{\Delta }\mathrm{log}H_0=(3-\alpha )\mathrm{ln}10\,\sigma _D^2\times \left[\frac{a_{\mathrm{cal}}'}{a'}-1\right],$$
(23)
where $`\sigma _D`$ is the dispersion of the log linear diameter $`\mathrm{log}D_{\mathrm{linear}}`$ and $`\alpha `$ gives the radial number density gradient: $`\alpha =0`$ corresponds to a strictly homogeneous distribution of galaxies. For magnitudes the correction term follows from (cf. Teerikorpi Teerikorpi90 (1990))
$$\mathrm{\Delta }\mathrm{log}H_0=-0.2\left[\frac{a_{\mathrm{cal}}'}{a'}-1\right]\times (\langle M\rangle -M_0).$$
(24)
Because $`\langle M\rangle -M_0`$ simply reflects the classical Malmquist bias, one finds:
$$\mathrm{\Delta }\mathrm{log}H_0=\frac{(3-\alpha )\mathrm{ln}10}{5}\sigma _M^2\times 0.2\left[\frac{a_{\mathrm{cal}}'}{a'}-1\right],$$
(25)
Note that one may use the calibrator slope, and consequently the correction formulas, irrespective of the nature of the calibrator sample (Teerikorpi et al. Teerikorpi99 (1999)). If the calibrator sample met the requirement mentioned, the values corrected with Eqs. 23 or 25 should equal those obtained from the relevant slopes. Furthermore, our analysis carried out so far would have yielded an unbiased estimate for $`H_0`$, and thus the problems would lie in the direct analysis. However, if the requirement is not met, one should prefer the corrective method using the calibrator slope.
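A direct numerical evaluation of Eqs. 23 and 25, with the slopes and dispersions quoted in this paper, reproduces the correction terms used in Sect. 6.3:

```python
import numpy as np

def dlogH0_diam(a_cal, a, sigma_D, alpha=0.0):   # Eq. 23
    return (3 - alpha) * np.log(10) * sigma_D**2 * (a_cal / a - 1)

def dlogH0_mag(a_cal, a, sigma_M, alpha=0.0):    # Eq. 25
    return (3 - alpha) * np.log(10) / 5 * sigma_M**2 * 0.2 * (a_cal / a - 1)

print(f"diameters:  {dlogH0_diam(0.73, 0.54, 0.28):.3f}")      # 0.191
print(f"magnitudes: {dlogH0_mag(-0.147, -0.115, 1.4):.3f}")    # 0.151
print(f"alpha=0.8:  {dlogH0_diam(0.73, 0.54, 0.28, 0.8):.3f} /"
      f" {dlogH0_mag(-0.147, -0.115, 1.4, 0.8):.3f}")          # 0.140 / 0.111
```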
### 6.1 Is the calibrator sample representative?
Is the calibrator bias present in our case? Recall that the calibrators used were sampled from the nearby field to have high quality distance moduli, mostly from the HST cepheid measurements. This means that we have no a priori guarantee that the calibrator sample meets the criterion required. We compare the type-corrected diameter and magnitude samples with the calibrator sample. Note that for the diameter sample we use only galaxies residing in the unbiased inverse plateau (*i.e.* the small selection effect in $`\mathrm{log}V_{\mathrm{max}}`$ has been eliminated). In Fig. 12 we show the histogram of the $`\mathrm{log}V_{\mathrm{max}}`$ values for the diameter sample and the individual calibrators (labelled as stars). The vertical solid line gives the median of the plateau, $`\mathrm{Med}(\mathrm{log}V_{\mathrm{max}}^{\mathrm{plateau}})=2.10`$, and the dotted line gives the median of the calibrators, $`\mathrm{Med}(\mathrm{log}V_{\mathrm{max}}^{\mathrm{calib}})=2.11`$. In the case of magnitudes both the field and calibrator samples have the same median (2.14). The average values for the diameter case were $`\langle \mathrm{log}V_{\mathrm{max}}^{\mathrm{plateau}}\rangle =2.09`$ and $`\langle \mathrm{log}V_{\mathrm{max}}^{\mathrm{calib}}\rangle =2.06`$, and for the magnitude case $`\langle \mathrm{log}V_{\mathrm{max}}^{\mathrm{mag}}\rangle =2.12`$ and $`\langle \mathrm{log}V_{\mathrm{max}}^{\mathrm{calib}}\rangle =2.08`$. Both the diameter and the magnitude field samples were subjected to strict limits, which means that both inevitably suffer from the classical Malmquist bias. If the calibrator sample were representative in the sense described, we would have expected a clear difference between the field and calibrator samples. That the statistics are so close to each other lends credence to the assumption that the calibrator selection bias is present.
We also made tests using the Kolmogorov-Smirnov statistics (Figs. 13 and 14). In this test a low significance level should be considered as counterevidence for the hypothesis that the two samples arise from the same underlying distribution. We found relatively high significance levels (0.89 for the diameter sample and 0.3 for the magnitude sample). These findings thus do not corroborate the hypothesis that the calibrator sample is drawn from the cosmic distribution either, and hence the use of Eqs. 23 or 25 is warranted.
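Such a two-sample Kolmogorov-Smirnov comparison can be sketched as follows; the gaussian stand-ins below only mimic the quoted sample sizes and means, they are not the KLUN data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
logV_field = rng.normal(2.09, 0.15, 2142)   # stand-in for the plateau sample
logV_cal = rng.normal(2.06, 0.15, 15)       # stand-in for the 15 calibrators

stat, pvalue = ks_2samp(logV_field, logV_cal)
# a high p-value: the samples are consistent with a common parent
# distribution -- here the Malmquist-limited field, not the cosmic one
print(f"K-S statistic {stat:.3f}, significance level {pvalue:.2f}")
```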
### 6.2 The dispersion in $`\mathrm{log}D_{\mathrm{linear}}`$
In order to find a working value for the dispersion in $`\mathrm{log}D_{\mathrm{linear}}`$, we first consider the classical Spaenhauer diagram (cf. Sandage 1994a , 1994b ). In the Spaenhauer diagram one studies the behaviour of $`X`$ as a function of the redshift. If the observed redshift could be translated into the corresponding cosmological distance, then $`X`$ inferred from $`x`$ and the redshift would genuinely reflect the true size of a galaxy.
In practice, the observed redshift cannot be considered as a direct indicator of the cosmological distance because of the inhomogeneity of the Local Universe. Peculiar motions should also be considered. Thus the inferred $`X`$ suffers from uncertainties in the underlying kinematical model. The Spaenhauer diagram as a diagnostics for the distribution function is always constrained by our knowledge of the form of the true velocity-distance law.
Because the normalized distance (cf. Sect. 3.) is proportional to the linear diameter we construct the Spaenhauer diagram as $`\mathrm{log}d_{\mathrm{norm}}`$ vs. $`\mathrm{log}d_{\mathrm{kin}}`$ thus avoiding the uncertainties in the absolute distance scale. The problems with relative distance scale are – of course – still present. The fit shown in Fig. 15 is not unacceptable. The dispersion used was $`\sigma _X=0.28`$, a value inferred from the dispersion in absolute B-band magnitudes $`\sigma _M=1.4`$ (Fouqué et al. Fouque90 (1990)) based on the expectation that the dispersion in log linear diameter should be one fifth of that of absolute magnitudes.
We also looked at how the average values $`\langle \mathrm{log}d_{\mathrm{norm}}\rangle `$ at different kinematical distances compare to the theoretical prediction, which, in a strictly limited sample of $`X`$’s, at each log distance is formally expressed as
$$\langle X\rangle _\mathrm{d}=X_0^{*}+\frac{2\sigma _X}{\sqrt{2\pi }}\frac{\mathrm{exp}\left[-(X_{\mathrm{lim}}-X_0^{*})^2/(2\sigma _X^2)\right]}{\mathrm{erfc}\left[(X_{\mathrm{lim}}-X_0^{*})/(\sqrt{2}\sigma _X)\right]}.$$
(26)
Here $`X`$ refers to $`\mathrm{log}d_{\mathrm{norm}}`$. The curve in Fig. 16 is based on $`X_0^{*}=1.37`$ and $`\sigma _X=0.28`$. The averages from the data are shown as bullets. The data points follow the theoretical prediction reasonably well.
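Eq. 26 is the mean of a gaussian truncated from below, and can be checked directly against a Monte Carlo draw; $`X_{\mathrm{lim}}`$ below is an illustrative assumption, while $`X_0^{*}`$ and $`\sigma _X`$ are the values quoted above.

```python
import numpy as np
from scipy.special import erfc

X0, sigma, X_lim = 1.37, 0.28, 1.50

t = (X_lim - X0) / sigma
analytic = X0 + (2 * sigma / np.sqrt(2 * np.pi)) \
    * np.exp(-t**2 / 2) / erfc(t / np.sqrt(2))   # Eq. 26

rng = np.random.default_rng(5)
draws = rng.normal(X0, sigma, 2_000_000)
print(f"Eq. 26: {analytic:.4f}   Monte Carlo: {draws[draws > X_lim].mean():.4f}")
```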
### 6.3 Corrections and the value of $`H_0`$
Consider a strictly homogeneous universe, *i.e.* $`\alpha =0`$. In Eqs. 23 and 25 one needs values for the slope $`a_\mathrm{c}'`$. A least squares fit to the type-corrected calibrator sample yields $`a_\mathrm{c}'=0.73`$ for the diameter relation and $`a_\mathrm{c}'=-0.147`$ for the magnitude relation (cf. Figs. 17 and 18). These slopes correspond to the diameter zero-point $`b_\mathrm{c}'(6)=1.066\pm 0.103`$ and to the magnitude zero-point $`b_\mathrm{c}'=-0.879\pm 0.131`$. The biased estimates for the average $`\mathrm{log}H_0`$ are $`\mathrm{log}H_0=1.910\pm 0.188`$ for the diameters and $`\mathrm{log}H_0=1.876\pm 0.176`$ for the magnitudes. For the zero-points and the averages we have given the $`1\sigma `$ standard deviations. The mean error in the averages is estimated from
$$ϵ_{\mathrm{log}H_0}\approx \sqrt{\frac{\sigma _{B'}^2}{N_{\mathrm{cal}}}+\frac{\sigma _{\mathrm{log}H_0}^2}{N_{\mathrm{gal}}}},$$
(27)
where $`\sigma _{B'}=\sigma _{b'}/a_{\mathrm{cal}}'`$ for diameters and $`\sigma _{B'}=0.2\sigma _{b'}/a_{\mathrm{cal}}'`$ for magnitudes. The use of Eq. 27 is acceptable because the dispersion in $`b'`$ does not correlate with the dispersion in $`\mathrm{log}H_0`$. With the given slopes and dispersions we find:
* $`\mathrm{log}H_0=1.910\pm 0.037`$ for the diameters
* $`\mathrm{log}H_0=1.876\pm 0.046`$ for the magnitudes.
For the slopes $`a_\mathrm{c}'=0.73`$ and $`a'=0.54`$ together with $`\sigma _X=0.28`$, Eq. 23 predicts an average correction term $`\mathrm{\Delta }\mathrm{log}H_0=0.191`$, and Eq. 25 with $`a_\mathrm{c}'=-0.147`$, $`a'=-0.115`$ and $`\sigma _M=1.4`$ gives $`\mathrm{\Delta }\mathrm{log}H_0=0.151`$. When these are applied to the above values we get the corrected, unbiased estimates
* $`\mathrm{log}H_0=1.719\pm 0.037`$ for the diameters
* $`\mathrm{log}H_0=1.725\pm 0.046`$ for the magnitudes.
These values translate into Hubble constants
* $`H_0=52_{-4}^{+5}\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse diameter B-band Tully-Fisher relation, and
* $`H_0=53_{-5}^{+6}\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse magnitude B-band Tully-Fisher relation.
These corrected values are in good concordance with each other as well as with the estimates established from the direct diameter Tully-Fisher relation (Theureau et al. 1997b ). Note that the errors in the magnitude relation are slightly larger than in the diameter relation. This is expected because for the diameter relation we possess more galaxies. The error is, however, mainly governed by the uncertainty in the calibrated zero-point: though the dispersion in the inverse relation as such is large, it is compensated by the number of galaxies available.
Finally, how significant an error do the correction formulae induce? We suspect the error to depend mainly on $`\alpha `$. The correction above was based on the assumption of homogeneity (*i.e.* $`\alpha =0`$). Recently Teerikorpi et al. (Teerikorpi98 (1998)) found evidence that the average density radially decreases around us ($`\alpha \approx 0.8`$), confirming the more general (fractal) analysis by Di Nella et al. (DiNella96 (1996)). Using this value of $`\alpha `$ we find $`\mathrm{\Delta }\mathrm{log}H_0=0.140`$ for the diameters and $`\mathrm{\Delta }\mathrm{log}H_0=0.111`$ for the magnitudes, yielding
* $`\mathrm{log}H_0=1.770\pm 0.037`$ for the diameters
* $`\mathrm{log}H_0=1.765\pm 0.046`$ for the magnitudes.
In terms of the Hubble constant we find
* $`H_0=59_{-4}^{+5}\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse diameter B-band Tully-Fisher relation, and
* $`H_0=58_{-5}^{+6}\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse magnitude B-band Tully-Fisher relation.
## 7 Summary
In the present paper we have examined how to apply the inverse Tully-Fisher relation to the problem of determining the value of the Hubble constant, $`H_0`$, in the practical context of the large galaxy sample KLUN. We found that the implementation of the inverse relation is not as simple a task as one might expect from general considerations (in particular from the quite famous result on the unbiased nature of the relation). We summarize our main results as follows.
1. A straightforward application of the inverse relation consists of finding the average Hubble ratio for each kinematical distance and transforming the relative distance into an absolute one through calibration. The 15 calibrator galaxies used were drawn from the field, with cepheid distance moduli obtained mostly from the HST observations. The inverse diameter relation predicted $`H_0\approx 80\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and the magnitude relation predicted $`H_0\approx 70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. The diameter value for $`H_0`$ is about 50 percent and the magnitude value about 30 percent larger than those obtained from the direct relation (cf. Theureau et al. 1997b ).
2. We examined whether this discrepancy could be resolved in terms of some selection effect in $`\mathrm{log}V_{\mathrm{max}}`$ and the dependence of the zero-points on the Hubble type. One expects these to have some influence on the derived value of $`H_0`$. Only a minuscule effect was observed.
3. There is – however – a new kind of bias involved: if the $`\mathrm{log}V_{\mathrm{max}}`$-distribution of the calibrators does not reflect the cosmic distribution of the field sample, and the relevant slope for the field galaxies differs from the calibrator slope, the average value of $`\mathrm{log}H_0`$ will be biased if the relevant slope is used (Teerikorpi et al. Teerikorpi99 (1999)).
4. We showed, for the unbiased inverse plateau galaxies, i.e. a sample without galaxies probably suffering from selection in $`\mathrm{log}V_{\mathrm{max}}`$, that the calibrators and the field sample obey different inverse diameter slopes, namely $`a_{\mathrm{cal}}^{\prime }=0.73`$ and $`a^{\prime }=0.54`$. Also, the magnitude slopes differed from each other ($`a_{\mathrm{cal}}^{\prime }=0.147`$ and $`a^{\prime }=0.115`$). For the diameter relation we were able to use 2142 galaxies and for the magnitude relation 1713 galaxies. These are statistically significant sample sizes.
5. We also found evidence that the calibrator sample does not follow the cosmic distribution of $`\mathrm{log}V_{\mathrm{max}}`$ for the field galaxies. This means that if the relevant slopes are used, a too large value for $`H_0`$ is found. Formally, this calibrator selection bias could be corrected for, but doing so is a complicated task.
6. One may use the calibrator slope instead of the relevant slope, which also brings about a biased value of $`H_0`$. Now, however, the correction for the bias is an easy task. Furthermore, this approach can be used irrespective of the nature of the calibrator sample and should yield an unbiased estimate for $`H_0`$.
7. When we adopted this line of approach we found
* $`H_0=52_{-4}^{+5}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse diameter B-band Tully-Fisher relation, and
* $`H_0=53_{-5}^{+6}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse magnitude B-band Tully-Fisher relation
for a strictly homogeneous distribution of galaxies ($`\alpha =0`$) and
* $`H_0=59_{-4}^{+5}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse diameter B-band Tully-Fisher relation, and
* $`H_0=58_{-5}^{+6}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ for the inverse magnitude B-band Tully-Fisher relation
for a decreasing radial density gradient ($`\alpha =0.8`$).
These values are in good concordance with each other as well as with the values established from the corresponding direct Tully-Fisher relations derived by Theureau et al. (1997b ), who gave a strong case for the long cosmological distance scale consistently supported by Sandage and his collaborators. Our analysis also establishes a case supporting such a scale. It is worth noting that this is the first time that the inverse Tully-Fisher relation clearly lends credence to small values of the Hubble constant $`H_0`$.
###### Acknowledgements.
We have made use of data from the Lyon-Meudon extragalactic Database (LEDA) compiled by the LEDA team at the CRAL-Observatoire de Lyon (France). This work has been supported by the Academy of Finland (projects “Cosmology in the Local Galaxy Universe” and “Galaxy streams and structures in the Local Universe”). T. E. would like to thank G. Paturel and his staff for hospitality during his stay at the Observatory of Lyon in May 1998. Finally, we thank the referee for useful comments and constructive criticism.
## Appendix A The relevant slope and an unbiased $`H_0`$
In this appendix we demonstrate in a simple manner that the relevant slope introduced in Sect. 2.2 is indeed the slope to be used. Consider
$$\mathrm{log}H_0=\mathrm{log}V_{\mathrm{cor}}\mathrm{log}R_{\mathrm{iTF}},$$
(28)
where the velocity corrected for the peculiar motions, $`V_{\mathrm{cor}}`$, depends on the relative kinematical distance scale as
$$\mathrm{log}V_{\mathrm{cor}}=\mathrm{log}C_1+\mathrm{log}d_{\mathrm{kin}}$$
(29)
and the inverse Tully-Fisher distance in Mpc is (the numerical constant $`\beta =1.536274`$ connects $`x`$ in 0.1 arcmin, $`X`$ in kpc and $`R_{\mathrm{iTF}}`$ in Mpc)
$$\mathrm{log}R_{\mathrm{iTF}}=Ap+B_{\mathrm{cal}}-x+\beta $$
(30)
The constant $`C_1`$ was defined by Eq. 11 and can be decomposed into $`\mathrm{log}C_1=\mathrm{log}H_0^{\prime }+\mathrm{log}C_2`$. $`H_0^{\prime }`$ is the true value of the Hubble constant and $`C_2`$ transforms the relative distance scale into the absolute one: $`\mathrm{log}R_{\mathrm{kin}}=\mathrm{log}d_{\mathrm{kin}}+\mathrm{log}C_2`$. Because $`X_{\mathrm{kin}}=\mathrm{log}R_{\mathrm{kin}}+x-\beta `$, Eq. 28 reads:
$$\mathrm{log}H_0-\mathrm{log}H_0^{\prime }=X_{\mathrm{kin}}-Ap-B_{\mathrm{cal}}.$$
(31)
Consider now a subsample of galaxies at a constant $`X_\mathrm{o}`$. By realizing that $`X_{\mathrm{kin}}=X_\mathrm{o}+(B^{\prime }-B_{\mathrm{in}})`$, where $`B^{\prime }`$ gives the true distance scale and $`B_{\mathrm{in}}`$ depends on the adopted distance scale (based on the input $`H_0`$), and by taking the average over $`X_\mathrm{o}`$, Eq. 31 yields
$$\langle \mathrm{log}H_0\rangle _{X_\mathrm{o}}-\mathrm{log}H_0^{\prime }=\langle X_{\mathrm{kin}}-Ap\rangle _{X_\mathrm{o}}-B_{\mathrm{cal}}.$$
(32)
The use of $`B^{\prime }`$ is based on two presumptions, namely that the underlying kinematical model indeed brings about the correct relative distance scale and that the adopted value for $`C_1`$ genuinely reflects the true absolute distance scale. If the adopted slope $`a_\mathrm{o}^{\prime }`$ is the relevant one, we find using Eq. 19 $`\langle Ap\rangle _{X_\mathrm{o}}=X_\mathrm{o}-B_{\mathrm{in}}`$ and
$$\langle \mathrm{log}H_0\rangle _{X_\mathrm{o}}-\mathrm{log}H_0^{\prime }=(B^{\prime }-B_{\mathrm{in}})-(B_{\mathrm{cal}}-B_{\mathrm{in}}).$$
(33)
As a final result we find
$$\langle \mathrm{log}H_0\rangle _{X_\mathrm{o}}-\mathrm{log}H_0^{\prime }=(b_{\mathrm{cal}}^{\prime }-b_{\mathrm{true}}^{\prime })/a_\mathrm{o}^{\prime }.$$
(34)
Because Eq. 34 is valid for each $`X_\mathrm{o}`$, the use of the relevant slope necessarily guarantees a horizontal run for $`\langle \mathrm{log}H_0\rangle `$ as a function of $`X_\mathrm{o}`$.
## Appendix B Note on a theoretical diameter slope $`a^{}0.75`$
Theureau et al. (1997a ) presented theoretical arguments which supported the inverse slope $`a^{\prime }=0.5`$ derived from the field galaxies. Consider a pure rotating disk (the Hubble type 8). The square of the rotational velocity measured at the radius $`r_{\mathrm{max}}`$ at which the rotation has its maximum is directly proportional to the mass within $`r_{\mathrm{max}}`$ divided by $`r_{\mathrm{max}}`$, and this mass is in turn proportional to the square of $`r_{\mathrm{max}}`$. Hence, $`\mathrm{log}V_{\mathrm{max}}\simeq 0.5\mathrm{log}r_{\mathrm{max}}`$. By adding a bulge with a mass-to-luminosity ratio differing from that of the disk, and a dark halo with mass proportional to the luminous mass, one can as a first approximation understand the dependence of the zero-point of the inverse relation on the Hubble type.
However, the present study seems to require that the theoretical slope $`a^{\prime }`$ is closer to 0.75 than to 0.5. The question arises whether the simple model used by Theureau et al. (1997a ) could in some natural way be revised in order to produce a steeper slope. In fact, the model assumed that for each Hubble type the mass-to-luminosity ratio $`M/L`$ is constant in galaxies of different sizes (luminosities). If one allows $`M/L`$ to depend on luminosity, the slope $`a^{\prime }`$ will differ from 0.5. In particular, if $`M/L\propto L^{0.25}`$, one may show that the model predicts the inverse slope $`a^{\prime }=0.75`$. The required luminosity dependence of $`M/L`$ is interestingly similar to that of the fundamental plane for elliptical galaxies and bulges (Burstein et al. Burstein97 (1997)). The questions of the slope, the mass-to-luminosity ratio and type-dependence will be investigated elsewhere by Hanski & Teerikorpi (1999, in preparation).
## Appendix C How to explain $`a_{\mathrm{obs}}^{}<a^{}`$?
Among other things ET97 discussed how a gaussian measurement error $`\sigma _x`$ in apparent diameters yields a too large value for $`H_0`$. How does the combination of cut-offs in the $`\mathrm{log}V_{\mathrm{max}}`$-distribution and this bias affect the slope? We examined this problem by using a synthetic Virgo Supercluster (cf. Ekholm Ekholm96 (1996)). As a luminosity function we chose
$$\mathrm{log}D=0.28\times G(0,1)+1.2$$
(35)
and as the inverse relation
$$\mathrm{log}V_{\mathrm{max}}=0.11\times G(0,1)+a_\mathrm{t}^{\prime }\mathrm{log}D+0.9,$$
(36)
where $`G(0,1)`$ refers to a normalized gaussian random variable. As the “true” inverse slope we used $`a_\mathrm{t}^{\prime }=0.75`$. The other numerical values were adjusted in order to have a superficial resemblance with Fig. 5. We first subjected the synthetic sample to the upper and lower cut-offs in $`\mathrm{log}V_{\mathrm{max}}`$ given in Sect. 4. The resulting slope was $`a^{\prime }=0.692`$. A dispersion of $`\sigma _x=0.05`$ yielded $`a^{\prime }=0.642`$, and $`\sigma _x=0.1`$ gave $`a^{\prime }=0.559`$. The inverse Tully-Fisher diagram for the latter case is shown in Fig. 19. Though the model for the errors is rather simplistic, this experiment shows a natural way of flattening the observed slope $`a^{\prime }`$ with respect to the input slope $`a_\mathrm{t}^{\prime }`$.
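The experiment is straightforward to reproduce in outline. The sketch below implements Eqs. 35 and 36 with $`a_\mathrm{t}^{\prime }=0.75`$; note that the cut-off values in $`\mathrm{log}V_{\mathrm{max}}`$ used here are placeholders (the actual ones are given in Sect. 4), so the fitted slopes will only qualitatively reproduce the quoted sequence 0.692, 0.642, 0.559.

```python
import numpy as np
rng = np.random.default_rng(1)

n = 20000
logD = 0.28 * rng.standard_normal(n) + 1.2                 # Eq. 35
logV = 0.11 * rng.standard_normal(n) + 0.75 * logD + 0.9   # Eq. 36, a_t' = 0.75

# Placeholder upper/lower cut-offs in log Vmax (actual values in Sect. 4)
keep = (logV > 1.6) & (logV < 2.2)

for sigma_x in (0.0, 0.05, 0.1):
    x = logD[keep] + sigma_x * rng.standard_normal(keep.sum())  # measurement error
    a = np.polyfit(x, logV[keep], 1)[0]                         # least-squares slope
    print(f"sigma_x = {sigma_x:.2f}: observed a' = {a:.3f}")    # flattens with sigma_x
```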
# Enhanced Signatures for Disoriented Chiral Condensates
## I introduction
Recently, there has been growing interest in the possibility of the formation of a disoriented chiral condensate (D$`\chi `$C) in heavy ion collisions . In an ultrarelativistic heavy ion collision, some region may thermalize at a temperature high enough that chiral symmetry is restored in the region. If the system cools sufficiently rapidly back through the transition temperature, the chirally restored state is unstable, as small fluctuations in any chiral direction ($`\sigma ,\stackrel{}{\pi }`$) will grow exponentially. This can create regions where the pion field has a macroscopic occupation number. It should be stressed that this scenario is not derivable directly from the underlying theory of QCD and contains a number of untested dynamical assumptions, principally that the cooling is rapid. While the failure of the system to form a D$`\chi `$C cannot be used to rule out that the system has reached the chiral restoration temperature (as the scenario described above is not derivable directly from QCD), observation of D$`\chi `$C formation would be a clear signal for chiral restoration at high temperature.
There are clear signatures of D$`\chi `$C formation provided a single large domain is formed, containing a large number of pions. For example, one expects an excess in low $`p_T`$ pion production as the characteristic momentum of a pion from a large region is small. Such a signal works even if multiple regions of D$`\chi `$C are formed provided each region is large, but it is not decisive since one could imagine some other collective low energy effects which produce low $`p_T`$ pions. On the other hand, since the pions formed in a D$`\chi `$C, being essentially classical, form a coherent state, this coherent state has some orientation in isospace, and all of the pions in the domain are essentially maximally aligned (given the constraints of quantum mechanics) and point in the same isospin direction. If there are a large number of pions in the domain, this implies a distinctive distribution of $`R`$, the ratio of neutral to total pions in the domain:
$$f_0(R)=\frac{1}{2\sqrt{R}}.$$
(1)
In contrast, the distribution from uncorrelated emissions is narrowly peaked about $`1/3`$ with a variance $`\langle R^2\rangle -\langle R\rangle ^2=\frac{2}{9𝒩}`$, where $`𝒩`$ is the total number of pions, and approaches a delta function at $`R=1/3`$ when $`𝒩\rightarrow \infty `$. Since these two distributions are dramatically different, this provides a clear signature for single domain D$`\chi `$C formation, provided one can kinematically separate the pions from the D$`\chi `$C from other pions in the system.
Unfortunately, this signature depends critically on the assumption that a single large domain of D$`\chi `$C is formed, which is a priori rather unlikely. One expects the domains are of characteristic size $`\tau `$, the exponential growth time of the pion fluctuation in the unstable chiral restored state. Since the size of the fireball is much larger than typical QCD length scales, it seems unlikely that $`\tau `$ would be of the size of the fireball, and thus formation of a single D$`\chi `$C domain is improbable. Formation of multiple domains of D$`\chi `$C, however, tends to wash out the $`R`$ distribution described by Eq. (1). As the pions emerging from different domains cannot be distinguished kinematically, by the central limit theorem the $`R`$ distribution will approach a normal distribution peaked at $`R=1/3`$. This normal distribution may be distinguished from the normal distribution arising from uncorrelated emission; the case of multiple domains of D$`\chi `$C will have a substantially larger variance.
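The washing-out is easy to see in a toy simulation. The sketch below draws $`R`$ for purely coherent emission from $`N`$ equal-sized isosinglet domains ($`R_i=\mathrm{cos}^2\theta _i`$ with an isotropic axis per domain, so each $`R_i`$ follows Eq. (1)); one may check that the variance of the event-wise $`R`$ is then $`4/45N`$, still much larger than the uncorrelated value $`2/9𝒩`$ for any realistic multiplicity $`𝒩`$.

```python
import numpy as np
rng = np.random.default_rng(0)

def sample_R(N, trials=200_000):
    """Event-wise R for purely coherent emission from N equal isosinglet domains."""
    c = rng.uniform(-1.0, 1.0, size=(trials, N))   # cos(theta_i), isotropic axes
    return (c**2).mean(axis=1)                     # R = mean of R_i = cos^2(theta_i)

for N in (1, 2, 5, 10):
    R = sample_R(N)
    print(f"N={N:2d}: <R>={R.mean():.3f}  var={R.var():.4f}  (4/45N = {4/45/N:.4f})")
# Uncorrelated emission of, say, 1000 pions: var = 2/(9*1000) ~ 0.0002 -- far narrower.
```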
Unfortunately, there is an important practical limitation which makes it difficult to exploit the $`R`$ distribution as a signature. Even under the most optimistic of scenarios, the total number of pions coming from D$`\chi `$C’s will be a small fraction of the total number of pions. If one includes all pions produced in the reaction, the signal due to the pions from the D$`\chi `$C will presumably be overwhelmed. One can apply a low $`p_T`$ cut to suppress the noise due to incoherently emitted pions. However, even with the low $`p_T`$ cut, the noise may still be severe, as both the signal and the noise peak at $`R=1/3`$. In this paper, we suggest cuts which may dramatically enhance the signal-to-noise ratio. We study the conditional probability distribution of $`R`$ restricted to the events in which the $`k`$ pions with the lowest $`p_T`$ are all neutral, and we will show that the expectation value of $`R`$ is shifted away from $`1/3`$. Since incoherent emission will result in a very narrow peak around $`R=1/3`$, any such shifts should be easily observable. Moreover, one can make successive cuts by increasing the value of $`k`$, and enhance the signal in each successive step.
This paper is organized as follows. We will start with the simple scenario and study single domain D$`\chi `$C formation in Sec. II. In Sec. III the effects of multi-domain formation and the noises due to incoherently emitted pions will be studied, while potential experimental application and limitations will be discussed in Sec. IV.
## II domain with non-zero isospin
We are going to start with an unrealistically simple scenario and make it more realistic later on in our discussion. We will consider a single domain which is described by an isosinglet density matrix. (In general, a D$`\chi `$C is not a pure state; the wavefunction of the pion coherent state at the core of the “fireball” of the heavy ion collision is entangled with the energetic emission at the edge of the “fireball”.) Such a state can be written as
$$\rho =\sum _{n,I,I_z}\frac{c_{nI}}{2I+1}|n,I,I_z\rangle \langle n,I,I_z|,$$
(2)
with $`\sum _{n,I}c_{nI}=1`$. Note that the coefficients $`c_{nI}`$ are real, positive and do not depend on $`I_z`$ (as the full state is assumed to be isoscalar). The probability distribution of such a mixed state in isospace (assuming the typical $`I`$ is much less than $`n`$) is
$$d^2P(\theta ,\varphi )=\sum _{n,I}\frac{c_{nI}}{2I+1}\sum _{I_z}|Y_{II_z}(\theta ,\varphi )|^2\mathrm{sin}\theta \,d\theta \,d\varphi =\frac{\mathrm{sin}\theta \,d\theta \,d\varphi }{4\pi },\qquad dP(\theta )=\frac{\mathrm{sin}\theta \,d\theta }{2},$$
(3)
where the angles $`(\theta ,\varphi )`$ are defined such that a unit vector in isospace is $`(r_x,r_y,r_0)=(\mathrm{sin}\theta \mathrm{cos}\varphi ,\mathrm{sin}\theta \mathrm{sin}\varphi ,\mathrm{cos}\theta )`$. The probability distribution is uniform, as the condensate is equally likely to point in any direction on the two-dimensional sphere, as demanded by isospin symmetry.
The expectation of the number operator of neutral pions in the condensate is given by
$$\langle n_0\rangle =\langle a_0^{\dagger }a_0\rangle =\langle \stackrel{}{a}^{\dagger }\cdot \stackrel{}{a}\rangle \mathrm{cos}^2\theta =\langle n\rangle \mathrm{cos}^2\theta ,$$
(4)
where $`\stackrel{}{a}=(a_x,a_y,a_0)`$ is a vector of annihilation operators which annihilate the hermitian fields $`\pi _x=(\pi _++\pi _{-})/\sqrt{2}`$, $`\pi _y=(\pi _+-\pi _{-})/\sqrt{2}i`$, and $`\pi _0`$, respectively, and $`\langle n\rangle =\sum _{n,I}c_{nI}n`$ is the expected number of pions. Since $`R=\langle n_0\rangle /\langle n\rangle `$ (we are assuming that $`\langle n\rangle \gg 1`$), one can easily calculate the probability distribution of $`R`$.
$$dP=\frac{1}{2}\mathrm{sin}\theta \,d\theta =\frac{1}{2}d\mathrm{cos}\theta =d\sqrt{R}=\frac{1}{2}R^{-1/2}dR.$$
(5)
By defining $`dPf_0(R)dR`$ (the subscript “0” stands for $`I=0`$), one has, in the limit that $`n`$ is large,
$$f_0(R)=\frac{1}{2}R^{-1/2},$$
(6)
recovering Eq. (1). The distribution is plotted in Fig. 1a. It is obvious that the shape is qualitatively different from the Poisson–Gaussian distribution due to incoherent emissions. The expectation value of $`R`$ is $`1/3`$,
$$\langle R\rangle _0\equiv \int _0^1Rf_0(R)\,dR=1/3,$$
(7)
which has the simple interpretation that it is equally likely for the pion to be a $`\pi _0`$, $`\pi _+`$ or $`\pi _{-}`$, and hence on average a third of the pions are neutral.
This distribution is a consequence of the fact that we have assumed the D$`\chi `$C to be an isosinglet, a reasonable assumption on physical grounds. However, let’s consider the distribution of $`R`$ after the D$`\chi `$C emits a single $`\pi _0`$. The density matrix after the emission can be written as $`\lambda a_0\rho a_0^{\dagger }`$, where $`\lambda `$ is a normalization constant, $`\rho `$ is the density matrix defined in Eq. (2) and $`a_0`$ annihilates a neutral pion. This new density matrix is not an isosinglet. It is straightforward to show that the probability distribution for this state is
$$dP=\frac{\frac{1}{2}\mathrm{sin}\theta \mathrm{cos}^2\theta \,d\theta }{\int \frac{1}{2}\mathrm{sin}\theta \mathrm{cos}^2\theta \,d\theta }=\frac{\frac{1}{2}RR^{-1/2}dR}{\int _0^1\frac{1}{2}RR^{-1/2}\,dR}$$
(8)
or equivalently,
$$f_1(R)\equiv f(R|\text{1st pion is neutral})=Rf_0(R)/\int _0^1Rf_0(R)\,dR=\frac{3}{2}R^{1/2}.$$
(9)
The distribution $`f_1(R)`$ is plotted in Fig. 1b, which is drastically different from $`f_0(R)`$. The distribution is skewed towards the high end, while $`f_0(R)`$ is skewed towards the low end. Moreover, the expectation value of $`R`$ is clearly pushed up:
$$\langle R\rangle _1\equiv \int _0^1Rf_1(R)\,dR=3/5.$$
(10)
So we have arrived at the intriguing conclusion that, if the “first pion” emitted from an isosinglet D$`\chi `$C is neutral, 60% of the pions subsequently emitted from the D$`\chi `$C are neutral, a huge enhancement over the original expectation of 33%.
This extraordinary statement certainly deserves more discussion. First, what is our criterion to decide which is the “first pion”? The answer is simple: it can be any criterion. It does not matter as long as it is a priori equally likely to be a $`\pi _0`$, a $`\pi _+`$ or a $`\pi _{-}`$. The derivation just depends on our removing a neutral pion from the isosinglet D$`\chi `$C. It can be the first pion emitted in time, or the last one emitted in time, or even the 17th emitted in time. The criterion can also be unrelated to the order of emission. For example, we can choose the “first pion” to be the one with the smallest polar angle. One can use any of these criteria to identify the “first pion”, and if it turns out to be neutral, then the $`R`$ distribution of subsequent emissions is always given by $`f_1(R)`$, provided we are in the large number limit. However, this is only true in this idealized scenario, when all the pions are coming from a single D$`\chi `$C domain. In reality, some of the pions come from incoherent emission, and if the “first pion” turns out to be incoherently emitted, the expectation value of $`R`$ of the remaining pions is still going to be $`1/3`$, not $`3/5`$. As a result, we want to choose our criterion in such a way that the “first pion” is likely to originate from the D$`\chi `$C and not from incoherent emissions. Since D$`\chi `$C pions by hypothesis have low $`p_T\sim 1/L`$, where $`L`$ is the size of the domain, a natural choice is to use the pion with the lowest $`p_T`$ as our “first pion”.
After clarifying the meaning of the term “first pion”, we move on to discuss the physical origin of the modification of the probability distribution of $`R`$. In a nutshell, we are seeing the physics of (iso)spin alignment due to Bose condensation. To illustrate the point, let us first consider the following apparently unrelated Stern–Gerlach experiment. Consider a large number of massive spin-1 particles, which for concreteness will be called deuterons. Initially they are all polarized along a randomly chosen direction $`\stackrel{}{n}`$, which is a priori equally likely to be any direction in three dimensional space. In other words, $`\stackrel{}{S}\cdot \stackrel{}{n}=0`$ for all the deuterons. Now let us pick one of these deuterons and pass it through a Stern–Gerlach spectrometer which measures $`S_z`$, the spin along the $`z`$-axis. What is the probability that the measurement gives $`S_z=0`$? The answer is clearly $`1/3`$, as the cases for $`S_z=+1`$, 0 and $`-1`$ are equally likely. On the other hand, if the measurement on the first deuteron gives $`S_z=0`$, what is the conditional probability for the next deuteron to pass through the Stern–Gerlach spectrometer also to be measured to have $`S_z=0`$? The answer this time is no longer $`1/3`$. The spins of all the deuterons are aligned along the same direction $`\stackrel{}{n}`$, and that the first deuteron is measured to have $`S_z=0`$ suggests that $`\stackrel{}{n}`$ is more likely than not to be roughly aligned with $`\stackrel{}{z}`$. As a result, the conditional probability is no longer $`1/3`$, but can be easily shown to be $`3/5`$, which is exactly the predicted value for $`\langle R\rangle _1`$ in Eq. (10). The situation for a single domain of D$`\chi `$C is analogous, with isospin aligned pions instead of spin aligned deuterons. By construction, the pions in a D$`\chi `$C domain are isospin aligned, and by the same analysis, we have shown that the knowledge of the “first pion” being neutral can dramatically modify the conditional probability distribution of $`R`$.
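The deuteron thought experiment is easily checked numerically; in the sketch below the acceptance step encodes $`P(S_z=0|\stackrel{}{n})=\mathrm{cos}^2\theta `$ for an isotropically distributed axis $`\stackrel{}{n}`$:

```python
import numpy as np
rng = np.random.default_rng(0)

c2 = rng.uniform(-1, 1, 10**6) ** 2     # cos^2(theta) for isotropic axes
first_is_zero = rng.random(10**6) < c2  # accept: first measurement gave S_z = 0
print(c2[first_is_zero].mean())         # conditional P(next S_z = 0) -> 3/5
```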
One can also consider the conditional probability distribution of $`R`$ in the case that the “first pion” is charged. Note that
$$f_0(R)=\frac{1}{3}\left(f(R|\text{1st pion is a }\pi _+)+f(R|\text{1st pion is a }\pi _{-})+f(R|\text{1st pion is a }\pi _0)\right),$$
(11)
and hence, since $`f_1(R)=f(R|\text{1st pion is a }\pi _0)`$,
$$\stackrel{~}{f}(R)\equiv f(R|\text{1st pion is charged})=\frac{3}{2}f_0(R)-\frac{1}{2}f_1(R)=\frac{3}{4}(1-R)R^{-1/2}.$$
(12)
The expectation value of $`R`$, given that the “first pion” is charged, can be easily shown to be $`1/5`$. As a consistency check, one can calculate $`\langle R\rangle `$, the expectation value of $`R`$ regardless of the species of the “first pion”. Since the “first pion” is twice as likely to be charged as to be neutral,
$$\langle R\rangle _0=\frac{1}{3}\left(\frac{3}{5}+2\times \frac{1}{5}\right)=\frac{1}{3},$$
(13)
agreeing with Eq. (7).
Lastly, we will study the conditional probability distribution of $`R`$ given that the $`k`$ pions with the lowest $`p_T`$, which will be hereafter referred to as the “first $`k`$ pions”, are all neutral. It is straightforward to show that in this case
$$dP=\frac{\frac{1}{2}\mathrm{sin}\theta \mathrm{cos}^{2k}\theta \,d\theta }{\int \frac{1}{2}\mathrm{sin}\theta \mathrm{cos}^{2k}\theta \,d\theta }=\frac{\frac{1}{2}R^kR^{-1/2}dR}{\int _0^1\frac{1}{2}R^kR^{-1/2}\,dR}.$$
(14)
and
$$f_k(R)\equiv f(R|\text{1st }k\text{ pions are all neutral})=R^kf_0(R)/\int _0^1R^kf_0(R)\,dR=(k+\frac{1}{2})R^{k-1/2}.$$
(15)
The distributions $`f_2(R)`$ and $`f_3(R)`$ are plotted in Fig. 1c and d, respectively. One can see that as $`k`$ increases, the distribution is more and more skewed towards the high end. As a result, the expectation value of $`R`$ increases with $`k`$.
$$\langle R\rangle _k\equiv \int _0^1Rf_k(R)\,dR=(2k+1)/(2k+3).$$
(16)
It is useful to define $`Q`$ as the ratio of the number of $`\pi _+`$ to the number of total pions emitted. By symmetry it is also the ratio of the number of $`\pi _{-}`$ to the number of total pions emitted, and since $`R+2Q=1`$,
$$\langle Q\rangle _k=1/(2k+3).$$
(17)
From the above analysis, the prescription to enhance the collective signal is quite clear. One should make successive cuts on the data sample on the condition that the $`k`$ pions with the lowest $`p_T`$ are all neutral, and measure $`\langle R\rangle _k`$ after each cut to see if it increases as predicted in Eq. (16). This result, however, depends on the assumption that we have only a single domain of D$`\chi `$C without any contamination due to incoherent pion emissions. Since this assumption is unrealistic for heavy ion collision experiments, the scenario we studied in this section is only an idealized situation. In the next section, we will discuss more realistic scenarios.
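Eq. (16) can be verified by reweighting samples of $`f_0(R)`$, since $`R=u^2`$ with $`u`$ uniform on $`[0,1]`$ reproduces $`f_0`$, and at fixed $`R`$ the probability that the “first $`k`$ pions” are all neutral is $`R^k`$. A minimal sketch:

```python
import numpy as np
rng = np.random.default_rng(0)

R = rng.random(10**6) ** 2        # R = u^2 samples f0(R) = 1/(2 sqrt(R))
for k in range(6):
    w = R**k                      # weight: all k lowest-pT pions are neutral
    est = (w * R).sum() / w.sum() # <R>_k, Monte Carlo
    print(k, round(est, 4), (2*k + 1) / (2*k + 3))  # matches (2k+1)/(2k+3)
```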
## III The effects of multi-domain formation and incoherent emissions
The scenario considered in the last section is highly unrealistic in at least two ways. First, as discussed in the introduction, single domain D$`\chi `$C formation is highly unlikely. For a realistic treatment one must study D$`\chi `$C formation with more than one domain, each pointing in a different direction in the isospace. Moreover, we have neglected the effect of incoherently emitted pions, which have very important effects. If the neutral “first pion” is incoherently emitted, the $`R`$ distribution of the remaining pions is described by $`f_0(R)`$, instead of $`f_1(R)`$ when the “first pion” comes from the D$`\chi `$C. In this section, we will incorporate these two effects and see how the predictions above are modified.
We will study the expectation value of $`R`$, or equivalently the expectation of $`Q`$, for a situation described by the following parameters. The coherent fraction $`\chi `$ is the fraction of pions which originate from D$`\chi `$C domains, so that when $`\chi =1`$, all pions are coherently emitted, and when $`\chi =0`$, all pions are incoherently emitted. We will consider the case where there are $`N`$ domains, all containing an equal number of pions, which will be assumed to be large. (This assumption of all domains having the same number of pions is unrealistic but is made for illustrative purposes; the effects of unequal domain sizes will be briefly discussed below.) Each domain is described by an isosinglet density matrix, but the isospins of pions in different domains are uncorrelated. Now the question is: if the “first $`k`$ pions” in this channel are all neutral, what are the expectation values of $`R`$ and $`Q`$ among the rest of the pions?
The answer turns out to be the following expression:
$$\langle R\rangle =\frac{1}{3}+2\mathrm{\Delta },\qquad \langle Q\rangle =\frac{1}{3}-\mathrm{\Delta }.$$
(19)
The shift $`\mathrm{\Delta }`$ is given by
$$\mathrm{\Delta }=\chi \sum _{j=0}^kP_j\left(\frac{1}{3}-\frac{1}{2j+3}\right)=\chi \left(\frac{1}{3}-\sum _{j=0}^k\frac{P_j}{2j+3}\right),$$
(20)
where
$$P_j=\binom{k}{j}p^j(1-p)^{k-j},\qquad p=\chi /N.$$
(21)
Each term in this formula has a simple interpretation:
* The expectation value $`\langle R\rangle `$ is always $`1/3`$ for the incoherently emitted pions. Only the pions coming from the domains are affected by isospin alignment; hence the overall factor of $`\chi `$.
* Each coherently emitted pion comes from one of the domains, which will be called domain X. How many of the “first $`k`$ pions” also come from domain X? The probability that a given pion comes from domain X is $`p=\chi /N`$, and the probability that $`j`$ of the “first $`k`$ pions” come from domain X is $`P_j=\binom{k}{j}p^j(1-p)^{k-j}`$.
* Given that $`j`$ of the “first $`k`$ pions” come from domain X, the conditional expectation value of $`Q`$ decreases from $`1/3`$ to $`1/(2j+3)`$, while the conditional expectation value of $`R`$ increases by twice that amount.
In passing, we note that $`\mathrm{\Delta }`$ can also be expressed as an integral or the hypergeometric function $`{}_{2}{}^{}F_{1}^{}`$:
$$\mathrm{\Delta }=\chi \left(\frac{1}{3}-\frac{1}{\sqrt{p^3}}\int _0^{\sqrt{p}}dz\,z^2(z^2+1-p)^k\right)\qquad (22)$$
$$\phantom{\mathrm{\Delta }}=\chi \left(\frac{1}{3}-\frac{\mathrm{cos}^{2k+3}\mathrm{\Theta }}{\mathrm{sin}^3\mathrm{\Theta }}\int _0^\mathrm{\Theta }d\vartheta \,\frac{\mathrm{sin}^2\vartheta }{\mathrm{cos}^{2k+4}\vartheta }\right),\qquad \mathrm{tan}^2\mathrm{\Theta }=\frac{\chi /N}{1-\chi /N}\qquad (23)$$
$$\phantom{\mathrm{\Delta }}=\frac{1}{3}\chi \left(1-(1-p)^k\,{}_2F_1\!\left[\tfrac{3}{2},-k;\tfrac{5}{2};-\tfrac{p}{1-p}\right]\right).\qquad (24)$$
In Fig. 2, we have made contour plots of $`\mathrm{\Delta }=1/30`$ (such that $`\langle R\rangle =0.4`$ and $`\langle Q\rangle =0.3`$), for $`k=1,\mathrm{\dots },5`$ in the $`(\chi ,1/N)`$ parameter space. The horizontal axis is the coherent fraction $`\chi `$ while the vertical axis is $`1/N`$ where $`N`$ is the number of domains. Both $`\chi `$ and $`1/N`$ range from 0 to 1. Thus, for example, with 3 domains and $`\chi =0.6`$, in order to have $`\mathrm{\Delta }\ge 1/30`$ we must have $`k\ge 3`$.
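The content of the contour plots follows directly from Eqs. (20) and (21). The sketch below evaluates $`\mathrm{\Delta }`$ for the example just quoted ($`N=3`$, $`\chi =0.6`$), printing alongside it the weak-signal approximation of Eq. (25) for comparison:

```python
from math import comb

def delta(chi, N, k):
    """Shift Delta of Eqs. 20-21; <R> = 1/3 + 2*Delta, <Q> = 1/3 - Delta."""
    p = chi / N
    return chi * sum(comb(k, j) * p**j * (1 - p)**(k - j) * (1/3 - 1/(2*j + 3))
                     for j in range(k + 1))

chi, N = 0.6, 3
for k in range(1, 6):
    print(k, round(delta(chi, N, k), 4), round(2 * chi**2 * k / (15 * N), 4))
# k = 3 is the first cut with Delta >= 1/30, as read off from Fig. 2
```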
Equations (19)–(21) illustrate the main results of this paper. One can see that, without any D$`\chi `$C formation, $`\chi =0`$ (corresponding to the left edge of Fig. 2), $`\mathrm{\Delta }`$ vanishes, and $`\langle R\rangle =\langle Q\rangle =1/3`$ as expected. The bottom edge of the plot corresponds to $`N\rightarrow \infty `$ and also gives $`\mathrm{\Delta }=0`$ for any finite value of $`k`$. The shift $`\mathrm{\Delta }`$ is largest for a single domain of D$`\chi `$C without any noise due to incoherently emitted pions, i.e., when $`\chi =N=1`$ (the top right corner of the contour plots), giving $`\mathrm{\Delta }=1/3-1/(2k+3)`$ and reproducing Eqs. (16) and (17). For fixed values of $`(\chi ,N)`$, $`\mathrm{\Delta }`$ increases with $`k`$, accounting for the spreading of the region of parameter space with $`\langle R\rangle >0.4`$ as $`k`$ increases from 1 to 5 in Fig. 2.
One expects that when the number of D$`\chi `$C domains is large ($`N\gg 1`$) or when most of the pions are incoherently emitted ($`\chi \ll 1`$), it will be difficult to observe clear signals of D$`\chi `$C formation. However, in such situations $`\chi /N`$ is small and $`\mathrm{\Delta }`$ is dominated by the $`j=1`$ term (the $`j=0`$ term always identically vanishes) and
$$\mathrm{\Delta }=\frac{2\chi ^2k}{15N}+𝒪(\frac{\chi ^3}{N^2}).$$
(25)
Thus a large $`k`$ may make up for a small coherent fraction $`\chi `$, or a large number of domains $`N`$, and enhance $`\mathrm{\Delta }`$, which describes the shift of $`\langle R\rangle `$ and $`\langle Q\rangle `$ from $`1/3`$, to an experimentally measurable magnitude. From the form of Eq. (25), one expects this shift to be substantial whenever $`k\sim N/\chi ^2`$. However, even for a value of $`k`$ as small as $`N/4\chi ^2`$, $`\mathrm{\Delta }=1/30+𝒪(\chi ^3/N^2)`$, which translates to $`\langle R\rangle =0.4`$ and $`\langle Q\rangle =0.3`$ — a substantial deviation from the incoherent case. This suggests one should make successive cuts for events where the $`k`$ pions with lowest $`p_T`$ are all neutral, and study $`\langle R\rangle `$ after each cut. An increase of $`\langle R\rangle `$ with $`k`$ would suggest that D$`\chi `$C domains are formed.
Equation (25) appears to suggest that one can increase $`\mathrm{\Delta }`$ to an arbitrarily large magnitude by choosing a sufficiently large value of $`k`$. Of course this is not true. Equation (25) is obtained as the leading term in a $`\chi /N`$ expansion, but when $`k\rightarrow \infty `$, this expansion breaks down as terms of higher order in $`\chi /N`$ are enhanced by factors of $`\binom{k}{j}`$. We can easily see that
$$\mathrm{\Delta }\rightarrow \frac{1}{3}\chi ,\qquad \langle R\rangle \rightarrow \frac{1}{3}+\frac{2}{3}\chi ,\qquad \langle Q\rangle \rightarrow \frac{1}{3}(1-\chi ),\qquad k\rightarrow \infty \text{ with }\chi \text{ and }N\text{ fixed.}$$
(26)
In other words, our signal enhancement scheme is fundamentally limited by the amount of noise due to incoherently emitted pions. When $`\chi `$ is small, most of the pions are incoherently emitted, and for them, $`\langle R\rangle `$ is always around $`1/3`$ regardless of what cuts one makes. On the other hand, the large $`k`$ limit of $`\langle R\rangle `$ does not depend on $`N`$, the number of D$`\chi `$C domains. Recall that we have several distinctive signatures, like the $`R`$ distribution in Eq. (1) and the conditional expectation values for $`R`$ described in the previous section, for a single domain of D$`\chi `$C, where all the pions in the D$`\chi `$C are isospin aligned. With multi-domain formation, where the pions in different domains may point in different directions in isospace, the effect of isospin alignment is greatly washed out. However, given that the “first $`k`$ pions” are all neutral with $`k\gg N`$, it is probabilistically extremely likely that each of the $`N`$ domains is the origin of some of these “first $`k`$ pions”. As a result, each of these $`N`$ domains is well-aligned along the $`\pi _0`$ direction, and hence also well-aligned with the others. As $`k\rightarrow \infty `$, the $`N`$ domains look more and more like a single big domain in the $`\pi _0`$ direction, which is the case where the signature is the most dramatic. In short, with large $`k`$, our cuts are picking out the events where the signals are the strongest, and hence resulting in a large signal-to-noise ratio.
Above we have assumed that the sizes of all $`N`$ domains are identical for illustrative purposes. A more realistic treatment would have $`N`$ domains, each with different sizes $`p_i`$, $`1\le i\le N`$, such that $`\sum _ip_i=\chi `$. (The size of a domain is defined to be the fraction of pions which originate from this particular domain.) Then Eqs. (20) and (21) are generalized to
$$\mathrm{\Delta }=\frac{1}{3}\chi -\sum _{i=1}^Np_i\sum _{j=0}^k\binom{k}{j}p_i^j(1-p_i)^{k-j}\frac{1}{2j+3}.$$
(27)
In the weak signal limit, i.e., when all the $`p_i`$’s are small, Eq. (25) gets modified to
$$\mathrm{\Delta }=\frac{2k}{15}\sum _{i=1}^Np_i^2+𝒪(\frac{\chi ^3}{N^2})=\frac{2k}{15}\overline{p}+𝒪(\frac{\chi ^3}{N^2}),$$
(28)
where $`\overline{p}=\sum _ip_i^2`$ has the following nice interpretation: $`\overline{p}`$ is the average over all pions (both coherently and incoherently emitted) of the sizes of the originating domains, which is $`p_i`$ for a pion from domain $`i`$ and zero for an incoherently emitted pion. For $`N`$ domains of equal sizes, $`\overline{p}=\chi ^2/N`$ and Eq. (25) is recovered. Again we see that $`\mathrm{\Delta }`$ grows linearly with $`k`$ in the weak signal limit. As $`k\rightarrow \infty `$ the shift $`\mathrm{\Delta }`$ is again limited by the bound (26), which applies also for the cases of unequal domain sizes.
## IV summary
To recapitulate, we suggest the following experimental procedures (a toy-data sketch follows the list below):
* Count the number of neutral and charged pions event by event from heavy ion collision experiments and measure their individual transverse momenta and rapidities.
* Apply a low $`p_T`$ cut to suppress the noise due to uncorrelated pion emission.
* Bin the events in different rapidity windows.
* In each rapidity window, calculate the expectation value $`\langle R\rangle `$.
* Make a cut to retain only events where the pion with the lowest $`p_T`$ is neutral.
* Calculate, in each rapidity window, the expectation value $`\langle R\rangle `$ for all remaining pions in all events which survive the cut.
* Make another cut on the surviving events to retain only those where the pion with the second lowest $`p_T`$ is also neutral.
* Again, calculate in each rapidity window the expectation value $`\langle R\rangle `$ for all remaining pions in all events which survive the cuts.
* Repeat the above prescription of making successive cuts to retain only events in which the pion with the next lowest $`p_T`$ is also neutral, and calculate $`\langle R\rangle `$ for each rapidity window after each cut. If we find $`\langle R\rangle `$ deviates from $`1/3`$ then we are seeing signatures from D$`\chi `$Cs.
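As a proof of principle, the whole chain of cuts can be exercised on toy events. In the sketch below the multiplicity, the $`p_T`$ scales, $`\chi =0.5`$ and $`N=3`$ are purely illustrative choices (and rapidity binning is ignored); the point is only that $`\langle R\rangle `$ climbs away from $`1/3`$ as $`k`$ increases:

```python
import numpy as np
rng = np.random.default_rng(2)

def make_event(n_pi=60, chi=0.5, N=3):
    """One toy event: neutrality flags of its pions, sorted by increasing pT."""
    coh = rng.random(n_pi) < chi                   # DChiC pions (coherent fraction chi)
    dom = rng.integers(0, N, n_pi)                 # domain label of each coherent pion
    R_dom = rng.uniform(-1.0, 1.0, N) ** 2         # cos^2(theta): one isospin axis/domain
    p0 = np.where(coh, R_dom[dom], 1.0 / 3.0)      # P(pi0) per pion
    neutral = rng.random(n_pi) < p0
    pT = rng.exponential(np.where(coh, 0.1, 0.4))  # DChiC pions are soft (toy scales)
    return neutral[np.argsort(pT)]

events = [make_event() for _ in range(20000)]
for k in range(4):
    surviving = [ev[k:] for ev in events if ev[:k].all()]  # "first k pions" all neutral
    rest = np.concatenate(surviving)
    print(f"k={k}: {len(surviving)} events survive, <R> = {rest.mean():.3f}")
```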
Note that this prescription requires reconstruction of the $`p_T`$’s of individual pions, both charged and neutral. We have also presumed that the coherent fraction $`\chi `$ and the number of domains formed $`N`$ are roughly the same for each event. (More specifically, the probability distributions for $`\chi `$ and $`N`$ are narrowly peaked.)
By applying these successive cuts, we are retaining the events with D$`\chi `$C formation in which most of the pions are well-aligned along the $`\pi _0`$ direction. What is being cut are the events with D$`\chi `$C formation in which most of the pions are well-aligned along the $`\pi _x`$ or $`\pi _y`$ directions, and the events where there are incoherent pions with very low $`p_T`$, which are the main source of noise to our signal. As a result, these successive cuts substantially improve the signal-to-noise ratio, making it easier to observe D$`\chi `$C formation. On the other hand, just like any other cuts on data to suppress the noise, we are giving up on statistics. Moreover, for large $`k`$ we are cutting on rare events, so the loss in statistics can be severe. For the cases where the signal is weak (small coherent fraction $`\chi \ll 1`$ or large number of domains $`N\gg 1`$), on each cut we are losing about two-thirds of the events.
In conclusion, we have devised new cuts to enhance the signal in searches for D$`\chi `$C. These cuts retain only events where the $`k`$ pions with lowest $`p_T`$ are all neutral. We have shown that, after these cuts, the fraction of neutral pions within the remaining sample is substantially larger if D$`\chi `$Cs are formed in the heavy ion collision.
Support of this research by the U.S. Department of Energy under grant DE-FG02-93ER-40762 is gratefully acknowledged.
## 1 Introduction
It is a great honor to have been awarded the Dannie Heineman Prize in Mathematical Physics and it is a great pleasure to be in the company today of my fellow prize winners, C.N. Yang, this year’s winner of the Lars Onsager prize, A.B. Zamolodchikov, the inventor of conformal field theory, and especially my collaborator and thesis adviser T.T. Wu.
## 2 Why integrable models is the invisible field of physics
The citation of this year’s Heineman Prize is for “groundbreaking and penetrating work on classical statistical mechanics, integrable models and conformal field theory” and in this talk I plan to discuss the work in statistical mechanics and integrable models for which the award was given. But since the Heineman prize is explicitly for publications in mathematical physics I want to begin by examining what may be meant by the phrase “mathematical physics.”
The first important aspect of the term “mathematical physics” is that it means something very different from most other kinds of physics. This is seen very vividly by looking at the list of the divisions of the APS. Here you will find 1) astrophysics, 2) atomic physics, 3) condensed matter physics, 4) nuclear physics, and 5) particles and fields but you will not find any division for mathematical physics. This lack of existence of mathematical physics as a field is also reflected in the index of Physical Review Letters where mathematical physics is nowhere to be found. Therefore we see that the Heineman prize in mathematical physics is an extremely curious award because it honors achievements in a field of physics which the APS does not recognize as a field of physics.
This lack of existence of mathematical physics as a division in the APS reflects, in my opinion, a deep uneasiness about the relation of physics to mathematics. An uneasiness I have heard echoed hundreds of times in my career in the phrase “It is very nice work but it is not physics. It is mathematics”. A phrase which is usually used before the phrase “therefore we cannot hire your candidate in the physics department.”
So the first lesson to be learned is that mathematical physics is an invisible field. If you want to survive in a physics department you must call yourself something else.
So what can we call the winners of the Heineman prize in mathematical physics if we cannot call them mathematical physicists? The first winner was Murray Gell–Mann in 1959. He surely belongs in particles and fields; the 1960 winner Aage Bohr is surely a nuclear physicist; the 1976 winner Stephen Hawking is an astrophysicist. And in fact almost all winners of the prize in mathematical physics can be classed in one of the divisions of the APS without much confusion.
But there are at least two past winners who do not neatly fit into the established categories, Elliott Lieb, the 1978 recipient, and Rodney Baxter, the 1987 recipient, both of whom have made outstanding contributions to the study of integrable models in classical statistical mechanics—the same exact area for which Wu and I are being honored here today. This field of integrable models in statistical physics is the one field of mathematical physics which does not fit into some one of the existing divisions of the APS.
It is for this reason that I have described integrable models as a hidden field. Indeed it is so hidden that it is sometimes not even considered to be statistical mechanics as defined by the IUPAP.
The obscurity of the field of integrable statistical mechanics models explains why there are fewer than a dozen physics departments in the United States where it is done. This makes the job prospects of a physicist working in this field very slim. But on the other hand it means that we in the field get to keep all the good problems to ourselves. So it is with mixed feelings that I will now proceed to discuss some of the progress made in the last 33 years and some of the directions for future research.
## 3 The Ising Model
The award of the 1999 Heineman prize, even though the citation says that it is for work on integrable models, is in fact for work done from 1966–1981 on a very specific system: the two dimensional Ising model. This work includes boundary critical phenomena, randomly layered systems, the Painlevé representation of the two point function, and the first explicit results on the Ising model in a magnetic field. All of these pieces of work had results which were unexpected at the time and all have led to significant extensions of our knowledge of both statistical mechanics and mathematics.
### 3.1 The boundary Ising model and wetting
The Ising model is a two dimensional collection of classical “spins” $`\sigma _{j,k}`$ which take on the two values $`+1`$ and $`1`$ and are located at the $`j`$ row and $`k`$ column of a square lattice. For a translationally invariant system the interaction energy of this system is
$$\mathcal{E}=-\sum _{j,k}(E^v\sigma _{j,k}\sigma _{j+1,k}+E^h\sigma _{j,k}\sigma _{j,k+1}+H\sigma _{j,k}).$$
(1)
This is certainly one of the two most famous and important models in statistical mechanics (the Heisenberg-Ising chain being the other) and has been studied by some of the most distinguished physicists of this century including Onsager who computed the free energy for $`H=0`$ in 1944 and Yang who computed the spontaneous magnetization in 1952.
In 1966 I began my long involvement with this model when, for part of my thesis, Prof. Wu suggested that I compute for an Ising model on a half plane the same quantities which Onsager and Yang computed for the bulk. At the time we both thought that, because the presence of the boundary breaks translational invariance, the boundary computations would be at least as difficult as the bulk computations. It was therefore quite surprising when it turned out that the computations were drastically simpler . In the first place, the model could be solved in the presence of a field on the boundary, which meant that the computation of the magnetization came along for free once we could do the free energy; but more importantly, the correlation functions, which in the bulk were given by large determinants whose size increased with the separation of the spins, were here given by nothing worse than the product of two one–dimensional integrals for all separations. The key to this great simplification is the fact that the extra complication of the boundary magnetic field actually makes the problem simpler to solve (a realization I had in a dream at 3:00 AM after a New Years eve party).
This model is the first case where boundary critical exponents were explicitly computed. Indeed it remained almost the only solved problem of boundary critical phenomena until it was generalized to integrable massive boundary field theory in 1993 .
This boundary field had the added virtue that we could analytically continue the boundary magnetization into the metastable region and explicitly compute a boundary hysteresis effect . This leads to the lovely effect that, near the value of the boundary magnetic field where the hysteresis curve ends, the spins a very long distance from the boundary turn over from pointing in the direction of the bulk magnetization to pointing in the direction of the metastable surface spin. At the value where the metastability ends this surface effect penetrates all the way to infinity and “flips” the spin in the bulk. In later years this phenomenon has been interpreted as a “wetting transition” and the Ising intuition has been extended to many models where exact solutions do not exist.
### 3.2 Random layered Ising model and Griffiths-McCoy singularities
Our next major project was to generalize the translationally invariant interaction (1) to a non translationally invariant problem where not just a half plane boundary was present but to a case where
1. The interaction energies $`E^v`$ were allowed to vary from row to row but translational invariance in the horizontal direction was preserved
2. The interaction energies $`E^v(j)`$ between the rows were chosen as independent random variables with a probability distribution $`P(E^v).`$
This was the first time that such a random impurity problem had ever been studied for a system with a phase transition and the entire computation was a new invention . In particular we made the first use in physics of Furstenberg’s theory of strong limit theorems for matrices. We felt that the computation was a startling success because we found that for any probability distribution, no matter how small the variance, there was a temperature scale, depending on the variance, where there was new physics that is not present in the translationally invariant model. For example the divergence in the specific heat at $`T_c`$ decreases from the logarithm of Onsager to an infinitely differentiable essential singularity. Moreover the average over the distribution $`P(E^v)`$ of the correlation functions of the boundary could be computed and it was seen that there was an entire temperature range surrounding $`T_c`$ where the boundary susceptibility was infinite. Thus the entire picture of critical exponents which had been invented several years before to describe critical phenomena in pure systems was not sufficient to describe these random systems . We were very excited.
But then something happened which I found very strange. Instead of attempting to further explore the physics we had found, arguments were given as to why our effect could not possibly be relevant to real systems. This has led to arguments which continue to this day.
We were, and are, of the opinion that the effects seen in the layered Ising model are caused by the fact that the zeroes of the partition function do not pinch the real temperature axis at just one point but rather pinch in an entire line segment. A closely related effect was simultaneously discovered by Griffiths and it was, and is, our contention that in this line segment there is a new phase of the system. But this line segment is not revealed by approximate computations and for decades it was claimed that our new phase was limited to layered systems, a claim which some continue to make to this day.
However, fortunately for us, there is an alternative interpretation of the Ising model in terms of a one–dimensional quantum spin chain in which our layered classical two–dimensional system becomes a randomly impure quantum chain . In this interpretation there is no way to argue away the existence of our new phase and finally in 1995, a quarter century after we first found the effect, D. Fisher , in an astounding paper, was able to craft a theory of the physics of rare events based on an exact renormalization group computation which not only reproduced the results of our layered model on the boundary but extended the computations to bulk quantities which in 1969 we had been unable to compute. With this computation I think the existence of what are now called Griffiths-McCoy singularities is accepted, but it has taken a quarter century for this to happen.
### 3.3 Painlevé Functions and difference equations
But perhaps the most dramatic discoveries were published from 1973 to 1981 on the spin correlation functions of the Ising model , and most particularly the result that in the scaling limit where $`T\rightarrow T_c`$ and the separation of the spins $`N\rightarrow \infty `$ such that $`|T-T_c|N=r`$ is fixed, the correlation function $`<\sigma _{0,0}\sigma _{0,N}>`$ divided by $`|T-T_c|^{1/4}`$ is
$$G_\pm (2r)=(1\mp \eta )\,\eta ^{-1/2}\mathrm{exp}\left[\int _r^{\infty }dx\,\frac{1}{4}x\,\eta ^{-2}\left((1-\eta ^2)^2-\left(\frac{d\eta }{dx}\right)^2\right)\right]$$
(2)
where $`\eta (x)`$ satisfies the third equation of Painlevé
$$\frac{d^2\eta }{dx^2}=\frac{1}{\eta }\left(\frac{d\eta }{dx}\right)^2-\frac{1}{x}\frac{d\eta }{dx}+\frac{1}{x}(\alpha \eta ^2+\beta )+\gamma \eta ^3+\frac{\delta }{\eta }$$
(3)
with $`\alpha =\beta =0`$ and $`\gamma =-\delta =1`$ and
$$\eta (x)\sim 1-\frac{2}{\pi }K_0(2x)\qquad \text{as }x\rightarrow \infty $$
(4)
where $`K_0(2x)`$ is the modified Bessel function of order zero. Furthermore, on the lattice all the correlation functions satisfy quadratic difference equations .
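Given the boundary condition (4), $`\eta (x)`$ is straightforward to generate numerically, for instance by integrating Eq. (3) backwards from a point where the Bessel-function asymptotics is already accurate. A minimal sketch (the starting point $`x_0=8`$ and the tolerances are our own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import k0, k1

def piii(x, y):
    """Painleve III of Eq. (3) with alpha = beta = 0, gamma = -delta = 1."""
    eta, etap = y
    return [etap, etap**2 / eta - etap / x + eta**3 - 1.0 / eta]

x0 = 8.0  # start where eta ~ 1 - (2/pi) K0(2x) is accurate
y0 = [1 - (2 / np.pi) * k0(2 * x0), (4 / np.pi) * k1(2 * x0)]  # eta, d(eta)/dx
sol = solve_ivp(piii, [x0, 0.3], y0, rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.sol(1.0))  # eta(1) and eta'(1), the ingredients of Eq. (2)
```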
This discovery of Painlevé equations in the Ising model was the beginning of a host of developments in mathematical physics which continues in an ever expanding volume to this day. It led Sato, Miwa, and Jimbo to their celebrated series of works on isomonodromic deformation and to the solution of the distribution of eigenvalues of the GUE random matrix problem in terms of a Painlevé V function. This has subsequently been extended by many people, including one of our original collaborators, Craig Tracy , to so many branches of physics and mathematics, including random matrix theory, matrix models in quantum gravity and random permutations, that entire semester long workshops are now devoted to the subject. Indeed a recent book on special functions characterized Painlevé functions as “the special functions of the 21st century.” Rarely has the solution to one small problem in physics had so many ramifications in so many different fields.
### 3.4 Ising model in a field
The final piece of work on the Ising model to be mentioned is what happens to the two point function when a small magnetic field is applied to the system for $`T<T_c`$. At $`H=0`$ for $`T<T_c`$ the two point function has the important property that it couples only to states with an even number of particles and thus, in particular, the leading singularity in the Fourier transform is not a single particle pole but rather a two particle cut. In 1978 , as an application of our explicit formulas for the $`n`$ spin correlation functions, we did a (singular) perturbation computation to see what happens when, in terms of the scaled variable $`h=H/|T-T_c|^{15/8}`$, a small value of $`h`$ is applied to the system. We found that the two particle cut breaks up into an infinite number of poles which are given by the zeroes of Airy functions. These poles are exactly at the positions of the bound states of a linear potential and are immediately interpretable as a weak confinement of the particles which are free at $`H=0`$. This is perhaps the earliest explicit computation where confinement is seen. From this result it was natural to conjecture that as we take the Ising model from $`T>T_c,H=0`$ to $`T<T_c,H=0`$, as $`h`$ increases from $`0`$ to $`\infty `$ $`(T=T_c,H>0)`$, bound states emerge from the two particle cut, and as we further proceed from $`T=T_c,H>0`$ down to $`T<T_c,H=0`$, bound states continue to emerge until at $`H=0`$ an infinite number of bound states have emerged and formed a two particle cut. What this picture does not indicate is the remarkable result found 10 years later by A. Zamolodchikov that at $`T=T_c,H\ne 0`$ the problem can again be studied exactly. This totally unexpected result will be discussed by Zamolodchikov in the next presentation.
## 4 From Ising to integrable
Even at the time when this Ising model work was initiated there were other models known, such as the Heisenberg chain , the delta function gases , the 6–vertex model , the Hubbard model , and the 8–vertex model , for which the free energy (or ground state energy) and the excitation spectrum could be computed exactly. Since then it has been realized that a fundamental equation first seen by Onsager in the Ising model and used in a profound way by Yang in the delta function gases and by Baxter in the 8 vertex model could be used to extend these computations to find many large classes of models for which free energies could be computed. These models, which come from the Yang-Baxter (or star-triangle) equation, are what are now called the integrable models.
The Ising model itself is the simplest case of such an integrable model. It thus seems to be a very natural conjecture which was made by Wu, myself and our collaborators the instant we made the discovery of the Painlevé representation of the Ising correlation function that there must be a similar representation for the correlation functions of all integrable models. To be more precise I mean the following
Conjecture
The correlation functions of integrable statistical mechanical models are characterized as the solutions of classically integrable equations (be they differential, integral or difference).
One major step in the advancement of this program was made by our next speaker, Alexander Zamolodchikov, who showed, with the invention of conformal field theory , that this conjecture is realized for models at the critical point. One of the major unsolved problems of integrable models today is to extend the linear equations which characterize correlation functions in conformal field theory to nonlinear equations for massive models. This will realize the goal of generalizing to all integrable models what we have learned for the correlation functions of the Ising model. This is an immense undertaking in which many people have made and are making major contributions. It is surely not possible to come close to surveying this work in the few minutes left to me. I will therefore confine myself to a few remarks about things I have been personally involved with since completing the work with Wu in 1981 on the Ising model.
### 4.1 The chiral Potts model
In 1987 my coauthors H. Au-Yang, J.H.H. Perk, C.H. Sah, S. Tang, and M.L. Yan and I discovered the first example of an integrable model where, in technical language, the spectral variable lies on a curve of genus higher than one. This model has $`N\ge 3`$ states per site and is known as the integrable chiral Potts model. It is a particular case of a phenomenological model introduced for the case $`N=3`$ in 1983 by Howes, Kadanoff and den Nijs in their famous study of level crossing transitions and is a generalization of the $`N`$ state model introduced by Von Gehlen and Rittenberg in 1985 which generalizes Onsager’s original solution of the Ising model .
The Boltzmann weights were subsequently shown by Baxter, Perk and Au-Yang to have the following elegant form for $`0\le n\le N-1`$
$$\frac{W_{p,q}^h(n)}{W_{p,q}^h(0)}=\prod _{j=1}^n\left(\frac{d_pb_q-a_pc_q\omega ^j}{b_pd_q-c_pa_q\omega ^j}\right),\qquad \frac{W_{p,q}^v(n)}{W_{p,q}^v(0)}=\prod _{j=1}^n\left(\frac{\omega a_pd_q-d_pa_q\omega ^j}{c_pb_q-b_pc_q\omega ^j}\right)$$
(5)
where $`\omega =e^{2\pi i/N}.`$ The variables $`a_p,b_p,c_p,d_p`$ and $`a_q,b_q,c_q,d_q`$ satisfy the equations
$$a^N+kb^N=k^{\prime }d^N,ka^N+b^N=k^{\prime }c^N$$
(6)
with $`k^2+k^{\prime 2}=1`$ and this specifies a curve of genus $`N^3-2N^2+1.`$ When $`N=2`$ the Boltzmann weights reduce to those of the Ising model (1) with $`H=0`$ and the curve (6) reduces to the elliptic curve of genus 1. However, when $`N\geq 3`$ the curve has genus higher than one. This is the first time that such higher genus curves have arisen in the Boltzmann weights of integrable models.
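A trivial arithmetic check of the genus formula (an illustrative Python one-liner, not from the talk):

```python
# g = N**3 - 2*N**2 + 1: N = 2 gives the elliptic (genus-1) Ising curve.
for N in range(2, 6):
    print(N, N**3 - 2 * N**2 + 1)   # -> 1, 10, 33, 76
```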
This model is out of the class of all previously known models and raises a host of unsolved questions which are related to some of the most intractable problems of algebraic geometry which have been with us for 150 years. As an example of these new occurrences of ancient problems we can consider the spectrum of the transfer matrix
$$T_{\{l,l^{\prime }\}}=\prod _{j=1}^{𝒩}W_{p,q}^v(l_j-l_j^{\prime })W_{p,q}^h(l_j-l_{j+1}^{\prime }).$$
(7)
This transfer matrix satisfies the commutation relation
$$[T(p,q),T(p,q^{\prime })]=0$$
(8)
and also satisfies functional equations on the Riemann surface at points connected by the automorphism $`R(a_q,b_q,c_q,d_q)=(b_q,\omega a_q,d_q,c_q).`$ For the Ising case $`N=2`$ this functional equation reduces to an equation which can be solved using elliptic theta functions. Most unhappily, however, for the higher genus case the analogous solution requires machinery from algebraic geometry which does not exist. For the problem of the free energy Baxter has devised an ingenious method of solution which bypasses algebraic geometry completely, but even here some problems remain in extending the method to the complete eigenvalue spectrum .
The problem is even more acute for the order parameter of the model. For the $`N`$ state models there are several order parameters parameterized by an integer index $`n`$ with $`1\leq n\leq N-1.`$ For these order parameters $`M_n`$ we conjectured 10 years ago from perturbation theory computations that
$$M_n=(1-k^2)^{n(N-n)/2N^2}.$$
(9)
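As a quick sanity check of the exponent in (9), the following sketch (Python; the function name is just illustrative) evaluates $`n(N-n)/2N^2`$ for small $`N`$:

```python
from fractions import Fraction

def beta(N, n):
    # exponent n(N - n)/(2 N^2) appearing in the conjectured M_n
    return Fraction(n * (N - n), 2 * N * N)

print(beta(2, 1))                    # 1/8
print([beta(3, n) for n in (1, 2)])  # [Fraction(1, 9), Fraction(1, 9)]
```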
When $`N=2`$ this is exactly the result announced by Onsager in 1948 and proven by Yang for the Ising model in 1952. For the Ising model it took only three years to go from conjecture to proof. But for the chiral Potts model a decade has passed and, even though Baxter has produced several elegant formulations of the problem which all lead to the correct answer for the Ising case, none of them contains enough information to solve the problem for $`N\geq 3.`$ In one approach the problem is reduced to the evaluation of a path ordered exponential of noncommuting variables on a Riemann surface. This sounds exactly like problems encountered in non Abelian gauge theory but, unfortunately, there is nothing in the field theory literature that helps. In another approach a major step in the solution involves the explicit reconstruction of a meromorphic function from a knowledge of its zeros and poles. This is a classic problem in algebraic geometry for which in fact no explicit answer is known either. Indeed the unsolved problems arising from the chiral Potts model are so resistant to all known mathematics that I have reduced my frustration to the following epigram:
The nineteenth century saw many brilliant creations of the human mind. Among them are algebraic geometry and Marxism. In the late twentieth century Marxism has been shown to be incapable of solving any practical problem but we still do not know about algebraic geometry.
It must be stressed again that the chiral Potts model was not invented because it was integrable but was found to be integrable after it was introduced to explain experimental data. In a very profound way physics is here far ahead of mathematics.
### 4.2 Exclusion statistics and Rogers-Ramanujan identities
One particularly important property of integrable systems is seen in the spectrum of excitations above the ground state. In all known cases these spectra are of the quasiparticle form in which the energies of multiparticle states are additively composed of single particle energies $`e_\alpha (P)`$
$$E_{ex}-E_0=\sum _{\alpha =1}^{n}\sum _{j=1}^{m_\alpha }e_\alpha (P_j^\alpha )$$
(10)
with the total momentum
$$P=\sum _{\alpha =1}^{n}\sum _{j=1}^{m_\alpha }P_j^\alpha (\mathrm{mod}2\pi ).$$
(11)
Here $`n`$ is the number of types of quasi-particles and there are $`m_\alpha `$ quasiparticles of type $`\alpha .`$ The momenta in the allowed states are quantized in units of $`2\pi /M`$ and are chosen from the sets
$$P_j^\alpha \in \{P_{\mathrm{min}}^\alpha (𝐦),P_{\mathrm{min}}^\alpha (𝐦)+\frac{2\pi }{M},P_{\mathrm{min}}^\alpha (𝐦)+\frac{4\pi }{M},\mathrm{},P_{\mathrm{max}}^\alpha (𝐦)\}$$
(12)
with the Fermi exclusion rule
$$P_j^\alpha \neq P_k^\alpha \mathrm{for}j\neq k\mathrm{and}\mathrm{all}\alpha $$
(13)
and
$$P_{\mathrm{min}}^\alpha (𝐦)=\frac{\pi }{M}[(𝐦(𝐁-1))_\alpha -A_\alpha +1]\mathrm{and}P_{\mathrm{max}}^\alpha =P_{\mathrm{min}}^\alpha +\frac{2\pi }{M}\left(\frac{𝐮}{2}-𝐀\right)_\alpha $$
(14)
where if some $`u_\alpha =\mathrm{\infty }`$ the corresponding $`P_{\mathrm{max}}^\alpha =\mathrm{\infty }.`$
If some $`e_\alpha (P)`$ vanishes at some momentum (say 0) the system is massless and for $`P0`$ a typical behavior is $`e_\alpha =v|P|`$ where $`v`$ is variously called the speed of light or sound or the spin wave velocity.
The important feature of the momentum selection rules (12) is that, in addition to the fermionic exclusion rule (13), a certain number of momenta at the edges of the momentum zones are excluded, in proportion to the number of quasiparticles in the state. For the Ising model at zero field there is only one quasiparticle and $`P_{\mathrm{min}}=0`$, so the quasiparticle is exactly the same as a free fermion. However, for all other cases $`P_{\mathrm{min}}`$ is not zero and exclusion does indeed take place. This is a very explicit characterization of the generalization which general integrable models make over the Ising model.
The exclusion rules (12) lead to what have been called fractional (or exclusion) statistics by Haldane . On the other hand they make a remarkable and beautiful connection with the mathematical theory of Rogers-Ramanujan identities and conformal field theory. We have found that these exclusion rules allow a reinterpretation of all conformal field theories (which are usually discussed in terms of a bosonic Fock space using a Feigin and Fuchs construction ) in terms of a set of fermionic quasiparticles . What is most surprising is that there is not just one fermionic representation for each conformal field theory but there are at least as many distinct fermionic representations as there are integrable perturbations. The search for the complete set of fermionic representations is ongoing and I will only mention here that we have extensive results for the integrable perturbations of the minimal models $`M(p,p^{})`$ for the $`\varphi _{1,3}`$ perturbation and the $`\varphi _{2,1}`$ and $`\varphi _{1,5}`$ perturbations .
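To make the connection with Rogers-Ramanujan counting concrete, here is a small self-contained Python sketch that checks the first Rogers-Ramanujan identity, $`\sum _{n\geq 0}q^{n^2}/(q;q)_n=\prod _{k\geq 0}[(1-q^{5k+1})(1-q^{5k+4})]^1`$, as a formal power series to finite order; the truncation order and helper names are arbitrary choices of this illustration:

```python
# Verify the first Rogers-Ramanujan identity as a truncated q-series.
ORDER = 30

def mul(a, b):                     # truncated power-series product
    c = [0] * ORDER
    for i, ai in enumerate(a):
        if ai:
            for j in range(ORDER - i):
                c[i + j] += ai * b[j]
    return c

def inv_one_minus_q_pow(k):        # series of 1/(1 - q^k)
    s = [0] * ORDER
    for i in range(0, ORDER, k):
        s[i] = 1
    return s

# sum side: sum over n of q^(n^2) / ((1-q)(1-q^2)...(1-q^n))
lhs = [0] * ORDER
for n in range(ORDER):
    if n * n >= ORDER:
        break
    term = [0] * ORDER
    term[n * n] = 1
    for k in range(1, n + 1):
        term = mul(term, inv_one_minus_q_pow(k))
    lhs = [x + y for x, y in zip(lhs, term)]

# product side: prod over k of 1 / ((1-q^(5k+1))(1-q^(5k+4)))
rhs = [0] * ORDER
rhs[0] = 1
k = 0
while 5 * k + 1 < ORDER:
    rhs = mul(rhs, inv_one_minus_q_pow(5 * k + 1))
    if 5 * k + 4 < ORDER:
        rhs = mul(rhs, inv_one_minus_q_pow(5 * k + 4))
    k += 1

print(lhs == rhs)                  # True
```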
## 5 Beyond Integrability
There is one final problem of the hidden field of integrable models which I want to discuss. Namely the question of what is the relation of an integrable model to a generic physical system which does not satisfy a Yang-Baxter equation. For much of my career I have been told by many that these models are just mathematical curiosities which, because they are integrable, can, by that very fact, have nothing to do with real physics. But on the other hand the fact remains that all of the phenomenological insight we have into the phase transitions of real physical systems, as embodied in the notions of critical exponents, scaling theory and universality which have served us well for 35 years, either comes from integrable models or is confirmed by the solutions of integrable models. So if integrable models leave something out we have a very poor idea of what it is.
Therefore it is of great interest that several months ago Bernie Nickel sent around a preprint in which he made the most serious advance in the study of the Ising model susceptibility since our 1976 paper . In that paper, in addition to the Painlevé representation of the two point function, we derived an infinite series for the Ising model susceptibility where the $`n^{th}`$ term in the series involves an $`n^{th}`$ order integral.
When the integrals in this expansion are scaled to the critical point each term contributes to the leading singularity of the susceptibility $`|T-T_c|^{7/4}`$. However, Nickel goes far beyond this scaling and, for the isotropic case where $`E^v=E^h=E`$, in terms of the variable $`v=\mathrm{sinh}2E/kT`$ he shows that successive terms in the series contribute singularities that eventually become dense on the unit circle $`|v|=1`$ in the complex plane. From this he concludes that, unless unexpected cancellations happen, there will be natural boundaries in the susceptibility on $`|v|=1`$. This would indeed be a new effect which could make integrable models different from generic models. Such natural boundaries have been suggested by several authors in the past, including Guttmann , and Orrick and myself , on the basis of perturbation studies of nonintegrable models which show ever more complicated singularity structures as the order of perturbation increases; a complexity which magically disappears when an integrability condition is imposed. This connection between integrability and analyticity was first emphasized by Baxter long ago in 1980, when he pointed out that the Ising model in a magnetic field satisfies a functional equation very analogous to the zero field Ising model but lacks the analyticity properties needed for a solution. Nickel’s conjecture, if proven correct, will open up a new view on what it means to be integrable.
## 6 Conclusion
I hope that I have conveyed to you some of the excitement and challenges of the field of integrable models in statistical mechanics. The problems are physically important, experimentally accessible, and mathematically challenging. The field has been making constant progress since the first work of Bethe in 1931 and Onsager in 1944. So it might be thought that, even though the problems are hard, it would command the attention of some of the most powerful researchers in a large number of institutions. But as I indicated in the beginning of this talk this is in fact not the case.
Most physics departments are more or less divided into the same divisions as is the APS. Thus it is quite typical to find departments with a condensed matter group, a nuclear physics group, a high energy group, an astrophysics group and an atomic and molecular group. But as I mentioned at the beginning, none of the work I have discussed in this talk fits naturally into these categories and thus if departments hire people in the mainstream of the existing divisions of the APS no one doing research in integrable models in statistical mechanics will ever be hired.
So while I am deeply honored and grateful for the award of the 1999 Heineman prize for mathematical physics there is still another honor I am looking for. It is to receive a letter from the chairman of a physics department which reads as follows:
Dear Prof. McCoy,
Thank you for the recommendation you recently made to us concerning the hiring of a new faculty member. We had not considered hiring anyone in the area of physics represented by your candidate, but after reading the resume and publications we decided that you were completely correct that the candidate is doing outstanding work which will bring an entirely new area of research to our department. We are very pleased to let you know that the university has made an offer of a faculty appointment to your candidate which has been accepted today. Thank you very much for your help and advice.
I have actually received one such letter in my life. If I am fortunate I hope to receive a few more before the end of my career. The 21st century is long and anything is still possible.
Acknowledgments
This work is supported in part by the National Science Foundation under grant DMR 97-03543.
# HST observations of the very young SMC “blob” N 88A

Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Based on observations obtained at the European Southern Observatory, La Silla, Chile.
## 1 Introduction
The Hubble Space Telescope (HST) offers a unique opportunity for studying very young massive star formation regions in external galaxies. The fact that the Small Magellanic Cloud (SMC) is the most metal-poor galaxy observable with very high angular resolution makes it an ideal laboratory for investigating star formation under conditions reminiscent of those in the very distant galaxies populating the early Universe.
Our search for the youngest massive stars in the Magellanic Clouds started almost two decades ago on the basis of ground-based observations. This led to the discovery of a distinct and very rare class of H ii regions in these galaxies, that we called high-excitation compact H ii “blobs” (HEBs). So far only five HEBs have been found in the LMC: N 159-5, N 160A1, N 160A2, N 83B-1, and N 11A (Heydari-Malayeri & Testor 1982, 1983, 1985, 1986, Heydari-Malayeri et al. 1990) and two in the SMC: N 88A and N 81 (Testor & Pakull 1985, Heydari-Malayeri et al. 1988a). These objects are expected to harbor newborn massive stars.
The first part of our HST project studying the H ii “blobs” was devoted to the SMC N 81 (Heydari-Malayeri et al. hey99 (1999), hereafter Paper I). The Wide Field Planetary Camera 2 (WFPC2) observations allowed us to resolve N 81 and discover a tight cluster of newborn massive stars embedded in this nebula of $``$ 10<sup>′′</sup> across. The WFPC2 images also uncovered a striking display of violent phenomena such as stellar winds, shocks, and ionization fronts, typical of turbulent starburst regions.
The SMC “blob” N 88A is part of the H ii region N 88 (Henize hen (1956)), or DEM 161 (Davies et al. dav (1976)), which lies in the Shapley Wing at $``$ 2.2° (2.4 kpc) from the main body of the SMC. Other H ii regions lying towards the Wing are from west to east N 81, N 84, N 89, and N 90. The HEB nature of N 88A was first recognized by Testor and Pakull (tes (1985)) who used CCD imaging at $``$ 2<sup>′′</sup> resolution through H$`\alpha `$, H$`\beta `$, and \[O iii\] filters and IDS spectroscopy (4<sup>′′</sup>$`\times `$ 4<sup>′′</sup> aperture) to study the central component N 88A. They found a high excitation object (\[O iii\]/H$`\beta `$ = 7.8) with an interstellar extinction $`A_V`$ = 1.7 mag. The chemical abundances in N 88 had previously been determined by Dufour & Harlow (dh (1977)) who, using a 10<sup>′′</sup>$`\times `$ 79<sup>′′</sup> slit, found a low-metal content typical of the chemical composition of the SMC. CCD photometry and spectroscopy of 10 stars lying around N 88A were carried out by Wilcots (wil (1994)) using the CTIO 90 cm telescope. However, the ground-based observations in general were unable to reveal the internal morphology and stellar content of N 88A.
The HST was used for imaging and FOS spectroscopy of N 88A (Kurt et al. kurt (1995)). These pre-COSTAR observations, in spite of the effort made in data analysis, could not clearly show the internal details of N 88A. Garnett et al. (gar (1995)) revisited the chemical abundances in N 88A on the basis of HST ultraviolet FOS (0<sup>′′</sup>.7 $`\times `$ 2<sup>′′</sup>.0 aperture) and ground-based spectra.
In this paper we present recent HST observations (GO 6535) of N 88A and its surroundings. In the following sections we elaborate on the extinction and emission properties of each component and suggest a plausible scenario for the star formation history of this region.
## 2 Observations and data reduction
The observations of N 88A described in this paper were obtained with WFPC2 on board the HST on August 31, 1997 using the wide- and narrow-band filters (F300W, F467M, F410M, F547M, F469N, F487N, F502N, F656N). The observational techniques, data reduction procedures, and photometry are similar to those explained in detail in Paper I. A composite image is presented in Fig. 1.
The ESO EFOSC camera at the 3.6 m telescope was also used on 4 June 1988 for imaging N 88 through a narrow H$`\alpha `$ filter (ESO#507, $`\lambda `$ 6565.5 Å, $`\mathrm{\Delta }`$$`\lambda `$ = 12 Å) with exposure times of 1 and 5 minutes. The detector was an RCA CCD chip (#11) with 0<sup>′′</sup>.36 pixels and the seeing was $``$ 1<sup>′′</sup>.5 (FWHM). This H$`\alpha `$ image is displayed in Fig. 2. Due to its relatively short exposure, this image displays only the brightest part of the H ii emission.
## 3 Results
### 3.1 Morphology
N 88 is a relatively large concentration of ionized gas with several components (Fig. 1a). From the central region emanate a number of fine-structure filaments running southwards over 40<sup>′′</sup> ($``$ 10 pc) which can be seen in the “true-color” composite image (Fig. 1a). The larger field of the H$`\alpha `$ image obtained with EFOSC (Fig. 2) shows a veil of thin filaments curling southwards over more than 20 pc and brightening at some points.
The main component, N 88A, is a compact, high excitation H ii region $``$ 3<sup>′′</sup>.5 ($``$ 1 pc) in diameter surrounded by seven diffuse H ii regions, labelled B to H in Fig. 1b. N 88A has a complex morphology. An absorption lane crossing the nebula from north to south appears as an undulating yellow structure in Fig. 1c (see Sect. 3.2 for more details). West of this structure lies the brightest part of N 88A, a small core of diameter $``$ 0<sup>′′</sup>.3 (0.08 pc), especially apparent on the H$`\alpha `$ image (white spot in Fig. 1c; see also Fig. 5). N 88A is clearly ionization-bounded to the north-west since the sharp edge visible in Fig. 1 indicates an ionization front in that direction. It is limited to the south-east by the weaker component B. N 88B resembles a hollow sphere – a shell – centered on the bright star #55 (see Sect. 3.4). N 88A and N 88B are clearly in interaction, as shown by the brightening of the shell between the two regions. Moreover, we note a high excitation narrow filament showing up in the \[O iii\] emission north-east of N 88B (Fig. 1a). The other components are situated farther away from N 88A. N 88 E-F-G and H appear as more extended, diffuse, and spherical H ii regions.
Several lower excitation arc-shaped features and filaments emerging from N 88A run outward in the north-east and south-west directions. These wind-induced structures are best seen in Fig. 3, which presents an un-sharp masked image of N 88A-B created from H$`\alpha `$. In this image large-scale structures have been suppressed by the technique explained in Paper I. Note also the mottled structure of the main component A, even in the direction of the absorbing lane, indicating a very inhomogeneous medium, both for gas and dust, with a typical cell size of 0<sup>′′</sup>.4 (0.1 pc).
### 3.2 Nebular reddening
The Balmer H$`\alpha `$/H$`\beta `$ intensity ratio map of N 88A-B is presented in Fig. 4a. The most striking feature is the presence of a heavy absorption lane of $``$ 0<sup>′′</sup>.7 $`\times `$ 2<sup>′′</sup>.3 ($``$ 0.2 $`\times `$ 0.7 pc<sup>2</sup>) in size, running in a north-south direction, which divides the bright N 88A into two parts. The mean H$`\alpha `$/H$`\beta `$ ratio in the lane is 7.10 $`\pm `$ 1.42 (rms), corresponding to $`A_V`$ = 2.5 mag if the LMC interstellar reddening law is used (Prévot et al. pre (1984)), and reaches values as high as $``$ 10, or $`A_V`$$``$ 3.5 mag. The mean ratio for component A, 4.81 $`\pm `$ 1.46, corresponds to $`A_V`$$``$ 1.5 mag. The extinction is also high towards component B, where H$`\alpha `$/H$`\beta `$ keeps a relatively uniform value of 4.27 $`\pm `$ 0.90 ($`A_V`$$``$ 1.1 mag). For comparison, previous lower resolution spectroscopic observations yielded $`A_V`$ = 1.1 mag (Dufour & Harlow dh (1977), using a 10<sup>′′</sup> wide slit) and $`A_V`$ = 1.7 mag (Testor & Pakull tes (1985), 4<sup>′′</sup>$`\times `$ 4<sup>′′</sup> slit). The H$`\alpha `$/H$`\beta `$ map was used to de-redden the H$`\beta `$ flux on a pixel to pixel basis.
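A sketch of the Balmer-decrement arithmetic used here (Python; the case-B intrinsic ratio of 2.86 and the extinction-curve coefficients below are generic assumed values rather than the LMC law of Prévot et al. actually used in the text, so the resulting $`A_V`$ differ slightly from the quoted ones):

```python
import math

R_INT = 2.86             # assumed intrinsic H-alpha/H-beta ratio (case B)
K_HB, K_HA = 3.61, 2.53  # assumed extinction-curve values at H-beta, H-alpha
R_V = 3.1                # assumed ratio of total to selective extinction

def a_v(r_obs):
    e_bv = 2.5 * math.log10(r_obs / R_INT) / (K_HB - K_HA)
    return R_V * e_bv

for r in (7.10, 4.81, 4.27):
    print(f"H-alpha/H-beta = {r:.2f}  ->  A_V ~ {a_v(r):.1f} mag")
```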
The G component shows a sharp dividing line in the middle separating it into two distinct halves, one much fainter than the other. This feature should be due to absorbing dust.
### 3.3 Ionized gas emission
The total H$`\beta `$ flux of component A is F(H$`\beta `$) = 3.45 $`\times `$ 10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-2</sup> (accurate to $``$ 3%). Correcting for the reddening (Sect. 3.2) gives a flux F<sub>0</sub>(H$`\beta `$) = 1.85 $`\times `$ 10<sup>-11</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. The total flux for both components A and B is F<sub>0</sub>(H$`\beta `$) = 1.97 $`\times `$ 10<sup>-11</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. Thus, component B provides less than 10% of the total H$`\beta `$ energy. A Lyman continuum flux of $`N_L`$ = 2.10 $`\times `$ 10<sup>49</sup> photons s<sup>-1</sup> can be estimated for component A, using a distance of 66 kpc, if the H ii region is ionization-bounded. A single O6V star can account for this ionizing flux (Vacca et al. vacca (1996), Schaerer & de Koter sch (1997)). Similarly, the Lyman continuum flux corresponding to component B is $`N_L`$$``$ 3.5 $`\times `$ 10<sup>47</sup> photons s<sup>-1</sup>. If we take the estimated UV fluxes at face value, the exciting star of component B should be an early B type star. However, these should be considered as lower limits, since the H ii regions are not perfectly ionization-bounded.
We find an rms electron density of 2700 cm<sup>-3</sup> for component A from the total H$`\beta `$ flux, assuming $`T_e`$ = 14 000 K (Garnett et al. gar (1995)) and a radius of 0.5 pc for the object. The corresponding total ionized mass of N 88A is $``$ 45 $`M_{}`$.
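These estimates can be reproduced to order of magnitude with a few lines (Python sketch; the conversion factor and recombination coefficient are rounded textbook values and helium is neglected, so the output falls somewhat below the quoted 2700 cm<sup>-3</sup> and 45 $`M_{}`$):

```python
import math

PC = 3.086e18                      # cm
D = 66e3 * PC                      # SMC distance, 66 kpc (from the text)
F0_HB = 1.85e-11                   # de-reddened H-beta flux, erg/s/cm^2
L_HB = 4 * math.pi * D**2 * F0_HB  # H-beta luminosity, erg/s

N_L = 2.1e12 * L_HB                # approximate case-B photon conversion
print(f"N_L ~ {N_L:.1e} photons/s")        # ~2e49, one O6V star

R = 0.5 * PC                       # assumed radius of N 88A
V = 4 / 3 * math.pi * R**3
ALPHA_B = 2.6e-13                  # cm^3/s, case-B recombination, ~10^4 K
n_e = math.sqrt(N_L / (ALPHA_B * V))
print(f"n_e ~ {n_e:.0f} cm^-3")            # ~2.3e3 with these rounded inputs

M_SUN = 1.989e33
m_ion = V * n_e * 1.67e-24 / M_SUN # ionized hydrogen mass, no helium
print(f"M_ion ~ {m_ion:.0f} M_sun")        # ~30; helium raises this further
```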
The \[O iii\]$`\lambda `$5007/H$`\beta `$ intensity map displays an extended high excitation zone towards N 88A (Fig. 4b), where the ratio has a mean value of $``$ 7. The ratio peaks at some points to values as high as 9. Taking $`T_e`$$``$ 14 000 K and $`N_e`$ = 2 700 cm<sup>-3</sup>, a ratio \[O iii\]/H$`\beta `$$``$ 7 indicates an ionic abundance O<sup>++</sup>/H<sup>+</sup>$``$ 9.4 $`\times `$ 10<sup>-5</sup>. Since the mean SMC oxygen abundance is O/H $``$ 10.5 $`\times `$ 10<sup>-5</sup> (Dufour duf (1984)), this means that $``$ 90% of the oxygen atoms in N 88A are in the form of doubly ionized O<sup>++</sup> ions, in agreement with the result of Garnett et al. (gar (1995)). The \[O iii\]/H$`\beta `$ ratio for component B is comparatively much smaller, with a mean value of $``$ 4.
The high excitation narrow filament emanating from component A is clearly visible in the \[O iii\]/H$`\beta `$ map, suggesting that the O<sup>++</sup> ions in the filament may be excited by shock collisions.
### 3.4 Stars
The HST images reveal tens of previously unknown stars towards the N 88 complex. Many of them, especially the brightest ones, are gathered in several small groups often unresolved in the EFOSC image (Fig. 2). The photometry obtained for the 79 brightest stars of the field using the filters wide $`U`$ (F300W), Strömgren $`v`$ (F410M), Strömgren $`b`$ (F467M), He ii (F469N), and Strömgren $`y`$ (F547M) is presented in Table 1, which also gives the coordinates (J2000) of each star. These stars are identified by their numbers in Fig. 6 and Table 1. The capital letters in the last column of the table identify the associated H ii regions. The spectral types of the ionizing stars proposed by Wilcots (wil (1994)), as well as their labels, are also listed in the table. Note that the present observations show the exciting star of N 88D to be double (#71 & #72) and the given type corresponds therefore to both of them. Table 1 is available in electronic form at the Centre de Données astronomiques de Strasbourg (via anonymous ftp to cdsarc.u-strasbg.fr or via http://cdsweb.u-strasbg.fr/Abstract.html).
While the exciting stars of the fainter H ii regions are easily identified on the true-color image, a remarkable point is the absence of prominent stars towards the main component A. Nevertheless, we detect two faint stars embedded in the core of N 88A east of the absorption lane (Fig. 5). These are stars #1 and #2 with $`y=18.2`$ and 18.3 mag and colors $`b-y=+0.9`$ and +0.6 mag respectively. It should however be underlined that these magnitudes are very uncertain since the stars lie in a very bright area where nebular subtraction is not straightforward. A third fainter star ($`y`$$``$ 20 mag), not visible in Fig. 5, is marginally detected just to the east of the bright H$`\alpha `$ core of N 88A. Its position suggests that it is a good candidate exciting star.
Star #55, situated towards the center of component B, has $`y=16.57`$ mag and is one of the brightest in the field. It has a highly elongated PSF profile which is due to multiplicity (at least three components are resolved). This may be the star “$`s1`$” detected by Wilcots (wil (1994)) relatively close to the brightest part of N 88. Its spectrum shows strong He i $`\lambda `$ 4471 Å and He ii $`\lambda `$ 4686 Å, but weak He ii $`\lambda `$ 4541 Å, indicative of an O9 V star. If this spectrum actually belongs to #55, it should correspond to the brightest component of this system.
The color-magnitude diagram for the brightest stars of the sample (Fig. 7) shows a main sequence with the bulk of the stars centered on Strömgren colors $`b-y=-0.10`$ and $`v-b=-0.20`$ mag, typical of massive OB stars (Relyea & Kurucz rk (1978)). These colors are equivalent to a Johnson $`B-V`$ = –0.30 (Turner tur (1990)), which indicates a negligible reddening (Conti et al. conti (1986)). This result is due to the fact that the main sequence is overwhelmingly dominated by stars lying outside the N 88 complex and means that the areas situated north-east, east, south, and south-west of N 88 are not affected by dust.
However, taking a sub-sample made up of all the exciting stars of the N 88 H ii regions (excluding stars #1 & #2), we find the Strömgren colors $`b-y=-0.02`$ and $`v-b=-0.17`$, which indicate $`B-V`$ = –0.21, corresponding to a visual extinction of $`A_V`$ = 0.3 mag. This clearly confirms that the N 88 complex is the most reddened part of this region of the SMC. Of particular interest are stars #60 and #61 situated immediately north-west of N 88A (Fig. 1b). Assuming that these two stars are of O type, their colors suggest an extinction of $`A_V`$$`>`$ 1 mag. This result has implications for the location of the molecular cloud (Sect. 4.4).
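The color-excess arithmetic behind this estimate, assuming a generic ratio of total to selective extinction $`R_V=3.1`$ (a standard value, not specified in the text):

```python
# E(B-V) = observed minus intrinsic color; A_V = R_V * E(B-V).
BV_OBS, BV_0, R_V = -0.21, -0.30, 3.1
print(f"A_V ~ {R_V * (BV_OBS - BV_0):.1f} mag")   # ~0.3
```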
The brightest stars of the sample are #39, #12, #19, and #6. The first two are blue stars and the latter ones are red. The red population showing up in the color-magnitude diagram represents a collection of evolved field stars as well as young massive ones contaminated by nebulosity/dust. For instance, it is noteworthy that the very red stars #76 and #74 are not associated with a nebulosity, and this suggests that they should be evolved field stars.
In the particular case of stars #1 and #2 lying inside N 88A, in spite of their red colors they may be young blue stars suffering from heavy extinction. Assuming that star #1 has an O9 V spectrum with $`M_V`$ = –4.4 mag (Vacca et al. vacca (1996)), and considering that the distance modulus of the SMC is 19 mag, an extinction of $`A_V`$ = 3.4 mag is necessary to make it appear as faint as $`y`$$``$ 18 mag. Similarly, in order for a star of spectral type O6 V (Sect. 3.3) to have $`y`$$``$ 20 mag, we need $`A_V`$$``$ 6 mag. Thus, the main exciting star(s) of N 88A should remain hidden in the optical by dust, with an extinction of at least 6 mag.
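The magnitude bookkeeping is elementary (Python sketch; the O6 V absolute magnitude of -5.2 is an assumed representative calibration value, not quoted in the text):

```python
DM = 19.0                                   # SMC distance modulus (from the text)
cases = ((18.0, -4.4, "star #1 as O9 V"),
         (20.0, -5.2, "O6 V candidate (assumed M_V)"))
for y_obs, M_V, label in cases:
    # required extinction = observed magnitude - (distance modulus + M_V)
    print(f"{label}: A_V ~ {y_obs - (DM + M_V):.1f} mag")   # 3.4 and 6.2
```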
## 4 Discussion and concluding remarks
### 4.1 Comparison with N 81
The most striking feature of N 88A is its lack of prominent stars, even at the WFPC2 resolution. This indicates a young age and is supported by other observational findings about N 88A: its compactness, its high density, and its exceptionally high extinction. These facts considered together suggest that N 88A is just hatching from its natal molecular cloud. Stars #1 and #2 are probably among the exciting sources of N 88A. Other exciting stars may still be embedded in the densest part of the nebula, such as the bright spot highlighted in Fig. 1c and Fig. 5, and remain invisible due to the high dust content. Compared with N 81, N 88A is probably younger since N 81 is more extended, less dense, and exhibits several of its exciting stars (Paper I). Although N 81 and N 88A are both very young, the present observations underline their difference. Apart from the question of the exciting stars, N 88A is surrounded by several diffuse H ii regions. In contrast, N 81 is an isolated object. These facts point to the diversity of star forming regions belonging to the same chemical environment and also to the necessity of observing each case in detail.
On the other hand, the whole N 88 region is very reminiscent of the LMC N 59 region studied by Armand et al. (arm (1992)). N 88 and N 59 contain several individual H ii regions, in various evolutionary states. They range from compact, bright and young components with a lot of dust hiding the exciting stars (N 88A and N 59A) to diffuse, spherical and evolved regions (N 88E-F-G-H and N 59C), and also to shell components (N 88B and N 59B which contains a supernova remnant). Similarly, both regions display a filamentary structure which results from the interaction of the strong stellar winds emitted by the massive stars with the surrounding medium, as well as small scale brightness variations pointing to a very inhomogeneous distribution of matter or dust inside or around these objects.
### 4.2 Associated neutral material
CO emission from the molecular cloud associated with N 88A was observed by Israel et al. (isco (1993)) using the ESO-SEST 15 m telescope. They detected the <sup>12</sup>CO(1–0) emission with a brightness temperature of 750 mK, a width of 2.5 km s<sup>-1</sup> and a radial velocity of V<sub>LSR</sub> = 147.8 km s<sup>-1</sup>. The molecular cloud is much brighter than the one associated with N 81 (Paper I) and ranks among the few sources in the SMC detected in <sup>13</sup>CO(1–0) (Israel et al. isco (1993)). Rubio et al. (rub (1996)) mapped the molecular cloud in <sup>12</sup>CO(1–0) and <sup>12</sup>CO(2–1) transitions using the SEST telescope with respective spatial resolutions of 43<sup>′′</sup> ($``$ 13 pc) and 22<sup>′′</sup> ($``$ 7 pc). It turns out that the cloud is in fact relatively small, $``$ 1′ (18 pc) in size in the east-west direction and slightly smaller in north-south. More recently, Rubio et al. (private communication) have detected the molecular transitions <sup>12</sup>CO(3–2), CS(2–1), CS(3–2), HCO<sup>+</sup>(1–0), which probably originate from the hot and dense core of the cloud. Molecular hydrogen emission has also been detected towards N 88A (Israel & Koornneef ik88 (1988)).
The SMC is known to have an overall complex structure with several overlapping neutral hydrogen layers (McGee & Newton McGee (1981)). We used the recent observations by Stanimirovic et al. (stan (1998)), with a resolution of 98<sup>′′</sup> (30 pc), to examine the H i emission towards N 88. The H i spectrum profile has two main emission peaks at $``$ 150 and $``$ 175 km s<sup>-1</sup>. The column density corresponding to their sum is 3.12 $`\times `$ 10<sup>21</sup> atoms cm<sup>-2</sup>, slightly smaller than that corresponding to N 81 (Paper I). It seems that the molecular cloud is correlated with the smaller velocity H i component.
### 4.3 Extinction
N 88 was detected as a very bright IRAS source (Schwering & Israel schwering (1990)). The fact that near infrared photometry of N 88A, at $`J`$, $`H`$, $`K`$, and $`L^{}`$ bands obtained using a 10<sup>′′</sup> aperture (Israel & Koornneef ik91 (1991)), is consistent with the IRAS spectrum (12, 25, 60, and 100 $`\mu `$m) suggested that the IR emission arises mostly from the compact object in the aperture. Moreover, these authors found a quite red $`K`$ – $`L^{}`$ color of more than 2 mag indicating the presence of hot dust.
Our HST images for the first time show the heavy concentration of absorbing dust towards the inner parts of the H ii region. More strikingly, the extinction rises to as high as $`A_V`$$``$ 3.5 mag in a narrow band towards the bright core of the nebula. This high absorption is quite unexpected for a metal-poor galaxy like the SMC. In fact N 88A holds the record of extinction among ionized nebulosities in the SMC. The correlation between the zones of high excitation and high extinction is an argument in favor of the physical association of the dust with the hot gas.
It is important to know the properties of this dust. Roche et al. (roche (1987)) studied 8–13 $`\mu `$m spectra of N 88A and found a featureless continuum without any evidence of dust signatures attributed to silicate grains. This led them to the conclusion that the dust is likely composed of carbon grains. Further progress in this area requires appropriate IR observations using the highest spatial resolutions.
### 4.4 Star formation
The N 88 nebular complex results from a small starburst which occurred recently in the Wing of the SMC. While the main stars creating N 88A are not visible, the other members of the starburst show up in the HST images (Fig. 1). The stars exciting the diffuse H ii regions (C to H) were formed in the outer, less dense parts of the molecular cloud, whereas the compact, very dusty N 88A is associated with the core of this small molecular cloud (Sect. 4.2). The cloud must be to the north-east of N 88A, as indicated by the ionization front detected in that direction (Sect. 3.1) and also by the fact that stars #60 and #61 situated north of the front are heavily affected by extinction (Sect. 3.4). This is further supported by ground-based higher exposure images showing a large front north-west of N 88A beyond which no stars are visible (Fig. 2; also Testor & Pakull tes (1985)).
The case of component B is interesting. Although it is, like N 88A, apparently related to the core of the molecular cloud, it seems more evolved. In fact N 88B has a significantly lower density and less dust, and reveals its exciting star (#55). It can be considered that N 88A has resulted from sequential star formation, that is, from the collapse of the shock/ionization front layer created by star #55. If so, we are dealing with two successive generations of stars formed in the core of the molecular cloud.
The stars situated towards N 88 are also known as HW 81, following Hodge & Wright (hw (1977)) who surveyed the SMC in search of OB associations. The present observations reveal the fainter members of this association. The HST images also resolve another association in the direction of N 88. Lying $``$ 50<sup>′′</sup> (15 pc) south-east of N 88A, at the lower-left corner of Fig. 1a, HW 82 (Hodge & Wright hw (1977)) is composed of a dozen stars, several of which are tightly packed multiple ones. HW 82 is not associated with ionized gas, and a relatively large number of its stars are red. At present we do not know whether the red and blue stars are co-spatial members of the same cluster. Nevertheless, the facts that the ionized gas is already dispersed from there and that no significant amount of dust is detected (Sect. 3.4) suggest that HW 82 represents an older burst of star formation in the Wing. This is confirmed by the larger field of the EFOSC H$`\alpha `$ image (Fig. 2), which shows no H ii regions south of the HST field of view. Star formation must therefore have proceeded from south to north and N 88 is the most recent site of star formation in this part of the SMC.
A noteworthy aspect of the stellar population towards N 88 is the presence of several tight clusters or multiple systems uncovered by the present observations. For example, the exciting star of N 88B (#55) is a multiple system of at least three components. There are also at least two stars hidden inside N 88A, while both N 88C and D are excited by two blue stars of comparable brightness. Note also the tight cluster in HW 82 composed of stars #9, #10, #11, #12, and #13. These cases present new pieces of evidence in support of the collective formation of massive stars in the SMC (see Paper I for a brief discussion).
An intriguing, though as yet unanswered, question is the origin of the large-scale filamentary veil visible in the EFOSC image. Our true-color image shows that filaments originating from north-east of N 88 run towards the anonymous blue cluster in the south (stars #16, #17, #21, #22, #23, and #24). However, the veil significantly brightens south of that cluster and bends to the south-east. In consequence, the association of the veil with the N 88 region is not established. It is possible that this filamentary structure is linked to the neighboring huge bubble nebula DEM 167 (Davies et al. dav (1976)).
###### Acknowledgements.
We are grateful to an anonymous referee for his careful reading of the manuscript and comments that contributed to substantially improve the paper. VC would like to acknowledge the financial support from a Marie Curie fellowship (TMR grant ERBFMBICT960967).
## 1 Introduction
Some years ago ’t Hooft introduced the concept of Abelian projection into non-Abelian gauge theories, in order to explain the confinement of quarks in four-dimensional $`QCD`$ as a dual Meissner effect in a dual superconductor .
The Abelian projection allows us, by a careful choice of the gauge, to describe the physical variables of a non-Abelian $`SU(N)`$ gauge theory, without scalar matter fields, as a set of electric charges and magnetic monopoles interacting via a residual $`U(1)^{N-1}`$ Abelian gauge coupling.
The occurrence of magnetic monopoles into a non-Abelian gauge theory without matter fields is perhaps the most crucial feature of the Abelian projection, that furnishes a precise understanding of the structure of the phases of non-Abelian gauge theories, according to the following alternatives .
If there is a mass gap, either the electric charge condenses in the vacuum (Higgs phase) or the magnetic charge does (confinement phase). If there is no mass gap, the electric and magnetic fluxes coexist (Coulomb phase).
Recently, in an apparently unrelated development , some mathematical control was gained over the large-$`N`$ limit of four-dimensional $`QCD`$, mapping, by means of a chain of changes of variables, the function space of the $`QCD`$ functional integral into an elliptic fibration of Hitchin bundles.
Hitchin bundles are themselves a fibration of $`U(1)`$ bundles over spectral branched covers of a Riemann surface, that, in the case of , is a torus.
In this paper, we point out that the map in is a version, in a perhaps global algebraic-geometric setting, of the concept of Abelian projection .
In fact, the branching points of the spectral cover are identified with the magnetic monopoles of the Abelian projection, the parabolic points of the cover with (topological) electric charges and the $`U(1)`$ gauge group on the cover with a global version (on the cover) of the $`U(1)^{N1}`$ gauge group of the Abelian projection.
The identifications that we have just outlined provide a physical interpretation of the mathematical construction in . Indeed it is precisely this physical interpretation that explains naturally why the functional integral, once it is expressed as a functional measure supported over the collective field of the Hitchin fibration, is dominated by a saddle-point condition in the large-$`N`$ limit.
On the other hand, we may think that the mathematical proof that the variables of the Abelian projection really capture the physics of four-dimensional $`QCD`$ in the large-$`N`$ limit relies on the fact that those variables may be employed to dominate the functional integral in the large-$`N`$ limit.
The only qualitative feature in the treatment in that was not already present in the concept of the Abelian projection is the occurrence of Riemann surfaces, which is due to the global algebraic-geometric nature of the methods in . This, however, makes contact, at least qualitatively, with another long-standing conjecture about $`QCD`$ confinement: the occurrence of string world sheets and the string program .
Our last concluding remark is that the electric/magnetic alternative and the physical interpretation based on the Abelian projection, applied in the mathematical framework of , give us a simple qualitative criterion to characterize the confinement phase of $`QCD`$ in the large-$`N`$ limit: confinement is equivalent to magnetic condensation, in the absence of electric (parabolic) singularities of the spectral covers.
An alternative, compatible interpretation, based on the idea that $`QCD`$ is equivalent, in the large-$`N`$ limit, to a theory of strings is outlined in the following section. The rest of the paper is devoted to a technical explanation of the correspondence between the Abelian projection and the Hitchin fibration in four-dimensional $`QCD`$.
## 2 The Hitchin fibration as the Abelian projection in the gauge in which the Higgs current is a triangular matrix
The Abelian projection, according to , is really the choice of a gauge-fixing in such a way that, after the gauge-fixing, the theory is no longer locally invariant under $`SU(N)`$ but only under its Cartan subgroup $`U(1)^{N-1}`$. The important point about this projection is that it is defined strictly locally, that is, the gauge rotation $`\mathrm{\Omega }`$ performed at each point in space-time to implement the gauge-fixing condition does not depend on the values of the physical fields at other points of space-time. This then guarantees that all observables in the new gauge frame are still locally observable. There are no propagating ghosts. But $`\mathrm{\Omega }`$ is not completely defined. There is a subgroup, $`U(1)^{N-1}`$, of gauge rotations that may still be performed. And this is why the theory, after the Abelian projection, looks like a local $`U(1)^{N-1}`$ gauge theory.
If one now tries to gauge-fix this remaining gauge freedom, one discovers that it cannot be done locally without encountering apparent difficulties. But local gauge-fixing is not needed, since the residual gauge symmetry is that of a familiar Abelian theory.
There may be, however, isolated points where the local gauge-fixing condition has coinciding eigenvalues, where the gauge symmetry is not $`U(1)^{N-1}`$ but a larger group. Here singularities appear, the magnetic monopoles. So we see that, topologically, the full theory can only be equivalent to the $`U(1)^{N-1}`$ gauge theory if the latter is augmented with monopole singularities, where the $`U(1)`$ conservation laws for the vortices are broken down into the (less restrictive) conservation laws of the $`SU(N)`$ vortices.
When we try to gauge-fix completely, we hit upon the Dirac strings, whose end points are the magnetic monopoles.
In addition to the magnetic monopoles, in the $`QCD`$ case, the gauge-fixed theory contains also gluon and quark fields, that are charged with respect to the residual $`U(1)^{N-1}`$.
Therefore we have a set of electric charges and magnetic monopoles interacting via a residual $`U(1)^{N-1}`$ Abelian gauge coupling.
We now compare this description with the one that arises in , for the pure gauge theory without quark matter fields.
The functional integral for $`QCD`$ in is defined in terms of the variables $`(A_z,A_{\overline{z}},\mathrm{\Psi }_z,\mathrm{\Psi }_{\overline{z}})`$, obtained by means of a partial duality transformation from $`(A_z,A_{\overline{z}},A_u,A_{\overline{u}})`$, where $`(z,\overline{z},u,\overline{u})`$ are the complex coordinates on the product of two two-dimensional tori, over which the theory is defined.
$`(A_z,A_{\overline{z}},\mathrm{\Psi }_z,\mathrm{\Psi }_{\overline{z}})`$ define the coordinates of an elliptic fibration of $`T^{}𝒜`$, the cotangent bundle of unitary connections on the $`(z,\overline{z})`$ torus, whose base is the $`(u,\overline{u})`$ torus.
$`\mathrm{\Psi }_z`$ transforms as a field strength under gauge transformations and it is a non-hermitian matrix.
Following Hitchin , the gauge is chosen in which $`\mathrm{\Psi }_z`$ is a triangular matrix, for example lower triangular, which leaves a $`U(1)^{N-1}`$ residual gauge freedom as in the Abelian projection.
The points in space-time where $`\mathrm{\Psi }_z`$ has a pair of coinciding eigenvalues correspond to monopoles. In addition there are the charged components of $`(A_z,A_{\overline{z}},\mathrm{\Psi }_z,\mathrm{\Psi }_{\overline{z}})`$. We have thus a set of charges and monopoles with a residual $`U(1)^{N-1}`$, according to the Abelian projection.
In , however, a dense set is found in the functional integral over (the elliptic fibration of) $`T^{}𝒜`$, with the property that the quotient by the action of the gauge group exists as a Hausdorff (separable) manifold.
This dense set is defined in as the set of pairs $`(A,\mathrm{\Psi })`$ that are solutions of the following differential equations (elliptically fibered over the $`(u,\overline{u})`$ torus):
$`F_A-i\mathrm{\Psi }\wedge \mathrm{\Psi }`$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\mu _p^0\delta _p\,i\,dz\wedge d\overline{z}`$
$`\overline{\partial }_A\psi `$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\mu _p\delta _p\,dz\wedge d\overline{z}`$
$`\partial _A\overline{\psi }`$ $`=`$ $`{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\overline{\mu }_p\delta _p\,d\overline{z}\wedge dz`$ (1)
where $`\delta _p`$ is the two-dimensional delta-function localized at $`z_p`$ and $`(\mu _p^0,\mu _p,\overline{\mu }_p)`$ are the set of levels for the moment maps. The moment maps are the three Hamiltonian densities generating gauge transformations on $`T^{}𝒜`$ that appear in the left hand sides of Eq.(1) .
$`\mu _p^0`$ are hermitian traceless matrices, and $`\mu _p`$ are matrices in the complexification of the Lie algebra of $`SU(N)`$, that determine the residues of the poles of the Higgs current $`\mathrm{\Psi }`$. $`\psi `$ and $`\overline{\psi }`$ are the $`z`$ and $`\overline{z}`$ components of the one-form $`\mathrm{\Psi }`$.
Eq.(1) defines a dense stratification of the functional integral over $`T^{}𝒜`$ because the set of levels is dense everywhere in function space, in the sense of the distributions, as the divisor $`D`$ gets larger and larger.
Eq.(1) defines the data of parabolic $`K(D)`$ pairs on a torus valued in the Lie algebra of the complexification of $`SU(N)`$: a holomorphic connection $`\overline{\partial }_A`$ of a holomorphic bundle, $`E`$, with a parabolic structure and a parabolic morphism $`\psi `$ of the parabolic bundle. The parabolic structure at a point $`p`$ consists in the choice of a set of ordered weights, that are positive real numbers modulo 1, and a flag structure, that is, a collection of nested subspaces $`\mathcal{F}_1\subset \mathcal{F}_2\subset \mathrm{}\subset \mathcal{F}_k`$ labelled by the ordered weights $`\alpha _1,\alpha _2,\mathrm{},\alpha _k`$, with the associated multiplicities defined as $`m_{i+1}=\mathrm{dim}\mathcal{F}_{i+1}-\mathrm{dim}\mathcal{F}_i`$. A parabolic morphism, $`\varphi `$, is a holomorphic map between parabolic bundles, $`E^1,E^2`$, that preserves the parabolic flag structure at each parabolic point $`p`$, in the sense that $`\alpha _i^1>\alpha _j^2`$ implies $`\varphi (\mathcal{F}_i^1)\subset \mathcal{F}_{j+1}^2`$. We should now explain how a parabolic structure arises from Eq.(1) and how it follows that $`\psi `$ is a parabolic morphism with respect to the given parabolic structure. Though we are going to choose the gauge in which $`\psi `$ is a lower triangular matrix in most of this paper, we start at an intermediate stage with a gauge in which $`\mu _p^0`$ is diagonal. The eigenvalues of $`\mu _p^0`$ modulo $`2\pi `$ and divided by $`2\pi `$ define the parabolic weights. Their multiplicities will turn out to be the multiplicities of the yet to be defined flag structure.
With $`\mu _p^0`$ and $`\mu _p`$ in Eq.(1) fixed, let $`(e_k)`$ be an orthonormal basis of eigenvectors of $`\mu _p^0`$ in decreasing order. This basis is not necessarily unique if the eigenvalues have non-trivial multiplicities. However the corresponding flag structure will not be affected by this lack of uniqueness. Let $`g`$ be the gauge transformation that puts $`\mu `$ and $`\psi `$ into lower triangular form. Let $`(ge_k)`$ be the transformed basis and let $`\mathcal{F}`$ be the flag obtained by taking the unions of the subspaces generated by the vectors in the transformed basis that are the images of eigenvectors of the ordered eigenvalues with the given multiplicity, in such a way that the multiplicities of the resulting flag are the same as the multiplicities of the eigenvalues. In addition, by construction, $`\psi `$ is a parabolic morphism with respect to the flag, since it is holomorphic and lower triangular in the basis $`(ge_k)`$.
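As a toy illustration of why lower triangularity encodes a flag structure, the following sketch (Python with numpy; random data and a hypothetical basis ordering, not the actual construction of the paper) checks that a lower-triangular $`\psi `$ maps each nested subspace span$`(e_k,\mathrm{},e_N)`$ into itself:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
# random complex lower-triangular matrix standing in for psi
psi = np.tril(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

for k in range(N):
    v = np.zeros(N, complex)
    v[k:] = rng.normal(size=N - k)   # arbitrary vector in span(e_k,...,e_N)
    w = psi @ v
    assert np.allclose(w[:k], 0)     # image has no components outside it
print("lower-triangular psi preserves each subspace span(e_k, ..., e_N)")
```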
We have thus the data of a parabolic $`K(D)`$ pair from Eq.(1).
There is also a representation theoretic interpretation of Eq.(1).
The three equations for the moment maps are equivalent to a vanishing curvature condition for the non-hermitian connection one-form $`B=A+i\mathrm{\Psi }`$ plus a harmonic condition for $`\psi `$ away from the parabolic divisor .
Therefore the set of solution of Eq.(1) can be figured out essentially as a collection of monodromies around the points of the divisor with values in the complexified gauge group, that form a representation of the fundamental group of the torus with the points of the parabolic divisor deleted.
’t Hooft’s description of the Abelian projection outlined above applies to $`T^{}𝒜`$ and to its dense subset defined by Eq.(1) a fortiori. In addition, we have just shown that there is an embedding of the solutions of Eq.(1) into the parabolic $`K(D)`$ pairs.
However, on the parabolic $`K(D)`$ pairs, ’t Hooft’s concept of Abelian projection can be carried to its extreme consequences.
Indeed, in the global algebraic-geometric framework of the Hitchin fibration of parabolic $`K(D)`$ pairs, it is preferable to concentrate on the first eigenvalue and the first eigenstate of the lower triangular matrix $`\mathrm{\Psi }_z`$, since all the information of the original parabolic bundle, up to gauge equivalence, can be reconstructed from these data alone .
The first eigenvalue defines a spectral covering, that is, a branched cover of the two-torus. The eigenspace defines a section of a line bundle that determines a $`U(1)`$ connection on the cover of the torus, instead of the $`U(1)^{N-1}`$ bundle on the torus of the Abelian projection.
The $`U(1)`$ connection on the cover, $`a`$, and the eigenvalue, $`\lambda `$, of the Higgs current can be considered as coordinates of the cotangent bundle of unitary $`U(1)`$ connections on the cover, or as parabolic $`K(D)`$ pairs $`(a,\lambda )`$ on the cover, valued in the complexification of the Lie algebra of $`U(1)`$.
The system is now completely abelianized. Correspondingly, not only the magnetic charges, but also the electric ones can occur only as gauge invariant topological configurations.
The points in space-time where $`\mathrm{\Psi }_z`$ has a pair of coinciding eigenvalues, that in the Abelian projection correspond to monopoles, are here, according to Hitchin, simple branching points of the spectral covers, defined by means of the characteristic equation:
$`Det(\lambda 1-\mathrm{\Psi }_z)=0,`$ (2)
in which the coordinates $`(u,\overline{u})`$ are kept fixed.
All the other branching points can be obtained by collision of these simple branching points, in the same way monopoles can in the Abelian projection. The branching points are the end points of string cuts on the Riemann surfaces, the Dirac strings of the Abelian projection.
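A concrete toy example of the spectral equation (2) (Python with sympy; the matrix below is an arbitrary polynomial illustration, not taken from the paper):

```python
import sympy as sp

z, lam = sp.symbols('z lambda')
# arbitrary traceless 2x2 'Higgs field' depending polynomially on z
Psi = sp.Matrix([[z, 1],
                 [z**2 - 1, -z]])

curve = (lam * sp.eye(2) - Psi).det()   # lambda**2 - 2*z**2 + 1
disc = sp.discriminant(curve, lam)      # vanishes at the branch points
print(sp.expand(curve))
print(sp.solve(disc, z))                # [-sqrt(2)/2, sqrt(2)/2]
```

The two zeros of the discriminant are the simple branching points of this double cover, i.e. the locations that play the role of monopoles in the language used here.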
These Riemann surfaces, the only additional global ingredient with respect to the Abelian projection, are interpreted as the world sheets of strings made by electric flux lines.
A closed string of electric flux is represented by a Wilson loop of the $`U(1)`$ connection $`a`$ on the cover, along a non-trivial generator of the fundamental group of the surface.
In addition, the Riemann surfaces defined by the spectral equation may possess parabolic points, associated to poles of the eigenvalues of the Higgs current $`\mathrm{\Psi }_z`$, whose origin is in the parabolic singularities of the original $`su_c(N)`$-valued $`K(D)`$ pair, which may be reflected into a parabolic structure for the $`u_c(1)`$-valued $`K(D)`$ pair on the cover.
These poles, together with those of the $`U(1)`$ connection, are interpreted as electric charges. Indeed it is not difficult to see that they are electric sources, appearing where a boundary electric loop shrinks to a point.
Therefore, the electric charges occur here as topological objects associated to the parabolic degree of the $`u_c(1)`$-valued $`K(D)`$ pair. On the other side, magnetic topological quantum numbers are associated, as usual, to the ordinary degree of the $`U(1)`$ bundle.
We should mention however that a subtlety arises in our interpretation of the Hitchin fibration in terms of the Abelian projection. As we mentioned in the first part of this section, in the Abelian projection the gauge-fixing condition leaves a residual non-Abelian gauge symmetry where a magnetic monopole occurs. This is essentially due to the fact that ’t Hooft chooses to diagonalize a hermitian functional of the fields. On the contrary, in the case of the dense set defined by Eq.(1), since $`\psi `$ is a non-hermitian matrix, it can only be put in triangular form. This gauge-fixing does not leave in general a residual compact non-Abelian gauge symmetry even when the eigenvalues coincide. However this difficulty can be resolved in the following way, anticipating some of the conclusions of this paper and the result of . Let us require for the moment that the levels of the non-hermitian moment maps be nilpotent. Since these are only $`N`$ conditions at each parabolic point, they do not essentially modify the entropy of the functional integration in the large-$`N`$ limit. The true physical meaning of this choice has to do with confinement and it is explained in . If the residues of the Higgs field are nilpotent, Eq.(1) can be interpreted as the vanishing condition for the moment maps of the action of the compact $`SU(N)`$ gauge group on the pair $`(A,\mathrm{\Psi })`$ and on the cotangent space of coadjoint orbits :
$`F_A-i\mathrm{\Psi }\wedge \mathrm{\Psi }-{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\mu _p^0\delta _p\,i\,dz\wedge d\overline{z}=0`$
$`\overline{\partial }_A\psi -{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}n_p\delta _p\,dz\wedge d\overline{z}=0`$
$`\partial _A\overline{\psi }-{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\overline{n}_p\delta _p\,d\overline{z}\wedge dz=0`$ (3)
In addition the quotient under the action of the compact gauge group is hyper-Kähler . By a general result of Hitchin, Karlhede, Lindström and Roček , the hyper-Kähler quotient under the action of the compact gauge group in Eq.(3) is the same as the quotient defined by the non-hermitian moment maps:
$`\overline{\partial }_A\psi -{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}n_p\delta _p\,dz\wedge d\overline{z}=0`$
$`\partial _A\overline{\psi }-{\displaystyle \frac{1}{|D|}}{\displaystyle \sum _p}\overline{n}_p\delta _p\,d\overline{z}\wedge dz=0`$ (4)
under the action of the complexification of the gauge group. We can therefore impose a gauge condition compatible with the compact action in Eq.(3), or a gauge condition compatible with the action of the complexified group in Eq.(4), obtaining the same moduli space. In the second case we choose the gauge in which $`\psi `$ is diagonal. This condition becomes singular where two or more eigenvalues coincide. In fact it cannot be extended continuously to the points where the eigenvalues coincide. There it can only be required that $`\mathrm{\Psi }_z`$ be a triangular matrix. However this condition now leaves a residual non-Abelian gauge symmetry in the complexification of the gauge group: the freedom of making triangular gauge transformations, thus confirming our analogy with ’t Hooft’s definition of magnetic monopoles.
To summarize, the ingredients of the Hitchin fibration of the $`su_c(N)`$-valued $`K(D)`$ pairs are the branching points, that are interpreted as magnetic monopoles, and the $`U(1)`$ monodromies around closed loops, that are interpreted as electric lines. In addition, the ordinary degree of the $`U(1)`$ bundle is interpreted as a (topological) magnetic charge, while the parabolic degree of the $`U(1)`$ bundle is interpreted as a (topological) electric charge.
The difference here, with the letter but not with the spirit of the Abelian projection, is that the system has been completely abelianized, so that both the magnetic and the electric charges are topological. We are thus given a set of charges and monopoles with a $`U(1)`$ gauge group on the covering, in analogy with the Abelian projection.
We call this description a complete Abelian projection.
The string interpretation is as follows. The spectral covers are the world sheets of strings, formed by the electric flux lines. The confinement condition is equivalent to requiring that only closed string world sheets occur, since confinement requires that the flux lines can never break in the absence of quarks.
If the spectral covers possess parabolic points, which are the same as the electric charges in the complete Abelian projection, they are, topologically, Riemann surfaces with boundaries at infinity.
For example a sphere with two parabolic points is a topological cylinder.
But a cylinder can occur among vacuum string world sheets (we are describing the contributions to the partition function, that is, the vacuum-to-vacuum amplitude) only if open strings propagate.
In fact, a closed string that propagates through the torus breaks into an open one at the parabolic points, since the parabolic points do not belong to the world sheet.
On the contrary, when a closed string meets a branching point, for example in a once-branched double cover of a torus, the closed string is pinched into another closed string with the form of a double loop intersecting at the (simple) branching point.
Notice also that the branching points do belong to the world sheet.
Thus, the string picture is consistent with the interpretation of branching points as magnetic charges, where the string electric line can self-intersect but not break, and of parabolic points as electric charges, where closed strings break into open strings with the parabolic points as boundaries.
## 3 Conclusions
Our conclusion is that the concept of Abelian projection in furnishes a physical interpretation of the structures that appear in the Hitchin fibration of $`K(D)`$ pairs, as it is embedded in the $`QCD`$ functional integral in .
In addition, there is a complementary consistent string interpretation.
The most relevant consequence of these interpretations is a criterion for electric confinement in the framework of , that is, the usual criterion of magnetic condensation of .
Therefore, if $`QCD`$ confines the electric charge, the functional measure must be localized, in the large-$`N`$ limit, on those parabolic $`K(D)`$ pairs whose image through the Hitchin map contains monopoles but no charges, that is, in geometric language, on those spectral covers that are arbitrarily branched but do not possess a parabolic divisor.
In turn, this is equivalent to the condition that only spectral covers spanned by closed strings occur as configurations in the vacuum to vacuum amplitude.
It is amusing to notice that this condition is satisfied by the string of two-dimensional $`QCD`$ in the large-$`N`$ limit .
## 4 Acknowledgements
We would like to thank Gerard ’t Hooft for several clarifying remarks on the Abelian projection.
# Interchain interactions and magnetic properties of Li2CuO2
## Abstract
An effective Hamiltonian is constructed for an insulating cuprate with edge-sharing chains Li<sub>2</sub>CuO<sub>2</sub>. The Hamiltonian contains the nearest and next-nearest neighboring intrachain and zigzag-type interchain interactions. The values of the interactions are obtained from the analysis of the magnetic susceptibility, and this system is found to be described as coupled frustrated chains. We calculate the dynamical spin correlation function $`S(𝐪,\omega )`$ by using the exact diagonalization method, and show that the spectra of $`S(𝐪,\omega )`$ are characterized by the zigzag-type interchain interactions. The results of the recent inelastic neutron-scattering experiment are discussed in the light of the calculated spectra.
One-dimensional cuprates have received much attention as reference systems of high-T<sub>c</sub> superconductors with two-dimensional CuO<sub>2</sub> planes. Recently, a variety of compounds with edge-sharing chains, where CuO<sub>4</sub> tetragons are coupled by their edges, were synthesized and found to show unique physical properties. Li<sub>2</sub>CuO<sub>2</sub> is one of the typical compounds having such chains. As shown in Fig. 1(a), the chains run parallel to the b-axis and are stacked along the a- and c-axes.
An important feature of the edge-sharing chains is that the nearest neighboring (NN) magnetic interaction $`J_1`$ between Cu spins strongly depends on the Cu-O-Cu bond angle $`\theta `$. In the case that $`\theta `$ = 90°, the superexchange process via O ions, which contributes to antiferromagnetic (AFM) interaction, is suppressed due to the orthogonality of Cu3$`d`$ and O2$`p`$ orbitals, and the ferromagnetic (FM) contribution caused by, for example, the direct exchange mechanism between Cu3$`d`$ and O2$`p`$ orbitals, becomes dominant. With increasing $`\theta `$, the AFM superexchange interaction increases, and consequently $`J_1`$ changes from FM to AFM interaction at a critical angle $`\theta _c`$. In the previous study, $`\theta _c`$ was estimated to be about 95° from the cluster calculation. For Li<sub>2</sub>CuO<sub>2</sub> with $`\theta `$ = 94°, $`J_1`$ was evaluated to be FM ($`<`$0) with a magnitude of 100 K. In addition to $`J_1`$, a next-nearest neighboring (NNN) magnetic interaction $`J_2`$, which comes from the Cu-O-O-Cu path, also plays an important role in the magnetic properties. The interaction $`J_2`$ is AFM ($`>`$0), and its magnitude is known to be comparable to $`|J_1|`$. Therefore, an appropriate model describing the edge-sharing chain is a spin-1/2 Heisenberg model with NN and NNN interactions (a $`J_1`$-$`J_2`$ model). The ground state of the $`J_1`$-$`J_2`$ model has been extensively studied: For $`J_2/|J_1|<1/4`$, it is a FM state, while for $`J_2/|J_1|>1/4`$, it is a frustrated state with incommensurate spin correlation.
In Li<sub>2</sub>CuO<sub>2</sub>, AFM long-range order occurs at $`T_N`$=9 K, and the magnetic structure below $`T_N`$ is FM along the a- and b-axes and AFM along the c-axis. The recent inelastic neutron-scattering experiment showed the existence of interchain (IC) interactions which bring about the ordering. The analysis of the dispersions along the a- and c-axes in the linear spin-wave theory revealed that the IC interaction between NN Cu spins is of the order of 10 K. The band calculation also showed the existence of large effective hoppings between neighboring chains, and thus the superexchange interactions. Following these facts, the IC interaction plays an important role in the magnetic properties of Li<sub>2</sub>CuO<sub>2</sub>. The importance of IC interactions has also been pointed out in other edge-sharing cuprates, CuGeO<sub>3</sub> (Ref. 7) and Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub> (Refs. 8 and 9).
In this paper, paying attention to the IC interaction, we examine the magnetic properties of Li<sub>2</sub>CuO<sub>2</sub> such as magnetic susceptibility and magnetic excitation by applying the exact diagonalization method on finite size clusters. We construct a minimal model, taking into account the IC interaction to the $`J_1`$-$`J_2`$ model. A set of parameter values is obtained from the analysis of the temperature dependence of the magnetic susceptibility $`\chi (T)`$. We find that the compound is described as a system with frustrated chains coupled by zigzag-type IC interactions. In order to examine the magnetic excitation, we calculate the dynamical spin correlation function $`S(𝐪,\omega )`$ in the chain direction. The calculated spectra show a flat dispersion caused by the frustration due to $`J_2`$ in the low energy region. On the other hand, in the high energy region, there is a dispersion, the energy position of which corresponds to that obtained from the linear spin-wave theory. The spectra of the dispersion are, however, broad. We show that this dispersion is brought about by the IC interaction with zigzag-type structure. The experimental results are discussed in the light of our theoretical results.
We first construct a minimal model for Li<sub>2</sub>CuO<sub>2</sub> including the IC interaction. As shown in Fig. 1(a), Li ions are located between the chains. The IC interaction works in the a- and c-directions via Li ions. Since the hatched chains in Fig. 1(a) are situated in the b-c plane, the orbitals relevant to the electronic states in each chain are the Cu3$`d_{y^2z^2}`$ and O2$`p_{y,z}`$ ones. The possible paths which give the IC interactions along the c-axis $`J_c`$ and along the a-axis $`J_a`$ are shown in Fig. 1(a) by thick solid and dotted lines, respectively. For $`J_c`$ (solid line), the orbitals of Li ions (Li1$`s`$, 2$`s`$) couple to O2$`p_x`$ in one chain, but to O2$`p_{y,z}`$ orbitals in another one. On the other hand, for $`J_a`$, they couple to only O2$`p_x`$ orbitals in both chains. As a result, the magnitude of $`J_a`$ is expected to be much smaller than that of $`J_c`$ due to the orthogonality of the Cu3$`d_{y^2z^2}`$ and O2$`p_x`$ orbitals. In fact, the neutron-scattering experiment indicates that the width of the magnon dispersion along the a-axis is narrower than that along the c-axis, and $`J_a`$ is about 4 K, which is less than half of $`J_c`$. In the present study, we neglect $`J_a`$ for simplicity.
For $`J_c`$, there are two paths connecting between NN Cu ions \[Cu(1) and Cu(2)\] and between NNN Cu ions \[Cu(1) and Cu(3)\] as shown by two solid and two dotted lines in Fig. 1(b), respectively. They give rise to zigzag-type IC interactions between NN Cu spins and between NNN ones, $`J_{c\mathrm{NN}}`$ and $`J_{c\mathrm{NNN}}`$, respectively. From the consideration of the crystal structure and the orbital configuration, the magnitudes of $`J_{c\mathrm{NN}}`$ and $`J_{c\mathrm{NNN}}`$ are found to be the same because each path makes an equal contribution to the two interactions. Therefore, we take $`J_{c\mathrm{NN}}=J_{c\mathrm{NNN}}J_c`$.
Based on the above consideration, we adopt the $`J_1`$-$`J_2`$-$`J_c`$ model shown in Fig. 1(c). In the previous study, $`J_1`$ was evaluated to be −100 K. We determine $`J_2`$ and $`J_c`$ based on the analysis of the experimental data of $`\chi (T)`$. By diagonalizing the Hamiltonian of the $`J_1`$-$`J_2`$-$`J_c`$ model for finite-size clusters with M$`\times `$N sites (M is the number of sites in a chain and N is the number of chains), the theoretical $`\chi (T)`$ is obtained. We preliminarily calculated it in 4$`\times `$2 and 4$`\times `$4 clusters in order to check the size effect along the IC direction (N dependence), and found that the difference of $`\chi (T)`$ between N=2 and 4 is so small that clusters with N=2 are enough to investigate the effect of the IC interaction on $`\chi (T)`$. In Fig. 2, we show the result for an 8$`\times `$2 cluster by the solid line together with the experimental data denoted by square symbols. A good agreement between theory and experiment is obtained for $`J_2`$=40 K and $`J_c`$=16 K. This value of $`J_c`$ is not far from the value determined by the neutron-scattering experiment. The theoretical results of $`\chi (T)`$ reproduce well the divergent behavior of the experimental ones at low temperatures. We examined $`\chi (T)`$ of the single chain (the $`J_1`$-$`J_2`$ model) in the previous study, and obtained a peak in $`\chi (T)`$ at $`T`$=40 K. Therefore, we conclude that $`\chi (T)`$ is not explained unless $`J_c`$ is taken into account.
Next, we calculate the dynamical spin correlation function $`S(𝐪,\omega )`$ in order to investigate the magnetic excitation. This is defined as follows.
$$S(𝐪,\omega )=\underset{m}{}|m|S_𝐪^z|0|^2\delta (\omega E_m+E_0),$$
(1)
where $`|0`$ and $`|m`$ are the ground and excited states with energies $`E_0`$ and $`E_m`$, respectively. $`S_𝐪^z`$ is the Fourier transform of spatial spin density.
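As an aside for readers who wish to reproduce the qualitative behavior of $`S(𝐪,\omega )`$, a minimal exact-diagonalization sketch in Python is given below. This is not the code used for the 20-site calculations reported here; the ring size N = 8 and the momentum $`q=\pi /2`$ are illustrative assumptions, chosen so that full diagonalization is trivial.

```python
import numpy as np

N = 8                        # sites in a small periodic J1-J2 ring (illustrative; the paper uses 20)
J1, J2 = -100.0, 40.0        # K; FM NN and AFM NNN couplings from the text

dim = 2 ** N
sz = lambda s, i: 0.5 if (s >> i) & 1 else -0.5

# Build the Heisenberg Hamiltonian H = sum_d sum_i J_d S_i . S_{i+d}
H = np.zeros((dim, dim))
for s in range(dim):
    for i in range(N):
        for J, d in ((J1, 1), (J2, 2)):
            j = (i + d) % N
            H[s, s] += J * sz(s, i) * sz(s, j)        # Ising part
            if ((s >> i) & 1) != ((s >> j) & 1):      # spin-flip part
                H[s ^ (1 << i) ^ (1 << j), s] += 0.5 * J

E, V = np.linalg.eigh(H)
g = V[:, 0]                                           # ground state

q = np.pi / 2                                         # a momentum commensurate with N
diag = np.array([sum(np.exp(1j * q * i) * sz(s, i) for i in range(N))
                 for s in range(dim)]) / np.sqrt(N)   # S^z_q is diagonal in this basis
w = np.abs(V.conj().T @ (diag * g)) ** 2              # |<m|S^z_q|0>|^2
for m in np.nonzero(w > 1e-6)[0][:8]:
    print(f"omega = {E[m] - E[0]:9.3f} K   weight = {w[m]:.4f}")
```

Full diagonalization is adequate at this size; for the cluster sizes actually used in the paper a Lanczos-type method would be required.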
It is instructive to consider $`S(𝐪,\omega )`$ for the single chain (the $`J_c`$=0 case) in order to understand the effect of the IC interaction on the magnetic excitation. The calculation is performed by applying the exact diagonalization technique to a 20-site single chain with $`J_1`$=−100 K and $`J_2`$=40 K. Because the ratio $`J_2`$/$`|J_1|`$(=0.4) is more than 1/4, the system is described as a frustrated chain. The calculated results of $`S(𝐪,\omega )`$ are shown in Fig. 3(a), in which $`q_b`$ denotes a momentum in the chain direction. The spectra have a period of $`2\pi `$, and are symmetric with respect to $`q_b=\pi `$. Since the ground state has incommensurate spin correlation, the spectral intensity is much larger at $`q_b`$ ≈ $`\pi /2`$ and $`3\pi /2`$ than at other momenta. Much of the spectral weight is concentrated in the very low energy region ($`\omega `$ ≲ 2 meV), and small weight spreads over the high energy region.
We turn our attention to the system with finite $`J_c`$. Since we are interested in the effect of the IC interaction on the spectra of frustrated chain, $`S(𝐪,\omega )`$ along the chain direction is examined. $`S(𝐪,\omega )`$’s for 12$`\times `$2 and 6$`\times `$4 clusters, where $`𝐪`$ is the two-dimensional vector with $`𝐪=(q_b,q_c)`$, are calculated to understand the effect of the IC interaction. By comparing results between the two clusters, we found that there is no essential difference along the chain direction. Therefore, we consider $`S(𝐪,\omega )`$ for the 12$`\times 2`$ cluster.
The operator $`S_𝐪^z`$ for the 12$`\times `$2 cluster is written as
$`S_𝐪^z={\displaystyle \frac{1}{\sqrt{N_s}}}{\displaystyle \underset{i}{}}e^{iq_by_i}(S_{A,i}^z+S_{B,i}^ze^{i(\frac{q_b}{2}+\frac{q_c}{2})}),`$ (2)
where $`S_{A,i}^z`$ and $`S_{B,i}^z`$ are the $`z`$ components of the spin operators at the $`i`$-th sites on the $`A`$ and $`B`$ chains in Fig. 1(c), respectively, $`y_i`$ is the $`y`$ component of the position vector at the $`i`$-th sites, and $`N_s`$ is the number of Cu sites. Since in the case of finite $`J_c`$ the system has two Cu sites in the unit cell, as shown in Fig. 1(c), the lattice period in the chain direction is changed from $`|𝐛|`$ to $`|𝐛|/2`$. Correspondingly, the period of $`S(𝐪,\omega )`$ is changed from $`2\pi `$ to $`4\pi `$.
The result of $`S(𝐪,\omega )`$ for the 12$`\times `$2 cluster is shown in Fig. 3(b), where the momentum $`q_c`$ is taken to be 0. The spectrum is completely different from that for the single chain shown in Fig. 3(a). Strong intensity is seen at $`q_b`$ ≈ $`2\pi `$, and thus the spectrum becomes asymmetric with respect to $`q_b=\pi `$. The asymmetry is understood as follows. For example, consider the spectra at $`𝐪=(0,0)`$ and $`(2\pi ,0)`$. $`S_𝐪^z`$ at these momenta is given by $`\frac{1}{\sqrt{N_s}}\sum _i(S_{A,i}^z+S_{B,i}^z)`$ and $`\frac{1}{\sqrt{N_s}}\sum _i(S_{A,i}^z-S_{B,i}^z)`$, respectively. From these expressions, it is clear that the weight at $`𝐪=(2\pi ,0)`$ is larger than that at $`𝐪=(0,0)`$ when the $`A`$ and $`B`$ chains are coupled antiferromagnetically. A flat dispersion with small intensity is seen around $`q_b`$ ≈ $`\pi `$ at $`\omega `$ ≈ 5 meV. This is a remnant of the continuum seen in Fig. 3(a). In addition, a dispersive spectrum with an energy maximum at $`q_b=\pi `$ is seen in Fig. 3(b). This dispersion is consistent with that obtained by the linear spin-wave theory in which the intrachain FM and interchain AFM ordering is assumed;
$`\omega (q_b)=\sqrt{\left[J_1(\mathrm{cos}q_b-1)+J_2(\mathrm{cos}2q_b-1)+8J_c\right]^2-64J_c^2\mathrm{cos}^2q_b\,\mathrm{cos}^2\frac{q_b}{2}}.`$ (4)
Here, we note that the zigzag-type $`J_c`$ is responsible for this dispersion. $`J_c`$ connects four spins in a chain with one spin in the neighboring chain, as shown in Fig. 1(c). Therefore, these four spins tend to align parallel so as to reduce the magnetic energy. This is why the IC interactions make a large contribution to the dispersion. The spectra of the dispersion are rather broad. This is clearly different from the single-peak structure in a simple FM chain, because the FM alignment in the chain is disturbed by the quantum fluctuation caused by $`J_c`$ and the frustration by $`J_2`$.
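The dispersion (4) is straightforward to evaluate numerically. The following sketch, assuming the couplings fitted above and the minus sign under the square root as written in Eq. (4), shows the Goldstone-like zero at $`q_b=0`$ and the maximum at $`q_b=\pi `$:

```python
import numpy as np

J1, J2, Jc = -100.0, 40.0, 16.0        # K; couplings determined from chi(T) in the text
K_TO_MEV = 0.0861733                   # Boltzmann constant in meV/K

qb = np.linspace(0.0, 2.0 * np.pi, 201)
A  = J1 * (np.cos(qb) - 1.0) + J2 * (np.cos(2.0 * qb) - 1.0) + 8.0 * Jc
B2 = 64.0 * Jc ** 2 * np.cos(qb) ** 2 * np.cos(qb / 2.0) ** 2
omega = np.sqrt(np.maximum(A ** 2 - B2, 0.0)) * K_TO_MEV

for q in (0.0, np.pi / 2.0, np.pi):
    i = int(np.argmin(np.abs(qb - q)))
    print(f"q_b = {qb[i]:.2f}: omega = {omega[i]:5.1f} meV")
# the maximum sits at q_b = pi, of order 30 meV for these couplings
```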
Finally, we discuss the recent inelastic neutron-scattering data along the b-axis for Li<sub>2</sub>CuO<sub>2</sub> in the light of our theoretical results. The experiment has shown that (i) when the momentum is far from the magnetic zone center $`q_b=2\pi `$, the spectral intensity is very small, and (ii) the lowest excitation appears at the magnetic zone center, and the dispersion has a minimum at $`q_b=\pi `$. Feature (i) is in good agreement with the momentum dependence of the spectral intensity shown in Fig. 3(b). For feature (ii), the structure at $`\pi `$ in the experiment may correspond to that at $`\omega `$ ≈ 5 meV which is a remnant of the frustrated state. On the other hand, the dispersion at $`\omega `$ ≈ 20 meV has not been observed experimentally. In order to find the dispersion, it will be necessary to perform a more detailed experiment, especially in the higher energy region.
We have studied the magnetic excitation spectra of Li<sub>2</sub>CuO<sub>2</sub>, and found that the zigzag-type IC interaction brings about the dramatic difference seen in the spectra between Fig. 3(a) and (b). Therefore, it is meaningful to examine the spectra of other edge-sharing compounds such as La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> and Ca<sub>2</sub>Y<sub>2</sub>Cu<sub>5</sub>O<sub>10</sub>, because they have different IC interactions, reflecting the difference of the crystal structures. For example, in La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub>, which has a structure similar to Sr<sub>14</sub>Cu<sub>24</sub>O<sub>41</sub>, IC interactions are expected to be smaller than those in Li<sub>2</sub>CuO<sub>2</sub>. We thus suppose that the weight of the dispersion in the high energy region (≈20 meV) is very small and much of the weight is concentrated in the low-energy region (≈5 meV) in La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub>, in contrast to Li<sub>2</sub>CuO<sub>2</sub>. A comparison among these materials will yield a clearer understanding of the effect of IC interactions on the magnetic excitation.
In summary, we have investigated the magnetic properties of Li<sub>2</sub>CuO<sub>2</sub> with edge-sharing chains, taking into account the interchain interaction. We determined the magnetic interactions in a chain and between chains by the analysis of the magnetic susceptibility. We also calculated the dynamical spin correlation function. The results show a dispersion with broad spectra induced by the interchain interaction. The zigzag-type interchain interaction causes dramatic effects on the magnetic properties of Li<sub>2</sub>CuO<sub>2</sub>. It is highly desirable that the inelastic neutron-scattering experiment be performed over a wide energy region to understand the characteristics of the dispersion along the chain.
We would like to thank M. Matsuda, H. Eisaki and K. Mochizuki for valuable discussions. This work was supported by a Grant-in-Aid for Scientific Research on Priority Areas from the Ministry of Education, Science, Sports and Culture of Japan. The parts of the numerical calculation were performed in the Supercomputer Center, Institute for Solid State Physics, University of Tokyo, and the supercomputing facilities in Institute for Materials Research, Tohoku University. Y. M. acknowledges the financial support of Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists.
# The $`c`$-term of the TM 3-body Force: to be or not to be
Dedicated to Prof. Dr. Walter Glöckle on the occasion of his birthday
## I Introduction
It is one of the great dreams in the field of few-nucleon physics to find a quantitatively correct and theoretically reasonable three-nucleon force (3NF). In the past many 3NF models were developed. An especially prominent one is the meson-theoretical 3NF, for instance in the form of the Tucson-Melbourne model (TM). The reason for studying 3NFs is the existence of disagreements between the 3N data and the theoretical predictions with NN forces only. First of all, the theoretical binding energy of <sup>3</sup>H falls short of the experimental value of 8.48 MeV by about 500-800 keV for recent realistic potentials (e.g. CD-Bonn, AV18, Nijmegen 93, Nijmegen I, II). These potentials describe all 2N observables to a degree of accuracy of $`\chi ^2/N_{data}`$ ≈ 1. In the low energy three nucleon continuum we have demonstrated that most of the observables agree well with the data using nucleon-nucleon (NN) forces only; however, there are exceptions. Some of them are well known, such as the “$`A_y`$ puzzle”. In the high energy region the theoretical predictions differ visibly from the data if one only takes NN forces into account. The “Sagara discrepancy” is an example of this. These problems definitely require a new Hamiltonian in the realm of the three nucleon system. Moreover, explaining the $`A_y`$-puzzle requires not only a $`2\pi `$-exchange 3NF but other 3NF mechanisms as well.
Beside these low-energy discrepancies in the 3N continuum there are also discrepancies at higher energies. This can be expected naively due to the shorter-range nature of the 3NF in comparison to the NN force. Recently it became possible to explain the discrepancies between experiment and predictions based on NN forces only for the neutron-deuteron (nd) total cross section and the nd elastic differential cross section by including the $`2\pi `$-exchange TM 3NF.
From the point of view of chiral perturbation theory, this special category of the TM 3NF should be modified. Chiral perturbation theory has been successfully applied in the $`\pi N`$ system, and it is already playing an important role in its application to the NN system as well. In it is recommended that the short range part of the $`c`$-term of the TM $`2\pi `$-exchange 3NF should be dropped, based on arguments from chiral perturbation theory. In doing this the pion range part of the $`c`$-term remains, which is of the same type as the $`a`$-term. This leads to a redefinition of $`a`$. The TM 3NF modified in this way, called TM’, has essentially the same effects on the 3N continuum as the original TM 3NF. Much remains to be investigated in the relation between NN forces and 3NFs.
In the next section we present calculations for the triton binding energy based on variations of the values of the strength parameters in the TM 3NF, individually and combined. The summary and the outlook are given in Section 3.
## II Variations of the Tucson-Melbourne 3NF and their Triton Binding Energies
The TM force has the operator form:
$`V_{TM}^{(3)}=\frac{1}{(2\pi )^6}\,\frac{g_{\pi NN}^2}{4m^2}\,\frac{F_{\pi NN}^2(\vec{q}^{\,2})\,F_{\pi NN}^2(\vec{q}^{\,\prime 2})\;\vec{\sigma }_1\cdot \vec{q}\;\;\vec{\sigma }_2\cdot \vec{q}^{\,\prime }}{(\vec{q}^{\,2}+m_\pi ^2)(\vec{q}^{\,\prime 2}+m_\pi ^2)}\,\left[O^{\alpha \beta }\tau _\alpha \tau _\beta \right],`$ (1)
$`O^{\alpha \beta }=\delta ^{\alpha \beta }\left[a+b\,\vec{q}\cdot \vec{q}^{\,\prime }+c\,(\vec{q}^{\,2}+\vec{q}^{\,\prime 2})\right]-d\,\tau _3^\gamma ϵ^{\alpha \beta \gamma }\,\vec{\sigma }_3\cdot (\vec{q}\times \vec{q}^{\,\prime }).`$ (2)
where $`m_\pi `$, $`m`$, $`g_{\pi NN}`$ and $`F_{\pi NN}(\vec{q}^{\,2})`$ are the pion mass, the nucleon mass, the $`\pi NN`$ coupling constant and the vertex function, respectively. The superscript (3) denotes that this expression is only one of three cyclically permuted parts of the total TM 3NF. There are four parameters ($`a`$, $`b`$, $`c`$ and $`d`$) which are chosen according to certain physical concepts. For practical calculations one needs to introduce the vertex function, which is normally chosen as
$`F_{\pi NN}(\vec{q}^{\,2})=\frac{\mathrm{\Lambda }^2-m_\pi ^2}{\mathrm{\Lambda }^2+\vec{q}^{\,2}}`$ (3)
The triton binding energy turns out to be strongly dependent on the cut-off parameter $`\mathrm{\Lambda }`$. In a phenomenological approach it can be used as a fit parameter to adjust the triton binding energy, and this separately for each NN potential. Using these cut-off parameters we calculated the polarization transfer parameter $`K_y^{y^{\prime }}`$ in the three-body continuum. While the individual pure NN force predictions are different, they essentially coincide once those individually adjusted 3NFs were included, and that prediction agrees rather well with the data. This is one example out of several where scaling with the triton binding energy exists for 3N continuum observables. In those studies we kept the original TM parameters ($`a`$, $`b`$, $`c`$ and $`d`$) and only varied the form-factor cut-off parameter $`\mathrm{\Lambda }`$.
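The strength of this $`\mathrm{\Lambda }`$ dependence is easy to illustrate. In the sketch below the momentum transfer $`q=2m_\pi `$ is a hypothetical choice made only for illustration; since each $`\pi NN`$ vertex in (1) carries $`F^2`$, the 3NF strength scales roughly like $`F^4`$:

```python
m_pi = 139.6                 # MeV; charged-pion mass (an illustrative choice)
q2 = (2.0 * m_pi) ** 2       # a typical momentum transfer, assumed for illustration

for x in (4.5, 5.0, 5.5, 6.0):               # Lambda in units of m_pi, bracketing typical fits
    lam2 = (x * m_pi) ** 2
    F = (lam2 - m_pi ** 2) / (lam2 + q2)     # eq. (3)
    print(f"Lambda = {x:.1f} m_pi:  F = {F:.3f}  F^4 = {F ** 4:.3f}")
```

Even over this modest range of $`\mathrm{\Lambda }`$, the factor $`F^4`$ changes by roughly 50%, which is why $`\mathrm{\Lambda }`$ can absorb the potential-to-potential differences in the triton binding energy.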
Now we want to go one step further and study phenomenologically the dependence of the triton binding energy on the individual terms in the TM 3NF operator connected to the $`a`$-, $`b`$-, $`c`$\- and $`d`$-terms. Like in , we solve the Faddeev equation rigorously including the 3NF. We choose CD-Bonn as the NN interaction. The original parameters of the TM model are given in Table 2.1.
The cut-off parameter $`\mathrm{\Lambda }`$ is not the original one but adjusted to reproduce the triton binding energy together with CD-Bonn. We multiply each one of the parameters $`a`$, $`b`$, $`c`$ and $`d`$ by a factor $`X`$ (0 ≤ $`X`$ ≤ 1.5), one after the other:
$`\left(\begin{array}{c}a\\ b\\ c\\ d\end{array}\right)\{\begin{array}{cc}\left(\begin{array}{c}aX\\ b\\ c\\ d\end{array}\right)\hfill & \text{case a}\hfill \\ \left(\begin{array}{c}a\\ bX\\ c\\ d\end{array}\right)\hfill & \text{case b}\hfill \\ \left(\begin{array}{c}a\\ b\\ cX\\ d\end{array}\right)\hfill & \text{case c}\hfill \\ \left(\begin{array}{c}a\\ b\\ c\\ dX\end{array}\right)\hfill & \text{case d}\hfill \end{array}`$ (4)
and determine the 3N binding energy for these four cases. The results are shown in Fig. 1.
We see that the parameter $`a`$ contributes negligibly to the 3N bound state and its presence or absence is unimportant. This explains why the predictions for the triton binding energy for the TM and TM’ 3NFs are close to each other. The $`b`$\- and $`d`$-terms, however, are important. The binding energy increases monotonically with their strength. Interestingly, the behaviour of the $`c`$-term is such that there are two solutions which lead to the experimental value. We find 0.150 as the new solution (see the star in Fig. 1).
Now, with the exception of $`a`$ we let all parameters float. The parameter $`a`$ is kept at its original value 1.13. We search for the sets ($`b`$, $`c`$, $`d`$) which fulfil the condition to produce the experimental binding energy. Thus we have now three independent variables $`X`$:
$`\left(\begin{array}{c}a\\ b\\ c\\ d\end{array}\right)\left(\begin{array}{c}a\\ bX_b\\ cX_c\\ dX_d\end{array}\right)`$ (5)
Fig. 2 shows contour plots for different $`X_c`$ and $`X_b`$ while keeping $`X_d`$ at different fixed values. Each line in Fig. 2 corresponds to the same binding energy (8.48MeV). The black spot indicates the position for the original values of the parameters ($`X_b`$=$`X_c`$=$`X_d`$=1). The star is as in Fig. 1. We see that these two solutions for $`c`$ found above lie on the line in Fig. 2. The $`b`$\- and $`c`$\- values to the left (right) of a curve for a particular value of $`d`$ lead to over-binding (under-binding).
A crucial rule corresponding to the properties of the $`\mathrm{\Delta }`$ particle excitation mechanism is that the ratio $`b/d`$ equals 4. The Urbana-Argonne 3NF follows this rule, since except for a phenomenological short range term it includes only the $`2\pi `$-exchange with an intermediate $`\mathrm{\Delta }`$ isobar, the Fujita-Miyazawa 3NF. The TM value for the ratio $`b/d`$ is 3.43, since in this model the $`b`$\- and $`d`$-terms include other processes on top of the $`2\pi `$-exchange with an intermediate $`\mathrm{\Delta }`$. Also $`b`$ and $`d`$ are larger for the TM 3NF than for the pure $`\mathrm{\Delta }`$ $`2\pi `$-exchange. The values for $`b`$ and $`d`$ of the Brazil and RuhrPot 3NFs are close to those of the TM 3NF. The Texas 3NF, based on chiral perturbation theory, has even larger values for $`b`$ and $`d`$. This shows that it is not at all clear which values for the strength parameters within the 3NF one should choose.
In Fig. 3 we show the curve for $`b`$=4$`d`$, with of course the additional requirement that the triton binding energy has the experimental value. As in Fig. 2 the underlying NN potential is CD-Bonn. Of course, in looking at Fig. 3 one should keep in mind that the choice (3) for the $`NN\pi `$ form-factor leads to a very strong dependence of the strength of the 3NF on the cut-off parameter $`\mathrm{\Lambda }`$ as well.
In Table II of , the locations of the $`b`$ and $`c`$ parameters for several 3NFs are indicated. Note however that those 3NFs are not adjusted to the triton binding energy together with the CD-Bonn potential.
## III Summary and outlook
Stimulated by , we studied the binding energy of <sup>3</sup>H as a function of the strength parameters ($`a`$, $`b`$, $`c`$ and $`d`$ in (2)) in the TM force. We find that the $`a`$-term is not decisive when varied in the interval 0 ≤ $`a`$ ≤ 2. The $`b`$\- and $`d`$-terms, however, are very important to obtain the experimental value 8.48 MeV. Varying $`c`$ from 0 to 1.5 $`\times c_{TM}`$ we find that there are two solutions which belong to the same binding energy. The new solution is 15% of the original value, namely 0.15 \[$`m_\pi ^{3}`$\]. It supports phenomenologically the recommendation given in (based on arguments from chiral perturbation theory) that the short-range part of the $`c`$-term in the TM force should be dropped.
If one assumes a purely phenomenological point of view for choosing the values of $`a`$ to $`d`$ in a 3NF of the form (2), Fig. 2 provides a complete overview of the possible values under the requirement that the triton binding energy is reproduced together with the CD-Bonn potential. Clearly corresponding pictures could be obtained for other NN potentials. Of course other 3NF mechanisms have to be explored, too. At least for the $`A_y`$-puzzle it is clear that the $`2\pi `$-exchange 3NF is not sufficient to explain this discrepancy between theory and data. A study of pion range - short range 3NF terms, which are predicted by chiral perturbation theory, is underway.
Based on the chosen form (2) and the requirement to fit the triton binding energy, those 3NFs can now be tested in the 3N continuum at high energies. At IUCF, RIKEN and KVI measurements are underway for 3N observables between 100-300 MeV. These are cross sections and various spin observables. They will be analysed using the 3NFs fixed in Fig. 2. This might allow one to find a preference for a certain region in that parameter space or will show that additional forms are needed.
Acknowledgements
This paper is dedicated to Prof. Dr. Walter Glöckle on the occasion of his 60th birthday. This work is financially supported by the Deutsche Forschungsgemeinschaft under Project No. Gl 87/19-2, No. Hu 746/1-3 and No. Gl 87/27-1, and partially under the auspices of the U.S. Department of Energy. Parts of the numerical calculations were performed on the CRAY T3E of the John von Neumann Institute for Computing, Jülich, Germany.
# A Plasma Instability Theory of Gamma-Ray Burst Emission
## 1 Overview
Current observations of gamma-ray bursts place a number of strong constraints on gamma-ray burst physics. The measured redshifts of $`z=0.835`$, $`0.966`$, and $`1.61`$ for lines in the optical emission of gamma-ray bursts GRB 970508 (Metzger et al. 1997a,b), GRB 980703 (Djorgovski et al. 1998a,b), and GRB 990123 (Kelson et al. 1999), respectively, and the indirect redshifts of $`z=3.5`$, $`5`$, and $`1.096`$ for GRB 971214 (Kulkarni, S. R., et al. 1998), GRB 980329 (Fruchter 1999), and GRB 980613 (Djorgovski et al. 1999), respectively, show that gamma-ray bursts are at extraordinarily high redshifts. The power-law gamma-ray spectrum above $`511\text{keV}`$ and the rapid rise times of the gamma-ray light-curves force one to consider theories with highly-relativistic bulk motion in order to avoid thermalization of the gamma-rays through photon-photon pair creation (Schmidt 1978; Baring & Harding 1996). The gamma-ray spectrum can be characterized by the E-peak energy ($`E_p`$), which is the photon energy of the maximum of the $`\nu F_\nu `$ spectrum. Because the distribution of $`E_p`$ values is narrow, with an average value of $`E_p`$ ≈ 250 keV (Mallozzi et al. 1996; Brainerd et al. 1999), one suspects that the characteristics of the spectrum are independent of the bulk Lorentz factor, which should vary greatly from burst to burst. In some bursts, the shape of the gamma-ray spectrum is inconsistent with optically thin synchrotron emission (Preece et al. 1998a,b). The shape of the burst spectrum is consistent with being Compton attenuated by a high density ($`10^5\text{cm}^{-3}`$) interstellar medium (Brainerd 1994; Brainerd et al. 1996, 1998); observations show the x-ray excess predicted by this theory (Preece et al. 1996), and the redshifts of $`1`$ to $`10`$ derived by fitting the model to burst spectra are consistent with the values measured at optical wavelengths (Preece & Brainerd 1999). The success of this theory, its presence in every gamma-ray burst, implies that a high density interstellar medium is necessary for a burst to occur.
The observations suggest that the sources of gamma-ray bursts are compact—perhaps a black hole of several solar masses, perhaps a supermassive black hole—and that these sources eject mass at relativistic velocities in one short event. The observed behavior of the gamma-ray burst arises from processes that convert the kinetic energy of the ejected material into electromagnetic radiation.
The general view that has developed concerning the gamma-ray emission of gamma-ray bursts is that it is radiated behind a shock that has developed within a shell moving with a bulk Lorentz factor of $`\mathrm{\Gamma }`$ ≈ $`10^3`$ (Mészáros 1998). The radiative mechanism universally cited is synchrotron emission from electrons accelerated to an energy close to the equipartition energy (Tavani 1996). The shock itself is collisionless, arising either when the shell runs into the interstellar medium, or when the fastest portions of the shell overtake slower portions (Rees & Mészáros 1992, 1994; Piran & Sari 1998; Mochkovitch & Daigne 1998). This model has several shortcomings. First, the gamma-ray burst spectrum is often harder at x-ray energies than is allowed by an optically-thin synchrotron emission model. Second, it provides no explanation for the x-ray excess, for the narrow $`E_p`$ distribution, or for the apparent absence of bursts with $`\mathrm{\Gamma }<10^3`$, as inferred from the absence of photon-photon pair creation and thermalization. From the theoretical side, many assumptions are made without theoretical development; among these are that collective processes exist that mediate the shock, that a strong magnetic field is generated in the shock, and that the energy is efficiently transferred from the bulk-motion of the ions to the thermal energy of the electrons.
It is with these difficulties in mind that a new theory for the generation of the prompt radiation in a gamma-ray burst is proposed. The theory is a plasma instability theory. In this theory, the shell of relativistic material that is ejected from the source interacts with the interstellar medium through the plasma filamentation instability, which, in the relativistic regime, has a higher growth rate than the two-stream instability. Because the mass ejection event must be short, of order the burst durations, and because the mass travels $`2c\mathrm{\Gamma }^2`$ times the burst duration, the interaction is of a thin shell with the interstellar medium. As the shell passes through the interstellar medium, the interstellar medium collapses into filaments that contain strong magnetic fields and high electron thermal energies. It is the interstellar medium that filaments rather than the shell, because the interstellar medium has the smaller density as measured in each comoving reference frame. The ions in the filaments remain essentially at rest with respect to the observer, while the electrons move towards the observer with a high bulk Lorentz factor. The magnetic fields generated through filamentation are strong enough to produce gamma-rays through synchrotron self-Compton emission. One finds that the theory places lower limits on both $`\mathrm{\Gamma }`$ and the interstellar medium density through selection effects, and that these lower limits lead to the conditions required by the Compton attenuation theory. The theory implies that there exists a class of burst that produces intense and prompt optical and ultraviolet emission, but no x-rays and gamma-rays.
In this article, I give an analytic development of the theory outlined above. This development explores the relevant plasma and radiative processes, deriving the selection effects inherent in the theory, and ascertaining the aspects of the theory that provide means of observationally testing the theory. In §2, the characteristic ratio of shell density to interstellar medium density and the characteristic thickness of the shell are discussed. In §3, the growth of the two-stream and filamentation plasma instabilities is examined. The saturation of the filaments and the inability of the filamentation instability to mediate a shock are examined in §4. The electron thermalization and isotropization and the rest frame defined by the electron component are discussed in §5. The radiative processes of synchrotron emission and synchrotron self-Compton emission are examined in §6, where the characteristic frequencies and emission rates of each are derived. In §7, the radiative timescales for synchrotron and synchrotron self-Compton cooling are derived. The theory has several natural selection effects that constrain several of the free parameters in the theory. These are discussed in §8. The basic theory and the results of this study are summarized in §9. This section also contains some suggestions for observational tests of the validity of the theory.
## 2 Characteristic Model Parameters
The model is of a fully-ionized neutral shell of electrons and protons passing through a fully-ionized interstellar medium of electrons and protons. The characteristic physical parameters that describe this theory are $`n_{ism}`$, the number density of the electrons or of the protons in the interstellar material, $`n_{shell}^{}`$, the number density of the electrons or protons in the relativistic shell, and $`\mathrm{\Gamma }`$, the Lorentz factor of the shell relative to the interstellar medium. The quantities with primes are measured in the shell rest frame, while those without primes are measured in the interstellar medium rest frame. An important parameter in the discussion that follows is the parameter $`\eta `$, defined as
$$\eta =\frac{n_{ism}}{n_{shell}^{}}.$$
(1)
There is no simple connection between $`\eta `$, $`\mathrm{\Gamma }`$, and the parameters that describe the energetics of the system. In particular, to relate $`\eta `$ to $`ℳ`$, the mass per steradian of the relativistic shell, and $`R`$, the distance traveled before deceleration, requires a good understanding of the radiative transfer and plasma physics of the problem. The value of $`\eta `$ is therefore treated as a free parameter, with the limits on the acceptable values of $`\eta `$ set by a combination of theoretical and observational constraints. The former is set in this section, while the latter is set in §8.
A specific value of $`\eta `$ that has a physical significance is the value of $`\eta `$ expected from the continuity equation for a relativistic shock. For $`\mathrm{\Gamma }`$ ≫ 1, one has $`n_{shell}^{}=n_{ism}\mathrm{\Gamma }`$, and
$$\eta _{shock}=\frac{1}{\mathrm{\Gamma }}.$$
(2)
When $`\eta <\eta _{shock}`$, the interstellar medium density in the shell rest frame is less than the shell density, while it is greater than the shell density when $`\eta >\eta _{shock}`$.
A lower limit $`\eta _{min}`$ on $`\eta `$ can be derived through a theoretical argument concerning pressure equilibrium within the shell. If one assumes that the interaction with the interstellar medium exerts a force only at the front of the shell, then one can relate $`\eta `$ to the temperature of the shell and $`R`$, the distance the shell has traveled from the source when $`\mathrm{\Gamma }`$ is a factor of 2 below its initial value. This defines a maximum value for the shell density, and therefore a minimum value for $`\eta `$, because when the deceleration force is spread throughout the shell, a lower pressure is required to maintain static equilibrium within the shell. Defining the dimensionless 4-velocity in the radial direction as $`u_r`$, the deceleration of the shell is given by
$$\frac{cℳ}{R^2}\frac{du_r}{d\tau }=\mathrm{\Gamma }p^{}=\mathrm{\Gamma }n_{shell,max}^{}T.$$
(3)
In this equation, the pressure $`p^{}`$ is the pressure exerted on the shell by the interaction with the interstellar medium as measured in the shell rest frame. The factor of $`\mathrm{\Gamma }`$ converts this proper measure of force into the radial component of the 4-acceleration. The parameter $`T`$ is the sum of the electron and proton temperatures in the shell in units of energy. Changing the derivative on the left-hand side of equation (3) into a derivative in $`r`$ and setting $`dr`$ ≈ $`R`$ and $`du`$ ≈ $`\mathrm{\Gamma }`$ gives a maximum density of
$$n_{shell,max}^{}≈\frac{c^2ℳ\mathrm{\Gamma }}{2R^3T}.$$
(4)
The value of $`R`$ in this equation is bounded by a minimum distance $`R_0`$ that is set by the case in which the interstellar medium is swept up by the shell. The amount of interstellar medium per steradian that must be swept up to change $`\mathrm{\Gamma }`$ by a factor of 2 is $`ℳ/\mathrm{\Gamma }`$, so
$$R_0=\left(\frac{3ℳ}{m_pn_{ism}\mathrm{\Gamma }}\right)^{1/3}=3.94\times 10^{-3}\text{pc}\,ℳ_{27}^{1/3}\,n_{ism}^{-1/3}\,\mathrm{\Gamma }_3^{-1/3},$$
(5)
where $`ℳ_{27}`$ is the shell mass per steradian in units of $`10^{27}`$ g, $`n_{ism}`$ is given in units of $`\text{cm}^{-3}`$, and $`\mathrm{\Gamma }_3`$ is the Lorentz factor in units of $`10^3`$. With these parameters, a shell subtending $`4\pi `$ steradians and having $`ℳ_{27}=n_{ism}=\mathrm{\Gamma }_3=1`$ will carry $`1.129\times 10^{52}\text{ergs}`$ of energy, which is the characteristic value inferred from the observations at gamma-ray energies. Using equation (5) to express $`R`$ in units of $`R_0`$ in equation (4) gives
$$n_{shell,max}^{}=\frac{m_pc^2n_{ism}\mathrm{\Gamma }^2}{6T}\left(\frac{R_0}{R}\right)^3.$$
(6)
The ratio of $`n_{ism}`$ to $`n_{shell}^{}`$ is then
$$\eta _{min}=\frac{n_{ism}}{n_{shell,max}^{}}=\frac{6T}{m_pc^2\mathrm{\Gamma }^2}\left(\frac{R}{R_0}\right)^3.$$
(7)
The value of $`\eta _{min}`$ is strongly dependent on the distance traveled. For deceleration over $`R=R_0`$ with $`T=m_pc^2\mathrm{\Gamma }`$, which is the temperature found when the interstellar medium is swept up adiabatically, one finds $`\eta _{min}`$ ∼ $`\mathrm{\Gamma }^{-1}=\eta _{shock}`$, as expected for a shock. For lower values of $`T`$, one has $`\eta _{min}<\eta _{shock}`$ unless $`R>R_0`$ by a sufficiently large value. For $`T`$ ∼ $`m_ec^2\mathrm{\Gamma }`$, which is the case for the theory discussed below, one has $`\eta <\eta _{shock}`$ for $`R<6.7R_0`$. Lower limits on $`\eta `$ for $`T=m_ec^2\mathrm{\Gamma }`$ and several different values of $`\mathrm{\Gamma }`$ are given as functions of $`R/R_0`$ in Figures 1 and 2.
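The scales quoted in this section are simple to verify numerically. Below is a minimal check in cgs units, assuming the fiducial values $`ℳ_{27}=n_{ism}=\mathrm{\Gamma }_3=1`$ and $`T=m_ec^2\mathrm{\Gamma }`$:

```python
m_p, m_e, c = 1.6726e-24, 9.1094e-28, 2.9979e10   # cgs constants
pc = 3.0857e18                                     # cm per parsec

M, n_ism, Gamma = 1.0e27, 1.0, 1.0e3               # g/sr, cm^-3, bulk Lorentz factor

R0 = (3.0 * M / (m_p * n_ism * Gamma)) ** (1.0 / 3.0)
print(f"R0 = {R0:.2e} cm = {R0 / pc:.2e} pc")      # ~3.9e-3 pc, as in eq. (5)

T = m_e * c ** 2 * Gamma                           # the case T ~ m_e c^2 Gamma
for R_over_R0 in (1.0, 6.7, 10.0):
    eta_min = 6.0 * T / (m_p * c ** 2 * Gamma ** 2) * R_over_R0 ** 3   # eq. (7)
    print(f"R = {R_over_R0:4.1f} R0: eta_min = {eta_min:.2e}")
# eta_min crosses eta_shock = 1/Gamma near R = 6.7 R0, as stated in the text
```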
The value of $`\eta `$ defines the light crossing time scale across the shell width. The thickness of the shell in the shell rest frame is related to the mass of the shell by
$$\frac{ℳ}{R^2}=m_pn_{shell}^{}l^{}.$$
(8)
Expressing $`R`$ in terms of $`R_0`$ through equation (5), one finds
$`l^{}`$ $`=`$ $`\left(\frac{ℳ}{m_pn_{ism}}\right)^{\frac{1}{3}}\left(\frac{\mathrm{\Gamma }}{3}\right)^{\frac{2}{3}}\left(\frac{R_0}{R}\right)^2\eta ,`$ (9)
$`=`$ $`4.05\times 10^{15}\text{cm}\,ℳ_{27}^{\frac{1}{3}}n_{ism}^{-\frac{1}{3}}\mathrm{\Gamma }_3^{\frac{2}{3}}\left(\frac{R_0}{R}\right)^2\eta _3.`$ (10)
where $`\eta _3=\eta /10^{-3}`$. The thickness of the shell in the interstellar medium rest frame is $`l=l^{}/\mathrm{\Gamma }`$. Defining the shell’s characteristic timescale $`t_{shell}`$ as $`l=ct_{shell}`$, one has
$`t_{shell}`$ $`=`$ $`\frac{\eta }{c}\left(\frac{ℳ}{9m_pn_{ism}\mathrm{\Gamma }}\right)^{\frac{1}{3}}\left(\frac{R_0}{R}\right)^2,`$ (11)
$`=`$ $`1.35\times 10^2\text{s}\,ℳ_{27}^{\frac{1}{3}}n_{ism}^{-\frac{1}{3}}\mathrm{\Gamma }_3^{-\frac{1}{3}}\left(\frac{R_0}{R}\right)^2\eta _3.`$ (12)
For $`\eta `$ of order $`1/\mathrm{\Gamma }`$, one requires that $`R=10R_0`$ at $`n_{ism}=1\text{cm}^{-3}`$ for $`t_{shell}=1\text{s}`$. Equation (12) is plotted in Figures 1 and 2 as functions of $`\eta `$ with respect to $`R/R_0`$ for several values of $`\mathrm{\Gamma }`$ and $`t_{shell}`$. Because $`t_{shell}`$ must be less than or equal to the burst timescales, which are often observed to be $`<1\text{s}`$, model values for $`\eta `$ must be $`\eta `$ ≪ 1. It is shown in §8 that the theory provides this limit on $`\eta `$.
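A corresponding check of equations (9)-(12), again for the fiducial parameters and $`\eta =10^{-3}`$:

```python
m_p, c = 1.6726e-24, 2.9979e10                 # cgs
M, n_ism, Gamma, eta = 1.0e27, 1.0, 1.0e3, 1.0e-3

for R_over_R0 in (1.0, 10.0):
    lp = (M / (m_p * n_ism)) ** (1 / 3) * (Gamma / 3.0) ** (2 / 3) * eta / R_over_R0 ** 2
    t  = lp / (Gamma * c)                      # l = l'/Gamma and t_shell = l/c
    print(f"R = {R_over_R0:4.1f} R0:  l' = {lp:.2e} cm,  t_shell = {t:6.2f} s")
# t_shell drops from ~135 s at R = R0 to ~1.4 s at R = 10 R0 for eta = 1e-3
```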
## 3 Plasma Instabilities
Within the reference frame of the shell, one initially has a plane-parallel charge-neutral density profile through which a neutral and uniform plasma streams. The shell is assumed to have no initial magnetic field. The dissipation of the bulk kinetic energy of a relativistic shell to the interstellar medium must be through a plasma instability, because the densities of the shell and the interstellar medium ensure that the collision mean free path is much longer than the thickness of the shell. For a relativistic plasma, the two relevant instabilities are the two-stream instability and the electromagnetic filamentation instability (Davidson 1990). For $`v`$ ≪ $`c`$, the former has a larger growth rate than the latter by a factor of $`c/v`$, but for $`\mathrm{\Gamma }`$ ≫ 1, the growth rate of the latter becomes much larger than the growth rate of the former.
The two-stream instability has a maximum growth rate in the relativistic regime of
$$\gamma _{2s}^{}=\frac{1}{2}\omega _{p,e,ism}^{}\mathrm{\Gamma }^{-\frac{3}{2}}≈\frac{1}{2}\sqrt{\frac{4\pi e^2}{m_e}}n_{ism}^{\frac{1}{2}}\mathrm{\Gamma }^{-1},$$
(13)
where $`\omega _{p,e,ism}^{}`$ is the electron plasma frequency of the interstellar medium in the shell rest frame. Taking $`\mathrm{\Gamma }=10^3`$ and $`n_{ism}=1\text{cm}^{-3}`$, one finds a growth rate for the electron two-stream instability of $`\gamma _{2s,e}^{}=30\text{s}^{-1}`$ in the shell frame. This growth rate corresponds to a distance of travel of ∼$`10^9\text{cm}`$ for the interstellar medium through the shell.
The filamentation instability acts on counterstreaming plasmas by creating a magnetic pinch. Of the two plasmas, the interstellar medium and the shell, that with the smaller number density in its own rest frame is the plasma that filaments. For gamma-ray bursts, this is the interstellar medium when $`\eta `$ ≪ 1.
The growth rate of the filamentation instability is easily derived from the dielectric tensor for cold electron and ion streams in the relativistic regime and in the absence of a magnetic field. The frequency must satisfy (Davidson 1990)
$$1+\sum _j\left[\frac{\omega _{pj}^2}{\mathrm{\Gamma }_j^3c^2k_{\perp }^2}+\frac{\beta _j^2\omega _{pj}^2}{\mathrm{\Gamma }_j\omega ^2}\right]=0,$$
(14)
where $`\omega _{pj}`$ is the plasma frequency of component $`j`$ for the density measured in the observer’s rest frame, $`\mathrm{\Gamma }_j`$ is this component’s Lorentz factor, and $`k_{\perp }`$ is the wave number perpendicular to the velocity vector of the streams.
If one is considering only a charge neutral stream of electrons and ions moving with Lorentz factor $`\mathrm{\Gamma }`$ through a charge neutral background, and if one defines the background plasma to be the shell, then equation (14) gives a frequency of
$$\omega ^{\mathrm{\hspace{0.17em}2}}=-\frac{\beta ^2\left(\omega _{p,e,ism}^{\mathrm{\hspace{0.17em}2}}+\omega _{p,i,ism}^{\mathrm{\hspace{0.17em}2}}\right)}{\mathrm{\Gamma }\left[1+\left(\omega _{p,e,shell}^{\mathrm{\hspace{0.17em}2}}+\omega _{p,i,shell}^{\mathrm{\hspace{0.17em}2}}\right)/c^2k_{\perp }^2\right]}.$$
(15)
In this equation, $`\omega _{p,e,ism}^{}`$ and $`\omega _{p,i,ism}^{}`$ are the electron and ion plasma frequencies of the interstellar medium, and $`\omega _{p,e,shell}^{}`$ and $`\omega _{p,i,shell}^{}`$ are the electron and ion plasma frequencies of the shell, all measured in the shell rest frame. The negative value of $`\omega ^{\mathrm{\hspace{0.17em}2}}`$ shows that the wave grows. Because $`\omega _{p,e,ism}^{}>\omega _{p,i,ism}^{}`$ and $`\omega _{p,e,shell}^{}>\omega _{p,i,shell}^{}`$, one finds that the growth rate of the filamentation of the electrons is
$$\gamma _{fe}^{}=\frac{\beta \omega _{p,e,ism}^{}}{\mathrm{\Gamma }^{\frac{1}{2}}\sqrt{1+\omega _{p,e,shell}^{\mathrm{\hspace{0.17em}2}}/c^2k_{\perp }^2}}≈\frac{1}{2}\sqrt{\frac{4\pi e^2}{m_e}}n_{ism}^{\frac{1}{2}}.$$
(16)
The growth rate is independent of wave length for $`k_{\perp }>\omega _{p,e,shell}^{}/c`$, and it is ∝ $`k_{\perp }`$ otherwise. The length scale of the filament is therefore given by $`x_f=k_{\perp }^{-1}=c/\omega _{p,e,shell}^{}`$. For ions alone, the growth rate, which differs from equation (16) only in the numerator, is given by
$$\gamma _{fi}^{}=\frac{\beta \omega _{p,i,ism}^{}}{\mathrm{\Gamma }^{\frac{1}{2}}\sqrt{1+\omega _{p,e,shell}^{\mathrm{\hspace{0.17em}2}}/c^2k_{\perp }^2}}≈\frac{1}{2}\sqrt{\frac{4\pi e^2}{m_p}}n_{ism}^{\frac{1}{2}}.$$
(17)
This has the same length scale as the electron filamentation instability, but a lower growth rate.
The filamentation instability growth rate is faster than the two-stream instability growth rate by a factor of order $`\mathrm{\Gamma }`$. For $`\mathrm{\Gamma }=10^3`$, $`n_{ism}=1\text{cm}^{-3}`$, one finds an electron filamentation growth rate of $`\gamma _{f,e}^{}=6\times 10^4\text{s}^{-1}`$, which gives a length scale of ∼$`10^6\text{cm}`$ over which the instability occurs. For ions, the filamentation instability grows at a rate that is a factor of $`\sqrt{m_e/m_p}`$ slower, so that for the parameters given above, $`\gamma _{f,p}^{}=1.4\times 10^3\text{s}^{-1}`$. The growth rates of all three instabilities are shown in Figures 3 and 4 as functions of $`\mathrm{\Gamma }`$. Both filamentation growth rates are therefore higher than the two-stream growth rate for electrons using the parameters derived above for gamma-ray bursts.
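The three growth rates follow directly from the interstellar-medium plasma frequencies. The sketch below evaluates them for the fiducial parameters; note that the filamentation rates quoted in the text correspond to $`\gamma `$ ≈ $`\omega _p`$, so these estimates agree with the quoted values to within a factor of order 2:

```python
import numpy as np

e, m_e, m_p = 4.8032e-10, 9.1094e-28, 1.6726e-24   # cgs
n_ism, Gamma = 1.0, 1.0e3

w_pe = np.sqrt(4.0 * np.pi * e ** 2 * n_ism / m_e)  # ISM electron plasma frequency
w_pi = np.sqrt(m_e / m_p) * w_pe                    # ISM ion plasma frequency

print(f"w_pe = {w_pe:.2e} /s,  w_pi = {w_pi:.2e} /s")
print(f"two-stream  gamma'_2s ~ w_pe/(2 Gamma) = {0.5 * w_pe / Gamma:6.1f} /s")
print(f"e-filament  gamma'_fe ~ w_pe           = {w_pe:.1e} /s")
print(f"i-filament  gamma'_fi ~ w_pi           = {w_pi:.1e} /s")
```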
## 4 Filament Saturation
The filaments grow until the growth rate of the thread equals the magnetic bounce frequency of the particle beam producing the instability (Davidson et al. 1972; Lee & Lampe 1973). The bounce frequency, which describes the motion of a particle across the filament through the toroidal magnetic field, is given by
$$\omega _b=\sqrt{eB/mc\mathrm{\Gamma }x_f},$$
(18)
where $`x_f`$ is the length scale of the filament, and $`m`$ is the mass of the particles comprising the filament. The size of the filaments is set by the lower limit on the wave number for which growth occurs, $`k_{\perp }c=\omega _{p,e,shell}^{}`$. The ratio of the magnetic field energy density to the particle beam energy density in the thread is then
$$\frac{W_B^{}}{W_{ism}^{}}=\frac{m_en_{ism}}{mn_{shell}^{}}=\frac{m_e}{m}\eta .$$
(19)
One point to note about this equation is that the electron and proton components each generate magnetic fields of the same strength. For both components, one has $`W_B^{}=m_ec^2n_{ism}\eta \mathrm{\Gamma }^2`$, which, for $`\mathrm{\Gamma }=\eta ^{-1}=10^3`$, gives $`B^{}=0.14`$ G with $`n_{ism}=1\text{cm}^{-3}`$, and $`B^{}=45.36`$ G with $`n_{ism}=10^5\text{cm}^{-3}`$. This independence is a consequence of the length scale of filamentation being set by the shell electron density.
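The quoted field strengths can be checked directly from the saturation condition $`W_B^{}=m_ec^2n_{ism}\eta \mathrm{\Gamma }^2`$ together with $`B^{}=\sqrt{8\pi W_B^{}}`$:

```python
import numpy as np

m_e, c = 9.1094e-28, 2.9979e10    # cgs
Gamma, eta = 1.0e3, 1.0e-3        # Gamma = 1/eta, as in the text

for n_ism in (1.0, 1.0e5):        # cm^-3
    W_B = m_e * c ** 2 * n_ism * eta * Gamma ** 2   # saturated field energy density
    B = np.sqrt(8.0 * np.pi * W_B)
    print(f"n_ism = {n_ism:.0e} cm^-3:  B' = {B:6.2f} G")
# reproduces B' = 0.14 G and ~45 G quoted in the text
```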
The thermalization of the ions within the thread can be estimated by examining the equation of motion at saturation. A particle’s equation of motion perpendicular to the thread for $`u_x`$ ≪ $`\mathrm{\Gamma }`$ is (Davidson et al. 1972)
$$\frac{d^2x}{d\tau ^2}=-\omega _b^2x.$$
(20)
Solving this equation of motion using a maximum spatial amplitude of $`x_f`$, one finds that the maximum momentum perpendicular to the filament is $`u_x=x_f\omega _b\mathrm{\Gamma }/c`$. Replacing $`x_f`$ with the length scale of the filament, and replacing the bounce frequency with the filament growth rate, one finds
$$u_x=\sqrt{\frac{W_B^{}}{W_{ism}^{}}}\mathrm{\Gamma }.$$
(21)
If $`u_x`$ ≲ 1, then the energy that goes into thermalizing the particles in the stream is approximately given by $`W_{th}^{}/W_{ism}^{}=u_x^2/\mathrm{\Gamma }`$, while if $`u_x`$ ≳ 1, it is given by $`W_{th}^{}/W_{ism}^{}=u_x/\mathrm{\Gamma }`$. As a result,
$$\frac{W_{th}^{}}{W_{ism}^{}}=\mathrm{min}(\frac{m_e}{m}\eta \mathrm{\Gamma },\sqrt{\frac{m_e}{m}\eta }).$$
(22)
A point to note is that both of the terms on the right-hand side are greater than $`W_B^{}/W_{ism}^{}`$, so that one always has $`W_{th}^{}>W_B^{}`$. Equations (19) and (22) are plotted in Figure 5 for $`m=m_p`$ and $`\mathrm{\Gamma }=10^3`$ and $`10^4`$.
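Equations (19), (21), and (22) are evaluated below for the fiducial parameters, for both the electron and proton streams; the output makes the inequality $`W_{th}^{}>W_B^{}`$ explicit:

```python
import numpy as np

m_e, m_p = 9.1094e-28, 1.6726e-24
Gamma, eta = 1.0e3, 1.0e-3

for m, label in ((m_e, "electrons"), (m_p, "protons")):
    u_x = np.sqrt(m_e / m * eta) * Gamma                       # eq. (21)
    f_th = min(m_e / m * eta * Gamma, np.sqrt(m_e / m * eta))  # eq. (22)
    f_B = m_e / m * eta                                        # eq. (19)
    print(f"{label}: u_x = {u_x:6.2f}, W_th/W_ism = {f_th:.1e}, W_B/W_ism = {f_B:.1e}")
```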
From these equations one sees that the filamentation instability cannot mediate a shock. If a shock were present, then $`\eta `$ would be given by equation (2). Placing this equation into equations (19) and (22) and setting $`m=m_p`$, one finds that $`W_B^{}/W_{ism}^{}=m_e/m_p\mathrm{\Gamma }`$ ≪ 1 and $`W_{th}^{}/W_{ism}^{}=\left(m_e/m_p\mathrm{\Gamma }\right)^{1/2}`$ ≪ 1. The energy released through the instability is small compared to the kinetic energy of the interstellar medium as measured in the shell rest frame. This contradicts the Rankine-Hugoniot equations, and so a shock never arises through the filamentation instability.
## 5 Electron Thermalization
Once ion filamentation is complete, electrons will attempt to come into equilibrium through the two-stream instability. This instability will drive the electrons to a distribution that is uniformly distributed in energy between the rest frame described by the ions in the filaments and the ions in the shell. The rest frame of the electron distribution must preserve both the electron charge density and the electron current, since the shell plus interstellar medium is charge-neutral. These two conditions define a rest frame for the electrons that has a Lorentz factor relative to the interstellar medium rest frame of
$$\mathrm{\Gamma }_e=\frac{\mathrm{\Gamma }+\eta }{\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}.$$
(23)
Relative to this rest frame, the shell is moving with
$$\mathrm{\Gamma }_s^{\prime \prime }=\frac{\eta \mathrm{\Gamma }+1}{\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}},$$
(24)
where the double primes are used to denote quantities measured in the electron rest frame.
In the electron rest frame, the electron density is given by $`n_e^{\prime \prime }=n_{ism}\mathrm{\Gamma }_e+n_{shell}^{}\mathrm{\Gamma }_s^{\prime \prime }`$, which can be written as
$$n_e^{\prime \prime }=n_{shell}^{}\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}.$$
(25)
The electron rest frame has nearly the same Lorentz factor as the shell rest frame as long as $`\eta `$ ≪ $`\mathrm{\Gamma }^{-1}`$. For 1 ≫ $`\eta `$ ≫ $`\mathrm{\Gamma }^{-1}`$, the Lorentz factor of the electrons is $`\mathrm{\Gamma }_e=\sqrt{\mathrm{\Gamma }/2\eta }`$, which is a factor of $`1/\sqrt{2\eta \mathrm{\Gamma }}`$ smaller than the Lorentz factor for the ions. The value $`\eta =\mathrm{\Gamma }^{-1}`$ is therefore an important transition point for the character of the radiation emitted by the shell, because the Lorentz boost of the radiation is smaller than the boost associated with the shell when $`\eta `$ ≫ $`\mathrm{\Gamma }^{-1}`$. On the other hand, because $`\eta <1`$, $`\mathrm{\Gamma }_e^{\prime \prime }=\mathrm{\Gamma }_e>\mathrm{\Gamma }_s^{\prime \prime }`$, so the two-stream instability drives the electron distribution to the energy defined by $`\mathrm{\Gamma }_e`$.
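The transition described here is easy to see numerically; the sketch below evaluates equations (23) and (24) for several values of $`\eta `$ at $`\mathrm{\Gamma }=10^3`$ and compares $`\mathrm{\Gamma }_e`$ with the asymptotic form $`\sqrt{\mathrm{\Gamma }/2\eta }`$:

```python
import numpy as np

Gamma = 1.0e3
for eta in (1.0e-4, 1.0e-3, 1.0e-2):
    root = np.sqrt(1.0 + eta ** 2 + 2.0 * eta * Gamma)
    G_e = (Gamma + eta) / root            # eq. (23)
    G_s = (eta * Gamma + 1.0) / root      # eq. (24)
    print(f"eta = {eta:.0e}:  Gamma_e = {G_e:7.1f},  Gamma_s'' = {G_s:5.2f},  "
          f"sqrt(Gamma/2 eta) = {np.sqrt(Gamma / (2.0 * eta)):7.1f}")
# for eta >> 1/Gamma the electron frame falls well below the shell frame
```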
The electron in the shell rest frame has a Larmor radius that is much larger than the filament length scale. Using the definitions for $`x_f`$, the gyroradius, and equation (19) for the magnetic field strength in the shell rest frame, one can write
$$\frac{r_e}{x_f}=u_e\frac{m_e}{m_p\eta }.$$
(26)
The gyroradius is therefore much larger than the filament width when
$$u_e≫\frac{m_p}{m_e}\eta .$$
(27)
For electrons thermalizing to the energy $`\mathrm{\Gamma }_e`$, $`u_e`$ ≈ $`\mathrm{\Gamma }/\sqrt{1+2\eta \mathrm{\Gamma }}`$, so that equation (27) becomes
$$\mathrm{\Gamma }\{\begin{array}{cc}\frac{m_p}{m_e}\eta ,\hfill & \text{if }\eta <\mathrm{\Gamma }^1\text{;}\hfill \\ 2\left(\frac{m_p}{m_e}\right)^2\eta ^3,\hfill & \text{otherwise.}\hfill \end{array}$$
(28)
One finds that the inequality holds for the upper term in equation (28) whenever $`\mathrm{\Gamma }>\sqrt{m_p/m_e}=42`$; below we show that $`\mathrm{\Gamma }\gtrsim 10^3`$, so if equation (28) is to fail, it will be for the lower term. For $`\eta \mathrm{\Gamma }>1`$, the inequality holds as long as
$$\eta \mathrm{\Gamma }<2^{-1/3}\left(\frac{m_e}{m_p}\right)^{\frac{2}{3}}\mathrm{\Gamma }^{\frac{4}{3}}=52.9\mathrm{\Gamma }_3^{\frac{4}{3}}.$$
(29)
When inequality (29) holds, each electron passes through many filaments in a single orbit, and the most energetic electrons move as a single fluid. When the inequality fails, the electrons are confined by the local conditions within each filament, and the motion of each electron is determined by those local conditions. For the remainder of the paper, we assume that equation (29) holds.
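As a quick numerical check (ours, not part of the original derivation), the bound in equation (29) can be evaluated for the fiducial Lorentz factors used in the figures.

```python
# Right-hand side of inequality (29); the Gamma values are assumed fiducials.
m_ratio = 1836.15                        # m_p / m_e
for Gamma in (1e3, 1e4):
    bound = 2.0**(-1.0/3.0) * m_ratio**(-2.0/3.0) * Gamma**(4.0/3.0)
    print(f"Gamma = {Gamma:.0e}:  eta*Gamma < {bound:.1f}")
```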
For the electrons to flow through the magnetic fields generated in the shell, an average electric field perpendicular to the magnetic field must exist in the shell rest frame. The electric field is not uniform, since the system is charge neutral, but exists only over distances of order the width of a filament, with values that are proportional to the magnetic field strength. When the gyroradius is large, the electron effectively sees an average electric field as it completes one orbit. This electric field gives the electron guiding center a velocity of $`U_s^{\prime \prime }`$ in the shell rest frame. The effective magnetic field strength in the electron rest frame is then
$$B^{\prime \prime }=\frac{B^{}}{\mathrm{\Gamma }_s^{\prime \prime }}.$$
(30)
Because the electron travels over many filaments, the orientation of the magnetic field changes dramatically over one gyroradius, so that the direction of the electron’s velocity vector is randomized through a random-walk process rather than through the rotation over one orbit. The electron crosses the filament width $`x_f`$ in the time $`\delta t^{\prime \prime }=x_f/c`$. In this time, its velocity direction rotates through the angle $`\theta \approx c\delta t^{\prime \prime }/r_e=x_f/r_e`$, where $`r_e`$ is the electron gyroradius. Because the motion is a random walk, the number of time intervals $`\delta t^{\prime \prime }`$ required to change direction by $`2\pi `$ is approximately $`n=4\pi ^2/\theta ^2`$. The total amount of time required to isotropize the motion of the electrons is therefore given by
$$t_{iso}^{\prime \prime }=n\delta t^{\prime \prime }=\frac{4\pi ^2r_e^2}{cx_f}=\frac{4\pi ^2c\mathrm{\Gamma }_e^2}{\omega _c^{\prime \prime \mathrm{\hspace{0.17em}2}}x_f},$$
(31)
where $`\omega _c^{\prime \prime }`$ is the cyclotron frequency. Using equations (19) and (30) to define the magnetic field in equation (31), one finds
$$t_{iso}^{\prime \prime }=n\delta t^{\prime \prime }=\frac{2\pi ^2\mathrm{\Gamma }_s^{\prime \prime }}{\omega _{p,e,ism}\eta ^{3/2}}=11.1\text{s}\,n_{ism}^{-\frac{1}{2}}\eta _3^{-\frac{3}{2}}\frac{\eta \mathrm{\Gamma }+1}{\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}.$$
(32)
For $`\mathrm{\Gamma }_s^{\prime \prime }\sim 1`$, this timescale is long compared to the two-stream instability timescale for the electrons.
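For concreteness, equation (32) can be evaluated with the interstellar electron plasma frequency written out explicitly; the following sketch (our own, in cgs units, with assumed parameter values) reproduces the quoted coefficient.

```python
# Isotropization time of eq. (32), cgs units; parameter values are assumed.
import numpy as np

e, m_e = 4.803e-10, 9.109e-28            # electron charge (esu) and mass (g)

def t_iso(n_ism, eta, Gamma):
    omega_p = np.sqrt(4.0*np.pi*n_ism*e**2/m_e)   # ISM electron plasma frequency
    Gamma_s = (eta*Gamma + 1.0)/np.sqrt(1.0 + eta**2 + 2.0*eta*Gamma)
    return 2.0*np.pi**2*Gamma_s/(omega_p*eta**1.5)

print(t_iso(1.0, 1e-3, 1e3))             # ~ 12.8 s = 11.1 s * 2/sqrt(3)
```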
## 6 Burst Radiation
Two radiative processes are present within the theory: synchrotron emission and Compton scattering. The synchrotron emission occurs isotropically in the electron rest frame described by equation (23). The observed synchrotron radiation is boosted into the observer’s reference frame by a factor of $`\mathrm{\Gamma }_e`$. Compton scattering of synchrotron radiation by the synchrotron-emitting electrons boosts the radiation by another factor of $`\mathrm{\Gamma }_e^2`$, because the electrons in this rest frame have a characteristic energy of $`m_ec^2\mathrm{\Gamma }_e`$.
The characteristic synchrotron frequency in the shell rest frame is given by $`h\nu ^{\prime \prime }/m_ec^2=2B^{\prime \prime }\mathrm{\Gamma }_e^2/3B_{cr}`$, where $`B_{cr}=m_e^2c^3/e\hbar `$. Transforming this into the observer’s reference frame and using equations (19) and (30) to remove $`B^{\prime \prime }`$, one finds that the characteristic synchrotron energy is
$`{\displaystyle \frac{h\nu _s}{m_ec^2}}`$ $`=`$ $`{\displaystyle \frac{2\sqrt{8\pi }\hbar e}{3m_e^{3/2}c^2}}\eta ^{\frac{1}{2}}n_{ism}^{\frac{1}{2}}{\displaystyle \frac{\mathrm{\Gamma }_e^3\mathrm{\Gamma }}{\mathrm{\Gamma }_s^{\prime \prime }}},`$ (33)
$`=`$ $`{\displaystyle \frac{2\sqrt{8\pi }\hbar e}{3m_e^{3/2}c^2}}\eta ^{\frac{1}{2}}n_{ism}^{\frac{1}{2}}{\displaystyle \frac{\left(\mathrm{\Gamma }+\eta \right)^3\mathrm{\Gamma }}{\left(1+\eta ^2+2\eta \mathrm{\Gamma }\right)\left(1+\eta \mathrm{\Gamma }\right)}},`$ (34)
$`=`$ $`2.17\times 10^{-6}\eta _3^{\frac{1}{2}}n_{ism}^{\frac{1}{2}}{\displaystyle \frac{\mathrm{\Gamma }_3^4}{\left(1+2\eta \mathrm{\Gamma }\right)\left(1+\eta \mathrm{\Gamma }\right)}},`$ (35)
where terms of order $`\eta `$ and higher were dropped in the last equation. Equation (35) places the characteristic energy in the optical band for the given characteristic parameter values and $`\eta \mathrm{\Gamma }<1`$. As $`\eta `$ increases above $`\mathrm{\Gamma }^{-1}`$, the characteristic observed energy falls as $`\eta ^{-3/2}`$, so $`\eta =\mathrm{\Gamma }^{-1}`$ defines the maximum characteristic frequency for a given value of $`\mathrm{\Gamma }`$. To have a characteristic photon energy of $`m_ec^2`$ at $`\eta =\mathrm{\Gamma }^{-1}`$ requires $`\mathrm{\Gamma }>6.93\times 10^4n_{ism}^{-1/7}`$.
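The quoted threshold follows from setting equation (35) to unity at $`\eta =\mathrm{\Gamma }^{-1}`$; a sketch of that check (ours, with an assumed fiducial density) is:

```python
# Eq. (35) in units of m_e c^2, evaluated at eta = 1/Gamma.
def hnu_sync(Gamma, eta, n_ism=1.0):
    G3, eta3 = Gamma/1e3, eta/1e-3
    return 2.17e-6*eta3**0.5*n_ism**0.5*G3**4/((1+2*eta*Gamma)*(1+eta*Gamma))

Gamma = 6.93e4
print(hnu_sync(Gamma, 1.0/Gamma))        # ~ 1, i.e. h*nu_s ~ m_e c^2
```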
The characteristic energy of the Compton emission after a single scattering is a factor of $`\mathrm{\Gamma }_e^2`$ larger than the synchrotron characteristic energy, so
$`{\displaystyle \frac{h\nu _C}{m_ec^2}}`$ $`=`$ $`{\displaystyle \frac{2\sqrt{8\pi }\hbar e}{3m_e^{3/2}c^2}}\eta ^{\frac{1}{2}}n_{ism}^{\frac{1}{2}}{\displaystyle \frac{\left(\mathrm{\Gamma }+\eta \right)^5\mathrm{\Gamma }}{\left(1+\eta ^2+2\eta \mathrm{\Gamma }\right)^2\left(1+\eta \mathrm{\Gamma }\right)}},`$ (36)
$`=`$ $`2.17\eta _3^{\frac{1}{2}}n_{ism}^{\frac{1}{2}}{\displaystyle \frac{\mathrm{\Gamma }_3^6}{\left(1+2\eta \mathrm{\Gamma }\right)^2\left(1+\eta \mathrm{\Gamma }\right)}}.`$ (37)
The characteristic energy of the Compton scattered radiation after a single scattering is at $`m_ec^2`$ when $`\mathrm{\Gamma }>1.47\times 10^3n_{ism}^{-1/11}`$. A second scattering takes the photon in the observer rest frame to the GeV energy range. Further scattering does not change the photon energy, because the photon energy is of order the characteristic electron energy after the second scattering.
Each of the radiative components spans a broad range of energies. The low end of the synchrotron emission is set by the cyclotron frequency, which is smaller than the characteristic synchrotron frequency by a factor of $`\mathrm{\Gamma }_e^2`$. For $`\eta \mathrm{\Gamma }=1`$, the cyclotron frequency is at $`\nu \approx 2.68\times 10^8\text{Hz}\,n_{ism}^{1/2}\mathrm{\Gamma }_3^{3/2}`$, so that synchrotron emission extends down to the radio band. An important point is that the cyclotron frequency is related to the plasma frequency in the electron rest frame as $`\nu _c^{\prime \prime }\approx \sqrt{2}\omega _{p,e}^{\prime \prime }\eta \mathrm{\Gamma }/2\pi \left(1+2\eta \mathrm{\Gamma }\right)^{1/4}`$, so that they are about equal, and the cyclotron photons escape the shell to the observer. The lowest energy of the Compton scattered radiation is the cyclotron frequency photons upscattered by $`\mathrm{\Gamma }_e^2`$, which means that the low end of the Compton scattered radiation equals the high end of the synchrotron emission. If most of the energy is in the Compton scattered component, and the synchrotron photon number spectrum falls faster than $`\nu ^{-2}`$, so that most of the energy is released at the low end of the spectrum, then the low end of the Compton spectrum will be larger than the high end of the synchrotron spectrum. This implies that optical and ultraviolet emission is part of a single smooth continuum that extends through the x-ray and gamma-ray bands, and that most of the energy emitted by the burst is released in the optical and ultraviolet.
The ratio of the synchrotron emission rate to the single-scattering Compton emission rate for a single electron is given by $`P_{sync}^{\prime \prime }/P_{c1}^{\prime \prime }=W_B^{\prime \prime }/W_{sync}^{\prime \prime }`$, where $`W_B^{\prime \prime }`$ and $`W_{sync}^{\prime \prime }`$ are the magnetic field and the synchrotron photon energy densities as measured in the electron rest frame. The synchrotron energy density is related to the single electron emission rate by $`W_{sync}^{\prime \prime }=P_{sync}^{\prime \prime }n_{emis}^{\prime \prime }l^{\prime \prime }/4\pi c`$, where $`n_{emis}^{\prime \prime }`$ is the density of electrons with Lorentz factor $`\mathrm{\Gamma }_e`$ in the electron rest frame. The synchrotron emission rate is given by $`P_{sync}^{\prime \prime }=4\sigma _Tc\mathrm{\Gamma }_e^2W_B^{\prime \prime }/3`$, so the ratio becomes
$`P_{sync}^{\prime \prime }/P_{c1}^{\prime \prime }`$ $`=`$ $`{\displaystyle \frac{3\pi \left(\eta \mathrm{\Gamma }+1\right)}{\sigma _T\left(\mathrm{\Gamma }+\eta \right)^2f_{emis}}}\left({\displaystyle \frac{9m_p}{Mn_{ism}^2\mathrm{\Gamma }^2}}\right)^{\frac{1}{3}}\left({\displaystyle \frac{R}{R_0}}\right)^2,`$ (38)
$`=`$ $`3.50\mathrm{\Gamma }_3^{-\frac{8}{3}}{\displaystyle \frac{\left(\eta \mathrm{\Gamma }+1\right)\mathrm{\Gamma }^2}{\left(\mathrm{\Gamma }+\eta \right)^2}}n_{ism}^{-\frac{2}{3}}M_{27}^{-\frac{1}{3}}f_{emis}^{-1}\left({\displaystyle \frac{R}{R_0}}\right)^2,`$ (39)
where $`M`$ is the mass of the relativistic shell and $`M_{27}=M/10^{27}\text{g}`$. The ratio of the synchrotron emission rate to the rate at which energy carried by the interstellar medium flows through the shell is given by
$`{\displaystyle \frac{\dot{E}_{sync}^{\prime \prime }}{m_pc^3n_{ion}\mathrm{\Gamma }_e^2}}`$ $`=`$ $`{\displaystyle \frac{4m_e\sigma _TM^{\frac{1}{3}}n_{ism}^{\frac{2}{3}}\mathrm{\Gamma }^{\frac{8}{3}}\eta \left(1+\eta \mathrm{\Gamma }\right)f_{emis}}{3^{\frac{5}{3}}m_p^{\frac{4}{3}}}}\left({\displaystyle \frac{R_0}{R}}\right)^2,`$ (40)
$`=`$ $`1.956\times 10^{-6}M_{27}^{\frac{1}{3}}n_{ism}^{\frac{2}{3}}\mathrm{\Gamma }_3^{\frac{5}{3}}\left(\eta \mathrm{\Gamma }\right)\left(1+\eta \mathrm{\Gamma }\right)f_{emis}\left({\displaystyle \frac{R_0}{R}}\right)^2.`$ (41)
For single scattering Compton cooling, this ratio is
$`{\displaystyle \frac{\dot{E}_{Comp}^{\prime \prime }}{m_pc^3n_{ion}\mathrm{\Gamma }_e^2}}`$ $`=`$ $`{\displaystyle \frac{2m_e\sigma _T^2M^{\frac{2}{3}}n_{ism}^{\frac{4}{3}}\mathrm{\Gamma }^{\frac{10}{3}}\eta \left(\mathrm{\Gamma }+\eta \right)^2f_{emis}^2}{3^{\frac{10}{3}}\pi m_p^{\frac{5}{3}}}}\left({\displaystyle \frac{R_0}{R}}\right)^4,`$ (42)
$`=`$ $`2.80\times 10^{-7}M_{27}^{\frac{2}{3}}n_{ism}^{\frac{4}{3}}\mathrm{\Gamma }_3^{\frac{13}{3}}\left(\mathrm{\Gamma }\eta \right){\displaystyle \frac{\left(\mathrm{\Gamma }+\eta \right)^2}{\mathrm{\Gamma }^2}}f_{emis}^2\left({\displaystyle \frac{R_0}{R}}\right)^4.`$ (43)
From these equations, one sees that the energy release is very inefficient unless $`n_{ism}`$ or $`\mathrm{\Gamma }`$ are larger than the characteristic values used in the calculations. We discuss this point in §8.
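The inefficiency at the fiducial parameters, and its cure at high density, can be seen by evaluating equations (41) and (43) directly; the sketch below (our own, with $`M_{27}=f_{emis}=1`$, $`R=R_0`$, and an assumed $`\eta `$) illustrates the point.

```python
# Radiative efficiencies of eqs. (41) and (43); M_27 = f_emis = 1 assumed.
def eff_sync(n, Gamma, eta, RR0=1.0):
    G3 = Gamma/1e3
    return 1.956e-6*n**(2.0/3)*G3**(5.0/3)*(eta*Gamma)*(1+eta*Gamma)/RR0**2

def eff_comp(n, Gamma, eta, RR0=1.0):
    G3 = Gamma/1e3
    return 2.80e-7*n**(4.0/3)*G3**(13.0/3)*(eta*Gamma)*((Gamma+eta)/Gamma)**2/RR0**4

for n in (1.0, 1e5):                      # low and high ISM densities
    print(f"n={n:.0e}: sync {eff_sync(n, 1e3, 1e-3):.2e}  "
          f"Compton {eff_comp(n, 1e3, 1e-3):.2e}")
```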
## 7 Radiative Timescales
The cooling time scale for synchrotron emission of a relativistic electron is found by dividing the electron energy $`m_ec^2\mathrm{\Gamma }_e`$ by $`P_{sync}^{\prime \prime }`$, the emissivity of a single electron. In the electron rest frame, the synchrotron cooling timescale is
$`t_{sync}^{\prime \prime }`$ $`=`$ $`{\displaystyle \frac{3}{4\sigma _Tc\eta n_{ism}}}{\displaystyle \frac{\left(\eta \mathrm{\Gamma }+1\right)^2}{\mathrm{\Gamma }^2\left(\mathrm{\Gamma }+\eta \right)\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}},`$ (44)
$`=`$ $`3.76\times 10^7\text{s}\,n_{ism}^{-1}\mathrm{\Gamma }_3^{-2}{\displaystyle \frac{\left(\eta \mathrm{\Gamma }+1\right)^2}{\eta \left(\mathrm{\Gamma }+\eta \right)\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}}.`$ (45)
The synchrotron self-Compton cooling timescale is found by multiplying equations (38) and (45) together, which gives
$`t_{Comp}^{\prime \prime }`$ $`=`$ $`{\displaystyle \frac{9\pi }{4\sigma _T^2cf_{emis}n_{ism}}}\left({\displaystyle \frac{9m_p}{Mn_{ism}^2}}\right)^{\frac{1}{3}}{\displaystyle \frac{\left(\eta \mathrm{\Gamma }+1\right)^3}{\eta \mathrm{\Gamma }^{\frac{8}{3}}\left(\mathrm{\Gamma }+\eta \right)^3\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}}\left({\displaystyle \frac{R}{R_0}}\right)^2,`$ (46)
$`=`$ $`1.32\times 10^8\text{s}\,M_{27}^{-\frac{1}{3}}n_{ism}^{-\frac{5}{3}}f_{emis}^{-1}\mathrm{\Gamma }_3^{-\frac{14}{3}}{\displaystyle \frac{\left(\eta \mathrm{\Gamma }+1\right)^3\mathrm{\Gamma }^2}{\eta \left(\mathrm{\Gamma }+\eta \right)^3\sqrt{1+\eta ^2+2\eta \mathrm{\Gamma }}}}\left({\displaystyle \frac{R}{R_0}}\right)^2.`$ (47)
These timescales are plotted in Figures 3 and 4 for $`n_{ism}=1\text{cm}^{-3}`$ and $`10^5\text{cm}^{-3}`$, with $`\left(R/R_0\right)^2=m_p/m_e`$, and $`\eta \mathrm{\Gamma }=1`$. One sees that for the higher densities, the Compton cooling timescale can fall below the timescale for isotropization. When this occurs, the radiative cooling will determine the shape of the electron distribution. The Compton cooling timescale is shorter than the isotropization timescale when
$$n_{ism}>1.16\times 10^6\text{cm}^{-3}\left(\frac{R}{R_0}\right)^{\frac{12}{7}}\mathrm{\Gamma }_3^{-\frac{37}{7}}\left[\frac{\eta ^{\frac{1}{2}}\mathrm{\Gamma }^{\frac{7}{2}}\left(\eta \mathrm{\Gamma }+1\right)^2}{\left(\mathrm{\Gamma }+\eta \right)^3}\right]^{\frac{6}{7}}.$$
(48)
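A consistency check on the two cooling times (our own sketch, with $`M_{27}=f_{emis}=1`$ and assumed parameter values) is that their ratio reproduces equation (39).

```python
# Cooling times of eqs. (45) and (47); their ratio should match eq. (39).
import numpy as np

def t_sync(n, Gamma, eta):
    G3   = Gamma/1e3
    root = np.sqrt(1 + eta**2 + 2*eta*Gamma)
    return 3.76e7/n/G3**2*(eta*Gamma + 1)**2/(eta*(Gamma + eta)*root)

def t_comp(n, Gamma, eta, RR0):
    G3   = Gamma/1e3
    root = np.sqrt(1 + eta**2 + 2*eta*Gamma)
    return (1.32e8/n**(5.0/3)/G3**(14.0/3)*(eta*Gamma + 1)**3*Gamma**2
            / (eta*(Gamma + eta)**3*root)*RR0**2)

n, Gamma, eta, RR0 = 1e5, 1e3, 1e-3, np.sqrt(1836.15)
print(t_sync(n, Gamma, eta), t_comp(n, Gamma, eta, RR0),
      t_comp(n, Gamma, eta, RR0)/t_sync(n, Gamma, eta))  # last value = eq. (39)
```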
## 8 Observational Consequences
The theory outlined above has within it two observational selection effects that define lower limits on the values of $`n_{ism}`$ and $`\mathrm{\Gamma }`$. These selection effects arise because gamma-ray bursts are recognized as such through the efficient emission of gamma-rays.
Because gamma-ray bursts are selected by their gamma-ray emission, one must have a value of $`\eta `$ that is sufficiently small to give gamma-rays through Compton scattering. Because the largest value of the photon energy occurs at $`\eta =1/\mathrm{\Gamma }`$, one can derive a lower limit on $`\mathrm{\Gamma }`$ by requiring the right hand side of equation (37) to be $`>1`$ at this value of $`\eta `$:
$$\mathrm{\Gamma }_3>0.831n_{ism}^{-\frac{1}{11}}.$$
(49)
The weak dependence on $`n_{ism}`$ implies that for all gamma-ray bursts, the bulk Lorentz factor $`\mathrm{\Gamma }>10^3`$. Events may occur with smaller $`\mathrm{\Gamma }`$, but these would emit in the optical and ultraviolet bands.
From Figures 6 and 7, one sees that an upper limit on the value of $`\eta `$ is found from equation (37) for $`\eta \mathrm{\Gamma }\gg 1`$. This limit is
$$\eta <\eta _{max}=1.03\times 10^{-3}n_{ism}^{\frac{1}{5}}\mathrm{\Gamma }_3^{\frac{6}{5}}.$$
(50)
These upper limits are plotted in Figures 1 and 2. An important aspect of these limits is that the timescales associated with the shell thickness are generally $`<1\text{s}`$. For the higher density figure, the time scales at $`R/R_0=0.1`$ range from $`0.01\text{s}`$ to $`1\text{s}`$, which is consistent with the shortest timescales exhibited by gamma-ray bursts.
Lower limits on the interstellar medium density are found by requiring that Compton scattering efficiently remove energy from the shell. Two conditions must be met: first, Compton cooling must dominate synchrotron cooling, and second, the Compton cooling rate must be comparable to the rate at which energy is lost as the shell decelerates over the distance $`R`$. The first of these conditions is derived from equation (39):
$$n_{ism}>6.54M_{27}^{-\frac{1}{2}}f_{emis}^{-\frac{3}{2}}\frac{\left(\eta \mathrm{\Gamma }+1\right)^{\frac{3}{2}}\mathrm{\Gamma }^3}{\left(\mathrm{\Gamma }+\eta \right)^3}\mathrm{\Gamma }_3^{-4}\left(\frac{R}{R_0}\right)^3.$$
(51)
The second of these conditions is found by equating the right hand side of equation (43) to $`\left(R_0/R\right)^3g`$, where $`g\le 1`$ is a measure of efficiency. This gives
$$n_{ism}=8.22\times 10^4\text{cm}^{-3}M_{27}^{-\frac{1}{2}}\mathrm{\Gamma }_3^{-\frac{13}{4}}f_{emis}^{-\frac{3}{2}}\left(\frac{R}{R_0}\right)^{\frac{3}{4}}g^{\frac{3}{4}}.$$
(52)
If $`g`$ is very small in equation (52), then most of the energy lost by the shell to the interstellar medium is not radiated away, making the burst dim and unobservable.
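The two density conditions can be compared directly; the sketch below (our own, with $`M_{27}=f_{emis}=1`$, $`\eta \mathrm{\Gamma }=1`$, and an assumed value of $`R/R_0`$) evaluates equations (51) and (52).

```python
# Density limits of eqs. (51) and (52); M_27 = f_emis = 1, eta*Gamma = 1.
def n_compton_dominates(Gamma, RR0):         # eq. (51); (eta*Gamma+1)^1.5 = 2^1.5
    return 6.54*2.0**1.5/(Gamma/1e3)**4*RR0**3

def n_efficient(Gamma, RR0, g=1.0):          # eq. (52)
    return 8.22e4/(Gamma/1e3)**(13.0/4)*RR0**0.75*g**0.75

RR0 = 1836.15**0.5                           # assumed R/R_0 = sqrt(m_p/m_e)
print(n_compton_dominates(1e3, RR0))         # ~ 1.4e6 cm^-3
print(n_efficient(1e3, RR0))                 # ~ 1.4e6 cm^-3
```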
The limits from equations (51) and (52) on $`n_{ism}`$ are plotted as functions of $`\mathrm{\Gamma }`$ in Figure 8. The point to note in this figure is that the density required for high efficiency cooling is very high, of order $`10^5\text{cm}^{-3}`$. This is important in explaining why all gamma-ray bursts have a peak value of the $`\nu F_\nu `$ curve near $`250\text{keV}`$. The high source density provides a medium for Compton attenuation of the burst spectrum. Because the scattering medium is at rest in the galaxy rest frame, the value of $`E_p`$ is independent of $`\mathrm{\Gamma }`$, even though the characteristic energy emitted by the shell is a strong function of $`\mathrm{\Gamma }`$. As a consequence, if the medium density is low enough to keep attenuation from occurring, then the density will be too low to efficiently produce gamma-ray emission. In such a circumstance, the shell will lose energy by thermalizing ions and electrons, but the thermal energy of the electrons will not be rapidly radiated away, making these particular sources invisible.
An interesting consequence of the density dependence of the theory is that as the density increases, the timescale for radiative cooling becomes shorter than the timescale for isotropization of the electron distribution. The implication is that the electron distribution should be coupled to the density of the interstellar medium, with the distribution falling more rapidly, and being more anisotropic, at the higher interstellar medium densities. Because the shape of the electron distribution determines the shape of the spectrum, one expects the burst spectrum to become softer for the higher interstellar medium densities, and therefore for the higher attenuation optical depths. This is the case observationally, so this theory may provide an explanation for that characteristic of the Compton attenuation theory.
The theory as constructed has two natural timescales, one from the thickness of the shell, and the second from the deceleration distance $`R`$. The shell thickness timescale is given by equation (12), and is of order $`2.9\text{s}`$ for $`n_{ism}=10^5\text{cm}^{-3}`$ and $`R/R_0=1`$. For the same density and $`R/R_0=10`$, the timescale falls to $`0.029\text{s}`$. The deceleration timescale is given by
$$t_R=\frac{R}{2c\mathrm{\Gamma }^2}=0.203\text{s}\,M_{27}^{1/3}n_{ism}^{-1/3}\mathrm{\Gamma }_3^{-7/3}\left(\frac{R}{R_0}\right),$$
(53)
where the definition of $`R_0`$ in equation (5) has been used. This timescale is less than the shell thickness timescale whenever
$$\frac{R}{R_0}>8.74\mathrm{\Gamma }_3^{\frac{2}{3}}\eta _3^{\frac{1}{3}}.$$
(54)
Because $`\frac{R}{R_0}`$ should be of order $`m_p/m_e`$, the two timescales are of the same order. When the two timescales are equal, the value is given by
$$t_R=t_{shell}=1.77\text{s}\,M_{27}^{1/3}n_{ism}^{-1/3}\mathrm{\Gamma }_3^{-5/3}\eta _3^{\frac{1}{3}}.$$
(55)
For $`n_{ism}=10^5\text{cm}^{-3}`$, the timescale is $`0.038\text{s}`$. The timescales associated with both the shell and the deceleration distance are sufficiently short to be responsible for the shortest timescales observed in gamma-ray bursts.
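Both estimates are easy to reproduce; the sketch below (ours, with $`M_{27}=1`$ and assumed parameter values) evaluates equations (53) and (55).

```python
# Deceleration timescale, eq. (53), and the equal-timescale value, eq. (55).
def t_R(n, Gamma, RR0):
    return 0.203/n**(1.0/3)/(Gamma/1e3)**(7.0/3)*RR0

def t_equal(n, Gamma, eta):
    return 1.77/n**(1.0/3)/(Gamma/1e3)**(5.0/3)*(eta/1e-3)**(1.0/3)

print(t_R(1e5, 1e3, 1.0))                # ~ 4.4e-3 s
print(t_equal(1e5, 1e3, 1e-3))           # ~ 0.038 s, as quoted above
```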
## 9 Summary of Conclusions
To summarize the theory presented above, a baryonic shell with an ultra-relativistic bulk velocity interacts with the interstellar medium through the filamentation and the two-stream plasma instabilities. The former instability gives rise to a magnetic field with a strength that is far below the equipartition value, and the latter instability heats the electrons to energies that are relativistic, but also far below the equipartition value. Neither instability is sufficient to produce a shock. Instead, the interstellar medium passes through the shell, so that the region behind the shell is not cleared of interstellar medium. The electrons within the shell produce synchrotron radiation with a characteristic energy in the observer’s rest frame that ranges from the radio to the ultraviolet. Synchrotron self-Compton emission by the electrons in the shell produces x-rays and gamma-rays in the observer’s rest frame, in addition to optical emission. The optical emission is dominated by the synchrotron self-Compton component. The timescales associated with the shell thickness and the length scale over which the shell decelerates provide a lower limit on the burst durations. The burst duration itself would be determined by the complex structure of the relativistic wind, since the interstellar medium remains in place, permitting multiple shells to each produce gamma-ray emission.
Two conditions must be met for the interaction between shell and interstellar medium to efficiently produce gamma-rays: first, the bulk Lorentz factor must be $`>10^3`$ in order to produce radiation above $`1\text{keV}`$; second, the number density of the interstellar medium must be greater than $`10^6\text{cm}^{-3}\mathrm{\Gamma }_3^{-\frac{13}{4}}`$, where $`\mathrm{\Gamma }_3`$ is the bulk Lorentz factor in units of $`10^3`$, for the thermal energy to be radiated efficiently. The lower limit on $`\mathrm{\Gamma }`$ through the selection effect provides an explanation for why the value of $`\mathrm{\Gamma }`$ is always sufficiently high to allow the escape of $`1\text{MeV}`$ photons from the gamma-ray burst emission region without the production of an electron-positron plasma from photon-photon pair creation and the subsequent thermalization of the radiation. The limit on density provides an explanation of why all gamma-ray burst spectra appear to be Compton attenuated. Because the limits on $`\mathrm{\Gamma }`$ and $`n_{ism}`$ are from selection effects in observing the emission of gamma-rays, one expects there to be burst events with values of $`\mathrm{\Gamma }`$ and $`n_{ism}`$ outside these limits. Bursts with low $`\mathrm{\Gamma }`$ radiate at energies below $`1\text{keV}`$, so one expects a population of optical and ultraviolet transients that have no gamma-ray emission. For bursts with low density, the radiation of energy is inefficient, so that the bursts are of low intensity. These bursts may appear in burst samples through a correlation of burst intensity with the interstellar medium density inferred from the Compton attenuation model.
An aspect of the theory that provides a test is the comparison of instantaneous gamma-ray, x-ray, and optical emission to the radio and optical emission. There are three aspects of the theory to test. First, one can test whether the broad band spectrum is consistent with being a synchrotron spectrum at low frequency and a Compton scattered synchrotron spectrum at higher frequency. Second, one can test the consistency of physical parameters in the theory; this is done by comparing the Thomson and photoelectric optical depths found through a fit of the Compton attenuation model to the optical attenuation derived under the assumption that the unattenuated gamma-ray continuum extends to the optical band. Third, if the optical spectrum is sufficient to fix the cyclotron frequency by determining the low energy drop-off of the Compton spectrum, one can solve for the value of the Lorentz factor, which will provide a consistency test through the lower limit on $`\mathrm{\Gamma }`$. If one can model the x-ray afterglow of a burst as the forward scattering of x-rays by dust, then one has additional information about the optical extinction that can be used in this comparison.
The afterglow behavior predicted by this theory has yet to be developed. An important difference from the shock theory of afterglows is that the region behind the shell will emit afterglow radiation in competition with the radiation from the decelerated shell. Because the evolution of the afterglow from the interstellar medium is determined by the broadening of the look-back surface and the radiative cooling of the interstellar medium, while the evolution of the shell radiation is determined by the decrease in $`\mathrm{\Gamma }`$ and the evolution of the thermal structure of the shell, the theory should have two distinct afterglow components that produce a complex afterglow behavior.
Numerical modeling of the plasma processes can lead to additional observational tests of the theory. In particular, the numerical modeling of the electron distribution for the regime where the electron isotropization timescale exceeds the Compton cooling timescale (Fig. 2) may provide a test through the correlation of Thomson optical depth with spectral hardness. One suspects that as $`\mathrm{\Gamma }`$ increases, the synchrotron spectrum becomes softer, because an electron radiatively cools before it isotropizes, making the electron distribution one-dimensional. Because the lower limit on $`n_{ism}`$ is inversely related to $`\mathrm{\Gamma }`$, one expects $`n_{ism}`$, and therefore the Thomson optical depth, to be smaller for softer spectra. This conjecture requires numerical verification; if it is verified, then one can use the correlation of unattenuated spectral index with Thomson optical depth as a test of the theory. There is already some evidence of such a correlation (Brainerd et al. 1998, Fig. 8a).
Two theoretical investigations are now warranted. The first is a numerical calculation of the broad band spectrum expected from this theory for a number of electron distributions. This study will determine what aspects of the spectrum provide tests of the theory without requiring a full understanding of the plasma processes. Such a study can be carried out through Monte Carlo simulation, and should include the effects of optical and Compton attenuation. The goal is to model the spectrum from the radio to the gamma-ray. The second is a study of the plasma processes. This will require the development of plasma codes to study the interactions of relativistic beams and the growth of instabilities to the nonlinear regime. Only such a study will verify whether the analytic conclusions reached above are accurate. Only through such a study will one determine the value of $`\eta `$ in terms of the other burst parameters. And only through such a study will one obtain model spectra that are dependent just on the bulk Lorentz factor, the density of the interstellar medium, and the mass of the relativistic shell.
The plasma instability theory is capable of explaining the most important features of gamma-ray bursts. Further theoretical research is therefore justified, and should lead to strong and unambiguous observational tests of the theory.
# Magnetotransport in manganites and the role of quantal phases II: Experiment
## Abstract
As in conventional ferromagnets, the Hall resistivity $`\rho _{xy}`$ of a La<sub>2/3</sub>(Ca,Pb)<sub>1/3</sub>MnO<sub>3</sub> single crystal exhibits both ordinary and anomalous contributions at low temperature. However, these contributions, unexpectedly, have opposite signs. Near $`T_c`$, the ordinary contribution is no longer evident and $`\rho _{xy}`$ is solely determined by the sample magnetization, reaching an extremum at $`\sim `$40 % of the saturated magnetization. A new model for the anomalous Hall effect, incorporating the quantal phase accumulated by double-exchange, three-site hopping reproduces this result. Below $`T_c`$, $`\rho _{xy}`$ reflects the competition between normal and anomalous Hall effects.
PACS No: 75.30.Vn, 72.20.My, 71.38.+i
Among the many intriguing properties exhibited by doped perovskite manganites, perhaps none is more puzzling than the Hall effect. A number of measurements have been reported on various members of the series La<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> (where A is Ca, Sr, or Pb), and all show common anomalous features. At the lowest temperatures, the Hall resistivity $`\rho _{xy}`$ is positive and linear in field, as expected for hole-doped materials, but rather smaller than expected. This has recently been attributed to charge compensation and Fermi-surface shape effects. As the temperature is increased, a component of the Hall resistivity appears that is proportional to the magnetization $`M(H,T)`$, but has a negative sign. The appearance of an anomalous Hall effect is, of course, commonly observed in ferromagnets, but is usually attributed to spin-orbit scattering of the charge carriers and normally carries the same sign as the ordinary Hall contribution. A strong negative contribution to $`\rho _{xy}`$ persists through the transition temperature, but loses its proportionality to $`M`$, until, at temperatures $`\gtrsim 1.5T_c`$, $`\rho _{xy}`$ becomes linear in field again (though negative) with a slope decreasing exponentially with increasing temperature in the manner expected for the Hall resistivity of small polarons. Experimental data suggest that charge transport is not polaronic in the temperature range $`T_c`$ to $`1.5T_c`$.
In this Letter, we present new Hall resistivity data on optimally doped manganite single crystals, with emphasis on the temperature region between the band-like, positive Hall regime at low temperatures and polaronic behavior at high temperatures. We show that the Hall resistivity $`\rho _{xy}`$ is a function only of $`M(H,T)`$, reaching an extremum near $`M/M_{\text{sat}}=0.4`$ when this value can be reached with laboratory fields at temperatures above the Curie temperature $`T_c`$. In fact, as we shall see, the data for all temperatures $`T\gtrsim T_c`$ lie on a universal curve that follows from the theoretical model presented in a companion paper that we refer to as I. Data taken below $`T_c`$ track this universal curve once the magnetization is saturated, but are shifted to slightly larger values of $`\left|\rho _{xy}\right|`$. We argue that this is due to the return of band conduction as ferromagnetism, driven by double exchange, sets in.
High quality single crystals of La<sub>2/3</sub>(Ca,Pb)<sub>1/3</sub>MnO<sub>3</sub> were grown from 50/50 PbF<sub>2</sub>/PbO flux. It was found that the addition of Ca favors optimally doped crystals; chemical analyses of crystals from the same batch gave the actual composition as La<sub>0.66</sub>(Ca<sub>0.33</sub>Pb<sub>0.67</sub>)<sub>0.34</sub>MnO$`_3.`$ Specimens for the Hall measurements were cut along crystalline axes from larger, pre-oriented crystals. Details of the measurement technique and analysis of the low temperature region have been presented elsewhere. The Hall resistivity $`\rho _{xy}`$ and longitudinal resistivity $`\rho _{xx}`$ were measured simultaneously as functions of field and temperature. The magnetization of the same sample was measured following the Hall experiment, and was used to correct for demagnetization fields. Figure 1 shows the longitudinal resistivity as a function of temperature at zero field, 3 T and 7 T. Magnetization curves are shown in the inset. The residual resistivity of this sample, $`\rho _{xx}^0\approx 51`$ $`\mu \mathrm{\Omega }`$ cm, is comparable to the best values obtainable in these materials, indicating the absence of grain boundaries in our sample. The maximum change in resistivity with temperature $`d\rho _{xx}/dT`$ occurs at 287.5 K in zero field, moving to higher temperatures with increasing field. This gives rise to the “colossal magnetoresistance (CMR)” effect, which is 326% at 293 K and 7 T. A scaling analysis of the magnetization data very close to the metal-insulator transition (MIT) gives a Curie temperature of $`T_c=285`$ K, but this must be taken cautiously as the scaling exponents differ significantly from those expected from a 3D Heisenberg ferromagnet. Nevertheless, it is clear that $`\rho _{xx}`$ and $`M`$ are closely correlated in this system.
In Fig. 2, we show the field dependence of $`\rho _{xy}`$ at a number of temperatures. As noted above, the Hall resistivity is positive and linear in field at low temperatures, indicating that the anomalous Hall effect (AHE) is small. As the temperature is increased, the AHE contribution increasingly dominates the ordinary Hall effect (OHE), first causing $`\rho _{xy}`$ to change sign as a function of field, and then driving it negative for all fields in the range measured. In ferromagnetic metals, the Hall resistivity is generally written as
$$\rho _{xy}=R_H[\mu _0H_{app}+\mu _0(1N)M]+R_S\mu _0M,$$
(1)
where $`R_H`$ is the coefficient of the OHE, while $`R_S`$ is the coefficient of the anomalous contribution. In Fig. 2, we have plotted the data in terms of the internal field, given by the square brackets in Eq. (1), using a demagnetization factor $`N`$ calculated from the dimensions of the sample. As the temperature is increased through the transition temperature, the minimum of $`\rho _{xy}`$ moves to higher fields, and the positive high-field contribution disappears. That there are rapid changes near $`T_c`$ is not surprising because, even in an ordinary ferromagnet, $`R_S`$ depends on longitudinal resistance which, in these samples, changes dramatically with temperature and applied field. However, $`R_S`$ is then attributed to scattering from spin disorder, via either skew-scattering or side-jump processes, both of which require that the resistance be dominated by spin-disorder scattering. Even at low temperatures, where $`R_S`$ is proportional to $`\rho _{xx}`$, the sign is opposite that expected from skew-scattering theory. Further, as has been pointed out by many authors, the resistance changes observed here are too large to be the result of spin-dependent scattering, and must involve some form of localization, an effect we will invoke to explain the changes apparent here.
As discussed in I, we assume that transport in the transition region is dominated by hopping processes, giving rise to a longitudinal conductivity $`\sigma _{xx}=(ne^2d^2/k_BT)W_0\mathrm{cos}^2(\theta /2)`$, where $`d`$ is the distance between ions. Here $`W_0`$ is the probability of phonon-assisted direct hops and we have explicitly separated the Anderson-Hasegawa factors $`\mathrm{cos}^2(\theta /2)`$. The AH conductivity, correspondingly, is given by $`\sigma _{xy}=(ne^2d^2/k_BT)W_1`$, where $`W_1`$ is the probability of hopping between two ions via an intermediate state on a third ion and includes Anderson-Hasegawa factors \[see Eq. (1) in I\]. The problem then reduces to determining the ratio between direct and indirect hopping rates as a function of the spin texture. Because $`W_1`$ involves two-phonon processes, we write $`W_1/W_0^2=\alpha \hbar \zeta /k_\mathrm{B}T`$, where $`\alpha `$ is a numerical factor describing the multiplicity of the various carrier-phonon interference processes (see ), the number of intermediate sites, and the difference between nearest- and next-nearest-neighbor hopping amplitudes, and $`\zeta `$ is an asymmetry parameter. For the OHE, $`\zeta \propto \mathrm{sin}(𝐁\cdot 𝐐/\varphi _0)`$, where $`𝐐`$ is the area vector of the triangle enclosed by the three sites. In the AHE case, it follows from Eqs.(2) and (4) in I that $`\zeta \propto 3[𝐠_{jk}\cdot (𝐧_j\times 𝐧_k)][𝐧_1\cdot (𝐧_2\times 𝐧_3)]/4`$, where $`𝐠_{jk}`$ are characteristic vectors arising from the spin-orbit quantal phase in the hopping amplitude; $`𝐧_j`$ are unit vectors of the core spins in the triad, and $`𝐧_1\cdot (𝐧_2\times 𝐧_3)`$ is the volume of a parallelepiped defined by core-spin vectors, denoted as $`q_P`$ in I. The anomalous Hall resistivity can be written in the simple form
$$\rho _{xy}\approx \sigma _{xy}/\sigma _{xx}^2=\frac{1}{ne}\left(\frac{\alpha \hbar \zeta }{ed^2}\frac{1}{\mathrm{cos}^4(\theta /2)}\right).$$
(2)
The evaluation of Eq.(2) reduces to a determination of $`\mathrm{cos}(\theta /2)`$ and the products $`(𝐧_j\times 𝐧_k)`$ and $`𝐧_1\cdot (𝐧_2\times 𝐧_3)`$ that survive averaging over all possible triads. In contrast to the hopping OHE in doped semiconductors, where only two sites in an optimal OHE triad are connected to the conducting network (CN), all three triad sites must participate in the network if they are to contribute to the AHE. Our argument is that if one of the sites is not a part of the CN then its core spin must be roughly opposite that of the other two spins, yielding a vanishingly small $`q_P`$. It is reasonable then to assume that the CN is formed by ions with splayed core spins oriented roughly in the direction of average magnetization $`𝐦`$. We then consider the square lattice formed by Mn ions in planes perpendicular to $`𝐦`$, and assume that the core spin vectors of the four ions in a typical elementary plaquette belonging to the CN lie equally spaced on the cone whose half angle is given by $`\beta =\mathrm{cos}^{-1}[M(H,T)/M_{\text{sat}}]`$. A typical pair of ions that determines the longitudinal current, and a typical triad, can now be chosen from ions of this plaquette. From elementary geometry, it follows that $`2\mathrm{cos}^2(\theta /2)=1+\mathrm{cos}^2\beta `$, $`q_P=2\mathrm{cos}\beta \mathrm{sin}^2\beta `$, and $`𝐦\cdot (𝐧_j\times 𝐧_k)=\mathrm{sin}^2\beta `$. To discuss the AHE magnitude, we need first to estimate the characteristic values of $`|𝐠_{jk}|\sim g`$ arising from the spin-orbit interaction (SOI). As we discussed in I, the SOI term leads to a Dzyaloshinski-Moriya contribution to the eigenenergy of carriers, whose magnitude is given by $`g\sim Ze^2/4m_ec^2d_0`$, where $`d_0`$ is the radius of an Mn core d-state. An estimate based on free electron parameters gives $`g\sim 5\times 10^{-4}`$. While renormalization of carrier parameters in crystals may tend to increase $`|𝐠_{jk}|`$, it is necessary to allow admixtures of core orbitals with outer-shell wavefunctions in order to have $`|𝐠_{jk}|\ne 0`$ for symmetric potentials. The non-collinearity of the Mn-O-Mn bonds that allows carrier hopping around triads (including jumps along plaquette diagonals) effectively generates such an admixture. Thus, a value of $`g\sim 5\times 10^{-4}`$ is reasonable.
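The plaquette relations quoted above are elementary to verify; the following sketch (our own check, with an arbitrary cone half-angle $`\beta `$) confirms them numerically.

```python
# Check of the cone-geometry identities for four spins on a plaquette.
import numpy as np

beta = 0.9                                # arbitrary cone half-angle (rad)
n = [np.array([np.sin(beta)*np.cos(p), np.sin(beta)*np.sin(p), np.cos(beta)])
     for p in (0.0, np.pi/2, np.pi, 1.5*np.pi)]

cos2_half = 0.5*(1.0 + np.dot(n[0], n[1]))      # cos^2(theta/2) of a pair
q_P = np.dot(n[0], np.cross(n[1], n[2]))        # triad spin volume
print(cos2_half, 0.5*(1.0 + np.cos(beta)**2))   # 2cos^2(theta/2) = 1+cos^2(beta)
print(q_P, 2.0*np.cos(beta)*np.sin(beta)**2)    # q_P = 2 cos(beta) sin^2(beta)
```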
As discussed in I, the magnitude of the longitudinal (and anomalous Hall) resistivities in the regime of abrupt increase of the resistivity depends not only on properties of individual pairs (triads), but also on how they are connected to the CN. We estimate the macroscopic longitudinal and Hall conductivities at the low temperature limit of our model, where the CN is still fully connected. Taking $`n=5.6\times 10^{21}`$ cm<sup>-3</sup>, $`W_0\approx 2.5\times 10^{13}`$ s<sup>-1</sup>, and $`\mathrm{cos}\beta =0.6`$ from the magnetization data at $`T=275`$ K (Fig. 1), we obtain $`\rho _{xx}\approx 1`$ m$`\mathrm{\Omega }`$ cm which coincides with the value of the experimentally observed resistivity (Fig. 1). The AHE contribution to the Hall resistivity, assuming numerical factor $`\alpha =2.5`$, is then $`\rho _{xy}=-0.5`$ $`\mu \mathrm{\Omega }`$ cm, in agreement with the experimentally observed Hall resistivity at the same $`T`$ (Fig. 2). The equivalent expression for the hopping Hall resistance in the Holstein mechanism has $`\zeta \propto \mathrm{cos}^2(\theta /2)\mathrm{cos}\beta \,\mathrm{sin}(𝐁\cdot 𝐐/\varphi _0)`$ and, at $`B=1`$ T, is an order of magnitude smaller than the AHE. We expect the macroscopic, hopping AHE and OHE to have the same sign, opposite that of the OHE in the metallic regime.
To relate $`\rho _{xy}`$ to $`m\equiv \left|𝐦\right|`$, we introduce a percolation factor $`P`$ for $`\sigma _{xx}`$ describing the connectivity of the pair to the CN; for the AH conductivity the corresponding factor would be $`P^2`$ because both pairs in a triad must, as discussed above, belong to the CN. It is remarkable that throughout the localization regime, $`\rho _{xy}`$ is, nevertheless, determined by currents formed in individual pairs and triads, because the factors of $`P`$ cancel. Therefore, as long as $`q_P`$ and the angles between neighboring spins can be directly related to $`m=M/M_{\text{sat}}=\mathrm{cos}\beta `$, $`\rho _{xy}`$ depends on $`H`$ and $`T`$ only through $`m(H,T)`$, and is given by
$$\rho _{xy}=\rho _{xy}^0\frac{m(1-m^2)^2}{(1+m^2)^2}$$
(3)
The corresponding curve is shown in Fig. 3, where the data of Fig. 2 are replotted as a function of $`M/M_{\text{sat}}`$. At and above $`T_c`$ the data fall on a smooth curve that reaches an extremum at $`M/M_{\text{sat}}\approx 0.4`$. Below $`T_c`$ the data first change rapidly with magnetization as domains are swept from the sample before saturating and following the general trend. At the lowest temperatures, the metallic OHE appears as a positive contribution at constant magnetization. The solid curve in Fig. 3 follows Eq. (3) with $`\rho _{xy}^0=-4.7`$ $`\mu \mathrm{\Omega }`$ cm, consistent with the estimates of $`\rho _{xx}`$ and $`\rho _{xy}`$ given above. Down to 285 K, which is the Curie temperature determined by scaling analysis, Eq. (3) describes the data reasonably well. In addition, the extremum is located at $`M/M_{\text{sat}}=\mathrm{cos}\beta \approx 0.35`$, close to the experimental extremum. Below $`T_c`$, the longitudinal resistivity is metallic and no longer dominated by magnetic disorder. However, local spin arrangements still dominate the AHE via asymmetric scattering, as discussed in a forthcoming paper. The numerator of Eq.(3), $`m(1-m^2)^2`$, essentially the behavior of $`\sigma _{xy}`$ alone, has an extremum at $`m=1/\sqrt{5}\approx 0.45`$ as shown by the dashed line in Fig. 3. The broader maximum in the data suggests a shift toward a hopping model for $`\rho _{xx}`$ and $`\rho _{xy}`$ as the sample is warmed through the metal-insulator transition.
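The two extrema discussed here follow directly from Eq. (3); a minimal numerical sketch (ours) locating them is:

```python
# Extrema of eq. (3) and of its numerator (the sigma_xy behaviour alone).
import numpy as np

m = np.linspace(0.0, 1.0, 100001)
full = m*(1 - m**2)**2/(1 + m**2)**2      # shape of eq. (3)
num  = m*(1 - m**2)**2                    # numerator alone
print(m[np.argmax(full)])                 # ~ 0.35
print(m[np.argmax(num)], 1/np.sqrt(5))    # ~ 0.447 = 1/sqrt(5)
```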
To consider the effect of OH contributions, which are masked in $`\rho _{xy}`$ by the large magnetoresistance, it is useful to examine the field and temperature dependence of $`\sigma _{xy}`$ instead. As seen in the main panel of Fig. 4, the magnetic field dependence of $`\sigma _{xy}`$ at 200 K clearly shows the OHE by free carriers at high fields, opposite in sign to the AHE. The AHE sign can be inferred by extrapolating the high field curve to its $`B=0`$ intercept. At $`T=200`$ K, $`\rho _{xx}`$ is mainly metallic. At 265 K, where $`\rho _{xx}`$ starts to increase rapidly, $`\sigma _{xy}`$ saturates at external magnetic fields $`H\gtrsim 2`$ T. At that magnetic field $`M\approx 0.7M_{\text{sat}}`$, relatively close to the maximal value achievable at this temperature, $`M\approx 0.8M_{\text{sat}}`$ at 7 T. This saturation effect strongly suggests that the AHE is dominant and that the negative Holstein OHE is either suppressed or partly compensated by the decrease in the AHE from reductions in $`q_P`$ by the magnetic field. As an increase in the magnetic field tends to delocalize carriers, we may expect to see the onset of metallic OHE at larger applied fields.
Another interesting feature is that the temperature dependence of $`\sigma _{xy}`$ shows an anomaly at the same temperature as the zero field $`d\rho _{xx}/dT`$ peak (Fig. 4, inset). The size of the anomaly increases with increasing field, implicating the OHE. If this is related to a polaronic collapse of the conduction band, the anomaly should shift as $`H`$ increases. However, this peak shifts much less than does the $`d\rho _{xx}/dT`$ peak (see Fig. 1). To the extent that the transition is a percolation process in which metallic regions grow to form a percolation network, it is possible for the non-metallic Hall contribution to remain dominant down to the percolation threshold, resulting in the sudden appearance of the OHE when the system becomes fully metallic. Indeed, this effect is evident, though less dramatic, in the Hall resistivity of Fig. 3, as a deviation of the data below $`T_c`$ from the universal curve deduced from the quantal phase calculation.
In conclusion, we measured the Hall resistivity, the longitudinal resistivity, and the magnetization of a La<sub>2/3</sub>(Ca,Pb)<sub>1/3</sub>MnO<sub>3</sub> single crystal. Very similar results have been observed in single crystals of Ca- and Sr-doped LaMnO<sub>3</sub> and will be reported elsewhere. We find that the Hall resistivity is solely determined by the sample magnetization ($`M`$) near and somewhat above the transition temperature. A model for the AHE, based on the Holstein picture in which interference between direct hops and those via a third site provides the necessary quantal phase, explains the results quite well. In contrast to the Holstein polaron case, the additional phase here is introduced by the strong Hund’s rule coupling that forces the hopping charge carrier to follow the local spin texture. Below the transition temperature, the AHE competes with the OHE as long-range magnetic order and, presumably, an infinite percolating metallic cluster, develops. A sharp, field-dependent drop in the Hall conductivity at the transition temperature is qualitative evidence for this crossover between hopping-dominated Hall effects and features similar to those observed in more conventional ferromagnets.
This work was supported in part by DOE DEFG-91ER45439.
# Andreev Reflection Enhanced Shot Noise in Mesoscopic SNS Junctions
## Abstract
Current noise is measured with a SQUID in low impedance and transparent Nb-Al-Nb junctions of length comparable to the phase breaking length and much longer than the thermal length. The shot noise amplitude is compared with theoretical predictions of doubled shot noise in diffusive normal/superconductor (NS) junctions due to Andreev reflections. We discuss the heat dissipation away from the normal part through the NS interfaces. A weak applied magnetic field reduces the amplitude of the $`1/f`$ noise by a factor of two, showing that even far from equilibrium the sample is in the mesoscopic regime.
Nonequilibrium noise in SNS junctions has been recently addressed experimentally. Interest in this field has been motivated by the celebrated shot noise results obtained in short conductors connected to normal reservoirs, in a two-dimensional electron gas, or in fractional quantum Hall liquids. The analysis of the shot noise amplitude as well as the crossover from the Johnson-Nyquist to the shot noise regime provides information about the nature of the carriers beyond what is deduced from linear conductance measurements. It has been predicted (but not shown experimentally) that the shot noise in a mesoscopic normal diffusive sample connected to a superconducting reservoir at one end is doubled compared to the case of two normal reservoirs. This reflects that at low temperature and low energy the charge transport is dominated by Andreev processes transferring electrons in pairs. Beyond the SN case, the nature of charge carriers in the SNS case is also a major issue both theoretically and experimentally. In the case of multiple Andreev reflections (MAR) theoretical works predict an excess current noise.
Short SNS junctions have been studied by Dielemann et al. in the case of pinholes in a NbN/MgO/NbN (SIS) structure. Below the superconducting gap, the shot noise they measure is much larger than expected for independent electrons. That is attributed to the coherent charge transfer of large multiple charge quanta. Hoss et al. have studied longer SNS junctions and found different types of behaviour depending on the value of the superconducting gap of electrodes: for large gap Nb electrodes, the quasiparticles are overheated, whereas for low gap Al electrodes a very large shot noise at low bias is attributed to the same mechanism as in ref. .
An SN junction with a low-resistance, noiseless normal reservoir at one side and a transparent SN interface at the other requires several technological steps (e.g. multideposition and realignment). We fabricate a much simpler SNS junction which captures the same physics if the length of the junction is larger than the inelastic mean free path. We present shot noise measurements in Nb-Al-Nb junctions (above the critical temperature of aluminium) where the current noise is measured by a calibrated SQUID-based setup (Fig. 1). In our high temperature range the sample length $`L`$ is much larger than the superconducting coherence length but comparable to the phase breaking length, which is dominated by the electron-electron relaxation length $`L_{ee}`$. Under these conditions the sample is in the mesoscopic regime where shot noise is only due to normal parts coherently attached to at least one of the superconducting reservoirs, but where MAR is inhibited ($`L\sim L_{ee}`$). Indeed the conductance evolves in temperature and voltage as predicted for the standard proximity effect. The absence of conductance anomalies at finite bias (Fig. 2) indicates that Multi Particle Tunneling (i.e. coherent MAR process) is negligible. Our shot noise measurements show that the transport is indeed dominated by carriers whose effective charge is about twice that of the bare electron. At high temperature the shot noise is very much in agreement with the prediction for a diffusive normal metal connected to normal reservoirs ($`S_I=\frac{2}{3}eI`$), likely because the transport is mainly due to quasiparticles. But as the temperature decreases, the shot noise increases above this value. The evolution of the current noise power vs. bias current (including the crossover to the Johnson-Nyquist equilibrium noise) is consistent with an effective charge $`2e`$ at voltages well below the gap. In order to establish the role of carrier overheating in the noise properties in our SNS geometry, we have calculated the gradient of temperature produced at each SN interface by the Andreev thermal resistance and compared the resulting noise to the experimental data.
Another contribution to the noise is the $`1/f`$ noise. $`1/f`$ noise is found to be quantitatively in agreement with previous data. Its amplitude is reduced by a factor of two when a weak magnetic field is applied, as expected within the Feng-Lee-Stone theory of low-frequency resistance noise in dirty metals . Analysis of the field dependence shows that $`L_\varphi `$ is not substantially decreased even far from equilibrium.
Our SNS geometry as well as our temperature range differ from previous work. We start with a trilayer $`10nm`$ Al-$`100nm`$ Nb-$`10nm`$ Al made by sputtering in a single sequence on an $`SiSiO_2`$ substrate. Then we define a mesa structure (upper inset in Fig. 1) by optical lithography with a $`200\mu m\times 40\mu m`$ wire between large contact pads. The contact pads are further covered by a low resistance Ti-Au contact layer. By electron lithography and subsequent reactive ion etching we selectively etch the Nb-Al top layer over a length of $`0.5\mu m`$ across the mesa wire (left inset in Fig. 2). The resulting structure is a continuous $`10nm`$ thick Al layer, covered by two semi-infinite $`100nm`$-Nb layers separated by a gap of $`0.5\mu m\times 40\mu m`$. At $`4.2K`$ the 80 squares in parallel result in a resistance of $`0.25\mathrm{\Omega }`$. The geometry is the inverse of the wire used in Ref. . The experiment is performed above the critical temperature of the aluminium film($`1.6K`$). We chose aluminium for the normal metal because of the good quality of the Al-Nb interface.
The current noise measurement scheme is based on a resistance bridge and a dc SQUID as shown in the inset of Fig. 1. It is well adapted to our low impedance sample which has relatively high current noise but needs high bias currents to go beyond the thermal (equilibrium) noise regime. The bridge is composed of a reference resistance ($`R_{ref}`$) made with a macroscopic constantan wire, the sample ($`R_x`$) and the extra resistances in the superconducting loop ($`r_c`$) (dominated by the gold wires used to connect the sample). The current noise of the setup is $`5pA/\sqrt{Hz}`$. The total resistance of the bridge being $`0.4\mathrm{\Omega }`$, its Nyquist noise is $`5.8\times 10^{-22}A^2/Hz`$ at $`4.2K`$ and is therefore more than 15 times bigger than the total noise of the electronic setup. A fit of the form $`\alpha +\beta /f`$ is always found in total agreement with the spectra for each value of the bias current, indicating two separable features: a $`1/f`$ component of amplitude $`\beta `$ and a white (i.e. frequency independent) noise level $`\alpha `$. Figure 1 shows the temperature dependence of the equilibrium noise. The solid line is the Johnson noise ($`4kT/R_x`$) calculated from the measured sample resistance. The data is very much in agreement with the prediction: the Nyquist noise is always recovered, thus showing the absolute calibration of the setup.
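The quoted thermal background is just the Johnson-Nyquist formula applied to the bridge; a one-line check (ours) is:

```python
# Johnson-Nyquist current noise 4 k_B T / R of the 0.4 Ohm bridge at 4.2 K.
k_B = 1.380649e-23                        # J/K
print(4.0*k_B*4.2/0.4)                    # ~ 5.8e-22 A^2/Hz, as quoted
```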
Around $`4.2K`$ the temperature is much larger than the Thouless energy: the thermal length $`L_T=\sqrt{\hbar D/k_BT}\approx 0.08\mu m`$ (T=4.2K) is much shorter than both the sample length $`L\approx 0.5\mu m`$ and the phase breaking length $`L_\varphi \approx 0.8\mu m`$ (T=4.2K). Both Josephson coupling and coherent MAR are negligible but $`L`$ is comparable to the electron-electron scattering length $`L_{ee}\approx 1\mu m`$, and smaller than the electron-phonon scattering length $`L_{eph}\approx 2.5\mu m`$, both estimated at $`4.2K`$. Therefore the shot noise is likely to be due to normal parts coherently attached to at least one superconducting reservoir. The temperature dependence of the resistance exhibits two jumps corresponding to the two critical temperatures for niobium ($`8.35K`$) and aluminium ($`1.6K`$). Using the latter as the only parameter we can calculate the expected resistance by solving the equation for the coherence length in aluminium above its $`T_c`$, which differs from the thermal length. The result fits the data remarkably well (see right inset in Fig. 1). The differential conductance (Fig. 2) exhibits a peak which is another signature of this proximity effect. We also performed magnetoconductance measurements from which we inferred $`L_\varphi \approx 0.8\mu m`$ at $`4.2K`$, in quantitative agreement with previous data on aluminium films.
The shot noise results are presented in Fig. 3 for various temperatures. The Josephson coupling between the two superconducting banks is avoided by staying above $`2K`$: then the correlation length is substantially smaller than the sample length (typically $`0.13\mu m`$ at $`2.5K`$, compared with the $`0.5\mu m`$ sample length). An exponentially small Josephson coupling is important to study the low bias regime where the crossover between equilibrium (Johnson-Nyquist) and non-equilibrium (shot noise) regimes takes place ($`e^{*}V\sim 2k_BT`$). The observation of this crossover has been a decisive argument in the study of fractional charges by noise measurements. A large Josephson coupling would also be responsible for another contribution to the current noise as shown in resistively shunted Josephson junctions. The shot noise data at $`8K`$ (where superconductivity is already dramatically weakened at equilibrium) follow the solid line corresponding to the so-called $`\frac{1}{3}`$ quantum shot noise suppression in normal mesoscopic diffusive samples, including the thermal crossover: $`S_I=\frac{2}{3}[4k_BT/R_d+eIcoth(eV/2k_BT)]`$.
Obviously the critical current at such temperatures is substantially smaller than at $`4.2K`$. Now as the temperature decreases the data coincide less and less with the normal prediction. As expected qualitatively the superconductivity is responsible for an increase in noise because it allows a new mechanism for charge transfer through the NS interfaces: the Andreev reflection of an electron as a hole and the transfer of a pair on the S side. We emphasize that, unlike in experimental (and theoretical) studies of short SNS systems in the coherent MAR regime, the Johnson value is always found at vanishingly small bias voltage and the crossover to the shot noise regime is smooth. The minimum observed at the onset of the $`2.5K`$ curve is a consequence of the peak in the differential conductance. In diffusive samples $`L>l/\mathrm{\Gamma }`$ ($`L`$ is the length, $`\mathrm{\Gamma }`$ the transparency of the NS interface and $`l`$ the elastic mean free path) the shot noise is expected to be doubled in NS samples compared to N samples. In the asymptotic limit $`eV\gg k_BT`$ this means $`S_I=\frac{2}{3}(2eI)`$ instead of $`S_I=\frac{1}{3}(2eI)`$. In Fig. 4 we have plotted the equation given above for $`S_I`$ in the N case as well as the same equation with a charge $`2e`$ instead of $`e`$. We use this naive approach because models describing the Johnson to shot noise crossover in NS are restricted to the one channel case only. In this case however the exact calculation is close to the approximate $`e\to 2e`$ substitution. We found good agreement between the data at $`4.2K`$ and $`2.5K`$ and the curves for a doubled charge $`e^{*}=2e`$, at low enough bias currents. We believe this is an experimental confirmation of the predicted doubled shot noise at NS interfaces.
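The comparison made in Fig. 4 amounts to evaluating the crossover formula for two effective charges; a minimal sketch (ours, with assumed temperature and differential resistance) is:

```python
# Crossover formula S_I = (2/3)[4 k_B T / R_d + e* I coth(e* V / 2 k_B T)],
# evaluated for effective charges e* = e and e* = 2e; T and R_d are assumed.
import numpy as np

k_B, e = 1.380649e-23, 1.602e-19

def S_I(I, T, R_d, e_eff):
    x = e_eff*R_d*I/(2.0*k_B*T)           # e*V / 2 k_B T with V = R_d I
    return (2.0/3.0)*(4.0*k_B*T/R_d + e_eff*I/np.tanh(x))

I = np.logspace(-5, -2, 4)                # bias currents in A
for e_eff in (e, 2.0*e):
    print(S_I(I, 2.5, 0.25, e_eff))
```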
Recent shot noise experiments gave rise to important discussions about heating effects, and the crucial role of the reservoirs has been emphasized . We calculated the thermal power that can be transferred through the NS interfaces by single-particle excitations, and the (thermal) noise associated with the hot electrons within the normal metal. In our temperature range ($`T>2K`$) the electron-phonon interaction is certainly able to restore the electrons closer to equilibrium. However, as $`L_{eph}>L`$, the contribution of the phonons is presumably too small to decrease the noise substantially, so we neglected this mechanism in the heating calculation. We used the Andreev thermal resistance for the NS barrier to calculate the power that can be transferred through the NS interfaces. Balancing this against the injected power, we then obtain the temperature profile along the sample, taking into account both the Wiedemann-Franz law inside the normal part and the temperature jump across the NS interface due to its thermal resistance. Finally, the noise due to these “hot” electrons is calculated with the Johnson noise formula. The result is plotted at $`2.5K`$ in Fig. 4 (dashed line). Clearly the Andreev thermal resistance gives an overheating effect higher than in the normal case. However, at our relatively large temperatures this heating effect cannot quantitatively account for the data. At dilution-refrigerator temperatures this heating effect becomes substantial, as pointed out by Hoss et al. . These authors used the Wexler formula and the BTK model to account for electron heating. Our calculation uses the Andreev thermal resistance, which contains no adjustable parameters, but using their arguments with reasonable assumptions for the NS resistance leads to similar results.
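For orientation, a far cruder estimate than the calculation described above can be written in a few lines; the sketch below assumes ideal normal reservoirs and the Wiedemann-Franz law only, ignores the Andreev thermal resistance at the NS interfaces (which our calculation includes), and therefore underestimates the overheating:

```
import numpy as np

kB = 1.380649e-23                     # J/K
e  = 1.602176634e-19                  # C
L0 = (np.pi**2 / 3.0) * (kB / e)**2   # Lorenz number (~2.44e-8 W Ohm K^-2)

def hot_electron_profile(x, V, T0):
    """Wiedemann-Franz electron temperature in a diffusive wire between two
    ideal reservoirs at T0; x is the normalized position along the wire."""
    return np.sqrt(T0**2 + x * (1.0 - x) * V**2 / L0)

x  = np.linspace(0.0, 1.0, 101)
Te = hot_electron_profile(x, V=1e-4, T0=2.5)   # 0.1 mV bias, 2.5 K bath
Rd = 1e-3                                      # assumed sample resistance (Ohm)
S_hot = 4.0 * kB * Te.mean() / Rd              # Johnson noise of the hot electrons
```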
Another strong piece of evidence for mesoscopic effects even at high bias currents is provided by the $`1/f`$ noise results . First, we expressed the amplitude of the $`1/f`$ noise in terms of Hooge’s law: $`S_I/I^2=\alpha _H/Nf`$ where $`\alpha _H`$ is the phenomenological Hooge parameter. Assuming a carrier density $`N\approx 18\times 10^{22}cm^{-3}`$ in aluminium, we found $`\alpha _H\approx 10^{-3}`$, in agreement with the range $`10^{-5}`$ to $`10^{-1}`$ given in the literature for thin films made of various materials. The model developed by Feng et al. shows that at low temperature the motion of a single scattering center produces corrections to the conductance because of interference over $`L_\varphi `$. A striking consequence is that under a weak magnetic field the amplitude of the $`1/f`$ noise is expected to be reduced by a factor of two . This prediction has been verified in bismuth films and semiconductors . We performed this experiment and also found the universal reduction by a factor of two, as shown in the inset of Fig. 4. This result, obtained for bias currents of $`3.2`$ and $`3.9mA`$, demonstrates that even at these high currents the mesoscopic features are conserved. Indeed the characteristic decay field over which the noise is reduced is directly related to $`L_\varphi `$. Stone established that for a reduction by $`75\%`$, $`H\approx 0.2h/(eL_\mathrm{\Phi }^2)`$. Using this relation we obtain for the two relevant bias currents $`L_\mathrm{\Phi }\approx 0.2\mu m`$, i.e. a smaller value than inferred from weak localization measurements ($`L_\varphi \approx 0.8\mu m`$). Nevertheless $`L_\mathrm{\Phi }`$ remains comparable to $`L`$ even at high bias current. This result indicates that the inelastic lengths $`L_{ee}`$ and $`L_{eph}`$ (and therefore $`L_\varphi `$) are not drastically reduced when several $`mA`$ are driven through the junction.
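Stone’s relation is easily inverted to extract $`L_\mathrm{\Phi }`$ from the measured decay field; in the sketch below the field value is back-computed from the quoted $`L_\mathrm{\Phi }\approx 0.2\mu m`$ and is therefore only an illustrative assumption:

```
import numpy as np

h = 6.62607015e-34    # Planck constant (J s)
e = 1.602176634e-19   # elementary charge (C)

def L_phi_from_decay_field(H75):
    """Invert Stone's relation H75 ~ 0.2 h / (e L_phi^2), where H75 is the
    field (in tesla) at which the 1/f noise reduction is 75% complete."""
    return np.sqrt(0.2 * (h / e) / H75)

print(L_phi_from_decay_field(0.02))   # ~2e-7 m, i.e. the 0.2 micron quoted above
```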
In summary, we performed the first $`1/f`$ and shot noise measurements in very low impedance SNS junctions in a high temperature regime which inhibits MAR features. We observed the shot noise enhancement due to Andreev reflections at NS interfaces. Under appropriate voltage and temperature conditions we see the predicted doubled shot noise due to the transfer of electron pairs through the NS boundaries. We estimated the thermal properties of the SNS structure with the Wiedemann-Franz law and the Andreev thermal conductance at the NS boundary and concluded that heating cannot be responsible for the observed noise. The reduction of the $`1/f`$ noise by a weak magnetic field demonstrates that the mesoscopic properties are not dramatically reduced by high currents.
We are grateful for fruitful discussions with C. Strunk, Y. Naveh, Th. Martin and V. Shumeiko.
# Incoherent Energy Transfer within Light-harvesting Complexes
## I Introduction
A reasonably complete structural picture of the bacterial light-harvesting (LH) system has emerged recently . Both the inner antenna, LH1, and the outer antenna, LH2, are assembled from the same modules to form rings. Each module consists of two short $`\alpha `$-helical polypeptides that coordinate one carotenoid and three bacteriochlorophylls (BChls). The LH2 is composed of 9 units, for Rhodopseudomonas acidophila , and resembles a cylinder with an inner diameter of $`36\AA `$ and an outer diameter of $`68\AA `$, while the LH1 is composed of 16 units, for Rhodospirillum rubrum , in order to accommodate the reaction center (RC). The latter has an outer diameter of $`116\AA `$ and a central diameter of $`68\AA `$. However, the exact number of units in both complexes is variable .
Furthermore, the LH2 B850 BChl $`a`$ molecules form a complete overlapping ring in a hydrophobic environment, which reduces the dielectric constant, while the B800 BChl $`a`$ are well separated and sit in a polar environment. When a BChl molecule is excited by light, the energy can reach equilibrium within about $`10ps`$ . An LH2 can function as a storage ring, holding the excited singlet-state energy for about $`1100ps`$. However, the energy will transfer to other rings before decaying. The hopping of energy continues from one ring to another until an LH1, which contains the RC, is finally reached. The total trip lasts for about $`5`$ to $`50ps`$ . There is thus a competition between energy relaxation and energy transfer.
Historically, relatively few physicists have tackled problems of photosynthesis. Notably, Montroll used random-walk concepts to model energy transfer amongst antenna rings on a lattice by considering the first passage time . Later, Hemenger et al. proposed a more realistic model by taking inhomogeneous transfer rates and trapping by RCs into account . Interestingly, it is Pearlstein’s work which is most often cited in the literature . Meanwhile, almost all experimentalists have tried to find explanations for their spectral data; however, due to the lack of precise geometrical information, most of these efforts were in vain.
Progress in physics is often made along the line structure–energy–dynamics. A goal of current research is to find the relation between the structural and spectral information obtained, in the expectation that the function of photosynthesis can be explained in terms of its structure, and further to draw inferences from the model by applying methods of mathematical or numerical analysis. Recently Timpmann et al. used a rate-equation model to describe energy trapping and detrapping by the RC ; however, their antenna has no structure. Skála et al. also carried out a series of investigations by analyzing the spectrum of a more realistic LH1 model ; however, their model is incompatible with the recent structural findings. In this paper we establish a two-parameter model based on recent structural data.
## II model
With the known periodic structure, shown in Fig. 1, we can build, from chemical rate equations, the following phenomenological model of energy transfer,
$`{\displaystyle \frac{dE}{dt}}`$ $`=`$ $`k^{\prime }A_1-(k^{\prime \prime }+k_E)E,`$ (1)
$`{\displaystyle \frac{dA_1}{dt}}`$ $`=`$ $`kA_{16}-2kA_1+kA_2-k^{\prime }A_1+k^{\prime \prime }E,`$ (2)
$`{\displaystyle \frac{dA_n}{dt}}`$ $`=`$ $`kA_{n-1}-2kA_n+kA_{n+1},n=2,\mathrm{},15,`$ (3)
$`{\displaystyle \frac{dA_{16}}{dt}}`$ $`=`$ $`kA_{15}-2kA_{16}+kA_1,`$ (4)
in which the $`A_n`$ denote the excited BChl dimers, and $`E\equiv P^{*}BH`$ is the excited state, with $`B`$ representing the chlorophyll monomer within the RC and $`P^{*}`$ the excited special pair of BChl molecules. This is a set of 17 coupled linear differential equations. The symmetry of this system is broken because $`k^{\prime }\ne k^{\prime \prime }`$. A similar model has been proposed by Skála et al. ; however, in the present model the RC and the antenna ring are connected at only one site, corresponding to the recent experimental observations.
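The spectral analysis that follows can be checked numerically. The sketch below (a minimal Python implementation; the rate values are those discussed in Sec. III) assembles Eqs. (1)-(4) into the $`17\times 17`$ rate-constant matrix and extracts the slowest decay mode:

```
import numpy as np

def rate_matrix(k, kp, kpp, kE, n=16):
    """Rate-constant matrix of Eqs. (1)-(4); the state vector is (E, A_1, ..., A_n)."""
    M = np.zeros((n + 1, n + 1))
    M[0, 0] = -(kpp + kE)                     # dE/dt = k' A_1 - (k'' + k_E) E
    M[0, 1] = kp
    for i in range(1, n + 1):                 # ring sites with cyclic hopping
        left, right = 1 + (i - 2) % n, 1 + i % n
        M[i, i] -= 2.0 * k
        M[i, left] += k
        M[i, right] += k
    M[1, 1] -= kp                             # loss from A_1 into the RC
    M[1, 0] += kpp                            # back-transfer from E to A_1
    return M

M = rate_matrix(k=6.97e11, kp=6.97e11, kpp=6.97e11 / 5, kE=3.57e11)
print(np.max(np.linalg.eigvals(M).real))      # slowest decay mode, to be compared
                                              # with -1.5e10 s^-1 (the 200 ps decay)

M0 = rate_matrix(k=6.97e11, kp=0.0, kpp=0.0, kE=3.57e11)
print(np.min(np.abs(np.linalg.eigvals(M0))))  # ~0: the steady state discussed below
```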
In the homogeneous case, with the same transition rate between all units, the characteristic polynomial of the above rate-constant matrix can always be expressed as
$$P_{16}=P_{16}^1P_{16}^2P_{16}^3P_{16}^4,$$
(5)
with
$`P_{16}^1`$ $`=`$ $`s+2k,`$ (6)
$`P_{16}^2`$ $`=`$ $`s^2+4ks+2k^2,`$ (7)
$`P_{16}^3`$ $`=`$ $`s^4+8ks^3+20k^2s^2+16k^3s+2k^4,`$ (8)
$`P_{16}^4`$ $`=`$ $`s^{10}+(k_E+k^{\prime \prime }+k^{\prime }+18k)s^9+`$
$`(k^{\prime }k_E+18kk_E+18kk^{\prime \prime }+16kk^{\prime }+134k^2)s^8+`$
$`2(8k^{\prime }k_E+67kk_E+67kk^{\prime \prime }+52kk^{\prime }+266k^2)ks^7+`$
$`2(52k^{\prime }k_E+266kk_E+266kk^{\prime \prime }+176kk^{\prime }+605k^2)k^2s^6+`$
$`2(176k^{\prime }k_E+605kk_E+605kk^{\prime \prime }+330kk^{\prime }+786k^2)k^3s^5+`$
$`12(55k^{\prime }k_E+131kk_E+131kk^{\prime \prime }+56kk^{\prime }+91k^2)k^4s^4+`$
$`4(168k^{\prime }k_E+273kk_E+273kk^{\prime \prime }+84kk^{\prime }+86k^2)k^5s^3+`$
$`8(42k^{\prime }k_E+43kk_E+43kk^{\prime \prime }+8kk^{\prime }+4k^2)k^6s^2+`$
$`2(32k^{\prime }k_E+16kk_E+16kk^{\prime \prime }+kk^{\prime })k^7s+2k^8k^{\prime }k_E,`$ (17)
which is a consequence of the master equation used, and is independent of the detailed geometrical symmetry. The mode controlling the decay to the RC is within $`P_{16}^4`$, since $`P_{16}^1`$, $`P_{16}^2`$ and $`P_{16}^3`$ do not contain $`k^{\prime }`$, $`k^{\prime \prime }`$ or $`k_E`$. However, all four parts will be influenced by a change of $`k`$. If one solves this set of differential equations by applying the Laplace transformation method, one finds that the solution divides into four distinct groups of decay channels, namely, $`A_5`$-$`A_{13}`$; E-$`A_1`$-$`A_9`$; $`A_3`$-$`A_7`$-$`A_{11}`$-$`A_{15}`$; and $`A_2`$-$`A_4`$-$`A_6`$-$`A_8`$-$`A_{10}`$-$`A_{12}`$-$`A_{14}`$-$`A_{16}`$. Because the matrix of rate constants is hermitian, all eigenvalues are negative. Furthermore, no eigenvalues are degenerate, in contrast to Skála’s model, which possesses too high a degree of symmetry . Setting $`k^{\prime }=k^{\prime \prime }`$ does not result in additional factorizability, although the symmetry of our model is then restored. At $`k^{\prime }=k^{\prime \prime }=0`$, $`P_{16}`$ becomes
$$s(s+2k)^2(s+4k)(s+k_E)(s^2+4ks+2k^2)^2(s^4+8ks^3+20k^2s^2+16k^3s+2k^4)^2.$$
(18)
It contains a zero eigenvalue, which signals the existence of a steady-state solution, as should happen without decay to the RC. Degeneracy of the eigenvalues is introduced as the transition to the RC is decreased.
## III spectrometry comparison
We can verify our model against experiment: pump-probe spectroscopy measures the difference between two beams, with
$$\mathrm{\Delta }D=\mathrm{\Delta }ϵ_A\underset{n}{\sum }A_n+\mathrm{\Delta }ϵ_EE,$$
(19)
being the measured signal. The symbols $`\mathrm{\Delta }ϵ`$ are the differences in the dielectric constants of the corresponding pigments between pump and probe beams. By choosing the pump and probe laser frequencies, we can selectively detect the population changes of $`A_n`$ or $`E`$. Summing Eqs. (2)-(4) we see that the decay of the total antenna population obeys $`d(\sum _nA_n)/dt=-k^{\prime }A_1+k^{\prime \prime }E`$. The measured charge separation rate is $`k_E\approx 3.57\times 10^{11}s^{-1}`$ at room temperature, and it increases by a factor of $`2`$ to $`4`$ between $`300K`$ and $`10K`$, depending on the species . The ratio of the forward and backward transitions to the RC is known to be about $`25\%`$ for an open RC, i.e., one in which the RC BChl dimer (P) is reduced and the iron-quinone electron acceptor is oxidized, and about $`40\%`$ for a pre-reduced RC. The back-trapping rate can, in principle, be estimated from $`k^{\prime \prime }/k^{\prime }=\mathrm{exp}(-\mathrm{\Delta }G/k_BT)`$, with $`\mathrm{\Delta }G`$ the free-energy gap between $`A_1`$ and $`E`$ estimated from their absorption peaks, $`k_B`$ the Boltzmann constant, and $`T`$ the absolute temperature. However, the measured absorption peaks of the excited RC are broad and imprecise. We do not know the absolute values of $`k^{\prime }`$ or $`k^{\prime \prime }`$ experimentally, since it is difficult to tune the laser frequency so as to distinguish $`A_n`$ from $`E`$; nor do we know the transition rate between the $`A_n`$, because transitions between identical species cannot be measured directly. Furthermore, at room temperature, energy equilibration within the antenna interferes with the trapping process. We have therefore taken $`k`$ and $`k^{\prime }`$ as parameters and fitted the slow mode of the observed fluorescence decay, i.e. $`200ps`$ . Thus, the absolute value of the largest eigenvalue should be about $`3/200ps=1.5\times 10^{10}s^{-1}`$. A computer code was written to scan all combinations of $`k`$ and $`k^{\prime }`$ between $`10^8s^{-1}`$ and $`10^{15}s^{-1}`$, with $`k^{\prime \prime }=k^{\prime }/5`$, for which the magnitude of the largest eigenvalue is smaller than $`1.5\times 10^{10}s^{-1}`$. Interestingly, we find that all such combinations occur at $`k=k^{\prime }`$, with $`k>6.97\times 10^{11}s^{-1}`$ for $`k^{\prime \prime }=k^{\prime }/5`$. Presumably, this corresponds to an extremum of $`P_{16}^4`$. At the lowest such $`k`$, we can match the required $`200ps`$ decay, whose curve is plotted in Fig. 2. If $`k^{\prime \prime }=k^{\prime }/4`$, we obtain $`k=7.25\times 10^{11}s^{-1}`$. That $`k`$ has to be equal to $`k^{\prime }`$ might seem peculiar, given that the geometrical distance between $`A_1`$ and the RC is smaller than the distance between the RC and the other $`A_n`$ . However, the donor and acceptor species differ in these two cases, so it is possible that the final hopping rates are nevertheless the same.
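A minimal version of this scan, reusing the rate_matrix() function from the sketch above (the grid resolution and bounds are illustrative), is as follows:

```
import numpy as np

kE = 3.57e11                      # measured charge-separation rate (s^-1)
grid = np.logspace(8, 15, 57)     # trial rates between 1e8 and 1e15 s^-1
hits = []
for k in grid:
    for kp in grid:
        M = rate_matrix(k=k, kp=kp, kpp=kp / 5.0, kE=kE)
        slowest = np.max(np.linalg.eigvals(M).real)
        if abs(slowest) < 1.5e10:  # slower than the 200 ps fluorescence decay
            hits.append((k, kp))
# all hits are found to lie on k = k', with k > 6.97e11 s^-1
```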
The transfer of excitation energy requires coupling between the emitting molecule and a ground-state molecule. At the intermolecular separations involved, between $`10\AA `$ and $`100\AA `$, long-range resonance transfer of electronic excitation arises from coupling between the transition dipoles of the donor and the acceptor, as described by Förster theory . Since the BChl $`Q_y`$ dipoles lie in the same plane, we have
$$k(R)=\frac{1}{\tau _F}\left(\frac{R_0}{R}\right)^6,$$
(20)
in which $`R_0`$, the Förster radius, measures the transfer efficiency. van Grondelle gave $`R_0=90\AA `$ for the BChl 875 to BChl 875 energy transfer and a fluorescence lifetime, $`\tau _F`$, of about $`3000ps`$ or slightly higher . If a putative separation of $`17.5\AA `$ between interacting BChl $`a`$ dimers is used, we obtain the estimate $`k\approx 6.17\times 10^{12}s^{-1}`$. This number is about an order of magnitude higher than the value obtained from our model; on the other hand, the pairwise energy transfer time is about $`1ps`$ according to our calculation . Conversely, from the value of $`k`$ obtained here, by fitting the $`200ps`$ decay as well as $`\tau _F`$, we estimate the Förster radius to be $`26.8\AA `$. This result is consistent within our model, since we assume only nearest-neighbour transitions. Further, since we place the whole population on the antenna at $`t=0`$ in our calculation, the rise time is infinitely short, instead of being subject to instrumental limits as observed experimentally. Although the light wavelength is much larger than the ring size, a ring might still receive energy in localized form by energy transfer from other rings, which is the initial condition we used in Fig. 2. Table I lists all the eigenvalues and corresponding amplitudes obtained from our model. From the table, we find that the largest-eigenvalue mode is important, not only because of its large separation from the other eigenvalues but also because of its large amplitude.
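The numbers quoted in this paragraph follow directly from Eq. (20); a short check, in which the $`17.5\AA `$ separation and the use of the observed slow decay rate in the inversion are the stated assumptions:

```
tau_F = 3000e-12     # fluorescence lifetime (s)
R0    = 90e-10       # Forster radius for B875 -> B875 transfer (m)
R     = 17.5e-10     # putative separation of interacting BChl a dimers (m)

k_forster = (1.0 / tau_F) * (R0 / R) ** 6
print(k_forster)     # ~6.2e12 s^-1, the estimate quoted above

# inverting Eq. (20): the Forster radius implied by the observed slow decay
k_slow = 1.0 / 200e-12
print(R * (k_slow * tau_F) ** (1.0 / 6.0))   # a few tens of Angstrom, of the
                                             # order of the 26.8 A quoted above
```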
We have also introduced inhomogeneity into the rate constants, to represent geometrical distortion. However, even at large distortion the basic character of the spectrum is not altered considerably. If a criterion for $`k=k^{\prime }`$ can be established, we can further reduce the number of free parameters in our model.
## IV Conclusion
In summary, we have taken a physicist’s approach to incoherent energy transfer within an antenna ring by considering a two-parameter, two-dimensional model. This model differs from the one presented by Skála et al.; reality might lie somewhere between the two. In our model we numerically find that $`k`$ has to equal $`k^{\prime }`$. Furthermore, we are able to calculate some of the eigenvalues analytically and to demonstrate explicitly that there is one group of modes governing the decay to the RC, together with three other groups. However, this mode separation depends upon the exact number of units in the ring and should therefore not be over-interpreted. Perhaps the finding should be read as follows: $`P_{16}^1`$, $`P_{16}^2`$ and $`P_{16}^3`$ are redundant, since only $`P_{16}^4`$ contains $`k_E`$, which is the rate that matters. A ring of $`16`$ units is huge; the only purpose of such a large antenna is to accommodate the RC.
Finally, we remark that, in view of the recent structural findings, it is possible to extend a two-dimensional random-walk model of energy transfer into a quasi-three-dimensional one, using the recent result of Cassi and Regina on random walks on bundled structures . Furthermore, this theoretical result could be verified experimentally through its spectral dimension, by measurements involving diffusion processes such as time-resolved spectroscopy of nearest-neighbour energy transfer. Other light-harvesting models and mechanisms are under further investigation.
# Structures Produced by the Collision of Extragalactic Jets with Dense Clouds
## 1 The effect of environment on extragalactic jets
### 1.1 Complex and distorted structure in extragalactic radio sources
The jets and hotspots of radio galaxies and quasars often show complex structure, including bends, twists, knots and multiple hotspots. Such structure is seen over a huge range of sizes, from the enormous wide-angle tail (WAT) sources (which can be megaparsecs in size), to the compact steep-spectrum (CSS) sources (which are about 10–15 kpc across). Jets can appear to bend by over 90° and remain collimated for several jet radii (Bridle & Perley 1984), despite the expectation that the oblique shock causing the bend should decelerate the jet. Explanations for these complex structures include motion through some intra-cluster medium (Leahy 1984), perturbations due to mergers (van Breugel et al. 1986, Sakelliou, Merrifield & McHardy 1996), variations in the direction of the jet at its source (Williams & Gull 1985, Scheuer 1982), and collision with dense clouds in the ambient medium (Burns 1986). Cloud collisions are particularly applicable in cases where these bends are very sharp. Other models would produce more gradual bends.
Theoretical studies of jet-cloud collisions appear to be in conflict over whether they can explain the observations. In some previous numerical simulations the jet is decelerated and effectively disrupted by the cloud, and the cloud is subsequently destroyed (De Young 1991). In others the jet appears to remain collimated as it is re-accelerated in a new direction (Norman 1993). Most recently Raga & Canto (1996) find from two-dimensional simulations and analytical studies that the jet bores through the cloud but reaches a steady configuration. In this paper we describe the results of a study aimed at resolving this question by investigating the effects of various hydrodynamical and geometrical parameters. This investigation uses three-dimensional, adiabatic simulations. We also follow the development of the interactions for a longer time than previous simulations, and estimate the intensity of synchrotron emission from the jet to enable meaningful comparison with observed radio maps.
Through these studies we hope to be able to determine what types of complex structure can be explained by jet-cloud collisions. Such studies could shed light on the alignment between radio and optical axes in high redshift radio galaxies, the contribution of shocks to the spectra of extended emission lines regions and the role of environmental effects in unified schemes for AGN and their evolution.
### 1.2 Examples of distorted structure
Wide-angle tail (WAT) sources are found in rich clusters and have distorted, C-shaped structure. Large WATs approaching 1 Mpc across cannot be bent by the motion of the parent galaxy through the cluster without assuming unreasonably high speeds through unreasonably dense intra-cluster gas (Burns 1986). To satisfy the requirements of momentum balance and to reproduce the sharpness of the bends that are observed, Burns proposed that the jets may collide with clouds of higher density in the surrounding medium. A good example is the western jet of the WAT 1919+479, which emerges from the core, vanishes after a short distance, and then reappears at a bright hotspot (Burns et al. 1986, Pinckney et al. 1994). Beyond this hotspot a tail stretches out for 800 kpc in a direction about 90° to that of the original jet, broadening and fading as it does so. The bend here is very sharp. There is one other hotspot not far from the beginning of the tail. Rotation measure and depolarisation are described as ‘patchy’, and vary significantly over the tail. X-ray observations show large scale asymmetry in the intra-cluster medium. The simulations of Loken et al. (1995) show that the necessary gas velocities can arise in cluster mergers, as can shocks that will bend the jet. However, these simulations still do not explain the sharpness of the bend or the persistence of the halo.
Barthel et al. (1988) present a large sample of quasars with powers greater than the Fanaroff-Riley division but with distorted structures. Twenty out of a sample of eighty high-redshift quasars showed bending greater than 20°, instead of the classical double structure that would be expected.
Compact Steep Spectrum (CSS) sources make up 10–15% of AGN. More than 15% have axes between the lobes and the core that are misaligned by more than 20° (Saikia et al. 1995). This distortion seems to be associated with an asymmetrical ambient medium.
In quasars the most prominent jets are in complex or one-sided structures, and smaller sources are the more powerful (Muxlow & Garrington 1991). Stocke, Burns and Christiansen (1985) present observations of ‘dogleg’ quasars showing strong changes in direction within the lobes. There is no evidence that these sources are found preferentially in either rich or poor cluster environments, a conclusion supported by recent work (Rector, Stocke & Ellingson 1995). Only a few dense clouds are needed in each case to explain the proportion of bent jets, so it would appear that it is the inhomogeneity of the environment, not its overall density, that causes the bends.
### 1.3 Intergalactic clouds in the neighbourhood of extragalactic jets
Evidence for inhomogeneity in the medium surrounding AGN comes from measurements of depolarization and line emission, and correlations between these properties/features and the radio structure.
Maps of rotation measure and depolarization of a sample of radio sources show significant inhomogeneity on a range of scales, some as low as 5kpc, some larger than 50kpc, out to distances of about 100kpc from the nuclei (Pedelty et al. 1989). Such effects could be caused by density inhomogeneities in the surrounding material.
The infrared, optical and ultraviolet structures of many high-redshift radio galaxies are closely aligned with the radio structures (McCarthy 1993, Chambers, Miley & van Breugel 1987). There are also several correlations between line emission and other aspects of distorted jets, for example: brighter line emission occurs on the side of the nearer radio-lobe (McCarthy, van Breugel & Kapahi 1991); the level of blue light and the strength of the alignment effect are correlated with a mix of radio power and spectral index (Dunlop & Peacock 1993). The lobes of these radio sources are often asymmetrical with the lobe nearer the nucleus tending to be more depolarized than the more distant lobe (Liu & Pooley 1991), suggesting that the material responsible for the depolarization may also present more resistance to the jet.
Regions of optical line emission are associated with knots and bends in the jets (Wilson 1993). For example, the radio galaxy 4C 29.30 has a region of line emission close to a bright knot just before a bend (van Breugel et al. 1986). The radio galaxy PKS2250-41 shows extended emission aligned with the radio axes, including a large arc-shaped region of line emission surrounding the radio lobe (Tadhunter et al. 1994). Extended emission line regions have a “clear spatial association” with regions of depolarized radio emission (Baum & Heckman 1989).
In many sources (especially at higher redshifts) the spectra from this extended emission line region can only be explained if a significant shock component is included (Clark & Tadhunter 1996, and references therein). Low polarization of the ultra-violet emission shows that there is not enough scattered AGN emission to account by itself for the total flux (Tadhunter 1996). The cloud collision model is now commonly invoked to explain the properties of extended emission line regions in AGN, for example the Seyfert galaxy NGC 1068 (Axon 1996); radio galaxies 3C 254 (Crawford 1996) and 3C368 (Stockton, Ridgway & Kellog 1996); and Cen A (Sutherland, Bicknell & Dopita 1993), as well as the sources mentioned in the previous paragraph.
The most luminous radio sources are known from observations to reside in regions of high galaxy density (Yates, Miller & Peacock 1989, Hill & Lilly 1990). X-ray observations of powerful radio galaxies and quasars show that many lie in the centres of rich clusters with dense, rapidly cooling IGM in which cold clouds can condense (Fabian 1993). Cowie et al. (1983) observed filamentary line emission in cooling flows that could indicate the presence of such clouds. Observations suggest that these clouds have temperatures of about $`10^4`$ K, densities of about $`100\mathrm{cm}^{-3}`$ and sizes of 3–15 kpc (Baum 1992). Some contribution from radio jets seems necessary to re-energize the filaments.
## 2 Simulating collisions of jets and clouds
### 2.1 Previous simulations
Analytical studies show that sharp bends of 90° are possible if the jet is thin or only moderately supersonic (Icke 1991). More recently Raga & Canto (1996) have published analytical calculations and two-dimensional simulations showing bending by clouds. They conclude that slower jets will be bent more, and clouds will be eroded as jets bore through them.
The first investigation of the effect of off-axis jet-cloud collisions was by De Young (1991) using the ‘beam scheme’ (Sanders & Prendergast 1974), to test the proposal of Burns (1986) that bending in large WATs is the result of collisions with clouds. De Young monitored the jet flow using test particles, and observed that the jet was considerably decelerated by the impact with the cloud. The cloud was destroyed within a few million years, and the jet returned to its original direction. He concluded that the interaction does not last long enough to produce anything like a tail.
However, WAT tails have a wide opening angle and show no strong evidence of supersonic speeds (such as terminal hotspots). It would appear the jet is disrupted at the impact point anyway. The only question is whether the interaction can be maintained long enough to produce tails of the observed length.
A similar interaction was investigated at higher resolution by Balsara & Norman using their RIEMANN code (Norman 1993). They argued from plots of the velocity field that a De Laval nozzle was formed which re-accelerated the jet in a new direction after impact.
Through the work described in this paper we aimed to resolve this conflict, to investigate more thoroughly the effect of different parameters both on the development of the interaction over time and on the structures produced, and to estimate the appearance of the source at radio wavelengths for direct comparison with observations.
### 2.2 Numerical techniques
The simulations we present in this paper were performed using a hydrodynamic code based on the Godunov method of Falle (1991) in three-dimensional Cartesian coordinates. This technique solves the inviscid Euler equations to second-order accuracy in space and time, with an adiabatic equation of state. In addition to calculating the usual dynamical variables, we also calculate a parameter representing the fraction of density within each cell which was originally jet material. This allows us to follow the evolution of the jet separately from the ambient medium and to calculate synthetic radio maps as described in section 2.4.
Since we can draw a plane of symmetry bisecting the cloud and containing the jet we have only calculated one half of the region around the interaction (figure 1). The boundaries of this domain are treated as free flow except for the symmetry plane and the region where the jet enters the grid. A free flow condition assumes that values on the outer surfaces of each cell are exactly equal to those on the inner surfaces. The symmetry condition is that velocities normal to the surface are reflected. The jet is produced by using an appropriate boundary condition representing incoming material in the region where the jet enters. Note we have made no assumptions about the position of the central engine with respect to the grid.
The simulations can be rescaled so that they represent structures on parsec or kiloparsec scales, as long as the gas behaves adiabatically. The simplest rescaling is to change sizes and times in proportion, preserving all other variables. For example, if the cell side is 1 kpc, then 1 time unit = 2$`\times 10^6`$years. We use this scaling as a reference in discussing the simulations below. We can apply these simulations to other cases with different pressures and temperatures. For example, in the case of the fastest jets, the temperature may be a hundred times higher, or for the slow jets a hundred times lower, and we still have velocities in the range accepted for extragalactic jets.
### 2.3 Testing the hydrodynamic code
We used the code to calculate two one-dimensional test problems. The first was Sod’s shock tube problem (Sod 1978). The code was used to compute plane shocks moving along each of the three axes of the grid (in both forward and reverse directions, as well as at various resolutions). It produced results in good agreement with the analytical solution.
The second test problem was the collision of a one-dimensional shock with a density discontinuity (Nittman, Falle & Gaskell 1982). This problem is clearly appropriate to our investigation as a useful indication of the fidelity of the code in this case. Once again this was run with shocks and discontinuities normal to each of the three axes, moving both forwards and in reverse directions and at various resolutions. Figure 2 shows an example of the results (full details are given in the caption to the figure). The positions and values of the two shocks were accurately reproduced.
As a final test we re-calculated a portion of one of the simulations (simulation 3 – see section 3.2) at double the initial resolution within a volume one-eighth of the size and compared the results. When the high-resolution results were smoothed to the lower resolution they showed no significant differences. Nor did there appear to be any effects caused by allowing the simulated flow to leave this grid compared to the same region of the lower resolution simulation. In figure 3 we show density slices at the same time for the simulations at both resolutions. We are satisfied from this that the results of our simulations are not significantly affected by the resolution or boundary effects.
### 2.4 Producing synthetic radio maps
Previous studies of jet-cloud collisions relied on the interpretation of flow patterns produced by the simulations to reach their conclusions. To allow a more direct comparison we have developed a simple approximation for the intensity of synchrotron emission in terms of the hydrodynamic variables. We use this to produce estimates of the surface brightness distribution of radio emission. These synthetic radio maps can be compared with observations. In this section we describe our prescription, and the assumptions it is based on. We then consider how changing these assumptions might affect the results, and present some plots for alternative prescriptions.
The intensity $`j`$ of synchrotron emission at frequency $`\nu `$ is given by $`j\propto KB^{1+\alpha }\nu ^{-\alpha }`$, where $`B`$ is the magnetic field strength, $`\alpha `$ is the spectral index and $`K`$ is related to the number density $`N(\gamma )`$ of relativistic electrons with Lorentz factors in the range ($`\gamma `$,$`\gamma +d\gamma `$) via $`N(\gamma )d\gamma =K\gamma ^{-(2\alpha +1)}d\gamma `$. In order to calculate $`j`$ we need to express the magnetic field and the coefficient $`K`$ in terms of the results of our hydrodynamic simulations.
To determine the magnetic field we assume that the field is insignificant outside the jet (see, for instance, Smith et al. 1985) and that the magnetic flux is frozen into the jet material. By conservation of magnetic flux we would expect the field strength to be related to the jet density via $`B\propto \rho _{jet}^{2/3}`$. In practice a complicated flow can amplify an initially disordered field; for the purposes of this work we have neglected this amplification.
To determine the coefficient $`K`$ we assume that the energy density of relativistic particles is a fixed fraction of the internal energy density of the gas, with the only changes in the distribution being due to adiabatic expansion. This is equivalent to assuming that the relativistic electrons form a supra-thermal tail throughout the gas, and that the efficiency of the acceleration process is similar everywhere. Following Wilson and Scheuer (1983) we can relate $`K`$ to the gas pressure $`p`$ via $`K\propto p^{\alpha /2+3/4}`$. The adiabatic index varies from 5/3 for a non-relativistic gas to 4/3 for relativistic particles, which corresponds to an uncertainty of about 0.3 in our choice of $`\alpha `$.
We substitute for $`K`$ and $`B`$ from the formulae above, and therefore obtain
$$j\propto \nu ^{-\alpha }p^{\alpha /2+3/4}\rho _{jet}^{2\alpha /3+2/3}$$
(1)
With $`\alpha `$=0.5 we have $`j\propto \nu ^{-\alpha }p\rho _{jet}`$, whereas with $`\alpha =1.0`$ we find $`j\propto \nu ^{-\alpha }p^{5/4}\rho _{jet}^{4/3}`$.
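For completeness, the substitution leading to equation (1) can be written out in one line (a restatement of the steps above, not an independent derivation):

```
j \propto K B^{1+\alpha}\nu^{-\alpha}
  \propto p^{\alpha/2+3/4}\,\bigl(\rho_{jet}^{2/3}\bigr)^{1+\alpha}\,\nu^{-\alpha}
  = \nu^{-\alpha}\, p^{\alpha/2+3/4}\, \rho_{jet}^{2(1+\alpha)/3}.
```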
As an alternative to assuming that magnetic flux is frozen into the jet and that it is zero elsewhere we could assume that the magnetic energy density is in equipartition with the internal energy density of the gas (and, because of our other assumption, with the relativistic particle energy density). This leads to $`B\propto p^{1/2}`$ and therefore
$$j\propto \nu ^{-\alpha }p^{\alpha +5/4}$$
(2)
It is common to use $`\alpha `$=0.75 so that $`j\propto \nu ^{-\alpha }p^2`$.
Further, if one still believed that the magnetic field was negligible outside the jet then we could use this expression with one’s favourite $`\alpha `$ wherever $`\rho _{jet}\ne 0`$ and set the intensity equal to zero where $`\rho _{jet}=0`$.
We can calculate the synchrotron intensity for each computational cell and, assuming that the emission is optically thin, integrate along lines of sight to produce a synthetic radio map at any epoch. In figure 4 (a – d) we show four alternative maps from the same fluid conditions based on the alternatives described above (see caption). Using the square of total pressure, the hotspot is large with a shape like a tadpole, but neither the jet nor the deflected tail are detectable. However, the evidence for higher magnetic fields in the jet is strong. The other alternatives produce structures that are similar to each other, so that the general comparisons we would like to make are not seriously affected.
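The map construction itself is compact; the sketch below (with toy arrays standing in for the actual simulation output) implements the flux-frozen prescription of equation (1) and the equipartition alternative of equation (2):

```
import numpy as np

def synthetic_map(p, rho_jet, alpha=0.5):
    """Flux-frozen-field prescription of equation (1):
    j ~ p^(alpha/2 + 3/4) * rho_jet^(2(1+alpha)/3), optically thin,
    integrated along the z-axis (perpendicular to the symmetry plane)."""
    j = p ** (alpha / 2.0 + 0.75) * rho_jet ** (2.0 * (1.0 + alpha) / 3.0)
    return j.sum(axis=2)

def synthetic_map_equipartition(p, alpha=0.75):
    """Equipartition-field alternative of equation (2): j ~ p^(alpha + 5/4)."""
    return (p ** (alpha + 1.25)).sum(axis=2)

# toy fields on the 50 x 120 x 120 grid used in the simulations
p = np.ones((50, 120, 120))          # pressure, ambient value normalized to 1.0
rho_jet = np.zeros_like(p)
rho_jet[:, :, 55:65] = 0.01          # a crude column of light jet material
radio = synthetic_map(p, rho_jet)    # shown on a logarithmic greyscale, cf. fig. 4
```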
## 3 The effect of jet and cloud properties on the results of the interaction
### 3.1 Models and coverage of parameter space
The parameter space of jet-cloud collisions is multi-dimensional. In an attempt to explore the range of behaviour within this space we have chosen to vary the values of three of the most significant parameters over ranges applicable to extragalactic jets: the jet Mach number, its density contrast with the ambient medium, and the contrast between cloud and ambient density. For one of the more interesting cases we have also investigated the effect of the impact angle and of the relative sizes of cloud and jet. Details are given in table 1.
We have assumed conditions in the ambient medium consistent with observations, that is a temperature of 5$`\times 10^7`$ K and a particle number density of $`0.01\mathrm{cm}^{-3}`$. These values are used to normalize the quantities in the computation so that model values for the ambient density and pressure in the code are set to 1.0. The jet and cloud are both taken to be in pressure balance with the ambient medium. The computational domain is divided into a grid of 50 $`\times `$ 120 $`\times `$ 120 cells.
### 3.2 Summary of results
By simply changing a few parameters we have produced a range of different structures. We discuss these in this section. The common features in all the simulations are: the jet is disrupted and decelerated to some extent on collision; the cloud is eroded by the jet; high pressures and densities are produced at the point of impact, giving rise to bright hotspots in the radio emission. All these structures vary in time, and many might be associated with features observed in real radio sources at different epochs in the simulation (Higgins, O’Brien, & Dunlop, 1995).
We illustrate this discussion by presenting some examples. In each case we show total density beside a synthetic radio map at two epochs. Density plots show a slice through the symmetry plane (see figure 1), rendered in a logarithmic greyscale. The synthetic radio maps are produced by estimating the intensity of synchrotron emission in each cell (using the formula $`j\propto \nu ^{-\alpha }p\rho _{jet}`$, as described in section 2.4). This is then summed along the line-of-sight perpendicular to the symmetry plane. The resulting maps are shown on a logarithmic greyscale with a range of about one order of magnitude. They have very different peak values, so the hotspots are not directly comparable, but the dynamic ranges between the peaks and the faintest features are. The faster jets have left the grid in all cases before the slower jets have developed any interesting features, so we show each jet shortly after impact, and then at a later stage of the interaction. For the fast jet this is $`t=`$4 units of computational time after it enters the grid (16 million years on the scaling specified above) and $`t=`$28 (56 million years). The slower jets are also shown at $`t=`$28, which is around the time of impact, and then at $`t=`$76 (112 million years).
The light, slow jet (simulation 1, figure 5) is scattered into a very broad area in all directions perpendicular to the jet. A mushroom- or umbrella-shaped structure is formed in the radio emission. The hotspot is no brighter than it was in the ambient medium before impact, and is recessed a little within the lobe. It is not obvious from the radio structure that this is a cloud collision. When the same jet encounters a denser cloud (simulation 2) the interaction lasts longer, since the jet erodes the cloud much more slowly. The denser cloud makes no difference to the hotspot behaviour, which is not significantly brighter on impact with the cloud. This feature almost certainly depends on the relative sizes of the jet and cloud: a much larger cloud would not allow deflection in both directions.
The fast, light jet (simulations 3 and 4, figure 6) produces a weak secondary hotspot at the head of the deflected jet in the radio map. This is about a tenth as bright as the spot at the impact point. It becomes disconnected (on the dynamic range of our radio plots) and disappears within about fifty thousand years, and the jet rapidly erodes the cloud (whatever its density). As the jet breaks through the cloud there are two hotspots within a boot-shaped lobe. Although the cloud impact is the cause of the bending, the deflection and secondary hotspot is actually produced as the jet bends inside the distorted cocoon that has been formed during the interaction.
After a few hundred thousand years it has broken through the cloud and shot off the grid. This faster jet can be deflected, and produces much more visible results, but only for a short time. In this case the hotspot shows a small increase in brightness (about 10 – 20%) on impact, and remains about the same throughout the interaction. The density of the cloud does not make a significant difference to the effect on the jet in either of these cases.
The slow, heavy jet (simulations 5 and 6, figure 7) shows a clear deflection by 90°. The jet forms a bowl-shaped cavity in the cloud which deflects the jet with a relatively large radius of curvature. We do not see any secondary hotspot in the radio map. The primary hotspot brightens steadily after impact, by a factor of between two and three. The jet erodes the cloud slowly: on the scale of a WAT source it takes about $`3\times 10^8`$ years to erode the less dense cloud. There is a hotspot along the jet before the impact point. When the cloud is denser the interaction of course lasts longer, and the deflection angle is nearer to 70°. After about 6$`\times 10^8`$ years the deflected arm is about four jet radii long, and about half as wide again as the incoming jet. The jet has worked about half way through the cloud. The shape of the deflecting face formed by the interaction is now deeper, disrupting the jet more. However the interaction can certainly continue for some time yet.
The fast, heavy jet (simulations 7 and 8) pushes deep into the cloud, however dense it may be. The impact hotspot is orders of magnitude brighter than any other features. The jet breaks through the cloud within about 50 million years. The radio map shows an extremely bright deflection spot and a weaker secondary (fainter by a factor of about 1000) in the deflected material. The primary hotspot brightens by a factor of ten on impact in this powerful jet. Because this interaction is short-lived, and the hotspot far outshines any other emission, it would be difficult to detect any deflection in a source with this powerful a jet.
### 3.3 The Effect of Impact Angle
We used the same fluid parameters as in simulation 4, altering only the angle at which the jet encounters the cloud. With an impact angle of 45° this produces a distinct secondary hotspot at the head, about one jet radius away. However, it is shorter lived ($`\sim `$ 40 million years) and the angle of deflection decreases quickly as the jet erodes the cloud. With an impact angle of 30° the deflection is shallower (about 70°) and shorter lived. Shallower impact angles can produce secondary hotspots, since less momentum is lost by the jet, but the interactions are short lived on the whole. An impact angle of 45° is probably the lower limit for a deflection of 90°.
### 3.4 An Example of a Jet-Cloud Interaction
Our sixth simulation ($`M`$=2, $`\eta `$=0.2, figure 7) shows all the major features of WAT jets: bent sharply through 90° at bright hotspots and flaring out into long, wide tails (O’Donoghue, Eilek & Owen 1990). Due to the higher cloud density the interaction is long-lived, lasting of the order of $`10^8`$ years, which allows the formation of a tail not only large enough but of an age consistent with estimates of travel time to the end of the tails from synchrotron spectral-aging (O’Donoghue, Owen & Eilek 1993).
We chose this simulation to examine the dynamics of the interaction in more detail. In particular we look at the distribution of material and its velocity in three dimensions, along with the estimated radio emission, and how it varies with viewing angle. We also give some indications of the likely location and shape of line emission.
Figure 8 shows velocity vectors plotted in three dimensions, with the initial jet direction emerging from the plane of the diagram. This is at a later time than shown in figure 7 (3$`\times 10^8`$ years). These show how the material is deflected after impact into a fan of opening angle between 60° and 80°. None of this material is moving slower than 20% of the speed of the incoming jet. A spine of slightly faster material can be discerned along the middle of this fan, representing the remnant of the jet. The wings are material deflected at higher speed on impact; as the interaction progresses the impact cavity deepens and material deflected in this direction at later times has slightly lower speeds that fall below the threshold of this plot. Clearly on deflection the jet is not only decelerated but almost all collimation is lost. However, this fan is very flat, allowing the appearance of a deflected tail when viewed from the side.
The next plots (figure 9) show the expected radio emission, viewed from two orientations. Since the fan is fairly thin this gives a reasonable impression of a bent jet from the side (a), with the tail showing many similarities to WATs, including filamentary structure. This seems to be due to vortex action in the tail. In (b) we see how this radio structure would appear were it not close to the plane of the sky. The emission from the deflected fan now surrounds the whole jet. This emission should not be as bright as the previous case since that had longer path lengths through the emitting material. Thus it should be more difficult to identify the counterparts of these objects at different orientations.
### 3.5 Line emission
The compression and distortion of the cloud would be likely to result in enhanced line emission, due to photoionization by continuum from a hidden quasar (if not limited by the flux from the nucleus) or shock heating. The simulations show elongated density enhancements beside the jet, especially the transmitted shock driven into the cloud. As a crude indicator of the expected position and distribution of line emission, we have plotted the distribution of density squared (emission measure) integrated through the grid in figure 10. This is overlaid in the figure with contours of radio emission. Note the bright, elongated region inside the cloud, forming a cap around the impact point. Compact regions of line emission are seen in HST images (for example 4C41.17, van Breugel 1996), which we interpret in this model as shock fronts moving away from the jet, providing density enhancements for scattering of or photo-ionization by the AGN continuum, or for shock-ionization. This gives a natural explanation for the radio-optical alignment effect.
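The emission-measure proxy of figure 10 amounts to a single projection; a minimal sketch, with a placeholder array standing in for the simulation density:

```
import numpy as np

def emission_measure_map(rho, axis=2):
    """Integrate the square of the total density (the emission-measure proxy
    of figure 10) through the grid along the chosen line of sight."""
    return (rho ** 2).sum(axis=axis)

rho = np.ones((50, 120, 120))    # placeholder for the simulation density array
em = emission_measure_map(rho)   # to be overlaid with the radio contours
```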
### 3.6 The Effect of Several clouds
In realistic situations there are likely to be many clouds, so we have simulated the passage of a jet through a medium containing an ensemble of clouds. Figure 11 shows a three-dimensional rendering of total density and a synthetic radio map at the same epoch. As the jet progresses through the grid it collides with clouds, producing prominent hotspots in the radio emission. These spots persist as the jet moves past the cloud and encounters further obstructions. They fade as the clouds are eroded by the passage of the jet. Meanwhile new hotspots form at new encounters, and deflected jet material percolates through the ambient medium, producing filamentary and foamy shock structures. The result is a jet that is made visible by a series of irregular knots, with a crooked ridgeline, filamentary diffuse bridges and lobes and multiple hotspots at the head of the jet. It clearly remains collimated and produces a bow-shock at its head.
## 4 Conditions for the deflection of jets
Synchrotron emission may not trace the fluid flow in an extragalactic jet in a simple way. Bends in the radio structure may not represent bends in the flow. Our synthetic radio maps allow us to make more general comparisons between our simulations and observations; De Young, and Balsara and Norman, based their conclusions solely on the flow patterns. Our results show how a few changes in parameter values can alter the results of a collision. Under our assumptions, it is possible to produce structures reminiscent of those seen in a variety of radio sources. This also explains the apparent conflict between previous simulations. The simulations performed by De Young involved a fast, heavy jet (about Mach 25, with a density contrast of $`\eta `$=1), so it is not surprising that it ripped up the clouds. In contrast, Balsara and Norman’s jet was of a moderate speed and lightness (Mach 4 and a density contrast of $`\eta `$=0.2). As we have shown, it is easier to produce deflection under these conditions. The conclusions clearly depend on the choice of parameter values.
The impact causes a large increase in radio luminosity of fast jets, which can still be seen after tens of millions of years. Samples of ultra-luminous radio sources may contain a large proportion of sources in which jets have been in collision with clouds within this time in their past, even if the probability of a collision is fairly small for any source. The technique of ultra-steep-spectrum selection used to locate luminous sources at high redshift (Rottgering 1992) could make this bias even stronger.
The range of structures we have produced suggest that deflection may be easier to produce or detect in lower power jets propagating in the plane of the sky. We attempt to model sources in more detail in future papers, but note that in the case of WATs these do not seem very demanding requirements: the sizes are already very large, so we would not expect any other orientation. When such sources are seen at some other orientation the lobe emission may be too diffuse to detect. However, our simulations use spherical clouds, which are clearly an idealised case. Other shapes or clouds with some density variation may produce a better degree of collimation in the deflected jet. This might allow deflected structures to be observed from a wider range of angles. Larger clouds may sustain the deflection of high power jets long enough to produce a deflected arm.
Steffen et al. (1997) have studied the interaction of jets with clouds in the narrow-line region of Seyfert galaxies through two-dimensional, non-adiabatic simulations. These suggest that only clouds above a critical density will radiate after being shocked by the impact of the jet. Thus the absence of detectable emission associated with bends may be due to a low cloud density.
We have also tentatively associated our other simulations with bent structure. We discuss this in detail elsewhere. In particular we find that different structures may be produced by a single set of parameters as the interaction progresses (Higgins, O’Brien & Dunlop 1995, Dunlop 1995). Other sources can also be modelled by these simulations (with rescaling when appropriate). Our studies show that shallower impact angles can produce secondary hotspots since less momentum is lost by the jet, although these are short-lived. Thus compact sources seem to be best modelled with more oblique impacts and/or several clouds (see figure 6).
### 4.1 Further Development
In conclusion, given the right sets of conditions collisions between jets and clouds can reproduce some of the distorted structure seen in observations, and is suggestive of alignments between radio and optical axes. A more detailed understanding of the physics involved could allow us to infer the properties of jets and their environment.
The radio maps presented here assume a fixed spectral index throughout the source. A more sophisticated prescription in which this could vary would allow us to investigate the variation of the appearance of a source with frequency. It should also be possible to include calculation of a few common lines, and their spatial distribution.
These calculations are non-relativistic. Relativistic simulations of jets show no gross differences to the structures seen in non-relativistic simulations. The most significant difference is in the effect of Doppler beaming, which should not apply in our case since the lobes are not beamed. Recent calculations suggest that relativistic jets may lose less kinetic energy through entrainment of ambient material (Bowman, Komissarov & Leahy 1996). If this was the case in our model it might allow better collimation after deflection. We intend to make relativistic calculations to explore this possibility.
## Acknowledgments
SWH acknowledges the PPARC for receipt of a studentship, and his wife, Leah, and parents for additional financial subsidies. Computing was performed using the Liverpool John Moores University Starlink node. We would like to thank Dr Sam Falle for useful suggestions, and Dr Huw Lloyd and Dr John Porter for valuable discussions and the referee for pointing out important considerations.
## References
Axon, D., 1996, in Clark, N.E., ed., Jet-cloud Interactions in Extragalactic Nuclei, published on the World Wide Web, http://www.shef.ac.uk/~phys/research/astro/conf/index.html
Barthel, P.D., Miley, G.K., Schilizzi, R.T., Lonsdale, C.J., 1988, A&AS, 73, 515
Baum, S.A., 1992, in Fabian, A.C., ed., NATO ASI Ser., vol. 366, Clusters and Superclusters of Galaxies, Kluwer, Dordrecht, p. 171
Baum, S.A., Heckman, T.M., 1989, ApJ, 336, 681
Bridle, A.H., Perley, R.A., 1984, ARA&A, 22, 319
Bowman, M., Leahy, J.P., Komissarov, S.S., 1996, MNRAS, 279, 899
Burns, J.O., 1986, Can.J.P., 64, 363
Burns, J.O., O’Dea, C.P., Gregory, S.A., Balonek, T.J., 1986, ApJ, 307, 73
Chambers, K.C., Miley, G.K., van Breugel, W.J.M., 1990, ApJ, 363, 21
Chambers, K.C., Miley, G.K., van Breugel, W.J.M., 1987, Nat, 329, 624
Cowie, L.L., Hu, E.M., Jenkins, E.B., York, D.G., 1983, ApJ, 272, 29
Clark, N.E., Tadhunter, C.N., 1996, in Carilli, C.L., Harris, D.E., eds., Cygnus A – Study of a Radio Galaxy, CUP, Cambridge, p. 15
Crawford, C.S., Vanderriest, C., 1996, MNRAS, 285, 580
De Young, D.S., 1991, ApJ, 371, 69
Dunlop, J.S., 1995, in Hippelein, H., Meisenheimer, K., eds., Galaxies in the Young Universe, Lecture Notes in Physics vol. 463, Springer-Verlag, Berlin
Dunlop, J.S., Peacock, J.A., 1993, MNRAS, 263, 936
Fabian, A.C., 1994, ARA&A, 32, 277
Falle, S.A.E.G., 1991, MNRAS, 250, 581
Higgins, S.W., O’Brien, T.J., Dunlop, J.S., 1996, in Ekers, R., Fanti, C., Padrielli, L., Extragalactic Radio Sources, IAU Symp. 175, Kluwer, Dordrecht, 467
Higgins, S.W., O’Brien, T.J., Dunlop, J.S., 1995, in Millar, T.J., Raga, A., eds, Shocks in Astrophysics, Kluwer, Dordrecht, p. 311
Hill, G.J., Lilly, S.J., 1990 ApJ, 367, 1
Icke, V., 1991, in Hughes, P.A., ed, Beams and Jets in Astrophysics, CUP, Cambridge, p. 232
Koide, S., Sakai, J-I., Nishikawa, K-I., Mutel, R. L., 1996, ApJ, 464, 724
Lacy, M., Rawlings, S., 1994, MNRAS, 270, 431
Liu, R., Pooley, G., 1991, MNRAS, 249, 343
Leahy, J.P., 1984, MNRAS, 208, 323
Leahy, J.P., Muxlow, T.W.B., Stephens, P.W. 1989, MNRAS, 239, 401
Loken, C., Roettiger, K., Burns, J.O. Norman, M., 1995, ApJ, 445, 80L
Lonsdale, C.J., Barthel, P.D., 1986, ApJ, 303, 617
McCarthy, P.J., 1993, ARA&A, 31, 639
McCarthy, P.J., van Breugel, W., Kapahi, V.K., 1991, ApJ, 371, 478
Meisenheimer, K., Hippelein, H., 1992, A&A, 264, 455
Muxlow, T.W.B, Garrington, S.T., 1991, in Hughes, P.A., ed, Beams and Jets in Astrophysics, CUP, Cambridge, p. 51
Nittman, J., Falle, S.A.E.G. and Gaskell, P.H., 1982, MNRAS, 201, 833
Norman, M.L., 1993, in Burgarella, D., Livio, M., O’Dea, C., eds, STScI Symp. Ser. Vol. 6, Astrophysical Jets, CUP, Cambridge, p. 211
Owen, F.N., O’Dea, C.P., Keel, W.C., 1990, ApJ, 352, 44
O’Donoghue, A.A., Eilek, J.A., Owen, F.N., 1993, ApJ, 408, 428
O’Donoghue, A.A., Owen, F.N., Eilek, J.A., 1990, ApJS, 72, 75
Pedelty, J. A., Rudnick, L., McCarthy, P.J., Spinrad, H., 1989, AJ, 97, 647
Pinckney, J., Burns, J.O., Hill, J.M., 1994, AJ, 108, 2031
Raga, A.C., Canto, J., 1996, MNRAS, 280, 567
Rector, T.A., Stocke, J.T., Ellingson, E., 1995, AJ, 110, 1492
Rottgering, H., 1993. PhD Thesis, University of Leiden
Saikia, D.J., Jeyakumar, S., Wiita, P.J., Sanghera, H., Spencer, R.E., 1995, MNRAS, 276, 1215
Sakelliou, I., Merrifield, M.R., McHardy, I.M., 1996, MNRAS, 283, 673
Sanders, R.H., Prendergast, K.H., 1974, ApJ, 188, 489
Scheuer, P.A.G., 1982, in Heeschen, D.S., Wade, C.M., eds, Extragalactic Radio Sources, IAU Symp. 97, Reidel, Dordrecht, p. 163
Smith, M., Norman, M.L., Winkler, K-H.A., Smarr, L., 1985, MNRAS, 214, 67
Sod, Gary A., 1978, J.Comp.Phys., 27, 1
Steffen, W.S., Gómez, J.L., Raga, A.C. & Williams, R.J.R, 1997, ApJ, L73
Stocke, J.T., Burns, J.O., Christiansen, W.A., 1985, ApJ, 299, 799
Stockton, A., Ridgway, S., Kellog, M., 1996, AJ, 112, 902
Sutherland, R.S., Bicknell, G.V., Dopita, M.A., 1993, ApJ, 414, 510
Tadhunter, C.N., Clark, N., Shaw, M.A., Morganti, R., 1994 A&A, 288, L21
Tadhunter, C.N., 1996, in Carilli, C.L., Harris, D.E., Cygnus A – Study of a Radio Galaxy, CUP, Cambridge, p. 33
van Breugel, W.J.M., Heckman, T.M., George, K., Filippenko, A.V., 1986, ApJ, 311, 58
van Breugel, W.J.M., 1996, in Ekers, R., Fanti, C., Padrielli, L., Extragalactic Radio Sources, IAU Symp. 175, Kluwer, Dordrecht, 577
Williams, A.G., Gull, S.F., 1985, Nat, 313, 34
Wilson, Andrew S., 1993, in Burgarella, D., Livio, M., O’Dea, C., eds, STScI Symp. Ser. Vol. 6, Astrophysical Jets, CUP, Cambridge, p. 122
Wilson, M.J., Scheuer, P.A.G., 1983, MNRAS, 205, 449
Yates, M.G., Miller, L., Peacock, J.A., 1989, MNRAS, 240, 129
## Figure Captions
Figure 1: The geometry of the computational grid. This shows the symmetry plane containing the jet axis (arrow) and bisecting the cloud (hemisphere). It also shows the viewpoint used for the density slices in figures 3 to 7, the $`z`$-axis.
Figure 2: Example of the results of a test using a one-dimensional problem involving the impact of a shock on a density discontinuity. The solid line represents the analytical solution, the points represent the results of the simulation. The parameter values are: ‘jet’ (shock) Mach number 2.0, pressure 1.0 and density 1.0; ‘cloud’ (discontinuity) pressure 1.0 and density 200.
Figure 3: Logarithmic plots (on the same greyscale) of density in the symmetry plane for the same jet parameters at the same time run at two different resolutions. The upper panel was produced from a simulation run at twice the resolution of the second, then smoothed with a 3-D gaussian and binned down to the same pixel size.
Figure 4: Alternative plots of synchrotron surface brightness distribution, using a logarithmic grey scale. Panels a) and b) show the effect of changing $`\alpha `$ on our prescription. Panel a) shows the effect of $`\alpha `$ = 0.5, which is the value we use in plots shown elsewhere in this paper. Panel b) shows the effect of $`\alpha `$ = 1.0: jet and tail are fainter, so that the wider diffuse emission at the edges of the tail is not seen. Panel c) shows pressure squared only where there is jet material. The jet and tail are as bright as in the previous plots, but the diffuse emission around them is not seen due to the negligible magnetic fields in these outer regions, producing a narrower tail. Panel d) shows the square of total pressure: this is the effect of allowing the same magnetic field in the ambient medium and the jet.
Figure 5: Logarithmic greyscale plots of total gas density (in the symmetry plane) and integrated radio emission from simulation 1 ($`M=2,\eta _j=0.01,\eta _c=50.0`$). In the density plots white represents the highest value. For clarity in the radio plots black represents the peak brightness. The plots are shown at two epochs from the simulation: a) 56 million years after the jet enters the grid, and b) 112 million years.
Figure 6: Logarithmic greyscale plots of total gas density (in the symmetry plane) and integrated radio emission for simulation 4 ($`M=10,\eta =0.01,\eta _c=200.0`$). In the density plots white represents the highest value. For clarity in the radio plots black represents the peak brightness. The plots are shown at two epochs from the simulation: a) 16 million years after the jet enters the grid, and b) 56 million years.
Figure 7: Logarithmic greyscale plots of total gas density (in the symmetry plane) and integrated radio emission for simulation 6 ($`M=2,\eta _j=0.2,\eta _c=200.0`$). In the density plots white represents the highest value. For clarity in the radio plots black represents the peak brightness. The plots are shown at two epochs from the simulation: a) 56 million years after the jet enters the grid, and b) 112 million years.
Figure 8: Three dimensional velocity vector plots of a Mach 2 jet with density contrast $`\eta =0.2`$, seen from two orientations. The jet emerges from the plane of the upper panel. Vectors are plotted if they have a speed greater than 20% of the speed of the incoming jet.
Figure 9: Radio emission seen from two orientations: (a) perpendicular to the symmetry plane (as in previous radio plots), and (b) rotated so that the jet axis is about 30° from the line of sight.
Figure 10: Square of density integrated along the line of sight, indicating the expected site of line emission, overlaid with synthesized radio contours corresponding to figure 9 a).
Figure 11: A jet propagating through a collection of clouds. This is a fast, light jet ($`M`$=10, $`\eta `$=0.01). All clouds have density contrast of 50. They are distributed at random positions in the grid, with a fixed volume filling factor of 0.2 and a power law distribution of radius up to a fixed maximum size (1.4 times the radius of the jet). There is no plane of symmetry in this problem, so we had to calculate the whole grid, which in this case was 90 $`\times `$ 90 $`\times `$ 90. This represents a volume of 729000 kpc³. The plots show the results about four million years after the jet entered the grid. a) A constant density surface of the jet inside a diffuse rendering of cloud density. b) A synthetic radio map. The jet has been deflected at least twice, where it has encountered clouds, but clearly remains collimated.
# Anisotropic Flow at STAR
## 1 Introduction
The study of collective flow in nuclear collisions at high energies has been attracting increasing attention from experimentalists. This is partly because recent progress has been made in the development of new techniques suitable for flow studies at high energies. Instead of studying $`p_x`$, in these new methods a Fourier expansion of the azimuthal distribution of particles is used in which the first harmonic coefficient, $`v_1`$, quantifies the directed flow and the second harmonic coefficient, $`v_2`$, quantifies the elliptic flow. In some cases $`A1`$ and $`A2`$ were reported, which, in modern terminology, are twice the square of the sub-event resolution. Using these new techniques, anisotropic flow has now been observed for heavy symmetric systems at both the AGS and SPS.
At the AGS the E877 Collaboration pioneered the use of the Fourier expansion method to measure $`v_1`$ and $`v_2`$. They studied these quantities (as well as $`v_4`$) from a calorimeter as a function of centrality in different pseudorapidity windows. Then they studied nucleons as well as pions as a function of pseudorapidity for different centralities. Using their spectrometer to identify particles while still obtaining the event plane from the calorimeter, they measured $`v_1`$ and $`v_2`$ as a function of $`p_t`$ for different rapidities and centralities. They also reported $`p_x`$ as a function of rapidity. In their latest papers they extended this study to light nuclei. The E802 Collaboration studied $`p_x`$ for light nuclei in the target rapidity region using a forward hodoscope to determine the event plane.
At the SPS NA49 first observed elliptic flow in a calorimeter study which reported $`A2`$ as a function of centrality. WA98 reported $`A1`$ as a function of centrality for protons and $`\pi ^+`$ in the target rapidity region. They also studied $`p_x`$ in the target rapidity region. NA45 used silicon drift detectors to study $`v_1`$ and $`v_2`$ as a function of pseudorapidity. NA49 has presented a differential study of $`v_1`$ and $`v_2`$ as a function of $`p_t`$ and $`y`$ and has also started to study the centrality dependence.
Also, the importance of flow for other measurements has just begun to be studied. For two particle correlations relative to the event plane the mathematical scheme has been worked out. Some first results have been given by WA98. Also, for non-identical particles the correlation relative to the event plane has been discussed.
## 2 Physics Motivation
Anisotropic flow, in particular elliptic flow, in spite of the relatively small absolute value of the effect, contains very rich physics. In general terms, it is very sensitive to the equation of state which governs the evolution of the system created in the nuclear collision. As such, anisotropic flow provides important information on the state of matter under the extreme conditions of the nuclear collision. The anticipated phase transition to a QGP should have a dramatic effect on elliptic flow due to the softening of the equation of state.
Elliptic anisotropy was first suggested as a possible signature of transverse collective flow in the pioneering work of Ollitrault. Within the hydro-dynamical model Ollitrault analyzed the role of different equations of state and phase transitions on the final anisotropy. Hung and Shuryak suggested scanning with beam energy in order to look for the QCD phase transition. Using their idea of the softest point in the equation of state combined with hydro-dynamical calculations, Rischke predicted a dramatic drop in the elliptic flow signal at the corresponding beam energies (in the original calculations this was at AGS energies). Sorge has shown that the elliptic flow is very sensitive to the pressure at maximum compression, which is the most interesting time in the system evolution. Recent studies within the parton cascade model yield similar conclusions, providing also the relation between the strength of the elliptic flow and parton-parton cross sections. Recently, Sorge also tried to combine the early system evolution in accordance with a QGP equation of state with a later hadron cascade. He looked at the centrality dependence of the elliptic flow in order to detect QGP production. Summarizing this part, we conclude that the effect of a QGP should be seen in the dependence of anisotropic flow on the energy of the colliding nuclei, or in the dependence on the centrality of the collision. If a QGP is produced in only a small fraction of the collisions, then fluctuations in flow would be one of the best observables for this effect.
The formation of a disoriented chiral condensate (DCC) in nuclear collisions could also result in an event anisotropy. It could be due to the anisotropic shape of the DCC domains or just to local fluctuations in the charged multiplicity, which should result in “orthogonal” flow in the charged and neutral sectors.
The very magnitude of anisotropic flow is sensitive to the degree of equilibration in the system. Note that at present there is no calculation based on the hydro-dynamical picture which accounts for the experimentally observed values of the effect. This could have its origin in the obvious difficulties of hydrodynamic model calculations, but it could also indicate a non-applicability of the picture to nuclear collisions. Cascade models such as RQMD describe the data much better. From this point of view the analysis of elliptic flow in the collision-less and hydrodynamic limits is very interesting. HBT interferometry performed relative to the event plane also becomes extremely important at this point. Does the system really expand in the reaction plane as prescribed by hydrodynamics? Simultaneous measurements of the anisotropic flow and the two-particle correlations, identical as well as non-identical, should in principle answer this question.
We must also mention the importance of anisotropic flow measurements to the vast variety of other measurements which at first look have nothing to do with anisotropic flow. Consider high $`p_t`$ particle production. It could be that the production mechanism (hard parton scattering) is very insensitive to the in-plane expansion, but that the rescattering of high $`p_t`$ partons is different in the different directions of particle emission due to the anisotropic geometry of the collision zone. This would lead to anisotropy in high $`p_t`$ particle production and gives another opportunity to study how it develops.
Another example is HBT measurements averaged over all orientations of particle emission. One would think that this does not require reaction plane measurements, but this is not really true. The mixed pair distribution usually used in the correlation function calculation can strongly depend on the relative orientation of the reaction plane of the events used to create the mixed pair. Therefore one should have this information even in the case where the dependence of the HBT parameters on the reaction plane is not studied.
## 3 Technical Requirements
The study of azimuthal anisotropy of unidentified charged hadrons needs the momenta of the particles but does not have any unusual requirements for calibrations, momentum resolution, acceptance, efficiency, two-track resolution, or two-track efficiency. However, for future analyses it would be good to have particle identification.
## 4 Directed and Elliptic Flow at RHIC
The anisotropy in the azimuthal distribution of particles is often characterized by $`v_1`$ and $`v_2`$, called directed and elliptic flow, respectively. This anisotropy, especially $`v_2`$, plays an important role in high energy nuclear collisions and is expected to be even more important at RHIC energies. The azimuthal distribution of particles is described by a Fourier expansion
$$E\frac{\mathrm{d}^3N}{\mathrm{d}^3p}=\frac{1}{2\pi }\frac{\mathrm{d}^2N}{p_t\mathrm{d}p_t\mathrm{d}y}\left(1+\underset{n=1}{\overset{∞}{∑}}2v_n\mathrm{cos}[n(\varphi -\mathrm{\Psi }_r)]\right),$$
(1)
where $`\mathrm{\Psi }_r`$ is the true reaction plane angle. The reaction plane is defined by the beam direction and the impact parameter vector $`𝐛`$. In a given rapidity ($`y`$) and $`p_t`$ interval the coefficients are determined by
$$v_n=⟨\mathrm{cos}[n(\varphi -\mathrm{\Psi }_r)]⟩.$$
(2)
Similarly this Fourier expansion can be done in coordinate space, where for a given rapidity and $`p_t`$ interval the coefficients are determined by
$$r_n=⟨\mathrm{cos}[n(\mathrm{arctan}(\frac{y}{x})-\mathrm{\Psi }_r)]⟩$$
(3)
where $`x,y`$ are the particle space coordinates at freeze-out. Of course, these equations only apply to simulations where one knows $`\mathrm{\Psi }_r`$.
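The defining relations (1)–(3) translate directly into estimators. Below is a minimal numerical sketch (our own illustration on a toy event with a built-in $`v_2`$, not STAR or RQMD output; all names and numbers are assumptions for the example) of how $`v_n`$ is extracted when $`\mathrm{\Psi }_r`$ is known, as in a simulation.

```
import numpy as np

def flow_coefficients(phi, psi_r, n_max=2):
    # Eq. (2): v_n = <cos[n(phi - Psi_r)]>, averaged over particles
    return [np.mean(np.cos(n * (phi - psi_r))) for n in range(1, n_max + 1)]

def spatial_coefficients(x, y, psi_r, n_max=2):
    # Eq. (3): the same average built from the freeze-out coordinates
    phi_space = np.arctan2(y, x)
    return [np.mean(np.cos(n * (phi_space - psi_r))) for n in range(1, n_max + 1)]

# Toy event sample with a built-in elliptic anisotropy v2 = 0.05
rng = np.random.default_rng(0)
psi_r = 0.3                                    # true reaction plane angle
phi = rng.uniform(-np.pi, np.pi, 200000)
weight = 1.0 + 2.0 * 0.05 * np.cos(2.0 * (phi - psi_r))
phi = phi[rng.uniform(0.0, 1.2, phi.size) < weight]   # accept/reject sampling
v1, v2 = flow_coefficients(phi, psi_r)
print(v1, v2)                                  # v2 comes out near 0.05
```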
Comparing the anisotropy coefficients in momentum space ($`v_n`$) with the anisotropy coefficients in coordinate space ($`r_n`$) as a function of $`p_t`$ helps us to understand the space-time evolution of nucleus-nucleus collisions. To study this space-time evolution at RHIC, $`\mathrm{40\hspace{0.33em}000}`$ Au+Au collisions at $`\sqrt{s}`$ = 200 $`A`$GeV have been analyzed using the RQMD v2.4 model.
Figs. 4a-d show the first harmonic both in momentum and coordinate space for nucleons and pions. For nucleons at mid-rapidity note the similarity in shape of $`v_1`$ versus $`y`$ and $`r_1`$ versus $`y`$. Here (Fig. 4a) both the slopes of $`v_1`$ versus $`y`$ and $`r_1`$ versus $`y`$ show a reversal of sign. This finds an explanation in a picture with strong (positive) space-momentum correlations, taking into account the correlation between nucleon stopping and the original position of the nucleons in the transverse plane. For pions, the rapidity dependence of $`v_1`$ is predominantly governed by rescattering on comoving nucleons. Figs. 4e-h show $`v_2`$ for nucleons and pions. For both nucleons and pions $`v_2`$ is positive and is larger for particles with $`p_t≥1.5`$ GeV. Particles acquire a large $`p_t`$ when they are produced by a hard collision (which should not produce an event anisotropy) or when they have a large number of soft collisions (rescattering). The latter would explain the increase in $`v_2`$ and it explains why $`r_2`$ goes from negative for nucleons integrated over all $`p_t`$ to positive for nucleons with large $`p_t`$.
Collective flow and the coefficients $`v_1`$ and $`v_2`$ are usually associated with soft processes. However, the coefficients describe the event anisotropy and are not limited to only soft physics. At RHIC energies hard processes become important. They happen early in the reaction and thus can be used to probe the early stage of the evolution of a dense system. During this time a quark-gluon plasma (QGP) could exist. Associated with hard processes are jets. However, when the transverse energy of the jets becomes smaller it becomes increasingly difficult to resolve them from the “soft” particles. These jets with $`E_T<`$ 5 GeV are usually referred to as mini-jets. At RHIC energies it has been estimated that 50% of the transverse energy is produced by mini-jets.
Medium induced radiative energy loss of high $`p_t`$ partons (jet quenching) could be very different in a hadronic medium and a partonic medium. Recently it was shown that this energy loss per unit distance, $`dE/dx`$, grows linearly with the total length of the medium. For non-central collisions the hot and dense overlap region has an almond shape. This implies different path lengths and therefore different energy loss for particles moving in the in-plane versus the out-of-plane direction. To study this anisotropy with respect to the reaction plane, $`\mathrm{100\hspace{0.33em}000}`$ Au+Au collisions at $`\sqrt{s}`$ = 200 $`A`$GeV have been generated using HIJING v1.35.
Figs. 4a-d show $`v_1`$ and $`v_2`$ for nucleons and charged pions. The coefficient $`v_1`$ shows a small negative slope around mid-rapidity for both nucleons and pions and this becomes more pronounced for particles with $`p_t≥1.5`$ GeV. The coefficient $`v_2`$ is slightly negative over the whole rapidity range for both charged pions and nucleons. For particles with $`p_t≥1.5`$ GeV, $`v_2`$ becomes more negative, especially at forward and backward rapidity. Figs. 4e-f show that without jet quenching the anisotropy coefficients become zero. This indicates that interactions among particles, either quenching or rescattering, are important for producing the anisotropy.
## 5 Event Plane Resolutions
Within event generators the true reaction plane angle $`\mathrm{\Psi }_r`$ is known. This is not the case experimentally and the reaction plane has to be estimated from the data. This is done using the anisotropy in the azimuthal distribution of particles itself. The estimated reaction plane angle for the $`n^{th}`$ harmonic is called $`\mathrm{\Psi }_n`$. The magnitude of the anisotropy and the finite number of particles available to determine this event plane lead to a finite resolution. Therefore, the measured $`v_n^{obs}`$ coefficients with respect to the event plane have to be corrected for this event plane resolution
$$v_n=\frac{v_n^{obs}}{⟨\mathrm{cos}[n(\mathrm{\Psi }_n-\mathrm{\Psi }_r)]⟩}.$$
(4)
However, eq. 4 uses the true reaction plane, which is not known experimentally. If one constructs the event plane from two random subevents, one can relate the resolution of the subevents to the full event plane resolution,
$$⟨\mathrm{cos}[n(\mathrm{\Psi }_n-\mathrm{\Psi }_r)]⟩=C\times \sqrt{⟨\mathrm{cos}[n(\mathrm{\Psi }_n^a-\mathrm{\Psi }_n^b)]⟩},$$
(5)
where $`C`$ is a correction for the difference in subevent multiplicity compared to the full event and $`\mathrm{\Psi }_n^a,\mathrm{\Psi }_n^b`$ are the angles of the event planes determined in the subevents.
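A schematic implementation of the subevent procedure might look as follows. This is a sketch only: the multiplicity correction $`C`$ and the iterative refinements used in a real analysis are omitted, and the $`Q`$-vector estimator of the event plane angle is the standard one.

```
import numpy as np

def event_plane(phi, n):
    # Estimated n-th harmonic event plane angle Psi_n from the Q-vector
    qx, qy = np.sum(np.cos(n * phi)), np.sum(np.sin(n * phi))
    return np.arctan2(qy, qx) / n

def resolution(events, n=2, seed=1):
    # Eq. (5) with C ~ 1: split each event into two random subevents a, b
    # and average cos[n(Psi_a - Psi_b)] over events
    rng = np.random.default_rng(seed)
    cos_ab = []
    for phi in events:
        mask = rng.random(phi.size) < 0.5
        cos_ab.append(np.cos(n * (event_plane(phi[mask], n)
                                  - event_plane(phi[~mask], n))))
    mean = np.mean(cos_ab)
    return np.sqrt(mean) if mean > 0 else 0.0

# Eq. (4): the corrected coefficient is then v_n = v_n_obs / resolution(...)
```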
To calculate how well the event plane can be determined in STAR, we considered the TPC ($`-1.5≤y≤1.5`$) and the FTPCs ($`2.5≤|y|≤4.0`$). For this the RQMD v2.4 model predictions for Au+Au at $`\sqrt{s}`$ = 200 $`A`$GeV have been used. In Fig. 5a, $`v_2`$ for charged pions integrated over the TPC rapidity region is shown versus the impact parameter $`b`$. Fig. 5b shows the corresponding multiplicity as a function of $`b`$. These quantities lead to a resolution for $`v_2`$, calculated using the true reaction plane, as shown in Fig. 5c. The resolution for $`v_2`$ which can be obtained in the STAR TPC using subevents is shown in Fig. 5d. For $`v_2`$ charged pions and protons both contribute positively and therefore do not need to be identified. However, the multiplicity of protons at mid-rapidity is small compared to that of pions and, therefore, including protons does not significantly change the resolution.
In Fig. 5a, $`v_2`$ integrated over the FTPC rapidity region is shown versus the impact parameter $`b`$. For the FTPCs the $`\pi ^+,\pi ^{-}`$ and protons are combined. It was shown in Fig. 4e that $`v_2`$ is relatively flat as a function of rapidity and its magnitude is therefore comparable in the FTPC and TPC regions. Fig. 5b shows the corresponding multiplicity as a function of $`b`$ for the combined FTPCs. These quantities lead to a resolution for $`v_2`$, calculated using the true reaction plane, as shown in Fig. 5c. The resolution for $`v_2`$ which can be obtained in the STAR FTPCs using subevents is shown in Fig. 5d. If only one FTPC were used, this resolution would be smaller by approximately $`\sqrt{2}`$.
Using $`v_2`$ the event plane can be determined; however, the sign of $`v_2`$ is not determined relative to $`𝐛`$. This sign could be determined from $`v_2`$ relative to $`\mathrm{\Psi }_1`$. Fig. 4c shows that around mid-rapidity $`v_1`$ is at most 0.5%, which makes $`\mathrm{\Psi }_1`$ extremely hard to measure. From Figs. 4a and 4c it is clear that the best region to measure $`v_1`$ is at forward rapidity. Fig. 5a shows $`v_1`$ integrated over the FTPC rapidity region, versus $`b`$. As for $`v_2`$, the $`\pi ^+,\pi ^{-}`$ and protons are combined. This decreases the magnitude of $`v_1`$ because their signs are opposite, but the FTPCs are not able to separate these particles. At large $`b`$ the magnitude of $`v_1`$ becomes ∼1% and, although this is already hard to measure, the multiplicity also decreases rapidly at large $`b`$. This leads to a negligible resolution for $`v_1`$ at all values of $`b`$, which is shown in Fig. 5c.
## 6 Conclusion
We have investigated the feasibility of reconstructing the event plane. The resolution studies presented in Fig. 5 show that it is possible to determine the second harmonic event plane and calculate $`v_2`$ within STAR, assuming the RQMD predictions (multiplicity distribution, magnitude of $`v_2`$) are correct. For $`v_2`$ both the TPC and the FTPCs can be used. This would initially provide a cross-check, and later combining both detectors would increase the resolution. For this study we only need the momenta of the charged hadrons, and thus anisotropic flow could be one of the first results from STAR. For future analyses it would be good to have particle identification. Because it is important to study the dependence of $`v_2`$ as a function of $`b`$, we would like to have 10 centrality bins, which would be possible with $`\mathrm{1\hspace{0.33em}000\hspace{0.33em}000}`$ minimum bias events.
## 7 Acknowledgments
We would like to thank the other members of the STAR LBNL Soft Hadron Group and in particular the group leader Nu Xu for help with this work.
# Real Rational Curves in Grassmannians
## Introduction
Fulton asked how many solutions to a problem of enumerative geometry can be real, when that problem is one of counting geometric figures of some kind having specified position with respect to some general fixed figures. For the problem of plane conics tangent to five general conics, the (surprising) answer is that all 3264 may be real. Similarly, given any problem of enumerating $`p`$-planes incident on some general fixed subspaces, there are real fixed subspaces such that each of the (finitely many) incident $`p`$-planes are real. We show that the problem of enumerating parameterized rational curves in a Grassmannian satisfying simple (codimension 1) conditions may have all of its solutions be real.
This problem of enumerating rational curves on a Grassmannian arose in at least two distinct areas of mathematics. The number of such curves was predicted by the formula of Vafa and Intriligator from mathematical physics. It is also the number of complex dynamic compensators which stabilize a particular linear system, and the enumeration was solved in this context. The question of real solutions also arises in systems theory. Our proof, while exploiting techniques from systems theory, has no direct implications for the problem of real dynamic output compensation.
## 1. Statement of results
We work with complex algebraic varieties and ask when a priori complex solutions to an enumerative problem are real. Fix integers $`m,p>1`$ and $`q≥0`$. Set $`n:=m+p`$. Let $`𝐆`$ be the Grassmannian of $`p`$-planes in $`^n`$. The space $`ℳ_q`$ of maps $`M:^1→𝐆`$ of degree $`q`$ has dimension $`N:=pm+qn`$. If $`L`$ is an $`m`$-plane and $`s∈^1`$, then the collection of all maps $`M`$ satisfying $`M(s)∩L≠\{0\}`$ is an irreducible subvariety of codimension 1. We study the following enumerative problem:
(1) Given general points $`s_1,\mathrm{},s_N`$ in $`^1`$ and general $`m`$-planes $`L_1,\mathrm{},L_N`$ in $`^n`$, how many maps $`M∈ℳ_q`$ satisfy $`M(s_i)∩L_i≠\{0\}`$ for $`i=1,\mathrm{},N`$?
Rosenthal interpreted the solutions as a linear section of a projective embedding of $`ℳ_q`$, and Ravi, Rosenthal, and Wang show that the degree of its closure $`𝒦_q`$ in this embedding is
(2)
$$\delta :=(-1)^{q(m+1)}N!\underset{\nu _1+\mathrm{}+\nu _p=q}{∑}\frac{∏_{i<j}(j-i+n(\nu _j-\nu _i))}{∏_{j=1}^p(m+j+n\nu _j-1)!}.$$
Thus, if there are finitely many solutions, then their number (counted with multiplicity) is at most $`\delta `$. The difference between $`\delta `$ and the number of solutions counts points common to both the linear section and the boundary $`𝒦_q∖ℳ_q`$ of $`𝒦_q`$. Since $`𝐆`$ is a homogeneous space, an application of Kleiman’s Theorem shows there are finitely many solutions and no multiplicities. Bertram uses explicit methods (a moving lemma) to show there are finitely many solutions and also no points in the boundary of $`𝒬_q`$, and hence none in the boundary of $`𝒦_q`$. He also computes the small quantum cohomology ring of $`𝐆`$, which gives algorithms for computing $`\delta `$ and other intersection numbers involving rational curves on a Grassmannian.
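For concreteness, formula (2) is easy to evaluate exactly with rational arithmetic. The following sketch is our own check, not code from the sources cited above; it reproduces the values used later in this paper.

```
from math import factorial
from itertools import product
from fractions import Fraction

def degree(m, p, q):
    # Formula (2) for the degree of the quantum Grassmannian K_q
    n = m + p
    N = m * p + q * n
    total = Fraction(0)
    for nu in product(range(q + 1), repeat=p):   # nu_1 + ... + nu_p = q
        if sum(nu) != q:
            continue
        num = 1
        for i in range(p):
            for j in range(i + 1, p):
                num *= (j - i) + n * (nu[j] - nu[i])
        den = 1
        for j in range(1, p + 1):
            den *= factorial(m + j + n * nu[j - 1] - 1)
        total += Fraction(num, den)
    return (-1) ** (q * (m + 1)) * factorial(N) * total

print(degree(2, 2, 0))   # 2, the classical degree of the Grassmannian of 2-planes in 4-space
print(degree(2, 2, 1))   # 8, the example treated in Section 4
```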
When the $`s_i`$ and $`L_i`$ are real, not all of these solutions are defined over the real numbers. We show there are real $`s_i`$ and $`L_i`$ for which each of the $`\delta `$ maps are real.
###### Theorem 1.
There exist real $`m`$-planes $`L_1,\mathrm{},L_N`$ in $`^n`$ and real points $`s_1,\mathrm{},s_N`$ in $`^1`$ so that there are exactly $`\delta `$ maps $`M:^1→𝐆`$ of degree $`q`$ which satisfy $`M(s_i)∩L_i≠\{0\}`$ for each $`i=1,\mathrm{},N`$, and each of these are real.
Our proof is elementary in that it argues from the equations for the locus of maps $`M`$ which satisfy $`M(s)∩L≠\{0\}`$. A consequence is that we obtain fairly explicit choices of $`s_i`$ and $`L_i`$ which give only real maps, which we discuss in Section 4. Also, our proof uses neither Kleiman’s Theorem nor Bertram’s moving lemma, and thus it provides a new and elementary proof that there are $`\delta `$ solutions to the enumerative problem (1).
## 2. The quantum Grassmannian
The space $`ℳ_q`$ of maps $`^1→𝐆`$ of degree $`q`$ is a smooth quasi-projective algebraic variety. A smooth compactification is provided by a quot scheme $`𝒬_q`$. By definition, there is a universal exact sequence
$$0→𝒮→^n⊗𝒪→𝒯→0$$
of sheaves on $`^1\times 𝒬_q`$ where $`𝒮`$ is a vector bundle of degree $`-q`$ and rank $`p`$. Twisting the determinant of $`𝒮`$ by $`𝒪_^1(q)`$ and pushing forward to $`𝒬_q`$ induces a Plücker map
$$𝒬_q→ℙ\left(⋀^p^n⊗H^0(𝒪_^1(q))^{∗}\right)$$
which is the analog of the Plücker embedding of $`𝐆`$. The Plücker map is an embedding of $`ℳ_q`$, and so its image $`𝒦_q`$ provides a different compactification of $`ℳ_q`$. We call $`𝒦_q`$ the quantum Grassmannian. (In the literature, this space is also called the Uhlenbeck compactification.) Our proof of Theorem 1 exploits some of its structures that were elucidated in work in systems theory.
The Plücker map fails to be injective on the boundary $`𝒬_q∖ℳ_q`$ of $`𝒬_q`$. Indeed, Bertram constructs a $`^{p-1}`$ bundle over $`^1\times 𝒬_{q-1}`$ that maps onto the boundary, with its restriction over $`^1\times ℳ_{q-1}`$ an embedding. On this projective bundle, the Plücker map factors through the base $`^1\times 𝒬_{q-1}`$ and the image of a point in the base is $`sS`$, where $`s`$ is the section of $`𝒪_^1(1)`$ vanishing at $`s∈^1`$ and $`S`$ is the image of a point in $`𝒬_{q-1}`$ under its Plücker map. This identifies the image of the exceptional locus of the Plücker map with the image of $`^1\times 𝒦_{q-1}`$ in $`𝒦_q`$ under a map $`\pi `$ (given below).
More concretely, a point in $`𝒬_q`$ may be (non-uniquely) represented by a $`p\times n`$-matrix $`M`$ of forms in $`s,t`$, with homogeneous rows and whose maximal minors have degree $`q`$. The image of such a point under the Plücker map is the collection of maximal minors of $`M`$. The maps in $`ℳ_q`$ are represented by matrices whose maximal minors have no common factors: Given such a matrix $`M`$, the association
$$^1∋(s,t)↦\text{row space }M(s,t)$$
defines a map of degree $`q`$.
The collection $`\left(\genfrac{}{}{0pt}{}{[n]}{p}\right)`$ of $`p`$-subsets of $`\{1,\mathrm{},n\}`$ indexes the maximal minors of $`M`$. For $`\alpha ∈\left(\genfrac{}{}{0pt}{}{[n]}{p}\right)`$ and $`0≤a≤q`$, the coefficients $`z_{\alpha ^{(a)}}`$ of $`s^at^{q-a}`$ in the $`\alpha `$th maximal minor of $`M`$ provide Plücker coordinates for maps in $`ℳ_q`$, and for the space $`ℙ\left(⋀^p^n⊗H^0(𝒪_^1(q))^{∗}\right)`$. Let $`𝒞_q:=\{\alpha ^{(a)}∣\alpha ∈\left(\genfrac{}{}{0pt}{}{[n]}{p}\right),0≤a≤q\}`$ be the indices of these Plücker coordinates. Then the image of the exceptional locus in $`𝒦_q`$ is the image of the (birational) map $`\pi :^1\times 𝒦_{q-1}→𝒦_q`$ defined by
(3)
$$\pi :([A,B],(x_{\beta ^{(b)}}∣\beta ^{(b)}∈𝒞_{q-1}))↦(Ax_{\alpha ^{(a)}}-Bx_{\alpha ^{(a-1)}}∣\alpha ^{(a)}∈𝒞_q).$$
The relevance of the quantum Grassmannian $`𝒦_q`$ to the enumerative problem (1) is seen by considering the condition for a map $`M∈ℳ_q`$ to satisfy $`M(s,t)∩L≠\{0\}`$ where $`L`$ is an $`m`$-plane in $`^n`$ and $`(s,t)∈^1`$. If we represent $`L`$ as the row space of an $`m\times n`$ matrix, also written $`L`$, then this condition is
$$0=det\left[\begin{array}{c}L\\ M(s,t)\end{array}\right]=\underset{\alpha ∈\left(\genfrac{}{}{0pt}{}{[n]}{p}\right)}{∑}f_\alpha (s,t)l_\alpha ,$$
the second expression given by Laplace expansion of the determinant along the rows of $`M`$. Here, $`l_\alpha `$ is the appropriately signed maximal minor of $`L`$. If we expand the forms $`f_\alpha (s,t)`$ in this last expression, we obtain
$$\underset{\alpha ^{(a)}∈𝒞_q}{∑}z_{\alpha ^{(a)}}s^at^{q-a}l_\alpha =0,$$
a linear equation in the Plücker coordinates of $`M`$. Thus the solutions $`M∈ℳ_q`$ to the enumerative problem (1) are a linear section of $`ℳ_q`$ in its Plücker embedding, and so the degree $`\delta `$ of $`𝒦_q`$ provides an upper bound on the number of solutions.
The set $`𝒞_q`$ of Plücker coordinates has a natural partial order
$$\alpha ^{(a)}≤\beta ^{(b)}⟺\begin{array}{c}a≤b,\text{ and if }b-a<p,\text{ then }\\ \alpha _1≤\beta _{b-a+1},\mathrm{},\alpha _{p+a-b}≤\beta _p\end{array}.$$
The poset $`𝒞_q`$ is graded with the rank, $`|\alpha ^{(a)}|`$, of $`\alpha ^{(a)}`$ equal to $`an+∑_i(\alpha _i-i)`$. Figure 1 shows $`𝒞_1`$ when $`p=2`$ and $`m=3`$.
Given $`\alpha ^{(a)}∈𝒞_q`$, define the quantum Schubert variety
$$Z_{\alpha ^{(a)}}:=\{z=(z_{\beta ^{(b)}})∈𝒦_q∣z_{\beta ^{(b)}}=0\text{ if }\beta ^{(b)}≰\alpha ^{(a)}\}.$$
Let $`ℋ_{\alpha ^{(a)}}`$ be the hyperplane defined by $`z_{\alpha ^{(a)}}=0`$. The main technical result we use is the following.
###### Proposition 2.
Let $`\alpha ^{(a)}∈𝒞_q`$. Then
1. $`Z_{\alpha ^{(a)}}`$ is an irreducible subvariety of $`𝒦_q`$ of dimension $`|\alpha ^{(a)}|`$.
2. The intersection of $`Z_{\alpha ^{(a)}}`$ and $`ℋ_{\alpha ^{(a)}}`$ is generically transverse, and
$$Z_{\alpha ^{(a)}}∩ℋ_{\alpha ^{(a)}}=\underset{\beta ^{(b)}⋖\alpha ^{(a)}}{⋃}Z_{\beta ^{(b)}}.$$
Another proof of (ii) shows that it is in fact an ideal-theoretic equality. From (ii) and Bézout’s theorem, we obtain the following recursive formula for the degree of $`Z_{\alpha ^{(a)}}`$
$$\mathrm{deg}Z_{\alpha ^{(a)}}=\underset{\beta ^{(b)}⋖\alpha ^{(a)}}{∑}\mathrm{deg}Z_{\beta ^{(b)}}.$$
Since the minimal quantum Schubert variety is a point, we deduce:
###### Corollary 3.
The degree $`\delta `$ of $`𝒦_q`$ is the number of maximal chains in the poset $`𝒞_q`$.
Closed formulas are given for $`\delta `$, the source of formula (2), as well as for the number $`\mathrm{deg}Z_{\alpha ^{(a)}}`$ of maximal chains below $`\alpha ^{(a)}`$.
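Corollary 3 also gives a purely combinatorial way to compute $`\delta `$. The sketch below builds the poset $`𝒞_q`$ with the partial order as reconstructed above (treat that reconstruction as an assumption of this sketch) and counts maximal chains by the recursion of Proposition 2; it agrees with formula (2) in the cases we checked.

```
from itertools import combinations

def delta(m, p, q):
    # Number of maximal chains in C_q, computed via the recursion
    # deg Z = sum of deg over the elements covered by Z
    n = m + p
    elems = [(alpha, a) for a in range(q + 1)
             for alpha in combinations(range(1, n + 1), p)]
    rank = lambda e: e[1] * n + sum(x - i - 1 for i, x in enumerate(e[0]))

    def leq(x, y):
        (al, a), (be, b) = x, y
        if a > b:
            return False
        d = b - a
        return d >= p or all(al[i] <= be[i + d] for i in range(p - d))

    memo = {}
    def chains(e):
        if e not in memo:
            below = [f for f in elems if rank(f) == rank(e) - 1 and leq(f, e)]
            memo[e] = sum(chains(f) for f in below) if below else 1
        return memo[e]

    return chains(max(elems, key=rank))

print(delta(2, 2, 1))   # 8, agreeing with formula (2)
```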
## 3. Proof of Theorem 1
Let $`L(s,t)`$ be the $`m`$-plane osculating the parameterized rational normal curve
$$\gamma :(s,t)∈^1↦(s^{n-1},ts^{n-2},\mathrm{},t^{n-2}s,t^{n-1})∈^{n-1}$$
at the point $`\gamma (s,t)`$. Then $`L(s,t)`$ is the row space of the $`m\times n`$ matrix of forms with rows $`\gamma (s,t),\gamma ^{′}(s,t),\mathrm{},\gamma ^{(m-1)}(s,t)`$, the derivative taken with respect to the parameter $`t`$. Write $`L(s,t)`$ for this matrix. For $`\alpha ∈\left(\genfrac{}{}{0pt}{}{[n]}{p}\right)`$, the maximal minor of $`L(s,t)`$ complementary to $`\alpha `$ is $`(-1)^{|\alpha |}s^{\left(\genfrac{}{}{0pt}{}{m}{2}\right)}l_\alpha s^{|\alpha |}t^{mp-|\alpha |}`$, where $`|\alpha |:=∑_i(\alpha _i-i)`$ and $`(-1)^{|\alpha |}l_\alpha `$ is the corresponding maximal minor of $`L(1,1)`$. Let $`ℋ(s,t)`$ be the pencil of hyperplanes given by the linear form
$$\mathrm{\Lambda }(s,t):=\underset{\alpha ^{(a)}∈𝒞_q}{∑}z_{\alpha ^{(a)}}l_\alpha s^{|\alpha ^{(a)}|}t^{N-|\alpha ^{(a)}|}.$$
Let $`M`$ be a matrix representing a curve in $`ℳ_q`$. Then
$$det\left[\begin{array}{c}L(s,t)\\ M(s^n,t^n)\end{array}\right]=s^{\left(\genfrac{}{}{0pt}{}{m}{2}\right)}\underset{\alpha ^{(a)}∈𝒞_q}{∑}z_{\alpha ^{(a)}}s^{an}t^{(q-a)n}l_\alpha s^{|\alpha |}t^{mp-|\alpha |}=s^{\left(\genfrac{}{}{0pt}{}{m}{2}\right)}\mathrm{\Lambda }(s,t).$$
Thus $`ℳ_q∩ℋ(s,t)`$ consists of all maps $`M:^1→𝐆`$ of degree $`q`$ which satisfy $`M(s^n,t^n)∩L(s,t)≠\{0\}`$.
Theorem 1 is a consequence of the following two theorems.
###### Theorem 4.
There exist positive real numbers $`t_1,\mathrm{},t_N`$ such that for any $`\alpha ^{(a)}∈𝒞_q`$, the intersection
$$Z_{\alpha ^{(a)}}∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_{|\alpha ^{(a)}|})$$
is transverse with all points of intersection real.
###### Theorem 5.
If $`t_1,\mathrm{},t_k`$ are distinct, then for any $`\alpha ^{(a)}∈𝒞_q`$, the intersection
(4)
$$Z_{\alpha ^{(a)}}∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_k)$$
is proper in that it has dimension $`|\alpha ^{(a)}|-k`$.
Proof of Theorem 1. By Theorem 4, there exist positive real numbers $`t_1,\mathrm{},t_N`$ (necessarily distinct) so that the intersection
(5)
$$𝒦_q∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_N)$$
is transverse and consists of exactly $`\delta `$ real points. We show all these points lie in $`ℳ_q`$, and thus are maps $`M:^1→𝐆`$ of degree $`q`$ satisfying $`M(1,t_i^n)∩L(1,t_i)≠\{0\}`$ for $`i=1,\mathrm{},N`$, which proves Theorem 1.
Recall the map $`\pi :^1\times 𝒦_{q-1}→𝒦_q`$ (3) whose image is the complement of $`ℳ_q`$ in $`𝒦_q`$. Then
$`\pi ^{∗}ℋ(s,t)`$ $`=`$ $`{\displaystyle \underset{\alpha ^{(a)}∈𝒞_q}{∑}}(Ax_{\alpha ^{(a)}}-Bx_{\alpha ^{(a-1)}})l_\alpha s^{|\alpha ^{(a)}|}t^{N-|\alpha ^{(a)}|}`$
$`=`$ $`(At^n-Bs^n){\displaystyle \underset{\alpha ^{(a)}∈𝒞_{q-1}}{∑}}x_{\alpha ^{(a)}}l_\alpha s^{|\alpha ^{(a)}|}t^{N-n-|\alpha ^{(a)}|}.`$
Hence, if $`ℋ^{′}(s,t)`$ is the pencil of hyperplanes in the Plücker space of $`𝒦_{q-1}`$ defining the locus of $`M∈ℳ_{q-1}`$ satisfying $`M(s^n,t^n)∩L(s,t)≠\{0\}`$, then
$$\pi ^{∗}ℋ(s,t)=(At^n-Bs^n)ℋ^{′}(s,t).$$
Thus any point in (5) not in $`ℳ_q`$ is the image of a point $`([A,B],M)`$ in $`^1\times 𝒦_{q-1}`$ satisfying $`\pi ^{∗}ℋ(1,t_i)=(At_i^n-B)ℋ^{′}(1,t_i)=0`$ for each $`i=1,\mathrm{},N`$. As the $`t_i`$ are positive and distinct, such a point can only satisfy $`At_i^n-B=0`$ for one $`i`$. Thus $`M∈𝒦_{q-1}`$ lies in at least $`N-1`$ of the hyperplanes $`ℋ^{′}(1,t_i)`$. Since $`N-1`$ exceeds the dimension $`N-n`$ of $`𝒦_{q-1}`$, there are no such points $`M∈𝒦_{q-1}`$, by Theorem 5 for maps of degree $`q-1`$. ∎
Proof of Theorem 5. For any $`t_1,\mathrm{},t_k`$, the intersection (4) has dimension at least $`|\alpha ^{(a)}|-k`$. We show it has at most this dimension, if $`t_1,\mathrm{},t_k`$ are distinct.
Suppose $`k=|\alpha ^{(a)}|+1`$ and let $`z∈Z_{\alpha ^{(a)}}`$. Then $`z_{\beta ^{(b)}}=0`$ if $`\beta ^{(b)}≰\alpha ^{(a)}`$ and so the form $`\mathrm{\Lambda }(1,t)(z)`$ defining $`ℋ(1,t)`$ is divisible by $`t^{N-|\alpha ^{(a)}|}`$ with quotient
$$\underset{\beta ^{(b)}≤\alpha ^{(a)}}{∑}z_{\beta ^{(b)}}l_\beta t^{|\alpha ^{(a)}|-|\beta ^{(b)}|}.$$
This is a non-zero polynomial in $`t`$ of degree at most $`|\alpha ^{(a)}|`$ and thus it vanishes for at most $`|\alpha ^{(a)}|`$ distinct $`t`$. It follows that (4) is empty for $`k>|\alpha ^{(a)}|`$.
If $`k≤|\alpha ^{(a)}|`$ and $`t_1,\mathrm{},t_k`$ are distinct, but (4) has dimension exceeding $`|\alpha ^{(a)}|-k`$, then completing $`t_1,\mathrm{},t_k`$ to a set of distinct numbers $`t_1,\mathrm{},t_{|\alpha ^{(a)}|+1}`$ would give a non-empty intersection in (4), a contradiction. ∎
Proof of Theorem 4. We construct the sequence $`t_i`$ inductively. If we let $`\alpha =1<2<\mathrm{}<p-1<p+1`$, then $`Z_{\alpha ^{(0)}}`$ is a line. Indeed, it is isomorphic to the set of $`p`$-planes containing a fixed $`(p-1)`$-plane and lying in a fixed $`(p+1)`$-plane. By Theorem 5, $`Z_{\alpha ^{(0)}}∩ℋ(1,t)`$ is then a single, necessarily real, point, for any real number $`t`$. Let $`t_1`$ be any positive real number.
Suppose we have positive real numbers $`t_1,\mathrm{},t_k`$ with the property that for any $`\beta ^{(b)}`$ with $`|\beta ^{(b)}|≤k`$,
$$Z_{\beta ^{(b)}}∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_{|\beta ^{(b)}|})$$
is transverse with all points of intersection real.
Let $`\alpha ^{(a)}`$ be an index with $`|\alpha ^{(a)}|=k+1`$ and consider the 1-parameter family $`𝒵(t)`$ of schemes defined for $`t≠0`$ by $`Z_{\alpha ^{(a)}}∩ℋ(1,t)`$. For $`t≠0`$, if we restrict the form $`\mathrm{\Lambda }(1,t)`$ to $`z∈Z_{\alpha ^{(a)}}`$, then, after dividing out $`t^{N-|\alpha ^{(a)}|}`$, we obtain
$$z_{\alpha ^{(a)}}+\underset{\beta ^{(b)}<\alpha ^{(a)}}{∑}z_{\beta ^{(b)}}l_\beta t^{|\alpha ^{(a)}|-|\beta ^{(b)}|}.$$
Thus $`𝒵(0)`$ is
$$Z_{\alpha ^{(a)}}∩ℋ_{\alpha ^{(a)}}=\underset{\beta ^{(b)}⋖\alpha ^{(a)}}{⋃}Z_{\beta ^{(b)}},$$
by Proposition 2 (ii).
Claim: The cycle
$$𝒵(0)∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_k)$$
is free of multiplicities.
If not, then there are two components $`Z_{\beta ^{(b)}}`$ and $`Z_{\gamma ^{(c)}}`$ of $`𝒵(0)`$ such that
$$Z_{\beta ^{(b)}}∩Z_{\gamma ^{(c)}}∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_k)$$
is nonempty. But this contradicts Theorem 5, as $`Z_{\beta ^{(b)}}∩Z_{\gamma ^{(c)}}=Z_{\delta ^{(d)}}`$, where $`\delta ^{(d)}`$ is the greatest lower bound of $`\beta ^{(b)}`$ and $`\gamma ^{(c)}`$ in $`𝒞_q`$, and so $`\mathrm{dim}Z_{\delta ^{(d)}}<\mathrm{dim}Z_{\beta ^{(b)}}=k`$.
From the claim, there is an $`ϵ_{\alpha ^{(a)}}>0`$ such that if $`0<t≤ϵ_{\alpha ^{(a)}}`$, then
$$𝒵(t)∩ℋ(1,t_1)∩\mathrm{}∩ℋ(1,t_k)$$
is transverse with all points of intersection real. Set
$$t_{k+1}:=\mathrm{min}\{ϵ_{\alpha ^{(a)}}:|\alpha ^{(a)}|=k+1\}.∎$$
## 4. Further Remarks
From our proof of Theorem 4, we obtain a rather precise choice of $`s_i`$ and $`L_i`$ in the enumerative problem which give only real maps. By $`t_1≫t_2≫\mathrm{}≫t_N>0`$, we mean
$$t_1>0;ϵ_2>0,ϵ_2>t_2>0;\mathrm{};ϵ_N>0,ϵ_N>t_N>0.$$
###### Corollary 6.
If $`t_1≫t_2≫\mathrm{}≫t_N>0`$, then each of the $`\delta `$ maps $`M:^1→𝐆`$ of degree $`q`$ which satisfy $`M(1,t_i)∩L(1,t_i^{1/n})≠\{0\}`$ for $`i=1,\mathrm{},N`$ are real.
When $`q=0`$, there is substantial evidence that this choice of $`t_1,\mathrm{},t_N`$ is too restrictive. B. Shapiro and M. Shapiro have the following conjecture:
Conjecture. Suppose $`q=0`$. Then for generic real numbers $`t_1,\mathrm{},t_{mp}`$ all of the finitely many $`p`$-planes $`H`$ which satisfy $`H∩L(1,t_i)≠\{0\}`$ are real.
In contrast, when $`q>0`$, the restriction $`t_1≫t_2≫\mathrm{}≫t_N>0`$ is necessary. We observe this in the case when $`q=1`$, $`p=m=2`$, so $`N=8`$ and $`\delta =8`$. That is, for parameterized curves of degree 1 in the Grassmannian of 2-planes in $`^4`$. Here, the choice of $`t_i=i`$ in (5) gives no real maps, while the choice $`t_i=i^6`$ gives 8 real maps.
We briefly describe that calculation. There are 12 Plücker coordinates $`z_{ij^{(a)}}`$ for $`1≤i<j≤4`$ and $`a=0,1`$. If we let $`f_{ij}:=tz_{ij^{(0)}}+sz_{ij^{(1)}}`$, then
$$f_{14}f_{23}-f_{13}f_{24}+f_{12}f_{34}=0,$$
as $`(f_{ij}(s,t))∈𝐆`$ for all $`s,t`$. The coefficients of $`t^2`$, $`st`$, and $`s^2`$ in this expression give three quadratic relations among the $`z_{ij^{(a)}}`$:
$$\begin{array}{c}z_{14^{(0)}}z_{23^{(0)}}-z_{13^{(0)}}z_{24^{(0)}}+z_{12^{(0)}}z_{34^{(0)}},\\ z_{12^{(1)}}z_{34^{(0)}}-z_{13^{(1)}}z_{24^{(0)}}+z_{14^{(1)}}z_{23^{(0)}}+z_{23^{(1)}}z_{14^{(0)}}-z_{24^{(1)}}z_{13^{(0)}}+z_{34^{(1)}}z_{12^{(0)}},\\ z_{14^{(1)}}z_{23^{(1)}}-z_{13^{(1)}}z_{24^{(1)}}+z_{12^{(1)}}z_{34^{(1)}},\end{array}$$
and these constitute a Gröbner basis for the homogeneous ideal of $`𝒦_1`$.
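These relations can be recovered mechanically. The small sympy sketch below (ours, for verification only) expands the Plücker identity and reads off the coefficients of $`t^2`$, $`st`$ and $`s^2`$.

```
import sympy as sp

pairs = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
s, t = sp.symbols('s t')
z = {(i, j, a): sp.Symbol(f'z{i}{j}_{a}') for (i, j) in pairs for a in (0, 1)}
f = {(i, j): t * z[(i, j, 0)] + s * z[(i, j, 1)] for (i, j) in pairs}

# Expand f14*f23 - f13*f24 + f12*f34 and collect in s, t
expr = sp.expand(f[(1, 4)] * f[(2, 3)] - f[(1, 3)] * f[(2, 4)]
                 + f[(1, 2)] * f[(3, 4)])
for (es, et), coeff in sp.Poly(expr, s, t).terms():
    print(f's^{es} t^{et}:', coeff)   # the three quadratic relations
```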
Here, the form $`\mathrm{\Lambda }`$ is
$$\begin{array}{c}t^8z_{12^{(0)}}-2t^7z_{13^{(0)}}+t^6z_{14^{(0)}}+3t^6z_{23^{(0)}}-2t^5z_{24^{(0)}}+t^4z_{34^{(0)}}\\ +t^4z_{12^{(1)}}-2t^3z_{13^{(1)}}+t^2z_{14^{(1)}}+3t^2z_{23^{(1)}}-2tz_{24^{(1)}}+z_{34^{(1)}}.\end{array}$$
We set $`z_{34^{(1)}}=1`$ and work in local coordinates. Then the ideal generated by the 3 quadratic equations and 8 linear relations $`\mathrm{\Lambda }(t_i)`$ for $`i=1,\mathrm{},8`$ defines the 8 solutions to (5). We used Maple V.5 to generate these equations and then compute a univariate polynomial in the ideal, which had degree 8. This polynomial had no real solutions when $`t_i=i`$, but all 8 were real when $`t_i=i^6`$. (Elimination theory guarantees that the number of real solutions equals the number of real roots of the eliminant.)
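We did not rerun the Maple session, but the same computation can be set up, for example, in sympy. The sketch below is our reconstruction under the formulas above; it may be slow, and the tolerance used to declare a root real is an arbitrary choice.

```
import sympy as sp

t = sp.Symbol('t')
v = sp.symbols('z12_0 z13_0 z14_0 z23_0 z24_0 z34_0 z12_1 z13_1 z14_1 z23_1 z24_1')
z12_0, z13_0, z14_0, z23_0, z24_0, z34_0, z12_1, z13_1, z14_1, z23_1, z24_1 = v

# The three quadrics with z34_1 set to 1, and the form Lambda(1, t)
quads = [z14_0*z23_0 - z13_0*z24_0 + z12_0*z34_0,
         z12_1*z34_0 - z13_1*z24_0 + z14_1*z23_0
             + z23_1*z14_0 - z24_1*z13_0 + z12_0,
         z14_1*z23_1 - z13_1*z24_1 + z12_1]
Lam = (t**8*z12_0 - 2*t**7*z13_0 + t**6*z14_0 + 3*t**6*z23_0 - 2*t**5*z24_0
       + t**4*z34_0 + t**4*z12_1 - 2*t**3*z13_1 + t**2*z14_1 + 3*t**2*z23_1
       - 2*t*z24_1 + 1)

def count_real(tvals):
    eqs = quads + [Lam.subs(t, sp.Rational(ti)) for ti in tvals]
    sols = sp.solve(eqs, list(v), dict=True)      # Groebner-based; can be slow
    real = sum(all(abs(complex(val).imag) < 1e-8 for val in sol.values())
               for sol in sols)
    return len(sols), real

print(count_real([i for i in range(1, 9)]))       # expect (8, 0)
print(count_real([i**6 for i in range(1, 9)]))    # expect (8, 8)
```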
We describe how the enumerative problem (1) arises in systems theory. A physical system (e.g., a mechanical linkage) with $`m`$ inputs and $`p`$ measured outputs whose evolution is governed by a system of linear differential equations is modeled by an $`m\times n`$-matrix $`L(s)`$ of real univariate polynomials. The largest degree of a maximal minor of this matrix is the MacMillan degree, $`r`$, of the evolution equation. Consider now controlling this linear system by output feedback with a dynamic compensator. That is, a $`p`$-input, $`m`$-output linear system $`M`$ is used to couple the $`m`$ inputs of the system $`L`$ to its $`p`$ outputs. The resulting closed system has characteristic polynomial
$$\phi (s):=det\left[\begin{array}{c}L(s)\\ M(s)\end{array}\right],$$
and the roots of $`\phi `$ are the natural frequencies or poles of the closed system. The dynamic pole assignment problem asks, given a system $`L(s)`$ and a desired characteristic polynomial $`\phi `$, can one find a (real) compensator $`M(s)`$ of MacMillan degree $`q`$ so that the resulting closed system has characteristic polynomial $`\phi `$? That is, if $`s_1,\mathrm{},s_{r+p}`$ are the roots of $`\phi `$, which $`M∈ℳ_q`$ satisfy
$$det\left[\begin{array}{c}L(s_i)\\ M(s_i)\end{array}\right]=0,\text{for }i=1,2,\mathrm{},r+p\mathrm{?}$$
In the critical case when $`r+q=mp+qn`$, this is an instance of the enumerative problem (1). When the degree $`\delta `$ is odd, then for a real system $`L`$ and a real characteristic polynomial $`\phi `$, there will be at least one real dynamic compensator. Part of the motivation for that work was to obtain a formula for $`\delta `$ from which its parity could be deduced for different values of $`q,m`$, and $`p`$.
From this description, we see that the planes $`L_i`$ that arise in the dynamic pole placement problem are $`N=mp+qn`$ points on a rational curve of degree $`mp+(n-1)q`$ in the Grassmannian of $`m`$-planes in $`^n`$. In contrast, the planes of Theorem 4 (and hence of Theorem 1) arise as $`N`$ points on a rational curve of degree $`mp`$. Only when $`q=0`$ (the case of static compensators) is there any overlap. While our proof of Theorem 1 owes much to systems theory, it has no direct implications for the problem of real dynamic output compensation.
Our method of proof of Theorem 1 was inspired by the numerical Pieri homotopy algorithm for computing the solutions to (1) when $`q=0`$. Likewise, the explicit degenerations of intersections of the $`ℋ(s,t)`$ that we used, and more generally Proposition 2 (ii), can be used to construct an optimal numerical homotopy algorithm for finding the solutions to (1). This is in exactly the same manner as the explicit degenerations of intersections of special Schubert varieties were used to construct the Pieri homotopy algorithm.
We close with one open problem concerning the enumeration of rational curves on a Grassmannian. For a point $`s∈^1`$ and any Schubert variety $`\mathrm{\Omega }`$ of $`𝐆`$, consider the quantum Schubert variety $`\mathrm{\Omega }(s)`$ of curves $`M∈ℳ_q`$ satisfying $`M(s)∈\mathrm{\Omega }`$. The quantum Schubert calculus gives algorithms to compute the number of curves $`M∈ℳ_q`$ which lie in the intersection of an appropriate number of these $`\mathrm{\Omega }(s)`$, and we ask when it is possible to have all solutions real. A modification of the proof of Theorem 4 shows that this is the case when all except possibly 2 are hypersurface Schubert varieties. In every case we have been able to compute, all solutions may be real.
## 1 Introduction
Recently there has been growing interest in primordial nucleosynthesis in a Universe having an inhomogeneous distribution of baryon number. The most popular suggested mechanism for the generation of baryon number inhomogeneities is a first-order QCD phase transition \[1-8\]. The present Universe is presumed to have undergone several successive phase transitions associated with symmetry breaking in its early stage. A QCD phase transition might have occurred at approximately $`10^{-6}`$ sec after the big bang in the early Universe. Even though many aspects of the phase transition are not a settled issue, the QCD phase transition is expected to be a first-order one from lattice QCD results. Therefore, a large local fluctuation in the baryon to photon ratio $`n_B/n_\gamma `$ could arise, which might subsequently modify the standard picture of primordial nucleosynthesis (PNS). Such a fluctuation might also result in the formation of quark nuggets and/or quark stars etc. \[1-8\]. The basis for the production of isothermal baryon number fluctuations lies in the separation of cosmic phases, as follows. Initially the Universe is in the quark-gluon plasma (QGP) phase at a high temperature and the net baryon number resides entirely in the quark-gluon plasma phase and is distributed homogeneously. As the Universe expands, the temperature drops to the critical temperature $`T_c`$ where the quark-gluon plasma exists in thermal and chemical equilibrium with the dense and hot hadron gas. Subsequently, the expansion requires a continuous conversion of QGP into the hadron phase. The phase transition is completed when all the quark-gluon plasma has been converted to the hadron phase and all the baryon number residing finally in the hadron phase is distributed homogeneously. The magnitude of the baryon number fluctuation is estimated by the baryon contrast ratio $`R_{eq}`$ of the net equilibrium baryon number density in the quark-gluon plasma (QGP) phase to that in the hadron gas (HG) phase, i.e., $`R_{eq}=n_{QGP}^B/n_{HG}^B`$, which is evaluated at the critical temperature and chemical potential. The baryon number density inhomogeneity arising due to such a quark-hadron phase transition will thus alter the yields of the PNS, which are a function of $`R_{eq}`$ and the neutron to proton $`(n/p)`$ ratio. Several theoretical attempts have recently been made to determine the value of $`R_{eq}`$ \[11-14\].
All aforementioned calculations of $`R_{eq}`$ assume that the Universe is in complete thermal and chemical equilibrium during the phase transition. But in reality, the cosmic first-order phase transition will necessarily result in deviations from thermal and chemical equilibrium because the low temperature hadron phase is not nucleated immediately at the critical temperature $`T_c`$. A generic feature of quantum or thermal nucleation theory is that the nucleation rate does not become large unless the temperature has dropped below $`T_c`$. The magnitude of these deviations will depend on the efficiency of heat and baryon transport across the transition front during the phase transition. Latent heat transport is carried out by the motion of the boundary wall which converts the volume of one vacuum into another. This vacuum energy difference acts as a source for particle creation. However, the latent heat or entropy could also be carried across the phase boundary by neutrinos, apart from surface evaporation of hadrons (mostly pions). Baryon number is not thermally created in the hadron phase. Therefore, it actually flows across the boundary by the strong interaction physics if the hadron phase is going to have any net baryon number at all.
There are two limiting situations governing the efficiency of baryon number transport across the phase boundary \[5,17\]. In the first situation, when the bubble-like regions of quark-gluon plasma are small in size, the phase boundaries move slowly compared to the baryon number transport rate. Hence the baryon number is quickly and efficiently transported across the boundaries, so as to maintain the chemical equilibrium. In the second situation, the boundaries of the bubble-like regions move rapidly compared to the baryon transport rate so that the net baryon transport process becomes inefficient. Therefore, chemical equilibrium may not be achieved on the time scale of the phase coexistence evolution and, consequently, the baryons could be concentrated in the shrinking bubble-like regions of the QGP. The magnitude of the resulting baryon number fluctuation at the time of phase decoupling will still be given by the ratio $`R^{′}`$ (say) of baryon number densities in the two phases, but now each phase will have a different baryon chemical potential. Thus, the final ratio $`R^{′}`$ will be larger than the equilibrium ratio $`R_{eq}`$.
It is clear that in both the above-mentioned situations isothermal baryon number fluctuations will result. However, the magnitude of $`R_{eq}`$ is in the limit of the efficient baryon number transport mechanism, which is the aforesaid first situation. The efficiency of baryon number transport across the phase interface will be given by the bulk properties of the phases and the estimate of baryon number penetrability. The baryon number penetrability $`\mathrm{\Sigma }_h`$ from the hadronic side is defined as the probability that a baryon which approaches from the hadronic phase will dissociate into quarks and pass over into the QGP phase. Similarly, from the other side, we could define the probability $`\mathrm{\Sigma }_q`$ that the quarks which approach the phase boundary will form a color singlet baryon and pass over to the hadron phase. The thermal averages of these probabilities are related by the detailed balance
$`kf_{q\overline{q}}\mathrm{\Sigma }_q=f_{b\overline{b}}\mathrm{\Sigma }_h`$ (1)
where $`f_{q\overline{q}}(f_{b\overline{b}})`$ is the excess quark flux (baryon flux) in the QGP (HG) phase, respectively, and the dimensionless quantity $`k`$ takes values from 1/3 to 1. The value $`k=1/3`$ implies that baryon number predominantly passes over the phase boundary as preformed three-quark clusters in the QGP phase. Similarly $`k=1`$ signifies that baryons will be predominantly formed by the leading quarks which cross over the phase boundary into the hadron phase. A low value for $`\mathrm{\Sigma }_h`$ (or $`\mathrm{\Sigma }_q`$) ($`<1`$) implies an early drop out of chemical equilibrium during the separation of phases and thus might lead to a large amplitude of baryon-number fluctuations.
The estimation of the baryon number penetrability has recently been attempted by many authors within the framework of the chromoelectric flux tube (CFT) model. Here an energetic leading quark in the QGP is assumed to pick up an antiquark, forming thereby an expanding CFT whose decay through a single or double pair creation results in the formation of mesons or baryons, respectively. Naturally, the calculation of $`\mathrm{\Sigma }_h`$ within the CFT model requires the values of the single as well as coherent double pair creation probabilities per unit space per unit time, called $`\kappa _m`$ and $`\kappa _b`$, respectively. Although $`\kappa _b`$ is a crucial factor for the formation of a baryon, no theoretical expression for it based on strong interaction physics exists in the literature.
In this context the following points pertaining to a recent paper by Jedamzik and Fuller \[17, referred to as JF hereafter\] become particularly relevant: (i) JF have extracted the numerical value of $`\kappa _b`$ by empirically analyzing the ratio of baryons to mesons produced in $`e^+e^{-}`$ → hadrons experiments conducted in accelerator laboratories. (ii) JF allow for the full angular range $`0≤\theta ≤\pi `$, where $`\theta `$ is the polar angle of the leading quark’s velocity with respect to the normal to the phase boundary. (iii) The final expression for $`\mathrm{\Sigma }_h`$, written as an integral over the quark distribution functions, is computed by JF entirely numerically. (iv) Finally, the threshold energy in JF’s work increases with the temperature in an unreasonable manner.
The aim of the present paper is to extend/modify the theory of JF in multifold respects: (i) By regarding the unconnected double-pair creation event as a succession of two single-pair events, we show in Sec. 2 below that the double-pair creation probability $`P_b`$ (over a finite time duration) can be obtained in terms of the $`\kappa _m`$ parameter itself. (ii) Since the leading quark’s velocity should be directed towards the phase boundary, we point out in Sec. 3 that the corresponding polar angle should be restricted to the range $`0≤\theta ≤\pi /2`$. (iii) We derive in Sec. 3 an analytical expression for the thermal average of the baryon number penetrability so that the dependence of $`\mathrm{\Sigma }_h`$ on the relevant variables becomes more transparent. (iv) We show in Sec. 4 the comparison between our and JF models by plotting $`P_b`$ as well as $`\mathrm{\Sigma }_h`$ as functions of the temperature when the quarks are assigned either the constituent or current mass. (v) Finally, we carefully analyze the temperature-dependence of the threshold energy in Sec. 4 to obtain an elegant formula for $`\mathrm{\Sigma }_h`$ in the high $`T`$ limit.
## 2 Decay Statistics of Flux Tubes
Chromoelectric flux tube (CFT) models provide a phenomenological description of the formation of a hadron from quarks and gluons. These models assume the existence of a chromoelectric field between two oppositely coloured quarks, where the chromoelectric field strength is assumed to be constant in magnitude and independent of the separation of the quarks. These fields can be thought of as being confined to a tube of constant width known as a flux tube. Lattice QCD results justify the existence of such a flux tube. Initially, chromoelectric flux tube models were used to explain processes such as $`e^+e^{-}`$ → hadrons. Later these models were also used to describe the spectrum of mesonic and baryonic resonances. Recently flux tube models have been employed to estimate the meson evaporation rates from the quark-gluon plasma produced in relativistic heavy-ion collisions and also to estimate the baryon-number penetrability across the phase boundary in the cosmological quark-hadron phase transition.
If the CFT is regarded as an unstable system, then its stochastic properties can be conveniently discussed by introducing the following abbreviations: SP ≡ survival probability, DP ≡ decay probability, m ≡ mesonic or single-pair production channel, b ≡ baryonic or connected double-pair production channel, b′ ≡ disconnected double-pair production channel. Let us now take up the earlier model used by Jedamzik and Fuller and a new proposal due to us.
### 2.1 Jedamzik and Fuller (JF) Model
In analogy with QED, at a perturbative field theoretic level, the JF model amounts to the connected diagrams shown in Figs. 1(m) and 1(b). The corresponding DP’s per unit time per unit volume are denoted by $`\kappa _m`$ and $`\kappa _b`$, respectively, having a total $`\kappa =\kappa _m+\kappa _b`$. The space-time integrated parameters are called as
$`w_m`$ $`=`$ $`\kappa _m{\displaystyle \int _0^{t_0}}V(t)\,dt`$ (2)
$`w_b`$ $`=`$ $`\kappa _b{\displaystyle \int _0^{t_0}}V(t)\,dt`$ (3)
and $`w=w_m+w_b`$; here $`V(t)`$ is the instantaneous volume of the flux tube and $`t_0\simeq E/\sigma `$ is the maximum time up to which it expands when the incident quark has energy $`E`$. From the conservation of energy and of momentum parallel to the phase boundary, the above integral can be written as
$`{\displaystyle \int _0^{t_0}}V(t)\,dt={\displaystyle \frac{\pi \mathrm{\Lambda }^2}{2\sigma ^2}}E^2\mathrm{cos}\theta `$ (4)
where $`\theta `$ is the angle of incidence of the leading quark relative to the normal to the phase boundary, $`\mathrm{\Lambda }`$ is the radius of the flux tube, and $`\sigma `$ is its string tension.
Then, according to the classical theory of radioactivity, the net SP and DP counting all channels would be
$`P_s`$ $`=`$ $`e^{-w}`$ (5)
$`P_d`$ $`=`$ $`1-e^{-w}`$ (6)
Since the relative weights for the $`m`$ and $`b`$ channels are $`w_m/w`$ and $`w_b/w`$, respectively, the corresponding channel-wise DP’s, as used by JF, are:
$`P_m`$ $`=`$ $`{\displaystyle \frac{\kappa _m}{\kappa }}P_d`$ (7)
$`=`$ $`{\displaystyle \frac{\kappa _m}{\kappa }}\left(1-\mathrm{exp}\left[-\kappa {\displaystyle \int _0^{t_0}}V(t)\,dt\right]\right)`$
$`P_b`$ $`=`$ $`{\displaystyle \frac{\kappa _b}{\kappa }}P_d`$ (8)
$`=`$ $`{\displaystyle \frac{\kappa _b}{\kappa }}\left(1-\mathrm{exp}\left[-\kappa {\displaystyle \int _0^{t_0}}V(t)\,dt\right]\right)`$
Although $`\kappa _m`$ can be estimated from Schwinger’s single-pair production mechanism applied to QCD, nothing is known a priori about $`\kappa _b`$. However, JF have empirically extracted the magnitude of $`\kappa _b`$ from an analysis of $`e^+e^{-}\to \mathrm{hadrons}`$ data.
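For concreteness, the channel-wise probabilities (7)-(8) are easy to evaluate numerically. The minimal Python sketch below is not part of JF’s analysis; the flux-tube radius $`\mathrm{\Lambda }`$ and the $`\kappa `$ values in the example are illustrative assumptions only:

```python
import math

SIGMA = 0.177  # string tension in GeV^2, as quoted in Sec. 4

def w_integral(E, theta, Lambda_=4.0, sigma=SIGMA):
    """Space-time volume integral of Eq. (4):
    (pi Lambda^2 / 2 sigma^2) E^2 cos(theta), in GeV-based natural units."""
    return math.pi * Lambda_**2 * E**2 * math.cos(theta) / (2.0 * sigma**2)

def jf_probabilities(E, theta, kappa_m, kappa_b, Lambda_=4.0):
    """Channel-wise decay probabilities of Eqs. (7)-(8) in the JF model."""
    kappa = kappa_m + kappa_b
    P_d = 1.0 - math.exp(-kappa * w_integral(E, theta, Lambda_))
    return (kappa_m / kappa) * P_d, (kappa_b / kappa) * P_d  # (P_m, P_b)

# Example: a 2 GeV leading quark at normal incidence, with kappa_b = kappa_m/4
# (i.e. a = 1/5); the kappa values are illustrative, not fitted.
P_m, P_b = jf_probabilities(2.0, 0.0, kappa_m=5.1e-5, kappa_b=1.3e-5)
```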
### 2.2 Our Proposal
Suppose we retain the information about single $`q\overline{q}`$ pair production of Fig. 1(m) but ignore the diagram of Fig. 1(b). Then the double-pair production event may be looked upon as a sequence of two unconnected single-pair creations within the time $`t_0`$, as shown in Fig. 1($`b^{\prime }`$). Since the flux tube is now characterized only by the parameter $`w_m`$, radioactivity theory gives the following expressions for the net SP and DP over the duration $`t_0`$:
$`P_s^{\prime }`$ $`=`$ $`e^{-w_m}`$ (9)
$`P_d^{\prime }`$ $`=`$ $`1-e^{-w_m}={\displaystyle \sum _{n=1}^{\mathrm{\infty }}}p_n^{\prime }`$ (10)
where the DP of having exactly $`n`$ successive single-pair events is Poissonian, viz.
$`p_n^{\prime }=e^{-w_m}{\displaystyle \frac{w_m^n}{n!}}`$ (11)
Therefore, in our approach the DP’s for single- and double-pair production are obtained from
$`P_m^{\prime }=p_1^{\prime }=e^{-w_m}w_m`$ (12)
$`P_b^{\prime }=p_2^{\prime }=e^{-w_m}{\displaystyle \frac{w_m^2}{2}}`$ (13)
In contrast to the JF approach, Eqs. (7)-(8), our proposal involves only one parameter, $`w_m`$ (or $`\kappa _m`$), as an input.
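The corresponding Poissonian probabilities (12)-(13) can be sketched in the same spirit; as before, the value of $`\mathrm{\Lambda }`$ is an illustrative assumption:

```python
import math

def our_probabilities(E, theta, kappa_m, Lambda_=4.0, sigma=0.177):
    """Single- and double-pair DPs of Eqs. (12)-(13); note that only
    kappa_m enters, in contrast to Eqs. (7)-(8)."""
    w_m = kappa_m * math.pi * Lambda_**2 * E**2 * math.cos(theta) / (2.0 * sigma**2)
    return math.exp(-w_m) * w_m, math.exp(-w_m) * w_m**2 / 2.0  # (P_m', P_b')
```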
## 3 Analytical Estimation of the Baryon-Number Penetrability
The main result of JF’s work is contained in their baryon penetrability equation, which gives the thermal average of the baryon penetrability integrated over the $`q\overline{q}`$ distribution function. We wish to make two comments on this result. Firstly, the angular integration range $`0\le \theta \le \pi `$ used in their calculation may not be appropriate: if the leading quark is to enter the phase boundary, it should have $`0\le \theta \le \pi /2`$. Therefore we suggest that the corrected expression should be
$`\mathrm{\Sigma }_h={\displaystyle \frac{1}{f_{b\overline{b}}}}{\displaystyle \int _0^{\pi /2}}d\theta {\displaystyle \int _{E_{th}}^{\mathrm{\infty }}}dE{\displaystyle \frac{dn_{q\overline{q}}}{dEd\theta }}\dot{x}_{\perp }^q(\theta )P_b(E,\theta )`$ (14)
Here $`f_{b\overline{b}}`$, the excess baryon flux in the hadron phase, is given by
$`f_{b\overline{b}}={\displaystyle \frac{\mu _b}{T}}{\displaystyle \frac{g_b}{2\pi ^2}}(m_b+T)T^2\mathrm{exp}(-m_b/T)`$ (15)
where $`m_b`$, $`\mu _b`$ and $`g_b`$ are the baryon mass, baryon chemical potential and degeneracy factor, respectively, $`\dot{x}_{\perp }^q`$ is the component of the leading-quark velocity perpendicular to the phase boundary, and $`E_{th}`$ is the threshold energy.
Secondly, JF have carried out their subsequent calculations numerically, so that the dependence of $`\mathrm{\Sigma }_h`$ on the relevant parameters remains somewhat obscure. An analytical estimate of the seemingly complicated expression (14) is therefore very desirable, because it would make this dependence transparent. For this purpose we proceed as follows:
Using the Boltzmann approximation ($`E/T\gg 1`$), the differential excess quark number density in a given energy interval $`dE`$ and a given interval of incident angles $`d\theta `$ becomes
$`{\displaystyle \frac{dn_{q\overline{q}}}{dEd\theta }}\approx {\displaystyle \frac{\mu _q}{T}}{\displaystyle \frac{g_q}{2\pi ^2}}E^2\mathrm{exp}(-E/T)\mathrm{sin}\theta `$ (16)
where $`\mu _q`$ and $`g_q`$ are the quark chemical potential and the statistical weight of the quarks, respectively.
Then Eq. (14) becomes
$`\mathrm{\Sigma }_h\approx C{\displaystyle \int _0^1}ds\,s{\displaystyle \int _{E_{th}}^{\mathrm{\infty }}}dE\,E^2\mathrm{exp}(-E/T)P_b(E,\theta )`$ (17)
where
$`C\equiv {\displaystyle \frac{1}{f_{b\overline{b}}}}{\displaystyle \frac{\mu _q}{T}}{\displaystyle \frac{g_q}{2\pi ^2}};s=\mathrm{cos}\theta `$ (18)
Due to the rapidly damped factor $`\mathrm{exp}(-E/T)`$ in Eq. (17), most of the contribution to the energy integration comes from around the threshold energy $`E_{th}`$. Therefore, it is convenient to make the transformation
$`\rho ={\displaystyle \frac{E-E_{th}}{T}};E=E_{th}\{1+{\displaystyle \frac{T}{E_{th}}}\rho \}`$ (19)
Upon using the identity $`\int _0^{\mathrm{\infty }}d\rho \,\mathrm{exp}(-\rho )=1`$, Eq. (17) reduces to
$`\mathrm{\Sigma }_h\approx C{\displaystyle \int _0^1}ds\,s\,TE_{th}^2\mathrm{exp}(-{\displaystyle \frac{E_{th}}{T}})P_b(E_{th},\theta )`$ (20)
within correction terms of order $`T/E_{th}`$. Now, at large angles $`\theta \to \pi /2`$ one has $`s\to 0`$ and $`E_{th}\to \mathrm{\infty }`$, implying that $`\mathrm{exp}(-E_{th}/T)`$ gets heavily damped. Hence, most of the contribution to the angular integral must come from around the forward direction $`\theta \approx 0`$, $`s\approx 1`$. For convenience define
$`\widehat{E_{th}}`$ $`\equiv `$ $`E_{th}|_{\theta =0}`$ (21)
$`=`$ $`m_b+Bn_q^{-1}`$
$`\widehat{P_b}`$ $`\equiv `$ $`P_b(\widehat{E_{th}},\theta =0)`$ (22)
$`\lambda `$ $`\equiv `$ $`{\displaystyle \frac{E_{th}-\widehat{E_{th}}}{T}}`$
$`\approx `$ $`{\displaystyle \frac{m_b}{T}}(1-s)`$ (23)
Here $`B`$ is the bag constant and $`n_q`$ is the quark plus antiquark density. Eq. (21) is the threshold energy in the forward direction $`(\theta =0)`$ and Eq. (22) the corresponding probability. Using the approximation $`\int _0^{m_b/T}d\lambda \,\mathrm{exp}(-\lambda )\approx 1`$, Eq. (20) yields
$`\mathrm{\Sigma }_h`$ $`\approx `$ $`{\displaystyle \frac{CT^2\widehat{E_{th}}^2}{m_b}}\mathrm{exp}(-\widehat{E_{th}}/T)\widehat{P_b}\{1+O({\displaystyle \frac{T}{m_b}})\}`$ (24)
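Explicitly, the step from Eq. (20) to Eq. (24) is just the change of variables (23): with $`s\approx 1`$ and $`P_b\approx \widehat{P_b}`$ taken outside the integral, $`ds=-(T/m_b)\,d\lambda `$ and

$`{\displaystyle \int _0^1}ds\,s\,e^{-E_{th}/T}\approx {\displaystyle \frac{T}{m_b}}e^{-\widehat{E_{th}}/T}{\displaystyle \int _0^{m_b/T}}d\lambda \,e^{-\lambda }\approx {\displaystyle \frac{T}{m_b}}e^{-\widehat{E_{th}}/T}`$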
Substituting Eq. (18) for $`C`$, we arrive at the desired analytical estimate:
$`\mathrm{\Sigma }_h={\displaystyle \frac{\mu _q}{\mu _b}}{\displaystyle \frac{g_q}{g_b}}\left({\displaystyle \frac{\widehat{E_{th}}}{m_b}}\right)^2e^{-(\widehat{E_{th}}-m_b)/T}\widehat{P_b}\{1+O({\displaystyle \frac{T}{m_b}})\}`$ (25)
The probability function in the JF model,
$`\widehat{P_b}`$ $`=`$ $`a\left[1-\mathrm{exp}\{-b\widehat{E_{th}}^2\}\right]`$ (26)
is replaced in our model by
$`\widehat{P_b^{\prime }}`$ $`=`$ $`e^{-w_m}{\displaystyle \frac{w_m^2}{2}}`$ (27)
with
$`a`$ $`=`$ $`{\displaystyle \frac{\kappa _b}{\kappa _m+\kappa _b}};b=(\kappa _m+\kappa _b){\displaystyle \frac{\pi \mathrm{\Lambda }^2}{2\sigma ^2}}`$ (28)
$`w_m`$ $`=`$ $`\kappa _m{\displaystyle \frac{\pi \mathrm{\Lambda }^2\widehat{E_{th}}^2}{2\sigma ^2}}`$ (29)
Here the string tension of the flux tube is $`\sigma \approx 0.177\,\mathrm{GeV}^2`$, and the single-pair creation probability per unit time per unit volume, $`\kappa _m`$, is given by
$`\kappa _m={\displaystyle \frac{\sigma ^2}{4\pi ^3}}\mathrm{exp}(-\pi m_q^2/\sigma )`$ (30)
where $`m_q`$ is the quark mass. Since there is an ambiguity in choosing the precise value of $`m_q`$, we allow in the sequel both possibilities, viz. $`m_q=300\,\mathrm{MeV}`$ (constituent quark mass) and $`m_q=10\,\mathrm{MeV}`$ (current quark mass). Eqs. (25), (26) and (27) form the main algebraic results of the present paper.
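Collecting Eqs. (25)-(30), the temperature dependence of $`\mathrm{\Sigma }_h`$ can be tabulated in a few lines. The sketch below uses the quoted values $`\sigma \approx 0.177\,\mathrm{GeV}^2`$, $`g_q=12`$, $`g_b=4`$ and $`\mu _b=3\mu _q`$, while the baryon mass, the flux-tube radius and the interaction-energy term $`Bn_q^{-1}`$ are illustrative assumptions inserted by hand:

```python
import math

SIGMA = 0.177        # string tension, GeV^2 (Sec. 4)
GQ, GB = 12.0, 4.0   # statistical weights g_q, g_b (Sec. 4)
MB = 0.939           # nucleon mass in GeV -- illustrative choice for m_b

def kappa_m(m_q):
    """Schwinger single-pair creation rate, Eq. (30)."""
    return SIGMA**2 / (4.0 * math.pi**3) * math.exp(-math.pi * m_q**2 / SIGMA)

def sigma_h_ours(T, m_q, E_int, Lambda_=4.0):
    """Analytical estimate of Eq. (25) with our P_b' of Eq. (27).
    E_int stands in for the interaction energy B/n_q (GeV); both it and
    Lambda_ (GeV^-1) are assumed inputs, not fitted values."""
    E_th = MB + E_int                                                       # Eq. (21)
    w_m = kappa_m(m_q) * math.pi * Lambda_**2 * E_th**2 / (2.0 * SIGMA**2)  # Eq. (29)
    P_b = math.exp(-w_m) * w_m**2 / 2.0                                     # Eq. (27)
    prefac = (1.0 / 3.0) * (GQ / GB) * (E_th / MB)**2                       # mu_q/mu_b = 1/3
    return prefac * math.exp(-(E_th - MB) / T) * P_b

for T in (0.10, 0.15, 0.20):   # temperatures in GeV
    print(T, sigma_h_ours(T, m_q=0.3, E_int=0.1))
```

Note that as the interaction energy tends to zero the exponential factor and the prefactor both tend to unity, so $`\mathrm{\Sigma }_h\to \widehat{P_b^{\prime }}`$ — the high-$`T`$ limit discussed at the end of Sec. 4.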
## 4 Results and Discussions
In Fig. 2 we show the variation of the double-pair creation probability at threshold in the forward direction with the temperature, for a small value of $`b=0.047\,\mathrm{GeV}^2`$ corresponding to the constituent quark mass $`(m_q=330\,\mathrm{MeV})`$. Here we have taken the ratio $`a`$, fixed by $`\kappa _m/\kappa _b`$ through Eq. (28), to be $`\approx 1/5`$, as in .
Fig. 3 is the same as Fig. 2, except that a higher value of $`b=0.32\,\mathrm{GeV}^2`$ is taken, corresponding to the current quark mass $`(m_q=10\,\mathrm{MeV})`$ and the ratio $`a\approx 1/20`$.
Clearly, the difference between the values of $`\widehat{P_b}`$ predicted by our model and by the JF model must be attributed to their different premises. Our approach has the advantage of working with only one decay parameter, $`\kappa _m`$, but ignores the connected diagram 1(b). The JF approach has the advantage of including the graph 1(b), but at the cost of bringing in the additional unknown decay parameter $`\kappa _b`$.
Next, we turn to the calculation of the baryon number penetrability $`\mathrm{\Sigma }_h`$ based on the analytical result (25). The probability $`\widehat{P_b}`$ can have either the JF form (26) or our form (27), with the remaining parameters set as for u, d flavours in the QGP sector and nucleons in the hadronic sector. Figs. 4 and 5 display the dependence of $`\mathrm{\Sigma }_h`$ on the temperature when the quark has the constituent and current masses, respectively.
In Fig. 4, when the quark has the constituent mass, our curve lies below that of the JF model. Our small predicted value $`\mathrm{\Sigma }_h\sim O(10^{-3})`$, incidentally, justifies the results obtained by Sumiyoshi et al. and Fuller et al. . However, in the opposite limit, i.e., for the current quark mass, Fig. 5 reveals that our graph is consistently above that of JF. Our predicted higher value of $`\mathrm{\Sigma }_h\sim O(10^{-2})`$ seems to agree with the original result obtained by Jedamzik and Fuller using numerical quadrature involving the unphysical angular range $`0\le \theta \le \pi `$.
In passing we note that the typical time up to which the CFT expands is $`t_0\approx \widehat{E_{th}}/\sigma \sim 10^{-24}\,\mathrm{s}`$, which is one order of magnitude smaller than the typical nuclear time scale of $`10^{-23}\,\mathrm{s}`$. Can the $`b^{\prime }`$ channel of Fig. 1 be important under such a circumstance? To answer this question we borrow the $`w_m`$ values from Figs. 2-5 and look at the ratio
$`{\displaystyle \frac{P_b^{\prime }}{P_m^{\prime }}}={\displaystyle \frac{w_m}{2}}\approx 2\%\mathrm{\ to\ }15\%`$ (31)
which is sizable. Therefore, our view of the double-pair production event as a succession of two single-pair events can be justified.
Finally, we wish to comment on a couple of factors on which the baryon number penetrability crucially depends within the chromoelectric flux tube approach. One factor is the choice of the quark mass, as seen from Figs. 4 and 5. Another factor, to which $`\mathrm{\Sigma }_h`$ is very sensitive, is the threshold energy $`\widehat{E_{th}}`$ in the forward direction. Following the idea of Jedamzik and Fuller , this threshold energy $`E_{th}`$ consists of two parts: one is simply the rest mass of the baryon, $`m_b`$, and the other is the interaction energy $`Bn_q^{-1}`$ for each quark and antiquark residing in the QGP. JF have parametrized the interaction energy as $`3.7T`$, which apparently grows with the temperature. However, in our opinion, the function $`Bn_q^{-1}`$ should decrease with $`T`$ like $`T^{-3}`$ for a fixed value of the bag constant $`B`$, since the total quark density $`n_q`$ is an increasing function of $`T`$ ($`n_q\propto T^3`$ for nearly massless quarks). Then $`E_{th}`$ would tend to $`m_b`$ in Eq. (25), which yields the elegant high-temperature behaviour
$`\mathrm{\Sigma }_h\stackrel{T\to \mathrm{\infty }}{\longrightarrow }\widehat{P_b}`$ (32)
remembering that $`g_q=12`$, $`g_b=4`$ and $`\mu _b=3\mu _q`$.
ACKNOWLEDGEMENTS: One of us (VJM) thanks the UGC, Government of India, New Delhi, for financial support.
# Statistics of Shear-induced Rearrangements in a Model Foam
## I Introduction
A foam is a disordered collection of densely-packed polydisperse gas bubbles in a relatively small volume of liquid . Foams have a rich rheological behavior; they act like elastic solids for small deformations but they flow like viscous liquids at large applied shear stress . The stress is relaxed by discrete rearrangement events that occur intermittently as the foam is sheared. Three-dimensional foams are opaque, which makes it difficult to observe these bubble movements directly. However, measurements by diffusing-wave spectroscopy of three-dimensional foams subjected to a constant shear rate suggest that the number of bubbles involved in the rearrangements is small, of the order of four bubbles. Bubble rearrangements can be observed directly by fluorescence microscopy in two-dimensional foams found in insoluble monolayers at the air-water interface. A study of shear in such foams also revealed no large-scale rearrangements.
While analytical theories for the response to applied steady shear may be constructed for periodic foams, only simulation approaches are possible for disordered foams. Kawasaki’s vertex model was the first to incorporate dissipative dynamics. It applies to a two-dimensional foam in the limit in which the area fraction of gas is unity (a dry foam). Bubble edges are approximated by straight line segments that meet at a vertex that represents a Plateau border. The equations of motion for the vertices are solved by balancing viscous dissipation due to shear flow within the borders by surface tension forces. At low shear rates, the elastic energy of the foam, which is associated with the total length of the bubble segments, shows intermittent energy drops with a distribution of event rate vs. energy release that follows a broad power law, consistent with self-organized criticality. The rearrangements associated with the largest events consist of cooperative motions of bubbles that extend over much of the system.
Weaire and coworkers were the first to develop a model appropriate to a disordered wet foam. The model does not include dissipation, so it is quasi-static by construction. Thus the system is allowed to relax to an equilibrium configuration after each of a series of infinitesimal shear steps. The size of rearrangements is measured by the number of changes in nearest-neighbor contacts. For dry foams, the average event size is small, inconsistent with a picture of self-organized criticality. However, as the liquid content increases, the event-size distribution broadens, with the largest events involving many bubbles. Although the statistics are limited, this is consistent with a picture of criticality at the point where the foam loses its rigidity.
The first model capable of treating wet, disordered foams at nonzero shear rate was proposed by Durian . His model pictures the foam as consisting of spherical bubbles that can overlap. Two pairwise-additive interactions between neighboring bubbles are considered, a harmonic repulsive force that mimics the effect of bubble deformation and a force proportional to the velocity difference between neighboring bubbles that accounts for the viscous drag. He found that the probability density of energy drops followed a power law, with a cutoff at very high energy events. The largest event observed consisted of only a few bubbles changing neighbors. This is inconsistent with a picture of self-organized criticality, although the effect of the liquid content on the topology statistics was not examined.
Most recently, Jiang et al. have employed a large-Q Potts model to examine sheared foams. In this lattice model bubbles are represented by domains of like spin, and the film boundaries are the links between regions of different spins. Each spin merely acts as a label for a particular bubble, and the surface energy arises only at the boundaries where the spins differ. The evolution of the foam is studied by Monte Carlo dynamics with a Hamiltonian consisting of three terms: the coupling energy between neighboring spins at the boundaries of the bubbles; an energy penalty for changes in the areas of the bubbles, which inhibits coarsening of the foam; and a shear term that biases the probability of a spin reassignment in the strain direction. The spatial distribution of T1 events was examined and no system-wide rearrangements were observed. Nevertheless, Jiang, et al. found a power-law distribution of energy changes. They also found that the number of events per unit strain displayed a strong shear rate dependence, suggesting that a quasi-static limit does not exist.
These four simulation approaches thus offer conflicting pictures as to (1) the existence of a quasistatic limit, (2) whether or not rearrangement dynamics at low shear rates are a form of self-organized criticality, and (3) whether or not the melting of foams with increasing liquid content is a more usual form of criticality. One possible reason for this disagreement is differences in the treatment of dissipation, and hence in the treatment of the dynamics of the rearrangements. In principle, the only accurate way in which to include dissipation in a sheared foam is to solve for the Stokes flow in the liquid films and Plateau borders. This approach has been adopted by Li, Zhou and Pozrikidis, but so far it has only been applied to periodic foams. The statistics of rearrangement events are fundamentally different in periodic and disordered foams; in sheared periodic foams, all the bubbles rearrange simultaneously at periodic intervals, while in a disordered foam, the rearrangements can be localized and intermittent. Nonetheless, the Stokes-flow approach is the only one that can be used as a benchmark for more simplified models.
In order to gain a better understanding of the origin of the discrepancies between the various models, as well as between the models and experiments, we report here a systematic study of the properties of a sheared foam using Durian’s model. We begin by reviewing his model and discussing our numerical implementation using two different forms of dissipation. After confirming that there are no significant system-size effects for dry samples, we examine shear-rate dependence and establish the existence of a true quasistatic limit for the distribution and rate of energy drops and topology changes. This limit is shown to be independent of the dissipation mechanism for foams of different gas fractions. Finally, we examine dramatic changes in the behavior of these quantities as the liquid content is tuned toward the melting point.
## II Bubble model
Durian’s model is based on the wet-foam limit, where the bubbles are spherical. The foam is described entirely in terms of the bubble radii $`\{R_i\}`$ and the time-dependent positions of the bubble centers $`\{\vec{r}_i\}`$. The details of the microscopic interactions at the level of soap films and vertices are subsumed into two pairwise additive interactions between bubbles, which arise when the distance between bubble centers is less than the sum of their radii. The first, a repulsion that originates in the energy cost to distort bubbles, is modeled by the compression of two springs in series with individual spring constants that scale with the Laplace pressures $`\sigma /R_i`$, where $`\sigma `$ is the liquid-gas surface tension and $`R_i`$ is the bubble radius. Bubbles that do not overlap are assumed not to interact. The repulsive force on bubble $`i`$ due to bubble $`j`$ is then
$$\vec{F}_{ij}^r=k_{ij}\left[(R_i+R_j)-|\vec{r}_i-\vec{r}_j|\right]\widehat{r}_{ij}$$
(1)
where $`\widehat{r}_{ij}`$ is the unit vector pointing from the center of bubble $`j`$ to the center of bubble $`i`$, and $`k_{ij}=F_0/(R_i+R_j)`$ is the effective spring constant, with $`F_0\sim \sigma R`$. The second interaction is the viscous dissipation due to the flow of liquid in the films. It, too, is assumed to be pairwise additive and is modeled by the simplest form of drag, where the force is proportional to the velocity difference between overlapping bubbles. The viscous force on bubble $`i`$ due to its neighbor $`j`$ is
$$\vec{F}_{ij}^v=-b(\vec{v}_i-\vec{v}_j),$$
(2)
where the constant $`b`$ is proportional to the viscosity of the liquid, and is assumed to be the same for all bubble neighbors.
The net force on each bubble sums to zero, since inertial effects are negligible in this system. Summing over those bubbles $`j`$ that touch bubble $`i`$, the equation of motion for bubble $`i`$ is
$$\underset{j}{\sum }(\vec{v}_i-\vec{v}_j)=\frac{F_0}{b}\underset{j}{\sum }\left[\frac{1}{|\vec{r}_i-\vec{r}_j|}-\frac{1}{R_i+R_j}\right](\vec{r}_i-\vec{r}_j)+\frac{\vec{F}_i^a}{b},$$
(3)
where $`\vec{F}_i^a`$ is an externally applied force, arising, for instance, from interactions with moving walls.
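For reference, the right-hand side of Eq. 3 is straightforward to evaluate. The sketch below is a minimal Python implementation of the pairwise spring forces, not the code used in our simulations; the $`O(N^2)`$ double loop is kept for clarity, whereas a production code would use neighbor lists:

```python
import numpy as np

def repulsive_forces(r, R, F0=1.0):
    """Pairwise spring forces of Eq. (1) for N bubbles in 2D.
    r: (N, 2) array of bubble centers; R: (N,) array of radii.
    Only overlapping pairs (|r_i - r_j| < R_i + R_j) interact."""
    N = len(R)
    F = np.zeros((N, 2))
    for i in range(N):
        for j in range(i + 1, N):
            d = r[i] - r[j]
            dist = np.hypot(d[0], d[1])
            if dist < R[i] + R[j]:
                # F0 [1/|r_ij| - 1/(R_i + R_j)] (r_i - r_j), cf. Eq. (3)
                f = F0 * (1.0 / dist - 1.0 / (R[i] + R[j])) * d
                F[i] += f
                F[j] -= f   # Newton's third law
    return F
```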
Durian employed a further simplification of this model, in which the viscous dissipation is taken into account in a mean-field manner by taking the velocity of each bubble relative to an average linear shear profile. In this case, the total drag force on bubble $`i`$ due to all of its $`N_i`$ overlapping neighbors is
$$\vec{F}_i^v=-bN_i\left(\vec{v}_i-\dot{\gamma }y_i\widehat{x}\right).$$
(4)
In the numerical simulations reported here we use both the mean-field model of dissipation as well as the approximation represented by Eq. 2, which we call the local dissipation model. In the latter, at each integration time step the velocity of a bubble is measured with respect to the average of the velocities of its $`N_i`$ overlapping neighbors, so that the total drag force on bubble $`i`$ is
$$\vec{F}_i^v=-b\left(N_i\vec{v}_i-\underset{j=\text{ nn}}{\sum }\vec{v}_j\right)$$
(5)
For very large $`N_i`$, this reduces to Eq. 4; otherwise, it allows for fluctuations. One aim of our study is to establish the sensitivity of the results to the specific form of dissipation used, Eq. 4 or Eq. 5.
In two dimensions, the area fraction of gas bubbles, $`\varphi `$, can be defined by the total bubble area $`\sum _i\pi R_i^2`$ per system area. Because the bubbles are constrained to remain circular and their interactions are approximated as pairwise-additive, the model necessarily breaks down for very dry foams. In fact, bubble radii can even be chosen so that $`\varphi `$ exceeds one. In a real foam, of course, this is prevented by the divergence of the osmotic pressure.
## III Numerical Method
All the results reported here are based on simulations of a two-dimensional version of Durian’s model. We use Eq. 3 to study a two-dimensional foam periodic in the $`x`$–direction and trapped between parallel plates in the $`y`$–direction. Bubbles that touch the top and bottom plates are fixed to them, and the top plate is moved at a constant velocity in the $`x`$–direction. (The system can also be sheared with a constant force instead of a constant velocity, but that case will not be discussed here.) Thus, bubbles are divided into two categories — “boundary” bubbles, which have velocities that are determined by the motion of the plates, and “interior” bubbles, whose velocities must be determined from the equations of motion.
The equation of motion Eq. 3 can be written in the form
$$𝐌(\{𝐫\})\{𝐯\}=\{𝐅^r\}/b+\{𝐅^a\}/b$$
(6)
where $`\{𝐯\}`$ is a vector containing all the velocity components of all of the bubbles, $`\{v_0^x,v_0^y,v_1^x,v_1^y,\mathrm{}\}`$, $`\{𝐅^r\}`$ is a vector of all of the repulsive bubble–bubble forces, and $`\{𝐅^a\}`$ contains all the forces exerted by the walls. The matrix $`𝐌`$ depends on the instantaneous positions of the bubbles. The $`2\times 2`$ block submatrix $`M_{ij}`$ is $`-\mathrm{𝟏}`$ if the distinct bubbles $`i`$ and $`j`$ overlap, and $`\mathrm{𝟎}`$ if they do not overlap. On the diagonal, $`M_{ii}=N_i\mathrm{𝟏}`$, where $`N_i`$ is the number of overlapping neighbors of bubble $`i`$. Eq. 6 is of the form $`𝐀(𝐫,t)(d𝐫/dt)=f(𝐫,t)`$, which we solve for the bubble positions $`𝐫`$ with the routine DDRIV3. DDRIV3 has the ability to solve differential equations in which the left hand side is multiplied by an arbitrary time-dependent matrix. Furthermore, it allows all matrix algebra to be performed by external routines, allowing us to take advantage of the sparse nature of $`𝐌`$. We use the SPARSKIT2 library for sparse matrix solutions.
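To make the linear-algebra step concrete, the sketch below assembles $`𝐌`$ and solves Eq. 6 for the velocities at one instant. SciPy here merely stands in for the DDRIV3/SPARSKIT2 machinery used in the actual simulations; the bookkeeping for boundary bubbles fixed to the plates is omitted, and the solve assumes every bubble overlaps at least one neighbor (otherwise $`𝐌`$ is singular):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def solve_velocities(r, R, F, b=1.0):
    """Solve M {v} = {F}/b, cf. Eq. (6), for the bubble velocities.
    r: (N, 2) positions; R: (N,) radii; F: (N, 2) elastic + applied forces.
    Off-diagonal 2x2 blocks of M are -1 for overlapping pairs; the diagonal
    block of bubble i is N_i (its number of overlapping neighbors)."""
    r = np.asarray(r, dtype=float)
    N = len(R)
    A = sparse.lil_matrix((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            if np.hypot(*(r[i] - r[j])) < R[i] + R[j]:
                A[i, j] = A[j, i] = -1.0
                A[i, i] += 1.0
                A[j, j] += 1.0
    M = sparse.kron(A.tocsr(), sparse.eye(2), format="csc")  # 2x2 blocks
    return spsolve(M, (np.asarray(F, dtype=float) / b).ravel()).reshape(N, 2)
```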
The only relevant dynamical scale in this problem is set by the characteristic relaxation time arising from the competing mechanisms for elastic storage and viscous dissipation, $`\tau _d=bR/F_0`$. This is the characteristic time scale for the duration of bubble rearrangements driven by a drop in total elastic energy. Without loss of generality we set this to unity in the simulation. In these units, the dimensionless shear rate $`\dot{\gamma }`$ is the capillary number.
To introduce polydispersity, the bubble radii are drawn at random from a flat distribution of variable width; in all the results reported here, the bubble radii vary from 0.2 to 1.8 times the average bubble radius. We note that the size distribution in experimental systems is closer to a truncated Gaussian with the maximum size equal to twice the average radius. The truncated Gaussian distribution arises naturally from the coarsening process . We tested the sensitivity of our results to the bubble distribution by doing one run with bubbles drawn from a triangular distribution, and found that the shape of the distribution had no significant effect. Similarly, variation of the width of a triangular distribution has been shown to have no influence on the linear viscoelasticity . Note that it is important to include polydispersity because a monodisperse system will crystallize under shear, especially in two dimensions.
In all of our runs, the system is first equilibrated with all bubbles treated as interior bubbles, and with a repulsive interaction between the bubbles and the top and bottom plates so that bubbles cannot penetrate the plates. The bubbles that touch the top and bottom plates are then converted to boundary bubbles. The top plate is moved at a constant velocity and data collection begins after any initial transients die away. In addition to recording quantitative measures of the system, we also run movies of the sheared foam in order to observe visually how the flow changes as a function of shear rate, area fraction and other parameters .
## IV Quantities Measured
Before showing results, we discuss the various quantities extracted during a run. Under a small applied shear strain, bubbles in a real foam distort; as the shear strain increases, the structure can become unstable and they may thus rearrange their relative positions. In the bubble model, the distortion of bubbles is measured globally by the total elastic energy stored in all the springs connecting overlapping bubbles:
$$E=\frac{1}{2}\underset{\langle ij\rangle }{\sum }k_{ij}\left[(R_i+R_j)-|\vec{r}_i-\vec{r}_j|\right]^2.$$
(7)
Under steady shear, the elastic energy rises as bubbles distort (overlap) and then drops as bubbles rearrange. Thus, the total elastic energy fluctuates around some average value. The scale of the energy is set by the elastic interaction and is of order $`F_0R`$ per bubble, where $`R`$ is the average bubble radius.
Fig. 1a shows a plot of the total elastic energy as a function of strain for a system of 144 bubbles at area fraction $`\varphi =1.0`$ driven at a constant shear rate of $`\dot{\gamma }=10^{-3}`$. Similar plots for stress vs strain are shown in Refs. . Note the precipitous energy drops, $`\mathrm{\Delta }E`$, due to bubble rearrangements. In the literature, these energy drops are often referred to as avalanches. Since the term “avalanche” tends to imply the existence of self-organized criticality, we employ the more neutral but less elegant term “energy drop.” The time interval between energy drops is much larger than the duration of a single event. This is also illustrated in Fig. 1b, which shows the magnitude of energy drops that occur as the system is strained. ($`\mathrm{\Delta }E`$ is normalized by the average energy per bubble $`E_b`$, which has been computed by averaging the elastic energy over the entire duration of a run and dividing by the total number of bubbles in the system, $`N_{bub}`$.) These recurring precipitous rearrangements represent the only way for the foam to relax stress: there is no mechanism involving a gradual energy release, as illustrated in Fig. 1a. Note that we compute only the total elastic energy of the system; because events can be localized and intermittent, the elastic energy may be dropping in one region of the sample and rising in other regions. This would limit the size of the energy drop measured.
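One plausible implementation of this bookkeeping, consistent with the definition of event duration used below (the strain between a decrease in the elastic energy and the next increase), is sketched here:

```python
def energy_drops(strain, energy):
    """Extract energy-drop events from an E(strain) series.
    An event is a maximal run of decreasing energy; returns a list of
    (drop magnitude, strain duration) pairs."""
    drops, start = [], None
    for k in range(1, len(energy)):
        if energy[k] < energy[k - 1]:
            if start is None:
                start = k - 1                      # drop begins
        elif start is not None:                    # energy stopped falling
            drops.append((energy[start] - energy[k - 1],
                          strain[k - 1] - strain[start]))
            start = None
    if start is not None:                          # series ends mid-drop
        drops.append((energy[start] - energy[-1],
                      strain[-1] - strain[start]))
    return drops
```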
While useful for building intuition, the distribution of energy drops does not yield direct information about bubble rearrangements. Therefore, we also measure the number $`N`$ of bubbles that experience a change in overlapping neighbors during an energy drop. We exclude events in which two bubbles simply move apart or together; thus the smallest event is $`N=3`$. A typical sequence of configurations before, during, and after an event is shown in the first three frames of Fig. 2. In this energy drop the magnitude of the drop and the number of bubbles that change neighbors are close to the average. In the second and third frame of the sequence, we have marked the bubbles that changed neighbors since the beginning of the energy drop (shown in the first frame). As the system is strained, more bubbles change neighbors. For the particular energy drop chosen, roughly one-sixth of the bubbles eventually change overlapping neighbors. The fourth frame shows the final configuration of bubbles (colored gray) superimposed on the initial configuration at the start of the energy drop (colored black). Most of the bubble motions that lead to this average-sized energy drop are rather subtle shifts; there are no topological rearrangements. A large energy drop, from the tail of the distribution, is shown in Fig. 3. Again, the first three frames show the configurations at the beginning, middle and end of the drop, with the bubbles that change overlapping neighbors marked in gray. The fourth frame shows the extensive rearrangements that occur from the beginning to the end of the drop. The configuration shown is the final one, and the short segments are the tracks made by the centers of the bubbles during the energy drop.
Typically, larger drops involve larger numbers of bubbles. Fig. 1c depicts $`N`$ during each energy drop in the same run as in Fig. 1a and b. (Here, $`N`$ is normalized by the total number of bubbles in the system, $`N_{bub}`$.) The correlation between energy drops and the number of bubbles involved is shown by a scatter plot of these quantities in Fig. 4 for a 900-bubble system strained from 0 to 10. We see that indeed there is a strong correlation between these two measures of the size of an event. Larger drops in energy involve larger numbers of bubbles and are therefore spatially more extended. The correlation is particularly good at the large-event end. There is more variability for midsize and small events – a large range of energy drops corresponds to the same small number of rearranging bubbles, suggesting that typical rearrangements involve only a few bubbles.
Besides counting statistics for energy drops and changes in number of bubble overlaps, another direct measure of bubble rearrangements is the rate of T1 events, i.e. of topology changes of the first kind . For a perfectly dry two-dimensional foam consisting of thin films, these are said to occur when a bubble edge shrinks to zero, such that a common vertex is shared by four bubbles, two moving apart and two moving together. These events were the only property used by Dennin and Knobler to characterize the response of their monolayer foam to shear because they were unable to measure changes in the energy. While the time at which a T1 event occurs is well defined in a dry foam, it is somewhat ambiguous for a wet foam because there can be an exchange of nearest neighbors without a common point of contact. Moreover, while the number of bubbles involved in a T1 event is four by definition, large clusters of bubbles can rearrange, with some of the interior bubbles being involved in two or three T1 events simultaneously. It is then much harder to assign an exact time to a T1 event.
To make contact with the monolayer experiments, we may define T1 events within the bubble model as follows. First we broaden the definition of “nearest neighbors” to include bubbles that do not necessarily overlap, but that are nonetheless so close that $`|\vec{r}_i-\vec{r}_j|<a(R_i+R_j)`$, where $`a>1`$ is a suitably chosen factor that may depend on $`\varphi `$. We then say that a T1 event begins when two nearest neighbors move apart, and we say that it ends when a new nearest-neighbor pair intrudes between them; the time at which the event occurs is taken as the midpoint in this sequence. This definition is illustrated in the time sequence of a T1 event shown in Fig. 5. While the duration of an actual T1 event in a dry foam is instantaneous, the duration within the bubble model may vary greatly. Furthermore, the midpoint in the sequence does not necessarily coincide with the exact moment the switching occurs. In many instances it takes a long time after two bubbles separate for the remaining pair to come into contact. To compare with our other measures of rearrangement, we depict in Fig. 1d the number of T1 events as a function of strain for the same run as in Figs. 1a, b and c. There appears to be good correlation between the largest energy drops and instances in which many T1 events occur simultaneously. However, there are many more T1 events than energy drops. This is because many T1 events can occur when a large cluster of bubbles rearranges, and because our definition also includes topology changes that cause an increase in the total elastic energy.
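The neighbor bookkeeping behind this definition reduces to tracking a set of pairs between configurations; a minimal sketch follows (the matching of each broken pair with the newly formed pair that intrudes between its members, which completes the T1 identification, is left out):

```python
import numpy as np

def neighbor_pairs(r, R, a=1.0):
    """Set of 'nearest neighbor' pairs under the broadened criterion
    |r_i - r_j| < a (R_i + R_j); a = 1 recovers strict overlap."""
    r = np.asarray(r, dtype=float)
    N = len(R)
    return {(i, j) for i in range(N) for j in range(i + 1, N)
            if np.hypot(*(r[i] - r[j])) < a * (R[i] + R[j])}

def pair_changes(pairs_old, pairs_new):
    """Pairs broken and formed between two configurations; a T1 event
    pairs one broken bond with one newly formed bond."""
    return pairs_old - pairs_new, pairs_new - pairs_old
```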
We can examine the consequences of our definition of a T1 event by studying the distribution of the number of rearrangement events as a function of their total duration in units of the strain. This is done for both energy drops and T1 events, as shown in Fig. 6. The duration of an energy drop is taken as the difference in strain between a decrease in the elastic energy and the next increase. It is evident from the duration distribution for energy drops, Fig. 6a, that most energy drops occur over a relatively short strain scale. In units of time, the longest events are comparable to a hundred times the characteristic time scale in the problem ($`\tau _d=1`$ in our simulations). We find a good correlation between the number of bubbles that change overlapping neighbors and the duration of the event; the more bubbles involved in the event, the longer it lasts. The distribution for T1 events, shown in Fig. 6b, has a qualitatively similar shape, exhibiting a slightly more rapid decrease for both fast and slow events. However, the scale on which T1 events occur is an order of magnitude larger than the characteristic duration of the energy drops. By examining the bubble motions we see that the largest energy drops are associated with many T1 events, but the difference in strain scales makes it difficult to demonstrate an exact correlation between the number of overlap changes and the number of T1’s. In counting the T1 events, we include only events that have a total strain duration of less than 2. Fig. 6b shows that we have included all the T1 events for this run.
## V Simulation Results
For a given system size, strain rate, dissipation mechanism and gas fraction, we now collect statistics on the following measures of bubble dynamics: (1) the probability distribution $`P(\mathrm{\Delta }E)`$ for energy drops of size $`\mathrm{\Delta }E`$; (2) the probability distribution $`P(N)`$ for the number of bubbles $`N`$ that change overlapping neighbors during an energy drop event; and (3) the event rates for energy drops and T1 events, $`S(\mathrm{\Delta }E)`$ and $`S(T1)`$, each defined as the number of events per bubble per unit strain.
### A System Size
We first address the important issue of the finite size of the simulation sample. This is done for dry foams, $`\varphi =1.0`$, driven at a slow strain rate, $`\dot{\gamma }=10^{-3}`$. The results for four system sizes, $`N_{bub}=36`$, 144, 324 and 900, are shown in Fig. 7. In these runs, the systems were strained up to 80, 80, 31 and 10, respectively. The top plot shows the energy drop distribution scaled by $`E_b`$, the average energy per bubble. It shows that energy drops vary greatly in size over the course of a single run. The general features of this distribution have been reported earlier . There is a power-law region with an exponent of -0.7 that extends over several decades in $`\mathrm{\Delta }E/E_b`$, followed by a sharp cutoff that occurs above a characteristic event size. Such a distribution has a well-defined average energy drop, which is near the cutoff between 2$`E_b`$ and 3$`E_b`$ for the systems shown here. The slight deviation from power-law behavior for small $`\mathrm{\Delta }E`$ was absent in the earlier simulations , which did not exclude two-bubble events, and which had a different roundoff error. Also, as seen earlier , the two largest systems, with 324 and 900 bubbles, respectively, have nearly identical distributions. This has two important implications; namely, that the sharp cutoff of the power-law distribution is not a finite-size effect, and that the system does not exhibit self-organized criticality.
The presence of a characteristic energy-drop size can be corroborated by examining the number of bubbles that participate in rearrangements for the same set of runs, which is given in the middle plot, Fig. 7b. This quantity has not been studied previously within the bubble model. We plot the probability distribution $`P(N)`$ of the number of bubbles $`N`$ that change overlapping neighbors during a rearrangement. The distribution decreases monotonically with a sharp cutoff at the large-event end. This indicates that most of the rearrangements are local and involve only a few bubbles. Fig. 7b shows that as the system size increases, the largest events represent a smaller fraction of the total number of bubbles. Indeed, the tail of the distribution extends to smaller and smaller values of $`N/N_{bub}`$ with no signs of saturation as the system size $`N_{bub}`$ increases, indicating diminishing finite size effects.
We next look at the system-size dependence of event rates, $`S(T1)`$ and $`S(\mathrm{\Delta }E)`$, for the number of T1 events and energy drops per bubble per unit strain. This is shown in the bottom plot, Fig. 7c, for the same runs as in Figs. 7a-b. We find that $`S(\mathrm{\Delta }E)`$ decreases very slightly with increasing system size, but saturates for the largest systems. The results for $`S(T1)`$ show a stronger system-size dependence, increasing slightly with $`N_{bub}`$. This could be due to the fact that bubbles on the top and bottom boundaries of the system are fixed, which lowers the number of possible T1 events per bubble. As the system size grows, the boundary bubbles represent a smaller fraction of the system so the event rate increases towards its bulk value.
In short, all of our measurements at $`\varphi =1.0`$ and $`\dot{\gamma }=10^{-3}`$ indicate that the rearrangement events are localized and that there is no self-organized criticality. This agrees with observations of rearrangements in both monolayer and bulk foams.
### B Shear Rate Dependence
Now that size effects have been ruled out for dry foams, we may examine the influence of shearing the sample at different rates. Experiments by Gopal and Durian on three-dimensional foams show a marked change in the character of the flow with increasing shear rate. At low shear rates, the flow is characterized by intermittent, jerky rearrangement events occurring at a rate proportional to the strain rate. As the shear rate increases, so that the inverse shear rate becomes comparable to the duration of a rearrangement event, the flow becomes smoother and laminar, with all the bubbles gradually rearranging all the time. This was attributed to a dominance of viscous forces over surface tension forces when the strain rate exceeds the yield strain divided by the duration of a rearrangement event. In movies of our simulation runs, we also observe a crossover from intermittent, jerky rearrangements to smooth laminar flow. Similar smoothing has also been seen in stress vs. strain at increasing shear rates for the mean-field version of bubble dynamics . This raises the question of how the statistics of rearrangement events change with shear rate. Specifically, how is the “smoothing out” of the flow reflected in the statistics at high rates, and is there a quasistatic limit at low shear strain rates, in which rearrangement behavior is independent of strain rate? Earlier numerical studies by Bolton and Weaire were restricted, by construction, to the quasistatic limit. Okuzono and Kawasaki examined nonzero shear rates, but focused only on establishing the low shear-rate limit. Recently, Jiang and coworkers found a strong dependence of the T1 event rate on shear rate . They found that the number of T1 events per bubble per strain, $`S(T1)`$, decreases sharply with strain rate with no evidence of a quasi-static limit.
Our results for rearrangement behavior vs strain rate are collected in Fig. 8 for a 144-bubble system at $`\varphi =1.0`$. The top plot for the probability distribution of energy drops indicates that there is no gross change in $`P(\mathrm{\Delta }E)`$ with shear rate, even though our movies show a smoothing with less frequent energy drops. However, there is some suppression of small energy drops with an accompanying increase at large energy drops, as reflected in a somewhat smaller power-law exponent and larger cutoff at high values of $`\mathrm{\Delta }E/E_b`$. It is not apparent from $`P(\mathrm{\Delta }E)`$ vs $`\mathrm{\Delta }E/E_b`$, but we find that the average energy drop $`\mathrm{\Delta }E`$ and the average energy per bubble $`E_b`$ both increase with shear rate, and that $`\mathrm{\Delta }E`$ increases more rapidly. The reason why $`E_b`$ increases with shear rate is, of course, that viscous forces become more important than elastic forces and lead to increasing deformation (or in our model, overlaps) of bubbles. The net result is that there are fewer, relatively larger, rearrangements at high strain rates.
The tendency that small events are suppressed with increasing shear rates is also borne out by the distribution of the number of bubbles that change neighbors during an energy drop, as shown in Fig. 8b. Note that unlike the previous curves, $`P(N)`$ is plotted here on a linear scale. Two systematic trends emerge with increasing $`\dot{\gamma }`$: there are relatively fewer small events, i.e. $`P(N)`$ decreases significantly at small $`N/N_{bub}`$, and the tail extends to slightly higher $`N/N_{bub}`$. For $`\dot{\gamma }\approx 10^{-1}`$ the distribution is fairly flat, suggesting that no one event size is dominant and there are numerous large events of the order of the system size. This suggests that at this shear rate the system no longer relaxes stress by intermittent rearrangements, but by continuous flow, as confirmed by our movies of the runs . The trend in $`P(N)`$ is seen in larger systems as well. For the 900-bubble system we also find that as the shear rate increases from $`10^{-5}`$ to $`10^{-3}`$, the distribution flattens and extends to higher values of $`N`$. The average number of rearranging bonds increases with shear rate, consistent with the picture of many bubbles in motion as the system becomes more liquid-like. We cannot, however, probe the system at very high shear rates. Data above a shear rate of about 1 cannot be trusted because of the nature of the model used. At high rates of strain the viscous term dominates and the elastic forces are not strong enough to prevent clumping of bubbles. This is actually an artifact of the assumption that only overlapping bubbles interact viscously; such clumping does not occur until much higher strain rates in the mean-field version of dynamics. Another reason why we do not study shear rates higher than unity is because we do not allow bubble breakup under flow (recall that $`\dot{\gamma }`$ is the capillary number).
The gradual smoothing with increasing shear rate is most apparent in Fig 8c, where we see that the event rates of T1 events and energy drops both decrease with increasing strain rate. For the T1 events, the decrease is slight, and is primarily due to the fact that the event duration becomes even longer. The decrease is more dramatic for the energy drop events. With increasing strain rate, the average energy drop increases and the rate of energy drops decreases.
Let us now re-examine the behavior of all quantities in Fig.8, focusing on behavior at low shear strain rates. Note that all quantities appear to approach a reasonably well-defined “quasistatic” limit insensitive to the value of $`\dot{\gamma }`$. We thus have the following picture. For small $`\dot{\gamma }`$, the time between rearrangements is typically much longer than the duration of a rearrangement, implying there is adequate time for the system to relax stress. As the shear rate increases, bubbles are constantly in motion and cannot fully rearrange into local-minimum-energy configurations. Therefore, the viscous interactions dominate, and the system flows like an ordinary liquid.
### C Mean-Field vs Local Dissipation
In the bubble model at higher strain rates, the behavior was seen to depend on the form of dissipation: clumping for local dissipation, Eq. 5, as opposed to no clumping for mean-field dissipation, Eq. 4. In this section we will investigate whether dissipation affects the low-strain-rate behavior as well. If there truly exists a quasi-static limit as $`\dot{\gamma }\to 0`$, as suggested by the plots in the previous section, then the form of dissipation should have no influence. This need not occur, since once a rearrangement starts it proceeds with finite speed according to dynamics set by a competition between surface tension and dissipation forces. For example, it is conceivable that the mean-field dynamics might discourage the mushrooming of a tiny shift in bubble position into a large avalanche, whereas local dynamics might not. Another important issue is that differences in mean-field vs local dissipation could be relevant to true physical differences between bulk foams and Langmuir monolayers at an air/water interface. For three-dimensional foams, the shear is transmitted through the sample via bubble-bubble interactions, so the dissipation might be better captured by the local dissipation model. In contrast, for two-dimensional Langmuir monolayer foams the subphase imposes shear on the monolayers, and the dissipation might therefore be closer to that calculated with the mean-field model.
To investigate the influence of mean-field vs local dynamics, we can simply compare avalanche statistics. This is done in Fig. 9 for 144-bubble systems at four different area fractions, all sheared at $`\dot{\gamma }=10^{-3}`$. The top plot shows results for the energy-drop distribution, $`P(\mathrm{\Delta }E)`$, with solid/dashed curves for local/mean-field dissipation respectively. There is no significant difference seen between the two choices of dissipative dynamics. This is also true of the spatial extent of the rearrangements, as seen in the middle plot for the probability distribution $`P(N)`$ of rearranging bubbles. The bottom plot for the rate of energy-drop and T1 events also shows little significant difference between mean-field and local dynamics. The only distinction is a slightly greater rate of T1 events in the mean-field case. This reflects the difference in duration of T1 events within the two models; we find that T1 events tend to last longer within the local dissipation model. Since we do not count T1 events that last longer than a strain of 2, we count fewer events within the local model than the mean-field version. Thus, the differences in $`S(T1)`$ may simply be due to our method of counting T1 events. Taken together, the three plots in Fig. 9 encourage us to believe that the rearrangement dynamics predicted by the model are robust against details of the dissipation. They also provide further evidence for the existence of a true quasi-static limit, where the effect of strain rate is only to set the rate of rearrangements.
### D Gas Area Fraction
Finally, we turn to the issue of how the elastic character of a foam disappears with increasing liquid content, and the possibility of critical behavior at the melting transition. The principal signature of the melting, or rigidity-loss, transition is that the shear modulus $`G=\mathrm{lim}_{t\to \mathrm{\infty }}\sigma (t)/\gamma `$ vanishes and the foam can no longer support a nonzero shear stress without flowing. In two-dimensional systems, this happens at a critical gas fraction corresponding to that of randomly packed disks, $`\varphi _c\approx 0.84`$. This has been seen in several different simulations, where the gas fraction was tuned to within 0.05 of the transition and where it was tuned through, and even below, the transition. Other signatures of melting are that the osmotic pressure vanishes as a power law , that the coordination number decreases towards about 4 as a power law , and that the time scale for stress relaxation following an applied step-strain appears to diverge . Here we look for signs of melting in the statistics of avalanches during slow, quasi-static flow. Within our model, an increase in liquid content causes a decrease in the average overlap between neighboring bubbles. This in turn produces a decrease in the average elastic energy of the system, $`E_b`$, which sets the scale for the average energy drop $`\mathrm{\Delta }E`$ per rearrangement; $`\mathrm{\Delta }E`$ should therefore also decrease at lower gas fractions $`\varphi `$.
The energy drop and size statistics of rearrangement events for increasingly wet foams were shown already in Fig. 9, but were discussed only in the context of mean-field vs local dissipative dynamics. A clear trend emerges when we examine the $`\varphi `$ dependence specifically. In the top plot, Fig. 9a, for $`P(\mathrm{\Delta }E)`$, we see that the power-law behavior for small events does not change, but that the exponential cut-off moves towards larger values of $`\mathrm{\Delta }E/E_b`$ as $`\varphi \to \varphi _c`$. Though both $`\mathrm{\Delta }E`$ and $`E_b`$ decrease towards zero, the latter evidently vanishes more rapidly. This results in a broader distribution of event sizes near the melting transition; as the system becomes more liquid, large events are more prevalent. The probability distribution $`P(N)`$ for the numbers of bubbles involved in rearrangement events is shown in Fig. 9b. It displays similar trends as a function of $`\varphi `$, but not as pronounced as in $`P(\mathrm{\Delta }E)`$. Namely, the power law for small $`N`$ is unaffected by $`\varphi `$, but the exponential cut-off moves towards slightly larger events as $`\varphi \to \varphi _c`$. Thus, although the scale of energy drops increases dramatically, the number of broken bonds only increases marginally. Note, however, that the largest events include almost all the bubbles in the system; thus, the relatively weak dependence of $`P(N)`$ on $`\varphi `$ could be a finite-size effect in these $`N_{bub}=144`$ systems, as we will show below.
The behavior of $`S`$, the number of energy drops and T1 events per bubble per strain, is shown in Fig. 9c. As the system becomes wetter, there is no noticeable change in the event rate $`S(\mathrm{\Delta }E)`$ for energy drops. In contrast, if our definition of nearest neighbors only includes overlapping bubbles, we find that $`S(T1)`$ decreases as $`\varphi `$ decreases. This runs counter to expectations–bubbles in a wet foam should have more freedom to move and rearrange because the energy barrier between rearrangements is lower and the yield strain is smaller. The apparent drop arises because the bubble coordination number is much higher in a dry foam (roughly 6) than in a wet foam (roughly 4). As a result there are more overlapping neighbors for each bubble in a dry foam, and more possibilities for the occurrence of T1 events. In the wet foam, however, there are many T1 events that do not satisfy the stringent starting or ending configurations because neighboring bubbles do not overlap. It is therefore appropriate in wet foams to modify the criterion for neighbors to $`|𝐫_i-𝐫_j|<a(R_i+R_j)`$, where the proximity coefficient $`a`$ is taken as $`1/\varphi `$. When T1’s are computed with this definition, we find no significant dependence on area fraction.
The fact that the power-law region of the energy drop distribution is more extended at lower area fractions suggests the possibility of a critical point as the close-packing density, $`\varphi _c`$, is approached from above. This would imply a pure power-law distribution $`P(\mathrm{\Delta }E)`$ for the energy drops at $`\varphi _c`$, which would presumably be accompanied by a growing correlation length, as well as the growing relaxation time observed previously in Refs. . Note, however, that the distribution of the number of bubbles involved in a rearrangement, $`P(N)`$, does not depend very strongly on $`\varphi `$ for the 144-bubble systems of Fig. 9; furthermore, the cut-off to power-law behavior is always present, no matter how closely $`\varphi _c`$ is approached. This raises the question of whether finite system size effects are more important at values of $`\varphi `$ near $`\varphi _c`$ (recall from Fig. 7 that there were no significant system size effects near $`\varphi =1`$). To examine this, we have plotted the dependence of $`P(\mathrm{\Delta }E)`$, $`P(N)`$ and $`S`$ on system size in Fig. 10. We indeed find a strong system size dependence in $`P(\mathrm{\Delta }E)`$ at $`\varphi =0.85`$ just above the melting transition, with no saturation at the largest size studied (900 bubbles). This is consistent with the existence of a long correlation length.
The distribution of the number of bubbles per energy drop, $`P(N)`$, also shows signs of criticality. Recall from Fig. 7b that at $`\varphi =1`$, the tail of $`P(N)`$ was cut off at smaller and smaller values of $`N/N_{bub}`$ with increasing system size. This was consistent with a short correlation length, characteristic of localized rearrangement events. At $`\varphi =0.85`$, the behavior with increasing $`N_{bub}`$ is quite different, as shown in Fig. 10b. The distribution falls off slightly more rapidly with $`N/N_{bub}`$ at larger system sizes (probably because $`\varphi =0.85`$ still lies above $`\varphi _c`$), but the largest events in the system still involve the same fraction $`N/N_{bub}\approx 0.75`$ of bubbles, indicating a correlation length that is comparable to the largest system size studied (30 bubble diameters across).
The event rates for energy drops and T1 events for the different system sizes at $`\varphi =0.85`$ are shown in Fig. 10c. The behavior is not markedly different from that found for the drier foam. Recall, however, that we have adjusted our definition of a T1 event by changing the proximity coefficient $`a`$ with area fraction, so little can be expected to be learned from this measure.
## VI Discussion
We have reported the results of several different measures of rearrangement event dynamics in a sheared foam. A comparison of the probability distribution of energy drops $`P(\mathrm{\Delta }E)`$ with the probability distribution of bubbles changing neighbors $`P(N)`$ shows that the size of an energy drop correlates well with the number of bubbles involved in a rearrangement (see Fig. 4). This is valuable because the energy-drop distribution has been widely studied theoretically, but is very difficult to measure experimentally. The number of bubbles involved in rearrangements, however, can be probed with multiple light scattering techniques on three-dimensional foams and by direct visualization of two-dimensional foams. A study of the rate of occurrence of topological changes (T1 events) provides a further link to experiments.
In general, our results agree with experiments on three-dimensional and two-dimensional foams. Despite its simplicity, the bubble model appears to capture the main qualitative features of a sheared foam remarkably well. For example, we find that the size of rearrangement events is typically small at low shear rates and at area fractions not too close to $`\varphi _c`$. This is in accord with experiments of Gopal and Durian, and Dennin and Knobler, as well as simulation results of Bolton and Weaire and Jiang and coworkers. Our results do not agree with those of Okuzono and Kawasaki, however, who found power-law distributions of rearrangement events at $`\varphi =1`$ in two dimensions.
The largest discrepancies between our results and those of others lie in the statistics of T1 events. We find that the number of T1 events per bubble per unit strain is of order unity and is generally insensitive to shear rate and gas area fraction. Kawasaki et al. found similar results: $`S(T1)=0.5`$ and no dependence on shear rate. In the Potts-model simulations , however, $`S(T1)`$ is unity at $`\dot{\gamma }=10^{-3}`$ but falls to about 0.01 at $`\dot{\gamma }=10^{-1}`$.
The monolayer experiments yielded values of $`S(T1)\approx 0.15`$, nearly an order of magnitude lower than predicted by our simulations. Durian reported a number of rearrangement events per bubble per unit strain for simulations of a 900-bubble system at $`\dot{\gamma }=10^{-5}`$ that was comparable to the monolayer result, but he measured the number of energy drops per bubble per unit strain, $`S(\mathrm{\Delta }E)`$, not the T1 event rate, $`S(T1)`$. Note that our energy-drop event rate, $`S(\mathrm{\Delta }E)`$, agrees well with Durian’s earlier result.
One might guess that the discrepancy between our measurement of $`S(T1)`$ and that of the monolayer experiment lies in the method of analysis used to count T1 events. Unlike the simulations, in which the number of T1 events can be computed from an analysis of bubble positions as a function of time, the number of T1’s in the monolayer studies was determined by repeated viewing of videotapes of the experiments and counting of the events as the foam cells reach their midpoint configuration. It seemed possible, then, that the difference between the simulation and the experiment was the result of a systematic undercounting of the number of events. To check this possibility, the number of T1’s in a simulation run was determined by observation of the animated bubble motions. The number of events missed in this unautomated counting was only 2% of the total.
We believe that the origin of the discrepancy between the T1 event rates in the simulation and the monolayer experiment lies in the yield strain. While the yield strain in the model system is less than 0.2, which is consistent with that measured in three-dimensional foams, that in the monolayer foams is closer to unity. Bubbles in monolayer foams can therefore sustain very large deformations without inducing rearrangements. The T1 event rate should be inversely proportional to the yield strain. Thus, the ratio of $`S(T1)`$ in the simulation to $`S(T1)`$ in the experiment should equal the ratio of the yield strain in the experiment to the yield strain in the simulation. This is exactly what we find.
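In numbers (a rough consistency check using the values quoted above; the model yield strain is only bounded from above by 0.2, so the second ratio is a lower estimate):

$$\frac{S_{\mathrm{sim}}(T1)}{S_{\mathrm{exp}}(T1)}\approx \frac{1}{0.15}\approx 7,\qquad \frac{\gamma _y^{\mathrm{exp}}}{\gamma _y^{\mathrm{sim}}}\gtrsim \frac{1}{0.2}=5,$$

which is consistent within the uncertainties of both estimates.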
One of our main results is that a quasistatic limit exists within the bubble model. We find that the statistics of rearrangement events are independent of shear rate at low shear rates. This agrees with the monolayer experiments, which measured T1 event rates at two different shear rates, $`\dot{\gamma }=0.003\,\text{s}^{-1}`$ and $`0.11\,\text{s}^{-1}`$. Dennin and Knobler found no noticeable difference in the T1 event rate, despite the fact that the shear rates studied differ by more than a factor of thirty. In addition, Gopal and Durian found that the number of rearrangement events per bubble per second in a three-dimensional foam is given by the event rate in the absence of shear plus a term proportional to the shear rate. In their case, the event rate was nonzero in the absence of shear because of coarsening; we have neglected this effect in our simulations. However, we do find that the rearrangement event rate per unit time is simply proportional to the shear rate at low shear rates. Thus, experimental results in both two and three dimensions contradict the simulation results of Jiang et al., which find no quasistatic limit, but agree with our findings.
The form of dissipation used in the bubble model is a simple dynamic friction, which does not capture the hydrodynamics of fluid flow in the Plateau borders and films in a realistic way. However, our results suggest that we may still be capturing the correct behavior at low shear rates. We find that the rearrangement event statistics are the same whether we use mean-field or local dissipation at low shear rates. This suggests that the statistics are determined by elastic effects rather than viscous ones at low shear rates, and that the behavior in that limit should be independent of the form of viscous dissipation used.
Finally, our results as a function of gas area fraction imply that there may be a critical point at the melting transition, as the area fraction approaches the random close-packing fraction from above. Previous studies showed that both the shear modulus and yield stress vanish as power laws at the melting transition, and that the stress relaxation time appears to diverge. Here, we have shown by finite-size studies that there is also a correlation length, characterizing the size of rearrangements, which grows as one approaches the melting transition. We also find that the distribution of energy drops appears to approach a pure power law in that limit.
The existence of a critical point at the melting transition remains to be tested experimentally. The vanishing of the shear modulus and osmotic pressure at the transition has been measured by Mason and Weitz for monodisperse, disordered emulsions, and by Saint-Jalmes and Durian for polydisperse gas-liquid foams. However, these small-amplitude-strain rheological measurements could not test whether there is a diverging length scale for rearrangements in a steadily sheared system at the melting transition. On the other hand, Gopal and Durian have measured the size of rearrangement events in a gas-liquid foam, but only at packing fractions well above the melting transition. At lower packing fractions close to the melting transition, gravity drains the liquid from the foam too quickly to permit such measurements. Experiments under microgravity conditions should be able to resolve whether the melting transition is indeed a critical point.
###### Acknowledgements.
We thank Narayanan Menon and Ian K. Ono for many helpful discussions, and we thank Michael Dennin for performing the visual analysis of the number of T1 events. This work was supported by the National Science Foundation through grants CHE-9624090 (AJL), CHE-9708472 (CMK), and DMR-9623567 (DJD), as well as by NASA through grant NAG3-1419 (DJD).
## 1 Introduction
The description of hadronic matter in terms of confined quark and gluon constituents carrying a color quantum number has opened the prospect of a new, deconfined, phase of matter in which colored excitations can propagate over distances much larger than typical hadronic sizes. In the framework of pure Yang-Mills theory, the transition to this new phase is thought to occur as a function of temperature. While compelling evidence for the deconfining phase transition has been collected in lattice Monte Carlo simulations ,, it is necessary to concomitantly develop an intuitive picture of the deconfinement phenomenon in order to be able to treat scenarios as complex as heavy ion collisions; such collision experiments, planned at RHIC and LHC, are hoped to produce lumps of deconfined matter in the near future.
The question of the deconfinement transition cannot be separated from an underlying picture of the confinement mechanism itself. Conversely, any purported mechanism of confinement should also be able to incorporate deconfinement. The present paper concentrates on the center vortex picture of confinement in the case of SU(2) color. This mechanism, initially proposed in -, generates an area law for the Wilson loop by invoking the presence of vortices in typical configurations entering the Yang-Mills functional integral. These vortices are closed two-dimensional surfaces in four-dimensional space-time, or, equivalently, closed lines in the three dimensions making up, e.g., a time slice. They carry flux such that they contribute a factor corresponding to a nontrivial center element of the gauge group to any Wilson loop whenever they pierce its minimal area; in the case of SU(2) color to be treated below, that is a factor $`-1`$. If the vortices are distributed in space-time sufficiently randomly, then samples of the Wilson loop of value $`+1`$ (originating from loop areas pierced an even number of times by vortices) will strongly cancel against samples of the Wilson loop of value $`-1`$ (originating from loop areas pierced an odd number of times by vortices), generating an area law fall-off. The simplest (SU(2)) model visualization which demonstrates this is the following: Consider a universe of volume $`L^4`$, and a two-dimensional slice through it of area $`L^2`$, containing a Wilson loop spanning an area $`A`$. Generic vortices will pierce the slice at points; assume $`N`$ of these points to be randomly distributed on the slice. Then the probability of finding $`n`$ such points inside the Wilson loop area is binomial,
$$P_N(n)=\left(\begin{array}{c}N\\ n\end{array}\right)\left(\frac{A}{L^2}\right)^n\left(1-\frac{A}{L^2}\right)^{N-n}$$
(1)
and the expectation value of the Wilson loop becomes
$$W=\sum _{n=0}^{N}(-1)^nP_N(n)=\left(1-\frac{2\rho A}{N}\right)^N\stackrel{N\to \infty }{\longrightarrow }e^{-2\rho A}$$
(2)
where the planar density of the intersection points $`\rho =N/L^2`$ is kept constant as $`N\to \infty `$. One thus obtains an area law with string tension $`\kappa =2\rho `$. In a more realistic calculation, one would e.g. take into account interactions between the vortices ; the proportionality constant $`\kappa /\rho `$ turns out to be close to $`1.4`$ in zero temperature lattice measurements , (a survey of existing data follows further below).
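As an aside, this model is easy to check numerically. The following sketch (an illustration, not part of the original derivation; Python with NumPy, using the vortex density quoted further below) samples the binomial distribution (1) and compares the resulting loop average with the area law (2):

```python
import numpy as np

rng = np.random.default_rng(0)

def wilson_loop_random_points(rho, A, L2, n_config=100_000):
    """<W> in the random-intersection-point model: N = rho*L2 vortex
    piercings are dropped at random on a slice of area L2; each point
    falling inside the loop area A contributes a factor -1."""
    N = int(rho * L2)
    n_inside = rng.binomial(N, A / L2, size=n_config)  # piercings inside A
    return np.mean((-1.0) ** n_inside)

rho, A, L2 = 3.6, 0.25, 100.0   # density in fm^-2, areas in fm^2
print(wilson_loop_random_points(rho, A, L2))  # Monte Carlo estimate
print(np.exp(-2 * rho * A))                   # area-law prediction, ~0.165
```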
The emphasis of the present work, however, lies not on relatively short-range properties of the vortices such as their thickness, but on their long-range topology. This is where the argument presented above has more serious shortcomings. For one, it suggests that the expectation value of a Wilson loop might depend on the area with which one chooses to span the loop. However, due to the closed nature of the vortices, the choice of area is in fact immaterial, as it should be. In a more precise, area-independent, manner of speaking than adopted above, the value a Wilson loop takes in a given vortex configuration should be derived from the linking numbers of the vortices with the loop. Now, the above model visualization demonstrating an area law implicitly makes a strong assumption about the long-range topology of vortex configurations: For the intersection points of vortices with a given plane to be distributed sufficiently randomly on the plane to generate confinement, typical vortices or vortex networks (note that vortices are not forbidden to self-intersect) must extend over the entire universe. Consider the converse, namely that there is an upper bound to the space-time extension of single vortices or vortex networks. Then an intersection point of a vortex with a plane always comes paired with another such point a finite distance away, due to the closed character of the vortices. This pairing in particular would preclude an area law for the Wilson loop, as can be seen more clearly with the help of another simple model.
Consider a universe as above, but with the additional information that intersection points of vortices with a two-dimensional slice come in pairs at most a distance $`d`$ apart. Then the only pairs which can contribute a factor $`-1`$ to a planar Wilson loop are ones whose midpoints lie in a strip of width $`d`$ centered on the trajectory of the loop. Denote by $`p`$ the probability that a pair which satisfies this condition actually does contribute a factor $`-1`$. This probability is an appropriate average over the distances of the midpoints of the pairs from the Wilson loop, their angular orientations, the distribution of separations between the points making up the pairs, and the local geometry of the Wilson loop up to the scale $`d`$. The probability $`p`$, however, does not depend on the macroscopic extension of the Wilson loop. A pair which is placed at random on a slice of the universe of area $`L^2`$ has probability $`pA/L^2`$ of contributing a factor $`-1`$ to a Wilson loop, where $`A`$ is the area of the strip of width $`d`$ centered on the Wilson loop trajectory. To leading order, $`A=Pd`$, where $`P`$ is the perimeter of the Wilson loop; subleading corrections are induced by the local loop geometry. Now, placing $`N_{pair}`$ pairs on a slice of the universe of area $`L^2`$ at random, the probability that $`n`$ of them contribute a factor $`-1`$ to the Wilson loop is
$$P_{N_{pair}}(n)=\left(\begin{array}{c}N_{pair}\\ n\end{array}\right)\left(\frac{pPd}{L^2}\right)^n\left(1-\frac{pPd}{L^2}\right)^{N_{pair}-n}$$
(3)
and, consequently, the expectation value of the Wilson loop for large universes is
$$W=\sum _{n=0}^{N_{pair}}(-1)^nP_{N_{pair}}(n)\stackrel{N_{pair}\to \infty }{\longrightarrow }e^{-\rho pPd}$$
(4)
where $`\rho =2N_{pair}/L^2`$ is the planar density of points. One thus observes a perimeter law, negating confinement, if the space-time extension of vortices or vortex networks is bounded. They must thus extend over the entire universe, i.e. percolate, in order to realize confinement.
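The pair model admits the same kind of numerical check; a minimal variant of the sketch above (again illustrative only), now flipping the loop only through pairs whose midpoints fall in the perimeter strip:

```python
import numpy as np

rng = np.random.default_rng(1)

def wilson_loop_paired_points(rho, p, P, d, L2, n_config=100_000):
    """<W> in the pair model: N_pair = rho*L2/2 pairs of piercings; a pair
    can contribute a factor -1 only if its midpoint lies in the strip of
    area P*d around the loop perimeter, and then does so with probability p."""
    n_pair = int(rho * L2 / 2)
    q = p * P * d / L2                       # flip probability per pair
    n_flips = rng.binomial(n_pair, q, size=n_config)
    return np.mean((-1.0) ** n_flips)

rho, p, P, d, L2 = 3.6, 0.5, 8.0, 0.2, 400.0
print(wilson_loop_paired_points(rho, p, P, d, L2))  # Monte Carlo estimate
print(np.exp(-rho * p * P * d))                     # perimeter law, ~0.056
```

Note that only the perimeter $`P`$ enters here; enlarging the loop area at fixed perimeter leaves the result unchanged, in contrast to the first model. This is precisely the area-law versus perimeter-law distinction drawn above.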
Conversely, therefore, a possible mechanism driving the deconfinement transition in the vortex picture is that vortices, in a sense to be made more precise below, cease to be of arbitrary length, i.e. cease to percolate, in the deconfined phase . The main result of the present work is that this is indeed the case, implying that the deconfinement transition can be characterized as a vortex percolation transition.
Before entering into the details, it should be noted that a description of the deconfinement transition in terms of percolation phenomena has also been advocated in frameworks based on Yang-Mills degrees of freedom other than vortices. For one, electric flux is expected to percolate in the deconfined phase, while it does not percolate in the confined phase. Note that this is the reverse, or dual, of the magnetic vortex picture. General arguments related to electric flux percolation were recently advanced in ; also, specific electric flux tube models support this picture .
On the other hand, in the dual superconductor picture of confinement, it has been observed that the confined phase is characterized by the presence of a magnetic monopole loop percolating throughout the (lattice) universe, whereas the monopole configurations are considerably more fragmented in the deconfined phase and cease to percolate . To the authors’ knowledge, however, this is mainly an empirical observation and there is no clear physical argument connecting the deconfinement transition and monopole loop percolation. Indeed, there has been speculation that the two phenomena may be disconnected . This should be contrasted with the vortex language, which, as discussed at length above, has the advantage of providing a clear physical picture motivating an interrelation between vortex percolation and confinement.
## 2 Tools and survey of existing data
Before vortex clustering properties can be investigated in detail, some technical prerequisites have to be met; foremost, one must have a manageable definition of vortices, i.e. an algorithm which allows one to localize and isolate them in Yang-Mills field configurations. After the initial proposal of the center vortex confinement mechanism, a first hint of the existence of vortex configurations was provided by the Copenhagen vacuum, based on the observation that a constant chromomagnetic field in Yang-Mills theory is unstable with respect to the formation of flux tube domains in three-dimensional space. Later it was observed that the chromomagnetic flux associated with these domains is indeed quantized according to the center of the gauge group . However, the theory of these flux tubes quickly became too technically involved to allow e.g. the study of global properties of the flux tube networks, especially at finite temperatures. In parallel, efforts were undertaken to define and isolate vortices on a space-time lattice. One definition, proposed by Mack and coworkers and developed further by Tomboulis , introduces a distinction between thin and thick vortices, only the latter remaining relevant in the continuum limit. The defining property of these thick vortices is the nontrivial center element factor they contribute to a large Wilson loop when they pierce its minimal area. This definition has the advantage of being gauge invariant; on the other hand, it does not make it easy to localize vortices in the sense of associating a space-time trajectory with them.
A different line of reasoning has only recently been developed in a series of papers by Del Debbio et al ,-. One chooses a gauge which as much as possible concentrates the information contained in the field configurations on particular collective degrees of freedom, in the present case, the vortices. If this concentration of information is successful (more about this question further below), one obtains a good approximation of the dynamics by neglecting the residual deviations away from the chosen collective degrees of freedom, i.e. by projecting onto them. This type of approach was pioneered by G. ’t Hooft, who introduced the class of Abelian gauges and the subsequent Abelian projection in order to study Abelian monopole degrees of freedom . In complete analogy, one can introduce maximal center gauges ,-, in which one uses the gauge freedom to choose link variables on a space-time lattice as close as possible to center elements of the gauge group. Subsequently, one can perform center projection, i.e. replace the gauge-fixed link variables with the center elements nearest to them on the group.
Given such a lattice of center elements, i.e. in the case of SU(2) color, a lattice with links taking the values $`\pm 1`$, center vortices are defined as follows: Consider all plaquettes in the lattice. If the links bordering the plaquette multiply to $`-1`$, then a vortex pierces that plaquette. These are precisely the vortices needed for the center vortex mechanism of confinement. To see this, one merely needs to apply Stokes’ theorem: Consider a Wilson loop $`W`$, made up of links $`l=\pm 1`$, and an area $`A`$ it circumscribes, made up of plaquettes $`p=\pm 1`$ (the value of a plaquette is given by the product of the bordering links). Then
$$W=\prod _{l\in W}l=\prod _{p\in A}p$$
(5)
(the same letter was used here to denote both space-time objects and the associated group elements). In other words, the Wilson loop receives a factor $`-1`$ from every vortex piercing the area. Furthermore, the product of all plaquettes making up a three-dimensional elementary cube in the lattice is $`+1`$, since this product contains every link making up the cube twice. This fact, which in physical terms is a manifestation of the Bianchi identity, implies that every such cube has an even number of vortices piercing its surfaces; consequently, any projection of the lattice down to three dimensions contains only closed vortex lines. Since any cut through a two-dimensional vortex surface in four dimensions is thus a closed line, the original surface is also closed. Note that if one defines the dual lattice as a lattice with the same spacing $`a`$ as the original one, shifted with respect to the latter by the vector $`(a/2,a/2,a/2,a/2)`$, then vortices are made up of plaquettes on the dual lattice.
In the work presented here, the specific maximal center gauge called “direct maximal center gauge”, see e.g. , was used. This gauge is reached by maximizing the quantity
$$\sum _l\left|\text{tr}\,U_l\right|^2,$$
(6)
where $`l`$ labels all the links $`U_l`$ on the lattice. Center projection then means replacing
$$U_l\longrightarrow \text{sign}\,\text{tr}\,U_l.$$
(7)
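In code, the projection step (7) is a one-liner. A minimal sketch follows (illustrative only; the preceding gauge-fixing step, i.e. the numerical maximization of (6), is a separate iterative optimization and is omitted here), assuming SU(2) links stored as complex $`2\times 2`$ matrices:

```python
import numpy as np

def center_project(U):
    """Replace each gauge-fixed SU(2) link by the nearest Z(2) element,
    i.e. by the sign of its trace (tr U is real for SU(2) matrices).
    U: complex array of shape (..., 2, 2) -> Z(2) field of shape (...)."""
    tr = np.einsum('...ii', U).real
    return np.where(tr >= 0, 1, -1).astype(np.int8)
```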
In practice, the question whether the gauge fixing and projection procedure indeed successfully concentrates the relevant physical information on the collective degrees of freedom being projected on is difficult to settle a priori; most often, this is tested a posteriori by empirical means. Success furthermore depends on the specific physics, i.e. the observable, under consideration. One carries out two Monte Carlo experiments, using the full Yang-Mills action as a weight in both cases, and samples the observable in question, such as e.g. the Wilson loop, using either the full lattice configurations or the center projected ones. If the results agree, one refers to this state of affairs as “center dominance” for that particular observable. Center dominance for the Wilson loop is interpreted as evidence that the center gauge concentrates the physical information relevant for confinement on the vortex degrees of freedom, and that consequently center projection, i.e. projection onto the associated vortex configuration, constitutes a good approximation. Center dominance has been verified for the long-range part of the static quark potential at zero temperature (see - for the SU(2) theory and for the SU(3) theory).
This recent verification of center dominance has sparked renewed interest in the vortex picture of confinement. In establishing the relevance of vortex degrees of freedom for confinement, it provides the necessary basis for any further investigation of vortex properties. An observation analogous to center dominance has been made in the framework of the gauge-invariant vortex definition advanced by Tomboulis . There, one samples both the quantities $`W`$ and sign$`(W)`$, $`W`$ denoting the Wilson loop; sign$`(W)`$ is interpreted as containing only the center vortex contributions to $`W`$, whereas all other fluctuations of the gauge fields are neglected. One finds that the expectation value of sign$`(W)`$ alone already provides the full string tension, i.e. one finds a gauge-invariant type of center dominance (see for the SU(2) theory and for the SU(3) theory). Subsequently, it has been noted that this type of center dominance without gauge fixing can in fact be understood in quite simple terms , and that furthermore the density of center vortices arising on center-projected lattices without gauge fixing does not exhibit the renormalization group scaling corresponding to a finite physical density .
In parallel, other vortex properties were investigated. There is evidence in the SU(2) theory that the vortices defined by center gauging and center projection indeed localize thick vortices as defined by their center element contributions to linked Wilson loops ,. In both the gauge-fixed and unfixed frameworks, absence of vortices was shown to imply absence of confinement ,,. In zero temperature lattice calculations using the maximal center gauge, the planar density of intersection points of vortices with a given surface was shown to be a renormalization group invariant, physical quantity in the SU(2) theory, cf. (note erratum in ) and also . This planar density equals approximately $`3.6/\text{fm}^2`$ if one fixes the scale by positing a string tension of $`(440\text{MeV})^2`$. Also the radial distribution function of these intersection points on a plane is renormalization group invariant . Furthermore, if one takes into account the thickness of center vortices, they are able to account for the “Casimir scaling” behavior of higher representation Wilson loops, a feature which hitherto was considered incompatible with the vortex confinement mechanism ,. Also, the monopoles generated by the maximal Abelian gauge have been found to lie on the center vortices identified in a subsequent (indirect) maximal center gauge, forming monopole-antimonopole chains . Recently, a modified $`SU(2)`$ lattice ensemble was investigated in which all center vortices had been removed, with the result that chiral symmetry is restored and all configurations turn out to belong to the topologically trivial sector .
The purpose of the present analysis is to confront the center vortex picture of confinement with the finite temperature transition to a deconfined phase observed in Yang-Mills lattice experiments. Some previous work on vortex properties at finite temperatures has already been carried out, generalizing the zero-temperature results surveyed above. For one, the authors reported some preliminary work in . There, center dominance for the string tension between static quarks was verified at finite temperatures, and the transition to the deconfined phase with a vanishing string tension observed at the correct temperature in the center-projected theory. A depletion in the density of vortex intersection points with a plane extending in the (Euclidean) time and one space direction occurs as one crosses into the deconfined phase. The vortices are to a certain extent polarized in the time direction. However, the polarization is not complete; an area spanned by a Polyakov loop correlator is still pierced by a finite density of vortices. Thus, more detailed correlations between these vortex intersection points must induce the deconfinement transition; this led the authors to first conjecture in that the deconfinement transition in the center vortex picture may be connected to global properties of vortex networks such as their connectivity.
Very recently, a related investigation into the global topology of the two-dimensional vortex surfaces in four space-time dimensions was reported in , including the case of finite temperatures. This investigation focused on properties such as orientability and genus of the surfaces, in particular, changes in these characteristics as one crosses into the deconfined phase. In the present work, the global properties of vortex surfaces are considered from a slightly different vantage point, namely specifically with a view to testing the heuristic arguments given in the introduction, connecting confinement with percolation properties. For this purpose, it will be necessary to consider in more detail different slices of vortex surfaces; details follow below.
## 3 Spatial string tension
Before doing so, a certain gap in the existing literature on center vortices at finite temperatures should be addressed. As already mentioned above, the basis for the center vortex picture of confinement is the empirical observation of center dominance for the Wilson loop. Without first establishing center dominance for an observable under investigation, a more detailed discussion of the manner in which vortex dynamics influence the observable runs the risk of being largely academic. Center dominance for the finite-temperature long-range heavy quark potential, via the corresponding Polyakov loop correlator, was verified in , as mentioned above; however, in what follows, also the behavior of the so-called spatial string tension, extracted from large spatial Wilson loops, will be under scrutiny. To provide the necessary basis for this, center dominance for large spatial Wilson loops should first be checked. For this purpose, the authors have carried out lattice measurements of spatial Creutz ratios, using center-projected configurations to evaluate the Wilson loops, for three temperatures.
Before presenting the results, a comment on the physical scales is in order. Throughout this paper, the zero-temperature string tension is taken to be $`\kappa =(440\text{MeV})^2`$, the lattice spacing $`a(\beta )`$ at inverse coupling $`\beta =2.3`$ is determined by $`\kappa a^2=0.12`$, and one-loop scaling is used for the $`\beta `$-dependence of $`a`$. The deconfinement temperature is identified as $`T_C=300`$ MeV, cf. . It should be noted that these scales are fraught with considerable uncertainty, of the order of $`10\%`$, due to finite size effects. This was discussed in more detail in .
The values obtained for the center-projected spatial Creutz ratios are summarized in Fig. 1, where they are compared with the high-precision data for the full spatial string tension of Bali et al . Since the temperatures used here and in do not coincide, an interpolation of the data points given in had to be carried out to arrive at the values depicted in Fig. 1.
Measurements were taken on a $`12^3\times N_t`$ lattice, and for each temperature, two values of the inverse coupling $`\beta `$ were used. Note that there are two potential sources of scaling violations in Fig. 1. On the one hand, center projection may destroy the renormalization group scaling of the spatial string tension known to occur when using the full configurations . This type of scaling violation would be a consequence, and thus a genuine indicator, of vortex physics. On the other hand, the manner in which the data is presented in Fig. 1 also engenders additional scaling violations to the extent in which Creutz ratios, which represent difference quotients with increment $`a(\beta )`$, still deviate from the derivatives they converge to as $`a\to 0`$. The authors have elected to accept this slight disadvantage, since the presentation of the data in Fig. 1 is on the other hand well adapted to aid in the discussion below. Now, comparing the data obtained for different $`\beta `$ at one temperature in Fig. 1, scaling violations are evidently not significant as compared with the error bars. Namely, values of Creutz ratios for two different choices of $`\beta `$ are well described by a universal curve, better in fact than the error bars would suggest. However, in view of the size of the error bars, which is due to the moderate statistics available to the authors, the data do not give very stringent evidence of correct renormalization group scaling; they are perhaps best described as being compatible with such scaling.
Furthermore, the data seem to point towards a certain change in the dynamics generating the spatial string tension as the temperature is raised to values significantly above the deconfinement transition. At $`T=1.1T_C`$, the Creutz ratios are practically constant as a function of the Wilson loop size. This behavior of center-projected Wilson loops has been reported before in zero-temperature studies and has been dubbed “precocious scaling”. Center projection truncates the short-range Coulomb behavior of full Wilson loops and one can read off the asymptotic string tension already from $`2\times 2`$ Creutz ratios.
By contrast, this behavior does not seem quite as pronounced at temperatures significantly above the deconfinement transition. Creutz ratios rise as a function of loop size; it should however be mentioned that this rise is much weaker than the usual Coulomb fall-off one obtains when using the full Yang-Mills configurations to evaluate the Creutz ratios. Due to this variation with loop size, the asymptotic value of the full spatial string tension extracted from the data in is, in the case of $`T=1.4T_C`$, only reached by the Creutz ratio corresponding to the largest Wilson loops investigated; at $`T=1.7T_C`$, the asymptotic value is not quite reached even by the ratios derived from the most extended loops sampled, although it is within the error bars. While the error bars afflicting the Creutz ratios extracted from larger loops are sizeable, the rise as a function of loop size does seem to be significant, especially as compared to the precocious scaling displayed at $`T=1.1T_C`$. Also the difference between the values taken at $`T=1.4T_C`$ and $`T=1.7T_C`$ is compatible with the difference found for the full Wilson loops .
In view of their limited accuracy, the data depicted in Fig. 1 are perhaps best done justice by the statement that they do not allow to negate the hypothesis of center dominance for the spatial string tension in the deconfined phase. Certainly, no drastic deviation from center dominance is apparent. However, more accurate studies of this question are clearly called for.
## 4 Vortex percolation
### 4.1 Clustering of vortices
As already mentioned above, there exists even in the deconfined phase a substantial density of vortex intersection points on the area spanned by two Polyakov loops . Thus, deconfinement must be due more specifically to a correlation between these intersection points, such that the distribution of points ceases to be sufficiently random to generate an area law. As motivated in the introduction, a correlation conducive to deconfinement would occur if vortices only formed clusters smaller than some maximal size, i.e. if they ceased to percolate. This would make the points appear in pairs separated by less than the aforementioned maximal size, leading to a perimeter law for the Polyakov loop correlator. In order to test whether this type of mechanism is at work in connection with the Yang-Mills deconfinement transition, it is necessary to measure the extension of vortex clusters.
Vortices constitute closed two-dimensional surfaces in four space-time dimensions, or, equivalently, one-dimensional loops if one projects down to three dimensions by taking a fixed time slice or a fixed space slice of the (lattice) universe. Note that the term space slice here is meant to denote the three-dimensional space-time one obtains by holding just one of the three space coordinates fixed. Which particular coordinate is fixed is immaterial in view of spatial rotational invariance. In the following, specifically the extension of vortex line clusters in either time or space slices will be investigated. In this way, the relevant information is exhibited more clearly than by considering the full two-dimensional vortex surfaces in four-dimensional space-time.
Given a center-projected lattice configuration, the corresponding vortices can be constructed on the dual lattice in the fashion already indicated in the introduction. As a definite example, consider a fixed time slice. Then the vortices are described by lines made up of links on the dual lattice. Consider in particular a plaquette on the original lattice, lying e.g. in the $`z=z_0`$ plane and extending from $`x_0`$ to $`x_0+a`$ and from $`y_0`$ to $`y_0+a`$, where $`a`$ denotes the lattice spacing. By definition, if the links making up this plaquette multiply to the center element $`-1`$, then a vortex pierces that plaquette. This means that a certain link on the dual lattice is part of a vortex; namely, the link connecting the dual lattice points $`(x_0+a/2,y_0+a/2,z_0-a/2)`$ and $`(x_0+a/2,y_0+a/2,z_0+a/2)`$.
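This construction translates directly into a few lines of code. The sketch below (illustrative, in the same NumPy idiom as above; the constant $`a/2`$ offsets of the dual lattice are left implicit in the array indexing) locates all vortex links of a three-dimensional slice and verifies the closedness property implied by the Bianchi identity:

```python
import numpy as np

def vortex_links(z):
    """z: int array of shape (3, Lx, Ly, Lz), the Z(2) links z[mu, x, y, zc]
    of a 3-d slice.  Returns pierced[rho, ...]: True where the plaquette
    orthogonal to direction rho multiplies to -1, i.e. where the dual link
    in direction rho is part of a vortex line (periodic boundaries)."""
    pierced = np.zeros_like(z, dtype=bool)
    for rho in range(3):
        mu, nu = [d for d in range(3) if d != rho]
        plaq = (z[mu] * np.roll(z[nu], -1, axis=mu)
                      * np.roll(z[mu], -1, axis=nu) * z[nu])
        pierced[rho] = (plaq == -1)
    return pierced

def check_closed(pierced):
    """Bianchi identity: each elementary cube (= dual site) must be pierced
    an even number of times, so that the vortex lines close."""
    degree = sum(pierced[rho].astype(int) + np.roll(pierced[rho], 1, axis=rho)
                 for rho in range(3))
    assert np.all(degree % 2 == 0)
```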
Having constructed the vortex configuration on the dual lattice, one can proceed to define the vortex clusters. One begins by scanning the dual lattice for a link which is part of a vortex. Starting from that link, one tests which adjacent links, i.e. links which share a dual lattice site with the first link, are also part of the vortex. This is repeated with all new members of the cluster until all links making up the cluster are found. In this way, it is possible to separate the different vortex clusters.
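One possible implementation of this scan (an illustrative sketch) uses a union-find structure over the dual links, merging any two links that share a dual lattice site:

```python
import numpy as np

def vortex_clusters(pierced):
    """Group vortex links into connected clusters.  Two dual links belong to
    the same cluster when they share a dual lattice site (periodic lattice).
    Returns a list of clusters, each a list of link labels (rho, x, y, z)."""
    L = pierced.shape[1:]
    links = [tuple(int(c) for c in l) for l in zip(*np.nonzero(pierced))]
    parent = {l: l for l in links}

    def find(a):                        # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def endpoints(link):                # the two dual sites a link connects
        rho, *s = link
        t = list(s)
        t[rho] = (t[rho] + 1) % L[rho]
        return tuple(s), tuple(t)

    seen = {}                           # dual site -> representative link
    for l in links:
        for e in endpoints(l):
            if e in seen:
                parent[find(l)] = find(seen[e])
            else:
                seen[e] = l

    clusters = {}
    for l in links:
        clusters.setdefault(find(l), []).append(l)
    return list(clusters.values())
```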
### 4.2 Extension of vortex clusters
Given the vortex clusters, their extensions can be measured. Consider all pairs of links on a cluster and evaluate the space-time distance between each pair. The maximal such distance defines the extension of that cluster. In Figs. 2-5, histograms are displayed in which, for every cluster, the total number of links making up that cluster was added to the bin corresponding to the extension of the cluster.
The histograms were finally normalized such that the integral of the distributions gives unity. Constructed in this way, the histograms give a very transparent characterization of typical vortex configurations. The content of each bin represents the percentage of the total vortex length in the configurations, i.e. the available vortex material, which is organized into clusters of the corresponding extension. Accordingly, these distributions will be referred to as vortex material distributions in the following. In a percolating phase, the vortex material distribution is peaked at the largest extension possible on the lattice universe under consideration. Note that, due to the periodic boundary conditions, this maximal extension e.g. on a $`N_s\times N_s\times N_t`$ space slice of the four-dimensional space-time lattice is $`\sqrt{(N_s/2)^2+(N_s/2)^2+(N_t/2)^2}`$ lattice spacings. In a non-percolating phase, the vortex material distribution is peaked at a finite extension independent of the size of the universe. Figs. 2-4 pertain to space slices. Analogous results for time slices are summarized in Fig. 5.
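A sketch of the two measurements entering these histograms follows (illustrative only; the extension uses the shortest periodic separation along each axis, which reproduces the maximal value quoted above, and link positions are taken at their base dual site):

```python
import numpy as np

def cluster_extension(cluster, L):
    """Maximal pairwise distance (in lattice units) between the links of a
    cluster on a periodic lattice of extents L = (Lx, Ly, Lz)."""
    pts = np.array([l[1:] for l in cluster], dtype=float)
    ext = 0.0
    for i in range(len(pts) - 1):       # O(n^2) pair scan, fine for a sketch
        d = np.abs(pts[i + 1:] - pts[i])
        d = np.minimum(d, np.array(L, dtype=float) - d)   # periodic wrap
        ext = max(ext, float(np.sqrt((d ** 2).sum(axis=1)).max()))
    return ext

def material_distribution(clusters, L, bins=20):
    """Vortex material per extension bin: each cluster contributes its link
    count to the bin of its extension; normalized to unit integral."""
    ext = np.array([cluster_extension(c, L) for c in clusters])
    mat = np.array([len(c) for c in clusters], dtype=float)
    hist, edges = np.histogram(ext, bins=bins, weights=mat)
    hist /= hist.sum() * np.diff(edges)
    return edges, hist
```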
In space slices of the lattice universe, one observes a transition from a percolating to a non-percolating phase at the Yang-Mills deconfinement temperature. Namely, in space slices, the vortex material distribution is strongly peaked at the maximal possible extension as long as the temperature remains below $`T_C`$; when the temperature rises above $`T_C`$, however, the distribution becomes concentrated at short lengths. The behavior near the deconfinement temperature $`T_C`$ displayed in Figs. 2-4 deserves more detailed discussion. While the contents of the bin of maximal extension fall sharply between $`T=0.8T_C`$ and $`T=1.1T_C`$, a residual one quarter of vortex material remains concentrated in loops of maximal extension at the temperature identified as $`T=1.1T_C`$. This is too large a proportion to let pass by without further consideration. The authors have repeated the measurement at $`T=1.1T_C`$ on a larger, $`16^3\times 3`$ lattice, and did not find a depletion of the bin of maximal extension. On the other hand, one should be aware that there is a considerable uncertainty, of the order of $`10\%`$, in the overall physical scale in these lattice experiments, affecting in particular the identification of the deconfinement temperature $`T_C`$ itself. These uncertainties were already mentioned in section 3 and are discussed in detail in . At the present level of accuracy, $`T=1.1T_C`$ cannot be considered significantly separated from $`T_C`$; the authors cannot state with confidence that the measurement formally identified with a temperature $`T=1.1T_C`$ must unambiguously be associated with the deconfined phase. Note that also in standard string tension measurements via the Polyakov loop correlator, one does not attain a sharper signal of the deconfinement transition if one uses comparable lattices and statistics. Indeed, in , the authors still extracted a string tension of about $`10\%`$ of the zero-temperature value at the temperature formally identified as $`T=1.1T_C`$.
On balance, the authors would argue that the percolation transition in space slices does occur together with the deconfining transition, both in view of the strong heuristic arguments connecting the two phenomena in the vortex picture, and in view of the sharp change in the vortex material distributions between $`T=0.8T_C`$ and $`T=1.1T_C`$. The latter sharp change suggests that the vortex material distributions can in practice be used as an alternative order parameter for the deconfinement transition. When the vortices rearrange at the transition temperature to form a non-percolating phase, intersection points of vortices with planes containing Polyakov loop correlators occur in pairs less than a maximal distance apart. This leads to a perimeter law for the Polyakov loop correlator, implying deconfinement.
Consider now by contrast the vortex material distributions obtained in time slices. According to Fig. 5, these distributions are strongly peaked at the maximal possible extension at all temperatures, even above the deconfinement transition. Thus, vortex line clusters in time slices always percolate; there is no marked change in their properties as the temperature crosses $`T_C`$.
Note that this entails no consequences for the behavior of the Polyakov loop correlator, since Polyakov loops do not lie within time slices. However, the persistence of vortex percolation into the deconfined phase when time slices are considered represents one way of understanding the persistence of a spatial string tension above $`T_C`$. Given percolation, it seems plausible that intersection points of vortices with spatial Wilson loops continue to occur sufficiently randomly to generate an area law. There is another, complementary, way of understanding the spatial string tension which will be discussed in detail in the concluding section.
Note furthermore that Figs. 2-5 taken together imply that the vortices, regarded as two-dimensional surfaces in four-dimensional space-time, percolate both in the confined and the deconfined phases; this was also observed in . Only by considering a space slice does one filter out the percolation transition in the topology of the vortex configurations. It should be emphasized that the percolation of the two-dimensional vortex surfaces in four-dimensional space-time in the deconfined phase does not negate the heuristic picture of deconfinement put forward above. Given that vortex line clusters in space slices cease to percolate in the deconfined phase, intersection points of vortices with planes extending in one space and the time direction necessarily come in pairs less than a maximal distance apart, regardless of whether the different vortex line clusters do ultimately connect if one follows their world sheets into the additional spatial dimension. It is this pair correlation of the intersection points which induces the deconfinement transition.
### 4.3 Winding vortices in the deconfined phase
In order to gain a more detailed picture of the deconfined regime, it is useful to carry out the following analysis. Consider again a space slice of the lattice universe, in which vortex line clusters are short in the deconfined regime. Consider in particular lattices of time extension $`N_ta`$ with odd $`N_t`$, where $`a`$ is the lattice spacing; in the following numerical experiment, $`N_t=3`$. On such a lattice, measure vortex material distributions akin to the ones described in the previous section, with one slight modification; namely, define the bins of the histograms not by cluster extension, but simply by the number of dual lattice links contained in the clusters. It turns out that, in the deconfined phase, specifically at $`T=1.85T_C`$, roughly $`55\%`$ of the vortex material is concentrated in clusters made up of an odd number of links, cf. Fig. 6. On a lattice with $`N_t=3`$, these are necessarily vortex loops which wind around the lattice in (Euclidean) time direction by virtue of the periodic boundary conditions, where the loops containing an odd number of links larger than $`3`$ exhibit residual transverse fluctuations in the spatial directions, as also visualized in Fig. 7 further below.
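Given the cluster decomposition above, this fraction is a one-line measurement (an illustrative sketch; the parity criterion presumes a slice with odd $`N_t`$, following the argument just given):

```python
import numpy as np

def odd_material_fraction(clusters):
    """Fraction of the total vortex material residing in clusters with an
    odd number of links; on a slice with odd N_t these necessarily wind
    around the (Euclidean) time direction."""
    lengths = np.array([len(c) for c in clusters], dtype=float)
    return lengths[lengths % 2 == 1].sum() / lengths.sum()
```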
One thus obtains a quite specific characterization of the short vortices appearing in the deconfined regime. This phase can evidently be visualized largely in terms of short winding vortex loops with residual transverse fluctuations if one considers a space slice of the lattice universe, cf. Fig. 7. Note that this picture also explains the partial vortex polarization observed in density measurements .
## 5 Discussion and Outlook
On the basis of the measurements shown in the preceding sections, a detailed description of the confined and deconfined phases of Yang-Mills theory in terms of center vortices emerges. The typical vortex configurations present in the two phases are visualized in Fig. 7. This picture allows an intuitive understanding of the phenomenon of confinement as well as the characteristics of the transition to the deconfined phase. In the confined phase, vortex line clusters in space slices of the lattice universe percolate. This allows intersection points of vortices with planes containing Polyakov loop correlators to occur sufficiently randomly to generate an area law. By contrast, in the deconfined phase, typical vortex configurations in space slices of the lattice universe are characterized by short vortex loops, to a large part winding in the (Euclidean) time direction. This causes intersection points of vortices with planes containing Polyakov loop correlators to occur in pairs less than a maximal distance apart, leading to a perimeter law. Simple analytical model arguments clarifying the emergence of this qualitative difference were presented in the introduction. The deconfinement phase transition in the vortex picture can thus be understood as a transition from a percolating to a non-percolating phase.
It should be emphasized that the percolation properties of vortices focused on in the present work are more stringently related to confinement than the polarization properties reported in . There is a priori no direct logical connection between the observed partial vortex polarization by itself and deconfinement. On the one hand, even in the presence of a significant polarization, confinement would persist as long as the vortex loops retain an arbitrarily large length, namely by winding sufficiently often around the (Euclidean) time direction before closing. On the other hand, even in an ensemble with no polarization, deconfinement will occur if the vortices are organized into many small isolated clusters. Thus, vortex polarization should be viewed more as an accompanying effect than the direct cause of deconfinement. Of course, a correlation between the absence of percolation in space slices of the lattice universe and vortex polarization is not surprising. If fluctuations of vortex loops in the space directions are curtailed, e.g. due to a phase containing many short vortices winding in the time direction becoming favored (more about this below), then clearly the connectivity of vortex clusters in the space direction is reduced and they may cease to percolate. In this sense, polarization can indirectly facilitate deconfinement. However, the percolation concept is related much more directly and with much less ambiguity to the question of confinement. Ultimately, this is a consequence of a point already made in the introduction in connection with the heuristic models discussed there. Since the Wilson loop should be independent of the choice of area which one may regard it to span, it is conceptually sounder not to consider densities occurring on such areas, but the global topology of the vortices, such as their linking number with the Wilson loop. The likelihood of a particular linking number occurring is strongly influenced by the connectivity of the vortex networks. Correspondingly, there is a clear signal of the phase transition in the vortex material distributions displayed in Figs. 2-4; these quantities can be used as alternative order parameters for the transition. By contrast, the vortex densities seem to behave smoothly across the deconfinement phase transition .
Turning to the spatial string tension, there are two complementary ways to qualitatively account for its persistence in the deconfined phase of Yang-Mills theory. One was already mentioned in section 4.2. If one considers a time slice of the lattice universe, the associated vortex line configurations display no marked change of their clustering properties across the deconfinement transition. Even in the deconfined phase, vortex loops in time slices percolate. In view of this, it seems plausible that intersection points of vortices with spatial Wilson loops continue to occur sufficiently randomly to generate an area law. It should be noted, however, that this percolation is qualitatively different from the one observed in the confined phase in that it only occurs in the three space dimensions, whereas the configurations are relatively weakly varying in the Euclidean time direction. In other words, in the deconfined phase, one finds a dimensionally reduced percolation phenomenon only visible either in the full four space-time dimensions or in time slices thereof.
On the other hand, if one considers a space slice of the lattice universe, the deconfined phase is characterized to a large part by short vortex loops winding in the time direction, cf. Fig. 7. However, in this topological setup, such short vortices can pierce the area spanned by a large spatial Wilson loop an odd number of times, even far from its perimeter. This should be contrasted with the picture one obtains for the Polyakov loop correlator. There, shortness of vortices implies that their intersection points with the plane containing the Polyakov loop correlator occur in pairs less than a maximal distance apart. This leads to a perimeter law behavior of the Polyakov loop correlator, i.e. deconfinement. For spatial Wilson loops, this mechanism is inoperative due to the different topological setting. On the contrary, in view of Fig. 7, if one assumes the locations of the various winding vortices to be uncorrelated, one obtains precisely the heuristic model of the introduction, in which vortex intersection points are distributed at random on the plane containing the spatial Wilson loop, leading to an area law. Finite length vortex loops thus do not contradict the existence of a spatial string tension.
Of course, there is no reason to expect the locations of the winding vortices to be completely uncorrelated in the high-temperature Yang-Mills ensemble. In fact, comparing the values for the spatial string tension $`\kappa _s`$ from and the relevant density $`\rho _s`$ of vortex intersection points on planes extending in two spatial directions , the ratio $`\kappa _s/\rho _s`$ reaches values $`\kappa _s/\rho _s\approx 3`$ at $`T\approx 2T_C`$. This should be contrasted with the value $`\kappa =2\rho `$ obtained in the model of random intersection points discussed in the introduction. If one further takes into account that a sizeable part of $`\rho _s`$ is still furnished by non-winding vortex loops, cf. Fig. 6, then one should actually use the density $`\rho _s^{\prime }<\rho _s`$ corresponding to winding vortices only in the above consideration. This yields an even larger ratio $`\kappa _s/\rho _s^{\prime }`$. Therefore, the winding vortices in the deconfined phase seem to be subject to sizeable correlations.
Both of the above complementary mechanisms generating the spatial string tension in the deconfined phase are qualitatively distinct from the mechanism of confinement below $`T_C`$. In the space-slice picture, this is obvious; a new class of configurations, namely short vortex loops winding in the Euclidean time direction, induces the spatial string tension. However, as already indicated further above, also in the time-slice picture, the observed percolation is qualitatively different from the one in the confined phase in that it is dimensionally reduced. This qualitatively different origin of the spatial string tension may provide a natural explanation for the novel behavior detected in section 3 for spatial Creutz ratios at temperatures well inside the deconfined regime; namely, their rise as a function of the size of the Wilson loops from which they are extracted (as opposed to the precocious scaling observed at lower temperatures). However, the detailed connection between the abovementioned modified dynamics in the deconfined phase and the signal seen in the measurements of spatial Creutz ratios remains unclear.
While the relevant characteristics of the vortex configurations in the different regimes were described in detail in this work, the present understanding of the underlying dynamics in the vortex picture is still tenuous. There are, however, indications that the deconfining percolation transition can be understood in terms of simple entropy considerations. Increasing the temperature implies shortening the (Euclidean) time direction of the (lattice) universe. This means that the number of possible percolating vortex configurations decreases simply due to the reduction in space-time volume.¹ At the same time as the number of possible percolating vortex clusters is reduced, the number of available short vortex configurations is enhanced by the emergence of a new class of short vortices at finite temperature, namely the vortices winding in time direction. In view of this, it seems plausible that a transition to a non-percolating phase is facilitated as temperature is raised.

¹ For example, a vortex surface extending into two space directions has a greatly reduced freedom of transverse fluctuation into the time direction. Note that if one thinks of such a fluctuating, fuzzy thin vortex surface in terms of a thick envelope, this amounts to stating that the thick vortex extending into two space directions simply does not fit into the space-time manifold anymore. To a certain extent, the difference between these two (thin and thick vortex) pictures is semantic. To state that a thick vortex does not fit into the space-time manifold perpendicular to the time direction amounts to nothing but the statement that the number of possible configurations of this type has been reduced (to zero).
There are two pieces of evidence supporting this explanation, one of which was already given above. Namely, the deconfined phase indeed contains a large proportion of short winding vortices, cf. Fig. 6. More than half of the vortex material is transferred to the newly available class of short winding vortices in the deconfined phase. The second piece of evidence is related to the behavior of stiff random surfaces in four space-time dimensions; some of the authors plan to report on their Monte Carlo investigation of these objects in an upcoming publication. The model assumes that the vortices are random surfaces associated with a certain action cost per unit area and a penalty for curvature of the vortex surface. By construction, evaluating the partition function of this model simply corresponds to counting the available vortex configurations under certain constraints imposed by the action; namely, the action cost per surface area effectively imposes a certain mean density of vortices, while the curvature penalty imposes an ultraviolet cutoff on the fluctuations of the vortex surfaces. Beyond this, no further dynamical information enters. It turns out that already this simple model generates a percolation phase transition analogous to the one observed here for the center vortices of Yang-Mills theory. This suggests that the deconfining percolation transition of center-projected Yang-Mills theory can be understood in similarly simple terms, without any need for detailed assumptions about the form of the full center vortex effective action.
## 6 Acknowledgements
Discussions with F. Karsch and H. Satz are gratefully acknowledged. K.L. also acknowledges the friendly hospitality of the members of the KIAS, Korea, where a part of the numerical computations was carried out.
# On the section of a cone
## Abstract
A problem from Democritus is used to illustrate the building and use of infinitesimal covectors.
The Friday before Passover I was forced to make some bureaucratic consultations in our Ministry of Defence. So I landed on Tuesday in our desolate airport at Zaragoza and, two days later, I took the train to Madrid, hoping to find lodging in the house of Miss Ana Leal in the folksy quarter of La Latina.
Such happenings tend to be intellectually exciting, and this one was no exception. Miss Leal suggested extending the visit by some hours in order to be able to attend a lecture by Agustin Garcia Calvo in Lavapies. This Agustin is a well-known classical linguist, and a kind of anarchist philosopher, who likes to teach in the Greek style, and we had already enjoyed his tertulias (informal discussion gatherings) in the old institution of the Ateneo de Madrid. This one was supposed to be a more technical lecture, addressed to secondary school teachers.
Indeed, it was a very agreeable lecture. According to my notebook, he made a good point about the use of language for political control, opposing vocabulary to grammar, the former owned by the powerful to construct reality, the latter unconsciously managed by the people, driving a ”raison en marche”. Being a practising mathematician, one can easily feel this confrontation; the grammarian’s placeholders, names, adjectives, etc., are not very different from our variables and constants, and our whole fight is to place all the weight on proofs, on our grammar, avoiding drawing any conclusions from the loose vocabulary of definitions. I meditated on these parallelisms while listening to the linguist’s admonitions.
Then Agustin centred on the lecture’s main theme, the teaching of philosophy in secondary education, and somewhat mischievously suggested three examples to be proposed to students: the Lewis Carroll approach to Zeno’s paradoxes, the Zeno paradox itself (beautifully glossed as no se vive mientras se besa, no se besa mientras se vive, one does not live while kissing, one does not kiss while living) and, to my surprise, the dilemma of the cone from Democritus!
Heath, following Plutarch, states it by asking what happens if we cut a cone with a plane parallel and very close to the basis. Is the resulting circle equal to the one of the basis? Our twentieth-century presocratic philosopher prefers a more intricate setup; let me go back to my notes and recall it. Take a cone and cut it through a plane, which for simplicity we can still take parallel to the basis. Now look at the resulting figures: a smaller cone with basis B and a conical trunk with top B’. The question is, are the circles B and B’ equal or unequal?
In any case, if they are not equal, it results that some discontinuity happens in the complete, joined, cone, and the generating line should present jumps, small steps. But if, on the contrary, both surfaces are not unequal, their fusion should build a cylinder, no a cone.
To solve this paradox, we can negate the possibility of the described action. We can claim that the cone is a real figure, and then it is not proofed that it is a mathematical cone, and its generating line could really be irregular. But the mathematical problem still exists, and we can ask about the ideal cone<sup>1</sup><sup>1</sup>1We can claim that ”we can see it with the eye of the mind; and we know, by force of demonstration, that it cannot be otherwise”, as Democritus himself claimed for the tangent of the circle..
Again, in this setup, we can negate the starting point and claim that the cut is really the intersection of the plane with the cone. There are no two surfaces to be compared.
But then the paradox can be got again by using a Gedankenexperiment. Instead of Plutarch quote, let me get the same music, if not the notes, from the last point of Agustin: Imagine we cut a carrot, or a turnip with a cutter, so we can not deny we have two surfaces. In principle the cutter retires some slice of matter from the carrot, slice thickness being related to the one of the cutter. Imagine we make the cutter thinner and thinner, so no mass is moved out of the carrot when we cut it. Now imagine the same operation on the mathematical setup, we have two surfaces and the paradox again.
And from here on we are on our own<sup>2</sup><sup>2</sup>2I intend to hide some complex or distracting comments under the carpet of the footnotes. The reader may prefer to skip them on a first reading..
The result of the progressive thinning of the cutter has been a pair of planes brought progressively nearer. This is to be noted: the operative problem involves not a plane, but two planes approaching one another.
This figure, a pair of parallel planes, is known to mathematics, following Schouten and Golab, as a covector (in a three-dimensional space). Technically, it is specified by giving a unit segment (axial vector) perpendicular to the planes, then a modulus measuring the separation of the two planes, and a support point where the first of the planes lies<sup>3</sup><sup>3</sup>3The specification of the axial vector changes in a peculiar way when we make a change of coordinates of the system, fitting the usual definition of covectors. And the space of covectors ((n-1)-segment figures as specified) is dual to the space of vectors (one-dimensional oriented segments). An equivalence relationship can be added to get a space of free covectors, but here this step is not needed. We can say that two covectors can be added when the final plane of one coincides with the starting plane of the other, then fusing them to make a grosser cutter. To be honest, this restriction is stronger than the usual one for ”free” covectors, and in fact it reflects that we are interested in a slightly looser structure, which we could call q-covectors (the q making reference to a scale of the thickness of the cutter and, indirectly, to the deformed differential calculus of Majid).
Now, let us make the cutter slimmer and slimmer. Then the modulus of our covectors goes to zero, but we still have two planes.
This is in fact the resolution of the paradox. We distinguish between a cylinder and a cone because we have more information: continuity should be claimed between one surface of the pair and the next one of the following ”slice”. The difference between a cone and a cylinder resides in the internal structure of the pair. When the pair becomes, so to say, infinitesimal, both planes of the slice live over the same points of the space. All the infinitesimal slices over the same area can be added without taking care of the fusion condition above, and we then get additional structure<sup>4</sup><sup>4</sup>4 Mathematicians call this space of infinitesimal covectors over a point the ”cotangent space”, the whole set being the ”cotangent bundle”. A map selecting one covector over each point is called a ”differential form”..
Of course, it must be seen that there is something in the limit of approaching surfaces, i.e., we must give sense to this limit and prove that there really is something over a single cut. In our modern twentieth century we could jump directly to scaling transformations in the spirit of Wilson-Kogut. But perhaps it is better to start from Archimedean methods, which the reader can enjoy in the interesting book of T.L. Heath. For instance, we can see how the volume of a conoid can be extracted by using our bifacial knife. And, as we ignore the full detail of the Greek methods<sup>5</sup><sup>5</sup>5Heath quotes Wallis regretting that ”nearly all the ancients so hid from posterity their method of Analysis (though it is clear that they had one) that more modern mathematicians found it easier to invent a new Analysis than to seek out the old”. Indeed, the lack of texts is surprising, a whole branch of reason cleared white as the recycled folia where ”The Method” was found in 1909: palimpsesta sunt, scriptura antiqua (litteris minusculis s. X) aqua tantum diluta plerumque oculis intentis discipi potest (de foll 1…, 119-122 tamen desperandum mihi erat), that is, ”they are palimpsests; the ancient writing (tenth-century minuscule), merely washed out with water, can mostly be made out by attentive eyes (though over foll. 1…, 119-122 I had to despair)”: an immense cleaning which justifies Wallis’ paranoia. But again, even Newton kept his own method secret, until Leibniz's developments forced him to show it., it could perhaps be forgiven if we avoid refilling the discussion with Leibnizian meat, trying instead to keep the spicy flavour of our local cooking.
Imagine again the cone, divided into slices of finite size, let us say using our finite cutter. Each slice can be fitted between two cylindrical ones: a smaller one, which takes as base the small circle of the conic slice, and a greater one, taking as base the big circle.
By joining the cylinders we have two circular ”ziggurats”, a small one inscribed inside the cone, and a greater one circumscribed about it. We can then consider calculating a quantity, such as the volume, of these whole figures from the corresponding quantity of the pieces.
The difference between the circumscribed and the inscribed figure amounts only to the biggest slice of the circumscribed one. This is because each cylinder of the inscribed figure is equal to the previous one of the circumscribed figure. Here we see the importance of the correct pasting condition: it must be between the circle of one slice and the adjacent circle of the following one. Only in this manner does the subtraction stay under control, the whole difference being the volume of only one slice. Thus when the cutter thickness goes to zero, so does the difference between the figures, and their volumes converge to the volume of the cone.
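The telescoping argument is easy to check numerically. The following sketch (in Python, for an arbitrary unit cone; the function names are ours) builds both ziggurats and verifies that their difference is exactly the volume of the biggest slice of the circumscribed one, while both totals approach the cone volume $`\pi R^2H/3`$:

```python
import math

def ziggurat_volumes(R, H, n):
    """Volumes of the inscribed and circumscribed stacks of n cylinders
    ("ziggurats") for a cone of base radius R and height H."""
    dz = H / n
    radius = lambda z: R * (1.0 - z / H)    # cone radius at height z
    v_in = sum(math.pi * radius((k + 1) * dz)**2 * dz for k in range(n))
    v_out = sum(math.pi * radius(k * dz)**2 * dz for k in range(n))
    return v_in, v_out

R, H = 1.0, 1.0
exact = math.pi * R**2 * H / 3.0
for n in (2, 8, 32, 128):
    v_in, v_out = ziggurat_volumes(R, H, n)
    # the gap is exactly the biggest (bottom) slice of the circumscribed figure
    assert abs((v_out - v_in) - math.pi * R**2 * H / n) < 1e-12
    print(n, round(v_in, 6), round(v_out, 6), round(exact, 6))
```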
Let us examine this convergence in more detail. It involves two operations: increasing the number of slices and decreasing their width. Both operations are related, of course, because the product is the height of the cone. But here we do not see the structure of the limiting objects, so it is still possible to hold some doubts about the process. An alternative approach is the averaging method of Wilson<sup>6</sup><sup>6</sup>6The interested reader can see some examples in the article published by Wilson himself in 1979 in the Scientific American.: two consecutive cylinders can be substituted by a single cylinder averaging them, i.e., with a volume that is the sum of the volumes of both cylinders and a thickness equal to that of their union, thus double the original one.
Applying this procedure to the whole ”ziggurat” we get a new figure which is no longer inscribed (or circumscribed) in the cone, but has the same volume as the starting one.
Now, this method can be used to control the limit process in the following way: we choose an arbitrary scale of thickness, say for instance one half of the height of the cone, and to each ”ziggurat” in the converging series we apply the averaging until we get back to a figure composed of cylinders of the chosen thickness.
This new series of figures<sup>7</sup><sup>7</sup>7The new series could be called ”renormalized”, if we call the original one ”bare”. is composed of finite objects, each one having the same number of cylinders, the cylinder thickness being the same in every figure. Then the limit process of this series is not affected by the two infinities, in thickness and in number of cylinders, that were encrusted in the previous series. Even if we do not believe in the infinitesimal slices, we should have no problem admitting the regularized slices built at a given, but arbitrary, scale.
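A minimal sketch of the averaging step (our own naming): pairs of consecutive cylinders are fused into one of doubled thickness and summed volume, so the total volume is conserved; iterating brings every bare ziggurat back to a renormalized figure with a fixed number of slices.

```python
import math

def wilson_average(volumes):
    """Fuse each pair of consecutive cylinders into a single cylinder of
    doubled thickness whose volume is the sum of the pair's volumes;
    the total volume of the figure is preserved."""
    assert len(volumes) % 2 == 0
    return [volumes[i] + volumes[i + 1] for i in range(0, len(volumes), 2)]

# "bare" series: circumscribed ziggurats with 2**k slices, each averaged
# back down to a renormalized figure with exactly 2 slices
R = H = 1.0
for k in range(2, 8):
    n = 2**k
    dz = H / n
    vols = [math.pi * (R * (1 - i * dz / H))**2 * dz for i in range(n)]
    while len(vols) > 2:
        vols = wilson_average(vols)
    print(n, [round(v, 6) for v in vols])  # converges as the bare slicing refines
```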
Readers may note that a ”ziggurat” composed of cylinders of equal radius is invariant under the Wilson transformation; only the external, nominal scale of thickness changes. A deeper examination would show that the existence of this set of invariant figures is the key to the convergence of the whole process<sup>8</sup><sup>8</sup>8By iterating the transformation, we finish on some invariant figure, and the slice we are slimming is also similar to the invariant figure. If the transformation were given by a continuous group, we should see trajectories in the space of figures, approaching the trajectory of invariant cylinders, and the renormalized series would be a line cutting across trajectories and converging to a point on the invariant trajectory. In our case, with discrete transformations, we can still hit the invariant trajectory by choosing the length suitably (for instance, using as fundamental length the height of the cone, instead of one half as above).. In some sense, this invariant shape is the amplification, to a finite scale, of the infinitesimal cylinders of the first convergence process. We can choose the arbitrary reference scale as near to zero as we wish, so its zero limit<sup>9</sup><sup>9</sup>9Which we could be tempted to call the ”classical limit” instead of the usual ”continuous limit”. can be interpreted as the home of such ”differentials”. In fact the existence of the limit of our finite series depends on the existence of the line of invariant figures, and the existence of this line relates to the existence of a fixed point, from which the line starts. Such a fixed point can be linked to the zero limit suggested above. All the pieces of the puzzle fit together.
Note that just as we can cut the cone, we can also cut its axis, the only difference being that the former lives in three dimensions, the latter in one; thus the covectors over the latter are specified by pairs of points instead of planes. Following conventions, we can call dV the infinitesimal slice over the cone, and dz the slice over the axis. It is also usual to write $`dV=\frac{\partial V}{\partial z}dz=A(z)dz`$ but of course such writing must also be justified.
Another stroke could be drawn if we choose to imagine the cone as developing in time, i.e., the axis being some kind of temporal direction. Then each cut is a circle which grows in time, and the connection with Zeno's paradox becomes evident<sup>10</sup><sup>10</sup>10Sketching a parallelism with modern physics terminology, we could say that Democritus' paradox is the ”Wick-rotated” version of Zeno's. The axis would be the ”imaginary time”, and volume and area would perhaps correspond to position and velocity..
And further developments could be made, for instance connecting the scaling procedure to the ones currently in use in theoretical physics, or working out the q-covector composition rule, aiming at a cotangent groupoid similar to the tangent groupoid raised by Monsieur Alain Connes. The chain of reasoning is tight enough to rule out impossible relations and, as an old friend used to say, when the impossible is ruled out, the only remaining thing is the answer. On this footing, we could follow up by trying to build the arguments in four-dimensional spaces and within field theory, from which we have already taken part of our terminology.
For sure that readers can imagine a lot of additional quests.
So, what is our conclusion? Well, we have seen how the discussion of such an old problem becomes an argument for the teaching of modern mathematics and physics. Perhaps this is the whole point of this note, although surely it was not that of Garcia Calvo when telling this old tale to the philosophical audience. But again, mathematics works at its own pace, independently of our own intentions, just as sometimes science gets to be taught independently of the intention of educational programs. Well, this one was probably the very point of Agustin; ours is only to give voice to the math through ourselves. He says dejarse hablar (to let oneself be spoken).
# COLORING OF TREES WITH MINIMUM SUM OF COLORS
Tao Jiang and Douglas B. West
j-tao@math.uiuc.edu and west@math.uiuc.edu
University of Illinois, Urbana, IL 61801-2975
Running head: COLORING OF TREES
AMS codes: 05C35, 05C55
Keywords: chromatic sum, minimal coloring, strength
Written July 1998.
Abstract. The chromatic sum $`\mathrm{\Sigma }(G)`$ of a graph $`G`$ is the smallest sum of colors among all proper colorings with natural numbers. The strength $`s(G)`$ of $`G`$ is the minimum number of colors needed to achieve the chromatic sum. We construct for each positive integer $`k`$ a tree $`T_k`$ with strength $`k`$ that has maximum degree only $`2k-2`$. The result is best possible.
1. INTRODUCTION
A proper coloring of the vertices of a graph $`G`$ is a function $`f:V(G)\to N`$ such that adjacent vertices receive different labels (colors). The chromatic number $`\chi (G)`$ is the minimum number of colors in a proper coloring of $`G`$. The chromatic sum $`\mathrm{\Sigma }(G)`$ is a variation introduced by Ewa Kubicka in her dissertation. It is the minimum of $`\sum _{v\in V(G)}f(v)`$ over proper colorings $`f`$ of $`G`$. A minimal coloring of $`G`$ is a proper coloring of $`G`$ such that $`\sum _vf(v)=\mathrm{\Sigma }(G)`$.
One might think that a minimal coloring can be obtained by selecting a proper coloring with the minimum number of colors and then giving the largest color class color $`1`$, the next largest color $`2`$, and so on. However, even among trees, which have chromatic number $`2`$, more colors may be needed to obtain a minimal coloring. The strength $`s(G)`$ of a graph $`G`$ is the minimum number of colors needed to obtain a minimal coloring. Kubicka and Schwenk constructed for every positive integer $`k\ge 2`$ a tree $`T_k`$ with strength $`k`$. Thus $`s(G)`$ may be arbitrarily large even when $`\chi (G)=2`$ (trivially $`s(G)\ge \chi (G)`$).
How large can $`s(G)`$ be in terms of other parameters? When vertices are colored greedily in natural numbers with respect to a vertex ordering $`v_1,\mathrm{\dots },v_n`$, the number of colors used is at most $`1+\mathrm{max}_id^{}(v_i)`$, where $`d^{}(v_i)`$ counts the neighbors of $`v_i`$ in $`\{v_1,\mathrm{\dots },v_{i-1}\}`$. Always this yields $`\chi (G)\le 1+\mathrm{\Delta }(G)`$. The best upper bound on $`\chi (G)`$ that can be obtained in this way is the Szekeres-Wilf number $`w(G)=1+\mathrm{max}_{H\subseteq G}\delta (H)`$ (also confusingly called the “coloring number”). Interestingly, the average of these two well-known upper bounds for the chromatic number is an upper bound for the strength $`s(G)`$.
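For concreteness, here is a minimal sketch of the greedy procedure just described (the graph encoding and names are ours):

```python
def greedy_coloring(adj, order):
    """Color vertices in the given order, assigning each the least natural
    number absent from its already-colored neighbors; this uses at most
    1 + max_i d'(v_i) colors, where d'(v_i) counts earlier neighbors."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 1
        while c in used:
            c += 1
        color[v] = c
    return color

# a path a-b-c receives colors 1, 2, 1
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(greedy_coloring(adj, ['a', 'b', 'c']))
```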
THEOREM (Hajiabolhassan, Mehrabadi, and Tusserkani) Every graph $`G`$ has strength at most $`(w(G)+\mathrm{\Delta }(G))/2`$.
We show that this bound is sharp, even for trees. Every nontrivial tree $`T`$ has Szekeres-Wilf number $`2`$, and thus $`s(T)\le 1+\mathrm{\Delta }(T)/2`$. In the Kubicka-Schwenk construction , the tree with strength $`k`$ has maximum degree about $`k^2/2`$. To show that the bound above is sharp, we construct for each $`k\ge 1`$ a tree $`T_k`$ with strength $`k`$ and maximum degree $`2k-2`$. Given a proper coloring $`f`$ of a tree $`T`$, we use $`\mathrm{\Sigma }f`$ to denote $`\sum _{v\in V(T)}f(v)`$.
2. THE CONSTRUCTION
Linearly order the pairs of natural numbers so that $`(h,l)<(i,j)`$ if either $`h+l<i+j`$ or $`h+l=i+j`$ and $`l<j`$. With respect to this ordering, we inductively construct for each pair $`(i,j)\in N\times N`$ a rooted tree $`T_i^j`$ and a coloring $`f_i^j`$ of $`T_i^j`$. In other words, we construct trees in the order $`T_1^1`$, $`T_2^1`$, $`T_1^2`$, $`T_3^1,\mathrm{\dots }`$. Our desired tree with strength $`k`$ will be $`T_k^1`$. Let $`[n]=\{k\in Z:\mathrm{\hspace{0.17em}1}\le k\le n\}`$.
Construction. Let $`T_1^1`$ be a tree of order 1, and let $`f_1^1`$ assign color 1 to this single vertex. Consider $`(i,j)\ne (1,1)`$, and suppose that for each $`(h,l)<(i,j)`$ we have constructed $`T_h^l`$ and $`f_h^l`$. We construct $`T_i^j`$ and $`f_i^j`$ as follows. Let $`u`$ be the root of $`T_i^j`$. For each $`k`$ such that $`1\le k\le i+j-1`$ and $`k\ne i`$, we take two copies of $`T_k^m`$, where $`m=\lceil (i+j-k)/2\rceil `$, and we let the roots of these $`2(i+j-2)`$ trees be children of $`u`$. The resulting tree is $`T_i^j`$ (see Fig. 1). Define the coloring $`f_i^j`$ of $`T_i^j`$ by assigning $`i`$ to the root $`u`$ and using $`f_k^m`$ on each copy of $`T_k^m`$ rooted at a child of $`u`$.
Figure 1. The construction of $`T_i^j`$: the root $`u`$ has as children the roots of two copies of $`T_k^{\lceil (i+j-k)/2\rceil }`$ for each $`k`$ with $`1\le k\le i+j-1`$ and $`k\ne i`$.
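A short computational sketch of the construction may help; the following Python code (our own, with the ceiling $`m=\lceil (i+j-k)/2\rceil `$ as in the text) recursively computes the order, the maximum degree and the highest color used by $`f_i^j`$, confirming $`\mathrm{\Delta }(T_i^1)=2i-2`$ and strength $`i`$ for small $`i`$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tree_stats(i, j):
    """Return (order, color sum under f_i^j, max degree, root degree,
    highest color) for the rooted tree T_i^j."""
    if (i, j) == (1, 1):
        return (1, 1, 0, 0, 1)
    order, total, maxdeg, rootdeg, high = 1, i, 0, 0, i
    for k in [k for k in range(1, i + j) if k != i]:
        m = -(-(i + j - k) // 2)                 # ceil((i+j-k)/2)
        n_k, s_k, md_k, rd_k, h_k = tree_stats(k, m)
        order += 2 * n_k                          # two copies of T_k^m
        total += 2 * s_k
        rootdeg += 2
        maxdeg = max(maxdeg, md_k, rd_k + 1)      # child roots gain the edge to u
        high = max(high, h_k)
    return (order, total, max(maxdeg, rootdeg), rootdeg, high)

for i in range(1, 7):
    order, csum, delta, _, high = tree_stats(i, 1)
    assert delta == max(2 * i - 2, 0) and high == i
    print(f"T_{i}^1: order={order}, Sigma f={csum}, Delta={delta}, strength={high}")
```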
LEMMA For $`(i,j)\in N\times N`$, the construction of $`T_i^j`$ is well-defined, and $`f_i^j`$ is a proper coloring of $`T_i^j`$ with color $`i`$ at the root.
Proof: To show that $`T_i^j`$ is well-defined, it suffices to show that when $`(i,j)\ne (1,1)`$, every tree used in the construction of $`T_i^j`$ has been constructed previously. We use trees of the form $`T_k^m`$, where $`k\in [i+j-1]\setminus \{i\}`$ and $`m=\lceil (i+j-k)/2\rceil `$. It suffices to show that $`k+m\le i+j`$ and that $`m<j`$ when $`k+m=i+j`$.
For the first statement, we have $`k+m=\lceil (i+j+k)/2\rceil \le i+j`$, since $`k\le i+j-1`$. Equality requires $`k=i+j-1`$, which occurs only when $`j\ge 2`$ and yields $`m=1`$. Thus $`m<j`$ when $`k+m=i+j`$. Since the trees whose indices sum to $`i+j`$ are generated in the order $`T_{i+j-1}^1,\mathrm{\dots },T_1^{i+j-1}`$, the tree $`T_k^m`$ exists when we need it.
Finally, $`f_i^j`$ uses color $`i`$ at the root of $`T_i^j`$, by construction. Since the subtrees used as descendants of the root have the form $`T_k^m`$ with $`k\ne i`$, by induction the coloring $`f_i^j`$ is proper.
3. THE PROOF
The two-parameter construction enables us to prove a technically stronger statement. The additional properties of the construction facilitate the inductive proof. Recall that all colorings considered are labelings with positive integers.
THEOREM The construction of $`T_i^j`$ and $`f_i^j`$ has the following properties:
(1) If $`f^{}`$ is a coloring of $`T_i^j`$ different from $`f_i^j`$, then $`\mathrm{\Sigma }f^{}>\mathrm{\Sigma }f_i^j`$. Furthermore, if $`f^{}`$ assigns a color different from $`i`$ to the root of $`T_i^j`$, then $`\mathrm{\Sigma }f^{}-\mathrm{\Sigma }f_i^j\ge j`$;
(2) If $`j=1`$, then $`\mathrm{\Delta }(T_i^j)=2i-2`$, achieved by the root of $`T_i^j`$. If $`j\ge 2`$, then $`\mathrm{\Delta }(T_i^j)=2(i+j)-3`$;
(3) The highest color used in $`f_i^j`$ is $`i+j-1`$.
Proof: We use induction through the order in which the trees are constructed. As the basis step, $`T_1^1`$ is just a single vertex, and $`f_1^1`$ gives it color 1; conditions (1)-(3) are all satisfied.
Now consider $`(i,j)\ne (1,1)`$. For simplicity, we write $`T`$ for $`T_i^j`$ and $`f`$ for $`f_i^j`$. To verify (1), let $`f^{}`$ be a coloring of $`T`$ different from $`f`$. We consider two cases.
Case 1. $`f^{}`$ assigns $`i`$ to the root $`u`$ of $`T`$.
In this case, $`f^{}`$ and $`f`$ differ on $`T-u`$. Recall that $`T-u`$ is the union of $`2(i+j-2)`$ previously-constructed trees. The colorings $`f^{}`$ and $`f`$ differ on at least one of these trees. By the induction hypothesis, the total under $`f^{}`$ is at least the total under $`f`$ on each of these subtrees, and it is larger on at least one. Hence $`\mathrm{\Sigma }f^{}>\mathrm{\Sigma }f`$.
Case 2. $`f^{}`$ assigns a color different from $`i`$ to the root $`u`$.
In this case, we need to show that $`\mathrm{\Sigma }f^{}-\mathrm{\Sigma }f\ge j`$. Again the induction hypothesis gives $`f^{}`$ as large a total as $`f`$ on each component of $`T-u`$. If $`f^{}(u)\ge i+j`$, then the difference on $`u`$ is large enough to yield $`\mathrm{\Sigma }f^{}-\mathrm{\Sigma }f\ge j`$.
Hence we may assume that $`f^{}(u)=k`$, where $`1\le k\le i+j-1`$ and $`k\ne i`$. Since $`f^{}`$ is a proper coloring, it assigns a label other than $`k`$ to the roots $`v,v^{}`$ of the two copies of $`T_k^m`$ in $`T-u`$, where $`m=\lceil (i+j-k)/2\rceil `$. Since $`f`$ uses $`f_k^m`$ on each copy of $`T_k^m`$, we have $`f(v)=f(v^{})=k`$. Since $`f^{}(v)`$ and $`f^{}(v^{})`$ differ from $`k`$, the induction hypothesis implies that on each copy of $`T_k^m`$ the total of $`f^{}`$ exceeds the total of $`f`$ by at least $`m`$. Since the total is at least as large on all other components, we have
$$\mathrm{\Sigma }f^{}-\mathrm{\Sigma }f\ge k-i+2m=k-i+2\left\lceil \frac{i+j-k}{2}\right\rceil \ge j.$$
Next we verify (2). In the construction of $`T=T_i^j`$, we place $`2(i+j-2)`$ subtrees under the root $`u`$. These have the form $`T_k^m`$ for $`1\le k\le i-1`$ and $`i+1\le k\le i+j-1`$, and always $`m=\lceil (i+j-k)/2\rceil `$. Note that $`m=1`$ only when $`k=i+j-1`$ or $`k=i+j-2`$. The subtrees have maximum degree $`2k-2`$ (when $`m=1`$) or $`2(k+m)-3`$ (when $`m>1`$). Note that $`2(k+m)-3>2k-2`$ when $`m\ge 1`$. Thus
$$\mathrm{\Delta }(T_k^m)\le 2(k+m)-3=2\left(k+\left\lceil \frac{i+j-k}{2}\right\rceil \right)-3=2\left\lceil \frac{i+j+k}{2}\right\rceil -3.$$
Also, we always have $`k+m=\lceil (i+j+k)/2\rceil `$ for the subtree $`T_k^m`$.
When $`j=1`$ we only have $`k\le i-1`$, and thus $`\mathrm{\Delta }(T_k^m)\le 2\lceil (i+1+k)/2\rceil -3\le 2i-3`$. Hence each vertex in $`T-u`$ has degree at most $`(2i-3)+1=2i-2`$ in $`T`$. Since $`d_T(u)=2i-2`$, we have $`\mathrm{\Delta }(T)=2i-2`$, achieved by the root.
When $`j\ge 2`$, the values of $`k`$ for the subtrees are $`1\le k\le i-1`$ and $`i+1\le k\le i+j-1`$. By the induction hypothesis, the maximum degree of $`T_{i+j-1}^1`$ is $`2(i+j-1)-2=2(i+j)-4`$ and is achieved by its root. In $`T`$ this vertex has degree $`2(i+j)-3`$, which exceeds $`d_T(u)`$. For $`k\le i+j-2`$, we have $`\mathrm{\Delta }(T_k^m)\le 2\lceil (i+j+k)/2\rceil -3\le 2(i+j)-5`$. Hence $`\mathrm{\Delta }(T)=2(i+j)-3`$, achieved by the roots of the trees that are isomorphic to $`T_{i+j-1}^1`$.
It remains to verify (3): the maximum color used in $`f_i^j`$ is $`i+j-1`$. By the induction hypothesis and the construction, the maximum color used by $`f_k^m`$ on each $`T_k^m`$ within $`f_i^j`$ is $`k+m-1=\lceil (i+j+k)/2\rceil -1`$. Since the largest $`k`$ is $`i+j-1`$ when $`j\ge 2`$ and is $`i-1`$ when $`j=1`$, this computation yields $`i+j-1`$ when $`j\ge 2`$ and $`i-1`$ when $`j=1`$ as the maximum color on $`T-u`$. Since $`f`$ assigns $`i`$ to the root $`u`$, we obtain $`i+j-1`$ as the maximum color on $`T`$ for both $`j\ge 2`$ and $`j=1`$.
We have proved that $`f_i^j`$ is the unique minimal coloring of $`T_i^j`$ and that it uses $`i+j-1`$ colors. Hence $`s(T_i^j)=i+j-1`$. The maximum degree is $`2i-2`$ or $`2(i+j)-3`$, depending on whether $`j=1`$ or $`j\ge 2`$. In particular, $`T_i^1`$ is a tree with strength $`i`$ and maximum degree $`2i-2`$.
COROLLARY 1. There exists for each positive integer $`i`$ a tree $`T_i`$ with $`s(T_i)=i`$ and $`\mathrm{\Delta }(T_i)=2i-2`$.
COROLLARY 2. For every real number $`\alpha \in (0,1/2)`$, there is a sequence of trees $`T_1^{},T_2^{},\mathrm{\dots }`$ such that $`lim_{n\to \infty }s(T_n^{})/\mathrm{\Delta }(T_n^{})=\alpha `$.
Proof: Let $`t=\lceil (\frac{1}{\alpha }-2)i\rceil +2`$. Consider the construction of $`T_i^1`$. Form $`T_i^{}`$ by adding $`t`$ additional copies of the subtree $`T_{i-1}^1`$ under the root $`u`$ of $`T_i^1`$. The strength of $`T_i^{}`$ is $`i`$, but $`\mathrm{\Delta }(T_i^{})=2i-2+t`$. As $`i\to \infty `$, we have
$$\frac{s(T_i^{})}{\mathrm{\Delta }(T_i^{})}=\frac{i}{2i+t-2}=\frac{i}{2i+\lceil (\frac{1}{\alpha }-2)i\rceil }\to \alpha .$$
References
P. Erdős, E. Kubicka, and A.J. Schwenk, Graphs that require many colors to achieve their chromatic sum, Congr. Numer. 71(1990), 17–28.
H. Hajiabolhassan, M.L. Mehrabadi, and R. Tusserkani, Minimal coloring and strength of graphs. Proc. 28th Annual Iranian Math. Conf., Part 1 (Tabriz, 1997), Tabriz Univ. Ser. 377 (Tabriz Univ., Tabriz, 1997), 353–357.
E. Kubicka, Constraints on the chromatic sequence for trees and graphs, Congr. Numer. 76(1990), 219–230.
E. Kubicka and A.J. Schwenk, An introduction to chromatic sums, Proc. ACM Computer Science Conference, Louisville(Kentucky) 1989, 39–45.
C. Thomassen, P. Erdős, Y. Alavi, P.J. Malde, and A.J. Schwenk, Tight bounds on the chromatic sum of a connected graph, J. Graph Theory 13(1989), 353–357.
Z. Tuza, Contraction and minimal $`k`$-colorability, Graphs and Combin. 6(1990), 51–59.
# Comment on “Lyapunov Exponent of a Many Body System and Its Transport Coefficients”
In a recent Letter, Barnett, Tajima, Nishihara, Ueshima and Furukawa obtained a theoretical expression for the maximum Lyapunov exponent $`\lambda _1`$ of a dilute gas. They conclude that $`\lambda _1`$ is proportional to the cube root of the self-diffusion coefficient $`D`$, independent of the range of the interaction potential. They validate their conjecture with numerical data for a dense one-component plasma, a system with long-range forces. We claim that their result is highly non-generic. We show in the following that it does not apply to a gas of hard spheres, neither in the dilute nor in the dense phase.
Systems of hard spheres have properties similar to real fluids and solids and provide a reference for successful perturbation theories . Simulations with this model were able to uncover fundamental aspects of collective particle dynamics such as recollisions and the “cage” effect . Hard-sphere systems are also paradigms for the chaotic and ergodic properties of many-body systems with short range interactions, and were shown to have a positive Kolmogorov-Sinai entropy .
For dilute gases, Krylov provided an analytical estimate for the maximum Lyapunov exponent,
$$\lambda _1=-\left(32\pi K/3mN\right)^{1/2}\sigma ^2n\mathrm{log}\left(\pi \sigma ^3n/\sqrt{2}\right),$$
(1)
where $`K`$ is the kinetic energy, $`N`$ is the number of particles, $`m`$ the particle mass, $`n`$ the number density, and $`\sigma `$ is the hard sphere diameter. This expression has been verified numerically (apart from a factor $`2.8`$), and has been extended to larger densities.
The diffusion coefficient for dilute hard-sphere gases is well approximated by the Enskog expression
$$D_E=\left(3\pi K/32mN\right)^{1/2}\frac{1}{n\pi \sigma ^2}\left[1+\frac{5n\pi \sigma ^3}{12}\right]^{-1}.$$
(2)
A comparison of Eqs. (1) and (2) reveals that, in the dilute gas limit, the proposed relation $`\lambda _1\propto D^{1/3}`$ of Barnett et al. cannot be satisfied. Moreover, we combine in Fig. 1 recent simulation results for $`D`$ and $`\lambda _1`$, which were obtained for a system of 500 hard spheres over the full range of fluid densities $`(0.0001<n\sigma ^3<0.89)`$. Reduced units are used for which $`\sigma `$, $`m`$, and the kinetic energy per particle $`K/N`$ are all unity. One observes that these data are not consistent with the proposed $`D^{1/3}`$-dependence (solid line), neither for low densities nor for high.
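The disagreement in the dilute limit can be made explicit with a small numerical sketch evaluating Eqs. (1) and (2) in the reduced units of the text (the function names are ours); if $`\lambda _1\propto D^{1/3}`$ held, the printed ratio would be constant:

```python
import math

def lyapunov_krylov(n, sigma=1.0, K_per_N=1.0, m=1.0):
    """Krylov's dilute-gas estimate, Eq. (1), up to the factor ~2.8
    quoted in the text; reduced units sigma = m = K/N = 1."""
    return -math.sqrt(32 * math.pi * K_per_N / (3 * m)) * sigma**2 * n \
           * math.log(math.pi * sigma**3 * n / math.sqrt(2))

def diffusion_enskog(n, sigma=1.0, K_per_N=1.0, m=1.0):
    """Enskog self-diffusion coefficient, Eq. (2)."""
    return math.sqrt(3 * math.pi * K_per_N / (32 * m)) \
           / (n * math.pi * sigma**2) / (1 + 5 * n * math.pi * sigma**3 / 12)

for n in (1e-4, 1e-3, 1e-2, 1e-1):
    lam, D = lyapunov_krylov(n), diffusion_enskog(n)
    # a constant ratio would support lambda_1 ~ D**(1/3); it varies strongly
    print(f"n*sigma^3={n:g}  lambda1={lam:.4g}  D={D:.4g}  ratio={lam / D**(1/3):.4g}")
```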
We conclude that the conjecture by Barnett et al. does not apply to many-body systems with short-range interactions. But even its applicability for long-range interactions is doubtful. A one-dimensional gravitational system with finite $`N`$ exhibits a positive $`\lambda _1`$, whereas this clustering and confining system does not show diffusion. We also note that, while the theoretical expression (26) in Ref. has been obtained for a dilute gas, the data in Fig. 1 of Ref. are for a dense plasma with a Coulomb coupling constant $`\mathrm{\Gamma }`$ ranging from 1 to 150. As reported by the same authors , for $`\mathrm{\Gamma }>1`$ the plasma behaves as a liquid and not as a gas. The dilute gas limit is recovered only for $`\mathrm{\Gamma }\ll 1`$.
We thank M. Antoni, U. Balucani, A. Rapisarda, and S. Ruffo for useful discussions.
A. Torcini<sup>(1)</sup>, Ch. Dellago<sup>(2)</sup>, and H.A. Posch<sup>(3)</sup>
(1) Dipartimento di Energetica, via S. Marta 3, I-50139 Firenze, Italy
INFM, Unità di Firenze, Italy.
(2) Department of Chemistry, University of California, Berkeley, CA 94720, U.S.A.
(3) Institut für Experimentalphysik, Universität Wien, Boltzmanngasse 5, A-1090 Wien, Austria
PACS numbers: 05.45.+b, 05.60.+w
# Application of 𝑝-adic analysis to models of spontaneous breaking of the replica symmetry
## 1 Introduction
Numerous works, for example , , discuss the application of ultrametrics to the investigation of spin glasses. The most important example of an ultrametric space is the field of $`p`$-adic numbers; for an introduction to $`p`$-adic analysis see . In the present paper we apply the methods of $`p`$-adic analysis to investigate the spontaneous symmetry breaking in models of spin glasses. We obtain the following results:
1) A $`p`$-adic expression for the replica matrix $`Q_{ab}`$ is found. It has the form $`Q_{ab}=q_k`$ where $`k=\mathrm{log}_p|l(a)-l(b)|_p`$, with the notation explained below. It is shown that the replica matrix in the Parisi form in the models of spontaneous breaking of the replica symmetry in the simplest case has the form of the Vladimirov operator of $`p`$-adic fractional differentiation .
2) The model of hierarchical diffusion that was used in to describe relaxation of spin glasses takes, in our approach, the form of a model of $`p`$-adic diffusion. For instance, we reproduce the results of the paper using the methods of $`p`$-adic analysis.
The models of spontaneous breaking of the replica symmetry are used for investigations of spin glasses , , . The breaking of symmetry in such models is described by the replica $`n\times n`$ matrix $`𝐐=\left(Q_{ab}\right)`$ in the Parisi form . This matrix looks as follows. Let us consider the set of integers $`m_i`$, $`i=1,\mathrm{\dots },N`$, where $`m_i/m_{i-1}`$ are integers for $`i>1`$ and $`n/m_i`$ are integers. The matrix element of the replica matrix is defined as follows
$$Q_{aa}=0,\qquad Q_{ab}=q_i,\quad \left[\frac{a}{m_i}\right]\ne \left[\frac{b}{m_i}\right];\left[\frac{a}{m_{i+1}}\right]=\left[\frac{b}{m_{i+1}}\right].$$
(1)
Here $`[]`$ is the integer part function (we understand the integer part $`[x]`$ as follows: $`[x]-1<x\le [x]`$, where $`[x]`$ is an integer), and $`q_i`$ are some (real) parameters. An example of a matrix of this kind for $`m_i/m_{i-1}=2`$ and $`n=2^N`$ has the form
$$𝐐=\left(\begin{array}{ccccccccc}0& q_1& q_2& q_2& q_3& q_3& q_3& q_3& \mathrm{}\\ q_1& 0& q_2& q_2& q_3& q_3& q_3& q_3& \mathrm{}\\ q_2& q_2& 0& q_1& q_3& q_3& q_3& q_3& \mathrm{}\\ q_2& q_2& q_1& 0& q_3& q_3& q_3& q_3& \mathrm{}\\ q_3& q_3& q_3& q_3& 0& q_1& q_2& q_2& \mathrm{}\\ q_3& q_3& q_3& q_3& q_1& 0& q_2& q_2& \mathrm{}\\ q_3& q_3& q_3& q_3& q_2& q_2& 0& q_1& \mathrm{}\\ q_3& q_3& q_3& q_3& q_2& q_2& q_1& 0& \mathrm{}\\ \mathrm{}& & & & & & & & \end{array}\right)$$
(2)
In the present paper we discuss the replica matrix (2) (more precisely, the generalization of this example to the case of $`p^N\times p^N`$ matrices) using the language of $`p`$-adic analysis. This allows us to give a natural interpretation of (2) as an operator that can be diagonalized by the $`p`$-adic Fourier transform. In particular this gives the spectrum of the matrix (2). In the limit of infinite breaking of the replica symmetry $`N\to \infty `$ the dimension $`p^N`$ of the replica matrix tends to infinity, but the $`p`$-adic norm of the dimension $`|p^N|_p=p^{-N}`$ tends to zero. The conjecture by Volovich is that this phenomenon might explain the paradoxical fact that in the replica method the dimension of the replica matrix in the limit of infinite breaking of the replica symmetry tends to zero.
Let us make here a brief review of $`p`$-adic analysis. The field $`Q_p`$ of $`p`$-adic numbers is the completion of the field of rational numbers $`Q`$ with respect to the $`p`$-adic norm on $`Q`$. This norm is defined in the following way. An arbitrary rational number $`x`$ can be written in the form $`x=p^\gamma \frac{m}{n}`$ with $`m`$ and $`n`$ not divisible by $`p`$. The $`p`$-adic norm of the rational number $`x=p^\gamma \frac{m}{n}`$ is equal to $`|x|_p=p^{-\gamma }`$.
The most interesting property of the field of $`p`$-adic numbers is ultrametricity. This means that $`Q_p`$ obeys the strong triangle inequality
$$|x+y|_p\le \mathrm{max}(|x|_p,|y|_p).$$
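This norm and the ultrametric inequality are easy to experiment with; the following small sketch (our own, using exact rational arithmetic) computes $`|x|_p`$ for rationals and checks the strong triangle inequality:

```python
from fractions import Fraction

def p_adic_norm(x, p):
    """|x|_p for a rational x = p**gamma * m/n with m, n prime to p:
    the norm is p**(-gamma)."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    gamma, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    return Fraction(1, p**gamma) if gamma >= 0 else Fraction(p**(-gamma), 1)

# the strong triangle inequality |x+y|_p <= max(|x|_p, |y|_p)
p, x, y = 3, Fraction(9, 2), Fraction(5, 3)
assert p_adic_norm(x + y, p) <= max(p_adic_norm(x, p), p_adic_norm(y, p))
print(p_adic_norm(x, p), p_adic_norm(y, p), p_adic_norm(x + y, p))
```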
We will consider disks in $`Q_p`$ of the form $`\{x\in Q_p:|x-x_0|_p\le p^k\}`$. For example, the ring $`Z_p`$ of integer $`p`$-adic numbers is the disk $`\{x\in Q_p:|x|_p\le 1\}`$, which is the completion of the integers with the $`p`$-adic norm. The main properties of disks in an arbitrary ultrametric space are the following:
1. Every point of a disk is the center of this disk.
2. Two disks either do not intersect or one of these disks contains the other.
The $`p`$-adic Fourier transform $`F`$ of the function $`f(x)`$ is defined as follows
$$F[f](\xi )=\stackrel{~}{f}(\xi )=\int _{Q_p}\chi (\xi x)f(x)d\mu (x)$$
where $`d\mu (x)`$ is the Haar measure. The inverse Fourier transform has the form
$$F^{-1}[\stackrel{~}{g}](x)=\int _{Q_p}\chi (-\xi x)\stackrel{~}{g}(\xi )d\mu (\xi )$$
Here $`\chi (\xi x)=\mathrm{exp}(i\xi x)`$ is the character of the field of $`p`$-adic numbers. For example, the Fourier transform of the indicator function $`\mathrm{\Omega }(x)`$ of the disk of radius 1 centered at zero (this is the function that equals 1 on the disk and 0 outside it) is a function of the same type:
$$\stackrel{~}{\mathrm{\Omega }}(\xi )=\mathrm{\Omega }(\xi )$$
In the present paper we use the following Vladimirov operator $`D_x^\alpha `$ of the fractional $`p`$-adic differentiation, that is defined as
$$D_x^\alpha f(x)=F^{-1}|\xi |_p^\alpha F[f](x)=\frac{p^\alpha -1}{1-p^{-1-\alpha }}\int _{Q_p}\frac{f(x)-f(y)}{|x-y|_p^{1+\alpha }}d\mu (y)$$
(3)
Here $`F`$ is the ($`p`$-adic) Fourier transform, the second equality holds for $`\alpha >0`$.
For further reading on the subject of $`p`$-adic analysis see .
## 2 The replica matrix
Let us describe the model of the replica symmetry breaking using the language of $`p`$-adic analysis. We will show that the replica matrix $`𝐐=\left(Q_{ab}\right)`$ can be considered as an operator in the space of functions on the finite set consisting of $`p^N`$ points with the structure of the ring $`p^{-N}Z/Z`$. The ring $`p^{-N}Z/Z`$ can be described as the set with elements
$$x=\sum _{j=1}^{N}x_jp^{-j},\qquad 0\le x_j\le p-1$$
with the natural operations of addition and multiplication modulo 1. Let us consider the $`p`$-adic norm on this ring (the distance can take the values $`0,p,\mathrm{\dots },p^N`$). We consider the following construction. We introduce the one-to-one correspondence
$$l:\{1,\mathrm{\dots },p^N\}\to p^{-N}Z/Z$$
$$l^{-1}:\sum _{j=1}^{N}x_jp^{-j}\mapsto 1+p^{-1}\sum _{j=1}^{N}x_jp^{j},\qquad 0\le x_j\le p-1$$
The formula (1) takes the form
$$Q_{aa}=0,\qquad Q_{ab}=q_i,\quad \left[\frac{a}{p^{i-1}}\right]\ne \left[\frac{b}{p^{i-1}}\right],\left[\frac{a}{p^i}\right]=\left[\frac{b}{p^i}\right].$$
(4)
Let us prove the following theorem.
Theorem. The matrix element $`Q_{ab}`$ defined by (4) depends only on the $`p`$-adic distance between $`l(a)`$ and $`l(b)`$:
$$Q_{ab}=\rho (|l(a)-l(b)|_p),$$
where $`\rho (p^k)=q_k`$, $`\rho (0)=0`$.
Proof
The condition $`\left[\frac{a}{p^i}\right]=\left[\frac{b}{p^i}\right]`$ in our notation has the form
$$\left[\frac{1+p^{-1}\sum _{j=1}^Na_jp^j}{p^i}\right]=\left[\frac{1+p^{-1}\sum _{j=1}^Nb_jp^j}{p^i}\right]$$
This means that $`a_j=b_j`$ for $`j>i`$. The condition $`\left[\frac{a}{p^{i-1}}\right]\ne \left[\frac{b}{p^{i-1}}\right]`$ means that $`a_i\ne b_i`$. But these two conditions together mean that $`|l(a)-l(b)|_p=p^i`$. We get that the matrix element of the replica matrix $`Q_{ab}`$ depends only on the $`p`$-adic distance $`|l(a)-l(b)|_p`$: if $`|l(a)-l(b)|_p`$ equals $`p^k`$ then $`Q_{ab}=q_k`$, and the statement of the theorem follows.
The replica matrix $`\left(Q_{ab}\right)`$ acts on functions on $`p^{-N}Z/Z`$ as on vectors with components $`f_b`$ where $`y=l(b)`$, $`b=1,\mathrm{\dots },p^N`$. The action of the replica matrix in the space of functions on $`p^{-N}Z/Z`$ takes the form
$$𝐐f(x)=\int _{p^{-N}Z/Z}\rho (|x-y|_p)f(y)d\mu (y)$$
(5)
where the measure $`d\mu (y)`$ of one point equals 1 and $`f_b=f(l(b))`$ (because we can consider the index $`b`$ of the vector as the index of the first column of the matrix $`\left(Q_{ab}\right)`$).
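The theorem can be checked directly on a small example. The following sketch (our own names; exact arithmetic via fractions) implements the map $`l`$ and assembles $`Q_{ab}`$ from the $`p`$-adic distance, reproducing the block pattern of the matrix (2) for $`p=2`$, $`N=3`$:

```python
from fractions import Fraction

def l_map(a, p, N):
    """The bijection l: {1,...,p^N} -> p^(-N)Z/Z of the theorem:
    a = 1 + sum_j x_j p^(j-1) is sent to sum_j x_j p^(-j)."""
    x, val = a - 1, Fraction(0)
    for j in range(1, N + 1):
        val += Fraction(x % p, p**j)
        x //= p
    return val

def replica_matrix(p, N, q):
    """Q_ab = q_k where |l(a)-l(b)|_p = p^k, and Q_aa = 0."""
    n = p**N
    Q = [[0] * n for _ in range(n)]
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            if a == b:
                continue
            d = (l_map(a, p, N) - l_map(b, p, N)) % 1
            k, den = 0, d.denominator        # the denominator is p^k, so |d|_p = p^k
            while den > 1:
                den //= p
                k += 1
            Q[a - 1][b - 1] = q[k]
    return Q

# reproduces the 8x8 pattern of matrix (2) for p = 2, N = 3
for row in replica_matrix(2, 3, {1: "q1", 2: "q2", 3: "q3"}):
    print(row)
```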
It is easy to see that operators of the form (5) have the following properties:
1. The operators (5) commute with operators of shift. This means that the operators (5) can be diagonalized by the Fourier transform (in our case this is the discrete Fourier transform).
2. The function $`\rho `$ depends on the $`p`$-adic norm of its argument.
3. $`\rho (0)=0`$.
The language of $`p`$-adic analysis allows us to describe the natural generalization of the operator (5). This generalization has the form of the operator
$$𝐐f(x)=\int _{Q_p}\rho (|x-y|_p)f(y)d\mu (y)$$
(6)
where the function $`\rho `$ obeys the properties 1.-2. (an analogue of the property 3. will be considered later). Here and below in the present paper we adopt the convention of using the same notation (without special comment) for analogous quantities in the discrete and the continuous ($`p`$-adic) cases.
It is easy to see that the character $`\chi (kx)`$ is a generalized eigenvector of the operator (6) if $`\rho (|x|_p)\in L^1(Q_p)`$. Thus the operator (6) can be diagonalized by the $`p`$-adic Fourier transform $`F`$: $`𝐐f(x)=F^{-1}\gamma (\xi )F[f](x)`$. From the property 2. it follows that the function $`\gamma `$ depends only on the $`p`$-adic norm of the argument: $`\gamma =\gamma (|\xi |_p)`$. Therefore we get
$$𝐐f(x)=F^{-1}\gamma (|\xi |_p)F[f](x)$$
## 3 The model of hierarchical diffusion
Let us reproduce now (partially) the results of the paper using the methods of $`p`$-adic analysis. In the paper relaxation of spin glasses was described using the following model of hierarchical diffusion. Let us consider $`2^N`$ points (we will also consider the more general case of $`p^N`$ points, with $`p`$ prime), separated by barriers of energy. The barriers of energy have the following form. Let us enumerate the points by integers from $`0`$ to $`2^N-1`$ (analogously, from $`0`$ to $`p^N-1`$). Let us consider an increasing sequence of energy barriers (nonnegative numbers) $`0=\mathrm{\Delta }_0<\mathrm{\Delta }_1<\mathrm{\Delta }_2<\mathrm{\dots }<\mathrm{\Delta }_k<\mathrm{\dots }`$. We define the barriers of energy on the set of $`p^N`$ points according to the following rule: if $`a-b`$ is divisible by $`p^{k-1}`$ but not by $`p^k`$, then the barrier between the $`a`$-th and $`b`$-th points equals $`\mathrm{\Delta }_k`$.
The hierarchical diffusion will be described by an ensemble of particles that jump over the described set of $`p^N`$ points. Let us define the probability $`q_i`$ of transition (or jump) over the barrier of energy $`\mathrm{\Delta }_i`$ as $`q_i=\mathrm{exp}(-\mathrm{\Delta }_i)`$, $`i=1,2,\mathrm{\dots }`$. Then the matrix of transition probabilities will be equal (up to an additive constant) to the matrix $`𝐐`$ of the form (2).
We denote the density of particles at the $`a`$-th point by $`f_a(t)`$ and the vector whose components are the densities at all points by $`𝐟(t)`$. We define the dynamics of the model using the following differential equation :
$$\frac{d}{dt}𝐟(t)=(𝐐-\lambda _0𝐈)𝐟(t)$$
(7)
where the $`2^N\times 2^N`$ matrix $`𝐐`$ for $`p=2`$ has the form (2) of the replica matrix for the model of the replica symmetry breaking, $`𝐈`$ is the unit matrix, and $`\lambda _0`$ is the eigenvalue of the matrix $`𝐐`$ that corresponds to the eigenvector with equal components. This choice of the transition probability matrix is dictated by the law of conservation of the number of particles (that is an analogue of the property 3.).
Application of the technique developed in section 2 allows us to write the equation (7) in the form
$$\frac{d}{dt}f(x,t)=\int _{p^{-N}Z/Z}(f(y,t)-f(x,t))\rho (|x-y|_p)d\mu (y)$$
(8)
where $`f_a(t)=f(l(a),t)`$. For example, for the $`q_i=\mathrm{exp}(-\mathrm{\Delta }_i)`$, $`i=1,2,\mathrm{\dots }`$ considered above and for the linear dependence $`\mathrm{\Delta }_i=i(1+\alpha )\mathrm{ln}p`$ of the barrier energy on $`i`$ we get $`\rho (|x|_p)=|x|_p^{-1-\alpha }`$ and the equation (8) takes the form
$$\frac{d}{dt}f(x,t)=\int _{p^{-N}Z/Z}\frac{f(y,t)-f(x,t)}{|x-y|_p^{1+\alpha }}d\mu (y)$$
(9)
On the right hand side of the equation (9) we get the discretization of the Vladimirov operator $`D_x^\alpha `$ (3) of fractional $`p`$-adic differentiation, see .
In the paper the Cauchy problem for the equation (9) with initial condition $`f(x,0)=\delta _{x0}`$ was investigated. The dependence on time of the value $`P_0(t)`$, which in $`p`$-adic notation has the form
$$P_0(t)=f(0,t)=\int _{p^{-N}Z/Z}\delta _{y0}f(y,t)d\mu (y)$$
was found. In the present paper we will calculate the value $`P_0(t)`$ using the methods of $`p`$-adic analysis.
The $`p`$-adic generalization of the equation (8) has the following form
$$\frac{d}{dt}f(x,t)=\int _{Q_p}(f(y,t)-f(x,t))\rho (|x-y|_p)d\mu (y)$$
(10)
Let us describe how to obtain the spectrum of the operator $`𝐃`$ on the right hand side (or simply RHS) of (10) (or the spectrum of relaxation times for the model of hierarchical diffusion , describing spin glasses). We will use the $`p`$-adic Fourier transform. It is easy to see that the character $`\chi (kx)`$ is a generalized eigenfunction of the operator on the RHS of (10) if $`\rho (|x|_p)\in L^1(Q_p\backslash U_ϵ)`$, where $`U_ϵ`$ is an arbitrary neighborhood of 0, or equivalently if for $`k\in Z`$ the series $`\sum _{i=k}^{\infty }|\rho (p^i)|p^i`$ converges. For instance, 1 is an eigenfunction with eigenvalue 0. The proof is as follows:
$$𝐃\chi (kx)=\int _{Q_p}(\chi (ky)-\chi (kx))\rho (|x-y|_p)d\mu (y)=$$
$$=\chi (kx)\int _{Q_p}(\chi (k(y-x))-1)\rho (|x-y|_p)d\mu (y)=\chi (kx)\int _{Q_p}(\chi (ky)-1)\rho (|y|_p)d\mu (y)$$
To finish the proof we note that $`\chi (ky)`$ is a locally constant function that equals 1 in some neighborhood of 0. Using that the integral $`\int _{|y|_p\le p^i}\chi (ky)d\mu (y)`$ equals $`p^i`$ if $`|k|_p\le p^{-i}`$ and equals zero if $`|k|_p>p^{-i}`$, we get
$$𝐃\chi (kx)=-\left(\left(1-p^{-1}\right)\underset{p^i\ge p|k|_p^{-1}}{\sum }p^i\rho (p^i)+\frac{1}{|k|_p}\rho \left(\frac{p}{|k|_p}\right)\right)\chi (kx)$$
(11)
This relation shows the correspondence between the spectrum of relaxation times and the elements of the replica matrix in the form (2) (here $`q_i=\rho (p^i)`$). The relation (11) reproduces the result obtained in , where a more complicated technique was used.
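On the finite ring the diagonalization by characters can also be verified numerically. The sketch below (our own) builds the generator of (8) for $`\rho (|x|_p)=|x|_p^{-1-\alpha }`$, applies it to the characters $`\mathrm{exp}(2\pi imx)`$, and checks that they are eigenvectors; the printed eigenvalues form the degenerate ladder corresponding to the relaxation times:

```python
import cmath
from fractions import Fraction

def l_map(a, p, N):
    x, val = a - 1, Fraction(0)
    for j in range(1, N + 1):
        val += Fraction(x % p, p**j)
        x //= p
    return val

def generator(p, N, alpha):
    """Matrix of the operator on the RHS of (8) for rho(|x|_p)=|x|_p^(-1-alpha)."""
    n = p**N
    pts = [l_map(a, p, N) for a in range(1, n + 1)]
    M = [[0.0] * n for _ in range(n)]
    for a in range(n):
        for b in range(n):
            if a != b:
                d = (pts[a] - pts[b]) % 1
                w = float(d.denominator)**(-1.0 - alpha)   # |x-y|_p^(-1-alpha)
                M[a][b] = w
                M[a][a] -= w
    return M, pts

p, N, alpha = 2, 3, 0.5
M, pts = generator(p, N, alpha)
for m in range(p**N):                           # characters exp(2*pi*i*m*x)
    v = [cmath.exp(2j * cmath.pi * m * float(x)) for x in pts]
    Mv = [sum(M[a][b] * v[b] for b in range(len(v))) for a in range(len(v))]
    lam = (Mv[0] / v[0]).real
    assert all(abs(Mv[a] - lam * v[a]) < 1e-9 for a in range(len(v)))
    print(m, round(lam, 6))                     # m = 0 gives the eigenvalue 0
```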
Let us describe how to obtain the operator on the right hand side of the equation (8) from the analogous operator (on the right hand side of the equation (10)) on $`Q_p`$. Consider the finite dimensional subspace $`V_N\subset L^2(Q_p)`$ of the following form. The subspace $`V_N`$ consists of functions with zero average, with support in $`p^{-N}Z_p`$, that are constant on disks of radius 1. Therefore the dimension of the subspace $`V_N`$ equals $`p^N-1`$. The operator in (10) maps this space into itself. On the subspace $`V_N`$ the operator on the RHS of the equation (10) takes the form
$$𝐃f(x)=\int _{p^{-N}Z/Z}(f(y)-f(x))\rho (|x-y|_p)d\mu (y)$$
which looks exactly like the operator on the RHS of the equation (8). But we will not obtain in this way the equation (8), because the operator on the RHS of (8) acts in a space whose dimension is larger by 1 than that of $`V_N`$. This space can be obtained from $`V_N`$ by adding the function that equals 1 on the ball $`p^{-N}Z_p`$ (and 0 outside).
Thus the model (up to the comments made above) corresponds to the action of the operator of $`p`$-adic fractional differentiation on this subspace.
We investigate the following $`p`$-adic generalization of the model . Let us consider the Cauchy problem for the $`p`$-adic generalization of the equation (9)
$$\frac{d}{dt}f(x,t)+AD_x^\alpha f(x,t)=0,$$
(12)
that has the form of the equation of $`p`$-adic diffusion investigated in the book . We take the initial condition for the equation (12) in the form
$$f(x,0)=\delta (x),$$
(13)
This means that we investigate the fundamental solution of the equation (12). After the Fourier transform the equation (12) takes the form
$$\frac{d}{dt}\stackrel{~}{f}(\xi ,t)+A|\xi |_p^\alpha \stackrel{~}{f}(\xi ,t)=0,$$
The solution of this equation is $`\stackrel{~}{f}(\xi ,0)e^{-A|\xi |_p^\alpha t}`$. Because the Fourier transform of the $`\delta `$-function with support in zero equals 1, we finally get
$$\stackrel{~}{f}(\xi ,t)=e^{-A|\xi |_p^\alpha t}$$
$$f(x,t)=\int _{Q_p}\chi (\xi x)e^{-A|\xi |_p^\alpha t}d\mu (\xi )$$
(14)
As the $`p`$-adic generalization of $`P_0(t)`$ we consider the value
$$P_0(t)=\int _{|x|_p\le 1}f(x,t)d\mu (x)=\int _{Q_p}\mathrm{\Omega }(x)f(x,t)d\mu (x)$$
(we use the same notation), which for the solution (14) takes the form
$$\int _{Q_p}\mathrm{\Omega }(\xi )e^{-A|\xi |_p^\alpha t}d\mu (\xi )=\int _{|\xi |_p\le 1}e^{-A|\xi |_p^\alpha t}d\mu (\xi )=\left(1-p^{-1}\right)\sum _{k=0}^{\infty }p^{-k}e^{-Ap^{-\alpha k}t}$$
(15)
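The series (15) is easy to evaluate numerically; a minimal sketch (our own, with arbitrary sample parameters $`p=2`$, $`\alpha =0.5`$, $`A=1`$):

```python
import math

def P0(t, p=2, alpha=0.5, A=1.0, kmax=400):
    """Truncation of the series (15) for the survival probability P_0(t)."""
    return (1 - 1 / p) * sum(
        p**(-k) * math.exp(-A * p**(-alpha * k) * t) for k in range(kmax))

for t in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(t, P0(t))   # the dominant shell k grows like log_p(t)/alpha at large t
```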
The answer we have obtained coincides with the answer for (8) obtained in . The value found in (they use $`p=2`$) has the form
$$P_0(t)=\underset{n\to \infty }{lim}\left(2^{-n}+\frac{1}{2}\mathrm{exp}\left(-\frac{R^{n+1}t}{1-R}\right)\sum _{m=0}^{n-1}\mathrm{exp}\left(-m\mathrm{ln}2-\frac{2R}{1-R}R^{m+1}t\right)\right)$$
(16)
where $`0<R<1`$ is some constant. It is easy to see that (15) and (16) coincide for $`R=p^{-\alpha }`$ and $`A=\frac{2R}{1-R}`$. We see that $`p`$-adic analysis allows us to investigate models of hierarchical diffusion using a simple and natural formalism.
Acknowledgements
The authors are grateful to I.V.Volovich for discussion. This work was partially supported by INTAS 96-0698, RFFI 98-03-3353a, RFFI 96-15-97352 and RFFI 96-15-96131 grants.
# REFERENCES
## I Figure Captions
1. Fig 1(a) shows the spatial intermittency in the lattice associated with the tangent-period-doubling bifurcation; $`i`$ refers to the position of the site on the lattice. Fig 1(b) shows two iterates of the lattice. As can be seen, both regular and irregular sites behave periodically in time. The fixed point is $`\xi =\frac{\gamma ϵ\mu -2}{\mu -\gamma ϵ}`$.
2. Fig 2 The scaling behaviour of the laminar sites is shown. $`P(l)`$ is the probability of obtaining a laminar region of length $`l`$. The length of the laminar region is defined as the number of adjacent sites which remain within a particular accuracy. Three different exponents are found, corresponding to three kinds of intermittency. The fixed point is $`\xi =\frac{\gamma ϵ\mu -2}{\mu -\gamma ϵ}`$. The behaviour observed was obtained for a lattice of size 50,000 iterated for 20000 iterates starting with $`100`$ random initial conditions. The accuracy to which the laminarity of the region was checked was $`10^{-5}`$. The asterisks denote the behaviour with exponent $`\zeta _1`$, crosses $`\zeta _2`$ and pluses denote $`\zeta _3`$. The ranges of the exponents are given in the caption of Table I. $`\zeta _3`$ shows departures from this power-law over the third decade. The axes are marked in the natural log-scale.
3. Fig 3 The spatial second return map $`x(i+2)`$ vs $`x(i)`$, where $`i`$ is the site index, is plotted for $`\xi =\frac{\gamma ϵ\mu -2}{\mu -\gamma ϵ}`$.
## II Table Captions
1. Table 1.1 We list the bifurcations from the synchronised fixed point $`x=0`$. The type of bifurcation involved, the region in parameter space where the bifurcation conditions are satisfied, the manner in which the eigenvalue crosses the unit circle and the power law exponent $`P(l)\sim l^{-\zeta _i}`$ are listed. The range of the exponent $`\zeta _1`$ is $`[1.9,2.2]`$, $`\zeta _2`$ lies in the range $`[1.3,1.35]`$ and $`\zeta _3`$ in the range $`[0.61,0.72]`$. We also identify the nature of the intermittency, whether spatial or spatio-temporal (ST). In Table 1.2 the same quantities are given for the other fixed point.
2. Table 2 This table lists the values of the spatial laminar exponent observed in experiments (values as quoted in Ref. ) and compares them with the laminar exponents of our CML model. It is clear that the exponents $`\zeta _1`$ and $`\zeta _3`$ are directly observed in fluid experiments on convection in an annulus and in the roll coating system. As explained in the text, the exponent $`\zeta _2`$ serves as a lower bound on the exponent observed in convection in a channel and in the Taylor-Dean system. The experimentally observed exponent actually coincides with the CML exponent $`\zeta _G`$ which occurs when the structure associated with the exponent $`\zeta _2`$ undergoes a further bifurcation.
# Monopole-antimonopole bound states as a source of ultra-high-energy cosmic rays.
## I Introduction
The observation of ultra-high-energy cosmic rays (UHECR) with energies above $`10^{11}\text{GeV}`$ poses a serious challenge to the particle acceleration mechanisms so far proposed. This fact has motivated the search for non-acceleration models, in which the high energy cosmic rays are produced by the decay of a very heavy particle. Topological defects are attractive candidates for this scenario. Due to their topological stability these objects can retain their energy for very long times and release quanta of their constituents, typically with GUT scale masses, which in turn decay to produce the UHECR.
Various topological defect models and mechanisms have been studied by numerous authors. In this paper we investigate two different scenarios involving the annihilation of monopole-antimonopole pairs. We first discuss standard magnetic monopole pair annihilation , paying particular attention to the kinetics of monopolonium formation. We find that, due to the inefficiency of the pairing process, the density of monopolonium states formed is many orders of magnitude less than the value required to explain the UHECR events.
We then present a different scenario in which very massive monopoles ($`m\sim 10^{14}\text{GeV}`$) are bound by a light string formed at approximately $`100\text{GeV}`$. These monopoles do not have the usual magnetic charge, or in fact any unconfined flux. Gravitational radiation is the only significant energy-loss mechanism for the bound systems.Such systems were studied in a different context by Martin and Vilenkin. Their lifetimes can then be comparable with the age of the universe, and their final annihilation will then contribute to the high energy end of the cosmic ray spectrum.
## II Required monopolonium abundance.
What density of decaying monopolonium states is required to produce the observed cosmic rays? The monopolonium will behave as a cold dark matter (CDM) component and will cluster in the galactic halo, producing a high energy spectrum of cosmic rays without the Greisen-Zatsepin-Kuzmin (GZK) cutoff. Since the observational data does not seem to show any such cutoff, this is an advantage of such topological defect models .
For a given monopole mass, we can set the lifetime of the monopolonium at least equal to the age of the universe, and obtain the required density of monopolonium in the halo by normalizing the flux to the observed high energy spectrum. The required number density decreases with the monopole mass, so as a lower limit we can take the required density corresponding to $`m_M=10^{17}\text{GeV}`$,
$$N_{M\overline{M}}^h(T_0)>6\times 10^{-27}\text{cm}^{-3}.$$
(1)
Since the different components of the CDM cluster in the same way we can use this halo density to get the mean density in the universe, by computing,
$$N_{M\overline{M}}=\frac{N_{M\overline{M}}^h\mathrm{\Omega }_{CDM}\rho _{cr}}{\rho _{CDM}^h}.$$
(2)
For $`\mathrm{\Omega }_{CDM}h^2=0.2`$, $`\rho _{CDM}^h=0.3\text{GeV}`$ $`\text{cm}^{-3}`$, and $`\rho _{\text{cr}}=10^4h^2\text{eV cm}^{-3}`$, we get
$$N_{M\overline{M}}(T_0)>10^{-32}\text{cm}^{-3}.$$
(3)
We will work with a comoving monopolonium density $`\mathrm{\Gamma }=N_{M\overline{M}}/s`$ where $`s`$ is the entropy density, currently $`s\approx 3\times 10^3\text{cm}^{-3}`$, so that we require
$$\mathrm{\Gamma }>10^{-35}$$
(4)
to explain the observed UHECR.
## III Magnetic monopole states
### A Introduction
Monopolonium states are expected to have been formed by radiative capture if there was a non-zero density of free monopoles in the early universe. They will typically be bound in an orbit with a large quantum number, so we can treat them as classical objects emitting electromagnetic radiation as they spiral down to deeper and deeper orbits, until they annihilate in a final burst of very high energy particles.
The electromagnetic decay of monopolonium was analyzed by Hill using the dipole radiation formula. The rate of energy loss is<sup>§</sup><sup>§</sup>§Here and throughout we use units where $`\mathrm{}=c=k_B=1`$.
$$\frac{dE}{dt}=\frac{64E^4}{3g_M^2m_M^2},$$
(5)
where $`g_M`$ is the magnetic charge. From this expression, the lifetime of monopolonium with radius $`r`$ and binding energy $`E=g_M^2/2r`$ is
$$\tau _E\approx \frac{m_M^2r^3}{8g_M^4}.$$
(6)
For $`m_M=10^{16}\text{GeV}`$, $`g_M=1/(2e)\approx \sqrt{34}`$, and an initial radius of $`r=10^{-9}\text{cm}`$, this gives $`\tau _E\approx 10^{18}\text{sec}`$, comparable to the age of the universe.
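This estimate is easy to reproduce; a minimal sketch (our own, using the standard conversion factors $`\mathrm{}c\approx 1.97\times 10^{-14}\text{GeV cm}`$ and $`\mathrm{}\approx 6.58\times 10^{-25}\text{GeV s}`$):

```python
HBAR_C = 1.97e-14   # GeV*cm: converts r[cm] to r[GeV^-1] = r / HBAR_C
HBAR = 6.58e-25     # GeV*s : converts tau[GeV^-1] to seconds

def tau_E(m_M_GeV, r_cm, g_M_sq=34.0):
    """Eq. (6): tau_E ~ m_M^2 r^3 / (8 g_M^4), converted to seconds."""
    r = r_cm / HBAR_C                          # radius in GeV^-1
    return m_M_GeV**2 * r**3 / (8 * g_M_sq**2) * HBAR

print(tau_E(1e16, 1e-9))   # ~1e18 s, comparable to the age of the universe
```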
Bhattacharjee and Sigl used a thermodynamic equilibrium approximation to estimate the monopolonium density and argued that the late annihilation of very massive magnetic monopoles could explain the UHECR events observed. Here we recalculate the density of monopolonium states, taking into account the kinematics of formation and the frictional energy loss of monopolonium formed at early times.
### B Friction
Before electron-positron annihilation, monopoles interact with a background of relativistic charged particles. These interactions produce a force which, for a non-relativistic monopole is given by
$$F=\frac{\pi }{18}N_cT^2v\int _{b_{\text{min}}}^{b_{\text{max}}}\frac{db}{b}$$
(7)
where $`N_c`$ is the number of species of charged particles, $`v`$ the velocity of the monopole with respect to the background gas of charged particles and $`b`$ the impact parameter of the incident particles. Since we are interested in the friction that a monopole feels in a bound state orbit of monopolonium, we will not consider the interaction of charged particles with impact parameter greater than the radius of the monopolonium, so $`b_{\text{max}}\approx g_M^2E^{-1}`$. Initially, the monopoles are bound with energy $`E\approx T`$, so $`b_{\text{max}}\approx g_M^2T^{-1}`$. Equation (7) is derived using the approximation that each charged particle is only slightly deflected. This approximation breaks down for impact parameters that are too small, so we should cut off our integration at $`b_{min}\approx T^{-1}`$. Using $`N_c=2`$ and $`g_M^2\approx 34`$, we get
$$F\approx 1.22T^2v$$
(8)
so the energy loss rate due to interactions with charged particles in the background is
$$\frac{dE}{dt}\approx 1.22T^2v^2.$$
(9)
Taking the system to be bound in a circular orbit, we have
$$m_Mv^2\approx E$$
(10)
so we can write
$$\frac{dE}{dt}\approx 1.22T^2\frac{E}{m_M}$$
(11)
The time scale for this process is
$$\tau _F=\frac{E}{dE/dt}\approx \frac{m_M}{1.22T^2}$$
(12)
If we compare it with the Hubble time,
$$\tau _H=\sqrt{\frac{90}{8\pi ^3g_{*}}}m_{pl}T^{-2}\approx 0.184m_{pl}T^{-2},$$
(13)
where $`m_{pl}`$ is the Planck mass, and $`g_{*}`$ is the number of effectively massless degrees of freedom, $`g_{*}=10.75`$, we get
$$\frac{\tau _F}{\tau _H}\approx 0.15\frac{m_M}{m_{pl}}\ll 1.$$
(14)
Thus, we see that the damping of the monopolonium energy due to friction is very effective in this regime, and the monopoles spiral down very quickly.
When the distance between monopoles becomes small as compared to $`T^{-1}`$, the effect of friction is reduced and Eq. (7) is no longer accurate. However, even for $`T=1\text{MeV}`$, the radius has been reduced by about two orders of magnitude to $`r\approx 2\times 10^{-11}\text{cm}`$, and the electromagnetic lifetime has been reduced by about six orders of magnitude. Thus only monopolonium states formed after electron-positron annihilation can live to decay in the present era.
After electron-positron annihilation the number of charged particles in the thermal background has decreased by a factor $`10^9`$ so $`\tau _F/\tau _H\gg 1`$ and the monopolonium is little affected by friction.
### C Formation rate
We can obtain an upper limit for the monopolonium density by solving the Boltzmann equation,
$$\frac{dN_{M\overline{M}}}{dt}=\sigma _bvn_M^2-3HN_{M\overline{M}},$$
(15)
where $`n_M`$ denotes the free monopole density, $`N_{M\overline{M}}`$ the monopolonium density, $`H`$ the Hubble constant, and $`\sigma _bv`$ the average product of the binding cross section times the thermal velocity of the monopoles.
With the comoving monopole density $`\gamma =n_M/s`$, we can rewrite the equation above as
$$\frac{d\mathrm{\Gamma }}{dt}=\sigma _bv\gamma n_M=\sigma _bv\gamma ^2s.$$
(16)
Using the approximation for the classical radiative capture cross section of monopoles with thermal velocities given by ,
$$\sigma _bv\approx \frac{\pi ^{7/5}}{2}\frac{g_M^4}{m_M^2}\left(\frac{m_M}{T}\right)^{9/10},$$
(17)
and with
$$s=\frac{2\pi ^2}{45}g_ST^3,$$
(18)
where $`g_S`$ is the number of degrees of freedom contributing to the entropy, we get
$$\frac{d\mathrm{\Gamma }}{dt}=\frac{\pi ^{17/5}}{45}\frac{g_M^4\gamma ^2}{m_M^2}\left(\frac{m_M}{T}\right)^{9/10}g_ST^3.$$
(19)
Since we are interested in the evolution of the monopolonium density after electron-positron annihilation, we will take a constant value $`g_S\simeq 3.91`$ to get
$$\frac{d\mathrm{\Gamma }}{dt}\approx 4.25\,\frac{g_M^4\gamma ^2}{m_M^2}\left(\frac{m_M}{T}\right)^{9/10}T^3.$$
(20)
As we will see, only a tiny fraction of the monopoles will ever be bound, so we can consider the comoving number of monopoles $`\gamma `$ to be constant. To integrate Eq. (20), we will make the change of variable
$$t=\sqrt{\frac{90}{32\pi ^3g_{*}}}\,m_{pl}T^{-2}\approx 0.164\,m_{pl}T^{-2},$$
(21)
appropriate to times after electron-positron annihilation, to get
$$\frac{d\mathrm{\Gamma }}{dT}\approx -1.34\,\frac{g_M^4m_{pl}\gamma ^2}{m_M^2}\left(\frac{m_M}{T}\right)^{9/10},$$
(22)
and thus
$$\mathrm{\Gamma }_f\approx 13.4\,g_M^4\left(\frac{m_{pl}}{m_M}\right)\left(\frac{T_i}{m_M}\right)^{1/10}\gamma ^2.$$
(23)
We now take $`T_i\approx 1\text{MeV}`$ and $`g_M^2=34`$, and note that to produce the observed UHECR, we must have $`m_M>10^{11}\text{GeV}`$, so that for a fixed monopole comoving density $`\gamma `$, we have the bound,
$$\mathrm{\Gamma }_f<4\times 10^6\gamma ^2.$$
(24)
### D Monopole density bound
The formation of magnetic monopoles via the Kibble mechanism is inevitable in all GUT models of the early universe, and annihilation mechanisms are not efficient in a rapidly expanding background, so that the typical initial density of monopoles produced at a GUT phase transition will very soon dominate the energy density of the universe. The most attractive solution for this problem is the inflationary scenario. In standard inflation, the exponential expansion of the universe reduces the monopole density to a completely negligible value. However, it is possible for new monopoles to be formed at the end of inflation. The exact relic abundance of monopoles created in this period is very model dependent, but its value is constrained by the Parker limit : To prevent the acceleration of monopoles from eliminating the galactic magnetic field, the monopole flux into the galaxy must be limited by
$$F<10^{-16}\text{cm}^{-2}\text{s}^{-1}\text{sr}^{-1}.$$
(25)
Assuming a monopole velocity with respect to the galaxy of $`10^{-3}c`$, we can translate this bound into a limit on the monopole density,
$$n_M<10^{-23}\text{cm}^{-3},$$
(26)
and thus $`\gamma <10^{-26}`$. Then, from Eq. (24) we have
$$\mathrm{\Gamma }_f<10^{-45}.$$
(27)
Since this conflicts with Eq. (4) by 10 orders of magnitude, we conclude that primordial bound states of magnetic monopoles cannot explain the UHECR.
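The Parker-limit arithmetic above can be checked in a few lines (a sketch under our unit conventions; the isotropic-flux relation $`n=4\pi F/v`$ and the present entropy density $`s_0\approx 2.9\times 10^3`$ cm<sup>-3</sup> are our assumed inputs):

```python
import math

F  = 1e-16            # Parker limit [cm^-2 s^-1 sr^-1], Eq. (25)
v  = 1e-3 * 3e10      # monopole velocity ~1e-3 c [cm/s]
s0 = 2.9e3            # entropy density today [cm^-3] (assumed standard value)

n_M   = 4*math.pi * F / v     # isotropic flux -> number density
gamma = n_M / s0
print(f"n_M < {n_M:.0e} cm^-3,  gamma < {gamma:.0e}")   # ~4e-23 and ~1e-26
```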
We note that we have used several approximations which overstate the possible value of $`\mathrm{\Gamma }_f`$: First, we have considered the total classical radiative capture cross section. This takes into account not only the monopolonium formed with the right energy to decay at present, but all possible binding energies, clearly overestimating the value of $`\mathrm{\Gamma }_f`$. Second, it has been argued that the classical cross section given in Eq. (17) overestimates its real value due to photon discreteness effects. Finally, some of the monopolonium will have decayed before the present time, reducing the value of $`\mathrm{\Gamma }`$. All of these effects make the conflict above more serious.
## IV Monopoles connected by strings.
We present now a different scenario for the formation and annihilation of monopole-antimonopole bound states. The main problem in explaining the UHECR by the conventional magnetic monopolonium system is the inefficiency of the binding mechanism. This can be solved if we assume that all the monopoles get connected by strings in a later phase transition. Since the U(1) symmetry of the monopoles would be broken by the second phase transition, this U(1) must be a field other than the usual electromagnetism. (This is different from the Langacker-Pi scenario, where electromagnetism is broken and then restored at a lower temperature, and monopoles do feel large frictional forces.) We furthermore assume that these monopoles will not have any other unconfined charge, so that they will feel almost no frictional force moving in a background of particles.
We take the comoving density of bound monopole systems $`\mathrm{\Gamma }`$ to be constant. With a monopole mass of $`10^{14}\text{GeV}`$ the calculation of Sec. II gives $`\mathrm{\Gamma }\sim 10^{-33}`$, and with all monopoles bound, $`\gamma =2\mathrm{\Gamma }`$. The proper density at the time of string formation is then
$$n_M(T_s)=\gamma s=\frac{2\pi ^2}{45}g_ST_s^3\gamma \sim 10^{-32}\,T_s^3.$$
(28)
We can then compute the mean separation between monopoles at the time the string is formed,
$$L_i\approx \left[n_M(T_s)\right]^{-1/3}.$$
(29)
If we take $`T_s\approx 100\text{GeV}`$, we obtain
$$L_i\approx 10^{-6}\text{cm},$$
(30)
which is much smaller than the horizon distance, $`d_H\approx 3`$ cm at $`T_s\approx 100\text{GeV}`$. We will assume that there are no light ($`m\sim T_s`$ or less) particles that are charged under the string flux. This means that there will be no charged particles that interact with the monopoles and cause the system to lose energy, so that gravitational radiation will be the only energy loss mechanism. When the strings are formed they may have excitations on scales smaller than the distance between monopoles, but these will be quickly smoothed out by gravitational radiation, leaving a straight string. The energy stored in the string is then $`\mu L_i`$, where $`\mu \sim T_s^2`$ is the energy per unit length of the string. This is smaller than the monopole mass by the ratio
$$\frac{\mu L_i}{m_M}\approx 10^{-2},$$
(31)
so the monopoles will move non-relativistically.
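A one-line check (ours) of the ratio (31), using $`\mathrm{}c\approx 2\times 10^{-14}`$ GeV cm to convert $`L_i`$ to natural units:

```python
T_s, m_M = 100.0, 1e14     # GeV
L_i_cm   = 1e-6            # cm, from Eq. (30)
hbarc    = 1.97e-14        # GeV*cm

ratio = T_s**2 * (L_i_cm / hbarc) / m_M   # mu*L_i/m_M with mu ~ T_s^2
print(f"mu*L_i/m_M ~ {ratio:.0e}")        # ~5e-3, consistent with Eq. (31)
```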
In order to estimate the radiation rate we can assume that the monopoles are moving in straight lines. In fact, at the time of string formation the monopoles will have thermal velocities, so that in general the system will be formed with some non-zero angular momentum. However, this angular momentum will typically be small compared to the linear motion due to the string tension, so we will ignore it, except to note that the monopoles will pass by each other without colliding. A half oscillation of one monopole is parameterized by
$$x(t)=(2aL)^{1/2}t-\frac{1}{2}at^2$$
(32)
with $`a=\mu /m_M`$ and $`0<t<(8L/a)^{1/2}`$. Using the quadrupole approximation (the fully relativistic situation was considered in ), the rate of energy loss of the system is
$$\frac{dE}{dt}=-\frac{288}{45}G\mu ^2\left(\frac{\mu L}{m_M}\right).$$
(33)
Since $`\mu L`$ is the energy in the string, we can integrate this equation to obtain
$$L=L_ie^{-t/\tau _g}$$
(34)
with
$$\tau _g=\frac{45}{288}\frac{m_M}{G\mu ^2}=\frac{45}{288}\frac{m_{pl}^2m_M}{T_s^4}.$$
(35)
The lifetime of the state will thus be $`\tau _g\mathrm{ln}(L_i/r_M)`$, where $`r_M\sim m_M^{-1}`$ is the radius of the monopole core. For $`T_s\approx 100\text{GeV}`$ and $`m_M\approx 10^{14}\text{GeV}`$, Eq. (35) gives $`\tau _g\approx 10^{17}\text{sec}`$, comparable with the age of the universe.
This suggests that the bound system formed by a monopole-antimonopole pair connected by a string can slowly decay gravitationally, and release the energy stored in the monopole in a final annihilation when the two monopole cores become close enough.
## V Conclusions
We have shown that it is not possible to construct a consistent model for the origin of the UHECR based on the electromagnetic decay and final annihilation of magnetic monopole-antimonopole bound states formed in the early universe. We have obtained an upper limit for the monopolonium density today, taking into account its enhancement in the galactic halo and the maximum average free monopole density consistent with the Parker limit. Due to the small radiative capture cross section for the monopoles and the rapid expansion of the universe, the maximum density of monopolonium is many orders of magnitude below the concentration required to explain the highest energy cosmic ray events.
We then proposed a different scenario in which the monopoles are connected by strings that form at a relatively low energy. This mechanism solves the problem of the inefficiency of the binding process, since every monopole will be attached to an antimonopole at the other end of the string. Due to the confinement of the monopole flux inside the string , the main source of energy loss for these bound systems will be gravitational radiation. If we assume a monopole mass of $`10^{14}\text{GeV}`$ and a string energy scale of the order of $`100\text{GeV}`$, the lifetime of the bound states would be comparable with the age of the universe, making them a possible candidate for the origin of the ultra-high-energy cosmic rays.
## VI Acknowledgments
We would like to thank Alex Vilenkin for suggesting this line of work, and Xavier Siemens and Alex Vilenkin for helpful conversations. This work was supported in part by funding provided by the National Science Foundation. J. J. B. P. is supported in part by the Fundación Pedro Barrie de la Maza.
## 1 Introduction
There has been considerable interest recently in studying absorption probabilities for fields propagating in various black hole and $`p`$-brane backgrounds . One of the motivations is the conjectured duality of supergravity on an AdS spacetime and the conformal field theory on the boundary of the AdS . Previously, the study of absorption was mainly concentrated on the case of massless scalars. Some work has been done for the cases of the emission of BPS particles from five- and four-dimensional black holes . These BPS particles can be viewed as pp-waves in a spacetime of one higher dimension. Hence they satisfy the higher-dimensional massless wave equations.
In this paper, we consider the absorption probability of minimally-coupled massive particles by extremal $`p`$-branes. The wave equation for such a scalar depends only on the metric of the $`p`$-brane, which has the form
$$ds^2=\underset{\alpha =1}{\overset{N}{\prod }}H_\alpha ^{-{\scriptscriptstyle \frac{\stackrel{~}{d}}{D-2}}}dx^\mu dx^\nu \eta _{\mu \nu }+\underset{\alpha =1}{\overset{N}{\prod }}H_\alpha ^{{\scriptscriptstyle \frac{d}{D-2}}}dy^mdy^m,$$
(1.1)
where $`d=p+1`$ is the dimension of the world volume of the $`p`$-brane, $`\stackrel{~}{d}=D-d-2`$, and $`H_\alpha =1+Q_\alpha /r^{\stackrel{~}{d}}`$ are harmonic functions in the transverse space $`y^m`$, where $`r^2=y^my^m`$. (Note that the ADM mass density and the physical charges of the extreme $`p`$-brane solutions are proportional to $`\sum _{i=1}^NQ_i`$ and $`Q_i`$, respectively.)
It follows that the wave equation, $`\frac{1}{\sqrt{-g}}\partial _M(\sqrt{-g}g^{MN}\partial _N\mathrm{\Phi })=m^2\mathrm{\Phi }`$, for the massive minimally-coupled scalar, with the ansatz, $`\mathrm{\Phi }(t,r,\theta _i)=\varphi (r)Y(\theta _i)e^{-\mathrm{i}\omega t}`$, takes the following form:
$$\frac{d^2\varphi }{d\rho ^2}+\frac{\stackrel{~}{d}+1}{\rho }\frac{d\varphi }{d\rho }+\left[\underset{\alpha =1}{\overset{N}{\prod }}\left(1+\frac{\lambda _\alpha ^{\stackrel{~}{d}}}{\rho ^{\stackrel{~}{d}}}\right)-\frac{\ell (\ell +\stackrel{~}{d})}{\rho ^2}-\frac{m^2}{\omega ^2}\underset{\alpha =1}{\overset{N}{\prod }}\left(1+\frac{\lambda _\alpha ^{\stackrel{~}{d}}}{\rho ^{\stackrel{~}{d}}}\right)^{{\scriptscriptstyle \frac{d}{D-2}}}\right]\varphi =0,$$
(1.2)
where $`\rho =\omega r`$ and $`\lambda _\alpha =\omega Q_\alpha ^{1/\stackrel{~}{d}}`$. Note that when $`m=0`$, the wave equation depends on $`\stackrel{~}{d}`$, but is independent of the world-volume dimension $`d`$. This implies that the wave equation for minimally-coupled massless scalars is not invariant under the vertical-dimensional reduction, but is invariant under double-dimensional reduction of the corresponding $`p`$-brane . However, for massive scalars, the wave equation (1.2) is not invariant under either double or vertical reductions.
The absorption probability of massless scalars is better understood. It was shown that for low frequency the cross-section/frequency relation for a generic extremal $`p`$-brane coincides with the entropy/temperature relation of the near extremal $`p`$-brane . There are a few examples where the wave equations can be solved exactly in terms of special functions. Notably, the wave equations for the D3-brane and the dyonic string can be cast into modified Mathieu equations. Hence, the absorption probability can be obtained exactly, order by order, in terms of a certain small parameter. There are also examples where the absorption probabilities can be obtained in closed-form for all wave frequencies .
When the mass $`m`$ is non-zero, we find that there are two examples for which the wave function can be expressed in terms of special functions and, thus, the absorption probabilities can be obtained exactly. One example is the wave equation in the self-dual string background, which can be cast into a modified Mathieu equation. Therefore, we can obtain the exact absorption probability, order by order, in terms of a certain small parameter. We discuss this example in section 2. Another example is the wave equation for the $`D=4`$ two-charge black hole with equal charges. The wave function can be expressed in terms of Kummer’s regular and irregular confluent hypergeometric functions. It follows that we can obtain the absorption probability in closed-form, which we present in section 3. In both of the above examples, the massive scalar wave equation has the same form as the massless scalar wave equation under the backgrounds where the two charges are generically non-equal.
However, in general, the massive scalar wave equation (1.2) cannot be solved analytically. For low-frequency absorption, the leading-order wave function can be obtained by matching wave functions in inner and outer regions. In section 4, we make use of this technique to obtain the leading-order absorption probability for D3-, M2- and M5-branes.
## 2 Massive scalar absorption for the self-dual string
For the self-dual string ($`Q_1=Q_2\equiv Q`$), we have $`d=\stackrel{~}{d}=2`$ and $`\lambda _1=\lambda _2\equiv \lambda =\omega \sqrt{Q}`$. It follows that the wave equation (1.2) becomes
$$\frac{d^2\varphi }{d\stackrel{~}{\rho }^2}+\frac{3}{\stackrel{~}{\rho }}\frac{d\varphi }{d\stackrel{~}{\rho }}+\left(1+\frac{\stackrel{~}{\lambda }_1^2+\stackrel{~}{\lambda }_2^2-\ell (\ell +2)}{\stackrel{~}{\rho }^2}+\frac{\stackrel{~}{\lambda }_1^2\stackrel{~}{\lambda }_2^2}{\stackrel{~}{\rho }^4}\right)\varphi =0,$$
(2.1)
where
$`\stackrel{~}{\lambda }_1=\lambda ,\qquad \stackrel{~}{\lambda }_2=\lambda \sqrt{1-(m/\omega )^2},`$
$`\stackrel{~}{\rho }=\rho \sqrt{1-(m/\omega )^2}.`$ (2.2)
Thus the wave equation of a minimally-coupled massive scalar on a self-dual string has precisely the same form as that of a minimally-coupled massless scalar on a dyonic string, where $`\stackrel{~}{\lambda }_1`$ and $`\stackrel{~}{\lambda }_2`$ are associated with electric and magnetic charges. It was shown in that the wave equation (2.1) can be cast into the form of a modified Mathieu equation, and hence the equation can be solved exactly. To do so, one makes the following definitions
$$\varphi (\stackrel{~}{\rho })=\frac{1}{\stackrel{~}{\rho }}\mathrm{\Psi }(\stackrel{~}{\rho }),\qquad \stackrel{~}{\rho }=\sqrt{\stackrel{~}{\lambda }_1\stackrel{~}{\lambda }_2}\,e^z.$$
(2.3)
The wave equation (2.1) then becomes the modified Mathieu equation
$$\mathrm{\Psi }^{\prime \prime }+\left(8\mathrm{\Lambda }^2\mathrm{cosh}(2z)-4\alpha ^2\right)\mathrm{\Psi }=0,$$
(2.4)
where
$`\alpha ^2`$ $`=`$ $`\frac{1}{4}(\ell +1)^2-\mathrm{\Lambda }^2\mathrm{\Delta },`$
$`\mathrm{\Lambda }^2`$ $`=`$ $`\frac{1}{4}\stackrel{~}{\lambda }_1\stackrel{~}{\lambda }_2=\frac{1}{4}\lambda ^2\sqrt{1-(m/\omega )^2}=\frac{1}{4}\omega \sqrt{\omega ^2-m^2}\,Q,`$ (2.5)
$`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \frac{\stackrel{~}{\lambda }_1}{\stackrel{~}{\lambda }_2}}+{\displaystyle \frac{\stackrel{~}{\lambda }_2}{\stackrel{~}{\lambda }_1}}=\sqrt{1-(m/\omega )^2}+{\displaystyle \frac{1}{\sqrt{1-(m/\omega )^2}}}.`$ (2.6)
The Mathieu equation can be solved, order by order, in terms of $`\mathrm{\Lambda }^2`$. The result was obtained in , using the technique developed in . (For an extremal D3-brane, which also reduces to the Mathieu equation, an analogous technique was employed in .) In our case there are two parameters, namely $`\omega R`$ and $`m/\omega `$. We present results for two scenarios:
### 2.1 Fixed mass/frequency ratio probing
In this case, we have $`m/\omega =\beta `$ fixed. The requirement that $`\mathrm{\Lambda }`$ is small is achieved by considering low-frequency and, hence, small mass of the probing particles. In this case, $`\mathrm{\Delta }`$ is fixed, and the absorption probability has the form
$$P_{\ell }=\frac{4\pi ^2\mathrm{\Lambda }^{4+4\ell }}{(\ell +1)^2\mathrm{\Gamma }(\ell +1)^4}\underset{n\geq 0}{\sum }\underset{k=0}{\overset{n}{\sum }}b_{n,k}\mathrm{\Lambda }^{2n}(\mathrm{log}\overline{\mathrm{\Lambda }})^k,$$
(2.7)
where $`\overline{\mathrm{\Lambda }}=e^\gamma \mathrm{\Lambda }`$, and $`\gamma `$ is Euler’s constant. The prefactor is chosen so that $`b_{0,0}=1`$. Our results for the coefficients $`b_{n,k}`$ with $`k\leq n\leq 3`$ for the first four partial waves, $`\ell =0,1,2,3`$, were explicitly given in . In particular the result up to order $`\mathrm{\Lambda }^2`$ is given by
$$P_{\ell }=\frac{4\pi ^2\mathrm{\Lambda }^{4+4\ell }}{(\ell +1)^2\mathrm{\Gamma }(\ell +1)^4}\left[1-\frac{8\mathrm{\Delta }}{\ell +1}\mathrm{\Lambda }^2\mathrm{log}\mathrm{\Lambda }+\frac{4\mathrm{\Delta }\mathrm{\Lambda }^2}{(\ell +1)^2}\left(1+2(\ell +1)\psi (\ell +1)\right)+\mathrm{}\right],$$
(2.8)
where $`\psi (x)\equiv \mathrm{\Gamma }^{\prime }(x)/\mathrm{\Gamma }(x)`$ is the digamma function.
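To make the expansion concrete, here is a small numerical sketch (ours, not the paper’s) of the leading terms of Eq. (2.8), using the parameter definitions (2.5)–(2.6); the sample values of $`\omega \sqrt{Q}`$ and $`m/\omega `$ are arbitrary.

```python
import math

EULER = 0.5772156649015329  # Euler's constant

def psi_int(n):
    # digamma at a positive integer: psi(n) = -gamma + sum_{k=1}^{n-1} 1/k
    return -EULER + sum(1.0/k for k in range(1, n))

def P_ell(Lam, Delta, ell):
    # leading terms of Eq. (2.8) for the partial absorption probability
    pre = 4*math.pi**2 * Lam**(4 + 4*ell) / ((ell + 1)**2 * math.gamma(ell + 1)**4)
    return pre * (1 - 8*Delta/(ell + 1) * Lam**2 * math.log(Lam)
                    + 4*Delta*Lam**2/(ell + 1)**2 * (1 + 2*(ell + 1)*psi_int(ell + 1)))

# sample point: omega*sqrt(Q) = 0.2, m/omega = 0.5 (illustrative only)
lam, beta = 0.2, 0.5
Lam2  = 0.25 * lam**2 * math.sqrt(1 - beta**2)             # Eq. (2.5)
Delta = math.sqrt(1 - beta**2) + 1/math.sqrt(1 - beta**2)  # Eq. (2.6)
print(P_ell(math.sqrt(Lam2), Delta, ell=0))
```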
### 2.2 Fixed mass probing
Now we consider the case where the mass of the test particle is fixed. In this case, it is ensured that $`\mathrm{\Lambda }`$ is small by considering the limiting frequency of the probing particle, namely $`\omega \rightarrow m^+`$, i.e. the particle is non-relativistic. In this limit, the value of $`\mathrm{\Delta }`$ becomes large (while at the same time the expansion parameter $`\mathrm{\Lambda }`$ can still be ensured to remain small). Furthermore, we shall consider a special slice of the parameter space where $`\alpha ^2`$, given in (2.5), is fixed. The absorption probability for fixed $`\alpha `$ was obtained in . It is of particular interest to present the absorption probability for $`\alpha \rightarrow 0`$, given by
$$P=\frac{\pi ^2}{\pi ^2+(2\mathrm{log}\overline{\mathrm{\Lambda }})^2}\left(1-\frac{32}{3}\mathrm{\Lambda }^4(\mathrm{log}\overline{\mathrm{\Lambda }})^2-\frac{16}{3}(4\zeta (3)-3)\frac{\mathrm{\Lambda }^4\mathrm{log}\overline{\mathrm{\Lambda }}}{\pi ^2+(2\mathrm{log}\overline{\mathrm{\Lambda }})^2}+𝒪(\mathrm{\Lambda }^8)\right),$$
(2.9)
where $`\overline{\mathrm{\Lambda }}=e^\gamma \mathrm{\Lambda }`$. When $`\alpha ^2<0`$, we define $`\alpha =\mathrm{i}\beta `$, and find that the absorption probability becomes oscillatory as a function of $`\mathrm{\Lambda }`$, given by
$$P=\frac{\mathrm{sinh}^22\pi \beta }{\mathrm{sinh}^22\pi \beta +\mathrm{sin}^2(\theta -4\beta \mathrm{log}\mathrm{\Lambda })}+\mathrm{},$$
(2.10)
where
$$\theta =\mathrm{arg}\frac{\mathrm{\Gamma }(2\mathrm{i}\beta )}{\mathrm{\Gamma }(-2\mathrm{i}\beta )}.$$
(2.11)
Note that the $`\alpha \rightarrow 0`$ limit is a dividing domain between the region where the absorption probability has a power-law dependence on $`\mathrm{\Lambda }`$ ($`\alpha ^2>0`$) and the region with oscillating behavior in $`\mathrm{\Lambda }`$ ($`\alpha ^2<0`$).
## 3 Closed-form absorption for the $`D=4`$ two-charge black hole
For a $`D=4`$ black hole, specified in general by four charges $`Q_1`$, $`Q_2`$, $`P_1`$ and $`P_2`$ , we have $`d=\stackrel{~}{d}=1`$. We consider the special case of two equal non-zero charges ($`Q_1=Q_2\equiv Q`$ with $`P_1=P_2=0`$) and therefore $`\lambda _1=\lambda _2\equiv \lambda =\omega Q`$. It follows that the wave equation (1.2) becomes
$$\frac{d^2\varphi }{d\stackrel{~}{\rho }^2}+\frac{2}{\stackrel{~}{\rho }}\frac{d\varphi }{d\stackrel{~}{\rho }}+\left[\left(1+\frac{\stackrel{~}{\lambda }_1}{\stackrel{~}{\rho }}\right)\left(1+\frac{\stackrel{~}{\lambda }_2}{\stackrel{~}{\rho }}\right)-\frac{\ell (\ell +1)}{\stackrel{~}{\rho }^2}\right]\varphi =0$$
(3.1)
where
$`\stackrel{~}{\lambda }_1={\displaystyle \frac{\lambda }{\sqrt{1-(m/\omega )^2}}},\qquad \stackrel{~}{\lambda }_2=\lambda \sqrt{1-(m/\omega )^2},`$
$`\stackrel{~}{\rho }=\rho \sqrt{1-(m/\omega )^2}.`$ (3.2)
Thus the wave equation of a minimally-coupled massive scalar on a $`D=4`$ black hole with two equal charges has precisely the same form as that of a minimally-coupled massless scalar on a $`D=4`$ black hole with two different charges. The closed-form absorption probability for the latter case was calculated in (see also ). The absorption probability for the former case is, therefore, given by
$$P^{(\ell )}=\frac{1-e^{-2\pi \sqrt{4\lambda ^2-(2\ell +1)^2}}}{1+e^{-\pi (2\lambda +\sqrt{4\lambda ^2-(2\ell +1)^2})}e^{-\pi \delta }},\qquad \lambda >\ell +\frac{1}{2},$$
(3.3)
where
$$\delta \equiv \stackrel{~}{\lambda }_1+\stackrel{~}{\lambda }_2-2\lambda =\lambda \left[(1-(m/\omega )^2)^{-1/4}-(1-(m/\omega )^2)^{1/4}\right]^2\geq 0,$$
(3.4)
with $`P^{(\ell )}=0`$ if $`\lambda \leq \ell +\frac{1}{2}`$. In the non-relativistic case ($`\omega \rightarrow m^+`$), the absorption probability takes the (non-singular) form $`P^{(\ell )}=1-e^{-2\pi \sqrt{4\lambda ^2-(2\ell +1)^2}}`$, with $`\lambda \rightarrow mQ`$.
The total absorption cross-section is given by:
$$\sigma ^{(abs)}=\underset{\ell \leq \lambda -{\scriptscriptstyle \frac{1}{2}}}{\sum }\frac{\pi (2\ell +1)}{\omega ^2-m^2}P^{(\ell )}$$
(3.5)
It is oscillatory with respect to the dimensionless parameter $`M\omega \propto Q\omega =\lambda `$. ($`M`$ is the ADM mass of the black hole.) This feature was noted in for Schwarzschild black holes and conjectured to be a general property of black holes due to wave diffraction. Probing particles feel an effective finite potential barrier around black holes, inside of which is an effective potential well. Such particles inhabit a quasi-bound state once inside the barrier. Resonance in the partial-wave absorption cross-section occurs if the energy of the particle is equal to the effective energy of the potential barrier. Each partial wave contributes a ‘spike’ to the total absorption cross-section, and these sum to yield the oscillatory pattern. As the mass of the probing particles increases, the amplitude of the oscillatory pattern of the total absorption cross-section decreases.
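As a cross-check of these closed-form expressions, the following sketch (ours) implements Eqs. (3.3)–(3.5) as written above; the function names and the sample inputs are illustrative.

```python
import math

def P_ell(lam, m_over_w, ell):
    # Eq. (3.3) with delta from Eq. (3.4); returns 0 below threshold
    if lam <= ell + 0.5:
        return 0.0
    root  = math.sqrt(4.0*lam**2 - (2*ell + 1)**2)
    x     = (1.0 - m_over_w**2) ** 0.25
    delta = lam * (1.0/x - x)**2
    return ((1.0 - math.exp(-2.0*math.pi*root)) /
            (1.0 + math.exp(-math.pi*(2.0*lam + root) - math.pi*delta)))

def sigma_abs(omega, m, Q):
    # Eq. (3.5): sum over the partial waves open at lam > ell + 1/2
    lam = omega * Q
    k2  = omega**2 - m**2
    return sum(math.pi*(2*ell + 1)/k2 * P_ell(lam, m/omega, ell)
               for ell in range(int(lam) + 1))

print(sigma_abs(omega=2.0, m=1.0, Q=1.5))   # arbitrary sample values
```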
## 4 Leading-order absorption for D3, M2 and M5-branes
In the previous two sections, we considered two examples for which the massive scalar wave equations can be solved exactly. In general, the wave equation (1.2) cannot be solved analytically. In the case of low frequency, one can adopt a solution-matching technique to obtain approximate solutions for the inner and outer regions of the wave equations. In this section, we shall use such a procedure to obtain the leading-order absorption cross-sections for the D3, M2 and M5-branes.
We now give a detailed discussion for the D3-brane, for which we have $`D=10`$, $`d=\stackrel{~}{d}=4`$ and $`N=1`$. We define $`\lambda \equiv \omega R`$. It follows that the wave equation (1.2) becomes
$$\left(\frac{1}{\rho ^5}\frac{\partial }{\partial \rho }\rho ^5\frac{\partial }{\partial \rho }+1+\frac{(\omega R)^4}{\rho ^4}-\sqrt{1+\frac{(\omega R)^4}{\rho ^4}}\left(\frac{m}{\omega }\right)^2-\frac{\ell (\ell +4)}{\rho ^2}\right)\varphi (\rho )=0.$$
(4.1)
Thus, we are interested in absorption by the Coulomb potential in 6 spatial dimensions. For $`\omega R\ll 1`$ we can solve this problem by matching an approximate solution in the inner region to an approximate solution in the outer region. To obtain an approximate solution in the inner region, we substitute $`\varphi =\rho ^{-3/2}f`$ and find that
$$\left(\frac{\partial ^2}{\partial \rho ^2}+\frac{2}{\rho }\frac{\partial }{\partial \rho }-\left(\frac{15}{4}+\ell (\ell +4)\right)\frac{1}{\rho ^2}+1+\frac{(\omega R)^4}{\rho ^4}-\sqrt{1+\frac{(\omega R)^4}{\rho ^4}}\left(\frac{m}{\omega }\right)^2\right)f=0.$$
(4.2)
In order to neglect $`1`$ in the presence of the $`\frac{1}{\rho ^2}`$ term, we require that
$$\rho \ll 1.$$
(4.3)
In order for the scalar mass term to be negligible in the presence of the $`\frac{1}{\rho ^2}`$ term, we require that
$$\rho \ll \left[\left(\frac{\omega }{m}\right)^4\left(\frac{15}{4}+\ell (\ell +4)\right)^2-(\omega R)^4\right]^{1/4}.$$
(4.4)
Physically we must have $`m\leq \omega `$. Imposing the low-energy condition $`\omega R\ll 1`$ causes (4.3) to be a stronger constraint on $`\rho `$ than is (4.4). Under the above conditions, (4.2) becomes
$$\left(\frac{\partial ^2}{\partial \rho ^2}+\frac{2}{\rho }\frac{\partial }{\partial \rho }-\left(\frac{15}{4}+\ell (\ell +4)\right)\frac{1}{\rho ^2}+\frac{(\omega R)^4}{\rho ^4}\right)f=0,$$
(4.5)
which can be solved in terms of cylinder functions. Since we are interested in the incoming wave for $`\rho \ll 1`$, the appropriate solution is
$$\varphi _o=i\frac{(\omega R)^4}{\rho ^2}\left(J_{\ell +2}\left(\frac{(\omega R)^2}{\rho }\right)+iN_{\ell +2}\left(\frac{(\omega R)^2}{\rho }\right)\right),\qquad \rho \ll 1,$$
(4.6)
where $`J`$ and $`N`$ are Bessel and Neumann functions. In order to obtain an approximate solution for the outer region, we substitute $`\varphi =\rho ^{-5/2}\psi `$ into (4.1) and obtain
$$\left(\frac{\partial ^2}{\partial \rho ^2}-\left(\frac{15}{4}+\ell (\ell +4)\right)\frac{1}{\rho ^2}+1+\frac{(\omega R)^4}{\rho ^4}-\sqrt{1+\frac{(\omega R)^4}{\rho ^4}}\left(\frac{m}{\omega }\right)^2\right)\psi =0.$$
(4.7)
In order to neglect $`\frac{(\omega R)^4}{\rho ^4}`$ in the presence of the $`\frac{1}{\rho ^2}`$ term, we require that
$$\rho \gg (\omega R)^2.$$
(4.8)
Within the scalar mass term, $`\frac{(\omega R)^4}{\rho ^4}`$ can be neglected in the presence of 1 provided that
$$\rho \gg \omega R.$$
(4.9)
Imposing the low-energy condition, $`\omega R\ll 1`$, causes (4.9) to be a stronger constraint on $`\rho `$ than (4.8). Under the above conditions, (4.7) becomes
$$\left(\frac{\partial ^2}{\partial \rho ^2}-\left(\frac{15}{4}+\ell (\ell +4)\right)\frac{1}{\rho ^2}+1-\left(\frac{m}{\omega }\right)^2\right)\psi =0.$$
(4.10)
Equation (4.10) is solved in terms of cylinder functions:
$$\varphi _{\mathrm{\infty }}=A\rho ^{-2}J_{\ell +2}(\sqrt{1-(m/\omega )^2}\,\rho )+B\rho ^{-2}N_{\ell +2}(\sqrt{1-(m/\omega )^2}\,\rho ),\qquad \rho \gg \omega R,$$
(4.11)
where $`A`$ and $`B`$ are constants to be determined.
Our previously imposed low-energy condition, $`\omega R\ll 1`$, is sufficient for there to be an overlapping regime of validity for conditions (4.3) and (4.9), allowing the inner and outer solutions to be matched. Within the matching region, all cylinder functions involved have small arguments. We use the same asymptotic forms of the cylinder functions as used by . We find that $`B=0`$ and
$$A=\frac{4^{\ell +2}\mathrm{\Gamma }(\ell +3)\mathrm{\Gamma }(\ell +2)}{\pi \left(1-\left(\frac{m}{\omega }\right)^2\right)^{\frac{\ell +2}{2}}(\omega R)^{2\ell }}$$
(4.12)
The absorption probability is most easily calculated in this approximation scheme as the ratio of the flux at the horizon to the incoming flux at infinity. In general, this flux may be defined as
$$F=i\rho ^{\stackrel{~}{d}+1}\left(\overline{\varphi }\frac{\partial \varphi }{\partial \rho }-\varphi \frac{\partial \overline{\varphi }}{\partial \rho }\right),$$
(4.13)
where $`\varphi `$ here is taken to be the in-going component of the wave. From the approximate solutions for $`\varphi `$ in the inner and outer regions, where the arguments of the cylinder functions are large, we find that the in-going fluxes at the horizon and at infinity are given by
$$F_{horizon}=\frac{4}{\pi }\omega ^4R^8,\qquad F_{\mathrm{\infty }}=\frac{A^2}{\pi \omega ^4}.$$
(4.14)
Thus, to leading order, the absorption probability, $`P=F_{horizon}/F_{\mathrm{\infty }}`$, is
$$P^{(\ell )}=\frac{\pi ^2\left(1-\left(\frac{m}{\omega }\right)^2\right)^{\ell +2}(\omega R)^{4\ell +8}}{4^{2\ell +3}(\ell +2)^2[(\ell +1)!]^4}$$
(4.15)
In general, the phase-space factor relating the absorption probability to the absorption cross-section can be obtained from the massless scalar case considered in with the replacement $`\omega \rightarrow \sqrt{\omega ^2-m^2}`$:
$$\sigma ^{(\ell )}=2^{n-2}\pi ^{n/2-1}\mathrm{\Gamma }(n/2-1)(\ell +n/2-1)\left(\genfrac{}{}{0pt}{}{\ell +n-3}{\ell }\right)(\omega ^2-m^2)^{(1-n)/2}P^{(\ell )}$$
(4.16)
where $`n=D-d`$ denotes the number of spatial dimensions. Thus, for the D3-brane we find
$$\sigma _{3\mathrm{brane}}^{(\ell )}=\frac{\pi ^4(\ell +3)(\ell +1)[1-(m/\omega )^2]^{\ell -1/2}}{3\cdot 2^{4\ell +3}[(\ell +1)!]^4}\omega ^{4\ell +3}R^{4\ell +8}.$$
(4.17)
As can be seen, within our approximation scheme, the effects of a nonzero scalar mass amount to an overall factor in the partial absorption cross-section. Also, the s-wave absorption cross-section is increased by $`m`$ and the higher partial wave absorption cross-sections are diminished by $`m`$. This is to be expected, since the scalar mass serves to increase gravitational attraction as well as rotational inertia.
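A hedged numerical illustration of Eq. (4.17) (the function name is ours): it exposes the overall mass factor $`[1-(m/\omega )^2]^{\ell -1/2}`$ and the s-wave enhancement just described.

```python
import math

def sigma_D3(ell, omega, R, m):
    # partial absorption cross-section of Eq. (4.17)
    c = 1.0 - (m/omega)**2
    num = math.pi**4 * (ell + 3) * (ell + 1) * c**(ell - 0.5)
    den = 3.0 * 2**(4*ell + 3) * math.factorial(ell + 1)**4
    return num/den * omega**(4*ell + 3) * R**(4*ell + 8)

# ratio massive/massless at m/omega = 0.5: >1 for l=0, <1 for l>=1
for ell in (0, 1, 2):
    print(ell, sigma_D3(ell, 1.0, 0.1, 0.5) / sigma_D3(ell, 1.0, 0.1, 0.0))
```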
The above approximation scheme can be applied to massive scalar particles in all $`N=1`$ $`p`$-brane backgrounds except for the case of $`D=11`$ $`p`$-branes with $`\stackrel{~}{d}=4`$ and $`\stackrel{~}{d}=5`$, in which cases the scalar mass term cannot be neglected in the inner region. For $`N>1`$, we are unable to find solvable approximate equations which give an overlapping inner and outer region.
For the M2-brane, we have $`D=11`$, $`d=3`$, $`\stackrel{~}{d}=6`$ and $`N=1`$:
$$\sigma _{\mathrm{M2}\text{-}\mathrm{brane}}^{(\ell )}=\frac{\pi ^5(\ell +5)(\ell +4)[1-(m/\omega )^2]^{\ell -1/2}}{15\cdot 2^{3\ell +2}\,\ell !\,(\ell +2)!\,\mathrm{\Gamma }^2(\frac{3+\ell }{2})}\omega ^{3\ell +2}R^{3\ell +9}$$
(4.18)
For the M5-brane, we have $`D=11`$, $`d=6`$, $`\stackrel{~}{d}=3`$ and $`N=1`$:
$$\sigma _{\mathrm{M5}\text{-}\mathrm{brane}}^{(\ell )}=\frac{2^{2\ell +5}\pi ^3(\ell +2)(\ell +3/2)(\ell +1)[(\ell +1)!]^2[1-(m/\omega )^2]^{\ell -1/2}}{(2\ell +3)^2[(2\ell +2)!]^4}\omega ^{6\ell +5}R^{6\ell +9}$$
(4.19)
In fact, for all $`N=1`$ $`p`$-branes, other than the two for which the approximation scheme cannot be applied, the partial absorption cross-sections have the same additional factor due to the scalar mass:
$$\sigma _{\mathrm{massive}}^{\ell }=\sigma _{\mathrm{massless}}^{\ell }\,[1-(m/\omega )^2]^{\ell -1/2},$$
(4.20)
for $`m\leq \omega `$, and $`\sigma _{\mathrm{massless}}^{\ell }`$ has the same form as the leading-order absorption for massless scalars. Note the suppression \[enhancement\] of the partial cross-section for $`\ell \geq 1`$ \[for $`\ell =0`$\] when the non-relativistic limit is taken.
## 5 Conclusions
In this paper we have addressed the absorption cross-section for minimally-coupled massive particles in the extreme $`p`$-brane backgrounds. In particular, we found exact absorption probabilities in the cases of the extreme self-dual dyonic string in $`D=6`$ and the two equal-charge extreme black hole in $`D=4`$. Notably, these two examples yield the same wave equations as those of the minimally-coupled massless scalar in the $`D=6`$ extreme dyonic string and two-charge $`D=4`$ extreme black hole backgrounds, respectively. Namely, one of the two charge parameters in the latter (massless) case is traded for the scalar mass parameter in the former (massive) case. Thus, for these equal-charge backgrounds, the scattering of minimally-coupled massive particles can be addressed explicitly, and the dependence of the absorption cross-section on the energy $`\omega `$ (or equivalently on the momentum $`p=\sqrt{\omega ^2-m^2}`$) can be studied. In particular, the non-relativistic limit of the particle motion gives rise to a distinct, resonant-like absorption behavior in the case of the self-dual dyonic string.
We have also found corrections due to the scalar mass for the leading-order absorption cross-sections for D3-, M2- and M5-branes. In particular, in the non-relativistic limit, there is the expected suppression \[enhancement\] in the absorption cross-section for partial waves $`\ell \geq 1`$ \[$`\ell =0`$\].
The results obtained for the absorption cross-section of the minimally-coupled massive scalars, in particular those in the extreme self-dual dyonic string background, may prove useful in the study of AdS/CFT correspondence . Namely, the near-horizon region of the extreme dyonic string background has the topology of $`AdS_3\times S^3`$, with the $`AdS_3`$ cosmological constant $`\mathrm{\Lambda }`$ and the radius $`R`$ of the three-sphere ($`S^3`$) related to the charge $`Q`$ of the self-dual dyonic string as $`\mathrm{\Lambda }=R^2=\sqrt{Q}`$ (see e.g., ). On the other hand, the scattering of the minimally-coupled massive fields (with mass $`m`$) in the $`AdS_3`$ background yields information on the correlation functions of the operators of the boundary $`SL(2,𝐑)\times SL(2,𝐑)`$ conformal field theory with conformal dimensions $`h_\pm =\frac{1}{2}(1\pm \sqrt{1+m^2\mathrm{\Lambda }^2})`$ . The scattering analyzed here corresponds to that of a minimally-coupled massive scalar in the full self-dual string background, rather than in only the truncated $`AdS_3`$ background. These explicit supergravity results may, in turn, shed light on the pathologies of the conformal field theory of the dyonic string background .
# Neutrino oscillation constraints on neutrinoless double-beta decay
## Abstract
It is shown that, in the framework of the scheme with three-neutrino mixing and a mass hierarchy, the results of neutrino oscillation experiments imply an upper bound of about $`10^{-2}`$ eV for the effective Majorana mass in neutrinoless double-$`\beta `$ decay. The schemes with four massive neutrinos are also briefly discussed.
preprint: UWThPh-1999-26 DFTT 21/99
The recent results of the high-precision and high-statistics Super-Kamiokande experiment have confirmed the indications in favor of neutrino oscillations obtained in atmospheric and solar neutrino experiments. Here we will discuss the implications of the results of atmospheric and solar neutrino oscillation experiments for neutrinoless double-$`\beta `$ decay ($`(\beta \beta )_{0\nu }`$) in the framework of the scheme with three neutrinos and a mass hierarchy, which can accommodate atmospheric and solar neutrino oscillations, and in the framework of the schemes with four massive neutrinos, which can also accommodate the $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ and $`\nu _\mu \rightarrow \nu _e`$ oscillations observed in the LSND experiment.
The results of atmospheric neutrino experiments can be explained by $`\nu _\mu \rightarrow \nu _\tau `$ oscillations due to the mass-squared difference
$$\mathrm{\Delta }m_{\mathrm{atm}}^2\simeq (1\text{–}8)\times 10^{-3}\mathrm{eV}^2.$$
(1)
The results of solar neutrino experiments can be explained by $`\nu _e\rightarrow \nu _\mu ,\nu _\tau `$ transitions due to the mass-squared difference
$$\mathrm{\Delta }m_{\mathrm{sun}}^2\simeq (0.5\text{–}10)\times 10^{-10}\mathrm{eV}^2\text{(VO)}$$
(2)
in the case of vacuum oscillations, or
$$\mathrm{\Delta }m_{\mathrm{sun}}^2\simeq (0.4\text{–}1)\times 10^{-5}\mathrm{eV}^2\text{(SMA-MSW)}$$
(3)
in the case of small mixing angle MSW transitions, or
$$\mathrm{\Delta }m_{\mathrm{sun}}^2\simeq (0.6\text{–}20)\times 10^{-5}\mathrm{eV}^2\text{(LMA-MSW)}$$
(4)
in the case of large mixing angle MSW transitions. Hence, atmospheric and solar neutrino data indicate a hierarchy of $`\mathrm{\Delta }m^2`$’s: $`\mathrm{\Delta }m_{\mathrm{sun}}^2\ll \mathrm{\Delta }m_{\mathrm{atm}}^2`$. A natural scheme that can accommodate this hierarchy is the one with three neutrinos and a mass hierarchy $`m_1\ll m_2\ll m_3`$, which is predicted by the see-saw mechanism. In this case we have
$$\mathrm{\Delta }m_{\mathrm{sun}}^2=\mathrm{\Delta }m_{21}^2\simeq m_2^2,\qquad \mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{31}^2\simeq m_3^2.$$
(5)
In the spirit of the see-saw mechanism, we presume that massive neutrinos are Majorana particles and $`(\beta \beta )_{0\nu }`$-decay is allowed. The matrix element of $`(\beta \beta )_{0\nu }`$ decay is proportional to the effective Majorana neutrino mass
$$|m|=\left|\underset{k}{\sum }U_{ek}^2m_k\right|,$$
(6)
where $`U`$ is the mixing matrix that connects the flavor neutrino fields $`\nu _{\alpha L}`$ ($`\alpha =e,\mu ,\tau `$) to the fields $`\nu _{kL}`$ of neutrinos with masses $`m_k`$ through the relation $`\nu _{\alpha L}=\sum _kU_{\alpha k}\nu _{kL}`$. The present experimental upper limit for $`|m|`$ is 0.2 eV at 90% CL. The next generation of $`(\beta \beta )_{0\nu }`$ decay experiments is expected to be sensitive to values of $`|m|`$ in the range $`10^{-2}`$–$`10^{-1}`$ eV.
Since the results of neutrino oscillation experiments allow one to constrain only the moduli of the elements $`U_{ek}`$ of the neutrino mixing matrix, let us consider the upper bound
$$|m|\leq \underset{k}{\sum }|U_{ek}|^2m_k\equiv |m|_{\mathrm{UB}}.$$
(7)
In the framework of the scheme with three neutrinos and a mass hierarchy, the contribution of $`m_1`$ to $`|m|_{\mathrm{UB}}`$ is negligible and the contributions of $`m_2`$ and $`m_3`$ are given, respectively, by
$`|m|_{\mathrm{UB2}}\equiv |U_{e2}|^2m_2\simeq |U_{e2}|^2\sqrt{\mathrm{\Delta }m_{\mathrm{sun}}^2},`$ (8)
$`|m|_{\mathrm{UB3}}\equiv |U_{e3}|^2m_3\simeq |U_{e3}|^2\sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}.`$ (9)
The parameter $`|U_{e2}|^2`$ is large in the case of solar vacuum oscillations, is smaller than $`1/2`$ in the case of large mixing angle MSW transitions and is very small ($`|U_{e2}|^2\sim 2\times 10^{-3}`$) in the case of small mixing angle MSW transitions. Taking into account the respective ranges of $`\mathrm{\Delta }m_{\mathrm{sun}}^2`$ in Eqs.(2)–(4), we have
$$|m|_{\mathrm{UB2}}\lesssim \{\begin{array}{ccc}3\times 10^{-5}\mathrm{eV}\hfill & & \text{(VO)},\hfill \\ 6\times 10^{-6}\mathrm{eV}\hfill & & \text{(SMA-MSW)},\hfill \\ 7\times 10^{-3}\mathrm{eV}\hfill & & \text{(LMA-MSW)}.\hfill \end{array}$$
(10)
Hence, the contribution of $`m_2`$ to the upper bound (7) is small and one expects that the dominant contribution is given by $`m_3\simeq \sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}\simeq (3\text{–}9)\times 10^{-2}\mathrm{eV}`$. However, as we will show in the following, the value of $`|U_{e3}|^2`$ is constrained by the results of the atmospheric Super-Kamiokande experiment and by the negative results of the long-baseline reactor $`\overline{\nu }_e`$ disappearance experiment CHOOZ.
The two-neutrino exclusion plot obtained in the CHOOZ experiment implies that $`|U_{e3}|^2\leq a_e^{\mathrm{CHOOZ}}`$ or $`|U_{e3}|^2\geq 1-a_e^{\mathrm{CHOOZ}}`$, with $`a_e^{\mathrm{CHOOZ}}=\frac{1}{2}\left(1-\sqrt{1-\mathrm{sin}^22\vartheta _{\mathrm{CHOOZ}}}\right)`$. Here $`\mathrm{sin}^22\vartheta _{\mathrm{CHOOZ}}`$ is the upper value of the two-neutrino mixing parameter $`\mathrm{sin}^22\vartheta `$ obtained from the CHOOZ exclusion curve as a function of $`\mathrm{\Delta }m^2=\mathrm{\Delta }m_{31}^2=\mathrm{\Delta }m_{\mathrm{atm}}^2`$, where $`\mathrm{\Delta }m^2`$ is the two-neutrino mass-squared difference. Since the quantity $`a_e^{\mathrm{CHOOZ}}`$ is small for $`\mathrm{\Delta }m_{\mathrm{atm}}^2\gtrsim 10^{-3}\mathrm{eV}^2`$, the results of the CHOOZ experiment imply that $`|U_{e3}|^2`$ is either small or close to one. However, since the survival probability of solar $`\nu _e`$’s is bigger than $`|U_{e3}|^4`$, only the range $`|U_{e3}|^2\leq a_e^{\mathrm{CHOOZ}}`$ is allowed by the results of solar neutrino experiments. Therefore, the contribution of $`m_3`$ to $`|m|_{\mathrm{UB}}`$ is bounded by
$$|m|_{\mathrm{UB3}}\leq a_e^{\mathrm{CHOOZ}}\sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}.$$
(11)
Notice that this limit depends on $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ both explicitly and implicitly through $`a_e^{\mathrm{CHOOZ}}`$.
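For illustration, a short numerical sketch (ours) of the bound (11); the sample CHOOZ limit $`\mathrm{sin}^22\vartheta =0.1`$ at $`\mathrm{\Delta }m_{\mathrm{atm}}^2=3\times 10^{-3}`$ eV<sup>2</sup> is an assumed representative value, not a number taken from the CHOOZ tables.

```python
import math

sin2_2theta = 0.10    # assumed CHOOZ limit at this Delta m^2 (illustrative)
dm2_atm     = 3e-3    # eV^2

a_e   = 0.5 * (1.0 - math.sqrt(1.0 - sin2_2theta))
m_ub3 = a_e * math.sqrt(dm2_atm)
print(f"a_e = {a_e:.3f},  |m|_UB3 <= {m_ub3:.1e} eV")   # ~1.4e-3 eV, within Eq. (12)
```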
The bound in the $`|m|_{\mathrm{UB3}}`$–$`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ plane obtained from the inequality (11) using the CHOOZ exclusion curve is shown in Fig. 1 by the solid line (the region on the right of this curve is excluded). The dashed straight line in Fig. 1 represents the unitarity bound $`|m|_{\mathrm{UB3}}\leq \sqrt{\mathrm{\Delta }m_{\mathrm{atm}}^2}`$.
The shadowed and hatched regions in Fig. 1 are allowed by the analysis of the Super-Kamiokande data and the combined analysis of the Super-Kamiokande and CHOOZ data, respectively. One can see that the value of $`|m|_{\mathrm{UB3}}`$ is tightly constrained:
$$|m|_{\mathrm{UB3}}\lesssim 6\times 10^{-3}\mathrm{eV}.$$
(12)
Therefore, taking into account the inequalities (7), (10) and (12), we conclude that in the scheme with three neutrinos and a mass hierarchy the effective Majorana mass $`|m|`$ in $`(\beta \beta )_{0\nu }`$-decay is bounded by
$$|m|\lesssim 10^{-2}\mathrm{eV}.$$
(13)
Let us consider now the two schemes with four-neutrino mixing that can accommodate the results of solar and atmospheric neutrino experiments and the results of the accelerator LSND experiment:
$$\text{(A)}\underset{\mathrm{LSND}}{\underset{}{\stackrel{\mathrm{atm}}{\stackrel{}{m_1<m_2}}\ll \stackrel{\mathrm{sun}}{\stackrel{}{m_3<m_4}}}},\text{(B)}\underset{\mathrm{LSND}}{\underset{}{\stackrel{\mathrm{sun}}{\stackrel{}{m_1<m_2}}\ll \stackrel{\mathrm{atm}}{\stackrel{}{m_3<m_4}}}}.$$
(14)
These two spectra are characterized by the presence of two couples of close masses separated by a gap of about 1 eV which provides the mass-squared difference $`\mathrm{\Delta }m_{\mathrm{LSND}}^2=\mathrm{\Delta }m_{41}^2`$ responsible of the oscillations observed in the LSND experiment. In the scheme A $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{21}^2`$ and $`\mathrm{\Delta }m_{\mathrm{sun}}^2=\mathrm{\Delta }m_{43}^2`$, whereas in scheme B $`\mathrm{\Delta }m_{\mathrm{atm}}^2=\mathrm{\Delta }m_{43}^2`$ and $`\mathrm{\Delta }m_{\mathrm{sun}}^2=\mathrm{\Delta }m_{21}^2`$.
It has been shown that the results of the short-baseline $`\overline{\nu }_e`$ disappearance experiment Bugey, in which no indication in favor of neutrino oscillations was found, imply that the mixing of $`\nu _e`$ with the two “heavy” neutrinos $`\nu _3`$ and $`\nu _4`$ is large in scheme A and small in scheme B. Therefore, if scheme A is realized in nature the effective Majorana mass in $`(\beta \beta )_{0\nu }`$ decay can be as large as $`m_3\simeq m_4\simeq \sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}\simeq 0.5\text{–}1.2\mathrm{eV}`$. On the other hand, in scheme B $`(\beta \beta )_{0\nu }`$ decay is strongly suppressed. Indeed, the contribution of $`m_2`$ to the upper bound (7) is limited by Eq.(10) and the contribution of $`m_3`$ and $`m_4`$, $`|m|_{\mathrm{UB34}}\equiv (|U_{e3}|^2+|U_{e4}|^2)\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$, is limited by the inequality
$$|m|_{\mathrm{UB34}}\leq a_e^{\mathrm{Bugey}}\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2},$$
(15)
where $`a_e^{\mathrm{Bugey}}`$ is given by the exclusion curve of the Bugey experiment. The numerical value of the upper bound (15) is depicted in Fig. 2 by the solid line. The dashed straight line in Fig. 2 represents the unitarity bound $`|m|_{\mathrm{UB34}}\leq \sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$ and the shadowed region indicates the interval of $`\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ allowed at 90% CL by the results of the LSND experiment: $`0.22\mathrm{eV}^2\lesssim \mathrm{\Delta }m_{\mathrm{LSND}}^2\lesssim 1.56\mathrm{eV}^2`$. From Fig. 2 one can see that $`|m|_{\mathrm{UB34}}\lesssim 2\times 10^{-2}\mathrm{eV}`$. Therefore, in scheme B we have the upper bound
$$|m|\lesssim 2\times 10^{-2}\mathrm{eV}.$$
(16)
In conclusion, the results of the analysis of neutrino oscillation data show that the effective Majorana mass $`|m|`$ in neutrinoless double-$`\beta `$ decay is smaller than about $`10^{-2}`$ eV in the scheme with mixing of three neutrinos and a mass hierarchy, is smaller than about $`2\times 10^{-2}`$ eV in the four-neutrino mixing scheme B, whereas it can be as large as $`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}\simeq 0.5\text{–}1.2\mathrm{eV}`$ in the four-neutrino mixing scheme A.
# Diffraction-Limited Imaging and Photometry of NGC 1068
## 1 Introduction
As a close, luminous active galactic nucleus (AGN) \[14.4 Mpc (Tully (1988)), so that 1 arcsec = 72 pc\], NGC 1068 has been studied at nearly every available spatial resolution and wavelength for thirty years. While classified as a Seyfert 2 based on the presence of narrow emission lines and the absence of broad ones, polarization studies have detected broad wings on its narrow emission lines (Antonucci & Miller (1985)). These observations suggest that NGC 1068 harbors an obscured Seyfert 1 nucleus whose broad lines are scattered into our line of sight. Significant modeling of the spectrum and spectral energy distribution, invoking a dusty torus which conceals the nucleus, has been done by Pier & Krolik (1993) and Granato et al. (1997), and these models can reproduce the observed emission from the X-ray through the near-infrared.
At scales of a few hundred parsecs around the nucleus, HST narrow band (Macchetto et al. (1994)) and continuum (Lynds et al. (1991)) imaging show a non-uniform conical narrow-line region. These observations suggest that clumps of gas have been ionized by a partially collimated nuclear source. Mid-infrared measurements of this area have revealed the presence of warm gas (Cameron et al. (1993)). Previous near infrared one-dimensional speckle measurements of the nucleus (McCarthy et al. (1982); Chelli et al. (1987)) showed extended emission on the 100 pc scale, and more recent two-dimensional speckle work finds extended emission closer to the nucleus (Wittkowski et al. (1998)). Radio VLBI measurements (Greenhill et al. (1996)) of water maser emission demonstrate the presence of a thick torus, and high-resolution radio maps of the nucleus (Gallimore et al. 1996b ) suggest that the obscuring material takes the form of a warped disk.
The combination of these results has made the nucleus of NGC 1068 the prototypical obscured Seyfert 1. In addition, the nucleus resides in an SB host galaxy with a 3 kpc bar (Scoville et al. (1988); Thronson et al. (1989)) and active star formation in the inner 10 kpc (Telesco et al. (1988)). Authors have speculated on the relationship between this star formation and the activity of the nucleus (Norman & Scoville (1988)).
Near-infrared measurements trace the distribution of hot dust and stars in NGC 1068, and thus characterize the physical condition of the material near the nucleus. The ability to do speckle imaging with the W. M. Keck Telescope allows a resolution of 0.″05, or 3.6 pc, at 2.2 $`\mu `$m, which for the first time provides a direct comparison between near infrared and visual (HST) measurements. We use these speckle measurements, as well as complementary 1.6 $`\mu `$m speckle imaging from the 200-inch Telescope and direct imaging at both wavelengths from the Keck Telescope, to investigate the physical conditions in the near nuclear region of NGC 1068 on hitherto unavailable scales.
## 2 Observations
Speckle observations of NGC 1068 were made on four nights, 1994 October 18, 1994 December 19, and 1995 November 4–5, with the 200-inch Hale Telescope. A 64$`\times `$64 subsection of a 256$`\times `$256 Santa Barbara Research Center InSb array was used in order to allow continuous readout of speckle frames every 0.07 or 0.10 s. Speckle frames were collected in sets of $`\sim `$400 images on the AGN and on the two nearby unresolved SAO catalog stars. The sources were observed at both $`H`$-band ($`\lambda _0`$=1.65 $`\mu `$m, $`\mathrm{\Delta }\lambda `$=0.32 $`\mu `$m) and $`K`$-band ($`\lambda _0`$=2.2 $`\mu `$m, $`\mathrm{\Delta }\lambda `$=0.4 $`\mu `$m). Additional observations were made on three nights, 1995 December 18-20, at the W. M. Keck Observatory. Images from the full 256$`\times `$256 InSb array of the facility’s Near Infrared Camera (NIRC; Matthews & Soifer (1994)), were taken at a rate of one 0.118 s image every 1.5 s in sets of 100 images on the AGN and on the two nearby unresolved stars SAO 130046 and SAO 110692. A $`K`$-band ($`\lambda _0`$=2.21 $`\mu `$m, $`\mathrm{\Delta }\lambda `$=0.43 $`\mu `$m) filter was used for all the observations. The basic observing strategy was reported in Matthews et al. (1996). To reduce the noise contributed by phase discontinuities between the 36 segments of the Keck Telescope, the object and calibrators were observed at 12 different pupil orientations. A summary of observations is provided in Table 1.
At both telescopes, reimaging optics were used to convert the standard detector plate scales to scales appropriate for diffraction limited imaging. At the 200-inch Hale Telescope, detector pixel scales of 0.″034 and 0.″036 pixel$`^{-1}`$ were used and at the Keck Telescope, a detector pixel scale of 0.″021 pixel$`^{-1}`$ was used. The pixel scale at the 200-inch Telescope was chosen to oversample the $`K`$-band while still allowing diffraction limited imaging in the $`H`$-band. The pixel scale at the Keck Telescope was chosen to sample optimally the aperture in the $`K`$-band.
Several long exposure images of the nuclear region were also taken at the Keck Telescope under photometric conditions in both the $`H`$- and $`K`$-bands. In order to avoid saturating the detector, the speckle plate scale of 0.″021 pixel$`^{-1}`$ was used for these images. The total integration times were 30 seconds at $`K`$-band and 40 seconds at $`H`$-band, and the seeing for these images was 0.″45. A single 5 second $`K`$-band image of NGC 1068 was also obtained at the Keck Telescope with a pixel scale of 0.″15 pixel$`^{-1}`$ and seeing of 0.″45. Although the center is saturated, this image captured the distribution of galactic $`K`$-band flux at distances greater than 0.″75 from the nucleus. The HST infrared standard stars of Persson (1998) were observed both with and without the reimaging optics.
## 3 Data Analysis
In preliminary processing, each image was sky subtracted and flat fielded, and bad pixels were corrected by interpolation. The full NIRC frame of 256$`\times `$256 pixels was clipped around the centroid of each speckle frame to 128$`\times `$128 pixels (2.″6 on a side). Outside this smaller field, the signal-to-noise ratio in each pixel is less than one-fifth, so clipping cut out pixels which would add only noise to the Fourier analysis. In the 200-inch data no clipping was necessary since a smaller field was used in collecting the images. In the second stage of analysis, the object’s Fourier amplitudes and phases were recovered via classical speckle analysis (Labeyrie (1970)) and bispectral analysis (Weigelt (1987)), respectively. For the data from the Keck Telescope, both processes were modified from the standard procedure to incorporate the field rotation that occurred during the observations (Matthews et al. (1996)). The observations were made over a spread of 103° in parallactic angle, although the change in parallactic angle over a single stack of 100 frames was always less than $`\sim `$2°. Linear interpolation was used to find the rotated pixel values in each frame. The bispectral analysis was sufficiently computationally intensive as to require the use of the Caltech Concurrent Supercomputing Facility’s nCUBE2 and Intel Delta computers.
In order to go from the Fourier components calculated above to a final image, it is necessary to include a smoothing function, effectively a telescope transfer function; the Fourier amplitudes were multiplied by a Gaussian of FWHM equal to $`\lambda /D`$, where $`\lambda `$ is the wavelength of the observations and $`D`$ is the diameter of the telescope. Then, the amplitudes and phases were combined directly in an inverse transform to produce the final image.
## 4 Results
### 4.1 $`K`$-band Results from the Keck Telescope
Figure 1 presents the 0.″05 (or 3.6 pc) resolution $`K`$-band image produced by speckle imaging using all of the nearly 4000 frames obtained in 1995 December. Each pixel is 0.″021 $`\times `$ 0.″021 and the field of view has been clipped to 0.″67 on a side. The nuclear emission is seen to be comprised of two components, an unresolved point source and an extended region symmetric about the nucleus with a major axis of $`\sim `$0.″3 (22 pc) and a minor axis of $`\sim `$0.″18 (13 pc). The calculated Fourier phases were consistent with zero, so the extended component is symmetric about the nucleus.
For comparison with previous results and with models of the nucleus, we estimated the fractions of the total nuclear flux density arising in the point source and extended components. These quantities, along with the orientation and size of the extended emission, were found by fitting a two component model to the two-dimensional average object visibility. The fit was performed in the spatial frequency, i.e. Fourier, domain rather than the image domain so that no tapering function had to be applied to the high spatial frequencies. This model was intended not to reproduce the exact distribution of flux but to provide a robust estimate of the magnitude of the contributions of the two components.
The extended emission was modeled as a smoothly falling exponential of the form $`Ie^{-kd^n}`$. The parameter $`k`$ measures the size of the extended emission, and the variable $`d`$ encodes the shape and orientation of the extended emission; $`d`$ was parameterized by an ellipticity and angle. The power $`n`$ determines how quickly the visibility falls with spatial frequency and hence the overall shape of the extended emission. The power $`n`$ was assumed, by ad-hoc phenomenological inspection, to be $`3`$ for the Keck Telescope $`K`$-band visibility. Since this model has only the explicit purpose of measuring the contributions from the two components, it is important for it to fit the overall shape of the visibility but unimportant whether it reproduces the details of the visibility at low spatial frequency. In particular, the lowest spatial frequencies were not used in the fit, both because they are the most corrupted by global changes in the seeing in the time between when the object and calibrator were measured and because the extended flux from the galaxy causes the visibility to drop at the low frequencies.
The point source was included as a constant visibility offset, i.e. the same visibility at all spatial frequencies. The fit was performed as a $`\chi ^2`$ minimization of the two-dimensional object visibility, where each frequency was weighted by its statistical uncertainty as calculated from the ensemble of power spectra, subject to the constraints that the parameters be greater than or equal to zero. All of the points from 0–20 cycles arcsec$`^{-1}`$ were included in the fit. The salient results of this model are given in Table 2. Figure 2 shows the radial (i.e. azimuthally averaged) profiles of the measured two-dimensional visibilities and of the fit. The differences between the data and model, i.e. the residuals, are also shown in the figure.
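The essence of this fit can be sketched as follows (a schematic reimplementation, ours, with synthetic data and placeholder parameter values chosen only to echo the numbers quoted below; it is not the authors’ pipeline):

```python
import numpy as np
from scipy.optimize import least_squares

n = 3.0                                    # exponent fixed for the Keck K-band fit
def model(p, u, v):
    I, k, ecc, pa, c = p                   # extended amplitude/size/shape/angle + point source
    up =  u*np.cos(pa) + v*np.sin(pa)      # rotate (u,v) into the ellipse frame
    vp = -u*np.sin(pa) + v*np.cos(pa)
    d  = np.hypot(up, vp/ecc)              # elliptical radius in cycles/arcsec
    return I*np.exp(-k*d**n) + c           # falling exponential + constant offset

# synthetic "measured" visibility out to 20 cycles/arcsec
u, v = np.meshgrid(np.linspace(0, 20, 41), np.linspace(-20, 20, 81))
vis  = model((0.51, 2e-3, 0.6, 0.3, 0.49), u, v) + 0.01*np.random.randn(*u.shape)

fit = least_squares(lambda p: (model(p, u, v) - vis).ravel(),
                    x0=(0.5, 1e-3, 0.8, 0.0, 0.5),
                    bounds=([0, 0, 0.1, -np.pi, 0], [2, 1, 1, np.pi, 2]))
print("point-source fraction:", fit.x[4]/(fit.x[0] + fit.x[4]))
```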
The result of the fitting demonstrates that 49% of the $`K`$-band flux density in the speckle image is contained in the unresolved core, i.e. in a diffraction limited beam, and 51% is in the extended region. The uncertainty in this fit is taken to be the uncertainty in the normalization of the power spectra, or 8%. § 5.1 describes how the total flux in the speckle image was determined.
### 4.2 $`H`$-band Results from the Palomar 200-inch Telescope
Of all the $`H`$-band data taken at the 200-inch Telescope, the observations from 19 December 1994 had the best signal to noise ratio at high spatial frequencies, so they were used for the analysis which follows. The observations from other nights are consistent with that of 19 December, but of lower quality. The final $`H`$-band image from the 200-inch Telescope, with half the resolution of the $`K`$-band image from the Keck Telescope (0.″1 or 7.2 pc), similarly consists of both a point source and extended emission.
The same fitting procedure described in the previous section was used to fit the two-dimensional $`H`$-band visibility, but the power $`n`$ was taken to be $`1`$. Since the value of $`n`$ primarily affects the shape of the visibility at low spatial frequencies (where, as noted above, the measurements are sensitive to seeing variation and the galactic emission) we make no comparison between the $`H`$-band and $`K`$-band data on these scales.
In the $`H`$-band data from the 200-inch Telescope, 63% of the flux density from the speckle image is contained in the unresolved point source (i.e. within a diffraction limited beam) and 37% is in the extended region. The uncertainty in this fit is taken to be the uncertainty in the normalization of the visibilities, or 8%. A discussion of how the total flux in the $`H`$-band image is determined is given in § 5.1.
### 4.3 $`H-K`$ Color
The data from the Keck Telescope, at higher spatial resolution than that from the 200-inch Telescope, resolve more of the nuclear $`K`$-band flux into extended emission. In order to compute the $`H-K`$ colors of the unresolved point source and the extended emission, however, the $`K`$-band data from the Keck Telescope must be smoothed to the 200-inch Telescope resolution. Instead of smoothing the reconstructed image, the object visibility from the Keck Telescope data was fit out to a spatial frequency of 11.3 cycles arcsec⁻¹, i.e. the same resolution obtainable at the 200-inch Telescope, using the procedure described above in § 4.1. In this case the $`K`$ band shows the same distribution of flux density as the $`H`$ band, i.e. 64% in an unresolved core and 36% in an extended region. Since both the $`K`$-band and $`H`$-band data show the same fraction of their respective total fluxes in the point source, the $`H-K`$ color of the point source is the same as that of the extended emission. The uncertainty in this color is 11%, the combination of the uncertainties in each of the fits to the visibilities. The actual value of this color is computed below in the discussion section.
### 4.4 Upper Limit to the Size of the Point-like Nucleus
The speckle data can be used to place an upper limit on the size of the nuclear point source. If the core were actually extended, it would have the effect of reducing the visibilities at high spatial frequencies, but instead we find that the visibilities flatten out at high spatial frequencies. The fit to the Keck Telescope data, shown in a radial profile plot in Figure 2, leaves residuals of less than 10% at frequencies above 19 cycles arcsec⁻¹. The presence of another, undetected component is therefore constrained by the uncertainty in the residuals at the highest frequencies. Without making an a priori assumption as to the shape of the extension, should it be present, the upper limit to its size can be taken as the highest spatial frequency at which high S/N information was obtained in the data, in this case 19.7 cycles arcsec⁻¹, or 0.″051. A more stringent upper limit on the point source size can be placed under the assumption that the true nucleus has a Gaussian shape. The width of the largest Gaussian which could be hidden in the $`K`$-band visibility data (i.e. which will not differ from the data by more than 3$`\sigma `$ at the highest spatial frequency) is 0.″02 or 1.4 pc.
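This Gaussian limit can be sketched as follows, assuming the standard visibility of a circular Gaussian source; the numerical values in the comments are illustrative only.

```python
import numpy as np

def gaussian_visibility(theta_fwhm, f):
    """Visibility amplitude of a circular Gaussian of FWHM theta_fwhm
    (arcsec) at spatial frequency f (cycles per arcsec)."""
    return np.exp(-(np.pi * theta_fwhm * f) ** 2 / (4.0 * np.log(2.0)))

def max_hidden_fwhm(f_max, allowed_drop):
    """Largest Gaussian FWHM whose visibility at f_max stays within the
    allowed (e.g. 3-sigma) drop below unity."""
    return np.sqrt(-4.0 * np.log(2.0) * np.log(1.0 - allowed_drop)) / (np.pi * f_max)

# With f_max = 19.7 cycles/arcsec and an allowed drop of order three times
# the quoted residual uncertainty, the limit comes out near 0.02 arcsec:
# print(max_hidden_fwhm(19.7, 0.3))
```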
### 4.5 Photometry
Because of limitations deriving from the small field size, low S/N, and image wander in speckle images, it is difficult to make photometric measurements from speckle data. Therefore, we measured the total flux density in a beam the size of the speckle frames from 0.″45-resolution long-exposure images made at the Keck Telescope in both the $`H`$ and $`K`$ bands. In a beam radius of 1.″25, the $`K`$-band magnitude is 7.5 and the $`H`$-band magnitude is 9.26, resulting in an $`H-K`$ color of 1.76 mag. Aperture photometry from these images (in both magnitudes and Janskys) at a variety of other beam sizes is reported in Table 3 and shown in Figure 3.
The contribution of the galaxy to the photometry at small beam sizes was estimated by fitting the galaxy surface brightness at radii between 1.″8 and 27″ with a de Vaucouleurs function. Extrapolating the fit to a beam of radius 1.″25, approximately the same size as the speckle field of view, implies that only 15% of the flux arises in the galaxy. Furthermore, the shape of the surface brightness profile within 1.″25 is consistent with being the sum of the de Vaucouleurs profile and a point source.
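A minimal sketch of such a fit, assuming the standard $`r^{1/4}`$-law form in magnitudes; the data arrays and initial guesses are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def de_vaucouleurs(r, mu_e, r_e):
    """r^(1/4)-law surface brightness (mag arcsec^-2) with effective
    radius r_e (arcsec) and effective surface brightness mu_e."""
    return mu_e + 8.3268 * ((r / r_e) ** 0.25 - 1.0)

# Hypothetical usage: radii (1.8"-27") and mu hold the measured profile.
# popt, _ = curve_fit(de_vaucouleurs, radii, mu, p0=[16.0, 10.0])
# The fitted model, integrated over a 1.25"-radius beam, then estimates
# the stellar contribution to the speckle photometry.
```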
### 4.6 Comparison to Previous Measurements
The size of the near-infrared core of NGC 1068 has been the subject of investigation by many previous researchers. One-dimensional speckle interferometry by McCarthy et al. (1982) at the 3.8 m Mayall Telescope placed an upper limit of 0.″2 on the size of the unresolved core, and found extended emission that was 25% of the total 2.2 $`\mu `$m flux in the beam. As shown in Figure 2, the $`K`$-band visibility measured in this work is consistent with 1.0 out to 1 cycle arcsec⁻¹, whereas in the McCarthy et al. data the visibility decreased to 0.8 by 0.5 cycles arcsec⁻¹. This is probably a consequence of their large, 5″ by 10″, beam, which captured more light from the spatially extended distribution of galactic stars than our smaller 2.″6 square beam did.
Similarly, one-dimensional speckle at 3.6 $`\mu `$m by Chelli et al. (1987) at the 3.6 m ESO Telescope found an unresolved core, large-scale (100 pc) emission, and a third component of compact extended emission 0.″2 around the nucleus. The visibility obtained by Chelli et al. agrees very well with what we report. Their data also suggested that, of the two position angles they measured, the compact extended emission was larger along an angle of 135° than along 45°, which is consistent with the emission we measure, shown in Figure 1.
In more recent imaging with an aperture mask, Thatte et al. (1997) find that 94% of the $`K`$-band flux in a 1″-diameter aperture comes from a point source smaller than 0.″03. They report the flux from this source as 190 mJy. Both of these measurements disagree with what is reported above in § 4.1 and Table 3, respectively.
Recent two-dimensional speckle imaging by Wittkowski et al. (1998) at the 6 m Special Astrophysical Observatory Telescope finds extended emission which is 20% of the total $`K`$-band flux and, in addition, places a limit on the core size of 0.″03. These estimates came from assuming a uniform-disk model for the reconstructed emission, but the authors note that an alternative explanation for their data would be an unresolved central object plus extended emission. Fitting their data with this model, as described in §§ 4.1 and 4.2 above, would increase the fraction of the flux density attributable to the extended emission. While of insufficient sensitivity to show the extended emission reported in this work, their results are consistent with what is described here.
## 5 Discussion
Discussed in this section are three components of the emission from the central 1.″25-radius region of the nucleus: (1) the central point source, (2) the newly imaged extended nuclear emission reported in § 4 (which accounts for approximately 50% of the flux at 2 $`\mu `$m previously attributed to the point source), and (3) the stars of the underlying host galaxy.
### 5.1 $`H-K`$ Color of the Nucleus
While it is possible to tell from the speckle measurements alone that the color of the point source and the color of the newly imaged extended emission are the same, it is not possible to determine the color itself. This is because speckle, as an interferometric technique, resolves out (i.e. is not sensitive to) smooth large-scale extended emission which fills the field of view. Thus, not all of the flux measured in a beam the size of the speckle frames, as reported in Table 3, can be automatically attributed to the features observed in the speckle image. However, the color of the emission in the speckle image can be deduced by subtracting the contribution from large-scale galactic emission from the total flux measured in the speckle beam. We assume that the only such contribution comes from the distribution of stars in the host galaxy.
From the de Vaucouleurs model fitting described in § 4.5, the galactic stellar contribution to the 1.″25-radius speckle beam was determined to be 97 mJy at $`K`$ band, or 15% of the total $`K`$-band emission. Since no large (38″ square) image of NGC 1068 at $`H`$ band such as the one taken at $`K`$ band was available, it was assumed that the $`H-K`$ color of the galactic stellar population was 0.3 mag in the nuclear region, i.e. the same color as measured in aperture photometry off the nucleus (Thronson et al. (1989)). Combining this color with the $`K`$-band measurement, it was calculated that 118 mJy, or 57%, of the $`H`$-band emission in the 1.″25-radius beam is due to stars. All flux in excess of the galactic stellar contribution was assumed to come from the nucleus plus the extended emission reported in § 4, and this flux then has an $`H-K`$ color of 2.5 mag. If there is another population of stars in excess of the assumed galactic contribution, this estimate of the $`H-K`$ color would be low. If there is substantial reddening of the stars in the nucleus compared with far from the nucleus, this estimate would be high. However, $`H-K`$ = 2.1 mag can safely be considered a lower limit to the color based on the aperture photometry reported in Table 3. The statistical uncertainty in this color is a combination of the uncertainties in the photometric calibration, the aperture photometry, and the fit to the photometry, for a total of 9%. Combined with the uncertainty in the visibility fitting from §§ 4.1 and 4.2, our best estimate of the color of the extended emission is 2.5 $`\pm `$ 0.2 mag.
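The arithmetic behind this color estimate can be sketched as follows; the zero points are approximate textbook values and the total beam fluxes are back-computed from the magnitudes and percentages quoted above, so all numbers here are illustrative.

```python
import numpy as np

ZP_H, ZP_K = 1020.0, 640.0   # approximate H and K zero points (Jy), assumed

def hk_color(f_h_mjy, f_k_mjy):
    """H-K color in magnitudes from flux densities in mJy."""
    return 2.5 * np.log10((f_k_mjy / f_h_mjy) * (ZP_H / ZP_K))

# Totals in the 1.25"-radius beam implied by K = 7.5 and H = 9.26 mag:
f_k_total, f_h_total = 647.0, 202.0          # mJy
f_k_nuc = f_k_total - 97.0                   # subtract 15% stellar light
f_h_nuc = f_h_total - 118.0                  # subtract 57% stellar light
# hk_color(f_h_nuc, f_k_nuc) comes out near 2.5 mag, as quoted.
```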
### 5.2 Possible Mechanisms for Extended Emission
It is of interest to consider the origin of the observed extended nuclear emission. There are several possibilities: it could be emission from stars, nuclear light reflected off dust or electrons, or emission from hot dust, either in equilibrium or transiently heated by single photons.
The $`H-K`$ color of the extended emission, 2.5 mag, is significantly redder than that of any stellar population. Thus, if the emission is from stars, it must be highly extincted. A color excess of 2.2 mag, obtained by assuming the stars have the same intrinsic color as the galactic stars far from the nucleus (i.e., $`H-K`$ = 0.3 mag), necessitates $`A_V`$ = 34 mag. This is similar to the extinction found in the central parsec of our own Galaxy and produced in models of thermal emission from dust in a thick torus in NGC 1068 (Efstathiou et al. (1995); Young et al. (1995)). It is, however, much higher than the extinction of $`A_V`$ = 0–2 mag suggested by the data of Thatte et al. (1997) for the central stellar cluster. It is known, however, that the extinction is quite patchy near the nucleus (Blietz et al. (1994)). The extended emission reported in this work is smooth over a length of 20 pc, and the substantial reddening required for the extended emission to be from stars seems unlikely to be similarly smooth over this scale.
A bigger problem with this hypothesis comes from the luminosity of the extended emission. $`K`$-band imaging spectroscopy of the nuclear region (Thatte et al. (1997)) has revealed the presence of a dense stellar cluster which, based on the equivalent width of the CO band-head, accounts for 7% of the total nuclear luminosity. However, 50% of the nuclear flux is resolved by speckle. The $`K`$-band luminosity of the extended region, if it is emitted isotropically, is $`4.7\times 10^8L_{\odot }`$, a factor of ten larger than the stellar luminosity given in Thatte et al.
Finally, in this scenario the fact that the point source and the extended emission have the same color would be purely accidental, unless the point source is also composed of stars. If the extended near-infrared structure is composed of stars, it is also unlikely to be a continuation of a larger-scale structure in the host galaxy. NGC 1068 does have a well-known large-scale (∼1 kpc) stellar bar (Scoville et al. (1988); Thronson et al. (1989)), but that bar is oriented at a position angle of approximately 45°, roughly perpendicular to the extended emission measured here.
Light from the point source reflected off dust or electrons in the narrow-line region would, on the other hand, provide a natural explanation for the similar colors of the point source and extended emission. The nucleus is highly (4–5%) polarized at 2.2 $`\mu `$m in a 4″ beam, suggesting that there is extensive scattering of the nuclear radiation (Lebofsky et al. (1978)). There are three possible sources of scattering: the warm electron gas which scatters the broad-line emission, another population of electrons, or dust. The warm electrons modeled by Miller, Goodrich, & Matthews (1991) are located at least 30 pc from the nucleus, i.e. outside the extended emission reported here. Therefore scattering from these electrons is unlikely to be the source of the extended region. Scattering from other electrons within a few parsecs of the point source would also tend to reflect the broad-line region, so it is reasonable to conclude that there is no second population of electrons beyond that found by Miller et al.
The albedo of an ensemble of dust grains, and its wavelength dependence, varies widely with the grain size distribution (e.g. Lehtinen & Mattila (1996)). For the small grains expected in regions with high UV radiation fields, such as the environment around the nucleus of NGC 1068, observations and theory (Draine & Lee (1984)) predict that the albedo at 2.2 $`\mu `$m is approximately 20% lower than at 1.6 $`\mu `$m. Therefore, if the observed extended emission were reflected light from the point source of NGC 1068, it would be significantly bluer than the point source itself, whereas we observe the same color in the two sources. On the optimistic assumption that the albedo at 2.2 $`\mu `$m has a value of 0.8 and that the dust scatters isotropically, the central source would have a true 2.2 $`\mu `$m luminosity 15 times greater than observed.
Observations by Glass (1997) have shown that the nucleus of NGC 1068 became steadily brighter at $`K`$ band over a twenty-year span from 1976 to 1994 before leveling off. He did not detect a concomitant rise in the $`H`$-band emission, but this is understandable given the galactic stellar contamination of his 12″ beam. If the nuclear emission comes from dust on the inner edge of the torus which is heated to just below its sublimation point, an increase in luminosity will push the inner edge of the torus further from the nucleus but not change the intrinsic color of the emission. However, the time constant for destroying grains may be long, on the order of years (Voit (1992)), so that an increase in luminosity produces temporarily higher temperatures and therefore bluer colors.
The light crossing time of the extended emission reported in § 4 is approximately 10 yr, so if it is reflected point source light, it should show a 10 yr lag in color compared to the point source. We do not know what the color of the point source was 10 years ago, but for it to have been just red enough to offset the tendency of dust reflection to make the emission bluer would be quite a conspiracy.
The final possibility is that the extended emission comes from hot dust. The color temperature implied by an $`H-K`$ color of 2.5 mag is 800 K. Taking the central luminosity of the AGN as $`1.5\times 10^{11}L_{\odot }`$, we can calculate that dust grains heated to this temperature in equilibrium would lie at most 1 pc, or 0.″01 (for silicate grains), from the point source. The extended emission reported in this paper is a factor of 10 larger. It has been suggested by other authors (Baldwin et al. (1987); Braatz et al. (1993); Bock et al. (1998)) that extended 10 $`\mu `$m emission on 200 pc scales may be caused by heating of grains by the central source if the luminosity is beamed along the direction of the radio jets rather than emitted isotropically. The good spatial correspondence between the mid-infrared emission and the radio jet (Cameron et al. (1993); Bock et al. (1998)) also lends credence to this idea. The component of the radio jet thought to lie at the infrared point source (Gallimore et al. 1996a), S1, sits in a region of extended radio continuum emission at a position angle of 175°. A beaming factor of 200, which would be sufficient to explain the extended mid-infrared emission, would also be sufficient to produce 800 K grains at 10 pc from the point source. By contrast, Efstathiou et al. (1995) derive a beaming factor of ∼6 based on their fit to the near-infrared spectrum of the nucleus, and this would be insufficient to heat grains to 800 K at 10 pc from the point source.
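The equilibrium radius quoted above can be estimated with a simple grey-grain energy balance, sketched below; the blackbody value comes out a few times smaller than 1 pc, and the reduced infrared emission efficiency of small silicate grains pushes the radius outward toward the quoted limit.

```python
import numpy as np

L_SUN = 3.846e33        # erg s^-1
SIGMA_SB = 5.6704e-5    # erg cm^-2 s^-1 K^-4
PC = 3.086e18           # cm

def equilibrium_radius_pc(L_lsun, T_grain):
    """Distance at which a grey (blackbody-like) grain reaches
    temperature T_grain around a source of luminosity L_lsun."""
    L = L_lsun * L_SUN
    r_cm = np.sqrt(L / (16.0 * np.pi * SIGMA_SB * T_grain ** 4))
    return r_cm / PC

# print(equilibrium_radius_pc(1.5e11, 800.0))  # ~0.2 pc for a grey grain
```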
It is possible that the extended near-infrared emission is from hot dust which is heated not externally (i.e. by the central AGN), but internally, for example, by an interaction with the jet. The jets are observed to drive the motion of the emission line gas (Axon et al. (1998)), so it is reasonable to assume that they are dumping energy into the circumnuclear gas. More complete models would have to be made to examine this hypothesis.
A natural explanation for the high color temperature which is reconcilable with an isotropically emitting central source would be single-photon heating of small grains (Sellgren et al. (1984)). The rate of UV photons necessary to produce the total $`K`$-band luminosity of the extended nuclear emission can be calculated if one knows the mass of dust present in the region. If the hydrogen density is $`10^5`$ cm⁻³ (Tacconi et al. (1994)), the dust-to-gas mass ratio is $`10^{-2}`$, and the grains radiate with unity efficiency in the infrared, the rate of photons needed per grain is ∼$`10^{-5}`$ s⁻¹. This rate is well below that expected from the intrinsic UV/X-ray spectrum of NGC 1068 (Pier et al. (1994)). However, polycyclic aromatic hydrocarbons (PAHs) are thought to be destroyed by intense X-ray/UV radiation fields (Voit (1992)), and the 3.3 $`\mu `$m emission feature associated with PAHs has not been unambiguously detected at the nucleus (Bland-Hawthorn et al. (1997)). On the other hand, if the dust along the edges of the extended torus, as in Figure 4, were illuminated by UV photons reflected off the high-lying electron cloud and yet protected from the nucleus by the bulk of the torus, the X-ray flux it intercepts would be substantially reduced. Miller et al. (1991) predict that if the optical depth to electron scattering is about 0.1 and there is a dusty disk of dimension $`10^{20}`$ cm surrounding the central region, then about 10% of the central UV luminosity would be back-scattered onto the disk. If the disk is not uniform, as is likely considering the lumpy high-resolution radio maps, some regions would have column densities high enough to stop the grain-destroying X-rays yet see a reflected UV flux sufficient to create transient grain heating.
### 5.3 Comparison to Models and Line Emission
In the models of infrared emission from NGC 1068 (e.g. Pier & Krolik (1993); Efstathiou et al. (1995); Granato et al. (1997)) commonly found in the literature, the central source is surrounded by an optically thick torus. The torus is heated by the ultraviolet and X-ray photons from the accretion disk plus black hole system, to which it is optically thick, but ionizing photons escape along the axis of the torus. The inner radius of the torus, ∼0.2 pc, is set by the sublimation point of the dust, and its outer radius, ∼40 pc (Granato et al. (1997)), is set by models of its infrared emission. The line of sight to this torus is nearly edge-on, passing through 70–1000 magnitudes of visual extinction depending on the model, and therefore does not permit a direct view of the central source. The observed 1–2 $`\mu `$m emission is produced by thermal radiation from hot dust on the inner edge of the torus, which, because it is on the edge, escapes through a region of moderate extinction. The geometry of the torus is constrained by the conical shape of the narrow-line region to have an opening angle of approximately 45°. A cartoon of this model is shown in Figure 4. The 2 pc upper limit placed on the size of the point source in § 4.4 is consistent with this model, but the fact that the extended emission we observe is much larger than the inner edge of the torus means that we must add to this picture. The emission we observe at 10 pc from the nucleus could come from a larger-scale dusty structure, perhaps an extension of the torus, if the emission can be produced by one of the mechanisms outlined in § 5.2.
Light scattering off dust is observed in the narrow-line region much further from the central source (Miller, Goodrich, & Matthews (1991)) than predicted by models of the spectral energy distribution (Efstathiou et al. (1995)). The narrow-line emission comes from clouds excited by the central source (Macchetto et al. (1994)), and it extends to hundreds of parsecs at a position angle of approximately 45°. The placement of the 2 $`\mu `$m point source and extended region, shown superposed on the HST narrow-line image in Figure 5, shows that the extended 2 $`\mu `$m emission lies alongside the bright emission knots in the visual ionization cone. Of course, the registration of the infrared and optical images is not known to exquisite precision. The best estimates from Thatte et al. (1997) have a 0.″1 uncertainty in the registration, and this uncertainty nearly encompasses the size of the extended near-infrared emission.
## 6 Conclusions
Two components of the nuclear 2.2 and 1.6 $`\mu `$m emission of NGC 1068, in addition to its galactic stellar population, have been detected with speckle imaging on the Keck and 200-inch Telescopes. The observations reveal an extended region of emission that accounts for nearly 50% of the nuclear flux at $`K`$ band. This region extends 10 pc along its major axis and 6 pc along its minor axis on either side of an unresolved point-source nucleus which is at most 0.″02, or 1.4 pc, in size.
Both the point source and the newly imaged extended emission are very red, with identical $`H-K`$ colors corresponding to a color temperature of 800 K. While the point source is of a size consistent with grains in thermal equilibrium with the nuclear source, the extended emission is not. The current data do not allow us to unambiguously determine the origin of the extended emission, but it is most likely either scattered nuclear radiation from an extended dusty disk or emission from thermally fluctuating small grains heated by reflected nuclear UV photons.
We thank Andrea Ghez for her help with the observations and data analysis, Tom Soifer for many helpful conversations on models of NGC 1068, and the telescope operators at Palomar and Keck Observatories for their efforts during time consuming speckle observing. Infrared astronomy at Caltech is supported by the NSF.
Figure 1: On a semi-log plot, the number of distinct minima classes versus the number of order parameters appears as a straight line. This is evidence that the number of distinct minima grows exponentially with the number of order parameters.
There are many situations where behavior of great complexity arises, or is thought to arise, from simple underlying equations. Extensively studied cases include chaos, turbulence, and spin glasses. Chaos and turbulence involve long-term dynamics and extended spatial structures, while spin glasses involve an element of randomness. Here we will analyze a much simpler case (the simplest known to us) involving a static, deterministic, and very symmetrical system, wherein simple equations exhibit quite a complicated space of solutions. In particular, we present a simple class of potentials in $`n`$-component order parameters, whose number of local minima unrelated by symmetry grows exponentially in $`n`$. Our model is closely related to ones commonly used in studying large $`N`$ limits of quantum field theory, differing only in that the assumption of some continuous symmetry among the fields (e.g., $`O(N)`$) is replaced by a discrete permutation symmetry (basically $`S_N`$). Of course, it is just such permutation symmetries which arise in studies of quenched disorder by the replica method, so there is a close connection to that branch of spin glass theory. In some circumstances the flexibility afforded by imposing less symmetry might allow better extrapolations than the traditional one, in the sense that $`1/N`$ corrections might be made smaller, and more complex behaviors captured.
To put the later results in perspective, and to highlight the minimal requirements for complexity in our framework, let us first consider an example that does not work. Suppose that we have $`N`$ order parameters $`\varphi _i`$, for $`i=1,\dots ,N`$. The most general renormalizable (i.e., no more than quartic) potential symmetric under the $`S_N`$ permuting these parameters and under a change in all their signs simultaneously is
$$V(\vec{\varphi })=\mu \sum _i\varphi _i^2+\alpha \sum _{i,j}\varphi _i\varphi _j+\beta _1\sum _i\varphi _i^4+\beta _2\sum _{i,j}\varphi _i^3\varphi _j+\beta _3\sum _{i,j}\varphi _i^2\varphi _j^2+\beta _4\sum _{i,j,k}\varphi _i^2\varphi _j\varphi _k+\beta _5\sum _{i,j,k,l}\varphi _i\varphi _j\varphi _k\varphi _l.$$
(1)
Varying with respect to $`\varphi _a`$, we find that at an extremum $`\varphi _a`$ must obey a cubic equation. This equation takes the same form for every value of $`a`$: at a particular fixed extremum, the coefficients of the cubic (evaluated as constant numbers for the extremum in question) are the same for all values of $`a`$. A cubic equation has at most three real roots, of which at most two can correspond to local minima. Therefore, for any local minimum, the different components of the order parameter take at most two distinct values, and for large values of $`N`$ many of the components will be equal. Let us suppose there are $`n_1`$ components with value $`r_1`$ and $`n_2`$ components with value $`r_2`$, where $`n_1+n_2=N`$ and $`n_2\le n_1`$. Then (for given $`n_1,n_2`$) the conditions for an extremum are two polynomial equations of degree 3 in the variables $`r_1`$ and $`r_2`$. In general, these have at most 9 solutions. Taking into account that there are at most two solutions when $`n_2=0`$, for generic values of the parameters $`\mu `$, $`\alpha `$, and $`\beta _1,\dots ,\beta _5`$ in the potential we readily bound the number of distinct minimum depths by $`(9N+4)/2`$ for $`N`$ even, and $`(9N-5)/2`$ for $`N`$ odd. We expect that with more care this number could be further reduced. Non-generic values presumably correspond either to fine tuning of the parameters, which is not physically realistic, or to enhanced symmetry, which renders mathematically distinct solutions physically equivalent. In any case, one does not find here a straightforward possibility for the sort of exponential growth in the number of physically distinct minima that we will encounter shortly.
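This linear bound is easy to probe numerically; the sketch below, with illustrative parameter values and helper names of our own choosing, minimizes the potential from random starting points and counts distinct depths up to a tolerance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def V(phi, mu, alpha, betas):
    """Potential of Eq. (1), rewritten via power sums s_k = sum_i phi_i^k."""
    b1, b2, b3, b4, b5 = betas
    s1, s2, s3, s4 = (phi.sum(), (phi**2).sum(), (phi**3).sum(), (phi**4).sum())
    return (mu * s2 + alpha * s1**2 + b1 * s4 + b2 * s3 * s1
            + b3 * s2**2 + b4 * s2 * s1**2 + b5 * s1**4)

def distinct_depths(N, params, trials=300, tol=1e-6):
    """Count distinct depths of the local minima reached from random starts."""
    depths = set()
    for _ in range(trials):
        res = minimize(V, rng.normal(size=N), args=params)
        if res.success:
            depths.add(round(res.fun / tol))
    return len(depths)

# Example with hand-picked generic couplings whose quartic part stays
# positive for N = 10, so that V is bounded below:
# print(distinct_depths(10, (-1.0, 0.1, (1.0, 0.05, 0.3, 0.02, 0.01))))
```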
The next logical step after the vector order parameter is to consider matrices. For simplicity we require our matrices to be symmetric, so that $`M_{ij}`$ and $`M_{ji}`$ are actually the same variable. Each index runs from 1 to $`N`$, implying that there are $`N(N+1)/2`$ independent order parameters in the matrix M. We define the $`a^{th}`$ “row-column” of a matrix to be the union of the $`a^{th}`$ row and the $`a^{th}`$ column of the matrix; it is the set of all $`M_{ia}`$’s and all $`M_{ai}`$’s for all $`i`$’s.
We assume the potential is symmetric under M → −M, and under permutation of the values of the labels; none of the row-columns is to be singled out in any way. For example, one can take a matrix and, every time index 3 appears in it, replace it with index 7, and vice versa. Thus the entries $`M_{37}`$ and $`M_{73}`$ stay the same, the entries $`M_{33}`$ and $`M_{77}`$ get interchanged, and for all other $`i`$’s, $`M_{i3}`$ swaps with $`M_{i7}`$ and $`M_{3i}`$ swaps with $`M_{7i}`$. We refer to this symmetry as the “row-column exchange symmetry,” or simply the “exchange symmetry” of the potential.
Given these constraints, the allowed quadratic terms in a potential are:
$$M_{ii}M_{ii},M_{ii}M_{ij},M_{ii}M_{jj},M_{ij}M_{ij},M_{ii}M_{jk},M_{ij}M_{ik},\text{ and }M_{ij}M_{kl}.$$
(2)
Here, and hereafter, summation over all indices, even if they are not repeated, is assumed unless explicitly stated otherwise.
There are many allowed quartic terms, and we will not write them all out here. But for future reference note that terms as highly structured as $`M_{ij}M_{jk}M_{kl}M_{li}`$, and $`M_{ii}M_{ij}M_{jk}M_{kl}`$ are fair game now.
We will now demonstrate, by explicit construction, exponential proliferation of inequivalent local minima in this case. Our strategy will be to use a subset of the allowed terms to construct a very simple potential with many isolated local minima. These will be equivalent under a symmetry of the simplified potential, but not under the smaller symmetry of our full class of allowed potentials. Then we shall lift the degeneracy (and physical equivalence) of these minima in a controlled way by perturbing with additional allowed terms, in such a way that they remain local minima.
To begin, we form what we call a plastic-soda-bottle-bottom potential out of the allowed terms. (In contrast to the classic wine-bottle potential, the plastic-soda-bottle potential in two variables has four symmetrically arranged dips.) This takes the form:
$$V(𝐌)=a\left[\sum _{i,j}(1-M_{ij}^2)+\sum _i(1-M_{ii}^2)\right]^2+b\left[\sum _{i,j}(1-M_{ij}^2)^2+\sum _i(1-M_{ii}^2)^2\right],$$
(3)
where $`a,b>0`$ are arbitrary, and no summation over the indices is assumed within the explicitly stated sums. All the local minima lie at:
$$\left[\begin{array}{cccc}\pm 1& \pm 1& \cdots & \pm 1\\ \pm 1& \pm 1& & \pm 1\\ \vdots & & \ddots & \vdots \\ \pm 1& \pm 1& \cdots & \pm 1\end{array}\right].$$
(4)
They are all related by the accidental symmetry of the plastic-soda-bottle potential, which allows both independent changes in the signs of individual components and interchange of any two components (not just row-columns).
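A direct transcription of this potential, useful for the numerical experiments below, might look like the following sketch; we assume here that the double sum runs over all ordered pairs $`(i,j)`$, so the diagonal effectively enters with double weight.

```python
import numpy as np

def soda_bottle_V(M, a=1.0, b=1.0):
    """Plastic-soda-bottle-bottom potential of Eq. (3) for a symmetric
    matrix M. V >= 0, with V = 0 exactly at matrices of +/-1 entries."""
    off = 1.0 - M**2             # (1 - M_ij^2) for every entry
    diag = 1.0 - np.diag(M)**2   # extra diagonal contribution
    lin = off.sum() + diag.sum()
    quad = (off**2).sum() + (diag**2).sum()
    return a * lin**2 + b * quad
```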
Now we can experiment numerically by adding in more of the allowed terms. The positions of the minima, and their depths, will change as we vary the amounts of the various small terms we are adding. We take care that the added terms are small enough neither to destabilize any minimum nor to change the sign of any of the order parameters at the position of any of the minima. Let us add terms of the form $`M_{ij}M_{jk}M_{kl}M_{li}`$ and $`M_{ii}M_{ij}M_{jk}M_{kl}`$ with small coefficients, for $`N=2,3,4,5,6`$, and track the depth of each minimum numerically. Then we can count the number of distinct numerical values of the potential at the perturbed minima. Of course, local minima with distinct energies must be physically inequivalent, i.e. unrelated by an underlying symmetry. The results are exhibited in Figure 1.
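A first-order version of this experiment (justified below, where tiny perturbations are discussed) can be sketched as follows: the two perturbing terms are evaluated at every $`\pm 1`$ minimum of the unperturbed potential and the distinct values are counted. The coefficients are illustrative, and the exhaustive enumeration is feasible only for small $`N`$.

```python
from itertools import product
import numpy as np

def perturbation(M, eps1=1e-3, eps2=1e-3):
    """eps1 * M_ij M_jk M_kl M_li + eps2 * M_ii M_ij M_jk M_kl,
    with all indices summed over."""
    t1 = np.trace(M @ M @ M @ M)
    t2 = np.diag(M) @ (M @ M @ M).sum(axis=1)
    return eps1 * t1 + eps2 * t2

def distinct_first_order_depths(N, tol=1e-9):
    """Enumerate all 2**(N(N+1)/2) sign choices of a symmetric +/-1
    matrix and count distinct perturbation values."""
    iu = np.triu_indices(N)
    depths = set()
    for signs in product((-1.0, 1.0), repeat=len(iu[0])):
        M = np.zeros((N, N))
        M[iu] = signs
        M = M + M.T - np.diag(np.diag(M))
        depths.add(round(perturbation(M) / tol))
    return len(depths)

# print([distinct_first_order_depths(N) for N in (2, 3, 4)])
```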
Figure 1 shows that the number of distinct minima classes grows exponentially with the number of order parameters, in response to only these two particular perturbing terms. Now we shall discuss how this proliferation can be understood theoretically.
First, let us show that the number of minima unrelated by the exchange symmetry or by the M → −M symmetry grows exponentially in the number of the order parameters. We focus our attention on one very particular subset of all minima, and prove that the logarithm of the number of minima in this subset that are not related by any of the allowed symmetries grows quadratically with $`N`$. The subset in question consists of all minima that can be written in the following form:
$$\left[\begin{array}{cc}𝐁& 𝐀\\ 𝐀^𝐓& 𝐂\end{array}\right],$$
(5)
where, when $`N`$ is even, the matrices A, B, and C each have $`N/2`$ rows and $`N/2`$ columns; and when $`N`$ is odd, B has $`(N+1)/2`$ rows and $`(N+1)/2`$ columns, while C has $`(N-1)/2`$ rows and $`(N-1)/2`$ columns; consequently, A has $`(N+1)/2`$ rows and $`(N-1)/2`$ columns. Furthermore, we require that the matrix B have only positive values on the diagonal (from now on denoted by $`+`$), while the matrix C has only negative values on the diagonal (from now on denoted by $`-`$); every other entry of B and C is “free” to be either a $`+`$ or a $`-`$. Note that the number of such free entries grows quadratically in $`N`$, i.e. linearly in the number of order parameters for large $`N`$, so the logarithm of the number of such matrices grows quadratically in $`N`$. Finally, all the entries of the matrix A are fixed: if $`N`$ is even, all elements on or above the diagonal are $`+`$’s, while all the elements below the diagonal are $`-`$’s; if $`N`$ is odd, the entry $`A_{kl}`$ is $`+`$ if $`k\le l`$, and $`-`$ otherwise.
Concretely, if $`N=8`$, an element of our subset looks like:
$$\left[\begin{array}{cccccccc}\text{+}& \text{?}& \text{?}& \text{?}& \text{+}& \text{+}& \text{+}& \text{+}\\ \text{?}& \text{+}& \text{?}& \text{?}& \text{-}& \text{+}& \text{+}& \text{+}\\ \text{?}& \text{?}& \text{+}& \text{?}& \text{-}& \text{-}& \text{+}& \text{+}\\ \text{?}& \text{?}& \text{?}& \text{+}& \text{-}& \text{-}& \text{-}& \text{+}\\ \text{+}& \text{-}& \text{-}& \text{-}& \text{-}& \text{?}& \text{?}& \text{?}\\ \text{+}& \text{+}& \text{-}& \text{-}& \text{?}& \text{-}& \text{?}& \text{?}\\ \text{+}& \text{+}& \text{+}& \text{-}& \text{?}& \text{?}& \text{-}& \text{?}\\ \text{+}& \text{+}& \text{+}& \text{+}& \text{?}& \text{?}& \text{?}& \text{-}\end{array}\right],$$
(6)
where $`\mathrm{?}`$ can be either a $`+`$ or a $`-`$, as long as it is consistent with the requirement $`M_{ij}=M_{ji}`$; i.e. the elements below the diagonal are fixed once we pick the elements above the diagonal. If $`N=7`$, an element of our subset looks like:
$$\left[\begin{array}{ccccccc}\text{+}& \text{?}& \text{?}& \text{?}& \text{+}& \text{+}& \text{+}\\ \text{?}& \text{+}& \text{?}& \text{?}& \text{-}& \text{+}& \text{+}\\ \text{?}& \text{?}& \text{+}& \text{?}& \text{-}& \text{-}& \text{+}\\ \text{?}& \text{?}& \text{?}& \text{+}& \text{-}& \text{-}& \text{-}\\ \text{+}& \text{-}& \text{-}& \text{-}& \text{-}& \text{?}& \text{?}\\ \text{+}& \text{+}& \text{-}& \text{-}& \text{?}& \text{-}& \text{?}\\ \text{+}& \text{+}& \text{+}& \text{-}& \text{?}& \text{?}& \text{-}\end{array}\right],$$
(7)
with the same requirements as in the case $`N=8`$.
The reason we focus our attention on this particular subset is that none of its elements are related by the symmetries of our class of potentials, as we now discuss. The proof proceeds in two steps. First, ignoring the M → −M symmetry, we prove that the exchange symmetry alone cannot change one member of the subset into another. Then we prove that the M → −M symmetry does not cause any further problems.
We propose a “painting scheme” to keep track of where each entry of the matrix moves during the exchange process. This scheme also makes it easier to visualize what is going on. Paint each row-column with a different color. Consequently, each $`M_{ij}`$ with $`i\ne j`$ is covered with two layers of distinct paints, while $`M_{ii}`$ is covered with two layers of the same paint. Make sure to use “light colors” if $`+`$ is on the diagonal entry of the row-column you are painting, and “dark colors” if $`-`$ is on the diagonal entry. Each particular entry $`M_{ij}`$ with $`i\ne j`$ is now labeled uniquely by its two colors; of course, $`M_{ij}`$ has the same colors as $`M_{ji}`$, which suits us because they are the same variable anyway.
Say the $`2^{nd}`$ row-column is yellow, and the $`5^{th}`$ row-column is green. Exchanging indices 2 and 5 makes the $`5^{th}`$ row-column yellow, and the $`2^{nd}`$ row-column green. Using the coloring scheme, it is easy to keep track of where each particular entry moved during the exchange. Say the $`11^{th}`$ row-column was blue initially, and we want to know where the entry $`M_{2,11}`$ ended up after the exchange; we look for the square of the matrix that is covered precisely by the yellow and the blue paint, and conclude that the entry in question is now at position $`M_{5,11}`$.
Note that every entry of the B matrix initially contains only light colors, while the matrix C contains only dark colors. In contrast, every entry of matrix A is painted with precisely one light and one dark color.
Now, we start with a matrix $`𝐌_\mathrm{𝟏}`$ and permute it into a matrix $`𝐌_\mathrm{𝟐}`$, so that both of these matrices are elements of our preferred subset. First, note that all rows of $`𝐀_\mathrm{𝟐}`$ and $`𝐁_\mathrm{𝟐}`$ are painted with light colors, while all columns of $`𝐀_\mathrm{𝟐}`$ and $`𝐂_\mathrm{𝟐}`$ are painted with dark colors; this is so because $`𝐁_\mathrm{𝟐}`$ has only $`+`$’s on the diagonal, while $`𝐂_\mathrm{𝟐}`$ has only $`-`$’s on the diagonal. Therefore, the set of all entries of $`𝐀_\mathrm{𝟏}`$ is exactly the same as the set of all entries of $`𝐀_\mathrm{𝟐}`$; only these entries carry exactly one light and one dark color. Suppose that the light colors we have are yellow, orange, red, and pink, and suppose $`N=8`$. Furthermore, suppose that $`𝐀_\mathrm{𝟏}`$ has its $`1^{st}`$ row yellow, its $`2^{nd}`$ row orange, etc. Since two entries that were in the same row-column before the exchanges stay in the same row-column after the exchanges, the only way to get exactly four $`+`$’s in the $`1^{st}`$ row of $`𝐀_\mathrm{𝟐}`$ is to have the $`1^{st}`$ row of $`𝐀_\mathrm{𝟐}`$ yellow. This implies that the $`1^{st}`$ row-column of $`𝐌_\mathrm{𝟐}`$ is yellow. Furthermore, the only way to have exactly three $`+`$’s in the $`2^{nd}`$ row of $`𝐀_\mathrm{𝟐}`$ is to have the $`2^{nd}`$ row of $`𝐀_\mathrm{𝟐}`$ orange, implying that the $`2^{nd}`$ row-column of $`𝐌_\mathrm{𝟐}`$ is orange, etc. In this way we determine the position of all light colors, and thereby determine everything about the matrix $`𝐁_\mathrm{𝟐}`$ uniquely. In a similar manner, we determine everything about the matrix $`𝐂_\mathrm{𝟐}`$. Therefore $`𝐀_\mathrm{𝟏}=𝐀_\mathrm{𝟐}`$, $`𝐁_\mathrm{𝟏}=𝐁_\mathrm{𝟐}`$, and $`𝐂_\mathrm{𝟏}=𝐂_\mathrm{𝟐}`$, so $`𝐌_\mathrm{𝟏}=𝐌_\mathrm{𝟐}`$, as we sought to prove: no symmetry relates two distinct elements of this particular subset.
The particular case $`N=8`$ is just illustrative; everything we said generalizes immediately to any even $`N`$. Furthermore, everything we said applies with only minor modifications to the case where $`N`$ is odd.
Now we prove that during the whole process of transforming matrix $`𝐌_\mathrm{𝟏}`$ into matrix $`𝐌_\mathrm{𝟐}`$, one always has to multiply the matrix by $`-1`$ an even number of times in total. The way to see this differs slightly between odd and even $`N`$. When $`N`$ is odd, we have to end up with fewer $`-`$’s than $`+`$’s on the diagonal of $`𝐌_\mathrm{𝟐}`$, which is also true of the diagonal we started with; moreover, none of the diagonal entries ever moves off the diagonal during the process. Similarly, when $`N`$ is even, we have to end up with fewer $`-`$’s than $`+`$’s in the matrix $`𝐀_\mathrm{𝟐}`$, and we already proved in Step 1 of this proof that $`𝐀_\mathrm{𝟏}`$ consists of the same set of elements as $`𝐀_\mathrm{𝟐}`$. Therefore, the matrix has to be multiplied by $`-1`$ an even number of times during the process, both when $`N`$ is odd and when $`N`$ is even. Since multiplication by $`-1`$ treats all the elements of the matrix indiscriminately, it does not matter when during the process these operations are performed; in particular, we could perform all of them before doing anything else; but then we might as well not do them at all, since multiplying the matrix by $`-1`$ an even number of times leaves it unchanged.
This concludes our proof that the number of local minima of the special potential that are unrelated by any symmetry of the general potential grows exponentially in the number of the order parameters for large $`N`$.
Physical intuition suggests that unless two minima have a very good reason to have the same depth (e.g. an underlying symmetry of the full potential), generically one would not expect them to have equal depths. Since the potentials of our class support an exponentially large number of minima unrelated by symmetry, we expect that such potentials will generally have a number of distinct depths at local minima that is exponential in the number of order parameters, unless the equations that determine them are insensitive to the symmetry-breaking structure. That is of course the behavior indicated by our numerical work, and it differs markedly from the earlier, vector case. The following consideration makes it plausible, though it does not prove, that the degeneracy among the physically distinct minima, which occurs for our initial plastic-soda-bottle potential, is lifted by perturbation with certain of the allowed potential terms. The point is that the derivative with respect to $`M_{ij}`$ of a term like $`M_{ab}M_{bc}M_{cd}M_{da}`$, that is $`M_{ib}M_{bc}M_{cj}`$, probes the whole structure of M in a way that is significantly different for each pair $`i,j`$. Thus, unlike in the vector case, here the response to the perturbation in principle knows enough about (contains enough independent measures of) the order parameter to encode its detailed structure. In the vector case, one would need to go to $`N^{th}`$-order terms of the type $`\varphi _1\varphi _2\cdots \varphi _N`$, or higher, to encounter similar sensitivity.
To illustrate this point further, we now examine the properties of some particular cases of our potentials, thus showing concretely how the various minima become physically inequivalent.
To keep things as simple as possible, we just add a tiny perturbation to the initial plastic-soda-bottle potential. Because the perturbations are tiny, we are justified in evaluating the changes in the potential only to first order; we say that the depth of each minimum moves by whatever the added perturbation evaluates to at the original position of the minimum in question; these positions are given in (4). To first order, the degeneracy cannot be broken into an exponentially large number of minima classes; for example, a quartic term that involves as many as 8 different indices can assume at most $`𝒪(N^8)`$ different values when evaluated at the positions given in (4), since it is a sum of $`N^8`$ terms, each of which can be either $`+1`$ or $`-1`$. Even if we add all the allowed terms, each multiplied by an arbitrary tiny coefficient, at lowest order we still get at best a power-law breaking of the degeneracy.
Nevertheless, the number of distinct minima one can in principle get by analyzing only to first order is quite large, especially if we include many allowed terms in the perturbation. Furthermore, for small perturbations, the expectation values of different operators will typically not differ significantly whether we evaluate the changes in the depths only to first order or exactly. Moreover, in practice we sort the minima into energy bins of finite width in our plots. If our perturbation breaks the degeneracy at first order into, say, $`𝒪(N^8)`$ distinct minima classes and $`N=6`$, we have in principle up to $`10^6`$ distinct minima. Since our plots typically involve 200 bins, it does not matter for the plots that we evaluate the depth changes only to first order instead of calculating them exactly.
Typical results are displayed in Figure 2. Plots A and B of that figure demonstrate that one can get quite a rich structure by using only a few of the allowed terms. Furthermore, the breaking of degeneracy is quite extensive even when we work to first order only. When we include more than one perturbative term, the degeneracy breaking is even greater, producing quite a rich structure even at first order. This is visible in plots C and D of Figure 2, where we included 20 of the allowed terms with random coefficients multiplying them. Plot D has a very high resolution of almost 40000 bins; both plots are for exactly the same potential. Note that in these plots we count the total number of minima, so that minima are counted as distinct even if they are related by a symmetry. Thus, much of the degeneracy is intrinsic and will not be broken at any order of approximation.
An example of the general sort of structure described here arises in the analysis of QCD with many flavors of quarks at high density. For three flavors the color-flavor locking condensate takes the form
$$\langle q_a^\alpha q_b^\beta \rangle =U_\gamma ^\alpha U_\delta ^\beta (\kappa _1\delta _a^\gamma \delta _b^\delta +\kappa _2\delta _b^\gamma \delta _a^\delta ),$$
(8)
where the Greek indices refer to color and the Latin to flavor. For present purposes we are suppressing various inessential complications (spin, chirality, momentum dependence), and emphasizing the existence of the matrix degree of freedom $`U`$, which parameterizes the degenerate vacua associated with the spontaneous symmetry breaking $`SU(3)_{\mathrm{color}}\times SU(3)_{\mathrm{flavor}}\to SU(3)_{\mathrm{color}+\mathrm{flavor}}`$.
It appears that for $`3k`$ flavors the favored condensation is repeated color-flavor locking. Thus we start with the ansatz
$$\langle q_a^\alpha q_b^\beta \rangle =\sum _{i=1}^{k}U_\gamma ^{(i)\alpha }U_\delta ^{(i)\beta }(\kappa _1\delta _{a-3i+3}^\gamma \delta _{b-3i+3}^\delta +\kappa _2\delta _{b-3i+3}^\gamma \delta _{a-3i+3}^\delta ),$$
(9)
corresponding to the symmetry breaking $`SU(3)_{\mathrm{color}}\times SU(3k)_{\mathrm{flavor}}\to SU(3)_{\mathrm{color}+\mathrm{diagonal}}\times S_k`$. The residual $`SU(3)`$ acts on the flavor indices in blocks of 3, while the permutation symmetry $`S_k`$ implements interchanges of the blocks.
Now the question arises how the energy depends on the relative alignment of the $`U^{(i)}`$. Non-trivial relative alignments violate the permutation symmetry. We will not attempt here to determine whether this actually occurs in the ground state, or in other low-lying states, but we do want to point out that to analyze this question one would need to consider potentials resembling those discussed above, featuring permutation rather than rotation symmetry in internal space. This case is intermediate in complexity between the vector and matrix cases discussed above, in that the permutation acts on a single index (as in the vector case), but the objects being permuted are chosen from a complicated manifold, rather than being a simple choice of sign. Symmetry-breaking correlations of the type $`U^{(i)}U^{(j)}\sim M^{(ij)}`$ could produce an effective matrix structure in the permutation index.
We acknowledge useful discussions with Shiraz Minwalla of Princeton University.
## 1 Introduction
In this paper, we consider extensions of the bar-attendance problem introduced by Arthur and simplified into a minority game by Challet and Zhang. In its simplest form, the minority game mimics the internal dynamics of the exchange of one commodity. Agents are allowed to buy or sell this commodity at each time step. No attempt is made to model any external factors that influence the market. Here, we introduce symmetric and asymmetric three sided games as extensions of the minority game.
In the symmetric three sided model, the agents have to choose between three identical sides at each time step. These three sides are trading with each other, agents on one side buying from the second side to sell to the third. This model mimics the cyclic trading of goods. If we group any two sides together and consider the trading between this imaginary group and the third side, the model reduces to a kind of minority game with an uneven distribution of the agents. Hence, the connection between this model and the minority game is very strong.
In the asymmetric three sided model, the agents can buy or sell a commodity at each time step, but they can also be inactive, that is, they are allowed to miss a turn. In contrast to the symmetric model, the three choices are not equivalent, as being inactive appears as a compromise between buying and selling. This model can be thought of as an open minority game in the sense that the agents buying and selling are playing a minority game with a variable number of agents at each turn.
In Sec. 2, the minority game is briefly recalled and the two new three sided models are described in detail. In Sec. 3, the symmetric three sided model is numerically investigated, while in Sec. 4, the asymmetric three sided model is investigated. Sec. 5 presents a comparison between the minority game and the two three sided models, as well as our conclusions.
## 2 The models
In the minority game, an odd number $`N`$ of agents have to choose between two sides, $`1`$ or $`2`$, at each time step. An agent wins if he chooses the minority side. The record of which side was the winning side for the last $`m`$ time steps constitutes the history of the system. The agents analyze the history of the system in order to make their next decision.
In the symmetric three sided model, a number $`N`$ of agents have to choose between three sides, $`1`$, $`2`$ or $`3`$, at each time step; $`N`$ is not a multiple of 3. The agents on side 1 buy from side 2 to sell to side 3, the agents on side 2 buy from side 3 to sell to side 1, and the agents on side 3 buy from side 1 to sell to side 2. This cyclic trading pattern is shown in Fig. 1. It is assumed that the profit or loss of a side is reflected in the difference between the number of agents it is selling to and the number of agents it is buying from. For instance, $`N_3-N_2`$ is a measure of the profit of side 1. Agents choosing the side with the highest profit win and are rewarded with a point. Agents choosing the side with the lowest profit lose and consequently lose a point. Agents choosing the side with the intermediate profit neither lose nor gain a point. Agents strive to maximize their total number of points.
In the asymmetric three sided model, a number $`N`$ of agents also have to choose between three sides, $`1`$, $`2`$ or $`3`$, at each time step; $`1`$ corresponds to selling, $`2`$ to doing nothing and $`3`$ to buying. The agents buying or selling are said to be active, while the agents doing nothing are said to be inactive. The agents choosing the smaller group among buyers and sellers win and are rewarded with a point. The agents choosing the larger group among buyers and sellers lose, and they lose a point. The points of the inactive agents do not change. If there is the same number of buyers and sellers, the points of all the agents remain unchanged. In that case, however, the inactive agents are recorded as winners in the history of the system, on the grounds that they achieved the same result as the buyers and sellers but without taking any risk. Again, agents strive to maximize their total number of points.
In each model, the record of which side was the winning side for the last $`m`$ time steps constitutes the history of the system. For a given $`m`$, there are $`3^m`$ different histories. The 9 different histories for $`m=2`$ are listed in the first column of table 1. Every agent makes a decision for the next time step according to the history of the system. To be able to play, an agent must have a strategy that allows him to make a decision for any of the $`3^m`$ different histories. The second and third columns of table 1 list two possible sets of decisions, $`\sigma `$ and $`\sigma ^{\prime }`$, that we will call strategies.
Each agent has at his disposal a fixed set of $`s`$ strategies chosen at random, multiple choices of the same strategy being allowed. At any one moment in time, the agent only uses one of these strategies to make a decision. To allow an agent to decide which strategy to use, every strategy is awarded points, which are called virtual points. The virtual points of a strategy are the points the agent thinks he could have earned had he played with this strategy. Hence, the virtual points are rewarded using the same scheme as the points given to the agents, the prediction of a strategy being compared to the actual decisions. A strategy predicting the winning side is awarded a virtual point, a strategy predicting the losing side loses a virtual point and a strategy predicting the third side does not gain or lose any points. In the asymmetric model, in the case of an equal number of buyers and sellers, the virtual points of all strategies remain unchanged. An agent always plays with the strategy with the highest number of virtual points. When more than one strategy has the highest number of virtual points, one of them is chosen at random.
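For concreteness, here is a minimal sketch of the symmetric game dynamics; it is our own illustration, not code from the paper, and choices not fixed by the text (the history encoding, the seed, and breaking virtual-point ties by first maximum rather than at random) are simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_symmetric(N=101, s=2, m=3, steps=10_000):
    """Minimal sketch of the symmetric three sided game. A strategy is a
    lookup table over the 3**m histories (sides coded 0, 1, 2); the
    history is encoded as a base-3 integer."""
    H = 3 ** m
    strategies = rng.integers(0, 3, size=(N, s, H))
    virtual = np.zeros((N, s))
    history = int(rng.integers(0, H))
    attendance = np.zeros((steps, 3), dtype=int)
    for t in range(steps):
        best = virtual.argmax(axis=1)               # highest virtual points
        acts = strategies[np.arange(N), best, history]
        counts = np.bincount(acts, minlength=3)
        # Side k sells to side k-1 and buys from side k+1 (cyclically, in
        # this 0-indexed coding), so its profit is N_{k-1} - N_{k+1};
        # since N is not a multiple of 3, the three profits are distinct.
        profit = counts[[2, 0, 1]] - counts[[1, 2, 0]]
        winner, loser = profit.argmax(), profit.argmin()
        preds = strategies[:, :, history]
        virtual += (preds == winner) - 1.0 * (preds == loser)
        attendance[t] = counts
        history = (history * 3 + int(winner)) % H   # append winner
    return attendance
```

The returned series can be examined directly, e.g. `attendance[:, 0]` gives the attendance at one side, of the kind shown in Fig. 2.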
If we compare two strategies $`\sigma `$ and $`\sigma ^{\prime }`$ component by component, we see that for some histories they can make the same prediction and for others they can make different predictions. In the example in table 1, the decisions differ when the history is (1,1), (1,2), (2,1), (3,1) and (3,3). To consider this feature, we have to distinguish between the symmetric model and the asymmetric one. For the former, the three sides are equivalent and only the number of differences between the strategies can give a measure of the difference between two strategies in the strategy space. For the latter, there is a qualitative difference between the three sides. This qualitative difference should appear in the definition of the difference between strategies.
Consider first the symmetric three sided model. As the three sides are equivalent, a geometrical representation should put them at the same distance from one another. A convenient measure of the differences between two strategies $`\sigma `$ and $`\sigma ^{\prime }`$ is
$$d_s=\frac{1}{3^m}\sum _{i=1}^{3^m}\left[1-\delta (\sigma _i-\sigma _i^{\prime })\right]$$
(1)
where $`\delta (0)=1`$, and $`\delta (x)=0`$ otherwise. $`d_s`$ is defined as the distance between strategies in the symmetric model. This definition takes into account the geometrical structure of the strategy space, including the equivalence between the three sides. In the example of table 1, $`d_s=5/9`$. By definition, the symmetric distance is a number ranging from 0 to 1.
As Eq. (1) shows, the symmetric distance $`d_s`$ is defined as a sum of $`3^m`$ terms, which we label $`d_s^{(i)}`$’s. Each of these terms is equal to 0 with probability 1/3 or equal to 1 with probability 2/3. The average distance between two strategies is $`\overline{d}_s=2/3`$, while the variance of the symmetric distance distribution is $`\sigma _s^2=2/3^{m+2}`$. The symmetric distance between two strategies corresponds to the probability that these two strategies will give different predictions, assuming that all the histories are equally likely to occur. The symmetric distance corresponds to the distance defined in the minority game . Two strategies at $`d_s=0`$ are correlated, two strategies at $`d_s=2/3`$ are uncorrelated and two strategies at $`d_s=1`$ are anticorrelated.
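A direct implementation of this distance, with strategies represented as integer arrays over the $`3^m`$ histories, might read as follows; the Monte Carlo check in the comments uses illustrative sample sizes.

```python
import numpy as np

def d_symmetric(sigma, sigma_prime):
    """Eq. (1): the fraction of the 3**m histories on which the two
    strategies make different predictions."""
    return np.mean(np.asarray(sigma) != np.asarray(sigma_prime))

# For random strategies this has mean 2/3 and variance 2/3**(m+2),
# as quoted above; a quick check for m = 3:
# rng = np.random.default_rng(0)
# pairs = rng.integers(1, 4, size=(100_000, 2, 27))
# ds = np.mean(pairs[:, 0] != pairs[:, 1], axis=1)
# print(ds.mean(), ds.var())   # ~0.667 and ~0.0082 (= 2/3**5)
```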
In the asymmetric three sided minority game, selling is just the opposite decision to buying while doing nothing is a compromise. Consequently, the normalized asymmetric distance between strategies $`\sigma `$ and $`\sigma ^{\prime }`$,
$$d_a=\frac{1}{3^m}\sum _{i=1}^{3^m}\frac{|\sigma _i-\sigma _i^{\prime }|}{2}$$
(2)
is a measure of the difference between the two strategies. $`d_a`$ is defined as the distance between strategies in the asymmetric model. This definition encodes the fact that buying differs more from selling than from being inactive, although the precise weighting is arbitrary. In the example of table 1, $`d_a=4/9`$. By definition, the asymmetric distance is a number ranging from 0 to 1.
As shown by Eq. (2), the asymmetric distance $`d_a`$ is defined as a sum of $`3^m`$ terms, which we label $`d_a^{(i)}`$’s. When the component of a strategy is equal to 2, that component can never give a $`d_a^{(i)}`$ equal to 1. In other words, the inactive side has no side at distance 1 from itself. Considering all the possibilities, the probability of finding a $`d_a^{(i)}`$ of 0 is 1/3, of 0.5 is 4/9, and of 1 is 2/9. The average asymmetric distance between strategies is $`\overline{d}_a=4/9`$, while the variance of the asymmetric distance distribution is $`\sigma _a^2=11/3^{m+4}`$. The interpretation of this asymmetric distance is ambiguous. In fact, the opposite of selling is buying, but the opposite of being inactive is being inactive. Hence, $`d_a`$ is not a measure of the probability that two strategies would give opposite decisions. Two correlated strategies are at $`d_a=0`$ from each other, two uncorrelated strategies are at $`d_a=1/2`$ from each other, but two anticorrelated strategies can be at any distance from $`d_a=0`$ to $`d_a=1`$ from each other.
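The asymmetric distance admits an equally short sketch, with the same array representation as above:

```python
import numpy as np

def d_asymmetric(sigma, sigma_prime):
    """Eq. (2): component distances are 0, 1/2, or 1; buying (3) and
    selling (1) are maximally distant, inactivity (2) lies between."""
    return np.mean(np.abs(np.asarray(sigma) - np.asarray(sigma_prime)) / 2.0)

# For random strategies the mean is 4/9 and the variance 11/3**(m+4),
# matching the moments quoted in the text.
```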
## 3 Numerical results for the symmetric model
In this section, we report on numerical investigations of the properties of the symmetric three sided model, interpreting the results using the symmetric distance.
Fig. 2 presents a typical result for the time evolution of the attendance at one side. The simulation is for $`N=101`$ agents with $`s=2`$ strategies each and a memory of $`m=3`$. The result for the attendance at one side is very similar to the results of the minority game, the mean attendance being shifted to $`N/3`$ instead of $`N/2`$. Given an agent choosing one side, the average distance between the strategy used by this agent and the strategies used by the other agents is $`\overline{d}=2/3`$. That is, around $`2/3`$ of the agents should choose one of the two other sides. Hence, the average attendance at one side is $`N/3`$.
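A minimal sketch of such a simulation is given below. The update rules (every agent plays its currently best-scoring strategy, and every strategy that would have predicted the winning side gains one virtual point) are our reading of the standard minority-game bookkeeping; details such as tie-breaking and the reward size are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, s, m, T = 101, 2, 3, 5000      # agents, strategies per agent, memory, steps
n_hist = 3 ** m

# Each strategy maps an encoded history (0 .. 3^m - 1) to a side in {0, 1, 2}.
strategies = rng.integers(0, 3, size=(N, s, n_hist))
virtual = np.zeros((N, s))        # virtual points of every strategy
history = 0                       # encoded m-step history, arbitrary start
attendance = np.zeros((T, 3), dtype=int)

for t in range(T):
    # Every agent plays its currently best strategy (ties broken by first index).
    best = virtual.argmax(axis=1)
    acts = strategies[np.arange(N), best, history]
    counts = np.bincount(acts, minlength=3)
    winner = counts.argmin()      # symmetric rule: least-crowded side wins
    attendance[t] = counts
    # Reward every strategy that would have predicted the winning side.
    virtual += (strategies[:, :, history] == winner)
    history = (3 * history + winner) % n_hist   # slide the window of outcomes

print(attendance.mean(axis=0))    # close to N/3 per side
print(attendance.var(axis=0))     # compare with 2N/9 for random agents
```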
The variance of the attendance at one side as a function of the size of the memory $`m`$ is presented in Fig. 3 for $`N=101`$ agents with $`s=2`$ strategies. The result is again very similar to the minority game, with a very high variance for $`m<3`$, a minimum at $`m=3`$ and the variance going to $`2N/9`$, the random value, as $`m`$ goes to infinity. Curves of the same shape are obtained for the variance of the number of winners or the variance of the number of losers. Also, the maximum profit or the number of agents on the more crowded side exhibit the same behaviour. Each of these curves has a minimum for $`m=3`$. For $`m<3`$, the number of strategies used at each time step is a representative sample of the space of the strategies. Consequently, the variance of the attendance is directly related to the variance of the distance distribution, $`\sigma _s^2=2/3^{m+2}`$. In fact, the variance of the attendance scales like $`1/\sigma _s^2`$. On the contrary, for $`m>3`$, the space of the strategies is very large, so that most of the strategies used are uncorrelated. As a result, the kinetics of the system are the same as the kinetics of a random walk. Between these two behaviours, for $`m`$ around 3, the agents organize themselves better, a crowd-anticrowd effect being obtained.
Even for small values of $`m`$, the space of strategies is very large, of size $`3^{3^m}`$. But as in the minority game, not all the strategies are uncorrelated. If we suppose that $`1/\sigma _s^2`$ gives an estimate of the number of uncorrelated strategies, the method of Johnson et al. can be used to find an analytical expression for the variance of the attendance at one side. We followed the original calculation in , with a size $`a=3^{m+2}/2`$ for the space of strategies and a variance of 2/9 for an independent agent. The analytical result obtained by this method is compared in Fig. 3 to the result of the numerical simulations. The curves agree qualitatively.
Fig. 4 (a) presents a typical result for the average number of points given to the agents and their strategies. The parameters of the simulation are $`N=101`$, $`s=2`$ and $`m=3`$. Note that there are two different ordinate scales. As Fig. 4 (a) shows, the virtual points are steadily decreasing with time. In contrast, the points given to the agents display a more complex behaviour. The points given to the agents increase very slowly for $`m<3`$ and then oscillate around 0 for $`m>3`$. There seems to be no special behaviour at $`m=3`$. The time evolution of the virtual points can be approximated by a linear relation with a profit rate $`\tau `$. We define $`\tau `$ as the average number of points earned by a strategy at each time step. Fig. 4 (b) presents $`\tau `$ as a function of the memory $`m`$ for $`N=101`$ and $`s=2`$. For $`m<3`$, the strategies are slowly losing points, the worst results being obtained for $`m=3`$. For $`m>3`$, the virtual points oscillate around 0. Hence, the agents seem to be able to choose their strategy efficiently, in the sense that the strategies they choose win more often than the average strategy. This behaviour is to be contrasted with the minority game where the agents are not able to choose a strategy efficiently.
As a summary, the symmetric model is a direct extension of the minority game to three sides. The results found are very similar, with a glassy phase transition when the size of the memory of the agents is increased. We numerically identified a critical value $`m_c`$ for the size of the memory. For $`m<m_c`$, the space of strategies is crowded and its geometrical structure is apparent in the results. As this structure is encoded in the distance definition, the system is driven by its distance distribution . For $`m>m_c`$, the number of strategies used is not relevant as most of the strategies used are uncorrelated. The kinetics of the system reduce to agents choosing one of the three sides at random. Hence, there is a transition from a system driven by its distance distribution to a random system.
## 4 Numerical results for the asymmetric model
We investigated numerically the different properties of the asymmetric three sided game. In the figures, 1 denotes buying, 2, doing nothing and 3, selling.
The attendance of the three different sides as a function of the size of the memory $`m`$ is plotted at Fig. 5 for $`N=101`$ agents, playing with $`s=2`$ strategies each. The number of agents in the winning side is also presented. For small $`m`$ values, most of the agents are buying or selling (the two superimposed upper curves). Just a few of them are doing nothing (the lower curve for small $`m`$ values). As the size of the memory is increased, the system corresponds more and more to the agents guessing at random between the three possibilities. Also, for small values of $`m`$, the fraction of winners is significantly more than 1/3, the random-guess value. Fig. 5 is interesting because the difference between the three sides is clearly apparent.
In Fig. 6, the variance of the attendances at the three sides and the variance of the number of winners are presented as functions of $`m`$ for $`N=101`$ and $`s=2`$. For $`m<6`$, the variance of the number of inactive agents is significantly higher than $`2N/9`$, the value for agents guessing at random. The variances of the number of buyers and sellers have a minimum at $`m=2`$. The variances of the three sides increase to $`2N/9`$ as $`m`$ increases. Hence, there seems to be an organization of the agents for $`m`$ around 2. The variance of the number of winners has a shape very similar to the one found in the minority game. For small values of $`m`$, the variance diverges like a power law in $`m`$; at $`m\approx 7`$, it seems to reach a minimum and for higher values of $`m`$, it goes asymptotically to a value near $`N/9`$. However, the existence of a minimum at $`m=7`$ could not be confirmed unequivocally by the numerical simulations.
Figs. 5 and 6 show that for small $`m`$ values, the behaviour of the system is directly related to the properties of the distance distribution. The proportion of people buying or selling is of the same order as the average distance, 4/9, while the variance of the number of winners scales as $`1/\sigma _a^2`$, the inverse of the variance of the asymmetric distance distribution. These properties were also present in the minority game. In this asymmetric model, the variance of the attendance at one side does not represent the wasted number of points. The wasted number of points is defined to be the difference between the maximum points that can be earned by the system at each time step and the average points actually earned by the system at each time step. This is why we also have to consider the properties of the number of winners in addition to the properties of the attendances.
For higher values of $`m`$, the strategy space is so large that most of the used strategies are uncorrelated. The system is similar to a system with agents choosing at random from the three sides. In the minority game, the relative attendance predicted by the distance distribution is the same as the one predicted by random guesses, that is 1/2. In the present three sided minority game, these two ratios are 4/9 and 1/3 respectively. Hence, the transition from a system driven by the distance distribution to a system of agents guessing at random is seen directly in the attendance of the different sides.
Fig. 7 presents the average success rate of one side, that is, the probability that at any one moment in time, one side will win. As expected, the sides corresponding to buying and selling are symmetric and more likely to win than the inactive side. In fact, there are $`(N+1)(N+2)/2`$ different configurations for the attendances of the 3 sides. Among these, only $`(N+2)/2`$ make the inactive agents winners if $`N`$ is even, $`(N+1)/2`$ if $`N`$ is odd. Hence, if all the situations were equally likely to occur, the inactive agents would win at most about once every $`N+1`$ time steps. This is the order of the asymptotic value for the success rate of this side. For low values of $`m`$, the success rate of the inactive side is higher than the asymptotic value, implying that the agents playing are organizing themselves rather well. The transition between organized and non-organized agents is for $`m=2`$ in Fig. 7.
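This counting is easily verified by brute force, assuming, as the numbers above imply, that the inactive side wins exactly when buyers and sellers balance:

```python
# Brute-force check of the counting argument above, assuming the market rule
# that the inactive side wins only when buyers (n1) and sellers (n3) balance.
def count_configs(N):
    total = inactive_wins = 0
    for n1 in range(N + 1):
        for n3 in range(N + 1 - n1):
            total += 1                  # n2 = N - n1 - n3 is then fixed
            if n1 == n3:
                inactive_wins += 1
    return total, inactive_wins

for N in (100, 101):
    total, wins = count_configs(N)
    print(N, total, (N + 1) * (N + 2) // 2, wins)
    # N even: wins = (N + 2) / 2 ; N odd: wins = (N + 1) / 2
```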
Fig. 8 confirms the organization of the agents. The profit rates of the agents and their strategies are shown as functions of $`m`$. We define a profit rate as the average number of points earned at each time step. For values of $`m`$ less than $`m=5`$, the agents are able to choose strategies which are more successful than the average ones. On the contrary, for $`m>5`$, they are doing worse than guessing at random. The curve of the profit rate of the strategies suggests that the transition takes place for $`m=2`$.
As a summary, in the asymmetric three sided minority game, agents playing with a small memory win more points on average than agents playing with a bigger memory in a pure population, that is, a population of all the agents with the same memory size $`m`$. As in the minority game and the symmetric three sided model, a glassy phase transition is found at a particular value of the memory $`m_c`$. For $`m<m_c`$, the geometrical properties of the space of strategies are apparent, especially the asymmetry between the three sides. Most of the agents are playing and the system is driven by the distance distribution. In contrast to the minority game, this property is seen directly in the number of agents on each side. For $`m>m_c`$, the strategies used are uncorrelated and the system is similar to a system of agents guessing at random. Considering the adaptation of the agents, they are unable to realize that the wiser choice is to decide to be inactive. In fact, more than half of the active agents will lose. The agents are fooled because they base their confidence on virtual points, not on their actual profit. Hence, the agents are always tempted to play even if they are unlikely to win.
## 5 Conclusions
We introduced two three sided models as extensions of the minority game. In the symmetric three sided model, agents are given three equivalent choices while in the asymmetric three sided model, agents have the opportunity to miss a turn and not play. We have investigated these two new models numerically and compared the results with the original minority game.
In both models, we defined a distance between the strategies of the agents. These distances incorporate in their definitions the geometrical structure of the space of strategies. In the symmetric model, the geometrical structure of the space of strategies is very similar to the one in the minority game. The distance gives a measure of the correlation between two strategies. Conversely, the distance in the asymmetric model has no obvious interpretation.
A transition between a system driven by the distance distribution and a system of agents guessing at random was identified numerically in both models. However, in contrast to the minority game, the agents make their highest profit for $`m`$ small and not at the transition value of $`m`$. In the distance driven phase, the agents organize themselves, as in the minority game. In contrast to the minority game, however, the average profit rate of the agents is higher than the average profit rate of the strategies, indicating that the agents are choosing their strategies efficiently. In the symmetric model, the transition is apparent in the variance of the number of agents choosing one side while in the asymmetric model the transition is seen in the number of agents itself. This latter property of the asymmetric model is a direct consequence of the geometrical structure of the space of strategies.
In the future, we intend to investigate both models analytically. The symmetric model, in particular, should be amenable to analytical treatment, perhaps following the methods introduced in for the two sided minority game.
# A measurement of 𝐻₀ from Ryle Telescope, ASCA and ROSAT observations of Abell 773
## 1 Introduction
We have previously reported the detection of a Sunyaev-Zel’dovich (SZ) decrement \[Sunyaev & Zel’dovich 1972\] towards the $`z=0.217`$ cluster Abell 773 using the Ryle Telescope (RT) \[Grainge et al 1993\]. (The SZ effect in this cluster has also been mapped by the millimeter array of the Owens Valley Radio Observatory \[Carlstrom, Joy & Grego 1996\].) The RT observations of Abell 773 form part of a continuing programme to observe an X-ray luminosity-limited sample of rich, intermediate-redshift clusters in order to measure $`H_0`$ by combining SZ and X-ray observations \[Jones et al.2001\]. Such programmes (e.g. Reese et al., 2002; Mason, Myers and Readhead, 2001; see also Birkinshaw, 1999 for a review) are direct measurements of $`H_0`$ free from distance-ladder arguments.
In Grainge et al. 1993 we did not calculate an estimate of $`H_0`$ because no suitable X-ray image of A773 and no estimate of its gas temperature existed. A ROSAT HRI image and ASCA spectroscopic data have since become available, and we have also made additional RT observations. These now enable us to make an estimate of the Hubble constant from this cluster, which, when combined with other clusters from the sample, will give an estimate of $`H_0`$ unbiased by the individual shapes and orientations of the clusters.
## 2 Ryle Telescope observations and source subtraction
The RT \[Jones 1991\] is an east–west synthesis telescope of 13-m antennas with a bandwidth of 350 MHz and an average system temperature for these observations of 65 K at an observing frequency of 15.4 GHz. We used five antennas in a compact configuration, giving two baselines of 18 m, three of 36 m, and five more out to 108 m. The short baselines alone are sensitive to the SZ signal; the longer ones are used to recognize and subtract the radio sources in the field that would otherwise mask the SZ decrement. We have made a total of 30 12-h observations of A773, each with the pointing centre $`\mathrm{RA}09^\mathrm{h}17^\mathrm{m}51^\mathrm{s}.91,\mathrm{Dec}.`$ $`+51^{\circ }43^{\prime }32^{\prime \prime }`$ (J2000). Phase calibration using 0859+470 and flux calibration using 3C 48 and 3C 286 were carried out as described in Grainge et al \[Grainge et al 1993\]. Similarly, we used the Postmortem package \[Titterington 1991\] to flag the data for interference and antenna pointing errors, and to weight them in accord with the continuously monitored system temperature of each antenna. As a standard check, we used the Aips package to make a map of each 12-h run and then combined the data.
We removed radio sources from the data by a simultaneous maximum-likelihood fit to several point sources and the SZ effect using a technique described by Grainger et al \[Grainger et al 2002\]. We use a model for the SZ signal as a function of baseline that is based on the $`\beta `$-model fit to the X-ray image described below (Section 3). We simultaneously fit flux densities for trial sources whose initial positions are determined both from a map made from just the long-baseline data ($`>2\mathrm{k}\lambda `$), and from a VLA 1.4-GHz image of the cluster field (Figure 1). This allows us to fit the optimum flux densities of sources whose existence we know of from the VLA image but which would not give a significant detection from the RT data alone. The positions and fitted flux densities are given in Table 1. The image made from the long ($`>2\mathrm{k}\lambda `$) source-subtracted baselines is consistent with noise (Figure 2).
To image the decrement, we removed the sources in Table 1 from all the visibilities and made a short-baseline map from baselines shorter than 1 k$`\lambda `$, and CLEANed this. The resulting image is shown in Figure 3. The decrement is $`527\mu \mathrm{Jy}\,\mathrm{beam}^{-1}`$ with a noise (1-$`\sigma `$) of $`60\mu \mathrm{Jy}\,\mathrm{beam}^{-1}`$; the beam is $`152\times 119`$ arcsec FWHM. Also shown is the X-ray image of the cluster; it can be seen that the alignment with the X-ray image is very good. The extension of the SZ image to the north-east is of marginal significance. The magnitude of the decrement is consistent with that of $`590\pm 116\mu `$Jy, in the same beam, reported in Grainge et al \[Grainge et al 1993\].
An alternative way of looking at the data is shown in Figure 4, which shows the real part of the source-subtracted visibilities binned radially, along with the best-fitting model based on the X-ray data. These data have the advantage, unlike the image pixels, of having independent Gaussian noise on each point; it is these that are used in the fitting for $`H_0`$.
## 3 X-ray observations and fitting
We measure the gas temperature from ASCA observations on 1994 April 29 of 46240 s (GIS) and 39904 s (SIS), using standard XSPEC tools. Times of high background flux were excluded and both GIS and SIS data were used. We took the Galactic absorbing column density predicted by Dickey and Lockman \[Dickey & Lockman 1990\] in the direction of A773 of $`1.3\times 10^{24}`$ H atoms $`\mathrm{m}^{-2}`$. Using a Raymond-Smith model, we find a temperature of $`8.7\pm 0.7`$ keV ($`90\%`$-confidence error bounds) and a metallicity of 0.25 solar. The 2–10 keV flux from A773 is $`(6.7\pm 1.0)\times 10^{-13}`$ $`\mathrm{W}\,\mathrm{m}^{-2}`$. Our temperature estimate is consistent with that of Allen and Fabian \[Allen & Fabian 1998\] who find a temperature of $`9.29_{-0.60}^{+0.69}`$ keV ($`90\%`$-confidence error bounds).
For the X-ray surface-brightness fitting we used a ROSAT HRI image of A773 with an effective exposure of 16518 s obtained on 13–15 April 1994 and analysed using standard ASTERIX routines. We calculate the ROSAT HRI count rate, given our estimates of metallicity and Galactic column and with the K-correction appropriate to the redshift of A773, to be $`1.53(\pm 0.08)\times 10^{-69}`$ counts $`\mathrm{s}^{-1}`$ from a $`1\mathrm{m}^3`$ cube of gas of electron density $`1\mathrm{m}^{-3}`$ at the temperature of A773 and at a luminosity distance of 1 Mpc.
We then fitted an ellipsoidal King profile to the X-ray image. Since the high spatial resolution of the HRI leads to a low count rate per pixel, we use Poisson rather than Gaussian statistics to fit for the measured count in each pixel. For $`c_i`$ counts measured at position $`x_i`$, and for a mean number $`f(x_i|a)`$ of counts predicted by the model given parameters $`a`$ (such as core radius), the probability of obtaining $`c_i`$ counts is
$$P(c_i|a)=\frac{\left(f(x_i|a)\right)^{c_i}}{c_i!}e^{-f(x_i|a)},$$
and the most likely value of $`a`$ can be obtained in a computationally efficient way by maximizing
$$\mathrm{ln}P(c|a)=\sum_i\left(c_i\mathrm{ln}f(x_i|a)-\mathrm{ln}c_i!-f(x_i|a)\right).$$
We fitted an ellipsoidal King profile to the HRI data with $`\theta _1`$ and $`\theta _2`$ as the perpendicular angular sizes in the plane of the image, assuming that the length along the line of sight is the geometric mean of the other two. We find $`\theta _1=60^{\prime \prime }`$ and $`\theta _2=44^{\prime \prime }`$, with the major axis at position angle $`16^{\circ }`$, $`\beta =0.64`$, and central electron density $`n_0=6.80\times 10^3h_{50}^{1/2}\mathrm{m}^{-3}`$ where $`H_0=50h_{50}`$ $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$. Fig. 5 shows the HRI image, the model, and the residual image with the best model subtracted. To assess the goodness of fit, we made 50 realisations of the image with the appropriate Poisson noise added, and calculated the mean and standard deviation of their Poisson likelihoods. The likelihood of the observed HRI image is 0.32 standard deviations from the mean; we therefore conclude that the fit is good and the cluster is well represented by a $`\beta `$ model.
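To illustrate the procedure, the sketch below maximizes this Poisson log-likelihood for a circular $`\beta `$-model on synthetic counts. The parametrization, the starting values and the use of scipy's Nelder–Mead simplex are our choices for the sketch, not a description of the actual pipeline.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic HRI-like image: counts on a pixel grid drawn from a beta-model.
ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]

def beta_model(params):
    # f(x | a): mean counts per pixel for a circular beta-model plus background.
    x0, y0, theta_c, beta, s0, bg = params
    r2 = ((x - x0) ** 2 + (y - y0) ** 2) / theta_c ** 2
    return s0 * (1.0 + r2) ** (0.5 - 3.0 * beta) + bg

true = (64.0, 64.0, 12.0, 0.64, 5.0, 0.05)
counts = np.random.default_rng(2).poisson(beta_model(true))

def neg_log_like(params):
    # Negative Poisson log-likelihood, dropping the ln(c_i!) constant.
    f = beta_model(params)
    if np.any(f <= 0):
        return np.inf
    return np.sum(f - counts * np.log(f))

fit = minimize(neg_log_like, x0=(60, 60, 10, 0.7, 4, 0.1), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
print(fit.x)   # recovers (x0, y0, theta_c, beta, s0, bg) up to Poisson noise
```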
There is a strong degeneracy in the fit between $`\beta `$ and $`\theta _{1,2}`$; however this has little effect on the comparison with the SZ data and the derived value of $`H_0`$. Figure 6 shows the likelihood contours for the fit in the $`\beta `$–$`\theta _1`$ plane, marginalised over $`n_0`$ and using the best-fit value of the axial ratio (which is very well constrained). Overlaid are the contours of predicted mean observed SZ flux density on the shortest RT baseline. It can be seen that despite the degeneracy between $`\beta `$ and $`\theta _1`$, the range of SZ flux densities corresponding to the 1-$`\sigma `$ limits of the model fit is only $`\pm 3\%`$. Since the SZ flux density varies as $`H_0^{-1/2}`$, this corresponds to a $`6\%`$ error in $`H_0`$ due to the model fitting. This lack of sensitivity to the $`\beta `$–$`\theta `$ degeneracy is characteristic of observations that are sensitive to spatial frequencies around the cluster core size (see e.g. Reese et al \[Reese et al 2000\]) and contrasts with the sensitivity to the model fitting of measurements that measure only lower spatial frequencies (e.g. Birkinshaw & Hughes \[Birkinshaw & Hughes 1994\]).
## 4 $`H_0`$ estimation
To measure $`H_0`$, we compared the real SZ data with a simulation of the SZ effect from the X-ray gas model. We use the expression of Challinor & Lasenby \[Challinor & Lasenby 1998\] to provide a relativistic correction to the standard non-relativistic SZ expression; in the case of A773, the effect is to increase our estimate of the $`y`$-parameter by $`2.4\%`$. We then simulated RT observations of the SZ effect due to the model gas distribution, compared these with the real source-subtracted RT visibilities on the same baselines, and adjusted $`H_0`$ to get the best fit. Using our temperature of $`8.7\pm 0.7`$ keV we find $`H_0=77_{-11}^{+13}`$ $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$, assuming an Einstein-de-Sitter universe. The $`1`$-$`\sigma `$ error quoted is that due solely to noise in the SZ data. For the best fit $`\beta ,\theta _{1,2}`$ model, the corresponding central density $`n_0`$ is $`8.44\times 10^3`$ $`\mathrm{m}^{-3}`$ and the central decrement $`737\pm 85\mu `$K.
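The essence of the fit can be written in a few lines: with the X-ray-derived gas model held fixed, the predicted SZ flux density scales as $`H_0^{-1/2}`$ (Section 3), so the best-fitting amplitude ratio between the data and a fiducial model fixes $`H_0`$. The sketch below illustrates this with invented binned visibilities; none of the numbers are the A773 data.

```python
import numpy as np

# Least-squares rescaling of a fiducial SZ model to binned, source-subtracted
# visibilities, assuming the predicted flux scales as H0^(-1/2) when the
# X-ray-derived gas model is held fixed.  All numbers are illustrative only.
h_fid = 50.0                                            # fiducial H0 (km/s/Mpc)
model = np.array([-560., -430., -280., -150., -80.])    # model Re(V) in microJy
data = np.array([-520., -400., -250., -160., -60.])     # "measured" Re(V)
sigma = np.array([60., 55., 50., 45., 40.])             # per-bin noise

# Best-fit amplitude ratio a = (h / h_fid)^(-1/2) from linear least squares:
a = np.sum(data * model / sigma ** 2) / np.sum(model ** 2 / sigma ** 2)
sigma_a = np.sum(model ** 2 / sigma ** 2) ** -0.5
h_best = h_fid * a ** (-2)
print(h_best, 2 * (sigma_a / a) * h_best)   # H0 and its 1-sigma noise error
```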
Grainge et al \[Grainge et al 2002\] consider at some length the contributions to error in the $`H_0`$ determination from A1413. The situation in A773 is very similar. The dominant contributions to the error in $`H_0`$ in A773 are $`\pm 16\%`$ from noise in the SZ measurement, $`\pm 12\%`$ from our estimation of the gas temperature and a likely error of $`\pm 14\%`$ from the uncertain line-of-sight depth. This is obtained by considering the range of axial ratios of simulated clusters that is needed to reproduce the projected axial ratio distribution observed in clusters with redshift similar to that of A773 \[Grainger 2001\]. Clearly this estimate is rather uncertain for a single object, but can be significantly reduced by averaging a sample of clusters with random orientations. Table 2 shows the complete error budget, and the final 1-$`\sigma `$ error limits of $`H_0=77_{-15}^{+19}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ if $`(\mathrm{\Omega }_\mathrm{m},\mathrm{\Omega }_\mathrm{\Lambda })=(1.0,0.0)`$ and $`H_0=85_{-17}^{+20}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ if $`(\mathrm{\Omega }_\mathrm{m},\mathrm{\Omega }_\mathrm{\Lambda })=(0.3,0.7)`$.
## 5 Conclusions
Using ASCA, ROSAT HRI, and RT observations of A773, we find:
1. there are eight radio sources detectable in the field of the cluster that we have removed from the data, which would otherwise contaminate the measurement of the SZ effect;
2. the correlated fitting errors on the shape parameters $`\beta `$ and $`\theta `$ have negligible effect on the derived value of $`H_0`$, a feature characteristic of observations on the scale of the cluster core size;
3. the estimated value of $`H_0`$ is $`77_{-15}^{+19}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ if $`(\mathrm{\Omega }_\mathrm{m},\mathrm{\Omega }_\mathrm{\Lambda })=(1.0,0.0)`$ or $`85_{-17}^{+20}\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ if $`(\mathrm{\Omega }_\mathrm{m},\mathrm{\Omega }_\mathrm{\Lambda })=(0.3,0.7)`$, where the 1-$`\sigma `$ error bars include estimates from the main sources of error: noise in the SZ data, X-ray temperature uncertainty, and uncertain line-of-sight depth.
### ACKNOWLEDGMENTS
We thank the staff of the Cavendish Astrophysics group who maintain and operate the Ryle Telescope, which is funded by PPARC. AE acknowledges support from the Royal Society; WFG acknowledges the support of a PPARC studentship; RK acknowledges support from an EU Marie Curie Fellowship.
This paper has been produced using the Blackwell Scientific Publications style file.
# Quantum theory of excess noise
## Abstract
We analyze the excess noise in the framework of the conventional quantum theory of laser-like systems. Our calculation is conceptually simple and our result also shows a correction to the semi-classical result derived earlier.
In attenuators, amplifiers and laser systems, the noise of the signal is increased by the interaction with a reservoir. In usual laser systems this is reflected in the formula for the Schawlow-Townes linewidth which provides a simple relation between the linewidth and the gain or loss of the system. Under certain conditions, however, the noise entering from the reservoir can exceed this minimum amount by a large factor — the so-called excess-noise or Petermann factor. The appearance of this factor was predicted by Petermann in the context of gain-guided semiconductor lasers. Within a semi-classical theory , this concept was later generalized to other systems in quantum electronics.
However, for a general system a complete quantum mechanical derivation is still lacking. Most of the considerations so far include the quantum noise properties ad hoc into an otherwise classical theory. Only a few simple systems have been discussed quantum mechanically in a rigorous way .
During recent years, many interesting experiments have been carried out: The first experiments used cavities with large output coupling enhancing the noise by a few times only. However, geometrically unstable laser cavities show excess-noise factors up to a few hundreds for both solid state lasers and gas lasers . Also the coupling of the polarizations of the laser light and the insertion of a small aperture can lead to large excess noise.
We derive the noise properties of an amplifier starting from the usual quantum mechanical description of the radiation field. First we derive a multi-mode master equation for the amplification. This is done in a way analogous to the maser theory assuming a reservoir of excited atoms . Then we are able to define quasi modes of the total system which show noise enhanced by the so-called $`K`$-factor. Our calculation is conceptually simple and transparent; it applies to all linear systems. We derive a correction to the semi-classical theory of Ref. . We subsequently generalize our result by including damping which allows us to apply it to the case of a laser below or up to threshold. We explain the main features of the excess noise and discuss its properties. For the experimentally relevant cases, we show that our calculations justify the use of the ordinary semi-classical treatment of excess noise. Thus we refrain from providing any explicit physical realization of our abstract model. Such a calculation would, in all essential features, reproduce the calculations carried out in the various model systems discussed in the literature . In this paper we do not include the effects of nonlinear saturation which needs to be considered separately.
Here we briefly recall the standard expressions of the quantized electromagnetic field in order to define our notation. We use orthonormal real mode functions $`u_n(x)`$ of the electromagnetic field with frequency $`\omega _n`$ which fulfill the boundary conditions for the given configuration in the whole “universe” and satisfy the orthonormality relation
$$\frac{1}{V}\int d^3x\,u_n(x)\cdot u_m(x)=\delta _{nm},$$
(1)
where $`V`$ is the volume of the whole space. Note that the mode function $`u_n(x)`$ is a vector including the polarization orientation and that we choose them to be real for convenience. The electric field operator then reads
$$\widehat{E}(x)=\sum_n\epsilon _nu_n(x)\left(\widehat{a}_n+\widehat{a}_n^{\dagger }\right),$$
(2)
where $`\widehat{a}_n`$ and $`\widehat{a}_n^{}`$ are the usual creation and annihilation operators of the field excitations and the so-called vacuum field amplitude is
$$\epsilon _n=\sqrt{\frac{\hbar \omega _n}{2\epsilon _0V}}.$$
(3)
We assume an amplifier medium consisting of two level atoms. This model has frequently been used for the quantum treatment of laser systems . The atoms start in the upper level, interact independently with the field for a short time before they are repumped incoherently to the upper level. The master equation for the density operator of the field is then obtained by tracing out the atomic variables. To describe the interaction between the field and a single atom at position $`x`$, we use the interaction Hamiltonian
$$\widehat{\stackrel{~}{H}}=\sum_n\epsilon _n[u_n(x)\cdot d]\left(\widehat{a}_n^{\dagger }\widehat{\sigma }^{-}+\widehat{a}_n\widehat{\sigma }^{+}\right),$$
(4)
where $`\widehat{\sigma }^\pm `$ are the raising and lowering operators for the two atomic levels coupled by the dipole moment $`d`$. After a short interaction time $`\tau `$ the change of the reduced field density operator $`\widehat{\stackrel{~}{\rho }}(t)`$, in the interaction picture, is given by
$`\delta \widehat{\stackrel{~}{\rho }}(t)=\frac{\tau ^2}{2\hbar ^2}\sum_{n,m}\epsilon _n\epsilon _m[u_n(x)\cdot d][u_m(x)\cdot d]e^{i(\omega _n-\omega _m)t}\left\{2\widehat{a}_n^{\dagger }\widehat{\stackrel{~}{\rho }}(t)\widehat{a}_m-\widehat{a}_m\widehat{a}_n^{\dagger }\widehat{\stackrel{~}{\rho }}(t)-\widehat{\stackrel{~}{\rho }}(t)\widehat{a}_m\widehat{a}_n^{\dagger }\right\}`$ (6)
$`+O(\tau ^3)`$
for one atom. We now assume that the atoms are introduced uniformly distributed over a volume $`V^{\prime }\subseteq V`$ in the upper level at a rate $`R`$. In principle we could let $`R`$ depend on the position $`x`$ and the reservoir average could include an average over the orientation of the dipole moment $`d`$. Without such refinements, the rate of change of the ensemble averaged density operator is given by
$`\frac{d}{dt}\widehat{\stackrel{~}{\rho }}(t)=R\frac{1}{V^{\prime }}\int d^3x\,\delta \widehat{\stackrel{~}{\rho }}(t)`$ (7)
$`=\frac{1}{2}\sum_{n,m}L_{m,n}e^{i(\omega _n-\omega _m)t}\left\{2\widehat{a}_n^{\dagger }\widehat{\stackrel{~}{\rho }}(t)\widehat{a}_m-\widehat{a}_m\widehat{a}_n^{\dagger }\widehat{\stackrel{~}{\rho }}(t)-\widehat{\stackrel{~}{\rho }}(t)\widehat{a}_m\widehat{a}_n^{\dagger }\right\},`$ (8)
with the matrix elements
$$L_{m,n}=\frac{R\tau ^2}{\hbar ^2}\epsilon _n\epsilon _m\frac{1}{V^{\prime }}\int d^3x\,[u_n(x)\cdot d][u_m(x)\cdot d].$$
(9)
Transforming from the interaction picture to the laboratory frame we get
$$\frac{d}{dt}\widehat{\rho }(t)=\frac{1}{2}\sum_{n,m}L_{m,n}\left\{2\widehat{a}_n^{\dagger }\widehat{\rho }(t)\widehat{a}_m-\widehat{a}_m\widehat{a}_n^{\dagger }\widehat{\rho }(t)-\widehat{\rho }(t)\widehat{a}_m\widehat{a}_n^{\dagger }\right\}-i\sum_n\omega _n[\widehat{a}_n^{\dagger }\widehat{a}_n,\widehat{\rho }(t)].$$
(10)
This multi-mode master equation with a symmetric matrix $`L_{m,n}`$ allows us to define the quasi modes and derive their noise properties in the following. Its important feature is the coupling between modes with different frequencies due to the reservoir.
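The mode coupling contained in Eq. (10) can be made concrete at the level of first moments: a short calculation gives $`d\langle \widehat{a}_k\rangle /dt=\frac{1}{2}\sum_mL_{k,m}\langle \widehat{a}_m\rangle -i\omega _k\langle \widehat{a}_k\rangle `$, so the rescaled amplitudes $`v_k=\langle \widehat{a}_k\rangle /\epsilon _k`$ evolve with the matrix $`\stackrel{~}{L}`$ introduced below in Eq. (13). The following minimal sketch (with a small random symmetric matrix as an arbitrary stand-in for a physical $`L_{m,n}`$; all parameter values are ours) propagates these amplitudes and checks that the quasi mode with the largest amplification rate dominates at long times.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 6                                   # modes of the universe kept in the sketch
omega = 1.0 + 0.02 * np.arange(n)       # mode frequencies omega_n
eps = np.sqrt(omega)                    # vacuum amplitudes, eps_n^2 ~ omega_n
A = rng.normal(scale=0.05, size=(n, n))
L = (A + A.T) / 2                       # symmetric gain matrix L_{m,n}

# Ltilde_{m,n} = (L_{m,n}/2 - i delta_{n,m} omega_n) eps_n / eps_m, cf. Eq. (13).
Lt = (0.5 * L - 1j * np.diag(omega)) * (eps[None, :] / eps[:, None])

# Rescaled first moments v_n = <a_n> / eps_n obey dv/dt = Ltilde v; solve exactly.
v0 = rng.normal(size=n) + 1j * rng.normal(size=n)
v_t = expm(200.0 * Lt) @ v0

# At long times only the quasi mode with the largest amplification rate survives.
w, R = np.linalg.eig(Lt)
lead = R[:, np.argmax(w.real)]
overlap = abs(np.vdot(lead, v_t)) / (np.linalg.norm(lead) * np.linalg.norm(v_t))
print(np.sort(w.real)[::-1][:2], overlap)   # two largest rates; overlap -> 1
```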
We may use the same kind of model also to derive dissipative losses of the field if we assume a fraction of the reservoir atoms to be initially in the lower state. These can be introduced into the system with a distribution differing from that of the amplifying atoms. But we will focus now only on the linear amplifier and discuss the case with damping later.
We are looking for a mode operator $`\widehat{A}`$ which obeys
$$\frac{d}{dt}\widehat{A}=(\frac{\lambda }{2}-i\mathrm{\Omega })\widehat{A}$$
(11)
for an arbitrary field state. Here $`\mathrm{\Omega }`$ is the real frequency and $`\lambda `$ is the real amplification rate. We write this mode operator in terms of the free field mode operators as
$$\widehat{A}=\sum_n\epsilon _nc_n\widehat{a}_n$$
(12)
with the expansion coefficients $`c_n`$. This transformation includes the vacuum-field amplitudes $`\epsilon _n`$ and we define $`\mathcal{E}=\sqrt{\frac{\hbar \mathrm{\Omega }}{2\epsilon _0V}}`$, because then the classical field amplitudes $`\epsilon _n\langle \widehat{a}_n\rangle `$ obey the same transformation. Inserting Eq. (11) into (12) we get an eigenvalue equation
$$\sum_n\left(\frac{1}{2}L_{m,n}-i\delta _{n,m}\omega _n\right)\frac{\epsilon _n}{\epsilon _m}c_n=(\frac{\lambda }{2}-i\mathrm{\Omega })c_m$$
(13)
for the non-Hermitian matrix $`\stackrel{~}{L}_{m,n}=(\frac{1}{2}L_{m,n}-i\delta _{n,m}\omega _n)\frac{\epsilon _n}{\epsilon _m}`$. Here $`c_n^{(\nu )}`$ is the right eigenvector of $`\stackrel{~}{L}_{m,n}`$; the corresponding left eigenvector is $`\epsilon _n^2c_n^{(\nu )}`$ . The superscript $`\nu `$ distinguishes the different eigenvectors.
The only properties of the left and right eigenvectors of non-Hermitian matrices which we need for our analysis are their mutual orthogonality and completeness : The eigenvectors fulfill the orthogonality condition
$$\sum_n\epsilon _n^2c_n^{(\nu )}c_n^{(\mu )}=\delta _{\nu ,\mu }\sum_n\epsilon _n^2\left(c_n^{(\nu )}\right)^2$$
(14)
and the completeness relation
$$\sum_\nu \frac{\epsilon _n^2c_n^{(\nu )}c_m^{(\nu )}}{\sum_{n^{\prime }}\epsilon _{n^{\prime }}^2\left(c_{n^{\prime }}^{(\nu )}\right)^2}=\delta _{n,m}$$
(15)
with $`\sum_{n^{\prime }}\epsilon _{n^{\prime }}^2(c_{n^{\prime }}^{(\nu )})^2\ne 0`$. We can now uniquely define the set of quasi-mode operators as
$`\widehat{A}_\nu =\frac{1}{\mathcal{E}_\nu }\sum_nc_n^{(\nu )}\epsilon _n\widehat{a}_n`$ (16)
with the vacuum field amplitude
$$\mathcal{E}_\nu =\sqrt{\frac{\hbar \mathrm{\Omega }_\nu }{2\epsilon _0V}}.$$
(17)
The inverse transformation is
$$\widehat{a}_n=\epsilon _n\sum_\nu \frac{c_n^{(\nu )}}{\sum_m\epsilon _m^2\left(c_m^{(\nu )}\right)^2}\mathcal{E}_\nu \widehat{A}_\nu .$$
(18)
Consequently the positive frequency part of the electric field operator is given by
$`\widehat{E}^{(+)}(x)=\sum_n\epsilon _nu_n(x)\widehat{a}_n`$ (19)
$`=\sum_\nu \mathcal{E}_\nu \left(\sum_n\frac{\epsilon _n^2c_n^{(\nu )}}{\sum_m\epsilon _m^2\left(c_m^{(\nu )}\right)^2}u_n(x)\right)\widehat{A}_\nu `$ (20)
$`=\sum_\nu \mathcal{E}_\nu U_\nu (x)\widehat{A}_\nu .`$ (21)
The quasi-mode eigenfunctions
$$U_\nu (x)=\sum_n\frac{\epsilon _n^2c_n^{(\nu )}}{\sum_m\epsilon _m^2\left(c_m^{(\nu )}\right)^2}u_n(x)$$
(22)
satisfy an orthogonality relation
$$\frac{1}{V}\int d^3x\,U_\nu (x)\cdot \overline{U}_\mu (x)=\delta _{\nu ,\mu }$$
(23)
with their adjoint quasi-mode functions
$$\overline{U}_\nu (x)=\sum_nc_n^{(\nu )}u_n(x).$$
(24)
The quasi-mode functions have the norm
$$N_\nu ^2=\frac{1}{V}\int d^3x\,U_\nu (x)\cdot U_\nu ^{*}(x)=\frac{\sum_n\epsilon _n^4|c_n^{(\nu )}|^2}{\left|\sum_m\epsilon _m^2\left(c_m^{(\nu )}\right)^2\right|^2}$$
(25)
and their adjoints
$$\overline{N}_\nu ^2=\frac{1}{V}\int d^3x\,\overline{U}_\nu (x)\cdot \overline{U}_\nu ^{*}(x)=\sum_n|c_n^{(\nu )}|^2.$$
(26)
From now on we only consider one quasi mode and drop the index $`\nu `$. This is well justified, because, after a long enough time, that quasi mode which has the largest amplification rate $`\lambda `$ is dominating in the sum Eq. (19). Thus all expectation values can be calculated with just this largest contribution.
Later we need the properties
$$\mathrm{\Omega }=\frac{\sum_n\epsilon _n^2\omega _n|c_n|^2}{\sum_n\epsilon _n^2|c_n|^2}=\frac{2\epsilon _0V}{\hbar }\frac{\sum_n\epsilon _n^4|c_n|^2}{\sum_n\epsilon _n^2|c_n|^2}$$
(27)
and
$$\lambda =\frac{\sum_{n,m}L_{n,m}\epsilon _n\epsilon _mc_n^{*}c_m}{\sum_n\epsilon _n^2|c_n|^2}$$
(28)
which can be obtained from the real and imaginary parts of Eq. (13) after taking the scalar product with the vector $`\epsilon _m^2c_m^{*}`$. Note that $`\mathrm{\Omega }`$ is the mean frequency with respect to the probabilities
$$p_n=\frac{\epsilon _n^2|c_n|^2}{\sum_{n^{\prime }}\epsilon _{n^{\prime }}^2|c_{n^{\prime }}|^2}.$$
(29)
We calculate now the noise of the slowly-varying quadrature operator
$$\widehat{X}(x)=(U(x)\widehat{A}e^{i\mathrm{\Omega }t}+U^{*}(x)\widehat{A}^{\dagger }e^{-i\mathrm{\Omega }t})$$
(30)
in a frame rotating with $`\mathrm{\Omega }`$. After some straightforward calculation we get the time evolution of the noise
$$\frac{d}{dt}(\mathrm{\Delta }X(x))^2=\lambda (\mathrm{\Delta }X(x))^2+\lambda |U(x)|^2\sum_n\epsilon _n^2|c_n|^2.$$
(31)
We have to compare this with the noise of the usual single-mode-amplifier master equation
$$\frac{d}{dt}\widehat{\rho }(t)=\frac{\lambda }{2}\left\{2\widehat{a}^{\dagger }\widehat{\rho }(t)\widehat{a}-\widehat{a}\widehat{a}^{\dagger }\widehat{\rho }(t)-\widehat{\rho }(t)\widehat{a}\widehat{a}^{\dagger }\right\}-i\mathrm{\Omega }[\widehat{a}^{\dagger }\widehat{a},\widehat{\rho }(t)]$$
(32)
for the mode frequency $`\mathrm{\Omega }`$, normalized mode function $`u(x)`$, and the vacuum field amplitude $`\mathcal{E}=\sqrt{\frac{\hbar \mathrm{\Omega }}{2\epsilon _0V}}`$. In this case the noise is given by the equation
$$\frac{d}{dt}(\mathrm{\Delta }X(x))^2=\lambda (\mathrm{\Delta }X(x))^2+\lambda |u(x)|^2\mathcal{E}^2.$$
(33)
Hence, after averaging over position in Eqs. (31) and (33) and using Eq. (25), we can read off the excess-noise factor
$`K=\frac{N^2}{\mathcal{E}^2}\sum_n\epsilon _n^2|c_n|^2=\frac{\sum_n\epsilon _n^2|c_n|^2\sum_n\epsilon _n^4|c_n|^2}{\mathcal{E}^2\left|\sum_m\epsilon _m^2c_m^2\right|^2}`$ (34)
$`=\left|\frac{\sum_n\epsilon _n^2|c_n|^2}{\sum_m\epsilon _m^2c_m^2}\right|^2.`$ (35)
From the triangular inequality $`\sum_n\epsilon _n^2|c_n|^2\ge \left|\sum_m\epsilon _m^2c_m^2\right|`$ follows $`K\ge 1`$. The peculiar mode coupling of the non-Hermitian eigenvalue equation (13) causes this enhancement of the reservoir noise entering the quasi mode.
We can now compare the result, Eq. (34), with the expression
$`\stackrel{~}{K}=\frac{\int d^3x\,U(x)\cdot U^{*}(x)\int d^3x\,\overline{U}(x)\cdot \overline{U}^{*}(x)}{\left|\int d^3x\,U(x)\cdot \overline{U}(x)\right|^2}`$ (36)
$`=N^2\overline{N}^2=\frac{\sum_n|c_n|^2\sum_n\epsilon _n^4|c_n|^2}{\left|\sum_m\epsilon _m^2c_m^2\right|^2}`$ (37)
which has been derived from a semi-classical theory . We see that they are almost identical when we use Eqs. (23), (25) and (26). The ratio of the factors is given by
$`\frac{\stackrel{~}{K}}{K}=\frac{\mathcal{E}^2\sum_n|c_n|^2}{\sum_n\epsilon _n^2|c_n|^2}=\frac{\sum_n|c_n|^2\sum_n|c_n|^2\epsilon _n^4}{\left(\sum_n|c_n|^2\epsilon _n^2\right)^2}`$ (38)
$`=\mathrm{\Omega }\overline{\left(1/\omega \right)}=1+𝒪(\mathrm{\Delta }\omega /\mathrm{\Omega })^2`$ (39)
with the mean inverse frequency $`\overline{\omega ^{-1}}`$ with respect to the probabilities $`p_n`$ introduced above in Eq. (29). Here we have used the definitions of $`\epsilon _n`$, $`\mathcal{E}`$ and $`\mathrm{\Omega }`$ in Eqs. (3), (17) and (27). The quantity $`\mathrm{\Delta }\omega `$ is a measure of the bandwidth of the quasi mode in terms of the modes of the universe. From Eqs. (34) and (36) we see, by using Schwarz’ inequality, that $`K\le \stackrel{~}{K}`$. The correction to the semi-classical result is small for the optical frequency domain where the bandwidth ($`<10^{10}`$ Hz) is negligible with respect to the mean frequency ($`>10^{14}`$ Hz). But in the micro-wave regime (GHz) the correction may be essential.
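Both inequalities, and the ratio (38), are easy to check numerically. In the following minimal sketch (a random symmetric coupling matrix is an arbitrary stand-in for a physical reservoir, and $`\epsilon _n^2\propto \omega _n`$ is built in) we take the right eigenvector of one quasi mode and evaluate Eqs. (35), (37) and (38):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
omega = 1.0 + 0.05 * rng.random(n)          # mode frequencies omega_n
eps2 = omega / omega.mean()                 # eps_n^2 proportional to omega_n
eps = np.sqrt(eps2)
A = rng.normal(scale=0.1, size=(n, n))
L = (A + A.T) / 2                           # symmetric coupling matrix
Lt = (0.5 * L - 1j * np.diag(omega)) * (eps[None, :] / eps[:, None])

w, R = np.linalg.eig(Lt)
c = R[:, np.argmax(w.real)]                 # right eigenvector c_n of one quasi mode

norm2 = np.sum(eps2 * c ** 2)               # sum_m eps_m^2 c_m^2, no conjugation
K = abs(np.sum(eps2 * abs(c) ** 2) / norm2) ** 2                 # Eq. (35)
Ktilde = (np.sum(abs(c) ** 2) * np.sum(eps2 ** 2 * abs(c) ** 2)
          / abs(norm2) ** 2)                                     # Eq. (37)

p = eps2 * abs(c) ** 2 / np.sum(eps2 * abs(c) ** 2)              # weights, Eq. (29)
ratio = np.sum(p * omega) * np.sum(p / omega)                    # Eq. (38)
print(K, Ktilde, Ktilde / K, ratio)         # K >= 1, K <= Ktilde, ratios agree
```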
We note that if the quasi-mode frequency $`\mathrm{\Omega }`$ was defined with respect to the weights $`|c_n|^2`$ instead of Eq. (29), the ratio Eq. (38) would be unity. Indeed, $`|c_n|^2`$ is the weight of the classical field amplitudes $`\epsilon _n\widehat{a}_n`$ in Eq. (12) and it is the weight of the universe modes $`u_n(x)`$ in the adjoint quasi-mode functions $`\overline{U}_\nu (x)`$ in Eq. (24). On the other hand, the weight of the universe modes in the quasi-mode functions in Eq. (22) is $`\epsilon _n^4|c_n|^2`$. But, Eq. (27) shows that the geometric mean of the two classical possibilities gives the proper distribution $`p_n`$ for the frequencies with the eigenfrequency $`\mathrm{\Omega }`$ as mean value.
As a next step we could derive damping for the quasi modes by considering the limit of a continuum of modes of the universe. In the case of classical field modes, this was done by Lang, Scully and Lamb in one dimension for a cavity with one perfect and one semi-transparent mirror. It is interesting to note that in this special case, the authors could explicitly prove that only one quasi mode exhibits amplification whereas all other quasi modes experience attenuation. However, in the context of discussing the properties of excess noise in this paper, we are satisfied by adding damping due to a separate reservoir. In an analogous way as before we can derive the multi-mode master equation
$`\frac{d}{dt}\widehat{\rho }(t)=\frac{1}{2}\sum_{n,m}L_{m,n}\left\{2\widehat{a}_n^{\dagger }\widehat{\rho }(t)\widehat{a}_m-\widehat{a}_m\widehat{a}_n^{\dagger }\widehat{\rho }(t)-\widehat{\rho }(t)\widehat{a}_m\widehat{a}_n^{\dagger }\right\}`$ (40)
$`+\frac{1}{2}\sum_{n,m}\mathrm{\Gamma }_{m,n}\left\{2\widehat{a}_n\widehat{\rho }(t)\widehat{a}_m^{\dagger }-\widehat{a}_m^{\dagger }\widehat{a}_n\widehat{\rho }(t)-\widehat{\rho }(t)\widehat{a}_m^{\dagger }\widehat{a}_n\right\}-i\sum_n\omega _n[\widehat{a}_n^{\dagger }\widehat{a}_n,\widehat{\rho }(t)].`$ (41)
The symmetric matrix $`\mathrm{\Gamma }_{m,n}`$ is defined similar to the matrix $`L_{m,n}`$ in Eq. (9), with a possibly different volume $`V^{\prime }`$ and a possibly different atom injection rate $`R`$ for the lower state.
We are interested in the case when the strength of the amplification is adjusted such that the quasi-mode amplitude obeys the equation
$$\frac{d}{dt}\widehat{A}=-i\mathrm{\Omega }\widehat{A}$$
(42)
yielding a stationary modulus. At threshold, the laser quasi-mode is oscillating without change of amplitude. Thus, the effects of amplification and attenuation must compensate exactly on the average, but only for this quasi-mode. This does not mean that the two contributions cancel at the level of the master equation, but only that we can find an eigenmode with a purely imaginary eigenvalue. In fact, assuming exact cancellation in the master equation leads us to a trivial case, see below. In Ref. it is assumed that the same applies to the saturated gain at steady state operation above threshold; here we consider only the linear behavior close to threshold. We return to the discussion of the nonlinear problem in a forthcoming publication.
These assumptions lead to the eigenvalue problem
$$\sum_n\left(\frac{1}{2}L_{m,n}-\frac{1}{2}\mathrm{\Gamma }_{m,n}-i\delta _{n,m}\omega _n\right)\frac{\epsilon _n}{\epsilon _m}c_n=-i\mathrm{\Omega }c_m.$$
(43)
We thus require the amplification rate $`\lambda `$ as given in Eq. (28) and the damping rate
$$\gamma =\frac{\sum_{n,m}\mathrm{\Gamma }_{n,m}\epsilon _n\epsilon _mc_n^{*}c_m}{\sum_n\epsilon _n^2|c_n|^2}$$
(44)
to be equal; $`\gamma =\lambda `$. Proceeding as before and averaging over position, we obtain the equation
$$\frac{d}{dt}(\mathrm{\Delta }X)^2=(\gamma +\lambda )\mathcal{E}^2K=2\lambda \mathcal{E}^2K$$
(45)
for the noise with the same excess-noise factor $`K`$ as in Eq. (34). This describes a diffusion process with the diffusion constant $`2D_X=2\lambda \mathcal{E}^2K`$. The linewidth of the laser at threshold is then given by the phase diffusion constant $`2D_\varphi =2D_X/I=K\lambda ^2/4P`$ with field intensity $`I=4\mathcal{E}^2N^2\langle \widehat{A}^{\dagger }\widehat{A}\rangle `$ and output power $`P=\gamma I/(4\mathcal{E}^2)=\gamma N^2\langle \widehat{A}^{\dagger }\widehat{A}\rangle `$ measured in photon energies. The measurement of the laser intensity is not straightforward in the case of a quasi-mode. However, if the detector is mode matched to the outgoing quasi-mode profile, the maximum intensity $`I`$ is obtained. All other detector arrangements will miss some intensity and give a lower value. In the present case, we find the noise to be purely due to phase diffusion. This derives from our assumption that we consider the steady state near threshold, not a linearized noise theory around the saturated gain. In the latter case, we additionally would expect a line width contribution from the amplitude fluctuations.
We consider now the steady state case when the mode coupling is mainly caused by the damping, which holds if $`L_{m,n}\approx \lambda \delta _{m,n}`$. Then we get from Eq. (43) the eigenvalue equation
$$\sum_n\left(-\frac{1}{2}\mathrm{\Gamma }_{m,n}-i\delta _{n,m}\omega _n\right)\frac{\epsilon _n}{\epsilon _m}c_n=(-\frac{\gamma }{2}-i\mathrm{\Omega })c_m$$
(46)
which is the equation describing a system without an amplifying medium. Since with this assumption the eigenvectors are the same for Eqs. (43) and (46), also the $`K`$-factor is the same in both cases. This result justifies the commonly used procedure to calculate the $`K`$-factor of a laser system from the cavity-decay properties alone. Our assumption probably gives an over-simplified picture of a real laser. However, in most systems the gain medium is distributed uniformly over an essential part of the mode volume. From the definition, Eq. (9), then follows that $`L_{m,n}\approx \lambda \delta _{m,n}`$. Besides this, the gain is often due to a resonant interaction whereas the damping is highly broad band. In that situation it seems reasonable to assume the gain to be diagonal in the “universe” modes and the mode-mode coupling to derive mainly from the loss mechanisms. These contain diffractive losses at the cavity edges and transmission losses, and hence their distribution is bound to differ greatly from the distribution of the amplification. In comparison to the off-diagonal elements of $`\mathrm{\Gamma }_{m,n}`$, the off-diagonal elements of $`L_{m,n}`$ can be neglected.
From Eq. (43) we can also see in which case the minimum excess noise $`K=1`$ can be achieved. For $`L_{m,n}=\mathrm{\Gamma }_{m,n}`$ we have $`\widehat{a}_n=\widehat{A}_n`$. Losses and amplification are compensated in each volume element. This means that, when damping and amplification are acting equally in the same volume $`V^{}`$, the true modes and the quasi modes are identical. This can happen when losses are mainly provided by the same entities which give the gain like in dye lasers. In most lasers, however, it is mainly the smallness of the off-diagonal elements of the matrices $`L_{m,n}`$ and $`\mathrm{\Gamma }_{m,n}`$ which leads to a $`K`$-factor close to one.
Summarizing our main results, we have derived the excess-noise factor $`K`$ within the framework of the conventional quantum theory of laser-like systems. Our calculation is conceptually simple and our result also shows a small correction to the well established semi-classical one.
Acknowledgement. We thank M. T. Fontenelle and U. Leonhardt for helpful discussion. We also thank A. E. Siegman and J. P. Woerdman for providing manuscripts prior to publication. One of us (P.J.B.) thanks the Alexander von Humboldt Foundation for supporting his work at the Royal Institute of Technology.
# Numerical investigation of the thermodynamic limit for ground states in models with quenched disorder
## Abstract
Numerical ground state calculations are used to study four models with quenched disorder in finite samples with free boundary conditions. Extrapolation to the infinite volume limit indicates that the configurations in “windows” of fixed size converge to a unique configuration, up to global symmetries. The scaling of this convergence is consistent with calculations based on the fractal dimension of domain walls. These results provide strong evidence for the “two-state” picture of the low temperature behavior of these models. Convergence in three-dimensional systems can require relatively large windows.
The structure of the thermodynamic set of states of a system in statistical mechanics is studied formally through the infinite volume limits of correlation functions . If a nested sequence of systems with given Hamiltonian and boundary conditions has spin correlation functions that converge in the infinite volume limit, a thermodynamic state can be defined. For example, in a ferromagnet with fixed, positive fields at the boundary, the single-spin correlation function converges to a positive value, defining an “up” state. For disordered spin systems, the question of the number of thermodynamic states is a subtle one . Whether there are many thermodynamic states in some sense or a small number of states related by simple global symmetries (e.g., two spin-flip related states in an Ising spin glass) has been a most controversial point for low-dimensional systems. Part of this debate has been over what are the most useful methods for determining the structure of thermodynamic states, spin overlaps $`P(q)`$ or correlation functions in subsystems and it is unclear whether Monte Carlo simulations at finite temperature can be used to study large enough systems .
This letter describes the results of numerical computations which address the structure of states in disordered systems in the thermodynamic limit, at zero temperature. Two two-dimensional models, an Ising spin glass and a charge density wave (CDW) model (also referred to here as an elastic medium model) and two three-dimensional models, a CDW model and a dimer matching model that is equivalent to non-intersecting lines in a random medium (similar to vortex lines in type-II superconductors), were studied. The ground states were computed for a sequence of free boundary conditions and the configurations in a fixed finite subsystem (or “window”) were compared. This study is a particular instance of the numerical approach suggested by Newman and Stein , who have presented detailed arguments that the existence of many states, as in the Parisi solution of the mean field spin glass, gives rise to “chaotic size dependence” . The principle result derived from the simulations presented here is that the window configurations converge to a single fixed configuration with probability one. These computations strongly support the picture of a small number of ground states related by global symmetries, consistent with the droplet model . The details of the convergence to a single fixed configuration as the boundary grows has a scaling behavior which is well-described by a simple picture of domain walls.
The 2d spin glass model (SG) studied has spins $`s_i=\pm 1`$ defined at lattice points $`i`$, with Edwards-Anderson Hamiltonian $`H_{\mathrm{SG}}=-\sum_{ij}J_{ij}s_is_j`$, where $`J_{ij}`$ is chosen independently from a Gaussian distribution for all nearest neighbor bonds $`ij`$. This model is believed to be paramagnetic at finite temperature, but is a spin glass at $`T=0`$; minimal energy large scale excitations of size $`L`$ have an energy $`E(L)\sim L^{\theta _{\mathrm{SG}}}`$ with $`\theta _{\mathrm{SG}}\approx -0.27`$ . The discretized CDW or elastic medium model in two dimensions (E2) studied here is also equivalent to a disordered substrate model or vortex lines in two dimensions pinned by quenched disorder . The configurations in this model are defined by complete dimer coverings of a hexagonal lattice, with the Hamiltonian being the sum over covered dimers $`d`$ of dimer weights $`w_d`$, $`H_{E2}=\sum_dw_d`$, where the $`w_d`$ are chosen for each bond from a uniform distribution. In mean field replica calculations, matching problems are found to have replica symmetric solutions . A mapping of the dimer model to a discrete height representation $`h`$ can be made ; the variable $`h`$ corresponds to the scalar phase displacements in CDW models. This model is believed to have a finite temperature phase transition, with the height-height correlations $`\langle [h(r)-h(0)]^2\rangle `$ behaving as $`\mathrm{ln}(r)`$ in the high-$`T`$ phase and as $`\mathrm{ln}^2(r)`$ in the low-$`T`$ phase. In this model, $`\theta _{\mathrm{E2}}=0`$ ($`E(L)\sim \mathrm{const}`$.) The model E2 can be extended to three dimensions in two distinct ways; both are studied here. One extension is that of dimer covering (matching) on a cubic lattice (M3), which can be mapped to a set of vortex lines with hard-core repulsion . It has a Hamiltonian identical to that for E2, with the covering dimers a subset of the edges in a simple cubic lattice. The other 3-D model is the three dimensional CDW or elastic medium model (E3) ; in the continuum limit, $`\theta _{E3}=1`$ (consistent with numerics in Ref. .) The low temperature phase of the elastic medium models has been studied using both renormalization group and replica symmetry breaking techniques, which are usually, though not exclusively, interpreted physically as describing systems with few states or many states, respectively.
These models of disordered systems were studied using polynomial-time combinatorial optimization algorithms . The spin glass was studied on a triangular lattice, using the method developed by Barahona , rather than the string method which is often used . The minimum weighted matching algorithm used for the implementation of Barahona’s algorithm was the algorithm described in Ref. . Calculations were made for at least $`10^3`$ samples of up to $`512^2`$ spins. The model E2 can be mapped to a bipartite matching problem and was solved using the algorithm of Ref. for at least $`10^3`$ samples of sizes up to $`1024^2`$ sites. The same algorithm was used for model M3, 3-D matching, with up to $`128^3`$ sites with at least $`10^3`$ samples, while the push-relabel maximum flow algorithm as implemented in Ref. was used to study the 3-D elastic medium model E3 (up to $`64^3`$ sites with at least $`10^3`$ samples). The algorithms used determine ground states up to global symmetry transformations. For example, in the spin glass, unsatisfied bonds (bonds with $`J_{ij}s_is_j<0`$) are calculated, rather than $`s_i`$. Configurations related by symmetries are considered identical here, so that a “two-state” picture for spin glasses naturally appears as a single state in the computations .
The effect of system size was studied extensively for free boundary conditions. The disorder realizations for each sample $`S_L^\alpha `$ of linear size $`L`$ (with $`L^d`$ spins or sites) were generated so that $`S_L^\alpha `$ was a subsystem of a given infinite volume sample $`\alpha `$. Two finite samples $`S_L^\alpha `$ and $`S_{L^{\prime }}^\alpha `$ had the same quenched disorder in their intersection. Each finite sample was centered at an origin $`C`$, so that a sequence of samples $`S_L^\alpha ,S_{L^{\prime }}^\alpha ,S_{L^{\prime \prime }}^\alpha ,\mathrm{\ldots }`$ with $`L<L^{\prime }<L^{\prime \prime }<\mathrm{\cdots }`$ gives a nested set of square or cubic samples centered about $`C`$. The $`L\to \mathrm{\infty }`$ limit could then be numerically studied for a number of infinite samples $`\alpha `$. The free boundary conditions were assumed to be typical for the models SG and M3. In the elastic models, free boundary conditions give ground states with lower energy than boundary conditions that would introduce a uniform strain in the elastic models in the infinite volume limit; such uniform strain states are not considered here.
The configuration differences for samples of different sizes $`L<L^{}`$ were computed by comparing the exact ground states in the volumes of size $`L^d`$ where $`S_L^\alpha `$ and $`S_L^{}^\alpha `$ overlapped. Spin glass ground states in two dimensions were compared by finding the differences in unsatisfied bonds. An example of such a ground state comparison by bond overlap is shown in Fig. 1. For the models with dimer matchings (E2 and M3), the configurations are compared by finding the symmetric difference of the dimer sets in the common volume. The natural comparison for the height configurations for the model E3 is to determine where the gradients of the heights in the intersection volume differ.
The primary quantity computed was the (sampled) probability that a change in boundary conditions resulted in any change in the ground state configuration in a window of size $`w`$ centered at $`C`$. The probability $`P(L^{},L,w)`$ is defined as the probability that the configuration in the window region changes as the system size is increased from $`L`$ to $`L^{}`$, that is, that the ground state configuration for $`S_L^{}^\alpha `$ differs from that for $`S_L^\alpha `$ in the volume of size $`w`$ centered at $`C`$. This quantity was estimated by sampling over a large number of samples $`\alpha `$ for various $`L^{}`$, $`L`$, and $`w`$ . This measurement is sensitive to all gauge invariant spin correlation functions in the window volume.
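The estimator can be sketched as follows, with ground states represented abstractly as sets of bond midpoints (unsatisfied bonds for SG, covered dimers for E2 and M3) and toy data standing in for the actual exact ground states.

```python
def window_changed(gs_small, gs_large, w):
    """True if the two configurations differ inside the central w-window;
    each element is the (x, y) midpoint of an unsatisfied bond or dimer."""
    diff = gs_small ^ gs_large                    # symmetric difference
    return any(max(abs(x), abs(y)) <= w / 2 for x, y in diff)

def estimate_P(pairs, w):
    """pairs: one (ground state of S_L, ground state of S_L') per sample."""
    return sum(window_changed(a, b, w) for a, b in pairs) / len(pairs)

# toy example: a single sample whose configurations differ only far from C
pairs = [({(0.5, 0.0), (7.5, 3.0)}, {(0.5, 0.0), (7.5, 4.0)})]
print(estimate_P(pairs, w=4))                     # 0.0: no defect in window
```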
A plot of the data for $`P(L^{},L,w)`$, as a function of $`w`$ for various $`L^{}`$ and $`L`$, is shown in Fig. 2 for the spin glass. Assuming scale invariance, $`P`$ should be a function of the two ratios $`L^{}/L`$ and $`L/w`$. The data are consistent with this hypothesis, for large values of $`w`$ and $`L`$. For fixed $`L/w`$, $`P(L^{},L,w)`$ approaches a constant for large $`w`$ or large $`L`$. Note that to within error estimates, $`P(L^{},L,w)`$ is independent of $`L^{}`$ for $`L^{}/L=2,4,8`$: the probability of change in a finite window is approximately independent of the magnitude of expansion in the boundary, for $`L^{}\ge 2L`$ ($`P`$ does decrease noticeably as $`L^{}\to L`$). In addition, for fixed $`w`$, $`P(L^{},L,w)`$ decreases approximately as a power law in $`L`$ (as indicated by the even vertical spacing of the data points for $`P\ll 1`$). The data strongly suggest, by extrapolation to larger values of $`L`$ for fixed $`w`$, that the probability of changing the configuration in a window of size $`w`$ goes to zero for $`L^{}/L\to \infty `$ as $`L\to \infty `$, implying convergence to a unique thermodynamic ground state (up to global symmetries).
The data can be explained by simple assumptions about the convergence of the configurations as $`L\to \infty `$ and the properties of domain walls or defect lines. For the spin glass model, induced domain walls are lines where the bonds change from satisfied to unsatisfied or vice versa. In models which are represented by a matching (E2 and M3) defects are also line objects and are composed of bonds where the dimer covering changes. In model E3, the induced walls are surfaces where the height gradient changes. Defect lines have fractal dimension $`d_f^{\mathrm{SG}}=1.27(1)`$ for model SG and $`d_f^{\mathrm{E2}}=1.25(1)`$ for model E2 . For the 3-D elastic medium, a shift in boundary conditions introduces a domain wall of dimension $`d_f^{\mathrm{E3}}=2.60(5)`$ , while localized string defects were computed during the course of this work to have fractal dimension $`d_f^{\mathrm{M3}}=1.65(4)`$ in the 3-D matching model. If the fractal dimension of the defects is large enough ($`d_f>d/2`$) that no more than $`O(1)`$ defects of size $`L`$ can co-exist in the volume $`L^d`$, the expected number of defects of linear size $`L`$ introduced upon expansion to size $`L^{}`$ is bounded above by a constant. Whether boundary changes do induce a number of defects that saturate this bound is less clear a priori. For the models where $`\theta \le 0`$, finite changes at the boundary are likely to induce as many defects as possible, as the large scale defect cost is comparable to the cost of local changes at the boundary. The probability that a line or surface will intersect a window of size $`w`$ is then the ratio of the number of volumes of size $`w^d`$ that intersect the defect to the number of volumes of size $`w^d`$ in the volume $`L^d`$, giving the form
$$P(L^{},L,w)=c(L^{}/L)(L/w)^{-\kappa },$$
(1)
for large $`L/w`$, with $`\kappa =d-d_f`$ by the supposition of a single dominant defect and, by these numerical results, the coefficient function $`c(L^{}/L)`$ quickly converges to a constant value for $`L^{}/L\ge 2`$. This form can be checked by plotting $`P`$ as a function of $`L/w`$ and comparing with a line of slope $`d_f-d`$, as shown in Fig. 3. The match between this prediction and the data in $`d=2`$ is quite good; a two-parameter fit (varying $`c(\infty )`$ and $`\kappa `$) gives exponents that agree with $`\kappa =d-d_f`$ to within $`0.05`$ for models SG and E2. Differences of this order are within statistical fluctuations and apparent finite size effects. In addition, numerical study of a number of configurations for three values of $`L`$ (e.g., $`S_L^\alpha `$, $`S_L^{}^\alpha `$, $`S_{(L^{})^2/L}^\alpha `$) for the $`d=2`$ spin glass suggests that the location of $`L`$-scale defects in a volume is nearly independent of $`L^{}`$, giving more support to the conclusion that there is convergence to a unique state in these models.
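A sketch of the corresponding log-log fit, with synthetic data standing in for the measured $`P`$ values of Fig. 3; for the 2d spin glass the prediction is $`\kappa =d-d_f\approx 0.73`$.

```python
import numpy as np

d, d_f = 2, 1.27                     # 2d spin glass values from the text
ratios = np.array([4.0, 8.0, 16.0, 32.0, 64.0])   # L/w
P = 0.9 * ratios ** (d_f - d)        # synthetic data, P ~ (L/w)^(-kappa)

slope, log_c = np.polyfit(np.log(ratios), np.log(P), 1)
print(-slope, d - d_f)               # fitted kappa vs predicted 0.73
```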
The 3-D results also indicate convergence to a single state, as $`P(L/w)\to 0`$ for $`L/w\to \infty `$. The quantitative fits are also consistent with a defect picture, but have a larger uncertainty. For the 3-D elastic medium, the data are consistent with $`P\sim (L/w)^{d_f^{\mathrm{E3}}-d}`$ for fixed $`w`$, as shown in Fig. 3(c), though larger sample sizes would be useful. For the problem $`M3`$, the behavior $`P\sim (L/w)^{-1.35}`$, with $`\kappa =d-d_f`$, can be fit to the largest $`L/w`$ values, though only over a small range. Note that $`P>0.9`$ for $`L/w\le 4`$ and $`P>0.5`$ for $`L/w=8`$. Under an expansion $`L^{}/L=2`$, the configuration in a window usually changes for $`L/w<8`$. Such change in small systems mimics the predictions of a many-states picture.
In summary, the infinite-volume limit for four model disordered systems was studied numerically by computing ground state configurations in fixed volumes embedded in systems of successively larger sizes. Strong evidence was found for convergence to a unique state (up to global symmetries), even in cases where $`\theta \le 0`$. The convergence to a unique state in $`d=2`$ can be understood in detail by estimating the chance of a defect wall intersecting a given area upon a boundary change. The 3d model results are more qualitative: while it appears that the system converges to a unique state, the ratio of scales ($`L^{}/L`$, $`L/w`$) required is larger, so that systems of size $`L>50`$ are needed. Polynomial ground state algorithms are not available for the 3d spin glass, and this system is not directly addressed here, but these results suggest that one should be cautious in interpreting finite temperature Monte Carlo results and ground state calculations in small systems.
I would like to thank Daniel Fisher for stimulating discussions and comments on a draft of this paper. At the completion of this paper, I became aware of related results for the spin glass in $`d=2`$ by Palassini and Young . This work was supported in part by the National Science Foundation (DMR-9702242) and by the Alfred P. Sloan Foundation.
# Supersymmetric string vacua on $`AdS_3\times 𝒩`$
RI-3-99
ITP-SB-99-10
hep-th/9904024
Amit Giveon (e-mail: giveon@vms.huji.ac.il)
Racah Institute of Physics, The Hebrew University
Jerusalem 91904, Israel
and
Martin Roček (e-mail: rocek@insti.physics.sunysb.edu)
Institute of Theoretical Physics, State University of New York
Stony Brook, NY 11794-3840, USA
String backgrounds of the form $`AdS_3\times 𝒩`$ that give rise to two dimensional spacetime superconformal symmetry are constructed.
4/99
1. Introduction
In this letter we study the conditions on curved string backgrounds of the form $`AdS_3\times 𝒩`$ that give rise to spacetime superconformal symmetry. We use the NSR formulation; for simplicity we describe the left moving sector only – it can be combined with the right-moving sector in the standard way. String theories on $`AdS_3\times 𝒩`$ were studied for the bosonic case in \[1,2,3\]. Examples of supersymmetric strings in the NSR formulation were studied in, e.g., \[1,4,5,6,3,7\]. For some early work on string theory on $`AdS_3`$ see, e.g., .
The main results of this work are the following: If $`𝒩`$ has an affine $`U(1)`$ symmetry and $`𝒩/U(1)`$ has an $`N=2`$ worldsheet superconformal symmetry, then there is a construction of a superstring with two-dimensional $`N=2`$ spacetime superconformal symmetry. A $`Z_2`$ quotient of this construction leads to a family of theories with two-dimensional $`N=1`$ spacetime superconformal symmetry. We also discuss conditions for $`N>2`$ superconformal symmetry: these involve an $`𝒩`$ with an $`SU(2)`$ factor whose level is determined in terms of the level of the $`AdS_3`$ background.
This investigation is the analog of the study of supersymmetric backgrounds for compactification to Minkowski space $`M_d`$ in $`d=3`$ or $`4`$ dimensions \[9,10,11,12\]. String theories on $`M_4\times 𝒩`$ have four-dimensional $`N=1`$ spacetime supersymmetry provided $`𝒩`$ has an $`N=2`$ worldsheet superconformal symmetry \[9,10\]. String theories on $`M_3\times 𝒩`$ have three-dimensional $`N=2`$ spacetime supersymmetry if $`𝒩`$ has an affine $`U(1)`$ and $`𝒩/U(1)`$ has an $`N=2`$ worldsheet superconformal symmetry. A $`Z_2`$ quotient of such an $`𝒩`$ leaves a three-dimensional $`N=1`$ spacetime supersymmetry; in this case, the symmetry algebra on $`𝒩`$ can be extended to a nonlinear algebra associated to manifolds with $`G_2`$ holonomy \[11,12\].
The structure of the paper is as follows: In section 2, we describe the worldsheet properties that lead to spacetime supersymmetry on $`AdS_3\times 𝒩`$. In section 3, we construct the two dimensional $`N=2`$ spacetime superconformal algebra associated with the boundary CFT of $`AdS_3`$. In section 4, we take a quotient of the $`N=2`$ construction to find a class of models with $`N=1`$ spacetime superconformal symmetry. In section 5, we discuss models with $`N>2`$ spacetime superconformal symmetry. Finally, in section 6, we comment on new examples that arise from our results and other issues.
2. Worldsheet properties of fermionic strings on $`AdS_3\times 𝒩`$
We first consider the $`AdS_3`$ factor of the background. This theory has affine $`SL(2)`$ currents
$$\psi ^A+\theta \sqrt{\frac{2}{k}}J^A,A=1,2,3,$$
where
$$J^A=j^A-\frac{i}{2}ϵ^A{}_{BC}\psi ^B\psi ^C,$$
and
$$\begin{array}{cc}\hfill \psi ^A(z)\psi ^B(w)& \sim \frac{\eta ^{AB}}{z-w},\qquad \eta ^{AB}=\mathrm{diag}(+,+,-),\hfill \\ & \\ \hfill J^A(z)J^B(w)& \sim \frac{\frac{k}{2}\eta ^{AB}}{(z-w)^2}+\frac{iϵ^{AB}{}_{C}J^C}{z-w}.\hfill \end{array}$$
The purely bosonic currents $`j^A`$ generate an affine $`SL(2)`$ algebra at level $`k+2`$ and commute with $`\psi ^A`$, whereas the total currents $`J^A`$ generate a level $`k`$ $`SL(2)`$ algebra and act on $`\psi `$ as follows from (2.1),(2.1). The central charge of the $`AdS_3`$ part of the theory is thus
$$c^{SL(2)}=\frac{3(k+2)}{k}+\frac{3}{2},$$
where the two terms are the bosonic and fermionic contributions, respectively. The $`N=1`$ worldsheet supercurrent is
$$T_F^{SL(2)}=\sqrt{\frac{2}{k}}(\psi ^Aj_A-i\psi ^1\psi ^2\psi ^3).$$
The internal space $`𝒩`$ is described by a unitary superconformal field theory (CFT) background with central charge
$$c^𝒩=15-c^{SL(2)}=\frac{21}{2}-\frac{6}{k}.$$
We denote the worldsheet supercurrent of $`𝒩`$ by $`T_F^𝒩`$.
The construction of $`N=2`$ spacetime supersymmetry described in section 3 below requires that $`𝒩`$ have an affine $`U(1)`$ symmetry with an $`N=1`$ current
$$\psi ^{U(1)}+\theta J^{U(1)},$$
where
$$\begin{array}{cc}\hfill \psi ^{U(1)}(z)\psi ^{U(1)}(w)& \sim \frac{1}{z-w},\hfill \\ & \\ \hfill J^{U(1)}(z)J^{U(1)}(w)& \sim \frac{1}{(z-w)^2},\hfill \\ & \\ \hfill J^{U(1)}(z)\psi ^{U(1)}(w)& \sim 0,\hfill \end{array}$$
and a worldsheet supercurrent
$$T_F^{U(1)}=\psi ^{U(1)}J^{U(1)}.$$
It is convenient to bosonize the affine current
$$J^{U(1)}=i\partial Y$$
where $`Y`$ is a canonically normalized scalar: $`Y(z)Y(w)\sim -\mathrm{log}(z-w)`$.
We can construct the quotient CFT $`𝒩/U(1)`$ with the supercurrent
$$T_F^{𝒩/U(1)}=T_F^𝒩-T_F^{U(1)};$$
this has a central charge
$$c^{𝒩/U(1)}=c^𝒩-c^{U(1)}=9-\frac{6}{k}.$$
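For convenience, the central charge bookkeeping can be spelled out explicitly; the only input beyond the formulas above is that the $`N=1`$ $`U(1)`$ multiplet (a boson plus a fermion) contributes $`c^{U(1)}=3/2`$:

```latex
c^{SL(2)} = \frac{3(k+2)}{k}+\frac{3}{2} = \frac{9}{2}+\frac{6}{k},
\qquad
c^{\mathcal{N}} = 15-c^{SL(2)} = \frac{21}{2}-\frac{6}{k},
\qquad
c^{\mathcal{N}/U(1)} = c^{\mathcal{N}}-\frac{3}{2} = 9-\frac{6}{k}.
```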
The construction of $`N=2`$ spacetime supersymmetry described in section 3 below further requires that $`𝒩/U(1)`$ have an $`N=2`$ superconformal algebra (which commutes with the $`U(1)`$ above). In particular, its $`U(1)_R`$-current $`J_R^{𝒩/U(1)}`$ has the standard normalization
$$J_R^{𝒩/U(1)}(z)J_R^{𝒩/U(1)}(w)\sim \frac{1}{3}\frac{c^{𝒩/U(1)}}{(z-w)^2}.$$
We bosonize $`J_R^{𝒩/U(1)}`$ in terms of a canonically normalized scalar $`Z`$ by
$$J_R^{𝒩/U(1)}=i\sqrt{\frac{c^{𝒩/U(1)}}{3}}\partial Z\equiv ia\partial Z,$$
where
$$a\equiv \sqrt{3-\frac{2}{k}}.$$
The worldsheet supercurrent $`T_F^{𝒩/U(1)}`$ can be decomposed into two parts with $`R`$-charges $`\pm 1`$; these charges can be expressed in terms of explicit $`Z`$ dependent factors to give:
$$T_F^{𝒩/U(1)}=e^{\frac{i}{a}Z}\tau _++e^{-\frac{i}{a}Z}\tau _{-},$$
where $`\tau _\pm `$ carry no $`R`$-charge, i.e.
$$\begin{array}{cc}\hfill J_R^{𝒩/U(1)}(z)e^{\pm \frac{i}{a}Z}(w)& \sim \frac{\pm e^{\pm \frac{i}{a}Z}}{z-w}\hfill \\ & \\ \hfill J_R^{𝒩/U(1)}(z)\tau _\pm (w)& \sim 0.\hfill \end{array}$$
3. $`N=2`$ spacetime superconformal theories
We now construct an $`N=2`$ superconformal algebra in spacetime out of the worldsheet ingredients described above. As in , we introduce canonically normalized scalars $`H_I`$ with $`I=0,1,2`$:
$$\begin{array}{cc}\hfill \partial H_1& =\psi ^1\psi ^2\hfill \\ \hfill i\partial H_2& =\psi ^3\psi ^{U(1)}\hfill \\ \hfill i\sqrt{3}\partial H_0& =J_R^{𝒩/U(1)}-\sqrt{\frac{2}{k}}J^{U(1)},\hfill \end{array}$$
where
$$H_I(z)H_J(w)\sim -\delta _{IJ}\mathrm{log}(z-w).$$
For future reference we remind the reader that
$$e^{\pm iH_2}=\frac{i}{\sqrt{2}}(\psi ^3\pm \psi ^{U(1)}).$$
The spacetime supercharges are constructed as
$$𝐆_r^\pm =(2k)^{\frac{1}{4}}\oint dz\,e^{-\frac{\varphi }{2}}S_r^\pm ,\qquad r=\pm \frac{1}{2},$$
where $`\varphi `$ is the scalar field arising in the bosonized $`\beta ,\gamma `$ superghost system of the worldsheet supersymmetry, and the spin fields $`S_r^\pm `$ are (recall (3.1),(2.1),(2.1),(2.1))
$$\begin{array}{cc}\hfill S_r^+& =e^{ir(H_1+H_2)+i\frac{\sqrt{3}}{2}H_0}=e^{ir(H_1+H_2)+i\frac{a}{2}Z-i\sqrt{\frac{1}{2k}}Y},\hfill \\ \hfill S_r^{-}& =e^{ir(H_1-H_2)-i\frac{\sqrt{3}}{2}H_0}=e^{ir(H_1-H_2)-i\frac{a}{2}Z+i\sqrt{\frac{1}{2k}}Y}.\hfill \end{array}$$
(We neglect the usual cocycle factors.) The supercharges $`𝐆_r^\pm `$ are physical only if they are BRST invariant. This requires that the OPE $`T_F(z)S_r^\pm (w)`$ have no $`(z-w)^{-3/2}`$ singularity. Here $`T_F`$ is the total worldsheet $`N=1`$ supercurrent:
$$T_F=T_F^{SL(2)}+T_F^{U(1)}+T_F^{𝒩/U(1)}$$
(see (2.1),(2.1),(2.1)). Consider
$$S_{ϵ_1ϵ_2ϵ}=e^{\frac{i}{2}(ϵ_1H_1+ϵ_2H_2+ϵ(aZ-\sqrt{\frac{2}{k}}Y))},\qquad ϵ_1,ϵ_2,ϵ=\pm 1;$$
From (2.1) we find
$$T_F^{𝒩/U(1)}(z)S_{ϵ_1ϵ_2ϵ}(w)\sim (z-w)^{\frac{ϵ}{2}}(\cdots )\tau _++(z-w)^{-\frac{ϵ}{2}}(\cdots )\tau _{-},$$
where $`(\cdots )`$ represents irrelevant factors. Therefore $`T_F^{𝒩/U(1)}(z)S_r^\pm (w)`$ has no $`(z-w)^{-3/2}`$ singularity, and the only possible sources of such singularities are $`T_F^{U(1)}`$ and the $`\psi ^1\psi ^2\psi ^3`$ term in $`T_F^{SL(2)}`$. These two contributions cancel each other for $`ϵ_1ϵ_2ϵ=1`$, as can be seen using
$$\psi ^{U(1)}J^{U(1)}-i\sqrt{\frac{2}{k}}\psi ^1\psi ^2\psi ^3=(\frac{1}{\sqrt{2}}\partial Y-\frac{1}{\sqrt{k}}\partial H_1)e^{iH_2}-(\frac{1}{\sqrt{2}}\partial Y+\frac{1}{\sqrt{k}}\partial H_1)e^{-iH_2}$$
(see (2.1),(3.1),(3.1)). Substituting all 4 solutions of $`ϵ_1ϵ_2ϵ=1`$ into (3.1), we recover (3.1). In addition, $`e^{-\varphi /2}S_r^\pm `$ are mutually local. This completes the proof that $`𝐆_r^\pm `$ as defined in (3.1) are physical.
The algebra generated by the supercharges is
$$\begin{array}{cc}\hfill \{𝐆_r^+,𝐆_s^{-}\}& =2𝐋_{r+s}+(r-s)𝐉_0,\qquad r,s=\pm \frac{1}{2}\hfill \\ \hfill [𝐋_m,𝐋_n]& =(m-n)𝐋_{m+n},\qquad m,n=0,\pm 1\hfill \\ \hfill [𝐋_m,𝐆_r^\pm ]& =(\frac{m}{2}-r)𝐆_{m+r}^\pm \hfill \\ \hfill [𝐉_0,𝐆_r^\pm ]& =\pm 𝐆_r^\pm \hfill \end{array}$$
with all other (anti)commutators vanishing. Up to picture-changing , $`𝐋_0,𝐋_{\pm 1},𝐉_0`$ are given by (recall (2.1),(2.1))
$$𝐋_0=J^3,𝐋_{\pm 1}=\frac{1}{\sqrt{2}}(J^1\pm iJ^2),$$
$$𝐉_0=\sqrt{2k}J^{U(1)}.$$
The algebra (3.1) is a global spacetime $`N=2`$ superconformal algebra.
String theory on $`AdS_3\times 𝒩`$ has a full spacetime Virasoro symmetry $`𝐋_n`$, with $`n\in 𝐙`$ \[1,3\]; commuting $`𝐋_n`$ with the generators of the global algebra (3.1) gives a full spacetime $`N=2`$ superconformal algebra in the spacetime NS-sector with modes $`𝐆_r^\pm `$, $`r\in 𝐙+\frac{1}{2}`$ and $`𝐉_n`$, $`n\in 𝐙`$. Physical states are constructed using physical vertex operators that are local with respect to the supercharges (3.1); this is the analog of the usual GSO projection. (Strictly speaking, the full Virasoro algebra and physical states were constructed in the Euclidean version of $`AdS_3`$ \[1,3\]. The construction of the finite Lie superalgebra (3.1) uses only the algebraic structure of $`SL(2)`$, and is independent of the representation theory; hence it is also valid for the Lorentzian case.)
4. $`N=1`$ spacetime superconformal theories
The construction of the previous section gave us two dimensional $`N=2`$ spacetime superconformal symmetry. It is straightforward to find a $`Z_2`$ quotient that preserves exactly half of the spin fields (3.1) and leads to $`N=1`$ spacetime superconformal symmetry. This quotient is analogous to the construction of manifolds with $`G_2`$ holonomy by a $`Z_2`$ quotient of a product of a Calabi-Yau manifold with an $`S^1`$ .
Concretely, we break the $`N=2`$ superconformal symmetry of $`𝒩/U(1)`$ by the quotient with respect to $`J_R^{𝒩/U(1)}\to -J_R^{𝒩/U(1)}`$; simultaneously, we take the quotient with respect to $`J^{U(1)}\to -J^{U(1)}`$ and $`\psi ^{U(1)}\to -\psi ^{U(1)}`$. This has the net effect of identifying $`\{H_1,H_2,H_0\}\to \{H_1,-H_2,-H_0\}`$ (see (3.1)), and thus $`S_r^\pm \to S_r^{\mp }`$ (3.1). Therefore, the spacetime superconformal symmetry is projected to the $`N=1`$ subalgebra generated by the symmetric combination $`𝐆_r^++𝐆_r^{-}`$.
This indeed resembles the construction of superconformal models on manifolds with $`G_2`$ holonomy \[11,12\], except that the total central charge of $`𝒩`$ is not $`21/2`$ but rather $`21/2-6/k`$ (see (2.1)). It would be interesting to see if one can find a general nonlinear worldsheet algebra that characterizes $`𝒩`$ and then use the methods of to generate the $`N=1`$ spacetime superconformal symmetry in the general $`AdS_3\times 𝒩`$ case.
5. $`N>2`$ spacetime superconformal theories
We may also consider the extension of the methods of section 3 to models with $`N>2`$ symmetry. This gives rise to models that have been considered on a case by case basis in the literature \[1,4,5,6\].
The small $`N=4`$ superconformal algebra (see and references therein) has an affine $`SU(2)`$ $`R`$-symmetry. As explained in , this spacetime affine $`SU(2)`$ arises from a level $`k`$ worldsheet affine $`SU(2)`$ factor in $`𝒩`$. For the construction of section 3, we may take $`J^{U(1)}`$ as the Cartan generator of $`SU(2)_k`$. The remaining background $`𝒩_{(c=6)}\equiv 𝒩/SU(2)_k`$ is precisely a $`c=6`$ CFT, and small $`N=4`$ spacetime supersymmetry requires that $`𝒩_{(c=6)}`$ have small $`N=4`$ worldsheet supersymmetry. This can be shown by the methods in , where it is argued that for compactification to four dimensional Minkowski space $`M_4\times 𝒩_{(c=9)}`$, $`N=2`$ spacetime supersymmetry on $`M_4`$ requires that $`𝒩_{(c=9)}`$ factorize as $`𝒩_{(c=9)}=𝒩_{(c=6)}\times T^2`$ with small $`N=4`$ worldsheet superconformal symmetry on $`𝒩_{(c=6)}`$.
The large $`N=4`$ superconformal algebra (see and references therein) has an affine $`SU(2)\times SU(2)\times U(1)`$ $`R`$-symmetry. Again, as explained in \[1,4\], this spacetime affine algebra arises from a worldsheet algebra $`SU(2)_k^{}\times SU(2)_{k^{\prime \prime }}\times U(1)`$ where the levels are related to the level $`k`$ of the $`AdS_3`$ factor by $`1/k=1/k^{}+1/k^{\prime \prime }`$. This implies that the central charge of the $`SU(2)_k^{}\times SU(2)_{k^{\prime \prime }}\times U(1)`$ factor is $`c=21/2-6/k`$, and so completely determines $`𝒩`$ (2.1). For the construction of section 3, we may take $`J^{U(1)}`$ as the diagonal Cartan generator of $`SU(2)_k^{}\times SU(2)_{k^{\prime \prime }}`$; this implies that $`H_2`$ of (3.1) above is $`H_4`$ of equation (2.31) in .
To construct $`N=3`$ spacetime superconformal models, we may for instance take a $`Z_2`$ quotient of the large $`N=4`$ model in such a way as to preserve 3 out of 4 spacetime supersymmetries. This is worked out in detail in ; the basic idea is to take the construction of with $`k^{}=k^{\prime \prime }`$ and quotient by a $`Z_2`$ action that exchanges the two $`SU(2)`$ factors in $`𝒩`$ and simultaneously reflects the $`U(1)`$ factor in $`𝒩`$. Since the $`J^{U(1)}`$ current we use is in the diagonal of $`SU(2)\times SU(2)`$ and hence inert under this quotient, the construction survives and gives an $`N=2`$ subalgebra of the $`N=3`$ spacetime superconformal algebra discussed in .
In models with enhanced spacetime superconformal symmetries, one has to take some care in choosing $`J^{U(1)}`$, as an arbitrary choice may lead to supercharges that are not mutually local with the spacetime $`R`$-symmetries, and thus preserve only the $`N=2`$ subalgebra (3.1).
6. Examples and discussion
We close with a few remarks:
1. The construction of section 3 can be used to find many new examples of $`AdS_3\times 𝒩`$ string backgrounds with spacetime superconformal symmetry. A broad class is given by $`𝒩=U(1)\times 𝒩_{KS}`$ where $`𝒩_{KS}`$ is a Kazama-Suzuki model with central charge $`c=9-6/k`$. Kazama-Suzuki models are gauged $`N=1`$ WZW models $`G/H`$ with an enhanced $`N=2`$ worldsheet superconformal symmetry. The cases $`(SU(2)_k\times U(1)^4)/U(1)`$ or $`(SU(2)_k^{}\times SU(2)_{k^{\prime \prime }}\times U(1))/U(1)`$ are precisely the cases with enhanced spacetime superconformal symmetry discussed above. A simple new case is, for instance, $`SU(3)_{4k}/U(1)^2`$.
2. When the background has an enhanced worldsheet affine algebra, the construction of section 3 can be generalized; in particular, if the enhanced algebra includes an extra affine $`U(1)^2`$ factor, $`N>2`$ spacetime symmetries can be constructed as in .
3. The construction we have given here leads to conformal spacetime supersymmetries of the boundary CFT of $`AdS_3`$. Other constructions of spacetime supersymmetry are possible, such as the construction with respect to the $`U(1)_R`$ of the total worldsheet $`N=2`$ superconformal symmetry of $`AdS_3\times 𝒩`$ (see, e.g., appendix B of ). These in general correspond to different string vacua defined on the same $`\sigma `$-model background, and do not give rise to spacetime conformal symmetries. (Technically, the theories differ because the physical states are required to be mutually local with respect to different spin fields. The spin fields (3.1) are mutually local with respect to the $`SL(2)`$ currents, whereas the spin fields in appendix B of are not.) It would be interesting to know if the construction given here is the unique one that does lead to spacetime conformal symmetry (modulo the ambiguity noted in the previous paragraph for spaces with $`U(1)^2`$ factors).
Acknowledgements: We are happy to thank D. Kutasov for comments on the manuscript. This work is supported in part by the BSF – American-Israel Bi-National Science Foundation. The work of AG is supported in part by the Israel Academy of Sciences and Humanities – Centers of Excellence Program. The work of MR is supported in part by NSF grant No. PHY9722101. AG thanks the ITP at Stony Brook and MR thanks the Racah Institute at the Hebrew University for their hospitality.
References
\[1\] A. Giveon, D. Kutasov, and N. Seiberg, “Comments on String Theory on $`AdS_3`$,” Adv. Theor. Math. Phys. 2 (1998) 733, hep-th/9806194.
\[2\] J. de Boer, H. Ooguri, H. Robins, and J. Tannenhauser, “String Theory on $`AdS_3`$,” JHEP 9812 (1998) 026, hep-th/9812046.
\[3\] D. Kutasov and N. Seiberg, “More Comments on String Theory on $`AdS_3`$,” hep-th/9903219.
\[4\] S. Elitzur, O. Feinerman, A. Giveon, and D. Tsabar, “String Theory on $`AdS_3\times S^3\times S^3\times S^1`$,” hep-th/9811245.
\[5\] D. Kutasov, F. Larsen, and R. G. Leigh, “String Theory in Magnetic Monopole Backgrounds,” hep-th/9812027.
\[6\] S. Yamaguchi, Y. Ishimoto, and K. Sugiyama, “$`AdS_3/CFT_2`$ Correspondence and Space-Time $`N=3`$ Superconformal Algebra,” hep-th/9902079.
\[7\] N. Seiberg and E. Witten, “The D1/D5 System and Singular CFT,” hep-th/9903224.
\[8\] A. B. Zamolodchikov and V. A. Fateev, Sov. J. Nucl. Phys. 43 (1986) 657; J. Balog, L. O’Raifeartaigh, P. Forgacs, and A. Wipf, Nucl. Phys. B325 (1989) 225; L. J. Dixon, M. E. Peskin and J. Lykken, Nucl. Phys. B325 (1989) 329; A. Alekseev and S. Shatashvili, Nucl. Phys. B323 (1989) 719; N. Mohameddi, Int. J. Mod. Phys. A5 (1990) 3201; P. M. S. Petropoulos, Phys. Lett. B236 (1990) 151; M. Henningson and S. Hwang, Phys. Lett. B258 (1991) 341; M. Henningson, S. Hwang, P. Roberts, and B. Sundborg, Phys. Lett. B267 (1991) 350; S. Hwang, Phys. Lett. B276 (1992) 451, hep-th/9110039; I. Bars and D. Nemeschansky, Nucl. Phys. B348 (1991) 89; S. Hwang, Nucl. Phys. B354 (1991) 100; K. Gawedzki, hep-th/9110076; I. Bars, Phys. Rev. D53 (1996) 3308, hep-th/9503205; in Future Perspectives In String Theory (Los Angeles, 1995), hep-th/9511187; O. Andreev, Phys. Lett. B375 (1996) 60, hep-th/9601026; J. L. Petersen, J. Rasmussen and M. Yu, Nucl. Phys. B481 (1996) 577, hep-th/9607129; Y. Satoh, Nucl. Phys. B513 (1998) 213, hep-th/9705208; J. Teschner, hep-th/9712256, hep-th/9712258; J. M. Evans, M. R. Gaberdiel, and M. J. Perry, hep-th/9806024; hep-th/9812252.
\[9\] T. Banks, L. Dixon, D. Friedan, and E. Martinec, “Phenomenology and Conformal Field Theory or Can String Theory Predict the Weak Mixing Angle?” Nucl. Phys. B299 (1988) 613.
\[10\] T. Banks and L. Dixon, “Constraints on String Vacua with Spacetime Supersymmetry,” Nucl. Phys. B307 (1988) 93.
\[11\] S. Shatashvili and C. Vafa, “Superstrings and Manifolds of Exceptional Holonomy,” Selecta Math. A1 (1995) 347, hep-th/9407025.
\[12\] J. Figueroa-O’Farrill, “A note on the extended superconformal algebras associated with manifolds of exceptional holonomy,” Phys. Lett. B392 (1997) 77, hep-th/9609113.
\[13\] D. Friedan, E. Martinec, and S. Shenker, “Conformal Invariance, Supersymmetry, and String Theory,” Nucl. Phys. B271 (1986) 93.
\[14\] Y. Kazama and H. Suzuki, “New $`N=2`$ Superconformal Field Theories and Superstring Compactification,” Nucl. Phys. B321 (1989) 232.
# The theoretical predictions for the study of the $`a_0(980)`$ and $`f_0(980)`$ mesons in the $`\varphi `$ radiative decays (talk presented by V.V. Gubin)
## Abstract
The potentialities of the production of the $`a_0`$ and $`f_0`$ mesons in the $`\varphi `$ radiative decays are considered.
The central problem of light hadron spectroscopy has been the problem of the scalar $`f_0(980)`$ and $`a_0(980)`$ mesons. It is a well-known fact that these states possess peculiar properties from the naive quark ($`q\overline{q}`$) model point of view, see, for example . To clarify the nature of these mesons a number of models have been suggested. It was shown that all their challenging properties could be understood in the framework of the four-quark ($`q^2\overline{q}^2`$) MIT-bag model with symbolic quark structure $`f_0(980)=s\overline{s}(u\overline{u}+d\overline{d})/\sqrt{2}`$ and $`a_0(980)=s\overline{s}(u\overline{u}-d\overline{d})/\sqrt{2}`$. Along with the $`q^2\overline{q}^2`$ nature of the $`a_0(980)`$ and $`f_0(980)`$ mesons the possibility of their being $`K\overline{K}`$ molecules is discussed . During the last few years it was established that the radiative decays of the $`\varphi `$ meson, $`\varphi \to \gamma f_0\to \gamma \pi \pi `$ and $`\varphi \to \gamma a_0\to \gamma \eta \pi `$, could be a good guideline in distinguishing between the $`f_0`$ and $`a_0`$ meson models. The branching ratios are considerably different in the cases of the naive quark, four-quark and molecular models. As has been shown , in the four-quark model the branching ratio is
$$BR(\varphi \to \gamma f_0(q^2\overline{q}^2)\to \gamma \pi \pi )\simeq BR(\varphi \to \gamma a_0(q^2\overline{q}^2)\to \gamma \pi \eta )\simeq 10^{-4},$$
(1)
and in the $`K\overline{K}`$ molecule model it is
$$BR(\varphi \to \gamma f_0(K\overline{K})\to \gamma \pi \pi )\simeq BR(\varphi \to \gamma a_0(K\overline{K})\to \gamma \pi \eta )\simeq 10^{-5}.$$
(2)
It is easy to note that in the case $`f_0=s\overline{s}`$ and $`a_0=(u\overline{u}-d\overline{d})/\sqrt{2}`$ (the so-called $`s\overline{s}`$ model) the branching ratios $`BR(\varphi \to \gamma f_0\to \gamma \pi \pi )`$ and $`BR(\varphi \to \gamma a_0\to \gamma \pi \eta )`$ differ by a factor of ten, which should be visible experimentally.
In the case when $`f_0=s\overline{s}`$ the suppression by the OZI rule is absent and the evaluation gives
$`BR(\varphi \to \gamma f_0(s\overline{s})\to \gamma \pi \pi )\simeq 5\cdot 10^{-5},`$ (3)
whereas for $`a_0=(u\overline{u}-d\overline{d})/\sqrt{2}`$ the decay $`\varphi \to \gamma a_0\to \gamma \pi \eta `$ is suppressed by the OZI rule and is dominated by the real $`K^+K^{-}`$ intermediate state breaking the OZI rule
$`BR(\varphi \to \gamma a_0(q\overline{q})\to \gamma \pi \eta )\simeq (5÷8)\cdot 10^{-6}.`$ (4)
Imposing the appropriate photon energy cuts $`\omega <100`$ MeV, one can show that the background reactions $`e^+e^{-}\to \rho (\omega )\to \pi ^0\omega (\rho )\to \gamma \pi ^0\pi ^0`$, $`e^+e^{-}\to \rho (\omega )\to \pi ^0\omega (\rho )\to \gamma \pi ^0\eta `$ and $`e^+e^{-}\to \varphi \to \pi ^0\rho \to \gamma \pi ^0\pi ^0(\eta )`$ are negligible in comparison with the scalar meson contribution $`e^+e^{-}\to \varphi \to \gamma f_0(a_0)\to \gamma \pi ^0\pi ^0(\eta )`$ for $`BR(\varphi \to \gamma f_0(a_0)\to \gamma \pi ^0\pi ^0(\eta ))`$ greater than $`5\cdot 10^{-6}(10^{-5})`$.
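The order-of-magnitude hierarchy above can be collected in a small bookkeeping script; the numbers are those of Eqs. (1)-(4) and the detectability thresholds just quoted, and the script is purely illustrative.

```python
# Predicted branching ratios per model versus detectability thresholds
predictions = {
    "four-quark (q2 qbar2)": 1e-4,   # Eq. (1)
    "K Kbar molecule":       1e-5,   # Eq. (2)
    "s sbar  f0":            5e-5,   # Eq. (3)
    "q qbar  a0 (OZI)":      8e-6,   # upper end of Eq. (4)
}
thresholds = {"pi0 pi0": 5e-6, "pi0 eta": 1e-5}   # from the text above
for model, br in predictions.items():
    ok = br > thresholds["pi0 pi0"]
    print(f"{model:22s} BR ~ {br:.0e}   above pi0pi0 background: {ok}")
```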
Let us consider the reaction $`e^+e^{-}\to \varphi \to \gamma (f_0+\sigma )\to \gamma \pi ^0\pi ^0`$ with regard to the mixing of the $`f_0`$ and $`\sigma `$ mesons. We consider the one-loop mechanism of the $`R`$ meson production, where $`R=f_0,\sigma `$, through the charged kaon loop, $`\varphi \to K^+K^{-}\to \gamma R`$, see . The whole formalism in the frame of which we study this problem is discussed in . The parameters of the $`f_0`$ and $`\sigma `$ mesons we obtain from fitting the $`\pi \pi `$ scattering data, see .
In the four-quark model and the $`s\overline{s}`$ model we consider the following parameters to be free: the coupling constant of the $`f_0`$ meson to the $`K\overline{K}`$ channel $`g_{f_0K^+K^{-}}`$, the coupling constant of the $`\sigma `$ meson to the $`\pi \pi `$ channel $`g_{\sigma \pi \pi }`$, the constant of the $`f_0\sigma `$ transition $`C_{f_0\sigma }`$, the ratio $`R=g_{f_0K^+K^{-}}^2/g_{f_0\pi ^+\pi ^{-}}^2`$, the phase $`\theta `$ of the elastic background and the $`\sigma `$ meson mass. The mass of the $`f_0`$ meson is restricted to the region $`0.97<m_{f_0}<0.99`$ GeV. Treating the $`\sigma `$ meson as an ordinary two-quark state, we get $`g_{\sigma K^+K^{-}}=\sqrt{\lambda }g_{\sigma \pi ^+\pi ^{-}}/2\simeq 0.35g_{\sigma \pi ^+\pi ^{-}}`$, where $`\lambda \simeq 1/2`$ takes into account the suppression of strange quark production. So the constant $`g_{\sigma K^+K^{-}}`$ (and $`g_{\sigma \eta \eta }`$) is not essential in our fit.
As for the reaction $`e^+e^{-}\to \gamma \pi ^0\eta `$, a similar analysis of the $`\pi \eta `$ scattering cannot be performed directly. But our analysis of the final state interaction for the $`f_0`$ meson production shows that the situation does not change radically, in any case in the region $`\omega <100`$ MeV. Hence, one can hope that the final state interaction in the $`e^+e^{-}\to \gamma a_0\to \gamma \pi \eta `$ reaction will not strongly affect the predictions in the region $`\omega <100`$ MeV. Based on the analysis of $`\pi \pi `$ scattering and using the relations between coupling constants, we predict the values of $`BR(\varphi \to \gamma a_0\to \gamma \pi \eta )`$ in the $`q^2\overline{q}^2`$ model, the $`K\overline{K}`$ model and the $`q\overline{q}`$ model where $`f_0=s\overline{s}`$ and $`a_0=(u\overline{u}-d\overline{d})/\sqrt{2}`$.
The fitting shows that in the four-quark model ($`g_{f_0K^+K^{-}}^2/4\pi \simeq 1\text{GeV}^2`$) a number of parameter sets describe the $`\pi \pi `$ scattering well enough in the region $`0.7<m<1.8`$ GeV, see . We predict $`BR(\varphi \to \gamma (f_0+\sigma )\to \gamma \pi \pi )\simeq 10^{-4}`$ and $`BR(\varphi \to \gamma a_0\to \gamma \pi \eta )\simeq 10^{-4}`$ in the $`q^2\overline{q}^2`$ model.
In the model of the $`K\overline{K}`$ molecule we get $`BR(\varphi \to \gamma (f_0+\sigma )\to \gamma \pi \pi )\simeq 10^{-5}`$ and $`BR(\varphi \to \gamma a_0\to \gamma \pi \eta )\simeq 10^{-5}`$.
In the $`q\overline{q}`$ model the $`f_0(a_0)`$ meson is considered as a point-like object, i.e. in the $`K\overline{K}`$ loop, $`\varphi \to K^+K^{-}\to \gamma f_0(a_0)`$, and in the transitions caused by the $`f_0\sigma `$ mixing we consider both the real and the virtual intermediate states. This model differs from the $`q^2\overline{q}^2`$ model by the coupling constant, which is $`g_{f_0K^+K^{-}}^2/4\pi <0.5\text{GeV}^2`$. In this model we obtain $`BR(\varphi \to \gamma (f_0+\sigma )\to \gamma \pi \pi )\simeq 5\cdot 10^{-5}`$ and, taking into account only the imaginary part of the decay amplitude, which violates the OZI rule, we get $`BR(\varphi \to \gamma a_0(q\overline{q})\to \gamma \pi \eta )\simeq 8\cdot 10^{-6}`$.
The experimental data from the SND and CMD-2 detectors support the four-quark nature of the $`f_0`$ and $`a_0`$ mesons, see Fig. 1 and Fig. 2 and also . The parameters for the $`f_0`$ meson obtained with the SND detector are $`m_{f_0}=971\pm 6\pm 5`$ MeV, $`g_{f_0K^+K^{-}}^2/4\pi =2.1_{-0.56}^{+0.88}\text{GeV}^2`$, $`R=4.1`$ and $`BR(\varphi \to \pi ^0\pi ^0\gamma )=(1.14\pm 0.1\pm 0.12)\cdot 10^{-4}`$, see the dashed line in Fig. 1.
As for the reaction $`e^+e^{-}\to \gamma \pi ^+\pi ^{-}`$, the analysis shows that the study of this reaction is an interesting and rather complex problem.
The main problem is the large background from the radiation of the final pions. The $`f_0`$ state in this reaction can be studied only by observing the interference patterns in the total cross-section and in the photon spectrum . As was shown in , since the Fermi-Watson theorem for the final state interaction due to soft photons in the reaction $`e^+e^{-}\to \rho (s)\to \gamma \pi ^+\pi ^{-}`$ is not valid, the phase of the amplitude $`\gamma ^{\ast }(s)\to \rho \to \gamma \pi \pi `$ is not determined by the s-wave phase of $`\pi \pi `$ scattering. The analysis of the interference patterns in the reaction $`e^+e^{-}\to \varphi +\rho \to \gamma f_0+\gamma \pi ^+\pi ^{-}\to \gamma \pi ^+\pi ^{-}`$ should be performed taking into account the phase of the elastic background of the $`\pi \pi `$ scattering, the phase of the triangle diagram $`\varphi \to K^+K^{-}\to \gamma f_0`$ and the phase of the $`f_0`$$`\sigma `$ complex in the $`\varphi \to K^+K^{-}\to \gamma (f_0+\sigma )\to \gamma \pi \pi `$ amplitude. The whole formalism for the description of these reactions and the resulting pictures were presented in .
# Superheavy Dark Matter and Thermal Inflation
## Abstract
The thermal inflation is the most plausible mechanism that solves the cosmological moduli problem naturally. We discuss the relic abundance of a superheavy particle $`X`$ in the presence of the thermal inflation, assuming that its lifetime is longer than the age of the universe, and show that a long-lived particle $`X`$ of mass $`10^{12}`$$`10^{14}`$ GeV may form a part of the dark matter in the present universe in a wide region of the parameter space of the thermal inflation model. The superheavy dark matter of mass $`10^{13}`$ GeV may be particularly interesting, since its decay may account for the observed ultra high-energy cosmic rays if the lifetime of the $`X`$ particle is sufficiently long.
preprint: UT-845 RESCEU-6/99
A large class of string theories predicts a number of flat directions, called moduli fields $`\varphi `$ . They are expected to acquire their masses of order of the gravitino mass $`m_{3/2}`$ from some nonperturbative effects of supersymmetry (SUSY) breaking . The gravitino mass lies in the range $`10^{-2}`$ keV–1 GeV for gauge-mediated SUSY breaking models and in the range 100 GeV–1 TeV for hidden-sector SUSY breaking models . It is well known that such moduli fields are overproduced as coherent oscillations in the early universe and conflict with various cosmological observations. Therefore, we must invoke a mechanism such as late-time entropy production to dilute the moduli density substantially.
The thermal inflation is the most plausible mechanism to produce an enormous amount of entropy at the late time of the universe’s evolution. In recent articles we have shown that the thermal inflation is very successful in solving the above cosmological moduli problem. Since it produces a tremendous amount of entropy to dilute the moduli density, abundances of any relic particles are substantially diluted simultaneously, which may provide a new possibility for a superheavy $`X`$ particle to be a part of the dark matter in the universe. (Other candidates for the dark matter in the presence of the thermal inflation are the moduli themselves, whose masses are less than about 100 keV , and the axion with a relatively high decay constant $`f_a\simeq 10^{15}`$$`10^{16}`$ GeV .) In this paper we show that it is indeed the case if the mass of the $`X`$ particle is of order $`10^{12}`$$`10^{14}`$ GeV and its lifetime is longer than the age of the universe. Such a long-lived $`X`$ particle is particularly interesting, since its decay may naturally explain the observed ultra high-energy cosmic rays beyond the Greisen-Zatsepin-Kuzmin cutoff when the lifetime is sufficiently long.
In this paper we consider that the particle $`X`$ was primordially in thermal equilibrium (for the superheavy particle $`X`$ to be in thermal equilibrium, the reheating temperature after the primordial inflation should be higher than $`m_X\sim 10^{13}`$ GeV; such a high reheating temperature is realized in, e.g., hybrid inflation models , and although a large number of gravitinos are also produced in this case, they are sufficiently diluted by the thermal inflation and become harmless) and froze out at the cosmic temperature $`T=T_f`$ when it was nonrelativistic, $`x_f=m_X/T_f>1`$ (i.e., the $`X`$ is left as a cold relic). Nonthermal production of superheavy particles was discussed in Refs. . Then, the present relic abundance of $`X`$ (the ratio of the present energy density of $`X`$ to the critical density $`\rho _{cr}`$) is estimated by using the thermally-averaged annihilation cross section of the $`X`$, $`\langle \sigma _X\left|v\right|\rangle `$, as
$`\mathrm{\Omega }_X^0h^2={\displaystyle \frac{0.76(n_f+1)x_f}{g_{\ast }(T_f)^{\frac{1}{2}}M_G(h^{-2}\rho _{cr}/s_0)\langle \sigma _X\left|v\right|\rangle }}`$ (1)
where $`g_{\ast }(T_f)\simeq 200`$ counts the effective degrees of freedom at $`T=T_f`$, $`M_G\simeq 2.4\times 10^{18}`$ GeV denotes the reduced Planck scale, and $`s_0`$ is the present entropy density, $`(\rho _{cr}/s_0)\simeq 3.6\times 10^{-9}h^2`$ GeV, with the present Hubble parameter $`h`$ in units of 100 km/sec/Mpc. Here $`n_f`$ parametrizes the dependence on $`T`$ of the annihilation cross section: $`n_f=0`$ for $`s`$-wave annihilation, $`n_f=1`$ for $`p`$-wave annihilation, etc. Assuming $`s`$-wave annihilation with $`\langle \sigma _X\left|v\right|\rangle \sim m_X^{-2}`$ ($`x_f\simeq 7`$ for $`m_X\sim 10^{13}`$ GeV) we obtain
$`\mathrm{\Omega }_X^0h^2\simeq {\displaystyle \frac{0.4m_X^2}{M_G(h^{-2}\rho _{cr}/s_0)}}=4.3\times 10^{15}\left({\displaystyle \frac{m_X}{10^{13}\text{GeV}}}\right)^2.`$ (2)
Therefore, the superheavy particle $`X`$ of $`m_X\gtrsim 10^5`$ GeV leads to overclosure of the universe if its lifetime is longer than the age of the universe. In order to realize $`\mathrm{\Omega }_Xh^2\lesssim 1`$, a dilution factor of more than about $`10^{16}`$ is required for $`m_X\simeq 10^{13}`$ GeV, for example. However, such a huge dilution may be naturally provided by the thermal inflation. In this paper, we examine whether the thermal inflation could sufficiently reduce the energy density $`\mathrm{\Omega }_X`$ of the superheavy particle even if it was in thermal equilibrium, as well as that of the string moduli.
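A rough numerical check of Eq. (2) and of the required dilution, in GeV units throughout (a sketch, not the authors' code):

```python
M_G = 2.4e18          # reduced Planck scale [GeV]
rho_s = 3.6e-9        # h^{-2} rho_cr / s_0 [GeV]

def omega0_h2(m_X):
    """Eq. (2): s-wave freeze-out abundance with <sigma|v|> ~ m_X^{-2}."""
    return 0.4 * m_X**2 / (M_G * rho_s)

print(f"Omega_X^0 h^2 ~ {omega0_h2(1e13):.1e}")   # ~5e15, cf. 4.3e15
print(f"overclosure for m_X above ~ {(M_G * rho_s / 0.4)**0.5:.1e} GeV")
```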
Let us start by briefly reviewing the thermal inflation model . The potential of the inflaton field $`S`$ is given by
$`V=V_0-m_0^2\left|S\right|^2+{\displaystyle \frac{\left|S\right|^{2n+4}}{M_{\ast }^{2n}}},`$ (3)
where $`m_0^2`$ denotes a negative soft SUSY-breaking mass squared which is expected to be of order of the electroweak scale, $`M_{\ast }`$ denotes the cutoff scale and $`V_0`$ is the vacuum energy. Then the vacuum expectation value of $`S`$ is estimated as
$`\langle S\rangle \equiv M=\left(n+2\right)^{-\frac{1}{2(n+1)}}\left(m_0M_{\ast }^n\right)^{\frac{1}{n+1}},`$ (4)
and $`V_0`$ is fixed as
$`V_0={\displaystyle \frac{n+1}{n+2}}m_0^2M^2,`$ (5)
so that the cosmological constant vanishes at the true vacuum. The inflaton $`\sigma `$ ($`\sqrt{2}\mathrm{Re}S`$), which we call a flaton, obtains a mass $`m_\sigma ^2=2(n+1)m_0^2`$.
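The relations (3)-(5) can be cross-checked numerically for $`n=1`$; the sketch below inverts Eq. (4) to recover the cutoff scale from the benchmark values used below ($`m_\sigma =100`$ GeV, $`M=10^{10}`$ GeV):

```python
n = 1
m_sigma = 100.0                          # GeV
m0 = m_sigma / (2 * (n + 1))**0.5        # from m_sigma^2 = 2(n+1) m_0^2
M = 1e10                                 # GeV
M_star = (n + 2)**0.5 * M**(n + 1) / m0  # Eq. (4) inverted for n = 1
V0 = (n + 1) / (n + 2) * m0**2 * M**2    # Eq. (5)
print(f"M_* ~ {M_star:.2e} GeV, V0^(1/4) ~ {V0**0.25:.2e} GeV")  # ~3.5e18
```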
After the thermal inflation ends, the vacuum energy is transferred into the thermal bath through flaton decay and increases the entropy of the universe by a factor (here we have assumed that the flaton cannot decay into two $`R`$-axions, the imaginary part of $`S`$, in order to obtain a successful dilution; see Ref. )
$`\mathrm{\Delta }\simeq {\displaystyle \frac{V_0}{66m_\sigma ^3T_R}}.`$ (6)
Here $`T_R`$ is the reheating temperature after the thermal inflation and is determined by the flaton decay width $`\mathrm{\Gamma }_\sigma `$, which can be written as
$`\mathrm{\Gamma }_\sigma =C_\sigma {\displaystyle \frac{m_\sigma ^3}{M^2}},`$ (7)
where $`C_\sigma `$ is a dimensionless parameter depending on the decay modes. We consider that the flaton $`\sigma `$ dominantly decays into two photons or two gluons (we assume that the decay amplitudes are obtained from one-loop diagrams of some heavy particles; see also Ref. ), and $`C_\sigma `$ is given by
$`C_\sigma \simeq \{\begin{array}{cc}\frac{1}{4\pi }\left(\frac{\alpha }{4\pi }\right)^2\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ \frac{1}{4\pi }\left(\frac{\alpha _s}{4\pi }\right)^2\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (10)
Then the reheating temperature is estimated as
$`T_R=\left({\displaystyle \frac{90}{\pi ^2g_{\ast }(T_R)}}\right)^{\frac{1}{4}}\sqrt{\mathrm{\Gamma }_\sigma M_G}\simeq 0.96C_\sigma ^{\frac{1}{2}}{\displaystyle \frac{m_\sigma ^{\frac{3}{2}}M_G^{\frac{1}{2}}}{M}}.`$ (11)
The entropy-production factor $`\mathrm{\Delta }`$, therefore, can be written as
$`\mathrm{\Delta }`$ $`=`$ $`{\displaystyle \frac{M^3}{130(n+2)C_\sigma ^{\frac{1}{2}}m_\sigma ^{\frac{5}{2}}M_G^{\frac{1}{2}}}},`$ (12)
and for $`n=1`$ we obtain
$`\mathrm{\Delta }`$ $`\simeq `$ $`\{\begin{array}{cc}1.0\times 10^{17}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{-\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^3\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 6.4\times 10^{15}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{-\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^3\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (15)
Here the values $`m_\sigma =100`$ GeV and $`M=10^{10}`$ GeV correspond to $`M_{\ast }=3.5\times 10^{18}`$ GeV for $`n=1`$. In the following analysis, we take $`n=1`$ for simplicity. From Eq. (15) we see that the thermal inflation can dilute relic particle densities extensively by producing an enormous entropy. Here it should be noted that the reheating temperature should be $`T_R\gtrsim 1`$ MeV to keep the big bang nucleosynthesis successful (in Ref. the lower bound on the reheating temperature is determined as about 0.5 MeV; since our definition of the reheating temperature differs by a factor $`\sqrt{3}`$, it leads to $`T_R\gtrsim 1`$ MeV in our case), which leads to the upper bounds on $`M`$ from Eq. (11) as
$`M`$ $`=`$ $`0.96C_\sigma ^{\frac{1}{2}}{\displaystyle \frac{m_\sigma ^{\frac{3}{2}}M_G^{\frac{1}{2}}}{T_R}}\lesssim \{\begin{array}{cc}2.4\times 10^{11}\text{GeV}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{3}{2}}\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 3.9\times 10^{12}\text{GeV}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{3}{2}}\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (18)
These are translated into the upper bounds on $`M_{\ast }`$ as
$`M_{\ast }`$ $`=`$ $`3.2C_\sigma {\displaystyle \frac{m_\sigma ^2M_G}{T_R^2}}\lesssim \{\begin{array}{cc}2.1\times 10^{21}\text{GeV}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^2\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 5.4\times 10^{23}\text{GeV}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^2\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (21)
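The sketch below evaluates Eqs. (11), (12) and (18) for the $`\sigma \to \gamma \gamma `$ mode with $`n=1`$; taking $`\alpha \simeq 1/137`$ for the low-energy coupling is our assumption.

```python
import math

alpha = 1 / 137.0
C = (1 / (4 * math.pi)) * (alpha / (4 * math.pi))**2         # Eq. (10)
M_G, m_sigma, M = 2.4e18, 100.0, 1e10                        # GeV

T_R = 0.96 * C**0.5 * m_sigma**1.5 * M_G**0.5 / M            # Eq. (11)
Delta = M**3 / (130 * 3 * C**0.5 * m_sigma**2.5 * M_G**0.5)  # Eq. (12), n=1
M_max = 0.96 * C**0.5 * m_sigma**1.5 * M_G**0.5 / 1e-3       # T_R = 1 MeV
print(f"T_R ~ {T_R:.1e} GeV, Delta ~ {Delta:.1e}, M <~ {M_max:.1e} GeV")
```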
In the presence of the thermal inflation the relic abundance of the superheavy particle $`X`$ [Eq. (2)] is reduced by the factor $`\mathrm{\Delta }`$ in Eq. (15) as
$`\mathrm{\Omega }_Xh^2`$ $`=`$ $`\mathrm{\Omega }_X^0h^2\times {\displaystyle \frac{1}{\mathrm{\Delta }}}=140C_\sigma ^{\frac{1}{2}}{\displaystyle \frac{m_X^2m_\sigma ^{\frac{5}{2}}}{M^3M_G^{\frac{1}{2}}(h^{-2}\rho _{cr}/s_0)}}`$ (22)
$`\simeq `$ $`\{\begin{array}{cc}0.04\left({\displaystyle \frac{m_X}{10^{13}\text{GeV}}}\right)^2\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^{-3}\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 0.68\left({\displaystyle \frac{m_X}{10^{13}\text{GeV}}}\right)^2\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^{-3}\hfill & \text{for}\sigma \to gg\hfill \end{array},`$ (25)
In Fig. 1 we show the contour of $`\mathrm{\Omega }_X`$ in the parameter space of the thermal inflation model (in the $`m_\sigma `$$`M_{\ast }`$ plane). We find that the thermal inflation can naturally realize $`\mathrm{\Omega }_Xh^2\lesssim 1`$ while keeping the constraint $`T_R\gtrsim 1`$ MeV in a large region of the parameter space.
As mentioned in the introduction, the thermal inflation was originally proposed as a solution to the cosmological moduli problem. Therefore, we turn to examine whether the thermal inflation could sufficiently dilute not only the density of the superheavy particle $`X`$ but also that of the string moduli, simultaneously.
When the Hubble parameter becomes comparable to the moduli masses, the moduli $`\varphi `$ start coherent oscillations and the corresponding cosmic temperature is estimated as
$`T_\varphi \simeq 7.2\times 10^6\text{GeV}\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{\frac{1}{2}}.`$ (26)
Here notice that the moduli oscillations always begin after the $`X`$ freezes out, since $`T_f\sim m_X\gg T_\varphi `$ even for the heavy moduli $`m_\varphi \simeq m_{3/2}\simeq 100`$ GeV–1 TeV predicted in hidden sector SUSY breaking models. Because the initial amplitudes of the oscillations, $`\varphi _0`$, are expected to be $`\varphi _0\sim M_G`$, the present abundances of the moduli oscillations are given by (if the moduli masses are less than about 100 MeV, the moduli remain stable until the present; for the heavier moduli mass region, $`\mathrm{\Omega }_\varphi `$ is regarded as the ratio $`(\rho _\varphi /s)_D/(\rho _{cr}/s_0)`$, where $`(\rho _\varphi /s)_D`$ denotes the ratio of the energy density of the moduli oscillations to the entropy density when the moduli decay)
$`\mathrm{\Omega }_\varphi h^2=2.5\times 10^{14}\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2.`$ (27)
Such a huge energy density of the moduli leads to cosmological difficulties for the typical moduli mass regions predicted in both gauge-mediated SUSY breaking and hidden-sector SUSY breaking scenarios.
However, if the universe experienced the thermal inflation, the moduli abundances are reduced by the factor $`\mathrm{\Delta }`$ [Eq. (15)] as
$`\left(\mathrm{\Omega }_\varphi \right)_{BB}h^2`$ $`=`$ $`22C_\sigma ^{\frac{1}{2}}{\displaystyle \frac{m_\varphi ^{\frac{1}{2}}m_\sigma ^{\frac{5}{2}}M_G}{M^3(h^{-2}\rho _{cr}/s_0)}}\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2`$ (28)
$`\simeq `$ $`\{\begin{array}{cc}2.4\times 10^{-3}\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^{-3}\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 3.9\times 10^{-2}\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{5}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)^{-3}\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (31)
We call these moduli produced at $`T=T_\varphi `$ the “big-bang” moduli. In deriving Eqs. (27) and (28) we have assumed that the energy of the universe is radiation-dominated when the big-bang modulus starts to oscillate at $`H\simeq m_\varphi `$. This assumption is justified for $`\left(m_\varphi /100\text{keV}\right)^{1/2}\left(m_X/10^{13}\text{GeV}\right)^{-2}\gtrsim 2.9`$. On the other hand, when the energy at $`H\simeq m_\varphi `$ is dominated by the superheavy particle $`X`$, the present abundance of the big-bang modulus is related to the abundance of $`X`$ (22) as
$`\left(\mathrm{\Omega }_\varphi \right)_{BB}h^2={\displaystyle \frac{1}{6}}\mathrm{\Omega }_Xh^2\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2.`$ (32)
Therefore, both abundances are comparable for $`\varphi _0\sim M_G`$.
Furthermore, it should be noticed that the secondary oscillations of the moduli start just after the thermal inflation ends . We call the moduli produced by these secondary oscillations “thermal inflation” moduli. The present abundances of the thermal inflation moduli are estimated as
$`\left(\mathrm{\Omega }_\varphi \right)_{TI}h^2`$ $`=`$ $`6.0\times 10^{-2}C_\sigma ^{\frac{1}{2}}{\displaystyle \frac{m_\sigma ^{\frac{7}{2}}M}{m_\varphi ^2M_G^{\frac{3}{2}}(h^{-2}\rho _{cr}/s_0)}}\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2`$ (33)
$`\simeq `$ $`\{\begin{array}{cc}7.4\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{-2}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{7}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2\hfill & \text{for}\sigma \to \gamma \gamma \hfill \\ 1.2\times 10^2\left({\displaystyle \frac{m_\varphi }{100\text{keV}}}\right)^{-2}\left({\displaystyle \frac{m_\sigma }{100\text{GeV}}}\right)^{\frac{7}{2}}\left({\displaystyle \frac{M}{10^{10}\text{GeV}}}\right)\left({\displaystyle \frac{\varphi _0}{M_G}}\right)^2\hfill & \text{for}\sigma \to gg\hfill \end{array}.`$ (36)
We see from Eqs. (28) and (33) that the thermal inflation dilutes the moduli density substantially. In fact, it has been shown in Refs. that two moduli mass regions: (i) $`m_\varphi \lesssim 1`$ MeV and (ii) $`m_\varphi \gtrsim 10`$ GeV survive various cosmological constraints in the presence of the thermal inflation.
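For orientation, the diluted moduli abundances (28) and (33) can be evaluated at the same benchmark point ($`\sigma \to \gamma \gamma `$, $`m_\varphi =100`$ keV, $`\varphi _0=M_G`$); a sketch:

```python
import math

alpha = 1 / 137.0
C = (1 / (4 * math.pi)) * (alpha / (4 * math.pi))**2
M_G, rho_s = 2.4e18, 3.6e-9
m_phi, m_sigma, M = 1e-4, 100.0, 1e10     # a 100 keV modulus, in GeV

bb = 22 * C**0.5 * m_phi**0.5 * m_sigma**2.5 * M_G / (M**3 * rho_s)
ti = 6.0e-2 * C**0.5 * m_sigma**3.5 * M / (m_phi**2 * M_G**1.5 * rho_s)
print(f"(Omega_phi)_BB h^2 ~ {bb:.1e}, (Omega_phi)_TI h^2 ~ {ti:.1e}")
```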
We are now at the point to examine whether the thermal inflation can solve the moduli problem and also dilute sufficiently the superheavy particle $`X`$ at the same time. First, we consider the lighter allowed region of the moduli masses predicted in gauge-mediated SUSY breaking models. In Fig. 2 we show the contour plot of the abundance of $`X`$ of mass $`m_X=10^{13}`$ GeV as well as the various constraints in the $`m_\sigma `$$`M_{\ast }`$ plane for $`m_\varphi =100`$ keV. Such light moduli are constrained by the overclosure limit (notice that for the modulus of mass $`m_\varphi =100`$ keV the constraint from the diffuse x$`(\gamma )`$-ray backgrounds is not as severe as the overclosure limit; it gives a more stringent upper bound on the modulus abundance for 100 MeV $`\gtrsim m_\varphi \gtrsim 200`$ keV ). Then the requirements $`(\mathrm{\Omega }_\varphi )_{BB}h^2\lesssim `$ 1 and $`(\mathrm{\Omega }_\varphi )_{TI}h^2\lesssim `$ 1 put the lower and upper bounds on $`M_{\ast }`$, respectively. Furthermore, the condition $`T_R\gtrsim 1`$ MeV leads to the upper bound on $`M_{\ast }`$ as represented in Eq. (21). We see from the figures that in the parameter space which survives the above constraints one can obtain $`\mathrm{\Omega }_Xh^2\simeq 10^{-6}`$$`1`$ for $`m_X=10^{13}`$ GeV: i.e., the thermal inflation can dilute the abundance of the superheavy particle $`X`$ sufficiently. Similar results are obtained in the allowed moduli mass region $`m_\varphi \simeq 10^{-2}`$ keV–$`1`$ MeV. For example, when $`m_\varphi =10^{-2}`$ keV, the thermal inflation solves the moduli problem in the parameter regions $`m_\sigma \simeq 10^{-2}`$ GeV–$`1`$ GeV and $`M_{\ast }\simeq 10^{13}`$ GeV–$`10^{17}`$ GeV, and in this region we obtain $`\mathrm{\Omega }_Xh^2\simeq 10^{-1}`$$`1`$ for $`m_X=10^{13}`$ GeV. We find that the thermal inflation is very successful, in the whole range of $`m_\varphi \simeq 10^{-2}`$ keV–1 MeV, in solving the moduli problem as well as in reducing the density of the superheavy particle whose mass is $`m_X\simeq 10^{13}`$ GeV, even if it was in the thermal bath in the early universe.
Next, we turn to discuss the case of the heavier moduli of masses $`m_\varphi \simeq 100`$ GeV–$`1`$ TeV predicted in hidden-sector SUSY breaking models. In Fig. 3 we show the contour plot of $`\mathrm{\Omega }_X`$ with various constraints for $`m_X=10^{13}`$ GeV and $`m_\varphi =100`$ GeV. Such heavy moduli are severely constrained not to destroy or overproduce the light elements synthesized by the big bang nucleosynthesis. It has been found that $`(\mathrm{\Omega }_\varphi )_{BB}h^2`$ and $`(\mathrm{\Omega }_\varphi )_{TI}h^2`$ should be less than about $`10^{-5}`$ . Fig. 3 shows that in order to solve the moduli problem the thermal inflation requires $`M_{\ast }`$ higher than in the previous case, which results in more dilution of $`X`$, and hence we obtain $`\mathrm{\Omega }_Xh^2\simeq 10^{-9}`$$`10^{-7}`$ for $`m_X=10^{13}`$ GeV.
Although we have only considered the case $`n=1`$, similar discussions also hold for higher $`n`$, except that the scale of $`M_{\ast }`$ becomes higher.
In the present analysis we have taken the cutoff scale of the thermal inflation model, $`M_{\ast }`$, as a free parameter. However, it is natural to choose it as the gravitational scale, i.e., $`M_{\ast }\simeq M_G`$. If this is the case, the thermal inflation dilutes the string moduli sufficiently only if their masses are $`m_\varphi \simeq 10^{-1}`$ keV–$`1`$ MeV. Moreover, as shown in Fig. 4, the dark matter density of the superheavy particle of mass $`m_X\simeq 10^{12}`$$`10^{14}`$ GeV becomes $`\mathrm{\Omega }_Xh^2\simeq 10^{-4}`$–1, which is just the mass region required to explain the observed ultra high-energy cosmic rays. The required long lifetime of the $`X`$ particle may be explained by discrete gauge symmetries or by compositeness of the $`X`$ particle .
In this paper we have shown that the thermal inflation indeed provides a new possibility that the superheavy $`X`$ particle may form a part of the dark matter in the present universe. However, it gives rise to a new problem, since it also dilutes the primordial baryon asymmetry significantly. Because the reheating temperature of the thermal inflation should be quite low, $`T_R\simeq 1`$–10 MeV, to dilute $`\varphi `$ and $`X`$ sufficiently, the electroweak baryogenesis does not work. However, as shown in Ref. , the Affleck-Dine mechanism may produce enough baryon asymmetry even with the tremendous entropy production due to the thermal inflation, if the moduli are light ($`m_\varphi \lesssim 1`$ MeV) . Therefore, in the present scenario, the light moduli predicted in gauge-mediated SUSY breaking models are favored.
###### Acknowledgements.
This work was partially supported by the Japan Society for the Promotion of Science (TA) and “Priority Area: Supersymmetry and Unified Theory of Elementary Particles (#707)” (MK, TY).
# Physical Parameters of Hot Horizontal-Branch Stars in NGC 6752: Deep Mixing and Radiative Levitation
## 1 Introduction
The discovery of “gaps” along the blue horizontal branch (HB) in globular clusters as well as of long extensions towards higher temperatures has triggered several spectroscopic investigations (Moehler 1999 and references therein) yielding the following results:
1. Most of the stars analysed above and below any gaps along the blue horizontal branch are “bona fide” blue HB stars ($`T_{\mathrm{eff}}<20,000`$ K), which show significantly lower gravities than expected from canonical stellar evolution theory.
2. Only in NGC 6752 and M 15 have spectroscopic analyses verified the presence of stars that could be identified with the subdwarf B stars known in the field of the Milky Way ($`T_{\mathrm{eff}}>20,000`$ K, $`\mathrm{log}g`$ $`>`$ 5). In contrast to the cooler blue HB (BHB) stars the gravities of these “extended HB” (EHB) stars agree well with the expectations of canonical stellar evolution.
Two scenarios have been suggested to account for the low gravities of BHB stars:
Abundance anomalies observed in red giant branch (RGB) stars in globular clusters (e.g., Kraft kraf94 (1994), Kraft et al. krsn97 (1997)) may be explained by the dredge-up of nuclearly processed material to the stellar surface. If the mixing currents extend into the hydrogen-burning shell – as suggested by current RGB nucleosynthesis models and observed Al overabundances – helium can be mixed into the stellar envelope. This in turn would increase the luminosity (and mass loss) along the RGB (Sweigart swei99 (1999)) and thereby create less massive (i.e. bluer) HB stars with helium-enriched hydrogen envelopes. The helium enrichment increases the hydrogen burning rate, leading to higher luminosities (compared to canonical HB stars of the same temperature) and lower gravities. The gravities of stars hotter than about 20,000 K are not affected by this mixing process because these stars have only inert hydrogen shells.
Grundahl et al. (grca99 (1999)) found a “jump” in the $`u`$, $`u-y`$ colour-magnitude diagrams of 15 globular clusters, which can be explained if radiative levitation of iron and other heavy elements takes place over the temperature range defined by the “low-gravity” BHB stars. This assumption has been confirmed in the case of M 13 by the recent high-resolution spectroscopy of Behr et al. (beco99 (1999)). Grundahl et al. argue that super-solar abundances of heavy elements such as iron should lead to changes in model atmospheres which may be capable of explaining the disagreement between models and observations over the “critical” temperature range $`11,500\mathrm{K}<T_{\mathrm{eff}}<20,000`$ K.
NGC 6752 is an ideal test case for these scenarios, since it is a very close globular cluster with a long blue HB extending to rather hot EHB stars. While previous data already cover the faint end of the EHB, we have now obtained new spectra for 32 stars in and above the sparsely populated region between the BHB and the EHB stars. In this Letter, we present atmospheric parameters derived for a total of 42 BHB and EHB stars and discuss the constraints they may pose on the scenarios described above.
## 2 Observational Data
We selected our targets from the photographic photometry of Buonanno et al. (buca86 (1986)) to cover the range $`14.5\le V\le 15.5`$. 19 stars were observed with the ESO 1.52m telescope (61.E-0145, July 22-25, 1998) and the Boller & Chivens spectrograph using CCD # 39 and grating # 33 (65 Å/mm). This combination covered the 3300 Å – 5300 Å region at a spectral resolution of 2.6 Å. The data reduction will be described in Moehler et al. (1999a ). Prompted by the suggestion of Grundahl et al. (grca99 (1999)) that radiative levitation of heavy metals may enrich the atmospheres of BHB stars, we searched for metal absorption lines in these spectra. Indeed we found Fe II absorption lines in almost all spectra (for examples see Fig. 1).
13 stars were observed as backup targets at the NTT during observing runs dedicated to other programs (60.E-0145, 61.E-0361). The observations and their reduction are described in Moehler et al. (1999b ). Those spectra have a spectral resolution of 5 Å covering 3350 to 5250 Å. No metal lines could be detected due to this rather low spectral resolution.
## 3 Atmospheric Parameters
### 3.1 Fit procedure and model atmospheres
To derive effective temperatures, surface gravities and helium abundances we fitted the observed Balmer lines H<sub>β</sub> to H<sub>10</sub> (excluding H<sub>ϵ</sub> because of possible blending problems with the Ca II H line) and the helium lines (He I 4026, 4388, 4471, 4922Å) with stellar model atmospheres. We corrected the spectra for radial velocity shifts, derived from the positions of the Balmer and helium lines and normalized the spectra by eye.
We computed model atmospheres using ATLAS9 (Kurucz kuru91 (1991)) and used Lemke’s version of the LINFOR<sup>1</sup><sup>1</sup>1For a description see http://a400.sternwarte.uni-erlangen.de/~ai26/linfit/linfor.html program (developed originally by Holweger, Steffen, and Steenbock at Kiel University) to compute a grid of theoretical spectra which include the Balmer lines H<sub>α</sub> to H<sub>22</sub> and He I lines. The grid covered the range 7,000 K $`\le `$ $`T_{\mathrm{eff}}`$ $`\le `$ 35,000 K, 2.5 $`\le `$ $`\mathrm{log}g`$ $`\le `$ 5.0, $`-3.0`$ $`\le `$ $`\mathrm{log}\frac{n_{\mathrm{He}}}{n_\mathrm{H}}`$ $`\le `$ $`-1.0`$, at a metallicity of \[M/H\] = $`-1.5`$.
To establish the best fit we used the routines developed by Bergeron et al. (besa92 (1992)) and Saffer et al. (saff94 (1994)), which employ a $`\chi ^2`$ test. The fit program normalizes model spectra and observed spectra using the same points for the continuum definition. The results are plotted in Fig. 2 (upper panel). The errors are estimated to be about 10% in $`T_{\mathrm{eff}}`$ and 0.15 dex in $`\mathrm{log}g`$ (cf. Moehler et al. mohe97 (1997)). Representative error bars are shown in Fig. 2. To increase our data sample we reanalysed the NTT spectra described and analysed by Moehler et al. (mohe97 (1997)). For a detailed comparison see Moehler et al. (1999a ).
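The grid-based fit amounts to a straightforward $`\chi ^2`$ minimization; a minimal sketch (ours, not the published routines — all array names and shapes are illustrative assumptions) is:

```python
import numpy as np

def best_fit(obs, sigma, models, teff, logg, loghe):
    """Minimum-chi^2 grid point for a continuum-normalized observed spectrum.

    obs, sigma : (n_lambda,) observed fluxes and 1-sigma errors
    models     : (n_teff, n_logg, n_he, n_lambda) grid of model spectra
    """
    chi2 = (((models - obs) / sigma) ** 2).sum(axis=-1)
    i, j, k = np.unravel_index(np.argmin(chi2), chi2.shape)
    return teff[i], logg[j], loghe[k], chi2[i, j, k]
```

The actual routines additionally interpolate between grid points and normalize model and observation consistently, which this sketch omits.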
### 3.2 Iron abundances
Due to the spectral resolution and the weakness of the few observed lines a detailed abundance analysis (such as that of Behr et al., 1999) is beyond the scope of this paper. Nevertheless we can estimate the iron abundance in the stars by fitting the Fe II lines marked in Fig. 1. A first check indicated that the iron abundance was about solar whereas the magnesium abundance was close to the mean cluster abundance.
As iron is very important for the temperature stratification of stellar atmospheres we tried to take the increased iron abundance into account: We used ATLAS9 to calculate a solar metallicity atmosphere. The emergent spectrum was then computed from the solar metallicity model stratification by reducing the abundances of all metals M (except iron) to the cluster abundances (\[M/H\] = $`-1.5`$). It was not possible to compute an emergent spectrum that was fully consistent with this iron-enriched composition, since the ATLAS9 code requires a scaled solar composition. We next repeated the fit to derive $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$, and $`\mathrm{log}\frac{n_{\mathrm{He}}}{n_\mathrm{H}}`$ with these enriched model atmospheres. The results are plotted in Fig. 2 (central panel).
For each star observed at the ESO 1.52m telescope we then computed an “iron-enriched” model spectrum with $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$ as derived from the fits of the Balmer and helium lines with the “enriched” model atmospheres (cf. Fig. 2, central panel) and $`\mathrm{log}\frac{n_{\mathrm{He}}}{n_\mathrm{H}}`$ = $`-2`$. The fit of the iron lines was started with a solar iron abundance and the iron abundance was varied until $`\chi ^2`$ achieved a minimum. As the radiative levitation in BHB stars is due to diffusion processes (which is also indicated by the helium deficiency found in these stars), the atmospheres have to be very stable. We therefore kept the microturbulent velocity $`\xi `$ at 0 km/s – the iron abundances plotted in Fig. 3 are thus upper limits. The mean iron abundance turns out to be \[Fe/H\] $`\approx +0.1`$ dex (for 18 stars hotter than about 11,500 K) and $`\approx -1.6`$ for the one star cooler than 11,500 K. Although the iron abundance for the hotter BHB stars is about a factor of 50 larger than the cluster abundance, it is smaller by a factor of 3 than the value of \[Fe/H\] = $`+`$0.5 estimated by Grundahl et al. (grca99 (1999)) as being necessary to explain the Strömgren $`u`$-jump observed in $`u`$, $`u-y`$ colour-magnitude diagrams.
Our results are in good agreement with the findings of Behr et al. (beco99 (1999)) for BHB stars in M 13 and Glaspey et al. (glmi89 (1989)) for two BHB stars in NGC 6752. Again in agreement with Behr et al. (beco99 (1999)) we see a decrease in helium abundance with increasing temperature, whereas the iron abundance stays roughly constant over the observed temperature range.
### 3.3 Influence of iron enrichment
From Fig. 2 it is clear that the use of enriched model atmospheres moves most stars closer to the zero-age horizontal branch (ZAHB). The three stars between 10,000 K and 12,000 K, however, fall below the canonical ZAHB when fitted with enriched model atmospheres. This is plausible as the radiative levitation is supposed to start around 11,500 K (Grundahl et al. grca99 (1999)) and the cooler stars therefore should have metal-poor atmospheres (see also Fig. 3 where the coolest analysed star shows no evidence of iron enrichment). We repeated the experiment by increasing the iron abundance to \[Fe/H\]=$`+`$0.5 (see Fig. 2 lower panel), which did not change the resulting values for $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ significantly.
Since HB stars at these temperatures spend most of their lifetime close to the ZAHB, one would expect the majority of the stars to scatter (within the observational error limits) around the ZAHB line in the $`\mathrm{log}T_{\mathrm{eff}}`$, $`\mathrm{log}g`$–diagram. However, this is not the case for the canonical ZAHB (solid lines in Fig. 2) even with the use of iron-enriched model atmospheres (central and lower panels in Fig. 2). The scatter instead seems more consistent with the ZAHB for moderate helium mixing (dashed lines in Fig. 2). Thus the physical parameters of HB stars hotter than $`11,500`$ K in NGC 6752, as derived in this paper, are best explained by a combination of helium mixing and radiative levitation effects.
## 4 Conclusions
Our conclusions can be summarized as follows:
1. We have obtained new optical spectra of 32 hot HB stars in NGC 6752 with 11,000 K $`<`$ $`T_{\mathrm{eff}}`$ $`<`$ 25,000 K. When these spectra (together with older spectra of hotter stars) are analysed using model atmospheres with the cluster metallicity (\[Fe/H\] = $`-1.5`$), they show the same “low-gravity” anomaly with respect to canonical HB models, that has been observed in several other clusters (Moehler 1999).
2. For 18 stars with $`T_{\mathrm{eff}}`$ $`>`$ 11,500 K, we estimate a mean iron abundance of \[Fe/H\] $`\approx `$ $`+`$0.1, whereas magnesium is consistent with the cluster metallicity. The hot HB stars in NGC 6752 thus show an abundance pattern similar to that observed in M 13 (Behr et al. beco99 (1999)), which presumably arises from radiative levitation of iron (Grundahl et al. 1999).
3. When the hot HB stars are analysed using model atmospheres with an appropriately high iron abundance, the size of the gravity anomaly with respect to canonical HB models is significantly reduced. Whether the remaining differences between observations and canonical theory can be attributed to levitation effects on elements other than iron remains to be investigated by detailed modeling of the diffusion processes in the stellar atmospheres. With presently available models, the derived gravities for HB stars hotter than $`11,500`$ K are best fit by non-canonical HB models which include deep mixing of helium on the RGB (Sweigart swei99 (1999)).
###### Acknowledgements.
We thank the staff of the ESO La Silla observatory for their support during our observations. S.M. acknowledges financial support from the DARA under grant 50 OR 96029-ZA. M.C. was supported by NASA through Hubble Fellowship grant HF–01105.01–98A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. We are grateful to the referee, Dr. R. Kraft, for his speedy report and valuable remarks.
# Transitions to Line-Defect Turbulence in Complex Oscillatory Media
## Abstract
The transition from complex-periodic to chaotic behavior is investigated in oscillatory media supporting spiral waves. We find turbulent regimes characterized by the spontaneous nucleation, proliferation and erratic motion of synchronization defect lines which separate domains of different oscillation phases. The line defect dynamics is controlled by the competition between diffusion, which reduces line length and curvature, and phase-gradient-induced growth. The onset of each type of defect-line turbulence is identified with a non-equilibrium phase transition characterized by non-trivial critical exponents.
Two-dimensional reactive media with oscillatory dynamics support a variety of spatio-temporal patterns including spiral waves. In the vicinity of the Hopf bifurcation, spiral waves are described by the complex Ginzburg-Landau equation (CGLE) . Spiral waves can also exist if the local dynamics is complex-periodic or even chaotic . While the basic features of such regimes are akin to those of the CGLE, a complete description of complex oscillatory media cannot be given in terms of the CGLE. For example, these media may undergo bifurcations where the period of the orbit doubles at almost every point in space . The rotational symmetry of spiral waves is then broken by the presence of synchronization defect lines where the phase of the local orbit changes by multiples of $`2\pi `$. These defect lines have been observed in a super-excitable system and in experiments on the Belousov-Zhabotinsky reaction .
In this Letter, we study the fate of the synchronization defect lines as the system parameters are tuned to approach the domain where spiral waves have chaotic local dynamics. We show the existence of a new type of spatiotemporal chaos where the global temporal periodicity of the medium is broken by the spontaneous nucleation, proliferation and erratic motion of the defect lines separating domains of different oscillation phases. We describe the basic mechanisms governing the dynamics of the defect lines and provide evidence that the onset of each type of defect-line turbulence is a non-equilibrium phase transition with non-trivial critical exponents. We also study inhomogeneous media without spirals where line motion has a different nature and different scaling laws due to the absence of overall phase gradients.
We study reaction-diffusion (RD) systems where the local kinetics is described by $`𝐑(𝐜(𝐫,t))`$, a vector of nonlinear functions of the local concentrations $`𝐜(𝐫,t)`$. For simplicity, we assume that all species have the same diffusion coefficient $`D`$. While our considerations should apply to any RD system exhibiting a period doubling cascade to chaos, the calculations described here were carried out on the Rössler model where $`R_x=-c_y-c_z,R_y=c_x+Ac_y,R_z=c_xc_z-Cc_z+B`$. We investigate the behavior of the system as $`C`$ increases, with other parameters fixed at $`A=B=0.2`$ and $`D=0.4`$.
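A minimal explicit-Euler sketch of this reaction-diffusion model is given below (our illustration; the grid size, spacing, and time step are assumptions, not values used in the paper):

```python
import numpy as np

A, B, C, D = 0.2, 0.2, 4.3, 0.4       # kinetic parameters and diffusion constant
n, dx, dt = 128, 1.0, 0.001           # assumed grid and time step
c = 0.1 * np.random.rand(3, n, n)     # c_x, c_y, c_z concentration fields

def laplacian(f):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def step(c):
    cx, cy, cz = c
    R = np.array([-cy - cz,                 # R_x
                  cx + A * cy,              # R_y
                  cx * cz - C * cz + B])    # R_z
    return c + dt * (R + D * np.array([laplacian(f) for f in c]))
```

Spiral-wave initial conditions would be imposed on `c` before iterating `step`.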
Beyond the Hopf bifurcation, the system supports a spiral solution and infinitely-many spatially-blocked configurations of spirals coexist with spatially inhomogeneous states without spirals. This multistability is preserved away from the Hopf bifurcation, even for $`C`$ values corresponding to chaotic regimes. The blocked configurations form irregular cellular structures, similar to those observed in the CGLE. Cells are centered on spiral cores and their polygonal boundaries are delimited by shock lines where spiral waves from two neighboring cells collide. Fig. 1(a) shows a snapshot of the $`c_z(𝐫)`$ field for a simple configuration with two spirals.
As $`C`$ increases beyond the Hopf point, the Rössler ODE system exhibits a period-doubling route to chaos followed by band-chaotic regimes intertwined with windows of periodic behavior. In spatially distributed media supporting spiral waves two period-doubling bifurcations take place at $`C\approx 3.03`$ and $`C\approx 4.075`$. These values are larger than the corresponding values for the ODE, 2.83 and 3.86, respectively. These shifts in the bifurcation diagram arise from the concentration gradients created by the spiral waves and their values depend on the spiral wavelength. In spatially inhomogeneous media without spiral waves the spatial gradients are small and the shifts of the bifurcation points are not detectable. The period doublings in media with spiral waves are necessarily accompanied by the appearance of synchronization defect lines ($`\mathrm{\Omega }`$ curves) whose existence is required to reconcile the doubling of the oscillation period and the period of rotation of the spiral wave .
In the period-2 regime ($`3.03\lesssim C\lesssim 4.075`$), a single type of synchronization defect line exists. These $`\mathrm{\Omega }`$ curves are defined as the loci of those points in the medium where the two loops of the period-2 orbit exchange their positions in local phase space and the dynamics is effectively period-1. The period-2 oscillations on opposite sides of the $`\mathrm{\Omega }`$ curve are shifted relative to each other by $`2\pi `$ (a half of the full period). A medium with period-4 dynamics may support two types of synchronization defect lines corresponding to the two different possible types of loop exchange for a period-4 orbit. Across $`\mathrm{\Omega }_1`$ curves, which inherit properties of the $`\mathrm{\Omega }`$ curves, the two period-2 bands of the period-4 orbits exchange, leading to a $`\pm 2\pi `$ phase shift across them. The $`\mathrm{\Omega }_2`$ curves correspond to the exchange of loops within the two bands, a finer rearrangement of the local cycle. Along them, the dynamics is effectively period-2 and there is a $`4\pi `$ phase shift as the curves are crossed.
Synchronization defect lines can be conveniently located by constructing scalar fields encoding the distance between loops of the period-4 orbit in local phase space. To this aim, we chose to take advantage of the regular succession of peaks in the local time series of $`c_z`$, whose heights are in one-to-one correspondence with the various loops of the orbit. Calculating, at each point $`𝐫`$, four such consecutive concentration maxima $`A_i(𝐫)`$ and ordering them so that $`A_1(𝐫)A_2(𝐫)A_3(𝐫)A_4(𝐫)`$, one may construct the scalar fields $`\xi _1(𝐫)=A_4(𝐫)A_1(𝐫)`$ and $`\xi _2(𝐫)=A_4(𝐫)A_3(𝐫)`$. In the period-4 case, $`\xi _1(𝐫)`$ and $`\xi _2(𝐫)`$ take on fixed non-zero values at points in the medium away from spiral cores and shock lines and vanish at points where the loop exchanges occur . Indeed, $`\xi _1(𝐫)`$ decreases to zero on the $`\mathrm{\Omega }_1`$ curves while $`\xi _2(𝐫)`$ vanishes on both the $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ curves. In the following, we study the fate of the $`\mathrm{\Omega }`$ lines and use $`\xi _1(𝐫)`$ and $`\xi _2(𝐫)`$ both to determine their length and to visualize them. The $`\xi _2(𝐫)`$ field corresponding to Fig. 1(a) is shown in Fig. 1(b). The thick vertical line connecting the spiral cores is an $`\mathrm{\Omega }_1`$ curve, while the thinner lines are $`\mathrm{\Omega }_2`$ curves.
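At a single grid point the construction reduces to a few lines (a sketch of ours, assuming at least four maxima have been recorded in the stored series):

```python
import numpy as np
from scipy.signal import find_peaks

def xi_fields(cz_series):
    """xi_1 and xi_2 from four consecutive maxima of the local c_z time series."""
    peaks, _ = find_peaks(cz_series)
    A = np.sort(cz_series[peaks[-4:]])     # ordered so that A_1 <= ... <= A_4
    return A[-1] - A[0], A[-1] - A[-2]     # xi_1 = A_4 - A_1, xi_2 = A_4 - A_3
```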
On the shock lines, where the phase gradient vanishes, the local dynamics is approximately that of the Rössler ODE, and is thus always more advanced along the bifurcation diagram. In particular, chaos first appears on the shock lines (for $`C\approx 4.20`$). For $`C=4.3`$, where most of the medium is still in the period-4 regime, two-banded chaos is seen on the shock lines (Fig. 2(a)). These localized chaotic regions give rise to fluctuations which may result in the creation of “bubbles” – domains delineated by circular $`\mathrm{\Omega }_2`$ curves (Fig. 1(b)). For $`C<C_{\mathrm{\Omega }_2}\approx 4.306`$, the bubbles are formed with a size smaller than a certain critical value and collapse shortly after their birth. As $`C`$ increases beyond $`C_{\mathrm{\Omega }_2}`$, the bubble nuclei begin to proliferate, forming large domains whose growth is limited by collisions with spiral cores or other domains.
A typical life-cycle of a domain is illustrated in Fig. 1(b-d). The shock lines are nucleation sites of $`\mathrm{\Omega }_2`$ domains. Consider the two bubble-shaped nuclei indicated by arrows in panel (b) which were born in close proximity. In panel (c), they have coalesced, forming one rapidly-growing domain which then collides with its neighbor, leaving a shrinking internal domain (panel (d)). The contact of two $`\mathrm{\Omega }_2`$ lines always leads to their reconnection and a reduction of their total length. The contact of $`\mathrm{\Omega }_1`$ and $`\mathrm{\Omega }_2`$ lines leaves one $`\mathrm{\Omega }_1`$ line. This event is accompanied by a change of sign of the phase shift across the $`\mathrm{\Omega }_1`$ line.
The evolution of the size and shape of the $`\mathrm{\Omega }_2`$ line encircling a domain is controlled by the balance of two competing factors: propagation along the phase gradient directed toward spiral cores, which results in line growth, and the tendency of diffusion to eliminate curvature and reduce the length of defect lines. To investigate the interplay of these two factors, a series of simulations was carried out on a system without spiral waves, but with constant concentrations corresponding to those in the spiral core imposed along one pair of parallel boundaries. This effectively creates “spiral core” boundaries emitting trains of plane waves which collide in the center of the system to form a straight shock line. In this case, the nucleated domains have very simple finger-like shapes, normal to the shock line, and consist of two straight segments with approximately semi-circular caps (Fig. 3). The domain growth velocity normal to the core boundaries, $`v_\perp `$, varies with the radius $`R`$ of the arc of the growing tip as $`v_\perp =v_p-\mathrm{\Delta }/R`$ where $`v_p\approx 0.126`$ is the velocity of a straight $`\mathrm{\Omega }_2`$ line parallel to the core boundaries, and $`\mathrm{\Delta }\approx 0.658`$. The linear dependence of $`v_\perp `$ on $`1/R`$ shown in Fig. 3 confirms the effect of mean curvature on the velocity of $`\mathrm{\Omega }`$ line propagation. Since the width of small domains is approximately equal to $`2R`$, one can estimate the critical size that must be exceeded for domain proliferation. One finds $`R_c\approx 5.228`$, in good agreement with direct measurements from the observation of domains whose shape does not change with time.
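Setting $`v_\perp =0`$ in this growth law gives the stall radius directly (a one-line consistency check):

$$R_c=\frac{\mathrm{\Delta }}{v_p}=\frac{0.658}{0.126}\approx 5.2,$$

in line with the value quoted above from the observation of stationary-shape domains.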
The transition to $`\mathrm{\Omega }_2`$-line-defect turbulence in media with spiral waves, which occurs around $`C=4.3`$, changes the character of the local dynamics observed in the bulk of the medium. As the $`\mathrm{\Omega }_2`$ lines propagate, the associated loop exchanges result in an effective band-merging in the orbits of local trajectories so that they take the form of two-banded chaotic trajectories (Fig. 2(b)). Although the local trajectories retain their period-4 structure between two passages of $`\mathrm{\Omega }_2`$ lines, the long-time trajectory cannot be distinguished from that of two-banded chaos. Thus, the global transition of the medium to defect-line turbulence can be characterized locally as intermittent band-merging.
As the parameter increases further ($`C>4.44`$) the local dynamics undergoes prominent changes. It fails to exhibit a period-4 pattern in the intervals separating line defect passages, and consists instead of four-banded orbits whose bands grow in width with increasing $`C`$ and merge at $`C\approx 4.7`$. Together with this permanent band-merging, spontaneous nucleation of $`\mathrm{\Omega }_2`$ bubbles occurs in the bulk. These chaotic $`\mathrm{\Omega }_2`$ lines are the loci of medium points where the two chaotic bands of the local orbit shrink and a thick “period-2” orbit is formed. As $`C`$ increases, their width decreases, and for $`C\approx 4.8`$ the $`\mathrm{\Omega }_2`$ lines cease to exist as well defined objects.
While the local dynamics changes continuously to four- and subsequently two-banded chaos, another transition, mediated by moving $`\mathrm{\Omega }_1`$ lines, takes place. At $`C=4.557`$, the shock regions, where the local dynamics exhibits one-banded chaos, begin to spontaneously nucleate bubbles delineated by $`\mathrm{\Omega }_1`$ lines. As $`C`$ increases, the newly-born domains begin to proliferate. The qualitative features of this transition are similar to those of the $`\mathrm{\Omega }_2`$-line turbulence transition: the dynamics of the $`\mathrm{\Omega }_1`$ lines encircling domains is controlled by the factors discussed above and, considering the shape of the long-time local phase-space trajectories, it can be associated with intermittent band-merging (Fig. 2(c)), leading to one-banded local chaotic orbits.
As the parameter $`C`$ increases even further, beyond $`C\approx 5.0`$, the local trajectories in the bulk of the medium exhibit complete band-merging to one-banded chaos. In this regime, no defect lines can be identified and further increase in $`C`$ does not result in any qualitative changes. However, spiral waves continue to exist, signalling the robustness of phase synchronization in this amplitude-turbulent regime.
We now focus on the two onsets of synchronization defect line turbulence. The $`\mathrm{\Omega }_i`$ line density, $`\rho _i(t)=\ell _i(t)/\sqrt{S}`$, where $`S`$ is the surface area of the medium and $`\ell _i(t)`$ the instantaneous total length of $`\mathrm{\Omega }_i`$ lines, can serve as an order parameter to characterize these transitions. Above each transition threshold, and as long as the corresponding defect lines continue to exist, the balance between line growth and destruction results in a statistically stationary average density $`\overline{\rho _i}`$, while $`\rho _i(t)`$ fluctuates. Thus the time series of $`\rho _2(t)`$ above the first threshold shown in Fig. 4(a) exhibits high-frequency, low-amplitude fluctuations attributed to the birth and death of nuclei in the shock regions, as well as large-amplitude oscillations with long correlation time. This suggests that the proliferation of domains and their destruction through coalescence occurs cooperatively. This is confirmed by the fact that, for both transitions, the order parameter goes continuously to zero as $`C`$ decreases toward the threshold. (In Fig. 4(d) the $`\overline{\rho _1}`$ density does not vanish below threshold because the contribution from the stationary $`\mathrm{\Omega }_1`$ line shown in Fig. 1 has not been removed.) The data fall on curves with power-law forms, $`\overline{\rho _i}(C)\propto (C-C_{\mathrm{\Omega }_i})^{\beta _i}`$, the signature of continuous phase transitions. The critical values are found to be $`C_{\mathrm{\Omega }_2}\approx 4.306`$ and $`C_{\mathrm{\Omega }_1}\approx 4.557`$, while the critical exponents are $`\beta _2\approx 0.25`$ and $`\beta _1\approx 0.49`$. Finite-size effects usually accompany critical point phenomena as correlation lengths diverge near threshold. Here, the finite size to consider is the typical size of the cells composing the spiral wave structure. Strictly speaking, $`\overline{\rho _i}`$ is not an intensive quantity because line-defect motion is constrained to occur between the network of shocks (where they nucleate) and the spiral cores, and this area varies from one spiral configuration to another. However, the data in Fig. 4(b,d) show that $`\overline{\rho _i}(C)`$ depends weakly on the cell size.
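The exponents $`\beta _i`$ follow from straight-line fits in log–log coordinates; a minimal sketch (ours, with hypothetical input arrays) is:

```python
import numpy as np

def fit_beta(C, rho_mean, C_crit):
    """Slope of log(rho) vs log(C - C_crit) for parameter values above threshold."""
    mask = C > C_crit
    x, y = np.log(C[mask] - C_crit), np.log(rho_mean[mask])
    beta, log_amplitude = np.polyfit(x, y, 1)
    return beta
```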
A number of conclusions can be drawn from the simulation results. The two transitions exhibit significantly different scaling properties, a remarkable fact given that the mechanisms at play appear to be the same in both cases. This difference may arise from the fact that the $`\mathrm{\Omega }_1`$ line transition takes place when $`\mathrm{\Omega }_2`$ lines still exist in the medium.
The zero-spiral-density limit is singular since the transitions observed in the medium without spiral waves are different from those described above. In this case, the onset of defect-line nucleation occurs at the same critical values $`C_{\mathrm{\Omega }_i}`$. However, in the absence of large-scale phase gradients, the entire medium behaves like the shock regions separating spiral wave cells, defect lines do not grow and the increase of $`\overline{\rho _i}(C)`$ arises essentially from the enhanced nucleation rate. This leads to a different form of the onset of line turbulence (cf. Fig. 4(c) for the behavior of $`\overline{\rho _2}(C)`$) characterized by different critical exponents ($`\beta _1^{\prime }=1.22`$, $`\beta _2^{\prime }=0.53`$). These values are difficult to estimate because of fluctuations in the $`\xi _i`$ fields not associated with fully developed $`\mathrm{\Omega }`$ lines. The fact that $`\beta _1^{\prime }`$ and $`\beta _2^{\prime }`$ are significantly different from $`\beta _1`$ and $`\beta _2`$ supports the conclusion that the character of the transitions is different in the spiral and spiral-free systems.
To our knowledge, there is no equilibrium equivalent of these phase transitions, nor were their non-equilibrium analogs reported previously. The line-defect phase transitions may constitute a special class of non-equilibrium critical phenomena since, in this form of spatiotemporal chaos, it is the dynamics of one-dimensional synchronization defects that breaks the global temporal periodicity of the medium.
Finally, the observations of synchronization defect lines in a number of excitable systems demonstrate their independence of any particular reaction mechanism. Therefore, the phenomena presented in this Letter should be observable not only in chemical media exhibiting period-doubling but also in a much broader class of systems. For example, they may exist in the cardiac muscle where complex-excitable dynamics and spiral waves, necessary prerequisites for the emergence of synchronization defects, have been established experimentally .
## 1 Introduction
We describe an experiment to measure the cross sections for the disintegration of deuterons by neutral- and charged-current interactions with low energy electron-antineutrinos. Data were taken at the Centrale Nucleaire de Bugey in France, at 18 m from the core of Reactor 5.
Improvements were made to the cosmic-ray shielding of the detector which we previously used in a similar experiment at the Savannah River Plant in South Carolina in the late 1970s . An outer layer of active cosmic-ray veto detectors was added which completely surrounds the lead and steel gamma-ray shield. These improvements reduced the neutron background due to cosmic rays by a factor of six, to $`25`$ day<sup>-1</sup>.
There are two reactions of interest in this experiment — the Neutral Current disintegration of the Deuteron (NCD),
$$\overline{\nu }_e+d\to \overline{\nu }_e+p+n,$$
and the Charged Current disintegration of the Deuteron (CCD),
$$\overline{\nu }_e+d\to e^++n+n.$$
The experiment was designed to probe these reactions at low energies ($`\sim `$1 MeV). In particular, it measures the square of the isovector-axial vector coupling constant ($`\beta ^2`$). The neutrino-induced disintegration of the deuteron is an ideal reaction for this purpose since, at reactor neutrino energies, all other coupling constants make negligible contributions to the cross section. Other coupling constants depend on the value of the Weinberg mixing angle, $`\theta _W`$, which is an unspecified parameter of the theory, while $`\beta `$ is predicted to be -1.0, independent of $`\theta _W`$. In addition, it does not suffer from ambiguities arising from the presence of vector interactions, nor from momentum-transfer-dependent form factors, to which high-energy experiments are subject. The deuteron disintegration experiment is unique, then, in being able to measure the contribution of a single coupling constant with an unambiguous theoretical value.
## 2 The Detector
### 2.1 Location
The detector was installed at Reactor 5 of the Centrale Nucleaire de Bugey, near Lyon, France. It is located in a room about 10 m below ground level with an overburden of 25 mwe. The distance from the center of the reactor core to the center of the detector was 18.5 meters.
### 2.2 The Target
Schematics of the detector and shielding are shown in Figures 1, 2, and 3. The target detector, labeled D<sub>2</sub>O in Figures 1 and 2 and shown in more detail in Figure 3, consists of a cylindrical stainless steel tank, 54 cm in diameter, 122 cm in height, and a wall thickness of 0.18 cm, containing 267 kg of 99.85% pure D<sub>2</sub>O and ten tubular proportional chambers, equally spaced in two concentric rings of 10.16 cm and 20.37 cm radius and offset from each other by $`36^{\circ }`$. Running down the center of the target tank is a stainless steel tube that allows the placement of radioactive sources inside the tank for calibration purposes.
Immediately surrounding the target tank is 10 cm of lead shielding and a 1 mm layer of cadmium to absorb thermal neutrons. These are contained in an outer steel tank that sits on a small pedestal inside the large, inner veto detector tank (Tank 2 of Figures 1 and 2).
The proportional counters are 5.08 cm in diameter, 122 cm in height, have a wall thickness of 0.025 cm, and are filled with 1 atm of <sup>3</sup>He and 1.7 atm of Ar as a buffer. They are essentially black to thermal neutrons, with a capture cross section of $`\sim `$5300 barns per <sup>3</sup>He nucleus. The neutron capture in the counters proceeds via the (n,p) reaction:
$${}_{}{}^{3}\text{He}+n\to {}_{}{}^{3}\text{H}+p+764\text{ keV.}$$
The energy resolution of the counters was measured to be 3% at the 764 keV neutron capture peak. A typical neutron spectrum obtained with a <sup>252</sup>Cf neutron source is shown in Fig. 4. A discussion of the neutron detection efficiency is given in Section 3.3. A more detailed description of the construction and testing of the <sup>3</sup>He proportional tubes can be found in Reference .
### 2.3 Detection Technique
The neutral-current and charged-current events in the D<sub>2</sub>O target are recognized solely by the neutrons they produce: the neutral-current reaction releases a single neutron and the charged-current releases two. Consequently, the quantities of interest are the rates of single and double neutron captures.
### 2.4 The Shielding and Anticoincidence System
Due to the detector’s close proximity to the reactor core, there can be a significant reactor associated gamma flux. Gamma rays of $`>`$2.2 MeV which reach the target detector can photodisintegrate the deuterons, leading to single neutron signals. In the previous version of the experiment, this background was reduced by surrounding the inner layer of active cosmic-ray veto detectors with a layer of lead and water shielding. Unfortunately, cosmic rays interacting in the surrounding lead shield, but not reaching the inner veto counters, were a significant source of neutrons in the target detector.
It was concluded that the shielding could be improved by an additional layer of active cosmic ray veto detectors outside the lead shielding. In this way, cosmic rays interacting in the lead would be seen by the outer veto detectors. Simulations showed that this would reduce the cosmic ray neutron background by a factor of three to four.
In the current configuration, the target tank is in the center of a large liquid scintillator detector (the “inner” veto) composed of Tank 2 and Tank 4, shown in Figure 2. Immediately surrounding Tanks 2 and 4 is a layer each of lead and steel. Surrounding this layer of passive shielding is an outer layer of cosmic-ray veto detectors (the “outer” veto). Slabs of plastic scintillator cover the north and south sides and the bottom face, while larger tanks of mineral oil scintillator (Tanks 1, 3, & 5) cover the east, west, and top faces. The liquid scintillator used in all five tanks is mineral oil based with a high flash point. Five-inch hemispherical photomultiplier tubes (PMTs) are used to view the liquid scintillator tanks and three-inch tubes are employed on the plastic slabs.
As noted above, the inner veto system consists of two liquid scintillator tanks, Tank 2 and Tank 4. As indicated in Figures 1 and 2, there is a string of fifteen evenly spaced PMTs along each vertical corner of Tank 2. Alternating tubes are offset in direction by $`90^{\circ }`$. Along the east and west walls on the floor of the tank is a row of PMTs which view the space underneath the target tank. Tank 4 has three PMTs in each vertical corner that are configured like those of Tank 2.
There are eight signal lines coming from the inner veto system. The PMTs in each vertical string are ganged onto a single line, as are each row along the floor. The signals from the east corners of Tank 4 are fanned together as are the signals from the west corners.
The inner veto detector is primarily a “soft” veto — its signals are recorded at each trigger and analyzed off line. However, it also triggers an on-line veto in the event that all four corner strings see a large pulse simultaneously. Such a signal is likely to be produced by a throughgoing muon.
Figure 5 shows some details of our electronics configuration.
### 2.5 Data collection system
The data-collection program, based on a 80486DX processor and the software package LabVIEW, has a fast, graphical interface to the electronics. The software takes advantage of the multitasking capabilities of the operating system, allowing the transfer and processing of data without interrupting data collection.
A trigger is generated under the following conditions:
1. A neutron-like pulse is detected in one of the <sup>3</sup>He proportional counters.
2. No pulses above hardware thresholds were detected in any of the inner or outer veto detectors in the preceding $`900\mu `$s. This value was chosen to reduce background from muon-induced neutrons arising in the inner-anti scintillator, which had a neutron capture time of about 200 $`\mu `$s.
When a trigger occurs, the contents of waveform digitizers and scalers are read by the computer and written to disk. The contents of the digitizers give a pulse history of all detectors for a period of 4 milliseconds before and after the event.
More details on the detector and data-collection system can be found in Reference .
## 3 Data Analysis
### 3.1 Selection criteria
After the data are collected, they are further reduced by offline selection according to the following criteria.
#### 3.1.1 Target cuts
The purpose of the target cuts is to remove any events that do not appear to be valid neutron captures.
An event is removed if there is no target pulse in the pulse-height acceptance window within 5$`\mu `$sec of the trigger time. The pulse-height acceptance values were determined from the neutron calibrations and varied slightly from run to run. The total number of target peaks in the pulse-height window during the 782$`\mu `$sec (three times the neutron capture time in the target) following the trigger is taken to be the number of neutrons in the event. This time interval was selected to maximize the signal to background. Shorter time windows yield consistent results with larger statistical errors.
An event is also removed if it has a pulse in the neutron pulse-height acceptance window before the trigger time.
#### 3.1.2 Outer-anti cut
This cut removes cosmic-ray muons that might create neutrons that would subsequently be detected by the target.
If during the 1800 $`\mu `$sec preceding the trigger a signal is detected in any outer anti which exceeds the threshold for that counter, the event is removed.
The pulse-height threshold values were determined run by run. The values were chosen at the lowest value for which the events removed by this criterion were at least twice the number of the “background” peaks at the same pulse height.
#### 3.1.3 Inner-anti cuts
The inner-antis provide additional protection against cosmic-ray muons that sneak through the outer anti. However, this large volume of liquid scintillator also provides a large target for inverse-beta events on hydrogen ($`\overline{\nu _e}+p\to e^++n`$). A small fraction ($`\sim 0.08\%`$) of the neutrons thus produced diffuse into the target area and are recorded by the <sup>3</sup>He tubes. To reduce this number, a low-energy cut is applied to the inner antis, thus using the light produced by the positron to veto the event.
The main purpose of this cut is to remove the inverse-beta events. Any event with $`>`$0.8 MeV in either Tank 2 or Tank 4 from 900 $`\mu `$sec before to 200 $`\mu `$sec after the trigger was removed. This energy-threshold value was chosen in order to remove the maximum number of inverse-beta events, while not suffering too much dead time from the many low-energy background pulses. From the Monte Carlo, the mean time between production of a neutron by the inverse-beta process in Tanks 2 or 4 and its subsequent capture by a <sup>3</sup>He tube was about 230 $`\mu `$sec. Thus the period in which this cut is active is from about four such mean times before to one after the trigger.
Events having a pulse of total energy exceeding 8 MeV in Tank 2 or 6 MeV in Tank 4 from 2400 to 900$`\mu `$sec before the trigger are removed. This cut removes cosmic-ray events that are recorded before the beginning of the hardware anti block. Extending the times earlier than 2400$`\mu `$sec has little effect.
The fraction of events removed by each of the above cuts is shown in Table 1. Figure 6 shows the effects of the cuts on the data. As a result of the cuts, the number of candidate neutrino events is reduced from roughly 60,000 per day to about 25 per day, with the reactor off.
### 3.2 Monte Carlo calculations
geant with the gcalor interface was used for all Monte Carlo simulations. The gcalor interface handles neutron transport from 20 MeV down to thermal energies. geant handles the transport of all other particles. One comparison of the data and the Monte Carlo is given in Figs. 7 and 8. The former shows the capture-time spectrum of neutrons detected by the <sup>3</sup>He counters from a simulated <sup>252</sup>Cf source at the center of the target detector. The mean capture time is $`265\pm 3`$ $`\mu `$secs. Fig. 8 is for the same configuration, but for real data. The mean capture time is $`267\pm 4`$ $`\mu `$secs.
### 3.3 Neutron Detection Efficiency
Special neutron-calibration runs were periodically made with a <sup>252</sup>Cf source in the center of the target detector. Data from these runs were processed thru the same programs used to analyze the neutrino events. In particular, the same target cuts (as described above) were used.
The resulting pulse-height spectra from the <sup>3</sup>He tubes were histogrammed for each calibration run, and the peaks fitted to Gaussians. Only those pulses within 2 standard deviations of the peak value are finally accepted as neutrons. The numbers of events with 1, 2, 3, 4, and 5 neutrons within a given time window were tallied. The time window chosen was 3 neutron capture times.
Based on the known neutron multiplicity from <sup>252</sup>Cf fissions, one can calculate the neutron detection efficiency by assuming various efficiencies and comparing the observed number distribution with the calculated distribution. Our procedure took into account:
* The neutron-number distribution from <sup>252</sup>Cf fission.
* The neutron acceptance time window of 3 capture times.
* The probability of an “extra” fission from the Cf source during the acceptance time window, which is a function of the source activity.
This procedure yielded a mean efficiency of 0.41$`\pm `$0.01 for a neutron source at the center of the target. This value agreed well with the value derived from the Monte Carlo. As a result we were able to use the Monte Carlo value of 0.29$`\pm `$0.01 as the mean efficiency for single neutrons generated isotropically throughout the D<sub>2</sub>O of the target volume.
The efficiency for two neutrons is the square of the single-neutron efficiency (0.084$`\pm `$0.006). And the efficiency for seeing only 1 neutron, if 2 were produced, is $`2\times 0.29\times (1.0-0.29)=0.41\pm 0.01`$.
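These combinations are simply binomial probabilities; the following check (ours) reproduces the quoted numbers:

```python
from math import comb

EPS = 0.29  # single-neutron detection efficiency from the Monte Carlo

def p_detect(k, n, eps=EPS):
    """Probability of detecting exactly k of n emitted neutrons."""
    return comb(n, k) * eps**k * (1 - eps)**(n - k)

print(p_detect(2, 2))   # 0.0841 -- both CCD neutrons detected
print(p_detect(1, 2))   # 0.4118 -- exactly one of the two detected
```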
### 3.4 Energy calibration of inner antis
Since we desired to base the inner anti cut criteria on energy, both Tanks 2 and 4 must be energy calibrated. Periodic runs were made over the course of the experiment with a <sup>60</sup>Co source placed at various known positions in Tank 2, and beneath the center of Tank 4. (Tank 4 was also calibrated with a <sup>252</sup>Cf source in that same position.) The data were compared with Monte Carlo simulations. Several algorithms were tested to find the best estimates of the energy. The best measures found were: Tank 4, sum the signals from the two PMT strings; Tank 2, sum the signals from the 4 vertical corner strings. Results are shown in Table 2.
## 4 Results
### 4.1 Event rates
The 1- and 2-neutron event rates for both the reactor up and down data are given in Table 3. Subtracting the reactor down rates from the up rates yields the data shown in Table 4, where we have also given the corresponding neutron detection efficiencies.
The 2-neutron rate (per day) is $`(2.45\pm 0.48)/(0.084\pm 0.006)=29.2\pm 6.1`$. To get the CCD rate from this value we need only correct for the effect of a nearby reactor, Reactor #4. It is located about 80 m from our detector. While taking data with Reactor #5 up, the mean power of Reactor #4 was 1925 MW; while #5 was down, it was 2246 MW. This gives a correction factor of +0.6% to our final rates. Thus the CCD daily rate is
$$R_{CCD}=(29.2\pm 6.1)\times (1.006)=29.4\pm 6.1$$
To get the NCD rate from the 1-neutron rate, two corrections must first be applied to the 1-neutron rate.
* The number of CCD reactions in which only 1, instead of 2, neutrons was observed must be subtracted. This number is the CCD rate times the efficiency of seeing only one out of the two neutrons:
$$(29.2\pm 6.1)\times (0.41\pm 0.01)=12.0\pm 2.5$$
* The number of inverse-beta decays in the inner detector that leak into the target volume and create a single neutron must also be subtracted. From the Monte Carlo we estimate 22.0$`\pm `$0.5 inverse-beta events per day enter the target volume. Also from the Monte Carlo we estimate that only 5$`\pm `$1% of those events survive the 0.8 MeV inner-anti cut. Thus the number of events to be subtracted from the 1-neutron rate is:
$$(22.0\pm 0.5)\times (0.05\pm 0.01)\times (0.29\pm 0.01)=0.3\pm 0.1$$
The corrected 1-neutron event rate is then
$$(37.7\pm 2.0)(12.0\pm 2.5)(0.3\pm 0.1)=25.4\pm 3.2$$
Applying the single-neutron detection efficiency correction and the Reactor #4 correction from above, yields the daily NCD rate:
$$R_{NCD}=(25.4\pm 3.2)\times (1.006)/(0.29\pm 0.01)=88.1\pm 11.1$$
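The rate arithmetic of this section condenses to a few lines of quadrature error propagation (our sketch, assuming uncorrelated Gaussian errors; it reproduces the quoted central values, with propagated errors close to those quoted):

```python
import numpy as np

def ratio(a, da, b, db):
    """Quotient a/b with uncorrelated 1-sigma errors added in quadrature."""
    r = a / b
    return r, r * np.hypot(da / a, db / b)

ccd, dccd = ratio(2.45, 0.48, 0.084, 0.006)                      # -> 29.2 +- 6.1 per day
n1, dn1 = 37.7 - 12.0 - 0.3, np.hypot(np.hypot(2.0, 2.5), 0.1)   # -> 25.4 +- 3.2
ncd, dncd = ratio(1.006 * n1, 1.006 * dn1, 0.29, 0.01)           # -> ~88 +- 11 per day
```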
### 4.2 Systematic uncertainties
The significant systematic uncertainties are given in Table 5. Other possible sources of systematic effects were considered but found to be insignificant: the calculated neutrino energy spectrum and the energy-calibration effects on data cuts.
### 4.3 Theoretically-expected event rates
The rates (events per day) are given by:
$$R=\frac{N_D}{4\pi r^2}\int \overline{N}_\nu (E_\nu )\sigma (E_\nu )\,dE_\nu $$
(1)
where $`E_\nu `$ is the neutrino energy, $`\overline{N}_\nu (E_\nu )`$ the daily average neutrino energy spectrum per MeV, $`N_D`$ the total number of deuterons in the target, $`\sigma (E_\nu )`$ the cross section for the process, and $`r`$ is the distance from the reactor to the detector.
The mean neutrino energy spectrum was determined from the reactor power and the core “burn up,” i.e. the isotopic composition of the fuel, as a function of time. The reactor power was obtained from reactor monitoring devices several times per day. The isotopic composition of the fuel rods was given to us at the beginning and ending of each reactor cycle of about 11 months.
The only four nuclei of importance are: <sup>235</sup>U, <sup>238</sup>U, <sup>239</sup>Pu, and <sup>241</sup>Pu. Combining the data in those references with the reactor power as a function of time, both the neutrino energy spectrum and the conversion factor from MW-hours to total number of neutrinos was calculated for each day.
The energy per fission and the mean number of fissions per day are given in Table 6 for each isotope.
The data-collection MW-hours was calculated for every day by combining the data collection times with the reactor power level at that time. The number of deuterons was 1.605$`\times 10^{28}`$.
Combining all these factors and dividing by the number of live days yields the mean neutrino spectrum (neutrinos/MeV/day) as shown in Table 7.
There has been considerable work done on the CCD and NCD cross sections in the past few years. Kubodera and Nozawa review the field in Ref. . In their Table 1, they give the cross sections for both the CCD and NCD reactions from threshold to 170 MeV. They state that the uncertainties in the values are 5%. Using the data of Ref. with Eqn. 1 yields $`R_{NCD}=87.2\pm 4.4`$ and $`R_{CCD}=30.4\pm 1.5`$.
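With the spectrum of Table 7 and tabulated cross sections, Eqn. 1 is a single quadrature; a sketch (ours, assuming input arrays in consistent units) is:

```python
import numpy as np

def daily_rate(E, N_nu, sigma, N_D=1.605e28, r_cm=1850.0):
    """Eqn. 1: events/day for flux spectrum N_nu(E) [nu/MeV/day] and sigma(E) [cm^2]."""
    return N_D / (4.0 * np.pi * r_cm**2) * np.trapz(N_nu * sigma, E)
```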
### 4.4 Experimental cross sections
The average cross section per neutrino is given by
$$\overline{\sigma }=\frac{\int \overline{N}_\nu (E_\nu )\sigma (E_\nu )\,dE_\nu }{\int \overline{N}_\nu (E_\nu )\,dE_\nu }$$
where the integrals go from the threshold for the reaction to infinity. Combining this with Eqn. 1, we get
$$\overline{\sigma }=\frac{4\pi r^2R}{N_D\int \overline{N}_\nu (E_\nu )\,dE_\nu }.$$
The values obtained for the NCD and CCD cross sections are given in Table 8.
### 4.5 Improved NCD cross section
As stated above, the CCD events create a significant background for the NCD events, and this background must be subtracted. The large uncertainty in our measured CCD rate makes a significant contribution to the uncertainty in the NCD rate. However, we note that our experimentally determined rates and the theoretically-expected rates agree within one standard deviation of both the experimental and theoretical uncertainties. Given this excellent agreement, we feel that an improved value for the NCD cross section may be calculated by using the theoretically-expected CCD daily rate (30.4 $`\pm `$ 1.52) rather than our observed rate (29.4 $`\pm `$ 6.1). Repeating the procedure described above yields an improved NCD rate of 86.7 $`\pm `$ 7.9, and a corresponding cross section of $`(5.98\pm 0.54)\times 10^{-45}`$ cm<sup>2</sup> per neutrino.
### 4.6 Calculation of $`\beta ^2`$
The value of $`\beta ^2`$ is given by the ratio of the measured neutral current rate to the theoretically expected rate. Thus we find
$$\beta ^2=\frac{88.1\pm 11.1\pm 1.2}{87.2\pm 4.4}=1.01\pm 0.16$$
Using the improved NCD cross section determined above, we get an improved $`\beta ^2`$ of 0.99 $`\pm `$ 0.10.
### 4.7 Neutrino oscillations
Another aspect of this experiment is its ability to explore neutrino oscillation by measuring the ratio of the CCD to NCD rates. At reactor neutrino energies, there is insufficient energy to create leptons more massive than the electron. Therefore, if neutrino oscillation occurs at a significant level, a deficit of charged-current events compared to neutral-current events should be seen. This leads us to define the ratio $`R`$, where
$$R=\frac{\frac{\mathrm{CCD}_{\mathrm{exp}}}{\mathrm{NCD}_{\mathrm{exp}}}}{\frac{\mathrm{CCD}_{\mathrm{th}}}{\mathrm{NCD}_{\mathrm{th}}}},$$
(2)
a ratio of ratios of experimentally determined reaction rates to theoretically expected reaction rates. A deficit of charged current reactions could imply that some electron antineutrinos have oscillated to a different flavor or helicity state, either of which would imply new physics.
We find
$$R=\frac{\frac{29.4\pm 6.1}{88.1\pm 11.1}}{\frac{30.4}{87.2}}=\frac{0.334\pm 0.080}{0.348\pm 0.004}=0.96\pm 0.23$$
The error of 1% in the theoretical ratio is taken from reference . The neutrino-oscillation exclusion plot resulting from this value of R is shown in Fig. 9.
### 4.8 Possible extension of this technique
Since the theoretical error in the ratio is quite small, a high statistics, good precision measurement of R should be possible. This measurement has the potential of reaching small values of $`\mathrm{sin}^22\theta `$.
In the current experiment, the CCD measurement is handicapped by the requirement that we observe two neutrons. The efficiency for observing this goes as the square of the single-neutron detection efficiency and so is necessarily small. Another method, which we explored but did not pursue, employs the addition of a small amount (approximately 10%) of light water into the heavy water target. This small addition does not affect the neutron detection efficiency appreciably and gives one the opportunity to observe the charged-current reaction on the proton (CCP). Since the CCP reaction has a much larger cross-section than the CCD reaction and a lower threshold of 1.8 MeV, and since it can be detected by searching for a single neutron, one can determine the ratio of NCD to CCP with higher precision.
## 5 Discussion
This experiment was an improved version of our experiment done at Savannah River in the late 1970s. The primary improvements were in the cosmic-ray shielding, which cut that background by a factor of six, and an improved data-collection system.
During the past 20 years great progress has been made in calculating the CCD and NCD cross sections, and they agree well with the results of this experiment.
## Acknowledgments
The authors would like to acknowledge the operators of the Bugey Nuclear Plant, and the contributions of our technicians, Thomasina Godbee, Herb Juds, Eric Juds, and Butch Juds. This work was supported by the U.S. Department of Energy.
# Realistic model of correlated disorder and Anderson localization
## Abstract
A conducting 1D chain or 2D film inside (or on the surface of) an insulator is considered. Impurities displace the charges inside the insulator. This results in a long-range fluctuating electric field acting on the conducting line (plane). This field can be modeled by that of randomly distributed electric dipoles. This model provides a random correlated potential with $`\langle U(r)U(r+k)\rangle \propto 1/k`$. In the 1D case such correlations may essentially influence the localization length but do not destroy Anderson localization.
It was recently stated in that some special correlations in a random potential can produce a mobility edge (between localized and delocalized states) inside the allowed band in the 1D tight-binding model. In principle, extrapolation of this result to 2D systems may give a possible explanation of the insulator-conductor transition in dilute 2D electron systems observed in ref. . In such a situation it is very important to build a reasonable model of “correlated disorder” in real systems and calculate the effects of this “real” disorder.
Usually, a 1D or 2D conductor is made inside or on the surface of an insulating material. Impurities inside the insulator displace the electric charges. However, a naive “random charge” model violates electro-neutrality and gives wrong results. Indeed, the impurities do not produce new charges, they only displace charges thus forming electric dipoles. Therefore, we consider a model of randomly distributed electric dipoles (alternatively, one can consider a spin glass model which gives the correlated random magnetic field). The dipoles have long-range electric field. Therefore, the potentials at different sites turn out to be correlated.
The system of the dipoles $`d_j`$ produces the potential
$$U(\mathbf{r})=e\sum_j\mathbf{d}_j\cdot \nabla \frac{1}{|\mathbf{r}-\mathbf{R}_j|}.$$
(1)
The average value of this potential is zero if $`\langle \mathbf{d}_j\rangle =0`$, while the correlator of the potentials at the points $`\mathbf{r}_1`$ and $`\mathbf{r}_2`$ is equal to
$$\langle U(\mathbf{r}_1)U(\mathbf{r}_2)\rangle =e^2\sum_{i,j}\left\langle \left(\mathbf{d}_i\cdot \nabla _1\frac{1}{|\mathbf{r}_1-\mathbf{R}_i|}\right)\left(\mathbf{d}_j\cdot \nabla _2\frac{1}{|\mathbf{r}_2-\mathbf{R}_j|}\right)\right\rangle =\frac{e^2\mathbf{d}^2}{3}\sum_j\left(\nabla _1\frac{1}{|\mathbf{r}_1-\mathbf{R}_j|}\right)\cdot \left(\nabla _2\frac{1}{|\mathbf{r}_2-\mathbf{R}_j|}\right).$$
(2)
Here we assumed that $`\langle d_i^\alpha d_j^\beta \rangle =(\mathbf{d}^2/3)\,\delta _{ij}\delta _{\alpha \beta }`$, where $`\alpha `$ and $`\beta `$ are space indices. Let us further suggest that the dipoles are distributed in space with a constant density $`\rho `$. Then we have
$$\langle U(\mathbf{r}_1)U(\mathbf{r}_2)\rangle =\frac{e^2\mathbf{d}^2\rho }{3}\int d^3R\,\left(\nabla _1\frac{1}{|\mathbf{r}_1-\mathbf{R}|}\right)\cdot \left(\nabla _2\frac{1}{|\mathbf{r}_2-\mathbf{R}|}\right)=\frac{4\pi e^2\mathbf{d}^2\rho }{3|\mathbf{r}_1-\mathbf{r}_2|}.$$
(3)
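The last equality follows from integration by parts together with $`\nabla ^2(1/r)=-4\pi \delta (\mathbf{r})`$ (spelled out here as a short check; after replacing $`\nabla _{1,2}`$ by gradients with respect to $`\mathbf{R}`$, the two sign flips cancel):

$$\int d^3R\,\nabla \frac{1}{|\mathbf{r}_1-\mathbf{R}|}\cdot \nabla \frac{1}{|\mathbf{r}_2-\mathbf{R}|}=-\int d^3R\,\frac{1}{|\mathbf{r}_1-\mathbf{R}|}\,\nabla ^2\frac{1}{|\mathbf{r}_2-\mathbf{R}|}=\frac{4\pi }{|\mathbf{r}_1-\mathbf{r}_2|}.$$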
However, the fluctuations of the potential at a given site $`𝐫_\mathrm{𝟏}=𝐫_\mathrm{𝟐}`$ cannot be calculated in such a way since the expression (3) diverges. To remove this divergence, one has to take into account the geometrical size $`r_0`$ of the dipoles. Indeed, inside the radius $`r_0`$ the electric field is not described by the dipole formula and the real potential $`U(r)`$ does not contain the singularity $`1/r^2`$ which leads to the divergence. Obviously, only those dipoles, which are closer to the conducting chain than $`r_0`$, make the problem mentioned. One may eliminate them by putting the chain into an empty tube with the diameter $`d_0>2r_0`$. This method of cut-off is justified since a conducting wire always has finite diameter. The correlator (3) then reads on the chain oriented along the $`x`$ axis as
$$\langle U(x_1)U(x_2)\rangle =\frac{4\pi e^2\mathbf{d}^2\rho }{3}\frac{1}{\sqrt{(x_1-x_2)^2+d_0^2}}\,\mathbf{E}\left[\frac{(x_1-x_2)^2}{(x_1-x_2)^2+d_0^2}\right]$$
(4)
where $`\mathbf{E}(m)`$ is the complete elliptic integral of the second kind with parameter $`m`$. This expression is everywhere finite and gives for the variance $`\langle U^2\rangle =2\pi ^2e^2\mathbf{d}^2\rho /(3d_0)`$, so that the normalized correlator is equal to
$$\xi (k)\equiv \frac{\langle U(x)U(x+k)\rangle }{\langle U^2\rangle }=\frac{2}{\pi }\frac{d_0}{\sqrt{k^2+d_0^2}}\mathbf{E}\left(\frac{k^2}{k^2+d_0^2}\right).$$
(5)
This function is positive definite, equal to 1 at $`k=0`$, and decays inversely proportional to the distance $`|k|`$,
$$\xi (k)=\frac{2d_0}{\pi |k|},$$
when $`|k|\gg d_0`$. A similar calculation in the case of a conducting film placed in a slit of width $`2z_0`$ inside the insulator gives
$$\xi (\mathbf{k})=\frac{\pi z_0}{|\mathbf{k}|}\left(1-\frac{2}{\pi }\mathrm{arctan}\frac{2z_0}{|\mathbf{k}|}\right).$$
(6)
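Both correlators are easy to check numerically. Below is a minimal Python sketch (our own, not part of the original analysis; it assumes SciPy's `ellipe` for the complete elliptic integral of the second kind) that evaluates Eq. (5) and verifies the normalization $`\xi (0)=1`$ and the $`1/|k|`$ tail:

```python
# Numerical check of the normalized 1D correlator, Eq. (5):
# xi(0) = 1 and xi(k) -> 2*d0/(pi*|k|) for |k| >> d0.
import numpy as np
from scipy.special import ellipe  # complete elliptic integral E(m)

def xi(k, d0):
    """Normalized pair correlator of Eq. (5); k and d0 in lattice units."""
    m = k**2 / (k**2 + d0**2)
    return (2.0 / np.pi) * d0 / np.sqrt(k**2 + d0**2) * ellipe(m)

d0 = 2.0
print(xi(np.array([0.0]), d0))           # -> [1.0], since E(0) = pi/2
k = np.array([20.0, 100.0, 500.0])
print(xi(k, d0) * np.pi * k / (2 * d0))  # -> 1 as |k| grows (the 1/|k| tail)
```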
In Ref. the inverse localization length for an electron with energy $`E`$ moving in a 1D random potential has been expressed in terms of the trace of the Green function. Applied to the discrete Schrödinger equation
$$\psi _{n+1}+\psi _{n-1}=(E+ϵ_n)\psi _n$$
(7)
with a weak potential $`ϵ_n`$, the Thouless formula looks in the second-order approximation like
$$l^{-1}(E)=\frac{1}{2N}\mathrm{Re}\,\mathrm{Tr}\left[G^{(0)}(E+i0)ϵG^{(0)}(E+i0)ϵ\right].$$
(8)
Here $`N`$ is the number of sites, $`G^{(0)}(E)`$ is the unperturbed Green function of the equation (7), and $`ϵ`$ is the diagonal matrix of the random potential.
We employ the spectral representation of the Green function $`G^{(0)}(E)`$
$$G_{n^{\prime }n}^{(0)}=\frac{1}{N}\sum _q\frac{e^{i\mu _q(n-n^{\prime })}}{E-E_q}$$
(9)
to calculate the trace on the r.h.s. of Eq. (8). Here
$$E_q=2\mathrm{cos}\mu _q;\qquad \mu _q=\frac{\pi }{N}q;\qquad -N<q<N$$
(10)
and the periodic boundary conditions are implied. Substitution in eq. (8) gives
$$l^{-1}(E)=\lim _{N\to \infty }\frac{1}{2N}\mathrm{Re}\sum _{q,q^{\prime }}\left[\frac{1}{N^2}\sum _{n,n^{\prime }}e^{i(\mu _q-\mu _{q^{\prime }})(n-n^{\prime })}\langle ϵ_nϵ_{n^{\prime }}\rangle \right]\frac{1}{(E+i0-E_{q^{\prime }})(E+i0-E_q)}.$$
(11)
In the limit $`N\to \infty `$ the summation over $`q`$ and $`q^{\prime }`$ can be replaced by integration over $`\mu _q`$ and $`\mu _{q^{\prime }}`$. After the change of variables $`p=\mu _q-\mu _{q^{\prime }}`$, $`k=n-n^{\prime }`$ the integrals and one of the sums can be easily calculated. As a result, the inverse localization length $`l^{-1}(E)`$ at the energy $`E=2\mathrm{cos}\mu `$
$$l^{-1}=\frac{\langle ϵ_0^2\rangle }{8\mathrm{sin}^2\mu }\varphi (\mu )$$
(12)
turns out to be proportional to the Fourier component
$$\varphi (\mu )=\sum _{k=-\infty }^{\infty }\xi (k)\mathrm{exp}(2i\mu k)=\lim _{N\to \infty }\frac{1}{N\langle ϵ_0^2\rangle }\left\langle \left|\sum _{n=1}^{N}ϵ_n\mathrm{exp}(2i\mu n)\right|^2\right\rangle $$
(13)
of the normalized pair correlator $`\xi (k)=\langle ϵ_nϵ_{n+k}\rangle /\langle ϵ_0^2\rangle `$. One can see that the inverse localization length is in essence the mean value of the squared “Fourier component” of the potential; in other words, the localization length is the mean free path for backward scattering (see also ).
The Fourier transform of the correlator (5) can be calculated analytically in two limiting cases. If $`d_0\ll 1`$ (in units of the lattice constant), the elliptic integral $`\mathbf{E}(1)=1`$ and
$$\varphi (\mu )=1-\frac{4d_0}{\pi }\mathrm{ln}|2\mathrm{sin}\mu |.$$
(14)
It is less than one in the part $`-\sqrt{3}<E<\sqrt{3}`$ of the total energy band $`-2<E=2\mathrm{cos}\mu <2`$ and is minimal, $`\varphi _{min}\approx 1-0.9d_0`$, at the center of the spectrum $`E=0`$, $`\mu =\pi /2`$. The quantity $`\varphi _{min}`$ reaches zero, which according to would mean delocalization, when $`d_0\approx 1.1`$. However, eq. (14) is no longer entirely valid for such values of $`d_0`$, and further terms in the expansion over $`d_0`$ must be taken into account. In the opposite case of large $`d_0`$, the transition to the limit $`d_0\to \infty `$ may be taken before summing for all $`\mu \ne 0,\pi `$, since the resulting sum is cut off by oscillations. Consecutive summation then gives $`\varphi (\mu \ne 0,\pi )=0`$.
For arbitrary finite values of $`d_0`$ the Fourier components $`\varphi (\mu )`$ remain positive. However, they may be rather small within some energy interval. In Fig. 1 we show how $`\varphi (\pi /2)`$ decreases with growing $`d_0`$. It is seen that at this point (and, obviously, in some vicinity of this point) $`\varphi (\mu )`$ becomes very small already for values $`d_0\approx 3`$ comparable with the lattice constant. In such a situation the deviations from the tight-binding model and higher-order corrections in $`ϵ_n`$ may give important contributions and deserve a special consideration.
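The behavior of $`\varphi (\pi /2)`$ shown in Fig. 1 can be reproduced by direct summation of Eq. (13) with the correlator (5). A hedged sketch follows (the summation cutoff `kmax` is our own choice; the slowly decaying $`1/|k|`$ tail makes the sum converge only conditionally at the band center):

```python
# phi(mu) of Eq. (13) by direct summation of the correlator xi(k), Eq. (5),
# compared with the small-d0 expansion (14) at the band center mu = pi/2.
import numpy as np
from scipy.special import ellipe

def xi(k, d0):
    m = k**2 / (k**2 + d0**2)
    return (2.0 / np.pi) * d0 / np.sqrt(k**2 + d0**2) * ellipe(m)

def phi(mu, d0, kmax=200_000):
    k = np.arange(1, kmax + 1, dtype=float)
    # xi(-k) = xi(k), so the two-sided sum is 1 + 2 * sum_{k>0} xi(k) cos(2 mu k)
    return 1.0 + 2.0 * np.sum(xi(k, d0) * np.cos(2.0 * mu * k))

for d0 in (0.1, 0.5, 1.0, 3.0):
    approx = 1.0 - (4.0 * d0 / np.pi) * np.log(2.0)  # Eq. (14) at mu = pi/2
    print(f"d0 = {d0}: phi = {phi(np.pi / 2, d0):.4f}, Eq. (14) gives {approx:.4f}")
```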
The realistic value of $`d_0`$ can hardly be much larger than the lattice constant. Actually, it is the typical minimal distance from the impurities to the chain. Besides, any short-range fluctuations increase the variance $`\langle ϵ_0^2\rangle `$ and reduce the normalized long-range correlator $`\xi (k)`$. Thus, we conclude that the “natural” correlations in a random potential can hardly destroy Anderson localization in the 1D case. However, these correlations can significantly influence the value of the localization length. It is well known that in 2D systems the localization length is very sensitive to the parameters of the problem. Therefore, in the 2D case the correlations due to the long-range character of the dipole field may be even more important than in the 1D case.
Acknowledgments. V.V. Flambaum acknowledges the support from Australian Research Council. He is grateful to B. Altshuler, F. Izrailev and A. Krokhin for discussions and to V.Zelevinsky for valuable comments and hospitality during the stay in MSU Cyclotron laboratory when this work was done. V.V. Sokolov is indebted to V.F. Dmitriev, I.M. Izrailev and V.B. Telitsin for helpful discussions and advices.
# Effects of Neutron Spatial Distributions on Atomic Parity Nonconservation in Cesium.
## Abstract
We have examined modifications to the nuclear weak charge due to small differences between the spatial distributions of neutrons and protons in the Cs nucleus. We derive approximate formulae to estimate the value and uncertainty of this modification based only on nuclear rms neutron and proton radii. Present uncertainties in neutron distributions in Cs are difficult to quantify, but we conclude that they should not be neglected when using atomic parity nonconservation experiments as a means to test the Standard Model.
preprint: CU Preprint NPL-1163
Recent measurements of transition polarizabilities , coupled with previous measurements of parity nonconservation (PNC) in atomic cesium have significantly reduced uncertainties associated with the extraction of $`Q_w`$, the radiatively corrected weak charge of the Cs nucleus. The latest result, $`Q_w^{\mathrm{expt}}=-72.06(28)_{\mathrm{expt}}(34)_{\mathrm{atomic}\mathrm{theory}}`$, is in mild disagreement, at the $`2.5\sigma `$ level, with the Standard Model prediction of $`Q_w^{\mathrm{St}.\mathrm{Mod}.}=-73.20(13)_{\mathrm{theory}}`$. The experimental number requires input from atomic theory calculations which include effects of normalization of the relevant axial electron transition matrix element in the vicinity of the nucleus. The finite nuclear size is incorporated by including $`\rho _N(r)`$, the spatial nuclear distribution, in the matrix elements. One possible contribution to $`Q_w`$ which has been left out of the quoted numbers is the modification of the extracted weak charge due to the difference between neutron and proton spatial distributions in this nucleus with relatively large neutron excess.
The effect of the neutron distribution differing from the proton distribution in a nucleus has been explicitly considered in the atomic theory calculations, and was dismissed because the estimated size was extremely small compared to existing uncertainties at the time. Other authors have also derived and discussed this contribution further. In the case of Cs, all authors agree the effect is quite small. However, with the significant reduction in errors in recent atomic PNC measurements, the effect should no longer be neglected. As we argue below, the additional uncertainties in extracting $`Q_w`$ from the data arising from neutron-proton distribution differences are slightly below the uncertainties arising from atomic theory calculations or current experimental error bars, but are comparable to Standard Model radiative correction uncertainties.
In this note, we attempt to quantify the additive contribution and uncertainties to the nuclear weak charge, $`Q_w`$, arising from the relatively poorly known spatial distribution of neutrons in the nucleus, $`\rho _n(r)`$. We briefly summarize some relevant nuclear structure issues, both theoretical and experimental. We also briefly discuss methods that could improve this knowledge. We present results of our numerical calculations of $`Q_w`$ arising from various $`\rho _n`$ distributions, and present approximate methods which show what effect differing nuclear structure model predictions would have on precision Standard Model tests.
At tree level in the Standard Model, the nuclear weak charge is $`Q_w^{\mathrm{St}.\mathrm{Mod}}=(1-4\mathrm{sin}^2\theta _W)Z-N`$, with N and Z the neutron and proton number, and $`\mathrm{sin}^2\theta _W`$ the weak mixing angle. Standard Model radiative corrections modify this formula slightly. The effect of finite nuclear extent is to modify N and Z to $`q_nN`$ and $`q_pZ`$ respectively, where
$$q_{n(p)}=\int f(r)\rho _{n(p)}(r)d^3r.$$
(1)
Here $`f(r)`$ is a folding function determined from the radial dependence of the electron axial transition matrix element inside the nucleus, and the neutron (proton) spatial distribution $`\rho _{n(p)}`$ is normalized to unity. It is common to characterize the neutron distribution by its rms value, $`R_n`$, since it can easily be shown that the weak charge is most sensitive to this moment. To the extent that $`\rho _n`$ and $`\rho _p`$ are the same, the overall nuclear size effect can be completely factored out. This has explicitly been done in the experimental extraction of $`Q_w`$. The slight difference between $`q_n`$ and $`q_p`$ has the effect of modifying the effective weak charge:
$$Q_w=Q_w^{\mathrm{St}.\mathrm{Mod}}+\mathrm{\Delta }Q_w^{np},$$
(2)
where <sup>*</sup><sup>*</sup>*There will be additional small multiplicative corrections to $`\mathrm{\Delta }Q_w^{np}`$ arising from Standard Model radiative corrections, as well as additive corrections arising from e.g. internal structure of the nucleon, but these can be safely neglected since $`\mathrm{\Delta }Q_w^{np}`$ is itself so small.
$$\mathrm{\Delta }Q_w^{np}=N(1-q_n/q_p).$$
(3)
A naive calculation, helpful for quick estimates of the effect of different possible neutron distributions on $`\mathrm{\Delta }Q_w^{np}`$, can be made by assuming a uniform nuclear charge distribution (zero-temperature Fermi gas), and then parameterizing the neutron distribution solely by its value of $`R_n`$. In this approximation, one solves the Dirac equation for the electron axial matrix elements, $`f(r)`$, near the origin by expanding in powers of $`\alpha `$ (the fine structure constant). Finally, we can assume $`R_n\approx R_p`$, characterizing the difference by a single small parameter, $`(R_n^2/R_p^2)\equiv 1+ϵ`$. In this case, we find
$`q_p`$ $`\approx 1-(Z\alpha )^2(.26),`$ (4)
$`q_n`$ $`\approx 1-(Z\alpha )^2(.26+.221ϵ),`$ (5)
$`\mathrm{\Delta }Q_w^{np}`$ $`\approx N(Z\alpha )^2(.221ϵ)/q_p.`$ (6)
Eq. 6 shows the rough dependence of the correction to the weak charge on the difference between neutron and proton distributions, characterized by $`ϵ`$. Results of this naive calculation are shown as the solid line in Fig. 1. The slope of the line demonstrates the sensitivity of the uncertainty in $`\mathrm{\Delta }Q_w^{np}`$ to the uncertainty in rms neutron radius. The range in $`ϵ`$ of $`\pm 0.1`$ corresponds to a $`\delta R_n/R_n`$ of about $`5\%`$, which we argue below might be a reasonable estimate of the uncertainty in neutron rms radius. We do not need to rely on these approximations; we have solved the Dirac equation numerically for s- and p-state electron wave functions given the experimental charge distribution of Cs, evaluated $`f(r)`$ numerically, and thus calculated $`q_n`$, $`q_p`$, and $`\mathrm{\Delta }Q_w^{np}`$ given various model predictions for the neutron distribution. The diamonds in Fig. 1 correspond to the full calculation assuming neutron distributions with the same shape as the proton distribution, scaled to give the particular values of $`ϵ`$. The above approximations prove to be accurate, although the resulting uncertainty in $`\mathrm{\Delta }Q_w^{np}`$ is marginally underestimated by only including the uncertainty in $`R_n`$. The additional effects of neutron distribution shape variations could slightly increase the uncertainty in the nuclear contribution to the weak charge, as demonstrated by the error bars on the diamonds in Fig. 1. These error bars arise by assuming a 2-parameter Fermi fit for the neutron distribution, $`\rho _n(r)\propto 1/(1+e^{(r-c)/z})`$, and allowing the “skin thickness” parameter $`z_n`$ to vary by $`\pm 0.1`$, keeping $`ϵ`$ fixed. Such a variation is comparable to the difference $`z_n-z_p`$ in various nuclear models. (A more detailed analysis of the effect of neutron shape on $`Q_w`$ will be presented elsewhere .) From Fig. 1, it is clear that the uncertainty in the radius $`R_n`$ dominates the uncertainty in $`\mathrm{\Delta }Q_w^{np}`$.
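Eq. 6 is simple enough to check directly. Below is a minimal sketch, assuming the standard values $`Z=55`$, $`N=78`$ for <sup>133</sup>Cs, $`\alpha \approx 1/137.036`$, and a charge radius $`R_p\approx 4.8`$ fm (the last value is our own input, consistent with the $`\delta R_n^2=1\ \mathrm{fm}^2`$ discussion below):

```python
# Naive estimate of Eqs. (4)-(6) for 133Cs (Z = 55, N = 78).
Z, N, alpha = 55, 78, 1.0 / 137.036

def delta_Qw(eps):
    """Eq. (6): weak-charge shift for (R_n/R_p)^2 = 1 + eps."""
    za2 = (Z * alpha) ** 2
    q_p = 1.0 - za2 * 0.26               # Eq. (4)
    return N * za2 * 0.221 * eps / q_p   # Eq. (6)

# Average of the SkM* and SkIII values quoted below, R_n/R_p = 1.019:
print(delta_Qw(1.019**2 - 1.0))          # -> ~ +0.11
# delta(R_n^2) = 1 fm^2 with R_p ~ 4.8 fm, i.e. eps ~ 1/23:
print(delta_Qw(1.0 / 4.8**2))            # -> ~ +0.13, the quoted uncertainty
```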
The effect of $`\mathrm{\Delta }Q_w^{np}`$ was understood and estimated in the atomic structure calculations by first assuming $`\rho _n(r)=\rho _p(r)`$ (so $`q_n=q_p`$ and $`\mathrm{\Delta }Q_w^{np}=0`$) and then recalculating with a theoretical parameterization of the neutron density. The resulting $`\mathrm{\Delta }Q_w^{np}\approx 0.06`$ was extremely small, amounting to about 0.08$`\%`$ of the total weak charge, and was thereafter ignored. This neutron density was obtained by scaling a variational extended spherical Thomas-Fermi calculation using an effective parameterization of the nuclear Lagrangian (“Skyrme SkM\*”), which happened to yield a nuclear neutron rms radius which differed by only 0.9$`\%`$ from the proton rms radius. Eq. 6 confirms the size of this shift, given only the rms neutron and proton radii. However, the assumed $`\rho _n(r)`$ distribution may not be an accurate representation of the correct neutron distribution. There exists both theoretical and experimental evidence that $`R_n`$ might differ from $`R_p`$ by significantly more than 0.9$`\%`$, and thus from Eq. 6, $`\mathrm{\Delta }Q_w^{np}`$ may be similarly underestimated.
In a more recent theoretical analysis, Chen and Vogel considered two more sophisticated nuclear structure models. Both models involved a Skyrme parameterized nuclear Lagrangian, computed in the spherical Hartree-Fock (HF) approximation. Such models are quite successful in predicting a wide variety of nuclear observables, including charge distributions, binding energies, bulk properties, etc. These two models (SkM\* and SkIII) yielded $`R_n/R_p`$ values of 1.022 and 1.016 respectively. Using the average of these values in Eq. 6 we obtain $`\mathrm{\Delta }Q_w^{np}=+.11`$, double the estimate of Ref. . Using spatial distributions of neutron and proton densities from even more recent nuclear structure models , we have calculated the nuclear correction directly, rather than using the approximation of Eq. 6. Using spherical Skyrme SLy4 distributions, we find $`\mathrm{\Delta }Q_w^{np}=+.14`$. Similarly, using (spherical) Gogny distributions, including blocking, we find $`\mathrm{\Delta }Q_w^{np}=+.11`$ (Eq. 6 for these two cases predicts +.15 and +.12 respectively). Relativistic potentials typically generate significantly larger neutron radii (see discussion below), and thus would predict larger $`\mathrm{\Delta }Q_w^{np}`$, possibly by a factor of 2 or more, based on calculations in nearby nuclei, but no <sup>133</sup>Cs distributions for such models have been published to date. Note that if $`R_n>R_p`$, then $`\mathrm{\Delta }Q_w^{np}`$ is positive. The central value of the most recent experiment gives $`Q_w=-72.06`$, compared to $`Q_w^{\mathrm{St}.\mathrm{Mod}}=-73.20`$, so this nuclear correction is of the right sign to partially explain the small discrepancy. However, if one wanted to attribute the difference entirely to nuclear physics effects, one would require $`R_n=(1.18\pm .07)R_p`$ (adding all atomic experimental and theoretical, and Standard Model theoretical errors in quadrature), which is significantly out of the range of any theoretical or experimental nuclear structure predictions.
The fundamental question regarding nuclear structure remains — what uncertainty should be associated with $`\mathrm{\Delta }Q_w^{np}`$? Chen and Vogel argued that a reasonable uncertainty in their calculated neutron radius might be $`\delta R_n^2\approx \pm 1\ \mathrm{fm}^2`$. According to Eq. 6, this corresponds to an uncertainty $`\delta \mathrm{\Delta }Q_w^{np}=\pm 0.13`$. The estimate in ref. for the theoretical uncertainty in $`Q_w`$ for a single isotope was slightly larger, 0.25$`\%`$ of $`Q_w`$, i.e. $`\pm 0.18`$. This is still quite small compared to the current atomic structure uncertainty ($`\pm 0.34`$ in $`Q_w`$), but is as large as the uncertainty in $`Q_w`$ arising from uncertainties in Standard Model radiative corrections (see Fig. 1). All of the models we have considered predict the charge radius in Cs within about 1$`\%`$, but the parameter fits used to determine the Skyrme potentials are based in part on observables, including charge radii, in nearby semi-magic even-even nuclei. There remain various possible sources of concern that a value of $`\delta R_n^2\approx \pm 1\ \mathrm{fm}^2`$ may still be an underestimate. For example, <sup>133</sup>Cs is a deformed, odd-Z nucleus. Most nuclear structure calculations for large nuclei assume spherical symmetry with at least partially closed nuclear subshells. Pairing and blocking effects make calculations with odd N or Z less reliable, as evidenced by the failure of most Skyrme HF calculations to reproduce the experimentally observed “even-odd” staggering of the charge radius along isotope chains. In reference , pairing effects were included, but deformation was included only in a semi-phenomenological manner.
There exist other classes of nuclear structure models which give quite different predictions for neutron properties, for example, relativistic Hartree models based on a modified Walecka-model nuclear Lagrangian. These models have seen significant improvements in recent years, and may now be viewed as competitive with more established Skyrme models in terms of their predictive power over a wide variety of observables throughout the periodic table. In a recent paper comparing models, $`R_n^2/R_p^2`$ for <sup>138</sup>Ba (the nearest even-even semi-magic nucleus above Cs) ranged from 1.03 in a Skyrme model to more than 1.08 in the relativistic models. The difference in predicted $`R_n^2`$ between these two models alone exceeds 1 fm<sup>2</sup>. For <sup>136</sup>Xe, $`R_n^2/R_p^2`$ values vary from 1.04 to 1.09, with the predicted $`R_n^2`$ differing by well over 1 fm<sup>2</sup>. In another recent paper comparing models, the predictions for $`R_n^2`$ in <sup>124</sup>Sn (with a value of N/Z similar to <sup>133</sup>Cs) varied by more than 2 fm<sup>2</sup> between extreme models, a spread of over 8$`\%`$. Again, these calculations are primarily for even-even nuclei; relativistic models have not yet been used to calculate self-consistently in the neighboring unpaired (odd Z) cases. This only adds to the uncertainty in the prediction of a model spread for the case of Cs. Based on these spreads, it appears that current nuclear theory yields an uncertainty of at least 4 or 5% in $`R_n`$.
The uncertainties in neutron distributions discussed so far arise from disagreements between model predictions. It is important to note that the neutron rms radius has never been directly measured in any isotope of Cs. Indeed, it is extremely difficult to measure $`R_n`$ in any nucleus - the most accurate measurements of charge radii come from electromagnetic interactions, which are dominated by the proton distribution. Elastic magnetic scattering is affected mostly by unpaired (valence) nucleons, which does not allow for a detailed or accurate measure of the bulk rms neutron radius. Data from strong interaction probes measure the “matter radius”, but are somewhat more sensitive to surface effects, and suffer from some poorly controlled systematic theoretical uncertainties arising from the models required in analyzing strong interaction observables. For example, there exist data from polarized proton elastic scattering on heavy nuclei. The data are statistically of high quality, and are frequently viewed as an accurate experimental measure of $`R_n`$ in several heavy nuclei, including Sn and Pb. However, the systematic uncertainties in extracting $`R_n`$, including choice of optical model and spurious variations in the result as a function of experimental beam energy, easily approach 5$`\%`$ or more. Other data, including pion or alpha scattering, suffer from similar uncertainties. The experimentally extracted average value from polarized proton scattering and pionic atoms for $`R_n`$ in <sup>208</sup>Pb differ by around 3$`\%`$.
Even if strong interaction measurements can be argued to provide an accurate measure of the neutron rms radius, the weak interaction is sensitive to the spatial distribution of weak charge, which can not be exactly identified with neutrons or protons, but also includes effects of other nuclear degrees of freedom including e.g. meson exchange, and is more sensitive to non-surface density variations. A parity violating electron scattering experiment could directly measure the weak charge distribution, precisely what is needed for the interpretation of atomic PNC as a standard model test, and would be of clear value. Even if measured on another nucleus, the additional constraint on nuclear models should increase confidence in the predicted neutron distribution in Cs. As can be seen from Eq. 6, high precision atomic PNC measurements on significantly higher Z nuclei are more sensitive to the neutron distribution than in the case of Cs. Thus, a measurement of atomic PNC on extremely heavy nuclei might also be used as a measure of the neutron distribution, which in turn could be used to constrain the isovector parameters in the nuclear models, and thus increase the reliability of the predictions for Cs.
To summarize, $`\mathrm{\Delta }Q_w^{np}`$ is the deviation between the experimentally extracted weak charge and Standard Model predictions due solely to differences in neutron and proton weak charge spatial distributions. The predicted value is small, typically of order $`0.1`$, but with an uncertainty larger than the value itself, arising mostly from uncertainties in $`R_n`$. This should be compared to the nominal value of the weak charge, $`Q_w^{\mathrm{St}.\mathrm{Mod}}=-73.20(13)`$. The effect of uncertain nuclear structure is thus comparable to the present uncertainties involved in the Standard Model prediction, and for $`R_n>R_p`$ is of the right sign to partially explain the experimental discrepancy in Cs. For these reasons, it should be included in any future atomic PNC tests of the Standard Model. With any significant further reduction in the uncertainties in atomic theory calculations, this nuclear contribution may eventually limit the level at which Standard Model tests can be performed with atomic PNC on Cs. To reliably reduce this uncertainty would require additional direct experimental input on neutron distributions, most likely from parity violating electron scattering at low momentum transfer, e.g. at a facility such as Jefferson Lab.
###### Acknowledgements.
We are grateful to Jacek Dobaczewski and J. Decharge for providing neutron and proton densities in Cs, calculated with Skyrme and Gogny potentials. Our work was supported in part under U.S. Department of Energy contract #DE-FG03-93ER40774.
# Spectroscopy through the change of undulator parameters in dipole regime
M.L. Schinkeev
Tomsk Polytechnic University
Abstract
In this work a method of spectroscopy without monochromators for an undulator radiation (UR) source is proposed. The method is based on changing the magnetic field in the undulator. Different variants of field modulation and the corresponding object reactions for the case of the dipole regime of UR excitation are considered. The results of a numerical experiment are shown, and the possibilities of this method are estimated for an undulator consisting of two blocks, tuned by changing the distance between the blocks.
1. Introduction
In recent works \[1-3\] it has been proposed to use undulator radiation for spectroscopy without monochromators. According to these proposals the UR, without an intermediate converter, falls on the spectroscopic object, where the spectral density of the UR flux is multiplied by the spectral sensitivity of the detector at every wavelength $`\lambda `$ and then summed. This summation gives the reaction of the object to the UR flux integrally over all wavelengths. Since the UR spectrum can be changed (by changing the particle energy or the undulator field), the reaction of the object to the tunable radiation source can be defined through a Fredholm integral equation of the first kind:
$$I(P)=\int _0^{\infty }K(P,\lambda )A(\lambda )d\lambda ,$$
(1)
where $`I(P)`$ is the object reaction to the UR flux, $`K(P,\lambda )`$ is the kernel of equation (1) — the spectral density of the radiation flux, $`A(\lambda )`$ is the spectral function of the radiation acceptor, and $`P`$ is the tuning parameter. The determination of the object function $`A(\lambda )`$ from the measured reaction $`I(P)`$ reduces to the solution of the integral equation with the known kernel $`K(P,\lambda )`$.
As was shown in Ref. , the simplest way to solve this equation is to use the energy of the accelerated particles as the tuning parameter. This method was already used in Refs. . However, an active change of the particle energy is effectively possible only at accelerators and is rather problematic at big storage rings, towards which the majority of users are now oriented. Spectroscopy without monochromators therefore attracts interest when the spectral properties of the source can be changed not through the particle energy, but by changing the undulator field.
2. Basic relations
Since spectroscopy without monochromators produces no loss of source intensity on an intermediate converter, it is suitable to use an undulator in the dipole regime of UR excitation, for which the dipole parameter $`k^2\ll 1`$. By this means, spectroscopy without monochromators is confined to changes in the UR field structure (not taking into consideration changes of $`k^2`$); this gives, however, a simpler possibility for the numerical realization of the algorithm of solution of Eq. (1). The spectral density of the photon flux in an arbitrary spectral interval $`d\overline{\omega }=d\omega /\omega `$ for a radiating charge in the dipole approximation, according to Refs. , can be written as:
$$\frac{d\mathrm{\Phi }}{d\overline{\omega }}=\frac{8\pi \alpha }{e}\left(\frac{\mu _0e}{\pi mc}\right)^2J\eta \int _\eta ^{\infty }H^2(\nu )\left(1-2\frac{\eta }{\nu }+2\frac{\eta ^2}{\nu ^2}\right)\frac{d\nu }{\nu ^2},$$
(2)
where $`J`$ is the magnitude of the current of the accelerated charge in the storage ring, $`\alpha `$ is the fine structure constant, $`e`$ is the electron charge, $`m`$ is the electron mass, $`\mu _0`$ is the magnetic permeability of vacuum, $`\eta =(\lambda _0/(2\lambda \gamma ^2))(1+k^2)`$ is the number of the UR harmonic; $`\gamma `$ is the Lorentz factor, $`\lambda _0`$ is the period length of the undulator magnetic field, and $`H^2(\nu )`$ is the square modulus of the Fourier-structure of the UR magnetic field.
As was shown in Ref. , an undulator consisting of periodic blocks is suitable for spectroscopy without monochromators. The Fourier structure of its field can be written as follows:
$$H(\nu )=G(\nu )\mathrm{\Psi }(\nu )S(\nu ),$$
(3)
where $`G(\nu )`$ is the Fourier structure of a standard undulator element (UE) (i.e. an element from which an undulator half-period is formed), $`\mathrm{\Psi }(\nu )`$ is the Fourier structure of an undulator block, as a set of UE, and $`S(\nu )`$ is the Fourier structure of the undulator, as a set of blocks. For an ironless electromagnetic undulator with a winding of optimal profile section , $`G(\nu )`$ is:
$$G(\nu )=\frac{j\lambda _0^2}{2}\frac{e^{-2\pi \nu h/\lambda _0}}{(\pi \nu )^2}\left(\mathrm{cos}\left(\frac{\pi \nu }{4}\right)-\mathrm{cos}\left(\frac{\pi \nu }{2}\right)-\frac{\pi \nu }{4}\mathrm{sin}\left(\frac{\pi \nu }{4}\right)\right),$$
(4)
where $`j`$ is the current density in the UE winding section and $`2h`$ is the magnetic gap. For $`n`$-fold balanced charge motion in the undulator block (this corresponds to the vanishing of the $`n`$-fold integral of the UR field along the undulator block), with period number $`N`$, the Fourier structure is :
$$\mathrm{\Psi }(\nu )=\left(2\mathrm{cos}\left(\frac{\pi }{2}(1+\nu )\right)\right)^n\frac{\mathrm{sin}\left(\frac{\pi }{2}(1+\nu )(2N-n)\right)}{\mathrm{sin}\left(\frac{\pi }{2}(1+\nu )\right)}.$$
(5)
If all $`M`$ undulator blocks are equal and the distances between neighbouring blocks are $`l`$, then for $`S(\nu )`$ one can derive from Ref. :
$$S(\nu )=2\mathrm{cos}\left(\frac{\pi }{2}(\delta +\nu (2N+L))\right)\frac{\mathrm{sin}\left({\displaystyle \frac{\pi }{2}}(\delta +\nu (2N+L))M\right)}{\mathrm{sin}\left({\displaystyle \frac{\pi }{2}}(\delta +\nu (2N+L))\right)},$$
(6)
where $`L=2l/\lambda _0`$, i.e. it measures the distance between the blocks in units of half-periods, and the parameter $`\delta `$ defines the phasing of the blocks: the value $`\delta =0`$ corresponds to switching on in phase; the value $`\delta =1`$ corresponds to switching on in the opposite phase.
As was shown in Ref. , for the purpose of monochromator-free spectroscopy the most suitable kernel of Eq. (1) is the integral spectrum taken as the difference of the integral spectra corresponding to the undulator states with in-phase and opposite-phase switching on (hereafter, the differential UR kernel). Since for such a kernel the low-frequency range of the spectrum is suppressed, and accordingly the contributions of “off-axis” particles to the radiation are also suppressed, the $`\sigma `$-component polarization appears. In what follows, $`S(\nu )`$ denotes the expression corresponding to the difference of the two phase states.
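Eqs. (5) and (6) are straightforward to tabulate. Below is a sketch of the differential combination entering the kernel, $`\mathrm{\Psi }^2(\nu )\left[S^2(\nu )|_{\delta =1}-S^2(\nu )|_{\delta =0}\right]`$ (variable names and the grid are ours; the single-element factor $`G(\nu )`$ of Eq. (4) is omitted here, since it varies slowly over one harmonic):

```python
# Block factor (Eq. 5), lattice-of-blocks factor (Eq. 6) and the
# "differential" combination Psi^2 * (S^2|_{delta=1} - S^2|_{delta=0}).
import numpy as np

def Psi(nu, N, n):
    """Fourier structure of one n-fold balanced block, Eq. (5)."""
    x = 0.5 * np.pi * (1.0 + nu)
    return (2.0 * np.cos(x)) ** n * np.sin(x * (2 * N - n)) / np.sin(x)

def S(nu, N, L, M, delta):
    """Fourier structure of M equal blocks a distance L apart, Eq. (6)."""
    y = 0.5 * np.pi * (delta + nu * (2 * N + L))
    return 2.0 * np.cos(y) * np.sin(y * M) / np.sin(y)

N, n, M, L = 2, 3, 2, 10                 # two 4-element blocks (n = 3), as below
# a tiny offset keeps the grid off the removable zeros of the denominators
nu = np.linspace(0.5, 1.5, 2001) + 1e-9  # around the first harmonic
kernel = Psi(nu, N, n) ** 2 * (S(nu, N, L, M, 1) ** 2 - S(nu, N, L, M, 0) ** 2)
```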
3. Object reaction on UR flux
Undulators for obtaining arbitrary UR are described in detail in the literature. For a multiblock undulator as described above, it is possible to modulate the properties of the UR flux either by changing the geometrical parameters of the UE, by changing the contributions of the separate elements to the undulator, or by changing the location of the blocks.
If the field is modulated by changing the geometrical parameters of the UE, then the change most easily realized technically is that of the magnetic gap. However, if we consider the dipole regime of UR excitation, then changing $`h`$ over the corresponding interval does not essentially change the form of the spectrum. This is clearly seen if one considers the integral UR spectrum according to Eq. (2) in the first UR harmonic approximation. In this case (2) becomes
$$\frac{d\mathrm{\Phi }}{d\overline{\omega }}=CG^2(h)\eta \int _\eta ^{\infty }\mathrm{\Psi }^2(\nu )S^2(\nu )\left(1-2\frac{\eta }{\nu }+2\frac{\eta ^2}{\nu ^2}\right)\frac{d\nu }{\nu ^2}$$
(7)
($`C=`$ const.) and the integral operator $`K(h,\eta (\lambda ))=d\mathrm{\Phi }/d\overline{\omega }`$ is degenerate because it can be written as $`K(h,\eta )=K_1(h)K_2(\eta )`$. If we consider a set of harmonics, for example the first and the third, then changing $`h`$ leads to a rearrangement of the contributions of these harmonics in the overall spectrum.
Changing the contributions of the UE within expression (5) is really possible only for an electromagnetic undulator system, and even then only in discrete steps. Therefore this case is not considered in this work.
So, the simplest variant of UR field modulation is modulation by changing the location of the undulator blocks, i.e., by changing the distance $`L`$ between the blocks. It is obvious that an arbitrary $`L`$ can be realized for $`L>0`$ (by mechanical movement), whereas for $`L<0`$ only discrete values exist (by rearranging the contributions within the undulator).
The reaction of an object with spectral sensitivity $`A(\lambda )`$, under full capture of the UR flux over all wavelengths, as a function of the undulator parameter $`L`$ is defined as follows:
$$I(L)=\int _0^{\infty }A(\lambda )\frac{d\mathrm{\Phi }}{d\overline{\omega }}(\eta (\lambda ),L)\frac{d\lambda }{\lambda }=\int _0^{\infty }A(\eta )\frac{d\mathrm{\Phi }}{d\overline{\omega }}(\eta ,L)\frac{d\eta }{\eta }=\int _0^{\infty }A(\eta )K(\eta ,L)d\eta .$$
(8)
As $`L\to \infty `$, $`I(L)`$ goes asymptotically to 0; that is why the interval of variation of $`L`$ is defined by prescribing a minimal reaction level $`I(L)`$. On the other hand, real limitations on the variation of $`L`$ exist, connected with the available free space in the insertion device. Since the limits of variation of $`L`$ define the method resolution ($`\mathrm{\Delta }\lambda /\lambda \sim 1/L_{max}`$), at a given value of the available space $`L_{max}`$ it is better to use blocks with the minimal $`N`$ that guarantees a meaningful level of $`I(L)`$ over the whole interval, and to improve the radiation characteristics one has to use the maximal balancing degree of the block, $`n=N-1`$. A solution $`A(\eta )`$ of Eq. (8) for the measured reaction $`I(L)`$ and the analytically given equation kernel $`K(\eta ,L)`$ can be found by means of the Tichonov regularization method .
4. Numerical model of the algorithm
Since the resolution $`\mathrm{\Delta }\lambda /\lambda `$ of the method of defining the object spectral function from its reaction (8) is of the order of $`1/L_{max}`$ (where $`L_{max}\gg N`$), and the number of calculation operations grows proportionally to the product $`NL_{max}`$, for a simple realization of the numerical model an undulator consisting of two blocks was used; there were four elements in every block, with a balancing degree of every block $`n=3`$. The corresponding structure of the contributions of the excitation currents of the block elements is written as $`\{1;3;3;1\}`$.
As the spectral sensitivity function of the object, $`A(\eta )`$, a sum of two Gaussians was used:
$$A(\eta )=\sum _{i=1}^{2}e^{-(\eta -\eta _i)^2/\sigma _i^2},$$
(9)
with parameters $`\eta _i=\{0.6;0.9\}`$, $`\sigma _i=\{0.3;0.1\}`$, so that the mapping $`\eta =\lambda _0/(2\lambda \gamma ^2)`$ gives the function:
$$A(\lambda )=\sum _{i=1}^{2}e^{-(1-\lambda _i/\lambda )^2/\overline{\sigma }_i^2},\qquad \overline{\sigma }_i=\sigma _i/\eta _i,$$
(10)
corresponding to two spectral lines with relative widths $`\mathrm{\Delta }\lambda /\lambda `$ equal to 0.35 and 0.08, respectively. (The relatively small values of $`\mathrm{\Delta }\lambda /\lambda `$ correspond to the choice of the undulator parameters made above.)
The kernel of the equation was defined over the interval $`\eta \in (0,2)`$, and the limits of variation of $`L`$ were taken from 0 to 20. The size of the grid approximating the kernel of Eq. (8) was taken equal to $`(200\times 400)`$.
To solve Eq. (8) the Tichonov regularization method was used, with the regularization parameter chosen according to the generalized discrepancy principle . For the minimization of the Tichonov functional the gradient method was used.
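For the inversion step, the following zeroth-order Tikhonov sketch (NumPy only) illustrates the idea; the regularization parameter `lam` is a placeholder of ours, and the original work instead selected it by the generalized discrepancy principle and minimized the functional by a gradient method:

```python
# Zeroth-order Tikhonov inversion of the discretized Eq. (8):
# minimize ||K a - I||^2 + lam ||a||^2, i.e. solve (K^T K + lam Id) a = K^T I,
# here via least squares on the equivalent stacked system.
import numpy as np

def tikhonov(K, I, lam):
    m = K.shape[1]
    A = np.vstack([K, np.sqrt(lam) * np.eye(m)])
    b = np.concatenate([I, np.zeros(m)])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# K: (number of L values) x (number of eta grid points), e.g. 200 x 400;
# I: the measured (or simulated) reaction on the L grid.
# a_reconstructed = tikhonov(K, I, lam=1e-3)
```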
5. Conclusion
From the results of the numerical experiments one can see that the considered algorithm of monochromator-free spectroscopy may be rather promising for insertion devices where an active change of the particle energy is problematic. For a realization of this method a simple undulator is necessary, consisting of two blocks with a rather small period number $`N`$ in every block $`(5-20)`$ but with the opportunity to change the distance between the blocks. The theoretical resolution of this method is inversely proportional to the distance between the blocks, $`L_{max}`$ (in units of half-periods). The object reaction practically disappears at a shift value $`L\approx 10N`$. This value, probably, defines the maximally possible method resolution for a given period number $`N`$ in the block. So, if we consider a period number for each block $`N=20`$ and a period length of 2 cm, then we obtain $`L_{max}\approx 200`$ and accordingly the possible resolution $`\mathrm{\Delta }\lambda /\lambda \sim 1/L_{max}=5\cdot 10^{-3}`$, while the total length of such a two-block system, $`(2N+L_{max})\cdot 2`$, is equal to 480 cm.
References
1. M.M. Nikitin and G. Zimmerer. Use of undulators for spectroscopy without monochromators. – Nucl. Instr. and Meth. A 240 (1985) 188.
2. V.G. Bagrov, A.F. Medvedev, M.M. Nikitin and M.L. Shinkeev. Computer analysis for undulator radiation spectroscopy without monochromators. – Nucl. Instr. and Meth. A 243 (1987) 156.
3. V.F. Zalmeg, A.F. Medvedev, M.L. Shinkeev and V.Ya. Epp. On the construction of block periodic undulators with metrologically pure radiation kernels – Nucl. Instr. and Meth. A 308 (1991) 337.
4. A.N. Tichonov et al. Regularization Algorithms and Information. – Nauka, Moscow, 1983, in Russian.
5. M.M. Nikitin and V.Ya. Epp. Undulator Radiation. – Energoatomizdat, Moscow, 1988, in Russian.
# Conductance of Distorted Carbon Nanotubes
## Abstract
We have calculated the effects of structural distortions of armchair carbon nanotubes on their electrical transport properties. We found that the bending of the nanotubes decreases their transmission function in certain energy ranges and leads to an increased electrical resistance. Electronic structure calculations show that these energy ranges contain localized states with significant $`\sigma `$-$`\pi `$ hybridization resulting from the increased curvature produced by bending. Our calculations of the contact resistance show that the large contact resistances observed for SWNTs are likely due to the weak coupling of the NT to the metal in side bonded NT-metal configurations.
Carbon nanotubes (NTs) can be metallic or semiconducting. They have high mechanical strength and good thermal conductivity , properties that make them potential building blocks of a new, carbon-based, nanoelectronic technology . Conduction in defect-free NTs, especially at low temperatures, can be ballistic, thus involving little energy dissipation within the NT . Furthermore, NTs are expected to behave like quasi one-dimensional systems (Q1D) with quantized electrical resistance, which, for metallic armchair nanotubes at low bias should be about 6 k$`\mathrm{\Omega }`$ ($`h/4e^2`$). The experimentally observed behavior is, however, quite different. The contact resistance of single-wall nanotubes (SWNTs) with metal electrodes is generally quite high. Furthermore, at low temperatures a localization of the wavefunction in the nanotube segment contained between the metal electrodes is observed that leads to Coulomb blockade phenomena . The latter observation suggests that a barrier or bad-gap develops along the NT near its contact with the metal. In an effort to understand the origin of these discrepancies we have used Green’s function techniques to calculate the effect of the modification of the NTs by bending on their electronic structure and electric transport properties. We also investigated the effects of the strength of the NT-metal pad interaction on the value of the contact resistance.
Most discussions on the electronic structure of NTs assume perfect cylindrical symmetry. The introduction of point defects such as vacancies or disorder has been shown to lead to significant modification of their electrical properties. Here we focus on the effects of structural (axial) distortions on the transport properties of armchair NTs. AFM experiments and molecular mechanics simulations have shown that the van der Waals forces between NTs and the substrate on which they are placed can lead to a significant deformation of their structure. To maximize their adhesion energy the NTs tend to follow the topography of the substrate . Thus, for example, NTs bend to follow the curvature of the metal electrodes on which they are deposited. When the strain due to bending exceeds a certain limit, kinks develop in the nanotube structure . It is important to understand how these NT deformations affect the electrical transport properties of the NTs. Could they be responsible for the low temperature localization observed ? Early theoretical work on this issue was based on a tight-binding model involving only the $`\pi `$-electrons of the NTs and accounted for the electronic structure changes induced by bending through the changes in $`\pi `$-orbital overlap at neighboring sites. This study concluded that bending distortions would have a negligible effect on the electrical properties of the NTs . The applicability of this approach is limited to weak distortions. Experiments, however, show that strong deformations and kink formation are common. Under such conditions, bending-induced $`\sigma `$-$`\pi `$ mixing, which was not considered before, becomes very important in strongly bent NTs . In this work, the NT electronic structure is computed using the extended Hückel method (EHM) that includes both $`s`$ and $`p`$ valence electrons. We have previously shown that EHM calculations on an armchair $`(6,6)`$ NT model (96 Å long) reproduce the electronic properties obtained with more sophisticated ab-initio and band structure computations on NTs. The approach we used in the computation of the electrical properties is similar to that of Datta et al. .
The conductance through a molecule or an NT cannot be easily computed; even if the electronic structure of the free molecule or NT is known, the effect of the contacts on it can be substantial and needs to be taken into account. Typically, there will be two (or more) leads connected to the NT. We model the measurement system as shown in Figure 1a. The leads are macroscopic gold pads that are coupled to the ends of the NT through matrix elements between the Au surface atoms and the end carbon atoms of the NT. In most experiments to date the NTs are laid on top of the metal pads. As we discussed above, the NTs then tend to bend around the pads. Such bending deformations are modelled in our calculations by introducing a single bend placed at the center of the tube.
The electrical transport properties of a system can be described in terms of the retarded Green’s function . To evaluate the conductance of the NT we need to compute the transmission function, $`T(E)`$, from one contact to the other. This can be done following the Landauer-Büttiker formalism as described in . The key element of this approach lies in the treatment of the infinite leads which are here described by self-energies. We can write the Green’s function in the form of block matrices separating explicitly the molecular Hamiltonian. After some simplification we obtain:
$$G_{NT}=\left[ES_{NT}-H_{NT}-\mathrm{\Sigma }_1-\mathrm{\Sigma }_2\right]^{-1}$$
(1)
where $`S_{NT}`$ and $`H_{NT}`$ are the overlap and the Hamiltonian matrices, respectively, and $`\mathrm{\Sigma }_{1,2}`$ are self-energy terms that describe the effect of the leads. They have the form $`\tau _i^{\dagger }g_i\tau _i`$ with $`g_i`$ the Green’s function of the individual leads and $`\tau _i`$ a matrix describing the interaction between the NT and the leads. The Hamiltonian and overlap matrices are determined using EHM for the system Gold-NT-Gold. The transmission function, $`T(E)`$, that is obtained from this Green’s function is given by :
$$T(E)=T_{21}=Tr[\mathrm{\Gamma }_2G_{NT}\mathrm{\Gamma }_1G_{NT}^{\dagger }].$$
(2)
In this formula, the matrices have the form:
$$\mathrm{\Gamma }_{1,2}=i(\mathrm{\Sigma }_{1,2}-\mathrm{\Sigma }_{1,2}^{\dagger }).$$
(3)
The summation over all conduction channels in the molecule allows the evaluation of the resistance at the Fermi energy, $`R=h/(2e^2T(E_F))`$. Transport in the presence of an applied potential is also computed. The differential conductance is computed in this case using the approximation :
$$\kappa (V)=\frac{\partial I}{\partial V}\approx \frac{2e^2}{h}[\eta T(\mu _1)+(1-\eta )T(\mu _2)]$$
(4)
with $`\eta \in [0,1]`$ describing the equilibration of the nanotube energy levels with respect to the reservoirs . As a reference, we use the $`E_F`$ obtained from EHM for individual nanotubes as the zero of energy. The NT model used in our calculations is a $`(6,6)`$ carbon nanotube segment containing 948 carbon atoms. The bond distance between carbon atoms in non-deformed regions of the NT is fixed to that in graphite, 1.42 Å, leading to a tube length of 96 Å. The building of deformed NTs using molecular mechanics minimization schemes has been described in detail elsewhere . The structures of the bent NTs are shown in Figure 1b. The metallic contacts each consist of 22 gold atoms in a (111) crystalline arrangement. The height of the NT over the gold layer is 1.0 Å, where the Au-C bond distances vary from 1.1 to 1.6 Å.
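The machinery of Eqs. (1)–(3) can be illustrated on a toy system. The sketch below (ours — a uniform single-orbital tight-binding chain with an orthogonal basis, not the EHM gold–NT–gold calculation of the text) attaches the exact self-energy of semi-infinite 1D leads to both ends; a perfect chain gives $`T(E)=1`$ everywhere inside the band, the single-channel ballistic limit:

```python
# Toy NEGF transmission for an Nc-site tight-binding chain coupled to two
# semi-infinite 1D leads, following Eqs. (1)-(3) with S = identity.
import numpy as np

t = 1.0                                   # hopping, sets the energy scale
Nc = 8                                    # number of "device" sites
H = -t * (np.eye(Nc, k=1) + np.eye(Nc, k=-1))

def transmission(E):
    # Retarded surface Green's function of a semi-infinite chain (|E| < 2t)
    g = (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / (2.0 * t**2)
    sigma = t**2 * g                      # lead self-energy tau^dag g tau
    S1 = np.zeros((Nc, Nc), complex); S1[0, 0] = sigma
    S2 = np.zeros((Nc, Nc), complex); S2[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(Nc) - H - S1 - S2)       # Eq. (1)
    G1 = 1j * (S1 - S1.conj().T)                          # Eq. (3)
    G2 = 1j * (S2 - S2.conj().T)
    return np.trace(G2 @ G @ G1 @ G.conj().T).real        # Eq. (2)

for E in (-1.5, 0.0, 1.0):
    print(E, round(transmission(E), 6))   # -> 1.0 inside the band
```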
In Figure 2 we present the computed transmission function $`T(E)`$ for the bent tubes (note that $`T(E)`$ represents the sum of the transmission probabilities over all contributing NT conduction channels). The upper-right of Figure 2 shows the raw transmission results obtained for the straight NT. The fast oscillations of $`T(E)`$ are due to the discrete energy levels of the finite segment of the carbon nanotubes used. For clarity, we will use smoothed curves in the description of the results. At $`E_F`$, $`T(E)`$ is about 1.2, leading to a resistance ($`\approx `$ 11 k$`\mathrm{\Omega }`$) higher than expected for ballistic transport ($`\approx `$ 6 k$`\mathrm{\Omega }`$ for $`T(E)`$ = 2.0). This reduction in transmission is due to the contribution from the contact resistance. The increasing $`T(E)`$ at higher binding energies is due to the opening of new conduction channels. The asymmetry in the transmission $`T(E)`$ is a function of the NT-pad coupling (C-Au distance). A longer NT-Au distance increases the $`T(E)`$ above $`E_F`$, while it decreases it below $`E_F`$, and vice versa. Since the NT-pad geometry is kept fixed in all computations, this behavior does not influence the effects induced by NT bending.
According to our calculation the contact resistance at $`E_F`$ is only about 5 k$`\mathrm{\Omega }`$, much smaller than the $`\approx `$ 1 M$`\mathrm{\Omega }`$ resistance typically observed in experiments on single-wall NTs . The dependence of $`T(E)`$ and contact resistance at $`E_F`$ on the Au-NT distance is shown in the upper-left of Figure 2. We see that $`T(E_F)`$ remains nearly constant between 1–2 Å, then decreases exponentially. For distances appropriate for van der Waals bonding ($`\approx 3`$ Å) the contact resistance is already in the M$`\mathrm{\Omega }`$ range. The above findings suggest that the high NT contact resistance observed experimentally may, in addition to experimental factors such as the presence of asperities at the metal-NT interface, be due to the topology of the contact. In most experiments, the NT is laid on top of the metal pad. The NT is at nearly the van der Waals distance away from the metal surface, and given that transport in the NT involves high $`k`$-states which decay rapidly perpendicular to the tube axis, the coupling between NT and metal is expected to be weak . Direct chemical bonding between metal and the NT, or interaction of the metal with the NT cap states, should lead to stronger coupling. In this respect, it has been found that high energy electron irradiation of the contacts leads to a drastic reduction of the resistance. Since the irradiation is capable of breaking NT C-C bonds, it may be that the resulting dangling bonds lead to a stronger metal-NT coupling.
The strongest modification of $`T(E)`$ as a result of bending is observed at around $`E`$=-0.6 eV where a transmission dip appears. This dip is strongest in the 60° bent NT. Furthermore, its transmission function at higher binding energies (BE) is lower than those of the 0°–45° bent NTs, indicating that the transmission of higher conduction channels is also decreased. The nature of the dip at about -0.6 eV can be understood by examining the local density-of-states (LDOS) of bent tubes shown in Figure 3 . A change (increase) in the LDOS is seen in the same energy region (0.5–0.8 eV below $`E_F`$) as the transmission dip. This change is essentially localized in the vicinity of the deformed region. The new states result from the mixing of $`\sigma `$ and $`\pi `$ levels and have a more localized character than pure $`\pi `$ states, leading to a reduction of $`T(E)`$. As Figure 3 shows, the change in transmission with bending angle is not gradual; the transmission of the 30° and 45° models is only slightly different from that of the straight tube. Apparently, large changes in DOS and $`T(E)`$ require the formation of a kink in the NT structure, as is the case in the 60° and 90° bent NTs.
Once the transmission function is computed, the determination of the differential conductance and resistance is straightforward. Figure 4 shows the results for two extreme cases of equilibration of the Fermi levels. The first is $`\eta =0`$ (Figure 4a); the second is the symmetric case $`\eta =0.5`$ (Figure 4b). When $`\eta `$=0.0, the Fermi level of the NT follows exactly the applied voltage on one gold pad and the conductance spectrum is directly proportional to $`T(E)`$. As expected, there is no large difference between the 0°, 30° and 45° models, while the 60° and 90° models show the dip structure at around 0.6 V. The non-linear resistance (NLR) spectra clearly show a sharp increase by almost an order of magnitude at 0.6 V. These features are also observed when $`\eta `$=0.5, where the Fermi level of the NT is floating at half the voltage applied between the two gold pads. The dip at around 1.2 V in the conductance spectra is now broader, and the NLR of the 60° bent tube increases by about a factor of 4 from the computed resistance of the straight tube. These results suggest that there exists a critical bending angle (between 45° and 60°) above which the conduction in armchair carbon nanotubes is drastically altered .
In conclusion, we have calculated the effects of structural distortions of armchair carbon nanotubes on their electrical transport properties. We found that bending of the nanotubes decreases their transmission function and leads to an increased electrical resistance. The effect is particularly strong at bending angles higher than 45 degrees when the strain is strong enough to lead to kinks in the nanotube structure. The electronic structure calculations show that the reduction in $`T(E)`$ is correlated with the presence at the same energy of localized states with significant $`\sigma `$-$`\pi `$ hybridization due to the increased curvature produced by bending. Resistance peaks near $`E_F`$ are the likely cause for the experimentally observed low temperature localization in carbon NTs bent over metal electrodes . Our calculations of the resistance (including the contact resistance) of a perfect NT give a value close to $`h/2.4e^2`$ instead of $`h/4e^2`$. This increase in resistance is solely due to the finite transmission of the contacts. The much larger contact resistances observed in many experiments on SWNTs are likely due to the weaker coupling of the NT to the metal when the NT is simply placed on top of the metal electrodes. We predict that NTs end-bonded to metal pads will have contact resistances of only a few k$`\mathrm{\Omega }`$. Such low contact resistances will greatly improve the performance of NT-based devices and unmask the Q1D transport properties of NTs.
# Two-dimensional photonic crystal polarizer
## Abstract
A novel polarizer made from two-dimensional photonic bandgap materials was demonstrated theoretically. This polarizer is fundamentally different from the conventional ones. It can function in a wide frequency range with high performance and the size can be made very compact, which renders it usefully as a micropolarizer in microoptics.
Since the pioneering work of Yablonovitch and John, photonic bandgap (PBG) materials have generated considerable attention from many differents research fields. PBG materials are periodically modulated dielectric composites that have stop bands for electromagnetic (EM) waves over a certain range of frequencies because of multiple Bragg scattering, analogous to the electronic band structures in semiconductors. They represent a new class of materials that are capable of uniquely controlling the flow of EM waves or photons. The unique optical properties of photonic crystals render the fabrication of perfect mirrors, high efficient antenna , thresholdless lasers, novel optical waveguides, microcavities and many other unique optical devices possible .
Polarizer is one of the basic elements in optics. One of the challenges in microoptics and microstructures optics is the fabrication of clever optical elements and devices, such as refractive, diffractive lenses, gratings, and polarizers, with compact sizes and high performance. In this paper we show that two-dimensional (2D) PBG materials can be used to make polarizer.
Basically, there were three kinds of conventional ways to make polarizers by using the properties of absorption, reflection and refraction. Some materials like polaroid have all of their long organic molecules oriented in the same direction. These molecules absorb radiation of one polarization, thus transmitting the orthogonal polarization. The reflection of light from a surface is polarization and angle dependent. At Brewster’s angle, which is material and wavelength dependent, the reflectance of $`p`$-polarized light becomes zero, and the reflected light is completely $`s`$-polarized. Birefringent crystals, calcite is an example, have different indices of refraction for different polarizations of light. Two rays of orthogonal polarizations entering the crystal will be refracted at different angles and therefore separated spatially on leaving the crystal.
The idea to use 2D PBG materials to make polarizer is basically different from the above mentioned ones. It is based on the special properties of 2D PBG crystals. Any unpolarized light can be decomposed into two components: one with electric field parallel to the periodic plane (TE) and the other one with magnetic filed parallel to the periodic plane (TM). In 2D PBG crystals, propagations of the TE and TM polarizations can be decoupled . As a consequence, the TE and TM polarizations have their own band structures and PBGs. The typical band structures of a 2D PBG crystal are shown in Fig. 1, from which some general features of propagation of EM waves can be postulated. The transmission of an EM wave is dependent on its frequency and the band structures of the PBG crystal. In the overlapping region of TE and TM bands both TE and TM waves can transmit. In the overlapping region of TE and TM PBGs the propagations of both TE and TH waves are forbidden. In the overlapping region of the TE (TM) bands and TM (TE) PBG, only TE (TM) wave can transmit due to the fact that TM (TE) wave cannot propagate in the region of its PBG. As a result, the outgoing wave will have only one polarization and is perfectly polarized.
For testing purposes, a 2D PBG crystal consisting of dielectric rods arranged in the square lattice, shown in Fig. 2, is used to make polarizer. The lattice constant of the lattice is $`a`$ and the radius of the rod is 0.25$`a`$. The dielectric constant of rods is $`ϵ=14.0`$. The background is air with $`ϵ_b=1.0`$. The calculated projections of photonic band structures in real space for this 2D square PBG crystal is shown in Fig. 3. The band structures of TE and TM waves are calculated by using the plane wave expansion method. Owing to the introduction of a periodicity in 2D, the wavevector will be limited to $`\pi /a`$. A large PBG is opened for the TM wave in the reduced frequency range from 0.2 to 0.34 due to the spatially periodic modulation of dielectric constants. From the above discussions, for an incident EM wave with reduced frequency ranging from 0.2 to 0.34, the transmission of the TM component is hence forbidden and that of the TE component is allowable. As a result, the outgoing light will have only TE component. The frequency range from 0.2 to 0.34 is the working window if this structure is used to make polarizer.
The performance of a polarizer is conventionally characterized by the degree of polarization and transmittance. The transmission is calculated by the transfer matrix method .
The degree of polarization $`P`$ is defined by
$$P=\frac{\left|I_{\text{TE}}I_{\text{TM}}\right|}{I_{\text{TE}}+I_{\text{TM}}}$$
(1)
where $`I_{\text{TE}}`$ ($`I_{\text{TM}}`$) is the intensity of the outgoing TE (TM) component. For natural light, $`P=0`$, and for completely polarized light, $`P=1`$. The transmittance $`T`$ of a polarizer is defined here as the ratio of the intensity of the TE wave passing through the polarizer to the incident intensity of the TE wave
$$T=\frac{I_{\text{TE}}(\text{out})}{I_{\text{TE}}(\text{in})}$$
(2)
where in and out stand for the incident and outgoing waves, respectively. For a perfect polarizer, $`T=1`$ is expected.
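The two figures of merit translate directly into code. A trivial sketch of Eqs. (1) and (2), with illustrative intensity values of our own choosing:

```python
def degree_of_polarization(I_te, I_tm):
    """Eq. (1): P = |I_TE - I_TM| / (I_TE + I_TM)."""
    return abs(I_te - I_tm) / (I_te + I_tm)

def transmittance(I_te_out, I_te_in):
    """Eq. (2): T = I_TE(out) / I_TE(in); T = 1 for a perfect polarizer."""
    return I_te_out / I_te_in

# Inside the TM gap the TM output is strongly suppressed:
print(degree_of_polarization(0.95, 1e-4))   # ~0.9998, nearly fully polarized
print(transmittance(0.95, 1.0))             # 0.95
```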
The generic 2D PBG polarizer consists of eight layers of dielectric rods along the $`y`$ direction. The incident light is also along the $`y`$ direction. To check the performance of the polarizer, we display the calculated degree of polarization $`P`$ and transmittance $`T`$ in Fig. 4. Within the reduced frequency range from 0.2 to 0.34 the degree of polarization $`P`$ is almost 1, indicating that this polarizer performs excellently in this frequency range. The transmittance of this polarizer is also very large.
This kind of polarizer possesses other virtues. Because of the scale invariance of PBG materials, simply by adjusting the spatial period $`a`$ we can make the polarizer work in any desired frequency range, while the degree of polarization $`P`$ and the transmittance $`T`$ remain the same. Since the window edges scale as $`c/a`$, for $`a=1`$ $`\mu `$m the working frequency window is about 60 to 100 THz; for $`a=10`$ $`\mu `$m, the window is 6 to 10 THz; for $`a=1`$ mm, the window is 60 to 100 GHz. The gap-to-midgap frequency ratio is rather large, close to 50 %.
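The scaling is simple enough to check numerically: the reduced band edges $`a/\lambda [0.20,0.34]`$ map to absolute frequency as $`f=(a/\lambda )c/a`$. A short sketch (the function name is ours):

```python
from scipy.constants import c   # speed of light in m/s

def working_window(a_metres, lo=0.20, hi=0.34):
    """Map the reduced-frequency gap [lo, hi] (in units of c/a) to absolute frequency."""
    return lo * c / a_metres, hi * c / a_metres

for a in (1e-6, 10e-6, 1e-3):
    f1, f2 = working_window(a)
    print(f"a = {a:g} m: {f1:.3g} Hz to {f2:.3g} Hz")
# a = 1 um -> ~60-102 THz; a = 10 um -> ~6-10 THz; a = 1 mm -> ~60-102 GHz
```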
Miniaturization is rather difficult to achieve with conventional polarizers. The 2D PBG polarizer proposed here, however, can be made rather small, which may be of potential use in microoptics. By adjusting the period parameter $`a`$, we can obtain a polarizer working in the desired frequency range. These features are absent in conventional polarizers.
We demonstrate in this paper the possibility of making polarizers from 2D PBG materials. The example given here is a square lattice of 2D dielectric rods. It can easily be realized in the range of millimeter waves and microwaves. It should be noted that structures of air rods in a dielectric background may be more amenable to fabrication at optical and IR wavelengths.
Acknowledgments: We thank W. Lu and H. Chen for interesting discussions. This work was supported by the National Natural Science Foundation of China under Contract No. 69625609.
# Discrete breathers in dc biased Josephson-junction arrays
## Abstract
We propose a method to excite and detect a rotor localized mode (rotobreather) in a Josephson-junction array biased by dc currents. In our numerical studies of the dynamics we have used experimentally realizable parameters and included self-inductances. We have uncovered two families of rotobreathers. Both types are stable under thermal fluctuations and exist for a broad range of array parameters and sizes including arrays as small as a single plaquette. We suggest a single Josephson-junction plaquette as an ideal system to experimentally investigate these solutions.
The phenomenon of intrinsic localization (intrinsic localized modes or discrete breathers (DB)) is a recent discovery in the subject of nonlinear dynamics . DB are solutions to the dynamics of discrete extended systems for which energy is exponentially localized in space. They appear either as oscillator localized modes, for which a localized group of oscillators librate; or rotor localized modes or rotobreathers, for which a group of oscillators rotate while the others librate . Recently, it has been found that DB are not restricted to periodic solutions but can also include more complex (chaotic) dynamics .
DB have been proven to be generic solutions in Hamiltonian and dissipative nonlinear lattices. It is believed that they might play an important role in the dynamics of a large number of systems, such as coupled nonlinear oscillators or rotors.
Though intrinsic localized modes have been the object of great theoretical and numerical attention in the last 10 years, they have yet to be generated and detected in an experiment. Thus, finding the best system and method for the generation, detection and study of an intrinsic localized mode in a Condensed Matter system has become an important challenge .
Josephson-junction (JJ) arrays are excellent experimental systems for studying nonlinear dynamics . In this paper we propose an experiment to detect a rotating localized mode in anisotropic JJ ladder arrays biased by dc external currents . For this, we have performed numerical simulations of the dynamics of an open ladder, including induced fields, at experimentally accessible values of the array parameters. We also propose a method for exciting a rotobreather in the array. We distinguish between two families of solutions which present different voltage patterns in the array. Both types are robust to random fluctuations and exist over a range of parameter values and array sizes. Unexpectedly, we have found that many of the rotobreather solutions do not satisfy the up-down symmetry usually assumed for most types of dynamical solutions in the ladder. We also show that a DB solution can be most readily studied in a single plaquette.
According to the RCSJ model, a Josephson junction is characterized by its critical current $`I_c`$, normal state resistance $`R_n`$, and capacitance $`C`$. The junction voltage $`v`$ is related to the gauge-invariant phase difference $`\phi `$ as
$$v=\frac{\mathrm{\Phi }_0}{2\pi }\frac{d\phi }{dt},$$
(1)
where $`\mathrm{\Phi }_0`$ is the flux quantum. After standard rescaling of the time by $`\tau =\sqrt{\mathrm{\Phi }_0C/2\pi I_c}`$, the normalized current through the junction is
$$i=\ddot{\phi }+\mathrm{\Gamma }\dot{\phi }+\mathrm{sin}\phi ,$$
(2)
where $`\mathrm{\Gamma }`$ represents the damping and is directly related to the Stewart-McCumber parameter $`\beta _c=\mathrm{\Gamma }^{-2}=2\pi I_cCR_n^2/\mathrm{\Phi }_0`$.
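A minimal numerical illustration of Eq. (2) for a single junction, a sketch rather than the production code behind the array simulations, exhibits the hysteretic coexistence of pinned and rotating states that underlies the rotobreather. The damping follows the text; the bias value, time window and initial conditions are our own illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 0.2   # damping; beta_c = GAMMA**-2 = 25, i.e. underdamped

def rcsj(t, y, i):
    """Normalized RCSJ equation (2): phi'' + GAMMA*phi' + sin(phi) = i."""
    phi, phidot = y
    return [phidot, i - GAMMA * phidot - np.sin(phi)]

def dc_voltage(i, y0):
    """Time-averaged <dphi/dt> after transients (proportional to the dc voltage)."""
    sol = solve_ivp(rcsj, (0, 2000), y0, args=(i,), max_step=0.05)
    tail = sol.t > 1000
    return (sol.y[0][tail][-1] - sol.y[0][tail][0]) / (sol.t[tail][-1] - sol.t[tail][0])

# Between the retrapping and depinning currents both branches coexist:
print(dc_voltage(0.6, [0.0, 0.0]))   # starts pinned   -> V ~ 0
print(dc_voltage(0.6, [0.0, 5.0]))   # starts rotating -> V ~ i/GAMMA
```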
Our anisotropic JJ ladders (see Fig. 1) contain junctions of two different critical currents: $`I_{ch}`$ for the horizontal junctions and $`I_{cv}`$ for the vertical ones. Anisotropic arrays are easily fabricated by varying the area of the junctions. In the case of unshunted junctions, the critical current and capacitance are proportional to this area. Due to the constant $`I_cR_n`$ product, the normal state resistance is inversely proportional to the junction area. The anisotropy parameter $`h`$ can then be defined as $`h=I_{ch}/I_{cv}=C_h/C_v=R_v/R_h`$.
To write the governing equations of an anisotropic JJ ladder array with $`N`$ cells, Fig. 1, we need to apply current conservation at each node and flux quantization at each mesh. We are including self-induced magnetic fields so that flux quantization at mesh $`j`$ yields
$$(\times \phi )_j=-2\pi f_j.$$
(3)
Here $`(\times \phi )_j=\phi _j^t+\phi _{j+1}^v-\phi _j^b-\phi _j^v`$ and it represents the circulation of gauge-invariant phase differences in mesh $`j=1`$ through $`N`$. The self-induced flux through mesh $`j`$, normalized by $`\mathrm{\Phi }_0`$, is given by $`f_j`$. The resulting equations can be written compactly as,
$`h(\ddot{\phi }_j^t+\mathrm{\Gamma }\dot{\phi }_j^t+\mathrm{sin}\phi _j^t)=-\lambda (\times \phi )_j`$ (4)
$`\ddot{\phi }_j^v+\mathrm{\Gamma }\dot{\phi }_j^v+\mathrm{sin}\phi _j^v=\lambda [(\times \phi )_j-(\times \phi )_{j-1}]+I`$ (5)
$`h(\ddot{\phi }_j^b+\mathrm{\Gamma }\dot{\phi }_j^b+\mathrm{sin}\phi _j^b)=\lambda (\times \phi )_j,`$ (6)
where the open boundaries are imposed by setting $`(\times \phi )_0=(\times \phi )_{N+1}=0`$ in Eqs. (4)–(6). The system has four independent parameters: $`h`$, $`\mathrm{\Gamma }`$, the penetration depth $`\lambda =\mathrm{\Phi }_0/2\pi I_{cv}L`$, where $`L`$ is the mesh self-inductance, and the normalized external current $`I`$. In writing Eqs. (3)–(6) we assume zero external field and normalize currents by $`I_{cv}`$. Non-zero external fields can be included in the model by replacing the $`(\times \phi )_j`$ terms in Eqs. (3)–(6) by $`(\times \phi )_j+2\pi f_j^{ext}`$, where $`f_j^{ext}`$ is the flux due to an applied external field, measured in units of $`\mathrm{\Phi }_0`$.
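For concreteness, a sketch of the right-hand side of Eqs. (4)–(6) for an open ladder is given below; it assumes the sign conventions exactly as written above and a state vector packing the $`N`$ top, $`N+1`$ vertical and $`N`$ bottom phases followed by their velocities. The bias may be a scalar or a per-junction array (the latter is used by the excitation protocol sketched later). This is our own illustration, not the authors' code.

```python
import numpy as np

def ladder_rhs(t, y, N, h, Gamma, lam, I):
    """Right-hand side of Eqs. (4)-(6) for an open N-cell ladder.
    y = (phi_t[0:N], phi_v[0:N+1], phi_b[0:N], then the same order for velocities)."""
    n = 3 * N + 1
    phi, phid = y[:n], y[n:]
    pt, pv, pb = phi[:N], phi[N:2*N+1], phi[2*N+1:]
    dt, dv, db = phid[:N], phid[N:2*N+1], phid[2*N+1:]
    # Mesh circulation: (curl phi)_j = phi_t[j] + phi_v[j+1] - phi_b[j] - phi_v[j]
    curl = pt + pv[1:] - pb - pv[:-1]
    # Open boundaries: (curl)_0 = (curl)_{N+1} = 0
    ce = np.concatenate(([0.0], curl, [0.0]))
    acc = np.empty(n)
    acc[:N] = -lam * curl / h - Gamma * dt - np.sin(pt)                     # Eq. (4)
    acc[N:2*N+1] = I + lam * (ce[1:] - ce[:-1]) - Gamma * dv - np.sin(pv)   # Eq. (5)
    acc[2*N+1:] = lam * curl / h - Gamma * db - np.sin(pb)                  # Eq. (6)
    return np.concatenate((phid, acc))
```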
The parameter values we will consider are based on Nb-Al<sub>2</sub>O<sub>x</sub>-Nb junctions with a critical current density of $`1000\mathrm{A}/\mathrm{cm}^2`$. Typical values of the Stewart-McCumber parameter and the penetration depth for arrays with $`h=1/4`$ are $`\beta _c\approx 30`$ and $`\lambda \approx 0.02`$. For the purposes of this work we will set $`\mathrm{\Gamma }=0.2`$, $`\lambda =0.02`$ and $`N=8`$.
Consider the $`h=0`$ limit of Eqs. (4)–(6). In this limit the vertical junctions behave as uncoupled damped pendula driven by an external current $`I`$. We can then think of a configuration in which one or a few of the phases rotate or oscillate around their equilibrium points while the others remain at rest. Thus rotor and/or oscillator localized modes appear as solutions of the dynamics when the array is biased either by dc or ac external currents.
For a single underdamped junction driven by a constant external current, the response measured in terms of the dc voltage presents a hysteresis loop between the depinning and the retrapping currents. In this range the pinned ($`V=0`$) and rotating ($`V\ne 0`$) solutions coexist. The rotobreather solution in the $`h=0`$ limit then corresponds to a solution in which the phase of one of the vertical junctions is rotating while the other vertical junctions are at rest.
As $`h`$ is increased from zero the non-convex character of the coupling allows for the continued existence of rotobreathers in the system. Since a solution with a time increasing field cannot physically exist, the flux quantization condition (Eq. 3) implies that each cell with a rotating junction must have at least one other junction which is rotating. Thus, for the single rotobreather solution one of the vertical and some of the horizontal neighboring junctions rotate. Fig. 2 shows schematically simple rotating localized modes in a ladder and in the single plaquette. These DB are amenable to simple experimental detection when measuring the average voltage through different junctions.
Although the rotobreather solution can theoretically be continued from its $`h=0`$ limit by varying $`h`$, we have developed a simple method of exciting it in an array. This method should be experimentally reproducible and has three steps: (i) bias the whole array up to the operating point ($`I=I^{}`$); (ii) increase the current injected into one of the junctions to a value above the junction critical current ($`I=I^{}+\stackrel{~}{I}>1`$); (iii) go back to the operating point by decreasing this extra current $`\stackrel{~}{I}`$ to zero. Typical values of $`I^{}`$ and $`\stackrel{~}{I}`$ in our simulations are 0.6.
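A sketch of this three-step protocol, driving the `ladder_rhs` defined above (the ramp times and the choice of the central vertical junction are our own illustrative parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

def excite_rotobreather(N=8, h=0.25, Gamma=0.2, lam=0.02, I_op=0.6, I_extra=0.6):
    """Three-step excitation: (i) ramp to I*, (ii) overdrive one junction, (iii) relax."""
    y = np.zeros(2 * (3 * N + 1))
    j0 = N // 2                                            # overdriven vertical junction
    schedules = [
        lambda t: (I_op * min(t / 50, 1.0), 0.0),          # (i)  bias to I*
        lambda t: (I_op, I_extra),                         # (ii) I* + I~ > 1
        lambda t: (I_op, I_extra * max(1 - t / 50, 0.0)),  # (iii) back to I*
    ]
    for sched in schedules:
        def rhs(t, y, sched=sched):
            base, extra = sched(t)
            Iv = np.full(N + 1, base)
            Iv[j0] += extra
            return ladder_rhs(t, y, N, h, Gamma, lam, Iv)
        y = solve_ivp(rhs, (0, 300), y, max_step=0.1).y[:, -1]
    return y   # a rotobreather shows up as <phidot> != 0 on junction j0 only
```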
We have checked the robustness of this method under fluctuations by simulating the equations of the ladder while adding a noise current to the junctions (this is the standard manner of including thermal effects in the system ). In this way we are able to excite DB in the ladder at various values of the parameters of the system. The solution shown in Fig. 3 was excited using this procedure.
Henceforth, we are going to consider ladders with an even number of cells for which one vertical junction (the central one) is rotating. We will relabel this junction as $`j=0`$.
Fig. 3 shows a stable rotobreather solution in a JJ ladder. We plot the phase portraits ($`\phi _j^q`$,$`\dot{\phi }_j^q`$) of some of the superconducting gauge-invariant phase differences of the array. The corresponding junctions are shown in Fig. 2(a). For clarity we have reduced the values of the phases to the $`(-\pi ,\pi ]`$ interval. We see that at this value of the penetration depth the solution is highly localized: while three of the junctions describe a nearly sinusoidal rotation, all the others oscillate with decreasing amplitudes. The average voltage across the three rotating junctions in the array is different from zero and equal to zero for all the other junctions. Fig. 4 shows the average value of the induced field in the cells of the array. It decreases exponentially as $`\overline{f_j}\propto \text{e}^{-j/0.26}`$ ($`j\ge 0`$, with $`f_{-j}=-f_{j-1}`$).
There are some surprising characteristics of this solution. Current conservation in the open ladder implies $`i_j^t=-i_j^b`$. From Eqs. (2) and (4)–(6) we can see that $`\phi _j^t=-\phi _j^b`$ is a simple solution of the dynamics of the array; it corresponds to the up-down symmetry of the phases. All previous theoretical approaches to the dynamics of the array (which include whirling modes, resonances, row-switching, etc.) and many of the numerical ones focus on solutions which satisfy this up-down symmetry. However, looking at Fig. 3 we see that the rotobreather solution shown there does not comply with this simple symmetry; that is, $`\phi _j^t\ne -\phi _j^b`$ although $`i_j^t=-i_j^b`$.
We will distinguish between two families of single rotobreather solutions in the ladder which present different voltage patterns. The first family, rotobreather A \[see Fig.2(a)\], is characterized by one vertical and two horizontal rotating junctions. Type A solutions have two possible configurations. The two rotating horizontal junctions can be either both on the same side, top or bottom, as in Fig. 2(a), or one on the top and the other on the bottom. The second family, rotobreather B \[Fig.2(c)\], is characterized by one vertical and four horizontal rotating junctions. The solution shown in Figs. 3 and 4 is a type A rotobreather. Up-down symmetric solutions belong to family B, but not all family B solutions satisfy this symmetry.
Figs. 3 and 4 show a solution for which the scale of localization is smaller than one cell. Thus, it is natural to study the DB solution in the simplest ladder array, the single plaquette. Obviously, the concept of exponential spatial localization is not applicable to the plaquette, but all the other characteristics of the solution remain. In particular we will also distinguish between type A and type B rotobreather solutions in the plaquette, which in this case correspond to one vertical and one horizontal rotating junction \[Fig.2(b)\], and one vertical and both horizontal rotating junctions \[Fig.2(d)\], respectively. The single plaquette biased by dc external currents is then proposed as the simplest and most convenient experimental system for detecting a rotating localized mode. The method for exciting the mode is also applicable to this system.
An important experimental issue then becomes finding the region of existence of these DB solutions with respect to the system parameters, in order to investigate the feasibility of designing an array in which to detect a DB. To design an array we need to calculate the junction areas, so the anisotropy needs to be known. Since different values of the anisotropy affect the cell geometry, they also change the value of $`\lambda `$. On the other hand, $`\mathrm{\Gamma }`$ is determined by the current density of the junctions and is therefore fixed and independent of the geometry, while the applied current can easily be changed during measurements. In order to make an optimal design we fix the values of $`I`$ and $`\mathrm{\Gamma }`$ to 0.6 and 0.2, respectively, and study DB solutions in the ($`h`$-$`\lambda `$) plane of parameters.
Type A and type B rotobreathers exist close to the $`h=0`$ limit. We then calculate the maximum value of the anisotropy for which a DB exists as a stable solution of the dynamical equations for different values of $`\lambda `$. Figs. 5 (a) and (b) show the results for the ladder and the single plaquette. The data were calculated by integrating the equations of motion for the corresponding system with a small amount of noise. We start with a type B rotobreather and $`h\approx 0.001`$. As we increase $`h`$, type B solutions become unstable and the solution evolves into a type A rotobreather. As we further increase $`h`$ this rotobreather becomes unstable and the system usually jumps to either a pinned or a whirling state. To verify that our method is accurate, we have calculated Floquet multipliers for periodic rotobreather solutions and found results consistent with those shown in Fig. 5.
We note that when doing this existence analysis of the solutions we find many different single-breather solutions. Most are periodic with different periods and amplitudes, but there are some that appear to be chaotic \[especially close to the $`\lambda =0.2`$ region in Fig. 5(a)\]. A detailed study of the different bifurcations, which include period-doubling bifurcations to chaos, is in progress. There also exists a large family of different multibreather solutions, each one with its own domain of existence.
Fig. 5 shows that at $`\mathrm{\Gamma }=0.2`$ and $`I=0.6`$ type A solutions exist at larger values of the anisotropy than type B solutions. A simple inspection also reveals a strong similarity between Figs. 5(a) and 5(b). This similarity can be easily understood. The rotobreather solution shown in Figs. 3 and 4 presents a mirror symmetry with respect to the rotating vertical junction: $`\phi _{-j}^v=-\phi _j^v`$, and $`\phi _{-j}^{t(b)}=-\phi _{j-1}^{t(b)}`$. In the case of solutions satisfying such a mirror symmetry it is possible to map the dynamics of a JJ ladder in which the rotating junction is the central one onto the dynamics of a smaller JJ ladder in which the rotating junction is at one of the ends. Then, due to the localized nature of the DB solution, the dynamics can be approximated by studying a single plaquette. When doing these transformations we need to rescale two of the parameters of the equations. Thus, results for the DB solution studied above present some similarities with the dynamics of a DB in a single plaquette when $`h_p=2h_l`$ and $`\lambda _p=2\lambda _l`$.
By establishing criteria for the design of simple experiments to detect these intrinsic localized modes we hope to stimulate experimental investigations.
The research was supported in part by the NSF Grant DMR-9610042 and DGES (PB95-0797). JJM is supported by a Fulbright/MEC Fellowship. We thank A. E. Duwel, F. Falo, L. M. Floría, P. J. Martínez, and S. H. Strogatz for useful discussions.
# Fission Hindrance in hot 216Th: Evaporation Residue Measurements
## I Introduction
Experimental studies of the time-scale of fission of hot nuclei have recently been carried out using the emission rates of neutrons , $`\gamma `$-rays , and charged particles as ”clocks” for the fission process. These experiments have shown that the fission process is strongly hindered relative to expectations based on the statistical model description of the process. The observed effects extend well beyond any uncertainties in the model parameters. It therefore appears that a dynamical description of the fission process at these energies is more appropriate and that the experimental data are able to shed light on dissipation effects in the shape degree of freedom. However, these experiments are not very sensitive to whether the emission occurs mainly before or after the traversal of the saddle point as the system proceeds toward scission. Various dissipation models are, however, strongly dependent on the deformation and shape symmetry of the system. As an alternative to these methods we therefore measure the evaporation probability for hot nuclei formed in heavy-ion fusion reactions, which is sensitive only to the dissipation strength inside the fission barrier. As the hot system cools down by the emission of neutrons and charged particles there is a finite chance to undergo fission after each evaporation step. If the fission branch is suppressed due to dissipation there is therefore a strongly enhanced probability for survival which manifests itself as an evaporation residue cross section which is larger than expected from statistical model predictions. This effect depends, however, only on the dissipation strength inside the saddle point and may therefore provide the desired separation between pre-saddle and post-saddle dissipation.
In this paper, we report on recent measurements of evaporation residue cross sections for the <sup>32</sup>S+<sup>184</sup>W system over a wide range of beam energies using the Argonne Fragment Mass Analyzer (FMA). In sect. II we describe the experimental procedure, followed by a discussion of the measurements of the absolute evaporation residue cross sections in sect. III. The results are compared to statistical model calculations and other relevant data in sect. IV, followed by a discussion in sect. V and the conclusion in sect. VI.
## II Experimental arrangement
The measurements were carried out using <sup>32</sup>S beams from the ATLAS superconducting linac at Argonne National Laboratory. The cross sections for evaporation residues produced in the <sup>32</sup>S+<sup>184</sup>W reaction were measured at beam energies of 165, 174, 185, 195, 205, 215, 225, 236, 246, and 257 MeV. Targets of isotopically separated <sup>184</sup>W with thickness 200 $`\mu `$g/cm<sup>2</sup> on a 100 $`\mu `$g/cm<sup>2</sup> carbon backing were used. The Argonne Fragment Mass Analyzer was used for identification of evaporation residues. A schematic illustration of the setup is shown in Fig. 1.
In these experiments a sliding-seal target chamber was used, which allows for measurements at angles away from 0°. This is required in order to obtain the angular distributions for integration of the total evaporation residue cross section. Elastically scattered S ions were registered in a Si detector placed at 30° relative to the beam axis with a solid angle of $`\mathrm{\Omega }_{mon}`$ = 0.249 msr. These data were used for normalization purposes. A 40 $`\mu `$g/cm<sup>2</sup> carbon foil was placed 10 cm downstream from the target to reset the charge state of reaction products, which may be abnormally highly charged as a result of Auger electron emission following the $`\gamma `$ decay of short-lived isomers.
A square entrance aperture for the FMA covering 4.5°×4.5° in $`\theta `$ and $`\varphi `$ ($`\mathrm{\Omega }_{FMA}=6.24`$ msr) was used. Reaction products transmitted through the FMA were dispersed in M/q (mass/charge) at the focal plane, where the spatial distribution was measured by a thin x-y position sensitive avalanche detector. When the FMA was placed at 0°, some settings of the electrostatic and magnetic fields of the instrument allow beam particles scattered off the anode of the first electrostatic dipole, ED1, to be transported to the focal plane (presumably after a subsequent forward scattering in the vacuum chamber of the magnetic dipole MD1). When measuring small cross sections, as in the present study, it is therefore mandatory to achieve a clean separation between evaporation residues and beam particles. This was achieved by measuring their flight time over the 40 cm distance to a double-sided Si strip detector (DSSD) placed behind the focal plane. This detector has a total active area of 5$`\times `$5 cm<sup>2</sup> and is divided into 16 strips on both the front and rear surfaces, arranged orthogonally to each other. The information on the particle mass obtained from the time-of-flight and energy measurement provided by the Si detector gave a clean discrimination against the scattered beam, as illustrated in Fig. 2. The efficiency for transporting evaporation residues from the focal plane to the Si detector was determined from the spatial distribution over the face of the DSSD detector, as shown in Fig. 3 for these beam energies. By Gaussian extrapolation of the distribution beyond the edge of the detector it is estimated that this efficiency is around $`ϵ_{PPACSi}`$ = 87%.
The transport efficiency of the FMA as a function of the mass, energy and charge state of the ion has been determined in a separate experiment .
## III Cross sections
The evaporation residue cross section for the <sup>32</sup>S+<sup>184</sup>W reaction was measured for beam energies in the range $`E_{beam}`$=165-257 MeV. Evaporation residues were identified by time-of-flight and energy measurement using the focal plane PPAC detector and the Si-strip detector placed ca. 40 cm behind the focal plane. The charge state distributions, which were measured at three beam energies, are shown in Fig. 4. The dashed curves represent the formula of Shima et al. , whereas a somewhat better fit to the data is given by the Gaussian fit (solid curves) with a fixed standard deviation of $`\sigma `$=3. The arrows indicate the charge state setting of the FMA used for the cross section measurement. The derivation of the evaporation residue cross section at intermediate beam energies is based on an interpolation between these measured charge state distributions.
Since the FMA disperses in $`M/q`$ at the focal plane, there will be ambiguities in the mass identification when compound nuclei with high excitation energy are studied, since lighter mass products in one charge state, $`q`$, will invariably overlap with heavier products from the neighboring charge state, $`q+1`$; see Fig. 5. We are not able to resolve this ambiguity with the present setup, and have therefore obtained the cross sections by integrating all counts that fall between the positions for $`M/(q-\frac{1}{2})`$ and $`M/(q+\frac{1}{2})`$ along the focal plane. Since the FMA is set up for the most abundant charge state, $`q`$, and mass, $`M`$, we expect that the loss of residues with charge state $`q`$ and masses that fall outside this window is compensated by the acceptance of residues with charge states $`q+1`$ and $`q-1`$ that fall inside this window.
### A Detection efficiency
The transport efficiency as a function of recoil energy and mass relative to the setting for the FMA has been measured for monoenergetic particles by observing the recoils from elastic scattering of <sup>32</sup>S + <sup>197</sup>Au, <sup>208</sup>Pb, <sup>232</sup>Th . To correctly estimate the transport efficiency for evaporation residues, which have an extended energy distribution, it is necessary to fold the energy distribution with the measured acceptance curve. The energy distribution was not measured directly in the present experiment, but the yield of residues as a function of the energy setting of the FMA was measured as shown in Fig. 6 (top panel). In principle, since the energy acceptance of the FMA is known, it should be possible to convert this measurement into an energy distribution with some accuracy.
We have, however, used a slightly different method which incorporated both this measurement and the measurement of the angular distributions. Assuming that both the angular distribution of evaporation residues and their energy distribution at 0 arise from isotropic multiparticle emission from the hot compound nucleus, these two entities are related by the kinematics of the particle decay cascade. We assume that the recoil energy distribution is isotropic in the center-of-mass system and that it has a Maxwellian form, namely
$$\frac{dP}{dE_{cm}}=\frac{2}{\sqrt{\pi }}\frac{\sqrt{E_{cm}}}{a^{3/2}}\mathrm{exp}\left(-E_{cm}/a\right),$$
(1)
where $`E_{cm}`$ is the recoil energy in the center-of-mass system and $`a=\frac{2}{3}\overline{E_{cm}}`$ is two thirds of its average value. The energy distribution in the laboratory system at $`\theta =0^{\circ }`$ is then
$$\frac{dP}{dE_{lab}}\bigg|_{0^{\circ }}=\frac{1}{2}\left(\frac{1}{\pi a}\right)^{3/2}\sqrt{E_{lab}}\mathrm{exp}\left[-\left(\sqrt{E_{lab}}-\sqrt{E_{CN}}\right)^2/a\right].$$
(2)
Here, $`E_{CN}`$ is the laboratory energy of the compound nucleus prior to the particle evaporation cascade. A small correction to $`E_{CN}`$ arising from the mass loss due to particle evaporation has been ignored in Eqs. (1)–(3). Similarly we find the angular distribution
$$\frac{dP}{d\mathrm{\Omega }_{lab}}=\frac{1}{2}\left(\frac{1}{\pi a}\right)^{3/2}\int _0^{\mathrm{\infty }}\sqrt{E_{lab}}\mathrm{exp}\left[-\left(E_{lab}+E_{CN}-2\sqrt{E_{lab}E_{CN}}\mathrm{cos}\theta \right)/a\right]dE_{lab}.$$
(3)
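Equations (1)–(3) are straightforward to evaluate numerically. The sketch below uses an illustrative compound-nucleus energy (Table I is not reproduced here) and also verifies that Eq. (3) integrates to unity over solid angle:

```python
import numpy as np
from scipy.integrate import quad

a = 0.5       # a = (2/3)<E_cm> in MeV, the value fitted in the text
E_CN = 14.0   # illustrative lab energy of the compound nucleus (MeV)

def dP_dElab_0deg(E_lab):
    """Eq. (2): lab-frame recoil energy distribution at theta = 0."""
    return 0.5 * (np.pi * a)**-1.5 * np.sqrt(E_lab) * np.exp(
        -(np.sqrt(E_lab) - np.sqrt(E_CN))**2 / a)

def dP_dOmega(theta):
    """Eq. (3): lab angular distribution, integrating over E_lab numerically."""
    integrand = lambda E: np.sqrt(E) * np.exp(
        -(E + E_CN - 2 * np.sqrt(E * E_CN) * np.cos(theta)) / a)
    val, _ = quad(integrand, 0, np.inf)
    return 0.5 * (np.pi * a)**-1.5 * val

# Normalization check: 2*pi * integral of sin(theta) dP/dOmega should be ~1.
norm, _ = quad(lambda th: 2 * np.pi * np.sin(th) * dP_dOmega(th), 0, np.pi)
print(norm)
```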
We find that a value of $`a`$ = 0.5 MeV gives a good representation of both the transmission as a function of the energy setting of the FMA, $`E_{FMA}`$, and the measured angular distribution, see Fig. 6. For the angular distribution we have also taken into account the effects of multiple scattering in the target and backing material as well as in the charge state reset foil. This increases the width of the angular distribution somewhat and results in good agreement with the data, as shown by the solid curve in Fig. 6b. This value of $`a`$ = 0.5 MeV corresponds to a transport efficiency of the FMA for evaporation residues of $`\overline{ϵ_{FMA}}\approx 0.60`$, see Table I.
### B Angular distributions
The angular distributions of evaporation residues were measured at three beam energies utilizing the sliding-seal target chamber for the FMA. Differential cross sections, $`d\sigma /d\mathrm{\Omega }`$, as a function of the mean angle, $`<\theta _{lab}>`$, relative to the beam axis are shown in the left side panel of Fig. 7. The right side panel shows the cross sections converted to $`d\sigma /d\theta `$, which is relevant for the angular integration of the total evaporation residue cross section. The angle integrated cross sections are thus derived from a fit to the data expressed in terms of $`d\sigma /d\theta `$ using the function $`2\pi \mathrm{sin}\theta dP/d\mathrm{\Omega }_{lab}`$. The curves shown in the left side panel of Fig. 7 are computed by removing the $`2\pi \mathrm{sin}\theta `$ term. We observe that these latter curves underrepresent the differential cross section at small angles indicating that the angular distribution really has two components. However, we do not feel that the data are of sufficient quality to allow for a reliable separation of two components and by observing the fits to the $`d\sigma /d\theta `$ data it is clear that only a very small error could arise from this simplification.
The data shown in Fig. 7 are corrected for the efficiency of transporting evaporation residues through the FMA. We estimate this transport efficiency, $`ϵ_{FMA}`$, by folding the energy distribution of the evaporation residues with the energy acceptance of the FMA, which was measured by Back et al for the entrance aperture used in this experiment. The mean energy of the compound system, $`E_{CN}`$ (corrected for the energy losses in the target material, backing and the reset foil), is determined from the reaction kinematics and listed in Table I. The parameter $`a=\frac{2}{3}\overline{E_{cm}}`$ was determined to have a value of about $`a\approx 0.5`$ MeV for the $`E_{beam}=`$ 246 MeV point by simultaneously fitting the angular distribution of evaporation residues and a scan of the energy setting of the FMA, see Fig. 6. For the other beam energies the value of $`a`$ was scaled according to $`a=\sqrt{E^{}}/22.4`$ (with the excitation energy $`E^{}`$ in MeV), which was found to reproduce also the angular distributions measured at $`E_{beam}`$ = 174 and 205 MeV, see Fig. 7.
### C Total evaporation residue cross sections
The total evaporation residue cross section, $`\sigma _{ER}`$, is obtained from the measurement of the differential cross section at $`\theta =5^{\circ }`$, which was performed at all beam energies. The ratio, $`f(E_{beam})=\sigma _{ER}/\frac{d\sigma _{ER}}{d\theta }(5^{\circ })`$, of the angle-integrated cross section, $`\sigma _{ER}`$, to the measured differential cross section, $`\frac{d\sigma _{ER}}{d\theta }(5^{\circ })`$, is obtained by smooth interpolation between the values of $`f(E_{beam})`$ = 0.089, 0.088, and 0.086 rad obtained from the angular distribution measurements at $`E_{beam}`$ = 174, 205, and 246 MeV, respectively. The total evaporation residue cross sections are then given by
$`\sigma _{ER}`$ $`=`$ $`f(E_{beam}){\displaystyle \frac{d\sigma (5^{\circ })}{d\theta }}`$ (4)
$`=`$ $`f(E_{beam}){\displaystyle \frac{N_{ER}(5^{\circ })}{N_{mon}}}{\displaystyle \frac{\mathrm{\Omega }_{mon}}{\mathrm{\Omega }_{FMA}}}\mathrm{\hspace{0.33em}2}\pi \mathrm{sin}(5^{\circ }){\displaystyle \frac{d\sigma _{Ruth}}{d\mathrm{\Omega }}}(30^{\circ }){\displaystyle \frac{1}{ϵ_{FMA}ϵ_{PPACSi}P(q)}}`$ (5)
where $`N_{ER}(5^{\circ })`$ and $`N_{mon}`$ are the number of evaporation residue counts observed in the FMA focal plane Si detector and the number of elastically scattered <sup>32</sup>S ions registered in the monitor detector, respectively. The differential Rutherford cross section in the laboratory system is denoted $`d\sigma _{Ruth}/d\mathrm{\Omega }`$, and $`P(q)`$ is the fraction of evaporation residues in the charge state, $`q`$, for which the FMA was tuned. The charge state fraction, $`P(q)`$, was obtained by interpolation of the central charge state, $`q_0`$, resulting from fits to the measured distributions with a Gaussian of standard deviation $`\sigma `$ = 3 charge state units.
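Expressed as code, Eq. (5) is a one-line product of measured ratios and efficiency factors. In the sketch below the two efficiencies default to the values quoted in the text, while the charge-state fraction is an illustrative placeholder:

```python
import numpy as np

def sigma_ER(f, N_ER, N_mon, Omega_mon, Omega_FMA, dsig_ruth_30deg,
             eps_FMA=0.60, eps_ppac_si=0.87, P_q=0.25):
    """Eq. (5): total evaporation-residue cross section from the 5-degree yield.
    dsig_ruth_30deg is the lab-frame Rutherford cross section at 30 degrees;
    Omega_mon and Omega_FMA must be in the same units (e.g. msr)."""
    dsig_dtheta = (N_ER / N_mon) * (Omega_mon / Omega_FMA) \
                  * 2 * np.pi * np.sin(np.radians(5.0)) * dsig_ruth_30deg
    return f * dsig_dtheta / (eps_FMA * eps_ppac_si * P_q)
```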
The resulting evaporation residue cross sections for the <sup>32</sup>S+<sup>184</sup>W reaction are shown as filled circles in Fig. 8 and listed in Table I. The measurements are assigned a systematic error of 20%, mainly due to the procedure for estimating the transport efficiency through the FMA.
Fission-like cross sections and a derived estimate of the complete fusion cross sections for the <sup>32</sup>S+<sup>182</sup>W reaction are shown as open circles and open squares , respectively, along with theoretical calculations using a modified Extra Push model .
## IV Comparison with statistical model calculations
In Fig. 8, the evaporation residue data are compared with a statistical model calculation obtained with the code CASCADE (long-dashed curve labeled $`\gamma `$=0) using Sierk fission barriers scaled by a factor of 0.9 to approximately account for the cross section at low beam energies, and using level density parameters of $`a_n=a_f=A/8.8`$ MeV<sup>-1</sup>. We observe that the measured cross section increases with beam energy, whereas the statistical model predicts a decreasing cross section because of an increased probability for fission during the longer evaporation cascades. For comparison we have also performed CASCADE calculations using level density parameters of $`a_n=A/8.68`$ MeV<sup>-1</sup> and $`a_f=A/8.49`$ MeV<sup>-1</sup> as suggested by Tōke and Swiatecki (dotted curve), and $`a_n=A/11.26`$ MeV<sup>-1</sup> and $`a_f=A/11.15`$ MeV<sup>-1</sup> by Ignatyuk et al (dotted-dashed curve). Using these values results in an even sharper decrease of the predicted evaporation residue cross section with beam energy, as shown in Fig. 8. This is a consequence of the fact that the fission decay rate increases more rapidly with excitation energy when values of $`a_f>a_n`$ are used. Although it is expected that $`a_f>a_n`$ on rather firm theoretical grounds, we have used the standard values of $`a_f=a_n=A/8.8`$ MeV<sup>-1</sup> in order to be able to compare with other works, where this value was used in the analysis.
We hypothesize that the observed increase of the measured evaporation residue cross section with excitation energy, which is at variance with the statistical model calculations, can be attributed to an increased hindrance of the fission motion with excitation energy. Fission hindrance at high excitation has previously been shown to explain observations of enhanced emission of pre-scission neutrons , charged particles , and $`\gamma `$-rays , as well as recent observation of an enhanced survival probability of excited target recoils from deep inelastic scattering reactions .
The inclusion of friction in the fission motion results in a modification of the normal Bohr - Wheeler expression for the fission decay width, $`\mathrm{\Gamma }_f^{BW}`$ as pointed out by Kramers , i.e.
$$\mathrm{\Gamma }_f^{Kramers}=\mathrm{\Gamma }_f^{BW}(\sqrt{1+\gamma ^2}-\gamma )[1-\mathrm{exp}(-t/\tau _f)]$$
(6)
where $`\gamma =\beta /2\omega _0`$ is a reduced nuclear friction coefficient, and $`\tau _f`$ is a characteristic time for the buildup of the fission flux over the saddle point. $`\beta `$ denotes the reduced dissipation constant and $`\omega _0`$ describes the potential curvature at the fission saddle point. The modification to the Bohr-Wheeler expression for the fission width thus consists of an overall reduction given by the so-called Kramers factor, $`\sqrt{1+\gamma ^2}-\gamma `$, as well as a time-dependent in-growth of the fission rate given by the factor $`1-\mathrm{exp}(-t/\tau _f)`$ . These modifications to the fission decay width have been incorporated into the CASCADE statistical model code in an approximate way , which has, however, been shown to be very accurate over the applied range of parameters.
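The Kramers suppression itself is easy to evaluate; a sketch of Eq. (6) (the function name is ours):

```python
import numpy as np

def kramers_width(gamma_bw, gamma, t, tau_f):
    """Eq. (6): dissipation-modified fission width from the Bohr-Wheeler width.
    gamma = beta/(2*omega_0) is the reduced friction coefficient."""
    return gamma_bw * (np.sqrt(1 + gamma**2) - gamma) * (1 - np.exp(-t / tau_f))

# For strongly overdamped motion (gamma = 5) the stationary suppression factor is
print(np.sqrt(1 + 5**2) - 5)   # ~0.099, i.e. roughly a ten-fold fission hindrance
```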
Because the evaporation residue cross section is such a small fraction of the complete fusion cross section we find that it is very sensitive to the nuclear viscosity of the system inside the barrier. The thin solid curve in Fig. 8 represent a statistical model calculation where the effects of viscosity are included using a linear normalized dissipation coefficient of $`\gamma `$=5, corresponding to a strongly overdamped motion in the fission degree of freedom. This is approximately the dissipation strength expected from the one-body dissipation mechanism . We see that this leads to an increase of about a factor 10-20 in the evaporation residue cross section relative to the pure statistical model estimate (long dashed curve), but the overall shape of the excitation function is virtually unchanged. Within this framework it therefore appears that the viscosity (or dissipation) increases rather rapidly over this range of beam energies i.e. from 200 to 260 MeV, which corresponds to an excitation energy range of $`E_{exc}`$=85-136 MeV. Similar effects have been observed in studies of pre-scission $`\gamma `$-rays albeit in that case it appears to take place over an even smaller excitation energy interval.
In order to deduce the temperature dependence of the dissipation strength in the fission degree of freedom, $`\gamma (T)`$, that reproduces the observed increase of the evaporation residue cross section, we have performed a series of calculations at each beam energy, varying the value of $`\gamma `$ to reproduce the measured cross section. This procedure leads to the thick solid cross section curve going through the data points in Fig. 8; the corresponding values of $`\gamma (T)`$ are plotted as solid triangles in Fig. 9. Note that there is some inconsistency in this approach because the value of the dissipation strength is not allowed to vary as the system cools down during the particle evaporation cascade. Rather, the dissipation strength is kept constant throughout the cascade at the value needed to fit the measured evaporation residue cross section for that particular beam energy. Although this has been recognized as a shortcoming of these calculations, we have employed this procedure to be able to compare with other published data analyzed in the same way.
## V Discussion
The dissipation strength in the fission process has recently been measured by several methods, and it is of interest to compare these different results. In Fig. 9 we show the normalized dissipation strength parameter, $`\gamma `$, obtained from the analysis of 1) the survival probability of Th-like nuclei excited in deep-inelastic scattering reactions of 400 MeV <sup>40</sup>Ar+<sup>232</sup>Th (solid squares), 2) the evaporation residue cross section and pre-scission $`\gamma `$-ray emission from <sup>16</sup>O+<sup>208</sup>Pb (solid diamonds), 3) the present data (solid triangles), and 4) the fission cross section for the <sup>3</sup>He+<sup>208</sup>Pb reaction (open circles). We observe that the dissipation strength required to reproduce the different data falls into two groups, namely one which increases rather sharply above an excitation energy of $`E_{exc}\approx 40`$ MeV, and another group that increases slowly only above $`E_{exc}\approx 80`$ MeV.
It is interesting to note that this behaviour may be related to the shell structure of the compound system. The two systems that have a closed (or nearly closed) neutron shell at N=126 show only moderate fission dissipation strength up to high excitation energy, whereas the mid-shell systems with N = 134 , 142 display a strong increase in $`\gamma `$ above $`E_{exc}`$ 40 MeV.
Recently, there has been much theoretical interest in the study of the dynamics of the fission process, both in terms of the description of experimental observables on the basis of phenomenological assumptions about the dissipation strength, and in terms of more fundamental theories of the dissipation mechanism itself . Although the overall dissipation strength found to reproduce the present data is in fair agreement with estimates based on the simple one-body dissipation model, namely $`\gamma \approx 5`$–6 , the rather striking increase with excitation energy (or temperature) is unexplained within this mechanism, which has no temperature dependence. It is interesting to note that the linear response theory approach appears to predict the increase in dissipation strength, although the present level of development of this theory does not allow direct comparison with the experimental data.
## VI Conclusion
Measurements of evaporation residue cross sections for heavy fissile systems are shown to provide rather direct evidence for the fission hindrance (or retardation) which is caused by strong nuclear dissipation in the fission degree of freedom for hot nuclei. The data obtained for the <sup>32</sup>S+<sup>184</sup>W system show an increasing evaporation residue cross section with excitation energy, whereas a decrease is expected on the basis of statistical model considerations and calculations. The data indicate an increase in the linear normalized dissipation coefficient $`\gamma `$ from $`\gamma `$=0 at $`E_{exc}`$=85 MeV to $`\gamma `$=5 at $`E_{exc}`$=135 MeV. Although hints of such an increase have been obtained within the framework of linear response theory, no direct comparison can be made with the experimental data. Further study, both experimental and theoretical, of this phenomenon is warranted.
This work was supported by the U. S. Department of Energy, Nuclear Physics Division, under contract No. W-31-109-ENG-38.
# Detection of Cosmic Microwave Background Structure in a Second Field with the Cosmic Anisotropy Telescope
## 1 Introduction
Observations of spatial fluctuations in the cosmic microwave background (CMB) radiation are fundamental to our understanding of structure formation in the universe as they mark the earliest observable imprints of massive gravitational structures (see e.g. review by White, Scott & Silk 1994). The distribution and amplitude of anisotropies in the CMB sky over scales from degrees to arcminutes can be used to discriminate between competing cosmological theories. On scales of 0.2°–2°, inflationary models predict that increased power should be seen in the CMB sky due to scattering of photons during acoustic oscillations of the photon-baryon fluid at recombination. Detection and study of these acoustic or ‘Doppler’ peaks in the power spectrum is one of the primary goals of CMB astronomy. Furthermore, the amplitudes and angular scales of the acoustic peaks provide powerful constraints on basic cosmological parameters including $`H_0`$ and $`\mathrm{\Omega }`$.
The first clear indication of a downturn in the power spectrum on sub-degree scales was provided by the detection of CMB power by the Cambridge Cosmic Anisotropy Telescope (CAT) on scales of about half a degree (Scott et al. 1996; Paper I). The CAT is a three-element interferometer operating at frequencies between 13 and 17 GHz (Robson et al. 1993). It is sensitive to structure on angular scales of about 10′ to 30′ over a field of view covering 2°×2° (primary beam FWHM). This paper describes observations of a second field observed with CAT and the detection of CMB anisotropies within it, at levels consistent with measurements in the first field.
## 2 Observations and Data Reduction
Observations have been made with the CAT of a blank field centred at the position 17 00 00 $`+64`$ 30 00 (B1950), which we call ‘CAT2’. The field was chosen to be relatively free from strong radio sources at frequencies up to 5 GHz (Condon, Broderick & Seielstad 1989), to lie at high Galactic latitude (b > 30°) and away from known Galactic features (e.g. the North Polar spur).
In periods during the interval 1995 March – 1997 June, CAT observed the CAT2 field at three frequencies, 13.5, 15.5 and 16.5 GHz. The three baselines of the array were scaled with frequency to achieve the same resolution at each frequency. The resulting synthesised beams measured 20′ FWHM in right ascension and 24′ in declination. The primary beam of the telescope, due to the symmetric, nearly Gaussian envelope beams of the three horn-reflector antennas, has a FWHM of 1.96° at 15.5 GHz, scaling inversely with frequency. The telescope observes in two orthogonal linear polarisations (which rotate on the sky as it tracks), and has a system noise temperature of 50 K. An observing bandwidth of 500 MHz was used. Amplitude and phase calibrations were carried out daily with observations of Cas A (using the flux scale of Baars et al. 1977), and cross-checked periodically by observations of other 15-GHz calibrators from the VLA list (Perley 1982). Typical uncertainties in flux scaling are less than 10%. Observations were generally carried out at night (about 80% of the data) or pointing more than 90° away from the sun to avoid possible solar interference; no extra emission from the moon was detected at this declination.
The CAT2 data were reduced using the same method as for the first field, CAT1, as described in detail by O’Sullivan et al. (1995). Phase rotation and flux calibration were applied first, and the data were then edited and analysed using standard tasks in AIPS. Excessively noisy data (reflecting periods of poor weather) were excised — the visibility amplitudes were all checked by eye for periods where they regularly exceeded the mean value by more than 3$`\sigma `$, and these data ($`\pm 1`$ hour) were then removed from all baselines. Across the remaining dataset, individual visibilities with amplitudes exceeding the 3$`\sigma `$ threshold were also excluded. In total, about 40% of the data were excluded by this process, leaving 370, 310 and 1340 hours of good data at 13.5, 15.5 and 16.5 GHz respectively. Since the atmospheric coherence time is very short (about 10 s) compared with the total integration time, any remaining atmospheric signals will be distributed uniformly across the synthesised map as noise, unlike true sky signals, which are modulated by the envelope beam pattern. The efficacy of atmospheric filtering for the CAT interferometer has already been demonstrated (Robson et al. 1994). CAT’s sidelobe response and lack of crosstalk and correlator offsets are discussed in O’Sullivan et al. (1995). No radio interference was seen.
## 3 Source Subtraction
Radio sources contributing significantly to the CAT2 image were identified and monitored at 15 GHz using the Ryle Telescope (RT). Five RT antennas were used in a configuration giving a synthesised beam of $`30^{\prime \prime }`$ FWHM. The RT has an instantaneous field of view of 6′ FWHM, but for these observations was used in a rastering mode which covers 30′×30′ in 12 hours to a typical flux sensitivity of 1.5 mJy/beam. To detect sources within the CAT2 field, the central 2°×2° area was scanned with the RT in raster mode over sixteen days. A source list was then compiled, including sources listed in the Green Bank 4.85-GHz survey (Condon et al. 1989). Pointed observations with the RT were then made, and repeated regularly to check for variability over the whole period of the CAT observations. In all, twenty-nine sources were detected, the faintest having a flux density of 4.5 mJy at 15 GHz. Flux densities at 13.5 and 16.5 GHz were extrapolated using spectral information obtained from lower frequency surveys (i.e. 4.85 GHz, Condon et al. 1989, and 1.4 GHz, Condon et al. 1998) where available. A flat spectrum between 13.5 and 16.5 GHz was assumed for three sources without other data. After correcting for CAT primary beam attenuation, the corresponding flux densities were subtracted as point sources from the visibility data. Totals of 33 mJy at 13.5, 27 mJy at 15.5 and 20 mJy at 16.5 GHz were subtracted. The robustness of the source-subtraction procedure is illustrated in O’Sullivan et al. The strongest source in the field, at position 16 45 32 +63 35 29 (B1950), was highly variable (by factors of up to two over periods of a few days) and so was monitored regularly with the RT; no residuals at the source position remain after subtraction of the variable source. The successful subtraction of this source, 1° away from the pointing centre and clearly visible at the same position in all three maps, shows that CAT phase calibration and pointing are accurate.
## 4 The Images
The resulting source-subtracted images each show excess signal in the central 2°×2°, falling away at larger radii as expected from the antenna envelope beam. These are displayed in Figure 1. The instrumental noise levels (measured directly from the visibilities) for each source-subtracted image are 6.1, 6.5 and 3.5 mJy/beam rms for 13.5, 15.5 and 16.5 GHz respectively. At the three frequencies, excess powers above the intrinsic noise level in the central 2°×2° area of $`7.8\pm 1.0`$, $`8.2\pm 1.0`$ and $`8.3\pm 0.5`$ mJy/beam rms were found for the source-subtracted 13.5, 15.5 and 16.5 GHz images respectively. Checks were made by splitting the data in time (consistent excesses were seen), polarisation (no excess power was visible on polarisation difference maps) and cross-correlating the source-subtracted image with a reconstructed map of the radio sources as in O’Sullivan et al. (no correlation was found); most importantly, maps were made by correlating orthogonal polarisations and these gave the same noise levels as those above.
To attempt to remove the instrumental response and thereby illustrate the distribution of features on the sky, we have co-added the data from the three frequencies weighted as $`\nu ^2`$ and CLEANed the final image. This is shown in Figure 2 and shows the presence of significant features in the central region as well as the diminution of the telescope sensitivity in accordance with the envelope beam.
The strongest feature in the CAT2 images is a negative one centred at position 17 05 29 +64 47 37 (B1950) reaching $`-39`$ mJy at 16.5 GHz (i.e. 5$`\sigma `$, relative to the rms excess power in the sky). For comparison, the strongest positive feature in the 16.5-GHz dirty map has a peak flux density of 26 mJy at position 16 55 55 +63 49 47 (B1950). The negative feature can be seen at all three frequencies (most clearly in the 13.5 and 16.5-GHz images in Figure 1), both before and after source subtraction and in different time cuts, and its spectrum is consistent with that of CMB radiation. For example, even before source subtraction the hole reaches $`-30`$ mJy in the 16.5-GHz image, and only three weak sources ($`S_{16.5}<10`$ mJy) lie within 40′ of it. No obvious Galactic structures at the position of the negative feature were visible in IRAS 100 $`\mu `$m (Wheelock et al. 1994) or H i 21-cm sky survey images (Lockman & Dickey 1995) or the $`100\mu `$m map of Schlegel, Finkbeiner & Davis (1998). Indeed, there is no resemblance at all between any of these images of the CAT2 region and the CAT2 image itself.
On the scales sampled by CAT, the negative feature is unlikely to have been caused by a Sunyaev-Zel’dovich (S–Z) effect (Sunyaev & Zel’dovich 1972; Rephaeli 1995) towards a single massive cluster. Any cluster which might produce such an effect would have to be nearby (subtending a large angle on the sky, filling the CAT beam) and/or very massive to produce such a strong signature. The strongest S–Z decrements measured towards nearby clusters at 15 GHz with the RT (Grainge et al. 1996; Jones et al. 1993) are about $`-0.5`$ mJy on arcminute scales. Observing a similar cluster at $`z\approx 0.2`$ (e.g. gas mass $`\approx 10^{14}M_{\odot }`$, with a King profile of core radius $`\approx 300`$ kpc, truncated at about ten core radii) with the larger CAT beam of 30′ would produce a similar decrement amplitude, due to the balance between the effects of beam dilution and sampling cluster gas out to a larger radius. In order to maximise the S–Z signal in a 30′ beam with the minimum gas mass, a nearby cluster ($`z\approx 0.05`$) with $`10^{15}M_{\odot }`$ of gas would be required, which is patently not observed. A system at any higher redshift would require an even higher mass because of beam dilution.
A central portion of the CAT2 field has been observed serendipitously with ROSAT. The direction of the negative CMB feature lies close to the edge of the PSPC field (45′ off axis) and is partially shadowed by the support structure of the PSPC detector. An X-ray source (heavily distorted by the PSPC point spread function) is found nearby (at the position 17 05 49 +64 45 50, B1950), but its X-ray spectrum does not fit a thermal model for any reasonable temperature, and is fit much better by a power law, implying it is not a cluster (but perhaps an AGN or Galactic source). Four clusters (Abell 2246 and two others at $`z=0.25`$ plus one at $`z=0.44`$) — and an optically luminous QSO (HS 1700+6416) — have been detected by ROSAT and lie within the central 10′ of the CAT2 field (Reimers et al. 1997; Vikhlinin et al. 1998). All four clusters are fainter than $`10^{44}`$ erg s<sup>-1</sup> at X-ray energies 0.4–2.0 keV and none is apparent in the CAT images.
Finally, we emphasise that the presence of a 5$`\sigma `$ feature in a single CAT field should not be construed as evidence for non-Gaussianity of the CMB fluctuations: the sidelobe structure of the synthesised beam of the three-element CAT is significant, and analysis should be carried out in the aperture plane — see Section 5. As a check, we have simulated CAT images given standard CDM-based realisations of CMB structure (using the actual CMB and Galactic mean power measured by CAT), and find that features which appear as strong as $`5\sigma `$ occur in about 10% of cases. We return to this point in Section 5.
## 5 Determination of the CMB Component
Due to the limited range of baselines and resulting sparse sampling of the $`uv`$ plane by CAT, a statistical likelihood analysis was employed to estimate the relative contributions of CMB and Galactic components given the three-frequency CAT data. A Bayesian likelihood method was used, as in Paper I, as described by Hobson, Lasenby & Jones (1995). This method uses the complex visibility data directly in the calculation of the likelihood function to avoid the problem of the long-range correlations present between different resolution elements in the image plane.
As described in Paper I, power was estimated in two independent annular bins centred on spherical harmonic multipole values of $`\ell =410`$ and $`\ell =590`$ and with widths equivalent to the diameter of the antenna function, thus together spanning the range $`\ell `$ ≈ 330–680. The noise-weighted centroid positions for data in each bin are $`\ell =422`$ and $`\ell =615`$. CMB and Galactic signals were modelled as independent Gaussian distributions with a power-law spectrum ($`S\propto \nu ^\alpha `$), the CMB with a fixed spectral index of $`+2`$ in flux density and the Galactic spectral index variable between 0 and $`-1`$ (as expected for Galactic free-free and synchrotron emission).
After marginalising over the Galactic parameters, this analysis confirmed that the bulk of the power in the 16.5 GHz map (Fig. 1) arises from CMB fluctuations. The CMB component was clearly distinguished from possible Galactic contamination in the $`\ell =422`$ bin; $`\mathrm{\Delta }T/T=2.1_{-0.5}^{+0.4}\times 10^{-5}`$ was estimated for the CMB signal, compared with only $`\mathrm{\Delta }T/T=0.8_{-0.8}^{+0.5}\times 10^{-5}`$ for any Galactic component. The uncertainties quoted are $`1\sigma `$ values. The CMB–Galaxy separation was less certain in the $`\ell =615`$ bin — a Galactic contribution of $`\mathrm{\Delta }T/T=1.0_{-1.0}^{+0.7}\times 10^{-5}`$ was given by the likelihood analysis, which is equivalent to an upper limit for the marginalised CMB power of $`\mathrm{\Delta }T/T<2.0\times 10^{-5}`$ ($`1\sigma `$). These values are plotted in Figure 3. The average values of $`\mathrm{\Delta }T/T`$ in the CAT2 field agree with the CAT1 result (Paper I) within $`1\sigma `$; for comparison, the measured CMB powers in the CAT1 field were $`\mathrm{\Delta }T/T=1.9_{-0.5}^{+0.5}\times 10^{-5}`$ ($`\ell =420`$) and $`\mathrm{\Delta }T/T=1.8_{-0.5}^{+0.7}\times 10^{-5}`$ ($`\ell =590`$).
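A toy version of this spectral separation can be written in a few lines. The sketch below fits the rms excess powers quoted in Section 4 with CMB ($`\alpha =+2`$) and Galactic ($`-1\le \alpha \le 0`$) components added in quadrature, marginalising over the Galactic parameters on a brute-force grid. It is only an image-plane caricature of the full visibility-space likelihood analysis, with grid ranges of our own choosing:

```python
import numpy as np

nu = np.array([13.5, 15.5, 16.5])      # GHz
excess = np.array([7.8, 8.2, 8.3])     # measured rms excess power (mJy/beam)
err = np.array([1.0, 1.0, 0.5])

def model_rms(S_cmb, S_gal, alpha_gal, nu0=16.5):
    """CMB (flux density ~ nu^2) and Galactic components, added in quadrature."""
    cmb = S_cmb * (nu / nu0)**2
    gal = S_gal * (nu / nu0)**alpha_gal
    return np.sqrt(cmb**2 + gal**2)

S_c = np.linspace(0, 12, 121)          # CMB amplitude grid (mJy/beam at 16.5 GHz)
like = np.zeros_like(S_c)
for i, sc in enumerate(S_c):           # marginalise over the Galactic parameters
    for sg in np.linspace(0, 12, 121):
        for al in np.linspace(-1, 0, 11):
            chi2 = np.sum(((excess - model_rms(sc, sg, al)) / err)**2)
            like[i] += np.exp(-0.5 * chi2)
print(S_c[np.argmax(like)])            # the CMB component dominates, as in the text
```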
We have also investigated how much of the CMB power is not associated with the strong negative feature discussed in Section 4. We removed the dip as a point source from the visibilities and repeated the above analysis. Significant power remained, at around half of the level with the negative feature included.
## 6 Estimation of Cosmological Parameters using new CAT results
The CAT2 points, taken in conjunction with the results from the Saskatoon experiment \[Netterfield et al. 1997\], provide further evidence for a downturn in the CMB power spectrum for $`\ell \gtrsim 300`$. To assess the implications for cosmological parameters using current CMB data sets, we have extended the analysis presented previously by Hancock et al. (1998). This analysis used a statistically independent subset of the current data and carried out $`\chi ^2`$ fitting for a range of cosmological models and parameters. The extensions carried out for the present work were (a) the inclusion of the CAT2 point at $`\ell =422`$; (b) the inclusion of new points from experiments Python (Python III, Platt et al. 1997), MSAM (the 2nd and 3rd flights, Cheng et al. 1996, 1997, Ratra et al. 1997), ARGO (Aries+Taurus region, Masi et al. 1996), FIRS \[Ganga et al. 1994\] and BAM \[Tucker et al. 1997\], and with the latest calibration correction to the Saskatoon data (i.e. increased by 5%, Leitch, private communication); (c) consideration of a wider class of cosmological models, all treated using exact power spectra rather than the generic forms assumed in Hancock et al. (see also Rocha 1997; Rocha et al. in preparation). The formalism and approach are otherwise the same as in Hancock et al. (1998), to which the reader is referred. The cosmological models considered were:
1. Flat models with $`\mathrm{\Lambda }=0`$, and with a range of spectral tilts (i.e. $`n\ne 1`$);
2. Flat models with $`\mathrm{\Lambda }\ne 0`$, and a range of spectral tilts;
3. Open models with $`\mathrm{\Lambda }=0`$ and open-bubble inflation spectrum \[Hu & Sugiyama 1995, Ratra & Peebles 1994, Kamionkowski et al. 1994\].
In cases (i) and (ii), nucleosynthesis constraints, with $`0.009\le \mathrm{\Omega }_bh^2\le 0.02`$ \[Copi, Schramm & Turner 1995\] were assumed. Theoretical power spectra from Seljak & Zaldarriaga (1996) were used in cases (i) and (ii), and kindly provided by N. Sugiyama for case (iii). The parameters fitted for (unless fixed) were the Hubble constant $`h`$ (in units of $`100\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$ ), the spectral tilt $`n`$, the cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }`$ and the matter density $`\mathrm{\Omega }_m`$ in units of the critical density. The flat models are defined by $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$. Note the ranges of $`h`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ considered were 0.3–0.8 and 0.3–0.7 respectively.
The results are displayed in Table 1. In each case the best fit values of the parameters are shown, together with marginalised error ranges. These marginalised errors are $`\pm 1\sigma `$ confidence limits formed by integrating over all the other parameters with a uniform prior.
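To illustrate the marginalisation step only, the fragment below collapses a chi-square grid onto a single parameter with a uniform prior; the grid axes and values are placeholders rather than the actual model grids.

```python
import numpy as np

# Placeholder chi-square grid over (h, n, Omega_Lambda); in the real
# analysis each entry comes from comparing an exact model power
# spectrum with the measured band powers.
rng = np.random.default_rng(0)
chi2 = rng.uniform(0.0, 20.0, size=(6, 5, 5))
like = np.exp(-0.5 * chi2)
p_h = like.sum(axis=(1, 2))   # marginalise over n and Omega_Lambda
p_h /= p_h.sum()              # marginal posterior for h on its grid
```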
In common with other recent work on fitting to current CMB data (e.g. Lineweaver & Barbosa 1998) a tendency to low $`H_0`$ values (except in the case of open models) is found, although it is clear from the marginalised ranges that the statistical significance of this is not yet high. For comparison with the results of Webster et al. (1998), who worked jointly with recent CMB and IRAS large-scale structure results, we note that a flat $`\mathrm{\Lambda }`$ model with normalization fixed to the COBE results and $`n=1`$ yielded a best fit of $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ and $`h=0.6`$. In conjunction with the CAT points, new results forthcoming from the Python, Viper and QMAP<sup>1</sup><sup>1</sup>1Results from QMAP published after this paper was submitted show a rise in the power spectrum from $`\ell \approx 40`$ to $`\ell \approx 200`$ consistent with the Saskatoon results (de Oliviera Costa et al. 1998). experiments may soon be very significant in ruling out open models and in further delimiting the Doppler peak, sharpening up these parameter estimates. Recent OVRO results at $`\ell \approx 590`$ (Leitch, private communication) agree well with the values found by CAT. The joint CAT and OVRO results will clearly be very significant in constraining the latest cosmic string power spectrum predictions, which now include a cosmological constant (Battye et al. 1998). These predictions succeed in recovering a significant ‘Doppler peak’ in the power spectrum, but have peak power for $`\ell `$ in the range 500 to 600, at variance with the trend of the current experimental results.
## 7 Conclusions
* We have imaged a $`2^{\circ }\times 2^{\circ }`$ patch of sky at 13–17 GHz with the CAT; this is the second field observed by CAT.
* Significant CMB anisotropy is detected in the CAT2 field with an average power of $`\mathrm{\Delta }T/T=2.1_{-0.5}^{+0.4}\times 10^{-5}`$ for the $`\ell =422`$ bin, and with an upper limit of $`\mathrm{\Delta }T/T<2.0\times 10^{-5}`$ for $`\ell =615`$ (due to Galactic contamination).
* This new result is consistent with the first detection made by CAT in a different area of sky.
* Together with other CMB data over a range of angular scales, the inclusion of the new CAT2 detection restricts the likely values of cosmological parameters.
## Acknowledgments
Staff at the Cavendish Laboratory and MRAO, Lords Bridge are thanked for their ongoing support in the running of CAT. We thank an anonymous referee for comments. We are pleased to acknowledge major PPARC support of CAT. G. Rocha wishes to acknowledge a NSF grant EPS-9550487 with matching support from the State of Kansas and from a K*STAR First award. We also thank N. Sugiyama, and U. Seljak and M. Zaldarriaga for access to their theoretical power spectra.
TIFR-HECR-99-02
hep-ph/9904234
Constraining Large Extra Dimensions Using Dilepton Data from the Tevatron Collider
Ambreesh K. Gupta <sup>1</sup><sup>1</sup>1E-mail: ambr@tifr.res.in, Naba K. Mondal <sup>2</sup><sup>2</sup>2E-mail: nkm@tifr.res.in, Sreerup Raychaudhuri <sup>3</sup><sup>3</sup>3E-mail: sreerup@iris.hecr.tifr.res.in <sup>4</sup><sup>4</sup>4 Address after May 1, 1999:
Department of Physics, Indian Institute of Technology, Kanpur 208 016, India.
Department of High Energy Physics, Tata Institute of Fundamental Research,
Homi Bhabha Road, Colaba, Mumbai 400 005, India.
Abstract
We use the invariant mass distribution of Drell-Yan dileptons as measured by the CDF and DØ Collaborations at the Fermilab Tevatron and make a careful analysis to constrain Kaluza-Klein models with large extra dimensions. The combined data from both collaborations lead to a conservative lower bound on the string scale $`M_S`$ of about 1 TeV at 95% confidence level.
April 1999
Recently, the idea that gravity could become strong at scales of the order of a few TeV has attracted a great deal of attention. This is made possible if we allow for large compactified dimensions at the TeV scale. While such ideas can be fitted in within the scheme of quantum field theories, a more natural construction involves string theories with all Standard Model (SM) fields living on a three-dimensional D-brane (or 3-brane) embedded in a space of ($`4+d`$) dimensions (bulk). Of course, the original suggestion that we live in a spacetime continuum with more than the three canonical spatial dimensions was made early in this century , but these Kaluza-Klein (KK) theories, as they are called, have not been able to satisfactorily reproduce the observed mass spectrum. Such ideas, however, have always formed a basic ingredient of string theories. In fact, models having extra dimensions with compactification scales of the order of a few TeV have been proposed from time to time in the literature with various motivations. However, it is the discovery of D-branes which has provided the rather venerable KK theories with a new lease of life over the past year.
In a nutshell, the ideas proposed by Arkani-Hamed, Dimopoulos and Dvali (ADD) and by Antoniadis et al. are as follows. They suggest — as all KK theories do — that spacetime consists of ($`4+d`$) dimensions. The extra (spatial) $`d`$ dimensions are compactified, typically on a $`d`$-dimensional torus $`T^d`$ with radius $`R`$ each way. Since gravity experiments have not really probed the sub-millimetre regime, it is proposed that $`R`$ can be as large as 0.1–1 mm, a very large value when compared with the Planck length $`10^{-33}`$ cm. Though the actual value of Newton’s constant $`G_N^{(4+d)}`$ in the bulk is of the same order as the electroweak coupling, its value $`G_N^{(4)}`$ in the effective 4-dimensional space at length scales much larger than $`R`$ is the extremely small one measured in gravity experiments. This is described by a simple relation derived from Gauss’ Law,
$$\left[M_{Pl}^{(4)}\right]^2\simeq R^d\left[M_{Pl}^{(4+d)}\right]^{(d+2)}$$
where $`M_{Pl}\equiv 1/\sqrt{G_N}`$ denotes the Planck mass. If $`M_{Pl}^{(4+d)}\simeq 1`$ TeV, then $`R\simeq 10^{30/d-19}`$ m. This means that for $`d=1`$, $`R\simeq 10^{11}`$ m, which, in turn, means that deviations from Einstein gravity would occur at solar system scales; since these have not been seen, we are constrained to take $`d\ge 2`$. For these values $`R<1`$ mm, hence there is no conflict with known facts. It is also perhaps worth mentioning that we would normally require $`d\le 7`$, since that is the largest number allowed if the string theory is derivable from M-theory, believed to be the fundamental theory of all interactions. In the ADD model the smallness of Newton’s constant is a direct consequence of the compactification-with-large-radius hypothesis and hence there is no hierarchy problem in this theory<sup>5</sup><sup>5</sup>5 A related problem, that of stabilization of the compactification scale, exists, however; this has been discussed in Ref. ..
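The scaling just quoted is easy to tabulate; the sketch below simply evaluates $`R\simeq 10^{30/d-19}`$ m for a string scale near 1 TeV, ignoring factors of order unity from the compactification volume, exactly as the estimate itself does.

```python
# R ~ 10^(30/d - 19) m for M_S ~ 1 TeV; order-one volume factors dropped.
for d in range(1, 8):
    print(f"d = {d}: R ~ 10^{30.0 / d - 19.0:+.1f} m")
# d = 1 gives ~1e11 m (solar-system scales); d = 2 gives ~0.1 mm.
```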
In traditional KK theories, the mass-spectrum of nonzero KK modes arising from compactification of fields living in the bulk is driven to the Planck scale $`M_{Pl}^{(4)}`$. This problem is avoided in the ADD model by having the SM particles live on a ‘surface’ with negligible width in the extra $`d`$ dimensions, which we identify with the 3-brane. The SM particles may then be thought of as excitations of open strings whose ends terminate on the brane; gravitons correspond to excitations of closed strings propagating in the bulk. Thus, the only interactions which go out of the 3-brane into the bulk are gravitational ones. We thus have a picture of a 4-dimensional ‘surface’ embedded in a ($`4+d`$)-dimensional space, where SM fields live on the ‘surface’, but gravitons can be radiated-off into the bulk. Noting that the SM fields are confined to the 3-brane, it is obvious that the only new effects will be those due to exchange of gravitons between particles on the 3-brane. To construct an effective theory in 4 dimensions, gravity is quantized in the usual way, taking the weak-field limit, assuming that the underlying string theory takes care of ultraviolet problems. The interactions of gravitons now follow from the ($`4+d`$)-dimensional Einstein equations in the compactification limit. Feynman rules for this effective theory have been worked out in detail in Refs. and . We use their prescriptions in our work. On the 3-brane, the couplings of the gravitons to the SM particles will be suppressed, as is well-known, by the Planck scale $`M_{Pl}^{(4)}\simeq 1.2\times 10^{19}`$ GeV. This is offset, however, by the fact that, after compactification, the density of massive KK graviton states in the effective theory is very high, being, indeed, proportional to $`M_{Pl}^{(4)}/M_{Pl}^{(4+d)}`$. The Planck mass dependence cancels out, therefore, leaving a suppression by the string scale $`M_S\simeq M_{Pl}^{(4+d)}\sim M_{EW}`$. In the ADD theory, therefore, the tower of KK graviton states leads to effective interactions of electroweak strength. A further assumption made in our work — and in other phenomenological studies — is that $`Y`$-particles, excitation modes of the 3-brane itself in the bulk, are heavy and do not affect the processes under consideration. This corresponds to a static approximation for the brane. It is also relevant to mention that the dilaton field associated with the graviton couples only to the trace of the energy-momentum tensor, i.e. to the mass of the SM particles at the vertex. For light fermions, as we have in the Drell-Yan process, this means that the interactions of the dilaton can be safely neglected.
Using these Feynman rules, it has been possible to explore a number of different processes where the new interactions could cause observable deviations from the SM. Only two new parameters enter the theory: one is the string scale $`M_S\simeq M_{Pl}^{(4+d)}`$. The other is a factor $`\lambda `$, of order unity and indeterminate sign, which arises when we sum over all possible KK modes of the graviton. As the amplitudes for virtual graviton exchange (with which we are concerned in this work) are always proportional to $`\lambda /M_S^4`$, it is usual to absorb the magnitude of $`\lambda `$ into $`M_S`$; this reduces the uncertainty to $`\lambda =\pm 1`$. Obviously this determines whether the graviton exchanges interfere constructively or destructively with the SM interactions.
Remembering that the gravitons couple to any particle with a non-vanishing energy-momentum tensor, it is possible to make a variety of phenomenological studies of the new interactions and to test the workability of the ADD model. Though the phenomenology of this model has not yet been fully explored, several important results are already available in the literature. These can be classified into two types: those involving real KK graviton production, and those involving virtual graviton exchange. A real KK mode of the graviton will have interactions with matter suppressed by the Planck scale $`M_{Pl}^{(4)}`$ and will therefore escape the detector. One can, therefore, see signals with large missing momentum and energy if an observable particle is produced in association with a KK graviton mode. However, cross-sections for these depend explicitly on $`d`$, the number of extra dimensions, and bounds derived from data reflect this dependence. Some of the processes examined so far include single-photon final states at $`e^+e^{-}`$ colliders as well as hadron colliders, monojet production at hadron colliders, two-photon processes at $`e^+e^{-}`$ colliders, single-Z production at $`e^+e^{-}`$ colliders and the neutrino flux from the supernova SN1987A. Each process can be used to obtain a bound on the string scale $`M_S`$ for a given number $`d`$. The most dramatic of these bounds is $`M_S>50`$ TeV for $`d=2`$ and it comes from a study of neutrinos from the supernova SN1987A. However, this last bound drops to about a TeV as soon as we go to $`d=3`$. Most of the other processes lead to lower bounds of about 1–1.1 TeV on the string scale for $`d=2`$, but these bounds become much weaker for $`d>3`$.
Virtual (KK) graviton exchanges lead to extra contributions to processes involving SM particles in the final state and can be observed as deviations in the cross-sections and distributions of these from the SM prediction. After summation over all the KK modes of the graviton, the final result is proportional to $`\mathrm{sgn}(\lambda )/M_S^4`$, with practically no dependence on the number of extra dimensions<sup>6</sup><sup>6</sup>6This is really because the density of graviton KK modes is approximated by a continuum, as a result of which mass degeneracies due to the number of extra dimensions are lost, at least to the leading order. In a sense, therefore, bounds from virtual graviton exchange are more general.. Each process can be used to obtain a bound on the string scale $`M_S`$ for a given sign of $`\lambda `$. Some of the processes examined include Bhabha and Møller scattering at $`e^+e^{-}`$ colliders, photon pair-production in $`e^+e^{-}`$ and hadron colliders, fermion pair production in $`\gamma \gamma `$ colliders, Drell-Yan production of dileptons, dijet and top-quark pair production at hadron colliders, deep inelastic scattering at HERA, massive vector-boson pair production in $`e^+e^{-}`$ collisions and pair production of scalars (Higgs bosons and squarks) at both $`e^+e^{-}`$ and $`\gamma \gamma `$ colliders. Among the best of these bounds is $`M_S>920(980)`$ GeV for $`\lambda =+1(-1)`$ which comes from a study of experimental data on Drell-Yan leptons at the Tevatron. We make a more elaborate analysis of the same data in this work.
The contributions to the Drell-Yan production of dileptons at hadron colliders from graviton exchanges have been considered by Hewett. Some of her findings relevant to the Tevatron are:
* There is very little difference between the cases $`\lambda =+1`$ and $`\lambda =-1`$ for the dilepton invariant mass distribution.
* The $`\lambda =\pm 1`$ cases differ, however, in the angular distribution; therefore, widely differing forward-backward asymmetries may be predicted.
* There are large deviations between the SM and the ADD model for large invariant masses.
* The gluon-gluon contribution to the Drell-Yan process (see below) is much suppressed compared to the quark-initiated process.
* The bounds can increase to about 1.15 (1.35) TeV for $`\lambda =+1(-1)`$ in Run-II of the Tevatron.
We agree with most of these results at the generator level. However, in the absence of published details about the angular distribution of dileptons observed by the CDF and DØ Collaborations, we confine our analysis to the invariant mass distributions only. Hence we do not make a separate analysis for the two signs of $`\lambda `$.
Figure 1. Feynman diagrams for the contribution to the Drell-Yan process from (a) the Standard Model and (b,c) exchange of a Kaluza-Klein graviton.
The Drell-Yan cross-section, including the effects of Kaluza-Klein graviton exchanges, is given by the above Feynman diagrams. The Standard Model diagrams ($`a`$) involving exchange of a photon or a $`Z`$-boson in the $`s`$-channel, interfere with the diagram with $`s`$-channel exchange of a Kaluza-Klein graviton ($`b`$), while the diagram ($`c`$) has no Standard Model analogue.
Evaluating these leads to the result
$`\sigma _{DY}(p\overline{p}\to \ell ^+\ell ^{-})={\displaystyle \int dx_1dx_2f_{g/p}(x_1)f_{g/\overline{p}}(x_2)\widehat{\sigma }(gg\to \ell ^+\ell ^{-})}`$ (1)
$`+{\displaystyle \sum _{q=u,d,s}}{\displaystyle \int dx_1dx_2[f_{q/p}(x_1)f_{\overline{q}/\overline{p}}(x_2)+f_{\overline{q}/p}(x_1)f_{q/\overline{p}}(x_2)]\widehat{\sigma }(q\overline{q}\to \ell ^+\ell ^{-})},`$
where $`f_{a/b}(x)`$ denotes the flux of a parton $`a`$ in a beam of particles $`b`$,
$$\widehat{\sigma }(q\overline{q}\text{ or }gg\to \ell ^+\ell ^{-})=\frac{1}{16\pi \widehat{s}^2}\overline{|\mathcal{M}(q\overline{q}\text{ or }gg\to \ell ^+\ell ^{-})|^2},$$
(2)
and $`\overline{|\mathcal{M}|^2}`$ represents the squared Feynman amplitude summed over final spins and averaged over initial spins and colours.
Evaluation of the Feynman diagrams gives, for the gluon-induced process (which has no Standard Model analogue):
$$\overline{|\mathcal{M}(gg\to \ell ^+\ell ^{-})|^2}=\left(\frac{\pi }{2M_S^4}\right)^2\left[\widehat{s}^4+2\widehat{t}\widehat{u}(\widehat{t}-\widehat{u})^2\right],$$
(3)
when all the graviton Kaluza-Klein modes have been summed over.
Evaluation of the Feynman diagrams gives for the quark-induced process (including interference terms):
$$\overline{|\mathcal{M}(q\overline{q}\to \ell ^+\ell ^{-})|^2}=T_{SM}^{q\overline{q}}+T_{KK}^{q\overline{q}},$$
(4)
where the Standard Model contribution is given below. We adopt the convention that $`T_a`$ denotes the contribution from exchange of a particle $`a`$ and $`T_{ab}`$ denotes the interference term between diagrams with exchange of $`a`$ and $`b`$ respectively. With these, we get
$`T_{SM}^{q\overline{q}}`$ $`=`$ $`T_\gamma ^{q\overline{q}}+T_Z^{q\overline{q}}+T_{Z\gamma }^{q\overline{q}};`$ (5)
$`T_\gamma ^{q\overline{q}}`$ $`=`$ $`{\displaystyle \frac{32}{3}}(\pi \alpha Q_q)^2\left[{\displaystyle \frac{\widehat{t}^2+\widehat{u}^2}{\widehat{s}^2}}\right],`$
$`T_Z^{q\overline{q}}`$ $`=`$ $`{\displaystyle \frac{1}{3}}\left({\displaystyle \frac{\pi \alpha }{4\mathrm{sin}^2\theta _W\mathrm{cos}^2\theta _W}}\right)^2|D_Z(\widehat{s})|^2\left[(L_{\ell }^2L_q^2+R_{\ell }^2R_q^2)\widehat{t}^2+(L_{\ell }^2R_q^2+R_{\ell }^2L_q^2)\widehat{u}^2\right],`$
$`T_{Z\gamma }^{q\overline{q}}`$ $`=`$ $`{\displaystyle \frac{2}{3}}Q_q\left({\displaystyle \frac{\pi \alpha }{\mathrm{sin}\theta _W\mathrm{cos}\theta _W}}\right)^2|D_Z(\widehat{s})|^2\left(1-{\displaystyle \frac{M_Z^2}{\widehat{s}}}\right)`$
$`\times \left[(L_{\ell }L_q+R_{\ell }R_q)\widehat{t}^2+(L_{\ell }R_q+R_{\ell }L_q)\widehat{u}^2\right],`$
defining
$$D_Z(\widehat{s})=\left[\widehat{s}-M_Z^2+iM_Z\mathrm{\Gamma }_Z\right]^{-1}$$
(6)
and
$$L_{\ell }=4\mathrm{sin}^2\theta _W-2,\qquad R_{\ell }=4\mathrm{sin}^2\theta _W,$$
$$L_q=4(T_{3q}-Q_q\mathrm{sin}^2\theta _W),\qquad R_q=-4Q_q\mathrm{sin}^2\theta _W,$$
for the couplings. The non-Standard part, using the same convention, is given by
$`T_{KK}^{q\overline{q}}`$ $`=`$ $`T_G^{q\overline{q}}+T_{G\gamma }^{q\overline{q}}+T_{GZ}^{q\overline{q}},`$ (7)
$`T_G^{q\overline{q}}`$ $`=`$ $`{\displaystyle \frac{\lambda ^2}{3}}\left({\displaystyle \frac{\pi }{2M_S^4}}\right)^2\left[\widehat{s}^4-4\widehat{s}^2(\widehat{t}-\widehat{u})^2+(\widehat{t}-\widehat{u})^2(5\widehat{t}^2-6\widehat{t}\widehat{u}+5\widehat{u}^2)\right],`$
$`T_{G\gamma }^{q\overline{q}}`$ $`=`$ $`{\displaystyle \frac{4}{3}}\lambda Q_q{\displaystyle \frac{\pi ^2\alpha }{M_S^4}}\left({\displaystyle \frac{\widehat{t}-\widehat{u}}{\widehat{s}}}\right)\left[\widehat{s}^2-2(\widehat{t}^2+\widehat{u}^2)-(\widehat{t}-\widehat{u})^2\right],`$
$`T_{GZ}^{q\overline{q}}`$ $`=`$ $`\lambda {\displaystyle \frac{\alpha }{3}}\left({\displaystyle \frac{\pi }{2\mathrm{sin}\theta _W\mathrm{cos}\theta _WM_S^2}}\right)^2(\widehat{s}-M_Z^2)|D_Z(\widehat{s})|^2`$
$`\times \left[(L_{\ell }L_q+R_{\ell }R_q)\widehat{t}^2(\widehat{t}-3\widehat{u})-(L_{\ell }R_q+R_{\ell }L_q)\widehat{u}^2(\widehat{u}-3\widehat{t})\right],`$
when, as before, all the Kaluza-Klein modes have been summed over.
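As a numerical cross-check, the fragment below transcribes two of the pieces above, the photon-exchange term $`T_\gamma ^{q\overline{q}}`$ and the pure graviton term $`T_G^{q\overline{q}}`$, exactly as written; the coupling value is an illustrative input, and this is a sketch for inspecting magnitudes and angular behaviour, not the full squared matrix element.

```python
import math

ALPHA = 1.0 / 128.0  # assumed value of the electromagnetic coupling

def t_gamma(s, t, u, q_q):
    """Photon-exchange term T_gamma^{qqbar} of eq. (5)."""
    return (32.0 / 3.0) * (math.pi * ALPHA * q_q) ** 2 * (t**2 + u**2) / s**2

def t_graviton(s, t, u, m_s, lam=1.0):
    """Pure Kaluza-Klein graviton term T_G^{qqbar} of eq. (7)."""
    pref = (lam**2 / 3.0) * (math.pi / (2.0 * m_s**4)) ** 2
    return pref * (s**4 - 4.0 * s**2 * (t - u) ** 2
                   + (t - u) ** 2 * (5.0 * t**2 - 6.0 * t * u + 5.0 * u**2))

# Example: u-quark initiated event with sqrt(s-hat) = 600 GeV at 90 degrees.
s_hat = 600.0**2
print(t_gamma(s_hat, -s_hat / 2, -s_hat / 2, 2.0 / 3.0))
print(t_graviton(s_hat, -s_hat / 2, -s_hat / 2, 1000.0))
```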
The above formulae represent the lowest order (LO) calculation in perturbation theory. The calculation of higher-order effects, especially next-to-leading order (NLO) and next-to-NLO (NNLO) QCD corrections has been done in detail for the SM process represented by $`T_{SM}^{q\overline{q}}`$. No corresponding calculations have been attempted as yet for the KK parts, $`T_{KK}^{q\overline{q}}`$ and $`\mathcal{M}(gg\to \ell ^+\ell ^{-})`$. In the absence of such a calculation, we make the assumption that the change in the LO cross-section due to QCD corrections — the ‘K-factor’ — is identical for the SM and KK parts. Our results are, therefore, correct only within this approximation<sup>7</sup><sup>7</sup>7This places our results on an equal footing with a large number of experimental bounds on new physics scenarios, such as those involving quark and lepton compositeness, for which the QCD corrections are not available.. However, we do not expect a proper calculation of NLO effects to make a drastic change in our rough-and-ready results, because the dominant contribution to dilepton production at the Tevatron comes from quark-induced processes. Since the SM and KK results both arise from colour-singlet exchange, the actual ‘K-factor’ is likely to be rather similar in both cases. For gluons, this is not true, but the gluon-induced process makes only a minor contribution at Tevatron energies.
In keeping with this philosophy, therefore, we have extracted, for each value of the dilepton invariant mass $`M\equiv M_{\ell ^+\ell ^{-}}`$, a ‘K-factor’ by taking the ratio of the LO SM cross-section calculated using the above formulae with that calculated using the full NNLO calculation of Ref. . This set of ratios is then used to scale the entire differential cross-section when the KK effects are included. It is worth pointing out that this procedure also takes care of the leading effects arising from initial-state radiation. Finally, it is relevant to mention that we have used the CTEQ-4M set of structure functions to calculate the initial state parton luminosities.
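A sketch of the bin-by-bin rescaling is given below; all cross-section values are placeholders, with the LO and NNLO arrays standing in for our generator output and the full NNLO SM calculation, respectively.

```python
import numpy as np

# Bin-wise K-factor rescaling with placeholder cross sections (pb).
sigma_lo_sm   = np.array([1.20, 0.35, 0.080, 0.020])  # LO SM per mass bin
sigma_nnlo_sm = np.array([1.45, 0.43, 0.098, 0.025])  # NNLO SM per mass bin
k = sigma_nnlo_sm / sigma_lo_sm                       # one K-factor per bin

sigma_lo_tot = np.array([1.30, 0.40, 0.120, 0.050])   # LO SM+KK for some M_S
sigma_pred = k * sigma_lo_tot                         # rescaled prediction
```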
We now describe our analysis in some detail. The DØ Collaboration has presented the $`e^+e^{-}`$ invariant mass distribution in 9 bins from 120 GeV to 1 TeV using the di-electron data collected with 120 pb<sup>-1</sup> of luminosity. The cuts relevant for the cross-section calculation are given below. No distinction is made between the electron and the positron.
* The transverse momentum of both the isolated electrons must satisfy $`p_T>25`$ GeV.
* The electrons are called CC (for Central Calorimeter) if they satisfy $`|\eta |<1.1`$, $`\eta `$ being the pseudorapidity; they are called EC (for End Cap) if they satisfy $`1.5<|\eta |<2.5`$.
Only those events are considered in which there is at least one CC electron, while the other can be CC or EC. The acceptances described above are taken into account while estimating our Monte Carlo cross-sections. These cross-sections need to be further convoluted with efficiencies which are ($`74.1\pm 0.6`$)% when both electrons are CC and ($`52.6\pm 1.0`$)% when one of them is EC. Multiplying by the luminosity now gives us a prediction for the number of di-electron events expected in each mass bin, which is then compared with the DØ data.
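Schematically, the per-bin prediction then takes the form sketched below; the efficiencies and luminosity are the values quoted above, while the acceptance-corrected cross sections are invented placeholders.

```python
lum = 120.0                          # pb^-1
eff_cc_cc, eff_cc_ec = 0.741, 0.526  # dielectron efficiencies from the text

# Hypothetical acceptance-corrected cross sections per mass bin (pb),
# split by whether the second electron is CC or EC.
sigma_cc_cc = [0.800, 0.100, 0.010]
sigma_cc_ec = [0.400, 0.060, 0.005]

n_expected = [lum * (eff_cc_cc * a + eff_cc_ec * b)
              for a, b in zip(sigma_cc_cc, sigma_cc_ec)]
print(n_expected)
```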
The CDF Collaboration has presented results for dimuon samples, using 107 pb<sup>-1</sup> of data. The relevant cuts are given below.
* The reconstructed rapidity $`y`$ of the virtual $`s`$-channel state (‘boson rapidity’) is required to satisfy $`|y|<1`$ for all events.
* Both muons are required to satisfy $`|\eta |<1`$, which confines the analysis to the central region.
* A back-to-back cut $`|\eta _1+\eta _2|\ge 0.2`$ is imposed: this gets rid of cosmic ray backgrounds.
* Both muons are required to satisfy a ‘loose’ transverse momentum cut of $`p_T>17`$ GeV and at least one is required to satisfy a ‘tight’ cut of $`p_T>20`$ GeV.
These cuts are applied in our Monte Carlo generator to estimate the cross-section times acceptance for the 6 mass bins in the range 120 GeV to 500 GeV presented in Ref. (Table X). These are convoluted with the experimental efficiencies (Table VI of Ref. ). We then obtain an additional correction factor for each mass bin by normalising our SM expectation to the numbers given in Ref. . This may be expected to take care of the effect of other detector-specific cuts like triggers, etc. Finally, we use this correction factor along with our generator-level acceptance and the experimental efficiencies to estimate the number of events in each mass bin for various values of $`M_S`$. The choice of only 6 mass bins in the range 120 GeV to 500 GeV is because the ADD model predicts wider deviations from the SM in the higher mass bins (see Fig. 2). We also take note of the fact that no events are seen at CDF in the mass bin 500 GeV to 1 TeV.
Figure 2. Illustrating the effects of TeV scale quantum gravity on the invariant mass distributions of dileptons seen at the Tevatron by the DØ and CDF Collaborations respectively. Solid lines show the SM prediction; dashed lines show the predictions of the ADD model for marked values of $`M_S`$.
In Fig. 2 we show the differential cross-section as a function of the invariant mass $`M`$ of the dilepton, compared to the DØ and CDF data. We have set $`\lambda =+1`$, but $`\lambda =-1`$ will not make a discernible change in the figure. Solid lines show the SM prediction; dashed lines show the predictions of the ADD model for $`M_S=`$ 0.5, 0.75, 1, 1.25 and 1.5 TeV respectively. The data points correspond to those used in our analysis and do not represent the full set of available points. Error bars are presented at 68% confidence level (C.L.) if there are events in the relevant mass bin and a 95% C.L. upper bound if there are no events in the relevant mass bin. The DØ numbers correspond to a differential cross-section $`d\sigma /dM`$: this is obtained by dividing the cross-section (modulo cuts) in the mass bin by the width of that bin. The CDF numbers correspond to a double differential cross-section $`d^2\sigma /dMdy`$ in both $`M`$ and $`y`$: this is obtained by dividing the cross-section (modulo cuts) in the mass bin by the width of that bin as well as by a factor $`\mathrm{\Delta }y=2`$.
As is apparent from the figure, the string scale cannot be anywhere near 500 GeV, since that would show extreme deviations from the observed data. This is just one of the arguments which tells us that quantum gravity effects must lie at scales of a TeV or more. On the other hand, as $`M_S`$ approaches 1 TeV, the differentiation between signal and background is less striking. This is partly because the deviations arise only in the high mass bins, where no events are expected with the current luminosities.
The actual limits on the string scale $`M_S`$ are calculated using a Bayesian analysis of the shape of the mass distribution of events. For a value $`M_S`$ of the string scale, the expected number of events in the $`k^{\mathrm{th}}`$ mass bin can be written as:
$`N^k(M_S)=b_k+\mathcal{L}\,ϵ_k\,\sigma ^k(M_S)`$ (8)
where $`\mathcal{L}`$ is the data luminosity, $`b_k`$ is the expected background, $`ϵ_k`$ is the dilepton detection efficiency and $`\sigma ^k(M_S)`$ is the expected dielectron cross section with inclusion of the effect due to large extra dimensions.
The posterior probability density for the string scale to be $`M_S`$, given the observed data distribution $`(D)`$, is given by
$`P(M_S|D)={\displaystyle \frac{1}{A}}{\displaystyle \int db\,dϵ\,d\mathcal{L}\underset{k=1}{\overset{n}{\prod }}\left[\frac{e^{-N^k(M_S)}N^k(M_S)^{N_0^k}}{N_0^k!}\right]P(b,\mathcal{L},ϵ)P(M_S)}.`$ (9)
In the above equation the term in square brackets is the likelihood for the data distribution to be from a model with string scale $`M_S`$. The prior probability $`P(b,\mathcal{L},ϵ)`$ is taken to be a product of independent Gaussian distributions in $`b`$, $`\mathcal{L}`$ and $`ϵ`$, with the measured value in each bin defining the mean and the uncertainty defining the width. The overall factor $`1/A`$ is just a normalisation. Since the excess cross-sections due to graviton exchanges are combinations of direct terms proportional to 1/$`M_S^8`$ and interference terms proportional to 1/$`M_S^4`$, we consider a prior distribution $`P(M_S)`$ uniform in ($`a`$) 1/$`M_S^4`$ and ($`b`$) 1/$`M_S^8`$ separately. The limit on the string scale from a prior uniform in 1/$`M_S^8`$ represents a conservative estimate; using a prior uniform in 1/$`M_S^4`$ provides more stringent limits. From the above posterior probability, the cumulative probability = $`\int _{M_S}^{\infty }P(M_S^{\prime }|D)\,dM_S^{\prime }`$ can be calculated. The $`M_S`$ value at which the cumulative probability equals 0.95 is, then, the 95% C.L. limit. We also combine the data using the simple expedient of treating the CDF probability as a prior for the DØ analysis (and vice versa).
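A stripped-down version of this procedure is sketched below: the nuisance parameters $`b`$, $`\mathcal{L}`$ and $`ϵ`$ are frozen at central values instead of being marginalised, only the 1/$`M_S^8`$ prior case is shown, the interference piece is dropped, and every bin content is invented for illustration.

```python
import numpy as np

lum, eff = 110.0, 0.60                 # placeholder luminosity (pb^-1), efficiency
bkg    = np.array([0.50, 0.20, 0.10])  # expected background per mass bin
n_obs  = np.array([1, 0, 0])           # toy "observed" events per bin
sig_sm = np.array([2e-2, 5e-3, 1e-3])  # SM cross section per bin (pb)
c_kk   = np.array([4e21, 6e21, 9e21])  # toy KK coefficients: sigma_KK = c/M_S^8

ms = np.linspace(600.0, 3000.0, 600)   # grid of string scales (GeV)
mu = bkg[:, None] + lum * eff * (sig_sm[:, None] + c_kk[:, None] / ms**8)
log_like = np.sum(n_obs[:, None] * np.log(mu) - mu, axis=0)  # Poisson likelihood

x = 1.0 / ms**8                        # prior uniform in 1/M_S^8
post = np.exp(log_like - log_like.max())
order = np.argsort(x)                  # integrate from M_S = infinity downwards
cdf = np.cumsum(post[order] * np.gradient(x[order]))
cdf /= cdf[-1]
ms_95 = ms[order][np.searchsorted(cdf, 0.95)]
print(f"95% C.L. lower bound on M_S ~ {ms_95:.0f} GeV (toy inputs)")
```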
Figure 3. Showing the (cumulative) posterior probability for the ADD model with different values of the string scale $`M_S`$ assuming a prior probability which is (a) uniform in $`1/M_S^4`$ and (b) uniform in $`1/M_S^8`$.
In Fig. 3, we have plotted the cumulative posterior probability for the ADD model with string scale $`M_S`$ as a function of ($`a`$) $`1/M_S^4`$ and ($`b`$) $`1/M_S^8`$. The dashed (dash-dot) lines indicate the results of considering the DØ (CDF) data alone, while the solid lines show the result of a combined fit. A glance at the figure will show that the horizontal (dotted) lines correspond to 95% C.L. limits, while the fact that the curve saturates for higher values of $`1/M_S^{(4,8)}`$ shows that the SM values ($`M_S\to \infty `$) constitute the best hypothesis to fit the data. In a more quantitative idiom, we may interpret the vertical (dotted) lines as lower bounds on the string scale $`M_S`$. It is clear that the bound of 927 GeV assuming a prior probability of 1/$`M_S^4`$, and using the CDF data, is consistent with that reported in Ref. , while the value 874 GeV assuming a prior probability of 1/$`M_S^8`$ represents a more conservative estimate. The DØ data provide an improvement<sup>8</sup><sup>8</sup>8This is for two reasons: ($`a`$) the published results from DØ use a slightly higher integrated luminosity and ($`b`$) the DØ Collaboration presents more data in the higher mass bins — where most of the deviations lie — than the CDF Collaboration. in the bound by about 100 GeV in both cases. Since the cross-section varies principally as $`1/M_S^8`$ in the region around $`M_S=1`$ TeV (this is reflected in the fact that it depends very weakly on the sign of $`\lambda `$), this corresponds to an increase in the sensitivity by a factor of about 2.6. Combining the data increases the sensitivity by another factor of about 1.2, which takes the bound to 1080 (1016) GeV, depending on the choice of prior probability. Increasing the energy to 2 TeV and the luminosity to 2 fb<sup>-1</sup>, which may be expected with the commissioning of the Main Injector in Run-II, improves the bounds by a further 200–300 GeV; this corresponds to an improvement in the sensitivity by a factor close to 4.
To conclude then, we have used published dilepton data from the DØ and CDF Collaborations to put bounds on the string scale $`M_S`$. This is the fundamental scale of the ADD model, which envisages large compact dimensions in addition to the known (noncompact) ones and predicts strong quantum gravity effects at TeV scales. Only the invariant mass distribution has been used and not the angular distribution. The latter might show some sensitivity to the sign of $`\lambda `$. For the current analysis, however, there is hardly any such sensitivity. Our result is also independent of the number of extra dimensions $`d`$. We obtain a bound on $`M_S`$ of 900 GeV (900 GeV – 1 TeV) using CDF (DØ) data alone and a bound of around 1.0 – 1.1 TeV using the combined data from both experiments. This is one of the most stringent bounds obtained from collider studies at the present time and is likely to be improved (to about 1.3 TeV) in Run-II of the Tevatron.
The authors would like to thank Ashoke Sen and K. Sridhar for reading the manuscript. We also acknowledge fruitful discussions with Dilip K. Ghosh, Supriya Jain, Gautam Mandal, Prakash Mathews and Carmine Pagliarone.
# Hard Diffraction in Vector Meson Production at HERA
## 1 Introduction
Since the first data taking in 1992, the HERA high energy $`ep`$ collider has proved to be a powerful tool for the study of the strong interaction, in particular to test the domain of applicability and the relevance of several approximations of perturbative QCD (pQCD) in the field of diffraction.
### 1.1 Total Cross Section and Diffraction
Two major experimental discoveries were made at HERA for the understanding of strong interactions and of hadron structure.
First, the observation that, in the deep inelastic scattering (DIS) domain, the $`\gamma ^{*}p`$ cross section increases rapidly with energy. This is attributed to an enhancement of the number of gluons in the proton, the gluon structure function $`xG(Q^2,x)`$ thus growing fast as $`x`$ decreases ($`x`$ is the Bjorken scaling variable: $`x=Q^2/(2p\cdot q)`$, where $`p`$ and $`q`$ are, respectively, the proton and the intermediate photon four-momenta and $`Q^2=-q^2`$; the $`\gamma ^{*}p`$ centre of mass energy $`W`$ is given by $`W^2=Q^2/x-Q^2`$).
This “hard” behaviour differs from that of the total and the elastic hadron-hadron cross sections (closely related through the optical theorem), which are characterised by a “soft” energy dependence. In the framework of Regge theory , elastic scattering is attributed at high energy to the exchange between the incoming hadrons of a colourless object, the pomeron $`IP`$. The energy dependence of the total cross section is proportional to $`W^{2[\alpha _{IP}(t)-1]}`$, where the pomeron trajectory $`\alpha _{IP}(t)`$ is parameterised as
$$\alpha _{IP}(t)=\alpha _{IP}(0)+\alpha ^{\prime }t\simeq 1.08+0.25t,$$
(1)
$`t`$ being the square of the four-momentum transfer.
The second major discovery in DIS at HERA is the substantial contribution (8–10%) of events formed of two hadronic subsystems separated by a large gap in rapidity, devoid of hadronic activity .
This process is similar to diffractive scattering in hadron-hadron interactions, where the incoming hadrons are excited without colour exchange. Diffraction thus forms an extension of elastic scattering and is dominated at high energy by pomeron exchange with the “soft” behaviour of eq. (1). The interesting feature at HERA was to observe diffraction as a leading twist process in DIS.
### 1.2 “Soft” Vector Meson Production
An important case of diffractive scattering is that of vector meson (VM) production, in particular when the proton remains intact in the reaction: $`e+p\to e+p+VM`$.
In the vector meson dominance (VDM) approach, the $`J^{PC}=1^{--}`$ photon is modelled as the superposition of the lightest VM’s ($`\rho `$, $`\omega `$, $`\varphi `$). The total $`\gamma p`$ cross section is thus expected to present the characteristic “soft” behaviour of hadron-hadron interactions. The production of light VM’s, which is directly related to elastic scattering (with a differential absorption by the target proton of some of the hadronic components of the photon) is also expected to present a “soft” energy dependence. The gross features of this interpretation are supported by a huge quantity of data accumulated by fixed target experiments . At high energy, the HERA experiments have measured the total cross section in photoproduction ($`Q^2\simeq 0`$) and the cross section for diffractive photoproduction of $`\rho `$ , $`\omega `$ , $`\varphi `$ . They exhibit the “soft” energy dependence described by parameterisation (1), as shown on Fig. 1 (at low energy, a contribution from reggeon exchange, decreasing with $`W`$, is present for $`\sigma _{tot}`$, $`\rho `$ and $`\omega `$). The W dependence of $`\rho `$ photoproduction, studied as a function of $`t`$, has also allowed measuring the slope $`\alpha ^{\prime }`$ of the pomeron trajectory .
### 1.3 “Hard” Vector Meson Production and QCD
At HERA, the main interest is for the production of light VM’s at high $`Q^2`$ or high $`|t|`$, and for the production of heavy vector mesons. This is because two far-reaching questions can be raised:
1. Is the “soft”, hadron-like behaviour observed in light VM photoproduction also observed in the presence of a “hard” scale: high $`Q^2`$, high $`|t|`$ or large quark mass ($`c`$, $`b`$) ?
2. In the presence of a “hard” scale, what are the relevant assumptions and approximations in pQCD calculations required to describe diffractive VM production ? Can this shed light on the partonic nature of the pomeron ?
A large number of experimental studies have thus been performed at HERA to investigate these questions. Data have been collected, in the presence of the scales $`Q^2`$, $`m_q`$ and $`t`$, on the production of $`\rho `$, $`\omega `$, $`\varphi `$, $`\rho ^{\prime }`$, $`J/\psi `$, $`\psi (2s)`$ and $`\mathrm{\Upsilon }`$ mesons, with studies of the differential $`Q^2`$, $`W`$ and $`t`$ distributions, of the polarisation characteristics, of the cross section ratio between several VM production and of the mass shape. Only a small fraction of these results will be presented here. They are largely based on results presented at the 29th Int. Conf. on HEP held at Vancouver, Canada, in July, 1998.
A large number of theoretical papers based on pQCD has also been published, presenting predictions for VM production under various assumptions and approximations (see e.g. ). A general feature of these approaches is that, at high energy, the amplitude is factorised in three contributions, characterised by very different time scales (see Fig. 3):
$$A=\mathrm{\Psi }_{\gamma ^{*}\to q\overline{q}}^{*}M_{q\overline{q}+p\to q\overline{q}+p}\mathrm{\Psi }_{q\overline{q}\to V}.$$
(2)
The first factor corresponds to the amplitude for a long distance fluctuation of the photon into a $`q\overline{q}`$ pair. The second factor describes the (short-time) scattering amplitude of this hadronic state with the proton. The exchange is generally modelled as a gluon pair (i.e. a colour singlet system), with $`M\propto |xG(K^2,x)|^2`$, the square of the gluon density in the proton. The order of magnitude of the scale $`K^2`$ at which the gluon structure function is probed is $`K^2\simeq \frac{1}{4}(Q^2+m_V^2+|t|)`$, since these three variables contribute to the “resolution” of the process; the factor $`1/4`$ comes from the sharing of the momenta between the two quarks. The third factor in eq. (2) accounts for the recombination of the scattered hadronic state in the VM wave function.
However, as stressed e.g. in , theoretical calculations are affected by significant uncertainties concerning the choice of the QCD scale, of the gluon distribution and of the VM wave function, in particular the effects of Fermi motion of the quarks within the meson.
## 2 Differential Distributions
### 2.1 $`W`$ Dependence
The most striking manifestation of pQCD features in VM production is to be expected in the $`W`$ dependence of the cross section, since the latter is related to the square of the gluon density in the proton, which increases rapidly with $`W`$ in the presence of a hard scale.
A “hard” behaviour is observed in photoproduction of $`J/\psi `$ mesons , as shown in Figs. 1 and 4a. When the $`W`$ dependence of the cross section is parameterised as $`W^\delta `$ (in a Regge approach, $`\delta =4[\alpha _{IP}(t)-1]`$), one finds $`\delta \simeq 0.8`$ for $`J/\psi `$ photoproduction. The contrast is thus manifest with the “soft” behaviour of light VM photoproduction, for which $`\delta =0.20`$–$`0.25`$ (this value is in agreement with the parameterisation of eq. (1), taking into account the $`t`$ distribution). A similar behaviour is observed for $`J/\psi `$ electroproduction (Fig. 4a). The curves on this figure represent the predictions of a model based on pQCD calculations , for different parameterisations of the gluon distribution in the proton. The agreement of these predictions with the data, especially as to the shape of the distribution (the absolute values are sensitive to the input charm quark mass), supports the modelisation of the pomeron as a colour-singlet gluon pair.
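The practical difference between the two behaviours is easily quantified: with $`\sigma \propto W^\delta `$ and representative exponents from the values quoted above, the lines below give the cross-section growth over a decade in $`W`$.

```python
# sigma ~ W^delta: growth factor over a decade in W for representative
# values of the exponents quoted in the text.
for label, delta in [("soft (light VM photoproduction)", 0.22),
                     ("hard (J/psi)", 0.80)]:
    print(f"{label}: sigma grows by a factor {10.0**delta:.1f}")
```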
For $`\rho `$ and $`\varphi `$ meson electroproduction, the “hard” scale is related to $`Q^2`$. Although the precision of the data is still limited, an indication is present of a steeper $`W`$ dependence of the $`\gamma ^{*}p`$ cross section as $`Q^2`$ increases for the $`\rho `$ and for the $`\varphi `$ . This is shown for the $`\rho `$ on Fig. 4b, where the pomeron intercept $`\alpha _{IP}(0)`$ is plotted.
### 2.2 $`Q^2`$ Dependence
The cross section for $`\rho `$ production in the DIS domain is presented as a function of $`Q^2`$ on Fig. 5a for the ZEUS and H1 experiments, which are in agreement. The $`Q^2`$ dependence is well parameterised in this domain as $`\mathrm{d}\sigma /\mathrm{d}Q^2\propto 1/(Q^2+m_V^2)^n`$, with $`n\simeq 2.28\pm 0.06`$ (combined value). This behaviour is expected from pQCD calculations, which give for the (dominant - see below) longitudinal cross section : $`\sigma _L\propto [\alpha _s(Q^2)xG(Q^2,x)]^2/Q^6`$, when taking into account the $`Q^2`$ dependence in $`\alpha _s(Q^2)`$ and in $`xG(Q^2,x)`$, as well as other uncertainties affecting the calculations . Over the full measurement range, including photoproduction, the $`Q^2`$ dependence of $`\rho `$ cross section is best described by the QCD based model of ref. .
For $`\varphi `$ production , a value similar to that for the $`\rho `$ is found. For $`J/\psi `$ production, the values $`n=2.24\pm 0.19`$ and $`n=1.58\pm 0.25`$ are obtained.
### 2.3 $`t`$ Dependence
For not too large $`|t|`$ values, the $`t`$ distribution of VM production can reasonably well be parameterised in the exponential form $`\mathrm{d}\sigma /\mathrm{d}t\propto e^{-b|t|}`$. In an optical model approach of diffraction, the slope parameter $`b`$ is related to the convolution of the sizes of the interacting objects: $`b\simeq R_p^2+R_{q\overline{q}}^2`$, with the proton radius $`R_p`$ giving a contribution of the order of 4–5 $`\mathrm{GeV}^{-2}`$.
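Operationally, $`b`$ is obtained from a fit to the measured $`t`$ spectrum; a minimal sketch with invented points is shown below.

```python
import numpy as np

# Toy extraction of b from dsigma/dt ~ exp(-b|t|) via a straight-line
# fit in log space; the data points are purely illustrative.
t_abs = np.array([0.05, 0.15, 0.25, 0.35, 0.45])  # |t| in GeV^2
dsdt  = np.array([95.0, 46.0, 22.0, 10.5, 5.1])   # arbitrary units
slope, _ = np.polyfit(t_abs, np.log(dsdt), 1)
b = -slope                                        # slope parameter in GeV^-2
```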
As observed in Fig. 5b, the slope $`b`$ for $`\rho `$ production decreases when $`Q^2`$ increases, in agreement with the decrease of the transverse size of the virtual $`q\overline{q}`$ pair expected in pQCD calculations.
For $`J/\psi `$ photo- and electroproduction, a slope of the order of $`b\simeq 4`$–5 $`\mathrm{GeV}^{-2}`$ is measured , confirming the small size of the $`J/\psi `$ meson.
Fig. 5b also suggests that, at low $`Q^2`$, the slope parameter $`b`$ for $`\rho `$ production increases from the fixed target to the HERA energy range. This behaviour, known as “shrinkage” and expected in Regge theory, is related to the non-zero slope $`\alpha ^{\prime }`$ of the “soft” pomeron trajectory. In contrast, no shrinkage is expected in a pQCD approach for asymptotically high values of the QCD scale ($`\alpha ^{\prime }\simeq 0`$). However, no significant measurement has been possible so far using the HERA experiments only, neither for $`\rho `$ production at high $`Q^2`$ nor for $`J/\psi `$ production, and the conclusions to be drawn from comparisons between fixed target and HERA data remain controversial .
## 3 Polarisation
The measurement of VM decay angular distributions allows the determination of the spin density matrix elements, which are related to the helicity amplitude $`T_{\lambda _V\lambda _\gamma }`$, where $`\lambda _V`$ and $`\lambda _\gamma `$ are the helicities of the VM and of the photon, respectively . In the case of $`s`$-channel helicity conservation (SCHC), the helicity of the photon is retained by the VM and the matrix elements containing helicity changing amplitudes ($`\lambda _V\ne \lambda _\gamma `$) are thus zero.
Measurements of the full set of matrix elements have been performed for $`\rho `$ as a function of $`Q^2`$ (Fig. 6a), $`W`$ and $`t`$ , and for $`\varphi `$ mesons .
As is visible on Fig. 6, the data are compatible with SCHC, except for a small but significant deviation from zero of the matrix element $`r_{00}^5`$. The helicity flip amplitude $`T_{\lambda _\rho \lambda _\gamma }=T_{01}`$ is thus determined to be $`8\pm 3\%`$ of the non-flip amplitudes $`\sqrt{T_{00}^2+T_{11}^2}`$. This value is of the order of magnitude of that found at lower energy and lower $`Q^2`$ .
Neglecting the small violation of SCHC (which would affect the value of $`R`$ by $`2.5\pm 1.5\%`$), the matrix element $`r_{00}^{04}`$ can be used to extract the ratio $`R`$ of cross sections for $`\rho `$ production by longitudinal and transverse virtual photons: $`R=\sigma _L/\sigma _T=r_{00}^{04}/[\epsilon (1-r_{00}^{04})]`$, where $`\epsilon `$ is the polarisation parameter ($`\epsilon =0.99`$ at HERA). Fig. 6b shows that R rises steeply at small $`Q^2`$, and that the longitudinal $`\gamma ^{*}p`$ cross section dominates over the transverse cross section for $`Q^2\gtrsim 2`$ $`\mathrm{GeV}^2`$. However, the rise is non-linear, with a weakening dependence at large $`Q^2`$ values, and $`R`$ is $`\simeq 3`$ for $`Q^2\gtrsim 10`$–20 $`\mathrm{GeV}^2`$.
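For a single measured value of $`r_{00}^{04}`$ the conversion is immediate; the matrix-element value used below is a made-up illustration.

```python
# R = sigma_L/sigma_T from r^04_00, neglecting the SCHC violation;
# the r0400 input is hypothetical.
eps = 0.99
r0400 = 0.70
R = r0400 / (eps * (1.0 - r0400))   # ~2.4 for this input
```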
This feature is not reproduced by numerous models based on VDM or QCD, which predict a linear increase of $`R`$ with $`Q^2`$. However, the model of ref. , based on QCD, gives a good description of $`R`$ over the full $`Q^2`$ range, as does also a model based on GVDM . Another model based on QCD predicts a moderate increase of $`R`$ with $`Q^2`$ in the DIS domain.
It is also found that the longitudinal and transverse amplitudes are nearly in phase ($`\mathrm{cos}\delta =0.93\pm 0.03`$), assuming SCHC and natural parity exchange. This is similar to lower energy measurements .
A QCD based calculation predicts for the amplitudes the hierarchy
$$|T_{00}|>|T_{11}|>|T_{01}|>|T_{10}|>|T_{-11}|,$$
(3)
which is supported by the measurement of the matrix elements, and also the magnitude of the element $`r_{00}^5`$ .
Values of the matrix elements close to those for the $`\rho `$ are obtained for $`\varphi `$ mesons . For $`J/\psi `$, the ratio $`R`$ of cross sections increases from values compatible with zero in photoproduction to $`\simeq 0.4`$ for $`Q^2\simeq 4`$ $`\mathrm{GeV}^2`$ ; this is smaller than for $`\rho `$ production at the same $`Q^2`$, but is of the same order if compared at the same value of $`Q^2/m_V^2`$.
## 4 Other Features
### 4.1 VM Production Ratio
Predictions are obtained in pQCD for the cross section ratio of different VM production . As apparent in eq. (2), this ratio is determined by the photon coupling to the $`q\overline{q}`$ pairs, i.e. the charge of the quarks in the VM’s, and the effects of the wave functions. For $`\varphi /\rho `$ , the ratio increases with $`Q^2`$ towards the value $`2/9`$ obtained from quark counting (see Fig. 7a). For $`\psi /\rho `$, the ratio is about a factor $`1/200`$ in photoproduction in the HERA energy range, but flavour symmetry is restored within a factor 2 for $`Q^2`$ above 10 $`\mathrm{GeV}^2`$ .
The case of the $`\psi (2s)/\psi `$ ratio illustrates the interesting phenomenon of the “scanning” of the VM wave function as $`Q^2`$ varies. Because of the node in the $`\psi (2s)`$ wave function, which induces approximately cancelling contributions in the production amplitude, the photoproduction of $`\psi (2s)`$ mesons is small. As $`Q^2`$ increases, the transverse size of the $`q\overline{q}`$ pair decreases, thus avoiding the cancellation effect. The resulting increase with $`Q^2`$ of the cross section ratio is illustrated in Fig. 7, the asymptotic limit being computed to be of the order of 0.5 .
### 4.2 Mass Distribution
For $`\rho `$ photoproduction, the ($`\pi ,\pi `$) mass distribution is distorted with respect to a (relativistic) Breit-Wigner distribution, with an excess of events at small masses and a deficit at large masses. This phenomenon, known as “skewing”, is attributed to the interference between resonant $`\rho `$ production and non-resonant pion pair production, the interference changing sign at the resonance pole . The skewing is observed to decrease in photoproduction as $`|t|`$ increases (see Fig. 8a). The skewing also decreases with increasing $`Q^2`$ as seen in Fig. 8b for two different parameterisations .
## 5 Conclusions
Abundant data have been collected at HERA on diffractive production of light and heavy vector mesons, in the presence of the scales $`Q^2`$, $`m_q`$ and $`t`$.
A strong energy dependence of the cross section is observed for $`J/\psi `$ production; an indication is found for a similar behaviour for $`\rho `$ mesons at high $`Q^2`$. In the light of perturbative QCD, with the pomeron modelled as a gluon pair, these features are interpreted as reflecting the strong increases of the gluon distribution in the proton at high energy, and quantitative agreement is reached for $`J/\psi `$ production. The $`Q^2`$ dependence of VM production is also qualitatively explained in pQCD approaches.
The ratio of the longitudinal to transverse photon cross sections for $`\rho `$ production increases rapidly with $`Q^2`$, but this increase is non-linear for $`Q^2\gtrsim 2`$ $`\mathrm{GeV}^2`$. This behaviour has been reproduced recently by a model based on QCD. More generally, the full set of $`\rho `$ meson spin density matrix elements has been measured. The correct hierarchy between scattering amplitudes and the magnitude of the dominant helicity-flip amplitude are also qualitatively reproduced in a QCD approach.
In summary, great progress has been made in the understanding of VM production at high energy when a hard scale is present ($`m_c`$, $`Q^2`$). This contributes significantly to the understanding of diffraction in a QCD framework.
## Acknowledgements
It is a pleasure to thank the organisers for a pleasant and fruitful Symposium, and my colleagues in H1 and ZEUS, in particular B. Clerbaux and P. Newmann, for numerous interesting discussions on diffraction.
# Models for adatom diffusion on FCC(001) metal surfaces
## I Introduction
Thin film growth processes involve complicated kinetics giving rise to a rich variety of surface morphologies. Within this vast domain, the study of the growth in the submonolayer regime is of particular interest due to the large impact of the initial kinetics on the resulting film structure. Experiments on thin film growth on well characterized substrates using molecular-beam epitaxy (MBE) have provided a large body of information about growth kinetics and morphology, and revealed that for a variety of systems and a broad temperature range, island nucleation is the dominant mechanism for crystal growth . Diffraction methods such as helium beam scattering , low energy electron diffraction and other techniques , provide information on the collective behavior and the statistical properties of the surface. These techniques have been used to measure the island size distribution, the island density, and their scaling properties with respect to the coverage and the flux . The variation of the island density with respect to the temperature was also studied .
More detailed information at the atomic scale is provided by scanning tunneling microscopy (STM) . Most notably, STM provides means to study the variety of morphologies encountered in the different systems, or in the same system under different growth conditions . In some experiments STM was used to acquire information on larger scales, e.g., island size distributions . Despite the wealth of experimental results at the atomic scale, for decay rates of small islands, mobility of small islands and edge diffusion , the underlying energetics is mostly inaccessible to direct experimental measurements. Thus, one must rely on theory to extract activation energies from the experimental results, and these are usually limited in number, and sometimes are subject to alternative interpretations.
The only technique which provides direct access to diffusion processes and activation energies at the atomic scale is field ion microscopy (FIM) . This technique was used to identify the diffusion modes of adatoms as well as small islands on FCC(001) metal surfaces, to measure their diffusion coefficients, and to determine the sticking process of adatoms to an island . Recently there were several attempts to use STM to derive such local information directly .
Theoretical studies aimed at providing better understanding of the relation between key processes at the atomic scale and the resulting morphologies have been done using Monte Carlo (MC) simulations . In simulations of island growth during deposition, atoms are deposited randomly on the substrate at rate $`F`$ \[given in monolayers (ML) per second\] and then hop, attach to, and detach from existing islands according to some model. A common approach is to assume some key processes and their rates, and then simulate the growth process . In some cases information such as diffusion length, typical distance and time between nucleation events is assumed to be known a-priori, and is put by hand into the simulation in order to accelerate the computation . The advantage of this approach is that the models are well defined and use only few parameters. These models are useful for studies of scaling and morphology but cannot provide a quantitative description of diffusion on a particular substrate. Furthermore, they account only for a limited number of processes, that are assumed to be the only significant ones.
A complementary scheme employs the underlying activation energies. In this scheme the hopping rate $`h`$ (in units of hops per second) of a given atom to each unoccupied nearest neighbor (NN) site is given by
$$h=\nu \mathrm{exp}(-E_B/k_BT)$$
(1)
where $`\nu =10^{12}`$ $`\mathrm{s}^{-1}`$ is the commonly used attempt rate, $`E_B`$ is the activation energy barrier, $`k_B`$ is the Boltzmann constant and $`T`$ is the temperature.
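For orientation, eq. (1) is evaluated below for a representative barrier; the 0.5 eV input is an illustrative value, not one of the EAM barriers discussed later.

```python
import math

NU = 1.0e12    # attempt rate, s^-1
KB = 8.617e-5  # Boltzmann constant, eV/K

def hop_rate(e_b, temperature):
    """Hopping rate of eq. (1); e_b in eV, temperature in K."""
    return NU * math.exp(-e_b / (KB * temperature))

print(hop_rate(0.5, 300.0))  # ~4e3 hops per second at room temperature
```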
The activation energy barrier $`E_B`$ depends on the local environment of the hopping atom, namely the configuration of occupied and unoccupied adjacent sites. Two approaches have been taken in the construction of the energy barriers for hopping in the simulations. One approach was to construct simple models that include the desired features, such as stability and mobility of small islands, and that take into account properties such as bond energies . In general, this approach encompasses both the virtues and the drawbacks of the simpler approach presented before.
A second approach is based on the use of an approximate many-body energy functional to calculate the hopping energy barriers for a complete set of relevant configurations . This approach provides a good description of diffusion processes on the given substrate but only limited understanding due to the large number of parameters.
In this paper we extend and further explore a framework for a systematic derivation of simple models for self-diffusion on FCC(001) surfaces out of a detailed and complicated set of energy barriers. Simple in this context means that only a small number of parameters are involved and all have a definite and intuitive interpretation. Using sensible assumptions about the bond energies and diffusion paths we obtain simple formulae for the activation energy barriers. We then optimize the parameters of these formulae for each metal separately by using energy barriers obtained from the embedded-atom method. This procedure gives rise to simple models that have at most four parameters and provide a good quantitative description of the landscape of hopping energy barriers. In a previous publication we have introduced the framework and applied it to Cu/Cu(001) growth . Here we take a three-fold step forward. (a) We include four other FCC metals in the model and derive the appropriate parameters for them. This shows the utility of the models and provides a unifying framework that applies to a large class of metals; (b) Non-linear interactions are introduced in addition to the linear interactions considered before. This allows for a more accurate optimization without increasing the number of parameters; (c) We explore the basis of the assumptions underlying this scheme and the extent of their applicability.
The paper is organized as follows. In Sec. II we introduce the physical framework of the model and discuss its underlying assumptions. The results of the EAM calculations are given in Sec. III. The models are introduced in Sec. IV, together with their fits to the EAM results. This is followed by a discussion of the results and their implications in Sec. V.
## II Applicability considerations
The framework developed in this paper assumes several characteristics of the diffusion processes considered. It applies to systems for which these assumptions are valid, a class that includes most of the FCC metals in the moderate temperature regime (200–500 K). The following assumptions are employed throughout the discussion.
Bridge-site hopping of adatoms is in general dominant over exchange hopping (Fig. 1). There has been a controversy concerning this assumption. Using semi-empirical methods, the barrier for exchange hopping on Cu(001) was estimated to be $`0.2`$ eV , in agreement with the experimental data that was available at that time . This is much lower than the barrier for bridge-site hopping. However, theoretical work using several other methods including EAM indicates that the exchange barrier for Cu is much higher (more than $`0.8`$ eV) and therefore bridge-site hopping is dominant. This conclusion is also supported by recent experimental work and a reinterpretation of the previous findings . In general, bridge-site hopping is found to be dominant in all the metals studied here. (For Au, some exchange processes appear to be significant, yet in most cases bridge hopping is favorable.) Due to the exponential dependence of the rate of each process on the corresponding energy barrier, it is generally reasonable to take into account only the mechanism which is energetically favorable (bridge-site hopping, in this case) and neglect the mechanism which exhibits a higher activation energy barrier (the exchange mechanism).
Only nearest and next-nearest neighbor interactions are significant. Within this assumption one can obtain the activation energies for most diffusion processes to a good accuracy. However, there are some processes, such as vacancy diffusion, where a larger environment affects the diffusing atom.
There is one common attempt frequency for all processes. Since there is no systematic knowledge about the dependence of the attempt frequency on the local environment, the assumption of one common frequency is the usual practice. Estimates of the attempt frequency can be obtained using MD simulations . Another way is to fit molecular statics (MS) data to a harmonic potential to find an effective force constant, and then deduce the frequency of oscillations. So far, little work has been done on this subject. Previous works, including interpretations of experimental results, usually presuppose some common attempt frequency in the range $`10^{12}`$–$`10^{13}`$ $`s^{-1}`$.
## III The EAM barriers
The models described in this work are tested by fitting their parameters to energy barriers for self-diffusion on the Cu(001), Ag(001), Au(001), Ni(001) and Pd(001) surfaces, obtained using EAM . This method uses semi-empirical potentials and provides a good description of self-diffusion on such surfaces . Specifically, for all the metals considered here the EAM functions developed by Adams, Foiles, and Wolfer (AFW) are employed. These functions are fitted to a similar but more accurate data base than the one employed by Foiles, Baskes, and Daw . The calculations are done on a slab of 20 square layers with 100 atoms in each layer.
When an atom on the surface hops into a vacant nearest neighbor site it has to cross the energy barrier between the initial and final sites. We have used molecular statics in conjunction with the EAM functions to find that energy barrier. This is simply the difference between the energy at the bridge site (or more precisely, at the point along the path with highest energy) and in the initial site.
The hopping energy barriers are calculated for all local environments as shown in Fig. 2, where seven adjacent sites, $`i=0,\dots ,6`$, are taken into account, according to the assumptions presented in Sec. II. Each one of these sites can be either occupied ($`S_i=1`$) or vacant ($`S_i=0`$), giving rise to $`2^7=128`$ barriers. A binary representation is used to assign indices to these barriers. For each configuration $`(S_0,\dots ,S_6)`$ the barrier is given by $`E_B^n`$, where
$$n=\sum _{i=0}^{6}S_i2^i$$
(2)
takes the values $`n=0,\dots ,127`$. The full set of hopping energy barriers (given in eV) is presented in Table I, for Cu(001), Ag(001), Au(001), Ni(001) and Pd(001). To show these values in a compact form, each barrier in Table I corresponds to a configuration in which the occupied sites are the union of the occupied sites in the picture on top of the given column and on the left hand side of the given row. The column in Table I in which a given configuration appears is determined by the occupancy of sites $`i=2,3,6`$ while the row is determined by sites $`i=0,1,4,5`$. One can define
$$n_1=\sum _{i=2,3,6}S_i2^i;\qquad n_2=\sum _{i=0,1,4,5}S_i2^i$$
(3)
such that for each configuration $`n=n_1+n_2`$. To demonstrate the use of Table I, consider for Cu(001) the configuration in which sites $`0,3`$ and $`4`$ are occupied and all other sites adjacent to the hopping atom are vacant. For this configuration, according to Eq. (3), $`n_1=8`$ and $`n_2=17`$ ($`n=25`$). The barrier, found in the column with index $`8`$ and the row with index $`17`$, in the row for Cu, is $`E_B^{25}=0.89`$ eV.
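The indexing of Eqs. (2) and (3) is easy to mechanize; a small sketch reproducing the worked example above:

```python
def config_index(S):
    """Decimal index n of Eq. (2) for an occupation tuple S = (S_0, ..., S_6)."""
    return sum(S[i] << i for i in range(7))

def table_indices(S):
    """Column index n1 (sites 2, 3, 6) and row index n2 (sites 0, 1, 4, 5), Eq. (3)."""
    n1 = sum(S[i] << i for i in (2, 3, 6))
    n2 = sum(S[i] << i for i in (0, 1, 4, 5))
    return n1, n2

# Sites 0, 3 and 4 occupied, all other adjacent sites vacant:
S = (1, 0, 0, 1, 1, 0, 0)
print(config_index(S), table_indices(S))  # prints: 25 (8, 17)
```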
In Table I we use the symmetries of the configurations in the $`3\times 3`$ cell (Fig. 2) to reduce the number of entries. There is a mirror symmetry plane perpendicular to the surface and containing the arrow of the hopping atom. Consequently, the columns of $`n_1=4`$ and $`12`$, in which site $`i=2`$ is occupied, also stand for the symmetric configurations in which $`i=6`$ is occupied. In the other four columns, there are some configurations that, due to symmetry, appear twice. In such cases, the barrier for the configuration with the larger $`n`$ appears in italics.
For the purpose of the calculations and parameterization of the model we consider only hopping moves in which a single atom hops each time. It turns out, however, that in some cases the molecular statics calculations, used to obtain the barriers, give rise to concerted moves. In such moves the atom at site $`i=3`$ follows the hopping atom and takes the place vacated by the hopping atom. This fact significantly reduces the barrier. It turns out that for configurations in which concerted moves appear, they can be suppressed by adding a column of three atoms on the left hand side of sites $`i=0,3`$ and $`4`$. In Table I, the energy values for those configurations in which a concerted move was found are shown in parentheses. The barrier obtained when the concerted move was suppressed is shown to the left of the parentheses.
To gain a better understanding of the barrier energy landscape we present the barrier height distribution (without concerted moves) in Fig. 3 for the five metals considered. We observe that this distribution exhibits four groups. This feature is in agreement with Ref. , where a different method was used to calculate the barriers. Each group corresponds to a single or a double column in Table I. In general, group I includes very fast moves towards island edges, group II includes moves along the edge, group III includes, most notably, the single-atom move, while group IV includes detachment moves.
## IV The Models
### IV.1 The Additivity Assumption
The starting point in the construction of a simple model that describes the hopping energy barriers for all the configurations of Fig. 2 is the assumption that the contributions of all adjacent atoms to the energy barrier add up linearly. To examine this assumption for Cu(001), we directly evaluated the binding energies within the EAM approach for a series of configurations from which we extracted the relevant bond energies.
The binding energy of a given configuration of adatoms on the surface is evaluated as follows: First we calculate the total energy of the system in that configuration. Then we find the total energy of another configuration in which there is the same number of adatoms on the surface but they are far apart from each other (by which we mean that moving any of them one lattice site in any direction would not change the total energy). The binding energy between the adatoms is given by the difference in the total energies between the two configurations. An example of the procedure is shown in Fig. 4. It appears that the evaluation of the NNN bond energy is easier, since one can construct a sufficiently large set of configurations which include only NNN bonds with no NN bonds. In the case of NN bonds, most of the relevant configurations also include NNN bonds \[Fig. 5 (a)\]. Similarly, considering an atom on top of a bridge site, typical configurations which include atoms adjacent to the bridge site exhibit NN bonds between them, as shown in Fig. 5(b).
Therefore, we first examine the additivity of the NNN bonds employing a series of four configurations in which an adatom has 1, 2, 3 and 4 NNNs on the surface. For each one of these four configurations the total energy is compared to that of a configuration with the same number of adatoms, in which they are far apart from each other. In Fig. 6(a), the total binding energy between adatoms is plotted as a function of the number of NNN bonds. The best linear fit is drawn, and its slope yields a value of $`E_{NNN}=0.0512`$ eV. The next step is to examine the linearity of the NN binding energy using a series of four configurations in which an adatom has 1, 2, 3 and 4 NNs on the surface, within a similar procedure. The contribution of the NNN bonds in each configuration is subtracted using the value obtained before. The results are shown in Fig. 6(b) and the NN bond energy is obtained: $`E_{NN}=0.324`$ eV. A similar analysis for an adatom on a bridge site is shown in Fig. 6(c), and the binding energy between an atom on the bridge site and an adjacent atom is given by $`E_{NN(bridge)}=0.345`$ eV.
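The linear fits just described amount to simple one-parameter regressions. A sketch of the NNN step follows; the binding energies listed here are hypothetical placeholders, not the EAM values:

```python
import numpy as np

# Number of NNN bonds in the four test configurations, and the
# corresponding total binding energies in eV (hypothetical placeholders).
n_bonds = np.array([1, 2, 3, 4])
E_bind = np.array([0.051, 0.102, 0.154, 0.205])

# The slope of the best linear fit gives the NNN bond energy E_NNN
E_NNN = np.polyfit(n_bonds, E_bind, 1)[0]
print(E_NNN)
```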
### IV.2 Construction of the Models
The energy barrier $`E_B`$ for a certain process is the difference between the binding energies of the hopping adatom (to the substrate and to adjacent adatoms) at the initial position, $`E_{in}`$, and at the bridge site, $`E_{top}`$, namely $`E_B=E_{top}-E_{in}`$. On the basis of the additivity feature just demonstrated, we will now express these binding energies as sums over the occupation states of the relevant sites. The first approximation for the energies gives a model (model I) with only two parameters, which reproduces the main features of the EAM barriers. In order to establish the model, there are two things to note about the parameters obtained in Sec. IV.1. First, the values of the NN binding energies at the lattice site and the bridge site are very close. This reflects the fact that the NN distance corresponds approximately to the minimum of the two-body interaction potential. Second, both these energies are much larger than the NNN binding energy. These two features are quite general and common to all the metals we discuss here. For the simplest model we neglect the effect of the NNN atoms, and assume a single NN binding energy, $`\mathrm{\Delta }E_{NN}`$, for both lattice and bridge sites. The resulting expression for the binding energy at the initial (fourfold hollow) site is
$$E_{in}^n=E_{in}^0-\mathrm{\Delta }E_{NN}(S_1+S_3+S_5).$$
(4)
The energy of an isolated atom is $`E_{in}^0`$. The energy of the hopping atom when it is on the bridge site is given by:
$$E_{top}^n=E_{top}^0-\mathrm{\Delta }E_{NN}(S_1+S_2+S_5+S_6)$$
(5)
where $`E_{top}^0`$ is the energy of an isolated atom on top of a bridge site. Thus, for a given configuration the barrier, $`E_B^n=E_{top}^n-E_{in}^n`$, for an atom to hop into an adjacent vacant site is given in model I by:
$$E_B^n=E_B^0+\mathrm{\Delta }E_{NN}(S_3-S_2-S_6)$$
(6)
where $`E_B^0=E_{top}^0-E_{in}^0`$ and $`n`$ is given by Eq. (2). In this model only three sites affect the energy barrier, which can take only four different values, as the expression in the parentheses can be either 1, 0, -1 or -2. Each of these four barrier values corresponds to one of the four groups in Fig. 3. The parameters of this model, as well as those of the models discussed below, are adjusted to best fit the EAM data. More specifically, we found the parameters that best describe the 128 EAM barriers by minimizing the sum of squares:
$$R=\sum _{n=0}^{127}[E_B^n(EAM)-E_B^n(Model)]^2.$$
(7)
The values obtained for these parameters for the five metals are shown in Table 2. Despite its simplicity, Model I can be used to describe and analyze the main diffusion processes: single adatom hopping, attachment, detachment and edge diffusion. The barriers obtained from this model can be incorporated in simulations to reproduce (at least qualitatively) experimental features such as cluster mobility, island morphology and island density .
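Because Eq. (6) is linear in $`E_B^0`$ and $`\mathrm{\Delta }E_{NN}`$, the minimization of Eq. (7) reduces to ordinary linear regression. A minimal sketch, assuming the 128 EAM barriers are available as an array `E_eam` indexed by $`n`$ of Eq. (2):

```python
import numpy as np

def occupations(n):
    """Occupation numbers (S_0, ..., S_6) encoded in the index n of Eq. (2)."""
    return [(n >> i) & 1 for i in range(7)]

def fit_model_I(E_eam):
    """Least-squares fit of the two model-I parameters by minimizing Eq. (7).

    E_eam -- array of the 128 EAM barriers, indexed by n.
    Returns (E_B^0, Delta E_NN).
    """
    A = np.empty((128, 2))
    for n in range(128):
        S = occupations(n)
        A[n] = [1.0, S[3] - S[2] - S[6]]  # coefficients of E_B^0 and Delta E_NN
    params, *_ = np.linalg.lstsq(A, np.asarray(E_eam), rcond=None)
    return params
```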
The model presented above describes only the gross features of the diffusion process. In order to get more quantitative results, and to better understand the importance of the different processes, it is necessary to further refine the model. We will now introduce model II, in which the effect of NNN atoms in the initial configuration is included. The expression for the energy at the initial site is now
$$E_{in}=E_{in}^0-\mathrm{\Delta }E_{NN}(S_1+S_3+S_5)-\mathrm{\Delta }E_{NNN}(S_0+S_2+S_4+S_6)$$
(8)
where $`\mathrm{\Delta }E_{NNN}`$ is the reduction of the energy due to an NNN bond. The energy barriers are now given by
$$E_B^n=E_B^0+\mathrm{\Delta }E_{NN}(S_3-S_2-S_6)+\mathrm{\Delta }E_{NNN}(S_0+S_2+S_4+S_6)$$
(9)
Model II accounts better for processes such as detachment, edge diffusion and vacancy diffusion, which generally involve NNN interactions. In the distribution of the barriers obtained from the model, the main groups exhibit certain widths. Yet, they are still significantly narrower than the groups of the EAM barriers. This is due to the fact that during the hopping process adjacent atoms may relax within their potential wells. Model II accounts for these effects only on average and therefore gives rise to narrower groups. The values obtained as best fits of the model parameters to the EAM data for the different metals are shown in Table 2. Further refinement can be obtained by introducing a distinction between the NN bond energy at the initial fourfold hollow site and that at the bridge site, as suggested in Ref. . This modification, which introduces a fourth parameter into the model, gives only slightly better agreement with the EAM results. In the following Section we present a more effective refinement based on nonlinear interactions.
### IV.3 Adding Non-Linear Effects
To obtain models which provide a better fit to the EAM barriers, it is necessary to consider effects that are caused by the simultaneous interactions of the hopping atom with several of its neighbors. Such effects may be described by expressions such as $`S_iS_j`$ or $`S_iS_jS_k`$, which are equal to 1 only if all the relevant sites are occupied, and 0 otherwise. Such expressions are clearly beyond the linear bond counting scheme of the previous Section. There is a large number of possible nonlinear interaction terms. Our analysis of the EAM calculations, however, indicates that two of them are most significant. The first term is related to the shape of the diffusion path. It corresponds to configurations in which sites adjacent to the bridge site are occupied on both sides of the diffusion path (namely, at least one of the sites 1 and 2, as well as at least one of the sites 5 and 6, are occupied). It appears that in these cases the energy barrier is considerably higher than for configurations where sites on only one side of the path are occupied. This effect is due to the “stiffness” of the diffusion path induced by the attraction from two opposite directions. Even though there are nine different such configurations, they all contribute about the same energy difference, and hence can be described by a single parameter $`\mathrm{\Delta }E_{opp}`$ (for opposite). The additional term that is now added to the expression for the barrier is $`\mathrm{\Delta }E_{opp}(S_{1,2}S_{5,6})`$, where $`S_{i,j}=1`$ if at least one of the sites $`i`$ and $`j`$ is occupied, and 0 if both are empty. In all the metals we checked, except Cu, this term is much larger than the NNN bond energy, sometimes by an order of magnitude.
The second nonlinear interaction term is smaller, and is comparable to the effect of NNN sites. It is related to the energy of the hopping atom in the initial site. The EAM calculations indicate that if the two nearest neighbor sites 1 and 5, that are symmetric with respect to the hopping direction are both occupied, then the initial configuration is more tightly bound. This means that if sites 1 and 5 are both occupied, the energy barrier is expected to be higher. Consequently, the corresponding term would be: $`\mathrm{\Delta }E_{symm}(S_1S_5)`$. The fitted value obtained for $`\mathrm{\Delta }E_{symm}`$ is very close to that obtained for the NNN binding energy $`E_{NNN}`$, for all five metals. We thus included both contributions in the same term, although of different physical origin, to avoid the need for a fifth independent parameter.
The resulting model (model III) for the hopping energy barriers is
$$E_B^n=E_B^0+\mathrm{\Delta }E_{NN}(S_3-S_2-S_6)+\mathrm{\Delta }E_{opp}(S_{1,2}S_{5,6})+\mathrm{\Delta }E_{NNN}(S_0+S_4+S_1S_5)$$
(10)
The values obtained from the best fit for the four parameters $`E_B^0`$, $`\mathrm{\Delta }E_{NN}`$, $`\mathrm{\Delta }E_{opp}`$ and $`\mathrm{\Delta }E_{NNN}`$ for the different metals are given in Table 3. There are two remarks to be made about Eq. (10). First, terms such as $`S_1S_5`$ do not contradict the assumption that only nearest and next-nearest neighbor interactions are significant. These terms are just a manifestation of the simultaneous interactions of, say, the atoms in sites 1 and 5 with the hopping atom, of which they are both nearest neighbors. Second, $`S_2`$ and $`S_6`$ are not included in the last term of Eq. (10), since we found that their dominant contribution is in the nonlinear term.
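For concreteness, Eq. (10) translates directly into a barrier-lookup routine; a minimal sketch (the parameter values for a given metal would be taken from Table 3):

```python
def model_III_barrier(S, E0, dE_NN, dE_opp, dE_NNN):
    """Hopping barrier of Eq. (10) for an occupation tuple S = (S_0, ..., S_6)."""
    S12 = 1 if (S[1] or S[2]) else 0  # at least one site on one side of the path
    S56 = 1 if (S[5] or S[6]) else 0  # at least one site on the other side
    return (E0
            + dE_NN * (S[3] - S[2] - S[6])
            + dE_opp * S12 * S56
            + dE_NNN * (S[0] + S[4] + S[1] * S[5]))
```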
### IV.4 Testing the Quality of the Fit
The quality of the fit can be viewed in Fig. 7. The numbering of the configurations is the decimal representation of the binary number $`n^{\prime }=S_3\overline{S}_2\overline{S}_6S_1S_5S_0S_4`$, where $`S_i=1(0)`$ if site $`i`$ is occupied (unoccupied), and $`\overline{S}_i`$ is the opposite of $`S_i`$. There are essentially 6 groups of barriers which are marked in the figures. These groups correspond (not necessarily in order) to the six columns of Table I. As can be seen, groups II(a) and II(b) are in the same energy range, and together form group II in Fig. 3. Similarly, groups III(a) and III(b) coincide with group III in Fig. 3. Thus, there are actually only four groups, as mentioned in Sec. III. Beyond this basic division, there are some significant differences among the metals which model III seems to handle well. The most important one is the effect of NNN atoms on the energy barrier. It can be seen from Table 3 that for Ag and Pd this effect is almost negligible, while for Cu and Ni it has much greater importance. The effective NNN binding energy for Au is even negative. Although it may be possible to construct models with the same number of parameters that would give better agreement with the EAM results for each specific metal alone, our approach is to find the general characteristics of diffusion mechanisms common to different substrates. The agreement between the EAM barriers and model III is slightly worse for Au than for the other metals. This may be due to substrate relaxation effects, which are found to be more important in this metal and are not accounted for in the model.
## V Discussion and Summary
The models presented in the previous Section help to identify the main physical mechanisms that determine the activation energies of self-diffusion processes on FCC(001) metal surfaces. Although such processes may involve interactions with many substrate and in-plane atoms, they can be well described as the sum of a few relatively simple terms. The first term is the activation energy for hopping of an isolated atom. The main corrections are due to nearest-neighbor in-plane atoms at the initial site, as well as at the bridge site. The former increase the energy barrier while the latter decrease it by nearly the same amount. The next contribution is due to the simultaneous presence of atoms on both sides of the hopping atom relative to the hopping path. This term is important in relatively dense environments. Its typical value is about half that of the NN binding energy. A third and generally much smaller contribution consists of NNN bonds, as well as a term associated with the simultaneous presence of atoms in both sites 1 and 5 (Fig. 2).
Beyond the physical understanding gained by this analysis, the models can be used to evaluate the activation energy of any diffusion process on the (001) surface of the metals discussed above. The models suggest that given a set of a few activation energies (which can be obtained from EAM, ab-initio calculations or experiments), it is possible to extract the complete set of activation energy barriers. To realize model I, for example, only two parameters are needed. They can be obtained, e.g., from the activation energy for single adatom hopping and the dissociation energy of a dimer. To estimate a barrier using model II, a third parameter is needed, which is the NNN binding energy. This may be obtained if the mobility of a trimer is known. The fourth parameter, which is needed for model III, can be estimated from the activation energy for detachment from an atomic step. Since model III provides an expression for the energy barriers which is linear in the parameters, any four barriers that give rise to four linearly independent equations are sufficient to determine all four parameters, and consequently all the other barriers.
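This linear-system view is easy to make explicit. In the sketch below, the four configurations are chosen only so that the coefficient rows of Eq. (10) are linearly independent, and the barrier values on the right-hand side are hypothetical placeholders:

```python
import numpy as np

def coefficient_row(S):
    """Coefficients of (E_B^0, dE_NN, dE_opp, dE_NNN) in Eq. (10)."""
    S12 = 1 if (S[1] or S[2]) else 0
    S56 = 1 if (S[5] or S[6]) else 0
    return [1.0, S[3] - S[2] - S[6], S12 * S56, S[0] + S[4] + S[1] * S[5]]

# Four configurations (occupations S_0..S_6) giving independent rows
configs = [(0, 0, 0, 0, 0, 0, 0),   # isolated adatom
           (0, 0, 0, 1, 0, 0, 0),   # detachment-type move (site 3 occupied)
           (0, 1, 1, 0, 0, 1, 0),   # sites on both sides of the path occupied
           (1, 0, 0, 0, 1, 0, 0)]   # two NNN-type contributions
A = np.array([coefficient_row(S) for S in configs])
E = np.array([0.49, 0.82, 0.72, 0.52])  # hypothetical barriers in eV
print(np.linalg.solve(A, E))            # (E_B^0, dE_NN, dE_opp, dE_NNN)
```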
The possibility to construct a full set of activation energy barriers from a relatively small set of parameters is especially useful for simulations. Without this knowledge, some processes have to be discarded from the simulations as unimportant, or assigned activation energies which are not fully substantiated. Such approaches take away much of the power of computer simulations, and preclude direct quantitative comparison with experimental data. Even if a list of all relevant activation energies is available, the model can be used to check the self-consistency of the data. It can also help to interpret simulation results, which otherwise depend on a huge number of parameters.
The models presented here apply to diffusion on flat surfaces and do not describe the motion up or down steps. Such inter-terrace moves involve a large number of possible local environments, including flat steps as well as kink sites. We believe that the approach proposed here can be extended to describe these processes as well.
In summary, we have constructed a family of models which describe self-diffusion on FCC(001) metal surfaces and tested them for Cu, Ag, Au, Ni and Pd. For each one of these metals, the parameters of the models were optimized by comparing the energy barriers to a full set of barriers obtained from semi-empirical potentials via the embedded atom method. It is found that these models, with at most four parameters, provide a good description of the hopping energy barriers on the FCC(001) surfaces.
We thank G. Vidali for helpful discussions.
Table I
Table II
Table III
# Stability of Multi-hump Optical Solitons
## Abstract
We demonstrate that, in contrast with what was previously believed, multi-hump solitary waves can be stable. By means of linear stability analysis and numerical simulations, we investigate the stability of two- and three-hump solitary waves governed by incoherent beam interaction in a saturable medium, providing a theoretical background for the experimental results reported by M. Mitchell, M. Segev, and D. Christodoulides \[Phys. Rev. Lett. 80, 4657 (1998)\].
Self-guided optical beams, or spatial optical solitons, are the building blocks of all-optical switching devices where light itself guides and steers light without fabricated waveguides . In the simplest case, a spatial soliton is created by one beam of a certain polarization and frequency, and it can be viewed as a self-trapped mode of an effective waveguide it induces in a medium . When a spatial soliton is composed of two (or more) modes of the induced waveguide , its structure becomes rather complicated, and the soliton intensity profile may display several peaks. Such solitary waves are usually referred to as multi-hump solitons; they have been found for various nonlinear models of coupled fields .
In realistic (nonintegrable) physical models, solitary waves can become unstable, demonstrating self-focusing, decay, or a nonlinearity-driven transition to a stable state, if the latter exists . All these scenarios of soliton evolution are initiated by exponentially growing perturbations, and they are attributed to linear instability. It is usually believed that all types of multi-hump solitary waves are linearly unstable, except for the special case of neutrally stable solitons in the integrable Manakov model . On the contrary, recent experimental results indicate the possibility of observing stationary structures resembling multi-hump solitary waves. This naturally poses a question: Were those observations only possible because of a short propagation distance and a small instability growth rate? In any case, the experimental results challenge the conventional view on multi-hump solitary waves in different models of nonlinear physics.
The purpose of this Letter is twofold. First, we study the origin of multi-hump solitons supported by incoherent interaction of two optical beams in a photorefractive medium. We find that multi-hump solitons appear via bifurcations of one-component solitons and due to the process of hump multiplication, when the intensity profile of a composite soliton changes from single- to multi-humped with increasing power. Second, we perform numerical stability analysis of two- and three-hump solitary waves and also find analytically the instability threshold for two-hump solitons. We reveal that two-hump solitary waves are linearly stable in a wide region of their existence, whereas all three-hump solitons are linearly unstable, and that even linearly stable multi-hump solitons may not survive collisions.
In the experiments , spatial multi-hump solitary waves were generated by incoherent interaction of two optical beams in a biased photorefractive crystal. The corresponding model has been derived by Christodoulides et al. , and it is described by a system of two coupled nonlinear equations for the normalized beam envelopes, $`u(x,z)`$ and $`w(x,z)`$, which for the purpose of our current analysis can be written in the following form :
$$\begin{array}{c}i\frac{\partial u}{\partial z}+\frac{1}{2}\frac{\partial ^2u}{\partial x^2}+\frac{u(|u|^2+|w|^2)}{1+s(|u|^2+|w|^2)}-u=0,\hfill \\ i\frac{\partial w}{\partial z}+\frac{1}{2}\frac{\partial ^2w}{\partial x^2}+\frac{w(|u|^2+|w|^2)}{1+s(|u|^2+|w|^2)}-\lambda w=0,\hfill \end{array}$$
(1)
where the transverse, $`x`$, and propagation, $`z`$, coordinates are measured in the units of $`(L_d/k)^{1/2}`$ and $`L_d`$, respectively, $`L_d`$ is a diffraction length, and $`k`$ is the wavevector in the medium. The parameter $`\lambda `$ is a ratio of the nonlinear propagation constants, and $`s`$ is an effective saturation parameter. For $`s\to 0`$, the system (1) reduces to the integrable Manakov equations .
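As an illustration of how Eqs. (1) can be integrated numerically, a minimal split-step Fourier sketch is given below; the grid, step sizes and initial fields are left to the user, and this is not the relaxation code used for the stationary solutions discussed next:

```python
import numpy as np

def propagate(u, w, dx, dz, steps, s, lam):
    """Split-step Fourier integration of Eqs. (1).

    u, w  -- complex field arrays on a uniform x grid of spacing dx
    dz    -- propagation step; steps -- number of steps
    """
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    half_disp = np.exp(-0.25j * k**2 * dz)  # half step of the (1/2) d^2/dx^2 term
    for _ in range(steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        w = np.fft.ifft(half_disp * np.fft.fft(w))
        I = np.abs(u)**2 + np.abs(w)**2
        nl = I / (1.0 + s * I)               # saturable nonlinearity
        u *= np.exp(1j * (nl - 1.0) * dz)    # nonlinear + propagation-constant terms
        w *= np.exp(1j * (nl - lam) * dz)
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        w = np.fft.ifft(half_disp * np.fft.fft(w))
    return u, w
```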
We look for stationary, $`z`$-independent, solutions of Eqs. (1) with both components $`u(x)`$ and $`w(x)`$ real and vanishing as $`|x|\to \infty `$. Different types of such two-component localized solutions, existing for $`0<\{\lambda ,s\}<1`$, can be characterized by the total power, $`P(\lambda ,s)=P_u+P_w`$, where the partial powers, $`P_u=\int _{-\infty }^{\infty }|u|^2dx`$ and $`P_w=\int _{-\infty }^{\infty }|w|^2dx`$, are integrals of motion. If one of the components is small, i.e. $`w/u\sim \epsilon `$, Eqs. (1) become decoupled and, in the leading order, the equation for the $`u`$-component has a solution $`u_0(x)`$ in the form of a fundamental, $`sech`$-like, soliton with no nodes. The second equation can then be considered as an eigenvalue problem for the “modes” $`w_n(x)`$ of a waveguide created by the soliton $`u_0(x)`$ with the effective refractive index profile $`u_0^2(x)/[1+su_0^2(x)]`$. Parameter $`s`$ determines the total number of guided modes and the cut-off value for each mode, $`\lambda _n(s)`$. Therefore, a two-component vector soliton $`(u_0,w_n)`$ consists of a fundamental soliton and an $`n`$th-order mode of the waveguide it induces in the medium. Henceforward we denote such a composite solitary wave by its “state vector”: $`|0,n\rangle `$.
On the $`P(\lambda )`$ diagram (for fixed $`s`$), continuous branches representing $`|0,n\rangle `$ solitons emerge at the points of bifurcations $`\lambda _n(s)`$ of one-component solitons (see Fig. 1). It is noteworthy that the first-order mode is in fact the lowest possible mode of the waveguide induced by the fundamental soliton $`u_0(x)`$. This is because the state $`|0,0\rangle `$, node-less in both components, can exist only in the degenerate case $`\lambda =1`$, when Eqs. (1) have a family of equal-width solutions $`u_0=A(x)\mathrm{sin}\theta `$ and $`w_0=A(x)\mathrm{cos}\theta `$, with arbitrary $`\theta `$, and amplitude $`A`$ satisfying the scalar equation, $`dA/dx=\pm s^{-1}[\mathrm{log}(1+sA^2)-s(1-s)A^2]^{1/2}`$.
Additionally, indefinitely many families of vector solitons $`|m,n\rangle `$, where $`m\ne n\ne 0`$, can be formed as bound states of phase-locked $`|0,n\rangle `$ solitons . Although such states do contribute to the rich variety of the multi-hump solitons existing in our model, we exclude them from our present consideration.
Families of vector solitons can be found by a numerical relaxation technique. Some results of our calculations are presented in Fig. 1, for $`|0,1\rangle `$ and $`|0,2\rangle `$ solitons found at $`s=0.8`$. Observing the modification of soliton profiles with changing $`\lambda `$ (see inset in Fig. 1), one can see that the modal description of two-component solitons is valid only near the bifurcation points. For $`\lambda >\lambda _n`$, the amplitude of an initially small $`w`$-component grows and the soliton-induced waveguide deforms. It is this purely nonlinear effect that gives rise to the existence of multi-hump solitons. In particular, two- and three-hump solitons are members of the soliton families $`|0,1\rangle `$ (branch A-B-C) and $`|0,2\rangle `$ (branch D-E-F) originating at different bifurcation points. At $`\lambda `$ close to $`\lambda _n(s)`$, while the $`w`$-component remains small, all $`|0,n\rangle `$ solitons are single-humped, as shown in Figs. 1(a,d). As the amplitude of $`w`$ grows with increasing $`\lambda `$, the total intensity profile, $`I(x)=u_0^2(x)+w_n^2(x)`$, develops $`(n+1)`$ humps \[see Figs. 1(b,e)\], and at sufficiently large $`\lambda `$ the $`u`$-component itself becomes multi-humped \[Figs. 1(c,f)\]. The separation distance between the soliton humps tends to infinity as $`\lambda \to 1`$.
To analyze the linear stability of multi-hump solitons, we seek solutions of Eqs. (1) in the form of weakly perturbed solitary waves: $`u(x,z)=u_0(x)+\epsilon [F_u(x,z)+iG_u(x,z)]`$ and $`w(x,z)=w_n(x)+\epsilon [F_w(x,z)+iG_w(x,z)]`$, where $`\epsilon \ll 1`$. Setting $`F_{u,w}=f_{u,w}(x)e^{\beta z}`$, $`G_{u,w}=g_{u,w}(x)e^{\beta z}`$, one can obtain the following eigenvalue problem (EVP)
$`\widehat{L}_1\widehat{L}_0\vec{g}=\mathrm{\Lambda }\vec{g},`$ (2)
$`\widehat{L}_0\widehat{L}_1\vec{f}=\mathrm{\Lambda }\vec{f}.`$ (3)
Here $`\vec{g}\equiv (g_u,g_w)^T`$, $`\vec{f}\equiv (f_u,f_w)^T`$, $`\mathrm{\Lambda }=\beta ^2`$, and
$`\widehat{L}_{0,1}=\left(\begin{array}{cc}-\frac{1}{2}\frac{d^2}{dx^2}+1-a_{0,1}& -b_{0,1}\\ -b_{0,1}& -\frac{1}{2}\frac{d^2}{dx^2}+\lambda -c_{0,1}\end{array}\right),`$
where $`a_0=c_0=I/(1+sI)`$, $`b_0=0`$, $`a_1=a_0+2u_0^2/(1+sI)^2`$, $`c_1=c_0+2w_n^2/(1+sI)^2`$, and $`b_1=2u_0w_n/(1+sI)^2`$.
Because $`\widehat{L}_1\widehat{L}_0`$ and $`\widehat{L}_0\widehat{L}_1`$ are adjoint operators with identical spectra, we can consider the spectrum of only one of these operators, e.g. $`\widehat{L}_1\widehat{L}_0`$. Considering the complex $`\mathrm{\Lambda }`$-plane, it is straightforward to show that $`\mathrm{\Lambda }\in (-\infty ,-\lambda ^2)`$ is a continuum part of the spectrum with unbounded eigenfunctions. Stable bounded eigenmodes of the discrete spectrum (the so-called soliton internal modes ) can have eigenvalues only inside the gap, $`-\lambda ^2<\mathrm{\Lambda }<0`$. The presence of either positive or complex $`\mathrm{\Lambda }`$ implies soliton instability, because in this case there always exists at least one eigenvalue of the soliton spectrum with $`\mathrm{Re}\beta >0`$.
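To make the numerical procedure concrete, a minimal finite-difference sketch of the EVP (2) is given below. It assumes the operator signs as written above and zero (hard-wall) boundary conditions implied by the truncated second-difference matrix; $`u_0`$ and $`w_n`$ stand for precomputed stationary profiles:

```python
import numpy as np

def evp_spectrum(u0, wn, dx, s, lam):
    """Eigenvalues Lambda of the discretized operator L1 L0 of Eq. (2).

    u0, wn -- real stationary profiles on a uniform grid of spacing dx
    """
    N = u0.size
    I = u0**2 + wn**2
    # Second-derivative matrix (2nd-order central differences)
    D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / dx**2

    def L(a, c, b):
        top = np.hstack([-0.5 * D2 + np.diag(1.0 - a), np.diag(-b)])
        bot = np.hstack([np.diag(-b), -0.5 * D2 + np.diag(lam - c)])
        return np.vstack([top, bot])

    a0 = I / (1.0 + s * I)
    c0 = a0.copy()
    b0 = np.zeros(N)
    a1 = a0 + 2.0 * u0**2 / (1.0 + s * I)**2
    c1 = c0 + 2.0 * wn**2 / (1.0 + s * I)**2
    b1 = 2.0 * u0 * wn / (1.0 + s * I)**2
    return np.linalg.eigvals(L(a1, c1, b1) @ L(a0, c0, b0))
```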
Numerical solution of the EVP (2) shows that both $`|0,1\rangle `$ and $`|0,2\rangle `$ types of solitary wave solutions can be stable in a certain region of their existence domain, see Fig. 2. In the case of $`|0,1\rangle `$ solitons, the appearance of the instability is related to the fact that close to the curve where the total intensity $`I`$ becomes two-humped \[dashed line in Fig. 2\], a pair of internal modes splits from the continuum into the gap. As $`\lambda `$ grows, the corresponding, purely imaginary, eigenvalues $`\beta =\pm i\sqrt{|\mathrm{\Lambda }(\lambda )|}`$ tend to zero, and at a certain critical value $`\lambda =\lambda _{\mathrm{cr}}(s)`$, they coincide at $`\beta =0`$. At this point, an eigenmode with a positive eigenvalue $`\mathrm{\Lambda }`$ emerges, thus generating linear instability (see Fig. 3) with the instability growth rate $`\beta =\sqrt{\mathrm{\Lambda }(\lambda )}`$. For $`|0,2\rangle `$ solutions, the dynamics of internal modes cannot be related in any obvious way to a change in the spatial soliton profiles; nevertheless, the scenario of the instability development is similar to that for two-hump solitons. The dependence of $`\beta `$ on $`\lambda `$, for the $`|0,1\rangle `$ and $`|0,2\rangle `$ soliton families giving rise to two- and three-hump solitary waves, is shown in Fig. 3 for $`s=0.3`$ and $`s=0.8`$, respectively. A decline in the instability growth rate as $`\lambda \to 1`$ (see Fig. 3) is caused by the fact that, in this limit, all multi-hump solitons decompose into a number of the neutrally stable $`|0,0\rangle `$ solitons separated by infinitely growing distance. Numerical analysis in the close vicinity of this limit is unfeasible due to lack of computational accuracy.
Note that, within the gap of the continuous spectrum, there exist several soliton internal modes not participating in the development of the linear instability. Analysis of their origin and influence on the soliton dynamics is beyond the scope of the present Letter.
With the aid of an analytical asymptotic technique , it is possible to show that a perturbation mode with a small but positive eigenvalue, and therefore the linear instability of a general localized solution $`(u,w)`$, appears if the functional $`J(u,w)`$, defined as
$$J=\frac{P_u}{2s}\frac{\partial P_w}{\partial \lambda }-\frac{P_w}{2s}\frac{\partial P_u}{\partial \lambda }+\frac{\partial P_u}{\partial s}\frac{\partial P_w}{\partial \lambda }-\frac{\partial P_w}{\partial s}\frac{\partial P_u}{\partial \lambda },$$
(4)
changes its sign. The threshold condition $`J=0`$ is, in fact, the Vakhitov-Kolokolov stability criterion , generalized to the case of two-parameter vector solitons. In this case, it does not necessarily give the threshold of the leading instability . Therefore, the presence of other instabilities (which are not associated with the condition $`J=0`$ and can have stronger growth rates) is still possible, as in some other cases .
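In practice, $`J`$ can be evaluated from the soliton families by finite differences; a minimal sketch, assuming hypothetical functions `P_u(lam, s)` and `P_w(lam, s)` that return the partial powers of the numerically computed solitons:

```python
def vk_functional(P_u, P_w, lam, s, dlam=1e-3, ds=1e-3):
    """Finite-difference evaluation of the functional J of Eq. (4).

    P_u, P_w -- callables returning the partial powers at given (lam, s);
    these are assumed to come from the relaxation solutions (hypothetical).
    """
    dPu_dl = (P_u(lam + dlam, s) - P_u(lam - dlam, s)) / (2.0 * dlam)
    dPw_dl = (P_w(lam + dlam, s) - P_w(lam - dlam, s)) / (2.0 * dlam)
    dPu_ds = (P_u(lam, s + ds) - P_u(lam, s - ds)) / (2.0 * ds)
    dPw_ds = (P_w(lam, s + ds) - P_w(lam, s - ds)) / (2.0 * ds)
    return (P_u(lam, s) / (2.0 * s) * dPw_dl
            - P_w(lam, s) / (2.0 * s) * dPu_dl
            + dPu_ds * dPw_dl - dPw_ds * dPu_dl)
```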
For two-hump solitons, we have been able to locate the critical curve in the $`(\lambda ,s)`$-plane corresponding to the condition $`J=0`$. Superimposing this curve onto the numerically calculated values $`\lambda _{\mathrm{cr}}(s)`$, we have found a remarkable agreement between the numerical and analytical instability thresholds, as shown in Fig. 2. This gives us the first example of the generalized Vakhitov-Kolokolov criterion for the instability threshold of vector multi-hump solitary waves. For the whole family of $`|0,2\rangle `$ solutions, including three-hump solitons, it appears that $`J\ne 0`$ throughout the entire existence region. Thus, the appearance of the instability of three-hump solutions is not associated with a change of the sign of the functional $`J`$.
To analyze the long-term evolution of multi-hump solitary waves, we perform numerical simulations of the beam propagation for $`|0,n\rangle `$ solitons within the existence domain $`\lambda _n<\lambda <1`$, at fixed $`s`$. First, we use no perturbation, so that the soliton instability can only develop from numerical noise. As long as the soliton maintains its single-humped shape \[see corresponding profiles in Fig. 1(a,d)\], it remains almost insensitive to numerical noise. Moreover, while the $`|0,1\rangle `$ solitons do become two-humped at $`\lambda <\lambda _{\mathrm{cr}}`$, they still remain stable in a wide domain of their parameters until the linear instability threshold is reached. On the contrary, $`|0,2\rangle `$ solitons remain single-humped up to the instability threshold value $`\lambda =\lambda _{\mathrm{cr}}`$, so that all three-hump solitons are indeed unstable. Above the instability threshold (i.e. for $`\lambda _{\mathrm{cr}}<\lambda <1`$), a two-hump soliton splits into two independent single-humped beams as a result of the instability developed from noise \[see Fig. 4(a)\], whereas a three-hump soliton exhibits a more complex symmetry-breaking instability, as shown in Fig. 4(b).
Next, we propagate two-hump (at $`s=0.3`$) and three-hump (at $`s=0.8`$) solitons perturbed by the eigenmode with the largest instability growth rate, i.e. $`\beta _{\mathrm{max}}\approx 0.055`$ and $`\beta _{\mathrm{max}}\approx 0.153`$, respectively. We find that in the presence of a $`6\%`$ amplitude perturbation, the diffraction-induced decay of a soliton can be stabilized by the nonlinearity, whereas its splitting is significantly sped up by the perturbation, compared with splitting due to numerical noise.
To make a link between our stability analysis and experiment, we note that for the experiment the diffraction length is defined as $`L_d=2/sb`$ and the nonlinearity of the medium (SBN:60 crystal) is characterised by the parameter $`b=kr_{\mathrm{eff}}n_b^2E_0`$, where $`r_{\mathrm{eff}}`$ is the effective electro-optic coefficient ($`=280`$ pm/V), $`n_b`$ is the background refractive index $`(=2.3)`$, and $`E_0`$ is the applied electric field ($`2\times 10^5`$ V/m). For strong saturation we have $`s\approx 1`$ and $`L_d\approx 0.2`$ mm. Now, the characteristic instability length $`z_{\mathrm{cr}}`$ can be defined through the maximum growth rate $`\beta _{\mathrm{max}}`$ and, as a result, for two-hump solitons at $`s=0.3`$ we obtain $`z_{\mathrm{cr}}\approx 12.18`$ mm. These estimates indicate that the instability, if it exists, could be detected for two-hump solitons within the experimental setup of Ref. , and therefore stable two-hump solitons have indeed been observed.
Importantly, three-hump solitons so far generated in the experiment belong to a different class of vector solitons which, in our notation, can be identified as $`|1,2\rangle `$ states. The extensive numerical analysis of soliton states $`|1,2\rangle `$ shows that all such solitons are linearly unstable. However, the observation of this instability is beyond the experimental parameters of Ref. .
The complex structure of multi-hump solitons and nonintegrability of the model (1) result in a variety of collision scenarios, which are quite dissimilar to the collisions of multi-hump solitons of the exactly integrable Manakov system . For instance, even linearly stable vector solitons do not necessarily survive soliton collisions. In Figs. 5(a,b) we show two examples of non-elastic interaction of linearly stable $`|0,1\rangle `$ and $`|0,2\rangle `$ solitons.
In conclusion, we have analyzed, analytically and numerically, the stability of multi-hump optical solitons in a saturable nonlinear medium. We have found that multi-hump solitons are members of an extended class of vector solitons which can be linearly stable in a wide region of their existence, although they may be destroyed in collisions. We believe that this is an important physical result that calls for a revision of our understanding of the structure and stability of many types of multi-hump solitary waves in nonintegrable multi-component models, usually omitted in the analysis because of their a priori assumed instability.
We thank M. Segev, D. Christodoulides, M. Mitchell, and A. Buryak for useful discussions. Yu.K. and E.O. are members of the Australian Photonics Cooperative Research Centre. D.S. acknowledges support from the Royal Society of Edinburgh and British Petroleum.
# Size Segregation of Granular Matter in Silo Discharges
## I Introduction
Segregation is often observed in granular matter subject to shear or external excitation. However, there have been very few studies where quantitative information on the development of segregation is available. The nature of segregation depends on many factors such as the geometry and the surface properties of the particles, velocity gradients, and boundary conditions . For example, in vibrated granular matter, segregation is observed, but the size concentration profiles depend on the shape of the container and the direction of the resulting convection . In partially filled rotating cylinders, axial segregation depends on factors such as the filling fraction and rotation rates . The absence of a satisfactory continuum theory to describe the macroscopic properties of granular matter, and the complicated nature of the complex convection patterns which result under the above conditions, make the analysis of the segregation difficult.
In comparison, gravity driven flows offer an alternative for studying segregation where some progress has been made in understanding the flow . One of the simplest types of flow is the slow flow of dense mixtures down an inclined surface. In this case free surface segregation has been studied using bi-disperse particles . Preferential void filling of small particles through the shear layer was identified as the main mechanism of segregation. This flow and the resulting surface segregation are similar to the situation where granular matter is poured into a silo . In this case, the angle of repose quickly develops and the resulting flow is confined to a few layers at the surface. For poly-disperse particles with similar surface properties, the larger particles are found at the bottom of the inclined surface and the smaller particles are found at the top. This segregation can be understood in the same way as the free surface segregation, and can also be understood in terms of a capture model based on work by Bouchaud et al. . In the capture model the system is divided into flowing and static regions, and the smaller particles are assumed to be more easily captured by the static layer than the larger ones.
In this paper we consider flow in a silo which is filled uniformly with bi-disperse particles and then drained from an orifice at the bottom. The nature of the flow is different because particles leave the silo, resulting in different boundary conditions. The resulting flow is more complex due to the development of a free surface and convergent flow near the orifice. Density waves are also known to occur .
To our knowledge, the only quantitative study of segregation in silo discharges is by Arteaga and Tuzun , who measured the volume ratio of a bi-disperse mixture as a function of time, but did not visualize the internal flow. The ratio was found to be independent of time except at the very end of the discharge. It was speculated that the development of velocity gradients in the bulk due to the convergent flow is important for the observed segregation. The questions we address here are (1) Where does the segregation occur in the system? (2) What is the mechanism of segregation, that is, does it occur due to preferential void filling by small particles at the surface or is it due to the development of velocity gradients inside the silo? (3) What role does gravity play in segregation?
To address these questions, we used high resolution digital imaging to obtain detailed quantitative information on the evolution of particle segregation inside a quasi-two-dimensional silo. We find that there are static regions where there is no flow and mobile regions where the flow is very rapid. The latter region is parabolic in shape about the orifice. We observe that particles segregate so that the larger particles are found in the middle of the silo where the flow velocity is maximum. Although the large particles are found in the region with the maximum velocity, segregation actually occurs at the surface. The extent of segregation depends on the size ratio and the relative number of large and small particles in the initial mixture. This observed phenomenon is consistent with the void filling model of segregation developed in the context of flow down a rough inclined plane . However, there are significant differences in the extent of segregation because of the details of the flow.
## II Experimental Apparatus
Figure 1 shows the schematic diagram of the experimental setup. A rectangular silo of dimensions $`89\mathrm{cm}\times 45\mathrm{cm}`$ and a width $`w`$ of 1.27 cm with an orifice at the bottom is used for the experiments. The surface and bulk flow inside the silo is visualized through the glass side-walls of the silo using a 1000 $`\times `$ 1000 pixel non-interlaced Kodak ES 1.0 digital camera. A layer of $`0.5\mathrm{mm}`$ glass beads is glued to the bottom surface to obtain non-slip boundary conditions. The shape of the orifice is rectangular with dimensions $`0.63\mathrm{cm}\times 1.27\mathrm{cm}`$. A valve controls the flow rate through the orifice. The resulting flow is observed to be essentially two-dimensional. Limited experiments were also performed with a silo width of $`2.54\mathrm{cm}`$ to study the effect of the side walls on the flow.
We use glass beads with various sizes and shapes as listed in Table I . The experiments are conducted in a controlled environment where the humidity is kept at 15% and the temperature is about $`38^{\circ }\mathrm{C}`$. The data is insensitive to the temperature, but is sensitive to humidity above 25%.
## III Observations
We first discuss the data corresponding to 0.5 mm mono-disperse glass beads (type M-1) to illustrate how the flow develops as a function of time. A sequence of images of the glass beads as they discharge from the silo is shown in Fig. 2. The discharge rate $`Q=15\mathrm{g}/\mathrm{s}`$. The surface initially develops into an inverted Gaussian shape as shown in Fig. 2b, and no rolling of grains is observed at the surface. Soon after, rolling occurs after the local slope exceeds the angle of repose $`\alpha `$ for the grains. At the same time the surface develops a V-shape which is shown in Fig. 2c. The angle the inclined surface makes with the horizontal equals the angle of repose and remains constant throughout the discharge, which takes approximately 500 s for this case. The length of the inclined surface increases linearly until the top of the surface reaches the side of the silo, and then remains constant.
### A Flow inside the Silo
To identify the regions in motion, we subtract two images separated by a short time interval. Figure 3 shows the result of two images separated by 1.0 s. In the regions where the particles move, the intensities do not subtract to zero and give the speckled points in Fig. 3. Note that particles are completely stationary in the regions which are near the sides and a few layers from the surface. We divide the flowing phase into three regions as indicated in Fig. 3. The surface flow region, where the depth of the mobile layers increases roughly linearly down the incline. The crossover region, where the direction of the mean flow changes from along the surface to pointing toward the orifice. The internal region, where the convergent flow is very far from the surface. The interface between the moving and static regions is plotted in Fig. 4 and is obtained by averaging over two nearest neighbors to average out fluctuations.
Flow near the orifice. The velocity distribution of the granular matter deep inside the silo has been described by a kinematic theory based on a linear approximation for the relation between the horizontal component of the velocity $`u`$ and the gradient of the vertical velocity $`dv/dx`$, that is,
$$u=B\times \frac{dv}{dx}.$$
(1)
where $`B`$ is a constant which has dimensions of length. Combining Eq. (1) with the continuity equation yields an equation for the vertical component of the velocity $`v`$ :
$$\frac{\partial v}{\partial y}=B\times \frac{\partial ^2v}{\partial x^2}.$$
(2)
The solution to Eq. (2) with appropriate boundary conditions in the converging flow regime is
$$v(x,y)=\frac{Q}{\rho \sqrt{4\pi By}}e^{-x^2/4By},$$
(3)
where $`x`$ and $`y`$ measure the horizontal and vertical position from the orifice, and $`\rho `$ is the average mass density of particles relative to random close packing of the beads. Equation (3) can be rewritten to give the equivelocity contours.
$$y=a(v)x^2+2Ba(v)y\mathrm{ln}y,$$
(4)
where $`a(v)=-1/(4B\mathrm{ln}(v\sqrt{4B\pi }\rho /Q))`$. The stream lines are also predicted by this theory and are given by
$$\psi =\frac{Q}{2\rho }\mathrm{erf}(\frac{x}{2\sqrt{4By}}).$$
(5)
Along the stream line $`\psi `$ is a constant, and Eq. (5) gives
$$x=x_i\sqrt{By},$$
(6)
where $`x_i`$ is a constant that depends on the streamline.
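The kinematic predictions above are straightforward to evaluate. A minimal sketch follows; the numerical values of $`Q`$, $`\rho `$ and $`B`$ would be taken from the fits discussed below:

```python
import numpy as np

def v_speed(x, y, Q, rho, B):
    """Downward speed of Eq. (3) at horizontal position x and
    height y above the orifice (y > 0)."""
    return Q / (rho * np.sqrt(4.0 * np.pi * B * y)) * np.exp(-x**2 / (4.0 * B * y))

def streamline_x(x_i, y, B):
    """Horizontal position along the streamline labeled by x_i, Eq. (6)."""
    return x_i * np.sqrt(B * y)
```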
Predictions similar to Eqs. (3) and (6) have been made using a diffusing void model . In this model voids are assumed to diffuse from the orifice to the surface. Using a biased random walk model for the motion of the voids, the mean flow velocity can be estimated by calculating the frequency of a walker visiting a lattice site inside the silo.
We compare our data to the prediction of the kinematic theory with $`B`$ as a fitting parameter in Fig. 4. The theory is in reasonable agreement with the data, especially considering the simplicity of the model. However, as can be seen in Fig. 4, if we consider only the first term in Eq. (4), we obtain a better fit to the data. We summarize the results for the parameter $`a(v)`$ and the kinematic model parameter $`B`$ from fits to Eq. (4) in Table II. No significant dependence of the shape of the mobile region on the flow rate and the width of the system was found, but $`B`$ was found to depend on the size of the beads, which is consistent with the observations of Tuzun and Nedderman .
Flow near the surface. The description of flow using the kinematic model works only near the orifice. In the regions near the surface which are away from the sides, the flow is confined to a few layers. The number of layers increases linearly down the inclined surface to about 10 layers. As the silo empties, the surface moves down, and the static layers begin to flow. The depth dependence of the velocity at the point where 10 layers are moving is approximately linear, that is,
$$v_s=v_0(1-d/10)\quad \text{for}\quad d<10,$$
(7)
where $`v_0`$ is the average velocity of the particles at the surface, and $`d`$ is the depth from the surface normalized by the mean particle size.
If all particles which leave the surface flow region eventually leave the silo from the orifice, then the velocity can be approximately related to the flux of material leaving the silo. Therefore the vertical and horizontal component of the velocity at the surface can be written as:
$$v=\frac{Q}{10d\rho }\mathrm{sin}(\alpha )\quad \text{and}\quad u=\frac{Q}{10d\rho }\mathrm{cos}(\alpha ).$$
(8)
Therefore the velocity of the particles in the crossover regime changes from that given by Eq. (8) to that given by Eqs. (1) and (3). The direction of the mean flow changes from being parallel to the surface to pointing down towards the orifice. This crossover region is approximately $`15\mathrm{cm}\times 15\mathrm{cm}`$ corresponding to hundreds of grains. We discuss in Sec. IV the details of the flow in the crossover regime and its effect on the observed segregation.
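For reference, Eqs. (7) and (8) amount to a simple parameterization of the surface layer; a small sketch, with units and the meaning of $`d`$ taken as in the text:

```python
import math

def surface_layer_velocity(v0, d):
    """Linear depth profile of Eq. (7): v_s = v0 (1 - d/10) for d < 10,
    where d is the depth in units of the mean particle size."""
    return v0 * (1.0 - d / 10.0) if d < 10 else 0.0

def surface_velocity_components(Q, d, rho, alpha):
    """Vertical and horizontal surface velocities of Eq. (8);
    alpha is the angle of repose in radians."""
    v = Q / (10.0 * d * rho) * math.sin(alpha)
    u = Q / (10.0 * d * rho) * math.cos(alpha)
    return v, u
```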
We note that the flow near the surface and around obstacles has also been modeled by the diffusing void model . This model gives a crossover which is very sharp, of the order of one grain diameter. The scale over which the crossover occurs in our experiments is much broader, as can be seen in Fig. 4.
### B Segregation
With this description of flow inside a discharging silo, we now report our experiments on bi-disperse glass particles to study segregation. To visualize the segregation, two sizes of beads with different colors but identical surface properties are used. The symbol $`P_W`$ is used to specify the percentage of larger particles by weight in the initial mixture in the silo. We first discuss the development of segregation when 1.2 mm yellow glass particles (M-4) are mixed with 0.6 mm blue glass particles (M-2) and therefore the diameter ratio is $`r=2`$.
We pour the mixture ($`P_W=10\%`$) into the silo as uniformly as possible. The development of the flow is similar to the mono-disperse case described earlier, but we also observe that the density of the larger grains increases near the surface and the density of smaller grains increases in the moving layers below the surface. The evolution of segregation is shown in Fig. 5.
To parameterize the segregation we measure the ratio of the two types of beads in a horizontal narrow rectangular region at a height of 5 cm above the orifice by measuring the light intensity. The light intensity is a monotonic function of the density ratio of the two kind of particles. This function is determined by using known weight ratios of particles in a separate series of calibration experiments. The density ratio of the particles is plotted as a function of horizontal position and time in Fig. 6. We observe from Fig. 6 that the density of larger particles increases in the midpoint (directly above the orifice) as a function of time. The percentage of larger particles at the midpoint increases from 10%, corresponding to the initial mixture, to about 100%.
To further characterize the evolution of the segregation, we have plotted the density fraction of the large beads at the center point as a function of time (see Fig. 7(a)). This density is called the “segregation parameter” $`s(t)`$. From Fig. 7(a) it can be seen that there is no segregation for the first 50 s after the start of the flow, as can also be seen from Fig. 6. During this time we note that the velocity gradients of the grains in the silo are fully developed. The fact that $`s(t)`$ increases only after $`t=50`$ s indicates that the velocity gradients deep inside the silo are not responsible for segregation. We observe that a thin band of larger particles initially appears at the surface and grows down towards the orifice (see Fig. 5). Because the observed area is near the bottom of the silo (5 cm above the orifice), it takes about 50 s for the larger particles to travel to the measured region for a flow rate of 15.0 g/s. After 50 s, $`s(t)`$ increases to about 1 and remains constant.
Effect of size ratio. We repeated the experiment with different size ratios to investigate the effect of different sizes on the segregation rates. For a mixture of 0.6 mm black glass beads (M-2) and 0.7 mm red glass beads (M-3) with $`P_W=15\%`$, the size ratio $`r\approx 1.2`$. Segregation is observed even for such a small size difference. A separate set of calibration curves for the density ratio of the particles was obtained in this case. The density distribution as a function of position and time with a flow rate of 15 g/s is similar to that for the larger size ratio, but the segregation does not occur as quickly and is weaker in comparison. As seen in Fig. 7(b), $`s(t)\approx 0.15`$ for $`t\lesssim 50\mathrm{s}`$, and then grows to about 0.3 – 0.35. Therefore $`s(t)`$ for $`r=1.2`$ is substantially smaller than for $`r=2.0`$.
For $`r=1.2`$, not only is the extent of segregation lower, but the interface between the regions containing the large and the small particles is more diffuse. The mass density ratio of the larger beads in the narrow rectangular region under observation is shown in Fig. 8 for $`r=2.0`$ and $`r=1.2`$. Both mass density ratios may be fitted by a Gaussian, with the Gaussian for higher $`r`$ substantially narrower. In Section IV, we argue that the diffuse nature of the interface between large and small particles for $`r=1.2`$ is a result of the nature of the flow in the crossover region.
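A Gaussian fit of the kind used for Fig. 8 can be reproduced with standard least-squares tools; the profile below is synthetic placeholder data, not the measured one.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, width):
    """Gaussian profile with amplitude amp, centre x0 and width parameter width."""
    return amp * np.exp(-((x - x0) ** 2) / (2.0 * width ** 2))

x = np.linspace(-10.0, 10.0, 41)                                  # position (cm)
y = gaussian(x, 1.0, 0.0, 2.0) + 0.02 * np.random.randn(x.size)   # synthetic data

popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 3.0])
print(f"fitted width = {popt[2]:.2f} cm")   # narrower for larger size ratio r
```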
It can also be seen from Fig. 7 that the saturation point of the segregation parameter $`s(t)`$ depends on the diameter ratio $`r`$ and the initial weight percentage $`P_W`$. If the segregation is not complete, there are fluctuations in the number ratio of the particles about the saturation value. The fluctuations are stronger when the saturation is lower.
Experiments were also performed with polydisperse particles (P-1 and P-2 in Table I), and segregation was also observed with larger particles found at the center as in the bi-disperse cases already discussed. No quantitative data was obtained for polydisperse particles.
Effect of number ratio of large and small particles. We considered the effect of higher values of $`P_W`$ on the extent of segregation by doing experiments with $`P_W=30\%`$ and 50%. The results of these experiments are also plotted in Fig. 7(b). The overall development of segregation is similar, but the extent depends on $`P_W`$. From Fig. 7(b), we observe that the saturation level of segregation increases for higher $`P_W`$. The saturation value is found to fluctuate on the order of 5%.
Effect of flow rate. We repeated all the experiments with a flow rate of $`3.0\pm 0.5\mathrm{g}/\mathrm{s}`$, which corresponds to the slowest rate for which continuous flow is possible in our system. The behavior of the segregation parameter $`s(t)`$ is similar to that at the faster flow rate of 15 g/s and is shown in Fig. 7(c). For the slower flow rate, no segregation occurs for $`t<200`$ s. This time is longer because the segregated particles at the surface take a longer time to arrive in the region where we monitor $`s(t)`$. The ratio of times for the development of the segregation is approximately the same as the ratio of the flow rates. There are small changes in the value of $`s(t)`$, but these changes are of the same order as the errors in the calibration of the density ratio.
## IV Discussion
Because the data clearly suggests that segregation occurs at the surface (see Fig. 9), we first explore the relevance of a model proposed by Savage and Lun based on their experiments on simple flows on an inclined plane. They considered a shear flow of a thin layer of bi-disperse glass beads down a rough inclined plane and obtained quantitative data for the development of segregation. The beads were collected from different heights in the layer using an arrangement of baffles which directed the beads into different bins . They considered two mechanisms to explain the development of segregation: (i) the preferential filling of voids in a lower layer by smaller particles, and (ii) the expulsion of particles to the top layer (which is not size dependent) to make the net flux through a layer zero.
The probability of inter-layer percolation of particles is calculated by considering the relative probabilities of small and large particles falling into possible voids as a function of the size and the number ratio of the two types of particles (see Fig. 10). The resulting probability is an exponential function of particle sizes and average void size . Therefore, the probability for small particles to fall into voids is significantly higher than that for larger particles if the voids created are of a similar size. By then assuming a linear profile for the velocity as a function of depth in the layer and a constant velocity along the inclined plane, they were able to calculate relations for the concentration profiles of the particles as a function of depth in the layer and position down the inclined plane. Their model predicts that the particles segregate completely after traveling a certain distance down the incline which depends on the size ratio and volume density ratio of the initial mixture, and the angle of inclination of the plane.
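The qualitative content of mechanism (i) can be illustrated with a toy calculation; the sketch below keeps only the schematic exponential dependence on particle size over mean void size, not the full Savage and Lun expression, and the mean void size used is an assumed number.

```python
import numpy as np

def percolation_probability(d_particle, d_void_mean):
    """Toy inter-layer capture probability: exponential in particle size
    over mean void size (prefactors of the full theory are omitted)."""
    return np.exp(-d_particle / d_void_mean)

d_void_mean = 0.5   # mean void size in units of the small-grain diameter (assumed)
for d in (1.0, 1.2, 2.0):   # particle diameters relative to the small grain
    print(d, percolation_probability(d, d_void_mean))
# Smaller particles are exponentially more likely to fall into a given void.
```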
In our experiments in silo discharges, the flow in the surface flow region (see Fig. 3) is similar to that considered by Savage and Lun . Therefore, we can qualitatively explain the observed segregation of particles inside the silo by the mechanism of preferential filling of voids by smaller particles near the surface. Particles are then carried along the streamlines which results in the larger particles being in the central region of the silo where the flow velocity is highest (also see Fig. 9). However, a quantitative comparison with the predictions of Ref. cannot be done because of significant differences in the underlying flow. In our experiments, the flux of particles comes from the static layers being converted to mobile layers, whereas in the simple inclined flow, the particles enter at one point at the top of the inclined plane. In addition, the flow near the bottom of the surface acquires a significant vertical component.
We also find that the inclined surface grows linearly as the silo discharges until the surface reaches the edge of the silo and then remains constant. If the segregation depends on the length of the surface as calculated in Ref. , then we would expect $`s(t)`$ to increase during the time the surface length increases. In our experiments, $`s(t)`$ saturates to values less than 100% for small $`r`$ over a range of values of $`P_W`$ during a time which is less than the time for which the surface length increases. The complications introduced by the additional features in the flow have to be taken into account to explain saturation of less than 100% in $`s(t)`$ for small $`r`$.
One possible explanation for the saturation is as follows. In addition to voids being created due to fluctuations in the inter-layer velocity, the vertical component of the velocity becomes significant in the crossover regime. Thus particles create larger voids behind them in the crossover regime (see Fig. 3) in comparison to the surface flow regime. Therefore, the probability of finding larger voids where large particles can fall into increases in the crossover regime. At some point on the inclined surface, the difference between the probability of a void being filled by a small or large particle becomes negligible. This point might be expected to occur higher on the inclined plane for smaller size differences resulting in lower saturation values for $`s(t)`$.
This saturation appears to occur for $`r=1.2`$ as seen in Fig. 8. If the size ratio $`r`$ is large enough, the probability for smaller particles to fill a void remains much higher and complete segregation is seen for $`r=2`$ in Fig. 8.
It might be expected that a change in flow rate would affect segregation because the creation of voids depends on the fluctuations and the value of the mean velocity. Because the void filling mechanism is important for segregation, the quantitative progress of segregation may depend on the size ratio and flow rate. However, these effects do not appear to be important over the range of flow rates available in our experiments.
The data for $`t<50`$ s and $`Q=15`$ g/s in Fig. 7 also shows that the void filling mechanism is unimportant if the direction of flow is in the same direction as the gravitational field as is the case in the deep flow regime of the silo. Segregation of bi-disperse particles in the absence of a gravitational field has been considered in a “collisional” flow by Jenkins . The difference in the scattering rates for the two different size particles is anticipated to give rise to segregation for rapid flow. However, the velocities of the particles in the silo may be too small for such effects to be important.
## V Summary
In summary, we have reported experiments on segregation of bi-disperse mixtures of glass beads in a discharging silo. Using digital imaging we characterized the flow and measured the development of segregation as a function of position and time. The flow deep inside the silo is approximately characterized by the kinematic model , but more theoretical developments are required to model the flow near the surface. Segregation is observed even for very small size ratios. We observe that segregation occurs at the surface and not in the bulk, where velocity gradients are also present. We quantitatively characterize the development and progress of the segregation using the mass ratio $`s(t)`$. We also obtained quantitative information about the size distribution of the particles inside the silo (see Fig. 8 for example). Our experiments also indicate that the segregation progresses very quickly if the surface flow is not along the direction of the gravitational field.
A qualitative explanation of the distribution of concentration of large and small particles can be given by using the void filling mechanism. However, a quantitative explanation requires a better understanding of the velocity profiles near the surface and their role in creating the voids which drive flow and segregation. Quantitative experimental data for the spatial and temporal development of segregation are scarce, and the data presented in this paper provide a guide to the development of models of segregation.
We thank D. Hong, L. Mahadevan and H. Gould for useful discussions, and J. Norton for technical assistance. This work was partially supported by the Donors of the Petroleum Research Fund, administered by the American Chemical Society, and one of us (A. K.) was also funded by the Alfred P. Sloan Foundation.
# STATUS OF THE STUDY OF THE RARE DECAY 𝐊⁺→𝜋⁺𝜈𝜈̄ AT BNL
## 1 Theoretical Motivation
The $`K^+\to \pi ^+\nu \overline{\nu }`$ decay is a flavor changing neutral current process induced in the Standard Model (SM) by loop effects in the form of penguin and box diagrams. The decay is sensitive to top-quark effects and provides an excellent route to determine the absolute value of $`V_{td}`$ in the Cabibbo-Kobayashi-Maskawa matrix. Long-distance contributions are negligible and the hadronic matrix element is extracted from the $`K^+\to \pi ^0e^+\nu `$ decay. The theoretical uncertainty is 7% from the charm-quark contribution in the next-to-leading-logarithmic QCD calculations .
The branching ratio is represented in the SM as ($`X(x_t)`$ is the Inami-Lim loop function with the QCD correction, $`x_t\equiv m_t^2/m_W^2`$, and $`\rho _0`$ is due to the charm contribution):
$$B(K^+\to \pi ^+\nu \overline{\nu })=4.57\times 10^{-11}\times A^4\times X(x_t)^2\times [(\rho _0-\rho )^2+\eta ^2]$$
(1)
in the Wolfenstein parameterization $`A`$, $`\rho `$ and $`\eta `$. With the $`\rho `$-$`\eta `$ constraints from other K and B decay experiments, the SM prediction of the branching ratio is $`(0.6-1.5)\times 10^{-10}`$. New physics beyond the SM could affect the branching ratio. In addition, the two-body decay $`K^+\to \pi ^+X^0`$, where the $`X^0`$ is a weakly-interacting light particle such as a familon , could also be observed as a ‘$`\pi ^+`$ plus nothing’ decay. Since the effects of new physics are not expected to be too large, a precise measurement of a decay at the level of $`10^{-10}`$ is required.
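As a numerical orientation, Eq. (1) can be evaluated with indicative late-1990s parameter values; the numbers below are assumptions inserted for illustration, not fit results from this report.

```python
def br_kpnn(A, rho, eta, X_t, rho0):
    """Standard Model branching ratio of Eq. (1)."""
    return 4.57e-11 * A**4 * X_t**2 * ((rho0 - rho)**2 + eta**2)

# Illustrative inputs: Wolfenstein A, rho, eta; loop function X(x_t);
# charm contribution rho_0.
print(br_kpnn(A=0.84, rho=0.16, eta=0.33, X_t=1.53, rho0=1.40))
# ~0.9e-10, inside the quoted SM range of (0.6-1.5) x 10^-10
```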
## 2 E787 Detector and Analysis
Experiment 787 (a collaboration of BNL, Fukui, KEK, Osaka, Princeton, TRIUMF and Alberta) at the Alternating Gradient Synchrotron (AGS) of BNL performed an initial search in 1989-91 and obtained the 90% confidence level upper limit of $`2.4\times 10^{-9}`$. Following major upgrades of the detector and the beam line, E787 took data from 1995 to 1998.
E787 measures the charged track emanating from stopped $`K^+`$ decays. The $`\pi ^+`$ momentum from $`K^+\to \pi ^+\nu \overline{\nu }`$ is less than 227 MeV/$`c`$ as shown in Figure 1, while the major background sources of $`K^+\to \pi ^+\pi ^0`$ ($`K_{\pi 2}`$, 21.2%) and $`K^+\to \mu ^+\nu `$ ($`K_{\mu 2}`$, 63.5%) are two-body decays and have monochromatic momenta of 205 MeV/$`c`$ and 236 MeV/$`c`$, respectively. The region “above the $`K_{\pi 2}`$” between 211 MeV/$`c`$ and 230 MeV/$`c`$ is adopted for the search.
The E787 detector (Figure 2) is a solenoidal spectrometer with the 1.0 Tesla field directed along the beam line. Slowed by a BeO degrader, kaons stop in the scintillating-fiber target at the center of the detector. A delayed coincidence requirement ($`>`$ 2 nsec) of the timing between the stopping kaon and the outgoing pion helps to reject backgrounds of pions scattered into the detector or kaons decaying in flight. Charged decay products pass through the drift chamber, lose energy by ionization loss and stop in the Range Stack made of plastic scintillators and straw chambers. Momentum, kinetic energy and range are measured to reject the backgrounds by kinematic requirements. For further rejection of $`\mu ^+`$ tracks, the output pulse-shapes of the Range Stack counters are recorded and analyzed so that the decay chain $`\pi ^+\to \mu ^+\to e^+`$ is identified in the stopping scintillator. $`K_{\pi 2}`$ and other decay modes with extra particles ($`\gamma `$, $`e`$, …) are vetoed by the in-time signals in the hermetic shower counters.
Extremely effective background suppression is required in this experiment, and reliable estimation of the rejections is essential to interpret potential observations. Data rather than Monte Carlo are used to do background studies. A set of background samples is prepared by reversing some of the selection cuts (for example, the $`K_{\pi 2}`$ backgrounds are rejected by kinematic cuts and photon veto cuts; by reversing the veto and requiring photons from $`\pi ^0`$ in the detector, the tails in the $`\pi ^+`$ kinematic distributions are studied, and by picking up events with the track momentum/energy/range in the $`K_{\pi 2}`$ peak and applying the photon veto cuts, the rejection of the veto is checked), which also assures that the development of the cuts and estimates of the background levels are made without looking at the candidate events (“blind” analysis). Furthermore, background studies are performed with partial data samples and the results are confirmed using the full sample. Possible correlations of the cuts are investigated. The background levels around the signal region are predicted by loosening cuts and are confirmed using data. Background level shapes inside the signal region are calculated in advance in the form of likelihood functions.
In the 1995 data set, with a total of $`1.49\times 10^{12}`$ kaons stopping in the target and an acceptance of $`0.16\%`$, one event, shown in Figure 3, was observed in the signal region. The estimated background level ($`0.08\pm 0.03`$ events) corresponds to a branching ratio of $`3\times 10^{-11}`$. The measured branching ratio is $`(4.2_{-3.5}^{+9.7})\times 10^{-10}`$ ($`0.006<|V_{td}|<0.06`$), which is consistent with the SM prediction.
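The central value quoted above follows directly from the exposure and acceptance (ignoring the small estimated background):

```python
n_stopped_kaons = 1.49e12   # stopped K+ in the 1995 data set
acceptance = 0.0016         # total acceptance (0.16%)
n_candidates = 1            # events observed in the signal region

branching_ratio = n_candidates / (n_stopped_kaons * acceptance)
print(branching_ratio)      # ~4.2e-10; the asymmetric errors are Poisson
```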
## 3 E787 in 1996, 97 and 98
The total acceptance of 0.16% includes the phase space above the $`K_{\pi 2}`$ (0.16), the solid angle acceptance of the charged track (0.39), the $`\pi ^+`$ stopping without nuclear interaction or decay-in-flight in the detector (0.50), and the acceptance of $`\pi ^+`$ identification with the $`\pi ^+\to \mu ^+\to e^+`$ decay chain (0.25) needed to achieve the $`\mu ^+`$ rejection of $`10^5`$. The main limitation, the requirement of no extra hits in the detector at the decay time, is applied in the analysis and costs around 40% of the acceptance. The strategy of E787 for the post-1995 runs was therefore to limit the instantaneous rates and attempt to gain in the overall number of stopped kaons.
In the experiment, only 20% of the kaons in the beam are slowed down and stop in the target while the remainder are lost or scattered out in the degrader. The rates in the E787 detector are proportional to the incident kaons, not to the stopped kaons in the target. That means, with the same incident flux and a lower beam-momentum, the kaon stopping fraction increases while accidental hits decrease. Also, using additional proton intensity to extend the spill length without increasing the instantaneous rate raises the number of kaon decays per hour without impacting the acceptance.
By reducing the beam-momentum from 790 MeV/$`c`$ in 1995 to 710 MeV/$`c`$ in 1997, the fraction of incident kaons stopping in the target was improved by 44%. The AGS spill length was extended from 1.6 sec to 2.2 sec in 1998. Other improvements in the trigger and readout provide acceptance gains of about 20%. In the off-line analysis, kinematic codes with better resolution and rejection-power were developed, and in the current study of the combined 1995-97 data sets further rejection, corresponding to a background level of $`1\times 10^{-11}`$, has been achieved with minimal loss in acceptance. The analysis is ongoing. The expected sensitivity for the entire E787 data set should reach less than $`0.9\times 10^{-10}`$.
## 4 The New E949 Experiment
From 1999 the Relativistic Heavy Ion Collider (RHIC) at BNL starts operation, and the AGS is primarily used as the RHIC injector. However, it is required for only $`\sim 4`$ hours per day for this purpose; the rest of the time can be used for the proton program. The new experiment 949 (a collaboration of Alberta, BNL, Fukui, INR, KEK, Osaka and TRIUMF; information is available at http://www.phy.bnl.gov/e949/) is to continue the experimental study of $`K^+\to \pi ^+\nu \overline{\nu }`$ at the AGS based on the E787 experience. An additional photon veto system will be installed in the detector to improve the photon rejection. E949 aims to reach a sensitivity of $`10^{-11}`$ or less in two to three years of operation.
## 5 Summary
E787 has observed evidence for the $`K^+\to \pi ^+\nu \overline{\nu }`$ decay in the 1995 data set and, with the entire data set through 1998, expects to reach a sensitivity better than $`0.9\times 10^{-10}`$. The new E949 continues the study at the BNL-AGS.
## Acknowledgments
This research was supported in part by the U.S. Department of Energy under Contracts No. DE-AC02-98CH10886 and No. W-7405-ENG-36, and Grant No. DE-FG02-91ER40671, by the Ministry of Education, Science, Sports and Culture of Japan (Monbusho), and by the Natural Sciences and Engineering Research Council and the National Research Council of Canada. The author would like to acknowledge support from Grant-in-Aid for Encouragement of Young Scientists by Monbusho.
# Many-Body Renormalization of Semiconductor Quantum Wire Excitons: Absorption, Gain, Binding, Unbinding, and Mott Transition
## Abstract
We consider theoretically the formation and stability of quasi-one dimensional many-body excitons in GaAs quantum wire structures under external photoexcitation conditions by solving the dynamically screened Bethe-Salpeter equation for realistic Coulomb interaction. In agreement with several recent experimental findings the calculated excitonic peak shows very weak carrier density dependence up to (and even above) the Mott transition density, $`n_c\approx 3\times 10^5`$ cm<sup>-1</sup>. Above $`n_c`$ we find considerable optical gain demonstrating compellingly the possibility of one-dimensional quantum wire laser operation.
An exciton, the bound Coulombic (“hydrogenic”) state between an electron in the conduction band and a hole in the valence band, is an (extensively studied) central concept in semiconductor physics. Recent interest has focused on low dimensional excitons in artificially structured semiconductor quantum well or wire systems where carrier confinement may substantially enhance the excitonic binding energy leading to novel optical phenomena. In this Letter we consider the formation, stability, and optical properties of one dimensional (1D) excitons in semiconductor quantum wires, a problem which has attracted a great deal of recent experimental \[1-3\] and theoretical \[4-6\] attention. Our motivation has been a number of recent puzzling experimental observations , which find the photoluminescence emitted from an initially photoexcited semiconductor quantum wire plasma to be peaked essentially at a constant energy independent of the magnitude of the photoexcitation intensity. This is surprising because one expects a strongly density-dependent “red shift” in the peak due to the exchange-correlation induced band gap renormalization (BGR) (i.e. a density-dependent shrinkage of the fundamental band gap due to electron and hole self-energy corrections), which should vary strongly as a function of the photoexcited electron-hole density \[7-9\]. This striking lack of any dependence of the observed photoluminescence peak energy on the photoexcitation density has led to the suggestion that the observed quantum wire photoluminescence may be arising entirely from an excitonic (as opposed to an electron-hole plasma (EHP)) recombination mechanism, and the effective excitonic energy is, for unknown reasons, a constant (as a function of carrier density) in 1D quantum wires. This, however, introduces a new puzzle because one expects the excitonic level to exhibit a “blue shift” (i.e. an increase) as a function of carrier density as the Coulomb interaction weakens due to screening by the finite carrier density leading to a diminished excitonic binding energy. Thus the only way to understand the experimental observation is to invoke a near exact cancelation between the red-shift arising from the self-energy correction induced BGR and the blue-shift arising from screening induced excitonic binding weakening. In this Letter, focusing on the photoexcited quasi-equilibrium regime, we provide the first quantitative theory for this problem by solving the full many-body dynamical Bethe-Salpeter equation for 1D excitons. We include both self-energy renormalization and vertex correction (arising from the Coulomb interaction) on an equal footing under high photoexcitation conditions. We find that, in agreement with experimental observations, our calculated effective excitonic energy (indicating the luminescence peak frequency) remains essentially a constant (with an energy shift of less than 0.5 meV) as a function of 1D carrier density $`n`$ for $`n<n_c\approx 3\times 10^5`$ cm<sup>-1</sup> with the system making a Mott transition from an insulating exciton gas of bound electron-hole pairs ($`n<n_c`$) to an EHP ($`n>n_c`$) at $`n=n_c`$. For $`n>n_c`$ we find strong optical gain in the calculated absorption spectra.
For our results to be presented here we have considered quantum wire parameters corresponding to the T-junction structure of width 70 Å in both transverse directions with only the lowest 1D subband occupied by the carriers. But our results and conclusions should be generically valid for arbitrary 1D quantum wire confinement (e.g. the V-groove wire of Ref. 2). The many-body exciton is given by the so-called Bethe-Salpeter equation for the 2-particle Green’s function which is shown diagrammatically in Fig. 1. The many-body diagrams shown in Fig. 1 correspond to a rather complex set of coupled non-linear integral equations which must be solved self-consistently with the bare interaction being the Coulomb interaction. These equations are notoriously difficult to solve without making drastic approximations. We use the parabolic band effective mass approximation considering the highest valence and the lowest conduction band only. The simplest approximation is to neglect all many-body effects and consider the one-electron problem when self-energy (Fig. 1(b)) and screening (Fig. 1(c)) effects disappear leaving the standard excitonic binding problem (Fig. 1(a)) for a conduction band electron and a valence band hole interacting via the effective “one dimensional” Coulomb interaction. Even this zeroth order exciton problem for quantum wires is far from trivial, however, because one must include proper quantum confinement effects in the Coulomb interaction matrix elements appropriate for the specific quantum wire geometry of interest. Not surprisingly a rather large theoretical literature exists in treating this zeroth order one-electron quantum wire exciton problem, which is the effective dilute or zero density ($`n\to 0`$) limit of the many-body problem of interest to us. We include quantum wire confinement effects appropriate for a T-junction system in all the results presented in this paper.
In carrying out the full many-body dynamical calculation for the Bethe-Salpeter equation we are forced to make some approximations. Our most sophisticated approximation uses the fully frequency dependent dynamically screened electron-hole Coulomb interaction in the single plasmon-pole random phase approximation (Fig. 1(c)), which has been shown to be an excellent approximation for 1D quantum wire dynamical screening. It is essential to use the actual Coulomb interaction in solving this problem and the simplistic model interactions (such as the delta function zero range interaction used recently in Ref. 6) are not particularly meaningful from either a theoretical perspective or in understanding experimental data. In addition to our full dynamical screening theory (which is computationally extremely difficult) we have also carried out a number of simpler approximations (to be described below) in order to assess the quantitative contributions of various physical mechanisms to the 1D many-body exciton formation. For the self-energy correction we use the single-loop GW diagram shown in Fig. 1(b). Ward Identities then fix the vertex correction, entering Fig. 1(a), to be the appropriate ladder integral equation.
Before solving the full Bethe-Salpeter equation, it is instructive to study the excitonic and EHP effects separately by treating the influence of the plasma on the excitonic states as a perturbation . Using an effective Hamiltonian derived from the Bethe-Salpeter equation, we can obtain the exciton energy by minimizing the energy expectation value variationally through a 1s excitonic trial wave function. The BGR is calculated by the GW approximation (Fig. 1(b)). Note that the variational calculation is quantitatively valid only when the exciton-plasma hybridization is not particularly important. In Fig. 2 we show our calculated zero-temperature (variational) excitonic energy and BGR separately as a function of 1D electron-hole density. The full dynamical screening solution is shown as the solid line and the quasi-static screening approximation (described below) is shown as the dashed line. For the purpose of comparison we also show as an inset in Fig. 2 the purely one-electron static screening result where the electron-hole interaction is modeled by the density dependent statically screened 1D Coulomb interaction, and all many-body effects (e.g. BGR) are ignored. The exciton binding energy shows a monotonic decrease (“blue-shift”) in the inset (induced by static screening) as the exciton eventually merges with the band continuum with a Mott transition density $`n_c\approx 10^5`$ cm<sup>-1</sup>. The quasi-static approximation , shown as dashed lines in Fig. 2, involves making the screened exchange plus Coulomb hole approximation in the self-energy diagrams neglecting the correlation hole effect. The simpler approximations (static screening and quasi-static) are done in order to assess the importance of various terms in the full dynamical Bethe-Salpeter equation which is extremely difficult and computationally time-consuming to solve in the RPA dynamical screening approximation.
The Mott transition may be thought of as the unbinding of the bound electron-hole pair in the exciton to a free electron and a free hole — it is therefore effectively an interaction-induced insulator to metal transition which occurs as the exciton gas becomes an EHP at some high density ($`n_c`$). The statically screened single exciton behavior shown in the inset of Fig. 2 disagrees completely with the experimental finding of an approximately constant excitonic peak independent (at least in some finite range) of the free carrier density. We find that this large blue shift is not cancelled by the many-body self-energy effects within the same static screening approximation. Therefore, it is essential to consider the dynamical effects when one calculates the excitonic effects in quasi-1D quantum wire systems. Inclusion of dynamical many-body effects, shown in the results in the main part of Fig. 2, qualitatively modifies the situation: (1) the effective many-body excitonic energy is almost the same in the low density limit ($`13`$ meV for $`n<10^4`$ cm<sup>-1</sup>) in all the approximations; (2) for density between $`10^4`$ and $`10^5`$ cm<sup>-1</sup> the exciton energy has a few meV red-shift in the quasi-static approximation and almost no shift (less than 0.5 meV blue-shift) in the dynamical screening approximation; (3) the Mott transition density for the quasi-static approximation is about $`10^5`$ cm<sup>-1</sup>, while it is about $`3\times 10^5`$ cm<sup>-1</sup> for the dynamical theory; (4) below $`n_c`$ our variational solution corresponds to an excitonic wavefunction which is that of a bound electron-hole pair in the 1s hydrogenic state with a radius of about 100-500 Å , and this description is approximately valid with a constant (variational) ground state energy upto $`n_c`$ ; (5) above $`n_c`$ the calculated effective excitonic wave function is completely delocalized (with a very large radius) and the EHP becomes the dominant state of the system; (5) the quasi-static approximation, while being qualitatively valid, is quite poor quantitatively compared with our dynamical screening approximation.
In Fig. 3, we show our calculated absorption and gain spectra by solving the full Bethe-Salpeter equation in the quasi-static and the dynamical screening approximations. The integral equation for the two-particle Green’s function (Fig. 1(a)) is solved by the matrix inversion method with a singular kernel which arises from the singularity of the Coulomb interaction. The full dynamical screening approximation (which has never been solved in the literature before) has a multi-singular kernel with multiple momentum-dependent singularities (poles of the integrand) which arise from the many-body hybridization of photons, single particle excitations, and plasmons. This makes the usual singularity-removal method ineffective. This fact forces us to use a rather large matrix (about $`1500\times 1500`$ in a Gaussian quadrature) in the matrix inversion method in order to get good overall accuracy. We now discuss the important features of Fig. 3: (1) There are generally two absorption peaks in the low density ($`n<10^4`$ cm<sup>-1</sup>) spectra, one is the exciton peak at 1537 meV and the other one is the band edge peak at, for example, 1547.5 meV for $`n=10^2`$ cm<sup>-1</sup> in Fig. 3(b). The exciton peak has much larger oscillator strength than the band edge peak. (2) At low densities ($`n<10^4`$ cm<sup>-1</sup>) the exciton peak does not shift much ($`1537`$ meV) with increasing carrier density (in either approximation), indicating the effective constancy of the exciton energy; (3) at higher densities, however, the quasi-static approximation produces a red-shift in the excitonic peak by a few meV, consistent with the result shown in Fig. 2 which was obtained variationally. (4) Consistent with the variational energy shown in Fig. 2, the excitonic peak of the full dynamical screening approximation is almost a constant (with only a 0.5 meV blue-shift) up to $`n_c`$. (5) Below the Mott density ($`n_c\approx 3\times 10^5`$ cm<sup>-1</sup>) the oscillator strength of the excitons decreases rapidly as the carrier density increases in the quasi-static approximation; however, in the full dynamical theory the strength of the exciton peak remains almost a constant with increasing carrier density, indicating the interesting prospect of excitonic lasing in 1D quantum wires. (6) In the dynamical screening approximation, considerable excitonic gain is achieved for $`n>n_c`$ without any observable energy shift in the spectrum. We find that at very high densities ($`n>10^6`$ cm<sup>-1</sup>) the excitonic features in the absorption spectra are smeared out by the EHP continuum, and the BGR induced red-shift is observed. These very high density results will be presented elsewhere .
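The numerical strategy described above (Gaussian quadrature plus matrix inversion) is a Nystrom-type discretisation; a minimal toy version for a smooth, non-singular kernel is sketched below, whereas the actual Bethe-Salpeter kernel is singular and requires the much larger $`1500\times 1500`$ matrices quoted.

```python
import numpy as np

# Toy Fredholm equation  f(k) = g(k) + \int dk' K(k,k') f(k'),
# discretised on Gauss-Legendre nodes and solved by matrix inversion.
n = 200
nodes, weights = np.polynomial.legendre.leggauss(n)     # quadrature on [-1, 1]

g = np.exp(-nodes**2)                                   # toy inhomogeneous term
K = 0.1 * np.exp(-np.abs(nodes[:, None] - nodes[None, :]))  # toy smooth kernel

A = np.eye(n) - K * weights[None, :]    # discretised (I - K W) f = g
f = np.linalg.solve(A, g)
print(f[:3])
```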
We note that our dynamical screening Bethe-Salpeter equation results are in excellent qualitative and quantitative agreement with the recent experimental findings . In particular, the effective constancy of the exciton peak as a function of the photoexcited carrier density as well as the possibility of excitonic absorption and lasing well into the high density regime (even for $`n>n_c\approx 3\times 10^5`$ cm<sup>-1</sup>) turn out to be characteristic features of the full dynamical theory (but not of the static and the quasi-static approximations). A full dynamical self-consistent theory as developed in this Letter is thus needed for an understanding of the recent experimental observations. We also note that in the recent literature the Mott density for 1D GaAs quantum wire systems has often been quoted as $`n_c\approx 8\times 10^5`$ cm<sup>-1</sup>, which is substantially higher than our full dynamical theory result, $`n_c\approx 3\times 10^5`$ cm<sup>-1</sup>. The higher value of the Mott density ($`n_c\approx 8\times 10^5`$ cm<sup>-1</sup>) follows from a simple estimate based on ground state energy comparison where one equates the calculated density-dependent BGR (the light solid line in Fig. 2) with the zero-density exciton energy ($`13`$ meV in Fig. 2) — as one can see from Fig. 2, the calculated BGR (the light solid line in Fig. 2) equals 13 meV, the zero-density exciton energy, around $`n\approx 8\times 10^5`$ cm<sup>-1</sup>. In the full interacting theory the Mott transition (the intersection of the light and the heavy solid lines in Fig. 2) moves to a lower density, $`n_c\approx 3\times 10^5`$ cm<sup>-1</sup>, which is also consistent with our full dynamical Bethe-Salpeter equation based calculation of the absorption/gain spectra shown in Fig. 3.
In summary, our main accomplishments reported in this Letter are the following: (1) The first fully dynamical theory of a photoexcited electron-hole system in semiconductors which treats self-energy, vertex corrections, and dynamical screening in a self-consistent scheme based on the GW self-energy and ladder-bubble vertex-polarization diagrams within a realistic Coulomb interaction-based Bethe-Salpeter theory; (2) a reasonable qualitative and quantitative agreement with the recent experimental observations of an effectively (photoexcitation density-independent) constant exciton peak, which in our fully dynamical theory arises from an approximate cancelation of self-energy and vertex corrections in the Bethe-Salpeter equation; (3) an effective 1D quantum wire Mott transition density of $`n_c\approx 3\times 10^5`$ cm<sup>-1</sup> which is below earlier estimates based on less sophisticated approximations; (4) the concrete theoretical demonstration of the possibility of excitonic gain and lasing in 1D quantum wire structures in the density range of $`n>3\times 10^5`$ cm<sup>-1</sup> where considerable optical gain is achieved in our calculated absorption spectra.
In conclusion, we have carried out the first fully dynamical many-body theory for the photoexcited electron-hole plasma in 1D semiconductor quantum wires by solving the Bethe-Salpeter equation treating self-energy (“band gap renormalization”) and vertex (“excitonic shift”) corrections on an equal footing within the ladder-bubble-GW self-consistent conserving scheme. We find, consistent with a number of hitherto unexplained experimental observations \[1-3\], that the self-energy and the vertex corrections tend to cancel each other leading to an almost constant (in density) absorption/gain peak all the way to (and considerably above) the Mott transition which occurs around a density of $`n_c\approx 3\times 10^5`$ cm<sup>-1</sup> for 70 Å wide T-quantum wires.
This work has been supported by the US-ONR and the US-ARO.
# Quenching of the radio jet during the X-ray high state of GX 339-4
## 1 Introduction
GX 339-4 is one of only a handful of persistent black hole candidate X-ray binaries known (Tanaka & Lewin 1995). The system lies at a distance of several kpc in the direction of the Galactic centre (e.g. Zdziarski et al. 1998) and exhibits a possible orbital modulation with a 14.8 hr period in optical photometry (Callanan et al. 1992), although this may in fact be half the true orbital period (Soria, Wu & Johnston 1999a). GX 339-4 shares X-ray timing and spectral properties with the classical black hole candidate Cyg X-1, although exhibiting more frequent state changes and a larger dynamic range of soft X-ray luminosity (Harmon et al. 1994; Tanaka & Lewin 1995; Méndez & van der Klis 1997; Rubin et al. 1998; Zdziarski et al. 1998; Nowak, Wilms & Dove 1999; Wilms et al. 1999; Belloni et al. 1999). The system is also a weak and persistent radio source with flux densities typically in the range 5 – 10 mJy at cm wavelengths and a flat (spectral index $`\alpha \approx 0`$, where flux density $`S_\nu \propto \nu ^\alpha `$) spectrum (Fender et al. 1997; Corbel et al. 1997; Hannikainen et al. 1998). The radio emission is roughly correlated with both the soft (as observed with RXTE ASM) and hard (as observed with CGRO BATSE) X-ray flux in the X-ray low/hard state (Hannikainen et al. 1998). As discussed in Wilms et al. (1999) the radio emission almost certainly arises in a region larger than the binary separation, supporting an interpretation of its origin in a compact partially self-absorbed jet, possibly of the type considered by Hjellming & Johnston (1988). Additional supporting evidence comes from the recent resolution of a compact jet in VLBA observations of Cyg X-1 (Stirling et al. 1998, de la Force et al. 1999), a source whose radio, as well as X-ray, properties appear to parallel those of GX 339-4 (Hannikainen et al. 1998; Pooley, Fender & Brocksopp 1999).
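With this convention the two-point spectral index follows directly from flux densities measured at two frequencies; the numbers below are illustrative only, not tabulated measurements.

```python
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha, defined by S_nu proportional to nu**alpha."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

print(spectral_index(7.0, 7.0, 4.8e9, 8.6e9))   # 0.0: flat spectrum
print(spectral_index(8.0, 6.3, 4.8e9, 8.6e9))   # ~ -0.4: optically thin
```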
## 2 Observations
### 2.1 MOST
Occasional monitoring of GX 339-4 with the Molonglo Observatory Synthesis Telescope (MOST) at 36 cm has been carried out for several years. All the observations were calibrated, imaged and CLEANed with the standard MOST imaging pipeline (McIntyre & Cram, 1999). To moderate any errors in the calibration we followed the procedure of Hannikainen et al. (1998), fitting three sources besides GX 339-4 in each observation, and scaling the fluxes so that the sum of these three reference sources remained constant, on the assumption that these sources do not vary. The IMFIT task in the MIRIAD software package (Sault, Teuben & Wright 1995) was used to make point source fits to the synthesised maps. Further details, an observing log and tabulated flux densities will be presented in Corbel et al. (1999); see also Hannikainen et al. (1998). The MOST flux density measurements are plotted in the top panels of Figs 1 and 2.
### 2.2 ATCA
Observations of GX 339-4 have been carried out at wavelengths of 21.7, 12.7, 6.2 and 3.5 cm with the Australia Telescope Compact Array (ATCA). Observational procedures are similar to those described in Fender et al. (1997) and will be discussed more fully in Corbel et al. (1999). Data reduction was performed with the MIRIAD software package. The ATCA flux density measurements are plotted in the top panels of Figs 1 and 2.
### 2.3 CGRO BATSE
The BATSE experiment aboard the Compton Gamma Ray Observatory monitors the various hard X-ray sources in the sky using the Earth occultation technique (Harmon et al. 1994). An optically thin thermal bremsstrahlung model (with a fixed kT = 60 keV) has been used to fit the data (following Rubin et al. 1998) and to produce the light curve in the 20-100 keV energy band. We have checked for the presence of bright interfering sources in the limb which could have biased the measurement of the flux and have flagged suspicious data. The 20-100 keV BATSE data are plotted in the middle panels of Figs 1 and 2.
### 2.4 RXTE ASM
GX 339-4 is monitored up to several times daily by the Rossi X-ray Timing Experiment (RXTE) All-Sky Monitor (ASM) in the 2-12 keV range. See e.g. Levine et al. (1996) for more details. The 2-12 keV ASM data are plotted in the lower panels of Figs 1 and 2.
## 3 Quenching of the radio emission
Fig 1 plots the radio, hard- and soft-X-ray observations of GX 339-4 prior to, during and following the transition from the low/hard state to the high/soft state in early 1998 January, and the transition back to the low/hard state just over one year later (see Belloni et al. 1999 for X-ray spectral and timing properties). It is immediately obvious that the radio and hard X-ray flux are strongly anticorrelated with the soft X-rays and are consistent with zero measured flux for the majority of the observations during the high/soft state. In particular, there was no significant radio detection of GX 339-4 between MJD 50844 and the reappearance of the radio flux on MJD 51222, despite eight observations with MOST at 843 MHz and three observations with ATCA simultaneously at 4.8 and 8.6 GHz. The strongest limits on the radio flux in the high state are the ATCA measurements which had a typical $`3\sigma `$ flux density limit of $`\sim 0.2`$ mJy, constraining the emitted flux density to be more than a factor of 25 weaker than observed in the low/hard state. The single most stringent upper limit, of 0.12 mJy ($`3\sigma `$), from the ATCA observation on MJD 51129, constrained the radio flux to be more than forty times weaker than in the low/hard state.
In Fig 2 we examine in more detail the period of state transition. In addition we plot the low (1.3-3.0 keV) and high (5.0-12.2 keV) XTE ASM channels, instead of simply the total intensity as in Fig 1; this illustrates clearly the dramatic increase in the soft (disc) component during the state transition. Note that from XTE PCA timing observations we can only be certain that by MJD 50828 the source was in the high/soft state (Belloni et al. 1999). The most dramatic decrease in the hard X-ray flux and corresponding increase in the soft X-rays occur around MJD 50812 – 50816 (centred on New Year 1997/1998). By MJD 50822 the radio flux density had dropped to levels undetectable with either MOST or ATCA ($`3\sigma `$ limit at 4.8 GHz of $`\sim 0.1`$ mJy with ATCA); i.e. the timescale for decay from ‘normal’ to ‘quenched’ levels is $`\lesssim 10`$ d. This is consistent with the timescales for radio : X-ray correlations reported by Hannikainen et al. (1998). However, subsequent radio observations revealed a small resurgence in the radio flux density between MJD 50828 – 50840 with an unusually optically thin spectral index of $`-0.4`$ (as measured on MJD 50828). By MJD 50844 the radio flux had again dropped to undetectable levels and was not detected again until over one year later. The quenching of the radio emission simultaneously with a large drop in the hard X-ray flux as observed with BATSE is reminiscent of that observed in the radio-jet X-ray binary Cyg X-3 (McCollough et al. 1999 and references therein).
## 4 Reappearance of the radio emission
Observations of GX 339-4 on MJD 51222 detected the radio source for the first time in over a year (Fig 1). The reappearance of the radio source was coincident with the end of a long ($`\sim 100`$ d) decline in the soft X-ray flux and a sharper increase in the hard X-ray emission. This return to the low/hard state was slow compared to the corresponding transition by Cyg X-1 in 1996 which took $`\sim 20`$ d (Zhang et al. 1997a). As in the small pre-quenching flare event, the spectral index immediately after the reappearance of the radio source was unusually optically thin at around $`-0.4`$ (measured on MJD 51222). Subsequent observations have revealed a return to the flat spectrum and steady flux densities previously observed in the low/hard state. The timescale for the return from ‘quenched’ to ‘normal’ radio states can only be constrained to be $`\lesssim 20`$ d.
## 5 Discussion
Our observations have revealed that the radio emission from GX 339-4 is strongly suppressed during the high/soft X-ray state. This observation is in qualitative agreement with observations of an increase in the strength of radio emission from Cyg X-1 during transitions from the high/soft or intermediate states back to the more common low/hard state (Tananbaum et al. 1972; Braes & Miley 1976, Zhang et al. 1997b). In addition, Corbel et al. (1999) present evidence for previous periods of quenched radio emission in GX 339-4 which appear to correspond to periods of weak BATSE emission. We assert that it is a characteristic of the high/soft state in black hole X-ray binaries that radio emission is suppressed with respect to the low/hard state. At least one model already exists for the suppression of jet formation at high accretion rates in X-ray binaries (Meier 1996), and may be relevant to this phenomenon. It is unclear at present how these findings relate to observations of radio emission associated with X-ray transients in outburst (e.g. Hjellming & Han 1995; Kuulkers et al. 1999) as (a) these sources may reach the physically distinct Very High state (Miyamoto et al. 1991; Ebisawa et al. 1994), and (b) the radio emission in these cases appears to originate in discrete ejections, probably produced at points of X-ray state change, and as such are decoupled from the system. We note that Miyamoto & Kitamoto (1991) have proposed a jet model for the Very High state of GX 339-4.
In the low/hard X-ray state GX 339-4, in common with other black hole candidates, does not display a strong soft (disc) component (e.g. Wilms et al. 1999 and references therein). The inner regions of the accretion flow may be described by an ADAF, ADIOS or ‘sphere + disc’ geometry (e.g. Narayan & Yi 1995; Esin et al. 1998; Blandford & Begelman 1999; Wilms et al. 1999), all models in which the standard, thin, accretion disc is truncated some distance from the central black hole. A hot corona closer to the black hole Comptonises soft photons to produce the observed hard X-ray emission. In the high/soft state the disc is believed to extend to within a few gravitational radii of the black hole, resulting in a much increased soft, thermal X-ray component with $`kT\sim 1`$ keV. Simultaneously the Comptonising corona is believed to shrink and cool, resulting in a decrease and softening of the hard X-ray flux. Spectral fits to GX 339-4 data before and after the state transition under discussion are in agreement with this scenario (Belloni et al. 1999). In addition Soria, Wu & Johnston (1999b) present evidence that the outer accretion disc/flow, responsible for optical emission lines, is present in both the low/hard and high/soft states.
Adding our new observational constraint that the low/hard state produces a radio-emitting outflow, and the high/soft state does not, these models can be summarised qualitatively by a sketch such as Fig 3. The extremely strong correspondence between the hard X-rays and radio emission in GX 339-4 and other X-ray binaries (e.g. GRO J1655-40 (sometimes), Harmon et al. 1995; GRS 1915+105, Harmon et al. 1997, Fender et al. 1999; Cyg X-3, McCollough et al. 1999; Cyg X-1, Brocksopp et al. 1999) suggests that the regions responsible for the emission in the two energy regimes are strongly physically coupled. We consider it likely therefore that the corona is simply the base of the jet, and that the population of relativistic electrons responsible for the radio emission (at some point further downstream in the outflow when it becomes partially optically thin to cm radio emission) may be the high-energy tail of the population of hot electrons responsible, via Comptonisation, for the hard X-rays.
We note that it is possible that the outflow continues in the high/soft state but that radio emission is not observed because of greatly increased losses suffered by the relativistic electrons before the flow becomes (partially) optically thin to radio emission. In order for this to occur as a result of adiabatic expansion losses, the ratio of lateral expansion rate to jet width would need to be $`\sim 25`$ times larger in the high/soft state than in the low/hard state. In order for synchrotron or inverse Compton losses to be responsible, an increase by a factor of $`\sim 25`$ would be required in the magnetic ($`B^2`$) or radiation energy densities respectively. As current models suggest adiabatic expansion is the dominant loss process in conical jets (e.g. Hjellming & Johnston 1988), the required increase in the magnetic or radiation energy densities would probably need to be even larger for these processes to result in quenching of the jet.
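The scalings invoked in this paragraph are simple to check numerically:

```python
quench_factor = 25.0   # required suppression of the radio flux density

# Synchrotron losses scale with the magnetic energy density (B^2), so the
# field itself would need to rise by only the square root of the factor:
print(quench_factor ** 0.5)   # B stronger by ~5x; energy density by ~25x
```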
## 6 Conclusions
The radio emission from GX 339-4 is found to be strongly quenched in the high/soft X-ray state, by a factor of $`\gtrsim 25`$, in comparison to the low/hard state. This quenching in radio emission is found to be extremely well correlated with a decrease in hard ($`\gtrsim 20`$ keV) X-ray emission, suggesting a strong physical coupling between the regions responsible for hard X-ray and radio emission. We propose that high/soft states in black hole candidate X-ray binaries do not produce radio-emitting outflows. Optically thin radio emission at the time of transition to and from the high/soft state implies discrete ejections of material at the point of state change, in agreement with observations of X-ray transients and more unusual sources such as GRS 1915+105. However, many of those systems are observed in the Very High or poorly-defined states, and the exact relation between their radio emission and that of GX 339-4 is not well understood at present. In addition, the optically thin emission observed at these periods of state transition is further evidence that the flat-spectrum radio emission generally observed in the low/hard state results from partially optically thick emission from a quasi-continuous jet. The dramatic coupling between the emission from the inner ($`\sim `$ few 100 km) accretion disc and the radio emission is further confirmation that jets are generated close to the compact object.
Physical models developed to interpret the low/hard states (ADAF, ADIOS, sphere + disc) clearly need to take into account the direct evidence for a continuous outflow in these states. Models for jet formation need to consider why such accretion geometries produce outflows whereas those envisaged to explain the high/soft state do not, and why discrete ejection events are often, perhaps always, observed at the point of state transitions in X-ray binaries.
RPF thanks Mariano Méndez, Eric Ford, Tomaso Belloni and Michiel van der Klis for useful discussions. The Australia Telescope is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. MOST is operated by the University of Sydney and funded by grants from the Australian Research Council. RXTE ASM results provided by the ASM/RXTE teams at MIT and at the RXTE SOF and GOF at NASA’s GSFC. RPF was funded during the period of this research by ASTRON grant 781-76-017 and EC Marie Curie Fellowship ERBFMBICT 972436. MN was supported in part by NASA Grant NAG5-3225 and NSF Grant PHY94-07194.
# Progress in leptonic and semileptonic decays in lattice QCD
## I Introduction
The calculation of hadronic matrix elements plays a vital rôle in determinations of the CKM matrix elements and overconstraining the unitarity triangle of the Standard Model. In particular, the CKM matrix elements that are combined to make up the sides of the triangle can be determined from neutral meson mixing and exclusive leptonic and semileptonic decay processes in the following way
$$\left\{\begin{array}{c}\text{experimental measurement of}\\ \text{decay rate }\mathit{or}\text{ meson mixing}\end{array}\right\}=\left\{\begin{array}{c}\text{known}\\ \text{factors}\end{array}\right\}\left\{\begin{array}{c}\text{nonperturbative}\\ \text{form factor }\mathit{or}\text{ decay constant}\end{array}\right\}|V_{CKM}|^2.$$
(1)
$`|V_{CKM}|`$ can be determined by combining experimental measurements of exclusive decay rates with theoretical calculations of the nonperturbative contributions. These calculations can be done with a number of methods including lattice QCD. Because the lattice formalism of QCD is model independent and systematically improvable it should offer some of the most reliable results. Currently, the uncertainty in many of the theoretical and experimental inputs is large enough that unitarity is not really tested. Table I shows the decay modes to be discussed here and the associated CKM matrix elements. Indeed $`V_{td}`$ and $`V_{ub}`$ are amongst the least well determined components with much of the error coming from the nonperturbative contributions . However, the next generation of B-physics experiments will reduce the experimental uncertainty considerably and to take full advantage of this the theoretical uncertainties must also be reduced.
The form factors and decay constants of Equation 1 are extracted from hadronic matrix elements, which can be calculated directly in lattice QCD. Lattice calculations of heavy quark quantities began in the late ’80s and it was quickly realised that they could provide important inputs to phenomenologically interesting parameters of the Standard Model. Over the years the field has matured enormously as a result of a deep understanding of the particular systematic errors involved and a concerted effort to control or remove them, as described in Section II. The largest remaining uncertainty in the numerical results is the quenched approximation. As this is removed in the next several years the lattice results will be the results of QCD and not a model. The aim of this paper is to give a progress report for heavy quark physics from lattice QCD emphasising the new results where attention has been paid to the particular systematic uncertainties involved. The state-of-the-art leptonic decay constants are described in some detail and a brief report on some new ideas for semileptonic physics is included. A more extensive review can be found in Ref. .
## II Progress in heavy quark physics
The evolution of lattice calculations using heavy quarks is shown very nicely in a plot made by Andreas Kronfeld for the Heavy Quark ’98 Workshop . Figure 1 shows the lattice determinations of $`f_B`$ from the earliest calculations to the most recently published ones, demonstrating how calculations of $`f_B`$ have matured over the years. Advances in computing power and our understanding of heavy quark physics mean the central value and error bars have settled down remarkably. The last four points are the most recent values of $`f_B`$ from the JLQCD , Fermilab , NRQCD and MILC collaborations respectively. I also note the UKQCD and APE collaborations have new determinations of $`f_B`$ which are not included in this plot but which will be included in the text.
The remainder of this section details the dominant uncertainties in a lattice calculation of heavy quark matrix elements and how they are treated in different approaches.
### A $`𝒪(am_Q)`$ discretisation errors
Present day computing resources mean lattice calculations are generally done in a box of size $`\sim 3`$ fm and with a lattice spacing $`a\approx 0.07-1.2`$ fm. Simulating realistically heavy quark masses at these lattice spacings must be approached carefully due to discretisation errors proportional to the heavy quark mass. A brute force reduction of $`a`$ is extremely costly, requiring orders of magnitude more computing power, so improving lattice actions is a necessary step. One approach is to use an improved light-quark action, with the improvement coefficients of the action and operators calculated nonperturbatively. Simulations are done with quark masses around that of charm. Heavy quark effective theory is then used to guide an extrapolation from this region to that of the bottom quark. In some cases results of a calculation in the static approximation can be used, allowing an interpolation in quark mass instead of an extrapolation. Both UKQCD and APE collaborations use this method. However, this extrapolation to a region where $`𝒪(am)`$ errors are not under control introduces a systematic error which is hard to quantify.
A different approach is to consider the B system as nonrelativistic: for $`b`$ quarks $`(v/c)^2\sim (0.3\mathrm{GeV}/5.0\mathrm{GeV})^2`$. Relativistic momenta are excluded by introducing a finite cut-off such that $`p\sim m_Qv\ll m_Q`$. Then, $`|p|/m_Q\ll 1`$ and the QCD Lagrangian can be expanded in powers of $`1/m_Q`$ . This has been extremely successful for B physics, where the quark mass is large, so the expansion is convergent. Results from the GLOK group using NRQCD are shown here. The theory is nonrenormalisable which means a formal continuum limit ($`a\to 0`$) does not exist so results must be obtained at finite lattice spacing. Lattice artefacts are removed by including higher orders in $`1/m_Q`$ and $`a`$.
The final approach is that of the Fermilab group . It is a re-interpretation of the relativistic action which identifies and correctly renormalises nonrelativistic operators present in the so-called light-quark action. Discretisation errors are then $`𝒪(a\mathrm{\Lambda }_{QCD})`$ and not $`𝒪(am_Q)`$. Also, the existence of a formal continuum limit means a continuum extrapolation is possible. The result is an action valid for arbitrary $`am`$ and for which the systematic errors can be understood and controlled, thereby allowing simulations at the $`b`$ quark mass. This approach is used by Fermilab, JLQCD and MILC.
### B Lattice spacing dependence
If the large mass-dependent errors are under control the remaining lattice spacing dependence can be quantified by repeating the matrix element calculation at three or more lattice spacings and extrapolating the result to the continuum ($`a=0`$) limit. This has been done by the collaborations using the Fermilab approach in their calculations of heavy-light decay constants. So far it has only been done by one group for the semileptonic case . In general, only a mild dependence on lattice spacing is observed and reliable extrapolations can be performed.
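As an illustration, the sketch below performs such a continuum extrapolation with a linear ansatz $`f_B(a)=f_B(0)+c\,a`$ fitted to decay-constant values at three lattice spacings. The numbers are invented for illustration only (they are not data from any calculation quoted here), and whether the leading dependence is linear in $`a`$ or in $`a^2`$ depends on the level of improvement of the action.

```python
import numpy as np

# Invented example data: f_B (MeV) computed at three lattice spacings a (fm).
a   = np.array([0.16, 0.12, 0.08])
fB  = np.array([171.0, 168.0, 166.0])
sig = np.array([4.0, 4.0, 4.0])           # statistical errors in MeV

# Weighted linear fit f_B(a) = f_B(0) + c*a; np.polyfit weights multiply residuals,
# so w = 1/sigma gives the usual chi^2 weighting.
slope, intercept = np.polyfit(a, fB, 1, w=1.0/sig)
print(f"continuum value f_B(a=0) = {intercept:.1f} MeV  (slope {slope:.1f} MeV/fm)")
```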
### C Operator matching
Matrix elements determined on the lattice must be related to their continuum counterpart as in
$$\langle P^{}(\stackrel{}{k})|𝒪_\mu |P(\stackrel{}{p})\rangle _{continuum}=𝒵_\mu (\mu ,g^2)\langle P^{}(\stackrel{}{k})|𝒪_\mu |P(\stackrel{}{p})\rangle _{lattice}.$$
(2)
The renormalisation factor, $`𝒵_\mu `$, is usually calculated in perturbation theory and has some associated uncertainty. UKQCD and APE use a nonperturbative determination of both the $`𝒵_\mu `$ and the improvement coefficients in their actions, as described in , which considerably reduces the matching uncertainty.
### D Quenching
Most lattice calculations are done in the quenched approximation (omitting light quark loops) as a computational expedient. The uncertainty introduced by this approximation is difficult to quantify. There are now, however, some partially unquenched calculations of heavy quark quantities, and these have been used by many collaborations to estimate the effect of quenching on their results.
## III Leptonic Decay Constants
The calculation of heavy-light leptonic decay constants is an important application of lattice techniques for two reasons. Firstly, the decay constants themselves are important phenomenological parameters. In particular, the $`B`$ meson decay constant, which has not been measured experimentally, is an input in determinations of the CKM matrix element $`V_{td}`$ through $`B_d^0\overline{B}_d^0`$ mixing,
$$\mathrm{\Delta }m_d(B_d^0\overline{B}_d^0)=\left[\frac{G_F^2m_W^2}{6\pi }\eta _{B_d}𝒮\left(\frac{m_t}{m_W}\right)\right]f_{B_d}^2\widehat{B}_{B_d}|V_{td}V_{tb}^{}|^2.$$
(3)
$`\mathrm{\Delta }m_d`$ has been measured experimentally to $`\sim 4\%`$ and the factor in square brackets is also well determined. Interestingly, $`V_{td}`$ is known to only $`\sim 25\%`$ accuracy, and the bulk of that uncertainty comes from the nonperturbative input $`f_{B_d}^2\widehat{B}_{B_d}`$. The bag parameter, $`\widehat{B}_{B_d}`$, is also calculated on the lattice but will not be discussed here; some recent reviews are in Refs. and .
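To make the role of the lattice input explicit, the sketch below inverts Eq. (3) for the CKM combination, lumping the well-determined square-bracket factor (together with any mass factors absorbed by convention) into a single constant $`K`$. All numerical values are illustrative placeholders, not inputs quoted in this review.

```python
import math

# Eq. (3) schematically: dm_d = K * (f_Bd * sqrt(Bhat_Bd))^2 * |V_td V_tb*|^2.
# Every number below is a placeholder chosen only to make the arithmetic concrete.
dm_d    = 3.1e-13    # GeV, measured mass difference (placeholder)
K       = 1.0e-7     # GeV^-1, short-distance factor (placeholder)
f_sqrtB = 0.23       # GeV, lattice f_Bd * sqrt(Bhat_Bd) (placeholder)

V_td_tb = math.sqrt(dm_d / (K * f_sqrtB**2))
print(f"|V_td V_tb*| ~ {V_td_tb:.4f}")

# |V_td| scales as sqrt(dm_d)/(f*sqrt(B)), so a ~4% error on dm_d contributes
# only ~2%, while a ~20% error on the lattice input feeds through in full:
# the nonperturbative input dominates the uncertainty, as stated in the text.
rel_err = 0.5*0.04 + 0.20
print(f"relative uncertainty ~ {100*rel_err:.0f}%")
```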
Secondly, the calculation of leptonic decay constants is an important test-ground for lattice heavy quark techniques. Control of systematic errors here inspires confidence in the more complicated determinations of form factors and partial widths in the semileptonic case.
Naturally a calculation of the $`B`$ meson decay constant can also include the decay constants of the $`B_s`$ meson and the $`D`$ system. A number of experiments have measured $`f_{D_s}`$ so the lattice result can be compared with the experimental numbers for at least one heavy-light decay constant. The agreement, shown in Figure 2, should inspire confidence in the determination of $`f_B`$.
In recent years there have been a number of new calculations of $`f_B`$ using different approaches described in Section II, however a common feature is a comprehensive treatment of the systematic uncertainties involved. Table II lists results from the most recent calculations of the heavy-light decay constants and one can see that they are in rather good agreement now. World averages for all these decay constants can be found in the review by Draper .
Since both NRQCD and the Fermilab approach work well at the $`B`$ meson mass it is interesting to compare the results. Figure 3 shows that for $`f\sqrt{M}`$ (the amplitude of the matrix element) at $`1/m_B`$ there is good agreement .
To illustrate the care taken in most calculations, the systematic error quoted by the Fermilab group was arrived at by considering the following:
- (i) Excited state contamination is less than statistics ;
- (ii) Finite volume is less than statistics ;
- (iii) Lattice spacing dependence is less than statistics ;
- (iv) Heavy quark tuning is less than statistics ;
- (v) Perturbation theory = 5% .
So the systematic error in this calculation is almost entirely from the perturbation theory used to match the lattice and continuum operators, as described in Section II. The perturbative calculation used is a one-loop, mass-dependent result from Aoki et al., for which the leading error is taken to be $`𝒪(\alpha _s^2)`$. The groups who studied the dependence on lattice spacing found it to be gentle; examples of the continuum extrapolations are shown in Figure 4. The quenching error has been estimated from work by the MILC collaboration, who produced the first results for an unquenched $`f_B`$ using a relativistic action. They found an increase in $`f_B`$ of $`\sim 10\%`$ at a finite lattice spacing, which leads to the ($`+16`$ MeV) estimate from Fermilab. Other collaborations have used this work to make similar estimates (some taking an increased central value and symmetric error bars). Preliminary results for an unquenched $`f_B`$ after extrapolation to the continuum limit suggest that the effect may be larger than 10% and closer to 25% . This is supported by the work of the NRQCD group, who have recently estimated this effect to be $`+25\%`$ .
The ratios of decay constants are also calculated. In a ratio many of the systematic errors may cancel, leading to a more precise result. This can be used to place bounds on $`|V_{td}/V_{ts}|`$ and (assuming three generations) $`|V_{td}|`$ by using
$$\frac{\mathrm{\Delta }m_{B_s}}{\mathrm{\Delta }m_{B_d}}=\frac{m_{B_s}}{m_{B_d}}\frac{\widehat{B}_{B_s}f_{B_s}^2}{\widehat{B}_{B_d}f_{B_d}^2}\frac{|V_{tb}^{}V_{ts}|^2}{|V_{tb}^{}V_{td}|^2}.$$
(4)
## IV Semileptonic B meson Decays
The lattice treatment of semileptonic decays of heavy-light mesons is not yet as mature as that of the leptonic decays, which is clear from the absence of a complete estimate of systematic errors à la $`f_B`$. The good news is that this is underway by a number of groups and should be available soon. In particular, calculations using NRQCD and the Fermilab approach herald a new era of precision in lattice calculations of form factors because the mass-dependent uncertainties are decoupled from other effects as a result of working at the $`b`$ quark mass. Only B meson decays will be discussed here but $`D\to \pi l\nu `$ and $`D\to Kl\nu `$ are also being studied .
### A $`B\to \pi l\nu `$
To determine $`|V_{ub}|`$ the required parameters are the form factors, $`f^{+,0}(q^2)`$ given by
$$\langle P^{}(\stackrel{}{k})|V_\mu |P(\stackrel{}{p})\rangle =\left(p+k-q\frac{m_P^2-m_{P^{}}^2}{q^2}\right)_\mu f^+(q^2)+q_\mu \frac{m_P^2-m_{P^{}}^2}{q^2}f^0(q^2),$$
(5)
where $`q^2`$ is the momentum transfer, i.e. $`q_\mu =(p-k)_\mu `$. These decays are trickier than the leptonic case because the final state hadron introduces an extra kinematic parameter: the momentum of the final state particle. Lattice calculations work best at small three-momenta because momentum-dependent errors are under control, i.e. close to $`q_{\text{max}}^2=(m_P-m_{P^{}})^2`$. However, it is traditional to quote form factors at $`q^2=0`$ since at this point experiments can measure a slope and intercept for the form factor, so a large model-dependent extrapolation in $`q^2`$ is performed. In contrast, for $`D\to \pi l\nu `$ the entire kinematic range can be covered, e.g. Ref. . Figure 5 shows preliminary data from Fermilab for the form factors as a function of $`q^2`$. For the first time the lattice spacing dependence of these matrix elements has been studied, and for $`300\,\text{MeV}\le p_\pi \le 800\,\text{MeV}`$ this is found to be mild . Finally I note that $`|V_{ub}|`$ can be determined without recourse to a $`q^2`$-extrapolation by calculating partial widths from the differential decay rate
$$\frac{d\mathrm{\Gamma }}{d|p_\pi |}\bigg|_{300\,\mathrm{MeV}\le p_\pi \le 800\,\mathrm{MeV}}=\frac{2m_BG_F^2|V_{ub}|^2}{24\pi ^3}\frac{|p_\pi |^4}{E_\pi }|f^+(q^2)|^2,$$
(6)
in a region where theory and experiment have reliable data, as in Figure 5.
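The sketch below makes this strategy concrete: Eq. (6) is integrated numerically over $`300\,\text{MeV}\le p_\pi \le 800\,\text{MeV}`$ to give the partial width divided by $`|V_{ub}|^2`$, which would then be matched to the experimental partial branching fraction. The pole parameterisation of $`f^+(q^2)`$ is an arbitrary stand-in for the lattice form factor, used only to make the integral well defined.

```python
import numpy as np

G_F = 1.16637e-5            # GeV^-2
m_B, m_pi = 5.2794, 0.1396  # GeV
m_pole = 5.325              # GeV, pole mass of the toy form factor (assumption)

def f_plus(q2):
    # toy pole dominance, f+(0) chosen arbitrarily; NOT the lattice result
    return 0.3 / (1.0 - q2 / m_pole**2)

def dGamma_dp(p):           # integrand of Eq. (6), divided by |V_ub|^2
    E_pi = np.sqrt(m_pi**2 + p**2)
    q2 = m_B**2 + m_pi**2 - 2.0*m_B*E_pi    # B-meson rest frame
    return 2*m_B*G_F**2/(24*np.pi**3) * p**4/E_pi * f_plus(q2)**2

p = np.linspace(0.300, 0.800, 2001)         # GeV
print(f"Gamma(window)/|V_ub|^2 = {np.trapz(dGamma_dp(p), p):.3e} GeV")
```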
### B $`B\to Dl\nu `$
The final topic is a new development in the calculation of form factors at zero recoil for the decay $`B\to D^{(\ast )}l\nu `$. The matrix element is parameterised by two form factors, $`h^{(+,-)}(1)`$, which are combined in a physical form factor,
$`\mathcal{F}_{B\to D}(1)=h_+^{B\to D}(1)-h_{-}^{B\to D}(1)(m_B-m_D)/(m_B+m_D)`$ .
$`|V_{cb}|`$ is determined from experimental data for $`\mathcal{F}(1)|V_{cb}|`$. Although the shape of the form factor has been calculated successfully (see e.g. Refs. ), lattice calculations of the absolute value at zero recoil were fraught with difficulties, both statistical and systematic. But it has recently been shown that by extracting the form factors from a ratio of matrix elements at zero recoil, the bulk of the uncertainties cancel, leaving an extremely precise result for the form factor and therefore $`|V_{cb}|`$. For $`h^+(1)`$ one constructs the ratio
$$\frac{\langle D|V_0|B\rangle \langle B|V_0|D\rangle }{\langle D|V_0|D\rangle \langle B|V_0|B\rangle }=\frac{h_+^{B\to D}(1)\,h_+^{D\to B}(1)}{h_+^{D\to D}(1)\,h_+^{B\to B}(1)}=|h_+^{B\to D}(1)|^2.$$
(7)
There is a similar expression using double ratios for $`h^{-}(1)`$. Preliminary results show this is indeed very successful,
$$\mathcal{F}(1)=1.069\pm 0.008\pm 0.002\pm 0.025.$$
(8)
In fact this method has the added advantage that at zero recoil it is a deviation of $`\mathcal{F}(1)`$ from 1 that is measured, so that effects of quenching and lattice spacing dependence which do not cancel in the ratio are expected to be small.
## 1 Introduction
The $`\nu \overline{\nu }\gamma `$ production in $`e^+e^{-}`$ collisions is of great interest, as it is sensitive to the triple gauge boson coupling $`WW\gamma `$ of the Standard Model. Its precise measurement will not only serve as a stringent test of the SM, but may reveal (or constrain) anomalous gauge couplings. The events with photon(s) plus missing energy might also originate from other mechanisms, signalling new physics beyond the Standard Model. For example, such final states can be produced in both gravity- and gauge-mediated supersymmetric models or low-scale gravity. The missing energy in these events is caused by weakly interacting supersymmetric particles, such as gravitinos, neutralinos and/or sneutrinos , or gravitons . In all such cases the Standard Model $`e^+e^{-}\to \nu \overline{\nu }\gamma `$ events are an irreducible background, and reliable theoretical predictions for them are therefore necessary.
With the sensitivity afforded by the LEP experiments , and expected at future $`e^+e^{-}`$ colliders , the photon(s) plus missing energy events provide an opportunity to search for new physics phenomena. Any meaningful interpretation of the experimental data requires a Monte Carlo simulation in which Standard Model predictions may be augmented by the contributions from possible anomalous couplings.
Since the LEP collaborations are entering their final years of operation, now is a good time to document the programs that have actually been used in the data analyses at LEP. In the present paper we document the library for the calculation of the effects of the $`WW\gamma `$ interaction for the $`e^+e^{-}\to \nu \overline{\nu }n\gamma `$ process within the Standard Model as well as from the anomalous couplings. It is based on the work of and the description of the physical content of the program interface and discussion of its uncertainties can be found in . For an alternative implementation of the $`WW\gamma `$ vertex, see .
In principle our library can be combined with any $`e^+e^{-}\to f\overline{f}n\gamma `$ generator, but in the present paper we will use an interface to KORALZ version 4.04, described in , as a working example. That is the reason why the fortran code of the library will be archived together with the KORALZ tree of subdirectories . Let us note that, in future, KORALZ will be replaced by a new program, KK2f , which is based on a more powerful exponentiation at the spin amplitude level and in which the library will also be easy to implement.
Version 4.04 of the KORALZ Monte Carlo program can be used to simulate $`e^+e^{-}\to f\overline{f}n\gamma `$ ($`f=\mu ,\tau ,u,d,c,s,b,\nu `$) processes up to the LEP2 energy range, including YFS exclusive exponentiation of initial- and final-state bremsstrahlung and, optionally, the effects of various anomalous couplings. In the case of the LEP2 centre-of-mass energies and if $`f=\nu `$, the present library can be used for that purpose.
## 2 Calculation of anomalous couplings
To evaluate the effects of the anomalous $`WW\gamma `$ couplings, a tree-level calculation of the squared matrix element for the process $`e^+e^{-}\to \nu \overline{\nu }\gamma `$ has been carried out . It includes the effects of the anomalous C- and P-conserving<sup>1</sup><sup>1</sup>1For a recent phenomenological analysis of CP-violating couplings in $`e^+e^{-}\to \nu \overline{\nu }\gamma `$, see . contributions parametrized with the help of the couplings $`\mathrm{\Delta }\kappa _\gamma `$, $`\lambda _\gamma `$ (in what follows we will suppress the subscript $`\gamma `$). The library is formed on this basis. When activated, it uses the 4-momenta of the neutrinos and the photon provided by the host program to compute a weight, $`w`$, for each event according to
$$w=\frac{|\mathcal{M}_{\mathrm{SM}}^{WW\gamma \,excl.}+\mathcal{M}_{\mathrm{SM}}^{WW\gamma }+\mathcal{M}_{\mathrm{ano}}^{WW\gamma }|^2}{|\mathcal{M}_{\mathrm{SM}}^{WW\gamma \,excl.}|^2}.$$
(1)
$`\mathcal{M}_{\mathrm{ano}}^{WW\gamma }`$ is the matrix element due to the anomalous $`\mathrm{\Delta }\kappa \ne 0`$, $`\lambda \ne 0`$ couplings, the $`\mathcal{M}_{\mathrm{SM}}^{WW\gamma }`$ is the matrix element due to the SM $`WW\gamma `$ interaction and $`\mathcal{M}_{\mathrm{SM}}^{WW\gamma \,excl.}`$ represents the remaining part of the SM matrix element for the $`e^+e^{-}\to \nu \overline{\nu }\gamma `$ process<sup>2</sup><sup>2</sup>2Note that such a separation is gauge-dependent and, if not treated carefully, could lead to meaningless results. See ref. for details.. Note that because of the above separation, even for the Standard Model $`WW\gamma `$ interaction, the use of the library is necessary to calculate the weight $`w`$.
As the calculation of ref. is performed at $`𝒪(\alpha )`$, the case of multiple bremsstrahlung requires a special treatment. In this case, a reduction procedure is first applied in which all photons except the one with the highest-$`p_T`$ are incorporated into the 4-momenta of effective beams. In the second step, the 4-momenta of the highest $`p_T`$ photon, the effective beams and neutrinos are then used to compute the weight. Cross-checks of the calculation as well as checks of the validity of the reduction procedure are described in . The results of the calculation have been used in the measurement of the $`WW\gamma `$ coupling described in .
## 3 Flags to control anomalous couplings in KORALZ
The calculation of weights for anomalous couplings is activated by setting the card IFKALIN=1. This is transmitted from the main program via the KORALZ input parameter NPAR(15). Additional input parameters are set in the routine kzphynew(XPAR,NPAR), although there are currently no connections to the KORALZ matrix input parameters XPAR and NPAR <sup>3</sup><sup>3</sup>3 In most uses of KORALZ, the numerical value of these parameters is irrelevant or defaults are sufficient. It is expected that the advanced user may like to change them, connecting directly the kzphynew(XPAR,NPAR) routine with her or his main program. . Table 1 summarizes the functions of these input parameters.
More input parameters are initialized in the subroutine anomini, which is placed in the file gengface.f and subroutine initialize of the file geng.f. Both files are placed in the directory korz\_new/nunulib.
In order to provide the user with enough information to retrieve the $`w`$ for a given event for any $`\mathrm{\Delta }\kappa `$, $`\lambda `$, we take advantage of the fact that, for each event, one may write the $`w`$ as a quadratic function of the anomalous couplings, in terms of the results calculated for six numerically distinct combinations of the $`\mathrm{\Delta }\kappa `$, $`\lambda `$ values as follows:
$`w(\mathrm{\Delta }\kappa ,\lambda )`$ $`=`$ $`\left(1-\left({\displaystyle \frac{\lambda }{\lambda _0}}\right)^2-\left({\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)^2+{\displaystyle \frac{\lambda }{\lambda _0}}{\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)w(0,0)-\left({\displaystyle \frac{\mathrm{\Delta }\kappa }{2\mathrm{\Delta }\kappa _0}}-{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)^2\right)w(-\mathrm{\Delta }\kappa _0,0)`$ (2)
$`+`$ $`\left({\displaystyle \frac{\mathrm{\Delta }\kappa }{2\mathrm{\Delta }\kappa _0}}+{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)^2-{\displaystyle \frac{\lambda }{\lambda _0}}{\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)w(\mathrm{\Delta }\kappa _0,0)-\left({\displaystyle \frac{\lambda }{2\lambda _0}}-{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\lambda }{\lambda _0}}\right)^2\right)w(0,-\lambda _0)`$
$`+`$ $`\left({\displaystyle \frac{\lambda }{2\lambda _0}}+{\displaystyle \frac{1}{2}}\left({\displaystyle \frac{\lambda }{\lambda _0}}\right)^2-{\displaystyle \frac{\lambda }{\lambda _0}}{\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}\right)w(0,\lambda _0)+{\displaystyle \frac{\lambda }{\lambda _0}}{\displaystyle \frac{\mathrm{\Delta }\kappa }{\mathrm{\Delta }\kappa _0}}w(\mathrm{\Delta }\kappa _0,\lambda _0).`$
When the calculation is completed, the six weights are stored in the common block common /kalinout/ wtkal(6), with the assignments shown in Table 2.
The user is then free to calculate the $`w`$ for whatever combination of $`\mathrm{\Delta }\kappa `$ and $`\lambda `$ is desired. In our program we set $`\mathrm{\Delta }\kappa _0=10`$ and $`\lambda _0=10`$. If the IENRICH input parameter is set to 1, the generated sample will have more events with hard photons than predicted by the Standard Model. The appropriate compensating factor is included in the weights wtkal. It is thus always assured that the generated sample represents, e.g., the Standard Model predictions if the weight wtkal(1) is used.
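A short sketch of this reconstruction is given below. The ordering of wtkal(1)…wtkal(6) assumed in the code, namely $`w(0,0)`$, $`w(-\mathrm{\Delta }\kappa _0,0)`$, $`w(\mathrm{\Delta }\kappa _0,0)`$, $`w(0,-\lambda _0)`$, $`w(0,\lambda _0)`$, $`w(\mathrm{\Delta }\kappa _0,\lambda _0)`$, is an assumption made for illustration; the actual assignment is the one fixed in Table 2.

```python
def w_anomalous(dk, lam, wt, dk0=10.0, lam0=10.0):
    """Reconstruct w(Delta kappa, lambda) from the six stored weights, Eq. (2)."""
    K, L = dk/dk0, lam/lam0
    return ((1 - L**2 - K**2 + L*K) * wt[0]
            - (K/2 - K**2/2)        * wt[1]    # w(-dk0, 0)
            + (K/2 + K**2/2 - L*K)  * wt[2]    # w(+dk0, 0)
            - (L/2 - L**2/2)        * wt[3]    # w(0, -lam0)
            + (L/2 + L**2/2 - L*K)  * wt[4]    # w(0, +lam0)
            + L*K                   * wt[5])   # w(+dk0, +lam0)

# The quadratic must reproduce the six sampled points exactly:
wt = [1.00, 1.31, 0.82, 1.17, 0.95, 1.08]      # made-up weights for one event
nodes = [(0, 0), (-10, 0), (10, 0), (0, -10), (0, 10), (10, 10)]
assert all(abs(w_anomalous(dk, lam, wt) - w) < 1e-12
           for (dk, lam), w in zip(nodes, wt))
print(w_anomalous(0.5, -1.0, wt))              # weight at arbitrary couplings
```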
The code for a calculation of the weight $`w`$ is placed in the directory korz\_new/nunulib in the files geng.f and gengface.f.
## 4 Demonstration programs
The demonstration program DEMO3.f for the run of KORALZ when our library is activated can be found in the directory korz\_new/february and the output DEMO3.out in the directory korz\_new/february/prod1. The DEMO.f for the run of KORALZ with the Standard Model interactions only and its output DEMO.out are also included in the directories mentioned above. All these files as well as the library itself are archived together with KORALZ .
Acknowledgements
One of us (ZW) thanks CERN Theory Division for support during the final work on the paper.
# Lyapunov Potential Description for Laser Dynamics
## I Introduction
Even for non-mechanical systems, it is occasionally possible to construct a function (called Lyapunov function or Lyapunov potential) that decreases along trajectories . The usefulness of Lyapunov functions lies in the fact that they allow an easy determination of the fixed points of a dynamical (deterministic) system as the extrema of the Lyapunov function, as well as determining the stability of those fixed points. In some cases, the existence of a Lyapunov potential allows an intuitive understanding of the transient and stationary trajectories as movements of test particles in the potential landscape. In the case of non–deterministic dynamics, i.e. in the presence of noise terms, and under some general conditions, the stationary probability distribution can also be governed by the Lyapunov potential and averages can be performed with respect to a known probability density function. The aim of this work is to construct Lyapunov potentials for some laser systems. We start, then, by briefly reviewing the main features of the laser as a dynamical system.
A laser has three basic ingredients: i) a gain medium capable of amplifying the electromagnetic radiation propagating inside the cavity, ii) an optical cavity that provides the necessary feedback, and iii) a pumping mechanism. A complete understanding of laser dynamics is based on a fully quantum-mechanical description of matter-radiation interaction within the laser cavity. However, the laser is a system where the number of photons is much larger than one, thus allowing a semiclassical treatment of the electromagnetic field inside the cavity through the Maxwell equations. This fact was introduced in the semiclassical laser theory, developed by Lamb and independently by Haken . This model for laser dynamics was constructed from the Maxwell-Bloch equations for a single-mode field interacting with a two-level medium. The semiclassical laser theory ignores the quantum-mechanical nature of the electromagnetic field, and the amplifying medium is modeled quantum-mechanically as a collection of two-level atoms through the Bloch equations. A simpler description can be obtained by deriving rate equations for the temporal change of the electric field (or photons number) inside the cavity and the population inversion (carriers number in the case of semiconductor lasers) . Rate equations, with stochastic terms accounting for spontaneous emission noise, have been extensively used for semiconductor lasers.
Different types of lasers can be classified according to the decay rate of the photons, carriers and material polarization. Arecchi et al. were the first to use a classification scheme: class C lasers have all the decay rates of the same order, and therefore a set of three nonlinear differential equations is required for a satisfactory description of the electric field, the population inversion and the material polarization. For class B lasers, the polarization decays towards the steady state much faster than the other two variables, and it can be adiabatically eliminated. Class B lasers, of which semiconductor lasers are an example, are then described by just two rate equations for the atomic population inversion (or carriers number) and the electric field. Other examples of class B lasers are CO<sub>2</sub> lasers and solid state lasers . Finally, in class A lasers population inversion and material polarization decay much faster than the electric field. Both material variables can be adiabatically eliminated, and the equation for the electric field is enough to describe the dynamical evolution of the system. Some properties of class A lasers, like a dye laser, are studied in . In this paper we interpret the dynamics of both class A and class B lasers by using a Lyapunov potential.
The paper is organized as follows. In Section II we present a brief review of the relation of Lyapunov potentials to the dynamical equations and the splitting of those into conservative and dissipative parts. We consider the example of class A lasers. In this case, the Lyapunov potential gives an intuitive understanding of the dynamics observed in the numerical simulations. In the presence of noise, the probability density function obtained from the potential allows the calculation of stationary mean values of interest, such as the mean value of the number of photons. We will show that the mean value of the phase of the electric field in the steady state varies linearly with time only when noise is present, in a phenomenon reminiscent of the noise–sustained flows. In Section III, the dynamics of rate equations for class B lasers is presented in terms of the intensity and the carriers number (we will restrict ourselves to the semiconductor laser). In this case we have found a potential which helps to analyze the corresponding dynamics in the absence of noise. By using the conservative part of the equations, one can obtain an expression for the period of the oscillations in the transient regime following the laser switch-on. This expression extends the one obtained in a simpler case by an identification of the laser dynamics with a Toda oscillator in . Here, we have added to the expression for the period the corresponding modifications for the gain saturation term and spontaneous emission noise. Finally, in section IV, we summarize the main results obtained.
## II Potentials and Lyapunov Functions: Class A Lasers
The evolution of a system (dynamical flow) can be classified into different categories according to the relation of the Lyapunov potential to the actual equations of motion . We first consider a deterministic dynamical flow in which the real variables $`(x_1,\mathrm{},x_N)\equiv 𝐱`$ satisfy the general evolution equations:
$$\frac{dx_i}{dt}=f_i(𝐱),i=1,\mathrm{},N$$
(1)
In the so–called potential flow, there exists a non–constant function $`V(𝐱)`$ (the potential) in terms of which the above equations can be written as:
$`{\displaystyle \frac{dx_i}{dt}}=-{\displaystyle \underset{j=1}{\overset{N}{}}}S_{ij}{\displaystyle \frac{\partial V}{\partial x_j}}+v_i`$ (2)
where $`S(𝐱)`$ is a symmetric and positive definite matrix, and $`v_i(𝐱)`$ satisfy the orthogonality condition:
$$\underset{i=1}{\overset{N}{}}v_i\frac{\partial V}{\partial x_i}=0.$$
(3)
A non-potential flow, on the other hand, is one for which the splitting (2), satisfying (3), admits only the trivial solution $`V(𝐱)=`$ constant, $`v_i(𝐱)=f_i(𝐱)`$.
Since the above (sufficient) conditions for a potential flow lead to $`dV/dt0`$, one concludes that $`V(𝐱)`$ (when it satisfies the additional condition of being bounded from below) is a Lyapunov potential for the dynamical system. In this case, one can get an intuitive understanding of the dynamics: the fixed points are given by the extrema of $`V(𝐱)`$ and the trajectories relax asymptotically towards the surface of minima of $`V(𝐱)`$. This decay is produced by the only effect of the terms containing the matrix $`S`$ in Eq. (2), since the dynamics induced by $`v_i`$ conserves the potential, and $`v_i(𝐱)`$ represents the residual dynamics on this minima surface. A particular case of potential flow is given when $`v_i(𝐱)`$ can also be derived from the potential, namely:
$$\frac{dx_i}{dt}=-\underset{j=1}{\overset{N}{}}D_{ij}\frac{\partial V}{\partial x_j}$$
(4)
where the matrix $`D(𝐱)=S(𝐱)+A(𝐱)`$ splits into a positive definite symmetric matrix, $`S`$, and an antisymmetric one, $`A`$. In this case, the residual dynamics also ceases after the surface of minima of $`V(𝐱)`$ has been reached.
We now describe the effect of noise on the dynamics of the above systems. The stochastic equations (considered in the Itô sense) are:
$$\frac{dx_i}{dt}=f_i(𝐱)+\underset{j=1}{\overset{N}{}}g_{ij}(𝐱)\xi _j(t)$$
(5)
where $`g_{ij}(𝐱)`$ are given functions and $`\xi _j(t)`$ are white noise: Gaussian random processes of zero mean and correlations:
$$\xi _i(t)\xi _j(t^{})=2ϵ\delta _{ij}\delta (tt^{})$$
(6)
$`ϵ`$ is the intensity of the noise.
In the presence of noise terms, it is not adequate to talk about fixed points of the dynamics; one should consider instead the maxima of the probability density function $`P(𝐱,t)`$, which satisfies the multivariate Fokker-Planck equation, whose general solution is unknown. When the deterministic part of (5) is a potential flow, however, a closed form for the stationary distribution $`P_{st}(𝐱)`$ can be given in terms of the potential $`V(𝐱)`$ if the following (sufficient) conditions are satisfied:
1. The fluctuation–dissipation condition, relating the symmetric matrix $`S`$ to the noise matrix $`g`$:
$$S_{ij}=\underset{k=1}{\overset{N}{}}g_{ik}g_{jk},S=gg^T$$
(7)
2. $`S_{ij}`$ satisfies:
$$\underset{j=1}{\overset{N}{}}\frac{\partial S_{ij}}{\partial x_j}=0,\forall i$$
(8)
This condition is satisfied, for instance, for a constant matrix $`S`$.
3. $`v_i`$ is divergence free:
$$\underset{i=1}{\overset{N}{}}\frac{\partial v_i}{\partial x_i}=0$$
(9)
This third condition is automatically satisfied for potential flows of the form (4) with a constant matrix A.
Under those circumstances, the stationary probability density function is:
$$P_{st}(𝐱)=Z^{-1}\mathrm{exp}\left(-\frac{V(𝐱)}{ϵ}\right)$$
(10)
where $`Z`$ is a normalization constant. Graham has shown that if conditions 2 and 3 are not satisfied, then the above expression for $`P_{st}(𝐱)`$ is still valid in the limit $`ϵ\to 0`$.
As an example of the use of Lyapunov potentials in a dynamical system, we consider class A lasers whose dynamics can be described in terms of the slowly varying complex amplitude $`E`$ of the electric field:
$$\dot{E}=(1+i\alpha )\left(\frac{\mathrm{\Gamma }}{1+\beta |E|^2}-\kappa \right)E+\zeta (t)$$
(11)
where $`\alpha `$, $`\beta `$, $`\mathrm{\Gamma }`$ and $`\kappa `$ are real parameters. $`\kappa `$ is the cavity decay rate; $`\mathrm{\Gamma }`$ the gain parameter; $`\beta `$ the saturation-intensity parameter and $`\alpha `$ is the detuning parameter. Another widely used model expands the non-linear term to give a cubic dependence on the field (third order Lamb theory ), but this is not necessary here. Eq. (11) is written in a reference frame in which the frequency of the on steady state is zero . $`\zeta (t)`$ is a complex Langevin source term accounting for the stochastic nature of spontaneous emission. It is taken as a Gaussian white noise of zero mean and correlations:
$$\zeta (t)\zeta ^{}(t^{})=4\mathrm{\Delta }\delta (tt^{}),$$
(12)
where $`\mathrm{\Delta }`$ measures the strength of the noise.
By writing the complex variable $`E`$ as $`E=x_1+ix_2`$ and introducing a new dimensionless time such that $`t\to \kappa t`$, the evolution equations become:
$`\dot{x}_1`$ $`=`$ $`\left({\displaystyle \frac{a}{b+x_1^2+x_2^2}}-1\right)(x_1-\alpha x_2)+\xi _1(t)`$ (13)
$`\dot{x}_2`$ $`=`$ $`\left({\displaystyle \frac{a}{b+x_1^2+x_2^2}}-1\right)(\alpha x_1+x_2)+\xi _2(t)`$ (14)
where $`a=\mathrm{\Gamma }/(\kappa \beta )`$ and $`b=1/\beta `$. $`\xi _1(t)`$ and $`\xi _2(t)`$ are white noise terms with zero mean and correlations given by equation (6) with $`ϵ=\mathrm{\Delta }/\kappa `$.
In the deterministic case ($`ϵ=0`$), these dynamical equations constitute a potential flow of the form (4) where the potential $`V(𝐱)`$ is
$$V(x_1,x_2)=\frac{1}{2}[x_1^2+x_2^2-a\mathrm{ln}(b+x_1^2+x_2^2)]$$
(15)
and the matrix $`D(𝐱)`$ (split into symmetric and antisymmetric parts) is:
$$D=S+A=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)+\left(\begin{array}{cc}0& -\alpha \\ \alpha & 0\end{array}\right).$$
(16)
A simpler expression for the potential is given in and valid for the case in which the gain term is expanded in Taylor series.
According to our discussion above, the fixed points of the deterministic dynamics are the extrema of the potential $`V(𝐱)`$: for $`a>b`$ there is a maximum at $`(x_1,x_2)=0`$ (corresponding to the laser in the off state) and a line of minima given by $`x_1^2+x_2^2=a-b`$ (see Fig. 1). The asymptotic stable situation, then, is that the laser switches to the on state with an intensity $`I\equiv |E|^2=x_1^2+x_2^2=a-b`$. For $`a<b`$ the only stable fixed point is the off state $`I=0`$.
In the transient dynamics, the symmetric matrix $`S`$ is responsible for driving the system towards the line of minima of $`V`$ following the lines of maximum slope of $`V`$. The antisymmetric part $`A`$ (which is proportional to $`\alpha `$) induces a movement orthogonal to the direction of maximum variation of $`V(𝐱)`$. The combined effects of $`S`$ and $`A`$ produce a spiraling trajectory in the $`(x_1,x_2)`$ plane. The angular velocity of this spiral movement is proportional to $`\alpha `$. Asymptotically, the system tends to one of the minima in the line $`I=a-b`$, the exact location depending on the initial conditions. The potential decreases in time until it arrives at its minimum value: $`V(x_1^2+x_2^2=a-b)=-\frac{1}{2}(a\mathrm{ln}(a)-a+b)`$.
In the presence of moderate levels of noise, $`ϵ\ne 0`$, the qualitative features of the transient dynamics remain the same as in the deterministic case. The most important differences appear near the stationary situation. As the final value of the intensity is approached and for $`\alpha \ne 0`$, the phase rotation slows down and the mean value of the phase $`\varphi `$ of the electric field $`E`$ changes linearly with time also in the steady state, see Fig. 2. For $`\alpha =0`$ there is only the ordinary phase diffusion around the circumference $`x_1^2+x_2^2=a-b`$ that represents the set of all possible deterministic equilibrium states . Therefore, for $`\alpha \ne 0`$ the real and imaginary parts of $`E`$ oscillate not only in the transient dynamics but also in the steady state, and while the frequency of the oscillations still depends on $`\alpha `$ (as well as $`ϵ`$), their amplitude depends on the noise strength $`ϵ`$.
We can understand these aforementioned features of the noisy dynamics using the deterministic Lyapunov potential $`V(x_1,x_2)`$. Since conditions 1-3 above are satisfied, the stationary probability distribution is given by (10) with $`V(x_1,x_2)`$ given by (15). By changing variables to intensity and phase, we find that the probability density functions for $`I`$ and $`\varphi `$ are independent functions, $`P_{st}(\varphi )=1/(2\pi )`$ is a constant and,
$$P_{st}(I)=Z^{-1}\mathrm{e}^{-I/(2ϵ)}(b+I)^{a/(2ϵ)}$$
(17)
where the normalization constant is $`Z=(2ϵ)^{\frac{a}{2ϵ}+1}\mathrm{e}^{\frac{b}{2ϵ}}\mathrm{\Gamma }(\frac{a}{2ϵ}+1,\frac{b}{2ϵ})`$ and $`\mathrm{\Gamma }(x,y)`$ is the incomplete Gamma function. From this expression, we see that, independently of the value for $`ϵ`$, $`P_{st}(I)`$ has its maximum at the deterministic stationary value $`I_m=a-b`$. Starting from a given initial condition corresponding, for instance, to the laser in the off state, the intensity fluctuates around a mean value that increases monotonically with time. In the stationary state, the intensity fluctuates around the deterministic value $`I_m=a-b`$ but, since the distribution (17) is not symmetric around $`I_m`$, the mean value $`\langle I\rangle _{st}`$ is larger than the deterministic value. By using (17) one can easily find that
$$\langle I\rangle _{st}=(a-b)+2ϵ\left[1+\frac{\mathrm{exp}(-b/2ϵ)(b/2ϵ)^{\frac{a}{2ϵ}+1}}{\mathrm{\Gamma }(\frac{a}{2ϵ}+1,\frac{b}{2ϵ})}\right]$$
(18)
An expression for the mean value of the intensity in the steady state was also given in in the simpler case of an expansion of the saturation-term parameter in the dynamical equations.
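As a cross-check, the sketch below evaluates $`\langle I\rangle _{st}`$ both from the closed-form Eq. (18), using the regularized incomplete gamma function available in scipy, and by direct quadrature of the stationary density of Eq. (17), for the parameter values used in Fig. 2.

```python
import numpy as np
from scipy.special import gammaincc, gamma

a, b, eps = 2.0, 1.0, 0.1
s, x = a/(2*eps) + 1.0, b/(2*eps)

# upper incomplete Gamma(s, x) = gammaincc(s, x) * Gamma(s)
Gamma_upper = gammaincc(s, x) * gamma(s)
I_mean = (a - b) + 2*eps*(1.0 + np.exp(-x) * x**s / Gamma_upper)

# direct check: <I> = int I P_st(I) dI / int P_st(I) dI
I = np.linspace(0.0, 10.0, 200001)       # the density is negligible beyond I ~ 10
p = np.exp(-I/(2*eps)) * (b + I)**(a/(2*eps))
I_mean_num = np.trapz(I*p, I) / np.trapz(p, I)

print(f"Eq. (18):   {I_mean:.6f}")       # slightly above the deterministic a-b = 1
print(f"quadrature: {I_mean_num:.6f}")
```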
As mentioned before, in the steady state of the stochastic dynamics, the phase $`\varphi `$ of the electric field fluctuates around a mean value that changes linearly with time. Of course, since any value of $`\varphi `$ can be mapped into the interval $`[0,2\pi )`$, this is not inconsistent with the fact that the stationary distribution for $`\varphi `$ is a uniform one. We can easily understand the origin of this noise sustained flow : the rotation inducing terms, those proportional to $`\alpha `$ in the equations of motion, are zero at the line of minima of the potential $`V`$ and, hence, do not act in the steady deterministic state. Fluctuations allow the system to explore regions of the configuration space $`(x_1,x_2)`$ where the potential is not at its minimum value. Since, according to Eq. (18), the mean value of $`I`$ is not at the minimum of the potential, there is, on average, a non-zero contribution of the rotation terms producing the phase drift observed.
The rotation speed can be calculated by writing the evolution equation for the phase of the electric field as:
$$\dot{\varphi }=\left(\frac{a}{b+I}-1\right)\alpha +\frac{1}{\sqrt{I}}\xi (t)$$
(19)
where $`\xi (t)`$ is a white noise term with zero mean value and correlations given by (6). By taking the average value and using the rules of the Itô calculus, one arrives at:
$$\langle \dot{\varphi }\rangle =\alpha \left(\left\langle \frac{a}{b+I}\right\rangle -1\right)$$
(20)
and, by using the distribution (17), one obtains the stochastic frequency shift:
$$\langle \dot{\varphi }\rangle _{st}=-\alpha \frac{\mathrm{exp}(-b/2ϵ)(b/2ϵ)^{\frac{a}{2ϵ}}}{\mathrm{\Gamma }(\frac{a}{2ϵ}+1,\frac{b}{2ϵ})}$$
(21)
Notice that this average rotation speed is zero in the case of no detuning ($`\alpha =0`$) or for the deterministic dynamics ($`ϵ=0`$) and that, due to the minus sign, the rotation speed is opposite to that of the deterministic transient dynamics when starting from the off state. These results are in excellent agreement with numerical simulations of the rate equations in the presence of noise (see Fig. 2).
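This comparison is easy to reproduce: the sketch below integrates Eqs. (13)-(14) with a plain Euler-Maruyama scheme, accumulates the unwrapped phase, and compares the measured drift with Eq. (21). Parameters follow Fig. 2; the time step, number of steps and number of trajectories are arbitrary choices.

```python
import numpy as np
from scipy.special import gammaincc, gamma

a, b, eps, alpha = 2.0, 1.0, 0.1, 5.0
dt, nsteps, ntraj = 1e-3, 100_000, 200
rng = np.random.default_rng(1)

x1 = np.zeros(ntraj); x2 = np.zeros(ntraj); phi = np.zeros(ntraj)
for _ in range(nsteps):
    g = a/(b + x1**2 + x2**2) - 1.0
    dW = rng.normal(0.0, np.sqrt(2*eps*dt), (2, ntraj))   # <xi xi> = 2 eps delta
    x1n = x1 + g*(x1 - alpha*x2)*dt + dW[0]
    x2n = x2 + g*(alpha*x1 + x2)*dt + dW[1]
    # phase increment in (-pi, pi], so the accumulated phase is unwrapped
    phi += np.angle((x1n + 1j*x2n) * np.conj(x1 + 1j*x2 + 1e-30))
    x1, x2 = x1n, x2n

s, x = a/(2*eps) + 1.0, b/(2*eps)
drift_theory = -alpha * np.exp(-x) * x**(s - 1) / (gammaincc(s, x) * gamma(s))
print("measured drift:", phi.mean()/(nsteps*dt))   # small transient bias remains
print("Eq. (21)      :", drift_theory)
```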
## III Class B lasers
The dynamics of a typical class B laser, for instance a single mode semiconductor laser, can be described in terms of two evolution equations, one for the slowly-varying complex amplitude $`E`$ of the electric field inside the laser cavity and the other for the carriers number $`N`$ (or electron-hole pairs) . These equations include noise terms accounting for the stochastic nature of spontaneous emission and random non-radiative carrier recombination due to thermal fluctuations. Both noise sources are usually assumed to be white Gaussian noise.
The equation for the electric field can be written in terms of the optical intensity $`I`$ and the phase $`\varphi `$ by defining $`E=\sqrt{I}e^{\mathrm{i}\varphi }`$. For simplicity, we neglect the explicit random fluctuation terms and retain, as usual , the mean power of the spontaneous emission. The equations are:
$$\frac{dI}{dt}=(G(N,I)-\gamma )I+4\beta N$$
(22)
$$\frac{d\varphi }{dt}=\frac{1}{2}(G(N,I)-\gamma )\alpha $$
(23)
$$\frac{dN}{dt}=C-\gamma _eN-G(N,I)I$$
(24)
$`G(N,I)`$ is the material gain given by:
$$G(N,I)=\frac{g(N-N_o)}{1+sI}$$
(25)
The definitions and typical values of the parameters for semiconductor lasers are given in Table 1. The first term of Eq. (22) accounts for the stimulated emission while the second accounts for the mean value of the spontaneous emission power. Eqs. (22)–(24) are written in the reference frame in which the frequency of the on state is zero when spontaneous emission noise is neglected. The threshold condition is obtained by setting $`G(N,I)=\gamma `$, $`I=0`$ and neglecting spontaneous emission, i.e. $`N_{th}=N_o+\frac{\gamma }{g}`$. The threshold number of carriers injected per unit time to turn the laser on is given by $`C_{th}=\gamma _eN_{th}`$. Eq. (23) shows that $`\dot{\varphi }`$ is linear with $`N`$ and slightly (due to the smallness of the saturation parameter $`s`$, see Table 1) nonlinear with $`I`$.
Since, in the deterministic case considered henceforth, the evolution equations for $`I`$ and $`N`$ do not depend on the phase $`\varphi `$, we can concentrate only on the evolution of $`I`$ and $`N`$. One can obtain a set of simpler dimensionless equations by performing the following change of variables:
$$y=\frac{2g}{\gamma }I,z=\frac{g}{\gamma }(N-N_o),\tau =\frac{\gamma }{2}t$$
(26)
The equations then become:
$`{\displaystyle \frac{dy}{d\tau }}`$ $`=`$ $`2\left({\displaystyle \frac{z}{1+\overline{s}y}}-1\right)y+cz+d`$ (27)
$`{\displaystyle \frac{dz}{d\tau }}`$ $`=`$ $`a-bz-{\displaystyle \frac{zy}{1+\overline{s}y}},`$ (28)
where we have defined $`a=\frac{2g}{\gamma ^2}(C-\gamma _eN_o)`$, $`b=\frac{2\gamma _e}{\gamma }`$, $`c=\frac{16\beta }{\gamma }`$, $`d=\frac{16\beta gN_o}{\gamma ^2}`$ and $`\overline{s}=\frac{s\gamma }{2g}`$. These equations form the basis of our subsequent analysis. The steady states are obtained by setting (27) and (28) equal to zero, i.e.:
$`y_{st}`$ $`=`$ $`{\displaystyle \frac{1}{4(1+b\overline{s})}}[2(a-b)+d(1+b\overline{s})+ca\overline{s}+\sqrt{v}]`$ (29)
$`z_{st}`$ $`=`$ $`{\displaystyle \frac{a(1+\overline{s}y_{st})}{b+y_{st}(1+b\overline{s})}}`$ (30)
where the constant $`v`$ is given by:
$`v`$ $`=`$ $`4(a-b)^2+4d(a+b)(1+b\overline{s})+d^2(1+b\overline{s})^2`$ (31)
$`+`$ $`c(8a+4a\overline{s}(a+b)+2da\overline{s}(1+b\overline{s}))+c^2a^2\overline{s}^2`$ (32)
There is another steady state solution for $`y_{st}`$ given by Eq. (29) (with a minus sign in front of $`\sqrt{v}`$) which, however, does not correspond to any possible physical situation, since $`y_{st}<0`$. For a value of the injected carriers per unit time below threshold ($`C<C_{th}`$, equivalent to $`a-b<0`$) $`y_{st}`$ is very small. This corresponds to the off solution in which the only emitted light corresponds to the spontaneous emission. Above threshold, stimulated emission occurs and the laser operates in the on state with large $`y_{st}`$. In what follows, we will concentrate on the evolution following the laser switch-on to the on state.
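The sketch below maps the physical parameters of Table 1 onto the dimensionless constants of Eqs. (27)-(28) and evaluates the on-state solution, Eqs. (29)-(30). The saturation parameter is chosen so that $`\overline{s}=0.5`$, and $`\beta =10^{-10}\,\mathrm{ps}^{-1}`$ is the value implied by the dimensionless $`c`$ and $`d`$ quoted in the caption of Fig. 3 (the table lists $`10^{-8}\,\mathrm{ps}^{-1}`$ as typical); the resulting constants approximately reproduce those of the caption.

```python
import numpy as np

gam, gam_e = 0.5, 0.001      # ps^-1
N0, g = 1.5e8, 1.5e-8        # carriers at transparency; differential gain, ps^-1
s_phys = 3.0e-8              # chosen so that s-bar = 0.5
beta = 1.0e-10               # ps^-1, value implied by the Fig. 3 caption

N_th = N0 + gam/g            # threshold carrier number
C = 1.2 * gam_e * N_th       # pumping at 1.2 C_th, as in Fig. 3

a = 2*g/gam**2 * (C - gam_e*N0)
b = 2*gam_e/gam
c = 16*beta/gam
d = 16*beta*g*N0/gam**2
sb = s_phys*gam/(2*g)
print(f"a={a:.4g}  b={b:.4g}  c={c:.3g}  d={d:.3g}  s-bar={sb:.2f}")

# on-state solution, Eqs. (29)-(32)
v = (4*(a-b)**2 + 4*d*(a+b)*(1+b*sb) + d**2*(1+b*sb)**2
     + c*(8*a + 4*a*sb*(a+b) + 2*d*a*sb*(1+b*sb)) + c**2*a**2*sb**2)
y_st = (2*(a-b) + d*(1+b*sb) + c*a*sb + np.sqrt(v)) / (4*(1+b*sb))
z_st = a*(1+sb*y_st) / (b + y_st*(1+b*sb))
print(f"y_st={y_st:.5f}  z_st={z_st:.5f}")
```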
It is known that the dynamical evolution of $`y`$ and $`z`$ is such that they both reach the steady state by performing damped oscillations whose period decreases with time. This fact is different from the usual relaxation oscillations that are calculated near the steady state by linearizing the dynamical equations. The time evolution of $`y`$ and $`z`$ is shown in Fig. 3a for some parameters (for other values of the parameters equivalent results are obtained), while the corresponding projection in the $`y`$, $`z`$ phase-plane is shown in Fig. 4. We are interested in obtaining a Lyapunov potential that can help to explain the observed dynamics. This study was done in without considering either the saturation term or the mean value of the spontaneous emission power, and under those conditions an expression for the period of the transient oscillations was obtained. In our work, we calculate the period of the oscillations by taking into account these two effects. The period is obtained in terms of the potential, by assuming that the latter has a constant value during one period. It will be shown that this assumption works reasonably well and gives a good agreement with numerical calculations. Near the steady state, the relaxation oscillations can also be calculated in this form, but the potential is almost constant and consequently so is the period.
The evolution equations (27), (28) can be cast in the form of Eq. (4) with the following Lyapunov potential:
$$V(y,z)=A_1y+A_2y^2+A_3\mathrm{ln}(y)+\frac{A_4}{y}+\frac{1}{2}B^2(y,z)$$
(33)
where
$`A_1`$ $`=`$ $`{\displaystyle \frac{1}{2}}-{\displaystyle \frac{1}{2}}a\overline{s}+b\overline{s}-{\displaystyle \frac{1}{4}}\overline{s}d(1+b\overline{s})-{\displaystyle \frac{1}{4}}a\overline{s}^2c`$ (34)
$`A_2`$ $`=`$ $`{\displaystyle \frac{\overline{s}}{4}}(1+b\overline{s})`$ (35)
$`A_3`$ $`=`$ $`-{\displaystyle \frac{1}{2}}[a-b+(ac+bd)\overline{s}+{\displaystyle \frac{d}{2}}]`$ (36)
$`A_4`$ $`=`$ $`{\displaystyle \frac{(ac+bd)}{4}}`$ (37)
$`B(y,z)`$ $`=`$ $`z-1-\overline{s}y+{\displaystyle \frac{(d+cz)}{2y}}(1+\overline{s}y).`$ (38)
The corresponding (non-constant) matrix $`D`$ is given by:
$$D=\left(\begin{array}{cc}0& -d_{12}\\ d_{12}& d_{22}\end{array}\right).$$
(39)
being
$`d_{12}`$ $`=`$ $`{\displaystyle \frac{4y^2}{(1+\overline{s}y)[2y+c(1+\overline{s}y)]}}`$ (40)
$`d_{22}`$ $`=`$ $`{\displaystyle \frac{4y[(1+2\overline{s}+b\overline{s})y^2+by+d+cz]}{(1+\overline{s}y)[2y+c(1+\overline{s}y)]^2}}`$ (41)
This potential reduces to the one obtained in ref. when setting $`c=d=\overline{s}=0`$ (which corresponds to setting the laser parameters $`\beta =s=0`$). As expected, non-vanishing values for the parameters $`s`$ and $`\beta `$ increase the dissipative part of the dynamics ($`d_{22}`$), associated with the damping term. This result was pointed out in when linearizing the rate equations around the steady state.
The equipotential lines of (33) are also plotted in Fig. 4. It is observed that there is only one minimum for $`V`$ and hence the only stable solution (for this range of parameters) is that the laser switches to the on state and relaxes to the minimum of $`V`$. The movement towards the minimum of $`V`$ has two components: a conservative one that produces closed equipotential trajectories and a damping one that decreases the value of the potential. The combined effects drive the system to the minimum following a spiral movement, best observed in Fig. 4.
The time evolution of the potential is also plotted in Fig. 3b. In this figure it can be seen that the Lyapunov potential is approximately constant between two consecutive peaks of the relaxation oscillations (this fact can also be observed from the equipotential lines of Fig. 4). This allows us to estimate the relaxation oscillation period by approximating $`V(y,z)=V`$, constant, during this time interval. When the potential is considered to be constant, the period can be evaluated by the standard method of elementary mechanics: $`z`$ is replaced by its expression obtained from (27) in terms of $`y`$ and $`\dot{y}`$ (the dot stands for the time derivative) in $`V(y,z)`$. Using the condition that $`V(y,z)=V=constant`$, we obtain an equation for $`y`$ of the form: $`F(y,\dot{y})=V`$. From this equation, we can calculate the relaxation oscillation period ($`T`$) by integrating over a cycle. This leads to the expression:
$$T=\int _{y_0}^{y_1}\frac{1+\overline{s}y}{y}\frac{dy}{[2(V-A_1y-A_2y^2-A_3\mathrm{ln}(y)-A_4y^{-1})]^{1/2}}$$
(42)
where $`y_0`$ and $`y_1`$ are the values of $`y`$ that cancel the denominator. We stress the fact that the only approximation used in the derivation of this expression is that the Lyapunov potential is constant between two maxima of the intensity oscillations. The previous equation for the period reduces, in the case $`c=d=\overline{s}=0`$, to the one previously obtained by using the relation between the laser dynamics and the Toda oscillator derived in . Evaluation of the above integral shows that the period $`T`$ decreases as the potential $`V`$ decreases. Since the Lyapunov potential decreases with time, this explains the fact that the period of the oscillations in the transient regime decreases with time. In Fig. 5 we compare the results obtained with the above expression for the period with the one obtained from numerical simulations of the rate equations (27), (28). In the simulations we compute the period as the time between two peaks in the evolution of the variable $`y`$. As can be seen in this figure, the above expression for the period, when using the numerical value of the potential $`V`$, accurately reproduces the simulation results although it is systematically lower than the numerical result. The discrepancy is less than one percent over the whole range of times.
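The period integral can be evaluated with standard library tools; a sketch is given below. The integrable inverse-square-root singularities at the turning points are factored out and handled by the quadrature routine through the algebraic weight $`(y-y_0)^{-1/2}(y_1-y)^{-1/2}`$. The coefficients follow Eqs. (34)-(37) with the parameter values of Fig. 3, and the offsets $`V-V_{st}`$ sampled at the end are arbitrary snapshots of the decaying potential.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

a, b, c, d, sb = 0.009, 0.004, 3.2e-9, 1.44e-8, 0.5

A1 = 0.5 - 0.5*a*sb + b*sb - 0.25*sb*d*(1 + b*sb) - 0.25*a*c*sb**2
A2 = 0.25*sb*(1 + b*sb)
A3 = -0.5*(a - b + (a*c + b*d)*sb + 0.5*d)
A4 = 0.25*(a*c + b*d)

U  = lambda y: A1*y + A2*y**2 + A3*np.log(y) + A4/y     # the potential at B = 0
dU = lambda y: A1 + 2*A2*y + A3/y - A4/y**2

y_st = brentq(dU, 1e-4, 1.0)    # B vanishes at the fixed point, so U'(y_st) = 0
V_st = U(y_st)

def period(V):                  # Eq. (42)
    y0 = brentq(lambda y: V - U(y), 1e-7, y_st)         # inner turning point
    y1 = brentq(lambda y: V - U(y), y_st, 1.0)          # outer turning point
    h = lambda y: (1 + sb*y)/y / np.sqrt(2*(V - U(y)) / ((y - y0)*(y1 - y)))
    T, _ = quad(h, y0, y1, weight='alg', wvar=(-0.5, -0.5))
    return T

for dV in (5e-3, 2e-3, 5e-4):   # the period shrinks as the potential decays
    print(f"V - V_st = {dV:.0e}  ->  T = {period(V_st + dV):.2f}")
```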
It is possible to quantify the difference between the approximate expression (42) and the exact values near the stationary state. In this case expression (42) reduces to:
$$T=\frac{2\pi }{d_{12,st}\sqrt{EF-H^2}}$$
(43)
where:
$`E`$ $`=`$ $`2\left(A_2-{\displaystyle \frac{1}{2}}{\displaystyle \frac{A_3}{y_{st}^2}}+{\displaystyle \frac{A_4}{y_{st}^3}}+{\displaystyle \frac{1}{2}}\left[\overline{s}+{\displaystyle \frac{(d+cz_{st})}{2y_{st}^2}}\right]^2\right)`$ (44)
$`F`$ $`=`$ $`\left[1+c{\displaystyle \frac{(1+\overline{s}y_{st})}{2y_{st}}}\right]^2`$ (45)
$`H`$ $`=`$ $`\left[1+{\displaystyle \frac{c(1+\overline{s}y_{st})}{2y_{st}}}\right]\left[\overline{s}+{\displaystyle \frac{(d+cz_{st})}{2y_{st}^2}}\right]`$ (46)
and $`d_{12,st}`$ is the coefficient $`d_{12}`$ calculated in the steady state. The period of the relaxation oscillations near the steady state can be obtained by linearizing eqs. (27) and (28) after a small perturbation is applied. The frequency of the oscillations in the steady state is the imaginary part of the eigenvalues of the linearized equations. This yields a period:
$$T_{st}=\frac{2\pi }{d_{12,st}\sqrt{EF-H^2}}\left[1-\frac{d_{22,st}^2}{d_{12,st}^2}\frac{F^2}{4(EF-H^2)}\right]^{-1/2}$$
(47)
The difference between (43) and (47) vanishes with $`d_{22,st}`$ (i.e. $`d_{22}`$ in the stationary state). Since $`EF-H^2`$ is always a positive quantity, our approximation will give, at least asymptotically, a smaller value for the period.
In order to have a complete understanding of the variation of the period with time, we need to compute the time variation of the potential $`V(\tau )`$ between two consecutive intensity peaks. This variation is induced by the dissipative terms in the equations of motion. We have not been able to derive an expression for the variation of the potential (see for an approximate expression in a simpler case). However, we have found that a semi-empirical argument can yield a very simple law which is well reproduced by the simulations. We start by studying the decay to the stationary state in the linearized equations. By expanding around the steady state: $`y=y_{st}+\delta y`$, $`z=z_{st}+\delta z`$, the dynamical equations imply that the variables decay to the steady state as: $`\delta y(\tau ),\delta z(\tau )\sim \mathrm{exp}(-\frac{\rho }{2}\tau )`$, where:
$$\rho =d_{22,st}F$$
(48)
Expanding $`V(y,z)`$ around the steady state and taking an initial condition at $`\tau _0`$ we find an expression for the decay of the potential:
$$\mathrm{ln}[V(\tau )-V_{st}]=\mathrm{ln}[V(\tau _0)-V_{st}]-\rho (\tau -\tau _0)$$
(49)
In Fig. 6 we plot $`\mathrm{ln}[V(\tau )-V_{st}]`$ versus time and compare it with the approximation (49). One can see that the latter fits $`\mathrm{ln}[V(\tau )-V_{st}]`$ not only near the steady state (where it was derived), but also during the transient dynamics. The value of $`\tau _0`$, being a free parameter, was chosen as the time at which the first peak of the intensity appears. Although other values of $`\tau _0`$ might produce a better fit, the one chosen here has the advantage that it can be calculated analytically by following the technique of ref. . It can be derived from Eq. (42) that the period $`T`$ is linearly related to the potential $`V`$. This, combined with the result of Eq. (49), suggests a semi-empirical law for the evolution of the period of the form:
$$\mathrm{ln}[T(\tau )-T_{st}]=\mathrm{ln}[T(\tau _0)-T_{st}]-\rho (\tau -\tau _0)$$
(50)
This simple expression fits the calculated period well, not only near the steady state but also in the transient regime, see Figs. 5 and 7. The tiny differences observed near the steady state are due to the fact that the semi-empirical law, Eq. (50), is based on the validity of the relation Eq. (42) between the period and the potential. As was already discussed above, that expression slightly underestimates the asymptotic (stationary) value of the period. By complementing this study with the procedure given in to describe the switch-on process of a laser, valid until the first intensity peak is reached, we can obtain a complete description of the variation of the oscillation period in the dynamical evolution following the laser switch-on.
## IV Summary
In this work we have used Lyapunov potentials in the context of laser dynamics. For class A lasers, we have explained qualitatively the observed features of the deterministic dynamics by the movement on the potential landscape. We have identified the relaxational and conservative terms in the dynamical equations of motion. In the stochastic dynamics (when additive noise is added to the equations), we have explained the presence of a “noise sustained flow” for the phase of the electric field as the interaction of the conservative terms with the noise terms. An analytical expression allows the calculation of the phase drift.
In the case of class B lasers, we have obtained a Lyapunov potential valid only in the deterministic case, when noise fluctuations are neglected. We have found that the dynamics is non-relaxational with a non-constant matrix $`D`$. The fixed point corresponding to the laser in the on state is interpreted as a minimum in the potential landscape. By observing that the potential is nearly constant between two consecutive intensity peaks during the relaxation process towards the steady state, but still in a highly non-linear regime, we were able to obtain an approximate expression for the period of the oscillations. Moreover, we have derived a simple exponential approach of the period of the oscillations with time towards the period of the relaxation oscillations near the steady state. This dependence appears to be valid after the first intensity peak following the switch-on of the laser. A possible extension of our work could be to consider the presence of an external field, which is numerically studied in .
## Acknowledgments
We wish to thank Professor M. San Miguel and Professor G.L. Oppo for a careful reading of this manuscript and for useful comments. We acknowledge financial support from DGES (Spain) under Project Nos. PB94-1167 and PB97-0141-C02-01.
Table 1
| PARAMETER | DESCRIPTION | VALUE |
| --- | --- | --- |
| $`C`$ | Carriers injected per unit time. | $`>\mathrm{threshold}`$ |
| $`\gamma `$ | Cavity decay rate. | $`0.5\,\mathrm{ps}^{-1}`$ |
| $`\gamma _e`$ | Carrier decay rate. | $`0.001\,\mathrm{ps}^{-1}`$ |
| $`N_o`$ | Number of carriers at transparency. | $`1.5\times 10^8`$ |
| $`g`$ | Differential gain parameter. | $`1.5\times 10^{-8}\,\mathrm{ps}^{-1}`$ |
| $`s`$ | Saturation parameter. | $`10^{-8}`$–$`10^{-7}`$ |
| $`\beta `$ | Spontaneous emission rate. | $`10^{-8}\,\mathrm{ps}^{-1}`$ |
| $`\alpha `$ | Linewidth enhancement factor. | 3–6 |
## Figure captions
Fig. 1: Potential for a class A laser, Eq. (15), with the parameters $`a=2`$, $`b=1`$. Dimensionless units.

Fig. 2: Time evolution of the mean value of the phase $`\varphi `$ in a class A laser, in the case $`a=2`$, $`b=1`$, $`ϵ=0.1`$. For $`\alpha =0`$ (dashed line) there is only phase diffusion and the average value is $`0`$ for all times. When $`\alpha =5`$ (solid line) there is a linear variation of the mean value of the phase at late times. Error bars are included for some values. The dot-dashed line has the slope given by the theoretical prediction Eq. (21). The initial condition is taken as $`x_1=x_2=0`$ and the results were averaged over 10000 trajectories with different realizations of the noise. Dimensionless units.

Fig. 3: a) Normalized intensity, $`y`$ (solid line), and normalized carriers number, $`z/40`$ (dot-dashed line), versus time in a class B laser obtained by numerical solution of Eqs. (27) and (28). b) Plot of the potential (33). Parameters: $`a=0.009`$, $`b=0.004`$, $`\overline{s}=0.5`$, $`c=3.2\times 10^{-9}`$, $`d=1.44\times 10^{-8}`$, which correspond to the physical parameters in Table 1 with $`C=1.2C_{th}`$. The initial conditions are taken as $`y=5\times 10^{-8}`$ and $`z=0.993`$. Dimensionless units.

Fig. 4: Number of carriers versus intensity (scaled variables). The vector field and contour plot (thick lines) are also represented. Same parameters as in Fig. 3. Dimensionless units.

Fig. 5: Period versus time in a class B laser. The solid line has been calculated as the distance between two peaks of intensity, with triangles plotted at the beginning of each period; the dashed line has been calculated using the expression (42), with the value of the potential $`V`$ also obtained from the simulation; the dotted line corresponds to the semi-empirical expression (50). Same parameters as in Fig. 3. We have used $`\tau _0=55.55`$, coinciding with the position of the first intensity peak. Dimensionless units.

Fig. 6: Logarithm of the potential difference versus time in a class B laser (solid line), compared with the theoretical expression in the steady state (49) (dashed line). Same parameters as in Fig. 3 and $`\tau _0`$ as in Fig. 5. Dimensionless units.

Fig. 7: Logarithm of the period difference versus time in a class B laser. Triangles correspond to the period calculated from the simulations as the distance between two consecutive intensity peaks, at the same positions as in Fig. 6. The dashed line is the semiempirical expression Eq. (50). Same parameters as in Fig. 3 and $`\tau _0`$ as in Fig. 5. Dimensionless units.
## 1 Introduction
The theory of gravity or string theory on three-dimensional anti-de Sitter space ($`AdS_3`$) has recently been a topic of considerable interest. It has been understood for a long time that the gravitational degrees of freedom in the bulk of $`AdS_3`$ may be described by a conformal field theory on the boundary. After Maldacena presented the celebrated conjecture on the duality between gravity on $`AdS`$ space and CFT on its boundary, the case of $`AdS_3`$ attracted particular interest, as we understand CFT in two dimensions best. It may be possible to prove Maldacena’s conjecture in more detail in the case of $`AdS_3/CFT_2`$. Much work has been done along this path.
In string theory, the $`AdS_3`$ space arises as the near-horizon geometry of the bound states of D1/D5-branes. We take its S-dual and consider the string theory on an NS1/NS5-brane background, in order to avoid the difficulty in considering string theories on RR backgrounds. The duality between string theory on $`AdS_3`$ and a certain boundary CFT has been discussed extensively. The generators of boundary superconformal algebras were directly constructed out of vertex operators on the superstring worldsheet. There have been attempts to study the spectrum of chiral primaries and observe the correspondence to some extent.
There have been further attempts to realize the boundary CFT Ward identities, in which the $`AdS_3`$ string theory was treated in a different way. Although there was some confusion between these two approaches, some recent works put these concepts in order. It now turns out that the string theory on $`AdS_3`$ has two distinct sectors corresponding to two different configurations of the worldsheet, called the “short string” sector and the “long string” sector. The long strings are forced to live near the boundary with some fixed winding number, while the short strings can have arbitrary worldsheet configurations and are free to propagate in the bulk. The two sectors lead to two different explanations for the origin of the central charge of the boundary CFT, as was discussed in . The main purpose of the present paper is to propose a unified framework including both the short and long string sectors, based on Matrix string theory.
One of the unresolved problems regarding the spectrum of chiral primaries is the mismatch of its upper bound. In the string theory on $`AdS`$ space it is of order $`k`$, where $`k`$ is the number of $`5`$-branes. However, it is argued that the bound becomes of order $`pk`$ in the boundary CFT, where $`p`$ is the number of $`1`$-branes. As was stated in our previous paper, it is natural to think that this discrepancy is due to the fact that we are only considering the theory of a single string on $`AdS_3`$. We expect that the multi-string system based on Matrix string theory may solve this inconsistency. In this paper we show that this is indeed the case.
This paper is organized as follows. We begin with the analysis of the Coulomb branch of Matrix string theory in the presence of NS5-branes. We are then led to the sigma model on the moduli space of the Coulomb branch, which is identified as the second quantized string theory on $`AdS_3\times S^3\times \text{R}^4`$. Twisted sectors of this orbifold CFT correspond to glued strings, which are strings made from some fundamental strings glued together. (Hereafter we shall use the term “glued string” for strings which are made by gluing some fundamental strings together, and keep the term “long string” for the different objects which appear in .) The long/short string sectors are distinguished by the presence/absence of nonzero electric flux on the worldsheet.
We shall also discuss the spectrum of chiral primaries from the viewpoint of Matrix string theory, especially the issue of missing chiral primaries and the threshold for continuous spectrum pointed out in .
Throughout this paper we denote NS$`5`$-brane charge as $`k`$ and NS$`1`$-brane charge as $`p`$, following the convention of .
## 2 Analysis of Gauge Theory
Matrix string theory of type IIA superstring is defined as the large $`N`$ limit of $`𝒩=(8,8)`$ $`U(N)`$ SYM theory in two dimensions. To incorporate $`k`$ longitudinal NS5-branes extending along the $`016789`$-directions, we add $`k`$ hypermultiplets belonging to the fundamental representation of $`U(N)`$. In $`𝒩=(4,4)`$ language the system consists of a $`U(N)`$ vector multiplet and hypermultiplets belonging to one adjoint and $`k`$ fundamental representations of $`U(N)`$. In the Coulomb branch the gauge group is generically broken down to $`U(1)^N`$. The massless scalar fields parametrizing the moduli space are $`N`$ abelian vectormultiplets, which correspond to the string coordinates along the directions transverse to the NS$`5`$-branes ($`2345`$ directions), and the $`N`$ neutral hypermultiplets corresponding to the coordinates along $`6789`$ directions. Usually the $`6789`$ directions are compactified on $`T^4`$ or $`K3`$. But in this article we would like to ignore the subtleties concerning the compactification of Matrix string theory and focus mainly on the six-dimensional part.
One can obtain the exact metric of moduli space by one-loop analysis. This was done in for the case of $`N=1`$, and similar calculation works also in the case of generic $`N`$.
In $`𝒩=(2,2)`$ terminology, an abelian vector multiplet consists of a chiral and a twisted-chiral multiplet, $`(\mathrm{\Phi },\mathrm{\Sigma })`$. We can write down the most generic action for these multiplets as an integral of a function $`K(\mathrm{\Phi },\overline{\mathrm{\Phi }},\mathrm{\Sigma },\overline{\mathrm{\Sigma }})`$ over superspace. The condition for $`K`$ to give an $`𝒩=(4,4)`$ supersymmetric gauge theory of vectormultiplets is given by
$$K_{\mathrm{\Phi }^i\overline{\mathrm{\Phi }}^j}+K_{\mathrm{\Sigma }^i\overline{\mathrm{\Sigma }}^j}=0,\qquad K_{\mathrm{\Phi }^i\overline{\mathrm{\Phi }}^j}-K_{\mathrm{\Phi }^j\overline{\mathrm{\Phi }}^i}=0$$
(2.1)
The metric and $`B`$ field on the target space are given by
$`ds^2`$ $`=`$ $`K_{\mathrm{\Phi }^i\overline{\mathrm{\Phi }}^j}d\mathrm{\Phi }^id\overline{\mathrm{\Phi }}^j-K_{\mathrm{\Sigma }^i\overline{\mathrm{\Sigma }}^j}d\mathrm{\Sigma }^id\overline{\mathrm{\Sigma }}^j`$ (2.2)
$`B`$ $`=`$ $`{\displaystyle \frac{1}{4}}(K_{\mathrm{\Phi }^i\mathrm{\Sigma }^j}d\mathrm{\Phi }^id\mathrm{\Sigma }^j-K_{\mathrm{\Phi }^i\overline{\mathrm{\Sigma }}^j}d\mathrm{\Phi }^id\overline{\mathrm{\Sigma }}^j-K_{\overline{\mathrm{\Phi }}^i\mathrm{\Sigma }^j}d\overline{\mathrm{\Phi }}^id\mathrm{\Sigma }^j+K_{\overline{\mathrm{\Phi }}^i\overline{\mathrm{\Sigma }}^j}d\overline{\mathrm{\Phi }}^id\overline{\mathrm{\Sigma }}^j)`$ (2.3)
Taking $`Spin(4)`$ and permutation symmetry into account, we can conclude that the most generic form for $`K`$ is given by
$`K`$ $`=`$ $`{\displaystyle \sum _i}K_i`$
$`K_i`$ $`=`$ $`a(\mathrm{\Phi }^i\overline{\mathrm{\Phi }}^i-\mathrm{\Sigma }^i\overline{\mathrm{\Sigma }}^i)+b\left(\mathrm{ln}\mathrm{\Phi }^i\mathrm{ln}\overline{\mathrm{\Phi }}^i-{\displaystyle \int ^{\frac{\mathrm{\Sigma }^i\overline{\mathrm{\Sigma }}^i}{\mathrm{\Phi }^i\overline{\mathrm{\Phi }}^i}}}{\displaystyle \frac{d\xi }{\xi }}\mathrm{ln}(1+\xi )\right)`$ (2.4)
The terms with coefficients $`a`$ and $`b`$ are tree and one-loop contributions, respectively, and we can fix them as $`a=\frac{1}{2g^2}`$ and $`b=\frac{k}{4\pi }`$.
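The constraint (2.1) on this $`K`$ can be verified directly. The sketch below uses the identity $`\int ^u\frac{d\xi }{\xi }\mathrm{ln}(1+\xi )=-\mathrm{Li}_2(-u)`$ to express the one-loop term through the dilogarithm, which sympy can differentiate; treating the superfields as independent commuting variables is of course only a bookkeeping device here.

```python
import sympy as sp

P, Pb, S, Sb, a, b = sp.symbols('Phi Phib Sigma Sigmab a b', positive=True)
u = S * Sb / (P * Pb)
# K_i of Eq. (2.4); the subtracted integral term equals +polylog(2, -u)
K = a * (P * Pb - S * Sb) + b * (sp.log(P) * sp.log(Pb) + sp.polylog(2, -u))
K_PPb = sp.diff(K, P, Pb)   # K_{Phi Phibar}
K_SSb = sp.diff(K, S, Sb)   # K_{Sigma Sigmabar}
print(sp.simplify(sp.expand_func(K_PPb + K_SSb)))  # 0, the first condition in (2.1)
```

One also finds $`K_{\mathrm{\Phi }\overline{\mathrm{\Phi }}}=a+b/(\mathrm{\Phi }\overline{\mathrm{\Phi }}+\mathrm{\Sigma }\overline{\mathrm{\Sigma }})`$, which exhibits the tree plus one-loop structure of the moduli space metric.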
We would like to note here that the one-loop contributions to the effective action arise only from the fundamental hypermultiplets, so that the resultant contribution is proportional to $`k`$. The one-loop contributions of the adjoint fields cancel each other. This is to be expected, because the system has a larger supersymmetry (with $`16`$ supercharges) when the fundamental hypermultiplets are absent.
The effective action has the following bosonic part
$`S`$ $`=`$ $`{\displaystyle \sum _i}S_i`$
$`S_i`$ $`=`$ $`{\displaystyle \int d^2x\left(\frac{1}{g^2}+\frac{k}{2\pi y_{(i)}^2}\right)\left[\frac{1}{2}\partial _\mu y_{(i)}^p\partial ^\mu y_{(i)}^p+\frac{1}{4}f_{(i)\mu \nu }f_{(i)}^{\mu \nu }\right]}`$ (2.5)
$`+{\displaystyle \int _{\mathcal{M}}}{\displaystyle \frac{k}{6\pi y_{(i)}^4}}ϵ_{pqrs}𝑑y_{(i)}^p𝑑y_{(i)}^q𝑑y_{(i)}^ry_{(i)}^s`$
$$i=1,\mathrm{},N,\qquad \mu ,\nu =0,1,\qquad p,q,r,s=2,3,4,5$$
Here $`y_{(i)}^p`$ and $`f_{(i)\mu \nu }`$ are the four scalars and the $`U(1)`$ field strength of the $`i`$-th abelian vectormultiplet. $`\mathcal{M}`$ is some open three-dimensional space whose boundary is the worldsheet of the Matrix string. In Maldacena’s near horizon limit, or the weak coupling limit of IIA superstring theory, we send $`g^{-2}=g_s^2l_s^2`$ to zero and drop the tree term from the action. Then the metric of the target space is no longer asymptotically flat. The four-dimensional spatial directions now have a tube metric, and we expect to obtain the $`AdS_3\times S^3`$ geometry by incorporating the longitudinal directions. After changing coordinates from $`y_{(i)}^p`$ to the radial coordinate $`y_{(i)}`$ and $`SU(2)`$ group elements, the action becomes
$$S_i=\frac{k}{2\pi }\int d^2x\left[\frac{1}{2y_{(i)}^2}\partial _\mu y_{(i)}\partial ^\mu y_{(i)}+\frac{1}{4y_{(i)}^2}f_{(i)\mu \nu }f_{(i)}^{\mu \nu }\right]+(SU(2)\text{ WZW with level }k)_i$$
(2.6)
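The tube form of the metric is easy to check explicitly: writing $`y^p=e^\varphi n^p`$ with $`n^p`$ a unit four-vector, one finds $`dy^pdy^p/y^2=d\varphi ^2+d\mathrm{\Omega }_3^2`$. A small sympy sketch (with one assumed explicit parametrization of the unit three-sphere):

```python
import sympy as sp

phi, chi, th, psi = sp.symbols('phi chi theta psi')
coords = (phi, chi, th, psi)
# unit 4-vector n on S^3; y^p = exp(phi) n^p
n = sp.Matrix([sp.cos(chi),
               sp.sin(chi) * sp.cos(th),
               sp.sin(chi) * sp.sin(th) * sp.cos(psi),
               sp.sin(chi) * sp.sin(th) * sp.sin(psi)])
y = sp.exp(phi) * n
J = y.jacobian(coords)
g = sp.simplify(sp.exp(-2 * phi) * (J.T * J))  # (1/y^2) dy.dy in these coordinates
print(g)  # diag(1, 1, sin(chi)**2, sin(chi)**2*sin(theta)**2)
```

The flat measure $`dy^pdy^p`$ divided by $`y^2`$ thus splits into the Liouville direction and a round $`S^3`$, which carries the level-$`k`$ WZW model.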
The full supersymmetric extension of this action is then
$$S_i=\frac{k}{4\pi }\int d^2xd^2\theta \,Y_{(i)}^{-2}DY_{(i)}\overline{D}Y_{(i)}+(SU(2)\text{ SWZW with level }k)_i$$
(2.7)
Here $`Y_{(i)}`$ is an $`𝒩=(1,1)`$ superfield which has $`y_{(i)}`$ as the lowest component. Its $`\theta `$-expansion reads
$$Y_{(i)}=y_{(i)}+i\overline{\theta }\psi _{(i)}+\frac{i}{4}\overline{\theta }\theta \,ϵ^{\mu \nu }f_{(i)\mu \nu }.$$
Note that the level of the bosonic part of SWZW in (2.7) should be shifted to $`k2`$ by means of a chiral rotation so as to make the fermionic fields free, as was discussed in . Hence the level of the total current, including the fermionic contribution, should remain $`k`$.
## 3 Short Strings and Long Strings
Various interesting phenomena occur when we compactify the spatial direction on $`S^1`$. Although the gauge field in two dimensions has no dynamics, it carries some topological information, according to which we can classify the sectors of Matrix string theory.
Various sectors can be labeled by the periodicity conditions along $`S^1`$. As was explained in the original papers on Matrix string theory, we can twist the periodicity of fields which take values in the Cartan subgroup of $`U(N)`$ by elements of the Weyl group, namely, the symmetric group of $`N`$ elements. The twisted sectors are pictorially understood as glued strings, i.e. some strings mutually glued together into a single string. For example, there is a sector which is labeled by the $`N`$-th cyclic permutation. This sector is interpreted as the sector where $`N`$ fundamental strings are joined together to form a glued string. As we shall explain later, the glued strings constructed in this way account for the missing chiral primaries.
Another interesting phenomenon is the presence of nonzero $`U(1)`$ electric flux. In the flat background with no NS5-brane charge, this flux describes the existence of D0-branes, and in the T-dualized framework it describes the $`(p,q)`$-string sectors. However, as our discussion below is based on the effective action (2.7), the presence of electric flux may have a different interpretation. We shall show below that the string becomes “long” if there is nonzero electric flux on its worldsheet.
To simplify the problem, we focus on the case of a single string (the case $`N=1`$) for the time being. To see the implication of nonzero electric flux, let us write down the relevant terms in the effective action below;
$$S=\frac{k}{4\pi }\int _\mathrm{\Sigma }d^2x\left(\partial _\mu \varphi \partial ^\mu \varphi +\frac{1}{2}e^{-2\varphi }f^{\mu \nu }f_{\mu \nu }\right)+\mathrm{}$$
(3.1)
Here the “Liouville field” $`\varphi `$ is defined as $`y=e^\varphi `$ and the worldsheet $`\mathrm{\Sigma }`$ is a cylinder or the two-punctured sphere. We can quantize the system canonically in the $`A_0=0`$ gauge. The only dynamical variable is the Wilson line $`U=\mathrm{exp}\left(i{\displaystyle \oint A_1𝑑x}\right)`$. $`\psi (U)\propto U^n`$ is the eigenfunction of the electric field strength $`\mathrm{\Pi }\equiv e^{-2\varphi }E`$, which is the canonical conjugate of the Wilson line. Note that the eigenvalue is quantized due to the periodicity of the Wilson line. When there is nonzero electric flux, it yields the following contribution to the effective action (here we move to the Euclidean signature):
$$S\propto \int _\mathrm{\Sigma }d^2x\,e^{2\varphi _0}n^2,\qquad n\in 𝐙$$
(3.2)
To minimize the action, we must have $`\varphi _0\rightarrow -\mathrm{\infty }`$ (or $`y=0`$) if there is nonzero electric flux. On the contrary, $`\varphi _0`$ is arbitrary if there is no electric flux. Hence we come to the conclusion that the string is forced to live near the source NS$`5`$-branes if there is nonzero electric flux on the worldsheet.
In the present situation, the NS$`5`$-branes are located at the origin, $`y=0`$. If we T-dualize the system, the parametrization of the radial direction is reversed, namely, $`y=0`$ and $`y=\mathrm{\infty }`$ are interchanged.
We would like to see the correspondence between the effective action of Matrix string theory and the action of superstring on $`AdS_3\times S^3`$ in some detail. To do this, we shall first move to the Euclidean worldsheet. As was obtained in the previous section, the effective field theory has the following “Liouville part”,
$$S=\frac{1}{8\pi }\int _\mathrm{\Sigma }d^2x\left(\partial \phi \overline{\partial }\phi +e^{-\sqrt{\frac{2}{k}}\phi }f_{\mu \nu }f_{\mu \nu }\right),$$
(3.3)
where $`\phi =\sqrt{2k}\varphi `$ up to a constant shift. To ensure the conformal invariance, we must add the background charge term,
$$\frac{Q}{8\pi }\int _\mathrm{\Sigma }\phi R_{(2)},$$
to the above action. The background charge $`Q`$ must be determined such that the term $`e^{-\sqrt{\frac{2}{k}}\phi }f_{\mu \nu }f_{\mu \nu }`$ has the correct conformal dimension. Then the effective action is given by
$`S`$ $`=`$ $`(\text{super Liouville theory with the background charge }Q)`$ (3.4)
$`+(SU(2)\text{ SWZW with level }k)`$
$`+(\text{SCFT on }𝐑^4)`$
This theory has the central charge
$$c_{\mathrm{total}}=\left(1+3Q^2+\frac{1}{2}\right)+\left(\frac{3(k-2)}{k}+\frac{3}{2}\right)+6$$
(3.5)
Below we show that one must choose $`Q`$ differently for long and short strings. This matches the observation of that the CFTs on the short and long strings describe, respectively, the Coulomb branch and the Higgs branch near the small instanton singularity.
1. Short String ($`n=0`$)
In this sector it is appropriate to assign the canonical dimension $`0`$ to the gauge field $`A_\mu `$. The background charge is fixed by this condition as $`Q=\sqrt{\frac{2}{k}}`$. The central charge becomes $`c=12`$.
It is easy to see the correspondence of this CFT and the world-sheet theory of short string on $`AdS_3`$ background. The worldsheet action of a fundamental string on $`AdS_3`$ is given by ;
$$S=\frac{1}{8\pi }\int _\mathrm{\Sigma }d^2x\left(\partial \phi \overline{\partial }\phi +e^{\sqrt{\frac{2}{k}}\phi }\overline{\partial }\gamma \partial \overline{\gamma }\right),$$
(3.6)
or equivalently, by introducing the auxiliary fields $`\beta `$ and $`\overline{\beta }`$ we have;
$$S=\frac{1}{8\pi }\int _\mathrm{\Sigma }d^2x\left(\partial \phi \overline{\partial }\phi -e^{-\sqrt{\frac{2}{k}}\phi }\beta \overline{\beta }+\beta \overline{\partial }\gamma +\overline{\beta }\partial \overline{\gamma }\right).$$
(3.7)
The short string theory on $`AdS_3`$ is defined on an arbitrary compact Riemann surface, because it should describe arbitrary propagation and interaction of closed strings in the bulk. The $`(\beta ,\gamma )`$-system should have the conformal dimensions $`(1,0)`$ as in the Wakimoto representation. From these facts we see that the operator $`{\displaystyle \oint \gamma ^{-1}\partial \gamma }`$ cannot take a nonzero classical value. The RNS superstring action on the $`AdS_3`$ background is obtained as the supersymmetric extension of the above action. In quantizing the system, we can choose the light-cone gauge $`\gamma \propto z`$, $`\psi ^+=0`$ ($`z`$ is a holomorphic coordinate on $`\mathrm{\Sigma }`$ and $`\psi ^+`$ is the fermionic coordinate along the longitudinal direction) of and eliminate the longitudinal degrees of freedom. After this gauge fixing, we find that the CFT of the transversal degrees of freedom coincides with the one we have obtained from the gauge theory analysis, including the value of the background charge of the Liouville part.
Here we make an important remark. The gauge condition $`\gamma \propto z`$ is quite different from a nonzero classical value of $`{\displaystyle \oint \gamma ^{-1}\partial \gamma }`$. One must not confuse these two distinct notions. If we impose the light-cone gauge condition on a string theory defined on a generic Riemann surface, we should impose it on each of the local coordinate patches. Even in flat background the light-cone gauge cannot be imposed globally unless we take a cylindrical worldsheet.
2. Long String $`(n\neq 0)`$
As we have seen above, in the case of $`n\neq 0`$, the worldsheet of the string must be near the NS$`5`$-branes, $`y\simeq 0`$. The nonzero electric flux $`\mathrm{\Pi }\equiv e^{-\sqrt{\frac{2}{k}}\phi }E`$ also indicates that the gauge fields $`A_z`$ and $`\overline{A}_{\overline{z}}`$ should have the “geometric dimensions” $`(1,0)`$ and $`(0,1)`$, and the operator $`e^{-\sqrt{\frac{2}{k}}\phi }`$ should have the conformal weight $`(1,1)`$. This leads to the Liouville background charge $`Q=\sqrt{2k}-\sqrt{{\displaystyle \frac{2}{k}}}`$. The central charge now becomes $`c=6k`$.
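Both assignments of $`Q`$ can be checked against (3.5) in a few lines; the following sympy sketch merely automates the arithmetic:

```python
import sympy as sp

k = sp.symbols('k', positive=True)

def c_total(Q):
    # Eq. (3.5): (super Liouville + fermion) + (SU(2) SWZW) + (SCFT on R^4)
    return (1 + 3 * Q**2 + sp.Rational(1, 2)) + (3 * (k - 2) / k + sp.Rational(3, 2)) + 6

print(sp.simplify(c_total(sp.sqrt(2 / k))))                     # short string: 12
print(sp.simplify(c_total(sp.sqrt(2 * k) - sp.sqrt(2 / k))))    # long string: 6*k
```

The short string value $`c=12`$ is that of a critical superstring in light-cone gauge, while $`c=6k`$ matches the boundary superconformal algebra of a single long string.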
This sector is identical to the CFT on the “long string” in the light-cone gauge, which is discussed in . As was discussed in these papers, we can bosonize the eight free fermions in this theory to define eight spin fields, which are fermions in the sense of the worldsheet as well. Using these spin fields we can construct an $`𝒩=(4,4)`$ superconformal algebra with $`c=6k`$. It is the sum of the two superconformal algebras arising from the $`𝐑^4`$ part and the remaining part, with $`c=6`$ and $`c=6(k-1)`$, respectively.
Now we would like to compare our CFT with the CFT on the long string worldsheet. Recall that Matrix string theory describes the IIA strings and the (short or long) strings on $`AdS_3\times S^3`$ are the objects in IIB string theory. So as to make a comparison we have to apply T-duality to the two-dimensional field theory.
Let us start from IIB side. The relevant part of the world-sheet action is given by (3.6), which we rewrite with $`\gamma =\gamma ^0+i\gamma ^1`$. Of course $`\gamma ^0`$, $`\gamma ^1`$ are the time and spatial coordinates parameterizing the boundary of Euclidean $`AdS_3`$. We can partially gauge-fix the conformal symmetry by the condition $`\gamma ^0=t`$, and obtain
$$S=\frac{1}{8\pi }\int _\mathrm{\Sigma }d^2x\left\{\partial \phi \overline{\partial }\phi +e^{\sqrt{\frac{2}{k}}\phi }\left((\partial _t\gamma ^1)^2+(\partial _x\gamma ^1)^2+2\partial _x\gamma ^1\right)\right\},$$
(3.8)
We can T-dualize this action according to the standard procedure (see, for example, ), with respect to the $`U(1)`$ isometry $`\gamma ^1\rightarrow \gamma ^1+\alpha `$:
$$S=\frac{1}{8\pi }\int _\mathrm{\Sigma }d^2x\left\{\partial \phi \overline{\partial }\phi +e^{-\sqrt{\frac{2}{k}}\phi }\left((\partial _t\stackrel{~}{\gamma }^1)^2+(\partial _x\stackrel{~}{\gamma }^1)^2\right)+2i\partial _t\stackrel{~}{\gamma }^1\right\},$$
(3.9)
where $`\stackrel{~}{\gamma }^1`$ denotes the dual coordinate of $`\gamma ^1`$.
On the other hand, our effective action of Matrix string theory(3.3) can be rewritten, if a suitable source term ensuring nonzero electric flux on the worldsheet is added, as follows;
$`S`$ $`=`$ $`{\displaystyle \frac{1}{8\pi }}{\displaystyle \int _\mathrm{\Sigma }}d^2x\left\{\partial \phi \overline{\partial }\phi +e^{-\sqrt{\frac{2}{k}}\phi }\left((\partial _tA_1)^2+(\partial _xA_1)^2\right)\right\}+2i\left({\displaystyle \int _{t=+\mathrm{\infty }}}-{\displaystyle \int _{t=-\mathrm{\infty }}}\right)A_1𝑑x`$ (3.10)
$`=`$ $`{\displaystyle \frac{1}{8\pi }}{\displaystyle \int _\mathrm{\Sigma }}d^2x\left\{\partial \phi \overline{\partial }\phi +e^{-\sqrt{\frac{2}{k}}\phi }\left((\partial _tA_1)^2+(\partial _xA_1)^2\right)+2i\partial _tA_1\right\}`$
where we have chosen the $`A_0=0`$ gauge, and added a gauge-fixing term with respect to the residual gauge symmetry. Action (3.10) is clearly the same as (3.9) if we make the identification $`\stackrel{~}{\gamma }^1=A_1`$.
Some comments are in order. First of all, long strings have nonzero winding number $`{\displaystyle \oint \gamma ^{-1}𝑑\gamma }=1`$, which implies that $`\gamma `$ should be of conformal dimension $`-1`$. This assignment of conformal weight forces us to impose the condition $`\gamma \propto z`$ globally. This is possible only when the worldsheet is cylindrical. Hence long strings must have cylindrical worldsheets, as was mentioned previously. Note also that this assignment of conformal weight is consistent with the interpretation that $`\gamma `$ and $`A_\mu `$ are dual to each other.
Moreover, the term $`2e^{\sqrt{\frac{2}{k}}\phi }\partial _x\gamma ^1`$ in (3.8) plays the role of a source term for the winding number of $`\gamma `$, while the corresponding term $`2i\partial _tA_1`$ in the dual action (3.10) is none other than the source term for the electric field $`\mathrm{\Pi }\equiv e^{-\sqrt{\frac{2}{k}}\phi }E`$. Therefore we can say that the nonzero winding number of $`\gamma `$ on the IIB side precisely corresponds to nonzero electric flux on the IIA Matrix string theory side.
In the above argument we can see explicitly, as stated previously, the interchange of $`y=0`$ and $`y=\mathrm{\infty }`$ under the T-duality transformation. Hence a long string, which is forced to live near $`y\simeq 0`$ in the IIA picture, is at $`y=+\mathrm{\infty }`$ in the IIB picture.
The argument given above was based on a proposal of that we should assign the canonical dimension $`0`$ to the gauge field $`A_\mu `$ in the Coulomb branch CFT, while we should assign the geometrical dimension $`1`$ in the Higgs branch. Although originally we have been analyzing the Coulomb branch, the nonzero electric flux requires the redefinition of the conformal dimension of the gauge field, which then leads us to the physics of the Higgs branch. At the same time the object becomes a long string, whose dynamics suitably describes the Higgs branch CFT near the small instanton singularity, as was discussed in .
## 4 Twisted Sectors
Matrix string theory contains various sectors in which some fundamental strings are glued together. The field contents on the worldsheet of glued strings are the same as those of a single string, so we can determine whether a glued string is long or short in the same way as for a single string. A generic sector of Matrix string theory corresponds to a set of glued strings, each of which has an arbitrary length. Some of them are long, i.e. there is nonzero electric flux on their worldsheets. The sum of the lengths of the long strings is the NS$`1`$-brane charge $`p`$, which is fixed. By charge conservation, two strings can be glued together only when there is equal electric flux on the two worldsheets. In particular, a long string and a short string cannot be glued together. Then we are led to the following conformal field theory for this system: $`Sym^{N-p}(\mathcal{M}_{\text{short}})\times Sym^p(\mathcal{M}_{\text{long}})`$, where $`\mathcal{M}_{\text{short}}`$ ($`\mathcal{M}_{\text{long}}`$) stands for the theory of a single short (long) string, as described in the previous section. Here we mean by $`Sym^N(𝒞)`$ the $`N`$-th symmetric product of a conformal field theory $`𝒞`$. This operation is defined in a similar manner as the sigma model on a symmetric orbifold $`𝒞^N/S_N`$ is defined from the sigma model on $`𝒞`$.
Matrix string theory contains various twisted sectors describing many glued strings. So we expect that all of the connected and disconnected worldsheets having arbitrary genera can be reproduced in the large $`N`$-limit. Moreover, we further expect that the integration over the moduli of the worldsheets corresponds precisely to the large $`N`$ limit of Matrix string theory. This was conjectured in , and more detailed analyses were given in based on the interpretation of the BPS instantons in Matrix string theory as several plane curves. Although the analyses of were done in the flat background, this remarkable claim may be true even in the presence of NS$`5`$-branes. In this way we might be able to recover the full second quantized IIB superstring theory on $`AdS_3\times S^3\times 𝐑^4`$ for the short string sector in the large $`N`$-limit.
How about the long string sector? Since the NS$`1`$-brane charge $`p`$ is fixed, we have at most a $`p`$-glued long string sector. This means that we cannot consider a worldsheet of a long string with arbitrarily large genus (although we assume $`p\gg 1`$). Furthermore, the moduli of the worldsheet should be frozen, because the worldsheet of a long string must be cylindrical, lying near the boundary of $`AdS_3`$ in the IIB picture. Therefore it is more appropriate to regard the CFT of the long string sector as the boundary CFT itself, rather than as the theory of gravity or string theory in bulk $`AdS_3`$.
In our previous paper we treated the first quantized RNS superstring on $`AdS_3\times S^3\times T^4`$ with the covariant formalism. We constructed a BRST invariant state that corresponds to the vacuum of the spacetime SCFT for the case of $`p=1`$. As for $`p>1`$ no such state was found. That observation fits nicely with the results obtained in this paper.
We can see that if there were a single string carrying a winding charge greater than $`1`$, it would lead to an inconsistency. In fact, the assumption $`{\displaystyle \oint \gamma ^{-1}𝑑\gamma }=p`$, or $`\gamma \propto z^p`$, implies that $`(\beta ,\gamma )`$ have conformal weights $`(p+1,-p)`$, hence we have to improve the stress tensor as follows:
$$T_{\mathrm{total}}=T_{\mathrm{worldsheet}}+p\,\partial J^3.$$
(4.1)
This stress tensor would give rise to the central charge $`c=6kp^2`$, if the worldsheet theory were originally a critical string theory. Hence this naive argument cannot explain the correct central charge unless $`|p|=1`$.
In addition, our previous arguments have shown that the shift in the conformal weight of the gauge field $`A`$ due to the electric flux is always $`1`$, no matter how large the flux is. In the IIB picture the conformal weight of $`\gamma `$ is always $`-1`$ when there is a nonzero electric flux in the IIA picture, regardless of how large it is. So the winding number of a single long string is always $`1`$.
So, the winding charge $`p>1`$ must be carried by some glued long strings. They must be composed of precisely $`p`$ single long strings, which leads to the CFT $`Sym^p(\mathcal{M}_{\text{long}})`$ with the correct central charge $`c=6kp`$. We believe that Matrix string theory is the most natural framework to explain why such an orbifold CFT arises.
## 5 Spectrum in the Long String Sector
In this section we discuss the spectrum of chiral primaries in the long string sector. As was given in the previous section, the CFT of the long string sector is $`Sym^p(\mathcal{M}_{\text{long}})`$, where $`\mathcal{M}_{\text{long}}`$ is the CFT of a single long string. $`\mathcal{M}_{\text{long}}`$ is the product of three CFT’s, namely, a super Liouville theory with $`Q=\sqrt{2k}-\sqrt{{\displaystyle \frac{2}{k}}}`$, an SWZW theory with level $`k`$, and the superconformal sigma model on $`M^4`$. (Previously we assumed that $`M^4=𝐑^4`$. In this section we shall somewhat generalize the situation and consider $`M^4`$ to be one of $`𝐑^4,T^4`$ or $`K3`$. Of course, as we have mentioned previously, if we consider this generalized choice of the internal manifold at the stage of Matrix string theory, there arise the subtleties concerning the compactification of Matrix theory on a higher-dimensional torus. But in this section we shall put aside these difficulties and simply replace $`𝐑^4`$ by $`T^4`$ or $`K3`$.) The CFT on $`M^4`$ and the remaining part have $`𝒩=(4,4)`$ superconformal algebras with the central charges $`c=6`$ and $`c=6(k-1)`$, respectively. According to this product structure, any chiral primary of $`\mathcal{M}_{\text{long}}`$ can be written as the product of a chiral primary of the $`M^4`$ part and a chiral primary of the remaining part. We denote them as follows:
$$𝒪(\omega ,l)=𝒪_\omega \otimes 𝒪_l,\qquad (\omega \in H^{*}(M^4),\;l=0,\mathrm{},k-2).$$
(5.1)
Here $`𝒪_\omega `$ is one of the chiral primaries of the $`M^4`$ part. They are labeled by the cohomology element, $`\omega `$, of $`M^4`$. $`𝒪_l`$ is a chiral primary of the remaining part, which has the following form;
$$𝒪_l\stackrel{\mathrm{def}}{=}e^{l\sqrt{\frac{2}{k}}\phi }V_l,\qquad (l=0,\mathrm{},k-2).$$
(5.2)
where $`V_l`$ is the highest weight operator in bosonic $`SU(2)_{k2}`$ WZW theory with spin $`l`$, characterized by the following OPE with the bosonic $`SU(2)`$ currents
$$\{\begin{array}{ccc}k^3(z)V_l(0)\hfill & \sim \hfill & \frac{l}{2z}V_l(0)\hfill \\ k^+(z)V_l(0)\hfill & \sim \hfill & 0\hfill \end{array}$$
(5.3)
The chiral primary $`𝒪(\omega ,l)`$ has the following quantum numbers:
$$h=j=\frac{q(\omega )+l}{2},\qquad \overline{h}=\overline{j}=\frac{\overline{q}(\omega )+l}{2},\qquad (\omega \in H^{q(\omega ),\overline{q}(\omega )}(M^4)).$$
(5.4)
Alternatively, we can consider the corresponding Ramond vacuum $`|\omega ,l\rangle `$ that is obtained by spectral flow:
$$\begin{array}{cc}L_0|\omega ,l\rangle =\frac{k}{4}|\omega ,l\rangle \hfill & K_0^3|\omega ,l\rangle =\left(\frac{q(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l\rangle ,\hfill \\ \overline{L}_0|\omega ,l\rangle =\frac{k}{4}|\omega ,l\rangle \hfill & \overline{K}_0^3|\omega ,l\rangle =\left(\frac{\overline{q}(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l\rangle .\hfill \end{array}$$
(5.5)
Now let us turn to the analysis of the whole long string sector, $`Sym^p(\mathcal{M}_{\text{long}})`$. First of all, the spectrum for the untwisted sector is essentially the same as that for the single long string $`\mathcal{M}_{\text{long}}`$. We again exhibit it as the spectrum of Ramond vacua:
$$\begin{array}{cc}L_0|\omega ,l;(0)\rangle =\frac{k}{4}|\omega ,l;(0)\rangle \hfill & K_0^3|\omega ,l;(0)\rangle =\left(\frac{q(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(0)\rangle ,\hfill \\ \overline{L}_0|\omega ,l;(0)\rangle =\frac{k}{4}|\omega ,l;(0)\rangle \hfill & \overline{K}_0^3|\omega ,l;(0)\rangle =\left(\frac{\overline{q}(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(0)\rangle .\hfill \end{array}$$
(5.6)
The analyses for the twisted sectors are more difficult. In general they are classified by Young tableaux $`(n_1,\mathrm{},n_s)`$ composed of $`p`$ boxes ($`n_1\ge \mathrm{}\ge n_s>0`$, $`{\displaystyle \sum _{i=1}^{s}}n_i=p`$). The sector labeled by a tableau $`(n_1,\mathrm{},n_s)`$ can be decomposed into a set of $`\text{Z}_{n_i}`$-twisted sectors. In other words, it can be regarded as composed of $`s`$ glued strings of lengths $`(n_1,\mathrm{},n_s)`$. If we are interested in single-particle states, we only have to consider the $`\text{Z}_m`$-twisted sector, i.e. the sector of a single glued string of length $`m`$.
The superconformal algebra (SCA) suitably acting on the Hilbert space of the $`\text{Z}_m`$-twisted sector is constructed in the following manner (see for example ): First we make up the SCA $`\{L_n^{\prime },G_n^{\prime \alpha a},K_n^{\prime \alpha \beta }\}`$ of the glued string variables as in the case of the single long string $`\mathcal{M}_{\mathrm{long}}`$ with $`c=6k`$. Second, we must mod out by the $`\text{Z}_m`$-action, and get the desired SCA $`\widehat{𝒜}`$ with $`c=6mk`$. More explicitly, we should define the superconformal generators (in the Ramond sector) for the $`\text{Z}_m`$-twisted sector as follows. (The situation becomes more complicated for the SCA of the NS sector, since we must define the generators separately according to whether $`m`$ is even or odd. This is the main reason why we work here in the Ramond sector and make use of the spectral flow instead of dealing with the NS sector directly.)
$$\begin{array}{ccc}\widehat{L}_n\hfill & \stackrel{\mathrm{def}}{=}\hfill & \frac{1}{m}L_{mn}^{\prime }+\frac{k}{4}\left(m-\frac{1}{m}\right)\delta _{n0}\hfill \\ \widehat{K}_n^{\alpha \beta }\hfill & \stackrel{\mathrm{def}}{=}\hfill & K_{mn}^{\prime \alpha \beta }\hfill \\ \widehat{G}_n^{\alpha a}\hfill & \stackrel{\mathrm{def}}{=}\hfill & \frac{1}{\sqrt{m}}G_{mn}^{\prime \alpha a}.\hfill \end{array}$$
(5.7)
In this expression for $`\widehat{L}_n`$ the second term of the RHS corresponds to the contribution from the Schwarzian derivative of the covering map $`w=z^m`$.
We can now present the complete spectrum of chiral primaries for the $`\text{Z}_m`$-twisted sector. The Ramond vacua in this sector should have the same spectrum of weight and $`R`$-charge as those in the CFT of a single long string:
$$\begin{array}{cc}L_0^{\prime }|\omega ,l;(m)\rangle =\frac{k}{4}|\omega ,l;(m)\rangle \hfill & K_0^{3\prime }|\omega ,l;(m)\rangle =\left(\frac{q(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(m)\rangle ,\hfill \\ \overline{L}_0^{\prime }|\omega ,l;(m)\rangle =\frac{k}{4}|\omega ,l;(m)\rangle \hfill & \overline{K}_0^{3\prime }|\omega ,l;(m)\rangle =\left(\frac{\overline{q}(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(m)\rangle .\hfill \end{array}$$
(5.8)
Hence, by the definitions(5.7) we obtain the following relations:
$$\begin{array}{c}\widehat{L}_0|\omega ,l;(m)\rangle =\left\{\frac{k}{4m}+\left(\frac{km}{4}-\frac{k}{4m}\right)\right\}|\omega ,l;(m)\rangle =\frac{km}{4}|\omega ,l;(m)\rangle \hfill \\ \widehat{K}_0^3|\omega ,l;(m)\rangle =\left(\frac{q(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(m)\rangle ,\hfill \\ \overline{\widehat{L}}_0|\omega ,l;(m)\rangle =\frac{km}{4}|\omega ,l;(m)\rangle ,\qquad \overline{\widehat{K}}_0^3|\omega ,l;(m)\rangle =\left(\frac{\overline{q}(\omega )}{2}+\frac{l}{2}-\frac{k}{2}\right)|\omega ,l;(m)\rangle .\hfill \end{array}$$
(5.9)
Finally we translate the above relations in the Ramond sector to those in NS sector by means of the spectral flow. As a result, the chiral primaries in $`\text{Z}_m`$-twisted sector $`𝒪(\omega ,l;(m))`$ have the following quantum numbers:
$$\begin{array}{c}h=j=\frac{q(\omega )+l+k(m-1)}{2},\qquad \overline{h}=\overline{j}=\frac{\overline{q}(\omega )+l+k(m-1)}{2}\\ (l=0,\mathrm{},k-2,\;m=1,\mathrm{},p)\end{array}$$
(5.10)
In this way we can reproduce almost all the chiral primaries of single-particle type that are expected from the correspondence with the SCFT on $`Sym^{pk}(M^4)`$, except for the absence of the following sequence of states:
$$h=j=\frac{q(\omega )+km-1}{2},\qquad \overline{h}=\overline{j}=\frac{\overline{q}(\omega )+km-1}{2}\qquad (m=1,\mathrm{},p).$$
(5.11)
Notice that the case $`m=1`$ in (5.11) is none other than the “first missing state” discussed in . There it was argued that the absence of this state is related to the small-instanton singularity of the moduli space of $`D1/D5`$-brane bound states. We find its cousins in the $`\text{Z}_m`$-twisted sectors with $`m>1`$. Although we still have missing states in the framework of Matrix string theory, there are “not too many” of them. In particular, the bound $`{\displaystyle \frac{pk}{2}}`$ for the R-charge expected from the relation to the $`Sym^{pk}(M^4)`$ sigma model is correctly reproduced.
In it was also pointed out that, in the single long string theory, a continuous spectrum appears above the threshold conformal weight $`\mathrm{\Delta }_0={\displaystyle \frac{(k-1)^2}{4k}}\simeq {\displaystyle \frac{k}{4}}`$, which would make the counting of the chiral primaries above the threshold difficult. This result is based on the study of Liouville theory, in which it is claimed that there are two different types of states in quantum Liouville theory: one type consists of non-normalizable states, which form a discrete spectrum; the other consists of normalizable states, which form a continuous spectrum. The latter appears above the threshold $`\mathrm{\Delta }_0\equiv {\displaystyle \frac{Q^2}{8}}`$, where $`Q`$ is the background charge of the Liouville theory. Here we shall show that the threshold value also becomes $`p`$ times larger when we consider the CFT of $`p`$ long strings, $`Sym^p(\mathcal{M}_{\text{long}})`$.
It is easy to see this in the untwisted sector. We only have to find the normalizable state with the smallest conformal weight in this sector. Obviously this state has the form $`|\mathrm{\Delta }_0\rangle ^{\otimes p}`$, where $`|\mathrm{\Delta }_0\rangle `$ is the normalizable state with the lowest energy in the single long string theory:
$$L_0|\mathrm{\Delta }_0\rangle \simeq \frac{k}{4}|\mathrm{\Delta }_0\rangle .$$
(5.12)
Therefore the threshold for the untwisted sector is equal to $`\widehat{\mathrm{\Delta }}_0\simeq {\displaystyle \frac{pk}{4}}`$.
In order to calculate the threshold value for general twisted sectors we have to recall the definition of the Virasoro operator in the $`\text{Z}_m`$-twisted sector, Eq. (5.7). Then the threshold for the $`\text{Z}_m`$-twisted sector becomes:
$$\widehat{\mathrm{\Delta }}_0\simeq \frac{k}{4m}+\frac{k}{4}\left(m-\frac{1}{m}\right)=\frac{mk}{4}.$$
(5.13)
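Both (5.9) and (5.13) rest on the same piece of arithmetic from the definition (5.7), which can be checked symbolically:

```python
import sympy as sp

k, m = sp.symbols('k m', positive=True)
# hat L_0 = (1/m) L'_0 + (k/4)(m - 1/m); insert L'_0 = k/4, the single-string
# value both of the Ramond vacua (5.8) and of the threshold state (5.12)
L_hat0 = (k / 4) / m + (k / 4) * (m - 1 / m)
print(sp.simplify(L_hat0))   # k*m/4, reproducing (5.9) and (5.13)
```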
From these results we can conclude that the threshold value is the same for all the twisted sectors characterized by arbitrary Young tableaux $`(n_1,\mathrm{},n_s)\in Y_p`$:
$$\widehat{\mathrm{\Delta }}_0=\sum _{i=1}^{s}\frac{n_ik}{4}=\frac{pk}{4}.$$
(5.14)
We make one comment. It is known that the spectrum of chiral primaries in the $`Sym^{pk}(M^4)`$ sigma model agrees with that of supergravity below the bound $`{\displaystyle \frac{pk}{4}}`$. This claim was first proved in the case of $`M^4=K3`$ by analyzing the elliptic genus, and more recently it has been proved in the case of $`M^4=T^4`$ by analyzing the “new SUSY index”. The value of the upper bound is the same, $`{\displaystyle \frac{pk}{4}}`$. Although the origins of the bound are apparently different between those analyses and ours, there might be some relationship between the two. This is because supergravity on an $`AdS_3\times S^3\times M^4`$ background has only a discrete spectrum, so that the emergence of a continuous spectrum may lead to a failure of the correspondence between the supergravity description and the boundary SCFT.
## 6 Conclusions and Discussions
In this paper we studied the multi-string system on $`AdS_3\times S^3\times M^4`$ in the framework of Matrix string theory. Among other things, we have presented a unified framework for the various short and long string sectors. Although the gauge field in two dimensions has no local physical degrees of freedom, it plays an extremely important role; namely, the VEV of the electric field on the worldsheet distinguishes between the short and long string sectors.
Now we would like to mention some questions which could be discussed in future works.
In section 3 we have shown the T-duality between the Matrix string action and the $`AdS_3\times S^3`$ string action, as far as their bosonic parts are concerned. It would be interesting to investigate further and establish the full T-duality. Of course, some fermions are missing on the gauge-theory side (they would correspond to the superpartners of the bosonic $`SL(2,R)`$ currents in the superstring theory). Perhaps this aspect will be better understood when we come to a deeper understanding of the relation between the $`U(1)`$ gauge symmetry of Matrix string theory and the reparametrization invariance of the superstring theory. To this aim the formulation of Matrix theory in a manifestly covariant manner may be useful. Some approaches to this subject were given in .
In section 5 we discussed the problem of the missing chiral primaries through the analysis of the CFT of $`p`$ long strings. Although we still have some missing states, as was pointed out in , the correspondence with the $`Sym^{pk}(M^4)`$ sigma model is not “too bad”. In particular, by taking account of the various glued strings we have reproduced the bound $`{\displaystyle \frac{pk}{2}}`$, which is precisely the same as expected from the analysis of the $`Sym^{pk}(M^4)`$ sigma model.
However, there seems to be a further subtlety regarding the spectrum of the multi-particle chiral primaries, for our construction seems to yield a spectrum interpretable as the cohomology of $`Sym^p(Sym^k(M^4))`$, not of $`Sym^{pk}(M^4)`$. In other words, there should be a U-duality transformation which changes the values of $`k`$ and $`p`$ while preserving their product. At present it is not clear whether our construction in the framework of Matrix string theory has this duality symmetry. In any case, we will need further study to reach a complete understanding of this issue.
The problem of the threshold for the continuous spectrum is also interesting. We have shown that our analysis of the CFT of $`p`$ long strings yields a threshold $`p`$ times larger, $`{\displaystyle \frac{kp}{4}}`$. This is the same as the threshold obtained in . It was discussed there that the correspondence between the supergravity in the bulk and the boundary SCFT fails above this threshold. It may be interesting to study the relation between this issue and the physics of three-dimensional black holes, as the above threshold coincides with the energy at which the first excited BTZ black hole (the massless BTZ black hole) appears.
Acknowledgment
We would like to thank T. Eguchi for discussions and useful comments. We would also like to thank B.D. Bates for careful reading of the manuscript. K.H. is supported in part by JSPS Research Fellowships, and Y.S. is supported in part by the Grant-in-Aid from the Ministry of Education, Science and Culture, Priority Area: “Supersymmetry and Unified Theory of Elementary Particles” (#707).
# West Indies or Antarctica — Direct CP Violation in B Decays (based on a talk given at DPF99, UCLA, Jan. 1999, reporting on work done in collaboration with N.G. Deshpande, X.G. He, S. Pakvasa and K.C. Yang)
## I Introduction
Our title clearly alludes to the story of Columbus landing in what he called the “West Indies”, which later on turned out to be part of the “New World”. I have substituted Antarctica in place of the “New World”, following a quip from Frank Paige after he realized that I was talking all the time about penguins. At the end of the Millennium, we are indeed on another Discovery Voyage. We are at the dawn of observing CP violation in the B system. The stage is the emerging penguins. Well, had Columbus seen penguins in his “West Indies”, he probably would have known he was onto something really new.
The EM penguin (EMP) $`B\rightarrow K^{*}\gamma `$ (and later, $`b\rightarrow s\gamma `$) was first observed by CLEO in 1993. Alas, it looked and walked pretty much according to the Standard Model (SM), and the agreement between theory and experiment on rates is quite good. Perhaps the study of CP asymmetries ($`a_{\mathrm{CP}}`$) could reveal whether SM holds fully.
The strong penguins (P) burst on the scene in 1997, and by now the CLEO Collaboration has observed of order 10 exclusive modes, as well as the surprisingly large inclusive $`B\rightarrow \eta ^{\prime }+X_s`$ mode. The $`\eta ^{\prime }K^+`$, $`\eta ^{\prime }K^0`$ and $`K^+\pi ^{-}`$ modes are rather robust, but the $`K^0\pi ^+`$ and $`K^+\pi ^0`$ rates shifted when CLEO II data were recalibrated in 1998 and part of the CLEO II.V data were included. The $`\omega K^+`$ and $`\omega \pi ^+`$ modes are still being reanalyzed. The nonobservation, so far, of the $`\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^0`$ and $`\varphi K^+`$ modes is also rather stringent. The observation of the $`\rho ^0\pi ^+`$ mode was announced in January this year, while the observation of the $`\rho ^\pm \pi ^{\mp }`$ and $`K^{*+}\pi ^{-}`$ modes was announced in March. CLEO II.V data taking ended in February. With 10 million or so each of charged and neutral B’s, new results are expected by summer and certainly by winter. Perhaps the first observation of direct CP violation could be reported soon.
With BELLE and BABAR turning on in May, together with the CLEO III detector upgrade — all with $`K/\pi `$ separation (PID) capability! — we have a three-way race for detecting and eventually disentangling direct CP violation in charmless B decays. We expect that, during 1999–2002, the number of observed modes may increase to a few dozen, while the events per mode may increase from 10–70 to $`10^2`$–$`10^3`$ events for some modes, and the sensitivity to direct CP asymmetries would go from the present level of order 30% down to 10% or so. It should be realized that the modes that are already observed ($`b\rightarrow s`$) should be the most sensitive probes.
Our first theme is therefore: Is large $`a_{\mathrm{CP}}`$ possible in $`b\rightarrow s`$ processes? and, If so, whither New Physics? However, as an antidote against the rush into the brave New World, we point out that the three observed $`K\pi `$ modes may indicate that the “West Indies” interpretation is still correct so far. Our second subject would hence be: Whither EWP? Now!? That is, we will argue for the intriguing possibility that perhaps we already have some indication for the electroweak penguin (EWP).
It is clear that 1999 would be an exciting landmark year in B physics. So, work hard and come party at the end of the year/century/millennium celebration called “Third International Conference on B Physics and CP Violation”, held December 3-7 in Taipei .
## II Is Large CP Violation Possible/Whither New Physics?
We shall motivate the physics and give some results that have not been presented before, but refer to more detailed discussions that can be found elsewhere .
Our interests were stirred by a rumor in 1997 that CLEO had a very large $`a_{\mathrm{CP}}`$ in the $`K^+\pi ^{-}`$ mode. The question was: how to get large $`a_{\mathrm{CP}}`$? With the short distance (Bander–Silverman–Soni) rescattering phase from the penguin, the CP asymmetry could reach its maximum of order 10% around the presently preferred $`\gamma \simeq 64^{\circ }`$. Final state $`K\pi \rightarrow K\pi `$ rescattering phases could bring this up to 30% or so, and would hence mask New Physics. But a 50% asymmetry seems difficult. New Physics asymmetries in the $`b\rightarrow s\gamma `$ process and the $`B\rightarrow \eta ^{\prime }+X_s`$ process are typically of order 10%, whereas asymmetries for penguin-dominant $`b\rightarrow s`$ transitions are expected to be no more than 1%.
The answer to the above challenge is to hit SM at its weakest!
* Weak Spot of Penguin: Dipole Transition
$`F_1(q^2\gamma _\mu -q_\mu \not{q})L+\underset{¯}{\underset{¯}{F_2}}\,i\sigma _{\mu \nu }q^\nu m_bR`$
Note that these two terms are of the same order in the $`q/M_W`$ and $`m_b/M_W`$ expansion. The effective “charge” is $`F_1q^2`$, which vanishes when the $`\gamma `$ or $`g`$ goes on-shell; hence only the $`F_2`$ dipole enters $`b\rightarrow s\gamma `$ and $`b\rightarrow sg`$ transitions. It is an SM quirk due to the GIM mechanism that $`|F_1|\gg |F_2|`$ (the former becoming the $`c_3`$–$`c_6`$ coefficients in the usual operator formalism for the gluonic penguin). Hence one usually does not pay attention to the subdominant $`F_2^g`$, which goes into the variously called $`c_8`$, $`c_g`$, or $`c_{11}`$ coefficients. In particular, the $`b\rightarrow sg`$ rate in SM is only of order 0.2%. But if New Physics is present, having $`\delta F_2\sim \delta F_1`$ is natural, hence the gluonic dipole could get greatly enhanced. While subject to the $`b\rightarrow s\gamma `$ constraint, this could have great impact on the $`b\rightarrow sg^{*}\rightarrow sq\overline{q}`$ process.
* Blind Spot of Detector!
Because $`b\rightarrow sg`$ leads to jetty, high multiplicity $`b\rightarrow s`$ transitions
Hide easily in dominant $`b\rightarrow c\rightarrow s`$ sequence!
At present, 5–10% could still easily be allowed. The semileptonic branching ratio and charm counting deficits, and the strength of the $`B\rightarrow \eta ^{\prime }+X_s`$ rate, provide circumstantial hints that $`b\rightarrow sg`$ could be more than a few percent.
* Unconstrained new CP phase via $`b_R\rightarrow s_L`$
If enhanced by New Physics, $`F_2^g`$ is likely to carry a New Phase
Phase of $`b_R`$ not probed by $`V_{\mathrm{CKM}}!`$
However, one faces a severe constraint from $`b\rightarrow s\gamma `$. For example, it rules out the possibility of $`H^+`$ as the source of enhancement. But as Alex Kagan taught me at the last DPF meeting in Minnesota, the constraint can be evaded if one has sources that radiate $`g`$ but not $`\gamma `$.
* Uncharted territory of Nonuniversal Squark Masses
SUSY provides a natural possibility via gluino loops:
Need flavor violation in $`\stackrel{~}{d}_j`$
The simplest being a $`\stackrel{~}{s}`$–$`\stackrel{~}{b}`$ mixing model. Since the first-generation down squark is not involved, one evades all low energy constraints. This is a New Physics CP model tailor-made for $`b\rightarrow s`$ transitions.
With the aim of generating huge CP asymmetries, we can now take $`b\rightarrow sg\simeq 10\%`$ and study $`b\rightarrow sq\overline{q}`$ transitions at both the inclusive and exclusive level. In both we have used operator language. One needs to consider the tree diagram, which carries the CP phase $`\gamma \equiv \mathrm{arg}(V_{ub}^{*})`$; the standard penguin diagrams, which contain short-distance rescattering phases; the enhanced $`b\rightarrow sg`$ dipole (SUSY-loop-induced) diagram; and finally, diagrams containing $`q\overline{q}`$ loop insertions into the gluon self-energy, which are needed to maintain unitarity and consistency to order $`\alpha _S^2`$ in rate differences.
At the inclusive level, one finds a “$`b\rightarrow sg`$ pole” at low $`q^2`$ which reflects the jetty $`b\rightarrow sg`$ process that is experimentally hard to identify. Destructive interference is in general needed to keep the $`b\rightarrow sq\overline{q}`$ rate comparable to SM. But this precisely facilitates the generation of large $`a_{\mathrm{CP}}`$’s! More details such as figures can be found in . The dominant rate asymmetry comes from large $`q^2`$ of the virtual gluon. To illustrate this, Table I gives the inclusive BR (arbitrarily cut off at $`q^2=1\text{ GeV}^2`$) and $`a_{\mathrm{CP}}`$ for SM and for various values of the new CP phase $`\sigma `$, assuming a $`b\rightarrow sg`$ rate of order 10%. One obtains SM-like branching ratios for $`\sigma \simeq 145^{\circ }`$, where $`a_{\mathrm{CP}}`$ also seems to peak. This becomes clearer in Table II, where we give the results for $`q^2>4m_c^2`$, where $`c\overline{c}\rightarrow q\overline{q}`$ (perturbative) rescattering is fully open. We see that 20–30% asymmetries are achievable. This provides support for the findings in exclusive processes.
Exclusive two-body modes are much more problematic. Starting from the operator formalism as in the inclusive case, we set $`N_C=3`$, take $`q^2\simeq m_b^2/2`$ and try to fit the observed BRs with $`b\rightarrow sg\simeq 10\%`$. We then find the $`a_{\mathrm{CP}}`$ preferred by present rate data. One finds that, analogous to the inclusive case, destructive interference is needed, and it in fact provides a mechanism to suppress the pure penguin $`B\rightarrow \varphi K^+`$ mode to satisfy the CLEO bound. For the $`K^+\pi ^{-}`$ and $`K^0\pi ^+`$ modes, which are P-dominated, one utilizes the fact that the matrix element
$`O_6\propto {\displaystyle \frac{m_K^2(m_B^2-m_\pi ^2)}{(\underset{¯}{\underset{¯}{m_s}}+m_u)(m_b-m_u)}}`$
could be enhanced by low $`m_s`$ values (of order 100–120 MeV) to raise $`K\pi /\varphi K`$, which at the same time leads to near degeneracy of the $`K^+\pi ^{-}`$ and $`K^0\pi ^+`$ rates. The upshot is that one finds rather large CP asymmetries, i.e. $`a_{\mathrm{CP}}\simeq `$ 35%, 45% and 55% for the $`K^0\pi ^+`$, $`K^+\pi ^{-}`$ and $`\varphi K^+`$ modes, respectively, all of the same sign. Such a pattern cannot be generated by SM, with or without rescattering. We expect this pattern to hold true for many $`b\rightarrow s`$ modes.
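A quick numerical illustration of this chiral enhancement (the light-quark masses below are assumed running-mass values chosen for illustration, not fitted numbers; all masses in GeV):

```python
# <O_6> ~ m_K^2 (m_B^2 - m_pi^2) / ((m_s + m_u)(m_b - m_u)); the m_s dependence
# of this factor is what drives the K pi rates relative to phi K
m_K, m_B, m_pi, m_b, m_u = 0.494, 5.279, 0.140, 4.8, 0.004
for m_s in (0.10, 0.12, 0.20):
    chi = m_K**2 * (m_B**2 - m_pi**2) / ((m_s + m_u) * (m_b - m_u))
    print(f"m_s = {m_s:.2f} GeV  ->  chiral factor = {chi:.1f}")
```

Lowering $`m_s`$ from 200 to 100 MeV roughly doubles the $`O_6`$ matrix element, which is what allows the $`K\pi `$ rates to be fitted while keeping $`\varphi K^+`$ suppressed.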
We have left out the prominent $`B\rightarrow \eta ^{\prime }K`$ modes from our discussion, largely because the anomaly contribution is
Not quite included at present!
To compute such diagrams, one needs to know the $`|\overline{s}gq\rangle `$ Fock component of the $`K`$ meson! This may be at the root of the rather large size of the $`B\rightarrow \eta ^{\prime }K`$ mode.
## III Whither EWP? Now!?
Before we get carried away by the possibility of large CP asymmetries from New Physics, there is one flaw (or two?) that emerged after summer 1998. Because of P-dominance, which is certainly true in the case of enhanced $`b\rightarrow sg`$, $`K^+\pi ^0`$ is only half of $`K^+\pi ^{-}\simeq K^0\pi ^+`$. The factor of 1/2 comes from $`A_{K^+\pi ^0}^P\simeq \frac{1}{\sqrt{2}}A_{K^+\pi ^{-}}^P`$, which is just an isospin Clebsch factor that originates from the $`\pi ^0`$ wave function. Although this seemed quite reasonable from 1997 data, where the $`K^+\pi ^0`$ mode was not reported, a crisis emerged in summer 1998 when CLEO updated their results for the three $`K\pi `$ modes. They found $`K^+\pi ^0\simeq K^+\pi ^{-}\simeq K^0\pi ^+`$ instead!
Curiously, $`A_{K^+\pi ^0}^T\simeq \frac{1}{\sqrt{2}}A_{K^+\pi ^{-}}^T`$ also, which cannot change the situation. In any case the expectation that $`|T/P|\simeq 0.2`$ cannot make a factor of 2 change by interference. Miraculously, however, this could be the first indication of the last type of penguin, the EWP.
The yet to be observed EWP (electroweak penguin), namely $`b\rightarrow sf\overline{f}`$, occurs by $`b\rightarrow s\gamma ^{*},sZ^{*}`$ followed by $`\gamma ^{*},Z^{*}\rightarrow f\overline{f}`$. The strong penguin oftentimes obscures the $`b\rightarrow sq\overline{q}`$ case (or so it is thought), and to cleanly identify the EWP one has to search for “pure” EWP modes such as $`B_s\rightarrow \pi \eta `$, $`\pi \varphi `$, which are clearly rather far away. One usually expects the $`B\rightarrow K^{(*)}\ell ^+\ell ^{-}`$ mode to be the first EWP process to be observed, which is still a year or two away, while the clean and purely weak penguin $`B\rightarrow K^{(*)}\nu \overline{\nu }`$ is rather far away.
With the hint from $`K^+\pi ^0\simeq K^+\pi ^{-}\simeq K^0\pi ^+`$, however, and putting our SM hat back on, we wish to establish the possibility that the EWP may already be operating behind the scenes. It should be emphasized that, unlike the gluon, the $`Zf\overline{f}`$ coupling depends on isospin, and can in principle break the isospin factor of 1/2 mentioned earlier.
We first show that simple $`K\pi \rightarrow K\pi `$ rescattering cannot drastically change the factor of two. From Fig. 1(a), where we have adopted $`\gamma =64^{\circ }`$ from the current “best fit” to the CKM matrix , one clearly sees the factor of 2 between $`K^+\pi ^{-}`$ and $`K^+\pi ^0`$. We also note that rescattering, as parametrized by the phase difference $`\delta `$ between the I = 1/2 and 3/2 amplitudes, is only between $`K^+\pi ^0\leftrightarrow K^0\pi ^+`$ and $`K^+\pi ^{-}\leftrightarrow K^0\pi ^0`$. When we put in the EWP contribution, at first sight it seems that the effect is drastic. On closer inspection at $`\delta =0`$, it is clear that the EWP contributions to the $`K^0\pi ^+`$ and $`K^+\pi ^{-}`$ modes are small, but quite visible for the $`K^+\pi ^0`$ and $`K^0\pi ^0`$ modes. This is because the $`K^+\pi ^0`$ and $`K^0\pi ^0`$ modes suffer a $`1/\sqrt{2}`$ suppression in amplitude because of the $`\pi ^0`$ wave function. However, it is precisely these modes which pick up a sizable $`Z`$ penguin contribution via the $`\pi ^0`$ (the strength of $`c_9`$ is roughly a quarter of $`c_4`$ and $`c_6`$). As one dials $`\delta `$, $`K^+\pi ^0\leftrightarrow K^0\pi ^+`$ and $`K^+\pi ^{-}\leftrightarrow K^0\pi ^0`$ rescattering redistributes this EWP impact and leads to the rather visible change in Fig. 1(b). We notice the remarkable result that the EWP reduces the $`K^+\pi ^{-}`$ rate slightly but raises the $`K^+\pi ^0`$ rate considerably, such that the two modes become rather close. We have to admit, however, to something that we have sneaked in. To enhance the relative importance of the EWP, we had to suppress the strong penguin effect. We have therefore employed a much heavier $`m_s=200`$ MeV, as compared to the 100–120 MeV employed previously in the New Physics case. Otherwise we cannot bring the $`K^+\pi ^{-}`$ and $`K^+\pi ^0`$ rates close to each other.
Having brought the $`K^+\pi ^{-}`$ and $`K^+\pi ^0`$ modes closer, the problem now is that $`K^0\pi ^+`$ lies above them, and the situation becomes worse for large rescattering. To remedy this, we play with the phase angle $`\gamma `$, which tunes the weak phase of the tree contribution T. Setting now $`\delta =0`$, again we start without the EWP in Fig. 2(a). The factor of two between $`K^+\pi ^{-}`$ and $`K^+\pi ^0`$ is again apparent. Dialing $`\gamma `$ clearly changes the T–P interference. For $`\gamma `$ in the first quadrant one has destructive interference, which becomes constructive in the second quadrant. This allows the $`K^+\pi ^{-}`$ mode to become larger than the pure penguin $`K^0\pi ^+`$ mode, which is insensitive to $`\gamma `$. However, nowhere do we find a solution where $`K^+\pi ^0\simeq K^+\pi ^{-}\simeq K^0\pi ^+`$ is approximately true. There is always one mode that is split away from the other two.
Putting in the EWP, as shown in Fig. 2(b), the impact is again quite visible. As anticipated, the $`K^+\pi ^{-}`$ and $`K^+\pi ^0`$ modes come close to each other. Since their $`\gamma `$ dependence is quite similar, one finds that for $`\gamma \simeq 90^{\circ }`$–$`130^{\circ }`$ the three observed $`K\pi `$ modes come together as closely as one can get, and are basically consistent with the errors allowed by data. Note that $`K^+\pi ^0`$ is never larger than $`K^+\pi ^{-}`$.
We emphasize that a large rescattering phase $`\delta `$ would destroy this achieved approximate equality, as can be seen from Fig. 3, where we illustrate the $`\delta `$ dependence for $`\gamma =120^{\circ }`$. It seems that $`\delta `$ cannot be larger than $`50^{\circ }`$ or so.
As a further check of the effect of the EWP, we show the results for $`\delta =0`$ in Fig. 4. In the absence of rescattering, the change in rate (enhancement) for the $`K^+\pi ^0`$ mode from adding the EWP is reflected in a dilution of the asymmetry, which could serve as a further test. This, however, depends rather crucially on the absence of rescattering. Once rescattering is included, it would be hard to distinguish the impact of the EWP from CP asymmetries. However, even with a rescattering phase, the $`\gamma `$ dependence of the CP asymmetries can easily distinguish between the two solutions $`\gamma \simeq 120^{\circ }`$ and $`240^{\circ }`$, as illustrated in Fig. 5, where the EWP effect is included. From our observation that a large $`\delta `$ phase would destroy the near equality of the three observed $`K\pi `$ modes that we had obtained, we find that $`a_{\mathrm{CP}}<20\%`$ even in the presence of the rescattering phase $`\delta `$.
It should be emphasized that the $`\gamma `$ value we find necessary to have $`K^+\pi ^{-}\approx K^0\pi ^+`$ is in a different quadrant than the present best “fit” result of $`\gamma \approx 60^{\circ }`$–$`70^{\circ }`$. In particular, the sign of $`\mathrm{cos}\gamma `$ is preferred to be negative rather than positive. An extended analysis of the $`\pi \pi `$, $`\rho \pi `$ and $`K^{*}\pi `$ modes confirms this assertion. Intriguingly, the sizes of $`\rho ^\pm \pi ^{\mp }`$ and $`K^+\pi ^{-}`$ were anticipated via this $`\gamma `$ value. Perhaps hadronic rare B decays can provide information on $`\gamma `$, and the present results seem to be at odds with CKM fits to $`\epsilon _K`$, $`|V_{ub}/V_{cb}|`$, $`B_d`$ mixing, and in particular the $`B_s`$ mixing bound, which rules out $`\mathrm{cos}\gamma <0`$.
## IV Conclusion
Be prepared for CP Violation!!
We first illustrated the possibility of having $`a_{\mathrm{CP}}\approx 30\%`$–$`50\%`$ from New Physics in already observed modes, such as $`K\pi `$ and $`\eta ^{\prime }K`$, and the $`\varphi K`$ mode when seen. Our “existence proof” was the possibility of an enhanced $`b\to sg`$ dipole transition, which, from SUSY model considerations, could carry a new CP phase via $`b_R`$. Note that this is just an illustration. We are quite sure that Nature is smarter.
We then made an about-face and went back to the SM, and pointed out that the EWP may have already shone through the special “slit” of $`K^+\pi ^0\approx K^+\pi ^{-}\approx K^0\pi ^+`$, where we inferred that $`\gamma \approx 90^{\circ }`$–$`130^{\circ }`$ is preferred, which implies $`\mathrm{cos}\gamma <0`$, contrary to the current CKM “fit” preference.
We hope we have illustrated the versatility of rare B decays, that they can open windows on both New Physics and SM. The next 5 years should be a very rewarding period!
# Scaling violations and off-forward parton distributions: leading order and beyond
## Abstract
We give an outline of a formalism for the solution of the evolution equations for off-forward parton distributions in leading and next-to-leading orders, based on a partial conformal wave expansion and reconstruction via orthogonal polynomials.
The off-forward parton distributions (OFPD) are new non-perturbative inputs used in exclusive electroproduction processes, like the hard diffractive production of mesons and the deeply virtual Compton scattering (DVCS), to parametrize hadronic substructure . Their characteristic feature is a non-zero momentum transfer in the $`t`$-channel, which results in different momentum fractions of the constituents inside the hadron.
The leading order amplitude, e.g. for the unpolarized DVCS, in Leipzig-Ji’s conventions used throughout , looks like
$$𝒜\propto \int _{-1}^{1}𝑑x\left\{\frac{1}{x-\eta +i0}+\frac{1}{x+\eta -i0}\right\}𝒪(x,\eta ),$$
(1)
with the quark OFPD $`𝒪(x,\eta )`$ given as the Fourier transform of the light-cone string operator (in the light-cone gauge $`B_+=0`$)
$$𝒪(x,\eta )=2\int \frac{d\lambda }{2\pi }e^{i\lambda x}\langle p_2|\overline{\psi }(-\lambda n)\gamma _+\psi (\lambda n)|p_1\rangle ,$$
(2)
with the skewedness $`\eta \sim (p_1-p_2)_+`$ and the constraint $`x\in [-1,1]`$ coming from the support properties of the matrix element (2). Several peculiar properties can be learned from these: i) Translating the perturbative arguments used in the proof of the factorization formula (1) to the non-perturbative domain, we immediately see that $`𝒜`$ exists provided $`𝒪(\pm \eta ,\eta )`$ is continuous. ii) In different kinematical regions of phase space the OFPD’s share common properties with the usual forward parton densities ($`|x|>\eta `$) and with exclusive distribution amplitudes ($`|x|<\eta `$), and are thus hybrids in this sense. iii) The amplitude (1) manifests Bjorken scaling.
The last property is violated once the QCD radiative corrections are taken into account. The $`Q`$-dependence of the amplitude appears via the scale dependence of the OFPD’s which obey the generalized evolution equation
$$\frac{d}{d\mathrm{ln}Q^2}𝓞(x,\eta )=\int _{-1}^{1}𝑑x^{\prime }𝑲(x,x^{\prime },\eta )𝓞(x^{\prime },\eta ),$$
(3)
with the kernel given by a series in the coupling constant $`\alpha _s`$. The diagonalization of the leading order kernel can be achieved by exploiting the consequences of the conformal invariance of QCD at the classical level. The eigenstates of the one-loop non-forward evolution equation are given by Gegenbauer polynomials, $`C_j^\nu `$ (with the numerical value of the index $`\nu `$ depending on the parton species), which form an infinite dimensional irreducible representation of the conformal group in the space of bi-linear composite operators. Starting from two-loop order the conformal operators start to mix, and the simple pattern of one-loop evolution is broken, so that the eigenfunctions generalize to non-polynomial functions. In the basis of leading order conformal waves
$$\int _{-1}^{1}𝑑x\,C_j^\nu \left(\frac{x}{\eta }\right)𝑲(x,x^{\prime },\eta )=-\frac{1}{2}\sum _{k=0}^{j}𝜸_{jk}C_k^\nu \left(\frac{x^{\prime }}{\eta }\right)$$
(4)
the anomalous dimension matrix is not diagonal and possesses non-diagonal entries
$$𝜸_{jk}=𝜸_j^\mathrm{D}\delta _{jk}+𝜸_{jk}^{\mathrm{ND}},\qquad \text{with}\qquad 𝜸_{jk}^{\mathrm{ND}}\sim 𝒪(\alpha _s^2).$$
(5)
The disadvantage of the standard approach to the study of scaling violations beyond leading order is the proliferation of Feynman graphs required for the calculation of $`𝜸_{jk}^{(1)}`$. Our approach, which allows for an extremely concise analytical solution of the problem, is mainly based on four major observations: i) The triangularity of the anomalous dimension matrix $`𝜸_{jk}`$ implies that its eigenvalues are given by the diagonal elements and coincide with the well-known forward anomalous dimensions. ii) Tree-level special conformal invariance implies a diagonal leading order matrix. One-loop violation of the symmetry induces non-diagonal elements; thus, the one-loop special conformal anomaly will generate the two-loop anomalous dimensions. iii) The scale Ward identity for the Green function with a conformal operator insertion coincides with the Callan-Symanzik equation for the latter, and thus the dilatation anomaly is the anomalous dimension of the composite operator: $`[𝓞_j][\int 𝑑x\mathrm{\Theta }_{\mu \mu }]\sim \frac{1}{ϵ}\sum _{k=0}^{j}𝜸_{jk}[𝓞_k]`$. iv) The four-dimensional conformal algebra provides a relation between the anomalies of dilatation and special conformal transformations via the commutator $`[𝒟,𝒦_{-}]=i𝒦_{-}`$. Using these ideas we have deduced the form of the two-loop non-diagonal elements to be
$$𝜸^{\mathrm{ND}(1)}=[𝜸^{\mathrm{D}(0)},𝒅(\beta _0-𝜸^{\mathrm{D}(0)})+𝒈]_{-},$$
(6)
where $`𝒅`$ is a simple matrix, $`𝜸_j^{\mathrm{D}(0)}`$ are the LO anomalous dimensions of the conformal operators, and $`\beta _0`$ is the one-loop QCD Gell-Mann-Low function responsible for the violation of scale invariance. The most nontrivial information about the special conformal symmetry breaking is concentrated in the $`𝒈`$-matrices, which are the residues of the special conformal symmetry breaking counterterms at one-loop order: $`[𝓞_j][\int 𝑑x\,x_{-}\mathrm{\Theta }_{\mu \mu }]\sim \frac{\alpha _s}{ϵ}\sum _{k=0}^{j}a(j,k)\left\{𝒈_{jk}+\dots \right\}[𝓞_k]`$.
Unfortunately, the eigenfunctions of the evolution kernels cannot be used as a basis for the expansion of the OFPD, since they do not form a complete set of functions in the region $`|x/\eta |>1`$ where, however, the OFPD’s are in general nonvanishing. Our approach (several LO methods are available in the literature ) to the study of the scale dependence of the OFPD is based on the expansion of the distribution in a series w.r.t. a complete set of orthogonal polynomials, $`𝒫_j(x)`$, on the interval $`-1\le x\le 1`$, which preserves the support properties of the functions in question
$$𝓞(x,\eta ,Q)=\sum _{j=0}^{J_{\mathrm{max}}}\stackrel{~}{𝓟}_j(x)𝓜_j(\eta ,Q),$$
(7)
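As a rough numerical illustration of the truncated expansion (7), the sketch below reconstructs a toy profile from a finite number of orthogonal-polynomial coefficients on $`[-1,1]`$; we use the Legendre basis available in NumPy rather than the $`𝓟_j`$ of the text, and the profile itself is purely illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy stand-in for O(x, eta, Q0); purely illustrative, not a model of data.
f = lambda x: (1.0 - x**2) ** 2 * np.exp(-3.0 * x)

x = np.linspace(-1.0, 1.0, 2001)
for Jmax in (4, 16, 80):
    coef = legendre.legfit(x, f(x), Jmax)                 # expansion coefficients
    err = np.max(np.abs(legendre.legval(x, coef) - f(x)))
    print(f"Jmax = {Jmax:3d}: max reconstruction error = {err:.1e}")
```

The truncation error falls rapidly with $`J_{\mathrm{max}}`$, which is why the finite value $`J_{\mathrm{max}}=80`$ used below is sufficient in practice.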
where formally $`J_{\mathrm{max}}=\mathrm{}`$. The expansion coefficients can be reexpressed in terms of eigenstates of the evolution equation (3) according to
$$𝓜_j(\eta ,Q)=\sum _{k=0}^{j}𝑬_{jk}(\eta )\sum _{l=0}^{k}\eta ^{k-l}𝑩_{kl}(Q,Q_0)𝓔_l(Q,Q_0)𝓞_l(\eta ,Q_0),$$
(8)
where $`𝑬_{jk}(\eta )`$ is the overlap integral between the one-loop eigenfunctions, $`C_j^\nu `$, and the polynomials $`𝓟_j`$. The conformal moments at a reference scale $`Q_0`$ are defined as
$$𝓞_j(\eta ,Q_0)=\eta ^j\int _{-1}^{1}𝑑x\,C_j^\nu \left(\frac{x}{\eta }\right)𝓞(x,\eta ,Q_0).$$
(9)
All the scale dependence in Eq. (8) is carried by the usual evolution operator, which obeys the standard first-order differential equation
$$\frac{d}{d\mathrm{ln}Q}𝓔+𝜸^\mathrm{D}𝓔=0.$$
(10)
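For orientation, at leading order and with a running coupling, Eq. (10) is solved by the familiar closed form (a sketch assuming the standard normalizations $`𝜸_j^\mathrm{D}=(\alpha _s/2\pi )𝜸_j^{\mathrm{D}(0)}+\dots `$ and $`d\alpha _s/d\mathrm{ln}Q=-\beta _0\alpha _s^2/(2\pi )+\dots `$, which are not spelled out in the text):

$$𝓔_j(Q,Q_0)=\left(\frac{\alpha _s(Q)}{\alpha _s(Q_0)}\right)^{𝜸_j^{\mathrm{D}(0)}/\beta _0}.$$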
Besides, there is an additional dependence on the hard momentum transfer, $`Q`$, which appears due to the mixing of the conformal operators among themselves in the two-loop approximation. This dependence is governed by a new evolution equation of the form
$$\frac{d}{d\mathrm{ln}Q}𝑩+[𝜸^\mathrm{D},𝑩]_{-}+𝜸^{\mathrm{ND}}𝑩=0.$$
(11)
Making use of these results we are in a position to study the evolution of the OFPD explicitly. In order to save space, let us address only the non-singlet sector. A rough idea of the shape of the OFPD can be gained from perturbation theory itself. Assuming the skewedness-free input shown in Fig. 1 at a very low normalization point, typical for phenomenological models of confinement, we evolve it (with $`J_{\mathrm{max}}=80`$) upwards to the momentum scale $`Q^2=100\mathrm{GeV}^2`$ (Fig. 2). The relative size of the next-to-leading order effects is shown for given $`\eta `$ in Fig. 3. Clearly, NLO effects do not exceed the level of a few percent in the non-singlet sector.
A specific feature of the evolution of the OFPD is that the partons with momentum fractions $`\eta <|x|<1`$ tend to penetrate into the ER-BL-type region, and once they do, they never return from the domain $`x\in [-\eta ,\eta ]`$.
Acknowledgements. A.B. was supported by the Alexander von Humboldt Foundation.
# Proton NMR for Measuring Quantum-Level Crossing in the Magnetic Molecular Ring Fe10
## Abstract
The proton nuclear spin-lattice relaxation rate 1/$`T_1`$ has been measured as a function of temperature and magnetic field (up to 15 T) in the molecular magnetic ring Fe<sub>10</sub>(OCH<sub>3</sub>)<sub>20</sub>(O<sub>2</sub>CCH<sub>2</sub>Cl)<sub>10</sub> (Fe10). A striking enhancement of 1/$`T_1`$ is observed around magnetic field values corresponding to a crossing between the ground state and the excited states of the molecule. We propose that this is due to a cross-relaxation effect between the nuclear Zeeman reservoir and the reservoir of the Zeeman levels of the molecule. This effect provides a powerful tool to investigate quantum dynamical phenomena at level crossing.
The magnetic properties of metal ion clusters incorporated in large molecules attract considerable interest for the new physics involved and for the potential applications . At low temperatures, these molecules act as individual quantum nanomagnets, making it possible to probe, at the macroscopic scale, the crossover between quantum and classical physics . Of fundamental interest is the situation of (near-) degeneracy of two magnetic levels, where quantum mechanical phenomena such as tunneling or coherence can occur. These effects have been intensively explored in recent years, mostly in the high-spin ($`S`$=10) molecules Mn12 and Fe8 , or in the ferritin protein . Another interesting system is the molecule \[Fe<sub>10</sub>(OCH<sub>3</sub>)<sub>20</sub>(O<sub>2</sub>CCH<sub>2</sub>Cl)<sub>10</sub>\] (in short Fe10), where the ten Fe<sup>3+</sup> ions ($`s`$=5/2) are coupled in a ring configuration by an antiferromagnetic (AF) exchange $`J/k_B\simeq `$13.8 K . Unlike Mn12 or Fe8, the ground state of Fe10 is nonmagnetic (total spin $`S`$=0). The energies $`E`$ of the excited states are given approximately by Landé’s rule:
$$E(S)=\frac{P}{2}S(S+1)$$
(1)
where $`S`$ is the total spin value and $`P`$=$`4J/N`$, with $`N`$=10 the number of magnetic ions in the ring. In zero magnetic field, the first excited state is $`S`$=1, the second $`S`$=2, etc. (see Fig. 1). This picture is modified by an external magnetic field, which lifts the degeneracy of the magnetic states. A sufficiently strong field can induce level crossings between the ground state and the excited states, as shown in Fig. 1. In other words, the ground state of the molecule can be changed by the field, from $`S`$=$`0`$ to $`S`$=$`1`$, then from $`S`$=$`1`$ to $`S`$=$`2`$, etc. Owing to the relatively low value of the magnetic exchange coupling in Fe10, these field-induced transitions can be observed experimentally in conventional magnetic fields, for instance through steps of the magnetization .
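This picture already fixes the scale of the crossing fields. Neglecting magnetic anisotropy and assuming $`g=2`$ (our simplification; the anisotropy is treated below), the lowest Zeeman component of each multiplet has energy $`E(S)-g\mu _BHS`$, so the ground state changes from $`S`$ to $`S+1`$ when

$$g\mu _BH_c=P(S+1).$$

With the value $`P=6.5`$ K quoted below from torque magnetometry, this gives $`H_c\approx 4.8`$, $`9.7`$ and $`14.5`$ T for $`S=0,1,2`$, close to the fields where the magnetization steps, and the 1/$`T_1`$ peaks reported here, are observed.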
The situation of degeneracy between levels raises fundamental problems of quantum dynamics (specific calculations for Fe10 can be found in ). A crucial issue is the role played by the coupling between the magnetic molecular levels and the environment, such as phonons and/or nuclear spins . Clearly, essential information on this problem should be accessible through measurements of the nuclear spin-lattice relaxation rate 1/$`T_1`$, since the nuclei (here protons) probe the fluctuations of the local field induced at the nuclear site by the localized magnetic moments.
The physics of level crossings is scarcely documented experimentally, owing to the rarity of systems in which the observation is possible. A situation which has some analogy with the one reported here is the crossover from the antiferromagnetic to the ferromagnetic phase in 1D chains, where a divergence of the one-magnon density of states generates an enhancement in the nuclear spin-lattice relaxation rate . A closer situation of level crossing between singlet and triplet states can be observed in 1D gapped quantum magnets , but the physical context and the continuum of excited states make the situation hardly comparable to that in finite-size magnets. In this respect, the mesoscopic ring Fe10 constitutes a model system, since the magnetic levels are sharp and well-defined in energy, owing to the finite size of the system.
Previous <sup>1</sup>H NMR relaxation measurements in Fe10 have concerned magnetic fields much lower than the expected energy gap $`E(1)\approx `$6 K (Eq. 1) .
Here, we present new proton $`T_1`$ measurements in Fe10, as a function of magnetic field up to 15 Tesla, and in the temperature range 1.3 K$`\le T\le `$4.2 K. Our main result is the observation of a dramatic enhancement of $`1/T_1`$ when the magnetic field reaches the critical values at which the magnetic levels become degenerate (level-crossing) . Although broadening effects due to the use of a powder sample do not yet permit a quantitative interpretation of the data, it is pointed out that the cross-relaxation effect between (proton) nuclear and molecular levels, discovered here, should provide a powerful method to investigate the physics of level-crossing once large enough single crystals become available.
The powder samples were synthesized as described elsewhere . High-field ($`H\ge `$8 T) NMR measurements were performed at the Grenoble High Magnetic Field Laboratory in a 17 T variable field superconducting magnet. All measurements were performed with home-built pulsed NMR spectrometers.
The proton NMR spectrum is featureless, except for an asymmetry related to the orientation distribution of the grains and to the superposition of resonances from inequivalent proton sites in each molecule. The width of the spectrum is both temperature and field dependent due to an inhomogeneous component, i.e. a distribution of hyperfine (dipolar) fields from the Fe moments . At low field ($`H`$=0.33 T), the full width at half maximum (FWHM) is about 25 kHz at room temperature; it increases to a maximum of about 70 kHz at about 30 K and decreases again at low temperature, reflecting the collapse of the spin susceptibility when the Fe10 molecular states condense into the $`S`$=0 ground state. In the temperature range investigated here (1.3 K-4.2 K), there is a residual field-dependent inhomogeneous broadening of the proton NMR line, which is due to the Fe moments in the $`S`$=1 excited state. At 1.3 K the FWHM varies from 25 kHz at $`H`$=0.33 T to 1.8 MHz at 14.65 T.
$`T_1`$ was extracted from the recovery of the spin-echo amplitude following a sequence of saturating radiofrequency pulses. Both $`\left(\frac{\pi }{2}\right)_x`$-$`\left(\frac{\pi }{2}\right)_y`$ (solid echo) and $`\left(\frac{\pi }{2}\right)_x`$-$`\left(\pi \right)_y`$ (Hahn echo) sequences were used with similar results. The recovery of the nuclear magnetization was found to be non-exponential at all fields. For low fields ($`H`$$``$1 T), the NMR line is sufficiently narrow to be completely saturated by the radio frequency pulses. In this case, the non-exponential recovery is solely related to the distribution of relaxation rates, due to the superposition of inequivalent proton sites, and to the orientation distribution in the powder. At higher fields, the line becomes too broad to be completely saturated and thus the initial recovery is affected by spectral diffusion effects. Therefore, in order to measure a relaxation parameter consistently we chose to define $`T_1`$ as the time at which the nuclear magnetization has recovered half of the equilibrium value, after removal of the initial fast recovery due to spectral diffusion. This criterion is insensitive to the spectral diffusion, the strength of which depends on hardly controllable experimental parameters. The criterion also makes the $`T_1`$ value insensitive to slight modifications of the recovery law that were sometimes observed for the very long time delays. Otherwise, the shape of the recovery law was found to be field and $`T`$-independent. $`T_1`$ was also checked to be the same at different positions on the line.
The magnetic field dependence of proton 1/$`T_1`$ is reported in Fig. 2. For technical reasons, experiments between 8 and 15 Tesla were performed at $`T`$=1.3 K, while those at lower fields were at $`T`$=1.5-1.7 K. The difference is minor and, as will be seen later, $`T_1`$ is basically $`T`$-independent in most of the field range. So, Fig. 2 can be regarded as the field dependence of $`T_1`$ at fixed temperature. $`1/T_1`$ shows three very well-defined peaks centered around the critical field values: 4.7 T, 9.6 T and 14 T. These values correspond very closely to the fields for which steps were observed in the magnetization .
At low fields ($`H`$$`<`$1.5 T), the $`T`$-dependence of 1/$`T_1`$ is almost exponential (Fig. 3). This implies that the proton relaxation is dominated by the singlet-triplet gap and the finite lifetime of the $`S`$=1 excited state which generates fluctuations in the local hyperfine field at the proton site . The exponential $`T`$-dependence is a consequence of the Boltzmann distribution of the $`S`$=1 population.
However, as shown in Fig. 3, 1/$`T_1`$ at higher magnetic field appears to be temperature independent both at level crossings (4.7 and 9.61 T) and in-between them (7.96 T). Thus, the strong enhancement around level crossing requires a new description of the nuclear relaxation, which cannot be based on thermal excitations.
Near the critical field for level-crossing, the coupled system of nuclei plus molecular magnetic moments can undergo flip-flop energy-conserving transitions, resulting in a transfer of energy from the nuclear system to the molecular magnet which depends on the matching of energy levels and not on temperature. Thus, we propose that the peaks in 1/$`T_1`$ vs. magnetic field are the result of a cross-relaxation effect between the nuclear Zeeman levels and the magnetic molecular levels. In fact, since the magnetic molecules are strongly coupled to the ”lattice”, the cross-relaxation becomes a very effective channel for spin-lattice relaxation. It is emphasized that cross-relaxation, here in the sense of a matching of energy levels, is usually observed between two nuclear reservoirs or between two electron reservoirs . Strictly speaking the cross-relaxation occurs only when the condition $`\hbar \omega _n`$=$`\hbar \gamma _nH`$=$`g\mu _B|H-H_c|`$ is met. However, the broadening of both the NMR line and of the molecular energy levels can allow the energy-conserving condition to be met over a wide field interval. Furthermore, broadening effects are expected for a powder sample.
In order to analyze the data quantitatively, it is necessary to have a precise description of the magnetic level diagram for Fe10. For the triplet state, the energy levels are obtained from the diagonalization of the Hamiltonian:
$$\mathcal{H}=\vec{S}\widehat{D}\vec{S}+g\mu _B\vec{B}\vec{S}+P,$$
(2)
which yields the secular equation for the energy $`E`$:
$$(P-\frac{2}{3}D_1-E)(P+\frac{1}{3}D_1-E)^2-(P-\frac{2}{3}D_1-E)g^2\mu _B^2B^2\mathrm{cos}^2\theta -(P+\frac{1}{3}D_1-E)g^2\mu _B^2B^2\mathrm{sin}^2\theta =0,$$
(3)
where we have assumed a diagonal, traceless, axial tensor for the zero-field splitting (-1/3$`D_1`$, -1/3$`D_1`$ , 2/3$`D_1`$). The axis perpendicular to the Fe10 ring plane is a hard axis, i.e. $`D_1`$$`>`$0. The values $`P`$=6.5 K and $`D_1`$=3.23 K are obtained from recent torque magnetometry measurements . As shown in Fig. 1, the critical field $`H_c`$ for the first level-crossing depends on the angle $`\theta `$ between the crystal field axis and the magnetic field: $`H_c`$ varies from 4.33 T for $`\theta `$=90<sup>o</sup> up to 5.6 T for $`\theta `$=0<sup>o</sup> . This implies a powder distribution of relaxation rates which should contribute to the width of the first peak at 4.7 Tesla. The calculation of the level distribution for $`S`$$``$2 is more complex, making a quantitative analysis of the second and third crossings beyond the scope of the present paper.
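The angular dependence of the crossing field quoted above can be checked by direct numerical diagonalization of Hamiltonian (2) in the $`S=1`$ subspace. The following sketch is ours (it assumes $`g=2`$ and uses $`\mu _B/k_B\approx 0.672`$ K/T); it reproduces the quoted window, with the small residual difference from 4.33 T presumably reflecting the precise $`g`$ value used in the torque analysis:

```python
import numpy as np
from scipy.optimize import brentq

muB = 0.6717          # Bohr magneton in K/T (mu_B / k_B); g = 2 assumed below

def triplet_levels(B, theta, P=6.5, D1=3.23, g=2.0):
    """Eigenvalues (in K) of the S=1 Hamiltonian (2), field at angle theta to the hard axis."""
    Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
    Sz = np.diag([1.0, 0.0, -1.0])
    D = D1 * (Sz @ Sz - (2.0 / 3.0) * np.eye(3))   # traceless axial ZFS (-D1/3, -D1/3, 2D1/3)
    H = D + g * muB * B * (np.sin(theta) * Sx + np.cos(theta) * Sz) + P * np.eye(3)
    return np.linalg.eigvalsh(H)

# First level crossing: the lowest triplet level meets the S=0 ground state at E=0
for theta in (0.0, np.pi / 2):
    Hc = brentq(lambda B: triplet_levels(B, theta).min(), 0.1, 10.0)
    print(f"theta = {np.degrees(theta):4.0f} deg : Hc = {Hc:.2f} T")
# -> about 5.6 T along the hard axis and about 4.3 T for the field in the ring plane
```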
It is very interesting to point out the differences between the three peaks in 1/$`T_1`$. At the first level crossing ($`H_c`$=4.7 T), there is a very steep increase of 1/$`T_1`$, occurring in an extremely narrow field interval (about 0.1 Tesla). This is very suggestive of a resonant process in the relaxation. The two other peaks have more regular shapes, but the third peak is smaller than the second one. We speculate, of course, that these differences are related to the different spin values involved in each level-crossing. In particular, the first crossing involves the non-magnetic level $`S`$=0.
We tentatively describe the results in Fig. 2 as a sum of Lorentzian functions of width $`\mathrm{\Gamma }_\alpha `$, with $`\alpha `$=1,2,3 for the three level-crossing conditions:
$$\frac{1}{T_1}\propto \sum _{\alpha =1}^{3}A_\alpha \left[\frac{\mathrm{\Gamma }_\alpha }{\mathrm{\Gamma }_\alpha ^2+(\gamma _nH-\frac{1}{\hbar }g\mu _B|H-H_c|)^2}\right]$$
(6)
This expression fits the data reasonably well with the choice of parameters $`A_1\simeq `$0.3$`A_2\simeq `$0.5$`A_3`$=4$`\pi \times `$10<sup>13</sup> rad s<sup>-2</sup> ($`\approx `$0.36 T) and $`\mathrm{\Gamma }_1\simeq `$0.5$`\mathrm{\Gamma }_2\simeq `$0.5$`\mathrm{\Gamma }_3`$=2$`\pi \times `$10<sup>10</sup> rad s<sup>-1</sup>, and critical fields $`H_{c1}`$=4.7 T, $`H_{c2}`$=9.6 T, $`H_{c3}`$=14.0 T. The physical meaning of the coupling constants $`A_\alpha `$ is not clear without a quantitative theory for the cross-relaxation effect. The width of each peak is most likely related to the distribution of level-crossing fields due to the distribution of angles between the magnetic field and the crystalline axis in our powder sample. Thermal broadening is also expected, since 1.3 K is equivalent to $`\approx `$1 T.
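To make the magnitudes explicit, the fit (6) can be evaluated numerically. The sketch below uses one reading of the parameter chain quoted above (namely $`A_1=4\pi \times 10^{13}`$ rad s<sup>-2</sup>, $`A_2=A_1/0.3`$, $`A_3=A_1/0.5`$, and similarly for the widths) together with $`g=2`$; this parsing and the physical constants are our assumptions:

```python
import numpy as np

gamma_n   = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio, rad s^-1 T^-1
gmuB_hbar = 2 * 8.794e10           # g*mu_B/hbar for g = 2, rad s^-1 T^-1 (assumption)

A  = 4 * np.pi * 1e13 * np.array([1.0, 1 / 0.3, 1 / 0.5])   # rad s^-2
G  = 2 * np.pi * 1e10 * np.array([1.0, 2.0, 2.0])           # rad s^-1
Hc = np.array([4.7, 9.6, 14.0])                             # critical fields, T

def rate(H):
    """Eq. (6): cross-relaxation contribution to 1/T1 at field H (in tesla)."""
    det = gamma_n * H - gmuB_hbar * np.abs(H - Hc)          # detuning, rad s^-1
    return np.sum(A * G / (G**2 + det**2))

for H in (4.7, 7.0, 9.6, 14.0):
    print(f"H = {H:5.1f} T : 1/T1 ~ {rate(H):8.1f} s^-1")
```

With these numbers the first peak has height $`A_1/\mathrm{\Gamma }_1\approx 2\times 10^3`$ s<sup>-1</sup> and a field width $`\mathrm{\Gamma }_1\hbar /g\mu _B\approx 0.36`$ T, consistent with the parenthetical note above.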
In summary, we have presented an investigation of the proton spin-lattice relaxation rate 1/$`T_1`$ at low temperature in the Fe10 molecular magnetic ring. At low fields, 1/$`T_1`$ is dominated by the thermal fluctuations in the triplet excited state. At high magnetic fields we have reported a dramatic enhancement of 1/$`T_1`$ at the critical fields for which the lowest lying molecular energy levels become almost degenerate. The effect can be explained by a $`T`$-independent resonant cross-relaxation effect in which thermal fluctuations mediated by phonons do not seem to play a role. Thus, the magnetic transitions between nearly degenerate $`\mathrm{\Delta }S`$=1 states become possible, presumably because of the coupling with the nuclear spins .
The most promising perspective opened by these results concerns the possibility of studying dynamical effects of quantum mechanical origin that are expected in the vicinity of the level-crossing conditions. An enhanced transfer of population between two levels is possible through a mechanism of quantum tunneling. We have shown here that the dynamics of nearly degenerate molecular levels is coupled to the dynamics of nuclear spins. This has to be taken into account in future theoretical work on Fe10, and at the same time the coupling between nuclei and molecular levels makes such NMR experiments a privileged tool for detailed studies once large enough single crystals become available.
Thanks are due to E. Lee for very helpful assistance, and to A. Rettori, C. Berthier and I. Svare for discussions and suggestions. This work has been partially supported by the ”Molecular Magnet” program of the European Science Foundation, and by the 3MD EU Network (contract No. ERB 4061 PL-97-0197). Ames Laboratory is operated for U.S Department of Energy by Iowa State University under Contract No. W-7405-Eng-82. The work at Ames Laboratory was supported by the director for Energy Research, Office of Basic Energy Sciences. The GHMFL is Laboratoire Conventionné aux Universités J. Fourier et INPG Grenoble I.
# The Hall effect in Zn-doped YBa2Cu3O7-δ revisited: Hall angle and the pseudogap
## Abstract
The temperature dependence of the Hall coefficient is measured with high accuracy in a series of YBa<sub>2</sub>(Cu<sub>1-z</sub>Zn<sub>z</sub>)<sub>3</sub>O<sub>6.78</sub> crystals with 0$`\le z\le `$0.013. We found that the cotangent of the Hall angle, $`\mathrm{cot}\theta _H`$, starts to deviate upward from the $`T^2`$ dependence below $`T_0`$ ($`\approx `$130 K), regardless of the Zn concentration. We discuss that this deviation is caused by the pseudogap; the direction of the deviation and its insensitivity to the Zn doping suggest that the pseudogap affects $`\mathrm{cot}\theta _H`$ through a change in the effective mass, rather than through a change in the Hall scattering rate.
The strong temperature dependence of the Hall coefficient $`R_H`$ of the high-$`T_c`$ cuprates has been considered to be one of the most peculiar properties of their unusual normal state . The rather complex behavior of $`R_H(T)`$ can be turned into a simpler one by looking at the cotangent of the Hall angle , $`\mathrm{cot}\theta _H\equiv \rho _{xx}/\rho _{xy}`$; it has been shown that $`\mathrm{cot}\theta _H`$ of cuprates behaves approximately as $`T^2`$, regardless of material and carrier concentration . This remarkable simplicity in the behavior of $`\mathrm{cot}\theta _H`$ led to the idea that $`\mathrm{cot}\theta _H`$ reflects a Hall scattering rate $`\tau _H^{-1}`$, which is different from the scattering rate $`\tau _{tr}^{-1}`$ governing the diagonal resistivity $`\rho _{xx}`$. There are two physical pictures to account for this apparent separation of the scattering rates: One picture considers that two distinct scattering times $`\tau _{tr}`$ and $`\tau _H`$, possibly associated with different particles, govern different kinds of scattering events . The other picture considers that the scattering time is strongly dependent on the position on the Fermi surface (FS) and that $`\rho _{xx}`$ and $`\mathrm{cot}\theta _H`$ are governed by the scattering events on different parts of the FS .
Separately from the above development, it has become a common understanding that in underdoped cuprates a pseudogap in the density of low-energy excitations develops at a temperature much higher than the superconducting transition temperature $`T_c`$. In underdoped YBCO, the in-plane resistivity $`\rho _{ab}`$ shows a clear downward deviation from the $`T`$-linear behavior below a temperature $`T^{*}`$, which has been argued to mark the onset of the pseudogap . This $`T^{*}`$ is notably higher than the other characteristic temperature $`T_g`$ determined from the onset of a suppression in the Cu NMR relaxation rate , which has also been associated with the pseudogap. The presence of two different temperature scales, $`T^{*}`$ and $`T_g`$, is intriguing. It was proposed recently that at the upper temperature scale $`T^{*}`$ the CuO<sub>2</sub> plane starts to develop local antiferromagnetic correlations or charged stripe correlations ; the lower temperature scale $`T_g`$ corresponds to the opening of a more robust pseudogap in the density of states , which can be observed by angle-resolved photoemission or by tunneling spectroscopy .
It was previously discussed that the pseudogap causes a deviation from the $`T^{-1}`$ behavior in $`R_H(T)`$ at $`T^{*}`$. The conspiring changes in $`\rho _{ab}(T)`$ and $`R_H(T)`$ at $`T^{*}`$ leave the $`T^2`$ behavior of $`\mathrm{cot}\theta _H`$ unchanged at $`T^{*}`$, which led to the belief that $`\mathrm{cot}\theta _H`$ is rather insensitive to the opening of the pseudogap. However, given the recent understanding that the pseudogap has two characteristic temperatures $`T^{*}`$ and $`T_g`$, it remains to be investigated how $`\mathrm{cot}\theta _H(T)`$ behaves around $`T_g`$.
Since the pseudogap effect is expected to be related to the antiferromagnetic fluctuations , there have been efforts to investigate how the pseudogap feature is affected by Zn doping onto the CuO<sub>2</sub> planes, which produces spin vacancies. The reported Zn-doping effects on the pseudogap are not simple; for example, the pseudogap feature in $`\rho _{ab}(T)`$ in underdoped YBCO crystals is almost unchanged , while the suppression in the Cu NMR relaxation rate below $`T_g`$ is diminished with only 1% of Zn . To build a complete picture of the pseudogap effect, it is also useful to investigate how the Zn doping affects the pseudogap in the Hall channel.
In this paper, we report the results of our measurements of the Hall effect in YBa<sub>2</sub>(Cu<sub>1-z</sub>Zn<sub>z</sub>)<sub>3</sub>O<sub>y</sub> crystals with $`y`$=6.78, which corresponds to an underdoped concentration. At this composition $`y`$=6.78, which gives $`T_c\approx `$75 K in pure crystals, a peak in $`R_H(T)`$ can be clearly seen and the pseudogap feature in $`\rho _{ab}(T)`$ is also clearly discernible (due to the rather wide $`T`$-linear region above $`T^{*}`$); from the literature, we can infer that $`T^{*}`$ is about 200 K (Ref. ) and $`T_g`$ is about 130 K (Ref. ). Our measurements of three samples with different Zn concentrations ($`z`$=0, 0.006, and 0.013) found that a deviation from the $`T^2`$ behavior in $`\mathrm{cot}\theta _H`$ takes place in all the samples at the same temperature $`T_0`$, which is very close to $`T_g`$, indicating that the pseudogap indeed affects $`\mathrm{cot}\theta _H`$ near $`T_g`$ and that the effect is robust against Zn doping.
There have been several publications reporting the effect of Zn doping on $`R_H`$ in YBCO, but the results do not agree. The data by Chien, Wang, and Ong indicate that $`R_H`$ of optimally-doped crystals increases with increasing $`z`$ in the whole temperature range above $`T_c`$ and the $`T`$ dependence becomes less pronounced \[it is possible that in their samples the effective carrier concentration is changing, because the slope of $`\rho _{ab}(T)`$ increases with $`z`$\]. Mizuhashi et al. reported that $`R_H`$ increases over the whole temperature range with $`z`$ (almost like a parallel shift), while the slope of $`\rho _{ab}(T)`$ in the $`T`$-linear part is unchanged . On the other hand, Walker, Mackenzie, and Cooper reported that, in their Zn-doped crystalline thin films, $`R_H`$ at 300 K remains essentially unchanged, while at low temperatures $`R_H`$ is progressively suppressed with increasing $`z`$ . In the present work, we therefore paid particular attention to reducing the errors in the measurement of $`R_H`$; the Hall voltage is measured with magnetic-field sweeps at constant temperatures, and errors due to the geometrical factors are minimized by making small voltage contacts and by determining the sample thickness with high accuracy. We note that making the voltage contacts on the side faces (not on the top face) of the crystals is essential for reducing the error and increasing the reproducibility.
The Zn-doped YBCO single crystals are grown by a flux method using pure Y<sub>2</sub>O<sub>3</sub> crucibles . All the crystals measured here are naturally twinned. The oxygen content is tuned to $`y`$=6.78 by annealing the crystals with pure YBCO powders in air at 575 °C for 37 h, and subsequent quenching to room temperature. The final oxygen content is confirmed by iodometric titration. The actual Zn concentration in the crystals is measured with inductively-coupled plasma (ICP) spectrometry, with an error in $`z`$ of less than $`\pm `$0.001.
The measurements are performed with a low-frequency (16 Hz) ac technique. Longitudinal and transverse voltages are measured simultaneously using two lock-in amplifiers during the field sweeps at constant temperatures. For the transverse signal, we achieved a high sensitivity by subtracting the offset voltage at zero field (the offset comes from a slight longitudinal misalignment between the two Hall voltage contacts). The temperature is stabilized using a high-resolution resistance bridge with a Cernox resistance thermometer. We confined the maximum magnetic field to 4 T, with which the error of the Cernox thermometer caused by its own magnetoresistance is negligibly small in the temperature range of the present study. The magnetic field is applied along the $`c`$-axis of the crystals. To enhance the temperature stability, the sample and the thermometer are placed in a vacuum can with a weak thermal link to the outside. The achieved stability in temperature during the field sweeps is better than a few mK. The data are taken from $`-4`$ T to $`+4`$ T, and then the antisymmetric component is calculated to obtain the true Hall voltage. The final accuracy in the magnitude of $`R_H`$ and $`\rho _{ab}`$ reported here is estimated to be better than $`\pm `$5%, and the relative error in the data for each sample is less than $`\pm `$2%.
Figure 1 shows the temperature dependence of $`\rho _{ab}`$ for the three Zn concentrations. Above $`\approx `$200 K, $`\rho _{ab}`$ of all the three samples shows a good $`T`$-linear behavior and the slope of this $`T`$-linear part does not change with $`z`$. As shown in the inset to Fig. 1, a downward deviation from the $`T`$-linear dependence takes place at the same temperature for all the three samples, indicating that the upper pseudogap temperature $`T^{*}`$ does not change with $`z`$. This result is in good agreement with the previous reports .
Figure 2 shows the temperature dependence of $`R_H`$ for the three samples. Our results are somewhat different from previous results on single crystals , but rather resemble the thin-film result . Notably, $`R_H`$ around 250 K does not change with $`z`$, while the peak at 110 K is clearly suppressed with increasing Zn concentration. Still, the behavior of $`\mathrm{cot}\theta _H`$ is in good agreement with the previous studies; as is shown in Fig. 3, $`\mathrm{cot}\theta _H`$ changes approximately as $`T^2`$ in a rather wide range, and the Zn impurities add a $`T`$-independent offset which is roughly proportional to $`z`$.
We note that the Zn-doping effect on $`R_H(T)`$ observed here is naturally expected in the context of the two-scattering-time scenario. One can infer that the primary effect of Zn doping is to add constant impurity-scattering rates to both $`\tau _{tr}^{-1}`$ and $`\tau _H^{-1}`$, because both $`\rho _{ab}(T)`$ and $`\mathrm{cot}\theta _H(T)`$ show essentially parallel shifts upon Zn doping. Since one can approximately express $`\tau _{tr}^{-1}\propto T`$ and $`\tau _H^{-1}\propto T^2`$ in pure samples, the scattering rates in Zn-doped samples can be approximated as $`\tau _{tr}^{-1}\propto T+A`$ and $`\tau _H^{-1}\propto T^2+B`$. From the relation $`R_HH`$ = $`\rho _{ab}/\mathrm{cot}\theta _H`$ $`\propto \tau _H/\tau _{tr}`$, $`R_H`$ is approximately written as $`R_H\propto (T+A)/(T^2+B)`$ in Zn-doped samples. If we compare this expression with that for the pure samples, $`R_H^{pure}\propto T/T^2\propto T^{-1}`$, we can infer that at high temperatures $`R_H`$ in a Zn-doped sample should approach $`R_H^{pure}`$, while at low temperatures $`R_H`$ in a Zn-doped sample is expected to become smaller than $`R_H^{pure}`$ (which can easily be seen by considering $`T\to 0`$). The above heuristic argument implies that the weakening of the $`T`$ dependence of $`R_H(T)`$, combined with a $`z`$-independent room-temperature $`R_H`$, is a rather natural consequence of Zn doping in the two-scattering-time scenario, although this effect has not been well documented before.
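The peak implied by this heuristic form can be made explicit: setting $`dR_H/dT=0`$ for $`R_H\propto (T+A)/(T^2+B)`$ gives

$$(T^2+B)-2T(T+A)=0,\qquad \mathrm{i.e.}\qquad T_{\mathrm{peak}}=\sqrt{A^2+B}-A,$$

so finite impurity offsets $`A`$ and $`B`$ alone can place a maximum of $`R_H(T)`$ at a finite temperature above $`T_c`$, even though both scattering rates remain smooth power laws. This purely mathematical peak mechanism is the one invoked for Tl-2201 below; the point of the $`R_H^{hyp}`$ construction in the inset to Fig. 2 is precisely that it fails for our underdoped samples.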
Now let us analyze the data in more detail with regard to the $`T`$ dependence of $`\mathrm{cot}\theta _H`$. A close examination of Fig. 3 tells us that the data for $`z`$=0 and 0.006 are slightly curved in this plot; we found that the best power laws to describe the data in a wide temperature range are $`T^{1.85}`$, $`T^{1.9}`$, and $`T^{2.0}`$, for $`z`$=0, 0.006, and 0.013, respectively. In Fig. 4, we show plots of $`(\mathrm{cot}\theta _H-C)/T^\alpha `$ vs $`T`$, which cancel out the power-law temperature dependence so that we can easily see the temperature range over which the $`T^\alpha `$ dependence holds well. Here, $`C`$ is the offset value (which increases with $`z`$) and $`\alpha `$ is the best power for each Zn concentration. It is clear from Fig. 4 that the power-law temperature dependence of $`\mathrm{cot}\theta _H`$ holds very well down to a temperature $`T_0`$ ($`\approx `$130 K) and then starts to deviate in all the three samples. Incidentally, the deviation occurs at a temperature very close to $`T_g`$, which is $`\approx `$130 K for $`y`$=6.78 (Ref. ). This is a strong indication that the change in $`\mathrm{cot}\theta _H(T)`$ is caused by the opening of the pseudogap . Our result shows that, unlike the Cu NMR relaxation rate, Zn doping does not diminish or shift the onset of the pseudogap marked by the change in $`\mathrm{cot}\theta _H`$ at $`T_0`$, at least up to the Zn concentration of 1.3%. Note, however, that the deviation from the power law becomes a bit weaker (or slower) with increasing $`z`$, which is similar to what is seen in the behavior of $`\rho _{ab}(T)`$ (inset to Fig. 1).
Given the fact that $`\mathrm{cot}\theta _H`$ is apparently affected by the pseudogap below $`T_0`$, it is useful to clarify how the pseudogap effect is reflected in the $`T`$ dependence of $`R_H`$, which results from the two different $`T`$ dependences of the more fundamental parameters $`\tau _{tr}^{-1}`$ and $`\tau _H^{-1}`$. For this purpose, it is instructive to see how $`R_H(T)`$ would behave if $`\mathrm{cot}\theta _H`$ continued to change as $`T^\alpha `$ down to $`T_c`$. The inset to Fig. 2 shows plots of the $`T`$ dependence of such a hypothetical $`R_H^{hyp}`$ for the three samples, where $`R_H^{hyp}`$ is calculated by dividing $`\rho _{ab}`$ by $`(C+DT^\alpha )\times H`$, where $`D`$ is the $`T`$-independent prefactor determined at temperatures above $`T_0`$ in Fig. 3. It is clear from the behavior of $`R_H^{hyp}`$ that $`R_H(T)`$ would not show a peak if $`\mathrm{cot}\theta _H`$ continued to change as $`T^\alpha `$ down to $`T_c`$. Therefore, we can conclude that the peak in $`R_H(T)`$ in underdoped YBCO is caused by the opening of the pseudogap.
It should be noted that the direction of the change in $`\mathrm{cot}\theta _H`$ at $`T_0`$ implies that $`\tau _H^{-1}`$ is enhanced when the pseudogap opens; this is opposite to the effect on $`\tau _{tr}^{-1}`$, which is reduced below $`T^{*}`$. Therefore, we cannot simply conclude that the change in $`\mathrm{cot}\theta _H`$ is caused by a reduced electron-electron scattering, which is the natural consequence of a pseudogap in the low-energy electronic excitations. One possibility to understand this apparently confusing fact is to attribute the change at $`T_0`$ to the effective mass rather than to the scattering rate; remember that $`\mathrm{cot}\theta _H=1/(\omega _c\tau _H)\propto m_H/\tau _H`$, where $`m_H`$ is the effective mass of the particle responsible for the Hall channel , so an increase in $`\mathrm{cot}\theta _H`$ is expected when the effective mass is enhanced. For example, if the pseudogap is related to the formation of dynamical charged stripes , a modification in the FS topology, which leads to a change in the effective mass, is expected. This picture is also consistent with the observed robustness of the pseudogap feature in $`\mathrm{cot}\theta _H`$ upon Zn doping, because a change in the FS topology is rather insensitive to a small amount of impurities. One might question why there is little trace of the effective-mass change in the $`T`$ dependence of $`\rho _{ab}`$. If $`\mathrm{cot}\theta _H`$ and $`\rho _{ab}`$ reflect different parts of the FS (as is conjectured in the hot/cold spots scenario ), it is possible that the modification of the FS topology alters the band mass for the Hall channel while leaving that of the diagonal channel relatively unchanged.
Finally, we note that a peak in the $`T`$ dependence of $`R_H`$ is not always caused by the pseudogap. For example, in overdoped Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub> (Tl-2201), it has been reported that $`\mathrm{cot}\theta _H`$ shows a good $`T^2`$ dependence down to near $`T_c`$ (which implies that the pseudogap does not open), and yet the peak in $`R_H(T)`$ is observed at a temperature well above $`T_c`$. In this case, the peak in $`R_H(T)`$ is just a result of the two different $`T`$ dependences of $`\tau _{tr}^{-1}\propto T^n+A`$ (1$`\le n\le `$1.9) and $`\tau _H^{-1}\propto T^2+B`$ (note that in Tl-2201 both $`\tau _{tr}^{-1}`$ and $`\tau _H^{-1}`$ have somewhat large offsets even in pure crystals ). Mathematically, $`R_H\propto (T^n+A)/(T^2+B)`$ has a peaked $`T`$-dependence, and thus $`R_H(T)`$ can show a peak well above $`T_c`$ for some combinations of $`A`$ and $`B`$, even when both $`\rho _{ab}`$ and $`\mathrm{cot}\theta _H`$ show no deviation from the power laws. On the other hand, as is demonstrated in the inset to Fig. 2, the peak in $`R_H(T)`$ of underdoped YBCO cannot be accounted for by the above origin and therefore is clearly caused by the pseudogap. This argument tells us that one should always look at the $`T`$ dependence of $`\mathrm{cot}\theta _H`$, not just the peak in $`R_H(T)`$, to determine whether the pseudogap is showing up through $`(\omega _c\tau _H)^{-1}`$.
In summary, we observed that $`\mathrm{cot}\theta _H`$ of pure and Zn-doped YBCO ($`y`$=6.78) crystals shows an upward deviation from the $`T^2`$ behavior below a temperature $`T_0`$ that is notably higher than $`T_c`$ but much lower than $`T^{*}`$. The onset temperature $`T_0`$ of this deviation, which is found to be unaffected by Zn doping, is close to the lower pseudogap temperature scale $`T_g`$ (probed by the Cu NMR relaxation rate, for example). The fact that $`\mathrm{cot}\theta _H`$ tends to be enhanced below $`T_0`$ suggests that the effect of the pseudogap is not to reduce the Hall scattering rate; we therefore propose that the effect more likely originates from a change in the Fermi surface topology, which causes a change in the effective mass. Also, we demonstrated that the peak in $`R_H(T)`$ of underdoped YBCO is not just a result of two different scattering times, but is actually a result of the pseudogap effect on $`\mathrm{cot}\theta _H`$.
We thank A. N. Lavrov and I. Tsukada for fruitful discussions, and J. Takeya for technical assistance.
# Determination of the order of phase transitions in Potts model by the graph-weight approach
## 1 Introduction
One of the less trivial questions in the study of phase transitions is the determination of the order of the transition. A large amount of work, especially on the Potts model, ranging from exact solutions to different kinds of Monte Carlo (MC) simulations (to cite only some of them), has been done in order to establish reliable and at the same time practical criteria for distinguishing between first- and second-order phase transitions.
The coexistence of two different phases at the first-order phase transition point, and its absence at the second-order phase transition point, is the main physical fact on which all of the above methods (including the present one) are based. Performing MC simulations on finite models in order to obtain the probability distribution of some quantity (such as the energy or the order parameter) which takes different values in different phases is the usual way to investigate these phase transitions numerically. In the present paper we show that the quantity which we define as the graph weight also takes different values in the coexisting phases at the first-order transition point. In Section 3 it is shown how the graph weight is related to the energy and the free energy of the system. The coexistence of phases with different energies at the first-order transition point then implies the coexistence of graphs with different weights. It can be identified by the two peaks in the graph-weight probability distribution.
In order to test the graph approach, we analyse the Potts model in two cases: the mean-field (MF) case, defined by interactions of equal strength between all particles of the model, and the one-dimensional $`(1d)`$ model with long-range (LR) interactions decaying with distance $`r`$ as $`r^{-(1+\sigma )}`$, $`\sigma >0`$. In both cases two regimes (of first- and second-order transitions) are present. The regimes are separated by a point $`q_c`$ in the MF case and by a line $`(q_c,\sigma _c)`$ in the LR case.
The present approach also has the advantage of dealing directly with non-integer values of the Potts states $`q`$. (Note that in reference the non-integer $`q`$ values were not obtained by a direct calculation, but by a histogram-like extrapolation from the results for integer $`q`$.) This may be of interest in studying the threshold $`q_c`$, separating the first- from the second-order transition regime, which is not always an integer. For example, in the three-dimensional short-range (SR) Potts model, $`q_c`$ is not an integer , and in the $`(1d)`$ Potts model with LR interaction, the onset of the first-order transition at $`q_c`$ seems to depend continuously on the interaction-range parameter $`\sigma `$ .
The plan of the paper is as follows. In the next section we define the model and introduce the corresponding graph expansion. The basic steps of MC algorithm are also explained in that section. In the third section we discuss separately the results for the MF and the LR interaction case. In the last section we summarize and discuss the advantages and open problems connected with the graph approach.
## 2 Model and method
The graph representation was already used in MC investigation of critical behaviour of Potts models . In the present approach we follow the line of reasoning given in reference , but with the basic difference that we study directly the probability distribution of graph weights and not the cluster probability distribution.
We begin by rewriting the model in the graph language. The reduced Hamiltonian of the model, with periodic boundary conditions, has the form
$$-\frac{H}{k_BT}=\sum _{i=1}^{N-1}\sum _{j=1}^{N-i}K_j\delta (s_i,s_{i+j}).$$
(1)
where $`s_i=0,1,\mathrm{\dots },q-1`$ is the Potts variable on the $`i`$-th site of the chain, while $`\delta `$ denotes the Kronecker symbol. The $`K_j`$’s denote interactions between two particles at distance $`j`$. Due to the periodic boundary conditions in the LR case, each $`K_j`$ has two contributions, involving interactions at distances $`j`$ and $`(N-j)`$. In the MF case, all interactions are equal and the boundary conditions play no role. We use a system of units where $`K_j`$ is proportional to the inverse temperature.
The usual substitution, $`\mathrm{exp}[K_j\delta (s_i,s_{i+j})]=1+v_j\delta (s_i,s_{i+j})`$, with $`v_j=\mathrm{exp}(K_j)-1`$, leads to the partition function of the form
$$Z_N=\prod _{l=1}^{N}\sum _{s_l=0}^{q-1}\mathrm{exp}(-H/k_BT)=\prod _{l=1}^{N}\sum _{s_l=0}^{q-1}\prod _{i=1}^{N-1}\prod _{j=1}^{N-i}[1+v_j\delta (s_i,s_{i+j})].$$
(2)
It is straightforward to establish a one-to-one correspondence between each term on the r.h.s. of the above equation and a graph on the chain consisting of $`N`$ particles. Each square bracket can contribute to the above product in two ways: it gives $`1`$ when there is no connection between the $`i`$-th and the $`i+j`$-th particle (inactive link), and $`v_j\delta (s_i,s_{i+j})`$ when there is a direct connection (active link) between these two particles. The analytical expression attached to each graph $`𝒢`$ will be called the graph weight, $`W_N(𝒢)`$. The explicit expression for $`W_N(𝒢)`$ follows from the r.h.s. of eq. (2). Every graph can be described as consisting of $`c(𝒢)`$ clusters. By a cluster we mean a set of particles interconnected by active links of any type and disconnected from other particles. Single particles are considered as one-particle clusters. The products of $`\delta `$-functions in the expression for each graph remove all but $`c`$ of the $`q`$-summations on the r.h.s. of (2), so that the analytical expression for $`W_N(𝒢)`$ is
$$W_N(𝒢)=v_1^{b_1(𝒢)}v_2^{b_2(𝒢)}\cdots v_{N-1}^{b_{N-1}(𝒢)}q^{c(𝒢)}.$$
(3)
The symbols $`b_j(𝒢)`$ denote the total number of active links of type $`j`$.
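As a minimal check of this expansion, take $`N=2`$: there are only two graphs, the empty one (two one-particle clusters, weight $`q^2`$) and the one with the single link active (one cluster, weight $`v_1q`$), so that

$$Z_2=q^2+v_1q=q(q-1)+qe^{K_1},$$

in agreement with the direct spin sum $`\sum _{s_1,s_2}\mathrm{exp}[K_1\delta (s_1,s_2)]`$, where the $`q`$ aligned pairs contribute $`e^{K_1}`$ each and the $`q(q-1)`$ misaligned pairs contribute 1 each.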
In this way, the summation over all $`2^{N(N-1)/2}`$ possible graphs between $`N`$ particles corresponds to the partition function of the model
$$Z_N=\sum _{\mathrm{all}𝒢}W_N(𝒢)=\sum _W𝒩_{N,W}W_N.$$
(4)
The $`𝒩_{N,W}`$ denotes the number of different graphs with the same weight $`W_N`$. The above equation is thus the graph analogy of the more familiar expression
$$Z_N=\sum _E𝒩_{N,E}e^{-E_N/T},$$
(5)
where the summation runs over all different energies $`E_N`$ of the system, while $`𝒩_{N,E}`$ is their degeneracy. Indeed, behind the analogy there is a connection between the average numbers of clusters and active links and the average energy and free energy of the system. The derivative with respect to temperature of the partition function, given by eqs. (3), (4) and (5), leads to
$$E_N=T^2\sum _{j=1}^{N-1}\frac{v_j+1}{v_j}\langle b_j\rangle \frac{\partial K_j}{\partial T},$$
(6)
while the derivative with respect to $`q`$ leads to
$$\frac{\partial \mathrm{ln}𝒩_{N,E}(q)}{\partial q}-\frac{1}{T}\frac{\partial E_N}{\partial q}=\frac{1}{q}\langle c\rangle .$$
(7)
The coexistence of two phases characterized by different values of energy corresponds, by the above relations, to the coexistence of graphs with different weights.
According to (4), one introduces the graph-weight probability distribution $`P_N`$
$$P_N=\frac{𝒩_{N,W}W_N}{Z_N}.$$
(8)
Numerically, $`P_N`$ is obtained by a simple MC simulation of Metropolis type. The basic steps are:
(a) Pick at random one link in the graph $`𝒢`$ and change its status from active to inactive or vice versa. The resulting graph is called $`𝒢^{\prime }`$.
(b) Compare a random number $`0<r\le 1`$ with the ratio $`W_N(𝒢^{\prime })/W_N(𝒢)`$. If the ratio is smaller than $`r`$, keep $`𝒢`$; otherwise save $`𝒢^{\prime }`$ as $`𝒢`$.
(c) Return to (a).
At each step, the number of clusters has to be counted. Since we deal with a model with interactions of infinite range, the determination of clusters cannot be performed by tracing their boundaries; instead, one has to check for possible connections with all the particles of the considered cluster. To do so, during the simulations we have to keep a record of the cluster structure of the system, i.e. which particle belongs to which cluster. We start with a simple and known cluster configuration. In each of the following MC steps, when one link is changed, we examine whether this has produced a change in the cluster structure: if the link is deactivated, whether the corresponding cluster is split in two or not; if the link is activated, whether two clusters get connected or not. A code sketch of this procedure is given below.
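A minimal sketch of steps (a)-(c) in code form could look as follows; the data layout and names are ours, and the deliberately naive full recount of clusters at every step stands in for the faster incremental bookkeeping described above:

```python
import numpy as np

def count_clusters(active):
    """Number of clusters (single particles included) for a boolean link matrix."""
    N = active.shape[0]
    seen = np.zeros(N, dtype=bool)
    c = 0
    for start in range(N):
        if seen[start]:
            continue
        c += 1
        stack = [start]
        seen[start] = True
        while stack:                                 # depth-first search over active links
            i = stack.pop()
            for k in range(N):
                if not seen[k] and active[min(i, k), max(i, k)]:
                    seen[k] = True
                    stack.append(k)
    return c

def metropolis_sweep(active, v, q, rng):
    """One sweep = N(N-1)/2 attempted link flips; v[j] is the weight of a type-j link."""
    N = active.shape[0]
    c = count_clusters(active)
    for _ in range(N * (N - 1) // 2):
        i, k = sorted(rng.choice(N, size=2, replace=False))
        j = k - i                                    # link type = distance
        active[i, k] = not active[i, k]              # propose G'
        c_new = count_clusters(active)
        ratio = (v[j] if active[i, k] else 1.0 / v[j]) * q ** (c_new - c)
        if rng.random() <= ratio:                    # accept with probability min(1, W'/W)
            c = c_new
        else:
            active[i, k] = not active[i, k]          # reject: restore G
    return c

# Example: q = 3 mean-field chain of N = 60 sites (all v_j equal)
N, q, K = 60, 3.0, 2.8
v = np.full(N, np.exp(K / N) - 1.0)
rng = np.random.default_rng(1)
active = np.zeros((N, N), dtype=bool)
clusters = metropolis_sweep(active, v, q, rng)
```

Counting the visited weights $`W_N`$ over many such sweeps then yields the unnormalized graph-weight probability distribution $`P_N`$.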
Counting graphs by their weights through a large number of steps gives the unnormalized graph-weight probability distribution. Unfortunately, the shortcoming of the graph approach is that it takes a much longer time to obtain the same precision of results compared to the simulation techniques that can be applied only for integer $`q`$ (like the algorithm by Luijten and Blöte ). On the other hand, a comparison of the present approach with the simple Metropolis single-spin-flip algorithm, done for a model of $`N`$ particles with integer $`q`$, shows that $`10^4`$ flips per link of every one of the $`N(N-1)/2`$ links in the graph simulations give approximately the same precision as $`10^6`$ one-particle flips per particle in the spin simulations. For the chain of 400 sites considered here, the graph approach requires about $`25`$ times more CPU time than the Metropolis algorithm on spins.
## 3 Results
The simulations were applied to the two mentioned cases of the Potts model with $`N=100,150,200,250,300,350`$ and $`400`$ particles. In the final extrapolations only the data for $`N\ge 200`$ were used. The precision of the simulations is set by performing $`10^4`$ flips per link.
### 3.1 Mean-field case
The MF case of Hamiltonian (1) is given by taking an equal strength of interaction among all the particles, i.e. $`K_j=K/N`$, where $`K`$ denotes the inverse temperature. The exact work of Kihara et al. shows that the MF Potts model has a second-order phase transition for $`q\le q_c=2`$ and a first-order phase transition for $`q>2`$. It thus qualifies as a good example for testing the present approach. In our simulation we start with the case $`q=3`$. We then consider lower values of $`q`$, closer to the threshold $`q_c`$, in order to examine how efficient our approach can be in detecting weak first-order transitions.
The shape of the finite-$`N`$ graph-weight probability distribution $`P_N`$ depends on temperature in a similar way as the energy-probability distribution does. Far from the transition temperature, $`P_N`$ has a gaussian form. At the transition, the shape of $`P_N`$ depends on the order of the transition. For second-order transitions, the shape has a non-gaussian form, but it still has only one maximum. For first-order transitions, $`P_N`$ has two maxima which transform into two $`\delta `$-functions in the thermodynamic limit. We define the temperatures $`T_N`$ where the two maxima in $`P_N`$ are of equal height. The corresponding positions on the graph-weight axis are $`W_N^\mathrm{o}`$ (for the ordered phase) and $`W_N^{\mathrm{do}}`$ (for the disordered phase). When the two-peak structure of $`P_N`$ becomes more pronounced with increasing $`N`$, one may conclude that the transition is of the first order in the thermodynamic limit, with two coexisting phases: the one described by the graph weight $`W_N^\mathrm{o}\to W^\mathrm{o}`$, stable below the transition temperature $`T_N\to T_t`$, and the other one described by $`W_N^{\mathrm{do}}\to W^{\mathrm{do}}`$, stable above $`T_t`$.
In Figure 1 we present the results of the simulations for $`q=3`$. It shows the dependence of $`P_N`$ on $`w_N\equiv \mathrm{ln}W_N/(N\mathrm{ln}N)`$ at the temperatures $`T_N`$. The peaks emerge for $`200\le N\le 400`$ and become more pronounced with increasing $`N`$. By the above arguments, the $`N`$-dependence of $`P_N`$ confirms the existence of the first-order phase transition, in agreement with the exact solution .
The behaviour for the smaller sizes considered ($`N<200`$) shows that a single maximum in $`P_N`$ may have different origins: either the transition is of the second order in the thermodynamic limit and we observe its finite-$`N`$ behaviour, or the transition is of the first order in the thermodynamic limit, but the correlation length, although finite, is comparable with the system size used and we are not able to see the coexistence of the two phases. Arbitrarily close to the threshold which separates the first- from the second-order transitions, the correlation length is expected to become arbitrarily large, and finally, whatever system size is used, it will always be too small to show the first-order character of the transition. Such a situation is expected in the MF case when $`q`$ approaches 2. The simulations on the $`q=2.8`$ model, performed at the corresponding temperatures defined as $`T_N`$, give distributions similar to those shown in Fig. 1. On the contrary, simulations performed for $`q=2.1`$ and $`2.5`$ do not show the two-peak structure in the graph-weight probability distribution for the considered system sizes.
To some extent, the above presented simulations can be compared to the exact solution. Since $`𝒩_{N,E}(q)`$ and $`E_N(q)`$ can be exactly calculated at the transition temperature $`T_{\mathrm{MF}}^{-1}=2[(q-1)/(q-2)]\ln (q-1)`$, it is easy to obtain from eqs. (6) and (7) the analytic expressions for the positions $`w_N^{\mathrm{o},\mathrm{do}}`$ of the two peaks at temperature $`T_{\mathrm{MF}}`$ in the large-$`N`$ limit:
$$w_N^{\mathrm{o}}=\frac{(q-1)^3+1}{q^2(q-2)}\,\ln (q-1)+\frac{a_q^{\mathrm{o}}}{\ln N}+\mathcal{O}\left(\frac{1}{N}\right),$$
$$w_N^{\mathrm{do}}=\frac{q-1}{q(q-2)}\,\ln (q-1)+\frac{a_q^{\mathrm{do}}}{\ln N}+\mathcal{O}\left(\frac{1}{N}\right),$$
(9)
where
$$a_q^{\mathrm{o}}=\frac{(q-1)^3+1}{q^2(q-2)}\,\ln (q-1)\,\ln \left[2\,\frac{q-1}{q-2}\,\ln (q-1)\right]+\frac{\ln q}{q-1}\left[1-\frac{\ln (q-1)}{q(q-2)}\right],$$
$$a_q^{\mathrm{do}}=\frac{q-1}{q(q-2)}\,\ln (q-1)\,\ln \left[2\,\frac{q-1}{q-2}\,\ln (q-1)\right]+\ln q\left[1-\frac{(q-1)\ln (q-1)}{q(q-2)}\right].$$
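As a quick numerical check of these expressions (with the minus signs reconstructed as above), the limiting peak positions and their $`1/\ln N`$ corrections can be evaluated directly; a sketch for $`q=3`$:

```python
import numpy as np

q, N = 3.0, 400
lnq1 = np.log(q - 1.0)
beta_t = 2.0 * (q - 1.0) / (q - 2.0) * lnq1   # 1/T_MF = 4 ln 2 for q = 3

w_o = ((q - 1.0)**3 + 1.0) / (q**2 * (q - 2.0)) * lnq1   # = ln 2 ~ 0.693
w_do = (q - 1.0) / (q * (q - 2.0)) * lnq1                # = (2/3) ln 2 ~ 0.462

a_o = w_o * np.log(beta_t) + np.log(q) / (q - 1.0) * (1.0 - lnq1 / (q * (q - 2.0)))
a_do = w_do * np.log(beta_t) + np.log(q) * (1.0 - (q - 1.0) * lnq1 / (q * (q - 2.0)))

print(1.0 / beta_t)             # T_MF ~ 0.3607, close to T_{N=400} = 0.3566
print(w_o + a_o / np.log(N))    # finite-N peak positions at N = 400
print(w_do + a_do / np.log(N))
```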
In Table 1 we compare the data for $`q=3`$ obtained by MC simulations at $`T_{N=400}=0.3566`$ with those calculated from eq. (9) for finite $`N=400`$ and for $`N\to \infty `$.
The values of $`w_N^{\mathrm{o},\mathrm{do}}`$ cited in Table 1 show that the simulated results fit the exact values for finite $`N=400`$ well. The relatively large difference ($`24\%`$ for $`w_N^\mathrm{o}`$ and $`32\%`$ for $`w_N^{\mathrm{do}}`$) with respect to the exact $`N\to \infty `$ values comes from the slow $`1/\ln N`$ convergence.
In Table 2 we present the corresponding data for $`T_N`$ (obtained in simulations) as a function of size. Since for large $`N`$ the dependence of the $`T_N`$ is approximately linear in $`N^{-1}`$, we use the extrapolation form $`T_N=T_{\mathrm{MF},t}+b/N`$. The coefficients $`T_{\mathrm{MF},t}`$ and $`b`$ were obtained by a least-squares approximation (LSA). Taking into account only the data for sizes $`200\le N\le 400`$, one obtains $`T_{\mathrm{MF},t}(q=2.8)=0.376`$ and $`T_{\mathrm{MF},t}(q=3.0)=0.360`$. In both cases, the difference between the extrapolated values and the exact ones, $`T_{\mathrm{MF}}(q=2.8)\approx 0.378`$ and $`T_{\mathrm{MF}}(q=3.0)\approx 0.361`$, is less than $`1\%`$.
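The extrapolation itself is a one-line least-squares fit against $`1/N`$; a minimal sketch (the temperature array stands for the entries of Table 2, which are not reproduced here):

```python
import numpy as np

def extrapolate_Tt(N, T_N):
    """LSA fit of T_N = T_t + b/N; returns (T_t, b)."""
    b, T_t = np.polyfit(1.0 / np.asarray(N, dtype=float),
                        np.asarray(T_N, dtype=float), 1)
    return T_t, b

# usage: T_t, b = extrapolate_Tt([200, 250, 300, 350, 400], T_N_from_table2)
```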
### 3.2 One-dimensional long-range case
In the case of the one-dimensional Potts model with interactions decaying as $`1/r^{1+\sigma }`$, the interaction strength $`K_j`$ is given by $`K[1/j^{1+\sigma }+1/(N-j)^{1+\sigma }]`$, where $`K`$ denotes the inverse temperature. This model has a phase transition at finite temperature for all $`q`$ and $`0<\sigma \le 1`$. Our recent MC simulations of the energy probability distribution for the $`q=3`$ and $`q=5`$ models have shown that the order of the transition depends on $`q`$ and $`\sigma `$. For fixed $`q=3`$, it was shown that the transition changes from first to second order with increasing $`\sigma `$.
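For concreteness, the coupling vector entering these simulations follows directly from this definition; a sketch (the second term accounts for the two arcs of the closed chain):

```python
import numpy as np

def lr_couplings(N, sigma, K):
    """K_j = K * [ 1/j^(1+sigma) + 1/(N-j)^(1+sigma) ] for j = 1..N-1."""
    j = np.arange(1, N, dtype=float)
    return K * (j**(-(1.0 + sigma)) + (N - j)**(-(1.0 + sigma)))
```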
Again, according to the relations (6) and (7), one can interpret the meaning of the graphs through their connection with the energy and the free energy of the system.
The purpose of the simulations presented in this subsection is twofold: first, we wish to test the graph approach described above by comparing the results with those obtained earlier by MC simulation of the energy probability distribution (a strong first-order transition for the model with $`q=3`$ and $`\sigma =0.1`$); second, by using the same graph approach we wish to analyze an example of a non-integer-$`q`$ model with a second-order phase transition (we choose $`q=0.5`$ with $`\sigma =0.1`$ and $`0.8`$).
The result of the simulations for the case $`q=3,\sigma =0.1`$ is shown in Fig. 2. It presents $`P_N`$ versus $`w_N`$ for system sizes ranging from $`200`$ to $`400`$, at the temperatures $`T_N`$ where two peaks of approximately the same height appear. The figure shows that the depth of the minimum between the peaks increases with $`N`$. According to the discussion in the preceding subsection, such behaviour is characteristic of a first-order phase transition. This conclusion is confirmed by the results of the MC simulations for the energy probability distribution.
The temperatures $`T_N`$ for $`200\le N\le 400`$ are presented in Table 2. Compared to the MF results, and also to those for higher values of $`\sigma `$, the results for $`\sigma =0.1`$ converge very slowly, so that, for the sizes considered here, we do not expect as good an accuracy in the extrapolation to $`N\to \infty `$. An extrapolation with one correction term, of the form $`T_N=T_{LR,t}+b/N^x`$, gives $`T_{LR,t}=7.73`$ with the convergence exponent $`x=0.1`$, while the form with two correction terms, e.g. $`T_N=T_{LR,t}+b/N^x+c/N^{2x}`$, gives by LSA fit $`x=0.12`$ and $`T_{LR,t}=7.45`$, a value which differs by $`4\%`$. The first result for $`T_{LR,t}`$ shows a discrepancy of $`8\%`$ compared to the improved finite-range scaling (FRS) result $`(T^{\mathrm{FRS}}=7.14)`$ cited in , and even larger discrepancies when compared to the renormalisation-group (RG) result $`(T^{\mathrm{RG}}=6.72)`$ and the earlier MC result $`(T^{\mathrm{MC}}=6.25)`$ . (Notice however that, due to the same slow convergence, the difference between these previous results is also quite significant when $`\sigma =0.1`$.) Apart from the reduced precision due to the small convergence exponent, an additional source of deviation can come from the crossover effect, as a consequence of the limitation to relatively small sizes for which the system does not yet exhibit the full characteristics of a first-order transition.
In order to investigate the case of non-integer $`q`$, we also considered $`q=0.5`$ with $`\sigma =0.1`$ and $`0.8`$, for which earlier work indicates transitions of classical and non-classical second-order type, respectively. The simulations were performed for a wide range of temperatures around the critical temperatures obtained earlier by FRS . As one can expect on the grounds of the discussion at the beginning of the preceding subsection, no double-peak structure in the graph-weight probability distribution has been found.
## 4 Conclusion
The graph-weight probability distribution is introduced and applied to the analysis of the order of the transition in two special cases of the Potts model: its mean-field case and the case in $`1d`$ with power-law decaying interactions. The mean-field case was analyzed for $`q=3`$ and several non-integer values approaching $`q_c=2`$. The long-range case was examined for $`q=3`$ and $`q=0.5`$, with $`\sigma =0.1`$ and $`\sigma =0.1,0.8`$, respectively. The simulations were limited by computing time to systems of sizes $`N\le 400`$.
It is shown that the graph weight is an appropriate quantity to distinguish the coexisting phases at the first-order transition point. The physical interpretation of the graph weight becomes transparent through equations (6) and (7), which relate the average number of active links and clusters to the average energy and the free energy of the system.
By analyzing the graph-weight probability distributions of the MF and $`1d`$ LR Potts models, we have observed the behaviour of distributions characteristic of first-order transitions in the case of the $`q=3`$ and $`2.8`$ MF models and the $`q=3,\sigma =0.1`$ LR model, while a transition of the second order was obtained in the case of the $`q=0.5,\sigma =0.1,0.8`$ LR models.
Transition temperatures have also been calculated. The values estimated in the thermodynamic limit agree with the exact MF values to within $`1\%`$, while for the case with power-law decaying interactions, where only approximate results are available, the discrepancy ranges from $`5\%`$ to $`24\%`$, depending on the method of comparison.
All of the above facts qualify the graph-weight probability distribution as an alternative quantity in investigations of the order of transitions in Potts models, capable of dealing directly with non-integer values of $`q`$. This makes the graph approach interesting in the effort to determine the border between the first- and second-order regions by continuously varying $`q`$ in the $`(q,d)`$ plane of SR models or the $`(q,\sigma )`$ plane of $`1d`$ LR models.
Figure captions
Fig. 1: The simulation of the graph-weight probability distribution $`P_N`$ of the $`q=3`$ MF Potts model versus $`w_N\equiv \ln W_N/(N\ln N)`$, performed at the temperatures $`T_N`$ (see text).
Fig. 2: The simulation of the graph-weight probability distribution $`P_N`$ of the $`q=3,\sigma =0.1`$ $`1d`$ LR Potts model versus $`w_N\equiv \ln W_N/(N\ln N)`$, performed at the temperatures $`T_N`$ (see text).
# Failure time and microcrack nucleation
## Abstract
The failure time of samples of heterogeneous materials (wood, fiberglass) is studied as a function of the applied stress. It is shown that in these materials the failure time is predicted with good accuracy by a model of microcrack nucleation proposed by Pomeau. It is also shown that the crack-growth process presents critical features when the failure time is approached.
PACS: 62.20.Mk, 46.30
It is very well known that different materials subjected to a constant load may break after a certain time, which is a function of the applied load \[1-5\]. Many models have been proposed to predict this failure time, but the physical mechanisms often remain unclear . Very recently Pomeau proposed a model which explains quite well the failure time of microcrystals and gels submitted to a constant stress. This model is based on the interesting idea that a nucleation process of microcracks has to take place inside the material in order to form the macroscopic crack. This nucleation process is controlled by an activation law, as is the coalescence of one phase into another in a liquid-solid transition. Based on this prediction, L. Pauchard et al. found that if a constant load is applied to a two-dimensional microcrystal, it breaks after a time $`\tau `$ given by the equation $`\tau =\tau _oe^{P_o^2/P^2}`$, where $`P`$ is the applied pressure, and $`\tau _o`$ and $`P_o`$ are constants. Bonn et al. found a similar law for gels. Pomeau predicted that for three-dimensional microscopic systems the life-time should be
$$\tau =\tau _oe^{\left(P_o/P\right)^4}$$
(1)
where $`\tau _o`$ is a characteristic time and $`P_o`$ a characteristic pressure, which mainly depend on the material characteristics, the experimental geometry and the temperature. This idea is quite interesting and merits to be checked experimentally in heterogeneous materials, such as fiberglass and wood panels. Indeed, in two recent papers , we have shown that in these materials the microcracks preceding the main crack form something like a coalescence around the final path of the main crack.
Driven by these observations, we decided to study the behaviour of these materials as a function of time and to check whether eq. (1) could be used to predict the sample life-time. To do this we monitor the acoustic emission (AE) released before the final break-up by a sample placed between two chambers, between which a pressure difference $`P`$ is imposed. In Fig. 1 a sketch of the apparatus is shown. We have prepared circular wood and fiberglass samples with a diameter of 22 cm and a thickness of 4 mm (wood) and 2 mm (fiberglass). In our samples the AE consists of ultrasound bursts (events) produced by the formation of microcracks inside the sample. For each AE event, we record the energy $`\epsilon `$ detected by the four microphones, the place where it originated, the time at which the event was detected, and the instantaneous pressure and deformation of the sample. The energy is defined as the integral of the sum of the squared signals. A more detailed description of the experimental methods can be found in .
We first investigate the behaviour of the samples as a function of time when they are submitted to a constant load. Our interest is focused on the life-time of the sample, on the behaviour of the released acoustic energy near the fracture, and on the distributions of energy and of time elapsed between two consecutive events. The behaviour of the energy as a function of time for a system submitted to a constant load has been studied by geologists, but they were not especially interested in what happens near the fracture .
We first imposed a constant strain on our samples, as was done for crystals. As the strain is fixed, every microcrack leads to a pressure decrease, so the system reaches a stationary state. This is because a microfracture weakens the material. In the absence of microcracks the pressure remains constant. One sample was submitted to a large deformation (close to fracture) and did not break after three days. Therefore, at imposed strain, the effect observed in microcrystals does not occur in heterogeneous materials. On the other hand, if a constant stress is applied to the system, it will break after a certain time which depends on the value of the applied pressure. The reason for this is that after every single microcrack the same load must be endured by the weakened sample, so it becomes more and more unstable. We have submitted several samples to different constant pressures and we have measured the time until break-up (the life-time $`\tau `$). The values obtained are well fitted by eq. (1), that is, the exponential function predicted by Pomeau; the life-time expression $`\tau =ae^{-bP}`$ proposed by Mogi , on the other hand, does not conform to our data. In Fig. 2a $`\tau `$ is plotted versus $`\frac{1}{P^4}`$ on a semilog scale, and a straight line is obtained. Even if the pressure difference is very small the sample will eventually break, although the life-time can be extremely long. For example, using eq. (1) and the best-fit parameters of fig. 2a, one estimates $`\tau \approx 5000`$ s at $`P=0.43`$ atm. Halving the imposed pressure causes $`\tau `$ to become extremely large: $`\tau \approx 4.4\times 10^{37}`$ years at $`P=0.21`$ atm.
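Numerically, eq. (1) with the fitted wood parameters gives these estimates directly; a small sketch (parameter values are the best-fit ones quoted for wood in the caption of Fig. 2):

```python
import numpy as np

tau_o, P_o = 50.5, 0.63          # s, atm (best-fit values for wood)

def lifetime(P):
    """Eq. (1): tau = tau_o * exp((P_o / P)**4)."""
    return tau_o * np.exp((P_o / P)**4)

print(lifetime(0.43))            # ~5e3 s, as quoted in the text
print(lifetime(0.215) / 3.15e7)  # halving P: astronomically long (in years);
                                 # the figure is extremely sensitive to P_o
```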
When a constant pressure is applied to the sample, the acoustic emission of the material is measured as a function of time. We find that the cumulated acoustic energy $`E`$ diverges as a function of the reduced time $`\frac{\tau -t}{\tau }`$, specifically $`E\propto (\frac{\tau -t}{\tau })^{-\gamma }`$ with $`\gamma =0.27`$ (see Fig. 2b). Notably, the exponent $`\gamma `$, found in this experiment with a constant applied pressure, is the same as the one corresponding to the case of a constant stress rate . Indeed, it has been shown that if a quasi-static constant pressure rate is imposed, that is $`P=A_pt`$, the sample breaks at a critical pressure $`P_c`$ and the energy $`E`$ released by the final-crack precursors (microcracks) scales with the reduced pressure or time (time and pressure are proportional) in the following way:
$$E\propto \left(\frac{P_c-P}{P_c}\right)^{-\gamma }=\left(\frac{\tau -t}{\tau }\right)^{-\gamma }$$
(2)
where $`\tau =P_c/A_p`$ in this case. Thus it seems that the real control parameter of the failure process is time, regardless of whether a constant pressure rate or a constant pressure is applied.
To find a general law which is valid for a time-dependent imposed stress, we generalize eq. (1), which holds only for a constant imposed pressure. In the case where the pressure changes with time, it is reasonable to consider the entire history of the load. Therefore we consider that
$$\frac{1}{\tau _o}\mathrm{exp}\left(-\left(\frac{P_o}{P}\right)^4\right)$$
is the density of damage per unit time. Breaking becomes certain after a time $`\tau `$ such that:
$$\int _0^\tau \frac{1}{\tau _o}\,e^{-\left(\frac{P_o}{P}\right)^4}\,dt=1$$
(3)
where $`\tau _o`$ and $`P_o`$ have the previously determined values. Notice that this equation is equivalent to eq. (1) when a constant pressure is applied.
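Eq. (3) can be evaluated for an arbitrary pressure history $`P(t)`$ by accumulating the damage density until it reaches unity; a sketch (trapezoidal integration; the time grid and the illustrative ramp rate are ours):

```python
import numpy as np

def breaking_time(t, P, tau_o=50.5, P_o=0.63):
    """Integrate the damage rate exp(-(P_o/P)^4)/tau_o along P(t) and
    return the first time the integral of eq. (3) reaches 1
    (None if it never does within the given history)."""
    P = np.maximum(np.asarray(P, dtype=float), 1e-12)  # avoid division by zero
    rate = np.exp(-(P_o / P)**4) / tau_o
    damage = np.concatenate(([0.0],
        np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))))
    hit = np.nonzero(damage >= 1.0)[0]
    return t[hit[0]] if hit.size else None

# e.g. a linear ramp P = A_p * t, as in Fig. 4:
t = np.linspace(0.0, 2000.0, 200001)
print(breaking_time(t, 1e-3 * t))   # breaking time for A_p = 1e-3 atm/s
```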
To test this, we have applied the load to the sample following different schemes. We first applied successive pressure plateaux in order to check whether memory effects exist. In fig. 3a the pressure applied to the sample is shown as a function of time. A constant load was applied during a certain time $`\tau _1`$, then the load was suppressed, and then the same constant load was applied again for a time interval $`\tau _2`$. The sample breaks after a total loading time $`\tau _1+\tau _2`$, which is equal to the time needed if the same load had been applied continuously, without the interval with no load. Therefore there is a memory of the load history. The life-time formula (eq. 3) is also valid if different constant loads are applied successively (fig. 3b). This concept can explain the violation of the Kaiser effect in these materials .
If the load is not constant, the life-times resulting from the proposed integral equation are still in good agreement with the experimental data. A load increasing linearly at different rates $`A_p`$ has been applied to different samples. The measured breaking times are plotted in fig. 4 along with a curve showing the values computed from eq. (3). Even if a quasi-static load is applied erratically (fig. 3c), the calculated life-time agrees with the measured one. These experiments show that eq. (3) describes well the life-time of samples submitted to a time-dependent pressure.
The question is to understand why eqs. (1) and (3) work so well for a three-dimensional heterogeneous material. Indeed, in the Pomeau formulation
$$P_o=G\left(\frac{\eta ^3Y^2}{kT}\right)^{1/4}$$
(4)
where $`Y`$ is the Young modulus, $`T`$ the temperature, $`k`$ the Boltzmann constant and $`\eta `$ the surface energy of the material under study. $`G`$ is a geometrical factor which may depend on the experimental geometry and on defect shape and density.
In our experiment, we found $`P_o=0.62`$ atm for wood, which has $`Y=1.8\times 10^8`$ N/m², and $`P_o=2.91`$ atm for fiberglass, which has $`Y=10^{10}`$ N/m². Thus the ratio between the values of $`P_o`$ found for the two materials is close to the ratio of the square roots of their Young moduli.
In contrast, temperature does not seem to have a strong influence on $`\tau `$. In fact we changed the temperature from $`300K`$ to $`380K`$, which is a temperature range where the other parameters, $`Y`$ and $`\eta `$, do not change too much. For this temperature jump one would expect a change in $`\tau `$ of about $`50\%`$ for the smallest pressure and of about $`100\%`$ for the largest pressure. Looking at fig. 4 we do not notice any change of $`\tau `$ within the experimental errors, which are about $`10\%`$. In order to keep the change of $`\tau `$ within $`10\%`$ for a temperature jump of $`80K`$, one has to assume that the effective temperature of the system is about $`3000K`$. Notice that this claim is independent of the exact values of the other parameters and of G.
These observations seem to indicate that the nucleation process of microcracks is activated by a noise much larger than the thermal one. Such a large noise can probably be produced by the internal random distribution of the defects in the heterogeneous materials that we used in our experiments. This internal random distribution of material defects evolves in time because of the appearance of new microcracks and the deformation of the sample. Therefore this internal and time-dependent disorder of the material could actually be the mechanism that activates the microcrack coalescence and play the role of a very high temperature. Similar conclusions about a disorder-induced high temperature have been reached in other disordered systems . This is an important point that merits to be deeply explored. Simple numerical simulations which we performed on fuse networks seem to confirm this hypothesis.
As a conclusion, we have shown that a model based on the nucleation of microcracks is in agreement with experimental failure data for two heterogeneous materials. This model seems to be quite general because it also explains the failure of gels and microcrystals . It will certainly be interesting to test it on other materials. However, many questions remain open. The first one concerns the high effective temperature of the system. The second is related to the behaviour of the acoustic energy close to the failure time. Indeed, if the AE is considered as a susceptibility, it is not easy to reconcile the observed critical divergence with a nucleation process. Probably the standard phase-transition description can be only partially applied to failure because of the intrinsic irreversibility of crack formation. All of these are very interesting aspects of this problem which certainly merit to be clarified in the future.
Correspondence should be addressed to S. Ciliberto (e-mail: cilibe@physique.ens-lyon.fr)
Figure Captions
1. Sketch of the apparatus. S is the sample, DS is the inductive displacement sensor (which has a sensitivity of the order of 1 $`\mu `$m). M are the four wide-band piezoelectric microphones. $`P=P_1-P_2`$ is the pressure supported by the sample. $`P`$ is measured by a differential pressure sensor (sensitivity = 0.002 atm) that is not represented here. EV is the electronic valve which controls $`P`$ via the feedback control system Ctrl. HPR is the high-pressure air reservoir.
2. Measures on wood samples: a) The time $`\tau `$ needed to break the wood samples under an imposed constant pressure $`P`$, plotted as a function of $`1/P^4`$ on a semilog scale. The dashed line represents the solution proposed by Mogi ($`\tau =ae^{-bP}`$). The continuous line is the solution proposed by Pomeau for microcrystals ($`\tau =\tau _oe^{(P_o/P)^4}`$). In the plot $`\tau _o=50.5`$ s and $`P_o=0.63`$ atm. Every point is the average over 10 samples. The error bar is the statistical uncertainty. For the fiberglass samples, we find $`\tau _o=44.6`$ s and $`P_o=2.91`$ atm. b) The cumulated energy $`E`$, normalized to $`E_{max}`$, as a function of the reduced control parameter $`\frac{\tau -t}{\tau }`$ in the neighborhood of the fracture point (case of imposed constant pressure). The circles are the average over 9 wood samples. The solid line is the fit $`E=E_0\left(\frac{\tau -t}{\tau }\right)^{-\gamma }`$. The exponent found, $`\gamma =0.26`$, does not depend on the value of the imposed pressure. In the case of a constant pressure rate the same law has been found .
3. The imposed time-dependent pressure (bold dotted line) is plotted as a function of time in the case of wood samples. The continuous line is the integral in time of the function $`f\left(P\right)=\frac{1}{\tau _o}e^{-P_o^4/P^4}`$. On the basis of eq. (3) the predicted breaking time $`\tau `$ is obtained when the integral of $`f\left(P\right)`$ is equal to 1. The horizontal distance between the two vertical dashed lines in each plot represents the difference between the predicted and the measured breaking time. In a) a constant pressure has been applied during about 700 s, then the load is suppressed and then the same constant load is applied again. The difference between the life-time predicted by eq. (3) and the experimental result is $`3\%`$. b) Here two pressure plateaux of different values are successively applied to the sample. The difference between the measured and the predicted life-time is $`5\%`$. In c) an erratic pressure is applied to the sample. Here the error is $`10\%`$.
4. A load increasing linearly at different rates $`A_p`$ has been applied to different samples. The measured breaking times are plotted as a function of $`A_p`$ on a log-log scale; circles and squares represent the measures on wood and fiberglass samples, respectively, at T=300 $`K`$. Bold triangles represent measures on wood samples at T=380 $`K`$. The lines are the life-times calculated from eq. (3) using the best-fit values for $`P_o`$ and $`\tau _o`$.
# Activated mechanisms in amorphous silicon: an activation-relaxation-technique study
## I Introduction
The properties of amorphous semiconductors can vary widely as a function of the details of the preparation method: hot-wire deposition, electron-beam deposition, and ion bombardment can yield samples with a significant spread in electronic and structural properties. Such diversity is a reflection of the immensely large number of metastable configurations of nearby energy.
The topological structure of this complex energy surface can be sampled indirectly, for instance by light illumination or ion bombardment, bringing a sample from one metastable state to another, often in a (statistically) reversible manner. A direct study of this energy surface requires the identification at the microscopic level of the mechanisms responsible for moving from one metastable state to another, and is much harder to perform. Because of the high degree of disorder, very few techniques can provide a truly microscopic representation of the bulk dynamics . At best, one can extract some quantity averaged in time and space, providing a very rough picture of what is really happening.
In spite of these difficulties, the past few years have seen significant experimental and theoretical efforts to provide some insight into the bulk dynamics. As it is becoming evident that little hard and precise information about the local environment can be obtained via static methods, more and more emphasis is put on the development of techniques to sample dynamical quantities.
This article presents the first detailed study of the microscopic nature of activated mechanisms in a-Si. The method that we have used is the activation-relaxation technique (ART), introduced by us a few years ago . Here, we apply it to an empirical model of amorphous silicon as described by a modified Stillinger-Weber potential . The activation-relaxation technique allows one to concentrate on the activated mechanisms that are responsible for most of the dynamics below melting. We have already reported a first stage of this study in a recent Letter , where we concentrated only on those mechanisms involving no coordination defects. Here we look at a wider spectrum of mechanisms, with a special emphasis on the diffusion of coordination defects.
This paper is organized as follows. In section II we describe the activation-relaxation technique and give the details of the simulations that result in the data base of events. Next, in section III, we present the results extracted from this data base. Because this type of work is rather new, we also discuss the analysis of the data in some detail. The results contain both a global analysis and a more detailed topological classification.
## II Method of generation of the event database
In this article, we concentrate on identifying and classifying the activated mechanisms that are responsible for relaxation and diffusion in amorphous silicon. This is done using the activation-relaxation technique (ART), an energy-landscape method that searches for barriers and new states in complex landscapes.
As we shall see, the results we obtain are in general agreement with many of the experimental results mentioned above and provide some bounds on the type of mechanisms that can take place in a-Si. Of course, empirical potentials have strong limitations, especially far away from the equilibrium position for which they are developed. Without giving too much weight to the exact numerical values of the activation energies, it is nevertheless possible to give a first broad picture of the wide variety of mechanisms that can be associated with diffusion and structural relaxation. To go beyond the results presented below, it will be generally necessary to use more accurate interactions such as tight-binding or plane wave methods.
### A Sampling one event with ART
The aim of ART is to sample minimum-energy paths, starting in a local energy minimum, passing through a first-order saddle point (where the minimum-energy path has its highest point), and leading to another local energy minimum. ART does this in three stages: leaving the so-called “harmonic well”, convergence to the saddle point, and convergence to the new minimum. The last stage, relaxation to a local energy minimum, is straightforward and can be achieved by a wide range of standard minimization techniques, for instance the conjugate gradient (CG) method . The first two stages represent the activation to a saddle point and are specific to ART. As the end conditions of the first stage (leaving the harmonic well) are set by the actual implementation of the second stage (finding the saddle point), we will discuss these two stages in reversed order.
The second stage is convergence to a first-order saddle point. At such a saddle point, the gradient of the energy is by definition zero in all directions, and the second derivative of the energy is positive in all directions but one. The single direction with negative curvature is that along which the minimum-energy path proceeds. Within ART, we make the assumption that the direction of the minimum-energy path at the saddle point and the direction towards the original local energy minimum have a significant overlap, i.e., that their dot product is significantly non-zero. If this assumption holds, a modified force field can be introduced in which the saddle point is a minimum . This field is defined by the force $`𝐆`$:
$$𝐆=𝐅-\left[1+\frac{\alpha }{1+\mathrm{\Delta }x}\right]\left(𝐅\cdot \widehat{𝚫𝐱}\right)\widehat{𝚫𝐱},$$
(1)
where $`𝐅`$ is a $`3N`$-dimensional force vector obtained from the first derivative of the potential energy, $`𝚫𝐱`$ is the displacement vector from the minimum, and $`\alpha =0.15`$ is a parameter determining how fast the motion towards the saddle point is. In this second stage, the redefined force $`𝐆`$ is followed iteratively, usually along conjugate directions, starting from just outside the harmonic well around the original local energy minimum. Ideally, this process would bring the configuration directly to the saddle point and stop there, but because the projection is an approximation of the valley, the configuration passes in the vicinity of the saddle point without halting. We therefore stop the activation as soon as the component of the force $`𝐅`$ projected onto the displacement $`𝚫𝐱`$ changes sign, an indication that a saddle point has just been passed. At that point, the activation is stopped, the configuration is stored as the activated configuration, and we move on to the relaxation. A more accurate convergence to the saddle point can be obtained by following directions determined by the eigenvectors of the dynamical Hessian , but this is too costly for the system sizes of interest here.
Most saddle points, even those belonging to energetically favorable minimum-energy paths, cannot be reached by following this redefined force $`𝐆`$ from a point well inside the harmonic region, i.e., the region of the energy landscape that is well approximated by a high-dimensional parabola centered on a local minimum. We refer the reader to Ref. for a detailed discussion of the origin of this problem and just describe our algorithm here. We thus have to make sure that the configuration has left the harmonic well before following $`𝐆`$. This first stage is implemented as follows. In a local energy minimum configuration, a few atoms and their nearby neighbors are displaced randomly. The total displacement is small, typically 0.01 Å, and serves to create a non-zero force. At this point, the force will mostly point back to the minimum, and the component of the force parallel to the displacement will dominate the perpendicular components. We then increase the displacement from the original minimum until this no longer holds, at which point we are outside the harmonic region and can start the second stage of the activation, following $`𝐆`$.
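A condensed sketch of this activation phase (stages one and two); the `force` callback returning the true $`3N`$-dimensional force is assumed to be supplied by the simulation code, and the kick amplitude, growth factor and step size are illustrative choices:

```python
import numpy as np

def g_force(F, dx, alpha=0.15):
    """Eq. (1): invert the component of F parallel to the displacement
    from the minimum, leaving the perpendicular components untouched."""
    u = dx / np.linalg.norm(dx)
    return F - (1.0 + alpha / (1.0 + np.linalg.norm(dx))) * np.dot(F, u) * u

def activate(x_min, force, kick=0.01, step=0.05, max_iter=5000):
    """Stage 1: grow a small random displacement until the restoring
    (parallel) force no longer dominates; stage 2: follow G until
    F . dx changes sign, i.e. a saddle point has just been passed."""
    x = x_min + kick * np.random.randn(*x_min.shape)
    for _ in range(max_iter):                      # stage 1
        dx, F = x - x_min, force(x)
        u = dx / np.linalg.norm(dx)
        F_par = np.dot(F, u)
        if abs(F_par) < np.linalg.norm(F - F_par * u):
            break                                  # outside the harmonic well
        x = x_min + 1.1 * dx                       # grow the displacement
    for _ in range(max_iter):                      # stage 2
        dx, F = x - x_min, force(x)
        if np.dot(F, dx) > 0.0:                    # saddle just passed
            return x                               # activated configuration
        x = x + step * g_force(F, dx)
    raise RuntimeError("no saddle point found")
```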
### B Generating the database
The database of events that we use in the current manuscript is generated as follows. The energy landscape is described by the Stillinger-Weber potential , modified as described below. This empirical interaction includes a two-body and a three-body interaction:
$$E=\sum _{\langle ij\rangle }V(r_{ij})+\sum _{\langle ijk\rangle }V(r_{ij},r_{ik},\theta _{jik})$$
(2)
where the brackets $`\langle ij\rangle `$ and $`\langle ijk\rangle `$ indicate that each bond or angle is counted only once; the two-body potential is
$$V(r_{ij})=ϵA\left(Br_{ij}^{-p}-1\right)\mathrm{exp}\left[(r_{ij}-a)^{-1}\right]$$
(3)
and the three-body potential is
$$V(r_{ij},r_{ik},\theta _{jik})=ϵ\lambda \left(\mathrm{cos}\theta _{jik}+\frac{1}{3}\right)^2\mathrm{exp}\left[\gamma (r_{ij}-a)^{-1}\right]\mathrm{exp}\left[\gamma (r_{ik}-a)^{-1}\right]$$
(4)
The numerical values of the parameters are $`A=7.050`$, $`B=0.6022`$, $`p=4`$, $`a=1.80`$, $`\lambda =31.5`$, $`\gamma =1.20`$, $`\sigma =2.0951`$ Å and $`ϵ=2.1682`$ eV; this set is identical to that used by Stillinger and Weber except for $`\lambda `$, which has been increased by a factor of 1.5 in order to provide a more appropriate structure for a-Si . Recent work on fracture underlines the fact that none of the available empirical interaction potentials describes silicon exactly . This is particularly the case for energy barriers and densities of defects; trends, more than exact values, are therefore what we are looking for here.
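For reference, the two- and three-body terms of eqs. (3) and (4) with this parameter set, in reduced units of $`\sigma `$ (both terms vanish identically beyond the cutoff $`a`$); a sketch:

```python
import numpy as np

A, B, p, a = 7.050, 0.6022, 4, 1.80      # cutoff a in units of sigma
lam, gamma, eps = 31.5, 1.20, 2.1682     # lambda = 1.5 x the original 21.0

def v2(r):
    """Two-body term of eq. (3); r in units of sigma, energy in eV."""
    if r >= a:
        return 0.0
    return eps * A * (B * r**(-p) - 1.0) * np.exp(1.0 / (r - a))

def v3(rij, rik, theta_jik):
    """Three-body term of eq. (4); penalizes deviations from the
    tetrahedral angle, cos(theta) = -1/3."""
    if rij >= a or rik >= a:
        return 0.0
    return (eps * lam * (np.cos(theta_jik) + 1.0 / 3.0)**2
            * np.exp(gamma / (rij - a)) * np.exp(gamma / (rik - a)))
```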
We report here on results obtained from three independent runs. Each initial 1000-atom cell is constructed by a random packing in a large cubic cell. This configuration is then minimized at zero pressure to a nearby minimum state. ART is then applied iteratively with a Metropolis temperature of 0.25 eV in order to bring the configuration to a well-relaxed amorphous state; we consider that a system is “well-relaxed” when the energy does not decrease significantly over hundreds of events. This takes place after about 5000 trial events, with a success rate slightly above 65%.
After reaching a plateau in energy, for each event we store the initial minimum configuration, the saddle point configuration, and the final minimum configuration. Over the three independent runs, we collected a set of 8106 events. Figure 1 shows the radial distribution function for run C at the beginning (C1) and end of the data acquisition run, 5000 trial-events later (C5000). Although radial distribution functions cannot discriminate easily between realistic and non-realistic structure , the one obtained here is in good agreement with experimental data .
Table I shows the structural properties of these two networks. A third of the bonds, involving 42% of the atoms, have been changed and yet the total energy and structural properties are almost unchanged. This is an indication that the configuration, through a sequence of events, has evolved considerably on an almost constant energy surface.
The bond-angle distribution and the coordination are comparable with those of the best networks obtained with realistic potentials . A comparison with experiment is more difficult. Experimental works on amorphous silicon samples prepared by ion bombardment report that homogeneous samples of a-Si, without voids, have a density about 1.8 % lower than c-Si . Recent high-Q X-ray diffraction measurements on similarly prepared samples show that well-relaxed a-Si could have an average coordination as low as 3.88, much below the 4.0 generally considered to be appropriate for ideal a-Si . The authors of this work conclude that the high density of dangling bonds should be responsible for the lower density of these samples. This analysis is not supported by our simulation, which leads to cells with a density as low as 7 % below that of the crystal, while keeping the number of defects lower than that seen in this experiment. Clearly more work needs to be done to clarify this situation, especially since the 12 % of dangling bonds seen by high-Q diffraction is at least an order of magnitude higher than what can be expected from either differential scanning calorimetry (1% defects) or electron-spin resonance measurements (0.04% defects) . These results also contrast with ab-initio and tight-binding computer simulations, which tend to find higher coordination, often above 4.0 . The origin of this discrepancy is hard to identify at the moment, but it can be due to a combination of inaccurate interactions and/or the differences in time and length scales between simulations and experiments. In the case of our simulations, the stability of the configurations after 5000 trial-events suggests that we have reached some type of thermalization for this given interaction potential.
## III General properties of events
### A Asymmetry, activation energy, and displacement
The events can first be classified in terms of three quantities: the energy asymmetry, the activation energy, and the total atomic displacement (see Fig. 2). The simplest quantities to use for the classification of events are the barrier and asymmetry energies. In Ref. , we give the distribution of these two quantities for the full set of 8106 events. Both distributions are wide and relatively smooth. To push the analysis further, it is useful to establish a first classification based on topological properties of the network.
The radial distribution function of relaxed a-Si goes to zero between the first- and second-neighbor peaks; the middle of this region lies at around 3.05 Å. This allows us to establish a clear definition of nearest-neighbor bonds between atoms. Structural properties of the first and last minimum-energy configurations of run C in our database are listed in Table I; we find similar numbers for runs A and B. Most atoms that change neighbors in an event are four-fold coordinated both before and after. We discern three topological classes of events: if all atoms involved in an event keep their coordination unchanged between the initial and final state, we term it a perfect event; if topological defects change place during an event, but their total number is conserved, we have a conserved event, which describes defect diffusion; the remaining events are called creation/annihilation events.
The exact numbers of events in each class depend slightly on the cut-off radius between first and second neighbors; although low-energy configurations have a clear gap between the first- and second-neighbor peaks in the radial distribution function, many saddle and some high-energy final configurations show structure in this gap. For consistency, we have chosen a fixed cut-off at 3.05 Å for all our analysis; the topological classification of some events might be affected by this choice, but the overall conclusions are not sensitive to the fine tweaking of that value.
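This classification can be stated compactly in code; a sketch operating on per-atom neighbor sets built with the 3.05 Å cut-off (counting a coordination defect as any atom that is not four-fold coordinated, which is our reading of the definition above):

```python
def classify_event(neigh_0, neigh_1, ideal=4):
    """neigh_0, neigh_1: dicts atom -> set of neighbors (3.05 A cutoff)
    in the initial and final minimum; returns the topological class."""
    if all(len(neigh_0[i]) == len(neigh_1[i]) for i in neigh_0):
        return "perfect"                    # every coordination unchanged
    n_def_0 = sum(len(n) != ideal for n in neigh_0.values())
    n_def_1 = sum(len(n) != ideal for n in neigh_1.values())
    return "conserved" if n_def_0 == n_def_1 else "creation/annihilation"
```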
In a previous Letter, we discussed in some detail the class of perfect events (802 events) . The bulk of our database consists of creation/annihilation events, with 5325 events, but until now we have not succeeded in extracting interesting characteristics from these. In this paper, we focus on the 1979 conserved events in our database, describing directly the diffusion of defects without the creation or annihilation of coordination defects.
As already mentioned, we produced 1979 conserved events, i.e., events where the number of coordination defects is identical in the initial and final state. The distribution of barriers and minimum to minimum energy differences is shown in Figure 3(a). The front of the barrier distribution peaks at about 4.5 eV while the asymmetry peaks at about 2.1 eV. This distribution is very similar to the total distribution shown in Fig. 1 of Ref. . In fact, one of the most striking results we obtain is that the distribution of barriers and asymmetry is almost independent of the subset of events we select. The bottom box of Fig. 3, for example, compares the asymmetry distribution for the three classes of events: perfect, conserved, and creation/annihilation. Except for the peak at 0 eV in the case of perfect events, involving atomic exchanges without modification of the overall topology of the network, the three distributions fall almost on top of each other at low asymmetry. The maximum of each distribution is slightly shifted, however, with peaks at about 2.3 eV for the conserved events, 2.5 eV for the perfect events and 3.2 eV for creation/annihilation events.
The bias towards higher energies in the asymmetry distribution is expected: the distribution includes all attempted events, and not just those that are accepted; since the run is started in an already well-relaxed configuration, most events lead to higher energy configurations.
A similar insensitivity can be seen with the activation barriers. Although the precision on the barrier height is about 0.5 eV within the modified-force approximation, we can still say something about activation. Fig. 3 also plots the distribution of barriers. It peaks at about 4.0 eV. Taking into account the uncertainty on the empirical potential, this result is consistent with experimental measurements. Shin and Atwater conclude, based on conductivity measurements, that the activation-energy spectrum extends from as low as 0.25 eV to about 2.8 eV, supposing that a prefactor (entering logarithmically in the relation) is of order 1. Using isothermal calorimetry, Roorda et al. find relaxation with a characteristic time of about 110 s between 200 and 500 °C, also indicating a high activation barrier. Moreover, these results seem to depend only weakly on the method of preparation (vacuum evaporation or ion implantation) and are limited by the fact that above 500 °C the samples tend to crystallize. Both results are therefore also consistent with a continuous distribution of activation barriers.
The tail of the distribution goes much beyond experimental values, and extends past 20 eV. Although such mechanisms are clearly unphysical, they underscore the fact that ART does not suffer from slowing down as the height of the barrier increases. This method is perfectly at ease with barriers of 0.1 eV as well as those of 25 eV. In the rest of this paper, we will concentrate on events with the 1147 more physical barriers of less than 8 eV.
Figure 4 shows the histogram of the total displacement, defined as the square root of the sum of the squares of each single-atom displacement, for these conserved events with a barrier of less than 8 eV. There is again little structure in the distribution. We note that the average displacement to the saddle point is shorter than that to the new minimum. Based on preliminary simulations in other materials, this trend, although intuitively reasonable, is not always present and might be indicative of a certain type of activation; more work remains to be done to clarify this issue.
### B Volume expansion/contraction per event
It is often suggested that there should be a correlation between the energy of an event and its size. This immediately raises the question as to whether the mechanisms by which the structure rearranges itself are local or non-local.
Based on the observation that isothermal heat-release curves obey bimolecular reaction kinetics and that the ion-beam-induced derelaxation scales with the density of atoms displaced by the ion bombardment while appearing to be independent of electronic energy-loss mechanisms, Roorda concludes that point-defect annihilation should control the structural relaxation. It remains unclear, however, whether this means the actual removal of defects or simply a clustering or a passivation by hydrogen atoms. A similar relaxation (to within a factor 2) of ion-bombarded c-Si and a-Si suggests that both materials have similar, though not necessarily identical, relaxation mechanisms. This similarity would point towards relatively local mechanisms of defect diffusion and relaxation, since at this length scale crystalline and amorphous materials resemble each other closely.
Based on EPS data, however, Muller et al. suggest that from 1000 to 10 000 atoms have to move marginally in order to enable a single dangling-bond defect to anneal . This would be, at least qualitatively, in agreement with XPS measurements suggesting that the formation of dangling bonds in a-Si:H under exposure to light is also accompanied by long-range structural rearrangements of the amorphous network , but it is in clear disagreement with the conclusion of Roorda et al. mentioned above. The annealing mechanism suggested by this group, for example, is the mutual annihilation of low- and high-density defects (vacancy/interstitial). Roorda et al. propose defects similar to those of c-Si, but not necessarily identical . Mössbauer experiments with Sn suggest that vacancies also exist in a-Si . Part of the difficulty in assessing more clearly the type of defects involved in relaxation and diffusion is that amorphous silicon crystallizes at about 500 °C, rendering studies of self-diffusion very difficult .
Because it is not always clear what size means, we consider here three definitions: number of atoms, total displacement and local density deformations.
The size of events is usually related to the number of atoms involved in the rearrangement of the network, i.e., the number of atoms that are displaced by more than a threshold distance $`r_c`$. In Fig. 5 we plot the number of atoms involved as a function of $`r_c`$, for the conserved and the full set of events. A local topological rearrangement will generally push the surrounding atoms outwards, or occasionally pull them slightly inwards. The distance $`\mathrm{\Delta }r`$ over which the surrounding atoms are pushed away (pulled inwards) will, because of elasticity arguments, scale with the distance $`r`$ from the rearrangement as $`\mathrm{\Delta }r\propto r^{-2}`$, for sufficiently large $`r`$. Alternatively, this can be rewritten as $`\mathrm{\Delta }r\,r^2=V_e`$, where $`V_e`$ is a constant volume independent of the distance $`r`$. The distance over which the atomic displacement $`\mathrm{\Delta }r`$ exceeds a threshold displacement $`r_c`$ is then equal to $`r=\sqrt{V_e/r_c}`$, and the number of atoms displaced by more than $`r_c`$ will then scale with $`r_c`$ as $`N_e\propto V_e^{3/2}r_c^{-3/2}`$. Figure 5 shows that this scaling holds in the region where $`0.1<r_c<1.0`$ Å and $`N_e\ll N`$. Selecting a threshold at 0.1 Å (the lower bound, but also the typical vibrational displacement at room temperature in Si), we find that, on average, about 50 atoms are involved in an event, clearly beyond a very local mechanism but well below the highest numbers proposed.
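This measurement is straightforward given the per-atom displacement vectors between the two minima; a sketch that also extracts the log-log slope, expected to be close to $`-3/2`$ over the window quoted above (the displacement array is assumed given):

```python
import numpy as np

def n_involved(disp, r_c):
    """Number of atoms displaced farther than r_c in an event;
    disp is an (N, 3) array of per-atom displacement vectors."""
    return np.count_nonzero(np.linalg.norm(disp, axis=1) > r_c)

def scaling_exponent(disp, n_pts=20):
    """Log-log slope of N_e(r_c) over 0.1-1.0 A; elasticity predicts -3/2."""
    r_c = np.logspace(-1.0, 0.0, n_pts)
    N_e = np.array([n_involved(disp, rc) for rc in r_c], dtype=float)
    return np.polyfit(np.log(r_c), np.log(N_e), 1)[0]
```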
The size of the events can thus also be measured without the introduction of a threshold distance, by measuring $`V_e=\mathrm{\Delta }r\,r^2`$, averaged over some range of $`r`$. Figure 6 shows the correlation between the event volume and the energy barrier and asymmetry. The figure shows that most proposed events tend to expand the sample locally, by about $`V_e\approx 1`$ Å³. Moreover, although considerably scattered, the data suggest a linear relation between the energy and the volume expansion, with about 4 eV per Å³.
We can also search for correlations between the total displacement and the asymmetry energy, which could relate the diffusion length with the energy. This is plotted, for conserved events, in Fig. 7; there is little correlation. A similar negative result is obtained if we look at the correlation between the displacement and the activation energy.
The picture that emerges from these three approaches is that events are relatively localized, involving around 50 atoms, and require some local expansion to take place, as would be expected. Correlations between the size of events and the energy are difficult to establish due, in large part, to the wide spread of local environments typical of disordered systems. These results can be used to put bounds on models of diffusion and relaxation in amorphous silicon.
### C Energetics of coordination defects
Weak bonds and coordination defects form another recurring theme in the study of dynamics and relaxation in a-Si. In this section, we discuss their properties in the sub-set of conserved events. The number of bonds broken/created at the saddle point is $`3.7\pm 1.4`$ and $`3.4\pm 1.4`$ with bond lengths of $`2.43\pm 0.09`$ and $`2.62\pm 0.12`$ Å. At the final point, it is $`4.3\pm 1.6`$ (equal number of bonds created and broken for a conserved event) with respective bond lengths of $`2.46\pm 0.09`$ and $`2.55\pm 0.10`$ Å.
These numbers are very similar to those obtained by concentrating on perfect events. This reflects one of our main conclusions: that correlation between different properties of the events is weak. If the bond length of the resulting states is typically longer than that of the initial configuration, it is simply because the final configurations have typically a significantly higher energy. There is no preference for a single stretched bond; it is the medium-range strain that matters.
One would expect a relation between relaxation and defect annihilation, but how strongly linked these two are is not clear a priori. Relatively little is known directly . To a first approximation, the total energy should follow the density of defects . Because of the similarity between crystalline and amorphous Si, Roorda et al. concluded that relaxation occurs through defect annihilation . Polman et al. find that Cu diffusion in annealed a-Si is 2 to 5 times faster than in non-annealed samples; this increase in diffusion might indicate a decrease in the number of defects trapping the Cu .
We see a certain correlation between defects and total energy, as plotted in Figure 8(a). The distribution of energy for a given number of defects is wide, going from 12 eV for 35 defects to about 25 eV for 50 defects. Looking at the correlations of the energy with specific types of defects, we find much smaller impacts, with a significant spread in the total energy for a given number of 3-fold or 5-fold defects, as is shown in the bottom panel of Figure 8.
Clearly, therefore, the definition of defects must also include strained environments and not just the coordination defects mentioned here: relaxing some highly strained ring might require the creation of a bond defect. Once again, the situation is much less clear than is generally thought. We must emphasize here that for less relaxed samples, the correlation between the energy and the number of coordination defects is much better.
### D Topological classification
To go beyond the scalar picture given above, we need to consider in more detail the nature of the topological changes. The classification scheme applied here is an extension of the scheme that we used for the perfect events in an earlier Letter . All atoms that change their neighbors are labeled alphabetically. The topological change is determined by specifying the list of all bonds before the event, and of all bonds after.
For perfect events, there are as many bonds before as after the event, and moreover, the set of all these bonds can always be organized into a ring of alternating created and destroyed bonds. This ring can be represented by the sequence of atoms visited. For instance, in event abacbd, the bonds before and after the event are ab, ac, bd, resp. ba, cb, da. Thus, bonds ac and bd are replaced by ad and bc, while bond ab is present both before and after the event. Many equivalent rings exist, but the convention of always using the alphabetically lowest label makes this classification unique.
If the event is not perfect, this classification scheme has to be modified. In case a single bond (or dangling bond) jumps from one atom to another, the set of bonds does not form a closed ring, but an open chain of alternating bonds before and after the event; the same classification scheme can still be used, with the note that the first and last atoms are not bonded. It also happens that an event comprises a series of bond exchanges in disconnected regions (typically still nearby, interacting via the strain field). For such events, we introduce the concept of “ghost bonds” (represented by a dot in the label), which are added to either the set of bonds before or after the event.
Using this topological analysis, we can have a first crack at the events. 1148 events in our database have an activation barrier of less than 8 eV, the other events might be considered unphysical. Of these, 447 are too complicated (involving too many defects or too many disconnected regions) to be analyzed, leaving 701 labeled events.
For conserved events, there are no dominant labels, contrary to what is found for perfect events, where three labels account for 85% of the events. Such diversity underscores the difficulty in trying to identify mechanisms and relate them to experimental information. Clearly, the dynamics of defects in amorphous silicon is much more complicated than is usually thought.
In the large set of labels, an often occurring theme is a ring of bonds corresponding to the Wooten-Winer-Weaire (WWW) bond-exchange mechanism , which in our classification scheme has the label abacbd. We find that up to two WWW moves can take place in a single event. This rearrangement changes the local ring structure and redistributes the strain, affecting the jump barrier seen by the dangling bond.
After removal of all these rings from the events as such, what remains is often a single coordination defect that jumps to a 1st, 2nd or higher neighbor. Table II presents a partial list of such labels. It is remarkable that the diffusion of coordination defects requires, in general, a topological rearrangement more complex than one would expect from the displacement of the bonds. Only nine events are of the abc type, the smallest rearrangement possible for the motion of a bond.
Other events involve longer jumps, at least in topological terms. The abacbde events, for example, reflect this type of behavior. We show one such event in Fig. 9. Very few of the classified events displace more than one defect. (It could be that in the non-classified events this situation occurs more often.)
## IV Conclusions
Based on an extensive list of events in a-Si, representing a wide range of characters, we can provide a general overview of the nature of the microscopic mechanisms responsible for the diffusion of topological defects in this material. To do so, we have concentrated on a class of events that involve the displacement of coordination defects while keeping their overall number constant.
Analyzing a wide range of structural and topological properties of these events, we find that: (1) In a well-relaxed sample, there is little correlation between the number of defects and the total energy; the relaxation of strain can take place in more subtle ways, sometimes involving the creation of topological defects. (2) Taking into account the use of an empirical potential, the activation barriers are in agreement with experimental values. (3) We find little correlation between the activation barrier or the asymmetry and the deformation of the network, either in terms of the number of atoms involved or of the total displacement; the energy is best described in terms of the volume of an event. (4) A topological analysis of the conserved events shows an unexpected richness; we find literally hundreds of different mechanisms that cannot easily be put into a few classes. As a rule, the Wooten-Winer-Weaire bond-exchange mechanism, dominant for perfect events, still plays a major role. Defect diffusion is often local, with coordination defects jumping from one atom to a neighbor, but it can also reach as far as the fourth neighbor, in a chain-like fashion.
These results can be used to put bounds on models of diffusion and relaxation in amorphous silicon. For example, the Fedders and Branz model for a–Si:H states that (1) relaxing the defect structure often requires several atoms to move simultaneously, (2) only by cooperative motion do the position changes lower or conserve the total energy, (3) the size of the barrier generally increases with the number of atoms that must move simultaneously. Our results support points (1) and (2) but not (3).
This study represents only a first step in the study of microscopic activated mechanisms in a-Si. More work remains to be done to converge barriers using more accurate interaction potentials. It is important also to try to connect some of these results with hard experimental numbers, a challenge both for theorists and experimentalists. Already, however, we can see that the dynamics of disordered materials promises to be much more complicated than was thought before.
## Acknowledgements
N.M. acknowledges partial support from the NSF under grant number DMR-9805848 as well as generous time allocations of the computers of the High Performance Computing Center at Delft Technical University, where part of the analysis was done. GB acknowledges the High Performance Computing group at Utrecht University for computer time.
|
no-problem/9905/quant-ph9905102.html
|
ar5iv
|
text
|
# Supersymmetry of a spin 1/2 particle on the real line
## 1 Introduction
Supersymmetric (SUSY) quantum mechanics was introduced by Witten as a laboratory for investigating SUSY breaking, which is one of the fundamental issues in SUSY quantum field theory . Prior to Witten’s paper Nicolai had shown that SUSY could also be a useful tool in nonrelativistic quantum mechanics . Subsequently SUSY quantum mechanics has proved to be interesting on its own merit and has been studied from different points of view .
In the present paper we shall study a generalised one dimensional SUSY quantum mechanical problem concerning the motion of a spin $`\frac{1}{2}`$ particle in the presence of a scalar potential as well as a magnetic field. In this connection we would like to point out that supersymmetry based methods have previously been used to study various systems involving coupled channel problems , matrix Hamiltonians as well as models involving spin-orbit coupling . Quasi exactly solvable matrix models have also been studied . In the present case we shall obtain exact solutions of the eigenvalue problem when a spin $`\frac{1}{2}`$ particle moves in the presence of a rotating magnetic field. In particular it will be shown that supersymmetry breaking depends nontrivially on the strength and period of the magnetic field. Finally we shall also indicate briefly how supersymmetry is affected when, apart from the rotating magnetic field, a scalar potential is also present.
## 2 SUSY of a spin $`\frac{1}{2}`$ particle on the real line
In Witten’s model of SUSY quantum mechanics the Hamiltonian consists of two factorized Schrödinger operators
$$H_{-}=A^+A^{-},H_+=A^{-}A^+,$$
(1)
where the operators $`A^+`$ and $`A^{-}`$ are given by
$$A^\pm =\mp \frac{d}{dz}+W(z),$$
(2)
and $`W(z)`$ is the superpotential.
The pair of Hamiltonians in (1) are called SUSY partner Hamiltonians, and each of these Hamiltonians describes the motion of a spinless particle in one dimension. We shall now generalize Witten’s model of SUSY quantum mechanics in such a way that each of the Hamiltonians $`H_{-}`$, $`H_+`$ will describe the motion of a spin $`\frac{1}{2}`$ particle in a magnetic field and scalar potential. In order to do this we generalise the operators $`A^\pm `$ in the following way:
$$A^\pm =\mp \frac{d}{dz}+W(z)+𝐕(z)\cdot 𝐒,$$
(3)
It may be noted that here we consider motion of the particle along the $`z`$-axis and that the components of the spin operator $`𝐒`$ are $`S_\alpha =\sigma _\alpha /2`$ ($`\alpha =x,y,z`$), $`\sigma _\alpha `$ being the Pauli matrices. Then the SUSY partner Hamiltonians can be obtained as in (1) and are given by
$$H_\pm =-\frac{d^2}{dz^2}+V_\pm (z)+𝐁_\pm (z)\cdot 𝐒,$$
(4)
where
$`V_\pm (z)=W^2\pm W^{\prime }+V^2/4,`$ (5)
$`𝐁_\pm (z)=2W𝐕\pm 𝐕^{\prime }.`$ (6)
The Hamiltonians $`H_\pm `$ in (4) describe a spin $`\frac{1}{2}`$ particle moving along the $`z`$-axis in a scalar potential $`V_\pm `$ and a magnetic field $`𝐁_\pm (z)`$. In the case $`𝐕=0`$ we obtain the standard Witten model of SUSY quantum mechanics.
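The algebra leading to (5) and (6) is short but sign-sensitive. The following sketch (assuming SymPy is available, and using the sign convention $`A^\pm =\mp d/dz+W+𝐕\cdot 𝐒`$ restored above) verifies symbolically that $`H_{-}=A^+A^{-}`$ reproduces $`V_{-}=W^2-W^{\prime }+V^2/4`$ and $`𝐁_{-}=2W𝐕-𝐕^{\prime }`$:

```python
# Symbolic check that H_- = A^+ A^- reproduces Eqs. (5)-(6)
# under the sign convention A^± = ∓ d/dz + W + V.S (a sketch, not the paper's code).
import sympy as sp

z = sp.symbols('z')
W = sp.Function('W')(z)
V = [sp.Function(f'V{c}')(z) for c in 'xyz']
f = sp.Matrix([sp.Function('f1')(z), sp.Function('f2')(z)])

sx = sp.Matrix([[0, 1], [1, 0]]) / 2
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]]) / 2
sz = sp.Matrix([[1, 0], [0, -1]]) / 2
S = [sx, sy, sz]
VS = sum((Vi * Si for Vi, Si in zip(V, S)), sp.zeros(2))
I2 = sp.eye(2)

def A(psi, sign):   # sign=+1 -> A^+ = -d/dz + W + V.S ; sign=-1 -> A^-
    return -sign * psi.diff(z) + (W * I2 + VS) * psi

Hminus = sp.expand(A(A(f, -1), +1))

V2 = sum(Vi**2 for Vi in V)
expected = sp.expand(-f.diff(z, 2)
                     + (W**2 - W.diff(z) + V2/4) * f
                     + sum(((2*W*Vi - Vi.diff(z)) * Si for Vi, Si in zip(V, S)),
                           sp.zeros(2)) * f)

print(sp.simplify(Hminus - expected))   # -> zero matrix
```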
## 3 Spin $`\frac{1}{2}`$ particle in a rotating magnetic field and constant scalar potential
Let us now consider the motion of spin $`\frac{1}{2}`$ particle in a constant scalar potential and rotating magnetic field in the $`xy`$-plane:
$$(B_\pm )_x=\mp B_0\mathrm{cos}kz,(B_\pm )_y=\mp B_0\mathrm{sin}kz,(B_\pm )_z=0.$$
(7)
In this case without any loss of generality we can choose $`W=0`$ and thus
$$V_x=-\frac{B_0}{k}\mathrm{sin}kz,V_y=\frac{B_0}{k}\mathrm{cos}kz,V_z=0.$$
(8)
Thus in this case the operators $`A^\pm `$ are given by
$$A^\pm =\mp \frac{d}{dz}+\frac{B_0}{k}(-\mathrm{sin}kzS_x+\mathrm{cos}kzS_y).$$
(9)
Then from (4) we can obtain the explicit form of the SUSY partner Hamiltonians:
$$H_\pm =-\frac{d^2}{dz^2}\mp B_0(\mathrm{cos}kzS_x+\mathrm{sin}kzS_y)+\frac{B_0^2}{4k^2}.$$
(10)
From the form of the Hamiltonians in (10) it is seen that the term coupling the spin and the magnetic field depends on $`z`$. In order to remove this dependence we now perform the following unitary transformation:
$$\stackrel{~}{\psi }=e^{ikzS_z}\psi .$$
(11)
As a result of this transformation we obtain the following set of Hamiltonians
$`\stackrel{~}{H}_\pm =e^{ikzS_z}H_\pm e^{-ikzS_z}=\stackrel{~}{A}^{\mp }\stackrel{~}{A}^\pm `$
$`=\left(-i{\displaystyle \frac{d}{dz}}-kS_z\right)^2\mp B_0S_x+{\displaystyle \frac{B_0^2}{4k^2}},`$ (12)
where
$$\stackrel{~}{A}^\pm =e^{ikzS_z}A^\pm e^{-ikzS_z}=\mp \frac{d}{dz}\pm ikS_z+\frac{B_0}{k}S_y.$$
(13)
In order to determine whether or not supersymmetry is broken it is necessary to investigate if there are normalisable zero energy ground state wave functions (it may be recalled that for SUSY to be unbroken the ground state energy must be zero while if SUSY is broken the ground state energy is positive). So we seek solutions of the equations
$$\stackrel{~}{A}^\pm \stackrel{~}{\psi }_0^\pm =\left(\mp \frac{d}{dz}\pm ikS_z+\frac{B_0}{k}S_y\right)\stackrel{~}{\psi }_0^\pm =0.$$
(14)
Thus SUSY is unbroken if at least one of the wave functions $`\stackrel{~}{\psi }_0^\pm `$ is a true zero mode. We now seek solutions of the above equations in the form
$$\stackrel{~}{\psi }_0^\pm =\stackrel{~}{\chi }_0^\pm e^{iqz},$$
(15)
where $`q`$ is the wave vector of the particle and $`\stackrel{~}{\chi }_0^\pm `$ is the spin part of the wave function, which satisfies the following equation
$$\left(\mp iq\pm ikS_z+\frac{B_0}{k}S_y\right)\stackrel{~}{\chi }_0^\pm =0.$$
(16)
It can be shown that nonzero solutions of equation (16), i.e., $`\stackrel{~}{\chi }_0^+`$ or $`\stackrel{~}{\chi }_0^{-}`$, exist for the same value of the wave vector $`q`$
$$q=\pm \frac{k}{2}\sqrt{1-\frac{B_0^2}{k^4}}.$$
(17)
This implies that true zero modes exist both for $`H_+`$ and $`H_{-}`$ only when $`q`$ is real, and from (17) it follows that $`q`$ is real if
$$\frac{B_0^2}{k^4}<1.$$
(18)
Thus when $`B_0^2<k^4`$ zero modes exist both for $`H_\pm `$ and SUSY is unbroken. In the other case, when $`q`$ is complex, we do not have normalisable zero energy solutions and so SUSY is broken. It is interesting to note that when $`q`$ is real zero energy states exist in both the sectors $`H_+`$ and $`H_{-}`$, which are thus strictly isospectral. It may be noted that a similar situation arises when spinless particles move in periodic potentials .
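As a quick cross-check of (17), one can demand that the $`2\times 2`$ system (16) have a vanishing determinant. A minimal SymPy sketch (upper sign of (16); the symbols are generic, not values from the paper):

```python
# SymPy check of Eq. (17): wave vectors q for which Eq. (16)
# has a nontrivial spinor solution (vanishing determinant).
import sympy as sp

q, k, B0 = sp.symbols('q k B_0', real=True)
Sz = sp.Matrix([[1, 0], [0, -1]]) / 2
Sy = sp.Matrix([[0, -sp.I], [sp.I, 0]]) / 2

M = -sp.I*q*sp.eye(2) + sp.I*k*Sz + (B0/k)*Sy   # upper sign of Eq. (16)
print(sp.solve(sp.Eq(M.det(), 0), q))
# -> q = ±(k/2)*sqrt(1 - B0**2/k**4), i.e. Eq. (17)
```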
Finally we note that the eigenvalue problem for the Hamiltonians in (12), and thus for those in (10), can be solved exactly for the entire energy spectrum. After performing the unitary transformation the Hamiltonian (10) is transformed to (12) and the corresponding eigenfunctions can be written in the form
$$\stackrel{~}{\psi }^\pm =\stackrel{~}{\chi }^\pm e^{iqz}.$$
(19)
Then the eigenvalue problem becomes
$$[(q-kS_z)^2\mp B_0S_x]\stackrel{~}{\chi }^\pm =E\stackrel{~}{\chi }^\pm ,$$
(20)
from which we obtain a two band energy spectrum
$$E_{1,2}(q)=q^2+(k/2)^2\pm \sqrt{q^2k^2+(B_0/2)^2}+\frac{B_0^2}{4k^2}.$$
(21)
We note that the energy spectrum in (21) is the same for both $`H_\pm `$. The lowest energy $`E=0`$ for the first band $`E_1(q)`$ (with ”-” in (21)) is at the wave vector given by (17), where $`B_0^2/k^4<1`$. If however $`B_0^2/k^4>1`$ the lowest energy is at $`q=0`$ and $`E_1(q=0)=(k/2-|B_0|/(2k))^2`$, which is greater than zero. Thus in this case the SUSY is broken.
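The two-band result (21) is easy to verify numerically by diagonalizing the $`2\times 2`$ matrix in (20); in the sketch below the values $`k=1`$, $`B_0=0.5`$ are arbitrary test inputs, not taken from the paper:

```python
# Numerical sketch: diagonalize the 2x2 matrix of Eq. (20)
# and compare with the two-band spectrum (21).
import numpy as np

k, B0 = 1.0, 0.5
Sz = np.diag([0.5, -0.5])
Sx = np.array([[0, 0.5], [0.5, 0]])
I2 = np.eye(2)

for q in np.linspace(-1.0, 1.0, 5):
    H = (q*I2 - k*Sz) @ (q*I2 - k*Sz) - B0*Sx + B0**2/(4*k**2)*I2
    num = np.sort(np.linalg.eigvalsh(H))
    ana = np.sort([q**2 + (k/2)**2 + s*np.sqrt(q**2*k**2 + (B0/2)**2)
                   + B0**2/(4*k**2) for s in (-1, +1)])
    assert np.allclose(num, ana)
print("Eq. (21) reproduced")
```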
## 4 Ground state in the case of a rotating magnetic field and non constant scalar potential
Unlike in the last section, here we consider the motion of the particle in a rotating magnetic field and a non constant superpotential $`W(z)`$. In this case the equation for the ground state of $`H_{-}`$, after the unitary transformation (11), is given by
$$\stackrel{~}{A}^{-}\stackrel{~}{\psi }_0^{-}=\left(\frac{d}{dz}-ikS_z+\frac{B_0}{k}S_y+W(z)\right)\stackrel{~}{\psi }_0^{-}=0.$$
(22)
As before spin and coordinate parts of the wave function can be separated and the solution can be written in the form
$$\stackrel{~}{\psi }_0^{-}=\stackrel{~}{\chi }_0^{-}\mathrm{exp}\left(-\int ^zW(z)𝑑z-\lambda z\right)$$
(23)
where $`\stackrel{~}{\chi }_0^{-}`$ satisfies the equation
$$\left(-ikS_z+\frac{B_0}{k}S_y\right)\stackrel{~}{\chi }_0^{-}=\lambda \stackrel{~}{\chi }_0^{-}.$$
(24)
Eigenvalues of this equation are easily obtained and are given by
$$\lambda =\pm \frac{k}{2}\sqrt{\frac{B_0^2}{k^4}-1}.$$
(25)
Thus equation (22) has two solutions (23) which correspond to the two eigenvalues (25).
An interesting feature which emerges from this scenario is that even in the presence of a non constant scalar potential the rotating magnetic field can lead to SUSY breaking if it is sufficiently strong. To see this let us choose a superpotential $`W(z)`$ such that $`W(z)\to \pm W_0`$ when $`z\to \pm \mathrm{\infty }`$. Then it follows from (23) that in the case when
$$\frac{k}{2}\sqrt{\frac{B_0^2}{k^4}-1}>W_0$$
(26)
the wave function becomes non square integrable. Thus a sufficiently strong magnetic field destroys the zero energy ground state and leads to SUSY breaking.
For the purpose of illustration let us consider an explicit example. We choose $`W(z)=\alpha \mathrm{tanh}z`$, $`\alpha >0`$, so that $`W(z)\to \pm \alpha `$ as $`z\to \pm \mathrm{\infty }`$. Then the ground state wave function corresponding to $`H_{-}`$ is given by
$$\stackrel{~}{\psi }_0^{-}=\stackrel{~}{\chi }_0^{-}(\mathrm{cosh}z)^{-\alpha }e^{-\lambda z},$$
(27)
where $`\lambda `$ is given by (25). This wave function is square integrable when $`\alpha >\lambda `$. Thus in this case SUSY is unbroken. In the other case, when $`\alpha <\lambda `$, the ground state wave function is nonnormalisable, so that the magnetic field leads to the breaking of SUSY. We would like to point out that if $`\lambda `$ as given by (25) is imaginary (in other words, if the magnetic field is small enough), then SUSY is always unbroken irrespective of the value of $`\alpha `$.
Finally, let us comment on the zero energy solution corresponding to $`H_+`$. We note that in this case
$$\stackrel{~}{\psi }_0^+=\stackrel{~}{\chi }_0^+(\mathrm{cosh}z)^\alpha e^{\lambda z},$$
(28)
so that it is non square integrable. Thus $`H_+`$ possesses no zero energy solution. This is in contrast to the case considered in the previous section, where both $`H_\pm `$ had zero energy states.
## 5 Conclusions
In the present paper we have studied the motion of a spin $`\frac{1}{2}`$ particle in a rotating magnetic field and a scalar potential within the framework of SUSY quantum mechanics. The eigenvalue problem in the case of a purely magnetic field (constant scalar potential) is solved exactly and a two band energy spectrum is obtained. An interesting feature of the free motion of a spin $`\frac{1}{2}`$ particle in a rotating magnetic field is that both Hamiltonians $`H_+`$ and $`H_{-}`$ can have zero energy states simultaneously. It may be noted that the existence of zero modes, and thus exact SUSY, depends on the parameters of the magnetic field through the condition (18). For a sufficiently strong magnetic field SUSY is broken.
We have also studied SUSY breaking when the particle is moving in a rotating magnetic field and a non constant superpotential $`W(z)`$. In contrast to the free motion ($`W(z)=0`$), for non constant $`W(z)`$ only one of the Hamiltonians $`H_{-}`$ or $`H_+`$ has a zero energy ground state. In this case, if the magnetic field is sufficiently strong, SUSY can be broken. The condition for this is given by (26).
It may be noted that, in addition to the magnetic field, the inclusion of a non constant superpotential $`W(z)`$ leads to the appearance of discrete energy levels. Investigation of the complete discrete energy spectrum in the presence of a magnetic field and different scalar potentials will be the subject of a future publication.
|
no-problem/9905/hep-ph9905462.html
|
ar5iv
|
text
|
# Effective Photon Spectra for the Photon Colliders
## 1 Introduction
The photon colliders ($`\gamma \gamma `$ and $`e\gamma `$) were proposed and discussed in detail in Refs. . The subsequent papers present some new details of the design and an analysis of some effects involved in the conversion.
In a basic scheme two electron beams leave the final focus system and travel towards the interaction point (IP). At the conversion point (CP), at a distance $`b`$ of 1–10 mm before the IP, they collide with focused laser beams. The Compton scattering of a laser photon on an electron produces a high energy photon. The longitudinal motion of this photon originates from that of the electron, so that these photons follow the trajectories of the electrons (to the IP) with an additional angular spread $`1/\gamma `$. With reasonable laser parameters, one can ”convert” most of the electrons into high energy photons. Without taking into account rescattering of electrons on the next laser photons, the total $`\gamma \gamma `$ and $`e\gamma `$ luminosities are $`\mathcal{L}_{\gamma \gamma }^0=k^2\mathcal{L}_{ee}`$ and $`\mathcal{L}_{e\gamma }^0=k\mathcal{L}_{ee}`$, where $`k`$ is the conversion coefficient and $`\mathcal{L}_{ee}`$ is the geometrical luminosity of basic $`ee`$ collisions, which can be made much larger than the luminosity of the basic $`e^+e^{-}`$ collider. Below we assume the distances $`b`$ and the form of the electron beams to be identical for both beams.
Let the energy of the initial electron, laser photon and high energy photon be $`E`$, $`\omega _0`$ and $`\omega `$. We define as usual
$$x=\frac{4E\omega _0}{m_e^2},y=\frac{\omega }{E}\le y_m=\frac{x}{x+1}.$$
(1)
The quality of the photon spectra is better for higher $`x`$. However at $`x>2(1+\sqrt{2})\approx 4.8`$ the high energy photons can disappear via production of an $`e^+e^{-}`$ pair in a collision with a following laser photon. That is why the preferable conversion is at $`x=4`$–$`5`$.
The energies of the colliding photons $`\omega _i=y_iE`$ can be determined for each event by measuring the total energy of the produced system $`\omega _1+\omega _2`$ and its total (longitudinal) momentum $`\omega _1-\omega _2`$. We discuss in more detail the main area for the study of New Physics — the high energy region where the energies of both photons are large enough. For definiteness we consider the photon energy region $`(y_m/2)<y_i<y_m`$ and demand additionally that no photons with lower energy contribute to the entire distribution over the effective mass of the $`\gamma \gamma `$ system $`2zE`$ or its total energy $`YE`$:
$$\frac{y_m}{2}<y_1,y_2<y_m\left(\frac{y_m}{\sqrt{2}}<z=\sqrt{y_1y_2}<y_m\right),\left(1.5y_m<Y=y_1+y_2<2y_m\right).$$
(2)
In the interesting cases this choice covers the high energy peak in luminosity since the photon spectra are concentrated in the more narrow regions near $`y_m`$.
The growth of the distance $`b`$ between IP and CP is accompanied by two phenomena. First, high energy collisions become more monochromatic. The high energy part of the luminosity is concentrated in a relatively narrow peak which is separated well from an additional low energy peak. This separation becomes stronger at higher $`x`$ and $`b`$ values. Second, the luminosity in the high energy region decreases (relatively slowly at small $`b`$ and as $`b^{-2}`$ at large $`b`$). Only the high energy peak is the area for the study of New Physics phenomena. The low energy peak is the source of background in these studies. The separation between the peaks is very useful to eliminate background from the data. Therefore, some intermediate value of $`b`$ provides the best conditions for the study of New Physics.
Let us discuss the spectra neglecting rescattering, to begin with. At $`b=0`$ the $`\gamma \gamma `$ luminosity distribution is a simple convolution of the two photon spectra of the separate photons. At $`b\ne 0`$ the luminosity distribution is a more complicated convolution of the above photon spectra with some factor depending on $`b`$ and the form of the initial electron beams. With the growth of the conversion coefficient the effect of rescattering of electrons on laser photons is enhanced and makes this distribution dependent on the details of the design (mainly in the low energy part).
In this paper we continue the discussion from Ref. about the main parameters of the scheme which are preferable for the $`\gamma \gamma `$ and $`e\gamma `$ colliders with elliptic electron beams. We find a universal description of the high energy peak in this preferable region of parameters. It allows us to obtain a remarkable approximate form for the spectra of colliding photons, whose simple convolution describes the high energy luminosity peak with reasonable accuracy.
## 2 Luminosity distribution without rescattering. Elliptic electron beams
The high energy peak in the luminosity is described mainly by a single collision of an electron with a laser photon; this part of the distribution depends on the form of the initial beams only. Therefore we start with a detailed discussion of the effects from a single electron and laser photon collision. At first, we repeat some basic points from Refs. .
The scattering angle of produced photon $`\theta `$ is related to its energy as
$$\theta =\frac{g(x,y)}{\gamma },g(x,y)=\sqrt{\frac{x}{y}-x-1},\gamma =\frac{E}{m_e}.$$
(3)
Let the mean helicities of initial electron, laser photon and high energy photon be $`\lambda _e/2`$, $`P_{\mathrm{}}`$ and $`\xi _2`$. The energy spectrum of produced photons is ($`N`$ is normalization factor)
$$F(x,y)=N\left[\frac{1}{1-y}-y+(2r-1)^2-\lambda _eP_{\mathrm{\ell }}xr(2r-1)(2-y)\right],r=\frac{y}{x(1-y)}.$$
(4)
At $`\lambda _eP_{\mathrm{\ell }}=-1`$ and $`x>1`$ this spectrum has a sharp peak at high energy which becomes even sharper with the growth of $`x`$.
The spectrum is sharper when $`-\lambda _eP_{\mathrm{\ell }}`$ is larger. We present below mainly the values from the real projects: $`\lambda _e=0.85`$, $`P_{\mathrm{\ell }}=-1`$.
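As a rough illustration of Eq. (4) (with the minus signs restored as above), the sketch below fixes the normalization $`N`$ numerically and compares the fraction of photons in the hard part of the spectrum for the unpolarized case and for $`\lambda _eP_{\mathrm{\ell }}=-1`$; the value $`x=4.8`$ and the grid size are illustrative choices only:

```python
# Sketch of the Compton spectrum, Eq. (4); lam_P = lambda_e * P_l.
import numpy as np

def F_unnorm(x, y, lam_P):
    r = y / (x * (1.0 - y))
    return 1/(1-y) - y + (2*r-1)**2 - lam_P * x * r * (2*r-1) * (2-y)

x = 4.8
ym = x / (x + 1.0)
y = np.linspace(1e-4, ym - 1e-4, 2000)
for lam_P in (0.0, -1.0):
    f = F_unnorm(x, y, lam_P)
    f /= np.trapz(f, y)                     # fixes the normalization N
    frac = np.trapz(f[y > ym/2], y[y > ym/2])
    print(f"lam_e*P_l = {lam_P}: fraction with y > ym/2 = {frac:.2f}")
```

For $`\lambda _eP_{\mathrm{\ell }}=-1`$ the printed fraction should come out noticeably larger, reflecting the sharpening of the high energy peak.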
The degree of circular polarization of high energy photon is
$$<\xi _2>=N\frac{\lambda _exr\left[1+(1-y)(2r-1)^2\right]-P_{\mathrm{\ell }}(2r-1)\left[{\displaystyle \frac{1}{1-y}}+1-y\right]}{F(x,y)}.$$
(5)
The photons with lower energy have a larger production angle (3). With the growth of $`b`$, these photons spread more and more, and they collide only rarely. Therefore, with the growth of $`b`$ the photon collisions become more monochromatic: only high energy photons take part in these collisions, the low energy part of the total luminosity being rejected (there the photons are on average almost nonpolarized).
This effect was studied in for the gaussian round electron bunches. However, the incident electron beams are expected to be of an elliptic form with large enough ellipticity.
Let the initial electron beams be of the gaussian elliptic form with vertical and horizontal sizes $`\sigma _{ye}`$ and $`\sigma _{xe}`$ at the IP (calculated for the case without conversion). The discussed phenomena are described by a reduced distance between the conversion and collision points $`\rho `$ and an aspect ratio $`A`$:
$$\rho ^2=\left(\frac{b}{\gamma \sigma _{xe}}\right)^2+\left(\frac{b}{\gamma \sigma _{ye}}\right)^2,A=\frac{\sigma _{xe}}{\sigma _{ye}}.$$
(6)
The luminosity distributions in this case can be calculated by the same approach which was used in Ref. .
The distribution of the photons colliding with opposite electrons at $`e\gamma `$collider is
$$\frac{d\mathcal{L}_{e\gamma }}{dy}=\int \frac{d\varphi }{2\pi }F(x,y)\mathrm{exp}\left[-\frac{\rho ^2g^2(x,y)}{4(1+A^2)}(A^2\mathrm{cos}^2\varphi +\mathrm{sin}^2\varphi )\right].$$
(7)
For the round beams ($`A=1`$) we have in the exponent $`-\rho ^2g^2(x,y)/8`$.
The distribution of the colliding photons over their energies at $`\gamma \gamma `$collider is
$$\begin{array}{c}\frac{d^2\mathcal{L}_{\gamma \gamma }}{dy_1dy_2}=\int \frac{d\varphi _1d\varphi _2}{\left(2\pi \right)^2}F(x,y_1)F(x,y_2)\mathrm{exp}\left[-\frac{\rho ^2\mathrm{\Psi }}{4\left(1+A^2\right)}\right],\\ \\ \mathrm{\Psi }=A^2\left[g(x,y_1)\mathrm{cos}\varphi _1+g(x,y_2)\mathrm{cos}\varphi _2\right]^2+\left[g(x,y_1)\mathrm{sin}\varphi _1+g(x,y_2)\mathrm{sin}\varphi _2\right]^2.\end{array}$$
(8)
For the round beams ($`A=1`$) one can perform integrations over $`\varphi _i`$ in the analytical form. It results in the equation from Ref. with the Bessel function of an imaginary argument $`I_0(v^2)`$
$$\begin{array}{c}\frac{d^2\mathcal{L}_{\gamma \gamma }}{dy_1dy_2}=F(x,y_1)F(x,y_2)\mathrm{exp}\left[-\frac{\rho ^2}{8}(g^2(x,y_1)+g^2(x,y_2))\right]I_0(v^2),\\ v^2=\frac{\rho ^2}{4}g(x,y_1)g(x,y_2).\end{array}$$
(9)
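For $`A=1`$ the double angular integral in (8) factorizes into a Gaussian prefactor times a single integral over $`\varphi _1-\varphi _2`$. The snippet below (a sketch assuming SciPy is available; $`\rho `$, $`g_1`$, $`g_2`$ are arbitrary test values) checks numerically that this remaining integral equals the Bessel factor $`I_0(v^2)`$ of (9):

```python
# Numeric check that the angular integral in Eq. (8) reduces,
# for round beams, to the Bessel factor I0(v^2) of Eq. (9).
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

rho, g1, g2 = 1.0, 0.8, 1.3
lhs = quad(lambda phi: np.exp(-(rho**2/4) * g1 * g2 * np.cos(phi)) / (2*np.pi),
           0, 2*np.pi)[0]
v2 = (rho**2 / 4) * g1 * g2
print(lhs, i0(v2))   # the two numbers should agree
```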
We have analyzed numerically the high energy part of the luminosity (in the region (2)) $`\mathcal{L}^h`$ as a function of $`\rho ^2`$ and $`A`$ at $`2<x<5`$, $`\lambda _eP_{\mathrm{\ell }}\le 0`$. (We use this notation both for the total luminosity integrated over the region (2) and for the differential distributions.)
The growth of $`\rho `$ results both in a better form of the luminosity distribution and in a reduction of the luminosity. We find that the luminosity $`\mathcal{L}^h`$ depends only weakly on the aspect ratio $`A`$ at $`A>1.5`$ and $`\rho ^2<1.3`$. At $`\rho ^2\le 1`$ this dependence is weak at all values of $`A`$ (including $`A=1`$). For $`\lambda _eP_{\mathrm{\ell }}=-0.85`$ the luminosity $`\mathcal{L}^h`$ at $`\rho =1`$ contains a large enough fraction of the high energy part of the luminosity given at $`\rho =0`$. For the unpolarized case ($`\lambda _eP_{\mathrm{\ell }}=0`$) this fraction is smaller and the high energy peak is separated only weakly from the low energy one. Some of these statements can be seen from the table below, where we show the ratio of the high energy luminosity $`\mathcal{L}^h`$ to the total luminosity $`\mathcal{L}_{\gamma \gamma }^0`$ at some values of the parameters.
| | $`\rho =0`$, any $`A`$ | | $`\rho =1`$, $`A\ge 1.5`$ | |
| --- | --- | --- | --- | --- |
| $`\lambda _eP_{\mathrm{\ell }}`$ | -0.85 | 0 | -0.85 | 0 |
| $`x=4.8`$ | 0.35 | 0.25 | 0.28 | 0.19 |
| $`x=2`$ | 0.29 | 0.19 | 0.25 | 0.16 |
It seems unreasonable to use a smaller part of the total luminosity than that obtained at $`\rho =1`$.
To have a more detailed picture for the simulation, we study the luminosity distribution in the relative values of the effective mass $`z=W/(2E)`$ and the total energy $`Y=(\omega _1+\omega _2)/E`$ of the pair of colliding photons (2). Their typical forms for different values of the aspect ratio $`A`$ are shown in Figs. 1, 2.
The numerical study of these distributions shows that their high energy part is practically the same for all values $`A>1.5`$ at fixed $`\rho ^2<1.3`$ (with a small difference near the lower edge of the peak). The luminosity within the high energy peak for round beams ($`A=1`$) is slightly lower than that for elliptic beams. This difference is about 5% in the main part of the region below the peak. At $`\rho ^2=1`$ this difference is small for all $`A`$. At $`\rho ^2>0.5`$ the high energy part of the luminosity has the form of a narrow enough peak. This peak is not so sharp at lower $`x`$ and it is even less sharp at $`\lambda _eP_{\mathrm{\ell }}=0`$.
With the growth of the aspect ratio $`A`$ the entire distributions acquire low energy tails (as compared with round beams), originating from the collisions of low energy photons scattered near the horizontal direction with opposite high energy photons scattered in the vertical direction. This tail adds to that from the rescatterings and is not of much interest in the discussion of the high energy peak. At higher $`\rho `$ and $`A`$ this effect becomes more essential in the region of the peak.
As a result, the preferable region of parameters for photon colliders is
$$5>x>2,\lambda _eP_{\mathrm{\ell }}\le -0.5,\rho ^2<1.3.$$
(10)
Additionally, here the high energy peaks in the $`\gamma \gamma `$ and $`e\gamma `$ luminosities are described by the single parameter $`\rho `$ and are practically independent of $`A`$ for $`A>1.5`$.
## 3 Rescattering contribution to the spectra. Qualitative description
The rescattering of electrons on the following laser photons produces new high energy photons (secondary photons) which modify the luminosity distribution, mainly in its low energy part. The detailed form of the additional components of the luminosity distribution depends strongly on the conversion coefficient and other details of the design. That is why we present here only a qualitative discussion with some simple examples.
Let us enumerate the differences in the properties of secondary photons from those of the photons produced in the first collision (which we will denote as primary photons).
(1) The energy of secondary photons is lower than that of primary photons.
(2) There is no definite relation between energy and production angle like (3).
(3) The mean polarization of secondary photons is practically zero.
Fig. 3 presents a typical energy spectrum of photons with only one rescattering at conversion coefficient $`k=1`$. Dashed line represents fraction of secondary photons. Let us explain some features of this spectrum.
The main fraction of electrons after the first scattering has the energy which corresponds to the peak in the produced photon spectrum, $`E_e=E-\omega _m=E/(x+1)`$. For the next collision $`x\to x/(x+1)`$. Therefore, the additional energy peak in the photon spectrum caused by secondary photons from the first rescattering is at the photon energy $`y_mE/(2x+1)`$; it is much lower than $`y_mE`$ for $`x>2`$. The subsequent rescatterings add more soft photons. Besides, the primary photon spectrum (4) is concentrated near its high energy boundary. So, the fraction of scattered electrons with energy close to $`E`$ is small, and the effect of rescattering in the high energy part on the entire photon spectrum is also small. In the subsequent rescatterings such peaks become smooth. The well known result is a large peak near $`y=0`$. Note also that the secondary photons are on average nonpolarized.
The shape of an additional contribution of secondary photons to the luminosity distributions (second–second and primary–second) depends on $`k`$, $`\rho `$ and $`A`$. Nevertheless, the different simulations show the common features (see ):
At $`\rho ^2\ge 0.5`$, $`k<1`$ the luminosity distribution has two well separated peaks: the high energy peak (mainly from primary photons) and a wide low energy peak (mainly from secondary photons). Photons in the high energy peak have a high degree of polarization, while the mean polarization of photons in the low energy peak is close to 0. At smaller $`x`$ or $`|\lambda _eP_{\mathrm{\ell }}|`$ this separation of peaks becomes less definite and the mean photon polarization becomes less than that given by Eq. (5).
With a good separation of peaks, the backgrounds from the low energy peak could be eliminated relatively easily in many problems.
## 4 Approximation
The previous discussion shows that there is a chance to construct an approximation for the photon spectra which would describe the high energy peak simply and in a universal way. We were searching for an approximation in which the high energy peak in the $`\gamma \gamma `$ luminosity would be given by a simple convolution of the form
$$\frac{d^2\mathcal{L}}{dy_1dy_2}=F_a(x,y_1,\rho ^2)F_a(x,y_2,\rho ^2).$$
(11)
instead of the complex integration (8) (independent of the aspect ratio $`A`$).
We tested different forms of effective photon spectra. Taking into account the form of the angular spread of a separate beam, we consider a test function for the high energy peak in the form
$$F_a(x,y,\rho ^2)=\{\begin{array}{cc}F(x,y)\mathrm{exp}\left[-B\rho ^2g(x,y)^2/8\right]& \text{ at }y>y_m/2,\\ 0& \text{ at }y<y_m/2.\end{array}$$
(12)
where coefficient $`B`$ is varied.
A good fit for the high energy peak at $`2<x<5`$, $`\rho ^2<1.3`$, $`A>1.5`$ is given by the values
$$\begin{array}{cc}B=1\hfill & \text{ for the }\gamma \gamma \text{ collider},\hfill \\ B=0.7\hfill & \text{ for }e\gamma \text{ collider}.\hfill \end{array}$$
(13)
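Given (11)–(13), the high energy mass spectrum can be sketched by a direct two-dimensional convolution. The rough implementation below reuses `F_unnorm` from the earlier sketch; the grid size and bin count are arbitrary choices:

```python
# Sketch: gamma-gamma mass spectrum dL/dz from the convolution (11)
# of the effective spectra (12)-(13); B=1 is the gamma-gamma value.
import numpy as np

def F_a(x, y, rho2, lam_P, B=1.0):
    ym = x / (x + 1.0)
    g2 = x / y - x - 1.0                  # g(x,y)^2 from Eq. (3)
    val = F_unnorm(x, y, lam_P) * np.exp(-B * rho2 * g2 / 8)
    return np.where(y > ym / 2, val, 0.0)

x, rho2, lam_P = 4.8, 1.0, -0.85
ym = x / (x + 1.0)
y1, y2 = np.meshgrid(*(2 * [np.linspace(ym/2 + 1e-4, ym - 1e-4, 400)]))
w = F_a(x, y1, rho2, lam_P) * F_a(x, y2, rho2, lam_P)
z = np.sqrt(y1 * y2)
hist, edges = np.histogram(z.ravel(), bins=60, weights=w.ravel())
print(edges[np.argmax(hist)])             # approximate position of the peak in z
```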
The curves in Figs. 4, 5 show the accuracy of this approximation for the distributions in both the effective $`\gamma \gamma `$ mass $`z=\sqrt{y_1y_2}`$ and the total photon energy $`Y=y_1+y_2`$ at $`\lambda _eP_{\mathrm{\ell }}=-0.85`$. These curves show the excellent quality of our approximation. The distributions calculated without the angular spread factor (at $`B=0`$) are also shown here by dashed lines. These curves are markedly higher than the precise ones and they are wider than the real distributions. The first inaccuracy can be compensated by a suitable renormalization, but the second cannot be eliminated.
Note that the approximation (12), (13) for the $`\gamma \gamma `$ collision can be obtained from Eq. (9) if the Bessel function $`I_0(v^2)`$ is replaced by unity. Figs. 4, 5 show that our approximation practically coincides with the precise distributions within the high energy peak for the elliptic beams ($`A\ge 1.5`$). Therefore, the difference between the curves for $`A=1`$ and $`A=2`$ in Figs. 1, 2 in the region (2) is caused by the Bessel function factor.
Using of ”precise” Eq. (8) instead of our approximation is only a sham improvement. The difference between the approximation and the ”precise” formula is usually smaller than the effect of rescatterings.
## 5 Results
Let us enumerate the main results.
* The variable $`\rho `$ (6) is a good variable for the description of the high energy peak in the spectral luminosity, independent of the aspect ratio $`A`$, at $`A>1.5`$, $`\rho ^2<1.3`$, $`2<x<5`$, $`\lambda _eP_{\mathrm{\ell }}<0`$.
* At $`\rho \sim 1`$ and suitable polarizations of the initial beams the high energy peak in the luminosity is well separated from the low energy peak. This separation can be destroyed by using a large conversion coefficient or (and) values $`x\sim 1`$ or $`\lambda _eP_{\mathrm{\ell }}>0`$.
* To discuss future experiments at a photon collider with good enough accuracy, one can use the simple approximation (11)–(13) at $`\rho =1`$ instead of a detailed simulation of conversion and collision. In this approximation the details of the design are inessential. A possible decrease of $`\rho ^2`$ to the value 0.5 can also be considered.
* The numbers describing the luminosity of a Photon Collider should correspond to the discussed high energy peak only.
## Acknowledgments
We are grateful to G. Jikia, V.G. Serbo and V.I. Telnov for the useful discussions. This work was supported by grant RFBR 99-02-17211 and grant of Sankt–Petersburg Center of Fundamental Sciences.
|
no-problem/9905/gr-qc9905084.html
|
ar5iv
|
text
|
## 1 Introduction
In recent years, ways of effective superluminal travel (EST) within general relativity have generated a lot of attention . In the simplest definition of superluminal travel, one has a spacetime with a Lorentzian metric that is Minkowskian except for a localized region $`S`$. When using coordinates such that the metric is $`\text{diag}(-1,1,1,1)`$ in the Minkowskian region, there should be two points $`(t_1,x_1,y,z)`$ and $`(t_2,x_2,y,z)`$ located outside $`S`$, such that $`x_2-x_1>t_2-t_1`$, and a causal path connecting the two. This was a definition given in . An example is the Alcubierre spacetime if the warp bubble exists only for a finite time. Note that the definition does not restrict the energy–momentum tensor in $`S`$. Such spacetimes will violate at least one of the energy conditions (the weak energy condition or WEC). In the case of the Alcubierre spacetime, the situation is even worse: part of the energy in region $`S`$ is moving tachyonically . The ‘Krasnikov tube’ was an attempt to improve on the Alcubierre geometry. In this paper, we will stick to the Alcubierre spacetime such as it is. It is not unimaginable that some modification of the geometry will make the problem of tachyonically moving energy go away without changing the other essential features, but we leave that for future work. Here we will concentrate on another problem.
Alcubierre's idea was to start with flat spacetime, choose an arbitrary curve, and then deform spacetime in the immediate vicinity in such a way that the curve becomes a timelike geodesic, at the same time keeping most of spacetime Minkowskian. A point on the geodesic is surrounded by a ‘bubble’ in space. In the front of the bubble spacetime contracts, in the back it expands, so that whatever is inside is ‘surfing’ through space with a velocity $`v_s`$ with respect to an observer in the Minkowskian region. The metric is
$$ds^2=-dt^2+(dx-v_s(t)f(r_s)dt)^2+dy^2+dz^2$$
(1)
for a warp drive moving in the $`x`$ direction. $`f(r_s)`$ is a function which for small enough $`r_s`$ is approximately equal to one, becoming exactly one in $`r_s=0`$ (this is the ‘inside’ of the bubble), and goes to zero for large $`r_s`$ (‘outside’). $`r_s`$ is given by
$$r_s(t,x,y,z)=\sqrt{(x-x_s(t))^2+y^2+z^2},$$
(2)
where $`x_s(t)`$ is the $`x`$ coordinate of the central geodesic, which is parametrized by coordinate time $`t`$, and $`v_s(t)=\frac{dx_s}{dt}(t)`$. A test particle in the center of the bubble is not only weightless and travels at arbitrarily large velocity with respect to an observer in the large $`r_s`$ region, it also does not experience any time dilatation.
Unfortunately, this geometry violates the strong, dominant, and especially the weak energy condition. This is not a problem per se, since situations are known in which the WEC is violated quantum mechanically, such as the Casimir effect. However, Ford and Roman suggested an uncertainty–type principle which places a bound on the extent to which the WEC is violated by quantum fluctuations of scalar and electromagnetic fields: The larger the violation, the shorter the time it can last for an inertial observer crossing the negative energy region. This so–called quantum inequality (QI) can be used as a test for the viability of would–be spacetimes allowing superluminal travel. By making use of the QI, Ford and Pfenning were able to show that a warp drive with a macroscopically large bubble must contain an unphysically large amount of negative energy. This is because the QI restricts the bubble wall to be very thin, and for a macroscopic bubble the energy is roughly proportional to $`R^2/\mathrm{\Delta }`$, where $`R`$ is a measure for the bubble radius and $`\mathrm{\Delta }`$ for its wall thickness. It was shown that a bubble with a radius of 100 meters would require a total negative energy of at least
$$E\le -6.2\times 10^{62}v_s\text{kg},$$
(3)
which, for $`v_s\approx 1`$, is ten orders of magnitude bigger than the total positive mass of the entire visible Universe. However, the same authors also indicated that warp bubbles are still conceivable if they are microscopically small. We shall exploit this in the following section.
The aim of this paper is to show that a trivial modification of the Alcubierre geometry can have dramatic consequences for the total negative energy as calculated in . In section 2, I will explain the change in general terms. In section 3, I shall pick a specific example and calculate the total negative energy involved. In the last section, some drawbacks of the new geometry are discussed.
Throughout this note, we will use units such that $`c=G=\mathrm{\hbar }=1`$, except when stated otherwise.
## 2 A modification of the Alcubierre geometry
We will solve the problem of the large negative energy by keeping the surface area of the warp bubble itself microscopically small, while at the same time expanding the spatial volume inside the bubble. The most natural way to do this is the following:
$$ds^2=-dt^2+B^2(r_s)[(dx-v_s(t)f(r_s)dt)^2+dy^2+dz^2].$$
(4)
For simplicity, the velocity $`v_s`$ will be taken constant. $`B(r_s)`$ is a twice differentiable function such that, for some $`\stackrel{~}{R}`$ and $`\stackrel{~}{\mathrm{\Delta }}`$,
$`B(r_s)=1+\alpha `$ for $`r_s<\stackrel{~}{R},`$
$`1<B(r_s)\le 1+\alpha `$ for $`\stackrel{~}{R}\le r_s<\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }},`$
$`B(r_s)=1`$ for $`\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}\le r_s,`$ (5)
where $`\alpha `$ will in general be a very large constant; $`1+\alpha `$ is the factor by which space is expanded. For $`f`$ we will choose a function with the properties
$`f(r_s)=1`$ for $`r_s<R,`$
$`0<f(r_s)\le 1`$ for $`R\le r_s<R+\mathrm{\Delta },`$
$`f(r_s)=0`$ for $`R+\mathrm{\Delta }\le r_s,`$
where $`R>\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}`$. See figure 1 for a drawing of the regions where $`f`$ and $`B`$ vary.
Notice that this metric can still be written in the $`3+1`$ formalism, where the shift vector has components $`N^i=(-v_sf(r_s),0,0)`$, while the lapse function is identically $`1`$.
A spatial slice of the geometry one gets in this way can be easily visualized in the ‘rubber membrane’ picture. A small Alcubierre bubble surrounds a neck leading to a ‘pocket’ with a large internal volume, with a flat region in the middle. It is easily calculated that the center $`r_s=0`$ of the pocket will move on a timelike geodesic with proper time $`t`$.
## 3 Building a warp drive
In using the metric (4), we will build a warp drive with the restriction in mind that all features should have a length larger than the Planck length $`L_P`$. One structure at least, the warp bubble wall, cannot be made thicker than approximately one hundred Planck lengths for velocities $`v_s`$ in the order of 1, as proven in :
$$\mathrm{\Delta }\lesssim 10^2v_sL_P.$$
(6)
We will choose the following numbers for $`\alpha `$, $`\stackrel{~}{\mathrm{\Delta }}`$, $`\stackrel{~}{R}`$, and $`R`$:
$`\alpha `$ $`=`$ $`10^{17},`$
$`\stackrel{~}{\mathrm{\Delta }}`$ $`=`$ $`10^{-15}\text{m},`$
$`\stackrel{~}{R}`$ $`=`$ $`10^{-15}\text{m},`$
$`R`$ $`=`$ $`3\times 10^{-15}\text{m}.`$ (7)
The outermost surface of the warp bubble will have an area corresponding to a radius of approximately $`3\times 10^{-15}\text{m}`$, while the inner diameter of the ‘pocket’ is 200 m. For the moment these numbers may seem arbitrary; the reason for this choice will become clear later on.
Ford and Pfenning already calculated the minimum amount of negative energy associated with the warp bubble:
$$E_{IV}=-\frac{1}{12}v_s^2\left(\frac{(R+\frac{\mathrm{\Delta }}{2})^2}{\mathrm{\Delta }}+\frac{\mathrm{\Delta }}{12}\right),$$
(8)
which in our case is the energy in region IV. The expression is the same (apart from a change due to our different conventions) because $`B=1`$ in this region, and the metric is identical to the original Alcubierre metric. For an $`R`$ as in (7) and taking (6) into account, we get approximately
$$E_{IV}\approx -6.3\times 10^{29}v_s\text{kg}.$$
(9)
Now we calculate the energy in region II of the figure. In this region, we can choose an orthonormal frame
$`e_{\widehat{0}}`$ $`=`$ $`\partial _t+v_s\partial _x,`$
$`e_{\widehat{i}}`$ $`=`$ $`{\displaystyle \frac{1}{B}}\partial _i`$ (10)
($`i=x,y,z`$). In this frame, there are geodesics with velocity $`u^{\widehat{\mu }}=(1,0,0,0)`$, called ‘Eulerian observers’ . We let the energy be measured by a collection of these observers who are temporarily swept along with the warp drive. Let us consider the energy density they measure locally in the region II, at time $`t=0`$, when $`r_s=r=(x^2+y^2+z^2)^{1/2}`$. It is given by
$$T_{\widehat{\mu }\widehat{\nu }}u^{\widehat{\mu }}u^{\widehat{\nu }}=T^{\widehat{0}\widehat{0}}=\frac{1}{8\pi }\left(\frac{1}{B^4}(\partial _rB)^2-\frac{2}{B^3}\partial _r\partial _rB-\frac{4}{B^3}\partial _rB\frac{1}{r}\right).$$
(11)
We will have to make a choice for the $`B`$ function. It turns out that the most obvious choices, such as a sine function or a low–order polynomial, lead to pathological geometries, in the sense that they have curvature radii which are much smaller than the Planck length. This is due to the second derivative term, which is also present in the expressions for the Riemann tensor components and which for these functions takes enormous absolute values in a very small region near $`r=\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}`$. To avoid this, we will choose for $`B`$ a polynomial which has a vanishing second derivative at $`r=\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}`$. In addition, we will demand that a large number of derivatives vanish at this point. A choice that meets our requirements is
$$B=\alpha (-(n-1)w^n+nw^{n-1})+1,$$
(12)
with
$$w=\frac{\stackrel{~}{R}+\stackrel{~}{\mathrm{\Delta }}-r}{\stackrel{~}{\mathrm{\Delta }}}$$
(13)
and $`n`$ sufficiently large.
As an example, let us choose $`n=80`$. Then one can check that $`T^{\widehat{0}\widehat{0}}`$ will be negative for $`0\le w\le 0.981`$ and positive for $`w>0.981`$. It has a strong negative peak at $`w=0.349`$, where it reaches the value
$$T^{\widehat{0}\widehat{0}}=-4.9\times 10^2\frac{1}{\stackrel{~}{\mathrm{\Delta }}^2}.$$
(14)
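The behaviour of (11) for the polynomial (12) can be scanned numerically. The sketch below works in units where $`\stackrel{~}{\mathrm{\Delta }}=\stackrel{~}{R}=1`$ (so $`r=2-w`$) and relies on the sign reconstruction adopted above, so the printed peak location and depth are indicative only, not a substitute for the quoted values:

```python
# Rough scan of the energy density (11) for B(w) of Eqs. (12)-(13),
# n=80, alpha=1e17; lengths measured in units of Delta~.
import numpy as np

n, alpha = 80, 1e17
w = np.linspace(1e-4, 1 - 1e-4, 200000)
r = 2.0 - w                                   # r in units of Delta~, with R~ = Delta~
B       = alpha * (-(n - 1) * w**n + n * w**(n - 1)) + 1.0
dB_dw   = alpha * n * (n - 1) * w**(n - 2) * (1.0 - w)
d2B_dw2 = alpha * n * (n - 1) * w**(n - 3) * ((n - 2) * (1.0 - w) - w)
dB_dr, d2B_dr2 = -dB_dw, d2B_dw2              # since dw/dr = -1
T00 = (dB_dr**2 / B**4 - 2 * d2B_dr2 / B**3 - 4 * dB_dr / (r * B**3)) / (8 * np.pi)
i = int(np.argmin(T00))
print(f"most negative T00 near w = {w[i]:.3f}, depth {T00[i]:.2e} (1/Delta~^2)")
```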
We will use the same definition of total energy as in : we integrate over the densities measured by the Eulerian observers as they cross the spatial hypersurface determined by $`t=0`$. If we restrict the integral to the part of region II where the energy density is negative, we get
$`E_{II,-}`$ $`=`$ $`{\displaystyle \int _{II,-}}d^3x\sqrt{|g_S|}T_{\widehat{\mu }\widehat{\nu }}u^{\widehat{\mu }}u^{\widehat{\nu }}`$ (15)
$`=`$ $`4\pi \stackrel{~}{\mathrm{\Delta }}{\displaystyle \int _0^{0.981}}𝑑w(2-w)^2B(w)^3\stackrel{~}{T}^{\widehat{0}\widehat{0}}(w)`$
$`=`$ $`-1.4\times 10^{30}\text{kg}`$
where $`\stackrel{~}{T}^{\widehat{0}\widehat{0}}`$ is the energy density with length expressed in units of $`\stackrel{~}{\mathrm{\Delta }}`$, and $`g_S=B^6`$ is the determinant of the spatial metric on the surface $`t=0`$. In the last line we have reinstated the factor $`c^2/G`$ to get the right answer in units of kg. The amount of positive energy in the region $`w>0.981`$ is
$$E_{II,+}=4.9\times 10^{30}\text{kg}.$$
(16)
Both $`E_{II,-}`$ and $`E_{II,+}`$ are of the order of a few solar masses. Note that as long as $`\alpha `$ is large, these energies do not vary much with $`\alpha `$ if $`\stackrel{~}{R}=\stackrel{~}{\mathrm{\Delta }}`$ and $`\alpha \stackrel{~}{R}=100\text{m}`$. The value of $`R`$ in (7) is roughly the largest that keeps $`|E_{IV}|`$ below a solar mass for $`v_s\approx 1`$.
We will check whether the QI derived by Ford and Roman is satisfied for the Eulerian observers. The QI was originally derived for flat spacetime , where for massless scalar fields it states that
$$\frac{\tau _0}{\pi }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}d\tau \frac{T_{\mu \nu }u^\mu u^\nu }{\tau ^2+\tau _0^2}\ge -\frac{3}{32\pi ^2\tau _0^4}$$
(17)
should be satisfied for all inertial observers and for all ‘sampling times’ $`\tau _0`$. In , it was argued that the inequality should also be valid in curved spacetimes, provided that the sampling time is chosen to be much smaller than the minimum curvature radius, so that the geometry looks approximately flat over a time $`\tau _0`$.
The minimum curvature radius is determined by the largest component of the Riemann tensor. It is easiest to calculate this tensor after performing a local coordinate transformation $`x^{\prime }=x-v_st`$ in region II, so that the metric becomes
$$g_{\mu \nu }=\text{diag}(-1,B^2,B^2,B^2).$$
(18)
Without loss of generality, we can limit ourselves to points on the line $`y=z=0`$; in the coordinate system we are using, the metric is spherically symmetric and has no preferred directions. Transformed to the orthonormal frame (10), the largest component (in absolute value) of the Riemann tensor is
$$R_{\widehat{1}\widehat{2}\widehat{1}\widehat{2}}=\frac{1}{B^4}(\partial _rB)^2-\frac{1}{B^3}\partial _r^2B-\frac{1}{B^3}\partial _rB\frac{1}{r}.$$
(19)
The minimal curvature radius can be calculated using the value of $`R_{\widehat{1}\widehat{2}\widehat{1}\widehat{2}}`$ where its absolute value is largest, namely at $`w=0.348`$. This yields
$`r_{c,min}`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{|R_{\widehat{1}\widehat{2}\widehat{1}\widehat{2}}|}}}`$ (20)
$`=`$ $`{\displaystyle \frac{\stackrel{~}{\mathrm{\Delta }}}{72.5}}`$
$`=`$ $`1.4\times 10^{-34}\text{m},`$
which is about ten Planck lengths. (Actually, the choice $`n=80`$ in (12) was not entirely arbitrary; it is the value that leads to the largest minimum curvature radius.) For the sampling time we choose
$$\tau _0=\beta r_{c,min},$$
(21)
where we will take $`\beta =0.1`$. Because $`T^{\widehat{0}\widehat{0}}`$ doesn’t vary much over this time, the QI (17) becomes
$$T^{\widehat{0}\widehat{0}}\ge -\frac{3}{32\pi ^2\tau _0^4}.$$
(22)
Taking into account the hidden factors $`c^2/G`$ on the left and $`\mathrm{\hbar }/c`$ on the right, the left hand side is about $`-6.6\times 10^{93}\text{kg}/\text{m}^3`$ at its smallest, while the right hand side is approximately $`-9.2\times 10^{94}\text{kg}/\text{m}^3`$. We conclude that the QI is amply satisfied.
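The right hand side of (22) is easy to evaluate in SI units. The back-of-envelope sketch below uses $`\beta =0.1`$ and the curvature radius from (20); it should land at the order of magnitude quoted above:

```python
# Back-of-envelope evaluation of the QI bound (22) in SI units.
import math

hbar, c = 1.055e-34, 2.998e8
r_c  = 1.4e-34            # minimum curvature radius [m], Eq. (20)
tau0 = 0.1 * r_c / c      # sampling time [s], beta = 0.1
# bound 3/(32 pi^2 tau0^4), converted to a mass density in kg/m^3
rho_bound = 3 * hbar / (32 * math.pi**2 * c * (c * tau0)**4)
print(f"QI bound: {rho_bound:.1e} kg/m^3")   # of order 1e95
```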
Thus, we have proven that the total energy requirements for a warp drive need not be as stringent as for the original Alcubierre drive.
## 4 Final remarks
By only slightly modifying the Alcubierre spacetime, we succeeded in spectacularly reducing the amount of negative energy that is needed, while at the same time retaining all the advantages of the original geometry. The spacetime and the simple calculation I presented should be considered as a proof of principle concerning the total energy required to sustain a warp drive geometry. This doesn’t mean that the proposal is realistic. Apart from the fact that the total energies are of stellar magnitude, there are the unreasonably large energy densities involved, as was equally the case for the original Alcubierre drive. Even if the quantum inequalities concerning WEC violations are satisfied, there remains the question of generating enough negative energy. Also, the geometry still has structure with sizes only a few orders of magnitude above the Planck scale; this seems to be generic for spacetimes allowing superluminal travel.
However, what was shown is that the energies needed to sustain a warp bubble are much smaller than suggested in . This means that a modified warp drive roughly falls in the mass bracket of a large traversable wormhole . On the other hand, the warp drive has trivial topology, which makes it an interesting spacetime to study.
## Acknowledgements
I would like to thank P.–J. De Smet, L.H. Ford and P. Savaria for very helpful comments.
|
no-problem/9905/hep-th9905008.html
|
ar5iv
|
text
|
# On induced 𝐶𝑃𝑇-odd Chern-Simons terms in 3+1 effective action.
## Abstract
This paper was originally designated as a Comment to the paper by R. Jackiw and V. Alan Kostelecký . We provide an example of a fermionic system, the superfluid <sup>3</sup>He-A, in which the $`CPT`$-odd Chern-Simons terms in the effective action are unambiguously induced by chiral fermions. In this system the Lorentz and gauge invariances are both violated at high energy, but the behavior of the system beyond the cut-off is known. This allows us to construct the $`CPT`$-odd action, which combines the conventional 3+1 Chern-Simons term and the mixed axial-gravitational Chern-Simons term discussed in Ref.. The influence of the Chern-Simons term on the dynamics of the effective gauge field has been experimentally observed in rotating <sup>3</sup>He-A.
PACS numbers: 11.30.Er, 11.15.-q, 67.57.-z, 98.80.Cq
Recently the problem of the radiatively induced $`CPT`$-odd Chern-Simons term in 3+1 quantum field theory has been addressed in a number of papers . The Chern-Simons (CS) term $`L_{\mathrm{CS}}=\frac{1}{2}k_\mu e^{\mu \alpha \beta \gamma }F_{\alpha \beta }A_\gamma `$ in the 3+1 electromagnetic action, where $`k^\mu `$ is a constant 4-vector, is induced by the $`CPT`$\- and Lorentz-violating axial-vector term $`b^\mu \gamma _\mu \gamma _5`$ in the Dirac Lagrangian for massive fermions. In the limits of $`b`$ small and large compared with the mass $`m`$ of the Dirac fermions, it was found that
$`k^\mu ={\displaystyle \frac{3}{16\pi ^2}}b^\mu ,b\ll m,`$ (1)
$`k^\mu ={\displaystyle \frac{1}{16\pi ^2}}b^\mu ,b\gg m.`$ (2)
However it has been concluded that the existence of CS term depends on the choice of regularization procedure – ”a renormalization ambiguity”. This means that the result for $`k_\mu `$ depends on physics beyond the cut-off.
The above $`CPT`$-odd term can result not only from the violation of the $`CPT`$ symmetry in the vacuum. The nonzero density of the chiral fermions violates the $`CPT`$ invariance and thus can also lead to the Chern-Simons term, with $`b^0`$ determined by the chemical potential $`\mu `$ and temperature $`T`$ of the fermionic system .
Here we provide an example of a fermionic system in which such a Chern-Simons term is unambiguously induced by fermions. In this system the Lorentz and gauge invariances are both violated at high energy, but the behavior of the system beyond the cut-off is known. This allows the calculation of the CS term in different physical situations. The influence of this CS term on the dynamics of the effective gauge field has been experimentally observed.
This is the superfluid <sup>3</sup>He-A, where in the low-energy corner there are two species of fermionic quasiparticles: left-handed and right-handed Weyl fermions . Quasiparticles interact with the order parameter, the unit $`\widehat{𝐥}`$-vector of the orbital momentum of Cooper pairs, in the same manner as chiral relativistic fermions interact with the vector potential of the $`U(1)`$ gauge field: $`𝐀\equiv p_F\widehat{𝐥}`$, where $`p_F`$ is the Fermi momentum. The ”electric charges” – the charges of the left and right quasiparticles with respect to this effective gauge field – are $`e_R=-e_L=1`$. The normal component of superfluid <sup>3</sup>He-A consists of the thermal fermions, whose density is determined by $`T`$ and by the velocity $`𝐯_n-𝐯_s`$ of the flow of the normal component with respect to the superfluid vacuum. The velocity of the counterflow in the direction of $`\widehat{𝐥}`$ is equivalent to the chemical potentials for left and right fermions in relativistic systems:
$$\mu _R=-\mu _L=p_F\widehat{𝐥}\cdot (𝐯_n-𝐯_s).$$
(3)
As in the relativistic theories, the state of the system of chiral quasiparticles with nonzero counterflow velocity (an analogue of chemical potential) violates Lorentz invariance and $`CPT`$ symmetry and induces the $`CPT`$\- odd Chern-Simons term. This term can be written in general form, which is valid both for the relativistic systems including that found in Ref. and for <sup>3</sup>He-A :
$$\frac{1}{4\pi ^2}\left(\underset{L}{\sum }\mu _Le_L^2-\underset{R}{\sum }\mu _Re_R^2\right)𝐀\cdot (\vec{\nabla }\times 𝐀).$$
(4)
Here sums over $`L`$ and $`R`$ mean summation over all the left-handed and right-handed fermionic species respectively; $`e_L`$ and $`e_R`$ are charges of left and right fermions with respect to $`U(1)`$ field (say, hypercharge field in the Standard model).
Translation of Eq.(4) to the <sup>3</sup>He-A language gives
$$\frac{p_F^3}{2\pi ^2}\left(\widehat{𝐥}_0\cdot (𝐯_s-𝐯_n)\right)\left(\delta \widehat{𝐥}\cdot (\vec{\nabla }\times \delta \widehat{𝐥})\right).$$
(5)
Here $`\widehat{𝐥}_0`$ is the direction of the order parameter $`\widehat{𝐥}`$ in the homogeneous ground state; $`𝐯_n-𝐯_s`$ is the uniform counterflow of the fermionic quasiparticles with respect to the superfluid vacuum; and $`\delta \widehat{𝐥}=\widehat{𝐥}-\widehat{𝐥}_0`$ is the deviation of the order parameter from its ground state direction.
Since for chiral fermions the chemical potential plays the part of the parameter $`b^0`$ in the fermionic Lagrangian, the connection between $`k^0`$ and $`b^0`$ is $`k^0=b^0/2\pi ^2`$ in <sup>3</sup>He-A. Though it agrees with the result obtained in relativistic system with nonzero chemical potential for chiral fermions , it does not coincide with Eq.(2) obtained in the massless limit $`m/b^00`$.
The instability of the electromagnetic vacuum due to the 3+1 Chern-Simons term has been discussed by Carroll, Field and Jackiw , Andrianov and Soldati , and Joyce and Shaposhnikov . In the case of a nonzero density of right electrons ($`\mu _R\ne 0`$) this instability leads to the conversion of the density of the right electrons to the hypermagnetic field. This effect was used in the scenario for nucleation of the primordial magnetic field . In <sup>3</sup>He-A this phenomenon is represented by the well known helical instability of the counterflow, which is triggered by the term in Eq.(5) . The conversion of the counterflow of the normal component (an analogue of $`\mu _R`$ in the Joyce-Shaposhnikov scenario) to the inhomogeneous $`\widehat{𝐥}`$-field with $`\vec{\nabla }\times \widehat{𝐥}\ne 0`$ (an analogue of the hypermagnetic field) due to this instability has been observed in rotating <sup>3</sup>He-A .
Recently another type of the Chern-Simons term has been found for both systems, <sup>3</sup>He-A and chiral relativistic fermions with nonzero $`\mu `$ or/and $`T`$. This is the mixed axial-gravitational CS term, which contains both the gauge field and the gravimagnetic field :
$`{\displaystyle \frac{1}{8\pi ^2}}\left({\displaystyle \underset{L}{\sum }}\mu _L^2e_L-{\displaystyle \underset{R}{\sum }}\mu _R^2e_R\right)𝐀\cdot 𝐁_𝐠,`$ (6)
$`𝐁_𝐠=\vec{\nabla }\times 𝐠,𝐠\equiv g_{0i}.`$ (7)
Here $`g_{0i}`$ is the element of the metric in the reference frame of the heat bath (in superfluids it is the element of the effective metric in the frame, in which the normal component is at rest). If the heat bath of chiral fermions is rotating in Minkowski space, the ”gravimagnetic field” is expressed in terms of rotation velocity $`𝛀`$:
$$𝐁_𝐠=\vec{\nabla }\times 𝐠=2\frac{𝛀}{c^2}.$$
(8)
Here $`c`$ is the material parameter, which is the speed of light in relativistic system, and the initial slope in the energy spectrum of fermionic quasiparticles propagating in the plane transverse to the $`\widehat{𝐥}`$-vector in <sup>3</sup>He-A . The material parameters do not enter Eq.(6) explicitly: they enter only through the metric. That is why the same equation Eq.(6) can be applied to different fermionic systems, including those with varying speed of light. In relativistic system this equation describes the macroscopic parity violating effect: rotation of the heat bath (or of the black hole) produces the flux of the chiral fermions along the rotation axis .
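The rigid-rotation result (8) rests on the vector identity $`\vec{\nabla }\times (𝛀\times 𝐫)=2𝛀`$. A minimal SymPy check (sign conventions for $`𝐠`$ aside, since the overall sign depends on the metric convention):

```python
# SymPy check: for g = (Omega x r)/c^2, curl g = 2*Omega/c^2.
import sympy as sp

x, y, z, c = sp.symbols('x y z c')
Om = sp.Matrix(sp.symbols('O1 O2 O3'))   # constant rotation vector
r = sp.Matrix([x, y, z])
g = Om.cross(r) / c**2

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

print(sp.simplify(curl(g) - 2*Om/c**2))  # -> zero vector
```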
Comparison of the Eqs.(6) and (4) suggests that the two $`CPT`$-odd terms can be united if one uses the Larmor theorem and introduces the combined fields:
$$𝐀_{L(R)}=e_{L(R)}𝐀+\frac{1}{2}\mu _{L(R)}𝐠,𝐁_{L(R)}=\vec{\nabla }\times 𝐀_{L(R)}.$$
(9)
Then the general form of the Chern-Simons $`CPT`$-odd term is
$$\frac{1}{4\pi ^2}\left(\underset{L}{\sum }\mu _L𝐀_L\cdot 𝐁_L-\underset{R}{\sum }\mu _R𝐀_R\cdot 𝐁_R\right).$$
(10)
Note that in the Standard Model the nullification of the $`CPT`$-odd term in Eq.(10) occurs if the ”gyromagnetic” ratio $`e/\mu `$ is the same for all fermions. This happens because of the anomaly cancellation. For the $`CPT`$-odd term induced by the vacuum fermions, the anomaly cancellation was discussed in Refs.. In <sup>3</sup>He-A the ”gyromagnetic ratio” is the same for two fermionic species, $`e_L/\mu _L=e_R/\mu _R`$, but the CS terms survive, since there is no anomaly cancellation in this system.
In <sup>3</sup>He-A there are also subtle points related to gauge invariance of the CS term, as discussed by Coleman and Glashow , and to the reference frame. They are determined by physical situations.
(i) The reference frame for the superfluid velocity $`𝐯_s`$ is the heat bath frame – the frame of the normal component moving with velocity $`𝐯_n`$. At $`T=0`$ this frame disappears: the thermal fermions are frozen out. To avoid uncertainty in the determination of the counterflow velocity $`𝐯_s-𝐯_n`$, and thus of the chemical potential of the chiral fermions, the limit $`T\to 0`$ must be taken after all other limits.
(ii) The leading terms in the low-energy effective action for the ”electrodynamics” of <sup>3</sup>He-A are gauge invariant, because the main contributions to the effective action are induced by the low-energy fermions, which are ”relativistic” and obey the gauge invariant Lagrangian. The Eq.(10) is an example of such gauge invariant term in the low-energy action. It is gauge invariant if the $`b^0`$ parameter (or $`\mu _R`$) is constant, i.e. if the background counterflow and $`\widehat{𝐥}_0`$ field are homogeneous. The inhomogeneous corrections, which correspond to the inhomogeneous $`b^0`$, violate the gauge invariance. This is natural, since these corrections are determined by the higher energy fermions, which do not obey the gauge invariance from the very beginning. This is in agreement with the conclusion made in Ref., that for existence of the CS term the ”weak condition” – the gauge invariance at zero 4-momentum – is required.
I thank Alex Vilenkin for discussions. This work was supported in part by the Russian Foundations for Fundamental Research grant No. 96-02-16072 and by European Science Foundation.
|
no-problem/9905/physics9905032.html
|
ar5iv
|
text
|
# Investigation of a 90 Degree Spherical Deflecting Analyzer Operated in an Asymmetrically Charged Configuration
## 1 Introduction
Direct measurements of the neutrino mass face the challenge of interpreting a convoluted spectrum to very high precision. The negative mass square issue persists while the demand for higher resolution presses on. The spectrometer function plays a key role in deciphering the mystery, but often the convolution of the finite source volume is not easy to take into account. In the UTA experiment, the resolution is set to reach the $`10^5`$ level while sufficient counts must be recorded to suppress statistical uncertainties. A 90 degree spherical analyzer (SDA) is designed as a preanalyzer for the UTA neutrino mass experiment. This analyzer has to provide a $`\pm 3.5^{\circ }`$ acceptance cone for the beta particles (electrons) emitted from a cell positioned along the symmetry axis. All the emission from within the fiducial source volume is expected to be imaged through a narrow ring slit. This image will serve as the source for a high resolution cylindrical mirror analyzer (CMA). The ring slit of the SDA controls the flow of tritium gas emanating from the cell and cuts out the low or high energy tail of the distribution function for the CMA. The SDA-90 provides high luminosity, a narrow throughput image and reasonable energy resolution, characterizing a focusing analyzer. Using the SDA as a focusing instrument was first proposed by Aston and later investigated by Purcell analytically, based on trajectory analysis. Ashby then included relativistic corrections , but left out fringe effects. Kessler et al. formulated a mathematical model for the fringe effects, but due to the limitations of their instrument, only first order focusing was seen in their experiment. The second order focusing was included in the design of Ross et al. by adding two Herzog lenses to adjust the fringe fields. An important feature of that design was that the analyzer was asymmetrically charged. In this investigation we verify that we can maintain second-order focusing which is not sensitive to the positions of the emitters. Regarding the spherical aberration as a minor effect, the imaging property of the analyzer has been further investigated. An SDA of a design very similar to that of Ross et al. has been built, and our simulation results were checked using a telefocus electron gun as the source.
## 2 Theoretical Background
Theoretical studies of the electron optics of an analyzer are generally based on trajectory analysis. When an analytic form of a trajectory is available, it can generally be represented in the form
$`L=L(\theta ,n,k)`$ (1)
where $`L`$ is the projection of the flight path from the source to the image onto the symmetry axis, $`\theta `$ is the azimuthal angle of the incident trajectory, $`n`$ characterizes the source position, and $`k`$ accounts for the voltage configuration of the analyzer and the kinetic energy of the electrons. In analogy to the optical axis in light optics, the principal trajectory is defined as the orbit of the electrons which follows the geometrical central path from the source through the analyzer. For a point emitter, the deviations from the principal trajectory due to angular dispersion and energy dispersion can be expressed through a Taylor expansion
$`\mathrm{\Delta }L(\mathrm{\Delta }\theta ,\mathrm{\Delta }E)={\displaystyle \sum _{\mu =1}^{\infty }}{\displaystyle \frac{1}{\mu !}}\left({\displaystyle \frac{\partial ^\mu L}{\partial \theta ^\mu }}\right)_{L_0}\left(\mathrm{\Delta }\theta \right)^\mu +{\displaystyle \sum _{\nu =1}^{\infty }}{\displaystyle \frac{1}{\nu !}}\left({\displaystyle \frac{\partial ^\nu L}{\partial E^\nu }}\right)_{L_0}\left(\mathrm{\Delta }E\right)^\nu +R`$ (2)
The first term of Eq. (2) characterizes the spherical aberration, while the second term characterizes the energy dispersion in the image plane. The ‘mixed’ term $`R`$ is generally not important if the spectrum to be analyzed is not continuous over a large range. The energy dispersion is defined as
$`D=E_0\left(\partial L/\partial E\right)`$ (3)
Often the source cannot be considered as a point. For a finite source (size $`l`$), the resolution of the analyzer can be modeled as
$`R={\displaystyle \frac{E}{\mathrm{\Delta }E}}=\left[{\displaystyle \frac{M_ll}{D}}+{\displaystyle \frac{\mathrm{\Delta }L\left(\mathrm{\Delta }\theta \right)}{D}}\right]^{-1},`$ (4)
and the transmission density is given by
$`T={\displaystyle \frac{N^{\prime }\mathrm{\Omega }}{8\pi ^2\left(M_ll+\mathrm{\Delta }L\right)R}}`$ (5)
where $`N^{\prime }`$ is the total number of electrons emitted per second, $`\mathrm{\Omega }`$ is the solid angle of the acceptance cone, $`M_l`$ is the lateral magnification, and $`R`$ in Eq. (5) is the radius of the ring image. In a resolution-optimized analyzer, large energy dispersion and small aberration are the main targets. The source size can be limited by an entrance aperture if the source intensity is sufficient. Otherwise, the finite-size effect must be considered seriously. The spherical aberration can be investigated by minimizing $`\mathrm{\Delta }L`$ with respect to the input angular spread $`\mathrm{\Delta }\theta `$. For first order focusing, the coefficient of the $`\mu =1`$ term in Eq. (2) has to be zero; second order focusing requires the $`\mu =1`$ and $`\mu =2`$ terms to vanish. In principle, three free parameters allow us to achieve third order focusing, but in practice usually only first order focusing is available, because the focusing behavior depends strongly only on the voltage configuration of the analyzer. In a transmission-optimized analyzer, a large acceptance ($`\mathrm{\Omega }`$) and a large source size ($`l`$) are required so that more particles can be accepted by the analyzer. When the spherical aberration is controlled such that its size is smaller than the image of the finite source, demagnification is favored to keep the transmission high.
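To make the interplay of Eqs. (4) and (5) concrete, the following sketch evaluates both expressions; it is a minimal illustration, and all numerical inputs (dispersion, magnification, ring radius, emission rate) are hypothetical placeholders rather than parameters of the actual SDA-90.

```python
# Minimal sketch of the resolution and transmission model of Eqs. (4), (5).
# All numbers below are illustrative assumptions, not measured values.
import math

def resolution(M_l, l, D, dL_aberr):
    """Eq. (4): E/Delta_E for lateral magnification M_l, source size l (mm),
    energy dispersion D (mm) and spherical aberration width dL_aberr (mm)."""
    return 1.0 / (M_l * l / D + dL_aberr / D)

def transmission(N_dot, Omega, M_l, l, dL_aberr, R_ring):
    """Eq. (5): transmission density for N_dot electrons/s emitted into 4*pi,
    acceptance solid angle Omega and ring-image radius R_ring (mm)."""
    return N_dot * Omega / (8 * math.pi**2 * (M_l * l + dL_aberr) * R_ring)

alpha = math.radians(3.5)                    # acceptance half angle
Omega = 2 * math.pi * (1 - math.cos(alpha))  # solid angle of the cone
print(resolution(M_l=1.1, l=0.5, D=290.0, dL_aberr=0.02))
print(transmission(1e8, Omega, 1.1, 0.5, 0.02, 150.0))
```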
Lacking graphical illustration, pure trajectory analysis does not yield all the information needed for optimization. For instance, it has been pointed out by Hafner et al. that the minimum beam width does not always occur at the image in the case of second order focusing; the beam can be narrower before it comes to a focus at the image. More importantly, practical applications involve fringe fields which often cannot be modeled properly, and even when they can, the analytical equations become too complicated for intuitive interpretation.
With the advent of modern computers, the Poisson equation can be solved numerically using finite difference or finite element methods. The trajectories of charged particles can be calculated very accurately provided that the boundaries of the fields are set up properly, the integration steps are reasonably fine, and convergence tests are performed during the integration. Thus it is possible to include the fringe fields and to use them to minimize the aberration by tracing the minimum beam width numerically. New, superior modes of operation can be found through this method, and by adding entrance and exit lenses the image quality can be fine-tuned. The ‘minimum-width’ ray tracing is often performed either along the symmetry axis or in the plane perpendicular to it to study the aberration. As far as we know, most published results focus their attention on the spherical aberration behavior only. The electron optics of deflecting analyzers has several distinct features which are not seen in the performance of rotationally symmetric systems, as can be seen in the development of the electron microscope. Because the principal trajectory is curved in the analyzer, the up-down symmetry is broken. This has several significant consequences. For example, in a rotationally symmetric system the second-order spherical aberration is eliminated by symmetry, while in a curved system all orders of spherical aberration can be present. The astigmatism in a rotationally symmetric system is only a second-order effect; in a curved system it is intrinsic. The spherical aberrations in a curved system can easily become very significant, even overpowering the finite-size effect, unless special efforts are made to suppress them. It was found by Ross et al. that the aberration behavior can be controlled by adding Herzog lenses to modify the fringe fields of the analyzer. We found that even when the source is moved significantly along the principal trajectory, the aberration of the SDA remains in the same mode. Thus we are able to regard the spherical aberration as a minor effect once second-order focusing is achieved, and we extend our prescription to include the magnification of the finite source. As will be shown later, the image plane in the curved system need not be perpendicular to the principal trajectory. The magnification factor is therefore more properly represented by a matrix than by a scalar, so looking in only one direction in the ray-tracing results does not provide the full information about the imaging process. This is even more relevant when an asymmetric field is applied along the flight path. The rotation of the image by the asymmetric fields can improve the resolution compared to the symmetric configuration.
## 3 Imaging Matrix Approach
### 3.1 Formula and Evaluations
We extend the scalar field representation of Eqs.(3) and (4) to a vector field representation as follows.
In the following,
* $`X`$ is the coordinate representing the image,
* $`x`$ the coordinate representing the object,
* $`\alpha `$ the half angle of the exit beam with energy $`E`$,
* $`l`$ the finite source size,
* $`M`$ the magnification factor,
* $`M_l`$ the lateral magnification,
* $`\widehat{T}`$ the direction of the chromatic image,
* $`\widehat{L}`$ the direction of the image,
* and $`\widehat{n}`$ the unit vector normal to the beam trajectory.
With these definitions,
* the defocusing broadening due to the rotation of images is given by $`\sqrt{M^2-M_l^2}\,l\alpha `$,
* the dispersive field by $`\vec{\nabla }E=\frac{dE}{dL}\widehat{T}`$,
* the chromatic aberration by $`\vec{\nabla }E\cdot \widehat{n}`$, and finally
* the defocusing broadening due to chromatic aberration by $`\left(\frac{\mathrm{\Delta }E}{\vec{\nabla }E\cdot \widehat{T}}+\frac{1}{2}M\widehat{L}\cdot \widehat{T}\right)\alpha `$.
These relations are depicted in Figure 2. In general, the imaging process can be expressed as
$`X_i=m_{ij}x_j+\epsilon _{ijk}x_jx_k+\gamma _{ijkl}x_jx_kx_l+\mathrm{\cdots }`$ (6)
To first order, the image is constructed by a linear mapping of the object. Thus $`m_{ij}`$ can be represented by a matrix which characterizes the imaging properties. At higher orders, non-vanishing $`\epsilon _{ijk}`$, $`\gamma _{ijkl}`$, … terms carry the information on geometric aberrations. Although this formulation does not include the spherical (angular) aberration, that effect can easily be taken into account as a universal blurring of the image. Aberration is included in first order imaging as
$`X_i=m_{ij}x_j\pm \mathrm{\Delta }_i\left(\alpha \right)`$ (7)
where $`\mathrm{\Delta }`$ depends only on the acceptance half angle ($`\alpha `$) of the instrument. In our case, with $`\alpha =3.5^{}`$ and $`l=0.5mm`$, the aberration contributes less than 5% to the minimum beam width once second order focusing is achieved. To determine the matrix elements, point sources positioned at several $`x`$ positions are set up as the input rays for the instrument. After the rays within an emission angle $`\stackrel{~}{\alpha }`$ traverse the analyzing field, each bundle of exit rays is traced to find the minimum ‘beam width’, which establishes the image position. It is worth mentioning that in preparing the input conditions $`\stackrel{~}{\alpha }`$ should not be too big, for the aberration may affect the result. We choose the semi-angle $`\stackrel{~}{\alpha }=0.65^{}`$, 15 rays, and $`\mathrm{\Delta }r\left(\mathrm{\Delta }z\right)=1mm`$ for all simulations in this investigation. In our SDA-90, the incident angle is 48° and the exit angle is 42° (Fig. 1). Following the convention used by Ross et al., the source coordinate is defined as $`d\stackrel{}{l}=d\stackrel{}{r}+d\stackrel{}{z}`$, and the image coordinate as $`d\stackrel{}{L}=d\stackrel{}{R}+d\stackrel{}{Z}`$. The imaging matrix m relates the source vector to the image vector as
$`\left(\begin{array}{c}\mathrm{\Delta }R\hfill \\ \mathrm{\Delta }Z\hfill \end{array}\right)=\left(\begin{array}{cc}m_{Rr}\hfill & m_{Rz}\hfill \\ m_{Zr}\hfill & m_{Zz}\hfill \end{array}\right)\left(\begin{array}{c}\mathrm{\Delta }r\hfill \\ \mathrm{\Delta }z\hfill \end{array}\right)`$ (14)
The magnification factor $`M`$ can be calculated as $`\sqrt{\mathrm{\Delta }R^2+\mathrm{\Delta }Z^2}/\sqrt{\mathrm{\Delta }r^2+\mathrm{\Delta }z^2}`$. For a unit vector perpendicular to the incident beam trajectory (the principal ray), the lateral magnification of our SDA is, according to Figure 2,
$`M_l=(\mathrm{sin}48^{},\mathrm{cos}48^{})\left(\begin{array}{cc}m_{Rr}\hfill & m_{Rz}\hfill \\ m_{Zr}\hfill & m_{Zz}\hfill \end{array}\right)\left(\begin{array}{c}\mathrm{sin}42^{}\hfill \\ \mathrm{cos}42^{}\hfill \end{array}\right)`$ (19)
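As a concrete illustration of Eqs. (14) and (19), the short sketch below applies an imaging matrix to a source vector and extracts $`M`$ and $`M_l`$; the matrix entries and the source vector are made-up numbers, not the fitted values of Table I.

```python
# Sketch of how M and M_l follow from the imaging matrix (hypothetical entries).
import numpy as np

m = np.array([[1.2, 0.3],    # m_Rr, m_Rz
              [0.1, 0.9]])   # m_Zr, m_Zz

dl = np.array([0.5, 0.2])    # source vector (dr, dz) in mm
dL = m @ dl                  # image vector (dR, dZ), Eq. (14)
M = np.linalg.norm(dL) / np.linalg.norm(dl)   # overall magnification

# Eq. (19): sandwich m between the 48- and 42-degree unit vectors.
row = np.array([np.sin(np.radians(48)), np.cos(np.radians(48))])
col = np.array([np.sin(np.radians(42)), np.cos(np.radians(42))])
M_l = row @ m @ col          # lateral magnification
print(M, M_l)
```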
To calculate the resolution, the dispersion curve in the proximity of the image point has to be calculated. This can be achieved easily by carrying out the ray tracing for a bundle of rays with the same input conditions but different energies. The energy dispersion due to the analyzing field can be calculated by taking cuts through the ray bundle along the $`\widehat{n}`$ direction in the proximity of the image. When $`\alpha `$ is large, so that the defocusing effect must be included, the chromatic images of bundles of different energy emerging from the same source point must be traced. Again, all the chromatic images line up linearly in the first order approximation. Figure 2 also shows a typical inclination ($`\widehat{T}`$) of the chromatic image. The resolution of the analyzer can be estimated following the convention that the size of the chromatic aberration equals the size of the image:
$`{\displaystyle \frac{\mathrm{\Delta }E}{\vec{\nabla }E\cdot \widehat{n}}}+\left({\displaystyle \frac{\mathrm{\Delta }E}{\vec{\nabla }E\cdot \widehat{T}}}+{\displaystyle \frac{1}{2}}M\widehat{L}\cdot \widehat{T}\right)\alpha =\sqrt{M^2-M_l^2}\,l\alpha +M_ll+\mathrm{\Delta }_{spher.aberr.}(\alpha )`$ (20)
When $`\alpha `$ is small, neglecting the defocusing and the spherical aberration, $`\mathrm{\Delta }E=M_ll\times \left(\vec{\nabla }E\cdot \widehat{n}\right)`$. The advantage of the analyzer under investigation is that the right-hand side is minimized by the rotation of images achieved with a non-symmetric analyzing field.
### 3.2 Results and Discussions
Table I shows the results obtained by the techniques outlined above, at three source positions that are relevant to us, for the symmetric and asymmetric modes. The voltage settings are optimized according to the aberration curve such that second order focusing is obtained in both modes. Note that while the overall magnification factors ($`M`$) in the asymmetric mode may be larger, the lateral magnification factors ($`M_l`$) are always smaller than those in the symmetric mode. A series of evaluations was done at source positions from $`s=-180mm`$ to $`s=80mm`$. Figure 3 shows a graphical representation of the results. Rotation of the image happens in both cases. In the symmetric mode, however, the rotation (represented by the dashed lines) is largely confined to the same quadrant, although an inversion is caused by the fringe fields when the source is placed so close that higher order effects contribute significantly. Figure 4 details the imaging properties for both operational modes. We found that the asymmetric mode generally provides a smaller $`M_l`$ (recall that this is the projection of the image perpendicular to the principal trajectory) thanks to the rotation of the image. That is to say, the charged particle flux will be better focused as seen by a slit mounted perpendicular to the principal trajectory, leading to higher transmission. Because part of the image will not lie in the slit plane, some defocusing broadening occurs. The defocusing effect involves the extension of the source image along the principal trajectory and the angular spread of the flux. Both factors enhance the defocusing broadening of the outgoing flux in the asymmetric mode. However, since the size of the image is so small, the defocusing broadening is insignificant for reasonable angular dispersion.
The dramatic changes in the magnification factor and the orientation of the images for $`s>50mm`$ (Figs. 3, 4(b)(d)) are due to the presence of fringe fields. In the symmetric case in particular, they create long tails in the transmission function which are not easy to control. Figure 5 depicts the resolution $`D^{-1}=M_l(\vec{\nabla }E\cdot \widehat{n})`$ as a function of the source position for both modes. The resolution of the analyzer is the same in both cases when the source is exactly on the symmetry axis ($`s=0`$). For $`s>0`$ the resolution of the asymmetric mode is better, while for $`s<0`$ the symmetric mode is preferred. In the UTA neutrino unit, the source is a cylindrical cell, 32 mm wide and 35.5 mm high, with a ring opening of 0.5 mm. Most of the beta flux will come from the $`s=0mm`$ to $`s=21.5mm`$ sector of the cell. Operating the analyzer in the asymmetric mode improves the transmission by $`23\%`$ and the resolution by $`4\%`$ at the preanalyzer stage. Since the resolution of the CMA depends linearly on the size of the lateral image formed by the SDA, the overall resolution of the system gained by operating the SDA in the asymmetric mode is over $`50\%`$. In the course of this investigation, changing the source position did not alter the second order focusing in either case. When the source is positioned in the field-free region, the position of the image is largely decided by the analyzing fields provided by the SDA. Tests were carried out to check the influence of the biasing voltages of the SDA and the Herzog lenses on the position of the image for both the symmetric and the asymmetric case. The same amount of voltage was added to or subtracted from the SDA and the lenses, up to hundreds of volts. The image shift due to modified Herzog lenses is an order of magnitude smaller than that due to equivalently modified SDA fields. However, the fringe fields between the lenses and the SDA change the aberration curve completely and participate strongly in the rotation of the images. Fundamentally, the rotation during imaging is an inherent property of any curved optical system, and its treatment can still reside within the general paraxial framework with special care for its vectorial nature.
## 4 Experiments
A Steigerwald type gun is chosen as the electron beam source because the monochromaticity of the beam is better than 0.01% and an adjustable real image, created beyond the electron gun by telefocusing, can be used as the input object for the SDA. The electron source is used to measure the action of the dispersive field of the SDA, which makes verification of the calculated imaging property of the SDA possible. While the position of the object is moved (by adjusting the position of the inner Wehnelt cylinder of the gun), the image is traced both vertically and horizontally. Since the object will be in the proximity of the symmetry axis, the SDA will form a slightly magnified lateral image in the vertical direction.
### 4.1 Setup
A self-biasing electron gun based on Steigerwald’s design is mounted on a rotatable frame under the SDA-90. The incidence angle of the gun can be adjusted. A Faraday cage mounted at the entrance of the SDA, offset 16 mm from the symmetry axis, with a 50 micron aperture facing the incident e-beam, acts as a beam ‘checker’. After passing through the SDA-90, the electrons face a similar Faraday cage (the detector), which is mounted on another rotatable frame with moving mechanisms that allow the cage to move in the plane perpendicular to the exit beam. The two rotating frames are aligned horizontally by a digital indicator to better than 1 arc minute. In the vertical direction the two axes are aligned optically, through a laser beam defined by two apertures and a photodiode with a 100 micron entrance aperture, to an accuracy within 50 micron. The setup is detailed in Figure 6. The incidence angle of the gun is first fixed by the ‘checker’. Then the gun is rotated by 90° to allow the beam to go through the SDA. By adjusting the position of the inner Wehnelt cone of the electron gun, beams with different sizes and points of convergence can be created. The beam profiles are first measured with the beam checker before entering the SDA, and the exit beam profiles are measured afterwards. The checker only measures the horizontal beam width, while the detector measures both horizontal and vertical profiles. The cage current is measured by a Keithley 616 multimeter read out by a Dell P100 computer through GPIB. The power supply of the electron gun is a Spellman RHSR60N. A Bertan 205B power supply sets the voltage of the inner sphere of the SDA, and a Fluke 408B power supply that of the Herzog lenses. The beam intensities range from 2 to 6 microamps. The whole chamber is maintained at $`\approx 2\times 10^{-6}`$ Torr. The vacuum tank is shielded by mu-metal, and the transverse magnetic fields are measured to be in the range of mGauss.
### 4.2 Results
The principal trajectory is determined by the incidence angle of the electron beam, the checker’s z position, the inclination angle of the detector, and the detector’s Z position. The voltages of the SDA and the Herzog lenses are set according to the simulation. The variables are the azimuthal position of the detector and the incident energy of the electrons. Although the electron energy can be recorded to 6 digit precision by a differential voltmeter, its absolute value can only be read out to three significant digits. The agreement between the relativistic numerical calculation and the experimental results for the electron energy is within $`0.5\%`$. We also found that the principal trajectory is rather insensitive to the voltage setting of the two Herzog lenses. The beam envelopes under investigation all have excellent Gaussian shapes (Fig. 7). No observable baseline fluctuation appears in the measured beam intensity. The energy resolution of the analyzer is measured by adjusting the incident beam energy from 19940 eV to 20060 eV, such that the Gaussian beam scans across the detecting aperture. As depicted in Figure 8, the full width at half maximum (FWHM) of the beam profile defines the energy resolution to be $`\mathrm{\Delta }E/E=2.47\times 10^{-3}`$. This is measured at an incident beam size of 634 micron and an exit beam size of 726 micron. The beam spread is only 2.5 mrad; thus no angular aberration is seen. Since $`\mathrm{\Delta }E=M_ll\left(\vec{\nabla }E\cdot \widehat{n}\right)`$, this result implies $`\vec{\nabla }E\cdot \widehat{n}=68eV/mm`$. This measurement agrees very well with the ray tracing result ($`70eV/mm`$). The focusing property of the SDA is elucidated in terms of the ratios of the beam widths measured before and after the SDA. As depicted in Figures 9 and 10, both vertical and horizontal lateral beam widths were measured, at $`z=87.35mm`$ by the checker and at $`Z=522.52mm`$ by the detector. Since the ray tracing predicted several optical properties of the SDA very well, we use the beam-width ratio to calculate the position of the object. In particular, we examine the data obtained with the inner Wehnelt position marked $`0`$. In Figure 9, the detector measures a 726 micron vertical lateral beam width (FWHM) while the input beam width at the checker is 634 micron (beam spread 3.12 mrad, point of convergence 203 mm ahead of the checker, based on previous measurements of the focusing property of the telefocus gun). Applying the simulation results of Figure 4 and taking into account the modification due to the defocusing, the object is found at $`z=61.2mm`$, and correspondingly the image is at $`Z=483mm`$ (for complete results see the previous article). On the one hand, this result is used in the previous article as further supporting evidence for the proper analysis of the emission optics of the Steigerwald type electron gun; on the other hand, the focusing property in the horizontal direction, which is not calculated in the 2D ray tracing, can be constructed from it. From the same input beam, the detector receives a horizontal beam width of 804 micron (FWHM) (Figure 10). Based on the conservation of brightness, the beam dispersion angle in this direction is estimated to be 3.49 mrad. This puts an upper limit of 1.27 on the lateral magnification factor in the horizontal direction. Thus separate images form vertically and horizontally, exhibiting the astigmatism of the SDA. However, since eventually a ring source will be used in the experiment, we only have to consider the vertical image.
The ratio of the beam widths measured in the two directions is close to 1; this suggests that the broadening of the ring image due to the horizontal defocusing will not be significant in our case. This astigmatism is often left unchecked by two-dimensional ray tracing for cylindrically symmetric analyzers and can thus cause anomalous broadening in practice.
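The FWHM analysis behind Figure 8 can be summarized by the following sketch; the profile here is synthetic (a Gaussian with an assumed width plus noise), so it only reproduces the order of magnitude of the quoted $`\mathrm{\Delta }E/E`$.

```python
# Sketch of the FWHM extraction: fit a Gaussian to a (here synthetic) beam
# profile and convert its width into an energy resolution Delta_E/E.
import numpy as np
from scipy.optimize import curve_fit

def gauss(E, A, E0, sigma):
    return A * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

E = np.linspace(19940.0, 20060.0, 61)        # scanned beam energies (eV)
counts = gauss(E, 1.0, 20000.0, 21.0)        # assumed profile width
counts += np.random.default_rng(0).normal(0.0, 0.01, E.size)

(A, E0, sigma), _ = curve_fit(gauss, E, counts, p0=[1.0, 20000.0, 20.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
print("Delta_E/E =", fwhm / E0)              # about 2.5e-3 for these inputs
```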
## 5 Conclusion
In this work, we have used the imaging matrix as a tool to evaluate the resolution and transmission characteristics of a spherical deflecting analyzer. Results are shown for both the symmetrically and the asymmetrically charged case. The asymmetric case is found to be superior. An SDA-90 has been built based on the simulations. The principal trajectory and the dispersion field were checked experimentally using a telefocus gun as the beam source. The details made accessible by the imaging matrices provide a straightforward database for convoluting the spectrometer function with the finite source. Although extensive literature is available on this topic, most published results apply mainly to cases where the source is small, so that the aberration behavior is the key element in optimizing the resolution. Typically, a third order aberration curve from an analyzer of our size ($`50cm`$) has a second order focus of 20 micron. For any finite source larger than 50 micron, the finite-size effect often dominates, and an optimization based on the aberration alone may not be correct. The imaging matrix approach proposed here provides a way of dealing with such problems.
## 6 Acknowledgement
The authors wish to express their gratitude to L.H. Thuesen and H.F. Wellenstein for their involvement in the early stage of this work. Special thanks go to the UTA Physics machinists for their excellent work. This work was supported by the Texas Advanced Research Program and the Robert A. Welch Foundation.
# An Implementation of the Bestvina-Handel Algorithm for Surface Homeomorphisms
## 1 Introduction
The fundamental group of a surface $`S`$ of genus $`g`$ with one puncture is a free group $`F`$ on $`2g`$ generators. A homeomorphism of $`S`$ induces an outer automorphism $`𝒪`$ of $`F`$, and we can represent $`𝒪`$ as a homotopy equivalence $`f:G\to G`$ of a finite graph $`G\subset S`$ homotopy equivalent to $`S`$.
$`f:G\to G`$ is said to be a train track map if for every $`n\ge 1`$ and for every edge $`e`$ of $`G`$, the restriction of $`f^n`$ to the interior of $`e`$ is an immersion. In \[BH95\], Bestvina and Handel give an effective algorithm that takes a homotopy equivalence $`f:G\to G`$ representing an outer automorphism $`𝒪`$ and attempts to find a train track representative $`f^{}:G^{}\to G^{}`$ of $`𝒪`$, where $`G^{}`$, like $`G`$, is embedded in and homotopy equivalent to $`S`$. If $`𝒪`$ is irreducible<sup>1</sup><sup>1</sup>1See \[BH92\] for a definition of irreducibility. For our purposes, it is sufficient to know that an outer automorphism induced by a pseudo-Anosov homeomorphism of a surface with one puncture will always be irreducible., the algorithm will always succeed. If $`𝒪`$ is reducible, it will either find a train track representative, or it will conclude that $`𝒪`$ is reducible.
Given a train track representative $`f:G\to G`$ of an outer automorphism induced by a surface homeomorphism $`\varphi :S\to S`$, Bestvina and Handel (see \[BH95\]) construct a so-called train track<sup>2</sup><sup>2</sup>2Thurston’s notion of train tracks is slightly different from the notion of train tracks according to Bestvina and Handel. For an exposition of Thurston’s theory of surface homeomorphisms, see \[FLP79\]. $`\tau `$, which can be thought of as being embedded in $`S`$. Using $`\tau `$, one can effectively decide whether $`\varphi `$ is pseudo-Anosov. Furthermore, in the pseudo-Anosov case, the following information can be extracted from $`\tau `$ and $`f`$:
* the growth rate of $`\varphi `$
* the structure of the stable and unstable foliations of $`\varphi `$, in particular singular points of the foliations and their indices
The software package implements this theory in the case of surfaces of genus at least two with exactly one puncture. This restriction is motivated by the fact that pseudo-Anosov homeomorphisms of surfaces with one puncture induce irreducible automorphisms of the fundamental group. This is not true for surfaces with more than one puncture, and handling this case would require the implementation of a more complicated algorithm. However, the theory developed in \[BH95\] works in full generality (including the case of closed surfaces, which can be reduced to the case of punctured surfaces by removing the orbit of a periodic point).
The package consists of three main parts:
* The first part takes a surface homeomorphism $`\varphi :S\to S`$ defined by a sequence of Dehn twists and turns it into a homotopy equivalence of a graph.
* The second part takes a homotopy equivalence of a graph and either finds a reduction or a train track representative.
* The third part constructs a train track $`\tau `$ from a train track representative and generates an image of $`\tau `$ embedded in the surface $`S`$.
The outputs of the second and third parts combined contain all the information about $`\varphi `$ listed above. In particular, they decide whether $`\varphi `$ is pseudo-Anosov.
The package is highly modular, and the three parts can be used independently. For example, the handling of Dehn twists has applications beyond the scope of this paper, and the second part also works for nongeometric outer automorphisms of free groups (see \[BH92\]). Moreover, each of the three parts falls into several functional units, many of which (such as computations and graphics in the hyperbolic plane) may be used in other contexts.
The implementation of the train track algorithm is a part of my master’s thesis (\[Bri95\]); the design and implementation of the programs handling Dehn twists and graphics is a part of my Diplom thesis (\[Bri96\]). It is my pleasure to express my gratitude to Klaus Johannson and Werner Ballmann for their help in writing my theses in Knoxville, Paris and Bonn. I would like to thank Steve Gersten for encouraging me to write this article. Finally, I would like to thank the editors and the referees for valuable suggestions.
The software package, written in Java, is available free of charge at http://www.math.utah.edu/~brinkman. An older version of the package, written in ANSI-C, is also available. Both versions are portable and should run on most systems.
## 2 Related algorithms and implementations
There are at least three other implementations of the Bestvina-Handel algorithm, each with an emphasis different from the implementation described here.
* T. White’s ”FOLDTOOL” (\[Whi90\]) is an implementation of the train track algorithm from \[BH92\] for free groups. Automorphisms are entered and displayed as homotopy equivalences of graphs.
* B. Menasco and J. Ringland (see \[MR96\]) have implemented the Bestvina-Handel algorithm in the case of automorphisms of punctured spheres. Homeomorphisms can be entered as braid words or as homotopy equivalences of graphs. Results are displayed as homotopy equivalences of graphs.
* T. Hall’s implementation (see \[Hal96\]) handles arbitrary punctured surfaces. Homeomorphisms are entered as homotopy equivalences of graphs, as braid words, or as horseshoe maps according to Smale. Results are displayed as homotopy equivalences of graphs.
A common characteristic of all implementations is a program realizing some part of the theory developed in \[BH92, BH95\]. The main distinguishing characteristic of the implementation discussed here is that homeomorphisms of surfaces with one puncture can be entered as compositions of Dehn twists, and results can be displayed as pictures of graphs embedded in surfaces, which significantly facilitates the generation of examples as well as the interpretation of results. Hence, the software described here provides a powerful yet easy-to-use environment for mathematical experimentation.
Finally, we note that various authors have found independent approaches to train tracks, e. g., M. Lustig in \[Lus92\], J. Los in \[Los93\] and J. Franks/M. Misiurewicz in \[FM93\]. In \[Lus92\], train tracks are used to study automorphisms of free groups, while the other two papers are concerned with homeomorphisms of punctured spheres.
## 3 Dehn twists
The software package contains a class with two methods for handling Dehn twists: One of them is extremely easy to use and allows the user to define surface homeomorphisms as a composition of Dehn twists with respect to a fixed set of curves (see figure 1). The Dehn twists with respect to this set of curves generate the mapping class group (see \[Lic64\]). This set of generators is not minimal; rather, it was chosen with the user’s convenience in mind.
The other method for handling Dehn twists removes the restriction to a fixed set of curves, which results in a slightly more complicated input format. This method is the part of the package that provides the link between surface homeomorphisms and homotopy equivalences of graphs; the method described in the previous paragraph merely generates input for the second one.
When computing Dehn twists, we adopt the following convention: We equip the surface with an outward pointing normal vector field. When twisting with respect to a curve $`c`$, we turn right<sup>3</sup><sup>3</sup>3The notion of turning left or right is defined with respect to the chosen normal vector field. whenever we hit $`c`$.
## 4 Examples
Figures 2, 3, 4 and 5 were generated by the software package. Each of them shows a train track belonging to a pseudo-Anosov homeomorphism of a once punctured surface of genus 2 or 3. The identification pattern on the boundary of the polygons is given by matching labels of edges intersecting the boundary, and the puncture corresponds to the vertices of the polygon.
Singularities of the stable or unstable foliation of the pseudo-Anosov map in question correspond either to the puncture or to shaded areas containing at least three edges. If a shaded area contains $`k\ge 3`$ edges, it gives rise to a singularity of index $`1-\frac{k}{2}`$. For the proofs of these statements, see \[BH95\].
Since the sum of the indices of all singularities equals the Euler characteristic of the surface with the puncture closed, we can compute the index of the singularity at the puncture, if any. Moreover, the singularities of the two foliations are fixed points or periodic points of the pseudo-Anosov homeomorphism in question. There are more periodic points than just the singularities of the foliations — in fact, the set of periodic points of a pseudo-Anosov homeomorphism is dense, see \[FLP79, exposé 9, proposition 18\].
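This bookkeeping is easy to automate. The sketch below (not part of the package) computes the leftover index at the puncture from the genus and the edge counts $`k_i`$ of the shaded areas; its outputs match the examples that follow.

```python
# Sketch of the index bookkeeping: each shaded area with k >= 3 edges
# contributes 1 - k/2, and the indices must add up to 2 - 2g.
def puncture_index(g, edge_counts):
    chi = 2 - 2 * g                       # Euler characteristic, puncture closed
    return chi - sum(1 - k / 2 for k in edge_counts)

print(puncture_index(2, []))              # Example 4.1: -2 at the puncture
print(puncture_index(2, [6]))             # Example 4.2: 0, no singularity there
print(puncture_index(2, [3, 3, 3, 3]))    # Example 4.3: 0
print(puncture_index(3, [6, 6]))          # Example 4.4: 0
```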
In the following examples, $`S_g`$ is a surface of genus $`g`$ with one puncture, and $`D_c`$ denotes the Dehn twist with respect to a curve $`c`$, which will always be one of the curves from figure 1. All the results in the following paragraphs were computed by the software package, the only input being the genus of the surface and a sequence of Dehn twists.
###### Example 4.1 (Maximal index I).
Consider the map $`h:S_2\to S_2,`$
$$h=D_{a_1}D_{c_0}D_{d_0}D_{a_1}D_{d_1}D_{a_1}.$$
Using the algorithm from \[BH95\], the software concludes that $`h`$ is a pseudo-Anosov homeomorphism with growth rate $`\lambda \approx 1.722084`$. A train track for $`h`$ is shown in figure 2. None of the shaded areas gives rise to a singularity of the stable or unstable foliation, so the puncture is the only singularity, and its index is $`-2`$.
###### Example 4.2 (Maximal index II).
Let $`h:S_2\to S_2`$ be given by
$$h=D_{a_1}^{-1}D_{d_1}D_{c_0}^{-1}D_{d_0}.$$
$`h`$ is a pseudo-Anosov homeomorphism with growth rate $`\lambda \approx 4.390257`$. Figure 3 shows the corresponding train track. The unique shaded area in figure 3 contains six edges, so it gives rise to a singularity $`p`$ of index $`-2`$. We conclude that there is no singularity at the puncture.
###### Example 4.3 (Minimal index).
Let the homeomorphism $`h:S_2\to S_2`$ be given by
$$h=D_{a_0}D_{c_0}^{-1}D_{d_0}D_{d_1}^{-1}.$$
$`h`$ is a pseudo-Anosov homeomorphism with growth rate $`\lambda \approx 2.015357`$. Figure 4 shows the corresponding train track. The shaded areas labeled $`0,1,3,4`$ give rise to singularities of index $`-\frac{1}{2}`$, which shows that there is no singularity at the puncture. The singularities $`0`$ and $`4`$ as well as $`1`$ and $`3`$ are exchanged by $`h`$.
###### Example 4.4 (Genus 3).
Let $`h:S_3\to S_3`$ be given by
$$h=D_{d_0}D_{c_0}D_{d_1}D_{c_1}D_{d_2}D_{c_2}^{-1}.$$
$`h`$ is a pseudo-Anosov homeomorphism with growth rate $`\lambda \approx 2.042491`$. Figure 5 shows the corresponding train track. The shaded areas labeled $`0,2`$ give rise to singularities of index $`-2`$, and they are exchanged by $`h`$. There is no singularity at the puncture.
###### Example 4.5 (A reducible example).
Finally, consider $`h:S_2\to S_2`$ defined by
$$h=D_{d_0}D_{c_0}D_{d_1}.$$
$`h`$ is reducible since the complement of the curves $`d_0`$, $`c_0`$, and $`d_1`$ is not a (punctured) disc, and in fact the software reaches the same conclusion.
## 5 Implementation
The complete online documentation of the software package, including a user manual and the source code, can be found at http://www.math.utah.edu/~brinkman, so at this point we restrict ourselves to a brief discussion of the main implementation issues. For the most part, we take the point of view of mathematics rather than that of computer science.
### 5.1 Encoding of embeddings
For the rest of this discussion, it will be advantageous to think of punctures as distinguished points of closed surfaces. Given a closed surface $`S`$ with a distinguished point $`p`$ and a finite graph $`G\subset S`$ homotopy equivalent to $`S\setminus \{p\}`$, we need an efficient way of encoding the embedding of $`G`$ in $`S`$. To this end, consider a loop $`\rho ^{\prime }`$ around $`p`$. $`\rho ^{\prime }`$ is homotopic to a closed edge path $`\rho `$ in $`G`$ that crosses every edge of $`G`$ twice, once in each direction (assuming that $`G`$ has no vertices of valence one). Conversely, given $`G`$ and $`\rho `$, we can reconstruct $`S`$: We simply take a polygon $`P`$ with $`2n`$ sides, where $`n`$ is the number of edges of $`G`$, and interpret $`\rho `$ as an identification pattern on the boundary of $`P`$. Moreover, we can triangulate $`P`$ (and hence $`S`$) by fixing a point $`p`$ in the interior of $`P`$ and connecting $`p`$ to all the vertices in the boundary of $`P`$. Hence, we see that $`G`$ and $`\rho `$ give us an efficient way of encoding the embedding of $`G`$ in $`S`$ along with a triangulation of $`S`$.
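A minimal consistency check for this encoding is that $`\rho `$ uses every edge of $`G`$ exactly once in each direction. The sketch below assumes a simple signed-label format (lower case for an edge, upper case for its reverse); this format is an illustration, not the package's actual data structure.

```python
# Sketch: verify that rho crosses every edge of G once in each direction.
from collections import Counter

def is_boundary_word(rho, edges):
    edges = list(edges)
    count = Counter(rho)
    return (len(rho) == 2 * len(edges) and
            all(count[e] == 1 and count[e.upper()] == 1 for e in edges))

# Once-punctured torus: G is a wedge of two loops a, b and the loop around
# the puncture is the commutator a b a^-1 b^-1.
print(is_boundary_word(['a', 'b', 'A', 'B'], ['a', 'b']))   # True
```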
### 5.2 Finding a metric
Now, given a triangulation $`\tau `$ of $`S`$, we want to find a hyperbolic metric on $`S`$ with the property that the edges of $`\tau `$ are geodesic segments. There are various ways to accomplish this (see \[CdV91\]); our method of choice is a special case of Thurston’s circle packing (see \[Thu78\]): Given a surface $`S`$ with a hyperbolic metric $`\mu `$ and with a triangulation $`\tau `$ whose edges are geodesic segments, there is a collection of circles centered at the vertices of $`\tau `$ such that no two circles intersect transversally and two vertices of $`\tau `$ are connected by an edge if and only if their corresponding circles are tangent. For each triangulation, there exists exactly one such set of circles, and their radii can be computed numerically. Moreover, they uniquely determine $`\mu `$. Hence, circle packing gives us an effective way of drawing $`S`$ as a polygon (with identifications on the boundary) in the hyperbolic plane.
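The radius iteration behind circle packing can be sketched as follows. For brevity the sketch uses the Euclidean law of cosines and a toy one-vertex triangulation; the package needs the hyperbolic analogue and an actual triangulation of $`S`$, so this illustrates the idea rather than the implementation.

```python
# Sketch of Thurston-style circle packing: adjust the radius at each vertex
# until the angles of the incident triangle corners sum to the target value.
import math

def corner_angle(r, r1, r2):
    # Angle at the circle of radius r in a triangle of mutually tangent
    # circles with radii r, r1, r2 (Euclidean law of cosines).
    a, b, c = r1 + r2, r + r2, r + r1
    return math.acos((b * b + c * c - a * a) / (2.0 * b * c))

def pack(corners, n_vertices, target=2.0 * math.pi, steps=1000):
    r = [1.0] * n_vertices
    for _ in range(steps):
        for v in range(n_vertices):
            s = sum(corner_angle(r[v], r[i], r[j])
                    for (u, i, j) in corners if u == v)
            r[v] *= s / target    # too much angle -> grow the circle
    return r

# Toy check: the 1-vertex, 2-triangle torus has six corners of pi/3 at its
# single vertex, so the angle sum is already 2*pi and the radius stays 1.
print(pack([(0, 0, 0)] * 6, 1))
```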
### 5.3 Philosophy
The package takes advantage of many features of the object-oriented paradigm, such as data encapsulation and reusability. For example, the class that implements maps of graphs does not allow direct access to its contents; the other parts of the package operate on such maps through a small and well defined set of methods, which results in ease of maintenance and great flexibility.
At this time, one general drawback of Java is that many web browsers use faulty implementations of the Java libraries, which may cause problems when running the software described in this article. However, such problems do not occur when the software is run by the Sun appletviewer, which is available for most systems.
The mathematical part of the package consists of 16 classes, reflecting increasing levels of specialization. Some of them, like the implementation of the train track algorithm from \[BH92\], will only be used in the context of this package. Others, like the collection of methods for computations and drawings in the hyperbolic plane, have been designed with other uses in mind. In fact, the package presented here does not even use all the methods defined in this collection.
Finally, the classes and methods handling maps of graphs may be useful beyond the context of this article. For example, the author has already used them for a tentative implementation of some of the algorithms in \[Sta83\].
Department of Mathematics, University of Utah
Salt Lake City, UT 84112, USA
E-mail: brinkman@math.utah.edu
# Simultaneous age-metallicity estimates of the Hyades open cluster from three binary systems
## 1. Introduction
Since the members of an open cluster are assumed to be of the same age and chemical composition, these stars are currently used to test the validity of stellar evolution theories, mainly because main sequence stars define a tight sequence in a colour-magnitude diagram (CMD). Unfortunately, this tightness is sometimes misleading because of the contamination by field stars, the presence of unresolved binaries, and the influence of stellar rotation on the location of massive stars in CMDs. Alternatively, well-detached binaries provide powerful tests when their fundamental parameters are accurately known (see the comprehensive review by Andersen 1991 on double-lined eclipsing binaries). Unfortunately, the determination of their chemical composition often remains a difficult and unresolved issue. It appears therefore that a better test can be performed by combining both advantages, that is, testing the tracks with well-detached double-lined binaries which are members of open clusters. We have applied this idea to three well-detached binaries, members of the Hyades: 51 Tau, V818 Tau, and $`\theta ^2`$ Tau.
### Observational data :
Torres et al. 1997 (\[TSL97a\], \[TSL97b\] and \[TSL97c\]) obtained the first complete visual-spectroscopic solutions for the 3 above-mentioned systems, from which they carefully derived very accurate parallaxes and individual masses. They also gathered individual photometric data in the Johnson system. Furthermore, we found useful trigonometric parallax information in the Hipparcos catalogue (ESA, 1997). By combining the two sources of data, we investigate the influence of the Hipparcos parallaxes on our method, which was developed to test stellar evolutionary models in HR diagrams.
### Theoretical tracks :
Among the most widely used stellar theoretical tracks in the literature are those computed by the Geneva group (see Charbonnel et al. 1993 and references therein) and the Padova group (see Fagotto et al. 1994 and references therein). We also used the stellar tracks from Claret & Giménez (1992) (CG92 thereafter). The tests are done with these 3 series of stellar tracks.
### Tests in the CMD :
The tests we want to perform are the following :
1. to check whether the two components of the systems are on the same isochrone, i.e. on a line defined by the same age and the same chemical composition for the two single stars.
2. since all the selected stars are members of the Hyades, whose metallicity has been well measured (according to the review of Perryman et al. (1998): \[Fe/H\] $`=`$ 0.14 $`\pm `$ 0.05, i.e. Z $`=`$ $`0.024_{-0.003}^{+0.0025}`$), we can also check that the predicted metallicities from theoretical models are correct.
3. for 51 Tau and $`\theta ^2`$ Tau, the individual stellar masses are known with an accuracy of about 10%, and for V818 Tau, masses and radii are known with an accuracy close to 1-2%, allowing further tests with the theoretical models.
Therefore, if one of these criteria is not clearly fulfilled by a given set of tracks, then these models have obvious problems, since they do not account for several observational constraints (namely the metallicity, mass, radius, and/or the photometric data).
### Photometric calibrations :
We do not claim that the 6 selected Hyades stars allow us to test without ambiguity any set of theoretical stellar tracks. Since the data are presented in CMD, we are in fact testing not only the validity of the tracks but also of the photometric calibrations, and disentangling the relative influence of both is a tricky task. We use the Basel Stellar Library (BaSeL) photometric calibrations, extensively tested and regularly updated for a larger set of parameters (see Lejeune et al. 1997, 1998 and Lastennet et al. 1999a). For reasons developed in Lastennet et al. (1999b), we assume that the calibrations from the BaSeL models are reliable enough for this work (for more details and references on the BaSeL library, see contributions of Lejeune et al. and Westera et al. in this volume).
### Brief description of the statistical method :
In order to derive simultaneously the metallicity (Z) and the age (t) of the system, and to produce confidence level contours (see Figure 1), we minimize the $`\chi ^2`$-functional defined as:
$`\chi ^2(t,Z)`$ $`=`$ $`{\displaystyle \sum _{i=A}^{B}}\left[\left({\displaystyle \frac{\mathrm{M}_\mathrm{V}\left(\mathrm{i}\right)_{\mathrm{mod}}-\mathrm{M}_\mathrm{V}\left(\mathrm{i}\right)}{\sigma \left(\mathrm{M}_\mathrm{V}\left(\mathrm{i}\right)\right)}}\right)^2+\left({\displaystyle \frac{\left(\mathrm{B}-\mathrm{V}\right)\left(\mathrm{i}\right)_{\mathrm{mod}}-\left(\mathrm{B}-\mathrm{V}\right)\left(\mathrm{i}\right)}{\sigma \left(\left(\mathrm{B}-\mathrm{V}\right)\left(\mathrm{i}\right)\right)}}\right)^2\right]`$ (1)
where $`A`$ is the primary and $`B`$ the secondary component. M<sub>V</sub> and (B$`-`$V) are the observed values, and M<sub>V</sub><sub>mod</sub> and (B$`-`$V)<sub>mod</sub> are obtained from the synthetic computations of the BaSeL models using a given set of stellar tracks.
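The minimization itself is a simple two-parameter grid scan. The sketch below illustrates it with a made-up stand-in for the BaSeL/stellar-track photometry (the function `isochrone_mag` and the data values are placeholders, not the real models or measurements); the joint 1$`\sigma `$ region for two parameters corresponds to $`\chi ^2\le \chi _{\mathrm{min}}^2+2.30`$.

```python
# Schematic grid scan of the chi-square of Eq. (1) over age and metallicity.
# 'isochrone_mag' and the photometric data below are toy placeholders.
import numpy as np

def isochrone_mag(t, Z, star):
    # Fake linear 'isochrone' so that the sketch runs end to end.
    MV = 4.0 + 2.0 * (t - 8.8) - 20.0 * (Z - 0.024)
    BV = 0.6 + 1.0 * (t - 8.8) - 5.0 * (Z - 0.024)
    return MV, BV

data = {"A": (4.05, 0.05, 0.61, 0.02),    # MV, sigma_MV, B-V, sigma_BV
        "B": (3.98, 0.05, 0.59, 0.02)}

def chi2(t, Z):
    total = 0.0
    for star, (MV, sMV, BV, sBV) in data.items():
        MV_mod, BV_mod = isochrone_mag(t, Z, star)
        total += ((MV_mod - MV) / sMV) ** 2 + ((BV_mod - BV) / sBV) ** 2
    return total

ages = np.linspace(8.5, 9.1, 61)          # grid in log t
zs = np.linspace(0.010, 0.040, 61)        # grid in Z
grid = np.array([[chi2(t, Z) for Z in zs] for t in ages])
region = grid <= grid.min() + 2.30        # joint 1-sigma region, 2 parameters
print(grid.min(), region.sum())
```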
## 2. Results
The table below briefly summarizes the results (see Lastennet et al. 1999b for further details) of the simultaneous theoretical age–metallicity estimates obtained from isochrone age fitting (1$`\sigma `$ level), taking into account the Hipparcos parallax.
* For 51 Tau and $`\theta ^2`$ Tau, the 3 sets of isochrones give good fits in the CMD, in agreement with previous estimates (Perryman et al.) of age (log t $`=`$ $`8.80_{-0.04}^{+0.02}`$, from isochrone fitting with the CESAM stellar evolutionary code (Morel 1997)) and metallicity (\[Fe/H\] $`=`$ 0.14 $`\pm `$ 0.05).
* The Geneva and CG92 models can not be tested with the less massive component of V818 Tau. The Padova tracks provide contours in agreement with the Hyades metallicity only when taking into account the Hipparcos parallax. Otherwise, solutions are too old and metal rich.
* Masses predicted by the 3 sets of tracks are in good agreement with the measured individual masses of each system.
* Padova isochrones can not fit the system V818 Tau in a mass-radius diagram.
## References
Andersen, J. 1991, ARA&A, 3, 91
Charbonnel, C., Meynet, G., Maeder, A., Schaller, G., Schaerer, D. 1993, A&AS, 101, 415
Claret, A., Giménez, A. 1992, A&AS, 96, 255, \[CG92\]
ESA, 1997, The Hipparcos and Tycho Catalogues (ESA-SP 1200)
Fagotto, F., Bressan, A., Bertelli, G., Chiosi, C. 1994, A&AS, 105, 39
Lastennet, E., Lejeune, Th., Westera, P, Buser, R. 1999a, A&A, 341, 857
Lastennet, E., Valls-Gabaud, D., Lejeune, Th., Oblak, E. 1999b, accepted for A&A, \[astro-ph/9905273\]
Lejeune, Th., Cuisinier, F., Buser, R. 1997, A&AS, 125, 229
Lejeune, Th., Cuisinier, F., Buser, R. 1998, A&AS, 130, 65
Morel, P. 1997, A&A Suppl., 124, 597
Perryman, M.A.C., Brown, A.G.A., Lebreton, Y., Gómez, A., Turon, C., Cayrel de Strobel, G., Mermilliod, J.-C. 1998, A&A, 331, 81
Torres, G., Stefanik, R.P., Latham, D.W. 1997, ApJ, 474, 256, \[TSL97a\]
Torres, G., Stefanik, R.P., Latham, D.W. 1997, ApJ, 479, 268, \[TSL97b\]
Torres, G., Stefanik, R.P., Latham, D.W. 1997, ApJ, 485, 167, \[TSL97c\]
# Depinning transition and thermal fluctuations in the random-field Ising model
## I Introduction
Driven interfaces in systems with quenched disorder display, with increasing driving force, a transition from a phase where no interface motion takes place to a phase with a finite interface velocity. This so-called depinning transition is caused by the competition of driving force and quenched disorder. While the driving force tends to move the interface, the motion is hindered by the disorder (see e.g. ).
Depinning transitions are found in a large variety of physical problems, like fluid invasion in porous media, depinning of charge density waves, or field-driven motion of domain walls in ferromagnets. In magnetic systems a domain wall separates regions of different spin orientation. Under the assumption that the corresponding interface behaves like an elastic membrane, it has been argued that the depinning of the interface can be described by an Edwards-Wilkinson equation with quenched disorder. While the interface motion in a system with quenched disorder near the critical threshold is often investigated theoretically in the absence of thermal fluctuations, these fluctuations affect the experimental study of the depinning transition. The crucial point is that energy barriers which are responsible for trapping the interface in a metastable state at zero temperature can always be overcome by thermal fluctuations. For driving fields far below the transition field this yields a thermally activated creep motion (see and references therein). This behavior changes when approaching the transition point, where finite temperatures cause a rounded depinning transition (for experimental evidence see, for instance, Fig. 2 in ). To describe the dependence of the interface velocity on driving force and temperature near the transition point, a scaling ansatz has been proposed. This ansatz, which is based on an equation of motion for sliding charge density waves, predicts the characteristic velocity to be a power law of temperature at the critical threshold. This scaling ansatz has been shown to be a valid description of the depinning of a domain wall in the 2$`d`$ random-field Ising model (RFIM) with quenched disorder.
The outline of our paper is as follows: Sec. II describes the RFIM and reviews properties of $`[111]`$-interfaces in this model. In Sec. III we discuss the depinning transition from a microscopic point of view, analyzing the mechanisms of interface motion near the depinning transition. We also determine numerically the exponents of the interface velocity and of the correlation length, allowing an estimate of the universality class of the 3$`d`$-RFIM. In Sec. IV we analyze the influence of temperature on the depinning transition. Assuming the interface velocity to be a generalized homogeneous function, our analysis applies standard concepts of critical equilibrium phenomena. This ansatz allows the thermal rounding of the depinning transition to be characterized by a critical exponent $`\delta `$. We determine $`\delta `$ for the depinning transition in the 3$`d`$-RFIM for the first time and find numerical evidence for a scaling relation among certain critical exponents characterizing this transition. This scaling relation also holds in the 2$`d`$-RFIM analyzed previously.
## II Interfaces in the RFIM
We investigate the $`3d`$-RFIM with quenched disorder on a simple cubic lattice. The Hamiltonian of the system is given by
$$\mathcal{H}=-\frac{J}{2}\sum_{\langle i,j\rangle }S_iS_j-H\sum_iS_i-\sum_ih_iS_i,$$
(1)
where the first sum is restricted to nearest-neighbor pairs. $`H`$ denotes the driving field and $`h_i`$ quenched random fields which are uniformly distributed within the interval $`[-\mathrm{\Delta },\mathrm{\Delta }]`$. We carry out Monte Carlo simulations with single-spin-flip dynamics and use transition probabilities $`p(S_i\to -S_i,T)`$, where $`T`$ denotes the temperature, according to a heat-bath algorithm (see e.g. and references therein). At zero temperature these transition probabilities reduce to
$$p(S_i\to -S_i,0)=\{\begin{array}{cc}1\hfill & :\delta <0\hfill \\ 1/2\hfill & :\delta =0\hfill \\ 0\hfill & :\delta >0,\hfill \end{array}$$
(2)
where $`\delta =\mathcal{H}(-S_i)-\mathcal{H}(S_i)`$ is the energy change caused by flipping spin $`i`$. We investigate three-dimensional cubic systems of linear extension from $`L=12`$ to $`L=162`$.
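For finite temperatures the heat-bath rule takes the familiar sigmoid form, which reduces to Eq. (2) at $`T=0`$. A minimal sketch in our notation (with $`k_B=1`$):

```python
# Heat-bath flip probability; delta = H(-S_i) - H(S_i) is the energy change.
import math

def p_flip(delta, T):
    if T == 0.0:                          # limit of Eq. (2)
        return 1.0 if delta < 0 else (0.5 if delta == 0 else 0.0)
    return 1.0 / (1.0 + math.exp(delta / T))

print(p_flip(-0.4, 0.0), p_flip(0.0, 0.0), p_flip(0.4, 0.0))  # 1.0 0.5 0.0
print(p_flip(0.4, 0.2))   # thermally activated move over a barrier
```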
An initially flat interface is built into the system, separating regions of up and down spins. The applied field $`H`$ drives the interface. During the Monte Carlo simulation spins adjacent to the interface flip, causing the interface to move. Nucleation may also occur, i.e. a spin initially parallel to all of its neighbors may turn. Since we are interested in the scaling behavior of the interface motion in the vicinity of the depinning transition, it is essential that no nucleation occurs within the observation time. The minimum energy needed for an isolated spin flip is $`2(zJ-H-\mathrm{\Delta })`$. As long as this quantity is large compared to the temperature, the time scales on which nucleation and interface motion occur are separated, and within the observation time no nucleation takes place. In particular, there is no need to artificially suppress nucleation or isolated spin flips during the simulation.
The analysis of interface motion on simple cubic lattices usually considers $`[100]`$-interfaces. However, for $`[100]`$-interfaces in the limit of vanishing disorder the interface motion is restricted to driving fields $`H/J>z-2`$ (see ). To avoid this, we consider $`[111]`$-interfaces, which move in the absence of disorder at arbitrarily small driving fields, increasing the separation of the time scales for interface motion and nucleation even further.
We have found that the most convenient way to implement $`[111]`$-interfaces numerically is the introduction of antiperiodic boundary conditions. This implementation is illustrated in Fig. 1. For simplicity, periodic images of a snapshot of an interface in $`d=2`$ are shown. As can be seen from Fig. 1, the orientations of *up* and *down* are exchanged when passing the boundaries of the system. Of course, the exchange of *up* and *down* also affects the driving field, whose sign has to be chosen appropriately. Our implementation works as long as the different parts of the interface do not interact. An interaction takes place if the interface width, $`w\propto L^\zeta `$, is of the same magnitude as the typical distance $`a`$ between two neighboring parts of the interface. This distance is proportional to the linear extension of the system, $`a\propto L`$, independent of the considered dimension. Our implementation is therefore applicable to situations where $`\zeta <1`$. Despite this restriction, antiperiodic boundary conditions have the advantage that they can be applied in any dimension and generalized to other orientations of the interface. They are a natural choice for interfaces because the moving interface can be investigated without any time limit. This is especially an advantage close to the depinning transition, where the critical slowing down causes large relaxation times.
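A sketch of such an antiperiodic neighbor lookup is given below, in $`d=2`$ for brevity (the 3$`d`$ case adds one index); crossing a boundary returns the wrapped site together with a sign that implements the exchange of *up* and *down*. The code illustrates the idea and is not our production routine.

```python
# Antiperiodic boundary conditions: crossing a boundary flips the meaning
# of up and down, encoded by the returned sign.
def neighbor(x, y, dx, dy, L):
    sign = 1
    nx, ny = x + dx, y + dy
    if not 0 <= nx < L:
        nx %= L
        sign = -sign
    if not 0 <= ny < L:
        ny %= L
        sign = -sign
    return nx, ny, sign

# The bond energy then reads -J * S[x][y] * sign * S[nx][ny], and the sign
# of the local driving field is adjusted in the same way.
print(neighbor(0, 3, -1, 0, 8))   # (7, 3, -1): wrapped and inverted
```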
## III Zero Temperature
In the RFIM with an interface initially built into the system, there are in general two magnetization reversal processes: interface motion and nucleation. Without thermal fluctuations the second process does not occur as long as $`(H+\mathrm{\Delta })/J`$ does not exceed the number $`z`$ of nearest neighbors. The corresponding threshold is shown in Fig. 2 (upper broken curve). Above this threshold nucleation processes take place and interfere with the interface motion.
In the following we are interested in the influence of overhangs on the value of the critical field $`H_\mathrm{c}(\mathrm{\Delta })`$ at which the transition takes place. Close to the depinning transition, there are two important kinds of spin flips (see Fig. 3). All spins of type $`A`$ with
$$\sum_jS_AS_j=0\quad \text{will flip if}\quad H\ge H_0=\mathrm{\Delta },$$
(3)
while the first spin of type $`B`$ with
$$\sum_jS_BS_j=2\quad \text{can flip if}\quad H\ge H_2=2J-\mathrm{\Delta }.$$
(4)
Here, the sums run over the nearest neighbors of $`A`$ and $`B`$, respectively. If the strength of disorder $`\mathrm{\Delta }`$ is smaller than the exchange energy $`J`$, it follows that $`H_0<H_2`$. The critical field at which the transition takes place is then $`H_\mathrm{c}(\mathrm{\Delta }<J)=\mathrm{\Delta }`$. Hence, no overhangs occur in the vicinity of the transition point. Taking into account the transition probabilities given by Eq. (2), this value of the critical field means that the interface velocity depends neither on the driving field nor on the strength of disorder as long as no overhangs occur ($`H<H_2`$). In particular, in the absence of overhangs the interface velocity observed in a disordered system coincides with that of a non-disordered system ($`\mathrm{\Delta }=0`$). Figure 4 and its inset show numerical data which confirm this scenario for the 3$`d`$-RFIM within the error bars.
Next we investigate the depinning transition occurring in the RFIM for $`\mathrm{\Delta }>J`$. In this case the transition takes place at a field $`H_\mathrm{c}<\mathrm{\Delta }`$, as can be understood from the following consideration: For $`\mathrm{\Delta }>J`$ one has $`H_2<H_0`$, and not all spins of type $`A`$ can flip if $`H<\mathrm{\Delta }`$. But because of the existence of overhangs, a second growth mechanism is available to the interface: If a spin of type $`A`$ cannot flip due to its large random field $`h_i`$, an overhang created elsewhere can cause an avalanche by which additional neighbors of $`A`$ are flipped. Thus the interface can be kept moving. Contrary to the regime $`\mathrm{\Delta }<J`$, the interface motion is now based on the existence of overhangs. Note that our considerations do not depend on the dimension $`d`$ of the system, because Eqs. (3) and (4) are independent of $`d`$.
We start examining the regime $`\mathrm{\Delta }>J`$ in the 3$`d`$-RFIM numerically by investigating the depinning transition from below. We analyse the disorder-averaged distance $`h(t\to \infty )`$ traveled by an initially flat interface before pinning occurs. This quantity is closely related to the total volume invaded by a growing domain, which was analyzed in . However, while in the driving force is increased step by step to allow for relaxation processes in between, we focus our attention on driving fields which remain unchanged during the interface motion.
Below the depinning transition $`h(t\to \infty )`$ is finite. Approaching the transition point with increasing driving field, the distance traveled before pinning occurs increases and finally diverges at the transition point. We assume that in the vicinity of the transition point $`h(t\to \infty )`$ diverges algebraically, characterized by some exponent $`y`$,
$`h(t\to \infty )`$ $`\propto `$ $`\left(H_\mathrm{c}-H\right)^{-y},`$ (5)
where $`H_\mathrm{c}`$ denotes the critical field observed in a system of infinite extension. In a finite system with linear dimension $`L`$ finite-size scaling is assumed. The corresponding scaling ansatz reads
$$h(t\to \infty )=L^{y/\nu }f\left[\left(H-H_\mathrm{c}\right)L^{1/\nu }\right],$$
(6)
with $`f(x)\propto |x|^{-y}`$ for $`x\to -\infty `$. Note that $`h(t\to \infty )`$ also diverges in any finite system, which means that $`f(x)`$ should diverge at a finite value $`x^{*}`$. The corresponding driving field defines a size-dependent critical field $`H_\mathrm{c}(L)`$ given by $`\left(H_\mathrm{c}(L)-H_\mathrm{c}\right)L^{1/\nu }=x^{*}`$. A scaling plot of the data according to Eq. (6) is shown in Fig. 5. The divergence of $`f(x)`$ occurs at $`x^{*}\approx 2.5`$, showing that in a finite system the threshold field is always shifted to fields larger than $`H_\mathrm{c}`$.
The critical exponent of the correlation length parallel to the interface is given by $`1/\nu =1.31\pm 0.07`$ and the critical field turns out to be $`H_\mathrm{c}=1.371\pm 0.03`$. The value of $`\nu `$ coincides with that of , where an interface of different orientation in the self-affine growth regime (corresponding in our case to $`\mathrm{\Delta }>J`$) was investigated. This suggests that the behavior of the correlation length at the depinning transition does not depend on the orientation of the interface in the RFIM.
In the following we consider the disorder-averaged interface velocity $`v=\text{d}h/\text{d}t`$ above the transition point in the limit of large times. This quantity can be interpreted as the order parameter of the depinning transition. Approaching a continuous phase transition, the order parameter vanishes in leading order according to
$$v(H)=A\left(H-H_\mathrm{c}\right)^\beta .$$
(7)
The corresponding data are shown in Fig. 6. The prefactor $`A`$ is a non-universal constant which can be used to compare the results obtained at zero temperature with those presented in the next section. Since finite-size effects may become important in the vicinity of the depinning transition, we calculated each interface velocity $`v(H)`$ in systems of different linear extension $`L`$. For sufficiently large $`L`$ we observed no significant dependence on the system size, from which we concluded that the data shown in Fig. 6 correspond within negligible errors to those of the limit $`L\to \infty `$. As can be seen from the data, Eq. (7) is fulfilled and we obtain $`A=0.671\pm 0.03`$, $`\beta =0.66\pm 0.04`$, and $`H_\mathrm{c}=1.37\pm 0.01`$.
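The fit leading to these numbers is a standard nonlinear least-squares problem; the following sketch uses placeholder arrays in place of the measured velocities of Fig. 6:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data standing in for the measured velocities of Fig. 6;
# only the fitting procedure itself is illustrated here.
H_data = np.linspace(1.45, 2.0, 12)
v_data = 0.67 * (H_data - 1.37) ** 0.66

def order_parameter(H, A, Hc, beta):      # Eq. (7)
    return A * (H - Hc) ** beta

popt, pcov = curve_fit(order_parameter, H_data, v_data,
                       p0=(0.7, 1.37, 0.66),
                       bounds=([0.0, 1.0, 0.1], [2.0, 1.44, 2.0]))
A, Hc, beta = popt
print(A, Hc, beta, np.sqrt(np.diag(pcov)))   # best-fit values and errors
```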
The values of $`\beta `$ and $`\nu `$ obtained from our analysis coincide within the error-bars with those of the Edwards-Wilkinson equation with quenched disorder in $`d=2+1`$, $`\beta _{\mathrm{EW}}=2/3`$ and $`\nu _{\mathrm{EW}}=3/4`$. These values are obtained by an $`ϵ`$-expansion within a functional renormalization group scheme (see ). While the value of $`\beta _{\mathrm{EW}}`$ is obtained to first order in $`ϵ`$, there are arguments that $`\nu _{\mathrm{EW}}`$ is exact to all orders in $`ϵ`$ . Taking this into account, our results suggest that the depinning transition of a domain wall in the 3$`d`$-RFIM with quenched disorder is in the same universality class as the depinning transition of the corresponding Edwards-Wilkinson equation.
## IV Finite Temperatures
In this section we study the influence of finite temperatures on the depinning transition. For $`T>0`$ the interface velocity does not vanish for finite driving fields since the energy needed to overcome local energy barriers is provided by thermal fluctuations at any finite $`T`$. This results in a rounded depinning transition. The rounding can be seen in Fig. 7, where interface velocities for different driving fields and temperatures are presented. As expected, the rounding of the transition increases with increasing temperature. Again, we ensured that the interface velocities presented in this and the following figures correspond within negligible errors to those of the thermodynamic limit. To analyse the thermal rounding of the depinning transition quantitatively, we first note that the depinning transition can be described in terms of a continuous non-equilibrium phase transition. This is suggested by the divergence of the correlation length (see the determination of $`\nu `$ and Fig. 5) and the dependence of the interface velocity on the driving field near the transition point (Fig. 6). In the standard theory of critical phenomena a continuous phase transition is characterized by critical exponents (see for instance and references therein). Besides $`\beta `$, which describes the field dependence of the order parameter, and $`\nu `$, which characterizes the divergence of the correlation length near the transition point, the rounding of a phase transition is characterized by the critical exponent $`\delta `$. In magnetic systems, for instance, $`\beta `$ and $`\delta `$ determine the magnetic equation of state. We now apply this approach to the depinning transition by assuming its order parameter to be a generalized homogeneous function of temperature and driving field,
$$v[T,H-H_\mathrm{c}]=\lambda v[\lambda ^{a_T}T,\lambda ^{a_H}(H-H_\mathrm{c})].$$
(8)
Choosing $`\lambda =T^{-1/a_T}`$ we obtain the scaling ansatz
$$v(T,H)=T^{1/\delta }f_T[(H-H_\mathrm{c})T^{-1/\beta \delta }],$$
(9)
with $`f_T(x\to 0)=\text{const}`$. In particular, this equation corresponds to the magnetic equation of state . From an equation of motion of sliding charge density waves a scaling form corresponding to Eq. (9) has been obtained . Note that, contrary to our ansatz which is based on Eq. (8), the latter approach yields no predictions for the values of $`\beta `$ and $`\delta `$.
It has been shown previously that Eq. (9) is valid in the 2$`d`$-RFIM with quenched disorder . We have tested this scaling ansatz in the present situation for the 3$`d`$-RFIM with the interface velocities shown in Fig. 7. As can be seen from Fig. 8, the scaling ansatz leads to a data collapse for $`\beta =0.63\pm 0.07`$, $`\delta =2.38\pm 0.2`$, and $`H_\mathrm{c}=1.375\pm 0.01`$. Thus at $`H=H_\mathrm{c}`$ the influence of temperature on the interface velocity can be described by a power law $`v\propto T^{1/\delta }`$. To support this value of $`\delta `$ we can determine $`\delta `$ from a different scaling function, obtained from Eq. (8) by choosing $`\lambda =|H-H_\mathrm{c}|^{-1/a_H}`$:
$$v(T,H)=(H-H_\mathrm{c})^\beta f_H[(H-H_\mathrm{c})^{-\beta \delta }T],$$
(10)
with $`f_H(x\to 0)=\text{const}`$. This ansatz is valid above the transition point and is closely related to Eq. (9). It corresponds to a different formulation of the magnetic equation of state. Interface velocities rescaled according to Eq. (10) are shown in Fig. 9. One obtains $`\beta =0.67\pm 0.03`$, $`\delta =2.55\pm 0.37`$, and $`H_\mathrm{c}=1.37\pm 0.05`$. This result confirms within the error-bars the value of $`\delta `$ determined by Eq. (9). Besides these quantities, the data collapse also allows a determination of the prefactor $`A=f_H(x\to 0)`$ \[see Eq. (7)\], which turns out to be $`A=0.685\pm 0.025`$.
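Both scaling forms amount to a simple rescaling of the raw data before plotting. A minimal sketch, assuming arrays of measured velocities $`v`$ with corresponding temperatures $`T`$ and driving fields $`H`$, reads:

```python
import numpy as np

BETA, DELTA, HC = 0.66, 2.4, 1.37   # exponents of the order quoted above

def collapse_eq9(v, T, H):
    """Rescaling according to Eq. (9); with the correct exponents all
    (T, H) curves fall onto the single scaling function f_T."""
    return (H - HC) * T ** (-1.0 / (BETA * DELTA)), v * T ** (-1.0 / DELTA)

def collapse_eq10(v, T, H):
    """Rescaling according to Eq. (10), applicable above H_c only."""
    return T * (H - HC) ** (-BETA * DELTA), v * (H - HC) ** (-BETA)
```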
The values of $`A`$, $`H_\mathrm{c}`$, and $`\beta `$ found for $`T>0`$ coincide within sufficient accuracy with those obtained at $`T=0`$. We have demonstrated that Eqs. (9) and (10) are valid, confirming that the interface velocity is a generalized homogeneous function in the vicinity of the transition point . Thus, the influence of temperature on the depinning transition can be described within well-established concepts.
The knowledge of $`\beta `$ and $`\delta `$ allows a test of the scaling relation $`\delta =2+1/\beta `$ proposed by Tang and Stepanow . This scaling relation was shown to be fulfilled in the 2$`d`$-RFIM . For $`\beta \approx 0.67`$ the scaling relation suggests $`\delta \approx 3.5`$, which is not supported by our results. On the other hand, standard theory of critical phenomena predicts relations among critical exponents. For instance, combining the Rushbrooke, the Widom and the hyperscaling relation yields in equilibrium physics
$$\delta =\frac{d\nu }{\beta }-1.$$
(11)
This scaling relation is valid in dimensions $`d`$ below the upper critical dimension $`d_\mathrm{c}`$ due to the restriction of the hyperscaling relation to $`d<d_\mathrm{c}`$. We have tested the scaling relation (11) with the numerically evaluated exponents at the depinning transition and found that both the exponents in the present case $`d=3`$ as well as the exponents for $`d=2`$ ($`\nu _{2d}\approx 1.0`$, $`\beta _{2d}\approx 0.33`$, and $`\delta _{2d}\approx 5.0`$; see ) fulfill Eq. (11) within the error-bars. Unfortunately, however, a firm foundation of this scaling relation for non-equilibrium phase transitions such as the present one is still lacking.
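The comparison amounts to elementary arithmetic with the quoted exponents (using $`\nu _{\mathrm{EW}}=3/4`$ for $`d=3`$):

```python
# Direct check of Eq. (11) against the exponents quoted in the text.
for d, nu, beta, delta in [(3, 0.75, 0.66, 2.38), (2, 1.0, 0.33, 5.0)]:
    print(f"d={d}: d*nu/beta - 1 = {d * nu / beta - 1:.2f}  vs  delta = {delta}")
# d=3: 2.41 vs 2.38; d=2: 5.06 vs 5.0 -- consistent within the error-bars
```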
## V Conclusion
We investigated the motion of a driven interface in a magnetic system with quenched disorder. To improve the efficiency of our numerics we applied antiperiodic boundary conditions. These boundary conditions make it possible to investigate the interface motion on any time scale. At zero temperature a depinning transition occurs at a finite driving field. We discussed the influence of overhangs and avalanches on this transition. If the strength of disorder exceeds the coupling constant, the interface motion is based on the existence of overhangs. Under these circumstances the depinning transition can be characterized by critical exponents, both below and above the critical threshold. Our results suggest that the depinning transition of a domain wall in the 3$`d`$-RFIM with quenched disorder and the depinning transition of the corresponding Edwards-Wilkinson equation are in the same universality class.
Thermal fluctuations yield a rounded transition. By assuming the interface velocity to be a generalized homogeneous function of temperature and driving field, this rounding can be described within a scaling approach. The validity of this approach is confirmed by the fact that at the threshold field the interface velocity vanishes with decreasing temperature according to a power law characterized by an exponent $`\delta `$. We have tested a scaling relation \[Eq. (11)\] among different exponents characterizing the depinning transition and found numerical evidence that the scaling relation is valid in both the 2$`d`$- and the 3$`d`$-RFIM.
###### Acknowledgements.
This work was supported by the Deutsche Forschungsgemeinschaft through the Graduiertenkolleg Struktur und Dynamik heterogener Systeme at the University of Duisburg, Germany.
# The Construction of a Quantum Markov Partition
## I Introduction
For classical hyperbolic systems, symbolic dynamics provides the proper coordinates for an efficient description of the chaotic behavior . Such description does not exist at the quantum level (with the exception of a few important semiclassical treatments ). This work is an attempt to apply the techniques of symbolic dynamics in quantum mechanics. The ultimate goal of this kind of investigations is to rewrite the equations of quantum mechanics in terms of adequate symbols for a given (chaotic) problem.
Symbolic dynamics requires a partition of phase space in various regions. We are thus faced with the problem of defining properly the quantum analogues to bounded regions of phase space. The essential difficulties for doing this are the limitations imposed by the uncertainty principle. Strictly speaking, quantum mechanics is not only in contradiction with the notion of a phase space point but also with that of a finite subset of phase space.
In a previous paper a symbolic decomposition along these lines was studied, but no special constructions were necessary because the invariant manifolds were aligned with the coordinate axes, thus turning the elements of the generating partition into simple projectors. Here we generalize the method of by constructing certain objects (we call them quantum rectangles) which are the quantum equivalents of the classical elements of a generating partition. Then we investigate their properties and different possibilities for their construction. The quantum rectangles behave approximately as projectors over the corresponding classical regions, apart from diffraction effects which are characteristic of quantum phenomena.
Once the quantum rectangles have been defined, it is straightforward to construct a quantum generating partition. In perfect analogy with the classical case, this partition leads to a symbolic decomposition of the propagator. Eventually, we obtain an exact trace formula having the same structure as Gutzwiller’s.
The rest of the paper is structured as follows. In Section II we argue that the quantum analogue of a finite region of phase space can be constructed in a natural way by simply quantizing the characteristic function of that region. In Section III we show that in the semiclassical limit the quantized regions display properties consistent with the classical ones. Section IV describes the application of the quantum generating partition to decompose the propagator. Finally, Section V contains the concluding remarks.
## II Construction
The first step towards the construction of a quantum Markov partition consists in defining the quantum analogue for a finite region $`R`$ of the classical phase space (to be considered later as belonging to a generating partition). For the sake of simplicity, we restrict our analysis to two dimensional phase spaces with the topology of a torus (we further assume that the torus has unit area). Extensions to spaces of higher dimensionality or to other topologies can also be considered. We want to construct an operator which is the quantization of the characteristic function $`\mathrm{\Delta }_R`$ of the region $`R`$,
$$\mathrm{\Delta }_R(q,p)=\{\begin{array}{cc}1\hfill & \text{if }(q,p)\in R\hfill \\ 0\hfill & \text{otherwise}\hfill \end{array}.$$
(1)
Let us just mention two simple properties of the characteristic functions: distributivity with respect to the set intersection and normalization
$`\mathrm{\Delta }_{R_1}\mathrm{\Delta }_{R_2}`$ $`=`$ $`\mathrm{\Delta }_{R_1\cap R_2},`$ (2)
$`{\displaystyle \int dp\,dq\,\mathrm{\Delta }_R(p,q)}`$ $`=`$ $`𝒜_R;`$ (3)
the integral is over the torus and $`𝒜_R`$ is the area (volume) of the region $`R`$. For the moment these regions are arbitrary but eventually they will become the elements of a partition of the phase space.
To establish the connection with quantum mechanics we make use of a phase space representation, that is, a basis $`\{\widehat{B}(q_k,p_j),\ 1\le k,j\le N\}`$ for operators acting on the Hilbert space $`\mathcal{H}`$ of dimension $`N=1/(2\pi \hbar )`$ (the $`q`$ and $`p`$ representations on the torus are discrete, and mutually related through a discrete Fourier transform ). Any operator $`\widehat{O}`$ can be written as a linear combination of the elements of the basis
$$\widehat{O}=\sum _{k,j=1}^NO(q_k,p_j)\widehat{B}(q_k,p_j).$$
(4)
Conversely, for a given symbol $`O(q_k,p_j)`$, Eq. (4) defines an operator $`\widehat{O}`$. We require the operator basis to decompose the identity
$$\sum _{k,j=1}^N\widehat{B}(q_k,p_j)=𝟙_{\mathcal{H}}.$$
(5)
Two examples of operator bases will be considered: the Kirkwood representation, associated to the basis $`\{|q_k\rangle \langle q_k|p_j\rangle \langle p_j|\}`$, and a representation of projectors over coherent states, $`\{|q_k+ip_j\rangle \langle q_k+ip_j|\}`$. In both cases the discretization used is $`q_k=k/N`$ and $`p_j=j/N`$, $`1\le k,j\le N`$, corresponding to periodic boundary conditions on the torus. We construct the coherent set starting from a circular Gaussian packet centered at $`(1/2,1/2)`$, say in the $`q`$ representation. Then, this function is evaluated on the discrete $`q`$ mesh and normalized. The whole set of coherent states is obtained by successive translations of the initial state to all the points $`(q_k,p_j)`$ of the mesh .
Both representations allow a natural construction of the quantization $`\widehat{R}`$ of a phase space region $`R`$:
$`\widehat{R}_K`$ $`=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \sum _{k,j=1}^N}\mathrm{\Delta }_R(q_k,p_j)|q_k\rangle \langle q_k|p_j\rangle \langle p_j|,`$ (6)
$`\widehat{R}_z`$ $`=`$ $`{\displaystyle \frac{1}{N^2}}{\displaystyle \sum _{k,j=1}^N}\mathrm{\Delta }_R(q_k,p_j)|q_k+ip_j\rangle \langle q_k+ip_j|.`$ (7)
The normalization prefactors $`1/N`$ and $`1/N^2`$ are such that the “quantum area” (to be defined later) of the whole torus is one. The additional factor $`1/N`$ in the coherent case is due to the overcompleteness of that representation. While $`\widehat{R}_z`$ is Hermitian and treats $`p`$’s and $`q`$’s symmetrically, $`\widehat{R}_K`$ is not. Therefore, in applications we use the symmetrical combination $`\widehat{R}_K^s=(\widehat{R}_K+\widehat{R}_K^{\dagger })/2`$ (we come back to this point later).
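For concreteness, the Kirkwood construction of Eq. (6) can be assembled numerically in a few lines. The sketch below (with an illustrative triangular region and an arbitrary value of $`N`$; the phase convention of the discrete Fourier transform is one common choice) builds $`\widehat{R}_K`$ and its symmetrized version and checks that the trace reproduces the classical area:

```python
import numpy as np

N = 32                                               # Hilbert space dimension
k = np.arange(N)
F = np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)   # <q_k|p_j>

def chi(q, p):
    """Characteristic function of an illustrative triangular region."""
    return q + p < 1.0

R_K = np.zeros((N, N), dtype=complex)
for kk in range(N):                                  # Eq. (6)
    for jj in range(N):
        if chi(kk / N, jj / N):
            # |q_k><q_k|p_j><p_j| has matrix elements <q_k|p_j><p_j|q_m>
            R_K[kk, :] += F[kk, jj] * np.conj(F[:, jj])
R_K /= N
R_Ks = (R_K + R_K.conj().T) / 2                      # symmetrized rectangle
print(np.trace(R_Ks).real)                           # ~ classical area (1/2)
print(np.sort(np.linalg.eigvalsh(R_Ks))[::-1][:5])   # leading eigenvalues
```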
By defining the operators as quantizations of the characteristic functions of the classical regions we guarantee that they have the expected semiclassical limit. We will show that the Gaussian rectangle $`R_z`$ tends smoothly to its classical counterpart. On the other hand, the convergence of $`R_K`$, which has been constructed from a sharp distribution, shows a characteristic rapid oscillatory structure.
## III Properties
The spectral analysis of the quantum rectangles is the key to understanding their general properties. We begin by studying the Gaussian regions. For the case of a triangular region, Fig. 1 shows the way in which $`\widehat{R}_z`$ behaves in the limit $`N\to \infty `$. There we plot the eigenvalues $`\lambda _k`$ (associated to the eigenvectors $`|\psi _k\rangle `$) in decreasing order.
Most of the eigenvalues take the values $`0`$ or $`1`$. Intermediate values exist, but their relative number goes to zero in the semiclassical limit as a surface-to-volume ratio. Therefore, semiclassically, the rectangle behaves as a projector. Figure 2 shows that the Husimi representations $`|\langle q+ip|\psi \rangle |^2`$ of the corresponding eigenfunctions are localized on nested triangles concentric with the boundary of the classical region.
The situation is very similar to that of integrable Hamiltonians, where the Husimi density of an eigenfunction is localized over the associated quantized torus and decays exponentially as one moves away from the torus. Exploiting this analogy we can derive a semiclassical quantization rule for the eigenvalues and eigenfunctions of a quantum region. Notice first that the eigenvalue equation for $`\widehat{R}_z`$ is
$$\frac{1}{N^2}\sum _{(q,p)}\mathrm{\Delta }_R(q,p)|q+ip\rangle \langle q+ip|\psi _k\rangle =\lambda _k|\psi _k\rangle ,$$
(8)
implying that
$$\frac{1}{N^2}\sum _{(q,p)\in R}|\langle q+ip|\psi _k\rangle |^2=\lambda _k$$
(9)
(the sum over the whole torus giving one). Let us now make the following assumptions. Sums can be substituted by integrals (we are interested in the limit $`N\to \infty `$). The Husimi function of the $`k`$-th eigenfunction is associated to a quantized “torus” lying at a distance $`d_k`$ from the border of the region. The function $`d_k`$ depends on the shape of the region and arises from packing $`k`$ quasi one-dimensional strips of area $`h`$ concentric with the border of the region, starting from the inside. Lastly, in the direction perpendicular to the torus, $`\widehat{y}`$ ($`y=0`$ on the torus), the Husimi function is a normalized Gaussian:
$$\mathrm{exp}(-y^2/\hbar )/\sqrt{\pi \hbar }.$$
(10)
Combining Eqs. (9,10) we arrive at the semiclassical quantization rule
$$\lambda _k=\frac{1}{2}+\frac{1}{2}\text{erf}\left(\frac{d_k}{\sqrt{\hbar }}\right).$$
(11)
We give expressions for $`d_k`$ for the simplest-shaped regions: a square, the triangle of Fig. 1, and a circle
$$d_k=\{\begin{array}{cc}\frac{L-\sqrt{kh}}{2}\hfill & \text{square}\hfill \\ \frac{L-\sqrt{2kh}}{2+\sqrt{2}}\hfill & \text{triangle}\hfill \\ R-\sqrt{\frac{kh}{\pi }}\hfill & \text{circle}\hfill \end{array},$$
(12)
where $`R`$ is the radius of the circle and $`L`$ is the side of both the square and the triangle (see inset Fig. 3).
In Fig. 3 we compare the analytical expression (11) with the numerical results for three different regions, verifying that the agreement is excellent, even for the relatively small $`N=90`$.
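Equations (11) and (12) are straightforward to evaluate; a short sketch (with $`h=1/N`$ and $`\hbar =h/2\pi `$ on the unit torus, and illustrative parameter values) reads:

```python
import numpy as np
from scipy.special import erf

def lambda_k(k, shape, L=1.0, N=90):
    """Semiclassical eigenvalues, Eqs. (11) and (12); h = 1/N on the
    unit torus and hbar = h/(2*pi)."""
    h = 1.0 / N
    hbar = h / (2.0 * np.pi)
    if shape == "square":
        d = (L - np.sqrt(k * h)) / 2.0
    elif shape == "triangle":
        d = (L - np.sqrt(2.0 * k * h)) / (2.0 + np.sqrt(2.0))
    else:                        # circle of radius R = L
        d = L - np.sqrt(k * h / np.pi)
    return 0.5 + 0.5 * erf(d / np.sqrt(hbar))

print([round(lambda_k(k, "triangle"), 3) for k in (1, 5, 10, 20, 30)])
```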
The quantization of classical regions with sharp boundaries by way of coherent states presented above has the advantage of producing very smooth and analytically understandable results. The sharp edges are blurred by the Gaussian smoothing and the resulting quantum rectangles are always “soft” on the scale of $`\hbar `$.
Other representations, namely Kirkwood and Wigner, allow higher definition but display characteristic diffraction effects at the edges and corners. Fig. 4 shows the eigenvalues of the operator $`\widehat{R}_K^s`$ of the triangular region. Notice that the distribution of eigenvalues is not smooth as in the coherent case but presents a singularity associated with boundary effects. This singularity is inherent to the sharpness of the Kirkwood construction and is also displayed by the non-Hermitian rectangles $`\widehat{R}_K`$ and $`\widehat{R}_K^{\dagger }`$ (not shown). Some typical eigenfunctions are also displayed (inset). In this case, the eigenfunctions do not present the high degree of symmetry of the coherent case, but are rather irregular. The eigenvalue still determines the localization of the eigenfunction with respect to the border ($`>1/2`$, interior; $`<1/2`$, exterior).
However, as the eigenfunctions are not nested as in the coherent case, the ordering is not always unambiguous.
Except for boundary effects, the Kirkwood rectangles behave asymptotically in the same way as the coherent ones, i.e., they tend to projectors over the classical regions.
Besides the nice spectral behavior discussed above, the quantum rectangles (either coherent or Kirkwood) should display some additional properties for our construction to be consistent:
(a) How does one define the “area” of a quantum region? In order to quantify the dissipation of a quantum Smale-horseshoe map, we argued in that the usual operator norm $`\text{Tr}(\widehat{R}\widehat{R}^{\dagger })`$ is a reasonable definition of area. For the Kirkwood rectangles $`\widehat{R}_K`$ it is easy to prove that this definition coincides exactly with the classical area $`𝒜_R`$. Alternatively one could simply define the area as $`\text{Tr}\widehat{R}`$, in which case classical and quantum areas are identical for both representations. In any case, as $`\widehat{R}`$ tends to a projector
$$\text{Tr}(\widehat{R}\widehat{R}^{\dagger })\to \text{Tr}\widehat{R}=𝒜_R.$$
(13)
Thus both expressions are acceptable definitions of quantum area.
(b) For the study of spectral properties the Hermitian operator $`\widehat{R}_K^s`$ was preferred to the non-Hermitian $`\widehat{R}_K`$ and $`\widehat{R}_K^{\dagger }`$. The latter are more appropriate for the decomposition of the propagators we present in Section IV. However, in the limit $`𝒜_R\gg \hbar `$, $`\widehat{R}_K`$ and $`\widehat{R}_K^{\dagger }`$ will be approximately equal, given that they only differ in the ordering of $`q`$’s and $`p`$’s. Then $`\widehat{R}_K`$, $`\widehat{R}_K^{\dagger }`$, and $`\widehat{R}_K^s`$ are semiclassically equivalent.
(c) Quantization and propagation must commute: If $`U`$ is a classical symplectic map and $`\widehat{U}`$ its quantization, then
$$\widehat{U}^T\widehat{R}\widehat{U}^{-T}\simeq \widehat{U^T(R)}$$
(14)
where it is understood that one must fix $`T`$ and take the limit $`\hbar \to 0`$. To illustrate the way in which this limit may be reached we show in Fig. 5 the propagation of a Kirkwood element of the generating partition of Arnold’s cat map (see Fig. 6). Notice that besides the bulk classical propagation, diffraction effects associated to the edges and corners are clearly visible. We remark that this behavior is typical of sharp representations. Coherent rectangles behave in a much smoother way.
(d) We also expect quantization to commute with the classical set operations:
$`\widehat{R_1\cap R_2}\simeq \widehat{R}_1\widehat{R}_2\simeq \widehat{R}_2\widehat{R}_1,`$ (15)
$`\widehat{R_1\cup R_2}\simeq \widehat{R}_1+\widehat{R}_2-\widehat{R}_1\widehat{R}_2.`$ (16)
In the next section we show an application of the quantum regions which is an indirect test of the validity of these statements.
## IV Symbolic decomposition of the traces of the propagator
Before discussing the applications of the quantum rectangles in quantum dynamics, we present a short reminder of classical symbolic dynamics in a setting appropriate to the transition to quantum mechanics.
For a hyperbolic map $`U`$, symbolic dynamics relates the orbits of $`U`$ to symbolic sequences by means of a partition of phase space. Such a partition consists of a set of regions $`R_1,R_2,\mathrm{\dots },R_P`$ (usually called “rectangles”) which satisfy the following (Markov) properties. The boundaries of $`R_i`$ are defined by segments of the expanding and contracting manifolds of $`U`$. Whenever $`U(R_i)`$ intersects the interior of $`R_j`$, the image cuts completely across $`R_j`$ in the unstable direction. Similarly, the backwards image $`U^{-1}(R_i)`$ cuts completely across the other rectangles along the stable direction.
Once one has constructed the Markov partition, successively finer partitions are obtained by intersecting the elements of the basic partition with its positive and negative images by the map (product partition):
$$R_{ϵ_{-K}\mathrm{\dots }ϵ_{-1}ϵ_0ϵ_1\mathrm{\dots }ϵ_M}=\bigcap _{s=-K}^{s=M}U^{-s}(R_{ϵ_s}),$$
(17)
where $`ϵ_s`$ can take any of the values $`1,2,\mathrm{\dots },P`$. Each element of the new partition can be labeled by a different symbolic code
$$\nu (K,M)=ϵ_{-K}\mathrm{\dots }ϵ_{-1}ϵ_0ϵ_1\mathrm{\dots }ϵ_M.$$
(18)
As the original rectangles, the rectangles above possess the property of decomposing the phase space into disjoint regions (we do not take borders into account, as they have zero measure). When acting on these rectangles, the map is simply a shift:
$$U^{-1}(R_{ϵ_{-K}\mathrm{\dots }ϵ_{-1}ϵ_0ϵ_1\mathrm{\dots }ϵ_M})=R_{ϵ_{-K+1}^{\prime }\mathrm{\dots }ϵ_0^{\prime }ϵ_1^{\prime }\mathrm{\dots }ϵ_{M+1}^{\prime }},\qquad ϵ_s^{\prime }=ϵ_{s-1}.$$
(19)
If, in the limit $`K,M\to \infty `$, each element of the product partition is a single point, the code is said to be complete. It may well happen that some of the intersections of Eq. (17) are empty. This means that the transitions between certain pairs of basic regions are prohibited. The information about allowed and prohibited sequences is contained in a transition matrix
$$t_{ij}=\{\begin{array}{cc}1& \text{if }U(R_i)\cap R_j\ne \mathrm{\varnothing }\\ 0& \text{otherwise}\end{array}.$$
(20)
In this way one has set up a one-to-one correspondence between phase space points and allowed sequences. (The case of different sequences being associated to the same point is taken care of by identifying such sequences, and working in a quotient space.) The matrix $`t_{ij}`$ establishes the grammar rules that forbid certain sequences of symbols. When $`t_{ij}`$ is of finite size the dynamics becomes topologically conjugate to a subshift of finite type.
The existence of a symbolic dynamics allows for an exhaustive coding of the orbits of the map. In particular, periodic orbits are in correspondence with the periodic sequences of the same periodicity. Given an arbitrary system, it is a hard task to decide if it admits a symbolic dynamics; even if it does, the translation from symbols to phase space coordinates is in general extremely difficult. The example we will consider (the cat map) does not present any of these difficulties, thus eliminating non-essential complications.
In the following we show how the symbolic dynamics of a classical map can be used to decompose the traces of the quantized map. The quantum analogues of the elements of the classical generating partition are the quantum rectangles $`\widehat{R}`$ described in Sections II and III. The quantum partitions are obtained by translating to quantum mechanics the steps in the construction of the classical ones. Starting from the quantizations of the regions of the classical basic partition, we define the quantum refinement in two steps. First the regions (quantum “projectors”) are propagated using the Heisenberg equations of motion. Then, noting that “intersections” of quantum rectangles correspond to matrix multiplications, we arrive at a quantum product partition with elements written as a time-ordered product of matrices
$`\widehat{R}_{\nu (K,M)}`$ $`=`$ $`\widehat{U}^K\widehat{R}_{ϵ_{-K}}\widehat{U}^{-K}\mathrm{\dots }\widehat{U}^{-M}\widehat{R}_{ϵ_M}\widehat{U}^M`$ (21)
$`=`$ $`\widehat{U}^K\widehat{R}_{ϵ_{-K}}\widehat{U}^{-1}\widehat{R}_{ϵ_{-K+1}}\mathrm{\dots }\widehat{R}_{ϵ_{M-1}}\widehat{U}^{-1}\widehat{R}_{ϵ_M}\widehat{U}^M.`$ (22)
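As a computational recipe, Eq. (22) is just a time-ordered matrix product. A minimal sketch (assuming a unitary propagator matrix `U` and a list `R_list` of quantized rectangles, both hypothetical inputs) is:

```python
import numpy as np

def refined_rectangle(U, R_list, code, K):
    """Time-ordered product of Eq. (22) for the code
    (eps_{-K}, ..., eps_0, ..., eps_M); U and R_list are assumed to be
    the quantum propagator and the quantized basic rectangles."""
    Uinv = U.conj().T                          # U^{-1} for a unitary U
    M = len(code) - K - 1
    out = np.linalg.matrix_power(U, K) @ R_list[code[0]]
    for eps in code[1:]:
        out = out @ Uinv @ R_list[eps]
    return out @ np.linalg.matrix_power(U, M)
```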
The counterpart of the classical decomposition of the phase space is the quantum decomposition of the identity
$$\sum _{\nu (K,M)}\widehat{R}_{\nu (K,M)}=𝟙/N,$$
(23)
$`N`$ being the dimension of the Hilbert space. The quantum propagation is also a shift:
$$\widehat{U}^{-1}\widehat{R}_{ϵ_{-K}\mathrm{\dots }ϵ_{-1}ϵ_0ϵ_1\mathrm{\dots }ϵ_M}\widehat{U}=\widehat{R}_{ϵ_{-K+1}^{\prime }\mathrm{\dots }ϵ_0^{\prime }ϵ_1^{\prime }\mathrm{\dots }ϵ_{M+1}^{\prime }},\qquad ϵ_s^{\prime }=ϵ_{s-1}.$$
(24)
Even though the quantum rectangles do not have exactly zero “intersection”, in the semiclassical limit the product of two different elements of the partition tends to the null operator, apart from possible singularities due to border effects. Lastly, when $`N\to \infty `$ with $`K,M`$ fixed, the quantum rectangles tend to the classical ones. The precise meaning of this limit, and the way it is achieved, were discussed in Sec. III.
The key property of the quantum partition we have constructed is the symbolic decomposition of the traces of the propagator. Consider the discrete path sum for the trace of a power of the propagator in the coherent state representation
$$\text{Tr}\widehat{U}^L=\frac{1}{N^{2L}}\sum \langle \alpha _0|\widehat{U}|\alpha _{L-1}\rangle \langle \alpha _{L-1}|\widehat{U}|\alpha _{L-2}\rangle \mathrm{\dots }\langle \alpha _1|\widehat{U}|\alpha _0\rangle $$
(25)
where the sum runs over all the closed paths $`\alpha _0,\alpha _1,\mathrm{\dots },\alpha _{L-1},\alpha _L\equiv \alpha _0`$, which are discrete both in time and in the coordinates (we recall that $`\alpha \equiv q+ip`$ moves on the discrete $`q`$-$`p`$ grid). Semiclassically the trace of $`\widehat{U}^L`$ will be dominated by the periodic trajectories (of period $`L`$) of the classical map $`U`$ and their neighboring paths. Symbolic dynamics allows for classifying not only the trajectories but also the paths according to their symbolic history. So, one has a natural way of partitioning the space of paths into disjoint subsets, each one characterized by a symbol $`\nu `$ of length $`L`$ and containing the periodic trajectory $`\mathrm{\dots }\nu \nu \mathrm{\dots }`$. But this mechanism of path grouping is automatically implemented by the quantum projectors:
$`\text{Tr}\widehat{U}^L`$ $`=`$ $`\text{Tr}{\displaystyle \sum _\nu }\widehat{U}\widehat{R}_{ϵ_{L-1}}\mathrm{\dots }\widehat{U}\widehat{R}_{ϵ_1}\widehat{U}\widehat{R}_{ϵ_0}`$ (26)
$`\equiv `$ $`{\displaystyle \sum _\nu }\text{Tr}\widehat{U}_\nu ^L.`$ (27)
The $`\widehat{R}`$’s are the quantum regions associated to the coherent representation and now the sum runs over the sequence labels $`\nu =ϵ_0ϵ_1\mathrm{\dots }ϵ_{L-1}`$. Eq. (26) is completely equivalent to (25), the difference being just the grouping of closed paths into families sharing the same symbolic code $`\nu `$. Each one of these families contributes to a partial trace $`\text{Tr}\widehat{U}_\nu `$. Analogous results are obtained in the Kirkwood case. In fact, starting from a path sum in the Kirkwood representation,
$$\text{Tr}\widehat{U}^L=\frac{1}{N^L}\sum \langle q_0|p_0\rangle \langle p_0|\widehat{U}\mathrm{\dots }\widehat{U}|q_1\rangle \langle q_1|p_1\rangle \langle p_1|\widehat{U}|q_0\rangle ,$$
(28)
one arrives at the same result as Eq. (26), but with the Kirkwood rectangles instead of the coherent ones. Using the cyclic property, the partial traces of (26) (or their Kirkwood counterparts) can be rewritten in terms of the refined rectangles of Eq. (21)
$$\text{Tr}\widehat{U}_{\nu (K,M)}^L=\text{Tr}\left[\widehat{U}^L\widehat{R}_{\nu (K,M)}\right].$$
(29)
The integers $`K,M`$ must satisfy $`K+M=L-1`$, but are otherwise arbitrary. By varying $`K`$ and $`M`$ ($`L`$ fixed) one constructs different types of rectangles; e.g., the choice $`K`$=0, $`M`$=$`L`$-1 produces “unstable” rectangles (stretched along the unstable manifolds)
$$\widehat{R}_{ϵ_0ϵ_1\mathrm{\dots }ϵ_{L-1}}=\widehat{R}_{ϵ_0}\widehat{U}^{-1}\widehat{R}_{ϵ_1}\widehat{U}\mathrm{\dots }\widehat{U}^{-(L-1)}\widehat{R}_{ϵ_{L-1}}\widehat{U}^{(L-1)}.$$
(30)
Similarly, with $`M=0`$ and $`K`$=$`L`$-1, “stable” rectangles are obtained. In any case, stable and unstable rectangles are related by the unitary transformation (24), ensuring that $`\text{Tr}\widehat{U}_{\nu (K,M)}^L`$ does not depend on the particular choice of $`K,M`$. Moreover, the trace of each symbolic piece is cyclically invariant \[as is obvious from (26)\], and therefore the decomposition is organized into cyclically invariant classes of codes, in one-to-one correspondence with periodic orbits of the map.
The refined rectangle $`\widehat{R}_{\nu (K,M)}`$ has as classical limit the characteristic function of the classical region $`R_{\nu (K,M)}`$. Thus, its role in (29) consists essentially in cutting the matrix $`\widehat{U}^L`$ into pieces. The Kirkwood rectangles act on the Kirkwood matrix $`\langle p|\widehat{U}^L|q\rangle `$:
$`\text{Tr}\left(\widehat{U}^L\widehat{R}_{K,\nu }\right)`$ $`=`$ $`{\displaystyle \sum _{q,p}}\langle p|\widehat{U}^L|q\rangle \langle q|\widehat{R}_{K,\nu }|p\rangle `$ (31)
$`=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \sum _{q,p}}\langle p|\widehat{U}^L|q\rangle \langle q|p\rangle \mathrm{\Delta }_{R_\nu }(q,p).`$ (32)
The coherent rectangles perform a similar action but on the operator symbol $`\langle \alpha |\widehat{U}^L|\alpha \rangle `$:
$`\text{Tr}\left(\widehat{U}^L\widehat{R}_{z,\nu }\right)`$ $`=`$ $`\text{Tr}\left(\widehat{U}^L{\displaystyle \frac{1}{N^2}}{\displaystyle \sum _{\alpha \in R_\nu }}|\alpha \rangle \langle \alpha |\right)`$ (33)
$`=`$ $`{\displaystyle \frac{1}{N^2}}{\displaystyle \sum _\alpha }\langle \alpha |\widehat{U}^L|\alpha \rangle \mathrm{\Delta }_{R_\nu }(\alpha ).`$ (34)
In both cases the semiclassical partial trace is obtained by summing over that piece of the matrix which corresponds to the classical rectangle. Thus each symbolic piece captures the local structure of the propagator in the vicinity of a periodic point labeled by $`\nu `$ and, by stationary phase, yields the Gutzwiller-Tabor contribution of the corresponding periodic orbit. Forbidden symbols lead to semiclassically small contributions.
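The bookkeeping of Eq. (26) is easy to check numerically. The following sketch verifies that the partial traces add up to the full trace; the random unitary and the two complementary projectors are stand-ins chosen only so that the “rectangles” sum exactly to the identity:

```python
import numpy as np
from itertools import product

def partial_traces(U, R_list, L):
    """All partial traces Tr(U R_{eps_{L-1}} ... U R_{eps_0}) of Eq. (26)."""
    traces = {}
    for code in product(range(len(R_list)), repeat=L):
        M = np.eye(U.shape[0], dtype=complex)
        for eps in code:
            M = U @ R_list[eps] @ M
        traces[code] = np.trace(M)
    return traces

n = 8
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
R_list = [np.diag((np.arange(n) % 2 == s).astype(complex)) for s in (0, 1)]
t = partial_traces(Q, R_list, L=2)
print(sum(t.values()), np.trace(Q @ Q))   # the two traces agree
```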
The symbolic decomposition we have presented has the nice feature of reducing the problem of understanding the asymptotic limit of the traces of the propagator to the analysis of individual “partial” traces $`\text{Tr}\widehat{U}_\nu ^L`$, each one characterized by a code given by the symbolic dynamics, and ruled by a periodic point.
### A A numerical application
The simplest system in which the quantum partitions can be applied to decompose the propagators is perhaps the baker’s map . Its generating partition consists of two rectangles, which, due to the fact that the expanding and contracting directions are parallel to the coordinate axes, are defined solely by conditions on $`q`$. As a consequence, the quantum rectangles for the baker’s map reduce exactly to projectors on subspaces . This greatly simplifies the symbolic analysis of the quantum baker’s map, allowing very detailed studies of its partial traces .
However, the baker’s map is too special for illustrating the properties of the rectangles: many of them are satisfied trivially. Moreover, the partial traces of the baker’s map display some unpleasant anomalies that complicate the semiclassical analysis .
Still simple enough, Arnold’s cat map $`U`$ is more appropriate for a general illustration of the method and can be investigated numerically. The classical cat map is defined by
$$\left(\begin{array}{c}q^{\prime }\\ p^{\prime }\end{array}\right)=\left(\begin{array}{cc}2& 1\\ 1& 1\end{array}\right)\left(\begin{array}{c}q\\ p\end{array}\right)\text{mod 1}.$$
(35)
This is a linear, hyperbolic, and continuous map of the torus. As its invariant manifolds are not aligned with the coordinate axes, the rectangles of the generating partition (shown in Fig. 6) are not projectors. This makes the cat map non-trivial for our purposes.
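For numerical experiments, one common quantization of linear torus maps gives the propagator in closed form. The sketch below follows the standard generating-function recipe for the matrix of Eq. (35) (up to an overall phase, which drops out of eigenvalue statistics and of $`|\text{Tr}\widehat{U}^L|`$); it is an assumption-laden illustration, not necessarily the convention of the construction cited below:

```python
import numpy as np

def cat_propagator(N):
    """<k'|U|k> = N^{-1/2} exp[(i*pi/N)(2k^2 - 2kk' + k'^2)] for even N,
    from the generating function of the linear map (35); the overall
    phase convention is left unspecified."""
    k = np.arange(N)
    kp = k[:, None]
    ph = (2 * k[None, :] ** 2 - 2 * kp * k[None, :] + kp ** 2) % (2 * N)
    return np.exp(1j * np.pi * ph / N) / np.sqrt(N)

U = cat_propagator(30)
print(np.allclose(U @ U.conj().T, np.eye(30)))   # unitarity check
print(abs(np.trace(U @ U)))                      # |Tr U^2|, cf. Eq. (37)
```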
Before proceeding, we must point out that the quantum cat map presents one very particular feature: Gutzwiller’s semiclassical formula gives the exact traces . For this reason the cat map is not suitable for studying corrections to the trace formula. In principle, any decomposition into partial traces will introduce errors which, however, will cancel out when added up to produce the whole trace. Thus this model may be useful as a test of the mechanisms that lead to such cancellation.
Let’s now go to the details of the numerical example. The generating partition of the cat map consists of the five rectangles of Fig. 6, which, together with the “grammar rules” embodied in the transition matrix
$$t_{ij}=\left(\begin{array}{ccccc}0& 1& 0& 0& 1\\ 0& 1& 0& 0& 1\\ 1& 0& 1& 1& 0\\ 1& 0& 1& 1& 0\\ 1& 0& 1& 1& 0\end{array}\right)$$
(36)
define the symbolic dynamics of the cat.
For simplicity we will restrict our analysis to the decomposition of the trace of the time-two propagator
$$\text{Tr}\widehat{U}^2=\sum _{ϵ_0,ϵ_1=1}^5\text{Tr}\left(\widehat{U}\widehat{R}_{ϵ_0}\widehat{U}\widehat{R}_{ϵ_1}\right).$$
(37)
The rectangles $`\widehat{R}_ϵ`$ are the quantum versions of the regions of Fig. 6 and can be constructed from either the coherent state representation or Kirkwood’s. The construction of the quantum propagator $`\widehat{U}`$ for linear automorphisms of the torus is presented in \[notice that Arnold’s cat (35) is only quantizable for $`N`$ even\].
Each partial trace can be written asymptotically as a Gutzwiller term plus corrections that go to zero as $`N\to \infty `$:
$$\text{Tr}\widehat{U}_\nu ^2=A_\nu \mathrm{exp}\left(2\pi iNS_\nu \right)+\delta _\nu (N),$$
(38)
where $`A_\nu `$ is the amplitude and $`S_\nu `$ the action of the periodic orbit . We remark that in the case of cat maps the corrections $`\delta _\nu `$ will cancel out exactly when summing over $`\nu `$ because the semiclassical trace formula is exact in this special case. In general this will not be true, and the method allows to study the corrections coming from each periodic orbit.
In order to quantify the errors associated with the symbolic partition of the space of paths, we study numerically the semiclassical limit of one element of the partition, namely $`\text{Tr}U_{51}^2`$. This trace is dominated by the periodic trajectory shown in Fig. 6 and its neighborhood; its asymptotic limit is the Gutzwiller formula (38) with $`A_{51}=1/\sqrt{5}`$ and $`S_{51}=3/10`$ .
We can understand the asymptotic behavior of the corrections $`\delta _\nu `$ by recalling that our decomposition essentially amounts to cutting the matrix of $`\widehat{U}`$ into rectangular blocks. Let us first estimate the corrections in the Kirkwood case. The Kirkwood matrix of $`U^L`$ has constant amplitude and a phase that oscillates rapidly except in the vicinity of the fixed points of $`U^L`$ . Computing the partial trace amounts to summing up the matrix elements $`\langle p|U^2|q\rangle `$ that lie inside the region $`R_{51}\equiv R_5\cap U^{-1}(R_1)`$ (shown in Fig. 5). In the semiclassical limit we can replace the sum by an integration and do the latter using the stationary phase method. In this approximation we must only take into account the contributions of the critical points . The most important contribution comes from the periodic orbit (critical point of the first kind) and its neighborhood. This gives rise to the Gutzwiller term, which is of order zero in $`\hbar `$ \[$`𝒪(\hbar ^0)`$\]. The corrections $`\delta _{51}(N)`$ are associated to critical points of the second and third kind. The critical points of the second kind, i.e. points where the phase is stationary with respect to displacements along the borders of the rectangle, contribute with terms $`𝒪(\hbar ^{1/2})`$. The corners (third-kind critical points) contribute with terms $`𝒪(\hbar ^{3/2})`$. (In the baker’s map the situation is more complicated because of the coalescence of critical points of different kinds, namely some fixed points lie on the borders of the rectangles. These anomalous points give rise to terms $`𝒪(\mathrm{log}\hbar )`$ .) Having exhausted the critical points, we conclude that the border errors in Kirkwood’s representation are $`𝒪(\hbar ^{1/2})`$. On the other hand, in the coherent case, one expects the amplitudes $`\langle \alpha |\widehat{U}^L|\alpha \rangle `$ to decay exponentially fast as one moves away from the classical trajectory. The phases still oscillate rapidly. However, due to the exponential damping, the border effects in the coherent decomposition should be $`𝒪[\hbar ^{1/2}\mathrm{exp}(-C^2/\hbar )]`$, where $`C`$ is proportional to the distance from the fixed point to the border. Of course, this regime will only be reached once the stationary-phase neighborhood of the fixed point \[whose radius is $`𝒪(\hbar ^{1/2})`$\] is completely contained in $`R_{51}`$.
For the coherent case we calculated numerically the correction $`\delta _{51}`$ as a function of $`N`$. Up to $`N=100`$ we computed the partial trace exactly, i.e.,
$$\frac{1}{N^4}\sum _{\alpha \in R_1,\beta \in R_5}\langle \alpha |\widehat{U}|\beta \rangle \langle \beta |\widehat{U}|\alpha \rangle .$$
(39)
From then on, due to computer time limitations, we resorted to a local semiclassical approximation for the coherent-state propagator. This is equivalent to replacing the torus propagator $`\langle \alpha |\widehat{U}^2|\beta \rangle `$ by a plane propagator which is the quantization of the linear dynamics in the vicinity of the period-two trajectory $`\nu =51`$. The errors introduced in this approximation arise from ignoring the contributions of “sources” located at equivalent (mod 1) positions in the plane . These errors are also $`𝒪[\hbar ^{1/2}\mathrm{exp}(-C^{\prime 2}/\hbar )]`$, but with $`C^{\prime }`$ much larger than $`C`$, and thus can be neglected. Once the partial trace was calculated, we obtained the correction $`\delta _\nu `$ by subtracting the Gutzwiller term.
In Fig. 7 we show the numerical results in a way that permits a direct comparison with our analytical considerations above. In fact, the log-linear plot suggests that the corrections $`\delta _\nu `$ in the coherent-state decomposition are indeed exponentially small in the semiclassical parameter $`1/\hbar `$. Accordingly, the decomposition which uses rectangles constructed from the Kirkwood representation introduces border errors of order $`\hbar ^{1/2}`$.
We recall that Gutzwiller’s trace formula is exact for the cat maps. For typical maps one expects corrections to this formula of order $`\hbar ^k`$, with $`k\ge 1`$; e.g., $`k=1`$ for the perturbed cat maps . Both Markov partitions considered here, based either on coherent-state or on Kirkwood rectangles, allow one to study such corrections term by term. In the coherent case, the partitioning of the space of paths does not introduce significant border effects, given that the contributions of neighboring paths decrease exponentially as one moves away from the central trajectory. On the other hand, the use of a sharp representation like Kirkwood’s produces non-negligible boundary contributions to each partial trace. Of course these boundary terms will cancel out when the partial traces are summed up to give the whole trace. Even so, they have to be carefully identified in order to isolate the genuine partial corrections to the Gutzwiller trace formula.
## V Concluding remarks
We have begun the application of symbolic dynamics techniques, essential in classical chaotic problems, in quantum mechanics. As a first step we constructed quantum analogues of regions of classical phase space: they are the quantizations of the characteristic functions of the classical regions. We have used Kirkwood’s and a coherent-state representation. The study of their metric and spectral properties shows that they behave asymptotically as projectors over those regions. They also present the diffraction effects typical of wave phenomena.
For a subshift of finite type, the quantization of the rectangles of the classical generating partition gives rise to a quantum partition which induces a symbolic decomposition of the propagator. This partition allows for writing a trace formula which is both exact and structurally identical to the Gutzwiller trace formula. Thus the problem of understanding the semiclassical limit of the traces of a propagator is reduced to the analysis of partial traces coded by the symbolic dynamics. The objects we have constructed tend asymptotically to their classical counterparts and respond to the same dynamics. In this way, one can verify step by step many manipulations that up to now could only be done at a semiclassical level.
Before concluding we would like to emphasize that the construction presented here is by no means restricted to phase space regions that form Markov partitions. Any region of phase space selected for “attention” can be handled in the same way and its quantum properties explored. For example, if a closed problem is turned into a scattering one by the removal of a section of the boundary or the attachment of a soft wave guide, the decomposition leads to the consideration of coupled interior and closure problems projected from the corresponding phase space regions . Another application is to think of the phase space projectors as “measurements” occurring along the quantum history of the system, and to study the decoherence that results.
###### Acknowledgements.
The authors have benefited from discussions with E. Vergini, A. Voros, and A. M. Ozorio de Almeida. R.O.V acknowledges Brazilian agencies FAPERJ and PRONEX for financial support, and the kind hospitality received at the Centro Brasileiro de Pesquisas Físicas and at Laboratorio TANDAR, where part of this work was done. Partial support for this project was obtained from ANPCYT PICT97-01015 and CONICET PIP98-420.
# Multichannel quantum defect theory: a quantum Poincaré map.
## Abstract
The multichannel quantum defect theory (MQDT) can be reinterpreted as a quantum Poincaré map in an angular momentum representation. This has two important implications: we obtain a paradigm of a true quantum Poincaré map without semi-classical input, and we gain an entirely new insight into the significance of MQDT.
PACS: 05.45.Mt; 33.80.Rv; 03.65.Sq
Present address: Fachbereich 7, Physik, Universität G. H. Essen, 45117 Essen. Corresponding author: Maurice.Lombardi@ujf-grenoble.fr; Fax: +33 476 514 544.
In recent years there has been a rapidly growing interest in the quantum Poincaré map (QPM) , i.e. the quantization of a classical Poincaré map, for a time-independent Hamiltonian system. Bogomolny started out with a semi-classical formulation. Among other things he shows that unitarity of the representation is reached in the limit $`\hbar \to 0`$, while it is only approximate for finite $`\hbar `$ . Prosen gives an elegant general solution to the unitarity problem at the expense of obtaining an infinite matrix for the QPM. The semi-classical approach common to most discussions causes a number of problems that make the use of this new and powerful tool a little obscure. In other words, the quantum Poincaré section implicitly defined by Bogomolny lacks a paradigmatic example where a quantum treatment can be performed properly throughout and leads to a finite unitary matrix.
Multichannel quantum defect theory (MQDT) and its classical limit will be shown to provide the framework for such a paradigm. Indeed we shall see that a simplified model of the Rydberg molecule allows one to construct a classical Poincaré map on the unit sphere, whose exact quantization is provided by MQDT. Thus the result is necessarily entirely quantal, exactly unitary and, for finite $`\hbar `$, given in terms of a finite matrix. We shall show that the results commonly derived for MQDT are directly properties of the unitary representation of this classical map as obtained by MQDT.
After a short description of the model for a Rydberg molecule and the simplification introduced in Ref. , we proceed to give the quantum map for this case explicitly. We illustrate the two important aspects of our result by two applications. First, the new interpretation allows modifications of the MQDT method that prove particularly effective in near-integrable systems. Second, we proceed to show by way of examples that the properties of this map are relevant to the study of chaos and order in this system.
Simplifying to the most basic case, these molecules can be viewed as a rotating system with positive charge and cylindrical symmetry that binds one electron in an orbit that is hydrogenic at large distances. The classical limit of the MQDT is the following classical model : The motion is composed of two consecutive steps. (i) When the electron is far from the molecular core (i.e. most of the time for a Rydberg electron) it feels only the Coulomb part ($`-1/r`$) of the potential. Its orbit is hydrogenic and its angular momentum $`𝐋`$ is fixed in the laboratory reference frame. Meanwhile the core rotates freely with an angular momentum $`𝐍`$ which is also fixed in the laboratory frame. The total angular momentum $`𝐉=𝐋+𝐍`$ is always conserved. In the molecular reference frame, the $`OZ`$ axis is the cylindrical symmetry axis of the core. The core angular momentum $`𝐍`$ points in a perpendicular direction, taken as the $`OX`$ axis. The angles $`\theta _e`$ and $`\varphi _e`$ are the polar and azimuthal angles respectively of the electronic angular momentum $`𝐋`$ in this frame. During this step, $`𝐋`$ rotates freely around the $`OX`$ axis. (ii) During the so-called “collision” step, the electron senses also the cylindrically symmetric short-range part of the potential of the core. Aside from the energy and $`𝐉`$, the projection of $`𝐋`$ onto the core axis $`\mathrm{\Lambda }=L\mathrm{cos}\theta _e`$ is conserved due to the cylindrical symmetry of the core. We will add an extra, simplifying, hypothesis, namely that the magnitude $`L`$ of $`𝐋`$ remains constant. This is justified for Rydberg molecules at least for small $`L`$’s, but the classical and quantum map with this approximation exist for all $`L`$. Thus the collision can be described by a $`\theta _e`$-dependent rotation of $`𝐋`$ around the core axis. The simplest form of this rotation compatible with the symmetry is: $`\delta \phi _e=K\mathrm{cos}\theta _e`$, where $`K`$ is a coupling constant. This simplification is not essential. Notice further that the conservation of the total angular momentum $`𝐉`$ implies that the molecular core feels a simultaneous recoil which changes the direction and magnitude of $`𝐍`$. This change of $`N`$ in turn entails a change of the rotational energy $`E_N`$ of the core and, because of conservation of total energy, a change of the energy $`E_e`$ of the electron. This exchange of energy makes this model much richer than the kicked spin model (which is its limit when $`L\ll J`$, where this recoil can be neglected). In particular the energy of the electron may become positive after the collision, allowing us to treat bound and unbound (ionized) states on an equal footing. The possibility of chaotic motion comes from the conflict between these two steps, which consist of two rotations around distinct axes with different laws. The classically chaotic case can be obtained by increasing the coupling $`K`$. Near-integrable cases can be obtained for small coupling or at resonance, i.e. when the period of the electron is a multiple of half the period of rotation of the core.
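In the kicked-spin limit $`L\ll J`$ mentioned above, where the recoil of the core is neglected and the free-rotation angle can be taken constant, one step of the classical map is easy to write down explicitly (a sketch; in the full model the rotation angle depends on the energy exchanged with the core):

```python
import numpy as np

def map_step(Lvec, alpha, K):
    """One period of the classical map in the limit L << J: free rotation
    of L around OX by a fixed angle alpha, followed by the collision kick,
    a rotation around the core axis OZ by K*cos(theta_e) = K*L_z/|L|."""
    x, y, z = Lvec
    c, s = np.cos(alpha), np.sin(alpha)       # free rotation around OX
    y, z = c * y - s * z, s * y + c * z
    dphi = K * z / np.linalg.norm(Lvec)       # collision kick around OZ
    c, s = np.cos(dphi), np.sin(dphi)
    x, y = c * x - s * y, s * x + c * y
    return np.array([x, y, z])
```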
The quantum problem is solved by using the MQDT. The configuration space is divided by a sphere of radius $`r_0`$ into a collision region ($`r<r_0`$) and an asymptotic region ($`r>r_0`$) for the motion of the electron. $`r_0`$ is of the order of the core size and is chosen such that in the asymptotic region the potential acting on the electron is only Coulombic, whereas in the collision region it feels both the Coulomb and the cylindrical potential. The wave functions for both regions are joined appropriately at $`r=r_0`$. The conflict between the two motions is expressed in quantum mechanics by the existence for the wave function of two bases with different good quantum numbers (in addition to $`J,J_z,L`$ and total energy $`E`$). At short distance the Born Oppenheimer basis is appropriate. The Rydberg electron is strongly bound to the core, thus quantized in the molecular reference frame, and the additional good quantum number is $`\mathrm{\Lambda }`$. At long distances the collision basis is appropriate. Here the electron is uncoupled from the core and the angular momentum $`N`$ of the core remains a good quantum number. The collision is described by phase shifts $`\mu _\mathrm{\Lambda }`$, which are identical to collision phase shifts if the total energy is positive enough for all channels to be open, and which are related to the classical $`K`$ parameter by $`\mu _\mathrm{\Lambda }=\mu _0-(K/4\pi L)\mathrm{\Lambda }^2`$.
We focus on the completely bound situation, when the total energy is low enough for the electron energy to be always negative whatever the value of $`N`$ within the allowed range $`[J-L,J+L]`$. Demanding that the electron wave function go to zero when $`r\to \infty `$ leads to the condition that the following determinant vanishes , i.e.
$$det𝖲=det\left\{U_{N\mathrm{\Lambda }}\mathrm{sin}(\pi (\mu _\mathrm{\Lambda }+\nu _N))\right\}=0,$$
(1)
where the unitary $`U`$ matrix given by
$$U_{N,\mathrm{\Lambda }}=\langle L,\mathrm{\Lambda },J,-\mathrm{\Lambda }|N,0\rangle (-1)^{J-N+\mathrm{\Lambda }}(2-\delta _{\mathrm{\Lambda },0})^{1/2}.$$
(2)
relates the two conflicting bases through a Clebsch-Gordan coefficient. The principal (non-integer) action $`\nu _N(E)`$ of the Coulomb electron is related to the electron energy through $`E=E_N+E_e=N(N+1)/(2I)-1/(2\nu _N^2)`$, where $`I`$ is the moment of inertia of the core, and we use atomic units ($`e=m=\hbar =1`$). Corresponding wave functions are the eigenkets of $`𝖲`$ for the eigenvalue zero, labeled by $`\mathrm{\Lambda }`$ in the Born Oppenheimer basis:
$$𝖲|A_\mathrm{\Lambda }\rangle =0.$$
(3)
Similarly the eigenbras $`\langle B_N|`$ of $`𝖲`$ (eigenkets of $`𝖲^t`$), labeled by $`N`$, describe the corresponding wave function in the collision basis.
To proceed notice first that $`𝖲`$ is the imaginary part of a complex unitary matrix
$$𝖤=𝖢+i𝖲=\left\{U_{N\mathrm{\Lambda }}\mathrm{exp}(i\pi (\mu _\mathrm{\Lambda }+\nu _N))\right\}.$$
(4)
This non-symmetric matrix maps the $`N`$ basis onto the $`\mathrm{\Lambda }`$ basis: it is half the QPM we look for, and describes the motion between apogee and perigee. From it we can construct two QPMs, which, by construction, turn out to be symmetric unitary complex matrices.
$`𝖤^t𝖤`$ operates in Born Oppenheimer $`\mathrm{\Lambda }`$ space, and is the exact quantization of a classical Poincaré map. The latter is nearly the classical map used in refs. . That map on the unit sphere described the position of $`𝐋`$ in the molecular frame immediately after the collision, whereas the present map describes the position of $`𝐋`$ in the middle of the collision (perigee). That this matrix is the $`T`$ matrix defined by Bogomolny to quantize a Poincaré map will be shown by proving that the eigenvalues and eigenfunctions of the entire system result from
$$det(1-T(E_n))=0,$$
(5)
i.e. Bogomolny’s equation for the quantized energy $`E_n`$. Indeed at quantized energies given by Eq. (1)
$$𝖤^t𝖤|A_\mathrm{\Lambda }\rangle =|A_\mathrm{\Lambda }\rangle ,$$
(6)
which implies (5). To prove this key point, first notice that unitarity of $`𝖤`$, namely $`𝖤^{\dagger }𝖤=(𝖢^t-i𝖲^t)(𝖢+i𝖲)=𝕀`$, leads to $`𝖢^t𝖢+𝖲^t𝖲=𝕀`$ and $`𝖢^t𝖲=𝖲^t𝖢`$. Then $`𝖤^t𝖤=(𝖢^t+i𝖲^t)(𝖢+i𝖲)=𝕀-2𝖲^t𝖲+2i𝖢^t𝖲`$, so that if $`𝖲|A_\mathrm{\Lambda }\rangle =0`$ then $`𝖤^t𝖤|A_\mathrm{\Lambda }\rangle =|A_\mathrm{\Lambda }\rangle `$.
Conversely $`\mathrm{𝖤𝖤}^t`$ operates in collision $`N`$ space and corresponds to a classical map between apogee and apogee in the laboratory frame.
We will now compare the traditional way of solving MQDT to the one implied by our QPM. The traditional way is to look for zeros of the determinant of the non-symmetric matrix $`𝖲`$ of Eq. (1), all of whose elements depend in a complex way on energy through $`\nu _N(E)`$. It is computed by an LU or an SVD method followed by a root-searching algorithm . The present method is to look for eigenphases of $`𝖤^t𝖤`$ or $`\mathrm{𝖤𝖤}^t`$. They are computed by the diagonalization of a symmetric unitary matrix. This is efficient and unproblematic because it diagonalizes in an orthonormal basis. Finally we search for the zeros of the resulting eigenphases.
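The procedure can be sketched generically as follows (the orthogonal matrix and the quantum defects are placeholders for the actual $`U_{N\mathrm{\Lambda }}`$ and $`\mu _\mathrm{\Lambda }`$; only the structure of Eq. (4) and the eigenphase search are illustrated):

```python
import numpy as np

rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.normal(size=(5, 5)))     # placeholder for U_{N,Lambda}
mu = np.array([0.10, 0.08, 0.02, -0.06, -0.17])   # placeholder quantum defects
EN = np.arange(5) * (np.arange(5) + 1) / 2000.0   # core energies N(N+1)/(2I)

def eigenphases(E):
    """Eigenphases of the QPM E^t E at total energy E (all channels closed)."""
    nu = 1.0 / np.sqrt(2.0 * (EN - E))            # Coulomb actions nu_N(E)
    Emat = np.diag(np.exp(1j * np.pi * nu)) @ U0 @ np.diag(np.exp(1j * np.pi * mu))
    T = Emat.T @ Emat                             # symmetric unitary matrix
    return np.sort(np.angle(np.linalg.eigvals(T)))

for E in np.linspace(-0.010, -0.005, 6):          # bound states: phases cross 0
    print(round(E, 4), eigenphases(E).round(3))
```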
The situation is sketched in Fig. 1. The search for zeros of eigenphases which vary nearly linearly with energy is obviously much simpler than the search for zeros of a determinant which is sometimes nearly tangent to the horizontal axis. Moreover, in this case of near tangency we had problems converging the wave functions because the eigenfunction switches between two nearly orthogonal values in a very narrow energy range , requiring the use of the more robust but slower SVD algorithm. Such a situation occurs frequently in nearly integrable cases, due to the lack of level repulsion. The diagonalization, on the contrary, always gives the orthogonal eigenfunctions correctly even if the eigenvalues are very close.
The implications of this procedure for the study of quantum chaos are of great interest. Figure 2 displays a chaotic situation. Comparing eigenphases in the near integrable (Fig. 1) and chaotic (Fig. 2) situations, we see that in the first case the phases as a function of energy display avoided crossings of straight lines running at different angles, while in the second case these lines are practically parallel. This shows that the drift of eigenphases as a function of energy displays the presence or absence of level repulsion or spectral rigidity more obviously than the energy levels themselves. This consideration is important in relation to the theory put forward by two of us , which relates Random Matrix properties of eigenvalues of a quantum system to properties of invariance under canonical transformations of the structure of the corresponding classical system (structural invariance). This theory was developed for maps such as the scattering map, the stroboscopic map or the Poincaré map, leading to results about their unitary representations, i.e. about eigenphases. To transfer statistical properties of eigenphases of the QPM to energy eigenvalues, it is necessary that the drift of eigenphases as a function of energy be nearly parallel for all phases. Analytic evidence that this must be true for chaotic systems is given in , in agreement with the numerical results shown in Fig. 2. To put this on a more quantitative basis we display in Fig. 3 histograms of the distributions of the velocities of the eigenphase curves. The chaotic case has a narrow distribution while the integrable one shows long tails. The present study is much more convincing in that respect than an earlier one for billiards using Bogomolny's theory directly, which is garbled by the non-unitarity of the $`T`$ matrix for finite $`\mathrm{\hbar }`$.
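Continuing the toy sketch above, such velocity histograms can be emulated directly from the `phases` array; note that sorting the eigenphases at each energy only approximately tracks individual branches, so this is a rough diagnostic:

```python
# eigenphase "velocities" d(theta)/dE along the energy scan of the sketch above;
# unwrap first so that 2*pi branch jumps do not contaminate the differences
theta = np.unwrap(phases, axis=0)
vel = np.diff(theta, axis=0) / np.diff(Es)[:, None]
hist, edges = np.histogram(vel.ravel(), bins=50)  # cf. the Fig. 3 histograms
```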
The difference seen in Fig. 3 for the near integrable and the chaotic case is remarkable and it is thus tempting to consider the eigenphase velocity distribution as another signature of classical chaos in quantum mechanics. While the narrow distribution for chaotic systems is typical, the tails for the integrable systems are not generic: they depend on the details of the partition of phase space by separatrices. In other examples tails have different shapes.
Summarizing: in this paper we proposed to interpret the multichannel quantum defect theory as a quantum Poincaré map. This was implemented in detail in the approximation that the absolute value of the electron angular momentum is conserved, but it is quite clear that the interpretation holds in general. The new interpretation allows for a more stable and efficient way to find solutions to MQDT, particularly for near-degenerate levels. Beyond that, the approximate system has been used to exemplify quantum features of classically chaotic systems. This can now be extended to use MQDT as a paradigm for QPM. Indeed a study of the velocity distribution of eigenphases confirms properties expected from other studies.
This work was partially supported by DGAPA (UNAM) project IN… and the CONACYT grant 25192-E
|
no-problem/9905/gr-qc9905074.html
|
ar5iv
|
text
|
# Untitled Document
Article withdrawn. The conclusion that the singularity is strong is incorrect. Numerics indicate that it is weak. Thanks to Patrick Brady and Amos Ori for comments.
|
no-problem/9905/hep-ph9905229.html
|
ar5iv
|
text
|
# PHENOMENOLOGY OF ATMOSPHERIC NEUTRINOS<sup>1</sup><sup>1</sup>1Talk given by one of us (M.L.) at the 17th International Workshop on Weak Interactions and Neutrinos, Cape Town, South Africa, January 1999
|
no-problem/9905/astro-ph9905109.html
|
ar5iv
|
text
|
# On X-ray Variability in Active Binary Stars
## 1 Introduction
Observations of the solar corona over timescales of years have shown the coronal X-ray emission, together with other indicators of activity such as Ca II H and K emission line strength, to be modulated by the solar dynamo on the 22 year magnetic field polarity reversal cycle, with maxima and minima occurring every 11 years or so (e.g., see the review by Harvey 1992). Surveys of the X-ray sky performed by the Einstein Observatory, and later by EXOSAT and ROSAT, have also firmly established the existence of supposedly analogous hot X-ray emitting coronae throughout the late main sequence (F-M), and also in late-type giants down to spectral types near mid-K (e.g., Vaiana et al. 1981). One fundamental issue in stellar physics concerns the relationship between this magnetic activity on stars with a wide range of physical parameters and solar magnetic activity (see review by Saar & Baliunas 1992): how directly and how far does the solar analogy apply to other stars, and how do the underlying physical processes differ? Unfortunately, while stellar coronal X-ray emission has been known and studied for more than 20 years, the small number of satellites in orbit at any given time able to observe it severely limits our knowledge of any long-term trends in stellar X-ray activity. Such knowledge is currently restricted to a handful of stars caught during repeated brief snapshots of them afforded by observations of different satellites.
If magnetic cycles with similar timescales to that of the Sun are present on other stars, as convincing evidence from the long-term Mt. Wilson Ca II H+K monitoring program suggests (e.g., Baliunas et al. 1995 and references therein), then one might also expect these stars to modulate their coronal X-ray fluxes in a similar way to the Sun. Further, on the Sun these modulations are large: Solrad observations (Kreplin 1970) in the 44-60Å and 8-20Å passbands show that X-ray flux at activity maximum (c.1968) is $`\sim 20`$ and $`\sim 200`$ times greater than at activity minimum (July 1964) respectively (see also Vaiana & Rosner 1978). Also, as stated by Hempelmann, Schmitt & Stȩpièn (1996), the variation of the solar X-ray flux in the equivalent of the ROSAT/PSPC bandpass over its activity cycle is a factor of 10 or more (also Pallavicini 1993; but Ayres et al. 1996, extrapolating from XUV data, predict a variation by only a factor $`\sim 4`$), similar to the ratio deduced for the variations in the soft X-ray range of Yohkoh based on ratios of X-ray fluxes in the 1-8Å passband of GOES (Aschwanden 1994), and as also directly observed by Yohkoh (Acton 1996). Such large long-term changes in mean stellar X-ray flux levels are, at least in principle, easily detectable. However, studies of stellar X-ray emission at different epochs based on Einstein and subsequent ROSAT observations of stars in open clusters (Stern et al. 1995, Gagné et al. 1995; Micela et al. 1996), as well as field stars (Schmitt, Fleming, & Giampapa 1995, Fleming et al. 1995) suggest that these active stars, at least, do not show strong long-term components of variability; some of these results are discussed by Stern (1998). A recent study by Hempelmann et al. (1996) of F-K main sequence stars also finds that the more active stars with higher surface X-ray fluxes tend not to have well-defined cyclic activity in terms of the Ca II H+K activity index. Some authors have suggested that this lack of clear detection of activity cycles might be an observational consequence of the dominant magnetic activity on the more active stars being due to a different dynamo process to the solar large-scale field $`\alpha \omega `$ dynamo (e.g., Stern et al. 1995; Drake et al. 1996).
In this paper, we turn to the most active stars – the RS CVn and BY Dra binaries – in order to investigate whether or not they might exhibit some form of cyclic or other long-term variability in their X-ray emission. We look at a sample of active binary stars that have been detected by the Einstein Observatory (c.1978-81) and that have also been observed by the ROSAT/PSPC both during the all-sky survey (c.1990) and during later pointed observations (c.1991-1994). We compare the different observations in order to assess whether or not there is any significant difference between changes in flux levels over short-term timescales ($`\frac{1}{2}-2`$ yrs; ROSAT All-Sky Survey vs. pointed phase) compared with changes over longer-term timescales ($`10-12`$ yrs; ROSAT vs. Einstein).
In §2 we describe the star sample used in this study. In §3 we describe the statistical method we adopt to compare the samples and discuss the implications of our results: in §3.1 we consider the correlations of the samples and their deviations from equality; in §3.2 we discuss the statistical significance of the analysis; and in §3.3 we discuss the implications of our results in the context of stellar activity cycles. We summarize in §4.
## 2 Data Selection
We adopt the sample of 206 spectroscopic binary systems of Strassmeier et al. (1993) as our baseline database of active stars. This sample was selected by Strassmeier et al. such that each system has at least one late-type component that shows Ca II H and K emission in its spectrum.
In Table 1 we list a subset of the Strassmeier et al. stars which have at least one X-ray measurement with either the Einstein/IPC or the ROSAT/PSPC, together with the relevant observed count rates. We have used the widely available catalogs of the Einstein Slew Survey (“Slew”; Elvis et al. 1992, Plummer et al. 1994), the Einstein Extended Medium Sensitivity Survey (“EMSS”; Gioia et al. 1990, Stocke et al. 1991), and the Einstein/IPC Source Catalog (“EOSCAT”; Harris et al. 1990) to obtain Einstein/IPC measurements (“Einstein”); and the ROSAT All-Sky Survey (RASS) Bright Source Catalog (“RASSBSC”; Voges et al. 1996) and ROSAT public archive pointed data sets (“WGACAT”; White, Giommi, & Angelini 1994) to obtain the PSPC measurements. We have not augmented the WGACAT with independently measured fluxes in order to keep the X-ray sample homogeneous.<sup>1</sup><sup>1</sup>1Using other existing catalogs (e.g., “ROSATSRC”; Voges et al. 1994) of ROSAT pointed data sets does not change the overlaps (cf. Table 2) significantly.
If a particular star is found in more than one Einstein survey catalog, we adopt the count rate derived in EOSCAT over that of EMSS over that of Slew. If multiple PSPC pointings exist of a star, then we use only the measurement with the highest effective exposure (including vignetting) and the one closest to the field-center.
In comparing Einstein/IPC counts with ROSAT/PSPC counts of the same star, we adopt a conversion factor $`\frac{PSPC}{IPC}=3.7`$ based on a straight line fit to the Einstein-RASSBSC sample. Clearly, this is an approximate number that could vary according to the adopted plasma temperature, the metallicity of the corona, and the column density of absorption to the source. The bandpasses and effective areas of both instruments are however similar enough over the temperature range of interest that the ratio of count rates is insensitive to these parameters (see §3.2). RASSBSC and WGACAT counts are extracted in slightly different passbands, and an appropriate correction ($`\sim 20\%`$) has also been applied to these datasets.
## 3 Analysis and Discussion
Subsets of the active binary stars that have been observed in two different epochs allow us to deduce the magnitude of the variability at different timescales. We begin by assuming that each such sample is statistically random, i.e., that there are no systematic changes in the variability of the sample from one epoch to the other, or in other words, that on average any increases in intrinsic luminosity are balanced by decreases in intrinsic luminosity. This assumption is supported by the Kruskal-Wallis test for both the combined samples (i.e., active binaries with X-ray data at all 3 epochs) and for the individual samples (active binaries observed in any of the 3 epochs): the hypothesis that the samples have the same mean cannot be rejected (the probability of obtaining the observed value of the K-W test statistic by chance is $`0.84\pm 0.02`$ and $`0.4\pm 0.06`$ respectively, much higher than an acceptable threshold of 0.05).<sup>2</sup><sup>2</sup>2Fleming et al. (1995) noted an increase in the mean X-ray emission level of a small, X-ray selected, sample of RS CVn and W UMa binaries between the Einstein and ROSAT epochs. However, the sample of stars used here has very little overlap with the Fleming et al. sample and is furthermore much larger in size, so significant changes in the observed mean X-ray emission level are not expected. This result also confirms that the conversion factors correcting the passband differences between the catalogs have been properly evaluated.
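As an illustration of this consistency check, the following sketch applies the Kruskal-Wallis test to synthetic stand-in samples (the numbers below are invented; the real count rates are those of Table 1):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
base = rng.lognormal(0.0, 1.5, 80)             # wide dynamic range, as in the sample
einstein = base * rng.lognormal(0.0, 0.3, 80)  # synthetic epoch-to-epoch scatter
rassbsc = base * rng.lognormal(0.0, 0.3, 80)
wgacat = base * rng.lognormal(0.0, 0.3, 80)

H, p = kruskal(einstein, rassbsc, wgacat)
print(f"K-W H = {H:.2f}, p = {p:.2f}")  # large p: same-mean hypothesis not rejected
```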
We ignore censored data (stars detected in one survey but not in another \[11 in Einstein, 3 in WGACAT, 65 in RASSBSC\], as well as stars not detected in any survey) in this work. The large dynamic range of the observed count rates ($`>10^2`$; cf. Figures 1-3) in the samples, and the strong correlations in the detected count rates show that ignoring undetected stars will not affect the results presented here. Further note that we use a sample that is not X-ray selected, and thus avoid the problem encountered by Fleming et al. (1995) who found general decreases in overall flux between Einstein and ROSAT measurements due to preferential detection of stars while flaring.
In the following sections we analyze the paired samples in greater detail (§3.1), assess the significance of our results (§3.2), and discuss the results in the context of stellar activity cycles (§3.3).
### 3.1 Correlations and Deviations
The sample of stars able to shed light on “short” timescale ($`0-3`$ yr) variability, i.e., stars re-observed after a short interval, comprises those active binaries present in both the RASSBSC and WGACAT, while there are two sets of paired datasets defining the “long” timescales ($`\sim 10`$ yr) – Einstein-RASSBSC and Einstein-WGACAT. These paired samples are shown in Figures 1-3: it is clear that the count rates are strongly correlated, as one would expect in the case where the intrinsic variability of a single star is much smaller than the range in brightness of the whole sample. For completeness, and to define the strength of the correlations in count rates, we have performed standard statistical correlation tests, the results of which are listed in Table 2. We have tested the sensitivity of the derived correlation coefficients to the statistical errors on the observed count rates by performing Monte Carlo simulations. These involved generating a new set of count rates (of the same sample size) for each star by sampling from a Gaussian with a mean identical to the observed value and standard deviation equal to the observed $`1\sigma `$ error; correlation coefficients are then derived using the new set of simulated count rates. We find that the derived coefficients are stable to within $`\sim 0.01`$.
The strong correlations within the paired samples imply that any actual variability in X-ray emission within the sample is not much larger than the measurement errors. Indeed, the majority of the observed count rates in the different samples are within a factor of 2 of each other (after allowing for the conversion between the Einstein and ROSAT passbands). This result is similar to other comparisons of Einstein and ROSAT observations of samples of mostly active late-type stars (e.g., Schmitt et al. 1995; Stern et al. 1995; Gagné et al. 1995; Micela et al. 1996; Fleming et al. 1995). However, the larger scatter apparent in the Einstein-RASSBSC (and to a lesser extent, Einstein-WGACAT) samples compared to the RASSBSC-WGACAT sample (cf. Figures 1-3) does appear to indicate the presence of some non-statistical scatter in the data. In the following, we quantify this apparent variability.
The issue we seek to address is the extent of the departure of a paired set of count rates from strict equality. Further, any measure of this departure must include the effects of the statistical uncertainties associated with the observed count rates. Thus, we define the quantity
$$\delta _{\perp }=\frac{1}{N_{samp}}\underset{samp}{\sum }\frac{D_{\perp }}{\sigma _{tot}}$$
(1)
where $`D_{\perp }`$ is the perpendicular distance of the pair of count rates from a straight line of unit slope passing through the origin, $`\sigma _{tot}`$ is the total error associated with that pair as obtained by propagating the individual errors, and $`N_{samp}`$ is the number of paired count rates in the sample; if the count rates in the two samples are identical $`\delta _{\perp }=0`$, and in the case of only statistical variations, $`\delta _{\perp }\sim 1`$. Note that this is similar (but differs in the use of perpendicular deviations and division by the error) to the merit-function used to derive straight-line fits to data such that absolute deviation is minimized (cf. Press et al. 1992). In the case of small deviations, $`D_{\perp }`$ may be obtained from the logarithmic ratio of the count rates.<sup>3</sup><sup>3</sup>3This is the statistic adopted by Gagné et al. (1995). Taking the count rates observed at two epochs to be $`c_1`$ and $`c_2`$, with $`c_1=c_2+\delta _{12}`$, $`\mathrm{ln}\left(\frac{c_1}{c_2}\right)`$ $`\approx `$ $`\mathrm{ln}\left(1+\frac{\delta _{12}}{c_2}\right)`$ $`\approx `$ $`\frac{\delta _{12}}{c_2}`$ $`\approx `$ $`\frac{D_{\perp }}{c_2}`$. Note that this formulation preserves sign information (i.e., whether the first or the second epoch has the higher count rate; the expectation value of this statistic is 0 in the absence of variability, unlike that of $`\delta _{\perp }`$, which has an expectation value $`\sim 1`$). However, since we are only interested in deviations from constancy, we essentially marginalize over this two-sidedness (and thereby improve our detection efficiency) by using the perpendicular deviates $`D_{\perp }`$. Using the perpendicular deviates also allows us to include the effects of the measurement uncertainties in a straightforward fashion. Note that standard statistical measures such as the Student's t, the F-statistic, the Sign test, etc. apply to the means and variances of the samples, and are not sensitive enough for our purposes. The adopted method also has the advantage of allowing us to parameterize the detected variability (albeit crudely; see §3.2). The values of $`\delta _{\perp }`$ derived from the three pairs of datasets as defined in Equation 1 are listed in Table 3. The uncertainties in the derived values of $`\delta _{\perp }`$ have been estimated from Monte Carlo simulations of the different datasets as described above: $`\delta _{\perp }`$ was calculated for each realization of the datasets, and the estimated uncertainties correspond to the standard deviation of the simulated values of $`\delta _{\perp }`$.
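A minimal implementation of Equation 1 is sketched below; the error propagation assumes one plausible reading of $`\sigma _{tot}`$, namely the quadrature sum of the two measurement errors projected onto the perpendicular direction (the projection factors cancel in the ratio):

```python
import numpy as np

def delta_perp(c1, e1, c2, e2):
    """Mean normalized perpendicular deviate, Equation 1.

    c1, c2: count rates at the two epochs (after passband conversion);
    e1, e2: their 1-sigma errors.  The distance from the unit-slope line
    through the origin is |c2 - c1|/sqrt(2); sigma_tot is taken here as
    the quadrature sum of the errors projected onto the same direction.
    """
    D = np.abs(np.asarray(c2) - np.asarray(c1)) / np.sqrt(2.0)
    sigma_tot = np.sqrt(np.asarray(e1) ** 2 + np.asarray(e2) ** 2) / np.sqrt(2.0)
    return np.mean(D / sigma_tot)
```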
In Figure 4, we show the cumulative distribution of $`D_{\perp }/\sigma _{tot}`$ computed for each paired dataset, augmented by Monte Carlo simulations performed as described above in order to illustrate the distributions more clearly. The fraction of stars in each sample with normalized perpendicular deviates $`>D_{\perp }/\sigma _{tot}`$ is shown. When the differences between two samples may be attributed solely to statistical errors, the differential distribution of normalized perpendicular deviates is distributed as a one-sided Gaussian; this distribution is also illustrated in Figure 4. Any “excess variability” – deviates larger than expected on purely statistical grounds or from systematic errors – manifests itself in the form of wider distributions, i.e., with a larger fraction of stars in the sample showing perpendicular deviates at larger values of $`D_{\perp }/\sigma _{tot}`$. Based on Figure 4, each of the samples considered shows clear and unambiguous signatures of excess variability. Indeed, 30% of the stars in the Einstein-RASSBSC sample, 45% of those in the Einstein-WGACAT sample, and 35% of those in the RASSBSC-WGACAT sample show scatter attributable to non-statistical variability at a level $`>5\sigma `$.
We have also carried out a similar analysis on subsamples of the largest of our three samples (Einstein-RASSBSC) in order to investigate whether or not there are any trends in $`D_{\perp }/\sigma _{tot}`$ with spectral type or luminosity class. We find that the resulting distributions of $`D_{\perp }/\sigma _{tot}`$ are similar to the distribution obtained for the full sample, indicating that in our data there is no significant evidence for such systematic changes in observed scatter or variability in soft X-ray emission.
One of the primary goals of this study is to look for evidence of underlying variability with characteristic timescales of order a decade or so, similar to that of the solar cycle. In the case of the Sun, such variability in soft X-rays is about an order of magnitude (e.g., Pallavicini 1993, Hempelmann et al. 1996) or more (Kreplin 1970, Aschwanden 1994, Acton 1996). If such a component of variability were present in the stars of our active binary sample, we would expect the two Einstein-ROSAT samples to exhibit a larger spread in $`D_{\perp }/\sigma _{tot}`$ than the RASSBSC-WGACAT sample, since the respective Einstein and ROSAT observations span an interval more comparable to the expected period of the long-term variability. That such a signature is not easily discernible may be partly attributed to the generally larger errors associated with the Einstein measurements of count rates—note that the perpendicular deviates considered here are normalized relative to the estimated statistical error. Thus, in order to show an effect similar to that seen in the RASSBSC-WGACAT sample, the Einstein-ROSAT samples must have correspondingly larger intrinsic non-statistical differences. However, as we have emphasized above, soft X-ray variability over the solar cycle amounts to an order of magnitude or more, which is well beyond the statistical uncertainties in the Einstein-ROSAT comparisons. Therefore, if any long-term, or cyclic, component of variability is present in the stars of our active binary sample, then the amplitude of this variability must be much less than in the solar case. In the following sections we discuss the implications of this result.
### 3.2 Stochastic Variability
The derived values of $`\delta _{\perp }`$ (see Table 3) conclusively show that the data are inconsistent, at a very high significance, with the hypothesis that there are only statistical variations in count rates among the 3 datasets acquired at different epochs: i.e., we unambiguously detect the existence of excess variation among the samples.
The nature of this excess scatter is however not as well-determined. We rule out instrumental effects as being the main cause of the observed scatter since it is seen even in the RASSBSC-WGACAT sample. In the cases involving the IPC, we note that even though the passbands and instrument sensitivities of Einstein and ROSAT differ, for spectra generated from thermal plasma at temperatures between $`5\times 10^6`$ and $`10^7`$ K, which are the likely coronal temperatures of the stars being considered (e.g., Schmitt et al. 1990, Dempsey et al. 1993), these differences are small (cf. Wood et al. 1995) and the maximum error we are likely to make in the $`\frac{PSPC}{IPC}`$ count-ratio is $`\sim 10\%`$. Monte Carlo simulations of the datasets including this type of error show that its effect on the value of $`\delta _{\perp }`$ is to offset it by $`\sim 0.2`$ and is hence negligible. We therefore conclude that the origin of the detected excess variations is intrinsic.
We now investigate the possibility that all of the observed excess variation can be attributed to stochastic variability, and then whether or not we can discern any differences in the magnitude of such variabilities between the different pairs of data (i.e., stars common to \[Einstein,RASSBSC\], \[Einstein,WGACAT\], or \[RASSBSC,WGACAT\]). This is not entirely straightforward because the three sets of observations were obtained under different conditions and with different instrumentation and have different measurement uncertainties. To do this we first assume that the variability detected here may be parameterized by modeling it as purely stochastic variability, relative to the estimated statistical error. We emphasize that we carry out this modeling only as a means to explore the range of $`\delta _{\perp }`$, and that it is not our intention to claim that intrinsic variability in active binaries indeed follows this pattern. We assume that the variability may be characterized by the parameter $`\beta =\frac{\mathrm{\Delta }I}{\sigma _I}`$, where here $`\mathrm{\Delta }I`$ represents the effective change in soft X-ray emission from one observation to the next; $`\beta `$ then represents the ratio of the magnitude of the variation and the observed error in the count rate. Note that this assumption obviously underestimates the magnitude of cyclic variability, but is adequate to summarize our results given the absence of detailed time traces of photometric and X-ray brightness of the stars in the sample. For the parameter $`\beta `$ to be physically meaningful, the estimated errors must be insensitive to distance effects – i.e., the expected variability must not be a function of our special location. For the stellar sample in question (Table 1), we note that the X-ray luminosity spans a range $`\frac{max}{min}[L_x]\sim 17400`$, much greater than distance-induced flux variations ($`\frac{max}{min}[d^2]\sim 4700`$). The spread in count rates is therefore much larger than variations induced by stellar distance (and errors therein); the bias introduced into the analysis due to farther sources being weaker and thus naturally having larger relative errors is thus minimized, and the adopted parameter is a reasonable quantity to use to describe the samples. Comparison of $`\beta `$ derived from different datasets is however still subject to the problem of different datasets having different relative errors, and we account for this later.
We derive the appropriate value of $`\beta `$ for each dataset pair as follows. Starting from an arbitrary sample of count rates (we used RASSBSC because it is the largest sample) and an assumed value of $`\beta `$, we used Monte Carlo simulations to generate two new sets of count rates for each point. The new count rates were obtained by sampling from two Gaussians, both with means equal to the original count rate but with different standard deviations $`\sigma _1=\sigma _{tot}`$, the estimated statistical error (see Equation 1), and $`\sigma _2=\sqrt{1+\beta ^2}\sigma _{tot}`$. A $`\delta _{\perp }`$ was then derived for the new pair of simulated datasets. This process was repeated for different values of $`\beta `$, resulting in predicted values of $`\delta _{\perp }`$ as a function of $`\beta `$. For each dataset pair, $`\beta `$ was then derived by comparing this function with the observed $`\delta _{\perp }`$. Note that by definition of $`\delta _{\perp }`$ and $`\beta `$, this process is insensitive to details of the original sample such as number of points, sizes of individual errors, etc.
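This calibration can be sketched as follows, reusing `delta_perp` from §3.1; here `rates`, `errors` and `delta_obs` are placeholders for a real paired sample and its observed $`\delta _{\perp }`$, and the normalization by the statistical errors alone is our reading of the procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def delta_perp_model(rates, errors, beta, nmc=200):
    """Predicted delta_perp when one epoch carries extra stochastic scatter
    of relative strength beta (sigma_2 = sqrt(1 + beta**2) * sigma_tot)."""
    vals = []
    for _ in range(nmc):
        c1 = rng.normal(rates, errors)
        c2 = rng.normal(rates, np.sqrt(1.0 + beta**2) * errors)
        vals.append(delta_perp(c1, errors, c2, errors))
    return np.mean(vals)

# invert the delta_perp(beta) curve (monotonic, up to Monte Carlo noise)
betas = np.linspace(0.0, 30.0, 31)
curve = [delta_perp_model(rates, errors, b) for b in betas]
beta_hat = np.interp(delta_obs, curve, betas)
```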
The results are listed in Table 3: All samples are characterized by non-statistical relative variabilities $`\beta >10`$, with the Einstein-RASSBSC sample being the lowest as expected (due to the relatively large errors on the count rates); and despite the significantly higher $`\delta _{\perp }`$ of the Einstein-WGACAT sample relative to the RASSBSC-WGACAT sample, the ranges of relative variabilities $`\beta `$ overlap with each other, suggesting that long-term (potentially cyclic) variability is similar in magnitude to short-term (potentially episodic) variability.
The derived relative variabilities may also be used to estimate an “effective variability”, $`\frac{\mathrm{\Delta }I}{I}\sim \beta /<SNR>`$, where $`<SNR>`$ is an average measure of the signal-to-noise ratio. Inclusion of this factor further minimizes the stellar-distance bias in $`\beta `$. We thus derive (see Table 3) $`\frac{\mathrm{\Delta }I}{I}=(0.29-0.36),(0.34-0.43),(0.41-0.52)`$ for RASSBSC-WGACAT, Einstein-WGACAT, and Einstein-RASSBSC respectively. We note that the observed RASS and WGA count rates for FF Aqr are sharply different ($`\frac{RASS}{WGA}\sim 15`$), and attribute this to a likely flare event during the ROSAT All-Sky Survey. This one star contributes $`\sim 10\%`$ of the measured<sup>4</sup><sup>4</sup>4We are potentially interested in detecting long-term cyclic variability, and hence would be justified in isolating the effects of such variability by eliminating other contributors to the measured variability. Note however that we do not exclude FF Aqr from our analyses because we are unable to unambiguously identify cyclic variability in the chosen samples. $`\frac{\mathrm{\Delta }I}{I}`$. The long-term samples have systematically larger values of $`\frac{\mathrm{\Delta }I}{I}`$, but are not significantly different given the size of the error bars, the possible systematic errors (see above), and the unsuitability of the adopted parameterization to characterize cyclic variability (see §3.3). Note that the measured “effective variability” over “short” timescales (RASSBSC-WGACAT) is similar to that found by Ambruster, Sciortino, & Golub (1987) by photon-arrival-time analysis of Einstein observations of selected stars over timescales ranging from $`10^2-10^3`$ s. A similar result was also found by Pallavicini, Tagliaferri, & Stella (1990) in their analysis of EXOSAT data of flare stars: they detect variability at a variety of timescales ($`3^m`$ to $`>100^m`$) in half the stars in their sample at strengths ranging from 15%-50%. Thus it might not be unreasonable to attribute most, and perhaps all, of the causes of the observed RASSBSC-WGACAT variability to processes (e.g., flares, rotational modulation, active region evolution) that operate on such relatively short timescales.
### 3.3 Cyclic Variability
Our modeling described above searches for stochastic (e.g., flaring) variability, and is insensitive to potential systematic (e.g., cyclic) variability in a sample where measurements of X-ray flux from individual stars are uncorrelated. However, if we compare the derived magnitude of the variability $`\frac{\mathrm{\Delta }I}{I}`$ in the short-term ($`\sim 1`$ yr) with the long-term ($`\sim 10`$ yr) samples, we do find indications of a larger variation over longer timescales. This result is statistically inconclusive since the error-bars on the variability indices ($`\frac{\mathrm{\Delta }I}{I}`$) derived for the various samples overlap, and further because of limitations imposed by the parameterization itself.
What fraction of the above variability is due to stochastic causes, and what fraction is due to the effects of periodic causes? In order to address this question, and thereby derive upper limits on the magnitude of a cyclic component to the variability, we model the flux variations as due entirely to a sinusoidal component combined with a constant base emission. We write the X-ray flux at an arbitrary time $`t`$,
$$f_x(t)=A_{cyc}\mathrm{sin}\left(\frac{2\pi t}{P_{cyc}}+\varphi \right)+A_{cyc}+f_{x_0},$$
(2)
where $`A_{cyc}`$ is the amplitude, $`P_{cyc}`$ is the period, $`\varphi `$ is the phase of the cyclic component, and $`f_{x_0}`$ is a non-varying base emission. Note that $`f_x(t)\ge 0`$ for all $`t`$. The strength of the cyclic activity may be parameterized by the ratio of cyclic to base emission fluxes,
$$\zeta =\frac{2A_{cyc}}{f_{x_0}}.$$
(3)
Conversely, if $`f_{obs}`$ is the observed flux at, say, $`t=0`$, and $`\zeta ^{\prime }`$ is an estimated fraction of the cyclic component,
$$A_{cyc}=\frac{\zeta ^{\prime }f_{obs}}{2+\zeta ^{\prime }(1+\mathrm{sin}\varphi )}.$$
(4)
We then estimate the maximum value that $`\zeta `$ can have for our adopted sample of stars, using a technique similar to that used to measure the strength of stochastic variability (§3.1). In order to minimize the effects of short-term variability on estimates of cyclic variability, we model the cyclic variability starting from a paired dataset (say $`A`$ and $`B`$). The modeling involves obtaining Monte Carlo transpositions of the count rates of one of the samples (say $`A`$) to the epoch of a different sample (say $`C`$). This transposition is carried out for a fixed value for the strength of the cyclic component (i.e., $`\zeta =const.`$), and using values of $`P_{cyc}`$ randomly sampled from a log-normal distribution with a mean corresponding to 10 yr and $`1\sigma `$ range corresponding to 4-25 yr (this range is a rough approximation of the results tabulated by Baliunas et al. 1995, based on the Mt.Wilson Ca II H+K monitoring program; see their Figure 3), at randomly selected phases, and including the effects of the error bars as in §3.1 . A distribution of $`D_{\perp }/\sigma _{tot}`$ is obtained as before for the paired datasets of the model (\[$`A\to C`$\]($`\zeta `$)) and the original dataset ($`B`$) – in other words, for the simulated pair $`BC`$ – and is compared with the distribution derived from the reference dataset of paired samples (here, the observed pair $`BC`$). The value of $`\zeta `$ that minimizes the difference between the modeled and reference distributions of $`D_{\perp }/\sigma _{tot}`$ indicates the level of cyclic variability required in order to account for the difference between the two pairs of datasets, if the entire difference is to be attributed to cyclic variability. The difference between the distributions is simply parameterized by the Kolmogorov-Smirnov test statistic of the maximal distance $`\delta _{KS}(\zeta )`$ between the cumulative distributions of $`D_{\perp }/\sigma _{tot}`$. Note that we do not attempt to derive a probability value for this statistic, partly because doing so implicitly assumes that the model adopted for transposing the count rates (Equation 2) is correct, and partly because the modeling is carried out via Monte Carlo simulations. Thus, we set an upper limit on the amplitude of the cyclic variability observable in the paired datasets $`B`$ and $`C`$. Since we have three datasets, there are many combinations possible since each of RASSBSC, Einstein and WGACAT can correspond to all or any of $`A`$, $`B`$ and $`C`$. The curves illustrating the Kolmogorov-Smirnov test statistic as a function of $`\zeta `$ for two cases are illustrated in Figure 5.
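The transposition step of this procedure is compact enough to sketch; the period-distribution parameters follow the text, while the omission of the error-bar resampling and the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(3)

def transpose_cyclic(rates, dt_years, zeta):
    """Transpose observed rates by dt_years under Eqs. (2)-(4): cyclic
    strength zeta = 2*A_cyc/f_x0, log-normal periods (median 10 yr,
    1-sigma span roughly 4-25 yr), uniform random phases."""
    n = len(rates)
    P = np.exp(rng.normal(np.log(10.0), np.log(2.5), n))      # ~4-25 yr at 1 sigma
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    A = zeta * rates / (2.0 + zeta * (1.0 + np.sin(phi)))     # Eq. (4), f_obs = rates
    f0 = rates - A * (1.0 + np.sin(phi))                      # base emission f_x0
    return A * np.sin(2.0 * np.pi * dt_years / P + phi) + A + f0  # Eq. (2)

# the D_perp/sigma_tot distributions of the simulated and reference pairs can
# then be compared, e.g. with scipy.stats.ks_2samp, scanning zeta for the
# minimal KS distance
```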
The derived limit does depend on the sampling distribution adopted for the cyclic periods (see above): if longer periods are preferentially sampled (e.g., by increasing the width of the distribution, or by shifting it to longer periods), the upper limit on $`\zeta `$ decreases – i.e., smaller cyclic amplitudes are sufficient to account for the entire change in the count rates of the samples studied – and vice versa. This trend is however weak when compared to the adopted distribution of $`P_{cyc}`$.
In order to investigate the cyclic component with greatest sensitivity, the observations obtained with longer time intervals between them need to be compared. In this case, our datasets $`A`$, $`B`$ and $`C`$ corresponded to RASSBSC, WGACAT and Einstein, respectively. The function $`\delta _{KS}(\zeta )`$ is illustrated for this case in Figure 5: considering paired samples of \[RASSBSC$`\to `$Einstein\]($`\zeta `$) and WGACAT (in other words, simulated Einstein-WGACAT) with a directly observed sample of Einstein-WGACAT, we find that the actual cyclic component must have a strength $`\zeta \lesssim 4`$ – i.e., not more than 80% of the X-ray emission may be in the cyclic component even if all of the long-term variability is ascribed to the effects of activity cycles. It is worth noting here that we implicitly include the effects of short-term variability by using the RASSBSC-WGACAT sample as the initial distribution for modeling. The cyclic component is thus imposed on top of any variations that may exist due to flaring, and the effects of the latter are thus minimized.<sup>5</sup><sup>5</sup>5The structure of our chosen datasets does not allow us to distinguish between “flaring” and “quiescent” (which may arise due to active region evolution, rotational modulation, etc.) variability, a distinction made by Kürster et al. (1997) using periodogram analysis of the light curve of AB Dor; the “short-term” variability we refer to includes both types.
Also worthy of investigation is the magnitude of cyclic variability that might be required to explain all the additional (non-statistical) scatter between the RASSBSC and WGACAT samples, i.e., under the assumption that there is no short-term or stochastic variability. For example, starting with a sample of RASSBSC stars alone, transposing them to the approximate WGACAT epoch for various values of $`\zeta `$, and considering the resultant paired sample of \[RASSBSC$`\to `$WGACAT\]($`\zeta `$) and RASSBSC, we find that only at large values of the cyclic component ($`\zeta \gtrsim 10`$) does the distribution match that derived from RASSBSC-WGACAT (Figure 5). However, such a large amplitude for cyclic variability would result in a much larger spread in count rates than is observed in our comparisons of Einstein vs. ROSAT datasets (Figures 1 and 2). This is as one would expect in the presence of significant short-term stochastic variability: indeed, it is telling us that the observed variability cannot be of a purely cyclic nature. This constraint is only possible using observations obtained at three different epochs, as we have here.
It is interesting to compare our upper limit for $`\zeta `$ for this sample of active binary stars to the observed cyclic component of solar coronal activity. In the case of the Sun, data obtained by Yohkoh have illustrated the very large contrast in soft X-rays between the Sun at solar minimum, essentially devoid of active regions, and at solar maximum when several large active regions are generally present on the visible hemisphere at any one time (e.g., Hara 1996, Acton 1996). The observed change in soft X-ray flux from solar minimum to maximum amounts to at least an order of magnitude (Aschwanden 1994, Acton 1996) and likely much larger (Kreplin 1970), so that a solar value for our parameter describing the cyclic activity component is $`\zeta _{\odot }\gtrsim 10`$.
Fleming et al. (1995) have carried out an analysis of X-ray variability using an X-ray selected sample, viz., RASS flux measurements of EMSS stars. They find that relative to F stars, 24% of Solar-type stars, 49% of dMe stars, and 19% of RS CVn and W UMa stars show a significant decrease in emission, while 12%, 10%, and 48% respectively of the above types show significant increases. The larger apparent decreases for normal stars may be ascribed to the bias inherent in X-ray selected stellar samples. In contrast, the apparent increase in the number of active binary stars showing increased emission levels cannot be due to such a bias. We are however unable to confirm this effect using a different, and larger, sample of stars that include EOSCAT and Slew observations. Indeed, for unbiased, uncorrelated, samples it is difficult to envision a physical mechanism that would cause this apparent increase found by them, and we therefore attribute it to accidental sample selection effects (as Fleming et al. also do) and to possible variations in flux calibration (see Figure 3 of Fleming et al).
Comparison of X-ray fluxes between Einstein and ROSAT epochs using statistically complete samples of field stars (Schmitt et al. 1995) shows that there is little evidence for systematic changes in the mean X-ray emission levels of stars in excess of factors of 2 on timescales of 10 yr. Schmitt et al. point out that this result is valid especially for active flare stars, but caution that the apparently larger spread in X-ray emission observed for fainter stars could be an artifact of the small number of such stars present in the sample.
Attempts to constrain long-term variability in X-ray emission from stars in the Pleiades (Schmitt et al. 1993, Gagné et al. 1995, Micela et al. 1996) and the Hyades (Stern et al. 1995) clusters have also been inconclusive. Schmitt et al. (1993) compare Pleiades stars detected in the ROSAT All-Sky Survey with previous Einstein observations and find numerous instances of strong variability by factors of an order of magnitude that are unlikely to be due to rotational modulation, measurement or calibration errors, or flaring activity; they conclude that cyclic activity must be the cause of such large variations. In contrast, Gagné et al. (1995) and Micela et al. (1996) find at best a marginal increase in long-term variability compared to short-term variability in their analyses of pointed data: the latter find that 15% of the stars show variability by factors $`>2`$ over $`\sim 10`$ years and 10% over $`\frac{1}{2}`$ year; the former find 40% of the stars show significant variability over $`10-11`$ years compared to 25% over $`\frac{1}{2}-\frac{3}{2}`$ years, but that the difference could be attributed to a bias resulting from the increased sensitivity of ROSAT. Stern et al. (1995) conclude from their comparison of Einstein and ROSAT data of the Hyades that the majority of the stars show long-term variability of less than a factor of 2, and that there is no evidence of strong cyclic activity. As noted above, this amplitude of variability is similar to what one would expect on short timescales of $`\sim 10^3`$ s based on the time-series analyses of Einstein observations of various stars (Ambruster et al. 1987).
In this context, it must be noted that a long-term monitoring program of the young active K star AB Dor has been carried out with ROSAT by Kürster et al. (1997): they find that the X-ray flux is variable on short time-scales ranging from minutes to weeks, but shows no long-term trend indicative of cyclic activity over the $`5\frac{1}{2}`$ years of the program. The lack of a detection of cyclic variability in this star of course does not rule out its presence in other active stars, and such variability may indeed manifest itself in AB Dor at much longer timescales.
Micela & Marino (1998) have compared the changes in X-ray emission in field dM stars observed with ROSAT over timescales of days to months, and have compared that data with similar data for the Sun by constructing maximum-likelihood distribution functions of the flux variations. They conclude that variability is present at all timescales they have considered, but do not distinguish between long- and short-term or stochastic and cyclic variability. They have also applied their method to compare Pleiades dM star data from Einstein/IPC, ROSAT/PSPC, and ROSAT/HRI to Solar data and reach the same conclusion.
The problem of whether or not F-K main sequence stars in general are similar to the Sun in their trends of X-ray emission variations with magnetic cycles has been studied recently by Hempelmann et al. (1996). These authors attempted to circumvent the sparse X-ray observations of any one single star through a statistical study of a group of stars for which long-term Mt.Wilson Ca II H+K monitoring observations are available, and for which distinct cyclic activity behavior was detected (e.g., Baliunas et al. 1995 and references therein). They found that the deviation of X-ray surface fluxes $`F_\mathrm{x}`$ (derived from ROSAT all-sky survey and pointed-phase observations) from the mean relation between $`F_\mathrm{x}`$ and Rossby number, $`R_o`$, is correlated with activity cycle phase as indicated by the Mt.Wilson Ca II H+K $`S`$-index.
In addition to stars with cyclic Ca II H+K emission, the Mt.Wilson monitoring program also revealed stars with less regular and more chaotic variability (Baliunas et al. 1995). Hempelmann et al. (1995) showed that the “regular” (cyclic) and “irregular” stars are strongly anti-correlated with the Rossby number, $`R_o`$; the X-ray fluxes of the latter group clearly show them to comprise the most active stars with the highest surface fluxes. Hempelmann et al. interpreted these results in terms of a transition from a non-linear to a linear dynamo going from the irregular to the regular stars. This view is supported by non-linear modeling of stellar dynamos (e.g. Tobias, Weiss, & Kirk 1995, Knobloch & Landsberg 1996, and references therein) which show that as stellar rotation period is decreased, an initially steady system begins to exhibit quasi-periodicity or maunder-minimum type aperiodicity; chaotic behavior is a natural consequence of such models.
Alternately, Drake et al. (1996) argued that the observational evidence indicating that both active stars and fully-convective M-dwarfs—the latter supposedly being unable to support a solar-like $`\alpha \omega `$ dynamo—do not appear to show strong cyclic behavior provided empirical support for the qualitative theoretical framework outlined by Weiss (1993; see also Weiss 1996) in which the magnetic activity on active stars and low-mass fully-convective stars is predominantly maintained by a turbulent or distributed dynamo (e.g., Durney et al. 1993, Weiss 1993, Rosner et al. 1995). Stern et al. (1995; see also Stern 1998), in their comparison of Einstein and ROSAT observations of Hyades dG stars, made similar speculations. Small-scale magnetic fields generated by turbulent compressible convection within the entire convective zone (e.g., Nordlund et al. 1992, Durney et al. 1993) would result in different functional dependences of activity indicators with stellar parameters and, in particular, because of their disordered spatial and temporal nature, would not be expected to exhibit activity cycles (Cattaneo 1997).
In the above context, the general lack of well-defined activity cycles in the Ca II H and K emission of the most active stars is especially interesting in light of the very recent finding based on radio observations of a magnetic cycle on the rapidly rotating RS CVn binary UX Ari (G0 V + K0 IV; Period=6.44 day), with an apparent polarity reversal every 25.5 days (Massi et al. 1998). If this surprisingly short period were analogous to the solar 22 year cycle, it would appear to offer a promising explanation for the lack of obvious cyclic behavior on timescales of years. However, the situation regarding cycles on RS CVn and other active stars is not quite so clear. Several authors have found evidence for cyclic behavior with periods comparable to that of the solar cycle on very active stars. For example, cycles of 11.1 yr for $`\lambda `$ And, 8.5 yr for $`\sigma `$ Gem, 11 yr for II Peg, and 16 yr for V711 Tau were inferred by Henry et al. (1995) from mean brightness changes derived from up to 19 years of photoelectric photometry. Dorren & Guinan (1994) find evidence for an activity cycle with a period of about 12 years on the rapidly rotating solar-like G dwarf EK Dra (HD 129333; rotation period $`2.8`$ days). Rodonò et al. (1995) also find a periodic variation in the spot coverage of RS CVn with a period of about 20 years, while Lanza et al. (1998) find similar evidence for a 17 year activity cycle in the secondary of AR Lac. Alternatively, others have detected photometric variability, but failed to find firm evidence for cyclic behavior in numerous binaries (eg. Strassmeier et al. 1994 in the RS CVn star HR 7275; Oláh et al. 1997 in the case of HK Lac; Cutispoto 1993 in the case of IL Hya, LQ Hya, V829 Cen, V851 Cen, V841 Cen, GX Lib, V343 Nor, and V824 Ara).
Regardless of the observed periods of activity cycles of active binaries based on optical observations of modulation in, presumably, magnetically-related spot coverage, it is clear based on the results of our and earlier analyses that magnetic activity manifest in coronal X-ray emission is at best only weakly modulated by any long-term magnetic cycles present. This contrasts with the solar case in which coronal activity is very strongly dependent on the solar magnetic cycle. While we cannot rule out the existence of a multi-period cyclic dynamo, we suggest based on the relatively small difference between the short-term and long-term variabilities that this situation reinforces the earlier conclusions and conjectures of Stern et al. (1995), Drake et al. (1996) and Stern (1998) that a turbulent or distributed dynamo dominates the magnetic activity of the more active stars.
## 4 Summary
In order to determine the characteristics of the variability of X-ray emission on active binary systems, we have carried out a statistical comparison of the X-ray count rates observed at different epochs. From the list of active chromosphere stars cataloged by Strassmeier et al. (1993) we have extracted subsamples which were detected in the following surveys: Einstein/IPC EOSCAT, EMSS, and Slew; ROSAT All-Sky Survey (RASSBSC); ROSAT archival pointed dataset (WGACAT). Our study differs from and improves upon earlier comparisons of Einstein and ROSAT observations of late-type stars in that the analysis of both RASSBSC and WGACAT observations enables us, at least in principle, to distinguish between “short” and long-term components of variability.
Assuming that the emission from separate stars is uncorrelated, we compute a measure of the relative departure from equality ($`\delta _{\perp }`$, Equation 1) for each combination of the above samples. We show that the values of $`\delta _{\perp }`$ thus derived are inconsistent with purely statistical variations, i.e., there is evidence for non-statistical variations in the observed count rates of the sample stars at the different epochs.
We model the detected variability as stochastic variability, and conclude that the “effective variability” (an average value of the fractional variation in the observed count rate) is apparently the lowest for the samples separated by the shortest timescales (RASSBSC-WGACAT: $`\frac{\mathrm{\Delta }I}{I}=0.32_{0.29}^{0.36}`$), and appears to be systematically larger for the samples separated by longer timescales (Einstein-RASSBSC: $`\frac{\mathrm{\Delta }I}{I}=0.46_{0.41}^{0.52}`$; Einstein-WGACAT: $`\frac{\mathrm{\Delta }I}{I}=0.38_{0.34}^{0.43}`$). This suggests the existence of a long-term component to the variability, but the evidence for such a component is marginal. If such a component exists, it could be due to stellar activity cycles strongly modified by X-ray emission arising due to relatively unmodulated small-scale fields generated by turbulent dynamos in the convective zone.
We model the long-term component as a sinusoidal cyclic variation atop a constant base emission, and constrain its strength by comparing simulated distributions of perpendicular deviations with observed distributions. We find that such a cyclic component, if it exists, may at most be 4 times as strong as the constant base emission. This contrasts with the Solar case, where cyclic activity causes an increase in the soft X-ray emission by factors $`\gtrsim 10`$ at activity maximum relative to the flux at activity minimum. We note earlier conclusions that the nature of coronal activity on active stars fits the scenario whereby the generation of magnetic fields whose dissipation is observed in the form of coronal heating and subsequent radiative loss is dominated by a turbulent or distributed dynamo, rather than by a solar-like $`\alpha \omega `$ large-scale field dynamo. This scenario is essentially the same as that suggested by, e.g., Weiss (1993, 1996), based on qualitative theoretical arguments. Comparisons of past and future observations of stellar coronal emission at different epochs, such as those analyzed here, for larger samples of stars with different activity levels and different spectral types, will be invaluable in distinguishing between different dynamo models.
We would like to thank Alanna Connors, Steve Saar, Brian Wood, Frank Primini, and the referee for useful comments. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. VK was supported by NASA grants NAG5-3173, NAG5-3189, NAG5-3195, NAG5-3196, NAG5-3831, NAG5-6755 and NAG5-7226 during the course of this research. JJD was supported by the AXAF Science Center NASA contract NAS8-39073.
|
no-problem/9905/cond-mat9905391.html
|
ar5iv
|
text
|
# Spin-Peierls Transition in CuGeO3: Critical, Tricritical or Mean Field?
## I Introduction
The spin-Peierls transition corresponds to the dimerization of a one-dimensional S = $`\frac{1}{2}`$ antiferromagnetic chain coupled to a three-dimensional elastic medium . Until relatively recently, spin-Peierls transitions had only been observed in organic charge transfer compounds such as copper bisdithiolene (TTF-CuBDT) . Experimental information obtainable in such systems has been limited both by the size of available single crystals and by the sensitivity of these materials to damage by x-rays or electrons. Nevertheless, some important information on the spin-Peierls phase transition has been obtained in a number of different organic materials. Interestingly, in most, if not all, cases the data are consistent with a simple BCS-type mean field transition .
Much more complete experimental work on the spin-Peierls transition has been made possible by the discovery that a structurally simple, inorganic chain compound copper germanate (CuGeO<sub>3</sub>) undergoes a spin-Peierls transition at a transition temperature around 14K . The crystal structure of CuGeO<sub>3</sub> is orthorhombic, space group Pbmm, with a unit cell of dimensions $`a=4.81`$ Å, $`b=8.47`$ Å and $`c=2.94`$ Å at room temperature . The Cu<sup>2+</sup> ion carries a spin $`S=\frac{1}{2}`$ and forms a (CuO<sub>2</sub>) chain with the neighboring Cu<sup>2+</sup> ions along the $`c`$-axis direction. The successive Cu<sup>2+</sup> S = $`\frac{1}{2}`$ spins are antiferromagnetically coupled through the superexchange interactions via the bridging oxygen atoms. Below the spin-Peierls transition temperature, $`T_{SP}`$, the dimerization of Cu-Cu pairs along the $`c`$-axis direction, accompanied by shifts of the bridging oxygen atoms in the $`ab`$ plane, gives rise to superlattice reflections at the $`(\frac{h}{2},k,\frac{l}{2})`$ ($`h,l`$: odd and $`k`$: integer) reciprocal-lattice positions . These have been observed in electron diffraction , x-ray , and elastic neutron scattering experiments. Using coarse resolution x-ray diffraction techniques, Pouget et al. have measured the pretransitional thermal lattice fluctuations whose correlation lengths diverge anisotropically with decreasing temperature in a manner consistent with mean field theory. These same fluctuations have been studied at high resolution using synchrotron x-ray diffraction techniques by Harris et al. . These latter authors observe within about 1K of $`T_{SP}`$ large length scale fluctuations with characteristic length scales about an order of magnitude longer than those characterizing the bulk critical fluctuations.
In spite of this large amount of work, it is still not agreed whether the observed transition behavior reflects mean field or critical behavior. Extant models include: a) tricritical to 3D Ising crossover behavior ; b) mean field behavior ; c) 3D XY with corrections to scaling , and, most exotically, d) a 2D XY to 3D XY crossover as $`T_{SP}`$ is approached . Harris et al. first argued that because of the one-component nature of the dimerization order parameter for a spin-Peierls phase transition, asymptotically the transition must be in the 3D Ising universality class. They argued further that, because of the coupling to the elastic strains, the precritical behavior should be tricritical-like. Similar conclusions, albeit based on different physical reasoning, were arrived at later by Werner and Gros . Proponents of 3D XY behavior typically argue that the copper and oxygen displacements are independent thence yielding a two-component order parameter . Implicitly, Harris et al. assume that all of the atomic displacements accompanying the spin-Peierls transition are linearly coupled thence reducing the system to a one-component order parameter. The 3D critical behavior models seem to be supported by measurements of the order parameter which for reduced temperatures $`2\times 10^{-3}<1-T/T_{SP}\le 5\times 10^{-2}`$ exhibits power law behavior $`(1-T/T_{SP})^\beta `$ with $`\beta =0.33\pm 0.02`$, in good agreement with both 3D Ising and XY values of $`\beta =0.325`$ and 0.345 respectively . The heat capacity data are equally well described by a 3D critical behavior model (Ising or XY) and by a mean field model with Gaussian fluctuations .
In this paper we present an alternative model for CuGeO<sub>3</sub>, namely a Landau-Ginzburg model incorporating a tricritical to mean field crossover. As we shall show, this model describes all available data very well with few adjustable parameters. The format of this paper is as follows. In Section II we introduce the model including its genesis in studies of critical phenomena in thermotropic liquid crystals systems. Section III presents an analysis of the available data for CuGeO<sub>3</sub> using this model. Finally, in Section IV we give a summary, our conclusions and suggestions for future experiments.
## II The Model
The conundrum described above for CuGeO<sub>3</sub> is reminiscent of a similar divergence of views which occurred in the interpretation of experiments on smectic A - smectic C phase transitions in thermotropic liquid crystal systems . In particular, in that case, measurements of the tilt order parameter typically reveal power law behavior $`\varphi \propto (1-T/T_{AC})^\beta `$ over the temperature range $`5\times 10^{-5}<(1-T/T_{AC})<5\times 10^{-3}`$ with $`\beta =0.36\pm 0.02`$. However, this divergence of views was resolved by Huang and Viner and Birgeneau et al. , who showed that all of the data, including the heat capacity, order parameter, and tilt susceptibility, were consistent with the predictions of a simple Landau model with an anomalously large 6th order term. Clearly, it is of interest to carry out a similar analysis of the available data for the spin-Peierls transition in CuGeO<sub>3</sub>.
For the Landau-Ginzburg model the free energy is given by
$$F=a\tau \varphi ^2+b\varphi ^4+c\varphi ^6+\mathrm{}+\frac{1}{2m_\alpha }|\nabla _\alpha \varphi |^2$$
(1)
where $`\tau =T/T_c-1`$.
With $`\tau _0=b^2/ac`$, standard calculations yield for the order parameter, $`\varphi `$, the specific heat, $`C`$, the susceptibility, $`\chi `$, and the correlation length, $`\xi _\alpha `$:
$`\varphi `$ $`=`$ $`(b/3c)^{1/2}[(1-3\tau /\tau _0)^{1/2}-1]^{1/2},\quad \tau <0`$ (2)
$`C`$ $`=`$ $`\{\begin{array}{cc}0\hfill & \hfill \tau >0\\ (a^2T/2bT_c^2)(1-3\tau /\tau _0)^{-1/2}\hfill & \hfill \tau <0\end{array}`$ (3)
$`\chi `$ $`=`$ $`1/(2a\tau ),\quad \tau >0`$ (4)
$`\xi _\alpha `$ $`=`$ $`(2am_\alpha \tau )^{-1/2},\quad \tau >0`$ (5)
with similar expressions for $`\tau <0`$ for $`\chi `$ and $`\xi `$. Eq. (2) and (3) are conveniently rewritten in the form
$`\varphi =\varphi _0\left[\left(1+3\frac{T_{SP}-T}{T_{SP}-T_{CR}}\right)^{1/2}-1\right]^{1/2},\quad \tau <0`$ (6)
$`C=\{\begin{array}{cc}0\hfill & \hfill \tau >0\\ C_{-}T\left(1+3\frac{T_{SP}-T}{T_{SP}-T_{CR}}\right)^{-1/2}\hfill & \hfill \tau <0\end{array}`$ (7)
where $`T_{CR}`$ is the crossover temperature from tricritical to mean field behavior. We note that in the above expressions the exponents are fixed and only the amplitudes and the two temperatures, $`T_{SP}`$ and $`T_{CR}`$, are variable. A log-log plot of Eq. (6) reveals that for the order parameter $`\varphi `$ the effective exponent $`\beta `$ crosses over gradually from $`\frac{1}{4}`$ to $`\frac{1}{2}`$ as $`T`$ varies from less than to greater than $`T_{CR}`$. In the smectic A - smectic C case the measurements span $`T_{CR}`$ and accordingly intermediate exponents, $`\beta \simeq 0.36`$, are found even though the actual transition is mean-field-like for temperatures in the immediate vicinity of $`T_{AC}`$ .
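To make the crossover explicit, the effective exponent $`\beta _{eff}=d\mathrm{ln}\varphi /d\mathrm{ln}|\tau |`$ implied by Eq. (6) can be evaluated numerically. The following minimal sketch (in Python, with an illustrative value of $`\tau _{CR}`$ anticipating the fit of Section III) shows the logarithmic slope moving from the mean field value 1/2 near $`T_{SP}`$ toward the tricritical value 1/4 once $`|\tau |`$ well exceeds $`\tau _{CR}`$:

```python
import numpy as np

# Effective exponent beta_eff = d ln(phi) / d ln(tau) for the crossover
# form of Eq. (6); the amplitude phi_0 drops out of the logarithmic slope.
tau_CR = 0.006                      # illustrative value (see Sec. III)
tau = np.logspace(-4, -1, 400)      # reduced temperature 1 - T/T_SP
phi = (np.sqrt(1.0 + 3.0 * tau / tau_CR) - 1.0) ** 0.5
beta_eff = np.gradient(np.log(phi), np.log(tau))

# ~0.49 deep in the mean field regime (tau << tau_CR); ~0.28 at tau = 0.1,
# approaching the tricritical value 1/4 for tau >> tau_CR.
print(beta_eff[0], beta_eff[-1])
```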
## III Analysis
We now apply this tricritical–mean field crossover model to CuGeO<sub>3</sub>. The first test is $`T_{SP}`$ itself or, more precisely, the ratio of the spin gap, $`\mathrm{\Delta }`$, to $`T_{SP}`$. In the mean field theory of Pytte , the spin-Peierls transition is BCS-like so that in the weak coupling limit $`2\mathrm{\Delta }/T_{SP}=3.5`$. In the charge transfer salts TTF - CuBDT , TTF - AuBDT , MEM - (TCNQ)<sub>2</sub> , and SBTTF - TCNQCl<sub>2</sub> this ratio is found to be 3.5, 3.7, 3.1 and approximately 3.5 respectively, in good agreement with the BCS value. Critical fluctuations, either Ising or XY in character, would act to increase this ratio. For CuGeO<sub>3</sub>, $`\mathrm{\Delta }=24.5`$K and $`T_{SP}\simeq 14`$K, implying $`2\mathrm{\Delta }/T_{SP}=3.5`$, consistent with a BCS mean field theory description . At the minimum, this value for $`2\mathrm{\Delta }/T_{SP}`$ argues against any quantitatively important effect of critical fluctuations on $`T_{SP}`$ in CuGeO<sub>3</sub>.
The behavior of the order parameter in CuGeO<sub>3</sub> is of particular importance since this observable appears to provide the strongest evidence for true critical rather than mean field or tricritical behavior. A number of groups have reported measurements of the temperature dependence of the order parameter in CuGeO<sub>3</sub> . The measured phase transition temperature $`T_{SP}`$ varies between 13.3K and 14.6K in different samples. Nevertheless, near-universal behavior is observed for the order parameter provided that it is plotted as a function of the reduced temperature $`T/T_{SP}`$. As noted above, fits of the order parameter $`\varphi (T/T_{SP})`$ for $`1-T/T_{SP}<0.05`$ to a single power law $`\varphi \propto (1-T/T_{SP})^\beta `$ all yield values of $`\beta =0.33\pm 0.02`$. As discussed by Gaulin and co-workers , inclusion of a correction-to-scaling multiplicative factor $`(1+B|\tau |^\delta )`$ in the expression for $`\varphi `$ both improves the goodness of fit and, not surprisingly, extends the range of validity of the fit.
We show in Fig. 1 our own measurements of the order parameter squared in a sample of CuGeO<sub>3</sub> with $`T_{SP}=14.6`$K. These data are consistent with those measured by both ourselves and other groups in a variety of samples . Fits to a single power law for $`|\tau |<0.04`$ yield $`\beta =0.314\pm 0.01`$. However, as noted by Harris et al. and as may be seen in Fig. 1, the data fall significantly below the power law curve for $`|\tau |>0.04`$. We show, in addition, in Fig. 1 the results of a fit to the tricritical to mean field crossover form, Eq. (6). This fit has only three adjustable parameters, $`\varphi _0^2`$, $`T_{CR}`$ and $`T_{SP}`$. This is the same number of parameters as in the single power law fits discussed above and two less than the number of adjustable parameters in fits to a power law with corrections-to-scaling with both $`B`$ and $`\delta `$ varied. (We note that Lumsden et al. fix $`\delta =1/2`$ whereas Lorenzo et al. allow $`\delta `$ to vary; the latter group find an optimum fit for $`\delta \simeq 1`$). It is evident that Eq. (6) describes the order parameter data extremely well over the complete range of temperatures. The fit yields $`\tau _{CR}=1-T_{CR}/T_{SP}=0.006\pm 0.001`$, implying that the crossover from tricritical to mean field behavior occurs at a quite small reduced temperature.
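For readers wishing to reproduce such fits, a minimal sketch of the three-parameter fit is given below. The arrays `T_data` and `phi2_data` stand for the measured temperatures and normalized superlattice-peak intensities and are assumed to be already loaded; these names, and the starting values, are ours and purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def phi2_crossover(T, phi0_sq, T_SP, T_CR):
    """Order parameter squared from Eq. (6); zero above T_SP."""
    t = np.clip((T_SP - T) / (T_SP - T_CR), 0.0, None)
    return phi0_sq * (np.sqrt(1.0 + 3.0 * t) - 1.0)

# T_data, phi2_data: temperatures and peak intensities (assumed loaded)
popt, pcov = curve_fit(phi2_crossover, T_data, phi2_data,
                       p0=[1.0, 14.6, 14.5])
phi0_sq, T_SP_fit, T_CR_fit = popt
print("tau_CR =", 1.0 - T_CR_fit / T_SP_fit)   # ~0.006 in our fit
```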
We now discuss the energy gap $`\mathrm{\Delta }`$. Using a simple scaling ansatz, Cross and Fisher argue that $`\mathrm{\Delta }\propto \varphi ^{2/3}`$. We show in Fig. 2 the data of Lorenzo et al. for the magnetic energy gap for $`T<T_{SP}`$ in a sample of CuGeO<sub>3</sub> with $`T_{SP}=14.4`$K. In part because of the apparent jump of $`\mathrm{\Delta }(T)`$ at $`T_{SP}`$, Lorenzo et al. interpret these data as indicating a 2D XY Kosterlitz-Thouless transition . In fact, these data are readily explained using the model of Cross and Fisher together with the tricritical-mean field crossover form for $`\varphi `$, Eq. (6). In this case we hold $`T_{SP}`$ fixed at $`T_{SP}=14.4`$K and set $`\tau _{CR}=0.006`$ as determined above, so that there is only one adjustable parameter, the overall amplitude $`\mathrm{\Delta }(0)`$. The result so obtained is shown in Fig. 2. It is evident that the tricritical-mean field model with $`\mathrm{\Delta }(T)\propto \varphi ^{2/3}`$ describes the measured gap energy $`\mathrm{\Delta }(T)`$ extremely well over a wide range of temperatures with only one adjustable parameter. Indeed, this is by far the best test to date of the Cross-Fisher model. We should note that this model cannot explain the inferred pseudogap above T<sub>SP</sub> . However, the “pseudogap” is deduced using a heuristic line-shape analysis which lacks a firm theoretical basis.
The specific heat in CuGeO<sub>3</sub> has proven to be the most difficult thermodynamic quantity to interpret unambiguously . This is, in part, because of the extreme sensitivity of the specific heat near $`T_{SP}`$ to sample inhomogeneities and, in part, because of the inevitable large number of adjustable parameters required to describe the critical specific heat in any physically relevant model. Fig. 3 shows high resolution magnetic specific heat $`(C_M)`$ data for a sample of CuGeO<sub>3</sub> with $`T_{SP}=14.24`$K from Lasjaunias and coworkers . Hegman et al. have carried out an extensive analysis of these data using both a mean field “BCS plus Gaussian fluctuation” model and a critical behavior model. They find that both models describe $`C_M`$ quite well in the immediate vicinity of $`T_{SP}`$, albeit at the cost of a rather large number of adjustable parameters. The critical behavior model fits give a value for the specific heat exponent, $`\alpha `$, near 0. On the other hand, the Gaussian fluctuation analysis implies that the true critical behavior is confined to the region $`|\tau |<0.0006`$.
Given the uncertainties connected with the fits described above, the best one can hope for is to determine whether or not the tricritical-mean field crossover model is consistent with the experimental results for $`C_M`$ shown in Fig. 3. First, it is evident that Eq. (7) will be inadequate since one must, at the minimum, include Gaussian fluctuations above $`T_{SP}`$. We therefore include the fluctuations above T<sub>SP</sub> in the simplest way possible by replacing Eq. (7) by
$$C_M=\{\begin{array}{cc}C_M^{+}T\left(1+3\frac{T-T_{SP}}{T_{SP}-T_{CR}}\right)^{-1/2}+\gamma T\hfill & \hfill \tau >0\\ C_M^{-}T\left(1+3\frac{T_{SP}-T}{T_{SP}-T_{CR}}\right)^{-1/2}+B_{-}\hfill & \hfill \tau <0\end{array}$$
(8)
where $`\gamma T`$ is the regular linear term for a 1D Heisenberg antiferromagnet and $`B_{-}`$ is the background term below $`T_{SP}`$. The background $`B_{-}`$ should, in general, be temperature dependent; however, given the narrow range of temperatures we consider, a constant background is adequate. Eq. (8) is closely similar to the BCS plus Gaussian fluctuation model considered by Hegman et al. since the Gaussian fluctuations give rise to a $`|\tau |^{-1/2}`$ contribution to $`C_M`$ both above and below $`T_{SP}`$. The solid lines in Fig. 3 correspond to fits to Eq. (8) with $`\tau _{CR}`$ fixed at 0.006 and $`C_M^+`$, $`C_M^{-}`$, $`\gamma `$, $`B_{-}`$ and $`T_{SP}`$ varied. Clearly Eq. (8) describes $`C_M`$ quite well; indeed the fit appears to be better than those for either of the models tested by Hegman et al. . The fit shown in Fig. 3 gives $`C_M^+/C_M^{-}=1.1\pm 0.13`$; this ratio is expected to be non-universal so it cannot be simply interpreted. We conclude, therefore, that the tricritical-mean field crossover model describes $`C_M`$ well, although not uniquely so.
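The piecewise form of Eq. (8) is straightforward to implement; the sketch below, with our own (hypothetical) parameter names, is the function whose amplitudes and $`T_{SP}`$ are varied in the fits, with $`\tau _{CR}=1-T_{CR}/T_{SP}`$ held fixed:

```python
import numpy as np

def C_M_model(T, T_SP, tau_CR, C_plus, C_minus, gamma, B_minus):
    """Specific heat crossover model of Eq. (8); tau_CR = 1 - T_CR/T_SP."""
    t = np.abs(T - T_SP) / T_SP                    # |tau|
    core = T * (1.0 + 3.0 * t / tau_CR) ** (-0.5)
    above = C_plus * core + gamma * T              # tau > 0: fluctuation tail
    below = C_minus * core + B_minus               # tau < 0: flat background
    return np.where(T > T_SP, above, below)
```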
Finally, we discuss the correlation length and the staggered susceptibility. Pouget et al. have found that the correlation length over a wide temperature range follows the behavior $`\xi \propto (T/T_{SP}-1)^{-1/2}`$, consistent with mean field theory; however, the number of data points in their experiment near $`T_{SP}`$ is sufficiently small that their results do not meaningfully differentiate between various theoretical models. Harris et al. have reported a high resolution synchrotron x-ray study of the critical fluctuations above $`T_{SP}`$ in CuGeO<sub>3</sub>. They find pretransitional lattice fluctuations within 1K above $`T_{SP}`$ whose length scale is about an order of magnitude longer than that characterizing the bulk thermal fluctuations. The line-shape of the large length scale fluctuations is consistent with a Lorentzian-squared form. The measured critical exponents are $`\nu =0.56\pm 0.09`$ and $`\overline{\gamma }=2.0\pm 0.3`$, where $`\overline{\gamma }`$ is the exponent characterizing the divergence of the disconnected staggered susceptibility . The mean field predictions for these exponents are $`\nu =1/2`$ and $`\overline{\gamma }=2\gamma =2`$ whereas for 3D Ising (XY) critical behavior one expects $`\nu =0.63(0.67)`$ and $`\overline{\gamma }=2.5(2.64)`$. Thus the Harris et al. data favor the tricritical-mean field model, but 3D Ising or XY critical models are not excluded. Precise measurements of the bulk staggered susceptibility using neutrons should yield accurate values for $`\nu `$ and $`\overline{\gamma }`$ and this, in turn, would definitively choose between the models.
## IV Discussion
In summary, each of the order parameter, magnetic energy gap, specific heat, correlation length and disconnected staggered susceptibility is well described by a simple Landau-Ginzburg model exhibiting a tricritical-mean field crossover near $`T_{SP}`$. Further, the ratio of the energy gap to $`T_{SP}`$ is consistent with the value for a BCS mean-field transition. We conclude, therefore, that CuGeO<sub>3</sub>, in common with the organic charge transfer salts, exhibits a mean field spin-Peierls transition for reduced temperatures $`|\tau |>0.001`$.
The principal remaining issue is the microscopic origin of the tricritical behavior. Harris et al. argue that this is caused by a diminution in the effective fourth order term in Eq. (1), $`b\varphi ^4`$, because of coupling to the macroscopic strain. It also seems possible that competing nearest and next-nearest neighbor exchange interactions along the chain could generate the tricritical instability . Specifically, Castilla et al. argue that the ratio of the next nearest neighbor to nearest neighbor exchange interaction along the chain is close to the critical value for spontaneous formation of a magnetic gap independent of coupling to the lattice. Heuristically, it seems that this could generate tricritical behavior in the phase diagram. Another possible source of tricritical behavior is competition between the Néel state and the spin-Peierls state, that is, competition between the coupling of the $`S=1/2`$ chain to the lattice and the interchain exchange coupling. Clearly, a multidimensional theoretical analysis of the spin-Peierls phase diagram including magnetostriction and competing intrachain exchange interactions, together with the interchain magnetic and elastic coupling, is required.
Of course, the mean field behavior itself in all of these spin-Peierls systems is not yet well understood. In TTF-CuBDT there is evidence for a soft phonon at very high temperatures and Cross and Fisher speculate that the precursive soft mode accounts for the large length scale underlying the mean field behavior. In CuGeO<sub>3</sub>, no soft phonon at all has yet been seen. Thus, the microscopic origin of the large length scale in CuGeO<sub>3</sub> remains to be elucidated.
Finally, it would be very interesting to see if the putative nearby tricritical point could be accessed by changing some variable such as pressure, uniaxial stress or doping. Masuda et al. have shown that replacement of Cu by Mg both depresses $`T_{SP}`$ and appears to drive the spin-Peierls transition first order. The concomitant tricritical point could well account for the observed tricritical-mean field crossover in pure CuGeO<sub>3</sub>. We note, however, that the actual physics of magnetic dilution in CuGeO<sub>3</sub> is quite complex since dilution introduces frustration of the interchain elastic interaction . Replacement of Cu<sup>2+</sup> by Cd<sup>2+</sup> (Ref. ) and of Ge<sup>4+</sup> by Ga<sup>4+</sup> (Ref. ) both lead to mean field behavior over quite wide temperature ranges; that is, doping with these ions moves CuGeO<sub>3</sub> away from the tricritical point into the pure mean field regime. Again, further research, both experimental and theoretical, is required to elucidate these effects more completely.
## V Acknowledgements
We thank N. Hegman and J. C. Lasjaunias for comments on the manuscript, for sending us their results for $`C_M`$ in tabular form and for a preprint of Ref. . We are grateful to C. W. Garland and N. Goldenfeld for insightful communications on the interpretation of the critical heat capacity. We thank B. D. Gaulin for valuable comments on this paper and for drawing our attention to Ref. . This work was supported by the NSF under Grant No. DMR97-04532 and by the MRSEC Program of the NSF under award No. DMR98-08941.
# Filling dependence of the Mott transition in the degenerate Hubbard model
## I Introduction
The Hubbard Hamiltonian is a simple model for studying strongly interacting systems. In particular it is used to investigate the Mott-Hubbard metal-insulator transition in half-filled systems. It is clear that for strong correlations such a system should be insulating, since in the atomic limit the states with exactly one electron per lattice site are energetically favored, while all other states are separated from those by a Coulomb gap. For a generalized Hubbard model with degenerate orbitals the same argument implies that for strong correlations not only the half-filled, but all integer filled systems will become Mott-Hubbard insulators. It is then natural to ask how the location of the transition depends on the filling.
As an example we consider a Hamiltonian describing the alkali doped Fullerides. It comprises the three-fold degenerate $`t_{1u}`$ orbital and the Coulomb interaction $`U`$ between the electrons on the same molecule. Using this Hamiltonian, we have recently shown that, although $`U`$ is substantially larger than the band width $`W`$, K<sub>3</sub>C<sub>60</sub> is not a Mott insulator but a (strongly correlated) metal. Prompted by the synthesis of an isostructural family of doped Fullerenes A<sub>n</sub>C<sub>60</sub> with different fillings $`n`$, we now address the question of the Mott transition in integer doped Fullerides. For these systems we have the interesting situation that for fillings $`n=`$ 1, 2, 3, 4, and 5 calculations in the local density approximation predict them all to be metallic, while in Hartree-Fock they all are insulators. Performing quantum Monte Carlo calculations for the degenerate Hubbard model at different fillings and for values of $`U`$ typical for the Fullerides, we find that all the systems are close to a Mott transition, with the critical correlation strength $`U_c`$ at which the transition takes place strongly depending on the filling $`n`$. More generally, our results show how, for an otherwise identical Hamiltonian, the location of the Mott transition $`U_c`$ depends on the filling. $`U_c`$ is largest at half-filling and decreases for fillings smaller or larger than half. We contrast these findings with the results from Hartree-Fock calculations which predict a much too small $`U_c`$ and show almost no doping dependence. We give an interpretation of the results of the quantum Monte Carlo calculations extending the hopping argument introduced in Ref. to arbitrary integer fillings. Despite the crudeness of the argument it explains the doping dependence found in quantum Monte Carlo. We therefore believe that our simple hopping argument captures the basic physics of the doping dependence of the Mott transition in degenerate systems.
In section II we introduce the model Hamiltonian for doped Fullerenes with a three-fold degenerate $`t_{1u}`$ band. We discuss the fixed-node approximation used in the diffusion Monte Carlo calculations, present the results of our quantum Monte Carlo calculations, and contrast them to the result of Hartree-Fock calculations. Section III gives an interpretation of the results of our calculations in terms of intuitive hopping arguments. We introduce the many-body enhancement of the hopping matrix elements, which explains how orbital degeneracy $`N`$ helps to increase the critical $`U`$ at which the Mott transition takes place and we analyze how frustration leads to an asymmetry of the critical $`U`$ for fillings $`n`$ and $`2Nn`$. A summary in Sec. IV closes the presentation.
## II Model calculations
### A Model Hamiltonian
Solid C<sub>60</sub> is characterized by a very weak inter-molecular interaction. Therefore the molecular levels merely broaden into narrow, well separated bands. The conduction band originates from the lowest unoccupied molecular orbital, the 3-fold degenerate $`t_{1u}`$ orbital. To get a realistic, yet simple description of the electrons in the $`t_{1u}`$ band, we use a Hubbard-like model that describes the interplay between the hopping of the electrons and their mutual Coulomb repulsion:
$$H=\underset{\langle ij\rangle }{\sum }\underset{mm^{\prime }\sigma }{\sum }t_{im,jm^{\prime }}c_{im\sigma }^{\dagger }c_{jm^{\prime }\sigma }^{}+U\underset{i}{\sum }\underset{(m\sigma )<(m^{\prime }\sigma ^{\prime })}{\sum }n_{im\sigma }n_{im^{\prime }\sigma ^{\prime }}.$$
(2)
The sum $`\langle ij\rangle `$ is over nearest-neighbor sites of an fcc lattice. The hopping matrix elements $`t_{im,jm^{\prime }}`$ between orbital $`m`$ on molecule $`i`$ and orbital $`m^{\prime }`$ on molecule $`j`$ are obtained from a tight-binding parameterization. The molecules are orientationally disordered, and the hopping integrals are chosen such that this orientational disorder is included. The band width for the infinite system is $`W=0.63`$ eV. The on-site Coulomb interaction is $`U\simeq 1.2`$ eV. The model neglects multiplet effects, but we remark that these tend to be counteracted by the Jahn-Teller effect, which is also not included in the model.
We will investigate the above Hamiltonian for different integer fillings $`n`$ of the $`t_{1u}`$ band. The corresponding Hamiltonians describe a hypothetical family of doped Fullerides A<sub>n</sub>C<sub>60</sub> with space group Fm$`\overline{3}`$m, i.e. an fcc lattice with orientationally disordered C<sub>60</sub> molecules. In the calculations we use the on-site Coulomb interaction $`U`$ as a parameter to drive the system across the Mott transition.
### B Quantum Monte Carlo method
As the criterion for determining the metal-insulator transition we use the opening of the gap
$$E_g=E(N+1)-2E(N)+E(N-1),$$
(3)
where $`E(N)`$ denotes the total energy of a cluster of $`N_{mol}`$ molecules with $`N`$ electrons in the $`t_{1u}`$ band. Since we are interested in integer filled systems, $`N=nN_{mol}`$, with $`n`$ an integer. For calculating the energy gap (3) we then have to determine ground-state energies for the Hamiltonian (2). This is done using quantum Monte Carlo. Starting from a trial function $`|\mathrm{\Psi }_T\rangle `$ we calculate
$$|\mathrm{\Psi }^{(n)}\rangle =[1-\tau (H-w)]^n|\mathrm{\Psi }_T\rangle ,$$
(4)
where $`w`$ is an estimate of the ground-state energy. The $`|\mathrm{\Psi }^{(n)}\rangle `$ are guaranteed to converge to the ground state $`|\mathrm{\Psi }_0\rangle `$ of $`H`$, if $`\tau `$ is sufficiently small and $`|\mathrm{\Psi }_T\rangle `$ is not orthogonal to $`|\mathrm{\Psi }_0\rangle `$. Since we are dealing with Fermions, the Monte Carlo realization of the projection (4) suffers from the sign-problem. To avoid the exponential decay of the signal-to-noise ratio we use the fixed-node approximation. For lattice models this involves defining an effective Hamiltonian $`H_{\mathrm{eff}}`$ by deleting from $`H`$ all nondiagonal terms that would introduce a sign-flip. Thus, by construction, $`H_{\mathrm{eff}}`$ is free of the sign-problem. To ensure that the ground-state energy of $`H_{\mathrm{eff}}`$ is an upper bound of the ground-state energy of the original Hamiltonian $`H`$, for each deleted hopping term an on-site energy is added in the diagonal of $`H_{\mathrm{eff}}`$. Since $`|\mathrm{\Psi }_T\rangle `$ is used for importance sampling, $`H_{\mathrm{eff}}`$ depends on the trial function. Thus, in a fixed-node diffusion Monte Carlo calculation for a lattice Hamiltonian, we choose a trial function and construct the corresponding effective Hamiltonian, for which the ground-state energy $`E_{\mathrm{FNDMC}}`$ can then be determined by diffusion Monte Carlo without a sign-problem.
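To illustrate the projection (4), the sketch below applies the operator $`[1-\tau (H-w)]`$ deterministically to a small dense matrix; in the actual fixed-node calculation the same projection is realized stochastically by the diffusion Monte Carlo random walk applied to $`H_{\mathrm{eff}}`$. The code is a schematic illustration, not our production implementation:

```python
import numpy as np

def project_ground_state(H_eff, psi_T, tau=0.01, n_steps=5000):
    # Repeatedly apply [1 - tau*(H - w)], Eq. (4); converges to the
    # ground state of H_eff for sufficiently small tau, provided the
    # trial state has non-zero overlap with it.
    psi = psi_T / np.linalg.norm(psi_T)
    for _ in range(n_steps):
        w = psi @ H_eff @ psi          # running ground-state energy estimate
        psi = psi - tau * (H_eff @ psi - w * psi)
        psi /= np.linalg.norm(psi)
    return w, psi
```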
For the trial function we make the Gutzwiller Ansatz
$$|\mathrm{\Psi }(U_0,g)\rangle =g^D|\mathrm{\Phi }(U_0)\rangle ,$$
(5)
where the Gutzwiller factor reflects the Coulomb term $`UD=U\sum _i\sum _{(m\sigma )<(m^{\prime }\sigma ^{\prime })}n_{im\sigma }n_{im^{\prime }\sigma ^{\prime }}`$ in the Hamiltonian (2). $`|\mathrm{\Phi }(U_0)\rangle `$ is a Slater determinant that is constructed by solving the Hamiltonian in the Hartree-Fock approximation, replacing $`U`$ by a variational parameter $`U_0`$. Details on the character of such trial functions and the optimization of Gutzwiller parameters can be found in Ref. .
To check the accuracy of the fixed-node approximation, we have determined the exact ground-state energies for a (small) cluster of four C<sub>60</sub> molecules using the Lanczos method. For systems with different on-site Coulomb interaction (Table I) and varying number of electrons (Table II), we consistently find that the results of fixed-node diffusion Monte Carlo are only a few $`meV`$ above the exact energies.
### C Quantum Monte Carlo results
Since the quantum Monte Carlo calculations are for finite clusters of $`N_{mol}`$ molecules, we have to extrapolate the calculated energy gaps to infinite system size. An obvious finite-size effect is the fact that the one-particle spectrum is discrete, hence there can be a gap, even for $`U=0`$. Furthermore, in evaluating (3), we add and subtract one electron to a finite system. Even if we distribute the extra charge uniformly over all molecules, there will be an electrostatic contribution of $`U/N_{mol}`$ to the gap. We therefore introduce
$$E_G=E_g-E_g(U=0)-\frac{U}{N_{mol}}.$$
(6)
These corrections are expected to improve the finite-size extrapolation. In practice they turn out to be quite small. For a cluster of 32 C<sub>60</sub> molecules, e.g., $`E_g(U=0)`$ is typically already less than 10 meV. In the thermodynamic limit both correction terms vanish, as they should.
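In practice the corrected gap is assembled from a handful of numbers per cluster size; a trivial bookkeeping helper (ours, for illustration) reads:

```python
def corrected_gap(E_plus, E_0, E_minus, E_g_U0, U, N_mol):
    """Finite-size corrected gap, Eqs. (3) and (6): subtract the U = 0
    discreteness gap and the electrostatic charging term U/N_mol."""
    E_g = E_plus - 2.0 * E_0 + E_minus   # Eq. (3)
    return E_g - E_g_U0 - U / N_mol      # Eq. (6)
```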
The results of the quantum Monte Carlo calculations are shown in Fig. 1. Plotting the finite-size corrected gap $`E_G`$ for different values of the Coulomb interaction $`U`$ versus the inverse system size $`1/N_{mol}`$, we read off where the gap starts to open. For the system with one electron per molecule the gap opens around $`U_c\simeq 0.75`$–$`1.00`$ eV. At filling 2 the transition takes place later, at $`U_c\simeq 1.25`$–$`1.50`$ eV. For both filling 3 and filling 4 we find the largest critical $`U`$: $`U_c\simeq 1.50`$–$`1.75`$ eV. For the system with 5 electrons per molecule the gap opens around $`U_c\simeq 1.00`$–$`1.25`$ eV. The results are summarized in Fig. 2. Thus we find that, for an otherwise identical Hamiltonian, the critical $`U`$ for the Mott transition depends strongly on the filling. $`U_c`$ is largest at half-filling and decreases away from half-filling. The decrease in $`U_c`$ is, however, not symmetric around half-filling. It is more pronounced for fillings $`<3`$ than for fillings $`>3`$.
We note that the opening of the gap is accompanied by a change in the character of that trial function which yields the lowest energy in the fixed-node approximation. For small $`U`$, where the system is still in the metallic regime, paramagnetic trial functions with small $`U_0`$ (see the discussion after eqn. (5)) are best. When the gap starts to open, trial functions with larger $`U_0`$, which have antiferromagnetic character, give lower energies. The corresponding Slater determinants $`|\mathrm{\Phi }(U_0)`$ describe a Mott insulator in Hartree-Fock approximation.
### D Hartree-Fock calculations
It is instructive to compare the results of the quantum Monte Carlo calculations with the predictions of Hartree-Fock theory. Figure 3 shows the gap $`E_g`$ calculated for the Hamiltonian (2) within the Hartree-Fock approximation for the different integer fillings. Compared with quantum Monte Carlo, the gap opens much too early, around $`U\simeq 0.4`$ eV ($`U/W\simeq 0.65`$). Furthermore, there is only a very weak doping dependence: $`U_c`$ somewhat increases with the filling, in qualitative disagreement with the quantum Monte Carlo results. This failure is a direct consequence of the mean-field approximation. In Hartree-Fock the only way to avoid multiple occupancies of the molecules, in order to reduce the Coulomb repulsion, is to renormalize the on-site energies of the orbitals, thereby localizing the electrons in certain orbitals. For the Hamiltonian (2) this on-site energy is, apart from a trivial offset, given by $`\epsilon _{im\sigma }=U(\sum _{m^{\prime }\sigma ^{\prime }}\langle n_{im^{\prime }\sigma ^{\prime }}\rangle -\langle n_{im\sigma }\rangle )`$. Lowering the Coulomb energy in this way will, however, increase the kinetic energy. For small changes in the on-site energies this increase will scale like the inverse of the density of states at the Fermi level. This suggests that the critical $`U`$ should be larger the smaller the density of states at the Fermi level is. Inspecting the density of states $`N(\epsilon )`$ for the non-interacting Hamiltonian (see e.g. Fig. 3 of Ref. ), we find that this is indeed the case: $`N(\epsilon )`$ slightly decreases with filling, explaining the corresponding increase in $`U_c`$. Hence the weak, but qualitatively wrong, doping dependence in Hartree-Fock can be understood as an effect of the small variation in the density of states of the non-interacting system.
## III Interpretation
### A Hopping enhancement
To find a simple interpretation for the doping dependence of the Mott transition we consider the limit of large Coulomb interaction $`U`$. In that limit the Coulomb energy dominates and we can estimate the energies entering the gap equation (3) by considering electron configurations in real space. According to the Hamiltonian (2) the contribution to the Coulomb energy from a molecule that is occupied by $`m`$ electrons is $`Um(m-1)/2`$. Thus the energy of a system with filling $`n`$ is minimized for configurations with exactly $`n`$ electrons per molecule. The hopping of an electron to a neighboring molecule would cost the Coulomb energy $`U`$ and is therefore strongly suppressed in the large-$`U`$ limit. The energy for a cluster of $`N_{mol}`$ molecules with $`N=nN_{mol}`$ electrons (filling $`n`$) is then given by
$$E(N)=\frac{n(n-1)}{2}N_{mol}U+𝒪(t^2/U),$$
(7)
where $`t`$ is a typical hopping matrix element. Adding an extra electron increases the Coulomb energy by $`nU`$, removing an electron reduces it by $`(n-1)U`$. But there will also be a kinetic contribution to the energy $`E(N\pm 1)`$, since the extra charge can hop without any additional cost in Coulomb energy. To estimate the kinetic energy we calculate the matrix element for the hopping of the extra charge to a neighboring molecule. This matrix element will of course depend on the arrangement of the other $`N`$ electrons. It is well known that for the non-degenerate Hubbard model a ferromagnetic arrangement of the spins is energetically favored (Nagaoka’s theorem), allowing the extra charge to hop without disturbing the background spins. For a degenerate Hubbard model, however, the hopping matrix element is larger e.g. for an antiferromagnetic arrangement of the background spins. This is illustrated in Fig. 4 a) for an extra electron in a system with filling 2. Now, instead of only the extra electron, any one out of the three equivalent electrons can hop to the neighboring molecule. Denoting the state with the extra electron on molecule $`i`$ by $`|i\rangle `$, we find that the second moment of the Hamiltonian $`\langle i|H^2|i\rangle `$ is given by the number of hopping channels $`k`$ (in the present case $`k=3`$) times the number of (equivalent) nearest neighbors $`Z`$ times the square of the single-electron hopping matrix element $`t`$. Thus by inserting the identity in the form $`\sum _j|j\rangle \langle j|`$, where $`|j\rangle `$ denotes the state where any one of the electrons has hopped from molecule $`i`$ to the neighboring molecule $`j`$, we find
$$\langle i|H|j\rangle =\sqrt{k}t,$$
(8)
i.e. the hopping matrix element is enhanced by a factor of $`\sqrt{k}`$ over the one-particle hopping matrix element $`t`$. In a similar way we find for the system with an extra hole (Fig. 4 b) a hopping enhancement of $`\sqrt{k}`$ with $`k=2`$. The hopping enhancements for other fillings are listed in Table III, where $`k_{-}`$ denotes the enhancement for a system with an extra hole, and $`k_+`$ is for a system with an extra electron.
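The $`\sqrt{k}`$ enhancement of Eq. (8) can be verified in a few lines: couple the state $`|i\rangle `$ by the one-electron matrix element $`t`$ to the $`k`$ degenerate states in which any one of the $`k`$ equivalent electrons has hopped to a given neighboring molecule. A schematic check for a single neighbor:

```python
import numpy as np

t, k = 1.0, 3                     # k = 3 hopping channels, cf. Fig. 4 a)
H = np.zeros((k + 1, k + 1))
H[0, 1:] = H[1:, 0] = t           # |i> coupled to the k hopped states
evals = np.linalg.eigvalsh(H)
print(evals.min(), evals.max())   # -sqrt(k)*t and +sqrt(k)*t
```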
For a single electron the kinetic energy is of the order of $`W/2`$, where $`W`$ is the one-electron band width. The enhancement factor $`\sqrt{k}`$ in the many-body case then suggests that the kinetic energy for the extra charge is correspondingly enhanced, implying
$`E(N+1)`$ $`\simeq `$ $`E(N)+nU-\sqrt{k_+}W/2`$
$`E(N-1)`$ $`\simeq `$ $`E(N)-(n-1)U-\sqrt{k_{-}}W/2.`$
Combining these results we find
$$E_g\simeq U-\frac{\sqrt{k_+}+\sqrt{k_{-}}}{2}W,$$
(9)
i.e. the hopping enhancement leads to a reduction of the gap described by the factor $`c=(\sqrt{k_+}+\sqrt{k_{-}})/2`$. This reduction is largest ($`c\simeq 1.73`$) for $`n=3`$, and becomes smaller away from half-filling: $`c\simeq 1.57`$ for $`n=2,\mathrm{\hspace{0.17em}4}`$, and $`c\simeq 1.21`$ for fillings 1 and 5. Extrapolating (9) to intermediate $`U`$ we find that the gap opens for $`U`$ larger than $`U_c=cW`$. Therefore the above argument predicts that the critical $`U`$ for the Mott transition depends strongly on the filling, with $`U_c`$ being largest at half-filling and decreasing away from half-filling. This is qualitatively the same behavior as we have found in the Monte Carlo calculations. We note, however, that the argument we have presented is not exact. First, the hopping of an extra charge against an antiferromagnetically ordered background will leave behind a trace of flipped spins. Therefore the analogy with the one-electron case for determining the kinetic energy in the large-$`U`$ limit is not exact. Second, using (9) for determining $`U_c`$ involves extrapolating the results obtained in the limit of large $`U`$ to intermediate values of the Coulomb interaction, where the Mott transition takes place. Finally, considering only one nearest neighbor in the hopping argument (cf. Fig. 4) implicitly assumes that we are dealing with a bipartite lattice, where all nearest neighbors are equivalent.
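For reference, the reduction factors and the resulting estimates $`U_c=cW`$ can be tabulated directly; the $`(k_{-},k_+)`$ values below are those consistent with Table III, the enhancements quoted above, and the electron-hole symmetry $`k_+(n)=k_{-}(2N-n)`$ of the next subsection:

```python
import numpy as np

W = 0.63  # eV, t1u band width of the infinite system
k_minus = {1: 1, 2: 2, 3: 3, 4: 3, 5: 2}   # extra hole
k_plus  = {1: 2, 2: 3, 3: 3, 4: 2, 5: 1}   # extra electron
for n in range(1, 6):
    c = 0.5 * (np.sqrt(k_plus[n]) + np.sqrt(k_minus[n]))
    # c = 1.21, 1.57, 1.73, 1.57, 1.21 for n = 1..5; U_c = c*W
    print(n, round(c, 2), round(c * W, 2))
```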
### B Origin of the asymmetry
To analyze the asymmetry in the gaps around half-filling we use the following exact relation for the kinetic energy in the limit of infinite $`U`$, which follows from an electron-hole transformation
$$T_{\genfrac{}{}{0pt}{}{min}{max}}(nN_{mol}\pm 1)=-T_{\genfrac{}{}{0pt}{}{max}{min}}((2N-n)N_{mol}\mp 1).$$
(10)
(Note how this symmetry is reflected in the hopping enhancements shown in Table III.) Since the gap for filling $`n`$ is given by
$$E_g(n)=U+T_{min}(nN_{mol}-1)+T_{min}(nN_{mol}+1),$$
the asymmetry $`\mathrm{\Delta }=E_g(n)-E_g(2N-n)`$ in the gaps can be written entirely in terms of energies for systems with an extra electron:
$$\mathrm{\Delta }=\begin{array}{ccc}-T_{max}((2N-n)N_{mol}+1)& +& T_{min}(nN_{mol}+1)\\ -T_{min}((2N-n)N_{mol}+1)& +& T_{max}(nN_{mol}+1).\end{array}$$
For a bipartite system the spectrum for a given filling will be symmetric, in particular $`T_{min}+T_{max}=0`$, and thus there will be no asymmetry in the gaps: $`\mathrm{\Delta }=0`$. Frustration breaks this symmetry. To study the effect of frustration we perform a Lanczos calculation in the large-$`U`$ limit, starting from a configuration $`|v_0\rangle `$ of the type shown in Fig. 4. The leading effect of frustration is given by the third moment, which already enters after the first Lanczos step. Diagonalizing the Lanczos matrix and expressing everything in terms of the moments of the Hamiltonian, the extreme eigenvalues are given by
$$\epsilon _{\genfrac{}{}{0pt}{}{max}{min}}=\frac{A_3\pm \sqrt{4A_2^3+A_3^2}}{2A_2},$$
(11)
where $`A_k=\langle v_0|H^k|v_0\rangle `$ denotes the $`k^{th}`$ moment of $`H`$, and $`A_1=\langle v_0|H|v_0\rangle =0`$ for a state like in Fig. 4. From this expression it is clear that the “band width” $`\epsilon _{max}-\epsilon _{min}`$ is essentially given by the second moment, and that an enhancement of $`A_2`$ by a factor of $`k`$ leads to an increase in the band width by a factor of $`\sqrt{k}`$, as already described above. The main effect of the third moment (i.e. of frustration) is to shift the extremal eigenvalues, where the shift is determined by the third moment.
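Numerically, Eq. (11) makes the role of the two moments transparent; a minimal illustration with arbitrary moment values:

```python
import numpy as np

def extreme_eigenvalues(A2, A3):
    """Extremal eigenvalues of the one-step Lanczos matrix, Eq. (11),
    for A_1 = 0; A2 sets the band width, A3 shifts the band edges."""
    disc = np.sqrt(4.0 * A2 ** 3 + A3 ** 2)
    return (A3 - disc) / (2.0 * A2), (A3 + disc) / (2.0 * A2)

print(extreme_eigenvalues(3.0, 0.0))  # symmetric band edges -+sqrt(A2)
print(extreme_eigenvalues(3.0, 2.0))  # A3 > 0 shifts both edges upward
```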
To get a contribution to the third moment the initial state $`|v_0`$ must be recovered after three hops. This is only possible if the extra charge hops around a triangle, without changing spins along its path. For a state with an extra electron this means that one and the same electron has to perform the triangular hop. Therefore, even in the many-body case, for each such electron we get the same contribution to the third moment as in the single electron case. It therefore makes sense to write the third moment $`A_3(n)`$ for a system with $`nN_{mol}+1`$ electrons in terms of the third moment $`A_3^s`$ of the single electron problem: $`A_3(n)=\kappa _+(n)A_3^s`$, where $`\kappa _+(n)`$ describes the many-body effects, just like we introduced $`k_+(n)`$ to describe the many-body enhancement of the second moment. Using these definitions we find that the size of the asymmetry $`\mathrm{\Delta }`$ in the gaps can be estimated by the doping dependence of the (positive) enhancement factors $`\kappa _+(n)`$ of the third moment, while the overall sign is determined by the single-electron moments:
$$\mathrm{\Delta }\sim \left(\frac{\kappa _+(n)}{k_+(n)}-\frac{\kappa _+(2N-n)}{k_+(2N-n)}\right)\frac{A_3^s}{A_2^s}.$$
(12)
To understand the doping dependence of $`\kappa _+/k_+`$ we proceed in two steps. First we observe that the upper limit for the number of different electrons that can perform a triangular hop is given by the number $`k_+`$ of electrons that can hop to a nearest neighbor. Hence, if frustration is not suppressed, $`\kappa _+/k_+=1`$. For filling $`n=1`$, $`N\ge 2`$, this upper limit can always be achieved without compromising large next-neighbor hopping by arranging the electrons in such a way as to avoid each other. This is shown in Fig. 5. For the corresponding filling $`2N-1`$ the electrons can no longer be completely separated in that way. Thus the channel for triangular hops will be blocked by the Pauli principle, reducing $`\kappa _+/k_+`$. In that way frustration is reduced for the larger fillings.
This reduction of frustration can already be seen in the simple model of a triangle with orbital degeneracy $`N=2`$ (cf. Fig. 5). Choosing matrix elements $`t=1`$ for hopping between like orbitals, we find for filling $`n=1`$ a strong asymmetry, $`T_{min}(3n+1)=-2`$ and $`T_{max}(3n+1)=+4`$, while at filling $`n^{}=2N-n=3`$ there is no asymmetry in the extremal eigenvalues: $`T_{\genfrac{}{}{0pt}{}{max}{min}}(3n^{}+1)=\pm 2`$. We note that flipping one spin in the configuration for filling $`n=3`$ would allow for a triangular hop. In a Lanczos calculation this spin polarized configuration gives, however, only extremal eigenvalues $`T_{min}=-2`$ and $`T_{max}=+1`$. The states described here for a triangle can be easily adapted to the situation in an fcc lattice, where the third moment involves hopping to the nearest neighbor sites, which form connected triangles.
From the non-interacting density of states for our model of the doped Fullerenes (cf. e.g. Fig. 3 of Ref. ) we see that both $`\epsilon _{min}`$ and $`\epsilon _{max}`$ are shifted upwards, compared to the center of the band; hence, looking at (11), we find that for a single electron the third moment is positive: $`A_3^s>0`$. Together with the reduction of the frustration for larger filling, we therefore expect from (12) that for the alkali doped Fullerenes $`E_g(n)>E_g(2N-n)`$; i.e. $`U_c(n)<U_c(2N-n)`$, as is found in the Monte Carlo calculations.
## IV Summary
Using quantum Monte Carlo, we have analyzed a model of alkali-doped Fullerenes and found that the Mott transition strongly depends on the (integer) filling $`n`$. $`U_c`$ is largest for $`n=3`$ and decreases away from half-filling. This result is qualitatively different from both the results of density functional calculations in the local density approximation and the results of Hartree-Fock calculations. The doping dependence of the Mott transition can be understood in terms of a simple hopping argument. The key observation is that, due to the orbital degeneracy, there are more hopping channels in the many-body than in the single-body case, thus leading to the degeneracy enhancement $`\sqrt{k}`$ discussed above. In addition, due to frustration, the gaps are not symmetric around half-filling.
The Gutzwiller approximation for a paramagnetic state also predicts a degeneracy enhancement. For a half-filled system, the predicted enhancement is, however, linear in the degeneracy $`(N+1)`$ instead of $`\sqrt{N}`$ as suggested by the hopping argument of Sec. III and as also found in infinite dimensions. The results of the Gutzwiller approximation are reproduced by a slave-boson calculation in the saddle-point approximation. In dynamical mean-field theory a degeneracy enhancement and a reduction of $`U_c`$ away from half-filling, similar to our result, is found.
## Acknowledgments
This work has been supported by the Alexander-von-Humboldt-Stiftung under the Feodor-Lynen-Program and the Max-Planck-Forschungspreis, and by the Department of Energy, grant DEFG 02-96ER45439.
# Searching for the non-gaussian signature of the CMB secondary anisotropies
## 1 Introduction
The Cosmic Microwave Background (CMB) is a powerful tool for cosmology. As the CMB temperature anisotropies represent the superposition of primary (before matter-radiation decoupling) and secondary (after decoupling) fluctuations, the study of the anisotropies gives a direct insight into both the early Universe (and initial conditions) and the formation and evolution of cosmic structures. One of the goals of cosmology is to characterise the initial density perturbations which gave rise to those structures: galaxies and galaxy clusters. The statistical properties of the initial perturbations provide part of the necessary information for this characterisation. They can indeed be used to test and constrain the cosmological models and scenarios of structure formation. The angular power spectrum of the temperature fluctuations is one of the most important statistical quantities for CMB anisotropy studies. In fact, it allows the evaluation of the main cosmological parameters ($`\mathrm{\Omega }_b`$, $`\mathrm{\Omega }_0`$, $`\mathrm{\Lambda }`$, $`n`$, …) defining our Universe (Jungman et al. 1996). Some of the first constraints on the cosmological parameters came from CMB anisotropy measurements made by the COBE satellite (Smoot et al. 1992; Wright et al. 1992). The statistical properties of the CMB anisotropies give us information, in particular, on the physical process at the origin of the initial density fluctuations. Two classes of scenario account for the initial seeds of the structures. One is the “inflationary model” (Guth 1981; Linde 1982) in which the density perturbations result from the quantum fluctuations of scalar fields in the very early Universe. The other invokes the topological defects which themselves correspond to symmetry breaking in the unified theory (cosmic string, textures) (Vilenkin 1985; Bouchet 1988; Stebbins 1988; Turok 1989; Pen et al. 1994). Several studies have shown that the two scenarios predict different angular power spectra (Coulson et al. 1994; Albrecht et al. 1996; Magueijo et al. 1996). These differences of amplitude and/or shape represent rather tight constraints on the models. The statistical nature of the primary density perturbations, and hence their origin, is also encompassed within the distribution of the CMB anisotropies. The brightness, or temperature, distribution is indeed directly induced by the primeval mass or density distribution. If the initial perturbations result from an inflationary process the primary anisotropy distribution is gaussian. If the perturbations are generated by topological defects the anisotropy distribution is non-gaussian. The latter predict very specific patterns distinguishable from a gaussian random field. It is thus necessary to find statistical methods to test non-gaussianity and to separate primary and secondary non-gaussianity.
Several studies have been performed to test the CMB gaussianity. Traditional methods use the brightness or temperature distribution and its $`n`$th order moments or cumulants (Ferreira et al. 1997). Other methods are based on the n-point correlation functions or their spherical harmonic transforms (Luo & Schramm 1993; Magueijo 1995; Kogut et al. 1996; Ferreira & Magueijo 1997; Ferreira et al. 1998; Heavens 1998; Spergel & Goldberg 1998). Non-gaussianity can also be tested through topological discriminators based on pattern statistics (Coles 1988; Gott et al. 1990). Alternative methods test the non-gaussianity in the Fourier or wavelet space (Ferreira & Magueijo 1997; Hobson et al. 1998; Forni & Aghanim 1999).
In addition to the intrinsic statistical properties of the CMB anisotropies, the secondary fluctuations associated with cosmic structures (e.g., galaxies and galaxy clusters) induce non-gaussian signatures which could originate from point-like sources, peaked profiles, or from geometrical characteristics such as sharp edges or specific patterns. Future high sensitivity and high resolution CMB observations (e.g., MAP<sup>1</sup><sup>1</sup>1http://map.gsfc.nasa.gov/ and Planck Surveyor<sup>2</sup><sup>2</sup>2http://astro.estec.esa.nl/SA-general/Projects/Planck/ satellites) will provide data sets which should allow detailed tests of the primary anisotropy distribution. A detailed study of the non-gaussianity associated with secondary sources could be used to discriminate between the inflationary and topological defect models.
The present study deals with this first step: to predict and to specify the non-gaussian signature of the secondary anisotropies arising from the scattering of CMB photons by the ionised matter in the Universe. We apply the statistical discriminators developed in Forni & Aghanim (1999) to combinations of gaussian primary and secondary non-gaussian anisotropies. We take into account the contribution of a population of galaxy clusters through the Sunyaev-Zel’dovich (SZ) effect (Sunyaev & Zel’dovich 1980) as well as the effect of a spatially inhomogeneous re-ionisation of the Universe (Aghanim et al. 1996; Gruzinov & Hu 1998; Knox et al. 1998). The non-gaussian signature due to secondary anisotropies associated with weak gravitational lensing have been investigated in previous studies (Seljak 1996b; Bernardeau 1998; Winitzki 1998).
In section 2, we present the astrophysical contributions we take into account in our study. We then briefly present the statistical tests and detection strategy in section 3. We apply our tests to the combinations of primary and secondary anisotropies due to inhomogeneous re-ionisation alone in section 4, and to a configuration including the SZ effect of galaxy clusters in section 5. In section 6, we investigate the detectability of the non-gaussian signature for a MAP-like and a Planck-like instrumental configuration. Finally, in section 7, we discuss our results and present our conclusions.
## 2 Astrophysical contributions
The temperature anisotropies of the CMB contain the contributions of both the primary cosmological signal, directly related to the initial density fluctuations, and the foreground contributions amongst which are the secondary anisotropies. The secondary anisotropies are generated after matter-radiation decoupling. They arise from the interaction of the CMB photons with the matter and can be of a gravitational type (e.g. Rees-Sciama effect (Rees & Sciama 1968)), or of a scattering type when the matter is ionised (e.g. SZ or Ostriker-Vishniac effect (Ostriker & Vishniac 1986; Vishniac 1987)). In our study we adopt a canonical inflationary standard CDM (Cold Dark Matter) model for the generation of the primary anisotropies.
We simulate maps of the three astrophysical processes of interest in our study: the primary fluctuations, the secondary fluctuations due to inhomogeneous re-ionisation, and the SZ effect. For each process, we made 100 realisations of $`512\times 512`$ pixels (1.5 arcminute pixel size). This fairly large number of realisations allows us to have statistically significant results. They represent about 40% of the whole sky, equivalent to the “clean” fraction of the sky coverage available for CMB analysis. Indeed, we do not expect to be able to analyse regions of the sky that are highly contaminated by galactic emissions (dust, synchrotron and free-free). These contaminated regions account, more or less, for the Galactic latitudes with $`|b|<30^{\circ }`$ (about 60% of the sky).
### 2.1 Primary CMB anisotropies
For the purpose of this study, that is the characterisation of the non-gaussianity from secondary anisotropies, we assume gaussian distributed primary fluctuations generated in an inflationary scenario. We choose the canonical standard CDM model, normalised to COBE data. The maps were generated using a code, kindly provided by P.G. Ferreira, that produces square gaussian realisations for a given power spectrum. The CMB power spectrum, displayed in figure 1, was computed using the CMBFAST code (Seljak & Zaldarriaga 1996).
### 2.2 Secondary CMB anisotropies
#### 2.2.1 From inhomogeneous re-ionisation
The first generation of emitting objects ionises the surrounding gas of the globally neutral Universe at high redshifts. The resulting spatially inhomogeneous re-ionisation generates secondary anisotropies associated with the peculiar motion along the line of sight of ionised bubbles. This produces anisotropies with maximum amplitude at the degree scale, and with $`(\delta T/T)_{rms}\simeq 6\times 10^{-6}`$. The anisotropies are about ten times smaller than the primary fluctuations and spectrally indistinguishable from them. We use the model of Aghanim et al. (1996), in which these objects are early ionising quasars with assumed lifetimes of $`10^7`$ yrs. The number of quasars is normalised to match the data at $`z\simeq 4`$ and has been extrapolated for $`4<z\le 10`$. The positions of the centres of the ionised regions are drawn at random in the maps, and we assume a spherically symmetric gaussian profile for the temperature anisotropy. The size and amplitude of the anisotropies depend on the quasar luminosities and their light-on redshifts. We compute the skewness and kurtosis (third and fourth moments of the distribution) of the maps. All the maps exhibit a non-gaussian signature associated with an excess of kurtosis of the order of one, the skewness being null.
#### 2.2.2 From Sunyaev-Zel’dovich effect
The Sunyaev-Zel’dovich effect represents the Compton scattering of the CMB photons by the free electrons of the ionised and hot intra-cluster gas. It results in the so-called thermal SZ effect which exhibits a peculiar spectral signature with a minimum at long wavelengths and a maximum at short wavelengths. When the cluster moves with respect to the CMB rest frame, the Doppler shift induces an additional effect called the kinetic SZ effect. It generates anisotropies with the same spectral signature as the primary ones. The temperature anisotropies generated by the clusters are thus composed of the thermal $`(\delta T/T)_{th}`$ and kinetic $`(\delta T/T)_{ki}`$ SZ anisotropies. We simulate both effects using an updated version (Aghanim et al. 1998) of the Aghanim et al (1997) model. The simulations use the $`\beta `$-model (Cavaliere & Fusco-Femiano 1978) to describe the gas distribution of each individual cluster. This description is generalised to a population of clusters derived from the Press-Schechter formalism (Press & Schechter 1974) and normalised to the X-ray temperature distribution (Viana & Liddle 1996). The positions of the cluster centres are drawn at random in the maps. Again we find a zero skewness but a strongly non-gaussian signature because the kurtosis is non-zero.
## 3 Statistical tests and detection strategy
In a previous study (Forni & Aghanim 1999), we developed statistical methods to search for non-gaussianity. The tests are based on the detection of gradients in the wavelet space. They use the statistical properties of a signal in wavelet space, namely the measurement of the excess of kurtosis (fourth moment, $`\mu _4`$, of a distribution) of the coefficients associated with the gradients. The predicted excess of kurtosis for a gaussian distribution is zero. If the non-gaussian signal is not skewed (third moment of the distribution is zero), any significant departure from gaussianity is indicated by a non-zero excess of kurtosis.
The first test for non-gaussianity is based on what we call the multi-scale gradient. It is the quadratic sum of the wavelet coefficients associated with $`(\partial /\partial x)^2+(\partial /\partial y)^2`$ and we find that it follows a Laplace distribution for a gaussian distributed signal. For sets of 100 statistical realisations wavelet filtered at four decomposition scales (Fig. 1), we compute the normalised excess of kurtosis with respect to the Laplace distribution ($`k=\mu _4/\mu _2^2-6`$) together with the standard deviations (with respect to the median excess of kurtosis).
The second statistical test uses the wavelet coefficients associated with the horizontal and vertical gradients ($`\partial /\partial x`$ and $`\partial /\partial y`$ derivatives). We also use the coefficients related to the diagonal gradients ($`\partial ^2/\partial x\partial y`$ cross derivative), which are gaussian distributed for a gaussian signal. We then compute the normalised excess of kurtosis ($`k=\mu _4/\mu _2^2-3`$), with respect to a gaussian distribution, and the standard deviations (with respect to the median) for all the realisations.
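Schematically, both discriminators can be computed from a standard two-dimensional wavelet decomposition. The sketch below uses a generic separable wavelet (`db4` from the pywt package, which need not coincide with the basis used in our analysis) and identifies the diagonal detail coefficients with the cross derivative and the horizontal/vertical details with the gradients:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def excess_kurtosis_per_scale(image, wavelet="db4", levels=4):
    # cD holds the diagonal (cross derivative) coefficients, cH and cV
    # the horizontal and vertical gradient coefficients, at each scale.
    results = []
    coeffs = pywt.wavedec2(np.asarray(image, float), wavelet, level=levels)
    for cH, cV, cD in coeffs[1:]:                  # coarsest scale first
        k_cross = kurtosis(cD.ravel(), fisher=True)       # mu4/mu2^2 - 3
        msg = cH ** 2 + cV ** 2                           # multi-scale gradient
        k_msg = kurtosis(msg.ravel(), fisher=False) - 6.0  # vs. Laplace
        results.append((k_cross, k_msg))
    return results
```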
We apply the detection strategy proposed in Forni & Aghanim (1999) to demonstrate and to quantify the detectability of the non-gaussian signature. It is based on the comparison of a set of maps of the “real” observed sky to a set of gaussian realisations having the power spectrum of the “real sky”. The main advantage of this is that it can be reliably applied regardless of the power spectrum of the studied non-gaussian signal.
## 4 Analysis of the anisotropies: Primary + inhomogeneous re-ionisation
We first study the case of primary CMB anisotropies with secondary anisotropies due to inhomogeneous re-ionisation. The primary CMB anisotropies dominate at all scales larger than the cut off (at about 5 arcminutes). The non-gaussian signal is very small compared to the gaussian one. Indeed, the power spectrum of the secondary anisotropies represents, at most, less than 10% of the primary CMB power.
### 4.1 Multi-scale gradient
### 4.1 Multi-scale gradient

We compute the median value of the excess of kurtosis for the 100 realisations and the associated confidence intervals (Tab. 1). At the first decomposition scale the secondary anisotropies dominate the primary. We thus expect the non-gaussian signature of the secondary anisotropies to dominate, and indeed, we find a non-zero excess of kurtosis. At the two larger decomposition scales, the median value $`k`$ is marginally non-zero. The computed $`\sigma `$ values take into account the non symmetrical distribution of the kurtosis and exhibit a clear dichotomy between the upper ($`\sigma _+`$) and lower ($`\sigma _{-}`$) boundaries of the confidence interval. This suggests that non-gaussianity has been detected. If $`k-\sigma _{-}`$ for one realisation is larger than zero by a value of the order of, or larger than, $`\sigma _{-}`$, this indicates a significant departure from gaussianity. If $`k-\sigma _{-}`$ is of the order of zero, then more sophisticated tests must be applied to conclude whether the “real sky” has a non-gaussian signature. At the second decomposition scale, the non-zero value of the median excess of kurtosis is possibly due to the sampling effects resulting from the sharp cut off in the primary CMB power spectrum at about 5 arcminutes (in a standard CDM model) combined with the rather narrow window filter we use in the analysis.
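The asymmetric confidence interval can be estimated, for instance, from percentiles of the excess of kurtosis over the realisations; the helper below sketches one reasonable convention, not necessarily the exact estimator used in our analysis:

```python
import numpy as np

def median_and_bounds(k_values, low=15.87, high=84.13):
    # Median excess of kurtosis over the realisations, with asymmetric
    # one-sigma bounds taken as percentile distances from the median.
    k_values = np.asarray(k_values)
    k_med = np.median(k_values)
    sigma_minus = k_med - np.percentile(k_values, low)
    sigma_plus = np.percentile(k_values, high) - k_med
    return k_med, sigma_minus, sigma_plus
```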
### 4.2 Partial derivatives
The analysis of the coefficients associated with the cross and the first derivatives (Tab. 2) exhibits a non-zero median excess of kurtosis at the first two decomposition scales. At the third and fourth scales the obtained excess of kurtosis is very close to the values for the CMB primary anisotropies alone.
In order to illustrate the non-gaussian characteristics of the different statistical realisations, we plot (Fig. 2) the excess of kurtosis of the wavelet coefficients associated with the partial derivatives ($`\partial /\partial x`$ and $`\partial /\partial y`$) and the cross derivative $`\partial ^2/\partial x\partial y`$. The solid line corresponds to the results obtained for the 100 realisations of a non-gaussian process made of the primary CMB + secondary anisotropies, whereas the dotted line stands for the 100 gaussian test maps with the same power spectrum as the studied process. The excesses of kurtosis are computed with the coefficients related to the horizontal gradient $`\partial /\partial x`$ (left panels), to the vertical gradient $`\partial /\partial y`$ (centre panels), and to the cross derivative $`\partial ^2/\partial x\partial y`$ (right panels). For the gaussian signal, the excesses of kurtosis of the cross derivative coefficients are centred around zero whereas they are not for the first derivative coefficients. This indicates that the cross derivative coefficients better characterise gaussian signals. They thus seem more appropriate to test for non-gaussianity, even though the coefficients have smaller amplitudes. The results show a clear departure of the excess of kurtosis from zero at the two first decomposition scales, for the non-gaussian signal. At the third decomposition scale, the excess of kurtosis for the cross derivative becomes very weak, indicating a marginal detection of non-gaussianity.
Following the detection strategy of Forni & Aghanim (1999), we perform a set of 100 gaussian realisations with the same power spectrum as the sum of CMB and inhomogeneous re-ionisation, and we compute the excess of kurtosis for both the multi-scale gradient and the partial derivative coefficients. The probability distribution function (PDF) of the excess of kurtosis is plotted in figure 3 for the non-gaussian (solid line) and gaussian (dashed line) realisations.
We display, in the left panels, the PDF for the cross derivative coefficients. In the right panels, we show the excess of kurtosis computed with the multi-scale gradient coefficients. We note that investigating the statistical properties with the multi-scale gradient coefficients and cross derivative coefficients is quite complementary, because the multi-scale gradient is related to $`∂/∂x`$ and $`∂/∂y`$. The detection of the non-gaussian signature is clear when the PDFs are clearly shifted. Our results show that the median excess of kurtosis of the multi-scale gradient coefficients measures the non-gaussian nature of the secondary anisotropies due to inhomogeneous re-ionisation with a probability of 99.76% at the first decomposition scale. At all other scales, the detection level, for this discriminator, is below the one sigma limit. For the statistical test based on the cross derivative, the probability that the non-gaussianity is detected is 89.5% at the second decomposition scale. All the other scales show no significant departure from gaussianity.
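One plausible implementation of this comparison (reusing `excess_kurtosis` from the sketch of Sect. 4.1; the helper name is ours) reads the detection probability off the empirical gaussian distribution:

```python
import numpy as np

def detection_probability(k_measured, ks_gaussian):
    """Fraction of the gaussian-realisation kurtosis distribution lying below
    the measured median excess of kurtosis: values close to 1 (or to 0) flag a
    non-gaussian signature, values near 0.5 are compatible with gaussianity."""
    ks = np.sort(np.asarray(ks_gaussian))
    return np.searchsorted(ks, k_measured) / len(ks)
```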
## 5 Analysis of the anisotropies: including the SZ effect
Besides the secondary anisotropies that would arise in the context of an inhomogeneous re-ionisation of the Universe, there exist secondary anisotropies due to the SZ effect of galaxy clusters. In our study, we therefore add to the previous model the contribution of both thermal and kinetic SZ effects of a galaxy cluster population. We analyse the maps corresponding to the sum of CMB primary and secondary fluctuations (SZ + inhomogeneous re-ionisation) with a resolution of 1.5 arcminutes and the nominal Planck gaussian noise. The contribution of the thermal SZ effect, $`(\delta T/T)_{th}`$, is given at 2mm.
### 5.1 Multi-scale gradient
We compute the excess of kurtosis of the multi-scale gradient coefficients of the primary + secondary anisotropy maps. In Table 3, we note the extraordinarily large values of the median excesses of kurtosis $`k`$ with respect to the previously studied process (CMB primary anisotropies + secondary fluctuations due to inhomogeneous re-ionisation). More specifically, at the first and second decomposition scales the excess of kurtosis is of the order of 2300 and 180, respectively. At the third scale, we find $`k=0.87`$, which is already almost eight times greater than the corresponding value in Tab. 1. At the fourth and largest scale, the excess of kurtosis is very small; it is comparable to that measured without the SZ anisotropies, very close to the CMB alone. The non-gaussian signature, exhibited by the excess of kurtosis of the multi-scale gradient, is thus dominated at the first three scales by the SZ effect contribution, even though the latter does not dominate in terms of power.
### 5.2 Partial derivatives
For the same test maps, we compute the excess of kurtosis using the wavelet coefficients associated with the first and cross partial derivatives of the signal (Tab. 4 and Fig. 4). At the first three decomposition scales, the excess of kurtosis is very large due to the SZ contribution. We also note, in agreement with our suggestions of Sec. 4.2, that the computations using the cross partial derivative are more sensitive to non-gaussianity and thus more powerful in detecting it. In fact, the galaxy clusters exhibit very peaked profiles or even point-like behaviour. The wavelet coefficients associated with the cross derivative, which are very sensitive to symmetric profiles, are thus larger than in the previous study (inhomogeneous re-ionisation alone) in which we assumed a gaussian profile.
We illustrate, in Figure 5, the departure of the excess of kurtosis from zero. The $`x`$ and $`y`$ axes represent, respectively, the number of the secondary and of the CMB primary anisotropy maps. The upper left and lower right images represent the excess of kurtosis computed with the coefficients of $`∂/∂x`$ and $`∂/∂y`$. The upper right image was obtained with the coefficients of $`∂^2/∂x∂y`$. The lower left image shows the excess of kurtosis computed with the multi-scale gradient coefficients. Up to the third scale, the horizontal lines dominate the image, outlining a highly non-gaussian signal due to the secondary anisotropies. This is particularly true for the cross derivative coefficients (top right image). The other statistical tests show the non-gaussian secondary anisotropies but they also exhibit the CMB associated features (vertical lines).
## 6 Effects of the instrumental configurations
We apply our statistical discriminators to test for non-gaussianity within the context of the representative instrumental configurations of the future MAP and Planck Surveyor satellites for CMB observations. The Planck configuration allows an investigation of the beam convolution effects alone, because the noise level remains unchanged, whereas for the MAP configuration we vary both the beam and the noise level.
### 6.1 MAP-like configuration
We have used a MAP-like instrumental configuration corresponding to a convolution, with a gaussian beam of full width at half maximum of 12 arcminutes, of maps consisting of the primary CMB anisotropies to which we added the secondary anisotropies. The noise added to the convolved maps is gaussian with $`rms`$ amplitude $`(\delta T/T)_{rms}=10^{-5}`$. From these “observed” maps, we compute the median excess of kurtosis for the multi-scale gradient coefficients and for the coefficients of the first and cross derivatives. At the first two decomposition scales the signal is suppressed due to the beam dilution effects and the fourth decomposition scale is dominated by the CMB primary anisotropies. In the MAP-like configuration, we are therefore left with a unique decomposition scale, the third, to test non-gaussianity using our methods. At this scale, the excess of kurtosis for the cross derivatives is $`0.07\pm 0.03`$, whereas it is rather large for the multi-scale gradient and first derivatives, respectively $`1.15_{-0.78}^{+2.24}`$ and $`0.14_{-0.10}^{+0.13}`$. Here again, the non-zero excess of kurtosis is possibly due to a sample variance problem, as the 12 arcminute convolution sharply cuts the power at the third decomposition level (Fig. 1). Using the PDF of the excess of kurtosis, we compute the probability that the measured excess belongs to a non-gaussian signal and we find it below the one sigma detection limit. We also apply the Kolmogorov-Smirnov (K-S) test (Press et al. 1992) which compares globally two distribution functions, especially the shift in the median value. We find that the PDFs of the gaussian process and of the “real sky” observed by MAP are identical. These results, using our statistical discriminators, thus suggest that the MAP satellite will be unable to detect non-gaussianity.
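The K-S comparison is the standard two-sample test; a minimal sketch with scipy (the two input arrays collect the excesses of kurtosis of the gaussian realisations and of the “observed” maps at the scale of interest):

```python
from scipy.stats import ks_2samp

def ks_detection(ks_gaussian, ks_observed):
    """Two-sample Kolmogorov-Smirnov test between the two kurtosis
    distributions; a small p-value signals a shift between the PDFs,
    i.e. a detected non-gaussian signature."""
    statistic, p_value = ks_2samp(ks_gaussian, ks_observed)
    return statistic, p_value
```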
### 6.2 Planck Surveyor-like configuration
We use the same astrophysical contributions as those of the MAP-like configuration (primary and SZ + inhomogeneous re-ionisation secondary anisotropies). These maps are convolved with a 6 arcminute gaussian beam. We also take into account the expected gaussian noise of Planck ($`(\delta T/T)_{rms}\simeq 2\times 10^{-6}`$ per 1.5 arcminute pixel). The convolution by a 6 arcminute beam suppresses the power at the corresponding scale (scale I) and affects the second decomposition scale. The third one is not significantly altered by the convolution and we expect that the non-gaussianity could thus be detected. For the multi-scale gradient we find $`k=0.62_{-0.60}^{+1.43}`$, whereas we find for the first and cross derivatives respectively $`k=0.07_{-0.08}^{+0.11}`$ and $`0.16\pm 0.10`$. In order to quantify the detectability of the non-gaussianity in the Planck-like configuration, we generate gaussian distributed maps with the same power spectrum as the studied signal. We plot (Fig. 6) the PDF of the gaussian (dashed line) and non-gaussian (solid line) processes. We derive the probability that the median excess of kurtosis measured on the “real sky” belongs to the gaussian process. Using the multi-scale gradient we find that the probability of detecting non-gaussianity is 71.9% at the second decomposition scale. There is no significant detection elsewhere. Using the cross derivative coefficients, the probability of detecting a non-gaussian signature at the third scale is 94.5%. We apply the K-S test to the distribution of the excess of kurtosis for the cross derivative and find a probability of 96.6% of detecting non-gaussianity. Since the K-S test compares the two distributions, it is very sensitive to departures from gaussianity. It thus gives better results on the detection of the non-gaussian signature.
## 7 Discussion & Conclusion
The secondary anisotropies, due to CMB photon interactions, are superimposed on the primary anisotropies which are directly related to the seeds of the cosmic structures. The primary anisotropies can be gaussian distributed (inflationary models) or can exhibit an intrinsic non-gaussian signature (topological defect models). In the context of future CMB observations (high sensitivity, high resolution and large sky coverage), we will use the full information related to the CMB temperature anisotropies, in particular the statistical information, to distinguish between the two main cosmological models. Similarly, studies aiming at predicting and quantifying the foreground contributions to the temperature anisotropies have to characterise the non-gaussian foreground signals in order to subtract them before detailed CMB analysis.
In the present study we investigate the tests of non-gaussianity when this is induced by secondary anisotropies, the primary anisotropies being gaussian distributed. We study the effects arising from the interactions of the CMB photons and the ionised matter. More specifically, we focus on two effects which dominate all the other secondary effects of a scattering nature: the spatially inhomogeneous re-ionisation which peaks at scales of a few tens of arcminutes to one degree and the SZ effect which dominates at the few arcminutes scale. In order to search for non-gaussianity we use discriminators based on the study of the statistical properties of the coefficients in a four level wavelet decomposition (Forni & Aghanim 1999).
The primary anisotropies are gaussian at all scales. Nonetheless, we find a non-zero value of the multi-scale gradient excess of kurtosis, and hence of the first derivatives, at the second decomposition scale which could be misinterpreted as a non-gaussian signature. This can be understood in the following way: the window function of the wavelet at this scale (centred around $`l≃2800`$) encompasses the cut off in the angular power spectrum. As a result the corresponding sample variance induces a non-zero kurtosis for the multi-scale gradient coefficients. The presence of this non-zero value depends on the cosmological model as well as on the window filter, that is, the wavelet function. A similar non-zero value could exist at any decomposition scale where the CMB power spectrum has a sharp cut off. For the standard CDM model we use here, the cut off occurs at the second scale. In the case where the cosmological model has more power at small angular scales, or undergoes an overall shift of the spectrum towards large multipoles, the sample variance effects decrease. In the same way, we can use a wider wavelet which in turn decreases the sampling effects. However, this attenuates the non-gaussian signature we search for. We apply a detection strategy proposed in Forni & Aghanim (1999), which allows the quantification of the detectability regardless of the power spectrum of the studied signal.
We have studied the case of secondary anisotropies induced by a spatially inhomogeneous re-ionisation of the Universe. Assuming that this was the only source of secondary anisotropies, we succeed in demonstrating its non-gaussian signature at the first and second decomposition scales. However, inhomogeneous re-ionisation is far from being the only source of anisotropies. The SZ effect due to galaxy clusters is known to be the most common source which is related to the CMB photon scattering off free electrons. In this study, we also take into account the SZ effect of a predicted cluster population which we add to the primary CMB fluctuations and to the re-ionisation anisotropies. The non-gaussian foreground model is a worst case example because we do not remove any foregrounds. Owing to its peculiar spectral signature the thermal effect is expected to be removed from the cosmological signal (temperature anisotropies). However, the subtraction is not complete because almost 1/5 of the SZ effect contribution is due to the kinetic SZ effect, which is spectrally indistinguishable from the primary anisotropies, and there remains a significant non-gaussian foreground contribution. In our study, we find that the dominant non-gaussian signal is due to the SZ effect of clusters. The non-gaussian signature is found to be orders of magnitude larger than in the case without the SZ contribution and we clearly detect the non-gaussianity. The strong non-gaussian signature, associated with the SZ effect, comes from the gas profile of individual clusters. We have analysed temperature anisotropy maps with different profiles (gaussian, $`\beta `$ profiles or even point-like sources) to which we add the primary gaussian anisotropies. As it is very peaked at the centre, the cluster induces a sharp variation in the signal from the centre to the outskirts of the structure. In addition, an important fraction of the cluster population is composed of unresolved point-like clusters. We thus find that clusters represent the dominant non-gaussian foreground.
We apply our statistical tests to Planck-like and MAP-like instrumental configurations in order to compare the capabilities of the two planned satellites for detecting the non-gaussian signature induced by the secondary anisotropies (mainly the SZ effect). For both configurations the fourth, and largest, scale shows no significant non-gaussianity due to the SZ contribution. In the MAP-like configuration the beam convolution affects the first two decomposition scales. Therefore, we are only left with the third scale to search for non-gaussianity. At the same time, the convolution rather sharply reduces the contributions at the angular scales associated with the third decomposition level. This induces a non-zero excess of kurtosis. We apply our detection strategy to overcome the problem and avoid a possible misinterpretation of non-gaussianity. We find no significant detection of the non-gaussian signature at the third scale for the MAP-like configuration. By contrast, for the Planck-like configuration, we detect the non-gaussian signature at the third decomposition scale, the first and second ones being affected by the beam convolution.
We have shown that our statistical tests combined with a detection strategy based on the characterisation of gaussian test maps, with same power spectrum as the non-gaussian studied process, are appropriate tools for demonstrating a non-gaussian signature. In a forthcoming paper, we will search for other discriminatory methods that allow two (or more) non-gaussian signals to be distinguished, in order to subtract the non-gaussian signature of the secondary anisotropies from the non-gaussian signature of the primary fluctuations.
###### Acknowledgements.
The authors would like to thank the referee A. Heavens for helpful comments that improved the paper and P.G. Ferreira for kindly providing an IDL code generating gaussian realisations, and for fruitful discussions. We also wish to thank J.-L. Puget and F.R. Bouchet for helpful comments and A. Jones for his careful reading.
## Introduction

Each successive update of the precision electroweak data tends to reinforce the already spectacular agreement with the Standard Model (SM). An exception emerged in the Summer 1998 update, when new data from SLC on the $`b`$ quark front-back, left-right polarization asymmetry, $`A_{FBLR}^b`$, reinforced a possible discrepancy previously implicit in the LEP front-back asymmetry measurement, $`A_{FB}^b`$. Combined, the two measurements implied a value for the $`b`$ asymmetry parameter $`A_b`$ three standard deviations ($`\sigma `$) below the SM value. The discrepancy continues today, though diminished to 2.6$`\sigma `$ in the Spring 1999 data, implying inconsistency with the SM at 99% confidence level (CL).
The convergence of the SLC and LEP determinations of $`A_b`$ at a value in conflict with the SM could resolve the longstanding disagreement between the SLC and LEP measurements of the effective weak interaction mixing angle, $`\mathrm{sin}^2\theta _W^{\mathrm{ℓ}}`$, a critical parameter that currently provides the most sensitive probe of the SM Higgs boson mass. If $`A_b`$ is affected by new physics then $`A_{FB}^b`$ must be removed from the SM fit of $`\mathrm{sin}^2\theta _W^{\mathrm{ℓ}}`$, leaving the remaining measurements in good agreement. This possibility is also consistent with theoretical prejudice that the third generation is a likely venue for the emergence of new physics.
On the other hand the discrepancy could have an experimental origin. The now resolved $`R_b`$ anomaly illustrates the difficulties, which may be even greater for $`A_b`$ and $`A_{FB}^b`$. Or it could be a statistical fluctuation. Unfortunately the study of $`Z`$ decays is nearing its end. When the dust settles we may still be left wondering about the significance of the discrepancy.
The purpose of this paper is to observe that there is another arena in which the $`A_b`$ anomaly can be studied. If it is a genuine sign of new physics unique to (or dominant in) the third generation, new phenomena must emerge in flavor-changing neutral current (FCNC) processes. Then if the underlying physics has a mass scale much greater than $`m_W`$ and $`m_t`$, $`Z`$ penguin amplitudes are enhanced by about a factor two. The cleanest tests are rare $`K`$ and $`B`$ decays, such as $`K^+→\pi ^+\overline{\nu }\nu `$, $`K_L→\pi ^0\overline{\nu }\nu `$, $`B→X_s\overline{\nu }\nu `$, and $`B_s→\overline{\mu }\mu `$. For instance $`K^+→\pi ^+\overline{\nu }\nu `$ is enhanced by a factor 1.9 relative to the SM. A single event has been observed, with nominal central value from 1.8 to 2.7 times the SM prediction quoted below.
The $`A_b`$ anomaly could arise from new physics in the form of radiative corrections or $`Z-Z^{\prime }`$ or $`b-Q`$ mixing. In the first case, but not in the latter two, there would generically also be enhanced gluon (and photon) penguin amplitudes. This possibility is favored by the recent measurements of $`ϵ^{\prime }/ϵ`$, since the $`Z`$ penguin enhancement by itself exacerbates the existing disagreement with the SM, although the theoretical uncertainties are considerable. The gluon penguin enhancement cannot be deduced in a model independent way from the $`A_b`$ anomaly but can be estimated from $`ϵ^{\prime }/ϵ`$. Enhanced $`Z`$ and gluon penguins can be tested in $`B`$ meson decays and elsewhere. They would have a big impact on studies of the CKM matrix and CP violation.
## Fits of the $`b`$ quark couplings

In the SM the $`b`$ quark asymmetry parameter is $`A_b=0.935`$ with negligible uncertainty. In terms of the left- and right-handed $`Z\overline{b}b`$ couplings $`g_{bL,R}`$ it is
$$A_b=\frac{g_{bL}^2-g_{bR}^2}{g_{bL}^2+g_{bR}^2}.$$
$`(1)`$
It is measured directly by the front-back left-right asymmetry, $`A_b=A_{FBLR}^b=0.898(29)`$ and also by the front-back asymmetry using $`A_b=4A_{FB}^b/(3A_{\mathrm{ℓ}})`$ where $`A_{\mathrm{ℓ}}`$ ($`\mathrm{ℓ}=\mathrm{e},\mu ,\tau `$) is the lepton asymmetry parameter defined as in eq. (1) with $`b→\mathrm{ℓ}`$. Using $`A_{FB}^b=0.0991(20)`$ from LEP and $`A_{\mathrm{ℓ}}=0.1489(17)`$ from the combined leptonic measurements at SLC and LEP, we find $`A_b=0.887(21)`$. The two determinations together imply $`A_b=0.891(17)`$.
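These values follow from simple error propagation; a short Python sketch reproducing them:

```python
import numpy as np

# inputs quoted in the text: (value, error)
A_FBLR_b = (0.898, 0.029)       # SLC front-back left-right asymmetry
A_FB_b   = (0.0991, 0.0020)     # LEP front-back asymmetry
A_l      = (0.1489, 0.0017)     # combined leptonic asymmetry parameter

# A_b = 4 A_FB^b / (3 A_l), relative errors added in quadrature
val = 4 * A_FB_b[0] / (3 * A_l[0])
err = val * np.hypot(A_FB_b[1] / A_FB_b[0], A_l[1] / A_l[0])
print(f"A_b from A_FB^b: {val:.3f} +- {err:.3f}")      # 0.887 +- 0.021

# inverse-variance weighted average of the two determinations
w = 1 / np.array([A_FBLR_b[1], err]) ** 2
avg = np.sum(w * np.array([A_FBLR_b[0], val])) / np.sum(w)
print(f"combined A_b:    {avg:.3f} +- {1/np.sqrt(np.sum(w)):.3f}")  # 0.891 +- 0.017
```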
I have performed several fits to the five quantities that most significantly constrain $`g_{bL}`$ and $`g_{bR}`$. In addition to $`A_b`$ and the ratio of partial widths, $`R_b=\mathrm{\Gamma }_b/\mathrm{\Gamma }_h`$, they are the total $`Z`$ width $`\mathrm{\Gamma }_Z`$, the peak hadronic cross section $`\sigma _h`$, and the hadron-lepton ratio $`R_{\mathrm{ℓ}}=\mathrm{\Gamma }_h/\mathrm{\Gamma }_{\mathrm{ℓ}}`$. A brief summary is presented here; details will be given elsewhere.
The SM fit assumes $`\mathrm{sin}^2\theta _W^{\mathrm{ℓ}}=0.23128(22)`$ as follows from $`A_{\mathrm{ℓ}}`$. It has chi-squared per degree of freedom $`\chi ^2/dof=10.4/5`$ with confidence level $`CL=6.5\%`$. In fit 1 $`g_{bL}`$ and $`g_{bR}`$ are allowed to vary while all other $`Z\overline{q}q`$ couplings are held at their SM values, yielding $`\chi ^2/dof=3.0/3`$ and $`CL=39\%`$. In fit 2 only $`g_{bR}`$ is allowed to vary; the result is $`\chi ^2/dof=7.8/4`$ with $`CL=10\%`$, little better than the SM fit. In fit 3 the couplings of the $`b`$, $`d`$ and $`s`$ quarks are varied equally, $`\mathrm{\Delta }g_{bL,R}=\mathrm{\Delta }g_{dL,R}=\mathrm{\Delta }g_{sL,R}`$, with a result nearly as good as fit 1. Other fits considered resulted in poorer $`CL`$’s than the SM.
We conclude that positive shifts are preferred for both $`g_{bL}`$ and $`g_{bR}`$, either for the $`b`$ quark alone as in fit 1 or for $`b`$, $`d`$ and $`s`$ equally as in fit 3. The need to shift both left and right couplings is clear: $`\delta A_b\simeq -0.05`$ requires positive shifts in $`g_{bR}`$ and/or $`g_{bL}`$ (remember that $`g_{bL}<0`$) while $`g_{bL}^2+g_{bR}^2`$ is tightly constrained by the other measurements, forcing $`\delta g_{bR}^2\simeq -\delta g_{bL}^2`$. Fit 3 seems unnatural in that $`s,d`$ couplings are varied while $`u,c`$ couplings are not, an issue finessed in fit 1 which presumably reflects physics unique to the third generation quarks, perhaps due to the large value of the top quark mass. The 32% and 5% contours from fit 1 are shown in figure 1, with the SM values, $`g_{bL},g_{bR}=-0.4197,+0.0771`$, and the fit central values, $`g_{bL},g_{bR}=-0.4154,+0.0997`$.
## The $`Z`$ penguin enhancement

We now focus on fit 1 and the FCNC effects it implies. Physics from higher mass scales will couple to the $`SU(2)_L`$ quark eigenstates, so a nonuniversal $`Z\overline{b}_Lb_L`$ coupling, $`\delta g_{bL}`$, has its origin in a nonuniversal $`Z\overline{b}_L^{\prime }b_L^{\prime }`$ amplitude where $`b_L^{\prime }`$ is the weak eigenstate, $`b_L^{\prime }=V_{tb}b_L+V_{ts}s_L+V_{td}d_L`$. As a result $`Z\overline{b}s`$, $`Z\overline{b}d`$, and $`Z\overline{s}d`$ interactions are induced.
The very same phenomenon occurs in the SM where the leading correction to the $`Z\overline{b}b`$ vertex arises from $`t`$ quark loop diagrams. For $`m_t→∞`$ the leading correction is
$$\delta g_{bL}^{\mathrm{SM}}=\frac{\alpha _W(m_t)}{16\pi }\frac{m_t^2}{m_W^2}$$
$`(2)`$
where $`\alpha _W=\alpha /\mathrm{sin}^2\theta _W^{\mathrm{ℓ}}`$. For $`m_t=174.3`$ GeV this is $`\delta g_{bL}^{\mathrm{SM}}\simeq 0.0031`$. A more complete estimate based on the complete one loop result and with the pole mass $`m_t`$ replaced by the running $`\overline{MS}`$ mass, $`\overline{m}_t(m_t)\simeq m_t-8`$ GeV, yields a similar result, $`\delta g_{bL}^{\mathrm{SM}}=0.0032`$, resulting in $`g_{bL}=-0.4197`$. In fit 1 $`g_{bL}`$ is shifted by an additional amount, $`\delta g_{bL}^{\mathrm{A}_\mathrm{b}}=0.0043`$. These are large shifts: e.g., $`\delta g_{bL}^{\mathrm{SM}}`$ corresponds to a $`3\sigma `$ effect in $`R_b`$.
The same Feynman diagrams responsible for the leading $`Z\overline{b}b`$ vertex correction also generate the SM $`Z`$ penguin amplitude and in the limit $`m_t→∞`$ they are identical. Rewriting the one loop $`Z`$ penguin vertex for $`\overline{s}d`$ transitions as an effective $`\delta g_{\overline{s}dL}^{\mathrm{SM}}`$ coupling normalized like $`g_{bL}`$, we have (see eq. (2.18) of )
$$\delta g_{\overline{s}dL}^{\mathrm{SM}}=\lambda _t\frac{\alpha _W}{2\pi }C_0(x_t)$$
$`(3)`$
where $`\lambda _t=V_{ts}^{*}V_{td}`$, $`x_t=m_t^2/m_W^2`$, and $`C_0`$ is
$$C_0(x)=\frac{x}{8}\left(\frac{x-6}{x-1}+\frac{3x+2}{(x-1)^2}\mathrm{ln}(x)\right)$$
$`(4)`$
Taking $`m_t≫m_W`$ and comparing with eq. (2) we have
$$\delta g_{\overline{s}dL}^{\mathrm{SM}}=\lambda _t\delta g_{bL}^{\mathrm{SM}}.$$
$`(5)`$
Eq. (5) shows that if $`m_t`$ were much larger than any other relevant scale we could smoothly extrapolate the on-shell $`Z\overline{b}b`$ vertex correction to the related $`Z\overline{q}^{\prime }q`$ penguin vertex. The same is true of any new physics at a scale $`m_X≫m_W,m_t`$, whether it affects the $`Z\overline{b}b`$ vertex by radiative corrections or by $`Z-Z^{\prime }`$ or $`b-Q`$ mixing. Therefore if the $`A_b`$ anomaly arises from physics at a very high scale, the additional contribution to the $`\overline{s}d`$ $`Z`$ penguin amplitude is
$$\delta g_{\overline{s}dL}^{\mathrm{A}_\mathrm{b}}=\lambda _t\delta g_{bL}^{\mathrm{A}_\mathrm{b}}.$$
$`(6)`$
With $`\delta g_{bL}^{\mathrm{A}_\mathrm{b}}`$ from fit 1, the contribution of $`\delta g_{\overline{s}dL}^{\mathrm{A}_\mathrm{b}}`$ is equal in sign and magnitude to the SM $`Z`$ penguin, resulting in a factor two enhancement in amplitude.
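Numerically the statement is easy to check; a small sketch (the running top mass and the couplings are assumptions chosen to be consistent with the values quoted above):

```python
import numpy as np

def C0(x):
    """Inami-Lim function of eq. (4)."""
    return x / 8 * ((x - 6) / (x - 1) + (3 * x + 2) / (x - 1) ** 2 * np.log(x))

mt_bar, mW = 166.0, 80.4                  # assumed running top mass and W mass (GeV)
alpha_W = (1 / 128.9) / 0.23128           # alpha / sin^2(theta_W), assumed alpha(m_Z)
x_t = (mt_bar / mW) ** 2

print("(alpha_W/2pi) C0(x_t) =", alpha_W / (2 * np.pi) * C0(x_t))  # ~0.0042
print("delta g_bL from fit 1 =", 0.0043)  # same sign and size: the penguin doubles
```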
## Rare $`K`$ and $`B`$ decays

The enhancement of the $`Z`$ penguin implies increased rates for the rare $`K`$ decays $`K_L→\pi ^0\overline{\nu }\nu `$ and $`K^+→\pi ^+\overline{\nu }\nu `$, and the rare $`B`$ decays $`B→X_s\overline{\nu }\nu `$, and $`B_s→\overline{\mu }\mu `$. The predicted enhancement is consistent with the bound on the real part of the $`Z`$ penguin amplitude obtained in from $`K_L→\overline{\mu }\mu `$.
Predictions for the rare $`K`$ decays are obtained following (with parametric updates from ), since $`Z_{ds}`$ defined in is $`Z_{ds}=\lambda _t(C_0(x_t)+C_b)`$ where
$$C_b=\frac{2\pi }{\alpha _W}\delta g_{bL}^{A_b}$$
$`(7)`$
with $`\alpha _W=\alpha /\mathrm{sin}^2\theta _W^{\mathrm{ℓ}}`$. The results are
$$\mathrm{BR}(K_L→\pi ^0\overline{\nu }\nu )=6.78\times 10^{-4}(\mathrm{Im}\lambda _t)^2\left|X_0(x_t)+C_b\right|^2$$
$`(8)`$
and
$$\mathrm{BR}(K^+→\pi ^+\overline{\nu }\nu )=1.55\times 10^{-4}\left|\lambda _t\left(X_0(x_t)+C_b\right)+\mathrm{\Delta }_c\right|^2$$
$`(9)`$
where $`\mathrm{\Delta }_c`$ is a nonnegligible charm quark contribution and $`X_0=C_0-4B_0`$ is a combination of the SM $`t`$ quark $`Z`$ penguin and box amplitudes. The SM box amplitude, $`B_0`$, is essential for gauge invariance and is numerically important in ‘t Hooft-Feynman gauge in which we work. In the limit $`m_t≫m_W`$ it is suppressed by $`m_W^2/m_t^2`$ relative to the penguin because of its softer UV behavior. We assume for the new physics underlying the $`A_b`$ anomaly that the penguin amplitude dominates over the box, as expected for instance in models with “hard GIM suppression.”
Similarly, following the parameterization in , the $`B`$ decay rates are
$$\mathrm{BR}(B→X_s\overline{\nu }\nu )=1.52\times 10^{-5}\left|\frac{V_{ts}}{V_{cb}}\right|^2\left|X_0(x_t)+C_b\right|^2$$
$`(10)`$
and
$$\mathrm{BR}(B_s→\overline{\mu }\mu )=3.4\times 10^{-9}\left|Y_0(x_t)+C_b\right|^2$$
$`(11)`$
where $`Y_0=C_0-B_0`$. $`\mathrm{BR}(B→X_d\overline{\nu }\nu )`$ can be obtained by substituting $`V_{td}`$ for $`V_{ts}`$ in eq. (10), and $`\mathrm{BR}(B_d→\overline{\mu }\mu )`$ can be obtained from eq. (11) using
$$\frac{\mathrm{BR}(B_d→\overline{\mu }\mu )}{\mathrm{BR}(B_s→\overline{\mu }\mu )}=\frac{\tau (B_d)}{\tau (B_s)}\frac{m_{B_d}}{m_{B_s}}\frac{F_{B_d}^2}{F_{B_s}^2}\frac{|V_{td}|^2}{|V_{ts}|^2}.$$
$`(12)`$
The results are displayed in table 1. For the $`A_b`$ anomaly the branching ratios are enhanced by factors between $`2`$ and $`3`$. The SM error estimates are taken from . For the $`A_b`$ anomaly two errors are quoted: the first is the parametric and theoretical error that is common to the $`A_b`$ anomaly and the SM, while the second reflects a $`\pm 0.0014`$ uncertainty in $`\delta g_{bL}^{A_b}`$. The uncertainties of the ratios are dominated by the uncertainty in $`\delta g_{bL}^{A_b}`$ alone.
The ratios differ from unity by about 2.6$`\sigma `$, which is the significance of the $`A_b`$ anomaly itself, whereas the predicted branching ratios differ less significantly because of the common parametric and theoretical error. The most significant difference is for $`B→X_s\overline{\nu }\nu `$, which has the smallest parametric/theoretical error. Combining all errors in quadrature, the predicted SM and $`A_b`$ anomaly branching ratios for $`B→X_s\overline{\nu }\nu `$ differ by $`2.3\sigma `$. For $`K_L→\pi ^0\overline{\nu }\nu `$, $`K^+→\pi ^+\overline{\nu }\nu `$, and $`B_s→\overline{\mu }\mu `$ the differences are 1.2$`\sigma `$, 1.0$`\sigma `$, and 1.6$`\sigma `$ respectively. The precision of the $`K`$ decay predictions improves as the CKM matrix is measured more precisely, while the $`B_s→\overline{\mu }\mu `$ prediction depends on the decay constant $`F_{B_s}`$. If instead of we take $`\lambda _t`$ from the CKM fit of the precision of the $`K`$ decay predictions is improved.
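The central values of the table below can be reproduced approximately from eqs. (8), (10) and (11); a sketch, where the CKM input $`\mathrm{Im}\lambda _t`$ and the ratio $`|V_{ts}/V_{cb}|`$ are assumptions chosen for illustration:

```python
import numpy as np

def C0(x):  # Inami-Lim functions, in a convention with X0 = C0 - 4*B0
    return x / 8 * ((x - 6) / (x - 1) + (3 * x + 2) / (x - 1) ** 2 * np.log(x))

def B0(x):
    return (x / (1 - x) + x * np.log(x) / (x - 1) ** 2) / 4

x_t = (166.0 / 80.4) ** 2
X0, Y0 = C0(x_t) - 4 * B0(x_t), C0(x_t) - B0(x_t)   # ~1.5 and ~0.97
C_b = 0.80                                          # (2 pi/alpha_W)*0.0043, eq. (7)
Im_lam_t, Vts_over_Vcb = 1.35e-4, 0.98              # assumed CKM inputs

for name, br in [
    ("K_L -> pi0 nu nubar", lambda c: 6.78e-4 * Im_lam_t**2 * (X0 + c)**2),
    ("B   -> X_s nu nubar", lambda c: 1.52e-5 * Vts_over_Vcb**2 * (X0 + c)**2),
    ("B_s -> mu mu       ", lambda c: 3.4e-9 * (Y0 + c)**2),
]:
    sm, enhanced = br(0.0), br(C_b)
    print(f"{name}: SM {sm:.2e}, enhanced {enhanced:.2e}, ratio {enhanced/sm:.2f}")
```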
## $`ϵ^{\prime }/ϵ`$

Theoretical estimates of $`ϵ^{\prime }/ϵ`$ suggest that the $`\delta g_{\overline{s}dL}^{A_b}`$ $`Z`$ penguin enhancement is disfavored unless QCD penguins are also enhanced. From the approximate analytical formula in we find $`ϵ^{\prime }/ϵ=+7.3\times 10^{-4}`$ for the SM and $`ϵ^{\prime }/ϵ=-0.2\times 10^{-4}`$ with the $`\delta g_{\overline{s}dL}^{A_b}`$ $`Z`$ penguin enhancement. The most recent experimental average (with scaled error) is $`(ϵ^{\prime }/ϵ)_{\mathrm{Expt}}=(21.2\pm 4.6)\times 10^{-4}`$.
Taking the theoretical estimates at face value, consistency requires that gluon penguins are also enhanced. If the principal gluon penguin term is enhanced by the same factor ($`2`$) as the $`Z`$ penguin, the result is $`15\times 10^{-4}`$, while a factor 3 enhancement yields $`29\times 10^{-4}`$.
However a large unquantifiable uncertainty hangs over all theoretical estimates of $`ϵ^{\prime }/ϵ`$. Presently they depend sensitively on the strange quark running mass and the hadronic matrix elements $`B_6^{\frac{1}{2}}`$ and $`B_8^{\frac{3}{2}}`$, each estimated by nonperturbative methods not yet under rigorous control. Consequently we cannot conclude that the SM or the $`\delta g_{\overline{s}dL}^{A_b}`$ enhanced $`Z`$ penguin are truly inconsistent with the data. The uncertainties will hopefully be resolved by more powerful lattice simulations. Until then conclusions based on $`(ϵ^{\prime }/ϵ)`$ must be regarded with caution.
## Discussion

The $`A_b`$ anomaly could be caused by radiative corrections of new bosons and/or quarks, by $`Z-Z^{\prime }`$ mixing, or by $`b-Q`$ mixing with heavy quarks $`Q`$ in nonstandard $`SU(2)_L`$ representations. Generically radiative corrections would also affect gluon and photon penguin amplitudes, by model dependent amounts, while $`Z-Z^{\prime }`$ and $`b-Q`$ mixing would only enhance the $`Z`$ penguin. With the major caveat expressed above, $`ϵ^{\prime }/ϵ`$ appears to favor radiative corrections, since it could be explained if the gluon penguin is enhanced by a similar factor to the $`\delta g_{\overline{s}dL}^{A_b}`$ $`Z`$ penguin enhancement.
The hypothesis that the $`A_b`$ anomaly represents the effect of higher energy physics on third generation quarks can be falsified if the predicted $`Z`$ penguin enhancements are absent. If they are present the hypothesis remains viable and the $`A_b`$ anomaly provides key information beyond the FCNC studies. The large value of $`\delta g_{bR}`$ and the condition $`\delta (g_{bR}^2-g_{bL}^2)≫|\delta (g_{bL}^2+g_{bR}^2)|`$ would then point to a radical departure from the SM with a sharply defined signature. For instance, the Higgs sector associated with a right-handed extension of the SM gauge sector could shift $`g_{bR}`$ and $`g_{bL}`$ with little effect on other precision measurements. Depending on the right-handed CKM matrix, there could also be observable right-handed FCNC effects.
The burgeoning program to study CP violation and the CKM matrix must measure $`Z`$ and gluon penguin amplitudes in order to fully achieve its goals — an enterprise characterized as controlling “penguin pollution.” In the process we should learn if the FCNC effects implied by fit 1 occur or not. If they do “penguin pollution” would be transformed into a window on an unanticipated domain of new physics, of which the measurement of $`A_b`$ would have provided the first glimpse.
Acknowledgements: I wish to thank R. Cahn, H. Quinn, P. Rowson, S. Sharpe, and especially Y. Grossman and M. Worah for helpful discussions. This work was supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under contract DE-AC03-76SF00098.
Figure 1. $`\chi ^2`$ contours for fit 1. The diamond is the SM prediction and the box is the best value from fit 1. The inner contour indicates $`\chi ^2=3.5`$ corresponding to CL = 32% for 3 dof. The outer contour indicates $`\chi ^2=7.8`$ corresponding to CL = 5% for 3 dof.
Table 1. Predicted branching ratios for the $`\delta g_{\overline{s}dL}^{A_b}`$ enhanced $`Z`$ penguin amplitude and for the SM. The third line displays the ratio of the enhanced predictions to the SM.
| | $`K_L→\pi ^0\overline{\nu }\nu `$ | $`K^+→\pi ^+\overline{\nu }\nu `$ | $`B→X_s\overline{\nu }\nu `$ | $`B_s→\overline{\mu }\mu `$ |
| --- | --- | --- | --- | --- |
| | $`10^{-11}`$ | $`10^{-11}`$ | $`10^{-5}`$ | $`10^{-9}`$ |
| $`\delta g_{\overline{s}dL}^{A_b}`$ | $`6.6\pm 2.4_{-1.4}^{+1.6}`$ | $`14.6\pm 5.7_{-2.5}^{+2.7}`$ | $`7.8\pm 0.9_{-1.7}^{+1.9}`$ | $`10.7\pm 3.7_{-2.9}^{+3.4}`$ |
| SM | $`2.8\pm 1.1`$ | $`7.7\pm 3.0`$ | $`3.3\pm 0.4`$ | $`3.2\pm 1.1`$ |
| Ratio | $`2.3_{-0.5}^{+0.6}`$ | $`1.9_{-0.3}^{+0.4}`$ | $`2.3_{-0.5}^{+0.6}`$ | $`3.3_{-0.9}^{+1.1}`$ |
# The universal counting function
Abstract. Given a lattice polytope $`P`$ (with underlying lattice $`𝕃`$), the universal counting function $`𝒰_P(𝕃^{\prime })=|P∩𝕃^{\prime }|`$ is defined on all lattices $`𝕃^{\prime }`$ containing $`𝕃`$. Motivated by questions concerning lattice polytopes and the Ehrhart polynomial, we study the equation $`𝒰_P=𝒰_Q`$.
Mathematics Subject Classification: 52B20, 52A27, 11P21
1. The universal counting function
We will denote by $`V`$ a vector space of dimension $`n`$, by $`𝕃`$ a lattice in $`V`$, of rank $`n`$. Let
$$𝒢_𝕃=𝕃⋊GL(𝕃)$$
be the group of affine maps of $`V`$ that are isomorphisms of $`V`$ and map $`𝕃`$ onto itself; in case
$$𝕃=ℤ^n⊂V=ℝ^n,𝒢_n=ℤ^n⋊GL(ℤ^n)$$
corresponds to affine unimodular maps. An $`𝕃`$–polytope is the convex hull of finitely many points from $`𝕃`$; $`𝒫_𝕃`$ denotes the set of all $`𝕃`$–polytopes. For a finite set $`A`$ denote by $`|A|`$ its cardinality. Finally, let $`ℒ_𝕃`$ be the set of all lattices containing $`𝕃`$.
###### Demonstration Definition 1
Given any $`𝕃`$–polytope $`P`$, the function $`𝒰_P:ℒ_𝕃→ℕ`$ defined by
$$𝒰_P(𝕃^{\prime })=|P∩𝕃^{\prime }|$$
is called the universal counting function of $`P`$.
This is just the restriction of another function $`𝒰:𝒫_𝕃\times ℒ_𝕃→ℕ`$ to a fixed $`P∈𝒫_𝕃`$, where $`𝒰`$ is given by
$$𝒰(P,𝕃^{\prime })=|P∩𝕃^{\prime }|.$$
Note, further, that $`𝒰_P`$ is invariant under the group, $`𝒢_{tr}`$, generated by $`𝕃`$–translations and the reflection with respect to the origin, but, of course, not invariant under $`𝒢_𝕃`$.
Example 1. Take for $`𝕃^{\prime }`$ the lattices $`𝕃_k=\frac{1}{k}𝕃`$ with $`k∈ℕ`$. Then
$$𝒰_P(𝕃_k)=|P∩\frac{1}{k}𝕃|=|kP∩𝕃|=E_P(k)$$
where $`E_P`$ is the Ehrhart polynomial of $`P`$ (see \[Ehr\]). We will need some of its properties that are described in the following theorem (see for instance \[Ehr\],\[GW\]). Just one more piece of notation: if $`F`$ is a facet of $`P`$ and $`H`$ is the affine hull of $`F`$, then the relative volume of $`F`$ is defined as
$$rvol(F)=\frac{Vol_{n-1}(F)}{Vol_{n-1}(D)}$$
where $`D`$ is the fundamental parallelotope of the $`(n-1)`$–dimensional sublattice $`H∩𝕃`$. For a face $`F`$ of $`P`$ that is at most $`(n-2)`$–dimensional let $`rvol(F)=0`$. Note that the relative volume is invariant under $`𝒢_𝕃`$ and can be computed, when $`𝕃=ℤ^n`$, since then the denominator is the euclidean length of the (unique) primitive outer normal to $`F`$ (when $`F`$ is a facet).
###### Theorem 1
Assume $`P`$ is an $`n`$–dimensional $`𝕃`$–polytope. Then $`E_P`$ is a polynomial in $`k`$ of degree $`n`$. Its main coefficient is $`Vol(P)`$, and its second coefficient equals
$$\frac{1}{2}\underset{F\text{ a facet of }P}{∑}rvol(F).$$
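In the plane this is easy to check by brute force; a minimal Python sketch (the polygon and the choice $`𝕃=ℤ^2`$ are assumptions made only for illustration):

```python
import numpy as np

def lattice_count(verts):
    """|P ∩ Z^2| for a convex lattice polygon with vertices listed
    counter-clockwise, by scanning the bounding box with half-plane tests."""
    edges = list(zip(verts, verts[1:] + verts[:1]))
    xs, ys = [v[0] for v in verts], [v[1] for v in verts]
    return sum(all((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0
                   for (x1, y1), (x2, y2) in edges)
               for x in range(min(xs), max(xs) + 1)
               for y in range(min(ys), max(ys) + 1))

P = [(0, 0), (3, 0), (2, 2), (0, 1)]                  # an arbitrary lattice polygon
ks = (1, 2, 3)
counts = [lattice_count([(k * x, k * y) for x, y in P]) for k in ks]
a2, a1, a0 = np.polyfit(ks, counts, 2)                # E_P(k) = a2 k^2 + a1 k + a0
print(a2, a1, a0)   # a2 = Area(P) = 4, a1 = half the sum of relative edge lengths = 3
```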
It is also known that $`E_P`$ is a $`𝒢_𝕃`$–invariant valuation (for the definitions see \[GW\] or \[McM\]). The importance of $`E_P`$ is reflected in the following statement from \[BK\]. For a $`𝒢_𝕃`$–invariant valuation $`\varphi `$ from $`𝒫_𝕃`$ to an abelian group $`G`$, there exists a unique $`\gamma =(\gamma _i)_{i=0,\dots ,n}`$ with $`\gamma _i∈G`$ such that
$$\varphi (P)=∑\gamma _ie_{P,i}$$
where $`e_{P,i}`$ is the coefficient of $`k^i`$ of the Ehrhart polynomial.
It is known that $`E_P`$ does not determine $`P`$, even within $`𝒢_𝕃`$ equivalence. \[Ka\] gives examples of lattice–free $`𝕃`$–simplices with identical Ehrhart polynomials that are different under $`𝒢_𝕃`$. The aim of this paper is to investigate whether and to what extent the universal counting function determines $`P`$.
We give another description of $`𝒰_P`$. Let $`\pi :V→V`$ be any isomorphism satisfying $`\pi (𝕃)⊂𝕃`$. Define, with a slight abuse of notation,
$$𝒰_P(\pi )=|\pi (P)∩𝕃|=|P∩\pi ^{-1}(𝕃)|.$$
Set $`𝕃^{\prime }=\pi ^{-1}(𝕃)`$. Since $`𝕃^{\prime }`$ is a lattice containing $`𝕃`$ we clearly have
$$𝒰_P(\pi )=𝒰_P(𝕃^{\prime }).$$
Conversely, given a lattice $`𝕃^{\prime }∈ℒ_𝕃`$, there is an isomorphism $`\pi `$ satisfying the last equality. (Any linear $`\pi `$ mapping a basis of $`𝕃^{\prime }`$ to a basis of $`𝕃`$ suffices.) The two definitions of $`𝒰_P`$, via lattices or via isomorphisms with $`\pi (𝕃)⊂𝕃`$, are equivalent. We will use the common notation $`𝒰_P`$.
Example 2. Anisotropic dilatations. Take $`\pi :ℝ^n→ℝ^n`$ defined by
$$\pi (x_1,\dots ,x_n)=(k_1x_1,\dots ,k_nx_n),$$
where $`k_1,\dots ,k_n∈ℕ`$. The corresponding map $`𝒰_P`$ extends the notion of Ehrhart polynomial and Example 1.
Simple examples show that $`𝒰_P`$ is not a polynomial in the variables $`k_i`$.
2. A necessary condition
Given a nonzero $`z∈𝕃^{*}`$, the dual of $`𝕃`$, and an $`𝕃`$–polytope $`P`$, define $`P(z)`$ as the set of points in $`P`$ where the functional $`z`$ takes its maximal value. As is well known, $`P(z)`$ is a face of $`P`$. Denote by $`H(z)`$ the hyperplane $`zx=0`$ (scalar product). $`H(z)`$ is clearly a lattice subspace.
###### Theorem 2
Assume $`P,Q`$ are $`𝕃`$–polytopes with identical universal counting function. Then, for every primitive $`z∈𝕃^{*}`$,
$$rvolP(z)+rvolP(-z)=rvolQ(z)+rvolQ(-z).$$
$`(*)`$
The theorem shows, in particular, that if $`P(z)`$ or $`P(-z)`$ is a facet of $`P`$, then $`Q(z)`$ or $`Q(-z)`$ is a facet of $`Q`$. Further, given an $`𝕃`$–polytope $`P`$, there are only finitely many possibilities for the outer normals and volumes of the facets of another polytope $`Q`$ with $`𝒰_P=𝒰_Q`$. So a well–known theorem of Minkowski implies,
###### Corollary 1
Assume $`P`$ is an $`𝕃`$–polytope. Then, apart from lattice translates, there are only finitely many $`𝕃`$–polytopes with the same universal counting function as $`P`$.
###### Demonstration Proof of Theorem 2
We assume that $`P,Q`$ are full–dimensional polytopes. It is enough to prove the theorem in the special case when $`𝕃=ℤ^n`$ and $`z=(1,0,\dots ,0)`$. There is nothing to prove when none of $`P(z),P(-z),Q(z)`$, $`Q(-z)`$ is a facet since then both sides of (\*) are equal to zero. So assume that, say, $`P(z)`$ is a facet, that is, $`rvolP(z)>0`$.
For a positive integer $`k`$ define the linear map $`\pi _k:V→V`$ by
$$\pi _k(x_1,\dots ,x_n)=(x_1,kx_2,\dots ,kx_n).$$
The condition implies that the lattice polytopes $`\pi _k(P)`$ and $`\pi _k(Q)`$ have the same Ehrhart polynomial. Comparing their second coefficients we get,
$$\underset{F\text{ a facet of }P}{∑}rvol\pi _k(F)=\underset{G\text{ a facet of }Q}{∑}rvol\pi _k(G),$$
since the facets of $`\pi _k(P)`$ are of the form $`\pi _k(F)`$ where $`F`$ is a facet of $`P`$.
Let $`\zeta =(\zeta _1,\dots ,\zeta _n)∈ℤ^n`$ be the (unique) primitive outer normal to the facet $`F`$ of $`P`$. Then $`\zeta ^{\prime }=(k\zeta _1,\zeta _2,\dots ,\zeta _n)`$ is an outer normal to $`\pi _k(F)`$, and so it is a positive integral multiple of the unique primitive outer normal $`\zeta ^{\prime \prime }`$, that is $`\zeta ^{\prime }=m\zeta ^{\prime \prime }`$ with $`m`$ a positive integer. When $`k`$ is a large prime and $`\zeta `$ is different from $`z`$ and $`\zeta _1\ne 0`$, then $`m=1`$ and $`rvol\pi _k(F)=O(k^{n-2})`$. When $`\zeta _1=0`$, then $`m=1`$, again, and the ordinary $`(n-1)`$–volume of $`\pi _k(F)`$ is $`O(k^{n-2})`$. Finally, when $`\zeta =\pm z`$, $`Vol\pi _k(F)=k^{n-1}VolF`$.
So the dominant term, when $`k→∞`$, is $`k^{n-1}(rvolP(z)+rvolP(-z))`$ since by our assumption $`rvolP(z)>0`$. ∎
3. Dimension two
Let $`P`$ be an $`𝕃`$–polygon in $`V`$ of dimension two. Simple examples show again that $`𝒰_P`$ is not a polynomial in the coefficients of $`\pi `$.
In the planar case we abbreviate $`rvolP(z)`$ as $`|P(z)|`$. Extending (and specializing) Theorem 1 we prove
###### Proposition 3
Suppose $`P`$ and $`Q`$ are $`𝕃`$–polygons. Then $`𝒰_P=𝒰_Q`$ if and only if the following two conditions are satisfied:
(i) $`Area(P)=Area(Q)`$,
(ii) $`|P(z)|+|P(-z)|=|Q(z)|+|Q(-z)|`$ for every primitive $`z∈𝕃^{*}`$.
###### Demonstration Proof
The conditions are sufficient: (i) and (ii) imply that, for any $`\pi `$, $`Area(\pi (P))=Area(\pi (Q))`$ and $`|\pi (P)(z)|+|\pi (P)(-z)|=|\pi (Q)(z)|+|\pi (Q)(-z)|`$. We use Pick’s formula for $`\pi (P)`$ (see \[GW\], say):
$$|\pi (P)∩𝕃|=Area\pi (P)+\frac{1}{2}\underset{z\text{ primitive}}{∑}|\pi (P)(z)|+1.$$
This shows that $`𝒰_P=𝒰_Q`$, indeed.
The necessity of (i) follows from Theorem 1 immediately (via the main coefficient of $`E_P`$), and the necessity of (ii) is the content of Theorem 2.∎
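Pick's formula itself is easy to verify by direct counting; a small sketch reusing `lattice_count` from the Ehrhart sketch of Section 1:

```python
from math import gcd

def area(verts):
    """Area by the shoelace formula (counter-clockwise vertices)."""
    return abs(sum(x1 * y2 - x2 * y1
                   for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))) / 2

def boundary_length(verts):
    """Sum over primitive z of |P(z)|: each edge contributes its relative
    length gcd(|dx|, |dy|); all other primitive directions contribute 0."""
    return sum(gcd(abs(x2 - x1), abs(y2 - y1))
               for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))

P = [(0, 0), (3, 0), (2, 2), (0, 1)]
print(area(P) + boundary_length(P) / 2 + 1)   # equals lattice_count(P) = 8
```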
###### Corollary 2
Under the conditions of Proposition 3 the lattice widths of $`P`$ and $`Q`$ in any direction $`z∈𝕃^{*}`$ are equal.
###### Demonstration Proof
The lattice width, $`w(z,P)`$, of $`P`$ in direction $`z∈𝕃^{*}`$ is, by definition (see \[KL\],\[Lo\]),
$$w(z,P)=\mathrm{max}\{z(x-y)\mid x,y∈P\}.$$
In the plane one can compute the width along the boundary of $`P`$ as well which gives
$$w(z,P)=\frac{1}{2}\underset{e}{∑}|ze|$$
where the sum is taken over all edges $`e`$ of $`P`$. This proves the corollary.∎
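The two expressions for the width can be compared directly; a short sketch:

```python
def lattice_width(z, verts):
    """w(z, P) computed two ways: as max of z(x - y) over vertex pairs, and
    as half the sum of |z e| over the closed cycle of edges e of the polygon."""
    vals = [z[0] * x + z[1] * y for (x, y) in verts]
    by_vertices = max(vals) - min(vals)
    by_edges = sum(abs(z[0] * (x2 - x1) + z[1] * (y2 - y1))
                   for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])) / 2
    return by_vertices, by_edges

print(lattice_width((1, 0), [(0, 0), (3, 0), (2, 2), (0, 1)]))   # (3, 3.0)
```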
###### Theorem 3
Suppose $`P`$ and $`Q`$ are $`𝕃`$–polygons. Then $`𝒰_P=𝒰_Q`$ if and only if the following two conditions are satisfied:
(i) $`Area(P)=Area(Q)`$,
(ii) there exist $`𝕃`$–polygons $`X`$ and $`Y`$ such that $`P`$ resp. $`Q`$ is a lattice translate of $`X+Y`$ and $`X-Y`$ (Minkowski addition; $`X-Y`$ denotes $`X+(-Y)`$).
Remark. Here $`X`$ or $`Y`$ is allowed to be a segment or even a single point. In the proof we will ignore translates and simply write $`P=X+Y`$ and $`Q=X-Y`$.
###### Demonstration Proof
Note that (ii) implies the second condition in Proposition 3. So we only have to show the necessity of (ii).
Assume the contrary and let $`P,Q`$ be a counterexample to the statement with the smallest possible number of edges. We show first that for every (primitive) $`z∈𝕃^{*}`$ at least one of the sets $`P(z),P(-z),Q(z),Q(-z)`$ is a point.
If this were not the case, all four segments would contain a translated copy of the shortest among them, which, when translated to the origin, is of the form $`[0,t]`$. But then $`P=P^{\prime }+[0,t]`$ and $`Q=Q^{\prime }+[0,t]`$ with $`𝕃`$–polygons $`P^{\prime },Q^{\prime }`$.
We claim that $`P^{\prime },Q^{\prime }`$ satisfy conditions (i) and (ii) of Proposition 3. This is obvious for (ii). For the areas we have that $`AreaP-AreaP^{\prime }`$ equals the area of the parallelogram with base $`[0,t]`$ and height $`w(z,P)`$. The same applies to $`AreaQ-AreaQ^{\prime }`$, but there the height is $`w(z,Q)`$. Then Corollary 2 implies the claim.
So the universal counting functions of $`P^{\prime },Q^{\prime }`$ are identical. But the number of edges of $`P^{\prime }`$ and $`Q^{\prime }`$ is smaller than that of $`P`$ and $`Q`$. Consequently there are polygons $`X^{\prime }`$, $`Y`$ with $`P^{\prime }=X^{\prime }+Y`$, and $`Q^{\prime }=X^{\prime }-Y`$. But then, with $`X=X^{\prime }+[0,t]`$, $`P=X+Y`$ and $`Q=X-Y`$, a contradiction.
Next, we define the polygons $`X,Y`$ by specifying their edges. It is enough to specify the edges of $`X`$ and $`Y`$ that make up the edges $`P(z),P(-z),Q(z)`$, $`Q(-z)`$ in $`X+Y`$ and $`X-Y`$. To this end we orient the edges of $`P`$ and $`Q`$ clockwise and set
$$P(z)=[a_1,a_2],P(-z)=[b_1,b_2],Q(z)=[c_1,c_2],Q(-z)=[d_1,d_2]$$
each of them in clockwise order. Then
$$a_2-a_1=\alpha t,b_2-b_1=\beta t,c_2-c_1=\gamma t,d_2-d_1=\delta t$$
where $`t`$ is orthogonal to $`z`$ and $`\alpha ,\gamma ≥0`$, $`\beta ,\delta ≤0`$ and one of them equals $`0`$. Moreover, by condition (ii) of Proposition 3, $`\alpha -\beta =\gamma -\delta `$.
Here is the definition of the corresponding edges, $`x,y`$ of $`X,Y`$:
$`x=\alpha t,y=\beta t\text{ if}`$ $`\delta =0,`$
$`x=\beta t,y=\alpha t\text{ if}`$ $`\gamma =0,`$
$`x=\gamma t,y=-\delta t\text{ if}`$ $`\beta =0,`$
$`x=\delta t,y=-\gamma t\text{ if}`$ $`\alpha =0.`$
With this definition, $`X+Y`$ and $`X-Y`$ will have exactly the edges needed. We have to check yet that the sum of the $`X`$ edges (and the $`Y`$ edges) is zero, otherwise they won’t make up a polygon. But $`∑(x+y)=0`$ since this is the sum of the edges of $`P`$, and $`∑(x-y)=0`$ since this is the sum of the edges of $`Q`$. Summing these two equations gives $`∑x=0`$, subtracting them yields $`∑y=0`$. ∎
4. An example and a question
Let $`X`$, resp. $`Y`$ be the triangle with vertices $`(0,0),(2,0),(1,1)`$, and $`(0,0),(1,1),(0,3)`$. As it turns out the areas of $`P=X+Y`$ and $`Q=X-Y`$ are equal. So Theorem 3 applies: $`𝒰_P=𝒰_Q`$. At the same time, $`P`$ and $`Q`$ are not congruent as $`P`$ has six vertices while $`Q`$ has only five.
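The example can be verified numerically; a self-contained sketch (brute-force lattice counting; the dilations $`𝕃_k=\frac{1}{k}ℤ^2`$ and one anisotropic map are sample instances of $`𝒰_P=𝒰_Q`$, not the full statement):

```python
from itertools import product

def hull(points):
    """Andrew's monotone chain convex hull, counter-clockwise."""
    pts = sorted(set(points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def count(verts, k1=1, k2=1):
    """|pi(P) ∩ Z^2| for pi(x, y) = (k1 x, k2 y), by scanning the bounding box."""
    v = [(k1 * x, k2 * y) for x, y in verts]
    edges = list(zip(v, v[1:] + v[:1]))
    xs, ys = [p[0] for p in v], [p[1] for p in v]
    return sum(all((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) >= 0
                   for (x1, y1), (x2, y2) in edges)
               for x, y in product(range(min(xs), max(xs) + 1),
                                   range(min(ys), max(ys) + 1)))

X = [(0, 0), (2, 0), (1, 1)]
Y = [(0, 0), (1, 1), (0, 3)]
P = hull([(a + c, b + d) for (a, b) in X for (c, d) in Y])   # X + Y, 6 vertices
Q = hull([(a - c, b - d) for (a, b) in X for (c, d) in Y])   # X - Y, 5 vertices
print([count(P, k, k) for k in range(1, 5)])                 # the two lists coincide,
print([count(Q, k, k) for k in range(1, 5)])                 # e.g. both start with 14
print(count(P, 2, 3), count(Q, 2, 3))                        # anisotropic case agrees too
```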
However, it is still possible that polygons with the same universal counting function are equidecomposable. Precisely, $`P_1,\mathrm{},P_m`$ is said to be a subdivision of $`P`$ if the $`P_i`$ are $`𝕃`$–polygons with pairwise disjoint relative interiors, their union is $`P`$, and the intersection of the closure of any two of them is a face of both. Recall from section 1 the group $`𝒢_{tr}`$ generated by $`𝕃`$–translations and the reflection with respect to the origin. Two $`𝕃`$–polygons $`P,Q`$ are called $`𝒢_{tr}`$–equidecomposable if there are subdivisions $`P=P_1\mathrm{}P_m`$ and $`Q=Q_1\mathrm{}Q_m`$ such that each $`P_i`$ is a translate, or the reflection of a translate of $`Q_i`$ with the extra condition that $`P_i`$ is contained in the boundary of $`P`$ if and only if $`Q_i`$ is contained in the boundary of $`Q`$.
We finish the paper with a question which has connections to a theorem of the late Peter Greenberg \[Gr\]. Assume $`P`$ and $`Q`$ have the same universal counting function. Is it true then that they are $`𝒢_{tr}`$–equidecomposable? In the example above, as in many other examples, they are.
References
\[BK\] U. Betke, M. Kneser, Zerlegungen und Bewertungen von Gitterpolytopen, J. Reine ang. Math. 358 (1985), 202–208.
\[Ehr\] E. Ehrhart, Polynômes arithmétiques et méthode des polyèdres en combinatoire, Birkhäuser, 1977.
\[Gr\] P. Greenberg, Piecewise $`SL_2`$–geometry, Transactions of the AMS, 335 (1993), 705–720.
\[GW\] P. Gritzmann, J. Wills, Lattice points, in: Handbook of convex geometry, ed. P. M. Gruber, J. Wills, North Holland, Amsterdam, 1988.
\[KL\] R. Kannan, L. Lovász, Covering minima and lattice point free convex bodies, Annals of Math. 128 (1988), 577–602.
\[Ka\] J–M. Kantor, Triangulations of integral polytopes and Ehrhart polynomials, Beiträge zur Algebra und Geometrie, 39 (1998), 205–218.
\[Lo\] L. Lovász, An algorithmic theory of numbers, graphs and convexity, Regional Conference Series in Applied Mathematics 50, 1986.
\[McM\] P. McMullen, Valuations and dissections, in: Handbook of convex geometry, ed. P. M. Gruber, J. Wills, North Holland, Amsterdam, 1988.
# Dynamics of Counterion Condensation
## Abstract
Using a generalization of the Poisson-Boltzmann equation, dynamics of counterion condensation is studied. For a single charged plate in the presence of counterions, it is shown that the approach to equilibrium is diffusive. In the far from equilibrium case of a moving charged plate, a dynamical counterion condensation transition occurs at a critical velocity. The complex dynamic behavior of the counterion cloud is shown to lead to a novel nonlinear force–velocity relation for the moving plate.
Many experimental techniques used to probe properties of macroions involve setting them to motion by applying some external driving mechanism (such as migration in an applied electric field or hydrodynamic flow, sedimentation, etc.), and monitoring the dynamical response. A macroion in solution is surrounded by an accompanying cloud of counterions, and the corresponding response of the counterions is in general very complex. This often makes interpretation of the observed properties of the dynamically perturbed system a nontrivial task.
For example, it is known that if a macroion moves, the asymmetry set by the motion causes a distortion in the counterion cloud. The distorted cloud, in turn, produces a nonzero electric field (called the asymmetry field ) that opposes the motion of the macroion. This phenomenon is known as the “relaxation effect,” and has been studied in the literature within the Debye-Hückel (DH) theory. The approximate DH description, which is based on a linearization of the nonlinear Poisson-Boltzmann (PB) equation, is however known to break down when the macroions are highly charged. In particular, it does not capture the phenomenon of counterion condensation, about which a lot has been learned by examining the available exact solutions of the full nonlinear equilibrium PB equation .
In this Letter, we develop a dynamical generalization of the nonlinear PB equation, which we use to study the dynamical aspects of counterion condensation. For the specific example of a single charged plate in the presence of counterions, we show that the equilibrium, corresponding to a Gouy-Chapman profile, is approached diffusively. In the far from equilibrium case of a moving charged plate we present an exact stationary solution of the dynamical PB equation, and show that the cloud of counterions phase separates into a comoving part and a condensate of the excess charge which is expelled to infinity. The amount of the comoving charge depends on the velocity of the plate, and is decreased as the velocity increases. At some critical velocity $`v_c`$, the system undergoes a dynamical phase transition, and eventually all the counterions evaporate from the neighborhood of the moving plate. We show that this leads to a novel nonlinear force–velocity relation for the moving charged plate in the presence of the counterions.
To study the dynamics of counterions in the presence of macroions, one needs to formulate a dynamical generalization of the PB equation. Consider a set of negatively charged macroions described by the given charge density $`\rho _m(𝐱,t)`$, embedded in a cloud of positively charged counterions with concentration $`c(𝐱,t)`$. The electric potential $`\varphi (𝐱,t)`$ is then given by the solution of the Poisson equation:
$$∇^2\varphi =-\frac{4\pi }{ϵ}(ec+\rho _m),$$
(1)
in which $`ϵ`$ is the dielectric constant of the solvent, and $`e`$ is the electron charge. The dynamics of $`c(𝐱,t)`$ is governed by a continuity equation of the form $`_tc+𝐉=0`$, in which the current density $`𝐉`$ is composed of a deterministic part $`c𝐯`$, and a stochastic part $`Dc`$ as given by Fick’s law, where $`𝐯(𝐱,t)`$ is the velocity field, and $`D`$ is the diffusion constant of the counterions. In a mean-field approximation, the velocity field at each point is determined by the local value of the electric field $`𝐄(𝐱,t)`$ as $`𝐯=\mu 𝐄=\mu \varphi `$, where $`\mu `$ is the electric mobility of the counterions. Using the self-consistency relation in the continuity equation, one obtains the so-called Nernst-Planck equation
$$∂_tc=D∇^2c+\mu ∇⋅(c∇\varphi ).$$
(2)
Note that we have simplified the problem by neglecting any couplings to the hydrodynamics of the solvent. The above two equations (Eqs.(1) and (2)) describe the nonlinear dynamics of counterions in the presence of a given distribution of macroions. At equilibrium, $`∂_tc=0`$ and Eq.(2) yields $`c∝\mathrm{exp}(-\mu \varphi /D)`$, where equilibration at a temperature $`T`$ implies an Einstein relation $`D=(\mu /e)k_BT`$. Inserting the Boltzmann form for $`c`$ back into Eq.(1) then yields the PB equation. In analogy to the equilibrium case, we can define a dynamical Bjerrum length $`\mathrm{ℓ}_B=e\mu /ϵD`$.
The above nonlinear equations are in general very difficult to solve, which is why they are usually dealt with in a linearized approximation . However, the approximation is known to break down in the neighborhood of highly charged surfaces, where interesting phenomena such as counterion condensation take place. To capture the essence of the nonlinearity in the dynamical context, we restrict ourselves to a one dimensional case where the equations prove to be more tractable. Eliminating the concentration field $`c`$ from the two equations, one obtains an equation for the electrostatic potential, which can be integrated twice to yield
$`∂_t\varphi (x,t)-∂_t\varphi (-∞,t)=D∂_x^2\varphi +{\displaystyle \frac{\mu }{2}}(∂_x\varphi )^2+{\displaystyle \frac{4\pi }{ϵ}}{\displaystyle ∫_{-∞}^x}dx^{\prime }\left[\mu \rho _m(x^{\prime },t)∂_x\varphi (x^{\prime },t)+D∂_x\rho _m(x^{\prime },t)+J_m(x^{\prime },t)\right],`$ (3)
where the macroion current density $`J_m`$ is obtained from the continuity equation for the macroions $`∂_t\rho _m+∂_xJ_m=0`$, and the boundary condition $`E(-∞,t)=0`$ has been implemented.
The above equation, which can be called a dynamical PB equation , belongs to the general class of the celebrated Kardar-Parisi-Zhang (KPZ) equations , with the coupling to the source appearing both in the additive and multiplicative forms. It is however important to note that in this context the sources represent slower dynamical degrees of freedom corresponding to the motion of the macroions (compared to counterions), which makes it somewhat different from the standard KPZ problems .
To study how counterions dynamically rearrange themselves around macroions, we focus on the specific example of a single negatively charged plate with charge (number) density $`\sigma `$ and surface area $`A`$, which is moving at a constant velocity $`v`$, i.e., $`\rho _m(x,t)=-e\sigma \delta (x-vt)`$. Using a Cole-Hopf transformation $`\varphi (x,t)-\varphi (-∞,t)=2D/\mu \mathrm{ln}W(x,t)`$ , Eq.(3) can be written as
$`∂_tW`$ $`=`$ $`D∂_x^2W`$ (4)
$`-{\displaystyle \frac{2D}{\lambda }}\left[\delta (x-vt)+\left({\displaystyle \frac{v-\mu E_0}{D}}\right)\mathrm{\Theta }(x-vt)\right]W,`$ (5)
where $`E_0(t)=-2D/\mu ∂_x\mathrm{ln}W|_{x=vt}`$ is the asymmetry field, $`\mathrm{\Theta }(x)`$ is the step function, and $`\lambda =1/\pi \sigma \mathrm{ℓ}_B`$ is a dynamical Gouy-Chapman length. Note that despite its simple form, this diffusion-like equation is still nonlinear due to the self-consistent coupling of the asymmetry field (electric field at the charged plate).
Let us first assume that the plate is not moving ($`v=0`$), and ask how the equilibrium configuration is dynamically approached. If we assume that the initial configuration of the counterions is symmetric with respect to the plate, the symmetry will be conserved during time evolution rendering $`E_0(t)=0`$ at all times. This simplifies Eq.(4) to a linear diffusion equation which can be solved exactly. One obtains
$$\frac{e\varphi (x,t)}{k_BT}=2\mathrm{ln}\left\{1+\frac{|x|}{\lambda }+_{\mathrm{}}^+\mathrm{}𝑑x^{}\left[\mathrm{exp}\left(\frac{e\varphi (x^{},0)}{2k_BT}\right)1\frac{|x^{}|}{\lambda }\right]\times \frac{1}{\sqrt{4\pi Dt}}\mathrm{exp}\left[\frac{(xx^{})^2}{4Dt}\right]\right\}.$$
(6)
In simple terms this means that the equilibrium Gouy-Chapman profile diffusively develops, as the initial configuration diffusively disappears. Naively, one might not expect such an amazingly simple behavior from a fully interacting Coulomb system.
For a stationary plate at equilibrium, all the counterions are known to be “condensed” in a Gouy-Chapman (density) profile which decays algebraically at large separations (see below). It is interesting to study this problem in the far from equilibrium case of a moving plate. In particular, we may ask if there is a stationary solution of Eq.(4), of the form $`W(x,t)=W(xvt)`$, in which all the counterions are comoving with the plate at the velocity $`v`$. The answer to this question turns out to be negative. Apparently, the Coulomb attraction which could overcome the combination of entropy and repulsion among the counterions in the equilibrium case, is not capable of accommodating the viscous drag of all the counterions in this balance, for any nonzero $`v`$. However, we can find stationary solutions if we do not require that all the counterions are comoving.
Such solutions can be shown to exist only for negative values of $`v`$, with a boundary condition that requires a specific value for the electric field at $`+\mathrm{}`$. This implies that the neutralizing counterion cloud actually phase separates into a comoving domain and a domain of excess charges accumulated at $`+\mathrm{}`$, as required by this one dimensional model . This class of solutions is parametrized by the number of comoving counterions. The stationary solution of Eq.(4) with maximum number of comoving counterions can be obtained as
$$W(x,t)=\{\begin{array}{cc}1\left(\frac{1+\lambda v/4D}{1\lambda v/4D}\right)^2\mathrm{exp}\left[\frac{v(xvt)}{D}\right],\hfill & \mathrm{for}x<vt,\hfill \\ & \\ \frac{\left(\lambda v/D\right)}{(1\lambda v/4D)^2}\left\{1+\left[1\left(\frac{\lambda v}{4D}\right)^2\right]\frac{(xvt)}{\lambda }\right\}\mathrm{exp}\left[\frac{v(xvt)}{2D}\right],\hfill & \mathrm{for}x>vt,\hfill \end{array}$$
(7)
and correspondingly for the counterion density profile as
$$c(x,t)=\{\begin{array}{cc}\frac{(v/D)^2}{2\pi \mathrm{}_B}\frac{\left(\frac{1+\lambda v/4D}{1\lambda v/4D}\right)^2\mathrm{exp}\left[\frac{v(xvt)}{D}\right]}{\left\{1\left(\frac{1+\lambda v/4D}{1\lambda v/4D}\right)^2\mathrm{exp}\left[\frac{v(xvt)}{D}\right]\right\}^2},\hfill & \mathrm{for}x<vt,\hfill \\ & \\ \frac{1}{2\pi \mathrm{}_B\lambda ^2}\frac{\left[1\left(\lambda v/4D\right)^2\right]^2}{\left\{1+\left[1\left(\lambda v/4D\right)^2\right]\left(\frac{xvt}{\lambda }\right)\right\}^2},\hfill & \mathrm{for}x>vt.\hfill \end{array}$$
(8)
The above solution has very interesting features. In the limit $`v0`$, Eq.(8) reduces to the equilibrium Gouy-Chapman profile
$$c_{\mathrm{eq}}(x)=\frac{1}{2\pi \mathrm{}_B\lambda ^2}\times \frac{1}{\left(1+\frac{|x|}{\lambda }\right)^2}.$$
(9)
In this limit, all the counterions are condensed, typically at a distance given by $`\lambda `$, although the profile has an algebraic decay.
For a nonzero $`v`$, while the density profile is still algebraically decaying “behind” the moving plate (for $`x>vt`$), it decays exponentially “ahead” of the moving plate (for $`x<vt`$), indicating a jamming of the profile as a result of the motion. In the comoving profile of counterions, there is a total charge of $`Q_>=\frac{1}{2}Q_0[1(\lambda v/4D)^2]`$ behind the plate, as opposed to $`Q_<=\frac{1}{2}Q_0[1+(\lambda v/4D)^2+\lambda v/2D]`$ ahead of the plate, where $`Q_0=e\sigma A`$ is the overall charge of the counterions. Note that the comoving counterions are distributed asymmetrically with $`Q_>>Q_<`$. The amount of the excess charge can be easily obtained as
$$Q_{\mathrm{ex}}(v)=Q_0Q_>Q_<=\frac{eA}{4\pi \mathrm{}_B}\times \frac{|v|}{D},$$
(10)
which is independent of $`\sigma `$. By examining the asymptotic limit of the electric field at infinity, one can easily see that this excess charge is indeed expelled to $`+\mathrm{}`$.
As the velocity of the plate increases, the number of comoving counterions decreases, until at $`\lambda v/4D=1`$ eventually all the counterions evaporate to infinity. This corresponds to a dynamical counterion condensation phase transition, happening at the critical velocity $`v_c=4D/\lambda `$, and is similar to the equilibrium transition in the case of a charged cylinder . It is easy to check from Eq.(8) that the density profile of the comoving counterions vanishes on both sides at $`\lambda v/4D=1`$.
The solution of Eq.(7) yields a value $`E_0=(1+\lambda v/8D)v/\mu `$ for the asymmetry field at the position of the plate, which leads to a very interesting mechanical response for the charged plate . In the stationary situation, the total force $`F_{\mathrm{tot}}`$ exerted on the charged plate should vanish. This means that an externally applied (mechanical) force $`F_{\mathrm{ext}}`$ (that is necessary to maintain the constant velocity motion) should balance an electrical contribution due to nonzero value of the electric field $`E_0`$ felt by the charged plate, namely, $`F_{\mathrm{ext}}Q_0E_0=0`$. Inserting the velocity dependent value for $`E_0`$, one obtains
$$|F_{\mathrm{ext}}|=\{\begin{array}{cc}\left(Q_0/\mu \right)|v|\frac{ϵ}{8\pi }\left(v/\mu \right)^2A,\hfill & \mathrm{for}|v|<v_c,\hfill \\ & \\ 2\pi Q_0^2/ϵA,\hfill & \mathrm{for}|v|v_c,\hfill \end{array}$$
(11)
as the force–velocity relation for a moving charged plate. A plot of Eq.(11) is sketched in Fig. 1, in which $`F_c=2\pi Q_0^2/ϵA`$.
In the limit of very low velocities ($`|v|v_c`$), one can neglect the quadratic velocity term in Eq.(11). This leads to a viscous drag with an effective friction coefficient $`\zeta _{\mathrm{eff}}=Q_0/\mu `$, where the “electrostatic friction” is due to dissipation corresponding to the comoving could of counterions. Note that all the counterions are condensed in this limit, and participate in the dissipation. In the opposite limit of $`|v|v_c`$, no counterions are condensed and the friction coefficient is consequently zero. However, since all the counterions participate in forming a condensate at infinity, there is a constant (independent of velocity) electrostatic attractive force acting on the plate, which is simply the force between two parallel and equally charged plates. In fact, one can rewrite Eq.(11) in such a way that these features are manifest at any velocity: $`|F_{\mathrm{ext}}|=(Q_c(v)/\mu )|v|+2\pi Q_{\mathrm{ex}}(v)^2/ϵA`$, where $`Q_c(v)=Q_>+Q_<=Q_0Q_{\mathrm{ex}}(v)`$ is the amount of condensed (comoving) charge. Interestingly, the nonlinear response function
$$\zeta (v)\frac{F_{\mathrm{ext}}(v)}{v}=Q_c(v)/\mu ,$$
(12)
gives a proper account of the number of condensed (comoving) counterions at any velocity.
The dynamical counterion condensation phase transition discussed above can be characterized by an order parameter, which is given by the number of condensed counterions. The tunning parameter for this dynamical transition is the external force, which is the analogue of temperature in the equilibrium case. In analogy to equilibrium critical phenomena, one can look at the critical exponents corresponding to the transition. For example, the order parameter vanishes at $`F_c`$ as $`Q_c(F_c|F_{\mathrm{ext}}|)^\beta `$, with a mean-field exponent $`\beta =1/2`$.
It is well known that at equilibrium counterion condensation is determined by dimensional considerations . In $`d`$ dimensions, a $`D`$ dimensional macroion of size $`L_{}`$ attracts a counterion at a perpendicular distance $`L_{}`$, with a Coulomb energy that goes like $`E_c1/L_{}^{dD2}`$. On the other hand, entropy of a particle confined in such a box can be obtained as $`S\mathrm{ln}(L_{}^DL_{}^{dD})`$. Comparing energy and entropy one obtains that for $`d<D+2`$ counterions tend to condense to minimize energy (lower values for $`L_{}`$ are favored), while for $`d>D+2`$ they prefer to be free and gain entropy (higher values for $`L_{}`$ are favored). The case $`d=D+2`$ is marginal, where a counterion condensation transition takes place . The analysis presented in this Letter clearly shows that this simple equilibrium picture is modified when the system is far from equilibrium. In particular, we found that complete condensation in the case of a charged plate will no longer hold for a moving plate. Another important difference is that, unlike the equilibrium case, there is a marked contrast between the condensed and the free counterions.
One might question the validity of the present formulation at high velocities such that $`\lambda v/D1`$, because of the assumption that the dynamics of the macroions is much slower compared to the counterions. The dimensionless parameter $`\lambda v/D`$ (often called the Péclet number) can be written as $`\tau _c/\tau _m`$, in which $`\tau _c`$ is the time it takes a counterion to diffuse a distance $`\lambda `$, whereas $`\tau _m`$ is the time in which the macroion drifts a same distance. It is thus possible that this ratio becomes comparable to 1, while the ratio between the corresponding diffusion times is still much less than unity.
In the derivation of Eq.(2), we have made use of a mean-field approximation similar to what is used to obtain the PB equation. It is well known that PB equation is incapable of describing intriguing phenomena such as attraction between like charged objects in the presence of condensed counterions, because of this approximation . One should then attempt to go beyond mean-field theory and study the effect of fluctuations on the dynamical PB equation along the lines of what has been done in the equilibrium case . In particular, it is shown in Ref. that fluctuations lead to a stronger condensation, because mean-field theory overestimates the repulsion between counterions. It will thus be interesting to see how this effect competes with the viscous drag in the case of a moving plate. Another important and very interesting extension of this work would be to incorporate the coupling to hydrodynamics of the solvent, which we anticipate to have dramatic effects.
I am grateful to R. Bruinsma, M. Kardar, P. Pincus, M. Sahimi, and R. da Silveira for invaluable discussions and comments. This research was supported in part by the National Science Foundation under Grants No. PHY94-07194 and DMR-93-03667.
|
no-problem/9905/cond-mat9905249.html
|
ar5iv
|
text
|
# Persistent current in a mesoscopic ring with diffuse surface scattering
## Abstract
The persistent current in a clean mesoscopic ring with ballistic electron motion is calculated. The particle dynamics inside a ring is assumed to be chaotic due to scattering at the surface irregularities of atomic size. This allows one to use the so-called “ballistic” supersymmetric $`\sigma `$ model for calculation of the two-level correlation function in the presence of a nonzero magnetic flux.
PACS numbers: 73.23.Ad, 05.45.Mt, 73.23.-b
Since the pioneering work of Büttiker, Imry and Landauer , persistent currents in small metallic rings have been a subject of great theoretical and experimental interest. While earlier observations were made on disordered samples with diffusive electron dynamics , more recent experiments measured the persistent currents in high mobility semiconductor heterostructures with the elastic mean free path $`l1.3L`$ ($`L`$ is the system size) . The semiclassical electron dynamics in such clean samples is ballistic and can be either regular (integrable) or chaotic. The former possibility seems to be rather exceptional, since any deviation of the sample shape from a perfect annulus, however small, breaks down the integrability of a system. The purpose of this paper is to calculate the persistent current in a ballistic ring, in which a bulk disorder is absent but the electron dynamics is nevertheless chaotic due to the multiple surface scattering. The electron-electron interaction is neglected.
According to Ref. , the average current for a canonical ensemble (i.e., for a fixed number of electrons in a sample) is given by the following formula:
$$I=\frac{s}{2\mathrm{\Delta }}\frac{}{\mathrm{\Phi }}𝑑ϵ_1𝑑ϵ_2n(ϵ_1)n(ϵ_2)K(ϵ_1,ϵ_2;\mathrm{\Phi })$$
(1)
(we use the units in which $`\mathrm{}=c=1`$). Here $`s`$ is the spin degeneracy, $`\mathrm{\Delta }=(\rho _0V)^1`$ is the mean level spacing in the system, $`\rho _0`$ is the average density of states, $`V`$ is the system volume, $`n(ϵ)`$ is the Fermi distribution function, $`K(ϵ_1,ϵ_2;\mathrm{\Phi })`$ is the dimensionless two-level correlation function at a nonzero magnetic flux $`\mathrm{\Phi }`$, which depends on the energy difference $`ϵ_2ϵ_1=\omega `$:
$$K(\omega ,\mathrm{\Phi })=\frac{1}{\rho _0^2}\delta \rho (E+\omega ,\mathrm{\Phi })\delta \rho (E,\mathrm{\Phi })$$
(2)
($`\delta \rho (E,\mathrm{\Phi })=\rho (E,\mathrm{\Phi })\rho _0`$ is the deviation of the one-particle density of states from its average value). We restrict our analysis to the limit of $`T\mathrm{\Delta }`$, where the formula (1) is valid.
We assume that the sample has the shape of a planar coaxial ring with outer and inner radii $`R_1`$ and $`R_2`$, respectively, threaded by a solenoid carrying a flux $`\mathrm{\Phi }`$ (see Fig. 1). We consider here only narrow rings with $`d=R_1R_2R=(R_1+R_2)/2`$. For this geometry, the number of transverse channels $`N=mv_\mathrm{F}d/2\pi `$, the average density of states $`\rho _0=m/2\pi `$ and the mean level spacing $`\mathrm{\Delta }=(mRd)^1`$.
In order to calculate the two-level correlation function (2), we use the supersymmetric $`\sigma `$ model, which has become a powerful tool in the theory of disordered metals and has been adapted to the description of the classically chaotic systems as well . Here it is appropriate to emphasize that in a clean ring, where a natural ensemble is absent but the dynamics is classically chaotic, the averaging in Eq. (2) must be performed over a wide energy band , in contrast to the case of a disordered ring, where the angular brackets in Eq. (2) imply averaging over different realizations of the random potential. If the shape of a sample is highly symmetric and its surface is smooth on the atomic scale, then the specular boundary conditions commonly used in chaotic billiards give rise to integrability of the system, and the whole approach based on using the supersymmetric $`\sigma `$ model fails (this case was studied in Ref. , using the semiclassical trace formulas). In order to make the particle dynamics chaotic, one can, for instance, slightly deform the shape of the billiard to break its perfect rotational symmetry . However, in this approach, the level correlations can be calculated only numerically. Another way to achieve the chaotic regime, at the same time preserving the macroscopic symmetry of a sample, is to assume that each act of the surface reflection is stochastic itself, i.e., the incident particle gets reflected in some random direction at the surface. This model is commonly referred to as the diffuse reflection and applies to the surfaces which are rough on the atomic scale, which seems to be quite reasonable physical assumption . After several reflections at the walls, the dynamics of an electron becomes fully chaotic.
At $`\omega \mathrm{\Delta }`$, the two-level correlation function (2) can be calculated perturbatively, using the “ballistic” version of the supersymmetric $`\sigma `$ model generalized to the presence of a nonzero magnetic flux :
$$K(\omega ,\mathrm{\Phi })=\frac{\mathrm{\Delta }^2}{2\pi ^2}\mathrm{Re}\underset{i}{}\frac{1}{[i\omega \lambda _i(\mathrm{\Phi })]^2}$$
(3)
where $`\lambda _i(\mathrm{\Phi })`$ are the eigenvalues of the (Cooperon) Liouville operator
$$=v_\mathrm{F}𝐧(_𝐫2ie𝐀)$$
(4)
inside the Aharonov-Bohm billiard ($`𝐧`$ is the direction of momentum). The region of small frequencies $`\omega \mathrm{\Delta }`$, where the perturbative approach fails and the level correlations are described by the universal formulas of the random matrix theory , lies beyond the limits of applicability of the thermodynamic approach to the description of persistent currents. The expression (3) is a direct analog of the Altshuler-Shklovskii spectral function for diffusive systems . Rewriting the Fermi distribution functions in Eq. (1) as Matsubara sums and integrating over energies, we end up with the following expression:
$$I=\frac{2\pi s}{\mathrm{\Delta }}T\underset{n>0}{}\omega _n\frac{K(i\omega _n,\mathrm{\Phi })}{\mathrm{\Phi }},$$
(5)
where $`\omega _n=2\pi nT`$. The sum over $`n`$ on the right-hand side is convergent due to the presence of the differentiation over flux.
The spectrum of the Liouville operator is determined by the eigenvalue equation
$$v_\mathrm{F}𝐧(_𝐫2ie𝐀)f(𝐫,𝐧)=\lambda f(𝐫,𝐧),$$
(6)
with some boundary conditions at the surfaces of the ring. Due to the similarity of Eq. (6) to the Boltzmann kinetic equation, $`f(𝐫,𝐧)`$ having the meaning of the classical distribution function, the boundary conditions at the diffusely reflecting surface $`𝐫=𝐑`$ with the outward normal $`𝐍`$ can be imposed by analogy with the classical kinetic theory. The distribution function of reflected particles can be represented as $`f(𝐑,𝐧)=pf_0(𝐑)+(1p)f(𝐑,\overline{𝐧})`$ , where $`0p1`$ is “the diffuseness coefficient”, $`f_0`$ is an isotropic distribution function, and $`\overline{𝐧}=𝐧2(\mathrm{𝐧𝐍})𝐍`$ is the direction of specular reflection. In this paper, we consider an isotropic diffuse scattering with $`p=1`$, corresponding to the limit of “strong chaos” (for the discussion of applicability of this model to real experimental samples, see below) and the boundary condition, which follows from the particle number conservation, takes the form
$$\frac{1}{\pi }f(𝐑_i,𝐧)|_{(\mathrm{𝐧𝐍}_i)<0}=_{(𝐧^{}𝐍_i)>0}𝑑𝐧^{}(𝐧^{}𝐍_i)f(𝐑_i,𝐧^{}).$$
(7)
Here $`𝑑𝐧=𝑑\varphi /2\pi `$ ($`\varphi `$ is the angle between the direction of momentum of incident particles and the outward normal $`𝐍`$), and $`i=1,2`$ correspond to the outer and inner surfaces of the ring. The distribution function of reflected particles on the left-hand side of Eq. (7) does not depend on $`𝐧`$. A similar approach was used in Ref. for calculation of the corrections to the universal level correlations in a two-dimensional disk without magnetic flux.
Since the magnetic field is absent inside the ring, the trajectory of a particle between collisions with the walls is a straight line. Equation (6) can be solved along the trajectory
$$f(l)=f(0)\mathrm{exp}\left(\frac{\lambda }{v_\mathrm{F}}l\right)\mathrm{exp}\left(2ie_0^l𝐀𝑑𝐥\right).$$
(8)
This expression allows one to establish a relation between $`f(𝐑_1,𝐧)|_{(\mathrm{𝐧𝐍}_1)>0}`$ and $`f(𝐑_1,𝐧)|_{(\mathrm{𝐧𝐍}_1)<0}`$ or $`f(𝐑_2,𝐧)|_{(\mathrm{𝐧𝐍}_2)<0}`$, and also between $`f(𝐑_2,𝐧)|_{(\mathrm{𝐧𝐍}_2)>0}`$ and $`f(𝐑_1,𝐧)|_{(\mathrm{𝐧𝐍}_1)<0}`$ in Eq. (7) and obtain a rather cumbersome algebraic equation for the eigenvalues of the Liouville operator in a ring of arbitrary width. Fortunately, in the case of a narrow ring, the problem can be considerably simplified, since we can replace our annular billiard by a strip of length $`L=2\pi R`$ and width $`d`$ such that $`\delta =d/L1`$ (see Fig. 2). In addition, the vector potential can be put constant inside the sample: $`A_\theta =\mathrm{\Phi }/L`$ ($`\theta `$ is the polar angle in real space). This simplification implies that the contribution from the trajectories connecting two points at the outer wall is neglected (it can be checked that this contribution is indeed small at $`\delta 1`$). However, there is an important property of the annular geometry which should be taken into account, namely the finiteness of the flight length $`l(\varphi )`$ between successive collisions with the walls ($`lL`$). This feature can be restored in the strip billiard if to assume that there exists the maximum scattering angle $`\varphi _0`$ such that $`\mathrm{sin}\varphi _0=R_2/R_112\pi \delta `$.
Let $`x=R\theta `$, then we obtain, from Eq. (8) and Fig. 2:
$$\begin{array}{ccc}& & f_{1,>}(x,\varphi )=f_{2,<}(xd\mathrm{tan}\varphi )\hfill \\ & & \times \mathrm{exp}\left(\frac{\lambda }{v_\mathrm{F}}\frac{d}{\mathrm{cos}\varphi }\right)\mathrm{exp}\left(2\pi i\frac{2\mathrm{\Phi }}{\mathrm{\Phi }_0}\frac{d\mathrm{tan}\varphi }{L}\right)\hfill \\ & & f_{2,>}(x,\varphi )=f_{1,<}(x+d\mathrm{tan}\varphi )\hfill \\ & & \times \mathrm{exp}\left(\frac{\lambda }{v_\mathrm{F}}\frac{d}{\mathrm{cos}\varphi }\right)\mathrm{exp}\left(2\pi i\frac{2\mathrm{\Phi }}{\mathrm{\Phi }_0}\frac{d\mathrm{tan}\varphi }{L}\right).\hfill \end{array}$$
(9)
Here $`f_{i,>(<)}(x,\varphi )=f(𝐑_i,𝐧)|_{(\mathrm{𝐧𝐍}_i)>0(<0)}`$ and $`\mathrm{\Phi }_0=2\pi \mathrm{}c/e`$ is the flux quantum. It is convenient to expand the functions $`f_i`$ in the Fourier series: $`f_{i,<}(x)=_qf_{i,q}e^{iqx}`$. Since $`f(x+L,𝐧)=f(x,𝐧)`$, the wave number is quantized: $`q=m(2\pi /L)`$, where $`m=0,\pm 1,\mathrm{}`$. Substitution of the expressions (9) in Eq. (7) results in the following equation for the eigenvalues $`\lambda _{m,k}(\mathrm{\Phi })=(v_\mathrm{F}/d)z_{m,k}(\mathrm{\Phi })`$ of the Liouville operator (4):
$$F_m(z,\nu )A_m^2(z,\nu )1=0.$$
(10)
Here $`\nu =2\mathrm{\Phi }/\mathrm{\Phi }_0`$ and
$`A_m={\displaystyle \frac{1}{\mathrm{sin}\varphi _0}}{\displaystyle _0^{\varphi _0}}𝑑\varphi \mathrm{cos}\varphi \mathrm{exp}(z\mathrm{sec}\varphi )`$ (11)
$`\times \mathrm{cos}[2\pi \delta (m\nu )\mathrm{tan}\varphi ].`$ (12)
To guarantee correct normalization of the distribution function of reflected particles, the prefactor $`\mathrm{sin}^1\varphi _0`$ has been included in the definition of $`A_m`$.
The eigenvalues of the Liouville operator at fixed $`m`$ are labeled by $`k=0,1,2,\mathrm{}`$, and have complex conjugated partners for all $`(m,k)`$. At $`\mathrm{\Phi }=0`$, Eq. (10) has the solution $`z_{0,0}=0`$ corresponding to the equilibrium distribution function, which does not depend on $`𝐧`$ and $`𝐫`$. It is clear from Eq. (11) that the eigenvalues $`\lambda _{m,k}(\mathrm{\Phi })`$ have the following property: $`\lambda _{m,k}(\mathrm{\Phi }+\mathrm{\Phi }_0/2)=\lambda _{m+1,k}(\mathrm{\Phi })`$ so that any quantity which can be represented in the form $`_{m,k}h(\lambda _{m,k}(\mathrm{\Phi }))`$, where $`h(\lambda )`$ is some function, must be periodic function of $`\mathrm{\Phi }`$ with the period of half the flux quantum $`\mathrm{\Phi }_0`$. Note that, since the eigenvalues of the Liouville operator physically correspond to the relaxation rates of different harmonics of a nonequilibrium classical distribution function, the real parts of all $`\lambda _{m,k}`$ are positive.
Using the representation of the sum over $`i=(m,k)`$ in Eqs. (3) and (5) as an integral over a contour $`C`$ enclosing all zeros of the function $`F_m(\nu ,z)`$ in the complex plane of $`z`$:
$`{\displaystyle \underset{m,k}{}}{\displaystyle \frac{1}{[\omega _n+\lambda _{m,k}(\mathrm{\Phi })]^2}}`$
$`=\left({\displaystyle \frac{d}{v_\mathrm{F}}}\right)^2{\displaystyle \underset{m}{}}{\displaystyle _C}{\displaystyle \frac{dz}{2\pi i}}{\displaystyle \frac{1}{(z+\omega _nd/v_\mathrm{F})^2}}{\displaystyle \frac{}{z}}\mathrm{ln}F_m(z,\nu )`$
$`=\left({\displaystyle \frac{d}{v_\mathrm{F}}}\right)^2{\displaystyle \underset{m}{}}{\displaystyle \frac{^2}{z^2}}\mathrm{ln}F_m(z,\nu )|_{z=\omega _nd/v_\mathrm{F}},`$
we finally obtain
$`{\displaystyle \frac{I}{I_0}}`$ $`=`$ $`{\displaystyle \frac{2s\delta ^2}{\pi N^3}}\left({\displaystyle \frac{T}{\mathrm{\Delta }}}\right)^2\mathrm{Re}{\displaystyle \underset{n>0}{}}{\displaystyle \underset{m}{}}n`$ (14)
$`\times {\displaystyle \frac{}{\nu }}{\displaystyle \frac{^2}{z^2}}\mathrm{ln}F_m(z,\nu )|_{z=2\pi \delta nT/N\mathrm{\Delta }},`$
where $`I_0=ev_\mathrm{F}/L`$ is the current carried by a single electron state in an ideal one-dimensional ring, and $`F_m(z,\nu )`$ is given by Eq. (10).
Due to the existence of different energy scales in the system, the temperature dependence of the persistent current is characterized by several distinct regimes. The smallest energy scale is the mean level spacing $`\mathrm{\Delta }`$, which also limits the applicability of the thermodynamic approach itself. Two other scales are given by the inverse times $`t_L^1=v_\mathrm{F}/L=N\mathrm{\Delta }`$ and $`t_d^1=v_\mathrm{F}/d=N\mathrm{\Delta }/\delta `$. It follows from Eqs. (14) and (11) that at $`Tt_d^1`$ the persistent current is exponentially small: $`II_0\mathrm{exp}(T/N\mathrm{\Delta })`$.
In a multichannel ring, there also exists yet another energy scale $`\mathrm{\Delta }E_cN\mathrm{\Delta }`$, whose origin can be most easily understood if to return to Eq. (5). Using the identity $`a^2=_0^{\mathrm{}}𝑑ye^{ay}y`$ and calculating the sum over $`n`$, we end up with the following expression
$$\frac{I}{I_0}=\frac{s}{2\pi ^4\delta }\frac{\mathrm{\Delta }}{T}_0^{\mathrm{}}\frac{d\xi \xi ^2}{\mathrm{sinh}^2\xi }\phi (\xi ;T,\mathrm{\Phi }),$$
(15)
where
$$\phi =\frac{\mathrm{\Phi }_0d}{2v_\mathrm{F}}\mathrm{Re}\underset{m,k}{}\frac{d\lambda _{m,k}(\mathrm{\Phi })}{d\mathrm{\Phi }}\mathrm{exp}\left(\frac{\lambda _{m,k}(\mathrm{\Phi })\xi }{\pi T}\right).$$
(16)
At sufficiently low temperatures, the dominant contribution comes from $`\lambda _{0,0}(\mathrm{\Phi })`$, which is the solution of Eq. (10) with smallest real part. At $`\mathrm{\Phi }0`$, the energy $`E_c`$ coincides with $`\lambda _{0,1}(\mathrm{\Phi }=0)`$, and at $`TE_c`$ the sum over $`(m,k)`$ in Eq. (16) can be replaced by its value at $`m=k=0`$. It follows from Eq. (10) that $`z_{0,0}(\nu )=2\pi \delta ^2|\mathrm{ln}\delta |\nu ^2`$ at $`\nu 0`$, and we obtain:
$$\frac{I}{I_0}=\frac{2s}{3\pi }\left(\delta \mathrm{ln}\frac{1}{\delta }\right)\frac{\mathrm{\Delta }}{T}\frac{\mathrm{\Phi }}{\mathrm{\Phi }_0}.$$
(17)
Thus the persistent current and also the orbital magnetic moment $`M=\pi R^2I`$ of a ballistic ring exhibit a Curie-type response on magnetic flux at $`\mathrm{\Delta }TE_c`$.
The energy $`E_c`$ is associated with the relaxation time $`t_c=E_c^1`$ of any initially nonequilibrium state of the system to a spatially uniform and isotropic distribution (the ergodic time). In the systems with diffusive electron dynamics, $`t_c=L^2/D`$ ($`L`$ is the system size, $`D`$ is the diffusion coefficient) and $`E_c`$ is called Thouless energy. In general, it is not so easy, however, to say what the ergodic time is in ballistic systems . In our case, it can be estimated from simple physical considerations, without solving Eq. (10) explicitly, as follows. The typical flight time $`t_f`$ between two reflections (see Fig. 2) is
$$t_f=\frac{1}{v_\mathrm{F}}\sqrt{l^2(\varphi )}t_L\delta ^{3/4}.$$
(18)
Since the particle completely “forgets” the initial direction of its momentum after each collision with the walls, the characteristic time for establishing an isotropic momentum distribution is of the order of $`t_f`$. In contrast, the characteristic time of filling uniformly all available configuration space is much longer. Indeed, the typical displacement between two collisions is given by $`\sqrt{(\delta x)^2}v_\mathrm{F}t_fL\delta ^{3/4}`$. According to the central limit theorem, the probability to find a particle at the distance $`\mathrm{\Delta }x=_{i=1}^M\delta x_i`$ after $`M1`$ collisions obeys the Gaussian distribution with the standard deviation $`\sqrt{(\mathrm{\Delta }x)^2}=\sqrt{M(\delta x)^2}`$. Substituting here $`Mt/t_f`$ and using Eq. (18), we obtain
$$(\mathrm{\Delta }x(t))^2=D_{\mathrm{eff}}t,D_{\mathrm{eff}}v_\mathrm{F}^2t_f.$$
(19)
Thus the motion of an electron along the circumference of the ring is in fact diffusive at $`tt_f`$, and
$$E_c=\frac{D_{\mathrm{eff}}}{L^2}t_L^1\delta ^{3/4}.$$
(20)
In a narrow multichannel ring the hierarchy of the characteristic energy scales looks as follows: $`\mathrm{\Delta }E_ct_L^1t_f^1t_d^1`$. Note that all our results are valid if one can neglect the deflection of the electron trajectories inside the ring by an external magnetic field, which is the case if the inverse cyclotron radius $`R_c^1=(eB/mv_\mathrm{F})`$ is smaller than the inverse typical flight length $`(v_\mathrm{F}t_f)^1`$. This condition, rewritten as $`\mathrm{\Phi }/\mathrm{\Phi }_0N\delta ^{7/4}`$, is always satisfied in narrow rings.
In order to facilitate comparison of our results with the experimental data, let us rewrite Eq. (17) in a different form, using the identity $`\mathrm{\Delta }/T=N^1(L_T/L)`$, where $`L_T=v_\mathrm{F}/T`$ is the length scale associated with temperature. In the experiment of Ref. , $`\delta 0.1`$, $`N4`$, $`L_T/L5`$, so that $`T\genfrac{}{}{0pt}{}{<}{}\mathrm{\Delta }`$. Due to the presence of the factor $`\delta \mathrm{ln}\delta 1`$ in Eq. (17), the predicted current turns out to be smaller than the experimentally observed (and also than the theoretically calculated for the case of specular reflection ). This discrepancy can be attributed to the fact that, because of the low density of carriers, the Fermi wavelength greatly exceeds the size of the surface irregularities, so that a considerable fraction of particles gets reflected specularly rather than diffusely (i.e., $`p<1`$), and one should describe the semiclassical dynamics in the experimental conditions of Ref. as “weakly chaotic”.
To summarize, we calculated the persistent current in a small clean metal ring, in which the electron dynamics is chaotic due to the stochastic surface scattering. A general analytical expression for the persistent current is derived in the limit of “strong chaos”, and a Curie-type orbital magnetic response on a small external flux is predicted at $`\mathrm{\Delta }TE_c`$.
The author would like to thank B. Simons for interest in this work and useful discussions. This work was financially supported by the Engineering and Physical Sciences Research Council, UK.
|
no-problem/9905/astro-ph9905119.html
|
ar5iv
|
text
|
# A state transition of GX 339-4 observed with RXTE
## 1 INTRODUCTION
GX 339-4 was discovered with the OSO-7 satellite (Markert et al. 1973), and identified with a V$``$18 magnitude star in the optical (Doxsey et al. 1979), with a $``$15 hour photometric period interpreted as the orbital period (Callanan et al. 1992). Although no dynamical measurement of the mass of the compact object is available, the source is included in the class of Black-Hole Candidates (BHCs) because of its fast aperiodic variability and the occurrence of spectral/timing transitions similar to those of established sources in the class (see Méndez & van der Klis 1997). The source is usually observed in the Low State (LS), where the 1-10 keV energy spectrum is a power-law with spectral index $`\mathrm{\Gamma }`$1.5-2.0 (Tananbaum et al. 1972) and the power spectrum consists of a strong (25-50% fractional rms) band-limited noise component similar to that observed in Cyg X-1 (Oda et al. 1971, Miyamoto et al. 1992). In the High State (HS), the source is brighter below 10 keV and an ultra-soft component appears in the energy spectrum, while the power law component steepens considerably; the power spectrum is reduced to a power law with a few percent fractional rms (Grebenev et al. 1991). In the Very-High State (VHS), observed on only one occasion, the source is much brighter below 10 keV, mainly due to the increased luminosity of the ultra-soft thermal component, the power law has a photon index of $`\mathrm{\Gamma }`$2.5, and the power spectrum shows a 1-15% rms variable band-limited noise with a characteristic break frequency much higher than in the LS (Miyamoto et al. 1991). Méndez & van der Klis (1997) identified a fourth state in GX 339-4, called Intermediate State (IS), observed also in GS 1124-68 by Belloni et al. (1997): its timing and spectral characteristics are similar to those of the VHS, but the IS appears at much lower luminosities than the VHS. In GS 1124-68, as the outburst proceeded, the source moved from the VHS to the HS to the IS and then to the LS, indicating that IS and VHS are indeed different states (see Belloni et al. 1997). Currently, GX 339-4 is the only system, together with GS 1124-68, in which all four states have been observed, although recently the superluminal transient GRO J1655-40 has shown a similar behavior (Méndez, Belloni & van der Klis (1998).
In this letter, we report the results of RXTE/PCA observations of GX 339-4 during a transition to the high state in 1998 and compare them with previous observations in the LS (analyzed in detail by Nowak, Wilms & Dove 1999 and Wilms et al. 1999).
## 2 X-RAY OBSERVATIONS
### 2.1 All-Sky Monitor
The All-Sky-Monitor (ASM: Levine et al. 1996) on board RXTE observed GX 339-4 almost continuously since the beginning of the mission. The ASM light curve, in 1-day bins, is shown in Fig. 1. The source was in a low-flux, hard state for the whole of 1996 and 1997. The flux level and the hardness ratio during this period indicate that the source was in the low state. Some variability can be seen, in the form of little “outbursts”, whose ASM flux is anti-correlated with the hard X-ray flux as observed by CGRO/BATSE (see Rubin et al. (1998)). In the beginning of 1998 January, a sharp increase in the ASM count rate is visible. The source reached a level of approximately 20 cts/s and remained approximately constant for $``$150 days before the flux started to decrease until it finally went back to a low value (around $``$2 cts/s). The switch to a higher count rate triggered a TOO observation with the PCA/HEXTE instruments.
### 2.2 PCA/HEXTE
In response to the Target-of-Opportunity call, RXTE observed GX 339-4 for 45 ks between 1998 Jan 18 and Jan 15 (observation B, see Table 1). The time of the observations corresponds to the rise phase of the outburst, just after a small flare (see Fig. 1). A month later the sources reached the peak of the outburst (Fig. 1) and a second, much shorter, pointing was performed (observation C, see Table 1). In addition, we analyzed a much older pointing extracted from the RXTE archive (observation A, see Table 1), obtained when the source was in the LS (Fig. 1).
From PCA observations A and B, we divided the light curve from the high-time resolution data in segments, produced a power spectrum from each segment, and averaged them together. The length of each segment was 256s and 64s for observations A and B respectively. All PCA channels were included in the analysis. We subtracted from the resulting power spectra the contribution from the Poissonian noise and the very large event window contribution (Zhang et al. 1995). Because of its shortness, no useful data could be obtained from observation C. The two power spectra can be seen in Fig. 2.
Observation A looks like a typical LS, with a strong band-limited noise (BLN) component. We fitted this power spectrum with a rather complicated model, consisting of a broken power law, a zero-centered Lorentzian of width $`\nu _b`$=0.75$`\pm `$0.04 Hz and a narrow QPO at $`\nu _1`$=0.35$`\pm `$0.03 Hz. In addition, a second QPO at $`\nu _2`$=0.48$`\pm `$0.03 Hz (visible as a relatively small but significant feature in Fig. 2) and a broad Lorentzian bump at $`\nu _3`$=3.14$`\pm `$0.17 Hz were needed. An examination of more LS power spectra from other observations showed that $`\nu _1`$ and $`\nu _2`$ are probably the second and third harmonics of a fundamental $`\nu _00.16`$ Hz. The total fractional rms in the 0.01–100 Hz range is 41%. The model used is different from that of the much more complete work on the LS by Nowak, Wilms & Dove (1999), but we use this observation only for comparison. The power spectrum from observation B looks completely different. Not much noise is observed and a simple power law model (with index 0.62$`\pm `$0.04) gives a good fit to the data. The total rms in the 0.1–100 Hz band is 2%. This weak noise component is characteristic of the HS.
From all three observations, after checking that there were no large flux variations, we extracted PCA and HEXTE energy spectra following the standard procedures for XTE, using ftools 4.2. For spectral accumulation, we selected intervals where all 5 PCA detectors were turned on and the pointing offset was less than 0.02 degrees. In order to minimize contamination, we further selected data only from intervals when the Earth elevation angle of the source was $`>`$10 degrees and the satellite was well outside the South Atlantic Anomaly. PCA background files were produced with the program pcabackest version 2.1b. We produced PCA detector response matrices using pcarmf v3.5. For HEXTE, we accumulated background spectra from off-source pointings and we used the latest background matrices made available by the RXTE team. For observations B and C, not enough signal was present in the HEXTE data above the first few channels, and those were therefore not used in the analysis. We used the HEXTE detector response matrices from 1997 March 20th. For the spectral fits, we used XSPEC 10.00 and added a 1% systematic error to the PCA spectra.
We fitted the spectra with a rather complex but standard model, consisting of a power law with an exponential cutoff at high energies, a multicolor disk blackbody (Mitsuda et al. 1984) and a gaussian emission line. Correction for interstellar absorption was included, as well as a “smeared edge” (Ebisawa et al. 1994) which was found to be needed in order to obtain a satisfactory fit. The central energy of the gaussian line was kept fixed at 6.4 keV. Not all components were needed to model all spectra. No gaussian line and no disk-blackbody components were needed for observation A, and no smeared edge was needed for observation C (in this case, because of the short exposure time). Notice that, for the HS spectra, the emission line and the edge could arise from the fact that the continuum model is an approximation, since here the thermal component is strong. Indeed, both the line and the edge are located in the energy range where the two continuum components become comparable. We do not attempt a physical interpretation of these features. The best fit parameters can be found in Table 2. The spectra and the residuals after model subtraction are shown in Fig. 3. It is evident from Table 2 that the large increase in X-ray flux between observations B/C and observation A is due entirely to the appearance of a soft thermal component, while the power law component steepens and becomes fainter.
## 3 Discussion
The RXTE/ASM light curve shown in Fig. 1 strongly suggests that GX 339-4 underwent a transition from the LS to the HS. The long-term behavior consists basically of an interval of $``$400 days of increased ASM flux. Our power spectra and energy spectra are unambiguous: during observation A (reported also by other authors: Nowak, Wilms & Dove 1999, Wilms et al. 1999) the source was in its Low State, the state in which GX 339-4 is mostly observed. This is characterized by a flat ($`\mathrm{\Gamma }`$1.6) power-law energy spectrum (with evidence of a high-energy cutoff) and by a strong band-limited noise in the power spectrum with a QPO peak and its harmonics. At 4 kpc (see Zdziarski et al. 1998), the 2.5-20 keV observed luminosity is $`4\times 10^{36}`$erg/s. Both energy distribution and power spectrum are extremely similar to those of Cyg X-1. This similarity includes the $``$3 Hz broad bump detected in the power spectrum. Notice that the low-frequency QPO at 0.35 Hz and the 3 Hz bump have been shown by Psaltis, Belloni & van der Klis (1999) to fit a correlation which is observed when combining QPO data from a number of sources, both containing neutron stars and black-hole candidates.
The X-ray properties of the source changed drastically after the transition (observations B and C). Very little variability is observed in the timing domain: the power spectrum shows only a weak power-law component. The energy spectrum is dominated by a thermal component, which we fitted with the standard model used for black-hole candidates, i.e. a multicolor disk-blackbody: the output parameters are temperature of the inner edge of the accretion disk and the radius of the inner edge itself. Interestingly, the radius which we derive is in the range expected for the inner-most stable orbit around a black hole, although the precise value cannot be determined as we do not know the inclination of the system. Moreover, note that due to the approximated form of the disk-blackbody model used (see Mitsuda et al. 1984), the derived radius is likely to be smaller than the real one as the effective blackbody temperature is probably smaller than the observed color temperature ( Lewin, van Paradijs & Taam 1995). The 2.5-20 keV luminosity of this component (at a distance of 4 kpc) is 5$`\times 10^{36}`$ and 8$`\times 10^{36}`$erg/s for observation B and C, respectively, whereas the corresponding luminosity of the (steeper) power law component is 10 and 4$`\times 10^{35}`$erg/s, respectively. The difference between the two pointings indicates a further anticorrelation between the two components. These parameters are very similar to what is observed for the HS both in GX 339-4 itself and in other sources (GX 339-4: Grebenev et al. 1991; GS 1124-68: Ebisawa et al. 1994; GRO J1655-40: Méndez, Belloni & van der Klis; 4U 1630-47: Kuulkers et al. 1998; Oosterbroek et al. 1998; LMC X-3, Ebisawa et al. 1993). Interestingly, Fender et al. (1999) found an anticorrelation between the X-ray flux (from RXTE/ASM) and radio flux. They show that the radio flux is strongly suppressed during the HS period. This is analogous to what is observed in Cyg X-1, where there is a suppression of radio flux during transitions to and from a LS (see Zhang et al. 1997b). The transition is clearly detected by CGRO/BATSE, in the form of a anti-correlation with the ASM data (Fender et al. 1999).
Comparing our results with those of Cui et al. (1997a,b), we can see that, despite the similarity in long-term light curves, in Cyg X-1 the situation is different. During the transition, a bright soft component appears in the energy spectrum of Cyg X-1, but the power law component remains relatively strong. Moreover, the power spectra show either a band-limited noise component or a power law component, but always with a fractional rms well above 10%. This is also evident in the light curve in Fig. 1 from Cui et al. (1997b), where large variations can be seen. Following Belloni et al. (1996), comparing our results for GX 339-4, we confirm that Cyg X-1 during the transition of 1996 never reached the HS (as observed in other sources like LMC X-3, always seen in this state, Ebisawa et al. 1993), but switched from the LS to the IS and back. However, there is no sign of the IS in our observations. Méndez & van der Klis (1997) compiled a list of flux thresholds for the various states based on previous state transitions. From their list, assuming a typical HS energy spectrum, we estimate that, based on the previous transitions, the IS should not have started until a count rate of $``$30 cts/s had been reached in the RXTE/ASM. If GX 339-4 went into the IS before our observations, it did it at a different flux level. As a direct comparison, the IS in Méndez & van der Klis had a 2-10 keV flux of 1.5$`\times 10^9`$erg cm<sup>-2</sup>s<sup>-1</sup>, a factor of 2.3 less than what we observe here. Notice that just before our observation B, a small peak is visible in Fig. 1, at 15-18 ASM cts/s. It is possible that during this time the source went indeed through an IS, although we cannot confirm it. This indicates that, if flux is a good tracer of accretion rate, it is not the only parameter governing these transitions.
A simple classification in terms of four basic states with a definite dependence on flux (see van der Klis 1995 for a review) fails to reproduce the whole wealth of behaviors observed in black-hole candidates. In addition to sources which do not seem to follow the simple scheme outlined above (e.g. GRS 1915+105, Belloni 1998; XTE J1550-564, Sobczak et al. 1999; GS 2023+338, Zycki, Done & Smith 1999), there are other examples which indicate that accretion rate is not the only parameter governing these transitions. This is particularly clear in the case of the 1998 outburst of 4U 1630-47, where a transition between IS and HS was not followed by a reverse transition as the source went back into quiescence (Dieters et al. 1999 in preparation). Moreover, in 1996, RXTE observed a state transition of Cyg X-1; the source increased its soft X-ray flux by a factor of 3-4 (Cui 1996; Cui, Focke & Swank 1996), while the bolometric flux remained approximately constant (Zhang et al. 1997a).
The results presented in this paper, together with recent results for 4U 1630-47 and other transients like GRS 1915+105 (Belloni 1998), GRO J1655-40 (Méndez, Belloni & van der Klis 1997; Tomsick et al. 1999), XTE J1550-564 (Cui et al. 1999; Sobczak et al. 1999) and XTE J1748-288 (Revnivtsev, Trudolyubov & Borozdin 1999; Focke & Swank 1999) show that the classification in terms of 4 source states is followed faithfully by some sources (like GRO J1655-40, XTE J1550-564 and XTE J1748-288), but is complicated by the absence of a unique flux “trigger” for transitions between states (like in 4U 1630-47 and GX 339-4) and by a completely different behavior in the case of GRS 1915+105.
MM is a fellow of the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina. This work was supported in part by the Netherlands Foundation for Research in Astronomy (ASTRON) under grant 781-76-017. TB is supported by NWO Spinoza grant 08-0 to E.P.J. van den Heuvel. SD is supported by NASA LTSA grant NAG 5-6021. WHGL acknowledges support from NASA. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
|
no-problem/9905/cond-mat9905394.html
|
ar5iv
|
text
|
# Non-k-diagonality in the interlayer pair-tunneling model of high-temperature superconductivity
## I INTRODUCTION
The interlayer pair-tunneling (ILT) model of high-temperature superconductivity has been the focus of much attention since it was introduced and later elaborated on quantitatively. Within the ILT model, the pairing of electrons in individual CuO<sub>2</sub>-layers is considerably enhanced by the tunneling of Cooper pairs between neighbouring layers, giving critical temperatures which are substantially higher than those arising solely from a reasonable in-plane effective electron-electron attraction in a two-dimensional (2D) BCS-like theory.
The central underlying assumption of the ILT model is that the normal state of the cuprates is a strongly correlated non-Fermi liquid, where single-electron interlayer tunneling is incoherent or strongly damped, resulting in a frustrated $`c`$-axis kinetic energy. This kinetic energy is substantially lowered in the superconducting state, through tunneling of Cooper pairs between CuO<sub>2</sub> layers. Thus, contrary to the situation in conventional superconductors, it is the lowering of the kinetic energy, and not the potential energy, which drives the transition.
Recently, there have been extensive discussions in the literature about experimental tests of an unconventional relation, predicted by the ILT model, between the $`c`$-axis penetration depth $`\lambda _c`$ and the condensation energy $`E_{\mathrm{cond}}`$. The agreement in LSCO seems quite good, but experiments on Hg-1201 and Tl-2201 give estimates for $`\lambda _c`$ which are 8-20 times larger than the predicted values. However, note that Chakravarty et al. have argued that this discrepancy between theory and experiment can be drastically reduced by taking more carefully into account the fluctuation contributions to the normal state specific heat when estimating $`E_{\mathrm{cond}}`$.
It is not our purpose here to consider the microscopic foundations of the ILT mechanism. Instead, we will take it as a phenomenological starting point, and explore the effects of some modifications of the form of the pair tunneling term used in Ref. . There it was argued that, in order to obtain critical temperatures of the same order of magnitude as found in the high-$`T_c`$ cuprates, it was essential that the 2D momentum of the Cooper-pair electrons was conserved in the tunneling process. This momentum conservation was argued to follow from the momentum conservation of the single-electron tunneling Hamiltonian, in the absence of inelastic scattering. Translated to real space, this momentum conservation means that the electron-electron attraction associated with the interlayer tunneling has an infinite range.
A natural question to ask is then how sensitive the critical temperature $`T_c`$ is to a relaxation of this constraint. Specifically, what is the typical order of the interaction range below which $`T_c`$ will drop to values which are no longer comparable to critical temperatures in the high-$`T_c`$ cuprates? Moreover, several of the unusual $`k`$-space features of the gap predicted within the ILT mechanism have their origin in the assumed momentum conservation.
We will address this question phenomenologically, modelling the finite range by postulating modified functional forms of the pair tunneling term in which phenomenological parameters are introduced to measure the degree of “screening”. This will in turn lead to modifications of the original gap equations, which are then solved self-consistently to obtain the critical temperature and the superconducting gap. We expect that qualitatively correct conclusions may be drawn from our modelling of the k-space broadening. Brief accounts of parts of this work have appeared in print elsewhere.
## II FORMULATION OF THE PROBLEM
For simplicity, we consider compounds with two CuO<sub>2</sub>-layers per unit cell. The generalization to an arbitrary number of CuO<sub>2</sub>-planes per unit cell is straightforward. Below the superconducting transition temperature, we will assume that the quasi-particle description is approximately valid. The total Hamiltonian is taken to be the sum of 2D BCS Hamiltonians for the individual layers, and an interlayer pair tunneling Hamiltonian, $`H=H_{\mathrm{layer}}+H_J`$. When the zero-momentum pairing assumption is invoked, the intralayer part is given by
$`H_{\mathrm{layer}}={\displaystyle \underset{k,\sigma ,i=1,2}{}}\epsilon _kc_{k,\sigma }^{(i)}c_{k,\sigma }^{(i)}+{\displaystyle \underset{k,k^{},i=1,2}{}}V_{k,k^{}}c_{k,}^{(i)}c_{k,}^{(i)}c_{k^{},}^{(i)}c_{k^{},}^{(i)},`$ (1)
while the interlayer pair-tunneling contribution to the Hamiltonian is given by the form
$$H_J=\underset{k,k^{}}{}T_J(k,k^{})c_k^{(1)}c_{k,}^{(1)}c_k^{}^{(2)}c_k^{}^{(2)}+\text{h.c.}$$
(2)
Here $`c_{k\sigma }^{(i)}`$ is the creation operator of an electron in layer $`i`$ ($`i=1,2`$) with 2D in-plane wave vector $`k`$ and spin projection $`\sigma `$, $`\epsilon _k`$ is the normal state dispersion measured relative to the Fermi level, and $`V_{k,k^{}}`$ is the inplane contribution to the pairing kernel.
An apparently pathological aspect of a particular version of (2), namely with a $`k`$-diagonal tunneling term $`T_J(k,k^{})=T_J\delta _{k,k^{}}`$, becomes evident on Fourier-transforming back to real space, where it takes the form
$`{\displaystyle \frac{T_J}{N}}{\displaystyle \underset{R_1,R_2,r}{}}c_{R_1+r/2,}^{(1)}c_{R_1r/2,}^{(1)}c_{R_2r/2,}^{(2)}c_{R_2+r/2,}^{(2)}+\text{h.c.}`$ (3)
where $`N`$ is the number of lattice sites per layer, and $`r`$ is the relative coordinate and $`R_i`$ the center of mass coordinate in layer $`i`$ of the two tunneling electrons . Note that there are no restrictions on $`|R_1R_2|`$ due to the zero-momentum pairing assumption, as in conventional superconductors. What is not conventional is that there is no restriction on the relative positions in each plane for which two electrons feel an attraction. Hence, $`T_J\delta _{k,k^{}}`$ represents an infinite-range attraction, contrary to the conventional case where it is a (retarded) contact-attraction. That such a version of the ILT-model then gives a large value of $`T_c`$ is perhaps not surprising, but it is difficult to understand how such an effective attraction is produced.
The $`k`$-diagonal model must therefore be viewed as an idealization, and the issue to adress is how representative this limit is, if at all. The more general model given in (2) yields
$`{\displaystyle \frac{T_J}{N}}{\displaystyle \underset{R_1,R_2,r}{}}G(|r|)c_{R_1+r/2,}^{(1)}c_{R_1r/2,}^{(1)}c_{R_2r/2,}^{(2)}c_{R_2+r/2,}^{(2)}+\text{h.c.}`$ (4)
The characteristic decay-length of the function $`G(|r|)=_ke^{ikr}f(k)`$, with $`f(k)`$ defined via $`T_J(k,k^{})=T_Jf(kk^{})`$, represents the range of the effective interlayer tunneling attraction.
By assuming a layer-independent pair amplitude, the total Hamiltonian becomes decoupled in the layer indices, and the gap equation is seen to be the same as in the BCS case when one makes the replacement $`V_{k,k^{}}V_{k,k^{}}T_J(k,k^{})`$, i.e.
$$\mathrm{\Delta }_k=\underset{k^{}}{}V_{k,k^{}}\mathrm{\Delta }_k^{}\chi _k^{}+\underset{k^{}}{}T_J(k,k^{})\mathrm{\Delta }_k^{}\chi _k^{},$$
(5)
where $`\mathrm{\Delta }_k`$ is the gap function, and $`\chi _k`$ is the pair susceptibility, given by $`\chi _k=\mathrm{tanh}(\beta E_k/2)/2E_k`$, where $`E_k=\sqrt{\epsilon _k^2+|\mathrm{\Delta }_k|^2}`$, $`\beta =1/k_BT`$, $`k_B`$ is Boltzmann’s constant and $`T`$ is the temperature. We will consider $`V_{k,k^{}}`$ to be a separable function of $`k`$ and $`k^{}`$, i.e. $`V_{k,k^{}}=Vg_kg_k^{}`$, where $`g_k`$ belongs to the set of basis functions for irreducible representations of the point group of the underlying lattice, and $`V>0`$ is an effective two-particle scattering matrix element.
## III Gap equation in energy space
Ref. studied the case $`T_J(k,k^{})=T_J\delta _{k,k^{}}`$, i.e. the pair tunneling matrix element is both diagonal and $`k`$-independent. Using also the BCS approximation $`g_k=\mathrm{\Theta }(\omega _D|\epsilon _k|)`$, where $`\omega _D`$ is an energy cutoff, the gap then depends on $`k`$ only through $`\epsilon _k`$, so that the gap equation can be written in energy space as
$$\mathrm{\Delta }(\epsilon )=\mathrm{\Delta }_0\mathrm{\Theta }(\omega _D|\epsilon |)+T_J\mathrm{\Delta }(\epsilon )\chi (\epsilon ),$$
(6)
where
$$\mathrm{\Delta }_0=\lambda _{\omega _D}^{\omega _D}𝑑\epsilon \mathrm{\Delta }(\epsilon )\chi (\epsilon ).$$
(7)
The BCS coupling constant is $`\lambda =VN(\epsilon _F)`$, where $`N(\epsilon _F)`$ is the density of states per spin at the Fermi level $`\epsilon _F`$ (i.e. here we have made the usual approximation of neglecting the variation of $`_k\epsilon `$ inside the thin Debye shell around the Fermi energy). This gap equation can be regarded as the limit $`\omega 0`$ of the more general equation
$$\mathrm{\Delta }(\epsilon )=\mathrm{\Delta }_0\mathrm{\Theta }(\omega _D|\epsilon |)+\frac{T_J}{2\omega }_{\epsilon \omega }^{\epsilon +\omega }𝑑\epsilon ^{}\mathrm{\Delta }(\epsilon ^{})\chi (\epsilon ^{}),$$
(8)
where the parameter $`\omega `$ provides a measure of the amount of $`k`$-space broadening in the interlayer pairing kernel.
We have solved (8) self-consistently and show in Fig. 1 the results for $`T_c`$ as function of $`\omega `$ for $`T_J=30`$ meV, $`\omega _D=20`$ meV and $`\lambda =0.1`$. The most important feature of this figure is the moderate reduction of $`T_c`$ as $`\omega `$ is increased from zero. To reduce $`T_c`$ by a factor $`2`$ requires a broadening of $`\omega 40`$ meV.
If we convert the energy broadening of the ILT term to a length using $`\omega =\mathrm{}^2k^2/(2M)`$, with $`M`$ equal to the electron mass, we obtain for the length $`l=1/k`$
$$l\left(\frac{62}{\sqrt{\omega }}\right)\text{Å},$$
(9)
where $`\omega `$ is to be measured in meV. Setting $`\omega =40`$ gives an interaction range $`l9.8`$ Å.
## IV Gap equation in 1D $`k`$-space
In this section, we will consider the gap equation (5) with a particular choice of $`T_J(k,k^{})`$. The main purpose of this paper is to establish a qualitative criterion for how robust the sharp $`k`$-space structures of the gap, obtained for a $`k`$-diagonal ILT term, are to momentum broadening. Given this limited purpose, it does make sense to simplify the problem by taking the $`k`$’s to be one-dimensional (1D). This simplification is purely mathematical, and of course does not imply anything about superconductivity with true off-diagonal long-range order in 1D systems, which is well-known not to exist for $`T>0`$, and prohibited by quantum fluctuations at $`T=0`$. The final justification of our 1D model lies in the qualitative conclusions established at the end of this section, which will be seen to apply also to a 2D system.
We will consider two different functional forms for $`g_k`$, both giving a $`k`$-symmetric gap as required for singlet pairing. The first is the BCS approximation $`g_k=\mathrm{\Theta }(\omega _D-|\epsilon _k|)`$, also used in Sec. III. It is analogous to isotropic $`s`$-wave pairing in 2D. The second form is $`g_k=\mathrm{cos}(ka)`$, which is most closely analogous to $`s_{x^2+y^2}`$ or $`d_{x^2-y^2}`$ pairing in 2D. The gap obtained for the first form does not change sign in the Brillouin zone, while the gap for the second form in general does.
For simplicity, we assume a simple tight-binding dispersion form for $`\epsilon _k`$,
$$\epsilon _k=-2t\left[\mathrm{cos}(ka)-\mathrm{cos}(k_Fa)\right],$$
(10)
where $`t`$ is the single-electron intralayer tunneling matrix element, $`a`$ is the lattice constant and $`k_F`$ is the Fermi wave vector. The pair tunneling term is taken to be of the form $`T_J(k,k^{})=T_Jf(k-k^{})`$, where we have chosen $`f(k)`$ to have the particular form
$$f(k)=\frac{k_0a^2}{2L}\frac{1}{\mathrm{sin}^2\left(\frac{ka}{2}\right)+\left(\frac{k_0a}{2}\right)^2},$$
(11)
where $`L`$ is the length of the system and $`k_0`$ is a measure of the width of $`f(k)`$. The prefactor in (11) is chosen to ensure a $`k`$-diagonal ILT term in (5) in the limit $`k_0\rightarrow 0`$. The sine function ensures that the scattering is periodic in the reciprocal lattice. One could construct infinitely many functions $`f(k)`$ which reduce to a delta function as $`k_0\rightarrow 0`$, and hence our particular choice (11) is inevitably somewhat arbitrary. However, since our focus here is merely on the qualitative aspects of momentum broadening, the detailed form of $`f(k)`$ is of no concern to us; any function $`f(k)`$ which is “smeared out” as $`k_0`$ increases would give the same qualitative results. Note that $`G(r=0)=\sum _kf(k)=1/\sqrt{1+(k_0a/2)^2}`$, which means that the effective value of $`T_J`$ actually decreases as $`k_0a`$ is increased. In this respect, the effect of momentum broadening is at least not underestimated in our model.
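The quoted value of $`G(r=0)`$ is easy to verify by direct numerical integration, replacing $`\sum _k`$ by $`(L/2\pi )\int dk`$ over the Brillouin zone in the thermodynamic limit (so the factors of $`L`$ cancel); the sketch below does just that, the grid size being an arbitrary numerical choice.

```python
import numpy as np

# Check of G(r=0) = sum_k f(k) = 1/sqrt(1 + (k0*a/2)^2) for Eq. (11).
a = 1.0
k = np.linspace(-np.pi / a, np.pi / a, 200001)
for k0a in (0.1, 0.5, 1.0, 2.0):
    fk = (k0a * a / 2.0) / (np.sin(k * a / 2.0) ** 2 + (k0a / 2.0) ** 2)
    G0 = np.trapz(fk, k) / (2.0 * np.pi)   # (1/L) sum_k -> (1/2pi) integral dk
    print(k0a, G0, 1.0 / np.sqrt(1.0 + (k0a / 2.0) ** 2))
```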
### Results and discussion
We have calculated the critical temperature $`T_c`$ and the zero-temperature gap function for various values of $`k_0a`$ by solving (5) self-consistently in the thermodynamic limit $`L\rightarrow \mathrm{\infty }`$. In Fig. 2 we show the results for $`T_c`$ for $`T_J=30`$ meV, $`\omega _D=20`$ meV, $`t=25`$ meV, $`k_Fa=\pi /4`$ and $`LV/2\pi a=2.5`$ meV. It is seen that $`T_c`$ is slightly more sensitive to $`k_0a`$ for $`g_k=\mathrm{cos}(ka)`$ than for $`g_k=\mathrm{\Theta }(\omega _D-|\epsilon _k|)`$. For $`g_k=\mathrm{cos}(ka)`$, $`T_c`$ is reduced by a factor 2 compared to the $`k`$-diagonal result when $`k_0a/\pi \approx 0.25`$. Only 1/10 of this broadening is required for a 50% reduction of $`T_c`$ if one instead chooses $`T_J=50`$ meV, $`t=250`$ meV and $`LV/2\pi a=25`$ meV. The reason for this increased sensitivity to broadening is the large increase of $`t`$.
In Fig. 3 we show the gap at $`T=0`$ for four values of $`T_J`$ and fixed $`k_0=0`$, the other parameter values being the same as used for Fig. 2. In this case, the gap is given implicitly by
$$\mathrm{\Delta }_k=\frac{\mathrm{\Delta }_0g_k}{1-T_J\chi _k},$$
(12)
where $`\mathrm{\Delta }_0=V\sum _kg_k\mathrm{\Delta }_k\chi _k`$. The maximum of the gap, and hence the critical temperature $`T_c`$, is determined by $`T_J`$ through the enhancement factor $`1/(1-T_J\chi _k)`$, which has its maximum on the Fermi surface. However, as seen in Fig. 3, $`T_J`$ does not affect the sign of the gap, which is determined by $`g_k`$ alone. On a 2D square lattice, the analogous statement is that the transformation properties of the gap function under the symmetry operations of the point group of the square lattice, $`C_{4v}`$, are given entirely in terms of the intralayer contribution to the pairing kernel, which is expandable in terms of basis functions for the irreducible representations of $`C_{4v}`$.
In Fig. 4 we show the gap at $`T=0`$ for four values of $`k_0a`$ and fixed $`T_J=30`$ meV. Note how the $`k`$-space variation of the gap decreases with increasing $`k_0a`$. For large enough $`k_0a`$, $`f(k)`$ is essentially independent of $`k`$, so the ILT term in (5) essentially becomes a constant self-consistent shift of $`\mathrm{\Delta }_k`$. The main contribution to the shift comes from the Fermi surface region, where $`\mathrm{\Delta }_k`$ and $`\chi _k`$ are maximal. Therefore, the sign of the shift is essentially determined by the sign of $`\mathrm{\Delta }_k`$ on the Fermi surface, which in turn is determined by the sign of $`g_k`$ on the Fermi surface, which for $`g_k=\mathrm{cos}(ka)`$ changes at half-filling. Thus, the qualitative form of the gap is given by $`\mathrm{\Delta }_k=\mathrm{\Delta }_0g_k+T_J\mathrm{\Delta }_1`$, where, for $`g_k=\mathrm{cos}(ka)`$, the sign of $`\mathrm{\Delta }_1`$ is positive below half-filling and negative above half-filling. As a consequence of this shift, $`\mathrm{\Delta }_k`$ eventually ceases to change sign in the Brillouin zone for $`g_k=\mathrm{cos}(ka)`$, as seen in Fig. 4.
We now discuss the criterion for how much broadening is needed to obtain a substantial reduction of the maximum value of the gap, thereby smoothing out the sharp $`k`$-space structures obtained in the $`k`$-diagonal case. For this purpose, it is instructive to consider how a slightly broadened $`T_J(k,k^{})`$ affects the maximum value of the gap. For $`k_0a/\pi \ll 1`$, $`\mathrm{\Delta }_k`$ varies more rapidly in the Fermi surface region than $`\chi _k`$, because $`1/(1-T_J\chi _k)`$ is sharply peaked at the Fermi surface. Thus the variation of $`\mathrm{\Delta }_k\chi _k`$ in the Fermi surface region is essentially determined by $`\mathrm{\Delta }_k`$. Furthermore, the main contributions to $`\sum _{k^{}}T_J(k_F,k^{})\mathrm{\Delta }_{k^{}}\chi _{k^{}}`$ roughly come from the region $`|k_F-k^{}|\lesssim k_0`$. Temporarily denoting the gap calculated for $`k_0=0`$ as $`\mathrm{\Delta }_k(0)`$, it follows that as long as $`k_0`$ is much smaller than the characteristic width of the peak of $`\mathrm{\Delta }_k(0)`$, the broadened $`T_J(k_F,k^{})`$ essentially has the same effect as a $`\delta `$-function. Under such circumstances, the gap is little affected by the non-$`k`$-diagonality. A broadening of the order of the width of the peak of $`\mathrm{\Delta }_k(0)`$ is therefore required for a substantial effect of the broadening to be felt. Fig. 3 shows that the width of the peak of $`\mathrm{\Delta }_k(0)`$ increases with $`T_J`$. The detrimental effects on the gap of an increase of $`k_0a`$ will therefore be reduced with an increase of $`T_J`$. On the other hand, increasing $`t`$ will make the width of the peak of $`\mathrm{\Delta }_k(0)`$ smaller, because the factor $`1/(1-T_J\chi _k)`$ drops more abruptly away from its peak value as one moves away from the Fermi surface when the overall amplitude of the variation of $`\epsilon _k`$ is increased, as seen from the fact that this drop is proportional to
$$\delta \epsilon _k=2ta\mathrm{sin}(k_Fa)\delta k.$$
(13)
Thus the parameters $`T_J`$ and $`t`$ have opposite effects on the sensitivity of the gap to broadening of $`T_J(k,k^{})`$. Note that one may scale the parameter $`t`$ entirely out of (5) to obtain a gap equation in terms of the dimensionless parameters $`1/\beta t`$, $`T_J/t`$, $`\mathrm{\Delta }_k/t`$, $`V/t`$ and $`k_0a`$ (and $`\omega _D/t`$, when $`g_k=\mathrm{\Theta }(\omega _D-|\epsilon _k|)`$). It should also be mentioned that the ‘realistic’ values of $`T_J/t`$ are difficult to ascertain, because the model we have considered is one-dimensional, and because the experimentally relevant values of $`T_J`$ are hard to extract. For these reasons, we can only draw qualitative conclusions from our model.
Another interesting consequence of (13) is that the width of the peak of $`\mathrm{\Delta }_k(0)`$ will increase as the Fermi level is moved towards the band edges where the dispersion flattens out, since then $`\delta \epsilon _k`$ decreases. So in our 1D model the system becomes more robust to a finite $`k_0`$ for a nearly empty or nearly full conduction band.
We finally stress that our qualitative conclusions regarding the sensitivity to momentum broadening are valid also for a 2D model. This is because the arguments used to arrive at these conclusions depend on premises that will be present also in 2D: 1) in the $`k`$-diagonal case, the gap shows sharp enhancement at the Fermi surface, 2) the width (and height) of the peak of this gap is increased by increasing the amplitude $`T_J`$ of the interlayer tunneling matrix element, 3) there will be parameters in the 2D single-electron intralayer dispersion analogous to $`t`$ in the 1D case, which control the bandwidth of the dispersion $`\epsilon _k`$, and therefore affect the width of the peak of the $`k`$-diagonal gap in a manner similar to what occurs in the 1D case. Note also that in the 2D case, the tight-binding dispersion flattens out near the points $`(\pm \pi /a,0)`$ and $`(0,\pm \pi /a)`$, which lie on the Fermi surface when the band is half-filled (for nearest-neighbor hopping only) or close to half-filling (when next-nearest neighbor hopping is included). These are also the points where the $`k`$-diagonal gap is at its maximum for such filling factors. Thus it appears that near half-filling in 2D, the maximum value of the gap should be fairly robust to moderate momentum broadening.
## V Conclusions
We have considered superconductivity within the ILT mechanism in the presence of non-$`k`$-diagonal interlayer tunneling. We find that the sensitivity to momentum broadening is larger the smaller the width of the peak of the gap obtained for $`k`$-diagonal tunneling. This width is increased by increasing the amplitude $`T_J`$ of the interlayer tunneling matrix element. The width is decreased by increasing the bandwidth of the single-electron intralayer dispersion. Finally, the width is larger at points on the Fermi surface where the dispersion is relatively flat as compared to points where the dispersion is steeper. Although we illustrated these features by solving a model with one-dimensional intralayer wavevectors, these qualitative conclusions are also valid for the more experimentally relevant case of two dimensions.
Several unusual properties of the superconducting state of the cuprates have been explained with the ILT mechanism. The essential feature of the ILT mechanism is the sharp $`k`$-space structure of the gap that arises from an unusual enhancement factor $`1/(1-T_J\chi _k)`$ for a $`k`$-diagonal interlayer tunneling. Conclusions based on these sharp structures ought therefore to be reexamined in the presence of a slightly broadened interlayer tunneling term. This pertains for instance to the explanation of the anomalies in the neutron scattering peaks observed in YBCO using the ILT mechanism. In this case, non-trivial Fermi surface kinematics almost unique to the mechanism are essential.
###### Acknowledgements.
We thank N.-C. Yeh for useful discussions. J.O.F. acknowledges support from the Norwegian University of Science and Technology through a university fellowship. Support from the Norwegian Research Council (Norges Forskningsråd) through Grant No. 110569/410 is also acknowledged.
# The swansong in context: long-timescale X-ray variability of NGC 4051
## 1 Introduction
On 9-11 May 1998, the highly variable, low-luminosity (2–10 keV luminosity, $`L_{2-10}\sim 5\times 10^{41}`$ ergs s<sup>-1</sup>) Seyfert 1 galaxy NGC 4051 was observed in an extremely low, constant flux state (2–10 keV flux $`\sim 1.3\times 10^{-12}`$ ergs cm<sup>-2</sup> s<sup>-1</sup>, corresponding to $`L_{2-10}\sim 3\times 10^{40}`$ ergs s<sup>-1</sup>) by the Rossi X-ray Timing Explorer (RXTE), the Italian-Dutch X-ray astronomy satellite BeppoSAX and the Extreme Ultraviolet Explorer (EUVE). The BeppoSAX data were consistent with the interpretation that the source had ‘switched off’, leaving only the X-rays reflected from distant cold matter (possibly the molecular torus) as a witness to its earlier intensity (Guainazzi et al., 1998, henceforth G98).
In this paper, we present the RXTE spectrum of the source in its low state, confirming the interpretation presented in G98. We also place this ‘swansong’ of the AGN in NGC 4051 in context, by showing the two and a half year lightcurve of NGC 4051 obtained with RXTE, which shows a decline of the source average flux for nearly two years, culminating in the low state which lasted $`\sim 150`$ days before the source ‘switched on’ once more.
Variations on timescales of years, or even on $`\sim 150`$ days, in NGC 4051 are particularly interesting, given the general pattern of AGN X-ray variability. On short timescales (minutes–hours), AGN such as NGC 4051 display scale invariant variability (e.g. M<sup>c</sup>Hardy & Czerny 1987; Lawrence et al. 1987; M<sup>c</sup>Hardy 1988; Green, M<sup>c</sup>Hardy & Lehto 1993; Lawrence & Papadakis 1993) which can be seen from the power-law shape of their X-ray power spectra. However on longer timescales, $`>`$ a day in the case of NGC 4051 (M<sup>c</sup>Hardy, Papadakis & Uttley 1998) and $`>`$ a month in the case of the higher luminosity AGNs NGC 5506 (M<sup>c</sup>Hardy 1988) and NGC 3516 (Edelson and Nandra 1999), the power spectra flatten, as they must if the total variable power is not to become infinite. Thus the $`\sim 150`$ day to few year timescale which we detect here is much longer than the flattening or ‘knee’ timescale in NGC 4051. In section 5 we discuss this result in the context of the mechanism for the long-timescale X-ray variability and speculate on the implications for other AGN.
## 2 Observations and Data Reduction
For the past two and a half years we have monitored NGC 4051 with RXTE in order to investigate its variability across a broad range of timescales. To this end, we have used short ($`<1`$ ksec) observations to obtain ‘snapshots’ of the source flux through a range of time intervals. From May 1996 we observed the source twice daily for two weeks, daily for four more weeks and at weekly intervals for the remainder of the year. Since 1997 we have observed the source every two weeks. We also observed NGC 4051 for a continuous period from 1998 May 9 16:43:12 UTC to 1998 May 11 20:55:28 UTC (61 ksec useful exposure), simultaneous with observations by BeppoSAX and EUVE.
RXTE observed NGC 4051 with the Proportional Counter Array (PCA) and the High Energy X-ray Timing Experiment (HEXTE) instruments. The PCA consists of 5 Xenon-filled Proportional Counter Units (PCUs), sensitive to X-ray energies from 2–60 keV. The HEXTE covers a range of between 20–200 keV, but due to the faint nature of the source we only consider the PCA data in this work. Discharge problems mean that of the 5 PCUs in the PCA, PCUs 3 and 4 are often switched off, so we include data from PCUs 0, 1 and 2 only. We extract data from the top layer of the PCA using the standard ftools 4.1 package, using the standard GTI criteria for electron contamination and excluding data obtained within and up to 30 minutes after SAA maximum and data obtained with earth elevation $`<10^{\circ }`$. We estimate the background for the PCA with pcabackest v2.0c using the new L7 model for faint sources.
## 3 The low state X-ray spectrum
We now investigate the spectrum of the source in its low state, as measured by the PCA on board RXTE and the MECS instrument on BeppoSAX during the long-look of May 9-11 1998. The 2-10 keV lightcurve obtained by the PCA shows no significant variability above the expected level for systematic errors in the background estimation, consistent with the observed lack of variability in the BeppoSAX lightcurves (G98). We therefore use the PCA and MECS spectra integrated over the whole observation.
We will not consider the data from EUVE and the LECS instrument on board BeppoSAX in our fits. These data show evidence for a separate low-energy component at energies below 4 keV, in addition to the component seen at medium energies by the PCA and MECS. This low-energy component is also constant in flux over the 7 day duration of the EUVE observation (Fruscione, in preparation). Since we are interested in the medium energy spectral component, we shall only consider the PCA and MECS spectra in the energy ranges 4–15 keV and 4–10.5 keV respectively. We use a PCA response matrix generated by the pcarsp v2.36 script; details of the MECS calibration and data reduction can be found in G98.
We fit the spectra in xspec v10.0. Simple power-law fits show a very flat spectrum so, as in G98, we shall attempt to account for this hard spectrum in terms of a reflection model. By fitting up to 15 keV we can use the simple href multiplicative model for reflection of a power-law spectrum off a slab of cold material. We also include a gaussian iron line and galactic absorption. This simple model approximates the reflection spectrum of cold material with an unknown distribution around the primary X-ray source. Like G98, we assume that the reflecting material subtends $`2\pi `$ steradians of sky, as seen from the source of the incident continuum. The inclination angle of the reflector to the line of sight is unknown, but since it does not significantly affect the fits, we freeze it arbitrarily at 30 degrees. We find that the best-fitting observed fraction of the illuminating power-law continuum in all our model fits where it is left free is zero (i.e. the source has switched off completely); so we fix this parameter to zero for the purpose of constraining the other model parameters.
In table 1, we show the resulting best-fitting parameters for separate model fits to the PCA and MECS spectra. Both sets of data are fitted reasonably well by the model and the model parameters are consistent with being the same in both the PCA and MECS spectra. The agreement between the two instruments confirms the accuracy of the PCA background model. We therefore attempt to constrain the model parameters further by fitting both the PCA and MECS spectra jointly with the same model parameters. The resulting best-fitting parameters are also shown in table 1.
Fig. 1 shows the model fitted jointly to the spectra from both instruments. The inferred slope of the illuminating continuum, as obtained by the joint fit, is higher than that obtained from the individual fits to both the PCA and MECS spectra. The higher slope is due to the improved definition of the continuum flux at lower energies by the MECS data, which sets the 1 keV continuum normalisation to a higher value than that given by the PCA fit alone, combined with the greater sensitivity of the PCA at high energies which holds down the continuum at higher energies. As one might expect, given that none of the illuminating continuum is directly visible, the slope of that continuum is not well determined. However, we note that a continuum photon index of 2.3 was observed during simultaneous RXTE and Extreme Ultraviolet Explorer (EUVE) observations in May 1996 (Uttley et al., submitted to MNRAS).
The value inferred for the luminosity of the primary continuum incident on the reflector is fairly typical for NGC 4051 in its active state (e.g. Guainazzi et al., 1996). Note that the inferred value for the incident luminosity assumes the slab geometry which is inherent in the reflection model (i.e. 50% covering fraction). Since the inferred continuum luminosity is compatible with observations, the actual covering fraction must be of this order. Fixing the continuum slope of the combined-fit model to its best-fitting value, we can set a 99% confidence upper limit (for 2 interesting parameters) of 0.024 for the fraction of the illuminating primary flux which is directly observed. Combining this observation with the lack of variability from the EUV to medium X-ray bands, the simplest assumption is that the primary continuum has switched off completely.
The iron line parameters are not strongly affected by the parametrization of the underlying continuum. The iron line equivalent width is $`\sim 1`$ keV, consistent with the interpretation that the entire medium-energy spectrum originates in cold reflecting material. The line energy and width are also consistent with this interpretation.
We conclude that the May 1998 long looks at NGC 4051 show that the primary continuum source had switched off for at least seven days leaving the clear signature of reflected emission from cold material, possibly the molecular torus. We will now place this result in context by looking at the source history for the two years preceding these observations and the six months following them.
## 4 Long timescale variability
In the upper panel of fig. 2 we show the two and a half year 2–10 keV lightcurve of NGC 4051 obtained with our monitoring observations (crosses). The May long look observation is indicated by a star. The lightcurve shows variability on a range of timescales, including a probable long-term component on timescales of $`\sim `$ months, which we highlight in the lower panel of fig. 2 which shows the 100 day average fluxes, made with the monitoring data in the corresponding 100 day bins. The error bars on the 100 day averages are intended to represent the spread of points in each bin and do not represent an actual error on the 100 day mean. The lightcurve shows a decline from a high flux state in 1996, through an intermediate flux state in 1997, culminating in the low state in early 1998 which lasted for $`\sim 150`$ days. Shortly after the long-look observations in May 1998 (and as far as the most recent observations) the source became active again.
It is apparent from the lightcurve that the source flux variability is not statistically stationary (i.e. does not have a constant mean) on long timescales. We now show that this long-term variability is real and not some artifact due to the sparse sampling of an underlying stationary, stochastic lightcurve. We shall use only the monitoring observations since they are all of comparable length and the minimum separation between observations is 0.5 days, which is of the order of the knee timescale. Hence if there are no long-timescale components to variability we would expect any lightcurve made up of such observations to be statistically stationary.
We can compare the mean fluxes of two sections of the lightcurve using Student’s t-test for significantly different means (e.g. Press et al., 1992), which gives a probability that both sets of data have the same mean. We first compare the mean 2–10 keV count rate of the 10 observations between TJD 10810 and 10860 (when the source appears to occupy an ultra-low state), with the mean count rate of the preceding 87 observations (0.92 counts s<sup>-1</sup> versus 6.75 counts s<sup>-1</sup>). The t-test shows that the probability that both sets of observations come from the same parent population is $`10^{-26}`$. However, we have selected the group of low-state observations because they look different to the preceding observations. Therefore we must confirm the likelihood that no group of 10 or more consecutive observations can be found to be from the same parent population as a preceding group of observations (with a probability of less than or equal to $`10^{-26}`$), in any randomly generated statistically stationary lightcurve with the same number of data points as our own (106 observations). We have simulated $`10^4`$ statistically stationary lightcurves each with 106 data points. We then searched each simulated lightcurve for groups of 10 or more consecutive points with a mean count rate which is the same as the mean of the preceding points with a probability of $`10^{-26}`$ or less. In $`10^4`$ lightcurves we find no such groups, so we conclude that the mean of the lightcurve between TJD 10810 and 10960 is different to the mean in the preceding time period at a level of better than $`99.99\%`$ confidence. Therefore, the entire lightcurve is not statistically stationary.
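A minimal sketch of this test is given below. The t-test itself is standard; for the Monte Carlo step the null model is not spelled out above, so the sketch assumes uncorrelated Gaussian scatter about a constant mean as the stationary lightcurve and, for brevity, tests runs of exactly 10 consecutive points rather than 10 or more. The mean and scatter used are illustrative.

```python
import numpy as np
from scipy.stats import ttest_ind

def min_run_pvalue(rates, run=10):
    """Smallest t-test p-value over runs of `run` points vs all preceding points."""
    return min(ttest_ind(rates[s:s + run], rates[:s]).pvalue
               for s in range(2, len(rates) - run + 1))

rng = np.random.default_rng(0)
n_obs, n_sims, threshold = 106, 10_000, 1e-26   # reduce n_sims for a quick run
hits = sum(min_run_pvalue(rng.normal(6.0, 2.0, n_obs)) <= threshold
           for _ in range(n_sims))
print(f"{hits}/{n_sims} stationary lightcurves reach p <= {threshold:g}")
```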
We now determine whether the lightcurve is statistically stationary prior to the ultra-low state, i.e. does the source simply switch between two flux states with constant average fluxes, or is there a more gradual change in the mean flux, culminating in the low state? To examine this possibility, we split the lightcurve into two parts of equal duration, corresponding to TJD 10196–10503 and TJD 10503–10810, with mean count rates 7.64 and 4.19 respectively. According to the t-test, these two sections of the lightcurve have significantly different means at better than $`99.99\%`$ confidence. We cannot show that the lightcurves are not stationary on shorter timescales by further splitting these sections of the lightcurves into equal halves. Therefore the X-ray lightcurve of NGC 4051 approximates a stationary lightcurve on timescales of days–weeks (hence the knee in the power spectrum), becoming non-stationary on longer timescales. We delineate these significantly different flux epochs in Figure 2, naming them epochs I, II, III, and IV. Epoch IV is the most recent section of the lightcurve, where the source seems to have returned to an active state.
Long-term variations in the average X-ray flux might be caused by absorption by a varying column of material along the line of sight. A 50% reduction in average flux (i.e. between epoch I and epoch II) requires an additional absorbing column of column density $`\sim 10^{23}`$ cm<sup>-2</sup>, which is ruled out at greater than 99.9% confidence by our spectral data from epoch II observations. Therefore the long-term variations must be intrinsic to the primary X-ray continuum.
Finally, we comment on evidence for non-linearity in the lightcurve. Green, M<sup>c</sup>Hardy & Done (1999) used the method of searching for asymmetry in the distribution of measured flux about the mean to show that the variability of NGC 4051 was non-linear during a ROSAT observation in November 1991, while an observation in November 1992 showed no evidence for non-linearity. Using the same technique, we find that the variability in epoch II is non-linear at 90% confidence although there is no evidence for non-linearity during epoch I (there is not sufficient data to comment on the linearity during the other epochs). It is interesting to note that extrapolating the differing mean X-ray fluxes in epochs I and II into the ROSAT band yields ROSAT count rates similar to those in November 1992 and November 1991 respectively (assuming a simple power law model with photon index 2.3 and galactic absorption), implying that the non-linear behaviour of NGC 4051 may be associated with the intermediate-flux state which characterises epoch II.
## 5 Discussion
We now discuss the implications of these results for the interpretation of the low state spectrum and the origin of the long-timescale variability.
Although the source appears to be quiescent during the three days of the May 1998 long-look, there does appear to be some low level of variability during epoch III, although the flux level remains very low, implying that the source may not be entirely switched off for the entire duration of epoch III. The low flux state lasts longer than $`\sim 150`$ days, but the reflection spectrum seen in the May 1998 long look, which occurs at the end of the low state, is consistent with reflection of a continuum with much higher flux. We infer that the reflecting matter lies at distances equal to or greater than $`\sim 150`$ light-days from the continuum source ($`>`$ a few times $`10^{17}`$ cm), confirming the interpretation of G98 that we have directly detected the X-ray reflection spectrum from the molecular torus in NGC 4051. It is interesting to note that there is no detectable signature of neutral hydrogen gas along the line of sight to the continuum source in NGC 4051, over and above the expected galactic absorption \[M<sup>c</sup>Hardy et al. 1995\], whereas the detection of a reflection spectrum, almost certainly from the surrounding torus, implies substantial columns (greater than $`10^{24}`$ cm<sup>-2</sup>) out of the line of sight. This result is in agreement with the standard AGN unification scenario, where we expect substantial differences in the column density along different lines of sight to the central source.
Assuming a lower limit to the timescale for the observed long-term variability of $`\sim 150`$ days (i.e., comparable to the duration of the low state) we see that the long-term variability timescale is much longer than the knee timescale in NGC 4051 (by a factor $`>`$ 50). We therefore speculate that the long timescale component to variability may have an altogether different origin to the variability at much shorter timescales. The short knee timescale of NGC 4051 implies a low black hole mass of order $`10^5`$ M<sub>⊙</sub> if the knee timescale scales linearly with black hole mass \[M<sup>c</sup>Hardy, Papadakis & Uttley 1998\], consistent with the relatively low luminosity of this AGN. The observed long-term variability timescale is much longer than the dynamical timescale for a black hole of this mass, or the thermal or sound-crossing timescales associated with the accretion disk which may fuel the AGN \[Edelson & Nandra 1999\]. However, the long-term variability timescale is comparable with the viscous timescale of an accretion disk \[Treves, Maraschi & Abramowicz 1988\]. We therefore speculate that the long-term X-ray variability of NGC 4051 is related to variations in the accretion flow in the X-ray emitting region close to the massive black hole.
Recently, evidence has emerged for a long timescale component (relative to the knee timescale) to the X-ray variability in the galactic black hole candidate, Cyg X-1 \[Rao et al. 1998\] on timescales $`>10^3`$ s. The long-term variability timescales in Cyg X-1 and NGC 4051 imply a scaling with luminosity (and possibly black hole mass), similar to the scaling with the knee timescale. We therefore speculate that long-term X-ray variability in more luminous AGNs will occur on even longer timescales than in NGC 4051 - from decades to centuries for typical Seyfert galaxies (of $`L_X\sim 10^{43}`$ ergs s<sup>-1</sup>), to thousands of years for quasars. The X-ray variability of NGC 4051 may represent a microcosm of the X-ray variability of all AGN.
Finally, we note that X-ray experiments with low spectral bandwidth may misclassify sources like NGC 4051, which have recently switched off, as being heavily absorbed AGN.
## 6 Conclusions
We have shown that the X-ray spectrum of NGC 4051 in its low state observed by RXTE and BeppoSAX in May 1998 is consistent with reflection of the primary continuum off distant ($`>150`$ light-days) cold gas, which may be the molecular torus envisaged by the AGN unification model.
We have shown that the X-ray lightcurve of NGC 4051 is not statistically stationary over long timescales, and that during the course of our monitoring campaign, the source does not simply switch between two flux states, but moves from a highly variable (probably linear) high flux state, through an intermediate variable (possibly non-linear) flux state, to the low state where variability was minimal. Since May 1998 the source has become active once more.
The long-timescale component to X-ray variability is intrinsic to the primary continuum (and not varying obscuration), and may be associated with variations in the accretion flow of the putative accretion disk, assuming a relatively small black hole of 10<sup>5</sup> M<sub>⊙</sub>, consistent with the low luminosity of this AGN.
The X-ray variability of NGC 4051 may represent a microcosm of all AGN variability, showing in only a few years a range of states and behaviours which more luminous AGN may pass through on timescales of decades to thousands of years.
### Acknowledgments
We wish to thank the RXTE and BeppoSAX schedulers for efficiently co-ordinating and supporting these observations. PU acknowledges financial support from the Particle Physics and Astronomy Research Council, who also provided grant support to IM<sup>c</sup>H. MG acknowledges an ESA Research fellowship. AF was supported by AXAF Science Center NASA contract NAS 8-39073.
# Chiral Fermions on the Lattice
## 1 Introduction
Since the beginning of lattice gauge theory the regularization of chiral fermions has been afflicted with severe problems. When regulating fermions on a lattice, typically, unwanted doublers with opposite chirality appear. These doublers can be lifted (given mass of the order of the cut-off) at the cost of explicitly breaking chiral symmetry, as in the Wilson fermion formulation. Alternatively, a remnant of chiral symmetry can be retained, with a smaller number of doublers interpreted as flavors, as in the staggered fermion formalism. However, at finite lattice spacing the flavor symmetry is broken. Both these approaches fail from the outset for regulating Weyl fermions. The central problem in the non-perturbative regularization of gauge theories with Weyl fermions is to write down a formula for the fermionic determinant when the fermion is in some complex representation of the gauge group, since depending on the topology of the gauge field the chiral Dirac operator can be square or rectangular, where the difference between rows and columns is the index.
Significant progress in the formulation of chiral gauge theories has been made by the overlap formalism . The overlap formalism was inspired by two papers . The central idea is that an infinite number of Dirac fermions (labeled by $`s`$) with a mass term of the form $`\overline{\psi }_s(P_LM_{ss^{}}+P_RM_{ss^{}}^{\dagger })\psi _s^{}`$ and chiral projectors $`P_{L,R}`$ can be used to regulate a single Weyl fermion if the infinite dimensional mass matrix $`M`$ has a single zero mode but $`M^{\dagger }`$ has no zero modes. Kaplan’s paper uses this idea to put chiral fermions on the lattice where they are referred to as domain wall fermions since Kaplan used a mass matrix that has a domain wall like structure. In the overlap formalism the infinite number of Dirac fermions is described by two non-interacting many body Hamiltonians, one for each side of the domain wall, and the chiral determinant is written as the overlap between their groundstates
$$det\mathrm{C}(U)\propto \langle 0-|0+\rangle .$$
(1)
$`|0-\rangle `$ is the many body ground state of $`\mathcal{H}^{-}=a^{\dagger }\gamma _5a`$ and $`|0+\rangle `$ the many body ground state of $`\mathcal{H}^{+}=a^{\dagger }H_w(U)a`$, with $`\gamma _5H_w(U)=D_w(U)`$ the usual Wilson-Dirac operator on the lattice with a fermion mass in the supercritical region ($`m_c<m<2`$). $`a`$ ($`a^{\dagger }`$) are canonical fermion annihilation (creation) operators. On a finite lattice, the single particle Hamiltonians are finite matrices of size $`2K\times 2K`$ with $`K=V\times N\times S`$ where $`V`$ is the volume of the lattice, $`N`$ is the size of the particular representation of the gauge group and $`S`$ is the number of components of a Weyl spinor. Then $`|0-\rangle `$ is made up of $`K`$ particles. If $`|0+\rangle `$ is also made up of $`K`$ particles, then the overlap is not zero in the generic case. If the background gauge field is such that there are only $`K-Q`$ negative energy states for $`H_w(U)`$ then the overlap is zero. Any small perturbation of the gauge field will not alter this situation. Furthermore, the overlap $`\langle 0-|a_{i_1}^{\dagger }\mathrm{\cdots }a_{i_Q}^{\dagger }|0+\rangle `$ will not be zero in the generic case if the fermion is in the fundamental representation of the gauge group showing that there is a violation of fermion number by $`Q`$ units. So, clearly, the overlap definition of the chiral determinant (1) has the desired properties.
A generic problem with simulations of chiral gauge theories is that the chiral fermion determinant is complex. A “brute force approach” is feasible in the simulation of two dimensional models, and in this way the overlap formalism has successfully reproduced non-trivial results in two dimensional chiral models on the lattice . The brute force approach, however, is clearly not feasible in four dimensions, where efficient numerical techniques are essential. This prevented simulations of chiral gauge theories and tests of the overlap formalism in four dimensions so far.
Clearly, any formulation of lattice chiral gauge theories is also a formulation of massless vector gauge theories with an exact chiral symmetry and a positive fermion determinant (the product of the chiral determinant for the left handed fermions and its complex conjugate for the right handed ones). Lattice QCD using the overlap formalism reproduces the well-known mass inequalities between mesons and baryons, and the $`U(N_f)_V\times U(N_f)_A`$ symmetry in an $`N_f`$ flavor theory is broken down to $`U(1)_V\times SU(N_f)_V\times SU(N_f)_A`$ by gauge fields that carry topological charge (see section 9 of for details). If the $`SU(N_f)_V\times SU(N_f)_A`$ symmetry is spontaneously broken, then massless Goldstone bosons should naturally emerge in the overlap formalism. Since the symmetry breaking pattern is exactly as in the continuum, all the soft pion theorems should hold on the lattice as well.
For a vector gauge theory the computation of the fermionic determinant can be simplified significantly compared to the original version (1) that involves the computation of the overlap of two many body ground states. One way to derive the simplified expression is to start with the variant of domain wall fermions of ref. , applicable for a vector gauge theory. We choose this approach here to emphasize the close connection between domain wall and overlap fermions. Integrating out all the fermion and Pauli-Villars fields, Neuberger derived the following expression for the determinant describing a single light Dirac fermion
$$detD_{DW}(\mu ;L_s)=det\left\{\frac{1}{2}\left[1+\mu +(1-\mu )\gamma _5\mathrm{tanh}\left(\frac{L_s}{2}\mathrm{ln}T_w\right)\right]\right\}.$$
(2)
Here $`T_w`$ is the transfer matrix in the extra direction, whose extent, $`L_s`$, has been kept finite, and $`0\le \mu \le 1`$ describes fermions with positive mass all the way from zero to infinity. In Ref. the fermion mass $`\mu `$ is denoted by $`m_f`$. In the limit $`L_s\rightarrow \mathrm{\infty }`$ (2) becomes
$$detD_{DW}(\mu )=det\left\{\frac{1}{2}\left[1+\mu +(1-\mu )\gamma _5ϵ(\mathrm{ln}T_w)\right]\right\}.$$
(3)
It is only in this limit that massless domain wall fermions have an exact chiral symmetry. Finally, taking the lattice spacing, $`a_s`$, in the extra direction to zero one obtains the Overlap-Dirac operator of Neuberger
$$D(\mu )=\frac{1}{2}\left[1+\mu +(1-\mu )\gamma _5ϵ(H_w)\right].$$
(4)
The external fermion propagator is given by
$$\stackrel{~}{D}^{-1}(\mu )=(1-\mu )^{-1}\left[D^{-1}(\mu )-1\right].$$
(5)
The subtraction at $`\mu =0`$ is evident from the original overlap formalism and the massless propagator anti-commutes with $`\gamma _5`$ . With our choice of subtraction and overall normalization the propagator satisfies the relation
$$\mu \langle b|\left[\gamma _5\stackrel{~}{D}^{-1}(\mu )\right]^2|b\rangle =\langle b|\stackrel{~}{D}^{-1}(\mu )|b\rangle \forall b\text{ satisfying }\gamma _5|b\rangle =\pm |b\rangle $$
(6)
for all values of $`\mu `$ in an arbitrary gauge field background . The fermion propagator on the lattice is related to the continuum propagator for small momenta and small $`\mu `$ by
$$D_c^{-1}(m_q)=Z_\psi ^{-1}\stackrel{~}{D}^{-1}(\mu )\text{ with }m_q=Z_m^{-1}\mu $$
(7)
where $`Z_m`$ and $`Z_\psi `$ are the mass and wavefunction renormalizations, respectively. Requiring that (6) hold in the continuum results in $`Z_\psi Z_m=1`$. We find that a tree level tadpole improved estimate gives
$$Z_\psi =Z_m^{-1}=\frac{2}{u_0}\left[m-4(1-u_0)\right],$$
(8)
where $`u_0`$ is one’s favorite choice for the tadpole link value. Most consistently, for the above relation, it is obtained from $`m_c`$, the critical mass of usual Wilson fermion spectroscopy.
In the rest of this paper we discuss practical implementations of the Overlap-Dirac operator and present some recent results. Owing to the recent flurry of theoretical activity arising from the “unearthing” of the Ginsparg-Wilson relation , a few remarks are in order. The massless Overlap-Dirac operator in (4) satisfies the Ginsparg-Wilson relation
$$D(0)\gamma _5+\gamma _5D(0)=2D(0)\gamma _5D(0)$$
(9)
implying that the massless propagator $`(D^{-1}(0)-1)`$ anticommutes with $`\gamma _5`$. If we write
$$D(0)=\frac{1}{2}\left[1+\gamma _5\widehat{H}_a\right].$$
(10)
then the Ginsparg-Wilson relation reduces to $`\widehat{H}_a^2=1`$. Since we would want $`\gamma _5D(0)`$ to be Hermitian, $`\widehat{H}_a`$ should be a Hermitian operator. With this reduction of the Ginsparg-Wilson relation, it is easy to show that
$$detD(0)=\left|\langle 0-|0+\rangle \right|^2$$
(11)
i.e., the overlap formula for a vector theory. Here $`|0+\rangle `$ is the many body ground state of $`a^{\dagger }\widehat{H}_aa`$. This establishes a one-to-one correspondence between the overlap formula and the determinant of a fermionic operator satisfying the Ginsparg-Wilson relation for massless vector gauge theories.
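The reduction above is easy to verify numerically: for any Hermitian $`H_w`$ and any Hermitian involution playing the role of $`\gamma _5`$, the operator built as in (4) satisfies (9) to machine precision. The sketch below uses a random $`16\times 16`$ Hermitian matrix purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))   # Hermitian involution
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hw = (A + A.conj().T) / 2                            # stand-in for H_w

w, U = np.linalg.eigh(Hw)
eps_Hw = U @ np.diag(np.sign(w)) @ U.conj().T        # eps(H_w), so eps^2 = 1
D0 = 0.5 * (np.eye(n) + g5 @ eps_Hw)                 # massless operator, Eq. (4)

lhs = D0 @ g5 + g5 @ D0
rhs = 2.0 * D0 @ g5 @ D0
print(np.max(np.abs(lhs - rhs)))                     # ~1e-16: Eq. (9) holds
```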
## 2 Practical Implementations of $`ϵ(H_w)`$
In order to compute the action of $`ϵ(H_w)=\frac{H_w}{|H_w|}`$ on a vector one can proceed in several different ways. Since we are interested in working in four dimensions, it is not practical to store the whole matrix $`H_w`$. Therefore standard techniques to deal with the square root of a matrix will not be discussed.
One could attempt to solve the equation $`\sqrt{H_w^2}\varphi =H_wb`$ to obtain $`\varphi =ϵ(H_w)b`$ using iterative techniques. Such techniques have been developed to solve linear systems with fractional powers of a positive definite operator using Gegenbauer polynomials and applied to the Overlap-Dirac operator in .
Another approach is to efficiently approximate $`ϵ(H_w)`$ as a sum of poles:
$$ϵ(H_w)\approx g_N(H_w)=H_w\left[c_0+\sum _{k=1}^{N}\frac{c_k}{H_w^2+d_k}\right].$$
(12)
The action of $`ϵ(H_w)`$ on a vector then involves a single conjugate gradient with multiple shifts .
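In outline, the pole form is applied to a vector as in the sketch below. For brevity it runs one plain conjugate gradient per shift $`d_k`$; the multi-shift solver referred to above solves all shifted systems in a single Krylov space at essentially the cost of the smallest shift, and is what one would use in practice.

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a Hermitian positive definite apply_A."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = np.vdot(r, r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rr / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = np.vdot(r, r)
        if np.sqrt(abs(rr_new)) < tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def gN_times_vector(Hv, b, c0, c, d):
    """H_w [ c0 + sum_k c_k (H_w^2 + d_k)^(-1) ] b, cf. Eq. (12);
    Hv(v) must return H_w applied to v."""
    x = c0 * b
    for ck, dk in zip(c, d):
        x = x + ck * cg(lambda v, dk=dk: Hv(Hv(v)) + dk * v, b)
    return Hv(x)
```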
One approximation, called the polar decomposition , has been adapted in this context and first used in the study of the three dimensional Overlap-Dirac operator by Neuberger . Here the coefficient $`c_0=0`$ and
$$c_k=\frac{1}{N\mathrm{cos}^2\frac{\pi }{4N}(2k-1)};d_k=\mathrm{tan}^2\frac{\pi }{4N}(2k-1).$$
(13)
In this approximation,
$$ϵ(z)\approx g_N(z)=\frac{(1+z)^{2N}-(1-z)^{2N}}{(1+z)^{2N}+(1-z)^{2N}}.$$
(14)
Clearly $`g_N(z)=g_N(1/z)`$ and $`g_N(1)=1`$. The error $`ϵ(z)-g_N(z)`$ is strictly positive and monotonically decreases from $`z=0`$ to $`z=1`$.
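The equivalence of the coefficients (13) with the closed form (14), and the quality of the approximation to $`ϵ(z)`$, can be checked directly; the sample points and orders below are arbitrary.

```python
import numpy as np

def g_poles(z, N):
    """Polar decomposition, Eqs. (12)-(13), as a scalar function of z > 0."""
    k = np.arange(1, N + 1)
    theta = np.pi * (2 * k - 1) / (4 * N)
    c = 1.0 / (N * np.cos(theta) ** 2)
    d = np.tan(theta) ** 2
    return z * np.sum(c / (z[:, None] ** 2 + d), axis=1)

def g_closed(z, N):
    """Closed form, Eq. (14)."""
    return ((1 + z) ** (2 * N) - (1 - z) ** (2 * N)) / \
           ((1 + z) ** (2 * N) + (1 - z) ** (2 * N))

z = np.linspace(0.01, 1.0, 100)
for N in (4, 8, 16):
    print(N, np.max(np.abs(g_poles(z, N) - g_closed(z, N))),  # agree to roundoff
          np.max(1.0 - g_closed(z, N)))                       # worst error vs eps(z)=1
```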
For another approximation, called the optimal rational approximation , the coefficients are obtained numerically by an optimal fit using the Remez algorithm. The coefficients in a slightly different notation have been tabulated for $`N=6,8,10`$ in Ref. . We have found it necessary to use $`N=14`$ for our recent applications. The coefficients in this case are shown in Table 1.
In the optimal rational approximation, the approximation to $`ϵ(z)`$ has oscillations and is not bounded by unity. A plot of the approximation $`g_N(z)`$ obtained as a fit over the region $`[0.001,1]`$ is shown in Fig. 1 for $`N=6`$ to $`14`$. While we fit over this region, the approximation is still good for $`z`$ somewhat larger than 1. The approximation is bounded by unity only if $`0.025<z<1.918`$ for $`N=14`$. In this range the maximum deviation from unity is equal to $`3.1\times 10^{-5}`$. This range will increase if one increases the order of the approximation. For the current applications we found this range to be sufficient.
The approximation to $`ϵ(H_w)`$ by poles involves a multi-shift inner conjugate gradient and therefore it seems necessary to store $`N`$ vectors where $`N`$ is the order of the approximation. One can avoid storing the extra vectors if one is willing to perform two passes of the inner conjugate gradient . A Lanczos based algorithm that also avoids this extra storage by requiring two passes has been proposed, but it involves an explicit diagonalization of a tridiagonal matrix . In this method $`H_w`$ is approximated by a small dimensional tridiagonal matrix (anywhere between $`100`$ and $`1000`$) and $`ϵ(H_w)`$ is computed by first diagonalizing the tridiagonal matrix and then performing the trivial operation of $`ϵ`$ on the eigenvalues. The accuracy is increased by increasing the order of the tridiagonal matrix.
Since in practice it is the action of $`D(\mu )`$ on a vector we need, we can check for the convergence of the complete operator at each inner iteration of $`ϵ(H_w)`$. This saves some small amount of work at $`\mu =0`$ and more and more as $`\mu `$ increases, while at $`\mu =1`$ (corresponding to infinitely heavy fermions, c.f. Eq.(4)) no work at all is required.
Each action of $`ϵ(H_w)`$ involves several applications of $`H_w`$ on a vector with the number depending on the condition number of $`H_w(m)`$ in the supercritical mass region. In Figure 2 we show the density of (near) zero eigenvalues for $`m=1.7`$. We see that while $`\rho (0;1.7)`$ decreases rapidly as $`\beta `$ increases, it does not appear to go to zero at a finite lattice spacing. The second part of Figure 2 emphasizes that $`\rho (0;1.7)`$ decreases exponentially in some power of $`1/a`$ (the power here is not well determined, but reasonably fits $`1/2`$). This result implies that on a specific gauge background, $`H_w`$ could have an arbitrarily large condition number due to a few small eigenvalues. This can make the computation of $`\varphi =ϵ(H_w)b`$ expensive for all methods considered. In addition, some care is needed when using the approximation of $`ϵ(H_w)`$ by a sum over poles. Clearly $`ϵ(H_w)`$ can be replaced by $`ϵ(sH_w)`$ where $`s>0`$ is an arbitrary scale factor. We should choose the scale so that the maximum eigenvalue of $`sH_w`$ is not above the range where the approximation is deemed good. Having so chosen a value for $`s`$, we need to deal with the low lying eigenvalues that fall outside the range of the approximation to $`ϵ(H_w)`$. We do this by computing a few low lying eigenvalues and eigenvectors of $`H_w`$, for which we then know the contribution to $`ϵ(H_w)`$ exactly, and projecting them out before applying the approximation to the orthogonal subspace for which the approximation is good. The number of eigenvalues that have to be projected out will depend on the lattice coupling, the lattice size and the lower end of the range of the approximation. It will roughly increase with the volume at a fixed coupling making it difficult to go to large lattice volumes at strong coupling. However, the number of eigenvalues that have to be projected out will decrease as one goes to weaker coupling even at a fixed physical volume. This is because the density of eigenvalues of $`H_w`$ near zero goes to zero as one goes to the continuum limit . The Ritz functional method can be used to efficiently compute the necessary low lying eigenvalues and eigenvectors of $`H_w`$.
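The projection step can be sketched as follows. The exact eigendecomposition below stands in for the Ritz computation of the lowest modes, and the dense polar form with $`N=14`$ poles stands in for whichever rational approximation $`g_N`$ one uses; the toy spectrum is an illustrative choice.

```python
import numpy as np

def g_N(Hw, b, N=14):
    """Dense stand-in for the pole sum (12)-(13) acting on a vector."""
    k = np.arange(1, N + 1)
    theta = np.pi * (2 * k - 1) / (4 * N)
    c, d = 1.0 / (N * np.cos(theta) ** 2), np.tan(theta) ** 2
    H2 = Hw @ Hw
    x = sum(ck * np.linalg.solve(H2 + dk * np.eye(len(b)), b)
            for ck, dk in zip(c, d))
    return Hw @ x

def eps_times_vector(Hw, b, n_low=3):
    """eps(H_w) b with the n_low smallest modes treated exactly."""
    w, V = np.linalg.eigh(Hw)                # in practice: Ritz for the lowest modes
    idx = np.argsort(np.abs(w))[:n_low]      # modes outside the good range of g_N
    low = V[:, idx]
    coeff = low.conj().T @ b
    exact = low @ (np.sign(w[idx]) * coeff)  # exact contribution of the low modes
    return exact + g_N(Hw, b - low @ coeff)  # approximation on the deflated source

# toy spectrum: a few near-zero modes plus a well-separated bulk
rng = np.random.default_rng(2)
n = 40
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
lam = np.concatenate([[1e-4, -3e-4, 5e-4],
                      rng.uniform(0.3, 2.0, n - 3) * rng.choice([-1.0, 1.0], n - 3)])
Hw = (Q * lam) @ Q.T
b = rng.normal(size=n)
exact = (Q * np.sign(lam)) @ (Q.T @ b)
print(np.linalg.norm(eps_times_vector(Hw, b) - exact))  # small: low modes deflated
```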
For the case of domain wall fermions at finite extent $`L_s`$ in the fifth direction, the degree to which $`\mathrm{tanh}(L_s(\mathrm{ln}T_w(m))/2)`$ approximates $`ϵ(\mathrm{ln}T_w(m))`$ is determined by $`L_s`$ and the eigenvalues of $`T_w(m)`$ near $`1`$. One can show analytically that in a fixed gauge background a unit eigenvalue of $`T_w(m)`$ and a zero eigenvalue of $`H_w(m)`$ occur at the same mass $`m`$. Also, the change of the corresponding eigenvalue in $`m`$ is the same for both $`T_w(m)`$ and $`H_w(m)`$. This implies that the density of zero eigenvalues $`\rho (0;m)`$ is the same for both $`H_w(m)`$ and $`\mathrm{ln}(T_w(m))`$. The degree to which these zero eigenvalues affect physical results is determined by the physical observable, $`L_s`$, and the fermion mass $`\mu `$. Studies of the $`L_s`$ dependence for various quantities at non-zero fermion masses can be found in Ref. . In particular, larger $`L_s`$ at fixed mass $`\mu `$ is needed for stronger coupling due to the increasing $`\rho (0;m)`$.
## 3 Spontaneous chiral symmetry breaking
Spontaneous chiral symmetry breaking is an important feature of QCD. However, it is not fully realized on a finite lattice and at finite quark mass. Thus one needs to carefully study the approach to the infinite volume and chiral limit. Conventional lattice fermion formulations explicitly break the chiral symmetry (at least partially) at finite lattice spacing, obscuring the approach to the infinite volume and chiral limit in practical simulations. Overlap fermions preserve the chiral symmetry at finite lattice spacing. This should facilitate a study of spontaneous chiral symmetry breaking. As a practical test of the Overlap-Dirac operator, we consider quenched QCD. The chiral limit of quenched QCD is tricky, though, because topologically non-trivial gauge fields are not suppressed in this limit. Gauge field topology results in exact zero modes of $`D(0)`$ as long as one is in the supercritical region of $`H_w(m)`$. This is demonstrated in Fig. 3 where we show the spectral flow of eigenvalues of both $`H_w(m)`$ and $`H_o=\gamma _5D(0)`$ as a function of $`m`$ for an SU(2) gauge background at $`\beta =2.5`$ on an $`8^4`$ lattice. We see a single level crossing zero near $`m=0.9`$ in the spectral flow of $`H_w`$. At this mass, we see the sudden appearance of a zero eigenvalue (with chirality $`1`$) among the smoothly changing non-zero eigenvalues (in opposite sign pairs with chirality equal to their eigenvalue).
Zero eigenvalues of $`H_o`$ due to global topology have a definite chirality. The spectrum of $`H_o`$ is in \[-1,1\] and the non-zero eigenvalues of $`H_o`$ that have a magnitude less than one come in pairs, $`\pm \lambda `$. The associated eigenvectors are not eigenvectors of $`\gamma _5`$, but rather $`\gamma _5`$ has expectation value $`\pm \lambda `$ in the eigenvectors, $`\psi ^{}\gamma _5\psi =\pm \lambda `$. Since $`H_o`$ is an even dimensional matrix, the unpaired zero eigenvalues have to be matched by unpaired eigenvalues equal to $`\pm 1`$. This is what is expected to happen in a topologically non-trivial background. It is straightforward to obtain the spectrum of $`D(\mu )`$ from the spectrum of $`H_o(0)`$. Due to the continuum like spectrum one can study the approach to the chiral limit using the Overlap-Dirac operator by separating modes due to global topology from the remaining non-zero eigenvalues .
The main quantity that needs to be computed numerically is the fermion propagator $`\stackrel{~}{D}^1(\mu )`$ in Eqn. (5). Certain properties of the Overlap-Dirac operator enable us to compute the propagator for several fermion masses at one time using the multiple Krylov space solver and also go directly to the massless limit.
We note that
$$H_o^2(\mu )=D^{\dagger }(\mu )D(\mu )=D(\mu )D^{\dagger }(\mu )=\left(1-\mu ^2\right)\left[H_o^2(0)+\frac{\mu ^2}{1-\mu ^2}\right]$$
(15)
with
$$H_o^2(0)=\frac{1}{2}+\frac{1}{4}\left[\gamma _5ϵ(H_w)+ϵ(H_w)\gamma _5\right]$$
(16)
Eq. (15) implies that we can solve the set of equations $`H_o^2(\mu )\eta (\mu )=b`$ for several masses, $`\mu `$, simultaneously (for the same right hand $`b`$) using the multiple Krylov space solver described in Ref. . We will refer to this as the outer conjugate gradient inversion. It is easy to see that $`[H_o^2(\mu ),\gamma _5]=0`$, implying that one can work with the source $`b`$ and solutions $`\eta (\mu )`$ restricted to one chiral sector.
The numerically expensive part of the Overlap-Dirac operator is the action of $`H_o^2(0)`$ on a vector since it involves the action of $`[\gamma _5ϵ(H_w)+ϵ(H_w)\gamma _5]`$ on a vector. If the vector $`b`$ is chiral (i.e. $`\gamma _5b=\pm b`$) then $`[\gamma _5ϵ(H_w)+ϵ(H_w)\gamma _5]b=[\gamma _5\pm 1]ϵ(H_w)b`$. Therefore we only need to compute the action of $`ϵ(H_w)`$ on a single vector.
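Both the factorization (15)-(16) and the chiral-source simplification are simple operator identities and can be checked with random matrices, as in the sketch below (the size, the test mass and the random ensemble are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
g5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Hw = (A + A.conj().T) / 2
w, U = np.linalg.eigh(Hw)
eps = U @ np.diag(np.sign(w)) @ U.conj().T

mu = 0.1
D = 0.5 * ((1 + mu) * np.eye(n) + (1 - mu) * g5 @ eps)   # Eq. (4)
Ho2_0 = 0.5 * np.eye(n) + 0.25 * (g5 @ eps + eps @ g5)   # Eq. (16)
print(np.max(np.abs(D.conj().T @ D                        # Eq. (15)
                    - ((1 - mu**2) * Ho2_0 + mu**2 * np.eye(n)))))

b = np.zeros(n, dtype=complex)
b[:n // 2] = rng.normal(size=n // 2)                      # chiral source: g5 b = +b
lhs = (g5 @ eps + eps @ g5) @ b
rhs = (g5 + np.eye(n)) @ (eps @ b)
print(np.max(np.abs(lhs - rhs)))                          # both outputs ~1e-16
```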
To study the possible onset of spontaneous chiral symmetry breaking in quenched QCD, we stochastically estimate, for a fixed gauge field background,
$$\frac{1}{V}\sum _x\langle \overline{\psi }(x)\psi (x)\rangle _A=\frac{1}{V}\mathrm{Tr}[\stackrel{~}{D}^{-1}(\mu )]$$
(17)
and average over gauge fields. We also compute stochastically $`\omega =\chi _\pi -\chi _{a_0}`$
$$\omega =\frac{2}{V}\left[\mathrm{Tr}(\gamma _5\stackrel{~}{D}^{-1})^2(\mu )+\mathrm{Tr}\stackrel{~}{D}^{-2}(\mu )\right]=\frac{2}{\mu }\langle \overline{\psi }\psi \rangle -2\frac{d}{d\mu }\langle \overline{\psi }\psi \rangle _A.$$
(18)
For a derivation of the above equation we refer the reader to Ref. . Some simple manipulations yield the following relations
$`\langle b|\stackrel{~}{D}^{-1}(\mu )|b\rangle `$ $`=`$ $`{\displaystyle \frac{\mu }{1-\mu ^2}}b^{\dagger }\left(\eta (\mu )-b\right)`$ (19)
$`\langle b|(\gamma _5\stackrel{~}{D}^{-1})^2(\mu )+\stackrel{~}{D}^{-2}(\mu )|b\rangle `$ $`=`$ $`{\displaystyle \frac{2\mu ^2}{(1-\mu ^2)^2}}\left(\eta ^{\dagger }(\mu )-b^{\dagger }\right)\left(\eta (\mu )-b\right)`$ (20)
where
$$H_o^2(\mu )\eta (\mu )=b\text{ with }\gamma _5b=\pm b.$$
(21)
As discussed in Ref. , it is appropriate to remove the topological contributions to the above quantities in order to study the onset of chiral symmetry breaking. For this we first compute the low lying spectrum of $`\gamma _5D(0)`$ using the Ritz functional method . This gives us, in particular, information about the number of zero modes and their chirality. In gauge fields with zero modes we always find that all $`|Q|`$ zero modes have the same chirality. We have not found any accidental zero mode pairs with opposite chiralities. We then perform a stochastic estimate in the chiral sector that has no zero modes and double the result to get the total contribution to $`\overline{\psi }\psi `$ and $`\omega `$ excluding topology. In this sector, the propagator is non-singular even for zero fermion mass. Given a Gaussian random source $`b`$ with a definite chirality all we have to do is solve the equation $`H_o^2(\mu )\eta (\mu )=b`$ for several values of $`\mu `$.
In Fig. 4a we show $`\omega `$ without the topology term added for various lattice sizes and $`\beta `$ in SU(3) using $`m=1.65`$. We see some indication of the onset of spontaneous chiral symmetry breaking (with strong finite volume dependence) at $`\beta =5.85`$ where, as $`\mu `$ decreases, there is a small region where $`\omega \sim 1/\mu `$, then $`\omega `$ turns over and goes like $`\mu ^2`$. This latter behavior is expected in finite volume and is obvious from the explicit $`\mu ^2`$ dependence in Eq. (20).
We show in Fig. 4b pseudoscalar and vector meson masses from a preliminary spectroscopy calculation for SU(3) $`\beta =5.85`$ and $`5.70`$ on an $`8^3\times 16`$ lattice. Masses are extracted using multiple correlation functions in an excited state fit. The fermion masses have been chosen to be above the region of decreasing $`\omega `$ from finite volume dependence in Fig. 4a, namely $`\mu >10^{-2}`$. As in the calculations for $`\omega `$ above, a multiple mass shift conjugate gradient solver was used for several values of $`\mu `$ in the solution of $`H_o^2(\mu )\eta (\mu )=b`$ with chiral source $`b`$. We see some slight deviation of $`am_{PS}^2`$ from linearity for decreasing $`\mu `$, and $`am_{PS}^2`$ does not extrapolate to $`0`$ at $`\mu =0`$ which we attribute to finite volume dependence. The vector mass $`m_V`$ is fairly linear over the entire region.
## 4 The Overlap-Dirac operator and random matrix theory
The Goldstone pions, associated with the spontaneous breaking of chiral symmetry, dominate the low-energy, finite-volume scaling behavior of the Dirac operator spectrum in the microscopic regime, defined by $`1/\mathrm{\Lambda }_{QCD}\ll L\ll 1/m_\pi `$, with $`L`$ the linear extent of the system. The properties in this regime are universal and can be characterized by chiral random matrix theory (RMT) within three ensembles, depending on some symmetry properties of the Dirac operator, and according to the sector of fixed topology, entering via the number of exact zero modes (see for a recent review). Since the Overlap-Dirac operator has the same chiral properties as the Dirac operator in the continuum, and since it has exact zero modes in topologically non-trivial gauge fields, it is well suited to test the predictions of RMT. In Figure 5 the distribution of the lowest (non-zero) eigenvalue is compared to the predictions of chiral RMT for examples in all three universality classes – SU(2) in the fundamental representation for the orthogonal ensemble, SU(3) in the fundamental representation for the unitary ensemble and SU(2) in the adjoint representation for the symplectic case – and in the sectors with zero or one exact zero modes . Excellent agreement is seen. In addition, the condensate $`\mathrm{\Sigma }`$ obtained in the two different sectors of each ensemble from fits to the RMT predictions agreed within errors. This agreement further validates the chiral RMT predictions on the one hand and strengthens the case for the usefulness of the overlap regularization of massless fermions on the other hand.
## 5 Conclusions
The Overlap-Dirac operator provides a formulation of vector gauge theories on the lattice with an exact chiral symmetry in the massless limit and no fermion doubling problem. The use of the Overlap-Dirac operator is, however, CPU time intensive. We reviewed a few methods to implement the operator acting on a vector. Of these, we found the optimal rational approximation method, in conjunction with the exact treatment of a few low lying eigenvalues and eigenvectors of $`H_w`$ in $`ϵ(H_w)`$, the most efficient. Further improvements in the numerical treatment of the Overlap-Dirac operator would be very helpful.
The Overlap-Dirac operator has exact zero modes with definite chirality in the presence of topologically non-trivial gauge fields. Due to their good chiral properties overlap fermions are well suited for the study of spontaneous chiral symmetry breaking. It is possible to separate the contribution of the exact zero modes due to topology in a numerical computation. We have presented sample results in a quenched theory from the remaining non-topological modes. We presented first spectroscopy results with overlap fermions in quenched lattice QCD. Finally, we compared the distribution of the smallest eigenvalue of the Overlap-Dirac operator with the predictions from random matrix theory.
The authors would like to thank Herbert Neuberger for useful discussions. This research was supported by DOE contracts DE-FG05-85ER250000 and DE-FG05-96ER40979. Computations were performed on the QCDSP, CM-2, and the workstation cluster at SCRI, and the Xolas computing cluster at MIT’s Laboratory for Computing Science.
# A Generalized Shannon Sampling Theorem, Fields at the Planck Scale as Bandlimited Signals
## Abstract
It has been shown that space-time coordinates can exhibit only very few types of short-distance structures, if described by linear operators: they can be continuous, discrete or “unsharp” in one of two ways. In the literature, various quantum gravity models of space-time at short distances point towards one of these two types of unsharpness. Here, we investigate the properties of fields over such unsharp coordinates. We find that these fields are continuous - but possess only a finite density of degrees of freedom, similar to fields on lattices. We observe that this type of unsharpness is technically the same as the aperture induced unsharpness of optical images. It is also of the same type as the unsharpness of the time-resolution of bandlimited electronic signals. Indeed, as a special case we recover the Shannon sampling theorem of information theory.
UFIFT-HEP-99-04
hep-th/9905114
At the heart of every candidate theory of quantum gravity is an attempt to understand the structure of space-time at very short distances. The reason is a simple gedanken experiment: at the latest when trying to resolve distances as small as the Planck scale, the accompanying energy-momentum fluctuations due to the uncertainty relation should cause curvature fluctuations large enough to significantly disturb the very space-time distance which one attempts to resolve. Speculations about the resulting behavior of space-time at small distances have ranged from the idea that space-time is discrete, to the idea that it is foam-like, to the idea that space-time may be a derived concept with a highly dynamical short-distance structure, as e.g. string theory would suggest. At least at present, however, there is no experimental access to sufficiently small scales, and therefore, a priori, the short-distance structure of space-time could still be any one out of infinitely many possibilities.
In this context, it has recently been pointed out, in , that the range of possible short-distance structures can be reduced to only very few basic possibilities, under a certain assumption. The assumption is that the fundamental theory of quantum gravity possesses for each dimension of space-time an operator $`X^i`$ which is linear and whose expectation values are real. The dynamics of these $`X^i`$ may be complicated and the $`X^i`$ may or may not commute. Nevertheless, one can prove on functional analytic grounds that any such operator $`X^i`$, considered separately, describes a coordinate which is necessarily either continuous or discrete, or it is unsharp in one of two well-defined ways. All other cases are mixtures of these.
Since continua and lattices are familiar, we will here study one of the two types of unsharp short-distance structures. The second type of unsharpness will be dealt with elsewhere. The type of unsharp coordinate which we will here investigate can be characterized by an uncertainty relation : Such a coordinate is described by an operator $`X^i`$ for which the formal standard deviation $`\mathrm{\Delta }X^i=\langle (X^i-\langle X^i\rangle )^2\rangle ^{1/2}`$ obeys some positive lower bound:
$$\mathrm{\Delta }X^i(\varphi )\geq \mathrm{\Delta }X_{min}^i(\langle \varphi |X^i|\varphi \rangle )$$
Here, $`\varphi `$ is any vector on which the operator can act, and the function $`\mathrm{\Delta }X_{min}^i(x)`$ describes how the lower bound depends on the $`X^i`$-expectation value. If this were nonrelativistic quantum mechanics, the interpretation would be that the $`X^i`$-coordinate is unsharp in the sense that particles cannot be localized to arbitrary precision on the $`x^i`$-axis and that the lower bound on the position resolution depends in general on the $`x^i`$-expectation value, i.e. on where on the $`x^i`$-axis one tries to localize the particle. The function $`\mathrm{\Delta }X_{min}^i(x)`$ may in general also take the value zero, but we will here focus on the case where it is strictly positive.
This type of unsharp short-distance structure has indeed frequently appeared in quantum gravity and in particular in string theory. For example, several studies, see e.g. , suggest that the Heisenberg uncertainty relation may effectively pick up Planck scale or string scale correction terms of the form:
$$\mathrm{\Delta }x\mathrm{\Delta }p\geq \frac{\hbar }{2}\left(1+\beta (\mathrm{\Delta }p)^2+\mathrm{\dots }\right)$$
(1)
For $`\beta `$ positive, the lowest order correction in Eq.1 implies that there is a constant lower bound for $`\mathrm{\Delta }x`$, namely $`\mathrm{\Delta }x_{min}=\hbar \sqrt{\beta }`$. Of course, it is not necessarily surprising if even quite different candidate quantum gravity theories arrive in this way or another at some positive lower bound $`\mathrm{\Delta }X_{min}^i(x)`$ on the formal uncertainty in coordinates $`X^i`$, because, as we mentioned, for real entities which are described by linear operators this is one out of very few possibilities.
Our aim here is to investigate what this general type of unsharp short-distance structure means in field theory: Is it possible to define fields $`\varphi (x^i,y)`$ “over” such an unsharp coordinate $`X^i`$? The operator $`X^i`$ should act simply as $`X^i:\varphi (x^i,y)\to x^i\varphi (x^i,y)`$ while we let $`y`$ stand collectively for all other coordinates (if commutative) or any other quantum numbers. The main question is, how do the fields depend on $`x^i`$, given that an unsharp coordinate $`x^i`$ is neither continuous nor discrete? How does one calculate the Hilbert space scalar product of fields - does it involve an integral over $`x^i`$, a sum over discrete points on the $`x^i`$-axis, or something else?
As we will show here, the answer is that fields $`\varphi (x^i,y)`$ over such unsharp coordinates are indeed well-defined: these fields are continuous functions $`\varphi (x^i,y)`$ over a continuous variable $`x^i`$. Crucially, however, these fields are automatically ultraviolet cut off in the sense that they possess only finitely many degrees of freedom per unit length along the $`x^i`$ coordinate, similar to fields on lattices!
Before we begin describing the details, let us agree from now on to suppress the index $`i`$ and the other variables $`y`$. We should also mention that some of the operators which describe unsharp coordinates of this type can only be represented on fields which possess isospinor indices, but this phenomenon will be discussed elsewhere.
Let us begin with two definitions: By a discretization of the $`x`$-axis we mean a discrete set of real numbers, $`\{x_n\}`$, where $`x_{n+1}>x_n`$ and where $`n`$ runs through all integers. By a partitioning of the $`x`$-axis we mean a smoothly parametrized family of discretizations $`\{x_n(\alpha )\}`$ which together make up the entire $`x`$-axis, namely such that every point on the $`x`$-axis, i.e. every real number, occurs in exactly one of the discretizations.
Now our claim is that to each unsharp coordinate $`X`$, as characterized by a curve $`\mathrm{\Delta }X_{min}(x)`$, there corresponds a partitioning $`\{x_n(\alpha )\}`$ of the $`x`$-axis such that if a field $`\varphi (x)`$ is known only on one of the partitioning’s discretizations then the field can already be reconstructed everywhere on the $`x`$-axis. Namely, if for some arbitrary fixed $`\alpha `$ the amplitudes $`\varphi (x_n(\alpha ))`$ are known for all $`n`$ then $`\varphi (x)`$ can be recovered for all $`x`$ through a reconstruction formula of the form:
$$\varphi (x)=\sum _nG(x,x_n(\alpha ))\varphi (x_n(\alpha ))$$
(2)
Thus, the knowledge of a field’s amplitudes at finitely many points per unit length along the $`x`$-axis indeed suffices to describe the field entirely. Thereby, the operation of reconstructing a field is interchangeable with the operation of multiplying it by $`X`$:
$$x\varphi (x)=\sum _nG(x,x_n(\alpha ))x_n(\alpha )\varphi (x_n(\alpha ))$$
The scalar product of two fields (as far as the $`x`$\- dependence is concerned) is a sum:
$$\langle \varphi _1|\varphi _2\rangle =\sum _n\varphi _1^{*}(x_n(\alpha ))\varphi _2(x_n(\alpha ))$$
This scalar product formula gives in fact the same result independently of $`\alpha `$, i.e. independently of the choice of discretization on which the sum is being calculated.
Similarly, also the $`X`$-expectation value and the second moment of fields can be calculated on any one of the discretizations $`\{x_n(\alpha )\}`$ and the result does not depend on $`\alpha `$. Correspondingly, $`\mathrm{\Delta }X(\varphi )=(\langle \varphi |X^2|\varphi \rangle -\langle \varphi |X|\varphi \rangle ^2)^{1/2}`$ is the standard deviation of the fields’ discrete samples on any one of the discretizations $`\{x_n(\alpha )\}`$ of the $`x`$-axis. We remark that, more generally, if a field is not only in the domain of $`X`$ but also in the domain of higher powers of $`X`$, say $`X^r`$, i.e. if the field decays at infinity with the corresponding inverse power, then the higher moments up to the $`2r`$’th are finite, and they too are independent of the discretization in which they are calculated:
$$\langle \varphi |X^r|\varphi \rangle =\sum _n(x_n(\alpha ))^r\varphi ^{*}(x_n(\alpha ))\varphi (x_n(\alpha ))$$
We now still need to address the question of exactly how the minimum position uncertainty curve $`\mathrm{\Delta }X_{min}(x)`$ corresponds to a partitioning of the $`x`$-axis. One expects of course that in regions of the $`x`$-axis where $`\mathrm{\Delta }X_{min}(x)`$ is small the spacing needs to be tighter, and vice versa.
To see the precise relationship, let us first recall the minimum position uncertainty curve for particles which live on a one-dimensional lattice $`\{x_n\}`$. Clearly, these particles can be localized to absolute precision $`\mathrm{\Delta }X=0`$ at each of the lattice sites, say $`x_{n_0}`$, namely with the wave-function $`\varphi (x_n)=\delta _{n,n_0}`$. If, however, a particle’s expectation value lies in between two lattice sites then its standard deviation cannot be lower than some finite value. As is straightforward to verify, the curve $`\mathrm{\Delta }X_{min}(x)`$ for a one-dimensional lattice consists of half-circles which arc from lattice site to lattice site.
The fields over an unsharp coordinate do not live on only one discretization of the $`x`$-axis, but simultaneously on a whole family of discretizations which together constitute a partitioning of the $`x`$-axis. In contrast to ordinary fields over a lattice, fields over unsharp coordinates therefore obey an equation of the form (for arbitrary fixed $`\alpha `$):
$$\sum _nf_n(\alpha )\varphi (x_n(\alpha ))=0$$
(3)
Eq.3 expresses that on each one of the discretizations the fields cannot be too peaked: We will find that $`f_n(\alpha )\ne 0`$ for all $`n`$, which implies, for example, that fields $`\varphi (x_n)=\delta _{n,n_0}`$ do not occur. More precisely, Eq.3 implies that the variable lower bound $`\mathrm{\Delta }X_{min}(x)`$ is the joint lower bound of all the minimum $`X`$-uncertainty curves of the individual discretizations in the partitioning. Namely, if we denote the minimum $`X`$-uncertainty curve of the discretization to the parameter $`\alpha `$ by $`\mathrm{\Delta }X_{min}(x,\alpha )`$ (composed of half-circles which arc from point $`x_n(\alpha )`$ to point $`x_{n+1}(\alpha )`$ for all $`n`$) then:
$$\mathrm{\Delta }X_{min}(x)=\underset{\alpha }{\mathrm{max}}\mathrm{\Delta }X_{min}(x,\alpha )$$
In this way, every partitioning $`\{x_n(\alpha )\}`$ of the $`x`$-axis determines a minimum position uncertainty curve $`\mathrm{\Delta }X_{min}(x)`$ and vice versa. We can describe partitionings conveniently by how their lattice spacings vary over the $`x`$-axis. Indeed, for each partitioning there is a unique lattice spacing function $`s(x)`$ which obeys for all $`n`$ and $`\alpha `$:
$$s((x_{n+1}(\alpha )+x_n(\alpha ))/2)=x_{n+1}(\alpha )-x_n(\alpha )$$
Its inverse, $`\sigma (x):=1/s(x)`$, the “density of degrees of freedom” function, of course also describes an unsharp coordinate entirely.
Interestingly, $`s(x),\sigma (x)`$ and, correspondingly, the minimum position uncertainty curve $`\mathrm{\Delta }X_{min}(x)`$ cannot vary arbitrarily abruptly. Intuitively, the reason is clear: if a particle can be localized only to very little precision around one point on the $`x`$-axis, then it is plausible that the particle cannot be localized to very high precision around a closely neighboring point.
In fact, we find that the possible spatial variability of the unsharpness of a coordinate is constrained to the extent that one discretization, say $`\{x_n(0)\}`$, together with the set of data $`\{\frac{d}{d\alpha }x_n(0)\}`$, i.e. together with the discretization’s derivative with respect to $`\alpha `$, already determines an entire partitioning $`\{x_n(\alpha )\}`$. (Technically, the discrete amplitudes $`v(x_n(0)):=(-1)^n(x_n(0)-i)^{-1}(dx_n/d\alpha (0))^{1/2}`$ belong to a field $`v(x)`$ which can be reconstructed through Eq.2, thereby yielding $`dx_n(\alpha )/d\alpha `$ and therefore $`\{x_n(\alpha )\}`$ for all values of $`\alpha `$.)
Any unsharp coordinate can therefore be specified entirely by specifying one of its discretizations $`\{x_n(0)\}`$ together with its derivative $`\{\frac{d}{d\alpha }x_n(0)\}`$. Let us abbreviate these data as $`x_n:=x_n(0)`$ and $`x_n^{\prime }:=dx_n(\alpha )/d\alpha |_{\alpha =0}`$.
We still need to give explicit expressions for the coefficients $`f_n(\alpha )`$ of Eq.3 and of course also for the reconstruction kernel $`G`$ of Eq.2. Expressed in terms of the data $`\{x_n\}`$ and $`\{x_n^{\prime }\}`$, we obtain (after lengthy calculation):
$$f_n(0)=(-1)^n\sqrt{x_n^{\prime }}$$
(4)
and
$$G(x,x_n)=(-1)^{z(x,x_n)}\frac{\sqrt{x_n^{\prime }}}{x-x_n}\left(\sum _m\frac{x_m^{\prime }}{(x-x_m)^2}\right)^{-1/2}$$
(5)
Here, $`(-1)^{z(x,x_n)}`$ provides a sign factor such that $`G(x,x_n)`$ is continuous in $`x`$. The sign factor arises naturally in a product representation:
$$G(x,x_n)=\underset{N\to \infty }{\mathrm{lim}}\frac{\prod _{|m|<N,m\ne n}(x-x_m)}{\sqrt{\sum _{|r|<N}\frac{x_r^{\prime }}{x_n^{\prime }}\prod _{|s|<N,s\ne r}(x-x_s)^2}}$$
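To make the construction concrete, the following numerical sketch implements the kernel of Eq.5 with the infinite sums truncated to finitely many sample points. The sign factor is implemented here as the parity of the number of sample points crossed between $`x`$ and $`x_n`$, anchored so that $`G(x_n,x_n)=1`$ (one reading that makes $`G`$ continuous); in the equidistant case it should reproduce, up to truncation error, the sinc-function kernel derived further below:

```python
import numpy as np

def kernel_G(x, n, xs, xps):
    """Truncated kernel of Eq. 5 for one discretization xs with
    derivative data xps = x_n'.  The sign (-1)^z flips each time a
    sample point is crossed, anchored so that G(x_n, x_n) = 1."""
    xs = np.asarray(xs, float)
    xps = np.asarray(xps, float)
    hit = np.isclose(x, xs)
    if hit.any():                        # G(x_m, x_n) = delta_{mn}
        return 1.0 if hit[n] else 0.0
    lo, hi = min(x, xs[n]), max(x, xs[n])
    z = np.count_nonzero((xs > lo) & (xs <= hi))
    norm = np.sum(xps / (x - xs) ** 2) ** -0.5
    return (-1.0) ** z * np.sqrt(xps[n]) / (x - xs[n]) * norm

# Equidistant case x_n = n s, x_n' = 1: compare with the sinc kernel.
s, N = 1.0, 300                          # N truncates the infinite sums
xs = s * np.arange(-N, N + 1)
xps = np.ones_like(xs)
n0 = N + 3                               # index of the point x_n = 3 s
for x in (0.3, 1.7, 4.25):
    G = kernel_G(x, n0, xs, xps)
    ref = np.sinc((x - xs[n0]) / s)      # np.sinc(t) = sin(pi t)/(pi t)
    print(f"x = {x:5.2f}:  G = {G:+.5f}   sinc = {ref:+.5f}")
```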
The proof of these results is rather technical. It is contained in a previous version, see , and will be presented in detail in a follow-up paper. Let us here only sketch the proof: The self-adjoint operator $`X(0)`$ with purely discrete spectrum $`\{x_n\}`$ possesses simple symmetric restrictions $`X`$, each with a $`U(1)`$-family of self-adjoint extensions $`X(\alpha )`$. It can be shown that their spectra, $`\{x_n(\alpha )\}`$, yield partitionings of the real line and that the data $`\{x_n^{}\}`$ suffice to specify the restriction and consequently the partitioning. The main part of the proof then consists in calculating the unitaries which interpolate the eigenbases of the extensions. The matrix elements of those unitaries constitute the reconstruction kernel.
We eventually arrive at one-parameter resolutions of the Hilbert space identity in terms of an overcomplete and continuously parametrized set of normalizable vectors:
$$1=\frac{1}{2\pi }\int _0^{2\pi }d\alpha \sum _n|x_n(\alpha )\rangle \langle x_n(\alpha )|=\frac{1}{2\pi }\int _{-\infty }^{+\infty }dx\frac{d\alpha }{dx}|x\rangle \langle x|$$
Note that coherent states and continuous wavelets, see e.g. , yield analogous two-parameter resolutions of the identity.
Let us now consider the instructive special case of unsharp coordinates whose minimum position uncertainty curve $`\mathrm{\Delta }X_{min}(x)`$ is constant. In this case, also the density of degrees of freedom $`\sigma (x)`$ is constant, $`\sigma =(2\mathrm{\Delta }X_{min})^{-1}`$, and the corresponding partitioning $`\{x_n(\alpha )\}`$ of the $`x`$-axis reads:
$$x_n(\alpha )=2n\mathrm{\Delta }X_{min}+\alpha $$
We read off that $`x_n=x_n(0)=2n\mathrm{\Delta }X_{min}`$ and $`x_n^{\prime }=\frac{dx_n}{d\alpha }(0)=1`$. Applying these parameters in Eq.5 yields the reconstruction kernel. In this special case we can use the fact that
$$\sum _n\frac{1}{(z-n)^2}=\left(\frac{\pi }{\mathrm{sin}\pi z}\right)^2$$
to obtain a particularly simple expression for the kernel:
$$G(x,x_n)=\text{sinc}\left(\frac{\pi (x-x_n)}{2\mathrm{\Delta }X_{min}}\right)$$
We observe that the kernel, being a sinc-function, is the Fourier transform of the function which is $`1`$ in the frequency interval $`[-1/4\mathrm{\Delta }X_{min},+1/4\mathrm{\Delta }X_{min}]`$ and which vanishes everywhere else. This means that the set of fields over a coordinate with constant unsharpness $`\mathrm{\Delta }X_{min}`$ has a particularly simple characterization: It is the set of fields whose frequency range is limited to the interval $`[-\omega _{max},\omega _{max}]`$, where $`\omega _{max}=1/4\mathrm{\Delta }X_{min}`$. Also Eq.3 acquires a simple interpretation: Eq.4 yields $`f_n(0)=(-1)^n`$ so that, as is readily verified, Eq.3 expresses that the fields’ Fourier transforms vanish at $`\pm \omega _{max}`$, i.e. Eq.3 is now a boundary condition in Fourier space.
The fact that functions whose frequency range is within the interval $`[-\omega _{max},\omega _{max}]`$ can be reconstructed everywhere, via the sinc-function kernel $`G(x,x_n)=\text{sinc}(2\pi (x-x_n)\omega _{max})`$, from their values on discrete points $`\{x_n\}`$ with spacing $`1/2\omega _{max}`$, is indeed well-known, namely as the Shannon sampling theorem. Sampling at this spacing, $`x_{n+1}-x_n=1/2\omega _{max}`$, corresponds to the Nyquist sampling rate $`2\omega _{max}`$. The basic idea of the theorem was actually already known to Borel (1897) and, according to , perhaps even to Cauchy (1841).
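As a concrete illustration, the sketch below reconstructs a toy bandlimited signal from its samples at the spacing $`1/2\omega _{max}`$; the signal, the bandlimit and the sample range are arbitrary placeholders:

```python
import numpy as np

omega_max = 4.0                       # bandlimit (placeholder value)
s = 1.0 / (2.0 * omega_max)           # sample spacing 1/(2 omega_max)
t_n = s * np.arange(-2000, 2001)      # truncated set of sample points

def phi(t):                           # toy field, bandlimited by construction
    return (np.sin(2 * np.pi * 2.3 * t)
            + 0.5 * np.cos(2 * np.pi * 3.7 * t)
            + 0.2 * np.sin(2 * np.pi * 0.4 * t))

samples = phi(t_n)
t = np.linspace(-2.0, 2.0, 7)         # points between the samples
rec = np.array([np.sum(samples * np.sinc((ti - t_n) / s)) for ti in t])
print(np.max(np.abs(rec - phi(t))))   # ~0, up to the truncated sinc tails
```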
Shannon is credited with introducing the theorem into information theory in the 1940s, see : Shannon showed that, due to noise and other limitations, in effect only finitely many amplitude levels of electronic signals can be resolved, say $`N`$. Consequently, for any given ensemble of signals, the measurement of a signal’s amplitude at some fixed time $`t`$ can yield at most $`\mathrm{log}_2N`$ bits of information. Crucially now, Shannon’s ansatz is to idealize electronic signals $`\varphi (t)`$ as bandlimited, i.e. as frequency-limited functions. The sampling theorem then shows that $`2\omega _{max}`$ amplitude measurements per unit time suffice to capture such signals entirely - and this implies that these signals can carry information at most at the rate $`b=2\omega _{max}\mathrm{log}_2N`$ in bits/sec or, in terms of the density of degrees of freedom: $`b=\sigma \mathrm{log}_2N`$.
The ability provided by the sampling theorem to reconstruct continuous signals from discrete samples and the analysis of their information content have indeed proven very useful in ubiquitous applications from scientific data taking and data analysis to digital audio and video engineering. This of course motivated several generalizations of the sampling theorem, see e.g. . For example, there are methods to improve the convergence of the reconstruction through oversampling, see e.g. .
One may ask, therefore, why it should have been difficult to generalize the theorem for time-varying information densities. The main reason is that what would seem to be the obvious approach, namely to try to use Fourier theory to define a notion of time-varying bandwidth, $`\omega _{max}(t)`$, faces major difficulties: Firstly, the resolution of a signal’s frequency content in time is of course limited by the time-frequency uncertainty relation. Secondly, even low bandwidth signals can actually oscillate arbitrarily fast in any interval of finite size (on these so-called superoscillations, see e.g. ).
We here avoid those problems by not even trying to define variable bandwidths $`\omega _{max}(t)`$ in any Fourier sense. Instead, we obtain a handle on variable information densities through variable densities of degrees of freedom $`\sigma (t)`$, which are well-defined directly in the time-domain. Possible practical applications are currently being explored.
We note that, as a by-product of considering the special case of constant density of degrees of freedom, we have found that the unsharpness of space-time according to the quantum gravity and string theory motivated uncertainty relation, Eq.1, is indeed of the same type as the unsharpness in the time-resolution of bandlimited electronic signals. In fact, it is also of the same type as the fundamental unsharpness of optical images since, as is well-known, the aperture induces a bandlimit on the measurement of angles. Of course, to find this type of unsharpness in such different contexts is again not necessarily surprising, given that unsharp real entities described by linear operators - within any arbitrary theory - can exhibit only two types of unsharpness.
Our finding that fields over unsharp coordinates possess finite densities of degrees of freedom can serve, as we saw, as the starting point for an information theoretic analysis of ensembles of fields. This should be interesting to pursue. Indeed, in studies in quantum gravity and in particular in string theory the counting of degrees of freedom and an information theoretical perspective have recently found renewed interest, in particular in the contexts of the black hole information loss problem and the holographic principle, see e.g. .
Our observation that fields over unsharp coordinates are continuous but behave in many ways like fields over lattices also raises questions such as: how do anomalies manifest themselves with this type of ultraviolet cut-off - perhaps through fermion doubling as on lattices, or otherwise? Eventually, it should be possible to work out model-independent phenomenological signatures of this type of unsharp space-time. These might be testable if, as recent models of large extra dimensions suggest is possible, the onset of strong gravity effects is not too far above the currently experimentally accessible scale of about $`10^{-18}m`$, rather than at the Planck scale of $`10^{-35}m`$, see e.g..
Acknowledgement: The author is grateful to John Klauder for very valuable criticisms.
# Stiff monatomic gold wires with a spinning zigzag geometry
## Abstract
Using first principles density functional calculations, gold monatomic wires are found to exhibit a zigzag shape which remains under tension, becoming linear just before breaking. At room temperature they are found to spin, which explains the extremely long apparent interatomic distances shown by electron microscopy. The zigzag structure is stable if the tension is relieved, the wire holding its chainlike shape even as a free-standing cluster. This unexpected metallic-wire stiffness stems from the transverse quantization in the wire, as shown in a simple free electron model.
The manipulation of matter at the atomic scale is heralding a technological revolution and opening new research avenues. A spectacular achievement is the recent fabrication of monatomic chains of gold atoms, the ultimate thin wires. Metallic nanowire contacts can be created with the scanning tunneling microscope , with mechanically controllable break junctions , or even with simple tabletop setups . The relationships between conduction, geometric, and mechanical properties have been studied by simultaneous measurements of conductance and applied force , by atomistic , continuous , or mixed model simulations, and by first-principles calculations . Until very recently, however, only indirect experimental information about the structure of the nanocontacts was available. This situation changed dramatically after Ohnishi et al directly visualized nanometric gold wires by transmission electron microscopy (TEM). Surprisingly, in a bridge of four atoms connecting two gold tips, which was stable for more than two minutes, the atoms were spaced by 3.5-4.0 Å. Later reports have even increased this distance up to $`\sim `$5 Å, a value much larger than that in Au<sub>2</sub> (2.5 Å) and in bulk gold (2.9 Å). Gold monatomic chains with a length of four or more atoms were independently associated by Yanson et al with the last conductance plateau during stretching (close to one conductance quantum $`2e^2/h`$). The histogram of these plateau lengths showed maxima at regular intervals, which might be related to the distances between gold atoms in the wire.
In this work we study the structure and stability of gold monatomic wires by first-principles density-functional calculations . We use Siesta , a code designed to treat large systems with local basis sets which has been already used to study gold clusters . Tests were performed for Au<sub>2</sub> and bulk gold, using both the local density approximation (LDA) and the generalized gradient approximation (GGA) . Core electrons were replaced by scalar-relativistic norm-conserving pseudopotentials . Valence electrons were described with a basis set of double-$`\zeta `$ $`s,p`$ and $`d`$ numerical pseudo-atomic orbitals. Real- and reciprocal-space integration grids were increased until a total-energy convergence better than 2 meV/atom was achieved. The results are in very good agreement with previous calculations, using the same functionals, and with the experimental geometries and vibration frequencies . The GGA improves the binding and cohesive energies, but not the geometries, which are the main focus of this work. In the LDA, we obtain, for the gold dimer, a bond length $`l`$=2.51 Å, a vibration frequency $`\nu `$=190 cm<sup>-1</sup>, and a binding energy $`D`$=3.18 eV. For the bulk fcc crystal, the calculated nearest-neighbor distance, bulk modulus, and cohesive energy are $`d`$=2.91 Å, $`B`$=194 GPa, and $`E_c`$=4.55 eV respectively. In the GGA, the results are $`l`$=2.57 Å, $`\nu `$=171 cm<sup>-1</sup>, $`D`$=2.72 eV, $`d`$=2.98 Å, $`B`$=137 GPa, and $`E_c`$=3.37 eV. The experimental values are $`l`$=2.47 Å, $`\nu `$=191 cm<sup>-1</sup>, $`D`$=2.29 eV, $`d`$=2.87 Å, $`B`$=172 GPa, and $`E_c`$=3.78 eV.
The wire calculations were performed for infinite monatomic chains, using periodic boundary conditions, as well as for finite wires of various lengths, either free-standing or confined between small pyramidal tips. All the calculations were repeated with the LDA and the GGA, and both ferromagnetic and antiferromagnetic solutions were searched. In every case, the geometry was relaxed until the maximum forces were smaller than 10 meV/Å (16 pN). As an additional cross-check, some critical geometries were recalculated with a different code, using a plane wave basis set. The results will be presented in full elsewhere. In short, we have found no qualitative differences, and only very minor quantitative differences between the finite and infinite wires, between plane wave and local basis sets, and between LDA and GGA, and no magnetic solutions could be stabilized at any wire length. We present in what follows the Siesta LDA results for the infinite wires, except where stated.
Fig. 1 shows the wire geometry and the binding energy as a function of the wire length. Except when very stretched, the wire adopts a nonlinear, planar zigzag geometry, with two atoms per unit cell. Unconstrained relaxations with larger cells did not result in longer periods, nor in out-of-plane deformations. The energy shows a shallow minimum at a length of 2.32 Å/atom, with a bond angle of 131<sup>o</sup>. The stability of this geometry was demonstrated by checking that the dynamical matrix, calculated in a cell of 16 atoms, had no negative eigenvalues. For comparison Fig. 1c shows the energy of a wire constrained to a linear geometry, which has a minimum 0.24 eV/atom higher, and at a wire length 0.25 Å longer, than in the zigzag geometry. This difference in wire length is almost entirely due to the change in bond angle, since the bond distances differ by only 0.02 Å between the two minima. The bond angle increases with stretching, but the wire becomes linear only shortly before breaking.
The comparison between the band structures of the linear and zigzag wires (Fig. 2) offers some hints for understanding their relative stability. In the linear chain, the overlap between the filled $`d`$ states broadens the $`d`$ bands until they reach the Fermi level, destabilizing the wire with their associated high density of states. For the same wire length, the zigzag configuration allows a larger bond distance, that brings back the $`d`$ bands below the Fermi level and leaves a single $`s`$ band crossing it. This is consistent with the observation of a single conduction channel in the monatomic wires . A Peierls dimerization instability is expected since the Fermi wave-vector is at the edge of the two-atom Brillouin zone. We have observed, however, that the magnitude of this gap-opening instability is negligible: it is only slightly noticeable just before the wire breaks, and thus plays no substantial role in the physics described here.
Although the appearance of a zigzag instability under compression may seem natural, its presence in a stretched wire is more surprising. Furthermore, its stabilization at a finite wire length is even harder to understand, since one would expect the wire to collapse into a compact, high-coordination structure typical of metals. However, we find that even free-standing clusters of four or eight atoms (the sizes calculated) are also stable with a zigzag chain structure. Although unexpected, this stability arises very naturally from the transverse quantization of the electron states. To see this, we model the wire as a tube of length $`a`$ per atom, with a rectangular section $`b\times c`$. Consistently with the standard jellium model , we assume a fixed volume per atom $`abc`$, but we allow a larger ‘box’ section $`(b+\delta )\times (c+\delta )`$ to account for an electron ‘spillage’ $`\delta `$/2 out of each jellium edge . Accepting from the ab-initio calculation that the zigzag is planar, we also fix its thickness $`c`$ or, equivalently $`a_0=\sqrt{ab}`$. The resulting free-electron energy is shown in Fig. 3 as a function of the wire length $`a`$, for reasonable values of $`a_0`$ and $`\delta `$. With a single occupied band, the compromise between the transversal and longitudinal kinetic energies results in a single minimum (dashed line). Including the second band, which becomes partially occupied at somewhat shorter lengths, allows the energy to decrease again (solid line), reproducing very well all the qualitative features observed in the ab-initio curve, such as the positions of the maximum, the minimum, and the point at which the second band crosses the Fermi level (1.83 Å/atom). The basic physics that this model illustrates is the higher stability of certain wire sections, due to the transverse quantization of the delocalized electron states . This shell structure effect, which has been recently observed for sodium wires , is similar to the so-called magic numbers (particularly stable sizes) of small metal clusters . The zigzag shape is a particular realization of these stable sections for the monatomic gold wires.
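The following toy computation sketches such a two-band free-electron model: a tube of length $`a`$ per atom with fixed volume and thickness, and one 6$`s`$ electron per atom distributed over the two lowest transverse subbands by minimizing the total energy over the charge transfer between them (equivalent to equating their chemical potentials). All numerical parameter values ($`a_0`$, $`c`$, $`\delta `$) are illustrative stand-ins, not the values used for Fig. 3; only the qualitative shape of $`E(a)`$ is meaningful:

```python
import numpy as np

HB2_2M = 3.81      # hbar^2/(2 m_e) in eV*Angstrom^2
A0 = 2.4           # sqrt(a*b) in Angstrom (illustrative)
C_TH = 1.0         # fixed tube thickness c in Angstrom (illustrative)
DELTA = 0.9        # electron spillage delta in Angstrom (illustrative)

def energy_per_atom(a):
    """Two-band free-electron energy per atom of a tube of length a per
    atom, section b x c with a*b = A0^2 and box (b+delta) x (c+delta)."""
    b = A0 ** 2 / a
    # transverse energies of the two lowest subbands, (n_b, n_c) = (1,1), (2,1)
    eps = [HB2_2M * np.pi ** 2 * ((nb / (b + DELTA)) ** 2
                                  + (1.0 / (C_TH + DELTA)) ** 2)
           for nb in (1, 2)]
    # one 6s electron per atom; minimize over the fraction f in the upper band
    best = np.inf
    for f in np.linspace(0.0, 0.5, 201):
        E = 0.0
        for eps_i, occ in zip(eps, (1.0 - f, f)):
            kf = np.pi * (occ / a) / 2.0        # 1D Fermi momentum (2 spins)
            # subband bottom + 1D kinetic energy per atom
            E += occ * eps_i + a * HB2_2M * 2.0 * kf ** 3 / (3.0 * np.pi)
        best = min(best, E)
    return best

for a in np.linspace(0.6, 3.0, 13):
    print(f"a = {a:4.2f} A   E = {energy_per_atom(a):8.3f} eV/atom")
```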
In agreement with previous ab initio calculations we find that the wire becomes unstable and breaks spontaneously when pulled by a force of more than 2.2 nN, i.e. beyond a length of 2.9 Å/atom, much shorter than that apparently observed in stable wires . We offer here an explanation for this puzzling discrepancy, based on the predicted zigzag geometry: if the actual wires observed have an odd number of atoms, with those at the extremes fixed by the contacts, the odd-numbered atoms would stay almost fixed on the same axis, while the even-numbered ones could rotate rapidly around that axis, offering a fuzzy image that could be missed by the TEM. We have calculated the relaxed geometry and the rotation energy barrier for a seven-atom wire suspended between two pyramidal tips. We find that the stable geometry is almost equal to that of the infinite wire, and that the rotation barrier is only 60 meV for the entire wire. The effect is illustrated in Fig. 4, where we show the electron density averaged over rotated configurations. Although not directly comparable to a TEM image, it can indeed be qualitatively appreciated that the odd-numbered atoms appear much sharper than the even-numbered ones, giving the impression of a four-atom wire with a large interatomic separation, similar to that observed experimentally. From the energy barrier obtained, we estimate that the thermal rotation would slow down to the millisecond scale, allowing the zigzag visualization, only for temperatures below $``$40 K.
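The quoted temperature scale can be checked with a rough Arrhenius estimate, taking the attempt frequency to be of the order of the transverse phonon frequency of the wire (an assumption of this sketch, not a computed quantity):

```python
import numpy as np

kB = 8.617e-5          # Boltzmann constant in eV/K
Eb = 0.060             # rotation barrier in eV (for the whole 7-atom wire)
nu0 = 113 * 2.998e10   # assumed attempt frequency: 113 cm^-1 ~ 3.4 THz
for T in (300, 100, 40, 30):
    rate = nu0 * np.exp(-Eb / (kB * T))   # Arrhenius rotation rate
    print(f"T = {T:3d} K   rate ~ {rate:9.3e} Hz")
```

With these assumptions the rotation rate drops to the kHz-Hz (millisecond) range between roughly 40 and 30 K, consistent with the quoted estimate to within the crudeness of the prefactor.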
Fig. 5(a) shows the calculated transversal and longitudinal phonon frequencies at $`\mathrm{\Gamma }`$, for the zigzag wire, as a function of its length. Negative values indicate modes with imaginary frequency, implying the breaking of the unstable wire. At the wire’s equilibrium length (2.32 Å/atom), the $`\mathrm{\Gamma }`$-point frequencies are 113 and 219 cm<sup>-1</sup>, for the transversal and longitudinal modes, respectively. These are considerably larger than the bulk phonon frequencies, but comparable to those of the dimer. This is not surprising if we consider that the wire interatomic distance is only slightly larger than that in Au<sub>2</sub>. Fig. 5(b) shows the phonon dispersion relations for a wire length of 2.62 Å/atom, obtained from the full dynamical matrix in a supercell of sixteen atoms, calculated with finite differences. We hope that the comparison of the results in Fig. 5(a) and (b) with those of point contact spectroscopy experiments will help to confirm our predicted zigzag distortion.
###### Acknowledgements.
We thank N. Agraït, C. Balbás, N. García, J. Kohanoff, G. Rubio, J. J. Sáenz, J. A. Torres, and E. Tosatti for useful discussions. D. S. P. is grateful to R. Martin for advice and support. This work was supported by grants from Spain’s DGES PB95-0202, and USA’s DOE 8371494 and DEFG 02/96/ER 45439.
# The lowest-order short-distance contribution to the $`B_s\to \gamma \gamma `$ decay
Gela G. Devidze<sup>1</sup><sup>1</sup>1 Permanent address: High Energy Physics Institute, Tbilisi State University, University St.9, Tbilisi 380086, Rep. of Georgia. E-mail:devidze@hepi.edu.ge
Department of Physics, University of Durham, Durham DH1 3LE, UK
devidze@hepi.edu.ge devidze@hep.phys.soton.ac.uk
Abstract: The complete calculation of the lowest order short-distance contributions to the $`B_s\to \gamma \gamma `$ decay in the SM is presented. The amplitude and branching ratio are calculated.
The theoretical and experimental investigations of rare $`B`$-meson decays provide a precise test of the Standard Model (SM) and of possible new physics beyond it. Among the rare $`B`$ decays with a particularly clean experimental signature is the two-photon radiative decay of the $`B_s`$-meson, $`B_s\to \gamma \gamma `$. The present experimental bound on this decay is
$$Br(B_s\to \gamma \gamma )<1.48\times 10^{-4}$$
$`(1)`$
The $`B`$-meson double radiative decay has a rich final state. The two photons can be in a $`CP`$-odd or a $`CP`$-even state; therefore this decay allows us to study $`CP`$-violating effects. In the SM the branching ratio of the $`B_s\to \gamma \gamma `$ decay is of order $`10^{-7}`$ without QCD corrections \[2-5\]. The branching ratio of this decay is enhanced by the addition of the QCD corrections \[6-14\]. The QCD corrections may change the lowest order short-distance contributions to the $`B_s\to \gamma \gamma `$ decay by an order of magnitude<sup>2</sup><sup>2</sup>2In the paper the authors have estimated the long-distance contributions to the $`B_s\to \gamma \gamma `$ decay arising from charmed-meson intermediate states. They have found that the contributions of the diagrams with $`D_s^{*}`$ may enhance the branching ratio by more than an order of magnitude. The authors mention that they neglected quite a few possible contributions to the process. They hope that a detailed investigation does not invalidate the results presented in the paper .
The planned experiments at the upcoming SLAC and KEK $`B`$-factories and at hadronic accelerators are capable of measuring branching ratios as low as $`10^{-8}`$. Therefore one expects the double radiative decay of the $`B_s`$-meson, $`B_s\to \gamma \gamma `$, to be seen at these future facilities, which stimulates theoretical investigations.
This decay is sensitive to possible new physics beyond the SM. Interestingly, the branching ratio can be enhanced in extensions of the SM . Before one goes on to study other new physics which can potentially influence this decay, it stands to reason to improve upon previous calculations \[2-5\].
In this paper we study the lowest-order short-distance contributions to the $`B_s\to \gamma \gamma `$ decay in the SM without QCD corrections. We do not neglect the mass of the $`s`$-quark. It is not immediately obvious by how much such an investigation corrects the branching ratio. The diagrams contributing to this decay are presented in Fig.1. The lowest-order short-distance contributions to the $`B_s\to \gamma \gamma `$ decay arise from the following sets of graphs: i) triangle diagrams with an external photon leg (one particle reducible (OPR) diagrams), ii) box diagrams (one particle irreducible (OPI) diagrams).
One can write down the amplitude for the decay $`B_s\to \gamma \gamma `$ in the following form, which is correct after gauge fixing for the final photons
$$T(B_s\to \gamma \gamma )=ϵ_1^\mu (k_1)ϵ_2^\nu (k_2)[Ag_{\mu \nu }+iBϵ_{\mu \nu \alpha \beta }k_1^\alpha k_2^\beta ]$$
$`(2)`$
where $`ϵ_1^\mu (k_1)`$ and $`ϵ_2^\nu (k_2)`$ are the polarization vectors of the final photons with momenta $`k_1`$ and $`k_2`$, respectively. Let us fix the photon polarizations by the conditions
$$ϵ_i\cdot k_j=0,\qquad i,j=1,2$$
$`(3)`$
The conditions (3), together with energy-momentum conservation in the diagrams of Fig.1, yield
$$ϵ\cdot P=ϵ\cdot p_b=ϵ\cdot p_s=0$$
$`(4)`$
where
$$P=k_1+k_2,p_b=p_s+k_1+k_2$$
$`(5)`$
Formulae (3)-(5) lead to the useful kinematical relations
$$k_1\cdot k_2=P\cdot k_i=\frac{1}{2}M_{B_s}^2,\qquad P\cdot p_b=m_bM_{B_s},\qquad P\cdot p_s=m_sM_{B_s}$$
$$p_b\cdot p_s=m_sm_b,\qquad p_b\cdot k_i=\frac{1}{2}m_bM_{B_s},\qquad p_s\cdot k_i=\frac{1}{2}m_sM_{B_s}$$
$`(6)`$
With the aid of (3)-(6) one can calculate the contribution of each diagram to the amplitude $`T`$. We used the ’t Hooft-Feynman gauge and evaluated divergent Feynman integrals by means of dimensional regularization. Only the OPR diagrams contain divergent parts. The divergent parts mutually cancel in the sum of the amplitudes, due to the GIM mechanism .
Using formula (2) we directly obtain the expression for the branching ratio
$$Br(B_s\to \gamma \gamma )=\frac{1}{32\pi M_{B_s}\mathrm{\Gamma }_{tot}}[4|A|^2+\frac{1}{2}M_B^4|B|^2]$$
$`(7)`$
As is seen from Fig.1, the correct procedure requires the rearrangement of the final photons. In the kinematics (3)-(6) this procedure leads to a doubling of all contributions except those of diagrams 19 and 20, where both photons are emitted from the same space-time point:
$$A=A_{19}+A_{20}+2{\sum _{i=1}^{34}}^{\prime }A_i,\qquad B=B_{19}+B_{20}+2{\sum _{i=1}^{34}}^{\prime }B_i,$$
$`(8)`$
where the prime on the sum indicates the absence of the 19th and 20th terms.
The amplitude $`T(B_s\to \gamma \gamma )`$ and hence its $`CP`$-even and $`CP`$-odd parts can be written as a sum of contributions from the up-type quarks
$$T(B_s\to \gamma \gamma )=\sum _{i=u,c,t}\lambda _iT_i=\lambda _uT_u+\lambda _cT_c+\lambda _tT_t,$$
$`(9)`$
where $`\lambda _i=V_{is}V_{ib}^{*}`$ ($`V_{kl}`$ being the corresponding elements of the CKM matrix). Using the unitarity of the CKM matrix ($`\sum _i\lambda _i=0`$) one can rewrite it in the form
$$T=\lambda _t\{T_t-T_c+\frac{\lambda _u}{\lambda _t}(T_u-T_c)\}$$
$`(10)`$
Below we restrict ourselves to evaluating the amplitude in the leading order ($`1/M_W^2`$). The $`u`$-quark and $`c`$-quark contributions are equal in this approximation ($`T_u=T_c`$). So, the expression for the amplitude takes the simpler form
$$T=\lambda _t(T_t-T_c)$$
$`(11)`$
Only the OPR diagrams give nonzero contributions to the amplitude $`A`$ in this approximation. As concerns the amplitude $`B`$, it receives contributions both from the OPR diagrams and from the OPI diagrams 34 of Fig.1. The corresponding contributions are
$$A=i\frac{\sqrt{2}}{32\pi ^2}G_Ff_B(m_b-m_s)M_{B_s}\lambda _t\{(\frac{m_b}{m_s}+\frac{m_s}{m_b})[C(x_t)-C(x_c)]+C_1(x_t)-C_1(x_c)\}$$
$$B=i\frac{\sqrt{2}}{16\pi ^2}G_Ff_B\lambda _t\{(\frac{m_b}{m_s}+\frac{m_s}{m_b})[C(x_t)-C(x_c)]+C_2(x_t)-C_2(x_c)-32M_{B_s}^2I(m_c^2)\}$$
$`(12)`$
where
$$C(x)=\frac{22x^3-153x^2+159x-46}{6(1-x)^3}+\frac{3(2-3x)x^2\mathrm{ln}x}{(1-x)^4}$$
$$C_1(x)=\frac{4}{3}\frac{6x^3-27x^2+25x-9+6x^2\mathrm{ln}x}{(1-x)^3}$$
$$C_2(x)=\frac{22x^3-12x^2-45x+17}{3(1-x)^3}+\frac{2x(8x^2-15x+4)\mathrm{ln}x}{(1-x)^4}$$
$$I(m_c^2)=\frac{1}{2M_{B_s}^2}\{1+\frac{m_c^2}{M_{B_s}^2}(\mathrm{ln}^2\frac{1+\beta }{1-\beta }-\pi ^2-2i\pi \mathrm{ln}\frac{1+\beta }{1-\beta })\}$$
$$x_t=\frac{m_t^2}{M_W^2},\qquad \beta =\sqrt{1-4\frac{m_c^2}{M_{B_s}^2}}$$
$`(13)`$
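For reference, the loop functions of Eq. (13) can be evaluated directly; the sketch below is a plain transcription of these formulae. The value $`m_c=1.4`$ GeV is an assumption of the sketch (it is not part of the parameter set quoted below Eq. (15)), and no claim is made here about the overall normalization of the amplitudes:

```python
import numpy as np

def C(x):
    return ((22*x**3 - 153*x**2 + 159*x - 46) / (6*(1 - x)**3)
            + 3*(2 - 3*x)*x**2*np.log(x) / (1 - x)**4)

def C1(x):
    return 4.0/3.0 * (6*x**3 - 27*x**2 + 25*x - 9
                      + 6*x**2*np.log(x)) / (1 - x)**3

def C2(x):
    return ((22*x**3 - 12*x**2 - 45*x + 17) / (3*(1 - x)**3)
            + 2*x*(8*x**2 - 15*x + 4)*np.log(x) / (1 - x)**4)

mt, mc, MW, MBs = 175.0, 1.4, 80.4, 5.3   # GeV; m_c is an assumed value
xt, xc = (mt/MW)**2, (mc/MW)**2
beta = np.sqrt(1 - 4*mc**2/MBs**2)
L = np.log((1 + beta) / (1 - beta))
I_mc2 = (1 + (mc**2/MBs**2)*(L**2 - np.pi**2 - 2j*np.pi*L)) / (2*MBs**2)
print("C(xt)-C(xc)   =", C(xt) - C(xc))
print("C1(xt)-C1(xc) =", C1(xt) - C1(xc))
print("C2(xt)-C2(xc) =", C2(xt) - C2(xc))
print("32*MBs^2*I    =", 32*MBs**2*I_mc2)
```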
We also used the following relations for hadronic matrix elements
$$<0|\overline{s}\gamma _\mu \gamma _5b|B_s(P)>=if_BP_\mu ,\qquad <0|\overline{s}\gamma _5b|B_s(P)>\simeq -if_BM_{B_s}$$
$`(14)`$
Using expressions (7), (12) and (13) one can estimate the branching ratio of the $`B_s\to \gamma \gamma `$ decay
$$Br(B_s\to \gamma \gamma )=2\times 10^{-7}$$
$`(15)`$
We have used the following set of parameters: $`m_t`$ = 175 GeV, $`m_b`$ = 4.8 GeV, $`m_s`$ = 0.5 GeV, $`f_B`$ = 200 MeV, $`\lambda _t=4\times 10^{-2}`$, $`M_{B_s}`$ = 5.3 GeV, $`\mathrm{\Gamma }_{tot}(B_s)=5\times 10^{-4}`$ eV. It should be mentioned that we do not neglect the mass of the $`s`$-quark. If one neglects the mass of the $`s`$-quark, the branching ratio becomes $`30\%`$ larger than the result (15). The upcoming $`B`$ factories at SLAC and KEK and the hadronic $`B`$ projects at the LHC, HERA and the TEVATRON will make it possible to study decay modes with branching ratios as small as $`10^{-8}`$. A branching ratio of $`10^{-7}`$ will therefore be measurable at these facilities. A detailed investigation of the lowest-order short-distance contributions to the $`B_s\to \gamma \gamma `$ decay decreases the branching ratio. This decay is sensitive to the input parameters and requires further experimental and theoretical investigation.
Acknowledgments
This research was supported in part by The Royal Society. I am very grateful to Prof. A.D. Martin for warm hospitality. I also would like to thank G.R. Jibuti and A.G. Liparteliani for discussions.
Figure
Fig.1. One particle reducible and one particle irreducible diagrams contributing to the $`B_s\to \gamma \gamma `$ decay.
# High Energy Cosmic Rays, Gamma Rays And Neutrinos From Jetted GRBs
## 1 The Energy Crisis Of Spherical GRBs
Thanks to the precise and prompt localization by the Italian-Dutch satellite, BeppoSAX (see, e.g., Costa et al. 1997), long lived GRB afterglows spanning the wavelength range from X-ray to radio have now been detected in more than a dozen GRBs. They led to the redshift measurements, z=0.69, 0.835, 3.42, 1.096, 0.966, 1.61, 1.62 of GRBs 970228 (Kulkarni et al. 1999), 970508 (Metzger et al. 1997), 971214 (Kulkarni et al 1998), 980329, 980613 (Djorgovski et al 1999), 980703 (Djorgovski et al. 1998), 990123 (Anderson et al. 1999; Kulkarni et al. 1999), 990510 (Vreeswijk et al. 1999), respectively, from absorption lines in their optical afterglows and/or emission lines from their host galaxies. In addition, strong suppression has been observed with the Hubble Space Telescope in the spectrum of the host galaxies of GRBs 970228 and 980329 at wavelengths below 700 nm. If it is due to absorption in the Ly$`\alpha `$ forest (Fruchter 1999), then their redshifts are near z$`\sim `$5. These measured/estimated redshifts indicate that most GRBs take place at very large cosmological distances. For instance, assuming a zero cosmological constant ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$), the luminosity distance
$$\mathrm{D}_\mathrm{L}=\frac{\mathrm{c}}{\mathrm{H}}\frac{2[2-\mathrm{\Omega }_\mathrm{M}(1-\mathrm{z})-(2-\mathrm{\Omega }_\mathrm{M})\sqrt{1+\mathrm{\Omega }_\mathrm{M}\mathrm{z}}]}{\mathrm{\Omega }_\mathrm{M}^2},$$
(1)
with the present canonical values for the cosmological parameters, $`\mathrm{H}=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }_\mathrm{M}=0.2`$ (which will be assumed in this paper), yield $`\mathrm{D}_\mathrm{L}\approx 5\times 10^{28}\mathrm{cm}`$ for z=2. The typical observed GRB fluence ($`\mathrm{F}_\gamma \approx 10^{-5}\mathrm{erg}\mathrm{cm}^{-2}`$) and their large distances imply enormous energy release in gamma rays,
$$\mathrm{E}_\gamma =\frac{4\pi \mathrm{D}_\mathrm{L}^2\mathrm{F}_\gamma }{(1+\mathrm{z})}\approx 10^{53}\mathrm{erg},$$
(2)
if their energy release is isotropic as used to be assumed/advocated by the standard fireball models of GRBs and GRB afterglows (e.g., Piran 1999 and references therein). In particular, the large fluence $`\mathrm{F}_\gamma \approx 5.1\times 10^{-4}\mathrm{erg}\mathrm{cm}^{-2}`$ (Kippen et al. 1999) and redshift z=1.61 of GRB 990123 yield $`\mathrm{E}_\gamma \approx 3.4\times 10^{54}\mathrm{erg}`$. Such enormous energy release in gamma rays alone implies an “energy crisis” for spherical GRBs (Dar 1998): The short duration and the very large energy release in GRBs indicate that they are powered by gravitational collapse of compact stars. But, the energy release in such events falls short of that required to power GRBs like 971214, 980329 and 990123, 990510 if they were isotropic. Furthermore, all the known luminous sources of gamma rays (quasars, radio galaxies, active galactic nuclei, accreting binaries, pulsars, supernova explosions, supernova remnants) exhibit rather a modest efficiency, $`\eta <10^{-4}`$, in converting gravitational, kinetic or thermonuclear energy into gamma rays. If GRBs have a similar efficiency for converting the energy release from their central engine to gamma rays, then the energy crisis is common to most GRBs.
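For concreteness, the quoted numbers follow directly from Eqs. (1) and (2); a minimal sketch:

```python
import numpy as np

# Mattig luminosity distance for Lambda = 0, and the isotropic-equivalent
# gamma-ray energy of Eq. (2), with the parameter values quoted above.
c_H = (2.998e5 / 65.0) * 3.086e24     # c/H in cm (H = 65 km/s/Mpc)
Om = 0.2                              # Omega_M

def D_L(z):
    return c_H * 2*(2 - Om*(1 - z) - (2 - Om)*np.sqrt(1 + Om*z)) / Om**2

print(f"D_L(z=2)      = {D_L(2.0):.2e} cm")    # ~ 5e28 cm
z, F = 1.61, 5.1e-4                   # GRB 990123 redshift and fluence
E = 4*np.pi*D_L(z)**2 * F / (1 + z)
print(f"E_iso(990123) = {E:.2e} erg")          # ~ 3.4e54 erg
```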
## 2 Jetted GRBs
### 2.1 No Energy Crisis For Jetted GRBs
Various authors have pointed out that the energy crisis in isotropically emitting GRBs is avoided if GRBs are beamed into a small solid angle, $`\mathrm{\Delta }\mathrm{\Omega }\ll 4\pi `$ such that their total energy release in gamma rays is
$$\mathrm{E}_\gamma =\frac{\mathrm{\Delta }\mathrm{\Omega }\mathrm{D}_\mathrm{L}^2\mathrm{F}_\gamma }{(1+\mathrm{z})}.$$
(3)
Beaming of gamma rays from GRBs is possible if the highly relativistic ejecta (Lorentz factor $`\mathrm{\Gamma }=1/\sqrt{1-\beta ^2}\gg 1`$) that produces the GRB is beamed into a cone (conical beaming) of solid angle $`\mathrm{\Delta }\mathrm{\Omega }\ll 4\pi `$, or if the ejecta is jetted - namely, if after initial expansion the ejected cloud/plasmoid maintains nearly a constant cross section. Conical beaming (e.g., Mochkovich 1993; Rhoads 1997) can solve the “energy crisis” of GRBs by reducing their inferred energies by the ratio $`\mathrm{\Delta }\mathrm{\Omega }/4\pi \ll 1`$. But, it suffers from other deficiencies of isotropically emitting GRBs (Dar 1998). Jetting the ejecta (e.g., Shaviv and Dar 1995, Dar et al. 1998, Dar 1998) solves the energy crisis and can also explain the short time variability of GRB light curves, the versatility of their afterglows, the absence of a simple scaling between them and their sudden decline in some GRBs (970508, 990123, 990510):
The emission from a highly relativistic plasmoid which is isotropic in its rest frame and has a power-law spectral shape, $`\mathrm{F}_\nu ^{\prime }=\mathrm{A}\nu ^{-\alpha }`$, is collimated in the lab frame to small emission angles $`\theta \lesssim 1/\mathrm{\Gamma }`$ relative to its direction of motion, according to
$$\mathrm{F}_\nu =\frac{2^{2+\alpha }\mathrm{\Gamma }^{3+\alpha }}{(1+\mathrm{\Gamma }^2\theta ^2)^3}\mathrm{F}_{\nu ^{\prime }=\nu (1+\mathrm{\Gamma }^2\theta ^2)/2\mathrm{\Gamma }}^{\prime }.$$
(4)
Thus, the observed flux from a plasmoid with a typical spectral index $`\alpha \approx 0.7`$ that moves with a Lorentz factor $`\mathrm{\Gamma }\approx 10^3`$ at an angle $`\theta <1/\mathrm{\Gamma }`$ relative to the line of sight is amplified by approximately $`\mathrm{\Gamma }^{3+\alpha }\approx 10^{11}`$. This amplification within a solid angle $`\mathrm{\Delta }\mathrm{\Omega }\approx \pi /\mathrm{\Gamma }^2`$ can explain why highly relativistic jets with bulk motion Lorentz factors $`\mathrm{\Gamma }\approx 10^3`$, total kinetic energy $`\mathrm{E}_\mathrm{k}\approx 10^{52}\mathrm{erg}`$, and conversion efficiency $`\eta >10^{-4}`$ into gamma rays can produce GRBs with equivalent isotropic energy of $`\mathrm{E}_\gamma =\eta \mathrm{E}_\mathrm{k}4\pi /\mathrm{\Delta }\mathrm{\Omega }>4\times 10^{54}\mathrm{erg}`$, as observed for GRB 990123 (Kippen et al. 1999).
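A short numerical check of these estimates (with the jet energy and the efficiency taken at the quoted fiducial values):

```python
import numpy as np

Gamma, alpha = 1.0e3, 0.7
amp = 2**(2 + alpha) * Gamma**(3 + alpha)     # on-axis boost, Eq. (4)
E_jet, eta = 1.0e52, 1.0e-4                   # erg; fiducial efficiency
dOmega = np.pi / Gamma**2                     # beaming solid angle
E_iso = eta * E_jet * 4*np.pi / dOmega        # = 4 eta Gamma^2 E_jet
print(f"on-axis amplification ~ {amp:.1e}")   # ~ Gamma^(3+alpha) ~ 1e11
print(f"E_iso ~ {E_iso:.1e} erg")             # ~ 4e54 erg
```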
### 2.2 The Beaming Angle Of GRBs
The enormous release of energy in GRBs during a short time suggests that they are energized by collapse of compact stars (Blinnikov 1984, Paczynski 1986) due to mass accretion (Goodman, Dar and Nussinov 1987; Dar et al. 1992) or phase transition (e.g., Dar 1999a). If GRBs are produced, e.g., by gravitational collapse of neutron stars (NS) to quark stars (QS) when they have cooled and spun down sufficiently (e.g., Dar 1999a; Dar and De Rújula 1999), then the GRB rate is comparable to the NS birth-rate. The NS birth-rate is estimated to be $`\mathrm{R}_{\mathrm{NS}}\approx 0.02\mathrm{y}^{-1}`$ in Milky Way like galaxies (van den Bergh and Tamman 1991). From the observed rate of GRBs, $`\mathrm{R}_{\mathrm{GRB}}[\mathrm{UNIV}]\approx 10^3\mathrm{y}^{-1}`$ in the whole Universe, it was estimated that the rate of observable GRBs in Milky Way like galaxies is $`\mathrm{R}_{\mathrm{GRB}}[\mathrm{MW}]\approx 10^{-8}\mathrm{y}^{-1}`$ (e.g., Wijers et al. 1997). The beaming angle of GRBs therefore must satisfy
$$\mathrm{R}_{\mathrm{GRB}}[\mathrm{MW}]\approx 2(\mathrm{\Delta }\mathrm{\Omega }/4\pi )\mathrm{R}_{\mathrm{NS}}[\mathrm{MW}]$$
(5)
where we assumed that two opposite jets are ejected in every NS collapse. Hence, $`\mathrm{\Delta }\mathrm{\Omega }\approx \pi /\mathrm{\Gamma }^2\approx \pi \times 10^{-6}`$. It implies that their bulk motion Lorentz factor is $`\mathrm{\Gamma }\approx 10^3`$. Such values have been inferred also from the absence of a break due to $`\gamma \gamma \to \mathrm{e}^+\mathrm{e}^{-}`$ in GRB spectra (e.g., Baring and Harding 1997), from the peak energy of GRBs and from GRB duration and substructure (e.g., Shaviv and Dar 1995). Such strong beaming implies that we observe only a very small fraction, $`\approx 10^{-6}`$, of the events that produce GRBs. I will call these events cosmological GRBs (CGRBs) if they occur in distant galaxies and “Galactic” GRBs (GGRBs) if they occur in our Milky Way (MW) galaxy.
### 2.3 The Jet Energy
Consider gravitational collapse that leads to the birth of a pulsar (e.g., gravitational collapse of NS to QS due to a phase transition of cold and highly compressed neutron matter to Bose condensate of diquark pairs \[Dar 1999a, Dar and De Rújula 1999\] or the birth of a pulsar in a supernova explosion \[Cen 1999\]) and perhaps to the ejection of two opposite highly relativistic jets. If momentum imbalance in the ejection of the relativistic jets (and not asymmetric neutrino emission) is responsible for the observed large mean velocity (Lyne and Lorimer 1994), $`\mathrm{v}\approx 450\pm 90\mathrm{km}\mathrm{s}^{-1}`$, of slowly spinning pulsars, then momentum conservation implies that the difference in the kinetic energy of the jets satisfies
$$\mathrm{\Delta }\mathrm{E}_{\mathrm{jet}}\approx \mathrm{cP}_{\mathrm{ns}}\approx \mathrm{vM}_{\mathrm{NS}}\mathrm{c}\approx 4\times 10^{51}\mathrm{erg},$$
(6)
where we used the typical observed mass of NSs, $`\mathrm{M}_{\mathrm{NS}}\approx 1.4\mathrm{M}_{\odot }`$. If $`\mathrm{\Delta }\mathrm{E}_{\mathrm{jet}}\approx \mathrm{E}_{\mathrm{jet}}`$, then the kinetic energy of the jets must be $`\mathrm{E}_{\mathrm{jet}}\approx 10^{52}\mathrm{erg}`$ or larger. If $`\mathrm{\Gamma }\approx 10^3`$ then the ejected jet (plasmoid) has a mass $`\mathrm{M}_{\mathrm{jet}}\approx 1.5\times 10^{-6}\mathrm{M}_{\mathrm{NS}}\approx 2.1\times 10^{-6}\mathrm{M}_{\odot }\approx 0.7\mathrm{M}_{\mathrm{Earth}}`$. Even if only a fraction $`\eta \approx 10^{-4}`$ of the jet kinetic energy is radiated in $`\gamma `$-rays, the inferred “isotropic” $`\gamma `$-ray emission in GRBs is $`\mathrm{E}_{\mathrm{isot}}\approx 4\eta \mathrm{\Gamma }^2\mathrm{E}_{\mathrm{jet}}\approx 4\times 10^{54}\mathrm{erg}`$, while the true $`\gamma `$-ray emission is only $`\mathrm{E}_\gamma \approx 10^{48}\mathrm{erg}`$.
### 2.4 Jet Formation
Relativistic jets seem to be emitted by all astrophysical systems where mass is accreted at a high rate from a disk onto a central compact object. Highly relativistic jets were observed in galactic superluminal sources, such as the microquasars GRS 1915+105 (Mirabel and Rodriguez 1994; Mirabel and Rodriguez 1999) and GRO J1655-40 (Tingay et al. 1995), where mass is accreted onto a stellar black hole (BH), and in many active galactic nuclei (AGN), where mass is accreted onto a supermassive BH. Mildly relativistic jets from mass accretion are seen both in AGN and in star binaries containing NSs such as SS433 (e.g., Hjellming and Johnston 1988). The ejection of highly relativistic jets from accreting or collapsing compact stellar objects is not well understood. Therefore their properties must be inferred directly from observations and/or general considerations: High-resolution radio observations resolved the narrowly collimated relativistic jets of microquasars into blobs of plasma (plasmoids) that are emitted in injection episodes which are correlated with sudden removal of the accretion disk material (Rodriguez and Mirabel 1998). After initial expansion, these plasmoids seem to retain a constant radius of $`R_p\approx 10^{-3}pc`$. The emission of Doppler-shifted Hydrogen Ly$`\alpha `$ and Iron K$`\alpha `$ lines from the relativistic jets of SS433 suggests that the jets are made predominantly of normal hadronic plasma. Moreover, simultaneous VLA radio observations and X-ray observations of the microquasar GRS 1915+105 indicate that the jet ejection episodes are correlated with sudden removal of accretion disk material into the relativistic jets (Mirabel and Rodriguez 1999). Highly relativistic jets probably are also ejected in the birth or collapse of NSs due to mass accretion or phase transition. But, because the accretion rates and magnetic fields involved are much larger compared with those in quasars and microquasars, the bulk motion Lorentz factors of these jets may be much higher, perhaps $`\mathrm{\Gamma }\approx 10^3`$ as implied by the above considerations and GRB observations. When these highly relativistic jets happen to point in our direction they produce the observed cosmological GRBs and their afterglows. They look like Galactic micro copies of blazar ejections and therefore will be called “microblazars”. In fact, when the light curves and energy spectra of blazar flares and of microquasar plasmoids are scaled according to the Lorentz factors expected for GRB ejecta, they look quite similar to GRBs (Dar 1999b).
The high collimation of relativistic jets over huge distances (up to tens of pc in microquasars and up to hundreds of kpc in AGN), the confinement of their highly relativistic particles, their emitted radiations and observed polarizations, all indicate that the jets are highly magnetized, probably with a strong helical magnetic field along their axis. Magnetic fields as strong as a few tens of milli-Gauss in the jet rest frame have been inferred from microquasar observations (Mirabel and Rodriguez 1999), while hundreds of Gauss were inferred for GRB ejecta (assuming equipartition of energy between internal kinetic and magnetic energy). The UV light and the X-rays from the jets ionize the ISM in front of them. The jet material and the swept-up ionized ISM material in front of the jet can be accelerated by the Fermi mechanism to a power-law energy distribution that extends to very high energies, as inferred from the observed radiations from jets. The interactions of these high energy particles in the jet and/or their interactions with the external medium produce the GRBs and their afterglows:
### 2.5 Jet Production of Gamma Rays
The GRB jets may consist of pure $`\mathrm{e}^+\mathrm{e}^{-}`$ plasmoids or of normal hadronic gas/plasma clouds. The GRB can be produced by electron synchrotron emission. If the jet consists of a single plasmoid, then individual $`\gamma `$-ray pulses that combine to form the GRB light curve can be produced by internal instabilities or by interaction with an inhomogeneous external medium. If the jets consist of multiple ejections of plasmoids, then the GRB pulses may be produced when later ejected plasmoids collide with earlier ejected plasmoids that have slowed down by sweeping up the interstellar medium in front of them. But, such scenarios do not seem to provide a simple explanation of why the GRB emission is peaked near $`\sim `$ MeV photon energy. Other GRB emission mechanisms, however, can provide such an explanation:
If the highly relativistic plasmoid consists of a pure $`\mathrm{e}^+\mathrm{e}^{-}`$ plasma, then inverse Compton scattering of stellar light ($`\mathrm{h}\nu =ϵ_{\mathrm{eV}}\times 1\mathrm{eV}`$) by the plasmoid can explain the observed typical $`\gamma `$ energy ($`ϵ_\gamma \sim 4\mathrm{\Gamma }_3^2ϵ_{\mathrm{eV}}/3(1+\mathrm{z})\mathrm{MeV}`$), GRB duration ($`\mathrm{T}\sim \mathrm{R}_{\mathrm{SFR}}/2\mathrm{c}\mathrm{\Gamma }^2\sim 50\mathrm{s}`$), pulse duration ($`\mathrm{t}_\mathrm{p}\sim \mathrm{R}_\mathrm{p}/2\mathrm{c}\mathrm{\Gamma }^2\sim 150\mathrm{ms}`$), fluence ($`\mathrm{F}_\gamma \sim 10^{-5}\mathrm{erg}\mathrm{cm}^{-2}`$), light curve and spectral evolution of GRBs (Shaviv and Dar 1995; Shaviv 1996; Dar 1998). For instance,
$$\mathrm{F}_\gamma \sim \frac{\sigma _\mathrm{T}\mathrm{N}ϵ_\gamma }{\mathrm{\Gamma }\mathrm{m}_\mathrm{e}\mathrm{c}^2}\frac{\mathrm{E}_{\mathrm{jet}}(1+\mathrm{z})}{\mathrm{D}^2\mathrm{\Delta }\mathrm{\Omega }}\sim \frac{10^{-5}\mathrm{z}_2\mathrm{N}_{22}\mathrm{\Gamma }_3ϵ_{\mathrm{eV}}\mathrm{E}_{52}}{\mathrm{D}_{29}^2}\frac{\mathrm{erg}}{\mathrm{cm}^2}$$
(7)
where $`\mathrm{D}=\mathrm{D}_{29}\times 10^{29}\mathrm{cm}`$ is the luminosity distance of the GRB at redshift z, $`\mathrm{z}_2=(1+\mathrm{z})/2`$, $`\mathrm{N}=\mathrm{N}_{22}\times 10^{22}\mathrm{cm}^{-2}`$ is the column density of photons along the jet trajectory in the star forming region, $`\sigma _\mathrm{T}=0.65\times 10^{-24}\mathrm{cm}^2`$ is the Thomson cross section, $`\mathrm{E}_{\mathrm{jet}}=\mathrm{E}_{52}\times 10^{52}\mathrm{erg}`$ and $`\mathrm{\Gamma }=\mathrm{\Gamma }_3\times 10^3`$.
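As a numerical sanity check (not part of the original text), Eq. (7) can be evaluated directly with the fiducial values above; the beaming solid angle $`\mathrm{\Delta }\mathrm{\Omega }=\pi /\mathrm{\Gamma }^2`$ quoted later in Section 5 is assumed here:

```python
import math

# Sanity check of Eq. (7), assuming Delta Omega = pi / Gamma^2 and fiducial
# values z = 1, N_22 = Gamma_3 = eps_eV = E_52 = D_29 = 1. CGS units.
sigma_T = 0.65e-24          # Thomson cross section [cm^2]
m_e_c2 = 8.187e-7           # electron rest energy [erg]
MeV = 1.602e-6              # erg per MeV

N, Gamma, z = 1e22, 1e3, 1.0
eps_eV, E_jet, D = 1.0, 1e52, 1e29

eps_gamma = 4.0 * Gamma**2 * eps_eV / (3.0 * (1.0 + z)) * MeV  # ~0.67 MeV observed
dOmega = math.pi / Gamma**2                                    # beaming solid angle

F_gamma = (sigma_T * N * eps_gamma / (Gamma * m_e_c2)) \
          * (E_jet * (1.0 + z) / (D**2 * dOmega))
print(f"F_gamma ~ {F_gamma:.1e} erg/cm^2")   # ~5e-6, i.e. of order 1e-5
```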
If the plasmoid consists of normal crust material of neutron stars (Doppler-shifted $`\mathrm{K}_\alpha `$ iron lines were detected from the jets of SS433), then photoabsorption of stellar light by partially ionized heavy metals like iron (Doppler-shifted to X-rays in the jet rest frame) and its reemission as $`\gamma `$ rays (iron X-ray lines in the jet rest frame) yield $`ϵ_\gamma \sim \mathrm{\Gamma }ϵ_\mathrm{x}/(1+\mathrm{z})\sim \mathrm{MeV}`$ in the observer frame and
$$\mathrm{F}_\gamma \sim \frac{\sigma _\mathrm{a}\mathrm{N}ϵ_\gamma }{\mathrm{\Gamma }\mathrm{M}_{\mathrm{Fe}}\mathrm{c}^2}\frac{\mathrm{E}_{\mathrm{jet}}(1+\mathrm{z})}{\mathrm{D}^2\mathrm{\Delta }\mathrm{\Omega }}\sim \frac{10^{-5}\mathrm{z}_2\sigma _{19}\mathrm{N}_{22}\overline{ϵ}_\mathrm{x}\mathrm{\Gamma }_3\mathrm{E}_{52}}{\mathrm{D}_{29}^2}\frac{\mathrm{erg}}{\mathrm{cm}^2}$$
(8)
where $`\sigma _\mathrm{a}=\sigma _{19}\times 10^{-19}\mathrm{cm}^2`$ is the mean photoabsorption cross section of X-rays by partially ionized iron.
## 3 GRB Afterglows
The afterglows of GRBs may be synchrotron emission from the decelerating plasmoids (e.g., Chiang and Dermer 1997), and then they are highly beamed and may exhibit superluminal velocities ($`\mathrm{c}<\mathrm{v}_{\perp }\lesssim \mathrm{\Gamma }\mathrm{c}`$) during and right after the GRB. The deceleration of a mildly relativistic spherical ejecta from NS collapse to QS may also produce a spherical supernova-like light curve (many planetary nebulae, e.g., NGC 7009, NGC 6826, and some SNRs (including, perhaps, SNR 1987A) show antiparallel jets superimposed on a spherical explosion). In the rest frame of the decelerating plasmoid, the synchrotron spectra can be modeled by convolving the typical electron energy spectrum ($`\mathrm{E}^{-\mathrm{p}}`$ at low energies up to some “break energy” where it steepens to $`\mathrm{E}^{-(\mathrm{p}+1)}`$ and cuts off exponentially at some higher energy due to synchrotron losses during magnetic acceleration) with the synchrotron Green’s function (see, e.g., Meisenheimer et al. 1989). In the observer frame this yields the spectral intensity (Dar 1998)
$$\mathrm{I}_\nu \sim \nu ^{-\alpha }\mathrm{t}^{-\beta }\sim \nu ^{-0.75\pm 0.25}\mathrm{t}^{-1.25\pm 0.08},$$
(9)
where $`\alpha =(\mathrm{p}-1)/2`$ and $`\beta =(\mathrm{p}+5)/6`$, and where I assumed $`\mathrm{p}=2.5\pm 0.5`$ for magnetic Fermi acceleration. This prediction is in agreement with observations of GRB afterglows. Moreover, the glows of microquasar plasmoids and radio quasar jets after ejection, and of blazar jets after flares, show the same universal behavior as observed in GRB afterglows. For instance, the glows of the ejected plasmoids from GRS 1915+105 on April 16, 1994 near the source had $`\alpha =0.8\pm 0.1`$ and $`\beta =1.3\pm 0.2`$ (Rodriguez and Mirabel 1998), identical to those observed for SS 433 (Hjellming & Johnston 1988) and for the inner regions of the jets of some radio galaxies (e.g., Bridle & Perley 1984). When the jets spread, the index of their power-law time decline steepened to $`\beta ^{}=2.6\pm 0.4`$ (e.g., Mirabel and Rodriguez 1999). Thus, their overall time decline can be described approximately by $`\mathrm{I}\sim \mathrm{t}^{-\beta }/[1+(\mathrm{t}/\mathrm{t}_0)^{\beta ^{}-\beta }]`$, where $`\mathrm{t}_0`$ is the time when the jet begins to spread. Indeed, such a behavior has also been observed in the afterglows of some GRBs (e.g., GRB 990510; Vreeswijk et al. 1999).
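The combined decline law is straightforward to encode; a minimal illustrative sketch using the fiducial indices quoted above (arbitrary normalization):

```python
import numpy as np

def afterglow_intensity(t, t0, beta=1.25, beta_prime=2.6):
    """I(t) ~ t^-beta / [1 + (t/t0)^(beta'-beta)]: the early t^-beta decline
    steepens to t^-beta' once the jet starts to spread at time t0."""
    return t**(-beta) / (1.0 + (t / t0)**(beta_prime - beta))

t = np.logspace(-1, 2, 4)                 # time (in days, say)
print(afterglow_intensity(t, t0=10.0))    # arbitrary units
```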
## 4 Galactic GRBs - The Main Source of Cosmic Rays?
According to the current paradigm of cosmic ray (CR) origin, CR nuclei with energies below $`3\times 10^{15}`$ eV (the “knee”) are accelerated in Galactic SNRs (Ginzburg 1957), and those above $`3\times 10^{18}`$ eV (the “ankle”), for which a disk origin is unlikely due to their isotropy, in sources far beyond our Galaxy (Burbidge 1962). However, recent observations suggest that, perhaps, SNRs are not the main source of Galactic cosmic rays and that the CRs above the ankle are not extragalactic:
- Measurable fluxes of high energy gamma rays from interactions of cosmic ray nuclei in SNRs, as expected in models of SNR acceleration of CRs, were not detected from nearby SNRs (Prosch et al. 1996; Hess et al. 1997; Buckley et al. 1998).
- The expected galactocentric gradient in the distribution of high energy gamma rays ($`>`$ 100 MeV) from interactions of CRs from SNRs in the Galactic interstellar medium is significantly larger than observed by the EGRET detector on board CGRO (Hunter et al. 1997; Strong and Moskalenko 1998).
- Diffusive propagation of CRs from the observed Galactic distribution of SNRs yields anisotropies in the distribution of CRs above 100 TeV in excess of the observed value (Aglietta et al. 1995) by more than an order of magnitude (Ptuskin et al. 1997).
- The absence (Takeda 1998) of the “GZK cutoff” in the intensity of CRs at energies above $`10^{20}`$ eV due to interactions with the cosmic microwave radiation (Greisen 1966; Zatsepin & Kuz’min 1966) has brought into question (e.g., Hillas 1998) their hypothesized extragalactic origin.
Relativistic jets are efficient CR accelerators (e.g., Mannheim and Biermann 1992; Dar 1998b). A modest fraction of the total energy injected into the MW by jets from Galactic GRBs, occurring at a rate similar to the NS birth/collapse rate in the MW, if converted to CR energy, can supply the estimated Galactic luminosity in CRs, $`1.5\times 10^{41}\mathrm{erg}\mathrm{s}^{-1}`$ (Drury et al. 1989). Thus, Dar and Plaga (1999) have recently proposed that Galactic GRBs (GGRBs) are the main source of the CRs at all energies, and consequently no GZK cutoff is expected in the CR spectrum:
The highly relativistic, narrowly collimated jets/plasmoids from the birth or collapse of NSs in the disk of our Galaxy that are emitted with $`\mathrm{E}_{\mathrm{jet}}\sim 10^{52}\mathrm{erg}`$ perpendicular to the Galactic disk stop only in the Galactic halo, when the rest mass energy of the swept-up ambient material becomes comparable to their initial kinetic energy. Through the Fermi mechanism, they accelerate the swept-up ambient matter to CR energies and disperse it into the halo from the hot spots which they form when they finally stop in the Galactic halo (Fig. 1).
The typical equipartition magnetic fields in such hot spots may reach $`\mathrm{B}\sim (3\mathrm{E}_{\mathrm{jet}}/\mathrm{R}_\mathrm{p}^3)^{1/2}\sim 1\mathrm{G}`$. Synchrotron losses cut off Fermi acceleration of CR nuclei with mass number A at $`\mathrm{E}\sim \mathrm{\Gamma }\mathrm{A}^2\mathrm{Z}^{-3/2}(\mathrm{B}/\mathrm{G})^{-1/2}\times 10^{20}\mathrm{eV}.`$ Particle escape cuts off Fermi acceleration when the Larmor radius of the accelerated particles in the plasmoid rest frame becomes comparable to the radius of the plasmoid, i.e., above $`\mathrm{E}\sim \mathrm{\Gamma }\mathrm{Z}(\mathrm{B}/\mathrm{G})(\mathrm{R}_\mathrm{p}/0.1\mathrm{pc})\times 10^{20}\mathrm{eV}.`$ Consequently, CRs with $`\mathrm{E}>\mathrm{Z}\times 10^{20}\mathrm{eV}`$ can no longer be isotropized by acceleration or deflection in hot spots with $`\mathrm{\Gamma }\sim 1`$.
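Both estimates follow from elementary scalings; a short numerical check (an illustration, assuming $`R_p=0.1`$ pc, $`Z=1`$, and a stopped plasmoid with $`\mathrm{\Gamma }=1`$):

```python
import math

pc = 3.086e18            # cm
E_jet = 1e52             # erg
R_p = 0.1 * pc           # plasmoid radius in the halo [cm] (assumed)

# Equipartition-like field, B ~ (3 E_jet / R_p^3)^(1/2) as in the text
B = math.sqrt(3.0 * E_jet / R_p**3)
print(f"B ~ {B:.2f} G")  # ~1 G

# Escape cutoff: Larmor radius r_L = E / (300 Z B) cm (E in eV) equal to R_p
Z, Gamma = 1, 1.0
E_escape = Gamma * 300.0 * Z * B * R_p
print(f"E_escape ~ {E_escape:.1e} eV")   # ~1e20 eV
```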
Fermi acceleration in the highly relativistic jets from GRBs ($`\mathrm{\Gamma }\sim 10^3`$) can produce a broken power-law spectrum, $`\mathrm{dn}/\mathrm{dE}\sim \mathrm{E}^{-\alpha }`$, with $`\alpha \approx 2.2`$ below a knee around $`\mathrm{E}_{\mathrm{knee}}\sim \mathrm{A}\mathrm{PeV}`$ and $`\alpha \approx 2.5`$ above this energy (Dar 1998b). Spectral indices $`\alpha \approx 2.2`$ were also obtained in numerical simulations of relativistic shock acceleration (e.g., Bednarz and Ostrowski 1998). Galactic magnetic confinement increases the density of Galactic CRs by the ratio $`\mathrm{c}\tau _\mathrm{h}/\mathrm{R}_\mathrm{G}`$, where $`\tau _\mathrm{h}(\mathrm{E})`$ is the mean residence time in the halo of Galactic CRs with energy E, and $`\mathrm{R}_\mathrm{G}\sim 50\mathrm{kpc}`$ is the radius of the Galactic magnetic-confinement region. With the standard choice for the energy dependence of the diffusion constant (observed, e.g., in solar-system plasmas) one gets $`\tau _\mathrm{h}\propto (\mathrm{E}/\mathrm{Z})^{-0.5}`$. Consequently, the energy spectrum of CRs is predicted to be
$$\mathrm{dn}/\mathrm{dE}\sim \mathrm{C}(\mathrm{E}/\mathrm{E}_{\mathrm{knee}})^{-\mathrm{p}}$$
(10)
with $`\mathrm{p}\approx \alpha +0.5\approx 2.7`$ (3.0) below (above) the knee. This power law continues as long as the Galactic magnetic field confines the CRs.
Part of the kinetic energy released by GGRBs is transported into the Galactic halo by the jets. Assuming equipartition of this energy, without large losses, between CRs, gas and magnetic fields in the halo during the residence time of CRs there, the magnetic field strength $`\mathrm{B}_\mathrm{h}`$ in the halo is expected to be comparable to that of the disk, $`\mathrm{B}_\mathrm{h}\sim (2\mathrm{L}_{\mathrm{MW}}[\mathrm{CR}]\tau _\mathrm{h}/\mathrm{R}_\mathrm{h}^3)^{1/2}\sim 3\mu \mathrm{G}`$, where $`\tau _\mathrm{h}\sim 5\times 10^9\mathrm{y}`$ is the mean residence time of the bulk of the CRs in the Galactic halo. Cosmic rays with Larmor radius larger than the coherence length $`\lambda `$ of the halo magnetic fields, i.e., with energy above
$$\mathrm{E}_{\mathrm{ankle}}\sim 3\times 10^{18}(\mathrm{ZB}_\mathrm{h}/3\mu \mathrm{G})(\lambda /\mathrm{kpc})\mathrm{eV},$$
(11)
escape Galactic trapping. Thus, the CR ankle is explained as the energy where the mean residence time $`\tau _\mathrm{h}(\mathrm{E})`$ of CRs becomes comparable to the free escape time from the halo, $`\tau _{\mathrm{free}}\sim 1.6(R_\mathrm{h}/50\mathrm{kpc})\times 10^5\mathrm{years}`$. Therefore, the spectrum of CRs with energies above the ankle, which do not suffer Galactic magnetic trapping, is the CR spectrum produced by the jets, i.e.,
$$\mathrm{dn}/\mathrm{dE}\sim \mathrm{C}(\mathrm{E}_{\mathrm{ankle}}/\mathrm{E}_{\mathrm{knee}})^{-3}(\mathrm{E}/\mathrm{E}_{\mathrm{ankle}})^{-2.5};\mathrm{E}>\mathrm{E}_{\mathrm{ankle}}.$$
(12)
Eqs. 10-12 describe well the overall CR energy spectrum.
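For concreteness, a sketch of the piecewise spectrum of Eqs. (10)-(12) (arbitrary normalization; placing the proton knee at the observed $`3\times 10^{15}`$ eV is an assumption, and both breaks are matched continuously):

```python
import numpy as np

def cr_spectrum(E, A=1, Z=1, C=1.0):
    """Sketch of Eqs. (10)-(12): indices ~2.7 / 3.0 / 2.5 below the knee,
    between knee and ankle, and above the ankle, continuous at both breaks."""
    E_knee = A * 3e15       # eV, scales with mass number A (assumed anchor)
    E_ankle = Z * 3e18      # eV, scales with charge Z as in Eq. (11)
    E = np.asarray(E, dtype=float)
    below = C * (E / E_knee)**(-2.7)
    mid = C * (E / E_knee)**(-3.0)
    above = C * (E_ankle / E_knee)**(-3.0) * (E / E_ankle)**(-2.5)
    return np.where(E < E_knee, below, np.where(E < E_ankle, mid, above))

for E in (1e14, 1e16, 1e19):
    print(f"E = {E:.0e} eV -> dn/dE (arb. units) = {float(cr_spectrum(E)):.3e}")
```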
## 5 High Energy Gamma Rays
The observed radiation from blazars, microquasars and GRBs indicates that their highly relativistic jets contain high energy charged particles with a power-law energy distribution $`\mathrm{dn}_\mathrm{p}/\mathrm{dE}\sim \mathrm{AE}^{-\alpha }`$ with $`\alpha \approx 2.2`$, which extends to very high energies. Such distributions can be formed in the highly relativistic jets through Fermi acceleration of swept-up material. This power-law distribution of energetic protons is boosted and beamed into a solid angle $`\mathrm{\Delta }\mathrm{\Omega }\sim \pi /\mathrm{\Gamma }^2`$ in the lab frame. GRB afterglows suggest that GRBs are produced in star formation regions, probably molecular clouds. The typical column density of such clouds is $`N_p=10^{24\pm 1}cm^{-2}`$. The clouds must be highly ionized along the line of sight to the GRB by the enormous fluxes of beamed X-rays and UV radiation from the GRB. Interaction of the highly relativistic GRB jets with the high column density of this ionized gas (or with diffuse matter at their production sites) can produce high energy gamma rays through $`\mathrm{pp}\to \pi ^0(\eta ^0)\mathrm{X}`$; $`\pi ^0(\eta ^0)\to 2\gamma `$. The cross section for inclusive production of high energy $`\gamma `$-rays with a small transverse momentum, $`\mathrm{cp}_\mathrm{T}=\mathrm{E}_\mathrm{T}<1\mathrm{GeV}`$, in pp collisions (e.g., Neuhoffer et al. 1971; Boggild and Ferbel 1974; Ferbel and Molzon 1984) is well represented by
$$\frac{\mathrm{E}}{\sigma _{\mathrm{in}}}\frac{\mathrm{d}^3\sigma }{\mathrm{d}^2\mathrm{p}_\mathrm{T}\mathrm{dE}_\gamma }\approx (1/2\pi \mathrm{p}_\mathrm{T})\mathrm{e}^{-\mathrm{E}_\mathrm{T}/\mathrm{E}_0}\mathrm{f}(\mathrm{x}),$$
(13)
where $`E`$ is the incident proton energy, $`\sigma _{\mathrm{in}}\approx 35\mathrm{mb}`$ is the pp total inelastic cross section at TeV energies, $`E_0\approx 0.16GeV`$ and $`\mathrm{f}(\mathrm{x})\approx (1-\mathrm{x})^3/\sqrt{\mathrm{x}}`$ is a function only of the Feynman variable $`\mathrm{x}=\mathrm{E}_\gamma /\mathrm{E}`$, and not of the separate values of the energies of the incident proton and the produced $`\gamma `$-ray (Feynman scaling). The exponential dependence on $`\mathrm{E}_\mathrm{T}`$ beams the $`\gamma `$-ray production into $`\theta <\mathrm{E}_\mathrm{T}/\mathrm{E}\sim 0.17/\mathrm{\Gamma }`$ along the incident proton direction. When integrated over transverse momentum, the inclusive cross section becomes $`\sigma _{\mathrm{in}}^{-1}\mathrm{d}\sigma /\mathrm{dx}\approx \mathrm{f}(\mathrm{x}).`$ If the incident protons have a power-law energy spectrum, $`\mathrm{dn}_\mathrm{p}/\mathrm{dE}\sim \mathrm{AE}^{-\alpha }`$, then, because of Feynman scaling, the produced $`\gamma `$-rays have the same power-law spectrum:
$$\frac{\mathrm{dn}_\gamma }{\mathrm{dE}_\gamma }\approx \mathrm{N}_\mathrm{p}\int _{\mathrm{E}_\gamma }^{\mathrm{\infty }}\frac{\mathrm{dn}_\mathrm{p}}{\mathrm{dE}}\frac{\mathrm{d}\sigma }{\mathrm{dE}_\gamma }\mathrm{dE}\approx \mathrm{N}_\mathrm{p}\sigma _{\mathrm{in}}\mathrm{g}_{\mathrm{p}\gamma }\mathrm{AE}_\gamma ^{-\alpha },$$
(14)
where $`\mathrm{N}_\mathrm{p}`$ is the column density of the target and $`\mathrm{g}_{\mathrm{p}\gamma }=\int _0^1\mathrm{x}^{\alpha -1}\mathrm{f}(\mathrm{x})\mathrm{dx}\approx 0.092`$ for $`\alpha \approx 2.2`$. Consequently, the collimated flux of high energy gamma rays produced by a GRB jet with initial kinetic energy $`\mathrm{E}=\mathrm{E}_{52}\times 10^{52}\mathrm{erg}`$ that propagates through a molecular cloud of typical column density $`\mathrm{N}_\mathrm{p}=\mathrm{N}_{23}\times 10^{23}\mathrm{cm}^{-2}`$ is given by
$$\frac{\mathrm{dn}_\gamma }{\mathrm{dE}}\approx \frac{6\times 10^{-6}\mathrm{E}_{52}\mathrm{N}_{23}\mathrm{\Gamma }_3^2}{\mathrm{D}_{29}^2}(1+\mathrm{z})^{2-\alpha }\left[\frac{\mathrm{E}}{\mathrm{TeV}}\right]^{-2.2}\mathrm{e}^{-\tau (\mathrm{z},\mathrm{E})}\mathrm{cm}^{-2}\mathrm{TeV}^{-1},$$
(15)
where $`\mathrm{D}(\mathrm{z})=\mathrm{D}_{29}\times 10^{29}\mathrm{cm}`$ is the luminosity distance to the GRB and $`\tau (\mathrm{z},\mathrm{E})`$ is the optical depth to the GRB (redshift z) at energy E. The fluxes of high energy gamma rays from GRBs at $`z\sim 2`$ are attenuated strongly ($`\tau >1`$) for $`\mathrm{E}>20`$ GeV. For not very distant GRBs, e.g., $`z<0.5`$, the gamma ray flux is not attenuated strongly at energies $`\mathrm{E}<100\mathrm{GeV}`$ (Salamon and Stecker 1998). GRBs with $`z<0.1`$ ($`\mathrm{D}_\mathrm{L}<500\mathrm{Mpc}`$) can be visible in TeV gamma rays. But their expected rate is only
$$\mathrm{R}_{\mathrm{GRB}}(\mathrm{z}<0.1)\approx \mathrm{R}_{\mathrm{GRB}}[\mathrm{L}_{\ast }](\rho _\mathrm{L}/\mathrm{L}_{\ast })\mathrm{V}_\mathrm{c}(\mathrm{z}<0.1)\sim 0.1\mathrm{y}^{-1},$$
(16)
where $`\mathrm{R}_{\mathrm{GRB}}[\mathrm{L}_{\ast }]\approx 2\times 10^{-8}\mathrm{y}^{-1}`$ is the estimated mean rate of GRBs in an $`\mathrm{L}_{\ast }\sim 10^{10}\mathrm{L}_{\odot }`$ galaxy for $`z<0.1`$, $`\rho _\mathrm{L}\approx 1.2\times 10^8\mathrm{L}_{\odot }\mathrm{Mpc}^{-3}`$ is the luminosity density in the local Universe, and $`\mathrm{V}_\mathrm{c}(\mathrm{z}<0.1)\approx 5\times 10^8\mathrm{Mpc}^3`$ is the comoving volume within $`z<0.1`$. GeV gamma rays from 3 very luminous GRBs have been reported by the EGRET detector on board CGRO (Dingus et al. 1994; Dingus 1995), consistent with the above predictions. However, they could also have been produced by inverse Compton scattering of GRB photons from energetic electrons in the GRB jets. Only the detection of high energy neutrinos from GRBs can establish the hadronic production origin of high energy photons from GRBs, i.e., the hadronic nature of the GRB jets.
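Two of the numbers used in this section can be checked directly. Note that with the simple fit $`\mathrm{f}(\mathrm{x})\approx (1-\mathrm{x})^3/\sqrt{\mathrm{x}}`$ the Feynman-scaling integral evaluates to $`\approx 0.075`$, of the same order as the quoted 0.092, which presumably derives from a more detailed parametrization of the inclusive cross section (this sketch requires scipy):

```python
import numpy as np
from scipy.integrate import quad

# (a) Feynman-scaling integral g_pgamma = int_0^1 x^(alpha-1) f(x) dx with
#     f(x) = (1-x)^3 / sqrt(x); the integrand ~ x^0.7 near x = 0, so finite.
alpha = 2.2
g, _ = quad(lambda x: x**(alpha - 1.0) * (1.0 - x)**3 / np.sqrt(x), 0.0, 1.0)
print(f"g_pgamma ~ {g:.3f}")    # ~0.075 with this simple fit

# (b) Local GRB rate, Eq. (16): rate per L* galaxy x galaxy density x volume
R_Lstar = 2e-8                  # GRBs per year per L* galaxy
rho_L, L_star = 1.2e8, 1e10     # L_sun Mpc^-3 and L_sun
V_c = 5e8                       # comoving volume within z < 0.1 [Mpc^3]
print(f"R_GRB(z<0.1) ~ {R_Lstar * (rho_L / L_star) * V_c:.2f} per year")  # ~0.1
```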
## 6 High Energy Neutrinos From GRBs
Hadronic production of photons in diffuse targets is accompanied by neutrino emission, mainly through hadronic production of mesons that decay into neutrinos, e.g., $`\mathrm{pp}\to \pi ^\pm \to \mu ^\pm \nu _\mu `$ ; $`\mu ^\pm \to \mathrm{e}^\pm \nu _\mu \nu _\mathrm{e}`$. Analytical calculations (e.g., Dar 1983; Dar 1984; Lipari 1993) show that a proton power-law spectrum, $`\mathrm{dn}_\mathrm{p}/\mathrm{dE}=\mathrm{AE}^{-\alpha }`$ with a power index $`\alpha \approx 2.2`$, generates power-law spectra of $`\gamma `$-rays and $`\nu _\mu `$’s that satisfy approximately $`\mathrm{dn}_\nu /\mathrm{dE}\approx 0.80\mathrm{dn}_\gamma /\mathrm{dE}`$ (Dar and Shaviv 1996). Consequently,
$$\frac{\mathrm{dn}_\nu }{\mathrm{dE}}\approx \frac{5\times 10^{-6}\mathrm{E}_{52}\mathrm{N}_{23}\mathrm{\Gamma }_3^2}{\mathrm{D}_{29}^2}(1+\mathrm{z})^{2-\alpha }\left[\frac{\mathrm{E}}{\mathrm{TeV}}\right]^{-2.2}\mathrm{cm}^{-2}\mathrm{TeV}^{-1}.$$
(17)
Thus, we predict that the high energy $`\gamma `$-ray emission from GRBs is accompanied by emission of high energy neutrinos with similar fluxes, light curves and energy spectra. The expected number of $`\nu _\mu `$ events from a GRB in a deep underwater/ice $`\nu _\mu `$ telescope is
$$\mathrm{N}_{\mathrm{events}}\approx \mathrm{S}\mathrm{N}_\mathrm{A}\int \int \mathrm{R}_\mu \frac{\mathrm{d}\sigma _{\nu \mu }}{\mathrm{dE}_\mu }\frac{\mathrm{dn}_\nu }{\mathrm{dE}}\mathrm{dE}_\mu \mathrm{dE},$$
(18)
where S is the effective surface area of the telescope, $`\mathrm{N}_\mathrm{A}`$ is Avogadro’s number, $`\sigma _{\nu \mu }`$ is the inclusive cross section for $`\nu _\mu \mathrm{p}\to \mu \mathrm{X}`$, and $`\mathrm{R}_\mu `$ is the range (in $`\mathrm{g}\mathrm{cm}^{-2}`$) of muons with energy $`E_\mu `$ in water/ice. The number of events is not sensitive to the detector energy threshold if it is below 300 GeV, where both the muon range and the production cross section increase linearly with energy and yield a detection probability in ice/water which increases like $`10^{-6}(\mathrm{E}/\mathrm{TeV})^2`$ below 300 GeV. Using the neutrino cross sections that were calculated by Gandhi et al. (1998) and neglecting detector threshold effects and neutrino attenuation in Earth (which becomes important only above 100 TeV), we predict that the number of neutrino events in deep underwater/ice neutrino telescopes per 1 km$`^2`$ is
$$\mathrm{N}_{\mathrm{events}}\approx 1.3\times (1+\mathrm{z})^{0.2}\frac{\mathrm{E}_{52}\mathrm{N}_{23}\mathrm{\Gamma }_3^2}{\mathrm{D}_{29}^2}\mathrm{km}^{-2}.$$
(19)
Thus, a relatively nearby GRB (z=0.5) may generate $`\sim 40\mathrm{E}_{52}\mathrm{N}_{23}`$ upgoing muon events in an underwater/ice telescope of 1 km$`^2`$ area, and only $`\sim 1\mathrm{E}_{52}\mathrm{N}_{23}`$ events if it is at z=2. The expected time length of these neutrino bursts from GRBs is typically $`\mathrm{t}\sim \mathrm{R}_{\mathrm{cl}}/\mathrm{c}\mathrm{\Gamma }^2\sim \mathrm{R}_{10}\times 10^3\mathrm{s}`$, where $`\mathrm{R}_{\mathrm{cl}}=10\mathrm{R}_{10}\mathrm{pc}`$ is the size of the molecular cloud. Such events can be distinguished from the atmospheric neutrino background by their directional and time coincidence with the GRBs, and would establish the hadronic nature of the relativistic jets from GRBs.
Unlike the neutrino bursts from nearby supernova explosions, the arrival times of $`\nu `$’s from GRBs, which are spread over $`\mathrm{t}\sim \mathrm{t}_3\times 10^3\mathrm{s}`$, yield only poor limits on neutrino masses and lifetimes: $`\mathrm{m}_\nu \mathrm{c}^2>\sqrt{2\mathrm{t}/\mathrm{T}}\mathrm{E}_\nu \sim \sqrt{\mathrm{t}_3/\mathrm{T}_{10}}[\mathrm{E}_\nu /\mathrm{TeV}]\times 10^5\mathrm{eV}`$, where $`\mathrm{T}_{10}`$ is the GRB lookback time in units of 10 Gy. This limit cannot compete with the cosmological limit, $`\mathrm{m}_\nu \mathrm{c}^2<94\mathrm{\Omega }_\mathrm{M}\mathrm{h}^2\mathrm{eV}\approx 8\mathrm{eV}`$, for long-lived neutrinos. The neutrino arrival times can be used, however, to improve the limit from Supernova 1987A (LoSecco 1987) on the equivalence principle of General Relativity.
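Both the burst duration and the neutrino-mass sensitivity quoted above follow from the stated scalings; a short check with fiducial $`\mathrm{R}_{10}=\mathrm{t}_3=\mathrm{T}_{10}=1`$ and $`\mathrm{E}_\nu =1`$ TeV:

```python
import math

pc, c, yr = 3.086e18, 3e10, 3.156e7     # cm, cm/s, s

# Duration of the neutrino burst, t ~ R_cl / (c Gamma^2), for R_10 = 1
R_cl, Gamma = 10 * pc, 1e3
t = R_cl / (c * Gamma**2)
print(f"t ~ {t:.0f} s")                 # ~1e3 s

# Neutrino-mass sensitivity from the arrival-time spread,
# m c^2 ~ sqrt(2 t / T) E_nu, with T ~ 10 Gyr the lookback time
T, E_nu = 10e9 * yr, 1e12               # s, eV (1 TeV)
print(f"m c^2 ~ {math.sqrt(2.0 * t / T) * E_nu:.1e} eV")   # ~1e5 eV
```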
Results presented here are based on an ongoing collaboration with A. De Rújula and R. Plaga. This research was supported by the fund for the promotion of research at the Technion.
# Experimental realization of Popper’s Experiment: Violation of the Uncertainty Principle?
## I Introduction
Uncertainty, one of the basic principles of quantum mechanics, distinguishes the world of quantum phenomena from the realm of classical physics. Quantum mechanically, one can never expect to measure both the precise position and momentum of a particle at the same time. It is prohibited. We say that the quantum observables “position” and “momentum” are “complementary” because the precise knowledge of the position (momentum) implies that all possible outcomes of measuring the momentum (position) are equally probable.
Karl Popper, being a “metaphysical realist”, however, took a different point of view. In his opinion, the quantum formalism could and should be interpreted realistically: a particle must have precise position and momentum, a view he shared with Einstein. In this regard he invented a thought experiment in the early 1930’s which aimed to support the realistic interpretation of quantum mechanics and undermine the Copenhagen orthodoxy. What Popper intended to show in his thought experiment is that a particle can have both precise position and momentum at the same time through the correlation measurement of an entangled two-particle system. This bears a striking similarity to what the EPR gedankenexperiment of 1935 sought to conclude. Unlike the EPR gedankenexperiment, however, Popper’s experiment remained largely unknown to the physics community.
In this paper we wish to report a recent realization of Popper’s thought experiment. Indeed, it is astonishing to see that the experimental results agree with Popper’s prediction. Through quantum entanglement one may learn the precise knowledge of a photon’s position and would therefore expect a greater uncertainty in its momentum under the usual Copenhagen interpretation of the uncertainty relations. However, the measurement shows that the momentum does not experience a corresponding increase of uncertainty. Is this a violation of the uncertainty principle?
As a matter of fact, one should not be surprised by the experimental result, and should not consider this question a new challenge. Similar results have been demonstrated in EPR-type experiments, and the same question was asked in EPR’s 1935 paper. In the past decades, we have been worrying about problems concerning causality, locality, and reality more than about the “crux” of the EPR paradox itself: the uncertainty principle.
## II Popper’s Experiment
Similar to the EPR gedankenexperiment, Popper’s experiment is also based on the feature of two-particle entanglement. Quantum mechanics allows the entangled EPR-type state, a state in which, if the position or momentum of particle 1 is known, the corresponding observable of its twin, particle 2, is then 100% determined. Popper’s original thought experiment is schematically shown in Fig. 1. A point source S, positronium as Popper suggests, is placed at the center of the experimental arrangement, from which entangled pairs of particles 1 and 2 are emitted in opposite directions along the respective positive and negative $`x`$-axes towards two screens A and B. There are slits on both screens parallel to the $`y`$-axis, and the slits may be adjusted by varying their widths $`\mathrm{\Delta }y`$. Beyond the slits on each side stands an array of Geiger counters for the coincidence measurements of the particle pairs, as shown in the figure. The entangled pair could be emitted in any direction over the $`4\pi `$ solid angle from the point source. However, if particle 1 is detected in a certain direction then particle 2 is known to be in the opposite direction, due to the momentum conservation of the quantum pair.
First, let us imagine the case in which slits A and B are both adjusted to be very narrow. In this circumstance, counters that are higher up and lower down, as viewed from the slits, should come into play. The firing of these counters is indicative of the greater $`\mathrm{\Delta }p_y`$ due to the smaller $`\mathrm{\Delta }y`$ for each particle. There seems to be no disagreement in this situation between the Copenhagen school and Popper, and both sides can provide a reasonable explanation according to their own philosophical beliefs.
Next, suppose we keep the slit at A very narrow and leave the slit at B wide open. The main purpose of the narrow slit A is to provide precise knowledge of the position $`y`$ of particle 1, and this subsequently determines the precise position of its twin (particle 2) on side B through quantum entanglement. Now, asks Popper, in the absence of a physical interaction with an actual slit, does particle 2 experience a greater uncertainty in $`\mathrm{\Delta }p_y`$ due to the precise knowledge of its position? Based on his “statistical-scatter” theory, Popper provides a straightforward prediction: particle 2 must not experience a greater $`\mathrm{\Delta }p_y`$ unless a real physical narrow slit B is applied. However, if Popper’s conjecture is correct, this would imply that the product of $`\mathrm{\Delta }y`$ and $`\mathrm{\Delta }p_y`$ of particle 2 could be smaller than $`h`$ ($`\mathrm{\Delta }y\mathrm{\Delta }p_y<h`$). This may pose a serious difficulty for the Copenhagen camp, and perhaps for many of us. On the other hand, if particle 2 going to the right does scatter like its twin, which has passed through slit A, even though slit B is wide open, we are then confronted with an apparent action-at-a-distance!
## III Realization of Popper’s Experiment
We have realized Popper’s experiment with the use of the entangled two-photon source of spontaneous parametric down conversion (SPDC) . In order to clearly demonstrate all aspects of the historical and modern experimental concerns in a practical manner, Popper’s original design is slightly modified as shown in Fig. 2. The two-photon source is a CW Argon ion laser pumped SPDC which provides a two-photon entangled state that preserves momentum conservation for the signal-idler photon pair in the SPDC process. By taking advantage of the nature of entanglement of the signal-idler pair (also labeled “photon 1” and “photon 2”) one could make a “ghost image” of slit A at “screen” B, see Fig. 3. The physical principle of the two-photon “ghost image” has been reported in Ref. .
The experimental condition specified in Popper’s experiment is then achieved: when slit A is adjusted to a certain narrow width and slit B is wide open, slit A provides precise knowledge about position of photon 1 on the $`y`$-axis up to an accuracy $`\mathrm{\Delta }y`$ which equals the width of slit A and the corresponding “ghost image” of pinhole A at “screen” B determines the precise position $`y`$ of photon 2 to within the same accuracy $`\mathrm{\Delta }y`$. $`\mathrm{\Delta }p_y`$ of “photon 2” can be independently studied by measuring the width of its “diffraction pattern” at a certain distance from “screen” B. This is obtained by recording coincidences between detectors $`D_1`$ and $`D_2`$ while scanning detector $`D_2`$ along its $`y`$-axis which is behind “screen” B at a certain distance. Instead of a battery of Geiger counters, in our experiment only two photon counting detectors $`D_1`$ and $`D_2`$ placed behind the respective slits A and B are used for the coincidence detection. Both $`D_1`$ and $`D_2`$ are driven by step motors and so can be scanned along their $`y`$-axes. $`\mathrm{\Delta }y\mathrm{\Delta }p_y`$ of “photon 2” is then readily calculated and compared with $`h`$ .
The use of a “point source” in the original proposal has been much criticized and considered the fundamental mistake Popper made. The major objection is that a point source can never produce a pair of entangled particles which preserves two-particle momentum conservation. However, notice that a “point source” is not a necessary requirement for Popper’s experiment. What is required is the position entanglement of the two-particle system: if the position of particle 1 is precisely known, the position of particle 2 is also 100% determined. So one can learn the precise position of a particle through quantum entanglement. Quantum mechanics does allow position entanglement for an entangled system (EPR state), and there are certain practical mechanisms, such as the “ghost-image” effect shown in our experiment, that can be used for its realization.
The schematic experimental setup is shown in Fig.4 with detailed indications of the various distances. A CW Argon ion laser line of $`\lambda _p=351.1nm`$ is used to pump a $`3mm`$ long beta barium borate (BBO) crystal for type-II SPDC to generate an orthogonally polarized signal-idler photon pair. The laser beam is about $`3mm`$ in diameter with a diffraction limited divergence. It is important not to focus the pump beam so that the phase-matching condition, $`𝐤_s+𝐤_i=𝐤_p`$, is well reinforced in the SPDC process , where $`𝐤_j`$ $`(j=s,i,p)`$ is the wavevectors of the signal (s), idler (i), and pump (p) respectively. The collinear signal-idler beams, with $`\lambda _s=\lambda _i=702.2nm=2\lambda _p`$ are separated from the pump beam by a fused quartz dispersion prism, and then split by a polarization beam splitter PBS. The signal beam (“photon 1”) passes through the converging lens LS with a $`500mm`$ focal length and a $`25mm`$ diameter. A $`0.16mm`$ slit is placed at location A which is $`1000mm`$ $`(=2f)`$ behind the lens LS. The use of LS is to achieve a “ghost image” of slit A ($`0.16mm`$) at “screen” B which is at the same optical distance $`1000mm`$ $`(=2f)`$ from LS, however in the idler beam (in the path of “photon 2”). The signal and idler beams are then allowed to pass through the respective slits A and B (a real slit B and then a “ghost image” of slit A) and to trigger the two photon counting detectors $`D_1`$ and $`D_2`$. A short focal length lens is used with $`D_1`$ for collecting the signal beam which passes through slit A. The point-like detector $`D_2`$ is located $`500mm`$ behind “screen” B. The detectors are Geiger mode avalanche photodiodes which are $`180\mu m`$ in diameter. $`10nm`$ band-pass spectral filters centered at $`702nm`$ are used with each of the detectors. The output pulses from the detectors are sent to a coincidence circuit. During the measurements, detector $`D_1`$ is fixed behind slit A while detector $`D_2`$ is scanned on the $`y`$-axis by a step motor.
Measurement 1: we first studied the case in which both slits A and B were adjusted to be $`0.16mm`$. The $`y`$-coordinate of $`D_1`$ was chosen to be $`0`$ (center) while $`D_2`$ was allowed to scan along its $`y`$-axis. The circled dot data points in Fig. 5 show the coincidence counting rates against the $`y`$-coordinates of $`D_2`$. It is a typical single-slit diffraction pattern with $`\mathrm{\Delta }y\mathrm{\Delta }p_y=h`$. Nothing is special in this measurement except that we have learned the width of the diffraction pattern for the $`0.16mm`$ slit, and this represents the minimum uncertainty of $`\mathrm{\Delta }p_y`$. We should remark at this point that the single-detector counting rate of $`D_2`$ shows basically the same pattern as the coincidence counts, except for a higher counting rate.
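For reference, the expected scale of this pattern follows from a standard Fraunhofer (far-field) estimate using the numbers quoted in the setup; this is an illustration, not the authors' analysis:

```python
# Fraunhofer estimate for measurement 1: lambda = 702.2 nm, slit width
# 0.16 mm, detector D2 scanned 500 mm downstream of slit B.
lam = 702.2e-9      # wavelength [m]
a = 0.16e-3         # slit width [m]
L = 0.5             # slit-to-detector distance [m]

theta = lam / a                 # half-angle to the first diffraction minimum
width = 2.0 * L * theta         # full width between first minima at D2
print(f"theta ~ {theta:.2e} rad, pattern width at D2 ~ {width * 1e3:.1f} mm")
# Delta_y * Delta_p_y = a * (h/lam) * (lam/a) = h: the minimum-uncertainty
# product quoted for this single-slit pattern.
```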
Measurement 2: the same experimental conditions were maintained except that slit B was left wide open. This measurement is a test of Popper’s prediction. The $`y`$-coordinate of $`D_1`$ was chosen to be $`0`$ (center) while $`D_2`$ was allowed to scan along its $`y`$-axis. Because of the entanglement of the signal-idler photon pair and the coincidence measurement, only those twins which have passed through slit A and the “ghost image” of slit A at “screen” B with an uncertainty of $`\mathrm{\Delta }y=0.16mm`$ (which is the same width as the real slit B used in measurement 1) would contribute to the coincidence counts through the simultaneous triggering of $`D_1`$ and $`D_2`$. The diamond dot data points in Fig. 5 report the measured coincidence counting rates against the $`y`$-coordinates of $`D_2`$. The measured width of the pattern is narrower than that of the diffraction pattern shown in measurement 1. At the same time, the width of the pattern is found to be much narrower than the actual size of the diverging SPDC beam at $`D_2`$. It is also interesting to notice that the single counting rate of $`D_2`$ remains constant over the entire scanning range, which is very different from that in measurement 1. The experimental data have provided a clear indication of $`\mathrm{\Delta }y\mathrm{\Delta }p_y<h`$ in the coincidence measurements.
## IV Quantum Mechanical Prediction
Given that $`\mathrm{\Delta }y\mathrm{\Delta }p_y<h`$, is this a violation of the uncertainty principle? Before drawing any conclusion, let us first examine what quantum mechanics predicts. If quantum mechanics does provide a solution with $`\mathrm{\Delta }y\mathrm{\Delta }p_y<h`$ for “photon 2”, then indeed we would be forced to face a paradox, as EPR pointed out in 1935.
We begin with the question: how does one learn the precise position of photon 2 at “screen” B quantum mechanically? Is it really $`0.16mm`$, as determined by the width of slit A? The answer is positive. Quantum mechanics predicts a “ghost” image of slit A at “screen” B which is $`0.16mm`$ for the above experimental setup. The crucial point is that we are dealing with an entangled two-photon state of SPDC,
$$|\mathrm{\Psi }\rangle =\sum _{s,i}\delta \left(\omega _s+\omega _i-\omega _p\right)\delta \left(𝐤_s+𝐤_i-𝐤_p\right)a_s^{\dagger }(\omega (𝐤_s))a_i^{\dagger }(\omega (𝐤_i))|0\rangle ,$$
(1)
where $`\omega _j`$, $`𝐤_j`$ (j = s, i, p) are the frequencies and wavevectors of the signal (s), idler (i), and pump (p), respectively. $`\omega _p`$ and $`𝐤_p`$ can be considered as constants, while $`a_s^{\dagger }`$ and $`a_i^{\dagger }`$ are the respective creation operators for the signal and the idler. As given in the above form, the entanglement feature in state (1) may be thought of as the superposition of an infinite number of “two-photon” states, corresponding to the infinite number of ways the SPDC signal-idler pair can satisfy the conditions of energy and momentum conservation, as represented by the $`\delta `$-functions in the state, which are technically known as the phase-matching conditions:
$$\omega _s+\omega _i=\omega _p,𝐤_s+𝐤_i=𝐤_p.$$
(3)
It is interesting to see that even though there is no precise knowledge of the momentum for either the signal or the idler, the state nonetheless provides precise knowledge of the momentum correlation of the pair. In the language of EPR, the momentum for neither the signal photon nor the idler photon is determined but if a measurement on one of the photons yields a certain value, the momentum of the other photon is 100% determined.
To simplify the physical picture, we “unfold” the signal-idler paths in the schematic of Fig. 4 into that shown in Fig. 3, which is equivalent to assuming $`𝐤_s+𝐤_i=0`$ while not losing the important entanglement feature of the momentum conservation of the signal-idler pair. This important peculiarity selects the only possible optical paths of the signal-idler pairs that result in a “click-click” coincidence detection; these are represented by straight lines in this unfolded version of the experimental schematic, so that the “image” of slit A is well produced in coincidences, as shown in the figure. It is similar to optical imaging in the “usual” geometric-optics picture, bearing in mind the different propagation directions of the signal and idler indicated by the small arrows on the straight lines. It is easy to see that a “clear” image requires the locations of slit A, lens LS, and screen B to be governed by the Gaussian thin lens equation,
$$\frac{1}{a}+\frac{1}{b}=\frac{1}{f}.$$
(4)
In our experiment, we have chosen $`a=b=2f=1000mm`$, so that the “ghost image” of slit A at “screen” B must have the same width as that of slit A. The measured size of the “ghost image” agrees with theory.
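A trivial numerical check of the imaging condition (illustrative only):

```python
# Check of the Gaussian thin-lens condition, Eq. (4), for the unfolded
# geometry: with a = b = 2f the ghost image has unit magnification, so the
# image of the 0.16 mm slit A at "screen" B is also 0.16 mm wide.
f = 500.0               # focal length [mm]
a = b = 2.0 * f         # object and image distances [mm]

assert abs(1.0 / a + 1.0 / b - 1.0 / f) < 1e-12   # Eq. (4) holds
m = b / a                                         # magnification
print(f"magnification = {m:.1f}, image width = {0.16 * m:.2f} mm")
```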
In Fig. 3 we see clearly that these two-photon paths (straight lines) that result in a “click-click” joint detection are restricted by slit A, lens LS, as well as by momentum conservation. As a result, any signal-idler pair that passes through the $`0.16mm`$ slit A would be “localized” within $`\mathrm{\Delta }y=0.16mm`$ at “screen” B. In this way, one does learn the precise position knowledge of photon 2 through the entanglement nature of the two-photon system.
One could also explain this “ghost image” in terms of conditional measurements: conditioned on the detection of “photon 1” by detector $`D_1`$ behind slit A, “photon 2” can only be found in a certain position. In other words, “photon 2” is localized only upon the detection of photon 1.
Now let us go further and examine $`\mathrm{\Delta }p_y`$ of photon 2, which is conditionally “localized” within $`\mathrm{\Delta }y=0.16mm`$ at “screen” B. In order to study $`\mathrm{\Delta }p_y`$, the photon counting detector $`D_2`$ is scanned $`500mm`$ behind “screen” B to measure the “diffraction pattern”. $`\mathrm{\Delta }p_y`$ can be easily estimated from the measurement of the width of the diffraction pattern. The two-photon paths indicated by the straight lines reach detector $`D_2`$, which is located $`500mm`$ behind “screen” B, so that $`D_2`$ will receive “photon 2” within a much narrower width under the condition of a “click” at detector $`D_1`$, as shown in measurement 2, unless a real physical slit B is applied to “disturb” the straight lines.
Apparently we have a paradox: quantum mechanics provides us with a solution which gives $`\mathrm{\Delta }y\mathrm{\Delta }p_y<h`$ in measurement 2 and the experimental measurements agree with the prediction of quantum mechanics.
## V Conclusion
It is the same paradox as EPR’s. Indeed, one could consider this experiment a variant of the 1935 EPR gedankenexperiment, in which the position-momentum uncertainty was questioned by Einstein-Podolsky-Rosen based on the discussion of a two-particle entangled state. Compare with the EPR-Bohm experiment, which is a simplified version of the 1935 EPR gedankenexperiment: the spin of neither particle is determined (uncertain); however, if one particle is measured to be spin up along a certain direction, the other one must be spin down along that direction (certain). All the spin components of a particle can be precisely determined through the measurement of its twin.
Quantum mechanics gives predictions for the EPR and the EPR-Bohm correlations based on measurements of entangled states. All reported historical experiments have shown good agreement with quantum mechanics as well as with EPR’s prediction (but not their interpretation). The results of our experiment agree with quantum mechanics and with Popper’s prediction too. We therefore consider that the following discussion may apply to both EPR and Popper.
Popper and EPR were correct in the prediction of the physical outcomes of their experiments. However, Popper and EPR made the same error by applying the results of two-particle physics to the explanation of the behavior of an individual particle. The two-particle entangled state is not the state of two individual particles. Our experimental result is emphatically NOT a violation of the uncertainty principle which governs the behavior of an individual quantum.
In both the Popper and EPR experiments the measurements are “joint detections” between two detectors applied to entangled states. Quantum mechanically, an entangled two-particle state only provides precise knowledge of the correlations of the pair. Neither of the subsystems is determined by the state. It can be clearly seen from our above analysis of Popper’s experiment that this kind of measurement is only useful to decide how good the correlation is between the entangled pair. In other words, the behavior of “photon 2” observed in our experiment is conditioned upon the measurement of its twin. A quantum must obey the uncertainty principle, but the “conditional behavior” of a quantum in an entangled two-particle system is different. The uncertainty principle is not for “conditional” behavior. We believe paradoxes are unavoidable if one insists that the conditional behavior of a particle is the behavior of the particle. This is the central problem of the rationale behind both Popper and EPR. $`\mathrm{\Delta }y\mathrm{\Delta }p_y\ge h`$ is not applicable to the conditional behavior of either “photon 1” or “photon 2” in the case of the Popper and EPR type of measurements.
The behavior of photon 2 conditioned upon photon 1 is well represented by the two-photon amplitudes. Each of the straight lines in the above discussion corresponds to a two-photon amplitude. Quantum mechanically, the superposition of these two-photon amplitudes is responsible for a “click-click” measurement of the entangled pair. A “click-click” joint measurement of the two-particle entangled state projects out certain two-particle amplitudes, and only these two-particle amplitudes feature in the quantum formalism. In the above analysis we never consider “photon 1” or “photon 2” individually. Popper’s question about the momentum uncertainty of photon 2 is therefore inappropriate. The correct question to ask in these measurements should be: what is the $`\mathrm{\Delta }p_y`$ for the signal-idler pair which is “localized” within $`\mathrm{\Delta }y=0.16mm`$ at “screen” B and at “screen” A, governed by momentum conservation? This is indeed the central point of this experiment. There is no reason to expect that the “conditionally localized photon 2” will follow the familiar interpretation of the uncertainty relation, as shown in Fig. 5.
Quantum mechanics shows that the superposition of these two-photon amplitudes results in a non-factorizable two-dimensional biphoton wavepacket instead of two individual wavepackets associated with photon 1 and photon 2. Figure 6 gives a simple picture of the biphoton wavepacket of SPDC. We believe all the problems raised by the EPR and Popper type experiments can be duly resolved if the concept of biphoton is adopted in place of two individual photons.
Once again, this recent demonstration of the thought experiment of Popper calls our attention to the important message: the physics of the entangled two-particle system must inherently be very different from that of individual particles. In the spirit of the above discussions, we conclude that it has been a long-standing historical mistake to mix up the uncertainty relations governing an individual single particle with an entangled two-particle system.
The authors acknowledge important suggestions and encouragement from T. Angelidis, A. Garuccio, C.K.W. Ma, and J.P. Vigier. We especially thank C.K.W. Ma from LSE for many helpful discussions. We are grateful to A. Sudbery for useful comments. This research was partially supported by the U.S. Office of Naval Research and the U.S. Army Research Office - National Security Agency grants.
# Gamma-ray Burst Spectral Features: Interpretation as X-ray Emission from a Photoionized Plasma
## 1. Introduction
Numerous detections have been reported of features, either in emission or absorption, in the spectra of some gamma-ray bursts (GRBs). Absorption features below 100 keV were reported by Konus (Mazets et al. (1981)), HEAO A-1 (Hueter (1984)), and Ginga (Murakami et al. (1988)); in addition, Konus and other instruments have detected broad emission-like features at high energy (400 – 500 keV) in a few cases (e.g. Mazets et al. (1979); Teegarden & Cline (1980)). The BATSE Spectroscopy Detectors (SD) on CGRO have recently reported 13 statistically significant line candidates, although some uncertainty in the contribution of systematics to the analysis makes these detections uncertain (Briggs et al. (1999)). For the most statistically significant BATSE candidate, GRB 930916, the feature appears as a broad bump between 41 and 51 keV (Briggs et al. (1999)). Even if the BATSE features are not confirmed, the SD data cannot yet rule out the existence of features similar to those seen by Ginga in some fraction of GRBs (Band et al. (1996)), and confirmation awaits more sensitive spectroscopic gamma-ray instruments.
Originally, these line features were interpreted in the context of Galactic neutron star models for the GRB progenitors. The low-energy absorption features were explained as cyclotron resonance lines in a $`10^{12}`$ Gauss magnetic field (Higdon & Lingenfelter (1990); Fenimore et al. (1988)), and the high-energy lines were postulated to be from $`e^+e^{}`$ annihilation radiation gravitationally redshifted near the surface of a solar mass neutron star (e.g. Liang (1986)). Recently, however, detection of redshifted absorption lines in the optical counterparts associated with two bursts (Metzger et al. (1997),Kulkarni et al. (1999)), and emission lines from the galaxies associated with three others (Kulkarni et al. (1998),Djorgovski et al. 1998a ,Djorgovski et al. 1998b ) have confirmed cosmological distances for five GRBs. Although the BATSE data still allow a fraction of GRBs to be in a Galactic distribution (Loredo & Wasserman (1995)), the majority of long GRBs must be cosmological. It is possible that the observed gamma-ray spectral features are associated with a subpopulation of Galactic GRB progenitors. However, given the recent redshift and host galaxy observations this seems unlikely. It is therefore interesting to look for an explanation for these features in the context of cosmological GRB models.
Recently, Mészáros and Rees (1998) have discussed the possibility that the relativistic outflows associated with cosmological GRBs may entrain small blobs or filaments of dense, highly-ionized, metal-rich material that could give rise to broad features due to Fe K-edges in the GRB spectrum. For typical blob Lorentz factors of $`\mathrm{\Gamma }_b`$ ∼ 25 – 100, Fe K-edges would give rise to isolated broad features in the 250 – 1000 keV band, similar perhaps to the high-energy lines observed by Konus. It would be difficult, however, to produce the lines observed by Ginga, which have multiple features below 100 keV. In this letter, we accept the line detections as real, and investigate the possibility that the low-energy lines seen by Ginga could be produced by excitation and absorption in the predominantly Ne-like Fe-L complex and/or in outflows containing low-Z elements. As we show in §3, features similar to the Ginga lines can be produced both by O-, Ne-, Si-rich outflows, and by a combination of Fe and light elements. Finally, in the context of this model, we discuss the utility of broad band X- and gamma-ray spectroscopy for constraining fireball model parameters and the composition of the ejecta.
## 2. Model and Calculations
In the model described by Mészáros and Rees (1998) (hereafter MR98), metal-enriched, high-density regions become entrained in the fireball, and are confined by the high ambient and ram pressure of the relativistic outflow. These small blobs or filaments, although a negligible fraction of the total outflow mass, can have a significant covering factor. Blobs with gas temperature comparable to the comoving photon temperature ($`\sim `$1 keV for typical fireball model parameters) would form photoionized plasmas with prominent line emission, similar to those found in many X-ray emitting sources. The plasma density would be high by most astrophysical standards, reaching $`n_b\sim 10^{18}\mathrm{cm}^{-3}`$ (assuming typical fireball parameters) for the case where the blob internal pressure balances the total external (magnetic and particle) pressure. These blobs would be accelerated by radiation or magneto-hydrodynamic pressure and would achieve a saturation bulk Lorentz factor well before reaching the emission region where internal shocks convert a significant fraction of the bulk kinetic energy into radiation ($`r_{sh}\sim 10^{13}`$ cm).
As described in MR98, several factors would broaden any spectral features from the dense photoionized plasma. If all blobs have the same bulk Lorentz factor, $`\mathrm{\Gamma }_b`$, emission line features will be broadened due to contributions from regions with velocities that are at different angles to our line of sight, which will have different Doppler blue-shifts. In addition, the blobs may have a range of Lorentz factors, since those with low enough column would be accelerated to the velocity of the surrounding flow, whereas those with larger columns would reach slower terminal velocities. The range of Lorentz factors then depends on the range of blob sizes, which in turn depends on the details of the entrainment process and the extent to which instabilities break up the bigger blobs.
We have adopted this basic scenario described in MR98, and investigated the conditions under which low-energy features similar to those seen by Ginga can be formed. Our goal was to qualitatively reproduce multiple spectral features with a similar fraction of the total luminosity. It is important to note that the actual shape of the Ginga features is very sensitive to proper subtraction of the underlying continuum, and therefore on proper understanding of the instrument response. Given the imperfect knowledge of the Ginga response, varying assumptions about the continuum can cause the features to appear qualitatively different (Fenimore et al. (1988)). We assume that the blobs are in pressure equilibrium with the surrounding medium (otherwise they would not be stable), and in addition, treat the optically thin regime only. Although the model itself does not impose any limit on the optical depth, the optically thin assumption represents the simplest case, where self-shielding and other time-dependent effects can be ignored.
We used the photoionization code XSTAR (Kallman & Krolik (1995)) to directly calculate the reprocessed spectrum and temperature of the photoionized material. As input to the code, we must specify the ionization parameter, $`\mathrm{\Xi }=L/(n_br^2)`$ (here $`L`$ is the luminosity, $`n_b`$ the blob particle density, and $`r`$ the blob size). In the comoving frame, $`\mathrm{\Xi }=L/(n_br^2\mathrm{\Gamma }_b^2)`$ (MR98). Additional inputs are the ionizing spectrum and relative elemental abundances. For the ionization parameter, we investigated the range $`\mathrm{\Xi }=100`$ – 1000, consistent with the fireball model parameters of MR98. Similarly, we considered the relevant range $`\mathrm{\Gamma }_b=25`$ – 100. We assume a power-law ionizing spectrum with energy index $`\alpha `$, varied between 0.1 and 0.5, consistent with the average continuum spectrum early in the burst. Few constraints can be placed on the relative elemental abundances, since for neutron star mergers or hypernova scenarios little is known about the composition of the surface. We have therefore investigated a range for the abundant elements O, Ne, Si, and Fe.
The XSTAR output, after ionization equilibrium is established, consists of the blob temperature (in the comoving frame), the abundance of each ion, prominent lines and edges with location and magnitude, and the ratio of bolometric recombination line to continuum luminosity. Given the assumption of pressure equilibrium, the choice of $`\mathrm{\Xi }`$ fixes the blob temperature in the co-moving frame; $`\mathrm{\Xi }=500T_7`$, where $`T=10^7T_7`$ K (MR98, assuming an isotropic blob distribution). We discard as inconsistent any solutions that do not satisfy the required relationship between $`\mathrm{\Xi }`$ and $`T`$. We also consider a range of reprocessing rates (ratio of total recombination luminosity to total ionizing luminosity), which we adjust in order to find solutions with a ratio of deposited luminosity in the broad features to total luminosity of a few percent, consistent with the Ginga observations. We keep only solutions that have total optical depth in the lines $`\tau \lesssim 1`$, consistent with the optically thin assumption.
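The selection of self-consistent solutions can be sketched in a few lines; this illustrates only the stated $`\mathrm{\Xi }=500T_7`$ relation, not the XSTAR calculation itself:

```python
# Illustration of the pressure-balance relation Xi = 500 T_7: a trial Xi is
# self-consistent only if the computed blob temperature matches (Xi/500)*1e7 K.
K_B_KEV = 8.617e-8      # Boltzmann constant [keV/K]

def blob_temperature(xi):
    """Comoving blob temperature [K] implied by the ionization parameter."""
    return (xi / 500.0) * 1e7

for xi in (100.0, 500.0, 1000.0):
    T = blob_temperature(xi)
    print(f"Xi = {xi:6.0f} -> T = {T:.1e} K (kT ~ {K_B_KEV * T:.2f} keV)")
```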
With the output from XSTAR, and the assumed reprocessing rate, we calculate an observed spectrum by blueshifting (by $`\mathrm{\Gamma }_b`$), and broadening with the instrumental width and the (dominant) relativistic effect resulting from the variation of velocity projected along the line of sight. To determine the magnitude of the latter, we assume the Lorentz factor and the luminosity to be independent of time, and we integrate over the spherical emission surface, assuming the emitting material to be uniformly distributed. This results in a spectral smearing of $``$50%, similar to the 30 – 50% suggested by MR98. In addition, we investigated the effects of time-varying luminosity by parametrizing a decreasing luminosity resulting from shell expansion, and calculating the line broadening over the entire shell. For this time-dependent calculation, we included proper integration over the equal arrival time ellipse, as described by Panaitescu and Mészáros (1998). The time-dependence does result in additional broadening, however it is not sufficient to qualitatively change our conclusions, and for simplicity we therefore employ the time-independent calculation in the results presented here. We do not include any additional broadening due to possible range of blob bulk Lorentz factors.
The density of the blobs is very high by normal astrophysical standards, and the validity of the XSTAR code under these conditions is therefore of concern. XSTAR ignores three-body recombination, assuming that the ionization equilibrium is determined by photoionization and by radiative and dielectronic recombination. Explicit calculation shows that, due to the high temperatures, radiative recombination will dominate over three-body recombination, and this will not result in significant inaccuracy. A further concern is that the code does not properly treat collisional redistribution among excited levels. We estimate these errors to be at the $`\sim `$25 – 100% level, and not of concern for reproducing gross spectral features.
Finally, we note that our results are not strongly dependent on the geometry of the entrained material. Mészáros and Rees (1998) point out that the blobs may have a filamentary structure resulting from the magnetic fields, however given our assumption of optically-thin emission, this will not significantly alter the observed spectrum.
## 3. Results
By investigating the range of parameter space described above, we found that we could reproduce spectral features in the 10 – 100 keV band resembling the BATSE and Ginga measurements. Figures 1 and 2 show results from two of the best cases (i.e. those most closely resembling the broad features seen by these instruments). For each, we show the spectrum of reprocessed photons in the comoving frame, the spectrum after relativistic broadening, and the observed spectrum after convolving with the Ginga instrument resolution. We have not included any additional broadening due to a possible spread in the blob Lorentz factors. The first case (Figure 1) has a relative abundance of metals of O:Ne:Si = 1:0.5:1, and the second (Figure 2) has O:Ne:Si:Fe = 1:0.25:0.15:0.05. The values for the ionization parameter and energy spectral index are indicated in the captions. To reproduce features with the intensity of the Ginga or BATSE detections requires a reprocessing rate of order unity ($`\tau \sim 1`$), marginally consistent with the assumption of optically thin emission.
From Figures 1 and 2, it is clear that multiple features resembling absorption dips can be produced below 100 keV from a combination of low-Z elements and, in the case where iron is present, from the L-shell complex. These dips, interpreted as absorption features in the Ginga spectra, are a result of smearing of the complex ionization structure (line emission and edges), and are not due to absorption. If the ejecta contain iron (Figure 2), then additional emission-like features are seen in the 100 – 200 keV band due to the K-shell. Note that only one instrument has reported high-energy lines simultaneously with low-energy features. This is due primarily to instrumental limitations: reasonably large collecting area is required for high-energy detection, while good energy resolution and a clean instrumental response are required below 100 keV, and these have not been combined in a single experiment. We have therefore adjusted the blob Lorentz factor to the value required to match the Ginga observations ($`\mathrm{\Gamma }_b=25`$) for a GRB with redshift $`z=1`$. This value of the Lorentz factor is smaller than the $`\mathrm{\Gamma }_b\sim 100`$ typically invoked to ensure the emission region is not opaque due to photon-photon pair production. The latter value is, however, estimated by assuming the gamma-ray spectrum extends to 100 MeV. There is no evidence for such high-energy emission in the Ginga events, where the low-energy line features were observed, and the value $`\mathrm{\Gamma }_b=25`$ is consistent with all the observations. Note that the 300 - 500 keV emission-like features seen in some GRBs could be associated with Fe K emission edges for values of $`\mathrm{\Gamma }_b\sim 50`$ – 100. Including only broadening due to the variation of velocity projected along the line of sight and typical instrumental resolution produces features consistent with the observations, while any additional broadening due to variation in blob Lorentz factors would smear out the lines entirely. We emphasize that the results presented here are the best cases, achieved by searching a fairly wide range of parameter space. For many conditions still consistent with reasonable fireball model parameters, no observable features are produced.
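The mapping from rest-frame iron features to observed energies is simply $`E_{obs}=\mathrm{\Gamma }_bE_{rest}/(1+z)`$. A short illustration; the rest-frame energies adopted here ($`\sim 6.7`$ keV for the K$`\alpha `$ line of He-like iron and $`\sim 9.2`$ keV for the K edge) are assumptions, not values quoted above:

```python
# Observed energy of a blueshifted rest-frame feature: E_obs = Gamma_b * E / (1+z)
def observed_energy(E_rest_keV, Gamma_b, z=1.0):
    return Gamma_b * E_rest_keV / (1.0 + z)

for Gamma_b in (25, 50, 100):
    line = observed_energy(6.7, Gamma_b)    # Fe K-alpha, He-like (assumed)
    edge = observed_energy(9.2, Gamma_b)    # Fe K edge (assumed)
    print(f"Gamma_b = {Gamma_b:3d}: K line ~ {line:4.0f} keV, K edge ~ {edge:4.0f} keV")
```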
## 4. Conclusion
We have investigated the possibility that the 10 – 100 keV features reported in the spectra of some GRB arise from smearing of the reprocessed radiation from a metal-rich, dense photoionized plasma entrained in blobs or filaments in the relativistic outflow. By searching a relatively wide region of parameter space consistent with generic fireball models, we can reproduce the observed features for a limited set of values and elemental composition. The dips in the spectrum observed after relativistic and instrumental broadening are due to the complex ionization structure, and are a combination of emission lines and edges. In addition, the sum of the recombination spectrum plus continuum produces a break in the observed continuum at $`\sim 100`$ keV, similar to that seen by BATSE. With no additional broadening due to a range of $`\mathrm{\Gamma }_b`$, relativistic effects are not sufficient to smear out the features in the recombination spectrum entirely.
If such dense, entrained blobs do exist, the broadband spectral features can be used to constrain the range of fireball model parameters, as well as the composition of the entrained material. In particular, the presence of Iron in the blobs would produce features due to K shell emission in the 100 - 1000 keV band that could be used to measure the Lorentz factor of the ejecta. We emphasize that detailed comparison with existing observations is not possible due to the uncertainties in the instrument response, and the poor signal to noise of the detections. Small variations in the response function, as well as in the assumed continuum, can severely alter the characteristics of the observed features (i.e. whether they are interpreted as emission or absorption dips). The primary characteristics of an experiment capable of confirming and measuring these spectral features are a broad energy response (few keV – 1 MeV), a large area, a clean, well-determined response function, and moderate energy resolution.
The authors wish to thank Masao Sako for assistance with XSTAR, and William Goldstein for useful discussions on the atomic physics of photoionized plasmas.
# Brightness from the Blackest Night: Bursts of Gamma Rays and Gravity Waves from Black Hole Binaries
## 1. Introduction
Binaries containing a black hole, or single black holes, have been suggested for some time as good progenitors for gamma-ray bursts (Paczyński 1991, 1998, Mochkovitch et al. 1993, Woosley 1993, Fryer & Woosley 1998, MacFadyen & Woosley 1998). Reasons for this include the fact that the rest mass of a stellar mass black hole is comparable to what is required to energize the strongest GRB. Also, the horizon of a black hole provides a way of quickly removing most of the material present in the cataclysmic event that formed it. This may be important because of the baryon pollution problem: we need the ejecta that give rise to the GRB to be accelerated to a Lorentz factor of 100 or more, whereas the natural energy scale for any particle near a black hole is less than its mass. Consequently, we have a distillation problem of taking all the energy released and putting it into a small fraction of the total mass. The use of a Poynting flux from a black hole in a magnetic field (Blandford & Znajek 1977) does not require the presence of much mass, and uses the rotation energy of the black hole, so it provides naturally clean power.
In this paper, we discuss and combine a number of new developments in this area. First, the population synthesis calculations of Bethe & Brown (1998) provide good estimates of the formation rates of various suggested GRB progenitors. They stress the importance of black holes of relatively low mass ($`2.4\text{ }M_{\odot }`$). Binaries with one neutron star and one such black hole are ten times more common than NS-NS binaries, and thus contribute much more to GRB and gravity wave rates. Second, three of us have recently reviewed the Blandford-Znajek (1977) mechanism as a possible central engine for GRBs (Lee, Wijers, & Brown 1999). We confirm that the basic mechanism works effectively, addressing the criticism of many authors.
In section 2 we discuss the various possible progenitors of GRBs and their potential for generating the right energy on the right time scale. In section 3 we discuss the formation rate of each of these. Then we combine these pieces of information to obtain estimates of the detection rates of GRBs and LIGO-detectable mergers (section 4), and summarize our findings (section 5).
## 2. Stellar sources of gravity waves and gamma-ray bursts
When a black hole forms from a single star, as in the collapsar model of MacFadyen & Woosley (1998) and the hypernova scenario by Paczyński (1998) it is surrounded by a substantial stellar envelope, giving two potential GRB energy sources. First, accretion can release neutrinos in such large amounts that $`\nu \overline{\nu }`$ annihilation produces up to 10<sup>52</sup> erg in a pair fireball. Second, the very large rotation energy of the black hole can be extracted via the Blandford-Znajek mechanism if the surrounding matter carries a magnetic field.
Mergers of compact-object binaries are strong sources of gravity waves. The merger leaves a central compact object that is most likely a black hole, because it contains more than the maximum mass of a neutron star. Now little mass is left as surrounding debris, perhaps at most 0.1 $`M_{\odot }`$. Both accretion and rotation energy are available, but due to the small ambient mass the accretion energy is less likely to suffice for a strong GRB in this case.
### 2.1. The Blandford-Znajek mechanism
When a rapidly rotating black hole is immersed in a magnetic field, frame dragging twists the field lines near the hole, which causes a Poynting flux to be emitted from near the black hole. This is the Blandford-Znajek (1977) mechanism. The source of energy for the flux is the rotation of the black hole. The source of the field is the surrounding accretion disk or debris torus. We showed (Lee, Wijers, & Brown 1999) that at most 9% of the rest mass of a rotating black hole can be converted to a Poynting flux, making the available energy for powering a GRB
$`E_{\mathrm{BZ}}=1.6\times 10^{53}(M/M_{\odot })\mathrm{erg}.`$ (1)
The power depends on the applied magnetic field:
$`P_{\mathrm{BZ}}\simeq 6.7\times 10^{50}B_{15}^2(M/M_{\odot })^2\mathrm{erg}\,\mathrm{s}^{-1}`$ (2)
(where $`B_{15}=B/10^{15}`$ G). This shows that modest variations in the applied magnetic field may explain a wide range of GRB powers, and therefore of GRB durations. There has been some recent dispute in the literature whether this mechanism can indeed be efficient (Li 1999) and whether the power of the BH is ever significant relative to that from the disk (Livio, Ogilvie, & Pringle 1999). The answer in both cases is yes, as discussed by Lee, Wijers, & Brown (1999).
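As a quick numerical illustration (our own sketch, using standard values for the solar mass and speed of light), eqs. (1)-(2) applied to the $`2.4\text{ }M_{\odot }`$ black holes discussed below give an energy reservoir of $`4\times 10^{53}`$ erg and, for $`B_{15}=1`$, a characteristic duration of order $`10^2`$ s:

```python
# Numerical check of eqs. (1)-(2): 9% of the rest mass, and the B-field
# scaling of the power. Physical constants are standard values.
M_sun_g = 1.989e33          # solar mass [g]
c_cm_s  = 2.998e10          # speed of light [cm/s]

def E_BZ(M_over_Msun):
    """Maximum extractable rotation energy, eq. (1) (= 9% of M c^2)."""
    return 0.09 * M_over_Msun * M_sun_g * c_cm_s**2     # erg

def P_BZ(M_over_Msun, B15):
    """Blandford-Znajek power, eq. (2)."""
    return 6.7e50 * B15**2 * M_over_Msun**2             # erg/s

M = 2.4                                   # low-mass BH, in solar masses
print(f"E_BZ = {E_BZ(M):.2e} erg")        # ~3.9e53 erg (1.6e53 per Msun)
print(f"P_BZ = {P_BZ(M, 1.0):.2e} erg/s")
print(f"burst duration ~ {E_BZ(M) / P_BZ(M, 1.0):.0f} s at B = 1e15 G")
```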
The issue, therefore, in finding efficient GRB sources among black holes is to find those that spin rapidly. There are a variety of reasons why a black hole might have high angular momentum. It may have formed from a rapidly rotating star, so the angular momentum was there all along (‘original spin’, according to Blandford 1999); it may also have accreted angular momentum by interaction with a disk (‘venial spin’) or have formed by coalescence of a compact binary (‘mortal spin’). We shall review some of the specific situations that have been proposed in turn.
### 2.2. NS-NS and NS-BH binaries
Neutron star mergers are among the oldest proposed cosmological GRB sources (Eichler et al. 1989, Goodman, Dar, & Nussinov 1987, Paczyński 1986), and especially the neutrino flux is still actively studied as a GRB power source (see, e.g., Ruffert & Janka 1998). However, once the central mass has collapsed to a black hole it becomes a good source for BZ power, since it naturally spins rapidly due to inheritance of angular momentum from the binary (Rees & Mészáros 1992). Likewise BH-NS binaries (Lattimer & Schramm 1974) will rapidly transfer a large amount of mass once the NS fills its Roche lobe, giving a rapidly rotating BH (Kluzniak & Lee 1998). The NS remnant may then be tidally destroyed, leading to a compact torus around the BH. It is unlikely that this would be long-lived enough to produce the longer GRB, but perhaps the short ($`t\lesssim 1`$ s) ones could be produced (e.g., Fryer, Woosley & Hartmann 1999). However, mass transfer could stabilize and lead to a widening binary in which the NS lives until its mass drops to the minimum mass of about 0.1 $`M_{\odot }`$, and then becomes a debris torus (Portegies Zwart 1998). By then, it is far enough away that the resulting disk life time exceeds 1000 s, allowing even the longer GRB to be made. Thus BH-NS and NS-NS binaries are quite promising. They have the added advantage that their environment is naturally reasonably clean, since there is no stellar envelope, and much of the initially present baryonic material vanishes into the horizon.
### 2.3. Wolf-Rayet stars
The formation of a black hole directly out of a massive star has been considered for the production of GRB, either as hypernovae (Paczyński 1998), failed supernovae (Woosley 1993) or exploding WR stars (MacFadyen & Woosley 1999).
Another significant source of such events is the formation of a BH of about 7 $`M_{\odot }`$ in black hole transients, which is discussed by Brown, Lee, & Bethe (1999). These BHs form from a helium star, because spiral-in of the companion has stripped the primary of its envelope.
Both the above scenarios suffer from a problem found by Spruit & Phinney (1998) with rotation of neutron stars: magnetic fields grown by differential rotation in the star may efficiently couple the core and envelope, preventing the core from ever rotating rapidly. Then the black holes formed in the above two ways would not contain enough spin energy to power a GRB, leaving only the more limited $`\nu \overline{\nu }`$ energy.
A third variety of black hole in a WR star would come from BH-WR mergers (Fryer & Woosley 1998). These happen in the same kinds of systems that form BH-NS binaries as discussed above, in cases where the initial separation is smaller, so that spiral-in leads to complete merger rather than formation of a binary. In this case, the BH and WR star are both spun up during the spiral-in process (i.e., part of the orbital angular momentum of the binary becomes spin angular momentum). Then there is enough spin in the system to power a GRB via the Blandford-Znajek process.
## 3. Progenitor formation rates
In order to evaluate the birth rates of the various progenitors discussed above, we need to establish the evolutionary paths from initial binaries taken by each, and then compute the fraction of all ZAMS binaries that evolve into the desired system. Such a population synthesis calculation is often done with large Monte Carlo codes (e.g. Portegies Zwart & Yungelson 1998). Here we follow the treatment by Bethe and Brown (1998), because it is analytic and thus it is relatively transparent how the results depend on the initial assumptions. It is limited to systems in which at least one star is massive enough to produce a supernova. Their final numbers agree remarkably well with the Monte Carlo simulations by Portegies Zwart & Yungelson (1998), if the same assumptions about stellar evolution are used in both.
To normalize their rates, Bethe & Brown (1998) used a supernova rate of $`\alpha =`$0.02/yr per galaxy, and assumed that this equaled the birth rate of stars with mass greater than $`10\text{ }M_{\odot }`$. The birth rate of stars more massive than $`M`$ scales as $`M^{-n}`$. Therefore, the supernova rate in mass interval d$`M`$ is
$`\mathrm{d}\alpha =\alpha n\left({\displaystyle \frac{M}{10\text{ }M_{\odot }}}\right)^{-n}{\displaystyle \frac{\mathrm{d}M}{M}}.`$ (3)
In their analysis, Bethe & Brown use $`n=1.5`$. Half of all stars are taken to be close binary systems with separations, $`a`$, in the range $`0.04`$–$`4\times 10^{13}`$ cm. The distribution of binary separations within this range is taken to be flat in $`\mathrm{ln}a`$. The distribution of mass ratios, $`q`$, in binaries with massive primaries is uncertain, especially at small mass ratios, and we here follow Bethe & Brown by taking it to be flat in $`q`$. All these assumptions, as well as the details of the evolution scenarios, introduce some amount of uncertainty, but the good agreement between recent analytic and numerical work suggests that the formation rates we quote below can be trusted to a factor of a few. The results of the discussion on birth rates are summarized in table 1.
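Eq. (3) integrates to a simple closed form: the rate contributed by progenitors between $`M_1`$ and $`M_2`$ is $`\alpha [(M_1/10\text{ }M_{\odot })^{-n}-(M_2/10\text{ }M_{\odot })^{-n}]`$. The short check below (our own sketch) confirms, for example, that the 80–100 $`M_{\odot }`$ window used in Sect. 3.2 yields $`2.5\times 10^{-4}\ \mathrm{yr}^{-1}`$:

```python
# Closed-form integral of eq. (3): Galactic SN rate from progenitors
# in [M1, M2] (solar masses), with the Bethe & Brown normalization.
alpha, n = 0.02, 1.5    # SN rate per galaxy per yr; IMF slope

def rate(M1, M2):
    return alpha * ((M1 / 10.0)**(-n) - (M2 / 10.0)**(-n))

# The 80-100 Msun window quoted in Sect. 3.2 for direct massive-BH
# formation:
print(f"rate(80, 100) = {rate(80.0, 100.0):.1e} /yr")   # -> 2.5e-04 /yr
```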
### 3.1. NS-NS and NS-BH binaries
In the population synthesis of Bethe & Brown (1998), the formation rate of NS-NS binaries comes out to be $`10^{-5}`$ per year in the Galaxy, or 10 GEM (Galactic Events per Megayear). This rate is considerably lower than estimates from population synthesis calculations prior to Bethe & Brown (1998) and Portegies Zwart & Yungelson (1998), but in good agreement with the estimated merger rate from the observed neutron star binaries (Phinney 1991, Van den Heuvel & Lorimer 1996). The discrepancy between the older theoretical estimates and newer ones is due to a few factors: some earlier studies did not include kick velocities, and none included the destruction of neutron stars by hypercritical accretion. This last process is an important difference between the Bethe & Brown analysis and previous work: they argued that when a neutron star spirals into a red giant, it accretes matter at a very high rate of up to 1 $`M_{\odot }`$/yr. Then photons are trapped in the flow and the flow cools by neutrino emission, hence the Eddington limit does not apply. As a result, the neutron star accretes such a large amount of mass that it exceeds the maximum mass and turns into a low-mass (2–2.5 $`M_{\odot }`$) black hole. Since the spiral-in is an essential part of the usual scenario for forming binary neutron stars, the formation rate is cut down greatly. Only in those binaries in which the stars initially differ by less than 5% in mass does a binary neutron star form. This is because in those cases the evolutionary time scales of the two stars are so close that the initial secondary becomes a giant and engulfs the primary when the primary has not yet exploded as a supernova. Briefly, a close binary of two helium stars exists, and then both explode as supernovae, disrupting about half the systems.
An immediate consequence of this scenario is that the formation rate for binaries consisting of a neutron star and a low-mass black hole is an order of magnitude more, 100 GEM, because this is the fate of all the systems which in the absence of hypercritical accretion would have become binary neutron stars. The sum of the formation rates of NS-NS and NS-BH binaries in the Bethe-Brown scenario is therefore about equal to the NS-NS formation rate in older studies, providing all other assumptions are the same. The chief reason why such BH-NS binaries are not seen is the same as why we generally see only one neutron star of the pair in a NS-NS binary: the first-born neutron star gets recycled due to the accretion flow from its companion. If its magnetic field is reduced by a factor 100, as we observe, its visible lifetime is lengthened by that same factor 100, since it scales as the inverse of the field strength. The second-born pulsar is not recycled, hence only visible for a few million years and 100 times less likely to be seen. In BH-NS binaries, the neutron star is the second-born compact object, hence unrecycled and short-lived. With a ten times higher birth rate but 100 times shorter visible life, one expects to see ten times fewer of them, and thus the fact that none have yet been seen is understandable.
### 3.2. Wolf-Rayet stars
The rate at which the various progenitors involving WR stars discussed above (Sect. 2.3) are formed can be calculated easily from the Bethe & Brown (1998, 1999) model in the same way they calculated the merger rates.
Helium stars (WR stars) with a low-mass black hole (LBH) in them are formed from almost the same binaries that make LBH plus NS systems; the only difference is that they come from smaller initial orbits, in which the spiral-in does not succeed in ejecting the companion envelope and thus goes on to the center. From the total available range in orbital separations, $`0.04<a_{13}<4`$, LBH-NS binaries are only made when $`0.5<a_{13}<1.9`$ (where $`a_{13}`$ is the separation in units of $`10^{13}`$ cm). Below that window, for $`0.04<a_{13}<0.5`$, the LBH coalesces with the He core. Hence, using a separation distribution flat in $`\mathrm{ln}a`$, coalescences are more common than LBH-NS binaries by a factor $`\mathrm{ln}(0.5/0.04)/\mathrm{ln}(1.9/0.5)=1.9`$. In Bethe & Brown (1998) the He star compact object binary was disrupted $`50\%`$ of the time in the last explosion, which we do not have here. Thus, the rate of LBH, He-star mergers is $`3.8`$ times the formation rate of LBH-NS binaries which merge, or $`R=3.8\times 10^{-4}\mathrm{yr}^{-1}`$ in the Galaxy, i.e. 380 GEM.
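The bookkeeping behind the factor 3.8 can be checked in a few lines (our own sketch of the flat-in-$`\mathrm{ln}a`$ weights described above):

```python
import math

# Flat-in-ln(a) weights of the two separation windows, and the factor 2
# from SN disruption (which affects LBH-NS binaries but not mergers).
w_merge  = math.log(0.5 / 0.04)    # 0.04 < a13 < 0.5: LBH-He coalescence
w_binary = math.log(1.9 / 0.5)     # 0.5  < a13 < 1.9: LBH-NS binary

ratio = w_merge / w_binary
print(f"coalescences per LBH-NS binary formed : {ratio:.1f}")      # ~1.9
print(f"including x2 for SN survival of LBH-NS: {2 * ratio:.1f}")  # ~3.8
```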
Bethe & Brown (1999) found that single stars need to have a ZAMS mass of at least 80 $`M_{\odot }`$ to directly form a massive BH, based on evolution calculations by Woosley, Langer and Weaver (1993). It is now understood that their He-star mass loss rates were a factor of at least 2 too high. Calculations with lower mass loss rates carried out by Wellstein & Langer (1999) give somewhat higher He-star & CO core masses. The further evolution of the CO core has not been calculated yet, but may lower somewhat the Bethe & Brown mass limit for high-mass black-hole formation. Staying with the 80 $`M_{\odot }`$–100 $`M_{\odot }`$ range for this route, the rate is $`2.5\times 10^{-4}\mathrm{yr}^{-1}`$ in the Galaxy.
In addition, we consider the formation of massive black holes of about 7 $`M_{\odot }`$ that are seen in soft X-ray transients like A 0620-00. Their evolution was discussed by Brown, Lee, & Bethe (1999), who found a formation rate of $`9\times 10^{-5}\mathrm{yr}^{-1}`$ in the Galaxy.
## 4. Observable rates
### 4.1. Binary Mergers for LIGO
The combination of masses that will be well determined by LIGO is the chirp mass
$`M_{\mathrm{chirp}}=\mu ^{3/5}M^{2/5}=(M_1M_2)^{3/5}(M_1+M_2)^{-1/5}.`$ (4)
The chirp mass of a NS-NS binary, with both neutron stars of mass $`1.4\text{ }M_{\odot }`$, is 1.2 $`M_{\odot }`$. A birth rate of 10 GEM implies a rate of 3 yr<sup>-1</sup> out to 200 Mpc (Phinney 1991). Kip Thorne informs us that LIGO’s first long gravitational-wave search in 2002–2003 as discussed for binary neutron stars is expected to see binaries with $`M_{\mathrm{chirp}}=1.2\text{ }M_{\odot }`$ out to 21 Mpc.
The chirp mass corresponding to the Bethe & Brown (1998) LBH-NS binary with masses $`2.4\text{ }M_{\odot }`$ and $`1.4\text{ }M_{\odot }`$, respectively, is 1.6 $`M_{\odot }`$. Including a $`30\%`$ increase in the rate to allow for high-mass black-hole (HBH)-NS mergers (Bethe & Brown 1999) gives a 26 times higher rate than Phinney’s estimate for NS-NS mergers ($`10^{-5}`$ yr<sup>-1</sup> in the Galaxy). These factors follow from the signal-to-noise ratio, which goes as $`M_{\mathrm{chirp}}^{5/6}`$; cubing the corresponding distance gives the volume of detectability, which is therefore proportional to $`M_{\mathrm{chirp}}^{5/2}`$. We then predict a rate of $`3\times (21/200)^3\times 26=0.09`$ yr<sup>-1</sup>. This rate is slim for 2003. The enhanced LIGO interferometer planned to begin in 2004 should reach out beyond 150 Mpc for $`M_{\mathrm{chirp}}=1.2\text{ }M_{\odot }`$, increasing the detection rate to $`3\times (150/200)^3\times 26=33`$ yr<sup>-1</sup>, and the HBH-NS rate used in these estimates should be considered a lower limit (Sect. 3.2). We therefore find that inclusion of black holes in the estimates for LIGO predicts that we will see more mergers per month than NS-NS mergers per year.
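These numbers can be reproduced in a few lines; the sketch below is our own, and small differences from the quoted 0.09 and 33 yr<sup>-1</sup> reflect rounding of the enhancement factor:

```python
# Chirp masses (eq. 4) and the M_chirp^(5/2) detection-volume scaling.
def m_chirp(m1, m2):
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

mc_ns = m_chirp(1.4, 1.4)    # ~1.2 Msun (NS-NS)
mc_bh = m_chirp(2.4, 1.4)    # ~1.6 Msun (LBH-NS)

# 10x birth rate, x1.3 for HBH-NS, times the larger detection volume:
enhance = 10.0 * 1.3 * (mc_bh / mc_ns)**2.5          # ~26
print(f"enhancement over NS-NS: {enhance:.0f}")

# Scale the NS-NS baseline of 3/yr out to 200 Mpc to each horizon:
for D_Mpc, label in [(21.0, "2002-2003 LIGO"), (150.0, "enhanced LIGO")]:
    print(f"{label}: {3.0 * (D_Mpc / 200.0)**3 * enhance:.2f} /yr")
```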
### 4.2. Gamma-ray bursts
Because gamma-ray bursts have a median redshift of 1.5–2 (e.g. Wijers et al. 1998), and the supernova rate at that redshift was 10–20 times higher than now, the gamma-ray burst rate as observed is higher than one expects using the above rates. However, for ease of comparison with evolutionary scenarios we shall use the GRB rate at the present time (redshift 0) of about 0.1 GEM. (Wijers et al. (1998) found a factor 3 lower rate, but had slightly underestimated it because they overestimated the mean GRB redshift; see Fryer, Woosley, & Hartmann (1999) for more extensive discussions of the redshift dependence). An important uncertainty is the beaming of gamma-ray bursts: the gamma rays may only be emitted in narrow cones around the spin axis of the black hole, and therefore most GRBs may not be seen by us. An upper limit to the ratio of undetected to detected GRB is 600 (Mészáros, Rees, & Wijers 1999), so an upper limit to the total required formation rate would be 60 GEM. We may have seen beaming of about that factor or a bit less in GRB 990123 (Kulkarni et al. 1999), but other bursts (e.g. 970228, 970508) show no evidence of beaming in the afterglows (which may not exclude beaming of their gamma rays). At present, therefore, any progenitor with a formation rate of 10 GEM or more should be considered consistent with the observed GRB rate.
## 5. Conclusions
We have shown that rapidly rotating black holes are an attractive power source for gamma-ray bursts. Via the Blandford-Znajek mechanism (1977) they can supply sufficient energy at a high rate. They also occur often enough to explain the observed GRB rate, even if the gamma-ray emission of a typical GRB is beamed to less than 1% of the sky. Because of the requirement of rapid spin, the direct collapse of a stellar core to a black hole is a less likely candidate for making GRB (at least via the BZ effect). Therefore, mergers are much more attractive, which implies a natural connection between GRBs and strong sources of gravity waves. With advanced LIGO, the detection rate of mergers is predicted to become large enough that direct verification of events that produce both gravity wave and gamma-ray signals will become feasible, and will directly constrain GRB beaming. We have used the population synthesis calculations of Bethe & Brown (1998, 1999) to estimate the LIGO detection rate. We found it to be dominated by black-hole, neutron-star mergers, and to be higher by a factor 26 than previous estimates. As a result, we conclude that the most energetic phenomena in astrophysics stem from black holes, whose defining characteristic is paradoxically that no radiation can escape from them.
We would like to thank Roger Blandford, Chris Fryer, Sterl Phinney, Simon Portegies Zwart, Kip Thorne and Stan Woosley for useful suggestions and advice. This work was partially supported by the U.S. Department of Energy under Grant No. DE–FG02–88ER40388. HKL is also supported partly by KOSEF 985-0200-001-2.
# Impact of the bounds on Higgs mass and $`m_W`$ on effective theories
## 1 Introduction
Some standard model (SM) parameters have been measured with such high precision that it has become possible to constrain the values of other SM parameters, or even new physics, through the use of radiative corrections, as exemplified by the agreement between the predicted top mass and the observed value . Finding the Higgs boson remains the final step in confirming the theoretical scheme of the SM. The present lowest experimental bound on the Higgs mass is $`m_H>90.4`$ GeV ; this is a direct search limit. In contrast to the top quark case, radiative corrections are only logarithmically sensitive to the Higgs mass, so it is more difficult to obtain an indirect bound. However, fits to present data seem to favor a light SM Higgs . It is therefore interesting to ask how this conclusion would change if one goes beyond the SM.
The framework of effective Lagrangians, as a means of parametrizing physics beyond the SM in a model-independent manner, has been used extensively in recent years . Within this approach, the effective Lagrangian is constructed by assuming that the virtual effects of new physics modify the SM interactions, and these effects are parametrized by a series of higher-dimensional nonrenormalizable operators written in terms of the SM fields. The effective linear Lagrangian can be expanded as follows:
$$\mathcal{L}_{\mathrm{eff}}=\mathcal{L}_{\mathrm{SM}}+\sum _{i,n}\frac{\alpha _i}{\mathrm{\Lambda }^n}O_n^i$$
(1)
where $`\mathcal{L}_{\mathrm{SM}}`$ denotes the SM renormalizable Lagrangian. The terms $`O_n^i`$ are $`SU(3)\times SU(2)_L\times U(1)_Y`$ invariant operators. $`\mathrm{\Lambda }`$ is the scale at which new physics sets in. The parameters $`\alpha _i`$ are unknown in this framework, although “calculable” within a specific full theory . This fact was used in Ref. to show that, in a weakly coupled full theory, a hierarchy between operators arises by analyzing the order of perturbation theory at which each operator could be generated, e.g. by integrating out the heavy degrees of freedom. Some operators can be generated at tree level, and it is natural to assume that their coefficients will be suppressed only by products of coupling constants, whereas the ones that can be generated at the one-loop level, or higher, will also be suppressed by the typical $`1/16\pi ^2`$ loop factors. This allows us to focus on the most important effects the high-energy theories could induce, namely those coming from tree-level generated dimension-six operators.
In this letter we address two related questions. First, we study how the effective Lagrangian affects the determination of the W boson mass, and how this could affect the bounds on the Higgs mass. Second, we re-examine the effects on Higgs-vector boson production at hadron colliders, using the results obtained in the first part.
## 2 The SM $`W`$ mass
We shall use the results of Ref. , which parametrize the bulk of the radiative corrections to $`W`$ mass through the following expression:
$$m_W=m_W^o[1+d_1ln(\frac{M_h}{100})+d_2C_{em}+d_3C_{top}+d_4C_{as}+d_5ln^2(\frac{M_h}{100})],$$
(2)
where the coefficients $`d_i`$ are given in table 2 of Ref. ; they incorporate the full 1-loop effects and some dominant 2-loop corrections. The factors $`C_i`$ are given by:
$`C_{em}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }\alpha _h}{0.0280}}-1,`$
$`C_{top}`$ $`=`$ $`({\displaystyle \frac{m_t}{175\,\mathrm{GeV}}})^2-1,`$
$`C_{as}`$ $`=`$ $`{\displaystyle \frac{\alpha _s(M_Z)}{0.118}}-1,`$ (3)
and they measure the dependence on the fine structure constant, the top mass and the strong coupling constant, respectively. The reference $`W`$ mass, $`m_W^o=80.383`$ GeV, is obtained with the following values: $`m_t=175`$ GeV, $`\alpha _s=0.118`$, $`\mathrm{\Delta }\alpha =0.0280`$, and $`m_H=100`$ GeV. Using eqs. (2)-(3) with $`m_t=176\pm 2`$ GeV, $`\mathrm{\Delta }\alpha =0.0280`$ and $`\alpha _s=0.118`$, one finds a bound on the SM Higgs mass of $`170<m_H<330`$ GeV. This result agrees with several other studies which anticipate the existence of a light Higgs boson.
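To make the structure of eqs. (2)-(3) concrete, the sketch below evaluates the parametrization. The coefficients $`d_i`$ are given in table 2 of the cited reference and are not reproduced in this letter, so the values in the code are placeholders of roughly the right size and sign, inserted purely for illustration:

```python
import math

# Sketch of eqs. (2)-(3). WARNING: the d_i below are illustrative
# placeholders, NOT the values of the cited table.
d1, d2, d3, d4, d5 = -7.0e-4, -1.0e-2, 6.0e-3, -7.0e-4, 6.0e-5
mW0 = 80.383   # reference W mass [GeV]

def m_W(mH, dalpha=0.0280, mt=175.0, als=0.118):
    C_em  = dalpha / 0.0280 - 1.0
    C_top = (mt / 175.0)**2 - 1.0
    C_as  = als / 0.118 - 1.0
    L = math.log(mH / 100.0)
    return mW0 * (1.0 + d1*L + d2*C_em + d3*C_top + d4*C_as + d5*L*L)

for mH in (100.0, 330.0, 700.0):
    print(f"m_H = {mH:5.0f} GeV -> m_W = {m_W(mH):.3f} GeV")
```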
Including the effects of new physics modifies the value of the $`W`$ mass. This effect must be combined with the SM corrections above in order to determine to what extent new physics could change the bounds obtained for the Higgs mass.
## 3 Modification to the $`W`$ mass
The complete set of effective operators is large, but the analysis simplifies because loop-level dimension-six operators and tree-level dimension-eight ones generate subdominant effects with respect to tree-level generated dimension-six effective operators <sup>1</sup><sup>1</sup>1This is not the case when tree-level dimension six operators do not contribute .. The effective contributions to the input parameters in the formulas (2)-(3), besides $`m_W`$, can be shown to disappear after suitable redefinitions . Just two operators contribute:
$`O_\varphi ^{(1)}`$ $`=`$ $`(\varphi ^{\dagger }\varphi )(D^\mu \varphi ^{\dagger }D_\mu \varphi ),`$
$`O_\varphi ^{(3)}`$ $`=`$ $`(\varphi ^{\dagger }D_\mu \varphi )(D^\mu \varphi )^{\dagger }\varphi .`$ (4)
The notation is as usual: $`\varphi `$ denotes the Higgs doublet, and $`D_\mu `$ is the usual covariant derivative. An interesting characteristic of these operators is that only $`O_\varphi ^{(1)}`$ gives a contribution to $`m_W`$; although both operators modify the coefficients of the vertices $`HVV`$, they leave their Lorentz structure intact. This approach is equivalent to the one most commonly used in the literature, in which more operators are allowed but they are constrained one at a time, so that “unnatural” cancellations are not allowed.
We shall now include the contribution to $`m_W`$ arising from the effective operators of eq. (4) into eq. (2). The formula for the $`W`$ mass becomes:
$$m_W|_{\mathrm{eff}}=m_W|_{\mathrm{SM}}(1+\frac{1}{4}\alpha _\varphi ^{(1)}(\frac{v}{\mathrm{\Lambda }})^2)$$
(5)
where $`m_W|_{\mathrm{SM}}`$ corresponds to the $`W`$ mass as defined in eq. (2), and the term in the parentheses arises from the effective Lagrangian.
To study the inter-relations between the Higgs mass and $`\alpha _\varphi ^{(1)}`$ and $`\mathrm{\Lambda }`$, we must set the allowed $`m_Wm_H`$ region. We are going to use the future expected minimum uncertainty for the W mass: $`\mathrm{\Delta }m_W=\pm .01`$ GeV for a nominal central value of $`m_W=80.33`$ GeV, and vary $`m_H`$ between the recent experimental bound, $`m_H\simeq 90`$ GeV , and the perturbative limit, $`m_H\simeq 700`$ GeV (Fig. 1). We also take an optimal scenario for the top quark mass ($`m_t=176\pm 2`$ GeV). $`\alpha `$ and $`\mathrm{\Lambda }`$ should take values such that the resulting masses satisfy the above constraints. Since we assumed the effective Lagrangian is derived from a weakly coupled theory, it is reasonable to impose<sup>2</sup><sup>2</sup>2We will choose the $`\alpha _i`$ signs that give the maximum and minimum values for the quantities of interest. $`|\alpha _\varphi ^{(1)}|\le 1`$ , while $`\mathrm{\Lambda }`$ is set to values greater than or equal to 1 TeV. This scale is a conventional one; it can be justified in some specific high-energy models. It turns out that the case when $`|\alpha _\varphi ^{(1)}|=1`$ and $`\mathrm{\Lambda }=1`$ TeV, simultaneously, is already excluded by present data.
Figure 2 shows the level curves in the $`\alpha `$–$`\mathrm{\Lambda }`$ plane according to the bounds discussed above for $`m_W`$ and $`m_H`$. The allowed region of parameters corresponds to the area located to the right of curves A, B, C, D, which allows for an enlargement of the Higgs mass bound from $`170<m_H<330`$ GeV in the SM to $`90<m_H<700`$ GeV in the effective Lagrangian case. The curve A is obtained by taking $`m_H=700`$ GeV and $`m_W=80.32`$ GeV, while for the curve B we use $`m_H=90`$ GeV and $`m_W=80.34`$ GeV. The shadowed area between the curves (BD) marks the parameter region where no new physics effects can be disentangled from the SM uncertainties. These results were obtained by considering $`m_t=176`$ GeV; it is found that there are no substantial changes from adding the top mass uncertainty to the effective contributions. It is also found that the allowed ranges for $`\mathrm{\Lambda }`$ and $`\alpha _\varphi ^{(1)}`$ are as follows: for $`\mathrm{\Lambda }=1`$ TeV, $`.011<\alpha _\varphi ^{(1)}<.060`$ and $`.01010<\alpha _\varphi ^{(1)}<.01019`$. These correspond to Higgs masses between the perturbative limit and the upper SM bound in the first interval, and between the lower SM bound (obtained from radiative corrections to $`m_W`$) and the experimental limit in the second interval. In the complementary case, i.e. taking $`\alpha _\varphi ^{(1)}`$ equal to its maximum value, it is found that $`4.1<\mathrm{\Lambda }<9.5`$ TeV and $`4.64<\mathrm{\Lambda }<10`$ TeV, with the same considerations for the Higgs mass as in the first case. Then, independently of the value of $`\alpha _\varphi ^{(1)}`$, effects arising from a scale beyond 10 TeV cannot be disentangled from SM top uncertainties.
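The interplay between $`\alpha _\varphi ^{(1)}`$, $`\mathrm{\Lambda }`$ and the tolerated $`W`$-mass shift follows from inverting eq. (5). The sketch below is our own and assumes $`v=246`$ GeV (not stated explicitly above); it shows that at $`\mathrm{\Lambda }=1`$ TeV a 10 MeV shift requires $`\alpha _\varphi ^{(1)}`$ of order $`10^{-2}`$, while near 10 TeV even $`|\alpha _\varphi ^{(1)}|=1`$ barely suffices:

```python
# Inverting eq. (5): alpha = 4 * (dmW / mW) * (Lambda / v)^2.
# v = 246 GeV is our assumption (the Higgs vacuum expectation value).
v = 246.0   # GeV

def alpha_needed(dmW_GeV, Lambda_TeV, mW=80.33):
    return 4.0 * (dmW_GeV / mW) * (1.0e3 * Lambda_TeV / v)**2

for L in (1.0, 3.0, 10.0):
    print(f"Lambda = {L:4.1f} TeV -> alpha ~ {alpha_needed(0.010, L):.3f}")
# -> 0.008, 0.074, 0.823: beyond ~10 TeV even |alpha| = 1 cannot produce
#    a shift larger than the assumed 10 MeV uncertainty.
```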
## 4 Associated $`W(Z)`$ and $`H`$ production
In this paper we also re-examine the modifications to the mechanism of associated production of a Higgs boson with a vector particle ($`W,Z`$) due to the effective Lagrangian, updating the results obtained in Ref. . The corresponding Lagrangian to be used is:
$$\mathcal{L}_{HVV}=\frac{m_Z}{2}(1+f_1)HZ_\mu Z^\mu +gm_W(1+f_2)HW_\mu ^+W^{-\mu },$$
(6)
where the parameters $`f_i`$ are functions of $`ϵ_j=\alpha _j(\frac{v}{\mathrm{\Lambda }})^2`$, given as follows:
$$f_1=\frac{1}{2}(ϵ_\varphi ^{(1)}+ϵ_\varphi ^{(3)}),f_2=\frac{3}{4}(2ϵ_\varphi ^{(1)}-ϵ_\varphi ^{(3)}).$$
(7)
The ratio of the effective cross-section to the SM one for the processes $`p\overline{p}\to H+V`$ has been evaluated. The parton convolution part is factored out, and only the ratio of partonic cross-sections remains; thus the result is valid for both FNAL and LHC. The expressions for the cross-section ratios are:
$`R_{HW}`$ $`=`$ $`{\displaystyle \frac{\sigma _{\mathrm{eff}}(p\overline{p}\to H+W)}{\sigma _{\mathrm{SM}}(p\overline{p}\to H+W)}}`$ (8)
$`=`$ $`(1+f_2)^2`$
$`R_{HZ}`$ $`=`$ $`{\displaystyle \frac{\sigma _{\mathrm{eff}}(p\overline{p}\to H+Z)}{\sigma _{\mathrm{SM}}(p\overline{p}\to H+Z)}}`$ (9)
$`=`$ $`(1+f_1)^2`$
For the operators under consideration, the cross-section ratio is independent of the Higgs mass, and for the best values found above it turns out that the cross-section is only slightly modified. The behavior of the cross-section ratios is shown in fig. 3, for typical values of $`\alpha `$ and $`\mathrm{\Lambda }`$ as found in section 3. We take for $`\alpha _\varphi ^{(3)}`$ the same estimate obtained for $`\alpha _\varphi ^{(1)}`$.
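For concreteness, the sketch below evaluates eqs. (7)-(9) for sample parameter values; it is our own illustration, the relative sign inside $`f_2`$ is our reconstruction of the garbled source, and $`v=246`$ GeV is assumed:

```python
# Sketch of eqs. (7)-(9) for sample parameter values.
v = 246.0   # GeV, assumed

def hv_ratios(a1, a3, Lambda_GeV):
    e1 = a1 * (v / Lambda_GeV)**2
    e3 = a3 * (v / Lambda_GeV)**2
    f1 = 0.5 * (e1 + e3)
    f2 = 0.75 * (2.0 * e1 - e3)      # sign of the e3 term: our reconstruction
    return (1.0 + f2)**2, (1.0 + f1)**2

R_HW, R_HZ = hv_ratios(0.06, 0.06, 1000.0)   # largest alpha at Lambda = 1 TeV
print(f"R_HW = {R_HW:.4f}, R_HZ = {R_HZ:.4f}")
# Deviations from unity of a few times 10^-3, as stated in the text.
```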
As can be seen from figure 3, these two processes turn out to be almost insensitive to new physics effects arising from the dimension-six operators $`O_\varphi ^{(1,3)}`$, since their effect is of order $`10^{-3}`$. The corresponding contributions that arise from the effective operators that were neglected are, at least, 2 orders of magnitude below the ones considered here. Of course, a more detailed study is needed in order to include the modifications to the expected number of events for discovery as a function of $`m_H`$, and that case is under study.
## 5 Conclusions
We have studied the modifications that new physics implies for the bound on the Higgs mass that is obtained from radiative corrections to electroweak observables, within the context of effective Lagrangians. We found that the SM bound $`170\le m_H\le 330`$ GeV that is obtained from a precise determination of the $`W`$ mass can be substantially modified by the presence of dimension-6 operators that arise in the linear realization of the effective Lagrangian approach. A Higgs mass as heavy as 700 GeV is allowed for scales of new physics of the order of 1 TeV, with a corresponding value of $`|\alpha _\varphi ^{(1)}|`$ of the order of $`10^{-2}`$. We also found that even for $`|\alpha _\varphi ^{(1)}|=1`$, new physics effects arising from scales $`\mathrm{\Lambda }>10`$ TeV cannot be separated from the uncertainties on the top quark mass, in an optimal scenario for the observables considered here. These results set the decoupling limits for both $`\alpha _i`$ and $`\mathrm{\Lambda }`$. Accordingly, it is found that such operators do not produce a significant modification for the present (FNAL) or future (LHC) studies of the associated production mechanism $`p\overline{p}\to H+V`$.
We acknowledge financial support from CONACYT and SNI (MEXICO). We also thank M.A. Pérez for discussions.
# Acknowledgements
The work of A.B. was partially supported by RFBR under the grant No 99-02-16122. The work of A.K. was partially supported by RFBR under the grant 99-02-18409 and under the grant for support of leading scientific schools 96-15-96458. A.B. and A.K. kindly acknowledge financial support by the DFG grants 436 RUS 113/333/4 during their visit to the University of Freiburg in autumn 1998.
# Tunneling Spectroscopy of Tl2Ba2CuO6
## I INTRODUCTION
Tunneling spectroscopy has revealed the complex characteristics of high-$`T_c`$ superconductors (HTS’s). Tunneling spectra on Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (Bi-2212), Bi<sub>2</sub>Sr<sub>2</sub>CuO<sub>x</sub> and HgBa<sub>2</sub>CuO<sub>4</sub> (Hg-1201) have shown both symmetric and asymmetric tunneling conductance peaks, and variable subgap features that range from sharp cusp-like to flat, BCS-like. Additionally, tunneling experiments on YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> (YBCO) and Bi-2212 in certain crystal orientations have also shown the presence of zero-bias peaks in the conductance data. In Bi-2212 there exists a prominent dip feature (at $`eV\approx 2\mathrm{\Delta }`$) that is asymmetric with bias voltage, being much stronger for a polarity that corresponds to removal of quasiparticles from the superconductor. These unusual observations have made it difficult to properly analyze the results of tunneling experiments and have complicated the deduction of important properties of HTS’s such as the pairing symmetry.
There is an emerging consensus that the predominant pairing symmetry in hole-doped HTS cuprates is $`d_{x^2-y^2}`$ ($`d`$-wave). Evidence from tricrystal ring experiments points to pure $`d`$-wave for YBCO. Grain boundary and scanning tunneling microscopy (STM) junctions indicate a small $`s`$-wave contribution to the $`d`$-wave symmetry on YBCO, which was attributed to the orthorhombicity of YBCO. Tunneling and penetration depth measurements of electron-doped Nd<sub>2-x</sub>Ce<sub>x</sub>CuO<sub>4</sub> are compatible with s-wave symmetry. Another well-studied, hole-doped HTS is Bi-2212, because of the availability of high quality single crystals and the ability to easily cleave this crystal along the a-b plane. Results from angle resolved photoemission spectroscopy (ARPES) indicate an anisotropic gap with a minimum in the ($`\pi `$,$`\pi `$) direction that is consistent with $`d`$-wave symmetry. Furthermore, results from ARPES and STM have exhibited spectral features that are also observed in PCT, such as the quasiparticle peak, dip, and hump. They have also shown the puzzling feature of an increasing energy gap size with decreasing doping concentration in Bi-2212. DeWilde et al. have shown that the study of Bi-2212 using three different techniques (PCT, break-junction, and STM) can produce very similar results as far as gap size, dip structure, and subgap shape are concerned, even when the resistance of the STM junction was of the order of G$`\mathrm{\Omega }`$ while it was of the order of 1 k$`\mathrm{\Omega }`$–100 k$`\mathrm{\Omega }`$ for PCT and break junctions. However, point contact tunneling (PCT) results also occasionally show a flat subgap structure which is not easily reconciled with $`d`$-wave symmetry.
Quasiparticle tunneling has failed to definitively reveal the pairing symmetry in Hg-1201. PCT on polycrystalline samples of this HTS seems to show a density of states (DOS) that is flat near zero bias, consistent with $`s`$-wave symmetry, whereas Wei et al. claim that STM measurements on the same HTS seem to be consistent with a $`d`$-wave gap symmetry. While it has been shown that tunneling directionality effects can produce a flat subgap conductance with a $`d`$-wave gap, there is no obvious physical mechanism for preferred tunneling directions. It is therefore more likely that the sporadic observations of flat subgap conductances in HTS simply add fuel to the debate over pairing symmetry.
Experimental evidence of $`d`$-wave pairing symmetry in Tl-2201 is more convincing. Results from tricrystal ring experiments indicate pure $`d`$-wave pairing, although an admixture of $`d`$\- and $`s`$-wave pairing is also interpreted from in-plane torque anisotropy experiments. We have earlier reported the tunneling studies of optimally-doped Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6</sub> crystals (Tl-2201) with $`T_c=91`$ K which clearly and reproducibly showed a tunneling DOS that is consistent with a momentum-averaged $`d`$-wave gap symmetry. In that report (Ref. ), our analysis of the superconductor-insulator-normal metal (SIN) tunneling conductance was somewhat primitive, utilizing a simple model for the $`d`$-wave DOS. In this report, we present additional tunneling data on Tl-2201 crystals with $`T_c=86`$ K that have been synthesized using a different technique than the one described in Ref. . The locations of the quasiparticle peaks in the SIN conductance data are consistent with the ones in our earlier report, and all of the data again display the cusp feature at zero bias. However, here we present a more exhaustive treatment of many junctions, with a wide range of junction conductance ($`\sim `$0.1 mS–2 mS). We have also performed a more rigorous analysis of the SIN conductance data using two different models for the tunneling DOS with a $`d`$-wave gap. We again find good agreement with $`d`$-wave symmetry.
Some of the SIN data display a weak dip feature at $`eV\approx 2\mathrm{\Delta }`$. We have generated superconductor-insulator-superconductor (SIS) conductance curves using the SIN data. The resulting SIS curves display the characteristic dip features at nearly $`3\mathrm{\Delta }`$ that are consistent with those observed in the SIS tunneling conductance of Bi-2212. This is the first study to clearly indicate that the dip feature is present in the SIN conductance data of Tl-2201, but with a smaller magnitude than observed in Bi-2212. Further comparison with tunneling data of Bi-2212 reveals that while the bulk $`T_c`$ of Bi-2212 and Tl-2201 are approximately the same, the magnitude of the typical energy gap of Tl-2201 is smaller. The origin of this discrepancy is still unknown at present, but some insight has been gained with this study. We note first that the largest gaps found for the Tl-2201 ($`\mathrm{\Delta }`$=25 meV) are close to those of Bi-2212 when both have the same T<sub>c</sub>=86 K. Furthermore, due to the strong dependence of the gap magnitude on doping concentration in Bi-2212, we suggest that the smaller gap values in Tl-2201 may be due to a surface that is slightly overdoped.
## II EXPERIMENTAL PROCEDURE AND RESULTS
Tl-2201 has a tetragonal crystal structure with a single CuO<sub>2</sub> layer per unit cell, which is relatively simple when compared to the bilayer and trilayer high-$`T_c`$ superconductors. However, the Tl<sub>2</sub>Ba<sub>2</sub>Ca<sub>n-1</sub>Cu<sub>n</sub>O<sub>2n+4</sub> (n=1, 2, and 3) family is very sensitive to thallium and oxygen content, which influences the structure and superconducting properties. The optimally-doped compound of Tl-2201 has a $`T_c`$ of approximately 91 K, and this value can be reduced to zero on the overdoped side by oxygen annealing.
The Tl-2201 single crystals were grown from a flux in an alumina crucible with an alumina lid, sealed to avoid loss of thallium oxide. Tl<sub>2</sub>O<sub>3</sub>, BaO<sub>2</sub> and CuO powders were mixed at the atomic ratio of Tl:Ba:Cu=2.2:2:2 using excess Tl<sub>2</sub>O<sub>3</sub> and CuO as the flux. The crucibles, containing about 50 g of charge, were loaded in a vertical tube furnace and heated rapidly to 925-950<sup>o</sup>C. This temperature was held for 1/2 hour. The furnace was then cooled at 5 <sup>o</sup>C/h to 875<sup>o</sup>C, and finally cooled to room temperature. The crystals were platelet-shaped, with a basal plane area of about 1 mm<sup>2</sup> and a thickness along the $`c`$-axis varying between 20-100$`\mu `$m. The critical temperature of the samples is determined by ac magnetization measurements.
The experimental setup of our PCT system is designed for data collection over a large range of sample temperature. In addition to this feature, tunneling measurements can also be performed in high magnetic fields, up to 6 T. The details of the measurement system can be found elsewhere. Cleaved single crystal samples of Tl-2201 usually have shiny surfaces in the a-b plane. Each is mounted on a substrate using an epoxy so that the tip approaches nominally along the $`c`$-axis. The electrical leads are connected to two sides of the sample by using silver paint. Non-superconducting Au is used as a counter-electrode. It is mechanically and chemically cleaned before each run.
While the differential micrometer driven tip approaches the sample, the $`I(V)`$ signal is continuously monitored on an oscilloscope until an acceptable tunnel junction is obtained, i.e. one which displays an obvious superconducting gap feature. All tunnel junctions are initially formed at 4.2 K to prevent any sample surface deterioration. First derivative measurements, $`\sigma =dI/dV`$, were obtained using a Kelvin bridge circuit with the usual lock-in procedure. $`I(V)`$ and $`dI/dV`$ are simultaneously plotted on a chart and digitally recorded on a computer. DeWilde et al. have shown that tunneling results on Bi-2212 using PCT can produce results that are consistent with those obtained using STM.
In contrast to other surface sensitive experimental methods such as STM, ARPES, Raman, and auger techniques, the advantage of the point contact method for cuprates is that the tip can be used to scrape, clean, and in some cases cleave the surface. The tip often can penetrate through the surface and reach the bulk of the crystal. The cleaving of the surface sometimes results in the formation of SIS junctions, as in the case of Bi-2212. This happens when a piece of the HTS crystal attaches itself to the tip, forming an ohmic contact. As the tip is retracted, the piece forms an SIS break junction with the bulk crystal. Unlike Bi-2212, Tl-2201 has stronger bonds between planes and consequently SIS junctions could not be formed this way.
Figure 1 shows the conductances of eight junctions on three different Tl-2201 crystals, each with a bulk $`T_c`$ near 86 K. These junctions are representative of a larger set of data and they demonstrate several characteristics that are typical for PCT tunneling in Tl-2201. Each junction exhibits a single energy gap feature with conductance peaks at $`|V|=20`$–$`25`$ mV. The voltage is that of the sample with respect to the tip, and thus negative bias corresponds to removal of electrons from the superconductor. There is a characteristic asymmetry in the conductance peaks such that the negative bias peak is higher than the one at positive bias. This type of asymmetry has also been seen in PCT and STM studies of Bi-2212, most consistently in overdoped samples. It has been pointed out that this conductance peak asymmetry may be a signature of the $`d`$-wave pairing.
The background conductances for $`|eV|>\mathrm{\Delta }`$ are generally weakly decreasing with bias, similar to that in Bi-2212, and it is these types of junctions that exhibit the largest peak height to background (PHB) ratio. A few junctions show a flat or slightly increasing background with a smaller PHB ratio. This implies that the decreasing background is an intrinsic property of the quasiparticle DOS. While such a feature may be due to the underlying band structure DOS, we note the absence of any van Hove singularity (VHS) in these data as well as earlier PCT data on Tl-2201. All of the junctions exhibit a cusp-like feature at zero bias which is characteristic of a $`d`$-wave DOS.
## III THEORETICAL MODEL
The tunneling data are analyzed with two different methods. For Model I, the superconducting data are first normalized by constructing a “normal state” conductance obtained by fitting the high bias data to a third order polynomial. The normalized conductance data are compared to a weighted momentum-averaged $`d`$-wave DOS,
$`N(E)={\displaystyle \int f(\theta )\frac{E-i\mathrm{\Gamma }}{\sqrt{(E-i\mathrm{\Gamma })^2-\mathrm{\Delta }(\theta )^2}}d\theta }.`$
Here $`\mathrm{\Gamma }`$ is a lifetime broadening factor, $`f(\theta )`$ is an angular weighting function, and $`\mathrm{\Delta }(\theta )=\mathrm{\Delta }_o\mathrm{cos}(2\theta )`$ represents the $`d`$-wave gap symmetry expected from a mean-field BCS-type interaction. This model is used because it allows for a quick estimation of the gap value and, as we will show, gives an excellent fit to the data. The inclusion of the weighting function allows for a better fit to the experimental data in the gap region than the non-weighted average used previously. Here, a weighting function $`f(\theta )=1+0.4cos(4\theta )`$ was used, which imposes a preferential angular selection of the DOS along the absolute maximum of the $`d`$-wave gap and tapers off towards the nodes of the gap. This is a rather weak directional function since the minimum of $`f(\theta )`$ along the nodes of the $`d`$-wave gap is still non-negligible.
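A direct numerical evaluation of this weighted average is straightforward. The sketch below is our own illustration with representative (not fitted) parameter values; it reproduces the qualitative features discussed later, namely the cusp at zero bias and the peaks at $`E=\mathrm{\Delta }_o`$:

```python
import numpy as np

# Model I DOS: weighted angular average of the lifetime-broadened d-wave
# form. Parameter values are illustrative, not fitted.
def dos_model1(E_meV, Delta0=22.0, Gamma=1.0, w=0.4, ntheta=2000):
    theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    f = 1.0 + w * np.cos(4.0 * theta)            # weighting function
    gap = Delta0 * np.cos(2.0 * theta)           # d-wave gap
    Ec = E_meV + 1j * Gamma                      # E + iGamma: numpy's principal
    g = np.abs(np.real(Ec / np.sqrt(Ec**2 - gap**2)))  # branch then gives N >= 0
    return (f * g).mean() / f.mean()             # -> 1 for |E| >> Delta0

for E in (0.0, 5.0, 11.0, 22.0, 40.0):
    print(f"N({E:4.1f} meV) = {dos_model1(E):.3f}")
```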
The second method (Model II) makes no attempt to normalize out any background conductance. Rather, an attempt is made to fit the entire spectrum by including a band structure, tunneling matrix element, and $`d`$-wave gap symmetry. The tunneling DOS is calculated using the single particle Green’s function,
$$N(E)=-\frac{1}{\pi }\mathrm{Im}\sum _𝐤|T_𝐤|^2G(𝐤,E)$$
(1)
For the superconducting state,
$`G(𝐤,E)={\displaystyle \frac{u_k^2}{E-E_k+i\mathrm{\Gamma }}}+{\displaystyle \frac{v_k^2}{E+E_k+i\mathrm{\Gamma }}}`$
where $`u_k^2`$ and $`v_k^2`$ are the usual coherence factors, $`\mathrm{\Gamma }`$ is the quasiparticle lifetime broadening factor, and $`E_k=\sqrt{|\mathrm{\Delta }(𝐤)|^2+\xi _𝐤^2}`$ with the gap function for d-wave symmetry $`\mathrm{\Delta }(𝐤)=\mathrm{\Delta }_o[\mathrm{cos}(k_xa)-\mathrm{cos}(k_ya)]/2`$ . The tunneling matrix element $`\left|T_𝐤\right|^2`$ is written as
$`\left|T_𝐤\right|^2=v_gD(𝐤)`$
where $`v_g`$ is the group velocity defined as $`v_g=\left|\nabla _𝐤\xi _𝐤\cdot 𝐧\right|`$ and D(k) is the directionality function that has the form
$$D(𝐤)=\mathrm{exp}\left[-\frac{k^2-(𝐤\cdot 𝐧)^2}{(𝐤\cdot 𝐧)^2\theta _o^2}\right]$$
(2)
Here the unit vector n defines the tunneling direction, which is perpendicular to the plane of the junction, whereas $`\theta _o`$ corresponds to the angular spread in k-space of the quasiparticle momenta, with respect to n, that have a non-negligible tunneling probability.
The band structure for the CuO<sub>2</sub> plane extracted from ARPES measurements on Bi-2212 is used. Presumably, other than the exact value of the chemical potential, the band structure for Tl-2201 should have the same generic features as the extracted band structure from Bi-2212. Unlike a similar analysis done in our earlier report, the presence of the VHS is not artificially removed by using a large chemical potential. Rather, the presence of the VHS is effectively diminished by the group velocity factor from the tunneling matrix element. Here, the value of the chemical potential has been altered slightly to produce the best comparison to the experimental data. The tunneling DOS from this model is compared directly to the experimental conductance data using a constant scaling factor. An interesting aspect of this second model is the robust asymmetry of the quasiparticle peaks in the tunneling DOS. This asymmetry, which has the higher peak in the filled states, is a direct consequence of the $`d`$-wave gap symmetry and directionality in the model. As will be shown, this result is consistent with our experimental tunneling data.
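A compact numerical sketch of this construction is given below. The $`t`$–$`t^{}`$ band parameters are generic stand-ins for the ARPES-derived band structure (our assumption), and for brevity the directionality factor $`D(𝐤)`$ is set to unity, keeping only the group-velocity weight of the matrix element:

```python
import numpy as np

# Sketch of the Model II tunneling DOS, eq. (1), on a square lattice.
# The t-t' band parameters are illustrative stand-ins for the
# ARPES-derived band of the text; D(k) is set to 1 here for brevity.
t, tp, mu = 0.15, -0.045, -0.1585      # eV (|mu| as quoted in the caption)
Delta0, Gamma = 0.022, 0.002           # eV

n = 400
k = np.linspace(-np.pi, np.pi, n, endpoint=False)
KX, KY = np.meshgrid(k, k)

xi  = -2*t*(np.cos(KX) + np.cos(KY)) - 4*tp*np.cos(KX)*np.cos(KY) - mu
gap = 0.5 * Delta0 * (np.cos(KX) - np.cos(KY))
Ek  = np.sqrt(xi**2 + gap**2) + 1e-12
u2  = 0.5 * (1.0 + xi / Ek)            # coherence factors
v2  = 1.0 - u2
vg  = np.hypot(2*t*np.sin(KX) + 4*tp*np.sin(KX)*np.cos(KY),
               2*t*np.sin(KY) + 4*tp*np.cos(KX)*np.sin(KY))

def dos(E):
    """N(E) = -(1/pi) Im sum_k |T_k|^2 G(k,E), arbitrary units."""
    lor_m = Gamma / ((E - Ek)**2 + Gamma**2)
    lor_p = Gamma / ((E + Ek)**2 + Gamma**2)
    return float((vg * (u2 * lor_m + v2 * lor_p)).sum()) / (np.pi * n * n)

for E in (-0.030, -0.022, 0.0, 0.022, 0.030):
    print(f"N({1e3*E:+5.0f} meV) = {dos(E):.4f}")
```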
## IV ANALYSIS AND DISCUSSION
Figures 2 and 3 present two representative SIN tunneling conductances of Tl-2201. Figure 2 shows Junction E of Fig. 1, while Figure 3 is an additional conductance curve (Junction J) not shown in Fig. 1 that has a very high peak height to background ratio. As illustrated in Figs. 2(a) and 3(a), the SIN conductance data consistently display the sharp, cusp-like subgap feature, weakly decreasing background, and conductance peaks that are either weakly or strongly asymmetric, with the higher peak on the negative bias side. The presence of these features and the overall shape of the conductance data are very similar to the conductance data of Fig. 1 in our earlier report.
To compare the two data sets to Model I, the SIN conductance data are first normalized by dividing through with an extrapolated normal state conductance curve which is shown as the solid line in Fig. 2(a) and 3(a). The normalized conductances are then compared to the DOS obtained from Model I as shown in Fig. 2(b) and 3(b). Other than a remaining conductance peak asymmetry and somewhat broader experimental peaks, the model DOS shows a remarkably good overall fit in the gap region with the experimental DOS. Notice that while the process of normalization has reduced the degree of asymmetry of the conductance peaks in both data sets, it has not eliminated it. This proves that the asymmetry is not a consequence of the background.
The comparison of Model II with the unnormalized tunneling data is shown in Fig. 2(c) and 3(c). The most striking observation is the model’s ability to reproduce the peak asymmetry that is seen in the data. As was shown in Ref. , this type of asymmetry with the higher peak in the filled states is a robust property of $`d`$-wave gap symmetry and directional tunneling. The strength of the directionality is defined by $`\theta _o`$, and the values used to fit both data sets here are considerably larger than the ones used to analyze the Bi-2212 data. This implies that these two data sets are best fit with a weak directional tunneling process, which is consistent with the type of weighting function $`f(\theta )`$ in Model I.
As in Bi-2212, Model II could not accurately reproduce the background conductance, although it does show the generic decreasing background seen in both data sets. This may also be due to the fact that we are not using the exact normal state band structure for Tl-2201 in the model. Model II also produces a poorer agreement with the subgap data, which might be due to the particular choice of directionality function used. Note that the values of the energy gap from both models are very close to each other, with Model I having slightly lower gap values than Model II.
We would like to point out that attempts at comparing the normalized experimental data with just a pure $`d`$-wave gap symmetry, without the $`f(\theta )`$ weighting function of Model I, led to a poorer fit to the data. Considering this, we reanalyze some of the SIN tunneling data from our previous report (Junctions B and C in Fig. 4 of Ref. , relabeled here as Junctions B’ and C’, respectively). We have restricted this analysis to Model I. Figure 4(a) shows the raw SIN conductance data of Junction B’ and the estimated normal state conductance used to obtain the normalized data. This normalized curve is shown in Fig. 4(b) along with the comparison to Model I. The model produces a better fit in the gap region (with an identical gap value) when compared to our earlier fit. This procedure is repeated for Junction C’ as shown in Fig. 4(c) (which has been normalized by a constant). In this case, the overall fit is only slightly improved over the one we reported earlier, with an identical gap value.
One of the distinct features of the tunneling DOS in Bi-2212 is the strong dip beyond the quasiparticle peak in the occupied states. This feature is clearly seen in SIN conductance data of Bi-2212 from both STM and PCT. Furthermore, this dip feature is enhanced in the superconductor-insulator-superconductor (SIS) junction. This is apparent from the break junction tunneling data in Ref. . Our SIN conductance data of Tl-2201 from this work and our previous report do not show the same dip features as distinctly, although there is evidence of a weak dip feature in the normalized data of Figs. 2 and 4 as well as junctions C and G in Fig. 1. We explore this issue further by generating SIS conductance curves from the raw data of Junction E and Junction B’, which should enhance any dip feature that may exist in these SIN data. As shown in Fig. 5, both conductance data sets generate SIS curves that are qualitatively similar to the experimental SIS tunneling data of Bi-2212. Both curves clearly display the prominent dip features located at slightly less than 3$`\mathrm{\Delta }`$. This indicates that the dip feature is also present in the SIN conductance data of Tl-2201, but at a smaller amplitude than the ones observed in the SIN data of Bi-2212.
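The SIS construction itself is standard: the SIS current is the self-convolution of the SIN DOS, $`I(V)\propto \int N(E)N(E+eV)[f(E)-f(E+eV)]dE`$, differentiated numerically. We assume this textbook formula is what underlies Fig. 5; the sketch below is our own and uses a model DOS in place of the measured data:

```python
import numpy as np

# SIS conductance from an SIN DOS by self-convolution; the DOS here is a
# simple broadened d-wave model standing in for measured SIN data.
kT = 8.617e-5 * 4.2                   # k_B T at 4.2 K [eV]
D0, Gam = 0.022, 0.0015               # gap and broadening [eV], illustrative
th = np.linspace(0.0, 2.0*np.pi, 720, endpoint=False)

def n_sin(E_eV):                      # angle-averaged d-wave DOS
    Ec = E_eV + 1j * Gam
    return np.abs(np.real(Ec / np.sqrt(Ec**2 - (D0*np.cos(2*th))**2))).mean()

def fermi(E):
    return 1.0 / (1.0 + np.exp(np.clip(E / kT, -60.0, 60.0)))

E = np.linspace(-0.15, 0.15, 1501)
dE = E[1] - E[0]
N = np.array([n_sin(e) for e in E])

def I_sis(V):
    Nsh = np.interp(E + V, E, N, left=1.0, right=1.0)
    return float((N * Nsh * (fermi(E) - fermi(E + V))).sum()) * dE

def G_sis(V, dV=5e-4):                # numerical dI/dV
    return (I_sis(V + dV) - I_sis(V - dV)) / (2.0 * dV)

for V in (0.030, 0.044, 0.060):       # SIS peaks expected near 2*D0 = 44 mV
    print(f"G({1e3*V:3.0f} mV) = {G_sis(V):.3f}")
```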
Another significant difference between the tunneling data of Bi-2212 and Tl-2201 is the magnitude of the superconducting gap, and this might be related to the dip feature discussed above. The optimally-doped Bi-2212, which has a $`T_c`$ of 93-95 K, has an energy gap in the range of 37-38 meV. Due to the high reproducibility of the gap value for Bi-2212, it is presumed that the gap value for Bi-2212 is consistent with its $`T_c`$. Tl-2201, which has a bulk $`T_c`$ of 86 K in this study and 91 K in the previous study, has an energy gap in the range of 19-25 meV. This value is considerably less than the energy gap of Bi-2212, even though the bulk $`T_c`$ for both cuprates are roughly the same. This discrepancy raises an important question in HTS cuprates, namely the relationship between the gap size $`\mathrm{\Delta }(T=0)`$ and $`T_c`$. The unusual $`\mathrm{\Delta }`$ versus doping in Bi-2212, which violates mean-field theory, strongly suggests that $`T_c`$ is a phase coherence temperature. In this picture, there are strong superconducting fluctuations above $`T_c`$, and presumably the ability of each HTS to support such fluctuations depends on structural parameters, anisotropy and the degree of 2-dimensionality. It is thus possible that there is no universal relationship between $`\mathrm{\Delta }`$ and $`T_c`$ for all HTS. If there exists a universal relationship between these two parameters for HTS cuprates, as is approximately the case for conventional superconductors, then the difference in the gap size between these two HTS’s needs another explanation. It is possible, due to the strength of the interplane bonding, that the tunneling measurement is probing predominantly the surface of the Tl-2201 crystals, which has been exposed to air and may have properties different from the bulk. This raises the possibility that the surface of Tl-2201 may be slightly overdoped, which results in a smaller gap size. When Bi-2212 is annealed in air it has a $`T_c\approx 86`$ K and is slightly overdoped. This is the equilibrium oxygen doping level at atmospheric conditions, and a similar situation is found for Tl-2201. Air-annealed Tl-2201 has a $`T_c\approx 82`$ K. Therefore, air-exposed Tl-2201 will have a tendency for the surface to be somewhat overdoped by coming to equilibrium with atmospheric conditions. We are then suggesting that when the sample is cooled down to 4.2 K, there are no changes in the surface concentration. Of course we have no proof of this. If there are changes in the surface concentrations upon cooling in vacuum, then these changes are highly reproducible, because both Bi-2212 and Tl-2201 display highly reproducible spectra and gap values.
Furthermore, the strength of the dip feature also seems to indicate that the surface is slightly overdoped. In Bi-2212, tunneling conductances with gap sizes of 35-40 meV show a dip strength of approximately 80% of the background conductance, while for smaller gaps in the range of 15-20 meV (which come from overdoped Bi-2212) the dip strength is approximately 10%. This is consistent with what is observed in Fig. 5 for Tl-2201 and seems to support our argument that the surface of the Tl-2201 crystals we measured is slightly overdoped. This, however, is still speculation, and further detailed study is required to account for the apparent gap-size discrepancy. We note that preliminary temperature-dependent data indicate that junctions which exhibit small gaps ($``$ 20 meV) also show a strong smearing out of the gap feature at a temperature below the bulk $`T_c`$.
To summarize, we have performed SIN tunneling junction measurements on single crystals of Tl-2201 with a bulk $`T_c`$ of 86 K. The conductance data reproducibly show cusp-like subgap features, asymmetric conductance peaks and weakly decreasing backgrounds. These observations are consistent with our earlier report on Tl-2201 with a $`T_c`$ of 91 K, which was synthesized in a different manner. The present data are fit reasonably well by two different models using $`d`$-wave gap symmetry. The need for a weighting function in Model I, and the prominent asymmetry of the data that is reproduced in Model II, indicate that the tunneling process in these cases may have a weak preferential direction centered at or near the absolute maximum of the $`d`$-wave gap. The magnitude of the superconducting gap for this cuprate is noticeably smaller than the gap size of optimally-doped Bi-2212 with a similar $`T_c`$. Whether a universal relationship exists between the superconducting gap size and $`T_c`$ remains undetermined, and therefore the origin of the gap-size discrepancy between these two cuprates is still uncertain.
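To make the role of the weighting function concrete, the sketch below computes an angle-averaged $`d`$-wave tunneling DOS with a Gaussian weighting $`f(\theta )`$ centered on the gap maximum. The Gaussian form, the Dynes-type broadening, and all parameter values are illustrative assumptions on our part, not the actual Model I fit used in this work.

```python
import numpy as np

# Minimal sketch of a Model-I-style SIN tunneling DOS: d-wave gap with
# a Gaussian angular weighting f(theta) centered on the gap maximum.
# The Gaussian form, Dynes broadening, and parameter values below are
# illustrative assumptions, not the actual fit used in this work.

def weighted_dwave_dos(E, delta0=22.0, gamma=1.5, sigma=0.5):
    """E: energies (meV); delta0: gap maximum (meV); gamma: Dynes
    broadening (meV); sigma: angular width (rad) of the weighting."""
    theta = np.linspace(-np.pi / 4, np.pi / 4, 501)   # node to node
    f = np.exp(-theta**2 / (2.0 * sigma**2))          # preferential direction
    f /= np.trapz(f, theta)                           # normalize weighting
    gap = delta0 * np.cos(2.0 * theta)                # d-wave gap
    z = E[:, None] - 1j * gamma
    n = np.abs(np.real(z / np.sqrt(z**2 - gap[None, :]**2)))
    return np.trapz(f[None, :] * n, theta, axis=1)

E = np.linspace(-100.0, 100.0, 2001)
G = weighted_dwave_dos(E)
# Small sigma emphasizes the gap maximum (sharper peaks near delta0);
# large sigma restores the full nodal average with its cusp-like subgap.
```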
## V ACKNOWLEDGMENTS
This work was partially supported by the U.S. Department of Energy, Division of Basic Energy Sciences-Materials Sciences, under contract No. W-31-109-ENG-38, and the National Science Foundation, Office of Science and Technology Centers, under contract No. DMR 91-20000. Z.Y. acknowledges support from the Division of Educational Programs, Argonne National Laboratory.
Fig. 1. Tunneling conductances of eight junctions on three different Tl-2201 crystals, each with a bulk $`T_c`$ near 86 K. Junctions A, B, C, E, and F have been shifted vertically by 1.5, 1.2, 0.7, 0.3, and 0.1 mS, respectively, for clarity.
Fig. 2. (a) SIN tunneling conductance of Junction E (circles) at 4.2 K and the estimated normal state conductance (line). (b) Comparison of the normalized SIN conductance with Model I. The inset shows the angular weighting function $`f(\theta )`$. (c) Comparison of the unnormalized conductance with Model II. Refer to Ref. for definitions of variables. $`c_o`$, which corresponds to the chemical potential, has been set to 0.1585 eV for all comparisons in this paper.
Fig. 3. (a) SIN tunneling conductance of Junction J (circles) at 4.2 K and the estimated normal state conductance (line). (b) Comparison of the normalized SIN conductance with Model I. (c) Comparison of the unnormalized conductance with Model II.
Fig. 4. (a) SIN tunneling conductance of Junction B’ (circles) at 4.2 K and the estimated normal state conductance (line). (b) Comparison of the normalized SIN conductance with Model I. (c) Comparison of the normalized SIN conductance of Junction C’ with Model I. The tunneling conductance has been normalized by a constant.
Fig. 5. SIS conductance curves generated from the unnormalized SIN conductance curves of Junction E and B’. Each SIS curve shows the prominent dip feature at nearly 3$`\mathrm{\Delta }`$.