# NEW RESULTS FROM VES.
## 1 Introduction
We present the results of the partial-wave analysis (PWA) of the $\pi ^+\pi ^-\pi ^-$ system produced in the reaction
$$\pi ^-\mathrm{Be}\to \pi ^+\pi ^-\pi ^-\,\mathrm{Be}\qquad (1)$$
and of the $\omega \pi ^-\pi ^0$ system produced in
$$\pi ^-\mathrm{Be}\to \omega \pi ^-\pi ^0\,\mathrm{Be},\qquad \omega \to \pi ^+\pi ^-\pi ^0.\qquad (2)$$
Our previous results on the analysis of reaction (1) were published in , and those on reaction (2) were partially reported at conferences , . The measurements were carried out with the VES spectrometer exposed to a $\pi ^-$ beam of $37\,\mathrm{GeV}$ momentum. A description of the setup can be found in .
## 2 Results of the $\pi ^+\pi ^-\pi ^-$-system PWA.
The selection criteria for reaction (1) and the description of the PWA procedure can be found in . The relativistic covariant helicity formalism is used to construct the amplitudes and the positive-definite density matrix of full rank. The largest waves of the $J^P=0^-$, $1^+$, $2^-$ channels are decoupled from the other waves with the same $J^PM^\eta$ and are free to interfere with each other. The $6\times 10^6$ events with $|t'|<0.06\,\mathrm{GeV}^2$ and the $2\times 10^6$ events with $0.06<|t'|<0.7\,\mathrm{GeV}^2$ are analyzed separately. (Here $t'=t-t_{min}$, where $t$ is the squared momentum transfer from the beam to the final state and $t_{min}$ is its minimum value.)
We present the main features of the most significant waves in the high $3\pi$-mass region.
$J^PM^\eta =1^+0^+$. A peak in the $1^+0^+D\rho$ wave (Fig. 1(b)) and a shoulder in the $1^+0^+S\rho$ wave (Fig. 1(a), (d)) are observed at $M\sim 1.7\,\mathrm{GeV}$ and are interpreted as the $a_1'(1700)$ decaying into $\rho \pi$. The peak was fitted with the coherent sum of a Breit-Wigner resonance and an exponential background. The fit yields the following $a_1'(1700)$ parameters:
$$M=1.80\pm 0.05\,\mathrm{GeV},\qquad \Gamma =0.23_{-0.03}^{+0.10}\,\mathrm{GeV},$$
where the errors are dominated by systematics. The $a_1'(1700)$ branching ratios into D-wave $\rho \pi$ and P-wave $f_2(1270)\pi$, relative to S-wave $\rho \pi$, are bounded by
$$\frac{Br(a_1'(1700)\to (\rho \pi )_D)}{Br(a_1'(1700)\to (\rho \pi )_S)}<0.35,\qquad \frac{Br(a_1'(1700)\to f_2\pi )}{Br(a_1'(1700)\to (\rho \pi )_S)}<0.23\quad \text{at 95\% CL.}$$
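To make the fitting procedure concrete, here is a minimal sketch (not the collaboration's actual code; the background shape, starting values and toy data are all illustrative assumptions) of fitting a wave intensity with the modulus squared of the coherent sum of a Breit-Wigner amplitude and an exponential background:

```python
import numpy as np
from scipy.optimize import curve_fit

def bw(m, m0, gamma):
    """Simple constant-width Breit-Wigner amplitude (sketch only)."""
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma)

def intensity(m, m0, gamma, a, b, c, phi):
    """Coherent sum: |a*BW + b*exp(-c*m)*e^{i*phi}|^2."""
    return np.abs(a * bw(m, m0, gamma)
                  + b * np.exp(-c * m) * np.exp(1j * phi)) ** 2

# Toy data standing in for a PWA wave intensity in 3pi-mass bins (GeV):
m = np.linspace(1.4, 2.2, 40)
rng = np.random.default_rng(0)
y_obs = intensity(m, 1.80, 0.23, 1.0, 3.0, 2.0, 0.7) \
        + 0.01 * rng.standard_normal(m.size)

popt, _ = curve_fit(intensity, m, y_obs, p0=[1.8, 0.2, 1.0, 3.0, 2.0, 0.7])
print("M = %.2f GeV, Gamma = %.2f GeV" % (popt[0], abs(popt[1])))
```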
$J^PM^\eta =2^-0^+$. The complicated wave behaviour at low $t'$ shown in Fig. 2 is described by the interplay of the $\pi _2(1670)$ and $\pi _2'(2100)$ states . A peak at $M\sim 1.7\,\mathrm{GeV}$ is observed in the intensity of the F-wave $\rho \pi$ wave.
$J^PM^\eta =3^+0^+$. A resonance-like signal is observed near $M\sim 1.8\,\mathrm{GeV}$ in the $\rho \pi$ and $f_2\pi$ waves; see Fig. 3. The phase of the $3^+0^+D\rho$ wave relative to the $2^-0^+Sf_2$ wave is shown in Fig. 3(c) and is in accordance with the expectations for a resonance. A simultaneous fit of the $f_2\pi$ intensity with a relativistic Breit-Wigner function and of the $\rho \pi$ intensity with the incoherent sum of a Breit-Wigner and a Chebyshev-polynomial background yields the following resonance parameters:
$$M=1.86\pm 0.02\,\mathrm{GeV},\qquad \Gamma =0.54\pm 0.03\,\mathrm{GeV}.$$
The relative probability of decay into $f_2(1270)\pi$ and $\rho (770)\pi$ is
$$\frac{Br(a_3\to f_2(1270)\pi )}{Br(a_3\to \rho \pi )}=0.5\pm 0.1.$$
$J^PM^\eta =4^+1^+$. A bump near $M\sim 2\,\mathrm{GeV}$ is found in the $4^+1^+G\rho$ and $4^+1^+Ff_2$ waves produced at high $t'$ (Fig. 3). The phase of the $4^+1^+G\rho$ wave relative to the $\pi _2(1670)$ is in accordance with the expectation for a resonance. A simultaneous fit of the $f_2\pi$ intensity with a relativistic Breit-Wigner function and of the $\rho \pi$ intensity with the incoherent sum of a Breit-Wigner and a Chebyshev-polynomial background yields the following resonance parameters:
$$M=1.95\pm 0.02\,\mathrm{GeV},\qquad \Gamma =0.34\pm 0.10\,\mathrm{GeV}.$$
We identify this object with the $a_4(2040)$, with the branching ratio
$$\frac{Br(a_4(2040)\to f_2(1270)\pi )}{Br(a_4(2040)\to \rho \pi )}=0.5\pm 0.2.$$
$J^PM^\eta =1^-1^+$. The observation of a signal in the $1^-1^+P(\rho )$ wave with $M=1.62\pm 0.02\,\mathrm{GeV}$, $\Gamma =0.24\pm 0.05\,\mathrm{GeV}$ was previously reported as preliminary by VES . Later, the observation of a state with $M=1593\pm 8_{-47}^{+20}\,\mathrm{MeV}$, $\Gamma =168\pm 20_{-12}^{+150}\,\mathrm{MeV}$ was announced by E852 . The model dependence of the signal behaviour was demonstrated in . We do not observe such a narrow signal (Fig. 5) in the fit results with the applied PWA model. However, a peak at $M\sim 1.6\,\mathrm{GeV}$ with $\Gamma \sim 0.3\,\mathrm{GeV}$ appears in the $1^-1^+$ wave intensity described by the leading term in the expansion of the density matrix in terms of its eigenvalues (shown in Fig. 7).
## 3 The results of the $\omega \pi ^-\pi ^0$-system PWA
The selection criteria for reaction (2) and the description of the PWA procedure can be found in . The results of the PWA are presented in Fig. 6.
$J^PM^\eta =0^-0^+$. A peak in the region of $1.74\,\mathrm{GeV}$ over a flat background dominates the $0^-0^+P(\rho )$ wave at low $t'$ (Fig. 6(a)). The phase of this wave relative to the smooth $1^+0^+P(b_1)$ wave shows resonant behaviour. The resonance parameters, determined by fitting the wave intensity with the incoherent sum of a relativistic Breit-Wigner function and a cubic polynomial, are: mass $M=1.737\pm 0.005\pm 0.015\,\mathrm{GeV}$ and width $\Gamma =0.259\pm 0.019\pm 0.06\,\mathrm{GeV}$.

$J^PM^\eta =2^-0^+$. A clear peak is observed at $M\sim 1.67\,\mathrm{GeV}$ with $\Gamma \sim 0.2\,\mathrm{GeV}$ (Fig. 6(b)). Resonant phase behaviour of the $2^-0^+P1(\rho )$ and $2^-0^+P2(\rho )$ waves relative to the $1^+0^+P(b_1)$ wave is observed. The resonance parameters of the $2^-0^+P2(\rho )$ peak were estimated in the same way as for the $\pi (1740)$: $M=1.687\pm 0.009\pm 0.015\,\mathrm{GeV}$ and $\Gamma =0.168\pm 0.043\pm 0.053\,\mathrm{GeV}$. We identify this phenomenon with the decay of the $\pi _2(1670)$ into $\omega \rho$. The partial branching ratio was found by normalization to the decay $\pi _2(1670)\to f_2(1270)\pi$ , observed in the same experiment:
$$Br(\pi _2(1670)^-\to \omega \rho ^-)=0.027\pm 0.004\pm 0.01.$$
Upper limits on the $\pi _2(1670)$ decay branching ratios are set at the $2\sigma$ confidence level:
$$Br(\pi _2(1670)\to \rho (1450)\pi )<0.0036,\qquad Br(\pi _2(1670)\to b_1\pi )<0.0019.$$
$J^PM^\eta =3^+0^+$. A peak at $M_{5\pi }\sim 2\,\mathrm{GeV}$ with $\Gamma \sim 0.35\,\mathrm{GeV}$ is clearly seen in the $3^+0^+S(\rho _3(1690))$ wave for events with low $t'$ (Fig. 6(c)). However, no resonant phase motion was found. Such wave behaviour can be attributed to the Deck-effect process .

$J^PM^\eta =2^+1^+$. Intense production and decay of the $a_2(1320)$ is the main process at low masses (Fig. 6(d)). The decay probability was found to be $Br(a_2(1320)\to \omega \pi ^-\pi ^0)=(5\pm 1)\%$. We define the $a_2(1320)$ partial width as that of a Breit-Wigner function over an S-wave $\omega \pi ^-\pi ^0$ background. The nature of the structure at $\sim 1.7\,\mathrm{GeV}$ is unknown: it may be another resonance or the opening of the $\omega \rho$ channel . We cannot give preference to a particular hypothesis.

$J^PM^\eta =4^+1^+$. The signal at $M\sim 2\,\mathrm{GeV}$ (Fig. 6(e)) with resonant phase behaviour can be identified as the $a_4(2040)$. The $a_4(2040)$ parameters, estimated by a fit with the incoherent sum of a $D$-wave relativistic Breit-Wigner function and a polynomial background, are $M=1.944\pm 0.008\pm 0.050\,\mathrm{GeV}$ and $\Gamma =0.324\pm 0.026\pm 0.075\,\mathrm{GeV}$. The $t'$-dependence of the $a_4(2040)$ is identical to that of the $a_2(1320)$.

$J^PM^\eta =1^-1^+$. The intensity of the $1^-1^+S(b_1)$ wave for events with high $t'$ shows a wide bump with a maximum at $M\approx 1.6$-$1.7\,\mathrm{GeV}$. The highest intensity of this wave does not exceed $15\%$ of the $2^+1^+S2(\rho )$ wave intensity (Fig. 6(f)). The $\omega \rho$ P-waves were included in turn along with the $b_1\pi$ wave. Their intensity distributions differ in shape from that of the $b_1\pi$ wave, and their inclusion in the fit does not affect the behaviour of the $b_1\pi$ wave intensity.
## 4 The $J^PM^\eta =1^-1^+$ wave analysis
A combined analysis of the $J^PM^\eta =1^-1^+$ $b_1\pi$ and $J^PM^\eta =2^+1^+$ $\omega \rho$ waves was carried out in order to understand the nature of the $b_1\pi$ wave, using the results of the $\omega \pi ^-\pi ^0$-system PWA. The diagonal elements and the real and imaginary parts of the off-diagonal element of the density matrix corresponding to the $1^-1^+$ $b_1\pi$ and $2^+1^+$ $\omega \rho$ waves at high $t'$ were fitted simultaneously, and the fit results were used to predict the coherence parameter as a cross-check. The $b_1\pi$ amplitude was saturated by a Breit-Wigner resonance and a coherent background; the $\omega \rho ^-$ amplitude was saturated by the $a_2(1320)$ meson and a background, and an $a_2'$ state was also tried. The results of fits with various constructions of the $\omega \rho ^-$ wave point to the resonant nature of the $b_1\pi$ signal. The range of parameter variation is large due to the freedom in the model of the $2^+1^+$ state.

A signal in the $J^PM^\eta =1^-1^+$ wave of the $\eta '\pi ^-$ system with similar parameters was observed earlier . A simultaneous fit of the $b_1\pi$ and $\eta '\pi$ intensities with the incoherent sum of a Breit-Wigner resonance and a background in each channel was carried out (Fig. 7). The fit yields the following parameters:
$$M=1.58\pm 0.03\,\mathrm{GeV},\qquad \Gamma =0.30\pm 0.03\,\mathrm{GeV}.$$
The shape of the signal in the $1^-1^+$ $\rho \pi$ wave is close to a Breit-Wigner function with these parameters. A fit of all three channels changes the resonance parameters only within errors. All these facts indicate the existence of a wide resonance with mass $M=1.61\pm 0.02\,\mathrm{GeV}$, width $\Gamma =0.29\pm 0.03\,\mathrm{GeV}$, and relative branching ratios $Br(b_1\pi ):Br(\eta '\pi ):Br(\rho \pi )=1:(1.0\pm 0.3):(1.6\pm 0.4)$.
## 5 Conclusions
The PWAs of the reactions $\pi ^-\mathrm{Be}\to \pi ^+\pi ^-\pi ^-\,\mathrm{Be}$ and $\pi ^-\mathrm{Be}\to \pi ^+2\pi ^-2\pi ^0\,\mathrm{Be}$ were performed.

The mass and width of the resonance structure in the $J^PM^\eta =0^-0^+$ $\omega \rho ^-$ wave differ from those of the $\pi (1800)$. There may exist two objects of different nature: a hybrid $\pi (1800)$ and a $3^1S_0$ $q\bar{q}$ state decaying into $\omega \rho$.

An indication of the existence of the $a_1'$ resonance, decaying mostly to $\rho \pi$ in the S-wave, is found.

The decays of the $\pi _2(1670)$ into $\omega \rho$ and into $\rho \pi$ in the F-wave are found.

The $a_2(1320)^-\to \omega \pi ^-\pi ^0$ decay and a wide bump of unknown nature at $M\sim 1.7\,\mathrm{GeV}$ are observed in the $J^PM^\eta =2^+1^+$ wave. The $a_3$ and $a_4(2040)$ decays to $\rho \pi$ and $f_2\pi$ are observed, with the following relative branching ratios of the $a_4(2040)$ decays: $Br(f_2(1270)\pi ):Br(\rho \pi ):Br(\omega \rho )=(0.5\pm 0.2):1:(1.5\pm 0.4)$.

The preliminary results of the $1^{-+}$ wave analysis point to the existence of a resonance with exotic quantum numbers, forbidden for $q\bar{q}$ states, with mass $M=1.61\pm 0.02\,\mathrm{GeV}$, width $\Gamma =0.29\pm 0.02\,\mathrm{GeV}$, and relative branching ratios $Br(b_1\pi ):Br(\eta '\pi ):Br(\rho \pi )=1:(1.0\pm 0.3):(1.6\pm 0.4)$.
# 𝑢̄-𝑑̄ asymmetry - a few remarks.
## Abstract
We make a few remarks on possible sources of uncertainty in the $\bar{d}-\bar{u}$ asymmetry obtained by different methods, comment on its possible verification in the future, and add some comments on its present understanding.
In the last year both the E866 collaboration at Fermilab (Drell-Yan production of dimuons) and the HERMES collaboration at DESY (semi-inclusive production of charged pions) published new results on the $\bar{d}-\bar{u}$ asymmetry in the nucleon . During the DIS99 conference both groups presented new results with somewhat better statistics . The new results complement the older NMC result on the Gottfried integral and the earlier Drell-Yan experiment NA51 at CERN .
The E866 collaboration measured the ratio of cross sections $\sigma _{pd}^{DY}/\sigma _{pp}^{DY}$. This ratio is extremely sensitive to the $\bar{d}/\bar{u}$ ratio, which is extracted in an iterative procedure assuming leading-order formulae and taking the valence quark distributions, as well as $\bar{u}+\bar{d}$, from PDFs . The difference $\bar{d}-\bar{u}$ is then obtained from
$$\bar{d}-\bar{u}=\frac{\bar{d}/\bar{u}-1}{\bar{d}/\bar{u}+1}\,[\bar{u}+\bar{d}].\qquad (1)$$
In practice the E866 collaboration uses $\bar{u}+\bar{d}$ from one of the global NLO fits to the world data. Different global PDF fits yield roughly similar results for $x>0.05$; in this range of $x$ the sum is strongly constrained by the (anti)neutrino experiments. At smaller values of $x$ one should worry about the consistency of using NLO PDFs in LO formulae. At larger values, $x>0.4$, our knowledge of $\bar{u}+\bar{d}$ is rather limited. This must be taken into account particularly seriously in the planned P906 experiment . The average value of $Q^2$ in the E866 experiment is high enough that no higher-twist effects are expected.
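As a toy numerical illustration of Eq. (1) (the input values below are made up for the example, not E866 measurements):

```python
# Toy use of Eq. (1): convert a dbar/ubar ratio into dbar - ubar,
# given ubar + dbar taken from a global PDF fit (values illustrative).
ratio = 1.6        # hypothetical dbar/ubar at some (x, Q^2)
sea_sum = 0.8      # hypothetical x*(ubar + dbar) from a PDF fit

diff = (ratio - 1.0) / (ratio + 1.0) * sea_sum
print(f"x*(dbar - ubar) = {diff:.3f}")   # -> 0.185
```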
In obtaining the $\bar{d}-\bar{u}$ asymmetry the HERMES collaboration assumes factorization between the hard scattering process and the hadronization of the struck quark,
$$N^{\pi ^\pm }(x,z)\propto \sum_i e_i^2\left[q_i(x)D_{q_i}^{\pi ^\pm }(z)+\bar{q}_i(x)D_{\bar{q}_i}^{\pi ^\pm }(z)\right],\qquad (2)$$
i.e. it implicitly assumes the validity of the parton model. Isospin symmetry (IS) between proton and neutron reduces the number of light-quark fragmentation functions to two, favoured and disfavoured. Then
$$\frac{1+r}{1-r}=\frac{u-d+\bar{u}-\bar{d}}{[u-\bar{u}]-[d-\bar{d}]}\,J(z),\qquad (3)$$
where $r(x,z)=\frac{N_p^{\pi ^-}(x,z)-N_n^{\pi ^-}(x,z)}{N_p^{\pi ^+}(x,z)-N_n^{\pi ^+}(x,z)}$ and $J(z)=\frac{3}{5}\left(\frac{1+D'(z)}{1-D'(z)}\right)$, with $D'(z)=D_u^{\pi ^-}(z)/D_u^{\pi ^+}(z)$.
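For orientation, the purely kinematic factor $J(z)$ is easily evaluated once the ratio of unfavoured to favoured fragmentation functions is known; the value of $D'(z)$ below is a placeholder, not a HERMES measurement:

```python
def J(D_prime):
    """J(z) = (3/5) * (1 + D'(z)) / (1 - D'(z)), as in Eq. (3)."""
    return 0.6 * (1.0 + D_prime) / (1.0 - D_prime)

# D'(z) = D_u^{pi-}(z) / D_u^{pi+}(z) < 1; illustrative value:
print(J(0.45))   # -> ~1.58
```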
The HERMES experiment is a fixed-target experiment with a beam electron energy of about 30 GeV, i.e. small $x$ is associated with relatively small $Q^2$. In the lowest $x$ bin the average $Q^2$ is only slightly larger than $1\,\mathrm{GeV}^2$. How large the higher-twist effects beyond the parton model are at such small values of $Q^2$ is an open problem. A simple estimate of the VDM contribution to the structure function shows that it can be of the order of 20%. Assuming IS for the hadronic components, we get for nucleon-virtual vector meson scattering: $\sigma (pV^0\to \pi ^+)=\sigma (nV^0\to \pi ^-)>\sigma (pV^0\to \pi ^-)=\sigma (nV^0\to \pi ^+)$. The inequality follows from the fact that presumably $u_p>d_p$ and $\bar{d}_p>\bar{u}_p$. Thus the presence of a hadronic component would lead to a reduction of the experimentally extracted quantity $r(x,z)$. This means that the corresponding purely partonic quantity (an exclusively theoretical quantity) would be bigger, which would result in a smaller $\bar{d}-\bar{u}$. No quantitative estimate of the effect has been made up to now.

Assuming the validity of the parton model, the HERMES collaboration extracts the quantity $(\bar{d}-\bar{u})/(u-d)$. The denominator is, in our opinion, not extremely well known. The measured region of $x$ is sensitive to meson cloud effects . We wish to stress the poorly known fact that meson cloud effects contribute both to the sea and to the valence quark distributions. Therefore it is not clear whether the PDF parametric forms used in global fits (even for valence quark distributions) are flexible enough to accommodate those effects.

Both the E866 and HERMES collaborations tried to estimate the integral $\int_0^1[\bar{d}-\bar{u}]\,dx$. It appears that the number obtained by the E866 collaboration is slightly lower than those obtained by the NMC and HERMES collaborations. Is this a random statistical fluctuation, or is there a physical reason behind it? Recently we have shown that a two-component model which includes the VDM contribution (modified at large $x$ for the finite fluctuation times of the hadronic component of the photon) and a modified partonic component (vanishing at small $Q^2$; the traditional parton model does not possess this property) can describe both the proton and deuteron structure functions in a broad range of $x$ and $Q^2$, considerably better than the pure QCD-improved parton model. The model of has interesting predictions for $F_2^p-F_2^n$. Here the VDM contribution cancels and one is left with a modified partonic component which tends to zero as $Q^2\to 0$. Already at $Q^2$ as large as $4\,\mathrm{GeV}^2$ (typical for the NMC data) we find a non-negligible reduction of the parton-model result. This prediction of a strong $Q^2$ dependence of $F_2^p-F_2^n$ seems to be confirmed by the world data for $F_2^p$ and $F_2^d$. Being a relatively small quantity, $F_2^p-F_2^n$ is very sensitive to statistical uncertainties and cannot be obtained by a simple subtraction $F_2^p-(F_2^d-F_2^p)$; here one can use the method proposed by the NMC . The QCD-improved parton model seems to fail for the extracted $F_2^p-F_2^n$ already at $Q^2$ as large as $7\,\mathrm{GeV}^2$ . In the language of the higher-twist expansion this means that the twist-4 contribution is rather large and negative. This phenomenological observation is consistent with a recent lattice QCD result . Such substantial higher-twist effects strongly modify our present understanding of the applicability of the QCD-improved parton model. The strong $Q^2$ dependence of $F_2^p-F_2^n$ can potentially explain the difference between the E866 (large $Q^2$) and NMC (small $Q^2$) results.

Despite the not fully resolved problems mentioned above, the new experiments provide valuable information on the $\bar{d}-\bar{u}$ asymmetry in the nucleon and constitute a useful input for constraining PDFs. The LO Altarelli-Parisi evolution equations generate equal numbers of $\bar{u}u$ and $\bar{d}d$ pairs, and the two-loop evolution gives a rather negligible effect . Because perturbative QCD is not able to explain the large asymmetry and the Gottfried Sum Rule violation, it is clear that the relevant physics must be of nonperturbative origin.
Chiral symmetry and its breaking lead to the presence of a pion cloud in the nucleon. This concept provides the most natural and economical explanation of the asymmetry (see for instance ). There exist two technical formulations of such a model. In the traditional nuclear-physics formulation the physical nucleon is expanded in terms of meson-baryon Fock states as
$$|N\rangle =|N_0\rangle +|\pi N\rangle +|\pi \Delta \rangle +\ldots \qquad (4)$$
The most complete version of the model has been presented in . If the coupling constants are fixed from low-energy hadronic physics and the vertex form factors from the high-energy production of baryons, the model leads to (a) $\pi ^+>\pi ^0>\pi ^-$, (b) a number of pions in the nucleon $N(\pi )=0.2$-0.3, and (c) a pion distribution $P(x_\pi )$ which peaks at $x_\pi \approx 0.2$-0.3. The latter means that the momentum fraction of the neutron associated with the pion would be about 0.7-0.8. This is fully consistent with the spectra of leading neutrons at HERA .

Parallel to the traditional approach, the effective chiral quark theory provides an alternative explanation. Here the relevant degrees of freedom are constituent quarks and Goldstone bosons. The most extensive analysis of the light-antiquark asymmetry in this type of model can be found in . If the constituent quark-pion vertex form factor is fixed to the size of the Gottfried Sum Rule violation, then: (a) $\pi ^+:\pi ^0:\pi ^-=2:3/2:1$, (b) $N(\pi )=0.6$-0.7, and (c) $P(x_\pi )$ peaks at $x_\pi \approx 0.1$.

The recent E866 experiment at Fermilab has reported the first high-precision mapping of the $x$-dependence of the $\bar{u}$-$\bar{d}$ asymmetry, with the finding that the difference $\bar{d}-\bar{u}$ seems to vanish at large $x\sim 0.3$. This surprising observation was not predicted by models which used only constraints from leading baryons. It was shown in that if the information on leading pions in hadronic reactions is used in addition to limit the hadronic vertex form factors, then the new E866 data on $\bar{d}-\bar{u}$ are described automatically.
In all the present experimental analyses (muon deep inelastic scattering , the E866 Drell-Yan experiment , and HERMES semi-inclusive pion production ) both proton and neutron (deuteron) targets are used. In order to obtain information on the $\bar{u}-\bar{d}$ asymmetry one assumes IS of the quark (antiquark) distributions in the proton and neutron, i.e.
$$u_n(x)=d_p(x),\quad d_n(x)=u_p(x),\quad \bar{u}_n(x)=\bar{d}_p(x),\quad \bar{d}_n(x)=\bar{u}_p(x).\qquad (5)$$
Such a symmetry of PDFs has never been tested experimentally. Recently a simple analysis of the muon and neutrino structure functions led to the conclusion of substantial isospin violation in PDFs . Ascribing the entire observed effect to isospin violation is rather an extreme view . Even if the true violation of IS is much smaller than suggested in , it remains essentially unknown experimentally. Therefore all the present analyses are to some extent biased by the explicit assumption of IS. In we have suggested how to test the $\bar{u}-\bar{d}$ asymmetry while avoiding the assumption of IS: one measures at RHIC the asymmetry
$$A(pp\to W^\pm )=\frac{\sigma (pp\to W^+)-\sigma (pp\to W^-)}{\sigma (pp\to W^+)+\sigma (pp\to W^-)}\qquad (6)$$
as a function of the $W$-boson rapidity, or a similar asymmetry for charged leptons from $W$ decays. The $x$-dependence of the asymmetry could be obtained by varying the beam energy at RHIC.
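A schematic leading-order sketch of Eq. (6) at fixed momentum fractions, keeping only the Cabibbo-favoured $u\bar{d}\to W^+$ and $d\bar{u}\to W^-$ terms (the PDF values below are toy numbers and the rapidity dependence is ignored):

```python
# Schematic LO estimate of Eq. (6); all PDF values are toy inputs.
def w_asymmetry(u1, d1, ubar1, dbar1, u2, d2, ubar2, dbar2):
    w_plus = u1 * dbar2 + dbar1 * u2    # u dbar -> W+
    w_minus = d1 * ubar2 + ubar1 * d2   # d ubar -> W-
    return (w_plus - w_minus) / (w_plus + w_minus)

print(w_asymmetry(u1=0.5, d1=0.25, ubar1=0.05, dbar1=0.08,
                  u2=0.5, d2=0.25, ubar2=0.05, dbar2=0.08))  # ~0.52
```

With $\bar{d}>\bar{u}$ in the proton the $W^+$ yield is enhanced, so this observable is directly sensitive to the $\bar{u}-\bar{d}$ asymmetry without any reference to the neutron.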
# Jarzynski equality for the transitions between nonequilibrium steady states
## Abstract
The Jarzynski equality \[Phys. Rev. E 56, 5018 (1997)\], which has been considered valid for transitions between equilibrium states, is found to be applicable to transitions between nonequilibrium stationary states that satisfy certain conditions. Numerical results also confirm its validity. Its relevance for the nonequilibrium thermodynamics of the operational formalism is discussed.
A framework for nonequilibrium thermodynamics has been sought by many authors in order to treat various nonequilibrium systems such as chemical reactions, transport processes in solids, molecular motors, etc. So far, all these attempts seem to be based on fluid-dynamical approaches, which mostly take the assumption of local equilibrium as their starting point. Recently, Oono and Paniconi presented a different type of nonequilibrium thermodynamics whose framework parallels that of equilibrium thermodynamics. The unique feature of their work lies in the fact that it is a set of laws concerning operations from outside, just as equilibrium thermodynamics is; we refer to their theory as the operational formalism. This viewpoint is important because the concept of entropy in equilibrium thermodynamics is introduced through the adiabatic operation . The relation between dynamical entropy and thermodynamic entropy has also been discussed from this viewpoint . Hence it is interesting to construct nonequilibrium thermodynamics from the operational point of view, apart from the existing fluid-dynamical approach.
Operation from outside can cause energy exchange between the system and the external operator. In equilibrium thermodynamics, there is a principle of minimum work for a system in an isothermal environment:
$$\Delta F\leq \langle W\rangle ,\qquad (1)$$
where $\Delta F$ denotes the free-energy difference between the initial and final states of the system, and $W$ denotes the work done by the external operator. The average of a physical quantity $f$ is written as $\langle f\rangle$, as usual. Note that the sign of $W$ is positive when work is performed on the system. The equality holds when and only when the process is reversible. Jarzynski recently proposed an intriguing equality for the finite-time transition between equilibrium states :
$$\exp (-\beta \Delta F)=\langle \exp (-\beta W)\rangle ,\qquad (2)$$
where $\beta$ denotes the inverse temperature. Crooks gave another intriguing derivation of Eq. (2) using the fluctuation theorem . This equality has been confirmed to be valid for finite-time transitions between equilibrium states. In this Rapid Communication, however, we show that Eq. (2) is in fact applicable to finite-time transitions between nonequilibrium steady states which satisfy certain conditions. The derivation is given below, roughly following Ref. .
Consider a system with the following Hamiltonian:
$$\mathcal{H}=H_0(p)+H(x;\alpha )-xF(t),\qquad (3)$$
where $\alpha$ is a parameter, $F(t)$ denotes the perturbative driving force which may be responsible for the nonequilibrium situation, and $H_0(p)$ is independent of time. The external agent manipulates the system by varying the parameter $\alpha$. The system may be in contact with a heat bath or with several heat baths of different temperatures. In any case, we describe the dynamics of the system by a stochastic process in the phase space spanned by $x$ and $p$. We introduce the probability distribution function $f(\Gamma ,t)$ and the transition probability $P(\Gamma ,t|\Gamma ',t')$, where $\Gamma$ denotes both $x$ and $p$, to get
$$f(\Gamma ,t)=\int d\Gamma '\,P(\Gamma ,t|\Gamma ',t')\,f(\Gamma ',t').\qquad (4)$$
This leads to
$$\frac{\partial f(\Gamma ,t)}{\partial t}=\int d\Gamma '\,R(\Gamma |\Gamma ';t)\,f(\Gamma ',t),\qquad (5)$$
where
$$R(\Gamma |\Gamma ';t)=\lim_{\Delta t\to +0}\frac{P(\Gamma ,t+\Delta t|\Gamma ',t)-P(\Gamma ,t|\Gamma ',t)}{\Delta t}.\qquad (6)$$
The dynamics of our nonequilibrium system is described by Eq. (5) together with an initial condition. We then make the important assumption that the steady state of our system is characterized by the following distribution function:
$$f_{steady}(\Gamma ;\alpha )\propto \Phi (x,p)\exp [-\bar{\beta }H(x;\alpha )],\qquad (7)$$
where $\Phi (x,p)$ is an arbitrary function of $x$ and $p$, and $\bar{\beta }$ is a parameter that should be regarded as an effective inverse temperature. In other words, we confine the theory to systems whose stationary distribution functions are of the form of Eq. (7). By the definition of the stationary state, Eq. (5) leads to
$$\frac{\partial f_{steady}}{\partial t}=\int d\Gamma '\,R(\Gamma |\Gamma ';t)\,\Phi (x',p')\exp [-\bar{\beta }H(x';\alpha )]=0.\qquad (8)$$
Our goal is to obtain the steady-state version of Eq. (2),
$$\langle \exp (-\bar{\beta }W)\rangle =\exp (-\bar{\beta }\Delta F),\qquad (9)$$
although the meaning of $\Delta F$ is unclear at this point. Note that $\bar{\beta }$ is identical to the one appearing in the distribution function, Eq. (7). Adopting the path-integral expression, we write
$$\langle \exp (-\bar{\beta }W)\rangle =\int \mathcal{D}\Gamma (t)\,\exp (-\bar{\beta }W)\,\mathcal{P}[\Gamma (t)],\qquad (10)$$
where $\mathcal{P}[\Gamma (t)]$ is the probability distribution functional of the path $\Gamma (t)$ in phase space. The work done on the system is defined as
$$W=\int dt\,\dot{\alpha }\,\frac{\partial H(x;\alpha )}{\partial \alpha }.\qquad (11)$$
We manipulate the system by changing the value of $\alpha$ from $\alpha (0)$ to $\alpha (\mathcal{T})$. We then discretize the time interval of the operation, $[0,\mathcal{T}]$, as $(t_0,t_1,\ldots ,t_N)$, and write $\Gamma (t_i)$ as $\Gamma _i$ and $\mathcal{T}/N$ as $\Delta t$. As a result of the discretization, the distribution functional $\mathcal{P}[\Gamma (t)]$ is represented in terms of transition probabilities as
$$\mathcal{P}[\Gamma (t)]=P_N(\Gamma _N|\Gamma _{N-1})\cdots P_1(\Gamma _1|\Gamma _0)\,f_0(\Gamma _0),\qquad (12)$$
where $f_0(\Gamma _0)$ denotes the initial probability distribution function. Similarly, Eq. (11) becomes
$$W=\sum_{i=0}^{N-1}\delta H_{i+1}(x_i),\qquad (13)$$
where
$$\delta H_{i+1}(x_i)=H(x_i;\alpha _{i+1})-H(x_i;\alpha _i).\qquad (14)$$
Due to Eqs. (12) and (13), Eq. (10) is rewritten as
$$\langle \exp (-\bar{\beta }W)\rangle =\left[\prod_{i=0}^{N}\int d\Gamma _i\right]P_N(\Gamma _N|\Gamma _{N-1})\,e^{-\bar{\beta }\delta H_N(x_{N-1})}\cdots P_1(\Gamma _1|\Gamma _0)\,e^{-\bar{\beta }\delta H_1(x_0)}\,f_0(\Gamma _0).\qquad (15)$$
The integrals on the right-hand side of Eq. (15) can be evaluated by the iteration
$$g_{i+1}(\Gamma )=\int d\Gamma _i\,P_{i+1}(\Gamma |\Gamma _i)\,e^{-\bar{\beta }\delta H_{i+1}(x_i)}\,g_i(\Gamma _i),\qquad (16)$$
where
$$g_0(\Gamma )=f_0(\Gamma ),\qquad (17)$$
$$\langle \exp (-\bar{\beta }W)\rangle =\int g_N(\Gamma )\,d\Gamma .\qquad (18)$$
Keeping terms to first order in $\Delta t$, we have
$$P_{i+1}(\Gamma |\Gamma _i)=\delta (\Gamma -\Gamma _i)+\Delta t\,R_i(\Gamma |\Gamma _i),\qquad (19)$$
$$e^{-\bar{\beta }\delta H_{i+1}(x_i)}=1-\bar{\beta }\,\delta H_{i+1}(x_i).\qquad (20)$$
Substituting Eqs. (19) and (20) into the recursion relation Eq. (16) and taking the limit $\Delta t\to 0$, we get
$$\frac{\partial g(\Gamma ,t)}{\partial t}=-\bar{\beta }\dot{\alpha }\,\frac{\partial H(x;\alpha )}{\partial \alpha }\,g(\Gamma ,t)+\int d\Gamma '\,R(\Gamma |\Gamma ';t)\,g(\Gamma ',t).\qquad (21)$$
This equation gives
$$g(\Gamma ,t)\propto \Phi (x,p)\exp [-\bar{\beta }H(x;\alpha (t))],\qquad (22)$$
noting that the second term on the right-hand side of Eq. (21) vanishes by Eq. (8). Since Eq. (17) tells us that $g(\Gamma ,0)$ is identical to the initial probability distribution function $f_0(\Gamma )$, the right-hand side of Eq. (22) must carry the appropriate normalization factor:
$$g(\Gamma ,t)=\frac{\Phi (x,p)}{Z_0}\exp [-\bar{\beta }H(x;\alpha (t))],\qquad (23)$$
where
$$Z_0=\int d\Gamma \,\Phi (x,p)\exp [-\bar{\beta }H(x;\alpha (0))].\qquad (24)$$
From Eq. (18), we finally obtain the desired quantity:
$$\langle \exp (-\bar{\beta }W)\rangle =\int d\Gamma \,g(\Gamma ,\mathcal{T})=\frac{Z_\mathcal{T}}{Z_0},\qquad (25)$$
where
$$Z_\mathcal{T}=\int d\Gamma \,\Phi (x,p)\exp [-\bar{\beta }H(x;\alpha (\mathcal{T}))].\qquad (26)$$
Note that $Z_0$ and $Z_\mathcal{T}$ depend only on the values of $\alpha (0)$ and $\alpha (\mathcal{T})$, respectively, so that they are state variables. Namely, the quantity $\langle \exp (-\bar{\beta }W)\rangle$ does not depend on the transition process but only on the initial and final states. Furthermore, if we define the free energy by
$$F=-\bar{\beta }^{-1}\log Z,\qquad (27)$$
Eq. (25) gives our goal, Eq. (9), which can be rewritten as
$$\Delta F=-\bar{\beta }^{-1}\log [\langle \exp (-\bar{\beta }W)\rangle ].\qquad (28)$$
This completes the derivation of the steady-state version of the Jarzynski equality, Eq. (9). In this derivation the restriction on the stationary distribution function, Eq. (7), is imposed; whether the Jarzynski equality holds for systems whose stationary distribution function does not satisfy this condition is quite unknown at this point. Hereafter we check the validity of the results by numerical simulations of some concrete models.
We consider two examples. First we treat a uniform-temperature system whose Hamiltonian is given by
$$\mathcal{H}=\frac{p^2}{2}+\frac{k(t)}{2}x^2-xA\sin (\omega t).\qquad (29)$$
This is one of the simplest models of a nonequilibrium steady state driven by an external force. By changing $k(t)$, we can perform work on the nonequilibrium system. Although the sinusoidal force also performs work on the system, its contribution is a stationary dissipation which characterizes the nonequilibrium state; following Ref. , we call the work which is stationarily dissipated the "house-keeping work". We do not count its contribution as part of the work.
We employ Langevin dynamics as a model of the heat bath:
$$\ddot{x}+\gamma \dot{x}+k(t)x=A\sin (\omega t)+\xi (t),\qquad (30)$$
where $\xi (t)$ is Gaussian white noise satisfying
$$\langle \xi (t)\rangle =0,\qquad \langle \xi (t)\xi (t')\rangle =2\gamma \beta ^{-1}\delta (t-t').\qquad (31)$$
The control parameter $k(t)$ is changed from $1/4$ to $1$ as
$$k(t)=\frac{1}{4}\left(1+\frac{3t}{\mathcal{T}}\right),\qquad (32)$$
where $\mathcal{T}$ denotes the time duration of the operation.
Let us discuss the statistical properties of the stationary state. The model Eq. (30) leads to a time-dependent Kramers equation which yields time-dependent distributions if the forcing period $2\pi /\omega$ is longer than the relaxation time of the system. However, since the operation process is much slower than the forcing period, we average out the sinusoidal motion to obtain the stationary distribution. If the forcing period becomes comparable to the relaxation time, the response of the system cannot follow the forcing, so the distribution functions become Gibbsian in the high-frequency limit $1/\omega \to 0$. Here we choose the parameters such that the relaxation time of the position, $\tau _x\sim \gamma$, is longer than the forcing period, while that of the momentum, $\tau _p\sim \gamma ^{-1}$, is shorter, i.e. $\gamma ^{-1}\ll 2\pi /\omega \ll \gamma$. We can thus expect the distribution of the position $\chi (x)$ to be Gibbsian and that of the momentum $\pi (p)$ to be non-Gibbsian in this parameter range. The obtained $\chi (x)$ and $\pi (p)$ are shown in Fig. 1, where we can see that this expectation is realized:
$$f(\Gamma ;k)\propto \exp \left[-\beta k\frac{x^2}{2}\right]\pi _0(p).\qquad (33)$$
Note that this satisfies the condition of Eq. (7). Following Eq. (27), $\Delta F$ for this process is calculated to be $\bar{\beta }^{-1}\log 2$, since $Z(k)\propto k^{-1/2}$ and $k$ increases by a factor of four. We then check whether Eq. (28) holds. Since the distribution function is given by Eq. (33), $\bar{\beta }$ in Eq. (28) corresponds to $\beta$. The quantity of interest here, $-\bar{\beta }^{-1}\log \langle \exp [-\bar{\beta }W]\rangle$, is shown in Fig. 2 together with $\langle W\rangle$. As is clearly seen, while $\langle W\rangle$ changes with the operation time $\mathcal{T}$, $-\bar{\beta }^{-1}\log \langle \exp [-\bar{\beta }W]\rangle$ is invariant with respect to $\mathcal{T}$, as proved above for a state variable. As $\mathcal{T}$ grows, $\langle W\rangle$ converges to a finite value identical to $-\bar{\beta }^{-1}\log \langle \exp [-\bar{\beta }W]\rangle$; we can regard this quantity as $\Delta F$. These facts clearly indicate the validity of our main result.
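A minimal numerical sketch of such a check (Euler-Maruyama integration; all parameter values are illustrative, and the initial momenta are drawn from a Gibbsian distribution, which is only an approximation to the true non-Gibbsian $\pi _0(p)$ of Eq. (33)) might look like:

```python
import numpy as np

# Sketch: check <exp(-beta*W)> = Z_T/Z_0 for Eqs. (29)-(32).
# Only the change of k(t) contributes to W (Eq. (13)); the work done
# by the sinusoidal force is "house-keeping" and is not counted.
rng = np.random.default_rng(1)
beta, gamma, A, omega = 1.0, 2.0, 0.3, 10.0   # illustrative values
T_op, dt, n_traj = 20.0, 2e-3, 4000
n_steps = int(T_op / dt)

def k(t):
    return 0.25 * (1.0 + 3.0 * t / T_op)      # Eq. (32)

# Approximate steady-state initial conditions at k = 1/4:
x = rng.standard_normal(n_traj) / np.sqrt(beta * 0.25)
p = rng.standard_normal(n_traj) / np.sqrt(beta)
W = np.zeros(n_traj)
for i in range(n_steps):
    t = i * dt
    W += 0.5 * (k(t + dt) - k(t)) * x**2      # dW = (dH/dk) dk
    xi = np.sqrt(2.0 * gamma * dt / beta) * rng.standard_normal(n_traj)
    p += (-gamma * p - k(t) * x + A * np.sin(omega * t)) * dt + xi
    x += p * dt

dF = -np.log(np.mean(np.exp(-beta * W))) / beta
print(dF, "vs expected", np.log(2.0) / beta)  # Delta F = log(2)/beta
```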
On the other hand, by tuning the parameters we can obtain different steady states whose distribution functions do not satisfy Eq. (7). In those cases we found that our equality no longer holds; however, the principle of minimum work still seems to be valid.
Second, we consider a system in contact with two heat baths of different temperatures. The model treated here consists of two Brownian particles coupled via a linear interaction potential. The Hamiltonian of the system is
$$\mathcal{H}=\frac{p_1^2}{2}+\frac{p_2^2}{2}+\frac{k}{2}(x-y)^2,\qquad (34)$$
and the dynamics is written as
$$\ddot{x}+\gamma _1\dot{x}+k(t)(x-y)=\xi _1(t),\qquad (35)$$
$$\ddot{y}+\gamma _2\dot{y}+k(t)(y-x)=\xi _2(t).\qquad (36)$$
Again $\xi _1(t)$ and $\xi _2(t)$ are Gaussian white noises,
$$\langle \xi _i(t)\rangle =0,\qquad \langle \xi _i(t)\xi _j(t')\rangle =2\gamma _i\beta _i^{-1}\delta _{ij}\delta (t-t'),\qquad (37)$$
where
$$\delta _{ij}=\begin{cases}1 & (i=j)\\ 0 & (i\neq j)\end{cases}\qquad (38)$$
This may be the simplest heat-conduction system, which is of course out of equilibrium. This system has been studied intensively by Sekimoto and was found to have the distribution
$$f_{steady}(\Gamma ;k)\propto \exp \left[-\bar{\beta }k\frac{(x-y)^2}{2}\right]\exp \left[-\bar{\beta }\frac{p_1^2+p_2^2}{2}\right],\qquad (39)$$
where
$$\bar{\beta }=\frac{\gamma _1+\gamma _2}{\gamma _1\beta _1+\gamma _2\beta _2}\,\beta _1\beta _2.\qquad (40)$$
The steady state of this system hence satisfies the condition of Eq. (7). We again vary the control parameter $k(t)$ as given in Eq. (32) and check whether the Jarzynski equality holds. With the knowledge of the distribution function, $\Delta F$ is again calculated to be $\bar{\beta }^{-1}\log 2$. The numerical result using $\bar{\beta }$ of Eq. (40) is shown in Fig. 2. It is clear that the Jarzynski equality is also valid in this heat-conducting system.
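As a small numerical companion (using Eq. (40) as printed, with toy parameters; $\gamma _1=\gamma _2$ is chosen so that the ordering of the friction coefficients is immaterial):

```python
import math

# Effective inverse temperature of Eq. (40) and the corresponding
# Delta F = log(2)/beta_bar for the protocol k: 1/4 -> 1.
gamma1, gamma2 = 1.0, 1.0
beta1, beta2 = 1.0, 0.5   # bath temperatures T1 = 1, T2 = 2

beta_bar = (gamma1 + gamma2) / (gamma1 * beta1 + gamma2 * beta2) \
           * beta1 * beta2
print(beta_bar)                  # -> 2/3
print(math.log(2.0) / beta_bar)  # Delta F ~ 1.04
```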
In this Rapid Communication, we have derived the steady-state version of the Jarzynski equality and confirmed its validity by numerical simulations. The condition under which the equality holds is that the stationary distribution function is given by Eq. (7). Note that the principle of minimum work follows immediately from Eq. (28): by Jensen's inequality, $\langle \exp (-\bar{\beta }W)\rangle \geq \exp (-\bar{\beta }\langle W\rangle )$, so that $\Delta F\leq \langle W\rangle$. Namely, the principle of minimum work is also valid for transitions between nonequilibrium steady states.

However, the equality has a clear limitation in its application. The condition Eq. (7) is rather restrictive in that it cannot describe the steady state of a system where the temperature depends on position, e.g. a Brownian particle in a nonuniform-temperature environment . It is unclear to what extent Eq. (7) is satisfied in various nonequilibrium systems.

Another open question is the definition of the free energy in nonequilibrium systems. As we have seen in the numerical simulations above, Eq. (27) seems to be valid in systems satisfying Eq. (7) regardless of the statistical properties of the momentum space. In more general systems, however, the definition of the nonequilibrium free energy, as well as of the entropy, is still unclear, although it may appear as the minimum work as stated in Eq. (1). Broader application and further development of the framework of Ref. should be fruitful for nonequilibrium thermodynamics and should be a main focus of future work.
The author thanks S. Sasa for helpful discussions and his critical reading of the manuscript. Discussions with S. Takesue, H. Nakazato, and Y. Yamanaka are also gratefully acknowledged.
## 1 The Sunyaev-Zel’dovich Effect
The scattering of Cosmic Microwave Background (CMB) photons by a hot thermal distribution of electrons leads to a unique distortion of the CMB spectrum known as the Sunyaev-Zel'dovich Effect (SZE), after the two Russian scientists who proposed it in the early 1970's (\[Sunyaev and Zel'dovich 1970\]; \[Sunyaev and Zel'dovich 1972\]). In most cases, and in all cases considered here, the hot gas is provided by the intracluster medium of galaxy clusters. For the most massive clusters the mass of the intracluster medium greatly exceeds the mass contained in the individual galaxies, although, as discussed in detail below, roughly 85% of the mass of a cluster is contained in some other form of non-luminous matter. Even for the largest clusters, with $\sim 10^{15}M_\odot$ total mass, the chance that a CMB photon traversing the cluster is scattered is only about 1%, so the resulting spectral distortion has a small amplitude.

Photons that scatter off the much higher energy electrons ($T_e\sim 10^8$ K, $\sim 10$ keV) are on average shifted to higher energy. The emergent spectrum is therefore distinctly non-Planckian: compared to the initial Planck spectrum there are fewer photons at low energies and more at high energies. The spectral distortion is shown in Fig. 1, where the left panel shows the change in intensity and the right panel the change in Rayleigh-Jeans (RJ) brightness temperature. The RJ brightness is shown because the sensitivity of radio telescopes is calibrated in these units; it is defined simply by $I_\nu =(2k\nu ^2/c^2)\,T_{RJ}\,\Omega$, where $I_\nu$ is the intensity at frequency $\nu$, $k$ is Boltzmann's constant, $\Omega$ is the solid angle and $c$ is the speed of light. The Planck spectrum of the CMB radiation is also shown, by the dotted line in Fig. 1, for reference.

Also shown in Fig. 1, by the dashed curve, is the kinetic SZE. This effect is caused by a non-zero bulk velocity of the cluster with respect to the CMB along the line of sight, i.e. a peculiar velocity with respect to the Hubble flow. It results in a purely thermal distortion of the CMB spectrum (i.e., the emergent spectrum is still described completely by a Planck spectrum, but at a slightly different temperature: lower for positive and higher for negative peculiar velocities).

The derivation of the exact spectral dependence of the SZE can be found in the original papers of Sunyaev and Zel'dovich, as well as in a number of more recent papers which include relativistic corrections to the earlier work, and in reviews (e.g., \[Sunyaev and Zel'dovich 1970\]; \[Sunyaev and Zel'dovich 1972\]; \[Sunyaev and Zel'dovich 1980\]; \[Rephaeli 1995\]; \[Birkinshaw 1999\]).

At long wavelengths, the SZE toward massive and luminous galaxy clusters should be observed as a hole in the sky relative to the undistorted CMB brightness. This in itself is a remarkable feature: there is no other plausible explanation known for the existence of such a hole other than primary CMB anisotropy, which should have a much smaller amplitude at the angular scales subtended by galaxy clusters (e.g., \[Holder and Carlstrom 1999\]). A clear detection of an SZE decrement was proposed as proof that the CMB is indeed cosmic (\[Sunyaev and Zel'dovich 1970\]); if proof is still needed, we now know that it at least comes to us from beyond $z=0.83$.
In the Rayleigh-Jeans (RJ) limit the SZE spectral distortion is given by
$$\frac{\Delta T}{T_{CMB}}=-2\int \frac{kT_e}{m_ec^2}\,\sigma _T\,n_e\,dl\qquad (1)$$
where $T_{CMB}$ is the radiation temperature of the CMB, $k$ is Boltzmann's constant, $n_e$ and $T_e$ are the electron density and temperature, respectively, $\sigma _T$ is the Thomson cross section, $m_e$ is the mass of the electron, $c$ is the speed of light, and the integral is along the line of sight. Using the definition of the Comptonization parameter $y$, we find that $\frac{\Delta T}{T_{CMB}}=-2y$ in the RJ limit.
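As a rough worked example of Eq. (1) (the cluster parameters below are typical illustrative values, not measurements of any particular cluster):

```python
# Order-of-magnitude evaluation of Eq. (1) for a massive cluster.
kTe_over_mec2 = 10.0 / 511.0   # kT_e/(m_e c^2) for T_e ~ 10 keV
sigma_T = 6.652e-25            # Thomson cross section [cm^2]
n_e = 1.0e-3                   # electron density [cm^-3]
L = 3.086e24                   # path length ~ 1 Mpc [cm]
T_cmb = 2.725                  # CMB temperature [K]

tau = sigma_T * n_e * L        # scattering optical depth, ~0.2%
y = kTe_over_mec2 * tau        # Comptonization parameter
dT = -2.0 * y * T_cmb          # RJ decrement
print(f"tau = {tau:.1e}, y = {y:.1e}, dT = {dT * 1e3:.2f} mK")  # ~ -0.2 mK
```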
Perhaps the most amazing property of the SZE is best illustrated by Eq. 1: the observed brightness of the effect, $\Delta T$, is independent of the distance (redshift) to the cluster! Both $\Delta T$ and $T_{CMB}$ suffer the same cosmological dimming, so their ratio is simply a function of the cluster properties; and, of course, we observe the same $T_{CMB}$ toward any cluster at any redshift. The integrated SZE flux from a cluster scales as $\Delta T\,D_A^{-2}$ and is thus distance dependent. For observations with angular resolution sufficient to resolve the effect, however, the observable is independent of redshift. This requirement is met easily, as clusters are large objects ($\sim$ Mpc) and therefore subtend an arcminute or more at any redshift, assuming a reasonable cosmology. This wonderful property of the SZE makes it a potentially powerful probe of the high-redshift universe.
As discussed in section 2, sensitive observations of the SZE provide a powerful and unique cosmological tool. Our group has used interferometric techniques to make high quality images of the SZE toward more than 25 clusters with redshifts spanning 0.13 to 0.83. In section 3, we give a brief review of our observing technique and present some of the resulting images. We discuss our progress on the Hubble constant and mass density of the universe in section 4, and finally in section 5 we briefly review our future plans.
## 2 Cosmology with the Sunyaev-Zel’dovich Effect
Beyond the novelty of measuring a hole in the sky, or adding to the already overwhelming evidence that the CMB is indeed cosmic, measurements of the SZE offer a powerful and unique way to test cosmological models and determine the values of the cosmological parameters which describe our universe.
Here we concentrate on the SZE from galaxy clusters, the largest known collapsed objects in the universe. Galaxy clusters themselves provide important signposts of structure in the universe: their properties and evolution provide valuable constraints on cosmological models. In addition to producing the SZE, the hot intracluster gas is traced by its strong bremsstrahlung radiation at X-ray wavelengths. The deep gravitational potential can be probed by X-ray spectroscopy (to measure the intracluster gas temperature), by the velocity dispersion of member galaxies, and by the gravitational lensing of background galaxies.

We now discuss briefly the cosmological probes offered directly by SZE measurements of clusters, both on their own and when used in conjunction with the measurements discussed above.
### 2.1 Hubble constant; expansion history of the universe
Perhaps the SZE is best known for providing a means to measure the distance to a galaxy cluster independent of any other distance scale; it does not require normalization to distances derived for more nearby objects as is the case with the “distance ladder” that is commonly used. The distance is derived by combining a measurement of the SZE with a measurement of the X-ray intensity of the cluster. To understand how this is possible it is only necessary to consider the different dependencies of the SZE and X-ray observables on the electron density of the intracluster gas.
The SZE depends simply on the integrated density, as indicated in Eq. 1. The X-ray intensity is proportional to the density squared, as given by
$$I_X(E,\delta _E)=\frac{1}{4\pi (1+z)^4}\,\frac{\mu _e}{\mu _H}\int n_e^2\,\Lambda (E',\delta _{E'},T_e)\,dl\qquad (2)$$
where $I_X(E,\delta _E)$ is the X-ray intensity observed within a fixed detector bandwidth $\delta _E$ at energy $E$, $\mu _j\equiv \rho /(n_jm_p)$, $\rho$ is the gas mass density, $m_p$ is the mass of the proton, $\Lambda (E',\delta _{E'},T_e)$ is the emissivity within a bandwidth $\delta _{E'}$ at energy $E'$ of a gas at temperature $T_e$ in the cluster rest frame ($E'=(1+z)E$, $\delta _{E'}=(1+z)\delta _E$), and the integral is again along the line of sight. Note that $\Lambda (E',T_e)$ decreases steeply with $z$ over the energy range of interest (usually 0.5 to 10 keV), so that the detectability of a cluster with a given X-ray detector typically falls even more steeply than $1/(1+z)^4$, in sharp contrast to the essential redshift independence of the SZE signal.

Due to the different electron-density dependences of the observed SZE ($\propto n_e$) and X-ray ($\propto n_e^2$) emission, one can use the two measurements to constrain the electron distribution, or at least to constrain the parameters within a given model of the gas. One also needs the electron temperature, which can be measured using X-ray spectroscopy. The derived gas distribution can then be compared with the measured angular distribution to solve for the distance to the cluster. A comparison with the mean redshift of the member galaxies then gives the Hubble constant $H_o$.
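Schematically, for a toy uniform, isothermal cluster of angular size $\theta$ and line-of-sight depth $L$ (a drastic simplification of the model fitting actually used), eliminating $n_e$ between the two observables gives the following sketch, with all constants and geometric factors suppressed:

```latex
% Toy sketch (uniform isothermal cluster) of the distance method:
%   SZE:   \Delta T \propto n_e T_e L  =>  n_e \propto \Delta T/(T_e L)
%   X-ray: I_X \propto n_e^2 \Lambda(T_e) L
%              \propto (\Delta T)^2 \Lambda(T_e) / (T_e^2 L)
%   =>     L \propto (\Delta T)^2 \Lambda(T_e) / (T_e^2 I_X)
% Comparing L with the observed angular size \theta gives the distance:
D_A = \frac{L}{\theta}, \qquad H_0 \simeq \frac{c z}{D_A} \quad (z \ll 1).
```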
The beauty of this technique for measuring the Hubble constant is that it is completely independent of other techniques and can be used to measure distances at high redshift. While the method depends only on well-understood properties of fully ionized plasmas, there are several sources of uncertainty in the derivation of the Hubble constant for any particular cluster (e.g., see \[Birkinshaw 1999\]). The largest uncertainty is the assumption that the cluster size along the line of sight is comparable to its size in the plane of the sky. For this reason it is desirable to use a large survey of clusters to determine $H_o$. A survey of perhaps a few hundred clusters, with redshifts ranging from nearby to beyond one, would allow the technique to be used to trace the expansion history of the universe, providing a valuable independent check of the type Ia supernova results (e.g., \[Riess et al. 1998\]; \[Perlmutter et al. 1999\]).
### 2.2 Peculiar velocities
The line-of-sight velocity of a cluster with respect to the CMB rest frame, the peculiar velocity of the cluster, can be measured by separating the kinetic from the thermal SZE. From inspection of Fig. 1, it is clear that this is best done by observing at frequencies near the null of the thermal effect at $\sim 218$ GHz. Such measurements offer the ability to measure the peculiar velocities of clusters at high redshift, which could be used to constrain large-scale gravitational perturbations of the Hubble flow. The intrinsic weakness of the effect makes its observation challenging: upper limits have been placed on the peculiar velocities of clusters (\[Holzapfel et al. 1997a\]), but a clear detection of the kinetic effect has not yet been obtained. The kinetic SZE is a unique and potentially powerful cosmological tool, as it provides the only known way to measure large-scale velocity fields at high redshift, and we are likely to see continued progress in these difficult observations. Our SZE observations discussed later in this paper were made at 30 GHz, where the kinetic effect is clearly of second order to the thermal effect; it is therefore not discussed further here.
### 2.3 Baryon mass fraction of clusters; $`\mathrm{\Omega }_M`$
A measurement of the SZE toward a cluster provides a measure of the mass of the intracluster medium, which is typically several times the mass responsible for the light from the galaxies. Combining the gas mass with a measure of the total mass, determined either from gravitational lensing observations or from the virial theorem and the X-ray determined electron temperature, one can determine the fraction of the mass of the galaxy cluster contained in baryons. An estimate of the baryonic-to-total mass ratio on the scale of massive galaxy clusters is important as it should represent the universal value; mass segregation is not believed to occur on the scales from which massive clusters condense, $\sim 1000\,\mathrm{Mpc}^3$, although as noted below a small fraction of the baryons ($\sim 15\%$) are likely lost during the cluster formation process.

The universal mass fraction of baryons to total matter, $\Omega _B/\Omega _M$, where $\Omega \equiv \rho /\rho _c$ and $\rho _c$ is the critical density of the universe, can in turn be used to estimate $\Omega _M$, given the value determined for $\Omega _B$ from big-bang nucleosynthesis calculations and the observed primordial abundances of the light elements (\[Burles and Tytler 1998\]).
### 2.4 Cluster evolution; probing the high redshift universe
Perhaps the most powerful use of the SZE for cosmology will be to probe the high-redshift universe. Sensitive, non-targeted surveys of large regions of the sky for the SZE will provide an inventory of clusters independent of redshift. The cut-off of the SZE cluster sample would be a lower mass limit set by the sensitivity of the observations; there would be no redshift cut-off. The sensitivity of such a survey to cosmological parameters is shown in Fig. 2, where the predicted redshift distribution of all clusters with masses greater than $2\times 10^{14}M_\odot$ is shown (\[Holder, Carlstrom, and Mohr 1999\]). A sufficiently sensitive SZE survey would also be able to image the ionized gas expected in filamentary large-scale structure, particularly the filaments associated with the formation of clusters.

The number density of clusters, particularly massive clusters, as a function of redshift has been used by a number of authors to estimate $\Omega _M$ from X-ray surveys (e.g., \[Bahcall 1999\]). The possible use of SZE surveys to constrain $\Omega _M$ has been investigated as well (\[Barbosa et al. 1996\]; \[Colafrancesco et al. 1997\]), along with more recent studies which include the effect of cluster gas evolution (\[Holder and Carlstrom 1999\]).
## 3 SZE Observations
### 3.1 Previous observations
In the twenty years following the first papers by Sunyaev and Zel’dovich, there were few firm detections of the SZE despite a considerable amount of effort (\[Birkinshaw 1991\]). Over the last several years, however, observations of the effect have progressed from low S/N detections and upper limits to high confidence detections and detailed images. The dramatic increase in the quality of the observations is due to improvements both in low-noise detection systems and in observing techniques, usually using specialized instrumentation with which the systematics that often prevent one from obtaining the required sensitivity are carefully controlled. Such systematics include, for example, the spatial and temporal variations in the emission from the atmosphere and the surrounding ground.
A recent review of the observations can be found in \[Birkinshaw 1999\]. Here we briefly review a few of the results to provide the reader with a measure of the quality of the data presently available. We then concentrate on our own technique and imaging results.
The first measurements of the SZE were made with single dish radio telescopes. Successful detections were obtained, although the reported results show considerable scatter, reflecting the difficulty of the measurement. Recent state-of-the-art single dish observations at radio wavelengths (\[Herbig et al. 1995\]; \[Myers et al. 1997\]), millimeter wavelengths (\[Wilbanks et al. 1994\]; \[Holzapfel et al. 1997b\]) and submillimeter wavelengths (\[Lamarre et al. 1998\]) have resulted in significant detections of the effect and limited mapping.
Interferometric techniques have been used to produce high quality images of the SZE (e.g., \[Jones et al. 1993\]; \[Grainge et al. 1993\]; \[Carlstrom, Joy, and Grego 1996\]; \[Grainge et al. 1996\]; \[Carlstrom et al. 1998\]). As discussed in the next section, the high stability and spatial filtering made possible with interferometry is being exploited to make these observations.
### 3.2 Interferometry basics
The stability of interferometry is attractive for avoiding many of the systematics which can prevent one from imaging very weak emission. The ‘beam’ of a two-element interferometer – all arrays can be thought of as a collection of $`n(n1)/2`$ two-element interferometers – is essentially a cosine corrugation on the sky; it is exactly analogous to a two slit interference pattern. The interferometer does the job of multiplying the sky brightness at the observing frequency by a cosine, integrating the product and producing the time average amplitude. The correlator performs the multiplication and time averaging. In practice two correlators are used to obtain the cosine and sine patterns. Simply put, the interferometer measures directly the Fourier transform of the sky at a spatial frequency given by $`B/\lambda `$, where $`B`$ is the component of the vector connecting the two telescopes (the baseline) oriented perpendicular to the source. Of course, a range of baselines are actually being used at any one time due to the finite size of the apertures of the individual array elements; this simply reflects that the sky has been multiplied by the gain pattern (beam) of the individual telescopes or, equivalently, that the Fourier transform measured is the transform of the true sky brightness convolved with the transform of the beam of an array element.
That the Fourier transform measured is the transform of the sky brightness convolved with the transform of the beam of an individual element has important consequences. The transformed beam is the auto-convolution of the aperture, and thus it is identically zero beyond the diameter of the telescopes expressed in wavelengths. The interferometer is therefore only sensitive to angular scales (spatial frequencies) near $`B/\lambda `$. It is not sensitive to gradients in the atmospheric emission or other large scale emission features. There are several other features which allow an interferometer to achieve extremely low systematics. For example, only signals which correlate between array elements will lead to detected signal. For most interferometers, this means that the bulk of the sky noise for each element will not lead to signal. Amplifier gain instabilities for an interferometer will not lead to large offsets or false detections, although if severe they may lead to somewhat noisy signal amplitude. To remove the effects of offsets or drifts in the electronics as well as the correlation of spurious (non-celestial) sources of noise, the phase of the signal received at each telescope is modulated before the correlator and then the proper demodulation is applied to the output of the correlator.
Lastly, the spatial filtering of an interferometer allows the emission from radio point sources to be separated from the SZE emission. This is possible because at high angular resolution ($`<10^{\prime \prime }`$) the SZE contributes very little flux. This allows one to use long baselines – which give high angular resolution – to detect and monitor the flux of radio point sources while using short baselines to measure the SZE. Nearly simultaneous monitoring of the point sources is important as they are often time variable. The signal from the point sources is then easily removed, if they are not too strong, from the short baseline data which are sensitive to the SZE. Nevertheless, one would still prefer to operate at shorter radio wavelengths, since the point sources typically have falling spectra ($`\lambda ^{0.7}`$), while the SZE signal scales as $`\lambda ^{2}`$.
For the reasons given above, interferometers offer an ideal way to achieve high brightness sensitivity for extended low-surface-brightness sources, at least at radio wavelengths. Most interferometers, however, were not designed for imaging low-surface-brightness sources. Interferometers are traditionally built to obtain high angular resolution with large individual elements for maximum sensitivity to small scale emission. Galaxy clusters, on the other hand, are large objects. Most of the SZE signal will be distributed smoothly on angular scales of an arcminute or more for even the most distant clusters, scales to which most existing interferometric arrays are simply not sensitive.
### 3.3 OVRO and BIMA interferometric imaging of the SZE
Our solution to matching the angular scales important for observations of distant galaxy clusters (a few arcminutes) with those provided by an interferometer is to use existing mm-wave arrays, which were also designed for high resolution, but to degrade the angular resolution by outfitting the arrays with cm-wave receivers. This solution has a number of attractive features in addition to matching the arcminute scales appropriate for SZE measurements of distant clusters. Specifically, we are able to use very low noise cm-wave receivers. We are able to secure large amounts of observing time during the summer months when the atmosphere is not ideal for mm-wave observations, but reasonable for cm-wave. As one might imagine, there are also a number of advantages to using telescopes which have surface and pointing accuracies ten times higher than needed.
Over the last few summers we have thus installed low-noise, HEMT amplifier based receivers on the OVRO<sup>1</sup><sup>1</sup>1An array of six 10.4 m mm-wave telescopes located in the Owens Valley, CA and operated by Caltech. and BIMA<sup>2</sup><sup>2</sup>2An array of ten 6.1 m mm-wave telescopes located at Hat Creek, California and operated by the Berkeley-Illinois-Maryland-Association mm-wave arrays in California. The receivers operate from 26 - 36 GHz (30 GHz corresponds to 1 cm), and down convert the signal to the standard intermediate frequencies used at the two arrays. All of the normal array correlators, electronics, and software are used. About 1 GHz can be processed at one time with the standard observatory electronics. Data are taken from array configurations in which roughly half the baselines are sensitive only to point sources and the other half are as short as possible for maximum sensitivity to the SZE.
In Fig. 3 we show a subset of the 27 clusters that we have imaged so far using the OVRO and BIMA arrays. Contaminating emission from radio point sources was removed before making these images. The short baseline data were emphasized when these images were made to enhance the surface brightness sensitivity. The data do contain significant information at higher resolution. A catalog of the point sources we measured toward our SZE-imaged clusters can be found in \[Cooray et al. 1998\]. Fig. 3 illustrates the high quality data we have been able to obtain. The typical integration time used for each cluster is about 45 hours, which, when one includes calibration and other overheads, takes roughly 8 to 10 transits of the source.
Fig. 3 also clearly demonstrates the independence of the SZE from redshift. All of the clusters shown have similarly high X-ray luminosities and, as can be seen, the strength of the SZE is similar for each one. For the associated X-ray emission, however, the rate of received photons falls off drastically with redshift.
The SZE image is shown (contours) overlaid on the corresponding X-ray emission in Fig. 4. As expected, the SZE, which traces the density of the cluster gas, is less centrally peaked than the X-ray emission, which traces the square of the density.
## 4 Results
We have been developing methods for best extracting the cosmological parameters from our SZE data. One has to keep in mind that the images shown in Fig. 3, while they give a direct indication of the data quality, have been heavily filtered by the response of the interferometer. Therefore, we do all of our analyses in the Fourier domain where the data are actually measured.
We fit models for the cluster gas distribution to our data by first constructing a realization of the model, projecting it in the plane of the sky, multiplying it by the angular gain response of our instrument (the beam of an array element), and then Fourier transforming it to the spatial frequency of our data points. A comparison of the model Fourier transform points and the data points is used to construct the likelihood of the model.
For a suitable model we start with a standard $`\beta `$-model for the gas distribution
$$n_e(r)=n_e(0)\left(1+\frac{r^2}{R_c^2}\right)^{-3\beta /2}$$
(3)
where $`n_e(0)`$ is the central electron density, $`R_c`$ is the core radius, and $`\beta `$ a power law index. In practice, we generalize this to be elliptical, fitting for the position angle and major/minor axes ratio. We also fit the location of the cluster as well as the locations and fluxes of any radio point sources. We assume the gas is isothermal. For each set of parameters a likelihood (actually a $`\chi ^2`$ since the noise is Gaussian) is obtained.
When X-ray data are also used in the analysis, a similar procedure is followed and the joint likelihood is obtained.
### 4.1 Hubble constant
We are performing joint analyses of X-ray and our SZE data to determine the distance to each cluster. The result of such a derivation for the cluster MS 0451-0305 is shown in Fig. 5 (\[Reese et al. 1999\]). The values for $`H_o`$ (in units of $`\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$) are shown as contours on a plot of the model parameters $`\beta `$ and $`R_c`$. The 1, 2 and 3$`\sigma `$ confidence intervals are shown by the greyscale regions. Note the striking independence of the derived value of $`H_o`$ along the clear degeneracy in the fits to these model parameters.
There are uncertainties in the absolute calibration of both the SZE and X-ray data. However, the largest uncertainty in the derived $`H_o`$ for any one cluster is the unknown cluster aspect ratio, since we implicitly assume the cluster dimension along the line of sight is equal to its dimension across the line of sight. That our analysis of MS 0451-0305 gives a value for $`H_o`$ so close to the currently accepted range is somewhat fortuitous. To obtain a trustworthy estimate for $`H_o`$ using the SZE one must obtain a large sample of distances and be careful of selection effects. We are in the process of conducting such a survey now. In his recent review, Birkinshaw found the average of all published SZE-derived values of $`H_o`$ to be $`60\pm 10\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ (\[Birkinshaw 1999\]). Note, however, as Birkinshaw points out, that the underlying observations share common and fairly uncertain calibrations.
### 4.2 Cluster gas mass fractions and constraints on $`\mathrm{\Omega }_M`$
We have determined the gas mass fractions for 18 clusters using only our SZE data and electron temperatures derived from X-ray spectroscopy (\[Grego 1999\]; \[Grego et al. 1999a\]). We did not use X-ray imaging data, and in that sense our results provide a test of similar analyses done on X-ray data sets. Details of our technique in which we account for the expected bias of the gas mass fraction in the core region of a cluster can also be found in \[Grego et al. 1999b\]. The resulting gas mass fractions as a function of redshift are shown in Fig. 6. In most of the clusters, the uncertainty is dominated by poorly constrained electron temperatures.
The degeneracy in the fit parameters $`R_c`$ and $`\beta `$ is worse than that shown in Fig. 5 since we are not using the X-ray data to help constrain the fits. We find that the derived gas mass and the gas mass fraction are insensitive to the particular values of $`R_c`$ and $`\beta `$ within the region of acceptable fits, as long as these quantities are only calculated within about an arcminute radius region. This is not surprising, as the SZE signal is directly proportional to the mass and our data constrains the signal well on these scales. We relate the gas fractions we derive at arcminute scales to the expected gas fractions near the cluster virial radius using scaling relations derived from numerical simulations of cluster formation \[Evrard 1997\]).
The eighteen clusters range in redshift from 0.171 to 0.826. Assuming an open $`\mathrm{\Omega }_M=0.3,\mathrm{\Omega }_\mathrm{\Lambda }=0`$ cosmology, we find $`f_gh_{100}=0.075_{-0.010}^{+0.007}`$ scaled to $`r_{500}`$, the radius at which the overdensity of the cluster is 500, nearly the virial radius. Using the six clusters with redshifts less than 0.26, so that the assumed cosmology has little effect on the derived gas mass fractions, we find $`f_gh_{100}=0.085_{-0.015}^{+0.011}`$, again scaled to $`r_{500}`$.
Our results are in good agreement with those derived by \[Myers et al. 1997\] for a sample of 3 nearby clusters, and also with values derived from X-ray emission (\[David, Jones, and Forman 1995\]; \[Mohr, Mathiesen, and Evrard 1999\]), suggesting that systematic differences between the methods, like the clumping of intracluster gas, are not severe.
Assuming that the baryonic mass fraction for clusters reflects the universal value and using the constraint on $`\mathrm{\Omega }_B`$ determined from BBN and primordial abundance measurements (\[Burles and Tytler 1998\]), $`\mathrm{\Omega }_Bh_{100}^2=0.019\pm 0.001`$, the low redshift gas mass fractions imply $`\mathrm{\Omega }_Mh_{100}\simeq \mathrm{\Omega }_B/f_g=0.22_{-0.03}^{+0.05}`$. A best guess of $`\mathrm{\Omega }_M`$ can be made by attempting to account for the baryons lost during the cluster formation process (15%) and for the baryons contained in the galaxies, as well as using a best guess for the Hubble constant. This gives $`\mathrm{\Omega }_M\simeq 0.23_{-0.04}^{+0.06}h_{65}^1`$. The uncertainties should be taken with caution as they do not reflect the uncertainties in the assumptions used to extrapolate from $`\mathrm{\Omega }_B/f_g`$ to $`\mathrm{\Omega }_M`$.
## 5 Discussion and Future Plans
We are able to obtain high quality imaging data of the SZE toward distant galaxy clusters. Our constraint on the gas mass fractions of galaxy clusters provides further support that we live in a low $`\mathrm{\Omega }_M`$ universe. Our $`H_o`$ results are still preliminary, but show promise of providing a good independent estimate for the expansion of the universe.
We are continuing to work on our analysis tools, including testing thoroughly the effects of our assumptions such as the isothermality of the cluster gas and the limits of $`\beta `$-models.
We are still a long way from being able to constrain whether the universe is accelerating or not. And, even though our sensitivity is quite high, we are not yet in a position to conduct large, sensitive, non-targeted surveys for the SZE over large regions of the sky. We have, however, clearly demonstrated the feasibility of using similar interferometric techniques to conduct such a survey.
A dedicated array with 10 elements of 2.5 m diameter and using our cm-wave receivers would increase the speed of the system dramatically. In Fig. 7 we show the expected number of clusters detected with such an instrument (see \[Holder, Carlstrom, and Mohr 1999\]). One month of observing with this array would deliver more clusters with redshifts higher than $`z\sim 0.5`$ than are found in the deepest, large area X-ray cluster catalogs (\[Vikhlinin et al. 1998\]). The plot assumes an $`\mathrm{\Omega }_M=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ cosmological model; if $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ the counts are comparable (see Fig. 2).
The potential of using the SZE as a tool to help determine the cosmological parameters is now being realized nearly three decades after it was first proposed. We expect improvements to move forward at a rapid pace, with sensitive SZE surveys of the high-redshift universe starting in the next few years.
We thank the OVRO and BIMA observatories for their crucial contributions to the SZE observations. JC thanks organizers of the symposium for a remarkably informative and enjoyable symposium. JC acknowledges support from a NSF-YI grant and the David and Lucile Packard Foundation.
## 1 Introduction
The fragmentation process, which transforms colored partons into colorless hadrons, is typically characterized by the fragmentation function. The $`b`$ quark fragmentation is of special importance in the study of quark fragmentation because the large $`b`$ quark mass provides a natural mass scale in QCD calculations and allows the application of perturbative QCD and the heavy quark expansion. It can also help to isolate non-perturbative effects in fragmentation, the least understood part of the fragmentation process.
According to the factorization theorem, the heavy quark fragmentation function can be described as a convolution of perturbative and non-perturbative effects. For the $`b`$ quark, the perturbative calculation is in principle understood. Nonperturbative effects have been parametrized in both model-dependent and model-independent approaches. The fact that several models are yet to be experimentally tested is an indication of a lack of precise and conclusive experimental results, if not of theoretical understanding.
It is indeed experimentally challenging to measure the $`b`$ quark fragmentation function to a level of precision sufficient to distinguish among the various models. Since the $`b`$ quark fragmentation function is the probability distribution of the fraction of the momentum of the $`b`$ quark carried by the $`B`$ hadron, the most sensitive experimental determination of the shape of the $`b`$ fragmentation function is expected to come from a precise direct measurement of the $`B`$ hadron energy (or momentum) distribution. The difficulty in precisely measuring the $`B`$ hadron energy distribution stems mostly from the fact that most of the $`B`$ decays can only be partially reconstructed, causing a significant fraction of the $`B`$ energy to be missing from the $`B`$ decay vertex. Recent direct measurements at LEP and SLD have used overall energy-momentum constraints and calorimetric information to extract this missing energy in a sample of semi-leptonic $`B`$ decays. These measurements suffer from low statistics as well as poor $`B`$ energy resolution at low energy, and hence have a relatively weak discriminating power between different shapes of the fragmentation function. Indirect measurements such as the measurements of the lepton spectrum and charged multiplicity have been used to constrain the average $`B`$ energy. However, these measurements are not very sensitive to the shape of the energy distribution.
Here we report preliminary results of SLD’s new measurement of the $`B`$ hadron energy distribution. We developed a novel kinematic technique which uses only charged tracks associated with the $`B`$ vertex and the $`B`$ flight direction to reconstruct individual $`B`$ hadron energies with good resolution over the full kinematic range, while achieving an efficiency much higher than previous measurements.
## 2 $`B`$ hadron Selection
A general description of the SLD detector can be found elsewhere. The excellent tracking and vertexing capabilities at SLD are exploited in the reconstruction of $`B`$ decays in $`Z^0\to b\overline{b}`$ events.
A set of cuts is applied to select hadronic $`Z^0`$ events well-contained within the detector acceptance. The efficiency for selecting a well-contained $`Z^0\to q\overline{q}(g)`$ event is estimated to be above 96%, independent of quark flavor. The selected sample comprises 111,569 events, with an estimated $`0.10\pm 0.05\%`$ background contribution dominated by $`Z^0\to \tau ^+\tau ^{}`$ events.
The $`B`$ sample is selected using a topological vertexing technique based on the detection and measurement of charged tracks, which is described in full detail in Ref. . The topological vertexing algorithm is applied separately to the set of “quality” tracks in each hemisphere (defined with respect to the event thrust axis).
When a candidate vertex is found, tracks not associated with this “seed” vertex are attached to it if they are more likely to have originated from this vertex than from the IP. This track-attachment procedure is tuned to minimize false track-vertex associations. Attaching a false track to the vertex affects the vertex kinematics more than failing to associate a genuine track originating from the vertex, and hence can cause significant degradation in the reconstructed $`B`$ energy resolution. On average, this procedure attaches 0.8 tracks to each seed vertex; about 92% of the reconstructed tracks which originated from the $`B`$ decay are associated with the reconstructed vertex, and 98% of the vertex-associated tracks are true $`B`$ decay tracks.
Figure 1. Distribution of the reconstructed $`P_t`$-corrected vertex mass in the 1996-97 data (points). Also shown is the prediction of the Monte Carlo simulation, for which the flavor composition is indicated.
The mass of the reconstructed vertex, $`M_{ch}`$, is calculated by assigning each track the charged-pion mass. Because of the tiny SLC IP error and the excellent vertex resolution, the $`B`$ flight direction, pointing along the line joining the IP and the secondary vertex, is well-measured. Therefore the transverse momentum $`P_t`$ of tracks associated with the vertex relative to the $`B`$ flight direction is also well-measured. The mass of the missing particles can then be partially compensated by using $`P_t`$ to form the “$`P_t`$-corrected mass”, $`M_{P_t}=\sqrt{M_{ch}^2+P_t^2}+|P_t|`$. To minimize the effect of large fluctuations of $`P_t`$ at short decay length, the minimum transverse momentum (obtained by varying the vertex axis within the $`1\sigma `$ limits set by the measured interaction point (IP) and the reconstructed seed vertex) is used to determine $`M_{P_t}`$. Figure 1 shows the distribution of the $`P_t`$-corrected mass (points) for the 32,492 accepted hemispheres in the data sample, and the corresponding simulated distribution. To obtain a high purity $`B`$ sample, $`B`$ hadron candidates are selected by requiring $`M_{P_t}`$ $`>`$ 2.0 GeV/$`c^2`$. A total of 19,604 hemispheres are selected, with an estimated efficiency for selecting a true $`B`$-hemisphere of 40.1%, and a sample purity of 98.2%. The contributions from light-flavor events in the sample are 0.15% for primary u, d and s events and 1.6% for c events.
## 3 $`B`$ Energy Reconstruction
Since the sum of the charged track energy at the $`B`$ vertex, $`E_{ch}`$, is known, we are only concerned with finding the energy of particles missing from the $`B`$ vertex.
Figure 2. The relative deviation of the maximum missing mass from the true missing mass for Monte Carlo simulated $`B`$ hadron decays, which is divided into three categories: $`B^0`$ and $`B^\pm `$ (open), $`B_s^0`$ (cross-hatched), and $`\mathrm{\Lambda }_b`$ (dark hatched).
Figure 3. Distribution of the reconstructed $`M_{0max}^2`$ for the selected vertices in the 1996-97 data (points). Also shown is the prediction of the Monte Carlo simulation.
Given a reconstructed $`B`$ vertex, an upper bound on the mass of the missing particles from the vertex is found to be $`M_{0max}^2=M_B^2-2M_B\sqrt{M_{ch}^2+P_t^2}+M_{ch}^2,`$ where we assume the true mass of the $`B`$ hadron decayed at the vertex, $`M_B`$, equals the $`B^0`$ meson mass. Since the true missing mass $`M_0^{true}`$ is often rather close to $`M_{0max}`$ (Figure 2), $`M_{0max}`$ is subsequently used as an estimate of $`M_0^{true}`$ ($`M_0^{true}`$ is set to 0 if the reconstructed $`M_{0max}^2`$ is negative) to solve for the longitudinal momentum of the missing particles from kinematics:
$`P_{0l}=\frac{M_B^2-\left(M_{ch}^2+P_t^2\right)-\left(M_0^2+P_t^2\right)}{2\left(M_{ch}^2+P_t^2\right)}P_{chl}`$,
and hence the missing $`B`$ energy from the vertex, $`E_0`$. The $`B`$ hadron energy is then $`E_B=E_{ch}+E_0`$. Since $`0\le M_0^{true}\le M_{0max}`$, the $`B`$ energy is well-constrained when $`M_{0max}`$ is small. In addition, most $`uds`$ and $`c`$ backgrounds are concentrated at large $`M_{0max}`$. We choose an ad hoc upper cut on $`M_{0max}^2`$ to achieve a nearly $`x_B`$-independent $`B`$ selection efficiency. Figure 3 shows the distribution of $`M_{0max}^2`$ after these cuts, where the data and Monte Carlo simulation are in good agreement. A total of 1920 vertices in the 1996-97 data satisfy all selection cuts. Figure 4 shows the distribution of the reconstructed scaled weakly decaying $`B`$ hadron energy for data and Monte Carlo. The overall $`B`$ selection efficiency is 3.9% and the estimated purity is about 99.5%. The efficiency as a function of $`x_B^{true}`$ is shown in Figure 5. We examine the normalized difference between the true and reconstructed $`B`$ hadron energies for Monte Carlo events. The distribution is fitted by a double Gaussian, resulting in a core width (the width of the narrower Gaussian) of 10.4% and a tail width (the width of the wider Gaussian) of 23.6% with a core fraction of 83%. Figure 6 shows the core and tail widths as a function of $`x_B^{true}`$. The core width depends only weakly on the true $`x_B`$, another feature that makes this method unique.
Figure 4. Distribution of the reconstructed scaled $`B`$ hadron energy for data(points) and simulation (histogram). The solid histogram shows the non-$`b\overline{b}`$ background.
Figure 5. Distribution of the efficiency as a function of the true $`B`$ energy.
Figure 6. The fitted core and tail widths of the $`B`$ energy resolution as a function of the true scaled $`B`$ hadron energy.
## 4 Tests of Functional Forms and Models
After background subtraction, the distribution of the reconstructed scaled $`B`$ hadron energy is compared with a set of ad hoc functional forms of the $`x_B`$ distribution in order to estimate the variation in the shape of the $`x_B`$ distribution. For each functional form, the default SLD Monte Carlo is re-weighted and then compared with the data bin-by-bin, and a $`\chi ^2`$ is computed. The minimum $`\chi ^2`$ is found by varying the input parameter(s). The Peterson function, two ad hoc generalizations of the Peterson function (ALEPH 1 and 2) and a 7th-order polynomial (whose behavior is rather unphysical at low $`x_B`$; it will not be considered hereafter) are consistent with the data. We exclude the functional forms described in BCFY, Collins and Spiller, Kartvelishvili, Lund and a power function of the form $`f(x)=x^\alpha (1-x)^\beta `$. The results are shown in Figure 7 and in Tables 1 and 2.
Figure 7. Each figure shows the background-subtracted distribution of reconstructed $`B`$ hadron energy for the data (points) and for the simulation (histograms) based on the respective optimised input fragmentation function. The $`\chi ^2`$ fit uses data in the bins between the two arrows.
| Function | $`D(x)`$ | Reference |
| --- | --- | --- |
| ALEPH 1 | $`\frac{1-x}{x}(1-\frac{c}{x}-\frac{d}{1-x})^{-2}`$ | |
| ALEPH 2 | $`\frac{1}{x}(1-\frac{c}{x}-\frac{d}{1-x})^{-2}`$ | |
| BCFY | $`\frac{x\left(1-x\right)^2}{\left[1-\left(1-r\right)x\right]^6}[3+\sum _{i=1}^4(-x)^if_i(r)]`$ | |
| Collins and Spiller | $`(\frac{1-x}{x}+\frac{\left(2-x\right)ϵ_b}{1-x})(1+x^2)(1-\frac{1}{x}-\frac{ϵ_b}{1-x})^{-2}`$ | |
| Kartvelishvili et al. | $`x^{\alpha _b}(1-x)`$ | |
| Lund | $`\frac{1}{x}(1-x)^a\mathrm{exp}(-bm_{\mathrm{}}^2/x)`$ | |
| Peterson et al. | $`\frac{1}{x}(1-\frac{1}{x}-\frac{ϵ_b}{1-x})^{-2}`$ | |
| Polynomial | $`x(1-x)(1+\sum _{i=1}^5p_ix^i)`$ | (see text) |
| Power | $`x^\alpha (1-x)^\beta `$ | (see text) |
Table 1. Fragmentation functional forms used in comparison with the data. For the BCFY function $`f_1(r)=3(3-4r)`$, $`f_2(r)=12-23r+26r^2`$, $`f_3(r)=(1-r)(9-11r+12r^2)`$, and $`f_4(r)=3(1-r)^2(1-r+r^2)`$. A polynomial function and a power function are also included.
| Function | $`\chi ^2/dof`$ | Parameters | $`x_B`$ |
| --- | --- | --- | --- |
| ALEPH 1 | 15.2/15 | $`c=0.860_{-0.018}^{+0.019}`$ | 0.718$`\pm `$0.005 |
| | | $`d=0.019\pm 0.002`$ | |
| ALEPH 2 | 23.7/15 | $`c=0.938_{-0.034}^{+0.039}`$ | 0.720$`\pm `$0.005 |
| | | $`d=0.036\pm 0.002`$ | |
| BCFY | 52.3/16 | $`r=0.2316_{-0.0088}^{+0.0092}`$ | 0.713$`\pm `$0.005 |
| Collins and Spiller | 54.3/16 | $`ϵ_b=0.044_{-0.004}^{+0.005}`$ | 0.714$`\pm `$0.005 |
| Kartvelishvili et al. | 79.6/16 | $`\alpha _b=4.15\pm 0.11`$ | 0.720$`\pm `$0.004 |
| Lund | 139.1/15 | $`a=2.116_{-0.114}^{+0.118}`$ | 0.720$`\pm `$0.005 |
| | | $`bm_{\mathrm{}}^2=0.408_{-0.070}^{+0.073}`$ | |
| Peterson et al. | 26.0/16 | $`ϵ_b=0.0338_{-0.0022}^{+0.0020}`$ | 0.719$`\pm `$0.005 |
| Polynomial | 14.4/12 | $`p_1=12.4\pm 0.4`$ | (see text) |
| | | $`p_2=58.7\pm 1.9`$ | |
| | | $`p_3=130.5\pm 4.2`$ | |
| | | $`p_4=136.8\pm 4.3`$ | |
| | | $`p_5=53.7\pm 1.8`$ | |
| Power | 78.5/15 | $`\alpha =3.91_{-0.24}^{+0.25}`$ | 0.722$`\pm `$0.005 |
| | | $`\beta =0.894_{-0.097}^{+0.102}`$ | |
Table 2. Results of the $`\chi ^2`$ fit of fragmentation functions to the reconstructed $`B`$ hadron energy distribution after background subtraction. The minimum $`\chi ^2`$, number of degrees of freedom and corresponding parameter values are listed. Errors are statistical only.
We then test several heavy quark fragmentation models. Since the fragmentation functions are usually functions of an experimentally inaccessible variable (e.g. $`z=(E+p_{\parallel })_H/(E+p_{\parallel })_Q`$), it is necessary to use a Monte Carlo generator such as JETSET to generate events according to a given input heavy quark fragmentation function. The resulting $`B`$ energy distribution is then used to re-weight the Monte Carlo distribution before comparing with the data. The minimum $`\chi ^2`$ is found by varying the input parameter(s). Within the context of the JETSET Monte Carlo, the Bowler and Lund models are consistent with the data, while the Peterson model is found to be inconsistent with the data.
## 5 Systematic Errors
We have considered both detector and physics modelling systematics. The dominant systematic error is related to charged-track transverse momentum resolution smearing, which has been evaluated conservatively and can be reduced with a detailed study. All physics systematics are rather small. Other relevant systematic effects, evaluated by varying the event selection cuts and the assumed $`B`$ hadron mass, are also found to be small. In each case, the conclusions about the shape of the $`B`$ energy distribution hold, and the systematics in the average $`B`$ hadron energy are added in quadrature to obtain the total systematic error.
## 6 Conclusion
Taking advantage of SLC’s small beam-spot and SLD’s high vertex resolution, we have developed a new kinematic technique to measure, for the first time, the $`B`$ hadron energy distribution in $`Z^0`$ decays with good resolution over the full kinematic range. Using 1996-97 data, we exclude several functional forms of the $`B`$ energy distribution and the JETSET + Peterson fragmentation model. The mean of the scaled weakly decaying $`B`$ hadron energy distribution is obtained by taking the average of the means of the three functional forms which are found to be consistent with the data. The r.m.s. of the three means is regarded as a minimum error on model-dependence. We find
$`<x_B>=0.719\pm 0.005(stat)\pm 0.007(syst)\pm 0.001(model)`$
where the small model-dependence error indicates that $`<x_B>`$ is relatively insensitive to the allowed forms of the shape of the fragmentation function. The precision in the measured average $`B`$ hadron energy represents a substantial improvement over previous direct measurements. All results are preliminary.
## <sup>∗∗</sup>List of Authors
K. Abe,<sup>(2)</sup> K. Abe,<sup>(19)</sup> T. Abe,<sup>(27)</sup> I.Adam,<sup>(27)</sup> T. Akagi,<sup>(27)</sup> N. J. Allen,<sup>(4)</sup> A. Arodzero,<sup>(20)</sup> W.W. Ash,<sup>(27)</sup> D. Aston,<sup>(27)</sup> K.G. Baird,<sup>(15)</sup> C. Baltay,<sup>(37)</sup> H.R. Band,<sup>(36)</sup> M.B. Barakat,<sup>(14)</sup> O. Bardon,<sup>(17)</sup> T.L. Barklow,<sup>(27)</sup> J.M. Bauer,<sup>(16)</sup> G. Bellodi,<sup>(21)</sup> R. Ben-David,<sup>(37)</sup> A.C. Benvenuti,<sup>(3)</sup> G.M. Bilei,<sup>(23)</sup> D. Bisello,<sup>(22)</sup> G. Blaylock,<sup>(15)</sup> J.R. Bogart,<sup>(27)</sup> B. Bolen,<sup>(16)</sup> G.R. Bower,<sup>(27)</sup> J. E. Brau,<sup>(20)</sup> M. Breidenbach,<sup>(27)</sup> W.M. Bugg,<sup>(30)</sup> D. Burke,<sup>(27)</sup> T.H. Burnett,<sup>(35)</sup> P.N. Burrows,<sup>(21)</sup> A. Calcaterra,<sup>(11)</sup> D.O. Caldwell,<sup>(32)</sup> D. Calloway,<sup>(27)</sup> B. Camanzi,<sup>(10)</sup> M. Carpinelli,<sup>(24)</sup> R. Cassell,<sup>(27)</sup> R. Castaldi,<sup>(24)</sup> A. Castro,<sup>(22)</sup> M. Cavalli-Sforza,<sup>(33)</sup> A. Chou,<sup>(27)</sup> E. Church,<sup>(35)</sup> H.O. Cohn,<sup>(30)</sup> J.A. Coller,<sup>(5)</sup> M.R. Convery,<sup>(27)</sup> V. Cook,<sup>(35)</sup> R. Cotton,<sup>(4)</sup> R.F. Cowan,<sup>(17)</sup> D.G. Coyne,<sup>(33)</sup> G. Crawford,<sup>(27)</sup> C.J.S. Damerell,<sup>(25)</sup> M. N. Danielson,<sup>(7)</sup> M. Daoudi,<sup>(27)</sup> N. de Groot,<sup>(27)</sup> R. Dell’Orso,<sup>(23)</sup> P.J. Dervan,<sup>(4)</sup> R. de Sangro,<sup>(11)</sup> M. Dima,<sup>(9)</sup> A. D’Oliveira,<sup>(6)</sup> D.N. Dong,<sup>(17)</sup> P.Y.C. Du,<sup>(30)</sup> R. Dubois,<sup>(27)</sup> B.I. Eisenstein,<sup>(12)</sup> V. Eschenburg,<sup>(16)</sup> E. Etzion,<sup>(36)</sup> S. Fahey,<sup>(7)</sup> D. Falciai,<sup>(11)</sup> C. Fan,<sup>(7)</sup> J.P. Fernandez,<sup>(33)</sup> M.J. Fero,<sup>(17)</sup> K.Flood,<sup>(15)</sup> R. Frey,<sup>(20)</sup> T. Gillman,<sup>(25)</sup> G. Gladding,<sup>(12)</sup> S. Gonzalez,<sup>(17)</sup> E.L. Hart,<sup>(30)</sup> J.L. Harton,<sup>(9)</sup> A. Hasan,<sup>(4)</sup> K. Hasuko,<sup>(31)</sup> S. J. Hedges,<sup>(5)</sup> S.S. Hertzbach,<sup>(15)</sup> M.D. Hildreth,<sup>(27)</sup> J. Huber,<sup>(20)</sup> M.E. Huffer,<sup>(27)</sup> E.W. Hughes,<sup>(27)</sup> X.Huynh,<sup>(27)</sup> H. Hwang,<sup>(20)</sup> M. Iwasaki,<sup>(20)</sup> D. J. Jackson,<sup>(25)</sup> P. Jacques,<sup>(26)</sup> J.A. Jaros,<sup>(27)</sup> Z.Y. Jiang,<sup>(27)</sup> A.S. Johnson,<sup>(27)</sup> J.R. Johnson,<sup>(36)</sup> R.A. Johnson,<sup>(6)</sup> T. Junk,<sup>(27)</sup> R. Kajikawa,<sup>(19)</sup> M. Kalelkar,<sup>(26)</sup> Y. Kamyshkov,<sup>(30)</sup> H.J. Kang,<sup>(26)</sup> I. Karliner,<sup>(12)</sup> H. Kawahara,<sup>(27)</sup> Y. D. Kim,<sup>(28)</sup> R. King,<sup>(27)</sup> M.E. King,<sup>(27)</sup> R.R. Kofler,<sup>(15)</sup> N.M. Krishna,<sup>(7)</sup> R.S. Kroeger,<sup>(16)</sup> M. Langston,<sup>(20)</sup> A. Lath,<sup>(17)</sup> D.W.G. Leith,<sup>(27)</sup> V. Lia,<sup>(17)</sup> C.-J. S. Lin,<sup>(27)</sup> X. Liu,<sup>(33)</sup> M.X. Liu,<sup>(37)</sup> M. Loreti,<sup>(22)</sup> A. Lu,<sup>(32)</sup> H.L. Lynch,<sup>(27)</sup> J. Ma,<sup>(35)</sup> G. Mancinelli,<sup>(26)</sup> S. Manly,<sup>(37)</sup> G. Mantovani,<sup>(23)</sup> T.W. Markiewicz,<sup>(27)</sup> T. Maruyama,<sup>(27)</sup> H. Masuda,<sup>(27)</sup> E. Mazzucato,<sup>(10)</sup> A.K. McKemey,<sup>(4)</sup> B.T. Meadows,<sup>(6)</sup> G. Menegatti,<sup>(10)</sup> R. Messner,<sup>(27)</sup> P.M. Mockett,<sup>(35)</sup> K.C. 
Moffeit,<sup>(27)</sup> T.B. Moore,<sup>(37)</sup> M.Morii,<sup>(27)</sup> D. Muller,<sup>(27)</sup> V.Murzin,<sup>(18)</sup> T. Nagamine,<sup>(31)</sup> S. Narita,<sup>(31)</sup> U. Nauenberg,<sup>(7)</sup> H. Neal,<sup>(27)</sup> M. Nussbaum,<sup>(6)</sup> N.Oishi,<sup>(19)</sup> D. Onoprienko,<sup>(30)</sup> L.S. Osborne,<sup>(17)</sup> R.S. Panvini,<sup>(34)</sup> H. Park,<sup>(20)</sup> C. H. Park,<sup>(29)</sup> T.J. Pavel,<sup>(27)</sup> I. Peruzzi,<sup>(11)</sup> M. Piccolo,<sup>(11)</sup> L. Piemontese,<sup>(10)</sup> E. Pieroni,<sup>(24)</sup> K.T. Pitts,<sup>(20)</sup> R.J. Plano,<sup>(26)</sup> R. Prepost,<sup>(36)</sup> C.Y. Prescott,<sup>(27)</sup> G.D. Punkar,<sup>(27)</sup> J. Quigley,<sup>(17)</sup> B.N. Ratcliff,<sup>(27)</sup> T.W. Reeves,<sup>(34)</sup> J. Reidy,<sup>(16)</sup> P.L. Reinertsen,<sup>(33)</sup> P.E. Rensing,<sup>(27)</sup> L.S. Rochester,<sup>(27)</sup> P.C. Rowson,<sup>(8)</sup> J.J. Russell,<sup>(27)</sup> O.H. Saxton,<sup>(27)</sup> T. Schalk,<sup>(33)</sup> R.H. Schindler,<sup>(27)</sup> B.A. Schumm,<sup>(33)</sup> J. Schwiening,<sup>(27)</sup> S. Sen,<sup>(37)</sup> V.V. Serbo,<sup>(36)</sup> M.H. Shaevitz,<sup>(8)</sup> J.T. Shank,<sup>(5)</sup> G. Shapiro,<sup>(13)</sup> D.J. Sherden,<sup>(27)</sup> K. D. Shmakov,<sup>(30)</sup> C. Simopoulos,<sup>(27)</sup> N.B. Sinev,<sup>(20)</sup> S.R. Smith,<sup>(27)</sup> M. B. Smy,<sup>(9)</sup> J.A. Snyder,<sup>(37)</sup> H. Staengle,<sup>(9)</sup> A. Stahl,<sup>(27)</sup> P. Stamer,<sup>(26)</sup> R. Steiner,<sup>(1)</sup> H. Steiner,<sup>(13)</sup> M.G. Strauss,<sup>(15)</sup> D. Su,<sup>(27)</sup> F. Suekane,<sup>(31)</sup> A. Sugiyama,<sup>(19)</sup> S. Suzuki,<sup>(19)</sup> M. Swartz,<sup>(27)</sup> A. Szumilo,<sup>(35)</sup> T. Takahashi,<sup>(27)</sup> F.E. Taylor,<sup>(17)</sup> J. Thom,<sup>(27)</sup> E. Torrence,<sup>(17)</sup> N. K. Toumbas,<sup>(27)</sup> A.I. Trandafir,<sup>(15)</sup> J.D. Turk,<sup>(37)</sup> T. Usher,<sup>(27)</sup> C. Vannini,<sup>(24)</sup> J. Va’vra,<sup>(27)</sup> E. Vella,<sup>(27)</sup> J.P. Venuti,<sup>(34)</sup> R. Verdier,<sup>(17)</sup> P.G. Verdini,<sup>(24)</sup> S.R. Wagner,<sup>(27)</sup> D. L. Wagner,<sup>(7)</sup> A.P. Waite,<sup>(27)</sup> Walston, S.,<sup>(20)</sup> J.Wang,<sup>(27)</sup> C. Ward,<sup>(4)</sup> S.J. Watts,<sup>(4)</sup> A.W. Weidemann,<sup>(30)</sup> E. R. Weiss,<sup>(35)</sup> J.S. Whitaker,<sup>(5)</sup> S.L. White,<sup>(30)</sup> F.J. Wickens,<sup>(25)</sup> B. Williams,<sup>(7)</sup> D.C. Williams,<sup>(17)</sup> S.H. Williams,<sup>(27)</sup> S. Willocq,<sup>(27)</sup> R.J. Wilson,<sup>(9)</sup> W.J. Wisniewski,<sup>(27)</sup> J. L. Wittlin,<sup>(15)</sup> M. Woods,<sup>(27)</sup> G.B. Word,<sup>(34)</sup> T.R. Wright,<sup>(36)</sup> J. Wyss,<sup>(22)</sup> R.K. Yamamoto,<sup>(17)</sup> J.M. Yamartino,<sup>(17)</sup> X. Yang,<sup>(20)</sup> J. Yashima,<sup>(31)</sup> S.J. Yellin,<sup>(32)</sup> C.C. Young,<sup>(27)</sup> H. Yuta,<sup>(2)</sup> G. Zapalac,<sup>(36)</sup> R.W. Zdarko,<sup>(27)</sup> J. Zhou.<sup>(20)</sup>
(The SLD Collaboration)
<sup>(1)</sup>Adelphi University, South Avenue- Garden City,NY 11530, <sup>(2)</sup>Aomori University, 2-3-1 Kohata, Aomori City, 030 Japan, <sup>(3)</sup>INFN Sezione di Bologna, Via Irnerio 46 I-40126 Bologna (Italy), <sup>(4)</sup>Brunel University, Uxbridge, Middlesex - UB8 3PH United Kingdom, <sup>(5)</sup>Boston University, 590 Commonwealth Ave. - Boston,MA 02215, <sup>(6)</sup>University of Cincinnati, Cincinnati,OH 45221, <sup>(7)</sup>University of Colorado, Campus Box 390 - Boulder,CO 80309, <sup>(8)</sup>Columbia University, Nevis Laboratories P.O.Box 137 - Irvington,NY 10533, <sup>(9)</sup>Colorado State University, Ft. Collins,CO 80523, <sup>(10)</sup>INFN Sezione di Ferrara, Via Paradiso,12 - I-44100 Ferrara (Italy), <sup>(11)</sup>Lab. Nazionali di Frascati, Casella Postale 13 I-00044 Frascati (Italy), <sup>(12)</sup>University of Illinois, 1110 West Green St. Urbana,IL 61801, <sup>(13)</sup>Lawrence Berkeley Laboratory, Dept.of Physics 50B-5211 University of California- Berkeley,CA 94720, <sup>(14)</sup>Louisiana Technical University, , <sup>(15)</sup>University of Massachusetts, Amherst,MA 01003, <sup>(16)</sup>University of Mississippi, University,MS 38677, <sup>(17)</sup>Massachusetts Institute of Technology, 77 Massachussetts Avenue Cambridge,MA 02139, <sup>(18)</sup>Moscow State University, Institute of Nuclear Physics 119899 Moscow Russia, <sup>(19)</sup>Nagoya University, Nagoya 464 Japan, <sup>(20)</sup>University of Oregon, Department of Physics Eugene,OR 97403, <sup>(21)</sup>Oxford University, Oxford, OX1 3RH, United Kingdom, <sup>(22)</sup>Universita di Padova, Via F. Marzolo,8 I-35100 Padova (Italy), <sup>(23)</sup>Universita di Perugia, Sezione INFN, Via A. Pascoli I-06100 Perugia (Italy), <sup>(24)</sup>INFN, Sezione di Pisa, Via Livornese,582/AS Piero a Grado I-56010 Pisa (Italy), <sup>(25)</sup>Rutherford Appleton Laboratory, Chiton,Didcot - Oxon OX11 0QX United Kingdom, <sup>(26)</sup>Rutgers University, Serin Physics Labs Piscataway,NJ 08855-0849, <sup>(27)</sup>Stanford Linear Accelerator Center, 2575 Sand Hill Road Menlo Park,CA 94025, <sup>(28)</sup>Sogang University, Ricci Hall Seoul, Korea, <sup>(29)</sup>Soongsil University, Dongjakgu Sangdo 5 dong 1-1 Seoul, Korea 156-743, <sup>(30)</sup>University of Tennessee, 401 A.H. Nielsen Physics Blg. - Knoxville,Tennessee 37996-1200, <sup>(31)</sup>Tohoku University, Bubble Chamber Lab. - Aramaki - Sendai 980 (Japan), <sup>(32)</sup>U.C. Santa Barbara, 3019 Broida Hall Santa Barbara,CA 93106, <sup>(33)</sup>U.C. Santa Cruz, Santa Cruz,CA 95064, <sup>(34)</sup>Vanderbilt University, Stevenson Center,Room 5333 P.O.Box 1807,Station B Nashville,TN 37235, <sup>(35)</sup>University of Washington, Seattle,WA 98105, <sup>(36)</sup>University of Wisconsin, 1150 University Avenue Madison,WS 53706, <sup>(37)</sup>Yale University, 5th Floor Gibbs Lab. - P.O.Box 208121 - New Haven,CT 06520-8121.
# Magic Islands and Barriers to Attachment: A Si/Si(111)7×7 Growth Model
## Abstract
Surface reconstructions can drastically modify growth kinetics during initial stages of epitaxial growth as well as during the process of surface equilibration after termination of growth. We investigate the effect of activation barriers hindering attachment of material to existing islands on the density and size distribution of islands in a model of homoepitaxial growth on Si(111)7$`\times `$7 reconstructed surface. An unusual distribution of island sizes peaked around “magic” sizes and a steep dependence of the island density on the growth rate are observed. “Magic” islands (of a different shape as compared to those obtained during growth) are observed also during surface equilibration.
Investigation of island structures formed during the initial stages of epitaxial growth allows us to explore the kinetic mechanisms that govern the ordering of deposited atoms. Much attention has recently been devoted to the dependence of the island density on time and growth conditions, as well as to the bell-shaped distribution of island sizes, whose origin can be traced back to the distribution of island capture zones. However, real growth systems are invariably more complicated than the idealized models of epitaxy commonly used. For example, the presence of surface reconstructions can completely change the growth behavior.
In homoepitaxy of Si on the Si(111)7$`\times `$7 reconstructed surface, a process of “reconstruction destruction” was described by Tochihara and Shimada. The need to cancel the surface reconstruction around a growing island gives rise to barriers to attachment of new material to existing islands. Growth with barriers to attachment has already been studied theoretically: the dependence of the island density on growth conditions was explored using analytic methods, while kinetic Monte Carlo (KMC) simulations of a simple growth model revealed an island size distribution multiple-peaked around “magic” sizes.
Here we present a detailed KMC model of Si/Si(111)7$`\times `$7 molecular beam epitaxy (MBE), with barriers to attachment included. With the help of this model we investigate the time- and growth-rate dependence of the island density, the shape of the island-size distribution, as well as island decay and filling of vacancy islands on the surface. The results of our simulations compare favorably to available experimental data about the Si/Si(111)7$`\times `$7 system. We also discuss those features of the model kinetics that are specific to growth with barriers to attachment.
Dynamics of Si/Si(111)7$`\times `$7 MBE growth was experimentally studied by Voigtländer et al. The most interesting feature observed, the existence of kinetically stabilized magic sizes in the island size distribution, was reported and KMC modeled in Ref. . The model discussed in this work is a generalization of the model from Ref. which is in turn based on the “reconstruction destruction” model of Si/Si(111)7$`\times `$7 growth proposed by Tochihara and Shimada. The material is deposited in units (cf. below) that are randomly placed onto sites of a honeycomb lattice. These sites represent half-unit cells (HUCs) of the Si(111)7$`\times `$7 surface reconstruction. Two types of HUCs exist: in unfaulted HUCs (marked U), a surface atom bilayer follows the bulk bilayer stacking; in faulted HUCs (marked F), the surface bilayer is rotated by 30<sup>o</sup> with respect to the bulk, forming a stacking fault. Material in a HUC may be either non-transformed (marked $`X`$, modeling the free Si adatoms diffusing on the reconstructed Si surface) or transformed (marked $`T`$, corresponding to a HUC where the material underwent the reconstruction destruction process and subsequent crystallization). The model thus becomes de facto a two-species one. The non-transformed units of the material randomly walk on the surface, meet each other and transform. Transformation is an activated process; the activation energy for transformation is supposed to be higher in F HUCs due to the need to remove the stacking fault.
In the experiment, deposited material does not diffuse as HUCs. The use of units of deposited material allows us to model easily collective processes during ”reconstruction destruction” around island edges. Since, in general, the processes at island edges determine behavior of growth models (as they determine growth behavior of real systems), the simplification of material deposition and surface diffusion should not affect the model behavior in a substantial way. Our simulation scheme is coarse-grained and ignores all processes on the length scales smaller than a HUC and on the time scales shorter than the time required to transport material from one HUC to its nearest neighbor.
The hopping rate for a HUC hop is $`\nu _D`$$`=`$$`\nu _0\mathrm{exp}(-E_D/k_BT)`$ where $`E_D`$$`=`$$`E_S+(x+t)E_N^X`$ for an X-HUC, $`E_D`$$`=`$$`E_S+xE_N^X+tE_N^T`$ for a T-HUC, $`x`$ and $`t`$ being the numbers of X and T neighbors, respectively, $`E_N^X`$ the bond strength of $`XX`$ and $`XT`$ pairs, $`E_N^T`$ the bond strength of a $`TT`$ pair, and $`E_S`$ the surface barrier to diffusion. The rate of an X-HUC transformation is $`\nu _T`$$`=`$$`\nu _0\mathrm{exp}(-E_T/k_BT)`$ where $`E_T^F=E_A-tE_{\mathrm{edge}}`$ for an F-HUC, $`E_T^U=E_A-tE_{\mathrm{edge}}-E_{\mathrm{diff}}`$ for a U-HUC, $`E_A`$ being the barrier to attachment, $`E_{\mathrm{edge}}`$ a decrease in the barrier due to a transformed neighbor, and $`E_{\mathrm{diff}}`$ the barrier difference for F- and U-HUC overgrowth. Transformation of an island begins at an F-HUC with $`\ge `$2 X-neighbors, and the rate of this nucleation process is $`\nu `$$`=`$$`\nu _0\mathrm{exp}[-(E_T-E_{\mathrm{edge}})/k_BT]`$. The model has seven parameters; here we report results for $`\nu _0`$=10<sup>13</sup> s<sup>-1</sup>, $`E_S`$=1.5 eV, $`E_N^T`$=0.3 eV, $`E_N^X`$=0.1 eV, $`E_A`$=2.3 eV, $`E_{\mathrm{edge}}`$=$`E_{\mathrm{diff}}`$=0.35 eV, which gave the best agreement with experimental results. Using the model, we tried to reproduce both growth and equilibration processes on the Si/Si(111)7$`\times `$7 surface on real time and spatial scales. The HUC in the model is thus considered to be 1 bilayer (BL) of Si(111) thick, of a triangular shape with an edge length of $`a`$=26.9 Å and consisting of 49 Si atoms. All calculations were performed on a 200$`\times `$200 HUC lattice with periodic boundary conditions.
Growth. We tried to fit the exponent $`\chi `$ given by
$$N\propto F^\chi $$
(1)
where the dependence of the island density $`N`$ on flux $`F`$ at a constant temperature is measured. The experimental value of $`\chi `$=0.75 for $`T`$=680 K and $`T`$=770 K was reported in Ref. . In this experiment, samples were prepared at a given temperature by deposition of $``$0.15 BL Si on the Si(111)7$`\times `$7 surface followed by rapid quenching to room temperature. The experimental morphologies of the layer can therefore be considered snapshots from the Si/Si(111)7$`\times `$7 surface morphology evolution.
In the model, we calculated the flux dependence of the density of transformed (i.e., crystalline) islands $`N`$ at constant total coverage $`\mathrm{\Theta }_{\mathrm{tot}}`$ (Ref. ). Results are shown in Fig. 1. At lower fluxes, a power-law $`N\propto F^\chi `$ dependence with $`\chi _{680}`$=0.76$`\pm `$0.03, $`\chi _{770}`$=0.75$`\pm `$0.04 is observed. In the experiment, disordered growth occurred at high fluxes. In the model, deviations from the power-law behavior of $`N(F)`$ are observed.
The presence of the barriers to attachment in the model introduces specific features into its dynamical behavior. In the dependence of $`N`$ on the total coverage $`\mathrm{\Theta }_{\mathrm{tot}}`$ (Fig. 2a), the nucleation onset (fast increase of $`N`$) takes place at higher coverages than in the standard (minimal) growth model (Fig. 2b), Ref. . The maxima of $`N(\mathrm{\Theta }_{\mathrm{tot}})`$ are sharp and occur at higher $`\mathrm{\Theta }_{\mathrm{tot}}`$ than the broad maxima of the standard model.
Island growth in our model starts with the transformation of 3 neighboring HUCs. Due to the barriers to attachment this happens only after a certain time has elapsed since HUC clustering. This time does not scale with flux, in contrast to the time for clustering of units of material. Transformation becomes the rate-limiting process of island formation. By the time the transformation begins, more material has been deposited in the case of higher flux $`F`$. This shifts the nucleation onset and the position of $`N_{\mathrm{max}}(\mathrm{\Theta }_{\mathrm{tot}})`$ to higher $`\mathrm{\Theta }_{\mathrm{tot}}`$. With increasing $`F`$, the amount of non-transformed material at a given $`\mathrm{\Theta }_{\mathrm{tot}}`$ increases (Fig. 2c), and so does the disorder of the quenched sample.
A newborn island composed of 3 units can further grow or decay. The competition between growth and decay of transformed clusters results in a steep decrease of $`N(\mathrm{\Theta }_{\mathrm{tot}})`$ once $`N_{\mathrm{max}}`$ is passed. This decrease is not due to coalescence, because the islands at $`N`$$`=`$$`N_{\mathrm{max}}`$ are small. Both the delay in island formation and the instability of islands leave traces in the time evolution of the mean island size, cf. Fig. 3.
The shift of $`N_{\mathrm{max}}`$ to higher $`\mathrm{\Theta }_{\mathrm{tot}}`$ than in standard model and the fast decrease of $`N(\mathrm{\Theta }_{\mathrm{tot}})`$ after reaching $`N_{\mathrm{max}}`$ contribute to the high value of $`\chi `$ observed at $`\mathrm{\Theta }_{\mathrm{tot}}`$=const during growth with barriers to attachment. With the barrier to attachment decreasing and the nearest-neighbor bond strength increasing, $`\chi `$ decreases.
Analytical theories usually relate the value of the growth exponent $`\chi `$ to $`i^{*}`$, the number of material units in a “critical” (i.e., largest unstable) island in the growth system. For the determination of $`i^{*}`$, we can use the formula $`\chi =2i^{*}/(i^{*}+3)`$ derived by Kandel. $`\chi `$ in the model varies smoothly with $`E_S`$, $`E_N^X`$, and $`E_N^T`$, so that the corresponding values of $`i^{*}`$ are non-integer numbers between $`i^{*}`$=1 ($`\chi `$=0.5) and $`i^{*}`$=2 ($`\chi `$=0.8). In addition, a dependence of the island density on $`E_S`$ (the substrate contribution to the hopping barrier) was found, in contrast to predictions of Ref. .
The instability of growing islands does not strongly affect morphologies obtained for Si/Si(111)7$`\times `$7 MBE growth. We observe the characteristic multiple-peaked island size distribution (Fig. 4c, d), but with broader peaks (in better agreement with the experimental results) as compared to the model with detachment of material from islands forbidden. In the morphologies of experimental samples, non-transformed clusters of Si adatoms formed during quenching of samples are visible (Fig. 4b). The density of non-transformed material may be experimentally determined and compared to the model results.
Equilibration. The decay of Si/Si(111)7$`\times `$7 reconstructed 1 BL-high adatom (A) and vacancy (V) islands was experimentally studied in Refs. . Using STM at an elevated temperature, the authors of Refs. and followed a number of isolated (a nearest island or a step edge at a distance more than 800 Å) adatom or vacancy islands and studied the temperature dependence of their decay rates. The decay rates $`\nu ^A`$, $`\nu ^V`$ showed Arrhenius behavior $`\nu ^{A,V}`$$`=`$$`\nu _0^{A,V}\mathrm{exp}(E_a^{A,V}/k_BT)`$ with $`\nu _0^A`$$`=`$2$`\times `$10<sup>11±1</sup>adatoms$``$s<sup>-1</sup>, $`E_a^A`$$`=`$1.5$`\pm `$0.1 eV for adatoms, $`\nu _0^V`$$`=`$3$`\times `$10<sup>9±1</sup>adatoms$``$s<sup>-1</sup>, $`E_a^V`$$`=`$1.3$`\pm `$0.2 eV for vacancies, respectively. The decay rate of vacancy islands was found to be approx. 5 times lower than that of adatom islands.
With our model, we traced the evolution of a 96-HUC compact adatom or vacancy island placed on a vicinal Si surface (U-type steps, the terrace width of 480 nm, the distance from the descending step edge of 240 nm) equilibrated at a given temperature. Step edges on the vicinal surface form adatom sources and traps necessary for true disappearance of a single adatom or vacancy island in a model with periodic boundary conditions.
The temperature dependence of the decay rates for adatom and vacancy islands in our model is shown in Fig. 5a, b. With the parameters listed above, the decay rates in the model are higher than the experimental ones ($`\nu _0^A`$$`=`$3$`\times `$10<sup>14.5±0.5</sup> adatoms s<sup>-1</sup>, $`E_a^A`$$`=`$2.1$`\pm `$0.1 eV, $`\nu _0^V`$$`=`$1$`\times `$10<sup>14±1</sup> adatoms s<sup>-1</sup>, $`E_a^V`$$`=`$2.0$`\pm `$0.1 eV), and the decay rate of vacancy islands is approx. 2 times lower than the decay rate of adatom islands.
The authors of Ref. attributed the difference between the decay rates of adatom and vacancy islands to the effect of the Ehrlich-Schwoebel (step-edge) barrier in the Si/Si(111)7$`\times `$7 system. We do not believe that the Ehrlich-Schwoebel barrier plays any role: Growth experiments provide no compelling evidence of the presence of an appreciable Ehrlich-Schwoebel barrier at step edges on Si/Si(111)7$`\times `$7 surface within the relevant temperature range.
We also modeled adatom and vacancy island decay with the barrier to attachment “switched off”. The decay rates thus obtained were lower ($`\nu _0^A`$$`=`$5$`\times `$10<sup>13±1</sup> adatoms s<sup>-1</sup>, $`E_a^A`$$`=`$2.1$`\pm `$0.2 eV, $`\nu _0^V`$$`=`$1$`\times `$10<sup>12±1</sup> adatoms s<sup>-1</sup>, $`E_a^V`$$`=`$2.1$`\pm `$0.2 eV), but the decay rate of vacancy islands was still approx. 2 times lower than for adatom islands. This observation agrees with results of a standard growth model on a square lattice. The difference in these decay rates on the vicinal surface thus seems to originate from the difference in geometry of adatom and vacancy island boundaries.
In Fig. 5c, a typical time evolution of the size of a decaying island in a model with the barriers to attachment is shown. We see that stable (“magic”) Si islands do exist. They correspond to equilibrium island shapes experimentally observed and differ from magic shapes observed during Si/Si(111)7$`\times `$7 growth. Magic islands are compact (2 nearest-neighbors for all perimeter HUCs) and the barrier to attachment prevents their shape from “being spoiled” by attachment of material surrounding the island. No stable shapes are observed during island decay for the model without barriers to attachment.
In this work, we presented a coarse-grained model of Si/Si(111)7$`\times `$7 MBE growth with an activation barrier to attachment of new material to existing islands implemented. We demonstrated that this barrier contributes to the steep growth-rate dependence of the island density observed in Si/Si(111)7$`\times `$7 MBE and helps to stabilize “magic” island shapes in both growth and relaxation experiments.
This work was supported by the Grant Agency of the Czech Republic, project GAČR 202/97/1109.
## 1 Introduction:
During recent years major discoveries have been made in very high energy (VHE) $`\gamma `$-ray astronomy. Ground-based experiments operating at TeV energies using the atmospheric Cherenkov technique have unambiguously detected $`\gamma `$-rays from a handful of sources at VHE (for a recent review see Ong, 1998). Six sources (Crab Nebula, Mrk501, Mrk421, Vela, SN1006 and PSR B1706-44) were observed with significance levels in excess of 6 standard deviations above background. Their spectra have been measured up to maximum energies of 10-50 TeV. $`\gamma `$-rays from these sources can initiate muons with a probability of order 1%. Muons originate from the decay of pions, kaons and charmed particles produced by shower photons and from muon pair production by photons. The production of high-energy muons in gamma-induced showers in the atmosphere and the possible detection of muons underground were discussed by Kudryavtsev and Ryazhskaya (1985), Stanev et al. (1985), Stanev (1986), Berezinsky et al. (1988), and Halzen et al. (1997). Since 1985, a number of experiments have looked for a muon excess from known sources, and so far the old results of the NUSEX (Battistoni et al., 1985) and SOUDAN (Marshak et al., 1985) collaborations, which detected a muon excess from Cyg X-3, have not been confirmed by other experiments (Ahlen et al., 1993; Giglietto et al., 1997; Poirier et al., 1997). A new interest in muon astronomy arises from the recent success of ground-based $`\gamma `$-ray astronomy.
Single muons observed by the LVD detector have been used to search for a possible flux from $`\gamma `$-ray sources discovered by ground-based experiments in the northern hemisphere, as well as from some other known sources. Here we present the results of this analysis.
## 2 Detector and Data Analysis:
The data used for the analysis were collected with the 1st LVD tower during 22789 hours of live time. The 1st LVD tower contains 38 identical modules and has dimensions 13 m $`\times `$ 6.3 m $`\times `$ 12 m. Each module consists of 8 scintillation counters and 4 layers of limited streamer tubes (tracking detector) attached to the bottom and to one vertical side of the metallic supporting structure. The geometric acceptance for an isotropic flux is about 1700 m<sup>2</sup> sr. A detailed description of the detector was given in Aglietta et al. (1992). The depth of the LVD site (42<sup>o</sup>27′ N and 13<sup>o</sup>34′ E) averaged over the muon flux is about 3650 hg/cm<sup>2</sup>, which corresponds to a median energy of vertical muons at sea level of about 2.2 TeV. LVD detects muons crossing from 3000 hg/cm<sup>2</sup> to more than 12000 hg/cm<sup>2</sup> of rock (which corresponds to median muon energies at sea level from 1.6 TeV to 40 TeV for conventional atmospheric muons) at zenith angles from 0<sup>o</sup> to 90<sup>o</sup> (on average, larger depths correspond to higher zenith angles). This allows us to analyse muons in different energy ranges. Three muon samples have been chosen in our study: 1) muons crossing all column densities of rock (the corresponding energy threshold, defined as the median surface energy of conventional vertical muons which cross the minimal rock thickness of 3 km w.e., is $`E_\mu ^{thr}`$=1.6 TeV), 2) muons crossing rock thickness greater than 5 km w.e. ($`E_\mu ^{thr}`$=3.9 TeV) and 3) muons crossing rock thickness greater than 7 km w.e. ($`E_\mu ^{thr}`$=8.4 TeV).
Muon celestial coordinates have been stored in a two-dimensional map with a cell size of 1<sup>o</sup> in right ascension (R.A.) and 0.01 in $`\mathrm{sin}\delta `$ (where $`\delta `$ is the declination).
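For concreteness, such an equal-solid-angle map can be sketched as follows (a minimal illustration; the array layout follows the binning above, the function name is ours):

```python
import numpy as np

# 360 bins of 1 degree in R.A. and 200 bins of 0.01 in sin(delta);
# equal steps in sin(delta) give cells of equal solid angle.
sky_map = np.zeros((360, 200), dtype=np.int64)

def fill_map(ra_deg, dec_rad):
    """Accumulate one muon with equatorial coordinates (R.A., delta)."""
    i = int(ra_deg) % 360
    j = min(int((np.sin(dec_rad) + 1.0) / 0.01), 199)
    sky_map[i, j] += 1
```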
## 3 Results and Discussion:
We used the shadowing of cosmic rays by the Moon to confirm the pointing accuracy of the LVD detector. The data used in the search for the shadow of the Moon included 1.85$`\times `$10<sup>6</sup> muons. For every muon arrival time, the R.A. and $`\delta `$ of the geocentric apparent position of the center of the Moon have been computed, taking into account the correction for parallax. The angle between the muon direction and the position of the center of the Moon has then been evaluated. We simulated the background events from the experimental zenith-azimuthal distribution of muons and from the mean time between two consecutive muons observed by LVD, run by run. The angle between each background event and the Moon position has been calculated in the same way. Figure 1 shows the angular density $`dN/d\mathrm{\Omega }`$ as a function of the angular distance from the center of the Moon. The observed deficit has a significance of 2.62 standard deviations (s.d.). This study confirms that the track reconstruction and pointing accuracy have no serious systematic errors.
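The geometric core of this test is the great-circle angle between a muon track and the Moon's apparent center; a standard formula (the parallax computation itself is not shown here) is:

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle angle (radians) between two equatorial directions;
    all inputs are in radians."""
    cos_psi = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))
```

Histogramming these angles for the data and for the simulated background gives the angular density $`dN/d\mathrm{\Omega }`$ of Fig. 1.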
The distribution of the data versus declination (after summing over R.A.) for the three selected depth ranges is presented in Figures 2a, 2b and 2c, together with the calculated background of atmospheric muons. The difference between the distributions for the three depth ranges reflects the different mountain structure above the LVD site in these regions. Figure 2d shows the distribution of the muon flux versus R.A. (summed over declination). The calculated background fits the data rather well for all three analysed depth ranges.
To test for the presence of a significant muon excess above the background in any angular cell of the sky, we used cells of equal solid angle with a width of $`3^o`$ in R.A. and 0.04 in $`\mathrm{sin}\delta `$, which corresponds approximately to the solid angle of a cone with a half angle of 1.5<sup>o</sup>. The deviation from the mean was computed using the Gaussian statistic $`\frac{n_{exp}-n_{mc}}{\sqrt{n_{mc}}}`$, where $`n_{exp}`$ is the number of muons observed in the experiment and $`n_{mc}`$ is the simulated background of atmospheric muons. No cell with an excess of more than 3.5 s.d. has been found for the first depth range. The Gaussian fit gives $`\chi ^2/Dof=0.86`$. For the second range we found two cells with excesses of 3.72 s.d. and 3.84 s.d., and a worse Gaussian fit with $`\chi ^2/Dof=1.41`$. We shifted the cells by $`1^o`$ in R.A. and 0.01 in $`\mathrm{sin}\delta `$, repeated this procedure, and did not find any excess of more than 3.5 s.d. in the overlapping bins. To have better statistics for the third depth range, the cells were enlarged to $`10^o`$ in R.A. and 0.1 in $`\mathrm{sin}\delta `$. Two bins with excesses of 4.19 s.d. and 3.83 s.d. were found, but they disappeared after the cells were shifted in the same way as for the second depth range. The Gaussian fit for this range gives $`\chi ^2/Dof=1.29`$. We conclude that we have not found a significant excess of muons over the background in any cell of the sky for the selected depth ranges. Figure 3 shows the results of this study.
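A sketch of this scan (illustrative only; the array names are not from the actual analysis code):

```python
import numpy as np

def scan_cells(n_exp, n_mc):
    """Gaussian significance per cell, for 2-D arrays of observed and
    simulated counts in cells of equal solid angle."""
    sig = (n_exp - n_mc) / np.sqrt(n_mc)
    hottest = np.unravel_index(np.argmax(sig), sig.shape)
    return sig, hottest

# Repeating the scan on a grid shifted by half a cell tests whether an
# apparent excess survives re-binning, as done above.
```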
We have also searched for a possible flux in narrow cones ($`1.5^o`$ half angle) around the positions of the sources observed by ground-based $`\gamma `$-ray experiments (visible in the northern hemisphere) and of some other sources which have drawn the attention of underground experiments during the last decade. Again, the data from the three depth ranges were considered. To obtain an upper limit (95% C.L.) on the flux from a source we used the following formula:
$$F=\frac{n_\mu }{fϵAkT}$$
(1)
where $`n_\mu `$ is the upper limit on the number of muons, calculated according to the procedure given in (Helene, 1983); $`f`$=0.9 is a correction factor accounting for muons scattered out of the $`1.5^o`$ half-angle cone, estimated by convolving the muon deflection in the rock with the simulated detector response function for single muons; $`ϵA`$ is the weighted average of the product of the efficiency of muon detection and reconstruction and the cross-sectional area of the detector perpendicular to the muon track (it depends on the source position and on the depth range; for Cyg X–3 and the first depth range $`ϵA`$=$`84\cdot 10^4`$ cm<sup>2</sup>); $`k`$ takes into account the fraction of time during which the source is visible; and $`T`$ is the exposure time. The results are presented in Table 1.
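For illustration, the Helene (1983) upper limit and Eq. (1) can be sketched as follows ($`f`$, $`ϵA`$ and $`T`$ are the values quoted above; the counts and the visibility factor $`k`$ are placeholders):

```python
from math import exp, factorial

def helene_upper_limit(n_obs, bkg, cl=0.95):
    """Poisson upper limit on a signal over a known background,
    following the prescription of Helene (1983)."""
    def p_le(n, mu):  # cumulative Poisson probability P(<= n | mu)
        return sum(exp(-mu) * mu**k / factorial(k) for k in range(n + 1))
    target = (1.0 - cl) * p_le(n_obs, bkg)
    lo, hi = 0.0, 10.0 * (n_obs + 1)
    for _ in range(60):  # bisect on the signal strength
        s = 0.5 * (lo + hi)
        if p_le(n_obs, s + bkg) > target:
            lo = s
        else:
            hi = s
    return 0.5 * (lo + hi)

n_mu = helene_upper_limit(n_obs=3, bkg=2.0)       # placeholder counts
f, epsA, k, T = 0.9, 84e4, 0.5, 22789 * 3600.0    # cm^2, s
F = n_mu / (f * epsA * k * T)                     # muons cm^-2 s^-1
```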
No statistically significant enhancement (more than 1.5 s.d.) of observed muons above the calculated background of atmospheric muons has been found from any source, either for the all-depth range or for slant depths greater than 5 km w.e. For the range of slant depths greater than 7 km w.e., 6 muon events against a background of 1.4 events were observed in the angular cell which includes Mrk 501 (this corresponds to a chance probability of 0.0031). However, the excess could be connected with the complicated mountain structure at these depths. To test this hypothesis, a special depth cut has been applied both to the observed and to the simulated events. For every muon we calculated the depths for nearby cells at an angular distance of no more than $`3^o`$. As the mean angular deviation of muons during their passage through the rock, mainly caused by multiple Coulomb scattering, is about 0.5<sup>o</sup> (Antonioli et al., 1997), we excluded events for which the slant depths in the nearby cells were less than 6.5 km w.e. The values in the column $`N_7^{cut}`$ of Table 1 show the results after this cut. As a result of the depth cut we are left with 4 muon events against 1 background event, which corresponds to a probability of 0.019.
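The quoted chance probabilities are simple Poisson tail probabilities and can be checked directly:

```python
from math import exp, factorial

def p_tail(n_obs, mu):
    """Poisson probability of observing n_obs or more counts given mean mu."""
    return 1.0 - sum(exp(-mu) * mu**k / factorial(k) for k in range(n_obs))

print(p_tail(6, 1.4))   # ~0.003, the cell containing Mrk 501
print(p_tail(4, 1.0))   # ~0.019, after the depth cut
```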
During 1997, Markarian 501 showed remarkable flaring activity and was the brightest source in the sky at TeV energies. We used the LVD data to look for a possible enhancement of observed muons above the calculated background during the period of Mrk501 activity, from the middle of March till the end of August 1997. The results are presented in Table 2.
## 4 Conclusions:
We have confirmed the absence of serious systematic errors both in the reconstruction of muon directions and in the pointing accuracy of the LVD detector by observing the shadow of the Moon with a statistical significance of 2.62 s.d. Three depth (muon energy) ranges have been selected to search for point sources of VHE gammas. No statistically significant excess of muons above the simulated background has been found in any angular cell of the sky, for any of the energy ranges included in the analysis. Nor have we found any enhancement of the muon flux from the angular cells which include some known astrophysical sources.
## 5 Acknowledgements:
We wish to thank the staff of the Gran Sasso Laboratory for their aid and collaboration. This work is supported by the Italian Institute for Nuclear Physics (INFN) and in part by the Italian Ministry of University and Scientific-Technological Research (MURST), the Russian Ministry of Science and Technologies, the US Department of Energy, the US National Science Foundation, the State of Texas under its TATRP program, and Brown University.
References
Aglietta, M. et al., 1992, Nuovo Cimento, 105A, 1793.
Ahlen, S. et al., 1993, ApJ, 412, 301.
Antonioli, P. et al., 1997, Astrop. Phys., 7, 357.
Battistoni, G. et al., 1985, Phys. Lett. B., 155, 465.
Berezinsky, V.S. et al., 1988, A$`\&`$A, 189, 306.
Giglietto, N. et al., 1997, Proc. 25th ICRC (Durban), 6, 377.
Halzen, F., Stanev, T. and Yodh, G.B., 1997, Phys. Rev. D, 55, 4475.
Helene, O. 1983, Nucl. Instr. Meth., 212, 319.
Kudryavtsev, V.A., and Ryazhskaya, O.G., 1985, JETP Lett., 42, 300.
Marshak, M. L. et al., 1985, Phys. Rev. Lett., 54, 2079.
Ong, R.A., 1998, Phys. Rep., 305, 93 and references therein.
Poirier, J. et al., 1997, Proc. 25th ICRC (Durban), 6, 370.
Stanev, T., Vankov, Ch. P., and Halzen, F., 1985, Proc. 17th ICRC (La Jolla), 7, 219.
Stanev, T., 1986, Phys. Rev. D, 33, 2740.
# Thick surface flows of granular materials: The effect of the velocity profile on the avalanche amplitude
## I General principles
### A Onset of avalanches
It is a matter of daily-life experience that the top surface of a mass of granular matter need not be horizontal, unlike that of a stagnant liquid. However, there exists an upper limit to the slope of the top surface, and the angle between this maximum slope and the horizontal is known, for non-cohesive material, as the Coulomb critical angle $`\theta _{max}`$. Above this angle, the material becomes unstable, and an avalanche at the surface may occur. The Coulomb angle is related to the friction properties through $`\mathrm{tan}\theta _{max}=\mu _i`$, where $`\mu _i`$ is an internal friction coefficient \[\].
As of today, the physical picture associated with the onset of the avalanche is still obscure. One could imagine a local scenario in which the dislodgement of some unstable grains leads, by amplification, to a global avalanche (see for instance \[\]). Alternatively, one can think of a delocalized mechanism \[\], in which a thin slice of material is destabilized and starts to slide as a whole. In the present paper, we will focus on the latter point of view.
It has been recently suggested \[\] that the thickness of the initial gliding layer should be of the order of $`\xi `$, the mesh size of the contact force network \[\]. For simple grain shapes (spheroidal), one expects $`\xi \simeq `$ 5–10 grain diameters $`d`$. The angle at which the avalanche process actually starts is of the order of $`\theta _{max}+\xi /L`$, where $`L`$ is the size of the free surface. At the moment of onset, our picture is that this initial layer starts to slip and is rapidly fluidized by collisions with the underlying heap, thereby generating a layer of rolling grains over the whole surface.
Now that we have proposed a description of the initial situation, we may turn to the model scheme accounting for the further evolution of the avalanche.
### B Saturation effects for thick avalanches
Some years ago, Bouchaud, Cates, Ravi Prakash and Edwards introduced a model to describe surface flows of granular materials \[\]. The model assumes a rather sharp distinction between immobile particles and rolling particles and, accordingly, introduces the following two important physical quantities (see Fig. 1): the local height of immobile particles $`h(x,t)`$ (where $`x`$ denotes the horizontal coordinate \[\] and $`t`$ the time), and the local amount of rolling particles $`R(x,t)`$.
The time evolution of $`h(x,t)`$ is written in the form
$$\frac{\partial h}{\partial t}=\gamma R(\theta _n-\theta )$$
(1)
where $`\theta \simeq \mathrm{tan}(\theta )=\partial h/\partial x`$ is the local slope, $`\gamma `$ a characteristic frequency, and $`\theta _n`$ the neutral angle of grains, at which erosion of the immobile grains balances accretion of the rolling grains. For the rolling particles, Bouchaud and co-workers wrote a convection-diffusion equation \[\] that was later simplified by de Gennes as \[\]
$$\frac{\partial R}{\partial t}=v\frac{\partial R}{\partial x}-\frac{\partial h}{\partial t}$$
(2)
where $`v`$ is the downhill typical velocity of the flow, and is assumed to be constant.
According to the Bouchaud-Cates-Ravi Prakash-Edwards (BCRE) model, $`\partial h/\partial t`$ is linear in $`R`$ \[see Eq. (1)\]. This is natural at small $`R`$, when all the rolling grains interact with the immobile particles. But as explained in Refs. \[\], this cannot hold when $`R`$ becomes larger than a given saturation length $`\xi ^{\prime}`$, since the grains in the upper part of the rolling phase are no longer in contact with the immobile grains. The length $`\xi ^{\prime}`$ is expected to be of the order of a few grain diameters $`d`$ \[\]. This led Boutreux, Raphaël, and de Gennes to propose \[\] a modified, saturated version of the BCRE Eq. (1), valid for thick surface flows and of the form
$$\frac{\partial h}{\partial t}=v_{uh}(\theta _n-\theta )\qquad (R>\xi ^{\prime})$$
(3)
where $`v_{uh}`$ is defined by $`v_{uh}\equiv \gamma \xi ^{\prime}`$. The constant $`v_{uh}`$ has the dimensions of a velocity.
The description of thick avalanches modelled by Eq. (3) was discussed in Ref. \[\]. However, one might encounter situations where the local amount $`R`$ of rolling particles is rather large except in some regions of space where it takes values smaller than $`\xi ^{\prime}`$. For such cases, various ‘generalized’ forms of the BCRE equations, valid both in the large and small $`R`$ limits and able to handle intermediate values, have been proposed \[\]. As we will be concerned only with thick flows, we will henceforth use the saturated form (3).
### C Velocity profiles in thick flows
We now consider the hypothesis made in Eq. (2) that the downhill typical convection velocity of the rolling grains $`v`$ is constant. As a matter of fact, $`v`$ might vary for two reasons.
First, $`v`$ depends on the local slope $`\partial h/\partial x`$ of the static bed, reflecting that the mean convection velocity should increase as the sandpile is further tilted. However, in the situation we are going to consider, the slope never departs from $`\theta _n`$ by more than a few degrees, so that the variations of $`v`$ originating in this effect may reasonably be neglected.
Second, $`v`$ might as well depend on the local amount of rolling particles $`R`$. This dependence is quite natural, since as soon as the thickness of the flow exceeds a few grain diameters, one would expect a velocity gradient perpendicular to the flow to establish. Such a possibility was already considered by Bouchaud et al. \[\], but, to our best knowledge, not further studied. We think that taking this velocity gradient into account does lead to an improvement of the model description of avalanches. In the forthcoming sections we will analyse the physical consequences of this modification.
If analyticity is assumed, we can expand $`v(R)`$ in powers of $`R`$, and considering only the first two contributions to be significant, we write:
$$v(R)=v_0+\mathrm{\Gamma }R.$$
(4)
with $`\mathrm{\Gamma }`$ a constant, homogeneous to a shear rate, and $`v_0`$ a constant velocity.
When $`R`$ becomes small, Eq. (4) tells us that $`v(R)`$ becomes constant \[$`v(R)\simeq v_0`$\]. Physically, this velocity should correspond to the typical convection velocity of a single grain on a bed of immobile grains. For simple grain shapes (spheroidal) and average levels of inelastic collisions, one expects this velocity $`v_0`$ to scale as $`(gd)^{1/2}`$ (where $`g`$ is the gravity) \[\]. Similarly, the shear rate $`\mathrm{\Gamma }`$ is expected to scale as $`(gd)^{1/2}/d\sim (g/d)^{1/2}`$ \[\]. We can therefore rewrite Eq. (4):
$$v(R)=\mathrm{\Gamma }(R+d).$$
(5)
We note that $`v_0`$ becomes negligible compared to $`\mathrm{\Gamma }R`$ as soon as $`R`$ exceeds a few grain diameters.
In our approach, the typical velocity $`v(R)`$ depends linearly on the local rolling height $`R`$ \[Eq. (5)\]. Such a form is in part motivated by the recent work of Douady et al. \[\] (see also Section IV C). It is also supported by the experimental results of Rajchenbach et al., who carried out measurements in a rotating drum \[\]. These authors found linear velocity profiles in the surface flow, with a shear rate $`\mathrm{\Gamma }`$ independent of the thickness of the flow. However, in other experiments on chute flows carried out on rough inclined planes, Azanza et al. \[\] and Pouliquen \[\] observe that the mean velocity (averaged over cross-sections) scales as a power law of the thickness, with an exponent of about 3/2. In the following we will mainly focus on the linear form (4), since it allows us to give explicit analytical solutions, and we shall discuss the changes to be brought in the case of a power-law velocity in Section IV A.
In the next section, we will derive the governing equations from the saturated BCRE equations and the above considerations on the velocity profile inside the flow.
### D Governing equations
We may define a reduced profile $`\stackrel{~}{h}`$, deduced from $`h`$ by subtracting the ‘neutral’ profile $`\theta _nx`$:
$$\stackrel{~}{h}(x,t)\equiv h(x,t)-\theta _nx.$$
(6)
Using Eqs. (2), (3), (5) and (6), we easily obtain the following system:
$`{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial t}}`$ $`=`$ $`-v_{uh}{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial x}}`$ (7)
$`{\displaystyle \frac{\partial R}{\partial t}}`$ $`=`$ $`\mathrm{\Gamma }(R+d){\displaystyle \frac{\partial R}{\partial x}}+v_{uh}{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial x}}.`$ (8)
In our approach, Eqs. (7) and (8) are the governing equations for surface avalanches displaying linear velocity profiles.
An important point is that we must have $`R>0`$ for Eqs. (7) and (8) to be valid. If we reach $`R=0`$ in a certain spatial domain, then Eq. (7) must be replaced in that domain by $`\partial \stackrel{~}{h}/\partial t=0`$.
## II Application to the simple case of an open system
### A Physical situation
We will now solve Eqs. (7) and (8) in the following simple situation: we consider a cell of dimension $`L`$, partially filled with monodisperse grains of diameter $`d`$, as shown in Fig. 2. The heap has an initial uniform slope $`\theta _{max}`$, the Coulomb angle of the material. The origin of the $`x`$ axis is taken at the bottom of the cell, and the orientation of the axis is such that the slope of the heap is positive.
We now consider that an avalanche has just started in the cell (see Section I A), so that at time $`t=0`$ we have a layer of rolling grains in the whole cell, of thickness $`\xi `$ greater than the saturation length $`\xi ^{\prime}`$. We may thus use the saturated equations (7) and (8) from the beginning of the avalanche.
As the rolling population will rapidly grow and become independent of the initial thickness $`\xi `$ (for $`\xi `$ small), we can as well consider the initial condition on $`R`$ to be:
$$R(x,t=0)=0.$$
(9)
We also know the initial value of $`\stackrel{~}{h}`$:
$$\stackrel{~}{h}(x,t=0)=(\theta _{max}-\theta _n)x\equiv \eta x,$$
(10)
where $`\eta `$ is defined as the (positive) difference between the Coulomb angle and the neutral angle.
We have additional conditions in our system, due to the boundaries. At the top of the cell, there is no feeding in rolling species, so that we impose:
$$R(x=L,t)=0\text{ at any time }t\ge 0.$$
(11)
Another condition arises from the fact that grains fall off the cell at the bottom and cannot accumulate there:
$$\stackrel{~}{h}(x=0,t)=0\text{ at any time }t\ge 0.$$
(12)
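The problem posed by Eqs. (7)–(8) with conditions (9)–(12) can also be integrated numerically before any analytical work; a minimal first-order upwind sketch (all parameter values are illustrative, not taken from experiment) is:

```python
import numpy as np

L, d, eta = 1.0, 1e-3, 0.1             # cell size, grain size, theta_max - theta_n
Gamma, v_uh = 30.0, 30.0 * 3e-3        # shear rate; uphill speed ~ 3*Gamma*d
N = 2000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

h = eta * x.copy()                     # reduced profile, initial slope theta_max
R = np.zeros(N)                        # rolling layer (the R > 0 caveat is
                                       # handled crudely by clamping below)
t, t_end = 0.0, 10.0
while t < t_end:
    v = Gamma * (R + d)
    dt = 0.4 * dx / max(v.max(), v_uh)         # CFL stability condition
    dh = np.empty(N)                           # signal in h~ moves toward +x:
    dh[1:] = (h[1:] - h[:-1]) / dx             # backward (upwind) difference
    dh[0] = dh[1]
    dR = np.empty(N)                           # R is convected toward -x:
    dR[:-1] = (R[1:] - R[:-1]) / dx            # forward (upwind) difference
    dR[-1] = 0.0
    h, R = h - dt * v_uh * dh, R + dt * (v * dR + v_uh * dh)
    h[0] = 0.0                                 # grains fall off at the bottom
    R[-1] = 0.0                                # no feeding at the top
    h, R, t = np.maximum(h, 0.0), np.maximum(R, 0.0), t + dt
```

The rolling profile develops the uniform plateau $`\eta v_{uh}t`$ and a crest of order $`\sqrt{2\eta v_{uh}L/\mathrm{\Gamma }}`$, as derived analytically below.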
### B Uphill wave in the static phase
Equation (7) can be readily solved along with conditions (10) and (12) to give:
$$\stackrel{~}{h}(x,t)=\eta (x-v_{uh}t)H(x-v_{uh}t)\text{ for }0\le x\le L,$$
(13)
where $`H`$ denotes the Heaviside unit step function \[$`H(u)=1`$ if $`u>0`$, $`H(u)=0`$ otherwise\]. This result corresponds to the uphill propagation (at constant speed $`v_{uh}`$) of a surface wave on the static phase. Let us call $`x_{uh}(t)`$ the time-dependent position of the wavefront, given by:
$$x_{uh}(t)=v_{uh}t.$$
(14)
(where the subscript $`uh`$ stands for ‘uphill’).
The wave starts from the bottom of the cell at time $`t=0`$ and reaches the upper end at time $`t_2`$ defined by:
$$t_2\equiv L/v_{uh}.$$
(15)
At a given time $`t`$ (smaller than $`t_2`$), the profile of the static phase can be described as follows: ahead of the wavefront \[$`x_{uh}(t)\le x\le L`$\], the profile is linear and the slope is the initial angle $`\theta _{max}`$ (since $`\stackrel{~}{h}=\eta (x-v_{uh}t)`$). Behind the wavefront \[$`0\le x\le x_{uh}(t)`$\], the slope has decreased and reached the neutral angle $`\theta _n`$ ($`\stackrel{~}{h}=0`$) (see Fig. 3). For times $`t\ge t_2`$ the slope of the static phase inside the cell is uniformly equal to the final value $`\theta _n`$, which is thus the angle of repose of our specific open cell system \[\].
### C Downhill convection of rolling grains
Substituting Eq. (13) into the evolution equation (8) for $`R`$ gives:
$$\frac{\partial R}{\partial t}-\mathrm{\Gamma }(R+d)\frac{\partial R}{\partial x}=\eta v_{uh}H(x-v_{uh}t)$$
(16)
Eq. (16) is a non-linear convection equation. The rolling species are thus convected downhill, with a convection velocity dependent on the local rolling thickness $`R`$. In the spatial region $`x>v_{uh}t`$, the right-hand side (which couples the evolution of $`R`$ to that of $`\stackrel{~}{h}`$) plays the role of a source term, leading to an amplification of the avalanche. On the contrary, for $`x\le v_{uh}t`$, the right-hand side vanishes, so that the material flowing through the surface $`x=v_{uh}t`$ from uphill is simply convected, without amplification or damping.
Equation (16) can be solved analytically by using the method of characteristics \[\], which utilizes the property that certain types of partial differential equations reduce to a set of ordinary differential equations along particular lines, known as the characteristic curves. For more details on this method and on its application in the case of Eq. (16), see Appendix.
### D Propagation of boundary effects in the cell
Before we go to the precise solutions, we can try to get some physical insight of the way the avalanche is going to develop. The global shape of the rolling phase at different moments during the avalanche is of course very dependent on the boundary condition (11) for $`R`$ in the cell, but also on the condition (12) for $`\stackrel{~}{h}`$, since the evolution of $`\stackrel{~}{h}`$ and $`R`$ are coupled.
However, the effects of these boundary conditions cannot spread over the entire cell instantly after the beginning of the avalanche, and shall propagate with finite velocities. We then expect the progression of these boundary effects (one could say, the propagation of the ‘information’ on the boundaries) to control the evolution of both the rolling and the static phase. For instance, in the case of the static profile $`\stackrel{~}{h}`$, Eq. (13) tells us that the bottom boundary condition (12) \[$`\stackrel{~}{h}(x=0,t)=0`$ at any time $`t`$\] brings progressively $`\stackrel{~}{h}`$ to zero everywhere in the cell, and also, that the propagation proceeds with a velocity $`v_{uh}`$.
Hence, we expect that the description of the avalanche should naturally split up in different temporal ‘stages’, according to the degree of extension of the different boundary effects, and that the cell should divide in several ‘regions’, according to whether it is under the influence of the top boundary condition or the bottom, or both, etc. This shall become clear as we will now go into the precise description of the avalanche.
## III Unfolding of the avalanche
### A Stage I: The avalanche grows to maturity
This stage starts at $`t=0`$ with the beginning of the avalanche. From the above considerations, we know that the boundary effects start to propagate with finite velocities from both ends of the cell. We can therefore define ‘propagation fronts’ for these effects: we call $`x_{dh}(t)`$ the position of the front originating in the boundary condition at the top of the cell \[the subscript $`dh`$ means that the motion of this front is downhill\], and $`x_{uh}(t)`$ the corresponding ‘uphill’ front, originating in the bottom boundary condition, which we already defined earlier as $`x_{uh}(t)=v_{uh}t`$ \[Eq. (14)\]. Figure 4-a presents a typical picture of the situation during Stage I, where the fronts, after leaving their respective cell ends, move in opposite directions, one toward the other. As a consequence, they finally meet at a certain time, which we hereafter denote $`t_1`$. This time $`t_1`$ defines the end of what we call ‘Stage I’ (which is thus characterized as the time interval $`0\le t\le t_1`$), and the beginning of ‘Stage II’ (described in the next section).
The relative positions of the fronts naturally define three spatial regions in the cell (Fig. 4-a). To the left of $`x_{uh}`$, the effects of the bottom boundary condition \[Eq. (12)\] are predominant. We call this region the bottom region. We remark that it is constantly extending uphill during Stage I \[following the motion of $`x_{uh}(t)`$\]. To the right of $`x_{dh}(t)`$, we define the top region, which extends downhill, and where the evolution of the avalanche is controlled by the upper end condition \[Eq. (11)\]. Finally, between those two regions remains a central region, where none of the boundary effects can yet be felt. This last region shrinks during Stage I, and ultimately disappears at time $`t=t_1`$ when the bottom and top region connect. We now describe the precise evolution of the avalanche, region by region \[we will only give the form of the rolling amount $`R(x,t)`$, since $`h(x,t)`$ is already known from Eq. (13)\].
#### 1 Top region
From the above definition, the top region corresponds to the spatial domain $`x_{dh}(t)xL`$. Within this domain, Eq. (16) reads:
$$\frac{\partial R}{\partial t}-\mathrm{\Gamma }(R+d)\frac{\partial R}{\partial x}=\eta v_{uh}$$
(17)
\[since $`x>x_{dh}(t)>x_{uh}=v_{uh}t`$\].
Solving this equation with the boundary condition $`R(x=L,t)=0`$ (see Appendix for details) gives the expression of $`R`$ valid in this region:
$$R(x,t)=-d+\sqrt{d^2+2(L-x)\frac{\eta v_{uh}}{\mathrm{\Gamma }}}.$$
(18)
We also obtain the precise position of the ‘downhill front’ $`x_{dh}(t)`$:
$$x_{dh}(t)=L+\frac{\mathrm{\Gamma }}{2\eta v_{uh}}d^2-\frac{1}{2}\mathrm{\Gamma }\eta v_{uh}\left(t+\frac{d}{\eta v_{uh}}\right)^2.$$
(19)
Thus, according to Eq. (18), $`R`$ has a stationary shape (independent of time), but on a domain that extends downhill with time. Interestingly, we note that the motion of $`x_{dh}(t)`$ is uniformly accelerated throughout the stage. This is a direct consequence of the non-linearity in Eq. (16).
#### 2 Central region
In the central region, the boundary conditions have no influence on the evolution of $`R`$ and $`\stackrel{~}{h}`$. The central region is hence spatially defined by $`x_{uh}(t)\le x\le x_{dh}(t)`$, and shrinks at both ends to disappear at the end of the stage.
The evolution equation for $`R`$ is the same as in the top region \[Eq. (17)\], but now we must impose the initial condition (9) (no boundary condition). The solution (see Appendix) reads:
$$R(x,t)=\eta v_{uh}t.$$
(20)
In the central region the rolling phase grows linearly with time and uniformly in space, thus forming a plateau (see Fig. 4-b). This constant growth rate is a consequence of the saturated form of the BCRE equations we have used \[Eq. (3)\]. The uniformity of the solution, on the other hand, stems from the fact that, none of the boundaries being at work and the initial static profile being uniform, the central region behaves like an infinite medium, for which translational invariance must be obeyed.
We finally remark that solutions (18) and (20) connect continuously at $`x=x_{dh}(t)`$.
#### 3 Bottom region
The bottom region is controlled by the bottom boundary condition, and spreads over the spatial interval $`0\le x\le x_{uh}(t)`$. In this region the evolution equation for $`R`$ displays no more amplification (because $`x<x_{uh}(t)=v_{uh}t`$): $`\partial R/\partial t-\mathrm{\Gamma }(R+d)\partial R/\partial x=0`$.
Since there is no constraint on $`R`$ at the bottom of the cell, the condition on $`R`$ is given by the physical requirement that it be continuous across the border of the central and bottom regions, i.e. $`R(x=x_{uh}(t),t)=\eta v_{uh}t`$ for $`t\ge 0`$.
This leads to the following expression for $`R`$:
$`R(x,t)=-{\displaystyle \frac{1}{2}}\left(d+{\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}-\eta v_{uh}t\right)`$ (22)
$`+{\displaystyle \frac{v_{uh}}{2\mathrm{\Gamma }}}\sqrt{\left({\displaystyle \frac{\mathrm{\Gamma }d}{v_{uh}}}+1-\mathrm{\Gamma }\eta t\right)^2+4{\displaystyle \frac{\mathrm{\Gamma }\eta }{v_{uh}}}(x+\mathrm{\Gamma }dt)}.`$
In this region also the height of rolling grains increases with time, due to an increasing input of material at the frontier with the central region.
#### 4 Derivation of time $`t_1`$
Stage I ends when the top and bottom regions meet, at $`t=t_1`$ defined by $`x_{uh}(t=t_1)=x_{dh}(t=t_1)`$. Using Eqs. (14) and (19), we easily obtain
$$t_1=-\left(\frac{d}{\eta v_{uh}}+\frac{1}{\mathrm{\Gamma }\eta }\right)+\sqrt{\left(\frac{d}{\eta v_{uh}}+\frac{1}{\mathrm{\Gamma }\eta }\right)^2+\frac{2L}{\mathrm{\Gamma }\eta v_{uh}}}.$$
(23)
As will be shown later, the maximum thickness of the avalanche, $`R_{max}`$, is actually reached at $`t=t_1`$ and $`x=x_{uh}(t=t_1)=x_{dh}(t=t_1)`$. We can clearly see in Fig. 4 that the $`R`$-profile at time $`t=t_1`$ displays a cusp. As the prediction of the maximum amplitude $`R_{max}`$ is an important result of our analysis, we shall devote Section IV A to it, and defer the analytical derivation of $`R_{max}`$ and its application to physical examples until then.
Fig. 4-b presents successive ‘snapshots’ of the rolling phase profile during Stage I.
### B Stage II: The static profile reaches its final state
Stage II starts at $`t=t_1`$. At time $`t_1`$, the two ‘propagation fronts’ of the boundary effects $`x_{uh}(t)`$ and $`x_{dh}(t)`$ pass each other, and then pursue their respective motions towards the opposite cell edge. Figure 5-a illustrates this situation. As in Stage I, the cell is naturally divided into three spatial regions: a top region \[defined spatially as $`x_{uh}(t)<x\le L`$\], under the sole influence of the upper edge of the cell; a bottom region \[$`0\le x<x_{dh}(t)`$\], under the influence of the bottom edge; and finally, a central region \[$`x_{dh}(t)\le x\le x_{uh}(t)`$\], where, in contrast with Stage I, the effects of both boundaries now combine. As another difference with the situation described in Stage I, the top and bottom regions progressively shrink, whereas the central one grows in extension (Fig. 5-a).
Due to their motion, the fronts $`x_{dh}`$ and $`x_{uh}`$ are bound to reach, sooner or later, the bottom and top end of the cell (respectively). At time $`t_2`$ \[Eq. (15)\], the uphill front reaches the upper limit of the cell ($`x_{uh}=L`$). The static profile is then in its relaxed final state, with a uniform slope $`\theta _n`$. This is the end of Stage II, which is thus defined as the time interval $`t_1<t\le t_2`$. In most cases, as discussed below, we expect the downhill front to reach the bottom edge before $`t=t_2`$.
#### 1 Top region
In this region, we have $`x>x_{uh}`$, so that the evolution equation (17) still holds, and we still have to solve with respect to the upper boundary condition of Eq. (11). Therefore, as in Stage I, $`R`$ is given by Eq. (18) \[but now, the lower limit of the domain on which this solution is valid is $`x_{uh}(t)`$, not $`x_{dh}(t)`$\]. This top region shrinks, until finally disappearing when the uphill front reaches the upper end of the cell ($`t=t_2`$).
#### 2 Central region
In this part, since $`x\le x_{uh}(t)`$, there is no amplification of the rolling amount, so that the right-hand side of the evolution equation of $`R`$ vanishes: $`\partial R/\partial t-\mathrm{\Gamma }(R+d)\partial R/\partial x=0`$. Now, we further impose that $`R`$ be continuous at the border with the top region, i.e.:
$`R(x=x_{uh}(t),t)=-d+\sqrt{d^2+2(L-x_{uh}(t)){\displaystyle \frac{\eta v_{uh}}{\mathrm{\Gamma }}}}.`$
Solving these two equations together amounts to looking for $`R`$ as one of the roots of the third-degree equation $`R^3+a_2R^2+a_1(t)R+a_0(x,t)=0`$, with the following coefficients:
$`a_2`$ $`=`$ $`{\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}+3d`$
$`a_1(t)`$ $`=`$ $`2{\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}\left(d+{\displaystyle \frac{\mathrm{\Gamma }d^2}{v_{uh}}}-\eta (L-v_{uh}t)\right)`$
$`a_0(x,t)`$ $`=`$ $`-2\eta {\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}\left({\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}(L-x)+d(L-v_{uh}t)\right).`$
The solution, given by Cardano formulas \[\], writes:
$$R(x,t)=-\frac{a_2}{3}+S-\frac{Q}{S},$$
(24)
with the auxiliary quantities \[\]:
$`S`$ $`\equiv `$ $`\sqrt[3]{P+\sqrt{D}}`$
$`D`$ $`\equiv `$ $`Q^3+P^2`$
$`Q`$ $`\equiv `$ $`{\displaystyle \frac{1}{9}}(3a_1-a_2^2)`$
$`P`$ $`\equiv `$ $`{\displaystyle \frac{9a_2a_1-27a_0-2a_2^3}{54}}.`$
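In practice, rather than evaluating the Cardano expressions with complex cube roots, one may simply select the physical root of the cubic numerically (a sketch; parameter values are illustrative):

```python
import numpy as np

def R_central(x, t, L=1.0, d=1e-3, eta=0.1, Gamma=30.0, v_uh=0.09):
    """Rolling thickness in the Stage-II central region, as the
    non-negative root of R^3 + a2 R^2 + a1 R + a0 = 0."""
    a2 = v_uh / Gamma + 3.0 * d
    a1 = 2.0 * (v_uh / Gamma) * (d + Gamma * d**2 / v_uh
                                 - eta * (L - v_uh * t))
    a0 = -2.0 * eta * (v_uh / Gamma) * ((v_uh / Gamma) * (L - x)
                                        + d * (L - v_uh * t))
    roots = np.roots([1.0, a2, a1, a0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real >= 0.0].min()
```

At the border $`x=x_{uh}(t)`$ the cubic factorizes, and this root reduces to the top-region expression (18), which is precisely the continuity requirement imposed above.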
We saw in the previous section that the crest of the avalanche $`R_{max}`$ appeared at the end of Stage I. What happens to this crest during Stage II? It is easy to prove that the crest remains located on the downhill front $`x_{dh}`$. Besides, $`x_{dh}`$ now moves at constant speed (in contrast with Stage I, where it accelerated): $`x_{dh}(t)=v_{uh}t_1-\mathrm{\Gamma }(R_{max}+d)(t-t_1)`$.
We can also prove that the height of the crest remains constant (equal to $`R_{max}`$) as it travels downhill, until it finally comes out of the cell. The exit time $`t_{exit}`$ of this crest $`R_{max}`$ is obtained by solving $`x_{dh}(t)=0`$:
$$t_{exit}=t_1+\frac{v_{uh}t_1}{\mathrm{\Gamma }(R_{max}+d)}=t_1\left(1+\frac{v_{uh}}{\mathrm{\Gamma }(R_{max}+d)}\right).$$
(25)
#### 3 Bottom region
The bottom region is defined as the region where $`0\le x\le x_{dh}(t)`$. The evolution equation for $`R`$ is the same as in the central region, and we impose continuity at the border with the central region. We find that $`R(x,t)`$ is given by Eq. (22), as in Stage I. Physically, in this region, we simply observe the convection of what was left in the bottom region at the end of Stage I.
The bottom region disappears when the downhill front reaches the bottom end: $`x_{dh}(t)=0`$, that is, by definition, at time $`t=t_{exit}`$ \[Eq. (25)\]. To determine precisely the subsequent evolution of the avalanche, we must discuss whether the disappearance of the bottom region occurs before the end of Stage II or not (that is, whether $`t_{exit}\le t_2`$ or $`t_{exit}\ge t_2`$). Using Eq. (25), we form the ratio:
$`{\displaystyle \frac{t_{exit}}{t_2}}={\displaystyle \frac{t_1}{t_2}}\left(1+{\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }(R_{max}+d)}}\right).`$
Provided that the cell dimension $`L\gtrsim 100d`$ and that $`v_{uh}\sim \mathrm{\Gamma }d`$ (these requirements being usually satisfied in common experiments), we have $`\mathrm{\Gamma }R_{max}\gg v_{uh}`$ and $`t_1/t_2\ll 1`$. Hence we generally expect $`t_{exit}\ll t_2`$ and, consequently, as claimed earlier, in most cases the bottom region disappears before the end of Stage II.
To summarize: during Stage II, the central region extends both downhill and uphill with constant (though different) velocities at each end, progressively invading the whole cell. It reaches the bottom edge at time $`t_{exit}`$ (in situations where $`t_{exit}\le t_2`$), then the upper edge at time $`t_2`$. At the end of Stage II, the central region occupies the entire cell, and $`R(x,t=t_2)`$ is everywhere given by Eq. (24). We present successive calculated ‘snapshots’ of the rolling amount $`R`$ during this stage in Fig. 5-b.
### C Stage III: The last grains are evacuated
This stage lasts from $`t=t_2`$ until the end of the avalanche, at $`t=t_{end}`$. Both fronts have reached the edges of the cell at the end of Stage II ($`x_{dh}=0,x_{uh}=L`$), and there is only one region (see Fig. 6-a). Moreover, as $`t>t_2`$, the slope of the static part is everywhere $`\theta _n`$ and no amplification of the rolling grains can take place; the rolling phase is simply convected downwards. We now have to solve the evolution equation $`\partial R/\partial t-\mathrm{\Gamma }(R+d)\partial R/\partial x=0`$, with the initial condition that Stage III evolves from what has been left by Stage II \[i.e. $`R(x,t=t_2)`$ as given by Eq. (24)\].
Solving with the method of characteristics gives the following implicit solution
$$R(x,t)=R(\xi ,t_2),$$
(26)
where
$$\xi =x-\mathrm{\Gamma }\left[R(\xi ,t_2)+d\right](t-t_2)$$
(27)
(and $`0\le \xi \le L`$).
The physical interpretation of these equations is actually very simple: Eq. (26) states that the quantity of rolling species found in $`x`$ at time $`t`$ was previously located in $`\xi `$ at the beginning of Stage II $`(t=t_2)`$. Equation (27) gives a determination of this initial position $`\xi `$, by stating that from $`t_2`$ until the considered instant $`t`$, the quantity of grains moved with a constant speed $`\mathrm{\Gamma }[R(\xi ,t_2)+d]`$, dependent on the local height. In other words, during Stage III, the $`R`$-profile left by Stage II is convected downhill, but each vertical slice rolls with its own velocity, which is a function of its height. The grains that were near the top edge of the cell at the end of Stage II are convected the most slowly, since there $`R`$ was close to zero. The profile inherited from Stage II thus dilates upon rolling, under the effect of velocity inhomogeneities (Fig. 6-b) \[\].
The last grains to fall off the cell are those that leave the top end of the cell at the beginning of Stage III, at time $`t_2`$. Since at the top edge we have $`R=0`$, these grains move with the constant speed $`v_0=\mathrm{\Gamma }d`$. At time $`t`$, they are located at $`x_{last}(t)=L-\mathrm{\Gamma }d(t-t_2)`$, and the avalanche is extinct uphill: $`R=0`$ for $`x>x_{last}(t)`$.
Finally, the avalanche ends when the last grains reach the bottom limit of the cell ($`x_{last}=0`$), that is at time $`t_{end}=t_2+L/(\mathrm{\Gamma }d)`$.
## IV Discussion and simple checks
### A Predictions for the maximum amplitude of the avalanche
#### 1 Linear velocity profile
Up to now, we have focused on flows displaying linear velocity profiles. For such flows, as we saw in Section III A, the avalanche reaches its maximum amplitude $`R_{max}`$ at the end of Stage I, at time $`t=t_1`$ \[Eq. (23)\]. The exact analytical expression of $`R_{max}`$ is easily found by using Eq. (20) at time $`t_1`$ (that is, the value of $`R`$ given by the central region at the very moment it disappears):
$$R_{max}=-d-\frac{v_{uh}}{\mathrm{\Gamma }}+\sqrt{\left(d+\frac{v_{uh}}{\mathrm{\Gamma }}\right)^2+\frac{2L\eta v_{uh}}{\mathrm{\Gamma }}}.$$
(28)
For large values of L, $`R_{max}`$ scales as:
$$R_{max}\simeq \sqrt{2\eta \frac{v_{uh}}{\mathrm{\Gamma }}L},$$
(29)
that is, as the square root of the system size $`L`$.
Let us give a couple of numerical applications of this last expression. For the case of a standard laboratory experiment, with $`L=1`$ m, $`d=1`$ mm, $`v_{uh}/\mathrm{\Gamma }=3d`$ and $`\eta 0.1`$ rad, we find $`R_{max}=2.45`$ cm. In the case of a system at the scale of a desert dune, made of fine sand, we take $`L=10`$ m, $`d=0.2`$ mm and, with others parameters unchanged, we get $`R_{max}=3.46`$ cm. One has to notice that $`R_{max}`$ is quite small, even for large systems as a sand dune.
It is interesting to contrast this result with the work of Boutreux et al. \[\], who carried out the same calculation in an open cell configuration, but with a constant downhill convection velocity $`v(R)\simeq v_0`$ \[instead of Eq. (4)\]. They found $`R_{max}\simeq \eta L`$. For the two above examples, this formula leads to maximum amplitudes of respectively $`10`$ cm and $`1`$ m. The effect of the velocity gradient is thus to considerably limit the amplitude of avalanches, especially for large systems.
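Both estimates are easily reproduced (a minimal check of Eqs. (28) and (29), using the illustrative inputs quoted above):

```python
from math import sqrt

def r_max(L, d, eta, ratio):          # ratio = v_uh / Gamma
    full = -d - ratio + sqrt((d + ratio)**2 + 2.0 * L * eta * ratio)
    return full, sqrt(2.0 * eta * ratio * L)   # Eq. (28) and limit (29)

print(r_max(1.0, 1e-3, 0.1, 3e-3))    # lab cell: ~2.1 cm / 2.45 cm
print(r_max(10.0, 2e-4, 0.1, 6e-4))   # dune:     ~3.4 cm / 3.46 cm
print(0.1 * 1.0, 0.1 * 10.0)          # eta*L for constant v: 10 cm, 1 m
```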
#### 2 Generalization for a power-law dependency
At the beginning of this article, we quoted the work of Azanza et al. \[\] and of Pouliquen \[\], who find that the average speed of a chute flow of granular material on a rough plane is related to its thickness through a power-law relation $`v(R)\sim \mathrm{\Gamma }R^\alpha `$ with $`\alpha `$ close to 3/2. However, as pointed out by Pouliquen \[\], the influence of the rough underlying bed plane on the rheology of chute flows is complex and not clearly understood, and might not be comparable to situations where the flow occurs on a free granular bed, as considered in this paper. Since the question is still open, we present here an intuitive derivation of $`R_{max}`$ valid for any power law (undetermined exponent $`\alpha `$). To check the validity of this simple derivation, we first present it in the linear case $`\alpha =1`$, the generalization being then straightforward.
Let us consider a point initially at the top edge of the cell. At $`t=0`$, it starts being swept along by the granular flow, and we assume that this point travels with the local surface velocity of the flow $`v=\mathrm{\Gamma }R`$. We are now interested in the temporal evolution of the rolling height $`R`$ at this travelling point, which is computed from the Lagrangian derivative $`dR/dt=\partial R/\partial t-v\partial R/\partial x`$. As long as the amplification process takes place, we have, with the use of Eq. (16): $`dR/dt=\eta v_{uh}`$. This implies
$$R(t)=\eta v_{uh}t.$$
(30)
Hence, $`R(t)`$ at the travelling point increases with time, as long as the amplification process lasts. After the amplification has stopped, $`R`$ at the travelling point keeps constant (since $`dR/dt=0`$). Thus, $`R`$ reaches its maximum value $`R_{max}`$ at the end of the amplification. Let us call $`t_{amp}`$ the duration of the amplification. We compute $`t_{amp}`$ in the following way: the distance that the travelling point goes over during the amplification is of order $`L`$, so that $`t_{amp}`$ must verify:
$`L`$ $`\simeq `$ $`{\displaystyle \int _0^{t_{amp}}}𝑑x={\displaystyle \int _0^{t_{amp}}}v𝑑t`$
$`\text{i.e.}L`$ $`\simeq `$ $`{\displaystyle \int _0^{t_{amp}}}\mathrm{\Gamma }\eta v_{uh}t𝑑t={\displaystyle \frac{1}{2}}\mathrm{\Gamma }\eta v_{uh}t_{amp}^2,`$
\[in this calculation, we used $`v\simeq \mathrm{\Gamma }R(t)`$\]. We finally find: $`t_{amp}\simeq \sqrt{2L/(\mathrm{\Gamma }\eta v_{uh})}`$.
Inserting this last expression into Eq. (30) gives the value of $`R_{max}`$:
$$R_{max}\simeq \sqrt{2\eta \frac{v_{uh}}{\mathrm{\Gamma }}L}.$$
(31)
This is exactly Eq. (29) found analytically, which was itself the limit of the complete expression of $`R_{max}`$ \[Eq. (28)\] for large values of L (greater than a hundred $`d`$), and $`v_{uh}`$ of order $`\mathrm{\Gamma }d`$.
The strongest assumption in the above simple derivation is that the amplification takes place over a distance $`L`$. Rigorously, this distance is $`L-x_{dh}(t=t_1)`$; but what makes our simple derivation successful is that the position of the downhill front at time $`t_1`$, $`x_{dh}(t=t_1)`$, is generally quite close to zero (for $`L`$ greater than a hundred $`d`$ and $`v_{uh}`$ of order $`\mathrm{\Gamma }d`$).
We may now generalize the above results to a power-law dependency of the velocity, $`v(R)\sim \mathrm{\Gamma }R^\alpha `$. The same derivation leads us to the result:
$$R_{max}\simeq \left((\alpha +1)\frac{\eta v_{uh}}{\mathrm{\Gamma }}L\right)^{1/(\alpha +1)}$$
(32)
Note that $`R_{max}`$ diminishes as $`\alpha `$ increases.
In particular, for $`\alpha =3/2`$, Eq. (32) can be rewritten as $`R_{max}\simeq (5\eta Lv_{uh}/2\mathrm{\Gamma })^{2/5}`$.
### B Possible experimental checks
The loss of material at the bottom edge of the cell might be measured experimentally and could be compared to the following theoretical prediction. This loss corresponds to the flow rate at the bottom of the cell, $`Q(x=0,t)=\int _0^{R(x=0,t)}v(z)𝑑z`$, and is given by
$$Q(x=0,t)=\frac{\mathrm{\Gamma }}{2}R(x=0,t)^2+\mathrm{\Gamma }dR(x=0,t),$$
(33)
where $`R(x=0,t)`$ is given by Eq. (22) during Stages I and II, and by Eq. (26) during Stage III. Figure 7 shows the predicted shape of $`Q(x=0,t)`$ as a function of time (solid curve). The curve displays a maximum at time $`t=t_{exit}`$, corresponding to the moment when the maximum amplitude $`R_{max}`$ rolls out of the cell. The maximum flow rate is obtained by replacing $`R(x=0,t)`$ by $`R_{max}`$ in Eq. (33).
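A minimal evaluation of Eq. (33) (with the illustrative lab-cell values used earlier):

```python
def flow_rate(R, Gamma=30.0, d=1e-3):
    """Outflow per unit width at the bottom edge, Eq. (33)."""
    return 0.5 * Gamma * R**2 + Gamma * d * R

Q_max = flow_rate(0.0245)   # ~1e-2 m^2/s at t = t_exit, when R_max rolls out
```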
It is of interest to compare our prediction for the loss of material with that of Boutreux et al. \[\], who assumed a constant downhill velocity $`v`$ in the rolling phase. This comparison, however, requires some caution: in our approach, the granular flow is characterized by a constant velocity gradient $`\mathrm{\Gamma }`$, whereas, in Boutreux et al., the description is based on a typical downhill convection velocity of the grains $`v`$ (see Section I C). Figure 7 compares the results of both approaches for the loss of material, assuming $`v\simeq v_{uh}\simeq 3\mathrm{\Gamma }d`$ (see Ref. \[\]).
### C Concluding remarks
#### 1 Regions of small $`R`$
We notice that, during the avalanche, there were several spatial zones in the cell where $`R`$ was close to 0 ($`R<\xi ^{\prime}`$), e.g. at the upper edge of the cell, or at the end of the avalanche. In these zones, the use of the saturated Eqs. (7) and (8) is not fully justified. In order to obtain a continuous description between the saturated case and the thin one, we could use the interpolated equations that have been proposed by de Gennes \[\] and studied in a model case by Boutreux and Raphaël in Ref. \[\]:
$`{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial t}}`$ $`=`$ $`-\gamma {\displaystyle \frac{R\xi ^{\prime}}{R+\xi ^{\prime}}}{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial x}}`$
$`{\displaystyle \frac{\partial R}{\partial t}}`$ $`=`$ $`\mathrm{\Gamma }(R+d){\displaystyle \frac{\partial R}{\partial x}}+\gamma {\displaystyle \frac{R\xi ^{\prime}}{R+\xi ^{\prime}}}{\displaystyle \frac{\partial \stackrel{~}{h}}{\partial x}}.`$
The results of \[\] show, however, that the physical behaviour is not dramatically changed, and that the description of the zones of small $`R`$ with the saturated equations, while perhaps slightly inaccurate, remains qualitatively correct.
#### 2 Effects of polydispersity
It is common knowledge that real granular materials are generally intrinsically polydisperse. This may have drastic effects on the behavior of the flow, and capturing more precisely the physics of real avalanches would certainly require taking polydispersity into account. However, the treatment of full polydispersity is a difficult task. Yet, the BCRE equations have been extended to the case of binary mixtures \[\], and it could be interesting to study the changes brought up in this case by a velocity gradient in the flow.
#### 3 Domain of validity of the BCRE approach
The general approach introduced by Bouchaud et al. to describe surface flows is rather phenomenological, and as pointed out by Bouchaud and Cates \[\], we still lack criteria to determine the range of physical situations to which it can be successfully applied. In a recent work, Douady et al. \[\] proposed a justification of the BCRE modelization on the basis of hydrodynamic conservation laws. According to these authors, the form of the BCRE equations should not remain invariant when different laws are chosen for the velocity profile in the flow. Douady et al. argue that only for a velocity linear in $`R`$ (or constant) shall the equations take the simple form of our equations \[Eqs. (7) and (8)\]; in other cases, they find that a supplementary term coupling $`R`$ and $`h`$ should add in Eq. (7). Certainly, more work needs to be done in this direction in order to exactly assert the domain of validity of the BCRE analysis.
###### Acknowledgements.
We thank T. Boutreux, F. Chevoir, A. Daerr, S. Douady, J. Duran and O. Pouliquen for oral and/or written exchanges.
## Appendix
### 1 Method for solving the evolution equation of $`R`$
Eq. (16) is a first-order partial differential equation, of the quasi-linear class, that is, linear in the first derivatives. Such equations can be solved by the well-known method of characteristics. See, for example, Ref. \[\].
More specifically, we will solve Eq. (16) along characteristic curves given in the parametric form $`\{t(s),x(s),R(s)\}`$, with $`s`$ the parameter. The functions $`t(s)`$, $`x(s)`$ and $`R(s)`$ are derived from the set of coupled ordinary differential equations:
$`{\displaystyle \frac{dt}{ds}}`$ $`=`$ $`1`$ (34)
$`{\displaystyle \frac{dx}{ds}}`$ $`=`$ $`-\mathrm{\Gamma }(R+d)`$ (35)
$`{\displaystyle \frac{dR}{ds}}`$ $`=`$ $`\eta v_{uh}H(x-v_{uh}t).`$ (36)
By integration, one finds the equations for the characteristics with unspecified integration constants. One then imposes the boundary and/or initial conditions to identify these constants.
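A numerical counterpart of this procedure (a sketch with illustrative parameters; the Heaviside source makes a small maximum step advisable):

```python
import numpy as np
from scipy.integrate import solve_ivp

L, d, eta, Gamma, v_uh = 1.0, 1e-3, 0.1, 30.0, 0.09   # illustrative values

def characteristic(s, y):
    """y = (t, x, R) along a characteristic of Eq. (16)."""
    t, x, R = y
    src = eta * v_uh if x > v_uh * t else 0.0   # Heaviside source term
    return [1.0, -Gamma * (R + d), src]

# One characteristic leaving the top edge x = L at time xi = 0:
sol = solve_ivp(characteristic, (0.0, 5.0), [0.0, L, 0.0], max_step=1e-3)
t_c, x_c, R_c = sol.y    # R grows linearly until the curve crosses x_uh(t)
```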
We here give the detailed calculations only for the first stage of the avalanche. The derivations are separated into the different spatial regions that were defined earlier, and we will show how they naturally emerge from the derivations.
### 2 Top region
In this region, Eqs. (35) become:
$`{\displaystyle \frac{dt}{ds}}`$ $`=`$ $`1`$ (37)
$`{\displaystyle \frac{dx}{ds}}`$ $`=`$ $`-\mathrm{\Gamma }(R+d)`$ (38)
$`{\displaystyle \frac{dR}{ds}}`$ $`=`$ $`\eta v_{uh}.`$ (39)
We also use the boundary condition $`R(x=L,t)=0`$ for $`t\ge 0`$, which we parametrize with the parameter $`\xi `$. For simplicity's sake, on each characteristic crossing the boundary curve we arbitrarily choose the value of the parameter $`s`$ to be zero at the crossing point. This determines the integration constants to be:
$`t(s=0)`$ $`=`$ $`\xi `$ (40)
$`x(s=0)`$ $`=`$ $`L`$ (41)
$`R(s=0)`$ $`=`$ $`0.`$ (42)
Note that $`\xi \ge 0`$, since $`t\ge 0`$ (the experiment started at time $`t=0`$).
Solving for Eqs. (38) together with Eqs. (41) gives the equations of the characteristic curves:
$`t(s)`$ $`=`$ $`s+\xi `$ (43)
$`x(s)`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\mathrm{\Gamma }\eta v_{uh}s^2-\mathrm{\Gamma }ds+L`$ (44)
$`R(s)`$ $`=`$ $`\eta v_{uh}s.`$ (45)
We now want to write the solution $`R`$ explicitly in terms of $`x`$ and $`t`$, so we have to eliminate $`\xi `$ and $`s`$. Eq. (44) can be solved to give $`s`$ as a function of $`x`$, and substituting into Eq. (45) yields the analytical solution:
$$R(x,t)=-d+\sqrt{d^2+2(L-x)\frac{\eta v_{uh}}{\mathrm{\Gamma }}},$$
(46)
which is the same as Eq. (18).
We now have to verify the condition $`\xi \ge 0`$. By combining Eq. (43) with (45), this condition can be rewritten as $`t\ge R(x,t)/\eta v_{uh}`$. Replacing $`R`$ in this inequality \[by (46)\] gives a spatial condition for solution (46) to be valid: we must have $`x\ge x_{dh}(t)`$, where
$`x_{dh}(t)\equiv L+{\displaystyle \frac{\mathrm{\Gamma }}{2\eta v_{uh}}}d^2-{\displaystyle \frac{1}{2}}\mathrm{\Gamma }\eta v_{uh}(t+{\displaystyle \frac{d}{\eta v_{uh}}})^2.`$
This is the mathematical origin of the ‘downhill front’ that we described intuitively in the main text as the limit of extension of the boundary effects originating in the upper edge of the cell.
### 3 Central region
The evolution equation for $`R`$ is the same as in the top zone, so that the differential equations giving the characteristics are also the same \[Eqs. (38)\]. But now we must impose the initial condition $`R(x,t=0)=0`$, which gives the following set of initial conditions for the characteristics: $`t(s=0)=0,x(s=0)=\xi ,R(s=0)=0`$ (with $`0\le \xi \le L`$). We obtain:
$`t(s)`$ $`=`$ $`s`$ (47)
$`x(s)`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\mathrm{\Gamma }\eta v_{uh}s^2-\mathrm{\Gamma }ds+\xi `$ (48)
$`R(s)`$ $`=`$ $`\eta v_{uh}s.`$ (49)
Combining (47) and (49) gives an explicit solution for $`R`$: $`R(x,t)=\eta v_{uh}t`$.
This solution is valid in a certain spatial domain. It is limited upwards by the top region \[i.e. $`x\le x_{dh}(t)`$\]. It is also limited downwards by $`x_{uh}(t)`$, because at this point the form of the evolution equation of $`R`$ changes (the amplification term vanishes), and consequently so does the form of the differential equations that give the characteristics.
### 4 Bottom region
In this region, Eqs. (35) are given by: $`dt/ds=1,dx/ds=-\mathrm{\Gamma }(R+d),dR/ds=0`$.
Here, the boundary condition is given by the continuity of $`R`$ at the border of the central and the bottom zones: $`R(x=x_{uh}(t),t)=\eta v_{uh}t`$ for $`t\ge 0`$. This gives the initial conditions: $`t(s=0)=\xi ,x(s=0)=v_{uh}\xi ,R(s=0)=\eta v_{uh}\xi `$. Solving and rewriting $`R`$ explicitly in terms of $`x`$ and $`t`$ leads to the solution:
$`R(x,t)=-{\displaystyle \frac{1}{2}}\left(d+{\displaystyle \frac{v_{uh}}{\mathrm{\Gamma }}}-\eta v_{uh}t\right)`$ (51)
$`+{\displaystyle \frac{v_{uh}}{2\mathrm{\Gamma }}}\sqrt{\left({\displaystyle \frac{\mathrm{\Gamma }d}{v_{uh}}}+1-\mathrm{\Gamma }\eta t\right)^2+4{\displaystyle \frac{\mathrm{\Gamma }\eta }{v_{uh}}}(x+\mathrm{\Gamma }dt)},`$
valid for $`0\le x\le x_{uh}(t)`$.
# Inflationary Reheating and Fermions
## Abstract
Coherent oscillations of the inflaton field at the end of inflation can parametrically excite fermions in much the same way that bosons are created in preheating. Although Pauli-blocking prohibits the occupation number of created fermions from growing exponentially, fermion production occurs in a manner significantly different from the expectations of simple perturbation theory. Here, I discuss the nature of fermion production after inflation and possible applications including the efficient transfer of inflaton energy and the production of super-massive fermions during fermionic preheating.
Consider a simple model of chaotic inflation with the potential $`\frac{1}{4}\lambda \varphi ^4`$ for an inflaton field $`\varphi `$ coupled to a massless spin-$`\frac{1}{2}`$ field $`\psi `$ by a Yukawa interaction. At the end of inflation, the inflaton field will oscillate coherently about the minimum of its effective potential with an initial amplitude $`\varphi _o\sim O(0.1M_p)`$. In perturbation theory, one treats the homogeneous and quasi-classical inflaton field as a condensate of scalar inflaton particles, each of which can decay into a pair of $`\psi `$-particles. Each fermion then carries away half of the energy of a typical inflaton particle, giving a spectrum narrowly peaked around the comoving momentum $`k\simeq 0.42\sqrt{\lambda }\varphi _o`$. The inflaton energy is transferred to fermions after $`O(\frac{1}{h^2})\gg 1`$ inflaton oscillations.
To investigate fermion production non-perturbatively, we are interested in the equation of motion for the field operator $`\psi `$. Following the usual prescription (see GK98 and references therein for details), one seeks eigenfunctions of the Dirac equation in the presence of the classical time-dependent source, $`\varphi (t)`$. As the inflaton field is spatially homogeneous, only the temporal part of the eigenmode obeys a non-trivial equation of motion. These modes, $`X_k(\tau )`$, obey an oscillator-type equation with a complex frequency that varies periodically with time:
$$X_k^{\prime \prime }+\left(\kappa ^2+qf^2-i\sqrt{q}f^{\prime}\right)X_k=0.$$
(1)
Here, the comoving momentum $`k`$ enters the equation in the combination $`\kappa ^2\equiv \frac{k^2}{\lambda \varphi _o^2}`$, and the character of the solutions is defined by the parameter $`q\equiv \frac{h^2}{\lambda }`$ ($`h`$ is the dimensionless Yukawa coupling). The background oscillations enter in the form $`f(\tau )=cn(\tau ,\frac{1}{\sqrt{2}})`$, having unit amplitude and a period $`T=7.416`$. Note that we are working in scaled conformal field and time variables, so the effects of expansion do not appear in the equations of motion. We can express the comoving occupation number of particles in a given state through the solutions of eq. (1):
$$n_k(\tau )=\frac{(\mathrm{\Omega }_k-\sqrt{q}f)}{2\mathrm{\Omega }_k}[|X_k^{\prime}|^2+\mathrm{\Omega }_k^2|X_k|^2-2\mathrm{\Omega }_k\mathrm{Im}(X_kX_k^{\prime *})],$$
(2)
where $`\mathrm{\Omega }_k^2\equiv \kappa ^2+qf^2`$. The energy density of created fermions is $`ϵ_\psi =\frac{1}{2\pi ^3}\int d^3k\,\mathrm{\Omega }_kn_k`$.
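As an illustration, eqs. (1) and (2) can be integrated directly; in the sketch below, the values of $`q`$ and $`\kappa ^2`$ and the vacuum-like normalization of the initial data are assumptions of ours, not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

q, kappa2 = 1e2, 5.0                       # illustrative parameters

def f_fp(tau):
    """f = cn(tau, k=1/sqrt(2)) and its derivative; scipy's ellipj
    takes the parameter m = k^2 = 1/2."""
    sn, cn, dn, _ = ellipj(tau, 0.5)
    return cn, -sn * dn

def rhs(tau, y):
    """Mode equation (1) with y = (Re X, Im X, Re X', Im X')."""
    f, fp = f_fp(tau)
    w2 = kappa2 + q * f**2 - 1j * np.sqrt(q) * fp
    a = -w2 * (y[0] + 1j * y[1])
    return [y[2], y[3], a.real, a.imag]

Om0 = np.sqrt(kappa2 + q)                  # f(0) = 1
y0 = [1.0 / np.sqrt(2.0 * Om0), 0.0, 0.0, -np.sqrt(Om0 / 2.0)]
sol = solve_ivp(rhs, (0.0, 4 * 7.416), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

tau = np.linspace(0.0, 4 * 7.416, 2000)
Y = sol.sol(tau)
X, Xp = Y[0] + 1j * Y[1], Y[2] + 1j * Y[3]
f = ellipj(tau, 0.5)[1]
Om = np.sqrt(kappa2 + q * f**2)
n_k = (Om - np.sqrt(q) * f) / (2.0 * Om) * (
    np.abs(Xp)**2 + Om**2 * np.abs(X)**2
    - 2.0 * Om * np.imag(X * np.conj(Xp)))   # occupation number, eq. (2)
```

With these initial data $`n_k(0)=0`$; the subsequent evolution illustrates the periodic behavior described below.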
It turns out that, for all $`\kappa ^2`$ and $`q`$, the solutions of eq. (1) are periodic in time. This is shown by the numerical solutions for the comoving occupation number in Fig. (2). Furthermore, it is easy to show that the comoving occupation number defined by eq. (2) obeys $`n_k\le 1`$, in accordance with the Pauli principle. We see from Fig. (2) that, while the occupation number exhibits some high-frequency oscillations (period $`<\frac{T}{2}`$), the most interesting behavior occurs over longer periods. If we average the occupation number over an inflaton period, $`\overline{n}_k(\tau )=\frac{1}{T}\int _\tau ^{(\tau +T)}𝑑\tau ^{\prime}n_k(\tau ^{\prime})`$, the average occupation number is found to obey the simple law $`\overline{n}_k(\tau )=F_k\mathrm{sin}^2\nu _k\tau `$. For a given theory, i.e. for a given value of the resonance parameter $`q`$, $`F_k\le 1`$ is a momentum-dependent amplitude and $`\nu _k`$ is a momentum-dependent frequency. In Fig. (4) the amplitude $`F_k`$ as a function of $`\kappa ^2`$ is plotted for several values of $`q`$. We see that the perturbative expectation is only met for $`q\lesssim 10^{-4}`$.
In fact, for large $`q`$, the fermions are excited up to $`\kappa ^2\sim \sqrt{q}`$. This is the same result as for the broad bosonic resonance, and can be understood in the same manner. The comoving occupation number, eq. (2), is an adiabatic invariant of the mode equation (1). The condition that the modes $`X_k`$ evolve non-adiabatically is $`\dot{\mathrm{\Omega }}_k\gtrsim \mathrm{\Omega }_k^2`$, which leads to the condition $`\kappa ^2\lesssim \sqrt{q}`$ for non-adiabatic evolution and, thus, particle creation. This occurs much more rapidly than one would expect from perturbation theory. In Fig. (4) the period of mode oscillations $`\frac{\pi }{\nu _kT}`$ is plotted in units of inflaton oscillations. For all $`q`$, the bands saturate after only $`10`$–$`100`$ inflaton oscillations.
Turning to the energetics for the most interesting case of broad-resonance excitation, $`q\gg 1`$, we find $`ϵ_\psi \simeq 0.1h^2q^{1/4}ϵ_\varphi `$, where the inflaton energy is $`ϵ_\varphi =\frac{1}{4}\lambda \varphi _o^4`$. In chaotic $`\frac{1}{4}\lambda \varphi ^4`$-inflation, $`\lambda \sim 10^{-13}`$. If the resonance parameter $`q`$ is large but the coupling parameter is small, $`h\lesssim 0.1`$, only a small fraction of the inflaton's energy will be converted into fermions. Although explosive decay will not occur for this model, more general theories, such as hybrid models, can have large enough resonance parameters to allow efficient decay to fermions.
If the fermion field has a small bare mass term or if we consider inflation with a $`\frac{1}{2}m_\varphi ^2\varphi ^2`$ potential, the conformal invariance of the theory will be broken. In this case, the occupation number of fermions for $`q1`$ no longer evolves periodically but becomes stochastic, rapidly fluctuating between $`n_k=1`$ and $`n_k=0`$. This destroys the well defined resonance bands depicted in Fig. (4) and can lead to the production of super-massive fermions of mass $`m_\psi h\varphi _o`$. These fermions can come to dominate the energy density of the universe or survive as massive relics.
As a particular example of the changes brought by expansion, consider $`m_\varphi ^2\varphi ^2`$-inflation with a Yukawa coupling to a still massless fermion. In the broad resonance case, one gets a sphere in comoving momentum space with average occupation $`\overline{n}_k=\frac{1}{2}`$ that expands with time. A snapshot of this sphere is shown in Fig. (2).
|
no-problem/9905/gr-qc9905025.html
|
ar5iv
|
text
|
# Nonexistence theorems for traversable wormholes
## Abstract
Gauss–Bonnet formula is used to derive a new and simple theorem of nonexistence of vacuum static nonsingular lorentzian wormholes. We also derive simple proofs for the nonexistence of lorentzian wormhole solutions for some classes of static matter such as, for instance, real scalar fields with a generic potential obeying $`\varphi V^{}(\varphi )0`$ and massless fermions fields.
A lorentzian wormhole is a solution of Einstein equations with asymptotically flat regions connected by intermediary “throats”. These solutions have received considerable attention since Morris, Thorne and Yurtsever discussed the possibility of traversing them and their connection with time machines. (For a review, see .) Euclidean wormholes, i.e., wormhole solutions of Einstein equations with signature (+,+,+,+), have also been intensively studied in connection with the cosmological constant problem.
Originally, the analysis of lorentzian wormholes was restricted to the static spherically symmetrical case. Birkhoff’s theorem assures that the only vacuum spherically symmetrical lorentzian wormhole is the maximally extended Schwarzschild solution. However, in this case the “throat”connecting the two asymptotically flat regions ($`r\pm \mathrm{}`$) is singular for $`r=0`$, and hence it is not traversable. We recall that a lorentzian wormhole solution is said to be traversable if it does not contain horizons that prevent the crossing of the “throats” and if an observer crossing them does not experience strong tidal forces. In order to obtain a solution with a traversable wormhole, we are enforced to accept the presence of matter and/or to give up of the spherical symmetry. A list of recent solutions includes stationary and static axisymmetric electrovac cases, and solutions in Brans-Dicke, Kaluza-Klein, Einstein–Gauss–Bonnet, Einstein–Cartan and nonsymmetric field theories. Some dynamical, i.e., time-dependent, solutions have also been proposed.
For the Euclidean case, there are some theorems about the nonexistence of vacuum, i.e., Ricci flat, wormholes. In , it was presented a theorem stating that any asymptotically flat, nonsingular, Ricci flat metric in $`R^4\{N\mathrm{points}\}`$ is flat. The proof uses some topological invariants of four-dimensional manifolds that can be expressed by means of integrals of curvature invariants, viz. the signature $`\tau ()`$ and the Euler characteristic $`\chi ()`$. The latter can be expressed, for a closed orientable diffentiable manifold $``$ of dimension $`n=2p`$ endowed with a Riemannian metric $`g`$, by the Gauss-Bonnet formula as:
$$_{}ϵ_{a_1\mathrm{}a_n}R^{a_1a_2}\mathrm{}R^{a_{n1}a_n}=(1)^{p1}2^{2p}\pi ^pp!\chi (),$$
(1)
where $`R_a^b`$ is the curvature 2-form of $`g`$. The signature can also be expressed by an integral of curvature invariants. (See for references).
One can consider a four-dimensional manifold with $`N+1`$ asymptotically flat regions as being topologically equivalent to $`R^4`$ with $`N`$ points removed; one to each additional asymptotic region. Hence, the result of rules out the existence of vacuum nonsingular euclidean wormhole solutions. This result was extended to the non-empty case in , where it is shown that there is no nonsingular euclidean wormhole satisfying Einstein equations with conformal invariant matter fields obeying appropriate falloff conditions in the asymptotic regions.
For the lorentzian case, there are no equivalent theorems in the literature. We remind a long standing theorem due to Lichnerowicz, which states that any stationary, complete, asymptotically flat, and Ricci-flat lorentzian metric in $`R^4`$ is flat. However, this theorem cannot be extended to the case of many asymptotically flat regions ($`R^4\{N\mathrm{points}\}`$). The purpose of the present work is to contribute to fill this gap with the following theorem:
Any asymptotically flat, static, nonsingular, Ricci flat lorentzian metric in $`R^4\{N\mathrm{points}\}`$ is flat.
We will show that for the non-empty case some nonexistence theorems can also be formulated. The approach to prove our main result, based on the Gauss-Bonnet formula, will be similar to the used in . Although Gauss-Bonnet theorem is rather subtle for other signatures, we can use Chern’s intrinsic proof of the Gauss-Bonnet formula (1) with minor modifications.
Let us now briefly review some points of Chern’s proof with relevance to our purposes. In , Chern showed that $`\mathrm{\Omega }`$ can be written as $`\mathrm{\Omega }=d\mathrm{\Pi }`$ for a suitable $`(n1)`$-form $`\mathrm{\Pi }`$. We will consider here this problem for the case of an open four-dimensional manifolds with a lorentzian complete metric.
Be $`V`$ a continuous unit timelike vector field $`(V_aV^a=1)`$. Note that contrary to the case of $``$ closed, here there is no topological obstruction to the global existence of such a vector field. Let us introduce the vector-valued 1-form
$$\theta ^a=DV^a=dV^a+\omega _b^aV^b,$$
(2)
where $`\omega _b^a`$ is the Levi-Civita connection 1-form. Due to that $`V^aV_a=1`$, the 1-forms $`\theta ^a`$ are linearly dependent, indeed $`V_a\theta ^a=0`$. From (2) we have
$$d\theta ^a=\theta ^b\omega _b^a+R_b^aV^b.$$
(3)
Now, consider the following 3-forms
$`\varphi _0`$ $`=`$ $`ϵ_{abcd}V^a\theta ^b\theta ^c\theta ^d,`$ (4)
$`\varphi _1`$ $`=`$ $`ϵ_{abcd}V^a\theta ^bR^{cd}.`$ (5)
Using the linearly dependence of the $`\theta ^a`$, the Bianchi’s identities, (2), and (3) we get
$`d\varphi _0`$ $``$ $`3ϵ_{abcd}V^aV_fR^{fb}\theta ^c\theta ^d=\mathrm{terms}\mathrm{prop}.\mathrm{to}\omega _b^a,`$ (6)
$`d\varphi _1`$ $``$ $`ϵ_{abcd}\left(\theta ^a\theta ^bR^{cd}+V^aV_fR^{fb}R^{cd}\right)=\mathrm{terms}\mathrm{prop}.\mathrm{to}\omega _b^a.`$ (7)
The left-handed sides of (6) are clearly intrinsic expressions, and, by choosing normal coordinates around an arbitrary point, we can easily show that they indeed vanish.
We can cast (6) in a more convenient form by using the generalized Kronecker delta
$$\delta _{b_1\mathrm{}b_n}^{a_1\mathrm{}a_n}=\left|\begin{array}{ccc}\delta _{b_1}^{a_1}& \mathrm{}& \delta _{b_n}^{a_1}\\ \mathrm{}& & \mathrm{}\\ \delta _{b_1}^{a_n}& \mathrm{}& \delta _{b_n}^{a_n}\end{array}\right|.$$
(8)
For $`n>4`$, $`\delta _{b_1\mathrm{}b_n}^{a_1\mathrm{}a_n}`$ vanishes identically. In particular, one has
$$\delta _{abcdf}^{a^{}b^{}c^{}d^{}f^{}}=\delta _a^f^{}\delta _{bcdf}^{a^{}b^{}c^{}d^{}}\delta _b^f^{}\delta _{acdf}^{a^{}b^{}c^{}d^{}}+\delta _c^f^{}\delta _{abdf}^{a^{}b^{}c^{}d^{}}\delta _d^f^{}\delta _{abcf}^{a^{}b^{}c^{}d^{}}+\delta _f^f^{}\delta _{abcd}^{a^{}b^{}c^{}d^{}}=0$$
(9)
Using that $`V_aV^a=1`$, the linearly dependence of $`\theta ^a`$, and (9) we obtain
$`ϵ_{abcd}V^aV_fR^{fb}\theta ^c\theta ^d`$ $`=`$ $`{\displaystyle \frac{1}{2}}ϵ_{abcd}R^{ab}\theta ^c\theta ^d,`$ (10)
$`ϵ_{abcd}V^aV_fR^{fb}R^{cd}`$ $`=`$ $`{\displaystyle \frac{1}{4}}ϵ_{abcd}R^{ab}R^{cd}.`$ (11)
From (10) we finally get $`\mathrm{\Omega }=ϵ_{abcd}R^{ab}R^{cd}=d\mathrm{\Pi },`$ where
$$\mathrm{\Pi }=4\left(\varphi _1\frac{2}{3}\varphi _0\right)=4ϵ_{abcd}V^a\theta ^b\left(R^{cd}\frac{2}{3}\theta ^c\theta ^d\right).$$
(12)
The expression (12) is essentially the result of Chern. The difference is that in the case of closed manifolds, $`\mathrm{\Pi }`$ is not defined everywhere. In general, continuous vector fields have some singular points in a closed manifold, and thus we cannot construct a vector field with $`V_aV^a0`$ everywhere. As already said, in the present case there is no topological obstruction to the existence of a nowhere vanishing vector field.
We can now prove our result. For static, Ricci-flat lorentzian metrics $`\mathrm{\Omega }`$ is non negative. To see it, first note that with $`R_{ab}=0`$ one has
$$\mathrm{\Omega }=\frac{1}{6}\left(R_{abcd}R^{abcd}4R_{ab}R^{ab}+R^2\right)\nu =\frac{1}{6}R_{abcd}R^{abcd}\nu ,$$
(13)
where $`\nu `$ is the standard volume form in $`R^4`$. Consider now normal coordinates around an arbitrary point $`P`$. We have
$$\mathrm{\Omega }_P=\frac{\nu _P}{6}\left(4\underset{\alpha \beta \gamma }{}\left(R_{1\alpha \beta \gamma }\right)_P^2+4\underset{\alpha \beta }{}\left(R_{1\alpha 1\beta }\right)_P^2+\underset{\alpha \beta \gamma \delta }{}\left(R_{\alpha \beta \gamma \delta }\right)_P^2\right).$$
(14)
Hereafter, Roman and Greek indices run, respectively, over $`\{1,2,3,4\}`$ and $`\{2,3,4\}`$. Due to the assumption of a nonsingular $`g`$, we have $`\mathrm{\Omega }<\mathrm{}`$. The hypothesis that $`g`$ is static assures that there is always a coordinate system $`\{x^a\}`$ for which
$$g_{ab}=\left(\begin{array}{cc}\mathrm{\Delta }& 0\\ 0& h_{\alpha \beta }\end{array}\right),$$
(15)
where $`\mathrm{\Delta }>0`$ and $`g_{ab}`$ does not depend on $`x^1`$. One can check that for metrics of the type (15), $`R_{1\alpha \beta \gamma }=0`$, and consequently $`\mathrm{\Omega }0`$. Moreover, the equality holds only if $`R_{abcd}=0`$. Thus, in the case of $`_𝒳\mathrm{\Omega }=0`$, we have that $`R_{abcd}=0`$ in $`𝒳`$. This will be the strategy of our proof. The assumption of an asymptotically flat metric guarantees that for each asymptotic region one has
$`\underset{r\mathrm{}}{lim}\mathrm{\Delta }1`$ $`=`$ $`O^{\mathrm{}}\left({\displaystyle \frac{1}{r^{m+\epsilon }}}\right),`$ (16)
$`\underset{r\mathrm{}}{lim}h_{\alpha \beta }\delta _{\alpha \beta }`$ $`=`$ $`O^{\mathrm{}}\left({\displaystyle \frac{1}{r^{m+\epsilon }}}\right).`$ (17)
for some $`m\{0,1,2,\mathrm{}\}`$ and $`0<\epsilon 1`$, where $`r^2=x^\alpha x^\beta \delta _{\alpha \beta }`$. If $`F=O^{\mathrm{}}(f)`$ it means that $`F=O(|f|)`$, $`F^{}=O(|f^{}|)`$, and so on. We can construct the 3-form $`\mathrm{\Pi }`$ starting from the timelike unit vector $`V=\frac{1}{\sqrt{\mathrm{\Delta }}}\frac{}{x^1}`$. From (16), we obtain the following expression, valid for each asymptotic region,
$$\underset{r\mathrm{}}{lim}\mathrm{\Pi }=O^{\mathrm{}}\left(\frac{1}{r^{m+3+\epsilon }}\right).$$
(18)
Integrating $`\mathrm{\Omega }`$ over $`=R^4\{N\mathrm{points}\}S^4\{(N+1)\mathrm{points}\}`$ and with the assumption of a nonsingular $`g`$ one has
$$_{}\mathrm{\Omega }=\underset{i=1}{\overset{N+1}{}}__i\mathrm{\Pi },$$
(19)
where the boundaries $`_i`$ correspond to the asymptotic regions. For each of these regions, with the asymptotic conditions (16), we have
$$\underset{r\mathrm{}}{lim}\mu \left(_i\right)=O^{\mathrm{}}\left(r^3\right),$$
(20)
where $`\mu \left(_i\right)`$ denotes the measure of the boundary $`_i`$. From (18) and (20) we have finally that the right handed side of (19) vanishes, establishing our result.$`\mathrm{}`$
It is shown in that, for the euclidean case, sometimes the matter equations themselves can be used to rule out non-vacuum wormholes. The same arguments can be applied here for some static matter fields. Let us take, for instance, a real scalar field $`\varphi (x)`$ with a potential obeying $`\varphi V^{}(\varphi )0`$. The hypothesis of a nonsingular $`g`$ requires that $`\varphi `$ and $`_a\varphi `$ be smooth and bounded on $``$. The corresponding equation in this case is
$$D_aD^a\varphi =V^{}(\varphi ).$$
(21)
Multiplying by $`\varphi `$ and integrating over $``$ one obtains
$$_{}\left(g^{ab}(_a\varphi )(_b\varphi )+\varphi V^{}(\varphi )\right)𝑑\mathrm{vol}\underset{i=1}{\overset{N+1}{}}__i\varphi _a\varphi d\mathrm{\Sigma }^a=0.$$
(22)
With the assumption that $`\varphi `$ obeys an asymptotic condition like
$$\underset{r\mathrm{}}{lim}\varphi =O^{\mathrm{}}\left(\frac{1}{m+\epsilon }\right),$$
(23)
the boundary terms in (22) vanish. This assumption requires that $`V^{}(\varphi )=0`$ for $`\varphi =0`$. The first term in (22) is, for the static case, nonnegative, implying that $`\varphi `$ must vanish in $``$ if $`\varphi V^{}(\varphi )0`$. We make here the same remark done for the euclidean case: if $`V^{}(\varphi )=0`$ for some $`\varphi 0`$, and $`\varphi `$ assumes to a nonzero value in some of the asymptotic regions, the boundary term in (22) may not vanish in general. A nonexistence result holds also for conformally coupled massless fields. In this case, we have
$$D_aD^a\varphi \frac{R}{6}\varphi =0.$$
(24)
However, Einstein equations imply in this case that $`R=0`$, and we get in fact a particular case of (22).
Another example of nonexistence of non-empty wormhole solutions is the case of massless fermions. In this case, the matter equations are
$$i_a\gamma ^a\psi =0.$$
(25)
Applying the Dirac operator $`i_a\gamma ^a`$ again one gets a conformally coupled Klein-Gordon equation for each component of $`\psi `$
$$\left(D_aD^a\frac{1}{6}R\right)\psi =0.$$
(26)
Einstein equations also imply that $`R=0`$ in this case. Also, we can redefine $`\psi `$ in order to have only real components. As in the previous cases, provided that $`\psi `$ obeys appropriate falloff conditions in the asymptotic regions, we conclude that there is no wormhole solution with massless fermions.
We finish noticing that, with the same procedure use here, we can show that in $`R^4\{N\mathrm{points}\}`$, any metric of signature $`(,,+,+)`$, Ricci-flat, asymptotically flat, static simultaneously with respect to two linearly independent time-like Killing vectors, is flat.
###### Acknowledgements.
The author is grateful to DAAD, CNPq and FAPESP for the financial support, and to Prof. H. Kleinert and Dr. A. Pelster for the warm hospitality at the Freie Universität Berlin.
|
no-problem/9905/math9905052.html
|
ar5iv
|
text
|
# A note on generating functions
## A note on generating functions
Suppose $`A`$ is an affine symplectic space. There is an affine-invariant view of generating functions of symplectic transformations of $`A`$. Namely, let $`H`$ be a function on $`A`$. At any point $`x`$ we take the vector $`u_x`$ defined by $`d_xH=u_x\mathrm{}\omega `$ ($`\omega `$ is the symplectic form) and put it in $`A`$ so that $`x`$ lies in its middle. Then the map $`\mathrm{\Phi }_H`$ sending the tails of $`u_x`$’s to their heads is a symplectic transformation:
Notice that for infinitesimal $`H`$, this is the usual infinitesimal transformation generated by Hamiltonian $`H`$. The map $`H\mathrm{\Phi }_H`$ is a kind of Cayley transform: choosing an origin in $`A`$ (to turn it to a vector space) and restricting ourselves to quadratic forms, we get the usual Cayley transform $`𝔰𝔭Sp`$.
Symplectic transformations can be composed. The corresponding composition of generating functions is $`H(x)=H_1(x_1)+H_2(x_2)+\text{symplectic area of }\mathrm{}PQR`$:
Recall that the integral kernel of the Moyal product is $`K(x_1,x_2,x)=\mathrm{exp}(\sqrt{1}\times \text{symplectic area of }\mathrm{}PQR/\mathrm{})`$. We may notice that $`\mathrm{exp}(\sqrt{1}H/\mathrm{})`$ is the classical part of the Moyal product of $`\mathrm{exp}(\sqrt{1}H_1/\mathrm{})`$ and $`\mathrm{exp}(\sqrt{1}H_2/\mathrm{})`$.
Let us have a look where these claims come from. A symplectic transformation of $`A`$ is (more-or-less) the same as a Lagrangian submanifold of $`\overline{A}\times A`$ (the graph of the map). For each point $`xA`$ the symmetry with respect to $`x`$ is a symplectic map. Identity is also a symplectic map, so that we have many Lagrangian submanifolds of $`\overline{A}\times A`$:
In this way we have an isomorphism between $`\overline{A}\times A`$ and $`T^{}A`$. Explicitely (as one immediately sees from the picture), a pair $`(P,Q)\overline{A}\times A`$ corresponds to $`((P+Q)/2,(QP)\mathrm{}\omega )T^{}A`$. Here the vector-and-its-midpoint picture appears.
Correspondence between generating functions and symplectic transformations is clear now: $`dH`$ is a Lagrangian submanifold of $`T^{}A`$, and therefore of $`\overline{A}\times A`$. Let us also have a look where the composition law comes from. $`\overline{A}\times A`$ is a symplectic groupoid (the pair groupoid of $`A`$). The graph of its multiplication is a Lagrangian submanifold; using the identification of $`T^{}A`$ and $`\overline{A}\times A`$, it should be given by a closed 1-form on $`A\times A\times A`$; this 1-form is the differential of the function $`(x_1,x_2,x)\text{symplectic area of }\mathrm{}PQR`$. The composition of generating functions and its connection with Moyal product follows.
For the fun of it, let us make a similar construction, replacing $`A`$ by the sphere $`S^2`$ with the area 2-form. Again, symmetry with respect to a point is a symplectic map, therefore we locally have a similar identification between $`\overline{S^2}\times S^2`$ and $`T^{}S^2`$; more precisely, there is an isomorphism between the subset of covectors in $`T^{}S^2`$ of length less than 2 and $`\overline{S^2}\times S^2`$ with erased pairs of antipodal points. Explicitely, to a non-antipodal pair $`(P,Q)`$ we associate a point in $`TS^2`$ (and thus, via $`\omega `$, a point in $`T^{}S^2`$) as on the picture:
$`x`$ is the midpoint of the shorter geodesic arc $`PQ`$ and $`uT_xS^2`$ appears by its orthogonal projection. This picture can be derived from the famous theorem of Archimedes, claiming that certain map between cylinder and sphere is area-preserving.
As a result, we have a similar picture of generating functions: for a function $`H`$ on $`S^2`$ and any point $`x`$ we take the vector $`u_x`$ defined by $`d_xH=u_x\mathrm{}\omega `$, place it into the tangent plane so that $`x`$ is in its middle and project it into the sphere; $`\mathrm{\Phi }_H`$ maps $`P`$ to $`Q`$. Composition rule looks as before (only triangles are spherical now).
Generally, this picture works with no changes for arbitrary symmetric symplectic space $`M`$. Using the symmetries we locally identify $`\overline{M}\times M`$ with $`T^{}M`$. Multiplication in this pair groupoid is again given by the symplectic area of a surface bounded by the geodesic triangle $`PQR`$ with $`x_1,x_2,x`$ being the midpoints of its sides. The identification between $`\overline{M}\times M`$ and $`T^{}M`$ is via a projection of $`M`$ into $`T_xM`$, as in the case $`M=S^2`$: Up to coverings, we embed $`M`$ into an affine space $`A`$. For any $`xM`$, the symmetry with respect to $`x`$ will be extended to an involution $`\sigma _x`$ of $`A`$; we project $`M`$ to $`T_xM`$ in the direction of $`A^{\sigma _x}`$ (the subspace of $`A`$ fixed by $`\sigma _x`$). Namely, since $`M`$ is a symmetric space, it is (a covering of) $`G/G^\sigma `$, where $`G`$ is a Lie group and $`\sigma `$ is an involutory automorphism of $`G`$. Let $`𝔤=𝔤^\sigma 𝔭`$ be the decomposition of $`𝔤`$ to $`\pm 1`$ eigenspaces of $`d\sigma `$ (to make $`G/G^\sigma `$ into a symmetric symplectic space, one has to specify a $`G^\sigma `$-invariant symplectic form on $`𝔭`$). As a homogeneous symplectic space, $`M`$ can be embedded (up to coverings) into an affine space over $`𝔤^{}`$ via (non-equivariant) moment map. If $`xM`$ is fixed by $`G^\sigma `$, $`T_xM`$ is $`𝔭^{}`$ translated to $`x`$; we project $`M`$ to $`T_xM`$ in the direction of $`𝔤_{}^{\sigma }{}_{}{}^{}`$.
Pavol Ševera, I.H.É.S. (IPDE postdoc)
severa@ihes.fr
|
no-problem/9905/astro-ph9905123.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
One of the main source of information about the physical properties of the intergalactic gas is the study of metal-line absorption in quasar spectra. Metal lines in the Lyman Limit Systems (LLS), – i.e. in systems with $`N_{\mathrm{HI}}<\mathrm{\hspace{0.33em}2}\times 10^{17}`$ cm<sup>-2</sup> which are optically thin in the Lyman continuum, – are of particular interest in this regard because the Lyman continuum opacity does not affect the metal abundance measurements in this case . Besides the LLSs often show carbon and silicon line absorption from different ionization states which allows estimating the ionization parameter $`U`$ (the ratio of the number of photons with energies above one Rydberg to the number of atoms, $`U=n_\gamma /n_\mathrm{H}`$).
The LLSs are usually assumed to arise in the outer regions of intervening galactic halos where the electron density is rather low, $`n_e10^210^3`$ cm<sup>-3</sup>. For such tenuous gas the collisional ionization is not important in determining the ionization fractions of ions since typical kinetic temperatures for the gas showing absorption in CII – CIV lines and in SiII – SiIV lines are of the order $`10^42\times 10^4`$ K. It follows, that the outer parts of galactic halos are mainly photoionized and that the thermal and the ionization state of the gas may be specified by the ionization parameter $`U`$ only, . Thus, for a given value of $`n_\gamma `$ (or the specific radiation flux $`J_0`$ at 1 Rydberg), the dispersion of $`U`$ along the line of sight represents a varying gas density.
In recent years, great efforts have been made towards the precise measurements of metal line profiles in QSO spectra obtained with high spectral resolution, FWHM $`57`$ km s<sup>-1</sup>. However, current theoretical models cannot give a complete prediction to match the observational data .
The observations often show complex structures of the line profiles. It is traditional to treat them using the standard Voigt fitting deconvolution procedure. This procedure is based on the assumption that the apparent fluctuations within the line profile are caused by density clumps (‘cloudlets’) with different radial velocities. The non-thermal (turbulent) velocity field inside each cloudlet is accounted for in the so-called microturbulent approximation. It was shown in , however, that the microturbulent analysis may produce unphysical kinetic temperatures.
The more general mesoturbulent approach (see and the references cited therein) is based on the assumption that the intensity fluctuations within the line profile arise mainly from the irregular Doppler shifts in the absorption coefficient caused by macroscopic large-scale, rather than thermal, motions. If the macroscopic velocity field has a correlation length not small as compared with the linear size of the absorbing region, then the radial velocity distribution may deviated significantly from the Gaussian model.
In a previous paper we developed a method aimed at recovering the kinetic temperature from complex metal-line spectra assuming a homogeneous gas density and a random velocity field. Here we outline our first results for the case when both the density and the velocity fields are of random nature.
## 2 The ERM procedure and results
The entropy-regularized $`\chi ^2`$-minimization (ERM) procedure utilizes complex but similar absorption line profiles of ions with different atomic weights to estimate the mean value of $`T_{\mathrm{kin}}`$ for the whole absorbing region. The similarity of the complex profiles of ions with different masses and ionization potentials stems from the equal ionization fraction for both of them. To illustrate this statement we used a model thoroughly described in : an optically thin gas ionized by a typical QSO photoionizing spectrum with the metallicity $`\mathrm{Z}/\mathrm{Z}_{}=0.1`$.
The consequent steps of our computational experiment are shown in Fig. 1. Panel (a) presents the random velocity field $`u(x)=v(x)/\sigma _{\mathrm{turb}}`$, where $`x`$ is the space coordinate $`s`$ along the line of sight in units of the linear size $`L`$ of the absorbing region. The fluctuating density field $`n(x)/n_0`$ is depicted in panel (b). Both the velocity and the gas density fields were calculated using the moving average method described in . To obtain the real gas density fluctuations, we used the log-normal distribution for the density contrast $`\delta =(nn_0)/n_0`$ with the rms value of $`\sigma _\delta =1`$ and $`n_0=2\times 10^3`$ cm<sup>-3</sup>. The rms value for the velocity field was assumed to be 20 km s<sup>-1</sup>, and the linear size of the region $`L=5`$ kpc. The chosen specific radiation flux $`J_0=5\times 10^{22}`$ ergs cm<sup>-2</sup> s<sup>-1</sup> Hz<sup>-1</sup> corresponds in our case to $`U_0=1.3\times 10^3`$. In panel (c) we plot the equilibrium kinetic temperature $`T_4(x)`$ in units $`10^4`$ K as a function of the ionization parameter $`U(x)`$ in accord with the numerical results . The fluctuations of $`U(x)`$ are shown in panel (d). For each species we calculated the density weighted mean temperature
$$T=\frac{n_{\mathrm{ion}}(s)T(s)𝑑s}{n_{\mathrm{ion}}(s)𝑑s}.$$
(1)
The corresponding values of $`T`$ we find for CII, SiII, CIV, and SiIV are 15120 K, 15140 K, 17120 K, and 16390 K, respectively.
These random fields lead in turn to complex profiles of CII$`\lambda 1334`$, SiII$`\lambda 1260`$, CIV$`\lambda 1548`$, and SiIV$`\lambda 1394`$. All profiles have been convolved with a Gaussian spectrograph function having the width of 7 km s<sup>-1</sup> (FWHM), and then a Gaussian noise of S/N = 75 has been added. The final patterns are shown in panels (e), (f), (g), and (h), respectively, by dots with corresponding error bars.
It is seen that the CII and SiII profiles are similar whereas the intensity fluctuations within the CIV profile differ from those for the SiIV line. This different behavior of low and high ionized species formed in the same absorbing region is clearly illustrated in panel (i). The ratio of fractional ionization for the CII and SiII lines is almost constant over the whole range of $`U`$-values, but the corresponding ratio for the CIV and SiIV lines is highly sensitive to the variation of $`U`$.
Next we considered these spectra as ‘observed’ and analyzed them in two ways. At first we applied the standard Voigt profile fitting procedure. The results are shown in Figures 1e–1h, in where the individual components are indicated by tick marks and the numbers give the corresponding parameters (the first line gives the column density, the second the $`b`$ parameter. In Fig. 1h the third line gives the derived temperature). The smooth curves show the resulting ‘theoretical’ profiles. We see that the derived temperatures vary between 2800 K and 45900 K whereas the input temperature (Fig. 1c) varies only between 14360 K and 22090 K. Contrary, when we apply our new ERM procedure to the CII and SiII lines we find from Fig. 1j (shaded area) a temperature of $`15200\pm 200`$ K. This value is in good agreement with the density averaged temperatures derived before.
This result reinforces our previous finding that the Voigt profile fitting procedure may lead to unphysical temperatures. On the other side it indicates that – at least for the cases studied so far – our new procedure gives a physically reasonable average value of the kinetic temperature.
This is a report on work in progress.
Acknowledgment. This work has been supported by the Deutsche Forschungsgemeinschaft. SAL thanks the conference organizers for financial assistance.
|
no-problem/9905/cond-mat9905025.html
|
ar5iv
|
text
|
# Vortex Plasma in a Superconducting Film with Magnetic Dots
## Abstract
We consider a superconducting film, placed upon a magnetic dot array. Magnetic moments of the dots are normal to the film and randomly oriented. We determine how the concentration of the vortices in the film depends on the magnetic moment of a dot at low temperatures. The concentration of the vortices, bound to the dots, is proportional to the density of the dots and depends on the magnetization of a dot in a step-like way. The concentration of the unbound vortices oscillates about a value, proportional to the magnetic moment of the dots. The period of the oscillations is equal to the width of a step in the concentration of the bound vortices.
PACS numbers: 74.60.Ge, 74.25.Dw, 74.76.-w
Superconductivity of thin films was studied for a long time . An important difference of the two-dimensional superconductors from the three-dimensional ones is related with the topological defects. Vortices appear in thin films of the superconductors which are of the first kind in the bulk . They can appear spontaneously even in the absence of the magnetic field. In specially prepared films with the size about the effective screening length, unbound vortices appear above the Berezinskii-Kosterlitz-Thouless transition -.
A recent surge of interest to this problem is associated with advances in preparation of magnetic nanostructures interacting with the superconducting films -. Magnetic field from the magnetic nanostructures (dots) gives rise to vortices and pins them. As a result, in a superconducting film placed upon an array of magnetic dots, a periodic field dependence of the magnetoresistance and superconducting transition temperature was observed , .
Theoretically the problem of a superconducting film supplied by a periodic array of magnetic dots was studied in Ref. . It was shown that the properties of such a system depend on the orientation of the magnetization of the dots, their mutual distances and the coercive field. In the case of strong coercive force, the reorientation of the magnetic moments is a slow process. Hence, a random array of magnetic moments occurs at zero-field cooling below the Curie temperature. If the easy magnetic axes are perpendicular to the film, each dot favors creation of a vortex . Thus, a random vortex structure appears in the superconductor. The vortices pinned by the dots induce a random potential in the film. If the period of the dot array is small enough, the random potential may be sufficient for creation of additional unpinned vortices. The resulting vortex plasma phase is expected to have a much larger resistance, than the pinned vortex state .
In the present paper we consider a toy model of the vortex plasma state which reproduces the phenomena predicted in , but leads also to new predictions of a rich phase diagram and elementary excitations. We study the dependence of the concentration of the vortices on the magnetic moment of a dot. The concentration of the pinned vortices is found to be proportional to the density of the dots. The concentration of the unbound vortices is less to the factor of order $`a/\lambda _{eff}`$, where $`\lambda _{eff}`$ is the effective screening length , $`a`$ the lattice constant of the dots. Both concentrations grow as the magnetic moment of a dot increases. The concentration of the pinned vortices depends on the magnetic moment in a step-like way. The concentration of the unpinned vortices has linear and oscillating components.
We consider first a thin film of the size $`\lambda _{eff}=\lambda ^2/d`$, where $`\lambda `$ is the London penetration length, $`d`$ the thickness of the film. Then we discuss what happens in a film, which is larger than the effective screening length $`\lambda _{eff}`$. We argue that the behavior of the vortex plasma does not depend qualitatively on the size of the system.
The energy of the system of the vortices is composed of three components. These are the single vortex energies, the energy of the vortex-vortex interaction and the coupling energy between the vortices and the dots. A single vortex energy is
$$E_i=ϵ_0n_i^2\mathrm{ln}\frac{\lambda _{eff}}{\xi },$$
(1)
$`n_i`$ being the vorticity, $`\xi `$ the core size and $`ϵ_0=\mathrm{\Phi }_0^2/16\pi ^2\lambda _{eff}`$, where $`\mathrm{\Phi }_0`$ is the magnetic flux quantum. The interaction of a pair of vortices separated by a distance $`r`$ decreases fast at $`r>\lambda _{eff}`$, while at $`r<\lambda _{eff}`$ it depends on the distance logarithmically
$$E_{ij}=2ϵ_0n_in_j\mathrm{ln}\frac{\lambda _{eff}}{r},$$
(2)
where $`n_i,n_j`$ are the vorticities. The interaction of a vortex and a dot is estimated in Ref. as
$$E_i^d=ϵ_0\frac{\mathrm{\Phi }_d}{\mathrm{\Phi }_0}n_i,$$
(3)
where $`\mathrm{\Phi }_d`$ is the magnetic flux generated by the dot. The total Hamiltonian of the system reads
$$H=\underset{i}{}E_i+\underset{i>j}{}E_{ij}+\underset{i}{}E_i^d.$$
(4)
The vortex plasma appears when the distance between the dots $`a\lambda _{eff}`$ . Then the number of the dots in the film of the size $`\lambda _{eff}`$ is $`N_D=(\lambda _{eff}/a)^21`$. In this case the system, governed by the Hamiltonian (4), displays strong collective effects. For simplification, we approximate the slow logarithmic dependence (2) at the scales $`a<r<\lambda _{eff}`$ with a constant. Then the model Hamiltonian mimicking the real Hamiltonian (4) has a form
$$H=U\underset{i}{}\sigma _in_i+\underset{i}{}E_i+2ϵ_0\underset{i>j}{}n_in_j,$$
(5)
where $`\sigma _i=\pm 1`$ describe two possible orientations of the magnetizations of the dots, $`U=ϵ_0\mathrm{\Phi }_d/\mathrm{\Phi }_0`$. This Hamiltonian can be rewritten as
$$H=U\underset{i}{}\sigma _in_i+ϵn_i^2+ϵ_0\left(\underset{i}{}n_i\right)^2,$$
(6)
where $`ϵ=ϵ_0\mathrm{ln}(a/\xi )`$.
The minimum of the energy (6) corresponds to zero total vorticity $`Q=_in_i`$. Indeed, if $`\mathrm{\Delta }Q`$ vortices are removed, so that the vorticity become $`Q^{}=Q\mathrm{\Delta }Q`$, the last term in (6) would decrease by the value proportional to $`Q\mathrm{\Delta }Q`$. At the same time, the maximal possible energy loss due to the first two terms of the Hamiltonian is proportional to $`\mathrm{\Delta }Q`$. Hence, decreasing the total vorticity is favorable unless $`Q1`$. In the system with the number of vortices $`(\lambda _{eff}/a)^21`$ this is practically zero. Thus, the “neutrality” condition must be satisfied:
$$\underset{i}{}n_i=0.$$
(7)
Our aim is to find the ground state of the Hamiltonian (6) with the constraint (7). The ground state depends on the parameter $`\kappa =U/ϵ`$. The discussion below is limited by the case $`\kappa \lambda _{eff}/a`$.
Let us assume that there are $`N`$ “positive” dots favoring creating positive vortices, and $`N+K`$ “negative” dots which favor negative vortices. Obviously, $`N\lambda _{eff}^2/(2a^2)`$ and $`K\lambda _{eff}/a`$ are random. Note that in the ground state all the unbound vortices have the same sign and the unit vorticity. Below we assume that the unbound vortices are positive. As seen below, this assumption is equivalent to the condition $`K>0`$. The case of the negative vorticity is completely analogous.
Let us consider an arbitrary dot with occupancy $`n`$. The neutrality condition (7) allows to change the occupancy by $`\pm 1`$ and simultaneously create an unbound vortex or antivortex. In the ground state these excitations can not decrease the energy. This gives a restriction on the possible values of the occupancy. The energies of the excitations are
$$\mathrm{\Delta }E=ϵ[1+(n\pm 1)^2(n\pm 1)\kappa ]ϵ[n^2n\kappa ]=ϵ[2\pm (2n\kappa )]0.$$
(8)
Hence, $`2n2\kappa 2n+2`$. Thus, at $`\kappa =2(q+\delta )`$, where $`q`$ is integer, $`0<\delta <1`$, the only possible values of $`n`$ are $`q`$ and $`q+1`$. At $`\kappa =2q`$ an additional possible occupancy is $`q1`$.
Let us show that a non-zero number of unbound vortices appears when $`\kappa `$ is equal to an even integer only. Indeed, for the system in the ground state, the energy must not decrease when an unbound vortex is placed onto a dot, or when an unbound vortex and a bound antivortex are removed. The consideration, similar to the derivation of Eq. (8), leads to a condition
$$2m_{min}\kappa 2n_{max},$$
(9)
where $`m_{min}`$ and $`n_{max}`$ are the minimal occupancy of the positive dots and the maximal occupancy of the negative ones respectively. Eq. (9) is compatible with (7) only if the inequalities in (9) are actually equalities. Indeed, the number of the positive vortices $`V^+Nm_{min}`$ and the negative vortex number $`V^{}(N+K)n_{max}`$. Since $`KN`$, it follows from the neutrality condition $`V^+=V^{}`$ that $`m_{min}=n_{max}`$. Hence, $`\kappa =2k`$ where $`k`$ is an integer. Note that Eq. (9) does not contradict Eq. (7) if $`K0`$ only. We assume below that this is the case.
Let us consider the case of $`\kappa =2(q+\delta )`$, $`0<\delta <1`$. At these values of $`\kappa `$ unbound vortices are absent. Let the number of the positive dots with occupancy $`q`$ be $`S`$. The numbers of the negative dots with occupancies $`q`$ and $`q+1`$ are then determined by the neutrality condition. The energy as a function of $`S`$ is $`E(S)=\mathrm{constant}+2Sϵ[\kappa 2q1]`$. Depending on $`\kappa `$, the minimum of the energy $`E(S)`$ corresponds to $`S=0`$ or $`S=N`$. At $`\kappa =2q+1`$ the ground state is degenerate.
Now we study the case of $`\kappa =2q`$. We denote the numbers of the positive dots with occupancies $`q1`$ and $`q+1`$ as $`A_+`$ and $`B_+`$ respectively, the numbers of the negative dots with occupancies $`q1`$ and $`q+1`$ as $`A_{}`$ and $`B_{}`$ respectively. The number of the unbound vortices is determined by the neutrality. The energy dependence on $`A_\pm ,B_\pm `$ is given by the expression $`E=\mathrm{constant}+2ϵ[A_++B_{}]`$. Thus, in the ground state $`A_+=B_{}=0`$. At the same time the energy does not depend on $`A_{}`$ and $`B_+`$.
Now we are in position to describe all the ground states. Below we consider the case when the number of the negative dots $`(N+K)`$ is larger than the number of the positive dots $`N`$. The opposite case is analogous.
1) At $`\kappa <1`$ the vortices are absent.
2) At $`\kappa =2n1`$ the ground state is degenerate. All the dots can be divided into 4 groups with occupancies $`n`$ positive, $`(n1)`$ positive, $`(n1)`$ negative and $`n`$ negative vortices on each dot, respectively. The numbers of dots in these groups are $`NS`$, $`S`$, $`nK+S`$ and $`N(n1)KS`$, respectively, where $`S`$ is any integer satisfying inequality $`SN(n1)K`$.
3) At $`2n1<\kappa <2n`$, $`n`$ vortices are bound with each positive dot and with $`N(n1)K`$ negative dots. Each of the other $`nK`$ negative dots is occupied by $`(n1)`$ vortices.
4) At $`\kappa =2n`$ the ground state is degenerate. There are 4 groups of dots with occupancies $`(n1)`$ negative, $`n`$ negative, $`(n+1)`$ positive and $`n`$ positive vortices on each dot. The groups contain $`S`$, $`N+KS`$, $`nKSP`$ and $`N+S+PnK`$ dots, respectively, where integer $`P`$ and $`S`$ satisfy inequality $`S+PnK`$. Besides, there are $`P`$ unbound vortices.
5) At $`2n<\kappa <(2n+1)`$ each negative dot is occupied by $`n`$ vortices. $`nK`$ positive dots are occupied by $`(n+1)`$ vortices and the other $`NnK`$ positive dots are occupied by $`n`$ vortices.
Thus, the concentration $`c_b`$ of the bound vortices obeys a step-like low
$$c_b=\left[\frac{\kappa +1}{2}\right]\frac{1}{a^2}+O\left(\frac{1}{\lambda _{eff}a}\right),$$
(10)
where the square brackets denote the integer part. The unbound vortices exist only at $`\kappa =2n`$, and their concentration $`c_u`$ satisfies a relation
$$c_u\frac{n|K|}{\lambda _{eff}^2}$$
(11)
where $`|K|`$ is the absolute value of the difference between the numbers of the positive and negative dots. The disorder average of $`|K|`$ is $`2\lambda _{eff}/(\sqrt{\pi }a)`$.
The fact, that the concentration of the unbound vortices is proportional to $`1/(\lambda _{eff}a)`$, can be understood in the framework of the approach . It was argued in Ref. that the bound vortices induce a random potential with the characteristic variation $`ϵ_0\lambda _{eff}/a`$. This leads to creation of unbound vortices screening the random potential. The concentration at which appearance of new vortices becomes unfavorable is $`c_u1/(\lambda _{eff}a)`$.
The model (5) is oversimplified in two respects. First, within the model the potential created by the vortices is completely screened. It is natural that in the ground state the potential is screened at the scales $`r>a`$, larger than the intervortex distance. However, the potential can not be screened at the scales $`ra`$. This unscreened potential provides an additional contribution to the Hamiltonian (5) leading to dependence of the vortex energy on the position. This contribution lifts the degeneracy of the ground state at even $`\kappa `$ and fixes the number of the unbound vortices. It also makes the creation of the unbound vortices favorable at non-integer values of $`\kappa `$. This is a consequence of the fact that the low-lying states are almost degenerate at $`\kappa 2n`$, and the distances between the low-lying levels may turn out to be less than the value of the unscreened potential variation. Still the maxima of the unbound vortex density correspond to the integer even values of $`\kappa `$. Another effect of the incomplete screening is the lifting of the ground state degeneracy at $`\kappa =2n+1`$. In this case the total number of the bound vortices is determined by the unscreened potential. This potential smears the concentration steps at the odd integer values of $`\kappa `$.
The second simplification consists in the choice of the energy of a vortex upon a dot in the form $`E=E_i+E_i^d`$, where $`E_i`$ and $`E_i^d`$ are given by Eqs. (1,3). This value of $`E`$ provides only an upper boundary for the energy of a vortex pinned by a dot. Since the energies of the bound vortices are lower than it is assumed in Eq. (5), the creation of the unbound vortices is less favorable. As a consequence their concentration in a more realistic model is lower than (11).
We expect that the behavior of a large film is qualitatively the same as the behavior of the film of the size $`\lambda _{eff}`$. The interaction of the vortices at the distance $`r>\lambda _{eff}`$ can be calculated with the method of Ref. . The main contribution originates from the interaction of the magnetic fields, induced by the vortices. It depends on the distance as $`V1/r`$. Due to the screening, the blocks of the size $`\lambda _{eff}`$ are to be considered not as free charges, but as dipoles. Their potential obeys $`1/r^2`$ law. Since the orientation of the dipoles is random, the interaction of the distant blocks is irrelevant. Thus, to cut-off the intervortex interaction at some scale $`R\lambda _{eff}`$ is a reasonable approximation. Then a qualitatively correct picture is given by the following model. The system is divided into blocks of the size $`R`$. The interaction of the vortices from the different blocks is neglected. Inside a block the Hamiltonian (4) is valid. The main features of this model are the same as in the film of the size $`\lambda _{eff}`$.
Besides the ground states, we have determined the spectrum of elementary excitations. In particular, the energy cost of an unbound vortex is $`ϵ_0|\kappa 2[(\kappa +1)/2]|`$, where $`[\mathrm{}]`$ denotes the integer part. Our approach is similar to the idea of Efros and Shklovskii . They found a soft Coulomb gap in the doped semiconductor. In our case a slower dependence of the interaction on the distance leads to a gap of the finite width. Within the toy-model the gap disappears at the even integer $`\kappa `$. Although this result is most probably an artifact of the model, we expect that the gap is minimal at the even values of $`\kappa `$. An interesting question concerns the role of the collective excitations. For the problem of Coulomb blockade it was recently discussed in Ref. .
The superconducting film includes regions of the size $`\lambda _{eff}`$ with correlated positive or negative values of the random potential. The behavior of the vortices near the borders of the regions is relevant for the transport properties. In particular, an important process for the resistivity is the transport of the free vortices between the regions with the same sign of the random potential through the points of intersection of the borders. Another important process is the transport along the borders, since the borders constitute a percolating equipotential cluster. The resistivity depends on the temperature, potential barriers and concentration of the unbound vortices. It increases as the number of the unbound vortices grows. At low temperatures a complicated energy landscape may lead to the glassy dynamical behavior.
In conclusion, we obtain that both, the concentrations of the bound and unbound vortices, increase as the magnetization of a dot increases. The concentration of the bound vortices depends on the magnetization in a step-like way and is proportional to the density of the dots. The concentration of the unbound vortices is proportional to $`1/(\lambda _{eff}a)`$. Its dependence on the magnetic moment of a dot can be represented as oscillations about a value, proportional to the dot magnetization. The period of the oscillations is the same as the width of a step of the bound vortices concentration.
This work was partly supported by grants DEFG03-96ER45598, NSF DMR-97-05182, THECB ARP 010366-003. The research of DEF was partly supported by the Russian Program of Leading Scientific Schools, grant 96-15-96756.
|
no-problem/9905/patt-sol9905005.html
|
ar5iv
|
text
|
# Discrete breathers in systems with homogeneous potentials - analytic solutions
## I Introduction
The understanding of dynamical localization in classical spatially extended and ordered systems experienced recent considerable progress , , . Specifically time-periodic and spatially localized solutions of the classical equations of motion exist, which are called (discrete) breathers, or intrinsic localized modes. The attribute discrete stands for the spatial discreteness of the system, i.e. instead of field equations one typically considers the dynamics of degrees of freedom ordered on a spatial lattice. The lattice Hamiltonians are invariant under discrete translations in space. The discreteness of the system produces a cutoff in the wavelength of extended states, and thus yields a finite upper bound on the spectrum of eigenfrequencies $`\mathrm{\Omega }_q`$ (phonon band) of small-amplitude plane waves (one usually assumes that usually for small amplitudes the Hamiltonian is in leading order a quadratic form of the degrees of freedom). If now the equations of motion contain nonlinear terms, the nonlinearity will in general allow to tune frequencies of periodic orbits outside of the phonon band, and if all multiples of a given frequency are outside the phonon band too, there seems to be no further barrier preventing spatial localization (for reviews see ,).
Discrete breathers have been recently experimentally detected in weakly coupled waveguides , MX solids and Josephson junction ladders . The broad spectrum of applicability of the localization concept described above makes it worthwhile to continue theoretical efforts to characterize the properties of discrete breathers.
So far there is very little knowledge about explicit forms of breather solutions. Except for trivial limiting cases like the antiintegrable limit, i.e. the case of uncoupled degrees of freedom , we know only about the solutions of the Ablowitz-Ladik lattice (an integrable one-dimensional variant of the nonlinear Schrödinger equation). What is generically available is the abstract knowledge about existence or nonexistence of discrete breathers for a specific system, and the spatial decay properties far from the center of the breather, where due to the smallness of the amplitudes linearizations or other perturbation techniques are applicable. Note that generically breathers can appear in nonintegrable systems.
From the above it appears that nonintegrability spoils the possibility to obtain analytic forms of the breather solution. We will show that this is not the case by constructing Hamiltonians which allow for explicit solutions and are most probably not integrable. Moreover we will even construct solutions for two- and three-dimensional lattices. Although these models are not motivated by certain applications, the study of their properties can be helpful with respect to discrete breathers.
## II A bond-ordered quasi-linear chain
In this section we consider a one-dimensional model with the Hamiltonian
$$H=W_k+W_p,W_k=\underset{l}{}\frac{1}{2}p_l^2,W_p=\frac{1}{2}\underset{l}{}(x_lx_{l+1})^2h(s_l),s_l(\{x_l^{}\})=\frac{x_l}{x_{l+1}}+\frac{x_{l+1}}{x_l}.$$
(1)
The integer $`l`$ marks the lattice sites of the chain. The equations of motion read
$$\dot{x}_l=p_l,\dot{p}_l=\frac{W_p}{x_l}.$$
(2)
Since we are interested in obtaining solutions which decay to zero at spatial infinities and can be interpreted as excitations above some regular ground state $`x_l=0`$, we demand that $`h(s)`$ behaves regularly and especially that $`h(s\pm \mathrm{})`$ does not diverge. The potential energy $`W_p`$ is a homogeneous function of the coordinates $`x_l`$ since $`W_p(\{\lambda x_l\})=\lambda ^2W_p(\{x_l\})`$. The homogeneity of the potential function can be used to separate time ($`G(t)`$) and space ($`u_l`$) variables as done e.g. in ,:
$$x_l(t)=u_lG(t),\ddot{G}=\kappa G,\kappa u_l=\frac{W_p(\{u_l^{}\})}{u_l}.$$
(3)
Here $`\kappa >0`$ is needed to ensure boundness of the solutions. Indeed the time dependence is then given by
$$G(t)=A\mathrm{cos}(\omega t+\varphi ),\omega ^2=\kappa .$$
(4)
The equations for the spatial amplitudes $`u_l`$ read
$`\kappa =\left(1{\displaystyle \frac{u_{l+1}}{u_l}}\right)h(s_l)+\left(1{\displaystyle \frac{u_{l+1}}{u_l}}\right)^2h^{}(s_l)p_l`$ (5)
$`+\left(1{\displaystyle \frac{u_{l1}}{u_l}}\right)h(s_{l1})\left(1{\displaystyle \frac{u_{l1}}{u_l}}\right)^2h^{}(s_{l1})p_{l1},`$ (6)
$`p_l={\displaystyle \frac{u_l}{u_{l+1}}}{\displaystyle \frac{u_{l+1}}{u_l}}`$ (7)
and $`s_ls_l(\{u_l^{}\})`$. Here $`f^{}(x)`$ means the first derivative of $`f`$ w.r.t. $`x`$. We are looking for a solution of $`\{u_l^{}\}`$ which is localized in space, i.e. $`u_{l\pm \mathrm{}}0`$. In order to find such a solution to (6) we assume that $`s_l`$ is constant in the spatial tails. This condition is equivalent to having exponential decay. Taking
$$u_l=(1)^l\mathrm{e}^{\beta |l|}$$
(8)
and combining the two different cases $`l=0`$ (center of the solutions) and $`l0`$ (tails of the solution) we find
$$h(s)=s(s+1)h^{}(s),s=2\mathrm{cosh}(\beta )$$
(9)
and
$$\kappa =\left(2+s(1\gamma )\right)h(s)\left[1+\frac{\gamma }{s+1}\gamma \left(1+\frac{s}{2}(1\gamma )\right)\right],\gamma =\sqrt{1\left(\frac{2}{s}\right)^2}.$$
(10)
Suppose there exists a value for $`s=s_0>2`$ such that (9) is satisfied. If for this value we also have $`h(s)>0`$ then (10) defines a positive value for $`\kappa `$. Moreover the found solution $`s_0`$ of (9) will be structurally stable against changes in $`h(s)`$. Such models can be generated by starting with the trivial case $`h(s)=1`$. A strong enough local distortion in $`h(s)`$ at $`s<2`$ will generate solutions of the above equations, preserving the overall positivity and boundness $`a>h(s)>0`$ for all $`s`$ with finite $`a`$. Thus the state $`x_l=c`$ (here $`c`$ is an arbitrary constant) will be a state with minimum energy $`H=0`$, with all other trajectories having larger energies. We can therefore interprete the found explicit localized time-periodic solution (8) as a discrete breather solution, an excitation above a classical homogeneous ground state.
Let us consider an example. Choose
$$h(s)=1+a\mathrm{e}^{bs^2}.$$
(11)
For parameters
$$a>\frac{\mathrm{e}^{3/2}}{2+3\sqrt{3/(2b)}},b>\frac{3}{8}$$
(12)
there is at least one solution to (9) with $`s>2`$. This solution is structurally stable against perturbations in $`h(s)`$. E.g. for $`b=0.5`$ and $`a=0.7`$ the solution is $`s2.06781`$, $`\beta 0.25967`$.
Why did we choose a staggered solution $`u_l(1)^l`$ ? This is motivated by the fact that we consider perturbed harmonic chains ($`h(s)=1`$) for which the spectrum of plane waves is acoustic. Thus discrete breathers in order to localize should show up with frequencies above the acoustic phonon band which implies out-of-phase motion of nearest neighbours, or simply staggered solutions. It is not that simple to apply these arguments to the case of $`h(s)1`$, since there is no simple way to linearize the equations of motion around $`x_l=0`$ in the general case. Without going into further details let us state here, that using a nonstaggered ansatz one arrives at equations which do not ensure positivity of $`\kappa `$ in general.
Let us study the problem of small amplitude excitations above the groundstate of (1). First we recall that we consider only positive and bounded functions $`h(s)`$. Then there exists a continuous family of ground states, i.e. solutions with lowest possible energy $`E=0`$ and $`\dot{x}_l=0`$ which are given by
$$x_l=c.$$
(13)
Notice that the Hamiltonian (1) is not invariant under transformations $`x_lc+x_l`$, yet the ground state energy is degenerate. An expansion around one of the ground states yields
$$W_p=\frac{c^2}{2}\underset{l}{}\left[h(2)\left(\delta _l\delta _{l+1}\right)^2+h^{}(2)\left(\delta _l\delta _{l+1}\right)^4+O(\delta ^5)\right],\delta _l=\frac{x_l}{c}1.$$
(14)
The neglected terms of fifth and higher orders in (14) are not invariant under transformations $`\delta _l\stackrel{~}{c}+\delta _l`$. However the terms up to fourth order are invariant. Taking into account only terms up to fourth order thus yields a so-called Fermi-Pasta-Ulam chain for the dynamics of small deviations from the ground state $`x_l=c`$. Note that we can not simply perform the limit $`c0`$ since the expansion (14) is valid only if $`|\delta _l|c`$. The above found breather solution, which decays to zero at infinities (and not to $`c0`$), can not be easily deformed in order to decay to $`c0`$ at infinities. However it is wellknown that (14) allows for discrete breather solutions if $`h^{}(2)>0`$ (which can not be given in a closed analytical form) (e.g. , , ). So we conclude that for the considered model (1) discrete breather excitations above the ground state $`x_l=c0`$ are solutions with generic features, and the ground state $`x_l=0`$ allows for discrete breather excitations given in a closed analytical form.
In the last part of this section we will discuss the existence of different variants of discrete breathers with $`x_l\mathrm{}0`$ asymptotics. The existence of the above derived discrete breather solution can be interpreted as follows. Our ansatz $`u_l=(1)^l\mathrm{e}^{\beta |l|}`$ contains together with the parameter $`\kappa `$ (coming from separating time and space in (3)) two parameters to be determined - $`\beta `$ and $`\kappa `$. With our ansatz we found two equations - one for the spatial wings of the solution, and one for the center. Two equations with two variables can be solved in general, and the additional inequality $`\kappa >0`$ will serve as an additional restriction for the choice of possible functions $`h(s)`$, but will not change the fact that once solutions are found, they will be in general structurally stable against changes in $`h(s)`$. Let us now look for a solution of the form $`u_l=\mathrm{e}^{\beta l}`$ for $`l0`$, $`u_l=a\mathrm{e}^{\beta ^{}l}`$ for $`l1`$. We now have four equations - two in the tails, one at $`l=0`$ and one at $`l=1`$, and four parameters - $`\kappa ,\beta ,\beta ^{},a`$. So we conclude that in general such solutions will exist. Indeed, the solution from above is a variant of the more general case discussed here with $`a=1`$ and $`\beta ^{}=\beta `$. So we can expect in general a countable set of other solutions with $`a1`$ and $`\beta ^{}\beta `$. All these solutions will have a closed analytical form with parameters to be determined numerically from the mentioned equations.
## III Site ordered models in $`d`$ dimensions
Consider a $`d`$-dimensional hypercube lattice with a scalar coordinate $`x_𝐥`$ associated to each lattice site. The site index $`𝐥=(l_1,l_2,\mathrm{},l_d)`$ is a $`d`$-dimensional vector with integer components $`l_i`$. Consider the operator $`\widehat{L}`$ defined as
$$\widehat{L}x_𝐥=\underset{𝐥^{},|𝐥^{}𝐥|=1}{}x_𝐥^{}.$$
(15)
Here $`|𝐥|^2=l_1^2+l_2^2+\mathrm{}+l_d^2`$. Also define
$$s_𝐥=\frac{\widehat{L}x_𝐥}{x_𝐥}.$$
(16)
The Hamiltonian is given by a sum over kinetic and potential energies as in (1), with the potential energy
$$W_p=\frac{1}{2}\underset{𝐥}{}\left(\widehat{L}x_𝐥\right)^2h(s_𝐥).$$
(17)
Assuming time-space separability as in (3) we obtain
$$\ddot{G}=\kappa G$$
(18)
for the time dependence. Again we need $`\kappa >0`$ to ensure bounded solutions.
The spatial coordinates have to satisfy an equation similar to (3). Let us calculate the derivative
$$\frac{\partial W_p}{\partial u_𝐥}=\frac{\partial }{\partial u_𝐥}\left[\frac{1}{2}\left(\widehat{L}x_𝐥\right)^2h(s_𝐥)\right]+\frac{\partial }{\partial u_𝐥}\widehat{L}\left[\frac{1}{2}\left(\widehat{L}x_𝐥\right)^2h(s_𝐥)\right].$$
(19)
Evaluation of these equations leads to the result
$$\kappa u_𝐥=-\frac{1}{2}u_𝐥s_𝐥^3h^{\prime }(s_𝐥)+\widehat{L}\left[u_𝐥s_𝐥h(s_𝐥)\right]+\frac{1}{2}\widehat{L}\left[u_𝐥s_𝐥^2h^{\prime }(s_𝐥)\right].$$
(20)
A closer study of equation (20) shows that it can easily be solved if $`s_𝐥`$ is essentially constant everywhere on the lattice, more precisely everywhere except for one site $`𝐥=\mathrm{𝟎}`$:
$$s_{𝐥\ne \mathrm{𝟎}}=s_1,s_{𝐥=\mathrm{𝟎}}=s_0.$$
(21)
In that case evaluation of (20) yields
$`\kappa =s_1^2h(s_1),|𝐥|>1,`$ (22)
$`\kappa =s_1^2h(s_1)+{\displaystyle \frac{u_\mathrm{𝟎}}{u_𝐥}}\left[s_0h(s_0)+{\displaystyle \frac{1}{2}}s_0^2h^{\prime }(s_0)-s_1h(s_1)-{\displaystyle \frac{1}{2}}s_1^2h^{\prime }(s_1)\right],|𝐥|=1,`$ (23)
$`\kappa =s_0^2h(s_0),𝐥=\mathrm{𝟎}.`$ (24)
Defining a new function
$$g(s)=\frac{1}{2}s^2h(s)$$
(25)
equations (22)-(24) are reduced to
$$\kappa =2g(s_0),g(s_0)=g(s_1),g^{\prime }(s_0)=g^{\prime }(s_1).$$
(26)
These relations (26) are in fact conditions on the choice of Hamiltonians, i.e. of the function $`h(s)`$. They are so far rather general, but we have to specify the values $`s_0,s_1`$. These values will be connected with the solution of (20) through the conditions (21), which actually constitute a linear equation:
$$\widehat{L}u_𝐥=s_𝐥u_𝐥.$$
(27)
A solution to this equation can be cast into the form
$$u_𝐥=\int dq_1dq_2\cdots dq_d\frac{\mathrm{cos}(l_1q_1)\mathrm{cos}(l_2q_2)\cdots \mathrm{cos}(l_dq_d)}{2\left(\mathrm{cos}(q_1)+\mathrm{cos}(q_2)+\cdots +\mathrm{cos}(q_d)\right)-s_1},$$
(28)
where the integration extends for each variable $`q_i`$ from $`-\pi `$ to $`\pi `$. Then the value for $`s_0`$ is given by
$$\frac{1}{s_0-s_1}=\int dq_1dq_2\cdots dq_d\frac{1}{2\left(\mathrm{cos}(q_1)+\mathrm{cos}(q_2)+\cdots +\mathrm{cos}(q_d)\right)-s_1}.$$
(29)
In other words, given the solution (28) to (20), we can generate the corresponding Hamiltonian by solving (26) with the additional constraint (29), which fixes the value $`s_0`$ relative to $`s_1`$. The function $`g(s)`$ can then be constructed in the following way: first choose $`s_1`$, then determine $`s_0`$, and finally find a function $`g(s)`$ which is positive at $`s_1`$ and whose values and first derivatives coincide at $`s_0`$ and $`s_1`$. This function $`g(s)`$ then defines $`h(s)`$ through (25) and thus the Hamiltonian with potential energy (17).
Notice that (28) is exponentially localized around $`𝐥=\mathrm{𝟎}`$ if $`|s_1|>2d`$.
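This recipe can be illustrated numerically. The following sketch evaluates the lattice integral of (29) in $`d=2`$, taking the normalization exactly as printed above (an assumption on our part) and the test value $`s_1=-6`$, which satisfies $`|s_1|>2d=4`$ so that the denominator never vanishes:

```python
import numpy as np
from scipy.integrate import dblquad

def s0_from_s1(s1):
    # d = 2 lattice integral of Eq. (29); requires |s1| > 4 for convergence
    integrand = lambda q2, q1: 1.0 / (2.0 * (np.cos(q1) + np.cos(q2)) - s1)
    I, _ = dblquad(integrand, -np.pi, np.pi,
                   lambda q: -np.pi, lambda q: np.pi)
    return s1 + 1.0 / I   # from 1/(s_0 - s_1) = I

print(s0_from_s1(-6.0))
```

With $`s_0`$ in hand, one only needs to draw any smooth $`g(s)`$ matching the conditions (26) at the two points to obtain a Hamiltonian with an explicit breather solution.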
## IV Conclusions
We obtained explicit discrete breather solutions for several classes of Hamiltonian lattice models with different lattice dimensions. The outlined approach can be extended to other classes of lattice models with homogeneous potentials.
An interesting question for future studies will be the linear stability analysis of the obtained solutions, which is also closely related to the spectrum and character of small amplitude fluctuations around the state $`x_l=0`$. Note that any solution which satisfies time-space separability can be continuously tuned to $`x_l=0`$ by letting $`A\to 0`$ in (4), regardless of its spatial profile.
Finally one can extend the solutions of the site-ordered models to multisite breathers, for which
$$s_{𝐥\ne 𝐥_m}=s_{\mathrm{\infty }},s_{𝐥_m}=s_m$$
(30)
where the number of ’defect’ sites $`𝐥_m`$ is finite and the largest distance between any pair of ’defect’ sites is finite. Work is in progress.
<sup>1</sup> Permanent address: Institute of Chemical Physics, Kosygin St. 4, 117334 Moscow, Russia.
# The Proper Motion of Sgr A*: I. First VLBA Results
## 1 Introduction
Sgr A\* is a compact radio source, similar to weak active nuclei found in other galaxies. Since its discovery more than two decades ago by Balick & Brown (1974), the possibility that Sgr A\* is a super-massive ($`10^6\mathrm{M}_{}`$) black hole has been actively considered. However, its radio luminosity of $`10^2\mathrm{L}_{}`$ (Serabyn et al. 1997) and its estimated total luminosity of $`<10^5\mathrm{L}_{}`$ are many orders of magnitude below that possible from a $`10^6\mathrm{M}_{}`$ black hole. Thus, on the basis of its spectral energy distribution Sgr A\* could be an unusual contact binary containing $`10`$ $`\mathrm{M}_{}`$ and radiating near its Eddington limit.
Recently, Eckart & Genzel (1997) and Ghez et al. (1999) measured proper motions of stars near the position of Sgr A\*, as determined by Menten et al. (1997). Stellar speeds in excess of 1000 km s<sup>-1</sup> at a distance of $`0.015`$ pc from Sgr A\* indicate a central mass of $`2.6\times 10^6\mathrm{M}_{}`$. While these dramatic results are consistent with the theory that Sgr A\* is a super-massive black hole, it is still conceivable that most of the central mass could come from a combination of stars and perhaps some form of dark matter. Clearly, independent constraints on the mass of Sgr A\* are needed to establish whether or not it is a super-massive black hole nearly at rest at the dynamical center of the Galaxy.
The apparent motion of Sgr A\* can be used to estimate the mass and elucidate the nature of this unusual source. An apparent motion of Sgr A\* can be attributed to at least three possible components: 1) a secular motion induced by the orbit of the Sun about the Galactic Center, 2) a yearly oscillation owing to the Earth’s orbital motion around the Sun (trigonometric parallax), and 3) a possible motion of Sgr A\* with respect to the dynamical center of the Galaxy. Measurement of, or limits for, these components of Sgr A\*’s apparent motion can provide unique information on the circular rotation speed ($`\mathrm{\Theta }_0`$ ) of the Local Standard of Rest (LSR) and the peculiar motion of the Sun ($`V_{}`$ ), the distance to the center of the Galaxy ($`R_0`$ ), and the nature of Sgr A\* itself.
In 1991 we started a program with the Very Long Baseline Array (VLBA) of the National Radio Astronomy Observatory <sup>1</sup><sup>1</sup>1The National Radio Astronomy Observatory is operated by Associated Universities Inc., under cooperative agreement with the National Science Foundation. (NRAO) to measure the apparent motion of Sgr A\*. In principle, VLBA observations of Sgr A\*, phase-referenced to extragalactic radio sources, can achieve an accuracy sufficient to detect secular motions of $`<1\mathrm{km}\mathrm{s}^{-1}`$ for a source at the distance of the Galactic Center. However, achieving this accuracy is quite challenging technically as it involves observing at short wavelengths (7 mm), in order to minimize the effects of interstellar scattering toward Sgr A\*, phase-referencing to extragalactic sources, and careful modeling of atmospheric effects (because of the low source elevations).
In the early years of the project, we searched for strong, compact, extragalactic sources nearby in angle to Sgr A\*and worked toward an optimum observing strategy. In this paper, we report results of observations spanning a two year period from 1995 to 1997. Our VLBA images clearly show movement of Sgr A\* with respect to extragalactic sources over many synthesized beams. While the current positional accuracy is inadequate to determine a trigonometric parallax, the secular motion of Sgr A\* is easily measured. This yields an accurate estimate of the angular rotation rate of the Sun around the Galactic Center, $`(\mathrm{\Theta }_0+V_{})/R_0`$, and places interesting limits on the mass of the black hole candidate responsible for the radio emission. Our results, and those of Backer and Sramek (1999) from observations with the Very Large Array (VLA) over a 15 year period, indicate that the apparent proper motion of Sgr A\* is dominated by the orbit of the Sun about the Galactic Center and that any peculiar motion of Sgr A\* is very small.
## 2 Observations
Our successful observations using the VLBA were conducted in the late-night, early-morning periods of 1995 March 4, 1996 March 20 and 31, and 1997 March 16 and 27. (Observations attempted during two August evenings in 1996 experienced high water vapor turbulence in the atmosphere, and phase referencing was not successful.) The observing frequency was 43.2 GHz and we observed four 8 MHz bands, each at right and left circular polarization. We employed 2-bit sampling at the Nyquist rate, which required the maximum aggregate sampling rate supported by the VLBA of 256 megabits per second. Only the inner five VLBA stations (Fort Davis, Los Alamos, Pie Town, Kitt Peak, and Owens Valley) were used, as baselines longer than about 1500 km heavily resolve the scatter broadened image of Sgr A\* at 43 GHz (eg, Bower & Backer 1998).
The observing sequence involved rapid switching between compact extragalactic sources and Sgr A\*. Two sources, J1745–283 and J1748–291, from the catalog of Zoonematkermani et al. (1990) were found to be strong enough ($`>10`$ mJy at 43 GHz) to serve as reference background sources. These sources are two of the three used by Backer & Sramek (1999) in their program to measure the proper motion of Sgr A\* with the VLA; their third background source, J1740–294, proved to have a steep spectrum and was too weak for inclusion in our 43 GHz program. We switched among the sources repeating the following pattern: Sgr A\*, J1745–283, Sgr A\*, J1748–291. Sources were changed every 15 seconds, typically achieving 7 seconds of on-source data, except for the earliest observation in 1995 when we were experimenting with longer switching times. We used Sgr A\* as the phase-reference source, because it is considerably stronger than the background sources and could be detected on individual baselines with signal-to-noise ratios typically between 10 and 20 in the 7 seconds of available on-source time.
We edited and calibrated data using standard tasks in the Astronomical Image Processing System (AIPS) designed for VLBA data. This involved applying data flagging tables generated by the on-line antenna and correlator systems, station gain curves, and system temperature measurements. We solved for station-dependent, intermediate-frequency band delays and phases on a strong, compact source (NRAO 530). After applying these corrections, the multi-band data for Sgr A\* could be combined coherently and interferometer phases as a function of time determined. These phase solutions were examined by an AIPS task specially written for our observations that looked for and flagged data when baseline-dependent phases on adjacent Sgr A\* scans changed by more than one radian. Under good weather conditions between 10 and 30% of the data were discarded by this process. This provided relatively unambiguous “phase connection” for the remaining data and allowed removal of most of the effects of short-term atmospheric fluctuations from all sources. (We note that during average-to-poor weather conditions, our phase measurements on Sgr A\* every 30 seconds were not frequent enough to provide unambiguous phase connection. Thus, our 15 second switching time is probably an optimum trade-off between on-source duty cycle and atmospheric coherence losses.) Data calibrated in this manner produced high (eg, 50:1) dynamic range maps of all sources with little or no spatial blurring. The images of the background sources appeared less resolved than that of Sgr A\*, with no signs of complex or multiple component structures.
We found that the differences in relative positions between a background source and Sgr A\* for closely spaced ($`\sim 10`$ d) epochs were $`\sim 1`$ mas. These differences exceeded the formal precision, estimated by the least-squares fitting process, typically by a large factor. Since the observational conditions and data analysis were nearly identical for these epochs, small geometric errors (eg, in baselines, source coordinates, or Earth’s orientation parameters) are unlikely to yield position shifts of this magnitude given the small angular separation of Sgr A\* and the background sources. Therefore, we evaluated the possibility that refractive scattering of the radio waves in the interstellar medium or modeling errors for the Earth’s atmospheric propagation delay could be responsible.
Refractive scattering can cause changes in the apparent flux density and position of a radio source. Gwinn et al. (1988) published VLBI observations that limit refractive position wander for water vapor masers in Sgr B2N, a star forming region close to the Galactic Center. For these maser spots, their 22 GHz observations revealed a diffractive scattering size of 0.3 mas and an upper limit to a Gaussian component of refractive position wander of 0.018 mas. Theoretical estimates of refractive position wander, based on the diffractive scattering size, by Romani, Narayan & Blandford (1986) agree with this limit for a Kolmogorov electron density spectrum. Assuming that the refractive effects scale as the diffractive scattering size, we expect any refractive wander of Sgr A\* at 43 GHz to be $`<0.04`$ mas, a value about a factor of 25 smaller than our observed position differences.
After careful study of the data, we concluded that the most likely source of relative position error is a small error in the atmospheric model used by the VLBA correlator. The following simple analysis supports this view: The phase-delay of the neutral atmosphere, $`\tau `$, can be approximated by $`\tau _0\mathrm{sec}Z`$, where $`\tau _0`$ is the vertical phase-delay and $`Z`$ is the local source zenith angle. When measuring the difference in position of two sources separated in zenith angle by $`\mathrm{\Delta }Z`$, a first-order Taylor expansion of $`\tau `$ yields the expected differenced phase-delay error for a single antenna:
$$\mathrm{\Delta }\tau \approx \frac{\partial \tau }{\partial Z}\mathrm{\Delta }Z=\tau _0\mathrm{sec}Z\mathrm{tan}Z\mathrm{\Delta }Z.$$
$`(1)`$
The seasonally-averaged atmospheric model (Niell 1996) used by the VLBA correlator is likely to misestimate $`\tau _0`$ by about 0.1 nsec, equivalent to a zenith phase-delay of $`\sim 3`$ cm in path length. This comes mostly from the highly variable contribution by water vapor (eg, Treuhaft & Lanyi 1987). Based on Eq. (1), this should result in an antenna-dependent error of $`\mathrm{\Delta }\tau \sim 0.3`$ cm for typical source zenith angles of $`\sim 70^{\circ }`$ and for our typical source separations of $`\mathrm{\Delta }Z\sim 0.012\mathrm{rad}(0.7^{\circ })`$. Since atmospheric errors are largely uncorrelated for different antennas, on an interferometer baseline at an observing wavelength of $`0.7`$ cm, we would expect a relative position shift of roughly $`\sqrt{2}\times 0.3`$ cm or $`\sim 70`$% of a fringe spacing. This corresponds to $`\sim 0.4`$ and $`\sim 1.7`$ mas in the easterly and northerly directions, respectively, for our longer baselines. This effect should only partially cancel among the different baselines and can explain the position errors seen in the raw maps made from observations taken $`\sim 10`$ days apart.
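As a back-of-the-envelope check of these numbers (a sketch; all inputs are the values quoted in this paragraph):

```python
import numpy as np

tau0 = 3.0               # assumed zenith path-delay error, cm (~0.1 nsec)
Z = np.radians(70.0)     # typical source zenith angle
dZ = 0.012               # source separation in zenith angle, rad (~0.7 deg)

dtau = tau0 / np.cos(Z) * np.tan(Z) * dZ      # Eq. (1), per antenna
print(f"per-antenna delay error: {dtau:.2f} cm")                 # ~0.29 cm
print(f"fraction of a 0.7 cm fringe: {np.sqrt(2)*dtau/0.7:.2f}") # ~0.6
```

The result, roughly 0.3 cm per antenna and about 60–70% of a fringe spacing on a baseline, matches the order-of-magnitude argument above.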
In order to improve our relative position measurements, we modeled simultaneously our differenced-phase data for the “J1745–283 minus Sgr A\*” and “J1748–291 minus Sgr A\*” source pairs. The model allowed for a relative position shift for each source pair and a single vertical atmospheric delay error in the correlator model for each antenna. This approach significantly improved the accuracy of the relative position measurements as evidenced by the smaller deviations in relative positions for observations closely spaced in time. The vertical atmospheric delay parameters typically indicated a correlator model error of a few cm and these parameters were estimated with uncertainties of about 1 cm. Using this approach, we would estimate from Eq. (1) and the above discussion that relative position errors should be $`\sim 0.1`$ and $`\sim 0.4`$ mas in the East–West and North–South coordinates, respectively, for one day’s observation.
The differenced-phases often displayed post-fit residuals of $`\sim 30`$ degrees of phase, which were correlated over periods of hours. Assuming equal and uncorrelated contributions from the two antennas forming an interferometer pair, this suggests a delay change of about 0.1 nsec of time or about 3 cm of uncompensated path length. Since typical source zenith angles were about 70 degrees, this corresponds to a vertical delay change in the atmosphere above each antenna of about 0.03 nsec or about 1 cm of path length. This behavior is consistent with expected large-scale changes in the atmospheric delay, and it suggests that significant improvement can be obtained by monitoring and correcting for large scale atmospheric changes.
The data in Table 1 summarize our relative position measurements. The data taken on 1995 March 4 were of poor quality, only the stronger of the two background sources (J1745–283) was detected, and the positional accuracy was significantly worse than for the later epochs. Positions of the strongest background source, J1745–283, phase-referenced to Sgr A\*, for epochs spanning 2 years are plotted in Fig. 1 with open circles in the sense Sgr A\* relative to J1745–283. They indicate a clear apparent motion for Sgr A\* relative to J1745–283, consistent in magnitude and direction with the reflex motion of the Sun around the Galactic Center (see §3.1). The positions in the East-West direction have typical uncertainties of about 0.1 mas, as estimated from the scatter of the post-fit position residuals about a straight-line motion. It is interesting to note that, while it takes $`\sim 220`$ My for the Sun to complete an orbit around the Galactic Center, the East–West component of the parallax from only 10 days’ motion can be detected with the VLBA! The position uncertainties in the North-South direction are larger, about 0.4 mas, owing to the low declination of the Galactic Center.
The apparent motions of Sgr A\* relative to J1745–283 over a 2 year time period and J1748–291 relative to J1745–283 over a 1 year time period are given in Table 2. The uncertainties in Table 2 include estimates of the systematic effects, dominated by errors in modeling of atmospheric effects, as discussed above. Assuming that J1745–283 is sufficiently distant that it has negligible intrinsic angular motion, Sgr A\*’s apparent motion is $`-3.33\pm 0.1`$ and $`-4.94\pm 0.4`$ mas y<sup>-1</sup> in the easterly and northerly directions, respectively. This motion is shown by the solid lines in Fig. 1.
As a check on the accuracy of our measurements, we measured relative positions between the two calibration sources. These positions are plotted in Fig. 1 (crosses) in the sense “J1748–291 minus J1745–283,” offset to fit the plotting scale for the “Sgr A\* minus J1745–283” data. The best fit motions are $`+0.17\pm 0.14`$ and $`-0.22\pm 0.56`$ mas y<sup>-1</sup> in the easterly and northerly directions, respectively, as indicated with the dashed lines in Fig. 1. The uncertainty in the relative motion of J1748–291 with respect to J1745–283 is about 40% larger than for the motion of Sgr A\* with respect to J1745–283, because the angular separation of the two background sources ($`\sim 1.0`$ deg) is greater than between Sgr A\* and either of the background sources ($`\sim 0.7`$ deg). Thus, the background sources display no statistically significant motion relative to each other, as expected for extragalactic sources.
Finally, we have determined the position of Sgr A\* relative to an extragalactic source with high accuracy and, therefore, can derive an improved absolute position of Sgr A\*. VLBI observations carried out by the joint NASA/USNO/NRAO geodetic/astrometric array (Eubanks, private communication) detected J1745–283 at 8.4 GHz and determined its position in the U.S. Navy 1997-1998 reference frame to be
J1745–283 $`\alpha `$(2000)=17 45 52.4968, $`\delta `$(2000)=–28 20 26.294 ,
with an uncertainty of about 12 mas. Assuming this result, we find the position of Sgr A\* measured at 1996.25 to be
Sgr A\* $`\alpha `$(2000)=17 45 40.0409, $`\delta `$(2000)=–29 00 28.118 .
The uncertainty in this position is dominated by that of J1745–283. Note that were one not to correct for the “large” apparent proper motion of Sgr A\*, the position of Sgr A\* determined for observations made more than 2 years from 1996.25 would be shifted by an amount greater than the $`12`$ mas uncertainty.
## 3 Discussion
The apparent motion of Sgr A\* with respect to background radio sources can be used to estimate the rotation of the Galaxy and any peculiar motion of the super-massive black hole candidate Sgr A\*. In Fig. 2 we plot the change in apparent position on the plane of the sky of Sgr A\* relative to J1745–283. The dotted line is the variance-weighted least-squares fit to the data, and the solid line denotes the orientation of the Galactic Plane. Clearly the apparent motion of Sgr A\* is almost entirely in the Galactic Plane. Thus, it is natural to convert the apparent motion from equatorial to galactic coordinates. For Sgr A\* relative to J1745–283, this yields an apparent motion of $`-5.90\pm 0.35`$ and $`+0.20\pm 0.30`$ mas y<sup>-1</sup> in galactic longitude and latitude, respectively. The apparent motion in the plane of the Galaxy should be dominated by the effects of the orbit of the Sun around the Galactic Center, while the motion out of the plane should contain only small terms from the Z-component of the Solar Motion and a possible motion of Sgr A\*. In the following subsections, we investigate the various components of the apparent motion of Sgr A\*, place limits on any offset of Sgr A\* from the dynamical center of the Galaxy, derive limits on the mass of Sgr A\*, and constrain the distribution of dark matter in the Galactic Center.
### 3.1 Motion of Sgr A\* in the Plane of the Galaxy
Assuming a distance of $`8.0\pm 0.5`$ kpc (Reid 1993), the apparent angular motion of Sgr A\* in the plane of the Galaxy translates to $`223\pm 19`$ km s<sup>-1</sup>. The uncertainty includes the effects of measurement errors and the 0.5 kpc uncertainty in $`R_0`$. Provided that the peculiar motion of Sgr A\* is small (see §3.2), this corresponds to the reflex of the true orbital motion of the Sun around the Galactic Center. This reflex motion can be parameterized as a combination of a circular orbit (i.e., of the LSR) and the deviation of the Sun from that circular orbit (the Solar Motion). The Solar Motion, determined from Hipparcos data by Dehnen & Binney (1998), is $`5.25\pm 0.62`$ km s<sup>-1</sup> in the direction of galactic rotation. Removing this component of the Solar Motion from the reflex of the apparent motion of Sgr A\* yields an estimate for $`\mathrm{\Theta }_0`$ of $`218\pm 19`$ km s<sup>-1</sup>. This value is consistent with most recent estimates of about 220 km s<sup>-1</sup> (Kerr & Lynden-Bell 1986) and can be scaled for different values of the distance to the Galactic Center by multiplying by $`R_0/8`$ kpc.
The most straightforward comparison of our direct measurement of the angular rotation rate of the LSR at the Sun ($`\mathrm{\Theta }_0`$/$`R_0`$) can be made with Hipparcos measurements based on motions of Cepheids. Feast & Whitelock (1997) conclude that the angular velocity of circular rotation at the Sun, $`\mathrm{\Theta }_0`$/$`R_0`$ (= Oort’s A–B), is $`27.19\pm 0.87`$ km s<sup>-1</sup> kpc<sup>-1</sup> ($`218\pm 7`$ km s<sup>-1</sup> for $`R_0=8.0`$ kpc). Our value of $`\mathrm{\Theta }_0/R_0`$, obtained by removing the Solar Motion in longitude from the reflex of the motion of Sgr A\* in longitude, is $`27.2\pm 1.7`$ km s<sup>-1</sup> kpc<sup>-1</sup>. The VLBA and Hipparcos measurements are consistent within their joint errors, and both measurements are insensitive to the value of $`R_0`$, as it is only used to remove the small contribution of the Solar Motion. It is important to note that our value of $`\mathrm{\Theta }_0/R_0`$ is a true “global” measure of the angular rotation rate of the Galaxy. The consistency of the local (A–B) and global measures of $`\mathrm{\Theta }_0/R_0`$ suggests that local variations in Galactic dynamics ($`d\mathrm{\Theta }_0/dR_0`$) are less than the joint uncertainties of about 2 km s<sup>-1</sup> kpc<sup>-1</sup>.
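The arithmetic behind these estimates is easily reproduced (a sketch; 4.74 is the standard conversion between 1 mas y<sup>-1</sup> at 1 kpc and km s<sup>-1</sup>, and the other inputs are the values quoted above):

```python
mu_l = 5.90    # magnitude of Sgr A*'s apparent motion in longitude, mas/yr
R0 = 8.0       # adopted distance to the Galactic Center, kpc
V_sun = 5.25   # solar motion in the direction of rotation, km/s

v_reflex = 4.74 * mu_l * R0          # ~224 km/s, cf. 223 +/- 19 above
Theta0 = v_reflex - V_sun            # ~218 km/s
omega = 4.74 * mu_l - V_sun / R0     # ~27.3 km/s/kpc, i.e. Theta0/R0
print(v_reflex, Theta0, omega)
```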
After removing the best estimate of the motion of the Sun around the Galactic Center, our VLBA observations yield an estimate of the peculiar motion of Sgr A\* of $`0.0\pm \sqrt{0.87^2+1.7^2}`$ km s<sup>-1</sup> kpc<sup>-1</sup> or $`0\pm 15`$ km s<sup>-1</sup> towards positive galactic longitude. This estimate of the “in plane” motion of Sgr A\* comes from differencing two angular motions. Since this difference is negligible, the uncertainty in $`R_0`$ does not affect this component of the peculiar motion of Sgr A\*. Given the excellent agreement in the global and local measures of the angular rotation rate of the Galaxy, and the lack of a detected peculiar motion for Sgr A\*, it is likely that Sgr A\* is at the dynamical center of the Galaxy.
### 3.2 Motion of Sgr A\* out of the Plane of the Galaxy
Whereas the orbital motion of the Sun (around the Galactic Center) complicates estimates of the “in plane” component of the peculiar motion of Sgr A\*, motions out of the plane are simpler to interpret. One needs only to subtract the small Z-component of the Solar Motion from the observed motion of Sgr A\* to estimate the out-of-plane component of the peculiar motion of Sgr A\*. An implicit assumption in this procedure is that the Solar Motion reflects the true peculiar motion of the Sun. Since most estimates of the Solar Motion are relative to stars in the solar neighborhood, this assumes that “local” and “global” estimates of the Solar Motion are similar. This procedure could be compromised slightly were the solar neighborhood to have a significant motion out of the plane of the Galaxy, owing, for example, to a galactic bending or corrugation mode.
One way to limit the magnitude of a possible difference between a local and global estimate for the Solar Motion is to compare motions based on nearby stars with those based on much more distant stars. Using stars within about 0.1 kpc, Dehnen & Binney (1998) find the Z-component of the Solar Motion to be $`7.17\pm 0.38\mathrm{km}\mathrm{s}^{-1}`$, while Feast & Whitelock (1997) determine a value of $`7.61\pm 0.64\mathrm{km}\mathrm{s}^{-1}`$ for stars with distances out to a few kpc. Since these values agree within their joint uncertainties of about 0.74 km s<sup>-1</sup>, it seems unlikely that local and global values for the Solar Motion could differ by more than about 1 km s<sup>-1</sup>.
The Hipparcos measurements of large numbers of stars in the solar neighborhood provide an excellent reference for determining the local solar motion. We adopt the value of Dehnen & Binney (1998), which comes from the velocities of more than 10,000 stars within about 0.1 kpc of the Sun. Removing $`-7.17`$ km s<sup>-1</sup> from our measured apparent motion of Sgr A\* out of the plane of the Galaxy, we estimate the peculiar motion of Sgr A\* to be $`15\pm 11`$ km s<sup>-1</sup> toward the north galactic pole (see Table 3). The uncertainty is dominated by our proper motion measurements and can be greatly improved by future measurements. Note, for example, that increasing the weight (decreasing the estimated uncertainty) of the 1995 measurement would decrease the magnitude of our peculiar motion estimate. We do not consider our estimate of the peculiar motion of Sgr A\* out of the plane of the Galaxy to be statistically distinguishable from a null result.
### 3.3 Limits on the Mass of Sgr A\*
Our estimates of a peculiar motion of Sgr A\* provide an upper limit of about $`20`$ km s<sup>-1</sup> each for motions in and out of the galactic plane. Since stars in the inner-most regions of the central cluster move at speeds in excess of 1000 km s<sup>-1</sup> (Eckart & Genzel 1997, Ghez et al. 1999), a central “dark mass” of approximately $`2.6\times 10^6`$ $`\mathrm{M}_{}`$ contained within 0.015 pc of Sgr A\* seems required. It is likely, but unproven, that most of this mass is contained in a super-massive black hole: Sgr A\*. Given the fact that independent measurements (Backer & Sramek 1999, and this paper) show that Sgr A\* moves at least two orders of magnitude slower than its surrounding stars, Sgr A\* must be much more massive than the $`10\mathrm{M}_{}`$ stars observed in the central cluster. (See also Gould & Ramírez for discussion of the implications of a lack of apparent acceleration of Sgr A\*.) In this section we derive a lower limit to the mass of Sgr A\* and constrain possible distributions of dark matter, not in the form of a super-massive black hole.
#### 3.3.1 Virial Theorem
Unfortunately, the Virial theorem is of little help in relating the masses and velocities of stars to that of a central massive black hole. For Virial equilibrium,
$$T_s+T_{bh}=-\frac{1}{2}(U_s+U_{bh}),$$
$`(2)`$
where $`T`$ and $`U`$ correspond to the kinetic and potential energy terms and the subscripts $`s`$ and $`bh`$ identify those associated with the stars and a central, massive, black hole, respectively. Essentially all the kinetic energy can be tied up in the stars ($`T_s`$) and all the gravitational potential energy found associated with the black hole ($`U_{bh}`$). In this case, attempts to estimate the kinetic energy of the black hole ($`T_{bh}`$) require differencing two large and uncertain quantities and will be essentially useless.
#### 3.3.2 Equipartition of Kinetic Energy
Upper limits on the motion of Sgr A\* have been used to infer lower limits on the mass of Sgr A\* by assuming equipartition of kinetic energy (eg, Backer 1996, Genzel et al. 1997). This is reasonable for stellar systems such as globular clusters, where massive stars are found concentrated toward the cluster center and move more slowly than lower mass stars. Similar results might also hold for a system involving a central black hole and a surrounding stellar cluster, provided the core mass of the cluster greatly exceeds that of the black hole. However, given the likely mass dominance of Sgr A\* over the stars within 0.015 pc, where high stellar speeds have been measured, equipartition of kinetic energy may be an unreliable approximation.
Indeed, our solar system may prove a better “scale model” (with planets corresponding to stars and the Sun corresponding to Sgr A\*). The Sun orbits the barycenter of the solar system, approximately in a binary orbit with Jupiter. Neglecting small perturbations from other planets, for a binary orbit in the center of mass frame, momentum conservation requires that $`m_Jv_J=M_{}V_{}`$, where the subscripts $`J`$ and $``$ refer to Jupiter and the Sun, respectively. The ratio of the kinetic energy of Jupiter to the Sun is given by $`m_Jv_J^2/M_{}V_{}^2=M_{}/m_J`$. Hence, the kinetic energy of Jupiter exceeds that of the Sun by a factor equal to the inverse of the ratio of their masses and equipartition of kinetic energy does not apply.
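Putting numbers to this analogy (a sketch; the Jupiter-to-Sun mass ratio and Jupiter’s orbital speed are standard values, not taken from the text):

```python
m_ratio = 1.0 / 1047.6     # Jupiter-to-Sun mass ratio
v_J = 13.1                 # Jupiter's orbital speed, km/s

V_sun = m_ratio * v_J      # Sun's reflex speed: ~0.0125 km/s (12.5 m/s)
KE_ratio = 1.0 / m_ratio   # T_Jupiter / T_Sun ~ 1050
print(V_sun, KE_ratio)
```

Jupiter carries roughly a thousand times more kinetic energy than the Sun, so equipartition clearly fails for such a mass-dominated pair.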
#### 3.3.3 Case 1: $`M_{SgrA}\simeq 2.6\times 10^6`$ $`\mathrm{M}_{}`$
In order to better evaluate how an upper limit to the motion of Sgr A\* can be used to provide a lower limit to its mass, we carried out N-body simulations of stars orbiting about a massive black hole. We used a simple, direct integration code (NBODY0) of Aarseth (1985), documented by Binney & Tremaine (1987) and modified for our purposes. Initial simulations used 255 stars orbiting a $`2.6\times 10^6\mathrm{M}_{}`$ black hole. The stellar masses were chosen randomly to represent the upper end of a stellar mass function with a power law distribution from 20 down to 2 $`\mathrm{M}_{}`$ . The number of stars and their masses used in this simulation are comparable to those observed within the central 0.5 arcsec, or 4000 AU (Genzel et al. 1997). Stellar orbits were chosen by randomly assigning a distance from the black hole, uniformly distributed in the range 10 to 10,000 AU, calculating a circular orbital speed, and then adjusting the speed randomly by between $`\pm 20\%`$ of the circular speed for each of the three Cartesian coordinates. Before starting the N-body integrations, the orbital orientations were randomized by rotating the coordinates (and velocity components) through three Euler rotations with angles chosen at random.
The N-body simulations show quasi-random motions of the massive black hole. After relatively short periods of time ($`\sim 10,000`$ years) a “steady state” condition appeared to be reached. The speed of a typical star was about 700 km s<sup>-1</sup> at an average distance of 6,000 AU. The motion of Sgr A\* changed completely in all three coordinates on time scales $`\sim 100`$ years and was typically $`<0.1`$ km s<sup>-1</sup> in each coordinate. The rapid, but bounded, changes in the motion of Sgr A\* suggest that close encounters with individual stars are responsible for most of the observed motions. Assuming this is the case, one can make a simple analytical estimate of the expected motion of Sgr A\*, owing to close encounters with stars in the dense central cluster.
For a two-body interaction conserving momentum and viewed in the center of mass frame,
$$mv=MV,$$
$`(3)`$
where $`m`$ and $`v`$ are the mass and speed of the star and $`M`$ and $`V`$ are the mass and speed of the black hole, respectively. For the case of interest where $`Mm`$, the orbital speed of the star at periastron, $`v_p`$, is given by the well known relation
$$v_p^2=\frac{GM}{a}\left(\frac{1+e}{1-e}\right),$$
$`(4)`$
where $`G`$ is the gravitation constant, $`a`$ is the stellar semi-major axis, and $`e`$ is the orbital eccentricity. Defining $`V_p`$ as the speed of the black hole at periastron, combining Eqs. (3) and (4) yields
$$V_p=\left(\frac{m}{M}\right)\left(\frac{GM(1+e)}{a(1-e)}\right)^{1/2}.$$
$`(5)`$
Our observations are only sensitive to orbital periods longer than of order 1 year. Such orbital periods occur for stellar semimajor axes greater than about 50 AU (1000 Schwarzschild radii) for a $`2.6\times 10^6\mathrm{M}_{}`$ black hole. Thus, for our application reasonable values for the parameters in Eq. (5) are as follows: $`m\sim 10\mathrm{M}_{}`$, $`e\sim 0.5`$, and $`a\sim 50`$ AU, and the expected orbital speed of Sgr A\* would be $`\sim 0.03`$ km s<sup>-1</sup>. (We adopt the periastron speed, instead of the lower average orbital speed, because the influence of many orbiting stars will likely increase the speed of Sgr A\*, compared to the single star result.) This speed is well below our current limit for the motion of Sgr A\*. Thus, the simplest interpretation consistent with the fast stellar motions and the slow Sgr A\* motion is that Sgr A\* is a super-massive black hole.
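Evaluating Eq. (5) with these parameters (a sketch in SI units; it returns a few hundredths of a km s<sup>-1</sup>, the order quoted above):

```python
import numpy as np

G, Msun, AU = 6.674e-11, 1.989e30, 1.496e11   # SI constants
m, M = 10 * Msun, 2.6e6 * Msun                # star and black hole masses
e, a = 0.5, 50 * AU                           # eccentricity, semi-major axis

Vp = (m / M) * np.sqrt(G * M * (1 + e) / (a * (1 - e)))   # Eq. (5)
print(Vp / 1e3, "km/s")   # ~0.05 km/s, the same order as the ~0.03 quoted
```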
#### 3.3.4 Case 2: $`M_{SgrA}\ll 2.6\times 10^6`$ $`\mathrm{M}_{}`$
While the simplest interpretation is that Sgr A\* is a $`2.6\times 10^6`$ $`\mathrm{M}_{}`$ black hole, our upper limit on any peculiar motion for Sgr A\* currently is two to three orders of magnitude above its expected motion for that mass. Thus, it seems reasonable to investigate the possibility that the mass within a radius of 0.01 pc is not dominated by Sgr A\*, but instead is in some form of “dark” matter. In this case, Sgr A\* will react to the gravitational potential and orbit the center of mass of this dark matter. Below we show that the upper limits on the motion of Sgr A\* are complementary to the stellar proper motion results and strongly constrain both the mass of Sgr A\* and any possible configuration of matter within the central 0.01 pc.
Fig. 3 displays the enclosed mass versus radius for four mass models that are consistent with the stellar proper motions (cf., Genzel et al. 1997, Ghez et al. 1999). These models all yield flat “enclosed-mass versus radius” relations at distances $`>0.01`$ pc from the center of mass of the system where measurements exist. The most centrally condensed mass distribution, a point mass, is shown as the horizontal dash-dot line labeled “a”. The least centrally condensed mass distribution plotted is for a Plummer density distribution, where density, $`\rho `$, is given by $`\rho =\rho _0\left(1+(r/r_0)^2\right)^{-\alpha /2}`$, for $`\rho _0=6\times 10^{11}`$ $`\mathrm{M}_{}`$ pc<sup>-3</sup>, $`r_0=0.01`$ pc, and $`\alpha =5`$. This distribution is shown with a curved dash-dot line labeled “d”. It is difficult to make a physically reasonable mass distribution that is significantly less centrally condensed than this and consistent with the stellar motion data. These two “extreme” model distributions approximately bound all allowed mass distributions; two particular examples of intermediate models are shown in Fig. 3 with dashed curves labeled “b” and “c”.
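For the $`\alpha =5`$ Plummer distribution of model “d” the enclosed mass has a simple closed form, obtained by integrating the density profile above, which makes the flattening of the curves in Fig. 3 easy to check (a sketch):

```python
import numpy as np

rho0, r0 = 6e11, 0.01    # Msun/pc^3 and pc, model "d" parameters

def M_enclosed(r):
    # M(r) = (4*pi/3) * rho0 * r**3 * (1 + (r/r0)**2)**(-3/2) for alpha = 5
    return (4 * np.pi / 3) * rho0 * r**3 * (1 + (r / r0) ** 2) ** (-1.5)

print(M_enclosed(0.1))   # ~2.5e6 Msun: flat well outside r0, as required
```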
Assuming that Sgr A\* has a mass $`\ll 10^6`$ $`\mathrm{M}_{}`$, it will orbit about the center of mass of the system. The orbital speed for a body at a radius $`r`$ from the center of mass is given by $`V=\sqrt{GM_{encl}/r},`$ where $`M_{encl}`$ is the enclosed mass at that radius. Setting $`V=20`$ km s<sup>-1</sup>, our limit for the motion of Sgr A\*, produces the sloping solid line in Fig. 3. Only enclosed masses below that line are permitted by our observations. For radii greater than about $`3\times 10^{5}`$ pc (6 AU), this limit rules out all but the least centrally condensed mass models. For radii less than about $`3\times 10^{5}`$ pc, the orbital motion of Sgr A\* produces angular excursions less than 0.8 mas. In this case, while the orbital speed might greatly exceed 20 km s<sup>-1</sup>, we may have failed to detect these excursions owing to our poor temporal sampling and $`\sim 0.4`$ mas errors in the North–South direction. Thus, we do not extend the motion limit line below $`3\times 10^{5}`$ pc, and at this radius we replace the motion limit with a vertical line in Fig. 3.
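The sloping limit line is simply $`M_{max}(r)=V^2r/G`$; a sketch evaluating it at the two radii discussed above:

```python
G = 4.301e-3              # gravitational constant in pc (km/s)^2 / Msun
V = 20.0                  # upper limit on Sgr A*'s orbital speed, km/s

M_max = lambda r_pc: V**2 * r_pc / G    # enclosed-mass limit, in Msun
print(M_max(3e-5))        # ~2.8 Msun at 6 AU
print(M_max(1e-2))        # ~930 Msun at 0.01 pc
```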
The stellar proper motions and the limits on the proper motion of Sgr A\* combine to exclude almost all of “parameter space” for models of the density distribution of material in the inner 0.1 pc of the Galactic Center. The stellar motions exclude “soft” gravitational potentials (i.e., the least centrally condensed mass distributions) and the motion limit for Sgr A\* excludes “hard” gravitational potentials. Continued VLBA observations of Sgr A\* over the next five years could reduce the uncertainty in the peculiar motion of Sgr A\* to about 2 km s<sup>-1</sup> (dotted line in Fig. 3) out of the plane of the Galaxy. Improved accuracy for positions in individual VLBA observations, necessary for a trigonometric parallax of Sgr A\*, could move the small angular excursion limit to $`<0.2`$ mas. This would further and drastically restrict the range of possible models for a dominant central dark matter condensation.
The point-like mass distribution labeled “a” in Fig. 3 essentially requires a super-massive black hole. Since we have assumed in this section that this is not Sgr A\*, we would be left with the question of why a low-mass Sgr A\* radiates far more than a super-massive black hole in essentially the same environment. Other mass models, and especially those with the most centrally condensed mass distributions (eg, labeled “b” in Fig. 3) require exceedingly high mass densities. For example, model “b” has $`5\times 10^5`$ $`\mathrm{M}_{}`$ within a radius of 6 AU, resulting in a density of $`10^{19}`$ $`\mathrm{M}_{}`$ pc<sup>-3</sup>. Theoretical arguments suggest that such models are unlikely to be stable for even $`10^7`$ y, regardless of the composition of the matter (Maoz 1998).
The hardest cases to exclude on either observational or theoretical grounds are given by the mass distributions that are the least centrally condensed, but still consistent with the stellar motion data. For this case, one can ask at what mass perturbations by stars in the central cluster would lead to detectable motion of Sgr A\*. In order to answer this question, we modified the N-body code described in § 3.3.3 to include a fixed gravitational potential appropriate for a Plummer law mass distribution with $`\alpha =5`$, $`\rho _0=6\times 10^{11}`$ $`\mathrm{M}_{}`$ pc<sup>-3</sup>, and $`r_0=0.01`$ pc (model “d”). Sgr A\* was assigned a mass ($`\le 10^6`$ $`\mathrm{M}_{}`$) and the entire system, including 254 stars, was allowed to evolve in time. For the softer allowable gravitational potentials (eg, “c” and “d”) there is little enclosed mass within $`10^{4}`$ pc to bind Sgr A\*. We found that when Sgr A\*’s mass was less than $`\sim 3,000`$ $`\mathrm{M}_{}`$, Sgr A\* gradually moved outward from the center of the gravitational potential and achieved orbital speeds in excess of 20 km s<sup>-1</sup>. Thus, we conclude that the lack of detectable motion for Sgr A\* places a conservative lower limit of about 1,000 $`\mathrm{M}_{}`$ for the mass associated with Sgr A\*.
## 4 Conclusions
The apparent proper motion of Sgr A\*, relative to extragalactic sources, is consistent with that expected from the Sun orbiting the center of the Galaxy. Thus, Sgr A\* must be very close to, and most likely at, the dynamical center of the Galaxy. In this case, the proper motion measurement gives the angular rotation speed at the Sun, $`(\mathrm{\Theta }_0+V_{})/R_0`$, directly, from which we estimate $`\mathrm{\Theta }_0=218\pm 19`$ km s<sup>-1</sup> for $`R_0=8`$ kpc.
Our upper limit for the peculiar motion of Sgr A\* of about 20 km s<sup>-1</sup> implies a lower limit for the mass of Sgr A\* of $`\sim 10^3`$ $`\mathrm{M}_{}`$. This rules out the possibility that Sgr A\* is any known multiple star system, such as a contact binary containing $`\sim 10`$ $`\mathrm{M}_{}`$ and radiating near its Eddington luminosity. A mass of more than $`10^3\mathrm{M}_{}`$ and a luminosity $`<10^5\mathrm{L}_{}`$ indicates that Sgr A\* is radiating at $`<0.1\%`$ of its Eddington limit.
All observations are consistent with Sgr A\* being a super-massive black hole. Since the lower limit for the mass of Sgr A\* is only about 0.1% of the gravitational mass inferred from the stellar motions, one cannot claim from our observations alone that even a significant fraction of the dark mass must be in Sgr A\*. However, alternative models involving “dark” matter distributions are severely restricted by observations.
Future VLBA observations should be able to reduce the uncertainty in the measurement of the motion of Sgr A\* out of the Galactic plane to $`\sim 0.2`$ km s<sup>-1</sup>, at which point knowledge of the Solar Motion may become the limiting factor. Should the peculiar motion of Sgr A\* be less than 0.2 km s<sup>-1</sup>, then its mass almost certainly exceeds $`10^5\mathrm{M}_{}`$. Such a large mass tied directly to the radio source, whose size is $`<1`$ AU from VLBI observations, would be compelling evidence that Sgr A\* is a super-massive black hole.
We thank D. Backer for providing coordinates for sources J1745–283 and J1748–291 early in our project, M. Eubanks for measuring the astrometric position of J1745–283, and V. Dhawan for helping with the VLBA setup.
Fig. 1. Position residuals of Sgr A\* relative to J1745–283 (circles) and J1748–291 relative to J1745–283 (crosses) versus time. Eastward components are shown in the top panel and Northward components in the bottom panel. The solid and dashed lines give the variance-weighted best fit components of proper motion. The J1748–291—J1745–283 positions have been offset to fit the plot scale for the Sgr A\*—J1745–283 data.
Fig. 2. Position residuals of Sgr A\* relative to J1745–283 on the plane of the sky. North is to the top and East to the left. Each measurement is indicated with an ellipse, approximating the apparent, scatter broadened size of Sgr A\* at 43 GHz, the date of observation, and $`1\sigma `$ error bars. The dashed line is the variance-weighted best-fit proper motion, and the solid line gives the orientation of the Galactic plane.
Fig. 3. Enclosed mass versus radius for various model distributions of dark matter, assuming the mass of Sgr A\* $`\ll 10^6`$ $`\mathrm{M}_{}`$. Models labelled “a” through “d” have decreasing central mass condensations (progressively softer gravitational potentials) and approximately bound mass distributions that are consistent with stellar proper motion data. Model “a” is a point mass; models “b” through “d” have Plummer density distributions with $`\rho _0`$ of $`3.9\times 10^{18}`$, $`2.5\times 10^{14}`$, and $`6.0\times 10^{11}`$ $`\mathrm{M}_{}`$ pc<sup>-3</sup>; $`r_0`$ of 0.00002, 0.001, and 0.01 pc; and $`\alpha `$ of 3, 4, and 5, respectively. The sloping solid line indicates the upper limit for enclosed mass based on the proper motion of Sgr A\*. The vertical solid line at $`3\times 10^{5}`$ pc (6 AU) indicates the upper limit in radius, where angular excursions of Sgr A\* of $`<0.8`$ mas could be missed owing to insufficient astrometric accuracy. The sloping dotted line indicates expected improvement in the measurement of the proper motion of Sgr A\* within $`\sim 5`$ years.
# Coarse-grained surface energies and temperature-induced anchoring transitions in nematic liquid crystals
## Abstract
We introduce a coarse-grained description of the surface energy of a nematic liquid crystal. The thermal fluctuations of the nematic director close to the surface renormalize at macroscopic scales the bare surface potential in a temperature-dependent way. The angular dependence of the renormalized potential is dramatically smoothed, thus explaining the success of the Rapini-Papoular form. Close to the isotropic phase, the anchoring energy is strongly suppressed and the change of its shape allows for anchoring transitions. Our theory describes quantitatively the temperature dependence of the anchoring energy and the temperature-induced anchoring transitions reported in the literature.
In recent years, surface phenomena have attracted a lot of interest. Particularly, the interface between liquid crystals and solid substrates displays a rich variety of behaviors: orientational wetting and spreading, memory effects, surface melting, Kosterlitz-Thouless transitions, quasi-critical behavior of surface energies, surface anchoring transitions, etc. Despite the complexity and the diversity of the interactions between nematic liquid crystals and solid substrates, some simple, though unexplained, ubiquitous aspects emerge from the experiments: the angular dependences of the surface potentials are extremely smooth and well described by the so-called Rapini-Papoular law; at high temperatures the preferred surface orientation is usually either parallel or perpendicular to the substrate; tilted orientations are difficult to achieve; and temperature-driven anchoring transitions systematically occur immediately below the bulk isotropic transition temperature. In this Letter, we show that all these effects can be explained in terms of a renormalization of the surface energy by the short-wavelength orientational fluctuations in the bulk.
Nematic liquid crystals are fluid mesophases made of elongated molecules displaying a broken orientational symmetry along a non-polar direction called the nematic director $`𝐧`$. At a molecular level, nematics exhibit large orientational fluctuations and usually some degree of short-range biaxial and positional order. The link between the microscopic and the macroscopic description can be established by means of a coarse-graining procedure. Before determining the consequences of this procedure on the surface behavior, let us briefly discuss how it is carried out in the bulk. One can associate a local orientation $`𝐦`$ to each molecule; then the probability $`p[𝐦]`$ of a given instantaneous microscopic configuration $`𝐦(𝐫)`$ is proportional to the Boltzmann factor $`\mathrm{exp}(-\beta \mathcal{F}[𝐦])`$, where $`\mathcal{F}[𝐦]`$ is a complex microscopic free-energy and $`\beta =1/k_\mathrm{B}T`$ is the inverse temperature. At macroscopic scales the bulk free-energy $`F[𝐧]`$ is well described by a simple elasticity involving only the average molecular orientation $`𝐧`$ and its gradient. The quantity $`\mathrm{exp}(-\beta F[𝐧])`$ gives the probability of observing a given smooth director configuration $`𝐧(𝐫)`$, whatever the details of its rapidly varying components. One therefore obtains the coarse-grained free-energy $`F[𝐧]`$ by summing the microscopic probabilities $`\mathrm{exp}(-\beta \mathcal{F}[𝐦])`$ over all the Fourier components $`𝐦(𝐪)`$ with wavevector $`|𝐪|>\mathrm{\Lambda }`$, where $`\mathrm{\Lambda }`$ is some macroscopic cutoff, the components $`𝐦(𝐪)`$ with $`|𝐪|<\mathrm{\Lambda }`$ being fixed and denoted by $`𝐧(𝐪)`$. The resulting free-energy $`F[𝐧]`$, which corresponds to the Frank elasticity, is therefore meaningful only for the slowly varying Fourier components $`|𝐪|<\mathrm{\Lambda }`$ of $`𝐧`$. Such a coarse-graining procedure yields a renormalized elasticity expanded in power series of the derivatives of $`𝐧`$, whose coefficients depend on $`T`$ and are expected to scale as powers of the range $`b`$ of the molecular interactions. This is why, for $`\mathrm{\Lambda }b\ll 1`$, it is justified to retain in $`F[𝐧]`$ only the lowest-order gradient terms.
Using the Frank elasticity $`F[𝐧]`$ implicitly entails a coarse-graining in the bulk. For the sake of consistency, this procedure should also be performed on the surface. The latter will then be effectively transformed into a blurred layer of width $`\mathrm{\Lambda }^{-1}`$ acquiring some properties of the bulk. Such a coupling between surface and bulk is usually introduced in a phenomenological way by means of Landau expansions in the tensorial nematic order-parameter $`𝐐`$. However, modeling surface properties in this way is rather complicated, since one has to deal with spatially varying tensorial fields, and also somewhat arbitrary, since high powers of $`𝐐`$ should be included due to the first-order character of the nematic–isotropic transition.
We have calculated the renormalization of the surface energy by coarse-graining the director orientation from a mesoscopic length $`a`$, of the order of the nematic coherence length $`\xi _{\mathrm{NI}}`$ at the isotropic transition, up to a scale $`\mathrm{\Lambda }^{-1}`$ at least a few times larger than $`a`$. Our results, based on perturbation theory, are valid for weak anchorings, in the sense $`\ell \gg \xi _{\mathrm{NI}}`$, where $`\ell `$ is the anchoring extrapolation length. Calling $`\theta `$ and $`\varphi `$ the director’s polar angles and starting from a “bare” surface anchoring energy expanded in Fourier harmonics of the form $`\mathrm{cs}(2n\theta )\mathrm{cs}(m\varphi )`$ ($`\mathrm{cs}`$ being either $`\mathrm{cos}`$ or $`\mathrm{sin}`$), we find that each harmonic is independently renormalized by a Debye-Waller factor $`\mathrm{exp}(-\alpha _n-\beta _m)`$, with $`\alpha _n,\beta _n\sim n^2k_\mathrm{B}T/Ka`$, where $`K`$ is a bulk nematic elastic constant. At scales larger than $`\mathrm{\Lambda }^{-1}`$, the high-order harmonics are thus strongly suppressed by the director thermal fluctuations: this explains the success of the Rapini-Papoular form. Moreover, the anchoring energy naturally acquires a temperature dependence through the elastic constant $`K(T)`$. The surface energy is thus reduced close to the nematic–isotropic transition, where $`K`$ is lowered, in agreement with experiments. The different temperature dependences of the surface harmonics allow for anchoring transitions: as the temperature increases, the suppression of the high-order harmonics shifts the minimum of the anchoring energy towards some symmetry axis of the surface. Our results fit well the quasi-critical temperature dependence of the azimuthal anchoring energy measured by Faetti et al., the oblique-to-homeotropic anchoring transition measured by Patel and Yokoyama, and the bistable oblique-to-planar anchoring transition measured by Jägemalm et al. It turns out that the effective macroscopic anchoring, whose minimum gives the coarse-grained director orientation at the surface, can significantly differ from the mesoscopic surface potential (Fig. 1). The director fluctuations within the coarse-grained region dramatically smooth the fine details of the anchoring energy. They can also shift the average equilibrium position of the director at the surface, similarly to the amplitude-dependent shift of the average position of an anharmonic oscillator.
Precisely, we consider a semi-infinite nematic slab in the $`z\ge 0`$ half-space and we describe the nematic director by its spherical coordinates $`\theta `$, $`\varphi `$ centered on the $`z`$-axis. We start with a bulk elasticity already coarse-grained on a mesoscopic length $`a\sim \xi _{\mathrm{NI}}`$, such that it can be expressed in the usual Frank form. In the one-constant approximation, its harmonic part, expanded about an arbitrary direction ($`\theta _0`$, $`\varphi _0`$), takes the form
$$\mathcal{H}_0=\frac{1}{2}K(T)\int d^3r\left[\left(\mathbf{\nabla }\theta \right)^2+\mathrm{sin}^2\theta _0\left(\mathbf{\nabla }\varphi \right)^2\right].$$
(1)
At the length-scale $`a`$, the “bare” surface potential $`\mathcal{H}_s`$ is given by some local functional of the mesoscopic director’s orientation at the surface
$$\mathcal{H}_s=\int d^2r_{\perp }v(\theta ,\varphi ).$$
(2)
The total free-energy $`F_t`$ of the nematic slab is given by the path integral
$$\mathrm{exp}(-\beta F_t)=\int 𝒟[\theta ]𝒟[\varphi ]\mathrm{exp}\left[-\beta (\mathcal{H}_0+\mathcal{H}_{\mathrm{nh}}+\mathcal{H}_s)\right],$$
(3)
where $`\mathcal{H}_{\mathrm{nh}}`$ contains the non-harmonic bulk terms. To further coarse-grain on a length-scale $`\mathrm{\Lambda }^{-1}>a`$, we put $`\theta (𝐫)=\theta ^<(𝐫)+\theta ^>(𝐫)`$ and $`\varphi (𝐫)=\varphi ^<(𝐫)+\varphi ^>(𝐫)`$, where $`\theta ^<(𝐫)`$, $`\varphi ^<(𝐫)`$ have Fourier components with wavevectors $`|𝐪|\le \mathrm{\Lambda }`$, and $`\theta ^>(𝐫)`$, $`\varphi ^>(𝐫)`$ have Fourier components with $`\mathrm{\Lambda }<|𝐪|<2\pi /a`$. Integrating out $`\theta ^>`$ and $`\varphi ^>`$ yields the renormalized Hamiltonian $`\mathcal{H}^<`$ at length-scales $`\mathrm{\Lambda }^{-1}`$; to lowest-order in perturbation theory, it is given by
$$\mathcal{H}^<=\mathcal{H}_0+\langle \mathcal{H}_{\mathrm{nh}}+\mathcal{H}_s\rangle _>$$
(4)
where $`\langle \cdots \rangle _>`$ indicates the statistical average over the high-wavevector components, weighted by the Gaussian Hamiltonian $`\mathcal{H}_0`$. At first-order, the bulk and the surface are therefore renormalized independently.
Due to the $`𝐧\to -𝐧`$ invariance, the bare surface energy density can be Fourier expanded as
$`v(\theta ,\varphi )={\displaystyle \underset{n,m}{\sum ^{\prime }}}\mathrm{cos}n\theta `$ $`\left[w_{nm}^{\mathrm{cc}}\mathrm{cos}m\varphi +w_{nm}^{\mathrm{cs}}\mathrm{sin}m\varphi \right]`$ (5)
$`+{\displaystyle \underset{n,m}{\sum ^{\prime \prime }}}\mathrm{sin}n\theta `$ $`\left[w_{nm}^{\mathrm{sc}}\mathrm{cos}m\varphi +w_{nm}^{\mathrm{ss}}\mathrm{sin}m\varphi \right],`$ (6)
where $`n`$ runs over all even integers, and $`m`$ runs over all even (resp. odd) integers in the first (resp. second) sum.
Even though (6) is non-linear, the renormalized Hamiltonian (4) can be transformed into Gaussian integrals by writing the trigonometric functions as complex exponentials. We obtain a renormalized surface energy density $`V(\theta ^<,\varphi ^<)`$ whose Fourier components $`W_{nm}^{\alpha \beta }`$ ($`\alpha `$, $`\beta `$ being either c or s) are separately renormalized:
$$W_{nm}^{\alpha \beta }(t)=w_{nm}^{\alpha \beta }\mathrm{exp}\left[-\frac{1}{2}\left(n^2+\frac{m^2}{\mathrm{sin}^2\theta _0}\right)t\right],$$
(7)
where, for $`\mathrm{\Lambda }a\ll 1`$,
$$t=\langle \theta ^>(𝐫)^2\rangle _>=k_\mathrm{B}T\int _\mathrm{\Lambda }^{2\pi /a}\frac{d^3q}{(2\pi )^3}\frac{1}{Kq^2}\simeq \frac{k_\mathrm{B}T}{\pi Ka}.$$
(8)
Note that $`\langle \varphi ^>(𝐫)^2\rangle _>=t/\mathrm{sin}^2\theta _0`$ also appears in (7).
It follows from (7) that for large $`t`$, i.e., in the vicinity of the nematic-isotropic transition, where $`K`$ is reduced, the effective surface potential should be very well described by its first harmonics. This is indeed what was measured by Faetti et al. using the nematic liquid crystal 5CB, planarly anchored on a glass plate treated by oblique evaporation of SiO: exploring the azimuthal anchoring energy in regions far away from the parabolic minimum, they could not detect any deviation from the simple form
$$V=W_0\mathrm{sin}^2\varphi =W_{02}^{\mathrm{cc}}\mathrm{cos}2\varphi +\mathrm{const}.$$
(9)
Setting $`\theta _0=\pi /2`$, $`\varphi _0=0`$, we have fitted their data with (7), i.e., $`W_{02}^{\mathrm{cc}}(t)=w_{02}^{\mathrm{cc}}\mathrm{exp}(-2t)`$, in which we have assumed a Landau-de Gennes form for the temperature dependence of $`K`$, yielding
$$t=\frac{k_\mathrm{B}T_{\mathrm{NI}}}{\pi K_{\mathrm{NI}}a}\left(\frac{4}{3+\sqrt{9-8\tau }}\right)^2,$$
(10)
where $`\tau =(TT^{})/(T_{\mathrm{NI}}T^{})`$ is the reduced temperature, $`T_{\mathrm{NI}}`$ the nematic-isotropic transition temperature, $`T^{}`$ the isotropic supercooling temperature, and $`K_{\mathrm{NI}}`$ the elastic constant at the transition. With $`T_{\mathrm{NI}}306.8^{}\mathrm{K}`$, $`T_{\mathrm{NI}}T^{}1.5^{}\mathrm{K}`$ , our best fit, shown in Fig. 2, yields $`t_{\mathrm{NI}}k_\mathrm{B}T_{\mathrm{NI}}/\pi K_{\mathrm{NI}}a=2.1\pm 0.1`$ and $`W_{02}^{\mathrm{cc}}(t_{\mathrm{NI}})=(2.08\pm 0.08)\times 10^4\mathrm{erg}/\mathrm{cm}^2`$. For an extrapolated $`K_{\mathrm{NI}}0.5\mathrm{pN}`$ , the value of $`t_{\mathrm{NI}}`$ yields $`a13\mathrm{\AA }`$, which roughly compares with $`\xi _{\mathrm{NI}}`$ for a first-order phase transition.
Close to the nematic-isotropic transition of the nematic compound E7, Patel and Yokoyama have observed a tilted-to-homeotropic anchoring transition on a fluoropolymer-coated surface. Again, we can explain it by retaining in $`V`$ the first two harmonics allowed by symmetry, i.e.,
$$V=W_{20}^{\mathrm{cc}}\mathrm{cos}2\theta +W_{40}^{\mathrm{cc}}\mathrm{cos}4\theta .$$
(11)
Minimizing (11) with respect to $`\theta `$ yields the anchoring phase diagram shown in the inset of Fig. 3, which displays the following regions separated by second-order transition lines: conical oblique (O), homeotropic (H), degenerate planar (P), and metastable homeotropic/planar (H/P). According to (7) and (10), as temperature increases, the easy-axis of the anchoring follows the typical path shown in the phase diagram. Setting $`T_{\mathrm{NI}}\simeq 57.7^{\circ }\mathrm{C}`$ and $`T_\mathrm{H}=54.48^{\circ }\mathrm{C}`$ , where $`T_\mathrm{H}`$ is the homeotropic-to-oblique transition temperature, we closely fit the temperature dependence of the easy-axis direction $`\theta `$ with $`T_{\mathrm{NI}}-T^{*}\simeq 1.62\mathrm{K}`$ and $`t_{\mathrm{NI}}=1.2\pm 0.04`$ (Fig. 3).
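The path in the phase diagram can be reproduced by a direct minimization of (11), with $`W_{20}^{\mathrm{cc}}e^{-2t}`$ and $`W_{40}^{\mathrm{cc}}e^{-8t}`$ following from (7). The bare amplitudes in the sketch below are assumed values, chosen only to place the system in the oblique region at low temperature; they are not the fitted E7 parameters:

```python
import numpy as np

def easy_axis(w20, w40, t):
    """Minimize Eq. (11) with W_n0(t) = w_n0 * exp(-n^2 t / 2), cf. Eq. (7)."""
    W20, W40 = w20*np.exp(-2.0*t), w40*np.exp(-8.0*t)
    theta = np.linspace(0.0, np.pi/2, 4001)
    V = W20*np.cos(2*theta) + W40*np.cos(4*theta)
    return theta[np.argmin(V)]

w20, w40 = -1.0, 1.2   # assumed bare amplitudes (w20 < 0 favors homeotropic)
for t in (0.2, 0.4, 0.6, 1.0):
    print(f"t = {t:.1f}  easy-axis theta = "
          f"{np.degrees(easy_axis(w20, w40, t)):5.1f} deg")
```

Since $`W_{40}^{\mathrm{cc}}`$ decays as $`e^{-8t}`$ while $`W_{20}^{\mathrm{cc}}`$ only as $`e^{-2t}`$, raising the temperature (larger $`t`$) drives the easy axis continuously from oblique to homeotropic, as observed.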
Using the same values of $`T_{\mathrm{NI}}`$, $`T_{\mathrm{NI}}-T^{*}`$ and $`t_{\mathrm{NI}}`$, which depend only on the nematic material, we can fit equally well the bistable oblique-to-planar anchoring transition observed by Jägemalm et al. at $`T=T_\mathrm{P}=49.7^{\circ }\mathrm{C}`$ for the same compound E7 (Fig. 4). This anchoring transition also appears close to the nematic-isotropic transition, in the presence of a glass substrate treated by an oblique evaporation of SiO. Here, by symmetry, the lowest-order expansion of $`V`$ is
$$V=W_{20}^{\mathrm{cc}}\mathrm{cos}2\theta +W_{02}^{\mathrm{cc}}\mathrm{cos}2\varphi +W_{21}^{\mathrm{sc}}\mathrm{sin}2\theta \mathrm{cos}\varphi .$$
(12)
Setting $`\overline{W}_\theta =W_{20}^{\mathrm{cc}}/W_{02}^{\mathrm{cc}}`$ and $`\overline{W}_{\theta \varphi }=|W_{21}^{\mathrm{sc}}/W_{02}^{\mathrm{cc}}|`$, the corresponding anchoring phase diagram, shown in the inset of Fig. 4, displays the four regions, separated by second-order transition lines, that are observed experimentally : oblique in the evaporation plane (O), bistable symmetric with respect to the evaporation plane (B), homeotropic (H), and planar orthogonal to the evaporation plane (P). Once the values of $`T_\mathrm{P}`$ and, from the previous fit, of $`T_{\mathrm{NI}}`$, $`T_{\mathrm{NI}}-T^{*}`$, and $`t_{\mathrm{NI}}`$ are fixed, the fit on $`\theta `$ has no adjustable parameter, while the fit on $`\varphi `$ retains a single free parameter, $`w_{21}^{\mathrm{sc}}/w_{02}^{\mathrm{cc}}`$, best adjusted to the value $`6.76\pm 0.05`$.
To estimate the validity of our first-order perturbative expansion in the surface potential, we have calculated the second-order correction to $`W_{40}^{\mathrm{cc}}`$ coming from $`w_{20}^{\mathrm{cc}}`$ . For $`\mathrm{\Lambda }a\ll 1`$, we find
$$\frac{\delta W_{40}^{\mathrm{cc}}}{w_{20}^{\mathrm{cc}}}\simeq \frac{2w_{20}^{\mathrm{cc}}}{\pi \mathrm{\Lambda }K}\mathrm{exp}(-4t),$$
(13)
which is negligible for $`\mathrm{\Lambda }^{-1}\ll \mathrm{\ell }`$, where $`\mathrm{\ell }=K/w_{20}^{\mathrm{cc}}`$ is a bare anchoring extrapolation length; this sets the limit of validity of our analysis. Coarse-graining on a length $`\mathrm{\Lambda }^{-1}>\mathrm{\ell }`$ effectively reduces the anchoring for purely elastic reasons, as will be described elsewhere.
Finally, note that, since we coarse-grained on a length $`\mathrm{\Lambda }^{-1}>\xi _{\mathrm{NI}}`$, our model is insensitive to the variations of the scalar order-parameter $`S`$ . This does not imply, however, that $`S`$ must be uniform throughout the sample: any substrate inducing a surface variation of $`S`$ will still be described—at length-scales larger than $`\xi _{\mathrm{NI}}`$—by some potential $`v(\theta ,\varphi )`$ depending only on the director’s orientation. Our analysis simply disregards the underlying surface variations of $`S`$: this is a limitation only when $`\xi _{\mathrm{NI}}`$ becomes critically large.
We thank A. Ajdari, G. Durand, P.-G. de Gennes, and L. Peliti for useful discussions. P. G. acknowledges the support of a Chaire Joliot de l’ESPCI.
# Microscopic and Bulk Spectra of Dirac Operators from Finite-Volume Partition Functions
## 1 Introduction
The spectral density $`\rho (\lambda )`$ of the Dirac operator in QCD contains interesting information as it is, for example, directly proportional to the chiral condensate $`\mathrm{\Sigma }`$ at the origin through the Banks-Casher relation . While the full spectrum is only accessible numerically in QCD on the lattice, many analytic results for parts of the spectrum have been obtained during the past years using random matrix theory (RMT), chiral perturbation theory and finite-volume partition functions (for a recent review see ). Within these results two different regimes have been investigated: (i) unscaled macroscopic correlations and (ii) microscopic correlations between eigenvalues rescaled by the mean spectral density. In the region (ii) one furthermore has to distinguish between scaling at the origin and in the bulk of the spectrum. In the macroscopic regime (i) the slope of the spectral density at the origin
$$\rho ^{}(\lambda =0)=\frac{\mathrm{\Sigma }^2}{16\pi ^2F_\pi ^4}\frac{(N_f-2)(N_f+\beta )}{\beta N_f},$$
(1)
has been calculated by Smilga and Stern for fundamental $`SU(N_c\ge 3)`$ fermions $`(\beta =2)`$ and very recently by Toublan and Verbaarschot for fundamental $`SU(2)`$ fermions $`(\beta =1)`$ and adjoint fermions $`(\beta =4)`$ with any gauge group. Here $`F_\pi `$ denotes the pion decay constant.
Coming to the rescaled correlations (ii) at the origin, all higher order correlation functions of scaled eigenvalues $`x=\lambda /\mathrm{\Sigma }V`$ are known. They are given analytically as functions of the chiral condensate $`\mathrm{\Sigma }`$ and scaled sea-quark masses $`\mu _f=m_f/\mathrm{\Sigma }V`$ (for details see ). This description holds in the limit of Leutwyler and Smilga $`1/\mathrm{\Lambda }_{QCD}\ll V^{1/4}\ll 1/m_\pi `$, where $`m_\pi `$ is the pion mass, up to a certain scale $`\lambda _{Th}\sim V^{-1/2}`$ called the Thouless energy in analogy to disordered systems. Within this region the analytic description provides an exact nonperturbative limit of QCD, dealing, however, with an unphysically heavy pion that does not fit into the finite volume of the system.
Away from the origin much less is known about correlations on the microscopic scale. Up to now it has only been found empirically that the correlations there also follow a (non-chiral) RMT description . The aim of the results presented here is to understand analytically why this is possible. Therefore, the correlation functions are expressed in terms of finite-volume partition functions and explored in the bulk region but still in the vicinity of the origin. The region of validity is therefore given by the one of finite-volume partition functions. It does not hold all the way up to the unphysical turn-over in the spectrum (dotted line in Fig. 1) which is due to cut-off and finite-size effects on the lattice. This intermediate region remains unexplained so far.
## 2 Correlation functions from finite-volume partition functions
Before taking the bulk limit we have to recall how the correlators can be expressed using the Leutwyler and Smilga partition functions in the microscopic limit at the origin. The generating function for the correlators is the kernel of orthogonal polynomials. For gauge group $`SU(N_c\ge 3)`$ with fermions in the fundamental representation it is given by
$`K_S(x,y;\{\mu _f\})`$ $`=`$ $`c\sqrt{xy}{\displaystyle \underset{f=1}{\overset{N_f}{\prod }}}\sqrt{(x^2+\mu _f^2)(y^2+\mu _f^2)}{\displaystyle \frac{𝒵_\nu ^{(N_f+2)}(\{\mu _f\},ix,iy)}{𝒵_\nu ^{(N_f)}(\{\mu _f\})}}`$ (2)
$`𝒵_\nu ^{(N_f)}(\{\mu _f\})`$ $`=`$ $`{\displaystyle \frac{det𝒜(\{\mu _f\})}{\mathrm{\Delta }(\mu _f^2)}},𝒜_{ij}=\mu _i^{j-1}I_{\nu +j-1}(\mu _i),`$ (3)
where $`𝒵_\nu ^{(N_f)}`$ is the partition function of $`N_f`$ flavors with rescaled masses $`\mu _f`$, $`\nu `$ zeromodes and $`\mathrm{\Delta }`$ being the Vandermonde determinant. The rescaled correlation functions can then be obtained as
$$\rho _S^{(N_f,\nu )}(x_1,\mathrm{},x_k;\{\mu _f\})=\underset{1\le a,b\le k}{det}K_S(x_a,x_b;\{\mu _f\}).$$
(4)
This description is equivalent to a unitary chiral RMT description $`(\beta =2)`$. Since we want to go to the bulk of the spectrum we should expect that the chiral properties no longer play a role for the correlations. We will find that this is indeed the case even in the vicinity of the origin. It is therefore instructive to compare to 3-dimensional QCD with flavor symmetry breaking, which is described by non-chiral RMT . The corresponding kernel can be written in the same way as in Eq. (2) after dropping the prefactor $`c\sqrt{xy}`$, where the product now runs over $`N_f/2`$ flavors occurring in pairs $`\pm \mu _f`$. The corresponding partition function reads
$$𝒵^{(N_f)}(\{\mu \})=det\left(\begin{array}{cc}A(\{\mu _f\})\hfill & A(\{-\mu _f\})\hfill \\ A(\{-\mu _f\})\hfill & A(\{\mu _f\})\hfill \end{array}\right)\mathrm{\Delta }(\{\mu \})^{-1},A_{ij}=(\mu _i)^{j-1}e^{\mu _i}.$$
(5)
## 3 The bulk limit
In RMT it is known for massless flavors how to “invert” the origin scaling limit where $`V\to \infty `$, $`\lambda \to 0`$ and $`x=\lambda /\mathrm{\Sigma }V`$ is kept fixed:
$$\underset{\begin{array}{cc}x,y\to \infty \hfill & \\ x-y=O(1)\hfill & \end{array}}{lim}K_{\text{origin}}(x,y)=K_{\text{bulk}}(x,y)=c^{}\frac{\mathrm{sin}(x-y)}{x-y}.$$
(6)
It takes us from the Bessel-kernel at the origin (Eq. (2) for $`m_f=0`$) to the sine-kernel in the bulk. We will see in the following that the same result holds including the masses. In RMT the translational invariance of the spectrum then implies that the sine-kernel holds everywhere in the bulk. Since we cannot make such a strong statement about the QCD Dirac spectrum we will be restricted to the domain of validity of finite-volume partition functions.
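This limit is easy to verify numerically. The sketch below assumes the standard massless Bessel kernel of the chiral unitary ensemble for $`K_{\text{origin}}`$ (quenched case, $`\nu =0`$) and the normalization $`c^{}=1/\pi `$; both are conventions of this illustration:

```python
import numpy as np
from scipy.special import jv

def K_origin(x, y, nu=0):
    """Massless Bessel kernel of the chiral unitary ensemble (assumed form)."""
    num = x*jv(nu+1, x)*jv(nu, y) - y*jv(nu+1, y)*jv(nu, x)
    return np.sqrt(x*y) * num / (x**2 - y**2)

def K_bulk(x, y):
    return np.sin(x - y) / (np.pi*(x - y))   # sine kernel with c' = 1/pi

for x in (20.0, 100.0, 500.0):               # x, y -> infinity, x - y = O(1)
    y = x + 1.3
    print(f"x = {x:6.1f}  Bessel: {K_origin(x, y):+.5f}  sine: {K_bulk(x, y):+.5f}")
```

The residual difference oscillates and decays like $`1/x`$, coming from the non-universal $`\mathrm{sin}(x+y)`$ terms that are averaged out in the bulk.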
In taking the bulk limit of the full kernel Eq. (2) we also have to scale back all masses $`\mu _f\to \infty `$ at the same time since they would otherwise drop out trivially. Using the asymptotic expansion of Bessel functions we obtain
$`\underset{\begin{array}{cc}x,y,\mu _f\to \infty \hfill & \\ \text{differences}=O(1)\hfill & \end{array}}{lim}K_S(x,y;\{\mu _f\})`$ $`=`$ $`c^{}{\displaystyle \underset{f=1}{\overset{N_f}{\prod }}}{\displaystyle \frac{(\mu _f^2+xy)}{\sqrt{(x^2+\mu _f^2)(y^2+\mu _f^2)}}}{\displaystyle \frac{\mathrm{sin}(x-y)}{x-y}}.`$ (9)
Expanding the prefactor around one of the arguments it is unity up to higher orders and thus we find the same result as in the massless case Eq. (6). Taking the same limit of the non-chiral theory Eqs. (2),(5) we obtain precisely the same result and thus find, that the chiral properties are lost in the bulk even in the vicinity of the origin. Another consequence is that in the bulk any mass scale given by the sea-quarks drops out on the microscopic scale.
In Ref. it has been stated that the Thouless energy scale at the origin is larger or equal to the one in the bulk. Looking at the bulk but close to the origin it is clear that if we take $`\lambda _1<\lambda _{Th}`$ and $`\lambda _2>\lambda _{Th}`$ we are outside the domain of validity although it may still be $`|\lambda _1-\lambda _2|<\lambda _{Th}`$.
## Acknowledgments
Helpful discussions with P.H. Damgaard, T. Guhr, A.D. Jackson, J.J.M. Verbaarschot and H.A. Weidenmüller are gratefully acknowledged. Furthermore I wish to thank the organizers for the stimulating workshop.
# Photomeson production in astrophysical sources
## 1 Introduction
Ultrahigh-energy nucleons propagating through dense radiation fields lose their energy mainly through photomeson production. Radiation fields are universal (Cosmic Microwave Background Radiation (CMBR)) and are very intense at luminous astrophysical sources such as Active Galactic Nuclei (AGN) jets and Gamma Ray Bursts. The secondary products of photomeson interactions decay, and lead eventually to the emission of neutrino and gamma-ray fluxes from the source, which may be observable. This has triggered several authors (e.g. Stecker 1973 , Berezinsky & Gazizov 1993 , Mannheim & Schlickeiser 1994 ) to study photopion production in more detail. Collider measurements indicate a complicated structure of the cross section, especially in the astrophysically important lower energy range. In order to overcome this problem, either simplified approximations for the cross section, e.g. the so-called $`\mathrm{\Delta }`$–approximation (Stecker 1973 , Gaisser et al. 1995 ), or Stecker’s isobar model , have been used. Berezinsky & Gazizov used interpolated cross section measurements and implemented them into numerical codes to derive neutrino production spectra. Recently, it has been shown that in realistic photon fields photon–proton collisions may happen at low and high center–of–mass (CM) energies (Mücke et al 1999a ), making a more realistic treatment of the pion production process necessary. For example, hadronic interactions of ultrarelativistic protons in flat ambient photon spectra, e.g. in Gamma Ray Burst radiation fields and the synchrotron radiation field in $`\gamma `$-ray loud AGN, are roughly equally likely to occur in the resonance and the multiparticle production regions.
In this paper we discuss photopion production in typical GRB and AGN jet radiation fields. We discuss microphysical quantities relevant for the photohadronic production of gamma-rays, neutrinos, and neutrons (Section 2), by using our newly developed Monte Carlo event generator SOPHIA (presented also in these proceedings (Mücke et al 1999b )), and compare them with results from a widely used approximation (Section 3). As a qualitatively new effect we also discuss the photoproduction of anti-baryons (Section 4). The relevance of our results is briefly discussed in the context of neutrino and cosmic ray production (Section 5).
## 2 Relevant microphysical quantities
One of the major motivations for discussing photohadronic interactions in astrophysical sources like AGN jets and GRB is the prediction of observable neutrino fluxes from these systems. The total flux of neutrinos for a specific source model can be related to the predicted fluxes of gamma rays and cosmic rays, which in turn can be compared with current observations. Neutrino fluxes from hadronic AGN jet models are not expected to be observable as point sources by current neutrino telescopes, and therefore contribute to the extragalactic neutrino background. Hence, estimates for diffuse neutrino fluxes originating from blazar jets can be derived from measurements of the diffuse extragalactic gamma ray background (EGRB) (see e.g. Mannheim 1995 ) if the entire EGRB is produced photohadronically. Upper limits can be set if only part of the EGRB is due to unresolved blazars (see Chiang & Mukherjee 1997 , Mücke & Pohl 1998 ).
Of particular relevance is here the production of gamma rays. In photohadronic sources, the opacity for the primary gamma rays emerging from the decay of neutral pions must be large, since otherwise the efficiency for the photohadronic interactions themselves would be too low to produce observable fluxes. The interactions of primary photons (and electrons) cause electromagnetic cascades which reprocess the leptonic power to lower photon energies, and photons are eventually emitted in the range $`<100`$ GeV (Mannheim 1993 ).
The exact relation between the gamma ray and neutrino fluxes is determined by the microphysics of photomeson production, i.e., by the photon-to-neutrino total energy ratio per interaction, $`\mathcal{E}_\gamma /\mathcal{E}_\nu =\sum E_\gamma /\sum E_\nu `$. The $`\mathrm{\Delta }`$–approximation, which has been often used, gives $`\mathcal{E}_\gamma /\mathcal{E}_\nu =3`$. This ratio has been slightly modified by considering other photon producing processes, like Bethe-Heitler pair production off the energetic protons (Mannheim 1993 ).
Another way to constrain neutrino fluxes is a comparison with the cosmic ray spectrum. This is motivated by the fact that neutrino producing interactions also cause an isospin-flip of the proton into a neutron, which is not magnetically confined and can be ejected from the accelerator. In contrast, protons need to be confined in order to be accelerated, and their escape from sources involving relativistic flows, like AGN jets or GRB, is strongly affected by adiabatic losses. Neutron ejection can also be suppressed if the opacity for $`n\gamma `$ interactions exceeds unity. It is possible however to relate the $`n\gamma `$ opacity to the $`\gamma \gamma `$ opacity of very high energy photons, and one can show that sources transparent to very high energy (VHE) gamma rays are also transparent to neutrons at most energies . Then, regardless of the proton confinement, the minimum cosmic ray ejection is given by the produced neutron flux, which allows an upper limit on the possible neutrino flux (Waxman & Bahcall 1999 ). It has been pointed out by Mannheim et al. , that the complicated propagation properties of cosmic rays make it difficult to apply a model independent cosmic ray bound on the neutrino flux. The relation of the cosmic ray flux to neutrino flux is again determined by a microphysical parameter, namely the neutrino-to-neutron total energy ratio $`\mathcal{E}_\nu /\mathcal{E}_n=\sum E_\nu /\sum E_n`$, which is predicted to be $`0.19`$ in the $`\mathrm{\Delta }`$–approximation.
Other kinematical quantities of interest are the inelasticity of the proton, $`\mathrm{\Delta }E_p/E_p`$, which determines the proton energy loss time scale due to photohadronic interactions, and the average fractional neutrino energy per interaction, $`\langle E_\nu \rangle /E_p`$, which determines the maximum neutrino energy if the maximum proton energy is determined by the acceleration process. In the same way, we may also consider the fractional neutron energy per interaction.
## 3 Cosmic ray, gamma ray and neutrino production
In the typical scenario of hadronic interactions in AGN jet and GRB radiation fields, relativistic nucleons with energy $`E_p\gg ϵ`$ are assumed to interact in an isotropic radiation field in the comoving frame of the relativistic plasma flow. Because of the strong magnetic fields required to accelerate protons, photoproduced electrons and positrons can be assumed to have 100% radiation efficiency, and transform their energy completely into photons through synchrotron emission. The total and average energy of the produced neutrinos includes $`\nu _e`$ and $`\nu _\mu `$ and their antiparticles. The $`\nu _\mu /\overline{\nu }_\mu `$ contribution, possibly detectable in current or planned underwater/-ice neutrino experiments, is roughly $`2/3`$ of the total neutrino flux. We assume that protons are magnetically confined, and count only photohadronically produced neutrons as ejected cosmic rays.
### 3.1 Blazar/AGN jets
There exists a large variety of models to explain the observed $`\gamma `$-ray emission from blazars (= flat spectrum radio quasars and BL Lac objects). The leptonic models, which assume inverse Compton scattering of low energy photons up to gamma ray energies, currently dominate the thinking of the scientific community. Alternatively, photopion production has been proposed to be the origin for the observed $`\gamma `$-ray flux (Mannheim 1993 ). A clear distinction between the two models is the production of neutrinos, which is negligible in the leptonic models, but may be equally important as the photon production in the hadronic models, and has been predicted to cause observable fluxes in large underwater/-ice neutrino detectors.
Gamma ray and neutrino production in hadronic blazar models occurs through photopion production (and subsequent cascading) of relativistic protons in either external (accretion disk or the IR radiation field from a molecular torus) radiation fields or the synchrotron radiation field produced by electrons which are shock accelerated together with the protons. Here, we confine our discussion to the latter model. For simplicity, we consider TeV-blazars only, which are characterized by a low energy spectrum extending to X-ray energies. The synchrotron emission from blazar jets at low energies can be well explained by a superposition of several self-absorbed synchrotron components (e.g. Cotton et al 1980 , Shaffer & Marscher 1979 ) leading to a flat target spectrum. The local synchrotron radiation spectrum follows a power law with photon index $`\alpha =1.5`$ ($`n(ϵ)\propto ϵ^{-\alpha }`$) up to a break energy (see e.g. Mannheim 1993 ). Above the break energy $`ϵ_b\approx 10^{-4}`$eV (see e.g. Rachen & Mészáros ) it is loss dominated to become $`n(ϵ)\propto ϵ^{-2}`$. The synchrotron emission in TeV-blazars like Mkn 421 and Mkn 501 is observed to continue up to 1-10 keV. Thus, for our application we approximate the typical target photon spectrum in the jet frame (assuming a Doppler factor of D=10) by
$`n(ϵ)`$ $`\propto `$ $`ϵ^{-3/2}\text{for}10^{-5}\mathrm{eV}\le ϵ\le 10^{-4}\mathrm{eV}`$
$`n(ϵ)`$ $`\propto `$ $`ϵ^{-2}\text{for}10^{-4}\mathrm{eV}\le ϵ\le 10^3\mathrm{eV}`$ (1)
For the maximum proton energy we use $`E_{p,\mathrm{max}}=\gamma _pm_pc^2=10^{10}`$ GeV in the rest frame of the jet, corresponding roughly to the highest observed cosmic ray energies after Doppler boosting. This is also consistent in order of magnitude with the values derived by equating the acceleration time of the proton with its total loss time due to adiabatic, synchrotron and photohadronic cooling (see Rachen & Mészáros for a more detailed discussion). Due to the threshold condition for photopion production of protons with Lorentz factor $`\gamma _p`$ interacting with photons of energy $`ϵ`$
$$\gamma _p>\frac{m_\pi c^2}{2ϵ}\left(1+\frac{m_\pi }{2m_p}\right)$$
(2)
only photons from the steep part of the target radiation field interact photohadronically with protons of $`\gamma _p\approx 10^{10}`$.
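The threshold condition (2) is easily evaluated. The short sketch below uses the standard charged-pion and proton rest energies; the sample photon energies are taken from the spectral ranges of Eqs. (1) and (3):

```python
m_pi, m_p = 139.57e6, 938.27e6   # eV, charged-pion and proton rest energies

def gamma_p_min(eps):
    """Eq. (2): threshold proton Lorentz factor on a photon of energy eps [eV]."""
    return m_pi/(2.0*eps) * (1.0 + m_pi/(2.0*m_p))

# which jet-frame photons can a gamma_p = 1e10 proton reach? Invert Eq. (2):
eps_min = m_pi/(2.0e10) * (1.0 + m_pi/(2.0*m_p))
print(f"gamma_p = 1e10: eps >= {eps_min:.1e} eV")   # ~7.5e-3 eV: steep part only

for eps in (1e-4, 1e-3, 1.0):                       # sample target photon energies
    print(f"eps = {eps:.0e} eV -> gamma_p > {gamma_p_min(eps):.1e}")
```

Photons of the flat blazar component ($`ϵ\le 10^{-4}`$ eV) would require $`\gamma _p`$ well above $`E_{p,\mathrm{max}}/m_pc^2`$, which is why only the steep part contributes.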
Fig. 1 shows $`\mathcal{E}_\gamma /\mathcal{E}_\nu `$ from SOPHIA simulations for protons of energy $`E_p`$ interacting in the synchrotron field of blazar jets. The prominent resonance/threshold region dominates the interaction, independently of input proton energy. Consequently, the total fractional photon energy $`\mathcal{E}_\gamma /E_p\approx 0.1`$, independent of $`E_p`$, and an overall photon-to-neutrino ratio of $`\mathcal{E}_\gamma /\mathcal{E}_\nu \approx 1.2`$. We note that this result differs by a factor $`\sim 3`$ from the prediction of the $`\mathrm{\Delta }`$-approximation, although the steep spectrum emphasizes the threshold region with the dominant $`\mathrm{\Delta }(1232)`$ resonance. This can be understood in view of the strong contribution of non-resonant direct $`\pi ^+`$ production at threshold, and the contribution of other resonances with different isospin in the vicinity of the $`\mathrm{\Delta }(1232)`$ resonance (see Mücke et al., these proceedings ).
In contrast, kinematical quantities like inelasticity and fractional secondary particle energy are well reproduced by the $`\mathrm{\Delta }`$-approximation (see Fig. 2), since these quantities vary only slowly with energy. For example, the proton inelasticity is in general logarithmically rising with increasing nucleon input energy in the resonance region (Mücke et al 1999b ). Thus, when interactions occur near the $`\mathrm{\Delta }(1232)`$-resonance like in this case, the $`\mathrm{\Delta }`$–approximation gives a good description. The same applies for the total neutrino-to-neutron energy ratio. Here, we find $`\mathcal{E}_\nu /\mathcal{E}_n\approx 0.18`$ (see Fig. 3).
### 3.2 Gamma Ray Bursts
The cosmological Gamma Ray Burst fireball model has been very successful in explaining the observed temporal evolution of the afterglow photon spectra (see Piran 1998 and references therein). In this model a relativistically expanding fireball transforms most of the explosion energy into the kinetic energy of baryons in a relativistic blast wave with bulk Lorentz factor of 100 – 300. This energy is reconverted into radiation at shocks, produced either in collisions between different shells ejected from the central source (internal shock scenario), or through deceleration of the blast wave when it sweeps up the external medium (external shock scenario). The former process is thought to be responsible for the GRB itself, while the latter produces the afterglow. In this scenario, the observed photon radiation is explained as synchrotron radiation from the accelerated electrons. In the comoving shock frame this target radiation field for photopion production of the relativistic protons may be approximated by a broken power law:
$`n(ϵ)`$ $`\propto `$ $`ϵ^{-2/3}\text{for}10^{-3}\mathrm{eV}\le ϵ\le 10^3\mathrm{eV}`$
$`n(ϵ)`$ $`\propto `$ $`ϵ^{-2}\text{for}10^3\mathrm{eV}\le ϵ\le 10^5\mathrm{eV}`$ (3)
Waxman and Vietri suggested that if a significant fraction of the observed GRB power is transformed into ultrahigh energy cosmic rays (UHECRs), GRB may well be the source for all the observed UHECRs. In fact, it has been shown that comoving proton Lorentz factors of up to $`10^9`$ can be reached in GRB shells, which are boosted in the observer's frame to $`3\times 10^{20}`$ eV (,). This scenario also gives rise to fluxes of very high energy neutrinos ($`E_\nu >100`$ TeV) correlated with gamma ray bursts, which could be detected in a km<sup>3</sup> neutrino observatory because the exact temporal and directional information reduces the background to virtually zero (Waxman & Bahcall 1997 ). The neutrino emission, however, is suppressed at the highest energies because of the adiabatic and synchrotron losses of pions and muons prior to their decay (Waxman & Bahcall 1997 , Rachen & Mészáros 1998 , Waxman & Bahcall 1999 ).
In the highly energetic GRB photon field mostly photons from the flat part of the ambient radiation field interact photohadronically. For incident protons with $`\gamma _p>10^7`$ interactions predominantly occur in the multiparticle region of the cross section, while for $`\gamma _p\lesssim 10^7`$ the resonance/threshold region determines the secondary particle production. This leads to an increase of photon and neutrino production by roughly a factor of 2 with $`\mathcal{E}_\gamma \approx \mathcal{E}_\nu `$ for protons with $`\gamma _p>10^7`$ in comparison with protons with $`\gamma _p\approx 10^5`$ (see Fig. 4). The photon-to-neutrino total energy ratio seems to be fairly robust, $`\mathcal{E}_\gamma /\mathcal{E}_\nu \approx 1`$ at all proton energies, except for a slight deviation at $`\gamma _p\approx 3\times 10^5`$ where photons from the flat part of the target photon spectrum interact mainly via the $`\mathrm{\Delta }(1232)`$-resonance. Also here, this result is about a factor of 3 different from the $`\mathrm{\Delta }`$–approximation.
The average number of neutrinos produced per interaction can increase with CM energy by up to an order of magnitude for the proton energy range relevant in GRB (see Mücke et al 1999b ). Because of the high multiplicity of the secondaries, the mean energy of the neutrinos produced in the multipion region (i.e. for $`\gamma _p>10^7`$) is $`\sim 1\%`$ of the proton input energy (see Fig. 5). At lower energies it reaches $`\langle E_\nu \rangle /E_p\approx 0.04`$, approximately in agreement with the $`\mathrm{\Delta }`$–approximation. Therefore, the ejection of ultra-high energy neutrinos ($`E_\nu >10^{19}`$ eV), as recently proposed by Vietri (but note also ) requires observer frame proton energies of $`10^{21}`$ eV. Such high proton energies are not expected from shock acceleration scenarios in GRB shells. In the general case, however, the synchrotron losses of charged pions and muons in the highly magnetic ($`B\sim 10^3`$–$`10^6`$ G) GRB environment and their adiabatic deceleration determine the high energy end of the observable neutrino flux from GRB (see Rachen & Mészáros 1998 ).
If protons are magnetically confined at the source, the proton–neutron conversion probability determines the production of cosmic rays in photohadronic sources. At high energies it decreases from $`\approx 0.5`$ to $`\approx 0.3`$ (see Mücke et al, these proceedings ), and the average fractional energy of neutrons decreases to $`<0.5`$ of the proton energy. Together with the increasing power going into secondary particles, this leads to a neutrino-to-neutron ratio $`\mathcal{E}_\nu /\mathcal{E}_n\approx 1`$, about a factor of $`\sim 5`$ above the $`\mathrm{\Delta }`$–approximation. This increase of the neutrino power relative to the cosmic ray power of GRB (if protons are confined) is, however, masked by the secondary particle losses (see Rachen & Mészáros 1998 , Waxman & Bahcall 1999 ). At lower energies the neutrino-to-neutron ratio approaches the value expected in the $`\mathrm{\Delta }`$-approximation (see Fig. 6).
## 4 High energy anti-baryon production
A qualitatively new effect which can be investigated with SOPHIA is the production of secondary baryon/anti-baryon pairs in high energy photohadronic interactions. This requires a detailed simulation of QCD string hadronization, which is implemented in SOPHIA. The theoretical threshold of this process is $`\sqrt{s}=3m_pc^2`$, corresponding to a photon energy in the proton rest frame $`ϵ^{}=4m_pc^2`$. For $`ϵ^{}>2`$ TeV, about $`40\%`$ of all events contain antinucleons. This raises the interesting question about the contribution of antiprotons to the cosmic ray flux from extragalactic photopion production sources.
Due to the threshold condition, this process can only affect the antimatter to matter ratio at extremely high energy. Assuming the minimal cosmic ray ejection hypothesis (only neutral baryons can leave the source and become cosmic rays), the relevant quantities are the anti-neutron to neutron ratio $`\mathcal{E}_{\overline{n}}/\mathcal{E}_n`$ and the corresponding multiplicity ratio $`N_{\overline{n}}/N_n`$.
As noted above, cosmic rays with $`\gamma _p>10^7`$ accelerated in GRBs tend to interact with photons from the flat part of the ambient radiation fields, in the multiparticle production regime. While the total energy dissipated into photons, neutrinos and neutrons is roughly the same ($`\mathcal{E}_\gamma \approx \mathcal{E}_\nu \approx \mathcal{E}_n\approx 0.2E_p`$) in this high energy part of the cross section, the antineutron production is only 1/40 of the neutron production. The neutrons carry on average 50% of the input energy (see Fig. 5) with a neutron multiplicity of roughly 0.4. Antineutrons have mean energies of $`\sim 0.1E_p`$ with a multiplicity of approximately 0.05. Antineutrons and neutrons of the same energy are produced by protons of different energy. For example, we need an input proton energy of $`10^{10}`$GeV for the production of an antineutron of energy $`10^9`$GeV while for a neutron of the same energy the proton input energy must be $`2\times 10^9`$GeV. One has to weight the ratio of the resulting neutron and antineutron multiplicities, and this is determined by the input proton spectrum. For this purpose we use an $`E^{-2}`$ differential spectrum as a typical equilibrium particle spectrum for GRB. For photon–proton interactions at high CM energies, as is the case for GRBs, we therefore expect antineutron-to-neutron multiplicities of the order 0.01.
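This weighting can be made explicit: for a proton spectrum $`dN/dE_p\propto E_p^{-s}`$, secondaries of fixed energy $`E`$ produced with multiplicity $`m`$ at mean fractional energy $`x`$ come from protons at $`E_p=E/x`$, so their rate scales as $`mx^{s-1}`$. The sketch below uses the rounded multiplicities and fractional energies quoted above for the $`E^{-2}`$ equilibrium spectrum:

```python
def flux_ratio(m1, x1, m2, x2, s):
    """Secondary flux ratio at fixed energy E for dN/dE_p ~ E_p^-s:
    Q(E) ~ m * (E/x)^-s * (1/x), so Q1/Q2 = (m1/m2) * (x1/x2)^(s-1)."""
    return (m1/m2) * (x1/x2)**(s - 1.0)

# antineutrons (m ~ 0.05, <E> ~ 0.1 E_p) vs neutrons (m ~ 0.4, <E> ~ 0.5 E_p)
r = flux_ratio(0.05, 0.1, 0.4, 0.5, s=2.0)
print(f"nbar/n ~ {r:.3f}")   # ~0.025, of the order of the 0.01 quoted above
```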
Since photon–proton interactions in TeV-blazar ambient photon fields take place predominantly at threshold, these objects are not expected to be strong sources of antinucleons. SOPHIA simulations show that the total energy for antineutron production does not exceed $`10^{-4}`$ of the proton input energy with the average antineutron carrying roughly 1/10 of $`E_p`$. Neutrons, on the other hand, take up about 20% of the input energy with a mean particle energy of $`0.8E_p`$, as expected from the $`\mathrm{\Delta }`$–approximation. Weighting again with the typical equilibrium differential particle spectrum expected in blazar jet environments, which follows an $`E^{-1}`$ power law, we find that the $`\overline{n}/n`$–ratio does not exceed $`10^{-3}`$.
The $`\overline{n}/n`$–ratios resulting from SOPHIA in GRB and TeV-blazar radiation fields as a function of proton energy are shown in Fig. 7. After $`\beta `$-decay, the ejected antineutrons and neutrons contribute to the overall $`\overline{p}/p`$ flux. The $`\overline{n}/n`$ ratio derived here has to be regarded as an upper limit to the observable $`\overline{p}/p`$ ratio, since direct proton injection may contribute to the cosmic ray flux without increasing the anti-proton population.
## 5 Discussion and outlook
Our simulation results obtained with the new photomeson production event generator SOPHIA demonstrate that the widely used $`\mathrm{\Delta }`$-approximation for photohadronic interactions has only a limited applicability to many important astrophysical applications, namely the secondary gamma ray, neutrino and neutron production in radiation fields of AGN jets and GRB. For AGN jets, where interactions at low center–of–mass energy dominate, the major kinematical quantities, like proton inelasticity, average fractional neutrino energy per interaction, and the neutrino-to-neutron energy ratio are found to be well predicted by the $`\mathrm{\Delta }`$-approximation. For interactions of ultra–high energy cosmic rays in GRB shells, however, we find deviations up to an order of magnitude for some of these quantities.
As a robust result for both kinds of sources we find that the average total energies channeled into neutrinos, $`\mathcal{E}_\nu `$, and gamma-rays, $`\mathcal{E}_\gamma `$, are approximately equal. By comparison, the $`\mathrm{\Delta }`$–approximation predicts $`\mathcal{E}_\gamma /\mathcal{E}_\nu \approx 3`$. This relation has been widely used to normalize expected neutrino fluxes from AGN jets to their photon flux. As a consequence of this deviation from the $`\mathrm{\Delta }`$–approximation found in our SOPHIA results, the expected neutrino fluxes from such AGN models would increase by a factor of $`\sim 3`$ assuming that the observed gamma ray emission is entirely of photohadronic origin. The lower gamma-ray-to-neutrino ratio seen in our SOPHIA results, together with the neutrino-to-neutron ratio in agreement with the $`\mathrm{\Delta }`$–approximation, implies also that AGN neutrino models, which were initially designed to comply with the limitation set by the cosmic ray data in a straight line propagation scenario (Mannheim, 1995 , Model A), can no longer produce a large fraction of the 100 MeV gamma ray background as initially assumed. This may comply with the recent finding of Chiang & Mukherjee (1997 ) that only part of the measured EGRB may be due to unresolved blazars. However, it has been pointed out by Mannheim et al. that an increased and so far unexpected contribution from sources with maximum cosmic ray energies below $`10^{19}`$ eV can evade the problem without being in conflict with the cosmic ray data. Moreover, it is possible that the optical depth of blazars for ultra-high energy neutrons is larger than unity, which allows a higher diffuse neutrino and gamma ray flux for a given cosmic ray flux.
For the derivation of the upper bound of the neutrino emission, the neutrino-to-neutron energy ratio reflects the microphysics of the photohadronic interactions. The value $`\mathcal{E}_\nu /\mathcal{E}_n\approx 0.19`$, expected from the $`\mathrm{\Delta }`$-approximation and used in Waxman and Bahcall , was confirmed for AGN jets in our simulations. For GRB, however, we find a ratio of about $`1`$. This implies that the cosmic ray upper bound for GRB neutrinos has to be increased by a factor of $`\sim 5`$ at high energies. In practice this theoretical bound is unlikely to be reached by GRB neutrinos, since their emission is suppressed at high energies by secondary particle (pion and muon) cooling in the hadronic cascade . Models which try to evade such losses by shifting the acceleration region into the outer shock and the afterglow of the GRB, as suggested by Vietri (; note also ), can be shown not to be efficient enough to reach such fluxes for reasonable energetics .
The average neutrino energy due to photopion production in GRB spectra at very high proton energies is up to an order of magnitude below the value expected from the $`\mathrm{\Delta }`$-approximation. This has severe consequences for the prospects of GRB–correlated neutrino events above $`10^{19}`$ eV. The fact that the mean proton energy loss rises with increasing proton energy up to $`\mathrm{\Delta }E/E\approx 0.5`$ (compared to the $`\mathrm{\Delta }`$–approximation value $`\mathrm{\Delta }E/E=0.2`$) may add to the problem, since it leads to lower proton energies in photohadronically limited acceleration scenarios.
A qualitatively new feature which can be explored with SOPHIA is the photoproduction of antibaryons . Our prediction of the maximal antiproton contribution from GRB of about $`2\%`$, reached for energies above $`5\times 10^{17}`$ eV, is several orders of magnitude above the background expected at this energy from cosmic ray-nucleon collisions in the galactic disk. The latter is measured at energies $`\sim 10`$ GeV as approximately $`10^{-3}`$, and expected to decrease as $`E_p^{-[0.3-0.6]}`$ following the leaky box model. Since the expectation of a significant increase of the $`\overline{p}`$ contribution at high energies is unique for GRB, this could provide an independent test whether GRB are indeed the dominant sources of cosmic rays. Unfortunately, there is presently no imaginable way to distinguish between nucleons and anti-nucleons at such high energies, where cosmic rays can only be measured in air shower experiments. Our result may therefore be regarded as of rather academic interest, or may be kept in mind for presently not anticipated, future detection techniques.
In conclusion, we have demonstrated that the application of SOPHIA to astrophysical problems involving the interaction of energetic cosmic rays in photon backgrounds can (a) improve the accuracy of the predictions from such models, and (b) open the possibility to explore particle physics effects so far neglected in astrophysics.
## Acknowledgments
The work of AM and RJP is supported by the Australian Research Council. RE and TS acknowledge the support by the U.S. Department of Energy under grant number DE FG02 01 ER 40626. TS is also supported in part by NASA grant NAG5-7009. The contribution of JPR was supported by NASA NAG5-2857 and by the EU-TMR network “Astro-Plasma Physics” under contract number ERBFMRX-CT98-0168.
# Observation of breathers in Josephson ladders
## Abstract
We report on the observation of spatially-localized excitations in a ladder of small Josephson junctions. The excitations are whirling states which persist under a spatially-homogeneous force due to the bias current. These states of the ladder are visualized using low-temperature scanning laser microscopy. We also compute breather solutions with high accuracy in corresponding model equations. The stability analysis of these solutions is used to interpret the measured patterns in the $`IV`$ characteristics.
The present decade has been marked by intense theoretical research on dynamical localization phenomena in spatially discrete systems, namely on discrete breathers (DB). These exact solutions of the underlying equations of motion are characterized by periodicity in time and localization in space. Away from the DB center the system approaches a stable (typically static) equilibrium. (For reviews see ,). These solutions are robust to changes of the equations of motion, exist in translationally invariant systems and any lattice dimension. Discrete breathers have been discussed in connection with a variety of physical systems such as large molecules, molecular crystals , spin lattices , to name just a few.
For a localized excitation such as a DB, the excitation of plane waves which might carry the energy away from the DB does not occur due to the spatial discreteness of the system. The discreteness provides a cutoff for the wavelength of plane waves and thus makes it possible to avoid resonances of all temporal DB harmonics with the plane waves. The nonlinearity of the equations of motion is needed to allow for the tuning of the DB frequency .
Though the DB concept was initially developed for conservative systems, it can be easily extended to dissipative systems . There discrete breathers become time-periodic spatially localized attractors, competing with other (perhaps nonlocal) attractors in phase space. The characteristic property of DBs in dissipative systems is that these localized excitations are predicted to persist under the influence of a spatially homogeneous driving force. This is due to the fact, that the driving force compensates the dissipative losses of the DB.
So far research in this field has been predominantly theoretical. Identifying and analyzing experimental systems for the direct observation of DBs thus becomes a timely and challenging problem. Experiments on localization of light propagating in weakly coupled optical waveguides , low-dimensional crystals and anti-ferromagnetic materials have been recently reported.
In this work we realize the theoretical proposal to observe DB-like localized excitations in arrays of coupled Josephson junctions. A Josephson junction is formed between two superconducting islands. Each island is characterized by a macroscopic wave function $`\mathrm{\Psi }\propto \mathrm{e}^{\mathrm{i}\theta }`$ of the superconducting state. The dynamics of the junction is described by the time evolution of the gauge-invariant phase difference $`\phi =\theta _2-\theta _1-\frac{2\pi }{\mathrm{\Phi }_0}\int 𝐀𝑑𝐬`$ between adjacent islands. Here $`\mathrm{\Phi }_0`$ is the magnetic flux quantum and $`𝐀`$ is the vector potential of the external magnetic field (integration goes from one island to the other one). In the following we consider zero magnetic fields $`𝐀=\mathrm{𝟎}`$. The mechanical analogue of a biased Josephson junction is a damped pendulum driven by a constant torque. There are two general states in this system: the first state corresponds to a stable equilibrium, and the second one corresponds to a whirling pendulum state. When treated for a chain of coupled pendula, the DB corresponds to the whirling state of a few adjacent pendula with all other pendula performing oscillations around their stable equilibria. In an array of Josephson junctions the nature of the coupling between the junctions is inductive. A localized excitation in such a system corresponds to a state where one (or several) junctions are in the whirling (resistive) state, with all other junctions performing small forced oscillations around their stable equilibria. According to theoretical predictions , the amplitude of these oscillations should decrease exponentially with increasing distance from the center of the excitation.
We have conducted experiments with ladders consisting of Nb/Al-AlO<sub>x</sub>/Nb underdamped Josephson tunnel junctions . We investigated annular ladders (closed in a ring) as well as straight ladders with open boundaries. The sketch of an annular ladder is given in Fig. 1. Each cell contains 4 small Josephson junctions. The size of the hole between the superconducting electrodes which form the cell is about $`3\times 3\mu `$m<sup>2</sup>. Here we define vertical junctions ($`\mathrm{𝐽𝐽}_V`$) as those in the direction of the external bias current, and horizontal junctions ($`\mathrm{𝐽𝐽}_H`$) as those transverse to the bias. Because of fabrication reasons we made the superconducting electrodes quite broad so that the distance between two neighboring vertical junctions is 30$`\mu `$m as can be seen in Fig. 3. The ladder voltage is read across the vertical junctions. According to the Josephson relation, a junction in a whirling state generates a dc voltage $`V=\frac{1}{2\pi }\mathrm{\Phi }_0\frac{d\phi }{dt}`$, where $`\mathrm{}`$ means the time average. In order to force junctions into the whirling state we used two different types of bias. The current $`I_\mathrm{B}`$ was uniformly injected at every node via thin-film resistors. Another current $`I_{\mathrm{local}}`$ was applied locally across just one vertical junction.
We studied ladders with different strength of horizontal and vertical Josephson coupling determined by the junction areas. The ratio of the junction areas is called the anisotropy factor and is expressed in terms of the junction critical currents $`\eta =I_{\mathrm{cH}}/I_{\mathrm{cV}}`$. If this factor is equal to zero, vertical junctions will be decoupled and can operate independently one from another. Measurements have been performed at $`4.2`$K. The number of cells $`N`$ in different ladders varied from 10 to 30. The discreteness of the ladder is expressed in terms of the parameter $`\beta _\mathrm{L}=2\pi LI_{\mathrm{cV}}/\mathrm{\Phi }_0`$, where $`L`$ is the self-inductance of the elementary cell of the ladder. The damping coefficient $`\alpha =\sqrt{\mathrm{\Phi }_0/(2\pi I_\mathrm{c}CR_\mathrm{N}^2)}`$ is the same for all junctions as their capacitance $`C`$ and resistance $`R_\mathrm{N}`$ scale with the area and $`C_\mathrm{H}/C_\mathrm{V}=R_{\mathrm{NV}}/R_{\mathrm{NH}}=\eta `$. The damping $`\alpha `$ in the experiment can be controlled by temperature and its typical values are between $`0.1`$ and $`0.02`$.
We have measured the dc voltage across various vertical junctions as a function of the currents $`I_{\mathrm{local}}`$ and $`I_\mathrm{B}`$. In order to generate a localized rotating state in a ladder we started by applying the local current $`I_{\mathrm{local}}>2I_{\mathrm{cH}}+I_{\mathrm{cV}}`$. This switches one vertical and the nearest horizontal junctions into the resistive state. After that $`I_{\mathrm{local}}`$ was reduced and, simultaneously, the homogeneous bias $`I_\mathrm{B}`$ was tuned up. In the final state we kept the bias $`I_\mathrm{B}`$ constant and reduced $`I_{\mathrm{local}}`$ to zero. Under these conditions, with a spatially-homogeneous bias injection, we observed a spatially-localized rotating state with non-zero dc voltage drops on just one or a few vertical junctions.
Various measured states of the annular ladder in the current-voltage $`I_\mathrm{B}V`$ plane with $`I_{\mathrm{local}}=0`$ are presented in Fig. 2. The voltage $`V`$ is recorded locally on the same vertical junction which was initially excited by the local current injection. The vertical line on the left side corresponds to the superconducting (static) state. The rightmost (also the bottom) curve accounts for the spatially-homogeneous whirling state (all vertical junctions rotate synchronously). Its nonlinear $`I_\mathrm{B}(V)`$ shape is caused by a strong increase of the normal tunneling current at a voltage of about $`2.5`$mV corresponding to the superconducting energy gap. The series of branches represent various localized states. These states differ from each other by the number of rotating vertical junctions.
In order to visualize various rotating states in our ladders we used the method of low temperature scanning laser microscopy . It is based on the mapping of a sample voltage response as a function of the position of a focused low-power laser beam on its surface. The laser beam locally heats the sample and, therefore, introduces an additional dissipation in the area of few micrometers in diameter. Such a dissipative spot is scanned over the sample and the voltage variation at a given bias current is recorded as a function of the beam coordinate. The resistive junctions of the ladder contribute to the voltage response while the junctions in the superconducting state show no response. The power of the laser beam is modulated at a frequency of several kHz and the sample voltage response is measured using a lock-in technique.
Several examples of the ladder response are shown in Fig. 3 as 2D gray scale maps. The spatially-homogeneous whirling state is shown in Fig. 3(A). Here all vertical junctions are rotating but the horizontal ones are not. Fig. 3(B) corresponds to the uppermost branch of Fig. 2. We observe a localized whirling state expected for a DB, namely a rotobreather . In this case 2 vertical junctions and 4 horizontal junctions of the DB are rotating, with all others remaining in the superconducting state. The same state is shown on an enlarged scale in Fig. 3(C). Fig. 3(D) illustrates another rotobreather found for the next lower branch of Fig. 2 at which 4 vertical junctions are in the resistive state. The local current at the beginning of each experiment is passing through the vertical junction being the top one on each map. In Fig. 3(E), which accounts for one of the lowest branches, we find an even broader localized state. Simultaneously, on the opposite side of the ring we observe another DB excited spontaneously (without any local current). An interesting fact is that in experiments with open boundary ladders (not closed in a ring) we also detected DBs with even or odd numbers of whirling vertical junctions.
Various states shown in Fig. 3 account for different branches in the $`I_\mathrm{B}V`$ plane in Fig. 2. Each resistive configuration is found to be stable along its particular branch. On a given branch the damping of the junctions in the rotating state is compensated by the driving force of the bias current $`I_\mathrm{B}`$. The transitions between the branches are discontinuous in voltage. In Fig. 2, we see that all branches of localized states lose their stability at a voltage of about $`1.4`$mV. Furthermore, as indicated in the inset of Fig. 2, a peculiar switching occurs: upon lowering the bias current $`I_\mathrm{B}`$ the system switches to a larger voltage. According to our laser microscope observations, the lower the branch in Fig. 2, the larger the number $`M`$ of resistive vertical junctions. The slope of these branches is $`dV/dI_\mathrm{B}\approx MR_{\mathrm{NV}}/(M+\eta )`$, thus the branches become very close to each other for large $`M`$. The fact that the voltage at the onset of instability is independent of the size of the breather indicates that the instability is essentially local in space and occurs at the border between the resistive and nonresistive junctions.
The occurrence of DBs is inherent to our system. We have also found various localized states to arise without any local current. Namely, when biasing the ladder by the homogeneous current $`I_\mathrm{B}`$ slightly below $`NI_{\mathrm{cV}}`$, we sometimes observed the system switching to a spatially-inhomogeneous state, similar to that shown in Fig. 3(E).
To interpret the experimental observations, we analyze the equations of motion for our ladders (see for details). Denote by $`\phi _l^v,\phi _l^h,\stackrel{~}{\phi }_l^h`$ the phase differences across the $`l`$th vertical junction and its right upper and lower horizontal neighbors. Using $`\nabla \phi _l=\phi _{l+1}-\phi _l`$ and $`\mathrm{\Delta }\phi _l=\phi _{l+1}+\phi _{l-1}-2\phi _l`$, the Josephson equations yield
$`\ddot{\phi }_l^v+\alpha \dot{\phi }_l^v+\mathrm{sin}\phi _l^v=\gamma +{\displaystyle \frac{1}{\beta _\mathrm{L}}}(\mathrm{\Delta }\phi _l^v+\nabla \stackrel{~}{\phi }_{l-1}^h-\nabla \phi _{l-1}^h)`$ (1)
$`\ddot{\phi }_l^h+\alpha \dot{\phi }_l^h+\mathrm{sin}\phi _l^h=-{\displaystyle \frac{1}{\eta \beta _\mathrm{L}}}(\phi _l^h-\stackrel{~}{\phi }_l^h+\nabla \phi _l^v)`$ (2)
$`\ddot{\stackrel{~}{\phi }}_l^h+\alpha \dot{\stackrel{~}{\phi }}_l^h+\mathrm{sin}\stackrel{~}{\phi }_l^h=+{\displaystyle \frac{1}{\eta \beta _\mathrm{L}}}(\phi _l^h-\stackrel{~}{\phi }_l^h+\nabla \phi _l^v)`$ (3)
Here $`\gamma =I_\mathrm{B}/(NI_{\mathrm{cV}})`$. First, we compute the dispersion relation for Josephson plasmons $`\phi \propto \mathrm{e}^{\mathrm{i}(ql-\omega t)}`$ at $`\alpha =0`$ in the ground state (no resistive junctions). We obtain three branches: one degenerate with $`\omega =1`$ (horizontal junctions excited in phase), the second one below $`\omega =1`$ with weak dispersion (mainly vertical junctions excited) and finally the third branch with the strongest dispersion above the first two branches (mainly horizontal junctions excited out of phase), cf. the inset in Fig. 4. The region of experimentally observed breather stability is also shown. Note that for DBs with symmetry between the upper and lower horizontal junctions the voltage drop on the horizontal junctions is half the drop across the vertical ones. This causes the characteristic frequency of the DB to be two times smaller than the value expected from the measured voltage drop on the vertical junctions.
In order to compare experimental results of Fig. 2 to the model given by Eqs. (1)-(3) we have integrated the latter equations numerically. We also find localized breather solutions, in particular solutions similar to the ones reported in previous numerical studies . These solutions are generated using initial conditions where the M vertical junctions of the resistive cluster (cf. Fig. 3) have $`\phi =0`$ and $`\dot{\phi }=2V`$ and the horizontal junctions adjacent to the vertical resistive cluster have $`\phi =0`$ and $`\dot{\phi }=\pm V`$, while all other phase space variables are set to zero. The obtained current-voltage characteristics are shown in Fig. 4. Note that the superconducting gap structure and the nonlinearity of slopes are not reproduced in the simulations, as we use a voltage-independent dissipation constant $`\alpha `$ in (1)-(3). We find several instability windows of DB solutions, separating stable parts of the current-voltage characteristics.
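A minimal integration sketch of Eqs. (1)-(3) for an annular ladder is given below. The parameter values and the $`\pm V`$ pattern of the seed are illustrative assumptions, and the sign conventions follow the equations as written above; whether the run locks onto a rotobreather or decays to one of the homogeneous states depends on $`(\alpha ,\beta _\mathrm{L},\eta ,\gamma )`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, alpha, beta_L, eta, gamma = 10, 0.1, 3.0, 0.5, 0.5   # illustrative parameters

def rhs(t, y):
    """Eqs. (1)-(3) with periodic (annular) boundary conditions."""
    pv, ph, pt, vv, vh, vt = y.reshape(6, N)
    grad  = lambda a: np.roll(a, -1) - a                 # nabla: a_{l+1} - a_l
    lap   = lambda a: np.roll(a, -1) + np.roll(a, 1) - 2*a
    shift = lambda a: np.roll(a, 1)                      # index l -> l-1
    av = gamma - alpha*vv - np.sin(pv) \
         + (lap(pv) + shift(grad(pt)) - shift(grad(ph))) / beta_L
    ah = -alpha*vh - np.sin(ph) - (ph - pt + grad(pv)) / (eta*beta_L)
    at = -alpha*vt - np.sin(pt) + (ph - pt + grad(pv)) / (eta*beta_L)
    return np.concatenate([vv, vh, vt, av, ah, at])

# breather-like seed: one whirling vertical junction (l = 0) at 2V and its
# four horizontal neighbors at +-V, everything else at rest
V, y0 = 1.5, np.zeros(6*N)
y0[3*N] = 2*V                                 # d/dt phi_0^v
y0[4*N], y0[4*N + N-1] = -V,  V               # top horizontals of cells 0, N-1
y0[5*N], y0[5*N + N-1] =  V, -V               # bottom horizontals of cells 0, N-1
sol = solve_ivp(rhs, (0.0, 400.0), y0, max_step=0.05)
i0 = np.searchsorted(sol.t, 200.0)            # average over the second half
vdc = (sol.y[:N, -1] - sol.y[:N, i0]) / (sol.t[-1] - sol.t[i0])
print(np.round(vdc, 2))                       # dc voltages: localized for a DB
```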
In addition to direct numerical calculation of $`I_\mathrm{B}V`$ curves, we have also computed numerically exact breather solutions of (1)-(3) by using a generalized Newton map . We have studied the linear stability of the obtained breather by solving the associated eigenvalue problem. The spatial profile of the eigenmode which drives the instability (associated with the edges of the instability windows in Fig.4) is localized on the breather, more precisely on the edges of the resistive domain. This is in accord with the experimental observation (Fig.3) where several independent breathers can be excited in the system.
The DB states turn out to be either invariant under the transformation $`\phi _l^h\leftrightarrow -\stackrel{~}{\phi }_l^h`$ or not. Both such solutions have been obtained numerically. To understand this, we consider the equations of motion (1)-(3) in the limit $`\eta \to 0`$ and look for time-periodic localized solutions. In this limit the brackets on the right hand side of (2) and (3) vanish, and vertical junctions decouple from each other. Let us then choose one vertical junction with $`l=0`$ to be in a resistive state and all the others to be in the superconducting state. To satisfy periodicity in the horizontal junction dynamics we arrive at the condition
$`\phi _0^h={\displaystyle \frac{1}{k}}\phi _0^v,\stackrel{~}{\phi }_0^h=-{\displaystyle \frac{k-1}{k}}\phi _0^v\mathrm{or}`$ (4)
$`\phi _0^h={\displaystyle \frac{k-1}{k}}\phi _0^v,\stackrel{~}{\phi }_0^h=-{\displaystyle \frac{1}{k}}\phi _0^v`$ (5)
and a similar set of choices for $`\phi _{-1}^h,\stackrel{~}{\phi }_{-1}^h`$, with all other horizontal phase differences set to zero. Here $`k`$ is an arbitrary positive integer. Continuation to nonzero $`\eta `$ values should be possible . The symmetric DBs in Fig.3 correspond to $`k=2`$. The mentioned asymmetric DBs correspond to $`k=1`$. The current-voltage characteristics for asymmetric $`k=1`$ DBs show a different behavior from that discussed above. These DBs are stable down to very small current values, and simply disappear upon further lowering of the current, so that the system switches from a state with finite voltage drop to a pure superconducting state with zero voltage drop.
The observed DB states are clearly different from the well known row switching effect in 2D Josephson junction arrays. The DBs demonstrate localization transverse to the bias current (driving force), whereas the switched states of non-interacting junction rows are localized along the current. At the same time, DB states inherent to Josephson ladders are closely linked to the recently discovered meandering effect in 2D arrays .
In summary, we have experimentally detected various types of rotobreathers in Josephson ladders and visualized them with the help of laser microscopy . Our experiments show that DBs in Josephson ladders may occupy several lattice sites and that the number of occupied sites may increase at specific instability points. The possibility of exciting DBs spontaneously, without using any local force, demonstrates their inherent character. The observed DBs are stable in a wide frequency range. Numerical calculations confirm the reported interpretation and allow for a detailed study of the observed instabilities.
# On the babylonian method of extracting root squares
## 1 Introduction
The oldest babylonian mathematical texts known to us date from the period 1900–1600 B.C. It is well known, for example, that this people knew how to extract the square root of any positive number. Despite the fact that there were no general statements, rules or justified procedures in their mathematics, it seems reasonable to suppose that their probable line of thought was the following:
1. The square root of, say, 17 is a number whose square is 17.
2. The square root of 17 is approximately (as a first approximation) 4. Let us call this number $`r_1`$.
3. The number $`r_1=4`$ is not the square root of 17; but if we multiply it by $`\frac{17}{4}`$, we have 17 as a result. In other words, both numbers, 4 and $`\frac{17}{4}`$, are good approximations of the square root of 17.
4. The arithmetic mean of 4 and $`\frac{17}{4}`$ should be a better approximation for the square root of 17. This arithmetic mean, which is a second approximation, is $`4\frac{1}{8}`$. We refer to this number as $`r_2`$.
5. The number $`r_2=4\frac{1}{8}`$ is not the square root of 17; but if we multiply it by $`\frac{17}{r_2}`$, we get 17 as a result. In other words, both numbers, $`r_2`$ and $`\frac{17}{r_2}`$, are good approximations of the square root of 17.
6. The arithmetic mean of $`r_2`$ and $`\frac{17}{r_2}`$ should be $`r_3`$, and so on.
It seems quite obvious that this procedure is a rudiment of a numerical method. If we consider the case where the number of approximations goes to infinity, we have the limit $`L`$, which is the square root of 17, satisfying the following:
$$\frac{L+\frac{17}{L}}{2}=L,$$
(1)
which gives the equation $`L^2=17`$.
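This procedure is easy to mechanize. The following minimal sketch mirrors steps 1-6 above with exact fractions; the number 17, the first guess 4, and the step count are merely illustrative choices.

```python
# Babylonian square-root iteration with exact rational arithmetic.
from fractions import Fraction

def babylonian_sqrt(n, first_guess, steps=5):
    r = Fraction(first_guess)
    for _ in range(steps):
        r = (r + Fraction(n) / r) / 2    # arithmetic mean of r and n/r
    return r

print(babylonian_sqrt(17, 4, steps=1))   # 33/8, i.e. 4 1/8: the r_2 above
print(float(babylonian_sqrt(17, 4)))     # ~ 4.1231056, close to sqrt(17)
```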
## 2 Newton’s Method
Now, consider Newton’s method of approximating the zeros of a given function $`f(x)`$:
$$x_{n+1}=x_n-\frac{f(x_n)}{f^{\prime }(x_n)},$$
(2)
where $`f^{\prime }(x)`$ is the first derivative of $`f(x)`$ with respect to $`x`$, and $`x_1`$ is a first approximation to a root of the equation $`f(x)=0`$.
In the particular case
$$f(x)=x^2-17,$$
(3)
where one of the roots is $`\sqrt{17}`$, we have the following:
$$x_{n+1}=\frac{x_n+\frac{17}{x_n}}{2},$$
(4)
which is exactly the same method used by the Babylonians. In other words, the Babylonians used a particular case of Newton’s method, although they were not aware of it.
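A compact sketch of the general scheme (2), specialized to $`f(x)=x^2-17`$, shows that it indeed reproduces the iteration (4); the starting guess and the step count are illustrative.

```python
# Newton's method (2) for a generic f, applied to f(x) = x**2 - 17.
def newton(f, fprime, x1, steps=6):
    x = x1
    for _ in range(steps):
        x = x - f(x) / fprime(x)         # equation (2)
    return x

print(newton(lambda x: x * x - 17.0, lambda x: 2.0 * x, 4.0))  # ~ 4.1231056
```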
## 3 Speculating
Now, let us speculate a little. Was it possible, for a Babylonian mathematician, to use very simple arguments (similar to those presented in the first Section of this paper) in order to extract the cube root of a positive number? We believe that the answer is yes.
The cube root of 17 is one of the zeros of
$$f(x)=x^3-17.$$
(5)
According to Newton’s method given by equation (2), the approximations are obtained as follows:
$$x_{n+1}=\frac{x_n+x_n+\frac{17}{x_n^2}}{3},$$
(6)
where $`x_1`$ is the first approximation. If we were Babylonian mathematicians we could make some analogy with the method presented in the first Section, as follows:
1. The cube root of 17 is a number whose cube is 17.
2. The cube root of 17 is approximately (as a first approximation) 2. Let us call this number $`r_1`$.
3. The number $`r_1=2`$ is not the cube root of 17; but if we perform the product $`2\times 2\times \frac{17}{2^2}`$, we have 17 as a result. In other words, the numbers 2 and $`\frac{17}{2^2}`$ are good approximations of the cube root of 17.
4. The arithmetic mean of 2, 2, and $`\frac{17}{2^2}`$ should be a better approximation for the cube root of 17. This arithmetic mean, which is a second approximation, is $`2\frac{3}{4}`$. We call this number $`r_2`$.
5. The number $`r_2=2\frac{3}{4}`$ is not the cube root of 17; but if we calculate the product $`r_2\times r_2\times \frac{17}{r_2^2}`$, we get 17 as a result. In other words, the numbers $`r_2`$ and $`\frac{17}{r_2^2}`$ are good approximations of the cube root of 17.
6. The arithmetic mean of $`r_2`$, $`r_2`$, and $`\frac{17}{r_2^2}`$ should be $`r_3`$, and so on.
In other words, we are again using the notion of the arithmetic mean, just as in the case of square roots. So, it seems natural that a hypothetical clever Babylonian mathematician could have created a method for extracting the cube root of a positive number, although there was (obviously) no knowledge of Newton’s method at that time.
## 4 A Simple Generalization
The reader can easily prove that a generalization of equations (6) and (4) is given by:
$$x_{n+1}=\frac{\sum _{i=1}^{m-1}x_n+\frac{r}{x_n^{m-1}}}{m},$$
(7)
where $`x_1`$ is the first approximation of $`\sqrt[m]{r}`$ and $`\sum _{i=1}^{m-1}x_n=(m-1)x_n`$. It can easily be proved as well that any positive real number can serve as the first approximation $`x_1`$.
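A minimal sketch of the general iteration (7); the radicand 17 and the starting values below are, again, only illustrative.

```python
# The m-th root iteration (7): average (m-1) copies of x with r / x**(m-1).
def mth_root(r, m, x1=1.0, steps=80):
    x = x1
    for _ in range(steps):
        x = ((m - 1) * x + r / x ** (m - 1)) / m   # equation (7)
    return x

print(mth_root(17, 2))             # ~ 4.1231 (square root, as in (4))
print(mth_root(17, 3))             # ~ 2.5713 (cube root, as in (6))
print(mth_root(17, 5, x1=100.0))   # any positive first approximation works
```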
Figure 1: Thermal-to-mechanical energy conversion power-output 𝑊 in units 𝐼²/ℏ (as determined by (10)) as a function of Δ; 𝐼=𝐽>0, ϵ=50𝐼, Γ_↑=0, Γ_↓=10𝐼/ℏ, 2Γ=3𝐼/ℏ. Parameter 𝐾=0.1𝐼, 0.05𝐼 and 0.01𝐼 for curves a, b, and c, respectively.
A Thought Construction of Working Perpetuum Mobile of the Second Kind
V. Čápek and J. Bok
Institute of Physics of Charles University, Faculty of Mathematics and Physics,
Ke Karlovu 5, 121 16 Prague 2, Czech Republic
(Tel. (00-420-2)2191-1330 or (00-420-2)2191-1450,
Fax (00-420-2)296-764,
E-mail capek@karlov.mff.cuni.cz or bok@karlov.mff.cuni.cz)
March 17, 1999
A previously published model of the isothermal Maxwell demon, as one of the models of open quantum systems endowed with the faculty of selforganization, is reconstructed here. It describes an open quantum system interacting with a single thermodynamic bath but otherwise not aided from outside. Its activity is given by the standard linear Liouville equation for the system and bath. Owing to its selforganization property, the model then yields cyclic conversion of heat from the bath into mechanical work without compensation. Hence, it provides an explicit thought construction of a perpetuum mobile of the second kind, thus contradicting the Thomson formulation of the second law of thermodynamics. No approximation is involved, as a special scaling procedure is used which makes the kinetic equations employed exact.
PACS: 05.60.+w, 87.22.Fy
Submitted to Czech. J. Phys.; preliminary version reported as a poster at
MECO 23, April 27-29, 1998, ICTP, Trieste
1. Introduction
In this letter, we should like to report on a result which can be obtained, for the model in question, without approximations from the standard quantum theory of open systems governed by the linear Liouville-von Neumann equation. Irrespective of this, it contradicts the second law of thermodynamics. Leaving details of the physical motivation to a later publication, we mention here just the fact that the main inspiration for the construction of the model has been taken from biology, namely from topological changes of biologically important molecules upon detecting, at a specific site (receptor), particles (excitations, molecules or molecular groups) to be processed .
A previous version of the model was published in . Detailed treatment of this model as well as of other microscopic models of open quantum systems working on analogous principles revealed the property of spontaneous (i.e. not induced by external flows) selforganization. This then leads to such unexpected phenomena contradicting basic principles of statistical thermodynamics as, e.g., violation of consequences of detailed balance (in connection with the impossibility of a rigorous justification thereof). From this, implicit violations of the second law of thermodynamics could be deduced, as mentioned in, e.g., . Here, we reconstruct the model so that it is able to work cyclically and without compensation as a perpetuum mobile of the second kind, in a sense explicitly contradicting the Thomson formulation of the second law of thermodynamics .
2. Model
The fully quantum Hamiltonian of our model in its simplest version (see also ) can, as usual, be written as a sum of the Hamiltonians of the (extended) system, the thermodynamic bath, and the system-bath interaction. Thus,
$$H=H_S+H_B+H_{SB}$$
(1)
where
$$H_S=J(c_{-1}^{\dagger }c_0+c_0^{\dagger }c_{-1})|d\rangle \langle d|+I(c_1^{\dagger }c_0+c_0^{\dagger }c_1)|u\rangle \langle u|+K(c_1^{\dagger }c_{-1}+c_{-1}^{\dagger }c_1)$$
$$+\frac{ϵ}{2}[1-2c_0^{\dagger }c_0][|u\rangle \langle u|-|d\rangle \langle d|].$$
(2)
As for the Hamiltonians of the bath $`H_B`$ and of the system-bath interaction $`H_{SB}`$, they are not important here. Before explaining the symbols used in (2), let us only add that we shall assume $`H_{SB}`$ to consist of two additive and non-interfering contributions $`H_{SB}^{\prime }`$ and $`H_{SB}^{\prime \prime }`$. Here $`H_{SB}^{\prime }`$ causes transitions between the states $`|u\rangle `$ and $`|d\rangle `$ of the central system (see below) while $`H_{SB}^{\prime \prime }`$ is responsible for a sufficiently intense (and, for the sake of simplicity, equally strong) dephasing of the particle states at all three sites. As an example, one can take a model of the bath consisting of harmonic oscillators, $`H_B=\sum _k\hbar \omega _kb_k^{\dagger }b_k`$, interacting with the central system by a linear site-local coupling causing the $`|u\rangle \leftrightarrow |d\rangle `$ transitions exactly as in but complemented, for our purposes, by, e.g., a non-interfering site-local coupling of the particle to the bath with the same coupling constant at all three sites. This means, e.g., $`H_{SB}=\frac{1}{\sqrt{N}}\sum _k\hbar \omega _kG_k(b_k+b_k^{\dagger })[|u\rangle \langle d|+|d\rangle \langle u|]`$ $`+\frac{1}{N}\sum _{kk^{\prime }}\hbar \sqrt{\omega _k\omega _{k^{\prime }}}g_{kk^{\prime }}(b_k+b_k^{\dagger })(b_{k^{\prime }}+b_{k^{\prime }}^{\dagger })(c_{-1}^{\dagger }c_{-1}+c_0^{\dagger }c_0+c_{+1}^{\dagger }c_{+1})`$ $`\equiv H_{SB}^{\prime }+H_{SB}^{\prime \prime }`$. These specific forms of $`H_B`$ and $`H_{SB}`$ will, however, not be used below.
As for $`H_S`$ in (2), we have chosen the simplest model with only three particle states, i.e. sites ($`m=-1,0`$ and $`+1`$). Operators $`c_m^{\dagger }`$ and $`c_m`$, $`m=-1,0,`$ or $`+1`$, designate the particle creation and annihilation operators at site $`m`$. For simplicity, we shall assume only one particle in the system. This is why we do not need the commutational (anticommutational) relations of the particle creation and annihilation operators. In (2), $`I`$, $`J`$ and $`K`$ are transfer (hopping or resonance) integrals connecting the sites involved. Worth noticing is that in (2), the forward and backward transfers in any pair of sites always have the same amplitude, as a consequence of the hermiticity of $`H_S`$. The one-directional character of the process reported here is not owing to a contingent difference between these amplitudes but results exclusively (as will become clear later on) from the existence of spontaneous processes. Site ‘0’ is understood to be attached to a central system representing, e.g., a specific molecule or molecular group (e.g. a tail connecting site ‘0’ with either site ‘-1’ or ‘+1’ but not both simultaneously). This central system is assumed to have (in a given range of energies of interest) two levels with energies $`\pm ϵ/2`$ (with corresponding states $`|d\rangle `$ and $`|u\rangle `$; we assume $`ϵ>0`$). At the moment when the particle is transferred to site ‘0’ attached to the central system, the relative order (on the energy axis) of the two levels of the central system gets interchanged. (This of course causes instability of the central system with respect to the $`|d\rangle \to |u\rangle `$ transition, which is the same effect as the additional-load-induced instability of a ship in water.) Asymmetry in the $`|u\rangle \leftrightarrow |d\rangle `$ transfer rates is ensured by the above spontaneous processes with respect to the bath. Conversion of this up-and-down asymmetry into the left-and-right one is then owing to the above special form of the first two terms on the right-hand side of (2) discussed below. Technically, this imbalancing is due to the fourth term on the right-hand side of (2), proportional to $`1-2c_0^{\dagger }c_0`$, and may in reality be due to correlation effects, as the particle transferred may, upon its transition to site $`0`$, change the topology (the originally stable conformation may become energetically disadvantageous) or orientation of the central system in space. (Such changes condition the activity of many biologically important molecules in living organisms .)
3. Equations of motion
In our theory, we shall closely follow but shall be interested in just a stationary situation. That is why we can apply any proper kinetic method yielding time derivatives of matrix elements of the density matrix of our extended system (particle + the central two-level system), and set these time derivatives to zero. In order to avoid unnecessary technical complications, we have avoided time-convolution methods. From the mutually equivalent time-convolutionless methods, we have chosen (like in ) that of Tokuyama and Mori . In order to avoid unnecessary discussions about the role of approximations, we have employed one of the scaling procedures turning, in the limiting sense, the Tokuyama-Mori equations into exact kinetic equations. This means the following steps:
* We formally introduce a joint small parameter, say $`g`$, of both $`H_{SB}`$ and all the hopping integrals $`I`$, $`J`$, and $`K`$, in the sense of setting $`H_{SB}\propto g`$ but $`I,J,K\propto g^2`$. Right here, let us remind the reader of the fact that this scaling of some parameters from $`H_S`$ is what distinguishes our approach from the scaling standardly used in weak-coupling theories (inapplicable to our presumably intermediate or rather strong coupling situation discussed here) .
* We introduce a new time unit $`\tau _0\propto g^{-2}`$, i.e. introduce a new time $`t^{\prime }=t/\tau _0`$. This step formally disappears as far as we are, like here, interested in just the stationary (long-time) asymptotics.
* We divide our general Tokuyama-Mori kinetic equations by $`g^2`$ and perform the limit $`g\to 0`$.
The reader can easily see that this method preserves just the lowest-order (in $`g^2`$) terms, which can be calculated exactly. The result can be reported as follows (for details see a forthcoming extended publication):
First, we arrange all the 36 matrix elements $`\rho _{\alpha \gamma }(t)`$ of the density matrix of our extended system ($`\alpha ,\gamma ,\mathrm{\dots }=md`$ or $`mu`$ with $`m=0`$ or $`\pm 1`$, while $`d`$ or $`u`$ designate the states of the central system) in groups of nine, designating
$$(\rho _{uu})^T=(\rho _{-1u,-1u},\rho _{0u,0u},\rho _{1u,1u},\rho _{-1u,0u},\rho _{-1u,1u},\rho _{0u,-1u},\rho _{0u,1u},\rho _{1u,-1u},\rho _{1u,0u}),$$
$$(\rho _{dd})^T=(\rho _{-1d,-1d},\rho _{0d,0d},\rho _{1d,1d},\rho _{-1d,0d},\rho _{-1d,1d},\rho _{0d,-1d},\rho _{0d,1d},\rho _{1d,-1d},\rho _{1d,0d}),$$
$$(\rho _{ud})^T=(\rho _{-1u,-1d},\rho _{0u,0d},\rho _{1u,1d},\rho _{-1u,0d},\rho _{-1u,1d},\rho _{0u,-1d},\rho _{0u,1d},\rho _{1u,-1d},\rho _{1u,0d})$$
(3)
and similarly for $`\rho _{du}`$. The superscript $`{}^{T}`$ designates transposition. Then the above kinetic equations in the asymptotic time domain (and after the above scaling) read as
$$\left(\begin{array}{c}0\\ 0\\ 0\\ 0\end{array}\right)=\left(\begin{array}{cccc}𝒜& ℬ& \mathrm{𝟎}& \mathrm{𝟎}\\ 𝒞& 𝒟& \mathrm{𝟎}& \mathrm{𝟎}\\ \mathrm{𝟎}& \mathrm{𝟎}& \mathrm{\cdots }& \mathrm{\cdots }\\ \mathrm{𝟎}& \mathrm{𝟎}& \mathrm{\cdots }& \mathrm{\cdots }\end{array}\right)\left(\begin{array}{c}\rho _{uu}\\ \rho _{dd}\\ \rho _{ud}\\ \rho _{du}\end{array}\right).$$
(4)
Here, in the square matrix, all the elements are in fact $`9\times 9`$ blocks. Hence, the whole set splits into two independent sets of $`18`$ equations each; we shall be interested just in the one for $`\rho _{uu}`$ and $`\rho _{dd}`$. This reads as in (4) with typical forms of the block $`9\times 9`$ matrices $`𝒜,ℬ,𝒞`$ and $`𝒟`$. Here
$$𝒜=$$
$$\left(\begin{array}{ccccccccc}-\mathrm{\Gamma }_{\downarrow }& 0& 0& 0& iK/\hbar & 0& 0& -iK/\hbar & 0\\ 0& -\mathrm{\Gamma }_{\uparrow }& 0& 0& 0& 0& iI/\hbar & 0& -iI/\hbar \\ 0& 0& -\mathrm{\Gamma }_{\downarrow }& 0& -iK/\hbar & 0& -iI/\hbar & iK/\hbar & iI/\hbar \\ 0& 0& 0& k-2\mathrm{\Gamma }& iI/\hbar & 0& 0& 0& -iK/\hbar \\ iK/\hbar & 0& -iK/\hbar & iI/\hbar & -\mathrm{\Gamma }_{\downarrow }-2\mathrm{\Gamma }& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& k^{*}-2\mathrm{\Gamma }& iK/\hbar & -iI/\hbar & 0\\ 0& iI/\hbar & -iI/\hbar & 0& 0& iK/\hbar & k^{*}-2\mathrm{\Gamma }& 0& 0\\ -iK/\hbar & 0& iK/\hbar & 0& 0& -iI/\hbar & 0& -\mathrm{\Gamma }_{\downarrow }-2\mathrm{\Gamma }& 0\\ 0& -iI/\hbar & iI/\hbar & -iK/\hbar & 0& 0& 0& 0& k-2\mathrm{\Gamma }\end{array}\right).$$
(5)
Here $`k=-iϵ/\hbar -0.5(\mathrm{\Gamma }_{\uparrow }+\mathrm{\Gamma }_{\downarrow })`$ and the asterisk $`{}^{*}`$ designates complex conjugation. $`\mathrm{\Gamma }_{\uparrow }`$ and $`\mathrm{\Gamma }_{\downarrow }`$ designate bath-assisted uphill and downhill transfer rates in our two-level system calculated as if, formally (owing to the above scaling), $`I=J=K=0`$. Let us mention that via $`\mathrm{\Gamma }_{\uparrow }`$ and $`\mathrm{\Gamma }_{\downarrow }`$, the (initial) bath temperature $`T`$ enters the game. These rates are known from the usual Pauli Master Equation approach to general kinetic problems and, consequently, are related to each other by the detailed balance condition $`\mathrm{\Gamma }_{\uparrow }=\mathrm{\Gamma }_{\downarrow }\mathrm{e}^{-\beta ϵ}`$, $`\beta =1/(k_BT)`$. In general, however, owing to the intermixture of coherent and incoherent transfer channels, the detailed balance condition does not apply to our particle transfer problem. Finally, $`2\mathrm{\Gamma }`$ is the dephasing rate determined by $`H_{SB}^{\prime \prime }`$. As for the block $`ℬ`$, it is fully diagonal with diagonal elements $`ℬ_{11},\mathrm{\dots },ℬ_{99}`$ equal to $`\mathrm{\Gamma }_{\uparrow }`$, $`\mathrm{\Gamma }_{\downarrow }`$, $`\mathrm{\Gamma }_{\uparrow }`$, $`(\mathrm{\Gamma }_{\uparrow }+\mathrm{\Gamma }_{\downarrow })/2`$, $`\mathrm{\Gamma }_{\uparrow }`$, $`(\mathrm{\Gamma }_{\uparrow }+\mathrm{\Gamma }_{\downarrow })/2`$, $`(\mathrm{\Gamma }_{\uparrow }+\mathrm{\Gamma }_{\downarrow })/2`$, $`\mathrm{\Gamma }_{\uparrow }`$ and $`(\mathrm{\Gamma }_{\uparrow }+\mathrm{\Gamma }_{\downarrow })/2`$, respectively. Next,
$$𝒟=$$
$$\left(\begin{array}{ccccccccc}-\mathrm{\Gamma }_{\uparrow }& 0& 0& iJ/\hbar & iK/\hbar & -iJ/\hbar & 0& -iK/\hbar & 0\\ 0& -\mathrm{\Gamma }_{\downarrow }& 0& -iJ/\hbar & 0& iJ/\hbar & 0& 0& 0\\ 0& 0& -\mathrm{\Gamma }_{\uparrow }& 0& -iK/\hbar & 0& 0& iK/\hbar & 0\\ iJ/\hbar & -iJ/\hbar & 0& k^{*}-2\mathrm{\Gamma }& 0& 0& 0& 0& -iK/\hbar \\ iK/\hbar & 0& -iK/\hbar & 0& -\mathrm{\Gamma }_{\uparrow }-2\mathrm{\Gamma }& 0& -iJ/\hbar & 0& 0\\ -iJ/\hbar & iJ/\hbar & 0& 0& 0& k-2\mathrm{\Gamma }& iK/\hbar & 0& 0\\ 0& 0& 0& 0& -iJ/\hbar & iK/\hbar & k-2\mathrm{\Gamma }& 0& 0\\ -iK/\hbar & 0& iK/\hbar & 0& 0& 0& 0& -\mathrm{\Gamma }_{\uparrow }-2\mathrm{\Gamma }& iJ/\hbar \\ 0& 0& 0& -iK/\hbar & 0& 0& 0& iJ/\hbar & k^{*}-2\mathrm{\Gamma }\end{array}\right).$$
(6)
As for the block $`𝒞`$, it reads as $`ℬ`$ except for the interchange $`\mathrm{\Gamma }_{\uparrow }\leftrightarrow \mathrm{\Gamma }_{\downarrow }`$. The form of all the matrices in (4) is exactly the same as, e.g., the one we would get from the stochastic Liouville equation (SLE) provided, however, that $`H_{SB}`$ is replaced by a proper stochastic (e.g. Gaussian delta-correlated) potential field acting on the central system. The only difference between our form of the $`𝒜`$-$`𝒟`$ blocks and that of the same matrices in the corresponding SLE theory is dictated by the physics of the problem: namely, in contrast to the SLE approach, spontaneous processes with respect to the bath naturally appear in our fully quantum model ($`H_{SB}`$ is, in our case, a coupling to a genuine quantum bath). Thus, $`\mathrm{\Gamma }_{\uparrow }<\mathrm{\Gamma }_{\downarrow }`$, or even $`\mathrm{\Gamma }_{\uparrow }\ll \mathrm{\Gamma }_{\downarrow }`$, in our case. As SLE is quite standard and well understood, this comment will hopefully turn the attention of suspicious readers from potential speculations about the correctness of our approach to the form of our Hamiltonian. It is the latter that is responsible for the striking conclusions obtained.
4. Stationary flow and power output.
The upper half of (4) provides a set of 18 linear algebraic homogeneous equations of rank 17 for the 18 components of the density matrices $`\rho _{uu}`$ and $`\rho _{dd}`$ (the remaining components become decoupled and are irrelevant in what follows). So, it should be complemented by the normalization condition
$$\sum _{m=-1}^{+1}[\rho _{mu,mu}+\rho _{md,md}]=1.$$
(7)
Then this set and (7) determine the relevant components of the density matrix uniquely. From the physical meaning of the transfer rates, one can then determine the flow ‘-1’→‘0’→‘+1’→‘-1’ as
$$𝒥=\mathrm{\Gamma }_{\downarrow }\rho _{0d,0d}-\mathrm{\Gamma }_{\uparrow }\rho _{0u,0u}.$$
(8)
The calculated flow is, as simple inspection or numerical solution shows, always nonzero. Already this implies, owing to the persistent character of the flow, important consequences. We are, however, here interested rather in the possibility of converting the heat from the bath into usable work. For that, we connect the particle running through our system organized as a circle of three sites (as ‘-1’→‘0’→‘+1’→‘-1’) with a screw (with its axis perpendicular to the plane of our three sites ‘-1’, ‘0’ and ‘+1’). The above persistent flow could then drive the screw at the cost of just the thermal energy of the bath, thus converting the latter directly to mechanical work. So the existence or nonexistence of the stationary flow according to the above definitions converts to the question of a direct violation of the 2nd law of thermodynamics as applied to our system, in the sense of the existence or nonexistence of the perpetuum mobile of the second kind. The problem can be easily solved numerically in our model. The idea is to provide the system with potential steps the particle (and the screw) must overcome when passing from ‘-1’ to ‘0’, from ‘0’ to ‘+1’ or from ‘+1’ to ‘-1’. (When passing virtually in the opposite directions, the potential steps as felt by the particle would get the opposite sign.) We have chosen these steps equal, designating their value as $`\mathrm{\Delta }/3`$. Hence $`\mathrm{\Delta }>0`$ is the mechanical work the particle exerts on the screw during one turn in the above direction. One should realize that, taking these steps as those of the potential energy of the screw connected with the particle, the latter potential energy is not unique as a function of the particle position on the above triangle of sites. Instead, it is unique as a function of the rotation angle $`\varphi `$ of the particle (or screw). In other words, when the particle performs one turn ‘-1’→‘0’→‘+1’→‘-1’, $`\varphi \to \varphi +2\pi `$, the potential ascribed to the particle connected with the screw increases by $`\mathrm{\Delta }`$, though the particle formally returns to the same site.
Technically, inclusion of the above potential steps is simple. As follows from the above formalism, or indeed already from the Liouville equation for the whole complex of the system and the bath, it only means adding to the block matrices $`𝒜`$ and $`𝒟`$ above the $`9\times 9`$ block $`ℰ`$ with all matrix elements equal to zero except for $`ℰ_{44}=ℰ_{77}=ℰ_{88}=-ℰ_{55}=-ℰ_{66}=-ℰ_{99}=i\mathrm{\Delta }/(3\hbar )`$.
From the solution to
$$0=\left(\begin{array}{cc}𝒜+ℰ& ℬ\\ 𝒞& 𝒟+ℰ\end{array}\right)\left(\begin{array}{c}\rho _{uu}\\ \rho _{dd}\end{array}\right).$$
(9)
and (7), one can then determine the mechanical energy power output (as measured on the screw), i.e. the thermal-to-mechanical energy conversion power output (per second)
$$W=𝒥\mathrm{\Delta }.$$
(10)
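For concreteness, the following numerical sketch shows one way of solving the stationary problem (9) together with the normalization (7) and evaluating the flow (8) and the power output (10). The helper build_blocks, which would assemble the $`9\times 9`$ blocks of Eqs. (5)-(6), is a hypothetical placeholder, and the row replaced by the normalization is assumed to be one of the linearly dependent ones.

```python
# Numerical sketch: stationary state of Eq. (9) plus normalization (7),
# then flow (8) and power output (10).  build_blocks() is hypothetical.
import numpy as np

def stationary_power(Delta, hbar, Gamma_up, Gamma_down, build_blocks):
    A, B, C, D = build_blocks()              # 9x9 complex blocks, Eqs. (5)-(6)
    E = np.zeros((9, 9), dtype=complex)      # potential-step block of Sec. 4
    step = 1j * Delta / (3.0 * hbar)
    for i in (3, 6, 7):                      # elements 44, 77, 88 (0-based)
        E[i, i] = step
    for i in (4, 5, 8):                      # elements 55, 66, 99
        E[i, i] = -step
    M = np.block([[A + E, B], [C, D + E]])   # 18x18 matrix of Eq. (9)
    # The homogeneous set has rank 17; replace the first row (assumed
    # redundant) by the normalization (7).  Populations sit at components
    # 0, 1, 2 of rho_uu and 0, 1, 2 of rho_dd.
    M[0, :] = 0.0
    for i in (0, 1, 2, 9, 10, 11):
        M[0, i] = 1.0
    b = np.zeros(18, dtype=complex)
    b[0] = 1.0
    rho = np.linalg.solve(M, b)
    flow = Gamma_down * rho[10] - Gamma_up * rho[1]   # Eq. (8)
    return (flow * Delta).real                        # Eq. (10)
```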
Fig. 1 then unambiguously illustrates the positive result of our test of whether the conversion of the thermal energy of our single bath to mechanical work is possible in our model. One should realize that the mechanical output is, as Fig. 1 shows, really positive. Hence, the system produces, owing to its selforganizational properties and properly timed opening and closing of the particle transfer channel across site ‘0’, positive mechanical work. Because of the construction of the model, this work can only be at the cost of the heat of the thermodynamic bath. Hence, we have a direct ‘heat → mechanical work’ conversion. As we have just one bath, there is no compensation possible. Thus, our system, working cyclically, provides an example of a perpetuum mobile of the second kind, violating the second law of thermodynamics in its Thomson form.
# Untitled Document
Disclaimer
> This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or The Regents of the University of California and shall not be used for advertising or product endorsement purposes.
Lawrence Berkeley Laboratory is an equal opportunity employer.
Shifting the Paradigm
A controversy is raging today about the power of our minds. Intuitively we know that our conscious thoughts can guide our actions. Yet the chief philosophies of our time proclaim, in the name of science, that we are mechanical systems governed, fundamentally, entirely by impersonal laws that operate at the level of our microscopic constituents.
The question of the nature of the relationship between conscious thoughts and physical actions is called the mind-body problem. Old as philosophy itself, it was brought to its present form by the rise, during the seventeenth century, of what is called ‘modern science’. The ideas of Galileo Galilei, René Descartes, and Isaac Newton created a magnificent edifice known as classical physical theory, which was completed by the work of James Clerk Maxwell and Albert Einstein. The central idea is that the physical universe is composed of “material” parts that are localizable in tiny regions, and that all motion of matter is completely determined by matter alone, via local universal laws. This local character of the laws is crucial. It means that each tiny localized part responds only to the states of its immediate neighbors: each local part “feels” or “knows about” nothing outside its immediate microscopic neighborhood. Thus the evolution of the physical universe, and of every system within the physical universe, is governed by a vast collection of local processes, each of which is ‘myopic’ in the sense that it ‘sees’ only its immediate neighbors.
The problem is that if this causal structure indeed holds then there is no need for our human feelings and knowings. These experiential qualities clearly correspond to large-scale properties of our brains. But if the entire causal process is already completely determined by the ‘myopic’ process postulated by classical physical theory, then there is nothing for any unified graspings of large-scale properties to do. Indeed, there is nothing that they can do that is not already done by the myopic processes. Our conscious thoughts thus become prisoners of impersonal microscopic processes: we are, according to this “scientific” view, mechanical robots, with a mysterious dangling appendage, a stream of conscious thoughts that can grasp large-scale properties as wholes, but exert, as a consequence of these graspings, nothing not done already by the microscopic constituents.
The enormous empirical success of classical physical theory during the eighteenth and nineteenth centuries has led many twentieth-century philosophers to believe that the problem with consciousness is how to explain it away: how to discredit our misleading intuition by identifying it as a product of human confusion, rather than recognizing the physical effects of consciousness as a physical problem that needs to be answered in dynamical terms. That strategy of evasion is, to be sure, about the only course available within the strictures imposed by classical physical theory.
Detailed proposals abound for how to deal with this problem created by adoption of the classical-physics world view. The influential philosopher Daniel Dennett (1994, p.237) claims that our normal intuition about consciousness is “like a benign user illusion” or “a metaphorical by-product of the way our brains do their approximating work”. Eliminative materialists such as Richard Rorty (1979) hold that mental phenomena, such as conscious experiences, simply do not exist. Proponents of the popular ‘Identity Theory of Mind’ grant that conscious experiences do exist, but claim each experience to be identical to some brain process. Epiphenomenal dualists hold that our conscious experiences do exist, and are not identical to material processes, but have no effect on anything we do: they are epiphenomenal.
Dennett (1994, p.237) described the recurring idea that pushed him to his counter-intuitive conclusion: “a brain was always going to do what it was caused to do by local mechanical disturbances.” This passage lays bare the underlying presumption behind his own theorizing, and undoubtedly behind the theorizing of most non-physicists who ponder this matter, namely the presumptive essential correctness of the idea of the physical world foisted upon us by the assumptions of classical physical theory.
It has now become widely appreciated that assimilation by the general public of this “scientific” view, according to which each human being is basically a mechanical robot, is likely to have a significant and corrosive impact on the moral fabric of society. Dennett speaks of the Spectre of Creeping Exculpation: recognition of the growing tendency of people to exonerate themselves by arguing that it is not “I” who is at fault, but some mechanical process within: “my genes made me do it”; or “my high blood-sugar content made me do it.” \[Recall the infamous “Twinkie Defense” that got Dan White off with five years for murdering San Francisco Mayor George Moscone and Supervisor Harvey Milk.\]
Steven Pinker (1997, p.55) also defends a classical-type conception of the brain, and, like Dennett, recognizes the important need to reconcile the science-based idea of causation with a rational conception of personal responsibility. His solution is to regard science and ethics as two self-contained systems: “Science and morality are separate spheres of reasoning. Only by recognizing them as separate can we have them both.” And “The cloistering of scientific and moral reasoning also lies behind my recurring metaphor of the mind as machine, of people as robots.” But he then decries “the doctrines of postmodernism, poststructuralism, and deconstructionism, according to which objectivity is impossible, meaning is self-contradictory, and reality is socially constructed.” Yet are not the ideas he decries a product of the contradiction he embraces? Self-contradiction is a bad seed that bears relativism as its evil fruit.
The current welter of conflicting opinion about the mind-brain connection suggests that a paradigm shift is looming. But it will require a major foundational shift. For powerful thinkers have, for three centuries, been attacking this problem from every angle within the bounds defined by the precepts of classical physical theory, and no consensus has emerged.
Two related developments of great potential importance are now occurring. On the experimental side, there is an explosive proliferation of empirical studies of the relations between a subject’s brain process — as revealed by instrumental probes of diverse kinds — and the experiences he reports. On the theoretical side, there is a growing group of physicists who believe almost all thinking on this issue during the past few centuries to be logically unsound, because it is based implicitly on the precepts of classical physical theory, which are now known to be fundamentally incorrect. Contemporary physical theory differs profoundly from classical physical theory precisely on the nature of the dynamical linkage between minds and physical states.
William James (1893, p.486), writing at the end of the nineteenth century, said of the scientists who would one day illuminate the mind-body problem:
“the best way in which we can facilitate their advent is to understand how great is the darkness in which we grope, and never forget that the natural-science assumptions with which we started are provisional and revisable things.”
How wonderfully prescient!
It is now well known that the precepts of classical physical theory are fundamentally incorrect. Classical physical theory has been superseded by quantum theory, which reproduces all of the empirical successes of classical physical theory, and succeeds also in every known case where the predictions of classical physical theory fail. Yet even though quantum theory yields all the correct predictions of classical physical theory, its representation of the physical aspects of nature is profoundly different from that of classical physical theory. And the most essential difference concerns precisely the connection between physical states and consciousness.
My thesis here is that the difficulty with the traditional attempts to understand the mind-brain system lies primarily with the physics assumptions, and only secondarily with the philosophy: once the physics assumptions are rectified the philosophy will take care of itself. A correct understanding of the mind/matter connection cannot be based on a conception of the physical aspects of nature that is profoundly mistaken precisely at the critical point, namely the role of consciousness in the dynamics of physical systems.
Contemporary science, rationally pursued, provides an essentially new understanding of the mind/brain system. This revised understanding is in close accord with our intuitive understanding of that system: no idea of a “benign user illusion” arises, nor any counter-intuitive idea that a conscious thought is identical to a collection of tiny objects moving about in some special kind of way.
Let it be said, immediately, that this solution lies not in the invocation of quantum randomness: a significant dependence of human action on random chance would be far more destructive of any rational notion of personal responsibility than microlocal causation ever was.
The solution hinges not on quantum randomness, but rather on the dynamical effects within quantum theory of the intention and attention of the observer.
But how did physicists ever manage to bring conscious thoughts into the dynamics of physical systems? That is an interesting tale.
The World as Knowings
In his book “The creation of quantum mechanics and the Bohr-Pauli dialogue” the historian John Hendry (1984) gives a detailed account of the fierce struggles, during the first quarter of this century, by such eminent thinkers as Hilbert, Jordan, Weyl, von Neumann, Born, Einstein, Sommerfeld, Pauli, Heisenberg, Schroedinger, Dirac, Bohr and others, to come up with a rational way of comprehending the data from atomic experiments. Each man had his own bias and intuitions, but in spite of intense effort no rational comprehension was forthcoming. Finally, at the 1927 Solvay conference a group including Bohr, Heisenberg, Pauli, Dirac, and Born came into concordance on a solution that came to be called “The Copenhagen Interpretation”. Hendry says: “Dirac, in discussion, insisted on the restriction of the theory’s application to our knowledge of a system, and on its lack of ontological content.” Hendry summarized the concordance by saying: “On this interpretation it was agreed that, as Dirac explained, the wave function represented our knowledge of the system, and the reduced wave packets our more precise knowledge after measurement.”
Let there be no doubt about this key point, namely that the mathematical theory was asserted to be directly about our knowledge itself, not about some imagined-to-exist world of particles and fields.
Heisenberg (1958a): “The conception of objective reality of the elementary particles has thus evaporated not into the cloud of some obscure new reality concept but into the transparent clarity of a mathematics that represents no longer the behavior of particles but rather our knowledge of this behavior.”
Heisenberg (1958b): “…the act of registration of the result in the mind of the observer. The discontinuous change in the probability function…takes place with the act of registration, because it is the discontinuous change in our knowledge in the instant of registration that has its image in the discontinuous change of the probability function.”
Heisenberg (1958b:) “When the old adage ‘Natura non facit saltus’ is used as a basis of a criticism of quantum theory, we can reply that certainly our knowledge can change suddenly, and that this fact justifies the use of the term ‘quantum jump’. ”
Wigner (1961): “the laws of quantum mechanics cannot be formulated … without recourse to the concept of consciousness.”
Bohr (1934): “In our description of nature the purpose is not to disclose the real essence of phenomena but only to track down as far as possible relations between the multifold aspects of our experience.”
Certainly this profound shift in physicists’ conception of the basic nature of their endeavor, and the meanings of their formulas, was not a frivolous move: it was a last resort. The very idea that in order to comprehend atomic phenomena one must abandon ontology, and construe the mathematical formulas to be directly about the knowledge of human observers, rather than about the external real events themselves, is so seemingly preposterous that no group of eminent and renowned scientists would ever embrace it except as an extreme last measure. Consequently, it would be frivolous of us simply to ignore a conclusion so hard won and profound, and of such apparent direct bearing on our effort to understand the connection of our knowings to our physical actions.
This monumental shift in the thinking of scientists was an epic event in the history of human thought. Since the time of the ancient Greeks the central problem in understanding the nature of reality, and our role in it, has been the puzzling separation of nature into two seemingly very different parts, mind and matter. This had led to the divergent approaches of Idealism and Materialism. According to the precepts of Idealism our ideas, thoughts, sensations, feelings, and other experiential realities, are the only realities whose existence is certain, and they should be taken as basic. But then the enduring external structure normally imagined to be carried by matter is difficult to fathom. Materialism, on the other hand, claims that matter is basic. But if one starts with matter then it is difficult to understand how something like your experience of the redness of a red apple can be constructed out of it, or why the experiential aspect of reality should exist at all if, as classical mechanics avers, the material aspect is causally complete by itself. There seems to be no rationally coherent way to comprehend the relationship between our thoughts and the thoughtless atoms that external reality was imagined to consist of.
Einstein never accepted the Copenhagen interpretation. He said:
“What does not satisfy me, from the standpoint of principle, is its attitude toward what seems to me to be the programmatic aim of all physics: the complete description of any (individual) real situation (as it supposedly exists irrespective of any act of observation or substantiation).” (Einstein, 1951, p.667)
and
“What I dislike in this kind of argumentation is the basic positivistic attitude, which from my view is untenable, and which seems to me to come to the same thing as Berkeley’s principle, esse est percipi.” (Einstein, 1951, p. 669).\[Translation: To be is to be perceived\]
Einstein struggled until the end of his life to get the observer’s knowledge back out of physics. But he did not succeed! Rather he admitted that:
“It is my opinion that the contemporary quantum theory…constitutes an optimum formulation of the \[statistical\] connections.” (ibid. p. 87).
He referred to:
“the most successful physical theory of our period, viz., the statistical quantum theory which, about twenty-five years ago took on a logically consistent form. … This is the only theory at present which permits a unitary grasp of experiences concerning the quantum character of micro-mechanical events.” (ibid p. 81).
One can adopt the cavalier attitude that these profound difficulties with the classical conception of nature are just some temporary retrograde aberration in the forward march of science. Or one can imagine that there is simply some strange confusion that has confounded our best minds for seven decades, and that their absurd findings should be ignored because they do not fit our intuitions. Or one can try to say that these problems concern only atoms and molecules, and not things built out of them. In this connection Einstein said:
“But the ‘macroscopic’ and ‘microscopic’ are so inter-related that it appears impracticable to give up this program \[of basing physics on the ‘real’\] in the ‘microscopic’ alone.” (ibid, p.674).
What Is Really Happening?
Orthodox quantum theory is pragmatic: it is a practical tool based on human knowings. It takes our experiences as basic, and judges theories on the basis of how well they work for us, without trying to attribute any reality to the entities of the theory, beyond the reality for us that they acquire from their success in allowing us to find rational order in the structure of our past experiences, and to form sound expectations about the consequences of our possible future actions.
But the opinion of many physicists, including Einstein, is that the proper task of scientists is to try to construct a rational theory of nature that is not based on so small a part of the natural world as human knowledge. John Bell opined that we physicists ought to try to do better than that.
The question thus arises as to what is ‘really happening’.
Heisenberg (1958) answered this question in the following way:
“Since through the observation our knowledge of the system has changed discontinuously, its mathematical representation also has undergone the discontinuous change, and we speak of a ‘quantum jump’.”
“A real difficulty in understanding the interpretation occurs when one asks the famous question: But what happens ‘really’ in an atomic event?”
“If we want to describe what happens in an atomic event, we have to realize that the word ‘happens’ can apply only to the observation, not to the state of affairs between the two observations. It \[ the word ‘happens’ \] applies to the physical, not the psychical act of observation, and we may say that the transition from the ‘possible’ to the ‘actual’ takes place as soon as the interaction of the object with the measuring device, and therefore with the rest of the world, has come into play; it is not connected with the act of registration of the result in the mind of the observer. The discontinuous change in the probability function, however, occurs with the act of registration, because it is the discontinuous change in our knowledge in the instant of recognition that has its image in the discontinuous change in the probability function.”
This explanation uses two distinct modes of description. One is a pragmatic knowledge-based description in terms of the Copenhagen concept of the discontinuous change of the quantum-theoretic probability function at the registration of new knowledge in the mind of the observer. The other is an ontological description in terms of ‘possible’ and ‘actual’, and ‘interaction of object with the measuring device’. The latter description is an informal supplement to the strict Copenhagen interpretation. I say ‘informal supplement’ because this ontological part is not tied into quantum theoretical formalism in any precise way. It assuages the physicists’ desire for an intuitive understanding of what could be going on behind the scenes, without actually interfering with the workings of the pragmatic set of rules.
Heisenberg’s transition from ‘the possible’ to ‘the actual’ at the dumb measuring device was shown to be a superfluous and needless complication by von Neumann’s analysis of the quantum process of measurement (von Neumann, 1932, Chapter VI). I shall discuss that work later, but note here only the key conclusion. von Neumann introduced the measuring instruments and the body/brains of the community of human observers into the quantum state, which is quantum theory’s only representation of “physical reality”. He then showed that if an observer experiences the fact that, for example, ‘the pointer on a measuring device has swung to the right’, then this increment in the observer’s knowledge can be associated exclusively with a reduction (i.e., sudden change) of the state of the brain of that observer to the part of that brain state that is compatible with his new knowledge. No change or reduction of the quantum state at the dumb measuring device is needed: no change in “knowledge” occurs there. This natural association of human “knowings” with events in human brains allows the ‘rules’ of the Copenhagen interpretation pertaining to “our knowledge” to be represented in a natural ontological framework. Indeed, any reduction event at the measuring device itself would, strictly speaking, disrupt in principle the validity of the predictions of quantum theory. Thus the only natural ontological place to put the reduction associated with the increases in knowledge upon which the Copenhagen interpretation is built is in the brain of the person whose knowledge is increased.
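As a toy illustration of the reduction rule just described (the two-dimensional ‘pointer’ basis and the amplitudes are purely illustrative, not drawn from the text):

```python
# von Neumann reduction: project onto the subspace compatible with the
# registered outcome, then renormalize.  All numbers are illustrative.
import numpy as np

P_right = np.diag([1.0, 0.0])      # projector onto "pointer swung right"
psi = np.array([0.6, 0.8])         # superposition before registration
rho = np.outer(psi, psi)           # density matrix |psi><psi|

reduced = P_right @ rho @ P_right  # reduction accompanying the registration
prob = np.trace(reduced)           # probability of this outcome (0.36 here)
reduced = reduced / prob           # renormalized post-registration state
print(prob, reduced)
```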
My purpose in what follows is to reconcile the insight of the founders of quantum theory, namely that the mathematical formalism of quantum theory is about our knowledge, with the demand of Einstein that basic physical theory be about nature herself. I shall achieve this reconciliation by incorporating human beings, including both their body/brains and their conscious experiences, into the quantum mechanical description of nature.
The underlying commitment here is to the basic quantum principle that information is the currency of reality, not matter: the universe is an informational structure, not a substantive one. This fact is becoming ever more clear in the empirical studies of the validity of the concepts of quantum theory in the context of complex experiments with simple combinations of correlated quantum systems, and in the related development of quantum information processing. Information-based language works beautifully, but substance-based language does not work at all.
Mind/Brain Dynamics: Why Quantum Theory Is Needed
A first question confronting a classically biased mind-brain researcher is this: How can two things so differently described and conceived as substantive matter and conscious thoughts interact in any rationally controlled and scientifically acceptable way. Within the classical framework this is impossible. Thus the usual tack has been to abandon or modify the classical conception of mind while clinging tenaciously to the “scientifically established” classical idea of matter, even in the face of knowledge that the classical idea of matter is now known by scientists to be profoundly and fundamentally mistaken, and mistaken not only on the microscopic scale, but on the scale of meters and kilometers as well (Tittel, 1998). Experiments show that our experiences of instruments cannot possibly be just the passive witnessing of macroscopic physical realities that exist and behave in the way that the ideas of classical physical theory say that macroscopic physical realities ought to exist and behave.
Scientists and philosophers intent on clinging to familiar classical concepts normally argue at this point that whereas long-range quantum effects can be exhibited under rigorous conditions of isolation and control, all quantum effects will be wiped out in warm wet brains on a very small scale, and hence classical concepts will be completely adequate to deal with the question of the relationship between our conscious thoughts and the large-scale brain activities with which they are almost certainly associated.
That argument is incorrect. The emergence of classical-type relationships arises from interactions between a system and its environment. These interactions induce correlations between this system and its environment that make certain typical quantum interference effects difficult to observe in practice, and that allow certain practical computations to be simplified by substituting a classical system for a quantum one. However, these correlation (decoherence) effects definitely do not entail the true emergence — even approximately — of a single classically describable system. (Zurek, 1986, p.89 and Joos, 1986, p.12). In particular, if the subsystem of interest is a brain then interactions between its parts produce a gigantic jumble of partially interfering classical-type states: no single approximately classical reality emerges. Yet if no — even approximate — single classical reality emerges at any macroscopic scale, but only a jumble of partially interfering quantum states, then the investigation of an issue as basic as the nature of the mind-brain connection ought in principle to be pursued within an exact framework, rather than crippling the investigation from the outset by replacing correct principles by concepts known to be fundamentally and grossly false, just because they allow certain practical computations to be simplified.
This general argument is augmented by a more detailed examination of the present case. The usual argument for the approximate pragmatic validity of a classical conceptualization of a system is based on assumptions about the nature of the question that is put to nature. The assumption in the usual case is that this question will be about something like the position of a visible object. Then one has a clear separation of the world into its pertinent parts: the unobservable atomic subsystem, the observable features of the instrument, and unobserved features of the environment, including unobserved micro-features of the instrument. The empirical question is about the observable features of the instrument. These features are essentially just the overall position and orientation of a visible object.
But the central issue in the present context is precisely the character of the brain states that are associated with conscious experiences. It is not known a priori whether or how a self-observing quantum system separates into these various parts. It is not clear, a priori, that a self-observing brain can be separated into components analogous to observer, observee, and environment. Consequently, one cannot rationally impose prejudicial assumptions — based on pragmatic utility in simple cases in which the quantum system and measuring instrument are two distinct systems both external to the human observer, and strongly coupled to an unobservable environment — in this vastly different present case, in which the quantum system being measured, the observing instrument, and “the observer” are aspects of one unified body/brain/mind system observing itself.
In short, the practical utility of classical concepts in certain special situations arises from the very special forms of the empirical questions that are to be asked in those situations. Consequently, one must revert to the basic physical principles in this case where the special conditions of separation fail, and the nature of the questions put to nature can therefore be quite different.
The issue here is not whether distinct objects that we observe via our senses can be treated as classical objects. It is whether in the description of the complex inner workings of a thinking human brain it is justifiable to assume — not just for certain simple practical purposes, but as a matter of principle — that this brain is made up of tiny interacting parts of a kind known not to exist.
The only rational scientific way to proceed in this case of a mind/brain observing itself is to start from basic quantum theory, not from a theory that is known to be profoundly incorrect.
The von Neumann/Wigner “orthodox” quantum formalism that I employ automatically and neatly encompasses all quantum and classical predictions, including the transition domains between them. It automatically incorporates all decoherence effects, and the partial “classicalization” effects that they engender.
von Neumann/Wigner Quantum Theory
Wigner used the word “orthodox” to describe the formulation of quantum theory developed by von Neumann. It can be regarded as a partial ontologicalization of its predecessor, Copenhagen quantum theory.
The central concept of the Copenhagen interpretation of quantum theory, as set forth by the founders at the seminal Solvay conference of 1927, is that the basic mathematical entity of the theory, the quantum state of a system, represents “our knowledge” of the system, and the reduced state represents our more precise knowledge after measurement.
In the strict Copenhagen view, the quantum state is always the state of a limited system that does not include the instruments that we use to prepare that system or later to measure it. Our relevant experiences are those that we described as being our observations of the observable features of these instruments.
To use the theory one needs relationships between the mathematical quantities of the theory and linguistic specifications of the observable features of the instruments. These specifications are couched in the language that we use to communicate to our technically trained associates what we have done (how we have constructed our instruments, and put them in place) and what we have learned (which outcomes have appeared to us). Thus pragmatic quantum theory makes sense only when regarded as part of a larger enveloping language that allows us to describe to each other the dispositions of the instruments and ordinary objects that are relevant to the application we make. The connections between these linguistic specifications and the mathematical quantities of the theory are fixed, fundamentally, by the empirical calibrations of our instruments.
These calibration procedures do not, however, fully exploit all that we know about the atomic properties of the instruments.
That Bohr was sensitive to this deficiency, is shown by following passage:
“On closer consideration, the present formulation of quantum mechanics, in spite of its great fruitfulness, would yet seem no more than a first step in the necessary generalization of the classical mode of description, justified only by the possibility of disregarding in its domain of application the atomic structure of the measuring instruments. For a correlation of still deeper lying laws of nature … this last assumption can no longer be maintained and we must be prepared for a … still more radical renunciation of the usual claims of so-called visualization. (Bohr, 1936, p,293-4)”
Bohr was aware of the work in this direction by John von Neumann (1932), but believed von Neumann to be on a wrong track. Yet the opinion of many other physicists is that von Neumann made the right moves: he brought first the measuring instruments, and eventually the entire physical universe, including the human observers themselves, into the physical system represented by the quantum state. The mathematical theory allows one to do this, and it is unnatural and problematic to do otherwise: any other choice would be an artifact, and would create problems associated with an artificial separation of the unified physical system into differently described parts. This von Neumann approach, in contrast to the Copenhagen approach, allows the quantum theory to be applied both to cosmological problems, and to the mind-body problem.
Most efforts to improve upon the original Copenhagen quantum theory are based on von Neumann’s formulation. That includes the present work. However, almost every other effort to modify the Copenhagen formulation aims to improve it by removing the consciousness of the observer from quantum theory: they seek to bring quantum theory in line with the basic philosophy of the superseded classical theory, in which consciousness is imagined to be a disconnected passive witness.
I see no rationale for this retrograde move. Why should we impose on our understanding of nature the condition that consciousness not be an integral part of it, or an unrealistic stricture of impotence that is belied by the deepest testimony of human experience, and is justified only by a theory now known to be fundamentally false, when the natural form of the superseding theory makes experience efficacious?
I follow, therefore, the von Neumann/Wigner \[vN/W\] formulation, in which the entire physical world is represented by a quantum mechanical state, and each thinking human being is recognized as an aspect of the total reality: each thinking human being is a body/brain/mind system, consisting of a sequence of conscious events, called knowings, bound together by the physical structure that is his body/brain.
However, the basic idea, and the basic rules, of Copenhagen quantum theory are strictly maintained: the quantum state continues to represent knowledge, and each experiential increment in knowledge, or knowing, is accompanied by a reduction of the quantum state to a form compatible with that increase in knowledge.
By keeping these connections intact one retains both the close pragmatic link between the theory and empirical knowledge, which is entailed by the quantum rules, and also the dynamical efficacy of conscious experiences, which follows from the action of the ‘reduction of the quantum state’ that, according to the quantum rules, is the image in the physical world of the conscious event.
In this theory, each conscious event has as its physical image not a reduction of the state of some small physical system that is external to the body/brain of the person to whom the experience belongs, as specified by the Copenhagen approach. Rather, the reduction is in that part of the state of the universe that constitutes the state of the body/brain of the person to whom the experience belongs: the reduction actualizes the pattern of activity that is sometimes called the “neural correlate” of that conscious experience. The theory thus ties in a practical way into the vast field of mind-brain research: i.e., into studies of the correlations between, on the one hand, brain activities of a subject, as measured by instrumental probes and described in physical terms, and, on the other hand, the subjective experiences, as reported by the subject, and described in the language of “folk psychology” \[i.e., in terms of feelings, beliefs, desires, perceptions, and the other psychological features.\]
My aim now is to show in more detail how the conscious intentions of a human being can influence the activities of his brain. To do this I must first explain the two important roles of the quantum observer.
The Two Roles of the Quantum Observer
Most readers will have heard of the Schroedinger equation: it is the quantum analog of Newton’s and Maxwell’s equations of motion of classical mechanics. The Schroedinger equation, like Newton’s and Maxwell’s equations, is deterministic: given the motion of the quantum state for all times prior to the present, the motion for all future time is fixed, insofar as the Schroedinger equation is satisfied for all times.
However, the Schroedinger equation fails when an increment of knowledge occurs: then there is a sudden jump to a ‘reduced’ state, which represents the new state of knowledge. This jump involves the well-known element of quantum randomness.
A superficial understanding of quantum theory might easily lead one to conclude that the entire dynamics is controlled by just the combination of the local-deterministic Schroedinger equation and the elements of quantum randomness. If that were true then our conscious experiences would again become epiphenomenal side-shows.
To see beyond this superficial appearance one must look more closely at the two roles of the observer in quantum theory.
Niels Bohr (1951, p.223), in recounting the important events at the Solvay Conference of 1927, says:
“On that occasion an interesting discussion arose also about how to speak of the appearance of phenomena for which only predictions of a statistical nature can be made. The question was whether, as regards the occurrence of individual events, we should adopt the terminology proposed by Dirac, that we have to do with a choice on the part of ‘nature’ or, as suggested by Heisenberg, we should say that we have to do with a choice on the part of the ‘observer’ constructing the measuring instruments and reading their recording.”
Bohr stressed this choice on the part of the observer:
“…our possibility of handling the measuring instruments allow us only to make a choice between the different complementary types of phenomena we want to study.”
The observer in quantum theory does more than just read the recordings. He also chooses which question will be put to Nature: which aspect of nature his inquiry will probe. I call this important function of the observer ‘The Heisenberg Choice’, to contrast it with the ‘Dirac Choice’, which is the random choice on the part of Nature that Dirac emphasized.
According to quantum theory, the Dirac Choice is a choice between alternatives that are specified by the Heisenberg Choice: the observer must first specify what aspect of the system he intends to measure or probe, and then put in place an instrument that will probe that aspect.
In quantum theory it is the observer who both poses the question, and recognizes the answer. Without some way of specifying what the question is, the quantum rules will not work: the quantum process grinds to a halt.
Nature does not answer, willy-nilly, all questions: it answers only properly posed questions.
A question put to Nature must be one with a Yes-or-No answer, or a sequence of such questions. The question is never of the form “Where will object O turn out to be?”, where the possibilities range in a smooth way over a continuum of values. The question is rather of a form such as: “Will the center of object O — perhaps the pointer on some instrument — be found by the observer to lie in the interval between 6 and 7 on some specified ‘dial’?”
The human observer poses such a question, which must be such that the answer Yes is experientially recognizable. Nature then delivers the answer, Yes or No. Nature’s answers are asserted by quantum theory to conform to certain statistical conditions, which are determined jointly by the question posed and the form of the prior state (of the body/brain of the observer.) The observer can examine the answers that Nature gives, in a long sequence of trials with similar initial conditions, and check the statistical prediction of the theory.
This all works well at the pragmatic Copenhagen level, where the observer stands outside the quantum system, and is simply accepted for what he empirically is and does. But what happens when we pass to the vN/W ontology? The observer then no longer stands outside the quantum system: he becomes a dynamical body/brain/mind system that is an integral dynamical part of the quantum universe.
The basic problem that originally forced the founders of quantum theory to bring the human observers into the theory was that the evolution of the state via the Schroedinger equation does not fix or specify where and when the question is posed, or what the question actually is. This problem was resolved by placing this issue in the hands and mind of the external human observer.
Putting the observer inside the system does not, by itself, resolve this basic problem: the Schroedinger evolution alone remains unable to specify what the question is. Indeed, this bringing of the human observer into the quantum system intensifies the problem, because there is no longer the option of shifting the problem away, to some outside agent. Rather, the problem is brought to a head, because the human agent is precisely the quantum system that is under investigation.
In the Copenhagen formulation the Heisenberg choice was made by the mind of the external human observer. I call this process of choosing the question the Heisenberg process. In the vN/W formulation this choice is made neither by the local deterministic Schroedinger process nor by the global stochastic Dirac process. So there is still an essential need for a third process, the Heisenberg process. Thus the agent’s mind can continue to play its key role. But the mind of the human agent is now an integral part of the dynamical body/brain/mind. We therefore have, now, an intrinsically more complex dynamical situation, one in which a person’s conscious thoughts can — and evidently must, if no new element is brought in — play a role that is not reducible to the combination of the Schroedinger and Dirac processes. In an evolving human brain governed by ionic concentrations and electric and magnetic field gradients, and other continuous field-like properties, rather than by sharply defined properties or discrete well-defined “branches” of the wave function, the problem of specifying, within this amorphous and diffusive context, the well-defined question that is put to nature is quite nontrivial.
Having thus identified this logical opening for efficacious human mental action, I now proceed to fill in the details of how it might work.
How Conscious Thoughts Could Influence Brain Process
Information is the currency of reality. That is the basic message of quantum theory.
The basic unit of information is the “bit”: the answer ‘Yes’ or ‘No’ to some specific question.
In quantum theory the answer ‘Yes’ to a posed question is associated with an operator $`P`$ that depends on the question. The defining property of a projection operator is that $`P`$ squared equals $`P`$: asking the very same question twice is the same as asking it once. The operator associated with the answer ‘No’ to this same question is $`1-P`$. Note that $`(1-P)`$ is also a projection operator: $`(1-P)^2=1-2P+P^2=1-2P+P=(1-P)`$.
To understand the meaning of these operators $`P`$ and $`(1-P)`$ it is helpful to imagine a trivial classical example. Suppose a motionless classical heavy point-like particle is known to be in a box that is otherwise empty. Suppose a certain probability function F represents all that you know about the location of this particle. Suppose you then send some light through the left half of the box that will detect the particle if it is in the left half of the box, but not tell you anything about where in the left half of the box the particle lies. Suppose, moreover, that the position of the particle is undisturbed by this observation. Then let P be the operator that, acting on any function $`f`$, sets that function to zero in the right half of the box, but leaves it unchanged in the left half of the box. Note that two applications of P have exactly the same effect as one application, $`P^2=P`$. The question put to nature by your probing experiment is: “Do you now know that the particle is in the left half of the box?” Then the function $`PF`$ represents, apart from an overall normalization factor, your new state of knowledge if the answer to the posed question was YES. Likewise, the function $`(1-P)F`$ represents, apart from overall normalization, the new probability function if the answer was NO.
The quantum counterpart of F is the operator S. Operators are like functions that do not commute: the order in which you apply them matters. The analog of the classical reduction $`F\rightarrow PF=PFP`$ is $`S\rightarrow PSP`$, and the analog of $`F\rightarrow (1-P)F=(1-P)F(1-P)`$ is $`S\rightarrow (1-P)S(1-P)`$.
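To make the box example concrete, here is a minimal numerical sketch (our own illustration, not part of the original argument); the four-dimensional state space and the particular state are hypothetical, with the first two basis states standing for “the particle is in the left half”:

```python
import numpy as np

# Toy 4-dimensional state space; the first two basis states play the role of
# "the particle is in the left half of the box" (answer Yes).
P = np.diag([1.0, 1.0, 0.0, 0.0])      # projector onto the Yes subspace
Q = np.eye(4) - P                      # complementary projector (answer No)

assert np.allclose(P @ P, P)           # P^2 = P
assert np.allclose(Q @ Q, Q)           # (1-P)^2 = (1-P), as computed in the text

# A density operator S, the quantum counterpart of the probability function F.
psi = np.ones(4) / 2.0                 # a uniform pure state
S = np.outer(psi, psi.conj())

p_yes = np.trace(P @ S).real           # probability that Nature answers Yes
S_yes = P @ S @ P / p_yes              # reduced state if the answer is Yes
S_no = Q @ S @ Q / (1 - p_yes)         # reduced state if the answer is No
print(f"prob(Yes) = {p_yes:.2f}")      # 0.50 for this state
```

The trace of $`PS`$ supplies the statistical weight that quantum theory assigns to the answer Yes.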
This is how the quantum state represents information and knowledge, and how increments in knowledge affect the quantum state.
I have described in my book (Stapp, 1993, Ch 6) my conception of how the quantum mind/brain works. It rests on some ideas/findings of William James.
William James (1910, p.1062) says that:
“a discrete composition is what actually obtains in our perceptual experience. We either perceive nothing, or something that is there in sensible amount. This fact is what in psychology is known as the law of the ‘threshold’. Either your experience is of no content, of no change, or it is of a perceptual amount of content or change. Your acquaintance with reality grows literally by buds or drops of perception. Intellectually and on reflection you can divide these into components, but as immediately given they come totally or not at all.”
This wholeness of each perceptual experience is a main conclusion, and theme, of Jamesian psychology. It fits neatly with the quantum ontology.
Given a well posed question about the world to which one’s attention is directed quantum theory says that nature either gives the affirmative answer, in which case there occurs an experience describable as “Yes, I perceive it!” or, alternatively, no experience occurs in connection with that question.
In vN/W theory the ‘Yes’ answer is represented by a projection operator P that acts on the degrees of freedom of the brain of the observer, and reduces the state of this brain — and also the state S of the universe — to one compatible with that answer ‘Yes’: S is reduced to PSP. If the answer is ‘No’, then the projection operator $`(1-P)`$ is applied to the state S: S is reduced to (1-P)S(1-P). \[See Stapp (1998b) for technical details.\]
James (1890, p.257) asserts that each conscious experience, though it comes to us whole, has a sequence of temporal components ordered in accordance with the ordering in which they have entered into one’s stream of conscious experiences. These components are like the columns in a marching band: at each viewing only a subset of the columns is in front of the viewing stand. At a later viewing a new column has appeared on one end, and one has disappeared at the other. (cf. Stapp, 1993, p. 158.) It is this possibility of having a sequence of different components present in a single thought that allows conscious analysis and comparisons to be made.
Infants soon grasp the concept of their bodies in interaction with a world of persisting objects about them. This suggests that the brain of an alert person normally contains a “neural” representation of the current state of his body and the world about him. I assume that such a representation exists, and call it the body-world schema. (Stapp, 1993, Ch. 6)
Consciously directed action is achieved, according to this theory, by means of a ‘projected’ (into the future) temporal component of the thought, and of the body-world schema actualized by the thought: the intended action is represented in this projected component as a mental image of the intended action, and as a corresponding representation in the brain, (i.e., in a body-world schema) of that intended action. The neural activities that automatically flow from the associated body-world schema tend to bring the intended bodily action into being.
The coherence and directedness of a person’s stream of consciousness is maintained, according to this theory, because the instructions effectively issued to the unconscious processes of the brain by the natural dynamical unfolding that issues from the actualized body-world schema include not only the instructions for the initiation or continuation of motor actions but also instructions for the initiation or continuation of mental processing. This means that the actualization associated with one thought leads physically to the emergence of the propensities for the occurrence of the next thought, or of later thoughts. (Stapp, 1993, Ch. 6)
The idea here is that the action — on the state $`S`$ — of the projection operator P that is associated with a thought $`T`$ will actualize a pattern of brain activity that will dynamically evolve in such a way as to tend to create a subsequent state that is likely to achieve the intention of the thought $`T`$. The natural cause of this positive correlation between the experiential intention of the thought $`T`$ and the matching confirmatory experience of a succeeding thought $`T^{\prime }`$ is presumably set in place during the formation of brain structure, in the course of the person’s interaction with his environment, by the reinforcement of brain structures that result in empirically successful pairings between experienced intentions and subsequently experienced perceptions. These can be physically compared because both are expressed physically by similar body-world schemas.
As noted previously, the patterns of brain activity that are actualized by an event unfold not only into instructions to the motor cortex to institute intended motor actions. They unfold also into instructions for the creation of the conditions for the next experiential event. But the Heisenberg uncertainties in, for example, the locations of the atomic and ionic constituents of the nerve terminals, and more generally of the entire brain, necessarily engender a quantum diffusion in the evolving state of the brain. Thus the dynamically generated state that is the pre-condition for the next event will not correspond exactly to a well defined unique question: some ‘scatter’ will invariably creep in. However, a specific question must be posed in order for the next quantum event to occur!
This problem of how to specify “the next question” is the central problem in most attempts to ‘improve’ the Copenhagen interpretation by excluding “the observer”. If one eliminates the observer, then something else must be brought in to fix the next question: i.e., to make the Heisenberg choice.
The main idea here is to continue to allow the question to be posed by the ‘observer’, who is now an integral part of the quantum system: the observer is a body/brain/mind subsystem. The Heisenberg Choice, which is the choice of an operator P that acts macroscopically, as a unit, on the observing system, is not fixed by the Schroedinger equation, or by the Dirac Choice, so it is most naturally fixed by the experiential part of that system, which seems to pertain to macroscopic aspects of brain activity taken as units.
Each experience is asserted to have an intentional aspect, which is its experiential goal or aim, and an attentional aspect, which is an experiential focussing on an updating of the current status of the person’s idea of his body, mind, and environment.
When an action is initiated by some thought, part of the instruction is normally to monitor, by attention, the ensuing action, in order to check it against the intended action.
In order for the appropriate experiential check to occur, the appropriate question must be asked. The intended action is formulated in experiential terms, and the appropriate monitoring question is whether this intended experience matches the subsequently occurring experience. This connection has the form of the transference of an experience defined by the intentional aspect of an earlier experience into the experiential question attended to — i.e., posed — by a later experience.
This way of closing the causal gap associated with the Heisenberg Choice introduces two parallel lines of causal connection in the body/brain/mind system. On the one hand, there is the physical line that unfolds — under the control of the local deterministic Schroedinger equation — from a prior event, and that generates the physical potentialities for succeeding possible events. Acting in parallel to this physical line of causation, there is a mental line of causation that transfers the experiential intention of an earlier event into an experiential attention of a later event. These two causal strands, one physical and one mental, join to form the physical and mental poles of a succeeding quantum event.
In this model there are three intertwined factors in the causal structure: (1), the local causal structure generated by the Schroedinger equation; (2), the Heisenberg Choices, which are based on the experiential aspects of the body/brain/mind subsystem that constitutes a person; and (3), the Dirac Choices on the part of nature.
The point of all this is that there is within the vN/W ontology a logical necessity, in order for the quantum process to proceed, for some process to fix the Heisenberg Choice of the operator $`P`$, which acts over an extended portion of the body/brain of the person. Neither the Schroedinger evolution nor the Dirac stochastic choice can do the job. The only other known aspect of the system is our conscious experience. It is possible, and natural, to use this mind part of the body/brain/mind system to produce the needed choice.
The mere logical possibility of a mind-matter interaction such as this, within the vN/W formulation, indicates that quantum theory has the potential of permitting the experiential aspects of reality to enter into the causal structure of body/brain/mind dynamics, and to enter in a way that is not fully reducible to a combination of local mechanical causation specified by the Schroedinger equation and the random quantum choices. The requirements of quantum dynamics demand some further process, and an experience-based process that fits both our ideas about our psychological make-up and also the quantum rules that connect our experiences to the informational structure carried by the evolving physical state of the brain seems to be the perfect candidate.
What has been achieved here is, of course, just a working out in more detail of Wigner’s idea that quantum theory, in the von Neumann form, allows for mind — pure conscious experience — to interact with the ‘physical’ aspect of nature, as that aspect is represented in quantum theory. What permits this interaction is the fact that the physical aspect of nature, as it is represented in quantum theory, is informational in character, and hence links naturally to increments in knowledge. Because each increment in knowledge acts directly upon the quantum state, and reduces it to the informational structure compatible with the new knowledge, there is, right from the outset, an action of mind on the physical world. I have just worked out a possible scenario in more detail, and in particular have emphasized how the causal gap associated with the Heisenberg Choice allows mind to enter into the dynamics in a way that is quite in line with our intuition about the efficacy of our thoughts. It is therefore simply wrong to proclaim that the findings of science entail that our intuitions about the nature of our thoughts are necessarily illusory or false. Rather, it is completely in line with contemporary science to hold our thoughts to be causally efficacious, and reducible neither to the local deterministic Schroedinger process, nor to that process combined with stochastic Dirac choices on the part of nature.
Idealism, Materialism, and Quantum Informationism.
I have stressed just now the idea-like character of the physical state of the universe, within vN/W quantum theory. This suggests that the theory may conform to the tenets of idealism. This is partially true. The quantum state undergoes, when a fact becomes fixed in a local region, a sudden jump that extends over vast reaches of space. This gives the physical state the character of a representation of knowledge rather than a representation of substantive matter. When not jumping the state represents potentialities or probabilities for actual events to occur. Potentialities and probabilities are normally conceived to be idea-like qualities, not material realities. So as regards the intuitive conception of the intrinsic nature of what is represented within the theory by the physical state it certainly is correct to say that it is idea-like.
On the other hand, the physical state has a mathematical structure, and a behaviour that is governed by the mathematical properties. It evolves much of the time in accordance with local deterministic laws that are direct quantum counterparts of the local deterministic laws of classical mechanics. Thus as regards various structural and causal properties the physical state certainly has aspects that we normally associate with matter.
So this vN/W quantum conception of nature ends up having both idea-like and matter-like qualities. The causal law involves two complementary modes of evolution that, at least at the present level of theoretical development, are quite distinct. One of these modes involves a gradual change that is governed by local deterministic laws, and hence is matter-like in character. The other mode is abrupt, and is idea-like in two respects.
This hybrid ontology can be called an information-based reality. Each answer, Yes or No, to a quantum question is one bit of information that is generated by a mental-type event. The physical repository of this information is the quantum state of the universe: the new information is recorded as a reduction of the quantum state of the universe to a new form, which then evolves deterministically in accordance with the Schroedinger equation. Thus, according to this quantum conception of nature, the physical universe — represented by the quantum state — is a repository of evolving information that has the dispositional power to create more information.
Quantum Zeno Effect and The Efficacy of Mind
In the model described above the specifically mental effects are expressed solely through the choice and the timings of the questions posed. The question then arises as to whether just the choices about which questions are asked, with no control over which answers are returned, can influence the dynamical evolution of a system.
The answer is ‘Yes’: the evolution of a quantum state can be greatly influenced by the choices and timings of the questions put to nature.
The most striking example of this is the Quantum Zeno Effect. (Chiu, Sudarshan, and Misra, 1977; Itano, et al., 1990). In quantum theory if one poses repeatedly, in very rapid succession, the same Yes-or-No question, and the answer to the first of these posings is Yes, then in the limit of very rapid-fire posings the evolution will be confined to the subspace in which the answer is Yes: the effective Hamiltonian will change from H to PHP, where P is the projection operator onto the Yes states. This means that evolution of the system is effectively “boxed in” in the subspace where the answer continues to be Yes, if the question is posed sufficiently rapidly, even if it would otherwise run away from that region.
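A small numerical sketch (ours; the two-level Hamiltonian and the numbers are chosen purely for illustration) makes the effect vivid: a free evolution that would completely depopulate the ‘Yes’ state is pinned there by sufficiently rapid posings of the same question.

```python
import numpy as np

omega, T = 1.0, np.pi    # Rabi frequency and total time: freely, |0> empties out
yes = np.array([1.0, 0.0], dtype=complex)     # the 'Yes' state |0>

for n in [1, 10, 100, 1000]:
    theta = omega * T / n                     # rotation angle between posings
    # U = exp(-i H T/n) for H = (omega/2) sigma_x:
    U = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                  [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    survival = 1.0
    for _ in range(n):
        psi = U @ yes                         # free evolution since the last Yes
        survival *= abs(psi[0]) ** 2          # probability the answer is again Yes
        # after a Yes answer the state is projected back onto |0>, i.e. 'yes'
    print(f"N = {n:4d} posings: P(Yes every time) = {survival:.4f}")
# N = 1 gives 0.0000; N = 1000 gives ~0.9975: rapid questioning freezes the state,
# consistent with the effective Hamiltonian PHP (here PHP = 0, so no evolution).
```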
This fact that the Hamiltonian is effectively changed in this macroscopic way shows that the choices and timings of which questions are asked can affect observable properties.
Free Will and Causation
Personal responsibility is not reconciled with the quantum understanding of causation by making our thoughts free, in the sense of being completely unconstrained by anything at all. It is solved, rather, by making our thoughts part of the causal structure of the body/brain/mind system, but a part that is not under the complete dominion of myopic (i.e., microlocal) causation and random chance. Our thoughts then become aspects of the causal structure that are entwined with the micro-physical and random elements, yet are not completely reducible to them, or replaceable by them.
Pragmatic Theory of the Mind/Brain
This vN/W theory gives a conceivable ontology. However, for practical purposes it can be viewed as a pragmatic theory of the human psycho-physical structure. It is deeper and more realistic than the Copenhagen version because it links our thoughts not directly to objects (instruments) in the external world, but rather to patterns of brain activity. It provides a theoretical structure based explicitly on the two kinds of data at our disposal, namely the experiences of the subject, as he describes these experiences to himself and his colleagues, and the experiences of the observers of that subject, as they describe their experiences to themselves and their colleagues. These two kinds of descriptions are linked together by a theoretical structure that neatly, precisely, and automatically accounts, in a single uniform and practical way, for all known quantum and classical effects. But, in contrast to the classical-physics based model, it has a ready-made place for an efficacious mind, and provides a rational understanding of how such a mind could be causally enmeshed with brain processes.
If one adopts this pragmatic view then one need never consider the question of nonhuman minds: the theory then covers, by definition, the science that we human beings create to account for the structure of our human experiences.
This pragmatic theory should provide a satisfactory basis for a rational science of the human mind/brain. It gives a structure that coherently combines the psychological and physical aspects of human behavior. However, it cannot be expected to be exactly true, for it would entail the existence of collapse events associated with increments in human knowledge, but no analogous events associated with non-humans.
One cannot expect our species to play such a special role in nature. So this human-based pragmatic version must be understood, from the ontological standpoint, as merely the first stage in the development of a better ontological theory: one that accommodates the evolutionary precursors to the human knowings that the pragmatic theory is based upon.
So far there is no known empirical evidence for the existence of any reduction events not associated with human knowings. This impedes, naturally, the development of a science that encompasses such other events.
Future Developments: Representation and Replication
The primary purpose of this paper has been to describe the general features of a pragmatic theory of the human mind/brain that allows our thoughts to be causally efficacious yet not controlled by local-mechanistic laws combined with random chance. Eventually, however, one would like to expand this pragmatic version into a satisfactory ontological theory.
Human experiences are closely connected to human brains. Hence events similar to human experiences would presumably not exist either in primitive life forms, or before life began. Hence a more general theory that could deal with the evolution of consciousness would presumably have to be based on something other than the “experiential increments in knowledge” that were the basis of the pragmatic version described above.
Dennett (1994, p.236) identified intentionality (aboutness) as a phenomenon more fundamental than consciousness, upon which he would build his theory of consciousness. ‘Aboutness’ pertains to representation: the representation of one thing in another.
The body-world schema is the brain’s representation of the body and its environment. Thus it constitutes, in the theory of consciousness described above, an element of “aboutness” that could be seized upon as the basis of a more general theory.
However, there lies at the base of the quantum model described above an even more rudimentary element: self-replication. The basic process in the model is the creation of events that create likenesses of themselves. This tendency of thoughts to create likenesses of themselves helps to keep a train of thought on track.
Abstracting from our specific model of human consciousness one sees the skeleton of a general process of self-replication.
Fundamentally, the theory described above is a theory of events, where each event has an attentional aspect and an intentional aspect. The attentional aspect of an event specifies an item of information that fixes the operator $`P`$ associated with that event. The intentional aspect of the event specifies the functional property injected into the dynamics by the action of $`P`$ on $`S`$. This functional property is a tendency of the Schroedinger-directed dynamics to produce a future event whose attentional aspect is the same as that of the event that is producing this tendency. The effect of these interlocking processes is to inject into the dynamics a directional tendency, based on approximate self-replication, that acts against the chaotic diffusive tendency generated by the Schroedinger equation. Such a process could occur before the advent of our species, and of life itself, and it could contribute to their emergence.
Conflation and Identity
A person’s thoughts and ideas appear — to that person himself — to be able to do things: a person’s mental states seem to be able to cause his body to move about in intended ways. Thus thoughts seem to have functional power. Indeed, the idea of functionalism is that what makes thoughts and other mental states what they are is precisely their functional power: e.g., my pain is a pain by virtue of its functional or causal relationship to other aspects of the body/brain/mind system. Of course, this would be merely a formal definition of the term “mental state” if it did not correspond to the occurrence of an associated element in a person’s stream of consciousness: in the context of the present study — of the connection between our brains and our inner experiential lives — the occurrence of a mental state in a person’s mind is supposed to mean the occurrence of a corresponding element in his stream of consciousness.
The identity theory of mind claims that each mental state is identical to some process in a brain. But combining this idea with the classical-physics conception of the physical universe leads to problems. They stem from the fact that the precepts of classical physical theory entail that the entire causal structure of any complex physical system is completely determined by its microscopic physical structure alone. Alternative high-level descriptions of certain complex physical systems might be far more useful to us in practice, but they are in principle redundant and unnecessary if the principles of classical physics hold. Thus it is accurate to say that the heat of the flame caused the paper to ignite, or that the tornado ripped the roofs off of the houses and left a path of destruction. But according to the precepts of classical physical theory the high-level causes are mere mathematical reorganizations of microscopic causes that are completely explainable micro-locally within classical physical theory. Nothing is needed beyond mathematical reorganization and — in order for us to be able to apply the theory — the assumption that we can empirically know, through observations via our senses, the approximate relative locations and shapes of sufficiently large macroscopically localized assemblies of the microscopic physical elements that the theory posits.
In the examples just described our experiences themselves are not the causes of the ignition or destruction: our experiences merely help us to identify the causes. In fact, the idea behind classical physical theory is that the local physical variables of the theory represent a collection of ontologically distinct physical realities each of whose ontological status is (1), intrinsically microlocal, (2), ontologically independent of our experiences, and (3), dynamically non-dependent upon experiences. That is why quantum theory was such a radical break with tradition: in quantum theory the physical description became enmeshed with our experiential knowledge, and the physical state became causally dependent upon our mental states.
Quantum theory is, in this respect, somewhat similar to the identity theory of mind: both entangle mind and physical process already at the ontological level. But the idea of the classical identity theory of the mind is to hang onto the classical conception of physical reality, and aver that a correct understanding of the true nature of a conscious thought would reveal it to be none other than a classically describable physical process that brings about what the thought intends, given the appropriate alignment of the relevant physical mechanisms.
That idea is, in fact, what would naturally emerge from quantum theory in the classical limit where the difference between Planck’s constant and zero can be ignored, and the positions of particles and their conjugate momenta can both be regarded as well defined, relative to any question that is posed. In that limit there is no effective quantum dispersion caused by the Heisenberg uncertainty principle, and hence no indeterminism, and the only Heisenberg Choices of questions about a future state that can get an answer ‘Yes’ are those that are in accord with the functional properties of the present state. So there would be, in that classical approximation to the quantum process described above, a collapse of the two lines of causation, the physical and the mental, into a single one that is fixed by the local classical deterministic rules. Thus in the classical approximation the mental process would indeed be doing nothing beyond what the classical physical process is already doing, and the two processes might seem to be the same process. But Planck’s constant is not zero, and the difference from zero introduces quantum effects that separate the two lines of causation, and allow their different causal roles to be distinguished.
The identity theory of mind raises puzzles. Why, in a world composed primarily of ontologically independent micro-realities, each able to access or know only things in its immediate microscopic environment, and each completely determined by micro-causal connections from its past, should there be ontological realities such as conscious thoughts that can grasp or know, as wholes, aspects of huge macroscopic collections of these micro-realities, and that can have intentions pertaining to the future development of these macroscopic aspects, when that future development is already completely fixed, micro-locally, by micro-realities in the past?
The quantum treatment discloses that these puzzles arise from the conflation in the classical limit of two very different but interlocked causal processes, one micro-causal, bound by the past, and blind to the future, the other macro-causal, probing the present, and projecting to the future.
Mental Force and the Volitional Brain
The psychiatrist Jeffrey Schwartz (1999) has described a clinically successful technique for treating patients with obsessive compulsive disorder (OCD). The treatment is based on a program that trains the patient to believe that his own willful redirection of his attention away from intense urges of a kind associated with pathological activity within circuitry of the basal ganglia, and toward adaptive functional behaviours, can, with sufficient persistent effort, systematically change both the intrusive, maladaptive, obsessive-compulsive symptoms and the pathological brain activity associated with them. This treatment is in line with the quantum mechanical understanding of mind/brain dynamics developed above, in which the mental/experiential component of the causal structure enters brain dynamics via intentions that govern attentions that influence brain activity.
According to classical physical theory “a brain was always going to do what it was caused to do by local mechanical disturbances,” and the idea that one’s “will” is actually able to cause anything at all is “a benign user illusion”. Thus Schwartz’s treatment amounts, according to this classical conceptualization, to deluding the patient into believing a lie: according to that classical view Schwartz’s intense therapy causes directly, in the patient’s behaviour, a mechanical shift that the patient delusionally believes is the result of his own intense effort to redirect his activities, for the purpose of effecting an eventual cure, but which (felt effort) is actually only a mysterious illusionary by-product of his altered behaviour.
The presumption about the mind/brain that is the basis of Schwartz’s successful clinical treatment, and the training of his patients, is that willful redirection of attention is efficacious. His success does not prove that ‘will’ is efficacious, but it does constitute prima facie evidence that it is. In fact, the belief that our thoughts can influence our actions is so basic to our entire idea of ourselves and our place in nature, and is so essential to our actual functioning in this world, that any suggestion that this idea is false would become plausible only under extremely coercive conditions, such as its incompatibility with basic physics. But no such coercion exists. Contemporary physical theory does allow our experiences, per se, to be truly efficacious and non-reducible: our experiences are elements of the causal structure that do necessary things that nothing else in the theory can do. Thus science, if pursued with sufficient care, demands neither the cloistering of disciplines nor the dismissal of the apparent causal effects of our conscious thoughts upon our physical actions as user illusions.
References
Niels Bohr (1934), Atomic Theory and the Description of Nature, Cambridge Univ. Press, Cambridge.
Niels Bohr (1936), Causality and Complementarity, Philos. of Science, 4. (Address to Second International Congress for the Unity of Science, June, 1936).
Niels Bohr (1951), in reference A. Einstein (1951).
Niels Bohr (1958), Atomic Physics and Human Knowledge, Wiley, New York.
C.B. Chiu, E.C.G. Sudarshan, and B. Misra (1977), Phys. Rev. D 16, 520.
Daniel Dennett (1994), in A Companion to the Philosophy of Mind, ed. Samuel Guttenplan, Blackwell, Oxford. ISBN 0-631-17953-4.
A. Einstein (1951), Albert Einstein: Philosopher-Scientist, ed. P.A. Schilpp, Tudor, New York.
W. Itano, D. Heinzen, J. Bollinger, and D. Wineland (1990), Phys. Rev. A 41, 2295-2300.
Heisenberg, W. (1958a), ‘The representation of nature in contemporary physics’, Daedalus 87, 95-108.
Heisenberg, W. (1958b) Physics and Philosophy (New York: Harper and Row).
William James (1910), Some Problems in Philosophy, Ch. X; in William James: Writings 1902-1910, The Library of America, New York (1987). ISBN 0-940450-18-0.
William James (1890), The Principles of Psychology, Vol. I, Dover, New York. ISBN 0-486-20381-6.
E. Joos (1986), Annals, NY Acad. Sci. Vol 480 6-13. ISBN 0-89766-355-1.
Richard Rorty (1979), Philosophy and the Mirror of Nature. Princeton U.P.
Jeffrey M. Schwartz (1999), A Role for Volition and Attention in the Generation of New Brain Circuitry: Toward a Neurobiology of Mental Force, Journal of Consciousness Studies, June/July 1999.
Henry P. Stapp (1972), The Copenhagen Interpretation, Amer. J. Phys. 40, 1098-1116. Reprinted in Stapp (1993).
Henry P. Stapp (1993), Mind, Matter, and Quantum Mechanics, Springer-Verlag, New York, Berlin, Heidelberg. ISBN 0-387-56289-3.
Henry P. Stapp (1998a), Pragmatic approach to consciousness, in Brain and Values: Is a Biological Science of Values Possible? ed. Karl H. Pribram, Lawrence Erlbaum, Mahwah, NJ. ISBN 0-8058-3154-1.
Or at www-physics.lbl.gov/~stapp/stappfiles.html, New Book “Knowings” (Book1.txt).
Henry P. Stapp (1998b), at www-physics.lbl.gov/~stapp/stappfiles.html. See “Basics” for mathematical details about the vN/W formalism.
Steven Pinker (1997), How the Mind Works, Norton, NY. ISBN 0-39304545-8.
W. Tittel, J. Brendel, H. Zbinden, and N. Gisin (1998), Physical Review Letters 81, 3563-3566.
J. von Neumann (1932), Mathematical Foundations of Quantum Mechanics, Princeton U.P., Princeton NJ, 1955.
Wigner, E. (1961) ‘The probability of the existence of a self-reproducing unit’, in The Logic of Personal Knowledge ed. M. Polyani (London: Routledge & Paul) pp. 231-238.
W. Zurek (1986) Annals, NY Acad. Sci. Vol 480, 89-97. ISBN 0-89766-355-1.
## 1 Introduction
It is well known that baryons can be obtained from topological solutions, known as SU(2) Skyrmions, since the homotopy group $`\mathrm{\Pi }_3(SU(2))=Z`$ admits fermions. Using the collective coordinates of the isospin rotation of the Skyrmion, Adkins et al. have performed a semiclassical quantization to obtain the static properties of baryons within about 30$`\%`$ of the corresponding experimental data.
On the other hand, in order to quantize physical systems subject to constraints, the Dirac quantization scheme has been widely used. However, whenever we adopt the Dirac method, we frequently meet the problem of operator ordering ambiguity. In order to avoid this problem, Batalin, Fradkin, and Tyutin (BFT) developed a method which converts second class constraints into first class ones by introducing auxiliary fields. Recently, we have clarified the relation between the Dirac scheme and the BFT one, which had been obscure and unsettled up to now, in the framework of the SU(2) Skyrmion model.
The motivation of this paper is to systematically apply the BFT and Batalin, Fradkin, and Vilkovisky (BFV)-BRST methods to the SU(2) Skyrmion as a phenomenological example of a topological system, and to consider the problem of finding all local symmetries of the system through both the Hamiltonian and Lagrangian approaches. As a result, we will show that these approaches give the same symmetry structure of the SU(2) Skyrmion. In section 2, we will briefly recapitulate the construction of the first class SU(2) Skyrmion Hamiltonian. In section 3, we will study the full symmetry structure of the system in the recently proposed pure Hamiltonian approach. In section 4, we will treat the symmetry structure of the Lagrangian, which includes the Wess-Zumino (WZ) term, corresponding to the first class Hamiltonian in the Lagrangian approach, to compare with the results of the previous section. Finally, we will construct the BRST invariant gauge fixed Lagrangian from the extended one corresponding to the first class Hamiltonian in the BFV scheme.
## 2 BFT Hamiltonian for SU(2) Skyrmion
Now we start with the Skyrmion Lagrangian of the form
$$L_{SM}=\int d^3r\left[\frac{f_\pi ^2}{4}\mathrm{tr}(\partial _\mu U^{\mathrm{\dagger }}\partial ^\mu U)+\frac{1}{32e^2}\mathrm{tr}[U^{\mathrm{\dagger }}\partial _\mu U,U^{\mathrm{\dagger }}\partial _\nu U]^2\right]$$
(2.1)
where $`f_\pi `$ is the pion decay constant, $`e`$ is a dimensionless parameter, and $`U`$ is an SU(2) matrix satisfying the boundary condition $`\lim _{r\rightarrow \infty }U=I`$ so that the pion field vanishes as $`r`$ goes to infinity. On the other hand, in the Skyrmion model, spin and isospin states can be treated by the collective coordinates $`a^\mu =(a^0,\vec{a})`$ $`(\mu =0,1,2,3)`$ corresponding to the spin and isospin rotations
$$A(t)=a^0+i\vec{a}\cdot \vec{\tau }.$$
(2.2)
With the hedgehog ansatz and the collective rotation $`A(t)\in `$ SU(2), the Skyrmion Lagrangian can be written as
$$L_{SM}=-E+2\mathcal{I}\dot{a}^\mu \dot{a}^\mu $$
(2.3)
where $`E`$ and $`\mathcal{I}`$ are the soliton energy and the moment of inertia, respectively. Introducing the canonical momenta $`\pi ^\mu =4\mathcal{I}\dot{a}^\mu `$ conjugate to the collective coordinates $`a^\mu `$ one can then obtain the canonical Hamiltonian
$$H=E+\frac{1}{8\mathcal{I}}\pi ^\mu \pi ^\mu .$$
(2.4)
Then, our system has the following second class constraints
$`\mathrm{\Omega }_1`$ $`=`$ $`a^\mu a^\mu -1\approx 0,`$
$`\mathrm{\Omega }_2`$ $`=`$ $`a^\mu \pi ^\mu \approx 0,`$ (2.5)
to yield the Poisson algebra with $`ϵ^{12}=-ϵ^{21}=1`$
$$\mathrm{\Delta }_{kk^{\prime }}=\{\mathrm{\Omega }_k,\mathrm{\Omega }_{k^{\prime }}\}=2ϵ^{kk^{\prime }}a^\mu a^\mu .$$
(2.6)
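As a quick cross-check of the algebra (2.6), the bracket can be evaluated symbolically; the following sketch (our own verification, not part of the original derivation) uses a hypothetical helper `pb` for the canonical Poisson bracket:

```python
import sympy as sp

# Collective coordinates a^mu and conjugate momenta pi^mu, mu = 0,...,3.
a = sp.symbols('a0:4', real=True)
p = sp.symbols('p0:4', real=True)

def pb(f, g):
    """Canonical Poisson bracket {f, g} on the (a, pi) phase space."""
    return sum(sp.diff(f, a[m]) * sp.diff(g, p[m])
               - sp.diff(f, p[m]) * sp.diff(g, a[m]) for m in range(4))

Omega1 = sum(x**2 for x in a) - 1                # a.a - 1, Eq. (2.5)
Omega2 = sum(a[m] * p[m] for m in range(4))      # a.pi,    Eq. (2.5)

print(sp.factor(pb(Omega1, Omega2)))   # 2*(a0**2 + a1**2 + a2**2 + a3**2) = 2 a.a
```

The nonvanishing of this bracket on the constraint surface ($`a^\mu a^\mu =1`$) is precisely what makes the pair second class.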
Next, let us briefly recapitulate the construction of the first class SU(2) Hamiltonian. Following the abelian BFT formalism which systematically converts the second class constraints into first class ones, we introduce two auxiliary fields $`\mathrm{\Phi }^i`$ corresponding to $`\mathrm{\Omega }_i`$ with the Poisson brackets
$$\{\mathrm{\Phi }^i,\mathrm{\Phi }^j\}=ϵ^{ij}.$$
(2.7)
One can then obtain the following first class constraints
$`\stackrel{~}{\mathrm{\Omega }}_1`$ $`=`$ $`\mathrm{\Omega }_1+2\mathrm{\Phi }^1`$
$`\stackrel{~}{\mathrm{\Omega }}_2`$ $`=`$ $`\mathrm{\Omega }_2-a^\mu a^\mu \mathrm{\Phi }^2`$ (2.8)
satisfying the first class constraint algebra $`\{\stackrel{~}{\mathrm{\Omega }}_i,\stackrel{~}{\mathrm{\Omega }}_j\}=0`$. Then, by demanding that they are strongly involutive in the extended phase space, i.e., $`\{\stackrel{~}{\mathrm{\Omega }}_i,\stackrel{~}{\mathcal{F}}\}=0`$, we construct the first class BFT physical fields $`\stackrel{~}{\mathcal{F}}=(\stackrel{~}{a}^\mu ,\stackrel{~}{\pi }^\mu )`$ corresponding to the original fields $`\mathcal{F}=(a^\mu ,\pi ^\mu )`$, as a power series of the auxiliary fields $`\mathrm{\Phi }^i`$, as follows<sup>1</sup><sup>1</sup>1 Here one notes that the Poisson brackets of $`\stackrel{~}{\mathcal{F}}`$’s have the same structure as that of the corresponding Dirac brackets .
$`\stackrel{~}{a}^\mu `$ $`=`$ $`a^\mu \left[1-{\displaystyle \underset{n=1}{\overset{\infty }{\sum }}}{\displaystyle \frac{(-1)^n(2n-3)!!}{n!}}{\displaystyle \frac{(\mathrm{\Phi }^1)^n}{(a^\nu a^\nu )^n}}\right]`$
$`\stackrel{~}{\pi }^\mu `$ $`=`$ $`(\pi ^\mu -a^\mu \mathrm{\Phi }^2)\left[1+{\displaystyle \underset{n=1}{\overset{\infty }{\sum }}}{\displaystyle \frac{(-1)^n(2n-1)!!}{n!}}{\displaystyle \frac{(\mathrm{\Phi }^1)^n}{(a^\nu a^\nu )^n}}\right].`$ (2.9)
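The two series can be summed in closed form: reading off the coefficients, $`\stackrel{~}{a}^\mu =a^\mu (1+2\mathrm{\Phi }^1/a^\nu a^\nu )^{1/2}`$ and $`\stackrel{~}{\pi }^\mu =(\pi ^\mu -a^\mu \mathrm{\Phi }^2)(1+2\mathrm{\Phi }^1/a^\nu a^\nu )^{-1/2}`$. This identification is our own reading of Eq. (2.9); a short symbolic check of the first few terms:

```python
import sympy as sp

x = sp.symbols('x')   # shorthand for Phi^1 / (a.a)

# Partial sums of the two series in Eq. (2.9); factorial2 is the double
# factorial, with sympy's convention factorial2(-1) = 1.
s_a = 1 - sum((-1)**n * sp.factorial2(2*n - 3) / sp.factorial(n) * x**n
              for n in range(1, 7))
s_pi = 1 + sum((-1)**n * sp.factorial2(2*n - 1) / sp.factorial(n) * x**n
               for n in range(1, 7))

# Compare against the Taylor expansions of the conjectured closed forms.
print(sp.expand(s_a - sp.series(sp.sqrt(1 + 2*x), x, 0, 7).removeO()))       # 0
print(sp.expand(s_pi - sp.series(1 / sp.sqrt(1 + 2*x), x, 0, 7).removeO()))  # 0
```

With these closed forms one has $`\stackrel{~}{a}^\mu \stackrel{~}{a}^\mu =a^\mu a^\mu +2\mathrm{\Phi }^1`$, so that $`\stackrel{~}{\mathrm{\Omega }}_1=\stackrel{~}{a}^\mu \stackrel{~}{a}^\mu -1`$, as one expects from replacing $`a^\mu `$ by $`\stackrel{~}{a}^\mu `$ in $`\mathrm{\Omega }_1`$.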
Now, exploiting the novel property that any functional $`𝒦(\stackrel{~}{\mathcal{F}})`$ of the first class fields $`\stackrel{~}{\mathcal{F}}`$ will also be first class, i.e.,
$$\stackrel{~}{𝒦}(\mathcal{F};\mathrm{\Phi })=𝒦(\stackrel{~}{\mathcal{F}})$$
one can directly construct the first class Hamiltonian in terms of the above BFT physical variables as follows
$$\stackrel{~}{H}=E+\frac{1}{8\mathcal{I}}\stackrel{~}{\pi }^\mu \stackrel{~}{\pi }^\mu $$
(2.10)
omitting the infinitely iterated standard procedure. As a result, the corresponding first class Hamiltonian, written in terms of the original and auxiliary fields, is given by
$$\stackrel{~}{H}=E+\frac{1}{8\mathcal{I}}(\pi ^\mu -a^\mu \mathrm{\Phi }^2)(\pi ^\mu -a^\mu \mathrm{\Phi }^2)\frac{a^\nu a^\nu }{a^\nu a^\nu +2\mathrm{\Phi }^1}$$
(2.11)
which is also strongly involutive with the first class constraints
$$\{\stackrel{~}{\mathrm{\Omega }}_i,\stackrel{~}{H}\}=0.$$
(2.12)
However, with the first class Hamiltonian (2.11), one cannot naturally generate the first class Gauss’ law constraint from the time evolution of the primary constraint $`\stackrel{~}{\mathrm{\Omega }}_1`$. Now, by introducing an additional term proportional to the first class constraint $`\stackrel{~}{\mathrm{\Omega }}_2`$ into $`\stackrel{~}{H}`$, we obtain an equivalent first class Hamiltonian
$$\stackrel{~}{H}^{\prime }=\stackrel{~}{H}+\frac{1}{4\mathcal{I}}\mathrm{\Phi }^2\stackrel{~}{\mathrm{\Omega }}_2$$
(2.13)
which naturally generates the Gauss’ law constraint
$$\{\stackrel{~}{\mathrm{\Omega }}_1,\stackrel{~}{H}^{\prime }\}=\frac{1}{2\mathcal{I}}\stackrel{~}{\mathrm{\Omega }}_2,\{\stackrel{~}{\mathrm{\Omega }}_2,\stackrel{~}{H}^{\prime }\}=0.$$
(2.14)
Here one notes that $`\stackrel{~}{H}`$ and $`\stackrel{~}{H}^{\prime }`$ act on physical states in the same way since such states are annihilated by the first class constraints.
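Because $`\stackrel{~}{H}^{\prime }`$ is rational in all the phase space variables, the algebra (2.12) and (2.14) can be verified symbolically. The following sketch is our own consistency check, with $`E`$ and $`\mathcal{I}`$ kept as free symbols:

```python
import sympy as sp

a = sp.symbols('a0:4', real=True)
p = sp.symbols('p0:4', real=True)
F1, F2 = sp.symbols('Phi1 Phi2', real=True)     # auxiliary pair, {Phi1, Phi2} = 1
E, calI = sp.symbols('E I', positive=True)      # soliton energy, moment of inertia

aa = sum(x**2 for x in a)

def pb(f, g):
    """Poisson bracket on the extended phase space (a, pi; Phi1, Phi2)."""
    b = sum(sp.diff(f, a[m]) * sp.diff(g, p[m]) - sp.diff(f, p[m]) * sp.diff(g, a[m])
            for m in range(4))
    return b + sp.diff(f, F1) * sp.diff(g, F2) - sp.diff(f, F2) * sp.diff(g, F1)

O1 = aa - 1 + 2*F1                                        # Eq. (2.8)
O2 = sum(a[m] * p[m] for m in range(4)) - aa * F2
H = E + sum((p[m] - a[m]*F2)**2 for m in range(4)) * aa / (8 * calI * (aa + 2*F1))
Hp = H + F2 * O2 / (4 * calI)                             # Eq. (2.13)

print(sp.simplify(pb(O1, O2)))                    # 0 : strong involution
print(sp.simplify(pb(O2, Hp)))                    # 0 : second bracket of Eq. (2.14)
print(sp.simplify(pb(O1, Hp) - O2 / (2 * calI)))  # 0 : the Gauss law bracket
```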
It seems appropriate to comment on the phenomenological application of the above Hamiltonian $`\stackrel{~}{H}^{\prime }`$. Using the first class constraints in this Hamiltonian (2.13), one can finally obtain the Hamiltonian of the form
$$\stackrel{~}{H}^{\prime }=E+\frac{1}{8\mathcal{I}}(a^\mu a^\mu \pi ^\nu \pi ^\nu -a^\mu \pi ^\mu a^\nu \pi ^\nu ).$$
(2.15)
Following the symmetrization procedure, the first class Hamiltonian yields the slightly modified energy spectrum with the Weyl ordering correction
$$\stackrel{~}{H}^{\prime }=E+\frac{1}{2\mathcal{I}}[I(I+1)+\frac{1}{4}]$$
(2.16)
where $`I`$ is the isospin quantum number of baryons.
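As a quick illustration (our own arithmetic, not a fit to data), the Weyl ordering constant drops out of mass differences: for the nucleon ($`I=1/2`$) and the delta ($`I=3/2`$) the spectrum (2.16) gives

$$\stackrel{~}{H}^{\prime }|_{I=3/2}-\stackrel{~}{H}^{\prime }|_{I=1/2}=\frac{1}{2\mathcal{I}}\left[\frac{3}{2}\cdot \frac{5}{2}-\frac{1}{2}\cdot \frac{3}{2}\right]=\frac{3}{2\mathcal{I}},$$

so only the absolute baryon masses, not the $`N`$–$`\mathrm{\Delta }`$ splitting, are sensitive to the $`\frac{1}{4}`$ correction.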
## 3 Symmetry Structure of First Class Hamiltonian
Now, since we have successfully converted the second class SU(2) Skyrmion into the corresponding first class one with the BFT scheme, we are ready to unravel the gauge symmetries of the first class system following the recently proposed purely Hamiltonian approach. This total Hamiltonian approach<sup>2</sup><sup>2</sup>2For an extended Hamiltonian approach, see the work of . This extended Hamiltonian approach with suitable gauge conditions is equivalent to the total Hamiltonian approach. See also Ref. . is based on the requirement of the commutativity of a general gauge variation with the time derivative operation, which puts restrictions on the gauge parameters and Lagrange multipliers.
Following Dirac’s conjecture, let us first construct the generator of gauge transformations for the SU(2) Skyrmion model, which has two constraints in total, as
$$G=ϵ^a\stackrel{~}{\mathrm{\Omega }}_a,a=1,2.$$
(3.1)
Here, $`ϵ^a`$ are in general functions of the phase space variables and $`\stackrel{~}{\mathrm{\Omega }}_a`$ are the first class constraints in Eq.$`(\text{2.8})`$. Then, the infinitesimal gauge transformation is given by $`\delta F(p,q)=\{F(p,q),G\}`$, in which $`F`$ is a function of the phase space variables. The total Hamiltonian reads
$$\stackrel{~}{H}_\mathrm{T}=\stackrel{~}{H}^{\prime }+\lambda \stackrel{~}{\mathrm{\Omega }}_1,$$
(3.2)
where $`\stackrel{~}{H}^{\prime }`$ is the canonical Hamiltonian $`\stackrel{~}{H}_\mathrm{C}`$ and $`\stackrel{~}{\mathrm{\Omega }}_1`$ the primary first class constraint. Comparing the general expression of the gauge algebra
$`\{\stackrel{~}{\mathrm{\Omega }}_a,\stackrel{~}{H}_\mathrm{C}\}`$ $`=`$ $`V_a^b\stackrel{~}{\mathrm{\Omega }}_b,`$
$`\{\stackrel{~}{\mathrm{\Omega }}_a,\stackrel{~}{\mathrm{\Omega }}_b\}`$ $`=`$ $`C_{ab}^c\stackrel{~}{\mathrm{\Omega }}_c`$ (3.3)
with Eq. (2.14), we can determine the gauge functions $`V_a^b`$ as well as $`C_{ab}^c`$. Note that the indices $`a`$ and $`b`$ run over all the constraints, while we denote the index of the primary constraint by $`a_1`$ and that of the secondary constraint by $`a_2`$.
Now the requirement of the commutativity of the general variation with the time derivative operator gives the following relations
$`\delta v^{b_1}`$ $`=`$ $`{\displaystyle \frac{dϵ^{b_1}}{dt}}-ϵ^a(V_a^{b_1}+v^{a_1}C_{a_1a}^{b_1}),`$ (3.4)
$`0`$ $`=`$ $`{\displaystyle \frac{dϵ^{b_2}}{dt}}-ϵ^a(V_a^{b_2}+v^{a_1}C_{a_1a}^{b_2}).`$ (3.5)
Here, $`v^{b_1}`$ are Lagrange multipliers associated with the primary first class constraints in the total Hamiltonian, and Eq.(3.5) gives the restriction on the gauge parameters.
Since there exist one primary and one secondary constraint for our SU(2) Skyrmion case, we easily see that the condition (3.5) imposed on the gauge parameters is simply rewritten as the relation $`\dot{ϵ}^2=\frac{1}{2\mathcal{I}}ϵ^1`$. Moreover, making use of this relation, we explicitly obtain the infinitesimal gauge transformation of the field variables as follows
$$\delta a^\mu =\{a^\mu ,G\}=a^\mu ϵ,\qquad \delta \mathrm{\Phi }^1=\{\mathrm{\Phi }^1,G\}=-a^\mu a^\mu ϵ,$$
(3.6)
where we have rewritten the independent gauge parameter $`ϵ^2`$ as $`ϵ`$.
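The rule (3.6) can be reproduced directly from the generator (3.1); the sketch below (our own check, realizing $`(\mathrm{\Phi }^1,\mathrm{\Phi }^2)`$ as a conjugate pair) also confirms the relative sign between $`\delta a^\mu `$ and $`\delta \mathrm{\Phi }^1`$:

```python
import sympy as sp

a = sp.symbols('a0:4', real=True)
p = sp.symbols('p0:4', real=True)
F1, F2 = sp.symbols('Phi1 Phi2', real=True)
e1, e2 = sp.symbols('epsilon1 epsilon2', real=True)   # gauge parameters

aa = sum(x**2 for x in a)

def pb(f, g):
    """Poisson bracket on the extended phase space, with {Phi1, Phi2} = 1."""
    b = sum(sp.diff(f, a[m]) * sp.diff(g, p[m]) - sp.diff(f, p[m]) * sp.diff(g, a[m])
            for m in range(4))
    return b + sp.diff(f, F1) * sp.diff(g, F2) - sp.diff(f, F2) * sp.diff(g, F1)

# Gauge generator G = eps^a Omega~_a of Eq. (3.1).
G = e1 * (aa - 1 + 2*F1) + e2 * (sum(a[m] * p[m] for m in range(4)) - aa * F2)

print(sp.simplify(pb(a[0], G)))   #  a0*epsilon2     : delta a^mu  =  a^mu eps
print(sp.simplify(pb(F1, G)))     # -(a.a)*epsilon2  : delta Phi^1 = -a.a eps
```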
As a result of applying this purely Hamiltonian approach to the SU(2) Skyrmion model, we have derived the full symmetry transformation rule.
## 4 Symmetry Structure of Corresponding Lagrangian
Now let us consider the partition function of the model in order to present the Lagrangian corresponding to the first class Hamiltonian $`\stackrel{~}{H}^{}`$ in Eq.(2.13). First of all we identify the auxiliary fields $`\mathrm{\Phi }^i`$ with a canonical conjugate pair $`(\theta ,\pi _\theta )`$, i.e.,
$$\mathrm{\Phi }^i=(\theta ,\pi _\theta )$$
(4.1)
which satisfy Eq. (2.7). Then, the starting partition function in the phase space is given by the Faddeev-Senjanovic formula as follows
$`Z`$ $`=`$ $`N{\displaystyle \int 𝒟a^\mu 𝒟\pi ^\mu 𝒟\theta 𝒟\pi _\theta \underset{i,j=1}{\overset{2}{\prod }}\delta (\stackrel{~}{\mathrm{\Omega }}_i)\delta (\mathrm{\Gamma }_j)det|\{\stackrel{~}{\mathrm{\Omega }}_i,\mathrm{\Gamma }_j\}|e^{i{\scriptscriptstyle \int dtL}}}`$
$`L`$ $`=`$ $`\dot{a}^\mu \pi ^\mu +\dot{\theta }\pi _\theta -\stackrel{~}{H}^{\prime }`$ (4.2)
where the gauge fixing conditions $`\mathrm{\Gamma }_i`$ are chosen so that the determinant occurring in the functional measure is nonvanishing.
Now, exponentiating the delta function $`\delta (\stackrel{~}{\mathrm{\Omega }}_2)`$ as $`\delta (\stackrel{~}{\mathrm{\Omega }}_2)=\int 𝒟\xi e^{i{\scriptscriptstyle \int dt\xi \stackrel{~}{\mathrm{\Omega }}_2}}`$ and performing the integration over $`\pi _\theta `$, $`\pi ^\mu `$ and $`\xi `$, we obtain the following partition function
$`Z`$ $`=`$ $`N{\displaystyle \int 𝒟a^\mu 𝒟\theta \delta (a^\mu a^\mu -1+2\theta )\underset{i=1}{\overset{2}{\prod }}\delta (\mathrm{\Gamma }_i)det|\{\stackrel{~}{\mathrm{\Omega }}_i,\mathrm{\Gamma }_j\}|e^{i{\scriptscriptstyle \int dtL}}}`$ (4.3)
$`L`$ $`=`$ $`-E+{\displaystyle \frac{2\mathcal{I}}{a^\sigma a^\sigma }}\dot{a}^\mu \dot{a}^\mu -{\displaystyle \frac{2\mathcal{I}}{(a^\sigma a^\sigma )^2}}\dot{\theta }^2.`$ (4.4)
As a result, we have obtained the desired Lagrangian (4.4) corresponding to the first class Hamiltonian (2.13). Here one notes that the Lagrangian (4.4) can be reshuffled to yield the gauge invariant action of the form
$`S`$ $`=`$ $`{\displaystyle \int dt(-E+2\mathcal{I}\dot{a}^\mu \dot{a}^\mu )}+S_{WZ}`$
$`S_{WZ}`$ $`=`$ $`{\displaystyle \int dt\left[\frac{4\mathcal{I}}{a^\sigma a^\sigma }\dot{a}^\mu \dot{a}^\mu \theta -\frac{2\mathcal{I}}{(a^\sigma a^\sigma )^2}\dot{\theta }^2\right]},`$ (4.5)
where $`S_{WZ}`$ is the new type of the Wess-Zumino term restoring the gauge symmetry. Moreover the corresponding partition function (4.3) can be rewritten simply in terms of the first class physical fields (2.9)
$`\stackrel{~}{Z}`$ $`=`$ $`N{\displaystyle \int 𝒟\stackrel{~}{a}^\mu \delta (\stackrel{~}{a}^\mu \stackrel{~}{a}^\mu -1)\underset{i=1}{\overset{2}{\prod }}\delta (\mathrm{\Gamma }_i)det|\{\stackrel{~}{\mathrm{\Omega }}_i,\mathrm{\Gamma }_j\}|e^{i{\scriptscriptstyle \int dt\stackrel{~}{L}}}},`$
$`\stackrel{~}{L}`$ $`=`$ $`-E+2\mathcal{I}\dot{\stackrel{~}{a}}^\mu \dot{\stackrel{~}{a}}^\mu `$ (4.6)
where $`\stackrel{~}{L}`$ is form invariant Lagrangian of Eq. (2.3).
Next, in order to derive the exact form of the transformation under which the Lagrangian (4.4) is invariant, we use the recently proposed Lagrangian approach, which is based on a singular Hessian in the equations of motion. Starting with the Lagrangian (4.4) subject to the constraint $`\stackrel{~}{\mathrm{\Omega }}_1=a^\mu a^\mu -1+2\theta =0`$, we obtain the equations of motion of the form
$`L_i^{(0)}=W_{ij}^{(0)}\ddot{q}^j+\alpha _i^{(0)}=0`$ (4.7)
where $`W_{ij}^{(0)}=\frac{\partial ^2L}{\partial \dot{q}^i\partial \dot{q}^j}`$ is the Hessian, $`\alpha _i^{(0)}=\frac{\partial ^2L}{\partial q^j\partial \dot{q}^i}\dot{q}^j-\frac{\partial L}{\partial q^i}`$, and the superscript $`(0)`$, introduced for later convenience, denotes the zeroth iteration. If we denote $`q^i=(a^\mu ,\theta )`$, we have
$`W_{ij}^{(0)}`$ $`=`$ $`{\displaystyle \frac{4}{a^\sigma a^\sigma }}\left(\begin{array}{cc}\delta _{\mu \nu }& 0\\ 0& -\frac{1}{a^\sigma a^\sigma }\end{array}\right)`$ (4.10)
$`\alpha _i^{(0)}`$ $`=`$ $`{\displaystyle \frac{4}{(a^\sigma a^\sigma )^2}}\left(\begin{array}{c}-2\dot{a}_\mu a^\rho \dot{a}^\rho +a_\mu \dot{a}^\rho \dot{a}^\rho -\frac{2}{a^\sigma a^\sigma }a_\mu \dot{\theta }^2\\ \frac{4}{a^\sigma a^\sigma }a^\rho \dot{a}^\rho \dot{\theta }\end{array}\right)`$ (4.13)
Since the constraint $`\stackrel{~}{\mathrm{\Omega }}_1`$ is of A-type, i.e., defined by a function without velocities in configuration space, we require as a consistency condition the following identity
$$L_5\equiv \frac{\mathrm{d}^2}{\mathrm{d}t^2}(\frac{1}{2}\stackrel{~}{\mathrm{\Omega }}_1)=a^\mu \ddot{a}^\mu +\ddot{\theta }+\dot{a}^\mu \dot{a}^\mu =0.$$
(4.14)
This requirement is similar to the time stability condition of constraints in the Hamiltonian formalism. Then, the resulting equations may be summarized in the form of the set of “first generation” equations
$$L_{i_1}^{(1)}\equiv W_{i_1j}^{(1)}\ddot{q}^j+\alpha _{i_1}^{(1)}=\{\begin{array}{cc}L_i^{(0)},\hfill & i=a^\mu ,\theta \hfill \\ \frac{\mathrm{d}^2}{\mathrm{d}t^2}(\frac{1}{2}\stackrel{~}{\mathrm{\Omega }}_1)\hfill & \end{array}$$
(4.15)
where
$$W_{i_1j}^{(1)}=\left(\begin{array}{c}W_{ij}^{(0)}\\ \begin{array}{cc}a^\mu & 1\end{array}\end{array}\right),\alpha _{i_1}^{(1)}=\left(\begin{array}{c}\alpha _i^{(0)}\\ \dot{a}^\mu \dot{a}^\mu \end{array}\right).$$
(4.16)
Since the first iterated Hessian in Eq. (4.16) is of rank four, there exists a null eigenvector satisfying
$$\lambda _{i_1}^{(1)}W_{i_1j}^{(1)}=0$$
(4.17)
from which we can find the solution
$$\lambda _{i_1}^{(1)}=(a^\mu ,-a^\mu a^\mu ,-\frac{4}{a^\mu a^\mu }).$$
(4.18)
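One can check Eq. (4.17) directly: contracting (4.18) with the $`a^\nu `$ column and the $`\theta `$ column of the first iterated Hessian (4.16) gives

$$\frac{4}{a^\sigma a^\sigma }a^\nu -\frac{4}{a^\sigma a^\sigma }a^\nu =0,\qquad (-a^\mu a^\mu )\left(-\frac{4}{(a^\sigma a^\sigma )^2}\right)-\frac{4}{a^\sigma a^\sigma }=0,$$

respectively.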
In general, the null eigenvectors are known to generate further Lagrangian constraints, which are functions of the coordinates and velocities but not of the accelerations, through $`\lambda _{i_k}^{(k)}L_{i_k}^{(k)}=0`$ at the k-th generation of the iteration. However, in our case we have
$$\lambda _{i_1}^{(1)}L_{i_1}^{(1)}=-\frac{2}{(a^\sigma a^\sigma )^2}(\frac{\mathrm{d}}{\mathrm{d}t}\stackrel{~}{\mathrm{\Omega }}_1)^2\equiv 0$$
(4.19)
which means that no further constraints are generated. The algorithm therefore terminates at this stage.
The symmetries of the Lagrangian (4.4) are encoded in the identity (4.19), which is a special case of a general theorem stating that the identity can always be written in the form
$$\mathrm{\Omega }^{(l)}=\underset{s=0}{\overset{l}{\sum }}(-1)^s\frac{d^s}{dt^s}\varphi _k^{i(s)}L_i^{(0)}\equiv 0$$
(4.20)
where the superscript $`l`$ denotes the last stage of iteration giving the identity. The corresponding Lagrangian is then invariant under the transformation
$$\delta \phi ^i=\underset{k}{\sum }\left(ϵ_k\varphi _k^{i(0)}+\dot{ϵ}_k\varphi _k^{i(1)}\right).$$
(4.21)
In the first class SU(2) Skyrmion, the coefficients $`\varphi ^{i(s)}`$ in Eq. (4.20) are given by $`\varphi ^{a^\mu (0)}=a^\mu `$, $`\varphi ^{\theta (0)}=-a^\mu a^\mu `$. As a result, by using Eq. (4.21), the desired form of the symmetry transformation reads
$$\delta a^\mu =a^\mu ϵ,\delta \theta =-a^\mu a^\mu ϵ.$$
(4.22)
It can be easily checked that the Lagrangian (4.4) is invariant under the transformation (4.22).
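Indeed, a short computation using $`\delta \dot{a}^\mu =\dot{a}^\mu ϵ+a^\mu \dot{ϵ}`$ and $`\delta \dot{\theta }=-a^\mu a^\mu \dot{ϵ}-2a^\mu \dot{a}^\mu ϵ`$ gives

$$\delta L=\frac{2}{a^\sigma a^\sigma }\frac{\mathrm{d}\stackrel{~}{\mathrm{\Omega }}_1}{\mathrm{d}t}\left(\dot{ϵ}+\frac{2\dot{\theta }}{a^\sigma a^\sigma }ϵ\right),$$

which vanishes on the constraint surface $`\stackrel{~}{\mathrm{\Omega }}_1=0`$, confirming the invariance.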
Therefore we have shown that the Hamiltonian and Lagrangian approaches possess the same symmetry structure: the symmetry transformation rule (4.22) obtained in the Lagrangian approach is exactly the same as that in Eq. (3.6), which was obtained by taking the effective first class constraints (2.8) as the symmetry generators in the purely Hamiltonian formalism.
## 5 BRST-BFV analysis for consistent gauge fixing
Now, in order to obtain the BRST invariant Lagrangian in the framework of the BFV formalism, which is applicable to theories with first class constraints, we introduce two canonical sets of ghosts and anti-ghosts together with auxiliary fields $`(𝒞^i,\overline{𝒫}_i)`$, $`(𝒫^i,\overline{𝒞}_i)`$, $`(N^i,B_i)`$, $`(i=1,2)`$, which satisfy the super-Poisson algebra <sup>3</sup><sup>3</sup>3 Here the super-Poisson bracket is defined as
$$\{A,B\}=\frac{\delta A}{\delta q}|_r\frac{\delta B}{\delta p}|_l-(-1)^{\eta _A\eta _B}\frac{\delta B}{\delta q}|_r\frac{\delta A}{\delta p}|_l$$
where $`\eta _A`$ denotes the number of fermions, called the ghost number, in $`A`$, and the subscripts $`r`$ and $`l`$ denote right and left derivatives, respectively.
$$\{𝒞^i,\overline{𝒫}_j\}=\{𝒫^i,\overline{𝒞}_j\}=\{N^i,B_j\}=\delta _j^i.$$
In the SU(2) Skyrmion model, the nilpotent BRST charge $`Q`$, the fermionic gauge fixing function $`\mathrm{\Psi }`$ and the BRST invariant minimal Hamiltonian $`H_m`$ are given by
$`Q`$ $`=`$ $`𝒞^i\stackrel{~}{\mathrm{\Omega }}_i+𝒫^iB_i,`$
$`\mathrm{\Psi }`$ $`=`$ $`\overline{𝒞}_i\chi ^i+\overline{𝒫}_iN^i,`$
$`H_m`$ $`=`$ $`\stackrel{~}{H}^{\prime }-{\displaystyle \frac{1}{2}}𝒞^1\overline{𝒫}_2`$ (5.1)
which satisfy the relations $`\{Q,H_m\}=0`$, $`Q^2=\frac{1}{2}\{Q,Q\}=0`$, $`\{\{\mathrm{\Psi },Q\},Q\}=0`$; the nilpotency of $`Q`$ follows from the strong involution $`\{\stackrel{~}{\mathrm{\Omega }}_i,\stackrel{~}{\mathrm{\Omega }}_j\}=0`$ of the BFT constraints. The effective quantum Lagrangian is then described as
$$L_{eff}=\pi ^\mu \dot{a}^\mu +\pi _\theta \dot{\theta }+B_2\dot{N}^2+\overline{𝒫}_i\dot{𝒞}^i+\overline{𝒞}_2\dot{𝒫}^2-H_{tot}$$
(5.2)
with $`H_{tot}=H_m-\{Q,\mathrm{\Psi }\}`$. Here the $`B_1\dot{N}^1+\overline{𝒞}_1\dot{𝒫}^1=\{Q,\overline{𝒞}_1\dot{N}^1\}`$ terms are suppressed by replacing $`\chi ^1`$ with $`\chi ^1+\dot{N}^1`$.
Now we choose the unitary gauge
$$\chi ^1=\mathrm{\Omega }_1,\chi ^2=\mathrm{\Omega }_2$$
(5.3)
and perform the path integration over the fields $`B_1`$, $`N^1`$, $`\overline{𝒞}_1`$, $`𝒫^1`$, $`\overline{𝒫}_1`$ and $`𝒞^1`$, by using the equations of motion, to yield the effective Lagrangian of the form
$`L_{eff}`$ $`=`$ $`\pi ^\mu \dot{a}^\mu +\pi _\theta \dot{\theta }+B\dot{N}+\overline{𝒫}\dot{𝒞}+\overline{𝒞}\dot{𝒫}`$ (5.4)
$`-E-{\displaystyle \frac{1}{8}}(\pi ^\mu -a^\mu \pi _\theta )(\pi ^\mu -a^\mu \pi _\theta ){\displaystyle \frac{a^\sigma a^\sigma }{a^\sigma a^\sigma +2\theta }}-{\displaystyle \frac{1}{4}}\pi _\theta \stackrel{~}{\mathrm{\Omega }}_2`$
$`+2a^\mu a^\mu \pi _\theta \overline{𝒞}𝒞+\stackrel{~}{\mathrm{\Omega }}_2N+B\mathrm{\Omega }_2+\overline{𝒫}𝒫`$
with the redefinitions: $`N\equiv N^2`$, $`B\equiv B_2`$, $`\overline{𝒞}\equiv \overline{𝒞}_2`$, $`𝒞\equiv 𝒞^2`$, $`\overline{𝒫}\equiv \overline{𝒫}_2`$, $`𝒫\equiv 𝒫^2`$.
Next, using the variations with respect to $`\pi ^\mu `$, $`\pi _\theta `$, $`𝒫`$ and $`\overline{𝒫}`$, one obtains the relations
$`\dot{a}^\mu `$ $`=`$ $`{\displaystyle \frac{1}{4}}(\pi ^\mu -a^\mu \pi _\theta )a^\sigma a^\sigma +a^\mu ({\displaystyle \frac{1}{4}}\pi _\theta -N-B)`$
$`\dot{\theta }`$ $`=`$ $`-{\displaystyle \frac{1}{4}}a^\mu (\pi ^\mu -a^\mu \pi _\theta )a^\sigma a^\sigma +a^\mu a^\mu (-{\displaystyle \frac{1}{2}}\pi _\theta -2\overline{𝒞}𝒞+N)+{\displaystyle \frac{1}{4}}a^\mu \pi ^\mu `$
$`𝒫`$ $`=`$ $`\dot{𝒞},\overline{𝒫}=\dot{\overline{𝒞}}`$ (5.5)
to yield the effective Lagrangian
$`L_{eff}`$ $`=`$ $`-E+{\displaystyle \frac{2}{a^\sigma a^\sigma }}\dot{a}^\mu \dot{a}^\mu -2\left[{\displaystyle \frac{\dot{\theta }}{a^\sigma a^\sigma }}+(B+2\overline{𝒞}𝒞)a^\sigma a^\sigma \right]^2`$ (5.6)
$`+{\displaystyle \frac{4}{a^\sigma a^\sigma }}a^\mu \left[\dot{a}^\mu +a^\mu ({\displaystyle \frac{\dot{\theta }}{a^\sigma a^\sigma }}+(B+2\overline{𝒞}𝒞)a^\sigma a^\sigma )\right](B+N)`$
$`+B\dot{N}+\dot{\overline{𝒞}}\dot{𝒞}.`$
Finally, with the identification
$$N=-B+\frac{\dot{\theta }}{a^\sigma a^\sigma },$$
(5.7)
we obtain the desired BRST invariant Lagrangian of the form
$`L_{eff}`$ $`=`$ $`-E+{\displaystyle \frac{2}{a^\sigma a^\sigma }}\dot{a}^\mu \dot{a}^\mu -{\displaystyle \frac{2}{(a^\sigma a^\sigma )^2}}\dot{\theta }^2-2(a^\mu a^\mu )^2(B+2\overline{𝒞}𝒞)^2`$ (5.8)
$`-{\displaystyle \frac{\dot{\theta }\dot{B}}{a^\sigma a^\sigma }}+\dot{\overline{𝒞}}\dot{𝒞},`$
which is invariant under the BRST transformation
$`\delta _Ba^\mu `$ $`=`$ $`\lambda a^\mu 𝒞,\delta _B\theta =-\lambda a^\mu a^\mu 𝒞,`$
$`\delta _B\overline{𝒞}`$ $`=`$ $`\lambda B,\delta _B𝒞=\delta _BB=0.`$ (5.9)
Here one notes that the above BRST transformation including the rules for the (anti)ghost fields is just the generalization of the previous one (4.22).
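One can also verify the nilpotency of (5.9) directly on the fields: applying two successive transformations with parameters $`\lambda `$ and $`\lambda ^{\prime }`$,

$$\delta _B^{\prime }\delta _Ba^\mu =\lambda ^{\prime }\lambda a^\mu 𝒞𝒞=0,\delta _B^{\prime }\delta _B\overline{𝒞}=\lambda \delta _B^{\prime }B=0,$$

since the Grassmann ghost satisfies $`𝒞𝒞=0`$ and $`\delta _BB=0`$; the transformation on $`\theta `$ vanishes for the same reason.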
This completes the standard procedure of BRST invariant gauge fixing in the BFV formalism.
## 6 Conclusions
In summary, we have constructed the first class BFT physical fields, in terms of which the first class Hamiltonian is formulated consistently with the Hamiltonian written in terms of the original and auxiliary fields. After converting the second class SU(2) Skyrmion into an effectively first class system, we have analyzed the full symmetry structure of the model purely within the Hamiltonian approach. On the other hand, we have also constructed the effective Lagrangian corresponding to the first class Hamiltonian in the path integral approach to the partition function. This Lagrangian includes a new type of Wess-Zumino term restoring the gauge symmetry. Then, we have explicitly derived the symmetry structure of this effective Lagrangian through the Lagrangian approach, showing that the two approaches are equivalent and yield the same symmetry structure of the SU(2) Skyrmion. Furthermore, in the BFV scheme, we have obtained the BRST invariant gauge fixed Lagrangian including the (anti)ghost fields, together with its BRST transformation rules. Finally, the SU(3) extension of this analysis, which is a phenomenologically realistic model, will be worth studying in further investigations.
One of us (S.T.H.) would like to thank Professor G.E. Brown at Stony Brook for constant concerns and encouragement. The present work was partly supported by the BK21 Project No. D-0055, Ministry of Education, 1999 and the Sogang University Research Grants in 1999.
# THE PHASE DIAGRAM OF 2 FLAVOUR QCD WITH IMPROVED ACTIONS
## 1 Introduction and Motivation
The study of the finite temperature phase diagram of 2-flavour QCD with Wilson fermions has revealed a rather intricate picture. This picture is based on the idea of spontaneous breakdown of parity and flavour symmetry and has been investigated analytically as well as numerically. The main features of this phase diagram are:
* the critical line $`\kappa _c(\beta )`$ defined by a vanishing pion screening mass for finite temporal lattice size marks the phase boundary with a phase of spontaneously broken parity and flavour symmetry.
* for large enough $`N_\tau `$ five cusps moving towards weak coupling should develop, separating the 5 sets of doublers.
* the finite temperature phase transition line $`\kappa _t(\beta )`$ should not cross the critical line, but run past it towards larger values of the hopping parameter, presumably turning back toward weak coupling as one crosses the $`m_q=0`$ line.
Because the Wilson term breaks all chiral symmetries of the naive fermion action, the critical line above cannot consistently be interpreted as the chiral limit of QCD at any finite lattice spacing. It has recently become clear that two scenarios are possible. Either there exists a second order phase transition to an Aoki phase (of width $`a^3`$) in which parity and flavour symmetry are broken, and along its phase boundary the pion mass vanishes, or the system exhibits a first order transition along the line of vanishing quark mass and the pion does not become massless. The analysis also suggests that which scenario is realized can change as one varies the action. One therefore has to check that the phase diagram with improved Wilson fermions does exhibit an Aoki phase. As the Aoki phase retracts from the weak coupling limit as the temperature increases, it naturally explains the absence of any non-analyticities across the $`m_q=0`$ line in the high temperature phase. In order to separate the high temperature side from the low temperature side of the phase diagram, the finite temperature transition line should run past the cusp of the Aoki phase and continue toward larger $`\kappa `$ values. The thermal line of the deconfinement phase transition therefore crosses the $`m_q=0`$ line and should bounce back towards weaker coupling due to a symmetry under the change of sign of the mass term in the continuum theory. If the thermal line does not touch the tip of the cusp, there would be room for the second scenario mentioned above.
## 2 The Simulation
We have simulated 2 flavours of Wilson fermions on lattices of size $`8^3\times 4`$ and $`12^2\times 24\times 4`$. We have employed tree-level Symanzik improvement, which for the fermion sector amounts to adding the so-called clover term with coefficient one, and for the gauge fields to adding a $`2\times 1`$ loop. We have first mapped out the phase diagram on the smaller lattice and have then corroborated our results on the larger lattice for two $`\beta `$ values.
## 3 Quark mass and Pion screening mass
Our results for the quark and pion screening mass are shown in figure (1). The current quark mass is defined via the axial Ward identity:
$$2m_q\equiv \frac{\partial _\mu \langle 0|A^\mu |\pi \rangle }{\langle 0|P|\pi \rangle }$$
(1)
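In practice, the PCAC quark mass defined by Eq. (1) is commonly extracted from zero-momentum correlators of the temporal axial current and the pseudoscalar density; schematically (our shorthand for the standard procedure, with possible improvement terms omitted),

$$2m_q(t)=\frac{\partial _4\langle A_4(t)P(0)\rangle }{\langle P(t)P(0)\rangle },$$

with the right-hand side averaged over a plateau in $`t`$.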
For $`\beta =2.8`$ we find some curvature for the quark mass as a function of $`1/\kappa `$, though a linear fit produces a reasonable $`\chi ^2`$. For $`\beta =3.1`$ we have also explored a region of hopping parameters where the quark mass becomes negative. There we find a rather peculiar behaviour. The quark mass first decreases as one lowers $`\kappa `$ towards $`\kappa _c`$ and only rises again very close to $`\kappa _c`$. This results in different slopes of $`m_q`$ as a function of $`1/\kappa `$ for positive and negative quark masses, in contrast to simulations with unimproved Wilson fermions, which have shown the same behaviour for positive and negative quark masses. We have also accurately measured the pion screening mass. For $`\beta =2.8`$ the decrease of the pion mass is consistent with a linear behaviour $`m_\pi ^2\propto 1/\kappa -1/\kappa _c`$ down to small values of $`m_\pi ^2`$. This also applies for $`\beta =3.1`$ for $`\kappa `$ values that correspond to positive quark masses. For negative quark masses the behaviour is quite different and inconsistent with a linear behaviour for the points measured. We will however argue below that for $`\beta =3.1`$ the pion mass does not go to zero as the quark mass goes to zero, because one crosses the finite temperature transition line before the quark mass becomes zero.
## 4 Polyakov loop
Figure (2) shows our results for the Polyakov loop as a function of $`\kappa `$, including data from both lattices. The vertical lines indicate the position of the extrapolated $`\kappa _c`$ from the pion screening mass, except for $`\beta =3.0`$, where the pion norm was used. This extrapolated $`\kappa _c`$ decreases with increasing $`\beta `$. For $`\beta =2.8`$ all our data for the pion mass lie in the confined region. We therefore conclude that the pion mass vanishes as the quark mass goes to zero, i.e. for $`\beta =2.8`$ we hit the Aoki phase as we increase $`\kappa `$. For $`\beta =3.0`$ and $`3.1`$ this is no longer so clear. The Polyakov loop already shows a high temperature behaviour where the extrapolated pion mass would be small. For $`\beta =3.0`$ the situation is less pronounced, and one could still argue that the pion becomes massless, but since the Polyakov loop is in the high temperature phase immediately after one crosses $`\kappa _c`$, the finite temperature line and the line of vanishing quark mass come very close together. For $`\beta =3.1`$ it becomes evident that one crosses the finite temperature transition line before the line of vanishing quark mass. The pion will therefore not become massless as the quark mass vanishes. Because the Polyakov loop continues to rise past the $`m_q=0`$ line, one can exclude that the finite temperature line bounces back towards weaker coupling immediately.
## 5 Chiral order parameter
Because of the explicit breaking of chiral symmetry by the Wilson action, one has to define a properly subtracted chiral order parameter to obtain the correct continuum limit. Using axial Ward identities the order parameter is defined as follows:
$$\langle \overline{\psi }\psi \rangle _{sub}=2m_qZ\underset{x}{\sum }\langle \pi (x)\pi (0)\rangle $$
(2)
Here $`Z`$ is a renormalisation factor, for which we take its tree level value $`Z=(2\kappa )^2`$. The sum over the pion correlation function is just the pion norm. Our results are shown in figure (3). For $`\beta =2.8`$ the data extrapolate to a finite intercept at $`m_q=0`$. For $`\beta =3.1`$ the data show more curvature, and we expect $`\langle \overline{\psi }\psi \rangle _{sub}`$ to go to zero for vanishing quark mass.
## Acknowledgments
Work in part supported by NATO grant no. CRG940451 and by the European Community TMR Programs TRACS and ERBFMRX-CT97-0122.
# A Covariant Entropy Conjecture
## 1 Introduction
### 1.1 Bekenstein’s bound
Bekenstein has proposed the existence of a universal bound on the entropy $`S`$ of any thermodynamic system of total energy $`M`$:
$$S\le 2\pi RM.$$
(1.1)
$`R`$ is defined as the circumferential radius,
$$R=\sqrt{\frac{A}{4\pi }},$$
(1.2)
where $`A`$ is the area of the smallest sphere circumscribing the system.
For a system contained in a spherical volume, gravitational stability requires that $`M\le R/2`$. Thus Eq. (1.1) implies
$$S\le \frac{A}{4}.$$
(1.3)
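Explicitly, inserting the stability condition $`M\le R/2`$ into Eq. (1.1) and using $`A=4\pi R^2`$ from Eq. (1.2):

$$S\le 2\pi RM\le \pi R^2=\frac{A}{4}.$$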
The derivation of Eq. (1.1) involves a gedankenexperiment in which the thermodynamic system is dropped into a Schwarzschild black hole of much larger size. The generalized second law of thermodynamics requires that the entropy of the system should not exceed the entropy of the radiation emitted by the black hole while relaxing to its original size; this radiation entropy can be estimated. Independently of this fundamental derivation, the bound has explicitly been shown to hold in wide classes of equilibrium systems.
Bekenstein specified conditions for the validity of these bounds. The system must be of constant, finite size and must have limited self-gravity, i.e., gravity must not be the dominant force in the system. This excludes, for example, gravitationally collapsing objects, and sufficiently large regions of cosmological space-times. Another important condition is that no matter components with negative energy density are available. This is because the bound relies on the gravitational collapse of systems with excessive entropy, and is intimately connected with the idea that information requires energy. With matter of negative energy at hand, one could add entropy to a system without increasing the mass, by adding entropic matter with positive mass as well as an appropriate amount of negative mass. A thermodynamic system that satisfies the conditions for the application of Bekenstein’s bounds will be called a Bekenstein system.
When the conditions set forth by Bekenstein are not satisfied, the bounds can easily be violated. The simplest example is a system undergoing gravitational collapse. Before it is destroyed on the black hole singularity, its surface area becomes arbitrarily small. Since the entropy cannot decrease, the bound is violated. Or consider a homogeneous spacelike hypersurface in a flat Friedmann-Robertson-Walker universe. The entropy of a sufficiently large spherical volume will exceed the boundary area. This is because space is infinite, the entropy density is constant, and volume grows faster than area. From the point of view of semi-classical gravity and thermodynamics, there is no reason to expect that any entropy bound applies to such systems.
### 1.2 Outline
Motivated by the holographic conjecture, Fischler and Susskind have suggested that some kind of entropy bound should hold even for large regions of cosmological solutions, for which Bekenstein’s conditions are not satisfied. However, no fully general proposal has yet been formulated. The Fischler-Susskind bound, for example, applies to universes which are not closed or recollapsing, while other prescriptions can be used for sufficiently small surfaces in a wide class of cosmological solutions.
Our attempt to present a general proposal as arising, in a sense, from fundamental considerations, should not obscure the immense debt we owe to the work of others. The importance of Bekenstein’s seminal paper will be obvious. The proposal of Fischler and Susskind, whose influence on our prescription is pervasive, uses light-like hypersurfaces to relate entropy and area. This idea can be traced to the use of light-rays for formulating the holographic principle. Indeed, Corley and Jacobson were the first to take a space-time (rather than a static) point of view in locating the entropy related to an area. They introduced the concept of “past and future screen-maps” and suggested to choose only one of the two in different regions of cosmological solutions. Moreover, they recognized the importance of caustics of the light-rays leaving a surface. A number of authors have investigated the application of Bekenstein’s bound to sufficiently small regions of the universe, and have carefully exposed the difficulties that arise when such rules are pushed beyond their range of validity. These insights are invaluable in the search for a general prescription.
We shall take the following approach. We shall make no assumptions involving holography anywhere in this paper. Instead, we shall work only within the framework of general relativity. Taking Bekenstein’s bound as a starting point, we are guided mainly by the requirement of covariance to a completely general entropy bound, which we conjecture to be valid in arbitrary space-times (Sec. 2). Like Fischler and Susskind, we consider entropy not on spacelike regions, but on light-like hypersurfaces. There are four such hypersurfaces for any given surface $`B`$. We select portions of at least two of them for an entropy/area comparison, by a covariant rule that requires non-positive expansion of the generating null geodesics.
In Sec. 3, we provide a detailed discussion of the conjecture. We translate the technical formulation into a set of rules (Sec. 3.1). In Sec. 3.2, we discuss the criterion of non-positive expansion and find it to be quite powerful. We give some examples of its mechanism in Sec. 3.3 and discover an effect that protects the bound in gravitationally collapsing systems of high entropy. In Sec. 3.4 we establish a theorem that states the conditions under which spacelike hypersurfaces may be used for entropy/area comparisons. Through this theorem, we recover Bekenstein’s bound as a special case.
In Sec. 4 we use cosmological solutions to test the conjecture. Our prescription naturally selects the apparent horizon as a special surface. In regions outside the apparent horizon (Sec. 4.1), our bound is satisfied for the same reasons that justified the Fischler-Susskind proposal within its range of validity. We also explain why reheating after inflation does not endanger the bound. For surfaces on or within the apparent horizon (Sec. 4.2), we find under worst case assumptions that the covariant bound can be saturated, but not exceeded. By requiring the consistency of Bekenstein’s conditions, we show that the apparent horizon is indeed the largest surface whose interior one can hope to treat as a Bekenstein system. This conclusion is later reached independently, from the point of view of the covariant conjecture, in Sec. 5.1.
In Sec. 5 we discuss a number of recently proposed cosmological entropy bounds. Because of its symmetric treatment of the four light-like directions orthogonal to any surface, which marks its only significant difference from the Fischler-Susskind bound, our covariant prescription applies also to closed and recollapsing universes (Sec. 5.2). Other useful bounds refer to the entropy within a specified limited region. The covariant bound can be used to understand the range of validity of such bounds (Sec. 5.3). As an example, we apply a corollary derived in Sec. 5.1 to understand why the bound of Ref. cannot be applied to a flat universe with negative cosmological constant.
In Sec. 6 we argue that the conjecture is true for surfaces inside gravitationally collapsing objects. We identify a number of subtle mechanisms protecting the bound in such situations (Sec. 6.1), and perform a quantitative test by collapsing an arbitrarily large shell into a small region. We find again that the bound can be saturated but not exceeded.
In Sec. 7 we stress that the conjectured entropy law is invariant under time reversal. On the other hand, the physical mechanisms responsible for its validity do not appear to be T-invariant, and of course the very concept of thermodynamic entropy has a built-in arrow of time. Therefore the covariant bound must be linked to the statistical origin of entropy. Yet, it holds independently of the microscopic properties of matter. Thus the covariant bound implies a fundamental limit on the total number of independent degrees of freedom that are actually present in nature. The holographic principle thus appears in this paper not as a presupposition, but as a conclusion.
#### Notation and conventions
We will banish formal definitions into footnotes, whenever they refer to concepts which are intuitively clear. We work with a manifold $`M`$ of $`3+1`$ space-time dimensions, since the generalization to $`D`$ dimensions is obvious. The terms light-like and null are used interchangeably. Any three-dimensional submanifold $`H\subset M`$ is called a hypersurface of $`M`$. If two of its dimensions are everywhere spacelike and the remaining dimension is everywhere timelike (null, spacelike), $`H`$ will be called a timelike (null, spacelike) hypersurface. By a surface we always refer to a two-dimensional spacelike submanifold $`B\subset M`$. By a light-ray we never mean an actual electromagnetic wave or photon, but simply a null geodesic. We use the terms congruence of null geodesics, null congruence, and family of light-rays interchangeably. A light-sheet of a surface $`B`$ will be defined in Sec. 3.1 as a null hypersurface bounded by $`B`$ and generated by a null congruence with non-increasing expansion. A number of definitions relating to Bekenstein’s bound are found on page 4.2. We set $`\mathrm{}=c=G=k=1`$.
## 2 The conjecture
In constructing a covariant entropy bound, one first has to decide whether to aim at an entropy/mass bound, as in Eq. (1.1), or an entropy/area bound, as in Eq. (1.3). Local energy is not well-defined in general relativity, and for global definitions of mass the space-time must possess an infinity. This eliminates any hope of obtaining a completely general bound involving mass. Area, on the other hand, can always be covariantly defined as the proper area of a surface.
Having decided to search for an entropy/area bound, our difficulty lies not in the quantitative formula, $`S\le A/4`$; this will remain unchanged. The problem we need to address is the following: Given a two-dimensional surface $`B`$ of area $`A`$, on which hypersurface $`H`$ should we evaluate the entropy $`S`$? We shall retain the rule that $`B`$ must be a boundary of $`H`$. In general space-times, this leaves an infinite choice of different hypersurfaces. Below we will construct the rule for the hypersurfaces, guided by a demand for symmetry and consistency with general relativity.
As a starting point, we write down a slightly generalized version of Bekenstein’s entropy/area bound, Eq. (1.3): Let $`A`$ be the area of any closed two-dimensional surface $`B`$, and let $`S`$ be the entropy on the spatial region $`V`$ enclosed by $`B`$. Then $`S\le A/4`$.
Bekenstein’s bounds were derived for systems of limited self-gravity and finite extent (finite spatial region $`V`$). In order to be able to implement general coordinate invariance, we shall now drop these conditions. Of course this could go hopelessly wrong. The generalized second law of thermodynamics gives no indication of any useful entropy bound if Bekenstein’s conditions are not met. Ignoring such worries, we move on to ask how the formulation of the bound has to be modified in order to achieve covariance.
An obvious problem is the reference to “the” spatial region. If we demand covariance, there cannot be a preferred spatial hypersurface. Either the bound has to be true for any spatial hypersurface enclosed by $`B`$, or we have to insist on light-like hypersurfaces. But the possibility of using any spatial hypersurface is already excluded by the counterexamples given at the end of Sec. 1.1; this was first pointed out in Ref. .
Therefore we must use null hypersurfaces bounded by $`B`$. The natural way to construct such hypersurfaces is to start at the surface $`B`$ and to follow a family of light-rays (technically, a “congruence of null geodesics”) projecting out orthogonally<sup>1</sup><sup>1</sup>1While it may be clear what we mean by light-rays which are orthogonal to a closed surface $`B`$, we should also provide a formal definition. In a convex normal neighbourhood of $`B`$, the boundary of the chronological future of $`B`$ consists of two future-directed null hypersurfaces, one on either side of $`B`$ (see Chapter 8 of Wald for details). Similarly, the boundary of the chronological past of $`B`$ consists of two past-directed null hypersurfaces. Each of these four null hypersurfaces is generated by a congruence of null geodesics starting at $`B`$. At each point $`p\in B`$, the four null directions orthogonal to $`B`$ are defined by the tangent vectors of the four congruences. This definition can be extended to smooth surfaces $`B`$ with a boundary $`\partial B`$: For $`p\in \partial B`$, the four orthogonal null directions are the same as for a nearby point $`q\in B-\partial B`$, in the limit of vanishing proper distance between $`p`$ and $`q`$. We will also allow $`B`$ to be on the boundary of the space-time $`M`$, in which case there will be fewer than four options. For example, if $`B`$ lies on a boundary of space, only the ingoing light-rays will exist. We will not make such exceptions explicit in the text, as they are obvious. from $`B`$. But we have four choices: the family of light-rays can be future-directed outgoing, future-directed ingoing, past-directed outgoing, and past-directed ingoing (see Fig. 1).
Which should we select? And how far may we follow the light-rays?
In order to construct a selection rule, let us briefly return to the limit in which Bekenstein’s bound applies. For a spherical surface around a Bekenstein system, the enclosed entropy cannot be larger than the area. But the same surface is also a boundary of the infinite region on its outside. The entropy outside could clearly be anything. From this we learn that it is important to consider the entropy only on hypersurfaces which are not outside the boundary.
But what does “outside” mean in general situations? The side that includes infinity? Then what if space is closed? Fortunately, there exists an intuitive notion of “inside” and “outside” that is suitable to be generalized to a covariant rule. Think of ordinary Euclidean geometry, and start from a closed surface $`B`$. Construct a second surface by moving every point on $`B`$ an infinitesimal distance away to one side of $`B`$, along lines orthogonal to $`B`$. If this increases the area, then we say that we have moved outside. If the area decreases, we have moved inside.
This consideration yields the selection rule. We start at $`B`$, and follow one of the four families of orthogonal light-rays, as long as the cross-sectional area is decreasing or constant. When it becomes increasing, we must stop. This can be formulated technically by demanding that the expansion<sup>2</sup><sup>2</sup>2The expansion $`\theta `$ of a congruence of null geodesics is defined, e.g., in Refs. and will be discussed further in Sec. 3.2. It measures the local rate of change of the cross-sectional area as one moves along the light-rays. Let $`\lambda `$ be an affine parameter along a family of light-rays orthogonal to $`B`$. Let $`𝒜(\lambda )`$ denote the area of the surfaces spanned by the light-rays at the affine time $`\lambda `$. Then $`𝒜(0)=A`$, the area of $`B`$. $`𝒜(\lambda )`$ is independent of the choice of Lorentz frame , so it is a covariant quantity. Then $`\theta =\frac{d𝒜/d\lambda }{𝒜}`$. We choose the affine parameter to be increasing in the directions away from the surface $`B`$ (this implies that we are using a different affine parameter $`\lambda _i`$ for each of the four null-congruences). Non-positive expansion, $`\theta \le 0`$, thus means that the cross-sectional area is not increasing in the direction away from $`B`$. of the orthogonal null congruence must be non-positive, in the direction away from the surface $`B`$. By continuity across $`B`$, the expansion of past-directed light-rays going to one side is the negative of the expansion of future-directed light-rays heading the other way. Therefore we can be sure that at least two of the four null directions will be allowed. If the expansion of one pair of null congruences vanishes on $`B`$, there will be three allowed directions. If the expansion of both pairs vanishes on $`B`$, all four null directions will be allowed.
This covariant definition of “inside” and “outside” does not require the surface $`B`$ to be closed. Only the naive definition, by which “inside” was understood to mean a finite region delimited by a surface, needed the surface to be closed. Therefore we shall now drop this condition and allow any connected two-dimensional surface. This enables us to assume without loss of generality that the expansion in each of the four null directions does not change sign anywhere on the surface $`B`$. If it does, we simply split $`B`$ into suitable domains and apply the entropy law to each domain individually.
Finally, in an attempt to protect our conjecture against pathologies such as superluminal entropy flow, we will require the dominant energy condition to hold: for all timelike $`v_a`$, $`T^{ab}v_av_b\ge 0`$ and $`T^{ab}v_a`$ is a non-spacelike vector. This condition states that to any observer the local energy density appears non-negative and the speed of energy flow of matter is less than the speed of light. It implies that a space-time must remain empty if it is empty at one time and no matter is coming in from infinity. It is believed that all physically reasonable forms of matter satisfy the dominant energy condition, so we are not imposing a significant restriction. It may well be possible, however, to weaken the assumption further; this is under investigation<sup>3</sup><sup>3</sup>3We do not wish to permit matter with negative energy density, since that would open the possibility of creating arbitrary amounts of positive energy matter, and thus arbitrary entropy, by simultaneously creating negative energy matter. Worse still, if such a process can be reversed, one would be able to destroy entropy and violate the second law. Therefore, negative energy matter permitting such processes must not be allowed in a physical theory. A negative cosmological constant is special in that it cannot be used for such a process. Indeed, we are currently unaware of any counterexamples to the covariant bound in space-times with a negative cosmological constant (see Secs. 3.4 and 5.3). This suggests that it may be sufficient to demand only causal energy flow (for all timelike $`v_a`$, $`T^{ab}v_a`$ is a non-spacelike vector), without requiring the positivity of energy. — We wish to thank Nemanja Kaloper and Andrei Linde for a discussion of these issues.
We should also require that the space-time is inextendible and contains no null or timelike (“naked”) singularities. This is necessary if we wish to exclude the possibility of destroying or creating arbitrary amounts of entropy on such boundaries. These conditions are believed to hold in any physical space-time, so we shall not spell them out below. (Of course we are not excluding the spacelike singularities occuring in cosmology and in gravitational collapse; indeed, much of this paper will be devoted to investigating the validity of the proposed bound in the vicinity of such singularities.) We thus arrive at a conjecture for a covariant bound on the entropy in any space-time.
#### Covariant Entropy Conjecture
Let $`M`$ be a four-dimensional space-time on which Einstein’s equation is satisfied with the dominant energy condition holding for matter. Let $`A`$ be the area of a connected two-dimensional spatial surface $`B`$ contained in $`M`$. Let $`L`$ be a hypersurface bounded by $`B`$ and generated by one of the four null congruences orthogonal to $`B`$. Let $`S`$ be the total entropy contained on $`L`$. If the expansion of the congruence is non-positive (measured in the direction away from $`B`$) at every point on $`L`$, then $`S\le A/4`$.
## 3 Discussion
### 3.1 The recipe
In the previous paragraph, we formulated the conjecture with an eye on formal accuracy and generality. For all practical purposes, however, it is more useful to translate it into a set of rules like the following recipe (see Fig. 1):
1. Pick any two-dimensional surface $`B`$ in the space-time $`M`$.
There will be four families of light-rays projecting orthogonally away from $`B`$ (unless $`B`$ is on a boundary of $`M`$): $`F_1,\dots ,F_4`$.
3. As shown in the previous section, we can assume without loss of generality that the expansion of $`F_1`$ has the same sign everywhere on $`B`$. If the expansion is positive (in the direction away from $`B`$), i.e., if the cross-sectional area is increasing, $`F_1`$ must not be used for an entropy/area comparison. If the expansion is zero or negative, $`F_1`$ will be allowed. Repeating this test for each family, one will be left with at least two allowed families. If the expansion is zero in some directions, there may be as many as three or four allowed families.
4. Pick one of the allowed families, $`F_i`$. Construct a null hypersurface $`L_i`$, by following each light-ray until one of the following happens:
1. The light-ray reaches a boundary or a singularity of the space-time.
2. The expansion becomes positive, i.e., the cross-sectional area spanned by the family begins to increase in a neighborhood of the light-ray.
The hypersurface $`L_i`$ obtained by this procedure will be called a light-sheet of the surface $`B`$. For every allowed family, there will be a different light-sheet.
5. The conjecture states that the entropy $`S_i`$ on the light-sheet $`L_i`$ will not exceed a quarter of the area of $`B`$:
$$S_i\le \frac{A}{4}.$$
(3.1)
Note that the bound applies to each light-sheet individually. Since $`B`$ may possess up to four light-sheets, the total entropy on all light-sheets could add up to as much as $`A`$.
We should add some remarks on points 2 and 3. In many situations it will be natural to call $`F_1,\dots ,F_4`$ the future-directed ingoing, future-directed outgoing, past-directed ingoing and past-directed outgoing family of surface-orthogonal geodesics. But we stress that “ingoing” and “outgoing” are just arbitrary labels distinguishing the two sides of $`B`$; only if $`B`$ is closed might there be a preferred way to assign these names. Our rule does not refer to “ingoing” and “outgoing” explicitly.
In fact the covariant entropy bound does not even refer to “future” and “past.” The conjecture is manifestly time reversal invariant. We regard this as its most significant property. After all, thermodynamic entropy is never T-invariant, and neither is the generalized second law of thermodynamics, which underlies Bekenstein’s bound. This will be discussed further in Sec. 3.4. One can draw strong conclusions from these simple observations (Sec. 7).
If $`B`$ is a closed surface, we can characterize it as trapped, anti-trapped or normal (see Refs. for definitions). This provides a simple criterion for the allowed families. If $`B`$ is trapped (anti-trapped), it has two future-directed (past-directed) light-sheets. If $`B`$ is normal, it has a future-directed and a past-directed light-sheet on the same side, which is usually called the inside. If $`B`$ lies on an apparent horizon (the boundary between a trapped or anti-trapped region and a normal region), it can have more than two light-sheets. For example, if $`B`$ is marginally outer trapped, i.e., if the future-directed outgoing geodesics have zero convergence, then it has two future-directed light-sheets and a past-directed ingoing light-sheet.
### 3.2 Caustics as light-sheet endpoints
We have defined a light-sheet to be a certain subset of the null hypersurface generated by an “allowed” family of light-rays. The rule is to start at the surface $`B`$ and to follow the light-rays only as long as the expansion is zero or negative. In order to understand if and why the conjecture may be true, we must understand perfectly well what it is that can cause the expansion to become positive. After all, it is this condition alone which must prevent the light-sheet from sampling too much entropy and violating the conjectured bound. Our conclusion will be that the expansion becomes positive only at caustics. The simplest example of such a point is the center of a sphere in Minkowski space, at which all ingoing light-rays intersect.
Raychaudhuri’s equation for a congruence of null geodesics with tangent vector field $`k^a`$ and affine parameter $`\lambda `$ is given by
$$\frac{d\theta }{d\lambda }=-\frac{1}{2}\theta ^2-\widehat{\sigma }_{ab}\widehat{\sigma }^{ab}-8\pi T_{ab}k^ak^b+\widehat{\omega }_{ab}\widehat{\omega }^{ab},$$
(3.2)
where $`T_{ab}`$ is the stress-energy tensor of matter. The expansion $`\theta `$ measures the local rate of change of an element of cross-sectional area $`𝒜`$ spanned by nearby geodesics:
$$\theta =\frac{1}{𝒜}\frac{d𝒜}{d\lambda }.$$
(3.3)
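For instance, for the ingoing light-rays orthogonal to a sphere of radius $`r`$ in Minkowski space, with the affine parameter normalized so that $`dr/d\lambda =-1`$, the cross-sectional area element scales as $`𝒜\propto r^2`$, and Eq. (3.3) gives

$$\theta =\frac{1}{𝒜}\frac{d𝒜}{d\lambda }=\frac{2}{r}\frac{dr}{d\lambda }=-\frac{2}{r},$$

which is negative everywhere and diverges at the caustic $`r=0`$, where the light-rays cross.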
The vorticity $`\widehat{\omega }_{ab}`$ and shear $`\widehat{\sigma }_{ab}`$ are defined in Refs. The vorticity vanishes for surface-orthogonal null congruences. The first and second term on the right hand side are manifestly non-positive. The third term will be non-positive if the null convergence condition holds:
$$T_{ab}k^ak^b\ge 0\text{for all null }k^a.$$
(3.4)
The dominant energy condition, which we are assuming, implies that the null convergence condition will hold. (It is also implied by the weak energy condition, or by the strong energy condition.)
Therefore the right hand side of Eq. (3.2) is non-positive. It follows that $`\theta `$ cannot increase along any geodesic. (This statement is self-consistent, since the sign of $`\theta `$ changes if we follow the geodesic in the opposite direction.) Then how can the expansion ever become positive? By dropping two of the non-positive terms in Eq. (3.2), one obtains the inequality
$$\frac{d\theta }{d\lambda }\le -\frac{1}{2}\theta ^2.$$
(3.5)
If the expansion takes the negative value $`\theta _0`$ at any point on a geodesic in the congruence, Eq. (3.5) implies that the expansion will become negative infinite, $`\theta \to -\mathrm{\infty }`$, along that geodesic within affine time $`\mathrm{\Delta }\lambda \le 2/|\theta _0|`$. This can be interpreted as a caustic. Nearby geodesics are converging to a single focal point, where the cross-sectional area $`𝒜`$ vanishes. When they re-emerge, the cross-sectional area starts to increase. Thus, the expansion $`\theta `$ jumps from $`-\mathrm{\infty }`$ to $`+\mathrm{\infty }`$ at a caustic. Then the expansion is positive, and we must stop following the light-ray. This is why caustics form the endpoints of the light-sheet.
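The affine time estimate follows by integrating Eq. (3.5): rewriting it as $`\frac{d}{d\lambda }(1/\theta )\ge \frac{1}{2}`$ and integrating from the point where $`\theta =\theta _0`$ gives

$$\frac{1}{\theta (\lambda )}\ge \frac{1}{\theta _0}+\frac{\lambda }{2}.$$

Since $`\theta `$ remains negative, $`1/\theta `$ must reach zero from below, which forces $`\theta \to -\mathrm{\infty }`$ at some $`\lambda \le 2/|\theta _0|`$.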
### 3.3 Light-sheet examples and first evidence
The considerations in Sec. 3.2 allow us to rephrase the rule for constructing light-sheets: Follow each light-ray in an allowed family until a caustic is reached. The effect of this prescription can be understood by thinking about closed surfaces in Minkowski space. The simplest example is a spherical surface. The past-directed outgoing and future-directed outgoing families are forbidden, because they have positive expansion. The past- and future-directed ingoing families are allowed. Both encounter caustics when they reach the center of the sphere. Thus they each sweep the interior of the sphere exactly once. If we deform the surface into a more irregular shape, such as an ellipsoid, there may be a line, or even a surface, of caustics at which the light-sheet ends. In some cases (e.g. if the surface is a box and does not enclose much matter), non-neighbouring light-rays may cross. This does not constitute a caustic,<sup>4</sup><sup>4</sup>4We thank Ted Jacobson for pointing this out. and the light-rays need not be terminated there. They can go on until they are bent into caustics by the matter they encounter. Thus some of the entropy may be counted more than once. This is merely a consequence of a desirable feature of our prescription: that it is local and applies separately to every infinitesimal part of any surface.
For a spherical surface surrounding a spherically symmetric body of matter, the ingoing light-rays will end on a caustic in the center, as for an empty sphere. If the interior mass distribution is not spherically symmetric, however, some light-rays will be deflected into angular directions, and will form “angular caustics” (see Fig. 2).
This does not mean that the interior will not be completely swept out by the light-sheet. Between two light-rays that get deflected into different overdense regions, there are infinitely many light-rays that proceed further inward. It does mean, however, that we have to follow some of the light-rays for a much longer affine time than we would in the spherically symmetric case.
This does not make a difference for static systems: they will be completely penetrated by the light-sheet in any case. In a system undergoing gravitational collapse, however, light-rays will hit the future singularity after a finite affine time. Consider a collapsing ball which is exactly spherically symmetric, and a future-ingoing light-sheet starting at the outer surface of the system, when it is already within its own Schwarzschild horizon. We can arrange things so that the light-sheet reaches the caustic at $`r=0`$ exactly when it also meets the singularity (see Fig. 3).
Now consider a different collapsing ball, of identical mass, and identical radius when the light-rays commence. While this ball may be spherically symmetric on the large scale, let us assume that it is highly disordered internally. The light-rays will thus be deflected into angular directions. As Fig. 2 illustrates, this means that they take intricate, long-winded paths through the interior: they “percolate.” This consumes more affine time than the direct path to the center taken in the first system. Therefore the second system will not be swept out completely before the singularity is reached (see Fig. 3).
Why did we spend so much time on this example? The first ball is a system with low entropy, while the second ball has high entropy. One might think that no kind of entropy bound can apply when a highly entropic system collapses: the surface area goes to zero, but the entropy cannot decrease. The above considerations have shown, however, that light-sheets percolate rather slowly through highly entropic systems, because the geodesics follow a kind of random walk. Since they end on the black hole singularity within finite affine time, they sample a smaller portion of a highly entropic system than they would for a more regular system. (In Fig. 2, the few light-rays that go straight to the center of the inhomogeneous system also take more affine time to do so than in a homogeneous system, because they pass through an underdense region. In a homogeneous system, they would encounter more mass; by Raychaudhuri’s equation, this would accelerate their collapse.) Therefore it is in fact quite plausible, if counter-intuitive, that the covariant entropy bound holds even during the gravitational collapse of a system initially saturating Bekenstein’s bound.
In Sec. 6, we will discuss additional constraints on the penetration depth of light-sheets in collapsing highly entropic systems, and a quantitative test will be performed.
### 3.4 Recovering Bekenstein’s bound
The covariant entropy conjecture can only be sensible if we can recover Bekenstein’s bound from it in an appropriate limit. For a Bekenstein system (see Sec. 1.1: a thermodynamic system of constant, finite size and limited self-gravity), the boundary area should bound the entropy in the spatial region occupied by the system. The covariant bound, on the other hand, uses null hypersurfaces to compare entropy and area. Then how can Bekenstein’s bound be recovered?
While null hypersurfaces are certainly required in general, it turns out that there is a wide class of situations in which an entropy bound on spacelike hypersurfaces can be inferred from the covariant entropy conjecture. We will now identify sufficient conditions and derive a theorem on the entropy of spatial regions. We will then show that Bekenstein’s entropy/area bound is indeed implied by the covariant bound, namely as a special case of the theorem.
Let $`A`$ be the area of a closed surface $`B`$ possessing at least one future-directed light-sheet $`L`$. Suppose that $`L`$ has no boundary other than $`B`$. Then we shall call the direction of this light-sheet the “inside” of $`B`$. Let the spatial region $`V`$ be the interior of $`B`$ on some spacelike hypersurface through $`B`$. If the region $`V`$ is contained in the causal past of the light-sheet $`L`$, the dominant energy condition implies that all matter in the region $`V`$ must eventually pass through the light-sheet $`L`$. Then the second law of thermodynamics implies that the entropy on $`V`$, $`S_V`$, cannot exceed the entropy on $`L`$, $`S_L`$. By the covariant entropy bound, $`S_L\le A/4`$. It follows that the entropy of the spatial region $`V`$ cannot exceed a quarter of its boundary area: $`S_V\le A/4`$.
The condition that the future-directed light-sheet $`L`$ contain no boundaries makes sure that none of the entropy of the spatial region $`V`$ escapes through holes in $`L`$. Neither can any of the entropy escape into a black hole singularity, because we have required that the spatial region must lie in the causal past of $`L`$. Since we are always assuming that the space-time is inextendible and that no naked singularities are present, all entropy on $`V`$ must go through $`L`$. We summarize this argument in the following theorem.
#### Spacelike Projection Theorem
Let $`A`$ be the area of a closed surface $`B`$ possessing a future-directed light-sheet $`L`$ with no boundary other than $`B`$. Let the spatial region $`V`$ be contained in the intersection of the causal past of $`L`$ with any spacelike hypersurface containing $`B`$. Let $`S`$ be the entropy on $`V`$. Then $`S\le A/4`$.
Now consider, in asymptotically flat space, a Bekenstein system in a spatial region $`V`$ bounded by a closed surface $`B`$ of area $`A`$. The future-directed ingoing light-sheet $`L`$ of $`B`$ exists (otherwise $`B`$ would not have “limited self-gravity”), and can be taken to end whenever two (not necessarily neighbouring) light-rays meet. Thus it will have no other boundary than $`B`$. Since the gravitational binding of a Bekenstein system is not strong enough to form a black hole, $`V`$ will be contained in the causal past of $`L`$. Therefore the conditions of the theorem are satisfied, and the entropy of the system must be less than $`A/4`$. We have recovered Bekenstein’s bound.
There are many other interesting applications of the theorem. In particular, it can be used to show that area bounds entropy on spacelike sections of anti-de Sitter space. This can be seen by taking $`B`$ to be any sphere, at any given time. The future-ingoing light-sheet of $`B`$ exists, and unless the space contains a black hole, has no other boundary. Its causal past includes the entire space enclosed by $`B`$. This remains true for arbitrarily large spheres, and in the limit as $`B`$ approaches the boundary at spatial infinity.
The theorem is immensely useful, because it essentially tells us under which conditions we can treat a region of space as a Bekenstein system. In general, however, the light-sheets prescribed by the covariant entropy bound provide the only consistent way of comparing entropy and area.
We pointed out in Sec. 3.1 that the covariant entropy bound is T-invariant. The spacelike projection theorem is not T-invariant; it refers to past and future explicitly. This is because the second law of thermodynamics enters its derivation. The asymmetry is not surprising, since Bekenstein’s bound, which we recovered by the theorem, rests on the second law. We should be surprised only when an entropy law is T-invariant. It is this property which forces us to conclude that the origin of the covariant bound is not thermodynamic, but statistical (Sec. 7).
## 4 Cosmological tests
The simplest way to falsify the conjecture would be to show that it conflicts directly with observation. In this section, we will apply the covariant bound to the cosmological models that our universe is believed to be described by (and to many more by which it certainly is not described). Since we claim that the conjecture is a universal law valid for all space-times satisfying Einstein’s equations (with the dominant energy condition holding for matter), it must be valid for cosmological solutions in particular. We will see that it passes the test, and more significantly, it just passes it.
We consider Friedmann-Robertson-Walker (FRW) cosmologies, which are described by a metric of the form
$$ds^2=-dt^2+a^2(t)\left(\frac{dr^2}{1-kr^2}+r^2d\mathrm{\Omega }^2\right).$$
(4.1)
An alternative form uses comoving coordinates:
$$ds^2=a^2(\eta )\left[-d\eta ^2+d\chi ^2+f^2(\chi )d\mathrm{\Omega }^2\right].$$
(4.2)
Here $`k=-1`$, $`0`$, $`1`$ and $`f(\chi )=\mathrm{sinh}\chi `$, $`\chi `$, $`\mathrm{sin}\chi `$ correspond to open, flat, and closed universes respectively.
The Hubble horizon is the inverse of the expansion rate $`H`$:
$$r_{\mathrm{HH}}=H^{-1}=\frac{a}{da/dt}.$$
(4.3)
The particle horizon is the distance travelled by light since the big bang:
$$\chi _{\mathrm{PH}}=\eta .$$
(4.4)
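Equivalently, since $`dt=a\mathrm{d}\eta `$ by Eq. (4.2), the particle horizon may be written in terms of proper time as

$$\chi _{\mathrm{PH}}(t)=\int _0^t\frac{dt^{\prime }}{a(t^{\prime })}.$$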
The apparent horizon is defined geometrically as a sphere at which at least one pair of orthogonal null congruences have zero expansion. It is given by
$$r_{\mathrm{AH}}=\frac{1}{\sqrt{H^2+\frac{k}{a^2}}}.$$
(4.5)
Using Friedmann’s equation,
$$H^2=\frac{8\pi \rho }{3}-\frac{k}{a^2},$$
(4.6)
one finds
$$r_{\mathrm{AH}}=\sqrt{\frac{3}{8\pi \rho }},$$
(4.7)
where $`\rho `$ is the energy density of matter.
We consider matter described by $`T_{ab}=\text{diag}(\rho ,p,p,p)`$, with pressure $`p=\gamma \rho `$. The dominant energy condition requires that $`\rho \ge 0`$ and $`-1\le \gamma \le 1`$. The case $`\gamma =-1`$ is special, because it leads to a different global structure from the other solutions. It corresponds to de Sitter space, which has no past or future singularity. This solution is significant because it describes an inflationary universe. We will comment on inflation at the end of Sec. 4.1.
We will test the conjecture on spherical surfaces $`B`$ characterized by some value of $`r`$, or of $`(\eta ,\chi )`$. As we found at the end of Sec. 3.1, the directions of the light-sheets of a surface depend on its classification as trapped, normal, or anti-trapped. In the vicinity of $`\chi =0`$ (and for closed universes, also on the opposite pole, near $`\chi =\pi `$), the spherical surfaces will be normal. The larger spheres beyond the apparent horizon(s) will be anti-trapped. Some universes, for example most closed universes, or a flat universe with negative cosmological constant, recollapse. Such universes necessarily contain trapped surfaces. But trapped regions can occur in any case by gravitational collapse. The surfaces in the interior of such regions provide a serious challenge for the covariant entropy conjecture, because their area shrinks to zero while the enclosed entropy cannot decrease. We address this problem in some generality in Sec. 6, where we argue that the conjecture holds even in such regions. Because of its importance, the special case of the adiabatic recollapse of a closed universe will be treated explicitly in Sec. 5.2. In the present section, we shall therefore discuss only normal and anti-trapped surfaces.
### 4.1 Anti-trapped surfaces
An anti-trapped surface $`B`$ contains two past-directed light-sheets. Unless $`B`$ lies within the particle horizon (of either pole in the closed case), both light-sheets will be “truncated” at the Planck era near the past singularity. The truncation has the desirable effect that the volume swept by the light-sheets grows not like $`A^{3/2}`$, but roughly like the area. In fact, the “ingoing” light-sheet coincides with the “truncated lightcones” used in the entropy conjecture of Fischler and Susskind , and the other light-sheet can be treated similarly. In open universes the bound will be satisfied more comfortably than in flat ones . The validity of the bound was checked for flat universes with $`0\gamma 1`$ in Ref. and for $`1<\gamma <0`$ in Ref. . Here we will give a summary of these results, and we will explain why reheating after inflation does not violate the covariant bound. Subtleties arising in closed universes will be discussed separately in Sec. 5.2.
The reason why the bound is satisfied near a past singularity is simple. The first moment of time that one can sensibly talk about is one Planck time after the singularity. At this time, there cannot have been more than one unit of entropy per Planck volume, up to a factor of order one. This argument does not involve any assumptions about “holography;” we are merely applying the usual Planck scale cutoff. We cannot continue light-sheets into regions where we have no control over the physics. A backward light-sheet of an area $`A`$ specified at $`t=2t_{\mathrm{Pl}}`$ will be truncated at $`t=t_{\mathrm{Pl}}`$. It will sweep a volume of order $`Al_{\mathrm{Pl}}`$ and the entropy bound will at most be saturated.
Let us define $`\sigma `$ as the entropy/area ratio at the Planck time. Consider a universe filled with any type of matter allowed by the dominant energy condition, $`-1\le \gamma \le 1`$. We may exclude $`\gamma =-1`$, since de Sitter space does not contain a singularity in the past. The scale factor will be given by $`a(t)=t^{\frac{2}{3(\gamma +1)}}`$. If the evolution is adiabatic, the ratio of entropy to area behaves as
$$\frac{S}{A}\sim \sigma \,t^{\frac{\gamma -1}{\gamma +1}}$$
(4.8)
for any past-directed light-sheet of an area specified at a later time $`t`$. (Equality holds, e.g., for spherical areas at least as large as the particle horizon.) The exponent is non-positive, so the entropy/area ratio does not increase. Since the bound is satisfied at the Planck time, it will remain satisfied later.
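The non-positivity of the exponent can be verified explicitly. In the following sketch (ours, not the paper's; the intermediate scalings are our reconstruction of the argument above) the comoving entropy is held fixed and the sphere is taken at the particle horizon, $`\chi \sim \eta `$:

```python
import sympy as sp

gamma = sp.Symbol('gamma')

q = sp.Rational(2, 3)/(gamma + 1)   # a(t) = t**q for -1 < gamma <= 1
# Conformal time: eta ~ t**(1-q). Adiabaticity keeps the comoving entropy
# fixed, so on a particle-horizon sphere S ~ chi**3 while A ~ a**2 * chi**2,
# hence S/A ~ chi/a**2 ~ t**(1 - 3q).
exponent = 1 - 3*q

assert sp.simplify(exponent - (gamma - 1)/(gamma + 1)) == 0   # Eq. (4.8)
```

For $`-1<\gamma \le 1`$ the exponent $`(\gamma -1)/(\gamma +1)`$ is indeed non-positive, vanishing only in the stiff-fluid limit $`\gamma =1`$.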
Another way to see this is to consider the particle horizon $`n`$ (e.g. $`n=10`$) Planck times after the big bang of a flat FRW universe. Its area will be $`O(n^2)`$ and its past-directed light-sheet will sweep $`O(n^3)`$ Planck volumes. Let us assume that each of these Planck volumes contains one unit of entropy; then the bound would be violated. But most of these volumes are met by the light-sheet at a time later than $`t_{\mathrm{Pl}}`$. Therefore there must have been, at the Planck time, Planck volumes containing more than a unit of entropy, which is impossible. (Note that this argument would break down if we allowed naked singularities!)
If the evolution is non-adiabatic, the entropy bound nevertheless predicts that $`S/A\le 1/4`$, implying that there is a limit on how rapidly the universe can produce entropy. This will be further discussed in Sec. 4.2.
#### Inflation
Of course the notion that the standard cosmology extends all the way back to the Planck era is not seriously tenable. In order to understand essential properties of our universe, such as its homogeneity and flatness, and its perturbation spectrum, it is usually assumed that the radiation dominated era was preceded by a vacuum dominated era.
Inflation ends on a spacelike hypersurface $`V`$ at $`t=t_{\mathrm{reheat}}`$. At this time, all the entropy in the universe is produced through reheating. Both before and after reheating, all spheres will be anti-trapped except in a small neighbourhood of $`r=0`$ (or small neighbourhoods of the poles of the $`S^3`$ spacelike slice in a closed universe), of the size of the apparent horizon. Therefore the spacelike projection theorem does not apply to any but the smallest of the surfaces $`B\subset V`$. The other spheres may be exponentially large, but the covariant conjecture does not relate their area to the enclosed entropy. The size and total entropy of the reheating hypersurface $`V`$ is thus irrelevant.
Outside the apparent horizon, entropy/area comparisons can only be done on the light-sheets specified in the conjecture. Indeed, the past-directed light-sheets of anti-trapped surfaces of the post-inflationary universe do intersect $`V`$. Since there is virtually no entropy during inflation, we can consider the light-sheets to be truncated by the reheating surface. Because inflation cannot produce more than one unit of entropy per Planck volume, the bound will be satisfied by the same arguments that were given above for universes with a big bang.
### 4.2 Normal surfaces
Spatial regions enclosed by normal surfaces will turn out, in a sense specified below, to be analogues of “Bekenstein systems” (see Sec. 1.1). In order to understand this, we must establish a few properties of Bekenstein systems. For quick reference, we will call the first bound, Eq. (1.1), Bekenstein’s entropy/mass bound, and the second bound, Eq. (1.3), Bekenstein’s entropy/area bound. For a spherical Bekenstein system of a given radius and mass, the entropy/mass bound is always at least as tight as the entropy/area bound. This is because a Bekenstein system must be gravitationally stable ($`M\le R/2`$), which implies $`2\pi RM\le \pi R^2=A/4`$. A spherical system that saturates Bekenstein’s entropy/area bound will be called a saturated Bekenstein system. The considerations above imply that this system will also saturate the entropy/mass bound. In semi-classical gravity, a black hole, viewed from the outside, is an example of a saturated Bekenstein system; but for many purposes it is simpler to think of an ordinary, maximally entropic, spherical thermodynamic system just on the verge of gravitational collapse. A system that saturates the entropy/mass bound but not the entropy/area bound will be called a mass-saturated Bekenstein system. If we find a spherical thermodynamic system which saturates the entropy/area bound but does not saturate the entropy/mass bound, we must conclude that it was not a Bekenstein system in the first place.
The normal region contains a past- and a future-directed light-sheet. Both of them are ingoing, i.e. they are directed towards the center of the region at $`\chi =0`$ (or $`\chi =\pi `$ for the normal region near the opposite pole in closed universes). This is crucial. If outgoing light-sheets existed even as $`\chi \to 0`$, the area would become arbitrarily small while the entropy remained finite. This is one of the reasons why the Fischler-Susskind proposal does not apply to closed universes (see Sec. 5.2). Except for this constraint, the past-directed light-sheet coincides with the light-cones used by Fischler and Susskind. The entropy/area bound has been shown to hold on such surfaces . We will therefore consider only the future-directed light-sheet.
The future-directed light-sheet covers the same comoving volume as the past light-sheet. Therefore the covariant entropy bound will be satisfied on it if the evolution is adiabatic. But we certainly must allow the possibility that additional entropy is produced. Consider, for example, the outermost surface on which future-directed light-sheets are still allowed, a sphere $`B`$ on the apparent horizon. Suppose that an overfunded group of experimental cosmologists within the apparent horizon are bent on breaking the entropy bound. They must try to produce as much entropy as possible before the matter passes through the future-directed light-sheet $`L`$ of $`B`$. What is their best strategy?
Note that they cannot collect any mass from outside $`B`$, because $`L`$ is a null hypersurface bounded by $`B`$ and the dominant energy condition holds, preventing spacelike energy flow. The most entropic system is a saturated Bekenstein system, so they should convert all the matter into such systems. (As discussed above, saturated Bekenstein systems are just on the verge of gravitational collapse and contain the same amount of entropy as a black hole of the same mass and radius. By using ordinary thermodynamic systems instead of black holes, one ensures that the light-sheet actually permeates the systems completely and samples all the entropy, rather than being truncated by the black hole singularity; see Secs. 3.3 and 6 for a discussion of these issues.)
If all matter is condensed into several small highly entropic systems, they will be widely separated, i.e. surrounded by empty regions of space which are large, flat, and static compared to the length scale of any individual system. By the dominant energy condition, no negative energy is present. Therefore the conditions given in Sec. 1.1 are fulfilled. We are thus justified in applying Bekenstein’s bounds to the systems, and we should use the entropy/mass bound, Eq. (1.1), because it is tighter. In order to create the maximum amount of entropy, however, it is best to put the matter into a few large saturated Bekenstein systems, rather than many small ones. Therefore we should take the limit of a Bekenstein system as large as the apparent horizon and containing the entire mass within it. Of course the question arises whether the calculation remains consistent in this limit, both in its treatment of the interior as a Bekenstein system, and in its evaluation of the mass. We will show below, purely from the point of view of Bekenstein’s bound, that the interior of the apparent horizon is in fact the largest system for which Bekenstein’s conditions can be considered to hold; for larger systems, inconsistencies arise. In Sec. 5.3, we will use the spacelike projection theorem to arrive at the same conclusion within the framework of the covariant entropy conjecture.
We have cautioned in Sec. 2 against using an entropy/mass bound in general space-times, because there is no concept of local energy density. In this case, however, the mass is certainly well defined before we take the limit of a single Bekenstein system, because the many saturated systems will be widely separated and can be treated as immersed in asymptotically flat space. In the limit of a single system, we can follow Ref. and treat the interior of the apparent horizon as part of an oversized spherical star. (We are thus pretending that somewhere beyond the apparent horizon, the space-time may become asymptotically flat and empty. This is not an inconsistent assumption as long as Bekenstein’s conditions are satisfied; we show below that this is indeed the case. Related ideas, not referring specifically to the apparent horizon, underlie some proposals for cosmological entropy bounds given in Refs. .) Then we can apply the usual mass definition for spherically symmetric systems to the interior region.
The circumferential radius of our system is the apparent horizon radius, and is given by Eq. (4.7):
$$r_{\mathrm{AH}}=\sqrt{\frac{3}{8\pi \rho }}.$$
(4.9)
The mass inside the apparent horizon is given by
$$M(r_{\mathrm{AH}})=\int _0^{r_{\mathrm{AH}}}4\pi r^2\rho \,dr=\frac{4\pi }{3}r_{\mathrm{AH}}^3\rho .$$
(4.10)
This yields $`r_{\mathrm{AH}}=2M(r_{\mathrm{AH}})`$. By Bekenstein’s entropy/mass bound, Eq. (1.1), the entropy cannot exceed $`2\pi M(r_{\mathrm{AH}})r_{\mathrm{AH}}`$. Thus we find
$$S_{\mathrm{max}}=\pi r_{\mathrm{AH}}^2.$$
(4.11)
This is exactly a quarter of the area of the apparent horizon.
Eq. (4.7) follows from the Friedmann equation (which involves only the density but not the pressure), and from Eq. (4.5), which is a geometric property of the FRW metrics. Thus the calculation holds independently of the equation of state. We have not dropped any factors of order one, and attained exactly the saturation of the bound. This would not be the case for the Hubble horizon or the particle horizon. We conclude that the entropy on the future-directed light-sheet $`L`$ will not exceed a quarter of the area of the boundary $`B`$. The covariant entropy bound may be saturated on the apparent horizon, but it will not be violated, no matter how hard we try to produce entropy.
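The chain of equalities can be condensed into a few lines of computer algebra (a check we add; nothing here goes beyond Eqs. (4.9)–(4.11)):

```python
import sympy as sp

rho = sp.Symbol('rho', positive=True)

r_AH = sp.sqrt(3/(8*sp.pi*rho))            # Eq. (4.9)
M = sp.Rational(4, 3)*sp.pi*r_AH**3*rho    # Eq. (4.10)

assert sp.simplify(r_AH - 2*M) == 0        # r_AH = 2 M(r_AH)

S_max = 2*sp.pi*M*r_AH                     # Bekenstein's entropy/mass bound
A = 4*sp.pi*r_AH**2
assert sp.simplify(S_max - A/4) == 0       # Eq. (4.11): exact saturation
```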
The property $`r=2M(r)`$ is special to the apparent horizon. It suggests that we should consider the interior of the apparent horizon to be the largest region with non-dominant self-gravity, and thus the largest system to which Bekenstein’s bound can be applied. This statement can be made more precise. The property $`r_{\mathrm{AH}}=2M(r_{\mathrm{AH}})`$ means that if we treat the interior as a Bekenstein system, it can be saturated, not just mass-saturated (see the definitions at the beginning of this subsection). If we chose a smaller surface $`r_X<r_{\mathrm{AH}}`$, the enclosed mass would be less than half of the radius, $`2M<r_X`$. In this case, (area-)saturation would not be possible, but only mass-saturation. This is because the entropy/mass bound yields $`2\pi Mr_X<\pi r_X^2=A/4`$. We conclude that the covariant entropy bound, which uses area, cannot be saturated on such surfaces. On the other hand, surfaces outside the apparent horizon, $`r_X>r_{\mathrm{AH}}`$, possess no future-directed light-sheet to which we could apply the covariant bound. We find this reflected in the property $`2M>r_X`$ for such surfaces. It implies that one could build a mass-saturated spherical system which breaks Bekenstein’s entropy/area bound: $`2\pi Mr_X>\pi r_X^2`$. As we discussed at the beginning of this subsection, this indicates a breakdown of the treatment of the enclosed region as a Bekenstein system. It follows that the apparent horizon is the largest sphere whose interior may be treated as a Bekenstein system. For larger systems, Bekenstein’s assumptions would not be self-consistent.
From the point of view of the covariant entropy conjecture, there are well-defined sufficient conditions for the treatment of a spatial region as a Bekenstein system, namely those spelled out in the spacelike projection theorem (Sec. 3.4). This will be applied to cosmology in the Sec. 5.1. We will show formally that the region inside the apparent horizon is indeed the largest region for which Bekenstein’s entropy/area bound may be guaranteed to hold, if certain additional conditions are met.
## 5 Cosmological entropy bounds
### 5.1 A cosmological corollary
In Sec. 4.2, we tested the covariant entropy conjecture on future-directed light-sheets in normal regions, by assuming maximal entropy production in the interior spatial region. We found that the bound may be saturated, but not violated. We will now switch viewpoints, assume that the covariant entropy conjecture is a correct law, and derive an entropy bound for spatial regions in cosmology.
Normal regions offer an interesting application of the spacelike projection theorem (Sec. 3.4). It tells us under which conditions we can treat the interior of the apparent horizon as a Bekenstein system. Let $`A`$ be the area of a sphere $`B`$, on or inside of the apparent horizon. (“Inside” has a natural meaning in normal regions.) Then the future-directed ingoing light-sheet $`L`$ exists. Let us assume that it is complete, i.e., $`B`$ is its only boundary. (This condition is fulfilled, e.g., for any radiation or dust dominated FRW universe with no cosmological constant.) Let $`V`$ be a region inside of and bounded by $`B`$, on any spacelike hypersurface containing $`B`$. If no black holes are produced, $`V`$ will be in the causal past of $`L`$, and the conditions for the spacelike projection theorem are satisfied. Therefore, the entropy on $`V`$ will not exceed $`A/4`$. In particular, we may choose $`B`$ to be on the apparent horizon, and $`V`$ to be on the spacelike slice preferred by the homogeneity of the FRW cosmologies. We summarize these considerations in the following corollary:
#### Cosmological Corollary
Let $`V`$ be a spatial region within the apparent horizon of an observer. If the future-directed ingoing light-sheet $`L`$ of the apparent horizon has no other boundaries, and if $`V`$ is entirely contained in the causal past of $`L`$, then the entropy on $`V`$ cannot exceed a quarter of the area of the apparent horizon.
By definition, the spheres beyond the apparent horizon are anti-trapped and possess no future-directed light-sheets. Therefore the spacelike projection theorem does not apply, and no statement about the entropy enclosed in spatial volumes can be made. (Of course, we can still use their area to bound the entropy on their light-sheets.) Thus the covariant entropy conjecture singles out the apparent horizon as a special surface. It marks the largest surface to which the spacelike projection theorem can possibly apply, and hence the region inside it is the largest region one can hope to treat as a Bekenstein system. This conclusion agrees with the result obtained in the previous subsection from a consistency analysis of Bekenstein’s assumptions for regions larger than the apparent horizon.
As an example, the conditions of this corollary are satisfied by the apparent horizon of de Sitter space. It coincides with the cosmological horizon, at $`r=(3/\mathrm{\Lambda })^{1/2}`$. Thus the entropy within the cosmological horizon cannot exceed $`3\pi /\mathrm{\Lambda }`$. The corollary will be further applied in Sec. 5.3.
The corollary tells us if and how Bekenstein’s bound can be applied to spatial regions in cosmological solutions. It follows from, but is not equivalent to, the covariant entropy bound. Like the spacelike projection theorem, the corollary is a statement of limited scope. It contains no information on how to relate entropy to the area of trapped or anti-trapped surfaces in the universe, and even for surfaces within the apparent horizon, a “spacelike” bound applies only under certain conditions. Thus, the role of the corollary is to define the range of validity of Bekenstein’s entropy/area bound in cosmological solutions. Precisely for this reason, the corollary is far less general than the covariant conjecture, which associates at least two hypersurfaces with any surface in any space-time, and bounds the entropy on those hypersurfaces.
### 5.2 The Fischler-Susskind bound
Among the recently proposed cosmological entropy bounds , the prescription of Fischler and Susskind (FS) is distinct in that it attempts to relate entropy to every spherical surface in the universe, namely the entropy on the past-directed “ingoing” null hypersurface. The covariant entropy conjecture is very much in this spirit; but we have changed the prescription from “past-ingoing” to a general selection rule determining at least two light-sheets on which entropy can be compared to area. The FS hypersurface will often be one of them, but not always. The limitations of the FS proposal can be understood in terms of this selection rule.
Consider the entropy on the null hypersurface formed by the particle horizon, emanating from the South pole in a closed, adiabatic, dust-dominated FRW universe . The solution is given by
$$a(\eta )=\frac{a_{\mathrm{max}}}{2}(1-\mathrm{cos}\eta ),\qquad t(\eta )=\frac{a_{\mathrm{max}}}{2}(\eta -\mathrm{sin}\eta ).$$
(5.1)
The entropy may not exceed one per Planck volume at the Planck time, $`\eta _{\mathrm{Pl}}\sim a_{\mathrm{max}}^{-1/3}`$. Therefore the total entropy of the universe will not be larger than $`S_{\mathrm{max}}\sim a(\eta _{\mathrm{Pl}})^3\sim a_{\mathrm{max}}`$.
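The estimate can be verified symbolically (our addition; the factor of $`1/64`$ is an irrelevant number of order one):

```python
import sympy as sp

a_max, eta = sp.symbols('a_max eta', positive=True)

a = a_max/2*(1 - sp.cos(eta))                        # Eq. (5.1)
# Near the big bang: a ~ a_max*eta**2/4 and t ~ a_max*eta**3/12,
# so t ~ t_Pl = 1 corresponds to eta_Pl ~ a_max**(-1/3).
a_small = sp.series(a, eta, 0, 3).removeO()
assert sp.simplify(a_small - a_max*eta**2/4) == 0

S_max = (a_small**3).subs(eta, a_max**sp.Rational(-1, 3))
assert sp.simplify(S_max - a_max/64) == 0            # S_max ~ a_max
```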
The particle horizon has $`\eta =\chi `$, while the apparent horizon is given by $`\eta =2\chi `$ (see Fig. 4).
Thus the particle horizon will initially be outside the apparent horizon, in an anti-trapped region. Any surfaces met by the particle horizon in this region (such as $`B_1`$) possess a past-directed light-sheet that coincides with the particle horizon. Therefore the FS bound and the covariant bound should both be satisfied. Before the time of maximum expansion ($`\eta =\pi `$), the particle horizon reaches the sphere $`B_2`$ at $`\eta =\chi =2\pi /3`$. Let us verify explicitly that the bounds are still satisfied there. At this time, the particle horizon covers two-thirds of the total entropy of the universe, which will be of order $`a_{\mathrm{max}}`$. Its area, however, is $`4\pi (3/4)^3a_{\mathrm{max}}^2`$, which is much larger.
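The numbers quoted for $`B_2`$ follow directly from Eq. (5.1); here is a one-line verification (ours):

```python
import sympy as sp

a_max = sp.Symbol('a_max', positive=True)
eta = chi = 2*sp.pi/3                      # the sphere B_2

a = a_max/2*(1 - sp.cos(eta))              # Eq. (5.1)
area = 4*sp.pi*a**2*sp.sin(chi)**2         # A = 4*pi*a(eta)**2 sin(chi)**2

assert sp.simplify(area - 4*sp.pi*sp.Rational(3, 4)**3*a_max**2) == 0
```

Since the total entropy is only of order $`a_{\mathrm{max}}`$, the area $`4\pi (3/4)^3a_{\mathrm{max}}^2`$ wins by a factor of order $`a_{\mathrm{max}}`$ for a macroscopically large universe.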
For $`\eta =\chi >2\pi /3`$ the particle horizon ends on normal, rather than anti-trapped spheres. It has entered a normal region surrounding the North pole and bounded by a different apparent horizon, $`\eta =2(\pi -\chi )`$. The surfaces in this region contain a future- and a past-directed light-sheet, both pointing towards the North pole. The particle horizon, which goes to the South pole, is now one of the “forbidden” families of light-rays (Sec. 3.1). According to the covariant bound, the entropy contained on it has nothing to do with the surface that bounds it. Indeed, its area approaches zero when it encompasses nearly the entire entropy of the universe ($`B_3`$). The FS bound cannot be applied here. It would continue to compare entropy and area of the particle horizon, and would be violated .
In Sec. 2, the selection rule was motivated by the requirement that one should compare the area of a surface only to the entropy that is, in some sense, inside of it, not outside. This consideration is paying off here. The spheres very close to the North pole, like any other $`S^2`$ on an $`S^3`$, enclose two regions. Because the region including the North pole is much smaller than the region including the South pole, one would like to call the former the “inside” and the latter the “outside.” The selection rule turns this intuitive notion into a covariant definition, which takes into account not only the shape of space but also its dynamics. Trapped surfaces, for example, have light-sheets on both sides, but only future-directed ones. This makes sense because in a collapsing region, loosely speaking, the direction in which surfaces are getting smaller is the future.
Returning to the example of a closed, adiabatic, dust-dominated FRW universe, consider a surface $`B_4`$ in the trapped region, near the equator of the $`S^3`$ spacelike surfaces (see Fig. 4). By choosing $`B_4`$ to be very close to the future singularity, its area can be made arbitrarily small. The FS hypersurface would go to the past and would pick up nearly half of the total entropy, so the FS bound cannot be used in this region. The covariant bound remains valid, because it applies only to the future-directed null hypersurfaces, which are soon truncated by the singularity.
### 5.3 Other cosmological entropy bounds
Other proposals for cosmological entropy bounds are based on the idea of defining a limited spatial region to which Bekenstein’s bound can be applied.<sup>5</sup><sup>5</sup>5We should point out that the first application of any entropy bound to cosmology was by Bekenstein, in Ref. . The definitions refer variously to the Hubble horizon (a region of size $`H^{-1}=a/\dot{a}`$) , a region of size $`t`$ , and, remarkably, the apparent horizon . In its simplified version, the Fischler-Susskind proposal can be included in this class, as referring to the region within the particle horizon .
The prescriptions do not aim to relate entropy to any surfaces larger than the specified ken. Also, most do not claim validity during the collapsing era of a closed universe, and none can be applied in arbitrary gravitationally collapsing systems. The covariant entropy bound differs from this approach in that it attempts generality: It associates hypersurfaces with any surface in any space-time and bounds the entropy contained on those hypersurfaces.
The proposed cosmological bounds are very useful for estimating the maximal entropy in limited regions of cosmological solutions. Even if a horizon other than the apparent horizon is used, one may still obtain correct results at least within factors of order one. In order to avoid pitfalls, however, they must be used carefully (as many authors have stressed). The corollary derived in Sec. 5.1 contains precise conditions determining whether, and to which regions of the universe, Bekenstein’s bound can be applied.
Consider, for example, the bound of Bak and Rey , which refers explicitly to the spatial region within the apparent horizon. One might be tempted to consider this as a special case of the covariant entropy bound, in the sense of the corollary derived in Sec. 5.1. Then it should always be valid. But Kaloper and Linde have shown that this bound is exceeded in flat, adiabatic FRW universes with an arbitrarily small but non-vanishing negative cosmological constant. So what has gone wrong?
As the Penrose diagram for this space-time (Fig. 5)
shows, the universe starts out much like an ordinary flat FRW universe. As the normal matter is diluted, it eventually becomes dominated by the negative cosmological constant. This slows down the expansion of the universe so much that it starts to recollapse. The evolution is symmetric about the turn-around time. Eventually, matter dominates again and the universe ends in a big crunch. As the turnaround time is approached, the apparent horizon moves out to spatial infinity , and the enclosed volume grows without bound. The entropy density is constant, and the area grows more slowly than the volume. Thus the entropy eventually exceeds the Bak-Rey bound.
But the cosmological corollary (Sec. 5.1) states that one can use the interior of the apparent horizon for an entropy/area bound only if the future-directed ingoing light-sheet is complete. As Fig. 5 shows, this ceases to be the case before the turnaround time. After the time $`t_1`$, where $`t_1<t_{\mathrm{turnaround}}`$, the future-directed light-sheet of the apparent horizon will have another boundary, namely on the future singularity of the space-time. Thus the conditions for the spacelike projection theorem, and the corollary it implies, are no longer met. The entropy in the spatial interior of the apparent horizon will not be bounded by its area in this region.
The covariant entropy conjecture states that the entropy on the future-directed ingoing light-sheet, as well as on the two past-directed light-sheets of the apparent horizon, will each be less than $`A/4`$. Because the future-directed light-sheet is truncated by the future singularity, and the past-directed light-sheets are truncated by the past singularity, the comoving volume swept out by any of these light-sheets grows only like the area as one moves further along the apparent horizon. Thus there is no contradiction with the covariant bound.
Neither is there any contradiction with the calculation performed in Sec. 4.2, which concluded that Bekenstein’s bound can be applied to the region within the apparent horizon. This calculation was done within the framework of Bekenstein’s conditions, and thus explicitly assumed the positivity of energy, which is violated here. As we pointed out in Sec. 1.1, Bekenstein’s bound cannot be applied to regions containing a negative energy component, such as a negative cosmological constant. This indicates a striking difference to the covariant bound. While we have formally assumed the positivity of energy as a condition for the covariant entropy bound, we have argued in Sec. 2 that the validity of the bound may well extend to space-times with a negative cosmological constant. We find ourselves encouraged by this example.
## 6 Testing the conjecture in gravitational collapse
In Sec. 4 we tested the covariant entropy conjecture for anti-trapped and normal surfaces in cosmological space-times. We found that even the most non-adiabatic processes can only saturate, but not violate, the bound. We now turn to trapped surfaces, which occur not only in collapsing universes, but arise generally during gravitational collapse and inside black holes.
Like anti-trapped regions, trapped regions are manifestly dominated by self-gravity, and Bekenstein’s bound will be of little help. The covariant entropy bound must be justified by other considerations. This is a more subtle problem for trapped, than for anti-trapped surfaces. It was reasonable to require (Sec. 4.1) that the entropy one Planck time away from a past singularity cannot exceed one per Planck volume; this merely amounts to a sensible specification of initial conditions. Because of the rapid expansion, the entropy/area ratio decreases at later times, and a situation in which the covariant bound would be exceeded does not arise. But near future singularities, one cannot use the time-reverse of this argument to “retrodict” that some experimental setup was impossible to start with. Initial conditions are set in the past, not in the future.
At first sight it seems obvious that the covariant entropy bound, like any entropy/area bound, will be violated in trapped regions. Consider a saturated Bekenstein system of area $`A_0`$, in which gravitational collapse is induced. The system will shrink, but by the second law, the entropy will not decrease. A short time after the beginning of the collapse, the surface of the system will have an area $`A_1<A_0`$. Because the surface is trapped, the past-directed ingoing light-rays have positive expansion and cannot be considered. But there will be a future-directed ingoing light-sheet. If this light-sheet penetrated the entire system, it would contain an entropy of $`S\sim A_0/4>A_1/4`$ and the bound would be exceeded. There are a number of effects, however, which constrain the extent to which the light-sheet can sample entropy. We will discuss them qualitatively, before turning to a quantitative test in Sec. 6.2.
### 6.1 Light-sheet penetration into collapsing systems
When the boundary of the system is a sufficiently small proper time away from the future singularity, the light-sheet will not intersect the whole system, because it will be truncated by the singularity (or a surface where the Planck density is reached), much like the past light-sheet of spheres larger than the particle horizon in a flat FRW universe. Truncation is a basic limitation , but additional constraints will be needed.
Consider the Oppenheimer-Snyder collapse of a dust ball , commencing from a momentarily static state with $`R=2M`$, as shown in Fig. 6.
A future-directed ingoing light-sheet, starting at the surface $`B`$ at a sufficiently early time but inside the event horizon, can easily traverse the ball before the singularity (or the Planck density, at $`R\sim M^{1/3}`$) is reached. But this light-sheet would endanger the bound only if the system collapsed from a state in which it nearly saturated Bekenstein’s bound. So how much entropy does the dust star actually contain? Strictly, the Oppenheimer-Snyder solution describes a dust ball at zero temperature. Since it also must be exactly homogeneous, it contains not even the usual positional entropy equal to the particle number. Thus the entropy is zero. In order to introduce a sizable amount of entropy, we have to violate the conditions under which the solution is valid: homogeneity and zero temperature. This collapse will be described by a different solution, for which a detailed calculation would have to be done to determine the penetration depth of light-sheets.
By definition, highly entropic systems undergoing gravitational collapse are very irregular internally and contain strong small scale density perturbations. This will make the collapse inhomogeneous, with some regions reaching the singularity after a shorter proper time than other regions. One might call this effect Local Gravitational Collapse. A saturated Bekenstein system is globally just on the verge of gravitational collapse: $`R=2M`$. But as it contracts, individual parts of the system, of size $`\mathrm{\Delta }R<R`$, will become gravitationally unstable: $`\mathrm{\Delta }R\lesssim 2\mathrm{\Delta }M`$. Particles in an overdense region will reach a singularity after a proper time of order $`\mathrm{\Delta }R^2/\mathrm{\Delta }M`$, which is shorter than the remaining lifetime of average regions, $`R^2/M`$. This makes it more difficult for a light-sheet to penetrate the system completely, unless it originates near the beginning of the collapse, when the area is still large.
The internal irregularity of highly entropic systems also enhances the effect of Percolation, discussed in Sec. 3.3. Inhomogeneities that break spherical symmetry will deflect the rays in the light-sheet and cause dents in their spherical cross-sections. Such dents will develop into “angular” caustics. At any caustic, the light-sheet ends, because the expansion becomes positive (Sec. 3.2). Any light-rays that do not end on caustics will follow an irregular path through the object, similar to a random walk. Since they waste affine time on covering angular directions, they may not proceed far into the object before the singularity is reached. Thus it may well be impossible for a light-sheet to penetrate through a collapsing, highly entropic system far enough to sample excessive entropy.
The quantitative investigation of the formation of angular caustics on light-sheets penetrating collapsing, highly entropic systems lies beyond the scope of this paper. A strong, quantitative case for the validity of the bound may still be made by eliminating the percolation effect. We will consider a system containing only radial modes. This system is spherically symmetric even microscopically, and cannot deflect light-rays into angular directions. With no constraints on mass and size, it can contain arbitrary amounts of entropy, but cannot lead to angular caustics on the light-sheet.
### 6.2 A quantitative test
Consider a Schwarzschild black hole of horizon size $`r_0`$ (see Fig. 7).
Let $`B`$ be a sphere on the apparent horizon at some given time; thus $`B`$ has area
$$A=4\pi r_0^2.$$
(6.1)
The surface $`B`$ is marginally outer trapped and possesses three light-sheets. The past-directed ingoing light-sheet, $`L_1`$, counts the entropy $`S_{\mathrm{in}}`$ that went into the black hole. The generalized second law of thermodynamics guarantees that the entropy conjecture will hold on this light-sheet, since
$$A/4=S_{\mathrm{bh}}\ge S_{\mathrm{in}}.$$
(6.2)
The future-directed ingoing light-sheet, $`L_2`$, may intersect some or all of the collapsing object that formed the black hole. The future-directed outgoing light-sheet $`L`$ intersects objects falling into the black hole at a later time. The validity of the conjecture for the future-directed light-sheets was supported by qualitative arguments in the previous subsection. For a quantitative test, we shall now try to send excessive entropy through $`L`$.
$`L`$ has zero expansion and defines the black hole horizon as long as no additional matter falls in. When $`L`$ does encounter matter, the expansion becomes negative and $`L`$ collapses to $`r=0`$ within a finite affine time. Our strategy will be to use an infalling shell of matter to squeeze as much entropy as possible across $`L`$ before the light-sheet ends on a caustic or a singularity. We shall not have any scruples about making the mass of the shell extremely large compared to the mass of the black hole. By preparing the shell far outside the black hole and letting it collapse, we can thus transport an arbitrary amount of entropy to $`r_0`$. This means, of course, that the shell may be well inside its own Schwarzschild radius by the time it reaches $`L`$; but by the second law, this cannot reduce the entropy it carries. What we must ensure, however, is that $`L`$ actually penetrates the entire shell and samples all of its entropy. We will now show that this requirement keeps the shell entropy within the conjectured bound.
Let $`\theta `$ be the expansion of the null generators of $`L`$. Its rate of change is given by Raychaudhuri’s equation, Eq. (3.2), from which we obtain the inequality
$$\frac{d\theta }{d\lambda }\le -\frac{1}{2}\theta ^2-8\pi T_{ab}k^ak^b.$$
(6.3)
The first term on the right hand side is non-positive, and the dominant energy condition is sufficient to ensure that the second term is also non-positive. Initially, $`\theta =0`$, since $`L`$ is an apparent horizon. Now consider a shell of matter, of mass $`M`$, crossing the light-sheet $`L`$. We would like to make the shell as wide as possible in order to store a lot of entropy. But we also must keep it sufficiently thin, so that the light-sheet does not collapse due to the first term in Eq. (6.3), before the shell has completely crossed it.
The maximum width of the shell can be easily estimated. Consider an infinitely thin shell of mass $`M`$ falling towards the black hole at $`r_0`$. Outside the shell, the metric will be given by a Schwarzschild black hole of mass
$$\stackrel{~}{M}=M+\frac{r_0}{2},$$
(6.4)
by Birkhoff’s theorem. Once the shell has crossed the light-sheet $`L`$ at $`r_0`$, the null generators of $`L`$ will be moving in a Schwarzschild interior of mass $`\stackrel{~}{M}`$. Therefore this meeting occurs at proper time
$$\mathrm{\Delta }\tau _{\mathrm{dead}}=-r_0-2\stackrel{~}{M}\mathrm{ln}\left(1-\frac{r_0}{2\stackrel{~}{M}}\right)\approx \frac{r_0^2}{4\stackrel{~}{M}}$$
(6.5)
before the generators reach a singularity. We may approximate a shell of finite proper thickness $`w`$ by an infinitely thin shell of the same mass, located $`w/2`$ from either side. We are thus pretending that the shell mass contributes to the second term of the Raychaudhuri equation all at once, at the moment when half of the shell has already passed through $`L`$. Since we require that the light-sheet penetrates the shell entirely, we must be sure that the other half of the shell passes through $`L`$ before the light-sheet hits the singularity. This requires $`w/2\lesssim \mathrm{\Delta }\tau _{\mathrm{dead}}`$, whence
$$w_{\mathrm{max}}\approx r_0^2/2\stackrel{~}{M}.$$
(6.6)
The maximum width of the shell is thus inversely proportional to its mass, and is always less than $`r_0`$.
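The expansion behind Eq. (6.5), and the resulting width bound, can be reproduced symbolically (a sketch we add; $`x=r_0/2\stackrel{~}{M}`$ is small for a heavy shell):

```python
import sympy as sp

r0, x = sp.symbols('r0 x', positive=True)   # x = r0/(2*Mtilde) << 1
Mtilde = r0/(2*x)

# Eq. (6.5): proper time left once the generators sit inside mass Mtilde
dtau = -r0 - 2*Mtilde*sp.log(1 - x)

lead = sp.series(dtau, x, 0, 2).removeO()   # leading behaviour for x -> 0
assert sp.simplify(lead - r0**2/(4*Mtilde)) == 0

w_max = 2*lead                              # Eq. (6.6): w_max = r0**2/(2*Mtilde)
assert sp.simplify(w_max - r0*x) == 0       # <= r0, since x <= 1
```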
The next step will be to calculate the maximum entropy of a spherical shell of mass $`M`$ and width $`w`$. We will build the shell at $`r=R`$, far away from the black hole and outside its own gravitational radius; i.e., $`R`$ is much greater than any of the quantities $`w`$, $`M`$, and $`r_0`$. Thus Bekenstein’s bound can be used to estimate the entropy. Then we will let the shell collapse into the black hole. Since $`w_{\mathrm{max}}<r_0/2`$ by Eq. (6.6), the width of the shell will be smaller than the curvature of space during the entire time of the collapse, and we can take it to remain constant. This enables us to neglect effects of local gravitational collapse (which, if included, constrain the setup further; see Sec. 6.1). In order to exclude effects of angular caustics, which are difficult to deal with quantitatively, we must specify that the shell contain only spherically symmetric micro-states, i.e., micro-states living in the radial, not the angular directions. We will now estimate the maximum entropy of the shell.
The shell can be split into a large number $`n=R^2/w^2`$ of roughly cubic boxes of volume $`w^3`$ and mass $`M/n`$, separated by impenetrable radial walls. By Eq. (1.1), the maximum entropy of an isolated, single box is the largest dimension times the mass:
$$S_{\mathrm{box}}=2\pi w\frac{M}{n}.$$
(6.7)
Since all states are restricted to be radial, no new states are added by removing the wall between two adjacent boxes. Thus the entropy of two boxes will simply be twice the entropy of one box. By repeating this argument, we can remove all the separating walls. Therefore the shell has a maximum entropy of
$$S_{\mathrm{shell}}=2\pi wM.$$
(6.8)
The width $`w`$ is restricted by Eq. (6.6). One has $`M/\stackrel{~}{M}\le 1`$ by Eq. (6.4), even if the limit $`M\to \mathrm{\infty }`$ is permitted. Thus the shell entropy cannot exceed $`\pi r_0^2`$, which by Eq. (6.1) is a quarter of the area bounding the light-sheet:
$$S_{\mathrm{shell}}\le \frac{A}{4}.$$
(6.9)
We conclude that $`A/4`$ is the maximum amount of entropy one can transport through a future-directed outgoing light-sheet $`L`$ bounded by a black hole apparent horizon of area $`A`$, using an exactly spherically symmetric shell of matter. We take this as strong evidence in favor of the entropy bound we propose.
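The chain of inequalities leading to Eq. (6.9) is short enough to check mechanically (our addition; only Eqs. (6.1), (6.4), (6.6) and (6.8) enter):

```python
import sympy as sp

r0, M = sp.symbols('r0 M', positive=True)

Mtilde = M + r0/2                 # Eq. (6.4)
w_max = r0**2/(2*Mtilde)          # Eq. (6.6)
S_shell = 2*sp.pi*w_max*M         # Eq. (6.8) at maximal width
A = 4*sp.pi*r0**2                 # Eq. (6.1)

# S_shell/(A/4) = M/Mtilde, strictly below 1 and approaching 1 as M grows:
assert sp.simplify(S_shell/(A/4) - M/Mtilde) == 0
```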
## 7 Conclusions
Bekenstein has shown that the entropy of a thermodynamic system with limited self-gravity is bounded by its area. By demanding general coordinate invariance and constructing a selection rule, we arrived at a bound on the entropy present on null hypersurfaces in arbitrary space-times. We tested the conjecture on cosmological solutions and inside gravitationally collapsing regions. We found, under the most adverse assumptions, that the bound can be saturated but not exceeded. This evidence suggests that the covariant entropy conjecture may be a universal law of physics.
But can the conjecture be proven? The processes by which the bound is protected appear to be rather subtle. They differ according to the physical situation studied, and they can involve combinations of different effects more reminiscent of a conspiracy than of an elegant mechanism (Sec. 6.1). (This is quite in contrast to Bekenstein’s bound, which is protected by gravitational collapse; see below.) We have verified the bound in a wide class of space-times and space-time regions, but from the perspective of general relativity, the processes protecting the bound appear eclectic, and its success remains mysterious. This indicates that we may be looking at nature in an artificial and complicated way when we describe it as a $`3+1`$-dimensional space-time filled with matter. If the covariant bound is correct, we believe it must arise from a more fundamental description of nature in an obvious way. But this does not exclude the possibility that it can be proven (in a complicated way) entirely within general relativity. The proof would have to combine the tools used for establishing the laws of black hole mechanics with the formalism of the Hawking-Penrose singularity theorems .
In the final paragraphs of Secs. 3.1 and 3.4 we discussed the most important property of the conjecture: It is manifestly time reversal invariant. Therefore the second law of thermodynamics, which underlies Bekenstein’s bound, cannot be responsible for the covariant bound, and one is forced to contemplate the possibility of a different origin.
As a thermodynamic concept, entropy has a built-in arrow of time. The T-invariance of the covariant entropy conjecture can be understood only if the bound is interpreted as a bound on the number of degrees of freedom of the matter systems present on the light-sheets. This number is always at least as large as the thermodynamic entropy. With this statistical interpretation, T-invariance is natural. However, we never made any assumptions about the microscopic properties of matter, which would limit the number of degrees of freedom present. This leaves no choice but to conclude that the number of degrees of freedom in nature is fundamentally limited, as proposed in Refs. .
The idea that the world is effectively two-dimensional was put forth by ’t Hooft and was further developed by Susskind . Based on Bekenstein’s bound, the holographic hypothesis was a bold leap. One could argue, after all, that Bekenstein’s bound is not a fundamental limit on the number of degrees of freedom, but a practical restriction on thermodynamic entropy. There could be far more degrees of freedom in a system than its surface area, but if too many of them were excited at the same time, a black hole would form and the system would no longer satisfy Bekenstein’s conditions. One may view Bekenstein’s bound as arising from this elegant gravitational collapse mechanism. Bekenstein’s law applies only to Bekenstein systems; it works because a Bekenstein system exceeding the bound ceases to be one.
The covariant entropy bound, on the other hand, applies even to surfaces in collapsing regions, or on cosmological scales. Its generality, together with its T-invariance, force us to the conclusion that a holographic principle underlies the description of nature. Moreover, the bound leads naturally to a background-independent formulation of the principle. The number of independent degrees of freedom on any light-sheet of a surface $`B`$ cannot exceed a quarter of the area of $`B`$.
## Acknowledgments
I thank Gerard ’t Hooft, Nemanja Kaloper, Andrei Linde, and Lenny Susskind for many extensive discussions. This work has benefitted in countless ways from their criticism and encouragement. I am also indebted to Jacob Bekenstein and Werner Israel for helpful correspondence, and to Ted Jacobson for valuable comments on an earlier version of this paper.
# Prescriptionless light-cone integrals
## I Introduction.
Light-cone gauge for gauge field theories is probably one of the most widely used among the algebraic non-covariant gauges. Its popularity has known ups and downs along its history. Among the ups are that the emerging propagator has a deceptively simple structure compared to other non-covariant choices, the decoupling of Faddeev-Popov ghosts from the physical fields, and the possibility of describing and modeling complex supersymmetric string theories in it. The ugly side of the coin is represented by the subtle $`(k\cdot n)^{-\alpha }`$ singularities present in all the physical amplitudes described within it. Such complication demanded ad hoc prescriptions to handle the singularity in a mathematically consistent way. Apart from the fact that such an expedient has to be carried out by hand, it was soon realized that it was not enough to be mathematically well-defined; it had to be physically consistent as well. Thus, not any prescription is suitable, but only causal prescriptions are eligible for the light-cone gauge.
Probably the major breakthrough in recent years along this line is the realization that $`D`$-dimensional Feynman integrals can be analytically continued to negative dimensions, performed there, and then brought back to positive dimensionality . The negative dimensional integration method (NDIM) is tantamount to performing fermionic integration in positive dimensions . This can be applied to light-cone integrals with surprising effect: no prescription is called for in the computation and, moreover, as will shortly be seen, it dispenses altogether with the necessity of partial fractioning products of gauge-dependent poles , a condition sine qua non when one resorts to the use of prescriptions.
In this work we shall demonstrate the two surprising features of NDIM when employed in the light-cone context: no prescriptions and no partial fractionings are needed. Our lab-testing is performed taking the simplest scalar and tensorial structures for one-loop integrals.
## II One-loop light-cone gauge loop integrals.
First of all, let us make things more concrete, by analysing the framework of vector gauge fields, e.g. the pure Yang-Mills fields, where, after taking the limit of vanishing gauge parameter, the propagator reads:
$$D_{\mu \nu }^{ab}(k)=\frac{-i\delta ^{ab}}{k^2+i\epsilon }\left[g_{\mu \nu }-\frac{n_\mu k_\nu +n_\nu k_\mu }{k\cdot n}\right]$$
(1)
where $`(a,b)`$ are the gauge group indices, $`n_\mu `$ is the arbitrary and constant light-like four-vector which defines the gauge, $`n\cdot A^a(x)=0;n^2=0`$. This propagator generates $`D`$-dimensional Feynman integrals of the following generic form:
$$I_{lc}=\int \frac{d^Dk_i}{A(k_j,p_l)}\frac{f(k_j\cdot n^{*},p_l\cdot n^{*})}{h(k_j\cdot n,p_l\cdot n)},$$
(2)
where $`p_l`$ labels all the external momenta, and $`n_\mu ^{*}`$ is a null four-vector, dual to $`n_\mu `$. A conspicuous feature that we need to note first of all is that the dual vector $`n_\mu ^{*}`$, when it appears at all, does so always and only in the numerators of the integrands. And herein comes the first seemingly “mysterious” facet of light-cone gauge. How is it that a propagator expression like (1), which contains no $`n^{*}`$ factors, can give rise to integrals of the form (2), with prominently seen $`n^{*}`$ factors? Again, this is most easily seen in the framework of definite external vectors $`n`$ and $`n^{*}`$. An alternative way of writing the generic form of a light-cone integral is
$$I_{lc}^{\mu _1\mathrm{\cdots }\mu _n}=\int \frac{d^Dk_i}{A(k_j,p_l)}\frac{g(k_j^{\mu _j},p_l^{\mu _l})}{h(k_j\cdot n,p_l\cdot n)},$$
(3)
where the numerator $`g(k_j^{\mu _j},p_l^{\mu _l})`$ defines a tensorial structure in the integral. For a vector, we have $`k^\mu =(k^+,k^{-},𝐤^\mathrm{t})`$, where $`k^+=2^{-1/2}(k^0+k^{D-1})`$ and $`k^{-}=2^{-1/2}(k^0-k^{D-1})`$. If we choose definite $`n`$ and $`n^{*}`$ such that $`n_\mu =(1,\mathrm{\hspace{0.33em}0},\mathrm{},\mathrm{\hspace{0.33em}1})`$, and $`n_\mu ^{*}=(1,\mathrm{\hspace{0.33em}0},\mathrm{},-1)`$, this allows us to write $`k^+\propto k\cdot n`$ and $`k^{-}\propto k\cdot n^{*}`$. We have therefore traced back the origin of the numerator factors containing $`n^{*}`$. We would like to emphasize here that the presence of this $`n^{*}`$ in the numerators of integrands has nothing whatsoever to do with some kind of prescription input. It is rather an intrinsic feature of the general structure of a Feynman integral in the light-cone gauge.
Of course, for practical reasons we illustrate the NDIM methodology picking up only few of the scalar, vector and second-rank tensor one-loop integrals. So, we shall be considering the following:
$$T_1(i,j,l)=\int d^Dq\,𝐍(q),$$
(4)
$$T_1^\mu (i,j,l)=\int d^Dq\,q^\mu 𝐍(q),$$
(5)
$$T_1^{\mu \nu }(i,j,l)=\int d^Dq\,q^\mu q^\nu 𝐍(q),$$
(6)
where
$`𝐍(q)\equiv [(q-p)^2]^i(q\cdot n)^j(q\cdot n^{*})^l.`$
and
$$T_2(i,j,l,m)=\int d^Dq\,𝐑(q),$$
(7)
$$T_2^\mu (i,j,l,m)=\int d^Dq\,q^\mu 𝐑(q),$$
(8)
$$T_2^{\mu \nu }(i,j,l,m)=\int d^Dq\,q^\mu q^\nu 𝐑(q),$$
(9)
where
$`𝐑(q)\equiv [(q-p)^2]^i(q\cdot n)^j[(q-p)\cdot n]^l(q\cdot n^{*})^m.`$
In the first three type $`T_1`$ integrals, after they are computed in NDIM, only the exponents $`(i,j)`$ will be analytically continued to allow for negative values, since the original structure of the Feynman integral demands exponent $`l\ge 0`$. Similarly, for the last three type $`T_2`$ integrals only the exponents $`(i,j,l)`$ will be analytically continued to negative values whereas $`m\ge 0`$. We strongly emphasize this point in view of the fact that we must respect the very nature of the original structure of the light-cone integrals, where factors of the form $`(q\cdot n^{*})`$ never appear in denominators.
Observe that we are not invoking any kind of prescription for the $`(q\cdot n)^j`$ factors to solve the integrals in NDIM, since before analytic continuation $`j`$ is non-negative and there are no poles to circumvent! This is the beauty and strength of NDIM! Neither are the $`(q\cdot n^{*})^l`$ numerator factors due to some sort of prescription input as they are, e.g., in the Mandelstam-Leibbrandt (ML) treatment, where one makes the substitution
$$\int \frac{d^Dq}{(q-p)^2(q\cdot n)}\stackrel{ML}{\longrightarrow }\int \frac{d^Dq\,(q\cdot n^{*})}{(q-p)^2\left[(q\cdot n)(q\cdot n^{*})+iϵ\right]}.$$
(10)
Let us then evaluate the integrals using the NDIM approach. In fact, our first integral $`T_1`$ has already been calculated in great detail in our previous paper , whose result is
$$T_1^{AC}(i,j,l)=\pi ^\omega \chi ^{i+\omega }(p\cdot n)^j(p\cdot n^{*})^l\frac{(-i|2i+\omega )(-j|-i-\omega )}{(1+l|i+\omega )},$$
(11)
where
$`\chi \equiv {\displaystyle \frac{2\,p\cdot n\;p\cdot n^{*}}{n\cdot n^{*}}},`$
and the superscript “$`AC`$” means that the exponents $`(i,j)`$ were analytically continued to allow for negative values, $`\omega =D/2`$ and we use the Pochhammer symbol,
$$(a|b)(a)_b=\frac{\mathrm{\Gamma }(a+b)}{\mathrm{\Gamma }(a)}.$$
(12)
Observe that $`l`$ must take only positive values or zero since the Pochhammer symbol containing $`\mathrm{\Gamma }(1+l)`$ was not analytically continued.
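The analytic continuation trades factorials of negative integers for finite Pochhammer ratios. The identity doing the work is of the type $`(a|b)=(-1)^b(1-a-b|b)`$ for integer $`b`$; this is our gloss on the procedure, not a step spelled out in the text, and it can be verified mechanically:

```python
import sympy as sp

a = sp.Symbol('a')

def poch(a_, b_):
    # Pochhammer symbol (a|b) = Gamma(a+b)/Gamma(a), Eq. (12)
    return sp.gamma(a_ + b_)/sp.gamma(a_)

for b in range(-3, 4):            # spot-check a few integer displacements
    lhs = poch(a, b)
    rhs = sp.Integer(-1)**b*poch(1 - a - b, b)
    assert sp.simplify(sp.gammasimp(lhs - rhs)) == 0
```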
Consider now the second integral, vectorial, given in (5). For this case, let,
$$G^\mu =\int d^Dq\,q^\mu \mathrm{exp}\left[-\alpha (q-p)^2-\beta (q\cdot n)-\gamma (q\cdot n^{*})\right].$$
(13)
Introducing the standard trick of trading the $`q^\mu `$ factor for a derivative with respect to $`p_\mu `$ , we obtain
$`G^\mu `$ $`=`$ $`\left({\displaystyle \frac{\pi }{\alpha }}\right)^\omega {\displaystyle \frac{e^{-\alpha p^2}}{2\alpha }}{\displaystyle \frac{\partial }{\partial p_\mu }}\mathrm{exp}\left[\alpha p^2+{\displaystyle \frac{\beta \gamma }{2\alpha }}(n\cdot n^{*})-\beta p^+-\gamma p^{-}\right]`$ (14)
$`=`$ $`\left(p^\mu -{\displaystyle \frac{\beta }{2\alpha }}n^\mu -{\displaystyle \frac{\gamma }{2\alpha }}n^{*\mu }\right)𝒢_0,`$ (15)
where $`p^+=p\cdot n`$ and $`p^{-}=p\cdot n^{*}`$, as usual in the light-cone notation , and define
$$𝒢_0\equiv \left(\frac{\pi }{\alpha }\right)^\omega \mathrm{exp}\left[\frac{\beta \gamma }{2\alpha }(n\cdot n^{*})-\beta p^+-\gamma p^{-}\right].$$
(16)
Now, Taylor expanding the exponential in equation (13),
$$G^\mu =\sum _{i,j,l=0}^{\mathrm{\infty }}\frac{(-1)^{i+j+l}\alpha ^i\beta ^j\gamma ^l}{i!j!l!}T_1^\mu (i,j,l),$$
(17)
and following the steps of the NDIM calculation we finally get
$$T_1^{\mu ,AC}(i,j,l)=V_1^\mu T_1^{AC}(i,j,l),$$
(18)
where
$$V_1^\mu \equiv p^\mu -\left[\frac{(i+\omega )p^{-}}{(1+i+l+\omega )(n\cdot n^{*})}\right]n^\mu -\left[\frac{(i+\omega )p^+}{(1+i+j+\omega )(n\cdot n^{*})}\right]n^{*\mu }.$$
(19)
This result is in Euclidean space and valid for positive dimension ($`D=2\omega >0`$), negative exponents $`(i,j)`$ and $`l\ge 0`$.
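A useful consistency check on Eqs. (18), (19) (added by us, under the sign conventions reconstructed above): contracting with $`n_\mu `$ must raise the exponent $`j`$ by one unit in the scalar integral, since $`n^2=0`$ while the $`n^{*\mu }`$ term survives through $`n\cdot n^{*}`$:

```python
import sympy as sp

i, j, l, w = sp.symbols('i j l omega')
pp, pm = sp.symbols('p_plus p_minus')     # p+ = p.n and p- = p.n*

def poch(a, b):
    return sp.gamma(a + b)/sp.gamma(a)    # Eq. (12)

def T1(j_):
    # Eq. (11) with the j-independent prefactor pi^omega chi^(i+omega) stripped
    return pp**j_*pm**l*poch(-i, 2*i + w)*poch(-j_, -i - w)/poch(1 + l, i + w)

# n.V1 from Eq. (19): n.n = 0 kills the n^mu term, n.n* cancels in the last one
n_dot_V1 = pp*(1 - (i + w)/(1 + i + j + w))

assert sp.simplify(sp.gammasimp(n_dot_V1*T1(j) - T1(j + 1))) == 0
```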
The second-rank tensor integral in (6) can be evaluated in a similar way. The only thing that need to be taken into account is that now a second derivative is called for and the calculation becomes lengthier. We only quote the final result:
$$T_1^{\mu \nu ,AC}(i,j,l)=V_1^{\mu \nu }T_1^{AC}(i,j,l),$$
(20)
where
$`V_1^{\mu \nu }`$ $`\equiv `$ $`p^\mu p^\nu -\left[{\displaystyle \frac{(i+\omega )p^+p^{-}}{(1+i+j+\omega )(1+i+l+\omega )(n\cdot n^{*})}}\right]g^{\mu \nu }`$ (21)
$`-`$ $`\left[{\displaystyle \frac{(i+\omega )p^{-}}{(1+i+l+\omega )(n\cdot n^{*})}}\right](p^\mu n^\nu +p^\nu n^\mu )`$ (22)
$`-`$ $`\left[{\displaystyle \frac{(i+\omega )p^+}{(1+i+j+\omega )(n\cdot n^{*})}}\right](p^\mu n^{*\nu }+p^\nu n^{*\mu })`$ (23)
$`+`$ $`\left[{\displaystyle \frac{(i+\omega )(1+i+\omega )p^+p^{-}}{(1+i+j+\omega )(1+i+l+\omega )(n\cdot n^{*})^2}}\right](n^\mu n^{*\nu }+n^\nu n^{*\mu })`$ (24)
$`+`$ $`\left[{\displaystyle \frac{(i+\omega )(1+i+\omega )(p^{-})^2}{(2+i+l+\omega )(1+i+l+\omega )(n\cdot n^{*})^2}}\right]n^\mu n^\nu `$ (25)
$`+`$ $`\left[{\displaystyle \frac{(i+\omega )(1+i+\omega )(p^+)^2}{(2+i+j+\omega )(1+i+j+\omega )(n\cdot n^{*})^2}}\right]n^{*\mu }n^{*\nu }.`$ (26)
It can be noted that for the particular case $`i=j=-1`$ the pole piece for $`\omega \to 2`$ only arises in the scalar integral factor $`T_1^{AC}(i,j,l)`$, equation (11).
Now, let us consider the integrals $`\{T_2\}`$. These contain two scalar products with $`n_\mu `$, but again they are harmless in NDIM approach because their exponents, before analytic continuation, are positive. However, in the usual positive dimensional approach, such factors can become singular and prescriptions become a necessity. Yet prescriptions cannot handle products; one needs to use partial fractioning first. Thus, the recourse is to use the so-called “decomposition formulas” such as (see, for example, )
$$\frac{1}{(k\cdot n)\,[(p-k)\cdot n]}=\frac{1}{p\cdot n}\left[\frac{1}{(p-k)\cdot n}+\frac{1}{k\cdot n}\right],\qquad p\cdot n\ne 0,$$
(27)
NDIM does not require any such partial fractionings; it can handle products at the same time. Not only that, NDIM can handle any power of these products simultaneously, i.e., factors of the form $`(k\cdot n)^{-\alpha }[(p-k)\cdot n]^{-\beta }`$, with $`(\alpha ,\beta =2,3,\mathrm{})`$, which, of course, become the more strenuously difficult to handle by partial fractioning the higher the powers involved.
To evaluate $`T_2`$ using NDIM, let us then consider the Gaussian-like integral,
$$G_2=\int d^Dq\,\mathrm{exp}\left[-\alpha (q-p)^2-\beta \,q\cdot n-\gamma \,(q-p)\cdot n-\delta \,q\cdot n^{*}\right],$$
(28)
which yields
$$G_2=\left(\frac{\pi }{\alpha }\right)^{D/2}\mathrm{exp}\left(-\beta p^+-\delta p^{-}+\frac{\beta \delta }{2\alpha }n\cdot n^{*}+\frac{\gamma \delta }{2\alpha }n\cdot n^{*}\right).$$
(29)
On the other hand, direct Taylor expansion of (28) yields
$$G_2=\sum _{i,j,l,m=0}^{\mathrm{\infty }}(-1)^{i+j+l+m}\frac{\alpha ^i\beta ^j\gamma ^l\delta ^m}{i!j!l!m!}T_2(i,j,l,m).$$
(30)
Comparing both expressions and solving for $`T_2(i,j,l,m)`$ we get a unique solution for a $`4\times 4`$ system of linear algebraic equations , which, analytically continued to positive dimension and negative values for $`(i,j,l)`$, finally gives
$$T_2^{AC}(i,j,l,m)=\pi ^\omega \chi ^{i+\omega }(p^+)^{j+l}(p^{-})^m\frac{(-i|2i+l+\omega )(-j|-i-l-\omega )}{(1+m|i+\omega )}.$$
(31)
Again, the superscript “$`AC`$” means $`(i,j,l)`$ strictly negative and $`m\ge 0`$.
With the help of equation (29) it is easy to solve the two remaining integrals, whose final results we quote here:
$$T_2^{\mu ,AC}(i,j,l,m)=V_2^\mu T_2^{AC}(i,j,l,m),$$
(32)
where
$$V_2^\mu \equiv p^\mu -\left[\frac{(i+\omega )p^{-}}{(1+i+m+\omega )(n\cdot n^{*})}\right]n^\mu -\left[\frac{(i+l+\omega )p^+}{(1+i+j+l+\omega )(n\cdot n^{*})}\right]n^{*\mu },$$
(33)
and
$$T_2^{\mu \nu ,AC}(i,j,l,m)=V_2^{\mu \nu }T_2^{AC}(i,j,l,m),$$
(34)
where
$`V_2^{\mu \nu }`$ $`\equiv `$ $`p^\mu p^\nu -\left[{\displaystyle \frac{(i+l+\omega )p^+p^-}{(1+i+j+l+\omega )(1+i+m+\omega )(n\cdot n^{\prime })}}\right]g^{\mu \nu }`$ (35)
$`-`$ $`\left[{\displaystyle \frac{(i+\omega )p^-}{(1+i+m+\omega )(n\cdot n^{\prime })}}\right](p^\mu n^\nu +p^\nu n^\mu )`$ (36)
$`-`$ $`\left[{\displaystyle \frac{(i+l+\omega )p^+}{(1+i+j+l+\omega )(n\cdot n^{\prime })}}\right](p^\mu n^{\prime \nu }+p^\nu n^{\prime \mu })`$ (37)
$`+`$ $`\left[{\displaystyle \frac{(i+l+\omega )(1+i+\omega )p^+p^-}{(1+i+m+\omega )(1+i+j+l+\omega )(n\cdot n^{\prime })^2}}\right](n^\mu n^{\prime \nu }+n^\nu n^{\prime \mu })`$ (38)
$`+`$ $`\left[{\displaystyle \frac{(i+\omega )(1+i+\omega )(p^-)^2}{(2+i+m+\omega )(1+i+m+\omega )(n\cdot n^{\prime })^2}}\right]n^\mu n^\nu `$ (39)
$`+`$ $`\left[{\displaystyle \frac{(i+l+\omega )(1+i+l+\omega )(p^+)^2}{(2+i+j+l+\omega )(1+i+j+l+\omega )(n\cdot n^{\prime })^2}}\right]n^{\prime \mu }n^{\prime \nu }.`$ (40)
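A quick consistency check of (31)–(33): contracting $`T_2^\mu =V_2^\mu \,T_2^{AC}`$ with $`n_\mu `$ inserts one extra power of $`q\cdot n`$ and must therefore shift $`j\to j+1`$ in (31), while contracting with $`n_\mu ^{\prime }`$ must shift $`m\to m+1`$. The sketch below tests this numerically at generic non-integer exponents, assuming the usual NDIM notation $`(a|b)=\mathrm{\Gamma }(a+b)/\mathrm{\Gamma }(a)`$:

```python
from mpmath import mp, mpf, gamma

mp.dps = 30

def poch(a, b):                        # NDIM Pochhammer symbol (a|b)
    return gamma(a + b) / gamma(a)

def T2(i, j, l, m, w, chi, pp, pm):    # the scalar result (31), up to pi^w
    return (chi**(i + w) * pp**(j + l) * pm**m
            * poch(-i, 2*i + l + w) * poch(-j, -i - l - w) / poch(1 + m, i + w))

i, j, l, m, w = mpf('-1.3'), mpf('-1.7'), mpf('-0.6'), mpf('0.4'), mpf('2.1')
chi, pp, pm = mpf('1.9'), mpf('0.7'), mpf('1.1')
base = T2(i, j, l, m, w, chi, pp, pm)
n_dot_V2 = pp * (1 - (i + l + w) / (1 + i + j + l + w))    # n.V2  (n^2 = 0)
np_dot_V2 = pm * (1 - (i + w) / (1 + i + m + w))           # n'.V2 (n'^2 = 0)
assert abs(n_dot_V2 * base - T2(i, j + 1, l, m, w, chi, pp, pm)) < mpf('1e-25')
assert abs(np_dot_V2 * base - T2(i, j, l, m + 1, w, chi, pp, pm)) < mpf('1e-25')
```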
Finally, before closing this section, let us analyse (7) with the momentum shift $`q=p-k`$, so that
$$T_2(i,j,l,m)=(-1)^{j+l+m}\tau _2(i,j,l,m),\qquad \mathrm{or}\qquad \tau _2(i,j,l,m)=(-1)^{-j-l-m}T_2(i,j,l,m),$$
(41)
where
$$\tau _2(i,j,l,m)=\int d^Dk\,(k^2)^i\,[(k-p)\cdot n]^j\,(k\cdot n)^l\,[(k-p)\cdot n^{\prime }]^m.$$
(42)
Then, we can easily write down the following results
$$\tau _2^\mu =p^\mu \tau _2-(-1)^{-j-l-m}T_2^\mu ,$$
(43)
and
$$\tau _2^{\mu \nu }=-p^\mu p^\nu \tau _2+p^\mu \tau _2^\nu +p^\nu \tau _2^\mu +(-1)^{-j-l-m}T_2^{\mu \nu }.$$
(44)
Particular cases of $`T_1`$, $`T_2`$ and $`\tau _2`$, such as $`T_1(-1,-1,0)`$, $`T_2(-1,-1,-1,0)`$, etc., can be worked out from the general expressions. All the above results are in agreement with the ones tabulated in .
## III Discussion and Conclusion.
NDIM is a technique wherein the principle of analytic continuation plays a key role. We solve a “Feynman-like” integral, i.e., a negative dimensional loop integral with propagators raised to positive powers in the numerator and then analytically continue the result to allow for negative values of those exponents and positive dimension.
In positive dimensions, Feynman integrals for covariant gauge choices can be worked out with a variety of methods. However, when we work in the light-cone gauge things become more complicated by virtue of the presence of unwieldy gauge-dependent singularities. And herein comes the help of NDIM, with surprising effect: propagators raised to positive powers in the “Feynman-like” integrals do not have poles of any kind to trouble us. Therefore no prescription is needed in the NDIM approach, and moreover, no partial fractioning is necessary. The beauty and the strength of NDIM in dealing with light-cone integrals are revealed and demonstrated in a marvelous way.
So, we can summarize all this by enumerating the outstanding features of NDIM: i) No prescription at all is required to deal with gauge dependent poles of the usual Feynman integrals; ii) The overall structure of the Feynman integrals in the light-cone gauge is preserved, i.e., there is no need to introduce factors of the form $`q\cdot n^{\prime }`$ in denominators as prescription input; iii) There is no need to use parametrization of any kind, so that there are no parametric integrals to solve; iv) There is no need to perform the integration with split components such as in , where the integration in space-time is performed by decomposing $`d^{2\omega }q\to d^{2\omega -1}\mathbf{q}\,dq_4`$; v) There is no need to resort to partial fractionings such as (27); vi) The final result comes out for arbitrary negative exponents of propagators, so that special cases of interest are all contained therein; vii) The final result is already within the dimensional regularization context.
In this work we calculated integrals — scalar, vector, and second-rank tensor — pertaining to the light-cone gauge, with arbitrary exponents of propagators and dimension. Our results, given in (11), (18), (20), (31), (32), (34), (41), (43), and (44), can be evaluated for particular values of the exponents; we have compared them with those existing in the literature and checked that they are all in agreement.
But, without doubt, the most outstanding conclusion that we can draw from this exercise is that no prescription was required to tackle the light-cone singularities. Of course, it is a matter of straightforward generalization that all other non-covariant gauge choices will follow suit.
###### Acknowledgements.
A.G.M.S. gratefully acknowledges FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo, Brasil) for financial support.
## 1 Introduction
Quantum chaos is a relatively new field, comprising the study of the quantized versions of systems which are classically chaotic. The current consensus is that quantum systems do not show sensitive dependence on initial conditions in the same way as classical systems. This would appear to disqualify them from being described as chaotic. However, classical chaos is also apparent in other aspects of a system’s evolution, the quantum analogies of which are of great interest. In particular, these include Kolmogorov-Arnold-Moser (KAM) tori and cantori, the study of which is the focus of this paper. Our group has previously published experimental studies of a particular, classically chaotic system containing KAM tori and cantori, which is realized experimentally in the quantum regime, and simulated both classically and quantum mechanically. This paper concentrates on the detailed results of our computer simulations.
KAM tori and cantori in a classical phase space are predicted to influence the corresponding quantum system. An unbroken KAM boundary will prohibit classical diffusion through it, while tunneling across the barrier is possible in a quantum system. When interaction terms in the perturbing Hamiltonian are sufficiently large as to break up the boundary and create a cantorus or *turnstile*, classical particles will quickly diffuse through that cantorus but the quantum wavefunction will be inhibited . A heuristic model proposes that with the presence of a perturbing Hamiltonian, quantum diffusion is constrained when the classical phase space area escaping through the cantorus each period is $`\lesssim \hbar `$ . Even though the barrier has been broken, the quantum wave function still appears to tunnel through the cantorus.
The link between the quantum domain and the familiar classical world remains a hotly debated topic. The quantum-classical correspondence (QCC) principle requires that quantum mechanics contains the classical macroscopic limit. A promising approach to the question of quantum-classical correspondence is the study of *decoherence* — the analysis of the effect of coupling to the environment, which inevitably occurs in a real system, in terms of quantum coherence . We introduce increased environmental interactions into our quantum simulations in order to test the hypothesis that the resulting behaviour will more closely resemble the classical system.
The structure of this paper is as follows. In Section 2 we introduce the specifically designed double-kicked rotor system, and present the results of a classical analysis. In Section 3 we show that this classical evolution can be well described by a simple random model. In Section 4 we study the corresponding quantum behaviour and analyze the properties of the system using the Floquet method. In Section 5 we introduce decoherence into the quantum system by two different methods, and also generate Wigner functions in order to help understand the origin of classical behavior from a quantum system. Finally a summary is contained in Section 6. All the parameters used in our model were chosen so as to correspond with those used in our experiments .
## 2 Classical Double-Kicked Rotor
The original and most commonly studied system in quantum chaos is the $`\delta `$-kicked rotor. The observation of dynamical localization in the atomic optics realization of the $`\delta `$-kicked rotor provided an important experimental link to the most studied system in Hamiltonian chaos. However, a periodic pulsed potential of finite time duration (as used in the $`\delta `$-kicked rotor experiments) produces a KAM boundary, which becomes more noticeable for longer pulse widths. If one wishes to study diffusion through a cantorus, a train of single pulses is not the best system. The classical phase space outside the first long-lived cantorus is not strongly chaotic and contains many regular regions which will inhibit particle diffusion. Hence we have studied the dynamics due to a train of double pulses. This system has been the subject of experimental investigation by our group in its atomic optics manifestation . Figure 15 displays our double pulse train. We can write the dimensionless form of the Hamiltonian as
$$H=\frac{p^2}{2}-K\mathrm{cos}\varphi \sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}f(t-n)$$
(1)
where $`p`$ is the dimensionless momentum conjugate to the coordinate $`\varphi `$ and $`f(t)`$ specifies the temporal shape of the pulses. $`K`$ is the dimensionless ‘kicking strength’ which is the single parameter varied in our investigation of the classical system. The double pulse train Hamiltonian can be written as
$$H=\frac{p^2}{2}-K\sum _{m=-\mathrm{\infty }}^{\mathrm{\infty }}a_m\mathrm{cos}(\varphi -2\pi m\tau )$$
(2)
where $`a_m=\frac{1}{10}`$sinc$`(\frac{m\pi }{20})\mathrm{cos}(\frac{m\pi }{10})`$ (with the sinc function defined as sinc$`(x)=\mathrm{sin}(x)/x`$). Each pulse is of width $`\alpha /2`$ and the leading edge separation of the two pulses is given by $`\mathrm{\Delta }`$. The KAM boundaries at $`p=\pm 10\pi `$ and $`\pm 30\pi `$ correspond to zero values for $`a_5`$ and $`a_{15}`$.
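The boundary locations can be read off directly from the zeros of $`a_m`$; a quick numerical check (numpy assumed; note numpy's sinc carries an extra factor of $`\pi `$ in its argument):

```python
import numpy as np

m = np.arange(1, 20)
a_m = 0.1 * np.sinc(m / 20.0) * np.cos(m * np.pi / 10.0)  # sinc(m*pi/20) = np.sinc(m/20)
print(np.flatnonzero(np.isclose(a_m, 0.0)) + 1)           # -> [ 5 15 ]
```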
As discussed in , below a critical kicking strength $`K=K^{*}`$, the phase space of the classical system contains KAM tori given approximately by the lines $`p=\pm 10\pi `$ for our choice of $`\alpha =\mathrm{\Delta }=0.1`$. For $`K>K^{*}`$ these break up to become partially penetrable cantori. Figure 15 shows the phase space for $`K=70`$. We observe chaotic seas on either side of the cantorus around $`p=10\pi `$. Figure 15 shows the phase space for $`K=280`$. We have stronger chaos with little island structure remaining. The KAM torus around $`p=30\pi `$ is still unbroken for this kick strength, and we find that this is the case for all kicking strengths studied in this paper. The strongly chaotic seas surrounding the $`p=10\pi `$ cantorus provide an ideal phase space structure for studying the transport of particles through the barrier.
For a system kicked by finite length pulses, numerical solutions for the motion are considerably more difficult to generate than for $`\delta `$-kicks. If the pulses are rectangular, then the system alternates between the motion of a pendulum, and free rotation. The solutions for pendulum motion are in terms of Jacobi elliptic functions and elliptic integrals. In our simulations, the elliptic integrals are efficiently evaluated with a specialized algorithm coded in C , which is then interfaced with MATLAB.
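A slower but much simpler alternative to the exact elliptic-function propagation is to integrate Hamilton's equations for the pulsed Hamiltonian directly; the sketch below does this for a single trajectory (illustrative parameters, not a production scheme):

```python
import numpy as np
from scipy.integrate import solve_ivp

K, alpha, Delta = 70.0, 0.1, 0.1              # kick strength and pulse geometry

def f(t):                                      # two rectangular pulses per period
    t = t % 1.0
    return 1.0 if (t < alpha / 2) or (Delta <= t < Delta + alpha / 2) else 0.0

def rhs(t, y):                                 # y = (phi, p); H = p^2/2 - K cos(phi) f(t)
    phi, p = y
    return [p, -K * np.sin(phi) * f(t)]

sol = solve_ivp(rhs, (0.0, 70.0), [1.0, 0.5], max_step=0.005)  # resolve the pulses
```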
The starting point for the calculation of the momentum distributions is an initial distribution uniform in $`\varphi `$, and Gaussian in $`p`$. For our chosen $`\alpha `$ and $`\mathrm{\Delta }`$ we have $`\sigma _p=3.6\pi `$ which gives $`99.5\%`$ of initial conditions inside $`\left|p\right|=10\pi `$. We find that the results are not strongly dependent on initial distribution, provided this proportion is close to 1. To obtain numerical distributions, we choose $`10^5`$ initial conditions randomly from this distribution, and propagate them through 70 kick cycles. Figure 15 shows examples of our simulated distributions. Once the KAM boundaries at $`p=\pm 10\pi `$ are broken the classical particles will eventually distribute themselves uniformly between the $`p=\pm 30\pi `$ tori.
## 3 Diffusion Model
A very simple model, which mimics some aspects of the classical momentum distributions, is obtained by treating the system as consisting of three regions where strong, homogeneous diffusion occurs, divided by two permeable barriers. The system is enclosed by unbroken barriers. Applying this model to our system, we then assume that after a short time, the distributions both inside and outside the permeable barriers are each essentially uniform. We also assume that over time, the populations can leak across the penetrable barriers. The system can then be described by three variables giving the population associated with each region. If we choose our initial condition so that the population in each of the two ‘outer’ regions is small, we expect the populations to decay exponentially towards equilibrium values where the overall distribution is completely uniform.
We now take the barriers to be located at $`p=\pm 10\pi `$. The outer boundaries are at $`\pm 30\pi `$, so the system is divided into three equal parts. Each has a phase space area (in dimensionless units) of $`40\pi ^2`$. With each kick, some area of phase space $`F`$ is mapped from inside the cantori to above $`p=10\pi `$. This area is referred to as the ‘phase space flux per kick’ through the $`p=10\pi `$ cantorus. Identical amounts of phase space are mapped from above $`p=10\pi `$ to inside, from inside to below $`p=-10\pi `$, and from below $`p=-10\pi `$ back inside. This follows from the incompressibility of phase space flow in a Hamiltonian system and the symmetry of this system in $`p`$. The assumption of separate uniform distributions inside each of the three intervals means that the probability of a trajectory inside the cantori being mapped outside is $`2F/40\pi ^2`$. The probability for a trajectory outside the cantori to be mapped inside is $`F/40\pi ^2`$. Now let $`P(\left|p\right|<10\pi ,t)`$ be the probability for a particular trajectory to be inside the cantori. Take the initial condition to be $`P(\left|p\right|<10\pi ,0)=1`$, so that both outside regions are empty. The evolution will maintain the symmetry between them, so that the probability that the trajectory is in a particular outside region is $`P(p>10\pi ,t)=P(p<-10\pi ,t)=(1-P(\left|p\right|<10\pi ,t))/2`$. The change in the distribution caused by one kick is
$`P(\left|p\right|<10\pi ,t+1)-P(\left|p\right|<10\pi ,t)`$ $`=`$ $`-{\displaystyle \frac{2F}{40\pi ^2}}P(\left|p\right|<10\pi ,t)+{\displaystyle \frac{F}{40\pi ^2}}(1-P(\left|p\right|<10\pi ,t))`$ (3)
$`=`$ $`-{\displaystyle \frac{3F}{40\pi ^2}}\left(P(\left|p\right|<10\pi ,t)-{\displaystyle \frac{1}{3}}\right).`$ (4)
We can write this as
$$P(\left|p\right|<10\pi ,t+1)-\frac{1}{3}=\left(1-\frac{3F}{40\pi ^2}\right)\left(P(\left|p\right|<10\pi ,t)-\frac{1}{3}\right).$$
(5)
By induction, using our initial condition $`P(\left|p\right|<10\pi ,0)=1`$, we have
$`P(\left|p\right|<10\pi ,t)`$ $`=`$ $`{\displaystyle \frac{1}{3}}+{\displaystyle \frac{2}{3}}\left(1-{\displaystyle \frac{3F}{40\pi ^2}}\right)^t`$ (6)
$`=`$ $`{\displaystyle \frac{1}{3}}+{\displaystyle \frac{2}{3}}e^{-at}`$ (7)
where $`a=-\mathrm{ln}(1-3F/40\pi ^2)\approx 3F/40\pi ^2`$ ($`a`$ must be small for the model to apply). The probability for a trajectory to be outside the cantori is given by
$$P(\left|p\right|>10\pi ,t)=\frac{2}{3}(1-e^{-at}).$$
(8)
The parameter $`a`$ is a function of kick strength which we do not know, except that it must increase with $`K`$. We can reverse the problem, estimating the flux $`F`$ by fitting an exponential decay to the proportion of trajectories outside $`p=\pm 10\pi `$ versus $`t`$ for a particular kick strength. This estimate is informative for comparing classical and quantum results. Figure 15(a) shows $`P(\left|p\right|>10\pi ,t)`$ versus $`t`$, for a range of kicking strengths while Figure 15(b) shows $`\mathrm{ln}[2/3-P(\left|p\right|>10\pi ,t)]`$, which should be a straight line according to the simple diffusion model (after a few kicks).
We see that the model appears to be reasonably good for $`K\lesssim 280`$. For higher values of $`K`$ it may be that the transport across the barrier is too fast for equilibrium to approximately hold in each region. Also, in the real system, the equilibrium population outside will differ slightly from $`2/3`$, so that the straight line form of our graph will break down for $`P(\left|p\right|<10\pi )`$ very close to equilibrium. Inspection suggests that the real equilibrium value differs from $`2/3`$ by $`e^{-3}\approx 0.05`$. Figure 15 shows estimates of the flux per kick $`F`$ through the cantori based on fitting straight lines to the curves in Figure 15.
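The fitting procedure is easily illustrated in reverse: iterate the map above for an assumed flux, fit a straight line to $`\mathrm{ln}[P(\left|p\right|<10\pi ,t)-1/3]`$, and invert $`a=-\mathrm{ln}(1-3F/40\pi ^2)`$; the sketch below (flux value arbitrary) recovers the input flux:

```python
import numpy as np

F_true = 5.0                                  # assumed flux per kick (arbitrary)
area = 40 * np.pi**2                          # phase-space area of each region
P_in = [1.0]
for _ in range(70):                           # the map (3)-(4)
    P_in.append(P_in[-1] - (3 * F_true / area) * (P_in[-1] - 1.0 / 3.0))
P_in = np.array(P_in)
t = np.arange(P_in.size)
slope = np.polyfit(t, np.log(P_in - 1.0 / 3.0), 1)[0]   # slope = ln(1 - 3F/area)
F_est = (1.0 - np.exp(slope)) * area / 3.0
print(F_true, F_est)                          # the fit recovers the assumed flux
```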
## 4 Quantum Momentum Distributions and Floquet States
For comparison with classical simulations, we use an initial density matrix
$$\langle m|\widehat{\rho }_0|n\rangle =\frac{1}{A}\mathrm{exp}\left(-\frac{n^2\bar{k}^2}{2\sigma _p^2}\right)\delta _{m,n}$$
(9)
where $`\sigma _p=3.6\pi `$, $`A`$ is a normalization constant and $`\bar{k}`$ is Planck’s constant in our dimensionless system. Again the results are not strongly dependent on the initial spread. We simulate the system dynamics for up to 70 kicks. Figure 15 shows examples of these distributions, for the same kicking strengths as were presented in the classical case, with $`\bar{k}=2.6`$. Figure 15(a) shows the quantum probability for the atom to be outside $`\left|p\right|=10\pi `$ versus $`t`$, for a range of kicking strengths. Figure 15(b) shows $`\mathrm{ln}[2/3-P(\left|p\right|>10\pi ,t)]`$.
Comparing these graphs to Figure 15 we see that, for all of these kicking strengths, the quantum behaviour deviates qualitatively from the classical after less than 20 kicks, and the probability outside the boundary for a quantum system always levels off well below the classical equilibrium value. This is consistent with the premise that the diffusion is suppressed when the phase space flux across the cantorus is $`\lesssim \bar{k}`$ . This KAM localization is distinct from the more widely studied dynamical localization .
The time evolution operator $`U`$ is unitary, so that its eigenvalues are of the form $`\alpha _j=\mathrm{exp}(-iE_j/\bar{k})`$ where $`E_j`$ is a (dimensionless) quasi-energy. Its eigenstates (*quasi-energy states* or *Floquet states*) satisfy
$$U|\alpha _j\rangle =e^{-iE_j/\bar{k}}|\alpha _j\rangle .$$
(10)
In a basis made up of these states, the evolution operator is diagonal. They are therefore equivalent to eigenstates of the Hamiltonian in a time-independent system, and if we examine the system only once per kicking cycle, they are effectively stationary. If we are interested in the state of the system after quantum saturation has occurred, we can examine the asymptotic (long-time average) momentum distribution, which can be written
$`P(n|\rho _0)`$ $`=`$ $`\underset{N\to \mathrm{\infty }}{lim}{\displaystyle \frac{1}{N}}{\displaystyle \sum _{t=0}^{N-1}}\langle n|\rho _t|n\rangle `$ (11)
$`=`$ $`\underset{N\to \mathrm{\infty }}{lim}{\displaystyle \frac{1}{N}}{\displaystyle \sum _{t=0}^{N-1}}\langle n|U^t\rho _0U^{-t}|n\rangle .`$ (12)
By inserting the spectral decomposition of $`U`$ (representation in terms of the Floquet states) we obtain
$$P(n|\rho _0)=\sum _j\langle \alpha _j|\widehat{\rho }_0|\alpha _j\rangle \left|\langle n|\alpha _j\rangle \right|^2$$
(13)
and if our initial condition is a pure state $`\widehat{\rho }_0=|n_0\rangle \langle n_0|`$ we have
$$P(n|n_0)=\sum _j\left|\langle n_0|\alpha _j\rangle \right|^2\left|\langle n|\alpha _j\rangle \right|^2.$$
(14)
This function characterizes the asymptotic mixing between the two momentum states. We can generate pseudo-colour plots of $`P(n|n_0)`$, to obtain visual representations of the momentum confinement in the quantum system for particular choices of $`K`$ and $`\bar{k}`$.
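Given the one-cycle evolution operator as a matrix, equation (14) is a one-line computation. The sketch below uses a random unitary as a stand-in for the actual double-kick propagator, which it does not construct:

```python
import numpy as np

N = 128
U = np.linalg.qr(np.random.randn(N, N) + 1j * np.random.randn(N, N))[0]
_, alpha = np.linalg.eig(U)              # columns are the Floquet states |alpha_j>
W = np.abs(alpha) ** 2                   # W[n, j] = |<n|alpha_j>|^2
P = W @ W.T                              # P[n, n0] as in equation (14)
print(np.allclose(P.sum(axis=0), 1.0))   # completeness of the Floquet basis
```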
Figure 15 shows a series of these pseudo-colour plots. We have used $`\bar{k}=2.6`$ and $`N=128`$, so that $`\bar{k}\left|n_{max}\right|>50\pi `$. The plots use a logarithmic colour-scale. Black and dark grey regions correspond to vanishingly small probability. Mid grey regions indicate a low probability, at the level of quantum tunneling. Pale grey and white regions have a sizeable probability density. Inspection of the plots shows that their shape appears to be chiefly determined by the classical barriers to momentum diffusion. The light line along $`p=p_0`$ arises from each state being mapped onto itself, to some degree. Each plot also shows a central light square with $`\left|p\right|`$ and $`\left|p_0\right|`$ less than $`10\pi `$, indicating the strong coupling of each state in this range to all the others. Sharp borders on this square indicate strong confinement by the cantori. These borders blur as the cantori become less effective. For increased kicking strength we eventually see a corresponding square associated with the KAM tori at $`p=\pm 30\pi `$. Figure 15(a) has $`K=80`$ and we see very strong signatures of the classical barriers. Figure 15(b) and (c) show $`K=180`$ and $`K=280`$ respectively. We see significant penetration of the $`p=\pm 10\pi `$ cantori, but their effect is still obvious. The $`p=\pm 30\pi `$ KAM tori still provide a very strong barrier. Figure 15(d) has $`K=400`$. The effects of the $`p=\pm 10\pi `$ cantori have almost disappeared, but penetration through the $`p=\pm 30\pi `$ KAM tori is still fairly small.
Again, we see strong localization which begins to break down for $`K\gtrsim 300`$. Referring to Figure 15, we see that the classical flux through each cantorus becomes comparable with $`\bar{k}=2.6`$ when $`K\approx 250`$. It has previously been observed that this criterion appears to give a reasonable estimate of the kicking strength for which strong quantum localization by a cantorus will be destroyed.
## 5 Decoherence via Spontaneous Emission
In order to study decoherence we consider the atomic optics manifestation of this double kicked system . A periodic potential is created by two counter-propagating laser beams, and atoms are subjected to a force due to the dipole potential. The dipole potential is derived by neglecting the excited state amplitude of the atom . We will now consider the first order effects arising from a small non-zero amplitude. Rather than making coherent stimulated transitions which is the usual interaction between the atoms and the pulsed potential, an atom in the excited state may, with finite probability, decay to the ground state by spontaneous emission. We can treat this effect as a stochastic process, which is different for each realization. The effect of a spontaneous emission event arises from the recoil momentum imparted to the atom by both the photon exciting the atom and the photon emitted by the atom. The atom recoils with a change in $`p`$ of $`u\bar{k}`$ where $`-1\leq u\leq 1`$ and $`u=\frac{1}{2}(\pm 1+\mathrm{cos}\beta )`$, where the upper and lower signs occur with equal probability and $`\beta `$ is the angle which the spontaneously emitted photon makes with the $`x`$ axis. $`\beta `$ is random, with a distribution which is a sum of dipole distributions over the set of possible atomic orientations. To a fairly good approximation , $`u`$ can be treated as uniformly distributed between $`-1`$ and $`1`$. We define $`\eta `$ to be the probability per kicking cycle that the atom undergoes spontaneous emission.
### 5.1 Density Matrix Calculations
To include spontaneous emission in our density matrix calculations, we use the following approximate technique. For a particular run we choose a probability per kicking cycle $`\eta `$ that a particular atom will spontaneously emit. Equivalently $`\eta `$ is the proportion of the atoms in the ensemble represented by the density matrix which will spontaneously emit in each cycle. We then discretise $`u`$, so that $`u=\pm 1`$ with equal probability. Therefore the recoil of an atom in state $`|n`$ will leave it in state $`|n1`$ or $`|n+1`$. These states are representable by the density matrix, unlike those arising from continuous $`u`$. Once per kick, the following replacement is made:
$$\langle m|\widehat{\rho }|n\rangle \to \frac{1}{2}\eta \left(\langle m+1|\widehat{\rho }|n+1\rangle +\langle m-1|\widehat{\rho }|n-1\rangle \right)+(1-\eta )\langle m|\widehat{\rho }|n\rangle $$
(15)
where we apply periodic boundary conditions.
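In matrix form, (15) averages the density matrix with copies of itself shifted one step along both indices; a minimal numpy sketch (the function name is ours), with the periodic boundary condition supplied by np.roll:

```python
import numpy as np

def spontaneous_emission_step(rho, eta):
    up = np.roll(np.roll(rho, -1, axis=0), -1, axis=1)    # <m+1|rho|n+1>
    down = np.roll(np.roll(rho, 1, axis=0), 1, axis=1)    # <m-1|rho|n-1>
    return 0.5 * eta * (up + down) + (1.0 - eta) * rho    # trace-preserving
```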
We have also performed Monte Carlo wavefunction simulations which are more realistic but considerably more time consuming. In particular they take into account the continuous distribution of recoil momenta in the $`x`$ direction, and the fact that spontaneous emission can occur at any time during the laser kick. The recoil due to spontaneous emission is continuous between $`-\bar{k}`$ and $`\bar{k}`$, and alters the ‘ladder’ on which coherent dynamics occur for the particular atom. The approximate way in which we account for spontaneous emission in our density matrix calculation produces results which are negligibly different from the Monte Carlo wavefunction calculation, but are significantly more computationally efficient.
### 5.2 Momentum Distributions
Figures 15 and 15 show momentum distributions versus number of kicks for the quantum double-kicked rotor, with spontaneous emission effects included. Figure 15(a) shows behaviour for $`K=80`$ and $`\eta =2\%`$. Comparing this to Figure 15(a), any broadening of the distributions due to the spontaneous emission is not obvious. However the ‘spiky’ nature of the pure quantum version has been significantly suppressed. Figure 15(a) has the same kicking strength and $`\eta =5\%`$. Here we still see little broadening but even stronger suppression of the quantum peaks. Figure 15(b), with $`K=180`$ and $`\eta =2\%`$, again shows similarity to the pure quantum case (in Figure 15(b)), except for some suppression of peaks and slight broadening. Figure 15(b) has $`K=180`$ and $`\eta =5\%`$. We now see unmistakable movement of probability into the wings of the distribution, qualitatively resembling that in the classical system for $`K=180`$, in Figure 15(b), although not as strong. Figure 15(c) has $`K=280`$ and $`\eta =2\%`$. There is much more flow of probability through the cantori than in the pure quantum case in Figure 15(b), although the spikiness of the distribution is still strikingly different from the classical case in Figure 15(b). For $`\eta =5\%`$ in Figure 15(c), the distributions might be considered to look more classical than quantum, although the rate of transport through the boundary does not match the classical case. Finally for $`K=400`$, we again see (Figures 15(d) and 15(d)) a significant increase in the probability flow due to spontaneous emission, accompanied by distribution shapes which qualitatively resemble classical behaviour except for a somewhat smaller rate of transport.
### 5.3 Wigner Functions
A convenient way to visualize the information represented by the density matrix is in the form of a Wigner function. For a discrete, truncated basis we use the toroidal Wigner function as defined in ,
$$w(X_k,P_l,t)=\sum _{j=0}^{2N-1}\mathrm{exp}\left(i\frac{\pi jk}{N}\right)\frac{1+(-1)^{l+j}}{2}\left\langle \frac{l+j}{2}\right|\widehat{\rho }\left|\frac{l-j}{2}\right\rangle $$
(16)
where $`P_l=(\bar{k}/2)l`$ and $`X_k=\pi k/N`$. This gives a Wigner function defined on a $`2N\times 2N`$ grid. Averaging over cells of four adjacent points we reduce the grid to $`N\times N`$. This was implemented in Matlab using a fast Fourier transform algorithm.
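A direct numpy transcription of (16) (a sketch: the parity factor keeps only even $`l+j`$, so the matrix indices are integers, and indices are wrapped modulo $`N`$ on the torus):

```python
import numpy as np

def toroidal_wigner(rho):
    N = rho.shape[0]
    w = np.zeros((2 * N, 2 * N))
    j = np.arange(2 * N)
    for l in range(2 * N):
        parity = (1 + (-1) ** ((l + j) % 2)) / 2.0
        g = parity * rho[((l + j) // 2) % N, ((l - j) // 2) % N]
        w[:, l] = (2 * N * np.fft.ifft(g)).real   # sum_j exp(i*pi*j*k/N) g_j over k
    return w                                      # real for Hermitian rho
```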
Figures 15 through 22 show a series of three-dimensional graphs of Wigner functions. Each function represents the state of the quantum double-kicked system after 20 kicks, with the same initial condition as for our other simulations. Figure 15 has $`K=80`$ and no spontaneous emission. We see that the Wigner function is strongly contained by the classical barriers. It has a complicated folded shape with some sharp spikes. In Figure 15 we introduce spontaneous emission with rate $`\eta =5\%`$. The Wigner function is qualitatively similar, but close comparison shows that it has become somewhat smoother, with the contrast between peaks and troughs being reduced. In Figure 15 we have $`K=180`$ and $`\eta =0`$. The Wigner function ‘spills’ over the classical barriers, although the total probability outside is small. We chiefly notice that the Wigner function is now much more complicated in shape, varying rapidly in position and momentum. There are significant negative peaks present. The addition of spontaneous emission in Figure 15 again serves to suppress this variation. It is not obvious from inspection of these graphs, but the cantori localization is also destroyed to some degree by finite $`\eta `$.
In Figure 22, with $`K=280`$ and $`\eta =0`$, we again see an increase in the complexity of the Wigner function. At this kicking strength, the effect of the KAM tori around $`p=\pm 30\pi `$ has become apparent. The introduction of spontaneous emission in Figure 22 again serves to smooth this variation somewhat, while pushing more (quasi) probability into the wings of the function. Finally we examine $`K=400`$. For the situation without spontaneous emission (Figure 22) the function is again very complicated and noisy-looking in shape. The main visible change with increased kicking strength is the increase in the function near to the $`p=\pm 30\pi `$ boundaries. Introduction of spontaneous emission (Figure 22) again suppresses the rapid variation to some extent.
In the limit of small $`\bar{k}`$ we expect that the Wigner pseudo-probability distribution will tend to a classical probability distribution. The Wigner function itself cannot be interpreted as a probability distribution because it is not always positive. We can argue that the Wigner functions of states of particularly quantum character will exhibit this non-positivity strongly. The normalization of our discrete Wigner function is
$$\sum _{k,l=1}^N\overline{W}(\varphi _k,p_l)=1$$
(17)
but due to non-positivity
$$\sum _{k,l=1}^N\left|\overline{W}(\varphi _k,p_l)\right|\geq 1.$$
(18)
We would like to quantify the ‘non-classicality’ of a given state with a single positive number. A possible choice is
$$S=\sum _{k,l=1}^N\left(\left|\overline{W}(\varphi _k,p_l)\right|-\overline{W}(\varphi _k,p_l)\right)\geq 0.$$
(19)
We refer to S as ‘quantum strangeness’. In this paper, we will not analyze this quantity in any detail. However, for example, a mixed state consisting of two Gaussian wave packets centered at $`(\varphi ,p)=(0,\pm 16\bar{k})`$ has $`S=0.1765`$, while for a superposition state with equally weighted components of the two wavepackets $`S=0.7647`$. For the initial state we use for our simulations $`S=0`$. In general the larger the value of $`S`$, the more non-classical the character of the state. Figure 22 shows this parameter calculated for the state of our system after 20 kicks, for several kicking strengths $`K`$ and spontaneous emission rates $`\eta =0`$, 2% and 5%.
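From the cell-averaged Wigner array, $`S`$ of equation (19) is a one-line computation (a sketch; `W` is assumed real and normalized as in (17)):

```python
import numpy as np

def strangeness(W):
    return float(np.sum(np.abs(W) - W))   # vanishes iff W is everywhere non-negative
```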
### 5.4 Decoherence Versus Heating
An atom which undergoes spontaneous emission receives a random momentum kick. This will obviously lead to an increase in the width of the momentum distribution, the net effect being some incoherent absorption of energy from the laser beams. This effect is referred to as ‘heating’. This however is not the main physical mechanism behind the increased diffusion we observe due to spontaneous emission. In Figure 22(a) and 22(b) we show the additional diffusion through $`p=\pm 10\pi `$ over the pure quantum case, for several kicking strengths and two values of spontaneous emission rate $`\eta `$. We note that the contribution of heating to the crossing of this boundary will be strongest when the momentum distribution is steep near $`p=\pm 10\pi `$. If the broadening effect was due entirely to heating it should be most pronounced for the smaller kicking strengths $`K`$ where the pure quantum distribution is strongly localized by the cantori, and for fixed $`\eta `$, should not increase with increasing $`K`$. Figure 22(a) shows that the additional diffusion caused by spontaneous emission rate $`\eta =2\%`$ is much larger at $`K=180`$, 280 and 400, than at $`K=80`$. Figure 22(b) shows the same trend for $`\eta =5\%`$. There must therefore be another mechanism which increases the diffusion rate, and which is much stronger than the heating mechanism for $`K\gtrsim 180`$. This mechanism is the destruction of quantum coherence, or decoherence, caused by the randomizing effect of spontaneous emission. We have also experimentally verified that heating is negligible for $`K>150`$ .
### 5.5 Measurement Decoherence, or the Anti-Zeno Effect
Our model for spontaneous emission leads to decoherence which we can consider to be ‘environmentally induced’. Kaulakys and Gontis discuss the effect of projective momentum measurements on the dynamics of the quantum $`\delta `$-kicked rotor. They find that if a momentum measurement is made after every kick, then quantum localization is completely destroyed, and unbounded diffusion occurs, with the same diffusion constant as in the classical system. They refer to this modification of the dynamics as an *anti-Zeno effect*. Each measurement corresponds to the diagonalization of the density matrix in the momentum representation (i.e. off-diagonal elements become zero), or equivalently, the loss of all information about position. This effect is formulated without appealing to a collapse of the wave-function, and can in principle be produced experimentally.
We perform a similar simulation for our system to compare this form of decoherence to the spontaneous emission induced decoherence and to the classical motion. Computationally this is very similar to the density-matrix spontaneous emission simulations. After each double-kick coherent cycle, the density matrix is replaced by a matrix with the same diagonal entries, but all zero off-diagonal entries. The cycle then repeats. We note that the Wigner functions for states generated by this type of simulation are very simple, being the product of a uniform position distribution and the momentum distribution. We therefore do not show them, but point out that the ‘quantum strangeness’ parameter $`S`$ is always zero for these functions.
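The measurement step itself is a projection onto the momentum diagonal; as a minimal sketch:

```python
import numpy as np

def momentum_measurement(rho):
    return np.diag(np.diag(rho))          # zero all off-diagonal coherences
```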
### 5.6 Comparison of Quantum and Classical Results
To compare our quantum and classical results, we calculate the probability for a particle to cross the $`p=\pm 10\pi `$ cantori, $`P(\left|p\right|>10\pi ,t)`$, for classical, pure quantum, spontaneous emission and anti-Zeno simulations of the system. Figure 22(a) has $`K=80`$. Classical and pure quantum simulations give essentially horizontal lines with some fluctuation, because the KAM tori present at this kicking strength are effective barriers for both systems. The initial conditions determine the level of this line. The quantum simulations with spontaneous emission show a gradual leakage and the quantum anti-Zeno simulation gives a qualitatively similar result. This suggests that, at this kicking strength, decoherence breaks down the quantum cantorus localization, *destroying* the correspondence between the quantum and classical systems.
In Figure 22(b), we present simulations for $`K=180`$. There is now a clear difference between the classical and quantum simulations. In fact they appear to differ even for very small $`t`$, where the quantum curve initially rises more sharply than the classical, before saturation sets in. Adding spontaneous emission destroys the saturation, and for $`t>10`$ causes the quantum system to qualitatively more closely resemble the classical. The anti-Zeno simulation however suggests that the limit of large decoherence is again *faster* transport across the cantori than in the classical case. We note that this simulation does not involve heating effects, so the transport must be due to decoherence.
Now shifting our attention to $`K=280`$, in Figure 22(c), we see a large difference between classical and pure quantum behaviour which appears to be bridged by the introduction of spontaneous emission. The anti-Zeno calculation now closely corresponds to the classical picture, and we conclude that at this kicking strength decoherence does make the quantum system ‘more classical’.
Finally we consider $`K=400`$ in Figure 22(d). The pure quantum case is not as far from the classical behaviour as for $`K=280`$, but again decoherence is effective in increasing the similarity between the systems. The anti-Zeno calculation now gives results that are almost indistinguishable from the classical. The close correspondence between the anti-Zeno and classical simulations for this and the previous figure can be related to the fact that for these kicking strengths the size of the classical flux per kick (in Figure 15) has become comparable to our dimensionless Planck’s constant $`\bar{k}=2.6`$.
Our group has previously published experimental results for $`K=280`$ . These show good agreement with the spontaneous emission simulations, especially the Monte Carlo runs. With the elimination of some systematic experimental errors we expect that still better agreement would be achieved.
Figure 22 demonstrates that, as we have seen, when $`\eta =0`$, quantum saturation sets in by $`t=20`$ over our entire range of kicking strengths. If we examine the $`\eta =0`$ results we see that the quantum strangeness $`S`$ is very small for $`K=80`$, but rapidly increases with increasing kicking strength. In general, we expect to see $`S`$ increase with the chaoticity of the corresponding classical system. Classical systems with strong chaos rapidly develop phase space structure on all scales, which cannot be reproduced in a quantum system with finite $`\bar{k}`$, so the quantum system must begin to exhibit non-classical features. It is interesting here that when $`K=80`$, $`S`$ is almost negligible, which seems to correspond to the fact that the classical and quantum systems are both strongly localized by the KAM tori in the classical system. As $`K`$ increases, the small scale phase space structure making up the cantori must differ in the classical and quantum systems, and we see that $`S`$ rises quickly with $`K`$ in this regime. Examining the effects of introducing spontaneous emission, we see that $`S`$ is significantly reduced even by a rate $`\eta =2\%`$. Along with the ‘more classical’ diffusion seen in Figure 22, we can see that the Wigner function also exhibits more ‘classicality’ when we introduce environmental decoherence.
## 6 Discussion
We have analyzed and numerically simulated the classical double-kicked rotor system and verified that it possesses KAM tori and cantori which present barriers to diffusion. We have established the success of a simple diffusion model for the system and used it to estimate the phase space flux per kick through a cantorus as a function of kicking strength. The double-kicked rotor is an interesting chaotic system, which would reward further analysis. One aspect would be locating periodic trajectories of the system, especially the series of these trajectories converging to the noble KAM torus or cantorus.
The quantum double-kicked system shows strong localization corresponding to classical KAM tori. The system is also localized by classical *cantori* and does not show the sharp transition shown classically. Whereas the classical system eventually reaches a uniform distribution in phase space, the quantum system saturates with probability still significantly confined by the classical barriers. This saturation is confirmed by a Floquet analysis of asymptotic momentum distributions. The effect is still obvious even when the size of the classical flux per kick becomes of the order of our dimensionless Planck’s constant, $`\bar{k}`$. In addition to the saturation, we observe fluctuating peaks in the momentum distributions, which contrast strongly to the flat-topped distributions seen classically. The quantum transport through the classical boundary more closely resembles the classical situation as the kicking strength increases, but examination of the Wigner functions shows that the system becomes *more* non-classical. This is consistent with the general theory of quantum chaos which suggests that stronger chaos in the classical system accelerates the appearance of quantum coherence effects .
The modern theory of environmental decoherence states that the interaction of quantum systems with the environment is a necessary condition for the appearance of classical behaviour in real systems. The traditional semi-classical limit, which in our formulation is given by $`\bar{k}\to 0`$, is unsatisfactory because the quantum break time becomes infinite only logarithmically, and can be surprisingly short for a real macroscopic system. After this time arbitrary quantum superpositions may arise.
We have observed the effects of decoherence on our quantum system, using two models; dissipation by spontaneous emission caused by the kicking laser beam, and the anti-Zeno effect. Where the classical flux per kick through the cantorus is finite, decoherence disrupts quantum saturation and the system is qualitatively more like the classical version. Examination of the Wigner function also indicates that quantum interference effects are suppressed by the decoherence. The anti-Zeno calculations are a kind of extreme decoherence limit, and we have seen that they reproduce the classical dynamics accurately, provided that $`\bar{k}`$ is not too large compared to the classical flux per kick. It appears not unreasonable to suppose that this correspondence continues indefinitely.
There are numerous aspects of decoherence in this system which could be further investigated. The introduction of noise into the kicking of the system is expected to have a similar effect to that of dissipation through spontaneous emission . Particular states of the system will be ‘resistant’ to decoherence. These are expected to be the ‘classical’ states in the limit of strong decoherence and small $`\mathrm{¯}k`$. We would like to determine some of these states and quantify the ‘decoherence times’ for other states, for comparison with the quantum break time of the coherent system. It would be interesting to quantitatively compare the anti-Zeno effect with decoherence via noise and dissipation.
Our results support the idea that the apparently classical nature of the universe arises entirely from quantum mechanics. In the real world the unpredictable behaviour of chaotic systems must ultimately arise from quantum uncertainty. As further progress is made in the investigation of decoherence, there is hope that physics will develop a consistent picture of a smooth transition between quantum and classical descriptions of reality.
## Acknowledgements
This work was supported by the University of Auckland Research Committee and the Royal Society of New Zealand Marsden Fund.
# Self-organization in nonlinear wave turbulence
## I Introduction: NLS and Soliton Turbulence
A fascinating feature of many turbulent fluid and plasma systems is the emergence and persistence of large–scale organized states, or coherent structures, in the midst of small–scale turbulent fluctuations. A familiar example is the formation of macroscopic quasi–steady vortices in a turbulent large Reynolds number two dimensional fluid. Such phenomena also occur for many classical Hamiltonian systems, even though the dynamics of these systems is formally reversible. In the present work, we shall focus our attention on another class of nonlinear partial differential equations whose solutions exhibit the tendency to form persistent coherent structures immersed in a sea of microscopic turbulent fluctuations. This is the class of nonlinear wave systems described by the well–known nonlinear Schrödinger (NLS) equation:
$$i\partial _t\psi +\mathrm{\Delta }\psi +f(|\psi |^2)\psi =0,$$
(1)
where $`\psi (𝐫,t)`$ is a complex field and $`\mathrm{\Delta }`$ is the Laplacian operator. The NLS equation describes the slowly-varying envelope of a wave train in a dispersive conservative system. It models, among other things, gravity waves on deep water , Langmuir waves in plasmas , pulse propagation along optical fibers , and superfluid dynamics. When $`f(|\psi |^2)=\pm |\psi |^2`$ and eqn. (1) is posed on the whole real line or on a bounded interval with periodic boundary conditions, the equation is completely integrable . Otherwise, it is nonintegrable.
The NLS equation (1) may be cast in the Hamiltonian form $`i\partial _t\psi =\delta H/\delta \psi ^{*}`$, where $`\psi ^{*}`$ is the complex conjugate of the field $`\psi `$, and $`H`$ is the Hamiltonian:
$$H(\psi )=\int \left(|\nabla \psi |^2-F(|\psi |^2)\right)d\mathbf{r}.$$
(2)
Here, the potential $`F`$ is defined via the relation $`F(a)=\int _0^af(y)\,dy`$. The dynamics (1) conserves, in addition to the Hamiltonian, the particle number
$$N(\psi )=\int |\psi |^2\,d\mathbf{r}.$$
(3)
We shall assume throughout that eqn. (1) is posed in a bounded one dimensional interval with either periodic or homogeneous Dirichlet boundary conditions. We restrict our attention to attractive, or focusing, nonlinearities $`f`$ ($`f(a)\geq 0`$, $`f^{\prime }(a)>0`$) such that the dynamics described by (1) is nonintegrable, free of wave collapse, and admits stable solitary–wave solutions. The dynamics under these conditions has been referred to as soliton turbulence . Such is the case for the important power law nonlinearities, $`f(|\psi |^2)=|\psi |^s`$, with $`0<s<4`$ (in the periodic case, $`s\neq 2`$ for nonintegrability) , and also for the physically relevant saturated nonlinearities $`f(|\psi |^2)=|\psi |^2/(1+|\psi |^2)`$ and $`f(|\psi |^2)=1-\mathrm{exp}(-|\psi |^2)`$, which arise as corrections to the cubic nonlinearity for large wave amplitudes .
Equation (1) in one spatial dimension has solitary wave solutions of the form $`\psi (x,t)=\varphi (x)\mathrm{exp}(i\lambda ^2t)`$, where $`\varphi `$ satisfies the nonlinear eigenvalue equation:
$$\varphi _{xx}+f(|\varphi |^2)\varphi -\lambda ^2\varphi =0.$$
(4)
It has been argued that the solitary wave solutions play a prominent role in the long–time dynamics of (1), in that they act as statistical attractors to which the system relaxes. The numerical simulations in , as well as the simulations we shall present within this article, support this conclusion. Indeed, it is seen that for rather generic initial conditions the field $`\psi `$ evolves, after a sufficiently long time, into a state consisting of a spatially localized coherent structure, which compares quite favorably to a solution of (4), immersed in a sea of turbulent small-scale turbulent fluctuations. At intermediate times the solution typically consists of a collection of these soliton-like structures, but as time evolves, the solitons undergo a succession of collisions in which the smaller soliton decreases in amplitude, while the larger one increases in amplitude. When solitons collide or interact, they shed radiation, or small–scale fluctuations. The interaction of the solitons continues until eventually a single soliton of large amplitude survives amidst the turbulent background radiation. Figure (1) below illustrates the evolution of the solution of (1) for the particular nonlinearity $`f(|\psi |^2)=|\psi |`$ and with periodic boundary conditions on the spatial interval $`[0,256]`$.
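A minimal split-step Fourier sketch reproduces this phenomenology qualitatively for $`f(|\psi |^2)=|\psi |`$ with periodic boundary conditions; the grid, step, and initial condition below are illustrative assumptions, not the parameters behind Figure (1):

```python
import numpy as np

L, M, dt = 256.0, 1024, 0.005
x = np.linspace(0.0, L, M, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
psi = (0.5 + 0.05 * np.random.randn(M)) * np.exp(-(x - L / 2) ** 2 / 100.0) + 0j

for _ in range(20000):                                # first-order (Lie) splitting
    psi = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(psi))  # dispersion
    psi *= np.exp(1j * np.abs(psi) * dt)              # nonlinear phase, f = |psi|
N_particle = np.sum(np.abs(psi) ** 2) * (L / M)       # conserved up to roundoff
```

Both substeps conserve the particle number exactly (the nonlinear step is a pure phase rotation), so monitoring `N_particle` is a cheap check on the scheme.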
In modeling the long–time behavior of a Hamiltonian system such as NLS, it seems natural to appeal to the methods of equilibrium statistical mechanics. That such an approach may be relevant for understanding the asymptotic–time state for NLS has already been suggested in , although the thermodynamic arguments presented by these authors are rather formal and somewhat incomplete. Motivated in part by the ideas outlined in, Jordan et al. have recently constructed a mean–field statistical theory to characterize the large–scale structure and the statistics of the small–scale fluctuations inherent in the asymptotic–time state of the focusing nonintegrable NLS system (1). The main prediction of this theory is that the coherent state that emerges in the long–time limit is the ground state solution of equation (4). That is, it is the solitary wave that minimizes the Hamiltonian $`H`$ given the constraint $`N=N^0`$, where $`N^0`$ is the initial and conserved value of the particle number integral. This prediction is in accord with previous theories, but the approach taken in is new, and provides a definite interpretation to the notion set forth in the earlier works that it is “thermodynamically advantageous” for the NLS system to approach a coherent solitary wave structure that minimizes the Hamiltonian subject to fixed particle number. The statistical theory also gives predictions for the particle number spectral density and the kinetic energy spectral density, at least for a finite–dimensional spectral truncation of the NLS dynamics (1). In particular, it predicts an equipartition of kinetic energy among the small–scale fluctuations.
In the present work, we shall begin with a brief review of this statistical theory. The predictions of the statistical theory will then be compared in detail with the results of direct numerical simulations of the NLS system. In addition, we will also closely examine the evolution of the particle number spectrum in our numerical simulations, as well as the dynamics (of finite spectral approximations) of the integrals $`S_m(\psi )=\int |D^m\psi |^2\,dx`$ (here $`D^m`$ denotes the $`m`$–th derivative with respect to the spatial variable). The statistical model, being strictly an equilibrium theory, does not give predictions concerning the finite time dynamics of these quantities. However, we shall see that it does give accurate estimates for the long–time saturation values of these quantities for a finite dimensional spectral approximation of the NLS dynamics (1). In addition, we will demonstrate that the integrals $`S_m`$ exhibit interesting power law growth in time, as suggested by the weak turbulence theory developed by Pomeau.
## II Mean-Field Statistical Model
In order to develop a meaningful statistical theory, we begin by introducing a finite–dimensional approximation of the NLS equation (1). To fix ideas and notation, we will consider the NLS system with homogeneous Dirichlet boundary conditions on an interval $`\mathrm{\Omega }`$ of length $`L`$. Our methods can easily be modified to accommodate other boundary conditions, and we will consider below the predictions of the theory for periodic boundary conditions, as well. In addition, our techniques can easily be extended to higher dimensions, but we wish to concentrate on the one–dimensional case for ease of presentation.
Let $`e_j(x)=\sqrt{2/L}\mathrm{sin}(k_jx)`$ with $`k_j=\pi j/L`$, and for any function $`g(x)`$ on $`\mathrm{\Omega }`$ denote by $`g_j=\int _\mathrm{\Omega }g(x)e_j(x)\,dx`$ its $`j`$th Fourier coefficient with respect to the orthonormal basis $`e_j,j=1,2,\mathrm{\dots }`$. Define the functions $`u^{(n)}(x,t)=\sum _{j=1}^nu_j(t)e_j(x)`$ and $`v^{(n)}(x,t)=\sum _{j=1}^nv_j(t)e_j(x)`$, where the real coefficients $`u_j,v_j,j=1,\mathrm{\dots },n`$, satisfy the coupled system of ordinary differential equations
$$\begin{array}{c}\dot{u}_j-k_j^2v_j+\left(f((u^{(n)})^2+(v^{(n)})^2)\,v^{(n)}\right)_j=0\\ \\ \dot{v}_j+k_j^2u_j-\left(f((u^{(n)})^2+(v^{(n)})^2)\,u^{(n)}\right)_j=0.\end{array}$$
(5)
Then the complex function $`\psi ^{(n)}=u^{(n)}+iv^{(n)}`$ satisfies the equation
$$i\psi _t^{(n)}+\psi _{xx}^{(n)}+P^n(f(|\psi ^{(n)}|^2)\psi ^{(n)})=0,$$
where $`P^n`$ is the projection onto the span of the eigenfunctions $`e_1,\mathrm{\dots },e_n`$. This equation is a natural spectral approximation of the NLS equation (1), and it may be shown that its solutions converge as $`n\to \mathrm{\infty }`$ to solutions of (1) .
For given $`n`$, the system of equations (5) defines a dynamics on the $`2n`$–dimensional phase space $`𝐑^{2n}`$. This finite-dimensional dynamical system is a Hamiltonian system, with conjugate variables $`u_j`$ and $`v_j`$, and with Hamiltonian
$$H_n=K_n+\mathrm{\Theta }_n,$$
(6)
where
$$K_n=\frac{1}{2}\int _\mathrm{\Omega }((u_x^{(n)})^2+(v_x^{(n)})^2)\,dx=\frac{1}{2}\sum _{j=1}^nk_j^2(u_j^2+v_j^2),$$
(7)
is the kinetic energy, and
$$\mathrm{\Theta }_n=-\frac{1}{2}\int _\mathrm{\Omega }F((u^{(n)})^2+(v^{(n)})^2)\,dx,$$
(8)
is the potential energy. The Hamiltonian $`H_n`$ is, of course, an invariant of the dynamics. The truncated version of the particle number
$$N_n=\frac{1}{2}\int _\mathrm{\Omega }((u^{(n)})^2+(v^{(n)})^2)\,dx=\frac{1}{2}\sum _{j=1}^n(u_j^2+v_j^2),$$
(9)
is also conserved by the dynamics (5). The factor $`1/2`$ is included in the definition of the particle number for convenience. The Hamiltonian system (5) satisfies the Liouville property, which is to say that the measure $`\prod _{j=1}^ndu_j\,dv_j`$ is invariant under the dynamics . This property together with the assumption of ergodicity of the dynamics provide the usual starting point for a statistical treatment of a Hamiltonian system.
With the finite dimensional Hamiltonian system in hand, we now consider a macroscopic description in terms of a probability density $`\rho ^{(n)}(u_1,\mathrm{\dots },u_n,v_1,\mathrm{\dots },v_n)`$ on the $`2n`$–dimensional phase–space $`𝐑^{2n}`$. We seek a probability density that describes the statistical equilibrium state for the truncated dynamics. In accord with standard statistical mechanics and information theoretic principles, we define this state to be the density $`\rho ^{(n)}`$ on $`2n`$–dimensional phase space which maximizes the Gibbs–Boltzmann entropy functional
$$S(\rho )=-\int _{\mathbf{R}^{2n}}\rho \mathrm{log}\rho \prod _{j=1}^ndu_j\,dv_j,$$
(10)
subject to constraints dictated by the conservation of the Hamiltonian and the particle number under the dynamics (5) .
The usual canonical ensemble
$$\rho \propto \mathrm{exp}\left(-\beta H_n-\mu N_n\right),$$
results from maximizing the entropy subject to the mean constraints $`\langle H_n\rangle =H^0`$ and $`\langle N_n\rangle =N^0`$, where $`H^0`$ and $`N^0`$ are the given values of the Hamiltonian and the particle number, respectively, and $`\beta `$ and $`\mu `$ are the Lagrange multipliers to enforce these constraints. However, it has been shown in that, for the focusing nonlinearities we consider here, the canonical ensemble is ill–defined in the sense that it is not normalizable (i.e., $`\int _{\mathbf{R}^{2n}}\mathrm{exp}[-\beta H_n-\mu N_n]\prod _{j=1}^ndu_j\,dv_j`$ diverges ). Thus, we are obliged to consider an alternative statistical equilibrium description of the NLS system based on constraints other than those that give rise to the canonical ensemble. The key to constructing an appropriate statistical model is based on the observation from numerical simulations that, for a large number of modes $`n`$, in the long–time limit, the field $`(u^{(n)},v^{(n)})`$ decomposes into two essentially distinct components: a large–scale coherent structure, and small–scale radiation, or fluctuations. As time progresses, the amplitude of the fluctuations decreases, until eventually the contribution of the fluctuations to the particle number and the potential energy component of the Hamiltonian becomes negligible compared to the contribution from the coherent state, so that $`N_n`$ and $`\mathrm{\Theta }_n`$ are determined almost entirely by the coherent structure. We have checked that this effect becomes even more pronounced when the resolution of the numerical simulations is improved (i.e., when the number of modes is increased with the length $`L`$ of the spatial interval fixed). On the other hand, as the fluctuations exhibit rapid spatial variations, the amplitude of their gradient does not, in general, become negligible in the asymptotic time limit. Consequently, the fluctuations can make a significant contribution to the kinetic energy component $`K_n`$ of the Hamiltonian. This is illustrated in Fig. (2).
Denoting by $`\langle u_j\rangle `$ and $`\langle v_j\rangle `$ the means of the variables $`u_j`$ and $`v_j`$ with respect to the yet to be determined ensemble $`\rho ^{(n)}`$, we now identify the coherent state with the mean–field pair $`(\langle u^{(n)}\rangle (x),\langle v^{(n)}\rangle (x))=(\sum _{j=1}^n\langle u_j\rangle e_j(x),\sum _{j=1}^n\langle v_j\rangle e_j(x))`$. The fluctuations, or small-scale radiation inherent in the long–time state then correspond to the difference $`(\delta u^{(n)},\delta v^{(n)})\equiv (u^{(n)}-\langle u^{(n)}\rangle ,v^{(n)}-\langle v^{(n)}\rangle )`$ between the state vector $`(u^{(n)},v^{(n)})`$ and the mean–field vector. The statistics of the fluctuations are encoded in the probability density $`\rho ^{(n)}`$. Based on the considerations of the preceding paragraph, and the results of the numerical simulations displayed in Fig. (2), it seems reasonable to conjecture that the amplitude of the fluctuations of the field $`\psi ^{(n)}`$ in the long-time state of the NLS system (5) should vanish entirely (in some appropriate sense) in the continuum limit $`n\to \mathrm{\infty }`$. Thus we are led to the following vanishing of fluctuations hypothesis:
$$\int _\mathrm{\Omega }\left\langle (\delta u^{(n)})^2+(\delta v^{(n)})^2\right\rangle \,dx=\sum _{j=1}^n\left[\langle (\delta u_j)^2\rangle +\langle (\delta v_j)^2\rangle \right]\to 0,\text{as }n\to \mathrm{\infty }.$$
(11)
Here, $`\delta u_j=u_j-\langle u_j\rangle `$ represents the fluctuations of the Fourier coefficient $`u_j`$ about its mean value $`\langle u_j\rangle `$, and similarly for $`\delta v_j`$. We emphasize that (11) is a hypothesis used to construct our statistical theory, and not a conclusion drawn from the theory itself.
An immediate consequence of the vanishing of fluctuations hypothesis is that for $`n`$ sufficiently large, the expectation $`\langle N_n\rangle `$ of the particle number is determined almost entirely by the mean $`(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$. Furthermore, the hypothesis (11) implies that for $`n`$ large, the expectation $`\langle \mathrm{\Theta }_n(u^{(n)},v^{(n)})\rangle `$ of the potential energy is well approximated by $`\mathrm{\Theta }_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$, which is the potential energy of the mean. This may be seen by expanding the potential $`F`$ about the mean $`(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ in equation (8), taking expectations, and noting that because of the vanishing of fluctuations hypothesis (11), there holds $`|\langle \mathrm{\Theta }_n(u^{(n)},v^{(n)})\rangle -\mathrm{\Theta }_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )|=o(1)`$ as $`n\to \infty `$. Notice, however, that the vanishing of fluctuations hypothesis does not imply that the contribution of the fluctuations to the expectation of the kinetic energy becomes negligible in the limit $`n\to \infty `$. Indeed, this contribution is $`(1/2)\sum _{j=1}^nk_j^2[\langle (\delta u_j)^2\rangle +\langle (\delta v_j)^2\rangle ]`$, which need not tend to 0 as $`n\to \infty `$, even if (11) holds. Thus, from these arguments, we conclude that for $`n`$ sufficiently large, $`\langle H_n\rangle \approx \frac{1}{2}\sum _{j=1}^nk_j^2\langle u_j^2+v_j^2\rangle -\frac{1}{2}\int _\mathrm{\Omega }F(\langle u^{(n)}\rangle ^2+\langle v^{(n)}\rangle ^2)dx`$. These considerations lead us to impose the following mean–field constraints on the admissible probability densities $`\rho `$ on the $`2n`$-dimensional phase space:
$$\begin{array}{c}\stackrel{~}{N}_n(\rho )\equiv \frac{1}{2}\sum _{j=1}^n(\langle u_j\rangle ^2+\langle v_j\rangle ^2)=N^0\\ \\ \stackrel{~}{H}_n(\rho )\equiv \frac{1}{2}\sum _{j=1}^nk_j^2\langle u_j^2+v_j^2\rangle -\frac{1}{2}\int _\mathrm{\Omega }F(\langle u^{(n)}\rangle ^2+\langle v^{(n)}\rangle ^2)dx=H^0.\end{array}$$
(12)
Here, $`N^0`$ and $`H^0`$ are the conserved values of the particle number and the Hamiltonian, as determined from initial conditions. The statistical equilibrium states are then defined to be probability densities $`\rho ^{(n)}`$ on the phase–space $`𝐑^{2n}`$ that maximize the entropy (10) subject to the constraints (12). We shall refer to the constrained maximum entropy principle that determines the statistical equilibria as (MEP).
Further justification and motivation for the vanishing of fluctuations hypothesis (11), which leads to the mean–field constraints in the maximum entropy principle (MEP), are provided in . In particular, it is proved in that the solutions $`\rho ^{(n)}`$ of (MEP) concentrate on the phase–space manifold on which $`H_n=H^0`$ and $`N_n=N^0`$ in the continuum limit $`n\to \infty `$, in the sense that $`\langle N_n\rangle \to N^0`$, $`\langle H_n\rangle \to H^0`$, and $`\text{var }N_n\to 0,\text{var }H_n\to 0`$ in this limit. Here, $`\text{var }W`$ denotes the variance of the random variable $`W`$. This concentration property establishes a form of asymptotic equivalence between the mean–field ensembles $`\rho ^{(n)}`$ and the microcanonical ensemble, which is the invariant measure concentrated on the phase–space manifold on which $`H_n=H^0`$ and $`N_n=N^0`$. It therefore provides a strong theoretical justification for the mean–field statistical model.
## III Calculation and Analysis of Equilibrium States
The solutions $`\rho ^{(n)}`$ of (MEP) are calculated by an application of the Lagrange multiplier rule
$$S^{}(\rho ^{(n)})=\mu \stackrel{~}{N}_n^{}(\rho ^{(n)})+\beta \stackrel{~}{H}_n^{}(\rho ^{(n)}),$$
where $`\beta `$ and $`\mu `$ are the Lagrange multipliers to enforce that the probability density $`\rho ^{(n)}`$ satisfy the constraints (12). A straightforward but tedious calculation yields the following expression for the maximum entropy distribution $`\rho ^{(n)}`$:
$$\rho ^{(n)}(u_1,\mathrm{},u_n,v_1,\mathrm{},v_n)=\underset{j=1}{\overset{n}{}}\rho _j(u_j,v_j),$$
(13)
where, for $`j=1,\mathrm{},n`$,
$$\rho _j(u_j,v_j)=\frac{\beta k_j^2}{2\pi }\mathrm{exp}\left\{-\frac{\beta k_j^2}{2}\left((u_j-\langle u_j\rangle )^2+(v_j-\langle v_j\rangle )^2\right)\right\},$$
(14)
with:
$$\begin{array}{c}\langle u_j\rangle =\frac{1}{k_j^2}\left(f(\langle u^{(n)}\rangle ^2+\langle v^{(n)}\rangle ^2)\langle u^{(n)}\rangle \right)_j-\frac{\mu }{\beta k_j^2}\langle u_j\rangle \\ \\ \langle v_j\rangle =\frac{1}{k_j^2}\left(f(\langle u^{(n)}\rangle ^2+\langle v^{(n)}\rangle ^2)\langle v^{(n)}\rangle \right)_j-\frac{\mu }{\beta k_j^2}\langle v_j\rangle .\end{array}$$
(15)
Thus, for each $`j`$, $`u_j`$ and $`v_j`$ are independent Gaussian variables, with means given by the nonlinear equations (15) and with identical variances
$$\text{var }u_j=\text{var }v_j=\frac{1}{\beta k_j^2}.$$
(16)
Note that var $`u_j=\langle (\delta u_j)^2\rangle `$ by definition, and likewise for $`v_j`$. Obviously, the multiplier $`\beta `$ must be positive. Notice also that, since the probability density $`\rho ^{(n)}`$ factors according to (13), the Fourier modes $`u_j,v_j,j=1,\mathrm{},n`$, are mutually uncorrelated. In addition, we see from (15) that the complex mean–field $`\langle \psi ^{(n)}\rangle =\langle u^{(n)}\rangle +i\langle v^{(n)}\rangle `$ is a solution of (setting $`\lambda =\mu /\beta `$)
$$\langle \psi ^{(n)}\rangle _{xx}+P^n\left(f(|\langle \psi ^{(n)}\rangle |^2)\langle \psi ^{(n)}\rangle \right)-\lambda \langle \psi ^{(n)}\rangle =0,$$
(17)
which is clearly the spectral truncation of the eigenvalue equation (4) for the continuous NLS system (1). It follows, therefore, that the mean–field predicted by our theory corresponds to a solitary wave solution of the NLS equation. Alternatively, the mean $`(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ is a solution of the variational equation $`\delta H_n+\lambda \delta N_n=0`$, where $`\lambda `$ is a Lagrange multiplier to enforce the particle number constraint $`N_n=N^0`$.
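As an aside on computation: this constrained stationary state (shown below to be the constrained minimizer of the Hamiltonian) can be obtained by a normalized gradient flow, i.e., relax along the unconstrained gradient of $`H_n`$ and rescale to the prescribed particle number after every step. The following is a minimal Python sketch for the power-law nonlinearity $`f(|\psi |^2)=|\psi |`$ treated in Sect. IV; the normalizations $`N=\int |\psi |^2dx`$ and $`H=\int (|\psi _x|^2-\frac{2}{3}|\psi |^3)dx`$, as well as the grid and step size, are our assumptions for illustration only.

```python
import numpy as np

def ground_state(N0, L=40.0, n=256, dt=1e-3, nsteps=20000):
    """Minimize H = int(|psi_x|^2 - (2/3)|psi|^3) dx at fixed N = int |psi|^2 dx
    by gradient descent followed by renormalization to N0 at each step."""
    dx = L/n
    x = np.arange(n)*dx
    k = 2*np.pi*np.fft.fftfreq(n, d=dx)
    psi = np.exp(-(x - L/2)**2) + 0.01          # localized positive seed
    for _ in range(nsteps):
        psi *= np.sqrt(N0/(np.sum(psi**2)*dx))  # project onto N = N0
        psi_xx = np.fft.ifft(-(k**2)*np.fft.fft(psi)).real
        psi += dt*(psi_xx + np.abs(psi)*psi)    # descend along -grad(H)/2
    return x, psi

# for N0 = 9.6 the converged profile should match the sech^2 wave of Sect. IV
x, phi = ground_state(N0=9.6)
```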
Now, as the maximum entropy distribution $`\rho ^{(n)}`$ is required to satisfy the mean–field Hamiltonian constraint (12), it follows from (13)–(17) that
$$H^0=\frac{n}{\beta }+H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle ).$$
(18)
The term $`n/\beta `$ represents the contribution to the kinetic energy from the Gaussian fluctuations, and $`H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ is the Hamiltonian of the mean. Notice that the contribution of the fluctuations to the kinetic energy is divided evenly among the $`n`$ Fourier modes. From (18), we obtain the following expression for $`\beta `$ in terms of the number of modes $`n`$ and the Hamiltonian $`H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ of the mean:
$$\beta =\frac{n}{H^0-H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )}.$$
(19)
Using equations (13)–(19), we may easily calculate the entropy of any solution $`\rho ^{(n)}`$ of (MEP). This yields, after some algebraic manipulations, that
$$S(\rho ^{(n)})=C(n)+n\mathrm{log}\left(\frac{L^2[H^0-H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )]}{n}\right),$$
(20)
where $`C(n)=n-\sum _{j=1}^n\mathrm{log}(j^2\pi /2)`$ depends only on the number of Fourier modes $`n`$. Clearly, the entropy $`S(\rho ^{(n)})`$ will be maximum if and only if the mean field pair $`(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ corresponding to $`\rho ^{(n)}`$ realizes the minimum possible value of $`H_n`$ over all fields $`(u^{(n)},v^{(n)})`$ that satisfy the constraint $`N_n(u^{(n)},v^{(n)})=N^0`$.
Equation (20) reveals that in statistical equilibrium the entropy is, up to additive and multiplicative constants, the logarithm of the kinetic energy contained in the turbulent fluctuations about the mean state. This result, therefore, provides a precise interpretation to the notions set forth by Zakharov et al. and Pomeau that the entropy of the NLS system is directly related to the amount of kinetic energy contained in the small-scale fluctuations, and that it is “thermodynamically advantageous” for the solution of NLS to approach a ground state which minimizes the Hamiltonian for the given number of particles.
We now know that $`H_n(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )=H_n^{*}`$, where $`H_n^{*}`$ is the minimum value of $`H_n`$ allowed by the particle number constraint $`N_n=N^0`$. As a consequence, the Lagrange multiplier $`\beta `$ is uniquely determined by (19):
$$\beta =\frac{n}{H^0-H_n^{*}}.$$
(21)
That the “inverse temperature” $`\beta `$ scales linearly with the number of Fourier modes $`n`$ is required in order to obtain a meaningful continuum limit $`n\to \infty `$ in which the expectations of the Hamiltonian and particle number remain finite. The scaling of the inverse temperature with the number of modes is a common feature of the equilibrium statistical mechanics of finite dimensional approximations of other plasma and fluid systems with infinitely many degrees of freedom, as well. The parameter $`\lambda `$ (which depends on $`n`$) is also determined by the requirement that the mean $`(\langle u^{(n)}\rangle ,\langle v^{(n)}\rangle )`$ realize the minimum value of the Hamiltonian $`H_n`$ given the particle number constraint $`N_n=N^0`$.
Using eqns. (16) and (21), we may now obtain an exact expression for the contribution of the fluctuations to the expectation of the particle number. This is
$$\frac{1}{2}\sum _{j=1}^n\left[\langle (\delta u_j)^2\rangle +\langle (\delta v_j)^2\rangle \right]=\frac{H^0-H_n^{*}}{n}\sum _{j=1}^n\frac{1}{k_j^2}=O(\frac{1}{n}),\text{as }n\to \infty .$$
(22)
Recall that in the derivation of the mean–field constraints (12), we assumed the vanishing of fluctuations condition (11). The calculation (22) shows, therefore, that the maximum entropy distributions $`\rho ^{(n)}`$ indeed satisfy the hypothesis (11), and hence, that the mean–field statistical theory is consistent with the assumption that was made to derive it. But as the analysis of this section has shown, the maximum entropy distributions $`\rho ^{(n)}`$ provide much more information than is contained in the hypothesis (11). Most importantly, we know that the mean–field corresponding to $`\rho ^{(n)}`$ is an absolute minimizer of the Hamiltonian $`H_n`$ subject to the particle number constraint $`N_n=N^0`$. In addition, the theory yields predictions for the particle number and kinetic energy spectral densities, at least for the $`2n`$-dimensional spectrally truncated NLS system (5) with $`n`$ large. Indeed, we have the following prediction for the particle number spectral density
$$\langle |\psi _j|^2\rangle =|\langle \psi _j\rangle |^2+\frac{H^0-H_n^{*}}{nk_j^2},$$
(23)
where we have used the identity $`\psi _j=u_j+iv_j`$, and eqns. (16) and (21). The first term on the right hand side of (23) is the contribution to the particle number spectrum from the mean, and the second term is the contribution from the fluctuations. Since the mean field is a smooth solution of the ground-state equation, its spectrum decays rapidly, so that for $`j\gg 1`$, we have the approximation $`\langle |\psi _j|^2\rangle \approx (H^0-H_n^{*})/(nk_j^2)`$. The kinetic energy spectral density is obtained simply by multiplying eqn. (23) by $`k_j^2`$. As emphasized above, we have the prediction that the kinetic energy arising from the fluctuations is equipartitioned among the $`n`$ spectral modes, with each mode contributing the amount $`(H^0-H_n^{*})/n`$.
While we have chosen to present the statistical theory specifically for homogeneous Dirichlet boundary conditions, it is straightforward to develop the theory for NLS on a periodic interval of length $`L`$, as well. In this case, it is most convenient to write the spectrally truncated complex field $`\psi ^{(n)}`$ as
$$\psi ^{(n)}=\sum _{j=-n/2}^{n/2}\psi _j\mathrm{exp}(ik_jx),$$
for $`n`$ an even positive integer, where $`k_j=2\pi j/L`$. The predictions of the statistical theory remain the same as in the case of Dirichlet boundary conditions. In particular, the mean field $`\langle \psi ^{(n)}\rangle `$ is a minimizer of the Hamiltonian $`H_n`$ given the particle number constraint $`N_n=N^0`$, and the particle number spectrum satisfies (23) for $`j\ne 0`$. The Fourier coefficient $`\psi _0`$ may be consistently chosen to be deterministic (i.e., $`\text{var }\psi _0=0`$ and $`\psi _0=\langle \psi _0\rangle `$), to eliminate the ambiguity arising from the 0 mode.
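For the periodic setting just described, the equilibrium prediction (23) is straightforward to tabulate numerically. The following sketch is a hypothetical helper, assuming the FFT normalization in which $`\psi _j`$ are the coefficients of $`\mathrm{exp}(ik_jx)`$, and treating the 0 mode as purely coherent:

```python
import numpy as np

def predicted_spectrum(psi_mean, L, H0, Hstar):
    """Particle-number spectrum of eqn (23): coherent part |<psi_j>|^2
    plus the fluctuation tail (H0 - Hstar)/(n k_j^2) on the modes j != 0."""
    n = psi_mean.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)        # k_j = 2*pi*j/L
    psi_j = np.fft.fft(psi_mean)/n              # coefficients of exp(i k_j x)
    spectrum = np.abs(psi_j)**2
    nz = k != 0
    spectrum[nz] += (H0 - Hstar)/(n*k[nz]**2)   # equipartitioned fluctuations
    return k, spectrum
```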
## IV Numerical results
The general predictions of the statistical theory developed above do not depend crucially on the particular nonlinearity $`f`$ in the NLS equation (1). Indeed, for any $`f`$ satisfying the conditions stated in the introduction, the coherent structure predicted by the theory in the continuum limit $`n\to \infty `$ corresponds to the solitary wave that minimizes the energy for the given number of particles $`N^0`$. Also, for any such nonlinearity $`f`$, the particle number spectrum in the long-time limit for the spectrally truncated NLS system (5), according to the statistical theory, should obey the relation (23). Of course, the minimum value $`H_n^{*}`$ of the Hamiltonian $`H_n`$ which enters this formula does depend on $`f`$.
Here, we will present numerical results primarily for periodic boundary conditions and for the focusing power law nonlinearity $`f(|\psi |^2)=|\psi |`$. That is, we shall solve numerically the particular NLS equation
$$i\partial _t\psi +\partial _{xx}\psi +|\psi |\psi =0,$$
(24)
on a periodic interval of length $`L`$. We have, however, carried out similar numerical experiments for different focusing nonlinearities and for Dirichlet boundary conditions, and we observed that the general qualitative features of the long-time dynamics are unaltered by such changes. The nonlinearity $`f(|\psi |^2)=|\psi |`$ actually represents a nice compromise between the focusing effect and nonlinear interactions. For weaker nonlinearities (such as the saturated ones), the interaction between modes is weak, and the time required to approach an asymptotic equilibrium state is quite long. On the other hand, for stronger nonlinearities, the solitary wave structures that emerge exhibit narrow peaks of large amplitude, and therefore, greater spatial resolution is required in the numerical simulations.
The numerical scheme that we use for solving (24) is the well-known split-step Fourier method for a given number $`n`$ of Fourier modes. Throughout the duration of the simulations, the relative error in the particle number is kept at less than $`10^{-6}`$ percent, and the relative error in the Hamiltonian is no greater than $`0.1`$ percent. Notice that the numerical simulations, performed naturally for a finite number of modes, provide an ideal context for comparisons with the mean–field statistical theory outlined above.
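For concreteness, here is a minimal sketch of such a split-step (Strang) integrator for (24) in Python. Both substeps are solved exactly; the nonlinear one is a pure phase rotation, since it preserves $`|\psi |`$. The grid size, time step, noise amplitude, and run length are illustrative assumptions, not the parameters of the production runs.

```python
import numpy as np

def split_step_nls(psi0, L, dt, nsteps):
    """Strang splitting for i psi_t + psi_xx + |psi| psi = 0 (periodic)."""
    n = psi0.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    half_linear = np.exp(-1j*k**2*dt/2)      # exact half-step of the linear flow
    psi = psi0.astype(complex)
    for _ in range(nsteps):
        psi = np.fft.ifft(half_linear*np.fft.fft(psi))
        psi *= np.exp(1j*np.abs(psi)*dt)     # exact nonlinear phase rotation
        psi = np.fft.ifft(half_linear*np.fft.fft(psi))
    return psi

# modulationally unstable condensate plus weak uncorrelated noise (short demo run)
L, n, A = 100.0, 256, 1.0
rng = np.random.default_rng(1)
psi0 = A + 1e-4*(rng.standard_normal(n) + 1j*rng.standard_normal(n))
psi = split_step_nls(psi0, L, dt=5e-3, nsteps=20000)
```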
On the whole real line, the nonlinear Schrödinger equation (24) has solitary wave solutions of the form $`\psi (x,t)=\varphi (x)e^{i\lambda ^2t}`$, with
$$\varphi (x)=\frac{3\lambda ^2}{2\mathrm{cosh}^2\left(\frac{\lambda (x-x_0)}{2}\right)}.$$
(25)
The particle number $`N`$ and the Hamiltonian $`H`$ of these soliton–like solutions are determined by the parameter $`\lambda `$ through the relationships $`N=6\lambda ^3`$ and $`H=-\frac{18}{5}\lambda ^5`$. These solutions are centered at $`x=x_0`$, as shown in Figure (3), and because of the focusing property of equation (24), as $`N`$ increases, the amplitude of the solitary wave increases, while its width decreases. For a given value of the particle number $`N`$, the solitary wave (25) is the global minimizer of the Hamiltonian $`H`$ (when the integrals in the definitions (2) and (3) of the Hamiltonian and the particle number extend over the real line). Of course, the solitary wave solutions for the equation (24) on a finite interval, as well as those for the spectrally-truncated version (5), differ from the solution (25) over the infinite interval. However, because the solitary waves (25) exhibit an exponential decay, for a large enough interval, and for a large enough number of modes $`n`$, such differences can be neglected for all practical purposes.
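These values are easy to confirm by direct quadrature, under our assumption that the normalizations behind (2) and (3) are $`N=\int |\psi |^2dx`$ and $`H=\int (|\psi _x|^2-\frac{2}{3}|\psi |^3)dx`$; note in particular that the ground-state Hamiltonian is then negative, as it must be for a focusing nonlinearity.

```python
import numpy as np

lam = 1.3
x = np.linspace(-80/lam, 80/lam, 200001)
phi = 1.5*lam**2/np.cosh(0.5*lam*x)**2       # the solitary wave (25), x0 = 0
dphi = np.gradient(phi, x)
N = np.trapz(phi**2, x)                      # expect  6*lam**3
H = np.trapz(dphi**2 - (2/3)*phi**3, x)      # expect -(18/5)*lam**5
print(N/lam**3, H/lam**5)                    # approximately 6.0 and -3.6
```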
For constant $`A`$, the condensate $`\psi (x,t)=Ae^{iAt}`$ is an equilibrium solution of (24). However, since the nonlinearity is focusing, this spatially homogeneous solution is modulationally unstable. Indeed, if we expand $`\psi `$ around this solution in a series of the form
$$\psi (x,t)=(A+\psi _ke^{(\sigma t+ikx)})e^{iAt}$$
we obtain the dispersion relation:
$$\sigma ^2=Ak^2-k^4$$
Thus, the condensate is stable for $`k^2>A`$, and unstable for $`k^2<A`$. The most unstable wave-number is $`k_i=\sqrt{A/2}`$.
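A short numerical check of this dispersion relation (with a hypothetical condensate amplitude $`A`$):

```python
import numpy as np

A = 1.0
k = np.linspace(1e-3, np.sqrt(A), 1000)   # the unstable band k^2 < A
sigma = np.sqrt(A*k**2 - k**4)            # growth rate from sigma^2 = A k^2 - k^4
print(k[np.argmax(sigma)], np.sqrt(A/2))  # both ~ 0.707: the most unstable k_i
```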
We choose to present in this paper the following set of numerical simulations: starting with the spatially homogeneous solution $`\psi (x,t=0)=A`$ (with $`A`$ of order $`1`$), we add initially a small spatially uncorrelated random perturbation, so that the modulational instability develops. Although we have checked that the long–time behavior of the solution is not dependent on the initial conditions, except through the initial and conserved values $`N^0`$ and $`H^0`$ of the particle number and the Hamiltonian, this class of initial conditions is particularly convenient for our purposes. For example, by considering different realizations of the initial random perturbation, we may perform an ensemble average over different initial conditions for a given $`A`$ (and therefore for fixed $`N^0`$ and $`H^0`$). Such initial conditions provide interesting analogies to standard fluid turbulence problems, as we will emphasize in the conclusion.
The spatially uniform initial conditions we consider here may be thought of as being far away from the expected statistical attractor described by the maximum entropy probability density $`\rho ^{(n)}`$. Indeed, the spectrum of the condensate differs considerably from the predicted statistical equilibrium spectrum (23). The numerical simulations that we perform here provide strong evidence that the solutions of the spectrally truncated NLS system converge in the long-time limit to a state that may be considered as statistically steady. We shall compare the statistical properties of this long-time state with the predictions of the mean–field statistical theory that was developed and analyzed above. In addition, we shall also investigate the following questions concerning the nature of the evolution leading from the initial state to the long-time statistical equilibrium state: 1- How long does it take for the system to reach the vicinity of its statistical attractor, so that subsequently its statistical features may be considered as stationary? 2- How well can we characterize the “path” that a solution follows en route to the statistically steady state? That is, what are the generic features of the transitory dynamics?
Figure (1) demonstrates that the transitory dynamics can be roughly decomposed into three stages: in the first stage, illustrated in Figure 1 a), the modulational instability creates an array of soliton-like structures separated by a typical distance $`l_i=2\pi /k_i`$ associated with the most unstable wave number $`k_i`$. The second stage is characterized by the interaction and coalescence of these solitons. In this coarsening process, the number of solitons decreases, while the amplitudes of the surviving solitons increase, until eventually a single soliton of large amplitude persists amongst a sea of small-amplitude background radiation (Figures 1 b) and c)). This intermediate stage has previously been observed for other nonlinear Schrödinger equations in one and two spatial dimensions , and it was shown in that this coarsening process follows a self-similar dynamics. The dynamical exponents of these processes are not very well understood at this point, however. During the final stage of the dynamics, the surviving large-scale soliton interacts with the small-scale fluctuations. As time increases, the amplitude of the soliton increases, while the amplitude of the fluctuations decreases (note the changes from Figure 1 c) to Figure 1 d)). In this stage of the dynamics, the mass (or number of particles) is gradually transferred from the small-scale fluctuations to the large-scale coherent soliton. For a finite number of modes $`n`$, the dynamics eventually reaches a “stationary” state whose properties are very well described by the mean–field statistical equilibrium theory developed above, as we shall demonstrate. This implies that the long-time state may, in fact, be thought of as a “statistical attractor”, in the sense that, according to the statistical theory, it corresponds to a maximizer of the entropy functional (10) subject to the dynamical constraints (12). Note that because the dynamics is reversible, intermediate states such as those in Figure (1 b) theoretically could still be attained even after the statistical equilibrium state has been reached. In fact, a numerical simulation starting from the state in Fig. (1 d)) but with the time step taken negative shows the reverse dynamics up to round-off errors, where one can observe the decomposition of the solution into an array of soliton-like structures as in figure (1 a)) for intermediate times, while in the limit $`t\to \infty `$ an equilibrium state such as the one of figure (1 d)) is once again attained.
The tendency of the solution of the NLS system (24) to approach the statistical equilibrium state is also captured in the evolution of the kinetic and potential energies (see Figure 4). While the sum of these two quantities, which is the Hamiltonian, remains constant in time, we observe that the kinetic energy increases monotonically, and, consequently, the potential energy decreases monotonically as time goes on. The initial time period where these quantities evolve most rapidly (say $`t<20000`$) corresponds to the first two stages of the dynamics described above, in which the modulational instability creates an array of soliton-like structures which then coalesce into a single coherent soliton. After the coalescence has ended, the kinetic (potential) energy increases (decreases) very slowly to its saturation value. In the process, fluctuations develop on finer and finer spatial scales, which accounts for the gradual increase of kinetic energy, while the surviving soliton slowly absorbs mass from the background fluctuations, thereby increasing the magnitude of the contribution to the potential energy from the coherent structure. In the long-time limit, therefore, the soliton accounts for the vast majority of the potential energy, while the fluctuations make a substantial contribution to the kinetic energy.
The mean–field statistical theory provides a prediction for the expected value of the kinetic energy $`K_n`$ in statistical equilibrium for a given number of modes $`n`$. This is $`\langle K_n\rangle =K_n(\langle \psi ^{(n)}\rangle )+H^0-H_n^{*}`$, which follows directly upon multiplying eqn. (23) by $`k_j^2`$ and summing over $`j`$. The first term in this expression for $`\langle K_n\rangle `$ is the contribution to the mean kinetic energy from the coherent soliton structure which minimizes the Hamiltonian $`H_n`$ subject to the particle number constraint $`N_n=N^0`$. The second term in $`\langle K_n\rangle `$ is the contribution to the expectation of the kinetic energy from the fluctuations. $`H_n^{*}`$ is the minimum value of $`H_n`$ given the particle number constraint. As $`n\to \infty `$, we see that $`\langle K_n\rangle `$ converges to $`K(\psi ^{\infty })+H^0-H^{*}`$, where $`\psi ^{\infty }`$ is the minimizer of the Hamiltonian $`H`$ given the particle number constraint $`N=N^0`$ for the continuous NLS system on the interval $`[0,L]`$, and $`H^{*}=H(\psi ^{\infty })`$. Approximating $`K(\psi ^{\infty })`$ and $`H(\psi ^{\infty })`$ by $`K(\varphi )`$ and $`H(\varphi )`$, where $`\varphi `$ is the solitary wave on the real line whose particle number is $`N^0`$, we obtain for the setting considered in Figure (4) the large $`n`$ estimates $`K_n(\langle \psi ^{(n)}\rangle )\approx 9.2,H^0-H_n^{*}\approx 22.4`$, and therefore, $`\langle K_n\rangle \approx 31.6`$. Also, according to the statistical theory, the expected value $`\langle \mathrm{\Theta }_n\rangle `$ of the potential energy in statistical equilibrium should converge as $`n\to \infty `$ to $`\mathrm{\Theta }(\psi ^{\infty })`$. Approximating this by $`\mathrm{\Theta }(\varphi )`$, with $`\varphi `$ as above, we have the estimate $`\langle \mathrm{\Theta }_n\rangle \approx 37.1`$, which we expect to be accurate for sufficiently large $`n`$. We see that the kinetic (potential) energy of the numerical solution is bounded above (below) by the estimate based on the statistical theory, but as expected, the solution does not attain the theoretically predicted value for a finite number of modes. This is because, for the spectrally truncated system, a finite amount of the particle number and the potential energy integrals is actually contained in the small-scale fluctuations (according to the statistical theory, the contribution of the fluctuations to these quantities should be $`O(1/n)`$, where $`n`$ is the number of spectral modes; this follows from (23)). It may be checked that, when the spatial resolution is improved (i.e., when the number of modes $`n`$ is increased, while the length $`L`$ of the spatial interval, and the values $`H^0`$ and $`N^0`$ of the Hamiltonian and the particle number are held fixed), the contributions of the fluctuations to the particle number and the potential energy decrease, and the saturation values of the kinetic and potential energy attained in the numerical simulations come closer to the predicted statistical equilibrium averages of these quantities (see the inset in Fig. (6), which shows that the saturation value of the kinetic energy increases towards the predicted statistical equilibrium value as the number of modes $`n`$ in the numerical simulation increases). We expect that the contributions of the fluctuations to the particle number and the potential energy should vanish entirely as $`n\to \infty `$ for fixed $`L`$, $`H^0`$ and $`N^0`$, and that the predicted statistical equilibrium values for the mean kinetic energy and potential energy should be approached very closely by the numerical solution in the long-time limit when the number of modes in the simulation is sufficiently large.
Figures (1) and (4) clearly illustrate that for a given (large) number of modes $`n`$, the dynamics converges when $`t\to \infty `$ to a state consisting of a large-scale coherent soliton, which accounts for all but a small fraction of the particle number and the potential energy integrals, coupled with small-scale radiation, or fluctuations, which account for the kinetic energy that is not contained in the coherent structure. Formula (23) suggests, in fact, that in the long–time limit, the coherent structure and the background radiation exist in balance (or in statistical equilibrium) with each other, through the equipartition of kinetic energy of the fluctuations. In Figure (5), we display the particle number spectral density $`|\psi _k|^2`$, where $`\psi _k`$ is the Fourier transform of the field $`\psi `$, as a function of the wave number $`k`$ for a long time run. To obtain this spectrum, we have performed both an ensemble average over 16 initial conditions, and a time average over the final 1000 time units for each run. For comparison, we have displayed in this figure the spectrum of the solitary wave (25) whose particle number is equal to the conserved value of the particle number for the simulation. Observe that there is both a qualitative and quantitative agreement between the spectrum of this solitary wave solution and the small wavenumber portion of the spectrum arising from the numerical simulations. This is in accord with the statistical equilibrium theory, which predicts that the coherent structure should coincide with this solitary wave (in the limit $`n\to \infty `$). For larger wavenumbers, the spectrum of the numerical solution is dominated by the small scale fluctuations. We have indicated on the graph the large wavenumber spectrum predicted by the statistical theory. This prediction comes from the second expression on the right hand side of eqn. (23), except that we have approximated the minimum value $`H_n^{*}`$ of the Hamiltonian for the spectrally truncated system with $`n`$ modes by the Hamiltonian $`H^{*}`$ of the above-mentioned solitary wave solution for the continuum system. Not only is there a good qualitative agreement with the predicted equipartition of kinetic energy amongst the small-scale fluctuations (i.e., the $`k^{-2}`$ slope), but there is also an excellent quantitative agreement between the numerical results and the formula (23) for large $`k`$. Let us mention that the long-time spectrum obtained from a single simulation starting from a given initial condition, and without time averaging, though similar to the spectrum displayed in Figure (5), is much noisier.
As we have mentioned above, the numerical spectrum shown in Figure (5) arises from an ensemble average over long time and over different initial conditions (with the same values of the particle number and the Hamiltonian). Now, under the assumption that the dynamics is ergodic, such an average should coincide with an average with respect to the microcanonical ensemble for the spectrally truncated NLS system . Since it can be shown that the mean–field statistical ensembles $`\rho ^{(n)}`$ constructed above concentrate on the microcanonical ensemble in the continuum limit $`n\to \infty `$ (see Theorem 3 of reference ), it should be that averages with respect to $`\rho ^{(n)}`$ for large $`n`$ agree with the ensemble average of the numerical simulations over initial conditions and time, assuming ergodicity of the dynamics. While we have not shown that the dynamics is ergodic, we have, in fact, demonstrated what we believe to be a convincing agreement between the predictions of the mean–field ensembles $`\rho ^{(n)}`$ and the results of direct numerical simulations.
We have also monitored the time evolution of the quantities
$$S_m(\psi ^{(n)})=\underset{j}{}k_j^{2m}|\psi _j|^2,$$
(26)
for $`m`$ a positive integer. For the periodic boundary conditions considered here, the index $`j`$ ranges from $`-n/2`$ to $`n/2`$ and the wavenumber $`k_j`$ is given by $`k_j=2\pi j/L`$. Note that $`S_1`$ is the kinetic energy. In general, $`S_m(\psi )`$ is the squared $`L_2`$ norm of the m–th derivative of the field $`\psi `$. The growth of $`S_m`$ in time is an indicator of the development of fluctuations of the field on fine spatial scales. In addition, we may consider that $`S_m`$ gives an estimate of the evolution of the typical wave number $`K(t)`$ of the fluctuations since, roughly speaking, we can estimate $`S_m\sim K(t)^{2(m-1)}`$.
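In a spectral code these diagnostics are one line each; the sketch below evaluates (26) under the same (assumed) coefficient normalization as the spectrum helper above.

```python
import numpy as np

def S_m(psi, L, m):
    """S_m of eqn (26): sum over modes of k_j^{2m} |psi_j|^2."""
    n = psi.size
    k = 2*np.pi*np.fft.fftfreq(n, d=L/n)
    psi_j = np.fft.fft(psi)/n                # coefficients of exp(i k_j x)
    return float(np.sum(k**(2*m)*np.abs(psi_j)**2))
```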
The mean–field statistical theory provides the following prediction for the expectation of $`S_m`$ in statistical equilibrium for a given number of modes $`n`$:
$$\langle S_m\rangle =\sum _{j=-n/2}^{n/2}k_j^{2m}|\langle \psi _j\rangle |^2+\left(\frac{2\pi }{L}\right)^{2(m-1)}\frac{H^0-H_n^{*}}{n}\sum _{j=-n/2}^{n/2}j^{2(m-1)},$$
(27)
where we have used eqn. (23). The first term is the contribution to $`\langle S_m\rangle `$ from the mean field (the coherent structure), and the second term is the contribution from the fluctuations. Note that for a finite number of modes $`n`$, $`\langle S_m\rangle `$ is finite for each $`m`$, but only $`\langle S_1\rangle `$, which is the mean of the kinetic energy, remains finite in the continuum limit $`n\to \infty `$. The divergence of $`\langle S_m\rangle `$ for $`m\ge 2`$ comes from the second expression on the right hand side of (27) (i.e., from the fluctuations), which is of the order $`n^{2(m-1)}`$ as $`n\to \infty `$. For example, when $`m=2`$ this expression is found to be $`\pi ^2(H^0-H_n^{*})(n^2+3n+2)/(3L^2)`$, and we have the following formula for $`\langle S_2\rangle `$ for a given number of modes $`n`$ in the spectrally truncated NLS system:
$$\langle S_2\rangle =\sum _{j=-n/2}^{n/2}k_j^4|\langle \psi _j\rangle |^2+\frac{\pi ^2(H^0-H_n^{*})(n^2+3n+2)}{3L^2}.$$
(28)
Based on the considerations of the previous paragraph, we expect that the numerical simulations for a given number of modes $`n`$ will reveal that the quantities $`S_m`$ are bounded and saturate when $`t\to \infty `$, but that the larger the number of modes $`n`$, the larger the saturation value of $`S_m`$ (at least for $`m\ge 2`$). Figure (6) shows the evolution in time of $`S_2`$ for different values of $`n`$ (with the same $`L`$, $`N^0`$ and $`H^0`$). We observe that saturation does indeed occur for a finite number of modes. Also, as $`n`$ increases, the saturation value increases, as does the time required to reach saturation. By approximating the sum in eqn. (28) by $`\int _{-\infty }^{\infty }|\varphi _{xx}|^2dx`$ and approximating $`H_n^{*}`$ by $`H(\varphi )`$, where $`\varphi `$ is the solitary wave on the whole real line whose particle number is equal to the conserved particle number for the simulations treated in Figure (6), we obtain the following estimates: $`\langle S_2\rangle \approx 27.1,45.5,97.6`$ and $`170.3`$ for $`n=48,64,96`$, and $`128`$, respectively. Note that these estimates for $`\langle S_2\rangle `$ agree closely with the observed saturation values of $`S_2`$ in the numerical simulations for $`n=48`$ and $`n=64`$. For $`n=96`$, saturation has not quite yet been reached, but the value of $`S_2`$ at the final time $`t=3\times 10^5`$ is still reasonably close to the theoretical estimate of $`97.6`$. For $`n=128`$, $`S_2`$ is still growing considerably at the final time of the simulation, and so we cannot make comparisons with the statistical prediction for $`S_2`$ at this point. The inset in Figure (6) shows the evolution of the kinetic energy $`S_1`$ as a function of time for $`n=48,64,96`$, and $`128`$. We see that the kinetic energy saturates nearly at the same rate for all of the values of $`n`$ considered here. Clearly, $`S_1`$ remains bounded as $`n`$ increases. As discussed above, $`\langle S_1\rangle `$, the statistical equilibrium value of the mean kinetic energy, converges as $`n\to \infty `$ to $`K(\psi ^{\infty })+H^0-H(\psi ^{\infty })`$, where $`\psi ^{\infty }`$ is the solitary wave that minimizes the Hamiltonian $`H`$ for the given particle number $`N^0`$ for the NLS system on the interval $`[0,L]`$. Once again, approximating $`\psi ^{\infty }`$ by the solitary wave $`\varphi `$ on the whole real line that minimizes $`H`$ given the particle number constraint $`N=N^0`$, we may estimate the limiting value $`\langle S_1\rangle `$ by $`K(\varphi )+H^0-H(\varphi )`$, which is approximately $`7.3`$ for the value $`N^0=9.6`$ considered in Figure (6). This estimate provides an upper bound on the saturation values of $`S_1`$ observed in the simulations, and as the number of modes in the simulations increases, $`S_1`$ saturates closer to this approximation of the statistical equilibrium value.
When the spatial resolution of the numerical simulations is improved (i.e., when $`n`$ is increased with $`L`$ fixed), the functions $`S_m`$ are typically seen to exhibit power law growth in time before reaching saturation (see Figure 7).
Indeed, we observe for $`m=2,3`$ and $`4`$ that $`S_m`$ obeys the following power law dynamics:
$$S_m\sim t^{2(m-1)\nu },$$
(29)
with $`\nu =0.25\pm 0.01`$. This behavior is observed for $`t`$ large enough that the coalescence process has ended. It corresponds, therefore, to the regime where the kinetic energy has essentially reached saturation. Remarkably, the observed dynamical exponent $`\nu `$ is in good agreement with the prediction of Pomeau , which estimates the evolution of the typical wave number $`K(t)`$ of the fluctuations as time increases. The estimate comes from a dimensional analysis of the weak turbulence equation deduced from equation (24). Describing the fluctuation field $`\delta \psi `$ as:
$$\delta \psi =\frac{1}{\sqrt{L}}\int dk\,(\delta I_k)^{1/2}e^{i(kx-\omega t)},$$
the relation between the energy $`\omega `$ and the wave number $`k`$ is called the spectrum of excitations (we refer the reader to for details). In , it has been shown that if this wave number $`K(t)`$ is in the range where the spectrum of excitations obeys $`\omega (k)\simeq k^2`$, which means that the fluctuations behave essentially like free particles, then, assuming that there is a two–wave resonance in the weak turbulence approximation, it follows that $`K(t)\sim (\epsilon t)^{1/4}`$. Here, $`\epsilon `$ is the spatial energy density of the fluctuations (so $`\epsilon \simeq (H^0-H^{*})/L`$). The analysis in was carried out for cubic defocusing NLS in two spatial dimensions, and does not immediately go over to the case under consideration here. In fact, strictly speaking, in the regime $`\omega (k)=k^2`$, the resonance of two waves cannot hold in one spatial dimension. However, we conjecture that, due to the interactions with the large–scale coherent structure, the resonance may in fact be meaningful in the present setting, and therefore, we believe that a dimensional weak turbulence analysis along the lines of that developed in may be relevant. We hope to explore this possibility in the future. Interestingly, for the NLS system, the approximation $`\omega (k)=k^2`$ is usually valid in the limit $`k\gg 1`$. In the numerical simulations, the finite number of modes provides an ultraviolet cutoff since the largest wave number of the system is
$$k_{max}=\frac{\pi }{dx}=n\frac{\pi }{L}.$$
We remark that we have been able to recognize the power-law growth in time of the quantities $`S_m`$ only for the smallest $`dx`$ we have considered in our simulations. For larger $`dx`$ the free particle regime might not be realized, and it is not surprising in this case that the power law behavior is not observed.
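In practice the exponent in (29) is extracted by a least-squares fit in log-log coordinates over the post-coalescence window; a minimal sketch, with synthetic power-law data standing in for a recorded $`S_2(t)`$ series:

```python
import numpy as np

# synthetic stand-in for S_2(t) sampled after the coalescence has ended
t = np.linspace(2e4, 3e5, 50)
S2 = 0.5*t**0.5                      # placeholder with 2(m-1)nu = 0.5
slope, _ = np.polyfit(np.log(t), np.log(S2), 1)
nu = slope/2                         # m = 2: exponent is 2(2-1)*nu
print(nu)                            # ~0.25
```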
The previous considerations allow us to attach a more precise meaning to what we have been referring to as the transitory dynamics and the equilibrium state for the spectrally truncated NLS system. For sufficiently small grid sizes $`dx`$, one may consider that $`S_m`$ grows according to the power law (29) until the time $`t_n`$ at which the typical wave number $`K(t)`$ reaches the largest available wave number $`k_{max}`$. This time $`t_n`$ (which appears to scale as $`n^4`$ for fixed $`L`$, $`N^0`$ and $`H^0`$) defines a crossover between the transitory regime in which the solution evolves towards the spectrum (23), and the statistical equilibrium regime where the system investigates its phase-space according to the probability density $`\rho ^{(n)}`$. Notice that in the continuum limit $`n\to \infty `$, $`t_n`$ diverges to infinity, so that the continuous NLS system cannot reach statistical equilibrium in finite time. Such conjectures are supported by the investigation of the dynamics of the particle number spectrum during the intermediate time regime after which the coarsening process has ended, but before the final statistical equilibrium state has been reached. Note that the statistical equilibrium model does not provide predictions about the time evolution of the spectrum, because it is strictly an equilibrium theory. In fact, based on the statistical theory alone, nothing can be said about the path which the system follows from the statistically unlikely initial condition to the final statistical equilibrium state. Figure (8) displays the particle number spectrum at $`t=5\times 10^5`$ time units for a spatial resolution of $`dx=0.1`$.
This figure illustrates that the system investigates smaller and smaller scales as time increases. Indeed, the particle number spectrum for the modes $`k\gtrsim 20`$ is still at the level of the initial noise. Thus, the smallest scales available to the system have yet to be excited at the time $`t=5\times 10^5`$. For larger scales, however, one can recognize both the spectrum corresponding to the coherent soliton structure, and the fluctuation spectrum which appears to follow, at least approximately, the equilibrium law $`|\psi _k|^2\sim k^{-2}`$. This suggests the following scenario for the spectrally truncated NLS dynamics: as time increases, smaller and smaller scales are explored, until eventually all available modes are excited. However, at any given time after the coalescence process has ended, the system may be considered as being in statistical equilibrium over all the modes that have been excited up to that point. Defining $`k_{max}(t)`$ to be the largest wave number that the system has reached up to time $`t`$, then we obtain from (29) that $`k_{max}(t)\sim (\epsilon t)^{1/4}`$, if $`k_{max}`$ is large enough. Now if we denote by $`n(t)`$ the number of modes that have been excited up to time $`t`$, then based on our previous arguments, we have that $`k_{max}(t)=\pi n(t)/L`$. But, using $`k_{max}(t)\sim (\epsilon t)^{1/4}`$, we obtain the following estimate for the spectrum of the fluctuations at time $`t`$:
$$|\delta \psi _k|^2\approx \frac{H^0-H^{*}}{n(t)k^2}\sim \frac{\pi \epsilon ^{3/4}}{t^{1/4}k^2},$$
(30)
for $`|k|\le k_{\mathrm{max}}(t)`$. The total particle number spectrum at time $`t`$, of course, has to be taken as the sum of the spectrum corresponding to the large-scale coherent structure (which decreases exponentially for large $`k`$) and the spectrum of the fluctuations.
We emphasize that the derivation of the equation (30) describing the particle number spectrum at an intermediate time $`t`$ crucially depends on the assumption that the system evolves in such a way that it is nearly in statistical equilibrium over all the modes that have been investigated up to that time. Figure (8) has motivated us to make this assumption, but clearly further numerical investigations should be carried out in order to test the validity of this hypothesis, as well as the accuracy of the formula (30). Nevertheless, we find it quite interesting that the fluctuation spectrum (30) agrees with the prediction in , which was derived from a dimensional analysis of the weak turbulence equations for the NLS system.
## V Discussion-Conclusion-Acknowledgements.
The primary purpose of the present work has been to test the predictions of a mean–field statistical model of self-organization in a generic class of nonintegrable focusing NLS equations defined by eqn. (1). This statistical theory, which has been summarized above, was originally developed and analyzed in . In fact, we have demonstrated a remarkable agreement between the predictions of the statistical theory and the results of direct numerical simulations of the NLS system. There is a strong qualitative and quantitative agreement between the mean field predicted by the statistical theory and the large-scale coherent structure observed in the long-time numerical simulations. In addition, the statistical model accurately predicts the long-time spectrum of the numerical solution of the NLS system. The main conclusions we have reached are 1) The coherent structure that emerges in the asymptotic time limit is the solitary wave that minimizes the system Hamiltonian subject to the particle number constraint $`N=N^0`$, where $`N^0`$ is the given (conserved) value of $`N`$, and 2) The difference between the conserved Hamiltonian and the Hamiltonian of the coherent state resides in Gaussian fluctuations equipartitioned over wavenumbers.
While the statistical model we have developed is an equilibrium theory, and, strictly speaking, only provides predictions concerning the long-time statistical properties of the NLS system, we have combined this theory with insight gained from numerical simulations to paint a picture of the nature of the dynamics leading up to the statistical equilibrium state. Specifically, the simulations (and, in particular, the results shown in Fig. 8) indicate that the evolution is such that, at a given time after the coarsening process has ended, the system is nearly in statistical equilibrium over all the modes that have been excited by that time. From this observation and the fact that the quantities $`S_m`$ defined in (26) are seen to exhibit power law growth in time according to eqn. (29), we have arrived at the prediction (30) for the time–dependence of the spectrum of the fluctuations. As we have mentioned above, results such as (29) and (30) have previously been derived by Pomeau from weak turbulence arguments, but for the defocusing cubic NLS equation in a bounded two-dimensional spatial domain. We believe that it would be an interesting exercise to check whether these formulas can be derived directly from a weak turbulence analysis in the present context, that is, for 1D nonintegrable focusing NLS equations in the absence of collapse.
We would like to point out certain analogies between the dynamics of the NLS systems we have considered here, and the dynamics of a turbulent 2D Navier-Stokes fluid. A prominent feature of large Reynolds number 2D Navier-Stokes turbulence is the formation of quasisteady coherent vortex structures . Starting from generic initial conditions, the evolution of the fluid is characterized by the formation of a collection of large-scale vortices, and the subsequent merger or coalescence of like-signed vortices . The large-scale soliton structures in (focusing, nonintegrable) NLS play a role similar to that of the vortices in 2D Navier-Stokes turbulence. Indeed, we have observed in our numerical simulations of NLS the formation of an array of soliton-like structures which eventually coalesce into a single persistent soliton of large amplitude. Another characteristic feature of turbulence in two dimensions is the presence of a dual cascade . There is a direct cascade of enstrophy to small scales and an inverse cascade of energy to large scales. As pointed out long ago by Kraichnan , the existence of the inverse cascade of energy is indicative of the formation of a large-scale structure in the system. In NLS, there is also a dual cascade. Indeed, our numerical simulations, which correspond to injecting as initial conditions particle number and energy at a given scale $`l_i`$ associated with the modulational instability, have revealed that there is a direct transfer of kinetic energy to spatial scales smaller than $`l_i`$, while the particle number is transferred to large scales. While the 1D NLS equation is a much simpler system to investigate, both analytically and numerically, than a turbulent 2D fluid system, we believe that the understanding of the coalescence and transfer processes in this generic model of nonlinear wave turbulence might provide important insight into the nature of turbulent systems in general.
It is a pleasure to thank Robert Almgren, Shiyi Chen, Leo Kadanoff, Yves Pomeau, Bruce Turkington, and Scott Zoldi for valuable discussions and suggestions. R. J. acknowledges support from an NSF Mathematical Sciences Postdoctoral Research Fellowship and from the DOE through a grant to the Center for Nonlinear Studies at Los Alamos National Laboratory. The research of C. J. has been supported by the ASCI Flash Center at the University of Chicago under DOE contract B341495.
# Algebraic Holography
## 1 Introduction and results
The conjectured correspondence (so-called “holography”) between quantum field theories on 1+$`s`$-dimensional anti-deSitter space-time $`AdS_{1,s}`$ (the “bulk space”) and conformal quantum field theories on its conformal boundary $`CM_{1,s-1}`$ which is a compactification of Minkowski space $`\mathbb{R}^{1,s-1}`$, has recently raised enthusiastic interest. If anti-deSitter space is considered as an approximation to the space-time geometry near certain gravitational horizons (extremal black holes), then the correspondence lends support to the informal idea of reduction of degrees of freedom due to the thermodynamic properties of black holes. Thus, holography is expected to give an important clue for the understanding of quantum theory in strong gravitational fields and, ultimately, of quantum gravity.
While the original conjecture was based on “stringy” pictures, it was soon formulated in terms of (Euclidean) conventional quantum field theory, and a specific relation between generating functionals was conjectured. These conjectures have since been exposed with success to many structural and group theoretical tests, yet a rigorous proof has not been given.
The problem is, of course, that the “holographic” transition from anti-deSitter space to its boundary and back, is by no means a point transformation, thus preventing a simple (pointwise) operator identification between bulk fields and boundary fields. In the present note, we show that in contrast, an identification between the algebras generated by the respective local bulk and boundary fields is indeed possible in a very transparent manner. These algebraic data are completely sufficient to reconstruct the respective theories.
We want to remind the reader of the point of view due to Haag and Kastler (see for a standard textbook reference) which emphasizes that, while any choice of particular fields in a quantum field theory may be a matter of convenience without affecting the physical content of the theory (comparable to the choice of coordinates in geometry), the algebras they generate and their algebraic interrelations, notably causal commutativity, supply all the relevant physical information in an invariant manner. The interested reader will find in a review of the (far from obvious, indeed) equivalence between quantum field theory in terms of fields and quantum field theory in terms of algebras, notably on the strategies available to extract physically relevant information, such as the particle spectrum, superselection charges, and scattering amplitudes, from the net of algebras without knowing the fields.
It is crucial in the algebraic approach, however, to keep track of the localization of the observables. Indeed, the physical interpretation of a theory is coded in the structure of a “causal net” of algebras which means the specification of the sets of observables $`B(X)`$ which are localized in any given space-time region $`X`$.<sup>1</sup><sup>1</sup>1The assignment $`X\mapsto B(X)`$ is a “net” in the mathematical sense: a generalized sequence with a partially ordered index set (namely the set of regions $`X`$).
The assignment $`X\mapsto B(X)`$ is subject to the conditions of isotony (an observable localized in a region $`X`$ is localized in any larger region $`Y\supset X`$, thus $`B(Y)\supset B(X)`$), causal commutativity (two observables localized at space-like distance commute with each other), and covariance (the Poincaré transform of an observable localized in $`X`$ is localized in the transformed region $`gX`$; in the context at hand replace “Poincaré” by “anti-deSitter”). Each $`B(X)`$ should in fact be an algebra of operators (with the observables its selfadjoint elements), and to have sufficient control of limits and convergence in order to compute physical quantities of interest, it is convenient to let $`B(X)`$ be von Neumann algebras.<sup>2</sup><sup>2</sup>2A von Neumann algebra is an algebra of bounded operators on a Hilbert space which is closed in the weak topology of matrix elements. E.g., if $`\varphi `$ is a hermitean field and $`\varphi (f)`$ a field operator smeared over a region $`X`$ containing the support of $`f`$, then operators like $`\mathrm{exp}i\varphi (f)`$ belong to $`B(X)`$.
For most purposes it is convenient to consider as typical compact regions the “double-cones”, that is, intersections of a future directed and a past directed light-cone, and to think of point-like localization in terms of very small double-cones. On the other hand, certain aspects of the theory are better captured by “wedge” regions which extend to space-like infinity. A space-like wedge (for short: wedge) in Minkowski space is a region of the form $`\{x:x_1>|x_0|\}`$, or any Poincaré transform thereof. The corresponding regions in anti-deSitter space turn out to be intersections of $`AdS_{1,s}`$ with suitable flat space wedges in the ambient space $`\mathbb{R}^{2,s}`$, see below. In conformally covariant theories there is no distinction between double-cones and wedges since conformal transformations map the former onto the latter.
It will become apparent in the sequel that to understand the issue of “holography”, the algebraic framework proves to be most appropriate.
The basis for the holography conjectures is, of course, the coincidence between the anti-deSitter group $`SO_0(2,s)`$ and the conformal group $`SO_0(2,s)`$. ($`SO_0(n,m)`$ is the identity component of the group $`SO(n,m)`$, that is the proper orthochronous subgroup distinguished by the invariant condition that the determinants of the time-like $`n\times n`$ and of the space-like $`m\times m`$ sub-matrices are both positive.) The former group acts on $`AdS_{1,s}`$ (as a “deformation” from the flat space Poincaré group in 1+$`s`$ dimensions, $`SO_0(1,s)\ltimes \mathbb{R}^{1,s}`$), and the latter group acts on the conformal boundary $`CM_{1,s-1}`$ of $`AdS_{1,s}`$ (as an extension of the Poincaré group in 1+($`s-1`$) dimensions, $`SO_0(1,s-1)\ltimes \mathbb{R}^{1,s-1}`$) by restriction of the former group action on the bulk. The representation theoretical aspect of this coincidence has been elaborated (in Euclidean metric) in .
In terms of covariant nets of algebras of local observables (“local algebras”), it is thus sufficient to identify one suitable algebra in anti-deSitter space with another suitable algebra in the conformal boundary space, and then to let $`SO_0(2,s)`$ act to provide the remaining identifications. As any double-cone region in conformal space determines a subgroup of the conformal group $`SO_0(2,s)`$ which preserves this double-cone, it is natural to identify its algebra with the algebra of a region in anti-deSitter space which is preserved by the same subgroup of the anti-deSitter group $`SO_0(2,s)`$. It turns out that this region is a space-like wedge region which intersects the boundary in the given double-cone.
For a typical bulk observable localized in a wedge region, the reader is invited to think of a field operator for a Mandelstam string which stretches to space-like infinity. Its holographic localization on the boundary has finite size, but it becomes sharper and sharper as the string is “pulled to infinity”. We shall see that one may be forced to take into consideration theories which possess only wedge-localized, but no compactly localized observables.
Our main algebraic result rests on the following geometric Lemma:<sup>3</sup><sup>3</sup>3For details, see Sect. 2. We denote double-cones in the boundary by the symbol $`I`$, because (i) we prefer to reserve the “standard” symbol $`O`$ for double-cones in the bulk space, and because (ii) in 1+1 dimensions the “double-cones” on the boundary are in fact open intervals on the circle $`S^1`$.
Lemma: Between the set of space-like wedge regions in anti-deSitter space, $`W\subset AdS_{1,s}`$, and the set of double-cones in its conformal boundary space, $`I\subset CM_{1,s-1}`$, there is a canonical bijection $`\alpha :W\mapsto I=\alpha (W)`$ preserving inclusions and causal complements, and intertwining the actions of the anti-deSitter group $`SO_0(2,s)`$ and of the conformal group $`SO_0(2,s)`$
$$\alpha (g(W))=\dot{g}(\alpha (W)),\qquad \alpha ^{-1}(\dot{g}(I))=g(\alpha ^{-1}(I))$$
where $`\dot{g}`$ is the restriction of the action of $`g`$ to the boundary. The double-cone $`I=\alpha (W)`$ associated with a wedge $`W`$ is the intersection of $`W`$ with the boundary.
Given the Lemma, the main algebraic result states that bulk observables localized in wedge regions are identified with boundary observables localized in double-cone regions:
Corollary 1: The identification of local observables
$$B(W):=A(\alpha (W)),\qquad A(I):=B(\alpha ^{-1}(I))$$
gives rise to a 1:1 correspondence between isotonous causal conformally covariant nets of algebras $`I\mapsto A(I)`$ on $`CM_{1,s-1}`$ and isotonous causal anti-deSitter covariant nets of algebras $`W\mapsto B(W)`$ on $`AdS_{1,s}`$.
An observable localized in a double-cone $`O`$ in anti-deSitter space is localized in any wedge containing $`O`$, hence the local algebra $`B(O)`$ should be contained in all $`B(W)`$, $`W\supset O`$. We shall define $`B(O)`$ as the intersection of all these wedge algebras. These intersections no longer correspond to simple geometric regions in $`CM_{1,s-1}`$ (so points in the bulk have a complicated geometry in the boundary), as will be discussed in more detail in 1+1 dimensions below.
The following result also identifies states and representations of the corresponding theories:
Corollary 2: Under the identification of Corollary 1, a vacuum state on the net $`A`$ corresponds to a vacuum state on the net $`B`$. Positive-energy representations of the net $`A`$ correspond to positive-energy representations of the net $`B`$. The net $`A`$ satisfies essential Haag duality if and only if the net $`B`$ does. The modular group and modular conjugation (in the sense of Tomita-Takesaki) of a wedge algebra $`B(W)`$ in a vacuum state act geometrically (by a subgroup of $`SO_0(2,s)`$ which preserves $`W`$ and by a CPT reflection, respectively) if and only if the same holds for the double-cone algebras $`A(I)`$.
Essential Haag duality means that the algebras associated with causally complementary wedges not only commute as required by locality, but either algebra is in fact the maximal algebra commuting with the other one.
The last statement in the Corollary refers to the modular theory of von Neumann algebras which states that every (normal and cyclic) state on a von Neumann algebra is a thermal equilibrium state with respect to a unique adapted “time” evolution (one-parameter group of automorphisms = modular group) of the algebra. In quantum field theories in Minkowski space, whose local algebras are generated by smeared Wightman fields, the modular groups have been computed for wedge algebras in the vacuum state and were found to coincide with the boost subgroup of the Lorentz group which preserves the wedge (geometric action). In conformal theories, mapping wedges onto double-cones by suitable conformal transformations, the same result also applies to double-cones . This result is an algebraic explanation of the Unruh effect according to which a uniformly accelerated observer attributes a temperature to the vacuum state, and provides also an explanation of Hawking radiation if the wedge region is replaced by the space-time region outside the horizon of a Schwarzschild black hole .
The modular theory also provides a “modular conjugation” which maps the algebra onto its commutant. For Minkowski space Wightman field theories in the vacuum state as before, the modular conjugation of a wedge algebra turns out to act geometrically as a CPT-type reflection (CPT up to a rotation) along the “ridge” of the wedge which maps the wedge onto its causal complement. This entails essential duality for Minkowski space as well as conformally covariant Wightman theories.
The statement in Corollary 2 on the modular group thus implies that, if the boundary theory is a Wightman theory, then the boundary and the bulk theory both satisfy essential Haag duality, and also in anti-deSitter space a vacuum state of $`B`$ in restriction to a wedge algebra $`B(W)`$ is a thermal equilibrium state with respect to the associated one-parameter boost subgroup of the anti-deSitter group which preserves $`W`$, i.e., the Unruh effect takes place for a uniformly accelerated observer. Furthermore, the CPT theorem holds for the theory on anti-deSitter space. On the other hand, essential Haag duality and geometric modular action for quantum field theories on $`AdS_{1,s}`$ were established under much more general assumptions , implying the same properties for the associated boundary theory even when it is not a Wightman theory (see below).
We emphasize that the Hamiltonians $`\frac{1}{R}M_{0,d}`$ on $`AdS_{1,s}`$ and $`P^0`$ on $`CM_{1,s-1}`$ are not identified under the identification of the anti-deSitter group and the conformal group. Instead, $`M_{0,d}`$ is (in suitable coordinates) identified with the combination $`\frac{1}{2}(P^0+K^0)`$ of translations and special conformal transformations in the $`0`$-direction of $`CM_{1,s-1}`$. This is different from the Euclidean picture where the anti-deSitter Hamiltonian is identified with the dilatation subgroup of the conformal group. In Lorentzian metric, the dilatations correspond to a space-like “translation” subgroup of the anti-deSitter group. This must have been expected since the generator of dilatations does not have a one-sided spectrum as is required for the real-time Hamiltonian. The subgroup generated by $`\frac{1}{2}(P^0+K^0)`$ is well-known to be periodic and to satisfy the spectrum condition in every positive-energy representation. (Periodicity in bulk time of course implies a mass gap for the underlying bulk theory. This is not in conflict with the boundary theory being massless since the respective subgroups of time evolution cannot be identified.)
Different Hamiltonians give rise to different counting of degrees of freedom, since entropy is defined via the partition function. Thus, the “holographic” reduction of degrees of freedom can be viewed as a consequence of the choice of the Hamiltonian: The anti-deSitter Hamiltonian $`M_{0,d}=\frac{1}{2}(P^0+K^0)`$ has discrete spectrum and has a chance (at least in 1+1 dimensions) to yield a finite partition function. On the other hand, the partition function with respect to the boundary Hamiltonian $`P^0`$ exhibits the usual infrared divergence due to infinite volume and continuous spectrum.
A crucial aspect of the present analysis is the identification of compact regions in the boundary with wedge regions in the bulk. With a little hindsight, this aspect is indeed also present in the proposal for the identification of generating functionals . While the latter is given in the Euclidean approach, it should refer in real time to a hyperbolic differential equation with initial values given in a double-cone on the boundary which determine its solution in a wedge region of bulk space.
We also show that in 1+1 dimensions there are sufficiently many observables localized in arbitrarily small compact regions in the bulk space to ensure that compactly localized observables generate the wedge algebras. This property is crucial if we want to think of local algebras as being generated by local fields:
Proposition: Assume that the boundary theory $`A`$ on $`S^1`$ is weakly additive (i.e., $`A(I)`$ is generated by $`A(J_n)`$ whenever the interval $`I`$ is covered by a family of intervals $`J_n`$). If a wedge $`W`$ in $`AdS_{1,1}`$ is covered by a family of double-cones $`O_n\subset W`$, then the algebra $`B(W)`$ is generated by the observables localized in $`O_n`$:
$$B(W)=\bigvee _nB(O_n).$$
In order to establish this result, we explicitly determine the observables localized in a double-cone region on $`AdS_{1,1}`$. Their algebra $`B(O)`$ turns out to be non-trivial: it is the intersection of two interval algebras $`A(I_i)`$ on the boundary $`S^1`$ where the intersection of the two intervals $`I_i`$ is a union of two disjoint intervals $`J_i`$. $`B(O)`$ contains therefore at least $`A(J_1)`$ and $`A(J_2)`$. In fact, it is even larger than that, containing also observables corresponding to a “charge transport” , that is, operators which annihilate a superselection charge in $`J_1`$ and create the same charge in $`J_2`$. The inclusion $`A(J_1)\vee A(J_2)\subset B(O)`$ therefore carries (complete) algebraic information about the superselection structure of the chiral boundary theory .
As the double-cone on $`AdS_{1,1}`$ shrinks, the size of the intervals $`J_i`$ also shrinks but not their distance, so points in 1+1-dimensional anti-deSitter space are related to pairs of points in conformal space. But we see that sharply localized bulk observables involve boundary observables localized in large intervals: the above charge transporters. This result provides an algebraic interpretation of the obstruction against a point transformation between bulk and boundary.
The issue of compactly localized observables in anti-deSitter space is more complicated in more than two dimensions, and deserves a separate careful analysis. Some preliminary results will be presented in Section 2.3. They show that if the bulk theory possesses observables localized in double-cones, then the corresponding boundary theory violates an additivity property which is characteristic for Wightman field theories, while its violation is expected for non-abelian gauge theories due to the presence of gauge-invariant Wilson loop operators. Conversely, if the boundary theory satisfies this additivity property, then the observables of the corresponding bulk theory are always attached to infinity, as in topological (Chern-Simons) theories.
Let us point out that the conjectures in suggest a much more ambitious interpretation, namely that the correspondence pertains to bulk theories involving quantum gravity, while the anti-deSitter space and its boundary are understood in some asymptotic (semi-classical) sense. Indeed, the algebraic approach is at present no more able to describe proper quantum gravity than any other mathematically unambiguous framework. Most arguments given in the literature in favour of the conjectures refer to gravity as perturbative gravity on a background space-time. Likewise, our present results concern the semi-classical version of the conjectures, treating gravity like any other quantum field theory as a theory of observables on a classical background geometry. In fact, the presence or absence of gravity in the bulk theory plays no particular role. This is only apparently in conflict with the original arguments for a holographic reduction of degrees of freedom of a bulk theory in the vicinity of a gravitational horizon in which gravity is essential. Namely, our statement can be interpreted in the sense that once the anti-deSitter geometry is given for whatever reason (e.g., the presence of a gravitational horizon), then it can support only the degrees of freedom of a boundary theory.
## 2 Identification of observables
We denote by $`H_{1,s}`$ the $`d`$=1+$`s`$-dimensional hypersurface defined through its embedding into ambient $`\mathbb{R}^{2,s}`$,
$$x_0^2-x_1^2-\cdots -x_s^2+x_d^2=R^2$$
with Lorentzian metric induced from the 2+$`s`$-dimensional metric
$$ds^2=dx_0^2-dx_1^2-\cdots -dx_s^2+dx_d^2.$$
Its group of isometries is the Lorentz group $`O(2,s)`$ of the ambient space in which the reflection $`x\mapsto -x`$ is central. Anti-deSitter space is the quotient manifold $`AdS_{1,s}=H_{1,s}/\mathbb{Z}_2`$ (with the same Lorentzian metric locally). We denote by $`p`$ the projection $`H_{1,s}\to AdS_{1,s}`$.
Two open regions in anti-deSitter space are called “causally disjoint” if none of their points can be connected by a time-like geodesic. The largest open region causally disjoint from a given region is called the causal complement. In a causal quantum field theory on the quotient space $`AdS_{1,s}`$, observables and hence algebras associated with causally disjoint regions commute with each other.
The reader should be worried about this definition, since causal independence of observables should be linked to causal connectedness by time-like curves rather than geodesics. But on anti-deSitter space, any two points can be connected by a time-like curve, so they are indeed causally connected, and the requirement that causally disconnected observables commute is empty. Yet, as our Corollary 1 shows, if the boundary theory is causal, then the associated bulk theory is indeed causal in the present (geodesic) sense. We refer also to where it is shown that vacuum expectation values of commutators of observables with causally disjoint localization have to vanish whenever the vacuum state has reasonable properties (invariance and thermodynamic passivity), but without any a priori assumptions on causal commutation relations (neither in bulk nor on the boundary).
Thus in the theories on anti-deSitter space we consider in this paper, observables localized in causally disjoint but causally connected regions commute; see for a discussion of the ensuing physical constraints on the nature of interactions on anti-deSitter space.
The causal structure of $`AdS_{1,s}`$ is determined by its metric modulo conformal transformations which preserve angles and geodesics. As a causal manifold, $`AdS_{1,s}`$ has a boundary (the “asymptotic directions” of geodesics). The boundary inherits the causal structure of the bulk space $`AdS_{1,s}`$, and the anti-deSitter group $`SO_0(2,s)`$ acts on this space. It is well known that this boundary is a compactification $`CM_{1,s-1}=(S^1\times S^{s-1})/\mathbb{Z}_2`$ of Minkowski space $`\mathbb{R}^{1,s-1}`$, and $`SO_0(2,s)`$ acts on it like the conformal group.
The notions of causal disjoint and causal complements on $`CM_{1,s-1}`$ coincide, up to conformal transformations, with those on Minkowski space . In $`d=`$1+1 dimensions, $`s=1`$, the conformal space is $`S^1`$, and the causal complement of an interval $`I`$ is $`I^c=S^1\backslash \overline{I}`$.
Both anti-deSitter space and its causal boundary have a “global time-arrow”, that is, the distinction between the future and past light-cone in the tangent spaces (which are ordinary Minkowski spaces) at each point $`x`$ can be globally chosen continuous in $`x`$ (and consistent with the reflection $`x\mapsto -x`$). The time orientation on the bulk space induces the time orientation on the boundary. The time arrow is crucial in order to distinguish representations of positive energy.
### 2.1 Proof of the Lemma
Any ordered pair of light-like vectors $`(e,f)`$ in the ambient space $`\mathbb{R}^{2,s}`$ such that $`e\cdot f<0`$ defines an open subspace of the hypersurface $`H_{1,s}`$ given by
$$\stackrel{~}{W}(e,f)=\{x\in \mathbb{R}^{2,s}:x^2=R^2,\ e\cdot x>0,\ f\cdot x>0\}.$$
This space has two connected components. Namely, the tangent vector at each point $`x\in \stackrel{~}{W}(e,f)`$ under the boost in the $`e`$-$`f`$-plane, $`\delta _{e,f}x=(f\cdot x)e-(e\cdot x)f`$, is either a future or a past directed time-like vector, since $`(\delta _{e,f}x)^2=-2(e\cdot f)(e\cdot x)(f\cdot x)>0`$. We denote by $`\stackrel{~}{W}_+(e,f)`$ and $`\stackrel{~}{W}_-(e,f)`$ the connected components of $`\stackrel{~}{W}(e,f)`$ in which $`\delta _{e,f}x`$ is future and past directed, respectively. By this definition, $`\stackrel{~}{W}_+(f,e)=\stackrel{~}{W}_-(e,f)`$, and $`\stackrel{~}{W}_+(-e,-f)=-\stackrel{~}{W}_+(e,f)`$.
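These sign relations are elementary to verify numerically. The following sketch (our own illustration, not part of the original argument) checks, for the 1+1-dimensional vectors used in the first example below, that the boost tangent vector $`\delta _{e,f}x`$ is time-like with $`(\delta _{e,f}x)^2=-2(e\cdot f)(e\cdot x)(f\cdot x)`$:

```python
import numpy as np

# ambient R^{2,s} with signature (+, -, ..., -, +); here s = 1
g = np.array([1.0, -1.0, 1.0])
dot = lambda u, v: float(np.sum(g * u * v))

e = np.array([0.0, 1.0, 1.0])             # light-like: e.e = 0
f = np.array([0.0, 1.0, -1.0])            # light-like: f.f = 0, e.f = -2 < 0
R = 1.0
x = np.array([1.5, -np.sqrt(1.25), 0.0])  # x.x = R^2, with e.x > 0 and f.x > 0

assert abs(dot(e, e)) < 1e-12 and abs(dot(f, f)) < 1e-12 and dot(e, f) < 0
assert abs(dot(x, x) - R**2) < 1e-12 and dot(e, x) > 0 and dot(f, x) > 0

dx = dot(f, x) * e - dot(e, x) * f        # tangent vector of the e-f boost
assert abs(dot(dx, dx) + 2 * dot(e, f) * dot(e, x) * dot(f, x)) < 1e-12
assert dot(dx, dx) > 0                    # time-like, as claimed
```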
The wedge regions in the hypersurface $`H_{1,s}`$ are the regions $`\stackrel{~}{W}_\pm (e,f)`$ as specified. The wedge regions in anti-deSitter space are their quotients $`W_\pm (e,f)=p\stackrel{~}{W}_\pm (e,f)`$. One has $`W_+(e,f)=W_-(f,e)=W_+(-e,-f)`$, and $`W_+(e,f)`$ and $`W_-(e,f)`$ are each other’s causal complements. For an illustration in 1+1 dimensions, cf. Figure 1.
Figure 1. Wedge regions $`\stackrel{~}{W}_+(e,f)`$ and $`\stackrel{~}{W}_-(e,f)`$ in 1+1 dimensions, and their intersections with the boundary. The light-like vectors $`e`$ and $`f`$ are tangent to $`\stackrel{~}{W}_-`$ in its apex $`x`$. In anti-deSitter space, $`\stackrel{~}{W}_-`$ is identified with $`-\stackrel{~}{W}_-`$, and $`W_\pm =p\stackrel{~}{W}_\pm `$ are causal complements of each other.
We claim that the projected wedges $`W_\pm (e,f)`$ intersect the boundary of $`AdS_{1,s}`$ in regions $`I_\pm (e,f)`$ which are double-cones of Minkowski space $`\mathbb{R}^{1,s-1}`$ or images thereof under some conformal transformation. Note that any two double-cones in $`\mathbb{R}^{1,s-1}`$ are connected by a conformal transformation, and among their conformal transforms are also the past and future light-cones and space-like wedges in $`\mathbb{R}^{1,s-1}`$.
We claim also that the causal complement $`W_-(e,f)`$ of the wedge $`W_+(e,f)`$ intersects the boundary in the causal complement $`I_-(e,f)=I_+(e,f)^c`$ of $`I_+(e,f)`$.
It would be sufficient to compute the intersections of any single pair of wedges $`\stackrel{~}{W}_\pm (e,f)`$ with the boundary and see that it is a pair of causally complementary conformal double-cones in $`CM_{1,s-1}`$, since the claim then follows for any other pair of wedges by covariance. For illustrative reasons we shall compute two such examples.
We fix the “arrow of time” by declaring the tangent vector of the rotation in the 0-$`d`$-plane, $`\delta _tx=(x_d,0,\dots ,0,-x_0)`$, to be future directed.
In stereographic coordinates $`(y_0,\stackrel{}{y},x_-)`$ of the hypersurface $`x^2=R^2`$, where $`x_-=x_d-x_s`$ and $`(y_0,\stackrel{}{y})=(x_0,\stackrel{}{x})/x_-`$, $`\stackrel{}{x}=(x_1,\dots ,x_{s-1})`$, the boundary is given by $`|x_-|=\infty `$. Thus, in the limit of infinite $`x_-`$ one obtains a chart $`y_\mu =(y_0,\stackrel{}{y})`$ of $`CM_{1,s-1}`$. The induced conformal structure is that of Minkowski space, $`dy^2=f(y)^2(dy_0^2-d\stackrel{}{y}^2)`$.
Our first example is the one underlying Figure 1: we choose $`e_\mu =(0,\dots ,0,1,1)`$ and $`f_\mu =(0,\dots ,0,1,-1)`$. The conditions for $`x\in \stackrel{~}{W}(e,f)`$ read $`x_d-x_s>0`$ and $`x_d+x_s<0`$, implying $`x_d^2-x_s^2<0`$ and hence $`x_0^2-\stackrel{}{x}^2>R^2`$. The tangent vector $`\delta _{e,f}x`$ has $`d`$-component $`\delta _{e,f}x_d=-2x_s>0`$. Hence, it is future directed if $`x_0>0`$, and past directed if $`x_0<0`$:
$$\stackrel{~}{W}_+(e,f)=\{x:x^2=R^2,\ x_s<-|x_d|,\ x_0>0\}.$$
After dividing $`(x_0,\stackrel{}{x})`$ by $`x_-=x_d-x_s\to \infty `$, we obtain the boundary region
$$I_+(e,f)=\{y=(y_0,\stackrel{}{y}):y_0^2-\stackrel{}{y}^2>0,\ y_0>0\},$$
that is, the future light-cone in Minkowski space; similarly, $`I_-(e,f)`$ is the past light-cone, and $`I_\pm (e,f)`$ are each other’s causal complements in $`CM_{1,s-1}`$.
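The claim can also be probed by random sampling. The sketch below (again our own illustration, here with $`s=2`$ and the vectors of this first example) draws points of $`\stackrel{~}{W}_+(e,f)`$ and checks that their stereographic images lie in the forward light-cone; indeed $`y_0^2-y_1^2=(R^2+x_s^2-x_d^2)/x_-^2>0`$ holds at any finite $`x_-`$, not only in the boundary limit:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0
for _ in range(1000):
    x_d = rng.uniform(-5.0, 5.0)
    x_s = -abs(x_d) - rng.uniform(0.01, 5.0)        # wedge condition x_s < -|x_d|
    x_1 = rng.uniform(-5.0, 5.0)
    x_0 = np.sqrt(R**2 + x_1**2 + x_s**2 - x_d**2)  # hypersurface, branch x_0 > 0
    x_minus = x_d - x_s                             # positive throughout this wedge
    y_0, y_1 = x_0 / x_minus, x_1 / x_minus         # stereographic boundary chart
    assert x_minus > 0 and y_0 > 0 and y_0**2 - y_1**2 > 0
```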
Next, we choose $`e_\mu =(1,1,0,\dots ,0)`$ and $`f_\mu =(-1,1,0,\dots ,0)`$. The conditions for $`x\in \stackrel{~}{W}(e,f)`$ read $`x_1<-|x_0|`$, implying $`x_0^2-\stackrel{}{x}^2<0`$ and hence $`x_d^2-x_s^2>R^2>0`$. The tangent vector $`\delta _{e,f}x`$ has $`0`$-component $`\delta _{e,f}x_0=-2x_1>0`$. Hence, it is future directed if $`x_d<0`$ hence $`x_d-x_s<0`$, and past directed if $`x_d>0`$ hence $`x_d-x_s>0`$:
$$\stackrel{~}{W}_+(e,f)=\{x:x^2=R^2,\ x_1<-|x_0|,\ x_-<0\}.$$
After dividing $`(x_0,\stackrel{}{x})`$ by $`x_-=x_d-x_s\to -\infty `$, we obtain the boundary region
$$I_+(e,f)=\{y=(y_0,\stackrel{}{y}):y_1>|y_0|\},$$
that is, a space-like wedge region in Minkowski space; similarly, $`I_-(e,f)`$ is the opposite wedge $`y_1<-|y_0|`$, which is again the causal complement of $`I_+(e,f)`$ in $`CM_{1,s-1}`$.
Both light-cones and wedge regions in Minkowski space are well known to be conformal transforms of double-cones, and hence they are double-cones on $`CM_{1,s1}`$. The pairs of regions computed above are indeed causally complementary pairs.
We now consider the map $`\alpha :W_+(e,f)\mapsto I_+(e,f)`$. Since the action of the conformal group on the boundary is induced by the action of the anti-deSitter group on the bulk, we see that $`\stackrel{~}{W}_\pm (ge,gf)=g(\stackrel{~}{W}_\pm (e,f))`$ and $`I_\pm (ge,gf)=\dot{g}(I_\pm (e,f))`$, hence $`\alpha `$ intertwines the actions of the anti-deSitter and the conformal group. It is clear that $`\alpha `$ preserves inclusions, and we have seen that it preserves causal complements for one, and hence for all wedges. Since $`SO_0(2,s)`$ acts transitively on the set of double-cones of $`CM_{1,s-1}`$, the map $`\alpha `$ is surjective. Finally, since $`W_+(e,f)`$ and $`I_+(e,f)`$ have the same stabilizer subgroup of $`SO_0(2,s)`$, it is also injective.
This completes the proof of the Lemma. $`\square `$
### 2.2 Proof of the Corollaries
We identify wedge algebras on $`AdS_{1,s}`$ and double-cone algebras on $`CM_{1,s1}`$ by
$$B(W_\pm (e,f))=A(I_\pm (e,f)),$$
that is, $`B(W)=A(\alpha (W))`$. The Lemma implies that if $`A`$ is given as an isotonous, causal and conformally covariant net of algebras on $`CM_{1,s-1}`$, then the algebras $`B(W)`$ defined by this identification constitute an isotonous, causal and anti-deSitter covariant net of algebras on $`AdS_{1,s}`$, and vice versa. Namely, the identification is just a relabelling of the index set of the net which preserves inclusions and causal complements and intertwines the action of $`SO_0(2,s)`$. Thus we have established Corollary 1. $`\square `$
As for Corollary 2, we note that, as the algebras are identified, states and representations of the nets $`A`$ and $`B`$ are also identified.
Since the identification intertwines the action of the anti-deSitter group and of the conformal group, an anti-deSitter invariant state on $`B`$ corresponds to a conformally invariant state on $`A`$. The generator of time translations in the anti-deSitter group corresponds to the generator $`\frac{1}{2}(P^0+K^0)`$ in the conformal group which is known to be positive if and only if $`P^0`$ is positive (note that $`K^0`$ is conformally conjugate to $`P^0`$). Hence the conditions for positivity of the respective generators of time-translations are equivalent.
By the identification of states and algebras, also the modular groups are identified. The modular group and modular conjugation for double-cone algebras in a vacuum state of conformally covariant quantum field theories are conformally conjugate to the modular group and modular conjugation of a Minkowski space wedge algebra, which in turn are given by the Lorentz boosts in the wedge direction and the reflection along the ridge of the wedge . It follows that the modular group for a wedge algebra on anti-deSitter space is given by the corresponding subgroup of the anti-deSitter group which preserves the wedge (for a wedge $`W_+(e,f)`$, this is the subgroup of boosts in the $`e`$-$`f`$-plane), and the modular conjugation is a CPT transformation which maps $`W_+`$ onto $`W_-`$.
These remarks suffice to complete the proof of Corollary 2. $`\square `$
Let us mention that the correspondence given in Corollary 1 holds also for “weakly local” nets both on the bulk and on the boundary. In a weakly local net, the vacuum expectation value of the commutator of two causally disjoint observables vanishes, but not necessarily the commutator itself. Weak locality for quantum field theories on anti-deSitter space follows from very conservative assumptions on the vacuum state without any commutation relations assumed. Thus, also the boundary theory will always be weakly local.
### 2.3 Compact localization in anti-deSitter space
Let us first note that as the ridge of a wedge is shifted into the interior of the wedge, the double-cone on the boundary shrinks. Thus, sharply localized boundary observables correspond to bulk observables at space-like infinity . We now show that sharply localized bulk observables do not correspond to a simple geometry on the boundary, but must be determined algebraically.
An observable localized in a double-cone $`O`$ of anti-deSitter space must be contained in every wedge algebra $`B(W)`$ such that $`O\subset W`$. The algebra $`B(O)`$ is thus at most the intersection of all $`B(W)`$ such that $`O\subset W`$. We may define it as this intersection, thereby ensuring isotony, causal commutativity and covariance for the net of double-cone algebras in an obvious manner.
Double-cone algebras on anti-deSitter space are thus delicate intersections of algebras of double-cones and their conformal images on the boundary, and might turn out trivial. In 1+1 dimensions, the geometry is particularly simple since a double-cone is an intersection of only two wedges. We show that the corresponding intersection of algebras is non-trivial, and shall turn to $`d>1+1`$ below.
Let us write (in 1+1 dimensions) the relation
$$B(O)=B(W_1)\cap B(W_2)=A(I_1)\cap A(I_2)\qquad \text{whenever}\qquad O=W_1\cap W_2,$$
where $`W_i`$ are any pair of wedge regions in $`AdS_{1,1}`$ and $`I_i=\alpha (W_i)`$ their intersections with the boundary, that is, open intervals on $`S^1`$.
The intersection $`W_1\cap W_2`$ might not be a double-cone. It might be empty, or it might be another wedge region. Before discussing the above relation as a definition for the double-cone algebra $`B(O)`$ if $`O=W_1\cap W_2`$ is a double-cone, we shall first convince ourselves that it is consistent also in these other cases.
If $`W_1`$ contains $`W_2`$, or vice versa, then $`O`$ equals the smaller wedge, and the relation holds by isotony. If $`W_1`$ and $`W_2`$ are disjoint, then the intersections with the boundary are also disjoint, and $`B(\emptyset )=A(I_1)\cap A(I_2)`$ is trivial if the boundary net $`A`$ on $`S^1`$ is irreducible (that is, disjoint intervals have no nontrivial observables in common).
Next, it might happen that $`W_1`$ and $`W_2`$ have a nontrivial intersection without the apex of one wedge lying inside the other wedge. In this case, the intersection is again a wedge, say $`W_3`$. Namely, any wedge in $`AdS_{1,1}`$ is of the form $`W_+(e,f)`$ where $`e`$ and $`f`$ are a future and a past directed light-like tangent vector in the apex $`x`$ (the unique point in $`AdS_{1,1}`$ solving $`e\cdot x=f\cdot x=0`$). The condition $`e\cdot f<0`$ implies that both tangent vectors point in the same (positive or negative) 1-direction. The wedge itself is the surface between the two light-rays emanating from $`x`$ in the directions $`e`$ and $`f`$ (cf. Figure 1). The present situation arises if the future directed light-ray of $`W_1`$ intersects the past directed light-ray of $`W_2`$ (or vice versa) in a point $`x_3`$ without the other pair of light-rays intersecting each other. The intersection of the two wedges is the surface between the two intersecting light-rays travelling on from the point $`x_3`$, which is another wedge region $`W_3`$ with apex $`x_3`$. It follows that the intersection of the intersections $`I_i`$ of $`W_i`$ with the boundary equals the intersection $`I_3`$ of $`W_3`$ with the boundary. Hence consistency of the above relation is guaranteed by $`A(I_1)\cap A(I_2)=A(I_3)`$ where $`I_1`$ and $`I_2`$ are two intervals on $`S^1`$ whose intersection $`I_3`$ is again an interval.
Now we come to the case that $`W_1W_2`$ is a double-cone $`O`$ in the proper sense. This is the case if the closure of the causal complement of $`W_1`$ is contained in $`W_2`$. It follows that the closure of the causal complement of $`I_1`$ is contained in $`I_2`$, hence the intersection of $`I_1`$ and $`I_2`$ is the union of two disjoint intervals $`J_1`$ and $`J_2`$. The latter are the two light-like geodesic “shadows”, cast by $`O`$ onto the boundary.
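The case distinction above — disjoint wedges, a smaller wedge, or a proper double-cone with two “shadow” intervals — reduces to interval combinatorics on $`S^1`$. The sketch below (a hypothetical illustration of ours, with arbitrarily chosen endpoints) counts the connected components of $`I_1\cap I_2`$ by dense sampling and reproduces the three cases:

```python
import numpy as np

TAU = 2 * np.pi

def in_arc(u, arc):
    """Is the boundary point u inside the open arc (a, b), running counterclockwise?"""
    a, b = arc
    return 0 < (u - a) % TAU < (b - a) % TAU

def n_components(arc1, arc2, n=7200):
    """Number of connected components of arc1 ∩ arc2 on the circle."""
    u = np.linspace(0.0, TAU, n, endpoint=False)
    inside = np.array([in_arc(x, arc1) and in_arc(x, arc2) for x in u])
    return int(np.sum(inside & ~np.roll(inside, 1)))  # False -> True transitions

print(n_components((0.0, 1.0), (2.0, 3.0)))  # 0: disjoint wedges, the algebra is trivial
print(n_components((0.0, 4.0), (3.0, 5.0)))  # 1: the intersection is another wedge
print(n_components((0.0, 4.0), (3.0, 1.0)))  # 2: a double-cone with shadows J1, J2
```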
Thus, the observables localized in a double-cone in anti-deSitter space $`AdS_{1,1}`$ are given by the intersection of two interval algebras $`A(I_i)`$ on the boundary for intervals $`I_i`$ with disconnected intersections (or equivalently, by essential duality, the joint commutant of two interval algebras for disjoint intervals). Such algebras have received much attention in the literature , notably within the context of superselection sectors. Namely, if $`I_1\cap I_2=J_1\cup J_2`$ consists of two disjoint intervals, then the intersection of algebras $`A(I_1)\cap A(I_2)`$ is larger than the algebra $`A(J_1)\vee A(J_2)`$. The excess can be attributed to the existence of superselection sectors , the extra operators being intertwiners which transport a superselection charge from one of the intervals $`J_i`$ to the other.
We conclude that (certain) compactly localized observables on anti-deSitter space are strongly delocalized observables (charge transporters) of the boundary theory. Yet there is no obstruction against both theories being Wightman theories generated by local Wightman fields, as the following simple example shows.
In suitable coordinates $`x_\mu =R(\mathrm{cos}t,\mathrm{cos}x,\mathrm{sin}t)/\mathrm{sin}x`$, the bulk is the strip $`(t,x)\in \mathbb{R}\times (0,\pi )`$ with points $`(t,x)\sim (t+\pi ,\pi -x)`$ identified, while the boundary consists of the points with $`x=0`$, parametrized by $`u=t`$ mod $`2\pi `$. The metric is a multiple of $`dt^2-dx^2`$, thus the light rays emanating from the bulk point $`(t,x)`$ hit the boundary at the points $`u_\pm =t\pm x\text{ mod }2\pi `$. We see that, as the double-cone $`O`$ shrinks to a point $`(t,x)`$ in bulk, the two intervals $`J_i`$ on the boundary also shrink to points (namely $`u_\pm `$) while their distance remains finite.
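As a consistency check on these coordinates (our own sketch, with $`R=1`$), one can verify numerically that the parametrization lands on the hypersurface $`x_0^2-x_1^2+x_d^2=R^2`$ and that the identification $`(t,x)\sim (t+\pi ,\pi -x)`$ is precisely the ambient reflection $`X\mapsto -X`$ introduced above:

```python
import numpy as np

R = 1.0
def embed(t, x):
    # strip coordinates of AdS_{1,1}: (x_0, x_1, x_d) = R (cos t, cos x, sin t) / sin x
    return R * np.array([np.cos(t), np.cos(x), np.sin(t)]) / np.sin(x)

for t, x in [(0.3, 1.0), (2.0, 0.4), (-1.2, 2.9)]:
    X = embed(t, x)
    assert abs(X[0]**2 - X[1]**2 + X[2]**2 - R**2) < 1e-12   # hypersurface condition
    assert np.allclose(embed(t + np.pi, np.pi - x), -X)      # the Z_2 identification
```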
Now, we consider the abelian current field $`j(u)`$ on the boundary, and determine the associated fields on anti-deSitter space. First, for $`(t,x)`$ in the strip, both $`j(t\pm x)`$ are localized at $`(t,x)`$ and give rise to a conserved vector current $`j^\mu `$ with components $`j^0(t,x)=j(t+x)+j(t-x)`$, $`j^1(t,x)=j(t+x)-j(t-x)`$. Furthermore, the fields $`\varphi _\alpha (t,x)=\mathrm{exp}\,i\alpha \int _{t-x}^{t+x}j(u)\,du`$ (suitably regularized, of course), $`\alpha \in \mathbb{R}`$, are also localized at $`(t,x)`$. Namely, since the charge operator $`\oint _{S^1}j(u)\,du`$ is a number $`q`$ in each irreducible representation, $`\varphi _\alpha (t,x)`$ may as well be represented as $`e^{i\alpha q}\mathrm{exp}\left(-i\alpha \int _{t+x-2\pi }^{t-x}j(u)\,du\right)`$ and hence is localized in both complementary boundary intervals $`[t-x,t+x]`$ and $`[t+x-2\pi ,t-x]`$ which overlap in the points $`u_+`$ and $`u_-`$, as required.
Indeed, the fields $`\varphi _\alpha `$ can be obtained from bounded Weyl operators with finite localization as follows. $`A(I)`$ is generated by boundary observables of the Weyl form $`W(f)=\mathrm{exp}ij(f)`$ where $`f`$ is a smearing function on $`S^1`$ which is constant outside the interval $`I`$. Adding a constant to $`f`$ is immaterial for the localization since the commutation relations are given by the symplectic form $`\int f^{\prime }g\,du`$. A Weyl operator $`W(f)`$ is localized in both intervals $`I_1`$, $`I_2`$ (notation as before) if $`f`$ has constant values on both gaps between $`J_1`$, $`J_2`$, but it is not a product of Weyl operators in $`J_1`$ and in $`J_2`$ whenever these values are different. As a bulk observable, $`W(f)`$ is localized in the double-cone $`O=W_1\cap W_2`$, and operators of this form generate $`B(O)`$. Suitably regularized limits of $`W(f)`$ yield the point-like local fields $`\varphi _\alpha (t,x)`$.
For the more expert reader, we mention that our identification of double-cone algebras in bulk with two-interval algebras on the boundary also shows how the notorious difficulty to compute the modular group for two-interval algebras is related to the difficulty to compute the modular group of double-cone algebras in massive theories. (We discuss below that in a scaling limit the massive anti-deSitter theory approaches a conformal flat space theory. In this limit, the modular group can again be computed.)
We now prove the Proposition of Sect. 1. It asserts that the algebras $`B(O_n)`$ generate $`B(W)`$ whenever a family of double-cones $`O_nW`$ covers the wedge $`WAdS_{1,1}`$.
Each $`B(O_n)`$ is of the form $`A(I_{n1})\cap A(I_{n2})`$ where $`I_{n1}\subset I=\alpha (W)`$ and $`I_{n1}\cap I_{n2}=J_{n1}\cup J_{n2}`$ is a union of two disjoint intervals. By definition, the assertion is equivalent to
$$A(I)=\bigvee _n\left(A(I_{n1})\cap A(I_{n2})\right),$$
where the inclusion “$`\supset `$” holds since each $`A(I_{n1})`$ is contained in $`A(I)`$. On the other hand, the algebras on the right hand side are larger than $`A(J_{n1})\vee A(J_{n2})`$. If $`O_n`$ cover the wedge $`W`$, then the intervals $`J_{n1}`$ and $`J_{n2}`$, as $`n`$ runs, cover the interval $`I=\alpha (W)`$. So the claim follows from weak additivity of the boundary theory. $`\square `$
In $`d\ge 2+1`$ dimensions, the situation is drastically different. Namely, if a family of small boundary double-cones $`I_i`$ covers the space-like basis of a large double-cone $`I`$, and $`W_i`$ and $`W`$ denote the associated anti-deSitter wedge regions, then – unlike in 1+1 dimensions – $`W`$ will contain a bulk double-cone $`O`$ which is space-like to all $`W_i`$. Consequently, $`B(O)\subset B(W)=A(I)`$ must commute with the algebra $`\bigvee _iA(I_i)`$ generated by all $`B(W_i)=A(I_i)`$. But in theories based on gauge-invariant Wightman fields (with the localization of operators determined in terms of smearing functions), the latter algebra coincides with $`A(\bigcup _iI_i)`$. This algebra in turn coincides with $`A(I)`$ whenever the dynamics is generated by a Hamiltonian which is an integral over a local density, because then the observables in a neighbourhood of the space-like basis of $`I`$ determine the observables in all of $`I`$. Thus $`B(O)`$ must belong to the center of $`A(I)`$ which is commutative (classical). Hence, a Wightman boundary theory is associated with a bulk theory without compactly localized quantum observables.
Conversely, if there are double-cone localized bulk observables (e.g., if the bulk theory is itself described by a Wightman field ), then the nontriviality of $`B(O)`$ requires $`A(\bigcup _iI_i)=A(I)`$ to be strictly larger than $`\bigvee _iA(I_i)`$. This violation of additivity seems to be characteristic of non-abelian gauge theories where Wilson loop operators are not generated by point-like gauge invariant fields (cf. also the discussion in ).
These issues certainly deserve a more detailed and careful analysis. For the moment, we conclude that the holographic correspondence necessarily relates, in more than 1+1 dimensions, Wightman type boundary theories to bulk theories without compactly localized observables (topological theories), in agreement with a remark on Chern-Simons theories in , and, conversely, bulk theories with point-like fields to boundary theories which share properties of non-abelian gauge theories, in agreement with the occurrence of Yang-Mills theory in .
## 3 Speculations
It is an interesting side-aspect of the last remark in the previous section that the holographic correspondence in both directions relates gauge theories to Wightman theories. It might therefore provide a new constructive scheme giving access to gauge theories.
If one is interested in quantum field theories on Minkowski rather than anti-deSitter space, one may consider the flat space limit in which the curvature radius $`R`$ of anti-deSitter space tends to infinity, or equivalently consider a region of anti-deSitter space which is much smaller than the curvature radius. The regime $`|x|\ll R`$ asymptotically becomes flat Minkowski space, and the anti-deSitter group contracts to the Poincaré group. Thus, one obtains a Minkowski space theory on $`\mathbb{R}^{1,s}`$ from a conformal theory on $`CM_{1,s-1}`$ through a scaling limit of the associated theory on $`AdS_{1,s}`$.
For $`d=1+1`$, this can be done quite explicitly. The double-cone algebras $`B(O)`$ are certain extensions of the algebras $`A(J_1)\vee A(J_2)`$, as discussed before. Now in the flat regime the intervals $`J_i`$ become small of order $`|O|/R`$. Thus for a substantial portion of Minkowski space, the relevant intervals $`J_i`$ are all contained in a suitable but fixed pair of non-overlapping intervals $`K_i`$. Let us now assume that the conformal net has the split property (an algebraic property valid in any chiral quantum field theory for which Tr exp$`(-\beta L_0)`$ exists), which ensures that states can be independently prepared on causally disjoint regions with a finite distance . Then $`A(K_1)\vee A(K_2)`$ is unitarily isomorphic to $`A(K_1)\otimes A(K_2)`$, and the isomorphism is inherited by all its subalgebras $`A(J_1)\vee A(J_2)\cong A(J_1)\otimes A(J_2)`$. Under this isomorphism, the larger algebra $`B(O)`$ is identified with the standard construction of 1+1-dimensional conformal Minkowski space observables from a given chiral conformal net (which corresponds to the diagonal modular invariant and is sometimes quoted as the Longo-Rehren net): $`B(O)\cong B_{\mathrm{LR}}(J_1\times J_2)`$ if $`O`$ corresponds to $`I_1\cap I_2=J_1\cup J_2\subset K_1\cup K_2`$. The unitary isomorphism of algebras, however, does not take the vacuum state on $`B`$ to the vacuum state on the LR net.
Thus, the flat space limit of the anti-deSitter space theory in 1+1 dimensions associated with a given chiral conformal theory, is given by the LR net associated with that same chiral theory. Note that the LR net has 1+1-dimensional conformal symmetry, but of course the anti-deSitter net is not conformally invariant due to the presence of the curvature scale $`R`$.
It would be interesting to get an analogous understanding of the flat space limit of the anti-deSitter space theory in higher dimensions in terms of the associated conformal theory.
One might speculate whether one can “iterate holography”, and use the flat space limit of the bulk theory on $`AdS_{1,s}`$ as a boundary input for a new bulk theory on $`AdS_{1,s+1}`$. Here, however, a warning is in order. Namely, the limiting flat space theory on Minkowski space $`^{1,s}`$ will, like the LR net, in general not be extendible to the conformal compactification $`CM_{1,s}`$ but rather to a covering thereof. One might therefore endeavour to extend the present analysis to theories on covering spaces both of anti-deSitter space and of its boundary.
There is an independent and physically motivated reason to study quantum field theories on a covering of anti-deSitter space. Namely, it has been observed (see above, ) that the local commutativity for causally disjoint but not causally disconnected observables leads to severe constraints on the possible interactions on anti-deSitter space proper. These constraints will disappear on the universal covering space.
This “anti-deSitter causality paradox” parallels very much the old “conformal causality paradox” that causal commutativity on $`CM_{1,s}`$ proper excludes most conformal theories of interest; it was solved by the recognition that conformal fields naturally live on a covering space. Holography tells us that both problems are the two sides of the same coin.
Extending the present analysis to covering spaces seems a dubious task for $`d=1+1`$ since the boundary of the covering of two-dimensional anti-deSitter space has two connected components. In higher dimensions, however, we do not expect serious obstacles.
Acknowledgment
Previous versions of this paper have been improved in several respects on the basis of discussions with D. Buchholz, R. Verch (who also made the Figure), B. Schroer, R. Helling and many others, as well as questions raised by the referees. My thanks are due to all of them.
# Critical Behavior of Anisotropic Heisenberg Mixed-Spin Chains in a Field
## Abstract
We numerically investigate the critical behavior of the spin-$`(1,\frac{1}{2})`$ Heisenberg ferrimagnet with anisotropic exchange coupling in a magnetic field. A quantized magnetization plateau as a function of the field, appearing at a third of the saturated magnetization, is stable over the whole antiferromagnetic coupling region. The plateau vanishes in the ferromagnetic coupling region via the Kosterlitz-Thouless transition. Comparing the quantum and classical magnetization curves, we elucidate what the essential quantum effects are.
PACS numbers: 75.10.Jm, 75.40Mg, 75.50.Gg, 75.40.Cx
Quantized plateaux in magnetization curves as functions of a magnetic field for spin chains have been attracting much current interest. The trimerized spin-$`\frac{1}{2}`$ chain exhibits a massive phase at $`m/m_{\mathrm{sat}}=\frac{1}{3}`$ , while the dimerized spin-$`1`$ chain at $`m/m_{\mathrm{sat}}=\frac{1}{2}`$ , where $`m`$ is the magnetization per unit period and $`m_{\mathrm{sat}}`$ is its saturation value. The presence of finite gap and plateau has further been discussed and actually been observed for various polymerized spin chains and ladders. It may be the Lieb-Schultz-Mattis theorem and its generalization in recent years that motivate such vigorous arguments. Oshikawa, Yamanaka, and Affleck (OYA) pointed out that quantized plateaux in magnetization curves may appear satisfying the condition
$$\stackrel{~}{S}-m=\text{integer},$$
(1)
where $`\stackrel{~}{S}`$ is the sum of spins over all sites in the unit period.
The OYA argument stimulates us to study quantum mixed-spin chains as well. An arbitrary alignment of alternating spins $`S`$ and $`s`$ in a magnetic field, which is described by the Hamiltonian
$$\mathcal{H}=\sum _{j=1}^{N}\left[(𝑺_j𝒔_j)_\alpha +(𝒔_j𝑺_{j+1})_\alpha -H(S_j^z+s_j^z)\right],$$
(2)
with $`(𝑺𝒔)_\alpha =S^xs^x+S^ys^y+\alpha S^zs^z`$, shows ferrimagnetism , instead of antiferromagnetism, and is another current topic from both theoretical and experimental points of view. As $`H`$ increases from zero to the saturation field
$$H_{\mathrm{sat}}=\alpha (S+s)+\sqrt{\alpha ^2(S-s)^2+4Ss},$$
(3)
the OYA criterion (1) allows us to expect quantized plateaux at $`m=S+s-1`$, $`S+s-2`$, $`\dots `$, $`1`$ (or $`\frac{1}{2}`$). Since the low-energy physics of the model (2) is qualitatively the same regardless of $`S`$ and $`s`$ as long as $`S\ne s`$, here, let us consider the simplest case $`(S,s)=(1,\frac{1}{2})`$. Then a plateau may appear at $`m=\frac{1}{2}`$. At the Heisenberg point, the ground state of the Hamiltonian (2) without field is a multiplet of spin $`(S-s)N`$ and thus has elementary excitations of two distinct types . The ferromagnetic excitations, reducing the ground-state magnetization, exhibit a gapless dispersion relation, whereas the antiferromagnetic ones, enhancing the ground-state magnetization, are gapped from the ground state. Therefore, at the isotropic point, $`m`$ as a function of $`H`$ should jump up to $`\frac{1}{2}`$ just as the field is applied and remain unchanged until the field reaches the antiferromagnetic excitation gap $`1.759`$ .
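As a numerical aside (our own sketch, not from the original analysis), Eq. (3) is easily tabulated; for $`(S,s)=(1,\frac{1}{2})`$ it gives the familiar value $`H_{\mathrm{sat}}=3`$ at the Heisenberg point:

```python
import numpy as np

def h_sat(alpha, S=1.0, s=0.5):
    """Saturation field of Eq. (3) for the anisotropic mixed-spin chain."""
    return alpha * (S + s) + np.sqrt(alpha**2 * (S - s)**2 + 4 * S * s)

for a in (0.0, 1.0, 2.0):
    print(a, h_sat(a))   # 1.414..., 3.0, 4.732...
```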
Once we turn on the exchange-coupling anisotropy, the plateau is not so trivial any more. We show in Fig. 1 the zero-temperature magnetization curves of the anisotropic chains, which have been calculated by the numerical diagonalization technique combined with a finite-size scaling analysis . In order to verify the reliability of our scaling analysis, which is briefly explained later, we have carried out quantum Monte Carlo (QMC) calculations as well at $`N=32`$ and $`N=16`$, where, due to the small correlation length of the system, the data show no size dependence beyond the numerical uncertainty. Although the QMC findings are obtained at a sufficiently low but finite temperature, they fully suggest that the present diagonalization-based calculations well describe the thermodynamic-limit properties. As the model approaches the Ising limit ($`\alpha \to \infty `$), the plateau monotonically grows and ends up with a stepwise magnetization curve. On the other hand, the introduction of the $`XY`$-like coupling anisotropy reduces the plateau. Thus we take great interest in where and how the plateau vanishes. Alcaraz and Malvezzi showed that the ground state of the model (2) without field is in the critical phase over the whole region $`-1\le \alpha <1`$. At the Heisenberg point, the model still lies in the massless phase but the low-energy dispersion as a function of momentum is quadratic . For $`\alpha >1`$, the model is in the massive phase and its low-energy structure is well understood by the spin-wave dispersions
$$\omega _k^{\mp }=\sqrt{\alpha ^2(S+s)^2-4Ss\mathrm{cos}^2\frac{k}{2}}\mp \alpha (S-s),$$
(4)
which describe the sector of the magnetization $`\sum _j(S_j^z+s_j^z)\equiv M<(S-s)N`$ and that of $`M>(S-s)N`$, respectively. Thus the introduction of the anisotropy essentially changes the nature of the model (2) and fascinating physics must lie especially in the $`XY`$-like coupling region. In the present article, we clarify how the quantized plateau of the $`(1,\frac{1}{2})`$ model behaves as a function of the anisotropy and aim to reveal critical phenomena inherent in quantum ferrimagnets.
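For orientation, the two spin-wave branches of Eq. (4) can be evaluated directly (a sketch of ours; note that the square root is real at $`k=0`$ only for $`\alpha \ge 2\sqrt{Ss}/(S+s)\simeq 0.943`$, a threshold which reappears below in the classical analysis). At the Heisenberg point the linear spin-wave gap of the antiferromagnetic branch is $`2(S-s)=1`$, underestimating the numerical value $`1.759`$:

```python
import numpy as np

def omega(k, alpha, S=1.0, s=0.5):
    """Spin-wave branches of Eq. (4): (omega_minus, omega_plus) at momentum k."""
    root = np.sqrt(alpha**2 * (S + s)**2 - 4 * S * s * np.cos(k / 2)**2)
    return root - alpha * (S - s), root + alpha * (S - s)

k = np.linspace(0.0, np.pi, 5)
lo, hi = omega(k, alpha=1.0)
print(lo[0], hi[0])   # 0.0 and 1.0: gapless ferromagnetic, gapped antiferromagnetic
```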
In order to investigate the quantum critical behavior, we carry out a scaling analysis on the numerically calculated energy spectra of finite clusters up to $`N=12`$. Let $`E(N,M)`$ denote the lowest energy in the subspace with a fixed magnetization $`M`$ for the Hamiltonian (2) without the Zeeman term. The upper and lower bounds of the field which induces the ground-state magnetization $`M`$ are, respectively, given by
$`H_+(N,M)=E(N,M+1)-E(N,M),`$ (5)
$`H_-(N,M)=E(N,M)-E(N,M-1).`$ (6)
If the system is massive at the sector labeled $`M`$, $`H_\pm (N,M)`$ should exhibit exponential size corrections and result in different thermodynamic-limit values $`H_\pm (m)`$, which can precisely be estimated through the Shanks extrapolation . For the critical system, on the other hand, $`H_\pm (N,M)`$ are expected to converge to the same value as
$$H_\pm (N,M)\simeq H(m)\pm \frac{\pi v_\mathrm{s}\eta }{N},$$
(7)
where $`v_\mathrm{s}`$ is the sound velocity and $`\eta `$ is the critical index defined as $`\langle \sigma _0^+\sigma _r^-\rangle \sim (-1)^rr^{-\eta }`$ for the relevant spin operator $`\sigma `$, which may here be an effective combination of $`𝑺`$ and $`𝒔`$.
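The Shanks transformation referred to above eliminates a single geometric (exponential) correction exactly; a minimal sketch of ours, with synthetic data standing in for the finite-size values $`H_\pm (N,M)`$:

```python
def shanks(seq):
    """One Shanks step: removes a geometric transient A_N = A + c * q^N exactly."""
    return [(seq[i + 1] * seq[i - 1] - seq[i]**2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# synthetic finite-size data with exponential corrections; the limit is 1.0
A = [1.0 + 0.5 * 0.6**N for N in range(2, 9)]
print(shanks(A))   # every entry collapses to 1.0 up to rounding
```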
Figure 1 was thus obtained, where we smoothly interpolated the raw data $`H(m)`$. The system is trivially gapless at all the sectors of $`M`$ for $`\alpha \le -1`$ and should therefore encounter a massive–massless phase transition in the $`XY`$-like coupling region. Now we present in Fig. 2 the magnetization curves at $`\alpha \le 0`$ so as to detect the transition. Surprisingly, the plateau still exists at the $`XY`$ point ($`\alpha =0`$) and the transition occurs in the ferromagnetic-coupling region. With the naive idea of relating the massive state to the staggered Néel-like order in the direction of the external field, we could never understand why the plateau is so stable against the $`XY`$-like anisotropy.
The plateau length $`\mathrm{\Delta }_N=H_+(N,M)-H_-(N,M)`$ is a relevant order parameter to detect the phase boundary. When the system is critical, $`\mathrm{\Delta }_N`$ should be proportional to $`1/N`$ because of the scaling relation (7). We plot in Fig. 3(a) the scaled quantity $`N\mathrm{\Delta }_N`$ as a function of $`\alpha `$. $`N\mathrm{\Delta }_N`$ is almost independent of $`N`$ in a finite range of $`\alpha `$, rather than at a point, which exhibits an aspect of the Kosterlitz–Thouless (KT) transition . It is likely that the $`XY`$-like anisotropy induces the KT transition followed by the gapless spin-fluid phase whose spin correlation shows a power-law decay. Let us evaluate the central charge $`c`$ of the critical phase, which is expected to be unity. The asymptotic form of the ground-state energy
$$\frac{E(N,M)}{N}\simeq \epsilon (m)-\frac{\pi cv_\mathrm{s}}{6N^2},$$
(8)
allows us to extract $`c`$ from the finite-cluster energy spectrum provided $`v_\mathrm{s}`$ is given. Here we calculate $`v_\mathrm{s}`$ as
$$v_\mathrm{s}=\frac{N}{2\pi }\left[E_{k_1}(N,M)-E(N,M)\right],$$
(9)
where $`k_1=2\pi /N`$ and $`E_k(N,M)`$ is the lowest energy in the subspace specified by the momentum $`k`$ as well as by the magnetization $`M`$. The size correction for the formula (9) is of order $`O(1/N^2)`$, which is essentially negligible in the present system. In Fig. 3(b) we plot $`c`$ versus $`\alpha `$ and find that $`c`$ approaches unity as the system goes toward the critical region. We further investigate the critical exponent $`\eta `$ so as to verify the KT universality and to specify the phase boundary. In the critical region the asymptotic formula $`\mathrm{\Delta }_N\simeq 2\pi v_\mathrm{s}\eta /N`$ enables us to estimate $`\eta `$. Since the KT transition holds $`\eta =\frac{1}{4}`$ at the phase boundary, we can evaluate the transition point $`\alpha _\mathrm{c}`$ from $`\eta `$ as a function of $`\alpha `$. Figure 3(b) claims that $`\alpha _\mathrm{c}=-0.41\pm 0.01`$, where $`c=1.00\pm 0.01`$. The phenomenological renormalization-group (PRG) technique is another numerical tool to determine the phase boundary. Taking $`\mathrm{\Delta }_N`$ as the order parameter, we extract the size-dependent fixed point $`\alpha _\mathrm{c}(N,N+2)`$ from the PRG equation
$$(N+2)\mathrm{\Delta }_{N+2}(\alpha )=N\mathrm{\Delta }_N(\alpha ).$$
(10)
In Fig. 4 we plot $`\alpha _\mathrm{c}(N,N+2)`$ as a function of $`1/(N+1)`$, which is linearly extrapolated to $`\alpha _\mathrm{c}=-0.57`$. The PRG estimate is somewhat discrepant from the above-obtained phase boundary. Here we should be reminded of Nomura-Okamoto’s enlightening analysis on usage of the PRG method. The PRG equation applied to a gapful-gapful phase transition yields a reliable estimate of the critical point, whereas, for a transition of KT type, the PRG estimate is quite likely to miss the exact solution due to the incidental logarithmic size correction, encroaching upon the KT-phase region. Considering the limited availability of the PRG analysis, we may recognize the present PRG solution as the lower boundary of $`\alpha _\mathrm{c}`$.
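To make the PRG procedure of Eq. (10) concrete, the sketch below applies it to the transverse-field Ising chain, whose critical point $`h_\mathrm{c}=1`$ is known exactly; this toy model merely stands in for the mixed-spin chain, whose Hilbert space is too large for a few-line example, and its gap plays the role of $`\mathrm{\Delta }_N`$:

```python
import numpy as np
from scipy.optimize import brentq

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op(single, site, N):
    """Embed a single-site operator at `site` into the N-site tensor product."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, single if i == site else np.eye(2))
    return out

def gap(N, h):
    """Gap of the periodic transverse-field Ising chain H = -sum sz.sz - h sum sx."""
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= op(sz, i, N) @ op(sz, (i + 1) % N, N) + h * op(sx, i, N)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

def prg_fixed_point(N):
    # solve the PRG equation (N+2) * Delta_{N+2}(h) = N * Delta_N(h) for h
    return brentq(lambda h: (N + 2) * gap(N + 2, h) - N * gap(N, h), 0.5, 1.5)

for N in (4, 6):
    print(N, prg_fixed_point(N))   # crossings drift toward the exact h_c = 1
```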
In order to elucidate how far from intuitive the present observation is, we compare it with the classical behavior. Let us consider the classical version of the Hamiltonian (2), where $`𝑺_j`$ and $`𝒔_j`$ are classical vectors of magnitude $`1`$ and $`\frac{1}{2}`$, respectively. We show in Fig. 5 the classical magnetization curves and learn both similarity and difference between the quantum and classical behaviors. In the ferromagnetic and Ising-like antiferromagnetic exchange-coupling regions they are quite alike, which is convincing in that quantum effects are supposed to be less significant in both the regions. However, the quantum behavior is qualitatively different from the classical one in the $`XY`$-like coupling region. The classical state of $`M=N/2`$, which is stable enough to form a plateau in the Ising-like coupling region, is no more massive at $`\alpha \lesssim 0.943`$. The spin configuration as a function of the field, revealed in Fig. 6, is suggestive in understanding the prompt collapse of the classical plateau with the increase of the $`XY`$-like anisotropy. In the classical case, the spin configuration in the massive state of $`M=N/2`$ is stuck to $`(S_j^z,s_j^z)=(1,-\frac{1}{2})`$. In other words, the plateau can not appear unless the configuration $`(S_j^z,s_j^z)=(1,-\frac{1}{2})`$ is realized. This is not the case for the quantum system. The quantum spin configuration in the massive state of $`M=N/2`$ generally depends on $`\alpha `$ and exhibits a quantum reduction from the classical Néel-like state. At the Heisenberg point, for example, the quantum averages of the sublattice magnetizations per unit cell in the massive state are estimated to be $`0.793`$ and $`-0.293`$, respectively. It must be the quantum spin reduction that makes the massive state tough against the $`XY`$-like anisotropy.
In sum the quantum mixed-spin Heisenberg model (2) with $`(S,s)=(1,\frac{1}{2})`$ shows three distinct phases at the sector of $`M=N/2`$: the plateau phase, the gapless spin-fluid phase, and the ferromagnetically ordered phase, as illustrated in Fig. 7. The plateau appears for $`\alpha >-0.41`$, including a ferromagnetic-coupling region. We note that on the boundary of the plateau phase except for the point $`(\alpha ,H)=(\alpha _\mathrm{c},H_\mathrm{c})\simeq (-0.41,0.293)`$ which is indicated as KT point in Fig. 7, the plateau length is generally finite, namely, the relevant correlation length is not divergent. The only point $`(\alpha _\mathrm{c},H_\mathrm{c})`$ possesses the KT character. The long-lived plateau against the $`XY`$-like anisotropy, which is contrastive to the corresponding classical behavior, deserves special remark and further investigation. We expect magnetic measurements on anisotropic systems .
It is a pleasure to thank H.-J. Mikeska and U. Schollwöck for helpful discussions. This work was supported by the Japanese Ministry of Education, Science, and Culture through Grant-in-Aid No. 09740286 and by the Okayama Foundation for Science and Technology. The numerical computation was done in part using the facility of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
INSTITUT FÜR KERNPHYSIK, UNIVERSITÄT FRANKFURT
D - 60486 Frankfurt, August–Euler–Strasse 6, Germany
IKF–HENPG/2–99
Evidence for Statistical Production of $`J/\psi `$ Mesons in Nuclear Collisions at the CERN SPS
Marek Gaździcki<sup>1</sup><sup>1</sup>1E–mail: marek@ikf.physik.uni–frankfurt.de
Institut für Kernphysik, Universität Frankfurt, Germany
Mark I. Gorenstein<sup>2</sup><sup>2</sup>2Permanent address: Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine<sup>,</sup><sup>3</sup><sup>3</sup>3E–mail: goren@th.physik.uni-frankfurt.de
Institut für Theoretische Physik, Universität Frankfurt, Germany
## Abstract
The hypothesis of statistical production of $`J/\psi `$ mesons at hadronization is formulated and checked against experimental data. It explains in a natural way the observed scaling behavior of the $`J/\psi `$ to pion ratio at the CERN SPS energies. Using the multiplicities of $`J/\psi `$ and $`\eta `$ mesons the hadronization temperature $`T_H\cong 170`$ MeV is found, which agrees well with the previous estimates of the temperature parameter based on the analysis of the hadron yield systematics.
Charmonium production in hadronic and nuclear collisions is usually considered to be composed of three stages: the creation of a $`c\overline{c}`$ pair, the formation of a bound $`c\overline{c}`$ state and the subsequent interaction of this $`c\overline{c}`$ bound state with the surrounding matter. The first process is calculated within perturbative QCD, whereas modeling of non–perturbative dynamics is needed to describe the last two stages (see, e.g. and references therein). The interaction of the bound $`c\overline{c}`$ state with matter causes suppression of the finally observed charmonium yield relative to the initially created number of bound $`c\overline{c}`$ states. This initial number is assumed to be proportional to the number of Drell–Yan pairs, which then allows for the experimental study of the charmonium suppression pattern. It was proposed that the magnitude of the measured suppression in nuclear collisions can be used as a probe of the state of high density matter created at the early stage of the collision. The suppression of the $`J/\psi `$ yield observed in p+A and O(S)+A collisions at the CERN SPS is considered to be caused by the interactions with nucleons occurring while the primordial baryons keep interpenetrating . The rapid increase of the suppression (anomalous suppression) observed when going from peripheral to central Pb+Pb collisions is often attributed to the formation of a quark–gluon plasma . However alternative interpretations are still under discussion .
It was recently found that the mean multiplicity of $`J/\psi `$ mesons increases proportionally to the mean multiplicity of pions when proton–proton (p+p), proton–nucleus (p+A) and nucleus–nucleus (A+A) collisions at CERN SPS energies are considered. We illustrate this unexpected experimental fact by reproducing in Fig. 1 the plot from Ref. , where the ratio $`J/\psi /h^-`$ is shown as a function of the mean number of nucleons participating in the interaction for inelastic nuclear collisions at the CERN SPS. The $`J/\psi `$ and $`h^-`$ denote here the mean multiplicities of $`J/\psi `$ mesons and negatively charged hadrons (more than 90% are $`\pi ^-`$ mesons), respectively. We note that the analysis presented in Ref. indicates that the scaling of the $`J/\psi /h^-`$ ratio is also valid for central Pb+Pb collisions at the CERN SPS.
In the standard picture of the $`J/\psi `$ production based on the hard creation of $`c\overline{c}`$ pairs and the subsequent suppression of the bound $`c\overline{c}`$ states the observed scaling behavior of the $`J/\psi `$ multiplicity appears to be due to an ‘accidental’ cancelation of several large effects. This motivates our effort to find an alternative production mechanism of $`J/\psi `$ mesons which would explain the experimental data in a natural way.
In this letter we show that a scaling property of the $`J/\psi `$ multiplicity
$$\frac{J/\psi }{h^-}\cong const(A)$$
(1)
can be understood assuming that a dominant fraction of $`J/\psi `$ mesons is produced directly at hadronization according to the available hadronic phase space.
Statistical models have long been used to describe hadron multiplicities in high energy collisions. Thermal hadron production models have been successfully used to fit the data on particle multiplicities in A+A collisions at the CERN SPS energies (see, e.g. ). Due to the large number of particles a grand canonical formulation is used for the modeling of high energy heavy ion collisions . Recently, an impressive success of the statistical model applied to hadron multiplicities in elementary $`e^++e^-`$, $`p+p`$ and $`p+\overline{p}`$ interactions at high energy was also reported . However, in the latter case the use of a canonical formulation of the model, which assures exact conservation of the conserved charges, is necessary. The temperature parameter which characterizes the available phase space for the hadron production is found in these interactions to be 160–190 MeV . It does not show any significant dependence on the type of reaction and on the collision energy. Moreover, it coincides with the chemical freeze-out temperature estimate obtained in hadron gas models for A+A collisions at the CERN SPS . These facts suggest the possibility to ascribe the observed statistical properties of hadron production systematics in elementary and nuclear collisions at high energies to the statistical nature of the hadronization process .
Based on the above facts we formulate a hypothesis that a dominant fraction of the $`J/\psi `$ mesons produced in hadronic and nuclear collisions at the CERN SPS energies is created at hadronization according to the available hadronic phase space.
$`J/\psi `$ mesons are neutral and unflavored, i.e. all charges conserved in the strong interaction (electric charge, baryon number, strangeness and charm) are equal to zero for this particle. Therefore, its production is not influenced by the conservation laws of quantum numbers. For sufficiently high collision energies, the effect of the strict energy–momentum conservation in the statistical model formulation can be neglected. Consequently, the $`J/\psi `$ production can be calculated in the grand canonical approximation and, therefore, its multiplicity is proportional to the volume, $`V`$, of the matter at hadronization. Thus, the statistical yield of $`J/\psi `$ mesons at hadronization is given by
$$J/\psi =\frac{(2j+1)V}{2\pi ^2}\int _0^{\infty }p^2dp\frac{1}{\mathrm{exp}[(p^2+m_\psi ^2)^{1/2}/T_H]-1}\cong (2j+1)V\left(\frac{m_\psi T_H}{2\pi }\right)^{3/2}\mathrm{exp}\left(-\frac{m_\psi }{T_H}\right),$$
(2)
where $`j=1`$ and $`m_\psi \cong 3097`$ MeV are the spin and the mass of the $`J/\psi `$ meson and $`T_H`$ is the hadronization temperature. The previously mentioned results of the analysis of hadron yield systematics in elementary and nuclear collisions within the statistical approach indicate that the hadronization temperature $`T_H`$ is the same for different colliding systems and collision energies. This reflects the universal feature of the hadronization process.
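The non-relativistic Boltzmann form on the right-hand side of Eq. (2) is an excellent approximation for $`m_\psi T_H`$; the sketch below (our own check, not from the original) compares it with the full Bose integral:

```python
import numpy as np
from scipy.integrate import quad

m_psi, T = 3097.0, 170.0   # MeV

def bose_density(m, T, g=3.0):
    """n = g/(2 pi^2) * integral p^2 dp / (exp(sqrt(p^2+m^2)/T) - 1)."""
    integrand = lambda p: p**2 / (np.exp(np.sqrt(p**2 + m**2) / T) - 1.0)
    val, _ = quad(integrand, 0.0, 60.0 * T)
    return g * val / (2.0 * np.pi**2)

def boltzmann_density(m, T, g=3.0):
    return g * (m * T / (2.0 * np.pi))**1.5 * np.exp(-m / T)

print(bose_density(m_psi, T) / boltzmann_density(m_psi, T))  # ~1.1
```

The ratio is about 1.1, the residual being the relativistic correction $`15T_H/8m_\psi `$; the approximation in Eq. (2) is thus accurate well within the quoted experimental uncertainties.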
The total entropy of the produced matter is proportional to its volume. As most of the entropy in the final state is carried by pions, the pion multiplicity is also expected to be proportional to the volume of the hadronizing matter. Thus the scaling property (1) follows directly from the hypothesis of statistical production of $`J/\psi `$ mesons at hadronization and the universality of the parameter $`T_H`$.
Since elements of hadronizing matter move in the overall center of mass system the volume $`V`$ in Eq. (S0.Ex1) characterizes in fact the sum of the proper volumes of all elements in the collision event.
The hypothesis of statistical production of $`J/\psi `$ mesons at a constant hadronization temperature $`T_H`$ leads to the prediction of a second scaling property of the $`J/\psi `$ multiplicity, namely:
$$\frac{J/\psi }{h^{-}}\approx const(\sqrt{s})$$
(3)
which should be valid for sufficiently large c.m. energies, $`\sqrt{s}`$. This scaling property is illustrated in Fig. 2 which shows the ratio $`J/\psi /h^{}`$ as a function of $`\sqrt{s}`$ for proton–nucleon interactions. The experimental data on $`J/\psi `$ yields are taken from a compilation given in . The values of $`h^{}`$ are calculated using a parameterization of the experimental results as proposed in .
From the CERN SPS energies onwards, $`\sqrt{s}\approx 20`$ GeV, the ratio $`J/\psi /h^{-}`$ is approximately constant, in line with the expected scaling behavior (3). The rapid increase of the ratio with collision energy observed below $`\sqrt{s}\approx 20`$ GeV should be attributed to the significantly larger energy threshold for $`J/\psi `$ production than for pion production. Within the statistical approach, the effect of strict energy–momentum conservation would have to be taken into account by using the microcanonical formulation of the model.
The statistical $`J/\psi `$ multiplicity (S0.Ex1) depends on two parameters, $`T_H`$ and $`V`$. In general the calculation of hadron yields in the statistical model should take into account the conservation of charges and resonance feeddown contributions. However, a simple way to estimate the crucial temperature parameter in Eq. (S0.Ex1) from the experimental data is possible, provided that we find a second hadron which has the properties of the $`J/\psi `$ meson, i.e. it is neutral, unflavored and stable with respect to strong decays. The best candidate is the $`\eta `$ meson. The multiplicity of $`\eta `$ mesons seems also to obey the scaling properties (1) and (3). We note, however, that the data on $`\eta `$ production are scarce. The independence of the $`\eta /\pi ^0`$ ratio of the collision energy was observed quite a long time ago . Recent data on $`\eta `$ production in central Pb+Pb collisions at the CERN SPS suggest that the $`\eta /\pi ^0`$ ratio is also independent of the size of the colliding objects. In order to illustrate this scaling, Fig. 3 shows the $`\eta /\pi ^0`$ ratio as a function of the number of interacting nucleons for inelastic p+p and S+S interactions and for central Pb+Pb collisions at the CERN SPS energies (158–400 A GeV).
From the ratios $`J/\psi /h^{-}`$ and $`\eta /\pi ^0`$ presented in Fig. 2 and Fig. 3 we estimate a mean ratio $`J/\psi /\eta =(1.3\pm 0.3)\times 10^{-5}`$. Here we use the experimental ratio $`\pi ^0/h^{-}\approx 1`$ in N+N interactions . Under the hypothesis of the statistical production of $`J/\psi `$ and $`\eta `$ mesons at hadronization the measured ratio can be compared to the ratio calculated using Eq. (S0.Ex1):
$$\frac{J/\psi }{\eta }\approx 3\left(\frac{m_\psi }{m_\eta }\right)^{3/2}\mathrm{exp}\left(\frac{m_\eta -m_\psi }{T_H}\right),$$
(4)
where $`m_\eta \approx 547`$ MeV is the mass of the $`\eta `$ meson. This leads to an estimate of the hadronization temperature, $`T_H=170\pm 2`$ MeV. Assuming a maximum 50% uncertainty on the $`J/\psi /\eta `$ ratio due to the contribution from resonance decays<sup>4</sup><sup>4</sup>4 An estimate of the fraction of the $`\eta `$ yield from decays of heavy hadrons in p+p interactions is about 50% , which is close to the measured fraction of the $`J/\psi `$ yield originating from decays (30–50%) . we obtain an estimate of an additional systematic error on $`T_H`$ of about 7 MeV. A graphical solution of Eq. (4) is shown in Fig. 4, which illustrates the high sensitivity of the $`T_H`$ estimate obtained from the $`J/\psi /\eta `$ ratio. This is due to the large difference between the masses of the $`J/\psi `$ and $`\eta `$ mesons.
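Numerically, Eq. (4) can be inverted for $`T_H`$ given the measured ratio; a minimal sketch (the bisection bracket below is an assumption on our part):

```python
import math

m_psi, m_eta = 3097.0, 547.0   # MeV
ratio_measured = 1.3e-5        # mean J/psi-to-eta ratio quoted above

def ratio_model(T):
    """Eq. (4): statistical J/psi-to-eta ratio at temperature T (MeV)."""
    return 3.0 * (m_psi / m_eta) ** 1.5 * math.exp((m_eta - m_psi) / T)

lo, hi = 100.0, 250.0          # bracketing interval; ratio_model is monotonic
for _ in range(60):            # bisection solve ratio_model(T) = ratio_measured
    mid = 0.5 * (lo + hi)
    if ratio_model(mid) < ratio_measured:
        lo = mid
    else:
        hi = mid
print(0.5 * (lo + hi))         # ~ 170 MeV, matching the estimate above
```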
In summary, we show that the $`J/\psi `$ production in hadronic and nuclear collisions can be understood assuming that a dominant fraction of $`J/\psi `$ mesons is produced at hadronization according to the available hadronic phase space. The estimate of the hadronization temperature based on $`J/\psi `$ multiplicity, $`T_H\approx 170`$ MeV, agrees well with the values of the temperature parameter obtained from the analysis of the hadron yield systematics in $`e^++e^{-}`$, $`p+p`$, $`p+\overline{p}`$ interactions and nucleus–nucleus collisions.
If the new interpretation of the $`J/\psi `$ data presented in this letter is correct, it may have several important implications:
1. $`J/\psi `$ yields are not sensitive to the state of high density matter created at the early stage of A+A collisions because their production takes place at hadronization.
2. The creation of the $`J/\psi `$ mesons is due to direct thermal production at hadronization and not due to the coalescence of $`c\overline{c}`$ quarks produced before hadronization. Therefore the yield of $`J/\psi `$ mesons is independent of the production of open charm, which is carried mainly by the $`D`$ mesons in the final state. The $`D`$ meson multiplicity is determined by the number of $`c\overline{c}`$ quark pairs created in the early parton stage, before hadronization.
3. Due to the large mass of $`J/\psi `$ mesons the data on their production are very sensitive to the value of the hadronization temperature and therefore allow for a precise study of the hadronization process.
Acknowledgements
We thank F. Becattini, L. Gerland, I. Mishustin, St. Mrówczyński, P. Seyboth, R. Stock, H. Stöcker and G. Yen for discussion and comments to the manuscript. We acknowledge financial support of BMBF and DFG, Germany.
# Line profile analysis of the 𝛿 Scuti star HD 2724≡BB Phe: mode identification and amplitude variations Based on observations collected at the Coudé Auxiliary Telescope of the European Southern Observatory – La Silla, Chile (Proposal 60.E-0113)
## 1 Introduction
The light and line profile variations of the $`\delta `$ Scuti star HD 2724$``$BB Phe have recently been studied by Bossi et al. (1998, hereinafter Paper I). On the basis of 11 consecutive nights of photometric observations and 5 simultaneous nights of spectroscopic ones, they detected 13 probable pulsation modes, 7 of which were determined in an unambiguous way. For 4 modes it was possible to suggest an identification of their $`\mathrm{}`$ and $`m`$ parameters; in particular, the proposed identification of the lowest frequency mode as the radial fundamental one allowed a more accurate refinement of the stellar physical parameters, which were also consistent with the Hipparcos parallax.
Due to the relatively short spectroscopic baseline, some of the modes detected in Paper I had barely resolved frequencies; as a consequence, some ambiguities arose in their detection and in the successive attempts at their typing. With the aim of confirming and possibly improving these findings, an application for a longer run was submitted to ESO. In this paper we describe the results we obtained; unfortunately, in the meantime all the facilities for obtaining photometric data had been decommissioned, and we had to limit ourselves to a purely spectroscopic campaign, to which 10 consecutive nights were allotted (October 1–11, 1997).
## 2 New spectroscopic observations and data processing
The spectroscopic observations were made at La Silla Observatory (ESO) with the Coudé Echelle Spectrometer attached to the Coudé Auxiliary Telescope. Owing to bad weather, it was possible to get useful observations during 8 nights only. The run was performed in Remote Control Mode from Garching headquarters; the CES was configured in the blue path with the long camera and the CCD #38. The resulting reciprocal dispersion was 0.018 Å pix<sup>-1</sup> with an effective resolution of about 54000. The useful spectral region ranges from 4482 to 4534 Å. The integration time was set to 15 minutes; a total of 189 useful spectrograms were collected covering about 52 hours of stellar monitoring on a baseline of 8.3 days. Data reduction was performed using the MIDAS package. The spectrograms were normalized by means of internal quartz lamp flat fields and calibrated into wavelengths by means of a thorium lamp.
Due to the star's projected rotational velocity, only the Feii line at 4508.4 Å is completely free from blends of adjacent features and allows a good normalization to the stellar continuum. Therefore, in the same way as we did in Paper I, we studied the behaviour of this line. All the spectrograms were averaged to obtain a very high $`S/N`$ average spectrum. It was then possible to select two windows on both sides of the Feii line. As a further step, the individual spectrograms were normalized to the continuum defined by a linear least–squares fit of these windows. Finally, the spectra were shifted and rebinned in order to remove the observer's motion. In the rebinning procedure the spectrograms were resampled with a step of 0.04 Å (average of 2 original pixels): in such a way we preserved the effective resolution according to the Nyquist criterion and improved the $`S/N`$ of the resulting profiles. The mean standard deviation of the pixels on the stellar continuum allowed us to estimate the $`S/N`$ of the spectrograms: the resulting average value at the continuum level is 368.
A non–linear least–squares fit of a rotationally broadened gaussian profile to the average line profile allowed us to estimate the projected rotational velocity and the intrinsic width: $`v\mathrm{sin}i=83.0\pm 1.0`$ km s<sup>-1</sup> and $`W_i=13.7\pm 0.5`$ km s<sup>-1</sup>. This $`v\mathrm{sin}i`$ is in excellent agreement with the value of $`82\pm 2`$ km s<sup>-1</sup> derived in Paper I. Figure 1 shows the average profile obtained from the 1997 observations fitted with the computed rotationally broadened profile (dashed line).
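For illustration, a minimal model of the kind fitted here (an intrinsic Gaussian convolved with a rotational broadening kernel, without limb darkening; this simplification and the width definition are our assumptions, as the paper does not specify its exact kernel) could be sketched as follows:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def broadened_profile(wl, wl0, depth, vsini, w_int):
    """Gaussian line of central depth `depth` convolved with a rotational
    broadening kernel (no limb darkening); wl, wl0 in Angstrom, vsini and
    w_int (intrinsic width) in km/s."""
    step = wl[1] - wl[0]
    sigma = wl0 * w_int / C_KMS
    line_depth = depth * np.exp(-0.5 * ((wl - wl0) / sigma) ** 2)
    dl_rot = wl0 * vsini / C_KMS                    # rotational half-width
    x = np.arange(-dl_rot, dl_rot + step, step)
    kernel = np.sqrt(np.clip(1.0 - (x / dl_rot) ** 2, 0.0, None))
    kernel /= kernel.sum()
    return 1.0 - np.convolve(line_depth, kernel, mode="same")

# Fe II 4508.4 A with vsini = 83 km/s, W_i = 13.7 km/s (depth is illustrative)
wl = np.linspace(4504.0, 4513.0, 451)
model = broadened_profile(wl, 4508.4, 0.15, 83.0, 13.7)
```

Such a model can then be passed to a standard non-linear least-squares routine (e.g. scipy.optimize.curve_fit) with $`v\mathrm{sin}i`$ and the intrinsic width as free parameters.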
## 3 Analysis of line profile variations
### 3.1 The least–squares algorithm
The search for periodicities in the line profile variations was performed by using a generalized form of the least–squares power spectrum technique, originally developed for the analysis of one-dimensional time series by Vaniĉek (1971). Let $`P(\lambda _j,t_k)`$ be the observed line profiles, with $`j`$ the pixel number and $`t_k`$ the time of the $`k`$-th spectrogram. The global variance is defined by:
$$\sigma _T=\sum _{j,k}w_k^2(P(\lambda _j,t_k)-P_0(\lambda _j))^2$$
where $`P_0(\lambda _j)`$ is the time averaged profile and $`w_k`$ are the normalized weights derived from the $`S/N`$ of the spectrograms. If we have already detected $`m`$ periodic sinusoidal components (“known constituents”) and if we are looking for the $`(m+1)`$-th component, we can explore the useful frequency range (0$`<\nu _i<25`$ cd<sup>-1</sup>) by fitting each pixel time series $`j`$ with the series
$`p_{i,j}(t_k)=\overline{p}_i+{\displaystyle \sum _{l=1,m}}A_{i,j,l}\mathrm{cos}(2\pi \nu _lt_k+\varphi _{i,j,l})`$
$`+A_{i,j,m+1}\mathrm{cos}(2\pi \nu _it_k+\varphi _{i,j,m+1})`$
where $`\overline{p}_i,A_{i,j,l}`$ and $`\varphi _{i,j,l}`$ (with $`1\le l\le m+1`$) are the free parameters. Then we compute the global reduction of variance defined as
$$RF_i=1-\sum _{j,k}w_k^2(p_{i,j}(t_k)-P(\lambda _j,t_k))^2/\sigma _m$$
where $`\sigma _m`$ is the global residual variance after the fit of the line profile variations with the $`m`$ “known constituents”. The frequency $`\nu _i`$ giving the highest $`RF`$ (or one of its 1 cd<sup>-1</sup> aliases if there is a better agreement with the photometrically detected modes) is then selected as the $`(m+1)`$–th known constituent and the procedure is iterated. At the end of this procedure, after having detected $`M`$ known constituents, we can perform a final fit of $`P(\lambda _j,t_k)`$ and derive the functions $`\overline{p}_M(\lambda _j)`$ (i.e. the best estimate of the unperturbed line profile) and $`A_l(\lambda _j),\varphi _l(\lambda _j)`$ (with $`1\le l\le M`$), together with their formal errors.
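As an illustration of one step of this search, the following sketch (NumPy-style; the variable names are ours) fits all pixel time series with the known constituents plus one trial frequency and returns the global reduction of variance $`RF`$:

```python
import numpy as np

def reduction_of_variance(profiles, times, weights, known_freqs, trial_freq):
    """Global reduction of variance RF for one trial frequency (Sect. 3.1).
    profiles: (n_times, n_pix) array of line profiles P(lambda_j, t_k);
    weights:  (n_times,) normalized weights w_k derived from the S/N."""
    w = weights[:, None]

    def fit_chi2(freqs):
        # design matrix: constant term plus a (cos, sin) pair per frequency
        cols = [np.ones_like(times)]
        for nu in freqs:
            cols += [np.cos(2 * np.pi * nu * times),
                     np.sin(2 * np.pi * nu * times)]
        A = np.stack(cols, axis=1)
        coeffs, *_ = np.linalg.lstsq(w * A, w * profiles, rcond=None)
        return np.sum((w * (profiles - A @ coeffs)) ** 2)

    sigma_m = fit_chi2(list(known_freqs))              # known terms only
    chi2 = fit_chi2(list(known_freqs) + [trial_freq])  # plus the trial term
    return 1.0 - chi2 / sigma_m
```

Scanning trial_freq over the 0–25 cd<sup>-1</sup> range and keeping the frequency of maximum $`RF`$ reproduces one iteration of the prewhitening procedure described above.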
### 3.2 Frequency identification in 1997 data
Following the procedure described above, 8 terms have been successively detected: 8.58, 8.05, 6.49, 0.07 (or 1.07), 5.74, 5.31, 7.38 and 6.99 cd<sup>-1</sup>. Other smaller amplitude terms are probably present, but we cannot assign a frequency value in a reliable way. The detected modes are listed in Tab. 1 in order of increasing frequency (first column); the second column reports their rms amplitudes computed along the whole line profile and expressed in thousandths of the continuum amplitude.
Frequency detection by means of the least–squares technique can seem a subjective procedure when dealing with single–site observations. As our experience on photometric data proves, this is not true: as an example, see the frequencies detected by Mantegazza et al. (1994) on the basis of single–site observations and their confirmation by Breger et al. (1995) on the basis of multisite ones. However, to give an independent confirmation of these results, we performed a second analysis by using the CLEAN algorithm generalized so as to analyse line profile variations (Mantegazza et al. 1999, in preparation). The averaged spectrum of the line profile variations is shown in Fig. 2.
The only difference between the two algorithms is the value assigned to the low frequency term: 0.07 cd<sup>-1</sup> from the least–squares analysis and 1.07 cd<sup>-1</sup> from the CLEAN algorithm (of course, one term is an alias of the other). However, we do not assign great significance to this peak: given its closeness to the value of the sidereal day it is probably an artifact due to observation and/or reduction procedures. In particular, during the reductions it has been observed that a slight rotation of the spectrograms on the CCD occurred during the night; this could be responsible for this low frequency term. Therefore, this term will not be considered any more in the following. All the other terms detected independently by the two techniques are coincident.
### 3.3 Comparison with the 1993 results
In the third column of Tab. 1 we list the rms amplitudes of the terms detected from the 1993 spectrograms. These values were obtained by including in the least–squares fit both the 5.31 and the 6.26 cd<sup>-1</sup> terms. Owing to the fact that they are barely resolved, their amplitudes are rather uncertain. Furthermore, the amplitudes of the other terms can change by up to about 20% according to whether one term or both are considered in the fit. Notwithstanding these uncertainties, it is quite evident from a comparison of the 1993 and 1997 data that the amplitudes of the other terms have decreased, with the only exception of the 8.58 cd<sup>-1</sup> term, which shows the opposite behaviour. For the modes detected in the 1993 photometric data, we have reported their $`b`$–light amplitude in the fourth column of Tab. 1.
We see that there is a substantial agreement in the frequency detection between the two seasons: only one mode was detected in 1993 but not in 1997 (6.26 cd<sup>-1</sup>) and only one in 1997 but not in 1993 (6.99 cd<sup>-1</sup>). The probable reality of the 6.26 cd<sup>-1</sup> term was already discussed in Paper I, taking also into account its relative closeness to the 1 cd<sup>-1</sup> alias of the 5.31 cd<sup>-1</sup> term.
The 6.99 cd<sup>-1</sup> term was not detected in the 1993 data probably because it was drowned in the 7.05 cd<sup>-1</sup> alias of the strongest spectroscopic mode (8.05 cd<sup>-1</sup>). In fact, in the 1993 data the peak at 7.05 cd<sup>-1</sup> was the highest one, probably owing to the 6.99 cd<sup>-1</sup> contribution. The correct value (8.05 cd<sup>-1</sup>) was deduced with the help of the photometry. The resolution of the 1997 data is sufficient to detect the 6.99 cd<sup>-1</sup> term: as a matter of fact, the least–squares analysis finds this term whether a term at 7.05 or at 8.05 cd<sup>-1</sup> is considered.
In summary, five terms were detected in the two spectroscopic datasets and in the photometric one: 5.736, 5.311, 6.488, 8.049 and 7.382 cd<sup>-1</sup> . The 6.26, 6.99 and 8.58 cd<sup>-1</sup> are purely spectroscopic terms, while the 4.430, 4.536, 5.629, 5.877 and 6.123 cd<sup>-1</sup> ones are only photometric. Even if the detection of these terms in the three different time series (especially in the spectroscopic ones) was not an easy task, we obtained a well–defined picture of the pulsational content of HD 2724 as a final result. In particular, we had for the first time the possibility to compare two spectroscopic solutions obtained in different years.
The amplitudes of the modes have changed, as can be seen by comparing the different columns of Tab. 1, where we list the average amplitudes along the line profile of the identified modes in the two observing seasons (cols. 2 and 3).
### 3.4 Phase diagrams of spectroscopic terms
Figure 3 shows the amplitude and phase diagrams of the behaviour along the line profile of the detected modes (i.e. the functions $`A_l(\lambda _j),\varphi _l(\lambda _j),1\le l\le 7`$ previously described). For the sake of clarity only the points with a formal error bar smaller than $`40^o`$ are shown in the phase panels.
The behaviours of the phase diagrams are similar for the modes detected in the two seasons, with one notable exception concerning the 5.31 cd<sup>-1</sup> term. In the analysis of the 1993 data the reality of the 6.26 cd<sup>-1</sup> term was established, but this term is not very prominent in the 1997 data, because its amplitude may have decreased in the meantime. However, this dataset has a better frequency resolution and the 5.31 and 6.26 cd<sup>-1</sup> terms could be well resolved. Hence, we can obtain a clearer phase diagram of the 5.31 cd<sup>-1</sup> term: we verified that the phase increases with the wavelength, although we stated the opposite behaviour in Paper I. We performed several tests on the 1993 data and we definitely established that the mode identification proposed in Paper I suffers from the interference between one term and the alias at $`\pm `$ 1 cd<sup>-1</sup> of the other term (in particular, the 5.31 cd<sup>-1</sup> term was affected by the alias at 5.26 cd<sup>-1</sup> of the 6.26 cd<sup>-1</sup> term). We therefore believe that the most reliable solution is the one derived from the new data and that the mode identification proposed in the next subsection is more reliable than the one reported in Paper I.
Among the other phase diagrams the cleanest is that of the 8.58 cd<sup>-1</sup> term, which clearly indicates that it is a relatively high–degree prograde mode, in agreement with the results of Paper I.
### 3.5 Mode typing
An attempt to identify the modes can be made by fitting the line profile variations with the technique already described in Paper I. In the case of the present data, we are forced to fit the line profile variations alone, without simultaneous light variations; this makes it difficult to introduce the flux (or temperature) variations in the model. Of course, without this constraint the model obtained from the 1997 data is a little less well defined, but in spite of that we could still draw important conclusions about the pulsational content. Therefore, we limited ourselves to considering the vertical velocity ($`v_r`$) and its phase as the only free parameters of the model, while the horizontal velocity $`v_h`$ was kept linked to $`v_r`$ by the usual relationship $`v_h=74.4Q^2v_r`$ ($`Q`$: pulsational constant) and we assumed the same phase for both. These approximations are justified by the fact that $`v_h<<v_r`$. Finally, no flux variations were allowed. The other parameters were the same as in Paper I, in particular the stellar physical parameters.
We explored all the possible combinations of $`\mathrm{},m`$ up to $`\mathrm{}=4`$ and inclination angles between 20<sup>o</sup> (at this inclination the rotational velocity is close to the stellar break–up velocity) and 90<sup>o</sup>. For the highest $`S/N`$ term (8.58 cd<sup>-1</sup>) we extended the search to $`\mathrm{}=7`$, because its phase diagram clearly indicated that it could be a relatively high–degree mode.
The discriminants of the best fitting modes are reported for each term in the panels of Fig. 4 as a function of the inclination angle. It should be noted that in each panel we show only the plausible solutions, i.e. those giving a low value of the discriminant. The solutions yielding unacceptable values are omitted for the sake of clarity. As an example of the quality of the fits supplied by this technique, Fig. 5 shows the variations of the line profile due to the 8.58 cd<sup>-1</sup> term phased on a complete cycle and the corresponding best fitting variations produced by a model of a nonradial mode with $`\mathrm{}=6,m=5`$ and assuming $`i=70^o`$. As can be seen from Fig. 4, we can be reasonably sure that the 8.58 cd<sup>-1</sup> term is a mode with $`\mathrm{}=6\pm 1`$ and $`m=5`$: unfortunately, we cannot be more precise since the discriminants of these three modes are practically coincident.
By examining Fig. 4 we see that the results are not very clear-cut, in the sense that for most of the terms different modes give equivalent fits at different inclinations. However, the 6.99 cd<sup>-1</sup> mode slightly favours a solution with a high inclination (say $`i\gtrsim 60^o`$). In Paper I we left open two possible values for the inclination angle, i.e. $`50^o`$ or $`70^o`$. On the basis of the new results, we prefer the latter; however, most of the proposed mode identifications are plausible also for the former value.
Owing to the fact that more than one identification is possible for the same term, the identifications suggested in Paper I are generally confirmed. The strongest disagreement concerns the 5.31 cd<sup>-1</sup> term, which should be prograde or axisymmetric according to the old data, while the new data indicate clearly that it is retrograde. The reason for this discrepancy has already been discussed when dealing with the phase diagrams. According to that discussion we have to be more confident in the results obtained from the 1997 data.
We report in Tab. 2 the identifications which are compatible both with the present data and with the 1993 ones. In particular, some possible identifications suggested by Fig. 4 have been rejected (as for example the 3,–3 and 4,–4 couples for the 6.49 cd<sup>-1</sup>) because they are unable to fit the light variations observed in 1993.
As regards the 8.05 cd<sup>-1</sup> term, our calculations show that the local flux of a (4,-4) mode varies by about 0.3 mag and that of a (3,-3) mode by about 0.15 mag. The cancellation effect reduces it to the observed mmag level.
## 4 Conclusions
The new set of spectrograms of HD 2724 has allowed us to confirm the detection of the modes found in the 1993 data, and a new mode has been detected at 6.99 cd<sup>-1</sup>. There is evidence that the amplitudes of the modes have considerably changed; in particular the 8.58 cd<sup>-1</sup> term, which was among the weakest in 1993, is now the strongest. Amplitude variations are well established in $`\delta `$ Sct stars thanks to extensive photometric studies. As a matter of fact, HD 2724 is the first star in which these variations are also observed spectroscopically. In particular, it should be emphasized that the 6.26 cd<sup>-1</sup> term, clearly detected in the 1993 data, was not recovered in the 1997 ones.
As regards the frequency content, HD 2724 is very similar to 4 CVn (Breger & Hiesberger 1999). At least 7 frequencies (5.88, 6.12, 6.26, 6.49, 6.99, 7.38 and 8.58 cd<sup>-1</sup> in HD 2724; 5.85, 6.12, 6.19, 6.44, 6.98, 7.38 and 8.59 cd<sup>-1</sup> in 4 CVn) have almost the same value. Among the high–amplitude terms in 4 CVn, only the 5.05 cd<sup>-1</sup> term has no counterpart in HD 2724. The physical parameters of the two stars are similar: T<sub>eff</sub>=6900$`\pm `$100 K, $`\mathrm{log}g`$=3.4$`\pm `$0.1, $`v\mathrm{sin}i`$=73 km s<sup>-1</sup> for 4 CVn (Breger et al. 1990); 7200$`\pm `$100 K, 3.44$`\pm `$0.03, 83 km s<sup>-1</sup> for HD 2724 (Paper I). HD 2724 does not show the cross–coupling terms found in the 4 CVn light curve, but this can be due to the smaller amplitude of the modes excited in HD 2724. On the basis of this similarity, the comparison between the frequency content of $`\delta `$ Sct stars deserves further attention in the future.
The mode typing has been partially hampered by the absence of the light curve, which prevented us from modelling the flux variations in the line profiles. Within these limitations we nevertheless obtained a rather satisfactory fit of the strongest mode (8.58 cd<sup>-1</sup>), which turned out to have $`\mathrm{}=6\pm 1`$ and $`m=5`$. From the fit to the line profile variations induced by the other modes, some indications of their $`\mathrm{},m`$ values have been obtained, and they are in general agreement with those of Paper I. A remarkable result is the retrograde nature of the 5.31 cd<sup>-1</sup> term. Moreover, the 5.74 cd<sup>-1</sup> term probably has $`\mathrm{}=2`$ and $`m=2`$, considering the 1993 and 1997 phase diagrams.
The importance of the study of the line profile variations is emphasized by the detection of modes not observed photometrically. In order to propose asteroseismological models of $`\delta `$ Sct stars, the combination of the two techniques is strongly recommended. It should be noted that we could obtain 13 independent frequencies in the case of HD 2724 from single–site campaigns. However, we believe that the results obtained from the 1997 data are very close to the best that can be achieved from single–site spectroscopic campaigns lasting 7–10 days.
###### Acknowledgements.
The authors wish to thank M. Breger for drawing their attention to the similarity between HD 2724 and 4 CVn; J. Vialle improved the English form of the manuscript.
# A Bogomol’nyi equation for intersecting domain walls
\[
DAMTP-1999-68
hep-th/9905196
## Abstract
We argue that the Wess-Zumino model with quartic superpotential admits stable static solutions in which three domain walls intersect at a junction. We derive an energy bound for such junctions and show that configurations saturating it preserve 1/4 supersymmetry.
\]
Domain walls arise in many areas of physics. They occur as solutions of scalar field theories whenever the potential is such that it has isolated degenerate minima. There are two circumstances in which this happens naturally. One is when a discrete symmetry is spontaneously broken; in this case the degeneracy is due to the symmetry. The other is when the field theory is supersymmetric; in this case the potential is derived from a superpotential, the critical points of which are degenerate minima of the potential. A simple model illustrating the latter case is the 3+1 dimensional (or D=4) Wess-Zumino (WZ) model with a superpotential $`W(\varphi )`$ that is a polynomial function of the complex scalar field $`\varphi `$. The static domain wall solutions of this theory are stable for topological reasons but the stability can also be deduced from the fact that a static domain wall is ‘supersymmetric’, i.e. it partially preserves the supersymmetry of the vacuum. An advantage of the latter point of view is that the condition for supersymmetry leads immediately to a first order ‘Bogomol’nyi’ equation, the solutions of which automatically solve the second order field equations.
The possibilities for partial preservation of supersymmetry in the WZ model can be analysed directly in terms of the N=1 D=4 supertranslation algebra. Allowing for all algebraically independent central charges, the matrix of anticommutators of the spinor charge components $`S`$ is
$$\{S,S\}=H+\mathrm{\Gamma }^{0i}P_i+\frac{1}{2}\mathrm{\Gamma }^{0ij}U_{ij}+\frac{1}{2}\mathrm{\Gamma }^{0ij}\gamma _5V_{ij}$$
(1)
where $`H`$ is the Hamiltonian, $`P_i`$ the 3-momentum, $`U`$ and $`V`$ are two 2-form charges, $`(\mathrm{\Gamma }^0,\mathrm{\Gamma }^i)`$ are the $`4\times 4`$ Dirac matrices and $`\gamma _5=\mathrm{\Gamma }^{0123}`$. The fraction of supersymmetry preserved by any configuration carrying these charges is one quarter of the number of zero-eigenvalue eigenspinors $`\zeta `$ of the matrix $`\{S,S\}`$. Supersymmetric configurations other than the vacuum will preserve either 1/2 or 1/4 of the supersymmetry. A domain wall in the 1-3 plane, for example, has $`(U_{13},V_{13})=H(\mathrm{cos}\alpha ,\mathrm{sin}\alpha )`$ for some angle $`\alpha `$ ; the corresponding spinors $`\zeta `$ are eigenspinors of $`\mathrm{\Gamma }^{013}\mathrm{exp}(\alpha \gamma _5)`$ from which it follows that the domain wall preserves 1/2 supersymmetry. Now consider a configuration with non-zero $`H`$, $`U_{13}=u`$, $`V_{23}=v`$, all other charges vanishing; such a configuration preserves 1/4 supersymmetry if $`|u|+|v|=H`$, with $`\zeta `$ an eigenspinor of both $`\mathrm{\Gamma }^{013}`$ and $`\mathrm{\Gamma }^{023}\gamma _5`$. Such a configuration would naturally be associated with domain walls in the 1-3 and 2-3 planes intersecting on the 3-axis. In this paper we argue that this possibility is realized in the WZ model.
Intersections of domain walls have been extensively studied in the context of a theory with a single real scalar field $`\phi `$ on $`\text{𝔼}^3`$ . Static configurations are presumed to satisfy an equation of the form
$$\nabla ^2\phi =V^{\prime }(\phi ),$$
(2)
where $`\nabla ^2`$ is the Laplacian on $`\text{𝔼}^3`$ and $`V(\phi )`$ is a real positive function of $`\phi `$ with two adjacent isolated minima at which $`V`$ vanishes. Let these minima be at $`\phi =\pm 1`$ and let $`(x,y,w)`$ be cartesian coordinates for $`\text{𝔼}^3`$. If one assumes that $`\phi \to \pm 1`$ as $`x\to \pm \infty `$, uniformly in $`y`$ and $`w`$, then solutions of (2) are necessarily planar because all such solutions satisfy the first order ordinary differential equation
$$\frac{d\phi }{dx}=\sqrt{V}.$$
(3)
The solutions of this equation are the static domain walls, which are stable for topological reasons. In the context of a D=3 supersymmetric model they are also supersymmetric, for reasons explained at the conclusion of this article.
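For concreteness, with the illustrative choice $`V(\phi )=(1-\phi ^2)^2`$ (an example of ours, not taken from the text), equation (3) reads $`d\phi /dx=1-\phi ^2`$ and is solved by the familiar kink; a quick numerical check:

```python
import numpy as np

def kink(x):
    """For the illustrative potential V(phi) = (1 - phi**2)**2, Eq. (3)
    reads dphi/dx = 1 - phi**2, solved by phi(x) = tanh(x)."""
    return np.tanh(x)

x = np.linspace(-5.0, 5.0, 2001)
phi = kink(x)
dphi = np.gradient(phi, x)
# pointwise check of the first-order equation (error is O(grid step squared))
print(np.max(np.abs(dphi - (1.0 - phi ** 2))))
```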
Now consider the possibility of static intersecting domain wall solutions of (2). An existence proof has been given showing that (2) admits a solution representing two orthogonal domain walls. The solution has Dirichlet type boundary data: $`\varphi =0`$ on the planes $`x=0`$ and $`y=0`$, and $`\varphi \to 1`$ as $`|𝐱|\to \infty `$ within the first quadrant. Given that the solution exists in the first quadrant, it may be obtained in the remaining quadrants by reflection. It seems clear, although we are unaware of formal proofs, that there should also exist solutions for which $`2n`$ domain walls intersect, adjacent walls making an angle $`\pi /n`$. However all these intersecting solutions are expected to be unstable; it is certainly the case that they cannot be supersymmetric. We shall return to this point later.
Domain walls with two or more scalar fields have been investigated in . In , three-phase boundaries were shown to minimise the energy and to correspond, in the thin-wall limit, to a ‘Y-intersection’ (with 120 degree angles). The WZ model is a special case of models of this type. The energetics of domain wall intersections in the WZ model was investigated in (see also ). The possibilities depend on the form of the superpotential $`W`$. If it is cubic then there are two possible domains and only one type of domain wall separating them. Intersections of two such walls cannot be more than marginally stable. Stable intersections can occur only if the superpotential is at least quartic. A quartic superpotential can therefore model a tri-stable medium with three possible stable domains and three types of domain wall. The 1+1 dimensional analysis of indicates that triple intersections of the three walls should be stable for some range of parameters, but static solutions representing such intersections are intrinsically 2+1 dimensional (given that we ignore dependence on the coordinate of the string intersection), so they cannot be found from the truncation to 1+1 dimensions. However, they should be minimum energy solutions of the reduction of the WZ model to 2+1 dimensions. The energy density of static configurations in this reduced 2+1 dimensional theory is
$$\mathcal{H}=\frac{1}{4}\nabla \varphi \cdot \nabla \overline{\varphi }+|W^{\prime }(\varphi )|^2$$
(4)
where $`\nabla =(\partial _x,\partial _y)`$ with $`(x,y)`$ being cartesian coordinates for the two-dimensional space.
Let $`z=x+iy`$. The above expression for the energy density can then be rewritten as
$$\mathcal{H}=\left|\frac{\partial \varphi }{\partial z}-e^{i\alpha }\overline{W^{\prime }}\right|^2+2\mathrm{Re}\left(e^{-i\alpha }\frac{\partial W}{\partial z}\right)+\frac{1}{2}J(z,\overline{z})$$
(5)
where $`\alpha `$ is an arbitrary phase, and
$$J(z,\overline{z})=\left(\frac{\partial \varphi }{\partial \overline{z}}\frac{\partial \overline{\varphi }}{\partial z}-\frac{\partial \varphi }{\partial z}\frac{\partial \overline{\varphi }}{\partial \overline{z}}\right).$$
(6)
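The rearrangement (5) can be verified symbolically. In the sketch below (a check of ours) the symbols p, q, w stand for $`\partial \varphi /\partial z`$, $`\partial \varphi /\partial \overline{z}`$ and $`W^{\prime }`$, using $`\frac{1}{4}|\nabla \varphi |^2=\frac{1}{2}(|p|^2+|q|^2)`$ and $`\frac{1}{2}J=\frac{1}{2}(|q|^2-|p|^2)`$:

```python
import sympy as sp

p, q, w = sp.symbols('p q w', complex=True)
a = sp.symbols('a', real=True)
ph = sp.exp(sp.I * a)          # the phase factor e^{i alpha}

def sq(x):                     # |x|^2 written as x * conj(x)
    return x * sp.conjugate(x)

def re(x):                     # real part as (x + conj(x)) / 2
    return (x + sp.conjugate(x)) / 2

# right-hand side of (5) minus the original density (4): must vanish
lhs = sq(p - ph * sp.conjugate(w)) + 2 * re(w * p / ph) + (sq(q) - sq(p)) / 2
rhs = (sq(p) + sq(q)) / 2 + sq(w)
print(sp.expand(lhs - rhs))    # -> 0
```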
We now observe that
$$Q\equiv \frac{1}{2}\int 𝑑x𝑑yJ(z,\overline{z})=\int \mathrm{\Omega },$$
(7)
where $`\mathrm{\Omega }`$ is the 2-form on 2-space induced by the closed 2-form $`(i/4)d\overline{\varphi }\wedge d\varphi `$ on the target space (assumed here to be the complex plane). Since $`\mathrm{\Omega }`$ is real and closed, $`Q`$ is a real topological charge. We may assume without loss of generality that it is non-negative. Integration over space then yields the following expression for the energy
$$E=\int 𝑑x𝑑y\left|\frac{\partial \varphi }{\partial z}-e^{i\alpha }\overline{W^{\prime }}\right|^2+\mathrm{Re}\left[e^{-i\alpha }T\right]+Q,$$
(8)
where $`T`$ is the complex boundary term
$$T=2\int 𝑑x𝑑y\frac{\partial W}{\partial z}.$$
(9)
We thereby deduce the Bogomol’nyi-type bound
$$E\ge Q+|T|,$$
(10)
which is saturated by solutions of the ‘Bogomol’nyi’ equation
$$\frac{\partial \varphi }{\partial z}=e^{i\alpha }\overline{W^{\prime }}.$$
(11)
Before considering what solutions this equation may have, we shall first show that generic solutions preserve 1/4 supersymmetry. The fields of the WZ model reduced to 2+1 dimensions comprise a complex scalar $`\varphi `$ and a complex $`SL(2;\text{})`$ spinor field $`\psi ^\alpha `$; we use an $`SL(2;\text{})`$ notation in which
$$\partial _{\alpha \beta }=\delta _{\alpha \beta }\partial _t+(\sigma _1)_{\alpha \beta }\partial _x+(\sigma _3)_{\alpha \beta }\partial _y$$
(12)
and $`\psi _\alpha =\psi ^\beta \epsilon _{\beta \alpha }`$. Similarly, $`\partial ^{\alpha \beta }=\epsilon ^{\alpha \gamma }\epsilon ^{\beta \delta }\partial _{\gamma \delta }`$. The Lagrangian density is
$`\mathcal{L}`$ $`=`$ $`-{\displaystyle \frac{1}{8}}\partial ^{\alpha \beta }\varphi \partial _{\alpha \beta }\overline{\varphi }+{\displaystyle \frac{i}{2}}\overline{\psi }^\alpha \partial _{\alpha \beta }\psi ^\beta `$ (14)
$`+{\displaystyle \frac{i}{2}}\left[W^{\prime \prime }\psi ^\alpha \psi _\alpha +\overline{W^{\prime \prime }}\overline{\psi }^\alpha \overline{\psi }_\alpha \right]-|W^{\prime }|^2.`$ (14)
Note that the corresponding bosonic hamiltonian density is precisely (4). The action is invariant, up to a surface term, under the infinitesimal supersymmetry transformations
$`\delta \varphi `$ $`=`$ $`2iϵ_\alpha \psi ^\alpha `$ (15)
$`\delta \psi ^\alpha `$ $`=`$ $`-\partial ^{\alpha \beta }\varphi \overline{ϵ}_\beta -2\overline{W^{\prime }}\epsilon ^{\alpha \beta }ϵ_\beta ,`$ (16)
and their complex conjugates (we adopt the convention that bilinears of real spinors are pure imaginary).
We see from (15) that purely bosonic configurations are supersymmetric provided that the equation
$$\partial ^{\alpha \beta }\varphi \overline{ϵ}_\beta +2\overline{W^{\prime }}\epsilon ^{\alpha \beta }ϵ_\beta =0$$
(17)
admits a solution for some constant complex spinor $`ϵ`$. For a time-independent complex field $`\varphi `$ this equation is equivalent to
$$(1-\sigma _2)\overline{ϵ}(\overline{\partial }\varphi )+(1+\sigma _2)\overline{ϵ}(\partial \varphi )=2\overline{W^{\prime }}\sigma _3ϵ$$
(18)
where $`\partial \equiv \partial /\partial z`$ and $`\overline{\partial }\equiv \partial /\partial \overline{z}`$. For a field $`\varphi `$ satisfying (11) we deduce that $`ϵ`$ satisfies
$$\sigma _2\overline{ϵ}=\overline{ϵ},\sigma _3\overline{ϵ}=e^{-i\alpha }ϵ.$$
(19)
These constraints preserve just one of the four supersymmetries. Solutions of (11) are therefore 1/4 supersymmetric.
The supersymmetry Noether charge of the above model is the complex $`SL(2;\text{})`$ spinor
$$S=\frac{1}{2}\int 𝑑x𝑑y\left\{\left[\dot{\varphi }-\sigma _1\partial _x\varphi -\sigma _3\partial _y\varphi \right]\overline{\psi }-2i\sigma _2W^{\prime }\psi \right\}.$$
(20)
We can now use the canonical anticommutation relations of the fermion fields to compute the anticommutators. After restricting to static bosonic fields one finds that $`\{S,\overline{S}\}=H-\sigma _2Q`$. Thus the junction charge $`Q`$ appears as a central charge in the supertranslation algebra. The charge $`T`$ appears in the $`\{S,S\}`$ anticommutator and the positivity of the complete matrix of supercharges implies the bound (10). Of interest here is how the junction charge $`Q`$ appears in the D=4 supersymmetry algebra from which we started. It appears in the same way as would the $`P_3`$ component of the momentum and is associated with the constraint $`\mathrm{\Gamma }^{03}\zeta =\zeta `$. This constraint is equivalent to $`\mathrm{\Gamma }^{023}\gamma _5\zeta =\zeta `$ on the $`+1`$ eigenspace of $`\mathrm{\Gamma }^{013}`$ so, indirectly, we have found a field theory realization of the 1/4 supersymmetric charge configurations that we earlier deduced from the N=1 D=4 supersymmetry algebra alone.
We now return to the ‘Bogomol’nyi’ equation (11). If $`\varphi `$ is restricted to be a function only of $`x`$ then this equation reduces to the one studied in , which admits domain wall solutions parallel to the y-axis. Each domain wall is associated with a complex topological charge of magnitude $`|\int 𝑑x\partial _xW|`$ and phase $`\alpha `$. The question of the stability of domain wall junctions was addressed in by asking whether two domain walls parallel to the y-axis can, at least locally, fuse to form a third domain wall of lower energy. It was found that this is possible only if their phases differ; otherwise, stability is marginal. Given that the energetics allows the formation of an intersection, we would like to find the static intersecting domain wall solution to which the system relaxes. Such solutions must depend on both $`x`$ and $`y`$ (equivalently, on both $`z`$ and $`\overline{z}`$), and hence are much harder to find.
To simplify our task, we shall consider the simple quartic superpotential,
$$W(\varphi )=-\frac{1}{4}\varphi ^4+\varphi .$$
(21)
This has three critical points, at $`\varphi =1,\omega ,\omega ^2`$, and a $`\text{}_3`$ symmetry permuting them. There are therefore three possible domains and three types of domain wall separating them. The Bogomol’nyi equation corresponding to this superpotential is
$$\frac{\partial \varphi }{\partial z}=1-\overline{\varphi }^3.$$
(22)
We have set the phase factor $`e^{i\alpha }=1`$, since the phase can be removed by a redefinition of $`z`$. This equation is invariant under the $`\mathbb{Z}_3`$ action: $`(z,\varphi )\to (\omega z,\omega \varphi )`$, so we are led to seek a $`\mathbb{Z}_3`$ invariant solution such that $`\varphi \to 1`$ as one goes to infinity inside the sector $`-\frac{\pi }{6}<\mathrm{arg}z<\frac{\pi }{6}`$, subject to the condition that $`\mathrm{arg}\varphi =\mathrm{arg}z`$ on the boundary. By symmetry $`\varphi `$ must vanish at the origin and so $`\varphi \sim z`$ for small $`z`$. Given that a stable static triple intersection exists, there should also exist meta-stable networks of domain walls . For example, one may imagine a static lattice consisting of hexagonal domains, rather like graphite. The vertices form triple intersections and one may consistently label the hexagons of the array with $`1,\omega ,\omega ^2`$, in such a way that no two domains which touch along a common edge carry the same label. The evolution of networks of domain walls has been studied numerically in . We believe that it would be fruitful to study the WZ model in this context.
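A crude way to exhibit the junction numerically (our sketch, not a method from the text) is to note that solutions of (22) also satisfy the second-order field equation $`\nabla ^2\varphi =-12\overline{\varphi }^2(1-\varphi ^3)`$, obtained by applying $`\partial /\partial \overline{z}`$ to (22) and using $`\nabla ^2=4\partial ^2/\partial z\partial \overline{z}`$, which can be relaxed on a grid. Box size, grid and iteration count below are illustrative choices, and boundary values are fixed to the nearest vacuum in each angular sector:

```python
import numpy as np

N, L = 129, 8.0
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x, indexing="ij")
Z = X + 1j * Y
h = x[1] - x[0]

def nearest_vacuum(z):
    """Cube root of unity closest in angle to the direction of z."""
    k = np.round(np.angle(z) / (2.0 * np.pi / 3.0))
    return np.exp(2j * np.pi * k / 3.0)

phi = nearest_vacuum(Z)                  # initial guess; edges stay fixed
for _ in range(20000):                   # Jacobi relaxation sweeps
    rhs = -12.0 * np.conj(phi) ** 2 * (1.0 - phi ** 3)
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              - h * h * rhs[1:-1, 1:-1])
# phi vanishes near the origin and interpolates between the three vacua
# across the walls, as expected for a triple junction.
```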
It is well known that topological defects such as strings and domain walls admit wavelike excitations travelling along them at the speed of light. The domain wall junctions considered here are no exception. One easily checks that the D=4 WZ equations are satisfied if $`\varphi (z)`$ solves our Bogomol’nyi equation (11) but is also allowed to have arbitrary dependence upon either $`t-w`$ or $`t+w`$, where $`w`$ is the third space coordinate on which we reduced to get the (2+1) dimensional model. However, only one choice preserves supersymmetry. To see this we note that the $`SL(2;\text{})`$-invariant condition for preservation of supersymmetry in the unreduced D=4 WZ model is
$$\partial ^{\alpha \dot{\beta }}\varphi \overline{ϵ}_{\dot{\beta }}+2\overline{W^{\prime }}\epsilon ^{\alpha \beta }ϵ_\beta =0.$$
(23)
Given that the reduced D=3 equation (17) is satisfied, and that the spinor $`ϵ`$ satisfies (19), we then deduce that
$$\partial _+\varphi \overline{ϵ}=0$$
(24)
where $`\partial _\pm =\partial _t\pm \partial _w`$, the sign depending on the choice of conventions. Thus, we again have 1/4 supersymmetry if $`\partial _+\varphi =0`$ but no supersymmetry if $`\partial _{-}\varphi =0`$. This result is not unexpected because we saw earlier that the junction charge $`Q`$ appears in the D=4 supersymmetry algebra in the same way as does $`P_3`$.
Note that since an individual domain wall preserves 1/2 supersymmetry its low energy dynamics must be described by a (2+1)-dimensional supersymmetric field theory with two supersymmetries (corresponding to N=1). The two components of the spinor field of this effective theory are the coefficients of two Nambu-Goldstone fermions associated with the broken supersymmetries. The domain wall junction preserves only 1/4 supersymmetry, so there must be a total of three Nambu-Goldstone fermions localised on the intersecting domain wall configuration as a whole. Only two of these are free to propagate within the walls, so the third Nambu-Goldstone fermion must be localised on the string junction. This can also be seen by viewing the junction as a 1/2-supersymmetric defect on a given wall. The fact that half of the wall’s supersymmetry is preserved means that the junction’s low energy dynamics is described by a (1,0)-supersymmetric (1+1) dimensional field theory. This theory is chiral with one fermion that is either left-moving or right-moving; let us declare it to be left-moving. This fermion is the Nambu-Goldstone fermion associated with the fact that the junction also breaks half the wall’s supersymmetry. Its bosonic partner under (1,0) supersymmetry must also be left-moving. It follows that right-moving waves are supersymmetric whereas left-moving ones are not, precisely as we deduced above from other considerations.
Now that we have a good understanding of the pattern of supersymmetry breaking in the WZ model we return to the simpler model discussed earlier with one real scalar field. This model has an N=1 supersymmetrization in (2+1) dimensions, with $`V=4(W^{\prime })^2`$, obtained by restricting all quantities in the N=2 model discussed above to be real. Taking $`\partial _y\varphi =0`$ we then find that solutions of (3) are supersymmetric, with the real 2-component spinor $`ϵ`$ an eigenspinor of $`\sigma _3`$. This might seem paradoxical in view of the fact that the N=1 D=3 supertranslation algebra admits no central charges, of either scalar or vector type, that are algebraically independent of the 3-momentum. The resolution is that the anticommutator of supersymmetry charges $`S_\alpha `$ is
$`\{S_\alpha ,S_\beta \}`$ $`=`$ $`\delta _{\alpha \beta }H`$ (25)
$`+`$ $`(\sigma _1)_{\alpha \beta }\left(P_x+T_y\right)+(\sigma _3)_{\alpha \beta }\left(P_y-T_x\right)`$ (26)
where $`H`$ is the Hamiltonian, $`𝐏`$ is the field 2-momentum and $`𝐓=\int d^2x\nabla W`$ is a 2-vector topological charge (the corresponding algebra of currents was discussed in ). For static solutions $`𝐏`$ vanishes, while $`\partial _yW`$ vanishes for solutions with $`\partial _y\varphi =0`$. For such solutions we have
$$\{S,S\}=H+\sigma _3T_x.$$
(27)
It follows that $`H\ge |T_x|`$. Field configurations that saturate this bound preserve 1/2 the supersymmetry and are associated with eigenspinors of $`\sigma _3`$, as claimed. An intersecting domain wall solution in this N=1 (2+1)-dimensional model cannot satisfy (3) (because its only static solutions are the planar domain walls) and this means that it cannot be supersymmetric. In contrast to the model with a complex scalar, one cannot use supersymmetry to argue for the stability of domain wall junctions in a model with only one real scalar field.
Acknowledgements: GWG thanks Martin Barlow and Alberto Farina for informing him of their, and related, work on domain walls. We are also grateful for conversations with Paul Shellard and David Stuart.
# A Sunyaev-Zel’dovich map of the massive core in the luminous X-ray cluster RXJ1347-1145
## 1 Introduction
The hot intergalactic gas ($`10^6`$–$`10^8`$ K) is, together with the galaxies themselves and the gravitational effects on background objects, one of the tools used to derive mass distributions within clusters of galaxies. It can be detected at X-ray wavelengths via its bremsstrahlung emission. From submillimeter to centimeter wavelengths, the cosmic microwave background (CMB) blackbody spectrum is distorted in the direction of the cluster by the so-called Sunyaev-Zel’dovich effect (SZ, Sunyaev & Zel’dovich (1972)). This characteristic distortion is due to the inverse Compton scattering of the CMB photons by the intracluster electrons (see Birkinshaw (1998) for a detailed review of the SZ effect).
In this letter, we report the SZ measurement of the X-ray cluster RXJ1347-1145 with the ground based Diabolo millimeter instrument. This cluster has been observed with the ROSAT-PSPC and HRI instruments by Schindler et al. (schindler95 (1995), schindler97 (1997)). At a redshift of $`z=0.451`$, it appears as the most luminous X-ray cluster ($`L_{Bol}=21\times 10^{45}`$ ergs s<sup>-1</sup>) and so far one of the most massive ($`M_{tot}^{Xray}(r<1Mpc)=5.8\times 10^{14}`$ M). It is also a relatively hot and very dense cluster (temperature: $`T_e=9.3\pm 1`$ keV, central density: $`n_0=0.094\pm 0.004`$ cm<sup>-3</sup>). Optical studies of the gravitational lensing effects toward RXJ1347-1145 have also been performed by Fischer & Tyson (1997) and Sahu et al. (1998). The results have pointed out a discrepancy between the total masses obtained from the optical and the X-ray data, the surface lensing mass toward the core ($`r<240`$ kpc) being one to three times higher than the X-ray mass estimates. Because the SZ effect also directly probes the projected gas mass, which is not the case for X-ray masses, the comparison with SZ measurements might help to discriminate.
In section 2, we describe the Diabolo instrument and our observations of RXJ1347-1145. The data reduction is explained in section 3. In the last sections we present the values of the physical parameters derived from the data fits.
## 2 Observations
Diabolo is a millimeter photometer which provides an angular resolution of about 23” when installed at the focus of the IRAM 30 meter radio telescope at Pico Veleta (Spain). It uses two wavelength channels centered at about 1.2 and 2.1 mm. The detectors are bolometers cooled to 0.1 K with an open cycle <sup>4</sup>He-<sup>3</sup>He dilution refrigerator (Benoit et al. benoit98 (1998)). Two thermometers associated with a heater and a PID digital control system are used to regulate the temperature of the 0.1 K plate. There are three adjacent bolometers per channel, arranged in an equilateral triangle at the focus of the telescope. For a given channel, each bolometer is coaligned with one bolometer of the second channel, both looking toward the same sky direction. Detections of the SZ effect have already been achieved with Diabolo on nearby clusters (A2163, 0016+16, A665) with a single large throughput bolometer per channel at 30” resolution. The experimental setup is described in Désert et al. (desert98 (1998)). The only differences of the present configuration with the one described in that paper are the increase of the number of bolometers per wavelength channel and a slight decrease of the beam FWHM from 30” to 23”. With three bolometers at the focus of the telescope, no detector lies on the central optical axis anymore. The 30 meter telescope focus being of Nasmyth type, the rotation of the field has to be taken into account in the sky map reconstruction.
RXJ1347-1145 was observed in December 1997. The center of our observations is the ROSAT-HRI X-ray emission center reported by Schindler et al. (schindler97 (1997)): $`\alpha _{2000}=13^\mathrm{h}47^\mathrm{m}31^\mathrm{s}`$, $`\delta _{2000}=-11^{}45^{}11\mathrm{"}`$. The observations were performed using the wobbling secondary mirror of the IRAM telescope at a frequency of 1 Hz and with a modulation amplitude of 150”. An elementary observation sequence is a $`120^{\prime \prime }\times 55^{\prime \prime }`$ map in right ascension-declination coordinates, of 277 seconds duration each. This is obtained using the right ascension drift provided by the Earth rotation, so that the telescope could be kept fixed during the measurement. This was done to minimize microphonic noise and electromagnetic influences from the motors driving the IRAM 30m antenna. The map of the cluster is obtained by stepping in declination between two consecutive lines. The line length is 120” with a step of 5”. The wobbling is horizontal, thus not aligned with the scan direction. However, the wobbling amplitude is large enough for the reference field to be always far out of the cluster. In order to remove systematic signal drifts produced by the antenna environment, we used alternately the positive and negative beam to map the cluster. We performed 208 such individual maps of the cluster, for a total duration of 16 hours.
Another target of Diabolo’s 1997 run was the direction of the decrement detected at 8.44 GHz by Richards et al. (richards97 (1997)). We refer to this source as VLA1312+4237 in the following. Richards et al. (richards97 (1997)) measured a flux decrement of $`13.9\pm 3.3\mu `$Jy in a 30” beam. The presence of two quasars in this direction led them to claim the possible existence of a cluster at a redshift of $`z=2.56`$. Campos et al. (campos98 (1998)) have reported the detection of a concentration of Ly$`\alpha `$-emitting candidates around the quasars. They argued that the probability for such a clustering to be random is $`5\times 10^{-5}`$. Our pointing direction was $`\alpha =13^\mathrm{h}12^\mathrm{m}17^\mathrm{s}`$, $`\delta =42^{}37^{}30^{\prime \prime }`$. We performed 287 individual maps on this target for a total time of about 20 hours.
## 3 Data reduction and calibration
The reduction procedure includes the following main steps: (i) We remove cosmic ray impacts. (ii) A synchronous demodulation algorithm is applied, taking into account the wobbling secondary frequency and amplitude. (iii) We remove from the 2.1mm bolometer time line the signal which is correlated with the 1.2mm bolometer looking at the same sky pixel; this correlated signal is mainly due to the atmospheric emission, whose spectral color is very different from that of the SZ effect (a sketch of this step is given below). (iv) Correction for opacity is done from the bolometer total power measurements and its calibration by skydips. (v) To eliminate the low frequency detector noises, a baseline is subtracted from each line of the map. The baseline is a first-degree polynomial. It is fitted to 60% of the data points: 30% at each end of the line. (vi) Each map is then resampled on a regular right-ascension/declination grid, taking into account the field rotation in the Nasmyth focal plane. (vii) An average map is computed for each bolometer. Since the weather conditions were not permanently ideal, the noise quality of the individual maps is not homogeneous, particularly at 1.2 mm. We thus exclude from the average the maps whose rms pixel-to-pixel fluctuation is larger than 1.5 times the median rms value of all the individual maps. (viii) A single map is then produced for each channel (1.2mm and 2.1mm) by coaddition of the three bolometer average maps.
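For illustration, step (iii) amounts to a linear decorrelation of the two co-pointed time lines; a minimal sketch (the actual pipeline may differ in detail):

```python
import numpy as np

def decorrelate_atmosphere(s_21mm, s_12mm):
    """Remove from the 2.1 mm time line the component correlated with the
    co-pointed 1.2 mm bolometer, dominated by atmospheric emission."""
    d12 = s_12mm - s_12mm.mean()
    slope = np.dot(d12, s_21mm - s_21mm.mean()) / np.dot(d12, d12)
    return s_21mm - slope * d12
```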
During the run, pointing verifications and mapping of reference sources were performed. We used the planet Mars as a calibration target. The apparent angular diameter of Mars was 5”, so that we can consider it a point source with respect to Diabolo’s beam. The accuracy of the absolute calibration obtained is of the order of 25% at 1.2mm and 15% at 2.1mm. Mars observations are also used for the characterization of Diabolo’s beams. The measured FWHM are 24” and 22” at 1.2mm and 2.1mm respectively. Mars was observed in an azimuth-elevation mapping mode with a scanning speed slower than the natural drift speed of the cluster observation mode. This latter speed is fast enough compared to the wobbler period to significantly spread the signal in the scanning direction (i.e. right ascension). The resulting beam FWHM for the cluster mode along this direction is 28”. It has been experimentally determined by observation of a quasar lying at about the same declination as the cluster.
## 4 Results
The final map of RXJ1347-1145 at 2.1mm is shown in Fig. 1. The X-ray contours have been overplotted. The average right-ascension profile at 2.1mm is plotted in Fig. 2. The profile obtained for the VLA1312+4237 direction using the same data processing has been overplotted. The map and the profiles have been smoothed with a gaussian filter of 25” FWHM to maximize the signal to noise ratio. The 2.1 mm RXJ1347-1145 map presents a very strong decrement. For a thermal SZ effect this corresponds to a Comptonization parameter of the order of $`10^{-3}`$. The decrement that we measure is not centered on the cluster X-ray maximum. We will show in the analysis that this effect can be explained by the superposition of the SZ decrement from the intracluster gas and a positive emission from a known radio source slightly shifted west of the cluster center.
We have no detection in the direction of VLA 1312+4237. Our $`3\sigma `$ upper limit is $`y<1.5\times 10^{-4}`$. This is actually compatible with the decrement measured by Richards et al. (richards97 (1997)), which translates to a central Comptonization parameter of the order of $`7\times 10^{-5}`$ for a thermal SZ effect. If this decrement is in fact due to a kinetic SZ effect, then we expect a signal at 2.1mm equivalent to a thermal SZ effect of $`y_c=1.4\times 10^{-4}`$, still within our $`3\sigma `$ limit.
Actually, we have used the VLA1312+4237 data set to obtain a reliable assessment of the error bars on RXJ1347-1145. The individual maps have been averaged over increasing durations to evaluate the effective scatter of the average signal over independent data sets. The maximum duration that could be checked with this method is about 5 hours, corresponding to an average of 64 individual maps. We have checked that for all bolometers the rms pixel noise decreases as the inverse square root of the integration time. The error bars extrapolated from this analysis to longer integration times are consistent with the error bars derived from the internal scatter of the data averaged for RXJ1347-1145. The typical sensitivity reached in the 2.1 mm channel is of the order of 1 mJy in a 25” beam.
## 5 Analysis and discussion
In the following, we have used for the intracluster gas density a spherical $`\beta `$ model with the parameter values derived from the X-ray analysis of Schindler et al. (schindler97 (1997)): core radius $`r_c=8.4`$ arcsec (57 kpc), $`\beta =0.56`$, central density $`n_0=0.094`$ cm<sup>-3</sup>, and temperature $`T_e=9.3`$ keV. We choose to cut off this distribution at a radial distance $`r_{cut}=15r_c`$. We also assume the same cosmological parameters, $`H_0=50`$ km/s/Mpc and $`\mathrm{\Omega }_0=1`$ ($`\mathrm{\Lambda }=0`$). With such a model the measured SZ sky map reads:
$$I(\overline{\nu },\stackrel{}{\mathrm{\Omega }})=y_c\int \tau (\nu )SZ(\nu ,T_e)𝑑\nu \int P(\stackrel{}{\mathrm{\Omega }}^{})L(\stackrel{}{\mathrm{\Omega }}-\stackrel{}{\mathrm{\Omega }}^{})𝑑\stackrel{}{\mathrm{\Omega }}^{}$$
(1)
where $`y_c={\displaystyle \frac{k}{m_ec^2}}\sigma _T\int T_en_e(r)𝑑l`$ is the Comptonization parameter towards the cluster center, $`n_e(r)=n_0(1+(r/r_c)^2)^{-3\beta /2}`$ being the $`\beta `$-model radial distribution of the gas density. $`\tau (\nu )`$ is the normalized Diabolo band spectral efficiency (given in Desert et al. desert98 (1998)). $`SZ(\nu ,T_e)`$ is the spectral density of the thermal SZ distortion for a unit Comptonization parameter, including the weak relativistic dependence on $`T_e`$ (see Pointecouteau, Giard & Barret pointecouteau98 (1998)). In fact, for a 9.3 keV cluster, the use of relativistic spectra avoids errors on the SZ flux estimates of 45% and 10% at 1.2 and 2.1 mm respectively. We did not include any kinetic SZ contribution, which is generally weak (Birkinshaw (1998)). $`k`$, $`m_e`$, $`c`$ and $`\sigma _T`$ are respectively the Boltzmann constant, the electron mass, the speed of light and the Thomson cross-section. $`P(\stackrel{}{\mathrm{\Omega }})`$ and $`L(\stackrel{}{\mathrm{\Omega }})`$ are the normalized angular distributions of the cluster and the experimental beam respectively. $`P(\stackrel{}{\mathrm{\Omega }})`$ has no analytical expression; it is computed numerically by integration of the gas density beta profile along the line of sight.
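As a cross-check, the central Comptonization parameter implied by these X-ray parameters can be evaluated directly; the constants and the straightforward numerical integration below are our choices:

```python
import numpy as np

kpc_cm = 3.0857e21                       # cm per kpc
sigma_T = 6.652e-25                      # Thomson cross-section, cm^2
n0, rc, beta = 0.094, 57.0 * kpc_cm, 0.56
Te_keV, mec2_keV = 9.3, 511.0

# integrate n_e along the central line of sight out to r_cut = 15 rc
l = np.linspace(0.0, 15.0 * rc, 100000)
ne = n0 * (1.0 + (l / rc) ** 2) ** (-1.5 * beta)
y_c = (Te_keV / mec2_keV) * sigma_T * 2.0 * np.trapz(ne, l)
print(y_c)   # ~ 7e-4, the X-ray expectation quoted in the conclusion
```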
Two radio sources are known from the NRAO VLA Sky Survey in the neighborhood of the cluster (Condon et al. (1998)). One, at $`(\alpha ,\delta )=(13^\mathrm{h}47^\mathrm{m}30.67^\mathrm{s},-11^{\circ }45^{\prime }8.6^{\prime \prime })`$, is very close to the cluster center and is likely to correspond to the central cD galaxy. Komatsu et al. (komatsu98 (1998)) have compiled observations of this radio source at 1.4 GHz, 28.5 GHz and 105 GHz. They have derived the following power law for the radio source spectrum: $`F_\nu (band)=(55.7\pm 1.0)\left(\frac{\nu }{1GHz}\right)^{-0.47\pm 0.02}`$ mJy. The extrapolated millimeter fluxes are then $`F_\nu (1.2\mathrm{mm})=3.7\pm 0.4`$ mJy/beam and $`F_\nu (2.1\mathrm{mm})=4.9\pm 0.5`$ mJy/beam.
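Evaluating this power law at the nominal band-center frequencies gives values slightly above those quoted, which presumably fold the spectrum through the actual Diabolo bandpasses; a minimal sketch (Python; the 250 GHz and 143 GHz band centers are our assumption, not values from the text):

```python
for lam_mm, nu_GHz in ((1.2, 250.0), (2.1, 143.0)):
    F = 55.7 * nu_GHz ** -0.47          # mJy, power law quoted above
    print(f"{lam_mm} mm: {F:.1f} mJy")  # ~4.2 and ~5.4 mJy at band center
```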
To properly analyze the data, we have performed a realistic simulation of the Diabolo observations on the sky map of the SZ model (Eq. 1). The whole set of observed individual maps has been simulated, taking into account the 150” wobbling amplitude and the proper sky rotation at the Nasmyth focus. The simulated data have been processed through the same pipeline as the observed data set to obtain averaged model maps.
Finally, using this simulated data set, we have simultaneously fitted the SZ decrement amplitude and the point source flux on the 2.1mm profile, with $`y_c`$ and $`F_\nu (2.1\mathrm{mm})`$ as free parameters. The best fit parameters are $`y_c=(12.7_{-3.1}^{+2.9})\times 10^{-4}`$ and $`F_\nu (2.1\mathrm{mm})=6.1_{-4.8}^{+4.3}`$ mJy/beam, with a reduced $`\chi ^2`$ of 1.3. Results are given at the 68% confidence level. The absolute calibration error, 25% and 15% at 1.2mm and 2.1mm respectively, is not included. $`F_\nu (2.1\mathrm{mm})`$ is compatible with the value expected from radio observations. The best fit is overplotted on the data (see Fig. 2). It reproduces the asymmetric profile. This asymmetry is due to the point source contribution, which fills part of the SZ decrement.
In a second step, we have fixed the radio point source flux at the expected value deduced from Komatsu et al. (komatsu98 (1998)), $`F_\nu (2.1mm)=4.9`$ mJy/beam, and we have fitted with a maximum likelihood method both the central Comptonization parameter $`y_c`$ and the angular core radius $`\theta _c`$. We have found $`y_c=(13.2_{-2.6}^{+0.2})\times 10^{-4}`$ and $`\theta _c=7.2_{-7.2}^{+7.3}`$ arcsec, with a reduced $`\chi ^2`$ of 1.2. The Comptonization parameter value is consistent with the previous one. The angular core radius is consistent with the X-ray value within the 68% confidence level.
### 5.1 Conclusion
We confirm through our SZ detection that RXJ1347-1145 is an extremely massive and hot cluster. We have measured the deepest SZ effect ever observed. It corresponds to a very large Comptonization parameter, $`y_c=(12.7_{-3.1}^{+2.9})\times 10^{-4}`$. This is almost twice the value expected from the X-ray data, $`y_{Xray}=(7.3\pm 0.7)\times 10^{-4}`$, if we use the cluster gas parameters derived by Schindler et al. (schindler97 (1997)). Although our result points to a mass higher than the X-ray mass, as is the case for gravitational lensing measurements, the uncertainties do not allow us to firmly conclude that there is a discrepancy. The X-ray flux toward the cluster center is actually dominated by the very strong cooling flow in the core. The average temperature of the gas which contributes to the SZ effect is thus likely to be higher than the temperature derived from the X-ray data, $`T_e=9.3\pm 1`$ keV. The gas temperature needed to produce the thermal SZ effect we have observed is $`T_e=16.2\pm 3.8`$ keV, assuming all other parameters are kept unchanged. In a re-analysis which takes into account the heterogeneity of the cluster, Allen and Fabian (allen98 (1998)) have actually derived for this cluster a very high gas temperature, $`T_e=26.4_{-12.3}^{+7.8}`$ keV, which is indeed consistent with our measurement. Under the hypothesis of hydrostatic equilibrium, a higher gas temperature implies a higher total cluster mass, thus decreasing the gas fraction if all other cluster parameters are kept unchanged. For $`T_e=16.2\pm 3.8`$ keV the total mass of RXJ1347-1145 within 1 Mpc is considerable, $`M_{tot}(r<1\mathrm{M}\mathrm{p}\mathrm{c})=(1.0\pm 0.3)\times 10^{15}`$ $`M_{\odot }`$, and the corresponding gas fraction is $`f_{gas}(r<1\mathrm{M}\mathrm{p}\mathrm{c})=(19.5\pm 5.8)\%`$.
We are very grateful to the IRAM staff at Pico Veleta for their help during observations. We thank Laurent Ravera for very useful comments during the data analysis phase. Diabolo is supported by the Programme National de Cosmologie, Institut National pour les Sciences de l’Univers, Ministère de l’Education National de l’Enseignement supérieur et de la Recherche, CESR, CRTBT, IAS-Orsay and LAOG. We thank the anonymous referee for numerous comments and corrections which allowed us to considerably improve the paper.
FIGURE LEGENDS
Figure 1: 2.1 mm Diabolo map of RXJ1347-1145, with the X-ray contours overplotted.
Figure 2: Average right-ascension profile of RXJ1347-1145 at 2.1 mm, with the VLA1312+4237 profile and the best-fit model overplotted.
no-problem/9905/hep-ph9905434.html
# Jet Production with Polarized Beams at Next-to-Leading Order
## Abstract
Jet production cross-sections in polarized proton-proton and electron-proton collisions are studied to next-to-leading order accuracy. Phenomenological results are presented for RHIC and HERA kinematics.
The last decade has seen an important advance in our understanding of polarized nucleon structure functions as a result of the analysis of deep inelastic scattering (DIS) data. Unfortunately, the use of DIS data alone does not allow an accurate determination of the polarized parton densities. This is true in particular for the gluon, since this quantity contributes to DIS in leading order (LO) only via the $`Q^2`$-dependence of the spin asymmetry ($`A_1^N`$).
At variance with DIS, collider physics offers a relatively large number of processes whose dependence upon the gluon density is dominant already at LO. The study of these processes is therefore crucial in order to measure this density in a direct way. Among them, jet production is an obvious candidate, because of the large rates.
In order to make reliable quantitative predictions for high-energy processes, it is crucial to determine the NLO QCD corrections to the Born approximation. The key issue here is to check the perturbative stability of the process considered. Only if the corrections are under control can a process that shows good sensitivity to, say, $`\mathrm{\Delta }g`$ at the lowest order be regarded as a genuine probe of the polarized gluon distribution and be reliably used to extract it from future data. NLO QCD corrections are expected to be particularly important for the case of jet-production, since it is only at NLO that the QCD structure of the jet starts to play a rôle in the theoretical description, providing for the first time the possibility to realistically match the experimental conditions imposed to define a jet.
The main purpose of this talk is, therefore, to study the perturbative stability and the phenomenological consequences of jet production cross-sections to NLO accuracy. We have implemented the NLO QCD corrections to jet production in polarized proton-proton and electron-proton (in the photoproduction regime) collisions by extending the unpolarized Monte Carlo code constructed in ref. to the case of polarized beams. As a result, we present a customized code with which it is possible to calculate any infrared-safe quantity corresponding to either single- or di-jet production to NLO accuracy. For the theoretical details about the implementation, we refer the reader to refs. .
The best way to analyze the effect of NLO corrections on the perturbative stability of an observable is to study the dependence of the full NLO result on the renormalization and factorization scales. Throughout we will set the two scales equal, i.e. $`\mu _R=\mu _F\equiv \mu `$, and vary $`\mu `$ as a way to quantify the theoretical uncertainty on the cross section. In fig. 1a we show the next-to-leading and leading order $`p_T`$-distributions for polarized $`pp`$ collisions for the three different scales: $`\mu _0,2\mu _0`$ and $`\frac{1}{2}\mu _0`$. Figure 1b is the corresponding plot for the unpolarized case. Clearly, the dependence on the scale is substantially reduced when going to next-to-leading order. The situation in the polarized case is indeed very similar to the unpolarized one. We have observed the same reduction in the scale dependence for other single and double-differential observables. It is therefore sensible to use our code to investigate a few phenomenological issues relevant to hadronic physics at RHIC.
In fig. 2, the one-jet asymmetry is shown as a function of $`p_T`$. The results have been obtained by choosing six different parametrizations of the polarized parton densities . Figure 2 clearly shows that the choice of the polarized parton densities induces an uncertainty on the theoretical results of more than two orders of magnitude. This enormous spread is basically due to the fact that at this energy the jet cross section is dominated by $`gg`$\- and $`qg`$-initiated parton processes. Therefore, and since the minimum value of the asymmetry observable at RHIC is quite small, the measurement of the polarized jet cross section at RHIC will be useful in order to rule out some of the polarized sets that are at present consistent with the data.
The Born results differ from those presented in fig. 2 by up to 20%, and the shape is also different. Therefore, NLO corrections give non-trivial information on the structure of the asymmetries.
Moving to the case of electron-proton collisions, it is clear that in order to obtain large statistics one should go to the photoproduction regime. In this case the cross-sections can be approximated as a convolution of the photon-proton cross sections with the Weizsäcker-Williams flux. The photon-proton cross section is given by a sum of two terms, denoted as the point-like and hadronic components. In order to compute the hadronic component one needs, besides the polarized parton distributions in the proton, also the polarized densities in the photon , which are completely unmeasured so far. To obtain a realistic estimate for the theoretical uncertainties due to these densities, we use the two very different scenarios considered in ref. .
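The photoproduction convolution just described can be sketched concretely; a minimal sketch (Python) using the improved Weizsäcker-Williams flux with a placeholder $`\gamma p`$ cross section — the $`y`$ cuts, the choice $`Q^2_{max}=1`$ GeV$`^2`$ and the toy cross section are assumptions, not values from the text:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.0 / 137.036
m_e   = 0.511e-3                        # electron mass [GeV]

def ww_flux(y, Q2_max=1.0):
    """Improved Weizsaecker-Williams photon flux f_gamma(y), with the
    kinematic lower limit Q2_min = m_e^2 y^2 / (1 - y)."""
    Q2_min = m_e**2 * y**2 / (1.0 - y)
    return alpha / (2.0 * np.pi) * (
        (1.0 + (1.0 - y)**2) / y * np.log(Q2_max / Q2_min)
        + 2.0 * m_e**2 * y * (1.0 / Q2_max - 1.0 / Q2_min))

def sigma_ep(s_ep, sigma_gp, y_min=0.2, y_max=0.85):
    """Electron-proton cross section as the convolution of the flux with
    a gamma-p cross section evaluated at W = sqrt(y * s_ep)."""
    val, _ = quad(lambda y: ww_flux(y) * sigma_gp(np.sqrt(y * s_ep)),
                  y_min, y_max)
    return val

# toy example: constant gamma-p cross section at HERA-like energies
print(sigma_ep(4 * 27.5 * 820.0, lambda W: 100.0))
```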
We have studied the scale dependence of several observables at NLO and found that the variation of the scale induces a variation of the cross section of the order of 10% over most of the $`\eta `$ range considered. The scale dependence is also strongly reduced when going from Born to NLO results.
We now turn to the problem of studying the dependence of our results upon the available proton and photon polarized parton densities. In fig. 3 we present the results for the asymmetry in terms of the pseudorapidity of the single-inclusive jet at polarized HERA, obtained using GRSV STD as the polarized proton set and both polarized photon sets. The Born and NLO results are both shown. We can see that in the large $`\eta `$ region the difference induced by the choice of the two photon sets is extremely large. On the other hand, towards negative $`\eta `$ values this difference tends to vanish. This is because in that region the point-like component, which does not depend upon photon densities, is the dominant one. We can also observe that in the positive $`\eta `$ region there is a very small difference between the NLO and LO results, while for negative $`\eta `$’s the radiative corrections are positive and reduce the asymmetry considerably.
In fig. 4, we show the curves obtained by fixing the polarized photon set to SV MAX $`\gamma `$, and by considering the various polarized proton sets. As expected, the largest differences can be seen at negative $`\eta `$ values, where theoretical predictions can vary by about one order of magnitude.
It follows that, if high luminosity is collected, it will be possible to get information on the polarized parton densities in the proton. As far as the polarized photon densities are concerned, if the “real” densities are similar to those of the set SV MIN $`\gamma `$, it will be extremely hard to even obtain experimental evidence of a hadronic contribution to the polarized cross section. On the other hand, a set like SV MAX $`\gamma `$ appears to give measurable cross sections.
In conclusion, we reported the calculation of jet cross-sections in polarized hadron-hadron and electron–hadron (in the photoproduction regime) collisions, which is accurate to NLO in perturbative QCD. For all the observables considered, it has been found that the scale dependence is smaller than that of the LO result. The inclusion of the NLO terms changes the size of the asymmetries by 20% in the case of proton-proton collisions at RHIC, and considerably reduces them in the case of electron-proton collisions at HERA in the pseudorapidity region where the contamination from the hadronic photon contribution is minimal. From our analysis, it is clear that the inclusion of the NLO corrections is indispensable in order to have reliable quantitative calculations.
Measurements of jet cross-sections with polarized beams at RHIC and HERA will be fundamental tools in order to extract the polarized gluon distribution in the proton. It is, therefore, worth emphasizing that the theoretical tools for the future NLO analysis of the forthcoming data are already available.
It is a pleasure to thank S. Frixione, A. Signer and W. Vogelsang for enjoyable collaborations.
no-problem/9905/chao-dyn9905033.html
# FUZZY CONTROL OF CHAOS
## 1 Introduction
Chaos control exploits the sensitivity to initial conditions and to perturbations that is inherent in chaos as a means to stabilize unstable periodic orbits within a chaotic attractor. The control can operate by altering system variables or system parameters, and either by discrete corrections or by continuous feedback. Many methods of chaos control have been derived and tested \[Chen & Dong, 1993, Lindner & Ditto, 1995, Ogorzałek, 1993\]. Why then consider fuzzy control of chaos?
A fuzzy controller works by controlling a conventional control method. We propose that fuzzy control can become useful together with one of these other methods — as an extra layer of control — in order to improve the effectiveness of the control in terms of the size of the region over which control is possible, the robustness to noise, and the ability to control long period orbits.
In this paper, we put forward the idea of fuzzy control of chaos, and we provide an example showing how a fuzzy controller applying occasional proportional feedback to one of the system parameters can control chaos in Chua’s circuit.
## 2 Fuzzy Control
Fuzzy control \[Driankov et al., 1993, Terano et al., 1994\] is based on the theory of fuzzy sets and fuzzy logic \[Yager & Zadeh, 1991, Bezdek, 1993\]. The principle behind the technique is that imprecise data can be classified into sets having fuzzy rather than sharp boundaries, which can be manipulated to provide a framework for approximate reasoning in the face of imprecise and uncertain information. Given a datum, $`x`$, a fuzzy set $`A`$ is said to contain $`x`$ with a degree of membership $`\mu _A(x)`$, where $`\mu _A(x)`$ can take any value in the domain $`[0,1]`$. Fuzzy sets are often given descriptive names (called linguistic variables) such as FAST; the membership function $`\mu _{\text{FAST}}(x)`$ is then used to reflect the similarity between values of $`x`$ and a contextual meaning of FAST. For example, if $`x`$ represents the speed of a car in kilometres per hour, and FAST is to be used to classify cars travelling fast, then FAST might have a membership function equal to zero for speeds below 90 km/h and equal to one for speeds above 130 km/h, with a curve joining these two extremes for speeds between these values. The degree of truth of the statement ‘the car is travelling fast’ is then evaluated by reading off the value of the membership function corresponding to the car’s speed.
Logical operations on fuzzy sets require an extension of the rules of classical logic. The three fundamental Boolean logic operations, intersection, union, and complement, have fuzzy counterparts defined by extension of the rules of Boolean logic. A fuzzy expert system uses a set of membership functions and fuzzy logic rules to reason about data. The rules are of the form ‘if $`x`$ is FAST and $`y`$ is SLOW then $`z`$ is MEDIUM’, where $`x`$ and $`y`$ are input variables, $`z`$ is an output variable, and SLOW, MEDIUM, and FAST are linguistic variables. The set of rules in a fuzzy expert system is known as the rule base, and together with the data base of input and output membership functions it comprises the knowledge base of the system.
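As an illustration of these definitions, consider a minimal sketch (Python); the linear ramp between 90 and 130 km/h and the min/max operators are common choices, not ones prescribed by the text:

```python
import numpy as np

def mu_fast(v):
    """Membership of speed v (km/h) in FAST: 0 below 90 km/h,
    1 above 130 km/h, linear in between."""
    return float(np.clip((v - 90.0) / 40.0, 0.0, 1.0))

# Fuzzy counterparts of the Boolean operations (Zadeh operators)
f_and = min                        # intersection
f_or  = max                        # union
f_not = lambda mu: 1.0 - mu        # complement

v = 110.0
print(mu_fast(v))                            # 0.5: 'the car is travelling fast'
print(f_and(mu_fast(v), f_not(mu_fast(v))))  # 0.5, unlike Boolean A AND NOT A
```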
A fuzzy expert system functions in four steps. The first is fuzzification, during which the membership functions defined on the input variables are applied to their actual values, to determine the degree of truth for each rule premise. Next, under inference, the truth value for the premise of each rule is computed and applied to the conclusion part of each rule. This results in one fuzzy set to be assigned to each output variable for each rule. In composition, all of the fuzzy sets assigned to each output variable are combined together to form a single fuzzy set for each output variable. Finally comes defuzzification, which converts the fuzzy output set to a crisp (nonfuzzy) number.
A fuzzy controller may then be designed using a fuzzy expert system to perform fuzzy logic operations on fuzzy sets representing linguistic variables in a qualitative set of control rules — see Figure 1.
As a simple metaphor of fuzzy control in practice, consider the experience of balancing a stick vertically on the palm of one’s hand. The equations of motion for the stick (a pendulum at its unstable fixed point) are well-known, but we do not integrate these equations in order to balance the stick. Rather, we stare at the top of the stick and carry out a type of fuzzy control to keep the stick in the air: we move our hand slowly when the stick leans by a small angle, and fast when it leans by a larger angle. Our ability to balance the stick despite the imprecision of our knowledge of the system is at the heart of fuzzy control.
## 3 Techniques for Fuzzy Chaos Control
To control a system necessitates perturbing it. Whether to perturb the system via variables or parameters depends on which are more readily accessible to be changed, which in turn depends on what type of system is to be controlled — electronic, mechanical, optical, chemical, biological, etc. Whether to perturb continuously or discretely is a question of intrusiveness — it is less intrusive to the system, and less expensive to the controller, to perturb discretely. Only when discrete control is not effective might continuous control be considered.
Ott, Grebogi, and Yorke \[Ott et al., 1990\] invented a method of applying small feedback perturbations to an accessible system parameter in order to control chaos. The OGY method uses the dynamics of the linearized map around the orbit one wishes to control. Using the OGY method, one can pick any unstable periodic orbit that exists within the attractor and stabilize it. The control is imposed when the orbit crosses a Poincaré section constructed close to the desired unstable periodic orbit. Since the perturbation applied is small, it is supposed that the unstable periodic orbit is unaffected by the control.
Occasional proportional feedback \[Hunt, 1991, Lindner & Ditto, 1995\] is a variant of the original OGY chaos control method. Instead of using the unstable manifold of the attractor to compute corrections, it uses one of the dynamical variables, in a type of one-dimensional OGY method. This feedback could be applied continuously or discretely in time; in occasional proportional feedback it is applied discretely. Occasional proportional feedback exploits the strongly dissipative nature of the flows often encountered, enabling one to control them with a one-dimensional map. The method is easy to implement, and in many cases one can stabilize high period unstable orbits by using multiple corrections per period. It is a suitable method on which to base a fuzzy logic technique for the control of chaos, since it requires no knowledge of a system model, but merely an accessible system parameter.
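The control law just described reduces to a few lines; a minimal sketch (Python; variable names are ours):

```python
def opf_correction(peak, set_point, window, gain):
    """Occasional proportional feedback: when a sampled peak of the chosen
    dynamical variable falls inside a window centred on the set point,
    return a parameter correction proportional to the deviation."""
    deviation = peak - set_point
    if abs(deviation) < window / 2.0:
        return gain * deviation
    return 0.0                     # otherwise leave the system unperturbed
```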
## 4 An Example: Fuzzy Control of Chaos in Chua’s Circuit
Chua’s circuit \[Matsumoto, 1984, Kennedy, 1993\] exhibits chaotic behaviour that has been extensively studied, and whose dynamics is well known \[Madan, 1993\]. Recently, occasional proportional feedback has been used to control the circuit \[Johnson et al., 1993\]. The control used an electronic circuit to sample the peaks of the voltage across the negative resistance and, if a peak fell within a window centred about a set-point value, modified the slope of the negative resistance by an amount proportional to the difference between the set point and the peak value. The nonlinear nature of this system and the heuristic approach used to find the best set of parameters to take the system to a given periodic orbit suggest that a fuzzy controller that can include knowledge rules to achieve periodic orbits may provide significant gains over occasional proportional feedback alone.
We have implemented a fuzzy controller to control the nonlinearity of the nonlinear element (a three segment nonlinear resistance) within Chua’s circuit. The block diagram of the controller is shown in Figure 1. It consists of four blocks: knowledge base, fuzzification, inference and defuzzification. The knowledge base is composed of a data base and a rule base. The data base consists of the input and output membership functions (Figure 2). It provides the basis for the fuzzification, defuzzification and inference mechanisms. The rule base is made up of a set of linguistic rules mapping inputs to control actions. Fuzzification converts the input signals $`e`$ and $`\mathrm{\Delta }e`$ into fuzzified signals with membership values assigned to linguistic sets. The inference mechanisms operate on each rule, applying fuzzy operations on the antecedents and by compositional inference methods derives the consequents. Finally, defuzzification converts the fuzzy outputs to control signals, which in our case control the slope of the negative resistance $`\mathrm{\Delta }a`$ in Chua’s circuit (Figure 3). The fuzzification maps the error $`e`$, and the change in the error $`\mathrm{\Delta }e`$, to labels of fuzzy sets. Scaling and quantification operations are applied to the inputs. Table 1 shows the quantified levels and the linguistic labels used for inputs and output. The knowledge rules (Table 2) are represented as control statements such as ‘if $`e`$ is NEGATIVE BIG and $`\mathrm{\Delta }e`$ is NEGATIVE SMALL then $`\mathrm{\Delta }a`$ is NEGATIVE BIG’.
The normalized equations representing the circuit are
$`\dot{x}`$ $`=`$ $`\alpha \left(y-x-f(x)\right),`$
$`\dot{y}`$ $`=`$ $`x-y+z,`$
$`\dot{z}`$ $`=`$ $`-\beta y,`$ (1)
where $`f(x)=bx+\frac{1}{2}(a-b)(|x+1|-|x-1|)`$ represents the nonlinear element of the circuit. Changes in the negative resistance were made by changing $`a`$ by an amount
$$\mathrm{\Delta }a=\text{Fuzzy Controller Output}\times \text{Gain}\times a.$$
(2)
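For reference, the uncontrolled equations (1) integrate readily; a minimal sketch (Python) — the parameter values below are classic double-scroll choices and are our assumption, not values quoted in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 9.0, 14.286          # assumed chaotic-regime parameters
a, b = -8.0 / 7.0, -5.0 / 7.0      # slopes of the three-segment nonlinearity

def f(x):
    # nonlinear element of Eq. (1)
    return b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))

def chua(t, s):
    x, y, z = s
    return [alpha * (y - x - f(x)), x - y + z, -beta * y]

sol = solve_ivp(chua, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.01)
# sol.y traces the double-scroll attractor; a controller would perturb the
# slope a by an amount Delta-a, Eq. (2), at detected peaks of one variable
```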
We have performed numerical simulations, both in C and in Simulink, of Chua’s circuit controlled by the fuzzy logic controller. Figure 3 shows the whole control system in the form of a block diagram, including Chua’s circuit, the fuzzy controller, the peak detector, and the window comparator. Figure 4 gives a sample output of the fuzzy controller stabilizing an unstable period-1 orbit by applying a single correction pulse per cycle of oscillation. By changing the control parameters we can stabilize orbits of different periods. In Figure 5 we illustrate more complex higher period orbits stabilized by the controller. One can tune the fuzzy control over the circuit to achieve the type of response required in a given situation by modifying some or all of the rules in the knowledge base of the system.
Of course, in the case of Chua’s circuit the system equations are available and fuzzy logic is thus not necessary for control, but this simple example permits us to see the possibilities that fuzzy control provides, by allowing a nonlinear gain implemented in the form of knowledge based rules.
## 5 Conclusions
We have introduced the idea of using fuzzy logic for the control of chaos. Fuzzy logic controllers are commonly used to control systems whose dynamics is complex and unknown, but for expositional clarity here we have given an example of its use with a well-studied chaotic system. We have shown that it is possible to control chaos in Chua’s circuit using fuzzy control. Further work is necessary to quantify the effectiveness of fuzzy control of chaos compared with alternative methods, to identify ways in which to systematically build the knowledge base for fuzzy control of a particular chaotic system, and to apply the fuzzy controller to further chaotic systems.
no-problem/9905/cond-mat9905300.html
# Finite time and asymptotic behaviour of the maximal excursion of a random walk
## Abstract
We evaluate the limit distribution of the maximal excursion of a random walk in any dimension for homogeneous environments and for self-similar supports under the assumption of spherical symmetry. This distribution is obtained in closed form and is an approximation of the exact distribution comparable to that obtained by real space renormalization methods. Then we focus on the early time behaviour of this quantity. The instantaneous diffusion exponent $`\nu _n`$ exhibits a systematic overshooting of the long time exponent. Exact results are obtained in one dimension up to third order in $`n^{-1/2}`$. In two dimensions, on a regular lattice and on the Sierpiński gasket we find numerically that the analytic scaling $`\nu _n\simeq \nu +An^{-\nu }`$ holds.
Keywords: random walk, maximal excursion, finite size scaling, enumeration technique, Sierpiński gasket
The random walk (RW) on a lattice has long been studied due to its widespread applications in mathematics, physics, chemistry and other research areas. It turns out that despite the huge amount of accomplished work, it still remains a thriving research topic. Lots of results can be obtained in the continuum limit (Brownian motion) but results for RW on a lattice often yield drastically different behaviour — as it is the case for the winding angle distribution — or, at least, unusual finite time convergence properties. In the present work we investigate a central quantity for RW, the maximal excursion from the origin at time $`n`$, $`M_n=\mathrm{max}(|x_m|,0\le m\le n)`$. This random variable is of great interest in many practical purposes such as the control of pollution spread, propagation ranges of epidemics, tracer displacement in fluids, the radius of gyration of polymer chains , or of lattice animals or other extreme statistics. A great deal of work was devoted to the first passage time (FPT) statistics which is a closely related quantity. Nevertheless methods used to find the exact FPT distribution in one dimensional inhomogeneous environments do not help to get a closed form of the exact distribution of $`M_n`$. Except for the one dimensional case, only the leading order asymptotic expressions (as $`n\to \infty `$) are available. It was proved long ago by Erdős and Kac , that in this limit the distribution of $`M_n`$ coincides with that of the Brownian motion. This result appears as some kind of a central limit theorem. However, it does not deal with centered, reduced variables. Moreover it offers no practical access (for physically motivated purposes) to the convergence speed towards the limit law. The only global estimates available for $`M_n`$ are the laws of iterated logarithm of Khinchine and Chung for the one dimensional random walk, claiming that although all the distributions have the same limit form, the intrinsic uncertainty on $`M_n`$ increases with $`n`$. Hence, it is not clear what the finite time behaviour of the maximal excursion $`M_n`$ is. It is our aim to clarify this point.
In this article, we first derive the leading order expression for the distribution of $`M_n`$ in a generalized form. This expression is shown to apply also on self-similar structures. Then we proceed to the next leading order expansion for short time. In this regime, the first moment of $`M_n`$ scales as $`M_n\sim n^{\nu _n}`$, where $`\nu _n`$ is the effective instantaneous diffusion exponent, which tends to $`\nu `$ as $`n\to \infty `$ ($`\nu =1/2`$ on regular lattices, $`\nu =\mathrm{ln}2/\mathrm{ln}(d+3)`$ on the $`d`$-dimensional Sierpiński gasket). We show numerical evidence that the effective instantaneous diffusion exponent $`\nu _n`$ approaches the limiting value $`\nu `$ according to $`\nu _n-\nu \sim n^{-\nu }`$. This result is valid for both regular and self-similar lattices. We finally discuss this point in the context of other problems of statistical physics.
First let us briefly recall the formulae for the maximal excursion of a $`d`$-dimensional Brownian motion $`𝐫_t`$, that is $`M_t=\mathrm{max}(\|𝐫_u\|_2,u\le t)`$, where $`\|𝐫\|_2=\sqrt{\sum _ir_i^2}`$ is the Euclidean distance. The limit distribution is denoted by $`𝐏_d(a,t)=\mathrm{Pr}\{M_t<a\}`$. The calculation goes through the solution of the diffusion equation in $`d`$ dimensions with spherical symmetry and absorbing boundaries on the hypersphere of radius $`a`$. Let $`U(𝐫,t)`$ be the probability density function for the position vector $`𝐫`$ of the walker relative to the origin at time $`t`$, without ever crossing the hypersphere boundary at distance $`a`$. Then $`U(𝐫,t)`$ satisfies the diffusion equation
$$\partial _tU(𝐫,t)=\frac{1}{2d}\nabla _𝐫^2U(𝐫,t)$$
(1)
where $`\nabla _𝐫^2`$ is the $`d`$-dimensional Laplace operator. The diffusion constant is set as $`1/(2d)`$ so that the solution corresponds to a simple random walk on $`𝐙^d`$ with a time $`\tau `$ between steps and a lattice spacing $`\sqrt{\tau }`$ in the limit $`\tau \to 0`$. The boundary condition is that $`U(𝐫,t)=0`$ for $`\|𝐫\|_2=a`$, and the initial condition is
U(𝐫,0)=\delta (𝐫)=\frac{\delta _+(|𝐫|)}{A_d|𝐫|^{d-1}},
where $`A_d`$ is the surface area of the unit hypersphere in $`d`$ dimensions and $`\delta _+`$ is the (one-sided) delta function. The probability of remaining inside the hypersphere up to time $`t`$, $`𝐏_d(a,t)`$, is the volume integral of $`U(𝐫,t)`$ over the hypersphere. Due to spherical symmetry, $`U(𝐫,t)`$ is a function of $`r=\|𝐫\|_2`$ only, which we now denote by $`U(r,t)`$, so that, from (1)
$$\partial _tU(r,t)=\frac{1}{2dr^{d-1}}\partial _r\left(r^{d-1}\partial _rU(r,t)\right),$$
(2)
with $`U(r,0)=\frac{\delta _+(r)}{A_dr^{d-1}},U(a,t)=0`$ and
$$𝐏_d(a,t)=\int _0^aA_dr^{d-1}U(r,t)dr$$
The solution of (2) is given in the form of an infinite eigenfunction expansion. This calculation can be done for self-similar lattices in the framework of the O’Shaughnessy-Procaccia approximation . It consists in assuming spherical symmetry of the fractal object, and in introducing an effective diffusion coefficient $`D=D_0r^{2-1/\nu }`$ computed from the solution of the stationary diffusion problem on self-similar lattices without angular dependence. Thanks to this approximation, an analytic approach can be pursued. The final distribution, denoted $`𝐏_{d,\nu }`$ for general $`\nu `$, is obtained in the Laplace domain in closed form
$$𝐏_{d,\nu }(a,s)=\frac{1}{s}\left[1-\frac{2^{1-d\nu }}{\mathrm{\Gamma }(d\nu )}\frac{(4\nu ^2D_0^{-1}a^{\frac{1}{\nu }}s)^{\frac{d\nu -1}{2}}}{I_{d\nu -1}\left(\sqrt{4\nu ^2D_0^{-1}a^{\frac{1}{\nu }}s}\right)}\right]$$
(3)
Here $`I_n(x)`$ is the modified Bessel function of order $`n`$. From this formula all moments are plainly computed
$$\langle M_t^k\rangle _{d,\nu }=\left\{\frac{2\nu k}{\mathrm{\Gamma }(k\nu +1)}\frac{2^{1-d\nu }}{\mathrm{\Gamma }(d\nu )(4D_0^{-1}\nu ^2)^{k\nu }}\int _0^{\infty }\frac{u^{(2k+d)\nu -2}}{I_{d\nu -1}(u)}du\right\}t^{k\nu }$$
(4)
Putting $`D_0=1/2d`$ and $`\nu =1/2`$ in (3), we easily recover the known distributions on regular lattices in one , two and three dimensions. A similar method was used in to solve the first passage time problem in the presence of a steady potential flow. It is worthwhile mentioning that the limit $`d\to \infty `$ for $`\nu =1/2`$ in (3) yields $`𝐏_{\infty }(a,s)=\frac{1}{s}\left(1-\mathrm{exp}(-a^2s)\right)`$ to leading order. Hence, as intuitively expected, the maximal excursion of a random walker is exactly known in infinite dimension and peaks at $`a=\sqrt{t}`$.
On self-similar lattices ($`\nu 1/2`$), Equation (3) is only an estimate of the exact limit law. We have compared distribution (3) with the distribution obtained by real space renormalization group (RSRG) techniques for the 2D Sierpiński gasket . Both laws turn out to approximate the exact distribution to the same order (see Figure 1 and the discussion in ).
We have also evaluated the moments. To first order, they behave as $`\langle M^k\rangle \sim n^{k\nu }`$, so we define the normalized moments $`\langle M^k\rangle \equiv \langle M_n^k\rangle /n^{k\nu }`$ which tend to a constant asymptotically. The first two normalized moments obtained from (4), $`\langle M\rangle =1.20`$ and $`\langle M^2\rangle =1.59`$, should be compared with the moments obtained from the RSRG method ($`\langle M\rangle =1.19`$, $`\langle M^2\rangle =1.57`$) and with the numerical estimates ($`\langle M\rangle =1.28`$, $`\langle M^2\rangle =1.84`$). Both theoretical formulae underestimate the actual values . This can be understood as follows. Strictly speaking, no limit distribution can be defined for $`M_n`$ but, for consecutive time series $`n,5n,5^2n,\dots `$, $`P_n(a/n^\nu )`$ is left invariant, because if a random walker takes $`T`$ steps to leave a triangle of size $`R`$, it needs a time $`5T`$ to leave a triangle twice bigger. Thus $`P_n(a)`$ fulfills the scaling relation $`P_n(a)=P_{5n}(2a)`$. However, between these times and for fixed $`a/n^\nu `$, the rescaled distribution $`P_n(a/n^\nu )`$ has a log-periodic variation. This log-periodic behavior has been known for some time for lattice random walks . Both function (3) and the RSRG result give the probability to stay in the triangle of size $`R=2^i`$, that is one minus the probability to reach sites at distance $`R+1`$ from the origin. As observed in Figure 1 this corresponds to an extremum in the oscillation of $`P_n(a/n^\nu )`$ (leftmost dashed line of Fig. 1) rather than to an average.
Now we turn our attention to the convergence speed towards the asymptotic law (3). For convenience we study the case of the discrete time random walk on the lattice, where analytical results can be obtained in one dimension and an exact numerical approach is possible in higher dimensions. We focus on the instantaneous diffusion exponent $`\nu _n`$ which furnishes an information about the convergence speed of the moments. The numerical estimation of $`\nu `$ using a Monte-Carlo sampling can lead to false conclusions as in (see ). Here we use exact enumeration methods and therefore we avoid such problems.
The exact solution of the problem in one dimension is obtained by solving the master equation with absorbing boundaries at points $`\pm a`$ with the use of a Fourier development (obtained in with a minor misprint), but the moments cannot be calculated in a straightforward manner from this expression beyond first order. We derived another form of the distribution by a recursive use of the reflection theorem. The probability density of the maximal excursion at step $`n`$ reads
$$P_n(M_n=a)=2\sum _{k=0}^{\infty }(-1)^k\left[p_n\left((2k+1)a\right)+p_n\left((2k+1)(a+1)\right)+2\sum _{i=1}^{2k}p_n\left((2k+1)a+i\right)\right]$$
(5)
where $`p_n(x)`$ is the probability density function for the discrete random walk to be at position $`x`$ (which is null for $`|x|>n`$). Formula (5) allows a convergent expansion of the first moments in powers of $`n^{-1/2}`$, which exist, since $`P_n(M_n=a)`$ is an analytic function of $`n^{-1/2}`$. At order $`n^{-1/2}`$, divergent series are encountered which can be summed by classical methods , yielding
$$\langle M_n\rangle =\sqrt{\frac{\pi n}{2}}-\frac{1}{2}+\frac{1}{12}\sqrt{\frac{2\pi }{n}}+𝒪\left(\frac{1}{n}\right)$$
(6)
$$\langle M_n^2\rangle =2Gn-\sqrt{\frac{\pi n}{2}}+\frac{G+1}{3}+𝒪\left(\frac{1}{\sqrt{n}}\right)$$
(7)
where $`G=0.9166\dots `$ is the Catalan constant. In the calculation of the second cumulant the terms of order $`\sqrt{n}`$ and $`1/\sqrt{n}`$ cancel, as expected. The distribution $`P_t(M_t=a)`$ for continuous time random walks (CTRW) follows plainly from (5) since $`P_t(M_t=a)=\sum _nP_n(M_n=a)\mathrm{\Pi }_t(n)`$, where $`\mathrm{\Pi }_t(n)`$ is the probability for the CTRW to perform $`n`$ steps in time $`t`$. Numerically, a series expansion similar to (7) is found for an exponential distribution of waiting times. The exponential distribution is particular because it is the only one for which a master equation formulation and a CTRW on the same lattice are isomorphic . A striking feature of the random variable $`M`$ compared to other extremal quantities at finite times is that the leading order expansion of $`\langle M_n^k\rangle `$ scales as $`n^{k/2}+\mathrm{cte}.n^{(k-1)/2}`$ and not as $`n^{k/2}+\mathrm{cte}.n^{k/2-1}`$, hence finite size effects persist for a large number of steps.
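The one-dimensional expansion (6) is easy to verify by exact enumeration of the joint distribution of position and maximal excursion; a minimal sketch (Python, pure enumeration, no Monte-Carlo):

```python
import numpy as np

def moments_max_excursion(n_steps):
    """Exactly enumerate the 1D simple random walk by evolving the joint
    distribution P(x, M) of position x and maximal excursion M = max|x_m|."""
    N = n_steps
    P = np.zeros((2 * N + 1, N + 1))          # indices: [x + N, M]
    P[N, 0] = 1.0
    for _ in range(N):
        Q = np.zeros_like(P)
        for ix in range(2 * N + 1):
            for M in range(N):
                p = P[ix, M]
                if p == 0.0:
                    continue
                for jx in (ix - 1, ix + 1):   # step left/right, prob 1/2 each
                    Q[jx, max(M, abs(jx - N))] += 0.5 * p
        P = Q
    PM = P.sum(axis=0)                        # marginal distribution of M
    M_vals = np.arange(N + 1)
    return (PM * M_vals).sum(), (PM * M_vals**2).sum()

n = 100
m1, _ = moments_max_excursion(n)
eq6 = np.sqrt(np.pi * n / 2) - 0.5 + np.sqrt(2 * np.pi / n) / 12
print(m1, eq6)                                # agree to O(1/n), as in Eq. (6)
```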
In two dimensions, no exact result is available for finite times and the analyticity of the probability density $`P_n(M_n=r)`$ is not obvious. Hence we investigate this case numerically. It is possible to perform an exact enumeration of the walks by studying the joint probability density of the position and maximal excursion $`P_n(x,y,M)`$ on the square lattice $`𝐙^2`$. We can compute $`P_n(x,y,M)`$ in the region $`0\le x\le M,0\le y\le x`$ only, due to symmetries. We use the family of metrics $`d_p(𝐱)=\left(\sum _i|x_i|^p\right)^{1/p}`$ to compute the maximal excursion from the origin of the lattice. In Figure 2 we plot the instantaneous exponent $`\nu _n`$ with three classical choices of metric: $`d_1`$, $`d_2`$ (Euclidean distance) and $`d_{\infty }`$ (max distance). The metrics $`d_1`$ and $`d_{\infty }`$ both induce a strong overshooting of $`\nu _n`$ above the limit value $`1/2`$. The curves have the same shape as that of the one dimensional case, also plotted for reference in Figure 2. The first two moments have many features in common with their 1D equivalents. For example, using the metric $`d_{\infty }`$ we find that the series expansion $`\langle M_n^k\rangle =\sum _{p=0}^{\infty }m_{k-p}^k(\sqrt{n})^{k-p}`$ holds for both first and second moments up to third order, with $`m_0^1=-0.50`$, $`m_{-1}^1=0.322`$ and $`m_1^2=-1.083`$, $`m_0^2=0.95`$, $`m_{-1}^2=-0.33`$. The leading order terms are exactly evaluated from the asymptotic results and read $`m_1^1=1.0830`$, $`m_2^2=1.3048`$. In the Euclidean metric $`d_2`$, we find a drastic change in the shape of the curve $`\nu _n`$ (Figure 2).
The instantaneous exponent approaches $`\frac{1}{2}`$ from below and remains below $`\frac{1}{2}`$ at time $`n=400`$. This phenomenon is a lattice effect. We show this fact by computing $`\nu _n`$ in an off-lattice random walk model (Figure 3).
Since it is not possible to use exact enumeration techniques in this case, we resort to a Monte-Carlo simulation. We inspect $`2\times 10^8`$ random walks with fixed distance increments and a uniform distribution of the angles (Pearson walks ). In this situation, $`\nu _n`$ in metric $`d_2`$ is very close to that obtained in metric $`d_1`$ and $`d_{\infty }`$, and it decreases towards $`\frac{1}{2}`$. Both lattice and off-lattice models should give equivalent results once the discretization effects are smoothed out. Therefore $`\nu _n`$ should ultimately approach $`\frac{1}{2}`$ from above in the on-lattice model. We have investigated the change of $`\nu _n`$ when varying continuously the metric $`d_p`$ with $`1\le p\le \infty `$ on the lattice (Figure 2). For large enough values of $`p`$ ($`p>50`$), we do observe that the curve crosses the value $`\frac{1}{2}`$. In the metric $`d_2`$, however, the time needed for $`\nu _n`$ to cross $`\frac{1}{2}`$ should be enormous.
This result shows that the definition of the metric strongly influences the convergence properties of the maximal excursion on regular lattices. The two natural metrics for the square lattice, $`d_1`$ and $`d_{\mathrm{}}`$, lead to a behaviour similar to that observed in the continuum model.
On the Sierpiński gasket we enumerate the walks starting from the top of the biggest triangle up to a fixed time. The metric chosen here is the chemical distance from the origin. For each maximal excursion $`r`$ we compute the probability of remaining below $`r`$ after $`n`$ steps, $`P_n^S(r)`$. Unlike the transfer matrix method, this method works only at fixed time, but allows us to discard the long transient regime and to compare the exact limit distribution to its spherically symmetric approximation. We have computed the instantaneous exponent $`\nu _n`$ up to step $`n=10^4`$ on a gasket of size 256 (cf. Figure 4).
Like the moments, $`\nu _n`$ displays a log-periodic oscillation persisting in the long time regime with an amplitude less than $`8\times 10^{-3}`$. $`\nu _n`$ tends to the asymptotic value $`\nu =\frac{\mathrm{ln}2}{\mathrm{ln}5}`$ at long times. It seems that on a very general class of lattices the finite time behaviour of $`\nu _n`$ and therefore of $`\langle M_n^k\rangle `$ is an analytic function of $`n^{-\nu }`$. This fact lacks a clear physical understanding. The real space renormalization results do show that $`n^{-\nu }`$ is the well defined time scale for this problem but the exact evaluation of finite size effects is not accessible from this method. To assess this hypothesis we have smoothed out the log-periodic oscillations of $`\nu _n`$. For a log-periodic function $`f(x)=f(Tx)`$, one can define $`z=\mathrm{ln}(x)`$ and $`\stackrel{~}{f}(z)=f(x)`$ so that the running logarithmic average reads
$$\overline{f}\left(\frac{Tx}{2}\right)=\frac{1}{\int _z^{z+\mathrm{ln}T}du}\int _z^{z+\mathrm{ln}T}\stackrel{~}{f}(u)du=\frac{1}{\int _x^{Tx}\frac{dv}{v}}\int _x^{Tx}f(v)\frac{dv}{v}$$
Using a discrete form of this average we write
$$\overline{\nu }_{\frac{5n}{2}}=\frac{1}{\sum _{i=n}^{5n}\frac{1}{i}}\sum _{i=n}^{5n}\frac{\nu _i}{i}$$
We tried to fit $`\overline{\nu }_n`$ using
$$\overline{\nu }_n=\nu +An^{-\nu }+Bn^{-2\nu }+o(n^{-2\nu })$$
(8)
and we found a very good regression for $`A=0.082\pm 0.002`$ and $`B=0.18\pm 0.1`$. However, the best fit with only one power law is $`\nu +\mathrm{cte}.n^{-\alpha }`$ with $`\alpha =0.49`$, so that strictly speaking a nonanalytic short time dependence cannot be ruled out. The underlying assumption in the computation of $`\overline{\nu }_n`$ is that the regular log-oscillatory pattern of $`\nu _n`$ is additive. This assumption does not hold because the averaged exponent $`\overline{\nu }_n`$ still shows some oscillation. The local exponent $`\alpha `$ fluctuates between 0.40 and 0.52, which does not allow us to confirm unambiguously the hypothesis of the analytic behavior of $`\overline{\nu }_n`$ as a function of $`n^{-\nu }`$.
We would like to point out that in the context of lattice animals, the quantity $`\langle M_n^k\rangle `$, or equivalently the “caliper diameter” (average spanning diameter of lattice animals once projected on a fixed axis) displays a sub-leading order behaviour which scales as $`n^{(k-1)\nu }`$ , aside from the well-known non analytic subleading term, and can be interpreted as a ’surface contribution’. In the case of the maximal excursion of a random walk, we have proved in one dimension and evidenced through enumerations in higher dimensions that the early time instantaneous exponent $`\nu _n`$ is systematically above its limit value $`\nu `$, with a leading order development $`\nu _n\simeq \nu +An^{-\nu }`$ where $`A`$ depends on the precise choice of the metric. This result is consistent with the fact that the corrective scaling to the moments due to finite size effects includes only terms of the form $`(n^{-\nu })^p,p\in 𝐍`$, which was proved in one dimension and which can also be interpreted as a surface contribution.
In conclusion, besides its intrinsic interest, the maximal excursion of a random walk shares difficulties which are often encountered in physical problems dealing with finite size series. In certain cases, power law exponents inferred from the finite size series expansions should be considered with caution, as might be the case for directed percolation series.
no-problem/9905/hep-ex9905001.html
# NATURAL PARITY RESONANCES IN THE 𝜂𝜋⁺𝜋⁻ AND 𝜔𝜔
## 1 Introduction
The $`\eta \pi ^+\pi ^-`$ and $`\omega \omega `$ final states have been studied in the charge-exchange reactions
$$\pi ^-p\to \eta \pi ^+\pi ^-n,\eta \to \gamma \gamma $$
(1)
and
$$\pi ^-p\to \omega \omega n,\omega \to \pi ^+\pi ^-\pi ^0.$$
(2)
respectively. Our previous results of the analysis of reaction (1) were reported at the conference and those of reaction (2) were published in . The measurements were carried out using the VES spectrometer exposed to the $`37GeV`$ momentum $`\pi ^-`$ beam.
The study of the final state is done in two steps. First, a mass-independent fit of the events, sorted into narrow mass bins, is carried out. Within a bin, the Dalitz plot and the angular distributions are fitted to individual partial waves for all possible couplings of the isobar spins and their orbital angular momentum. In the second stage a mass-dependent fit is done inside the wide mass band. Each partial wave production amplitude is saturated by the coherent sum of resonance Breit-Wigner amplitudes.
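The mass-dependent parametrization can be illustrated by a minimal sketch (Python); the constant-width Breit-Wigner form and the resonance parameters below are illustrative choices only:

```python
import numpy as np

def bw(m, m0, gamma0):
    """Breit-Wigner amplitude with constant width (a minimal form;
    mass-dependent widths are a common refinement)."""
    return m0 * gamma0 / (m0**2 - m**2 - 1j * m0 * gamma0)

def intensity(m, resonances):
    """Partial-wave intensity as the coherent sum |sum_k c_k BW_k(m)|^2,
    with complex couplings c_k carrying the relative phases."""
    amp = sum(c * bw(m, m0, g) for (c, m0, g) in resonances)
    return np.abs(amp) ** 2

# toy 1-- wave: a dominant rho(1700) plus a small rho(2150) with a phase
m = np.linspace(1.4, 2.48, 500)
I = intensity(m, [(1.0, 1.70, 0.24), (0.3 * np.exp(2.0j), 2.15, 0.30)])
```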
## 2 Results of the $`\eta \pi ^+\pi ^-`$ system analysis
We present the results of a spin-parity analysis of $`4.2\times 10^4`$ events of the $`\eta \pi ^+\pi ^-`$ final state in the mass range $`1.4÷2.48GeV`$ for $`|t^{}|<0.2GeV^2`$ <sup>1</sup><sup>1</sup>1 $`t^{}=t-t_{min}`$, where $`t`$ is the momentum transfer squared from the beam to the final state and $`t_{min}`$ is its minimum value. The $`\eta \pi ^+\pi ^-`$ system in this kinematical region is dominated by the states with quantum numbers allowed for the $`\pi \pi `$ system: $`J^P=1^-`$, $`3^-`$ with $`I=1`$, and $`J^P=2^+`$, $`4^+`$ with $`I=0`$, due to the dominance of one pion exchange. The model with the rank two density matrix is used in the fit. The results of the amplitude analysis are presented in Fig. 1 and 2. The $`J^{PC}M^\eta =1^{++}0^+`$ <sup>2</sup><sup>2</sup>2 notations from $`\rho (770)\eta `$ S-wave dominates the natural parity exchange (NPE) set of the partial waves, which is suppressed by the $`t^{}`$-cut. In the unnatural parity exchange (UPE) sector the most significant waves are as follows:
$`J^{PC}=1^{--}`$. The $`1^{--}\rho (770)\eta `$ wave is the dominant wave in the UPE sector, see Fig. 1(a). The best description is obtained when the $`1^{--}`$ wave is introduced as a coherent sum of the Breit-Wigner amplitudes with PDG parameters for $`\rho (1450)`$, $`\rho (1700)`$ and $`\rho (2150)`$. Amongst these, the $`\rho (1700)`$ is strongly dominant. However, details of the $`1^{--}`$ wave are somewhat sensitive to (i) the magnitude of the $`\rho (1450)`$ contribution and its poorly known mass and width, (ii) the mass and width of the $`\rho (2150)`$. This prevents us from assigning accurate model independent parameters to $`\rho (1700)`$.
$`J^{PC}=3^{--}`$. A clear result is the presence of the $`\rho _3(1690)`$ decaying to the $`a_2(1320)\pi `$ and $`\rho (770)\eta `$; it is seen as a rapid rise of the $`3^{--}`$ intensities in the $`1.7GeV`$ region for both $`a_2(1320)\pi `$ and $`\rho (770)\eta `$, Figs. 1(b) and (c), and even more distinctly in their phase variation relative to the $`1^{--}`$ wave, Figs. 2(a) and (b). The mass-dependent PWA gives the following values for the $`\rho _3(1690)`$ mass and width:
$$\begin{array}{cc}M=1.69\pm 0.01(stat)\pm 0.02(syst)GeV,\hfill & \mathrm{\Gamma }=0.23\pm 0.02(stat)\pm 0.05(syst)GeV.\hfill \end{array}$$
Taking into account the PDG value of the branching ratio of the $`a_2(1320)\to \eta \pi `$ decay, we obtain: $`\frac{BR(\rho _3(1690)\to a_2(1320)\pi )}{BR(\rho _3(1690)\to \rho (770)\eta )}=6.0\pm 3.0.`$ The error is mostly systematic. There is an indication of a further resonance in the $`3^{--}`$ waves around $`2.1GeV`$. The $`a_2(1320)\pi `$ cross section falls sharply at about $`2.15GeV`$ and is fitted with a resonance at $`M=2.18\pm 0.04GeV`$ with $`\mathrm{\Gamma }=0.26\pm 0.05GeV`$. There is a corresponding weak peak in the $`3^{--}`$ total cross section summed over all final states, Fig. 1(d). The large phase variation observed in Figs. 2(a) and (b) with respect to the $`1^{--}`$ waves in the mass range 1.8 to $`2.4GeV`$ requires that at least one $`3^{--}`$ resonance above $`\rho _3(1690)`$ is present in this mass range. At higher mass, there is some tentative evidence for further activity around $`2.3GeV`$, namely a peak in the $`a_2(1320)\pi `$ wave intensity and $`3^{--}`$ total cross section. A fit results in the following parameters of the $`\rho _3(2300)`$:
$$\begin{array}{cc}M=2.30\pm 0.05GeV,\hfill & \mathrm{\Gamma }=0.24\pm 0.06GeV.\hfill \end{array}$$
$`J^{PC}=2^{++}`$. The intensities of $`2^{++}`$ waves are rather small compared to the $`1^{--}`$ and $`3^{--}`$, see Figs. 1(e)-(g). A sharp structure near $`1.55GeV`$ is seen in the $`a_2(1320)\pi `$ wave, Fig. 1(e). The Breit-Wigner parameters found for this peak are:
$$\begin{array}{cc}M=1.53\pm 0.02(stat)\pm 0.02(syst)GeV,\hfill & \mathrm{\Gamma }=0.13\pm 0.03(stat)\pm 0.04(syst)GeV.\hfill \end{array}$$
We interpret it as evidence for the $`f_2(1565)`$ decaying through a previously unobserved $`a_2(1320)\pi `$ decay mode. Considering now higher masses, the $`2^{++}f_2(1270)\eta `$ intensity shows a definite peak centered near $`1.9GeV`$, Fig. 1(f). Earlier, both the VES and GAMS collaborations have reported a $`2^+`$ resonance at $`1.92GeV`$ in the $`\omega \omega `$ channel. The $`f_2(1950)`$ parameters are fixed to those from the $`\omega \omega `$-system fit: $`M=1.93GeV`$, $`\mathrm{\Gamma }=0.21GeV`$. A definite phase variation consistent with this mass and width is observed in Fig. 2(d).
$`J^{PC}=4^{++}`$. The $`f_2(1270)\eta `$ intensity shows a rapid rise in the $`2.3GeV`$ region (Fig. 1(i)) and a significant phase variation in the region $`2.0÷2.3GeV`$, Fig. 2(f). This may point to the existence of a resonant state decaying to $`f_2(1270)\eta `$. If the parameters of the $`f_4(2050)`$ are kept at the PDG values, the following parameters for the $`f_4(2300)`$ meson are obtained:
$$\begin{array}{cc}M=2.33\pm 0.03GeV,\hfill & \mathrm{\Gamma }=0.29\pm 0.07GeV.\hfill \end{array}$$
## 3 Results of the $`\omega \omega `$ analysis
The results of the amplitude analysis of 9800 events of the $`\pi ^-p\to \omega \omega n`$ reaction with $`\omega \omega `$ mass inside $`1.6÷2.5GeV`$ are shown in Fig. 3. Production of the $`0^-`$, $`1^+`$, $`2^-`$ and $`3^+`$ final states requires weak NPE, amounting to about $`10\%`$ of UPE. The $`2^{++}`$ and the $`4^{++}`$ UPE waves are the most significant. The production of the $`0^+`$ and $`4^+`$ final states turns out to be coherent, corresponding to the $`\pi `$-exchange and spin-flip of the spectator nucleon. In the last stage of the analysis these waves are included coherently. The $`2^{++}`$ S-wave exhibits a violation of coherence with the $`4^{++}`$ wave in the region of $`1.9GeV`$, as shown by the coherence parameter in Fig. 3(d). The reason for this is not understood at present, although NPE $`a_1`$ exchange may be a possible explanation. For this reason the model with the rank two density matrix is used in the fit, with the $`2^{++}`$ S-wave placed in the second rank. There is a strong $`2^+`$ amplitude near the $`\omega \omega `$ threshold, which may be attributed to the $`f_2(1565)`$. The mass region $`1.9÷2.05GeV`$ requires two resonances: an $`f_2`$ with $`M=1.94\pm 0.01GeV`$, $`\mathrm{\Gamma }=0.15\pm 0.02GeV`$, and an $`f_4`$ with $`M=2.01\pm 0.02GeV`$, $`\mathrm{\Gamma }=0.25\pm 0.04GeV`$. In the higher mass range there is weak evidence for the existence of an $`f_4(2300)`$ with the following parameters:
$$\begin{array}{cc}M=2.33\pm 0.02GeV,\hfill & \mathrm{\Gamma }=0.24\pm 0.04GeV.\hfill \end{array}$$
## 4 Conclusions
Results of the $`\eta \pi ^+\pi ^-`$ and $`\omega \omega `$ final state PWA reveal resonant structures, which are identified by their phase motion as well as by peaks in the intensities of the partial waves. A fit to the $`1^{--}\rho (770)\eta `$ partial wave is consistent with the presence of a small contribution from the $`\rho (1450)`$, a dominant contribution from the $`\rho (1700)`$ with PDG values of the mass and width, and some contribution from the $`\rho (2150)`$. The well known $`J^{PC}=3^{--}`$ $`\rho _3(1690)`$ is accompanied by a $`\rho _3(2300)`$ in the $`\eta \pi ^+\pi ^-`$ final state. The $`f_2(1565)`$ meson is observed in the $`\eta \pi ^+\pi ^-`$ and $`\omega \omega `$ final states. The $`f_4(2050)`$ is observed in the $`\omega \omega `$ state. There is an indication of a $`\rho _3(2100)`$ in the $`\eta \pi ^+\pi ^-`$ and of the $`f_2(1950)`$ in the $`\eta \pi ^+\pi ^-`$ and $`\omega \omega `$. There is evidence for an $`f_4(2300)`$ meson, decaying to the $`f_2(1270)\eta `$ in the F-wave and to the $`\omega \omega `$.
no-problem/9905/hep-ex9905053.html
# Polarized lepton nucleon scattering — summary of the experimental spin sessions at DIS 99
## 1 OVERVIEW
Exciting and even unexpected results have been presented in the sessions about “polarized lepton nucleon scattering” at the DIS 99 workshop. Most striking are the presentations of the flavor decomposition of the quark polarization, the first direct observation of a positive gluon polarization in the nucleon and the observation of a double-spin asymmetry in diffractive $`\rho ^0`$ production. First results on the Gerasimov-Drell-Hearn (GDH) sum rule have been reported. The interesting and new fields of polarization phenomena related to transversity, leading twist-3 distributions and fragmentation came into the reach of experimentalists this year. Promising results were announced about azimuthal pion asymmetries and the polarization of $`\mathrm{\Lambda }`$ hyperons in the final state.
## 2 INCLUSIVE SPIN PHYSICS
Spin structure functions have been measured for many years, and an impressive, precise data set has been collected by the experiments at SLAC, by SMC and by HERMES . At all three sites the spin structure function $`g_1(x,Q^2)`$ was measured for the proton and the neutron, and consistent results were obtained that agree with the $`Q^2`$ evolution predicted by QCD. The experiment E155x at SLAC is running in spring 1999 with the aim of providing precise data on $`g_2`$ and gaining access to the interesting twist-3 component of $`g_2`$ that remains after subtraction of the Wandzura-Wilczek contribution $`g_2^{WW}`$.
The situation of the spin sum rules has not changed significantly since the last DIS meeting. The Ellis-Jaffe sum rule is found to be violated and all results support the Bjorken sum rule. The precision of the sum rule tests is limited by theoretical uncertainties in the extrapolation of the spin structure functions at low x. Experimentally, the low x region will be accessible only at a future high energy spin facility such as the proposed polarized HERA collider . A measurement of the high $`x`$ region at low energies is planned at Jefferson Lab .
### 2.1 Gerasimov-Drell-Hearn sum rule
The GDH sum rule relates the polarization dependent part of the total photoproduction cross section to the anomalous magnetic moment of the nucleon. This important relation has been tested by a precision experiment at the tagged polarized photon beam of the microtron MAMI in Mainz, Germany. The energy range accessible by the experiment is 200-800 MeV. Fig. 1 shows a preliminary analysis of a small subset of the data. For the integral a value of $`230\pm 20\mu `$b was reported, which accounts for most of the GDH prediction of 204 $`\mu `$b. The remaining difference might be due to contributions at higher energies, which are planned to be measured at Bonn.
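The quoted prediction follows directly from the sum rule, $`I_{GDH}=2\pi ^2\alpha \kappa ^2/M^2`$; a minimal numerical check (Python):

```python
import numpy as np

alpha   = 1.0 / 137.036
kappa_p = 1.793                  # anomalous magnetic moment of the proton
M       = 0.9383                 # proton mass [GeV]
GeV2_to_mub = 389.379            # conversion: 1 GeV^-2 in microbarn

I_GDH = 2.0 * np.pi**2 * alpha * kappa_p**2 / M**2 * GeV2_to_mub
print(f"I_GDH = {I_GDH:.0f} microbarn")   # ~205, the value quoted above
```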
The GDH sum rule can be generalized for electroproduction. Data have been recently published by the HERMES collaboration and a new measurement in the important resonance region was performed in spring 1999 as experiment E94-010 at Jefferson Lab . First raw data of this experiment have been shown at this conference.
## 3 SEMI-INCLUSIVE DIS
### 3.1 Flavor decomposition
The violation of the Ellis-Jaffe sum rule, which was first reported by EMC , was interpreted as evidence for the fact that only a fraction of the nucleon spin can be attributed to the quark spins and that the strange quark sea seems to be negatively polarized . This caused the so-called spin crisis. This interpretation is based on the assumption that the spin structure of the baryon multiplets is SU(3)<sub>f</sub> flavor symmetric.
Semi-inclusive data can be used to measure the sea polarization directly and to test SU(3)<sub>f</sub> symmetry by comparing the first moments of the flavor distributions to the SU(3)<sub>f</sub> predictions. In addition, semi-inclusive polarized DIS experiments can determine the separate spin contributions $`\mathrm{\Delta }q_f`$ of quark and antiquark flavors $`f`$ to the total spin of the nucleon not only as a total integral but as a function of the Bjorken scaling variable $`x`$.
Hadron production in DIS is described by the absorption of a virtual photon by a point-like quark and the fragmentation into a hadronic final state. The two processes can be characterized by two functions: the quark distribution function $`q_f(x,Q^2)`$, and the fragmentation function $`D_f^h(z,Q^2)`$. The semi-inclusive DIS cross section $`\sigma ^h(x,Q^2,z)`$ to produce a hadron of type $`h`$ with energy fraction $`z=E_h/\nu `$ is then given by
$$\sigma ^h(x,Q^2,z)\propto \sum _fe_f^2\,q_f(x,Q^2)\,D_f^h(z,Q^2).$$
(1)
In the target rest frame, $`E_h`$ is the energy of the hadron, $`\nu =E-E^{}`$ and $`Q^2`$ are the energy and the squared four-momentum of the exchanged virtual photon, $`E`$($`E^{}`$) is the energy of the incoming (scattered) lepton and $`e_f`$ is the quark charge in units of the elementary charge. The Bjorken variable $`x`$ is calculated from the kinematics of the scattered lepton according to $`x=Q^2/2M\nu `$ with $`M`$ being the nucleon mass. It is assumed that the fragmentation process is spin independent, i.e. that the probability to produce a hadron of type $`h`$ from a quark of flavor $`f`$ is independent of the relative spin orientations of quark and nucleon. The spin asymmetry $`A_1^h`$ in the semi-inclusive cross section for production of a hadron of type $`h`$ by a polarized virtual photon is given by
$$A_1^h(x,Q^2,z)=C_R\frac{\sum _fe_f^2\,\mathrm{\Delta }q_f(x,Q^2)\,D_f^h(z,Q^2)}{\sum _fe_f^2\,q_f(x,Q^2)\,D_f^h(z,Q^2)}$$
(2)
where $`\mathrm{\Delta }q_f(x,Q^2)=q_f^{\uparrow }(x,Q^2)-q_f^{\downarrow }(x,Q^2)`$ is the polarized quark distribution function and $`q_f^{\uparrow (\downarrow )}(x,Q^2)`$ is the distribution function of quarks with spin orientation parallel (anti-parallel) to the spin of the nucleon. The unpolarized quark distribution functions are defined by $`F_2`$ (and not by $`F_1`$):
$$F_2=\sum _fe_f^2\,x\,q_f(x,Q^2)$$
(3)
and they therefore include a longitudinal component of the photon absorption cross section. The term
$$C_R=\frac{1+R(x,Q^2)}{1+\gamma ^2},$$
(4)
with $`R=\sigma _L/\sigma _T`$ being the ratio of the longitudinal to transverse photon absorption cross section, corrects for the longitudinal component which a priori is not present in the asymmetry and the polarized distribution functions. It is assumed that the ratio of longitudinal to transverse components is flavor and target independent and that the contribution from the second spin structure function $`g_2(x,Q^2)`$ can be neglected. The term $`\gamma =\sqrt{Q^2}/\nu `$ is a kinematic factor which enters from the $`g_2=0`$ assumption. Eq. (2) can be used to extract the quark polarizations $`\mathrm{\Delta }q_f(x)/q_f(x)`$ from a set of measured asymmetries on the proton and neutron for positively and negatively charged hadrons.
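The linear structure of Eq. (2) is what makes this extraction tractable: each measured asymmetry is a purity-weighted sum of the quark polarizations, so a set of asymmetries can be inverted by weighted least squares. The sketch below is our own illustration of this inversion for a single $`x`$-bin, with made-up toy purities and asymmetries (not HERMES numbers); the factor $`C_R`$ is assumed to be absorbed into the asymmetries.

```python
import numpy as np

# Toy illustration of inverting Eq. (2): A_1^h = sum_f P_f^h * (Dq_f/q_f),
# where the purities P_f^h = e_f^2 q_f D_f^h / sum_f' e_f'^2 q_f' D_f'^h
# would come from an (unpolarized) Monte Carlo in a real analysis.
def extract_polarizations(P, A, sigma_A):
    """Weighted least-squares solution of A = P @ x for x = Dq/q.
    P: (n_asym, n_flavors) purities, A: asymmetries, sigma_A: errors."""
    W = np.diag(1.0 / sigma_A**2)      # inverse-variance weights
    cov = np.linalg.inv(P.T @ W @ P)   # covariance matrix of the result
    return cov @ P.T @ W @ A, cov

# Made-up purities for one x-bin; flavors ordered (u, d, sea):
P = np.array([[0.75, 0.10, 0.15],     # pi+ off the proton
              [0.45, 0.30, 0.25],     # pi- off the proton
              [0.55, 0.25, 0.20],     # all charged hadrons, proton
              [0.35, 0.45, 0.20]])    # pi+ off the neutron
A = np.array([0.25, 0.10, 0.18, 0.05])        # toy asymmetries
sigma_A = np.array([0.03, 0.04, 0.02, 0.05])  # toy statistical errors

dq_over_q, cov = extract_polarizations(P, A, sigma_A)
print("Dq/q =", dq_over_q, " errors =", np.sqrt(np.diag(cov)))
```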
Results on the decomposition of the proton spin into contributions from the valence spin distributions $`\mathrm{\Delta }u_v`$ and $`\mathrm{\Delta }d_v`$ and from the sea $`\mathrm{\Delta }q_s`$ have been previously reported by SMC . New, more precise data from HERMES have been presented at this workshop . Fig. 2 shows the polarization $`\mathrm{\Delta }q/q`$ of quarks in the proton, separated into flavors. The up flavor has a positive polarization which reaches about 40% at large $`x`$ whereas the down flavor has a polarization opposite to the proton spin, in excess of 20%. In the sea region at small $`x`$ the up and down polarizations do not vanish completely. The sea polarization itself is compatible with zero as shown in the lower panel. The extraction of the sea was done under the assumption that the polarization of the sea quarks is independent of their flavor.
The polarized quark distribution functions $`\mathrm{\Delta }q_f(x)`$ have been extracted by multiplying the quark polarizations $`\mathrm{\Delta }q_f(x)/q_f(x)`$ with the known unpolarized distribution functions. Fig. 3 shows that the HERMES result agrees with the LO parametrization of Ref. of world data. However, not all LO parametrizations agree with HERMES and it seems that the parametrizations are internally inconsistent. Agreement can be achieved by dividing out a factor (1+R). The explanation is that the ratio R of the longitudinal to transverse cross section is usually set to zero in the LO fits. On the other hand experimentalists usually use the measured R, which is non-zero, to extract $`g_1`$. The result of the fit then depends on the choice of the input for the fit, either $`A_1`$ (as in Ref. ) or $`g_1`$ (as in Ref. ). From the relation
$$A_1=\frac{2xg_1}{F_2}\frac{(1+R)}{(1+\gamma ^2)},$$
(5)
which is quoted here for the approximation that $`g_2=0`$, it follows directly that the two choices will give different results.
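A one-line numerical check makes the size of this effect concrete; the kinematic point and the value of $`R`$ below are purely illustrative.

```python
import math

# Illustrative numbers only: Eq. (5) with g2 = 0 at one kinematic point.
x, Q2, M = 0.1, 2.5, 0.938            # Bjorken x, Q^2 and M in GeV units
nu = Q2 / (2 * M * x)                 # from x = Q^2 / (2 M nu)
gamma2 = Q2 / nu**2                   # gamma^2 = Q^2 / nu^2
R = 0.2                               # assumed sigma_L / sigma_T
two_x_g1_over_F2 = 0.08               # hypothetical 2x g1 / F2

A1 = two_x_g1_over_F2 * (1 + R) / (1 + gamma2)
print(A1)  # setting R = 0 instead would change A1 by ~20% at this point
```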
The first and second moments of the spin distributions have been determined by HERMES. In the measured region, the results agree between HERMES and SMC within the quoted errors. A simple Regge-type extrapolation has been applied at low $`x`$ to obtain the total integrals as quoted in Table 1. The HERMES results for the first and second moment of $`\mathrm{\Delta }u_v`$ show a significant discrepancy with a prediction from quenched lattice QCD in Ref. . The result for $`\mathrm{\Delta }u+\mathrm{\Delta }\overline{u}`$ is inconsistent with the result from the inclusive data based on SU(3)<sub>f</sub> flavor symmetry as in Ref. . The inconsistency of the up flavor has its counterpart in the difference which is observed in the sea results. The inclusive analysis obtains a large negative strange sea compared to the zero sea in the semi-inclusive analysis. Possible explanations of these differences are that either SU(3)<sub>f</sub> is violated, which would modify the inclusive result, or that the assumption about the flavor independence of the sea is wrong, which would modify the semi-inclusive result. To test the applicability of SU(3)<sub>f</sub> and SU(2)<sub>f</sub> flavor and isospin symmetry, the semi-inclusive results for $`\mathrm{\Delta }q_8`$ and $`\mathrm{\Delta }q_3`$ have been compared to the predictions $`\mathrm{\Delta }q_8=3FD`$ and $`\mathrm{\Delta }q_3=F+D`$ (Bjorken sum rule). Both predictions agree with the HERMES results when the appropriate QCD corrections are taken into account. For a decisive conclusion about the origin of the discrepancy, the precision has to be further improved and the sea assumption has to be tested explicitly.
A significant improvement of the precision of the $`\mathrm{\Delta }d(x)+\mathrm{\Delta }\overline{d}`$ determination is expected in the near future from HERMES using the 1999 deuterium data set. The newly installed RICH will allow a direct measurement of $`\mathrm{\Delta }s(x)`$ using kaon identification.
### 3.2 The gluon polarization
The “most wanted” component of the nucleon spin is the polarization of the gluons, as they are probably responsible for the spin deficit of the quarks. As the virtual photon does not couple directly to gluons, a measurement of the gluon polarization has up to now only been possible very indirectly, by using the QCD evolution equations which relate the $`Q^2`$-dependence of the quark distributions to the gluon distribution.
For the first time a more direct measurement of the gluon polarization has been announced . By selecting events with two hadrons with opposite charge and with large transverse momentum, HERMES was able to accumulate a sample of events which is enriched by photon-gluon fusion events. By requiring a large transverse momentum of 1.5 (1) GeV/c for the first (second) hadron, the sub-process where the gluon splits into two quarks has a hard scale and can be treated perturbatively. HERMES estimates from Monte-Carlo studies that the average squared transverse momentum of the quarks is 2.1 (GeV/c)<sup>2</sup>. As long as the fragmentation process is spin independent, the spin asymmetry in the production of the quark-antiquark pair is the same as the spin asymmetry of the observed final state. The measured asymmetry is, however, affected by background processes. The unique signature of the HERMES result is that the spin asymmetry comes out negative. All background processes have a positive asymmetry, as long as they are dominated by the positive polarization of the “up”-quarks in the proton. The observed negative asymmetry can be explained by a significant positive gluon polarization. The change of sign comes from the negative analyzing power of the photon-gluon fusion diagram. Using a specific background Monte Carlo, HERMES obtains a value of the gluon polarization of $`\mathrm{\Delta }G/G=0.41\pm 0.18\pm 0.03`$ at $`x_G=0.17`$. The quantitative result depends, however, critically on the detailed understanding of the background processes.
### 3.3 Transverse asymmetries
The next step in polarized DIS beyond the understanding of the collinear part of the quark and gluon polarization in the nucleon is the understanding of the transverse polarization components. Single-spin asymmetries in polarized hadronic reactions are interpreted as effects of time-reversal-odd distribution functions (Sivers mechanism) or time-reversal-odd fragmentation functions (Collins mechanism). The numerous possible processes were discussed in the theory spin sessions .
SMC presented at this conference the first measurement of semi-inclusive DIS hadron production on a transversely polarized target . Leading hadron production has been analyzed in terms of the Collins angle and indeed a non-zero asymmetry $`A_N=11\%\pm 6\%`$ has been found for positive hadrons, whereas the negative hadrons yield $`2\%\pm 6\%`$.
A significant result has been reported by HERMES on a related quantity . HERMES measured the asymmetry of hadron production on a longitudinally polarized target. Even in this case an asymmetry is expected in the azimuthal angle between the plane which contains the produced pion and the virtual photon and the plane which contains the scattered lepton and the virtual photon. Fig. 4 shows this single-spin asymmetry as a function of the azimuthal angle for positive pions. A sinusoidal fit yields an asymmetry of $`A_N=2.0\%\pm 0.4\%`$ for the positive and $`A_N=0.1\%\pm 0.5\%`$ for the negative pions. The good statistical precision of this result is due to the hadron acceptance of the detector and the pure, highly polarized hydrogen target.
### 3.4 $`\mathrm{\Lambda }`$ polarization
A related quantity is the polarization of $`\mathrm{\Lambda }`$ hyperons. HERMES reported two results here .
The first one is the measurement of the polarization transfer in deep inelastic scattering of longitudinally polarized electrons off unpolarized targets. A $`\mathrm{\Lambda }`$ polarization of $`P_\mathrm{\Lambda }=0.03\pm 0.06\pm 0.03`$ was reported, a number which is not precise enough to distinguish between different predictions. A naive quark model which assumes 100% polarization of s-quarks in $`\mathrm{\Lambda }`$ hyperons predicts $`P_\mathrm{\Lambda }=0.018`$, whereas a SU(3)<sub>f</sub> symmetric model from Jaffe predicts $`P_\mathrm{\Lambda }=0.057`$.
A much more precise result was reported concerning the transverse polarization of $`\mathrm{\Lambda }`$ hyperons in quasi-photoproduction off an unpolarized target:
$$\gamma ^{(*)}p\to \mathrm{\Lambda }X.$$
(6)
The polarization was measured with respect to the axis perpendicular to the $`\mathrm{\Lambda }`$ production plane. Fig. 5 shows the polarization as a function of the transverse momentum of the $`\mathrm{\Lambda }`$ and $`\overline{\mathrm{\Lambda }}`$. A large positive polarization is observed for $`\mathrm{\Lambda }`$ hyperons, with the tendency to increase with transverse momentum. The $`\overline{\mathrm{\Lambda }}`$ antihyperons show a negative polarization. There is no straightforward explanation of the observed asymmetries in QCD; however, similar polarizations have been found in hadronic collisions.
## 4 DIFFRACTIVE DOUBLE-SPIN ASYMMETRIES
Results on double-spin asymmetries in diffractive $`\rho ^0`$-production have been reported by SMC and HERMES . Naively, no spin asymmetry is expected in the approach where diffraction is described by the exchange of a pomeron with vacuum quantum numbers. In this frame there should be no way that the diffractive $`\rho ^0`$ production knows about the spin of the target nucleon. The SMC result is in agreement with this prediction. No significant asymmetry has been observed.
HERMES reported a significant, unexpected positive asymmetry of $`A_1^\rho =0.30\pm 0.11\pm 0.05`$. The asymmetry in the production of the other vector mesons $`\varphi `$ and $`J/\psi `$ was also measured, but with much less precision, and came out compatible with zero. The non-zero $`\rho `$-asymmetry can possibly be understood by comparing this process $`\gamma ^{}+p\to p+\rho `$ to the similar process $`\gamma ^{}+p\to p+\gamma ^{}`$ which is related to deep inelastic scattering and its well known positive asymmetry $`A_1^p`$. The difference between the HERMES and SMC results may originate from the different $`Q^2`$ and $`W^2`$ ranges of the two experiments.
## 5 FUTURE FACILITIES
The future of polarized DIS will consist of both fixed target and collider experiments .
### 5.1 Fixed target
At SLAC the fixed target inclusive era will end with the precise measurement of $`g_2^{p,d}`$ at E155x. At lower energies, MAMI at Mainz, ELSA at Bonn and CEBAF at Jefferson Lab will continue to do spin physics. The main future players at higher energies will be HERMES at DESY and COMPASS at CERN. Both experiments will concentrate on semi-inclusive results.
The main aims of the experiments are the measurement of the gluon polarization, the flavor decomposition of polarized quark distributions, polarized vector meson production, polarized fragmentation functions, transversity, and in the case of COMPASS also the angular momentum of quarks and gluons and off-forward parton distributions.
HERMES has an upgrade program to improve particle identification with pion, kaon and proton separation in the full kinematic region, using a RICH detector. An improved muon acceptance and identification will allow for a better $`J/\mathrm{\Psi }`$ detection. A wheel of silicon detectors just behind the target cell improves the acceptance especially for $`\mathrm{\Lambda }`$ decay products. A recoil detector system is dedicated to low energy, large angle target fragments and spectator nucleons.
COMPASS will start in 2000 in the experimental area where the SMC experiment was located, but with an improved beam, an improved target, high luminosity and, compared to the SMC experiment, a much better and larger hadron acceptance and particle identification. In the final stage COMPASS will have two spectrometer magnets, two RICH detectors, two hadron and two electromagnetic calorimeters. Compared to HERMES, the large beam energy of COMPASS of 100-200 GeV enables measurements at smaller $`x`$ and larger $`Q^2`$ and $`W^2`$. The large $`W^2`$ allows for charm production well above threshold and for the generation of hadrons with large transverse momentum. The production of open charm allows for a direct measurement of the gluon polarization.
A future fixed target machine is ELFE, a possible new European electron machine in the 15-30 GeV range, which is discussed in connection with the TESLA project at DESY and also at CERN as a machine which could re-use the cavities from LEP. Two further experiments, E-156 at SLAC and APOLLON at ELFE, which both aim to measure the gluon polarization via charm production in photoproduction, were proposed some years ago and are still under discussion.
### 5.2 Collider
Two colliders will govern the high energy part of spin physics in future: the polarized proton collider RHIC at BNL , and possibly the HERA collider at DESY which may have polarized protons in future .
In both machines, the acceleration and storage of polarized protons is a major challenge to machine physicists. Several Siberian snakes will be installed to compensate depolarizing resonances and preserve the beam polarization.
RHIC will start its physics program in 2000. Main points on the agenda are the measurement of the antiquark polarization and the gluon polarization. The antiquark polarization can be extracted using Drell-Yan production via $`W^+`$ and $`W^{}`$. As $`W`$-production depends on flavor and on helicity, the experiment can extract $`\mathrm{\Delta }u`$, $`\mathrm{\Delta }\overline{u}`$, $`\mathrm{\Delta }d`$ and $`\mathrm{\Delta }\overline{d}`$ separately. The gluon polarization is approached by the production of prompt photons, $`\pi ^0`$’s, jets and heavy quarks (charm).
The main aims of a polarized HERA collider are the measurement of the spin structure functions at low $`x`$, which will improve the precision of the verification of the fundamental Bjorken sum rule, and the polarization of the gluon. As at RHIC, the gluon polarization can be extracted from the production of heavy flavors and from jet production.
## 6 CONCLUSIONS
Spin physics is a rich field. At this workshop an extraordinary number of results was presented, from CERN, SLAC, Jefferson Lab, Mainz and other institutions. A main source of results was the HERMES experiment at DESY which released a variety of different analyses of data taken in the last three years, some of which were exciting and unexpected. Spin physics will continue to be a rich field when the future facilities which were discussed at this workshop come into operation.
## 7 ACKNOWLEDGMENTS
I want to thank all speakers of the spin sessions for their excellent contributions and I thank the organizers for the invitation and for the perfect support of the session conveners.
# Restart Strategies and Internet Congestion
## 1 Introduction
The impressive growth in the number of Internet users (from 61 million in 1996 to over 150 million today) has led to a radical increase in the volume of data that is transmitted at any given time. Whereas a few years ago email was the preponderant component of Internet traffic, the Web, with its rich and varied content of images and text, makes up for most of the transmitted data today. In addition, financial and other forms of electronic transactions put a premium on mechanisms that ensure timely and reliable transactions in cyberspace. This is an important problem given the bursty nature of Internet congestion , which leads to a large variability in the risk and cost of executing transactions.
Earlier, we presented a methodology for quantitatively managing the risk and cost of executing transactions in a distributed network environment . By associating cost with the time it takes to complete the transaction, and risk with the variance in that time, we considered different methods that are analogous to asset diversification, and which yield mixed strategies that allow an efficient trade-off between the average and the variance in the time a transaction will take. Just as in the case of financial portfolios, we found that some of these mixed strategies can execute transactions faster on average and with a smaller variance in their speed.
A potential problem with this portfolio methodology is that if everybody uses it, the latency characteristics of the Internet might shift so as to render the method useless. If this were the case, one would be confronted with a classical social dilemma, in which cooperation would amount to refraining from the use of such strategies and defection to their exploitation . Thus it becomes important to determine the effects that many users employing a portfolio strategy have on Internet latencies and their variances. Equally relevant is to determine what the increased latencies are as a function of the fraction of users deciding to employ such strategies.
In order to investigate this issue, we conducted a series of computer simulations of a group of agents deciding asynchronously whether to use the Internet or not. The agents base their decision on knowledge of the congestion statistics over a past window of time. We found that when every agent uses the portfolio strategy there is still a range of parameters such that a) a portfolio exists and b) all agents are better off using it than not. Even when all agents do so, the optimum restart strategy leads to a situation no worse than when no one uses the restart strategy.
## 2 Restart Strategies
Anyone who has browsed the World Wide Web has probably discovered the following strategy: whenever a web page takes too long to appear, it is useful to press the reload button. Very often, the web page then appears instantly. This motivates the implementation of a similar but automated strategy for the frequent “web crawls” that many Internet search engines depend on. In order to ensure up-to-date indexes, it is important to perform these crawls quickly. More generally, from an electronic commerce perspective, it is also very valuable to optimize the speed and variance in the speed of transactions, automated or not, especially when the cost of performing those transactions is taken into account. Again, restart strategies may provide measurable benefits for the user.
The histogram in Figure 1 shows the variance associated with the download time for the text on the main page of over 40,000 web sites. Based on such observations, Lukose and Huberman recently proposed an economics based strategy for quantitatively managing the risk and cost of executing transactions on a network. By associating the cost with the time it takes to complete the transaction and the risk with the variance in that time, they exploited an analogy with the modern theory of financial portfolio management first suggested in a more general context in a previous paper . In modern portfolio theory, the fact that investors are risk-averse means that they may prefer to hold assets from which they expect a lower return if they are compensated for the lower return with a lower level of risk exposure. Furthermore, it is a non-trivial result of portfolio theory that simple diversification can yield portfolios of assets which have higher expected return as well as lower risk. In the case of latencies on the Internet, thinking of different restart strategies is analogous to asset diversification: there is an efficient trade-off between the average time a request will take and the variance or risk in that time.
Consider a situation in which a message has been sent and no acknowledgment has been received for some time. This time can be very long in cases where the latency distribution has a long tail. One is then faced with the choice to either continue to wait for the acknowledgment, to send out another message or, if the network protocols allow, to cancel the original message before sending out another. For simplicity, we consider the case in which it is possible to cancel the original message before sending out another. $`P(t)`$, the probability distribution of the time needed to successfully download a page under the restart strategy, is given by
$`P(t)=p(t)\quad \text{if }t\le \tau ,`$
$`P(t)=\left(1-\int _0^\tau p(t^{})\,dt^{}\right)P(t-\tau )\quad \text{if }t>\tau ,`$
where $`p(t)`$ is the probability distribution for the download time without restart. The latency and risk in loading a page are then given by
$`\langle t\rangle =\int _0^{\infty }tP(t)\,dt,`$
$`\sigma =\sqrt{\mathrm{Var}(t)}=\sqrt{\langle (t-\langle t\rangle )^2\rangle }.`$
If we allow an infinite number of restarts, the recurrence relation above can be solved in terms of the partial moments $`M_n(\tau )=\int _0^\tau t^nP(t)\,dt`$:
$`\langle t\rangle ={\frac{1}{M_0}}\left(M_1+\tau (1-M_0)\right),`$
$`\langle t^2\rangle ={\frac{1}{M_0}}\left(M_2+\tau (1-M_0)\left(2{\frac{M_1}{M_0}}+\tau \left({\frac{2}{M_0}}-1\right)\right)\right).`$
In the case of a log-normal distribution $`p(t)=\frac{1}{\sqrt{2\pi }\sigma t}\mathrm{exp}\left(-\frac{(\mathrm{log}t-\mu )^2}{2\sigma ^2}\right)`$, $`\langle t\rangle `$ and $`\langle t^2\rangle `$ can be expressed in terms of the error function:
$`M_n(\tau )={\frac{1}{2}}\mathrm{exp}\left({\frac{\sigma ^2n^2}{2}}+\mu n\right)\left(1+\mathrm{erf}\left({\frac{\mathrm{log}\tau -\mu }{\sigma \sqrt{2}}}-{\frac{\sigma n}{\sqrt{2}}}\right)\right).`$
The resulting $`\langle t\rangle `$ versus $`\sigma `$ curve is shown in Fig. 2(a). As can be seen, the portfolio has a cusp point that represents the restart time $`\tau `$ that is preferable to all others. No strategy exists in this case with a lower expected waiting time (at possibly the cost of a higher risk) or with a lower risk (at possibly the cost of a higher expected waiting time). The location of the cusp can be translated into the optimum value of the restart time to be used to reload the page.
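To make the construction of these curves concrete, the following sketch (ours) evaluates the partial-moment formulas above for an assumed log-normal with $`\mu =0`$ and $`\sigma =1`$ and scans the restart time for the optimum; the parameters behind the paper's figures may of course differ.

```python
import numpy as np
from scipy.special import erf

mu, sig = 0.0, 1.0   # assumed log-normal parameters (illustrative only)

def M(n, tau):
    """Partial moment M_n(tau) of the log-normal, from the erf formula."""
    z = (np.log(tau) - mu) / (sig * np.sqrt(2)) - sig * n / np.sqrt(2)
    return 0.5 * np.exp(sig**2 * n**2 / 2 + mu * n) * (1 + erf(z))

def mean_and_risk(tau):
    """<t> and its standard deviation for unlimited restarts at interval tau."""
    M0, M1, M2 = M(0, tau), M(1, tau), M(2, tau)
    t1 = (M1 + tau * (1 - M0)) / M0
    t2 = (M2 + tau * (1 - M0) * (2 * M1 / M0 + tau * (2 / M0 - 1))) / M0
    return t1, np.sqrt(t2 - t1**2)

taus = np.linspace(0.1, 20.0, 400)
tau_opt, (t_opt, risk_opt) = min(
    ((tau, mean_and_risk(tau)) for tau in taus), key=lambda p: p[1][0])
print("optimal restart time ~", tau_opt, " <t> =", t_opt, " sigma =", risk_opt)
```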
There are many variations to the restart strategy described above. In particular, in Fig. 2(b), we show the family of curves obtained from the same distribution used in (a), but with a restriction on the maximum number of restarts allowed in each transaction. Even a few restarts yield an improvement.
Clearly, in a network without any kind of usage-based pricing, sending many messages to begin with would be the best strategy as long as we do not overwhelm the target computer. On the other hand, everyone can reason in exactly the same way, resulting in congestion levels that would render the network useless. This paradoxical situation, sometimes called a social dilemma, arises often in the consideration of “public goods” such as natural resources and the provision of services which require voluntary cooperation . This explains much of the current interest in determining the details of an appropriate pricing scheme for the Internet, since users do consume Internet bandwidth greedily when downloading large multimedia files for example, without consideration of the congestion caused by such activity.
Note that the histogram in Figure 1 represents the variance in the download time between different sites, whereas a successful restart strategy depends on a variance in the download times for the same document on the same site. For this reason, we can not use the histogram in Figure 1 to predict the effectiveness of the restart strategy. While a spread in the download times of pages from different sites reduces the gains that can be made using a common restart strategy, it is possible to take advantage of geography and the time of day to fine tune and improve the strategy’s performance. As a last resort, it is possible to fine tune the restart strategy on a site per site basis.
As a final caution, we point out that with current client-server implementations, multiple restarts will be detrimental and very inefficient since every duplicated request will enter the server’s queue and will be processed separately until the server realizes that the client is not listening to the reply. This is an important issue for a practical implementation, and we neglect it here: our main assumption will be that the restart strategy only affects the congestion by modifying the perceived latencies. This will only be true if the restart strategy is implemented in an efficient and coordinated way on both the client and server side.
## 3 Restart Strategies for Multiple Users
While restart strategies based on a portfolio approach may improve returns for a single user, it is important to consider whether the use of such strategies leads to a different dilemma. What happens when every user makes use of the restart strategy? This question is analogous to the problem of adjustments to equilibrium encountered in finance .
In order to investigate the effects on congestion of many agents using a restart strategy, we performed a series of computer simulations of a group of agents deciding asynchronously whether to use the Internet or not. The agents base their decision on knowledge of the congestion statistics over a past window of time. As shown by Huberman and Glance , such a collective action dilemma leads to an optimal strategy which is basically determined by a threshold function: cooperate if parameters of the problem are such that a critical function exceeds a certain value and defect otherwise. In terms of the dilemma posed by using the Internet this translates into downloading pages if latencies are below a certain value and not doing so if they exceed a particular value. Thus, each agent is restricted to a simple binary decision, and the dynamics are those of a simple threshold model with uncertainties built in.
With this in mind, we model each agent as follows: an agent measures the current congestion, expressed in arbitrary “latency time” units. Since the latency is a function of the number of users, and their number fluctuates in time, it is reasonable to make the decision to use or not to use the restart strategy a function of the histogram of the load over a past window of time. This histogram is used to calculate the perceived latency time using different strategies, as described in the previous section. The agent compares the perceived latency to a threshold: if the former is larger, he decides to “cooperate” and refrains from using the Internet. If the latency is short enough, he decides to “defect”. Agents make these decisions in an asynchronous fashion, with exponentially distributed waiting times .
We assume that the load created by an agent who decides to make use of the network’s resources does not depend on whether or not he uses the restart strategy. This is only true if the server can efficiently detect multiple requests and cancel the superfluous ones, in order to avoid sending the same data to the client multiple times. Current implementations do not offer this feature. As a result, while a restart strategy may be beneficial to a single user, the net effect will be to cause more congestion.
In order to calculate the latency $`\lambda `$ as a function of the number of users $`N_D`$, we use the average waiting time for an M/M/1 queue , with a capacity one larger than the total number of agents $`N`$:
$`\lambda =1/(1+N-N_D).`$
Note that this simple M/M/1 queue model is not meant to provide anything more than an intuitive justification for the value of the latency and the qualitative behavior of its fluctuations (especially their correlations). In particular, the notion of a single M/M/1 queue is inconsistent with the idea of a restart strategy.
As it stands, this model would simply relax to an equilibrium in which the number of users is such that the latency is the threshold latency. To remedy this, we add fluctuations in the latency times using multiplicative noise (taken from a gaussian distribution with unit mean). This multiplicative noise will also be correlated in time (we model it as an Ornstein-Uhlenbeck process). This correlation is a crucial aspect of the model: agents who arrive at the same time should experience a similar congestion. If the noise were completely uncorrelated, agents might as well be on different networks. (We also performed simulations using additive noise. This produced no qualitative changes in the results).
A summary of the model is given in the appendix. It includes a list of all the parameters and the pseudocode for the simulation.
Since the restart strategy is designed to reduce the effect of congestion, we expect that as more users employ it the congestion will increase. However, this will be acceptable as long as the perceived latency with restarts does not become worse than the latency originally was without restarts. This should be the case, as the following mean field model indicates.
In a mean field model, $`f_d`$, the average fraction of agents who defect, is determined by the differential equation
$`{\frac{df_d}{dt}}=-\alpha (f_d-\rho (f_d)),`$
where in the context of this paper $`\rho (f_d)`$ is the average probability that the perceived latency is below threshold (i.e. the probability that a cooperating agent will decide to defect ), and $`\alpha `$ is the frequency with which agents evaluate their decisions to cooperate or not. The inverse of this frequency sets the time scale for all processes in the simulations. In the presence of imperfect knowledge modeled by a gaussian distribution (with width $`\sigma `$) of perceived utilities,
$`\rho (f_d)={\frac{1}{2}}\left(1+\mathrm{erf}\left({\frac{U(f_d)-U_c}{\sigma \sqrt{2}}}\right)\right)`$
where $`U(f_d)`$ is the utility of defecting given that a fraction $`f_d`$ of agents is defecting, and $`U_c`$ is the threshold utility below while users will cooperate. Expanding around values of $`f_d`$ such that $`U(f_d)=U_c`$, and setting the right hand side of the differential equation equal to $`0`$ in order to find the equilibrium point, we obtain
$`U(f_{equ})=U_{equ}=U_c+\sqrt{2}\sigma (2f_{equ}-1).`$
Thus, whatever the details of the threshold based multi-agent model, an average equilibrium number of defectors can be mapped onto an expected utility (where the utility may be a combination of both average latency and the variance in the latency). Clearly, the restart strategy will always provide a higher utility than no restart strategy, i.e. $`U_{restart}(f)\ge U_{norestart}(f)`$. However, the threshold utility remains the same. Thus, as Figure 3 illustrates for the case in which $`\sigma =0`$, the equilibrium point is located at the intersection marked 1 when no agent is using the restart strategy, and by point 2 when all agents use the restart strategy. In the first case, a single agent who is using the restart strategy will benefit from a much larger utility (point 1’). However, when most agents do so, this advantage is lost, and the converse is true: if a few agents are opting to not use the restart strategy, they will experience a utility below threshold (point 2’) and will cooperate (by not using bandwidth greedily). There are two other interesting conclusions to the analysis above if $`\sigma >0`$. First, if $`f_{equ}>0.5`$ the perceived equilibrium utility will be above threshold. Second, the average perceived utility will increase when all agents use the restart strategy. However, this latter effect will be small if the change in $`f_{equ}`$ is small. Both these results are confirmed by the simulations.
To summarize, the utility perceived by the agents using the dominant strategy will not decrease as the strategy switches from no restart to restart. However, if a small number of agents decide not to switch to the restart strategy, they will obtain a lower utility than before. Thus, every agent is always better off using the restart strategy, even if everyone does so. There is no dilemma. However, such an argument does not model the effect of dynamics and fluctuations in a large agent population, which we now study.
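The equilibrium point discussed above can be located numerically from the fixed-point condition $`f_d=\rho (f_d)`$. The sketch below is ours; it uses the M/M/1 latency introduced above in place of the utility and identifies the spread $`\sigma `$ with the simulation noise amplitude, both of which are assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

N, lam_c, sigma = 500, 0.05, 0.01     # agents, threshold latency, spread

def latency(f_d):
    """M/M/1-style latency when a fraction f_d of the N agents defect."""
    return 1.0 / (1 + N - f_d * N)

def rho(f_d):
    """Probability that the perceived utility is above threshold, i.e. that
    an agent defects; here U - U_c is identified with lam_c - latency."""
    return 0.5 * (1 + erf((lam_c - latency(f_d)) / (sigma * np.sqrt(2))))

# equilibrium of df_d/dt = -alpha*(f_d - rho(f_d)): solve f_d = rho(f_d)
f_equ = brentq(lambda f: f - rho(f), 1e-9, 1 - 1e-9)
print("equilibrium defector fraction:", round(f_equ, 3))
print("equilibrium latency:", round(latency(f_equ), 4))
```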
## 4 Results
A typical trace of the number of defectors (network users) as a function of time is shown in Figure 4, with and without use of the restart strategy. The simulations were performed with 500 agents, a threshold latency $`\lambda _c`$ of 0.05 and a variance in the noise of 0.01 with a correlation time of 1. The histogram was collected with a relaxation time of 10000 (in units of $`\alpha ^1`$). While the differences between the two traces in Fig. 4 is difficult to see, the average number of defectors and the amplitude of the fluctuations were both larger when agents made use of the restart strategy, as expected.
The two portfolio curves that result from the simulation in Figure 4 are shown in Figure 5. The dotted line indicates the threshold latency: when agents do not use the restart strategy, the end point of the curve (corresponding to an infinitely long restart time) is expected to fall on the dotted line. In (a), when agents do use the restart strategy, the waiting time at the cusp equals the threshold latency. As expected, the optimum point (cusp) shifts to higher average and variance as more agents make use of the restart strategy. Similarly, a single agent who does not make use of the restart strategy while everybody else does will experience larger waiting times and larger risk, on average, than he would have experienced with nobody using the restart strategy. Note that the section of the portfolio curves corresponding to very short restart times falls on the $`y=x`$ line – this can be verified by expanding the expressions for $`\langle t\rangle `$, $`\langle t^2\rangle `$ and $`p(t)`$ for small restart times $`\tau `$.
Figure 6 shows the behavior of the expected waiting time as a function of the restart time. As more agents make use of the restart strategy, the optimum restart time (corresponding to the minimum expected waiting time) shifts to larger values.
We performed an extensive set of simulations to check these results as the parameters of the model were varied. Very short correlation times washed out the interactions between different agents, effectively placing them on different networks – the restart strategy had no effect. As the correlation time was made larger, so that a large fraction of the agents effectively updated their decisions simultaneously, the restart strategy became more effective. In fact, other than causing a shift in the equilibrium latencies, noise amplitude played a relatively minor role compared to the fluctuations due to the interaction of agents (modelled by the correlations in the noise).
To first order, the amplitude of the noise fluctuations had a relatively minor effect. However, Figure 7 illustrates one of the predictions made by the mean field theory illustrated in Figure 3: a large variance will lead to an increased utility (lower latency) if $`f_{equ}>0.5`$ but to a decreased utility (higher latency) if $`f_{equ}<0.5`$.
Figure 8 illustrates the portfolios that result from a mix of agents using and not using the restart strategy. As the number of agents aware of the restart strategy increases, the optimum point shifts to lower variance but higher mean latency, until the mean latency at optimum restart equals the threshold. Until that point, the variance for agents not using the restart strategy also decreases. Once the number of restart agents becomes too large, the network is too congested and the remaining agents defect.
## 5 Conclusion
In this paper we have shown that when every agent uses a portfolio strategy in order to deal with Internet congestion, there is a range of parameters such that a) a portfolio exists and b) all agents are better off using it than not. Even when all agents use the optimum restart strategy, the situation is no worse than the situation in which no one uses the restart strategy. This solution was obtained both by using mean field arguments and computer simulations that took into account the dynamics of agent decisions and discounted past information on network conditions. These solutions, which show the existence of a possibly noisy fixed point, illustrate the power of simulations in understanding multiagent dynamics under varying dynamical constraints.
In economics, the existence of a unique and stable equilibrium is usually considered without taking dynamics into account. Using a physics metaphor, this is equivalent to overdamped or viscous dynamics in which velocity (rate of change in time) does not matter. And yet, we know of a number of economic systems which are continuously changing and sometimes are unable to reach an equilibrium. Agent based simulations , combined with simple analytic models, are powerful tools to study dynamic effects, since new assumptions can be implemented and tested rapidly, and be compared to mean field theories in order to verify their applicability.
SMM was supported by the John and Fannie Hertz Foundation.
## Appendix A Model Parameters and Pseudocode
### A.1 Parameters
* Number of agents $`N`$
* Number of agents using restart strategy $`N_R`$
* Histogram relaxation rate $`\tau _H`$
* Noise correlation time $`\tau `$
* Threshold latency $`\lambda _c`$
* Noise variance $`\sigma `$ (noise mean is 1.0)
The reevaluation rate $`\alpha `$ for a single agent sets the time scale for the model. We also need a function to map the number of agents using the network into an average expected latency. We use the functional dependence expected for an M/M/1 queue with a capacity larger than the number of users.
### A.2 Pseudocode
REPEAT
Pick a time step $`\mathrm{\Delta }t`$
exponential random deviate with mean $`\frac{1}{N\alpha }`$
Update running user histogram
Discount histogram using $`\tau _H`$
Add $`\mathrm{\Delta }t`$ to appropriate bin
Pick an agent randomly
Compute the perceived average latency $`\lambda `$
based on the running histogram and the
chosen agent’s strategy
Multiply $`\lambda `$ by $`1+N(0,\sigma ^2)`$
$`N(m,\sigma ^2)`$ is a normal distribution with mean $`m`$ and variance $`\sigma ^2`$
IF ($`\lambda <\lambda _c`$)
Agent does use the network (defect)
ELSE
Agent does not use the network (cooperate)
END REPEAT
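For completeness, a minimal runnable translation of this pseudocode is sketched below (our own, not the original simulation code). Where the pseudocode is silent we made explicit choices: the discounted histogram is collapsed to a discounted running mean of the latency (a full implementation would evaluate the agent's restart-strategy latency from the histogram, e.g. via the partial-moment formulas of Sec. 2), and the correlated multiplicative noise is updated as an Ornstein-Uhlenbeck process.

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha = 500, 1.0               # agents, reevaluation rate (the time scale)
tau_H, tau_n = 1e4, 1.0           # histogram relaxation, noise correlation time
lam_c, sigma = 0.05, 0.1          # threshold latency, noise std (illustrative)

defect = np.zeros(N, dtype=bool)  # True = agent uses the network
noise = 1.0                       # multiplicative noise with unit mean
lam_avg = 0.5                     # discounted running mean of the latency

def latency(n_defect):
    """M/M/1-style latency; capacity is one larger than the number of agents."""
    return 1.0 / (1 + N - n_defect)

for _ in range(200_000):
    dt = rng.exponential(1.0 / (N * alpha))        # asynchronous time step
    w = np.exp(-dt / tau_H)                        # discount old information
    lam_avg = w * lam_avg + (1 - w) * latency(defect.sum())
    # Ornstein-Uhlenbeck update keeping mean 1 and stationary std sigma:
    noise += (1.0 - noise) * dt / tau_n + sigma * np.sqrt(2 * dt / tau_n) * rng.normal()
    i = rng.integers(N)                            # pick an agent at random
    defect[i] = lam_avg * noise < lam_c            # defect if below threshold

print("final fraction of defectors:", defect.mean())
```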
# A new hierarchy of avalanches observed in Bak-Sneppen evolution model
## Abstract
A new quantity, $`\overline{f}`$, denoting the average fitness of the ecosystem, is introduced in the Bak-Sneppen model. Through this new quantity, a new hierarchy of avalanches, the $`\overline{f}_0`$-avalanche, is observed in the evolution of the Bak-Sneppen model. An exact gap equation governing the self-organization of the model in terms of $`\overline{f}`$ is presented. It is found that the self-organized threshold $`\overline{f}_c`$ of $`\overline{f}`$ can be exactly obtained. Two basic exponents of the new avalanche, $`\tau `$, the avalanche distribution exponent, and $`D`$, the avalanche dimension, are given through simulations of one- and two-dimensional Bak-Sneppen models. It is suggested that $`\overline{f}`$ may be a good quantity in determining the emergence of criticality.
PACS number(s): 87.10.+e, 05.40.+j, 64.60.Lx
The term avalanche may originate from phenomena occurring in nature. It refers to a sequence of events which may cause a devastating catastrophe. The phenomena of avalanches are ubiquitous in nature. The canonical example of an avalanche in nature is the mountain slide, during which a great mass of snow and ice at high altitude slides down a mountainside, often carrying with it thousands of tons of rock and sometimes destroying forests, houses, etc. in its path . Since avalanches occur everywhere, from the sandpile or ricepile, to the Himalayan sandpiles; from the river network, to earthquakes, starquakes and even solar flares; from biology, to economics , etc, it has hence been proposed that avalanches may be the underlying mechanism of the formation of various geographical structures and complex organisms, e.g., brains, etc. Furthermore, avalanches may be the origin of fractals in the world. From this point of view, avalanches can be viewed as the immediate results of complex systems, and hence can be used as theoretical justification for catastrophism. This is because if the real world is complex then catastrophes are inevitable and unavoidable in biology, history and economics. It has even been proposed by Meng et al that the formation of colorless gluon clusters may be attributed to avalanches triggered by the emission or absorption of gluons.
Plenty of patterns provided by nature exhibit coherent macroscopic structures developed at various scales and do not exhibit elementary interconnections. They immediately suggest seeking a compact description of the spatio-temporal dynamics based on the relationships among macroscopic elements rather than lingering on their inner structure . That is, one needs to condense information when dealing with complex systems; perhaps only such a condensed description is efficient and can turn out successful.
As known, an avalanche is a macroscopic phenomenon driven by local interactions. The size of an avalanche, spatial as well as temporal, may be sensitive to the initial configuration, or more generally, the detailed dynamics of the system. However, the distribution of avalanches, the Gutenberg-Richter law , or equivalently, a power law, does not depend on such details, due to the universality of complexity. Hence, in this sense avalanche study may be an appropriate tool for investigating various complex phenomena. On the other hand, observation of a great variety of patterns, such as self-similar, fractal behavior in nature , $`\frac{1}{f}`$ noise in quasars , river flow and brain activity , and many natural and social phenomena, including earthquakes, economic activity and biological evolution, suggests that these phenomena are signatures of spatiotemporal complexity and can be related via scaling relations to the fractal properties of the avalanches . This suggests the occurrence of these general, empirical phenomena may be attributed to the same underlying avalanche dynamics. Thus, one can see that the study of avalanches is crucial in investigating the critical features of complex systems. It can even be inferred that avalanche dynamics provides much useful information for understanding the general features of the ubiquitous complexity around us. That is probably also why this paper focuses on this kind of topic.
Despite the fact that avalanches may provide insight into complexity, their definition can be vastly different for various systems, for the same kinds of systems, and even for the same system. Let us recall some definitions of avalanches given before. In the sandpile model , an avalanche is triggered by adding a grain or several grains of sand to the system at some time, causing the topple of some sites, which may later on cause some other sites to topple. The avalanche is considered over when the heights of all sites are less than the critical value, say, 4. In the Bak-Sneppen model , several kinds of avalanches have been presented, for instance, the $`f_0`$-avalanche, the G(s)-avalanche, the forward avalanche, the backward avalanche, etc. Despite the fact that these definitions of avalanches may reflect various hierarchical structures, they manifest the same underlying fractal feature of the ecosystem, i.e., self-organized criticality. Relating all these kinds of avalanches, one can provide a general definition of the avalanche for the Bak-Sneppen model: an avalanche corresponds to sequential mutations below a certain threshold. One can see that this kind of definition ensures that the mutation events within a single avalanche are causally and spatially connected. In addition, with this definition there exists a hierarchy of avalanches, each defined by its respective threshold. It is the hierarchical structure of the avalanches that exhibits the fractal geometry of the system and that implies complexity.
It can be inferred from the definition of an avalanche that there always exists a triggering event which initiates the avalanche, causing it to spread within the system, and whose effect disappears at the end of the avalanche. Up to now, the observation of avalanches through triggering events has been based on the individual level, despite the fact that an avalanche is a macroscopic and global phenomenon of the system studied, in the laboratory and in nature as well. Take the sandpile model: the triggering event is adding a grain or several grains of sand to some sites and causing them to topple, thus initiating an avalanche. Consider another model, the Bak-Sneppen model, in which the corresponding triggering event of an avalanche is a mutation of the extremal species causing the fitness of the extremal site at the next time step to be less than a certain threshold. One can see that in the above two models triggering events are directly concerned with features of individuals, e.g., the height of a site in the former model, or the fitness of the extremal site in the latter one. It can be readily learned that triggering events, whether in the laboratory or in nature, are not directly related to the global features of the systems, although an avalanche can span the whole system. Generally speaking, the observation of avalanches is done through some feature of individuals, instead of that of the system as a whole. However, general features of a complex system may provide insight into the tendency of the evolution of the system. Specifically, global features of a complex system may help one to understand the critical behavior of the system. That is, it is feasible that some characteristic quantities representing the corresponding global features can be employed in describing the critical behavior of the system. Furthermore, these quantities ought to be related to avalanche dynamics, and hence can be used to describe the complexity that emerges in a variety of complex systems. Our aim, then, is to search for or define such quantities and to observe a new kind of avalanche based on them. Indeed, we obtain a new quantity which can be used to define a new hierarchy of avalanches in the Bak-Sneppen model. We suggest that this quantity may be used as a criterion in determining the emergence of criticality. It will be shown later that this new kind of avalanche still exhibits spatio-temporal complexity in a different context.
Consider the Bak-Sneppen model , which is a very simple evolution model of a biological ecosystem. Despite the simplicity of the model itself, it can exhibit the skeleton of species evolution, punctuated equilibrium behavior. Detailed information about this original model of evolution can be found in Ref. . In the Bak-Sneppen model, each species is represented by a single fitness. The fitness may represent the population of a whole species or the living capability of the species . Hence, one can see that fitness is a vital quantity and is the only one describing the model. No other additional quantities are considered in this oversimplified model. Thus, the fitness is the most important feature of the species and of the model. So, when considering a global feature of the species ecosystem, one has to relate this general feature to the feature of individuals. That is, the general feature of the ecosystem should be associated with the fitness of the species. As previously mentioned, a corresponding quantity should be found to describe this general feature. Before presenting such a quantity let us briefly review the Bak-Sneppen model so that readers who are not so familiar with this model can have a rough idea of what it is about.
The Bak-Sneppen model is perhaps the simplest model of self-organized criticality. In this “toy” model, random numbers, $`f_i`$, chosen from a flat distribution, $`p(f)`$, are assigned independently to each species located on a d-dimensional lattice of linear size $`L`$. At each time step, the extremal site, i.e., the species with the smallest random number, together with its $`2d`$ nearest neighboring sites, is chosen for updating by assigning $`2d+1`$ new random numbers also chosen from the same uniform distribution $`p(f)`$ to them. This updating process continues indefinitely. After a long transient process the system reaches a statistically stationary state where the density of random numbers in the system vanishes for $`f<f_c`$ and is uniform above $`f_c`$, the self-organized threshold.
Having briefly introduced the model, next, we will introduce a new quantity. Please note that the model we used is still Bak-Sneppen model. We observe the evolution of the model without adding anything to the model. We simply introduce the new quantity based on the fitness of the species. Define the average fitness, denoted by $`\overline{f}`$, as below,
$$\overline{f}=\frac{1}{L^d}\sum _{i=1}^{L^d}f_i$$
(1)
, where $`f_i`$ is the fitness of the $`i`$th species. Here, we refer to $`\overline{f}`$ as the average fitness of the whole system and as a global quantity. $`\overline{f}`$ may represent the average population or average living capability of the whole ecosystem. A large $`\overline{f}`$, i.e., a high average fitness, may imply that the total population of the system is immense or its average living capability is great, and vice versa. The initial value of $`\overline{f}`$, denoted by $`\overline{f}(0)`$, can be easily calculated. As known, at the beginning of the evolution the $`f_i`$’s are uniformly distributed between (0,1). So, for an infinite system, $`\overline{f}(0)`$ equals 0.5. However, for a finite-size system $`\overline{f}(0)`$ will fluctuate slightly due to the finite size of the system, which is not so important in the subsequent evolution. We will simply consider the average value, 0.5. It should be pointed out that $`\overline{f}(0)`$ does not reflect the correlation among species. As the evolution goes on, the correlation among the species within the system will become more and more distinctive. Denote by $`\overline{f}(s)`$ the average fitness of the system at time step $`s`$ in the evolution. Hence, in the large-$`s`$ limit, i.e., $`s\gg L^d`$, $`\overline{f}(s)`$ may partly reflect information about correlations. As a global quantity, $`\overline{f}(s)`$ should include information concerning the interaction between species. Hence, it is natural to expect that $`\overline{f}`$ may be a good quantity for describing the feature of the system as a whole.
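As a concrete reference point for the definitions above, the update rule and the tracking of $`\overline{f}(s)`$ take only a few lines; the following one-dimensional sketch is ours (periodic boundary conditions assumed), not the code used for the figures below.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 200                                # 1D system size
f = rng.random(L)                      # initial fitnesses, uniform on (0,1)
fbar = []                              # record of f-bar(s) at each time step

for s in range(1_000_000):
    i = int(np.argmin(f))              # extremal site: smallest fitness
    for j in (i - 1, i, (i + 1) % L):  # update it and its two neighbors
        f[j] = rng.random()            # periodic boundaries (index -1 wraps)
    fbar.append(f.mean())              # the global quantity f-bar(s)

print("f-bar averaged over late times:", np.mean(fbar[-10_000:]))
```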
Before introducing the new hierarchy of avalanches it is necessary and worthwhile to know some features of the new quantity, $`\overline{f}(s)`$. Firstly, let us present some theoretical analysis. Recalling the definition of $`\overline{f}`$, one can see that $`\mathrm{\Delta }\overline{f}(s)=\overline{f}(s+1)-\overline{f}(s)`$ approaches zero in the $`L\to \infty `$ limit. An observer cannot even perceive the change of $`\overline{f}(s)`$ during a short time period since it is vanishingly small . However, the changes at every time step accumulate to form a relatively distinctive change after a long time, which is perceivable for the observer. This long time period is required to be much greater than the system size, i.e., $`s\gg L^d`$. In other words, $`\overline{f}(s+s_0)-\overline{f}(s_0)`$ may only be “noticed” when $`s\gg L^d`$. Thus, one cannot expect $`\overline{f}(s)`$ to vary greatly from the current time step to the next, which is vastly different from the variation of the fitness of the extremal site. The latter can vary from one value, say, $`0`$, to the next value, $`1`$, between two successive time steps. It should also be expected that there exists an increasing tendency of $`\overline{f}`$ versus time $`s`$. This is because at each time step the least fitness is eliminated from the system, so the general fitness of the whole system will tend to increase. And due to the slow fluctuation of $`\overline{f}`$ the increase behaves like a staircase, i.e., a Devil’s staircase . One may then expect to observe such behavior, i.e., punctuated equilibrium , of $`\overline{f}`$ in the evolution of the Bak-Sneppen model.
In order to show the behavior of $`\overline{f}`$ versus time $`s`$ we performed simulations of the Bak-Sneppen model. At each time step, in addition to the updating of the extremal sites, we also track the signal $`\overline{f}(s)`$. Fig. 1 shows the evolution of $`\overline{f}(s)`$ versus time during a time period for a one-dimensional Bak-Sneppen model of size $`L=200`$. This plot shows that $`\overline{f}`$ varies slightly between two successive time steps but tends to increase in the long evolution process. Simulation of a two-dimensional model of size $`L=20`$ exhibits similar behavior of the evolution of $`\overline{f}(s)`$.
Before searching for the punctuated equilibrium behavior let us first introduce another quantity, F(s), the gap of the average fitness. The definition of F(s) is given as follows: the initial value of F(s) is equal to $`\overline{f}(0)`$. After $`s`$ updates, a gap $`F(s)>F(0)`$ opens up. The current gap F(s) is the maximum of all $`\overline{f}(s^{})`$, for all $`0\le s^{}<s`$. Fig. 2 shows F(s) as a stepwise increasing function of s during the transient for a one-dimensional Bak-Sneppen model of size $`L=100`$. Actually, the gap is an envelope function that tracks the increasing peaks in $`\overline{f}(s)`$. Indeed, punctuated equilibrium behavior appears in terms of this new quantity, $`\overline{f}(s)`$.
By definition , the separate instances when the gap F(s) jumps to its next higher value are separated by avalanches. Avalanches correspond to plateaus in F(s) during which $`\overline{f}(s)<F(s)`$, which ensures that the mutation events within a single avalanche are spatially and causally connected. A new avalanche is initiated each time the gap jumps, and it ends when the gap jumps again. As the gap increases, the probability for the average fitness, $`\overline{f}`$, to fall below the gap increases also, and larger and larger avalanches typically occur.
We can also obtain an exact gap equation for F(s), similar to the one found for the Bak-Sneppen model in Ref. . Suppose the current gap of the system is F(s). If F(s) is to be increased by $`\mathrm{\Delta }F`$, i.e., from F(s) to $`F(s)+\mathrm{\Delta }F`$, the average number of avalanches needed is $`N_{\mathrm{av}}=\mathrm{\Delta }FL^d/(1-F(s))`$. We can guarantee $`N_{\mathrm{av}}\gg 1`$ by selecting $`\mathrm{\Delta }F\gg L^{-d}`$. In the large $`L`$ limit, $`N_{\mathrm{av}}`$ can be arbitrarily large. Hence, in this limit, the average number of time steps required to increase the gap from F(s) to $`F(s)+\mathrm{\Delta }F`$ is given by the interval $`\mathrm{\Delta }s=S_{\mathrm{F}(\mathrm{s})}N_{\mathrm{av}}=S_{\mathrm{F}(\mathrm{s})}\mathrm{\Delta }FL^d/(1-F(s))`$, where $`S_{\mathrm{F}(\mathrm{s})}`$ is the average size of the avalanches forming the plateaus of the gap function. By the law of large numbers the fluctuation of this interval around its average value vanishes. In the $`\mathrm{\Delta }F\to 0`$ limit, $`\mathrm{\Delta }s\to 0`$. Taking the continuum limit we obtain the differential equation for F(s),
$$\frac{dF(s)}{ds}=\frac{1-F(s)}{L^dS_{F(s)}}.$$
(2)
Note this equation is exact.
All SOC models, e.g., the BTW sandpile model , the earthquake models , or the Bak-Sneppen model , exhibit self-organized criticality in terms of a power-law distribution of avalanches. It is natural to expect that we can observe SOC in terms of the hierarchical structure of $`\overline{f}`$, which itself manifests complexity. Using this new quantity to define the avalanche is simply another way of observing the same phenomenon that can be observed in other ways; as is well known, the emergence of complexity is independent of the tools used to observe it, provided those tools are efficient and powerful enough. Similarly to the definitions used in Refs. , we present the definition of the $`\overline{f}_0`$-avalanche, where $`\overline{f}_0`$ is simply a parameter between 0.5 and 1 used to define the avalanche. Suppose that at time step $`s_1`$, $`\overline{f}(s_1)`$ is larger than $`\overline{f}_0`$. If, at time step $`s_1+1`$, $`\overline{f}(s_1+1)`$ is less than $`\overline{f}_0`$, this initiates a creation-annihilation branching process. The avalanche still continues at time step $`s^{\prime }`$ if $`\overline{f}(s)<\overline{f}_0`$ for all $`s_1<s<s^{\prime }`$, and the avalanche stops, say at time step $`s_1+S`$, when $`\overline{f}(s_1+S)>\overline{f}_0`$. In terms of this definition, the size of the avalanche is the number of time steps between subsequent punctuations of the barrier $`\overline{f}_0`$ by the signal $`\overline{f}(s)`$; in the above example, the size of the avalanche is $`S`$. It can be clearly seen from Fig. 1 that this definition guarantees the hierarchical structure of avalanches: larger avalanches consist of smaller avalanches. As $`\overline{f}_0`$ is lowered, bigger avalanches are subdivided into smaller ones. Hence, the statistics of $`\overline{f}_0`$-avalanches will inevitably have a cutoff if $`\overline{f}_0`$ is not chosen to be the value of $`\overline{f}(s)`$ at the critical state, denoted by $`\overline{f}_c`$. We can also define the $`\overline{f}_c`$-avalanche. Nevertheless, the $`\overline{f}_0`$-avalanche in the stationary state has the same scaling behavior as the $`\overline{f}_c`$-avalanche provided that $`\overline{f}_0`$ is close to $`\overline{f}_c`$. We measured the $`\overline{f}_0`$-avalanche distribution for one- and two-dimensional Bak-Sneppen models. The simulation results are given in Fig. 3. The exponent $`\tau `$, defined by $`P(S)\sim S^{-\tau }`$, is 1.80 for the 1D model and 1.725 for the 2D model. Another exponent, $`D`$, the avalanche dimension , defined by $`n_{\mathrm{cov}}\sim S^{D/d}`$, where $`n_{\mathrm{cov}}`$ is the number of sites covered by an avalanche and $`d`$ is the space dimension, is also measured. We find $`D=2.45`$ for the 1D model and $`D=1.55`$ for the 2D model.
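A sketch of how the $`\overline{f}_0`$-avalanche sizes and the exponent $`\tau `$ may be extracted from the simulated signal (continuing the sketches above; the model should first be run to its stationary state so that $`\overline{f}(s)`$ actually fluctuates around the barrier, and a proper analysis would use logarithmic binning rather than the crude least-squares slope shown here):

```python
import numpy as np

def f0_avalanche_sizes(fbar, f0):
    """Sizes of f0-avalanches: lengths of the runs of consecutive time
    steps with f_bar(s) < f0, i.e. the numbers of steps between
    successive punctuations of the barrier f0 by the signal."""
    sizes, run = [], 0
    for value in fbar:
        if value < f0:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    return np.array(sizes)

sizes = f0_avalanche_sizes(fbar, f0=0.821)   # 1D barrier value used in Fig. 3
if sizes.size:
    S, counts = np.unique(sizes, return_counts=True)
    tau = -np.polyfit(np.log(S), np.log(counts), 1)[0]   # P(S) ~ S^-tau
    print(tau)
```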
Up to now, one question remains unsolved: the critical value of $`\overline{f}`$, $`\overline{f}_c`$. This may be a hard problem if the system size is finite, but when we consider the $`L\to \mathrm{\infty }`$ limit everything becomes smooth and can be easily accomplished. Recall the evolution of the Bak-Sneppen model, or the detailed studies of this model : in the $`L\to \mathrm{\infty }`$ limit the density of sites with random numbers is uniform above G and vanishes below G, where G is the gap of the extremal site; detailed information on it can be found, for instance, in Ref. . Hence, one obtains,
$$\underset{L\to \mathrm{\infty }}{lim}\overline{f}(s)=\underset{L\to \mathrm{\infty }}{lim}\frac{1+G(s)}{2}.$$
(3)
Interestingly, inserting Eq. (3) into the gap equation for G found in Ref. , one immediately obtains Eq. (2). Note that Eq. (2) is also valid for finite-size systems. From Eq. (3) one immediately obtains,
$$\underset{L\to \mathrm{\infty }}{lim}\overline{f}_c=\underset{L\to \mathrm{\infty }}{lim}\frac{1+f_c}{2}.$$
(4)
Hence, $`\overline{f}_c`$ can be easily determined from Eq. (4). Using the values of $`f_c`$ provided by Refs. , one obtains $`\overline{f}_c=0.83351`$ for the 1D model and $`\overline{f}_c=0.66443`$ for the 2D model. However, Eqs. (3) and (4) are not valid for a finite-size system, since one cannot ensure that the distribution of random numbers in a finite-size system is really uniform. Due to the fluctuation of $`\overline{f}(s)`$ it is extremely difficult to determine exactly the critical value of $`\overline{f}`$ for a finite-size system. One may estimate $`\overline{f}_c`$ for a finite-size system from simulations. We find that this value depends only weakly on the system size; when the system size is very large, $`\overline{f}_c`$ approaches the corresponding value for infinite systems. Actually, the value of $`\overline{f}_c`$ itself is not so important. Fig. 4 shows the fluctuation of $`\overline{f}`$ for a one-dimensional model of size $`L=200`$ near its critical state. Note that in this figure $`\overline{f}`$ fluctuates slightly around some average value and no longer tends to increase over a long time; we may say that the system has approached its stationary state. In this sense, we suggest that $`\overline{f}`$ may be a good quantity for determining the emergence of criticality. That is, the large fluctuation of $`f_{\mathrm{min}}`$ does not prevent us from determining when the critical state is approached; we need only know the behavior of $`\overline{f}`$. This is more reasonable and more easily accepted since $`\overline{f}`$ is a global quantity that condenses information about the system and its components.
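As a quick consistency check of Eq. (4), using the literature values of $`f_c`$ (the numbers below are the assumed inputs behind the quoted $`\overline{f}_c`$ values):

```python
# f_c of the Bak-Sneppen model from the cited literature (assumed values):
f_c = {"1D": 0.66702, "2D": 0.328855}
for dim, fc in f_c.items():
    print(dim, (1 + fc) / 2)   # -> 0.83351 (1D) and ~0.66443 (2D), as quoted
```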
Why do we call the $`\overline{f}_0`$-avalanche a new hierarchy of avalanches? Firstly, this kind of avalanche is defined on the global level, in terms of the new global quantity $`\overline{f}`$; the background of this definition is different from any used before, and this new kind of avalanche reflects the fractal geometry in terms of a global feature. Secondly, one may notice that the exponents $`\tau `$ obtained in our simulations differ from the ones found in Ref. . From this point of view, one can judge that this kind of avalanche is totally different from any observed before. Hence, it is a new kind of avalanche.
Self-organized criticality is suggested by Bak et al. to be the "fingerprint" of a large variety of complex systems (they call systems with variability complex) and is represented by a scale-free line on a log-log plot. In order to know the criticality of a system one needs to know when the system reaches the stable stationary state where the phase transition occurs. It is extremely difficult, and almost impossible, to know when a system in nature approaches, let alone reaches , its critical state; one can only study the ubiquitous fractal geometrical structure carved by avalanches through thousands of millions of years. However, in laboratory experiments and computer simulations one needs a criterion to judge when the stationary state is approached, or even reached, since the statistics of avalanches may only be gathered in the critical state of the system. In the Bak-Sneppen model, when the extremal signal, $`f_{\mathrm{min}}`$, approaches the self-organized threshold, $`f_c`$, the ecosystem reaches its stationary state. However, $`f_{\mathrm{min}}`$ itself fluctuates greatly from time to time, which makes it very difficult to determine the appearance of criticality. Thus, we propose the new quantity $`\overline{f}`$ as a candidate for judging the emergence of criticality. As shown, $`\overline{f}`$ is relatively stable over a short time period. Hence, when $`\overline{f}`$ no longer tends to increase, we may say that the system has approached its stationary state, and we can observe criticality over a rather long time period. Surely, the emergence of criticality is rather complex and other physical mechanisms are needed; this is what we will consider in future work.
In conclusion, a new hierarchy of avalanches is observed in the Bak-Sneppen model. A new quantity, $`\overline{f}`$, is presented and suggested as a possible candidate for determining the emergence of criticality. An exact gap equation and simulation results are also given.
This work was supported by NSFC in China. We thank Prof. T. Meng for correspondence and helpful discussions.
Figure Captions
Fig. 1: The variation of $`\overline{f}`$ versus time during a time period for a (a) one-dimensional Bak-Sneppen model of size $`L=200`$ and (b) two-dimensional Bak-Sneppen model of size $`L=20`$. The plots show the hierarchical structure of $`\overline{f}`$.
Fig. 2: Punctuated equilibrium of $`\overline{f}`$ for a (a) one-dimensional Bak-Sneppen model of size $`L=200`$ and (b) two-dimensional Bak-Sneppen model of size $`L=20`$. We track the increasing signal of $`\overline{f}(s)`$, i.e., F(s).
Fig. 3: Distribution of $`\overline{f}_0`$-avalanches for a (a) one-dimensional Bak-Sneppen model of size $`L=200`$ and (b) two-dimensional Bak-Sneppen model of size $`L=20`$. $`\overline{f}_0`$ is chosen to be 0.821 for (a) and 0.648 for (b). The slopes are -1.800 and -1.725 for the two plots respectively.
Fig. 4: The fluctuation of $`\overline{f}`$ around the critical state of a one-dimensional Bak-Sneppen model of size $`L=200`$.
# The WARPS survey: III. The discovery of an X-ray luminous galaxy cluster at 𝑧=0.833 and the impact of X-ray substructure on cluster abundance measurements
## 1. Introduction
The space density of distant clusters of galaxies is a measurable quantity whose theoretical value is highly sensitive to the physical and cosmological parameters of models of structure formation and evolution (e.g., Oukbir & Blanchard, 1992; Bahcall & Cen, 1992; Viana & Liddle, 1996; Carlberg et al., 1997; Oukbir & Blanchard, 1997; Eke et al., 1998).
A large number of independent measurements of the cluster X-ray luminosity function (XLF) have been performed in the past decade. Given the diversity of the original observations used in these studies and of the data analysis techniques applied, the good agreement of the results is impressive. Virtually all studies agree that the abundance of clusters of low to intermediate X-ray luminosity ($`L_\mathrm{X}<4\times 10^{44}`$ erg s<sup>-1</sup> , 0.5–2.0 keV) does not change significantly out to $`z\sim 0.8`$ (Gioia et al., 1990b; Henry et al., 1992; Burke et al., 1997; Ebeling et al., 1997; Vikhlinin et al., 1998a; Jones et al., 1998; Rosati et al., 1998; DeGrandi et al. 1999; Nichol et al., 1999).
At higher X-ray luminosities, however, a consistent picture has yet to emerge. Reports of strong negative evolution at $`L_\mathrm{X}>3\times 10^{44}`$ erg s<sup>-1</sup> (0.3–3.5 keV) already at moderate redshifts just beyond $`z=0.3`$ (Gioia et al., 1990b; Henry et al., 1992; see also Nichol et al., 1997, for a contrary result) are supported by the findings of Vikhlinin et al. (1998a), although the latter rest on a statistically less secure basis. If these results are correct, a much more pronounced dearth of X-ray luminous clusters is expected at yet higher redshift, unless cluster evolution is a strongly non-linear, almost discontinuous function of X-ray luminosity and redshift. However, Luppino & Gioia (1995) show that the cluster XLF at $`0.5\lesssim z\lesssim 1`$ is consistent with the one found in the EMSS for the redshift range $`0.3<z<0.6`$ (median $`z=0.33`$), i.e., there appears to be no further significant evolution of luminous clusters beyond $`z\sim 0.6`$ (although such evolution is not ruled out).
With only a handful of X-ray luminous clusters currently known at $`z>0.5`$, the key to understanding these apparently conflicting results lies clearly in new discoveries, and more detailed observations, of X-ray luminous (and, by inference, massive) clusters at high redshift. Any additional detection of a massive cluster at high redshift (and certainly at $`z\sim 0.8`$) is thus of paramount importance as it brings us one step closer to an accurate measurement of the cluster abundance at very high redshift, where its sensitivity to evolutionary effects is greatest.
## 2. The significance of distant massive clusters
With the exception of the Bright SHARC Survey (Nichol et al., 1999), all of the present deep PSPC cluster surveys<sup>1</sup><sup>1</sup>1i.e., the ROSAT North Ecliptic Pole Survey (Mullis, Gioia & Henry 1998), the ROSAT Deep Cluster Survey (Rosati et al., 1995; Rosati et al., 1998), the southern Serendipitous High-Redshift Archival Cluster Survey (SHARC-S, Collins et al., 1997; Burke et al., 1997), the Wide Angle ROSAT Pointed Survey (Scharf et al., 1997, Paper I; Jones et al., 1998, Paper II; Fairley et al., 1999) and the CfA cluster survey (Vikhlinin et al., 1998a,b, 1999) provide sufficient depth to detect a cluster of $`L_\mathrm{X}>10^{45}`$ erg s<sup>-1</sup> (0.3–3.5 keV)<sup>2</sup><sup>2</sup>2note that we use $`h=0.5,q_0=0.5`$ throughout out to $`z\sim 1`$ and beyond. When the cumulative high-redshift EMSS XLF (median $`z=0.33`$) is scaled to the comoving volume corresponding to the redshift range $`0.8<z<1`$ (i.e., assuming no evolution between $`z\sim 0.33`$ and $`z=0.8`$–1), it predicts about 15 clusters with $`L_\mathrm{X}>10^{45}`$ erg s<sup>-1</sup> (0.3–3.5 keV) per steradian, i.e. $`4.6\times 10^{-3}`$ per square degree. Since the mentioned cluster surveys cover only from 18 square degrees (SHARC-S) to 160 square degrees (CfA cluster survey), the detection of only very few X-ray luminous clusters in any of these surveys places significant constraints on the evolution of clusters and large scale structure in general. Note that this is true only for the most luminous systems: if the X-ray luminosity criterion is relaxed and clusters down to $`L_\mathrm{X}=4\times 10^{44}`$ erg s<sup>-1</sup> (0.3–3.5 keV) are considered, the expected cluster density in the same redshift range rises by almost an order of magnitude and any individual cluster detection becomes much less significant.
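To make these numbers concrete, a small sketch of the scaling (the survey areas are those quoted in the text, plus the 73 deg<sup>2</sup> WARPS area of Section 4):

```python
import math

# Clusters with L_X > 1e45 erg/s at 0.8 < z < 1, from the scaled EMSS XLF:
n_per_sr = 15.0
n_per_deg2 = n_per_sr / (180.0 / math.pi) ** 2   # ~4.6e-3 per square degree

for survey, area_deg2 in [("SHARC-S", 18.0), ("WARPS", 73.0), ("CfA", 160.0)]:
    print(survey, round(area_deg2 * n_per_deg2, 2))   # expected cluster counts
```

Even the largest of these surveys thus expects well below one such cluster under the no-evolution assumption, which is why a single secure detection (or non-detection) carries so much statistical weight.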
We emphasize that, although any detection (X-ray, optical, infra-red) of massive clusters at very high redshifts is an important discovery in its own right, it is clusters detected in the course of statistically complete surveys that bear the most weight. Only the latter allow the space density of such systems to be quantified and compared to predictions from theoretical models. Any systematic effects in the data analysis and interpretation that could cause such clusters to be missed or misidentified need to be thoroughly understood and corrected for before conclusions about the physical or cosmological parameters governing cluster evolution are drawn from derived statistics such as the cluster XLF.
In the rest of this paper we summarize the current observational status (Section 3) and give a short overview of the WARPS serendipitous cluster survey (Section 4). We then describe the WARPS discovery of ClJ0152.7–1357, a very X-ray luminous cluster at $`z=0.8325`$, and discuss and summarize the results from all available X-ray observations of this system (Section 5). Prompted by our finding that ClJ0152.7–1357 was missed in the EMSS, we take a closer look at how deviations from spherical symmetry in a cluster’s X-ray emission may affect the EMSS cluster sample (and possible other cluster samples) as a whole (Section 6). Trying to assess the importance of biases caused by complex cluster morphology we investigate the prevalence of substructure in distant clusters (Section 7) before, finally, discussing the implications of our findings for attempts to constrain cosmological parameters using X-ray flux limited cluster samples (Section 8).
## 3. Previously known very distant, X-ray luminous clusters of galaxies
Very few clusters of galaxies have been detected at redshifts greater than 0.8, and even fewer can be called X-ray luminous. Prior to the discovery of ClJ0152.7–1357, only two X-ray selected clusters were known at $`z>0.8`$: MS1054.4–0321 ($`z=0.829`$, $`L_\mathrm{X}=1.42\times 10^{45}`$ erg s<sup>-1</sup> in the 0.3–3.5 keV band<sup>3</sup><sup>3</sup>3The given luminosity was derived from the archival ROSAT HRI observation of this cluster., Donahue et al., 1998) and, much less X-ray luminous, RX J1716.6+6708 ($`z=0.813`$, $`L_\mathrm{X}=3.2\times 10^{44}`$ erg s<sup>-1</sup> in the 0.5–2.0 keV band, Henry et al. 1997, Gioia et al. 1999). Slightly closer than $`z=0.8`$ ($`z=0.782`$, Gioia & Luppino 1994), but distant and X-ray luminous enough to be noteworthy in this context, is MS1137.5+6625 ($`L_\mathrm{X}=1.03\times 10^{45}`$ erg s<sup>-1</sup> in the 0.3–3.5 keV band<sup>3</sup><sup>3</sup>footnotemark: 3). The discovery of an even more distant X-ray emitting cluster at $`z=1.27`$ was recently reported by Rosati et al. (1999); however, at $`L_\mathrm{X}\sim 1.5\times 10^{44}`$ erg s<sup>-1</sup> (0.5–2.0 keV) this system is even less X-ray luminous than RX J1716.6+6708.
All other presently known clusters at very high redshift have been optically selected as projected galaxy overdensities in deep CCD images (Gunn, Hoessel & Oke, 1986, GHO; Postman et al., 1996, PDCS; Scodeggio et al. 1999) or were originally detected at radio or infrared wavelengths (e.g. Crawford & Fabian 1996, Deltorn et al., 1997, Stanford et al., 1997). Although there are now an impressive number of optically selected clusters at $`z>0.8`$ (Postman and coworkers alone list a dozen clusters at $`z\sim 1`$ in their PDCS sample), it ought to be emphasized that, for the majority of these possibly very distant optical clusters, the published ‘redshifts’ are estimated statistically and are not the result of actual spectroscopic measurements. The physical reality of many of these systems thus remains to be confirmed through either X-ray or extensive spectroscopic observations. The difficulties inherent to the optical approach are evidenced by, e.g., the work of Oke, Postman & Lubin (1998) who obtained 892 redshifts in the fields of nine distant cluster candidates selected from the GHO and PDCS catalogues. Three of their nine candidate clusters showed no significant peak in the observed redshift histogram, and three others showed between two and four equally significant peaks at very different redshift, illustrating the severity of projection effects in optically selected cluster samples. By way of contrast, Castander et al. (1994) demonstrate how X-ray observations can be used efficiently to test whether optically selected distant clusters are indeed gravitationally bound, massive systems. Castander and coworkers analyzed PSPC observations of five GHO clusters which had spectroscopic redshifts ranging from 0.7 to 0.92. They detected only two of the five, and found all five to have measured X-ray luminosities, or upper limits, of less than $`1\times 10^{44}`$ erg s<sup>-1</sup> (0.5–2.0 keV); i.e. they are, at best, poor clusters which, unless detected in very large numbers, do not provide stringent constraints on either the rate of cluster evolution or the cosmological parameters of structure formation models.
## 4. The WARPS cluster survey
The goal of the Wide Angle Rosat Pointed Survey (WARPS) is to compile a complete and unbiased, X-ray selected sample of clusters of galaxies from serendipitous detections of X-ray sources in deep ROSAT PSPC pointings. A comprehensive overview of the scientific goals of the project, the X-ray source detection algorithm employed (Voronoi Tesselation and Percolation: VTP), the sample selection and flux corrections techniques, as well as first results, are presented in Paper I. VTP is particularly well suited for the detection and characterization of low-surface brightness emission (Ebeling & Wiedenmann 1993; Ebeling et al., 1996; Paper I) and is likely to recognize even very distant clusters as extended X-ray sources (Paper I). However, in order to reduce possible incompleteness in our cluster sample due to erroneous classification of distant clusters as point sources, our optical follow-up observations are not limited to extended X-ray sources but also include likely point sources without obvious optical counterparts (Paper II). Paper II also discusses the WARPS $`\mathrm{log}N\mathrm{log}S`$ distribution of poor clusters of galaxies and its implications for cluster evolution.
The first two WARPS papers focus on results for a complete sample of clusters compiled over a geometric solid angle of 14.7 square degrees during the first phase of the project. In May 1997, the WARPS project went into its second phase which increases the total solid angle to 73 deg<sup>2</sup> and will yield a statistically complete sample of more than 70 X-ray selected galaxy clusters at $`z>0.3`$. In this second phase, cluster candidates without obvious optical counterpart on the POSS plates (as provided by the Digitized Sky Survey) were imaged at the Michigan-Dartmouth-MIT 1.3m and University of Hawaii 2.2m telescopes in preparation for spectroscopic follow-up observations at larger telescopes. Although observations of a few very distant cluster candidates have yet to be performed (scheduled for spring 2000), the WARPS cluster sample is already complete at $`z<0.84`$ over the full solid angle.
## 5. ClJ0152.7–1357
In 1996 and 1997, the cluster ClJ0152.7–1357 was discovered independently in the RDCS and WARPS surveys. Later, ClJ0152.7–1357 was also detected in the Bright SHARC survey (Nichol et al., 1999). In this section we describe the discovery of ClJ0152.7–1357 in the WARPS survey and discuss and summarize previous and subsequent X-ray observations of this system.
### 5.1. Discovery in the WARPS survey
The standard WARPS X-ray analysis detected ClJ0152.7–1357 as a very extended source 14.2 arcmin off-axis in a 20 ks PSPC pointed observation of NGC 720; the POSS-2 Digitized Sky Survey image is blank at the position of the source. Figure 9 shows an I band image of ClJ0152.7–1357, taken with the UH 2.2m telescope on Aug 4, 1997, with adaptively smoothed PSPC X-ray flux contours overlaid. A blow-up of the central cluster region is shown in Figure 9. Based on the X-ray source extent and the observed overdensity of faint galaxies at and around the position of the X-ray source, it was classified as a likely distant cluster of galaxies. The X-ray emission from ClJ0152.7–1357 shows a high degree of substructure and a pronounced elongation along a position angle of about 40° which follows roughly the distribution of galaxies in the cluster core (see Section 7 for a discussion of the dynamical state of ClJ0152.7–1357).
On Aug 11, 1997 we observed a total of 14 distant cluster candidates, among them ClJ0152.7–1357, with the low-resolution spectrograph LRIS (Oke et al. 1995) on the Keck-II 10m telescope on Mauna Kea. Using a longslit of $`1.5^{\prime \prime }`$ width and the 300/5000 grating which provides 2.4 Å/pixel resolution and spectral coverage from 5000 Å to 10000 Å, we obtained spectra of six galaxies (see Fig. 9) close to the peak of the X-ray emission from ClJ0152.7–1357 and found redshifts as listed in Table 1. The spectra are shown in Fig. 9. All redshifts are accordant and consistent with a cluster redshift of $`z=0.8325`$. All spectra show absorption features typical of old stellar populations in elliptical galaxies, and none shows emission lines that would suggest AGN contamination.
### 5.2. Other X-ray observations of ClJ0152.7–1357
NGC 720 was observed not only with the ROSAT PSPC in 1992, but also with the EINSTEIN IPC in 1980, and with the ROSAT HRI in 1994. We examine and compare the images from all three observations in Fig. 9. The images are shown in chronological order and have been registered using the astrometry solutions in the respective FITS headers. To allow an assessment of the quality of the raw data as well as of the presence and morphology of any extended emission, we show contours of the smoothed emission (a Gaussian smoothing kernel with $`\sigma =30`$ arcsec was used) overlaid on the observed raw photon data. The latter are binned such that they slightly oversample the point spread function of the respective instrument which is represented by the FWHM bar in the lower left corner of each image.
In the following we discuss the serendipitous HRI and IPC observations of ClJ0152.7–1357 in more detail before summarizing briefly the results of a recent targeted observation of the cluster with the BeppoSAX satellite.
#### 5.2.1 ROSAT HRI
The relatively high angular resolution of the ROSAT HRI ($`\sim 12`$ arcsec at an off-axis angle of 14 arcmin) allows us to investigate whether contaminating point sources might contribute to the observed PSPC flux of ClJ0152.7–1357. At an off-axis angle of 14 arcmin a point source with a flux of about one third of the flux detected from ClJ0152.7–1357 would be detectable with the HRI at greater than $`5\sigma `$ significance. However, a secure detection of diffuse emission from ClJ0152.7–1357 with the HRI would require an exposure time in excess of 100 ks. As can be seen in the rightmost panel of Fig. 9, no point sources are detected within the contours shown in Fig. 9 but there is marginal evidence of low-surface-brightness excess emission at the position of the cluster, indicating that the overwhelming majority of the emission detected with the PSPC originates from the cluster. Moreover, the emission detected with the HRI shows a clear elongation along the same position angle of about 40° as the one found in the PSPC data. Although the southwestern extension of the emission detected with the PSPC does not coincide with any prominent galaxy overdensity in the UH2.2m image (cf. Fig. 9), we note that, if any of the major X-ray surface brightness peaks in the PSPC image were due to a single, unvarying point source, they would have been detected with the HRI.
#### 5.2.2 EINSTEIN IPC
It is noteworthy that ours is in fact not the first detection of ClJ0152.7–1357 at X-ray wavelengths. As mentioned before, NGC 720 was not only observed with ROSAT but was, in 1980, also a pointing target of observations with the EINSTEIN observatory. A source at $`\alpha =01^h\mathrm{\hspace{0.17em}52}^m\mathrm{\hspace{0.17em}42.8}^s`$, $`\delta =-13^{}\mathrm{\hspace{0.17em}57}^{}\mathrm{\hspace{0.17em}49}^{\prime \prime }`$ (J2000) (i.e. within one arcmin of the PSPC position of ClJ0152.7–1357) is clearly detected with the Imaging Proportional Counter (IPC, sequence number of pointing I 5769); the EINSTEIN source catalogue assigns this source the number 496 and quotes a significance of detection of 4.8$`\sigma `$ in the IPC broad band and within the detect cell (Harris et al., 1990). We show the IPC broad band data around this position in the leftmost panel of Fig. 9. The position angle (approximately zero) of the apparent elongation of the source is different from the one found from the ROSAT PSPC and HRI data. However, since the IPC point spread function has a FWHM of about 90 arcsec in the broad band, the source elongation found in this short IPC observation is only marginally significant. The same IPC source is also listed as EXSS 0150.2–1411 in the catalogue of extended EINSTEIN detections compiled by Oppenheimer, Helfand & Gaidos (1997) who find the source significance (presumably in the broad band) to be 4.7 and 5.7$`\sigma `$ within circular apertures of 1.25 and 2.35 arcmin radius. Although this source therefore appears to be sufficiently significant to be included in the EMSS catalogue, it remained un-identified until its re-discovery in the RDCS and WARPS surveys in 1996/97. We will come back to the IPC detection of ClJ0152.7–1357 in Section 6.1.
#### 5.2.3 BeppoSAX
A recent pointed BeppoSAX observation of ClJ0152.7–1357 allowed the temperature and metallicity of the intra-cluster gas to be measured: Della Ceca et al. (1999) report values of k$`T=6.5_{-1.2}^{+1.7}`$ keV and $`Z=0.53_{-0.24}^{+0.29}`$. This gas temperature is consistent both with the temperature estimate of $`5.9_{-2.1}^{+4.4}`$ keV obtained by us from the PSPC data (cf. Fairley et al., 1999) and with the cluster X-ray luminosity–temperature relation as determined by Allen & Fabian (1998) which predicts k$`T=7.8`$ keV. The relatively poor angular resolution of the BeppoSAX telescope does not allow any conclusions to be drawn about the possibility of point source contamination.
### 5.3. X-ray properties
Using the Galactic neutral Hydrogen column density in the direction of the cluster of $`1.47\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990) as well as a metallicity of 0.5 and a gas temperature of 6.5 keV (Della Ceca et al., 1999), we convert the total PSPC count rate of $`(0.0237\pm 0.0015)`$ ct s<sup>-1</sup> (PHA channels 50 to 200) measured in the WARPS analysis into a total, unabsorbed flux of $`(2.90\pm 0.18)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (0.5–2.0 keV), corresponding to an X-ray luminosity of $`(8.59\pm 0.53)[(15.5\pm 0.95),(33.7\pm 2.08)]\times 10^{44}`$ $`h_{50}^{-2}`$ erg s<sup>-1</sup> in the 0.5–2.0 keV \[0.3–3.5 keV, bolometric\] band. Thus, ClJ0152.7–1357 is slightly more luminous than MS1054.4–0321, making it the most X-ray luminous distant cluster detected so far. It is also worth noting that both ClJ0152.7–1357 and MS1054.4–0321 are more X-ray luminous than any other known cluster at $`z>0.55`$.
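For reference, the flux-to-luminosity step can be sketched as follows (for the $`h=0.5`$, $`q_0=0.5`$ cosmology used throughout; note that the band luminosities quoted above additionally include the K-correction for the redshifted band, derived from the spectral model described in the text, which this sketch omits):

```python
import math

H0 = 50.0                    # km/s/Mpc (h = 0.5)
C = 2.99792458e5             # speed of light, km/s
CM_PER_MPC = 3.0857e24

def d_lum(z):
    """Luminosity distance in Mpc for a q0 = 0.5 (Einstein-de Sitter) universe."""
    return (2.0 * C / H0) * ((1.0 + z) - math.sqrt(1.0 + z))

z, flux = 0.8325, 2.90e-13   # unabsorbed 0.5-2.0 keV flux, erg/s/cm^2
dl_cm = d_lum(z) * CM_PER_MPC
print(4.0 * math.pi * dl_cm**2 * flux)   # ~1.1e45 erg/s before K-correction
```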
We find our measurement of the total, unabsorbed cluster flux of ClJ0152.7–1357 in the 0.5–2.0 keV band to be in good agreement with all other results obtained from the available X-ray observations of this system:
$`f(<1.5h_{50}^{-1}\mathrm{Mpc})=(3.76\pm 0.84)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (this work)
$`f(<1.5h_{50}^{-1}\mathrm{Mpc})=(2.2\pm 0.2)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (Della Ceca et al., 1999)
$`f(\mathrm{total})=(2.90\pm 0.18)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (this work)
$`f(\mathrm{total})=(2.93\pm 0.16)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (Romer et al., 1999)
$`f(<1.5h_{50}^{-1}\mathrm{Mpc})=(2.8\pm 1.1)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (this work)
$`f(<2.0h_{50}^{-1}\mathrm{Mpc})=(1.9\pm 0.4)\times 10^{-13}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (Della Ceca et al., 1999)
All ROSAT flux measurements agree within the errors<sup>4</sup><sup>4</sup>4note that the RDCS and HRI measurements use a fixed metric aperture whereas WARPS and Bright SHARC measure the total cluster flux. By comparison, the IPC result is high and the BeppoSAX result is low; compared directly, the discrepancy between these two measurements is significant at the $`2\sigma `$ confidence level.
As noted before in Section 5.2.3, the measurements of the cluster gas temperature obtained independently from the PSPC data (this work) and the BeppoSAX data (Della Ceca et al., 1999) also agree well within their errors.
Although we cannot rule out that some of the observed emission originates from one or more variable point sources, the overall good agreement of the source positions, X-ray fluxes and cluster gas temperatures measured for ClJ0152.7–1357 between 1980 and 1998 makes major contamination unlikely.
Table 2 summarizes the optical and X-ray properties of ClJ0152.7–1357.
## 6. Possible systematic biases in the EMSS cluster sample
As mentioned in Section 5.2.2, ClJ0152.7–1357 was detected with the EINSTEIN IPC at 4.8$`\sigma `$ significance (EOSCAT, Harris et al., 1990); however, the EMSS source catalogue (Gioia et al., 1990a) lists the respective IPC field (I 5769) as containing no serendipitous detections that would be significant at the greater than 4$`\sigma `$ level. Since this discrepancy has been the subject of some debate, we investigate the issue in detail in the following. Specifically, we address three questions: firstly, how can the two catalogues, using (apparently) the same data, arrive at substantially different significances of detection for the same source? Secondly, what are the implications of the absence of ClJ0152.7–1357 from the EMSS catalogue for the overall completeness of the EMSS cluster sample? And thirdly, what are the consequences of our findings for the clusters included in the EMSS?
### 6.1. The IPC detection of ClJ0152.7–1357
Both the EINSTEIN IPC source catalogue (EOSCAT, Harris et al., 1990) and the EMSS sample (Gioia et al., 1990a) were compiled using the same source detection algorithm. It combines a sliding cell detection algorithm (cell geometry: $`2.4\times 2.4`$ arcmin<sup>2</sup>) with a maximum likelihood (ML) peak finding algorithm which fits a Gaussian model of the instrumental point spread function (the size of which varies with the chosen energy range) to the data inside the detect cell. The final source positions are taken from the ML results. While this approach is adequate for the detection of point sources, the use of a peak finding algorithm can clearly lead to non-optimal results in the case of extended sources with internal structure.
While the EOSCAT and EMSS results for ClJ0152.7–1357 are obtained from the same data, the compilation procedures of the two catalogues are not entirely identical. EOSCAT computes the source significance within a detect cell centred on the ML source position measured in the IPC broad band (0.16–3.5 keV), whereas the EMSS uses the ML source position determined in the IPC hard band (0.81–3.5 keV). However, both catalogues use the broad band photons within the detect cell to compute the source significance that is used as the final criterion for the inclusion of sources in the respective catalogue. The rationale behind the two-band approach chosen by the EMSS team is to take advantage of the higher resolution of the IPC in the hard band without sacrificing the better photon statistics of the broad band data (Maccacaro and Gioia, private communication). The energy dependence of the instrumental resolution means, however, that a narrower point spread function will be used by the ML algorithm in the hard band — which, as we shall see, is part of the reason why the EMSS missed ClJ0152.7–1357.
#### 6.1.1 Re-analysis of the IPC data for ClJ0152.7–1357
We re-analyze the IPC data for field I 5769 in both the hard and the broad band using the same sliding cell algorithm employed by the EOSCAT and EMSS teams. For the broad band data our analysis yields results similar to those listed in EOSCAT for source #496: at the position maximizing the source significance in the broad band we find the detect cell to contain 44 photons of which 9.9 are expected to be background. The resulting signal-to-noise ratio (snr) in the broad band is 5.1. At the ML source position quoted by EOSCAT we measure 38 counts in the detect cell (EOSCAT: 44) of which 9.8 are attributed to background (EOSCAT: 11.7); the resulting snr value is 4.6 (EOSCAT: 4.8). Our results are also in good agreement with those of Oppenheimer, Helfand & Gaidos (1997) who, in their independent re-analysis of the EINSTEIN IPC data, find the significance of their source EXSS 0150.2–1411 to be 4.7 and 5.7$`\sigma `$ (presumably in the broad band) within circular apertures of 1.25 and 2.35 arcmin radius.
The left panel in Figure 9 shows contours of the smoothed X-ray emission in the IPC broad band at the position of ClJ0152.7–1357 with both ours and the EOSCAT source position marked. Also shown is the contour within which our analysis finds the signal-to-noise ratio in the hard band and within the detect cell to exceed the threshold value of four. Although the astrometry used to create this image is taken directly from the EINSTEIN events list, the offset of the marked EOSCAT source position from the peak of the emission suggests that the satellite attitude solution used in the original EOSCAT analysis may have differed by some 10 to 20 arcseconds. Note that the non-sphericity of the emission causes the position of the X-ray peak to lie only marginally within the $`\text{snr}=4`$ contour.
The results of the same analysis in the IPC hard band are shown in the right panel of Figure 9. Again we show the $`\text{snr}=4`$ (broad band) contour as well as our best estimate of the source position in the broad band. Also shown is the source position returned by the ML algorithm from the EMSS analysis of the hard band data (kindly provided by Isabella Gioia). Due to the more pronounced bimodality of the source in the hard band, the ML algorithm, using a narrower model of the point spread function than in the IPC broad band, now centres on an apparent peak more than one arcmin north of the position that maximizes the source significance in the broad band. At the EMSS source position we measure a value of 4.1 for the broad band snr; slight differences in the astrometric solution caused the original EMSS snr measurement at this position to fall just below the threshold value of four. Consequently, the EMSS rejected ClJ0152.7–1357 as not sufficiently significant to be included in the EMSS catalogue, and thus missed what would have been the most distant and most X-ray luminous cluster in the EMSS sample.
#### 6.1.2 Summary of our re-analysis of the IPC data
As demonstrated in the previous section and illustrated in Fig. 9, the use of a peak-finding ML detection algorithm in the EINSTEIN IPC data analysis leads to significant offsets between the ML source position and the one maximizing the broad-band snr of ClJ0152.7–1357. Moreover, in the presence of emission with apparent substructure the ML source positions in different energy bands can differ significantly. If ClJ0152.7–1357 were a spherically symmetric, relaxed system with a radial surface brightness profile following a beta model, the peak centering algorithm would very likely have come closer to returning the maximal possible source significance of $`5.6\sigma `$ (using the PSPC count rate and assuming a core radius of 250 kpc). This leads us to investigate whether the failure of the EMSS to include ClJ0152.7–1357 can be regarded as symptomatic of a general bias against unrelaxed clusters.
### 6.2. Detection bias in the EMSS cluster sample
Before we attempt to assess the importance of cluster substructure for the efficiency of the EMSS point source detection algorithm (or, more generally, any algorithm that explicitly or implicitly assumes a unimodal source geometry), it should be stressed that this assumption is not vital to the source detection process. The choice of source detection algorithm is crucial though, as the algorithm’s biases can have a significant impact on the statistical quality of the resulting sample. The EMSS and WARPS surveys, for instance, are inherently different due to differences in the source detection process. The EMSS is X-ray surface brightness limited (the selection criterion is the significance of the flux in a detect cell of fixed angular size, and the survey flux limit refers to the flux in the same detect cell, Gioia et al., 1990a) while WARPS is almost completely X-ray flux limited (the detection procedure uses a very low surface brightness threshold – see Paper I – and the limiting flux is the total flux of the cluster including the fraction that has escaped direct detection).
#### 6.2.1 Simulations of unrelaxed clusters
We investigate the redshift and luminosity dependence of the EMSS detection efficiency for morphologically complex sources by simulating IPC observations of two kinds of unrelaxed clusters: firstly, mergers of two similarly extended components (akin to ClJ0152.7–1357) and, secondly, extended systems containing a compact but off-centre core (similar to MS 1054–0321, see Section 7). Table 3 gives an overview of the model parameters used in the simulations. In all simulations we assume a uniform background of $`2.5\times 10^{-2}`$ ct s<sup>-1</sup> within the detect cell and an exposure time of 2.5 ks, values typical of the average IPC pointing; we also blur all simulated images by convolving them with a Gaussian of 33 arcsec width ($`1\sigma `$) thereby accounting for the IPC point spread function (Lea & Henry 1988). Finally we scale the total emission from both components such that the maximal significance in the broad band is always constant at the EMSS threshold value of $`4\sigma `$ within the detect cell. Since we are investigating a systematic effect, no Poisson noise is added to the simulated data.
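A sketch of the merger branch of such a simulation is given below (Python with scipy; the pixel scale, image size, total source counts, and the Gaussian stand-ins for the cluster surface-brightness profiles are our illustrative choices rather than the Table 3 parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

PIX = 8.0                              # arcsec per pixel (assumed)
PSF_SIGMA = 33.0 / PIX                 # IPC blur: 33" Gaussian sigma, in pixels
CELL = int(round(2.4 * 60.0 / PIX))    # 2.4' x 2.4' detect cell, in pixels
BKG_CELL = 2.5e-2 * 2500.0             # background counts per cell in 2.5 ks

def merger_image(shape, sep_pix, src_counts, extent_sigma):
    """Two equal extended components (Gaussians standing in for the beta
    models of Table 3), blurred with the IPC point spread function."""
    img = np.zeros(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    img[cy, cx - sep_pix // 2] = src_counts / 2.0
    img[cy, cx + sep_pix // 2] = src_counts / 2.0
    img = gaussian_filter(img, extent_sigma)   # intrinsic cluster extent
    return gaussian_filter(img, PSF_SIGMA)     # instrumental blur

def snr_at_peak(img):
    """EMSS-style significance, (N_tot - N_bkg)/sqrt(N_tot), in a detect
    cell centred on the brightest pixel; no Poisson noise, as in the text."""
    iy, ix = np.unravel_index(np.argmax(img), img.shape)
    h = CELL // 2
    src = img[iy - h:iy + h + 1, ix - h:ix + h + 1].sum()
    return src / np.sqrt(src + BKG_CELL)

for sep_pix in (0, 6, 12):   # increasing projected subcluster separation
    img = merger_image((129, 129), sep_pix, src_counts=41.0, extent_sigma=3.0)
    print(sep_pix, round(snr_at_peak(img), 3))
```

The total of 41 source counts is chosen so that, with all counts inside the cell, the significance would be exactly $`4\sigma `$; source extent and growing separation leak flux out of the detect cell and lower the peak-centred snr, which is the qualitative effect discussed below.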
Figure 9 summarizes the results of our simulations by showing, for a range of projected subcluster separations, the signal to noise ratio (snr) for a detect cell centred on the overall peak of the emission as a function of redshift.
For the merger scenario we find no evidence for a systematic underestimation of the source significance at any redshift as long as the projected separation of the two cluster components remains less than about 400 kpc. This is not surprising: at redshifts greater than 0.3 such small separations are simply not resolved by the IPC. For projected subcluster separations of more than 400 kpc, however, the source significance is systematically underestimated when measured around the position of the highest peak within the emission region. The effect is small ($`\delta (\mathrm{snr})<0.2`$) but redshift dependent. The underestimation is most severe for X-ray luminous systems ($`L_\mathrm{X}>5\times 10^{44}`$ $`h_{50}^{-2}`$ erg s<sup>-1</sup> , 0.3–3.5 keV) at intermediate to high redshift, although it takes pronounced substructure on the scale of more than 700 kpc (in projection) to produce a noticeable effect at $`z\sim 0.8`$.
For the offset-core scenario we find the redshift and luminosity dependence to be reversed: now it is nearby clusters ($`z<0.3`$) of moderate luminosity ($`L_\mathrm{X}<3\times 10^{44}`$ $`h_{50}^{-2}`$ erg s<sup>-1</sup> , 0.3–3.5 keV) that are most strongly affected.
These trends are consistent with the mentioned observations of clusters at $`z\sim 0.8`$: while ClJ0152.7–1357 is missed by the EMSS, MS1137.5+6625 and MS1054.4–0321 (the first apparently relaxed, the latter a case of substructure akin to our second simulated scenario, see Section 7) are both detected. Taken together, our results thus indicate that, in the presence of different kinds of substructure, the EMSS peak finding algorithm tends to underestimate the significance both of nearby clusters of low to moderate X-ray luminosity, and of distant clusters of very high X-ray luminosity.
While Figure 9 suggests that the underestimation of the snr within the detect cell is small (0.1–0.2), we note that the real effect will be magnified by photon noise (not included in our simulations) which will cause the peak position found by the EMSS ML algorithm to deviate from the true position. The resulting positional error is considerable: for the photon statistics of our simulated example we find a radius of 20 arcsec for the 90% confidence error circle of the ML peak position. In most cases measuring the source significance around this ML fit position will yield values that are lower than those in Fig. 9. This is underlined by the very case of ClJ0152.7–1357, a distant, X-ray luminous cluster with substructure on the scale of 600 kpc. Its source position as determined by the EMSS peak finding algorithm in the IPC hard band lies so far off the X-ray centroid that the source significance in the IPC broad band is underestimated by more than $`1\sigma `$ – far more than what is implied by Figure 9.
#### 6.2.2 Impact of the detection bias on the EMSS cluster sample
Although the above arguments suggest that the EMSS detection bias against unrelaxed clusters could be severe, a re-analysis of the EINSTEIN IPC data or numerical simulations beyond the scope of this paper would be required to accurately quantify the effect. To be conservative, the values from our simple simulation may be taken at face value, in which case the smallness of the amplitude of the bias might cause one to believe that its impact on the EMSS cluster sample will be negligible. This is, however, not necessarily true. The EMSS catalogue as used for the definition of the EMSS cluster sample (Gioia et al. 1990b) comprises 733 sources, 93 of which were identified as clusters of galaxies. From the distribution of source significances we estimate the number of sources with significances between 3.8 and 4$`\sigma `$ to be about 60; 5 of these are expected to be clusters at redshifts greater than 0.2. Since the fraction of significantly unrelaxed clusters at these redshifts is almost certainly non-negligible (see Section 7), and considering the inherent uncertainties of our crude analysis, we are left with the conclusion that the number of distant and X-ray luminous, but unrelaxed, clusters missed by the EMSS is likely to be of the order of a few. While not immediately alarming, this estimate is still disconcertingly high given that the EMSS cluster sample contains only a handful of distant X-ray luminous clusters to begin with.
## 7. The X-ray morphology of distant clusters
In addition to the cosmological relevance of the sheer existence of a distant cluster as X-ray luminous as ClJ0152.7–1357, the complex optical and X-ray morphology of this cluster provides further important clues. As can be seen from Fig. 9, ClJ0152.7–1357 consists of at least two pronounced subclusters which are (in projection) about 600 kpc apart and are likely to merge within a few Gyr (assuming a true spatial separation of one to a few Mpc and equal masses of a few $`10^{14}M_{\mathrm{\odot }}`$ for the two main cluster components).
The fact that ClJ0152.7–1357 is still in the process of formation has several interesting implications. Firstly, the subclusters observed today are likely to have existed as separate clusters of $`L_\mathrm{X}\sim 4\times 10^{44}`$ erg s<sup>-1</sup> (0.5–2.0 keV) at a redshift considerably greater than unity, and, secondly, the X-ray luminosity of ClJ0152.7–1357 is bound to increase as the merger proceeds, possibly rendering ClJ0152.7–1357 more X-ray luminous than any cluster observed to date. Thirdly, ClJ0152.7–1357 is the third X-ray selected cluster (out of five) detected at $`z\sim 0.8`$ that shows pronounced substructure and is distinctly non-virialized, in contrast to the morphologically much more diverse local cluster population.
The last point is illustrated by Figure 9 which shows adaptively smoothed X-ray flux contours of all three $`z\sim 0.8`$ clusters for which high-resolution X-ray images are currently available. Our HRI data reduction corrects for particle background as well as exposure time variations using software kindly provided by Steve Snowden. For each cluster we align and merge all available observations which yields total exposure times as follows. MS1137.5+6625 ($`z=0.782`$): 98.0 ks; MS1054.4–0321 ($`z=0.829`$): 186.6 ks; RX J1716.6+6708 ($`z=0.813`$): 167.2 ks. The final images use a pixel size of $`2.5\times 2.5`$ arcsec<sup>2</sup> thus slightly oversampling the HRI point-spread function. Using Asmooth (Ebeling et al. 1999) the HRI counts image is then adaptively smoothed with a Gaussian kernel the size of which is adjusted such that the local significance of the signal within the kernel exceeds 99%. The boxy thick contours in Fig. 9 mark the regions within which the signal is high enough for this criterion to be met and within which all structure apparent in the contour plots is thus significant at greater than 99% confidence. The dashed boxes illustrate the effect of a placement of the EMSS detect cell on the highest peak in the emission region. According to Fig. 9, the only relaxed cluster of the three is MS1137.5+6625 while both MS1054.4–0321 and RX J1716.6+6708 exhibit significantly nonspherical emission with off-centre cores.
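The idea behind the adaptive smoothing can be illustrated with a deliberately crude sketch (the actual Asmooth algorithm of Ebeling et al. 1999 treats kernel normalization, background, and the significance criterion much more carefully; the kernel sizes and the effective-area approximation below are our simplifications):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_smooth(counts, bkg_per_pix, nsigma=2.576, sigmas=(1, 2, 4, 8, 16)):
    """Per pixel, adopt the smallest Gaussian kernel under which the counts
    exceed the background at roughly 99% confidence (2.576 sigma)."""
    out = np.full(counts.shape, np.nan)
    for s in sigmas:                       # try the smallest kernels first
        sm = gaussian_filter(counts.astype(float), s)
        area = 2.0 * np.pi * s**2          # rough effective kernel area, pixels
        n_tot, n_bkg = sm * area, bkg_per_pix * area
        ok = np.isnan(out) & (n_tot - n_bkg > nsigma * np.sqrt(np.maximum(n_tot, 1.0)))
        out[ok] = sm[ok]
    rest = np.isnan(out)                   # pixels never reaching the threshold
    out[rest] = gaussian_filter(counts.astype(float), sigmas[-1])[rest]
    return out
```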
Although this high-redshift sample is still too small to allow more quantitative conclusions, the rarity of relaxed systems is intriguing and may indicate that we are beginning to actually observe the epoch of formation of the majority of massive clusters.
## 8. Summary and Caveat Emptor
The discovery of the X-ray luminous, unrelaxed galaxy cluster ClJ0152.7–1357 in the WARPS cluster survey has important implications for our understanding of the evolution of clusters as a function of X-ray luminosity and redshift.
ClJ0152.7–1357 was previously detected in a pointed observation with the EINSTEIN IPC; however, due to an underestimation of its significance the source is missing from the EMSS catalogue. ClJ0152.7–1357 would have been the most distant and most X-ray luminous cluster in the EMSS sample. Simulations of IPC observations of unvirialized clusters show that the absence of ClJ0152.7–1357 from the EMSS cluster sample may reflect a general bias of the EMSS against unrelaxed, distant clusters. We cannot currently quantify accurately the amplitude of such a bias; however, conservative estimates suggest that of the order of a few X-ray luminous clusters may have been missed at $`z>0.3`$.
We attempt to assess the frequency of significant substructure in distant X-ray luminous clusters by comparing the X-ray morphology of all such systems observed to date with the ROSAT HRI. Although the resulting sample is small, we find tentative evidence that highly unrelaxed systems such as ClJ0152.7–1357 may indeed be common at high redshift.
An important implication of our findings is that quantitative cosmological conclusions based on measurements of the abundance of X-ray luminous, distant clusters ought to be regarded with caution. Any comparison of cluster space densities with the predictions of structure formation models assumes that the clusters used satisfy the collapse criteria specified in those models (e.g. Press-Schechter). In the light of our morphological observations we add a cautionary note that it is possible that many of these distant systems do not yet meet these conditions. Clearly this would seriously complicate the measurement of cosmological quantities using cluster counts. However, it could also offer a new means to tackle these questions through detailed observation and a dynamical analysis of merger rates in statistically selected ‘proto-clusters’.
As far as the representative nature of current cluster samples is concerned, the dynamical state of a cluster could complicate matters beyond the detection bias discussed in Section 6.2. Numerical simulations by Ricker (1998) indicate that shock fronts created in the primary collision of two merging clusters can increase the total X-ray luminosity of the merging system by up to an order of magnitude compared to the combined X-ray luminosity of the progenitor clusters. While this effect is expected to be prominent only for less than half the sound crossing time (typically a few times $`10^8`$ yr), it may still, to some extent, counteract any detection bias against merging clusters (see Section 6.2) by causing such systems to be preferentially detected in X-ray flux limited surveys.
If cluster mergers are indeed common at high redshift and the net X-ray emission of these systems does not adequately distinguish between formed and forming systems, we may be forced to develop much more sophisticated models and data analysis strategies in order to draw secure conclusions about the physical mechanisms and cosmological implications of cluster evolution.
Deeper observations and more detailed analyses of a sizeable, representative sample of distant X-ray luminous clusters are required to conclusively address these issues.
## 9. ClJ0152.7–1357: Outlook
Observing time with Chandra’s ACIS-I imaging spectrometer is scheduled in Cycle 1 for a high-resolution X-ray study of ClJ0152.7–1357; the cluster is also a GTO (Guaranteed Time Observation) target of XMM. In combination with ongoing observations at optical and infrared wavelengths from the ground these X-ray observations will allow in-depth studies of the internal dynamics and mass distribution of this system. A detailed optical study of the cluster galaxy population with the Keck-2 telescope is underway and first results will be presented shortly. For now we only mention that our recent multi-slit spectroscopy observations yielded more than 20 accordant redshifts for this system.
We thank the telescope time allocation committees of the University of Hawai‘i, UCO/Lick, the Kitt Peak National Observatory, and the Michigan-Dartmouth-MIT observatory for their generous support of the WARPS optical follow-up programme. Several members of the EMSS team provided us with advice and much useful information about details of the EMSS data processing and source selection procedure. Their help is gratefully acknowledged. Special thanks to Pat Henry for many fruitful discussions that led to substantial improvements in the presentation and interpretation of our results. Alexey Vikhlinin kindly proofread an early version of the manuscript and made a number of helpful suggestions. Thanks also to Steve Snowden for kindly providing the latest version of his $`\mathrm{cast}\mathrm{\_}\mathrm{hri}`$ package. This work has made use of data obtained through the WWW interfaces to the GSFC/HEASARC and MPE ROSAT Public Data Archives, as well as the STScI Digitized Sky Survey. HE acknowledges financial support from SAO contract SV4-64008 and NASA LTSA grant NAG 5-8253. LRJ thanks the UK PPARC for financial support.
# Composite Fermions and the Fractional Quantum Hall Effect: Essential Role of the Pseudopotential
## Introduction.
The MFCF picture does remarkably well in predicting the band of angular momentum ($`L`$) multiplets that form the low energy sector of a 2D electron system in a strong magnetic field $`B`$. A Laughlin incompressible $`L=0`$ ground state of an $`N`$ electron system occurs when the magnetic monopole (which produces the radial magnetic field at the surface of the Haldane sphere) has strength $`2S_m=m(N-1)`$, where $`m`$ is an odd integer.
For $`2S`$ different from $`2S_m`$ there will be $`|2S-2S_m|`$ quasiparticles (QP's). This is illustrated in Fig. 1, which displays the energy spectra of ten electrons on a Haldane sphere at monopole strength $`25\le 2S\le 29`$. Frame (a) shows the Laughlin incompressible ground state at $`L=0`$. Frames (b) and (c) show states containing a single quasielectron QE (b) and quasihole QH (c) at $`L=5`$. In frames (d) and (e) the two QP states form the low energy bands. In the MFCF picture, the effective monopole strength $`2S^{*}`$ is given by $`2S^{*}=2S-2p(N-1)`$, where $`p`$ is an integer. $`S^{*}`$ is the angular momentum $`l_0^{*}`$ of a MF CF in the lowest CF Landau level. At $`2S=27`$ (with $`p=1`$), $`l_0^{*}=9/2`$ and the lowest shell accommodates $`2l_0^{*}+1=10`$ CF's, so that the shell is filled, giving $`L=0`$. At $`2S=27\pm 1`$ there will be one CF QH with $`l_{\mathrm{QH}}=5`$ or one CF QE with $`l_{\mathrm{QE}}=5`$, giving $`L=5`$. At $`2S=27\pm 2`$ there will be two CF QH each with $`l_{\mathrm{QH}}=11/2`$, giving $`L=0`$, 2, 4, 6, 8, 10, or two CF QE each with $`l_{\mathrm{QE}}=9/2`$, giving $`L=0`$, 2, 4, 6, 8.
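The counting rules of this paragraph are simple enough to automate; a short sketch (function names are ours) that reproduces the numbers quoted above for $`N=10`$:

```python
def mfcf_state(N, two_S, p=1):
    """Mean-field CF counting on the Haldane sphere: effective monopole
    strength 2S* = 2S - 2p(N-1), lowest CF shell with l0* = S*, and the
    resulting quasiparticle content."""
    two_S_star = two_S - 2 * p * (N - 1)
    l0 = two_S_star / 2.0            # l0* = S*
    n_states = two_S_star + 1        # 2*l0* + 1 orbitals in the lowest shell
    if N == n_states:
        return f"2S*={two_S_star}: filled shell, L=0"
    if N < n_states:
        return f"2S*={two_S_star}: {n_states - N} QH with l={l0}"
    return f"2S*={two_S_star}: {N - n_states} QE with l={l0 + 1}"

def two_qp_multiplets(two_l):
    """Allowed total L of two identical fermionic QPs with 2l = two_l:
    antisymmetry allows only L = 2l-1, 2l-3, ..."""
    return list(range(two_l - 1, -1, -2))

for two_S in range(25, 30):
    print(two_S, mfcf_state(10, two_S))
print(two_qp_multiplets(11))   # two QH at 2S=29 -> [10, 8, 6, 4, 2, 0]
print(two_qp_multiplets(9))    # two QE at 2S=25 -> [8, 6, 4, 2, 0]
```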
It is quite remarkable that the MFCF picture works so well since its energy scale is $`\hbar \omega _c^{*}=(2p+1)^{-1}\hbar \omega _c\propto B`$, in contrast to the scale of the Coulomb interaction $`e^2/\lambda \propto \sqrt{B}`$, where $`\lambda `$ is the magnetic length. The energy values obtained in the MFCF picture are totally incorrect, but the structure of the low energy spectrum (which multiplets form the lowest lying band) is correct. As first suggested by Haldane, this is a result of the behavior of the pseudopotential $`V(L^{\prime })`$ (interaction energy of a pair of electrons vs. pair angular momentum) in the lowest LL.
## Pseudopotential.
In Fig. 2 we plot $`V(L^{\prime })`$ vs. $`L^{\prime }(L^{\prime }+1)`$ for the lowest ($`n=0`$) and first excited ($`n=1`$) LL for different values of $`2l`$.
Note that for $`n=0`$, $`V(L^{\prime })`$ rises more steeply than linearly with increasing $`L^{\prime }`$ at all values of $`L^{\prime }`$, but for $`n=1`$ this is not true at the highest allowed values of $`L^{\prime }`$.
A useful operator identity relates the total angular momentum $`\widehat{L}=\sum _i\widehat{l}_i`$ to the sum over all pairs of the pair angular momentum $`\widehat{L}_{ij}=\widehat{l}_i+\widehat{l}_j`$,
$$\sum _{i<j}\widehat{L}_{ij}^2=\widehat{L}^2+N(N-2)\widehat{l}^2.$$
(1)
Here, each Fermion has angular momentum $`l`$, so that $`\widehat{l}^2`$ has the eigenvalue $`l(l+1)`$. From Eq. (1) it is not difficult to show that for a “harmonic” pseudopotential defined by $`V_H(L^{})=A+BL^{}(L^{}+1)`$, the energy $`E_\alpha (L)`$ of the $`\alpha `$th multiplet with total angular momentum $`L`$ would be independent of $`\alpha `$, and that $`E(L)`$ would be of the form $`a+bL(L+1)`$. Because the actual pseudopotential is different from $`V_H(L^{})`$, the degeneracy of the multiplets $`\alpha `$ of the same $`L`$ is lifted.
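For completeness, the one-line computation behind this statement (our own expansion, combining the identity (1) with the definition of $`V_H`$) reads

$$\left\langle \sum _{i<j}V_H(L_{ij})\right\rangle =\frac{N(N-1)}{2}A+B\left\langle \sum _{i<j}\widehat{L}_{ij}^2\right\rangle =\frac{N(N-1)}{2}A+BN(N-2)l(l+1)+BL(L+1),$$

so that every multiplet with total angular momentum $`L`$ has energy $`E(L)=a+bL(L+1)`$ with $`a=\frac{N(N-1)}{2}A+BN(N-2)l(l+1)`$ and $`b=B`$, independently of the label $`\alpha `$.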
For a pseudopotential (which we will refer to as a short range, SR, potential) that rises more quickly with $`L^{}`$ than $`V_H(L^{})`$, the lowest energy multiplets must, to the extent that it is possible, avoid having pair amplitude (or coefficient of fractional parentage) from the largest values of $`L^{}`$. For $`V_H(L^{})`$ the lowest angular momentum states have the lowest energy. However, the difference $`\mathrm{\Delta }V(L^{})=V(L^{})-V_H(L^{})`$ lifts the degeneracy of multiplets having the same value of $`L`$. If some low value of $`L`$ has a very large number $`N_L`$ of multiplets, $`\mathrm{\Delta }V(L^{})`$ can push the lowest multiplet at that $`L`$ value to a lower energy than any multiplet of a neighboring smaller $`L`$ value for which $`N_L`$ is much smaller.
## Energy Spectra of SR Pseudopotential.
Fig. 3 displays some very informative results for a simple four particle system at different values of the single particle angular momentum $`l`$ (which differ by $`\mathrm{\Delta }l=p(N-1)`$, $`p=1`$, 2, …).
Note that the set of multiplets at $`l-p(N-1)`$ is always a subset of the multiplets at $`l`$. The SR pseudopotential appears to have the property that its Hilbert space $`\mathcal{H}`$ splits into subspaces $`\mathcal{H}_p`$ containing states with no parentage from pair angular momenta $`L^{}>2(l-p)-1`$. $`\mathcal{H}_0`$ is the entire space, $`\mathcal{H}_1`$ is the subspace that avoids $`L_{\mathrm{MAX}}^{}=2l-1`$, $`\mathcal{H}_2`$ avoids $`L^{}=2l-1`$ and $`2l-3`$, etc. Since the interaction energy in each subspace $`\mathcal{H}_p`$ is measured on the scale $`V(L^{}=2(l-p)-1)`$, the spectrum splits into bands with gaps between bands associated with the differences in appropriate pseudopotential coefficients. The largest gap is always between the zeroth and first band, the next largest between the first and second, etc. Note that the subset of multiplets at $`l^{}=l-p(N-1)`$ is exactly the subset chosen by the MFCF picture. In addition, at the Jain values $`\nu =n(1+2pn)^{-1}`$, where $`n=1`$, 2, …, there is only a single multiplet at $`L=0`$ in the “lowest subset” for an appropriate value of $`p`$.
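The subspace structure is easy to tabulate; the toy sketch below (our illustration, not from the paper) lists the pair angular momenta available to two fermions and the momenta avoided in each $`\mathcal{H}_p`$:

```python
# Pair angular momenta for two identical fermions with angular momentum l
# (antisymmetry allows L' = 2l-1, 2l-3, ...), and the momenta from which
# states in the subspace H_p have no parentage (L' > 2(l-p)-1).

def pair_momenta(two_l):              # two_l = 2l, integer also for half-integer l
    return list(range(two_l - 1, -1, -2))

def avoided_in_Hp(two_l, p):
    return [L for L in pair_momenta(two_l) if L > two_l - 2 * p - 1]

two_l = 9                             # e.g. l = 9/2
print(pair_momenta(two_l))            # [8, 6, 4, 2, 0]
for p in range(3):
    print(p, avoided_in_Hp(two_l, p))  # H_0: []; H_1: [8]; H_2: [8, 6]
```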
These ideas can be made more formal by using the algebra of angular momentum addition and the “coefficients of fractional parentage” familiar to atomic and nuclear physicists. The conclusions are quite clear. There is really only one energy scale, that of the Coulomb interaction $`e^2/\lambda `$. Laughlin states occur when the fractional parentage for electrons (or holes) allows avoidance of the pseudopotential $`V(2(l-p)-1)`$ for $`p=0`$, 1, …. Jain states occur when the fractional parentage of the appropriate $`V(2(l-p)-1)`$ is much smaller (but not zero) for $`L=0`$ than for other allowed multiplets. The MFCF picture works only if $`V(L^{})`$ is a SR potential that rises like $`[L^{}(L^{}+1)]^\beta `$ with $`\beta >1`$.
## Other Pseudopotentials.
For the $`n=1`$ and higher LL’s, $`V(L^{})`$ is not SR for all values of $`L^{}`$. For $`n=1`$, $`V(L^{})`$ is essentially harmonic at $`L^{}=L_{\mathrm{MAX}}^{}`$, and for $`n>1`$ it is subharmonic at the largest values of $`L^{}`$. Therefore, even if ground states at filling factors like $`\nu =2+1/3`$ have $`L=0`$, they are not Laughlin type incompressible states which avoid pair angular momentum $`L_{\mathrm{MAX}}^{}=2l-1`$.
A CF hierarchy scheme was proposed by Sitko et al. in which the CF transformation was reapplied to QP’s in partially filled shells. The application of the MFCF approximation was found to work in some cases but not in others. Some idea of when the MFCF approximation is valid can be obtained from looking at the 2QE and 2QH states in Fig. 1. The QH pseudopotential is SR at $`L=10`$, but not at $`L=8`$. The QE pseudopotential is certainly not SR at $`L=8`$, but at $`L=6`$ it might be. This suggests that Laughlin states will be formed by QH’s of the $`\nu =1/3`$ state at $`\nu _{\mathrm{QH}}=1/3`$ and by QE’s of the $`\nu =1/3`$ state at $`\nu _{\mathrm{QE}}=1`$, explaining the strong FQHE of the underlying electron system at the Jain $`\nu =2/7`$ and 2/5 filling factors. In contrast, no FQHE at $`\nu _{\mathrm{QH}}=1/5`$ ($`\nu =4/13`$ electron filling) or $`\nu _{\mathrm{QE}}=1/3`$ ($`\nu =4/11`$) would be expected because the QP pseudopotentials are not SR at these values.
A final interesting example is that of a multi-component plasma of electrons and one or more negatively charged excitonic ions $`X_k^{}`$ (a bound state of $`k`$ neutral excitons and an electron) formed in an electron-hole system. These excitonic ions are long lived Fermions with LL structure. The angular momentum of an $`X_k^{}`$ on a Haldane sphere is $`l_k=S-k`$. The pseudopotentials describing the interactions of $`X_k^{}`$ ions with electrons and with one another can be shown to be SR. In fact, $`V_{AB}(L^{})`$, where $`A`$ or $`B`$ or both are composite particles, has a “hard core” for which one or more of the largest values of $`V_{AB}(L^{})`$ are effectively infinite.
The following configurations of ions have low energy in the twelve electron–six hole system at $`2S=17`$. The $`6X^{}`$ configuration (i) has the maximum total binding energy $`\epsilon _\mathrm{i}`$. Other expected low lying bands are: (ii) $`e^{}+5X^{}+X^0`$ with $`\epsilon _{\mathrm{ii}}`$ and (iii) $`e^{}+4X^{}+X_2^{}`$ with $`\epsilon _{\mathrm{iii}}`$. Here, $`\epsilon _\mathrm{i}>\epsilon _{\mathrm{ii}}>\epsilon _{\mathrm{iii}}`$ are all known. Although we are unable to perform an exact diagonalization for this system in terms of individual electrons and holes, we can use appropriate pseudopotentials and binding energies to obtain the low lying states in the spectrum. The results are presented in Fig. 4.
There is only one $`6X^{}`$ state (the $`L=0`$ Laughlin $`\nu _X^{}=1/3`$ state) and two bands of states in each of groupings (ii) and (iii). A gap of 0.0626 $`e^2/\lambda `$ separates the $`L=0`$ ground state from the lowest excited state.
## Generalized Composite Fermion Picture.
In order to understand the numerical results obtained in Fig. 4, it is useful to introduce a generalized CF picture by attaching to each particle fictitious flux tubes carrying an integral number of flux quanta $`\varphi _0`$. In the multi-component system, each $`a`$-particle carries flux $`(m_{aa}-1)\varphi _0`$ that couples only to charges on all other $`a`$-particles and fluxes $`m_{ab}\varphi _0`$ that couple only to charges on all $`b`$-particles, where $`a`$ and $`b`$ are any of the types of Fermions. The effective monopole strength seen by a CF of type $`a`$ (CF-$`a`$) is $`2S_a^{}=2S-\sum _b(m_{ab}-\delta _{ab})(N_b-\delta _{ab})`$. For different multi-component systems we expect generalized Laughlin incompressible states when all the hard core pseudopotentials are avoided and CF’s of each kind fill completely an integral number of their CF shells (e.g. $`N_a=2l_a^{}+1`$ for the lowest shell). In other cases, the low lying multiplets are expected to contain different kinds of quasiparticles (QP-$`A`$, QP-$`B`$, …) or quasiholes (QH-$`A`$, QH-$`B`$, …) in neighboring filled shells. Our multi-component CF picture can be applied to the $`12e+6h`$ spectrum given in Fig. 4. The agreement is really quite remarkable and strongly indicates that our multi-component CF picture is correct.
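As a concrete illustration of this counting rule, the sketch below (ours) evaluates the effective monopole strengths; the flux matrix shown is a hypothetical example, not the values used for the $`12e+6h`$ system:

```python
# Generalized CF counting: 2S*_a = 2S - sum_b (m_ab - delta_ab)(N_b - delta_ab).
# The flux matrix m and populations N below are hypothetical placeholders.

def effective_monopole(two_S, m, N):
    return {a: two_S - sum((m[a][b] - (a == b)) * (N[b] - (a == b))
                           for b in N)
            for a in N}

two_S = 17
N = {"e": 2, "X": 4}                    # e.g. electrons and X- ions
m = {"e": {"e": 2, "X": 1},             # hypothetical integer flux exponents
     "X": {"e": 1, "X": 2}}
print(effective_monopole(two_S, m, N))  # {'e': 2S*_e, 'X': 2S*_X}
```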
In this work we have emphasized that the success of the MFCF picture is critically dependent on the nature of the pseudopotential. We have presented several examples of SR pseudopotentials for which the CF picture works well, and several subharmonic pseudopotentials for which it does not.
We gratefully acknowledge partial support from the Materials Research Program of Basic Energy Sciences, US Department of Energy.
|
no-problem/9905/astro-ph9905096.html
|
ar5iv
|
text
|
# Changes in the structure of the accretion disc of HS1804+67 through the outburst cycle This work was partially supported by CNPq research grant no. 300 354/96-7 and by PRONEX grant FAURGS/FINEP 7697.1003.00.
## 1 Introduction
Dwarf novae yield a unique opportunity to study the time evolution of non-stationary accretion discs, in particular in the transition between the high and low viscosity (and accretion rate) states which are believed to occur during the outbursts of these objects. HS1804+67 is a long period ($`5.5`$ hr) eclipsing dwarf nova with outbursts of moderate amplitude (1-2 mag) and recurrence intervals of about 1 month \[Billington et al, 1995, Fiedler et al, 1997\].
In this paper we report on the results of the analysis with eclipse mapping techniques \[Horne, 1985\] of a set of lightcurves of HS1804+67, which allows us to follow the evolution of the structure of its accretion disc through the outburst cycle. The eclipse maps capture “snapshots” of the disc brightness distribution on the rise to maximum, during maximum light, through the decline phase, and at the end of the eruption – when the system goes through a phase of minimum light before recovering its quiescent brightness level.
## 2 Observations and analysis
Time series of CCD differential photometry of HS1804+67 in the R band were obtained with the JGT 1-m telescope at the University of St. Andrews during 1995-96 covering 4 consecutive eruptions of the star. The data were grouped and average lightcurves were obtained for 11 different phases through the outburst cycle. The average lightcurves were analyzed with eclipse mapping techniques to produce a map of the disc brightness distribution and an additional uneclipsed component in each case. The data lightcurves and corresponding eclipse mapping models are shown in Fig. 1. Maps of the brightness distribution for 9 of the lightcurves are shown in Fig. 2 in a logarithmic grayscale. Fig. 3 shows the evolution of the radial intensity distribution in the disc of HS1804+67 through the outburst cycle.
## 3 Discussion
The results reveal the formation of a spiral structure at the early stages of the outburst (fig. 2a) and show how the disc expands until it fills almost all of the primary Roche lobe at maximum light (figs. 2c and 3), becoming progressively fainter through the decline while the bright spot starts to become more and more perceptible at the outer edge of the disc (figs. 2d-h and 3). At the phase of minimum the disc mostly disappears, leaving only a small bright region around the white dwarf, possibly a boundary layer (fig. 2i). In quiescence the disc is asymmetric, with the region along the gas stream trajectory being noticeably brighter than the neighbouring regions (fig. 2j). This is in agreement with the results obtained from Doppler tomography of H$`\alpha `$ emission \[Billington et al, 1995\].
The comparison of the maps during the rise to maximum indicates that the eruption starts in the outer disc and that a heating front wave (which triggers the high viscosity and mass accretion state) moves inwards and reaches the central parts of the disc at outburst maximum. A cooling front wave characterizes the decline of the eruption and also propagates from the outer parts to disc centre (fig. 3) reaching the central parts of the disc by the end of the outburst (fig. 2h). The comparison of the maps at minimum light and at quiescence suggests that mass accretion over the white dwarf is substantially reduced in the former phase and that probably most of the matter transferred from the secondary star at these stages accumulates in the outer disc, restarting the eruption cycle.
The evolution of the uneclipsed flux through the outburst can be seen in Fig. 1. A dotted line indicates the value of the uneclipsed component at minimum light and is interpreted as being due to the (fixed) contribution of the secondary star to the flux in the R band. The variable part of the uneclipsed component is probably due to emission in a vertically extended disc chromosphere + wind. Fig. 1 shows that the emission from this latter region follows the changes in brightness of the inner parts of the disc during the outburst. At minimum light, when mass accretion at the inner disc is substantially reduced, the emission in the disc chromosphere \+ wind practically disappears. These results support the suggestion that the ejection of material in the wind originates from the inner parts of the disc and that the emission of the resulting chromosphere + wind is sensitive to the disc mass accretion rate, in accordance to inferences drawn by a similar study of the novalike UX UMa \[Baptista et al, 1998\].
|
no-problem/9905/astro-ph9905082.html
|
ar5iv
|
text
|
# The X-ray Fundamental Plane and $`L_\mathrm{X}`$-$`T`$ Relation of Clusters of Galaxies
## 1 Introduction
Correlations among physical quantities of clusters of galaxies are very useful tools for studying formation of clusters and cosmological parameters. In particular, the luminosity ($`L_\mathrm{X}`$)-temperature ($`T`$) relation in X-ray clusters has been studied by many authors. Observations show that clusters of galaxies exhibit a correlation of approximately $`L_\mathrm{X}\propto T^3`$ (Edge & Stewart 1991; David et al. 1993; Allen & Fabian 1998; Markevitch 1998; Arnaud & Evrard 1998). On the other hand, a simple theoretical model predicts $`L_\mathrm{X}\propto T^2`$ on the assumptions that (1) the internal structures of clusters of different mass are similar; in particular, the ratio of gas mass to virial mass in the clusters ($`f=M_{\mathrm{gas}}/M_{\mathrm{vir}}`$) is constant; (2) all clusters identified at some redshift have the same characteristic density, which scales with the mean density of the universe (e.g. Kaiser 1986; Navarro, Frenk, & White 1995; Eke, Navarro, & Frenk 1998). This discrepancy remains one of the most important problems in clusters of galaxies.
The discrepancy of the $`L_\mathrm{X}`$-$`T`$ relation is not easily resolved even if we relax one of these basic assumptions. We just show an example, where the assumption (1) is relaxed. The X-ray luminosity of clusters is approximately given by $`L_\mathrm{X}\propto \rho _0^2R^3T^{1/2}`$, where $`\rho _0`$ is the characteristic gas density, and $`R`$ is the core radius. Thus, the observed relation $`L_\mathrm{X}\propto T^3`$ indicates that $`\rho _0^2R^3\propto T^{5/2}`$. If the gravitational matter has the same core radius as gas, the baryon mass fraction is given by $`f\propto \rho _0R^3/(RT)\propto \rho _0R^2T^{-1}`$. If we assume $`f\propto T^\alpha `$, we obtain $`\rho _0\propto T^{2-3\alpha }`$, $`R\propto T^{-1/2+2\alpha }`$, $`M_{\mathrm{gas}}\propto T^{1/2+3\alpha }`$, $`M_{\mathrm{vir}}\propto T^{1/2+2\alpha }`$, and the characteristic density of gravitational matter $`\rho _{\mathrm{vir}}\propto M_{\mathrm{vir}}/R^3\propto T^{2-4\alpha }`$. Assuming that $`\rho _{\mathrm{vir}}`$ is constant in the spirit of the above assumption (2) (so-called recent-formation approximation), we should take $`\alpha =1/2`$. Thus, this model predicts a correlation of $`\rho _0\propto R\propto T^{1/2}`$. However, such a correlation has not been found, although many authors have investigated relations among the physical quantities of clusters (e.g. Edge & Stewart 1991; Mohr & Evrard 1997; Arnaud & Evrard 1998, Mohr, Mathiesen, & Evrard 1999). It is to be noted that in the spirit of the above assumption (1), it is favorable to use core radii when comparing clusters with different masses, although some previous studies use isophotal radii instead of core radii in the analysis (e.g. Mohr & Evrard 1997). Some other studies use a ’virial’ radius, defined as the radius of a sphere of which the mean interior density is proportional to the critical density of the universe at the observed redshift of the cluster ($`z\sim 0`$). However, these radii are derived from the temperatures of clusters, and are not independent of the temperatures (e.g. Mohr et al. 1999). Moreover, $`L_\mathrm{X}`$ is mainly determined by the structure around the core region which preserves the information of the background universe when the cluster collapsed (e.g. Navarro, Frenk, & White 1997; Salvador-Solé, Solanes, & Manrique 1998). Thus, we adopt the core radius as the characteristic scale of a cluster. Since most previous works implicitly assumed that clusters form a one-parameter family, the failure of finding the correlations including core radii suggests that clusters form a two-parameter family instead.
In this Letter, we reanalyze the observational data of X-ray clusters and study the relations in detail based on the idea of fundamental plane. Originally, the word, ’fundamental plane’, represents a relation among effective radius, surface brightness, and velocity dispersion of elliptical and S0 galaxies (e.g. Faber et al. 1987; Djorgovski & Davis 1987). In this study, we apply the notion of the fundamental plane to X-ray clusters and discuss relations among $`\rho _0`$, $`R`$, and $`T`$. In §2, results are presented and in §3, their implications are discussed. Throughout the paper we assume $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$.
## 2 Data
We use the observational data of the central density, $`\rho _0`$, core radius, $`R`$, and temperature, $`T`$, of 45 clusters in the catalogue of Mohr et al. (1999). We have confirmed that the results in this section are almost identical to those based on the catalogue of Jones & Forman (1984). Mohr et al. (1999) gathered the temperature data of previous ASCA, Ginga and Einstein observations. On the other hand, they obtained central densities and core radii using ROSAT data; they fitted surface brightness profiles by the conventional $`\beta `$ model,
$$\rho _{\mathrm{gas},1}(r)=\frac{\rho _1}{[1+(r/R_1)^2]^{3\beta /2}},$$
(1)
where $`r`$ is the distance from the cluster center, and $`\rho _1`$, $`R_1`$, and $`\beta `$ are fitting parameters. If an excess in emission (so-called cooling flow) is seen in the innermost region, Mohr et al. (1999) fitted this component by an additional $`\beta `$ model,
$$\rho _{\mathrm{gas},2}(r)=\frac{\rho _2}{[1+(r/R_2)^2]^{3\beta /2}}.$$
(2)
Since we are interested in global structure of clusters, we use $`\rho _1`$ and $`R_1`$ as $`\rho _0`$ and $`R`$, respectively. Since Mohr et al. (1999) presented only $`\rho _2`$ for the clusters with central excess, we calculate $`\rho _1`$ by
$$\rho _1=\left(\frac{I_1R_2}{I_2R_1}\right)^{1/2}\rho _2,$$
(3)
where $`I_1`$ and $`I_2`$ are the central surface brightnesses corresponding to the components (1) and (2), respectively. Although $`R`$ and $`\beta `$ are correlated, each of them was determined accurately enough for our analysis (see Fig. 4 in Mohr et al. 1999).
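For reference, Eq. (3) is straightforward to apply; the sketch below (ours, with placeholder numbers rather than values from the catalogue) recovers $`\rho _1`$ from the published quantities:

```python
# Eq. (3): rho_1 = sqrt(I_1 R_2 / (I_2 R_1)) * rho_2, where I_1, I_2 are the
# central surface brightnesses of the two beta-model components. All numbers
# below are placeholders, not values from Mohr et al. (1999).

def rho_1(rho_2, I_1, I_2, R_1, R_2):
    return ((I_1 * R_2) / (I_2 * R_1)) ** 0.5 * rho_2

# rho_2 in cm^-3, radii in Mpc, surface brightnesses in any common units
print(rho_1(rho_2=1.0e-2, I_1=0.3, I_2=1.0, R_1=0.25, R_2=0.05))  # ~2.4e-3
```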
The data plotted in the $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$ space are fitted with a plane,
$$A\mathrm{log}\rho _0+B\mathrm{log}R+C\mathrm{log}T+D=0.$$
(4)
The result of the least square fitting with equal weight for simplicity is $`A:B:C=1:1.39:-1.29`$. The scatter about the plane is 0.06 dex. This amounts to a scatter of about 15%, which is a typical observational error. We call the plane ’the fundamental plane’, hereafter. The ratio $`A:B:C`$ is close to $`2:3:-2.5`$, which is expected when $`L_\mathrm{X}\propto \rho _0^2R^3T^{1/2}\propto T^3`$. Thus, the observed relation, $`L_\mathrm{X}\propto T^3`$, basically corresponds to a cross section of the fundamental plane.
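The fit itself is a standard operation; a minimal sketch is given below (our reconstruction; the paper does not state whether an orthogonal or a coordinate regression was used, so we show the orthogonal variant, which also yields the perpendicular scatter):

```python
# Fit A log(rho0) + B log(R) + C log(T) + D = 0 by taking the direction of
# least variance of the centred data (orthogonal least squares with equal
# weights). The input arrays stand in for the 45 clusters of the catalogue.
import numpy as np

def fit_plane(log_rho0, log_R, log_T):
    data = np.column_stack([log_rho0, log_R, log_T])
    centred = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1] / vt[-1][0]          # normalise so that A = 1
    scatter = np.std(centred @ vt[-1])   # rms deviation perpendicular to plane
    return normal, scatter               # normal = (A, B, C)

# usage: (A, B, C), s = fit_plane(np.log10(rho0), np.log10(R), np.log10(T))
```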
In order to study more closely, we investigate further the distribution of the observational data on the fundamental plane. We fit the data to another plane,
$$a\mathrm{log}\rho _0+b\mathrm{log}R+c\mathrm{log}T+d=0,$$
(5)
under the constraint,
$$Aa+Bb+Cc=0.$$
(6)
This means that the plane (5) is perpendicular to the fundamental plane (4). The result is $`a:b:c=1:1.18:2.04`$. The scatter about the plane is 0.2 dex. We call this plane ’the vertical plane’. For convenience, two unit vectors in the $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$ space are defined by,
$$𝒆_\mathrm{𝟏}=\frac{1}{\sqrt{A^2+B^2+C^2}}(A,B,C)=(0.47,0.65,-0.60),$$
(7)
$$𝒆_\mathrm{𝟐}=\frac{1}{\sqrt{a^2+b^2+c^2}}(a,b,c)=(0.39,0.46,0.80).$$
(8)
Moreover, one of the unit vectors perpendicular to both $`𝒆_\mathrm{𝟏}`$ and $`𝒆_\mathrm{𝟐}`$ is defined as $`𝒆_\mathrm{𝟑}=(0.79,-0.61,-0.039)`$. The set of three vectors is one of the bases in the $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$ space. Thus, the equations $`X=\rho _0^{0.47}R^{0.65}T^{-0.60}`$, $`Y=\rho _0^{0.39}R^{0.46}T^{0.80}`$, and $`Z=\rho _0^{0.79}R^{-0.61}T^{-0.039}`$ are three orthogonal quantities. Figure 1 shows the cross section of the fundamental plane viewed from the $`Y`$ axis. Figure 2 shows the data on the $`(Y,Z)`$ plane, i.e., the fundamental plane. As can be seen, a clear correlation exists on the plane, that is, clusters form a band in the $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$ space. The major axis of the band is the cross line of the fundamental and vertical planes, and a vector along the major axis is proportional to $`𝒆_\mathrm{𝟑}`$. We refer to the band as ’fundamental band’, hereafter. Note that the line determined by the least square method directly from the three-dimensional data is almost parallel to the vector $`𝒆_\mathrm{𝟑}`$. The vector $`𝒆_\mathrm{𝟑}`$ means that
$$\rho _0\propto R^{-1.3\pm 0.2},$$
(9)
$$T\propto R^{0.06\pm 0.1}\propto \rho _0^{-0.05\pm 0.1}.$$
(10)
Relation (10) indicates that the major axis of the fundamental band is nearly parallel to the ($`\mathrm{log}\rho _0`$, $`\mathrm{log}R`$) plane, i.e., temperature varies very little along the fundamental band. Thus, the observed relation $`L_\mathrm{X}\propto T^3`$ should be the correlation along the minor axis of the band on the fundamental plane as is explicitly shown in the next section.
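A quick numerical check (ours) confirms that the three vectors quoted above are orthonormal to within the rounding of the published digits, and shows how data are projected onto the band coordinates:

```python
import numpy as np

e1 = np.array([0.47, 0.65, -0.60])
e2 = np.array([0.39, 0.46, 0.80])
e3 = np.array([0.79, -0.61, -0.039])

for u, v in [(e1, e2), (e1, e3), (e2, e3)]:
    print(np.dot(u, v))                  # all ~0 (up to rounding)
print([round(float(np.linalg.norm(e)), 3) for e in (e1, e2, e3)])  # all ~1

def band_coordinates(log_rho0, log_R, log_T):
    data = np.column_stack([log_rho0, log_R, log_T])
    return data @ np.column_stack([e1, e2, e3])  # columns: log X, log Y, log Z
```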
## 3 Discussion
The results presented in the previous section demonstrate that the clusters of galaxies are seen to populate a planar distribution in the global parameter space $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$. Therefore, clusters turn out to be a two-parameter family. The observed relation $`L_\mathrm{X}\propto T^3`$ is a cross section of this ’fundamental plane’. Moreover, there is a correlation among the data on the fundamental plane although the dispersion is relatively large. This ’fundamental band’ is a newly found correlation between density and radius with a fixed temperature.
In order to further investigate the relation between physical quantities and the data distribution in the $`(\mathrm{log}\rho _0,\mathrm{log}R,\mathrm{log}T)`$ space, we represent $`L_\mathrm{X}`$, $`M_{\mathrm{gas}}`$, $`M_{\mathrm{vir}}`$, $`f`$, and $`\rho _{\mathrm{vir}}`$ by $`X`$, $`Y`$, and $`Z`$, using the obtained relations
$$\rho _0\propto X^{0.47}Y^{0.39}Z^{0.79},$$
(11)
$$R\propto X^{0.65}Y^{0.46}Z^{-0.61},$$
(12)
$$T\propto X^{-0.60}Y^{0.80}Z^{-0.039}.$$
(13)
The results are
$$L_\mathrm{X}\propto \rho _0^2R^3T^{1/2}\propto X^{2.6}Y^{2.6}Z^{-0.27},$$
(14)
$$M_{\mathrm{gas}}\propto \rho _0R^3\propto X^{2.4}Y^{1.8}Z^{-1.0},$$
(15)
$$M_{\mathrm{vir}}\propto RT\propto X^{0.05}Y^{1.3}Z^{-0.65},$$
(16)
$$f=M_{\mathrm{gas}}/M_{\mathrm{vir}}\propto X^{2.4}Y^{0.51}Z^{-0.39},$$
(17)
$$\rho _{\mathrm{vir}}\propto M_{\mathrm{vir}}R^{-3}\propto X^{-1.9}Y^{-0.12}Z^{1.2}.$$
(18)
Strictly speaking, $`M_{\mathrm{gas}}`$ and $`M_{\mathrm{vir}}`$ represent the core masses rather than the masses of the whole cluster. In relation (16), we assume that clusters of galaxies are in dynamical equilibrium. The scatters of $`X`$, $`Y`$, and $`Z`$ are $`\mathrm{\Delta }\mathrm{log}X=0.06`$, $`\mathrm{\Delta }\mathrm{log}Y=0.2`$, and $`\mathrm{\Delta }\mathrm{log}Z=0.5`$, respectively. Thus, $`Z`$ is the major axis of the fundamental band and is the primary parameter of the data distribution. On the other hand, relation (13) indicates that the scatter of $`Y`$ nearly corresponds to a variation of $`T`$ because $`\mathrm{\Delta }\mathrm{log}T=-0.60\mathrm{\Delta }\mathrm{log}X+0.80\mathrm{\Delta }\mathrm{log}Y-0.039\mathrm{\Delta }\mathrm{log}Z`$. It can be also shown that a variation of $`L_\mathrm{X}`$ is dominated by the scatter of $`Y`$. Since $`Y`$ corresponds to the minor axis of the fundamental band, this means that the $`L_\mathrm{X}`$-$`T`$ relation is well represented by only the secondary parameter $`Y`$, but not by the primary parameter $`Z`$. To put it differently, $`L_\mathrm{X}`$ ($`\propto \rho _0^2R^3T^{1/2}`$) depends on only $`T`$, which is consistent with previous findings. The result reflects the fact that a combination of $`\rho _0`$ and $`R`$ like $`\rho _0^2R^3`$ behaves as a function of $`T`$, while $`\rho _0`$ or $`R`$ varies almost independently of $`T`$. If we safely ignore the scatter of $`X`$ and $`Z`$ in relations (13) and (14), we obtain $`T\propto Y^{0.80}`$, and $`L_\mathrm{X}\propto T^{3.3}`$. This slope of the $`L_\mathrm{X}`$-$`T`$ relation approaches the observed ones, although it is slightly larger. On the other hand, $`M_{\mathrm{gas}}`$, $`M_{\mathrm{vir}}`$, and $`f`$ are not represented by any one of the parameters $`X`$, $`Y`$, and $`Z`$; both $`Y`$ and $`Z`$ contribute to their variations. Note that $`\rho _{\mathrm{vir}}`$ is mainly governed by $`Z`$ as relation (18) shows.
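The exponent bookkeeping behind relations (11)-(18) is a single matrix multiplication; the sketch below (ours) reproduces the quoted values from the orthonormal basis:

```python
# Since (log X, log Y, log Z) = M (log rho0, log R, log T) with orthonormal
# rows e1, e2, e3, a power law rho0^u R^v T^w has (X, Y, Z) exponents M (u,v,w).
import numpy as np

M = np.array([[0.47, 0.65, -0.60],       # e1
              [0.39, 0.46, 0.80],        # e2
              [0.79, -0.61, -0.039]])    # e3

def xyz_exponents(u, v, w):              # for a quantity rho0^u R^v T^w
    return M @ np.array([u, v, w])

print(xyz_exponents(2, 3, 0.5))   # L_X     -> ~( 2.6,  2.6, -0.27)
print(xyz_exponents(1, 3, 0))     # M_gas   -> ~( 2.4,  1.8, -1.0)
print(xyz_exponents(0, 1, 1))     # M_vir   -> ~( 0.05, 1.3, -0.65)
print(xyz_exponents(1, 2, -1))    # f       -> ~( 2.4,  0.51, -0.39)
print(xyz_exponents(0, -2, 1))    # rho_vir -> ~(-1.9, -0.12,  1.2)
```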
The above analysis raises two questions. The first question is why the combination like $`\rho _0^2R^3`$ behaves as a function of only $`T`$, or equivalently why $`X`$ is nearly constant. In the following arguments, we assume that the scatter of $`X`$ is due to observational errors, that is, $`\mathrm{\Delta }\mathrm{log}X`$ is essentially zero. The behavior of $`f`$ may be a clue to the question. Since we allow two parameters, $`f`$ can be expressed in terms of any two physical parameters. For example, if we express $`f`$ with $`M_{\mathrm{vir}}`$ and $`\rho _{\mathrm{vir}}`$, $`f`$ turns out to be determined by $`f\propto M_{\mathrm{vir}}^{0.4}\rho _{\mathrm{vir}}^{-0.1}`$. This means that the baryon fraction in clusters is an increasing function of $`M_{\mathrm{vir}}`$. If we adopt the relation of $`f\propto M_{\mathrm{vir}}^{0.4}`$ by hand and ignore $`\rho _{\mathrm{vir}}^{-0.1}`$ hereafter, we obtain $`\rho _0^2R^{3.2}\propto T^{2.8}`$, which is roughly consistent with the shape of the fundamental plane, and the $`L_\mathrm{X}`$-$`T`$ relation. Such a relation of $`f`$ may be realized if supernovae in the galaxies in the clusters heat the intracluster medium. In other words, the behavior of $`f`$ is likely to originate from the thermal history of clusters of galaxies.
The second question is why clusters form a two-parameter family. We think that one natural parameter is $`M_{\mathrm{vir}}`$. As another physically meaningful parameter, we may choose $`\rho _{\mathrm{vir}}`$. Relation (18) implies that $`\rho _{\mathrm{vir}}`$ is not constant, which is inconsistent with the simple theoretical prediction, and that it varies nearly independently of temperature. Since $`\rho _{\mathrm{vir}}`$ is supposed to reflect the critical density of the universe when the cluster, especially around the core region, collapsed, this suggests that the present day clusters consist of objects with a range of collapse redshift. In a separate paper, we investigate the cosmological implications of the results presented in this paper (Fujita & Takahara 1999).
Finally, we show that the results of this paper reproduce the size-temperature relation found by Mohr & Evrard (1997) and the gas mass-temperature relation found by Mohr et al. (1999). Since surface brightness profiles of clusters in the envelope region are given by $`I(r)\propto \rho _0^2T^{1/2}R(r/R)^{-3}`$ when $`\beta =2/3`$, the isophotal size, $`r=R_\mathrm{I}`$, obeys the relation $`R_\mathrm{I}\propto \rho _0^{2/3}R^{4/3}T^{1/6}`$. Eliminating $`\rho _0`$ by the relation of the fundamental plane, $`\rho _0\propto R^{-1.39}T^{1.29}`$, we obtain the relation $`R_\mathrm{I}\propto R^{0.41}T^{1.03}\propto Y^{1.0}Z^{-0.3}`$. This is consistent with the size-temperature relation $`R_\mathrm{I}\propto T^{0.93}`$, although a coefficient $`R^{0.4}`$ induces scatter of about 30% for a given $`T`$. The correlation corresponds to a cross section of the fundamental plane seen slightly inclined from the direction of the $`Z`$ axis. Next, the consistency with the gas mass-temperature relation is explained as follows: As in Mohr et al. (1999), let us define $`R_{\mathrm{vir},\mathrm{m}}\propto T^{1/2}`$, $`M_{\mathrm{vir},\mathrm{m}}\propto T^{3/2}`$, and $`M_{\mathrm{gas},\mathrm{m}}\propto f_\mathrm{m}\rho _{\mathrm{vir},\mathrm{m}}R_{\mathrm{vir},\mathrm{m}}^3`$, where $`R_{\mathrm{vir},\mathrm{m}}`$, $`M_{\mathrm{vir},\mathrm{m}}`$, and $`M_{\mathrm{gas},\mathrm{m}}`$ are the virial radius, the virial mass, and the gas mass of a cluster, respectively; index m refers to the quantities for $`r<R_{\mathrm{vir},\mathrm{m}}`$. When $`\beta \approx 2/3`$, we can show that $`f_\mathrm{m}\propto f`$, because $`f_\mathrm{m}=M_{\mathrm{gas},\mathrm{m}}/M_{\mathrm{vir},\mathrm{m}}\propto \rho _0R^2R_{\mathrm{vir},\mathrm{m}}/M_{\mathrm{vir},\mathrm{m}}\propto \rho _0R^2T^{-1}\propto f`$. Since we find $`f\propto M_{\mathrm{vir}}^{0.4}\propto (RT)^{0.4}`$, and since $`\rho _{\mathrm{vir},\mathrm{m}}`$ is nearly constant by definition, we obtain the relation $`M_{\mathrm{gas},\mathrm{m}}\propto R^{0.4}T^{1.9}\propto Y^{1.7}Z^{-0.3}`$. This is consistent with the relation $`M_{\mathrm{gas},\mathrm{m}}\propto T^{1.98}`$ found by Mohr et al. (1999). Note that the scatter originated from $`R^{0.4}`$ is not conspicuous when the observational data are plotted, because of the steepness of the relation ($`\propto T^2`$). This correlation also corresponds to a cross section of the fundamental plane seen from very near to the direction of the $`Z`$ axis.
This work was supported in part by the JSPS Research Fellowship for Young Scientists.
|
no-problem/9905/nucl-th9905005.html
|
ar5iv
|
text
|
# QUARK-GLUON PLASMA*footnote **footnote *Presented at the XXXVIII Cracow School of Theoretical Physics, Zakopane, Poland, June 1-10, 1998; Acta Phys. Pol. B29 (1998) 3711.
## I Introduction
The quark-gluon plasma is a state of the extremely dense matter with the quarks and gluons being its constituents. Soon after the Big Bang the matter was just in such a phase. When the Universe was expanding and cooling down the quark-gluon plasma turned into hadrons - neutrons and protons, in particular - which further formed the atomic nuclei. The plasma is not directly observed in Nature nowadays, but astrophysical compact objects such as the neutron stars may conceal the quark-gluon nuggets in their dense centers. The most exciting however are the prospects to study the plasma in the laboratory experiments. A broad research program of the heavy-ion collisions, which provide a unique opportunity to produce the quark-gluon droplets in the terrestrial conditions, is underway. While the question, whether the plasma is already produced with the currently operating heavy-ion accelerators, is right now vigorously debated, there are hardly any doubts that we will have a reliable evidence of the plasma within a few years. Then, there will be completed the accelerators of a new generation: the Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN.
The aim of this article is to give a very elementary introduction to the physics of quark-gluon plasma. We start with a few words on Quantum Chromodynamics, the fundamental theory of strong interactions. Then, the structure of hadrons, which are built up of quarks and gluons, is briefly discussed. Particular emphasis is put on the confinement hypothesis. In the next section we try to explain why and when the hadrons can dissolve liberating the quarks and gluons from their interiors. Finally, the generation of the quark-gluon plasma in heavy-ion collisions is considered.
There is a huge literature on every topic touched in this article. Instead of citing numerous original papers, we rather recommend two collections of the review articles. The more recent progress in the field can be followed through the proceedings of the regular “Quark Matter” conferences.
Throughout the article we use the natural units where the velocity of light $`c`$, the Planck $`\mathrm{\hbar }`$ and Boltzmann $`k`$ constants are all equal to unity. Then, the mass, momentum and temperature have the dimension of energy, which is most often expressed in MeV. The distances in space or time are either measured in the length units, which are usually given in fm $`(1\mathrm{fm}=10^{-13}\mathrm{cm})`$, or in the inverse energy units. One easily recalculates the length into the inverse energy or vice versa keeping in mind that $`\mathrm{\hbar }c=197.3\mathrm{MeV}\mathrm{fm}`$.
## II Quantum Chromodynamics
Quantum Chromodynamics (QCD) strongly resembles the Quantum Electrodynamics (QED). While QED describes the interaction of electric charges - usually electrons and their antiparticles positrons - with the electromagnetic field represented by photons, QCD deals with the quarks and gluons corresponding to, respectively, the electrons and photons. The quarks are, as electrons, massive and carry a specific charge called color, which is however not of one but of three types: red, blue and green. The gluons are, as photons, massless, but they are not neutral. In contrast to photons, they carry color charges which are combinations of the quark ones. The electromagnetic interaction proceeds due to the photon exchanges. Analogously the quarks interact exchanging the gluons. Although the photons being neutral cannot interact directly with each other, there are forces acting between gluons.
QCD has emerged as a theory of quarks and gluons which build up the hadrons i.e. strongly interacting particles such as neutrons and protons. However, there is no commonly accepted model of the hadron structure. The difficulty lies in the very nature of the strong interaction - its strength. While the perturbative expansion, where the noninteracting system is treated as a first approximation, appears to be the only effective and universal computational method in the quantum field theory, a large value of the QCD coupling constant excludes applicability of the method for the system of quarks and gluons. However, QCD possesses a remarkable property called the asymptotic freedom. The coupling constant $`\alpha _s`$ effectively depends on the four-momentum $`Q`$ transferred in the interaction as
$$\alpha _s(Q)=\frac{12\pi }{(33-2N_f)\mathrm{ln}(Q^2/\mathrm{\Lambda }_{QCD}^2)},$$
(1)
where $`N_f`$ is the number of the quark flavours (types) and $`\mathrm{\Lambda }_{QCD}`$ is the QCD scale parameter, $`\mathrm{\Lambda }_{QCD}\approx 200`$ MeV. Eq. (1) shows that the coupling constant is small when $`Q^2\gg \mathrm{\Lambda }_{QCD}^2`$. Therefore, the interactions with a large momentum transfer can be treated in the perturbative way. QCD appears to be indeed very successful in describing the hard processes, such as a production of jets in high energy proton-antiproton collisions, which proceed with high $`Q`$.
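To get a feeling for the numbers, the short sketch below (ours) evaluates Eq. (1) for $`N_f=3`$ and $`\mathrm{\Lambda }_{QCD}=200`$ MeV:

```python
import math

def alpha_s(Q_MeV, n_f=3, lambda_qcd=200.0):
    """Running coupling of Eq. (1)."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_MeV**2 / lambda_qcd**2))

for Q in (500, 1000, 10_000, 100_000):   # momentum transfer in MeV
    print(Q, round(alpha_s(Q), 3))
# alpha_s ~ 0.76 at Q = 500 MeV but only ~0.11 at Q = 100 GeV: the coupling
# falls off logarithmically, which is the asymptotic freedom.
```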
## III Hadron structure
The description of the soft processes in QCD, which, in particular, control the hadron structure, remains a very serious unresolved problem of the strong interactions. One has to rely on the phenomenological models, the validity of which can be only tested by confrontation against the experimental data. Dealing with the soft QCD one often uses the concept of the constituent quarks, which should be distinguished from the current quarks. The latter ones are the fundamental elementary spin 1/2 particles which are present in the QCD lagrangian. The current up ($`u`$) and down ($`d`$) quarks are light with masses of a few MeV. The constituent $`u`$ and $`d`$ quarks are supposed to be the effective quasiparticles with the (large) masses of about 300 MeV (1/3 of the nucleon mass) generated by the interaction. The difference between the current and constituent strange ($`s`$) quark masses is less dramatic. They are, respectively, about 150 and 450 MeV. In the case of heavy quarks - charm ($`c`$), bottom ($`b`$) and top ($`t`$) - the distinction is no longer valid.
Within the constituent quark model the baryon is described as three bound quarks. The meson is then the system of quark and antiquark. In terms of current quarks the hadron is seen as a cloud of quarks and gluons. The baryon is then no longer built of three quarks and the meson of quark and antiquark. Instead, the hadron carries the quantum numbers of, respectively, three quarks or the quark and antiquark. Therefore, the hadron is composed of the valence quarks (quark and antiquark in the case of meson and three quarks for the baryon) and the sea constituted by the quark-antiquark pairs and gluons.
The gluons are believed to glue together the quarks which form the hadron, but a satisfactory theory of the hadron binding is still missing. Such a theory has to explain the hypothesis of confinement which is the fundamental element of our understanding of the hadron world. While the electric charges tend to form electrically neutral atoms and molecules, in the case of the chromodynamic interactions there seems to be a strict rule that the color charges occur only within the white configurations. The existence of the separated color objects as quarks and gluons is excluded. They must be confined in the colorless systems such as hadrons. The three quarks forming a baryon carry three fundamental colors which give all together the white object. In the meson case the quark color is complementary to the color of the antiquark. In principle, the confinement hypothesis allows for the existence of not only the meson (quark-antiquark) and baryon (three quark) configurations but for any white one such as a dibaryon which is the six quark system. However, in spite of hard experimental efforts, the reliable evidence for the hadrons different from baryons and mesons is lacking.
There are many phenomenological approaches to the confinement. Let us briefly present here the string model inspired by the Meissner effect which is the expelling of the magnetic field from the superconducting material. The model assumes that the vacuum behaves as a dielectric medium, where the chromodynamic field cannot propagate but is confined in thin tubes or strings which connect the field sources. In Fig. 1 we show the electric field generated by the two opposite charges which are in the (normal) vacuum (a) and in the dielectric medium (b). Let us compute the potential which acts between the charges in the latter case. Using the Gauss theorem one immediately finds the electric field as $`E=q/\sigma `$, where $`q`$ denotes the charge and $`\sigma `$ the cross section of the tube. If $`\sigma `$ is independent of the distance $`r`$ between the charges, their potential energy equals
$$V(r)=\frac{q^2}{\sigma }r.$$
(2)
As seen, the potential energy grows linearly with $`r`$ when the charges are put in the dielectric medium.
Keeping in mind the result (2), the confinement hypothesis, which is illustrated in Fig. 2, can be understood as follows. Let us imagine that one tries to pull the meson apart by separating the quark from the antiquark. Stretching the meson requires pumping energy into the system. When the energy is sufficient to produce the quark-antiquark pair, the string breaks down and we have two mesons instead of one.
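A back-of-the-envelope estimate (ours, not from the text) makes the picture quantitative; the string tension of about 0.9 GeV/fm and the constituent quark mass of about 300 MeV (see the previous section) are commonly used rough values:

```python
# With a linear potential V(r) = kappa * r, the stored energy reaches the
# cost of a light quark-antiquark pair at a separation below 1 fm.
kappa = 0.9                 # GeV/fm, assumed string tension
m_q = 0.3                   # GeV, constituent quark mass

r_break = 2 * m_q / kappa   # separation where V(r) = 2 m_q
print(f"the string can break at r ~ {r_break:.2f} fm")   # ~0.67 fm
```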
At the end of this section we mention another commonly used model of the hadron structure - the bag model. One assumes here that the vacuum exerts the pressure $`B`$ on the (colored) quarks and gluons. Then, the hadron is the bag with quarks and gluons similar to the bubble of vapour in the liquid. The bag is usually spherical but the deformations are possible. The model appears to be really successful in describing a very rich mass spectrum of hadrons.
## IV Quark-Gluon Plasma
The quark-gluon plasma is the system of quarks and gluons, which are no longer confined in the hadron interiors, but can propagate in the whole volume occupied by the system. Thus, the quark-gluon plasma resembles the ionized atomic gas with quarks and gluons corresponding to electrons and ions and hadrons being the analogs of atoms. One may wonder whether the existence of quark-gluon plasma is not in conflict with the confinement hypothesis. We note that the plasma is white as a whole. Thus, the color charges are still confined in the colorless system. However, it is still unclear why and when the hadrons can dissolve liberating quarks and gluons. We will consider this issue in the next section. Here we would like to discuss a somewhat unexpected consequence of the asymptotic freedom.
Due to the mentioned difficulties of the soft QCD, the properties of the hadronic matter - either composed of hadrons or of quarks and gluons - are poorly known at the moderate temperatures. The situation changes qualitatively when the system temperature $`T`$ is much larger than the QCD scale parameter $`\mathrm{\Lambda }_{QCD}`$. In this limit $`T`$ is the only dimensional parameter which describes the system. In particular, it determines the average momentum transfer in the interaction of quarks and gluons. Therefore, $`Q^2=cT^2`$ where $`c`$ is a dimensionless constant. On the basis of the dimensional argument we expect to achieve the asymptotic freedom regime (smallness of the coupling constant) when $`T\gg \mathrm{\Lambda }_{QCD}`$. Then, the quark-gluon plasma is a weakly interacting gas of massless quarks and gluons.
The detailed calculations performed within the thermal QCD show that the asymptotic freedom regime is indeed obtained in the high temperature limit. Here we will only add a simple physical argument to the dimensional one used above. The momentum transfer, which enters the formula of the running coupling constant (1), corresponds (due to the Fourier transform) to the distance between the interacting partons. The distance is proportional to the inverse momentum transfer. When the plasma temperature increases, the density of the quark-gluon system grows as $`T^3`$ in the high temperature limit. (One can refer here again to the dimensional arguments or simple calculations presented in the next section.) Since the average particle separation in the gas of the density $`\rho `$ equals $`\rho ^{-1/3}`$, the average inter-particle distance decreases with $`T`$ as $`T^{-1}`$. Consequently, the average coupling constant vanishes when $`T\rightarrow \mathrm{\infty }`$.
## V Deconfinement phase transition
The transformation of the hadron gas into the quark-gluon plasma is called the deconfinement phase transition. There are many independent indications that such a transition indeed takes place in the dense hadronic medium. First of all one should mention the results obtained within the lattice formulation of QCD, where the space continuum is replaced by the discrete points. The Monte Carlo simulations show that there are two phases in the lattice QCD, which are identified with the hadron and quark-gluon phase, respectively. Here we would like to discuss simple arguments in favour of the existence of the quark-gluon plasma.
We start with the observation that hadrons are not point-like objects but their size is finite. The hadron radius is about 1 fm. So, let us consider the hadron gas which is so dense that the average separation of hadrons is about 1 fm. There is no reason to think that the confining potential still operates in such a medium at the distances which are significantly larger than 1 fm. When the quark and antiquark forming a meson are pulled way from each other there is no vacuum, which expels the chromodynamic field, but there are other hadrons all around. Therefore, the confining potential is expected to be screened at the distances comparable to the average inter-particle separation.
There are two ways, as illustrated in Fig. 3, to produce the dense hadronic matter. The first one is evident - squeezing of the nuclear matter. Since the baryon number, which is carried by neutrons and protons, is conserved, the nucleons cannot disappear and will start to overlap when the average inter-particle distance is smaller than the nucleon radius $`r_N\approx 1`$ fm. The inter-nucleon separation is smaller than $`r_N`$ at the densities exceeding $`\rho =r_N^{-3}\approx 1\mathrm{fm}^{-3}\approx 6\rho _0`$, where $`\rho _0=0.16\mathrm{fm}^{-3}`$ is the so-called saturation or normal nuclear density, approximately equal to the nucleon density in the nucleus center.
The second method to produce the dense hadronic matter is to heat up the nuclear matter or the hadron gas. The point is that in contrast to the baryon or lepton number, which are the conserved quantities, the particle number is not. Therefore, when the gas temperature (measured in the energy units) becomes comparable to the particle mass, further heating leads not only to the increase of the average particle kinetic energy but to the growth of the average particle number. Of course, the particle number growth cannot violate the conservation laws. Therefore, the particles, which carry a conserved charge, must be produced in the particle-antiparticle pairs. For example, to keep the baryon number of the system fixed we can add to the system only pairs of the baryon and antibaryon. The essentially neutral particles such as $`\gamma `$ or $`\pi ^0`$, which do not carry conserved charges, can be added to the system without restrictions, although the average particle number is controlled by the equilibrium conditions. More specifically, the numbers are determined by the minimum of the system free energy when the system temperature and volume are fixed.
Let us now consider the gas of noninteracting pions. Its temperature is assumed to be so high that the pions can be treated as massless particles. (The approximation appears to work quite well even for the temperatures which are close to the pion mass equal to 140 MeV.) The pion density is then
$$\rho _\pi =\int \frac{d^3p}{(2\pi )^3}\frac{g_\pi }{e^{E/T}-1}=\frac{g_\pi \zeta (3)}{\pi ^2}T^3\approx 0.73T^3,$$
(3)
where $`E=|𝐩|`$; $`\zeta (z)`$ is the Riemann function and $`\zeta (3)\approx 1.202`$; $`g_\pi `$ is the number of the particle internal degrees of freedom, which is 3 for the gas of $`\pi ^+`$, $`\pi ^0`$ and $`\pi ^{}`$. One immediately finds from (3) that the inter-pion separation is smaller than 1 fm for $`T>219`$ MeV.
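The estimate is easy to redo numerically; the sketch below (ours) uses the coefficient 0.73 quoted in Eq. (3) and the conversion $`\mathrm{\hbar }c=197.3\mathrm{MeV}\mathrm{fm}`$:

```python
HBARC = 197.3                              # MeV fm

def pion_density(T_MeV, coeff=0.73):       # Eq. (3) with the quoted coefficient
    return coeff * (T_MeV / HBARC) ** 3    # density in fm^-3

for T in (150, 200, 219, 250):             # temperature in MeV
    rho = pion_density(T)
    print(T, round(rho, 3), round(rho ** (-1 / 3), 2))  # separation in fm
# at T = 219 MeV the density reaches ~1 fm^-3, i.e. a mean separation of 1 fm
```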
The deconfinement phase transition has been studied in the numerous phenomenological approaches. We consider here the simplest possible model, where the phase transition is assumed to be of the first order. Then, one can apply the Gibbs criterion to construct the phase diagram. The hadron phase is modeled by the ideal gas of massless pions of three species ($`g_\pi =3`$) while the quark-gluon one by the ideal gas of quarks and gluons, which is baryonless i.e. the total baryon charge vanishes due to the equal number of quarks and antiquarks. Let us compute the number of the internal degrees of freedom in the quark-gluon gas. One should distinguish here the fermionic degrees of freedom of quarks $`g_q`$ and the bosonic of gluons $`g_g`$. There are two light quark flavours ($`u`$ and $`d`$); there are quarks and antiquarks; we have two spin and three color quark states. So, $`g_q=2\times 2\times 2\times 3=24`$. The gluons are in two spin and eight color states, which give $`g_g=2\times 8=16`$.
Since the pressure of the ideal gas of massless particles equals one third of the energy density, the pressure exerted by the pion gas is
$$p_\pi =\frac{1}{3}\int \frac{d^3pE}{(2\pi )^3}\frac{g_\pi }{e^{E/T}-1}=\frac{g_\pi \pi ^2}{90}T^4\approx 0.33T^4,$$
(4)
while that of the quark-gluon plasma equals
$$p_{qg}=\frac{1}{3}\int \frac{d^3pE}{(2\pi )^3}\left[\frac{g_g}{e^{E/T}-1}+\frac{g_q}{e^{E/T}+1}\right]=\left(g_g+\frac{7}{8}g_q\right)\frac{\pi ^2}{90}T^4\approx 4.1T^4.$$
(5)
According to the Gibbs criterion the phase, which generates the higher pressure at a given temperature, is realized. Then, one finds from eqs. (4, 5) that the pressure of the quark-gluon plasma is always greater than that of pions. Therefore, we should have, in conflict with the experiment, the quark-gluon phase at any temperature. However, we have not taken into account the pressure exerted by the vacuum on quarks and gluons. Subtracting the bag constant $`B`$ from the r.h.s. of eq. (5), one finds that below the critical temperature $`T_c`$ there is the pion gas and above the quark-gluon plasma. The critical temperature is given as
$$T_c=\left[\frac{90B}{\pi ^2\left(g_g+\frac{7}{8}g_q-g_\pi \right)}\right]^{1/4}\approx 0.72B^{1/4}.$$
Taking $`B^{1/4}=200`$ MeV, we get $`T_c=144`$ MeV.
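Plugging in the numbers (our check) reproduces the quoted value:

```python
import math

g_g, g_q, g_pi = 16, 24, 3        # degrees of freedom counted above
B_quarter = 200.0                 # bag constant B^(1/4) in MeV

T_c = (90 / (math.pi**2 * (g_g + 7 / 8 * g_q - g_pi))) ** 0.25 * B_quarter
print(round(T_c, 1))              # ~143.9 MeV, i.e. T_c = 144 MeV
```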
In Fig. 4 we show a schematic phase diagram of the strongly interacting matter. The baryon density is measured in the units of the normal nuclear one. The point at $`\rho =\rho _0`$ and $`T=0`$ represents normal nuclei. In fact, this is the only point in the diagram which is really well known and understood. At $`\rho >\rho _0`$ and $`T=0`$ there is a region of the dense nuclear matter with several exotic forms being suggested. When the baryon density exceeds 2-3 $`\rho _0`$ one expects a transition to the quark-gluon phase. At even higher densities the plasma is believed to be perturbative i.e. weakly interacting. So, one deals with the quasi ideal strongly degenerate quark gas. The nuclear matter at the temperature larger than a few MeV is traditionally called a hadron gas being mostly composed of pions when the baryon density vanishes. Then, we have quite reliable QCD lattice results which tell us that there is the deconfinement phase transition at $`T\approx 180`$ MeV. With the significantly larger temperatures we approach again the perturbative regime where the plasma is weakly interacting. In the next section we discuss how the phase diagram can be explored by means of the heavy-ion collisions.
## VI Heavy-Ion Collisions
As already mentioned in the introduction, the heavy-ion collisions provide a unique opportunity to study the quark-gluon plasma in the laboratory experiments. More precisely, a drop of a dense hadronic matter can be created in the collisions; the question whether the matter is for some time in the deconfined phase is a separate issue.
The physics of heavy-ion collisions essentially depends on the collision energy. A good measure is not the whole energy of the incoming nucleus but the energy per nucleon. The point is that at the energy of a few GeV per nucleon, which is the lowest energy of interest from the point of view of the quark-gluon plasma, the nucleus does not interact as a whole but there is an interaction of the overlapping parts of the colliding nuclei as depicted in Fig. 5. The nucleons to be found in these parts are called the participants while the remaining ones the spectators. One often distinguishes the central from the peripheral collisions. In the latter ones, which proceed with a large value of the impact parameter, most of the nucleons are spectators. In the case of the central collisions basically all nucleons from the smaller nucleus (target or projectile) are participants and the interaction zone is the largest. Obviously the central collisions are the most interesting in the context of the quark-gluon plasma searches. Unfortunately, the nucleus-nucleus cross section is dominated by the peripheral collisions. The cross section contribution of the collisions with a given impact parameter $`b`$ is $`2\pi bdb`$. Therefore, the contribution vanishes when $`b\rightarrow 0`$.
The high-energy nucleus-nucleus collision proceeds according to the following scenario. The overlapping parts of the colliding nuclei strongly interact and a dense and hot hadronic system, which is often called a fireball, is created. Initially it is formed either by the quarks and gluons or by the hadrons. Then, the system expands and cools down. If the fireball matter has been initially in the deconfined phase, the system experiences the hadronization i.e. the quarks and gluons are converted into the hadrons. The fireball further expands. When its density is so low that the mean free path of the hadrons is close to the system size or the expansion velocity is comparable to that of the individual particles, the whole system decouples into the hadrons which do not interact with each other any longer. However, the unstable particles can still gradually decay into the final state hadrons. The moment of decoupling is called the freeze-out, because the particle momenta are basically fixed at that time. Thus, the final state hadrons, which are observed by experimentalists in the particle detectors, characterize the fireball at the freeze-out.
As we discussed in Sec. V, the dense hadronic matter can be obtained either due to the nuclear matter compression or the nuclear matter heating. It appears that the significant effect of compression at the early stage of heavy-ion collisions, which is still insufficient for the quark-gluon plasma generation, is observed at the relatively low energies, not larger than 1 GeV. Then, one can study the dense nuclear matter of rather small temperature. At higher energies the atomic nuclei appear to be rather transparent, i.e. the colliding nuclei traverse each other. The participants are only slightly deflected from their straight line trajectories due to the interaction. However, they lose a sizable portion of their energy which is further manifested in the form of produced particles, mainly pions. Therefore, the baryon density of the produced hadronic system is not much increased even at the early collision stage. When the produced system expands, the baryon density soon gets a value which is smaller than the normal nuclear density. The strongly interacting matter however is significantly heated up. If the temperature exceeds the deconfinement transition temperature, the matter is expected to be in the quark-gluon phase.
A comment is in order here. We use the concept of the temperature which explicitly assumes that the system is in the thermodynamic equilibrium. It is far from obvious that this is really the case. Any physical system needs some time to reach the state of equilibrium. The hadronic matter produced in heavy-ion collisions cannot achieve the global equilibrium but the theoretical as well as experimental arguments suggest that the local quasi equilibrium is possible. The global equilibrium is characterized by the thermodynamic parameters which are unique for the whole system. Before the global equilibrium is achieved a system is usually for some time in the local equilibrium state. The system’s parts are then already in the equilibrium but the thermodynamic parameters - temperature, density, hydrodynamic velocity - vary from part to part. The hadronic matter produced in heavy-ion collisions is however not kept in any container but it immediately starts to expand, mainly along the beam axis. Consequently there is a sizable variation of the hydrodynamic velocity in this direction.
## VII Plasma Signatures
The plasma creation is expected to occur at the early collision stage, but the plasma must hadronize, i.e. experience the transition to the hadron gas, when the matter is expanding and cooling down at the later stages. Therefore, we always observe hadrons in the final state of the collisions and it is really difficult to judge whether the plasma has been present or not. Although it is hard to imagine a smoking gun proof, a few signatures of the plasma creation have been proposed. We discuss below the two which seem to be the most promising.
It has been argued that the presence of the plasma at the early collision stage increases the number of strange particles observed in the final state. The point is that the strange quark mass appears to be significantly smaller than the mass of strange particles. The strange $`(s)`$ quark mass is about 150 MeV. Therefore, one needs 300 MeV to produce the $`s\overline{s}`$ pair. The strange quarks must be, of course, produced in pairs because the strangeness is conserved in the strong interactions. The most energetically favorable way to produce the strangeness at the hadron level proceeds in the reaction $`\pi +N\rightarrow K+\mathrm{\Lambda }`$. Here the incoming particles - the pion and nucleon - must carry over 500 MeV in the center of mass frame. Thus, it is easier to produce strangeness at the quark-gluon than hadron level. Once the strange quarks appear in the plasma, they cannot disappear - a rather rare annihilation process of $`s\overline{s}`$ pairs can be neglected - and consequently, they are distributed among the final state hadrons. Quantitative comparison of the two scenarios with and without the plasma indeed shows that the presence of the plasma leads to the significant strangeness enhancement.
The second plasma signature deals with the $`J/\psi `$ particle or charmonium which is a bound state of the charm quark $`c`$ and antiquark $`\overline{c}`$. The $`J/\psi `$ particle is expected to dissolve much more easily in the quark-gluon environment than in the hadron one. This can be understood as a result of screening of the potential, which binds the $`c`$ and $`\overline{c}`$ quarks, by the color charges of the plasma particles. Therefore, the number of the $`J/\psi `$ particles in the final state should be significantly reduced if the plasma is present at the collision early stage.
Both predicted quark-gluon plasma signatures have indeed been observed experimentally in central heavy-ion collisions at an energy of about 200 GeV per nucleon, which have recently been studied at CERN. The whole set of experimental data, however, does not fit the theoretical expectations. Mechanisms of strangeness enhancement and $`J/\psi `$ suppression different from those mentioned above have been advocated. Therefore, it is a matter of hot debate whether the plasma is produced at the currently available energies. Plasma generation at higher energies seems to be guaranteed.
## VIII Perspectives
In the near future, nucleus-nucleus collisions will be studied at accelerators of a new generation: the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven and the Large Hadron Collider (LHC) at CERN. The collision energy will be larger by one to two orders of magnitude than that of the currently operating machines. Heavy-ion experiments have until now been performed in the beam-target configuration, where the accelerated ion is smashed against a target nucleus which is initially at rest. RHIC and LHC will use another principle - two intersecting ion beams will be accelerated. The energy of each beam will be 100 GeV per nucleon at RHIC and 3 000 GeV per nucleon at LHC. Thus, the collision energy per nucleon pair in the center-of-mass frame will equal, respectively, 200 and 6 000 GeV, which should be compared to about 20 GeV presently available in beam-target systems.
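The advantage of colliding beams follows from elementary relativistic kinematics: the center-of-mass energy grows linearly with the beam energy in a collider, but only as the square root of the beam energy in a fixed-target experiment. A minimal sketch, with the nucleon mass as the only input:

```python
from math import sqrt

m_N = 0.938  # nucleon mass in GeV

def sqrt_s_collider(E_beam):
    """CM energy per nucleon pair for two equal, head-on beams."""
    return 2.0 * E_beam

def sqrt_s_fixed_target(E_lab):
    """CM energy per nucleon pair when the target nucleon is at rest."""
    return sqrt(2.0 * m_N * E_lab + 2.0 * m_N**2)

print(sqrt_s_collider(100.0))      # 200 GeV  (RHIC)
print(sqrt_s_collider(3000.0))     # 6000 GeV (LHC)
print(sqrt_s_fixed_target(200.0))  # ~19.4 GeV, the ~20 GeV quoted above
```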
Proton-proton collisions have already been studied experimentally in the energy domain of RHIC. Perturbative QCD, which is hardly applicable at lower energies, is extensively used there. This is possible because the average momentum transfer grows with the collision energy, and the QCD coupling constant is then relatively small. Therefore, the theoretical understanding of the collisions, paradoxically, improves with growing energy.
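This trend is quantified by the one-loop running of the strong coupling - a standard result quoted here for orientation, not taken from the article:

$$\alpha _s(Q^2)=\frac{12\pi }{(33-2n_f)\,\mathrm{ln}(Q^2/\mathrm{\Lambda }^2)},$$

where $`Q`$ is the typical momentum transfer, $`n_f`$ is the number of active quark flavors, and $`\mathrm{\Lambda }\approx 200`$ MeV is the QCD scale. As $`Q`$ grows, $`\alpha _s`$ slowly decreases and perturbation theory becomes reliable.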
The nuclear collisions at RHIC or LHC are expected to be so violent that the quarks and gluons comprising the nucleons will be easily deconfined, for some time of course. At such huge energies the nucleon is visualized as a cloud of partons which breaks up into parton showers as a result of a collision with another nucleon. Thus, the creation of the quark-gluon plasma seems to be unavoidable at RHIC and LHC. The method of its detection, however, remains to a large extent an open question.
I am very grateful to Marek Gaździcki for critical reading of the manuscript.
Figure Captions
Fig. 1. The electric field lines in the vacuum (a) and in a dielectric medium (b).
Fig. 2. The confinement.
Fig. 3. Two technologies to produce the dense hadronic matter: compression (a) and heating (b).
Fig. 4. The phase diagram of the strongly interacting matter.
Fig. 5. The geometry of the nucleus-nucleus collision at high energy.
# Young Low-Mass Stars and Brown Dwarfs in IC 348
## 1 Introduction
Brown dwarfs have been discovered over a wide range of ages. Examples of evolved field brown dwarfs ($`1`$ Gyr) include the companion GL 229B (Nakajima et al. 1995; Oppenheimer et al. 1995) and the free floating Kelu 1 (Ruiz, Leggett, & Allard 1997), in addition to substellar objects identified through the near-infrared (IR) surveys of 2MASS (Kirkpatrick et al. 1999) and DENIS (Martín et al. 1997; Tinney, Delfosse, & Forveille 1997; Delfosse et al. 1997). Warmer, more luminous brown dwarfs have been found at younger ages, such as the field companion G 196-3B ($`300`$ Myr; Rebolo et al. 1998) and objects in the Pleiades (125 Myr) (Stauffer, Hamilton, & Probst 1994; Martín et al. 1998, references therein) and in the youngest clusters ($`10`$ Myr) in Orion (Hillenbrand 1997), $`\sigma `$ Orionis (Béjar, Zapatero Osorio, & Rebolo 1999), Taurus (Briceño et al. 1998), $`\rho `$ Oph (Luhman, Liebert, & Rieke 1997, hereafter LLR; Wilking, Greene, & Meyer 1999), IC 348 (Luhman et al. 1998b, hereafter LRLL), Chamaeleon I (Comerón, Rieke, & Neuhäuser 1999), and TW Hydrae (Lowrance et al. 1999). In a sample of candidate low-mass members of the Hyades, Reid & Hawley (1999) have also serendipitously discovered five likely pre-main-sequence (PMS) brown dwarfs possibly behind the Hyades and associated with the Taurus-Auriga star forming region. Given the relatively small numbers of these objects, it is not clear whether they are representative of brown dwarfs at a given age, mass, and environment. Larger samples of brown dwarfs are critical for understanding the formation and evolution of substellar objects.
In this study, I provide a more complete picture of brown dwarfs and low-mass stars at their earliest stages of evolution ($`<10`$ Myr). Work along these lines began with observations of V410 X-ray 3 in the L1495E region of Taurus (Luhman et al. 1998a), which is one of the first M6 objects (0.06-0.1 $`M_{}`$) discovered in a star forming region (Strom & Strom 1994). The colors from $`V`$ through $`L\mathrm{}`$ were approximately dwarf-like with no significant near-IR excess emission. The IR and optical spectra exhibited features indicative of both dwarfs and giants, a behavior that has been seen in the other young cool objects discovered subsequently (e.g., LRLL). Given its youth (1-10 Myr), proximity (300 pc), and rich, compact nature (300 stars, $`D20\mathrm{}`$), IC 348 is an excellent site for expanding this work to larger numbers and later types. Recent studies of IC 348 include IR imaging (Lada & Lada 1995), optical photometry (Trullols & Jordi 1997), H$`\alpha `$ measurements and optical images and spectroscopy (Herbig 1998), and deeper IR photometry and optical and IR spectroscopy concentrated towards the $`5\mathrm{}\times 5\mathrm{}`$ cluster core (LRLL).
I have begun to increase both the area and depth of the photometry and spectroscopy, with the goal of systematically identifying and characterizing the stellar and substellar populations within the entire cluster of IC 348. When this survey is completed, the analysis of LRLL concerning the initial mass function (IMF), star formation history, and disk properties will be updated with better number statistics and completeness to lower substellar masses. However, with data recently collected, the known low-mass population has grown enough that the typical characteristics of young low-mass stars and brown dwarfs can be investigated.
I will describe new optical imaging and spectroscopy of low-mass candidates, discuss the optical classification of young late-type objects in comparison to field dwarfs and giants, and present spectral types for the coolest sources (M4-M8.25). I will then examine the behavior of the $`JHK`$ colors relative to dwarfs and to warmer, more massive ($``$M0, 0.5-1 $`M_{}`$) classical T Tauri stars (CTTS). After estimating reddenings, effective temperatures, and bolometric luminosities and constructing a Hertzsprung-Russell (H-R) diagram, the locus of objects in IC 348 and the presumably coeval components of the multiple system GG Tau (White et al. 1999, hereafter W99) will be used as empirical isochrones to test the theoretical evolutionary models and temperature scales. With these results, I discuss the likely masses and ages of the objects observed in IC 348 and suggest that the hydrogen burning limit at young ages occurs near M6.
## 2 Observations
Optical images of IC 348 were obtained with the four shooter camera at the Fred Lawrence Whipple Observatory 1.2 m telescope on 1998 September 23 under photometric conditions. The instrument contained four $`2048\times 2048`$ CCDs at a plate scale of $`0\stackrel{}{\mathrm{.}}33`$ pixel<sup>-1</sup>, with the detectors separated by $`1\mathrm{}`$ and arranged in a $`2\times 2`$ grid. Four positions were observed towards the center of IC 348 in dithers of a few arcminutes, covering a total area of $`25\mathrm{}\times 25\mathrm{}`$. At each position, images were obtained at $`R`$ and $`I`$ with exposure times of 20 min. The images were bias subtracted, divided by dome flats, registered, and combined into one image at each band. Image coordinates and photometry were extracted with DAOFIND and PHOT under the IRAF package APPHOT. The plate solution was derived from coordinates of all sources observed by Lada & Lada (1995) that appeared in the optical images and were not saturated. Saturation occurred near $`I=16.5`$ and the average completeness limits were $`R23`$ and $`I21`$, with brighter limits towards the nebulosity in the cluster center. For instance, with the enhanced background, source 613 has a limit of $`R>21.5`$. Measurements were also hampered near the bright B stars BD+$`31\mathrm{°}`$643 and $`o`$ Per. A few sources detected in the IR such as 611 fell within diffraction spikes of brighter stars and could not be measured.
The $`R`$-band filter in the four shooter camera has the same shape as Cousins $`R`$ but with a full width at half maximum that is 150 Å smaller. The $`I`$-band sensitivity of the four shooter is broader in wavelength than Cousins $`I`$, with a long tail to the red. Whereas Cousins $`I`$ falls from 90 to 5% of the peak sensitivity from 8500 to 9000 Å, the four shooter maintains 80 and 60% at these wavelengths, with 35 and 15% at 9500 Å and 1 µm. Standard stars calibrated in Cousins $`R`$ and $`I`$ by Landolt (1992) were observed at colors of $`RI<1.2`$ and $`RI=2.7`$. Alternately, the transformation from the instrumental system to Cousins was modeled by convolving spectra of M dwarfs at various types with the instrument sensitivities – detector quantum efficiencies and filter transmissions – for the four shooter and the Cousins system (Landolt 1992). The color transformation derived in this modeling agreed with the one measured from the photometry of the standards. I then modeled the effect of reddening on the uniqueness of the color correction, i.e. do a reddened M4 star and an unreddened M7 star of the same instrumental $`RI`$ have the same color in the Cousins system? The results of the modeling indicate that the reddened mid-M star can have a Cousins color that is redder than the late-type star by $`0.2`$ mag, and thus the color transformation does depend on spectral type and reddening. Transformations were derived for $`A_V=0`$ and 3 and the average of the two was applied to the data set for IC 348. This photometry is used in Figure 1. A more precise color correction can be computed at a particular instrumental color if either the spectral type or the reddening is known. For the spectroscopic sample discussed in this work, the spectral types were incorporated into the modeling to produce a second set of color corrections. The resulting photometry is provided in Table 1. The measurements of $`RI`$ of Herbig (1998) agree with the data presented here for $`RI<1.6`$, but the colors of Herbig (1998) become systematically redder by 0.3-0.5 mag at $`RI>1.6`$. On the other hand, as discussed by LRLL, at $`RI>1.6`$ the colors reported by Trullols & Jordi (1997) become progressively bluer than those of Herbig (1998) by 0.3-1.5 mag, and hence bluer than the colors presented here by up to a magnitude. The colors implied by the spectra of the reddest sources (e.g., object 405) are more consistent with the photometry reported in this work.
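The color transformation just described amounts to synthetic photometry: convolving a spectrum with each system's total response and differencing the resulting magnitudes. A minimal sketch of that procedure follows; the function and variable names are hypothetical, and the response arrays would come from the actual filter transmission and detector quantum-efficiency curves.

```python
import numpy as np

def synth_mag(wave, flux, response):
    """Synthetic magnitude (arbitrary zero point) of a spectrum through a
    total system response (filter transmission x detector QE); all three
    arrays are sampled on the same wavelength grid."""
    signal = np.trapz(flux * response * wave, wave)  # photon-counting weighting
    norm = np.trapz(response * wave, wave)
    return -2.5 * np.log10(signal / norm)

def ri_color_correction(wave, flux, R_inst, I_inst, R_cousins, I_cousins):
    """Offset that places an instrumental R-I color on the Cousins system
    for the given input spectrum (e.g., an M dwarf, reddened beforehand
    to explore the spectral-type and extinction dependence)."""
    ri_inst = synth_mag(wave, flux, R_inst) - synth_mag(wave, flux, I_inst)
    ri_cous = synth_mag(wave, flux, R_cousins) - synth_mag(wave, flux, I_cousins)
    return ri_cous - ri_inst
```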
Photometry and coordinates for cluster members with spectral types of M4 or later are listed in Table 1. Optical and IR measurements of Herbig (1998), Lada & Lada (1995), and LRLL are included as well. The IR photometry of LRLL for sources not found in Table 1 can be obtained by contacting the author. Although the IR colors agree for sources in common between the latter studies, there is an unexplained offset of 0.2 mag in all three bands, where the photometry of LRLL is fainter than that of Lada & Lada (1995). For purposes of this work, agreement is obtained by arbitrarily subtracting 0.2 mag from the measurements of LRLL. Because the completeness limit in the Lada & Lada (1995) survey is near $`K=14`$, the uncertainties are large ($`\pm 0.2`$ mag) for the faintest sources in Table 1. The photometry of LRLL is deeper ($`K16.5`$) and should be fairly accurate ($`\pm 0.05`$ mag).
Spectra were obtained for low-mass candidates in IC 348 during a few hours of service observations with the Keck II low-resolution imaging spectrometer (LRIS; Oke et al. 1995) on 1998 August 7. The multi-slit mode of LRIS was used with the 150 l mm<sup>-1</sup> grating ($`\lambda _{\mathrm{blaze}}=7500`$ Å) and GG495 order-blocking filter. The maximum wavelength coverage of LRIS, 3800 to 11000 Å, was provided in one grating setting centered near 7500 Å. One slit mask covered a field of view of $`5\mathrm{}\times 7\mathrm{}`$ and used slitlets $`8\mathrm{}`$ in length and $`1\stackrel{}{\mathrm{.}}2`$ in width. This configuration produced a spectral resolution of 20 Å. Two exposures of 25 min were obtained with a single slit mask. The spectrophotometric standard star was Feige 11, observed through a $`1\stackrel{}{\mathrm{.}}0`$ slit. After subtracting the bias from the frames and flat-fielding with internal continuum lamps, the spectra were extracted and calibrated in wavelength with the Ne and Ar lamp spectra. The data were then corrected for the sensitivity function measured from Feige 11. These observations provided spectra for the low-mass cluster members 405, 611, and 613, in addition to several background stars.
The remainder of the data were collected at the Kitt Peak Mayall 4 meter telescope. Object 407 was observed with the 4 meter Cryogenic Camera (CryoCam) on 1998 December 22, while another $`70`$ sources were observed with the 4 meter RC Spectrograph (RCSP) on 1998 December 23 and 26. Under good weather conditions, I obtained a spectrum of 407 ($`I=19.5`$) with the 300 l mm<sup>-1</sup> grism ($`\lambda _{\mathrm{blaze}}=8010`$ Å), OG550 order-blocking filter, and $`1\stackrel{}{\mathrm{.}}7`$ slit. The spectral resolution was 12 Å and the exposure time was 40 min. Additional measurements were made for the spectrophotometric standard Hiltner 102 and the spectral type standards LHS 2065 (M9V) and LHS 2243 (M8V). During the observations with RCSP, I used the multi-slit mode to obtain spectra of 8-10 objects in each of several pointings. The slit masks projected a circular field of view that was $`5\mathrm{}`$ in diameter. The slitlets were $`2\mathrm{}`$ in width and the spectral resolution was 14 Å. Spectra were obtained with the 158 l mm<sup>-1</sup> grating ($`\lambda _{\mathrm{blaze}}=7000`$ Å) and the OG570 order-blocking filter. The data from both RCSP and CryoCam provided wavelength coverages similar to that of the LRIS spectra. Because of moderate cirrus during most of the RCSP run, I observed with several slit masks designed for brighter stars ($`I<16`$), in addition to two masks that targeted faint, low-mass candidates ($`I=17`$-19.5). Exposure times ranged between 30 and 45 min. Hiltner 102 and two young late-type sources in Taurus, V410 X-ray 5a and V410 X-ray 6, were observed through a long slit. For each mask and long slit, exposures were taken with quartz continuum and He-Ar-Ne lamps. The data reduction procedures were the same as for the LRIS data.
## 3 Discussion
### 3.1 Identification of Low-Mass Candidates and Determination of Cluster Membership
A color-magnitude diagram with the new $`RI`$ photometry for all of IC 348 ($`25\mathrm{}\times 25\mathrm{}`$) is shown in Figure 1. Saturation occurred at $`I16.5`$ and data of Herbig (1998) were used for brighter stars within his $`7\mathrm{}\times 14\mathrm{}`$ survey area. At lower masses and cooler temperatures, cluster members should become redder and fainter. The colors of M dwarfs eventually saturate near $`RI=2.4`$ for the latest types. In selecting low-mass candidates to observe through spectroscopy, the highest priority was given to targets near the cluster core to reduce field star contamination and extend the depth of the previous study of LRLL. The sources observed spectroscopically by LRLL outlined a locus of likely cluster members clearly separated from most of the background stars. In addition, IR photometry from LRLL and Lada & Lada (1995) was combined with the optical data to help identify low-mass cluster members. Since most stars have similar near-IR colors, $`JH`$ and $`HK`$ provide rough estimates of extinction, thus distinguishing cool, low-luminosity brown dwarfs from reddened, luminous early-type stars (either cluster members or background) that appear in the optical color-magnitude diagram. Unfortunately, at the faintest limits ($`I>19`$) where the most background contamination occurs, the available $`JHK`$ data is sufficiently deep only in the cluster core. Actively accreting stars often have strong ultra-violet and optical excess emission that reduce the apparent $`RI`$ color while leaving $`I`$ relatively unchanged. Such a star could therefore be mistakenly rejected as a background object in the selection of candidates.
Field stars are easily distinguished from low-mass cluster members with the spectroscopic data. Unlike young cool stars, foreground M dwarfs have strong absorption in Na and K (see Figs. 2-5), no signs of reddening in the spectra or colors, no IR excess emission, and little H$`\alpha `$ emission ($`<15`$ Å). Only a few foreground M stars are expected towards the relatively small area covered by IC 348 (see Herbig 1998). Most background stars are rejected by their positions in the optical color-magnitude diagram. Ones that fall within the locus of cluster members are background giants, which exhibit spectra that differ greatly from late M cluster members.
The combined new and published spectroscopic samples are shown in the top panel of Figure 1. Reddened low-mass stars belonging to the cluster are scattered among the likely brown dwarfs ($``$M6). Because of the compact nature of the cluster, there is very little contamination from foreground or background stars within the locus of cluster members. As demonstrated in the lower panel of Figure 1, there are many likely low-mass cluster members that remain to be observed spectroscopically.
### 3.2 Spectral Types
#### 3.2.1 Method of Classification
For the classification of sources in the new spectroscopic sample, spectra of standard dwarfs and giants are taken from Kirkpatrick, Henry, & McCarthy (1991), Henry, Kirkpatrick, & Simons (1994), and Kirkpatrick, Henry, & Irwin (1997). The spectral types are represented by averages of one or more stars. For M4V-M6V, they consist of: M4V=GL 213, GL 275.2A, and GL 402; M4.5V=GL 234AB and GL 268; M5V=GL 51 and GL 866AB; M5.5V=GL 65A and G 208-44AB; M6V=GL 406. The stars that are used for M7V-M9V and M5III-M9III are listed in Luhman et al. (1998a) and LLR. There are some differences in the band strengths among the standards of a given spectral type, particularly at M8 and M9 where the VO changes very rapidly. These fluctuations correspond to $`0.25`$ subclass. Hence, the exact spectral type depends slightly on the choice of standards.
As illustrated in Figs. 2-5, TiO and VO give rise to several distinctive absorption features that are the primary indicators of spectral type. The spectrum of each young object was compared to the dwarfs, giants, and averages of the two (each normalized at 7500 Å) at various spectral types and reddenings until a best match was achieved. After this initial classification of the sample, all spectra at the same spectral type were dereddened to match the least reddened spectrum. These data were compared closely and minor adjustments ($`0.25`$ subclass) were made in the classifications, until a point was reached where objects of a given spectral type were identical within the noise. Because of the rapid change in optical features with lower temperatures, the relative classifications are quite precise. Typical uncertainties are $`\pm 0.25`$ subclass for objects with one spectral type listed in Table 1. A larger range of possible spectral types is given for some sources. Even for data of very low signal-to-noise, the uncertainties in spectral types were no more than $`\pm 1`$ subclass. Spectra previously classified in LRLL have been included with the new sample when deriving spectral types, some of which have been revised slightly from those reported by LRLL. Table 1 lists all known objects in IC 348 with classifications of M4 or later. New spectral types for earlier sources will be presented in a future study. The IR types from LRLL are given in Table 1 as well, in addition to optical types of Herbig (1998) for the two sources that were not in my optical sample. A spectral type of M5.5 was measured for the two sources in Taurus, V410 X-ray 5a and V410 X-ray 6, compared to previous optical classifications of M5 for both objects (Strom & Strom 1994) and M5.5 for 5a (Briceño et al. 1998).
#### 3.2.2 Comparison to Dwarfs and Giants
While the relative spectral types of the sample in IC 348 are precise, the absolute classifications require further attention. The late-M spectroscopic standards defined to date consist of field dwarfs and giants. The optical spectra of these standards differ between luminosity classes for a given spectral type. Hence, classifications of young M objects can depend on the choice of dwarfs or giants as the standards and on the wavelength range considered. In previous studies, I have used spectra of field dwarfs and giants and averages of the two in the classification of a small number of young late-type objects, finding that the averages produce the best match. The new observations in IC 348 combined with the studies of LRLL and LLR provide spectra for a large population of young cool sources, including six objects at M8-M8.5. Because the young cool sources in this study form well-defined spectral classes, I can select objects with low reddening that are representative of each spectral class. After estimating the extinctions for these objects, the spectra are then dereddened to their intrinsic form. These data will be compared to the spectra of standard dwarfs and giants and their averages at spectral types of M5 through M8.5, thus revealing the spectral behavior of cool PMS objects and indicating the most suitable choice of standards for their classification. Similar discussions pertaining to $`K`$-band spectral features are found in LR98, LRLL, and Luhman & Rieke (1999).
The spectrum of source 277 exhibits the least extinction of the M5 objects and thus will be used to represent this spectral type upon dereddening. The two bluest spectra among slightly earlier and later spectral types are those of 266 and 163, respectively, which have similar slopes and are bluer than 277 by $`A_V1`$. It is likely that 266 and 163 have little extinction since they show the least reddened spectra out of 26 objects from M4.75-M5.25. An extinction of $`A_V>1`$ would imply that their spectra are intrinsically bluer than those of both dwarfs and giants and the reddenings derived from $`JH`$ for 163, 266, and 277 in § 3.3 are low ($`A_V0.5\pm 1`$). An extinction of $`A_V=1`$ is therefore adopted for 277 and used in estimating the intrinsic spectrum. In Figure 2, the dereddened spectrum of 277 is shown with the M5 standards. To facilitate the comparison of individual features between the young star and the standards, the reddenings of the standards have been adjusted to match their 7000-8500 Å slopes to the spectrum of 277. The slope of 277 agrees well with an average of M5 dwarf and giant spectra without any such adjustment. Thus, young M5 objects appear to be redder than giants and bluer than dwarfs. The TiO strength from 7100 to 7300 Å in 277 is the same as in M5V, while weaker than in M5III, so that the average of the two luminosity classes has TiO that is slightly stronger than in 277. This young object shows very weak absorption in K and Na, similar to the giant. Although K and Na are not useful in measuring spectral types, they do clearly indicate PMS nature and cluster membership. Absorption in VO at 7900 and 8500 Å is not sensitive to gravity and is unchanged among the dwarf, giant, and young star.
In a study of the relatively unreddened V410 X-ray 3 ($`A_V0.6`$), Luhman et al. (1998a) found that the spectral slope from 6500 to 9000 Å of this young M6 object was reproduced well by M6III, while bluer than M6V. In a different comparison in Figure 3, the spectral slopes of the standard stars and V410 X-ray 3 have been aligned through dereddening, as with source 277. The TiO absorption at 7100 to 7300 Å in the giant is slightly stronger than in the young object. The average of the dwarf and giant matches the CaH, TiO, and VO between 6900 and 7500 Å quite well, much better than either a dwarf or a giant alone. The K and Na absorption and overall slope resemble a giant more than a dwarf. The VO at 7900 and 8500 Å in V410 X-ray 3 is matched by both luminosity classes.
In the sample for IC 348, there are no spectra showing low reddening and high signal-to-noise near a spectral type of M7. W99 have recently obtained high-quality data for the relatively unreddened binary components GG Tau Ba and Bb. They reported spectral types of M5$`\pm 0.5`$ and M7$`\pm 0.5`$, respectively, where VY Peg (M7III) was used as the standard in classifying Bb. From the optical spectra of GG Tau Ba and Bb kindly provided by R. White, I have measured spectral types that fall within the uncertainties quoted by W99. However, I find that more accurate and precise spectral types of M5.5$`\pm 0.25`$ and M7.5$`\pm 0.25`$ can be achieved from the data. As seen in Figure 3 in W99, the VO absorption at 7900 and 8500 Å is stronger in Bb than in either M7V or M7III. A better match to these features and the remainder of the spectrum is provided by an average of M7 and M8 dwarfs and giants.
The spectrum of source 405 is shown in Figure 4 with the M8 dwarf and giant standards. By applying a very small amount of reddening ($`A_V=0.25`$) to the standard spectra, an optimum match is obtained to the slope of 405 shortward of 8500 Å. Late M giants become progressively redder than dwarfs beyond 8500 Å and 405 is intermediate between the two. As demonstrated in Figure 4, the average of M8V and M8III clearly produces the best agreement with the spectrum of 405, particularly across the structure between 7900 and 8500 Å.
In a comparison of the spectrum of a young brown dwarf in $`\rho `$ Oph (162349.8-242601, GY141) to averages of dwarf and giant spectra at M7, M8, and M9, LLR found that this object was intermediate between M8 and M9 and assigned a spectral type of M8.5. For a fixed M8.5 spectral class, the spectrum of this object is now compared to data representing a dwarf, a giant, and an average of the two in Figure 5. GY141 has strong VO at 8500 Å and a red spectral slope that resembles a giant, while the CaH, TiO, and VO shortward of 8500 Å are reproduced well by the dwarf/giant average. The spectrum of a young M8.5 object in $`\sigma `$ Ori is very similar to that of GY141 (Béjar et al. 1999), suggesting that GY141 may be representative of this PMS spectral type.
Absorption in VO at 7900 and 8500 Å is a very good indicator of spectral type in young late-type objects since it is strong, easy to detect in faint sources, and does not depend on surface gravity. Most of the other spectral features have strengths intermediate between dwarf and giant values. Indeed, straight averages of field dwarf and giant spectra reproduce the data for cool PMS sources remarkably well. Except for differences in reddening, the examples described here are generally representative of young ($`10`$ Myr) cool objects in IC 348 and in other regions. Thus, I advocate the use of averages of dwarf and giant standards for classification of young objects at M5 and later, with the caveats for individual features discussed in this section. This choice of standards is also practical since the majority of young late-type objects discovered to date have been classified in this manner (this work, LRLL, LLR). Young sources with little reddening ($`A_V<1`$) and precise classifications should be used as supplemental standards.
### 3.3 Colors and Extinctions
The typical behavior of the optical and IR colors of young late-type sources is investigated in the data for IC 348, followed by a determination of the best method of deriving extinctions for such a population. Standard dwarf colors are taken from the compilation of Kenyon & Hartmann (1995) for types earlier than M0 and from the young disk populations described by Leggett (1992) for types of M0 and later. The IR colors are placed on the CIT system with relations between the Johnson-Glass and CIT systems found in Bessell & Brett (1988). Reddenings are calculated with the extinction law of Rieke & Lebofsky (1985).
The low-mass sources in IC 348 and standard dwarfs and giants are shown in a diagram of $`HK`$ versus $`JH`$ in Figure 6. With later M types, $`JH`$ colors for dwarfs remain near 0.6-0.7 while $`HK`$ colors increase from 0.15 to 0.5. Giants depart from dwarf colors near $`JH=0.6`$ and $`HK=0.1`$ and approach colors of 1 and 0.3 at the latest types (Bessell & Brett 1988). For each of the three ranges of spectral types in Figure 6, a reddening vector originates at the corresponding dwarf colors. The young systems in IC 348 move progressively to redder $`HK`$ with later types, closely following the dwarf behavior. The $`JH`$ colors in the least reddened objects are similar to dwarfs and bluer than giants, indicating that both IR colors of the central stars are dwarf-like.
As with higher mass CTTS, some of the low-mass objects in IC 348 show emission in excess of reddened dwarf colors. Meyer, Calvet, & Hillenbrand (1997) measured a locus of dereddened IR colors for classical T Tauri stars near M0 and modeled it in terms of star-disk systems, where the origin of this locus coincides with the unreddened colors of an M0 dwarf. At later M types, they predict that the origin will continue to follow the dwarf colors while maintaining a fairly constant slope. To test this suggestion, a fit to the CTTS locus of Meyer et al. (1997) is given in Figure 6 with the origin adjusted for each of the three spectral types. As discussed shortly, it may not be possible to measure accurate reddenings from optical colors of these young cool objects. Hence, it is difficult to deredden the IR colors for comparison to the CTTS locus. The large photometric uncertainties and the small number of sources with significant $`HK`$ excesses provide further obstacles in determining the intrinsic colors of low-mass CTTS. However, it is apparent that at the latest types the average observed colors fall below the CTTS locus, a trend that would become more pronounced if the colors were dereddened. In other words, these systems seem to have significant color excesses at $`HK`$ but not $`JH`$. Relative to CTTS at higher masses, this behavior would suggest cooler emitting regions with respect to the central objects, possibly due to cooler disks, larger inner holes, or contributions from material in infalling envelopes. The latter emission source is particularly likely in object 407, which has a higher $`HK`$ excess than can be easily explained by star-disk systems (Meyer, Calvet, & Hillenbrand 1997). As seen in Figure 6, the systems with higher H$`\alpha `$ emission tend to show larger excesses at $`HK`$, which is expected for more actively accreting disks. It also appears that on average the latest types show the largest excesses. However, the uncertainties in the colors are largest for the faintest, coolest objects. For instance, the anomalous colors of 432 ($`JH=0.9`$, $`HK=0.04`$) reported by Lada & Lada (1995) are likely due to the uncertain $`K`$-band measurement ($`\pm 0.39`$ mag). A more conclusive and detailed analysis of these various issues requires more accurate $`JHK`$ photometry and data at longer wavelengths.
As demonstrated by the $`JH`$ color excesses in Figure 6 ($`A_J=2.63E(JH)`$), the sources at M4-M5 exhibit $`A_J=0`$-2 while the later types have $`A_J1`$, which is a reflection of the completeness limit of the spectroscopy and selection against reddened late-type objects. The M4-M5 stars with $`A_J<1`$ are saturated in the new $`RI`$ photometry. To examine $`RI`$ and $`IJ`$ for the remaining sources, the colors have been dereddened with the extinctions derived from $`JH`$ assuming intrinsic colors of dwarfs. The dereddened $`RI`$ colors are bluer than dwarf values by 0.2-0.6 mag for objects at M4-M5, which is a tendency towards giant-like colors. At later types, the young sources have colors similar to dwarfs and giants, which have comparable Cousins $`RI`$ colors for M6 and later. (It is redward of Cousins $`I`$ where late M giants become redder than dwarfs.) This behavior is consistent with the spectra, where the slopes across the Cousins $`R`$ and $`I`$ bands are bluer than in dwarfs for M4-M6 and similar to both dwarfs and giants at later types. Because the cooler objects have low extinctions and the non-saturated M4-M5 stars are reddened, the results of this comparison are susceptible to systematic effects from the color correction. However, careful modeling of the color transformation as a function of spectral type and reddening (§ 2) indicates that the optical colors are fairly accurate ($`\pm 0.1`$) and the blue dereddened colors at M4-M5 should be a real effect. Comparing the optical and IR data, the dereddened $`IJ`$ colors are redder than dwarf values by 0, 0.3, and 0.6 mag at M4-M5, $`>`$M5-M6, and $`>`$M6. As discussed in § 2, the IR photometry of LRLL was offset by $`0.2`$ mag to agree with the data of Lada & Lada (1995). If the reverse adjustment is made, then these color differences in $`IJ`$ are reduced by 0.2 mag. A noticeable departure from dwarf colors would remain at the latest types, with $`IJ`$ intermediate between that of dwarfs and giants. The spectra of the latest objects rise rapidly beyond the $`I`$-band in a similar fashion as giants, confirming the behavior of $`IJ`$. Color anomalies from $`B`$ through $`J`$ have also been observed in earlier type (K7-M1) PMS stars by Gullbring et al. (1998). Because dwarf and giant colors are similar in this spectral type regime, they suggest that cool companions and star spots are instead the probable causes.
To derive extinctions for stars earlier than mid-M, LRLL used $`E(RI)`$ and assumed intrinsic colors of dwarfs, with the constraint that the extinction could not produce a dereddened $`JH`$ much bluer than in dwarfs. When $`R`$ and $`I`$ were not available, the stars were dereddened to the dwarf colors of $`JH`$. For the three latest sources in LRLL, the extinctions were estimated from $`IK`$ by assuming dwarf colors since this was the most accurate color for these faint objects. The new results in this study indicate that the intrinsic near-IR colors of young late-type objects are probably dwarf-like while $`I`$ relative to $`J`$, $`H`$, and $`K`$ appears to be redder than in dwarfs. The $`RI`$ colors of young objects also seem to deviate from those of dwarfs, making it difficult to use $`R`$ or $`I`$ in measuring extinctions. Possible systematic effects in the optical color transformation for reddened late-type stars are an added concern. Unlike optical data, near-IR photometry is not susceptible to significant color corrections and the colors are consistently dwarf-like, thus extinctions are derived in this work by dereddening $`JH`$ to dwarf colors. Unfortunately, much of the near-IR data for IC 348 suffers from large photometric errors at the faintest objects. The more accurate IR data from LRLL is used when possible and the resulting extinctions are given in Table 1, which are generally consistent with the reddenings implied by the spectra. Since most sources exhibit $`HK`$ excesses of $`<0.2`$ mag, the contamination of the $`JH`$ color by excess emission should be negligible. Object 407 has a large $`HK`$ excess, thus an extinction cannot be confidently derived with $`JH`$. Cousins $`RI`$ colors are similar between dwarfs and giants at the latest M types, hence this color is used in estimating a reddening of $`A_J1`$, which is consistent with the appearance of the spectrum. Since LRLL could not measure $`H`$-band data for source 413, the $`JK`$ color was used to measure the extinction.
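In practice the extinction estimate reduces to one line: the J-H excess over the intrinsic dwarf color at the object's spectral type, scaled by the extinction law. A minimal sketch follows; the intrinsic-color table is abbreviated and its entries are illustrative (the paper takes them from Leggett 1992 on the CIT system), while the factor 2.63 follows from the Rieke & Lebofsky (1985) law.

```python
# Illustrative intrinsic dwarf J-H colors by spectral type (values are
# approximate; the paper adopts the Leggett 1992 young-disk colors).
JH_DWARF = {"M4": 0.60, "M5": 0.62, "M6": 0.64, "M7": 0.66, "M8": 0.68}

def extinction_AJ(J, H, sptype):
    """A_J from the J-H color excess, with A_J = 2.63 E(J-H) following the
    Rieke & Lebofsky (1985) extinction law used in the paper."""
    E_JH = (J - H) - JH_DWARF[sptype]
    return 2.63 * E_JH

# Example: an M6 object with observed J-H = 1.02 has E(J-H) = 0.38,
# hence A_J ~ 1.0 (roughly A_V ~ 3.5 for the same law).
print(extinction_AJ(J=14.50, H=13.48, sptype="M6"))
```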
### 3.4 Bolometric Luminosities and Effective Temperatures
To minimize contamination from UV and IR excess emission, the $`I`$ and $`J`$ bands are generally preferred for estimating the bolometric luminosities of young stars. Both the photometry and the spectra indicate that $`IJ`$ may be redder in young cool objects than in dwarfs. Because $`JH`$ and $`JK`$ are dwarf-like, $`J`$ is likely a better choice than $`I`$ for measuring luminosities. The dwarf-like near-IR colors and the fact that most of the luminosity is released in $`J`$ through $`K`$ suggest that the bolometric corrections for dwarfs may be satisfactory approximations for these young sources. Hence, the luminosities reported in Table 1 have been calculated by applying the dwarf bolometric corrections to the dereddened $`J`$-band data and assuming a distance modulus of 7.5. Bolometric corrections are taken from Kenyon & Hartmann (1995) for $`<`$M6 and compiled from Bessell (1991), Monet et al. (1992), Tinney, Mould, & Reid (1993), and Leggett et al. (1996) for $``$M6. Given the uncertainties in the photometry, reddenings, bolometric corrections, and distance, the typical errors in the bolometric luminosities are $`\pm 0.08`$ to 0.13 in log $`L_{\mathrm{bol}}`$ from early K to the latest M types.
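The luminosity calculation itself is compact: deredden J, convert to an absolute magnitude with the adopted distance modulus of 7.5, apply the dwarf bolometric correction, and compare with the Sun. In the sketch below the example BC_J value and the solar bolometric magnitude are illustrative inputs, not numbers from the paper.

```python
def log_Lbol(J, A_J, BC_J, dist_mod=7.5, Mbol_sun=4.75):
    """log(L/Lsun) from dereddened J-band photometry and a dwarf
    J-band bolometric correction, for a given distance modulus."""
    M_J = J - A_J - dist_mod   # absolute J magnitude
    M_bol = M_J + BC_J         # bolometric magnitude
    return (Mbol_sun - M_bol) / 2.5

# Example: J = 14.5, A_J = 1.0, and an assumed BC_J = 2.0 give log L ~ -1.3
print(log_Lbol(14.5, 1.0, 2.0))
```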
Theoretical mass tracks at low masses are mostly vertical in the H-R diagram, thus the conversion of spectral types to effective temperatures directly influences the mass estimates for young late-type objects. LR98 discussed the differences between the available temperature scales for M dwarfs, finding that the comparison of synthetic spectra to observed data by Leggett et al. (1996) produced a scale in good agreement with measurements for the eclipsing spectroscopic binaries YY Gem and CM Dra. Because the latest type in the study of Leggett et al. (1996) was M6.5, LR98 extrapolated the scale to later types assuming the same offset between the M6-M9 subclasses as found in a previous generation of spectral modeling by Kirkpatrick et al. (1993). This version of the scale of Leggett et al. (1996), given in Table 2, is consistent with newer modeling of colors at the latest M types by Leggett, Allard, & Hauschildt (1998). Since PMS stars are intermediate in surface gravity between dwarfs and giants, it is useful to consider the temperature scale for giants as well. The giant temperature scale in Table 2 is from the fit provided by van Belle et al. (1999) for types earlier than M7 and from Perrin et al. (1998) and Richichi et al. (1998) for M7 to M9. The temperatures of giants are warmer than those of dwarfs by 200-400 K, as illustrated in Figure 7.
In previous studies of young populations composed primarily of objects at mid-M and earlier, LR98, LRLL, Luhman & Rieke (1999) applied the dwarf temperature scale, while Luhman et al. (1998a) and LLR explored the effect of a warmer, more giant-like scale on mass estimates for two late M objects. For the late-type sources in IC 348 in Table 1, spectral types are converted to temperatures with the dwarf scale. In § 3.5, it will be determined if this scale or one intermediate between those of dwarfs and giants can produce agreement between the available theoretical isochrones and the empirical isochrones formed by the locus of stars in IC 348 and the components of GG Tau.
### 3.5 H-R Diagram
Using the temperatures and luminosities for objects in Table 1 ($``$M4) and in the study of LRLL ($`<`$M4), an H-R diagram is generated for the low-mass population in IC 348 and shown with three sets of theoretical evolutionary tracks in Figs. 8 and 9. The models include those by D’Antona & Mazzitelli (1997) (hereafter DM97), Burrows et al. (1997), and Baraffe et al. (1998) (hereafter B98) with a mixing length of $`\alpha =1.9`$. The models of Burrows et al. (1997) shown here are not the same as the older 1997 suite of Burrows presented in LLR, LRLL, and Luhman et al. (1998a). Burrows et al. (1997) calculated their own synthetic atmospheres, whereas the NextGen atmospheres of Allard were used in the previous generation of models. With evolutionary models, the data for a young population can be interpreted in terms of masses and ages and combined into a mass function and a star formation history. However, even the most recent models imply significantly different masses and ages for the same objects. In addition, the temperature scale for young cool stars may differ from the dwarf conversion. Therefore, meaningful estimates of masses and ages require observational constraints of the models and the temperature scale.
Luhman (1998) briefly reviewed some of the observational tests of the models at ages of $`0.1`$ Gyr, which included the independently measured masses, radii, and temperatures of CM Dra and YY Gem and the empirical isochrones observed in the Pleiades and in globular clusters. In a more detailed study, Stauffer, Hartmann, & Barrado (1995) compared each set of model isochrones to the Pleiades locus and determined whether the temperature scale could be adjusted to produce agreement. While the models at evolved ages are similar to each other and in reasonable agreement with observations, at ages of $`<10`$ Myr they differ greatly and lack definitive constraints (e.g., eclipsing binaries). As a crude test at young ages ($`1`$ Myr), Luhman (1998) calculated the IMF of L1495E with each set of tracks to search for anomalously large deviations from the field mass function.
#### 3.5.1 Coevality as a Test of the Models
Coevality can be utilized as a constraint of the models at the youngest ages in the same way that the Pleiades locus acts as a test near 100 Myr. Binary systems should be one source of coeval stars, as explored by Hartigan, Strom, & Strom (1994) and Prato (1998). More recently, W99 have measured spectral types and photometry for the members of the quadruple system GG Tau. An H-R diagram was constructed from this data, where the presumably coeval components should form an empirical isochrone extending across a large range of spectral types (K7-M7). W99 examined several sets of models and found that the calculations of B98 with a mixing length of $`\alpha =1.9`$ could be combined with a temperature scale intermediate between those of dwarfs and giants to produce coeval ages for the components of GG Tau. These models also implied masses of DM Tau and GG Tau Aa+Ab that were in rough agreement with dynamical estimates. Using revised spectral types for GG Tau Ba and Bb (§ 3.2.2) and the reddenings and $`J`$-band photometry measured by W99, the luminosities of GG Tau have been calculated in the same manner as for the sources in IC 348. The luminosities agree within the uncertainties with those of W99. The revised measurements for GG Tau and the new data for IC 348 are combined in the following analysis.
#### 3.5.2 GG Tau and IC 348 as Empirical Isochrones
A test of the model isochrones with data for GG Tau and IC 348 is based on the assumptions that the components of GG Tau are coeval and that luminosities are precise indicators of age. Considering the stellar densities in Taurus and separations within GG Tau, the members of this multiple system have probably formed through fragmentation rather than capture or disk instabilities, thus coevality is likely (Ghez, White, & Simon 1997). When the populations of young clusters are placed on H-R diagrams, the distribution of luminosities at a given temperature is generally interpreted in terms of an age spread. However, other factors in addition to age, such as binarity and star spots, may significantly influence the observed luminosities. If true, the measured luminosity of a particular star would only indicate a crude age, while the average distribution of stars in a cluster should still reflect the age of the cluster and act as an empirical isochrone. Thus, a comparison of GG Tau to the locus of stars in IC 348 can test both the coevality of the multiple system and the precision with which luminosities trace age. As illustrated in Figs. 8 and 9, the components of GG Tau form a line parallel to the population in IC 348. If the observed luminosities of young stars were dominated by factors other than age, the components of GG Tau should have luminosities randomly drawn from the range of values found in young clusters, and this is clearly not the case. Hence, it is indeed likely that the luminosities do reflect the age of GG Tau and that the components are coeval.
For each set of models, I will examine if the inferred age for GG Tau and average age and age spread for IC 348 are constant as a function of mass, whether a temperature scale intermediate between those of dwarfs and giants can improve agreement, and review other observational constraints discussed by LR98, Luhman (1998), and W99. The sample in IC 348 is representative down to M5-M6 and biased towards less reddened and more luminous objects at later types. Consequently, for stars earlier than M6, the average and upper and lower boundaries of the locus can be compared directly to the model isochrones. Later than M6, the upper envelope should be representative, and the average locus and lower envelope can be treated as upper limits.
#### 3.5.3 D’Antona & Mazzitelli 1997
The models of DM97 are fairly consistent with the stellar parameters derived for the eclipsing binaries CM Dra (0.22 $`M_{}`$) and YY Gem (0.6 $`M_{}`$). The predicted radii agree with the values measured for CM Dra while differing by two sigma from those of YY Gem (see Luhman 1998). The temperatures and luminosities are within two sigma and one sigma of the measurements for CM Dra and YY Gem, respectively. As seen in Figure 8, when the Pleiades brown dwarfs Teide 1 and Calar 3 are placed on the H-R diagram with the dwarf temperature scale, the age inferred from DM97 agrees with the age of the Pleiades (125 Myr; Stauffer, Schultz & Kirkpatrick 1998). The models are also consistent with the masses (65 $`M_J`$; Basri & Martín 1998) and ages of the binary components of PPL 15. An additional test of the models is provided by the dynamical mass measured for the system of GG Tau Aa and Ab through observations of the circumbinary disk ($`1.28\pm 0.08`$ $`M_{}`$; Dutrey, Guilloteau, & Simon 1994; Guilloteau, Dutrey, & Simon 1999). Similar estimates are available for GM Aur ($`0.84\pm 0.05`$ $`M_{}`$; Dutrey et al. 1998) and DM Tau ($`0.47\pm 0.06`$ $`M_{}`$; Guilloteau & Dutrey 1998). For DM97, W99 found that the inferred masses of GG Tau A (0.80 $`M_{}`$) and GM Aur (0.51 $`M_{}`$) are lower than the observed values, while the predicted mass of DM Tau (0.44 $`M_{}`$) is consistent with the data. However, because of the uncertainty in the inclination of GM Aur, Dutrey et al. (1998) cannot rule out a mass of 0.6 $`M_{}`$, which is close to the mass implied by DM97.
The H-R diagram in Figure 8 indicates that the average age and age spread for IC 348 with the models of DM97 are fairly constant as a function of mass, except at 0.05-0.15 $`M_{}`$ where the isochrones rise relative to the observed locus. The same trend is seen for GG Tau. The components Aa, Ab, and Bb are coeval on the model isochrones while Ba is older. The only change in the temperature scale that could produce agreement between the empirical and theoretical isochrones is adjusting spectral types of M4-M6 to cooler temperatures without changing the scale at other types. Such a conversion would be discontinuous and cooler than the scales of dwarfs and giants and therefore is not a reasonable option. The models of DM97 are most compatible with the dwarf temperature scale adopted in this work, supporting the accuracy of the IMFs derived in previous studies of LRLL and Luhman & Rieke (1999) that used this combination of tracks and temperature scale. However, DM97 may underestimate masses above 0.5 $`M_{}`$, possibly introducing a systematic error in the IMFs at intermediate masses.
#### 3.5.4 Burrows et al. 1997
Since the models of Burrows et al. (1997) include only masses of $`0.1`$ $`M_{}`$, they cannot be tested directly against the data for CM Dra and YY Gem or the dynamical mass estimates of GG Tau A, DM Tau, and GM Aur. Model X of Burrows et al. (1993) reproduces the radii of the CM Dra components but predicts temperatures and luminosities that fall outside of the uncertainties of the measurements by two and three sigma, respectively (Luhman 1998). At 0.1 $`M_{}`$, the radius, temperature, and luminosity at the main sequence calculated by Burrows et al. (1997) are very similar to the values predicted in Model X, thus the newer models probably do not agree with the data for CM Dra either. As with DM97, the calculations of Burrows et al. (1997) are consistent with the data for the Pleiades brown dwarfs. Given the limited range of masses calculated by Burrows et al. (1997), a comparison of the theoretical isochrones to the locus in IC 348 and GG Tau is not conclusive. The models imply that most of the objects in IC 348 have ages less than 1 Myr, much younger than expected from other evidence, such as the fraction of sources exhibiting IR excess emission (LRLL; Lada & Lada 1995). There is also a tendency towards younger ages at higher masses in Figure 8, and no temperature scale between dwarfs and giants removes this implied age gradient.
#### 3.5.5 Baraffe et al. 1998
The radius, temperature, and luminosity as a function of mass predicted by B98 for a main sequence star closely match the calculations of DM97 above 0.2 $`M_{}`$. Thus, the comparison of DM97 to the data for CM Dra and YY Gem applies to B98 as well. In the H-R diagram in Figure 9, the 100 Myr isochrone of B98 is within the uncertainties of data for the Pleiades brown dwarfs Teide 1 and Calar 3. However, if PPL 15 is an equal mass binary, then the system luminosity shown in the H-R diagram should be reduced by 0.3 dex to reflect the individual components. At this new position, PPL 15 falls below the 100 Myr isochrone and above the hydrogen burning limit, which is not consistent with the age of the Pleiades or the substellar mass of 65-80 $`M_J`$ implied by the binary data (Basri & Martín 1998) and Li measurements (Basri, Marcy, & Graham 1996).
The theoretical PMS evolution of stars at intermediate masses ($`>0.6`$ $`M_{}`$) is sensitive to the treatment of convection (Chabrier & Baraffe 1997). Before the study of B98 was published, models with a mixing length parameter of $`\alpha =1.0`$ were made available for use in the work of LR98, LLR, Luhman (1998), and Luhman et al. (1998a). B98 subsequently concluded that calculations with $`\alpha =1.9`$ reproduced the properties of the Sun. Compared to the observed masses of $`1.28\pm 0.08`$ $`M_{}`$, 0.6-0.9 $`M_{}`$, and $`0.47\pm 0.06`$ $`M_{}`$ for GG Tau A, GM Aur, and DM Tau, W99 derived B98 masses of 2.00, 1.06, and 0.67 $`M_{}`$ for $`\alpha =1.0`$ and 1.46, 0.78, and 0.64 $`M_{}`$ for $`\alpha =1.9`$. Not only do the models with $`\alpha =1.0`$ fail to work for the Sun, but they also significantly overestimate masses in young solar-mass stars, which was indicated previously by LR98 and Luhman (1998) in examining the IMF of L1495E. On the other hand, the calculations with $`\alpha =1.9`$ produce fairly good agreement with the data for solar-mass stars at both the main sequence and the earliest stages.
The data for IC 348 and GG Tau are shown with the models of B98 ($`\alpha =1.9`$) in Figure 9. The isochrones imply an age gradient in both sets of data, where the less massive objects are progressively younger. W99 suggested that a warmer temperature scale for GG Tau Ba and Bb could bring them onto the same model isochrone as Aa and Ab. The necessary departure from a dwarf scale is increased slightly by the revision of Ba and Bb to later spectral types (§ 3.2.2). The temperatures of Ba and Bb that are required for coevality are shown in Figure 7. These temperatures are reasonable for PMS objects since they are intermediate between dwarf and giant values. The warmer source Ba is closer to the dwarf scale than Bb. As an experiment, this trend is extrapolated to earlier types until M0, where the dwarf and giant scales converge. For M8 and M9, the temperatures were selected to be continuous with the values at earlier types and intermediate between dwarfs and giants with no other justification. This intermediate temperature scale is listed in Table 2 and illustrated in Figure 7. Using this scale, the data for GG Tau and IC 348 are placed on the H-R diagram in the lower panel of Figure 9. As defined, the components of GG Tau are now coeval on the isochrones of B98. In addition, the locus of IC 348 maintains a constant age and age spread with mass on these isochrones. Although construction of the intermediate temperature scale was somewhat ad hoc at types earlier and later than GG Tau Ba and Bb, using this scale with the B98 models is consistent with the constraints at young ages over a wide range of masses, and should therefore provide relatively reliable ages and masses.
#### 3.5.6 Implications of Tests
The theoretical calculations of the evolution of low-mass stars and brown dwarfs have become quite sophisticated in recent years. However, among the various models there remain large differences in the predicted path of these objects on the H-R diagram, particularly at ages of $`<10`$ Myr. Consequently, the masses and ages estimated for young low-mass objects are sensitive to the choice of models and the adopted temperature scale. Given the observational constraints provided by GG Tau and IC 348, the models of DM97 are compatible with the dwarf temperature scale, while a scale intermediate between those of dwarfs and giants works well with B98. The calculations of Burrows et al. (1997) do not appear to provide an adequate description of young low-mass sources for any reasonable temperature scale. The models of both B98 and DM97 imply that the hydrogen burning limit at young ages occurs at a spectral type of $``$M6 and that several objects in IC 348 fall below the substellar boundary with masses as low as 20-30 $`M_J`$.
## 4 Conclusion
I have obtained deep optical photometry and spectroscopy of the young cluster IC 348 and combined it with previous IR and optical observations. The conclusions are as follows:
1. Expanding on previous optical and IR searches for low-mass stars and brown dwarfs in IC 348, I have identified a rich population of new low-mass candidates through $`R`$ and $`I`$ photometry. Low-resolution optical spectroscopy of a subset of these objects has confirmed their youth and late spectral types.
2. Using the large number of sources in the spectroscopic sample, I have described the typical behavior of the optical spectra of young late-type objects (M5-M8.5) relative to standard dwarfs and giants. Overall, averages of dwarf and giant spectra closely resemble the optical data for young objects and comprise good calibrators of spectral types for young low-mass populations.
3. It appears that the intrinsic $`RI`$ and $`IJ`$ colors of young late M objects are intermediate between the colors of dwarfs and giants, which is consistent with the behavior of the spectra in these bands. Meanwhile, the intrinsic $`JH`$ and $`HK`$ colors are dwarf-like with an additional $`HK`$ excess in some sources, probably arising from a circumstellar disk or an infalling envelope.
4. After testing the models with empirical isochrones in the form of the multiple system GG Tau and the population of IC 348, I find that the calculations of Burrows et al. (1997) are not consistent with the data while the models of DM97 are roughly compatible with the data when a dwarf temperature scale is used. The models of B98 produce the best agreement with observational constraints at young ages, particularly if a temperature scale intermediate between those of dwarfs and giants is adopted.
5. Under the constraints of the empirical isochrones, both DM97 and B98 suggest that the hydrogen burning limit occurs near M6 at ages of $`10`$ Myr. These models indicate the presence of several new brown dwarfs in the spectroscopic sample of IC 348.
I am particularly grateful to the staff of Keck Observatory for their careful execution of the service observations. I thank F. Allard, I. Baraffe, A. Burrows, N. Calvet, and F. D’Antona for providing their most recent calculations and useful advice. Comments on the manuscript by L. Hartmann, G. Rieke, J. Stauffer, and R. White are greatly appreciated. I also thank R. White for access to his spectra of GG Tau B. I was funded by a postdoctoral fellowship at the Harvard-Smithsonian Center for Astrophysics. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
|
no-problem/9905/astro-ph9905303.html
|
ar5iv
|
text
|
# Primordial black hole evolution in tensor-scalar cosmology
## I INTRODUCTION
Various approaches to unified theories and quantum gravity suggest the possibility of one or more massless scalar fields such as the dilaton or other moduli coupled to the trace of the energy-momentum tensor of matter. Theories with such fields in addition to the spacetime metric are dubbed “tensor-scalar” theories. The action for an illustrative class of such theories with one scalar is
$$S=(16\pi G_*)^{-1}\int d^4x\,g_*^{1/2}\left(R_*-2g_*^{\mu \nu }\partial _\mu \phi \partial _\nu \phi \right)+S_m[\psi _m,A^2(\phi )g_{\mu \nu }].$$
(2)
The metric $`g_{\mu \nu }`$ is called the “Einstein metric” and $`R_*`$ is its scalar curvature. The matter fields are collectively denoted $`\psi _m`$, and they couple universally to the “Jordan-Fierz metric”
$$\stackrel{~}{g}_{\mu \nu }\equiv A^2(\phi )g_{\mu \nu },$$
(3)
where the form of the coupling function $`A(\phi )`$ is input which presumably descends from a more fundamental theory. The Newton constant $`\stackrel{~}{G}`$ as measured in Cavendish type experiments is given by
$$\stackrel{~}{G}=(1+\alpha ^2(\phi ))A^2(\phi )G_*,$$
(4)
where $`\alpha \equiv d\mathrm{ln}A/d\phi `$, which is not in fact constant in a cosmological solution. Quantities defined with respect to $`g_{\mu \nu }`$ are commonly referred to as being given in the “Einstein frame” while those defined with respect to $`\stackrel{~}{g}_{\mu \nu }`$ are said to be in the “Jordan-Fierz frame”.
Damour and Nordtvedt identified a generic attractor mechanism which drives a tensor-scalar cosmology to a purely tensor (Einstein) one at late times, thus significantly increasing the plausibility of these models. According to this mechanism the scalar $`\phi `$ is driven to a local minimum of $`A(\phi )`$. Definite predictions for the residual effects of the scalar(s) emerge from their analysis. The Jordan-Fierz-Brans-Dicke theory is a special case of (2) with $`A(\phi )=\mathrm{exp}(\alpha _0\phi )`$. Since such a coupling function has no minimum the attractor mechanism does not work for this theory and fine tuning is therefore required. Generically, however, one might expect $`A(\phi )`$ to possess local minima. A related mechanism in a model motivated by string theory was studied by Damour and Polyakov.
Barrow and Carr initiated a study of the evolution of a population of primordial black holes in tensor-scalar cosmology. They considered two differing scenarios, “scenario A” in which the value of the scalar field (and hence $`\stackrel{~}{G}`$) at the horizon evolves along with the cosmological value, and “scenario B” in which a black hole “remembers” the value of the scalar field at the time of its formation. The black hole evolution is very different in the two cases if, as is natural in these models, Newton’s constant (4) decreases over cosmological timescales. Analyzing the black hole evolution in the Jordan-Fierz frame, and assuming the black hole mass $`\stackrel{~}{M}`$ changes only due to the Hawking evaporation, Barrow and Carr argued that in scenario A the Hawking luminosity increases as $`L\sim T_H^4\times (\mathrm{Area})\sim (\stackrel{~}{G}\stackrel{~}{M})^{-2}`$. A black hole born when $`\stackrel{~}{G}`$ was larger would thus have a longer lifetime than would be surmised from the present value of $`\stackrel{~}{G}`$. In scenario B on the other hand, the black hole remembers the primordial value of $`\stackrel{~}{G}`$ so its lifetime would be even longer.
It turns out that neither of these two scenarios is correct. It will be shown in this paper first that there is no “gravitational memory”, so the value of $`\stackrel{~}{G}`$ at the black hole keeps up with the cosmological change. Second, even if the mass of the black hole is essentially constant in the Einstein frame, the mass increases in the Jordan-Fierz frame in proportion to $`1/A(\phi )`$. This mass increase would counteract the Hawking evaporation (as described in the Jordan-Fierz frame), and could significantly “magnify” the mass of primordial black holes.
## II Evolution of the scalar field at the horizon
The problem to solve is this: if a small black hole is embedded in a cosmology with changing scalar field how does the scalar field at the horizon evolve? At the outset, it is worth remarking that it seems unlikely that the scalar field would be pinned at the horizon, since that would entail increasing gradients in the scalar field as it interpolates between the horizon and the changing cosmological value. It would seem that such gradients would lead to propagation that would even out the field.
We approach the problem by exploiting the great separation of scales between the black hole and the cosmological background. Since the black hole is much smaller than the cosmological length or time scales it is reasonable to think of it as sitting in a local asymptotically flat space, with a boundary condition for the scalar field set by the cosmological evolution $`\phi _c(t)`$. If the scalar field at the black hole follows $`\phi _c(t)`$ then, since this change is very slow compared with the size of the black hole, the solution should be a small perturbation of the stationary black hole. If such a perturbation exists in which $`\phi `$ at the horizon keeps up with $`\phi _c(t)`$, then our assumption will be shown to be consistent.
For simplicity we first discuss nonrotating black holes, and we work in the Einstein frame. The only such black holes in Einstein-scalar gravity are the Schwarzschild metric with a constant scalar, and the only spherically symmetric perturbations of these black holes are pure scalar fields satisfying the wave equation in the Schwarzschild background. Thus we need only look for spherically symmetric solutions to the wave equation,
$$\left(g^{tt}\partial _t^2+\frac{1}{\sqrt{-g}}\partial _r\sqrt{-g}g^{rr}\partial _r\right)\phi (t,r)=0,$$
(5)
(in Schwarzschild coordinates) subject to the boundary condition
$$\phi (t,r=\infty )=\phi _c(t)=\dot{\phi }_ct.$$
(6)
Since the cosmological timescale is assumed to be very long compared with that of the black hole, it is consistent to set the cosmological scalar perturbation equal to a linear function of the Schwarzschild time coordinate $`t`$ as we have done here with $`\dot{\phi }_c`$ a constant.
The unique solution to the wave equation (5) depending only on $`t`$ and satisfying the boundary condition is just the boundary value itself,
$$\phi _1(t,r)=\dot{\phi }_ct.$$
(7)
However this can not be the perturbation we are looking for since the coordinate $`t`$ and hence the “perturbation” $`\phi _1`$ diverges at the black hole horizon. The unique solution depending only on $`r`$ is given up to constants by
$$\phi _2(t,r)=\mathrm{ln}(1-r_0/r)$$
(8)
where $`r_0`$ is the Schwarzschild radius and we have used $`\sqrt{-g}g^{rr}=(r-r_0)r\mathrm{sin}\theta `$. This solution also diverges at the horizon, but there is a linear combination of these two solutions that is regular at the horizon.<sup>*</sup><sup>*</sup>*This observation is due to Amos Ori. Since $`\phi _2`$ vanishes at infinity, this linear combination will be the perturbation we seek.
The advanced time coordinate $`v=t+r^{*}`$, with $`r^{*}=r+r_0\mathrm{ln}(r/r_0-1)`$, is regular on the horizon. In terms of $`v`$ we have $`\mathrm{ln}(1-r_0/r)=(v-t-r)/r_0+\mathrm{ln}(r_0/r)`$, hence the linear combination of $`\phi _1`$ and $`\phi _2`$ which is regular on the horizon and approaches $`\phi _c`$ at infinity is
$$\phi _3=\phi _1+r_0\dot{\phi }_c\phi _2=\dot{\phi }_c\left(v-r-r_0\mathrm{ln}\frac{r}{r_0}\right).$$
(9)
The existence of the solution (9) establishes the result that the cosmological change of the scalar field can extend smoothly to the black hole horizon. Any other regular perturbation satisfying the boundary condition must be a superposition of waves which will dissipate by falling into the black hole or spreading out to infinity, hence after transients the perturbation (9) will describe the secular change of the scalar field.
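As an independent check of this argument, one can verify (9) symbolically. The following minimal Python/SymPy sketch (an illustration added here, not part of the original analysis) confirms that the perturbation solves the spherically symmetric wave equation (5):

```python
import sympy as sp

# Check that phi_3 = phidot*(v - r - r0*ln(r/r0)) = phidot*(t + r0*ln(1 - r0/r))
# solves the spherically symmetric wave equation (5) on Schwarzschild.
t, r, r0, phidot = sp.symbols('t r r_0 phidot', positive=True)
f = 1 - r0/r                               # metric function: g_tt = -f, g^rr = f
phi3 = phidot*(t + r0*sp.log(1 - r0/r))    # Eq. (9), rewritten using v = t + r*

box_phi = -sp.diff(phi3, t, 2)/f + sp.diff(r**2*f*sp.diff(phi3, r), r)/r**2
print(sp.simplify(box_phi))                # prints 0
```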
A surface of constant $`\phi _3`$ intersects the horizon at an advanced time $`v_H`$ and reaches infinity at a cosmological or Schwarzschild time $`t_{\infty }`$, these two times being related by $`v_H-r_0=t_{\infty }`$. The two surfaces $`v=v_H`$ and $`t=t_{\infty }`$ thus intersect at $`r^{*}=v_H-t_{\infty }=r_0`$ which is at around $`r\approx 1.5r_0`$. Thus there is not much “lag” between the horizon value and the cosmological value.
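The quoted intersection radius follows from solving $`r^{*}=r_0`$ numerically; an illustrative one-line check (using SciPy, with $`r_0=1`$):

```python
import math
from scipy.optimize import brentq

# Locate the intersection of v = v_H and t = t_infinity: solve r* = r0,
# i.e. x + ln(x - 1) = 1 with x = r/r0.
root = brentq(lambda x: x + math.log(x - 1.0) - 1.0, 1.0 + 1e-12, 10.0)
print(root)   # ~1.56, i.e. r is roughly 1.5 r0
```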
The solution $`\phi _3`$ generalizes with little change to the case of a rotating black hole. Using Boyer-Lindquist coordinates for the Kerr metric the wave equation for functions of $`t`$ and $`r`$ again takes the form (5). Hence we again find that $`\phi _1(t)`$ (7) solves the wave equation, and in place of (8) we find the stationary solution
$$\phi _2^{\prime }=\mathrm{ln}\frac{r-r_+}{r-r_{-}}$$
(10)
where $`r_\pm `$ are the radii of the outer and inner horizons and we have used $`\sqrt{-g}g^{rr}=(r-r_+)(r-r_{-})\mathrm{sin}\theta `$. The advanced time coordinate $`v=t+r^{*}`$, now with $`r^{*}=\int dr(r^2+a^2)/(r-r_+)(r-r_{-})=r+\frac{r_++r_{-}}{r_+-r_{-}}[r_+\mathrm{ln}(\frac{r}{r_+}-1)-r_{-}\mathrm{ln}(\frac{r}{r_{-}}-1)]`$, is regular on the Kerr horizon. The linear combination of $`\phi _1`$ and $`\phi _2^{\prime }`$ which is finite on the horizon and approaches $`\phi _c`$ at infinity is given by
$`\phi _3^{\prime }`$ $`=`$ $`\phi _1+\left(\frac{2mr_+}{r_+-r_{-}}\dot{\phi }_c\right)\phi _2^{\prime }`$ (11)
$`=`$ $`\dot{\phi }_c\left[v-r-2m\mathrm{ln}\left(\frac{r}{r_{-}}-1\right)+\frac{2mr_+}{r_+-r_{-}}\mathrm{ln}\frac{r_+}{r_{-}}\right]`$ (12)
where $`2m=r_++r_{-}`$. Thus also for a rotating black hole there is a perturbation describing a scalar field changing linearly with time in the same way at the horizon as at infinity.
## III Evolution of the black hole mass
The mass of a black hole may grow by accretion or decrease by emission of Hawking radiation. Aside from standard accretion processes, in tensor-scalar cosmology a changing scalar at the horizon entails a flux of energy into the black hole, thus increasing its mass. In addition, we shall see that even if the mass in the Einstein frame is approximately constant, the mass defined in the Jordan-Fierz frame changes simply due to the conformal scaling of the metric (3).
To avoid confusion, in this section asterisk subscripts are attached to quantities defined with respect to the Einstein metric, and tildes adorn quantities defined with respect to the Jordan-Fierz metric. This asterisk is unrelated to the superscript on the radial tortoise coordinate in the previous section.
The rate of change of a black hole mass with respect to cosmological time, arising from the changing scalar, is most easily evaluated in the Einstein frame. Let us consider for simplicity a nonrotating black hole. Provided the change of the mass is slow on the scale of the black hole, i.e. $`dM_*/dt_*\ll M_*/r_0`$, it is given approximately by the flux of Killing energy across the horizon,
$$dM_*/dt_*=\mathrm{Area}_*\times T_{v_*v_*}^{(\phi )}=2G_*^{-1}r_0^2(d\phi _c/dt_*)^2.$$
(13)
If $`\phi `$ changes by an amount $`\mathrm{\Delta }\phi `$ over an Einstein-frame cosmological time $`\mathrm{\Delta }t_*`$, the fractional increase of the mass is given by
$$\frac{\mathrm{\Delta }M_*}{M_*}\sim \frac{r_0}{\mathrm{\Delta }t_*}(\mathrm{\Delta }\phi )^2$$
(14)
provided $`r_0\mathrm{\Delta }\phi /\mathrm{\Delta }t_*\ll 1`$. Small black holes can therefore experience a huge variation in $`\phi `$ with essentially no increase of the mass $`M_*`$ in the Einstein frame as long as the change is adiabatic. This conclusion also follows directly from the fact that in adiabatic processes the black hole entropy $`A_*/4G_*=4\pi G_*M_*^2`$ is unchanged.
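To see how small the effect is in practice, here is an order-of-magnitude evaluation of (14); the mass, timescale and scalar excursion below are assumptions chosen purely for illustration:

```python
# Order-of-magnitude illustration of Eq. (14); all numbers below are assumptions
# chosen only to exhibit the scales, not values taken from the text.
G, c = 6.67e-8, 3.0e10          # cgs units
M = 1.0e15                      # g: a typical primordial black hole mass (assumed)
r0 = 2*G*M/c**2                 # Schwarzschild radius in cm
dt = 1.0                        # s: cosmological time over which phi changes (assumed)
dphi = 10.0                     # net change of the dimensionless scalar (assumed)
print((r0/c)/dt * dphi**2)      # ~5e-22: the Einstein-frame mass barely changes
```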
The relation between the mass $`\stackrel{~}{M}`$ in the Jordan-Fierz frame and that in the Einstein frame is given by $`\stackrel{~}{M}/M_*=dt_*/d\stackrel{~}{t}`$ (since the mass (energy) is by definition the value of the generator of (asymptotic) Killing time translations, which scales inversely with the time coordinate), which from (4) yields
$$\stackrel{~}{M}=A^{-1}(\phi )M_*.$$
(15)
Thus, even when $`M_*`$ is constant, the black hole mass in the Jordan-Fierz frame is not constant. This conclusion is somewhat surprising since, by contrast, the rest mass of ordinary matter is constant in the Jordan-Fierz frame.<sup>†</sup><sup>†</sup>†In the string motivated variation of (2), however, one expects dilaton-dependence of (at least) hadron masses in both the Jordan-Fierz and Einstein frames.
In the Einstein frame both the black hole mass and Newton’s constant are constant, hence the Hawking evaporation process is identical to that in ordinary Einstein gravity. All that is needed for cosmology then is to transform the results back into the Jordan-Fierz time frame in which the matter is typically understood to evolve.
If alternatively the evaporation is analyzed in the Jordan-Fierz frame, as in , it is necessary to take into account not only the change of Newton’s constant (4) but also the change of $`\stackrel{~}{M}`$ (15) arising from the time dependence of $`A(\phi )`$. To determine the dependence of the Hawking luminosity $`\stackrel{~}{L}`$ on $`\stackrel{~}{G}`$ and $`\stackrel{~}{M}`$ we must start with the fact that $`\stackrel{~}{L}`$ is determined directly by geometrical quantities in the Jordan-Fierz frame<sup>‡</sup><sup>‡</sup>‡The surface gravities in the Jordan-Fierz and Einstein frames are simply related by $`\kappa =A(\phi _c)\stackrel{~}{\kappa }`$ where $`\phi _c`$ is the asymptotic cosmological value of $`\phi `$. One way to see this is to note that surface gravity is conformally invariant except for the effect due to the different asymptotic normalization of the Killing field. The absorption coefficients would in general not transform in any simple way except for conformally coupled fields, however in the adiabatic approximation the conformal factor $`A^2(\phi )`$ relating the two metrics is constant, so in fact the absorption coefficients for massless fields are identical in the two frames, and those for massive fields are related by rescaling the mass. Thus one can also determine the luminosity $`L_*`$ directly in the Einstein frame. The luminosities $`\stackrel{~}{L}`$ and $`L_*`$ are related simply by the transformation of the energy and time scales, hence we have $`L_*=A^2(\phi )\stackrel{~}{L}`$. : the surface gravity $`\stackrel{~}{\kappa }`$ (which determines the Hawking temperature $`\stackrel{~}{T}_H=\hbar \stackrel{~}{\kappa }/2\pi `$) and the black hole absorption coefficients $`\stackrel{~}{\mathrm{\Gamma }}`$. The luminosity behaves approximately as $`\stackrel{~}{L}\sim \stackrel{~}{\mathrm{Area}}\times \stackrel{~}{\kappa }^4`$ which for a Schwarzschild black hole becomes $`\stackrel{~}{L}\sim \stackrel{~}{r}_0^{-2}`$ where $`\stackrel{~}{r}_0`$ is the Schwarzschild radius. Note however that $`\stackrel{~}{r}_0`$ is not the same as $`2\stackrel{~}{G}\stackrel{~}{M}`$. Being purely geometrical, $`\stackrel{~}{r}_0`$ must be related to the Schwarzschild radius in the Einstein frame $`r_0=2G_*M_*`$ by the same scale factor that relates the two metrics (3), $`\stackrel{~}{r}_0=A(\phi )r_0`$. Using (4) and (15) on the other hand we find $`\stackrel{~}{G}\stackrel{~}{M}=(1+\alpha ^2)AG_*M_*`$, hence evidently $`\stackrel{~}{r}_0=2\stackrel{~}{G}\stackrel{~}{M}/(1+\alpha ^2)`$.
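The last identity is simple algebra; as a sanity check, a short SymPy sketch (illustrative only):

```python
import sympy as sp

# Verify r~_0 = 2 G~ M~/(1 + alpha^2), given G~ = (1+alpha^2) A^2 G_*,
# M~ = A^{-1} M_*, r~_0 = A r_0 and r_0 = 2 G_* M_*.
A, alpha, Gstar, Mstar = sp.symbols('A alpha Gstar Mstar', positive=True)
Gt, Mt, r0t = (1 + alpha**2)*A**2*Gstar, Mstar/A, A*2*Gstar*Mstar
print(sp.simplify(r0t - 2*Gt*Mt/(1 + alpha**2)))   # prints 0
```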
If $`A(\phi )`$ decreases by a large amount between the epoch of primordial black hole formation and today then it is possible that the Jordan-Fierz frame mass of primordial black holes would have been significantly magnified. Such large changes of $`A(\phi )`$ are not inconsistent with observations. According to Damour and Pichon nucleosynthesis bounds are consistent with $`A_{10\mathrm{MeV}}/A_{\mathrm{today}}`$ as large as 150 or even greater. Going back further to the early radiation era $`A(\phi )`$ could in principle have been tremendously larger. Indeed, Damour and Polyakov argued in a dilatonic model that, as a result of the effect accumulated each time the temperature drops through the annihilation energy of a particle species during the radiation era, $`A(\phi )`$ is decreased by a net factor given by $`A_{\mathrm{out}}/A_{\mathrm{in}}\simeq (A_{\mathrm{out}}/A_{\mathrm{today}})^{-1/F^2}`$ where $`F=1.87\times 10^{-4}\kappa ^{9/4}`$ and $`\kappa `$ is a parameter which is naturally of order unity. Thus $`A(\phi )`$ could have decreased since the beginning of the radiation era by a factor of order $`10^{10^8}`$ for example if $`A_{\mathrm{out}}/A_{\mathrm{today}}=100`$. Although the quadratic model $`\mathrm{ln}A(\phi )\propto (\phi -\phi _{\mathrm{min}})^2`$ underlying these calculations should not be taken seriously over such a range, the numbers serve to illustrate the point that $`A(\phi )`$ could have been extremely large at the beginning of the radiation era compared to now.
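The quoted magnitude is easy to reproduce; a rough arithmetic check, assuming $`\kappa =1`$ and $`A_{\mathrm{out}}/A_{\mathrm{today}}=100`$:

```python
import math

# Rough check of the quoted order of magnitude (assuming kappa = 1).
F = 1.87e-4
print(math.log10(100.0)/F**2)   # ~6e7: a net factor ~10^{6e7}, of the order quoted
```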
It is thus conceivable that the mass of a small primordial black hole could have been magnified by a very large factor. The mass today depends of course not only on the mass magnification factor but on the initial mass. Since the Einstein frame mass is unchanged and today the two frames coincide, this “mass magnification” would only deserve the name if the expected masses for primordial black holes were fixed in the Jordan-Fierz frame rather than the Einstein frame. Whether this is true would depend on the physics producing these black holes.
The potential mass spectrum of primordial black holes will not be discussed here, except to point out one fact. Carr and Hawking’s original estimate, that the mass of primordial black holes formed from collapse of early universe overdensities might be expected to be of order the horizon mass, arose from the agreement $`M_{\mathrm{horizon}}\sim M_{\mathrm{Jeans}}`$ of the upper limit horizon mass $`M_{\mathrm{horizon}}\sim t/G`$ and the lower limit Jeans mass $`M_{\mathrm{Jeans}}=(G^3\rho )^{-1/2}`$. This coincidence does not necessarily hold in tensor-scalar gravity. Since the Jordan-Fierz and Einstein metrics are conformally related, they define the same particle horizon, so the horizon masses are related by (15), $`\stackrel{~}{M}_{\mathrm{horizon}}=A^{-1}M_{\mathrm{horizon}}\sim A^{-1}(G_*^3\rho _*)^{-1/2}`$ (where $`\rho _*`$ is the Einstein frame cosmological energy density). The Jeans mass is naturally defined in the Jordan-Fierz frame since the matter is minimally coupled there. Hence, using $`\rho _*=A^4\stackrel{~}{\rho }`$ and (4), we have
$`\stackrel{~}{M}_{\mathrm{Jeans}}`$ $`=`$ $`(\stackrel{~}{G}^3\stackrel{~}{\rho })^{-1/2}`$ (16)
$`=`$ $`(1+\alpha ^2)^{-3/2}A^{-1}(G_*^3\rho _*)^{-1/2}`$ (17)
$`\sim `$ $`(1+\alpha ^2)^{-3/2}\stackrel{~}{M}_{\mathrm{Horizon}}.`$ (18)
If $`\alpha \lesssim 1`$ then the upper and lower limits still coincide so the expected mass would be the horizon mass at the time of formation. If however $`\alpha \gg 1`$, then the Jeans mass is much smaller than the horizon mass, which would allow a much smaller mass for primordial black holes formed from overdensities than otherwise expected in the standard scenario.
## Acknowledgements
I would like to thank Doug Armstead for collaboration in the early stages of this work; John Barrow, Matt Choptuik, Thibault Damour, Carsten Gundlach, Amos Ori and Patrick Brady for helpful discussions; and Bernard Carr, Thibault Damour and Carsten Gundlach for useful comments on earlier drafts of this paper. This work was supported in part by the National Science Foundation under grants No. PHY98-00967 at the University of Maryland and PHY94-07194 at the Institute for Theoretical Physics.
|
no-problem/9905/hep-ph9905456.html
|
ar5iv
|
text
|
# Non-linear Regge spectrum fits to experimentally up-dated meson states
## Abstract
The exciting feature of recent times is the inflow of new experimental data. Fits to the $`\pi `$, $`\rho `$ and $`f`$ meson resonances found by the VES collaboration, along with those of the PDG 1998 compilation are reported using a non-linear model for Regge trajectories. There are only five parameters including the pion mass and the states come out as Regge-type excitations of the pion. The total number of mesons fitted is 23 and the fit is good to $`\sim `$ 3% for 18 of them and $`\sim `$ 9% for the others.
We have suggested the use of a modified Regge trajectory for heavy as well as light mesons and in the case of the upsilon radial excitations the non-linearity is most prominent . The trajectories are based on the use of deformed Poincaré algebra for the internal degrees of freedom of the hadron. In fact it was shown recently that the kinetic excitation of the hadron must obey the usual Einstein relation in order that the thermodynamic functions can be defined for the system, and that the limiting temperature predicted by the fit is larger than the older Hagedorn estimate.
The square of masses of the resonances is given in the model by
$$E^2=\mathrm{Sinh}^{-1}\left[\mathrm{sinh}^2\left(\frac{mϵ}{2}\right)+\left(\frac{ϵ^2}{4}\right)\left(\frac{L}{\alpha ^{\prime }}+\frac{L}{\beta ^{\prime }}+\frac{S}{\gamma ^{\prime }}+\frac{J}{\delta ^{\prime }}\right)\right]$$
(1)
The parameters we choose are $`\alpha ^{\prime }=0.72`$, $`\beta ^{\prime }=0.62`$, $`\gamma ^{\prime }=2.1`$ and $`\delta ^{\prime }=9.2`$, all in $`GeV^{-2}`$. The parameter $`ϵ`$ is taken to be 0.912 $`GeV^{-1}`$ as found in from the upsilon fit. The states are given in the following tables.
Table 1 gives the pion states. Note that the state $`\pi (1740)`$ given by , revised from the PDG value $`1800`$, fits exceedingly well with our model. These states, revised by , were the motivation for this addendum and are shown by us with a superscript asterisk in the Tables.
We have not tried to fit the $`\rho `$ and $`a`$ separately from the $`\omega `$ and the $`f`$ mesons, since they are roughly degenerate. This could be done easily by choosing the $`\alpha ^{\prime },\beta ^{\prime },\gamma ^{\prime }`$ and $`\delta ^{\prime }`$ separately for the two sets but is not otherwise very meaningful. There are very few omega meson states (4) and the $`f`$ mesons are excitations of both $`\omega `$ and $`\eta `$. The new experiments do not report $`\eta `$ mesons and they are not considered in this paper. But all the mesons as well as baryons can be fitted in the model . The interesting point is the inflow of the more recent data.
Table 2 shows the fits to the $`\rho `$-$`\omega `$ system. Note that the new $`a_1(1800)`$ state reported by with an experimental error of 50 MeV and width 230 MeV, fits very well into our scheme as a nodal excitation, n = 1, in the L = 1 channel.
|
no-problem/9905/cond-mat9905206.html
|
ar5iv
|
text
|
# The Phase Diagram of Star Polymer Solutions
## Abstract
The phase diagram of star polymer solutions in a good solvent is obtained over a wide range of densities and arm numbers by Monte Carlo simulations. The effective interaction between the stars is modeled by an ultrasoft pair potential which is logarithmic in the core-core distance. Among the stable phases are a fluid as well as body-centered cubic, face-centered cubic, body-centered orthogonal, and diamond crystals. In a limited range of arm numbers, reentrant melting and reentrant freezing transitions occur for increasing density.
A major challenge in statistical physics is to understand and predict the macroscopic phase behavior from a microscopic many-body theory for a given interaction between the particles . For a simple classical fluid , this interaction is specified in terms of a radially symmetric pair potential $`V(r)`$ where $`r`$ is the particle separation. Significant progress has been made during the last decades in predicting the thermodynamically stable phases for simple intermolecular pair potentials, such as for Lennard-Jones systems, plasmas or hard spheres, using computer simulations and density functional theory . An important realization of classical many-body systems are suspensions of colloidal particles dispersed in a fluid medium. A striking advantage of such colloidal samples over molecular ones is that their effective pair interaction is eminently tunable through experimental control of particle and solvent properties . This brings about more extreme pair interactions, leading to novel phase transformations. For instance, if the colloidal particles are sterically stabilized against coagulation, the ‘softness’ of the interparticle repulsion is governed by the length of the polymer chains grafted onto the colloidal surface, their surface grafting density and solvent quality. Computer simulations and theory have revealed that a fluid freezes into a body-centered-cubic (bcc) crystal for soft long-ranged repulsions and into a face-centered-cubic (fcc) one for strong short-ranged repulsions . This was confirmed in experiments on sterically stabilized colloidal particles . A similar behavior occurs for charge-stabilized suspensions where the softness of $`V(r)`$ is now controlled by the concentration of added salt . Less common effects were observed for potentials involving an attractive part aside from a repulsive core. In reducing the range of the attraction, a vanishing liquid phase has been observed and an isostructural solid-solid transition was predicted . More complicated pair potentials can even lead to stable quasicrystalline phases and a quadruple point in the phase diagram .
The aim of this letter is to study the phase diagram of an ultrasoft repulsive pair potential $`V(r)`$ which is logarithmic in $`r`$ inside a core of diameter $`\sigma `$ and vanishes exponentially in $`r`$ outside the core. The motivation to do this is twofold: first, such a potential is a good model for the effective interaction between star polymers in a good solvent , which can be regarded as sterically stabilized particles where the size of the particles is much smaller than the length of the grafted polymer chains . These stars are characterized by their arm number (or functionality) $`f`$, i.e., the number of polymer chains tethered to the central particle, and their corona diameter $`\sigma `$ which measures the spatial extension of the monomer density around a single star center. Second, more fundamentally, phase transitions for such soft potentials are expected to be rather different from those for stronger repulsions. From a study of the pure logarithmic potential in two spatial dimensions , it is known that one needs a critical prefactor to freeze the system, which is quite different from, e.g., inverse-power potentials. Furthermore, the potential crossover at $`r=\sigma `$ is expected to influence drastically the freezing transition, if the number density $`\rho `$ of the stars is near the overlap concentration, $`\rho ^{*}\sim 1/\sigma ^3`$.
We obtain the full phase diagram of star polymer solutions by Monte Carlo simulation and theory. As a result, among the stable phases are a fluid as well as bcc, fcc, body-centered-orthogonal (bco), and diamond crystals. We emphasize that the stability of a bco crystal with anisotropic rectangular elementary cell and a diamond structure was never obtained before for a radially symmetric pair potential. In fact, there is a widespread belief in the literature that anisotropic or three-body forces are solely responsible for a stable diamond lattice . We show that both the crossover at $`r=\sigma `$ and the ultrasoftness of the core are crucial for the stability of the bco and the diamond phase. Moreover, we get reentrant melting for $`34\lesssim f\lesssim 60`$, and reentrant freezing for $`44\lesssim f\lesssim 60`$ as $`\rho `$ is increasing. Some features of the presented phase diagram have already been observed in a system of copolymer micelles exhibiting a very similar interaction to star polymers .
With $`k_BT`$ denoting the thermal energy, our effective pair potential between two star centers is a combination of a logarithm inside the core of size $`\sigma `$ and a Yukawa-potential outside the core :
$$V(r)=\frac{5}{18}k_BTf^{3/2}\{\begin{array}{cc}\mathrm{ln}(\frac{r}{\sigma })+\frac{1}{1+\sqrt{f}/2}\hfill & \text{(}r\sigma \text{);}\hfill \\ \frac{\sigma }{1+\sqrt{f}/2}\frac{\mathrm{exp}(\sqrt{f}(r\sigma )/2\sigma )}{r}\hfill & \text{(}r>\sigma \text{),}\hfill \end{array}$$
(1)
such that both the potential and its first derivative (or, equivalently, the force) are continuous at $`r=\sigma `$ . The decay length of the exponential is given by the largest blob diameter within the Daoud-Cotton theory for single star polymers . Experimental support for this potential comes from neutron scattering data on the structural ordering of 18-arm stars in the fluid phase and shear moduli measurements in the crystalline phase of micelles . Furthermore, microscopic simulations of two star polymers have shown that this potential provides an excellent description of the effective star interaction for a broad range of arm numbers . We note that $`V(r)`$ becomes the hard sphere potential for $`f\to \infty `$.
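For concreteness, a direct Python transcription of (1) is given below, together with a numerical check of the continuity of the potential and the force at $`r=\sigma `$ (an illustration; the parameter values are arbitrary):

```python
import numpy as np

def V(r, f, sigma=1.0, kT=1.0):
    """Direct transcription of the pair potential (1), in units of kT and sigma."""
    pref = (5.0/18.0)*kT*f**1.5
    inner = -np.log(r/sigma) + 1.0/(1.0 + np.sqrt(f)/2.0)
    outer = (sigma/(1.0 + np.sqrt(f)/2.0))*np.exp(-np.sqrt(f)*(r - sigma)/(2.0*sigma))/r
    return pref*np.where(r <= sigma, inner, outer)

f, eps = 32, 1e-6
r = np.array([1.0 - 2*eps, 1.0 - eps, 1.0 + eps, 1.0 + 2*eps])
v = V(r, f)
print(v[1], v[2])                            # V is continuous at r = sigma
print((v[1] - v[0])/eps, (v[3] - v[2])/eps)  # and so is the force
```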
Due to the purely entropic origin of the interstar repulsion, the strength of the pair potential (1) scales linearly with $`k_BT`$, causing the temperature to be an irrelevant thermodynamic quantity. Therefore, for the calculation of the phase diagram, only the packing fraction of the stars, $`\eta =\pi /6\rho \sigma ^3`$, and the arm number $`f`$ matter, the latter playing the role of an “effective inverse temperature.” We use computer simulations to access the phase diagram. The free energies of the fluid phase and several possible solid phases are calculated by thermodynamic integration via Monte Carlo simulations . The free energy of the fluid phase, $`F_{fl}`$, is obtained either by the well-known “pressure- or density-route” , or, alternatively, by the so-called “$`f`$-route”. The pressure-route relates the free energy for nonvanishing $`\eta `$ to that at zero packing fraction, keeping $`f`$ fixed. In the $`f`$-route, $`f`$ is used as an artificial thermodynamic variable, now keeping $`\eta `$ fixed. The free energy of star polymers with a certain arm number $`f`$ is then obtained by the following integration:
$$F_{fl}=\int _0^fdf^{\prime }\left\langle \frac{\partial U}{\partial f^{\prime }}\right\rangle _{f^{\prime }}.$$
(2)
Here, $`U=\sum _{i<j}V(|𝐫_i-𝐫_j|)`$ is the total potential energy function which depends on $`f`$ since $`V(r)`$ depends on $`f`$ parametrically. $`\langle \cdots \rangle _{f^{\prime }}`$ denotes the canonical ensemble average for a system with fixed arm number $`f^{\prime }`$. Therefore, in order to carry out the $`f`$-route integration, a series of simulations at fixed $`\eta `$ but for increasing $`f^{\prime }`$ is performed to calculate the integrand of Eq. (2).
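Schematically, the $`f`$-route integration over such simulation data amounts to a simple quadrature; in the sketch below the tabulated averages are placeholders, not measured values:

```python
import numpy as np

# f-route: F_fl(f) = integral_0^f df' <dU/df'>_{f'}, evaluated by the trapezoid rule.
# fprimes and du_df stand for a series of simulations at fixed eta; the numbers
# are purely illustrative placeholders, not results from the text.
fprimes = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 18.0])
du_df   = np.array([0.0, 0.9, 1.7, 2.4, 3.0, 3.3])   # <dU/df'>_{f'} per star (assumed)
F_fl = np.trapz(du_df, fprimes)
print(F_fl)   # fluid free energy per star (units of k_B T) at f = 18
```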
We use the Frenkel-Ladd method for continuous potentials to obtain the free energy of the solid phases . For these Monte Carlo calculations, suitable candidate crystal structures have to be chosen. Our method to get information about the possible stable structures for fixed $`f`$ and $`\eta `$ consists of two steps: first, we calculate lattice sums for a wide class of crystals, including the ‘usual’ structures with cubic elementary cells (fcc, bcc, hcp, and simple cubic) and several ‘unusual’ structures. These unusual structures are the hexagonal lattice, the diamond lattice, representations of quasicrystalline structures (see, e.g., Ref. ), and generalizations of the usual structures, which were obtained by stretching the elementary cell lengths (denoted as $`a`$, $`b`$, and $`c`$) of these structures by arbitrary factors, then using the two independent ratios $`b/a`$ and $`c/a`$ as minimization parameters of the lattice sum. Second, we calculate the global bond order parameters of the equilibrated structures, which were spontaneously formed in a first set of simulations, always starting from a purely random configuration. Crystal structures whose bond order parameters are in agreement with these measured parameters, and which have reasonably small values of the lattice sum, are then chosen as candidate structures for the free energy calculations. This procedure was performed for a wide range of arm numbers, $`18\le f\le 512`$, and packing fractions $`0\le \eta \le 1.5`$. Finally, the obtained free energy data at fixed $`f`$ were used to explore the phase boundaries via the common double tangent construction. The resulting phase diagram is displayed in Fig. 1.
In the explored range of $`f`$ and $`\eta `$, four different stable crystal structures are found besides a fluid phase. For $`f<f_c\approx 34`$, the fluid phase is stable for all densities, which is in agreement with results obtained from an effective hard sphere mapping procedure and from scaling theory . We remark that Witten et al. only estimated $`f_c`$ within one order of magnitude to be around $`f\approx 100`$. For $`f\ge f_c`$, at least one stable crystal phase is found. We focus first on the crystal phases at $`0.2\lesssim \eta \lesssim 0.7`$: for $`f_c<f\lesssim 54`$, a bcc phase is found, whereas for $`f\gtrsim 70`$, only the fcc structure turns out to be stable. At intermediate $`f`$ ($`54\lesssim f\lesssim 70`$), bcc-fcc phase transitions occur. For $`0.2\lesssim \eta \lesssim 0.7`$, the mean interparticle distance $`\overline{r}=\rho ^{-1/3}`$ is larger than $`\sigma `$, leaving only the exponential part of $`V(r)`$ to be relevant for the phase behavior. Therefore, the observation of an fcc phase for large $`f`$, corresponding to a short-range, strongly screened potential, and a bcc phase for small $`f`$, corresponding to a long-range, less screened potential, is analogous to the phase behavior found for charged colloids . In Fig. 1, the freezing and melting points for hard spheres, corresponding to $`f\to \infty `$, are shown as well, denoted by black triangles. We emphasize that even star polymers with very high arm numbers freeze at considerably smaller $`\eta `$ than hard spheres. In fact, our simulations show that a ‘hard-sphere like’ structure is only found for extremely high arm numbers $`f\gtrsim 10000`$. Thus the change in the phase boundary cannot be shown on the scale of the figure.
Let us now consider the phase behavior for $`\eta \gtrsim 0.7`$, where $`\overline{r}`$ is of the order of $`\sigma `$ and the logarithmic part of $`V(r)`$ becomes relevant. From our calculations, a reentrant melting transition, i.e., a transition from a solid to a liquid phase with increasing $`\eta `$, is found for $`34<f\lesssim 60`$. We note that this reentrant melting was already predicted qualitatively by Witten et al. . For $`f\gtrsim 60`$, a solid-solid phase transformation into a bco phase takes place. This unusual phase is stable up to $`\eta \approx 1.0`$. For $`44\lesssim f\lesssim 60`$, the remolten liquid refreezes into this bco structure at $`\eta \approx 0.80`$. At $`\eta \approx 1.0`$, a further solid-solid phase transition from the bco into a diamond structure is found, the latter being stable for arm numbers $`f\gtrsim 44`$ and packing fractions up to $`\eta \approx 1.4`$-1.5. Notice that the extension of the two phase regions (“density-jumps”) of all encountered phase transitions is extremely small due to the soft character of $`V(r)`$ . Moreover, the empirical Hansen-Verlet freezing rule is valid for all points at the phase boundaries where we calculated the static structure factor $`S(q)`$. This also includes the reentrant melting transition for $`\eta \approx 0.7`$, where the $`S(q)`$ for the fluid begins to show unusual behavior .
We develop now a physical intuition for the unusual occurrence of the bco and diamond phase. For this purpose, we report first on the detailed structure of the bco phase. At fixed $`\eta `$, the bco crystal is described by the two length ratios of its elementary cell, $`b/a`$ and $`c/a`$, respectively. In order to calculate the free energy of the bco crystal by the Frenkel-Ladd method, these ratios had to be determined from a first set of simulations. In these $`NpT`$-simulations , the system was free to adopt its optimal values for $`b/a`$ and $`c/a`$, starting either from a purely random configuration or an initial bco configuration. Within the error bars, the cell-length ratios determined in this way were in agreement with the values obtained from the minimization of the lattice sums. We therefore took the lattice sum results as input for the free energy calculations. These ratios increase with $`\eta `$ from $`b/a\approx 2.24`$ and $`c/a\approx 1.32`$ at $`\eta =0.7`$ to $`b/a\approx 3.14`$ and $`c/a\approx 1.81`$ at $`\eta =1.0`$ and are nearly independent of $`f`$. Fig. 2 illustrates the resulting structure.
As can be seen from this figure, the anisotropy of the elementary cell leads to a strong interpenetration of the particle coronas along one of the lattice axes. In fact, over the whole stability range of the bco phase, the next neighbor distance along this axis is considerably smaller than $`\sigma `$, whereas all other next neighbor distances are larger than $`\sigma `$. This can be intuitively understood from the form of the potential (1): due to the weak divergence for small $`r`$, there is no huge energy penalty in bringing the nearest neighbors close together. On the other hand, the potential falls off rapidly for $`r>\sigma `$, so all the remaining neighbor shells are not costly in energy, too. With increasing $`\eta `$, the distance of the two nearest neighbors in the bco is decreasing until the energy penalty becomes significant. Hence, the bco will then lose against another structure with more than two nearest neighbors inside the corona. A suitable structure is the diamond phase which possesses four tetrahedrally ordered nearest neighbors. Indeed, our simulations show that all the other neighbors are kept outside the corona in the stability range of the diamond. Therefore, both the ultrasoft logarithmic part and the crossover at $`r=\sigma `$ are crucial for the stability of the bco and the diamond phase. This provides a simple reason why such phases have not been found earlier for strongly repulsive interactions. We further note that the presented scenario also nicely expresses itself in the angle-average radial distribution functions $`g(r)`$ of the bco and diamond solid, which show a similar anomaly as found in the $`g(r)`$ of the fluid phase .
As for a further theoretical investigation, we solved the accurate Rogers-Young closure to obtain the free energy of the fluid for $`f=18,32,40,48`$ and $`64`$. For the aforementioned solid structures, we used the Einstein-crystal perturbation theory to calculate the associated free energies. As this theory provides only an upper bound to the free energy, the domain of stability of the fluid is enhanced in comparison to the simulation results. The theory predicts $`40<f_c<48`$ and eliminates the domain of stability of the bcc crystal. Otherwise, as also shown in Fig. 1, the same phase behavior as determined from simulations emerges.
We finally note that all our predictions for $`\rho \lesssim 2\rho ^{*}`$, i.e. $`\eta \lesssim 1.0`$, should be verifiable in scattering experiments, since for these densities pair interactions are dominant. In fact, in recent experimental work on spherical diblock copolymer micelles, Gast and coworkers have already confirmed a part of our results . The freezing transition in fcc and bcc crystals depending on the number of arms $`f`$ is found as well as reentrant melting with increasing $`\eta `$ . For the ”most starlike” system, a reentrant freezing is also observed as predicted in Fig. 1. For $`\eta \gtrsim 1.0`$ however, when three stars exhibit overlaps within their coronae, many body interactions become important, which we have neglected in our calculations using the pair potential (1). Nevertheless, from a theoretical point of view, this potential turned out to be interesting also for $`\eta \gtrsim 1.0`$, resulting for the first time in a stable diamond structure for a purely radially symmetric pair interaction.
In conclusion, we have determined the phase diagram of star polymers over a broad range of arm numbers $`f`$ and packing fractions $`\eta `$ by computer simulations and theory. The phase diagram includes a fluid phase as well as four stable crystal phases. These crystal phases are an fcc crystal and a bcc crystal, as well as an unusual anisotropic bco structure and a diamond crystal.
It is a pleasure to thank Prof. Daan Frenkel for helpful remarks. We further thank the Deutsche Forschungsgemeinschaft for support within SFB 237.
|
no-problem/9905/quant-ph9905008.html
|
ar5iv
|
text
|
# Efficient Refocussing of One Spin and Two Spin Interactions for NMR Quantum Computation
## I Introduction
Much of the power and utility of NMR stems from the ease with which the experimenter can control the effective Hamiltonian experienced by the spin system. In conventional NMR this permits different interactions to be studied individually, while in NMR quantum computation this process is used to generate Hamiltonians corresponding to quantum logic gates between specific spins . This manipulation can be achieved using a variety of techniques , but the simplest and most important approach is the use of spin echoes. Applying a $`180^{\circ }`$ pulse to a single spin in the middle of some evolution period acts to refocus any evolution occurring as a result of one spin interactions (that is, chemical shifts) or two spin interactions (spin–spin coupling, assumed to be *weak*) involving that spin. Thus the corresponding terms in the spin Hamiltonian are effectively deleted.
This approach is simple to apply in systems containing only a small number of coupled spins, but must be treated with caution when applied to larger systems, especially when all the spins are coupled to one another. The Hamiltonian describing evolution of a fully coupled spin system of $`N`$ spins contains $`N`$ one spin terms and $`N(N1)/2`$ two spin terms, and it would be possible to produce a large number of effective Hamiltonians, corresponding to any desired combination of these terms. For simplicity we will concentrate on one particularly simple Hamiltonian in which only one chemical shift term is retained; two other simple Hamiltonians (in which only one coupling is retained, or all interactions are refocussed) can be easily generated by small modifications to the corresponding pulse sequences.
A pulse sequence for achieving this simplification in a two spin system is shown in figure 1. In this case the refocussing process is simple, and can be achieved with only two time periods, separated by a single $`180^{\circ }`$ pulse. For completeness it is necessary to apply a final $`180^{\circ }`$ pulse to spin $`1`$ to ensure that each spin experiences an even number of $`180^{\circ }`$ pulses (not gates), but in many cases such pulses can be omitted. When applied to larger spin systems, however, it is necessary to refocus many more interactions, requiring a larger number of time periods. The conventional approach is to recursively nest copies of sequences like that shown in figure 1 within one another, as shown for a fully coupled four spin system in figure 2. While this nesting process is effective, it is exponentially inefficient, in that the number of time periods and $`180^{\circ }`$ pulses required grows exponentially with the number of spins in the spin system: while a four spin system requires that the evolution period be divided into eight sections, a five spin system will require sixteen sections, and so on.
Although this nesting process is very widely used within NMR, a far more efficient scheme is available. Here we describe this efficient refocussing scheme, and show how it may be used to create the two effective Hamiltonians described above. We also discuss the application of this scheme to partially coupled spin systems.
## II Theory
While spin echo sequences are usually drawn out as sequences of pulses, as shown above, it is more convenient to describe a sequence mathematically by considering the evolution of each spin during the various equal time periods of free precession (the $`180^{\circ }`$ pulses are assumed to be of negligible duration). Suppose that before the start of the spin echo sequence a spin is in a state of $`p=+1`$ quantum coherence; the effect of the $`180^{\circ }`$ pulse is to convert this to $`-1`$ quantum coherence, and vice versa. The evolution of the spin will depend on its coherence order, and thus the evolution during any period can be described by a set of numbers taking the values $`+1`$ and $`-1`$, where the value changes sign each time a $`180^{\circ }`$ pulse is applied to the spin.
These sets of values can then be gathered together into a matrix, whose rows correspond to the individual spins and whose columns correspond to the different time periods. Thus the sequence depicted in figure 1 can be described by the matrix
$$M_2=\left(\begin{array}{cc}\hfill 1& \hfill 1\\ \hfill 1& \hfill -1\end{array}\right),$$
(1)
while that shown in figure 2 corresponds to the matrix
$$M_4=\left(\begin{array}{cccccccc}\hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill 1\\ \hfill 1& \hfill -1& \hfill 1& \hfill -1& \hfill 1& \hfill -1& \hfill 1& \hfill -1\\ \hfill 1& \hfill 1& \hfill -1& \hfill -1& \hfill 1& \hfill 1& \hfill -1& \hfill -1\\ \hfill 1& \hfill 1& \hfill 1& \hfill 1& \hfill -1& \hfill -1& \hfill -1& \hfill -1\end{array}\right).$$
(2)
The refocussing effected by these pulse sequences can then be easily explained by examining the properties of the corresponding matrices. The chemical shift of a spin will be refocussed as long as the corresponding row of the matrix $`M`$ contains the same number of plus and minus ones. Similarly the coupling between two spins will be refocussed if the vector obtained by multiplying pairs of numbers from the rows corresponding to the two spins contains the same number of plus and minus ones. More concisely, a spin–spin coupling will be refocussed if the corresponding rows are orthogonal, while a chemical shift will be refocussed if the corresponding row is orthogonal to a row of ones.
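These criteria are easy to test mechanically; a minimal Python sketch (an illustration, using the conventional four-spin matrix of equation 2 as input):

```python
import numpy as np

def refocused(M):
    """Given a +/-1 matrix M (rows = spins, columns = equal time periods),
    report which chemical shifts and which couplings are refocused."""
    N = M.shape[0]
    shifts = [M[i].sum() == 0 for i in range(N)]              # row orthogonal to (1,...,1)
    couplings = {(i, j): (M[i]*M[j]).sum() == 0
                 for i in range(N) for j in range(i+1, N)}    # rows mutually orthogonal
    return shifts, couplings

M4 = np.array([[ 1,  1,  1,  1,  1,  1,  1,  1],
               [ 1, -1,  1, -1,  1, -1,  1, -1],
               [ 1,  1, -1, -1,  1,  1, -1, -1],
               [ 1,  1,  1,  1, -1, -1, -1, -1]])
print(refocused(M4))   # only spin 0's shift survives; every coupling is refocused
```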
These two properties are sufficient to allow us to construct refocussing matrices, and thus pulse sequences, with the desired properties. Consider a system of $`N`$ spins, where we wish to refocus all the interactions *except* the chemical shift of spin $`0`$. This can be achieved by constructing a refocussing matrix comprising one row of ones, and $`N1`$ rows of plus and minus ones, all of which are orthogonal to one another and to the first row; the most efficient refocussing sequence will correspond to the matrix with the smallest number of columns. Such matrices are closely related to the well known Hadamard matrices.
The Hadamard matrix corresponding to a two spin system, $`H_2`$ takes the simple form
$$H_2=M_2=\left(\begin{array}{cc}\hfill 1& \hfill 1\\ \hfill 1& \hfill -1\end{array}\right),$$
(3)
and so the conventional two spin sequence (figure 1) is, not surprisingly, the most efficient. The four spin Hadamard matrix, $`H_4`$, can be calculated using
$$H_4=H_2\otimes H_2=\left(\begin{array}{cccc}\hfill 1& \hfill 1& \hfill 1& \hfill 1\\ \hfill 1& \hfill -1& \hfill 1& \hfill -1\\ \hfill 1& \hfill 1& \hfill -1& \hfill -1\\ \hfill 1& \hfill -1& \hfill -1& \hfill 1\end{array}\right),$$
(4)
and is only half the size of its conventional equivalent, $`M_4`$ (equation 2). The corresponding pulse sequence, shown in figure 3, is similarly shorter than the conventional equivalent (figure 2). Note that, with the exception of spin $`0`$, there is no particular significance to the spin labels, and individual nuclei can be assigned to the different spin numbers at will.
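The Kronecker-product construction extends to any power of two; a short sketch (Hadamard matrices for other multiples of four exist but require different constructions):

```python
import numpy as np

def sylvester_hadamard(n_spins):
    """Smallest Sylvester-type Hadamard matrix H_{2^k} with at least
    n_spins rows (a sketch: only powers of two are produced here)."""
    H, H2 = np.array([[1]]), np.array([[1, 1], [1, -1]])
    while H.shape[0] < n_spins:
        H = np.kron(H2, H)
    return H

H4 = sylvester_hadamard(4)                # 4 spins -> 4 time periods
assert (H4 @ H4.T == 4*np.eye(4)).all()   # rows orthogonal: all couplings refocused
```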
Unlike conventional spin echo sequences, these efficient sequences involve the application of simultaneous $`180^{\circ }`$ pulses to two or more spins. In principle this need not be a problem, but in practice it may be necessary to choose $`B_1`$ field strengths with care, so as to minimise the effects of Hartmann-Hahn transfers (whether homonuclear or heteronuclear).
With slight modifications, these pulse sequences can also be used to generate effective Hamiltonians corresponding to a single spin–spin coupling. The procedure is straightforward: to generate a pure coupling between spins $`0`$ and $`1`$ (for example), simply copy the pulses applied to spin $`1`$ and apply them to spin $`0`$. Similarly, a pulse sequence in which *all* one and two-spin interactions are refocussed in an $`N`$ spin system can be obtained by using the bottom $`N`$ lines of a sequence which refocusses everything except the first chemical shift in an $`N+1`$ spin system.
The usefulness of this procedure depends on the existence of Hadamard matrices with appropriate dimensions. Ideally the matrix should have the same size as the number of spins in the spin system. Unfortunately it is usually only possible to form Hadamard matrices whose dimension is a multiple of four (the two by two Hadamard matrix, $`H_2`$, is a special case); Hadamard matrices can be formed for many (though not all) multiples of four, including all such multiples below fifty. When the number of spins is *not* a multiple of four, it is instead necessary to use the next largest appropriate multiple of four, and select an appropriate subset of the corresponding pulse sequence. In general this subset would probably be chosen to minimise the number of simultaneous pulses in the pulse sequence. For example, in a three spin system one possible pulse sequence is to use the lines labelled $`I^0`$, $`I^2`$ and $`I^3`$ in figure 3; this choice corresponds to the conventional pulse sequence! This procedure is always more efficient than the conventional approach except for the case of two or three spins: in these cases the conventional and Hadamard approaches give identical sequences.
As discussed below, in real systems where only some of the possible couplings are present it is not necessary to refocus all the couplings, and it is instead possible to use simpler pulse sequences. Spin coupling networks in molecules are usually quite local, with resolved couplings only being seen to a small number of close neighbours; it seems likely that the four spin (figure 3) and eight spin (figure 4) sequences would suffice for almost any system.
## III Partially coupled spin systems
In real life fully coupled spin systems are rather rare; in most cases only a small subset of the possible couplings can be resolved. It is not necessary to refocus all these unresolved couplings, which are effectively absent, and thus the refocussing process can be greatly simplified. In small spin systems it is practical to derive such simplified sequences by hand, but in larger systems it is useful to have an algorithmic procedure by which this can be achieved.
This is simply realised by treating the spin system as a non-complete graph. A graph comprises a set of vertices, connected by edges; this corresponds to a set of nuclei connected by J-couplings. A partially coupled spin system, in which some of the couplings are absent, corresponds to a non-complete graph. A graph can be coloured, by assigning each vertex one of a number of different colours, and the colouring scheme is called a proper colouring if no two connected vertices are the same colour. The graph may then be characterised by a chromatic number, $`\chi `$: this is the smallest number of colours required to properly colour the graph. In a complete graph (a fully coupled spin system) $`\chi =N`$, but in a partially coupled system $`\chi `$ can be much smaller.
The significance of this observation is that if a spin system is represented by a properly coloured graph, then it is not necessary to refocus interactions between nuclei corresponding to vertices with the same colour. To refocus all the interactions in an $`N`$ spin system it suffices to create a pulse sequence corresponding to a $`\chi `$ spin system, and apply identical pulses to all nuclei with the same colour. In this case there is no need for further concern about Hartman-Hahn transfers, as these additional simultaneous pulses will only be applied to spins which are not coupled to one another.
Clearly this approach is only practical if it is easy to determine both the value of $`\chi `$ and a corresponding proper colouring. In general this is extremely difficult: indeed, determining $`\chi `$ is an NP hard problem. This difficulty is, however, more apparent than real, as it is relatively simple to estimate $`\chi `$, and to find corresponding proper colourings, for certain simple types of graph such as those likely to occur in coupled spin systems. If the maximum number of edges at any vertex (that is, the maximum number of spins coupled to any other spin) is $`k`$, then the graph is said to be of degree $`k`$, and $`\chi \le k+1`$; in all but a few special cases $`\chi \le k`$. Furthermore, it is easy to construct a proper colouring using at most $`k`$ (or $`k+1`$) colours. Creating a sequence which refocuses all interactions except one chemical shift or one J-coupling is slightly more complicated, but the simplest approach is to assign the nuclei in question a unique colour; at worst this will increase the number of colours required by 1.
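A minimal greedy-colouring sketch along these lines (an illustration; any proper colouring with few colours will do):

```python
def greedy_colouring(couplings, n):
    """Proper colouring of the coupling graph with at most (degree + 1) colours.
    couplings: iterable of (i, j) pairs of coupled spins, with 0 <= i, j < n."""
    adj = [set() for _ in range(n)]
    for i, j in couplings:
        adj[i].add(j); adj[j].add(i)
    colour = [None]*n
    for v in range(n):
        used = {colour[u] for u in adj[v] if colour[u] is not None}
        colour[v] = next(c for c in range(n) if c not in used)
    return colour

# Example: a 6-spin chain (each spin coupled only to its neighbours) needs 2 colours
print(greedy_colouring([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)], 6))  # [0, 1, 0, 1, 0, 1]
```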
## IV Conclusions
The use of Hadamard matrices and non-complete graphs provides a powerful language for describing pulse sequences which refocus one spin and two spin interactions in NMR. This approach permits the construction of refocussing pulse sequences which are much shorter than their conventional equivalents.
## Acknowledgements
We thank M. Mosca for useful discussions. JAJ is a Royal Society University Research Fellow. EK thanks the National Security Agency for financial support.
|
no-problem/9905/cond-mat9905270.html
|
ar5iv
|
text
|
# Application of exchange Monte Carlo method to ordering dynamics
## Abstract
We apply the exchange Monte Carlo method to the ordering dynamics of the three-state Potts model with the conserved order parameter. Even for the case of a deep quench to low temperatures, we have observed rapid domain growth; we have thus demonstrated the efficiency of the exchange Monte Carlo method for the ordering process. The late-stage growth law has been found to be $`R(t)\sim t^{1/3}`$ for the conserved order parameter of the three-component system.
PACS numbers: 64.75.+g, 05.10Ln, 75.10.Hk
The ordering dynamics in the spinodal decomposition has captured a lot of attention . It is considered that the domain-size growth in the late stage is governed by an algebraic law, $`R(t)\sim t^n`$. The classical Lifshitz-Slyozov theory gives the growth exponent $`n`$=1/3 in the case of the spinodal decomposition of the conserved order parameter. On the other hand, the late-stage ordering process of the nonconserved order parameter is described by the classical Lifshitz-Allen-Cahn law, $`n=1/2`$ . Although there has been controversy about the value of the growth exponent $`n`$ especially for the conserved order parameter case, the recent studies have revealed that those growth exponents are universal, that is, they do not depend on the space dimensionality or the number of the components of the order parameter.
In simulational studies of ordering process of phase separation, we often encounter the problem of slow dynamics. The problem of slow dynamics is found in the wide range of problems in computer simulations. Other examples are the critical slowing down near the critical point and the slow dynamics due to randomness or frustration. There are several attempts to overcome slow dynamics in the Monte Carlo (MC) simulation. We may classify these attempts into two categories. The first one is the cluster algorithm, such as that of Swendsen-Wang and that of Wolff . The other one is the extended ensemble method. The multi-canonical method , the simulated tempering , and the exchange MC method are examples of the second category. The exchange MC method has been successfully applied to the problem of spin glasses .
Usually, it is considered that the extended ensemble method can be used only for static problems because the extended ensemble affects the dynamics. Is it always so? In this paper, we apply the exchange MC method , which treats the exchange of replicas with different temperatures, to the ordering problem, and test if we can discuss the ordering dynamics by using the exchange MC method. We pay attention to the fact that the ordering phenomena after quenching are controlled by a zero-temperature fixed point, in the language of the renormalization group, irrespective of the quench temperature below the critical temperature, $`T_c`$. We also note that in the case of algebraic growth, the composite of the growth law of different temperatures may become an algebraic one again in the leading order, which will be discussed later.
As a model system, we pick up the three-component system of the conserved order parameter. The reason is as follows. The ordering dynamics of the three-component system (three-state Potts model) is slower compared to the two-component system (Ising model), and, moreover, the temporal growth of the conserved order parameter is slower than that of the nonconserved order parameter . Since it is a difficult job to determine the late-stage growth law of the conserved order parameter due to the slow dynamics, the three-component system of the conserved order parameter is suitable for testing the efficiency of the exchange MC method.
We perform the MC simulation of the three-state ferromagnetic Potts model on the square lattice, whose Hamiltonian is given by
$$\mathcal{H}=-J\sum_{\langle i,j\rangle }\delta (S_i,S_j),$$
(1)
where $`S_j`$ takes one of the three states, say, $`a,b`$ and $`c`$. For the spin-update of the MC simulation, we employ the Kawasaki dynamics of nearest-neighbor pair exchange because we treat the conserved order parameter.
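As a concrete illustration, the following is a minimal, unvectorized sketch of a single Kawasaki pair-exchange attempt for this Hamiltonian; the paper's actual implementation is multispin-coded and vectorized (see below), and the Metropolis form of the acceptance and all names here are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(spins, i, j, J=1.0):
    """-J times the number of neighbors equal to spin (i, j), periodic boundaries."""
    L = spins.shape[0]
    s = spins[i, j]
    return -J * sum(s == n for n in (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                                     spins[i, (j + 1) % L], spins[i, (j - 1) % L]))

def kawasaki_step(spins, beta, J=1.0):
    """One attempted nearest-neighbor pair exchange; the order parameter is conserved."""
    L = spins.shape[0]
    i, j = rng.integers(L, size=2)
    di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    i2, j2 = (i + di) % L, (j + dj) % L
    if spins[i, j] == spins[i2, j2]:
        return                                 # exchanging equal spins changes nothing
    e_old = local_energy(spins, i, j, J) + local_energy(spins, i2, j2, J)
    spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]
    e_new = local_energy(spins, i, j, J) + local_energy(spins, i2, j2, J)
    # the doubly counted shared bond is invariant under the swap, so it cancels here
    if e_new - e_old > 0 and rng.random() >= np.exp(-beta * (e_new - e_old)):
        spins[i, j], spins[i2, j2] = spins[i2, j2], spins[i, j]   # reject: swap back
```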
Here, we briefly review the exchange MC method . We treat a compound system which consists of $`M`$ replicas of the system. The $`m`$-th replica is associated with the inverse temperature $`\beta _m`$. We consider the extended ensemble, which is denoted by $`\{X\}=\{X_1,X_2,\mathrm{}X_M\}`$. Then, the partition function of the compound system is given by
$$Z=\mathrm{Tr}_{\{X\}}\mathrm{exp}\left(-\sum_{m=1}^{M}\beta _m\mathcal{H}(X_m)\right).$$
(2)
To obtain an equilibrium distribution of the whole set of replicas, a replica-exchange update process should be introduced. By considering the detailed balance condition, the transition probability for exchanging the $`i`$-th replica and the $`j`$-th replica may be chosen as
$$W(\{X_i,\beta _i;X_j,\beta _j\}\to \{X_i,\beta _j;X_j,\beta _i\})=\{\begin{array}{cc}1\hfill & (\mathrm{\Delta }\le 0)\hfill \\ e^{-\mathrm{\Delta }}\hfill & (\mathrm{\Delta }>0)\hfill \end{array}$$
(3)
where
$$\mathrm{\Delta }=(\beta _i-\beta _j)\left(\mathcal{H}(X_j)-\mathcal{H}(X_i)\right).$$
(4)
It should be noted that there is freedom in the choice of the inverse temperatures $`\beta _m`$ of the replicas. They should be chosen such that the replica exchange occurs with non-negligible probability for all adjacent pairs of replicas, and such that each replica traverses the whole temperature range within a suitable number of Monte Carlo steps per spin (MCS).
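Continuing the sketch above, the exchange step of Eqs. (3) and (4) for a ladder of replicas can be written as follows; in a production code one would cache the replica energies instead of recomputing them at every attempt, and all names here are ours.

```python
def total_energy(spins, J=1.0):
    """H = -J times the number of satisfied bonds, periodic boundaries."""
    return -J * float((spins == np.roll(spins, 1, axis=0)).sum()
                      + (spins == np.roll(spins, 1, axis=1)).sum())

def replica_exchange_sweep(replicas, betas, J=1.0):
    """Attempt one swap for every adjacent temperature pair, per Eqs. (3)-(4)."""
    for m in range(len(betas) - 1):
        delta = (betas[m] - betas[m + 1]) * (total_energy(replicas[m + 1], J)
                                             - total_energy(replicas[m], J))
        if delta <= 0 or rng.random() < np.exp(-delta):
            # swapping the configurations between temperature slots is equivalent
            # to exchanging the inverse temperatures of the two replicas
            replicas[m], replicas[m + 1] = replicas[m + 1], replicas[m]
```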
We make a few comments on the coding of the MC simulation. We use the technique of multispin coding , where each bit within one word is assigned to the spin of a different system; on a 32-bit machine one can thus treat 32 systems simultaneously. For the three-state Potts model, two words are needed to represent one set of spin states, in contrast to the Ising model, where the spin states can be represented by a single word. A multispin coding algorithm for the three-state Potts model obeying Glauber dynamics has already been presented . Here we extend this algorithm to the Kawasaki dynamics of nearest-neighbor pair exchange; as far as we know, a multispin coding algorithm for Kawasaki dynamics has not been published so far, even for the Ising model.
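To make the two-word representation concrete, here is a small sketch of the bitwise equality test $`\delta (S_i,S_j)`$ that enters the Hamiltonian, with the three states encoded in two bit-planes as a → (0,0), b → (0,1), c → (1,0); this particular encoding and the function are our own illustration, not the authors' code.

```python
def potts_equal(hi1, lo1, hi2, lo2, mask):
    """Bit k of the result is 1 iff the two spins agree in the k-th system."""
    differ = (hi1 ^ hi2) | (lo1 ^ lo2)
    return ~differ & mask                 # mask = (1 << n_systems) - 1

# Example with 4 systems: site 1 holds spins (a, b, c, a), site 2 holds (a, c, c, b).
hi1, lo1 = 0b0100, 0b0010
hi2, lo2 = 0b0110, 0b1000
print(bin(potts_equal(hi1, lo1, hi2, lo2, 0b1111)))   # 0b101: systems 0 and 2 agree
```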
It is well known that vectorization is possible if one decomposes the lattice into interpenetrating sublattices; this technique is effective for fast computation on a vector computer. For the MC simulation of Kawasaki dynamics, a decomposition into eight sublattices is useful, as illustrated in figure 1. If a single spin represented by a black closed circle is picked, we can choose either a red circle or a blue circle as its pair spin. For the calculation of the local energy, we need the states of the spins connected by dotted lines. All the spins concerned are independent of the other black spins; thus all the calculations can be vectorized. This choice of sublattice decomposition is also convenient for the fast Fourier transform (FFT) calculation.
We have performed simulations of the three-state ferromagnetic Potts model with linear sizes $`L=`$ 64, 128 and 256. We quench the system to the desired temperatures, starting from the disordered state ($`T=\mathrm{\infty }`$). We assign 16 of the 32 multispin-coding systems to the replica-exchange MC calculation at 16 temperatures; the other 16 systems are used for the standard MC calculation. For example, we consider 16 temperatures starting from 0.3 in units of $`J`$ with a separation of 0.04, so that the highest temperature is 0.9. The transition temperature of the three-state ferromagnetic Potts model on the square lattice is known exactly, $`T_c=[\mathrm{ln}(1+\sqrt{3})]^{-1}=0.99497`$ . Since we deal with ordering phenomena, all the temperatures are chosen below $`T_c`$. The numbers of spins taking each of the three states are fixed to be equal. We attempt a replica exchange after each MCS of the spin updates. Typical runs are $`8.0\times 10^5`$ MCS.
There are several ways to estimate the characteristic length scale of the domain size. Binder and Stauffer pointed out that the temporal change of the total energy from the equilibrium energy is appropriate for estimating the domain size; that is,
$$R_E(t)=N\left[\langle \mathcal{H}(t)\rangle -\langle \mathcal{H}\rangle _T\right]^{-1}$$
(5)
where $`\langle \mathcal{H}\rangle _T`$ is the equilibrium energy and $`\langle \cdots \rangle `$ denotes a sample average. There are other quantities for estimating the domain size; for example, one may use a moment of the time-dependent structure factor $`S(k)`$, which we have calculated by performing the FFT. However, since the results are essentially the same, we only plot the time evolution of the excess energy
$$E=(\langle \mathcal{H}(t)\rangle -\langle \mathcal{H}\rangle _T)/N$$
(6)
in figure 2, where time $`t`$ is measured in MCS. We compare the data of the exchange MC method with those of the standard MC method. The system size is $`128\times 128`$, and the quench temperatures are $`T`$ = 0.3 and 0.42. The average has been taken over 16 samples for each curve.
Comparing the purple and green curves in figure 2, we see that the domain growth becomes very slow at low temperatures for the standard MC; this is because thermal diffusion becomes infrequent at sufficiently low temperatures. On the contrary, the red and blue curves in figure 2 show that in the exchange MC the growth rate is faster at the lower temperature, owing to the replica exchange process.
Examples of real-space snapshots and the corresponding structure factor $`S(k)`$ are given in figure 3, where the system size is $`256\times 256`$ and the quench temperature is $`T=0.3`$. The time is $`8.0\times 10^5`$ MCS, and the data for the exchange MC and for the standard MC are compared in (a) and (b), respectively. Three colors, red, green and blue, represent the three states in the real-space snapshots. The vertical and horizontal axes of the structure factor span $`-\pi `$ to $`\pi `$ in units of the inverse lattice spacing. One sees from figure 3 that even for a deep quench to low temperature, rapid domain growth is obtained with the exchange MC method. In the case of $`T=0.3`$, to attain the same domain size, the calculation with the exchange MC method is about 200 times faster than that with the standard MC method, at the cost of 16 simultaneous calculations at different temperatures. Moreover, the data for all the temperatures can be used, and the efficiency becomes even more prominent at much lower temperatures.
In order to estimate the growth exponent, it is convenient to consider the effective exponent defined by
$$n_{\mathrm{eff}}=\frac{d\mathrm{ln}R(t)}{d\mathrm{ln}t}.$$
(7)
We can estimate the growth exponent $`n`$ by extrapolating $`n_{\mathrm{eff}}`$ to the limit $`t\to \mathrm{\infty }`$. Following Huse , we plot the effective exponent $`n_{\mathrm{eff}}`$, calculated from the data shown in figure 2, as a function of $`1/R_E`$ in figure 4, and compare the data of the exchange MC method with those of the standard MC. From the exchange MC data shown in figure 4, we may reliably estimate $`n`$ as 1/3, which is consistent with the conclusion of Ref. . In the standard MC study , the growth exponent was estimated as 1/3 only after a long extrapolation: the value of the effective exponent at the largest domain size was 0.23. By contrast, our value of $`n_{\mathrm{eff}}`$ obtained with the exchange MC simulation is 0.32, so the estimate of the exponent $`n`$ is considerably more reliable.
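In practice, $`n_{\mathrm{eff}}`$ can be obtained from the measured $`R_E(t)`$ by a log-log finite difference; the short sketch below assumes the data have already been sample-averaged and are recorded at roughly logarithmically spaced times.

```python
def effective_exponent(t, R):
    """n_eff = d ln R / d ln t of Eq. (7), by centered differences (valid at t[1:-1])."""
    lt = np.log(np.asarray(t, dtype=float))
    lR = np.log(np.asarray(R, dtype=float))
    return (lR[2:] - lR[:-2]) / (lt[2:] - lt[:-2])

# Huse-style extrapolation: plot effective_exponent(mcs_times, R_E) against
# 1.0 / R_E[1:-1] and read off the growth exponent n as the 1/R -> 0 intercept.
```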
Although the replica exchange dynamics is not expected to correspond to a real one, we have found that the domain growth is controlled by a simple algebraic growth law, $`R(t)\propto t^{1/3}`$, whose value is consistent with a direct simulation of the same model . In the standard MC the domain growth becomes very slow at low temperatures because of the lack of thermal diffusion; to study the late-stage dynamics one therefore often chooses a relatively high temperature, for example $`T=0.5T_c`$, where interfacial fluctuations become large. The advantage of the exchange MC method is that rapid growth of order is obtained together with small interfacial fluctuations. The question is whether the replica exchange process modifies the dynamics. In general, the answer is yes; for the problem of ordering phenomena, however, it works well even for the estimate of the growth exponent. Why can the exchange MC method simulate the dynamics of ordering phenomena? The ordering phenomena are controlled by a zero-temperature fixed point, in the language of the renormalization group, irrespective of the quench temperature; thus the replicas associated with different (inverse) temperatures obey the same growth law. Even if the growth process is a composite of the growth laws at different temperatures, the resulting growth behavior is again algebraic,
$$C_1t^n+C_2t^n+C_3t^n+\mathrm{}\propto t^n.$$
(8)
One comment should be made on the choice of the replica temperatures. Although we have shown only the data for the 16 temperatures from 0.3 to 0.9 with a separation of 0.04, other choices of temperatures give essentially the same results.
In summary, we have tested the efficiency of the exchange MC method for the ordering process after quenching. Even for deep quenches to low temperatures, we have observed rapid domain growth. Although the dynamics including the exchange process is not a simple one, the domain growth has been found to be controlled by a simple algebraic growth law, $`R(t)\propto t^{1/3}`$, for the conserved order parameter of the three-component system. It will be interesting to apply this method to more complicated problems; the effect of surfactants on phase-separation dynamics is now being studied with it .
The author would like to thank T. Kawakatsu, T. Ito, K. Hukushima and K. Nemoto for valuable discussions. This work was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture, Japan. The computation in this work has been done using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
# A Survey for Low Surface Brightness Galaxies Around M31. II. The Newly Discovered Dwarf Andromeda VI
## 1 Introduction
The properties, frequency, and origin of dwarf galaxies have important implications for galaxy formation and cosmology. With an eye toward these implications, Gallagher & Wyse (1994), Grebel (1997), Da Costa (1998), and Mateo (1998) recently have reviewed the dwarf galaxies of the Local Group. As an example of a property of this class of galaxy where our knowledge is far from complete, consider the faint end of the galaxy luminosity function. Figure 1 shows a luminosity function, based on Table 4 of Mateo (1998), for all dwarf galaxies in the Local Group, with those dwarfs associated with the Milky Way, M31, and the general Local Group indicated.<sup>1</sup><sup>1</sup>1See Table 1 of Mateo (1998) for the association between individual galaxies and subgroups within the Local Group. For the few galaxies with two possible associations, we count them as “half galaxies” in each subgroup. The absolute magnitudes of the LMC and SMC are from Westerlund (1997). Note that the faintest dwarf galaxies appear to be more common around the Milky Way than in other parts of the Local Group. Is this a real effect or, instead, due to increasing incompleteness for low-luminosity dwarf galaxies as one looks beyond the outer halo of the Galaxy? The only way to resolve this and related questions is through more sensitive searches for low-luminosity galaxies in the Local Group.
We have undertaken a survey for low surface brightness dwarf galaxies around M31. Our methodology is the application of digital filtering techniques to the Second Palomar Sky Survey (POSS-II; Reid et al. 1991, Reid & Djorgovski 1993). Our survey seeks to cover a larger area around M31 and to reach a deeper limiting central surface brightness than the pioneering survey of van den Bergh (1972, 1974), which revealed three M31 dwarf companions (And I, II & III). In the first paper of this series (Armandroff, Davies, & Jacoby 1998b; hereafter referred to as Paper I), we described our methodology, discussed its application to 1550 deg<sup>2</sup> of the sky survey, and announced the first discovery of this survey: the new dwarf galaxy And V. We also presented a color–magnitude diagram for And V which yields a distance to this galaxy that is equivalent, within the uncertainties, to that of M31. The line-of-sight distance to And V and its projected distance from M31 suggest association with M31. The color–magnitude diagram also implies a mean metal abundance for And V of \[Fe/H\] $`\sim `$ –1.5 and lacks bright blue stars. This absence of a young population in the color–magnitude diagram, the lack of H$`\alpha `$ emission in the differencing of our deep H$`\alpha `$ and $`R`$ images, and the non-detection of far-infrared emission all imply that And V is a dwarf spheroidal (dSph) galaxy, and not a dwarf irregular galaxy. And V thus joined M31’s three other dwarf spheroidal companions (And I, II & III) and three more luminous dwarf elliptical companions (NGC 147, 185 and 205). IC 10, a dwarf irregular, LGS 3, a transition object between dwarf irregular and dwarf spheroidal, and M32, a low-luminosity elliptical, complete the inventory of dwarf galaxies commonly accepted as associated with M31.
In this paper, we discuss the next product of our survey: And VI. The newly discovered galaxy And VI is found to be another dwarf spheroidal companion to M31 (see also Karachentsev & Karachentseva 1999; Grebel & Guhathakurta 1999; Hopp et al. 1999).
## 2 Finding And VI
Using the methodology of Paper I, we found an excellent low surface brightness dwarf candidate at the following celestial coordinates: $`\alpha `$ = 23<sup>h</sup>51<sup>m</sup>46.3<sup>s</sup>, $`\delta `$ = +24°34<sup>′</sup>57<sup>′′</sup> (J2000.0). These correspond to Galactic coordinates of: $`l`$ = 106.0°, $`b`$ = –36.3°. Anticipating the evidence presented below and following van den Bergh (1972), we will call this new galaxy Andromeda VI. Figure 2 shows And VI’s location on the sky relative to M31, M31’s known companions, and our search area. Figure 3 displays And VI on the digitized POSS-II, raw and after our enhancement procedure. One sees a strong resemblance between And VI and the known M31 dwarf spheroidals And II & III on the raw and processed POSS-II images (compare Figure 3 with Figure 1 of Armandroff et al. 1998b). And VI is visible on the POSS-II $`B`$ and $`R`$ transparencies. And I, II & III are also visible on the POSS-II transparencies.
There are three possible reasons why And VI was not recognized until recently. First, it is just outside the van den Bergh (1972, 1974) search area. Second, And VI is located right next to a bright star, so it might have been thought to be a ghost reflection caused by that star. Finally, examination of the POSS-II transparencies reveals diffuse Galactic emission to be common in this region, so And VI could have been interpreted as part of this extended emission.
## 3 Follow-Up Observations
As a first step toward clarifying its nature, And VI was imaged at the Kitt Peak National Observatory 4-m telescope prime focus with the T2KB CCD on 1998 January 23 (U.T.). Only one $`V`$ exposure of 300 sec was obtained due to And VI’s limited visibility in January. And VI resolved nicely into stars in this short $`V`$ image, suggesting that it is indeed a nearby dwarf galaxy.<sup>2</sup><sup>2</sup>2Several months after our confirmation that And VI is a bona fide nearby dwarf galaxy, we reported our discovery in the “Dwarf Tales” Newsletter (Armandroff et al. 1998c). Karachentsev & Karachentseva (1998) also reported their independent finding of And VI, calling it the Pegasus Dwarf, in “Dwarf Tales” and in a subsequent journal submission (Karachentsev & Karachentseva 1999).
And VI was imaged more deeply using the WIYN<sup>3</sup><sup>3</sup>3The WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories. 3.5-m telescope, the S2KB CCD (a 2048 $`\times `$ 2048 STIS device) mounted at the Nasmyth focus, and the $`B`$, $`V`$ and $`I`$ filters on several occasions during 1998 September and October. The scale was 0.197 arcsec/pixel, and each image covered 6.7$`\times `$6.7 arcmin. The exposure times, dates of observation, and the resulting image quality are listed in Table 1. The deep images were taken during three nights when the seeing conditions were excellent (0.6 – 0.8 arcsec FWHM). These deep exposures were calibrated via images taken on a photometric night during which 13 Landolt (1992) standards were observed. All images were overscan corrected, bias subtracted, and flat fielded in the standard manner. The $`I`$ images were defringed using the scaled combination of several dark-sky fringe frames.
Figure 4 displays a color image of And VI made from the $`B`$, $`V`$, and $`I`$ CCD images from WIYN. And VI resolves nicely into stars on the $`B`$, $`V`$ and $`I`$ images. The And VI images exhibit a smooth stellar distribution and a resemblance with the other M31 dwarf spheroidals (compare Figure 4 with Figure 1 of Armandroff et al. 1993 and/or Figure 4 of Paper I). And VI does not look lumpy or show obvious regions of star formation that would suggest a dwarf irregular, as opposed to dwarf spheroidal, classification. No obvious globular clusters are seen in And VI, though none are expected for a low-luminosity dwarf spheroidal (e.g., Sec. 4.3 of Da Costa & Armandroff 1995).
In order to look for possible ionized gas, And VI was observed with the KPNO 4-m telescope in H$`\alpha `$ narrowband and $`R`$ on 1998 July 15 (U.T.) with the CCD Mosaic Imager (Armandroff et al. 1998a). The H$`\alpha `$ filter used has a central wavelength of 6569 Å and a width of 80 Å FWHM in the f/3.2 beam of the telescope. Three narrowband exposures of 700 sec each and three $`R`$ exposures of 300 sec each were obtained. The average seeing was 1.2 arcsec FWHM, sampled at 0.26 arcsec/pixel. We computed scaling ratios between the combined H$`\alpha `$ and $`R`$ images and performed the subtraction. Figure 5 shows both the H$`\alpha `$ image and the continuum-subtracted H$`\alpha `$ image. No diffuse H$`\alpha `$ emission or H ii regions are detected in the And VI continuum-subtracted image. The only features seen in this image are in the vicinity of the brighter stars, where residuals are caused by the slightly different behavior of the point spread function (PSF) between $`R`$ and H$`\alpha `$. No diffuse H$`\alpha `$ emission or H ii regions were found in And V either (Paper I). The lack of H$`\alpha `$ emission in And VI rules out current high-mass star formation. Almost all Local Group dwarf irregular galaxies are strongly detected in H$`\alpha `$ (e.g., Kennicutt 1994). And VI’s lack of H$`\alpha `$ emission, coupled with its smooth, regular appearance on broadband images, suggests that it is a dwarf spheroidal galaxy (rather than a dwarf irregular).
We also examined the IRAS 12-, 25-, 60-, and 100-micron maps of the And VI region, using the FRESCO data product from IPAC<sup>4</sup><sup>4</sup>4IPAC is funded by NASA as part of the IRAS extended mission under contract to JPL.. And VI is not detected in any of the IRAS far-infrared bands. And I, II, III & V are not detected by IRAS either (see Paper I). Far-infrared emission, as seen by IRAS, is the signature of warm dust. As discussed in Paper I, some Local Group dwarf irregular galaxies are detected by IRAS. And VI’s non-detection in the far infrared is consistent with And VI being a dSph galaxy. However, the non-detection by IRAS does not eliminate the possibility that And VI is a low-luminosity relatively quiescent dwarf irregular galaxy. The H$`\alpha `$ data and the color–magnitude diagram provide stronger evidence on And VI’s classification.
## 4 Color–Magnitude Diagram
The next step in our study of And VI is the construction of a color–magnitude diagram for its resolved stars in order to derive its distance and explore its stellar populations characteristics. Instrumental magnitudes were measured on the deep And VI WIYN images using the IRAF implementation of the daophot crowded-field photometry program (Stetson 1987, Stetson et al. 1990). The standard daophot procedure was used, culminating in multiple iterations of allstar. Stars with anomalously large values of the daophot quality-of-fit indicator CHI were deleted from the photometry lists.
Large-aperture photometry was performed for the 13 well-exposed Landolt (1992) photometric standard stars. Adopting mean atmospheric extinction coefficients for Kitt Peak, photometric transformation equations were derived. Only a zeropoint and a linear color term of small size were needed to represent the transformation for each filter. For each of $`B`$, $`V`$, and $`I`$, several stars that are well exposed on both the deep And VI images and the And VI images from the photometric calibration night were used to derive the magnitude offsets between the deep images and the standard system.
The ($`I`$,$`V`$–$`I`$) color–magnitude diagram of And VI is shown in Figure 6. We used the spatial distribution of the stars in the red-giant region of the color–magnitude diagram to determine a reasonably precise center for And VI.<sup>5</sup><sup>5</sup>5The (x,y) coordinates of this center were transformed to celestial coordinates using the Hubble Space Telescope Guide Star Catalog. This is the origin of the celestial coordinates for And VI given in the first paragraph of Sec. 2. Only stars within 370 pixels (73 arcsec) of this center are plotted in Figure 6. This radius was chosen in order to provide a reasonably large sample of And VI members while minimizing contamination by field stars and is similar to the core radius of 76 arcsec (Caldwell 1999). Our And VI color–magnitude diagram clearly shows a red giant branch. The contamination of the color–magnitude diagram by foreground stars appears rather minor (see also below).
In order to interpret the color–magnitude diagram, an estimate of the foreground reddening is necessary. Note that different authors have adopted slightly different reddenings to the M31 companions. We adopt E($`B`$–$`V`$) = 0.06 $`\pm `$ 0.01 for And VI based on the IRAS/DIRBE results of Schlegel, Finkbeiner, & Davis (1998). This value is 0.02 mag larger than that adopted by Grebel & Guhathakurta (1999) because they applied a correction for the mean difference between the Schlegel et al. (1998) E($`B`$–$`V`$) values and those of Burstein & Heiles (1982). Assuming $`A_V`$ = 3.2$`\times `$E($`B`$–$`V`$) and the E($`B`$–$`V`$) to E($`V`$–$`I`$) conversion of Dean, Warren, & Cousins (1978), E($`V`$–$`I`$) = 0.08 $`\pm `$ 0.01 and $`A_I`$ = 0.11 $`\pm `$ 0.02.
A distance can be derived for And VI using the $`I`$ magnitude of the tip of the red giant branch (Da Costa & Armandroff 1990; Lee et al. 1993a). Figure 7 shows a cumulative luminosity function in $`I`$ for And VI (again, for stars within 370 pixels \[73 arcsec\] of the And VI center). Color limits of 0.9 $`<`$ $`V`$–$`I`$ $`<`$ 2.2 have been used in constructing the luminosity function in order to maximize its sensitivity to And VI red giants and to minimize its dependence on foreground contamination. No correction for foreground contamination has been applied to the luminosity function for the following reasons. Even in the corners of our And VI images, the And VI giant branch is seen. Thus we have no true foreground/background measurement. However, a color–magnitude diagram for the corners of our images (distance from And VI’s center $`>`$ 191 arcsec) that covers 2.8 times more area than represented in Figures 6 and 7 shows no stars with $`19<I<21`$ within the color limits used to calculate the luminosity function. Therefore, over the magnitude and color intervals in which we identify the red giant branch tip, Figures 6 and 7 are quite free of field contamination. Based on the magnitude at which the luminosity function begins to rise strongly (see Figure 7, especially the inset which shows the derivative of the cumulative luminosity function), the $`I`$ magnitude of the red giant branch tip in And VI is 20.56 $`\pm `$ 0.10. This value is supported by the apparent location of the red giant branch tip in the color–magnitude diagram (see Figure 6). For metal-poor systems such as And VI (as demonstrated below), the red giant branch tip occurs at $`M_I`$ = –4.0 (Da Costa & Armandroff 1990; Lee et al. 1993a). This results in distance moduli of ($`m`$–$`M`$)<sub>I</sub> = 24.56 and ($`m`$–$`M`$)<sub>0</sub> = 24.45 $`\pm `$ 0.10, corresponding to a distance of 775 $`\pm `$ 35 kpc.
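The arithmetic of this distance determination is compact enough to verify directly; the sketch below simply restates the numbers quoted above (the variable names are ours).

```python
I_trgb   = 20.56    # apparent I magnitude of the red giant branch tip
M_I_trgb = -4.0     # absolute I magnitude of the tip for metal-poor systems
A_I      = 0.11     # adopted foreground extinction in I

mu_I = I_trgb - M_I_trgb          # (m - M)_I = 24.56
mu_0 = mu_I - A_I                 # (m - M)_0 = 24.45
d_kpc = 10.0 ** (mu_0 / 5.0 + 1.0) / 1.0e3
print(mu_I, mu_0, d_kpc)          # 24.56, 24.45, ~776 (775 +/- 35 kpc in the text)
```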
As discussed in Da Costa et al. (1996) and Paper I, the best available Population II distances for M31 (on the same distance scale as the red giant branch tip calibration used above) are: 760 $`\pm `$ 45 kpc based on the red giant branch tip (Mould & Kristian 1986) or RR Lyrae stars (Pritchet & van den Bergh 1988) in the M31 halo; or 850 $`\pm `$ 20 kpc from the horizontal branch magnitudes of eight M31 globular clusters (Fusi Pecci et al. 1996). Our distance for And VI is consistent, within the combined uncertainties, with both of these M31 distances. Thus And VI is at approximately the same distance along the line of sight as M31, suggesting an association between And VI and M31. Note also that our line-of-sight distance of 775 $`\pm `$ 35 kpc for And VI is very similar to those of And I (810 $`\pm `$ 30 kpc; Da Costa et al. 1996), And III (760 $`\pm `$ 70 kpc; Armandroff et al. 1993), and And V (810 $`\pm `$ 45 kpc; Paper I), all on the same distance scale. And VI’s projected distance from the center of M31 is 271 kpc. The Galactic dwarf spheroidal Leo I has a Galactocentric distance of $`\sim `$250 kpc, comparable to And VI’s projected distance from M31. The above line-of-sight and projected distances strongly suggest that And VI is a member of M31’s satellite system.
Two other groups have recently published color–magnitude diagrams for And VI (Grebel & Guhathakurta 1999; Hopp et al. 1999). Their distances for And VI based on the tip of the red giant branch, 830 $`\pm `$ 80 kpc (Grebel & Guhathakurta 1999) and 795 $`\pm `$ 75 kpc (Hopp et al. 1999), are consistent with our value of 775 $`\pm `$ 35 kpc. All three determinations adopt the same zeropoint for the tip of the red giant branch method and thus are on the same distance scale.
Based on the ($`V`$,$`B`$–$`V`$) and ($`I`$,$`V`$–$`I`$) color–magnitude diagrams, we identify the two obvious bright blue stars in Figures 6 and 8 as possible post-asymptotic giant branch (PAGB) stars. These two stars have apparent $`B`$, $`V`$, and $`I`$ magnitudes of 21.11, 20.95, 20.13, and 21.44, 21.43, 21.30, respectively. Bond (1996) suggested that PAGB stars may be useful Population II distance indicators where $`<`$$`M_V`$$`>`$ = -3.4 $`\pm `$ 0.2. The average observed, extinction-corrected, $`V`$ magnitude for these two stars is $`21.00\pm 0.24`$. Thus, the PAGB distance modulus to And VI is ($`m`$–$`M`$)<sub>0</sub> = 24.40 $`\pm `$ 0.31, or 760 $`\pm `$ 110 kpc, in excellent agreement with distances based on the red giant branch tip.
Using our distance from the red giant branch tip and adopted reddening for And VI, we have overplotted its color–magnitude diagram with fiducials representing the red giant branches of Galactic globular clusters that span a range of metal abundance (Da Costa & Armandroff 1990). Figure 6 displays the And VI ($`I`$,$`V`$–$`I`$) color–magnitude diagram with fiducials for M15 (\[Fe/H\] = –2.17), M2 (–1.58), NGC 1851 (–1.16), and 47 Tuc (–0.71, dashed line). In the color–magnitude diagram, the bulk of the And VI red giants are bounded by the M15 and NGC 1851 loci. The mean locus of the And VI red giant branch is similar to the M2 fiducial.
For an old stellar system (as demonstrated below for And VI), the ($`V`$–$`I`$)<sub>0</sub> color of the red giant branch is primarily sensitive to metal abundance. Armandroff et al. (1993) presented a linear relation between ($`V`$–$`I`$)<sub>0,-3.5</sub> (the color of the red giant branch at $`M_I`$ = –3.5, corrected for reddening) and \[Fe/H\]. Using the distance modulus derived above, $`M_I`$ = –3.5 corresponds to $`I`$ = 21.06 for And VI. Thus, the mean dereddened color for stars in the magnitude interval 20.96 $`\le `$ $`I`$ $`\le `$ 21.16 and within 370 pixels (73 arcsec) of And VI’s center was computed, excluding one obvious non-member, and yielding ($`V`$–$`I`$)<sub>0,-3.5</sub> = 1.38 $`\pm `$ 0.04 (where the error also includes a 0.03 mag contribution due to photometric zeropoint uncertainty and a 0.01 mag contribution from reddening uncertainty). Applying the Armandroff et al. (1993) calibration results in a mean abundance for And VI of $`<`$\[Fe/H\]$`>`$ = –1.58 $`\pm `$ 0.2. This metallicity is normal for a dwarf spheroidal (e.g., see Figure 9 of Armandroff et al. 1993). The mean metal abundance derived here for And VI, –1.58 $`\pm `$ 0.2, agrees with that of Grebel & Guhathakurta (1999), –1.35 $`\pm `$ 0.3, to within the combined uncertainties.
The And VI giant branch in Figure 6 exhibits some width in color at constant magnitude. The standard deviation in $`V`$–$`I`$ of the 24 And VI red giants used to calculate ($`V`$–$`I`$)<sub>0,-3.5</sub> is 0.09 mag. This is much larger than the mean photometric uncertainty for these same stars of 0.026 mag. Although this photometric uncertainty results from allstar and does not account for the crowding in the realistic way that artificial star tests would, it does suggest that And VI exhibits a range of metal abundance. A standard deviation of 0.08 mag in $`V`$–$`I`$ at $`M_I`$ = –3.5 corresponds to a standard deviation in abundance of 0.3 dex. Grebel & Guhathakurta (1999) also concluded that And VI has a range in metal abundance. A much more definitive investigation of the abundance distribution in And VI will result from HST/WFPC2 observations to take place during Cycle 8 (see the analogous results on the abundance spread in And I by Da Costa et al. 1996).
The presence of upper asymptotic giant branch (AGB) stars, stars significantly more luminous than the red giant branch tip, in populations with \[Fe/H\] $`\lesssim `$ –1.0, is the signature of an intermediate-age population (one whose age is considerably less than that of the Galactic globular clusters). Armandroff et al. (1993) estimated the fraction of And I’s & And III’s total luminosity that is produced by an intermediate-age population using the numbers and luminosities of upper-AGB stars, following the formalism developed by Renzini & Buzzoni (1986). This analysis yielded an intermediate-age (3–10 Gyr) population fraction of 10 $`\pm `$ 10% for both And I & III. Following the same methodology, Paper I found no significant evidence for upper-AGB stars, and thus no significant intermediate-age population, in And V.
For And VI, we again adopt from Armandroff et al. (1993) the $`M_{\mathrm{bol}}`$ limits that correspond to 3–10 Gyr upper AGB stars: –3.8 to –4.6. We use the bolometric corrections to $`I`$ magnitudes from Da Costa & Armandroff (1990) and our distance modulus from the red giant branch tip to calculate bolometric magnitudes. In our And VI ($`I`$,$`V`$–$`I`$) diagram for stars within 73 arcsec of the And VI center (Figure 6), there are five stars within the above $`M_{\mathrm{bol}}`$ interval (all in the range –3.8 to –4.0, with a mean $`M_{\mathrm{bol}}`$ of –3.9; these are the stars above the red giant branch tip with $`I\lesssim 20.35`$). There are no such stars in the color–magnitude diagram for objects with radius greater than 191 arcsec that we are using to estimate background (as discussed above). Based on the surface brightness profile from Caldwell (1999), the area from which we have selected upper-AGB stars represents 43% of And VI’s total $`M_V`$ of –11.3. Then Eq. 14 of Renzini & Buzzoni (1986) and the adoption of the AGB evolutionary rate used in Armandroff et al. (1993) yields an intermediate-age (3–10 Gyr) population fraction of 13 $`\pm `$ 6% for And VI. The error is estimated solely from counting statistics for the upper-AGB stars. Thus, And VI resembles And I & III in the strength of its intermediate-age population. It does not resemble the Galactic dwarf spheroidals Fornax or Leo I that have dominant intermediate-age populations.
No evidence for young, blue main-sequence stars is seen in Figure 6. Except for the star with $`V`$–$`I`$ = 0.13 and $`I`$ = 21.30 that we identified as a candidate post-AGB star above, the And VI ($`I`$,$`V`$–$`I`$) color–magnitude diagram is essentially devoid of blue stars. Bluer colors provide enhanced sensitivity to young main-sequence stars. Consequently, our ($`V`$,$`B`$–$`V`$) color–magnitude diagram for And VI is displayed in Figure 8, again for stars within 73 arcsec of the And VI center. Also plotted are red giant branch fiducials for the Galactic globular clusters M15 (\[Fe/H\] = –2.17) and NGC 1851 (\[Fe/H\] = –1.16) (Sarajedini & Layden 1997) shifted to the distance modulus and reddening of And VI. As seen in the ($`I`$,$`V`$–$`I`$) color–magnitude diagram, most of the stars in the ($`V`$,$`B`$–$`V`$) color–magnitude diagram are red giants that are bracketed by the M15 and NGC 1851 fiducials. Because our photometry is significantly more precise in $`V`$ and $`I`$ than it is in $`B`$, we estimate And VI’s metallicity solely from the ($`I`$,$`V`$–$`I`$) color–magnitude diagram.
One also sees a relatively small population of blue stars in Figure 8, with 23 $`\le `$ $`V`$ $`\le `$ 25.4 and -0.4 $`\le `$ ($`B`$–$`V`$) $`\le `$ 0.4. These blue stars seen in the ($`V`$,$`B`$–$`V`$) diagram correspond to the medium brightness blue stars apparent in our color image of And VI displayed in Figure 4. Consider the 10 stars with 23.0 $`\le `$ $`V`$ $`\le `$ 24.0 and -0.1 $`\le `$ ($`B`$–$`V`$) $`\le `$ 0.4 in Figure 8. The spatial distribution of such stars is consistent with membership in And VI; their number density is an order of magnitude lower in the corners of our images than it is within 73 arcsec of the And VI center. In addition, the predictions of Galactic star count models (e.g., Ratnatunga & Bahcall 1985) for blue stars in this magnitude range are two orders of magnitude smaller than observed. Therefore, we conclude that these blue stars are a component of And VI. Their distribution in the ($`V`$,$`B`$–$`V`$) diagram is not indicative of main-sequence stars because they are not sufficiently blue and because they fail to follow a continuous color–magnitude relation. Color–magnitude diagrams of the Galactic dSphs Leo I and Sextans show populations of stars with very similar absolute magnitudes and colors (Lee et al. 1993b; Demers et al. 1994; Gallart et al. 1999; Mateo et al. 1995). Some of these stars are known to be anomalous Cepheids. The HST/WFPC2 imaging of And VI planned for Cycle 8 should provide valuable clarification on this issue, because it will reach substantially deeper with better precision than the current photometry and should determine whether these blue stars are variable. Based on the available data, And VI’s color–magnitude diagrams are consistent with those of other dwarf spheroidal galaxies, confirming the other evidence for a dwarf spheroidal classification.
## 5 Discussion
The recent discoveries of And VI, And V (Paper I), and the Cas Dwarf (Karachentsev & Karachentseva 1999; Grebel & Guhathakurta 1999), coupled with the recognition that these dSph galaxies are part of the extended M31 satellite system, have changed our view of this system.
Karachentsev (1996) discussed the spatial distribution of the companions to M31. The discoveries of And VI, And V and Cas modify the spatial distribution of the M31 satellites. One asymmetry in the spatial distribution of the satellites results from And I, II & III all being located south of M31, while the three more luminous dwarf elliptical companions NGC 147, 185 and 205 are all positioned north of M31. Also, Karachentsev (1996) noted that there are more M31 companions overall south of M31 than north of M31. As discussed in Paper I, the location of And V north of M31 lessens both of these asymmetries, and Cas’s location north of M31 reduces the asymmetries further (see Figure 2). Because And VI is south of M31, it diminishes the asymmetry reduction provided by the Cas Dwarf.
Karachentsev (1996) called attention to the elongated shape of the distribution of M31 companions, with axis ratio 5:2:1 (based on the galaxies labelled in Figure 2 that were known in 1996 plus the more distant galaxy IC 1613, all of which he argues are associated with M31). On the sky, this elongation occurs primarily along the north–south axis. As discussed in Paper I, And V is located within this flattened spatial distribution. Of all the M31 companions, And VI’s location to the southeast of M31 places it the furthest from this elongated distribution, and nearly on the distribution’s minor axis. The Cas Dwarf is the next most deviant companion from the elongated pattern.
With projected radii from the center of M31 of 112, 271 and 224 kpc, respectively, And V, And VI and Cas increase the mean projected radius of the M31 dwarf spheroidals from 86 kpc to 144 kpc. Karachentsev (1996) also called attention to morphological segregation among the M31 companions, with the dwarf ellipticals and dwarf spheroidals located closer to M31, and the spirals and irregulars on the periphery (see also van den Bergh 1994 regarding segregation among the Galactic companions). The identification of And VI and Cas extends the presence of dwarf spheroidals to galactocentric distances comparable to those of LGS 3 (a transition object between dwarf irregular and dwarf spheroidal) and IC 10 (a dwarf irregular). Thus, And VI and Cas weaken the case for morphological segregation in the M31 satellite system.
The discovery of dwarf galaxies like And VI, And V and Cas augments the faint end of the luminosity function of the Local Group. Because of the special way that nearby dwarf galaxies are selected, the Local Group is uniquely suited to the study of the faint end of the galaxy luminosity function (e.g., Pritchet & van den Bergh 1999). Accurate integrated $`V`$ magnitudes are now available for And VI, And V and Cas from wide-field CCD imaging (Caldwell 1999). When combined with the distance moduli (this paper; Paper I; Grebel & Guhathakurta 1999), these yield $`M_V`$ values of -11.3, -9.1, and -12.0 for And VI, And V, and Cas, respectively. Luminosity functions for M31’s dwarf companions and for the entire Local Group that include and highlight these three new dwarf spheroidals are shown in Figure 9. The Local Group and subgroup memberships and $`M_V`$ values for the galaxies besides And VI, And V, and Cas are from Mateo (1998), except for the LMC and SMC which are from Westerlund (1997). And VI, And V and Cas increase the number of galaxies with -9.0 $`>`$ $`M_V`$ $`\ge `$ -12.0 that are M31 companions by 75% and in the entire Local Group by 19%.
From a survey of nine clusters of galaxies, Trentham (1998) derived a composite luminosity function that is steeper at the faint end than that of the Local Group (see his Figure 2). He attributed the seeming shortfall at faint $`M_V`$ in the Local Group to poor counting statistics and/or incompleteness. The discovery of And VI, And V and Cas reduces somewhat the difference between the Local Group luminosity function and Trentham’s (1998) function. To illustrate the difference, we have plotted a version of the Trentham luminosity function, transformed from $`B`$ to $`V`$ assuming ($`B`$–$`V`$)<sub>0</sub> = 0.70, in Figure 9. The Trentham function has been normalized to match the M31 and Local Group functions in the interval -16 $`>`$ $`M_V`$ $`\ge `$ -18. One sees that the luminosity function for clusters of galaxies contains many more galaxies for $`M_V`$ $`\gtrsim `$ -13 than are known in the Local Group. Despite the inclusion of the three newly discovered Local Group dwarf spheroidals, the difference between the Trentham (1998) luminosity function and that of the Local Group remains significant at faint $`M_V`$ ($`>`$ 95% confidence level). Pritchet & van den Bergh (1999) reached the same conclusion using a somewhat different assemblage of Local Group data. Provided that the Trentham function is representative, we conclude that either faint dwarf galaxies are much less common in the Local Group than in richer clusters or that the Local Group sample is incomplete by $`\sim `$10 to $`\sim `$25 galaxies in the $`M_V`$ range –12 to –14.
One important issue remains regarding Figure 9. Is the peak of the histogram in the region -11 $`\ge `$ $`M_V`$ $`\ge `$ -12, followed by a decline at fainter magnitudes, real or is it caused by incompleteness and small number statistics? The luminosity function is expected to decline fainter than some $`M_V`$ because of the inability of sufficiently small proto-galactic gas clouds to collapse and form stars due to photoionization by ultraviolet background radiation (e.g., Thoul & Weinberg 1996). Our continuing search for faint dwarf galaxies in the vicinity of M31 and other Local Group searches should help resolve the issue of where this turnover occurs.
Beyond the observational evidence for a deficit of dwarfs in the Local Group based on Trentham’s (1998) luminosity function, similarly dramatic discrepancies (factors of 15 or more) are suggested by recent theoretical developments. Klypin et al. (1999), for example, present simulations of the hierarchical formation of small groups like our Local Group in the currently favored $`\mathrm{\Lambda }`$CDM universe. While $`\sim `$40 Local Group galaxies are known, those authors predict that $`\sim `$500 galaxies should exist for which the vast majority are low luminosity dwarfs. That number doubles when the search radius extends from $`\sim `$300 to $`\sim `$600 kpc from the host galaxy. Our continuing survey around M31, especially when we extend it to larger radii, will provide a tight constraint on the true number of faint dwarf galaxies in the Local Group.
The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The survey images used here are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain. The Second Palomar Observatory Sky Survey was made by the California Institute of Technology with funds from the National Science Foundation, the National Geographic Society, the Sloan Foundation, the Samuel Oschin Foundation, and the Eastman Kodak Corporation. The Oschin Schmidt Telescope is operated by the California Institute of Technology and Palomar Observatory. Supplemental funding for sky-survey work at the STScI is provided by the European Southern Observatory. Watanabe Masaru at the National Astronomical Observatory of Japan and Jesse Doggett and especially the late Barry Lasker at STScI kindly helped us to access the DSS data. We appreciate the efforts of the WIYN Queue Team, Dianne Harmer, Abi Saha, Paul Smith, and Daryl Willmarth, in obtaining the WIYN images. We are grateful to Nelson Caldwell, Gary Da Costa, Eva Grebel, Anatoly Klypin, Chris Pritchet, and Abi Saha for helpful discussions. J. E. D. was supported by the NSF Research Experiences for Undergraduates Program at NOAO during the summer of 1997. G. H. J. wishes to thank Peter Strittmatter for providing a sabbatical office at Steward Observatory where some of this work was done. J. E. D. wishes to thank Robert Williams for providing an office at STScI where some of this work was done.
# Topological Gauge Theory Of General Weitzenböck Manifolds Of Dislocations In Crystals
Y. C. Huang<sup>1,3</sup> B. L. Lin<sup>2</sup> S. Li<sup>3</sup> M. X. Shao<sup>3,4</sup>
<sup>1</sup>Department of Applied Physics, Beijing Polytechnic University, Beijing,100022, P. R. China
<sup>2</sup>Department of Transportation Management Engineering, Northern Jiaotong University, Beijing, 100044, P. R. China
<sup>3</sup>Institute of Theoretical Physics, Chinese Academy of Science, Beijing, 100080, P. R. China
<sup>4</sup>Department of Physics, Beijing Normal University, Beijing, 100022, P. R. China
( <sup>1</sup>Email: ychuang@bjpu.edu.cn )
Abstract
General Weitzenböck material manifolds of dislocations in crystals are proposed. The reference, idealized and deformed states of a body are described in a unified way by the general manifolds; a topological gauge field theory of dislocations is given in the general case; the true distributions and evolution of dislocations in crystals are obtained from the formulas describing dislocations in terms of the general manifolds; and their properties are discussed.
## 1. Introduction
Real crystals always contain many dislocations, and these defects strongly affect the properties of the crystals. As early as the fifties, pioneers investigated the relations between defective crystals and differential geometry: K. Kondo used the torsion of material manifolds to study dislocations<sup>1</sup>, and Nye<sup>2</sup>, Bilby<sup>3</sup>, Kröner<sup>4</sup> etc. studied the relations between the defects and torsion.
On the other hand, these defects are found to share features with elementary particles, and Kröner studied the continuum theory of defects<sup>5</sup>. Following the great success of Yang-Mills gauge field theory in describing the interactions of elementary particles<sup>6</sup>, research on the gauge theory of gravitation revealed that the geometry of a Riemannian manifold essentially belongs to a kind of non-Abelian gauge theory<sup>7</sup>. The gauge theory and topological properties of dislocations were studied, and many important results in this field were obtained by Toulouse and Kleman<sup>8</sup>, Kleinert<sup>9</sup>, Trebin<sup>10</sup> and others. Duan and his associates gave a unified approach to the study of defect mechanics and its topological and geometric properties by applying vielbein field theory and gauge field theory to dislocation and disclination continua<sup>11,12</sup>.
Sect. 2 of this paper constructs the general Weitzenböck manifolds of dislocations in crystals and gives a unified description of the reference, idealized and deformed states; Sect. 3 studies the topological gauge theory of dislocations in a unified way, gives the formulas describing dislocations, and discusses their properties; Sect. 4 is a summary.
## 2. General Weitzenböck Material Manifolds of Dislocations
A Weitzenböck manifold<sup>7</sup> is one for which $`\nabla _ig_{jk}=0`$, $`T_{jk}^i\ne 0`$ and $`R_{jkl}^i=0`$, where $`g_{ij}`$, $`T_{jk}^i`$ and $`R_{jkl}^i`$ are, respectively, the metric, torsion and curvature tensors of the manifold; when $`T_{jk}^i=0`$, the manifold reduces to a Euclidean manifold. Since the properties of dislocations correspond to the properties of the torsion tensor of a manifold<sup>1-4</sup>, the manifold associated with dislocations is taken to be a Weitzenböck manifold. Moreover, the torsion tensor and the metric tensor $`g_{ij}`$ are both defined locally on the manifold<sup>13</sup>, so we may naturally expect the torsion tensor to take different values, even zero, on different parts of the manifold; we call such a manifold a general Weitzenböck manifold, and the different parts are the corresponding submanifolds of the material manifold. In fact, these situations correspond exactly to the disappearance and appearance of dislocations, for zero and non-zero torsion of the corresponding submanifold respectively. We can therefore use the above picture to represent the complex processes of motion, disappearance and appearance of dislocations in crystals.
Now we construct the general Weitzenböck material manifolds, relative to a time parameter t, of dislocations in crystals.
It is well known that a Weitzenböck manifold is an ensemble of many small coordinate pieces, each homeomorphic to a local Euclidean space but fitted together to some degree amorphously, and that the character of this ensemble can be described by means of the affine connections of the small pieces. The geometric properties of dislocations correspond homeomorphically to the geometric properties of the torsions of the corresponding material manifold.
Owing to the periodic, regular distribution of the lattice particles of a perfect crystal, the position coordinates $`x^a`$ of the lattice particles can be determined globally in Euclidean space in terms of their periodic rules, so we regard the corresponding material manifold as a Euclidean manifold. When dislocations appear, we can no longer do this globally in terms of the periodic rules; however, we can build up many local coordinate systems in which the position coordinates $`x^a`$ of any particle in the crystal can be defined locally.
In general, for a crystal with dislocations in 3-dimensional Euclidean space, we assume that the body is made of N lattice particles. For any lattice particle $`P_{i0}`$, we can always construct a local coordinate system $`L_{i0}`$ in a local Euclidean space and presume that there are k lattice particles $`P_{i1},P_{i2},\mathrm{}\mathrm{},P_{ik}`$ close adjacent to $`P_{i0}`$; the corresponding nonholonomic normalized frame fields are $`𝐞^a(a=1,2,3)`$. The distance vector of any two close adjacent particles is then
$$d\mathbf{r}=\mathbf{e}^adx^a$$
(1)
the metric tensor is
$$\eta ^{ab}=\mathbf{e}^a\cdot \mathbf{e}^b=\delta ^{ab}$$
(2)
and the square of the distance, i.e., the square of the line element is
$$ds^2=d\mathbf{r}\cdot d\mathbf{r}=\eta ^{ab}dx^adx^b=dx^adx^a$$
(3)
where $`dx^a`$ is the corresponding nonholonomic coordinate difference.
Furthermore, for any particle $`P_{il}\ (l\ne 0)`$ discussed above, we can again construct a local coordinate system, regarding any such $`P_{il}\ (l\ne 0)`$ as a new $`P_{i0}^{}`$, and repeat the above construction for it, and so on, until all particles in the body are covered.
For an open subset $`V_\alpha \subset R^3`$ of any local coordinate system discussed above, assume that there is an inverse homeomorphic map $`\phi _\alpha ^{-1}:V_\alpha \to M_\alpha `$ (where the $`M_\alpha `$ form a family of open sets) and $`\bigcup _\alpha V_\alpha =V_m`$ ($`V_m`$ is the volume of the crystal in Euclidean space) such that $`\bigcup _\alpha M_\alpha =M`$ is a topological space, which means that M is provided with the family of pairs $`\{(M_\alpha ,\phi _\alpha )\}`$. Further assume that, given $`M_\alpha `$ and $`M_\beta `$ ($`\alpha \ne \beta `$, $`M_\alpha \cap M_\beta \ne \varphi `$), the map $`\phi _\beta \phi _\alpha ^{-1}`$ from the subset $`\phi _\alpha (M_\alpha \cap M_\beta )`$ of $`R^3`$ to the subset $`\phi _\beta (M_\alpha \cap M_\beta )`$ of $`R^3`$ is infinitely differentiable. Consequently, the aggregation of all the coordinate pieces forms a general three-dimensional differentiable material manifold<sup>13</sup>. Because the general manifold inherits the topological and geometric properties of the dislocations in the body, and these properties correspond to the properties of the torsion of the manifold and can be expressed by its torsion tensor, there always exists an inverse map $`\phi _\alpha ^{-1}`$ such that M is the manifold on which the properties of the dislocations of the crystal are represented by means of the torsion tensors. Because the torsion of the general manifold may take different values on different parts of the manifold, the different parts may correspond to Euclidean and Weitzenböck material submanifolds of the general material manifold. Evidently, the local Euclidean and Weitzenböck material submanifolds are locally included in the general Weitzenböck material manifold, which corresponds exactly to the real distribution of dislocations in the body.
The relationship<sup>15,16</sup> between the normalized nonholonomic and holonomic frames is
$$e^a=e_i^ae^i,a,i=1,2,3$$
(4)
where $`e^i=dx^i`$ is the holonomic coframe of the manifold and $`e_i^a`$ is the vielbein; we then have the metric tensor of the manifold $`M`$
$$g_{ij}=e_i^ae_j^a$$
(5)
Using the vielbein theory<sup>7,14-16</sup>, we have
$$dx^a=e_i^adx^i$$
(6)
Substituting (6) into (3) and using (5), we have
$$ds^2=dx^adx^a=e_i^adx^ie_j^adx^j=g_{ij}dx^idx^j$$
(7)
which means that the square of the line element of a local Euclidean system in the crystal is equal to the square of the line element of the general material manifold. On this basis we can further study the stress and related quantities; the above discussion is therefore essential for what follows.
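A small numerical sketch may make Eqs. (5)-(7) concrete; the particular vielbein used here (a slight shear of the identity) is purely an illustrative assumption.

```python
import numpy as np

e = np.array([[1.0, 0.1, 0.0],      # e[a, i]: vielbein at one point; the 0.1
              [0.0, 1.0, 0.0],      # shear is an arbitrary illustrative choice
              [0.0, 0.0, 1.0]])

g = np.einsum('ai,aj->ij', e, e)    # Eq. (5): g_ij = e^a_i e^a_j

dx  = np.array([0.01, 0.02, 0.00])  # holonomic coordinate differences dx^i
dxa = e @ dx                        # Eq. (6): dx^a = e^a_i dx^i
assert np.isclose(dxa @ dxa, dx @ g @ dx)   # Eq. (7): both expressions give ds^2
```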
If we take the time $`t_0`$ to label the reference state, then when the body undergoes a deformation during the interval $`\mathrm{\Delta }t=t-t_0`$ the corresponding deformed state is labelled by the time $`t=t_0+\mathrm{\Delta }t`$. In other words, at different times $`t`$ the lattice particles possess different distributions, by which the creation and motion of the dislocations with time $`t`$ are represented; the general material manifold with time parameter t is thus obtained.
In elastic and plastic mechanics, reference, idealized and deformed states are chosen. We now give a unified description of these states by means of the general manifold. The reference state is usually chosen as the perfect or idealized state of an ordered material. By the above discussion translating the discretum of a perfect crystal into a continuum, we obtain a Euclidean material manifold, corresponding to globally vanishing torsion. The idealized state of the body is usually defined as an aggregation of many small pieces of the idealized material. In fact, the small pieces of the idealized material can always be viewed as having ordered distributions of lattice particles, and since our discussion of the piece distributions of the particles is general, the idealized state of the body is included in the states of the general Weitzenböck material manifold M. Elastic and plastic deformation states can be represented microscopically as changes of the displacements and interactions of adjacent lattice particles. By an argument analogous to the above, the elastic and plastic deformation states are evidently included in the general state of the general manifold M; in particular, plastic deformation involves cooperative motions of many defects and lattice particles. We now consider generally the reference state (also called the standard state) and the deformed state of M; the line elements of their manifolds are generally defined, respectively, as
$$ds_r^2=ds_{t_0}^2=g_{ij}(x(t_0),t_0)dx^i(t_0)dx^j(t_0)$$
(8)
and
$$ds_d^2=ds_t^2=g_{ij}(x(t),t)dx^i(t)dx^j(t)$$
(9)
Using (1) and (5), Eq.(9) can be generally rewritten as
$$ds_t^2=\eta ^{ab}e_i^a(x(t),t)e_j^b(x(t),t)dx^i(t)dx^j(t)$$
(10)
Similar to Ref., the strain tensor is defined as
$$E_{ij}=E_{ij}(x(t),x(t_0),t,t_0)=\frac{1}{2}[g_{ij}(x(t),t)-g_{ij}(x(t_0),t_0)]$$
(11)
The general reference and deformed states and the strain tensor at any time t are useful for practical calculations and convenient for extracting the various properties of a crystal; in fact, one usually compares reference states with deformed states under different conditions and studies the evolution laws of the deformed states with time t.
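Continuing the numerical sketch above, the strain tensor of Eq. (11) is simply half the difference of the deformed and reference metrics; taking the reference metric to be flat for illustration,

```python
def strain(g_t, g_t0):
    """Eq. (11): E_ij = [g_ij(x(t), t) - g_ij(x(t0), t0)] / 2."""
    return 0.5 * (g_t - g_t0)

E = strain(g, np.eye(3))   # flat reference state, deformed metric g from Eq. (5)
```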
It is important and useful to describe these different states in the general case. We overcome the difficulty of self-consistency in describing these different states generally in terms of differential geometry, give the true distributions and evolution of dislocations with time in crystals by means of the differential manifolds, and thereby provide a useful tool for describing the defects.
## 3. Topological Gauge Field Theory of Dislocations
We now discuss the topological gauge field theory of the defects. In physics, no physical law depends on the choice of coordinates. In, for example, the square of the line element and the Lagrangian of the system, there appear two kinds of indices, i and a (i, a = 1, 2, 3), which are holonomic and nonholonomic indices respectively. For the holonomic indices i, the coordinate transformations at a fixed time t, i.e., at a fixed state, are
$$x^{\prime i}(t)=x^{\prime i}(\{x^j(t)\})$$
(12)
and
$$x^i(t)=x^i(\{x^{\prime j}(t)\})$$
(13)
The transformation of the nonholonomic indices is the local SO(3) gauge transformation<sup>11</sup>
$$dx^{\prime a}(t)=S^{ab}(t)dx^b(t)$$
(14)
where
$$\eta ^{\prime ab}=\eta ^{cd}S^{ac}S^{bd}$$
(15)
Using Eqs.(14), (15) and the discussions above, it is easy to prove that Eqs.(8-10) are all invariant under the two kinds of coordinate transformations.
In order to achieve the invariance of physical laws under the two kinds of coordinate transformations, the usual partial derivative must be replaced by two kinds of covariant derivatives as follows
$$\nabla _ie_j^a=\partial _ie_j^a-\mathrm{\Gamma }_{ij}^ke_k^a$$
(16)
and
$$D_ie_j^a=\partial _ie_j^a-\omega _i^{ab}e_j^b$$
(17)
where $`\mathrm{\Gamma }_{ij}^k`$ and $`\omega _i^{ab}`$ are the affine connection and the $`SO(3)`$ gauge potential (spin connection), respectively. Eqs.(16) and (17) satisfy the general covariance and gauge covariance principles, respectively, and the total Lagrangian of the system is invariant under both transformations$`^{16}`$.
Using the vielbein theory<sup>11,12,16</sup> we have the torsion tensor of the general manifold as follows:
$$T_{ij}^k=\mathrm{\Gamma }_{ij}^k-\mathrm{\Gamma }_{ji}^k=e^{ka}(D_ie_j^a-D_je_i^a)=e^{ka}T_{ij}^a$$
(18)
where $`T_{ij}^a`$ is the torsion tensor with nonholonomic superscript a
$$T_{ij}^a=D_ie_j^a-D_je_i^a$$
(19)
Analogous to Ref., we may define the general tensor density of dislocations of the body at any time t as follows
$$\alpha ^{ia}(x(t),t)=\frac{\epsilon ^{ilm}}{2\sqrt{g(x(t),t)}}T_{lm}^a(x(t),t)$$
(20)
where $`g=det(g_{ij})`$, $`\epsilon ^{123}=-\epsilon ^{132}=1`$, and the coefficient $`\frac{1}{2}`$ compensates for the double counting in the sum over the subscripts l and m. The Burgers vector of a dislocation can then be defined as follows
$$b^a=\int _\mathrm{\Sigma }\alpha ^{ia}(x(t),t)\sqrt{g(x(t),t)}𝑑\sigma _i$$
(21)
where $`\mathrm{\Sigma }`$ is a surface enclosing the dislocations. The formulas (20) and (21) and x(t) are all functions of the time t that labels the motions of the defects and the deformations of a crystal; they include the creation, motion and disappearance of the dislocations as time t evolves in the general manifold. These variations can therefore be represented by the Burgers vector, or by the tensor density of dislocations, and they correspond to Eq.(20) and Eq.(21) taking different values at different times and positions. The formulas describing dislocations at any time t in the general manifold thus give the true distributions and evolution of dislocations in the crystals, and our theory is non-linear. Using gauge potential decomposition, Duan and Zhang<sup>12</sup> gave the kinematic, geometric and topological descriptions of dislocations in terms of expressions similar to Eqs. (20) and (21). Since the general Weitzenböck manifold is very general, the corresponding further discussion of (20) and (21) parallels that in Ref.; one can therefore recover all their results on the topology, geometry and kinematics of the defects, and we shall not repeat them here.
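As an illustrative special case (our own example, taking the flat-metric limit $`\sqrt{g}\to 1`$), a single straight dislocation line along the $`x^3`$-axis with Burgers vector $`b^a`$ corresponds to the density $`\alpha ^{ia}=\delta ^{i3}b^a\delta ^{(2)}(x^1,x^2)`$, and Eq.(21) then gives, for any surface $`\mathrm{\Sigma }`$ pierced once by the line,

$$\int _\mathrm{\Sigma }\alpha ^{ia}\sqrt{g}𝑑\sigma _i=b^a\int _\mathrm{\Sigma }\delta ^{(2)}(x^1,x^2)𝑑\sigma _3=b^a,$$

so the definitions (20) and (21) reproduce the usual crystallographic Burgers vector.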
The discussion of elastic and plastic dynamics, and the generalization to a four-dimensional general pseudo-Weitzenböck material manifold in which the dissipative parts belong to the fourth component of the vielbein fields, will be given in a separate paper.
4. Summary
This paper proposes general Weitzenböck material manifolds, parametrized by time t, for dislocations in crystals; within them the local Euclidean and Weitzenböck manifolds can be viewed as submanifolds of the general Weitzenböck material manifold. Furthermore, by analyzing the microscopic structures of the general manifolds and their change with time t, we obtain a unified description of the reference, idealized and deformation states in terms of the general manifolds; the general manifold evolving with the time parameter t represents precisely the motions and distributions of dislocations, and the topological gauge theory is obtained by requiring the physical laws to be invariant under the coordinate transformations. We overcome the difficulty of self-consistency in describing these different states generally in terms of differential geometry, give the true distributions and evolution of dislocations in the crystals through the formulas describing dislocations at any time in the general manifold, and discuss their properties.
Acknowledgment
One of the authors (Huang) is grateful to Prof. Y. S. Duan and Prof. S. L. Zhang for their help.
# Excitation-assisted inelastic processes in trapped Bose-Einstein condensates
## Abstract
We find that inelastic collisional processes in Bose-Einstein condensates induce local variations of the mean-field interparticle interaction and are accompanied by the creation/annihilation of elementary excitations. The physical picture is demonstrated for the case of 3-body recombination in a trapped condensate. For a high trap barrier the production of high-energy trapped single-particle excitations results in a strong increase of the loss rate of atoms from the condensate.
Since the discovery of Bose-Einstein condensation in trapped ultra-cold atomic gases , inelastic collisional processes in these systems have attracted a lot of attention. Three-body recombination and two-body spin relaxation limit the achievable densities of trapped clouds , and spin relaxation in atomic Cs even places severe limitations on achieving the regime of quantum degeneracy . Theoretical studies of 3-body recombination and 2-body spin relaxation in ultra-cold gases provide valuable information on the mechanisms and rates of these processes (see for review, and for the earlier work in hydrogen). These studies rely on the traditional approach of collisional physics in gases: the decay rates are found on the basis of the calculated probability of an inelastic transition in a two- or three-body collision in vacuum.
In this Letter we find that inelastic collisional processes in Bose-condensed gases induce local variations of the mean-field interparticle interaction and can become “excitation-assisted”. We demonstrate this phenomenon for the case of 3-body recombination. The particles produced in the course of recombination have a high kinetic energy and immediately fly away from the point of recombination. Therefore, each recombination event leads to an instantaneous local change of the mean field, i.e. to a change of the field acting on the surrounding particles. In this respect the recombination leads to “shaking” of the system: the time scale on which the Hamiltonian changes is so short that the wavefunction of the system remains unchanged and, hence, corresponds to a superposition of many eigenstates of the new Hamiltonian. For this reason the recombination event can be accompanied by the creation or annihilation of elementary excitations.
From a general point of view, the concept of “shaking” (sudden perturbations) was formulated by Migdal and used for calculating the ionization of an atom in $`\beta `$-decay (see, e.g., ). Sudden perturbations in condensed systems are accompanied by many-body collective effects. Dilute Bose-Einstein condensates are unique examples of gases, as the mean-field interparticle interaction makes their behavior in many aspects similar to that of a solid. To some extent, the picture of inelastic processes in these gases resembles, for example, the absorption of light by impurity particles in solids, where the transfer of impurities to an excited state leads to a sudden local change of the polarization of the medium and, hence, to the creation/annihilation of phonons (see, e.g., ).
A distinct feature of “shaking” in a gaseous condensate concerns the back action of the shaking-created excitations on the condensate. This effect can be strongly pronounced in a trapped condensate. For a high trap barrier the created high-energy single-particle excitations are still trapped and collide with condensate particles, removing them from the condensate. Each collision thus produces two energetic (trapped) atoms which again collide with the condensate, etc. As a result, one has a cascade production of non-condensed atoms out of the condensate. Despite the small probability of creating high-energy excitations in the recombination process, this mechanism can significantly increase the total loss rate of Bose-condensed atoms.
We will consider the most important channel of 3-body recombination in a trapped Bose-condensed gas, that is, the recombination involving 3 condensate particles. For a single recombination event, the local change of the interparticle interaction, $`\delta H_i`$, can be obtained by treating the center of mass $`𝐫_i`$ of the recombination-produced fast atom and molecule as a force center. Just before the event there are 3 atoms at the point $`𝐫_i`$, and each particle $`j`$ of the sample interacts with this point via the potential $`3U(𝐫_j-𝐫_i)`$. After the event this interaction is equal to zero, since the fast atom and molecule fly away from the area of recombination. We will use a point approximation for the interaction potential: $`U(𝐫)=\stackrel{~}{U}\delta (𝐫)`$, where $`\stackrel{~}{U}=4\pi \hbar ^2a/m`$, with $`a`$ being the $`s`$-wave scattering length and $`m`$ the atom mass. Then, assuming that the condensate density $`n_0`$ in the BEC spatial region greatly exceeds the density of non-condensed atoms, we obtain
$$\delta \widehat{H}_i=-3\int d^3r_j\widehat{\mathrm{\Psi }}^{\dagger }(𝐫_j)U(𝐫_j-𝐫_i)\widehat{\mathrm{\Psi }}(𝐫_j)=-3\stackrel{~}{U}\widehat{\mathrm{\Psi }}^{\dagger }(𝐫_i)\widehat{\mathrm{\Psi }}(𝐫_i).$$
(1)
The field operator of atoms is $`\widehat{\mathrm{\Psi }}=\mathrm{\Psi }_0+\widehat{\mathrm{\Psi }}^{\prime }`$, where $`\mathrm{\Psi }_0=n_0^{1/2}`$ is the condensate wavefunction, and $`\widehat{\mathrm{\Psi }}^{\prime }`$ is the non-condensed part of the operator. This part refers to quasiparticle excitations, and we linearize $`\delta \widehat{H}_i`$ with respect to $`\widehat{\mathrm{\Psi }}^{\prime }`$. Omitting the term $`-3n_0\stackrel{~}{U}`$, which is decoupled from the quasiparticle excitations, we obtain
$$\delta \widehat{H}_i=-3\stackrel{~}{U}\mathrm{\Psi }_0(\widehat{\mathrm{\Psi }}^{\prime \dagger }(𝐫_i)+\widehat{\mathrm{\Psi }}^{\prime }(𝐫_i)).$$
(2)
The force center represents a sort of “hole” in coordinate space. It has the mass $`3m`$ and can undergo translational motion. Therefore, the recombination-induced (sudden) change of the Hamiltonian $`\widehat{H}_0`$ of the excitations can be written as
$$\delta \widehat{H}=\int 𝑑𝐫_i\widehat{\mathrm{\Phi }}^{\dagger }(𝐫_i)\{\delta \widehat{H}_i-(\hbar ^2/6m)\mathrm{\Delta }\}\widehat{\mathrm{\Phi }}(𝐫_i),$$
(3)
where $`\widehat{\mathrm{\Phi }}(𝐫)`$ is the field operator of the force center. The first term in the rhs of Eq.(3) is related to the interparticle interaction, and the second term to the motion of the force center.
We assume that the kinetic energy of fast particles produced in the recombination process greatly exceeds any other energy scale in the problem, and hence the creation or annihilation of excitations does not influence the energy conservation law for the recombination. Then, according to the general theory of sudden perturbations , in each recombination event the probability of transition of the excitation subsystem to a new state $`f`$, characterized by a different set of quantum numbers for the excitations, is given by $`w_{if}=|\langle i|f\rangle |^2`$. The symbol $`\langle i|f\rangle `$ stands for the overlap integral between the wavefunction of the initial state $`i`$, which is an eigenstate of the Hamiltonian $`\widehat{H}_0`$, and the wavefunction of the state $`f`$ which is an eigenstate of the new Hamiltonian $`\widehat{H}_0+\delta \widehat{H}`$. As $`\sum _fw_{if}=1`$, the creation/annihilation of excitations does not change the total recombination rate.
In Thomas-Fermi condensates the most important process is the creation of excitations with energies of order the chemical potential $`\mu `$ or larger (see below). These excitations are essentially quasiclassical, and their de Broglie wavelength is much smaller than the spatial size of the condensate. Hence, the probability of recombination accompanied by the creation/annihilation of the excitations can be found in the local density approximation. In other words, as well as the recombination without production of excitations, this process occurs locally at a given point $`𝐫`$ characterized by local values of the chemical potential $`\mu `$ and condensate density $`n_0`$. Hence, one can use the Bogolyubov transformation for the spatially homogeneous case and represent $`\widehat{\mathrm{\Psi }}^{\prime \dagger },\widehat{\mathrm{\Psi }}^{\prime }`$ in terms of the creation/annihilation operators $`\widehat{b}_𝐤^{\dagger },\widehat{b}_𝐤`$ of excitations characterized by momentum $`𝐤`$:
$$\widehat{\mathrm{\Psi }}^{\prime \dagger }(𝐫_i)+\widehat{\mathrm{\Psi }}^{\prime }(𝐫_i)=\frac{1}{\sqrt{V}}\sum _𝐤\left(\frac{E_k}{ϵ_k}\right)^{1/2}(\widehat{b}_𝐤^{\dagger }+\widehat{b}_𝐤)\mathrm{exp}(i\mathrm{𝐤𝐫}_i).$$
(4)
Here $`E_k=\hbar ^2k^2/2m`$ is the energy of a free particle, $`ϵ_k=(E_k^2+2n_0\stackrel{~}{U}E_k)^{1/2}`$ is the Bogolyubov energy of an excitation, and $`V`$ is the normalization volume. The field operator of the force center can be represented in the form $`\widehat{\mathrm{\Phi }}(𝐫)=(1/\sqrt{V})\sum _𝐪\widehat{a}_𝐪\mathrm{exp}(i\mathrm{𝐪𝐫})`$, where $`\widehat{a}_𝐪`$ is the annihilation operator for the center. Then, using Eqs. (2) and (4), Eq.(3) is transformed to
$$\delta \widehat{H}=\frac{1}{\sqrt{V}}\sum _{𝐤,𝐪}h_𝐤(\widehat{b}_𝐤^{\dagger }+\widehat{b}_{-𝐤})\widehat{a}_{𝐪-𝐤}^{\dagger }\widehat{a}_𝐪+\sum _𝐪\frac{\hbar ^2q^2}{6m}\widehat{a}_𝐪^{\dagger }\widehat{a}_𝐪,$$
(5)
where
$$h_𝐤=3\stackrel{~}{U}n_0^{1/2}(E_k/\epsilon _k)^{1/2}.$$
(6)
The first term in the rhs of Eq.(5), originating from the interparticle interaction, couples the motion of the force center to the excitation subsystem and is responsible for creating/annihilating excitations in the recombination process. Considering this term as a small perturbation, we see that a single recombination event can be accompanied by the creation/annihilation of one excitation. Initially the momentum of the force center is $`𝐪=0`$; after the creation of an excitation with momentum $`𝐤`$ (annihilation of an excitation with momentum $`-𝐤`$) the center acquires the momentum $`-𝐤`$ and the kinetic energy $`E_k/3`$. In a single recombination event, the probabilities of creating and annihilating the excitation with momentum $`𝐤`$ are given by
$`w(N_𝐤\to N_𝐤+1)`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \frac{|h_𝐤|^2(1+N_𝐤)}{(\epsilon _k+E_k/3)^2}}`$ (7)
$`w(N_𝐤\to N_𝐤-1)`$ $`=`$ $`{\displaystyle \frac{1}{V}}{\displaystyle \frac{|h_𝐤|^2N_𝐤}{(\epsilon _k-E_k/3)^2}},`$ (8)
where $`N_𝐤=[\mathrm{exp}(ϵ_k/T)-1]^{-1}`$ are the equilibrium occupation numbers for the excitations at a given temperature $`T`$. Then, for the rate constant of recombination accompanied by the creation of excitations we obtain
$$\alpha _{ex}=\alpha \int \frac{d^3k}{(2\pi )^3}|h_𝐤|^2\left\{\frac{1+N_k}{(ϵ_k+E_k/3)^2}-\frac{N_k}{(ϵ_k-E_k/3)^2}\right\},$$
(9)
with $`\alpha `$ being the total (event) rate constant of recombination. The first term in the rhs of Eq.(9) corresponds to spontaneous and stimulated creation of excitations, and the second term to their annihilation.
For $`T\ll \mu `$ the annihilation and stimulated emission of excitations can be omitted. One can put $`N_k=0`$, and Eq.(9) yields
$$\alpha _{\mathrm{ex}}\simeq 26\alpha (n_0a^3)^{1/2}$$
(10)
Even with a small value of the parameter $`(n_0a^3)`$, the large numerical factor in front of the expression (10) may imply that the creation of excitations in the course of 3-body recombination cannot be neglected. At the highest densities $`n_0\approx 3\times 10^{15}`$ cm<sup>-3</sup> of the MIT sodium experiment , Eq.(10) gives $`\alpha _{\mathrm{ex}}/\alpha \approx 0.2`$.
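A minimal numerical check of this estimate (a sketch of our own, not part of the original analysis; the sodium scattering length is an assumed standard value, not quoted in this paper):

```python
# Numerical check of Eq.(10); the Na scattering length a = 2.75 nm is
# an assumed standard value, not taken from this paper.
n0 = 3.0e15                          # peak density [cm^-3]
a = 2.75e-7                          # scattering length [cm] (assumption)
ratio = 26.0 * (n0 * a**3) ** 0.5    # alpha_ex / alpha from Eq.(10)
print(f"alpha_ex/alpha = {ratio:.2f}")   # ~0.2, as stated in the text
```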
With increasing temperature, the role of annihilation of the excitations increases. However, our calculations from Eq.(9) show that even at $`T\sim \mu `$ the annihilation and stimulated emission of excitations give only a small correction to Eq.(10). Only at $`T>10\mu `$ does the annihilation dominate over the emission, and $`\alpha _{ex}`$ (9) becomes negative.
The most dramatic effect is the influence of the created excitations on the loss rate of Bose-condensed atoms, which we will discuss for temperatures $`T\ll \mu `$. For a high trap barrier, single-particle excitations with energies $`ϵ_k\gg \mu `$ can still be trapped. These atoms act as “bullets” penetrating the condensate. In a spherical trap this happens once per half of the oscillation period $`\pi /\omega `$. A characteristic time which a fast atom with velocity $`v_k`$ spends inside the condensate is $`R/v_k`$, where $`R=(2\mu /m\omega ^2)^{1/2}`$ is the Thomas-Fermi radius of the condensate. Hence, the rate of elastic collisions of the fast atom with condensate atoms is $`n_0\sigma v_k(\omega R/v_k)\sim n_0\sigma c_s`$, with $`\sigma =8\pi a^2`$ being the elastic cross section, and $`c_s=(\mu /m)^{1/2}`$ the sound velocity. In each elastic collision the fast atom transfers on average half of its energy to the collisional partner and removes it from the condensate. One then has to deal with two energetic atoms, and so on. This cascade process continues until the excitation energy becomes of order the chemical potential $`\mu `$. Accordingly, the number of lost condensate atoms will be $`\sim ϵ_k/\mu `$, and the characteristic time of the cascade process is $`\tau \approx 2(n_0\sigma c_s)^{-1}\mathrm{log}(\epsilon _k/\mu )`$. At realistic densities the time $`\tau `$ is much smaller than the characteristic recombination time $`\tau _r\sim (\alpha n_0^2)^{-1}`$.
The behavior of the excitations produced in the cascade process depends on the ratio $`T/\mu `$. At $`T\ll \mu `$ their damping time strongly increases at energies well below $`\mu `$ (the decay rate is at least much smaller than $`\mu (n_0a^3)^{1/2}/\hbar \sim n_0\sigma c_s`$) and is likely to exceed the recombination time $`\tau _r`$. Therefore, these excitations mostly remain undamped and no longer influence the number of atoms in the (partially destroyed) condensate.
Thus, one has a non-equilibrium “boiling” Bose-condensed sample: high-energy single-particle excitations, created in the recombination process, initiate a significant destruction of the condensate and the formation of a non-equilibrium non-condensed cloud. The corresponding loss rate of condensate atoms, $`\nu =L\int n_0^3d^3r`$, is determined by the rate constant of recombination-induced production of excitations with energies $`\epsilon _k\gg \mu `$, magnified by approximately $`ϵ_k/\mu `$:
$$L=\alpha \int \frac{d^3k}{(2\pi )^3}\left|\frac{h_𝐤}{ϵ_k+E_k/3}\right|^2\frac{ϵ_k}{\mu }\gamma ,$$
(11)
where the numerical coefficient $`\gamma \sim 1`$. A precise value of $`\gamma `$ depends on the detailed behavior of the damping rates of the excitations and, hence, on the trapping geometry.
The generated non-condensed cloud has energy $`\sim \mu `$ per particle and occupies a volume which is of order the Thomas-Fermi volume of the condensate. Similarly to the condensate, this cloud decays due to 3-body recombination and, in this respect, the quantity $`\nu `$ describes extra losses of (condensate) atoms from the sample.
The integral in Eq.(11) is divergent at high energies, and one should put an upper bound $`ϵ_k=E_B`$, where $`E_B`$ is the trap barrier. The inequality $`E_B\gg \mu `$ justifies that Eq.(11) indeed gives the loss rate due to the production of high-energy excitations ($`\epsilon _k\gg \mu `$). From Eq.(11) we obtain
$$L=\alpha (n_0a^3)^{1/2}\frac{81}{\sqrt{2\pi }}\left(\frac{E_B}{\mu }\right)^{1/2}\gamma .$$
(12)
As $`\mu =n_{0m}\stackrel{~}{U}`$, where $`n_{0m}`$ is the maximum condensate density, the rate constant $`L`$ is independent of the number of Bose-condensed atoms.
The direct loss rate of Bose-condensed atoms due to 3-body recombination is $`\nu _0=3\alpha \int n_0^3d^3r`$, as three atoms disappear immediately in each recombination event. Then, using Eq.(12) and the Thomas-Fermi density profile $`n_0(r)=n_{0m}(1-r^2/R^2)`$, we express the total loss rate of Bose-condensed atoms, $`\nu _t=\nu _0+L\int n_0^3d^3r`$, through $`\nu _0`$:
$$\nu _t=\nu _0[1+\frac{216}{11\sqrt{2\pi }}(n_{0m}a^3)^{1/2}\left(\frac{E_B}{\mu }\right)^{1/2}\gamma ].$$
(13)
The situation is the same at $`T\gg \mu `$, if the cascade production of excitations with energies $`\epsilon \sim \mu `$ makes the quasiparticle distribution strongly non-equilibrium and prevents the damping of these excitations caused by their interaction with each other and with the thermal cloud. The number of excitations produced in one cascade process is $`\sim E_B/\mu `$, and the number of thermal quasiparticles with $`\epsilon \sim \mu `$ is $`N_\mu \sim (\mu /\hbar \omega )^3`$. Thus, under the condition $`E_B>\mu N_\mu `$ one also has a non-equilibrium “boiling” Bose-condensed sample, and the loss rate of condensate atoms will be determined by Eq.(13).
Our calculations assume the $`s`$-wave scattering limit $`k|a|\ll 1`$ , and hence the maximum trap barrier for which they are valid is $`E_B\sim \hbar ^2/2ma^2`$. In the case of $`{}^{87}\mathrm{Rb}`$, the triplet scattering length is $`a=5.8`$ nm, and $`E_B=75`$ $`\mu `$K. Assuming $`\gamma \sim 1`$, this gives $`L_t\approx 3L_0`$ and shows the qualitative significance of our mechanism: the loss rate of Bose-condensed atoms is essentially magnified by the creation of high-energy excitations and their destructive influence on the condensate. To be more quantitative, one has to consider the kinetics of the excitations produced in the sample by the initially high-energy (trapped) atom.
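For orientation, a short sketch of ours reproducing these numbers (the peak density $`n_{0m}=10^{14}`$ cm<sup>-3</sup> is an assumption matching the typical value quoted below, $`\gamma `$ is set to 1, and $`\mu =n_{0m}\stackrel{~}{U}`$ is used for the chemical potential):

```python
import math

# Order-of-magnitude check of E_B = hbar^2/(2 m a^2) and of the
# magnification factor in Eq.(13) for 87Rb; the density n0m is an
# assumption, and small differences from the quoted 75 microK come
# from rounded constants.
hbar, kB = 1.0546e-34, 1.381e-23     # SI units
m = 87 * 1.6605e-27                  # 87Rb mass [kg]
a = 5.8e-9                           # triplet scattering length [m]
n0m = 1e20                           # peak density [m^-3] (assumption)

E_B = hbar**2 / (2 * m * a**2)
mu = n0m * 4 * math.pi * hbar**2 * a / m   # Thomas-Fermi mu = n0m*U_tilde
factor = 1 + 216 / (11 * math.sqrt(2 * math.pi)) \
           * math.sqrt(n0m * a**3) * math.sqrt(E_B / mu)
print(f"E_B ~ {E_B / kB * 1e6:.0f} microK")  # ~80 microK
print(f"nu_t/nu_0 ~ {factor:.1f}")           # ~3, as stated in the text
```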
In ongoing BEC experiments a characteristic temperature of a Bose-condensed sample is in the range from $`100`$ nK to $`1`$ $`\mu `$K and, hence, the above estimated magnification of the loss rate of the condensate atoms (a factor of 3 for $`E_B\approx 75\mu `$K) practically corresponds to switching off the evaporative cooling. With evaporative cooling on, the ratio $`E_B/\mu `$ for temperatures smaller than $`\mu `$ is in practice ranging from 2 to 5. Then, at typical densities $`n_0\sim 10^{14}`$ cm<sup>-3</sup> Eq.(13) gives only a 10% increase of the total loss rate of Bose-condensed atoms compared to $`L_0`$. To some extent this explains the recent experiments , where a strong increase of the 3-body losses in the condensate has been observed after switching off the evaporative cooling.
For $`T\gg \mu `$ one can also think of the situation where the cascade production of excitations with energies of order $`\mu `$ does not significantly destroy the equilibrium distribution of quasiparticles in the sample. This should be the case if $`E_B\ll \mu N_\mu `$. Then the damping of these excitations comes into play, continuously decreasing their energy and partially refilling the condensate. This damping originates from the (inelastic) scattering of a thermal excitation on a given excitation, which transfers the latter to the condensate and produces a thermal excitation of higher energy . A characteristic damping rate is of order $`\epsilon (n_0a^3)^{1/2}`$, and even for the lowest excitations ($`ϵ\sim \hbar \omega `$) it can be larger than the rate of recombination.
Consequently, one can conclude that the energy of excitations produced in the recombination process is thermalized in the gas. The Bose-condensed sample will be in quasiequilibrium, with continuously increasing temperature. This provides extra losses of Bose-condensed atoms. Due to refilling the condensate in the course of damping of the excitations, these losses will be smaller than the extra losses described by Eq.(12) in the case of a non-equilibrium “boiling” condensate.
The rate of energy transfer from the excitations, produced in the recombination process, to the thermal cloud determines the increase of the internal energy $`U`$ of the gas. One can write it as $`\dot{U}=Wn_0^3d^3r`$, where the quantity $`W`$ is obtained in the same way as Eq.(11):
$$W=\alpha \frac{d^3k}{(2\pi )^3}\left|\frac{h_𝐤}{ϵ_k+E_k/3}\right|^2ϵ_k.$$
(14)
Relying on Eq.(14) and the known expressions for $`U`$ and the number of Bose-condensed atoms $`N_0`$ as functions of $`T`$ and the total number of particles (see ), we have calculated the extra losses of condensate atoms $`|\partial N_0/\partial T|\dot{T}`$, related to the increase of temperature. At an initial density $`n_0\sim 10^{14}`$ cm<sup>-3</sup> they do not exceed $`10\%`$.
In conclusion, we have found that inelastic collisional processes in Bose-Einstein condensates can be accompanied by the creation of elementary excitations. It is worth mentioning that this phenomenon is not related to BEC as such. It originates from the presence of the mean-field interparticle interaction and will also occur in a non-condensed ultra-cold gas, as soon as the parameter $`na^3`$ is not extremely small. We have revealed the influence of the production of high-energy excitations in the course of 3-body recombination on the loss rate of atoms from a trapped condensate. This effect is especially pronounced for a high trap barrier $`E_B`$, and it would be valuable to perform a systematic experimental investigation of the loss rate of condensed atoms as a function of $`E_B`$.
We acknowledge fruitful discussions with J. Dalibard and M.A. Baranov. This work was financially supported by the Stichting voor Fundamenteel Onderzoek der Materie (FOM), by the CNRS, by INTAS, and by the Russian Foundation for Basic Studies.
L.K.B. is a research unit of the Ecole Normale Supérieure and the Université Pierre et Marie Curie, associated with the CNRS.
# A new method for analyzing ground-state landscapes: ballistic search
## I Introduction
The calculation of the energetic minima of spin-glass systems remains a paradigm of difficult optimization problems in physics. Usually, only one of the states exhibiting the lowest energy is calculated, even if a system is characterized by many minima all having the same lowest energy. In Ref. an algorithm is presented which allows one to analyze large numbers of ground states and to identify all ground-state funnels of Ising spin-glass systems efficiently. Moreover, it is possible to analyze all funnels without having all ground states available. In this work the algorithm is presented in detail. Since the algorithm has a random nature, one has to show that the method is in fact reliable. This will be the main part of this paper.
The algorithm is applicable to Edwards-Anderson (EA) $`\pm J`$ spin glasses. They consist of $`N`$ spins $`\sigma _i=\pm 1`$, described by the Hamiltonian
$$H\equiv -\sum _{\langle i,j\rangle }J_{ij}\sigma _i\sigma _j.$$
(1)
The sum $`\langle i,j\rangle `$ runs over all pairs of neighbors. The spins are placed on a $`d`$-dimensional lattice of linear size $`L`$ with periodic boundary conditions in all directions. Systems with quenched disorder of the interactions (bonds) are considered. Their possible values are $`J_{ij}=\pm 1`$ with equal probability. To reduce fluctuations, a constraint is imposed so that $`\sum _{\langle i,j\rangle }J_{ij}=0`$. Since the Hamiltonian contains no external field, reversing all spins of a configuration (also called a state) $`z=\{\sigma _i\}`$ results in a state with the same energy, called the inverse of $`z`$. In the following, a spin configuration and its inverse are regarded as one single state.
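As an aside, a minimal sketch of how the energy (1) can be evaluated numerically (our own illustration; the array names are arbitrary, and the zero-sum constraint on the bonds is omitted for brevity):

```python
import numpy as np

# Energy of the EA +-J model, Eq.(1), on an L x L x L lattice with
# periodic boundary conditions; each bond is counted exactly once.
def energy(spins, Jx, Jy, Jz):
    E = 0.0
    for axis, J in enumerate((Jx, Jy, Jz)):
        E -= np.sum(J * spins * np.roll(spins, -1, axis=axis))
    return E

rng = np.random.default_rng(0)
L = 4
spins = rng.choice([-1, 1], size=(L, L, L))
Jx, Jy, Jz = (rng.choice([-1, 1], size=(L, L, L)) for _ in range(3))
print(energy(spins, Jx, Jy, Jz))
```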
The study of ground-state landscapes helps to understand the nature of random systems . But the calculation of the minima of $`H`$ turns out to be a computationally hard problem: it is only for the special case of two-dimensional systems with periodic boundary conditions in at most one direction and without an external field that a polynomial-time algorithm is known for obtaining exact ground states . For more than two dimensions or in the presence of a magnetic field the problem belongs to the class of NP-hard tasks , i.e. only algorithms with exponentially increasing running time are available. The simplest method works by enumerating all $`2^N`$ possible states and obviously has an exponential running time; even a system size of $`4^3`$ is too large. The basic idea of the so-called branch-and-bound algorithm is to exclude those parts of state space where no low-lying states can be found, so that the complete low-energy landscape of systems of size $`4^3`$ can be calculated .
A more sophisticated method called branch-and-cut works by rewriting the quadratic energy function: a minimum of a linear function is then to be found, with an additional set of inequalities that must hold for all feasible solutions. Since not all inequalities are known a priori, the method iteratively solves the linear problem, looks for inequalities which are violated, and adds them to the set until the solution is found. Since the number of inequalities grows exponentially with the system size, the same holds for the computation time of the algorithm. Even so, with branch-and-cut small systems up to $`8^3`$ are feasible.
The method utilized here is able to calculate true ground states up to sizes $`14^3`$. For two-dimensional systems sizes up to $`50^2`$ can be treated. Additionally, in contrast to the methods mentioned earlier, the algorithm used here is able to calculate many statistically independent ground states for each realization of the randomness. The method is based on a special genetic algorithm and on the cluster-exact approximation (CEA) method . This technique is explained briefly in the next section.
But it is not only the computer time needed to calculate one ground state that may increase exponentially with the system size. For the $`\pm J`$ spin glass the number of ground states $`D`$, called the ground-state degeneracy, grows exponentially with $`N`$ as well. This is due to the fact that there are always free spins, i.e. spins which can be flipped without changing the energy of the system. A state with $`f`$ independent free spins allows for $`2^f`$ different configurations, all having the same energy. The quantity suitable to describe this behavior is the ground-state entropy
$$S_0\equiv k_B\overline{\mathrm{ln}D}$$
(2)
where the overline denotes the average over different realizations of the bonds. Since the number of free spins is extensive, the entropy per spin $`s_0\equiv S_0/N`$ is non-zero for the $`\pm J`$ spin glass.
As the ground-state degeneracy increases exponentially, it seems impossible to obtain all ground states unless the systems are very small. To overcome this problem, in this work all clusters (also called funnels) of ground states are calculated. A cluster is defined in the following way: two ground-state configurations are called neighbors if they differ only by the orientation of one free spin. All ground states which are accessible through this neighbor relation are defined to be in the same cluster.
The method presented here, called ballistic search (BS), is able to obtain all ground-state funnels without knowing all ground states. Additionally, one can estimate the size of the funnels. Consequently, it is possible to calculate directly the ground-state entropy per spin even for systems exhibiting a huge $`T=0`$ degeneracy. Furthermore, the number of funnels and their size distribution as a function of system size are of interest in their own right: for the infinite-ranged Sherrington-Kirkpatrick (SK) Ising spin glass a complex configuration-space structure was found using the replica-symmetry-breaking mean-field (MF) scheme by Parisi . If the MF approximation is valid for finite-dimensional spin glasses as well, then the number of ground-state funnels must diverge with increasing system size. On the other hand, the droplet-scaling picture predicts that, basically, one ground-state funnel dominates the spin-glass behavior . To address this issue a cluster analysis was performed for small three-dimensional systems of a single size $`L=4`$ . In Ref. two-dimensional spin glasses of sizes up to $`5\times 5`$ were investigated. As a first application of BS, an analysis of the size dependence of the number of clusters for $`d=3`$ (up to $`8\times 8\times 8`$) is presented in , revealing an exponential increase as a function of the number of spins.
The paper is organized as follows: first the procedures used in this work are presented in detail. Then the behavior of the algorithms is tested with respect to different system sizes and parameters. It is shown that BS works reliably. In Section IV a variant of BS is presented which allows one to estimate the size of clusters if only a small number of ground states is available per funnel. Next, as an application, the algorithm is utilized to investigate the ground-state cluster structure of two-dimensional $`\pm J`$ spin glasses. In particular, the dependence of the number of clusters and the number of ground states on $`N`$ is evaluated. The last section summarizes the results.
## II Algorithms
First the CEA method is explained briefly. Then, to illustrate the problem, a straightforward method to identify clusters of ground states is given. In the main part the BS method for finding clusters in systems exhibiting a huge ground-state degeneracy is presented.
The basic method used here for the calculation of spin-glass ground states is the cluster-exact approximation algorithm , which is a discrete optimization method designed especially for spin glasses. In combination with a genetic algorithm this method is able to calculate true ground states in three dimensions for systems of sizes up to $`L=14`$ on standard workstations. A detailed description of the method can be found in . Here the basic ideas of genetic CEA are summarized.
The concept of frustration is important for its understanding. A system is called frustrated if it is not possible to find a configuration where all bonds contribute with negative values to the energy; one says it is not possible to satisfy all bonds. The CEA method constructs iteratively and randomly a non-frustrated subset of spins within the system. Spins adjacent to many unsatisfied bonds are more likely to be added to the subset. During this construction a local gauge transformation of the spin variables is applied, so that all interactions between subset spins become ferromagnetic. The spins not belonging to the subset act like local magnetic fields on the subset spins; therefore, the ground state of the subset is not trivial. Since the subset gives rise only to ferromagnetic interactions, an energetic minimum state for its spins can be calculated in polynomial time using graph-theoretical methods : an equivalent network is constructed , the maximum flow is calculated , and the spins of the subset are adjusted to the orientations leading to a minimum in energy with respect to the subset. Therefore, the energy of the total system is decreased or remains the same. By iterating this process a few times the total energy of a system is decreased quite efficiently, but obtaining true ground states this way turned out to be very hard.
To increase the efficiency of CEA it is combined with a genetic algorithm . Genetic algorithms are biologically motivated: an optimal solution is found by treating many instances of an optimization problem in parallel, keeping only the better instances and replacing bad ones by new ones (survival of the fittest). With an appropriate choice of a few simulation parameters, usually more than 90% of all genetic CEA runs end up in a true ground state. Configurations with a higher energy are not included in further calculations.
Using this method one does not encounter ergodicity problems or critical slowing down like in algorithms which are based on Monte-Carlo methods. Moreover, it is possible to calculate many statistically independent configurations (replicas). Genetic CEA was already utilized to examine the ground-state landscape of two-dimensional and three-dimensional $`\pm J`$ spin glasses by calculating a small number of ground states per realization. Furthermore the existence of a spin-glass phase for nonzero temperature was confirmed for the three-dimensional spin glass . Finally, the method was applied to the $`\pm J`$ random-bond model to investigate its $`T=0`$ transition from ferromagnetism to spin-glass behavior , which takes place by increasing the fraction of antiferromagnetic bonds starting from a ferromagnet.
Once many ground states are calculated, the straightforward method to obtain the structure of the clusters works as follows: the construction starts with an arbitrarily chosen ground state. All other states which differ from this state by the flip of one free spin are said to be its neighbors. They are added to the cluster. These neighbors are treated recursively in the same way: all their neighbors which are not yet included in the cluster are added. After the construction of one cluster is complete, the construction of the next one starts with a ground state which has not been visited so far.
The running time of the construction of the clusters is only a linear function of the degeneracy $`D`$ ($`O(D)`$), similar to the Hoshen-Kopelman technique , because each ground state is visited only once. Unfortunately, the detection of all neighbors, which has to be performed at the beginning, is of $`O(D^2)`$, because all pairs of states have to be compared. Since $`L=5`$ systems may already exhibit more than $`10^5`$ ground states, this algorithm is not suitable for sizes larger than $`L=5`$.
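A compact sketch of this straightforward construction (our own illustration; states are represented as spin tuples, and the identification of a state with its inverse is omitted for brevity):

```python
from collections import deque

# Straightforward O(D^2) clustering: two states are neighbors if they
# differ in exactly one (free) spin; clusters are the connected
# components of this neighbor relation.
def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def find_clusters(states):
    unvisited = set(range(len(states)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = {seed}, deque([seed])
        while queue:
            i = queue.popleft()
            for j in list(unvisited):
                if hamming(states[i], states[j]) == 1:
                    unvisited.remove(j)
                    cluster.add(j)
                    queue.append(j)
        clusters.append(cluster)
    return clusters
```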
The ballistic-search method, instead, is able to treat larger systems. Its basic idea is to use a test which tells whether two ground states are in the same cluster or not. The test works as follows: given two independent replicas $`\{\sigma _i^\alpha \}`$ and $`\{\sigma _i^\beta \}`$, let $`\mathrm{\Delta }`$ be the set of spins which differ in the two states: $`\mathrm{\Delta }\equiv \{i|\sigma _i^\alpha \ne \sigma _i^\beta \}`$. Now BS tries to build a path in configuration space of successive flips of free spins which leads from $`\{\sigma _i^\alpha \}`$ to $`\{\sigma _i^\beta \}`$. The path consists of states which differ only by flips of free spins from $`\mathrm{\Delta }`$ (see Fig. 1). In the simplest version, a free spin is iteratively selected at random from $`\mathrm{\Delta }`$, flipped, and removed from $`\mathrm{\Delta }`$. This test does not guarantee to find a path between two ground states which belong to the same cluster: it may depend on the order of selection of the spins whether a path is found or not, because not all free spins are independent of each other. Thus, a path is found with a certain probability $`p_f`$, which depends on the size of $`\mathrm{\Delta }`$. The behavior of $`p_f`$ is analyzed later on.
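A sketch of this simplest version of the test (our own illustration; `J_neighbors[i]` is an assumed data structure listing the pairs `(j, J_ij)` of neighbors of spin `i`, and a spin is taken to be free when its local field vanishes):

```python
import random

# Simplest version of the ballistic-search path test.
def bs_path_exists(alpha, beta, J_neighbors, rng=random):
    sigma = list(alpha)
    delta = {i for i in range(len(alpha)) if alpha[i] != beta[i]}
    while delta:
        free = [i for i in delta
                if sum(J * sigma[j] for j, J in J_neighbors[i]) == 0]
        if not free:
            return False        # no path found (one may still exist)
        i = rng.choice(free)
        sigma[i] = -sigma[i]    # flip a free spin from Delta ...
        delta.remove(i)         # ... and remove it from Delta
    return True
```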
The algorithm for the identification of clusters using BS works as follows: the basic idea is to let a ground state represent that part of a funnel which can be reached with high probability using BS starting from this ground state. If a cluster is large, it has to be represented by a collection of states, so that the whole cluster is “covered”. For example, a typical cluster of an $`L=8`$ spin glass consisting of $`10^{17}`$ ground states is usually represented by only a few ground states (e.g. two or three). A detailed analysis of how many representing ground states are needed as a function of cluster and system size can be found in the next section. At any time the algorithm stores a set of $`m`$ clusters $`A=\{A(r)|r=1,\mathrm{\dots },m\}`$, each consisting of a set $`A(r)=\{z^{rl}\}`$ of representing configurations $`z^{rl}=\{\sigma _i^{rl}\}`$ $`(l=1,\mathrm{\dots },|A(r)|)`$. At the beginning the cluster set is empty. Iteratively, all available ground states $`z^j=\{\sigma _i^j\}`$ ($`j=1,\mathrm{\dots },D`$) are treated: the BS algorithm tries to find paths from $`z^j`$ or its inverse to all representing configurations in $`A`$. Let $`F`$ be the set of cluster numbers where a path is found. Now three cases are possible (see Fig. 2):
* No path is found: $`F=\mathrm{\emptyset }`$
A new cluster is created, which is represented by the configuration currently treated: $`A(m+1)\equiv \{z^j\}`$. The cluster is added to $`A`$: $`A\to A\cup \{A(m+1)\}`$.
* One or more paths are found to exactly one cluster: $`F=\{f_1\}`$. Thus, the ground state $`z^j`$ belongs to one cluster. Consequently, nothing special happens; the set $`A`$ remains unchanged.
* $`z^j`$ is found to be in more than one cluster: $`F=\{f_1,\mathrm{\dots },f_k\}`$. All these clusters are merged into one single cluster, which is now represented by the union $`\stackrel{~}{A}`$ of the states which represented the affected clusters before the merge:
$`\stackrel{~}{A}\equiv \bigcup _{j=1}^kA(f_j)`$, $`A\to \{\stackrel{~}{A}\}\cup A\setminus \bigcup _{j=1}^k\{A(f_j)\}`$
The whole loop is performed twice. The reason is that a state which links two parts of a large cluster (case 3) may appear in the sequence of ground states before the states belonging to the second part of the cluster appear. Consequently, this linking state would be treated as part of one single smaller cluster, and the two subclusters would not be recognized as one larger cluster (see Fig. 3). During the second iteration the “linking” state is compared to all other representing states found in the first iteration, i.e. the large cluster is identified correctly. With one iteration the problem appears only if few ground states per cluster are available. Nevertheless, two iterations are always performed, so the difficulty does not occur.
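One possible coding of this identification loop with the three cases (our own sketch, reusing `bs_path_exists` from the previous sketch; a state together with its inverse counts as one state):

```python
def invert(z):
    return tuple(-s for s in z)

def identify_clusters(ground_states, J_neighbors):
    A = []                                   # representative sets
    for _ in range(2):                       # two passes, see text
        for z in ground_states:
            F = [r for r, reps in enumerate(A)
                 if any(bs_path_exists(z, w, J_neighbors) or
                        bs_path_exists(invert(z), w, J_neighbors)
                        for w in reps)]
            if not F:                        # case 1: new cluster
                A.append([z])
            elif len(F) > 1:                 # case 3: merge clusters
                merged = [w for r in F for w in A[r]]
                A = [reps for r, reps in enumerate(A) if r not in F]
                A.append(merged)
                                             # case 2: nothing to do
    return A
```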
The BS identification algorithm has the following advantages: since each ground-state configuration represents many ground states, the method does not need to compare all pairs of states. Each state is compared only with the representing configurations. For the system sizes usually encountered, this number is only slightly larger than the number of funnels itself. Thus, the computer time needed for the calculation grows only slightly faster than $`O(Dn_C)`$, where $`n_C`$ is the number of clusters. Consequently, large sets of ground states, which appear already for small system sizes, can be treated. Furthermore, the ground-state funnel structure of even larger systems can be analyzed, since it is sufficient to have only a small number of ground states per cluster available. One has to ensure that really all clusters are found, which is simply done by calculating enough states. A study of how many states are needed for different sizes $`L`$ is presented in the next section.
## III Numerical Tests
Since the BS cluster algorithm does not guarantee that all clusters are found, numerical tests on two- and three-dimensional systems were performed. Here, the tests for three dimensions are presented, because for this type of system results are already available . For two dimensions the algorithm behaves similarly. Results concerning the number of ground states and the number of funnels for $`d=2`$ are presented later on.
For system sizes $`L=3,4,5,6,8`$ large numbers of independent ground states were calculated using genetic CEA. Usually 1000 different realizations of the disorder were considered. Tab. I shows the number of realizations $`n_R`$ and the number of independent runs $`r`$ per realization for different system sizes $`L`$. For small system sizes (and for 100 realizations of $`L=5`$) many runs plus an additional local search were performed to calculate all existing ground states. For the larger sizes $`L=5,6,8`$ the number of ground states is too large, so it is only possible to try to calculate at least one ground state per cluster. We will see later that for most of the realizations it is highly probable that all existing clusters were found using genetic CEA.
But first we concentrate on another issue: the ground states were grouped into clusters using the ballistic-search algorithm. To interpret the following results correctly, one should keep in mind that for detecting that a ground state is part of a cluster it is sufficient to find just one path to any of the other states of the cluster. The question under consideration now is: how large is the probability that the BS test finds a path between two ground states belonging to the same cluster?
To investigate this question the following test was performed: many thousands of times, pairs of ground states belonging to the same cluster were selected. The probability of selecting a pair was proportional to the size of the cluster (how to estimate the size of a cluster, if not all ground states are available, is shown in the next section). This guarantees that each ground state contributes to the result with its proper thermodynamical weight. The outcome of this test depends on the assumption that the construction of the clusters has been performed correctly; later we will see that this has indeed been the case with a very high probability. Let $`p_f`$ be the probability that BS finds a direct path in configuration space connecting two given states. The result is expected to depend on the number of spins which are different in the two states, i.e. on the length $`l_{\text{path}}`$ of the path. The result is shown in Fig. 4 for system sizes $`L=3,4,5,6,8`$. The probability decreases with increasing length of the path. Thus, finding a successful path becomes more difficult, which is to be expected, since the number of possible paths increases exponentially with $`l_{\text{path}}`$. On the other hand, by increasing the system size, it becomes more likely to find a connecting path. This is caused by the fact that the number of isolated free spins increases, which in fact can be flipped in any order. To investigate the dependence on $`L`$ the following finite-size behavior is assumed ($`\lambda `$ being a scaling exponent):
$$p_f(l_{\text{path}},L)=\stackrel{~}{p}_f(L^{-\lambda }l_{\text{path}}).$$
(3)
By plotting $`p_f(l_{\text{path}},L)`$ against $`L^{-\lambda }l_{\text{path}}`$ with the correct parameter $`\lambda `$, the data points for different system sizes near $`l_{\text{path}}=0`$ should collapse onto a single curve. The best results were obtained for $`\lambda =1.7`$. In Fig. 5 the resulting scaling plot is shown. Now assume that two ground states differ by a certain fraction of spins. Then the absolute number of spins being different increases with $`L^3`$. Since the length of a path for a fixed value of $`p_f`$ increases only with $`L^{1.7}`$, it indeed becomes more and more difficult to find a path with increasing system size $`L`$.
So far the behavior of the ballistic search itself was investigated. But what does it mean for the cluster-identification algorithm? We are interested in the question whether all clusters are identified correctly. This can be formulated as a generalized percolation problem:
* Consider
+ A set $`B=\{z^1,\mathrm{\dots },z^K\}`$ of objects
+ A distance function $`d(z^a,z^b)`$
+ A probability $`p_{\text{bond}}(d)`$ that a bond is created between two elements from $`B`$. The probability depends on the distance $`d`$ between the elements and decreases monotonically with $`d`$.
* The quantity of interest is the probability $`p_1`$ that all objects belong to the same cluster, i.e. the probability that there is only one cluster.
One can identify $`B`$ with a set of ground states belonging all to the same cluster, $`d(z^a,z^b)`$ with $`l_{\text{path}}`$ and $`p_{\text{bond}}(d)`$ with the probability $`p_f`$ that a path is found. Then the quantity $`p_1`$ is the probability that all ground states are identified correctly by the BS clustering algorithm as being members of the same cluster. Since the average distance between different states of a given cluster decreases by increasing the number $`K`$ of states, $`p_1`$ should increase with $`K`$. The reason is that more bonds are likely to be created (see Fig. 6). It should be possible to determine $`p_1(K)`$ for different functions $`d(z^a,z^b)`$ and $`p_{\text{bond}}(d)`$, at least numerically. But here a different approach is chosen: since all ground states and funnels are available, $`p_1(K)`$ can be evaluated directly. For each realization, each lattice size $`L`$ and each number $`K\in [2,20]`$, a set of $`K`$ different ground states was selected randomly 50 times from one cluster. Again each ground-state funnel was chosen with a weight proportional to its size. The BS clustering method was applied and it was verified whether just one cluster was found. The result is shown in Fig. 7. As the system size increases, larger clusters occur, which are harder to identify. But it is visible that even two ground states are sufficient most of the time to identify a cluster correctly. To be almost sure, a value of $`p_1>0.999`$ is expected to be sufficient, which means that $`K=10`$ is enough for $`L=3`$ and, as found by further analysis, $`K=40`$ for $`L=8`$.
In fact, for the largest ground-state funnels found in this work there are usually many more ground states available than needed for identifying a cluster correctly with a probability of 99.9%. Consequently, our results for the probability $`p_f`$ and $`p_1`$ are very reliable. Furthermore, whenever the number of ground states is too small, it is always possible to generate additional states by performing $`T=0`$ Monte-Carlo simulations, i.e. selecting spins randomly and flipping them if they are free. Consequently, we can be sure that the funnels, which were used for the preceding analysis, were obtained correctly.
Another question is whether for a given realization there are some ground-state funnels for which no ground states are found using just a restricted number of runs of genetic CEA. This problem does not occur for the smallest sizes, because it is possible to calculate all states of lowest energy using that method. But even for $`L=6`$ there are realizations already exhibiting more than $`10^6`$ ground states, making it impossible to obtain all of them directly. The genetic CEA method calculates a ground state with a probability $`p_C`$, which increases on average with the size $`|C|`$ of the cluster it belongs to . Thus, ground states belonging to small funnels have a small probability $`p_C`$ of being found using a finite number of runs. This probability is not extremely small, since $`p_C`$ increases more slowly than the size of a cluster $`|C|`$ , i.e. for $`|C_1|<|C_2|`$
$$\frac{p_{C_1}}{|C_1|}>\frac{p_{C_2}}{|C_2|},$$
(4)
but it is still small enough that some funnels may have been missed. Since $`p_C`$ increases with $`|C|`$, the probability that a cluster is not found at all takes the largest values for the smallest clusters, i.e. for a cluster of size 1. This probability is denoted here with $`\overline{p}_1`$. Consequently, $`\overline{p}_1`$ is an upper limit for the probability that a certain cluster is missed. Now $`\overline{p}_1`$ is estimated.
Consider a list of $`K`$ ground states $`z^1,\mathrm{\dots },z^K`$ ($`z^j=\{\sigma _i^j\}`$), in the order in which they were obtained in the calculation using genetic CEA. Thus, on average, states from larger funnels appear earlier in the list than states from smaller funnels. A state which is calculated several times is stored in the list just once. For each state the number of times $`h_j`$ it has occurred is recorded; let $`h\equiv \sum _jh_j`$. For small systems, where the number of existing ground states is small compared to the number of runs, usually $`h_j>1`$. Now we look at the smallest cluster $`C_{\mathrm{min}}`$ which was found using the procedure.
The relative frequency that a ground state from $`C_{\mathrm{min}}`$ is found is approximately $`p_{\mathrm{min}}=\sum _{j\in C_{\mathrm{min}}}h_j/h`$. It follows from (4) that $`p_1>p_{\mathrm{min}}/|C_{\mathrm{min}}|`$. Thus, we have for the probability $`\overline{p}_1`$ that a cluster of size 1 is not found during $`h`$ different runs
(5)
Consequently, it is possible to estimate for each single realization the likelihood that a small cluster may have been missed. For the smaller sizes, where it was claimed that all ground states were found using a large number of runs, typical values $`\overline{p}_1<10^{-10}`$ were found. A small cluster was missed with $`\overline{p}_1>0.01`$ only for three realizations of $`L=3`$ and never for $`L=4,5`$. Thus, it is highly probable that all ground states were found for the smaller sizes $`L=3,4,5`$.
For larger sizes, the number of states obtained per cluster is small compared to the size of the cluster, and the estimate (5) always gives a large value. Consequently, another method of estimating the quality of the results has to be applied: the progression of the BS cluster algorithm is observed during the processing of the ground states $`z^1,\mathrm{\dots },z^K`$. Each state which causes a new cluster to be created or some clusters to be merged is called an event. Since there is only a finite number $`n_C`$ of clusters and each cluster is represented by a finite number of configurations, there is only a finite number of events. For the system sizes encountered here, the number of events is only slightly larger than $`n_C`$, because most of the clusters are represented by just one ground state. In principle, if the last event is known, no further ground states have to be processed. Since the last event is not known for system sizes $`L>5`$, one can only assume that the last event has already occurred if no new event is found for a long time while treating more and more states $`z^j`$. At each step let
$$Q(j)\equiv \frac{j}{\text{number of the last event before }z^j}.$$
(6)
The fraction $`Q`$ measures the relative length of sequences where no event occurs. For $`j`$ beyond the last event, $`Q(j)\to \mathrm{\infty }`$. The longest such sequence found before the last event, $`Q\equiv \mathrm{max}_{j\le \text{last}}Q(j)`$, describes how many states are needed to find all clusters without knowing the last event.
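A small sketch of ours showing how $`Q(j)`$ can be computed from the recorded event positions:

```python
# Computing Q(j), Eq.(6), from the (1-based) positions of the events
# in the processed sequence of K ground states.
def Q_values(event_positions, K):
    qs, last = [], None
    for j in range(1, K + 1):
        if j in event_positions:
            last = j
        qs.append(j / last if last else float("inf"))
    return qs

# Example: events at states 1, 3 and 10 in a run of K = 25 states.
qs = Q_values({1, 3, 10}, 25)
print(max(qs[:10]), qs[-1])   # Q = 3.0 before the last event; Q(K) = 2.5
```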
Using the numbers of runs of the genetic CEA algorithm given above, a value of $`Q`$ larger than 4 was never observed for $`L=3`$. This means one can be sure that all funnels have been identified if the number of ground states processed is four times larger than the number of the state constituting the last event. For $`L=4`$ the largest $`Q`$ found was 3, and for $`L=5`$ it was observed that $`Q<2.5`$ for all realizations. For larger sizes $`Q`$ is not known, so $`Q(K)`$ is used instead. Assuming that the last event has already occurred, $`Q(K)`$ grows linearly as more and more states are included in the analysis, and the confidence increases that really no further event is to be expected. We believe that if the number $`K`$ of ground states is more than four times the number of the last event in $`z^1,\mathrm{\dots },z^K`$, i.e. $`Q(K)>4`$, one can be quite sure that all funnels have been found, because $`Q\le 4`$ for all small sizes. Even if $`Q(K)=2`$, it seems very likely that no cluster has been missed, since $`Q`$ seems to decrease by going to larger sizes. For $`L=5,6`$, $`Q(K)<4`$ was found for only about 8% of the realizations, and $`Q(K)<2`$ only for 2% ($`L=5`$) and 5% ($`L=6`$), respectively. Consequently, nearly all clusters have been identified. $`Q>4`$ holds for 75% of all $`L=8`$ realizations ($`Q>2`$ for 85%), i.e. here a small number of funnels may have been missed for 25% of all realizations, while for the majority of the realizations really all funnels have been detected. Only by massively increasing the number of available ground states per realization is a substantial improvement possible for the largest size treated in this work.
On the other hand, if physical properties are to be evaluated, the results are very reliable even for $`Q(K)\approx 2`$, because each cluster contributes with a weight proportional to its size. As mentioned before, the probability that genetic CEA returns a certain ground state increases with the cluster size. Consequently, only small clusters are omitted and the result is affected only slightly.
## IV Size of a cluster
Once all clusters are identified, their sizes have to be obtained in order to calculate the entropy. A variant of BS is used to perform this task. Starting from a state $`\{\sigma _i\}`$ of a cluster $`C`$, free spins are flipped iteratively, but each spin at most once. During the iteration additional free spins may be generated and other spins may become fixed. When there are no free spins left which have not been flipped before, the process stops. Thus, one has constructed a straight path in state space from the ground state to the border of the funnel $`C`$. The number of spins that have been flipped is denoted by $`l_{\mathrm{max}}`$. By averaging over several trials and several ground states of a cluster, one obtains an average value $`\overline{l}_{\mathrm{max}}`$, which is a measure of the size of the cluster.
For system sizes $`L=3,4,5`$ all ground states were available (for 100 realizations for $`L=5`$) and the cluster sizes are known exactly. Fig. 8 displays the average size $`V`$ of a cluster as a function of $`l_{\mathrm{max}}`$. An exponential dependence is found, yielding $`V=2^{\alpha l_{\mathrm{max}}}`$ with $`\alpha =0.90(5)`$. The deviation from the pure exponential behavior for the largest clusters of each system size should be a finite size effect.
One might think that instead of successively turning spins over, one could simply count the static number of free spins. But it turns out that the quantity $`l_{\mathrm{max}}`$ describes the size of a cluster better. The reason is that by flipping spins additional free spins are created and deleted. Consider for example a one-dimensional chain of $`N`$ ferromagnetically coupled spins with antiperiodic boundary conditions. Each ground state consists of two linear domains of spins, and within each domain all spins have the same orientation. For each ground state there are just two free spins, but all $`2N`$ ground states belong to the same cluster. The possibility of similar ground-state topologies is taken into account by the definition given above.
## V Results
The data presented in the preceding sections show that earlier results for three-dimensional spin glasses, where an exponential increase of the degeneracy and of the number of ground-state funnels was found, are reliable. In this section a similar analysis of the ground-state landscape of two-dimensional systems is performed. It will be shown that qualitatively the behavior is the same as for $`d=3`$.
For system sizes $`L=5,7,10,14,20`$ large numbers of independent ground states were calculated using genetic CEA; up to $`10^4`$ runs per realization were performed. Since many runs are needed to describe the ground-state landscape as completely as possible, no runs for larger systems were conducted, although it is possible to obtain true ground states easily up to $`L=50`$. Usually 1000 different realizations of the disorder were considered, except for $`L=20`$, where only 92 realizations could be treated. For the small system sizes $`L=5,7`$ many runs plus an additional local search were performed to calculate all ground states. For the larger sizes $`L=10,14,20`$ the number of ground states is too large, so we restrict ourselves to calculating at least one ground state per cluster. The probability that some clusters were missed is higher in two dimensions than in the $`d=3`$ case, because the ground-state degeneracy grows faster with the system size: for small system sizes $`L\le 10`$ it is again highly probable that all funnels have been obtained. For $`L=14`$ some small funnels may have been missed for about 30% of all realizations, while for $`L=20`$ this fraction rises even to 60%. This is due to the enormous computational effort needed for the largest systems. For the $`L=20`$ realizations a total computing time of more than 2 CPU-years was consumed on a cluster of Power-PC processors running at 80 MHz.
The ground states were grouped into clusters using the ballistic-search algorithm. The number of states per funnel was sufficiently large, so that the probability that some configurations from a large cluster are mistaken as belonging to different funnels is less than $`10^{-3}`$. In Fig. 9 the average number $`n_C`$ of clusters is shown as a function of the number $`N`$ of spins. Visualizing the results in a double-logarithmic plot (see inset), one realizes that $`n_C`$ seems to grow faster than any power of $`N`$. The larger slope in the linear-logarithmic plot for small systems may be a finite-size effect. Additionally, for $`L=20`$ there is a large probability that some small funnels are missed, explaining the smaller slope there. Consequently, the data presented here favor an exponential increase of $`n_C(N)`$.
For the small system sizes the number of ground states in each cluster could be counted directly. For the larger sizes the variant of the BS method was used to estimate the size of each cluster. The average size $`V`$ of a cluster as a function of $`l_{\mathrm{max}}`$ is displayed in Fig. 10. Similar to the results presented in the preceding section, an exponential dependence is found, yielding $`V=2^{\alpha l_{\mathrm{max}}}`$ with $`\alpha =0.85(5)`$.
By summing up all cluster sizes for each realization the ground-state degeneracy $`D`$ is obtained. The quantity averaged over all realizations is shown in Fig. 11 as a function of $`N`$. The exponential growth is obvious.
The result for the average ground-state entropy per spin is shown in the inset of Fig. 11. By fitting a function of the form $`s_0(L)=s_0(\infty )+aL^{-\beta }`$ a value of $`s_0(\infty )=0.0856(4)k_B`$ is obtained.
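This kind of finite-size extrapolation can be reproduced with a standard nonlinear fit. The data points below are placeholders, since the measured $`s_0(L)`$ values are not tabulated here; only the fitting form is taken from the text:

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder values for s0(L); the actual data points of Fig. 11 would
# be used here.
L = np.array([5.0, 7.0, 10.0, 14.0, 20.0])
s0 = np.array([0.0780, 0.0810, 0.0835, 0.0847, 0.0853])

def fss(L, s_inf, a, beta):
    return s_inf + a * L**(-beta)          # s0(L) = s0(inf) + a L^(-beta)

popt, pcov = curve_fit(fss, L, s0, p0=(0.086, -0.05, 1.0))
print(popt[0], np.sqrt(pcov[0, 0]))        # extrapolated s0(inf) and its error
```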
A previous estimate of $`s_0\approx 0.075k_B`$ was obtained by using a recursive method to evaluate numerically exact free energies up to $`L=18`$. For technical reasons, only systems with periodic boundary conditions in one direction were treated there, which may be the reason for the smaller result. Another reported value of $`s_0\approx 0.07k_B`$ is even slightly lower. The value $`s_0\approx 0.1k_B`$ found by a Monte-Carlo simulation for systems of size $`80^2`$ is much larger. The deviation is presumably caused by the fact that it was not possible to obtain true ground states for systems of that size, i.e. too many states were visited. The entropy is considerably higher than for the three-dimensional $`\pm J`$ spin glass, where $`s_0(\infty )=0.051(1)k_B`$ was obtained using the same method applied here.
The result for the entropy does not suffer from the fact that some ground-state funnels may have been missed for $`L=14,20`$: the probability of finding a cluster by applying genetic CEA grows with the size of the cluster. This implies that the clusters which may have been missed are rather small, so the influence on the result is negligible. The largest source of uncertainty is the assumption that the size of a cluster grows like $`2^{\alpha \overline{l}_{\mathrm{max}}}`$. The error of the constant $`\alpha `$ enters linearly into the result for the entropy. To estimate the influence of this approximation, $`s_0`$ was calculated using estimated cluster sizes for the three smallest system sizes as well, where the entropy had been obtained exactly. In both cases the results were equal to the exact values within error bars. The final result quoted here is $`s_0=0.086(4)k_B`$.
## VI Conclusion
The ballistic-search method has been presented, which allows the fast identification of very large clusters, appearing for example in the calculation of the ground-state landscape of $`\pm J`$ spin glasses. Furthermore, it is possible to calculate clusters of systems when only a small fraction of all their states is available. The method should be extendable to similar clustering problems, especially for analyzing results from simulations at finite temperature. A variant of the technique is used to estimate the size of the clusters.
Since the BS algorithm is not guaranteed to find a path in configuration space between two ground states which belong to the same cluster, extensive numerical tests were performed. It was shown that by increasing the number of available states it is possible to reduce the probability that a cluster is not identified correctly. Additionally, it is possible to estimate the probability that small clusters are not found. Consequently, the new technique makes it possible to analyze the complete funnel structure of two-dimensional $`\pm J`$ spin glasses up to $`L=20`$ and of three-dimensional systems up to $`L=8`$. Thus, systems exhibiting up to $`10^{17}`$ ground states can be treated efficiently.
For $`d=2`$ an analysis of the ground-state landscape has been presented. The number of funnels and the ground-state degeneracy increase exponentially with the system size. The ground-state entropy per spin was found to be $`s_0=0.086(4)k_B`$. Results for three-dimensional systems can be found elsewhere. It should be pointed out that the result for the entropy does not depend on the way a cluster is defined. The specific definition given here is only a tool which allows the treatment of systems exhibiting a huge ground-state degeneracy.
## VII Acknowledgements
The author thanks K. Battacharya for interesting discussions. He is grateful to P. Müller for critically reading the manuscript. He was supported by the Graduiertenkolleg “Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften” at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen in Heidelberg and the Paderborn Center for Parallel Computing by the allocation of computer time. The author obtained financial support from the DFG (Deutsche Forschungs Gemeinschaft).
# Weak localisation in AlGaAs/GaAs 𝑝-type quantum wells
## Abstract
We have for the first time experimentally investigated the weak localisation magnetoresistance in an AlGaAs/GaAs $`p`$-type quantum well. The peculiarity of such systems is that the spin-orbit interaction is strong, so that on the theoretical side it is not possible to treat it as a perturbation. This is in contrast to all prior investigations of weak localisation. In this letter we compare the experimental results with a newly developed diffusion theory which explicitly describes the weak localisation regime when the spin-orbit coupling is strong. The spin relaxation rates calculated from the fitting parameters were found to agree with theoretical expectations. Furthermore, the fitting parameters indicate an enhanced phase breaking rate compared to theoretical predictions.
PACS numbers: 73.61.Ey, 73.20.Fz
The effect of localisation in weakly disordered systems can be understood in terms of the quantum interference between two waves propagating by multiple scattering along the same path but in opposite directions. When a magnetic field is applied, the phases picked up along the two paths have opposite signs and, as a consequence, a negative magnetoresistance is observed. This effect is normally known as weak localisation.
Due to the properties of the spin part of the wavefunction, spin-orbit interaction has been shown to have a dramatic influence on weak localisation. In systems with strong spin-orbit interaction the magnetoresistance reverses its sign. This effect, in contrast to the above, is known as weak antilocalisation.
Traditionally, weak antilocalisation has been studied intensely in metallic films, where spin-orbit interaction occurs at the individual scattering centers. More recently weak antilocalisation has been observed in true two-dimensional systems which lack inversion symmetry, like $`n`$-type GaAlAs/GaAs or Te quantum wells. The lack of inversion symmetry gives rise to a new spin relaxation mechanism. This has surprisingly led to completely new physical insight (see also references therein).
However, almost all previous work refers to $`n`$-type quantum wells. In the case of a $`p`$-type quantum well an even more pronounced positive magnetoresistance is expected, due to the strong spin-orbit interaction in the GaAs valence band.
In recent theoretical works devoted to weak localisation in $`p`$-type quantum wells it was shown how the sign of the magnetoresistance depends on the hole concentration. Moreover, an anisotropy of the spin relaxation was predicted, which in turn leads to a dependence of the phase relaxation rate on the spin orientation. Experimental investigations of the anomalous magnetoresistance in $`p`$-type quantum wells have so far been lacking. In this work, for the first time, the magnetoresistance is studied experimentally in $`p`$-type quantum wells, and the peculiarities of weak localisation are discussed in the case where spin and momentum relaxation rates are comparable.
The heterostructures used in the experiment were grown on an oriented GaAs wafer by the Molecular Beam Epitaxy (MBE) technique. A symmetrical quantum well was formed as a 70Å wide GaAs channel in a modulation doped Ga<sub>0.5</sub>Al<sub>0.5</sub>As matrix. The GaAlAs was homogeneously doped with Be ($`\mathrm{n}_{\mathrm{Be}}=2\times 10^{18}\mathrm{cm}^{-3}`$) in two 50Å thick layers separated by 250Å of intrinsic Ga<sub>0.5</sub>Al<sub>0.5</sub>As from the centre of the GaAs channel. The individual samples were mesa-etched into rectangular Hall bars with a width of 0.2mm and a total length of 4.2mm. Three voltage contacts on each side were placed at a distance of 0.8mm from each other to avoid perturbing the four-point measurements significantly. Ohmic contacts to the 2-dimensional hole gas were made by a Au/Zn/Au composite film annealed at 460°C for 3 minutes. The contact areas were $`0.6\times 0.6\mathrm{mm}^2`$ squares, bonded to the legs of a nonmagnetic chip carrier. Four-point measurements of the resistivity were carried out using the standard low frequency lock-in technique (EG$`\&`$G 5210). The samples were biased by an AC current signal with an amplitude of 200nA. The experiments were performed at temperatures between 0.3 and 1.0K in an Oxford Heliox cryostat equipped with a copper electromagnet. The characterisation of the samples with respect to density and mobility was done by Hall measurements at magnetic fields between -0.3T and 0.3T, while the weak localisation magnetoresistance measurements were performed at fields between -100Gs and 100Gs. To generate a stable current for the magnetic fields we used a Keithley 2400. The samples were found to have a hole density of $`p=4.4\times 10^{15}\mathrm{m}^{-2}`$, which is low enough to ensure that only one subband is filled. The mobility was found to be $`\mu =3.5\mathrm{T}^{-1}`$.
It is well known that the weak localisation effect on the magnetoconductivity manifests itself most clearly when $`k_\mathrm{F}l\gg 1`$, corresponding to a metallic conductivity of the system. Here $`k_\mathrm{F}`$ is the Fermi wave vector and $`l`$ is the mean free path. For our samples this product may be estimated with the help of the two-dimensional Drude conductivity, $`\sigma _D`$
$$\sigma _D=\frac{e^2}{2\pi \hbar }k_\mathrm{F}l.$$
(1)
In the studied samples $`\sigma _D=2.47\times 10^{-3}\mathrm{\Omega }^{-1}`$, which gives $`k_Fl\approx 63`$. The value of $`k_F`$ may be determined from the hole concentration, $`k_F=\sqrt{2\pi p}`$, and is equal to $`1.7\times 10^8`$ m<sup>-1</sup>. This leads to a mean free path $`l=0.37`$ $`\mu `$m for our samples. The magnetic length is equal to $`l`$ in a field $`B_{tr}=\hbar /2el^2\approx 24`$ Gs. For $`B<B_{tr}`$ the diffusion theory may be applied for the description of weak localisation effects.
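These estimates follow directly from Eq. (1) and the measured density; a short numerical sketch of our own reproduces the quoted values within rounding:

```python
import numpy as np

hbar, e = 1.0546e-34, 1.6022e-19           # SI units

sigma_D = 2.47e-3                          # Ohm^-1, measured
p = 4.4e15                                 # m^-2, hole density

kF_l = sigma_D * 2 * np.pi * hbar / e**2   # k_F * l from Eq. (1)
kF = np.sqrt(2 * np.pi * p)                # Fermi wave vector
l = kF_l / kF                              # mean free path
B_tr = hbar / (2 * e * l**2)               # field where magnetic length = l
print(kF_l, kF, l, B_tr * 1e4)             # ~63, ~1.7e8 m^-1, ~0.37 um, ~24 Gs
```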
According to recent theoretical works, the key parameter in a $`p`$-type quantum well of width $`a`$ is $`k_\mathrm{F}a/\pi `$. This product is a measure of the degree of heavy-hole/light-hole mixing at the Fermi level, which determines the behavior of the anomalous magnetoresistance. For instance, if the carrier concentration is small ($`k_\mathrm{F}a/\pi \ll 1`$) the magnetoresistance does not change its sign and is exclusively negative. On the other hand, if $`k_\mathrm{F}a/\pi \gg 1`$, the magnetoresistance is also sign-constant, but positive. This positive magnetoresistance was observed in recent experimental reports. Moreover, the resistance may change its sign as a function of magnetic field at intermediate values of this parameter. Since in the studied system $`k_\mathrm{F}a/\pi \approx 0.37`$, this intermediate regime is in fact realised in our experiments.
Under these conditions the weak localisation correction to the conductivity of our $`p`$-type quantum wells in magnetic fields $`B<B_{tr}`$ is given as
$$\delta \sigma (B)=\frac{e^2}{\pi h}\left[f\left(\frac{B}{B_\phi +B_{\perp }}\right)+\frac{1}{2}f\left(\frac{B}{B_\phi +B_{\parallel }}\right)-\frac{1}{2}f\left(\frac{B}{B_\phi }\right)\right],$$
(2)
where $`f`$ is given by $`f(x)=\mathrm{ln}(x)+\psi (1/2+1/x)`$, $`\psi (x)`$ being the digamma function, and $`\delta \sigma (B)`$ is the difference between the conductivity with and without magnetic field. The characteristic magnetic fields $`B_\phi `$, $`B_{\parallel }`$ and $`B_{\perp }`$ are given as
$$B_\phi =\frac{\hbar }{4eD\tau _\phi },\quad B_{\parallel }=\frac{\hbar }{4eD\tau _{\parallel }},\quad B_{\perp }=\frac{\hbar }{4eD\tau _{\perp }},$$
(3)
where the quantities $`\tau _{\parallel }`$ and $`\tau _{\perp }`$ refer to the longitudinal and transverse spin relaxation times with the preferred axis lying normal to the quantum well, and $`\tau _\phi `$ is the phase relaxation time of the holes. The diffusion coefficient is $`D=l^2/2\tau `$, where $`\tau `$ is the momentum relaxation time. Equation (2) resembles the expression for metallic films first reported by Hikami et al. as well as that by Altshuler et al. for diffusive spin-orbit effects in two-dimensional electron systems. However, in our case the spin relaxation cannot be described by one parameter, and the expression given by Eq. (2) only reduces to the Hikami expression if $`B_{\parallel }=2B_{\perp }`$, which, as we shall see, is not the case.
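For reference, Eq. (2) is straightforward to evaluate numerically. The sketch below is our own transcription (function names are ours), assumes $`B>0`$, and uses the placement of $`B_{\parallel }`$ and $`B_{\perp }`$ as written in Eq. (2); it expresses $`\delta \sigma `$ in units of $`e^2/(\pi h)`$:

```python
import numpy as np
from scipy.special import digamma

def f(x):
    # f(x) = ln(x) + psi(1/2 + 1/x)
    return np.log(x) + digamma(0.5 + 1.0 / x)

def delta_sigma(B, B_phi, B_par, B_perp):
    """delta sigma(B) of Eq. (2), in units of e^2/(pi h); B > 0."""
    return (f(B / (B_phi + B_perp))
            + 0.5 * f(B / (B_phi + B_par))
            - 0.5 * f(B / B_phi))
```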
In Fig. 1 we present the magnetoconductivity measurements at different temperatures. An example of a fit obtained with Eq. (2) is also shown for $`T=360`$ mK. The fitting was done by the Levenberg-Marquardt method, implemented in $`C^{++}`$ using standard nonlinear least-squares routines. The parameters of the fitting procedure are: $`B_\phi =2.6`$ Gs, $`B_{\parallel }=17.2`$ Gs, and $`B_{\perp }=4.6`$ Gs.
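The fit itself can be reproduced with standard routines (scipy's `curve_fit` defaults to Levenberg-Marquardt, matching the procedure above). The data below are synthetic stand-ins generated from the quoted best-fit fields, not the measured curve; `delta_sigma` is the function from the sketch above:

```python
import numpy as np
from scipy.optimize import curve_fit   # default method 'lm' = Levenberg-Marquardt

# Synthetic stand-in for the measured magnetoconductivity at T = 360 mK
# (in units of e^2/(pi h)); real data would be used here instead.
B_data = np.linspace(0.5, 100.0, 200)              # Gauss
ds_data = delta_sigma(B_data, 2.6, 17.2, 4.6)

popt, pcov = curve_fit(delta_sigma, B_data, ds_data, p0=(3.0, 15.0, 5.0))
print(popt)   # recovers B_phi, B_par, B_perp = 2.6, 17.2, 4.6 Gs
```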
We have shown theoretically that the spin flip probabilities depend differently on the value of the Fermi quasimomentum for hole spins oriented along the growth axis and lying in the quantum well plane. For instance, for scattering from a short-range potential, $`B_{\parallel }\propto k_\mathrm{F}^4`$ and $`B_{\perp }\propto k_\mathrm{F}^6`$. This leads, at arbitrarily small hole concentrations, to the inequality $`B_{\parallel }>B_{\perp }`$, which is observed in the experiment.
Since $`B_\phi <B_{tr}`$, the wave function phase breaks after many collisions with impurities and one can apply the diffusion theory to fit the experiment. In magnetic fields $`B\gtrsim B_{tr}`$ the wave function phase breaks after a few collisions. Weak localisation theory for this region of fields has been derived in the literature for systems with weak spin-orbit interaction only. Below we consider the case of strong spin-orbit interaction in magnetic fields $`B\gtrsim B_{tr}`$.
The Cooperon equations for particles with different absolute values of the spin projection can be separated at $`B\gtrsim B_{tr}`$. Thus the expression for $`\delta \sigma `$ has three terms, each of which depends only on one characteristic magnetic field, similar to Eq. (2). The Cooperon equations which take into account strong spin-orbit interaction are complicated integral equations and have to be solved numerically. However, it is clear that the absolute value of each term in the expression for $`\delta \sigma `$ decreases in comparison with the diffusion approximation. Hence Eq. (2) describes the dependence $`\delta \sigma (B)`$ qualitatively even at $`B\gtrsim B_{tr}`$. The maxima and the subsequent decrease in the magnetoconductivity seen in Fig. 1 are in fact caused by the first term in Eq. (2), which dominates in these fields.
Thus the magnetoconductivity dependence in small magnetic fields is approximately given by
$$\delta \sigma (B)=\frac{e^2}{48\pi h}\left(\frac{B}{\stackrel{~}{B}}\right)^2.$$
(4)
One can show that $`\stackrel{~}{B}\approx B_\phi `$ if $`B_{\parallel },B_{\perp }>B_\phi `$. At $`T=360`$ mK this inequality is valid. The spin relaxation times $`\tau _{\parallel }`$ and $`\tau _{\perp }`$ are temperature independent, because the studied system is degenerate and charge transport is realised by the carriers near the Fermi surface.
The temperature dependence of $`B_\phi `$ is shown in Fig. 2. One can see that it is roughly linear. A least-squares fit gives the approximation $`B_\phi (T)=4.1\mathrm{Gs}\mathrm{K}^{-1}T+0.91\mathrm{Gs}`$. As an estimate we use the Nyquist noise formula for the electron phase breaking time as an approximation for $`B_\phi `$:
$$B_\mathrm{N}=B_{tr}\frac{k_\mathrm{B}T}{\hbar k_\mathrm{F}v_\mathrm{F}}\mathrm{ln}\left(k_\mathrm{F}l\right),$$
(5)
where $`v_\mathrm{F}`$ is the hole velocity at the Fermi surface. It is related to the mean free path by $`l=v_\mathrm{F}\tau `$. In this approximation $`B_\mathrm{N}=0.9\mathrm{Gs}\mathrm{K}^{-1}T`$, where an effective hole mass $`m_h=0.23m_0`$ was used ($`m_0`$ is the free electron mass). Hence the observed phase breaking rate is approximately four times larger than what is expected from this simple Nyquist noise estimate. A possible explanation for this discrepancy could be a non-parabolic dispersion relation, which would tend to decrease $`v_\mathrm{F}`$. It is, however, difficult to carry the analysis further, since there have been no theoretical attempts to describe the phase breaking rate in hole systems.
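The numerical comparison can be checked in a few lines; the sketch below (our own) evaluates Eq. (5) with the sample parameters quoted above:

```python
import numpy as np

hbar, e, kB, m0 = 1.0546e-34, 1.6022e-19, 1.3807e-23, 9.1094e-31

kF, l = 1.7e8, 0.37e-6                # m^-1, m (transport values from above)
B_tr = hbar / (2 * e * l**2)          # ~24 Gs
v_F = hbar * kF / (0.23 * m0)         # Fermi velocity, ~8.6e4 m/s

slope = B_tr * kB / (hbar * kF * v_F) * np.log(kF * l)   # Eq. (5) per kelvin
print(slope * 1e4, "Gs/K")            # ~0.9, vs. the measured 4.1 Gs/K
```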
In conclusion, we have for the first time presented experimental studies of the magnetoconductivity caused by weak localisation in a GaAlAs/GaAs $`p`$-type quantum well system, where the spin-orbit coupling is strong. We observe that the magnetoconductance changes sign from negative to positive as the magnetic field is increased. This is due to the intermediate degree of heavy-hole/light-hole mixing in these samples. The phase relaxation times were determined as a function of temperature. The spin relaxation rates are found to be in agreement with theory. The phase coherence relaxation rate was found to be significantly larger than the Nyquist estimate previously found to explain the values for electron systems.
L.E.G. and N.S.A. thank RFBR (grant 98-02-18424), the program “Physics of Solid State Nanostructures” (grant 97-1035) and the Volkswagen Foundation for financial support.
The experimental part of our research was supported by Velux Fonden, Ib Henriksen Foundation, Novo Nordisk Foundation, Danish Research Council (grant 9502937, 9601677 and 9800243).
# Skyrmions and Bags in the 2𝐷-𝑂(3) model
## 1 Introduction
The linear $`2D`$-$`O(3)`$ model has been a favorite tool as a model field theory for a wide spectrum of physical systems. Commonly, the field vector $`𝚽`$ is embedded into a Euclidean manifold and parametrized in terms of three cartesian components $`\mathrm{\Phi }_i`$, (i=1,2,3). Topologically this manifold is trivially connected. For standard $`\mathrm{\Phi }^4`$ potentials of the type $`(\mathrm{\Sigma }_i\mathrm{\Phi }_i^2-1)^2`$ the vacuum manifold is the 2-sphere $`S^2`$. Addition of symmetry breakers can remove the degeneracy of the vacuum manifold. Imposing the constraint $`\mathrm{\Sigma }_i\mathrm{\Phi }_i^2\equiv 1`$ defines the nonlinear $`2D`$-$`O(3)`$ model. The homotopy $`S^2\rightarrow S^2`$ then allows for a classification of static solutions by an integer topological winding index $`B`$, which is the spatial integral over the time component of a conserved topological current. Nontrivial configurations with $`B\ne 0`$ have found a direct application in the interpretation of the Quantum Hall Effect as charged excitations in two-dimensional spin systems. In the linear model, where configurations are not confined to the 2-sphere $`S^2`$, the winding number $`B`$ is not topologically protected. Therefore, beyond a critical magnitude of the coupling constants of the symmetry breakers, static solutions with nonvanishing winding $`B\ne 0`$ become unstable and unwind into the topologically trivial ($`B=0`$) uniform vacuum. In time-dependent evolutions changes in $`B`$ will frequently occur whenever the field passes through $`𝚽=0`$ at some point $`𝒙,t`$ in space and time.
In certain applications of the model the length of the $`𝚽`$-vector is an important degree of freedom, such that the constraint to $`S^2`$ is too restrictive; but still, the winding number may correspond to a physically observable conserved charge. A prominent example is the chiral phase transition, where the order parameter is the chiral meson field: its length measures the amount of spontaneous symmetry breaking, while the Skyrme-Witten conjecture identifies the winding index with baryon number. Its conservation law should not be affected by the chiral transition. Then, in order to retain the model as a faithful image of the underlying physics, it is necessary to prevent unwinding of nontrivial configurations. To achieve this it is sufficient to exclude one point from the manifold on which the fields live. The natural choice is to remove the origin $`𝚽=0`$, which is automatically accomplished in the radial-angular representation $`𝚽=\mathrm{\Phi }\widehat{𝚽}`$ because the angular fields are not defined at $`\mathrm{\Phi }=0`$. Then the manifold is nontrivially connected and the winding number is topologically protected. Thus, choosing the appropriate embedding for the field is an important part of the definition of the model.
In lattice simulations the topological arguments based on continuity cannot be used, because field configurations may differ arbitrarily between two neighbouring lattice points. However, the conservation law is easily implemented into the update algorithm by allowing only for configurations with a specified value of $`B`$. Evidently, for that purpose it is necessary to be able to evaluate the winding number for each configuration, so it is necessary to work in the representation $`𝚽=\mathrm{\Phi }\widehat{𝚽}`$, where the winding density is defined in terms of angles which may vary arbitrarily from one lattice point to its neighbours.
So, in the following, we shall be dealing with all degrees of freedom of the linear $`O(3)`$ model, however in the form of a nonlinear $`O(3)`$ model supplemented by an additional modulus field $`\mathrm{\Phi }`$. In this framework it is possible to define and conserve the winding number of nontrivial configurations. Naturally, in a sector with specified $`B0`$, we will find static solutions which do not exist in the topologically trivial cartesian embedding of the linear $`O(3)`$ model and which are characterized by the formation of a localized spatial bag in the modulus field.
We shall discuss two versions of the model which differ in the definition of the current that enters the current-current coupling Skyrme term. The competition between a symmetry-breaking Zeeman term and the Skyrme term determines the spatial extent of the soliton solutions. Thus the two different versions will lead to a characteristic difference in the spatial structure of the resulting winding density distributions in the interior of the bags: A dominating Skyrme term deconfines the topological charge inside the bag while a dominating Zeeman term preserves the particle nature of individual charge units.
## 2 The $`2D`$-$`O(3)`$ model with Skyrme-Zeeman stabilization
We consider the $`O(3)`$ lagrangian density in $`2+1`$ dimensions in terms of the dimensionless $`3`$-component field $`𝚽=\mathrm{\Phi }\widehat{𝚽}`$ with $`\widehat{𝚽}\cdot \widehat{𝚽}=1`$,
$$\mathcal{L}=F^2\left(\frac{1}{2}\partial _\mu 𝚽\cdot \partial ^\mu 𝚽-\frac{\lambda }{4\ell ^2}\left(\mathrm{\Phi }^2-f^2\right)^2-\frac{\alpha }{\ell ^2}(f_0-\mathrm{\Phi }_3)-(\alpha \ell ^2)\rho _\mu \rho ^\mu \right).$$
(1)
Apart from the usual kinetic term this lagrangian contains the standard $`\mathrm{\Phi }^4`$ potential for the modulus field $`\mathrm{\Phi }`$ to monitor the spontaneous symmetry breaking with (dimensionless) coupling parameter $`\lambda `$, an explicitly symmetry-breaking (’Zeeman’) coupling with (dimensionless) strength $`\alpha `$, and a four-derivative (’Skyrme’) current-current coupling $`\rho _\mu \rho ^\mu `$ for the conserved topological current
$$\rho ^\mu =\frac{1}{8\pi }ϵ^{\mu \nu \rho }\widehat{𝚽}\cdot (\partial _\nu \widehat{𝚽}\times \partial _\rho \widehat{𝚽}),$$
(2)
satisfying $`\partial _\mu \rho ^\mu =0`$. The parameter $`F^2`$ sets the overall energy scale; the length $`\ell `$ may be absorbed into the space-time coordinates, so it sets the size of localized static solutions. In order to keep the uniform minimum of the potential for finite $`\alpha `$ at the vacuum value $`\mathrm{\Phi }=f_0`$ we define
$$f^2=f_0^2-\frac{\alpha }{\lambda f_0},$$
(3)
then we have the $`\alpha `$-independent boundary condition for the modulus $`\mathrm{\Phi }`$ at spatial infinity: $`\mathrm{\Phi }\rightarrow f_0`$ for $`r\rightarrow \infty `$. Of course, we are free to insert additional powers of the modulus field $`\mathrm{\Phi }`$ into the Skyrme and the Zeeman term, the above choice being motivated by minimizing interference with the $`\mathrm{\Phi }^4`$ spontaneous symmetry-breaking mechanism. This choice implies that as $`f_0`$ goes to zero (e.g. with increasing temperature) the typical size of static defects grows like $`f_0^{-1/4}`$. This may be physically not unreasonable (cf. e.g. the discussion of the 3-dimensional case in the literature). (We shall discuss a different natural choice in the next section.) Having fixed the $`\mathrm{\Phi }`$-dependence of the lagrangian as given in (1) and (2), we conveniently redefine the field and the parameters by
$$\stackrel{~}{\mathrm{\Phi }}=\mathrm{\Phi }f_0^{-1},\quad \stackrel{~}{F}^2=F^2f_0^2,\quad \stackrel{~}{\ell }=\ell f_0^{-1/4},$$
(4)
$$\stackrel{~}{\lambda }=\lambda f_0^{3/2},\quad \stackrel{~}{\alpha }=\alpha f_0^{-3/2},\quad \stackrel{~}{f}^2=1-\frac{\stackrel{~}{\alpha }}{\stackrel{~}{\lambda }},$$
(5)
omit the tildes in the following and absorb the $`\ell `$'s into the length scale of space-time. Then we finally have
$$\mathcal{L}=F^2\left(\frac{1}{2}\partial _\mu 𝚽\cdot \partial ^\mu 𝚽-\frac{\lambda }{4}\left(\mathrm{\Phi }^2-1+\frac{\alpha }{\lambda }\right)^2-\alpha \rho _\mu \rho ^\mu -\alpha (1-\mathrm{\Phi }_3)+\frac{\alpha ^2}{4\lambda }\right).$$
(6)
together with the $`\alpha `$-independent boundary condition $`\mathrm{\Phi }\rightarrow 1`$ for $`r\rightarrow \infty `$. The constant $`\alpha ^2/(4\lambda )`$ has been added to set the $`\mathrm{\Phi }`$-potential to zero at the minimum $`𝚽=1`$. The nonlinear $`O(3)`$-model emerges in the limit $`\lambda \rightarrow \infty `$, where $`𝚽`$ is confined to the 2-sphere $`𝚽^2\equiv 1`$. For this case the static solutions for the angular field $`\widehat{𝚽}`$ have been thoroughly investigated. They are characterized by a fixed integer winding number $`B`$ and by a modulus field $`\mathrm{\Phi }(𝒙)`$ which for very large $`\lambda `$ differs only minimally from its vacuum value $`\mathrm{\Phi }=1`$. We shall in the following denote them as $`B`$-skyrmions.
For fixed $`\alpha \ne 0`$ and sufficiently small values of $`\lambda `$ the point $`𝚽=1`$ is the only real minimum of the potential in (6), i.e. for $`\lambda `$ smaller than some critical value $`\lambda _c`$ static solutions in the cartesian representation of the linear $`O(3)`$-model collapse to the trivial vacuum $`𝚽\equiv 1`$. In a $`\lambda `$-$`\alpha `$ diagram the line $`\lambda _c(\alpha )`$ separates two phase regions: for $`\lambda >\lambda _c(\alpha )`$ static multi-skyrmions with winding number $`B`$ can exist at local energy minima $`E_B`$, while for $`\lambda <\lambda _c(\alpha )`$ only the global minimum $`𝚽\equiv 1`$ at $`E_0=0`$ survives. There will be different phase boundaries $`\lambda _c(\alpha )`$ for multi-skyrmions with different winding numbers $`B`$.
In the $`𝚽=\mathrm{\Phi }\widehat{𝚽}`$ representation of the linear $`O(3)`$ model the winding number is fixed (and can be held fixed in lattice simulations). This means that for $`\lambda >\lambda _c(\alpha )`$ the local (multi-skyrmion) minimum $`E_B`$ turns into a global minimum in the respective $`B`$-sector and for $`\lambda <\lambda _c(\alpha )`$ the collapse of the multi-skyrmions into the vacuum is prevented. Instead, the global minimum in a given $`B`$-sector continues to exist for decreasing values of $`\lambda `$. But we expect a qualitative change in the structure of stable static configurations of (6) in the vicinity of the phase boundaries $`\lambda _c(\alpha )`$ leading to new types of solutions for $`\lambda <\lambda _c`$ which do not exist in the cartesian embedding of the linear $`O(3)`$-model. The nature of this structural change is easily visualized: Destabilization and unwinding can only proceed through the field configuration passing through $`\mathrm{\Phi }=0`$ at some point. Conservation of winding number therefore leads to formation of a spatial bag: In its interior the modulus field $`\mathrm{\Phi }`$ is close to zero, with rapid variation of the angular fields $`\widehat{𝚽}`$ such that the winding density is located inside the bag. Thus we may expect a close relation between the deviation of the scalar modulus field from its background value $`\mathrm{\Phi }=1`$ and the density of the topological charge. We shall in the following denote these types of solutions as $`B`$-bags.
## 3 Bag formation
Within a given $`B`$-sector the transition from skyrmions to bags is smooth. As $`\lambda `$ approaches $`\lambda _c`$ from above, creation of the bag begins through the formation of a shallow depression in the modulus $`\mathrm{\Phi }`$ near the center of the skyrmions. As $`\lambda `$ passes the critical value, the depression quickly deepens, with the winding density accumulating inside. In fig. 1 this transition is illustrated in the $`B=1`$ and $`B=5`$ sectors. Plotted is the minimal energy per unit topological charge $`E_B(\alpha ,\lambda )/B`$ as a function of $`\lambda `$ for three different values of $`\alpha `$. The full lines show the $`B=1`$ sector. For a small value of $`\alpha =0.1`$ the transition near $`\mathrm{log}\lambda \approx 3.2`$ is most pronounced. Bag formation sets in rather sharply, with a sudden steep decrease in $`E_B`$ with decreasing $`\lambda `$. For very large $`\lambda `$ above this critical value, $`E_B`$ rises very slowly towards a limit slightly above the Belavin-Polyakov (BP) monopole energy of $`4\pi `$. For larger values of $`\alpha `$ the transition region gets smoothed out, still with a large gain in energy through formation of the bag.
The dashed lines show the same features in the $`B=5`$ sector. Comparing the energies for $`B=1`$ and $`B=5`$ for small $`\lambda `$ values near $`\lambda \approx 1`$, which correspond to well-developed deep bags, we see that $`E_5<5E_1`$ for all three values of $`\alpha `$. This implies that the five topological charges are strongly bound in the compact $`B=5`$ bag. It is only for the smallest value of $`\alpha `$ considered that with increasing $`\lambda `$ near $`\mathrm{log}\lambda \approx 1.75`$ the 5-bag breaks up into five individual 1-bags, such that $`E_5/5`$ closely follows $`E_1`$ into the transition region where the bags disappear and the emerging five 1-skyrmions combine into pairs of two 2-skyrmions plus one left-over 1-skyrmion (for $`\mathrm{log}\lambda >3.2`$). For the larger $`\alpha `$-values considered, $`E_5/5`$ always stays below $`E_1`$, so at no point is there a complete breakup into five individual 1-bags.
This holds for all values of $`B`$ for sufficiently small $`\lambda `$. Figure 2 shows the energy per unit topological charge at $`\lambda =10`$ and $`\alpha =1`$. It is a monotonically decreasing function of $`B`$ which converges towards approximately one half of the $`B=1`$ energy; so for this $`\lambda `$ all $`B`$-bags are stable against decay into 1-bags. Altogether there emerges a phase diagram of remarkable richness. The skyrmion regions ($`\lambda \gg 1`$) have been amply discussed, so here we only illustrate typical features of the region with well-developed deep bags. (In order to get sufficient resolution the results are calculated for $`\ell =10`$ on an 80$`\times `$80 mesh.)
The 1-bag which is formed for $`\alpha =1`$ and $`\lambda =10`$ is shown in fig. 3a. Its total classical energy is $`E_1=6.85`$. Already for this smallest value of $`B`$ the bag has developed a basically flat bottom where the modulus field is very close to zero, with small numerical fluctuations. To accommodate increasing amounts of topological charge $`B`$ the bag size increases correspondingly, its depth being limited by $`\mathrm{\Phi }>0`$. As an example, fig.3b shows the case $`B=20`$ for $`\lambda =10`$ and $`\alpha =1`$. The near-central cuts through the bag profiles plotted in fig. 5a (for $`B`$=1,2,4,6,..,20) show that the surface thickness of the bags is basically independent of $`B`$; only the radius of the flat interior adjusts to the increasing total charge.
Throughout this deep bottom of the bag the angular fields display numerous strong local fluctuations. (Fig.4 shows the angle $`\mathrm{\Theta }=\mathrm{arccos}\widehat{\mathrm{\Phi }}_3`$ (for $`B=20`$) fluctuating rapidly inside the bag around an average value of $`\pi `$). Such fluctuations are energetically harmless because the large gradient terms are multiplied by the almost vanishing square of the modulus field. By this mechanism an essentially constant winding density $`\rho _0`$ is achieved which extends over the whole flat interior of the bag (apart from small numerical fluctuations, see fig.5b). In the context of nuclear physics one would say that the nucleons have lost their identity inside the nucleus and have dissolved into a hadronic soup. In fig.5b the near-central cuts through the density profiles are shown which correspond to the same bags as given in fig.5a. Again the surface thickness of the profiles is basically constant; it is, however, smaller than that of the bag profiles. So, for larger values of $`B`$ a square slab closely approximates the density distributions. This is to be contrasted with the winding density of a B-skyrmion. Its angular fields closely resemble the BP monopole $`\mathrm{\Theta }(r)=2\mathrm{arctan}(B/r)`$ with winding density
$$\rho _{BP}=-\frac{B\mathrm{\Theta }^{\prime }\mathrm{sin}\mathrm{\Theta }}{4\pi r}=\frac{B^3}{\pi (r^2+B^2)^2}.$$
(7)
There is no central plateau; the center density decreases as $`\rho _{BP}(0)\propto B^{-1}`$ and the mean radius increases as $`R\propto B`$. In contrast to this, fig.5b shows that for the B-bags the central density $`\rho _0`$ quickly drops from its maximal value for $`B=1`$ ($`\rho _0=0.025`$ for $`\lambda =10,\alpha =1`$), then slowly converges with increasing $`B`$ towards a constant value, which for this parameter set is near $`\rho _0\approx 0.017`$. This $`B`$-dependence reflects the interplay between the surface energy of the bag (which originates in the gradient terms of the modulus field) and volume energies which comprise the potential energy of the modulus field, Zeeman and Skyrme terms, and kinetic terms due to the gradients of the angular fields in the linear $`\sigma `$-model part of the lagrangian (6).
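The stated properties of the BP profile are easy to verify numerically: Eq. (7) integrates to the total charge $`B`$, while the central density falls off as $`1/(\pi B)`$. A short check of our own:

```python
import numpy as np
from scipy.integrate import quad

def rho_BP(r, B):
    # winding density of Eq. (7)
    return B**3 / (np.pi * (r**2 + B**2)**2)

for B in (1, 5, 20):
    total, _ = quad(lambda r: 2 * np.pi * r * rho_BP(r, B), 0, np.inf)
    print(B, total, rho_BP(0, B))   # total charge = B, center density = 1/(pi B)
```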
To get an idea about the (’nuclear matter’) density $`\rho _0`$ in the limit $`B\rightarrow \infty `$ (where surface terms play no role) we could approximately replace the winding density defined in terms of the angular fields (2) by the deviation of the modulus field $`\mathrm{\Phi }`$ from its vacuum value
$$\rho \approx \rho _0(1-\mathrm{\Phi })$$
(8)
and obtain the bag radius $`R`$ from $`\pi R^2\rho _0=B`$. The lagrangian (1) then shows that creation of a deep bag ($`\mathrm{\Phi }\approx 0`$ inside, $`\mathrm{\Phi }`$=1 outside) requires a volume potential energy density of
$$ϵ_V=\frac{\lambda +2\alpha }{4\ell ^2}+\ell ^2\alpha \rho _0^2.$$
(9)
Ignoring kinetic contributions for the moment, for fixed $`B`$ the interplay between these two volume terms determines the average density $`\rho _0`$ as
$$\rho _0=\frac{1}{2\ell ^2}\sqrt{\frac{\lambda +2\alpha }{\alpha }}.$$
(10)
which for $`\lambda =10`$ and $`\alpha =1`$ is $`\rho _0=\sqrt{3}/\ell ^2=0.0173`$.
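As a quick numerical check of Eq. (10):

```python
import numpy as np

lam, alpha, ell = 10.0, 1.0, 10.0
rho0 = np.sqrt((lam + 2 * alpha) / alpha) / (2 * ell**2)   # Eq. (10)
print(rho0)   # sqrt(3)/ell^2 = 0.01732... for these parameters
```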
The additional pressure due to the kinetic volume terms will further lower the central density, but comparison with the numerical result in fig.5b shows that their influence must be small. This is what we might expect due to the smallness of $`\mathrm{\Phi }`$ inside the bag, although very large gradients in $`\widehat{𝚽}`$ could compensate for it and cause a noticeable lowering of $`\rho _0`$, especially for small values of $`\alpha `$. Maximal importance of the kinetic terms would occur if the hadronic soup inside the bag consisted of fermions. Then, in the Thomas-Fermi approximation for the density, $`\rho _0=p_F^2/(4\pi )`$, we would have for the energy per particle
$$\frac{ϵ}{\rho _0}=\frac{4\sqrt{\pi }}{3}\sqrt{\ell ^2\rho _0}+\frac{ϵ_V}{\rho _0}.$$
(11)
Variation with respect to $`\rho _0`$ then leads to
$$\frac{2\sqrt{\pi }}{3}\ell \rho _0^{3/2}-\frac{\lambda +2\alpha }{4\ell ^2}+\ell ^2\alpha \rho _0^2=0.$$
(12)
For $`\lambda =10`$ and $`\alpha =1`$ the central density is thereby lowered from 0.0173 to $`\rho _0=0.0120`$. Comparison with fig.5b apparently rules out the conjecture of a fermionic soup. (For small values of $`\alpha `$ the effect of the Thomas-Fermi term becomes stronger: for $`\lambda =10`$, $`\alpha =0.1`$, $`\ell =10`$, it lowers the result of (10), $`\rho _0=0.050`$, to $`\rho _0=0.016`$, while the numerically obtained density for $`B=20`$ is near 0.04).
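Equation (12) is a simple root-finding problem; the sketch below (our own) confirms the quoted value:

```python
import numpy as np
from scipy.optimize import brentq

lam, alpha, ell = 10.0, 1.0, 10.0

def lhs(rho):
    # left-hand side of Eq. (12)
    return ((2 * np.sqrt(np.pi) / 3) * ell * rho**1.5
            - (lam + 2 * alpha) / (4 * ell**2) + ell**2 * alpha * rho**2)

print(brentq(lhs, 1e-8, 0.05))   # ~0.0120, lowered from the 0.0173 of Eq. (10)
```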
## 4 Individual particles inside the bag
An interesting alternative way to define the Skyrme term in the lagrangian is to replace in (1) the topological current $`\rho ^\mu `$ (2) by the corresponding form in terms of the full vector field $`𝚽`$
$$T^\mu =\frac{1}{8\pi }ϵ^{\mu \nu \rho }𝚽\cdot (\partial _\nu 𝚽\times \partial _\rho 𝚽)=\mathrm{\Phi }^3\rho ^\mu .$$
(13)
This introduces a factor $`\mathrm{\Phi }^6`$ into the Skyrme term. Due to this high power of $`\mathrm{\Phi }`$ there again is very little interference with the $`\mathrm{\Phi }^4`$ spontaneous symmetry-breaking mechanism. However, apart from the fact that the length $`\ell `$ now scales with $`f_0^{5/4}`$ as $`f_0`$ tends to zero, this alternative definition leads to a situation where, for fixed $`f_0`$, in spatial regions with small $`\mathrm{\Phi }`$ (i.e. in the interior of bags) the Zeeman term dominates the Skyrme term. This results in the stabilization of pointlike ’particles’ inside the bag. Speaking again in the language of nuclear physics, one would say that the nucleons retain their individuality inside a bag which binds them together into a common $`B`$-nucleus.
Choosing as before the parameters $`\lambda =10,\alpha =1`$ which favor the formation of deep bags, we find the energy minimum for $`B=1`$ at $`E_1=1.152`$. In the $`B=2`$ sector we find a minimum near $`E_2/2=0.943`$ in which both charges share one common bag, but the individual charges stay apart from each other in two separate pockets. This ’1+1’-configuration is shown on the left side of fig.6. There is, however, in this sector a lower minimum at $`E_2/2=0.663`$ where the bag consists of only one single pocket which houses one doubly charged structure, i.e. the model favors the formation of pairs. This ’2’-configuration is shown in the center plot of fig.6. For $`B=3`$ we find the expected ’1+2’-configuration at $`E_3/3=0.697`$, a pair and one unit charge tightly bound in two separate pockets of a common 3-bag, as shown on the right of fig.6. We do not find a minimal-energy ’3’-configuration with a threefold charge sitting in one single pocket. The same situation is observed in the $`B=4`$ sector where again we do not find the single pocket with fourfold charge, but instead the ’2+2’ double pair separated in two pockets bound in one common 4-bag at $`E_4/4=0.543`$. So, it appears that the situation resembles very closely the pair formation as it was observed for skyrmions for $`\lambda \rightarrow \infty `$. As in that case, the ’1’- and ’2’-configurations serve as building blocks, which here are bound together in one deep bag. Due to this energetically favored pair formation the resulting ’nuclei’ will be characterized by odd-even staggering of their binding energies for higher values of $`B`$. Practically, of course, the actual configuration which is finally reached through numerical relaxation depends on the starting set and the sequence of pseudo-temperatures in the Metropolis cooling algorithm. As an example we present in fig.7 a result for $`B=20`$ which shows a highly deformed bag with 15 pockets housing a $`10\times `$’1’ $`+5\times `$’2’ configuration. The calculation is done for the same parameter set as used in fig.3b, so a direct comparison nicely shows the contrast between pointlike particles and deconfined charge in the interior of one big bag.
## 5 Conclusion
We have investigated here the $`2D`$-$`O(3)`$ model in a representation where the 3-vector field $`𝚽`$ is split into the unit vector $`\widehat{𝚽}`$ and the modulus $`\mathrm{\Phi }`$. This allows for the definition of a topological winding number $`B`$ and for the separation of the complete configuration space into distinct $`B`$-sectors. For small values of the $`\mathrm{\Phi }^4`$-coupling strength $`\lambda `$ the stable energy minima in these sectors are characterized by bag formation in the modulus field. In the standard cartesian representation of the linear $`O(3)`$ model such configurations would be unstable towards decay into the trivial $`B=0`$ vacuum. Stabilized by $`B`$-conservation, they exhibit a surprising variety of very appealing physics for multiply charged systems. For decreasing $`\lambda `$ multi-skyrmions get bound into one common deep bag like nucleons get bound into one nucleus. Depending on the competition between Skyrme and Zeeman energy, two opposite ways of distributing the topological charge inside the bag can be realized: pointlike structures which keep the individuality of single nucleons (or doubly charged pairs) inside the nucleus, or a deconfined charge density spread uniformly throughout the interior of the bag. This latter case suggests a very close relation between the charge density $`\rho `$ and the modulus field $`\mathrm{\Phi }`$ (which for sufficiently large $`B`$ is $`\rho =\rho _0(1-\mathrm{\Phi })`$) and, correspondingly, an effective description through a density functional for $`\rho `$. This is a remarkable possibility because it gets rid of the angular fields altogether, which form the basis for the definition of $`\rho `$! It may be understood by the fact that in these configurations the angular fields are very rapidly fluctuating functions, and a spatial coarse-graining procedure will eliminate them. This is reminiscent of the fact that mean field or density functional methods for nucleons in nuclear physics work very well without explicitly keeping the dynamical pionic degrees of freedom. Similarly, in a recent analysis of soliton formation in the Nambu-Jona-Lasinio model it was found that without enforcing the chiral circle condition stable minima are characterized by a vanishing pion field.
Of course, it will be most interesting to extend the present investigations to the $`3D`$-$`O(4)`$ model, where a lot of effort has gone into exploring $`3D`$-$`SU(N_f)`$ multi-skyrmions and their possible relevance for the structure of nuclei. Their spatial structure (which for large $`B`$ looks somewhat like buckyballs) has little in common with our naive picture of a nucleus. On the other hand, it has been known for a long time that the inclusion of a scalar $`\sigma `$ field is important for the attractive part of the skyrmion-skyrmion force. We should also rather expect a mechanism as described in sect.4, where the nucleons keep their individuality inside the bag. For dimensional reasons, in the $`3D`$-$`O(4)`$ model there is no need for the Zeeman-type coupling to monitor the scale of the structures; it emerges directly from the competition between the second-order (nonlinear $`\sigma `$) term and the (fourth-order) Skyrme term. Dominance of the Skyrme term (which may arise in the interior of the bag due to the factor $`\mathrm{\Phi }^2`$ multiplying the nonlinear $`\sigma `$-term, or due to increasing temperature, which lowers the coefficient of the whole second-order term) will again lead to deconfined baryon density inside the nucleus (or nuclear matter in the limit $`B\rightarrow \infty `$). In any case, our present results strongly suggest that for a description of nuclei in terms of multiply charged solitons in chiral meson fields the constraint $`\mathrm{\Phi }\equiv 1`$ is too restrictive.
Creation of spatial regions with disoriented chiral condensate (DCC) in the course of a chiral symmetry-breaking transition has been studied in the framework of the linear $`O(4)`$ model in trivial topology . It will be of interest to investigate how the existence of the various phase regions explored here for the simple $`2D`$-$`O(3)`$ model may affect the conclusions to be drawn for defect and DCC formation in case of the chiral phase transition .
Finally we should note that all results reported here have been obtained through numerical relaxation of a lattice functional through some Metropolis cooling algorithm. This naturally poses the question about the continuum limit of these results. Apparently, (as is directly obvious from fig.4), this is not a trivial matter because $`B`$ conservation relies on the removal of one single point ($`\mathrm{\Phi }=0`$) from the manifold on which the model is defined. But this is the usual mathematical difficulty encountered in the transition from a granular to a continuous density distribution. And, again as usual, we have no proof that we really have obtained the lowest energy minima in the respective $`B`$-sectors.
## 1 . INTRODUCTION
It is well known that the standard approach to the statistics of identical particles is based on the notion of the usual symmetric group $`S_n`$. This group describes the interchange process of indistinguishable particles. The transposition $`(1,2)\rightarrow (2,1)`$ corresponding to the exchange of two identical particles is represented by the map
$$\tau :x^1\otimes x^2\rightarrow \pm x^2\otimes x^1.$$
(1.1)
Every transposition yields a phase factor equal to $`\pm 1`$ ($`+1`$ for bosons and $`-1`$ for fermions). If we replace the factor $`\pm 1`$ by a complex parameter $`q`$, then we obtain the simplest generalization of the usual concept of statistics, namely the well-known $`q`$-statistics. The corresponding particles are said to be quons. If $`q:=\mathrm{exp}(i\phi )`$, where $`0\le \phi <2\pi `$ is the so-called statistics parameter, then the corresponding $`q`$-statistics is determined by the value of $`\phi `$. Observe that for $`\phi =0`$ we have bosons, and for $`\phi =\pm \pi `$ fermions. For arbitrary $`\phi \in [0,2\pi )`$ we obtain anyons.
There is an interesting concept of exotic statistics in low-dimensional spaces based on the notion of the braid group $`B_n`$. In this concept the configuration space for a system of $`n`$ identical particles moving on a manifold $`\mathcal{M}`$ is given by the following relation
$$Q_n(\mathcal{M})=\left(\mathcal{M}^{\times n}\setminus D\right)/S_n,$$
where $`D`$ is the subset of the Cartesian product $`\mathcal{M}^{\times n}`$ on which two or more particles occupy the same position. The group $`\pi _1\left(Q_n(\mathcal{M})\right)\equiv B_n(\mathcal{M})`$ is just the $`n`$-string braid group on $`\mathcal{M}`$. Note that the statistics of a system of particles is determined by the group $`\mathrm{\Sigma }_n`$. This group is a subgroup of the braid group $`B_n(\mathcal{M})`$ corresponding to the interchange process of two arbitrary indistinguishable particles. It is an extension of the symmetric group $`S_n`$. The mathematical formalism related to the braid group has been developed intensively by Majid, for example. An algebraic formalism for a particle system with generalized statistics has been considered by the author. It is interesting that in this algebraic approach all commutation relations for particles equipped with an arbitrary statistics can be described as a representation of the so-called quantum Weyl algebra $`𝒲`$ (or Wick algebra). In this approach the creation and annihilation operators act on an algebra $`𝒜`$. The creation operators act as the multiplication in this algebra and the annihilation ones act as a noncommutative contraction (noncommutative partial derivatives). The algebra $`𝒜`$ plays the role of a noncommutative Fock space. The application to particles in a singular magnetic field has been given by the author. Note that a similar approach has also been considered by other authors, and an interesting related concept of generalized statistics has also been given elsewhere.
In this paper we are going to study a system of charged particles moving under the influence of an intermediate quantum field. Our fundamental assumption is that every charged particle is transformed under interaction into a system consisting of a charge and quanta of the field. Such a system behaves like free particles moving in a certain effective space. It is shown that the system is endowed with the so-called cross statistics. This statistics is a result of interactions. For the description of such statistics we develop the algebraic model of a system with generalized statistics studied previously by the author.
## 2 . FUNDAMENTAL ASSUMPTIONS
We are going to study here a system of charged particles with a certain dynamical interaction. It is natural to expect that some new and specific quantum states of the system appear as a result of the interaction. We would like to describe all such states. In order to do this we assume that the interaction can be described by an intermediate quantum field. Our fundamental assumption is that every charged particle is transformed under interaction into a system consisting of a charge and $`N`$ species of quanta of the field. A system which contains a charge and a certain number of quanta as a result of interaction with the quantum field is said to be a dressed particle. Next we assume that every dressed particle is a composite object equipped with an internal structure. Obviously the structure is determined by the interaction with the quantum field. We describe the structure of a dressed particle as a nonlocal system which contains $`n`$ centers. Such centers behave like free particles moving on a certain effective space. Every center is equipped with the ability to absorb and emit quanta of the intermediate field. A center dressed with a single quantum of the field is said to be a quasiparticle. Quasiparticles represent elementary excited quantum states of the given system and are described by a finite set of elements
$$Q:=\{x^i:i=1,\ldots ,N<\infty \}$$
(2.1)
which form a basis for a linear space $`E`$ over the field $`\mathbb{C}`$ of complex numbers. A center which is an empty place for a quantum is called a quasihole. Quasiholes represent conjugate states and correspond to the basis
$$Q^{\ast }:=\{x^{\ast i}:i=N,N-1,\ldots ,1\}.$$
(2.2)
for the complex conjugate space $`E^{\ast }`$. The pairing $`g_E:E^{\ast }\otimes E\rightarrow \mathbb{C}`$ and the corresponding scalar product are given by
$$g_E(x^{\ast i}\otimes x^j)=\langle x^i|x^j\rangle :=\delta ^{ij}.$$
(2.3)
A center which does not contain a quantum is said to be neutral. It represents the ground state $`|0>=\mathrm{𝟏}`$ of the system. A neutral center can be transformed into a quasiparticle or a quasihole by the absorption or emission of a single quantum, respectively. Quasiparticles and quasiholes, as components of a dressed particle, also have their own statistics. It is interesting that there is a statistics of a new kind, namely a cross statistics. This statistics is determined by an exchange process of quasiholes and quasiparticles. Note that the exchange is not a real process but an effect of interaction. Such an exchange means the annihilation of a quasiparticle at a certain place and the simultaneous creation of a quasihole at another place. This statistics is described by an operator $`T`$ called an elementary cross or twist. This operator is linear, invertible and Hermitian. It is given by its matrix elements $`T:E^{\ast }\otimes E\rightarrow E\otimes E^{\ast }`$
$$T(x^{\ast i}\otimes x^j)=\mathrm{\Sigma }T_{kl}^{ij}\,x^k\otimes x^{\ast l}.$$
(2.4)
The usual exchange statistics of quasiparticles is described by a linear operator $`B:E\otimes E\rightarrow E\otimes E`$ satisfying the standard braid relations
$$\begin{array}{c}B^{(1)}B^{(2)}B^{(1)}=B^{(2)}B^{(1)}B^{(2)},\end{array}$$
(2.5)
where $`B^{(1)}:=B\otimes id`$ and $`B^{(2)}:=id\otimes B`$. The exchange process determined by the operator $`B`$ is a real process. Such an exchange process is possible if the dimension of the effective space is equal to or greater than two. Hence in this case we need two operators $`T`$ and $`B`$ for the description of our system with generalized statistics. These operators are not arbitrary. They must satisfy the following consistency conditions
$$\begin{array}{c}B^{(1)}T^{(2)}T^{(1)}=T^{(2)}T^{(1)}B^{(2)},\\ (id_{E\otimes E}+\stackrel{~}{T})(id_{E\otimes E}-B)=0,\end{array}$$
(2.6)
where the operator $`\stackrel{~}{T}:E\otimes E\rightarrow E\otimes E`$ is given by its matrix elements
$$(\stackrel{~}{T})_{kl}^{ij}=T_{lj}^{ki}.$$
(2.7)
We need a solution of these conditions for the construction of an example of a system with generalized statistics. Note that the general solution of these conditions is not known, hence we must restrict our attention to some particular cases only. One can use solutions known from noncommutative differential calculi in order to give some examples. In the one-dimensional case there is no room for such an exchange process. Hence in this case only the cross statistics is possible; the exchange braid statistics does not exist.
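A concrete check of the braid relation (2.5) is easy to set up numerically. The sketch below (our own illustrative example) verifies it for the simplest $`q`$-deformed choice $`B=q\,P`$, with $`P`$ the flip operator; the consistency conditions (2.6) can be tested the same way once a concrete $`T`$ is chosen:

```python
import numpy as np

N = 3
# The flip operator P(x (x) y) = y (x) x on E (x) E, as an N^2 x N^2 matrix.
P = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        P[j * N + i, i * N + j] = 1.0

q = np.exp(1j * 0.7)          # any phase; q = -1 gives fermions
B = q * P                     # a simple solution of the braid relation

I = np.eye(N)
B1 = np.kron(B, I)            # B acting on the first two factors of E (x) E (x) E
B2 = np.kron(I, B)            # ... and on the last two
print(np.allclose(B1 @ B2 @ B1, B2 @ B1 @ B2))   # True: Eq. (2.5) holds
```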
## 3 . HERMITIAN WICK ALGEBRAS
Let us consider a pair of unital and associative algebras $`𝒜`$ and $`𝒜^{\ast }`$. We assume that they are conjugated. This means that there is an antilinear and involutive isomorphism $`()^{\ast }:𝒜\rightarrow 𝒜^{\ast }`$ such that we have the following relations
$$m_{𝒜^{\ast }}(b^{\ast }\otimes a^{\ast })=(m_𝒜(a\otimes b))^{\ast },\quad (a^{\ast })^{\ast }=a,$$
(3.1)
where $`a,b\in 𝒜`$ and $`a^{\ast },b^{\ast }`$ are their images under the isomorphism $`()^{\ast }`$. Both algebras $`𝒜`$ and $`𝒜^{\ast }`$ are graded
$$\begin{array}{cc}𝒜:=\underset{n}{}𝒜^n,& 𝒜^{}:=\underset{n}{}𝒜^n.\end{array}$$
(3.2)
A linear mapping $`\mathrm{\Psi }:𝒜^{}𝒜𝒜𝒜^{}`$ such that we have the following relations
$$\begin{array}{c}\mathrm{\Psi }(id_𝒜^{}m_𝒜)=(m_𝒜id_𝒜^{})(id_𝒜\mathrm{\Psi })(\mathrm{\Psi }id_𝒜),\hfill \\ \mathrm{\Psi }(m_𝒜^{}id_𝒜)=(id_𝒜m_𝒜^{})(\mathrm{\Psi }id_𝒜^{})(id_𝒜^{}\mathrm{\Psi })\hfill \\ (\mathrm{\Psi }(b^{}a))^{}=\mathrm{\Psi }(a^{}b)\hfill \end{array}$$
(3.3)
is said to be a cross symmetry or $`*`$–twist . We use here the notation
$$\mathrm{\Psi }(b^{}a)=\mathrm{\Sigma }a_{(1)}b_{(2)}^{}$$
(3.4)
for $`a𝒜,b^{}𝒜^{}`$.
The tensor product $`𝒜𝒜^{}`$ of algebras $`𝒜`$ and $`𝒜^{}`$ equipped with the multiplication
$$\begin{array}{c}m_\mathrm{\Psi }:=(m_𝒜m_𝒜^{})(id_𝒜\mathrm{\Psi }id_𝒜^{})\end{array}$$
(3.5)
is an associative algebra called a Hermitian Wick algebra; it is denoted by $`𝒲=𝒲_\mathrm{\Psi }(𝒜)=𝒜_\mathrm{\Psi }𝒜^{}`$. This means that the Hermitian Wick algebra $`𝒲`$ is the tensor cross product of the algebras $`𝒜`$ and $`𝒜^{}`$ with respect to the cross symmetry $`\mathrm{\Psi }`$ . Let $`H`$ be a linear space. We denote by $`L(H)`$ the algebra of linear operators acting on $`H`$. One can prove the following
Theorem: Let $`𝒲𝒜_\mathrm{\Psi }𝒜^{}`$ be a Hermitian Wick algebra. If $`\pi _𝒜:𝒜L(H)`$ is a representation of the algebra $`𝒜`$, such that we have the relation
$$\begin{array}{c}(\pi _𝒜(b))^{}\pi _𝒜(a)=\mathrm{\Sigma }\pi _𝒜(a_{(1)})\pi _𝒜^{}(b_{(2)}^{}),\\ \pi _𝒜^{}(a^{}):=(\pi _𝒜(a))^{},\end{array}$$
(3.6)
then there is a representation $`\pi _𝒲:𝒲L(H)`$ of the algebra $`𝒲`$. We use the following notation
$$\pi _𝒜(x^i)\equiv a_{x^i}^+,\pi _𝒜^{}(x^i)\equiv a_{x^i}.$$
(3.7)
The relations (3.6) are said to be commutation relations if there is a positive definite scalar product on $`H`$ such that the operators $`a_{x^i}^+`$ are adjoint to $`a_{x^i}`$ and vice versa. Let us consider a Hermitian Wick algebra $`𝒲`$ corresponding to a system with generalized statistics. For the construction of such an algebra we need a pair of algebras $`𝒜`$, $`𝒜^{}`$ and a cross symmetry $`\mathrm{\Psi }`$. It is natural to assume that these algebras have $`E`$ and $`E^{}`$ as generating spaces, respectively, and that the cross symmetry satisfies the following condition
$$\mathrm{\Psi }|_{E^{}E}=T+g_E.$$
(3.8)
## 4 . FOCK SPACE REPRESENTATION
Let us consider the Fock space representation of the algebra $`𝒲`$ corresponding to a system with generalized statistics. For the ground state and the annihilation operators we assume that
$$0|0=1,a_s^{}|0=0\text{ for }s^{}𝒜^{}.$$
(4.1)
In this case the representation acts on the algebra $`𝒜`$. Creation operators are defined as multiplication in the algebra $`𝒜`$
$$a_s^+t:=m_𝒜(st),\text{for}s,t𝒜.$$
(4.2)
The proper definition of the action of the annihilation operators on the whole algebra $`𝒜`$ is a nontrivial problem. If the action of the annihilation operators is given in such a way that there is a unique, nondegenerate, positive definite scalar product on $`𝒜`$, and creation operators are adjoint to annihilation ones and vice versa, then we say that we have a well–defined system with generalized statistics in the Fock representation .
Let us consider some examples of such systems. Assume that quasiparticles and quasiholes move on a one-dimensional effective space. In this case the algebra of states $`𝒜`$ is the full tensor algebra $`TE`$ over the space $`E`$, and the conjugate algebra $`𝒜^{}`$ is identical with the tensor algebra $`TE^{}`$. If $`T=0`$ then we obtain the simplest example of a well–defined system with generalized statistics; the corresponding statistics is the so–called infinite (Boltzmann) statistics . If $`T:E^{}EEE^{}`$ is an arbitrary nontrivial cross operator, then there is a cross symmetry $`\mathrm{\Psi }^T:TE^{}TETETE^{}`$. It is defined by a set of mappings $`\mathrm{\Psi }_{k,l}:(E^{})^kE^lE^l(E^{})^k`$, where $`\mathrm{\Psi }_{1,1}\equiv R:=T+g_E`$, and
$$\begin{array}{c}\mathrm{\Psi }_{1,l}:=R_l^{(l)}\mathrm{}R_l^{(1)},\hfill \\ \mathrm{\Psi }_{k,l}:=(\mathrm{\Psi }_{1,l})^{(1)}\mathrm{}(\mathrm{\Psi }_{1,l})^{(k)},\hfill \end{array}$$
(4.3)
here $`R_l^{(i)}:E_l^{(i)}E_l^{(i+1)}`$, $`E_l^{(i)}:=E\mathrm{}E^{}E\mathrm{}E`$ ($`l+1`$-factors, $`E^{}`$ on the i-th place, $`il`$) is given by the relation
$$R_l^{(i)}:=\underset{l\text{ times}}{\underbrace{id_E\mathrm{}R\mathrm{}id_E}},$$
where $`R`$ is on the i-th place, and $`(\mathrm{\Psi }_{1,l})^{(i)}`$ is defined in a similar way to $`R_l^{(i)}`$. The commutation relations (3.6) can be given here in the following form
$$a_{x^i}a_{x^j}^+-\sum _{k,l}T_{kl}^{ij}a_{x^k}^+a_{x^l}=\delta ^{ij}\mathrm{𝟏}.$$
(4.4)
If the linear operator $`\stackrel{~}{T}:EEEE`$ with the following matrix elements
$$(\stackrel{~}{T})_{kl}^{ij}=T_{lj}^{ki}.$$
(4.5)
is bounded, satisfies the following Yang-Baxter equation on $`EEE`$
$$(\stackrel{~}{T}id_E)(id_E\stackrel{~}{T})(\stackrel{~}{T}id_E)=(id_E\stackrel{~}{T})(\stackrel{~}{T}id_E)(id_E\stackrel{~}{T}),$$
(4.6)
and $`\|\stackrel{~}{T}\|\le 1`$, then according to Bożejko and Speicher there is a positive definite scalar product. Note that the kernel of the operator $`P_2\equiv id_{EE}+\stackrel{~}{T}`$ governs the (non)degeneracy of the scalar product : one can see that if this kernel is trivial, then we obtain a well–defined system with generalized statistics .
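A standard illustration (worked out here by us, under the assumption that the quonic form of the cross operator is admissible) is $`T_{kl}^{ij}=q\delta _k^j\delta _l^i`$ with a real parameter $`q`$. Then (2.7) gives $`\stackrel{~}{T}=q\tau `$, where $`\tau `$ is the transposition, so $`\stackrel{~}{T}`$ is bounded with $`\|\stackrel{~}{T}\|=|q|`$ and satisfies the Yang-Baxter equation (4.6) because $`\tau `$ does. The relations (4.4) then reduce to the well-known q-deformed (quon) relations

$$a_{x^i}a_{x^j}^+-qa_{x^j}^+a_{x^i}=\delta ^{ij}\mathrm{𝟏},$$

which interpolate between the infinite statistics mentioned above ($`q=0`$) and the bosonic and fermionic cases ($`q=\pm 1`$); for $`|q|\le 1`$ the Bożejko-Speicher criterion applies.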
If the dimension of the effective space is greater than one and the kernel of $`P_2`$ is nontrivial, then the scalar product is degenerate. Hence we must remove this degeneracy by factoring out the kernel of the scalar product mentioned above. In this case we have $`𝒜:=TE/I,𝒜^{}:=TE^{}/I^{}`$, where $`I:=gen\{id_{EE}-B\}`$ is an ideal in $`TE`$ and $`B:EEEE`$ is a linear and invertible operator satisfying the braid relation (2.5) and the consistency conditions (2.6), and $`I^{}`$ is the corresponding conjugate ideal in $`TE^{}`$. One can see that there is a cross symmetry, and the action of the annihilation operators can be defined in such a way that we obtain a well–defined system with the usual braid statistics . We have here the following commutation relations
$$\begin{array}{c}a_{x^i}a_{x^j}^+-\sum _{k,l}T_{kl}^{ij}a_{x^k}^+a_{x^l}=\delta ^{ij}\mathrm{𝟏},\hfill \\ a_{x^i}a_{x^j}-\sum _{k,l}B_{ij}^{kl}a_{x^l}a_{x^k}=0,\hfill \\ a_{x^i}^+a_{x^j}^+-\sum _{k,l}B_{kl}^{ij}a_{x^k}^+a_{x^l}^+=0.\hfill \end{array}$$
(4.7)
Observe that for $`T=B=\tau `$, where $`\tau `$ represents the transposition (1.1), we obtain the usual canonical (anti–)commutation relations for bosons or fermions.
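A minimal numerical illustration of the positivity mechanism at the two-particle level (a sketch of ours, not part of the original text): for $`\stackrel{~}{T}=q\tau `$ the operator $`P_2=id_{EE}+\stackrel{~}{T}`$ has eigenvalues $`1\pm q`$, so it is positive semidefinite precisely for $`|q|\le 1`$, and a nontrivial kernel appears only at $`q=1`$ (the antisymmetric subspace) or $`q=-1`$ (the symmetric subspace).

```python
import numpy as np

def swap_matrix(n):
    """Matrix of the transposition tau on C^n (x) C^n,
    acting as tau(e_i (x) e_j) = e_j (x) e_i."""
    S = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            S[j * n + i, i * n + j] = 1.0
    return S

n = 3
S = swap_matrix(n)
for q in (-1.0, -0.5, 0.0, 0.5, 1.0):
    P2 = np.eye(n * n) + q * S            # P2 = id + T~ with T~ = q * tau
    w = np.linalg.eigvalsh(P2)
    print(f"q = {q:+.1f}: spectrum of P2 lies in [{w.min():.1f}, {w.max():.1f}]")
```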
## Acknowledgments
The author would like to thank A. Borowiec for discussions and other help.
# THE NATURE OF THE DIFFUSE CLUMPS AND THE X-RAY COMPANION OF MRK 273
## 1 INTRODUCTION
Mrk 273 is an ultraluminous infrared galaxy with prominent merging signatures. Fig. 1 shows our scanned POSS II J-band image and a deep R-band image (Hibbard & Yun 1999, kindly made available to us by John Hibbard) of the Mrk 273 field. These optical images reveal a long tail to the south and a faint plume to the northeast. The Very Large Array (VLA) continuum observations revealed extremely extended radio lobes up to more than 200 kpc in length extending further along the tip of the southern optical tail into the southeast (Yun 1997; Yun & Hibbard 1999). VLA H i 21 cm line observations show extended atomic gas associated with the southern optical tail as well (Hibbard & Yun 1996, 1999). Mrk 273 is also detected in the X-ray by ROSAT and ASCA (Turner et al. 1993, 1997, 1998; Iwasawa 1998). The ROSAT HRI image reveals an X-ray companion source of comparable brightness to Mrk 273, about $`1\mathrm{}.3`$ to the northeast of Mrk 273. Xia et al. (1998) named this source Mrk 273x, and found an optically faint source coincident with the X-ray position (cf. Fig. 1 in Xia et al. 1998; the position is marked here by ‘X’ in Fig. 1a and is very prominently shown in the deep R-band image in Fig. 1b). All these optical features are also fairly prominent in the Digitized Sky Survey (DSS) image of Mrk 273.
The core of Mrk 273 has been resolved into two compact components in the radio and near-infrared (Condon et al. 1991; Majewski et al. 1993; Knapen et al. 1997), which are believed to be the double nuclei of the two progenitor galaxies involved in the merging, and at least one of them is a Seyfert 2 nucleus (Sargent 1972; Asatrian et al. 1990). Spectroscopic observations were carried out on the 2.16m telescope at the Beijing Astronomical Observatory (BAO) to secure the redshift of Mrk 273x, and the BAO spectrum indicated that an object (thought to be Mrk 273x) at $`z=0.0378`$ was detected (Xia et al. 1998). Since the signal-to-noise ratio of this spectrum was low, further observations of Mrk 273x were performed at the William Herschel Telescope (WHT) as an additional check. The new spectrum turned out to be different from the one obtained at BAO. Later examinations and further spectroscopic observations showed that the object detected in the initial BAO spectrum was not Mrk 273x, but an object about $`20`$″ to the northeast of Mrk 273; we will call this diffuse object Mrk 273D. The mis-identification was caused by incorrect slit positioning in the previous observation at BAO (see Section 2).
In this paper, we present the WHT spectrum of Mrk 273x and new observations of Mrk 273D performed at BAO. In addition, we re-examine all the original data analyzed by Xia et al. (1998). We find that both Mrk 273x and Mrk 273D have intriguing properties. We show that the emission line widths and line ratios of Mrk 273x resemble those of Seyfert 2 galaxies. Since Mrk 273x turns out to be a distant background source, its X-ray and radio luminosities are among the highest of Seyfert 2 galaxies, and yet it has a very low neutral hydrogen column density. Mrk 273D consists of diffuse clumps at the same redshift as the ultraluminous galaxy Mrk 273 and has quite unusual line ratios of \[O iii\] $`\lambda `$5007/H$`\beta `$ and \[N ii\] $`\lambda `$6584/H$`\alpha `$. These ratios are not easily produced by photoionization by O, B stars. We argue that the emission line ratios are better explained by shock excitation with precursors (Dopita & Sutherland 1995). The structure of the paper is as follows. In §2, we describe the WHT observation of Mrk 273x and the new observations of Mrk 273D at BAO. We also explain the original observational setup at BAO. In §3, we present the results of our spectral analysis. In §4, we discuss the nature of Mrk 273x and Mrk 273D. And finally, a brief summary of the main results is given in §5. Throughout this paper, we use a Hubble constant of $`H_0=50\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$ and an Einstein-de Sitter ($`\mathrm{\Omega }_0=1`$) cosmology.
## 2 OBSERVATIONS AND DATA REDUCTION
The observations carried out at BAO used a Zeiss universal spectrograph mounted on the 2.16m telescope of BAO. A Tektronix 1024x1024 CCD was used as the detector. The original slit setup for the BAO observation on April 12, 1997, which runs from the northeast to the southwest passing through star S and source D, is illustrated in Fig. 1a. The slit was approximately $`4\mathrm{}`$ long and $`4\mathrm{}`$ wide. This observational setup was intended to determine the redshift of Mrk 273x (source X in Fig. 1a). Unfortunately the slit missed the target due to a slight error in the slit orientation. This occurred because Mrk 273x was too faint to be seen on the TV monitor, so the slit had to be rotated from an initial orientation (which passed through the star ‘S’ in Fig. 1a) by a prescribed angle such that it would pass through Mrk 273x. The amount of rotation required was calculated from several bright objects within the field. This technique of slit positioning for faint sources had been successfully employed previously at BAO, so no extra check was enforced in the initial observations reported in Xia et al. (1998). In spite of the incorrect rotation angle used, by coincidence, an object other than star S was detected in the spectrum and was (incorrectly) identified as Mrk 273x. Follow-up observations of Mrk 273x carried out at the William Herschel Telescope (WHT) at La Palma however yielded spectra that are completely different from that obtained at BAO (see below).
A re-examination of the long-slit spectrum obtained at BAO showed that the object detected in the spectrum was separated from the star S by 80$`\mathrm{}`$ rather than 20$`\mathrm{}`$, the actual angular distance between Mrk 273x and the star S. From the angular distance between the object detected in the BAO spectrum and the star S, and from the fact that only two objects were seen in the spectrum, we were able to reconstruct the slit orientation as shown in Fig. 1a. This slit orientation is well-constrained: with a little more rotation to the southwest (anti-clockwise) the slit would have enclosed the bright southern tail of Mrk 273, while with slightly more rotation in the opposite direction it would have enclosed no object other than the star S. It turned out that the slit passed through some bright clumps (source D in Fig. 1a), about $`20\mathrm{}`$ from the nuclear region of Mrk 273 (Knapen et al. 1997). While this slit re-construction seems convincing, as a double check, Mrk 273D was re-observed at BAO on February 20, 1999 with a different slit setup. The new observation yields a spectrum nearly identical to the one published by Xia et al. (1998). The slit setup was recorded with photographs, which allowed us to determine the slit orientation reliably. This new slit is approximately $`4\mathrm{}`$ long and $`2.5\mathrm{}`$ wide and runs nearly vertically (in the north–south direction passing through stars S1 and S2) as shown in Fig. 1a.
Both the previous and the new BAO spectra cover a wavelength range of 3500Å to 8100Å with a grating of 195Å/mm and a spectral resolution of 9.3Å (2 pixels). Wavelength calibration was carried out using an Fe-He-Ar lamp and standard stars were observed to perform flux calibrations. The wavelength calibration accuracy is better than 1Å.
Mrk 273x (source X in Fig. 1) was first observed on June 19, 1998 with the WHT. Since the resulting spectrum was different from the one obtained earlier at BAO, another observation was attempted at the WHT on June 26, 1998, which yielded a spectrum identical to the one obtained a week earlier on June 19. An additional new spectrum of Mrk 273x was also obtained using the BAO 2.16m telescope, confirming the results from the WHT. The new BAO spectrum of Mrk 273x is not presented here given its lower signal-to-noise ratio. The WHT spectra were obtained using the double spectrograph ISIS. The CCD detector used at the Blue and Red Arm was respectively an EEV chip of 2148$`\times `$4200 pixels with a 13.5$`\mu `$m pixel size and a Tek chip of 1124$`\times `$1124 pixels with a 24$`\mu `$m pixel size. At the Blue Arm, a grating of 64Å/mm centered at 4583Å yielded a useful wavelength coverage from 3200–5300Å. The violet end of the blue spectra was cut off by the atmosphere and by instrumental optics, while the red end was blocked out by the dichroic which had a half-power crossover wavelength of 5300Å. For the Red Arm, a grating of 158Å/mm dispersion covered the wavelength range 5300–8270Å. Three 2400s narrow-slit (0.8″) exposures and one 1800s wide-slit (6″) spectrum were obtained for each Arm. The full width at half maximum (FWHM) spectral resolution was 6.2Å for the Red Arm and 3.28Å for the Blue Arm, respectively. Wavelength calibration was carried out using an Ar-Ne lamp; the resulting wavelength accuracy is about 0.2Å. The spectra were flux-calibrated using observations of the standard star BD +33<sup>o</sup> 2642.
All the optical spectral data reduction was performed at BAO using IRAF packages. The CCD data reduction includes standard procedures such as bias subtraction, flat fielding and cosmic ray removal. The measurements of emission lines were performed under the IRAF environment using tasks “splot” and “ngaussfit”.
## 3 RESULTS
### 3.1 The Optical and X-ray Properties of Mrk 273x
We first show the WHT optical spectrum for Mrk 273x in Fig. 2. The spectrum combines both the Red and Blue Arm data. \[O iii\] $`\lambda \lambda `$4959, 5007, H$`\beta `$, \[O iii\] $`\lambda `$4363, H$`\gamma `$, \[Ne iii\] $`\lambda `$3869, \[O ii\] $`\lambda `$3727 and Mg ii $`\lambda \lambda `$2796, 2803 emission lines are all convincingly detected. The redshift determined from these emission lines is 0.458. The measured fluxes for the identified lines are listed in Table 1. All detected lines have a FWHM of about $`600\mathrm{km}\mathrm{s}^1`$. The optical B and R magnitudes for Mrk 273x are respectively about 20.8 and 19.6 from the USNO-A1.0 catalog (Monet 1996). The absolute magnitude of Mrk 273x is therefore $`M_B=-21.6`$, comparable to $`L^{}`$ in the Schechter luminosity function of galaxies (e.g., Lin et al. 1996). This luminosity includes the contribution from both the central active galactic nucleus (AGN, see below) and the host galaxy. In the short Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) snapshot image, the underlying galaxy is visible but its outer faint contours are not well-detected. From the deep R-band image (shown here in Fig. 1b) of Hibbard & Yun (1999), the AGN nucleus and the host galaxy seem to contribute comparable amounts of light.
As reported in Xia et al. (1998), Mrk 273x is also luminous in the soft X-ray (0.1–2.4 keV) band. The soft X-ray flux is $`f_\mathrm{X}=1.1\times 10^{13}\mathrm{erg}\mathrm{s}^1\mathrm{cm}^2`$, corresponding to an X-ray luminosity of $`L_\mathrm{X}1.1\times 10^{44}\mathrm{erg}\mathrm{s}^1`$. The soft X-ray spectrum of Mrk 273x is well-fitted by a power-law, with a photon index of $`1.98`$, a value typical for Seyfert 2 galaxies. Spectral fitting yielded a neutral hydrogen column density of $`N_\mathrm{H}=(4.3\pm 2.2)\times 10^{20}\mathrm{cm}^2`$.
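As a consistency check (a back-of-the-envelope sketch of ours, not part of the original analysis), the quoted luminosity follows from the flux and the adopted cosmology, for which the luminosity distance has the closed form $`D_L=(2c/H_0)(1+z)(1-1/\sqrt{1+z})`$:

```python
import math

c = 2.998e5       # speed of light [km/s]
H0 = 50.0         # Hubble constant [km/s/Mpc], as adopted in this paper
Mpc = 3.086e24    # cm per Mpc
z = 0.458

D_L = (2.0 * c / H0) * (1.0 + z) * (1.0 - 1.0 / math.sqrt(1.0 + z))  # in Mpc
f_X = 1.1e-13     # ROSAT soft X-ray flux [erg/s/cm^2]
L_X = 4.0 * math.pi * (D_L * Mpc) ** 2 * f_X
print(f"D_L = {D_L:.0f} Mpc, L_X = {L_X:.1e} erg/s")  # ~3000 Mpc, ~1.2e44 erg/s
```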
Mrk 273x is also detected in the VLA 21 cm continuum observations and appears to be consistent with a point-like source (Yun 1997; Yun & Hibbard 1999). At this high redshift, Mrk 273x falls into the class of powerful radio sources, with $`L_{1.37\mathrm{GHz}}2.0\times 10^{40}\mathrm{erg}\mathrm{s}^1`$. This single property places Mrk 273x in the categories of AGNs, QSOs, starburst and radio galaxies rather than normal galaxies.
The high X-ray luminosity, strong radio emission and the emission line widths (FWHM $`600\mathrm{k}\mathrm{m}\mathrm{s}^1`$) of Mrk 273x imply that the main energy output mechanism is an AGN; judging from the relatively narrow emission lines, Mrk 273x may be either a narrow-line Seyfert 1 (NLS1) galaxy (Osterbrock & Pogge 1985) or a Seyfert 2 galaxy. Below we explore each possibility in turn.
NLS1s are defined as Seyfert galaxies which have FWHM for the H$`\beta `$ line in the range of 500-2000 $`\mathrm{km}\mathrm{s}^1`$ and with \[O iii\] $`\lambda `$5007/H$`\beta `$$`<`$3. Mrk 273x does not satisfy the second criterion since the ratio of \[O iii\] $`\lambda `$5007 to H$`\beta `$ is about 6. Furthermore, Mrk 273x does not show prominent Fe II emission lines, which are strong in most (but not all) NLS1 galaxies. Also, the soft X-ray spectrum is well fitted by a power-law with a photon index $`1.98`$ (Xia et al. 1998) and is not as steep as those of most NLS1s. Hence, Mrk 273x does not fit the definition of NLS1 galaxies.
The relatively narrow Balmer and forbidden lines and the line ratio \[O iii\] $`\lambda `$5007/H$`\beta `$ $`6`$ are consistent with Mrk 273x being a Seyfert 2 galaxy. Turner et al. (1997, 1998) and Polletta et al. (1996) presented catalogs of several tens of Seyfert 2 galaxies. In their catalogs, the soft X-ray luminosities of all but a few listed Seyfert 2 galaxies are less than $`10^{44}\mathrm{erg}\mathrm{s}^1`$ and the $`N_\mathrm{H}`$ values are larger than $`10^{21}\mathrm{cm}^2`$. The combination of a low $`N_\mathrm{H}`$ value together with powerful X-ray emission ($`L_\mathrm{X}1.1\times 10^{44}\mathrm{erg}\mathrm{s}^1`$) is therefore rare among Seyfert 2 galaxies. This peculiarity is also supported by the quite high radio luminosity and multi-wavelength observations of Mrk 273x. The flux ratios of Mrk 273x in the soft X-ray, B-band, and radio are $`f_\mathrm{X}:f_\mathrm{B}:f_{1.37\mathrm{GHz}}=7:1:1.3\times 10^3`$; such a spectral energy distribution is rarely seen in other AGNs (see Xia et al. 1998 for more discussions). Furthermore, the low $`N_\mathrm{H}`$ value is not expected in the unified scheme of AGNs (e.g., Dopita 1997 and references therein). In this picture, the broad-line regions of Seyfert 2 galaxies are postulated to be obscured by a thick torus of gas and dust, which presumably gives rise to high $`N_\mathrm{H}`$ values.
To summarize, the optical emission line properties, the powerful X-ray and radio emission and the low neutral hydrogen absorption indicate that Mrk 273x is a rare source that may provide a test of the unified picture of AGNs.
### 3.2 Diffuse Clumps In the Tidal Plume of Mrk 273
The diffuse clumps (Mrk 273D, labeled as source D in Fig. 1a) are clearly seen in the J-film copy of POSS II and in the DSS image, but they are visible neither in the near-infrared (e.g., Smith et al. 1996), most R-band images (e.g., Yun & Scoville 1995; Mazzarella & Boroson 1993), nor in the HST WFPC2 F814 snapshot image, suggesting that they have quite blue colors. Only the deep R-band image shown in Fig. 1b (Hibbard & Yun 1999) reveals some patches in the northeast plume corresponding to these dense knots. We extracted the BAO spectra for the diffuse clumps in Fig. 1a using an aperture window size of 4″$`\times `$ 17″ for the observation on April 12, 1997 and 2.5″$`\times `$ 17″ for the observation on February 20, 1999. These aperture windows are indicated as the rectangles in Fig. 1a. The spectra are shown in Fig. 3 and the emission line fluxes are listed in Table 1. The old and new spectra are similar. The continuum is somewhat higher in the Feb. 20, 1999 observation. The difference may be real, as the two slits sampled slightly different regions, although the possibility of calibration uncertainties cannot be completely ruled out. The emission line ratios are very similar from both spectra. The redshift for Mrk 273D determined from the emission lines is the same as that of Mrk 273 ($`z=0.0376`$). Therefore these clumps are physically associated with the major merger process of Mrk 273. Yun & Scoville (1995) suggest that Mrk 273 is the merging product of a nearly edge-on gas-rich spiral and another more face-on spiral. The tidal plume in the northeast is from the face-on progenitor. So the physical association of Mrk 273D with Mrk 273 is expected.
It is clear from Fig. 3 and Table 1 that the continuum emission from Mrk 273D is very weak and the \[O iii\] $`\lambda \lambda `$4959, 5007 lines are strong compared to H$`\beta `$. More specifically, from the observation on April 12, 1997, the line ratio of the \[O iii\] $`\lambda `$5007 to H$`\beta `$ is $`18.6_{4.9}^{+9.5}`$, while the line ratio of the \[N ii\] $`\lambda `$6584 to H$`\alpha `$ is $`0.34\pm 0.1`$ (the error bars are 1$`\sigma `$ values). For the new observation, the ratio of the \[O iii\] $`\lambda `$5007 line to H$`\beta `$ is $`18.9_{5.7}^{+13}`$ while the ratio of the \[N ii\] $`\lambda `$6584 line to H$`\alpha `$ is $`0.3\pm 0.1`$. Note that these two line ratios are little affected by dust extinction since the two lines involved in each ratio are quite close in wavelength. Fig. 4 shows the standard diagnostic diagram of \[O iii\] $`\lambda `$5007/H$`\beta `$ versus \[N ii\] $`\lambda `$6584/H$`\alpha `$ (Osterbrock 1989) for emission line galaxies. The H ii, LINER and Seyfert galaxies occupy different regions in this diagram. For H ii regions photoionized by O, B stars, $`[\mathrm{O}\mathrm{iii}]\lambda 5007/\mathrm{H}\beta <5`$ for \[N ii\] $`\lambda `$6584/H$`\alpha `$ in the range 0.3–0.4 (see also Fig. 12.1 in Osterbrock 1989). The \[O iii\] $`\lambda `$5007/H$`\beta `$ ratio of Mrk 273D is clearly much higher than those typically found for photoionized H ii regions. In fact, Mrk 273D is clearly located in the region occupied by Seyfert 2 galaxies. The \[O i\] $`\lambda `$6300/H$`\alpha `$ and \[S ii\] $`\lambda `$6716,6731/H$`\alpha `$ line ratios of Mrk 273D are also located in the Seyfert 2 region (cf. Fig. 12.2, 12.3, Osterbrock 1989 and more discussions in Xia et al. 1998). To achieve these Seyfert 2 like line ratios, the ionization source must be harder than the radiation provided by young massive stars. Mrk 273D is, however, obviously not an AGN given its diffuse morphology. We discuss further the mechanism of line excitation in Mrk 273D in §4.2.
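For orientation, the numbers behind this classification follow directly from the ratios quoted above (a small numerical note of ours; the underlying line fluxes are in Table 1):

```python
import math

# Line ratios of Mrk 273D from the April 12, 1997 spectrum (see text)
oiii_hbeta = 18.6    # [O III] 5007 / H-beta
nii_halpha = 0.34    # [N II] 6584 / H-alpha

print("log([O III] 5007 / H-beta)  = %.2f" % math.log10(oiii_hbeta))  # ~1.27
print("log([N II] 6584 / H-alpha)  = %.2f" % math.log10(nii_halpha))  # ~-0.47

# H II regions photoionized by O, B stars have [O III]/H-beta < 5 at this
# [N II]/H-alpha (see text); Mrk 273D exceeds that limit by a factor of ~4
print("factor above H II-region limit:", oiii_hbeta / 5.0)
```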
## 4 DISCUSSION
### 4.1 Mrk 273x: an Unusual Seyfert 2 Galaxy
Mrk 273x is a background object in the Mrk 273 field; it is at a much higher redshift ($`z=0.458`$) than Mrk 273 which has a redshift of 0.0376. It is interesting to note that the X-ray companions of three nearest ultraluminous IRAS galaxies (Arp 220, Mrk 273 and Mrk 231) are all background sources and are therefore not physically associated with the mergers themselves (cf. Xia et al. 1998). Mrk 273x has AGN characteristics both optically and in the soft X-ray together with a high radio luminosity. The narrow emission lines and various line ratios are consistent with it being a Seyfert 2 galaxy. Its soft X-ray luminosity is one of the highest among Seyfert 2 galaxies with a low neutral hydrogen column density. Although many well known nearby Seyferts (including Mrk 273 itself) also host an energetically significant starburst, the low neutral hydrogen column density of $`N_\mathrm{H}4.4\times 10^{20}\mathrm{cm}^2`$ appears to exclude the possibility of Mrk 273x being a dusty luminous starburst galaxy. These observational facts show that Mrk 273x is an unusual Seyfert 2 galaxy that is not easy to explain using the unified scheme of AGNs.
Xia et al. (1998) examined the time variability of Mrk 273x using the ROSAT data taken in May and June 1992. While Mrk 273x is fainter than Mrk 273 by about 20% in the ROSAT PSPC image, it is almost as bright as Mrk 273 in the ROSAT HRI image. A $`\chi ^2`$ test however reveals that these changes are not statistically significant in the ROSAT data. From the 0.5-2 keV image of SIS on board ASCA, Iwasawa (1998) found that Mrk 273x is at most 40 percent as bright as Mrk 273 on Dec. 27, 1994. Since Mrk 273 is not known to vary in the soft X-ray, these multi-epoch observations imply that Mrk 273x has faded by a factor of 2 in two and a half years. Cycle 1 AXAF observations are planned to obtain a high-resolution map of Mrk 273x and Mrk 273. These observations will provide further insights on their X-ray variability, higher energy behaviors and spatial distribution of the X-ray emissions.
### 4.2 The Excitation Mechanism of Mrk 273D
The diffuse clumps in Mrk 273D in the northeast plume could come directly from the extreme outer regions of the face-on progenitor. It is also possible that Mrk 273D was formed in the major merger process. This patchy object is unlikely to be a self-gravitating (tidal) dwarf galaxy since such objects are usually found far from the interacting parent galaxies and their spectra resemble typical photoionized H ii regions (e.g., Duc & Mirabel 1997). Furthermore, clumps formed close to the merger are liable to tidal disruption once they fall back onto the merger. (However, some observations indicate that clumps close to the merging pair do exist, e.g., H ii regions in the NGC 4676A tail, Hibbard & van Gorkom 1996.) In contrast, Mrk 273D is very close to ($`20`$ kpc in projected distance from) the main merging nuclei, and its spectrum differs significantly from those of photoionized H ii regions.
Although the spectrum of Mrk 273D resembles that of a Seyfert 2 galaxy, its diffuse morphology suggests that it is not a (dwarf) AGN. In fact, the spectrum of Mrk 273D is very similar to that of the soft X-ray nebula to the north of Mrk 266 (cf. Fig. 5 in Kollatschny & Kowatsch 1998). As pointed out by these authors, the V-R color ($`0.5\pm 0.1`$) of the northern component is exceptionally blue due to the intense \[O iii\] $`\lambda \lambda `$4959, 5007 line emission. The diffuse gas clumps in Mrk 273D are also very blue, since they are only seen in the B-band, but not in other bands redder than R. The blue color of Mrk 273D is also due to the continuum being dominated by the \[O iii\] $`\lambda \lambda `$4959, 5007 line emission (cf. Fig. 3). The spectrum of Mrk 273D is also similar to the average spectrum of the northwest cone of NGC 2992 (cf. Fig. 3a in Allen et al. 1999) and the spectrum of diffuse ionized gas in NGC 891 (Rand 1997, 1998). As discussed by these authors, the likely excitation mechanism for these peculiar spectra is the shock plus precursor model (Dopita & Sutherland 1995). For a fast shock with velocity of several hundred $`\mathrm{km}\mathrm{s}^1`$, copious UV photons are produced in the shock front, which can in turn excite the gas of H ii regions in front of the shock, thus producing radiative precursors. The hardness of the UV radiation (shock temperature) depends on the velocity of the shock wave, and with suitable parameters, this scenario can produce the line ratios as seen in Mrk 273D (cf. Fig. 2a in Dopita & Sutherland 1995).
It is believed that starbursts can drive radial outflows (Wang, Heckman & Weaver 1997). There is some evidence that such outflows (superwinds) indeed exist in several ultraluminous IRAS galaxies out to tens of kpc with velocities of a few hundred to 1000 $`\mathrm{km}\mathrm{s}^1`$, e.g., in NGC 6240, Arp 299, Arp 220 (Heckman et al. 1987, 1990, 1996, 1999; Schulz et al. 1997; Wang et al. 1997). Since Mrk 273 is also a major merger ultraluminous infrared galaxy and has extended soft X-ray emission and H i nebula, it is conceivable that such an outflow exists in Mrk 273. In this regard, Mrk 266 is also a candidate since it is a luminous infrared merging galaxy with double nuclei ($`7`$ kpc in projected separation) and very extended (about 150 kpc) soft X-ray nebula.
These observations suggest that the shock+precursor emission may be a common mechanism for exciting gaseous nebulae in luminous infrared merging galaxies, in addition to O, B star photoionization and AGN excitation. Wu et al. (1998) showed that many observed ultraluminous IRAS galaxies have mixture types in different line ratio diagnostic diagrams. Goncalves et al. (1998) also suggested that most emission line galaxies with the so-called transition spectrum have composite spectra with the simultaneous presence of Seyfert, LINER and H ii region contributions. Perhaps these objects are not just a mixture of AGN and photoionized starburst regions (Genzel et al. 1998); they may also contain shock+precursor regions, as seen in Mrk 273D and the northern nebula of Mrk 266. This also provides a caution: a Seyfert 2 like spectrum in high-redshift luminous infrared galaxies does not necessarily mean the presence of AGNs at their centers; instead such a spectrum could be induced by shock+precursor excitation in the gas clumps in the mergers.
## 5 SUMMARY
We have presented optical spectroscopic observations of the X-ray companion source and the blue diffuse clumps in the northeast tidal debris surrounding the ultraluminous galaxy Mrk 273. Their peculiar properties are discussed and explored together with the available soft X-ray and radio observations. We summarize below the main points we have presented and addressed in this paper.
Mrk 273x, the X-ray companion $`1\mathrm{}.3`$ to the northeast of the ultraluminous galaxy Mrk 273, is a background source at $`z=0.458`$. Its redshift is much higher than the redshift ($`z=0.0376`$) of Mrk 273. The soft X-ray spectrum for Mrk 273x is typical of Seyfert 2 galaxies. The X-ray luminosity of Mrk 273x is exceptionally high for a Seyfert 2 galaxy, $`L_\mathrm{x}1.1\times 10^{44}\mathrm{erg}\mathrm{s}^1`$. Spectral fitting gives a low neutral hydrogen column density, $`N_\mathrm{H}4.4\times 10^{20}`$ cm<sup>-2</sup>. All the optical emission lines detected for Mrk 273x have a similar FWHM of $`600`$ km s<sup>-1</sup> and the \[O iii\] $`\lambda `$5007/H$`\beta `$ ratio is about 6, again typical for Seyfert 2 galaxies. Mrk 273x is also a powerful radio source with a radio luminosity $`L_{1.37\mathrm{GHz}}2.0\times 10^{40}\mathrm{erg}\mathrm{s}^1`$. This adds more peculiarities to this energetically unusual Seyfert 2 galaxy.
The optical knots of Mrk 273D in the northeast tidal tail/plume, $`20`$ kpc (projected) from the nuclear region of Mrk 273, are at the same redshift as Mrk 273. They are diffuse gas clumps physically associated with the major merger. The spectrum of Mrk 273D is dominated by strong emission from the \[O iii\] doublet. The strong line emission gives rise to the blue color of this object. The \[O iii\]$`\lambda `$5007/H$`\beta `$ and \[N ii\]$`\lambda `$6584/H$`\alpha `$ line ratios are $`19`$ and 0.3, respectively. These Seyfert 2–like line ratios of Mrk 273D are likely excited by the shock plus precursor mechanism involved in the merging process.
###### Acknowledgements.
We are grateful to Zheng Zheng for assistance in observations and preliminary data reductions. We also thank Simon White for advice and the anonymous referee and the scientific editor John Huchra for valuable criticisms that have improved the paper. We very much appreciate the deep R-band image kindly provided to us by John Hibbard. This project was partially supported by the NSF of China and the exchange program between NSFC and DFG. Y.G.’s research at LAI, Dept. of Astronomy is funded by NSF grants AST96-13999 and by the University of Illinois. Y.G. is also grateful to Ernie Seaquist for support at the University of Toronto.
# Freezing of polydisperse hard spheres
## Abstract
The fluid - crystal equilibria of polydisperse mixtures of hard spheres have been studied by computer simulation of the solid phase and using an accurate equation of state for the fluid. A new scheme has been developed to evaluate the composition of crystalline phases in equilibrium with a given polydisperse fluid. Some common assumptions in theoretical approaches and their results are discussed in the light of the simulation results. Finally, no evidence of the existence of a terminal polydispersity in the fluid phase is found for polydisperse hard spheres; the disagreement of this finding with previous molecular simulation results is explained in terms of the inherent limitations of some ways of modeling the chemical potential as a function of the particle size.
PACS numbers: 05.70.Fh, 64.70.Do, 82.70.Dd
The phase behavior of polydisperse mixtures of hard spheres (PHS) has received some attention in recent years. Different theoretical approaches and simulation methods have been used to gain knowledge about the transition from a polydisperse fluid phase to crystal phase(s). The theoretical approaches usually involve drastic approximations regarding the composition of the phases, and the results are often presented in the form of stability diagrams ; both facts should be carefully taken into account when interpreting the results. Bolhuis and Kofke have studied the fluid - solid equilibria by using molecular simulation methods, finding a “terminal polydispersity” in the fluid phase that they interpreted as the maximum polydispersity of a fluid which can undergo a freezing transition, and related such a result to some experimental data. The origin of the “terminal polydispersity” of Ref. will be addressed in this work.
Let $`P(\sigma )`$ be a given probability distribution function of particle diameters (PDFD). The distribution can be characterized by its moments, $`m_k=<\sigma ^k>/\sigma _0^k`$, where $`\sigma _0`$ is a reference diameter.
The thermodynamics of PHS fluids is very accurately described by the generalization by Salacuse and Stell to the polydisperse case of the equation of state (EOS) due to Boublik and Mansoori, Carnahan, Starling and Leland (BMCSL) . In such an equation the pressure, $`p`$, can be written as: $`\beta p=\beta p(m_1,m_2,m_3,\eta )`$ where $`\eta `$ is the packing fraction: $`\eta =\pi Nm_3\sigma _0^3/(6V)`$. $`N`$ is the number of particles, $`V`$ is the volume, and $`\beta 1/(k_BT)`$, with $`k_B`$ being the Boltzmann’s constant and $`T`$ the absolute temperature.
The excess chemical potential, $`\mu _{ex}`$, in the fluid phase takes the form: $`\beta \mu _{ex}\left(\sigma \right)=\sum _{k=0}^3c_k\sigma ^k`$, where the coefficients $`c_k`$ depend on $`m_1`$, $`m_2`$, $`m_3`$ and either $`\eta `$ or $`\beta p\sigma _0^3`$.
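For reference, a minimal implementation of this EOS is sketched below (a sketch of ours; the explicit BMCSL pressure in terms of the partial packing fractions $`\xi _k=(\pi /6)\rho m_k\sigma _0^k`$ is assumed to take its standard form). It reduces to the Carnahan-Starling pressure in the monodisperse limit:

```python
import math

def bmcsl_pressure(rho, m1, m2, m3):
    """Reduced pressure beta*p*sigma0^3 of a polydisperse hard-sphere fluid
    (BMCSL EOS); rho is the number density in units of sigma0**-3."""
    xi0, xi1, xi2, xi3 = (math.pi / 6.0 * rho * m for m in (1.0, m1, m2, m3))
    d = 1.0 - xi3
    return (6.0 / math.pi) * (xi0 / d + 3.0 * xi1 * xi2 / d**2
                              + (3.0 - xi3) * xi2**3 / d**3)

# Monodisperse check against Carnahan-Starling: Z = (1+eta+eta^2-eta^3)/(1-eta)^3
eta = 0.45
rho = 6.0 * eta / math.pi
Z_cs = (1 + eta + eta**2 - eta**3) / (1 - eta) ** 3
print(bmcsl_pressure(rho, 1.0, 1.0, 1.0) / rho, Z_cs)  # both ~9.38
```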
The goal of the present work is to evaluate the fluid - solid equilibrium for a given PDFD in the fluid phase. This point of view is the main difference with respect to the calculations of Ref. ; however, the statistical mechanics underlying both procedures is basically the same.
In order to study polydisperse systems it is convenient to make use of the semigrand (SG) ensemble, where the pressure, the total number of particles, $`N`$, and the chemical potential differences between the different species and a reference one are fixed. For hard body interactions the basic thermodynamic differential relation reads
$$d\left[N\beta \mu _0\right]=Vd\left(\beta p\right)+\beta \mu _0dN-\sum _{i\ne 0}N_id\left(\beta \mu _{i0}\right)$$
(1)
where $`\mu _0`$ is the chemical potential of the reference species. The sum runs over the other components, $`N_i`$ is the number of particles of species $`i`$ and $`\mu _{i0}\equiv \mu _i-\mu _0`$. Later a continuous distribution of sizes will be used; however, the discrete description is kept here for the sake of clarity in the equations.
An imposed chemical potential distribution (ICPD) is used to perform the calculations; such a distribution should produce the required PDFD in the fluid phase. In contrast with the procedure of Ref. , here the ICPD will depend on the pressure. We have taken advantage of the accuracy of the BMCSL EOS: such an equation lets us link the fluid phase composition with the chemical potential distribution at a given pressure.
Let $`P_0(\sigma )`$ be the expected PDFD, for instance, a Gaussian distribution centered at $`\sigma _0`$ and with standard deviation $`\sigma _0\lambda `$. We can use as input the values of $`\beta \mu \left(\sigma \right)`$ given by:
$$\beta \mu \left(\sigma \right)=-\frac{\left(\sigma -\sigma _0\right)^2}{2\sigma _0^2\lambda ^2}+\beta \mu _{ex}^{BMCSL}(\sigma ,\beta p,\lambda )$$
(2)
The actual PDFD of the fluid, $`P(\sigma )`$, will be practically identical to $`P_0`$ due to the accuracy of the BMCSL EOS. The two contributions to the chemical potential can be grouped by using a unique set of coefficients $`\{a_k\}`$:
$$\beta \mu \left(\sigma \right)=\sum _{k=0}^3a_k\left(\frac{\sigma }{\sigma _0}\right)^k$$
(3)
The coefficients $`a_k`$ will be functions of $`\lambda `$ and $`\beta p`$. The strategy to evaluate the fluid - solid equilibrium under the conditions stated above relies on Gibbs-Duhem (or Clausius-Clapeyron) integration schemes . The procedure is sketched as follows: starting from a given point ($`\beta p_0,\lambda _0`$) at which both phases are in equilibrium, we obtain a trajectory on the $`(\beta p,\lambda )`$ plane that keeps the equilibrium conditions fulfilled. This can be done because the chemical potential differences can be written as functions of $`\beta p`$ and $`\lambda `$ through the coefficients $`a_k`$. Therefore, considering $`N`$ fixed:
$$d\left[N\beta \mu _0\right]=\left[V-\sum _{i\ne 0}N_i\left(\frac{\partial \left(\beta \mu _{i0}\right)}{\partial \beta p}\right)_\lambda \right]d\left(\beta p\right)-\left[\sum _{i\ne 0}N_i\left(\frac{\partial \left(\beta \mu _{i0}\right)}{\partial \lambda }\right)_{\beta p}\right]d\lambda $$
(4)
In the limit of a continuous distribution of sizes we can write a Clausius-Clapeyron-like equation for the coexistence $`(\beta p,\lambda )`$ line:
$$\left(\frac{d\beta p}{d\lambda }\right)_{coex}=\frac{\sum _{k=1}^3\left(\partial a_k/\partial \lambda \right)_{\beta p}\mathrm{\Delta }m_k}{\mathrm{\Delta }v-\sum _{k=0}^3\left(\partial a_k/\partial \left(\beta p\right)\right)_\lambda \mathrm{\Delta }m_k}$$
(5)
where $`\mathrm{\Delta }`$ represents the difference between the values of the corresponding property in the two phases and $`vV/N`$. The values of the derivatives of $`a_k`$ with respect to $`\beta p`$ and $`\lambda `$ can be evaluated numerically.
The starting point of the Clausius-Clapeyron integration (CCI) was the monodisperse hard sphere system ($`\lambda =0`$), where the equilibrium pressure is known to be $`\beta p\sigma ^311.71`$. A second-order predictor-corrector scheme has been used to advance the integration. The integration step was $`\delta \lambda =0.0025`$; the initial slope was found to be zero. The fluid properties have been directly extracted from the BMCSL EOS. A number of tests for several points on the ($`\beta p,\lambda `$) trajectory were performed by carrying out SG Monte Carlo (SGMC) simulations of the fluid phase using $`N=256`$ and, within numerical accuracy, no differences between simulation and theoretical results were found. The solid phase was considered to be in a face centered cubic (FCC) ordering and its properties were evaluated by SGMC simulation.
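Schematically, the integration loop reads as follows (a sketch of ours, not the production code: the function `slope`, standing for the right-hand side of Eq. (5), is a placeholder that in practice requires the BMCSL fluid properties and SGMC data for the solid at each point):

```python
def slope(bp, lam):
    """Right-hand side of Eq. (5) at the point (beta*p, lambda); placeholder:
    Delta m_k and Delta v must come from the BMCSL EOS and SGMC runs."""
    raise NotImplementedError

def integrate_coexistence(bp0=11.71, dlam=0.0025, nsteps=100):
    """Second-order predictor-corrector along the coexistence line,
    starting from the monodisperse limit (lambda = 0)."""
    bp, lam = bp0, 0.0
    path = [(lam, bp)]
    f_old = slope(bp, lam)                  # initial slope (zero here)
    for _ in range(nsteps):
        bp_pred = bp + dlam * f_old         # predictor (Euler step)
        f_new = slope(bp_pred, lam + dlam)
        bp += 0.5 * dlam * (f_old + f_new)  # trapezoidal corrector
        lam += dlam
        f_old = slope(bp, lam)
        path.append((lam, bp))
    return path
```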
Details of the simulation procedure will be published elsewhere . It suffices to say that three kinds of moves were performed: i) translation of the spheres (following standard procedures), ii) changes of a particle diameter, choosing the new diameter with probability proportional to $`\mathrm{exp}[\beta \mu (\sigma )]`$ with $`\sigma [0,\sigma _{max}]`$, where $`\sigma _{max}`$ depends on the hard sphere interactions, and iii) changes of volume, where we found it convenient to scale simultaneously the size of the particles to enhance the convergence of the sampling; a sketch of the diameter move is given below.
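One possible realization of the diameter move (again only a sketch under our own assumptions) is plain rejection sampling of the weight $`\mathrm{exp}[\beta \mu (\sigma )]`$ of Eq. (3) on $`[0,\sigma _{max}]`$:

```python
import math
import random

def sample_diameter(a, sigma_max, grid=1000):
    """Draw a diameter with probability proportional to
    exp[beta*mu(sigma)] = exp[sum_k a_k sigma^k] on [0, sigma_max]."""
    def w(s):
        return math.exp(sum(ak * s**k for k, ak in enumerate(a)))
    # crude upper bound on the weight over the interval (adequate for a sketch)
    wmax = max(w(sigma_max * i / grid) for i in range(grid + 1))
    while True:
        s = random.uniform(0.0, sigma_max)
        if random.random() * wmax <= w(s):
            return s
```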
CCI were performed for systems with $`N=108`$ and $`N=256`$. No significant finite-size effects were found regarding the main conclusions of the work. Some results for $`N=256`$ are presented in the figures. Fig. 1 shows the pressure as a function of the polydispersity for the two phases in equilibrium. In Fig. 2 we plot the packing fraction of the coexisting phases as a function of the polydispersity of the fluid phase. In Fig. 3 we show how the average diameter of the crystal phase increases with the polydispersity of the fluid phase.
In some theoretical works conjectures have been made regarding the possibility of finding a fluid in equilibrium with two or more solid phases. This result, based on diagrams of phase stability, is not consistent with the phase rule, except at a number of singular points. The origin of such results lies in the theoretical approximation that the fluid composition (polydispersity) is equal to the overall composition in the solid phase(s). As can be seen in figures 1 and 3, such a condition is not fulfilled except for very low polydispersities. However, one should not neglect the possibility of finding two (or more) crystalline phases which could compete to become the solid phase in equilibrium with the fluid. The phase diagram of binary mixtures of hard spheres can be used as an example. In general, for an equimolar fluid mixture $`x=1/2`$, the solid phase in equilibrium with the fluid is richer in the large component. As the size difference increases an eutectic point appears in the phase diagram ; it could happen that for a given size difference the eutectic composition becomes $`x_s=1/2`$; in that case, for larger size differences, the stable solid phase could become composed mainly of the small spheres. In order to check such a possibility for the polydisperse system we have made some control simulations starting from FCC phases composed of particles smaller than $`\sigma _0`$ at pressures close to the ones obtained in the CCI. Those systems evolved either to the same solid as appeared in the CCI or to the melting of the sample. These results seem to discard a change of stable phase as $`\lambda `$ increases (at least in the range studied in this work). It is clear, however, that such a competition between solid phases could appear as the freezing proceeds; the change of the composition of the fluid phase will alleviate the phase-rule restrictions, and this is quite likely to happen at high pressures, where the fluid could even disappear as an equilibrium phase. Other possibilities have not been considered here, for instance the formation of crystal phases with a bimodal distribution of sizes, which could be accommodated, for instance, in a body-centered cubic lattice.
Here we briefly discuss the terminal polydispersity in the fluid phase which appears in the results of Ref. . In that work the ICPD has the form:
$$\beta \mu _{i0}\left(\sigma \right)=-\frac{\left(\sigma -\sigma _0\right)^2}{2\sigma _0\lambda ^2}$$
(6)
With this function a CCI scheme was performed from the monodisperse limit ($`\lambda =0`$), and it was found that $`m_1`$ tends to zero as $`\lambda `$ increases and that the values of the reduced polydispersity, $`s_2=\sqrt{m_2/m_1^21}`$, in the fluid at equilibrium with a solid phase have to be smaller than a certain “terminal polydispersity”, which the authors identified with some experimental results on the crystallization of colloidal mixtures. In our simulations no such terminal polydispersity appeared; up to that point, however, the results are roughly similar to those of Ref. . This apparent anomaly can be explained in terms of the form of the ICPD. In our scheme, the excess contribution to the chemical potential produces a positive value of the coefficient $`a_3`$ of the ICPD, favoring large diameters to compensate the effect of the hard sphere repulsions; the form of the ICPD given by Eq. (6), however, imposes a limit on the maximum packing fraction of a fluid phase with a certain polydispersity well below close packing. It is possible to estimate such a limit by computer simulation or using the BMCSL EOS. In fact the terminal points in the diagrams of Ref. correspond just to the crossing between the fluid branch of the phase diagram and the line of such a maximum packing as a function of the polydispersity. As pointed out in Ref. , the end of the coexistence curve is conditioned by the ICPD; however, such an end does not seem to correspond to any relevant physical situation.
The existence of a terminal polydispersity in the fluid phase has also been treated theoretically (see for instance Ref. ); however, the results are strongly influenced by the restrictions imposed on the size fractionation.
From the inspection of Fig. 1 one could think that a polydisperse crystal is stable with respect to the fluid only in a range between two pressures. This is not the case: we must emphasize that two points with the same value of the reduced polydispersity on the crystal “branch” do not correspond to the same PDFD (see Fig. 2 and Fig. 3). The one corresponding to the higher pressure is associated with a distribution with a greater value of $`m_1`$ and a more negative value of the skewness (i.e. the distribution has an asymmetric tail extending out towards small values of $`\sigma `$).
The authors acknowledge the financial support of DGICYT/Spain (Grant No. PB95-072-C02-02)
# Ground state of the hard-core Bose gas on lattice I. Energy estimates
## 1 Introduction
Bosons on a lattice interacting via an infinite on-site repulsion (hard-core bosons) represent a system of double interest. It is the simplest example of an interacting Bose gas and, thus, the most promising candidate for a rigorous treatment of Bose-Einstein condensation of interacting particles. On the other hand, the model is known to be equivalent to a system of $`\frac{1}{2}`$ spins coupled via the $`XY`$\- and possibly the $`Z`$-components of neighboring spins and exposed to an external magnetic field in the $`Z`$ direction. Ordering of the planar component of the spins is equivalent to Bose-Einstein condensation or the appearance of off-diagonal long-range order (ODLRO) in the system of bosons. In spite of a long and extensive study the results about ordering are far from being complete. Apart from some exceptions, like bounds on the density of the condensate or the discussion of the model on the full graph , the most interesting and difficult results were formulated in spin terminology . These works made use of a particular symmetry, the reflexion positivity. This introduced severe limitations as to the value of the external field (zero field) and the lattice type (essentially hypercubic lattices). Translated into the language of the boson gas, ODLRO was shown at half-filling on hypercubic lattices in the ground state in and above two dimensions, and for low enough temperatures above two dimensions.
In this paper we apply the boson terminology. Let $`𝐋`$ be an infinite lattice which, for the sake of simplicity, will be supposed to be regular with a constant coordination number (valency) $`k`$. Throughout the paper $`\mathrm{\Lambda }`$ denotes a finite part of $`𝐋`$ equipped with a periodic boundary condition so as to keep the valency constant (not really essential). The Hamiltonian we are going to study is
$$H=-\sum _{\{x,y\}\in E\mathrm{\Lambda }}(b_x^{}b_y+b_y^{}b_x)$$
(1)
We write $`x,y,\mathrm{}`$ for the vertices of $`𝐋`$, and $`E\mathrm{\Lambda }`$ for the set of edges of $`\mathrm{\Lambda }`$; $`b_x^{}`$ and $`b_x`$ create, resp., annihilate a hard-core boson at $`x`$. Boson operators at different sites commute with each other while
$$b_x^{}b_x+b_xb_x^{}=1$$
(2)
accounts for the hard-core condition. Correspondence with spin models is obtained by setting $`b_x=S_x^-`$ and $`b_x^{}=S_x^+`$. The Hamiltonian conserves the number of bosons,
$$N=\sum _{x\in \mathrm{\Lambda }}n_x=\sum _{x\in \mathrm{\Lambda }}b_x^{}b_x$$
(3)
and is also invariant under particle-hole transformation. We can, therefore, fix $`N`$ so that $`\rho =N/|\mathrm{\Lambda }|`$ is between 0 and $`\frac{1}{2}`$. (Here and below, if $`A`$ is a finite set, $`|A|`$ denotes the number of its elements.) Our concern in this paper is to provide nontrivial upper bounds on the ground state energy, $`E_0`$. That such an estimate may be useful in the study of qualitative properties of the ground state was an interesting point of .
Let $`X,Y,\mathrm{}`$ denote $`N`$-point subsets of $`\mathrm{\Lambda }`$, called also configurations. A convenient basis is formed by the states
$$\varphi (X)=\prod _{x\in X}b_x^{}\mathrm{\Psi }_{\mathrm{vac}}$$
(4)
where $`\mathrm{\Psi }_{\mathrm{vac}}`$ is the vacuum state. Variational estimates of the ground state energy are of the form
$$E_0\le \langle \psi |H|\psi \rangle /\langle \psi |\psi \rangle .$$
(5)
A trivial choice is
$$\psi =\sum _X\varphi (X)$$
(6)
where the summation goes over all $`N`$-point subsets of $`\mathrm{\Lambda }`$. It yields (cf. Section 2)
$$E_0\le -k|\mathrm{\Lambda }|\left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|-2}{N-1}\right)\left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|}{N}\right)^{-1}=-k\rho (1-\rho )|\mathrm{\Lambda }|+O(1).$$
(7)
The bound is nothing else than minus the average size of the boundaries of the configurations: we call the boundary of $`X`$ the set of half-filled edges and denote it by $`\partial X`$. Hence, (7) is equivalent to
$$|E_0|\ge \overline{|\partial X|}\equiv \left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|}{N}\right)^{-1}\sum _X|\partial X|$$
(8)
with summation over $`N`$-point configurations. If $`\mathrm{\Lambda }`$ is a full graph, i.e., any two sites are neighbors, (6) is an exact eigenvector and (8) holds with equality. In any other case we have a strict inequality.
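The counting behind (7) and (8) is elementary: a given edge $`\{x,y\}`$ of $`\mathrm{\Lambda }`$ is half-filled in exactly $`2\left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|-2}{N-1}\right)`$ of the $`N`$-point configurations, so summing over the $`\frac{1}{2}k|\mathrm{\Lambda }|`$ edges,

$$\overline{|\partial X|}=\left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|}{N}\right)^{-1}k|\mathrm{\Lambda }|\left(\genfrac{}{}{0pt}{}{|\mathrm{\Lambda }|-2}{N-1}\right)=k\frac{N(|\mathrm{\Lambda }|-N)}{|\mathrm{\Lambda }|-1}=k\rho (1-\rho )|\mathrm{\Lambda }|+O(1).$$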
The present work is about trying to improve on this bound, i.e., to make the right-hand side of (8) larger by an amount proportional to $`|\mathrm{\Lambda }|`$. It is important to know that this is possible for any $`k`$ and any $`\rho `$. In the opposite case, if $`-k\rho (1-\rho )`$ were to be the true ground state energy per site, one could easily conclude that, as in the full graph, the Hamiltonian (1) has a product ground state in infinite volume,
$$\mathrm{\Psi }=\prod _x\left(\sqrt{\rho }|n_x=1\rangle +\sqrt{1-\rho }|n_x=0\rangle \right)$$
(9)
with ODLRO and the value of the order parameter at its theoretical maximum ($`\rho (1-\rho )`$, cf. ).
Since putting equal weights on every configuration, as in (6), yields the average boundary size (8), we expect that a larger value can be obtained by giving larger weights to configurations with larger boundaries. Therefore, we calculate variational bounds by using trial functions of the form
$$\psi _v=\sum _Xv_X\varphi (X),\qquad v_X=v(|\partial X|)$$
(10)
i.e., $`v_X`$ depending on $`|\partial X|`$ only, and $`v(n)`$ being concentrated on $`n>\overline{|\partial X|}`$. Computability depends on our ability to estimate the number of configurations with a given size of boundary. The logarithm of this number is the entropy of the Ising model in a microcanonical ensemble with a fixed magnetization, $`\sum _{x\in \mathrm{\Lambda }}\sigma _x=|\mathrm{\Lambda }|(1-2\rho )`$. Indeed, if we put $`\sigma _x=-1`$ for $`x`$ in $`X`$ and $`\sigma _x=1`$ elsewhere, we get an Ising configuration with the corresponding union of contours $`\partial X`$ whose total length, $`|\partial X|`$, is the energy of the Ising configuration. In mathematical terms, the distribution of $`|\partial X|`$ satisfies a large deviation principle whose rate function is, apart from a shift, minus the specific entropy of the Ising model, cf. . The exact form of the entropy is unknown for two- and higher-dimensional lattices. To circumvent this problem, we use an approximate formula for the probability of having a boundary of length $`n`$,
$$P_{\mathrm{\Lambda },N}(|\partial X|=n)\approx Z^{-1}\mathrm{exp}\left\{-\frac{(n-M)^2}{2D^2}\right\}.$$
(11)
Here $`Z`$ is for normalization and
$$M=\overline{|\partial X|}=k\rho (1-\rho )|\mathrm{\Lambda }|,\qquad D^2=\overline{\left(|\partial X|-\overline{|\partial X|}\right)^2}.$$
(12)
What is the approximation in the above expression? First, we replace the smooth and concave specific entropy $`s(ϵ)`$, having a maximum at the specific Ising energy $`ϵ_m=k\rho (1-\rho )`$, by a parabola. Since the improved bound on $`|E_0|/|\mathrm{\Lambda }|`$ we are going to derive is $`(1+\delta )ϵ_m`$ with a $`\delta `$ never exceeding 0.2, cf. Table 4 below, this seems to be a consistent approximation. Second, we surmise that the second derivative of $`s(ϵ)`$ at the maximum is $`-|\mathrm{\Lambda }|/D^2`$. An analogous statement holds true for the large deviations of a sum of identically distributed independent random variables. Now $`|\partial X|`$ is the sum of identically distributed random variables, although not independent but finitely dependent ones, see Section 3. So this approximation is hopefully good. In any case, we use the formula (11) without looking for further justification. It may happen that the energies of trial states depending on $`|\partial X|`$ only can be quite close to $`E_0`$. Then the error of the approximation (11) may invalidate our estimates as rigorous upper bounds. They still can be useful as approximate formulas showing the $`k`$\- and $`\rho `$-dependence of the ground state energy. We apply the term ‘bound’ with this reservation.
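To make the parabolic approximation explicit: writing the number of configurations with $`|\partial X|=n`$ as $`\mathrm{exp}\{|\mathrm{\Lambda }|s(n/|\mathrm{\Lambda }|)\}`$ and expanding $`s`$ around its maximum,

$$|\mathrm{\Lambda }|s(n/|\mathrm{\Lambda }|)\approx |\mathrm{\Lambda }|s(ϵ_m)+\frac{1}{2}s^{\prime \prime }(ϵ_m)\frac{(n-M)^2}{|\mathrm{\Lambda }|},$$

so the surmise $`s^{\prime \prime }(ϵ_m)=-|\mathrm{\Lambda }|/D^2`$ indeed reproduces the Gaussian form (11) with $`M=|\mathrm{\Lambda }|ϵ_m`$.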
In Section 2 we present the general setup for the estimates. In Section 3 we derive an expression for $`D^2`$ which seems to be valid for any vertex-transitive graph (all of whose vertices are equivalent). This formula may also be of interest in site percolation problems and in the approximation of the entropy of the Ising model in nonvanishing magnetic fields. In Section 4 we compute variational bounds by using step, exponential and Gaussian functions for $`v(n)`$. The last two will be seen to yield improved bounds for any value of $`k`$ and $`\rho `$. Section 5 summarizes the results and indicates the way they can be extended to the grand-canonical ensemble.
In the true ground state $`v_X`$ is not a function of $`|\partial X|`$ only. The detailed dependence on $`X`$ may not be relevant for computing the ground state energy but is absolutely crucial for qualitative properties, like ordering. Wave functions of the kind (10) trivially show off-diagonal long-range order. In the true ground state fluctuations around a function of $`|\partial X|`$ destroy ODLRO in one dimension and may decrease the order parameter in higher dimensions. A discussion of the ground state wave function will be given in a subsequent paper .
This work was supported by the Hungarian Scientific Research Fund (OTKA) under Grant No. T 030543.
## 2 Setup for the energy estimates
It seems advantageous to consider the eigenvalue problem of (1) as a problem of graph theory. The $`N`$-point configurations are vertices of a huge graph that we call the power graph of $`\mathrm{\Lambda }`$ of order $`N`$ and denote by $`G=G_{\mathrm{\Lambda },N}`$. Two configurations $`X`$ and $`Y`$ form an edge of $`G`$ if $`Y`$ can be obtained from $`X`$ by moving a single particle from a site of $`X`$ to a neighboring site, unoccupied in $`X`$. Thus, $`X`$ and $`Y`$ have $`N-1`$ common sites and differ on an edge of $`\mathrm{\Lambda }`$. If $`VG`$ and $`EG`$ denote, respectively, the set of vertices and edges of $`G`$ then
$$|VG|=\binom{|\Lambda |}{N},\qquad |EG|=|E\Lambda |\binom{|\Lambda |-2}{N-1}=\frac{1}{2}k|\Lambda |\binom{|\Lambda |-2}{N-1}.$$
(13)
The boundary $`\partial X`$ of $`X`$ can be given a new interpretation as the set of neighbors of $`X`$ in $`G`$. So if $`X`$ and $`Y`$ form an edge then $`Y\in \partial X`$ and $`X\in \partial Y`$. The action of $`H`$ on $`G`$ is that of the usual lattice Laplacian with the exception that there is no subtracted diagonal term. The matrix of $`H`$ in the basis (4) is $`-A`$, where $`A=A(G)`$ is the adjacency matrix of $`G`$, that is, $`A_{XY}=1`$ if $`X`$ and $`Y`$ are neighbors and zero otherwise. We are interested in the largest eigenvalue of $`A`$, $`\lambda _1=|E_0|`$, and in the corresponding eigenvector, $`a=(a_X)`$. $`G`$ is connected if $`\Lambda `$ is connected (that we suppose), therefore $`A`$ is irreducible (ergodic) and the Perron-Frobenius theorem applies: $`\lambda _1`$ is nondegenerate and largest in absolute value, and $`a_X>0`$ for all $`X`$. We note that $`G`$ is bipartite if and only if $`\Lambda `$ is bipartite, and hence $`-\lambda _1`$ is an eigenvalue of $`A`$ if and only if $`\Lambda `$ is bipartite.
We call $`n_{\mathrm{min}}`$, resp., $`n_{\mathrm{max}}`$ the minimum, resp., maximum of $`|X|`$ among the $`N`$-point configurations. Clearly, $`n_{\mathrm{min}}=O(N^{(d-1)/d})`$ if $`𝐋`$ is a $`d`$-dimensional lattice, and ($`\rho \le \frac{1}{2}`$)
$$n_{\mathrm{max}}\le kN=k\rho |\Lambda |.$$
(14)
We have the trivial inequalities
$$\overline{|X|}\le \lambda _1\le n_{\mathrm{max}}.$$
(15)
The first is the variational bound (8): Setting $`v_X\equiv 1`$ we find
$$\frac{|\langle \psi _v|H|\psi _v\rangle |}{\langle \psi _v|\psi _v\rangle }=\frac{\sum _X\sum _{Y\in \partial X}1}{\sum _X1}=\overline{|X|}$$
(16)
which is the same as
$$\frac{(v,Av)}{(v,v)}=\frac{1}{|VG|}\sum _{X,Y:\{X,Y\}\in EG}1=\frac{2|EG|}{|VG|}.$$
(17)
The upper bound in (15) follows from the eigenvalue equation: Let $`X_0`$ be a configuration on which $`a_X`$ reaches its maximum, then
$$\lambda _1a_{X_0}=\sum _{Y\in \partial X_0}a_Y\le |X_0|\,a_{X_0}.$$
(18)
We note that for $`\rho =\frac{1}{2}`$ there is a better upper bound, $`\lambda _1\le \frac{1}{4}(k+1)|\Lambda |`$ (, Theorem C.1).
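Both inequalities in (15) are easy to verify by brute force on a small system. The following minimal sketch (ours, purely illustrative) builds the power graph $`G`$ for $`N=4`$ hard-core bosons on an open $`3\times 3`$ square lattice, diagonalizes its adjacency matrix, and compares $`\lambda _1`$ with the mean and the maximum of the boundary length; the open lattice is not $`k`$-regular, but both bounds hold for an arbitrary connected $`\Lambda `$.

```python
import itertools
import numpy as np

# Open 3x3 square lattice: sites 0..8, nearest-neighbor edges.
L = 3
idx = {(i, j): i * L + j for i in range(L) for j in range(L)}
edges = [(idx[(i, j)], idx[(i + 1, j)]) for i in range(L - 1) for j in range(L)]
edges += [(idx[(i, j)], idx[(i, j + 1)]) for i in range(L) for j in range(L - 1)]

N = 4  # number of hard-core bosons
configs = [frozenset(c) for c in itertools.combinations(range(L * L), N)]
pos = {c: n for n, c in enumerate(configs)}

def blen(X):
    # boundary length |X|: lattice edges joining an occupied and an empty site
    return sum((a in X) != (b in X) for a, b in edges)

# Adjacency matrix of the power graph G: X ~ Y iff Y is X with one particle hopped
A = np.zeros((len(configs), len(configs)))
for X in configs:
    for a, b in edges:
        if (a in X) != (b in X):
            Y = X ^ {a, b}          # move the particle across this lattice edge
            A[pos[X], pos[Y]] = 1.0

lam1 = np.linalg.eigvalsh(A)[-1]
b = [blen(X) for X in configs]
print(np.mean(b), "<=", lam1, "<=", max(b))   # Eq. (15)
```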
In Section 4 we shall see that $`n_{\mathrm{max}}`$ plays a role in optimizing the lower bound to $`\lambda _1`$. It is therefore important to know $`n_{\mathrm{max}}`$ exactly. In (14) there is equality if $`\rho `$ is small enough. In particular, $`n_{\mathrm{max}}=k\rho |\Lambda |`$ for all $`\rho \le \frac{1}{2}`$ on bipartite lattices, and it is an easy graphical exercise to see that equality holds for $`\rho \le \frac{1}{3}`$ on the triangular ($`k=6`$) and on the Kagomé ($`k=4`$) lattices. Moreover, for both lattices $`n_{\mathrm{max}}`$ is constant between the densities $`\frac{1}{3}`$ and $`\frac{1}{2}`$: $`2|\Lambda |`$ for the triangular and $`\frac{4}{3}|\Lambda |`$ for the Kagomé lattice. This can be seen from the following argument. In general, $`n_{\mathrm{max}}`$ is the ground state energy of the antiferromagnetic Ising model under the restriction that the magnetization is fixed, $`\sum_{x\in \Lambda }\sigma _x=(1-2\rho )|\Lambda |`$. However, we do not need to deal with the restriction. In both cases the (unrestricted) ground state is known to be highly degenerate. Among the exponentially large number of ground state configurations there are nonmagnetized ones, corresponding to $`\rho =\frac{1}{2}`$, others with concentration of down-spins $`\rho =\frac{1}{3}`$, and between these two limits $`\rho `$ can vary by steps of $`\frac{1}{|\Lambda |}`$. The rule is to flip zero-energy spins one by one. The common energy of all these configurations is easy to compute from the fact that in each triangle there is precisely one unsatisfied bond. This fixes the value of $`n_{\mathrm{max}}`$ as given above.
The variational bound (5) reads $`\lambda _1\ge B(v)`$ where
$$B(v)\equiv \frac{(v,Av)}{(v,v)}=\sum _Xv_X\sum _{Y\in \partial X}v_Y\Big/\sum _Xv_X^2.$$
(19)
In the estimations below a crucial role is played by the inequality
$$\bigl||X|-|Y|\bigr|\le 2(k-1)\qquad \text{if}\quad Y\in \partial X.$$
(20)
When passing from $`X`$ to $`Y`$ a neighboring particle-hole pair is interchanged. For both the particle and the hole the number of neighbors of the opposite kind can change by at most $`k-1`$, whence (20) follows. If $`v_X=v(|X|)`$ and $`v(n)`$ is a nondecreasing sequence then
$$B(v)\ge \frac{\sum _{n=n_{\mathrm{min}}}^{n_{\mathrm{max}}}n|\Omega _n|\,v(n-2k+2)v(n)}{\sum _{n=n_{\mathrm{min}}}^{n_{\mathrm{max}}}|\Omega _n|\,v(n)^2}$$
(21)
where $`\mathrm{\Omega }_n`$ denotes the set of configurations with boundary length $`n`$.
Above it is understood that $`v(n)=v(n_{\mathrm{min}})`$ if $`n<n_{\mathrm{min}}`$. We have computed bounds given by the right member of (21), using step-functions and functions with an exponential or a faster increase. In all cases we have found no improvement with respect to the trivial bound if $`\rho `$ was in a neighborhood of $`\frac{1}{2}`$. Apparently, we have lost too much in the inequality (21).
There is, however, a way to compute $`B(v)`$ in leading order ($`|\Lambda |\to \infty `$) by making a further hypothesis on some details of the large deviation principle (11). Let $`\nu _i(X)`$ be the number of those neighbors of $`X`$ having a boundary length $`|X|+i`$. (So $`\sum_i\nu _i(X)=|X|`$.) For $`v_X=v(|X|)`$ we have
$$B(v)=\frac{\sum _nv(n)\sum _{i=-2k+2}^{2k-2}v(n+i)\sum _{X\in \Omega _n}\nu _i(X)}{\sum _nv(n)^2|\Omega _n|}.$$
(22)
Equation (11) is equivalent to
$$|\Omega _n|/|\Omega _m|\approx \mathrm{exp}\{-[(n-M)^2-(m-M)^2]/2D^2\}.$$
(23)
An analogous statement for the ratio of $`\nu _i(X)`$ and $`\nu _j(X)`$ is certainly wrong for all $`X`$ separately but may be correct for the ratio of their sums over $`\mathrm{\Omega }_n`$. In this hope we formulate the hypothesis that
$$\frac{\sum _{X\in \Omega _n}\nu _i(X)}{\sum _{X\in \Omega _n}\nu _j(X)}\approx \frac{e^{-\frac{(n+i-M)^2}{2D^2}}}{e^{-\frac{(n+j-M)^2}{2D^2}}}=e^{-\frac{n-M}{D^2}(i-j)},\qquad -2k+2\le i,j\le 2k-2.$$
(24)
The third member of (24) is obtained by dropping a term of order $`|\Lambda |^{-1}`$. With (11) or (23) and (24) Eq. (22) reads
$`B(v)`$ $`=`$ $`\left({\displaystyle \sum _n}e^{-\frac{(n-M)^2}{2D^2}}v(n)^2\right)^{-1}{\displaystyle \sum _n}ne^{-\frac{(n-M)^2}{2D^2}}v(n)v_n`$ (25)
$`v_n`$ $`=`$ $`\left({\displaystyle \sum _{j=-2k+2}^{2k-2}}e^{\frac{n-M}{D^2}j}\right)^{-1}{\displaystyle \sum _{i=-2k+2}^{2k-2}}e^{\frac{n-M}{D^2}i}v(n-i).`$ (26)
This is the starting point of the estimates of Section 4.
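The formulas (25)–(26) can also be evaluated directly at finite $`|\Lambda |`$, which gives a useful check of the asymptotic manipulations of Section 4. The following minimal sketch (ours, written in log space to avoid overflow, and borrowing Eq. (27) of the next section for $`D^2`$) does this for an exponential trial function:

```python
import numpy as np
from scipy.special import logsumexp

def B_over_M(k=4, rho=0.5, x=0.023, volume=50000):
    # Direct evaluation of Eqs. (25)-(26) for v(n) = exp(x*n), in log space.
    M = k * rho * (1 - rho) * volume
    D2 = (k - 2 * (2 * k - 1) * rho * (1 - rho)) * M      # Eq. (27)
    n = np.arange(1.0, k * rho * volume + 1.0)            # boundary lengths
    i = np.arange(-2.0 * k + 2.0, 2.0 * k - 1.0)          # allowed jumps of |X|
    q = (n - M)[:, None] / D2
    # v_n of Eq. (26), with v(n-i) = exp(x*(n-i)):
    log_vn = x * n + logsumexp((q - x) * i, axis=1) - logsumexp(q * i, axis=1)
    log_g = -(n - M) ** 2 / (2 * D2)                      # Gaussian weight, Eq. (11)
    num = logsumexp(log_g + x * n + np.log(n) + log_vn)
    den = logsumexp(log_g + 2 * x * n)
    return float(np.exp(num - den)) / M

print(B_over_M())   # ~1.012, i.e. G_{k=4, rho=0.5}(0.023) of Eq. (49) and Table 2
```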
## 3 Statistics of the boundary lengths
In this section we show that the mean square deviation of $`|X|`$ is given by
$$D^2=[k-2(2k-1)\rho (1-\rho )]\,k\rho (1-\rho )|\Lambda |=[k-2(2k-1)\rho (1-\rho )]M.$$
(27)
We have found this expression first for $`d`$-dimensional hypercubic lattices, and checked it later to hold also for the triangular, honeycomb and Kagomé lattices. This is somewhat surprising because our derivation below needs knowledge of the rather different local neighborhoods up to next-nearest neighbors. The only common feature of all these lattices seems to be that all sites are symmetry-related and thus equivalent. Therefore, it should be possible to prove Eq. (27) on the basis of vertex-transitivity alone.
Equation (27) is obtained by using the grand-canonical probabilities $`P_{\Lambda ,\rho }`$, i.e., by filling the sites of $`\Lambda `$ independently and with equal probability $`\rho `$. We expect smaller order corrections to appear if the canonical distribution is used. So in this section $`X`$ is a random subset of $`\Lambda `$ whose probability to be selected is $`\rho ^{|X|}(1-\rho )^{|\Lambda |-|X|}`$, and $`n_x=n_x(X)`$ is a random variable taking the value 1 if $`x`$ is in $`X`$ and 0 otherwise. Then all $`n_x`$ are independent and take 1 with probability $`\rho `$ and 0 with probability $`1-\rho `$. We define
$$f_x=n_x\sum _{y\in \partial x}(1-n_y)$$
(28)
where $`\partial x`$ denotes the set of neighbors of $`x`$ in $`\Lambda `$. Clearly,
$$\overline{f_x}=k\rho (1-\rho ).$$
(29)
The boundary length of $`X`$ is obtained as
$$|X|=\sum _{x\in \Lambda }f_x(X).$$
(30)
Thus the mean value of (30) is
$$M=\sum _{x\in \Lambda }\overline{f_x}=k\rho (1-\rho )|\Lambda |$$
(31)
as found earlier.
Let $`d(x,y)`$ denote the graph distance of $`x`$ and $`y`$ in $`\mathrm{\Lambda }`$, i.e., the length of the shortest walk between them. Since $`f_x`$ and $`f_y`$ are independent if $`d(x,y)>2`$, we find
$$D^2=\sum _{x,y}\overline{(f_x-\overline{f_x})(f_y-\overline{f_y})}=\sum _{x,y:\,d(x,y)\le 2}r(x,y)$$
(32)
$$r(x,y)=\overline{f_xf_y}-\overline{f_x}^2.$$
(33)
The computation of the different terms is straightforward by observing that $`n_x^2=n_x`$, $`(1-n_x)^2=1-n_x`$ and $`n_x(1-n_x)=0`$. The contribution of the diagonal terms $`x=y`$ is the same for any $`k`$-regular lattice. Namely,
$$\overline{f_x^2}=k\rho (1-\rho )+k(k-1)\rho (1-\rho )^2$$
(34)
$$\sum _{x\in \Lambda }r(x,x)=|\Lambda |\,r(x,x)=[k-(2k-1)\rho +k\rho ^2]M.$$
(35)
The contribution of nearest neighbor pairs depends on the number of triangles containing a given edge. If there are $`\mathrm{\ell }`$ such triangles then
$$\overline{f_xf_y}=\rho ^2[\mathrm{\ell }(1-\rho )+[(k-1)^2-\mathrm{\ell }](1-\rho )^2]$$
(36)
$$\sum _{x,y:\,d(x,y)=1}r(x,y)=k|\Lambda |\,r(x,y)=\rho [\mathrm{\ell }\rho -(2k-1)(1-\rho )]M.$$
(37)
If $`x`$ and $`y`$ are next-nearest neighbors to each other, they may have $`m`$ common nearest neighbors. Then
$`\overline{f_xf_y}`$ $`=`$ $`\rho ^2[m(1-\rho )+(k^2-m)(1-\rho )^2]`$
$`r(x,y)`$ $`=`$ $`m\rho ^3(1-\rho ).`$ (38)
In $`d`$-dimensional hypercubic lattices ($`k=2d`$) there are next-nearest neighbor pairs with $`m=1`$ and $`m=2`$. Their contribution to $`D^2`$ is
$`{\displaystyle \sum _{x,y:\,d(x,y)=2}}r(x,y)`$ $`=`$ $`k|\Lambda |\,r(x,y)_{m=1}+4{\displaystyle \binom{d}{2}}|\Lambda |\,r(x,y)_{m=2}`$ (39)
$`=`$ $`\rho ^2M+(k-2)\rho ^2M=(k-1)\rho ^2M.`$
For the triangular lattice ($`k=6`$)
$$\sum _{x,y:\,d(x,y)=2}r(x,y)=k|\Lambda |\,[r(x,y)_{m=1}+r(x,y)_{m=2}]=(k-3)\rho ^2M.$$
(40)
In the honeycomb lattice ($`k=3`$) each site has 6 next-nearest neighbors, all of the type $`m=1`$. So
$$\sum _{x,y:\,d(x,y)=2}r(x,y)=2k|\Lambda |\,r(x,y)_{m=1}=(k-1)\rho ^2M.$$
(41)
In the Kagomé lattice ($`k=4`$) there are 8 next-nearest neighbors with $`m=1`$:
$$\sum _{x,y:\,d(x,y)=2}r(x,y)=2k|\Lambda |\,r(x,y)_{m=1}=(k-2)\rho ^2M.$$
(42)
Finally, we obtain $`D^2`$ by adding (35), (37) with $`\mathrm{\ell }=0`$ and (39) for hypercubic lattices, (35), (37) with $`\mathrm{\ell }=2`$ and (40) for the triangular lattice, (35), (37) with $`\mathrm{\ell }=0`$ and (41) for the honeycomb lattice and (35), (37) with $`\mathrm{\ell }=1`$ and (42) for the Kagomé lattice. All yield (27).
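Since the derivation involves some bookkeeping, a quick Monte Carlo check of (27) is reassuring. The sketch below (ours) samples grand-canonical configurations on a periodic square lattice ($`k=4`$), exactly the setting in which (27) was derived, and compares the empirical mean and variance of $`|X|`$ with (31) and (27):

```python
import numpy as np

rng = np.random.default_rng(0)
k, L, rho, samples = 4, 64, 0.3, 2000

vals = np.empty(samples)
for s in range(samples):
    occ = rng.random((L, L)) < rho   # fill sites independently with probability rho
    # |X| = number of lattice edges with one occupied and one empty endpoint
    vals[s] = sum(int(np.sum(occ ^ np.roll(occ, 1, axis=ax))) for ax in (0, 1))

M = k * rho * (1 - rho) * L * L                       # Eq. (31), ~3441
D2 = (k - 2 * (2 * k - 1) * rho * (1 - rho)) * M      # Eq. (27), ~3647
print(vals.mean(), "vs", M)
print(vals.var(), "vs", D2)
```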
## 4 Energy estimates
By inspecting Eq. (25) it is clear that $`v(n)`$ has to be chosen in such a way that it shifts the expectation value of the Gaussian upwards. The appropriate choice can be either a rapidly – at least exponentially – increasing function or a function concentrated on values well above $`M`$. If in the numerator we had $`v(n)`$ also at the place of $`v_n`$, $`B(v)=n_{\mathrm{max}}`$ could be reached. However, typically $`v_n/v(n)<1`$ and decreases as we modify $`v`$ by putting increasing weights to larger boundary lengths. This limits the maximum of $`B(v)`$. We repeat here the DLS-bound , mentioned in Section 2,
$$\lambda _1\le M\left(1+\frac{1}{k}\right)\qquad (\rho =\frac{1}{2})$$
(43)
and the trivial upper bound (15),
$$\lambda _1\le M\frac{1}{1-\rho }.$$
(44)
The right members of these inequalities are upper bounds to $`B(v)`$ as well.
Step functions
Let $`v(n)=1`$ if $`n_1\le n\le n_2`$ and 0 otherwise. Suppose that $`n_1-M=m>c_1|\Lambda |`$ and $`n_2-n_1>c_2\sqrt{|\Lambda |}`$ where $`c_1`$ and $`c_2`$ are positive constants. We have
$$B(v)=\left(\sum _{n=n_1}^{n_2}e^{-\frac{(n-M)^2}{2D^2}}\right)^{-1}\sum _{n=n_1}^{n_2}e^{-\frac{(n-M)^2}{2D^2}}\left[n\frac{\sum _ie^{\frac{n-M}{D^2}i}v(n-i)}{\sum _ie^{\frac{n-M}{D^2}i}}\right].$$
(45)
Because of the lower cutoff, the Gaussian is sharply concentrated on $`n_1`$. Except for $`v(n-i)`$, we can therefore replace $`n`$ by $`n_1`$ in the square bracket. Also, with the above choice of $`n_2`$, the dependence on $`n_2`$ is negligible. We find after some manipulations that apart from smaller order corrections
$$B(v)=n_1\left[1-\left(\sum _{i=-2k+2}^{2k-2}e^{\frac{m}{D^2}i}\right)^{-1}\sum _{i=1}^{2k-2}e^{\frac{m}{D^2}i}\frac{\sum _{n=n_1}^{n_1+i-1}e^{-\frac{(n-M)^2}{2D^2}}}{\sum _{n\ge n_1}e^{-\frac{(n-M)^2}{2D^2}}}\right].$$
(46)
We simplify the fraction with $`e^{-m^2/2D^2}`$ and evaluate it to find $`1-e^{-mi/D^2}`$ plus a correction which vanishes as $`|\Lambda |\to \infty `$. Inserting this into Eq. (46) we obtain
$`B(v)`$ $`=`$ $`M\left(1+{\displaystyle \frac{m}{M}}\right){\displaystyle \frac{2k-2+\sum _{i=0}^{2k-2}e^{-\frac{m}{D^2}i}}{\sum _{i=-2k+2}^{2k-2}e^{\frac{m}{D^2}i}}}`$ (47)
$`=`$ $`M\left\{1+\left[{\displaystyle \frac{D^2}{M}}-{\displaystyle \frac{(k-1)(2k-1)}{4k-3}}\right]{\displaystyle \frac{m}{D^2}}+O\left(\left({\displaystyle \frac{m}{D^2}}\right)^2\right)\right\}.`$
From Eq. (27) we see that $`D^2/M`$ tends to $`k`$ as $`\rho `$ goes to zero. Hence, the coefficient of $`m/D^2`$ is positive if $`\rho `$ is small enough and we have the best (improved) bound for $`n_1`$ close to $`n_{\mathrm{max}}`$. Then $`m/D^2\approx \rho /k`$ and we are not in conflict with (44). On the other hand, as $`\rho `$ goes to $`\frac{1}{2}`$, $`D^2/M`$ tends to $`\frac{1}{2}`$ and the coefficient of $`m/D^2`$ in (47) becomes negative. Thus, for any $`k\ge 2`$ there is a neighborhood of $`\rho =\frac{1}{2}`$ in which the maximum of $`B(v)`$ is reached for $`m=0`$, that is, we get no improved bound.
Exponential trial functions
Let $`v(n)=e^{xn}`$ with an $`x>0`$. Suppose first $`x<x_{\mathrm{max}}`$ where
$$x_{\mathrm{max}}=\frac{n_{\mathrm{max}}-M}{2D^2}.$$
(48)
Plugging $`v`$ into Eq. (25), due to the factor $`e^{2xn}`$ the expectation value of the Gaussian is shifted from $`M`$ to $`M+2xD^2<n_{\mathrm{max}}`$ both in the numerator and in the denominator. The new expectation value can replace $`n`$ in its other occurrences in the numerator. Apart from smaller order corrections we get
$$B(v)=M\left(1+2xD^2/M\right)F_k(x)\equiv MG_{k,\rho }(x)\qquad (x<x_{\mathrm{max}}),$$
(49)
$$F_k(x)=\frac{\sum _{i=-2k+2}^{2k-2}e^{xi}}{\sum _{i=-2k+2}^{2k-2}e^{2xi}}=\frac{\mathrm{cosh}(2k-\frac{1}{2})x-\mathrm{cosh}(2k-\frac{5}{2})x}{\mathrm{cosh}(4k-\frac{5}{2})x-\mathrm{cosh}(4k-\frac{7}{2})x}.$$
(50)
Expanding $`F_k(x)`$ around 0 we find
$$G_{k,\rho }(x)=\left(1+2xD^2/M\right)[1-(k-1)(2k-1)x^2+O(x^4)]>1$$
(51)
if $`x`$ is small enough, so we have an improved bound for any $`k`$ and $`\rho `$.
If $`x>x_{\mathrm{max}}`$, the maximum of the shifted Gaussian is outside the range of summation. Therefore the distribution is sharply concentrated on a small neighborhood of $`n_{\mathrm{max}}`$, and the computation of $`B(v)`$ can be done in analogy to the case of the step function. The result is
$$B(v)=\frac{n_{\mathrm{max}}}{\sum _{i=-2k+2}^{2k-2}e^{2x_{\mathrm{max}}i}}\left[\sum _{i=0}^{2k-2}e^{(2x_{\mathrm{max}}-x)i}+\sum _{i=1}^{2k-2}e^{-xi}\right]\qquad (x>x_{\mathrm{max}}).$$
(52)
Notice that $`B(v)`$ varies continuously with $`x`$ and both (49) and (52) yield
$$B(v)=n_{\mathrm{max}}F_k\left(x_{\mathrm{max}}\right)\qquad \text{if}\quad x=x_{\mathrm{max}}.$$
(53)
In order to obtain the best bound, $`B(v)`$ has to be maximized with respect to $`x`$. $`G_{k,\rho }(x)`$ has a unique maximum at some positive $`x`$. This can be the location of the maximum of $`B(v)`$ only when it is smaller than $`x_{\mathrm{max}}`$, and then the optimal $`B(v)`$ is computed with it from (49). Table 1 shows that this is the case for all $`k`$ if $`\rho `$ is not too close to 0. However, for small enough densities the maximum of $`G_{k,\rho }`$ is attained at an $`x`$ above $`x_{\mathrm{max}}`$. For these densities the first expression (49) increases with $`x`$ up to $`x_{\mathrm{max}}`$ while the second expression (52) decreases with $`x`$ for all densities. So the highest bound is provided by (53).
The conclusion is that for all densities we can obtain the best bound by maximizing (49) with respect to $`x`$ under the condition that $`x\le x_{\mathrm{max}}`$.
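This maximization is elementary to carry out numerically. A minimal sketch (ours), using (49)–(50) with $`D^2/M`$ from (27) and $`x_{\mathrm{max}}`$ from (48), where $`n_{\mathrm{max}}=k\rho |\Lambda |`$ is assumed so that $`|\Lambda |`$ cancels:

```python
import numpy as np

def G(k, rho, x):
    i = np.arange(-2 * k + 2, 2 * k - 1)
    F = np.exp(x * i).sum() / np.exp(2 * x * i).sum()    # F_k(x), Eq. (50)
    d2_over_m = k - 2 * (2 * k - 1) * rho * (1 - rho)    # D^2/M from Eq. (27)
    return (1 + 2 * x * d2_over_m) * F                   # G_{k,rho}(x), Eq. (49)

def best_bound(k, rho):
    d2_over_m = k - 2 * (2 * k - 1) * rho * (1 - rho)
    # x_max = (n_max - M)/(2 D^2) with n_max = k*rho*|Lambda|; |Lambda| drops out
    x_max = rho / (2 * (1 - rho) * d2_over_m)
    xs = np.linspace(1e-6, x_max, 20000)
    gs = np.array([G(k, rho, x) for x in xs])
    j = int(gs.argmax())
    return xs[j], gs[j]

for k in (2, 3, 4, 6, 8):
    x, g = best_bound(k, 0.5)
    print(k, round(x, 3), round(g, 3))
```

For $`k=4`$ and $`\rho =0.5`$ this returns $`x\approx 0.023`$ and $`G\approx 1.012`$, the corresponding entries of Tables 1 and 2 below.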
Numerical results on the best estimates are summarized in the tables below. They apply to bipartite lattices whenever $`D^2`$ is given by Eq. (27). According to Table 1, they are valid also for the Kagomé and triangular lattices, because the largest density for which the maximizing $`x`$ is $`x_{\mathrm{max}}`$ remains below $`\frac{1}{3}`$ for every $`k`$.
| | k | | | | |
| --- | --- | --- | --- | --- | --- |
| $`\rho `$ | 2 | 3 | 4 | 6 | 8 |
| 0.1 | 0.038\* | 0.026\* | 0.020\* | 0.014\* | 0.010\* |
| 0.2 | 0.120\* | 0.089\* | 0.071\* | 0.040 | 0.027 |
| 0.3 | 0.206 | 0.082 | 0.047 | 0.024 | 0.016 |
| 0.4 | 0.167 | 0.057 | 0.030 | 0.013 | 0.008 |
| 0.5 | 0.152 | 0.048 | 0.023 | 0.009 | 0.005 |
Table 1. Rates of increase, $`x`$, of the exponential trial functions, maximizing $`G_{k,\rho }`$. Numbers with an asterisk correspond to $`x_{\mathrm{max}}`$.
| | k | | | | |
| --- | --- | --- | --- | --- | --- |
| $`\rho `$ | 2 | 3 | 4 | 6 | 8 |
| 0.1 | 1.106 | 1.103 | 1.102 | 1.099 | 1.098 |
| 0.2 | 1.199 | 1.157 | 1.128 | 1.100 | 1.088 |
| 0.3 | 1.155 | 1.075 | 1.051 | 1.033 | 1.027 |
| 0.4 | 1.094 | 1.035 | 1.019 | 1.009 | 1.006 |
| 0.5 | 1.077 | 1.024 | 1.012 | 1.004 | 1.002 |
Table 2. Maximal values of $`G_{k,\rho }(x)`$, computed with the entries of Table 1.
| | k | | | | |
| --- | --- | --- | --- | --- | --- |
| $`\rho `$ | 2 | 3 | 4 | 6 | 8 |
| 0.1 | 0.199 | 0.298 | 0.397 | 0.594 | 0.791 |
| 0.2 | 0.383 | 0.555 | 0.722 | 1.056 | 1.393 |
| 0.3 | 0.485 | 0.677 | 0.882 | 1.302 | 1.725 |
| 0.4 | 0.525 | 0.745 | 0.978 | 1.453 | 1.932 |
| 0.5 | 0.538 | 0.768 | 1.012 | 1.507 | 2.005 |
Table 3. Best estimates of $`|E_0|/|\mathrm{\Lambda }|`$ obtained by multiplying $`k\rho (1\rho )`$ with the corresponding entry of Table 2.
| k | 2 | 3 | 4 | 6 | 8 |
| --- | --- | --- | --- | --- | --- |
| $`\rho `$ | 0.23 | 0.19 | 0.17 | 0.16 | 0.15 |
| $`x`$ | 0.158 | 0.080 | 0.051 | 0.031 | 0.021 |
| $`G_{k,\rho }(x)`$ | 1.206 | 1.159 | 1.143 | 1.129 | 1.123 |
Table 4. The maximum of $`G_{k,\rho }(x)`$ as a function of $`\rho `$ and $`x\in [0,x_{\mathrm{max}}]`$, together with the maximizing $`\rho `$ and $`x`$. In each case the latter is $`x_{\mathrm{max}}`$.
Gaussian trial functions
Let $`v(n)=e^{-(n-M_1)^2/4D_1^2}`$ where $`M_1>M`$ and $`D_1^2\to \infty `$ with increasing $`|\Lambda |`$, but otherwise they are free parameters. The product of the two Gaussians gives rise to a new Gaussian with mean value and variance
$$M_2=\frac{MD_1^2+M_1D^2}{D_1^2+D^2},\qquad D_2^2=\frac{D_1^2D^2}{D_1^2+D^2}.$$
(54)
A straightforward computation yields the same $`B(v)`$, Eqs. (49), (52) and (53) as for the exponential trial function with
$$x=\frac{1}{2}\frac{M_1-M}{D_1^2+D^2}.$$
(55)
So Gaussian and exponential trial functions provide the same bound, at least in leading order. Moreover, for any fixed $`x>0`$ there is a one-parameter family of Gaussians giving the same result. An interesting choice is $`D_1^2=o(|\mathrm{\Lambda }|)`$, e.g. $`D_1^2=\mathrm{const}\sqrt{|\mathrm{\Lambda }|}`$. In this case $`M_2/M_1=1`$ and $`D_2^2/D_1^2=1`$ asymptotically.
## 5 Summary and concluding remarks
We have presented variational estimates of the ground state energy of a gas of hard-core bosons on regular lattices. The wave functions we have used depended only on the size of the boundary of the $`N`$-point configurations. Therefore, the estimates could be based on a large deviation principle governing the distribution of the boundary sizes. The corresponding rate function is related to the entropy of the (ferromagnetic) Ising model on the same lattice. We have derived a formula for the mean-square deviation of the boundary sizes and applied it in a quadratic approximation of the rate function.
The best estimates we have found are of the form (49) where for given $`k`$ and $`\rho `$ the parameter $`x`$ is to be determined numerically so as to maximize $`B(v)`$. This has been illustrated in Tables 1-3. The maximum is realized either by an exponential function or by a one-parameter family of Gaussian functions.
Extension to the grand-canonical ensemble is straightforward. Adding $`-\mu \sum_{x\in \Lambda }n_x`$ to the Hamiltonian results in adding $`\mu \rho |\Lambda |`$ to $`B(v)`$. After maximizing with respect to $`x`$ we have to maximize with respect to $`\rho `$. More precisely, if $`b(\rho )`$ is the maximum of $`B(v)/|\Lambda |`$ with respect to $`x`$, first we extend $`b(\rho )`$ symmetrically to $`\rho >\frac{1}{2}`$ and then determine $`\widehat{b}(\mu )=\mathrm{max}_\rho [b(\rho )+\mu \rho ]`$ to get an estimate of the ground state energy per site in the full bosonic Fock space. It is $`\widehat{b}(\mu )`$ rather than $`b(\rho )`$ that is useful for the equivalent spin model, whose Hamiltonian now contains an external magnetic field in the $`Z`$-direction. The function $`\rho (\mu )`$ that we find in determining $`\widehat{b}(\mu )`$ is only an approximation of the true relationship which exists between the chemical potential and the density in the ground state. Table 3 suggests that $`\rho (0)=\frac{1}{2}`$. This rigorously holds true for any positive temperature; indeed, we have a much stronger ‘uniform density theorem’, as in the Hubbard model : Because of the particle-hole symmetry,
$$\frac{\mathrm{Tr}\,n_xe^{-\beta H_{\mu =0}}}{\mathrm{Tr}\,e^{-\beta H_{\mu =0}}}=\frac{\mathrm{Tr}\,(1-n_x)e^{-\beta H_{\mu =0}}}{\mathrm{Tr}\,e^{-\beta H_{\mu =0}}}=\frac{1}{2}$$
(56)
is valid in any $`\mathrm{\Lambda }`$ with free boundary condition.
In our estimates the lattice structure appears only through $`k`$, the coordination number. Therefore we obtain the same result for the Kagomé lattice as for the square lattice, and for the triangular lattice as for the simple cubic lattice. Although the energy corresponds to a nearest neighbor correlation, the details of the lattice geometry certainly influence the $`\rho `$-dependence of the exact ground state energy per site. The mark of the lattice could be recovered by using the exact Ising entropies instead of their quadratic approximants.
# Luminous supersoft X-ray emission from the recurrent nova U Scorpii
## 1 Introduction
Recurrent novae (RN) are a small and diverse subclass of cataclysmic variables, which show multiple outbursts resembling those of classical novae, though of lesser magnitude (see Webbink et al. 1987; Sekiguchi 1995). U Scorpii (U Sco) is one of the six best known members of this class. The source underwent outbursts in 1863, 1906, 1936, 1979, 1987, and most recently on 1999 Feb 25.2 (Schmeer et al. 1999). The last two inter-outburst intervals were 8 and 12 years, respectively. It is the RN with the shortest recurrence period known.
Starrfield et al. (1988) applied thermonuclear runaway (TR) theory to this nova assuming a very massive white dwarf (WD).
The estimate of the distance to U Sco in the literature varies with different assumptions. Kato (1990) obtained a distance range 3.3–8.6 kpc comparing the observed visual light curve with the theoretical one and assuming a high mass ($`1.38\mathrm{M}_{\odot }`$) WD. If the donor star is a dwarf the distance can be $`3.5\pm 1.5\mathrm{kpc}`$ (Hanes 1985), however most authors recently agree instead on a subgiant nature of the donor, as indicated by the detection of a Mg Ib absorption feature at $`\lambda `$5180 in the late 1979 outburst spectrum, consistent with a K2 III spectral type (Pritchet et al. 1977). This is in agreement with a low mass ($`\sim 1\mathrm{M}_{\odot }`$) subgiant secondary with M<sub>V</sub>=+3.8 which fills the Roche lobe at an orbital period $`\sim `$1.2 days (Portegies Zwart, private communication). From the apparent magnitude V=20.0 in the faint state and the visual extinction A<sub>V</sub>=0.6 a distance $`\sim `$13 kpc is derived, in rough agreement with d=14.8 kpc derived by Webbink et al. (1987) with the assumption of a G III subgiant, and with d$`\sim `$14 kpc estimated by Warner (1995). In any case U Sco is at a latitude 21<sup>o</sup> and for any distance d$`\gtrsim `$3 kpc it belongs to the galactic halo population. It is seen through the full galactic hydrogen column of $`1.4\times 10^{21}\mathrm{cm}^{-2}`$ (Dickey & Lockman 1990).
U Sco was observed to be an eclipsing system by Schaefer (1990). The orbital period is 1.23 days (Johnston & Kulkarni 1992; Schaefer & Ringwald 1995). $`\mathrm{m}_\mathrm{V}`$ varies from 18.5 to 20. An accretion disk is required from the modeling of the optical continuum during quiescence. The maximum visual magnitude during outburst is $`\mathrm{m}_\mathrm{V}\sim 8`$. U Sco shows the fastest visual decline of all known novae, 0.67 magnitude per day (Sekiguchi et al. 1988). Spectroscopically it shows very high ejection velocities of $`(7.5-11)\times 10^3\mathrm{km}\mathrm{s}^{-1}`$ (Williams et al. 1981; Rosino & Iijima 1988; Niedzielski et al. 1999).
Ejecta abundances have been estimated from optical and UV studies (Williams et al. 1981; Barlow et al. 1981). From the emission lines a depletion in hydrogen relative to helium with He/H$`\sim `$2 has been derived while the CNO abundance was solar with an enhanced N/C ratio. The strongest emission feature at maximum is the Heii $`\lambda 4686`$ line. Other reported lines are H$`\alpha `$, Hei, Heii, Nii, Niii, Ciii, and Civ (Zwitter et al. 1999; Bonifacio et al. 1999). Satellite lines to H$`\alpha `$, Heii and Hei have also been detected (Bonifacio et al. 1999). The estimated mass of the ejected shell for the 1979 outburst is $`\mathrm{M}_{\mathrm{shell}}\sim 10^{-7}\mathrm{M}_{\odot }`$ (Williams et al. 1981).
It has been suggested that the companion of U Sco may be somewhat evolved (and helium enriched) as the quiescent spectrum shows strong Heii emission lines (cf. Hanes 1985). Hachisu et al. (1999) propose an evolutionary scenario for this system assuming a secondary star which experienced a helium accretion phase. The WD may efficiently grow in mass towards the Chandrasekhar (CH) limit and explode as a SN Ia (Della Valle & Livio 1996). But if U Sco is in the galactic halo then the system belongs to an old stellar population and the evolution may be different. Helium enrichment as observed from U Sco may also be due to helium enriched winds from the WD (cf. Prialnik & Livio, 1995).
## 2 The BeppoSAX observation
After the optical outburst of U Sco was reported (Schmeer et al. 1999) a target of opportunity observation of U Sco was performed with the BeppoSAX X-ray satellite. According to the calculations of Kato (1996), supersoft (SSS) X-ray emission is predicted to be observed $`\sim `$10–60 days after the optical outburst. The 50 ks exposure observation was performed during 1999 March 16.214 – 17.425, 19–20 days after the optical outburst. Here we report the first results of an analysis of the mean X-ray spectrum observed during this observation.
The scientific payload of BeppoSAX (see Boella et al. 1997a) comprises four coaligned Narrow Field Instruments including the LECS (Parmar et al. 1997) and MECS (Boella et al. 1997b). U Sco was detected with mean LECS and MECS net count rates, after background subtraction, of $`(5.67\pm 0.23)\times 10^{-2}\mathrm{s}^{-1}`$ and $`(1.35\pm 0.38)\times 10^{-3}\mathrm{s}^{-1}`$ respectively. The source was not detected in the high-energy non-imaging instruments. The X-ray flux varies by a factor of $`\sim `$1.5 during the observation possibly due to orbital variations or a rise in flux.
The combined LECS and MECS spectrum was first fit with a simple blackbody spectral model. The fit is unacceptable with a $`\chi ^2`$ of 72 for 10 degrees of freedom (dof). We then added absorption edges due to highly ionized species of N and O expected in the hot atmosphere of a steadily nuclear burning WD. The edge energies were fixed at 0.55 keV, 0.67 keV, 0.74 keV, and 0.87 keV, corresponding to the Lyman edges of N vi, N vii, O vii, and O viii, respectively. Only the Nvi, Nvii, and Oviii edges were detected at high significance with absorption depths of 4.3, 2.4, and 5.6, respectively. The Ovii edge is not detected and the 90% confidence upper limit to its absorption depth is $`<`$1.6. The $`\chi ^2`$ is 12 for 6 dof. However, other interpretations of the spectral shape above $`\sim `$0.8 keV appear to be more likely (see below and the discussion). We independently fitted the edge energies of the N vi and N vii features and derived 90% confidence ranges of 0.524–0.555 keV and 0.630–0.669 keV, respectively, and an absorbing hydrogen column density $`(1.8-2.6)\times 10^{21}\mathrm{atom}\mathrm{cm}^{-2}`$.
WD atmosphere spectra have been shown to deviate strongly from simple blackbodies (e.g., Hartmann & Heise 1997). The use of sophisticated WD atmosphere model spectra is required. We applied a non-LTE WD atmosphere spectral model grid assuming a very massive (log(g)=9.5) WD with cosmic CNO abundances (see e.g., Hartmann et al. 1999). The fit was unacceptable at energies $`\gtrsim `$0.8 keV. We added an optically thin thermal component (Raymond & Smith 1977), hereafter RS, to the model. Such a component may be due to a strong wind from the WD atmosphere and has been observed in the classical nova Cyg 1992 (Balman et al. 1998). The fit was still unacceptable with a $`\chi ^2`$ of 23.4 for 8 dof. We also fitted the observed spectrum with two optically thin RS components. We found that the fit was not acceptable with a $`\chi ^2`$ of 55.8 for 8 dof.
When the CNO cycle is active then the N/C and O/C ratios are strongly modified. A strong enrichment of N with respect to C is expected as N is involved in the slowest reaction. We calculated log g=9.5 non-LTE WD atmosphere spectral models with He and CNO (number) abundances ($`\mathrm{H}=0.5`$, $`\mathrm{C}=9\times 10^{-4}`$, $`\mathrm{N}=6\times 10^{-3}`$, $`\mathrm{O}=3\times 10^{-3}`$ with respect to helium) according to values determined from optical/UV studies of the nova ejecta of U Sco (Williams et al. 1981). In addition, we applied a hot optically thin thermal component. We found that with these assumptions the fit was acceptable with a $`\chi ^2`$ of 10.7 for 8 dof. The best-fit atmospheric temperature is $`(8.53-8.85)\times 10^5\mathrm{K}`$ (90% confidence), the atmospheric radius is $`(1.9-5.5)\times 10^7\mathrm{cm}(\mathrm{d}/\mathrm{kpc})`$, and the bolometric luminosity $`(0.16-1.2)\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{d}/\mathrm{kpc})^2`$. For the optically thin component we derive a temperature, kT, of 0.22–0.52 keV and an emission measure, EM, of $`(0.42-3.2)\times 10^{55}\mathrm{cm}^{-3}(\mathrm{d}/\mathrm{kpc})^2`$ assuming that He is enriched and N/C enhanced. The absorbing hydrogen column density is $`(3.1-4.8)\times 10^{21}\mathrm{atom}\mathrm{cm}^{-2}`$. This value is larger than the galactic absorption in the direction of U Sco of $`1.4\times 10^{21}\mathrm{cm}^{-2}`$ (see Introduction) indicating a substantial intrinsic absorption.
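As a simple consistency check (ours, not part of the published fit), the fitted temperature and radius ranges reproduce the quoted bolometric luminosity through the blackbody relation $`L=4\pi R^2\sigma T^4`$ when the extreme values are paired:

```python
import math

sigma_SB = 5.6704e-5                               # erg cm^-2 s^-1 K^-4
for T, R in [(8.53e5, 1.9e7), (8.85e5, 5.5e7)]:    # K, cm (d/kpc)
    L = 4 * math.pi * R**2 * sigma_SB * T**4       # erg s^-1 (d/kpc)^2
    print(f"T = {T:.2e} K, R = {R:.1e} cm  ->  L = {L:.2e} erg/s (d/kpc)^2")
# ~1.4e35 and ~1.3e36, spanning the quoted (0.16-1.2)e36 range
```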
## 3 Discussion
### 3.1 Supersoft X-ray emission
U Sco belongs now to those novae for which SSS X-ray emission has been discovered (cf. Orio & Greiner 1999).
The TR theory predicts two processes which can generate soft X-rays: Shock acceleration in the nova ejecta and steady nuclear burning. For Nova Cyg 1992 an optically thin component due to shocks has been detected. In addition an optically thick SSS X-ray component has been observed $`\sim `$60 days after the outburst and for $`\sim `$600 days (Krautter et al. 1996; Balman et al. 1998).
According to the calculations of Kato (1996) performed for U Sco, and assuming a massive ($`\mathrm{M}_{\mathrm{WD}}`$=$`1.377\mathrm{M}_{\odot }`$) WD a SSS component is predicted to be observed from $`\sim `$10 days after the outburst. In the case of the H-rich model (He/H=0.1) the supersoft component is expected to rise till $`\sim `$50 days after the outburst to a maximum luminosity of $`3\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{d}/\mathrm{kpc})^2`$. In the He-rich model (He/H=2), a maximum luminosity for the SSS component of $`(0.8-1)\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{d}/\mathrm{kpc})^2`$ is reached $`\sim `$20 days after the optical outburst. Using the He enriched fit with the N/C ratio enhanced (Table 1), the observed bolometric luminosity $`\sim `$19–20 days after the optical outburst is $`(0.16-1.2)\times 10^{36}\mathrm{erg}\mathrm{s}^{-1}(\mathrm{d}/\mathrm{kpc})^2`$. Assuming a distance $`\sim `$14 kpc (see Introduction) a bolometric luminosity of $`(0.3-2.4)\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$ is derived. The luminosity is in agreement with the bolometric luminosity of $`\sim 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$ predicted for novae (Mc Donald et al. 1985). The temperature of $`(8.7-8.9)\times 10^5\mathrm{K}`$ and the luminosity of $`2.4\times 10^{38}\mathrm{erg}\mathrm{s}^{-1}`$ derived from the X-ray spectral fit requires a very massive $`\mathrm{M}_{\mathrm{WD}}>1.2\mathrm{M}_{\odot }`$ WD consistent with an almost CH mass WD (e.g. Kato 1997).
### 3.2 Spectrally hard component
In addition to the optically thick SSS X-ray model spectrum, the spectral fits require a spectrally hard component. A similar component in addition to a SSS component was used by Balman et al. (1998) for X-ray spectral fits to the classical nova Cyg 1992. Using an optically thin thermal model we derive a temperature of $`0.22-0.52\mathrm{keV}`$, an emission measure $`\mathrm{EM}=(0.4-3.2)\times 10^{55}\mathrm{cm}^{-3}(\mathrm{d}/\mathrm{kpc})^2`$ if He is enriched and the ratio N/C enhanced.
If we assume a terminal wind velocity of $`\mathrm{v}_{\mathrm{\infty }}\sim 10^8\mathrm{cm}\mathrm{s}^{-1}`$ the wind mass loss rate for a He/H=2 mixture can be estimated from $`\dot{\mathrm{M}}\approx 14.5\,\mathrm{m}_\mathrm{H}\mathrm{v}_{\mathrm{\infty }}\sqrt{\mathrm{EM}\,\mathrm{r}}`$. Here $`r`$ is the typical radius of the emitting region. We assume $`\mathrm{r}=10^{11}\mathrm{cm}`$, the radius of the Roche-lobe, and use the result of the spectral fit assuming He is enriched. We then obtain a wind mass loss rate of $`\dot{\mathrm{M}}=(2.4-6.9)\times 10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}(\mathrm{d}/\mathrm{kpc})`$. For a distance to U Sco of 14 kpc, we derive a wind mass loss rate of $`(3.4-9.7)\times 10^{-7}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. A near-CH mass WD at $`\mathrm{T}=9\times 10^5\mathrm{K}`$ experiences an envelope mass loss of $`\sim 1.2\times 10^{-6}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ due to both steady nuclear burning and a wind from the WD. The steady nuclear burning mass loss can be estimated to be $`\sim 6\times 10^{-7}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ (Hachisu et al. 1999). The mass loss due to the wind is $`\sim 6\times 10^{-7}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. This value is in agreement with the range we derived above. For a duration of the steady nuclear burning phase plus wind mass loss phase of $`\sim `$0.1 year (Kato 1996) we derive a mass loss from the WD envelope of $`\sim 1.2\times 10^{-7}\mathrm{M}_{\odot }`$ of which $`\sim 6\times 10^{-8}\mathrm{M}_{\odot }`$ is due to the wind. In addition the predicted post-outburst envelope mass is $`\sim 8\times 10^{-8}\mathrm{M}_{\odot }`$ (Hachisu et al. 1999). This would mean that 70% of the envelope mass has remained on the WD allowing it to increase in mass. Williams et al. (1981) derive from the UV lines (for 14 kpc and $`\mathrm{r}>6.5\times 10^{11}\mathrm{cm}`$) a wind mass loss rate $`\dot{\mathrm{M}}>5.4\times 10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ which differs significantly from our value, although it is subject to many uncertainties, and differences between outbursts cannot be accounted for.
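The numbers in this estimate are straightforward to reproduce; a minimal sketch (ours), with the assumed $`\mathrm{v}_{\mathrm{\infty }}`$ and $`r`$ taken from the text:

```python
import math

m_H, M_sun, yr = 1.673e-24, 1.989e33, 3.156e7   # g, g, s
v_inf = 1e8                                     # cm/s, assumed terminal velocity
r = 1e11                                        # cm, assumed Roche-lobe radius
for EM in (0.42e55, 3.2e55):                    # cm^-3 (d/kpc)^2, fitted range
    Mdot = 14.5 * m_H * v_inf * math.sqrt(EM * r)    # g/s (d/kpc)
    print(f"EM = {EM:.2e}: Mdot = {Mdot * yr / M_sun:.1e} Msun/yr (d/kpc)")
# -> 2.5e-08 and 6.9e-08, the quoted range
```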
If the helium fraction is indeed large ($`\mathrm{He}/\mathrm{H}\sim 2`$) only part of the accreted He/H envelope might have been ejected and steady nuclear burning proceeded for at least one month. This result is consistent with the analytical model of Kahabka (1995). Assuming an X-ray on-time of 0.1 years and a recurrence period of 10 years, we constrain the WD mass to $`\mathrm{M}_{\mathrm{WD}}\approx 1.36\mathrm{M}_{\odot }`$. U Sco and RN in general are therefore probably SN Ia progenitors (cf. Li & van den Heuvel 1997; for a recent review on SN Ia, see Livio 1999).
## 4 Conclusions
For the first time a RN (U Sco) has been detected in a SSS X-ray phase with the BeppoSAX X-ray satellite $`\sim `$20 days after an optical outburst. This observation confirms the theoretical predictions that RN have a SSS X-ray phase (Yungelson et al. 1996; Kato 1996). He enhanced non-LTE WD atmosphere model spectra, with a high N/C ratio (using abundances derived from the ejecta), are required to fit the BeppoSAX X-ray spectrum of U Sco. This is evidence that the outburst of U Sco was triggered by a TR and that the CNO cycle was active. From the temperature of the optically thick SSS component of $`\sim 9\times 10^5\mathrm{K}`$, we constrain the WD to be very massive ($`>1.2\mathrm{M}_{\odot }`$) and consistent with being close to the CH limit.
Besides the SSS emission we observe an additional optically thin component. We explain this hard component as emission from a strong shocked wind from the WD with a mass loss rate of $`\dot{\mathrm{M}}=(2.4-6.9)\times 10^{-8}\mathrm{M}_{\odot }\mathrm{yr}^{-1}(\mathrm{d}/\mathrm{kpc})`$. Such a component is consistent with the theoretical predictions for a WD with a mass just below the CH mass (Hachisu et al. 1999). According to their calculations U Sco emerged from an optically thick wind phase when the BeppoSAX observation was performed, and this phase cannot last longer than 20 days for a mass just below the CH limit.
U Sco, and therefore RN in general, can be considered to be progenitors of SN Ia. The condition that the WD can grow in mass is achieved if the accreted and accumulated material is enriched in He and not all the envelope was ejected. This condition may occur if the donor star has experienced a previous helium accretion phase, if it is somewhat evolved (a subgiant), or if helium rich material has been mixed into the accreted envelope.
###### Acknowledgements.
We thank Luigi Piro for granting the BeppoSAX TOO and the BeppoSAX team including Milvia Capalbi for the very fast production and delivery of the final observation tape. We thank L. Yungelson for discussions and the referee M. Orio for critical comments. This research was supported in part by the Netherlands Organization for Scientific Research (NWO) through Spinoza Grant 08-0 to E.P.J. van den Heuvel. I. Negueruela is an ESA External Research Fellow.
# Antiferromagnetism in hydrated 123 compounds
## I Conclusions
Our copper NQR/ZFNMR studies of the reaction of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> compound with water vapour give straightforward evidence that “empty” CuO chains play the role of easy water insertion channels. The most ordered regions of the crystallites react most easily. The water insertion reaction proceeds very slowly at room temperature, but in 6 years in air, water reaches even samples packed in paraffin . At 100–200<sup>o</sup>C the reaction proceeds quickly (in a few days). The final product of the reaction is a non-superconducting antiferromagnet characterized by at least two types of magnetically ordered copper ions with ZFNMR spectra in the frequency ranges of 46–96 and 96–135 MHz. This antiferromagnetic signal, indicating decomposition of the superconductor, was detected even in samples packed in Stycast and left at room temperature (normally deemed a safe storage procedure) for a few years.
## II Acknowledgments
This study was supported in part by the Russian Foundation for Basic Research, under Project 98-02-17687, by INTAS, under Grant 96-0393, and by the Russian Scientific Council on Superconductivity, under Project 98014.
## III Figure captions
Fig.1. Copper NQR spectra for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> with different water uptake $`z`$ (open circles, solid line, solid circles, dotted line and open triangles correspond to $`z=`$ 0; 0.14; 0.24; 0.55 and 1.2, respectively): a) as-taken spectra; b) slow-relaxing part of the spectra (Cu(1) spectra); c) fast-relaxing part. For details see text.
Fig.2. Copper ZFNMR spectra for YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.5</sub> with different water uptake $`z`$ (open and solid triangles and open and solid circles correspond to $`z=`$0.14; 0.24; 0.55 and 1.2, respectively): a) as-taken spectra; b) spectra normalized by water uptake; the spectrum for $`z=1.2`$ is normalized by 0.6.
Fig.3. The dependence of the superconducting volume fraction (solid circles), copper NQR spectra intensity (solid squares), NQR intensity of slow-relaxing Cu(1) nuclei (open squares) and ZFNMR intensity (triangles) on water uptake $`z`$. Straight lines represent the functions $`y=z`$ (solid line), $`y=1-z`$ (dashed line) and $`y=1-2z`$ (dotted line).
# The local to global $`H_0`$ ratio and the SNe Ia results
## 1 Introduction
Recent exciting results from the searches for high-redshift type Ia supernovæ (SNe Ia) have been interpreted as suggesting that there is indeed a cosmological constant and that the expansion of the Universe is accelerating (Perlmutter et al. 1999 and Riess et al. 1998).
The results on the cosmology are robust in that they are independent of the value of the Hubble parameter, $`H_0`$ (as they use redshift, $`z`$), and also of the local calibration of the SNe Ia luminosities (as the distance modulus is used). They do rest upon an underlying assumption that the local and global values of $`H_0`$ are the same (so that the distance-redshift relationship does not change).
Here we examine how different the local value of $`H_0`$ must be from the global value in order to significantly change the cosmology derived from the fitting of high-redshift SNe Ia data to the Hubble diagram. A similar approach was taken by Kim et al. (1997) to fit data from the first 7 high-redshift SNe Ia between $`0.35<z<0.65`$; in this paper we extend that work with a much larger number of supernovæ in the range $`0.3<z<0.85`$ . (It should also be noted that the data used by Kim et al. favoured an $`\mathrm{\Omega }_M=1`$ cosmology at the time.)
## 2 Local and global differences in $`H_0`$
The idea that the local Universe is atypical has been suggested by numerous authors (see Dekel 1994 for a review).
Hudson et al. (1999) report evidence for a large scale bulk flow on scales possibly greater than 14000 km s<sup>-1</sup>. Plionis & Kolokotronis (1998) show that there may be contributions to the X-ray cluster dipole from beyond 16000 km s<sup>-1</sup>. Lauer & Postman (1994) and Scaramella (1992) suggest the possibility of even larger density inhomogeneities out to a distance of 15000 - 30000 km s<sup>-1</sup>.
Phillips & Turner (1998) see that near-IR galaxy counts out to $`z=0.10-0.23`$ are deficient which may be due to a local underdensity on such scales, implying that the local value of the Hubble parameter $`H_0`$ is up to 30 per cent higher than the global value. Tammann (1998) also sees a decrease in the value of $`H_0`$ out to 18000 km s<sup>-1</sup> of 7 per cent.
Whilst these claims are not without their detractors it is clear that a body of work exists to suggest that the local value of $`H_0`$ may not be the same as the global value and could be higher. See Turner, Cen & Ostriker (1992) for a detailed discussion of this topic.
## 3 Method and results
We assume that the value of $`H_0`$ on scales greater than $`z=0.1`$ is equal to the global value, and then fit open and flat cosmologies to the high-redshift SNe Ia data, shifting the zero-point of the data to find the best fit to the various cosmologies. The difference in zero-points between the local and high-redshift supernovæ is then used to calculate the difference between the local and global values of the Hubble parameter.
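The essence of the method is that a local Hubble parameter $`H_{0(L)}`$ differing from the global $`H_{0(G)}`$ shifts all high-redshift distance moduli by $`5\mathrm{log}_{10}(H_{0(L)}/H_{0(G)})`$, so the ratio required to reconcile a given cosmology with the data is approximately the ratio of its luminosity distance to that of the best-fit model. The sketch below (ours) evaluates this at a single representative redshift rather than performing the full $`\chi ^2`$ fit used in this paper:

```python
import numpy as np
from scipy.integrate import quad

def dl(z, Om, Ol):
    # luminosity distance in units of c/H0, flat or open (Ok = 1 - Om - Ol >= 0)
    Ok = 1.0 - Om - Ol
    E = lambda zz: np.sqrt(Om * (1 + zz)**3 + Ok * (1 + zz)**2 + Ol)
    chi, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    if Ok > 1e-8:
        chi = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)
    return (1 + z) * chi

z = 0.5   # representative redshift of the high-z samples
for Om, Ol, label in [(1.0, 0.0, "Omega_M=1"), (0.3, 0.0, "open Omega_M=0.3")]:
    ratio = dl(z, 0.28, 0.72) / dl(z, Om, Ol)
    print(f"{label}: H0(L)/H0(G) ~ {ratio:.2f}")
# ~1.2 and ~1.1 at z=0.5; the full fits quoted below give 1.21 and 1.07
```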
Using the corrected data for 10 high-redshift SNe Ia presented in tables 5 and 6 of Riess et al. (1998) and for 40 Supernova Cosmology Project (SCP) high-redshift SNe Ia from table 1 of Perlmutter et al. (1999) we reconstruct the Hubble diagrams. The shift in the local-to-global Hubble parameter required to fit various flat and open cosmologies are then calculated using a $`\chi ^2`$ fit, also finding the 95 per cent confidence limits.
Figure 1 shows the required local-to-global ratio of the Hubble parameter to fit the SCP SNe Ia data. The best fit when $`H_{0(L)}/H_{0(G)}=1`$ in fig. 1(a) is the well-known $`\mathrm{\Omega }_M=0.28,\mathrm{\Omega }_\mathrm{\Lambda }=0.72`$ cosmology found by Perlmutter et al. (1999).
More interestingly a variety of low $`\mathrm{\Omega }_M`$ open cosmologies are not rejected at the 95 per cent level even if $`H_{0(L)}/H_{0(G)}=1`$. In order for an $`\mathrm{\Omega }_M=0.3`$ open cosmology to be the best fit to the data then $`H_{0(L)}/H_{0(G)}=1.07`$ is required.
The recovery of critical mass density cosmologies is more difficult. They are rejected at greater than the 99 per cent level if $`H_{0(L)}/H_{0(G)}=1`$ and need a large ratio $`H_{0(L)}/H_{0(G)}=1.21`$.
The data of Riess et al. (1998) were also examined and the fits are very similar to those presented in fig. 1 although they are less significant due to the much lower number of data points. Generally the Riess et al. data are slightly less consistent with a low $`\mathrm{\Omega }_M`$ open cosmology and $`\mathrm{\Omega }_M=1`$ critical cosmologies are ruled out at a higher confidence level, requiring a 1.2 to 1.3 $`H_{0(L)}/H_{0(G)}`$ ratio. For a low $`\mathrm{\Omega }_M`$ open cosmology to be the best fit, the $`H_{0(L)}/H_{0(G)}`$ ratio must be 1.1 - 1.2.
## 4 Conclusions
If the local value of the Hubble parameter is higher than the global value on scales of a few 100 $`h^{-1}`$ Mpc then the fitting of cosmological parameters to the high-redshift SNe Ia data of Perlmutter et al. (1999) and Riess et al. (1998) may be inappropriate. Previously popular cosmologies such as open or critical matter density Universes with no cosmological constant may be acceptable if we live in a local underdensity. Whilst large underdensities of scales up to 300-600 $`h^{-1}`$ Mpc are not expected from standard power spectra, there is some observational evidence that we may live in such an underdensity (see section 2), a possibility which is not ruled out by these results. It seems that it is too early yet to abandon the traditional models.
LIO is a Daphne Jackson Fellow funded by the Royal Society.
Mass hierarchy of leptons and hadrons within the framework of electrodynamics
D.L. Khokhlov
Sumy State University, R.-Korsakov St. 2
Sumy 244007 Ukraine
e-mail: khokhlov@cafe.sumy.ua
## Abstract
Structure of leptons and hadrons within the framework of electrodynamics is considered. Muon and tau-lepton have the structure of 3 electrons. The mass of the muon is defined by the cross section of two-photon annihilation. The mass of the tau-lepton is defined by the cross section of three-photon annihilation. Hadrons are characterized by the structure of 5 electrons. The masses of hadrons are defined via the masses of the muon and tau-lepton.
According to the standard theory, hadrons have a quark structure: baryons consist of 3 quarks, and mesons consist of a quark-antiquark pair. In the $`SU(2)\times U(1)`$ electroweak theory , masses of the fermions, leptons and quarks, arise due to the Yukawa interaction
$$f(\overline{\psi }_L\psi _R\phi +\overline{\psi }_R\psi _L\overline{\phi })$$
(1)
where the couplings $`f`$ are different for each fermion. The standard theory does not define the couplings $`f`$ and cannot explain the hierarchy of the lepton and quark masses.
To explain the mass hierarchy between lepton generations within the framework of electrodynamics it was proposed to consider muon and tau-lepton as composite particles. It was assumed that muon and tau-lepton have the following structure
$$\mu ^-\sim e^-e^+e^-$$
(2)
$$\tau ^-\sim e^-e^+e^-.$$
(3)
Muon arises due to the reaction
$$e^-+2\gamma \to e^-e^+e^-,$$
(4)
and tau-lepton arises due to the reaction
$$e^-+3\gamma \to e^-e^+e^-.$$
(5)
The mass of muon is defined by the cross section of two-photon annihilation
$$m_\mu =\frac{\hbar }{r_{2\gamma }}=70\,\mathrm{MeV}.$$
(6)
The experimental value is $`m_\mu =106\mathrm{MeV}`$ . The mass of tau-lepton is defined by the cross section of three-photon annihilation
$$m_\tau =\frac{\mathrm{}}{r_{3\gamma }}=2200\mathrm{MeV}.$$
(7)
The experimental value is $`m_\tau =1784\mathrm{MeV}`$ .
In order to describe the decays of muon and tau-lepton within the framework of electrodynamics the following reactions are introduced
$$\gamma \to \overline{\nu }_e\overline{\nu }_e$$
(8)
$$\overline{\nu }_e+\gamma \to \nu _\mu $$
(9)
$$\overline{\nu }_e+2\gamma \to \nu _\tau .$$
(10)
Combining eqs. (2), (4), (8), (9) we obtain the reaction for the decay of muon
$$\mu ^-=e^-e^+e^-\to e^-+2\gamma \to e^-+\gamma +\overline{\nu }_e\overline{\nu }_e\to e^-+\overline{\nu }_e+\nu _\mu .$$
(11)
Combining eqs. (3), (5), (8), (10) we obtain the reaction for the decay of tau-lepton
$$\tau ^-=e^-e^+e^-\to e^-+3\gamma \to e^-+2\gamma +\overline{\nu }_e\overline{\nu }_e\to e^-+\overline{\nu }_e+\nu _\tau .$$
(12)
Let us assume that hadrons are characterized by the structure of 5 electrons
$$e^-e^+e^-e^+e^-.$$
(13)
In addition to reactions (8), (9), (10), let us introduce the following reactions
$$\overline{\nu }_e+3\gamma \to \nu _e$$
(14)
$$\overline{\nu }_e+4\gamma \to \overline{\nu }_e.$$
(15)
The structure of the meson-antimeson pair comprises ten electrons and positrons, i.e., five electron-positron pairs. Let us assume that three electron-positron pairs transform into muon or tau-lepton pairs
$$e^-e^+e^-e^+e^-e^+\to n(l^-l^+)$$
(16)
where $`l`$ denotes the muon or tau-lepton, and $`n`$ is the number of pairs, equal to 1 or 3. Two electron-positron pairs transform into a neutrino-antineutrino pair
$$e^-e^+e^-e^+\to 4\gamma \to \overline{\nu }\,\overline{\nu }+3\gamma \to \overline{\nu }\,\nu .$$
(17)
Consider the structure of pion which within the framework of the standard theory consists of $`u,d`$ quarks. Pair of the charged pions has the structure
$$\pi ^-\pi ^+=e^-e^+e^-e^+e^-+e^+e^-e^+e^-e^+\to \mu ^-\mu ^++\overline{\nu }_\mu \nu _\mu .$$
(18)
The mass of pion is estimated as
$$m_\pi =m_\mu =106\mathrm{MeV}.$$
(19)
The experimental value is $`m_\pi =140\mathrm{MeV}`$ . The pion structure of 5 electrons allows one to explain the probability of electron-positron annihilation into pions
$$\frac{\mathrm{\Gamma }(e^-e^+\to \pi \pi )}{\mathrm{\Gamma }(e^-e^+\to \mu \mu )}=\frac{\sum _iq_i^2(\pi )}{\sum _iq_i^2(\mu )}=\frac{5}{3}.$$
(20)
Consider the structure of $`K`$ meson which within the framework of the standard theory includes strange $`s`$ quark. Pair of the charged $`K`$ mesons has the structure
$$K^-K^+=e^-e^+e^-e^+e^-+e^+e^-e^+e^-e^+\to 3\mu ^-3\mu ^++\overline{\nu }_\mu \nu _\mu .$$
(21)
The mass of $`K`$ meson is estimated as
$$m_K=3m_\mu =3\times 106=318\mathrm{MeV}.$$
(22)
The experimental value is $`m_K=494\mathrm{MeV}`$ .
Consider the structure of $`D`$ meson which within the framework of the standard theory includes charm $`c`$ quark. Pair of the charged $`D`$ mesons has the structure
$$D^-D^+=e^-e^+e^-e^+e^-+e^+e^-e^+e^-e^+\to \tau ^-\tau ^++\overline{\nu }_\tau \nu _\tau .$$
(23)
The mass of $`D`$ meson is estimated as
$$m_D=m_\tau =1784\mathrm{MeV}.$$
(24)
The experimental value is $`m_D=1869\mathrm{MeV}`$ .
Consider the structure of $`B`$ meson which within the framework of the standard theory includes beauty $`b`$ quark. Pair of the charged $`B`$ mesons has the structure
$$B^-B^+=e^-e^+e^-e^+e^-+e^+e^-e^+e^-e^+\to 3\tau ^-3\tau ^++\overline{\nu }_\tau \nu _\tau .$$
(25)
The mass of $`B`$ meson is estimated as
$$m_B=3\times m_\tau =3\times 1784=5352\mathrm{MeV}.$$
(26)
The experimental value is $`m_B=5271\mathrm{MeV}`$ .
Consider the structure of proton which within the framework of the standard theory consists of $`u,u,d`$ quarks. Proton structure can be given by
$$p\sim e^++2\gamma +3\gamma \to e^++e^-e^++e^-e^+$$
(27)
where electron-positron pairs are in the superposition of two-photon annihilation state and three-photon annihilation state. The mass of proton is estimated as
$$m_p=\frac{1}{2}(m_\mu +m_\tau )=\frac{106+1784}{2}=945\mathrm{MeV}.$$
(28)
The experimental value is $`m_p=938\mathrm{MeV}`$ . In view of (8), (15), (27), the decay of proton is given by
$$p=e^+e^-e^+e^-e^+\to e^++5\gamma \to e^++4\gamma +\nu _e\nu _e\to e^++\nu _e\nu _e.$$
(29)
The birth of the pair of identical neutrinos violates the particle-antiparticle conservation law, so the decay of proton with the structure given by eq. (27) is forbidden.
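For reference, the mass arithmetic of the model collapses to a few lines, since every hadron mass above follows from the two inputs $`m_\mu `$ and $`m_\tau `$ (a simple tabulation, ours, of Eqs. (19), (22), (24), (26) and (28) against the quoted measured values):

```python
m_mu, m_tau = 106.0, 1784.0                   # MeV, the two inputs of the model
table = [
    ("pi", m_mu,                140.0),       # Eq. (19)
    ("K",  3 * m_mu,            494.0),       # Eq. (22)
    ("D",  m_tau,               1869.0),      # Eq. (24)
    ("B",  3 * m_tau,           5271.0),      # Eq. (26)
    ("p",  (m_mu + m_tau) / 2,  938.0),       # Eq. (28)
]
for name, pred, meas in table:
    print(f"{name:>2}: predicted {pred:7.1f} MeV, measured {meas:7.1f} MeV")
```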
# The H-Dibaryon and the Hard Core
## I Introduction
Since the H-dibaryon was proposed by R. Jaffe as a likely candidate for a viable six quark bag (uuddss), there have been a variety of experiments proposed and carried out to locate it, which as yet have been unsuccessful. Some of these experiments involve production via ($`K^-,K^+`$) and proton induced reactions on light targets . More recently strangeness-rich heavy ion collisions have been thought to offer a preferred mechanism for generating the H. The H can also be viewed in the limit of ideal $`SU_f(3)`$ symmetry as the doubly strange, maximally symmetric, singlet combination of the octet baryons. However, given the mass gap existing between the $`\mathrm{\Lambda }`$ and the more massive $`\mathrm{\Sigma }`$ and $`\mathrm{\Xi }`$ states, it is probable that H consists mainly of $`|\mathrm{\Lambda }\mathrm{\Lambda }>`$. Consequently its lifetime would be expected to be closer to one-half that of the $`\mathrm{\Lambda }`$ than to the $`10^9`$ s value suggested by some authors.
Definitive observation of a double $`\mathrm{\Lambda }`$ hypernucleus is often considered antithetical to the existence of the H; one reason being that the two strange baryons, kept captive by their affinity to the normal nucleons, would quickly fall into the lower energy dibaryon state. Of the recorded observations of such nuclei only the most recent, performed at KEK, seems a good candidate . This proposition, that the existence of doubly strange hypernuclei rules out the existence of the H, need not be valid, and can become prejudicial. A hybrid state consisting partially of a six-quark bag and partly of a double $`\mathrm{\Lambda }`$ doorway state, attached to a nucleus, might be identifiable in experiments involving the production of quite light doubly strange hypernuclei . We will have more to say about this possibility in what follows.
To our knowledge all theoretical estimates of production rates in heavy ion collisions , irrespective of mechanism, have generally overlooked the possible existence of a hard core in the baryon-baryon interaction at short distances. As we will show, under reasonable assumptions about the hard core, this can lead to quite appreciable suppression of H production. Here, we introduce this device into the framework of production via heavy ion collisions. A previous calculation suggested a high formation probability, $`\sim `$0.07 H per central Au+Au collision. A recent AGS experiment, E896 , is presently analyzing some 100 million central $`Au+Au`$ events and could, in the light of this earlier prediction of the formation rate, provide a definitive search for the H. It is our present intention to at least semi-quantitatively understand the extent to which the hard core might interfere with this hope.
We treat the short range baryon-baryon force in a transparent and heuristic fashion. The best evidence from doubly strange nuclei suggests that the H, if it exists at all, is rather weakly bound, with binding less than $`\sim `$20 MeV and probably considerably less. We use this to justify handling the coalescence of $`\mathrm{\Lambda }`$ pairs in relativistic ion collisions much in the manner we previously employed for the deuteron , also a weakly bound system and likely to form only after all np-constituents have ceased high energy cascading. This newer calculation of coalescence, in Ref. , was not employed in earlier work on the H , and in fact had yielded a somewhat reduced formation rate for the H, by some 50%, in the absence of a hard core. The results arrived at in the present work suggest a more dramatic suppression. This conclusion should hold true for either the pure six quark bag proposed by Jaffe or for hybrid states, in both the relativistic heavy ion and proton induced environments, if the hard core indeed exists. We refer simply to the repulsive potential as a hard core, although in practice we employ a repulsive potential with finite height.
## II Coalescence and the Hard Core
Coalescence is treated quantum mechanically in the more recent approach by calculating the overlap of the wave packets of the initial combining pair with an outgoing packet for the final bound state. The cascade in which this coalescence estimate is embedded provides the distributions of both relative momentum and relative position required for determining the degree of overlap. The overlap integral, squared to produce a probability, is then part of a factorised version of dibaryon production. The combining pair of particles may form a bound state only after each ceases to interact in the cascade, as was indicated previously.
However, for two strange baryons to coalesce they must first penetrate their mutual, repulsive, core. Such a core has a negligible effect on the deuteron, which is spatially rather extended. This cannot be so for the H: not if this object consists at least partially of six quarks in a bag comparable in size to that for a single baryon, where short range repulsion can play a considerable role. H formation from two $`\mathrm{\Lambda }`$’s could be viewed as proceeding in two steps: first, a merging into a broad, deuteron-like doorway state and, second, barrier penetration into a single compact dibaryon, with the hard core repulsion forming the barrier. The overall rate for H production is then found to be the product of the usual coalescence probability and a barrier penetration post-factor. Naturally there are unknowns in such a calculation, one being the effective range at which combining baryons dissolve into a single bag, another being the nature of the short range forces (the hard core). The first we treat as a parameter; the second we approximate by using the $`NN`$ Bonn potential , limiting ourselves to including its shortest ranged components, due to $`\omega `$ and $`\sigma `$ exchange.
Our approach should apply equally well to a hybrid model in which the H is a combination of a deuteron-like $`|\mathrm{\Lambda }\mathrm{\Lambda }>`$ state and a, presumably smaller, six quark bag. The wave function would then in principle appear as:
$$\mathrm{\Psi }=\alpha |\mathrm{\Lambda }\mathrm{\Lambda }>+\beta |q^6>,$$
(1)
with $`\alpha `$ and $`\beta `$ representing amplitudes for the two-baryon and six-quark components of the hybrid state.
Whatever the actual composition of the physical dibaryon, it must have some minimal six quark bag presence, or else the relatively weak $`\mathrm{\Lambda }\mathrm{\Lambda }`$ force could not lead to binding. The pure Jaffe-like state corresponds to $`\beta =1`$, but our coalescence calculation is in principle independent of this parameter. The procedure we follow to estimate the effect of the repulsive core on entry from a doorway $`\mathrm{\Lambda }\mathrm{\Lambda }`$ state into the final H state would be applicable to either the pure bag or hybrid state cases. The only question is the precise nature of the hard core itself. We have stated our assumptions clearly above.
The barrier penetration calculation is described here in full detail, while the cascade and coalescence formalisms are referenced elsewhere . In a standard single meson exchange model (OBEP) of the nuclear two body interaction the hard core arises from $`\omega `$ exchange. In transferring this force to the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ system one should, however, probably not scale by the numbers of non-strange quarks. This component of the force is expected to be essentially flavour-independent. To the Bonn potential one must then add a term due to the exchange of a $`\varphi `$ meson between the $`s`$ quarks. This observation suggests there should be little or no modification of the nucleon-nucleon hard core in applying it to the two $`\mathrm{\Lambda }`$ system. We thus assume an intermediate to short range force exists of the form:
$$V(r)=V_\omega (r)+V_\sigma (r),$$
(2)
where
$$V_i(r)=g_i(1/r)exp(-m_ir).$$
(3)
The couplings and the meson masses, $`g_i`$ and $`m_i`$, are specified in Ref .
The strong intermediate range $`\sigma `$ attraction reduces the effect of the hard core, while the longer range parts of the potential are assumed to play a negligible role in coalescence. The two baryons will approach to some outer radius $`b`$, in fact a classical turning point, before being faced with the strong short range repulsion produced by the $`\omega `$. At some smaller radius $`a`$, representing the separation of $`\mathrm{\Lambda }`$-cluster centers, the two baryons are imagined to dissolve into a six quark bag. The calculation is especially sensitive to this ‘critical separation radius’ $`a`$. Although our final results on barrier penetration are consequently somewhat uncertain, it will become apparent that one thing one cannot do is to ignore them while there remains any reason to believe that a short range repulsion exists.
## III Penetration Factor
We appeal to the WKB method for an estimate of the barrier penetration factor. We require some picture of the inter-baryon potential, from the larger separations in the initial doorway state down to the inner reaches of the final multi-quark bag, and for $`r\le a`$ take
$$V(r)=V_0,$$
(4)
while for $`r\ge a`$
$$V(r)=V_\omega (r)+V_\sigma (r),$$
(5)
as specified in Equation 3 above. Thus, inside the radius $`a`$ the two baryons, by fiat, melt into a bag. The probability of barrier penetration in this effective two body model can then be determined by calculating the WKB approximation for the transmission coefficient at relative energy $`E`$:
$$T(E)=4\sqrt{\frac{(V(b)-E)(E-V_0)}{V(b)-V_0}}exp(-2\tau ),$$
(6)
where
$$\tau =\int _a^bdr\sqrt{2m(V(r)-E)}.$$
(7)
As advertised the upper limit, $`b`$, of the integral for $`\tau `$ is a turning point defined implicitly by
$$E=V(b),$$
(8)
while $`a`$ represents the outer separation at which the six quark bag forms. The non-relativistic calculation of transmission performed here is probably adequate, given that coalescence into a relatively weakly bound system will not proceed at very high baryon-baryon relative momentum.
## IV Coalescence Mechanism
The rest of the calculation is straightforward, given the existence of a previously constructed heavy ion two-nucleon coalescence code . The transmission coefficient $`T(E)`$ is inserted into this heavy ion simulation at a point after the formation of a broad doorway state for the two strange baryons. That is to say, the H formation probability is taken as the product of $`T(E)`$ and a coalescence factor for the doorway state as defined in Reference . The size and structure of this state play only a minor role provided the turning point $`b`$ is within its confines. There are two modes for operation of this code, labeled static and dynamic, both of which are described in Reference . The dynamic code is self-contained, providing an internal estimate of the spatial spreading of the individual baryon wave packets, occasioned by interactions within the collision medium. The static mode produces essentially identical results provided the wave packet size, assigned as a fixed value in static coalescence, is appropriately tuned.
In the earlier work , a satisfactory agreement of the dynamic model with known $`Si+Au`$ deuteron single and double differential cross-sections was demonstrated, and $`Au+Au`$ predictions made. Only very preliminary data for deuterons from AGS $`Au+Au`$ collisions existed at that time. In Fig(1) we have compared these 1996 dynamic coalescence calculations for deuteron production in $`Au+Au`$ collisions at 11.6 GeV/c with very recently submitted data from E866 . Considering the nature of both experiment and theory, this prediction of absolute deuteron yield must be considered a triumph. There are in the dynamic simulations no adjustable parameters. In light of this, and to minimise the computer time needed to produce sufficient statistics for the much rarer H dibaryon, we have performed the coalescence estimates in this work using the static treatment, adjusted to agree with the dynamic normalisation and perforce with the deuteron data. This procedure does not appreciably affect H production estimates and permits us to more efficiently examine bag size parameters $`a`$ where the hard core suppression can be rather large and the H yield truly small.
The deuteron prediction gives one great confidence in our treatment of the coalescence mechanism and lends credence to the use of a similar approach in estimating the creation of the elusive H.
## V Heavy Ion Production of the H
We consider $`Au+Au`$ collisions at an incident energy of $`10.6`$ GeV per nucleon. The actual beam energy in E896 is $`11.6`$ GeV but the use of a thick target reduces this to a lower effective average. The energy dependence, although appreciable and quoted below, is by no means the most critical variable encountered in this simulation. The dependence on the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ separation, $`a`$, at the moment of dissolution into a bag easily wins that title. We have also examined the dependence of the results on the size $`r_h`$ of the H-doorway state and on $`r_{sp}`$, the spatial size of the wave packets in the static coalescence model. Neither proves to be of much consequence. In practice, $`r_{sp}`$ is chosen to assure agreement with deuteron yields from dynamic coalescence. In the present simulations this occurs for $`r_{sp}\sim 1.5`$–$`2.0`$ fm, an eminently reasonable value.
The Bonn inspired prescription we described for the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ potential reduces numerically to:
$$V(r)=3.94\frac{exp(-3.97r)}{r}-1.44\frac{exp(-2.97r)}{r}GeV,$$
(9)
with $`r`$ measured in Fermis. The resulting short distance potential is graphed in Fig(2).
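To make the size of the suppression concrete, the minimal Python sketch below evaluates the exponential barrier factor of Eqs. (6)–(8) for the potential of Eq. (9). The relative energy $`E`$, the choice of the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ reduced mass ($`m_\mathrm{\Lambda }/2`$), and the reinstated factor of $`\hbar c`$ in Eq. (7) are our illustrative assumptions; the smooth prefactor of Eq. (6) is of order unity and is omitted here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 0.19733            # GeV fm, converts momenta to inverse fm
M_RED = 0.558              # GeV, assumed Lambda-Lambda reduced mass (~ m_Lambda / 2)

def V(r):
    """Short-range Lambda-Lambda potential of Eq. (9); r in fm, V in GeV."""
    return 3.94 * np.exp(-3.97 * r) / r - 1.44 * np.exp(-2.97 * r) / r

def barrier_factor(E, a):
    """exp(-2 tau) of Eq. (7) for relative energy E (GeV) and bag radius a (fm)."""
    b = brentq(lambda r: V(r) - E, a, 5.0)        # outer turning point, Eq. (8)
    integrand = lambda r: np.sqrt(max(2.0 * M_RED * (V(r) - E), 0.0)) / HBARC
    tau, _ = quad(integrand, a, b)
    return np.exp(-2.0 * tau)

# illustrative: E = 50 MeV of relative energy, bag-formation radii 0.2-0.5 fm
for a in (0.2, 0.3, 0.4, 0.5):
    print(f"a = {a:.1f} fm  ->  exp(-2 tau) = {barrier_factor(0.05, a):.2e}")
```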
Our chief results are presented in Fig(3), indicating the variation of the numbers of H-dibaryons produced in central events with $`a`$. This latter parameter must not be thought of as an effective hard core radius for the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ interaction. The underlying quark-quark forces may also be viewed as possessing a repulsive short range component due to the exchange of vector mesons . Even with complete overlap of the parent baryons, i. e. $`a=0`$, the average inter-quark separation is, for a uniform spatial distribution, still comparable to the parent radius $`r\sim 0.8`$ fm, i. e. considerably greater than any conceivable fixed hard core radius. We have considered baryon centers between $`0.2`$ fm and $`0.5`$ fm apart, whereas reasonable values probably lie between $`0.2`$ fm and $`0.3`$ fm, where the baryon overlap region is near $`80`$% of the volume of a single baryon.
Even at the largest separations, H suppression due to the repulsive forces is not ignorable, but for the smallest values of $`a`$ the observation of the H, should it exist, becomes problematic. Early analysis of the actual experimental setup using the simulation ARC suggested a neutral background comparable with the initial estimate of $`0.07`$ H’s per central collision . Thus, for baryon separations of $`0.20`$ fm to $`0.35`$ fm one would need to achieve tracking sensitivities of $`10^{-4}`$ to $`10^{-2}`$ relative to background. In our worst case scenario, at $`a=0.2`$ fm, one is still left with perhaps a few thousand dibaryons produced in the E896 sample, but immersed in what may prove to be a daunting background.
We have also considered variations due to bombarding energy and H-doorway radius. These are easily understood, if not noteworthy. A decrease of collision momentum from $`14.6`$ to $`10.6`$ GeV/c results in close to a $`30\%`$ reduction in H’s produced per central event, while the yield is quite insensitive to the doorway radius.
## VI Conclusions
Short range repulsion between strange baryons can profoundly hinder coalescence into objects whose very existence depends on the presence of important bag-like structure. This lesson is even more applicable to the many H-searches initiated using ($`K^{},K^+`$) reactions , since these generally involve even lower relative energies $`E`$, and a consequent increased difficulty in barrier transmission.
There is perhaps one way to circumvent this frustrating roadblock in the discovery of the lightest of all possible strangelets. In the event a pair of strange baryons is attached, through a \[$`K^{},K^+`$\] reaction, to a light nucleus, a hybrid H may form, and itself remain bound to the nucleus. An optimum final nucleus might be $`{}_{\mathrm{\Lambda }\mathrm{\Lambda }}{}^{5}H`$ . The extra nucleons in this five particle system keep the captured $`\mathrm{\Lambda }`$ pair together for some $`100`$ picoseconds, this being far more than enough time for penetration of what then would constitute a rather modest, $`1`$–$`2`$ GeV barrier. Any evidence that the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ pairing energy substantially exceeded the $`2`$–$`3`$ MeV or so expected from the known $`\mathrm{\Lambda }\mathrm{\Lambda }`$ interaction would indicate the presence of a hybrid bag+doorway state. Appreciable observed decay into the $`\mathrm{\Sigma }^{}p`$ channel would strengthen such a conclusion.
Unfortunately, the very same repulsive forces which made coalescence into a bound H state difficult, may also, at the quark level, destroy the existence of the state. Only a detailed microscopic calculation can begin to answer this question, for example in a model in which quarks rather than baryons exchange mesons . It is, in this context, disturbing that the search for the only ‘strangelets’ which we are certain do exist, i. e. doubly strange hypernuclei, may be discontinued despite the present finding of only a single good candidate .
## VII Acknowledgements
The authors are grateful to C. Chasman for providing us with the recently submitted E866 deuteron data. The present manuscript has been authored under US DOE grants No. DE-FG02-93ER407688 and DE-AC02-76CH00016.
# Implications of the X-ray Variability for the Mass of MCG-6-30-15
## 1. Introduction
The type 1 Seyfert galaxy MCG$`-`$6-30-15 has in recent years been the subject of intense study owing to the discovery by the Advanced Satellite for Cosmology and Astrophysics (ASCA) of a resolved, broad iron K$`\alpha `$ fluorescent line in its hard X-ray spectrum (Tanaka et al. (1995)). The shape of the line is consistent with a gravitationally and Doppler shifted emission line which originates from near the inner edge of an accretion disk around a black hole. In the X-rays, MCG$`-`$6-30-15 is also one of the brighter and more variable type 1 Seyferts. It is therefore hoped that by examining the correlated variability between the ionizing X-ray continuum and various components of the Fe K$`\alpha `$ line, one can obtain a size scale for the system and thence a mass for the black hole (Reynolds et al. (1999)). This requires, however, that the various components of the line be spectrally resolved on time scales shorter than the intrinsic response time scale of the line-emitting material.
At present, ASCA, BeppoSAX, and Chandra are the only X-ray telescopes with the spectral resolution to measure fluxes in different line components separately. Unfortunately, the integration times ($`\sim 10`$ ks) required to obtain this resolution are longer than the light travel time across the inner edge of an accretion disk around a $`10^8M_{\odot }`$ Schwarzschild black hole. Furthermore, the fits to the iron line appear to require a Kerr geometry (although see Reynolds & Begelman (1997)) in order to explain the broad red wing of the line and the lack of significant emission blueward of 6.4 keV, thus making the relevant time scales even shorter. Nonetheless, over longer time scales, ASCA has measured significant variability in the shape of the MCG$`-`$6-30-15 Fe K$`\alpha `$ line (Iwasawa et al. (1996)).
Time-resolved spectral investigations (Iwasawa et al. (1996, 1999)) suggest that the broad and narrow components of the iron line are correlated differently with the flux state depending on the time scale investigated. For integrations $`\sim 10^4`$ s, the narrow component varies with the continuum flux whilst the broad component appears to be anti-correlated. In contrast, on shorter time scales the broad component responds immediately to flux changes whilst the narrow component remains constant. Iwasawa et al. (1999) suggest that multiple, localized X-ray flares occur on the disk surface near the inner edge and illuminate only a relatively small region of the disk. These small regions make contributions to narrow ranges in line redshift and thus produce very complex temporal behavior.
In support of this interpretation, Iwasawa et al. (1999) consider the iron line associated with a very short, bright flare from MCG$`-`$6-30-15 observed during 1997 August by ASCA (see Fig. 1). The line was very redshifted and had little or no emission blueward of 6 keV. Its shape was consistent with having originated entirely from a small region at $`r\sim 5\mathrm{GM}/\mathrm{c}^2`$ on the approaching side of the disk. The flare lasted about 4 ks, which is also approximately the orbital period at this radius for a $`10^7\mathrm{M}_{\odot }`$ black hole. In order for the line not to be significantly more smeared by the orbital motion, Iwasawa et al. (1999) estimate a black hole mass of $`2\times 10^8\mathrm{M}_{\odot }`$. Alternatively, they suggest that a much lower mass is possible if all the line emission arises from $`r<5\mathrm{GM}/\mathrm{c}^2`$.
A major complication that a high-mass model faces is the large amplitude and rapid X-ray variability of MCG$`-`$6-30-15 (Fig. 1). In particular, during the performance verification phase of ASCA, Reynolds et al. (1995) noted that the 0.5–10 keV flux increased by a factor of 1.5 over 100 s, i.e., the dynamical time scale at the inner disk edge surrounding a maximal Kerr, $`2\times 10^6\mathrm{M}_{\odot }`$ black hole. Typically one expects the bulk of the X-ray variability to occur on dynamical time scales or longer. It is clearly important to determine whether this variability represented a rare, rapid event or if such time scales are truly characteristic of the behavior of MCG$`-`$6-30-15.
One might expect that characteristic time scales should scale with the mass of the central object (see, e.g., the discussions of McHardy (1988) and Edelson & Nandra (1999)); therefore, in this work we try to gauge the size of the system and the mass of the black hole in MCG$`-`$6-30-15 by studying its characteristic X-ray variability properties in comparison to other black hole systems such as Cyg X-1 and NGC 5548. For the high-frequency variability analysis, we use the 1997 August simultaneous ASCA/Rossi X-ray Timing Explorer (RXTE) observation discussed by Iwasawa et al. (1999) (see also Lee et al. (1998)). In our analysis we screen the ASCA data as outlined by Brandt et al. (1996), except that we use the more stringent criteria of 7 GeV/c for the rigidity and an elevation angle of $`10^{\circ }`$. Data from both SIS detectors are combined into a single lightcurve. For the RXTE data, we use screening criteria and analysis techniques appropriate for faint sources, as we have previously discussed in Chiang et al. (2000). Specifically, we only analyze top layer data from proportional counter units 0, 1, and 2.
## 2. Power Spectra
MCG$`-`$6-30-15 is detected by the All Sky Monitor (ASM; Levine et al. (1996)) on RXTE with a mean count rate of 0.44 cps in the 1.3–12.2 keV band. Currently, there is a 0.1 cps uncertainty in the zero level offset, as well as comparable magnitude systematic time-dependent variations due to, for example, the solar angular position relative to the source (Remillard 1999, priv. comm.). Assessing the very low frequency ($`f<10^{-6}`$ Hz) X-ray variability of MCG$`-`$6-30-15 is therefore problematic. Efforts are currently underway, however, to revise the ASM data reduction process in order to minimize such systematic effects (Remillard 1999, priv. comm.); therefore, an ultra-low frequency variability study of MCG$`-`$6-30-15 will in principle be feasible in the near future.
We are able, however, to investigate the high-frequency power spectral density (PSD) of MCG$`-`$6-30-15 by using the RXTE 8–15 keV and the ASCA 0.5–2 keV lightcurves binned on 4096 sec time scales. Both lightcurves contain data gaps, especially on the orbital time scale of $`\sim 5`$ ks due to blockage by the Earth and passage through the South Atlantic Anomaly. We therefore use the techniques of Lomb (1976) and Scargle (1982) to calculate the PSD, and we only consider frequencies $`f<10^{-4}`$ Hz (evenly sampled in intervals of the inverse of the observation duration). Higher time resolution lightcurves showed excess power on the $`\sim 5`$ ks orbital time scale. The results, binned over the greater of four contiguous frequency bins or logarithmically over $`f\rightarrow 1.15f`$, are presented in Fig. 2. Here we use a one-sided normalization where integrating over positive frequencies yields the total mean square variability relative to the squared mean for the particular lightcurve being analyzed. The ASCA lightcurve shows 28% rms variability, whereas the RXTE lightcurve shows 16% rms variability. Both PSD have comparable shapes, i.e., flat from $`10^{-6}`$–$`10^{-5}`$ Hz and slightly steeper than $`f^{-1}`$ at higher frequencies.
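As a concrete illustration of this procedure, the sketch below evaluates a Lomb-Scargle periodogram with SciPy; the time and rate arrays are fabricated stand-ins for the actual 4096 s binned lightcurves, and the final factor encodes our assumed one-sided rms$`^2`$/mean$`^2`$ normalization quoted above.

```python
import numpy as np
from scipy.signal import lombscargle

# stand-in for a gap-ridden, 4096-s binned lightcurve (times in s, rate in cps)
rng = np.random.default_rng(0)
t = np.sort(rng.choice(np.arange(0.0, 4.0e5, 4096.0), size=70, replace=False))
rate = 1.0 + 0.3 * rng.standard_normal(t.size)

T = t.max() - t.min()
freqs = np.arange(1, 40) / T                  # spaced by 1/duration, f < 1e-4 Hz
power = lombscargle(t, rate - rate.mean(), 2.0 * np.pi * freqs)

# one-sided normalization: integrating psd over f gives (rms/mean)^2
psd = power * 2.0 * T / (t.size * rate.mean() ** 2)
```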
We have fit a broken power law to the data, assuming error bars equal to the average PSD value divided by the square root of the number of frequency bins averaged over. Although such error estimates are only valid for averages made from independent frequency bins (which is not strictly true for the Lomb-Scargle periodogram; Scargle (1982)), this should provide a rough estimate, especially as the lightcurves are nearly evenly sampled. The best fit PSD slopes are $`1.3\pm 0.2`$ for the PCA and $`1.6\pm 0.3`$ for ASCA (errors are $`\mathrm{\Delta }\chi ^2=2.7`$). The break frequencies are found to be $`(8\pm 3)\times 10^{-6}`$ Hz for the PCA, and $`(1.5\pm 0.5)\times 10^{-5}`$ Hz for ASCA. These break frequencies are consistent with the value found by McHardy, Papadakis & Uttley (1998).
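The broken power-law fit itself can be sketched as follows; the model form, starting values, and the synthetic stand-in data are illustrative, with the error model described above entering through the `sigma` argument.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(f, norm, f_break, slope):
    """Flat below the break frequency, falling as f**(-slope) above it."""
    f = np.asarray(f, dtype=float)
    return np.where(f < f_break, norm, norm * (f / f_break) ** (-slope))

# synthetic stand-in for the binned periodogram values discussed in the text
rng = np.random.default_rng(2)
freqs = np.logspace(-6.3, -4.0, 25)
truth = broken_power_law(freqs, 3.0e4, 1.0e-5, 1.5)
psd = truth * rng.normal(1.0, 0.2, freqs.size)

popt, pcov = curve_fit(broken_power_law, freqs, psd,
                       p0=(1.0e4, 5.0e-6, 1.0), sigma=0.2 * truth)
print(popt)   # recovered (norm, f_break, slope)
```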
At frequencies $`>5\times 10^{-4}`$ Hz there are enough contiguous data segments to be able to calculate the PSDs using standard FFT methods (Nowak et al. (1999), and extensive references therein). In Fig. 2 we show the calculated, background and noise-subtracted, PSD for the 1.8–3.6 keV and 8–15 keV RXTE bands. Below $`2\times 10^{-3}`$ Hz, where signal to noise is greatest, both PSDs are consistent with being $`f^{-2\pm 0.3}`$ and having 6% rms variability. From the generated background lightcurves, we estimate that background fluctuations contribute at most 1.5% and 2% rms variability, respectively, to these PSDs at $`<2\times 10^{-3}`$ Hz. As the PSDs of the background lightcurves tend to be slightly steeper than $`f^{-2}`$, there may be some trend for background fluctuations to steepen the observed high-frequency PSDs, and, in fact, Yaqoob (1997) find a slightly flatter ($`f^{-1.4}`$) high-frequency PSD for MCG$`-`$6-30-15.
## 3. Time Delays
In order to examine time delays between the low energy ASCA lightcurve and the high energy RXTE lightcurve, we use the Z-transformed Discrete Cross-Correlation Function (ZDCF) of Alexander (1997), which is based upon the DCF method of Edelson & Krolik (1988). Auto-correlation functions can also be computed by this procedure, and we note that PSD derived from autocorrelations calculated via the ZDCF yield identical results to those presented in Fig. 2. We use the Monte Carlo methods described by Peterson et al. (1998) to assess the significance of any uncertainties due to flux measurement errors as well as uneven or incomplete sampling.
We have previously applied these methods to a simultaneous Extreme Ultraviolet Explorer (EUVE)/ASCA/RXTE observation of NGC 5548 (Chiang et al. (2000)), where we found evidence for the low energy ASCA band leading the high energy RXTE band by 5 ks. The results for the MCG$`-`$6-30-15 data are shown in Fig. 3, for which we have used lightcurves binned at 512 s resolution. A positive delay indicates that the RXTE light curve lags the ASCA light curve. Due to the ambiguities associated with interpreting cross-correlation results, particularly for such small delays, we considered three different measures of the “lag”. For each Monte Carlo trial, we estimate the characteristic lag by (1) fitting a parabola to the ZDCF values to find the location of the peak, (2) computing the centroid of the ZDCF over positive values bracketing the maximum value, and (3) using the location of the actual maximum value of the ZDCF. In all three cases, our simulations yield evidence for a positive lag at various degrees of significance: $`\tau _{\mathrm{fit}}=0.9\pm 0.7`$ ks, $`\tau _{\mathrm{centroid}}=0.9\pm 1.3`$ ks, $`\tau _{\mathrm{max}}=0.4_{-1.2}^{+1.9}`$ ks (90% C.L.). Although the fitted peak estimate is consistent with a positive lag, both the centroid and ZDCF-maximum estimates are formally consistent with zero, giving an upper limit of $`\tau <2`$ ks.
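For orientation, a stripped-down discrete correlation function in the spirit of Edelson & Krolik (1988) is sketched below; the full ZDCF additionally applies Fisher z-transformed, equal-population binning, which we omit here.

```python
import numpy as np

def dcf(t1, a, t2, b, lags, width=512.0):
    """Binned discrete correlation function of two unevenly sampled series.

    A peak at positive lag means series b (e.g. RXTE) lags series a (e.g. ASCA).
    """
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    dt = t2[None, :] - t1[:, None]        # every pairwise delay
    prod = a[:, None] * b[None, :]
    out = np.full(len(lags), np.nan)
    for i, lag in enumerate(lags):
        sel = np.abs(dt - lag) < width / 2.0
        if sel.any():
            out[i] = prod[sel].mean()
    return out

# e.g. dcf(t_asca, r_asca, t_rxte, r_rxte, lags=np.arange(-8192, 8192, 512))
```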
At high Fourier frequency, we have used the 1.8–3.6 keV and 8–15 keV RXTE lightcurves discussed above to search for frequency-dependent time lags using standard FFT techniques. (A complete discussion of such methods, including calculation of error bars, is presented in Nowak et al. (1999) and references therein). No significant time delays were found, and the 1-$`\sigma `$ upper limits were 50–100 s in the $`5\times 10^{-4}`$–$`2\times 10^{-3}`$ Hz range. Note that there were an insufficient number of uninterrupted, strictly simultaneous ASCA/RXTE lightcurves to allow calculation of their relative time delays via direct FFT methods.
## 4. Discussion
The MCG$`-`$6-30-15 PSD breaks to being $`f^{-1}`$ at $`\sim 10^{-5}`$ Hz, and then breaks to being approximately $`f^{-2}`$ between $`10^{-4}`$–$`10^{-3}`$ Hz. This is to be compared to the Cyg X-1 PSD which has a comparable rms amplitude and shape, and has a set of PSD breaks at frequencies between 0.03–0.3 Hz and 1–10 Hz (Nowak et al. (1999), and references therein). The black hole mass in Cyg X-1 is estimated to be $`10\mathrm{M}_{\odot }`$ (Herrero et al. (1995)); therefore, if these break frequencies scale with mass, then the central black hole mass of MCG$`-`$6-30-15 could be as low as $`10^6\mathrm{M}_{\odot }`$. NGC 5548, which is believed to have a central black hole mass of $`10^8\mathrm{M}_{\odot }`$ (Done & Krolik (1996); Chiang & Murray (1996); Peterson & Wandel (1999)), shows a similar PSD with break frequencies at $`6\times 10^{-8}`$ Hz and between $`3\times 10^{-7}`$–$`3\times 10^{-6}`$ Hz (Chiang et al. (2000)). Again, if the break frequencies scale with mass, then the central black hole mass for MCG$`-`$6-30-15 could be several orders of magnitude lower than that for NGC 5548.
To date, the most carefully studied X-ray PSD for any AGN is that for NGC 3516 (Edelson & Nandra (1999)). (See also McHardy, Papadakis & Uttley (1998) who present preliminary results for a number of AGN, including MCG$`-`$6-30-15.) For NGC 3516 the PSD was seen to break from nearly flat at $`<3\times 10^{-7}`$ Hz, and then gradually steepen into an $`f^{-1.74}`$ power law up to frequencies as high as $`10^{-3}`$ Hz. NGC 3516 is seen to be intermediate between NGC 5548 and MCG$`-`$6-30-15. Based upon these measurements (and upon other factors, such as the source luminosity), Edelson & Nandra (1999) argue for a black hole mass in the range of $`10^7`$–$`10^8\mathrm{M}_{\odot }`$, i.e., intermediate to the masses of NGC 5548 and MCG$`-`$6-30-15 discussed above.
The RXTE/ASCA lag upper limit for MCG$`-`$6-30-15 is also intermediate between the observed X-ray lags for Cyg X-1 (Miyamoto et al. (1988); Nowak et al. (1999)) and NGC 5548 (Chiang et al. (2000)). The time lag observed in NGC 5548, effectively measured on the $`f^{-2}`$ portion of its PSD, is 5 ks. Time lags on the flat portion of the NGC 5548 PSD could be considerably longer (Chiang et al. (2000)). Near the PSD break from flat to $`f^{-1}`$, the X-ray time lags in Cyg X-1 are $`\sim 0.1`$ s, whilst on the $`f^{-2}`$ portion of the PSD the X-ray time lags are $`10^{-3}`$–$`10^{-2}`$ s. The MCG$`-`$6-30-15 time lags may cover a similar dynamic range from $`<2`$ ks (overall lag) to $`<100`$ s (high frequency lag<sup>1</sup><sup>1</sup>1Although the low energy band here differed from the ASCA band, if the time lags scale logarithmically with energy (Nowak et al. (1999) and references therein) we would have expected the lag with respect to the ASCA band to be approximately a factor of 2 greater.).
If the characteristic variability and lag times are indicative of mass, then a mass as low as $`10^6\mathrm{M}_{\odot }`$ may be required for the central black hole of MCG$`-`$6-30-15. Assuming a bolometric luminosity of $`4\times 10^{43}\mathrm{ergs}\mathrm{s}^{-1}`$ (Reynolds et al. (1997)), this would imply that MCG$`-`$6-30-15 is emitting at $`\sim 30\%`$ of its Eddington rate, which is large but still plausible. A relatively low central black hole mass would make the large amplitude, rapid variability reported by Reynolds et al. (1995) much easier to understand, whereas a mass as large as $`2\times 10^8\mathrm{M}_{\odot }`$, the upper end discussed by Iwasawa et al. (1999), seems very unlikely. The $`<100`$ s lags seen at high frequency then likely provide an upper limit to the Compton diffusion time scale (see the discussion of Nowak et al. (1999)). These time scales are problematic for future hopes of simultaneously temporally and spectrally resolving the iron K$`\alpha `$ line in this system with Constellation-X, as it will require $`\sim 1`$ ks integration times to study the line profile of MCG$`-`$6-30-15 (Young & Reynolds (2000)).
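The quoted Eddington ratio follows from simple arithmetic; as a quick check (with the standard hydrogen Eddington constant as the only outside input):

```python
L_EDD_PER_MSUN = 1.26e38     # erg/s per solar mass, standard hydrogen value
L_bol = 4.0e43               # erg/s, Reynolds et al. (1997)
M_bh = 1.0e6                 # solar masses, the low-mass estimate
print(L_bol / (M_bh * L_EDD_PER_MSUN))   # ~0.3, i.e. ~30% of Eddington
```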
A 30% Eddington luminosity indicates that it is worthwhile to search for ultra-low frequency ($`f<10^{-6}`$ Hz) variability in excess of $`3\%`$ rms, that is, above an extrapolation of the flat part of the ASCA and RXTE-PCA PSDs. ‘Very high state’, i.e., $`>𝒪(30\%L_{\mathrm{Edd}})`$, galactic black hole candidates can show PSDs that are flat below $`10^{-2}`$ Hz, are approximately proportional to $`f^{-2}`$ between $`10^{-2}`$–$`10^{-1}`$ Hz, are flat again between $`10^{-1}`$–$`1`$ Hz, and then break into an $`f^{-2}`$ PSD at higher frequencies. (Specifically, see Fig. 4b of Miyamoto et al. (1991), which shows a ‘very high state’ PSD of GX339$`-`$4.) The low frequency portion of the ‘very high state’ PSD has no simple analogy in the (usually observed) low/hard state of Cyg X-1, where ultra-low frequency noise is typically associated with dipping activity due to obscuration by the accretion stream (Angelini, White & Stella (1994)). Previous models, for example, have associated ‘very high state’ low-frequency variability with fluctuations of a viscously unstable $`\alpha `$-disk (Nowak (1994), and references therein).
This highlights the major caveat that needs to be mentioned: we do not know the average PSD shape nor the scaling of the break frequencies as a function of fractional Eddington luminosity in either galactic black hole candidates or AGN. This is an especially important consideration as the mass estimates discussed above imply a large range of Eddington luminosity ratios. Considering all the evidence for rapid variability and extremely short time lags in MCG$`-`$6-30-15 discussed above, however, a low mass for MCG$`-`$6-30-15 seems to us very compelling. With the advent of the X-ray Multiple Mirror (XMM) mission, which has large effective area and is capable of extremely long, uninterrupted observations, these analyses will become more detailed for NGC 5548 and MCG$`-`$6-30-15, and will allow one to develop a statistical sample of numerous other AGN.
We thank R. Remillard for generating an ASM lightcurve of MCG$`-`$6-30-15, and O. Blaes, K. Pottschmidt, N. Murray, and J. Wilms for useful conversations. This work has been financed by NASA Grants NAG5-4731, NAG5-7723, and NAG5-6337. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
# The Schrödinger formulation of the Feynman path centroid density
## I INTRODUCTION
The thermodynamic properties of many quantum systems have been studied within the path integral formulation of statistical mechanics. An essential aspect of this formulation is the mapping of the quantum system onto a classical model, whose equilibrium properties can be derived by classical molecular dynamics or Monte Carlo simulation techniques with arbitrary accuracy. However, the efficient calculation of equilibrium dynamical properties of quantum many-body systems by this formulation, i.e., time dependent correlation functions, still remains a challenging unsolved problem.
Several applications of the path integral formulation have been developed around a quantity introduced by Feynman and Hibbs , the effective classical potential (ECP). This concept allows the formulation of a variational principle for quantum systems in thermodynamic equilibrium, whose importance can be appreciated by quoting the conclusions of Feynman and Hibbs’ book: “\[This variational principle is\] the only example of a result obtained with path integrals which cannot be obtained in simple manner by more conventional methods”. The original formulation of this principle used a free particle as a reference system, a fact that limits the range of validity of this approximation to temperatures where the system behaves nearly classically. An essential improvement of this variational theory was formulated by Giachetti and Tognetti , and, independently, by Feynman and Kleinert , using a harmonic oscillator as reference system. With this improvement the equilibrium properties of quantum anharmonic systems can be realistically approximated even in the low temperature limit. The name “pure quantum self-consistent harmonic approximation” has recently been coined for the application of the variational principle at this level of approximation.
Further applications of the ECP concept are related to kinetic and dynamical properties of quantum systems in thermodynamic equilibrium. Gillan formulated the basis of the quantum transition state theory (QTST), a kinetic approximation to calculate rate constants of thermally activated processes, that may be applied to anharmonic many-body systems at temperatures where quantum effects are important. This approximation has been used in condensed matter and chemical physics investigations, e.g., in recent studies of reorientation rates of hydrogen around a boron atom in doped silicon , or proton transfer reactions . The most important quantity in QTST is the so-called centroid density, a function that carries exactly the same information as the ECP: the centroid density is an exponential function of the ECP. An interesting dynamical approximation, called centroid molecular dynamics (CMD), was formulated by Cao and Voth to calculate real time correlation functions of quantum particles at finite temperatures. The ECP is an essential quantity for this approximation, as the dynamical properties are derived from trajectories generated by classical equations of motion of particles moving in the ECP. A justification of the CMD approach has been given by showing that the centroid position correlation function agrees to second order, in a Taylor expansion in time, with the Kubo transformed position correlation function. The latter correlation function is related to the dynamical response of the system to an external force. Unfortunately, the limits and capability of CMD for many-body systems at temperatures where quantum effects and anharmonicity are relevant have not at present been fully established. For recent applications of this dynamical approach see Refs. and .
The centroid density or, equivalently, the ECP is thus an important concept in the theory of path integrals. The motivation of the present work is based on a simple idea: if the centroid density is the basis of several interesting physical applications within the path integral formulation (i.e., a variational principle, and the QTST and CMD approximations), then the definition of this quantity within the Schrödinger formulation may lead to new physical insight into these approximations. Our goal is to present the correspondence between path integral concepts, such as the centroid density or the ECP, and the Schrödinger formulation. In a recent contribution we have analyzed the centroid density, showing that it is related to the quasi-static response of the quantum system to a constant external force. Some physical implications of this analysis have been explored in the zero temperature limit, e.g., the ECP corresponds to the mean energy of minimum energy wave packets (MEWP, to be defined below) and CMD is an approximate dynamics based on these wave packets. In this work we present a full account of these findings, with focus on static equilibrium properties.
This paper is organized as follows. The theoretical part is presented in Sec. II, which is divided into several subsections. Sections II A and II B review the Schrödinger and path integral formulation of the equilibrium density matrix. The influence of a constant external force on the Euclidean action of the system is treated in Sec. II C. Important path integral quantities, such as the centroid density and the static-force response (SFR) density matrix, are introduced in Sec. II D. In Sec. II E the centroid density and the SFR density matrix are derived in the Schrödinger formulation by means of their moment generating functions. Sec. III presents some relevant physical consequences of the theoretical part. It includes two subsections: Sec. III A deals with the static isothermal susceptibility at finite temperatures, and Sec. III B treats the zero temperature limit of the ECP and of the SFR density matrix. The concluding remarks are given in Sec. IV.
## II Theory
### A Schrödinger formulation of the density matrix
We consider the simple case of a quantum particle of mass $`m`$ having bound states in a one-dimensional potential $`V(x)`$. The extension to the many-particle case, for distinguishable particles, is presented in Appendix A. For the subsequent analysis of the Feynman path centroid density, we study the static response of the particle to a constant external force, $`f`$. Therefore, we consider the Hamiltonian of the particle, $`\widehat{H}(f)`$, as a function of the external force:
$$\widehat{H}(f)=\widehat{H}-f\widehat{x},$$
(1)
where $`\widehat{x}`$ is the position operator and $`\widehat{H}\equiv \widehat{H}(0)`$ is the Hamiltonian operator in the absence of the force:
$$\widehat{H}=\frac{\widehat{p}^2}{2m}+V(\widehat{x}),$$
(2)
with $`\widehat{p}`$ being the momentum operator. The following convention is used throughout the text: operators (or functions) that share the same name, as $`\widehat{H}(f)`$ and $`\widehat{H}`$, but differ either by the presence or number of arguments, represent different operators (or functions). The eigenfunctions, $`|\psi _n(f)\rangle `$, of the Hamiltonian, $`\widehat{H}(f)`$, satisfy the Schrödinger equation:
$$\widehat{H}(f)|\psi _n(f)\rangle =E_n(f)|\psi _n(f)\rangle ,$$
(3)
where $`E_n(f)`$ are the corresponding eigenvalues. In the position representation, the eigenfunctions are expressed as:
$$\psi _n(x;f)\equiv \langle x|\psi _n(f)\rangle ,\text{ with }n=0,1,2,\dots .$$
(4)
If we consider a canonical ensemble of independent particles in thermal equilibrium at temperature $`T`$, the statistical state of the system in the presence of the external force, $`f`$, is described in terms of the unnormalised density operator:
$$\widehat{\rho }(f)=e^{-\beta \widehat{H}(f)},$$
(5)
where $`\beta `$ is the inverse temperature $`(k_BT)^{-1}`$, with $`k_B`$ being the Boltzmann constant. The position representation of this operator is:
$$\rho (x,x^{\prime };f)=\langle x|\widehat{\rho }(f)|x^{\prime }\rangle =\sum _{n=0}^{\infty }e^{-\beta E_n(f)}\psi _n(x;f)\psi _n^{\ast }(x^{\prime };f).$$
(6)
The unnormalised particle’s probability density is given by the diagonal elements of the density matrix, $`\rho (x,x;f)`$. The canonical partition function in the presence of the external force is defined by the trace of the density matrix:
$$Z(f)=\int _{-\infty }^{\infty }dx\,\rho (x,x;f)=\sum _{n=0}^{\infty }e^{-\beta E_n(f)},$$
(7)
where the second equality is obtained by substitution of $`\rho (x,x;f)`$ by the expression given in Eq. (6).
### B Path integral formulation of the density matrix
In the path integral formulation the elements of the unnormalised density matrix in the presence of the external force are given by:
$$\rho (x,x^{\prime };f)=\int _{x\equiv x(0)}^{x^{\prime }\equiv x(\beta \hbar )}D[x(u)]\,e^{-\frac{1}{\hbar }S[x(u);f]}.$$
(8)
$`u`$ is an imaginary time that varies between 0 and $`\beta \hbar `$, and $`S[x(u);f]`$ is the functional of the Euclidean action of the path $`x(u)`$:
$$S[x(u);f]=\int _0^{\beta \hbar }du\left(\frac{m}{2}\dot{x}(u)^2+V[x(u)]-fx(u)\right).$$
(9)
The function $`\dot{x}(u)`$ represents the derivative of $`x(u)`$ with respect to $`u`$. The integral measure of the path integral is given by:
$$D[x(u)]=\lim _{N\rightarrow \infty }\prod _{k=1}^{N-1}dx_k\left(\frac{mN}{2\pi \beta \hbar ^2}\right)^{\frac{N}{2}},$$
(10)
where the path $`x(u)`$ has been discretized as $`(x,x_1,x_2,\dots ,x_{N-1},x^{\prime })`$.
### C Euclidean action under an external force
It is important to note the effect of a constant external force on the Euclidean action of a given path $`x(u)`$. As the external force is independent of the imaginary time $`u`$, it can be taken out of the time integral in Eq. (9) with the result:
$$S[x(u);f]=S[x(u)]-f\beta \hbar X,$$
(11)
where $`S[x(u)]`$ is the Euclidean action in the absence of the force, and $`X`$ is the average point of the path, or path centroid:
$$X=\frac{1}{\beta \hbar }\int _0^{\beta \hbar }du\,x(u).$$
(12)
For the set of paths contributing to the path integral in Eq. (8), the relation of having the same path centroid is an equivalence relation. Then, the sum over paths may be decomposed into a sum over equivalence classes. A class of paths is the subset formed by all paths that have the same centroid coordinate $`X`$. We note that if two paths, say $`x(u)`$ and $`x^{\prime }(u)`$, belong to the same class, then the contribution of the external force to the Euclidean action, \[i.e., the term $`f\beta \hbar X`$ in Eq. (11)\], is identical for both paths. In other words, the contribution of a constant external force to the Euclidean action is identical for all paths that have the same centroid. This simple property is the origin of the interest of fixed centroid path integrals in statistical physics.
### D Static-force response density matrix and centroid density
The most important quantities to be defined for the class of paths with centroid at $`X`$ are the SFR density matrix and the centroid density . We have introduced a new name, static-force response density matrix, because we will show that this matrix depends on the response of the system to constant external forces of arbitrary magnitude. The name reduced density matrix has been used before for this quantity. However, this name is less convenient as it is often encountered in another context. The SFR density matrix is a constrained path integral over a given class of paths, defined by introducing a delta function in the integrand of Eq. (8):
$$\sigma (x,x^{\prime };X;f)=\int _x^{x^{\prime }}D[x(u)]\,\delta \left(X-\frac{1}{\beta \hbar }\int _0^{\beta \hbar }du\,x(u)\right)e^{-\frac{1}{\hbar }S[x(u);f]}.$$
(13)
The centroid density, $`C(X;f)`$, in the presence of the external force $`f`$, is defined as the trace of the SFR density matrix:
$$C(X;f)=\int _{-\infty }^{\infty }dx\,\sigma (x,x;X;f).$$
(14)
By introducing the expression of the Euclidean action, \[Eq. (11)\], into Eq. (13), we derive an equation that relates the SFR density matrix in the presence of the external force $`f`$, with the same quantity in absence of the force:
$$\sigma (x,x^{};X;f)=\sigma (x,x^{};X)e^{\beta fX}.$$
(15)
By setting $`x=x^{\prime }`$ in the last equation and integrating over $`x`$, we get the following analogous relation for the centroid density:
$$C(X;f)=C(X)e^{\beta fX}.$$
(16)
The last two equations show that if the centroid density, $`C(X)`$, and the SFR density matrix, $`\sigma (x,x^{\prime };X)`$, are known for a given class of paths in the absence of an external force, then the corresponding quantities in the presence of the force $`f`$ are easily obtained by multiplication with the constant factor $`e^{\beta fX}`$. Dividing the left and right hand sides of the last two equations, we get:
$$[C(X;f)]^{-1}\sigma (x,x^{\prime };X;f)=[C(X)]^{-1}\sigma (x,x^{\prime };X).$$
(17)
This equation implies that the normalized SFR density matrix defined for a given centroid position $`X`$ is an invariant quantity with respect to the value of the force $`f`$. The normalization constant is the trace of the matrix, i.e., the centroid density. Setting $`x=x^{\prime }`$, we get:
$$\varphi ^2(x;X)\equiv [C(X;f)]^{-1}\sigma (x,x;X;f).$$
(18)
$`\varphi ^2(x;X)`$ represents, as a function of $`x`$, a normalized particle’s probability density. The normalization of this probability density is easily checked by integrating Eq. (18) with respect to the variable $`x`$, and with the help of the definition given in Eq. (14):
$$\int _{-\infty }^{\infty }dx\,\varphi ^2(x;X)=1.$$
(19)
This relation is valid irrespective of the value of the centroid coordinate, $`X`$.
The results of this Section may help to understand path integral results that are usually presented from a different point of view. For example, the CMD approximation is an approximate dynamics for a normalized density matrix whose diagonal elements in the position representation are given by $`\varphi ^2(x;X)`$. An important property of this density matrix is its invariance with respect to the application of an external force to the particle. Associated to each centroid position, $`X`$, there is a different density matrix, and the approximate dynamics of CMD is formulated through a dynamical equation for the time evolution of the centroid coordinate.
We are now in position to relate the SFR density matrix and the centroid density to physical observables, an essential step for the understanding of important path integral concepts from the point of view of the Schrödinger formulation.
### E Moment generating function of the centroid density
After the decomposition of the path integral given in Eq. (8) into disjoint classes, we can recover the whole path integral, which is the quantity related to physical observables, by an integral over all classes. The elements of the unnormalised density matrix, in the presence of an external force $`f`$, are then obtained as:
$$\rho (x,x^{\prime };f)=\int _{-\infty }^{\infty }dX\,\sigma (x,x^{\prime };X;f).$$
(20)
Introducing in this equation the expression given in Eq. (15) for the SFR density matrix, one gets:
$$\rho (x,x^{\prime };f)=\int _{-\infty }^{\infty }dX\,\sigma (x,x^{\prime };X)e^{\beta fX}.$$
(21)
Setting $`x=x^{\prime }`$ and integrating over the variable $`x`$ \[with the help of Eqs. (7) and (14)\], one arrives at:
$$Z(f)=\int _{-\infty }^{\infty }dX\,C(X)e^{\beta fX}.$$
(22)
The last two equations provide the essential link for the correspondence between fixed centroid path integrals and the Schrödinger formulation. Although Eq. (22) can be found in Kleinert’s book, the physical implications of this relation have not been explored until recently. The physical content of Eqs. (21) and (22) can be derived by two complementary points of view, as suggested by the mathematical structure of these equations:
* The centroid density, $`C(X)`$, in absence of the external force, is related to the partition function, $`Z(f)`$, by an integral transformation defined by the kernel $`e^{\beta fX}`$. The same relation holds between the SFR density matrix, $`\sigma (x,x^{\prime };X)`$, and the density matrix $`\rho (x,x^{\prime };f)`$. This integral transformation is performed with respect to the centroid variable $`X`$, and the product $`\beta f`$ is the corresponding conjugate variable. Note that a general property of an integral transformation is that the physical information contained in the direct and transformed functions is identical. Therefore the centroid density $`C(X)`$, as a function of $`X`$, carries the same physical information as the partition function $`Z(f)`$, as a function of $`f`$. The same relation holds for $`\sigma (x,x^{\prime };X)`$ and $`\rho (x,x^{\prime };f)`$. This integral transformation is called a two-sided Laplace transform, with properties similar to those of a Fourier transform (a numerical illustration is sketched after this list).
* The relation between the centroid density, $`C(X)`$, and the partition function in the presence of a force, $`Z(f)`$, can be interpreted as the relation between a probability density for the variable $`X`$ and its moment generating function. The same relation holds for $`\sigma (x,x^{\prime };X)`$ and $`\rho (x,x^{\prime };f)`$. In the following, we develop this point of view.
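A small numerical illustration of the first point may be helpful. For the harmonic oscillator the centroid density is Gaussian with variance $`1/(\beta m\omega ^2)`$ (cf. Appendix B), and the two-sided Laplace transform of Eq. (22) reproduces $`Z(f)/Z=\mathrm{exp}[\beta f^2/(2m\omega ^2)]`$; in the Python sketch below (with $`\hbar =1`$) all parameter values are illustrative.

```python
import numpy as np

beta, m, w = 2.0, 1.0, 1.0                 # illustrative values, hbar = 1
X = np.linspace(-8.0, 8.0, 8001)
dX = X[1] - X[0]

# normalized harmonic-oscillator centroid density: Gaussian, variance 1/(beta m w^2)
varX = 1.0 / (beta * m * w**2)
C_norm = np.exp(-X**2 / (2.0 * varX)) / np.sqrt(2.0 * np.pi * varX)

for f in (0.0, 0.5, 1.0):
    lhs = np.sum(C_norm * np.exp(beta * f * X)) * dX   # Eq. (22) divided by Z
    rhs = np.exp(beta * f**2 / (2.0 * m * w**2))       # Z(f)/Z for the oscillator
    print(f, lhs, rhs)                                 # the two columns agree
```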
The moments of the centroid density in the presence of an external force are defined as:
$$\left\{X^n\right\}_f=\frac{\int _{-\infty }^{\infty }dX\,C(X;f)X^n}{\int _{-\infty }^{\infty }dX\,C(X;f)}=[Z(f)]^{-1}\int _{-\infty }^{\infty }dX\,C(X;f)X^n.$$
(23)
The brackets $`\left\{\cdots \right\}_f`$ indicate an average over the centroid density, $`C(X;f)`$, and the second equality is obtained from Eq. (22). The moments of $`X`$ in absence of the external force are:
$$\left\{X^n\right\}=Z^{-1}\int _{-\infty }^{\infty }dX\,C(X)X^n,$$
(24)
where the partition function is $`Z\equiv Z(0)`$. Note that the subindex “$`f`$” is omitted from the brackets when $`f=0`$. The definition of the moment generating function, $`M(\beta f)`$, of the normalized centroid density, $`Z^{-1}C(X)`$, is:
$$M(\beta f)=\left\{e^{\beta fX}\right\}=Z^{-1}\int _{-\infty }^{\infty }dX\,C(X)e^{\beta fX}.$$
(25)
From Eqs. (22) and (25) we get:
$$M(\beta f)=\frac{Z(f)}{Z}.$$
(26)
This result has a clear physical meaning: the ratio of partition functions $`Z(f)/Z`$ is the function generating the moments of the centroid density. Therefore, these moments may be defined as:
$$\left\{X^n\right\}=\left[\frac{\partial ^nM(\beta f)}{\partial (\beta f)^n}\right]_{f=0},$$
(27)
and with the help of Eq. (26), we get:
$$\left\{X^n\right\}=\frac{1}{Z\beta ^n}\left[\frac{\partial ^nZ(f)}{\partial f^n}\right]_{f=0}.$$
(28)
The l.h.s. of the last equation, i.e., the moments of the centroid density, is a quantity typically defined by fixed centroid path integrals, while the r.h.s. contains physical quantities, that are defined within the Schrödinger formulation. It is more interesting to study the cumulant generating function, $`K(\beta f)`$, of the centroid density, which is defined as the logarithm of the moment generating function:
$$K(\beta f)=\mathrm{ln}\frac{Z(f)}{Z}=-\beta [F(f)-F],$$
(29)
where $`F(f)`$ is the free energy derived from the partition function: $`Z(f)=\mathrm{exp}[-\beta F(f)]`$ and $`F\equiv F(0)`$. The cumulants, $`\kappa _n`$, of the centroid density are obtained as:
$$\kappa _n=\left[\frac{\partial ^nK(\beta f)}{\partial (\beta f)^n}\right]_{f=0}.$$
(30)
With the help of Eq. (29) we get:
$$\kappa _n=-\frac{1}{\beta ^{n-1}}\left[\frac{\partial ^nF(f)}{\partial f^n}\right]_{f=0}.$$
(31)
The last equation shows that the cumulants of the centroid density are related to the change in the free energy as a constant external force is acting on the particle. The first cumulants of the centroid density correspond to the mean value and the dispersion of the centroid coordinate:
$$\kappa _1=\left\{X\right\}=-\left[\frac{\partial F(f)}{\partial f}\right]_{f=0},$$
(32)
$$\kappa _2=\delta X^2=\left\{X^2\right\}-\left\{X\right\}^2=-\frac{1}{\beta }\left[\frac{\partial ^2F(f)}{\partial f^2}\right]_{f=0}.$$
(33)
The centroid dispersion, $`\delta X^2`$, has been related to a “classical delocalization” of the particle. The precise physical meaning of this quantity will be presented in Sec. III.
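The harmonic oscillator provides a quick symbolic check of Eqs. (29)–(33): a constant force rigidly shifts the well, so $`Z(f)/Z=\mathrm{exp}[\beta f^2/(2m\omega ^2)]`$ (cf. Appendix B), and the cumulants follow by differentiation with respect to $`\beta f`$. A minimal sketch:

```python
import sympy as sp

s, beta, m, w = sp.symbols('s beta m omega', positive=True)

# With the conjugate variable s = beta*f, the harmonic-oscillator cumulant
# generating function K(beta f) = beta f**2/(2 m w**2) becomes K(s) below.
K = s**2 / (2 * beta * m * w**2)

kappa1 = sp.diff(K, s, 1).subs(s, 0)   # mean centroid position: 0
kappa2 = sp.diff(K, s, 2).subs(s, 0)   # centroid dispersion: 1/(beta*m*w**2)
print(kappa1, kappa2)
```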
As the mathematical structure of Eqs. (21) and (22) is identical, the derivation done for the moment and cumulant generating functions of the centroid density can be repeated step by step for the SFR density matrix. The most important result of this analysis may be enunciated as follows: the cumulant generating function, $`K_\sigma (\beta f)`$, of the SFR density matrix, $`\sigma (x,x^{\prime };X)`$, is the logarithm of the ratio of the elements of the density matrix in the presence and in the absence of the external force $`f`$:
$$K_\sigma (\beta f)=\mathrm{ln}\frac{\rho (x,x^{\prime };f)}{\rho (x,x^{\prime })}.$$
(34)
As an application of the analysis presented so far we derive in Appendix B the centroid density and the SFR density matrix for a harmonic oscillator by means of their cumulant generating functions. The information needed for this task is the value of the partition function, $`Z(f)`$, and the density matrix, $`\rho (x,x^{};f)`$, as a function of the external force $`f`$. This example shows how results previously obtained by solving fixed centroid path integrals can be easily derived in the Schrödinger formulation.
To close this theoretical Section, we explain the precise meaning of considering the centroid density as a classical-like density for the quantum particle, which is one of the suggestive pictures derived from the QTST and CMD approximations. Let $`\rho ^{cla}(x)`$ be the classical limit of the unnormalised particle’s probability density in absence of an external force. Then the classical partition function $`Z^{cla}(f)`$, in the presence of a constant external force, is given by:
$$Z^{cla}(f)=\int _{-\infty }^{\infty }dx\,\rho ^{cla}(x)e^{\beta fx}.$$
(35)
This equation has formally the same structure as Eq. (22) for the centroid density, and illustrates in which sense the centroid density behaves as a classical density for the quantum particle. In the classical limit the ratio $`Z^{cla}(f)/Z^{cla}`$ is the moment generating function of the classical particle’s probability density, while in the quantum case, the ratio $`Z(f)/Z`$ is the moment generating function of the centroid density.
The unnormalised particle’s probability density is given in the classical case by a function of the potential energy:
$$\rho ^{cla}(x)=\left(\frac{m}{2\pi \hbar ^2\beta }\right)^{\frac{1}{2}}e^{-\beta V(x)},$$
(36)
In analogy to the classical result, the ECP, $`F_{ecp}(X)`$, for the quantum particle is defined from the centroid density as:
$$C(X)=\left(\frac{m}{2\pi \hbar ^2\beta }\right)^{\frac{1}{2}}e^{-\beta F_{ecp}(X)}.$$
(37)
We remark that the ECP is sometimes called the potential of mean force , the centroid potential , the effective centroid potential , or the quantum effective potential . However, these names all refer to the same physical quantity.
## III Physical implications
In this section we focus on some physical implications readily derived from the correspondence between fixed centroid path integrals and the Schrödinger formulation. Firstly we present some results valid at arbitrary temperatures.
### A Static susceptibility
The first moments of the centroid density have a clear physical meaning. We recall that the average position of the quantum particle in thermal equilibrium in the presence of an external force, $`f`$, is given by:
$$\overline{x}(f)=Z(f)^{-1}\int _{-\infty }^{\infty }dx\,x\,\rho (x,x;f).$$
(38)
From the definition of the SFR density matrix and of the centroid density \[Eqs. (13) and (14)\], it is easy to show that the average centroid position, defined from Eq. (23) by setting $`n=1`$, and the average particle position are identical quantities:
$$\overline{x}(f)=\left\{X\right\}_f=-\frac{\partial F(f)}{\partial f},$$
(39)
where the last equality is a consequence of an exact generalization of Eq. (32), derived for $`f=0`$, to an arbitrary value of the external force. From Eqs. (33) and (39), we obtain the following expression for the dispersion of the centroid coordinate:
$$\delta X^2=\left\{X^2\right\}-\left\{X\right\}^2=\frac{1}{\beta }\left[\frac{\partial \overline{x}(f)}{\partial f}\right]_{f=0}.$$
(40)
The derivative that appears in the last equation is the static isothermal susceptibility, $`\chi _{xx}^T`$, a quantity that describes the static response of the average position of the quantum particle, with respect to the application of an external force. We have then:
$$\delta X^2=\frac{1}{\beta }\chi _{xx}^T.$$
(41)
This isothermal susceptibility, $`\chi _{xx}^T`$, is related to the canonical correlation of the position operator, $`\langle \widehat{x};\widehat{x}\rangle `$, by the following relation:
$$\chi _{xx}^T=\beta \left(\langle \widehat{x};\widehat{x}\rangle -\overline{x}^2\right),$$
(42)
where $`\overline{x}\equiv \overline{x}(0)`$ and the canonical correlation is defined as:
$$\langle \widehat{x};\widehat{x}\rangle =Z^{-1}\beta ^{-1}\int _0^\beta d\lambda \,\text{Tr}\left[\mathrm{exp}(-\beta \widehat{H})\mathrm{exp}(\lambda \widehat{H})\widehat{x}\mathrm{exp}(-\lambda \widehat{H})\widehat{x}\right].$$
(43)
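Equation (43) is straightforward to evaluate once the spectrum and the position matrix elements are known. The sketch below performs the $`\lambda `$ integral analytically for each matrix element and, for the harmonic oscillator (with $`\hbar =1`$ and illustrative parameter values), reproduces the value $`1/(\beta m\omega ^2)`$ consistent with Eqs. (44) and (45) below.

```python
import numpy as np

beta, m, w = 2.0, 1.0, 1.0        # hbar = 1; illustrative parameter values
N = 60                            # energy-basis truncation

E = w * (np.arange(N) + 0.5)      # harmonic oscillator levels
x = np.zeros((N, N))              # position matrix in the energy basis
for n in range(N - 1):
    x[n, n + 1] = x[n + 1, n] = np.sqrt((n + 1) / (2.0 * m * w))

Z = np.sum(np.exp(-beta * E))
kubo = 0.0
for a in range(N):
    for b in range(N):
        if x[a, b] == 0.0:
            continue
        dE = E[b] - E[a]
        # the lambda integral of Eq. (43), done analytically per matrix element
        kubo += x[a, b] ** 2 * (np.exp(-beta * E[a]) - np.exp(-beta * E[b])) / (beta * dE)
print(kubo / Z, 1.0 / (beta * m * w**2))   # the two numbers agree
```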
Comparing Eqs. (41) and (42), and noting that $`\overline{x}=\left\{X\right\}`$, we deduce that the second moment of the centroid density is identical to the canonical correlation of the position operator:
$$\left\{X^2\right\}=\langle \widehat{x};\widehat{x}\rangle .$$
(44)
This result is consistent with the analysis presented in Ref. concerning the relation between the centroid time correlation function and the Kubo transformed position correlation function. Eq. (44) is interesting not only for understanding the physical meaning of the centroid density, but also for practical applications. In the case of a harmonic oscillator of angular frequency $`\omega `$ the dispersion of the centroid coordinate, $`\delta X^2`$, is given in Eq. (B5). The static isothermal susceptibility for the harmonic case is then:
$$\chi _{xx}^T=\beta \delta X^2=\frac{1}{m\omega ^2}.$$
(45)
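The chain of relations in Eqs. (41)-(44) can be checked numerically by diagonalizing the Hamiltonian on a grid: the Kubo-transformed correlation of Eq. (43) should match a finite-difference estimate of the force response of Eq. (40). The sketch below (with $`\hbar =k_B=1`$) is our own illustration; the quartic example potential, mass, and temperature are arbitrary assumptions, not values used in this work.

```python
import numpy as np

# Numerical check of Eqs. (41)-(44) on a grid (hbar = k_B = 1).  The quartic
# potential, mass and temperature are illustrative assumptions of this sketch.
n, box, m, beta = 400, 12.0, 1.0, 4.0
x = np.linspace(-box/2, box/2, n); dx = x[1] - x[0]
Vx = 0.5*x**2 + 0.25*x**4

def spectrum(f):
    """Eigen-decomposition of H(f) = -(1/2m) d2/dx2 + V(x) - f*x (finite differences)."""
    od = -0.5/(m*dx**2)*np.ones(n - 1)
    H = np.diag(1.0/(m*dx**2) + Vx - f*x) + np.diag(od, 1) + np.diag(od, -1)
    return np.linalg.eigh(H)

def xbar(f):
    """Thermal average position under a constant external force f."""
    E, psi = spectrum(f)
    w = np.exp(-beta*(E - E[0]))
    return ((x[:, None]*psi**2).sum(0) @ w)/w.sum()

E, psi = spectrum(0.0)
w = np.exp(-beta*(E - E[0])); Z = w.sum()
X = psi.T @ (x[:, None]*psi)                 # matrix elements <m|x|n>
dE = E[:, None] - E[None, :]                 # E_m - E_n
deg = np.abs(dE) < 1e-10
G = np.where(deg, beta*w[:, None],           # lambda-integral of Eq. (43), done analytically
             (w[None, :] - w[:, None])/np.where(deg, 1.0, dE))
kubo = (X**2*G).sum()/(Z*beta)               # <x;x>, i.e. {X^2} according to Eq. (44)
chi_corr = beta*(kubo - xbar(0.0)**2)        # Eq. (42)
chi_resp = (xbar(1e-3) - xbar(-1e-3))/2e-3   # d xbar / df at f = 0, cf. Eq. (40)
print(chi_corr, chi_resp)                    # the two estimates should agree
```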
For an arbitrary anharmonic potential, we expect that at low temperatures the static isothermal susceptibility will be determined by a frequency close to the first excitation energy, $`\mathrm{\Delta }E`$, of the system. Then, substituting $`\mathrm{\Delta }E_{app}/\hbar `$ for $`\omega `$ in the last equation, we get the following approximation to $`\mathrm{\Delta }E`$:
$$\mathrm{\Delta }E\approx \mathrm{\Delta }E_{app}=\hbar \left(\frac{k_BT}{m\delta X^2}\right)^{\frac{1}{2}}.$$
(46)
The capability of this approximation has been checked as a function of temperature for several one-dimensional model potentials, which are listed in Table I. The potentials $`V_2`$, $`V_4`$, and $`V_{10}`$ are power functions of $`x`$, whose coefficients were chosen so that a particle of mass 16 au displays the same value of the excitation energy in each of these potentials. $`V_{dw}`$ is a double-well potential whose first excitation energy corresponds to the tunnel splitting. By Monte Carlo path integral simulations we have obtained the value of the centroid dispersion, $`\delta X^2`$, as a function of temperature, and the excitation energy has been estimated by Eq. (46); a minimal sketch of this procedure is given below. The results are shown in Fig. 1, where the value of $`\mathrm{\Delta }E_{app}`$ and the thermal energy, $`k_BT`$, are displayed in units of the $`\mathrm{\Delta }E`$ associated with each potential. For the harmonic potential, $`V_2`$, the approximation is exact at all temperatures. For the potentials $`V_4`$ and $`V_{10}`$ the approximation is remarkably good at temperatures where the thermal energy is lower than about $`1/4`$ of the first excitation energy. For the double-well potential $`V_{dw}`$, the tunnel splitting energy is approximated in this temperature range with an error of about 25%, a value that is at least of the correct order of magnitude. This approximation can be applied to many-body problems, using the centroid dispersions resulting from the diagonalization of the tensor given in Eq. (A3).
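A minimal path-integral Monte Carlo sketch of this estimate follows, assuming a harmonic test potential so that the result can be checked against the exact $`\delta X^2`$ of Eq. (B5); all numerical parameters are illustrative choices of ours, not those of Table I ($`\hbar =k_B=1`$).

```python
import numpy as np

rng = np.random.default_rng(0)
m, omega, beta, P = 16.0, 0.1, 20.0, 64      # mass, test frequency, 1/k_BT, beads (hbar = 1)
tau = beta/P
V = lambda q: 0.5*m*omega**2*q**2            # harmonic test potential: exact answer known

q = np.zeros(P)
centroids = []
for sweep in range(20000):
    for j in range(P):                       # single-bead Metropolis moves
        new = q[j] + rng.normal(0.0, 0.3)
        jm, jp = (j - 1) % P, (j + 1) % P
        dS = (0.5*m/tau)*((q[jp] - new)**2 + (new - q[jm])**2
                          - (q[jp] - q[j])**2 - (q[j] - q[jm])**2) \
             + tau*(V(new) - V(q[j]))
        if dS <= 0 or rng.random() < np.exp(-dS):
            q[j] = new
    shift = rng.normal(0.0, 0.5)             # rigid path translation: kinetic action invariant
    dS = tau*(V(q + shift).sum() - V(q).sum())
    if dS <= 0 or rng.random() < np.exp(-dS):
        q += shift
    if sweep >= 4000:
        centroids.append(q.mean())           # centroid coordinate of the path

dX2 = np.var(centroids)
print(dX2, 1.0/(beta*m*omega**2))            # MC estimate vs exact delta X^2, Eq. (B5)
print(np.sqrt(1.0/(beta*m*dX2)))             # Delta E_app of Eq. (46); should be close to omega
```

The rigid translation move is included because it leaves the kinetic part of the discretized action unchanged, so it decorrelates the centroid coordinate much faster than single-bead moves alone.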
### B $`T\to 0`$ limit
In the zero temperature limit, the ECP and the normalized particle’s probability densities, $`\varphi ^2(x;X)`$, \[defined in Eq. (18)\] have a simple physical meaning: they are related to the eigenvalues and eigenfunctions of the Hamiltonian $`\widehat{H}(f)`$.
We are going to combine two pieces of information to obtain the low temperature limit of $`\varphi ^2(x;X)`$. Firstly, we know that, as $`T\to 0`$, the normalized particle’s probability density derived from the path integral in Eq. (8) converges towards the probability density of the ground state of the Hamiltonian $`\widehat{H}(f)`$. The average value of the position operator $`\widehat{x}`$ for this ground state is $`\overline{x}_0(f)`$. Secondly, we have derived that the dispersion of the centroid coordinate goes to zero in this limit \[see Eq. (40)\]. This implies that the centroid density, $`C(X;f)`$, must be a delta function of $`X`$ centered at its average value $`\left\{X\right\}_f=\overline{x}_0(f)`$ \[see Eq. (39)\]. Therefore only the class of paths with centroid coordinate at $`X=\overline{x}_0(f)`$ contributes to the path integral in Eq. (8). As a consequence, the normalized probability density, $`\varphi ^2(x;X)`$, derived for the class of paths with centroid at $`X=\overline{x}_0(f)`$ should be identical to the ground state probability density of the particle in the presence of the external force $`f`$:
$$\underset{T\to 0}{lim}\varphi ^2(x;X)=|\psi _0(x;f)|^2,\text{ for }X\equiv \overline{x}_0(f).$$
(47)
As an illustration of this result we display in Fig. 2 the probability densities, $`\varphi ^2(x;X)`$, obtained by fixed centroid Monte Carlo simulations of a particle in the model potentials $`V_2`$, $`V_4`$, and $`V_{10}`$ at a temperature of $`k_BT=0.001`$ au, which is about 1/300 of the first excitation energy, and therefore a good approximation to the low temperature limit. For each model potential, the simulations were performed at three different centroid positions, $`X`$. The ground state densities $`|\psi _0(x;f)|^2`$ obtained by numerical solution of the time independent Schrödinger equation, for different values of the external force, $`f`$, are also shown in the figure. Both probability densities are identical, apart from tiny deviations (not visible on the scale of the figure) due to the finite temperature and the statistical uncertainty of the simulation. In the harmonic case the displayed probability densities, $`\varphi ^2(x;X)`$, differ only by a rigid displacement as a function of the centroid position $`X`$. We note that these curves are identical to the probability densities of the coherent states of the harmonic oscillator.
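The reference densities $`|\psi _0(x;f)|^2`$ behind Fig. 2 come from a numerical solution of the time independent Schrödinger equation; a finite-difference sketch of such a calculation is given below ($`\hbar =1`$; the anharmonic example potential and grid parameters are assumptions of ours, not the potentials of Table I).

```python
import numpy as np

# Ground state of H(f) = -(1/2m) d2/dx2 + V(x) - f*x by finite differences (hbar = 1).
n, box, m = 600, 10.0, 16.0
x = np.linspace(-box/2, box/2, n); dx = x[1] - x[0]
Vx = 0.5*m*0.1**2*x**2 + 0.05*x**4           # illustrative anharmonic potential

def ground_state(f):
    od = -0.5/(m*dx**2)*np.ones(n - 1)
    H = np.diag(1.0/(m*dx**2) + Vx - f*x) + np.diag(od, 1) + np.diag(od, -1)
    E, psi = np.linalg.eigh(H)
    dens = psi[:, 0]**2/dx                   # normalized |psi_0(x; f)|^2
    return E[0], dens, float(x @ psi[:, 0]**2)   # E_0(f), density, xbar_0(f)

for f in (0.0, 0.2, 0.5):
    E0, dens, X0 = ground_state(f)
    print(f, round(E0, 4), round(X0, 4))     # dens is the T -> 0 limit of phi^2(x; X0)
```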
In the following we show that the functions $`\varphi ^2(x;X)`$ are related to a variational principle. Suppose that we look for a quantum state, $`|\mathrm{\Psi }\rangle `$, of the particle with Hamiltonian $`\widehat{H}`$ whose mean energy,
$$E_{min}(X)=\langle \mathrm{\Psi }|\widehat{H}|\mathrm{\Psi }\rangle ,$$
(48)
is minimum against small variations of $`|\mathrm{\Psi }\rangle `$. Moreover, this state must satisfy two constraints: its mean position is fixed at an arbitrary value $`X`$, and it is normalized:
$$X=\langle \mathrm{\Psi }|\widehat{x}|\mathrm{\Psi }\rangle ,$$
(49)
$$1=\langle \mathrm{\Psi }|\mathrm{\Psi }\rangle .$$
(50)
By straightforward application of calculus of variations (see Appendix C), one finds that $`|\mathrm{\Psi }\rangle `$ must be the ground state of a Hamiltonian $`\widehat{H}(f)`$, i.e., $`|\mathrm{\Psi }\rangle \equiv |\psi _0(f)\rangle `$:
$$(\widehat{H}-f\widehat{x})|\psi _0(f)\rangle =E_0(f)|\psi _0(f)\rangle ,$$
(51)
where the force $`f`$ and the corresponding ground state energy, $`E_0(f)`$, of the Hamiltonian $`\widehat{H}(f)`$, appear as Lagrange multipliers. The value of $`f`$ must be chosen so that the constraint in Eq. (49) is satisfied. The fixed mean position $`X`$ then corresponds to the average position of the ground state $`|\psi _0(f)\rangle `$, i.e. $`X\equiv \overline{x}_0(f)`$. The average energy, $`E_{min}(X)`$, of these minimum energy states is derived as a function of $`X`$ with the help of Eqs. (51) and (49) as:
$$E_{min}(X)=E_0(f)+fX,\text{ for }X\equiv \overline{x}_0(f).$$
(52)
We call the states $`|\psi _0(f)\rangle `$ the minimum energy wave packets (MEWP’s) of the unperturbed Hamiltonian $`\widehat{H}`$. We can now reinterpret Eq. (47) by saying that, in the zero temperature limit, the probability density $`\varphi ^2(x;X)`$ corresponds to the MEWP’s of the Hamiltonian $`\widehat{H}`$. A characteristic property of these states is that their average energy is stationary (i.e., minimum) with respect to any arbitrary change in their probability density that leaves constant their average position $`X`$. In the following we show that this average energy is identical to the ECP.
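A small numerical illustration of Eqs. (49)-(52): tune the Lagrange multiplier $`f`$ until the ground state of $`\widehat{H}(f)`$ has the prescribed mean position $`X`$, and then evaluate $`E_{min}(X)=E_0(f)+fX`$. The grid, mass, and quartic example potential below are assumptions of this sketch ($`\hbar =1`$).

```python
import numpy as np
from scipy.optimize import brentq

# Constrained variational principle, Eqs. (49)-(52), on a finite-difference grid.
n, box, m = 600, 10.0, 16.0
x = np.linspace(-box/2, box/2, n); dx = x[1] - x[0]
Vx = 0.5*m*0.1**2*x**2 + 0.05*x**4
od = -0.5/(m*dx**2)*np.ones(n - 1)

def ground(f):
    H = np.diag(1.0/(m*dx**2) + Vx - f*x) + np.diag(od, 1) + np.diag(od, -1)
    E, psi = np.linalg.eigh(H)
    return E[0], float(x @ psi[:, 0]**2)     # E_0(f) and the mean position <x>_0(f)

def E_min(X):
    f = brentq(lambda g: ground(g)[1] - X, -5.0, 5.0)   # enforce constraint (49)
    return ground(f)[0] + f*X                           # Eq. (52): the T -> 0 ECP

print([round(E_min(X), 4) for X in (-1.0, -0.5, 0.0, 0.5, 1.0)])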
As $`T\to 0`$ we have derived that the centroid density, $`C(X;f)`$, is a delta function centered at the average position of the ground state of $`\widehat{H}(f)`$, i.e. $`\left\{X\right\}_f=\overline{x}_0(f)`$. We also know that the integral of $`C(X;f)`$ with respect to $`X`$ is identical to the partition function $`Z(f)`$ \[see Eqs. (22) and (16)\]. Then, in the zero temperature limit we can write:
$$\underset{T\to 0}{lim}C(X;f)=Z(f)\delta [X-\overline{x}_0(f)].$$
(53)
The asymptotic behavior of $`C(X;f)`$ as $`T\to 0`$ is given by an exponential of the ECP $`\mathrm{exp}[-\beta F_{ecp}(X;f)]`$, while the asymptotic behavior of $`Z(f)`$ is given by an exponential of the ground state energy $`\mathrm{exp}[-\beta E_0(f)]`$. From Eq. (53) the asymptotic behavior of $`C(X;f)`$ and $`Z(f)`$ should be the same at $`X=\overline{x}_0(f)`$, that is:
$$\underset{T\to 0}{lim}F_{ecp}(X;f)=E_0(f),\text{ for }X\equiv \overline{x}_0(f).$$
(54)
From the property given in Eq. (16) for the centroid density, and from the definition of the ECP in Eq. (37) it is easy to derive a relation valid at arbitrary temperature:
$$F_{ecp}(X;f)=F_{ecp}(X)-fX.$$
(55)
By substitution of the last expression into Eq. (54) we obtain the desired result:
$$\underset{\beta \to \mathrm{\infty }}{lim}F_{ecp}(X)=E_0(f)+fX=E_{min}(X),\text{ for }X\equiv \overline{x}_0(f),$$
(56)
where the last equality corresponds to Eq. (52). Our final conclusion is that, in the limit $`T\to 0`$, the value of the ECP, $`F_{ecp}(X)`$, is equal to the average energy of that MEWP whose average position coincides with the centroid coordinate $`X`$. The MEWP’s turn out to be the ground states of the Hamiltonian $`\widehat{H}(f)`$. The results obtained in this Subsection may be seen as a consequence of the original variational principle of Feynman and Hibbs, when it is applied to the zero temperature limit. However, the important role of the external force does not appear in the original path integral formulation of this variational principle.
The CMD approximation is, in the $`T\to 0`$ limit, an approximate dynamics based on MEWP’s, which is accurate even for highly anharmonic potentials. Cao and Voth have shown that CMD correctly reproduces the classical limit at high temperatures. The classical limit of our formulation of CMD in terms of MEWP’s can be derived by the generalization of the results at $`T\to 0`$ to arbitrary temperatures using the MEWP’s given by the excited states $`|\mathrm{\Psi }_n(f)\rangle `$ (see Appendix C). From this generalization we find that CMD can be formulated for harmonic systems at arbitrary temperatures as an exact MEWP dynamics, even in the classical limit. The most important applications of CMD are related to condensed phase quantum dynamics of anharmonic systems. From our analysis in terms of MEWP’s we find that for anharmonic potentials CMD provides accurate results in both the $`T\to 0`$ and the classical limits, but further work is needed to clarify the capability of CMD at intermediate temperatures.
## IV Summary
We have presented the correspondence between fixed centroid path integrals and the Schrödinger formulation. This analysis shows that the Feynman path centroid density and the SFR density matrix depend on the static response of the quantum system to a constant external force. The path centroid density is related by a simple integral transformation to the partition function of the quantum system under the action of an external force. The same integral transformation is found between a classical phase space density and the classical partition function. Therefore, the interpretation of the centroid density as a classical-like density for the quantum system, which is one of the ideas suggested by the QTST and CMD approximations, has a precise physical meaning: the path centroid density behaves “classically” in the sense that under the action of an external force it has the same static response properties as a classical phase space density. The same integral transformation is found between the normalized canonical density matrix in the presence of an external force and the SFR density matrix. Within the Schrödinger formulation it is essential to introduce a constant external force to define the centroid density and the elements of the SFR density matrix as transformed functions. However, in the path integral formulation the introduction of external forces is not needed, because one works directly with the transformed functions. This facility of the path integral formulation is the origin of a large number of path integral applications, from a variational approximation of the thermodynamic properties of a quantum system to the CMD approximation of time correlation functions of quantum particles in thermal equilibrium.
In the present work, we have not tried to present the implications of our formulation in current applications based on fixed centroid path integrals. Nevertheless, our analysis has led to results that clarify the physical meaning of fixed centroid quantities. In particular, we have shown that the dispersion of the centroid coordinate is related to the static isothermal susceptibility, and that, in the zero temperature limit, the fixed centroid path integrals are related to the MEWP’s of the unperturbed system. At $`T=0`$, the normalized SFR density matrix is a pure state density matrix corresponding to a MEWP of the Hamiltonian, and the Feynman effective classical potential is the average energy of the MEWP’s.
###### Acknowledgements.
This work was supported by DGICYT (Spain) under contract PB96-0874. We thank E. Artacho for helpful discussions, and L.M. Sesé and M.C. Böhm for critically reading the manuscript.
## A Extension to many-particles
The results presented for a particle moving in one dimension can be easily generalized to multidimensional $`n`$-body systems of distinguishable particles. We denote the set of $`n`$ particle coordinates as a 3$`n`$-dimensional vector $`𝐫=(𝐫_\mathrm{𝟏},𝐫_\mathrm{𝟐},\mathrm{},𝐫_𝐧)`$ where the $`i`$-particle position vector is $`𝐫_𝐢=(r_{ix},r_{iy},r_{iz})`$. The set of external forces is $`𝐟=(𝐟_\mathrm{𝟏},𝐟_\mathrm{𝟐},\mathrm{},𝐟_𝐧)`$. The external force $`𝐟_𝐢`$ acts on the particle $`𝐫_𝐢`$ through the linear term, $`-𝐟_𝐢\widehat{𝐫}_𝐢`$, appearing in the potential energy of the Hamiltonian $`\widehat{H}(𝐟)`$. The centroid density associated with the $`n`$-body Hamiltonian in the absence of external forces, $`\widehat{H}`$, can be defined by a generalization of Eq. (22) as:
$$Z(𝐟)=\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }𝑑𝐗\,C(𝐗)\,e^{\beta 𝐟𝐗},$$
(A1)
where $`𝐗=(𝐗_\mathrm{𝟏},𝐗_\mathrm{𝟐},\mathrm{},𝐗_𝐧)`$ is a 3$`n`$-dimensional vector formed by the centroid positions of each particle. The last equation represents a 3$`n`$-dimensional two-sided Laplace transform. The cumulants of the centroid density are obtained as the derivatives of the free energy $`F(𝐟)`$ associated to the partition function $`Z(𝐟)`$:
$$\left\{X_{ix}\right\}=-\left[\frac{\partial F(𝐟)}{\partial f_{ix}}\right]_{𝐟=\mathrm{𝟎}},$$
(A2)
$$\left\{X_{ix}X_{jy}\right\}-\left\{X_{ix}\right\}\left\{X_{jy}\right\}=-\frac{1}{\beta }\left[\frac{\partial ^2F(𝐟)}{\partial f_{ix}\partial f_{jy}}\right]_{𝐟=\mathrm{𝟎}}.$$
(A3)
The last expression represents the components of a tensor that is related to the static isothermal susceptibility tensor by multiplication by the constant $`\beta `$.
## B Centroid density and static-force response density matrix for a linear harmonic oscillator
We consider a canonical ensemble of particles of mass $`m`$ moving in a one-dimensional harmonic potential $`V_{ho}(x)=(1/2)m\omega ^2x^2`$. In the absence of an external force, the partition function and the free energy are:
$$Z=\left[2\mathrm{sinh}\left(\frac{\beta \hbar \omega }{2}\right)\right]^{-1},$$
(B1)
$$F=k_BT\mathrm{ln}\left[2\mathrm{sinh}\left(\frac{\beta \hbar \omega }{2}\right)\right].$$
(B2)
By application of a constant external force, $`f`$, the potential changes to $`V_{ho}(x)-fx`$. The new potential energy is also a quadratic function of $`x`$, and the corresponding free energy is easily derived as:
$$F(f)=F-\frac{f^2}{2m\omega ^2}.$$
(B3)
From Eqs. (32) and (33) we obtain the first two cumulants of the centroid density $`C(X)`$ as:
$$\left\{X\right\}=\left[\frac{f}{m\omega ^2}\right]_{f=0}=0,$$
(B4)
$$\delta X^2=\frac{1}{\beta m\omega ^2}.$$
(B5)
The higher order cumulants, which are proportional to successive derivatives of the last equation with respect to $`f`$, are zero. This result implies that the centroid density must be a Gaussian function of $`X`$:
$$C(X)=N_1G_X(\overline{X};\delta X^2),$$
(B6)
where $`N_1`$ is a normalization constant and $`G_X(\overline{X};\delta X^2)`$ is a normalized Gaussian function of $`X`$:
$$G_X(\overline{X};\delta X^2)=\left(\frac{1}{2\pi \delta X^2}\right)^{\frac{1}{2}}\mathrm{exp}\left[-\frac{(X-\overline{X})^2}{2\delta X^2}\right].$$
(B7)
The constant $`N_1`$ can be obtained from Eq. (22) by setting $`f=0`$:
$$Z=\int _{-\infty }^{\infty }𝑑X\,C(X)=N_1.$$
(B8)
The centroid density of the harmonic oscillator is then:
$$C(X)=ZG_X(0;\frac{1}{\beta m\omega ^2}).$$
(B9)
The last result has been derived several times using fixed centroid path integrals. The ECP for the harmonic oscillator is obtained from its definition in Eq. (37) and the last equation as:
$$F_{ecp}(X)=F-k_BT\mathrm{ln}\left(\beta \hbar \omega \right)+\frac{1}{2}m\omega ^2X^2.$$
(B10)
The SFR density matrix $`\sigma (x,x^{\prime };X)`$ may be derived following the same scheme. The cumulant generating function in this case was given in Eq. (34). The elements of the density matrix for the harmonic oscillator in the absence of an external force are:
$$\rho (x,x^{\prime })=\left[\frac{m\omega }{2\pi \hbar \mathrm{sinh}(\beta \hbar \omega )}\right]^{\frac{1}{2}}\mathrm{exp}\left\{-\frac{m\omega }{2\hbar \mathrm{sinh}(\beta \hbar \omega )}\left[(x^2+x^{\prime 2})\mathrm{cosh}(\beta \hbar \omega )-2xx^{\prime }\right]\right\}.$$
(B11)
In the presence of an external force, we obtain from the definition in Eq. (6):
$$\rho (x,x^{\prime };f)=\mathrm{exp}\left(\frac{\beta f^2}{2m\omega ^2}\right)\rho (x-x_{min},x^{\prime }-x_{min}),$$
(B12)
where we have used the following relations:
$$E_n(f)=E_n-\frac{f^2}{2m\omega ^2},$$
(B13)
$$\psi _n(x;f)=\psi _n(x-x_{min}).$$
(B14)
$`x_{min}=f/(m\omega ^2)`$ is the position of minimum energy for the potential $`V_{ho}(x)-fx`$. The cumulants of the centroid variable $`X`$, with $`\sigma (x,x^{\prime };X)`$ as its probability density, are evaluated as the derivatives of the cumulant generating function \[Eq. (34)\] with respect to the force at $`f=0`$. We find that only the first and second cumulants $`(\kappa _1,\kappa _2)`$ are different from zero. Therefore $`\sigma (x,x^{\prime };X)`$ must be a Gaussian function of the variable X, with $`\kappa _1`$ being the mean value of $`X`$, and $`\kappa _2`$ its dispersion:
$$\sigma (x,x^{\prime };X)=N_2G_X(\kappa _1;\kappa _2).$$
(B15)
The result for the first cumulant is obtained from the cumulant generating function after straightforward algebra as:
$$\kappa _1=\frac{\delta x_{cla}^2}{\delta x^2}\left(\frac{x+x^{\prime }}{2}\right),$$
(B16)
and the result for the second cumulant is:
$$\kappa _2=\frac{\delta x_{cla}^2}{\delta x^2}(\delta x^2-\delta x_{cla}^2).$$
(B17)
The constants $`\delta x^2`$ and $`\delta x_{cla}^2`$ correspond to the dispersion of the position coordinate for a canonical ensemble of harmonic oscillators in the quantum and classical case, respectively:
$$\delta x^2=\frac{\hbar }{2m\omega }\mathrm{coth}\left(\frac{\beta \hbar \omega }{2}\right),$$
(B18)
$$\delta x_{cla}^2=\frac{1}{\beta m\omega ^2}.$$
(B19)
The normalization constant, $`N_2`$, is determined from Eq. (21) by setting $`f=0`$. We get:
$$N_2=\rho (x,x^{\prime }).$$
(B20)
The final result is:
$$\sigma (x,x^{\prime };X)=\rho (x,x^{\prime })G_X\left[\frac{\delta x_{cla}^2}{\delta x^2}\left(\frac{x^{\prime }+x}{2}\right);\frac{\delta x_{cla}^2}{\delta x^2}(\delta x^2-\delta x_{cla}^2)\right].$$
(B21)
The elements of the SFR density matrix are the product of the elements of the canonical density matrix by a normalized Gaussian function of the centroid coordinate $`X`$. This equation is identical to Eq. (B.4) of Ref. (apart from a different grouping of factors).
## C Minimum energy wave packets
We want to find the MEWP’s associated with the Hamiltonian $`\widehat{H}`$. To simplify the notation we use the following abbreviations: $`\mathrm{\Psi }\equiv \langle x|\mathrm{\Psi }\rangle `$, $`\mathrm{\Psi }^{\prime }\equiv \partial \mathrm{\Psi }/\partial x`$, $`\mathrm{\Psi }^{\prime \prime }\equiv \partial ^2\mathrm{\Psi }/\partial x^2`$, $`V\equiv V(x)`$, and $`D\equiv -\hbar ^2/(2m)`$.
The MEWP’s minimize the functional:
$$E_{min}=\int _{-\infty }^{\infty }𝑑x(D\mathrm{\Psi }\mathrm{\Psi }^{\prime \prime }+V\mathrm{\Psi }^2)$$
(C1)
with the constraints given in Eqs. (49,50) for the wave function $`\mathrm{\Psi }`$. We call:
$$A=D\mathrm{\Psi }\mathrm{\Psi }^{\prime \prime }+V\mathrm{\Psi }^2-fx\mathrm{\Psi }^2-E\mathrm{\Psi }^2,$$
(C2)
where $`f`$ and $`E`$ are two Lagrange multipliers. From the Euler equation:
$$\frac{\partial A}{\partial \mathrm{\Psi }}-\frac{d}{dx}\frac{\partial A}{\partial \mathrm{\Psi }^{\prime }}+\frac{d^2}{dx^2}\frac{\partial A}{\partial \mathrm{\Psi }^{\prime \prime }}=0,$$
(C3)
one derives the differential equation that must be satisfied by the MEWP’s:
$$D\mathrm{\Psi }^{\prime \prime }+(V-fx)\mathrm{\Psi }=E\mathrm{\Psi },$$
(C4)
which is the time-independent Schrödinger equation corresponding to the Hamiltonian $`\widehat{H}(f)`$. All functions satisfying this equation, i.e., the eigenfunctions $`|\mathrm{\Psi }_n(f)\rangle `$, are MEWP’s in the sense that they minimize the functional in Eq. (C1) subject to the constraints given in Eqs. (49,50). The excited states ($`n>0`$) correspond to local minima of the functional. In the study of the $`T\to 0`$ limit of the SFR density matrix we use only the states $`|\mathrm{\Psi }_0(f)\rangle `$ corresponding to the global minimum of the functional, but the excited states are necessary to generalize this study to arbitrary temperatures.
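As a quick consistency check, this variational derivation can be reproduced symbolically; the sketch below (our own, using SymPy’s Euler–Lagrange helper) applies the Euler equation (C3) to the integrand of Eq. (C2) and recovers Eq. (C4).

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, D, f, E = sp.symbols('x D f E')
V = sp.Function('V')
Psi = sp.Function('Psi')

# Integrand of Eq. (C2), including both Lagrange-multiplier terms
A = (D*Psi(x)*Psi(x).diff(x, 2)
     + V(x)*Psi(x)**2 - f*x*Psi(x)**2 - E*Psi(x)**2)

# Euler equation (C3) for a Lagrangian containing second derivatives
eq = euler_equations(A, [Psi(x)], [x])[0]
print(sp.simplify(eq))   # 2*D*Psi'' + 2*(V(x) - f*x - E)*Psi = 0, i.e. Eq. (C4)
```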
# 𝐵𝑉𝑅𝐼 photometry of QSO 0957+561A, B: Observations, new reduction method and time delay
## 1 Introduction
Since the discovery of the first gravitational lens, the Twin QSO 0957+561 (Walsh et al. 1979), the system has been subject to the most rigorous attempts to measure the time delay between its components. The especially propitious configuration of QSO 0957+561, two images separated by $`6\stackrel{\prime \prime }{\mathrm{.}}1`$ of a quasar ($`z=1.41`$) lensed by a galaxy ($`z=0.36`$) placed at the center of a cluster of galaxies, makes it suitable for photometric monitoring. Although the time delay controversy has recently been solved, establishing a value for the time delay of $`\sim 420`$ days (Oscoz et al. 1997; Kundić et al. 1997; Pelt et al. 1998a), an on-going monitoring of the QSO components and comparison between the light-curves may yield important results, both for the study of the physical properties of quasars (Peterson 1993, Gould & Miralda-Escudé 1997) and for the detection of possible microlensing events (Gott 1981, Pelt et al. 1998b). Moreover, and although it will not lead to substantial changes in the value of the Hubble constant, a secure determination of the time delay is crucial for microlensing studies.
The main requirement to obtain useful information from the light-curves of the two components is a high level of photometric accuracy. However, the QSO 0957+561 is a very complicated system for two main reasons: i) the proximity of the point-like QSO components; and ii) the extended light distribution of the underlying lensing galaxy. Moreover, the whole scenario presents an additional complication due to the large amount of available data to reduce and analyze (Vanderriest et al. 1989; Press et al. 1992; Schild & Thomson 1995), hence automatic photometry codes become mandatory. Up to now, the only automated solution presented was developed by Colley & Schild (1999), who used HST data to subtract the lens galaxy and estimate the level of cross talk between the QSO components by selecting reference stars.
In this paper an alternative solution to this problem is presented: PSF fitting by means of DAOPHOT software. To check the feasibility of this new technique, it was applied to a sample of simulated data.
The dataset presented here is the result of three years of monitoring, from 1996 February to 1998 July, a program which included 220 sessions of observation in the $`R`$ band, 62 in the $`B`$ band, 72 in the $`V`$ band and 68 in the $`I`$ band. The data acquisition and reduction processes are explained in detail in §2 and §3, respectively. The software environment used for the different reduction and analysis processes was IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. (Image Reduction and Analysis Facility, see http://www.noao.edu for more information), so any task or package referred to elsewhere is included in the IRAF environment. Section 4 is devoted to presenting the observed light curves and in §5 we discuss the time delay obtained from these data. Finally, a brief summary of the results is given in §6.
## 2 Data Acquisition
Lens monitoring was performed in three consecutive seasons, 1996 February to June, 1996 October to 1997 July, and 1997 October to 1998 July (hereafter the 96, 97, and 98 seasons, respectively), using the CCD camera of the 82 cm IAC-80 telescope, situated at the Instituto de Astrofísica de Canarias’ Teide Observatory (Tenerife, Canary Islands, Spain). A Thomson 1024$`\times `$1024 chip was used, offering a field of nearly $`7\stackrel{\prime }{\mathrm{.}}5`$. Standard $`BVRI`$ broadband filters were used for the observations, corresponding fairly closely to the Landolt system (Landolt 1992). The IAC Time Allocation Committee awarded time for two kinds of observing runs: routine observation nights (RON nights hereafter) in which we could make use of 1200 s per night, and normal observation runs (NON nights hereafter) in which the telescope was available during the whole night for our project. The observational procedure was as follows:
* RON nights: on dark nights one image of 1200 s was taken; otherwise (moonlit nights), several short exposures of 300-400 s each were taken, re-centered on selected field stars, and averaged to give the total exposure. The position of each individual field star was measured using the imexamine task and images were combined using the imcombine task.
* NON nights: under photometric conditions, $`BVRI`$ photometry of QSO0957+561 was performed. Landolt standard fields (Landolt 1992) were observed to provide the photometric calibration. When the nights were not of photometric quality several exposures of 1200 s each were performed in every filter to obtain a final deep exposure by averaging them.
The final data set is composed of 15 $`B`$, 14 $`V`$, 44 $`R`$, 19 $`I`$ brightness measurements in the 96 season; 13 $`B`$, 25 $`V`$, 72 $`R`$, 18 $`I`$ data points in the 97 season; and 34 $`B`$, 33 $`V`$, 104 $`R`$, 31 $`I`$ data points in the 98 season. High quality photometry in $`BVRI`$ was obtained on 30 nights during 1997 and 1998. Mean results for two reference stars (H and D, see Fig. 1) and QSO components are given in Table 1. It is important to mention that QSOB magnitudes were corrected for the light of the lens galaxy following the procedure explained in §3.1.
## 3 Data reduction process
A remarkable characteristic of the photometric data presented here is their high degree of homogeneity; they were obtained using the same telescope and instrumentation over the entire monitoring campaign. Therefore, the reduction process can be the same for all the frames. In a first step, the data were reduced using the ccdred package. The overscan was subtracted from the images, which were then flat-fielded using very high signal-to-noise master flats, each of them taken from the mean of ten sky flat exposures made shortly before the beginning of the observations. These basic CCD reductions (bias, flat-field) are crucial when the noise must be kept as low as possible. However, to attempt the observation of quasar brightness fluctuations of $`\sim 0.01`$ magnitudes (in order to detect short-timescale microlensing events), a high level of photometric accuracy is needed. To this end it is crucial to separate every source of error, adopting a specific solution for each of them. There are two main sources of error in CCD photometry of the QSO 0957+561 system:
1. Extinction errors: It is known that the main part of the variability of the observed target magnitude is explained in terms of atmospheric extinction and air-mass variability. Extinction errors are complicated by color terms when broad multi-band photometry is dealt with.
2. Aperture Photometry errors: Due to the special configuration of the QSO 0957+561 system, there are some specific aperture photometry errors to take into account. As demonstrated in Colley & Schild (1999), these errors are driven by seeing variations, and can be separated into two parts as follows: 1) Influence of the lens galaxy: Since the core of the giant elliptical lens galaxy of $`R=18.3`$ is separated by only $`1^{\prime \prime }`$ from the B image, most of the galaxy’s light lies inside the image B aperture, but outside the image A aperture. This effect could introduce errors of order 1-2% in the final measured fluxes from images A and B (see Colley & Schild 1999). 2) Overlapping of images. The separation between the two images is $`6\stackrel{\prime \prime }{\mathrm{.}}1`$ and hence, when poor seeing conditions prevail, there is an important effect of cross-contamination of light between the two quasar images.
As explained above, the amount of archived data is so large (more than a thousand 1k$`\times `$1k CCD images) that an automated photometry code is necessary. For extinction errors, the best and most traditional method is differential photometry with several field stars close to the lens components (Kjeldsen & Frandsen 1992).
However, the solution for aperture photometry errors presents a higher level of difficulty. The only automated solution offered to date is explained in Colley & Schild (1999). These authors used HST data (Bernstein et al. 1997) of the lens galaxy for subtraction and reference stars to estimate the level of cross-talk between the images. After these corrections, they found that photometry is reliable to about 5.5 mmag (0.55%) over three consecutive nights of real data. In this paper an alternative solution to the problem is proposed: PSF fitting using DAOPHOT software. A new, completely automatic IRAF task, pho2com, has been developed. Using a sample of simulated data it is demonstrated that the proposed scheme can reach high precision photometry: 0.5% for the B component and 0.2% for the A component. The following two sections are devoted to explaining each of the adopted solutions to eliminate CCD photometry errors.
### 3.1 PSF photometry: the pho2com IRAF task
It is well known that PSF fitting is the most precise method to carry out photometry of faint and/or crowded field stars, whereas aperture photometry is better for brighter, isolated stars. In order to benefit from these facts the pho2com task, written in the IRAF command language, combines aperture photometry (APPHOT package) and PSF fitting (DAOPHOT package) as explained below. Before applying the pho2com task it was necessary to select an image as a reference image and re-center all the frames, using accurate centroid determination from field stars, onto the reference one. The pho2com photometry has two main iterations:
* Iteration 1: accurate sky background determination.
A precise determination of the sky background is extremely important for accurate photometry. There are mainly two different ways to find the sky background: global-sky or local-sky determination. Whereas in the local-sky method the sky value is calculated from pixels around objects, in the global-sky determination the sky is described by a simple, slowly changing function of the position in the field, e.g., a plane. This last method is the most precise, but uncrowded fields are necessary in order to prevent sky level changes from field stars. This is the case for the TwQSO field, where most pixels see a background sky value unperturbed by stars, so the global-sky option was used for sky determination. The main steps of the current iteration are:
1. Reference stars and QSO components were removed from the frame using PSF fittings (allstar DAOPHOT task). This was done, as explained above, to prevent perturbations from these objects in the sky determination.
2. The sky level was determined by means of a smooth surface fitting (imsurf task) to the frame. The resulting image of iteration 1 is a sky-subtracted frame.
* Iteration 2: object photometry.
As commented above, the pho2com task uses aperture photometry for reference stars and PSF fitting for the TwQSO components. Following Stetson (1987), the PSF is defined from a small sample of isolated stars (G, H, E, D stars in our case). The PSF fit has two components: an analytic and an empirical one. For the 2D analytic function the user can select between an elliptical Gaussian, an elliptical Moffat function, an elliptical Lorentzian and a Penny function consisting of an elliptical Gaussian core and Lorentzian wings. These functions were applied to each frame, selecting the one which yields the smallest scatter in the fit. For the PSF empirical component a linear variation with position in the image proved to give the best results. The main steps in iteration 2 are:
1. Applying aperture photometry with a variable aperture of radius=2xFWHM (the FWHM was measured from reference stars), the reference star fluxes were extracted. It is important to remember that the frames resulting from iteration 1 are sky-subtracted, and therefore the sky background value was forced to zero in the aperture intensity extractions.
2. PSF fit photometry, with a variable aperture of radius=FWHM, was applied to all the objects.
3. Aperture corrections were computed from the previous data to compare the QSO component fluxes with reference stars (aperture correction will transform data with radius=FWHM to radius=2xFWHM) and standard stars (aperture correction will transform data with radius=FWHM to photometric standard star radius, normally 4xFWHM).
A sample of simulated astronomical data was chosen in order to test the performance of the pho2com task. Simulations were made with the artdata package. Each simulated frame included the lens galaxy, the A and B quasar components and the D and H reference stars (see Table 1 for photometric data). The lens galaxy was created with a de Vaucouleurs (elliptical) light distribution, $`I(r)=\mathrm{exp}\{-7.67[(r/R_e)^{1/4}-1]\}`$ with $`R_e=4\stackrel{\prime \prime }{\mathrm{.}}5`$, taking into account published HST data (Bernstein et al. 1997) and ground-based photometry (Schild & Weekes 1984, Bernstein et al. 1993). The accurate position of each object was also defined using HST astrometry. Finally 200 simulated images were created with the mkobjects task. The only free parameter (see Table 2) was the atmospheric seeing, which was simulated with values between $`0\stackrel{\prime \prime }{\mathrm{.}}9`$-$`2\stackrel{\prime \prime }{\mathrm{.}}7`$ (see Figs. 2 and 3). Effects of pixellation and noise were included (for more details see the mknoise task). Noise effects were considered by adding Gaussian and Poisson noise to the images, which have a constant background (for each filter a mean sky value is deduced from real data). This kind of ideal photometry is not, of course, a full noise description. In any case the main error sources (lens galaxy light contamination and cross-talk between components) were included in the simulated images, so the final estimated errors should be considered first order ones, where higher order corrections (faint neighboring stars or galaxies, basic CCD reductions, etc.; see Gilliland et al. 1991) are neglected. Aperture (with a fixed radius of $`3^{\prime \prime }`$) and pho2com photometry was applied to the simulated images. Differential light curves are plotted in Figs. 2 and 3. Correlations with seeing variations are clear. Although the seeing profile is the same for the two reference stars and the QSO 0957+561 components, fixed aperture photometry has final mean errors of 1.5% and 2.2% for the A and B components, respectively.
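The seeing-driven cross-talk can be illustrated with a much simpler toy model than the full artdata simulation: two circular Gaussian PSFs 6.1″ apart measured through a 3″-radius aperture. The sketch below is our own; the pixel scale is an assumed, illustrative value, not a documented property of the IAC-80 camera.

```python
import numpy as np

# Cross-talk between the A and B images under fixed-aperture photometry.
scale = 0.43                                 # arcsec per pixel (assumption)
sep = 6.1/scale                              # A-B separation in pixels
rap = 3.0/scale                              # aperture radius in pixels
y, xg = np.mgrid[-80:80, -80:80].astype(float)

def aperture_fraction(seeing, x0):
    """Fraction of a unit-flux Gaussian PSF centred at (x0, 0) inside the aperture."""
    sigma = seeing/(2.3548*scale)            # FWHM -> sigma, in pixels
    psf = np.exp(-((xg - x0)**2 + y**2)/(2.0*sigma**2))
    psf /= psf.sum()
    return psf[xg**2 + y**2 <= rap**2].sum()

for seeing in (0.9, 1.5, 2.1, 2.7):          # the simulated seeing range, in arcsec
    own = aperture_fraction(seeing, 0.0)     # own light recovered by the aperture
    leak = aperture_fraction(seeing, sep)    # contamination from the companion image
    print(seeing, round(own, 3), round(leak, 5))
```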
PSF fitting photometry improves on aperture photometry magnitudes, but subtraction of the lens galaxy is still not perfect and some of its light is present in the final B component magnitude; therefore the final QSOB magnitudes are over-estimated. To correct the B magnitudes for underlying galaxy light, linear relations between seeing and magnitude errors were calculated by means of simulated data. Figures 4 and 5 are plots of magnitude errors for A and B components versus seeing for $`BVRI`$ filters. After correcting data for these errors, final errors of 0.2% and 0.5% were obtained for the A and B simulated components, respectively. Two main conclusions can be deduced: 1) As explained above, the B component presents higher errors than A, due to its proximity to the lens galaxy; 2) Because the lens galaxy is extremely red (Schild & Weekes 1984), QSOB magnitude errors are larger in the red colors. Real data were also corrected for underlying galaxy light using the linear correlations of Figs. 4 and 5.
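A sketch of that correction step follows; the (seeing, error) pairs are invented placeholders standing in for the simulation results of Figs. 4 and 5, and the sign convention simply assumes the fitted quantity is the bias to be subtracted.

```python
import numpy as np

# Linear seeing correction for the B-image magnitudes (cf. Figs. 4 and 5):
# fit dm = a*seeing + b on simulated frames and subtract it from measured data.
seeing_sim = np.array([0.9, 1.2, 1.5, 1.8, 2.1, 2.4, 2.7])   # placeholder values
dmB_sim = np.array([0.010, 0.013, 0.018, 0.024, 0.031, 0.039, 0.048])
a, b = np.polyfit(seeing_sim, dmB_sim, 1)

def correct_B(mag_B, seeing):
    """Remove the seeing-correlated galaxy-light bias from a B-component magnitude."""
    return mag_B - (a*seeing + b)
```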
### 3.2 Differential photometry
The basic technique of differential photometry is very simple, and consists in determining the difference, in terms of magnitude, of the A and B images relative to selected field stars.
The transformation equations used to obtain the standard magnitudes are the following:
$`b=`$ $`B+B_0+B_1(B-V)+B_2X`$ (1)
$`v=`$ $`V+V_0+V_1(B-V)+V_2X`$
$`r=`$ $`R+R_0+R_1(V-R)+R_2X`$
$`i=`$ $`I+I_0+I_1(R-I)+I_2X,`$
where $`BVRI`$ are the standard magnitudes; $`bvri`$ are the instrumental magnitudes (i.e. $`r=-2.5\mathrm{log}[F_r]`$, where $`F_r`$ is the object flux through a predefined aperture); $`X`$ is the airmass; and $`(B_0,V_0,R_0,I_0)`$, $`(B_1,V_1,R_1,I_1)`$, $`(B_2,V_2,R_2,I_2)`$ are the zero-point constants, the color term coefficients and the extinction coefficients, respectively, determined from observations of standard stars. For a given object, the main source of magnitude variability can be explained in terms of atmospheric extinction and air mass variability. The usual way to remove this error is to use a comparison star observed at the same time under the same conditions (this is one of the main advantages of CCD observations). Under this assumption, the differential magnitude, for instance $`R`$, is then found as
$$r_o-r_s=(R_o-R_s)+R_1[(V-R)_o-(V-R)_s],$$
(2)
where subindices $`o`$,$`s`$ represent the object and comparison star respectively. The term $`R_1[(V-R)_o-(V-R)_s]`$ is very important and is null only if the color term of the system is equal to zero, $`R_1=0`$, or the target object and the companion star have similar colors, $`(V-R)_o=(V-R)_s`$. In $`BVRI`$ photometry, color terms are not zero and, to decrease errors, it is necessary to have similar spectra for the object and the comparison star. In this case it is possible to approximate $`R_o=R_s+(r_o-r_s)`$.
Figure 1 shows the field of QSO 0957+561 in the R band obtained as a combination of all the individual images taken during the three seasons. The total equivalent exposure time is 51.46 hours and the limiting R magnitude 25. The set of potential comparison stars, F, G, H, E, D, was examined differentially in sets of 4 versus one star. This allowed us to establish the stability of each comparison star. After careful analysis, only two stars -D and H- were selected as reference stars for differential photometry. Photometric errors were calculated using the statistical error analysis developed by Howell et al. (1988), which uses the rms of the differential photometry of comparison stars (H-D in our case) to deduce the photometric errors of QSO components A and B. In the initial rms calculations the derived values were higher than expected, so Eq. (2) was considered, which for the selected reference stars can be written as
$$r_H-r_D=(R_H-R_D)+R_1colVR_{HD},$$
(3)
where $`colVR_{HD}=(V-R)_H-(V-R)_D`$ which, taking into account the data in Table 1, is equal to 0.08. The color terms $`B_1,V_1,R_1,I_1`$ are not normally expected to change during the course of a night, as they are due to the mismatch between the instrumental bandpasses and the standard Johnson $`BVRI`$ bandpasses. However, instrumental bandpasses are derived as the convolution of the mirror reflectivities, the filter transmissions, and the chip response, so significant changes are indeed expected in the course of a season. Under this assumption, Eq. (3) can be formulated as
$$r_H-r_D=(R_H-R_D)+f_R(JD)colVR_{HD},$$
(4)
where $`f_R(JD)=R_1`$ is a smooth function of the Julian Day which fits the possible time changes of the color term $`R_1`$. This equation is demonstrated in Fig. 6, where we plotted the color term $`R_1`$ derived from Landolt standard stars and the same term derived from Eq. (4) using observational data from the reference stars H, D. The curve is a parabolic fitting to the reference star data which has, due to error propagation, large errorbars ($`\sim 0.1`$). If it is assumed that the parabolic fitting represents the real data without noise, it is clear that the smooth variations in the differential light curves of the reference stars H, D are mainly due to changes in color terms. To correct the $`R`$ data for color term variations (the process is equivalent for the other filters) the following steps were taken: 1) from the differential light curve of the reference stars the $`f_R(JD)`$ function was calculated by means of a parabolic fitting; 2) for the reference star data the term $`f_R(JD)colVR_{HD}`$ was directly subtracted from the $`r_H-r_D`$ observational data, obtaining the differential magnitude values $`R_H-R_D`$; and 3) for the QSO data it was necessary to assume mean constant values for $`colVR_{AD}=(V-R)_A-(V-R)_D=-0.09`$ and $`colVR_{BD}=(V-R)_B-(V-R)_D=-0.16`$, and the final corrected R magnitudes are
$`R_A=R_D+(r_A-r_D)+f_R(JD)colVR_{AD}=15.163+(r_A-r_D)-f_R(JD)\cdot 0.09`$ (5)
$`R_B=R_D+(r_B-r_D)+f_R(JD)colVR_{BD}=15.163+(r_B-r_D)-f_R(JD)\cdot 0.16.`$ (6)
For the current system, the red spectra of the D reference star and those of the QSO components are similar, so the derived color term correction values are rather small, $`\sim 0.5\%`$ for the $`R`$ and $`I`$ filters. On the contrary, the QSO 0957+561 is bluer than the D star, and in this case color term errors become as high as 2% and 5% for the $`V`$ and $`B`$ colors, respectively. The final mean errors for the reference stars and the A, B component light curves are presented in Table 3.
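A compact sketch of this correction pipeline follows, implementing Eqs. (4)-(6) as reconstructed above. The arrays `jd`, `rH`, `rD`, `rA` stand for per-night Julian dates and instrumental magnitudes, and the numerical constants (including the reference-star difference `dR_HD`) are placeholders, not values taken from Table 1.

```python
import numpy as np

# Color-term drift correction, Eqs. (4)-(6); all numbers are placeholders.
colVR_HD, colVR_AD, R_D, dR_HD = 0.08, -0.09, 15.163, 0.50

def corrected_RA(jd, rH, rD, rA):
    # parabolic fit of the color term f_R(JD) from the H-D differential curve, Eq. (4)
    fR = np.poly1d(np.polyfit(jd, ((rH - rD) - dR_HD)/colVR_HD, 2))
    return R_D + (rA - rD) + fR(jd)*colVR_AD          # Eq. (5)
```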
## 4 $`BVRI`$ Light Curves
The results of our monitoring program are shown in Figs. 7, 8, 9, and 10 (the photometric data are available at URL http://www.iac.es/lent). In these figures we show the light-curves and error-bars for components A (black circles) and B (red squares) of Q0957+561 in the $`R`$, $`B`$, $`V`$, and $`I`$ bands, where the data for the B component are shifted by 425 days (the time delay estimate in this paper, see §5). Final magnitudes were calculated using the pho2com task and then corrected for (1) the influence of the lens galaxy (see §3.1) and (2) color term variations (see §3.2). Note the similar behavior of the curves for both components (especially in Fig. 7, corresponding to the $`R`$ band).
The robustness of the proposed photometry method can be assessed by comparing the magnitudes of the QSO 0957+561 A and B components deduced from monitoring light curves (averaged values) and Landolt standard star calibrations (see Table 1). The calculated values are presented in Table 4. The global agreement between both sets of magnitude values is clear.
The photometric data presented in Table 1 also need discussion. In principle the colors of QSOA and QSOB, averaged over the monitoring campaign, should be essentially the same if sight-line-dependent extinction is ignored. A slight reddening is present in component A, although the significance of this excess, $`E(V-R)=0.07\pm 0.08`$, is questionable. In order to verify the significance of the previous result we have plotted, in Figure 11, the $`V-R`$ color difference between components A and B, with B shifted by 425 days so that the emission time is the same for both components over the monitoring campaign, averaged every 20 days. The “bluing” of component A is now clear and we may try to understand its origin:
1) A lens galaxy absorption effect would have produced a redder, and not a bluer, $`(V-R)`$ color for image B.
2) The most likely explanation, proposed by Michalitsianos et al. (1997), is that the ray paths of the lensed components intercept different regions of a galactic disk associated with the host galaxy of the source, which is viewed pole-on, in the quasar rest frame.
A preliminary analysis of the data obtained in the $`R`$ filter has yielded an important result: component B is brighter than component A. The $`R`$ data have been averaged every 10 and 20 days and then the B component light curve has been shifted by 425 days. The average difference between components A and B is $`m_B-m_A=-0.06`$ mag for both the 10- and 20-day averages. Moreover, the averaged B/A magnification ratio is 1.06, varying between 1 and 1.12, in perfect agreement with the results described in Press & Rybicki (1998), indicating the prolongation of the long-timescale microlensing event during 1997 and 1998. At any rate, an exhaustive analysis of the long-timescale microlensing in the whole dataset is being conducted and will be presented in a future paper. This study will also include a comparison between the short-timescale microlensing during an epoch of calmness (the 96/97 seasons) and the rapid microlensing at a relatively active (but non-violent) epoch (the 97/98 seasons). The consequences for the population of dark-matter objects in the lensing galaxy and quasar properties will also be discussed and put into perspective.
## 5 Time Delay
Today, the historical controversy regarding the value of the time delay of Q0957+561A, B is almost solved. After twenty years of monitoring, recent data establish this value at around 420 days. There is, however, a small controversy between two values, $`\mathrm{\Delta }\tau _{\mathrm{BA}}`$ = 417 days (Kundić et al. 1997; Pelt et al. 1998a) and $`\mathrm{\Delta }\tau _{\mathrm{BA}}`$ = 424 days (Pelt et al. 1996; Oscoz et al. 1997; Pijpers 1997; Goicoechea et al. 1999). The difference (one week) is irrelevant in the Hubble constant calculations, but it may be crucial in order to detect microlensing events.
One of the “classical” ways of obtaining the time delay between components A and B of Q0957+561 is the computation of the $`A`$$``$$`B`$ cross-correlation (see Oscoz et al. 1997, and references therein). In the standard procedure, the maximum of the CCF (cross-correlation function) is identified with the time delay. However, the delay-peak generally has an irregular shape, and this fact causes a bias in the measurement of the time delay between the two components of the system. In this way, two different datasets could lead to two different estimates of the time delay that are in appreciable disagreement. The problem was considered by some authors in the past. Lehár et al. (1992) made a parabolic fit around the maximum of the cross-correlation function, whereas Haarsma et al. (1997) used a cubic polynomial fit to the delay-peak. Lehár et al. (1992) suggested that the delay-peak of the cross-correlation function should be closely traced by the central peak (around $`\tau =0`$) of the autocorrelation function. Moreover, other features of the cross-correlation function around lags $`\tau _1`$, $`\tau _2`$,… will be closely reproduced in the autocorrelation function around lags $`\tau _1-\mathrm{\Delta }\tau _{\mathrm{BA}}`$, $`\tau _2-\mathrm{\Delta }\tau _{\mathrm{BA}}`$,…, respectively.
In this paper we make use of the similarity between the discrete autocorrelation function (DAC) of the light curve of one of the components (B, for example) and the $`AB`$ discrete cross-correlation function (DCC) to improve the estimation of the time delay. The same origin of the A and B curves guarantees the fulfilment of the relationship DCC($`\tau `$) $`\approx `$ DAC($`\tau -\mathrm{\Delta }\tau _{\mathrm{BA}}`$) in the absence of strong microlensing masking the QSO’s intrinsic variability. However, several issues, such as the impossibility of observing the system during certain months of the year and the necessary lack of suitable edges, are additional drawbacks. So, the comparison between the DAC and the DCC from real data should be done by previously selecting a “clean” dataset, i.e., a homogeneous monitoring of both images during two active and clear (free from large gaps and microlensing) epochs separated by $`\sim 420`$ days (the rough estimate of the time delay). Therefore, from the DAC and DCC functions, one can define the following function for every fixed value $`\theta `$ (days):
$$\delta ^2(\theta )=\left(\frac{1}{N}\right)\sum _{i=1}^{N}S_i\left[\mathrm{DCC}(\tau _i)-\mathrm{DAC}(\tau _i-\theta )\right]^2,$$
(7)
where $`S_i=1`$ when both the DCC and DAC are defined at $`\tau _i`$ and $`\tau _i\theta `$, respectively, and 0 otherwise. Equation (7) can be minimized to obtain $`\theta _0=\mathrm{\Delta }\tau _{\mathrm{BA}}`$, the most probable value for the time delay. This least squares comparison ($`\delta ^2`$-test) of the auto and cross-correlation functions enables the time delay to be determined by comparing two discrete series, DCC and DAC, which should, in general, have the same shape.
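A direct implementation of Eq. (7) is straightforward; the sketch below assumes (our choice) that both correlation functions are sampled on the same regular grid of lags, so that the DAC can be interpolated at $`\tau _i-\theta `$.

```python
import numpy as np

def delta2(theta, lags, DCC, DAC):
    """Eq. (7): mean squared difference between DCC(tau) and DAC(tau - theta).

    Both functions are sampled on the same regular grid of lags; DAC is
    interpolated at tau - theta, and S_i = 1 only where both are defined."""
    good = np.isfinite(DAC)
    shifted = np.interp(lags - theta, lags[good], DAC[good],
                        left=np.nan, right=np.nan)
    s = np.isfinite(DCC) & np.isfinite(shifted)
    return np.mean((DCC[s] - shifted[s])**2) if s.any() else np.inf

# usage: best = thetas[np.argmin([delta2(t, lags, DCC, DAC) for t in thetas])]
```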
### 5.1 Simulated data
Prior to computing the time delay from real data, the $`\delta ^2`$-test was applied to some simulated datasets to verify its reliability when dealing with discrete and irregularly sampled datasets. Several sets of artificial photometric data with magnitudes, error-bars, and time distributions similar to those of the observations collected at Teide Observatory were created. In this section we will use the same terminology as with real data; that is, the $`y`$-axis will be considered as magnitude, the $`x`$-axis as truncated JD, and the delay between both curves as time delay.
A program was developed to generate sets of dates, $`x_i`$, between 1800 and 2000 (JD), approximately, with a pseudo-random separation taken from a uniform distribution between zero and five days, obtained with the G05CAF NAG function. The time data were then alternately separated in two different subsets, corresponding to A and B light curves. A first value of the magnitude was then calculated with the equation $`y_i=F(x_i)`$, where $`F(x_i)`$ is a selected function of the dates $`x_i`$. The probability of measuring a value $`y`$ for each $`x_i`$, due to several “observational effects”, is proportional to $`\mathrm{e}^{-[(y-y_i)^2/2\sigma _i^2]}`$, and hence characterized by $`\sigma _i`$, or, equivalently, the variable $`d=y-y_i`$ is distributed as $`\mathrm{e}^{-[d^2/2\sigma _i^2]}`$. A $`\sigma _i`$ taking pseudo-random values between 0.01 and 0.03, obtained with the G05CAF NAG function, was generated for each $`x_i`$. From here the quantities $`d_i`$, pseudo-random numbers obtained from a normal Gaussian distribution with zero mean and standard deviation $`\sigma _i`$, were calculated with the G05DDF NAG function, allowing them to adopt positive or negative values. Finally, the magnitude was generated from the equation $`y_o=F(x_i)+d_i=y_i+d_i`$, with an error-bar of $`\sigma _i`$. The A component was forced to be brighter by adding 0.1 to the magnitudes of the B component (although this situation is not realistic, it may be illustrative); moreover, 420 days were subtracted from the JD of the A dataset to simulate the existence of a time delay. The result was two pseudo-random sampled functions with pseudo-random noise, a true delay of 420 days, and the B component 0.1 mag fainter than component A. The first two selected functions were:
$$\mathrm{F1}:y=17.17+0.5\mathrm{e}^{-0.5f}\mathrm{sin}(f),\text{ where }f=\frac{\left(x-1800\right)}{20}$$
(8)
$$\mathrm{F2}:y=17.2+0.1\mathrm{sin}(f)\mathrm{sin}(4f),\text{ where }f=\frac{x}{40}$$
(9)
An additional function, consistent with the actual variability of Q0957+561, was created. The raw observational data from the 97–98 seasons, with none of the modifications explained in this paper, were selected. The light curves were then fitted by the function
$$\mathrm{F3}:y=17.07-0.16\mathrm{e}^f,\text{ where }f=-\frac{(x-15.8-m)^2}{2(10+s)^2}$$
(10)
$`m`$ being the mean of the JD in the selected range and $`s`$ its standard deviation. The resulting simulated data show a lower variability than that obtained from F1 and F2.
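The generation scheme described above is easy to reproduce; a minimal sketch (using NumPy's random generator in place of the NAG routines, and F1 as the example signal) follows.

```python
import numpy as np

rng = np.random.default_rng(1)
# Irregular sampling and noise as described above, using F1 as the signal
t = 1800.0 + np.cumsum(rng.uniform(0.0, 5.0, 200))
tA, tB = t[0::2], t[1::2]                     # alternate dates into the A and B sets
F1 = lambda u: 17.17 + 0.5*np.exp(-0.5*(u - 1800.0)/20.0)*np.sin((u - 1800.0)/20.0)
sA = rng.uniform(0.01, 0.03, tA.size)         # per-point "observational" errors
sB = rng.uniform(0.01, 0.03, tB.size)
yA = F1(tA) + rng.normal(0.0, sA)
yB = F1(tB) + rng.normal(0.0, sB) + 0.1       # B made 0.1 mag fainter
tA = tA - 420.0                               # subtract 420 d from A: the true delay
```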
To calculate the DAC and the DCC functions the procedure described in Edelson & Krolik (1988, see also Oscoz et al. 1997) was followed. For two discrete data trains, $`a_i`$ and $`b_j`$, the formula corresponding to the DCC is
$$DCC(\tau )=\frac{1}{M}\sum \frac{\left(a_i-\overline{a}\right)\left(b_j-\overline{b}\right)}{\sqrt{\left(\sigma _a^2-e_a^2\right)\left(\sigma _b^2-e_b^2\right)}},$$
(11)
averaging over the $`M`$ pairs for which $`\tau -\alpha \le \mathrm{\Delta }t_{ij}<\tau +\alpha `$, $`\alpha `$ and $`e_k`$ being the bin semi-size and the measurement error associated with the data set $`k`$, respectively. The expression for the DAC can be obtained in a straightforward manner from Eq. (11), while the expression for $`\delta ^2`$ is given by Eq. (7). Finally, to calculate the uncertainty in the estimation of the time delay a Monte Carlo algorithm with 1000 iterations was applied to the simulations (see Efron & Tibshirani 1986).
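A sketch of the binned correlation estimator of Eq. (11) is given below; it reuses the simulated arrays from the previous sketch, and the lag grid is an arbitrary choice.

```python
import numpy as np

def dcf(ta, a, ea, tb, b, eb, lags, alpha=20.0):
    """Discrete correlation function of Edelson & Krolik (1988), Eq. (11)."""
    norm = np.sqrt((a.var() - (ea**2).mean())*(b.var() - (eb**2).mean()))
    udcf = np.outer(a - a.mean(), b - b.mean())/norm
    dt = tb[None, :] - ta[:, None]            # pairwise lags Delta t_ij
    out = np.full(len(lags), np.nan)
    for k, tau in enumerate(lags):
        sel = (dt >= tau - alpha) & (dt < tau + alpha)
        if sel.any():
            out[k] = udcf[sel].mean()
    return out

lags = np.arange(-200.0, 801.0, 10.0)
DCC = dcf(tA, yA, sA, tB, yB, sB, lags)       # arrays from the previous sketch
DAC = dcf(tB, yB, sB, tB, yB, sB, lags)
```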
The three simulated clean datasets are shown in Fig. 12. Open circles correspond to the A component, while red filled squares correspond to the B component shifted by 420 days and with an offset in magnitude. As can be seen, the first two sets of simulated data (Fig. 12, a-b) could represent violent epochs in the source quasar, with episodes where the variability is as much as 0.2–0.3 mag in only 20–30 days. The last set (Fig. 12, c) represents an epoch with less variability than the observational one reported in Fig. 7. The $`\delta ^2`$-test was applied to each clean dataset in three different cases: (i) DAC obtained from the A component; (ii) DAC obtained from the B component; and (iii) similar to (i) but this time with a large gap in light curve B (32 days for F1, 30 days for F2, and 30 days for F3). The resulting values for the time delay and the corresponding errors (1$`\sigma `$) in days (see Table 5) clearly indicate that the $`\delta ^2`$-test offers good estimates in all the simulations, even considering the large error-bars generated for each point, the existence of “periodic” trends, and the presence of some gaps in some light curves. From Table 5 one can see that the maximum difference between the real and the central value of the derived time delay is of seven days (for a relatively inactive source) and the 1$`\sigma `$ intervals always include the true delay. An example of the performance of the $`\delta ^2`$-test has been plotted in Fig. 13. The DAC (open circles) for the A component shifted by 420 days versus the DCC (red squares) for F2 appear in the upper panel. There is a very good correspondence between both curves. Possible values of the time delay ($`\theta `$) versus the associated values $`\delta ^2(\theta )`$, normalized by its minimum value, have also been represented in the lower panel.
### 5.2 Real data
The success of the calculation of the time delay from simulated data, as shown in §5.1, made it reasonable to apply the $`\delta ^2`$-test to real data. The observations, collected at Teide Observatory, covered three consecutive seasons (1996, 1997, and 1998), with 220 different points in the $`R`$ band. Some points are affected by strong systematic effects and show a strong and simultaneous variation in both components. Once these points were discarded, their total number was reduced to 197. Taking into account the presence of two main gaps in the data—JD 2450242 to 2450347 and JD 2450637 to 2450729—roughly corresponding to the summer months, two different datasets (free from large gaps and edges) can be selected: DSI, corresponding to the 96–97 seasons, with 28 points for the A component and 27 points for the B component; and DSII, corresponding to the 97–98 seasons, with 44 points for the A component and 86 points for the B component. Both DSI and DSII have been represented in Fig. 14, where the B light curves have been shifted by 420 days and +0.06 mag. As can be seen, DSI corresponds to an epoch of significant calmness in the activity of the quasar which, together with the relatively small number of points, made it problematical for time-delay calculations. This was confirmed by several tests. On the contrary, DSII (the 97 and 98 seasons) shows some level of activity (although not as strong as in A95/B96) and, moreover, contains an appreciable number of points. Nor is there any clear evidence for a microlensing event, whose absence is a fundamental requirement for selecting a clean dataset. So, DSII was finally used to perform time-delay calculations, i.e., DSII is our clean dataset.
The DAC and DCC functions were obtained with the same procedure as in §5.1, taking into account that the better monitoring of the B light curve as compared to that for the A light curve (see Fig. 14) made it more suitable for the DAC calculations. The application of the $`\delta ^2`$-test to the DAC and DCC curves appears in Fig. 15 (normalized as in §5.1), where the minimum of the $`\delta ^2`$-curve appears at 425 days, corresponding to the best delay. The uncertainty in our estimate of the time delay was obtained by using a Monte Carlo algorithm. A random-number generator added a random deviate to each point of DSII to simulate the effects of the observational errors (see Efron & Tibshirani 1986), standard bootstrap samples being thereby obtained. The $`\delta ^2`$-test was applied to the bootstrap samples to get the time delay in each case, repeating the process 10000 times, a number large enough for the results to be treated statistically. The use of the Monte Carlo algorithm led to a final value of $`425\pm 4`$ days ($`1\sigma `$). The uncertainty with the $`\delta ^2`$-test is better than the uncertainties obtained with the same clean dataset with other alternative methods, like the dispersion spectra and the discrete cross-correlation techniques, $`426\pm 12`$ days and $`428\pm 9`$ days, respectively (see Oscoz et al. 1997 and references therein). On the other hand, the $`\delta ^2`$-test with the DAC obtained from the A light curve gives a time delay of $`425\pm 5`$ days. The resulting DCC (filled squares) and DAC (for the A component, open circles) curves are presented in Fig. 16, where a bin semi-size of $`\alpha =`$ 20 days was used. The DAC has been shifted by 417 (upper panel) and 425 (lower panel) days. The disagreement between both curves is evident in the former case. Our study indicates that the time delay between components A and B of Q0957+561 must be in the interval 420–430 days and is therefore slightly different from the “standard” typical value of 417 days.
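A minimal bootstrap sketch of this Monte Carlo error estimate follows; it reuses `dcf`, `delta2` and the simulated arrays from the sketches above, and uses fewer iterations than the 10000 of the actual analysis, purely for speed.

```python
import numpy as np

rng = np.random.default_rng(2)
thetas = np.arange(380.0, 471.0)

def bootstrap_delay(n_iter=1000):
    """Perturb every point by its photometric error and repeat the delta^2-test."""
    best = np.empty(n_iter)
    for it in range(n_iter):
        yAk = yA + rng.normal(0.0, sA)
        yBk = yB + rng.normal(0.0, sB)
        DCCk = dcf(tA, yAk, sA, tB, yBk, sB, lags)
        DACk = dcf(tB, yBk, sB, tB, yBk, sB, lags)
        best[it] = thetas[np.argmin([delta2(t, lags, DCCk, DACk) for t in thetas])]
    return best.mean(), best.std()            # central value and 1-sigma uncertainty
```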
## 6 Conclusions
CCD observations of the gravitational lens system Q0957+561A,B in the $`BVRI`$ bands are presented in this paper. The observations, taken with the 82 cm IAC-80 telescope at Teide Observatory, Spain, were made from the beginning of 1996 February to 1998 July, as part of an on-going lens monitoring program. An alternative method to obtain accurate multi-band CCD photometry of this object is presented. A new, completely automatic IRAF task, pho2com, has been developed. This code yields accurate photometry by simultaneously fitting a stellar two-dimensional profile to each QSO component by means of the DAOPHOT software. Using a sample of simulated data, it is demonstrated that the proposed method can achieve high-precision photometry: 0.5% for the B component and 0.2% for the A component. In this paper we show that it is also necessary to correct the $`BVRI`$ photometry for color-term variations during a season, and a possible procedure is presented. Although PSF-fitting photometry improves on aperture photometry errors, the subtraction of the lens galaxy is still not perfect and some of its light is present in the final B-component magnitude, so the final QSO B magnitudes are overestimated. To correct the B magnitudes for the underlying galaxy light, linear relations between seeing and magnitude errors are deduced by means of simulated data. A remarkable characteristic of the final light curves is their high degree of homogeneity; they have been obtained using the same telescope and instrumentation during the three-year monitoring campaign.
A calculation of the time delay between the two components using a clean dataset has been performed. The resulting delay, obtained with a new test, the $`\delta ^2`$-test, is 425 $`\pm `$ 4 days, slightly higher than the previously accepted value (417 days), but concordant with the results obtained by Pelt et al. (1996), Oscoz et al. (1997), Pijpers (1997), and Goicoechea et al. (1999).
We are especially grateful to E. E. Falco for advising us on the possible presence of anomalous points in our datasets and to F. Atrio for help in understanding some subtle aspects of our statistical treatment. The authors would like to thank Dr. Jesús Jiménez and Dr. Francisco Garzón for making the telescope readily available to us. This work was supported by the P6/88 project of the Instituto de Astrofísica de Canarias (IAC), Universidad de Cantabria funds, and DGESIC (Spain) grant PB97-0220-C02.
# A Maximum Entropy Method of Obtaining Thermodynamic Properties from Quantum Monte Carlo Simulations
## I Introduction
The problem of obtaining thermodynamic properties from Quantum Monte Carlo (QMC) simulations is one of long-standing interest. Although the internal energy, i.e., the expectation value of the Hamiltonian, is one of the easiest quantities to obtain via QMC, the free energy is almost impossible to obtain directly in a simulation. Likewise, the specific heat, i.e., the temperature derivative of the internal energy, is very difficult to obtain directly. Hence, one must turn to indirect methods.
Several methods to obtain the thermodynamic properties of model systems via QMC have been proposed, but all suffer from limitations of one sort or another. To a large extent, these stem from the use of a specific functional form to fit the internal energy of the system. We propose a novel method to obtain the internal energy, the specific heat, the entropy, and the free energy as a function of temperature via QMC which does not impose any functional form on these quantities and alleviates several other problems in the current methods. Our technique relies on probability theory and Maximum Entropy to obtain the most probable thermodynamic functions consistent with the QMC data and prior knowledge, such as a sum rule on the system’s entropy.
In the remainder of this paper, Sec. II reviews currently used techniques to obtain thermodynamic properties from QMC, their limitations, and the desirable features of a new technique. Sec. III contains a brief overview of our method to obtain thermodynamic quantities from QMC data. In Sec. IV, we review the theoretical underpinnings of the method, Maximum Entropy. Sec. V sets forth the algorithmic details of our method. In order to test our method, we apply it to a non-trivial problem – the 3d Periodic Anderson Model – in Sec. VI. Our summary is given in Sec. VII.
## II Background
The free energy and its derivatives, including the specific heat, provide experimentally relevant insight into a system’s temperature evolution and phase transitions. Unfortunately, both direct and indirect QMC measurements of such quantities are notoriously difficult to make. To understand why, we discuss two methods for obtaining thermodynamic quantities in this section.
Typically, one tries to obtain the thermodynamic properties of a system by performing QMC simulations at various discrete temperatures, then fitting the resultant energy data to a functional form. Generally, this functional form is not known, so a physically-motivated form must be chosen. The recipe is to fit the internal energy $`E(T)`$ to a functional form, which may then be differentiated explicitly to obtain the specific heat
$`C(T)={\displaystyle \frac{\partial E(T)}{\partial T}}.`$ (1)
From the specific heat, the entropy $`S(T)`$ may be calculated by integrating
$`S(T)={\displaystyle \int _0^T}𝑑T^{\prime }{\displaystyle \frac{C(T^{\prime })}{T^{\prime }}}.`$ (2)
Then, the free energy $`F`$ may be obtained from the relation
$`F(T)=E(T)-TS(T).`$ (3)
While apparently sound in principle, this prescription suffers from several serious problems. For one, the derivative in Eq. 1 enhances the statistical uncertainty in the fit. At low temperatures the procedure is further complicated by the division by $`T`$ in Eq. 2. Similar problems emerge at low $`T`$ when the specific heat is evaluated directly as $`C=\left(\langle E^2\rangle -\langle E\rangle ^2\right)/T^2`$. Thus, there is no guarantee that the total entropy obtained by this method
$`S_{\infty }={\displaystyle \int _0^{\infty }}𝑑T{\displaystyle \frac{C(T)}{T}}`$ (4)
will equal the total infinite-temperature entropy of the system. If not, then both the specific heat $`C(T)`$ and the free energy $`F(T)`$ may be unreliable.
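As a concrete illustration of this prescription and of its pitfalls, the following sketch fits $`E(T)`$ to a polynomial (an assumed functional form, chosen here purely for demonstration) and then applies Eqs. 1–3; note that the entropy integral can only be started at the lowest available temperature, so any entropy quenched below that point is missed:

```python
import numpy as np

def thermo_from_polyfit(T, E, deg=6):
    """Naive prescription: fit E(T), differentiate for C(T) (Eq. 1),
    integrate C/T for the entropy (Eq. 2) and form F = E - T*S (Eq. 3).
    The division by T amplifies the noise at low temperatures."""
    p = np.polynomial.Polynomial.fit(T, E, deg)
    dp = p.deriv()
    Tg = np.linspace(T.min(), T.max(), 2000)
    C = dp(Tg)                                  # specific heat
    S = np.concatenate(([0.0], np.cumsum(       # trapezoidal integral of C/T
        0.5 * (C[1:] / Tg[1:] + C[:-1] / Tg[:-1]) * np.diff(Tg))))
    F = p(Tg) - Tg * S                          # free energy
    return Tg, C, S, F
```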
One technique using this prescription is to fit QMC internal energy data to a pair of functional forms, one for low-temperature data and another for high-temperature data. This method has been successfully applied, but it does have several important drawbacks. First, separate polynomial functional forms are used for the low- and high-temperature regions. These are chosen after viewing the data, based on the shape of the $`E(T)`$ curve. Presumably, care must be taken to avoid spurious features in the derivative $`C(T)`$ of the internal energy $`E(T)`$ at the point where the two polynomials are joined. While an analytic function such as a polynomial expansion is a reasonable choice for an internal energy function on a finite-dimensional lattice, there is no guarantee that sufficient terms have been chosen for the polynomial, and the most reliable test is a goodness of fit. Consequently, this technique requires a great deal of costly QMC data, especially in the neighborhood of a phase transition, in order to ensure that a reasonable goodness of fit is obtained.
Another recently-proposed method to obtain thermodynamic properties from QMC data is to fit the internal energy $`E(T)`$ to a physically-motivated functional form. In this method, appropriate for a lattice simulation, the internal energy is fit to a sum of exponentials
$`E(T)=E_0+{\displaystyle \sum _n}c_ne^{-n\mathrm{\Delta }/T}.`$ (5)
The specific heat and the entropy are then obtained according to Eqs. 1 & 2, respectively.
One way to view the exponential functional form of this method is to note that physically, one may expect different energy scales to become important as the temperature $`T`$ is varied. At those energy scales, contributions to the internal energy $`E(T)`$ are effectively switched on. This is manifest in the fitting parameter $`\mathrm{\Delta }`$; at each temperature corresponding to the energy scale $`n\mathrm{\Delta }`$ for each of the $`n`$ terms in the expansion, another term in the expansion contributes to the energy. The amplitude of this contribution is set by the related coefficient $`c_n`$.
This technique has been successfully applied. It has at least one major advantage over the polynomial method in that it does not splice together two functions for different temperature regimes. Nevertheless, it also relies on a goodness-of-fit test to determine whether a reasonable number of fitting parameters has been chosen. Since it uses a gapped form for the internal energy, it is not suitable for systems in the thermodynamic limit, as may be studied using dynamical mean field techniques. Furthermore, both the polynomial and the exponential fitting schemes are inaccurate when the number of fitting parameters is small and become ill posed as the number of fitting parameters becomes large. Thus it is difficult to determine how many coefficients to use.
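For reference, a minimal sketch of the exponential fit of Eq. 5 via nonlinear least squares; the starting values and the number of terms `n_max` are our own assumptions and would in practice be tuned against the goodness of fit, as discussed above:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_gapped_form(T, E, sigma, n_max=4):
    """Fit E(T) = E0 + sum_n c_n exp(-n*Delta/T) (Eq. 5) with parameter
    vector [E0, Delta, c_1, ..., c_n_max]."""
    def model(T, E0, Delta, *c):
        n = np.arange(1, len(c) + 1)[:, None]
        return E0 + np.sum(np.asarray(c)[:, None] * np.exp(-n * Delta / T),
                           axis=0)
    p0 = [E.min(), 1.0] + [0.1] * n_max
    popt, _ = curve_fit(model, T, E, p0=p0, sigma=sigma, absolute_sigma=True)
    chi2 = np.sum(((E - model(T, *popt)) / sigma) ** 2)
    return popt, chi2 / (len(T) - len(p0))   # reduced chi^2 (goodness of fit)
```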
We now present a method that overcomes these drawbacks and possible pitfalls. Our method incorporates additional a priori information: that the energy $`E(T)`$ increases monotonically with the temperature $`T`$, so that the specific heat is positive definite, $`C(T)\ge 0`$, and the value of the infinite-temperature entropy $`S_{\infty }`$. Thus it requires less QMC data than either of the functional fitting methods. It performs a search for the most probable energy $`E(T)`$ given the data and prior information, and therefore removes the question of determining the number of fitting parameters that arises when using the other methods. It is applicable both to finite-dimensional lattice QMC and to QMC for non-gapped systems.
## III Overview of the New Technique
We start with the observation that given the appropriate distribution function $`K(\beta ,\omega )`$ relating the energy $`\omega `$ to the inverse temperature $`\beta =1/T`$, one can write the internal energy as a weighted integral with a positive-definite weight $`\rho (\omega )`$
$`E(T)={\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega K(\beta ,\omega )\rho (\omega ).`$ (6)
The specific heat is obtained by differentiating
$`C(T)={\displaystyle \frac{\partial E(T)}{\partial T}}={\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega {\displaystyle \frac{\partial K(\beta ,\omega )}{\partial T}}\rho (\omega ).`$ (7)
One may then use Eqs. 2 and 3 to obtain the entropy and the free energy, respectively.
The entire problem of obtaining the thermodynamic properties of the system then reduces to that of numerically inverting the integral equation Eq. 6 for the weight $`\rho (\omega )`$ given noisy QMC data for the internal energy $`E(T)`$. This is a well-known problem for which there exists a well-developed, powerful technique, the Maximum Entropy Method (MEM). (The MEM is discussed in the next section.) The kernel $`K(\beta ,\omega )`$ corresponds to a blurring function which acts on a spectrum $`\rho (\omega )`$. Typical blurring functions for a quantum system are the Bose-Einstein and the Fermi distribution functions, as discussed in detail in Sec. IV.
To understand the fundamental difference between other methods and our MEM technique, it is important to understand the questions which the two methods answer. The other methods rely on fitting noisy data to a functional form. They start with a physically-motivated functional form for $`E(T)`$ and seek to find the most likely curve of the infinitely many curves which optimize some likelihood function such as $`\chi ^2`$. In reality, the functional form of the energy is not known. What is known is that finite temperatures spread the excitation spectrum of a quantum system. The question one might like to ask, “what is the curve that fits the data” is therefore ill-defined. Without additional regularization, an infinite number of curves fit the data and, unless the precise functional form of the internal energy $`E(T)`$ is known, there is no precise answer to this question. However, if we know the form of the thermal blurring function $`K(\beta ,\omega )`$, we may ask the question, “given the blurring function and any other relevant prior information that we know, what is the most probable spectrum $`\rho (\omega )`$ from which this energy data $`E(T)`$ might arise?” As discussed in detail below, this is the precise question that the MEM answers.
In addition to relying on fundamental properties of quantum systems instead of a functional form, further benefits also accrue from employing our MEM technique. For example, since the MEM gives errorbars on integrated quantities, the uncertainties in both $`E(T)`$ and $`C(T)`$ are known when obtained via our MEM technique. Other advantages of our technique will be discussed below. Before discussing algorithmic details, it is useful and instructive to briefly review the MEM.
## IV The Maximum Entropy Method
The Maximum Entropy Method (MEM) is discussed in detail elsewhere. Here, we wish to review only as much of the MEM as is necessary to understand the new technique.
The MEM is frequently used to analytically continue QMC imaginary-time Green Function data to real frequencies. However, it is a general technique that is not limited to analytic continuation or to QMC-related problems. In fact, the MEM has a relatively long history as an image reconstruction technique in photography and dynamic light scattering problems.
In such problems, the observed image is the result of Gaussian blurring of light transmitted from a source through a medium, such as the atmosphere. Hence, the functional form of the image is not known and the question of whether the observed data fits a specific functional form is ill-defined – there are an infinite number of curves that fit the data! Instead, the best that one can do is to seek the most probable image given the data. This is exactly what the MEM sets out to accomplish.
This is done using Bayesian statistics. If there are two events, $`a`$ and $`b`$, then by Bayes’ theorem, the joint probability of these two events is
$`P(a,b)=P(a|b)P(b)=P(b|a)P(a),`$ (8)
where $`P(a|b)`$ is the conditional probability of $`a`$ given $`b`$. The probabilities are normalized so that
$`P(a)={\displaystyle \int }𝑑bP(a,b)\mathrm{and}{\displaystyle \int }𝑑aP(a)=1.`$ (9)
In our problem, we search for the spectrum $`\rho `$ which maximizes the conditional probability of $`\rho `$ given the data $`E`$,
$`P(\rho |E)=P(E|\rho )P(\rho )/P(E).`$ (10)
Typically, one calls $`P(E|\rho )`$ the likelihood function and $`P(\rho )`$ the prior probability of $`\rho `$ (or the prior). Since we work with one set of QMC data at a time, $`P(E)`$ is a constant during this procedure and may be ignored. The prior and the likelihood functions require more thought and are discussed in detail in Ref. ; here we present the salient results of that discussion.
If the spectrum is positive-definite, we may think of it as an un-normalized probability density:
$`{\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \rho (\omega )<\infty .`$ (11)
Then by Skilling, the prior probability is proportional to $`\mathrm{exp}(\alpha S)`$ where $`S`$ is the entropy defined relative to some positive-definite function $`m(\omega )`$
$`S={\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \left[\rho (\omega )-m(\omega )-\rho (\omega )\mathrm{ln}(\rho (\omega )/m(\omega ))\right],`$ (12)
and
$`P(\rho )\equiv P(\rho |m,\alpha )\propto \mathrm{exp}(\alpha S),`$ (13)
where $`m(\omega )`$ is the default model since in the absence of data $`\rho =m`$. Selection of the default model for this method is discussed in Sec. V. The other unknown quantity $`\alpha `$ is determined during the MEM to maximize the probability of the image $`\rho `$ given the data.
The likelihood function follows from the central limit theorem. If each of the measurements $`E_{i,T}`$ ($`E_T\equiv E(T)`$) of the energy at a specific temperature $`T`$ is independent, then in the limit of a large number of measurements $`N_d`$ used to determine each $`E_T`$, the distribution of the $`E_T`$ becomes Gaussian. The probability of measuring a particular $`E_T`$ is
$`P(E_T)={\displaystyle \frac{1}{\sqrt{2\pi }\sigma _T}}\mathrm{exp}\left[-{\displaystyle \frac{1}{\sigma _T^2}}\left(E_T-\langle E_T\rangle \right)^2/2\right],`$ (14)
with an error estimate given by
$`\sigma _T^2={\displaystyle \frac{1}{N_d(N_d-1)}}{\displaystyle \sum _i}\left(\langle E_T\rangle -E_{i,T}\right)^2`$ (15)
and
$`\langle E_T\rangle ={\displaystyle \frac{1}{N_d}}{\displaystyle \sum _{i=1}^{N_d}}E_{i,T}`$ (16)
for the $`N_d`$ measurements of $`E_{i,T}`$ at temperature $`T`$.
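In a simulation, these estimators are evaluated per temperature from the (assumed independent, e.g. rebinned) measurements; a small sketch:

```python
import numpy as np

def bin_statistics(E_samples):
    """Mean energy and its error estimate (Eqs. 15-16) from N_d
    independent QMC measurements at one temperature."""
    E_samples = np.asarray(E_samples, dtype=float)
    N_d = E_samples.size
    E_mean = E_samples.mean()
    sigma = np.sqrt(np.sum((E_mean - E_samples) ** 2) / (N_d * (N_d - 1)))
    return E_mean, sigma
```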
Then the likelihood function $`P(E|\rho )`$ of measuring the set of $`E`$ for a given image $`\rho `$ is
$`P(E|\rho )\propto e^{-\chi ^2/2}`$ (17)
where
$`\chi ^2={\displaystyle \underset{T}{\overset{\mathrm{all}T}{\sum }}}{\displaystyle \frac{\left(\langle E_T\rangle -\underset{\omega }{\sum }\omega K(\omega ,T)\rho (\omega )\right)^2}{\sigma _T^2}}`$ (18)
and we have discretized the integral Eq. 6.
We are now in a position to perform the MEM and find the most probable image $`\rho `$ given the data $`E_T`$. We wish to maximize the joint probability of the image or weight $`\rho `$ given the data $`E`$; the default model $`m`$; and the Lagrange multiplier $`\alpha `$
$`P(\rho |E,m,\alpha )`$ $``$ $`P(E|\rho )P(\rho |m,\alpha )`$ (19)
$`=`$ $`{\displaystyle \frac{\mathrm{exp}(\alpha S-\chi ^2/2)}{Z_SZ_L}}`$ (20)
where $`Z_S`$ and $`Z_L`$ are normalization factors, independent of the image. For a fixed $`\alpha `$ and the given data $`E`$, the most probable image $`\widehat{\rho }(\alpha )`$ is the one that maximizes $`Q=\alpha S-\chi ^2/2`$. This may be found, for example, using Newton’s method.
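To make this optimization step concrete, here is a minimal sketch of a fixed-$`\alpha `$ search for $`\widehat{\rho }(\alpha )`$. Setting the derivative of $`Q`$ with respect to the image to zero yields a fixed-point condition, which we iterate with damping; this simple scheme is our own illustration, whereas a production code would use Newton's method (possibly in a singular-value subspace), as noted above:

```python
import numpy as np

def mem_image_fixed_alpha(K, E, sigma, m, alpha, n_iter=5000, damp=0.2):
    """Maximize Q = alpha*S - chi^2/2 at fixed alpha.  K holds the
    discretized kernel weights, so the model data are K @ rho (Eq. 6).
    Stationarity of Q gives rho = m * exp(K^T r / alpha), with weighted
    residuals r = (E - K rho)/sigma^2; the damped iteration below keeps
    rho positive by construction."""
    rho = m.copy()
    for _ in range(n_iter):
        r = (E - K @ rho) / sigma**2
        rho_target = m * np.exp((K.T @ r) / alpha)
        rho = (1.0 - damp) * rho + damp * rho_target   # damped update
    return rho
```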
The details of implementing a MEM code and finding $`\alpha `$ are given elsewhere. We will not repeat that presentation here. A MEM code written according to Ref. is recommended for performing the technique we discuss herein. Having discussed the general MEM formalism, we now turn to the specific algorithmic details of our new technique.
## V Algorithmic Details of the New Technique
We desire to express the internal energy of the system as an integral over a density of energy levels times a relation between energy and temperature, according to Eq. 6. To that end, we make the following Ansatz
$`E(T)={\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega \left[F(\beta ,\omega )\rho _F(\omega )+B(\beta ,\omega )\rho _B(\omega )\right],`$ (21)
where $`F`$ and $`B`$ are the Fermi- and Bose-distribution functions, respectively
$`F(\beta ,\omega )={\displaystyle \frac{1}{1+e^{\beta \omega }}}`$ (22)
$`B(\beta ,\omega )={\displaystyle \frac{1}{1-e^{-\beta \omega }}}`$ (23)
for $`\beta =1/T`$ (we have set the Boltzmann constant equal to unity $`k_B=1`$).
This Ansatz corresponds roughly to describing the energetics of the system as consisting of separate linear contributions from Fermi and Bose excitations, and imposes the constraint that the corresponding energy $`E(T)`$ increases monotonically with temperature $`T`$, so that $`C(T)\ge 0`$. In addition, since the degeneracy of the ground state and the total number of accessible states are generally known, the infinite-temperature entropy is generally known. $`S_{\infty }`$ may be obtained from
$`S_{\infty }={\displaystyle \int _0^{\infty }}{\displaystyle \frac{dT}{T}}{\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega \left[{\displaystyle \frac{\partial F}{\partial T}}\rho _F(\omega )+{\displaystyle \frac{\partial B}{\partial T}}\rho _B(\omega )\right]`$ (24)
by noting that the temperature integral for the Fermi term can be done analytically and that, since $`\rho _B(\omega )`$ is odd, the Bose term does not contribute to the integral. The net result is that
$`S_{\infty }=\mathrm{ln}2{\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \rho _F(\omega )`$ (25)
Additional information such as this may be imposed by modifying the prior or the likelihood function. Given the similarity of Eq. 25 and Eq. 21, which appears as part of the likelihood function, we choose the latter approach. We introduce $`S_{\infty }`$ as an additional datum with a relative error estimate $`\sigma _S/S_{\infty }`$ chosen to be approximately equal to the smallest relative error estimate of the energy data. We discuss our reasons for choosing this Ansatz in the appendix.
With this Ansatz, we are in a position to employ the MEM. We write $`K\rho `$ from Eq. 6 as a linear combination of $`F\rho _F+B\rho _B`$. Eq. 18 for $`\chi ^2`$ is modified similarly. We pick the default model in the following manner. We note first that it must be positive definite and integrable. We employ a Gaussian default model, which satisfies these criteria.
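A sketch of the resulting discretized kernel is given below; the uniform $`\omega `$ grids, the restriction of the Bose part to $`\omega >0`$ (where $`\rho _B`$ is positive semi-definite, see the appendix), and the appending of the sum rule of Eq. 25 as one extra row of data (to be paired with the datum $`S_{\infty }`$ and its error $`\sigma _S`$, as described above) are our illustrative choices:

```python
import numpy as np

def build_mem_kernel(T, omega_F, omega_B):
    """Discretized kernel for the Ansatz of Eq. 21: rows are the QMC
    temperatures plus one row for the entropy sum rule (Eq. 25); columns
    are the grid points of the concatenated image [rho_F, rho_B].
    omega_B must be a strictly positive grid."""
    beta = 1.0 / np.asarray(T)[:, None]
    dwF = omega_F[1] - omega_F[0]
    dwB = omega_B[1] - omega_B[0]
    KF = omega_F / (1.0 + np.exp(beta * omega_F)) * dwF   # Fermi part, Eq. 22
    KB = omega_B / (1.0 - np.exp(-beta * omega_B)) * dwB  # Bose part, Eq. 23
    sum_rule = np.concatenate([np.full(omega_F.size, np.log(2.0) * dwF),
                               np.zeros(omega_B.size)])   # ln2 * int rho_F
    return np.vstack([np.hstack([KF, KB]), sum_rule])

# Gaussian default model on the Fermi grid (positive definite, integrable)
omega_F = np.linspace(-10.0, 10.0, 401)
m_F = np.exp(-omega_F**2 / 8.0)
```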
Once the default model is selected, the method described in Ref. may be applied straight away. To further illustrate the method, we now apply it to a non-trivial model.
## VI Example: 3d Periodic Anderson Model
The 3d Periodic Anderson Model (PAM) is often used to investigate f-electron systems, where electronic correlations are important for the phenomena under study. It is a simplified lattice model in which the Coulomb interaction is limited in range to on-site interactions only and then only within one of the two bands. Nevertheless, it is a rich model in which the interplay between delocalization (kinetic energy), Coulomb repulsion, Pauli exclusion, temperature, and electron density give rise to a wide variety of phenomena.
The periodic Anderson Hamiltonian is
$`H`$ $`=`$ $`{\displaystyle \sum _{k\sigma }}ϵ_kd_{k\sigma }^{\dagger }d_{k\sigma }+{\displaystyle \sum _{k\sigma }}V_k(d_{k\sigma }^{\dagger }f_{k\sigma }+f_{k\sigma }^{\dagger }d_{k\sigma })`$ (28)
$`+U_f{\displaystyle \sum _i}(n_{if\uparrow }-{\displaystyle \frac{1}{2}})(n_{if\downarrow }-{\displaystyle \frac{1}{2}})`$
$`+{\displaystyle \sum _{i\sigma }}ϵ_fn_{if\sigma }-\mu {\displaystyle \sum _{i\sigma }}(n_{if\sigma }+n_{id\sigma }).`$
We choose a simple cubic structure for which,
$`ϵ_k`$ $`=`$ $`-2t_{dd}[\mathrm{cos}k_xa+\mathrm{cos}k_ya+\mathrm{cos}k_za],`$ (29)
$`V_k`$ $`=`$ $`-2t_{fd}[\mathrm{cos}k_xa+\mathrm{cos}k_ya+\mathrm{cos}k_za],`$ (30)
where $`a`$ is the lattice constant. The dispersion of $`V_k`$ reflects our choice of near–neighbor (as opposed to on–site) hybridization of the $`f`$ and $`d`$ electrons. With on–site hybridization, the PAM is an insulator at half-filling, whereas with our intersite hybridization choice the half-filled, symmetric PAM is metallic.
The parameter values and the temperature $`T`$ in this work are given in units of $`t_{dd}`$. We take $`U_f=6`$ and explore a range of $`t_{fd}`$ and $`T`$ values. QMC results for this model were obtained using the determinant algorithm, which provides an exact treatment (to within statistical errors and finite size effects) of the correlations. We further choose the symmetric PAM ($`\mu =ϵ_f=0`$, and thus half–filling: $`n_{if}=n_{id}=1`$) in order to eliminate the QMC “sign problem,” allowing accurate simulations at low temperatures.
This version of the PAM has been studied in this parameter regime and is known to undergo a sharp finite-temperature crossover at finite $`t_{fd}\approx 0.6`$–$`0.8`$ with an associated, abrupt change in the free energy which is reflected in the specific heat. These thermodynamic anomalies are believed to be signals of a zero-temperature metal-insulator phase transition in the f-band which is also seen as a finite-temperature crossover to a localized f-electron system. We will test our new technique by using it to reproduce the published work.
Figure 1 shows the energy obtained via the MEM using Eq. 21 for various hybridizations $`t_{fd}=0.2,0.6,1.0`$ along with QMC data and errorbars on the QMC data. There is an excellent agreement between the QMC data and the MEM results throughout the range of temperatures simulated by QMC. The quality of the agreement is further illustrated by the inset in Fig. 1, which shows a magnified view of the lowest QMC temperature values for $`t_{fd}=0.2`$.
Once one obtains the image, the specific heat $`C(T)`$ is obtained by differentiating as in Eq. 1. Fig. 2 shows the specific heat $`C(T)`$, the specific heat divided by the temperature $`C(T)/T`$, and the entropy $`S(T)`$ in the top figure as a function of temperature $`T`$ for a fixed hybridization $`t_{fd}=1.0`$. For comparison, the bottom figure shows the energy $`E(T)`$. At this hybridization, singlets are known to form at low temperatures. This is reflected in a peak in $`C(T)/T`$ appearing at $`T\approx 0.15`$. The singlet formation peak is also visible in $`C(T)`$. A smaller peak in $`C(T)`$ at higher temperatures is believed to be due to the suppression of charge fluctuations in the $`f`$ band.
The entropy may be found in two ways. First, $`S(T)`$ may be obtained by integrating $`C(T)/T`$ according to Eq. 2. This was done and is plotted in Fig. 2. The entropy found in this manner saturates at high temperatures at the infinite-temperature limit $`S_{\infty }=4\mathrm{ln}2-S_0`$ known for this model. Second, the infinite-temperature entropy may be calculated by integrating $`\rho _F`$, Eq. 25. The latter estimate is $`S_{\infty }=3.375\mathrm{ln}2`$, which agrees both with the value obtained from integrating $`C(T)/T`$ (Eq. 4) and with the value $`S_{\infty }=4\mathrm{ln}2-S_0`$ known for the model.
In addition to singlet formation at low temperatures for relatively large hybridizations $`t_{fd}`$, the metallic PAM develops antiferromagnetic long-range-order (AFLRO) at low temperatures for small $`t_{fd}`$. Hence, if one examines $`C(T)/T`$ for decreasing hybridization $`t_{fd}`$, the singlet-formation peak should eventually disappear and a low-temperature peak corresponding to AFLRO should appear in $`C(T)/T`$. Previous work has observed the disappearance of the singlet-formation peak, but did not access a sufficiently low temperature to observe the appearance of the AFLRO peak.
Figure 3 shows the specific heat divided by temperature $`C(T)/T`$ for various $`fd`$ hybridizations $`t_{fd}`$: (a) $`t_{fd}=1.0`$, (b) $`t_{fd}=0.6`$, and (c) $`t_{fd}=0.2`$. The corresponding entropies from integrating $`C(T)/T`$ (Eq. 4) are shown in panel (d). Here, we observe the disappearance of the singlet peak with decreasing hybridization $`t_{fd}`$. At $`t_{fd}=1.0`$ there is a substantial singlet formation peak. At $`t_{fd}=0.2`$ there is no singlet formation for any temperature accessed by the simulations, and possibly no singlet formation even at $`T=0`$. The intermediate hybridization $`t_{fd}=0.6`$ corresponds to a regime where singlet formation occurs suddenly at low temperatures, as is reflected in the shift of the singlet peak to lower temperatures in Fig. 3.
The sum rule
$`S_{\infty }=4\mathrm{ln}2-S_0`$ (31)
for the entropy enforces the total entropy in the system. This is an important feature of the method, and satisfying the sum rule is one check on whether the specific heat is physically reasonable. However, when the system does not quench all of its entropy by the lowest temperature accessed by the QMC simulation, one may worry that enforcing the sum rule will push spurious entropy into the specific heat. This does not happen, as shown in Fig. 3(d). Instead, the entropy beyond that quenched within the accessed temperature range is placed below the lowest QMC temperature, where indeed it should go on physical grounds. The extrapolation below the lowest QMC data point is then unreliable, however; this unreliability is signalled by the large error bars on the specific heat in the extrapolation regime. That is, the method puts the entropy where it belongs, but then informs one that the results of the MEM in this regime are totally uncertain. This is an extremely desirable result.
## VII Summary
We have described a novel technique to obtain the internal energy as a function of temperature, as well as the specific heat, the entropy, and the free energy of a system using QMC energy data sampled at a small, finite set of temperature values. Our technique relies on probability theory to obtain the most probable thermodynamic functions given the sampled QMC energy. The question of determining the number of fitting parameters, which plagues the other methods, is thereby removed. An entropy sum rule or other appropriate a priori information may also be used, if known. The technique was illustrated by applying it to the 3d Periodic Anderson Model. An important benefit of the technique is that it returns not only the thermodynamic functions, but also their uncertainties. This is a significant improvement over the prior techniques.
## Acknowledgements
We are grateful to J.E. Gubernatis, A.K. McMahan, R.T. Scalettar, and R.N. Silver for helpful discussions. This work was supported by NSF grants DMR–9704021 and DMR–9357199. The QMC simulations were performed on the U.S. Department of Energy ASCI Red and Blue-Pacific computers.
## A The form of the energy Ansatz
In section V we wrote the Ansatz for the energy as
$`E(T)`$ $`=`$ $`{\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega F(\beta ,\omega )\rho _F(\omega )`$ (A2)
$`+{\displaystyle \int _{-\infty }^{\infty }}𝑑\omega \omega B(\beta ,\omega )\rho _B(\omega ),`$
where $`F`$ and $`B`$ are the Fermi- and Bose-distribution functions, respectively
$`F(\beta ,\omega )={\displaystyle \frac{1}{1+e^{\beta \omega }}}`$ (A3)
$`B(\beta ,\omega )={\displaystyle \frac{1}{1-e^{-\beta \omega }}}.`$ (A4)
Each integral in A2 is a Fredholm integral equation of the first kind for the corresponding density, with either $`F`$ or $`B`$ as the kernel. Since these kernels are continuous, the problem of inverting Eq. A2 to find the densities is ill-posed and must be regularized. The MEM provides an efficient regularization method. In this appendix we motivate our Ansatz and discuss other possible functional forms.
The Ansatz given in Eq. A2 is not the only possible, or even the only sensible, choice. In general one wants to choose the Ansatz that best captures the underlying physics of the problem. For models such as the PAM, we expect a density of Fermionic excitations, represented by $`\rho _F`$, associated with quasiparticle excitations. Since quasiparticles are conserved, their density should be positive definite, which is required when the MEM formalism is employed.
In addition, we expect a density $`\rho _B`$ of Bosonic excitations associated with collective behavior such as spin waves. With no Bosonic operators in the Hamiltonian, these Bosons are not conserved and have zero chemical potential. Thus, $`\rho _B(\omega >0)>0`$, corresponding to the creation of such excitations, and $`\rho _B(\omega <0)<0`$, corresponding to their destruction. This choice of $`\rho _F`$, $`\rho _B`$, and $`K`$ constrains the specific heat $`C(T)`$ to be positive definite, $`C(T)\ge 0`$. Furthermore, $`\rho _B(\omega )`$ is odd, as required by the Fluctuation–Dissipation theorem. Therefore, we may reduce the second integral in Eq. A2 to the range $`(0,\infty )`$, where $`\rho _B(\omega )`$ is positive semi-definite.
An Ansatz should also provide a faithful representation of the $`E(T)`$ data. To explore this question, we note that for a finite-sized system $`E(T)`$ is an analytic function that can be expanded in a Taylor series in $`T`$ around $`T=0`$. Expanding $`F(\beta ,\omega )`$ in a Sommerfeld low temperature expansion yields only even powers of $`T`$ in the energy. In order to get odd powers of $`T`$ that complete the Taylor series, the Bosonic kernel is required. Having the Taylor expansion for $`E(T)`$, the remaining question is the positive-definite nature of the image $`\rho (\omega )`$. Even the fact that the energy is monotonic does not yield a mathematical constraint that $`\rho _F`$ and $`\rho _B(\omega >0)`$ are positive definite, yet in practice this constraint imposed by the MEM is not a limitation.
It is possible to dispose of the Ansatz and use a general form for the energy. A faithful representation for systems in thermal equilibrium with a heat bath is provided by
$`E(T)={\displaystyle \frac{\int _{-\infty }^{\infty }𝑑\omega \omega \rho (\omega )\mathrm{exp}(-\omega /T)}{\int _{-\infty }^{\infty }𝑑\omega \rho (\omega )\mathrm{exp}(-\omega /T)}}`$ (A5)
where $`\rho (\omega )>0`$ is the density of eigenstates of the Hamiltonian. However, since the relationship between the density and $`E`$ is non-linear, the MEM algorithm described in Sec. IV cannot be applied without significant modification. Furthermore, the representation provided by Eq. A2 in each case we have tested has provided a fit to within the measured error. Thus, the additional complications associated with the use of Eq. A5 seem unnecessary for this and similar systems. However, this general technique would allow extension of the method to Classical Monte Carlo simulations and is something we are currently exploring.
# Do crossover functions depend on the shape of the interaction profile?
## 1 Introduction
In recent years, there has been a revived interest in the nature of the crossover from classical to non-classical (asymptotic) critical behaviour upon approach of the critical point. This crossover between two universality classes can be observed in a great variety of many-body systems, including pure fluids, polymer mixtures and micellar solutions, and is driven by the ratio between the reduced temperature $`t\equiv (T-T_\mathrm{c})/T_\mathrm{c}`$ (where $`T_\mathrm{c}`$ is the critical temperature) and a system-dependent parameter $`G`$, the Ginzburg number. The dependence of observables on this ratio is described by so-called crossover functions. Compared to our knowledge of critical exponents, for which very accurate, consistent estimates are available from renormalization-group (RG) calculations, series expansions, experiments and numerical calculations, the situation for non-asymptotic critical phenomena is not so clear-cut. Most theoretical predictions for crossover functions are obtained by means of RG-based methods. Examples include the work of Nicoll and Bhattacharjee , who used an RG-matching method to calculate crossover functions for the one-phase region ($`T>T_\mathrm{c}`$) to second order in $`\epsilon =4-d`$ ($`d`$ denotes the spatial dimensionality), and of Bagnuls and Bervillier , who applied massive field theory in $`d=3`$. The latter work was then extended to the two-phase region as well , although only for temperatures relatively close to $`T_\mathrm{c}`$. Very recently, the approach of Ref. was used to calculate the full crossover function for the susceptibility exponent below $`T_\mathrm{c}`$ . A more phenomenological approach has been taken by Belyakov and Kiselev , who presented a generalization of first-order $`\epsilon `$-expansions. Although the different methods vary in mathematical rigour, they all suggest that the crossover functions are universal functions of the ratio $`t/G`$, under the additional restriction $`t\rightarrow 0`$, $`G\rightarrow 0`$. This limit, which is referred to as critical crossover, implies that one must consider the limit in which the coefficient $`u`$ of the quartic term in the Landau–Ginzburg–Wilson (LGW) Hamiltonian goes to zero (as can be realized, e.g., in systems with a diverging interaction range ). Experimental systems evidently do not obey these restrictions: Here $`G`$ is a fixed parameter and the crossover functions are obtained by varying $`t`$, where it is generally assumed that within the critical region, i.e. for $`t`$ sufficiently small, one still observes a universal crossover. Recent work has shown that this assumption is only partially correct. In this paper, we therefore examine the role of some of the parameters that might be held responsible for deviations from the field-theoretic crossover curves and show what degree of universality one may still expect.
Anisimov et al. have suggested that, apart from the correlation length $`\xi `$, an additional (mesoscopic) length scale may determine the nature of the crossover behaviour for complex fluids. Based upon earlier work , a corresponding parametric crossover function was proposed and in Refs. it was shown that this function of two variables can indeed describe the crossover of the susceptibility exponent for several experimental systems displaying qualitatively different behaviour. On the other hand, there are quite a number of experimental results that can be described in terms of the above-mentioned single-parameter functions, although it should be noted that only few experiments have yielded accurate results for the effective exponents, which are defined as the logarithmic derivatives of the crossover functions . Furthermore, most experiments only partially cover the crossover region, which occupies several decades in the reduced temperature. Thanks to recent algorithmic developments, numerical methods can circumvent both of these limitations in an efficient way, which has already led to several notable results . In particular, it was demonstrated in Ref. that the crossover function for the effective susceptibility exponent $`\gamma _{\mathrm{eff}}^+`$ (pertaining to $`T>T_\mathrm{c}`$) for three-dimensional systems with a finite interaction range $`R`$ is steeper than the functions presented in Refs. . The reason for this discrepancy lies in $`u`$ not being small for small $`R`$. Whereas the scale of the crossover is determined by the Ginzburg number, the shape of the crossover functions is determined by $`u`$ . This makes it difficult to obtain an explicit expression for the crossover. One possibility is to invoke the description of Ref. . A (still somewhat phenomenological) fit to this description is indeed possible , but a further demonstration that the crossover functions depend on more than one parameter is clearly desirable. The numerical results presented in Refs. have been obtained for a block-shaped interaction profile (the so-called equivalent-neighbour model), where the interaction strength is kept constant within a radius $`R_m`$ and zero beyond that. As long as the interaction has a finite range, different interaction profiles will lead to the same universal properties , but not necessarily to the same crossover functions. Thus, there is a twofold objective in studying the effect of a modified interaction profile. In the first place, we want to study the effect of introducing an additional length scale in the block-shaped interaction profile, in order to study its influence on the crossover functions. Secondly, this modification of the interaction profile allows us to examine, on a more general level, the dependence of crossover functions on the shape of the interaction profile and thus to shed some light on the universal nature of these functions.
## 2 Simulational aspects and determination of critical properties
In order to maximize the numerical sensitivity to variations in the crossover function, we have restricted ourselves to the two-dimensional (2D) case only. An additional attractive aspect of this system is that the effective susceptibility exponent $`\gamma _{\mathrm{eff}}^{}`$ exhibits a remarkable non-monotonicity . It is quite possible that such a peculiar property is particularly sensitive to variations in the interaction profile. Thus, we have carried out Monte Carlo simulations for a 2D Ising model, defined on a square lattice of size $`L\times L`$ with periodic boundary conditions. Each spin interacts with a strength $`K_1`$ with all its neighbours within a distance $`r\le R_1`$ (domain $`D_1`$) and with a strength $`K_2`$ with all its neighbours within a distance $`R_1<r\le R_2`$ (domain $`D_2`$). This means that the block-shaped interaction profile of Ref. has been generalized to the double-block case depicted in Fig. 1. The strength ratio $`K_1/K_2`$ is denoted by the parameter $`\alpha `$, which throughout this work is taken to be greater than unity.
$$R^2\equiv \frac{\sum _{ij}|𝐫_i-𝐫_j|^2K_{ij}}{\sum _{ij}K_{ij}},$$
(1)
which for our interaction profile reduces to $`R^2=(\alpha \sum _{i\in D_1}r_i^2+\sum _{i\in D_2}r_i^2)/z_{\mathrm{eff}}`$, where $`z_{\mathrm{eff}}\equiv \alpha z_1+z_2`$, with $`z_i`$ the number of neighbours in domain $`D_i`$. The strength ratio was chosen as $`\alpha =16`$, in order to create a strong asymmetry between the two domains. The value of $`R_2`$ was kept fixed at $`\sqrt{140}`$. In the finite-size scaling analyses, the minimum system size has to be of the order of $`R_2^2`$, and a maximum linear system size $`L=1000`$ thus implies that the results cover a factor $`7`$ in $`L`$. The effective interaction range $`R`$ was then varied by varying $`R_1`$. Both for $`R_1\rightarrow 0`$ and for $`R_1\rightarrow R_2`$, $`R`$ will take its maximum value (which in the continuum limit approaches $`R_2/\sqrt{2}`$) and it will reach a minimum at $`R_1=R_{\mathrm{min}}`$. In the continuum limit, the corresponding effective range is $`R^2=R_{\mathrm{min}}^2=R_2^2/(\sqrt{\alpha }+1)`$. Although the same values for $`R`$ can be reached with $`R_1<R_{\mathrm{min}}`$ and $`R_1>R_{\mathrm{min}}`$, it should be noted that the two cases greatly differ. For example, for $`R_2^2=140`$ and $`\alpha =16`$ (where $`R_{\mathrm{min}}^2=28.06`$ is reached for $`z_1=88`$, i.e. $`26\le R_1^2\le 28`$) one may obtain $`R^2\approx 50`$ by choosing either $`R_1^2=4`$ or $`R_1^2=93`$, but in the former case $`D_1`$ contains 12 out of 436 interacting neighbours, compared to 292 out of 436 in the latter case. This means that the integrated coupling ratio $`\alpha z_1/z_2`$, which indicates the relative contribution of the two domains to the total integrated coupling, is 0.45 in the first case and 32 in the second case. So, in combination with the original block-shaped profile, we can realize three qualitatively different interaction profiles and study the dependence of the crossover curve on the profile. Table I lists some properties of the interaction profiles considered in this work. The three profiles with $`R_2^2<140`$ were added in order to reach very small effective interaction ranges as well. For each choice, we have carried out extensive simulations using a dedicated cluster algorithm for long-range interactions . The critical properties of each individual system were determined via finite-size scaling analyses, along the lines described in Ref. . The critical coupling, for which an accurate value is required to attain the proper crossover curve, has been determined from the amplitude ratio $`Q=\langle m^2\rangle ^2/\langle m^4\rangle `$, and we have obtained the magnetic exponent $`y_\mathrm{h}`$ from the absolute magnetization density $`\langle |m|\rangle `$ (see Table I). One notes that for all systems $`Q`$ is in good agreement with the 2D Ising value $`Q_\mathrm{I}\approx 0.856216`$ and $`y_\mathrm{h}`$ lies very close to $`15/8`$. This confirms the expectation that all systems belong to the 2D Ising universality class. In addition, we have determined the magnetic susceptibility for $`t<0`$ from the fluctuation relation $`\chi =L^d(\langle m^2\rangle -\langle |m|\rangle ^2)/k_\mathrm{B}T`$ and the critical finite-size amplitudes of $`\langle |m|\rangle `$ and $`\langle m^2\rangle `$.
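For concreteness, a small sketch that evaluates Eq. 1 for the double-block profile by direct summation over the square lattice (our own illustration; the parameter names are ours):

```python
import numpy as np

def effective_range(R1sq, R2sq, alpha):
    """Effective interaction range R (Eq. 1) for the double-block profile:
    coupling alpha*K_2 for 0 < r^2 <= R1sq (domain D_1) and K_2 for
    R1sq < r^2 <= R2sq (domain D_2), on a square lattice."""
    n = int(np.ceil(np.sqrt(R2sq)))
    x, y = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))
    r2 = (x**2 + y**2).ravel()
    r2 = r2[(r2 > 0) & (r2 <= R2sq)]        # interacting neighbours only
    w = np.where(r2 <= R1sq, alpha, 1.0)    # relative coupling strengths
    return np.sqrt((w * r2).sum() / w.sum())

# e.g. effective_range(4, 140, 16)**2 should lie close to 50 (cf. Table I)
```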
## 3 Range dependence of critical properties and analysis of the crossover functions
Figure 2 shows the critical temperatures as a function of the interaction range. All temperatures are expressed in units of the critical temperature of the mean-field model, $`T_\mathrm{c}=1/(z_{\mathrm{eff}}K_\mathrm{c})`$. Since fluctuations are less suppressed when the interaction range decreases, one observes that $`T_\mathrm{c}`$ is gradually depressed for smaller $`R`$. More importantly, the figure illustrates that a definitely non-universal quantity like $`T_\mathrm{c}`$ does not depend on the effective interaction range alone: Although for several systems the critical temperature lies on the curve describing $`T_\mathrm{c}(R)`$ for the block-shaped interaction profile, the systems with $`R_1\lesssim R_{\mathrm{min}}`$ ($`R_1^2=4,27`$) exhibit a clear deviation from this curve. Apparently, the latter systems show (for a given $`R`$) the greatest deviation from the mean-field model, in the sense that the suppression of fluctuations is least efficient.
In contrast, no such deviations are observed for the critical finite-size amplitude of, e.g., the absolute magnetization density. This quantity, defined as $`d_0\equiv \lim _{L\rightarrow \infty }L^{d-y_\mathrm{h}}\langle |m|\rangle `$, has an asymptotic range dependence proportional to $`R^{(3d-4y_\mathrm{h})/(4-d)}`$ . It turns out that the systems investigated in this work not only follow this asymptotic law, but also for smaller $`R`$ agree very well with the range dependence found for the block-shaped profile, see Fig. 3. This even holds for highly asymmetric profiles such as $`R_1^2=4;R_2^2=140`$.
The quantity of central interest, however, is the magnetic susceptibility $`\chi `$ below $`T_\mathrm{c}`$. In Fig. 4 the crossover function for the block-shaped profile is shown as a reference curve. The parameter along the horizontal axis is proportional to $`t/G`$ ($`G\propto R^{-2d/(4-d)}`$) and the susceptibility is divided by a factor $`R^2`$ to obtain a data collapse for different ranges. One clearly observes how the solid curve interpolates smoothly, but with a non-monotonic derivative, between the Ising asymptote and the classical asymptote. Within the same figure, we have also plotted the finite-size data for four different double-block profiles. Several remarks are in order here. First, all data have been divided by a range-dependent factor describing the deviation of the connected susceptibility from its asymptotic range dependence. This is similar to the difference between the dashed curve and the solid line in Fig. 3; as this curve turns out to depend solely on the value of $`R`$ and not on the shape of the interaction profile, it is permissible to use the same expression for all systems. For large interaction ranges, the correction factor approaches unity. Secondly, at the right-hand side of the graph the data points start to deviate from the reference curve. This is caused by the fact that, sufficiently close to $`T_\mathrm{c}`$, the diverging correlation length is truncated by the finite system size. For the systems with $`R_1^2=4`$ and $`R_1^2=93`$ (both with $`R_2^2=140`$) this happens in the figure at different temperatures, despite the fact that they have very similar values for the effective range $`R`$. The reason for this is that the data points pertain to different system sizes, viz. $`L=1000`$ and $`L=300`$, respectively. The inset shows that for the same system size ($`L=300`$) the data points for both systems virtually coincide, even in the finite-size regime! Finally, the left-most data points have been corrected for saturation effects, which are fully described by mean-field theory, cf. Ref. . This is merely a presentational issue: the saturated curves (which display a strong decrease of the susceptibility) also show no dependence on the shape of the interaction profile. The primary message of Fig. 4, however, is that for all interaction profiles the data in the thermodynamic limit perfectly coincide with the reference curve for the block-shaped potential. We view this as a strong indication that crossover functions possess a considerable degree of universality and conclude that a second length scale ($`R_1`$) of the form introduced in this work is insufficient to induce modifications of the crossover functions, contrary to some expectations.
## 4 Conclusions
In summary, we have examined the critical properties of two-dimensional Ising-like models with an extended interaction range, for several different shapes of the interaction profile. In addition, we have calculated the crossover function for the susceptibility in the low-temperature regime, describing the crossover from classical to non-classical critical behaviour upon approach of the critical point. Although recent work has suggested that this function cannot be described in terms of a single parameter, namely the reduced temperature divided by the Ginzburg number, we find that it is independent of the precise shape of the interaction profile. Irrespective of the presence of an additional length scale or a high asymmetry in the interaction profile, all examined systems can be classified according to a single additional parameter describing the effective interaction range. In particular, the non-monotonic crossover of the effective susceptibility exponent, as found in Ref. , is not a peculiarity of the block-shaped interaction profile, but can be observed for all systems studied in the present work, provided that the effective interaction range is sufficiently large. Furthermore, a corollary of the results presented here is that the coefficient $`u`$ in the LGW Hamiltonian also appears to have only a weak dependence on the shape of the interaction profile. Of course, this still leaves the possibility that other parameters, that still need to be identified, have a more pronounced influence on $`u`$ and thus on the crossover functions.
## Acknowledgements
E. Luijten gratefully acknowledges illuminating discussions with M. A. Anisimov and J. V. Sengers. We thank the HLRZ Jülich for computing time on a Cray-T3E.
# Evidence for an ionised disc in the narrow-line Seyfert 1 galaxy Ark 564
## 1 Introduction
Narrow-line Seyfert 1 galaxies (NLS1), defined as having H$`\beta `$ FWHM $`\le `$ 2000 km/s (Osterbrock & Pogge 1985), possess distinctive X-ray properties that set them apart from ‘normal,’ broad-line Seyfert 1 (BLS1) galaxies. ROSAT observations have shown that NLS1s exhibit rapid variability and steep spectra in the soft X-ray band (Boller et al. 1996), with more recent ASCA measurements revealing that this anomalous spectral steepness often extends into the 2–10 keV band (Brandt et al. 1997; Vaughan et al. 1999).
It seems likely that the distinctive observational properties of this sub-class of AGN relate to some fundamental physical parameter common to all NLS1. By analogy with the ‘high state’ properties of Galactic black-hole candidates (GBHCs), it has been suggested that the fundamental ‘driver’ in NLS1 is the central black hole accreting at or above the Eddington limit (Pounds et al. 1995; Laor et al. 1997). One consequence of a high accretion rate might be that most of the power in NLS1 is liberated in a hot, photoionised accretion disc rather than a disc corona (Ross et al. 1992). Thermal emission from such a hot disc could then explain the strong soft excess flux and steep hard X-ray continuum often seen in NLS1s, as the copious soft photons will Compton cool the disc corona resulting in a steepening of the spectrum (Pounds et al. 1995).
The X-ray spectrum should also bear the signature of Compton ‘reflection’ from the disc surface, which is expected to be highly ionised in high accretion rate objects (e.g., Matt et al. 1993). The form of the reflection features, particularly the iron K line and absorption edge and the form of the soft X-ray continuum, should therefore differ in NLS1s from BLS1s if indeed NLS1s are accreting at a higher rate. Tentative support for this view has recently been presented by Comastri et al. (1998a) and Turner et al. (1999) who find evidence for an emission line near 7 keV, consistent with K$`\alpha `$ emission in hydrogen-like iron in the NLS1 Ton S180.
Ark 564 is the brightest known NLS1 in the 2–10 keV band and hence can be considered an ideal object in which to study the spectral features described above. ROSAT data revealed a complex spectrum in the soft X-ray band, well fitted with either a power law and strong soft excess, or a (steeper) power law and an absorption edge at 1.2 keV (Brandt et al. 1994). Vaughan et al. (1999), in their study of the ASCA spectra of 22 NLS1s, find evidence in Ark 564 for an emission feature at $`\sim `$1 keV or a broad absorption feature at 1.2 keV. In the present paper we perform a more detailed analysis of these ASCA data together with simultaneous RXTE observations of Ark 564, to provide the first study of a NLS1 out to 20 keV. All line and edge energies derived from the spectral fitting are given in the rest frame of the source and errors are quoted at the 90% confidence level unless stated otherwise.
## 2 Observations and data reduction
### 2.1 The ASCA observations
Ark 564 was observed by ASCA on 23–25 December 1996 for a duration of 103 ksec. After applying standard screening criteria, the total ‘good’ exposure time reduced to 47 ksec. In both pairs of SIS and GIS instruments counts were accumulated from a 4′ circular aperture centred on the source position, with background estimated from source-free regions at similar off-axis angles. The derived pulse height spectra were binned to give at least 20 counts per spectral channel. There appear to be significant calibration problems in these ASCA spectra at soft energies; the SIS and GIS spectra diverge at $`\sim 1`$ keV, differing by 30% at 0.8 keV, and the two SIS spectra diverge from each other at lower energies. This is most likely due to radiation damage to the SIS CCDs. The response of the GIS detectors is not thought to be time-dependent, but their sensitivity is poor below 1 keV. In order to minimize the effect of these uncertainties in the spectral analysis presented here, the SIS data below 1.0 keV have been ignored, as have the GIS data below 0.8 keV.
We note that the SIS lower-level discriminator setting changed during the observation. In order to assess the impact on the spectral analysis, we generated separate SIS spectra and response matrices for each discriminator setting. The difference between the spectra obtained with the different discriminator settings was not significant and was much smaller than the difference between the SIS-0 and SIS-1 spectra. We therefore did not distinguish between data gathered with different discriminator settings in the following analysis.
Source and background light curves were extracted in 128s bins, from both SIS detectors, in the soft (0.5–2 keV) band. (Note that a softer band was used for the light curves because the spectral calibration problems above have a much smaller effect on temporal analysis.) The two background subtracted light curves were combined to increase signal/noise and the resulting light curve was binned by orbit (5760s) for the temporal analysis.
### 2.2 The RXTE observations
Ark 564 was observed by RXTE simultaneously with ASCA for a duration of 95 ksec. Data from the top (most sensitive) layer of the PCU array were extracted using the rex reduction script supplied by NASA/GSFC. Poor-quality data were excluded on the basis of the following acceptance criteria: the satellite is out of the South Atlantic Anomaly; Earth elevation angle $`\ge `$ 10°; offset from the optical position of Ark 564 $`\le `$ 0.02°; and electron-0 $`\le `$ 0.1. This last criterion removes data with a high anti-coincidence rate in the PCUs. The total ‘good’ exposure time selected was 42 ksec. Data were collected from PCUs 0, 1 and 2 and the background was estimated using the L7–240 model.
As with the ASCA data, the background-subtracted pulse height spectrum from the PCU array was binned to give at least 20 counts per channel. In the subsequent spectral fitting the RXTE data below 2.5 keV have been ignored in order to avoid calibration uncertainties, as well as above 20 keV where the signal is almost entirely background. The hard band (3–12 keV) light curve was extracted in 16s bins and, as with the ASCA light curve, was rebinned by orbit.
## 3 Spectral analysis
The background subtracted spectra from ASCA and RXTE were fitted using the xspec v10.0 software package. After examining the spectral fits separately to check for cross-calibration problems, we then fitted the spectra from both satellites simultaneously but with the relative normalisations free to vary. The RXTE spectrum overestimates the flux in the overlapping 2.5–10 keV band compared with ASCA, but there are no systematic differences in the shape of the spectrum. The fitting of a simple power-law model to the combined ASCA and RXTE spectra revealed strong residuals in the 5–9 keV band and a strong soft excess below 1.5 keV (as in Fig. 1).
### 3.1 The 2.5–20 keV spectrum
We first examined the spectra in the 2.5–20 keV band, ignoring for the time being the lower energy data. A simple spectral fit using a model consisting of a power-law continuum modified by Galactic absorption of $`N_H=6.4\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990) gives a steep slope but a poor overall fit to the data (see Table 1, model 1). In particular there are clear deviations in the data–model residuals around 5–9 keV (see Fig. 1).
The addition of a narrow ($`\sigma =0.01`$ keV) Gaussian line at 6.4 keV, to represent K$`\alpha `$ emission from neutral iron (model 2), improved the fit significantly ($`>99.99`$% significance in an F-test). The best-fit energy of the line is consistent with 6.4 keV but the width of this line is not well constrained using these data. Even after the addition of an iron line the residuals show a deficit at $`\sim `$8 keV, particularly in the RXTE data. In order to quantify these features we fitted a range of simple models (see Table 1). Specifically, we tested models with a line and an edge at energies expected from neutral iron (model 3), from helium-like iron (model 4), and with the line and edge energies free to vary (model 5). The He-like features provided a better fit than neutral features, but the best fit values lie between these two extremes (see Fig. 2). The important point is that the iron K-edge is at an energy clearly above that for neutral iron. We have verified that the edge energy is not an artefact of trying to fit a possible broad emission feature with a narrow line, by repeating the fit with lines of increasing width, up to $`\sigma =0.5`$ keV. The absorption edge energy and optical depth remained consistent with the values given above for the narrow line fit. In particular, the measured energy of the edge implies an origin in strongly ionised material. This could, in principle, lie along the line of sight (a ‘warm absorber’) or arise by reflection from optically thick matter having a highly ionised surface layer (an ‘ionised reflector’).
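The significance quoted above for the added line follows from an F-test; a minimal sketch of the conventional version for $`\chi ^2`$ fitting (illustrative only; the $`\chi ^2`$ values and degrees of freedom would be taken from Table 1):

```python
from scipy.stats import f as f_dist

def ftest_added_component(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Significance of an added model component (e.g. the narrow Gaussian
    iron line of model 2) from the chi^2 improvement per added parameter,
    measured against the remaining reduced chi^2."""
    d_dof = dof_simple - dof_complex
    F = ((chi2_simple - chi2_complex) / d_dof) / (chi2_complex / dof_complex)
    p = f_dist.sf(F, d_dof, dof_complex)   # null-hypothesis probability
    return F, 1.0 - p                      # F statistic and significance
```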
To pursue these alternatives we first fitted the spectra above 2.5 keV with the absori warm absorber model contained within the xspec package. The free parameters were power-law slope and normalisation, iron line equivalent width, hydrogen column density and the ionisation parameter ($`\xi =L/nr^2`$; where $`n`$ is the particle density at a distance $`r`$ from a source of ionising luminosity $`L`$, measured in the 0.005–20 keV range). The temperature of the absorbing gas was fixed at $`10^6`$ K. This model provides a good fit to the data ($`\chi _\nu ^2=0.96`$; see Table 1, model 6). An unusually high $`\xi `$ is needed to explain the energy of the iron edge, with a column density in excess of $`10^{23}`$ cm<sup>-2</sup>. Such a highly ionised absorber, if indeed stable, will be essentially transparent at lower energies.
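For orientation, the definition of $`\xi `$ is easy to evaluate; the numbers below are assumed round values for the sketch, not quantities measured for Ark 564:

```python
# xi = L / (n r^2), in erg cm / s; all inputs are illustrative assumptions.
L_ion = 1e44    # ionising luminosity (erg/s), assumed
n     = 1e8     # particle density (cm^-3), assumed
r     = 1e16    # distance from the ionising source (cm), assumed
print(L_ion / (n * r**2))   # 1e4 -- a strongly ionised regime
```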
As an alternative we fitted the 2.5–20 keV spectrum with the ionised reflector model pexriv in xspec (Magdziarz & Zdziarski 1995). This model consists of a power law and a reflection component from ionised material but does not contain the emission lines expected from such a reflector. The energy of the iron line is only poorly constrained but is consistent with emission in the range 6.3–6.7 keV (see Fig. 2). We therefore added a narrow emission line at 6.4 keV to model the iron K$`\alpha `$ line. This is not necessarily inconsistent with emission from an ionised disc as the line is expected to be broadened and redshifted if the emitting region is in close proximity to the central black hole. The free parameters in the fit were the power law index and normalisation, iron line equivalent width, the reflection strength ($`R`$) and the ionisation parameter. The inclination of the reflector was fixed at 30° (a value which is unlikely to be exceeded in NLS1s) and the elemental abundances were assumed to be solar. The surface temperature of the reflector was fixed at $`10^6`$ K, consistent with values expected for an ionised disc (e.g., Ross et al. 1999). This model also provides a good fit to the data ($`\chi _\nu ^2=0.96`$; Table 1, model 7), comparable with the warm absorber model, with plausible values for both $`\xi `$ and $`R`$.
### 3.2 The 0.8–20 keV spectrum
Having obtained a satisfactory description of the hard X-ray spectrum of Ark 564 we extended our analysis to cover the 0.8–20 keV range, but exclude the SIS data below 1.0 keV in order to avoid the calibration uncertainties mentioned above. Applying the ionised reflector model (with parameters as in Table 1, model 7) gave some reduction in the ‘soft excess,’ due to the enhanced reflectivity of the ionised matter. However the fit (Table 2, model 1) leaves a substantial excess flux which can be well modelled by the addition of a $`\mathrm{kT}\sim 100`$ eV black body component (leading to $`\mathrm{\Delta }\chi ^2>445`$; Table 2, model 2).
The steep underlying power law, enhanced by the high reflectivity of the ionised disc, leads to a smaller black body component than would otherwise be the case. We note, furthermore, that a more complete ionised reflector model would include significant soft X-ray emission. Reference to the disc reflection model recently published by Ross et al. (1999) suggests that ionised oxygen features, in particular the recombination continuum of O viii (above 0.87 keV) and the O viii Ly-$`\alpha `$ emission line (at 0.65 keV), will be prominent for values of the ionisation parameters similar to those derived for Ark 564. Therefore, as an alternative to the black body component, we refitted the ionised reflector model with the addition of O viii recombination at 0.87 keV (using the redge model in xspec). The result (Table 2, model 3) was a good fit (very similar, in statistical terms, to the previous model) with a best fit temperature of $`\sim 50`$ eV (i.e. within a factor of two of our earlier assumption of a disc temperature of $`10^6`$ K). We conclude from this that, to first order, both the iron K features and the soft excess in the ASCA and RXTE observation of Ark 564 can be explained in terms of reflection from highly ionised matter, presumably the putative accretion disc in this NLS1. Specifically, the anomalous spectral form of Ark 564 below 2 keV may have a natural explanation in terms of the combined effects of deep O viii and Si xiii and xiv edges (present in the pexriv reflection continuum) together with associated recombination continuum and line emission (see Fig. 3).
One important consequence of our ionised reflector description for Ark 564 is the need to reassess the strong ‘primary’ soft emission component, thought to be a major feature of NLS1 (and generally associated with internally generated emission from the accretion disc) and extending up to 1 keV. Indeed when we included the ROSAT spectrum considered by Brandt et al. (1994) in our model fitting (assuming that the spectral shape of Ark 564 is constant but allowing the relative level of the spectrum to vary from epoch to epoch) we found that the ionised reflector, now including O viii recombination and Ly-$`\alpha `$ emission (the latter with an equivalent width of $`\sim 120`$ eV), allowed a reasonable fit down to 0.2 keV (Table 2, model 4).
Comastri et al. (1998b) present a preliminary analysis of a BeppoSAX observation of Ark 564. Our interpretation of the X-ray spectrum of Ark 564 in terms of an ionised reflector is broadly consistent with the measured BeppoSAX spectrum.
## 4 Temporal analysis
These data were also used to study the variability properties of Ark 564. Fig. 4 shows that the ASCA soft (0.5–2 keV) and RXTE hard (3–12 keV) band light curves reveal strong flaring, with an increase of $`>`$50% in a single orbit that is almost identical in amplitude and phase in both bands.
In order to quantify the variability in Ark 564, we computed the ‘excess variance’ ($`\sigma _{xs}^2`$) in the same fashion as Nandra et al. (1997), except that paper used 128 s bins while we use orbital bins. For Ark 564, the hard and soft band excess variances were measured to be $`\sigma _{xs}^2=0.0322`$ and 0.0325, respectively. This compares with a typical value found by Nandra et al. (1997) for ‘normal’ Seyfert 1s with a similar luminosity ($`\sim 6\times 10^{43}`$ erg/s) of $`\sigma _{xs}^2=0.005`$ in the ASCA 0.5–10 keV band.
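A minimal sketch of this estimator, assuming the normalised excess-variance definition of Nandra et al. (1997) applied to orbital bins (`rate` and `err` are hypothetical count rates and their errors):

```python
import numpy as np

# sigma_xs^2 = (1/(N mu^2)) * sum[(x_i - mu)^2 - err_i^2]
def excess_variance(rate, err):
    rate, err = np.asarray(rate, float), np.asarray(err, float)
    mu = rate.mean()
    return ((rate - mu)**2 - err**2).sum() / (rate.size * mu**2)
```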
It is also interesting that Ark 564 shows almost identical variability amplitudes in the hard RXTE and soft ASCA bands. Furthermore, the hard and soft band variations track almost perfectly, with no measurable lag, especially during the large flare. In terms of our present model of Ark 564, in which a substantial fraction of the soft X-ray flux is reprocessed harder radiation, the observed limit on the lag ($`\lesssim `$ 96 min) gives a maximum size for the effective reprocessing region of $`2\times 10^{14}`$ cm.
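The quoted size is simply the light-crossing estimate $`R\lesssim c\mathrm{\Delta }t`$; numerically:

```python
c_cgs = 2.998e10           # speed of light, cm/s
dt    = 96 * 60            # lag limit, s
print(c_cgs * dt)          # ~1.7e14 cm, i.e. of order 2e14 cm
```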
## 5 Discussion
The simultaneous ASCA and RXTE data presented here give the first determination of the spectrum of an ‘ultra-soft’ NLS1 extending above 10 keV. The spectrum is extremely steep ($`\mathrm{\Gamma }\sim 2.7`$) and shows little sign of flattening at harder energies. There is good evidence for both K-edge and line features from ionised iron and the spectrum also shows a strong excess over the best-fit power law below 1.5 keV. Due to calibration problems with the ASCA detectors at low energies (see Iwasawa, Fabian & Nandra 1999), the data have not been fitted below 0.8 keV and so the form of this soft excess is not well determined by these data.
The existence of strong soft X-ray emission and an underlying power law much steeper than typical of BLS1s and quasars are well established characteristics of the X-ray spectra of NLS1s. It has been suggested that the two features are linked, whereby Compton cooling of the hard X-ray source (possibly in a disc corona), and hence its steeper power law, is a consequence of the strong soft EUV flux (Pounds et al. 1995). In this picture, the soft component, probably peaking in the hidden EUV band, is intrinsic emission from the accretion disc, which is expected to be stronger in high accretion rate objects (e.g. Szuszkiewicz et al. 1996).
In the present analysis we now find other spectral features, namely an ionised iron K-edge and recombination emission below $`\sim `$1.5 keV, indicative of reflection from an ionised disc. The higher level of irradiation thought to occur in NLS1s would lead naturally to the surface layers of the disc becoming strongly ionised (Matt et al. 1993). In contrast, the alternative interpretation we have considered for the observed iron K-edge, in terms of absorption in a large column of highly ionised gas, is less attractive, since it may well be unstable (Netzer 1996) and because such material would be transparent in the soft X-ray band, leaving the need for a separate explanation of the observed spectral features below $`\sim `$2 keV.
An important consequence of the steep underlying power law of Ark 564, enhanced at low energies both by the high reflectivity of the disc and the additional line and recombination emission, is that the need for a primary emission ‘soft excess,’ at least within the observable X-ray band, must be reconsidered when better data from e.g., XMM and Chandra become available. However, the circumstantial evidence for the soft X-ray/EUV flux to dominate energetically in NLS1 remains persuasive, particularly in providing a natural explanation for the steep hard power law spectrum and the absence of broad optical lines.
Finally we note that evidence for reflection from an ionised disc has been reported in several GBHC, exhibiting strong, ionised edges with weak, Compton-broadened Fe lines (e.g., Zycki et al. 1997, 1998; but see Done & Zycki 1999). The similarity of NLS1 to GBHC in their high flux mode has been noted earlier (Pounds et al. 1995). It is interesting to speculate that the reported weakness in higher luminosity AGN (e.g. Reeves et al. 1997), of the reflection components commonly seen in BLS1 (George et al. 1998), may be due to the accretion disc material in the former being highly ionised, with the classical indicators of cold reflection, namely strong iron fluorescence and a continuum hump near 10 keV, consequently reduced.
## Acknowledgments
This research made use of data obtained from the LEDAS and HEASARC. We thank Andy Young for providing data files on the Ross et al. ionised disc model. SV acknowledges a research studentship from PPARC. RE acknowledges support from NASA grant NAG 5-3295.
# Suppression of 𝑻_𝒄 in superconducting amorphous wires
Disorder suppresses the superconductivity transition in morphologically homogeneous superconductors because the diffusive character of the electron motion in dirty systems makes the Coulomb interaction more effective. As a result, the attraction between the electrons in Cooper pairs becomes weaker, and the transition temperature, $`T_c`$, is lowered. In two dimensions ($`2D`$) the influence of disorder on $`T_c`$ can be studied systematically by varying the film thickness $`d`$. In uniform films $`T_c`$, being well defined, is suppressed as the sheet resistance, $`R_{\mathrm{}}`$, increases with decreasing $`d`$. (For a review see Ref. .) When the geometry of the sample is such that its dimension is lowered towards the one-dimensional ($`1D`$) limit, the suppression of superconductivity should become more pronounced.
Recently, efforts have been made to extend the experiments in films to narrow wires by fabricating a series of amorphous $`Pb`$ wires of different thicknesses and widths. It has been found that the $`T_c`$-suppression becomes stronger as the wires’ width is reduced below $`1000`$ Å. The experiment of Refs. is in the crossover region from $`2D`$ to $`1D`$. Actually, the wires are in the $`1D`$ limit as far as superconducting fluctuations are concerned, but they are in the crossover region from $`2D`$ to $`1D`$ with respect to the diffusive motion of the electrons.
From the theoretical point of view, the problem of $`T_c`$-suppression in $`1D`$ wires is rather intriguing. As is well known, the superconductivity transition is determined by a series of logarithmically divergent terms describing the electron scattering in the two-particle Cooper channel. In $`2D`$ systems the corrections due to the electron-electron ($`e`$–$`e`$) interactions combined with disorder are logarithmically divergent as well. As the whole problem is controlled by logarithmic singularities, it can be studied by renormalization group (RG) methods. In $`1D`$, due to the reduced dimensionality, the effect of $`e`$–$`e`$ interactions is more singular. It produces corrections that diverge as the inverse square root of the frequency. The presence of two types of singularities demands a special analysis in the calculation of $`T_c`$. In this paper we develop a theory that describes adequately the effect of the dynamically enhanced $`e`$–$`e`$ interaction on $`T_c`$ in the crossover region from $`2D`$ to $`1D`$ and perform a detailed comparison with the experiment.
The mean field temperature, $`T_c`$, is defined as the temperature at which the electron scattering amplitude in the Cooper channel, $`\mathrm{\Gamma }_c`$, becomes infinite. Fluctuations of the superconductivity order parameter lead to a broadening of the phase transition. However, its mean field temperature can be found experimentally by fitting the upper part of the resistive transition to the Aslamazov–Larkin theory. The diagrammatic representation of the amplitude $`\mathrm{\Gamma }_c`$ is shown in Fig. 1. In addition to the contact BCS-interaction amplitude $`\gamma `$, the terms arising as a result of the interplay of the Coulomb interaction and disorder are also included in the Cooper ladder-diagram series. \[The impurity scattering does not influence the $`e`$–$`e`$ interaction mediated by phonons because in the long wavelength limit the lattice defects oscillate together with the ions.\] The resulting equation for $`\mathrm{\Gamma }_c`$ is:
$$\mathrm{\Gamma }_c(ϵ_n,ϵ_l)=-|\gamma |+t\mathrm{\Lambda }(ϵ_n+ϵ_l)-2\pi T\underset{m=0}{\overset{M}{}}\left[-|\gamma |+t\mathrm{\Lambda }(ϵ_n+ϵ_m)\right]\frac{1}{ϵ_m}\mathrm{\Gamma }_c(ϵ_m,ϵ_l),$$
(3)
where $`ϵ_m=2\pi T(m+1/2)`$ is the Matsubara frequency, and the summation over $`m`$ is limited by $`M=\left(2\pi T\tau \right)^{-1}`$. In this equation $`\gamma `$, the bare value of the amplitude $`\mathrm{\Gamma }_c`$, is rescaled in such a way that the Debye frequency as a cut off energy is substituted by $`\tau ^{-1}`$, the inverse of the scattering time. Then, $`\gamma =1/\mathrm{ln}\left(T_{c0}\tau /1.14\right),`$ where $`T_{c0}`$ is the temperature of the superconducting transition in the bulk limit. The parameter $`t=(e^2/2\pi ^2\mathrm{})R_{\mathrm{}}`$ characterizes the level of disorder in a sample, where $`R_{\mathrm{}}`$ is the sheet resistance. The amplitude $`\mathrm{\Lambda }`$ describing the combined action of the $`e`$–$`e`$ interaction and disorder is given by
$$\mathrm{\Lambda }(\omega _n)=u\frac{4\pi D}{La}\underset{q_L,q_a}{}\frac{1}{Dq_L^2+Dq_a^2+\omega _n},$$
(4)
where $`a`$ and $`L`$ are the width and the length of the wire, respectively. The parameter $`u`$ describes the amplitude of the $`e`$–$`e`$ interaction when the momentum $`q`$ transferred by this interaction is not too small compared with the transferred frequency $`\omega `$, namely when $`q\gtrsim q_\omega =\sqrt{\omega /D}`$. (As was explained in Refs. , the most divergent contributions from the region $`q<`$ $`q_\omega `$ cancel each other out. In this region of small momenta, the $`e`$–$`e`$ interaction depends only on the frequency, and therefore it can be gauged out.) Next, for amorphous $`Pb`$ films the spin-orbit scattering time is expected to be only a few times longer than the elastic scattering time and therefore the part of the $`e`$–$`e`$ interaction related to spin density fluctuations can be neglected. In that case, we may take $`u`$ to be the value of the screened Coulomb interaction amplitude in the region of momenta $`q\gtrsim q_\omega `$, which gives $`u\approx 1/2`$.
In $`2D`$ the summation in Eq. (4) yields $`\mathrm{\Lambda }(\omega _n)\approx u\mathrm{ln}(1/\omega _n\tau ).`$ Therefore, Eq. (3) combines the usual BCS logarithms together with the ones arising due to disorder. Unlike the ladder diagrams in the BCS-theory, the integrations in the different blocks of the diagrams in Fig. 1 cannot be factorized, because $`\mathrm{\Lambda }\left(ϵ_n+ϵ_m\right)`$ matches the frequency arguments of two neighboring blocks. In order to solve this parquet-like equation with a logarithmic accuracy one uses the approximation $`\mathrm{ln}(z+z^{\prime })\approx \mathrm{ln}(\mathrm{max}\{z,z^{\prime }\})`$, see e.g. Ref. . Then, it is possible to apply the “maximum section” method. This procedure leads to the RG equation for the amplitude $`\mathrm{\Gamma }_c(\epsilon ,\epsilon )`$: $`d\mathrm{\Gamma }_c/dl_\epsilon =ut-\mathrm{\Gamma }_c^2,`$ where $`l_\epsilon =\mathrm{ln}(1/\epsilon \tau )`$. The integration of the RG equation gives the suppression of $`T_c`$ by the Coulomb interaction in $`2D`$ disordered systems:
$$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\frac{1}{|\gamma |}-\frac{1}{2\sqrt{ut}}\mathrm{ln}\frac{1+\sqrt{ut}/\left|\gamma \right|}{1-\sqrt{ut}/\left|\gamma \right|}.$$
(5)
This formula accurately describes the experimental results in $`MoGe`$ films, with $`u=1/2`$ and using only one fitting parameter, $`\gamma `$.
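As a numerical illustration, Eq. (5) is easy to evaluate directly. The sketch below assumes $`u=1/2`$ and, purely for definiteness, the value $`\gamma =0.16`$ quoted later for $`Pb`$ films:

```python
import numpy as np

def tc_ratio_2d(t, gamma=0.16, u=0.5):
    # Eq. (5); meaningful while sqrt(u*t) < gamma, where Tc stays finite
    s = np.sqrt(u*t)
    return np.exp(1.0/gamma - np.log((1 + s/gamma)/(1 - s/gamma))/(2*s))

print(tc_ratio_2d(0.01))   # Tc/Tc0 ~ 0.6 for t = 0.01
```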
In $`1D`$ the result of the summation in Eq. (4) yields a square root singularity in the amplitude $`\mathrm{\Lambda }(\omega _n)`$. When one deals with singularities stronger than logarithmic ones, the approximations of the maximum section method cease to be valid, and a different method should be invented. In this Letter we treat the problem of finding $`T_c`$ from Eq. (3) as a sort of an eigenvalue problem, which leads to an implicit equation for $`T_c`$. To see this, we will consider $`\mathrm{\Gamma }_c(ϵ_n,ϵ_m)`$ as the matrix elements of a matrix $`\widehat{\mathrm{\Gamma }}_c`$, and will write the solution of Eq. (3) for $`\mathrm{\Gamma }_c`$ in matrix notations:
$$\widehat{\mathrm{\Gamma }}_c=\widehat{ϵ}^{\frac{1}{2}}\left(\widehat{I}-|\gamma |\widehat{\mathrm{\Pi }}\right)^{-1}\widehat{ϵ}^{-\frac{1}{2}}\left(-|\gamma |\widehat{1}+t\widehat{\mathrm{\Lambda }}\right).$$
(6)
Here $`\widehat{\mathrm{\Pi }}(T)=\widehat{ϵ}^{-1/2}[\widehat{1}-|\gamma |^{-1}t\widehat{\mathrm{\Lambda }}]\widehat{ϵ}^{-1/2}`$, $`\widehat{ϵ}_{nm}=\delta _{nm}(n+1/2)`$, $`\widehat{\mathrm{\Lambda }}_{nm}=\mathrm{\Lambda }(ϵ_n+ϵ_m)`$, $`\widehat{1}_{nm}=1,`$ and $`\widehat{I}`$ is a unit matrix. Eq. (6) is written in such a form that $`\widehat{\mathrm{\Pi }}`$ is a symmetric matrix. Notice, that the dependence of $`\widehat{\mathrm{\Pi }}`$ on the temperature $`T`$ is not only through the dependence of $`\widehat{\mathrm{\Lambda }}`$ on the Matsubara frequencies, but also through the matrix rank $`M=\left(2\pi T\tau \right)^{-1}.`$ The amplitude $`\mathrm{\Gamma }_c`$ diverges when the temperature is such that one of the eigenvalues of the matrix $`\widehat{\mathrm{\Pi }}(T)`$ is equal to $`|\gamma |^{-1}`$, i.e., at $`T=T_c`$ the equation
$$\left[|\gamma |^{-1}\widehat{I}-\widehat{\mathrm{\Pi }}(T_c)\right]|\mathrm{\Psi }\rangle =0$$
(7)
holds. Thus, the equation determining $`T_c`$ can be obtained from an eigenvalue problem. \[One can also obtain an equation for $`T_c`$ by considering a BCS-like gap equation with frequency dependent interaction vertex, $`-|\gamma |+t\mathrm{\Lambda }`$.\] The matrix elements of $`\widehat{\mathrm{\Pi }}=\widehat{\mathrm{\Pi }}^0+\widehat{\mathrm{\Pi }}^1`$ are
$`\widehat{\mathrm{\Pi }}_{nm}^0=[(n+1/2)(m+1/2)]^{-1/2},`$ (8)
$`\widehat{\mathrm{\Pi }}_{nm}^1=-t[(n+1/2)(m+1/2)]^{-1/2}|\gamma |^{-1}\mathrm{\Lambda }(ϵ_n+ϵ_m).`$ (9)
As the matrix elements $`\widehat{\mathrm{\Pi }}_{nm}^0`$ are factorized with respect to $`n`$ and $`m`$, all the eigenvalues of the matrix $`\widehat{\mathrm{\Pi }}^0`$, except one, are degenerate and equal to zero. The eigenvector corresponding to the nonzero eigenvalue is $`\mathrm{\Psi }_n^0=c/\sqrt{n+1/2}`$, and the equation $`|\gamma |^{-1}\mathrm{\Psi }_n^0=_m\widehat{\mathrm{\Pi }}_{nm}^0\mathrm{\Psi }_m^0`$ leads to the BCS relation for $`T_{c0}`$:
$$|\gamma |^{-1}=l_0(T_{c0}),l_0(T)\equiv \underset{m=0}{\overset{M}{}}\frac{1}{m+1/2}=\mathrm{ln}\frac{1.14}{T\tau }.$$
(10)
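Before turning to perturbation theory, we note that Eq. (7) is also straightforward to handle numerically. A minimal illustrative sketch in the $`2D`$ limit, where $`\mathrm{\Lambda }(\omega _n)\approx u\mathrm{ln}(1/\omega _n\tau )`$; units are set by $`\tau =1`$, and the bracketing interval and parameter values are assumptions:

```python
import numpy as np

def largest_eig(T, t, gamma=0.16, u=0.5):
    # Build Pi(T) with rank M = 1/(2 pi T tau) and return its top eigenvalue
    M = max(int(1.0/(2*np.pi*T)), 2)
    n = np.arange(M) + 0.5
    w = 2*np.pi*T*np.add.outer(n, n)       # Matsubara combinations eps_n + eps_m
    lam = u*np.log(1.0/w)                  # 2D form of Lambda
    Pi = (1.0 - (t/gamma)*lam)/np.sqrt(np.outer(n, n))
    return np.linalg.eigvalsh(Pi).max()

def tc(t, gamma=0.16):
    lo, hi = 5e-4, 5e-2                    # assumed bracket for Tc (tau = 1)
    for _ in range(40):
        mid = np.sqrt(lo*hi)
        # below Tc the top eigenvalue exceeds 1/|gamma|
        lo, hi = (mid, hi) if largest_eig(mid, t, gamma) > 1.0/gamma else (lo, mid)
    return mid
```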
Our strategy now will be to calculate the corrections to this eigenvalue perturbatively in $`\widehat{\mathrm{\Pi }}^1`$ (notice that $`\widehat{\mathrm{\Pi }}^1\propto t`$), and in this way to get an implicit equation for $`T_c`$. Since $`\widehat{\mathrm{\Pi }}`$ is symmetric we can perform this program using a standard perturbation theory:
$$|\gamma |^{-1}=l_0(T)+l_1(T)+l_2(T)+\mathrm{}$$
(11)
The first order term can be obtained straightforwardly
$`l_1=\langle \mathrm{\Psi }^0|\widehat{\mathrm{\Pi }}^1|\mathrm{\Psi }^0\rangle \approx -{\displaystyle \frac{t}{l_0|\gamma |}}\mathrm{\Sigma }_2(T),`$ (12)
$`\mathrm{\Sigma }_2(T)={\displaystyle \underset{n,m=0}{\overset{M}{}}}{\displaystyle \frac{\mathrm{\Lambda }\left(ϵ_n+ϵ_m\right)}{(n+1/2)(m+1/2)}}.`$ (13)
The prefactor $`1/l_0`$ appears in $`l_1`$ because the normalization factor $`c`$ of the eigenvector $`\mathrm{\Psi }_n^0`$ is equal to $`1/\sqrt{l_0}`$. Since all the eigenvalues of the operator $`\widehat{\mathrm{\Pi }}^0`$ are degenerate except the one under study, it is also possible to find the higher order corrections using only the eigenvector $`|\mathrm{\Psi }^0\rangle `$, without involving other eigenvectors. We demonstrate it here for the second order term, but a generalization to higher orders is straightforward. In the second order
$`l_2={\displaystyle \underset{i\ne 0}{}}{\displaystyle \frac{\langle \mathrm{\Psi }^0|\widehat{\mathrm{\Pi }}^1|\mathrm{\Psi }^i\rangle \langle \mathrm{\Psi }^i|\widehat{\mathrm{\Pi }}^1|\mathrm{\Psi }^0\rangle }{l_0}}`$ (14)
$`={\displaystyle \frac{1}{l_0}}\left[\langle \mathrm{\Psi }^0|\widehat{\mathrm{\Pi }}^1\widehat{\mathrm{\Pi }}^1|\mathrm{\Psi }^0\rangle -\left(l_1\right)^2\right],`$ (15)
where
$`\langle \mathrm{\Psi }^0|\widehat{\mathrm{\Pi }}^1\widehat{\mathrm{\Pi }}^1|\mathrm{\Psi }^0\rangle \approx {\displaystyle \frac{t^2}{l_0|\gamma |^2}}\mathrm{\Sigma }_3(T),`$ (16)
$`\mathrm{\Sigma }_3(T)={\displaystyle \underset{n,m,k=0}{\overset{M}{}}}{\displaystyle \frac{\mathrm{\Lambda }\left(ϵ_n+ϵ_k\right)\mathrm{\Lambda }\left(ϵ_k+ϵ_m\right)}{(n+1/2)(m+1/2)(k+1/2)}}.`$ (17)
Inverting Eq. (11) perturbatively in $`t`$ and having in mind that $`|\gamma |l_0(T_{c0})=1`$, we find
$$\mathrm{ln}\frac{T_c}{T_{c0}}=-t\mathrm{\Sigma }_2(T_{c0})+t^2\left(\mathrm{\Sigma }_3(T_{c0})+T_{c0}\frac{\mathrm{\Sigma }_2(T)}{T}|_{T=T_{c0}}\mathrm{\Sigma }_2(T_{c0})\right)+\mathrm{}$$
(19)
Since Eq. (19) gives an approximation for $`\mathrm{ln}(T_c/T_{c0})`$, while the measured quantity in experiments is $`T_c/T_{c0}`$, the first two terms of the perturbative series are sufficient for the description of the $`T_c`$ suppression, if the parameter $`t`$ is not too close to a critical value where $`T_c`$ vanishes. \[The parameter $`t`$ should be inside the radius of convergence of the series (19). Outside this radius the superconductivity is completely suppressed.\] In the $`2D`$ case Eq. (19) reproduces the first two terms of the expansion of the right hand side of Eq. (5) in powers of $`ut/\gamma ^2`$:
$$\mathrm{ln}\left(\frac{T_c}{T_{c0}}\right)=\underset{n=1}{\overset{\mathrm{}}{}}\frac{1}{(2n+1)\gamma }\left(\frac{ut}{\gamma ^2}\right)^n.$$
(20)
We note that expansion (20) does not contain a term $`t^2/\gamma ^4`$. There are several diagrams that give contributions to that order; however, they finally cancel each other. The main advantage of Eq. (19) is that it is not restricted to a logarithmic accuracy, and can be applied to the description of the crossover from $`2D`$ to $`1D`$ systems.
In the experiment of Xiong et al. the mean field temperature of the superconducting transition, $`T_c`$, has been measured systematically for uniform $`Pb`$ wires of various widths. The effective strength of the disorder characterized by $`R_{\mathrm{}}`$ has been controlled by the wire thickness $`d`$. Before going to a detailed comparison of the theory with the experiment a few remarks are in order. The theory described above deals with the universal mechanism related to large scale distances, of the order of the thermal length $`L_T\sim \sqrt{D/T}`$. However, a number of other effects may also influence $`T_c`$ when the thickness $`d`$ is decreased. For example, the electron states quantization and the interaction of the electrons with the film’s substrate can alter the parameters of the electron liquid. These nonuniversal effects of a short range origin are not addressed by the present theory. In some systems, e.g., $`MoGe`$ (see Ref. ), the discussed effect, originating from the interplay of the Coulomb interaction and disorder, is dominant, and the theoretical curve matches the experimental data in $`2D`$. Unfortunately, as shown in Fig. 2, the theoretical curve for $`Pb`$ films does not follow the experiment. This fact indicates that the effects of a short range physics are not negligible here.
To minimize the role of the nonuniversal effects, and make the comparison between the experiment and theory possible, we proceed in the following way. For each width the theoretical curve has been multiplied by the function $`x\left(R_{\mathrm{}}\right)=T_c^{2D}(R_{\mathrm{}})_{ex}/T_c^{2D}(R_{\mathrm{}})_{th}`$. This function is the ratio between the two curves presented in Fig. 2. Here, the basic idea is that, because the widths of the wires are considerably larger than any microscopical scale, the influence of the short range effects on $`T_c`$ in wires remains the same as in $`2D`$ films. In this way, we believe, the effect of the long range physics determining the crossover from $`2D`$ to $`1D`$ systems can be captured by the present theory. To continue further we have to discuss another complication. Unlike the case of $`2D`$ films, the limit of $`R_{\mathrm{}}\to 0`$ for a series of wires with a fixed width is somewhat ambiguous. For the discussed data, the extrapolation of $`T_c`$ to the limit $`R_{\mathrm{}}\to 0`$ at a fixed width yields values that are not equal to the transition temperature in the bulk limit. (Moreover, the extrapolated values behave in an irregular way as a function of the wire width.) Under these circumstances, we have normalized the theoretical curves in such a way that in the limit $`R_{\mathrm{}}\to 0`$ the fitting curves for each width, $`a`$, start from the extrapolated $`T_{c0}(a)=T_c(R_{\mathrm{}}\to 0)`$. After this normalization procedure and rescaling the theoretical curves by $`x\left(R_{\mathrm{}}\right)`$, the data for wires of different widths have been plotted together with the theoretical curves in Fig. 3. The fitting parameter $`\gamma =0.16`$, determined from the initial slope of the $`T_c(R_{\mathrm{}})`$ in $`2D`$ films, was the same for all wire widths. Notice that at $`R_{\mathrm{}}\approx 2000\mathrm{\Omega }`$ the suppression of $`T_c`$ for the wire of the smallest width is about 1.5 times stronger than for the widest one. The agreement between theory, i.e. Eq. (19), and experimental data for all wires of different widths turned out to be very good.
To summarize, we have developed a theory that describes the suppression of the mean-field temperature of the superconducting transition in amorphous systems. The theory is based on the consideration of the suppression of the contact attraction due to phonons, by the dynamically enhanced Coulomb repulsion. It is suitable for the description of the crossover region between $`2D`$ and $`1D`$. By treating the problem as an eigenvalue problem, we overcame the difficulties occurring because of the coexistence of different singularities in the equation determining $`T_c`$. In order to compare the available experimental results with the theory, we analyzed the data in a way that minimizes the role of nonuniversal effects of a short range origin. We believe that the theory could be tested further with superconducting wires fabricated from other materials, where the initial slope of $`T_c(R_{\mathrm{}})`$ is larger than in $`Pb`$ films.
It is our pleasure to acknowledge discussions with I. L. Aleiner, V. Ambegaokar, M. E. Gershenson, L. S. Levitov, M. Yu. Reizer, and R. A. Smith. The research is supported by the DIP Cooperation, the Israel Science Foundation–Centers of Excellence (MOKKED), and by the NSF grant DMR-94-16910. YO is grateful for the support by the Rothschild Fund.
# Fractional spin and the Pauli term
## Abstract
It has recently been claimed that the inclusion of a Pauli term in (2+1) dimensions gives rise to a new type of anomalous spin term. The form of that term is shown to contradict the structure relations for the inhomogeneous Lorentz group.
In 1984 it was shown that it is possible to formulate a pure Chern-Simons gauge theory. That theory was found to have the peculiar feature that the commutator of the angular momentum operator $`L`$ with the fundamental charge field has an anomalous term (i.e., a fractional spin) proportional to the product of that field with the charge operator $`Q`$. More recently it has been claimed that if a Pauli-type coupling is included, an additional anomalous spin term is induced. Specifically, it is asserted that the commutator of $`L`$ with the scalar charge field $`\varphi `$ has the form
$$[L,\varphi (y)]=i(𝐲\times \nabla )\varphi (y)-\frac{e^2}{2\pi \kappa }Q\varphi (y)+i\frac{g}{2}𝐲\cdot 𝐄\varphi (y)$$
where $`e`$, $`g`$, and $`\kappa `$ are constants. It is shown here that such a term is inconsistent with the underlying Poincaré algebra in (2+1) dimensions.
The proof of this result follows immediately upon inserting the above equation between the operators $`e^{i𝐏\cdot 𝐚}`$ and $`e^{-i𝐏\cdot 𝐚}`$, where $`𝐚`$ is an arbitrary two-vector. Using the result
$$A(𝐱+𝐚)=e^{i𝐏\cdot 𝐚}A(𝐱)e^{-i𝐏\cdot 𝐚}$$
for an arbitrary operator $`A`$ together with the structure relation
$$[L,P^i]=iϵ^{ij}P_j,$$
this is seen to yield
$$[L,\varphi (y)]=i(𝐲\times \nabla )\varphi (y)-\frac{e^2}{2\pi \kappa }Q\varphi (y)+i\frac{g}{2}(𝐲-𝐚)\cdot 𝐄\varphi (y)$$
upon letting $`𝐲\to 𝐲-𝐚`$. Since $`𝐚`$ is arbitrary while the left-hand side does not depend on it, this leads to the immediate result that the relation claimed in ref. 2 cannot be valid for nonzero $`g`$.
###### Acknowledgements.
This work is supported in part by the U.S. Department of Energy Grant No.DE-FG02-91ER40685.
# Integrability of the Higher-Order Nonlinear Schrödinger Equation Revisited
(June 22, 1999)
## Abstract
Only the known integrable cases of the Kodama-Hasegawa higher-order nonlinear Schrödinger equation pass the Painlevé test. Recent results of Ghosh and Nandy add no new integrable cases of this equation.
Being a true high-technology application of a mathematical object, the optical soliton in fiber-optic communication lines is an ideal carrier of information because the integrability of its model equation, the nonlinear Schrödinger (NLS) equation, provides a guarantee on the input and output causal relation for light waves in fibers . The integrable NLS equation, however, does not govern femtosecond light pulses which have much potential for the future technology. Kodama and Hasegawa derived a model equation for ultra-short light pulses in optical fibers, the higher-order nonlinear Schrödinger (HNLS) equation:
$$iw_z+\frac{1}{2}w_{yy}+\left|w\right|^2w+i\alpha w_{yyy}+i\beta \left|w\right|^2w_y+i\gamma w(\left|w\right|^2)_y=0,$$
(1)
where real parameters $`\alpha `$, $`\beta `$ and $`\gamma `$ are determined by spectral and geometric properties of a fiber. Eq. (1) admits one-soliton solutions of bright and dark types in wide domains of its parameters, but this has no relation to its integrability. Only the following four integrable cases of the HNLS eq. (1) are known besides the NLS equation itself: the derivative NLS equation I ($`\alpha :\beta :\gamma =0:1:1`$), the derivative NLS equation II ($`0:1:0`$), the Hirota equation ($`1:6:0`$), and the Sasa-Satsuma equation ($`1:6:3`$). According to the results of Nijhof and Roelofs , only these known integrable cases of the HNLS eq. (1) possess the infinite-dimensional prolongation structures.
The Painlevé analysis leads us to the same conclusion on the integrability of eq. (1). Let us briefly recall our results obtained in in the framework of the Weiss-Kruskal algorithm (for details of the method see e.g. and references therein).
The HNLS eq. (1) with $`\alpha =0`$ lies in the class of derivative NLS equations which has been analyzed by Clarkson and Cosgrove . We find from their results that eq. (1) with $`\alpha =0`$ has the Painlevé property if and only if $`(\beta -\gamma )\gamma =0`$, i.e. exactly when eq. (1) is one of the derivative NLS equations I and II besides the NLS equation itself.
When $`\alpha \ne 0`$, we can transform eq. (1) by
$`w(y,z)`$ $`=u(x,t)\mathrm{exp}\left({\displaystyle \frac{i}{6}}\alpha ^{-1}x-{\displaystyle \frac{i}{216}}\alpha ^{-3}t\right),`$
$`y`$ $`=x-{\displaystyle \frac{1}{12}}\alpha ^{-2}t,\text{ }z=-\alpha ^{-1}t`$ (2)
into the equivalent complex modified Korteweg-de Vries (CMKdV) equation
$$u_t=u_{xxx}+auu^{*}u_x+bu^2u_x^{*}+icu^2u^{*},$$
(3)
where * denotes the complex conjugation, $`a=\alpha ^{-1}(\beta +\gamma )`$, $`b=\alpha ^{-1}\gamma `$, and $`c=\frac{1}{6}\alpha ^{-2}(\beta -6\alpha )`$. (It is sometimes overlooked in the literature that the CMKdV eq. (3) with $`c=0`$ is equivalent to the HNLS eq. (1) only if $`\beta =6\alpha `$ in eq. (1).) Since eq. (3) is a complex equation, we supplement eq. (3) by its complex conjugation, introduce the new variable $`v`$, $`v=u^{*}`$, and then consider $`u`$ and $`v`$ as mutually independent. Thus, we have the following system of two nonlinear equations of total order six:
$`u_{xxx}+auvu_x+bu^2v_x+icu^2v-u_t`$ $`=0,`$
$`v_{xxx}+auvv_x+bv^2u_x-icuv^2-v_t`$ $`=0.`$ (4)
A hypersurface $`\phi (x,t)=0`$ is non-characteristic for this system if $`\phi _x\ne 0`$ (we take $`\phi _x=1`$), and the general solution of eq. (4) must contain six arbitrary functions of one variable. We substitute the expressions $`u=\phi ^\sigma [u_0(t)+\mathrm{}+u_r\left(t\right)\phi ^r+\mathrm{}]`$ and $`v=\phi ^\tau [v_0(t)+\mathrm{}+v_r\left(t\right)\phi ^r+\mathrm{}]`$ into eq. (4) for determining the exponents $`\sigma `$ and $`\tau `$ of the dominant behavior of solutions and the positions $`r`$ of resonances, and find the following two branches if $`a^2+b^2\ne 0`$ (we reject the case $`a=b=0`$, $`c\ne 0`$ because of inadmissible $`r=\frac{1}{2}\left(5\pm i\sqrt{87}\right)`$).
* Branch (A): $`\sigma =\tau =-1`$, $`u_0v_0=-6\left(a+b\right)^{-1}`$, $`u_0/v_0`$ is arbitrary, $`r=-1,0,3,4,3-p,3+p`$, where
$$p=2\left(\frac{a-2b}{a+b}\right)^{1/2}.$$
(5)
* Branch (B): $`\sigma =-1\pm n`$, $`\tau =-1\mp n`$, $`u_0v_0=-60\left(5a-7b\right)^{-1}`$, $`u_0/v_0`$ is arbitrary, $`r=-1,0,4,6,\frac{1}{2}\left(3-q\right),\frac{1}{2}\left(3+q\right)`$, where
$$n=\left(\frac{5a+17b}{5a-7b}\right)^{1/2},$$
(6)
$$q=\left(\frac{245a+617b}{5a-7b}\right)^{1/2}.$$
(7)
We reject the following cases: $`a+b=0`$, when the branch (A) does not exist, because of inadmissible $`r=\frac{1}{2}\left(3\pm i\sqrt{31}\right)`$ in the branch (B); $`5a-7b=0`$, when the branch (B) does not exist, because of inadmissible $`r=3\pm i`$ in the branch (A); and $`5a+17b=0`$, when the branches (A) and (B) coincide, because the double resonance $`r=0`$ and the fact that $`u_0v_0`$ is determined require logarithmic terms in the singular expansions. Thus, the two different branches, (A) and (B), exist for all the cases of eq. (3) we have to analyze further. The existence of the branch (B) was ignored in several works, where some special cases of the CMKdV eq. (3) were tested. The branch (B) was lost as well in , where the Painlevé test of the HNLS eq. (1) missed all the integrable cases except the Sasa-Satsuma case.
Eqs. (5), (6) and (7) show that the dominant behavior of solutions in the branch (B) and the positions of two resonances in each of the branches (A) and (B) are determined only by the quotient $`a/b`$. Elimination of $`a/b`$ from eqs. (5), (6) and (7) leads to the following two equations:
$$\left(1+p^2\right)\left(1+n^2\right)=10,$$
(8)
$$9+40n^2=q^2.$$
(9)
The numbers $`p`$, $`n`$ and $`q`$ have to be integer for the CMKdV eq. (3) to possess the Painlevé property. Eqs. (8) and (9) admit three integer solutions: $`(p,n,q)=(1,2,13),(2,1,7),(3,0,3)`$, but the last one corresponds to the already rejected case $`a/b=-17/5`$. The solution $`(2,1,7)`$ leads to the Hirota case of the HNLS eq. (1): $`b=0`$ in eq. (3) corresponds to $`\gamma =0`$ in eq. (1), the usual way of constructing recursion relations and checking compatibility conditions at resonances gives us the condition $`c=0`$ (i.e. $`\beta =6\alpha `$ in eq. (1)) at $`r=1`$ in the branch (A), and then compatibility conditions become identities in both branches. The solution $`(1,2,13)`$ leads to the Sasa-Satsuma case of the HNLS eq. (1): $`\beta =2\gamma `$ since $`a=3b`$, the condition $`c=0`$ (i.e. $`\beta =6\alpha `$) arises at $`r=3`$ in the branch (A), and other compatibility conditions are all satisfied.
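The two Diophantine conditions are easy to check exhaustively; a minimal sketch:

```python
# All non-negative integer triples satisfying Eqs. (8)-(9):
# (1+p^2)(1+n^2) = 10 forces p <= 3 and n <= 3, and then q <= 13.
sols = [(p, n, q)
        for p in range(4) for n in range(4) for q in range(14)
        if (1 + p*p)*(1 + n*n) == 10 and 9 + 40*n*n == q*q]
print(sols)   # [(1, 2, 13), (2, 1, 7), (3, 0, 3)]
```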
Consequently, only the known integrable cases of the HNLS eq. (1) pass the Painlevé test for integrability . This completely agrees with the results of Nijhof and Roelofs .
Recently, however, Ghosh and Nandy reported that they found a parametric Lax representation for the CMKdV eq. (3) with $`c=0`$ and any rational value of $`b/a`$ from the interval $`[0,1]`$. Let us consider in brief the intriguing results of .
The Lax pair, proposed in , is as follows:
$$\mathrm{\Psi }_x=U\mathrm{\Psi },\mathrm{\Psi }_t=V\mathrm{\Psi },$$
(10)
$$U=-i\lambda \mathrm{\Sigma }+A,$$
(11)
$$V=A_{xx}+AA_x-A_xA-2A^3-2i\lambda \mathrm{\Sigma }(A_x-A^2)-4\lambda ^2A+4i\lambda ^3\mathrm{\Sigma },$$
(12)
$$\mathrm{\Sigma }=\left(\begin{array}{ccccccc}1& \mathrm{}& 0& 0& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 1& 0& \mathrm{}& 0& 0\\ 0& \mathrm{}& 0& 1& \mathrm{}& 0& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 0& 0& \mathrm{}& 1& 0\\ 0& \mathrm{}& 0& 0& \mathrm{}& 0& -1\end{array}\right),$$
(13)
$$A=\left(\begin{array}{ccccccc}0& \mathrm{}& 0& 0& \mathrm{}& 0& u\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 0& 0& \mathrm{}& 0& u\\ 0& \mathrm{}& 0& 0& \mathrm{}& 0& u^{*}\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& \mathrm{}& 0& 0& \mathrm{}& 0& u^{*}\\ -u^{*}& \mathrm{}& -u^{*}& -u& \mathrm{}& -u& 0\end{array}\right),$$
(14)
where $`\lambda `$ is a parameter, $`u=u(x,t)`$, and the dimension of the block-form matrices $`\mathrm{\Sigma }`$ and $`A`$ is $`(l+m+1)\times (l+m+1)`$ (note: the expression for $`V`$ contains some misprints in ). This form of $`U`$ and $`V`$ ensures that the compatibility condition of the system (10), $`U_t=V_x-UV+VU`$, is as follows:
$$A_t=A_{xxx}-3(A^2A_x+A_xA^2).$$
(15)
It is stated in that the equation (15) with $`A`$ (14) is nothing but
$$u_t=u_{xxx}+(6l+3m)uu^{*}u_x+3mu^2u_x^{*}$$
(16)
and therefore the system (10) with (11), (12), (13) and (14) represents a Lax pair for eq. (3) with $`c=0`$ and a rational value of $`b/a`$ determined by the dimension of the block matrices.
The real situation, however, is different. There are three distinct cases of what is in fact the equation (15) with $`A`$ (14), depending on values of $`l`$ and $`m`$.
* If $`m=0,l\ne 0`$ or $`l=0,m\ne 0`$, then eq. (15) is eq. (3) with $`b=0`$. This is the Hirota case.
* If $`m=l\ne 0`$, then eq. (15) is eq. (3) with $`a=3b`$. This is the Sasa-Satsuma case.
* If $`m\ne 0,l\ne 0,m\ne l`$, then eq. (15) is the following over-determined system of two complex evolution equations:
$`u_t`$ $`=u_{xxx}+(6l+3m)uu^{*}u_x+3mu^2u_x^{*},`$
$`u_t`$ $`=u_{xxx}+(3l+6m)uu^{*}u_x+3lu^2u_x^{*}.`$ (17)
Representing $`u`$ as $`u=fe^{ig}`$, where $`f`$ and $`g`$ are real functions of $`x`$ and $`t`$, we get the following equivalent form of the system (17):
$$f_t=f_{xxx}+6(l+m)f^2f_x,g_x=g_t=0.$$
(18)
Though integrable, the system (18) is only a reduction of the CMKdV eq. (3).
Consequently, no new integrable cases of the Kodama-Hasegawa HNLS eq. (1) were found in .
This work was supported in part by Grant $`\mathrm{\Phi }`$98-044 of the Fundamental Research Fund of Belarus.
# Fidelity and leakage of Josephson qubits
## Abstract
The unit of quantum information is the qubit, a vector in a two-dimensional Hilbert space. On the other hand, quantum hardware often operates in two-dimensional subspaces of vector spaces of higher dimensionality. The presence of higher quantum states may affect the accuracy of quantum information processing. In this Letter we show how to cope with quantum leakage in devices based on small Josephson junctions. While the presence of higher charge states of the junction reduces the fidelity during gate operations we demonstrate that errors can be minimized by appropriately designing and operating the gates.
The most widely accepted paradigm of quantum computation describes quantum information processing in terms of quantum gates whose input and output are two-state quantum systems called qubits . Quantum Computation (QC) is performed by means of a controllable unitary evolution of the qubits . Due to the intrinsic quantum parallelism, problems which are intractable on classical computers can be solved efficiently by using quantum algorithms. Probably the most striking example is the factorization of large numbers .
Parallel to the development of the theory of quantum information there has been an increasing interest in finding physical systems where quantum computation could be implemented. In an (almost) ideal situation one should identify a suitable set of two-level systems (sufficiently decoupled from any source of decoherence ) with some controllable couplings among them needed to realize single qubit and two-qubit operations. These requirements are sufficient to implement any computational task . Various physical systems have been suggested for the implementation of quantum algorithms, e.g. ion traps , QED cavities and NMR . The quest for large scale integrability and flexibility in the design has very recently stimulated an increasing interest in the field of nanostructures. Up to now promising proposals are based on small-capacitance Josephson junctions , coupled quantum dots and phosphorus dopants in silicon crystals . The experiments on the superposition of charge states in Josephson junctions and the recent achievements in controlling the coherent evolution of quantum states in a Cooper pair box render superconducting nanocircuits interesting candidates to implement solid state quantum computers.
Physical realizations of QC are never completely decoupled from the environment. Since decoherence will ultimately limit the performance of a quantum computer a lot of attention is being devoted to this problem. Besides decoherence, for each proposed scheme a detailed analysis of the errors induced by the gate operations themselves is crucial in order to assess their reliability and the feasibility of fault-tolerant quantum computation . Errors may occur due to a variety of reasons. An obvious example are fluctuations in the control parameters of the gate which act as a random noise and thus affect the unitarity of the time evolution. Alternatively gate operations can change the coupling of the qubits to the environment (even if this coupling is negligible during storage periods) thereby enhancing decoherence. All these error sources can be analyzed by properly modelling the qubit-environment coupling. However, there are errors which are not due to (or cannot be described in terms of) the action of an external environment. Rather, they are inherent in the design of the gate.
In this Letter we consider one (intrinsic) source of error in gate operations which is common to several of the proposed solid state implementations, the quantum leakage. It occurs when the computational space is a subspace of a larger Hilbert space. This is the case e.g. when the information is encoded in trapped ions or in charge (or flux) states of devices based on Josephson junctions (or SQUIDS). We start by introducing a general scheme to characterize the leakage and then we focus on devices based on small-capacitance tunnel junctions.
Our analysis applies to the situation illustrated in Fig. 1. The two low-energy states constitute the computational Hilbert space. The system, however, can leak out to the higher states. If the energy difference between the low-lying and the excited states is large compared to the other energy scales of the system (as in Refs. ) the probability to leak out is small. One might wonder whether it is necessary to discuss this effect at all. As we will see the consequences of leakage are more severe than a simple estimate of energy scales might suggest. The presence of states outside the computational space modifies the time evolution of the qubit states compared to the idealized design.
The ideal unitary gate operation $`U_I`$ is obtained by switching on a suitable Hamiltonian $`H_I`$ which couples the desired computational states in a controlled way for a time $`t_0`$. By choosing $`t_0`$ one can implement the desired gate operation. In reality, however, the dynamics of the system is governed by a unitary operator $`U_R`$ which acts on the full Hilbert space. Since information is being processed within the computational subspace the output is related to the input state via the map $`\mathrm{\Pi }U_R(t)\mathrm{\Pi }`$, where $`\mathrm{\Pi }`$ is the projection operator on the computational space. One is interested in optimizing the real gate operation in order to get as close as possible to the ideal $`U_I`$. In general the “best” operation may require a time $`t\ne t_0`$ as all the system eigenenergies are modified by the states outside the computational subspace. Therefore we use the time $`t`$ as parameter to optimize the given computational step. We characterize the performance of real gates by the fidelity $`\mathcal{F}`$ and the probability of leakage $`\mathcal{L}(t)`$ defined as
$$\mathcal{F}=1-\frac{1}{2}\text{min}_{\{t\}}\|U_I(t_0)-\mathrm{\Pi }U_R(t)\mathrm{\Pi }\|$$
(1)
$$\mathcal{L}(t)=1-\text{min}_\psi \langle \psi |U_R^{\dagger }(t)\mathrm{\Pi }U_R(t)|\psi \rangle $$
(2)
In Eq. (1) we make use of the operator norm defined as $`\|D\|=\text{Sup}_\psi \|D|\psi \rangle \|=\text{Sup}_\psi \sqrt{\langle \psi |D^{\dagger }D|\psi \rangle }`$ over the vectors $`\{|\psi \rangle :\langle \psi |\psi \rangle =1\}`$ of the computational subspace. This definition implies that $`\|D\|=\sqrt{\lambda _M}`$ where $`\lambda _M`$ is the biggest eigenvalue of $`D^{\dagger }D`$. As in the case of the minimal fidelity this definition gives estimates for the worst case. The definition given in Eq. (1) can therefore be regarded as a prescription how to optimize the gate design (note that the fidelity defined in Eq.(1) does not depend on the time $`t`$).
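Numerically, both definitions reduce to standard linear algebra. A minimal sketch, assuming the projector is diagonal in the working basis, with `comp` the indices of the computational states and `U_ideal` embedded in the full space:

```python
import numpy as np

def op_norm_on_subspace(D, comp):
    # sup over computational states = largest singular value of the
    # computational columns of D
    return np.linalg.svd(D[:, comp], compute_uv=False).max()

def leakage(U_real, comp):
    P = np.zeros(U_real.shape, complex)
    P[comp, comp] = 1.0
    block = (U_real.conj().T @ P @ U_real)[np.ix_(comp, comp)]
    return 1.0 - np.linalg.eigvalsh(block).min()

# F = 1 - 0.5 * min over gate times t of
#     op_norm_on_subspace(U_ideal - P @ U_real(t) @ P, comp)
```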
As mentioned before, the existence of states other than the computational ones has two main consequences on the qubit dynamics. There is a nonzero probability of leakage, measured by $`\mathcal{L}(t)`$, and a modification of the eigenenergies and eigenstates of the real system. The latter effect turns out to be an important source of gate errors.
In order to study the phenomena related to leakage quantitatively we apply Eqs. (1), (2) to Josephson junction qubits in the charge regime as proposed in Refs. . A similar analysis can be carried out, with appropriate changes of parameters, for all other cases where leakage is present.
$$H_R=\underset{i=1,2}{}\left[E_{\mathrm{ch}}(n_i-n_{x,i})^2-E_J\mathrm{cos}\varphi _i\right]+E_L\left(\mathrm{sin}\varphi _1+\mathrm{sin}\varphi _2\right)^2$$
(3)
In the first term $`E_{\mathrm{ch}}`$ is the charging energy. The second and the third term represent the Josephson tunneling (associated with the energy $`E_J`$) and the inductive coupling of strength $`E_L`$ which make single and two qubit operations possible. Both $`E_J`$ and $`E_L`$ are assumed to be much smaller than the charging energy. The offset charge $`n_{x,i}`$ can be controlled by an external gate voltage. The phases $`\varphi _i`$ and the number of Cooper pairs $`n_i`$ are canonically conjugate variables $`[\varphi _i,n_j]=i\delta _{ij}`$ .
At temperatures much lower than the charging energy, for $`n_{x,i}\approx 1/2`$ the two charge states $`n_i=0,1`$ are nearly degenerate. They represent the states $`|0\rangle `$, $`|1\rangle `$ of the qubit (see Fig. 1). In the computational Hilbert space the ideal evolution of the system is governed by the Hamiltonian
$$H_I=\underset{i=1,2}{}\left[\mathrm{\Delta }E_{\mathrm{ch},i}\sigma _{z,i}-\frac{E_J}{2}\sigma _{x,i}\right]+\frac{E_L}{2}\sigma _{y,1}\sigma _{y,2}$$
(4)
where $`\sigma `$ are Pauli matrices and $`\mathrm{\Delta }E_{\mathrm{ch},i}=E_{\mathrm{ch}}(n_{x,i}-1/2)`$. The different time evolution due to $`H_R`$ and to $`H_I`$ causes an error in the gate operation. We note that leakage is also present during idle periods of the gates. However, here we only discuss the errors during gate operations.
One-bit gate ($`E_L=0`$). Single-qubit gate operations can be implemented, e.g., by suddenly switching the offset charge to the degeneracy point $`n_x=1/2`$ where the charge states $`|0\rangle `$ and $`|1\rangle `$ are strongly mixed by the Josephson coupling . Whereas in the ideal setup this coupling mixes only the states $`|0\rangle `$ and $`|1\rangle `$, in the real qubit all charge states are involved.
The evolution in the computational subspace for a time interval $`t`$ is described by the operator ($`\mathrm{}=1`$, with $`\mathrm{}`$ the reduced Planck constant)
$$\mathrm{\Pi }U_R(t)\mathrm{\Pi }=\underset{n}{}e^{-iE_nt}\mathrm{\Pi }|\mathrm{\Phi }_n\rangle \langle \mathrm{\Phi }_n|\mathrm{\Pi }$$
(5)
where $`\mathrm{\Pi }=|0\rangle \langle 0|+|1\rangle \langle 1|`$ is the projector on the computational subspace and $`|\mathrm{\Phi }_n\rangle `$ are the eigenstates with energies $`E_n`$ of the Hamiltonian $`H_R`$ (here $`|\mathrm{\Phi }_n\rangle `$ can be expressed in terms of Mathieu functions).
By evaluating the leakage according to Eq. (2) we obtain
$`\mathcal{L}(t)`$ $`=`$ $`1-\text{min}_\pm \left|{\displaystyle \underset{n;m=0,1}{}}(\pm )^m\langle 0|\mathrm{\Phi }_n\rangle \langle \mathrm{\Phi }_n|m\rangle e^{-iE_nt}\right|^2`$ (6)
$``$ $`\approx {\displaystyle \frac{E_J^2}{8E_{\mathrm{ch}}^2}}[1-\text{min}_\pm \mathrm{cos}(2E_{\mathrm{ch}}\pm E_J/2)t].`$ (7)
The order of magnitude $`(E_J/E_{\mathrm{ch}})^2`$ can be understood immediately by regarding the coupling to higher charge states as a perturbation to the ideal system of Eq. (4).
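This estimate is easy to reproduce numerically. A minimal sketch with a truncated charge basis at the degeneracy point ($`E_{\mathrm{ch}}=1`$ sets the units; $`E_J/E_{\mathrm{ch}}=0.02`$ is an assumed, typical value):

```python
import numpy as np
from scipy.linalg import expm

ns = np.arange(-2, 4)              # charge states; |0>, |1> are n = 0, 1
Ech, EJ = 1.0, 0.02
H = np.diag(Ech*(ns - 0.5)**2) \
    - 0.5*EJ*(np.eye(len(ns), k=1) + np.eye(len(ns), k=-1))
P = np.diag(((ns == 0) | (ns == 1)).astype(float))
comp = [2, 3]                      # positions of n = 0 and n = 1 in ns

def leak(t):
    U = expm(-1j*H*t)
    block = (U.conj().T @ P @ U)[np.ix_(comp, comp)]
    return 1.0 - np.linalg.eigvalsh(block).min()

ts = np.linspace(0.0, np.pi/EJ, 400)   # up to one pi-rotation
print(max(leak(t) for t in ts))        # ~ (EJ/Ech)^2/4, cf. Eq. (7)
```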
The fidelity has to be limited by the leakage since it describes the length of the projection of the true state at time $`t`$ onto the ideal state at $`t_0`$. There is another effect contributing to the loss of fidelity: the presence of higher charge states renormalizes the energy eigenvalues thus leading to a frequency mismatch between ideal and real time evolution. However, due to the symmetry of the system and the fact that $`E_J`$ is the only coupling energy to the states outside the computational subspace there is a simple way to cure this problem. Let us consider a $`\pi `$-rotation. The optimal gate is obtained by changing the operation time to $`t_0^{\prime }=\pi /\mathrm{\Delta }E`$ where $`\mathrm{\Delta }E`$ is the energy splitting between the two lowest eigenstates (as opposed to the time $`t_0=\pi /E_J`$ in the ideal system). The value of the fidelity is then given by
$`\mathcal{F}`$ $`=`$ $`1-{\displaystyle \frac{1}{2}}\left|{\displaystyle \underset{n;m=0,1}{}}\langle 0|\mathrm{\Phi }_n\rangle \langle \mathrm{\Phi }_n|m\rangle e^{-iE_nt_0^{\prime }}-i\right|`$ (8)
$``$ $`1{\displaystyle \frac{1}{32}}{\displaystyle \frac{E_J^2}{E_{\mathrm{ch}}^2}}\sqrt{2+2\mathrm{sin}(2\pi E_{\mathrm{ch}}/E_J)}`$ (9)
We mention that the error accumulates linearly with the number of operations. For typical parameters of Josephson junctions $`E_J/E_{\mathrm{ch}}\approx 0.02`$ one finds that after about $`10^4`$ operations the loss of fidelity becomes of order unity.
Two-bit gate ($`E_L\ne 0`$). Among the many possibilities for the elementary two-qubit operation, choosing a particular one may be a non-trivial step in the course of implementing quantum hardware. Due to universality of quantum computation one is free to use any generic $`4\times 4`$ unitary matrix as a two-qubit gate. From our point of view a choice is optimal if it avoids errors stemming from a discrepancy between the ideal gate and the way of its implementation. Therefore, in the following we assume that the Hamiltonian as introduced in Eq. (4)
$$H_I=\left(\begin{array}{cccc}2\mathrm{\Delta }E_{\mathrm{ch}}& -E_J/2& -E_J/2& -E_L/2\\ -E_J/2& 0& E_L/2& -E_J/2\\ -E_J/2& E_L/2& 0& -E_J/2\\ -E_L/2& -E_J/2& -E_J/2& -2\mathrm{\Delta }E_{\mathrm{ch}}\end{array}\right)$$
(10)
generates the ideal two-bit gate. In Eq. (10) we have used the basis $`\{|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle \}`$ (which is obtained as the direct product of the states introduced previously). The typical scale for the operation time $`t_0`$ is on the order of $`1/E_L`$.
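For concreteness, the matrix of Eq. (10) and the gate it generates can be written down directly; a minimal sketch with placeholder parameter values (taking $`E_{\mathrm{ch}}=1`$, so that $`n_x=1/4`$ gives $`\mathrm{\Delta }E_{\mathrm{ch}}=-1/4`$):

```python
import numpy as np
from scipy.linalg import expm

def H_ideal(dEch, EJ, EL):
    # Eq. (10) in the basis {|00>, |01>, |10>, |11>}
    return np.array([
        [ 2*dEch, -EJ/2, -EJ/2, -EL/2],
        [ -EJ/2,   0.0,   EL/2, -EJ/2],
        [ -EJ/2,   EL/2,  0.0,  -EJ/2],
        [ -EL/2,  -EJ/2, -EJ/2, -2*dEch]], dtype=complex)

EL = 0.02                                                # illustrative
U_ideal = expm(-1j*H_ideal(-0.25, 0.02, EL)*np.pi/EL)    # t0 = pi/E_L
```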
In complete analogy with the one-bit gate we find that the leakage is of the same order for the two-qubit operation:
$`\mathcal{L}\sim \mathrm{max}\{\left({\displaystyle \frac{E_J}{E_{\mathrm{ch}}}}\right)^2,\left({\displaystyle \frac{E_L}{E_{\mathrm{ch}}}}\right)^2\}`$ (the numerical coefficient is larger than in the one-bit case because there are more charge states outside the computational subspace directly coupled to the qubit states either by $`E_L`$ or $`E_J`$).
The situation for the fidelity, however, is different. In order to estimate $``$ we consider a perturbative expansion of $`D^{}D`$ where $`D=U_I(t_0)\mathrm{\Pi }U_R(t)\mathrm{\Pi }`$ up to second order in $`E_J/E_{\mathrm{ch}}`$, $`E_L/E_{\mathrm{ch}}`$. The eigenvalues of this matrix have the form $`22\mathrm{cos}(E_ntE_n^{(0)}t_0)+`$ 2nd order terms (here $`E_n`$ and $`E_n^{(0)}`$ are the eigenvalues of $`H_R`$ and $`H_I`$, respectively). It turns out that due to the presence of several energy scales the frequency mismatch between real and ideal time evolution cannot be compensated for by adjusting the operation period. The leading terms of the fidelity can be written as
$$\mathcal{F}\approx 1-\frac{1}{2}\left(a\frac{E_J^2}{E_LE_{\mathrm{ch}}}+b\frac{E_L}{E_{\mathrm{ch}}}\right),$$
(11)
where $`a`$ and $`b`$ are coefficients which depend on the particular choice of $`n_{x,i}`$ and $`t_0`$. In Fig. 2 we show the numerical results for $`n_x=1/4`$ and $`t_0=\pi /E_L`$. The loss of fidelity (the term in parentheses in Eq. (11)) is proportional to $`t_0`$. The maximum (the best operation one can achieve) scales linearly with $`E_J/E_{ch}`$. This should be contrasted with the one-bit case where it scales quadratically.
We mention that we have chosen the definitions for the leakage and the fidelity describing the “worst case” in order to avoid a dependence of the discussion on the preparation of the initial state. One could wonder whether the “generic case” is much more robust with respect to leakage. It is easy to convince oneself by checking various choices of initial states that the loss of fidelity is indeed on the order of the worst case estimates.
In conclusion, starting from given gate operations we have discussed their optimal implementation in real systems. We have shown that leakage limits the number of operations which can be performed reliably both for one and two qubit gates. For one-bit gates one can correct leakage errors by changing the operation time. We have pointed out that with respect to fidelity it may be appropriate to choose the elementary two-qubit gate as it is determined by the implementation. Fig. 2 shows the central result of this work: although leakage causes an inevitable loss of fidelity for two-qubit operations, this loss can be minimized by an appropriate choice of the device parameters.
Finally we mention that one can speculate about correction procedures for errors caused by leakage. It should be possible to check during the computation whether leakage has occurred. This should be done by measuring the system only if it is outside the computational subspace. One can imagine realizing a low-sensitivity SET transistor which is able to measure the system only if the charge is outside a specified window.
###### Acknowledgements.
The authors would like to thank A.K. Ekert, G. Falci, R. Jozsa and Y. Makhlin for helpful discussions. This work was supported in part by the European TMR Research Network under contracts ERB 4061PL95-1412 and FMRX-CT-97-0143.
|
no-problem/9906/quant-ph9906096.html
|
ar5iv
|
text
|
# Book Review
Bohmian Mechanics and Quantum Theory: An Appraisal.
James Cushing, Arthur Fine and Sheldon Goldstein (Eds.)
Gregg Jaeger, Boston University, Dept. of Elect. and Comp. Eng., Photonics Center, 8 St. Mary’s St, Boston, MA 02215
This collection “Bohmian Mechanics and Quantum Theory” gives us the broadest perspective yet on this important realist alternative to standard quantum theory. The book treats the history, philosophical ramifications and consistency of theories arising from Bohm’s original model and their relations to other alternative theories. Significantly, it also contains applications of these theories to practical situations, such as scattering theory - a sign of the field’s increasing maturity.
Most importantly among its contributions to physics and the philosophy of physics, Bohmian mechanics provides an existence proof in support of the hidden-variable approach to explaining quantum phenomena: it is a causal mechanics observationally equivalent to standard nonrelativistic quantum theory. As with the study of the foundations of quantum theory generally, Bohmian mechanics is undergoing a renaissance. During those long years of neglect between the appearance of the EPR argument in 1935 and the emergence of various theories of microphysics, Bohmian mechanics and Everett’s relative state theory emerged as little-regarded testimonies that alternatives to standard quantum theory were indeed conceivable. In these later times, we have this exciting volume of contemporary work on Bohmian mechanics. Though the editors have included among the contributors several contemporary advocates of the approach, sufficient sceptical analysis is also provided to put optimistic claims in proper perspective.
The simplest way to view the introduction in 1952 of Bohm’s theory is as a direct response to the acausality that Bohr and Pauli, amongst others, saw as essential to quantum phenomena. After all, these workers saw acausality unmistakably manifest in the apparent impossibility of understanding quantum phenomena with theories attributing definite trajectories in space-time. Bohm’s move was to take as fundamental the contrary, that physical systems, beginning with single particles, have definite positions at all times and that their initial positions at a given time determine their future behavior, in accordance with Newton’s equation of motion. This required mainly the addition of a “quantum potential” to those potentials ordinarily present. The result was a complete theory of motion for microscopic systems whose statistics accord with the predictions of standard quantum mechanics. In a broader sense, Bohm’s theory can be seen as a materialist response to the mystic character of the standard theory, most clearly exhibited in its dependence on the special status of the observer of quantum phenomena.
The paper by Millard Baublitz and Abner Shimony gives order to the conceptual tug-of-war now surrounding Bohmian mechanics, distinguishing two components within Bohm’s initial theory: the “causal view” and the “guidance view.” These contributors show how the presence of the two views within Bohmian mechanics engenders an inner tension that has haunted the subject since its inception. The former, causal view involves a Newtonian interpretation of Bohm’s equation of motion, which includes the auxiliary “quantum potential.” The guidance view instead makes the “guidance condition” – Bohm’s special assumption that a particle’s velocity can be written as the derivative of its Hamilton-Jacobi function divided by its mass – fundamental and exact, while dispensing with the quantum potential. Support for the stripped down, guidance view is strong, numbering amongst its advocates John S. Bell. By contrast, there are not any advocates of the causal view as a stand-alone framework. Rather, the causal view is a component of the specific hidden variables theories proposed by David Bohm and B.J. Hiley and others.
Under the causal view, Bohmian mechanics is subsumed under classical mechanics – provided the additional “quantum potential” is included – thereby inheriting the realist metaphysics of classical theory. The price paid is the loss of necessary agreement with the Born rule and of explanations for the behavior of entangled systems. This, in turn, means the loss of explanations for measurement results. Such a loss is reason enough to abandon the formalism of the causal view as sufficient for a stand-alone theory. The guidance view, on the other hand, reproduces the quantum mechanical predictions for entangled systems. The price paid is the loss of the intuitive clarity of a classical theory. These two views complement one another; abandoning either involves a significant loss, one physical and the other intuitive. It is, therefore, understandable that Bohm was compelled to incorporate both views into his theory to the extent allowed, though this threatens the theory’s consistency. This tension occupied Bohm, and has occupied others ever since the theory’s introduction.
Not surprisingly, this situation has also not been resolved within the current volume. However, the book does bring us closer to the heart of the matter through its careful analyses of existing theory and presentations of exploratory variants. Beyond these questions, others of more explicitly historical and philosophical natures are also addressed in this collection. Recall that another concern of EPR’s critique of standard quantum theory, beyond the issue of completeness, was the nature of “physical reality.” Mara Beller’s essay, “Bohm and the ‘Inevitability’ of Acausality” takes up this question. Beller carefully delves into Bohm’s position, which sees a positivistic eschewing of unobservables as the source of physicists’ reluctance to go beyond the trappings of the standard theory since its formulation. (Notably, critic Robin Collins takes a similar position against Bohmian realism in his “Epistemological critique…”.) Importantly, Beller points out a long antagonism between Bohm and Bohr regarding the relation between classical and quantum concepts: Bohm had resisted from the beginning of his career the conclusion that the quantum world is fundamentally acausal. It was only later that Bohm’s theory demonstrated the deniability of the acausality advocated by Bohr.
As Arthur Fine notes in his essay, Bohm developed an independent ideological framework based on “radical holism.” Fine also points out that the realism of Bohm is of an unusual sort; the properties found in measurement are not simply disclosed by the act of measuring but may change during measurement. This was recognized by Einstein already in 1953, when he saw that in the case of a particle traveling between two walls the pre-measurement speed would be zero, with a non-zero velocity being acquired during measurement. Such compromised realism turned Einstein away from support of the Bohm model.
The question of how the properties of a Bohmian mechanical system are to be attributed is taken up by Harvey Brown, Andrew Elby and Robert Weingard. Ontologically, two components are generally associated with a Bohmian system: the $`\mathrm{\Psi }`$-function and the corpuscle. The former has been viewed by Peter Holland as acting on the latter in a causal manner. Holland assumes the nonlocalizability hypothesis: that the dynamical state-independent parameters of mass, charge and magnetic moment cannot be attributed to (i.e. localized within) the corpuscle alone. This hypothesis is motivated in part by the (nonlocal) $`\mathrm{\Psi }`$’s dependence on these parameters. One who accepts the nonlocalizability hypothesis faces the options of parsimony or generosity with regard to $`\mathrm{\Psi }`$’s bearing these attributes. On the former option all these properties are associated with $`\mathrm{\Psi }`$, whereas on the latter they are attributed to the conjunction of $`\mathrm{\Psi }`$ and the corpuscle. Brown et al. suggest a choice in favor of generosity, as parsimony would give rise to paradoxical situations such as a pair of particles being associated with a single corpuscle should the pair have coincident trajectories.
For his part, Holland argues elsewhere that quantum mechanics and Bohmian mechanics cannot be universal physical theories, since the formal structure of quantum theory prevents the recovery of the full range of possible classical mechanical motions. In essence, Holland is denying the reducibility of classical to quantum theory. In a related contribution, Detlef Duerr, Sheldon Goldstein and Nino Zanghi argue that Bohmian mechanics, being a hidden-variables theory, should be viewed as the “foundation of quantum mechanics” in that one may arrive at a version of Bohmian mechanics by adding to quantum mechanics particle positions as hidden variables. This raises the question as to whether Bohmian mechanics and quantum mechanics or Bohmian mechanics and classical mechanics are to be seen as the closer pairing.
Variant versions of Bohmian mechanics are also presented, along with the factors motivating them. Trevor Samols points out that, though Bohmian mechanics can be viewed as a realist version of quantum mechanics, there is great difficulty in extending the theory in a relativistic form, however well the phenomenology (in the physicists’ sense) may work out: Bohmian mechanics bears the burden of a preferred frame of reference due to its use of the guidance condition. Samols offers an alternative field theory in the spirit of Bohmian mechanics that does not depend on such a frame. Similarly, Antony Valentini offers his own “pilot-wave theory” of fields, gravitation and cosmology. P.N. Kaloyerou, and Chris Dewdney and George Horton also discuss the treatment of bosonic fields within the Bohmian tradition.
In addition to the above contributions, this collection contains a number of carefully worked out practical applications of Bohmian mechanics to interferometry, position measurements, scattering theory, tunneling phenomena and other practical situations to which any mature physical theory must be applicable. Though this book does not answer all the questions one is compelled to ask of Bohmian mechanics, it does demonstrate that Bohm’s theory and its successors form a vibrant part of contemporary physical theory and a locus of stimulating philosophical inquiry.
|
no-problem/9906/gr-qc9906019.html
|
ar5iv
|
text
|
# Numerical simulations of Gowdy spacetimes on 𝑆²×𝑆¹×𝑅
## I Introduction
There have been several numerical investigations of the approach to the singularity in inhomogeneous cosmologies. In general, it is found that (except at isolated points) the approach to the singularity is either asymptotically velocity term dominated (AVTD) or is oscillatory. In the oscillatory case, there are epochs of velocity term dominance punctuated by short “bounces.” The most extensively studied inhomogeneous cosmology is the Gowdy spacetime on $`T^3\times R`$. Here the approach to the singularity is AVTD except at isolated points. The Gowdy spacetimes on $`T^3\times R`$ are especially well suited to a numerical treatment for the following reasons: (i) Due to the presence of two Killing fields, the metric components depend on only two spacetime coordinates. (ii) The constraint equations are easy to implement. (iii) The boundary conditions are particularly simple, just periodic boundary conditions in the one nontrivial spatial direction.
The original work of Gowdy treated spatially compact spacetimes with a two parameter spacelike isometry group. Gowdy showed that, for these spacetimes, the topology of space must be $`T^3`$ or $`S^3`$ or $`S^2\times S^1`$. Given the numerical results for the $`T^3`$ case, it is natural to ask what happens in the other two cases. In a recent paper Obregon and Ryan note that the Kerr metric between the outer and inner horizons is a Gowdy spacetime with spatial topology $`S^2\times S^1`$. They analyze the behavior of this spacetime and speculate that there may be significant differences between the behavior of Gowdy spacetimes on $`T^3\times R`$ and on $`S^2\times S^1\times R`$.
A numerical simulation of the $`S^2\times S^1`$ case presents some difficulties that are absent in the $`T^3`$ case. The constraint equations become more complicated, and there are difficulties associated with boundary conditions. In the $`T^3`$ case, the Killing fields are nowhere vanishing. However, in the $`S^2\times S^1`$ case, one of the Killing fields vanishes at the north and south poles of the $`S^2`$. Smoothness of the metric at these axis points then requires that the metric components behave in a particular way at these points. A computer code to evolve the $`S^2\times S^1`$ case therefore must enforce these smoothness conditions as boundary conditions, and must do so in such a way that the evolution is both stable and accurate. These issues are similar to those encountered in the numerical evolution of axisymmetric spacetimes, and the techniques presented here for Gowdy spacetimes should be useful for axisymmetric spacetimes as well.
This paper presents the results of numerical simulations of Gowdy spacetimes on $`S^2\times S^1\times R`$. Section 2 presents the metric and vacuum Einstein equations in a form suitable for numerical evolution. The numerical technique is presented in section 3, with the results given in section 4.
## II Metric and Field Equations
The Gowdy metric on $`S^2\times S^1\times R`$ has the form
$$ds^2=e^M(-dt^2+d\theta ^2)+\mathrm{sin}t\mathrm{sin}\theta \left(e^L[d\varphi +Qd\delta ]^2+e^{-L}d\delta ^2\right).$$
(1)
Here the metric functions $`M,L`$ and $`Q`$ depend only on $`t`$ and $`\theta `$. Thus our two Killing fields are $`(\partial /\partial \varphi )`$ and $`(\partial /\partial \delta )`$. The coordinates $`\varphi `$ and $`\delta `$ are identified with period $`2\pi `$, with $`\delta `$ the coordinate on the $`S^1`$ and $`(\theta ,\varphi )`$ the coordinates on the $`S^2`$.
Here the “axis” points are at $`\theta =0`$ and $`\theta =\pi `$. The spacetime singularities (“big bang” and “big crunch”) are at $`t=0`$ and $`t=\pi `$. This form of the metric presents difficulties for a numerical treatment. Smoothness at the axis requires divergent behavior in the functions $`L`$ and $`M`$. Furthermore, the spacetime singularities occur at finite values of the time coordinate. This is likely to lead to bad behavior of the numerical simulation near $`t=0`$ or $`t=\pi `$. These difficulties are overcome with a new choice of metric functions and time coordinate. Define the new metric functions $`P`$ and $`\gamma `$ by
$$P\equiv L-\mathrm{ln}\mathrm{sin}\theta ,$$
(2)
$$2\gamma \equiv M-(P+\mathrm{ln}\mathrm{sin}t).$$
(3)
Define the new time coordinate $`\tau `$ by
$$\tau \equiv -\mathrm{ln}\mathrm{tan}(t/2).$$
(4)
The metric then takes the form
$$ds^2=\frac{1}{\mathrm{cosh}\tau }\left\{e^P\left[e^{2\gamma }\left(-\frac{d\tau ^2}{\mathrm{cosh}^2\tau }+d\theta ^2\right)+\mathrm{sin}^2\theta (d\varphi +Qd\delta )^2\right]+e^{-P}d\delta ^2\right\}.$$
(5)
Smoothness of the metric at the axis is equivalent to the requirement that $`P,Q`$ and $`\gamma `$ be smooth functions of $`\mathrm{cos}\theta `$ with $`\gamma `$ vanishing at $`\theta =0`$ and $`\theta =\pi `$. Note that for $`f`$ any smooth function of $`\mathrm{cos}\theta `$, it follows that $`df/d\theta =0`$ at $`\theta =0`$ and $`\theta =\pi `$.
As in the $`T^3`$ case, the vacuum Einstein field equations become evolution equations for $`P`$ and $`Q`$ and “constraint” equations that determine $`\gamma `$. The evolution equations are
$$P_{\tau \tau }=e^{2P}\mathrm{sin}^2\theta (Q_\tau )^2+\frac{1}{\mathrm{cosh}^2\tau }\left[P_{\theta \theta }+\mathrm{cot}\theta P_\theta -1-e^{2P}\mathrm{sin}^2\theta (Q_\theta )^2\right],$$
(6)
$$Q_{\tau \tau }=-2P_\tau Q_\tau +\frac{1}{\mathrm{cosh}^2\tau }\left[Q_{\theta \theta }+3\mathrm{cot}\theta Q_\theta +2P_\theta Q_\theta \right].$$
(7)
Here a subscript denotes partial derivative with respect to the corresponding coordinate. Note that, as in the $`T^3`$ case, the evolution equations have no dependence on $`\gamma `$.
The constraint equations are
$$\mathrm{cot}\theta \gamma _\tau -\mathrm{tanh}\tau \gamma _\theta =A,$$
(8)
$$\frac{\mathrm{cot}\theta }{\mathrm{cosh}^2\tau }\gamma _\theta -\mathrm{tanh}\tau \gamma _\tau =B,$$
(9)
where the quantities $`A`$ and $`B`$ are given by
$$2A\equiv -\mathrm{tanh}\tau P_\theta +P_\tau P_\theta +e^{2P}\mathrm{sin}^2\theta Q_\tau Q_\theta ,$$
(10)
$$4B\equiv -2\mathrm{tanh}\tau P_\tau +(P_\tau )^2+e^{2P}\mathrm{sin}^2\theta (Q_\tau )^2+\mathrm{tanh}^2\tau -4+\frac{1}{\mathrm{cosh}^2\tau }\left[(P_\theta )^2+e^{2P}\mathrm{sin}^2\theta (Q_\theta )^2\right].$$
(11)
Solving equations (8) and (9) for $`\gamma _\theta `$ and $`\gamma _\tau `$ we find
$$\gamma _\theta =\frac{\mathrm{cosh}^2\tau (A\mathrm{tanh}\tau +B\mathrm{cot}\theta )}{\mathrm{cot}^2\theta -\mathrm{sinh}^2\tau },$$
(12)
$$\gamma _\tau =\frac{A\mathrm{cot}\theta +B\mathrm{sinh}\tau \mathrm{cosh}\tau }{\mathrm{cot}^2\theta -\mathrm{sinh}^2\tau }.$$
(13)
Given a solution of the evolution equations (6) and (7) for $`P`$ and $`Q`$, equations (12) and (13) and the smoothness condition that $`\gamma =0`$ at $`\theta =0`$ completely determine $`\gamma `$. Actually, equations (12) and (13) seem to be in danger of overdetermining $`\gamma `$, but the integrability condition for these equations is automatically satisfied as a consequence of the evolution equations for $`P`$ and $`Q`$. There are, however, two remaining difficulties with the equations for $`\gamma `$. The first has to do with the fact that the denominator in equations (12) and (13) vanishes when $`|\mathrm{cot}\theta |=\mathrm{sinh}\tau `$. Smoothness of the metric then requires that the numerators of these equations vanish whenever the denominator does. This places conditions on $`P`$ and $`Q`$. If these conditions are satisfied for the initial data, the evolution equations will preserve them. The second difficulty has to do with the fact that $`\gamma `$ must vanish at $`\theta =\pi `$ as well as $`\theta =0`$. Integrating equation (12) from $`0`$ to $`\pi `$, it then follows that we must have
$$\int _0^\pi \frac{\mathrm{cosh}^2\tau (A\mathrm{tanh}\tau +B\mathrm{cot}\theta )d\theta }{\mathrm{cot}^2\theta -\mathrm{sinh}^2\tau }=0.$$
(14)
If this condition is satisfied by the initial data, then the evolution equations will preserve it.
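In a numerical treatment, the condition (14) can be monitored directly; a minimal sketch (our illustration), using the trapezoid rule with `A` and `B` sampled on a staggered θ grid that avoids the axis, is shown below. Near points where $`|\mathrm{cot}\theta |=\mathrm{sinh}\tau `$ the integrand remains finite only because the numerator vanishes there, so some care (or regularization) is needed in practice.

```python
import numpy as np

def integral_condition(theta, A, B, tau):
    """Evaluate the integral in Eq. (14); it should vanish (to truncation
    error) for admissible data and stay zero under the evolution."""
    cot = np.cos(theta) / np.sin(theta)
    integrand = np.cosh(tau)**2 * (A * np.tanh(tau) + B * cot) \
                / (cot**2 - np.sinh(tau)**2)
    return np.trapz(integrand, theta)
```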
In summary, the initial data for $`P`$ and $`Q`$ are not completely freely specifiable. They must satisfy conditions at the points where $`|\mathrm{cot}\theta |=\mathrm{sinh}\tau `$ as well as an integral condition. Given initial data satisfying these conditions, the evolution equations (6) and (7) then determine $`P`$ and $`Q`$ and the constraint equations (12) and (13) then determine $`\gamma `$.
## III Numerical Methods
We now turn to the numerical methods used to implement the evolution equations. We begin by casting the equations in first-order form by introducing the quantities $`V\equiv P_\tau `$ and $`W\equiv Q_\tau `$. These quantities then satisfy the equations
$$V_\tau =e^{2P}\mathrm{sin}^2\theta W^2+\frac{1}{\mathrm{cosh}^2\tau }\left[P_{\theta \theta }+\mathrm{cot}\theta P_\theta -1-e^{2P}\mathrm{sin}^2\theta (Q_\theta )^2\right],$$
(15)
$$W_\tau =-2VW+\frac{1}{\mathrm{cosh}^2\tau }\left[Q_{\theta \theta }+3\mathrm{cot}\theta Q_\theta +2P_\theta Q_\theta \right].$$
(16)
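In code, the right-hand side of this first-order system is a few lines; the sketch below (our illustration) uses centered differences, with the understanding that the two end entries produced by the wrap-around `np.roll` stencils lie in the ghost zones introduced below and are overwritten by the axis boundary conditions:

```python
import numpy as np

def rhs(X, tau, theta, dtheta):
    """Right-hand side F(X, tau) of Eqs. (15)-(16), X = (P, Q, V, W)."""
    P, Q, V, W = X
    c2 = 1.0 / np.cosh(tau)**2
    cot = np.cos(theta) / np.sin(theta)
    s2e = np.exp(2.0 * P) * np.sin(theta)**2            # e^{2P} sin^2(theta)
    P_th = (np.roll(P, -1) - np.roll(P, 1)) / (2.0 * dtheta)
    Q_th = (np.roll(Q, -1) - np.roll(Q, 1)) / (2.0 * dtheta)
    P_thth = (np.roll(P, -1) + np.roll(P, 1) - 2.0 * P) / dtheta**2
    Q_thth = (np.roll(Q, -1) + np.roll(Q, 1) - 2.0 * Q) / dtheta**2
    dV = s2e * W**2 + c2 * (P_thth + cot * P_th - 1.0 - s2e * Q_th**2)
    dW = -2.0 * V * W + c2 * (Q_thth + 3.0 * cot * Q_th + 2.0 * P_th * Q_th)
    return np.array([V, W, dV, dW])
```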
Thus the evolution equations have the form $`\vec{X}_\tau =\vec{F}(\vec{X},\tau )`$. We implement these equations using an iterative Crank-Nicholson scheme. Given $`\vec{X}`$ at time $`\tau `$, we define $`\vec{X}_0(\tau +\mathrm{\Delta }\tau )\equiv \vec{X}(\tau )`$ and then iterate the equation
$$\vec{X}_{n+1}(\tau +\mathrm{\Delta }\tau )=\vec{X}(\tau )+\frac{\mathrm{\Delta }\tau }{2}\left[\vec{F}(\vec{X}(\tau ),\tau )+\vec{F}(\vec{X}_n(\tau +\mathrm{\Delta }\tau ),\tau +\mathrm{\Delta }\tau )\right].$$
(17)
In principle, one should iterate until some sort of convergence is achieved. In practice, we simply iterate 10 times. We use $`\mathrm{\Delta }\tau =\mathrm{\Delta }\theta /2`$ where $`\mathrm{\Delta }\theta `$ is the spatial grid spacing.
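One such update step, written out (a sketch, with the fixed ten iterations mentioned above):

```python
def icn_step(X, tau, dtau, F, n_iter=10):
    """Iterative Crank-Nicholson update of Eq. (17) for dX/dtau = F(X, tau)."""
    FX = F(X, tau)
    Xn = X.copy()                    # X_0(tau + dtau) = X(tau)
    for _ in range(n_iter):
        Xn = X + 0.5 * dtau * (FX + F(Xn, tau + dtau))
    return Xn
```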
The spatial grid is as follows: let $`n_\theta `$ be the number of spatial grid points. Then we choose $`\mathrm{\Delta }\theta =\pi /(n_\theta -2)`$ and $`\theta _i=(i-1.5)\mathrm{\Delta }\theta `$. Thus in addition to the “physical zones” for $`i=2,3,\mathrm{},n_\theta -1`$, we have two “ghost zones” at $`\theta _1=-\mathrm{\Delta }\theta /2`$ and $`\theta _{n_\theta }=\pi +(\mathrm{\Delta }\theta /2)`$. The ghost zones are not part of the spacetime: variables there are set by boundary conditions. For any quantity $`S`$, define $`S_i\equiv S(\theta _i)`$. Spatial derivatives are implemented using the usual second-order scheme:
$$S_\theta (\theta _i)=\frac{S_{i+1}-S_{i-1}}{2\mathrm{\Delta }\theta },$$
(18)
$$S_{\theta \theta }(\theta _i)=\frac{S_{i+1}+S_{i-1}-2S_i}{(\mathrm{\Delta }\theta )^2}.$$
(19)
Smoothness of the metric requires that $`P_\theta =0`$ at $`\theta =0`$. Since $`\theta =0`$ is halfway between $`i=1`$ and $`i=2`$, we implement this condition as $`P_1=P_2`$. Similarly, we use $`Q_1=Q_2`$ since $`Q_\theta =0`$ at $`\theta =0`$. Correspondingly, the requirement that $`P_\theta `$ and $`Q_\theta `$ vanish at $`\theta =\pi `$ is implemented as $`P_{n_\theta }=P_{n_\theta -1}`$ and $`Q_{n_\theta }=Q_{n_\theta -1}`$. These boundary conditions are imposed at each iteration of the Crank-Nicholson scheme.
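The grid and the reflective ghost-zone conditions translate directly into code; a minimal sketch:

```python
import numpy as np

n_theta = 502                                        # includes two ghost zones
dtheta = np.pi / (n_theta - 2)
theta = (np.arange(1, n_theta + 1) - 1.5) * dtheta   # theta_1 = -dtheta/2

def apply_axis_bc(P, Q):
    """Enforce P_theta = Q_theta = 0 at theta = 0 and theta = pi by
    copying across the half-offset ghost zones."""
    P[0], P[-1] = P[1], P[-2]
    Q[0], Q[-1] = Q[1], Q[-2]
    return P, Q
```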
## IV Results
To test the computer code, it is helpful to have some closed form exact solution of the evolution equations to compare to the numerical evolution of the corresponding initial data. In particular, for a second order accurate evolution scheme, the difference between the numerical solution and the exact solution should converge to zero as the grid spacing squared.
A polarized Gowdy spacetime is one for which the Killing vectors are hypersurface orthogonal. For our form of the metric, that is equivalent to the condition $`Q=0`$. For polarized Gowdy spacetimes, the evolution equation (7) for $`Q`$ is trivially satisfied, and the evolution equation (6) for $`P`$ reduces to the following:
$$P_{\tau \tau }=\frac{1}{\mathrm{cosh}^2\tau }\left[P_{\theta \theta }+\mathrm{cot}\theta P_\theta -1\right].$$
(20)
This is a linear equation which can be solved by separation of variables, though one must choose only those solutions that satisfy the additional conditions for smoothness of the metric.
Unfortunately, the polarized solutions do not provide, by themselves, a very stringent code test: the evolution equation for $`Q`$ and the nonlinear terms in the evolution equation for $`P`$ are not tested at all. Fortunately, there is a technique, the Ehlers solution generating technique, which allows us to begin with a polarized solution and produce an unpolarized solution. Let $`\overline{P}`$ be any solution of the polarized equation (20) and let $`c`$ be any constant. Define $`P`$ and $`Q`$ by
$$P=\overline{P}-\mathrm{ln}\left[1+\left(\frac{c\mathrm{sin}^2\theta }{\mathrm{cosh}\tau }e^{\overline{P}}\right)^2\right],$$
(21)
$$Q_\tau =-\frac{2c}{\mathrm{cosh}^2\tau }\left(2\mathrm{cos}\theta +\mathrm{sin}\theta \overline{P}_\theta \right),$$
(22)
$$Q_\theta =2c\mathrm{sin}\theta \left(\mathrm{tanh}\tau -\overline{P}_\tau \right).$$
(23)
Then $`(P,Q)`$ is a solution of the unpolarized Gowdy equations (6) and (7).
We use the following polarized solution:
$$\overline{P}=-\mathrm{ln}\mathrm{cosh}\tau +2\tau ,$$
(24)
which indeed solves the polarized equation (20). The solution generating technique then yields an unpolarized solution with $`P`$ given by equation (21) and $`Q`$ given by
$$Q=4c(1-\mathrm{tanh}\tau )\mathrm{cos}\theta .$$
(25)
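For the code tests, the exact pair can be evaluated directly; a sketch:

```python
import numpy as np

def exact_solution(theta, tau, c=1.0):
    """Unpolarized exact solution obtained from Eqs. (21), (24) and (25)."""
    Pbar = -np.log(np.cosh(tau)) + 2.0 * tau
    P = Pbar - np.log(1.0 + (c * np.sin(theta)**2
                             * np.exp(Pbar) / np.cosh(tau))**2)
    Q = 4.0 * c * (1.0 - np.tanh(tau)) * np.cos(theta)
    return P, Q
```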
Figure 1 shows $`P`$ for the exact solution and the numerical evolution. (Here there are 502 spatial grid points, $`c=1`$, and the initial data at $`\tau =0`$ are evolved to $`\tau =10`$. The results are shown at 51 equally spaced points from $`\theta =0`$ to $`\theta =\pi `$). Figure 2 shows the difference between exact and numerical solutions. Here the parameters are as in figure 1, except that two simulations are run: one with 502 spatial grid points and one with 1002 grid points. For comparison, the results on the finer grid are multiplied by a factor of 4. The results show second order convergence. (Note: due to the presence of the ghost zones, quantities must be interpolated on the grids to make a comparison between quantities at the same values of $`\theta `$).
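The quoted second-order convergence reduces to a one-line diagnostic once both error profiles are interpolated to common values of $`\theta `$:

```python
import numpy as np

def convergence_factor(err_coarse, err_fine):
    """Ratio of maximum errors; it should approach 4 for a second-order
    scheme when the fine grid halves the spacing."""
    return np.max(np.abs(err_coarse)) / np.max(np.abs(err_fine))
```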
We would now like to find the generic behavior of Gowdy spacetimes on $`S^2\times S^1\times R`$. In the $`T^3`$ case a family of initial data was chosen and evolved. It was argued that the behavior of these spacetimes reflects the generic behavior. Here, we choose a similar family. The initial data at $`\tau =0`$ are $`P=0,P_\tau =v_0\mathrm{cos}\theta ,Q=2\mathrm{cos}\theta ,Q_\tau =0`$. Here $`v_0`$ is a constant. These data satisfy the constraint conditions. Figures 3-8 show the evolution of these data for various values of the parameter $`v_0`$. Here, $`v_0=2`$ in figures 3 and 4, $`v_0=4`$ in figures 5 and 6, and $`v_0=8`$ in figures 7 and 8. In all cases, the range of $`\theta `$ is $`(0,\pi )`$, the range of $`\tau `$ is $`(0,10)`$ and the simulation is run with 1002 spatial grid points. Note the presence of spiky features. The large $`\tau `$ behavior of the solutions is the following: there are functions $`Q_{\mathrm{\infty }}(\theta )`$ and $`v_{\mathrm{\infty }}(\theta )`$ with $`v_{\mathrm{\infty }}(\theta )<1`$ such that away from the spiky features we have $`Q\to Q_{\mathrm{\infty }}(\theta )`$ and $`P_\tau \to v_{\mathrm{\infty }}(\theta )`$ for large $`\tau `$.
The reason for this behavior is not hard to find and is essentially the same as in the $`T^3`$ case. For large $`\tau `$, and provided that $`P`$ is growing no faster than $`\tau `$, the terms in equations (6) and (7) proportional to $`1/\mathrm{cosh}^2\tau `$ become negligible. The truncated equations obtained by neglecting these terms are
$$P_{\tau \tau }=e^{2P}\mathrm{sin}^2\theta (Q_\tau )^2,$$
(26)
$$Q_{\tau \tau }=-2P_\tau Q_\tau .$$
(27)
Equations (26) and (27) are called the AVTD equations. They can be solved in closed form and have the property that $`Q\to Q_{\mathrm{\infty }}(\theta )`$ and $`P_\tau \to v_{\mathrm{\infty }}(\theta )`$ as $`\tau \to \mathrm{\infty }`$. Solutions of the full equations (6) and (7) are called AVTD provided that they approach solutions of the AVTD equations for large $`\tau `$. Thus we have an explanation of the AVTD behavior provided that we can show that $`P`$ grows no faster than $`\tau `$. As in the $`T^3`$ case, if $`P`$ grows faster than $`\tau `$ then the term in equation (6) proportional to $`e^{2P}/\mathrm{cosh}^2\tau `$ will cause a “bounce” that leaves $`P`$ growing less fast than $`\tau `$. Thus, an analysis of the large $`\tau `$ behavior of the evolution equations (6) and (7), essentially the same as in the $`T^3`$ case, leads to an explanation of the AVTD behavior.
The AVTD behavior can also be explained by an analysis of the local properties of Gowdy spacetimes on $`S^2\times S^1\times R`$. Define $`S_a\equiv \nabla _a(\mathrm{sin}t\mathrm{sin}\theta )`$. Then in regions of the spacetime where $`S_a`$ is timelike, the region is locally isometric to a Gowdy spacetime on $`T^3\times R`$. In regions where $`S_a`$ is spacelike, the region is locally isometric to a cylindrical wave. Thus, the behavior that we should expect in the $`S^2\times S^1`$ case is a combination of the behavior of the $`T^3`$ case and the behavior of cylindrical waves. Furthermore, for any point on the $`S^2`$ except the poles, as the singularity is approached, $`S_a`$ becomes timelike at that point. Thus the asymptotic behavior as the singularity is approached in the $`S^2\times S^1`$ case should be the same as in the $`T^3`$ case.
We now turn to an analysis of the spiky features seen in the metric functions $`P`$ and $`Q`$. The argument of the previous paragraph indicates that these features are essentially the same as those seen in the $`T^3`$ case. In fact, these features can be explained using the evolution equations (6) and (7) as was done in the $`T^3`$ case. For large $`\tau `$, it follows from equation (7) that $`Q_\tau \approx \mathrm{\Pi }_Q(\theta )e^{-2P}`$ for some function $`\mathrm{\Pi }_Q(\theta )`$. Then, using this result in equation (6) we have an approximate evolution equation for $`P`$:
$$P_{\tau \tau }\approx \mathrm{sin}^2\theta \left[e^{-2P}(\mathrm{\Pi }_Q)^2-\frac{e^{2P}}{\mathrm{cosh}^2\tau }(Q_\theta )^2\right].$$
(28)
These terms eventually drive $`P_\tau `$ to the range between $`0`$ and $`1`$. However, at a point $`\theta _1`$ where $`Q_\theta `$ vanishes, $`P_\tau `$ can be greater than $`1`$. This leads to a spiky feature in $`P`$, since $`P_\tau >1`$ at $`\theta _1`$ but $`P_\tau <1`$ at points near $`\theta _1`$. This sort of spiky feature is illustrated in figure 9. Correspondingly, at a point $`\theta _2`$ where $`\mathrm{\Pi }_Q`$ vanishes, $`P_\tau `$ can be less than zero. This leads to sharp features in $`P`$ since $`P_\tau >0`$ at points near $`\theta _2`$. Also since the region where $`P<0`$ leads to rapid growth in $`Q`$, there is a sharp feature in $`Q`$. This sort of feature is illustrated in figure 10.
In summary, a numerical treatment of Gowdy spacetimes on $`S^2\times S^1\times R`$ reveals that they are very similar to Gowdy spacetimes on $`T^3\times R`$. In particular, they show the same behavior of AVTD behavior almost everywhere, and they have the same sort of spiky features at isolated points.
## V Acknowledgements
I would like to thank Beverly Berger, Vince Moncrief and G. Comer Duncan for helpful discussions. I would also like to thank the Institute for Theoretical Physics at Santa Barbara for hospitality. This work was partially supported by NSF grant PHY-9722039 to Oakland University.
|
no-problem/9906/gr-qc9906038.html
|
ar5iv
|
text
|
# Tensor mass and particle number peak at the same location in the scalar-tensor gravity boson star models - an analytical proof
## Abstract
Recently, in boson star models in the framework of Brans-Dicke theory, three possible definitions of mass have been identified, all identical in general relativity but different in scalar-tensor theories of gravity. It has been conjectured that it's the tensor mass which peaks, as a function of the central density, at the same location where the particle number takes its maximum. This is a very important property, which is crucial for stability analysis via catastrophe theory. This conjecture has received some numerical support. Here we give an analytical proof of the conjecture in the framework of the generalized scalar-tensor theory of gravity, confirming in this way the numerical calculations.
PACS numbers: 0440D, 0450
Boson stars were first discussed by Kaup and then by Ruffini and Bonazzola . Boson stars in scalar-tensor theories of gravity have been investigated extensively by many researchers. The first model of a boson star in pure Brans-Dicke theory was studied by Gunderson and Jensen . Their work was generalized by Torres, who studied boson stars in scalar-tensor theories with non-constant $`\omega _{BD}(\mathrm{\Phi })`$. More recently, boson stars have been investigated in the papers by Torres et al. , and in the paper by Comer and Shinkai . In , boson stars have been studied in connection with the so-called gravitational memory , while their stability through cosmic history has been examined in and . Finally, the dynamical evolution of boson stars has been investigated in the paper by Balakrishna and Shinkai . For more details we refer the reader to the most recent review on boson stars .
Here we consider complex scalar field boson stars in the most general scalar tensor theory of gravity with an action in Jordan frame
$`S=\frac{1}{16\pi G_{\ast }}\int \sqrt{\stackrel{~}{g}}\left(F(\mathrm{\Phi })\stackrel{~}{R}-H(\mathrm{\Phi })\stackrel{~}{g}^{\mu \nu }\partial _\mu \mathrm{\Phi }\partial _\nu \mathrm{\Phi }+\stackrel{~}{U}(\mathrm{\Phi })\right)d^4x+`$ (1)
$`+\int \sqrt{\stackrel{~}{g}}\left(\frac{1}{2}\stackrel{~}{g}^{\mu \nu }\partial _\mu \mathrm{\Psi }^+\partial _\nu \mathrm{\Psi }-W(\mathrm{\Psi }^+\mathrm{\Psi })\right)d^4x`$
where $`\stackrel{~}{R}`$ is the Ricci scalar curvature with respect to the space-time metric $`\stackrel{~}{g}_{\mu \nu }`$, $`G_{\ast }`$ is the bare Newtonian constant and $`\mathrm{\Phi }`$ is the gravitational Brans-Dicke scalar with potential term $`\stackrel{~}{U}(\mathrm{\Phi })`$. $`\mathrm{\Psi }`$ is a massive self-interacting complex scalar field with
$$W(\mathrm{\Psi }^+\mathrm{\Psi })=\frac{1}{2}m^2\mathrm{\Psi }^+\mathrm{\Psi }+\frac{1}{4}\lambda _{\ast }(\mathrm{\Psi }^+\mathrm{\Psi })^2.$$
Hereafter we will consider only static and spherically symmetric boson stars.
The explicit form of the action shows that we have a $`U(1)`$-invariant action under the global gauge transformation $`\mathrm{\Psi }\to e^{ia}\mathrm{\Psi }`$, $`a`$ being a constant. This global $`U(1)`$-symmetry gives rise to the following conserved current
$`J^\mu =\frac{i}{2}\sqrt{\stackrel{~}{g}}\stackrel{~}{g}^{\mu \nu }\left(\mathrm{\Psi }\partial _\nu \mathrm{\Psi }^+-\mathrm{\Psi }^+\partial _\nu \mathrm{\Psi }\right)`$ (2)
The conserved current leads to a conserved charge - total particle number making up the star
$`N=\int J^0d^3x`$ (3)
Binding energy is then defined by
$`E_B=M_{Star}-mN`$ (4)
where $`m`$ is the particle mass.
Now a problem arises: How to define the mass which appears in the expression for the binding energy?
Contrary to general relativity, the definition of mass in scalar-tensor theories of gravity is quite subtle. This problem has been recently examined numerically in the work by Whinnett . He has considered three possible mass definitions in Jordan frame, namely the Schwarzschild mass $`M_S`$ (i.e. the ADM mass in Jordan frame), the Keplerian mass $`M_{Kepler}`$ and the tensor one $`M_T`$. As has been shown numerically in (see also ), these three masses differ significantly from each other in the case $`\omega _{BD}=-1`$. The Keplerian mass leads to positive binding energy, which means that every boson star solution is in general potentially unstable. On the contrary, the Schwarzschild mass leads to negative binding energy, suggesting that every solution is potentially stable even for large central densities $`\rho =\mathrm{\Psi }(0)^2`$.
It should be noted that for large constant $`\omega _{BD}`$ (say $`\omega _{BD}>500`$) the difference between the three masses is negligible. However, for arbitrary <sup>1</sup><sup>1</sup>1We mean arbitrary $`\omega _{BD}(\varphi )`$ for which the theory passes through all known gravitational experiments. $`\omega _{BD}(\varphi )`$ it's possible that the three masses may differ from each other significantly. This may occur in the early universe when the cosmological value $`\mathrm{\Phi }_{\mathrm{\infty }}`$ is sufficiently smaller than $`1`$ . Moreover, our numerical calculations show that for some physically relevant functions $`\omega _{BD}(\varphi )`$ we may have $`M_T-M_S\sim (0.15-0.20)M_T`$. On the other hand, the scalar-tensor theories of gravity with $`\omega _{BD}=-1`$ better describe the early universe (see and references therein). So, the case when $`\omega _{BD}`$ is not large is also physically relevant. Therefore, when we study boson stars in the early universe the mass choice is crucial.
It’s the tensor mass which leads to physically acceptable picture.In it’s shown numerically that the tensor mass peaks at the same point as particle numbers - a very important property in general relativity .This property is also crucial for the application of catastrophe theory to analyze the stability of the boson stars ,.
In and the problem of finding an analytical proof of the fact that it's the tensor mass which peaks at the same location as the particle number has been posed. The main purpose of this paper is to fill this gap.
In our opinion it’s more convenient to work in Einstein frame given by
$`g_{\mu \nu }=F(\mathrm{\Phi })\stackrel{~}{g}_{\mu \nu }`$ (5)
In Einstein frame the action (1) takes the form
$`S=\frac{1}{16\pi G_{\ast }}\int \sqrt{g}\left(R-2g^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi +U(\varphi )\right)d^4x+`$ (6)
$`+\int \sqrt{g}\left(\frac{1}{2}A^2(\varphi )g^{\mu \nu }\partial _\mu \mathrm{\Psi }^+\partial _\nu \mathrm{\Psi }-A^4(\varphi )W(\mathrm{\Psi }^+\mathrm{\Psi })\right)d^4x`$
where $`R`$ is the Ricci scalar curvature with respect to the metric $`g_{\mu \nu }`$, $`U(\varphi )=A^4(\varphi )\stackrel{~}{U}(\mathrm{\Phi }(\varphi ))`$ and $`A^2(\varphi )=F^{-1}(\mathrm{\Phi }(\varphi ))`$, where $`\varphi `$ is given by
$`\varphi =\int d\xi \sqrt{\frac{3}{4}\left(\frac{d\mathrm{ln}(F(\xi ))}{d\xi }\right)^2+\frac{1}{2}\frac{H(\xi )}{F(\xi )}}`$ (7)
The action (6) leads to the following field equations
$`G_\mu ^\nu =\kappa _{\ast }T_\mu ^\nu +2\partial _\mu \varphi \partial ^\nu \varphi -\partial ^\sigma \varphi \partial _\sigma \varphi \delta _\mu ^\nu +\frac{1}{2}U(\varphi )\delta _\mu ^\nu `$ (8)
$`\mathrm{\Box }\varphi +\frac{1}{4}U^{\prime }(\varphi )=-\frac{\kappa _{\ast }}{2}\alpha (\varphi )T`$
$`\mathrm{\Box }\mathrm{\Psi }+2\alpha (\varphi )\partial ^\sigma \varphi \partial _\sigma \mathrm{\Psi }=-2A^2(\varphi )\frac{\partial W(\mathrm{\Psi }^+\mathrm{\Psi })}{\partial \mathrm{\Psi }^+}`$
$`\mathrm{\Box }\mathrm{\Psi }^++2\alpha (\varphi )\partial ^\sigma \varphi \partial _\sigma \mathrm{\Psi }^+=-2A^2(\varphi )\frac{\partial W(\mathrm{\Psi }^+\mathrm{\Psi })}{\partial \mathrm{\Psi }}`$
where $`\mathrm{\Box }`$ is the d’Alembert operator in terms of the metric $`g_{\mu \nu }`$, $`\kappa _{\ast }=8\pi G_{\ast }`$, $`\alpha (\varphi )=\frac{d}{d\varphi }\mathrm{ln}(A(\varphi ))`$ and $`T`$ is the trace of the Einstein frame energy-momentum tensor of the complex scalar field given by
$`T_\mu ^\nu =\frac{1}{2}A^2(\varphi )\left(\partial _\mu \mathrm{\Psi }^+\partial ^\nu \mathrm{\Psi }+\partial _\mu \mathrm{\Psi }\partial ^\nu \mathrm{\Psi }^+\right)`$ (9)
$`-\frac{1}{2}A^2(\varphi )\left(\partial _\sigma \mathrm{\Psi }^+\partial ^\sigma \mathrm{\Psi }-2A^2(\varphi )W(\mathrm{\Psi }^+\mathrm{\Psi })\right)\delta _\mu ^\nu `$
The conserved $`U(1)`$-current in Einstein frame is
$`J^\mu =\frac{i}{2}A^2(\varphi )\sqrt{g}g^{\mu \nu }\left(\mathrm{\Psi }\partial _\nu \mathrm{\Psi }^+-\mathrm{\Psi }^+\partial _\nu \mathrm{\Psi }\right)`$ (10)
As we have already mentioned we consider static and spherically symmetric boson stars i.e. space-time with a line element in Einstein frame
$`ds^2=e^\nu dt^2-e^\lambda dr^2-r^2\left(d\theta ^2+\mathrm{sin}^2(\theta )d\phi ^2\right)`$ (11)
and a complex scalar field of the form
$`\mathrm{\Psi }=\sigma (r)e^{i\omega t}`$ (12)
where $`\omega `$ is a real positive number and $`\sigma (r)`$ is a real function. It should be noted that the asymptotic behaviour of the field $`\varphi `$ sets the following tight constraint
$`0<\omega <m`$ (13)
In this case the field equation system reduces to the following system of ordinary differential equations
$`\lambda ^{\prime }=\frac{1-e^\lambda }{r}+\kappa _{\ast }e^\lambda rT_0^0+r\varphi ^{\prime 2}+\frac{1}{2}U(\varphi )e^\lambda r`$ (14)
$`\nu ^{\prime }=\frac{e^\lambda -1}{r}-\kappa _{\ast }e^\lambda rT_1^1+r\varphi ^{\prime 2}-\frac{1}{2}U(\varphi )e^\lambda r`$
$`\varphi ^{\prime \prime }=-\left(\frac{\nu ^{\prime }-\lambda ^{\prime }}{2}+\frac{2}{r}\right)\varphi ^{\prime }+\frac{1}{4}U^{\prime }(\varphi )e^\lambda +\frac{\kappa _{\ast }}{2}\alpha (\varphi )Te^\lambda `$
$`\sigma ^{\prime \prime }=-\left(\frac{\nu ^{\prime }-\lambda ^{\prime }}{2}+\frac{2}{r}\right)\sigma ^{\prime }-\omega ^2e^{\lambda -\nu }\sigma -2\alpha (\varphi )\varphi ^{\prime }\sigma ^{\prime }+2A^2(\varphi )e^\lambda W^{\prime }(\sigma ^2)\sigma `$
Here the components $`T_0^0`$ and $`T_1^1`$ are given correspondingly by
$`T_0^0=\frac{1}{2}\omega ^2A^2(\varphi )e^{-\nu }\sigma ^2+\frac{1}{2}A^2(\varphi )e^{-\lambda }\sigma ^{\prime 2}+A^4(\varphi )W(\sigma ^2)`$ (15)
$`T_1^1=-\frac{1}{2}\omega ^2A^2(\varphi )e^{-\nu }\sigma ^2-\frac{1}{2}A^2(\varphi )e^{-\lambda }\sigma ^{\prime 2}+A^4(\varphi )W(\sigma ^2)`$ (16)
The system (14) has to be solved with the following boundary conditions. We demand asymptotic flatness, which means that $`\nu (\mathrm{\infty })=0`$. On the other hand, nonsingularity at the origin requires $`\lambda (0)=0`$. Concerning $`\varphi `$, nonsingularity at the origin implies $`\varphi ^{\prime }(0)=0`$, while at infinity $`\varphi `$ has to match the cosmological value $`\varphi (\mathrm{\infty })=\varphi _{\mathrm{\infty }}`$. Nonsingularity of $`\sigma `$ at the origin implies $`\sigma ^{\prime }(0)=0`$. We require finite mass and therefore we put $`\sigma (\mathrm{\infty })=0`$. In addition, we have to give the central value $`\sigma (0)`$.
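For a shooting integration of this boundary-value problem, the system (14) can be coded directly. The sketch below is illustrative only: it assumes the simplest closure $`W(s)=m^2s/2`$ (i.e. $`\lambda _{\ast }=0`$), $`U=0`$, and a Brans-Dicke-like coupling $`A(\varphi )=e^{\alpha _0\varphi }`$ with constant $`\alpha _0`$; none of these choices is prescribed by the paper.

```python
import numpy as np

def boson_star_rhs(r, y, omega, m=1.0, kappa=1.0, alpha0=0.0):
    """RHS of system (14), y = (lam, nu, phi, psi, sig, chi) with
    psi = phi' and chi = sigma'.  Illustrative closure: W = m^2 s / 2,
    U = 0, A(phi) = exp(alpha0 * phi)."""
    lam, nu, phi, psi, sig, chi = y
    A2 = np.exp(2.0 * alpha0 * phi)                   # A^2(phi)
    W, dW = 0.5 * m**2 * sig**2, 0.5 * m**2           # W(s), W'(s), s = sig^2
    T00 = (0.5 * omega**2 * A2 * np.exp(-nu) * sig**2
           + 0.5 * A2 * np.exp(-lam) * chi**2 + A2**2 * W)
    T11 = (-0.5 * omega**2 * A2 * np.exp(-nu) * sig**2
           - 0.5 * A2 * np.exp(-lam) * chi**2 + A2**2 * W)
    T = -A2 * (omega**2 * np.exp(-nu) * sig**2
               - np.exp(-lam) * chi**2) + 4.0 * A2**2 * W    # trace of (9)
    dlam = (1.0 - np.exp(lam)) / r + kappa * np.exp(lam) * r * T00 + r * psi**2
    dnu = (np.exp(lam) - 1.0) / r - kappa * np.exp(lam) * r * T11 + r * psi**2
    dpsi = -((dnu - dlam) / 2.0 + 2.0 / r) * psi \
           + 0.5 * kappa * alpha0 * T * np.exp(lam)
    dchi = -((dnu - dlam) / 2.0 + 2.0 / r) * chi \
           - omega**2 * np.exp(lam - nu) * sig \
           - 2.0 * alpha0 * psi * chi + 2.0 * A2 * np.exp(lam) * dW * sig
    return np.array([dlam, dnu, psi, dpsi, chi, dchi])
```

The frequency $`\omega `$ is then tuned (shooting) until $`\sigma (r)`$ decays at large $`r`$ without nodes.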
It’s well known that the tensor mass is just the ADM mass in Einstein frame (we note that in Einstein frame all mass definitions coincide).Therefore we can write directly the explicit expression for the tensor mass using the first equation of (14), namely
$`M_T=\frac{1}{2G_{\ast }}\int _0^{\mathrm{\infty }}drr^2\left(\kappa _{\ast }T_0^0+e^{-\lambda }\varphi ^{\prime 2}+\frac{1}{2}U(\varphi )\right)=\frac{1}{2G_{\ast }}\int _0^{\mathrm{\infty }}drr^2𝒟(r)`$ (17)
as $`𝒟(r)`$ is defined by the expression itself. Respectively, the particle number is given by
$`N=4\pi \int _0^{\mathrm{\infty }}drr^2e^{\frac{\lambda }{2}}\left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)`$ (18)
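Given numerically obtained radial profiles, (17) and (18) are simple quadratures; a post-processing sketch (with `D` the integrand $`𝒟(r)`$ of (17) and `A2` the profile of $`A^2(\varphi )`$):

```python
import numpy as np

def tensor_mass_and_N(r, D, lam, nu, sig, A2, omega, G=1.0):
    """Tensor mass (17) and particle number (18) by the trapezoid rule."""
    MT = np.trapz(r**2 * D, r) / (2.0 * G)
    N = 4.0 * np.pi * np.trapz(
        r**2 * np.exp(lam / 2.0) * omega * A2 * sig**2 * np.exp(-nu / 2.0), r)
    return MT, N
```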
When we are interested only in zero-node solutions, the boson star states are parameterized by the central value $`\sigma (0)`$ of the $`\sigma `$ field, or equivalently by $`\rho =\sigma ^2(0)`$. Let's consider two infinitesimally nearby configurations parameterized by $`\sigma (0)`$ and $`\sigma (0)+\delta \sigma (0)`$. The corresponding variations of the tensor mass and particle number are
$`\delta M_T=\frac{1}{2G_{\ast }}\int _0^{\mathrm{\infty }}drr^2\delta 𝒟`$ (19)
$`\delta N=4\pi \int _0^{\mathrm{\infty }}drr^2\delta \left(e^{\frac{\lambda }{2}}\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)=`$ (20)
$`4\pi \int _0^{\mathrm{\infty }}drr^2e^{\frac{\lambda }{2}}\delta \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)+4\pi \int _0^{\mathrm{\infty }}drr^2\left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)\delta e^{\frac{\lambda }{2}}`$
It’s not difficult to obtain
$`\delta e^{\frac{\lambda }{2}}=\frac{1}{2r}e^{\frac{3}{2}\lambda }\int _0^rdrr^2\delta 𝒟`$ (21)
Now substituting (21) in (20) and after some algebra we have
$`\frac{\delta N}{4\pi }=\int _0^{\mathrm{\infty }}drr^2e^{\frac{\lambda }{2}}\delta \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)+`$ (22)
$`+\frac{1}{2}\int _0^{\mathrm{\infty }}drr^2\delta 𝒟\int _r^{\mathrm{\infty }}d\xi \xi \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)e^{\frac{3}{2}\lambda }`$
Taking into account the explicit form (17) of $`𝒟`$ one obtains
$`\delta \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)=\frac{e^{\frac{\nu }{2}}}{\omega }\left(\delta \left(\frac{𝒟}{\kappa _{\ast }}\right)+\frac{1}{2}\omega ^2e^{-\nu }\delta \left(A^2(\varphi )\sigma ^2\right)\right)`$ (23)
$`-\frac{e^{\frac{\nu }{2}}}{\omega }\delta \left(\frac{1}{2}A^2(\varphi )e^{-\lambda }\sigma ^{\prime 2}+A^4(\varphi )W(\sigma ^2)+\frac{1}{\kappa _{\ast }}e^{-\lambda }\varphi ^{\prime 2}+\frac{1}{2\kappa _{\ast }}U(\varphi )\right)`$
In more detailed form the expression (23) is written as follows
$`\delta \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)=\frac{e^{\frac{\nu }{2}}}{\omega }\delta \left(\frac{𝒟}{\kappa _{\ast }}\right)-\frac{e^{\frac{\nu }{2}}}{\omega }\left(L_\varphi \delta \varphi +\frac{2}{\kappa _{\ast }}e^{-\lambda }\varphi ^{\prime }\delta \varphi ^{\prime }\right)`$ (24)
$`-\frac{e^{\frac{\nu }{2}}}{\omega }\left(L_\sigma \delta \sigma +A^2(\varphi )e^{-\lambda }\sigma ^{\prime }\delta \sigma ^{\prime }\right)-\frac{e^{\frac{\nu }{2}}}{\omega }\left(\frac{1}{2}A^2(\varphi )\sigma ^{\prime 2}+\frac{1}{\kappa _{\ast }}\varphi ^{\prime 2}\right)\delta e^{-\lambda }`$
where $`L_\varphi `$ and $`L_\sigma `$ are given by
$`L_\varphi =\alpha (\varphi )\left(-\omega ^2e^{-\nu }A^2(\varphi )\sigma ^2+A^2(\varphi )e^{-\lambda }\sigma ^{\prime 2}+4A^4(\varphi )W(\sigma ^2)\right)+\frac{1}{2\kappa _{\ast }}U^{\prime }(\varphi )`$ (25)
$`L_\sigma =-\omega ^2e^{-\nu }A^2(\varphi )\sigma +2A^4(\varphi )W^{\prime }(\sigma ^2)\sigma `$ (26)
Putting (24) in the first integral of (22), performing integration by parts and taking into account the third and the fourth equation of the system (14) one arrives at
$`\int _0^{\mathrm{\infty }}drr^2e^{\frac{\lambda }{2}}\delta \left(\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}\right)=\int _0^{\mathrm{\infty }}drr^2\frac{e^{\frac{1}{2}(\nu +\lambda )}}{\omega }\delta \left(\frac{𝒟}{\kappa _{\ast }}\right)`$ (27)
$`-\int _0^{\mathrm{\infty }}drr^2\frac{e^{\frac{1}{2}(\nu +\lambda )}}{\omega }\left(\frac{1}{2}A^2(\varphi )\sigma ^{\prime 2}+\frac{1}{\kappa _{\ast }}\varphi ^{\prime 2}\right)\delta e^{-\lambda }`$
Substituting now (27) into (22) and using that
$$\delta e^{-\lambda }=-\frac{1}{r}\int _0^rdrr^2\delta 𝒟$$
we have
$`\frac{\delta N}{4\pi }=\int _0^{\mathrm{\infty }}drr^2\mathrm{\Lambda }(r)\delta \left(\frac{𝒟}{\kappa _{\ast }}\right)`$ (28)
where $`\mathrm{\Lambda }(r)`$ is the following expression
$`\mathrm{\Lambda }(r)=\kappa _{\ast }\int _r^{\mathrm{\infty }}d\xi \xi \left(\frac{1}{\omega }e^{\frac{1}{2}(\nu +\lambda )}\left(\frac{1}{2}A^2(\varphi )\sigma ^{\prime 2}+\frac{1}{\kappa _{\ast }}\varphi ^{\prime 2}\right)+\frac{1}{2}\omega A^2(\varphi )\sigma ^2e^{-\frac{\nu }{2}}e^{\frac{3}{2}\lambda }\right)+`$ (29)
$`+\frac{1}{\omega }e^{\frac{1}{2}(\nu +\lambda )}`$
Using the first two equations of (14), it's not difficult to show that $`\mathrm{\Lambda }`$ is actually a constant, which turns out to be $`\mathrm{\Lambda }=\frac{1}{\omega }`$. Therefore, we obtain finally
$`\frac{\delta N}{4\pi }=\frac{1}{\omega }\int _0^{\mathrm{\infty }}drr^2\delta \frac{𝒟}{\kappa _{\ast }}`$ (30)
Comparing this expression with (19) we conclude that the following important relation holds
$`\delta M_T=\omega \delta N`$ (31)
The relation (31) may be rewritten in the form
$`{\displaystyle \frac{\delta M_T}{\delta \rho }}=\omega {\displaystyle \frac{\delta N}{\delta \rho }}`$ (32)
Taking into account that $`0<\omega <m`$, it follows from (32) that if $`\rho _{crit}`$ is a critical point for $`M_T`$ (i.e. $`\frac{\delta M_T}{\delta \rho }=0`$), then $`\rho _{crit}`$ is also a critical point for $`N`$.
Let $`M_T`$ have a maximum at $`\rho =\rho _{crit}`$ ($`\frac{\delta ^2M_T}{\delta \rho ^2}<0`$). Then we obtain
$`\left({\displaystyle \frac{\delta ^2N}{\delta \rho ^2}}\right)_{crit}={\displaystyle \frac{1}{\omega _{crit}}}\left({\displaystyle \frac{\delta ^2M_T}{\delta \rho ^2}}\right)_{crit}<0`$ (33)
which shows that $`N`$ has a maximum at $`\rho =\rho _{crit}`$, too.
Therefore the tensor mass $`M_T`$ and the particle number $`N`$ peak, as functions of the central density, at the same location .
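Relation (32) also serves as a numerical diagnostic along a one-parameter sequence of computed equilibrium models; a sketch:

```python
import numpy as np

def residual_of_relation_32(rho, MT, N, omega):
    """Finite-difference residual of dM_T/drho - omega * dN/drho; by
    Eq. (32) it should vanish up to truncation error."""
    return np.gradient(MT, rho) - omega * np.gradient(N, rho)
```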
That the tensor mass and particle number peak at the same location results in a cusp in the bifurcation diagram $`M_T`$ versus $`N`$. In an infinitesimally small neighborhood of the cusp we may take expansions in powers of $`(\rho -\rho _{crit})`$
$`M_T=M_{T\,crit}+\frac{1}{2}\left(\frac{\delta ^2M_T}{\delta \rho ^2}\right)_{crit}(\rho -\rho _{crit})^2+\frac{1}{6}\left(\frac{\delta ^3M_T}{\delta \rho ^3}\right)_{crit}(\rho -\rho _{crit})^3+O(4)`$ (34)
$`N=N_{crit}+\frac{1}{2}\left(\frac{\delta ^2N}{\delta \rho ^2}\right)_{crit}(\rho -\rho _{crit})^2+\frac{1}{6}\left(\frac{\delta ^3N}{\delta \rho ^3}\right)_{crit}(\rho -\rho _{crit})^3+O(4)`$
Now using (32), the coefficients $`\left(\frac{\delta ^2M_T}{\delta \rho ^2}\right)_{crit}`$ and $`\left(\frac{\delta ^3M_T}{\delta \rho ^3}\right)_{crit}`$ may be expressed by $`N^{\prime \prime }=\left(\frac{\delta ^2N}{\delta \rho ^2}\right)_{crit}`$ and $`N^{\prime \prime \prime }=\left(\frac{\delta ^3N}{\delta \rho ^3}\right)_{crit}`$, and the result is
$$\left(\frac{\delta ^2M_T}{\delta \rho ^2}\right)_{crit}=\omega _{crit}N^{\prime \prime }$$
$$\left(\frac{\delta ^3M_T}{\delta \rho ^3}\right)_{crit}=\omega _{crit}N^{\prime \prime \prime }+2\omega _{crit}^{\prime }N^{\prime \prime }.$$
On the other hand, from the expansion for $`N`$ we have
$$\rho -\rho _{crit}=\pm \left(\left(\frac{2}{N^{\prime \prime }}\right)(N-N_{crit})\right)^{\frac{1}{2}}+\left(\frac{N^{\prime \prime \prime }}{3N^{\prime \prime 2}}\right)(N_{crit}-N)+\mathrm{\dots }.$$
Substituting this expression in the expansion for $`M_T`$ we obtain
$`M_T=M_{T\,crit}+\omega _{crit}(N-N_{crit})-\frac{1}{3}\omega _{crit}^{\prime }\left(\frac{2}{N^{\prime \prime }}\right)^{\frac{1}{2}}(N_{crit}-N)^{\frac{3}{2}}+O(2)`$ (35)
Let’s denote by $`M_T^{up}`$ and $`M_T^{low}`$ correspondingly the tensor mass on the upper and lower branch of the curve $`M_T(N).`$ Then making use of (35) one obtains
$`M_T^{up}-M_T^{low}=\frac{2}{3}\omega _{crit}^{\prime }\left(\frac{2}{N^{\prime \prime }}\right)^{\frac{1}{2}}(N_{crit}-N)^{\frac{3}{2}}`$ (36)
It’s easy to see that for the binding energy we have the same relation
$`E_B^{up}-E_B^{low}=\frac{2}{3}\omega _{crit}^{\prime }\left(\frac{2}{N^{\prime \prime }}\right)^{\frac{1}{2}}(N_{crit}-N)^{\frac{3}{2}}`$ (37)
These relations are the scalar-tensor boson star versions of the similar relations in the theory of fermion stars in pure general relativity . It should be noted that such a dependence $`(N_{crit}-N)^{\frac{3}{2}}`$ is typical for catastrophe theory .
Conclusion
Scalar-tensor theories of gravity violate the strong equivalence principle. This results in the appearance of three different possible masses as a measure of the total energy of the boson star. The stability analysis of the boson stars requires that the mass and particle number peak, as functions of the central density, at the same location. In this article we have proved analytically that it's the tensor mass which possesses the desirable property. Therefore, it's the tensor mass which should be taken as the physical mass, for example in the construction of the binding energy of the star. While the numerical calculations have been done for boson stars in pure Brans-Dicke theory of gravity, our proof holds for the most general scalar-tensor theory. Especially, our proof involves a potential term for the gravitational scalar $`\mathrm{\Phi }`$. This is important, because many scalar-tensor models of gravity involve such a term. On the other hand, the potential term may play a significant role in the early universe and influence boson star formation and stability, although at present there are no numerical investigations of boson stars in scalar-tensor theories with a potential term.
Finally, we believe that the result in this letter has a more general nature: it's the tensor mass which should be considered as the best candidate for the physical mass as a measure of the total energy of space-time in scalar-tensor theories of gravity.
Acknowledgments
The author is grateful to P. Fiziev for his continuous encouragement and valuable comments.
The author is also grateful to the anonymous referees for their valuable suggestions.
The work on this letter has been partially supported by the Sofia University Foundation for Scientific Research, Contracts No. 245/99 and No. 257/99.
|
no-problem/9906/cond-mat9906082.html
|
ar5iv
|
text
|
# A Generalized Fokker-Planck Equation for the Ratchet Problem: Memory Effects
## Abstract
We study stochastic ratchets with inertia in the limit where time correlations become important. We have developed a Fokker-Planck type equation for the ratchet problem which includes memory effects. It is tested by comparison with Langevin simulations. The effect of memory in the system is analyzed extensively. A positive-feedback regime, which manifests itself as instabilities, is observed. This may be regarded as an illustration of stochastic resonance.
The motion of a particle under the influence of random forces and an asymmetric periodic potential has been attracting a great deal of interest in recent years. The so-called ratchet models exhibit a variety of interesting phenomena stemming from nonequilibrium fluctuations. The most significant result of these investigations is the occurrence of a net macroscopic current which depends on the spatial asymmetry of the external potential as well as the statistical properties of the fluctuating forces. The stochastic ratchets are important in understanding biological systems such as molecular motors and they also find application for electrons in superlattices, quantum dots, Josephson junctions, and atomic systems. Recent experiments making use of nanolithography techniques provide a wealth of possibilities in the study of quantum properties.
In this work we study the effects of memory friction for a Brownian particle subject to spatial and temporal forces. When the dissipative effects of the fluctuating (random) force are allowed to depend on the system’s past behavior, the memory function replaces the constant friction term and a generalized Langevin equation results. Previous studies of thermal ratchets making use of the memory function or correlated noise relied on the simulation of the Langevin equation to obtain transport properties. It has been remarked that a corresponding Fokker-Planck equation (FPE) describing the dynamics of the system is difficult to construct owing to the non-Markovian nature of the underlying process. There have been several attempts to obtain a FPE with colored Gaussian noise. A Fokker-Planck description in this context captures the global characteristics and in particular the current density, which reveals the most interesting properties of thermal ratchets such as rectification and current reversal. Our chief aim here is to obtain a FPE in the presence of memory effects in an approximate way. Introducing some simplifying assumptions, we form an integro-differential equation satisfied by the probability density which may be construed as a generalized Fokker-Planck equation. We remark that the non-Markovian nature of the problem is preserved in the final FPE we obtain. It has been known that for a non-Markovian system there is no generic way of obtaining a FPE if the system does not exhibit steady state solutions. In our case, we know from the outset that steady state solutions exist through Langevin simulations, even in the non-Markovian limit (i.e. with memory effects). Thus, deriving a FPE for this system does not contradict the well known features of non-Markovian problems. We demonstrate by numerical calculations that the same qualitative and quantitative results can be obtained as those from the generalized Langevin equation.
Our application of the generalized FPE to the ratchet problem shows interesting properties resulting from the memory effects. In particular, we find a regime in the current characteristics exhibiting instabilities. In the following we first sketch a derivation of the generalized FPE in the form of an integro-differential equation which has general applicability to the ratchet problem. We then present and discuss our results emphasizing the memory effects.
Our aim is to construct a Fokker-Planck equation for stochastic ratchets including the memory effects. We begin with the basic equation of motion governing the dynamics of a Brownian particle, given by the generalized Langevin equation
$$\frac{dv}{dt}=-\int _{-\mathrm{\infty }}^t\mu (t-t^{\prime })v(t^{\prime })dt^{\prime }+f(t)-\frac{dV(x)}{dx}+\xi (t),$$
(1)
where $`x(t)`$ and $`v(t)=dx/dt`$, respectively, are the position and velocity of the particle, $`V(x)`$ is the asymmetric periodic potential, $`f(t)`$ is a time-dependent external driving force, and $`\xi (t)`$ is the stochastic force with zero mean. Each term in Eq. (1) is scaled with the particle mass $`m`$. We note that the above description is quite general and encompasses a variety of situations already covered in previous studies. Assuming we know the solution at some time $`t`$, it is possible to construct the solution at time $`t+dt`$. Let the solution for the position (velocity) be $`x`$ ($`v`$) and $`x^{\prime }`$ ($`v^{\prime }`$) at time $`t`$ and $`t+dt`$, respectively. Then it is straightforward to construct the following set of equations:
$$v^{\prime }=v-\left[\int _{-\infty }^t\mu (t-t^{\prime })v(t^{\prime })\,dt^{\prime }-f(t)+\frac{dV(x)}{dx}\right]\mathrm{\Delta }t+\xi (t)(\mathrm{\Delta }t)^{1/2},\quad \text{and}\quad x^{\prime }=x+v\mathrm{\Delta }t.$$
(2)
The power $`1/2`$ of $`\mathrm{\Delta }t`$ multiplying the random force is a requirement imposed by the zero-mean property of $`\xi (t)`$, so that the lowest order contribution of the noise term is in second order.
Our goal is to construct an equation for the probability density function $`P(v,x,t)`$. Since we can express $`v^{\prime }`$ and $`x^{\prime }`$ at time $`t+dt`$ when we know $`v`$ and $`x`$ at time $`t`$, it should also be possible to find a solution for $`P(v^{\prime },x^{\prime },t+dt)`$ when we know $`P(v,x,t)`$. In other words, by using the fact that the total probability is conserved, we aim at finding how $`P(v,x,t)`$ transforms into $`P(v^{\prime },x^{\prime },t+dt)`$. To this end, we first discretize the integral in Eq. (2), so that
$$\int _{-\infty }^t\mu (t-t^{\prime })v(t^{\prime })\,dt^{\prime }=\sum _{n=0}^{\infty }\mu (n\mathrm{\Delta }t^{\prime })v(t-n\mathrm{\Delta }t^{\prime })\mathrm{\Delta }t^{\prime },$$
(3)
with $`\mu (0)\mathrm{\Delta }t^{\prime }`$ approaching $`\gamma `$ as $`\mathrm{\Delta }t^{\prime }\to 0`$. Here $`\gamma `$ is the friction constant in the absence of memory correlations in the dissipation term. In order to simplify the notation we define $`v_n`$ as $`v(t-n\mathrm{\Delta }t^{\prime })`$, $`v`$ as the velocity at time $`t`$, and $`v^{\prime }`$ as the velocity at time $`t+\mathrm{\Delta }t`$. At this point, remembering that all $`v_n`$’s, $`v`$, $`v^{\prime }`$, $`x`$, and $`x^{\prime }`$ are random variables, we write
$$P(v^{\prime },x^{\prime },t+\mathrm{\Delta }t)=\int dx\,dv\,d\xi \left[\prod _n\int dv_n\right]\left[\prod _nP_n(v_n)\right]P(v,x,t)P_\xi \,\delta (x^{\prime }-x-v\mathrm{\Delta }t)$$
(4)
$$\times \,\delta \left(v^{\prime }-v+\left[\sum _{n=0}^{\infty }\mu (n\mathrm{\Delta }t^{\prime })v_n\mathrm{\Delta }t^{\prime }-F(x)-f(t)\right]\mathrm{\Delta }t-\xi (t)(\mathrm{\Delta }t)^{1/2}\right).$$
Here $`P_n`$ is the probability density function for the random variable $`v_n`$, $`P_\xi `$ is the probability density function for $`\xi `$, and $`F(x)=-dV(x)/dx`$ is the force from the ratchet potential (compare Eq. (2)). The product over $`P_n(v_n)`$ indicates an assumption of statistical independence of the velocities $`v_n`$ at different times, although such velocities are clearly expected to be correlated. Nevertheless, this approximation is expected to be valid for velocity distributions near steady state, and enables us to proceed with the algebra. Performing the integrations over $`v`$ and $`x`$, keeping only terms of order $`\mathrm{\Delta }t`$, and considering the limits $`\mathrm{\Delta }t^{\prime }\to 0`$ and $`\mathrm{\Delta }t\to 0`$, we find
$$\frac{\partial P}{\partial t}=\gamma P(v,x,t)+\frac{\partial P}{\partial v}\left[\int _{-\infty }^t\mu (t-t^{\prime })\overline{v}(t^{\prime })\,dt^{\prime }-F(x)-f(t)+\gamma (v-\overline{v})\right]-\frac{\partial P}{\partial x}v+D\frac{\partial ^2P}{\partial v^2}.$$
(5)
Here $`\overline{v}(t)`$ is the average velocity at a given time, $`\gamma `$ is equal to $`\mu (0)\mathrm{\Delta }t^{\prime }`$ as $`\mathrm{\Delta }t^{\prime }\to 0`$ as defined earlier, and $`D`$ is the diffusion constant. Equation (5) is the main result of this paper. Under the assumptions set out in the preceding paragraphs (i.e. statistical independence), it describes the dynamics of stochastic ratchets when the memory effects are included. We recover the conventional FPE if the memory function is chosen to be a delta function. In the following we demonstrate that the solutions of Eq. (5) are in good agreement with the Langevin simulation results.
We have solved the Fokker-Planck and Langevin equations numerically to test how well our proposed FPE describes the phenomenology of thermal ratchets. We have found that Eq. (5) reproduces all the established results such as current reversal and noise rectification rather well. In the results to be discussed below, we have specifically used $`f(t)=A\mathrm{sin}(\omega t)`$ with $`A=1.0`$ and $`\omega =0.2`$ for the driving force. The ratchet potential is taken to be $`V(x)=b_0\mathrm{sin}(2\pi x/L)+b_1\mathrm{sin}(4\pi x/L)`$, where $`b_0`$, $`b_1`$, and $`L`$ are constants, as used by others. We model the memory function in the form $`\mu (t)=(\gamma /\tau )\mathrm{exp}(-|t|/\tau )`$, where $`\tau `$ is the correlation time. Finally, $`\xi (t)`$ is a Gaussian random variable whose average is zero and whose time correlation is given by the fluctuation-dissipation theorem $`\langle \xi (t)\xi (t^{\prime })\rangle =2D\mu (t-t^{\prime })`$, for $`t>t^{\prime }`$, which approaches $`\langle \xi (t)\xi (t^{\prime })\rangle =2\gamma D\delta (t-t^{\prime })`$ as $`\tau `$ approaches zero.
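For concreteness, the Langevin simulation loop can be sketched in a few lines. All parameter values below are illustrative assumptions rather than the ones behind the figures; the exponential kernel is embedded exactly through an auxiliary variable, and the correlated noise is generated as an Ornstein-Uhlenbeck process obeying the fluctuation-dissipation relation quoted above.

```python
import numpy as np

# Illustrative parameters (assumed; not the values behind the figures)
gamma, tau, D = 1.0, 2.0, 0.5   # friction strength, memory time, diffusion
A, omega = 1.0, 0.2             # driving force f(t) = A sin(omega t)
b0, b1, L = 1.0, 0.25, 1.0      # ratchet potential V(x)
dt, nsteps = 1e-3, 200_000
rng = np.random.default_rng(1)

def dV(x):
    """Derivative of V(x) = b0 sin(2 pi x/L) + b1 sin(4 pi x/L)."""
    return (2*np.pi/L) * (b0*np.cos(2*np.pi*x/L) + 2*b1*np.cos(4*np.pi*x/L))

x, v = 0.0, 0.0
z = 0.0                   # z(t) = -int mu(t-t') v(t') dt', the memory force
eta = 0.0                 # colored noise with <eta(t) eta(t')> = 2 D mu(t-t')
var_eta = 2*D*gamma/tau   # stationary variance of the OU noise
for n in range(nsteps):
    t = n*dt
    v += (z + A*np.sin(omega*t) - dV(x) + eta)*dt   # Eq. (1), discretized
    x += v*dt
    z += (-z/tau - (gamma/tau)*v)*dt      # exact embedding of the exponential kernel
    eta += -eta/tau*dt + np.sqrt(2*var_eta/tau*dt)*rng.normal()

print("net drift velocity ~", x/(nsteps*dt))
```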
We first demonstrate that our proposed Fokker-Planck equation \[Eq. (5)\] and Langevin equation \[Eq. (1)\] yield the same current value for the ratchet problem. As seen from these two equations, we can construct a solution for both of them if we know the solution at some initial time. We have solved both equations using a simple finite element algorithm. In Fig. 1 we illustrate our results for the time dependence of the velocity. Figure 1a shows the instantaneous velocities in both approaches, and we observe that the solutions are similar. The effective or average velocity of the particle corresponding to long time behavior is shown in Fig. 1b. A simple running average algorithm is used to find the effective (average) velocities. In fact, when properly averaged over the periods of oscillations, the long-time behaviors are just straight lines. Nevertheless, it is seen that the effective velocity, or equivalently the current which is the physical observable, is the same for both methods. It is evident in Fig. 1 that the FPE results approach the Langevin equation values with increasing time.
We now turn to the effects of memory correlation in the ratchet problem. In this regard, we have analyzed the ratchet problem defined above for various values of the correlation time $`\tau `$ and the strength of the correlations $`\gamma `$. In Fig. 2 we depict $`x(t)`$ for various values of $`\tau `$. It is evident that as we increase $`\tau `$, because of the negative feedback of the correlation to the system, the current is decreased (the current is proportional to the slope of the curves). When we explore the large $`\tau `$ regime, such that it becomes comparable to the period of oscillations in $`x(t)`$, we find a different behavior. In Fig. 3 we again show $`x(t)`$ for different values of $`\tau `$. The topmost curve ($`\tau =0`$) is enlarged many times in order to show the differences between the case when $`\tau =0`$ and when $`\tau `$ is large. It is observed that as $`\tau `$ becomes comparable to the period of oscillations of the $`\tau =0`$ case, there appear some instabilities in the system stemming from the positive feedback of the memory. The memory term in the definition of the friction now behaves not as a friction term but rather as a driving term in resonance with the actual frequency of the oscillations. Stochastic resonances in ratchet problems or in a class of Fokker-Planck equations have been reported. In the absence of a time-dependent driving force, no instabilities are observed, consistent with the known results. We have checked that similar instabilities are also obtained by direct Langevin simulations. A further question is how the system behaves as the strength of the total friction changes. In Fig. 4 the time development of the position $`x(t)`$ is shown for various values of $`\gamma `$ when an instability is encountered ($`\tau =30`$). We observe that as the magnitude of $`\gamma `$ increases, the period of oscillations decreases. In the inset of Fig. 4 we show the period of oscillations as a function of $`\gamma `$, which indicates an exponential decrease. This again illustrates the positive feedback in our ratchet system.
In summary, we have developed a Fokker-Planck equation corresponding to a generalized Langevin equation to describe the dynamics of a Brownian particle under the influence of external potentials and memory effects. The proposed FPE reproduces the established results and reduces to the known case in the white-noise limit when the memory effect is absent. We have demonstrated the adequacy of the generalized FPE description by comparing its solutions to those of the Langevin equation simulations. The correlated fluctuating force gives rise to resonances for certain choices of the parameters. Finally, since Eq. (1) can also be interpreted as the Heisenberg equation for a quantum particle coupled to a heat bath, our corresponding FPE may be explored to study quantum ratchets.
This work was partially supported by the Scientific and Technical Research Council of Turkey (TUBITAK) under Grant No. TBAG-1662. We thank Ö. Türel for discussions.
# Structure functions in polarized proton-deuteron Drell-Yan processes
## 1 Introduction
Spin structure of the spin-1/2 nucleon has been investigated extensively. On the other hand, the spin structure of spin-1 hadrons has not been well investigated in connection with new spin physics, namely the tensor structure. We know that the tensor structure function $`b_1`$ exists for a spin-1 hadron and it will be measured in polarized electron-deuteron scattering. However, it is known from the unpolarized reactions that we cannot determine the antiquark distributions in the medium-$`x`$ region by electron scattering data alone; they are determined by Drell-Yan measurements. In the same way, studies of polarized proton-deuteron (pd) Drell-Yan processes should be valuable for finding the tensor-polarized antiquark distributions. We discuss a general formalism and a parton-model analysis of the pd Drell-Yan processes. In particular, we explain how a quadrupole spin asymmetry is related to the $`b_1`$-type distributions.
## 2 Possible structure functions
The general formalism of the proton-proton (pp) Drell-Yan process was completed many years ago, and it is the foundation of the RHIC-SPIN project. On the other hand, the polarized pd formalism was studied only recently . Because space is limited, only the major points are explained in this section. Two independent methods are employed for finding the possible structure functions.
In the first method, the Jacob-Wick helicity formalism is used by introducing the spin density matrix. The important point in the formalism is that there exist rank-two tensors in the density matrix because of the spin-1 nature of the deuteron. Imposing Hermiticity, parity conservation, and time-reversal invariance, we find that 108 structure functions exist in the pd Drell-Yan processes. In comparison with the 48 functions in the pp Drell-Yan, there are 60 new structure functions. Of course, all of them are associated with the rank-two terms, namely the tensor structure of the deuteron. The 108 functions are too many to be investigated seriously. In order to extract the essential ones, the cross section is integrated over the virtual-photon transverse momentum $`\stackrel{}{Q}_T`$. Even in this case, we find that there are 22 structure functions. Because only 11 functions exist in the pp reactions, there are 11 additional ones. Therefore, the interesting point of studying the pd Drell-Yan is to investigate these new structure functions.
In the second method, the hadron tensor is expanded in terms of possible combinations of momentum and spin vectors with the conditions of Hermiticity, parity conservation, time-reversal invariance, and current conservation. It is not useful to list all the 108 combinations; therefore, only the $`Q_T\to 0`$ limit is considered in this formalism. We have to be particularly careful about including spin-dependent tensor terms in the expansion. Assigning a structure function to each expansion coefficient, we again find the 22 structure functions. It means that the possible functions are confirmed by the two independent methods. The details of these formalisms are found in Ref. .
The new structure functions are characterized by the polarizations given by the spherical harmonics $`Y_{20}`$, $`Y_{21}`$, and $`Y_{22}`$. We express these quadrupole polarizations as $`Q_0`$, $`Q_1`$, and $`Q_2`$:
$`Q_0`$ $`\mathrm{for}\mathrm{the}\mathrm{term}3\mathrm{cos}^2\beta -1\propto Y_{20},`$
$`Q_1`$ $`\mathrm{for}\mathrm{sin}\beta \mathrm{cos}\beta \propto Y_{21},`$ (1)
$`Q_2`$ $`\mathrm{for}\mathrm{sin}^2\beta \propto Y_{22},`$
where $`\beta `$ is the polar angle of the spin polarization. They are related to the quadrupole spin asymmetries in the $`xz`$, $`yz`$, and $`xy`$ planes, respectively. A $`Q_0`$ type structure function is measured by the difference between the longitudinally and transversely-polarized cross sections. A $`Q_2`$ one is measured by the difference between the cross sections with the polarizations of $`x`$ and $`y`$ directions. The $`Q_1`$ type structure functions are interesting in the sense that they cannot be measured in the longitudinally-polarized ($`\beta =0`$) and transversely-polarized ($`\beta =\pi /2`$) reactions. The optimum way of measuring them is to choose the polarization angle in between ($`\beta =\pi /4`$). In this sense, it may be called “intermediate” polarization. It is an important finding of our studies that there exist intermediate structure functions which are not related to the longitudinal and transverse polarizations.
The polarized structure functions could be obtained by polarization asymmetry measurements. It is well known that five spin combinations should exist in the pp Drell-Yan: unpolarized cross section $`<\sigma >`$, longitudinal (transverse) double spin asymmetry $`A_{LL}`$ ($`A_{TT}`$), longitudinal-transverse spin asymmetry $`A_{LT}`$, and transverse single spin asymmetry $`A_T`$ (or denoted as $`A_N`$). We should be careful in defining the spin asymmetries in the deuteron reactions so that the tensor distributions are excluded from the denominator . In the pd Drell-Yan, there are additional quadrupole asymmetries and we have the following fifteen spin combinations
$`<\sigma >,A_{LL},A_{TT},A_{LT},A_{TL},A_{UT},A_{TU},A_{UQ_0},`$
$`A_{TQ_0},A_{UQ_1},A_{LQ_1},A_{TQ_1},A_{UQ_2},A_{LQ_2},A_{TQ_2},`$ (2)
where $`U`$ denotes the unpolarized case. For example, the asymmetry $`A_{UQ_0}`$ indicates that the proton is unpolarized and the quadrupole $`Q_0`$ spin combination is taken for the deuteron. The precise definitions of these asymmetries can be found in Ref. . The new asymmetries are those with the subscript $`Q_0`$, $`Q_1`$, or $`Q_2`$.
## 3 Parton-model analysis
The dependence of the structure functions on the proton and deuteron polarizations is revealed by the formalism of the previous section. However, it is not obvious how these structure functions are related to the parton distributions. In particular, the meaning of the new quadrupole structure functions is not obvious at all. In order to clarify it, the hadron tensor is analyzed in a naive parton model. Because the polarized pd Drell-Yan had not been discussed before Ref. , we should content ourselves at this stage with the naive analysis: $`O(1/Q)`$ contributions are neglected in the course of calculations.
The hadron tensor due to the annihilation process, $`q`$(in A)+$`\overline{q}`$(in B)$`\to \ell ^+\ell ^{-}`$, is given in the parton model as
$`W^{\mu \nu }={\displaystyle \frac{1}{3}}{\displaystyle \sum _{a,b}}\delta _{b\overline{a}}e_a^2{\displaystyle \int d^4k_a\,d^4k_b\,\delta ^4(k_a+k_b-Q)}`$
$`\times Tr[\mathrm{\Phi }_{a/A}(P_AS_A;k_a)\gamma ^\mu \overline{\mathrm{\Phi }}_{b/B}(P_BS_B;k_b)\gamma ^\nu ],`$ (3)
where $`k_a`$ and $`k_b=k_{\overline{a}}`$ are the quark and antiquark momenta, the color average is taken by the factor $`1/3=3(1/3)^2`$, $`e_a`$ is the charge of a quark with the flavor $`a`$, and $`\mathrm{\Phi }`$ is a correlation function. Of course, the opposite process $`\overline{q}`$(in A)+$`q`$(in B)$`\to \ell ^+\ell ^{-}`$ should be taken into account in order to compare with the experimental cross section. The correlation function $`\mathrm{\Phi }(PS;k)`$ is a matrix with sixteen components, so that it can be expanded in terms of the sixteen $`4\times 4`$ matrices: $`\mathrm{𝟏},\gamma _5,\gamma ^\mu ,\gamma ^\mu \gamma _5,\sigma ^{\mu \nu }\gamma _5`$ together with the possible Lorentz vectors and pseudovector: $`P^\mu `$, $`k^\mu `$, and $`S^\mu `$. Of course, the expansion terms should satisfy the conditions of Hermiticity, parity conservation, and time-reversal invariance. However, the most important point is that second-rank tensors exist for the deuteron, whereas for the proton the spin-dependent terms are allowed only up to those linear in spin. It is shown in Ref. that the additional terms give rise to the tensor distribution $`b_1`$. We anticipated having $`b_1`$; however, we also find a new one which is related to the intermediate polarization. It is the first time that we encounter such a distribution, so it is simply named a $`c_1`$ distribution.
Even in the naive analysis, there are still 19 structure functions. In order to find the most essential ones, the cross section is integrated over $`\stackrel{}{Q}_T`$. Then, only four finite structure functions exist. Noting that there are three functions in the pp Drell-Yan, we find a new structure function which is specific to the deuteron. Furthermore, we find that it is expressed by the combinations of unpolarized distributions in the proton with the tensor polarized distributions in the deuteron. This structure function can be investigated by the unpolarized-quadrupole $`Q_0`$ asymmetry:
$$A_{UQ_0}=\frac{\sum _ae_a^2\left[f_1(x_A)\overline{b}_1(x_B)+\overline{f}_1(x_A)b_1(x_B)\right]}{\sum _ae_a^2\left[f_1(x_A)\overline{f}_1(x_B)+\overline{f}_1(x_A)f_1(x_B)\right]}.$$
(4)
The advantage of using the hadron reaction is that the tensor-polarized antiquark distributions could be obtained rather easily. In electron scattering, the antiquark distributions cannot be determined precisely. For example, the violation of the Gottfried sum rule suggested $`\overline{u}\ne \overline{d}`$ ; however, the precise $`x`$ dependence of $`\overline{u}/\overline{d}`$ was determined only recently by the E866 Drell-Yan experiments. If the large $`x_F`$ region is considered in Eq. (4), it becomes
$$A_{UQ_0}\text{(large }x_F\text{)}\simeq \frac{\sum _ae_a^2f_1(x_A)\overline{b}_1(x_B)}{\sum _ae_a^2f_1(x_A)\overline{f}_1(x_B)}.$$
(5)
It indicates that the antiquark tensor distributions $`\overline{b}_1`$ can be determined if the unpolarized distributions are well known in the proton and deuteron.
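To make the structure of Eq. (4) concrete, the sketch below evaluates the asymmetry for a single flavor using toy parametrizations invented purely for this illustration (none of these functional forms is a fit to data, and the charge factors cancel in a one-flavor model):

```python
import numpy as np

# Toy distributions, assumed purely for illustration
f1    = lambda x: x**-0.5 * (1 - x)**3          # unpolarized quark
f1bar = lambda x: 0.2  * x**-0.5 * (1 - x)**7   # unpolarized antiquark
b1    = lambda x: 0.02 * x**-0.5 * (1 - x)**5   # tensor-polarized quark
b1bar = lambda x: 0.01 * x**-0.5 * (1 - x)**7   # tensor-polarized antiquark

def A_UQ0(xA, xB):
    """Single-flavor version of Eq. (4); xA (xB) is the parton momentum
    fraction in the proton (deuteron)."""
    num = f1(xA)*b1bar(xB) + f1bar(xA)*b1(xB)
    den = f1(xA)*f1bar(xB) + f1bar(xA)*f1(xB)
    return num / den

# At large x_F (x_A >> x_B) the f1*b1bar term dominates, as in Eq. (5)
print(A_UQ0(0.6, 0.1), A_UQ0(0.2, 0.2))
```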
Another advantage of studying the polarized pd Drell-Yan is that it becomes possible to extract the flavor asymmetry $`\mathrm{\Delta }_T\overline{u}/\mathrm{\Delta }_T\overline{d}`$ in the transversity distributions by comparing the pp and pd cross sections .
It is rather difficult to attain the longitudinal polarization for the deuteron in the collider experiment (e.g. a possible next-generation RHIC project) because of its small magnetic moment. However, we could combine the transversely polarized cross sections with the unpolarized one for investigating the tensor structure. If a fixed target can be used for the deuteron, there is no such difficulty. In fact, there are possibilities to study the polarized pd Drell-Yan at FNAL and also in the HERA-N project. However, significant theoretical and experimental efforts are necessary for proposing such an experiment. At this stage, our numerical analysis is in progress in order to find the experimental possibilities.
## 4 Conclusion
We discussed first what kinds of structure functions could be studied in the polarized proton-deuteron Drell-Yan processes. Because of the new spin structure of the deuteron, there exist 108 structure functions. Among them, 22 remain finite if the cross section is integrated over $`\stackrel{}{Q}_T`$. The new structure functions are associated with the tensor structure of the deuteron. The parton-model analysis indicated that there are only four structure functions in the $`\stackrel{}{Q}_T`$-integrated case. They are the unpolarized, longitudinally polarized, transversity, and tensor-polarized structure functions. The last one does not exist in proton-proton reactions. It could be measured in the quadrupole $`Q_0`$ polarization asymmetry with the unpolarized proton. We expect that the tensor structure will become one of the exciting topics in high-energy spin physics in the near future.
# Secondary Stars in CVs: The Theoretical Perspective
## 1 Introduction
The new generation of low–mass star and brown dwarf models by Baraffe et al. (1995, 1997, 1998, henceforth summarized as BCAH) represents a significant improvement in the quantitative description of stars with mass $`<1`$ $`\mathrm{M}_{\odot }`$.
The main strengths of the models are in two areas: the microphysics determining the equation of state (EOS) in the stellar interior, and the realistic non–grey atmosphere models which enter as the outer boundary condition. The EOS (Saumon, Chabrier and Van Horn 1995) is specifically calculated for very low–mass stars, brown dwarfs and giant planets. It has been successfully tested against recent laser–driven shock wave experiments performed at Livermore, which probe the complex regime of pressure dissociation and ionization relevant for these objects (cf. Saumon et al. 1998). Recent much improved cool atmosphere models (see e.g. the review of Allard et al. 1997) now provide realistic atmosphere profiles, which we use as the outer boundary condition, and synthetic spectra. Chabrier & Baraffe (1997) have shown that evolutionary models employing a grey atmosphere instead, e.g. the standard Eddington approximation, overestimate the effective temperature for a given mass, and yield too large a minimum hydrogen burning mass.
Several observational tests confirm the success of evolutionary models based on these improvements, e.g. mass–magnitude relations, colour–magnitude diagrams (Baraffe et al. 1997, 1998), mass–spectral type relations (Baraffe & Chabrier 1996), the first cool BD GL 229B (Allard et al. 1996), properties of components in detached eclipsing binaries (Chabrier & Baraffe 1995) and of field M–dwarfs (cf. Beuermann et al. 1998).
Here we consider the BCAH models in the context of CV secondaries, and focus on the relation between spectral type ($`SpT`$), donor mass ($`M_2`$) and orbital period ($`P`$). We obtain the spectral type of a stellar model from its calculated colour $`(I-K)`$ and the empirical $`SpT(I-K)`$ relation established by Beuermann et al. (1998). A summary of the input physics used for the calculations presented below is given by Kolb & Baraffe (1999).
## 2 The mass–spectral type relation
Mass loss causes stars to deviate from thermal equilibrium. The surface luminosity is no longer in balance with the luminosity generated in the core by nuclear reactions, and the difference causes the star to expand or contract. Therefore the stellar radius can be either larger or smaller than the corresponding equilibrium radius (see e.g. Whyte & Eggleton 1980). Remarkably, the effective temperature of low–mass main–sequence stars is rather insensitive to the degree of thermal disequilibrium. They behave just like giant stars on the Hayashi line, expanding along an evolutionary track with nearly constant effective temperature. This can be understood in terms of simple homology relations for predominantly convective low–mass stars (e.g. Stehle et al. 1996).
Using BCAH models we quantify the deviation from equilibrium spectral type, as a measure of the surface temperature, for CV donors with various evolutionary histories. We consider the following simple cases:
Standard sequence: mass transfer starts from an unevolved (ZAMS) donor with mass 1 $`\mathrm{M}_{\odot }`$, proceeds at a constant rate $`1.5\times 10^{-9}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$, stops when the donor becomes fully convective (at mass 0.21 $`\mathrm{M}_{\odot }`$), and resumes at the lower rate $`5\times 10^{-11}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ once the donor has settled back into thermal equilibrium. This sequence fits the period gap in the framework of disrupted orbital braking (e.g. King 1988, Kolb 1996).
Unevolved sequences with constant $`\dot{M}`$: mass transfer starts from an unevolved (ZAMS) donor with mass 1 $`\mathrm{M}_{\odot }`$ and proceeds at a constant rate $`\dot{M}`$ ($`10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ or $`10^{-7}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$).
Evolved sequences with constant $`\dot{M}`$: mass transfer initiates from a donor which has already burned a significant fraction of its hydrogen supply, but is still in the core hydrogen burning phase. We show three examples: a moderately evolved low $`\dot{M}`$ sequence (initial central hydrogen content $`X_c=0.16`$, initial donor mass $`M_2=1.0`$ $`\mathrm{M}_{\odot }`$, $`\dot{M}=1.5\times 10^{-9}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$), an evolved, low $`\dot{M}`$ sequence ($`X_c=4\times 10^{-4}`$, $`M_2=1.2`$ $`\mathrm{M}_{\odot }`$, $`\dot{M}=1.5\times 10^{-9}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$), and an evolved high $`\dot{M}`$ sequence ($`X_c=4\times 10^{-4}`$, $`M_2=1.0`$ $`\mathrm{M}_{\odot }`$, $`\dot{M}=5\times 10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$).
Note that as the mass loss timescale is short compared with the nuclear time, the nuclear state of the donor is essentially frozen once mass transfer has started.
Fig. 2 shows that the effect of thermal disequilibrium is negligible along the standard sequence; this hardly differs from that for CVs with a ZAMS donor. If $`\dot{M}`$ is higher than in the standard sequence, the spectral type is slightly later for a given mass, while in a sequence with evolved donors it is somewhat earlier. As most CVs should form with essentially unevolved donors (Politano 1996, de Kool 1992), and as the estimated $`\dot{M}`$ exceeds $`10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ only for very few CVs (e.g. Warner 1995, his Fig. 9.8), the upper solid curve ($`X_c=0.16`$) and the dashed curve ($`\dot{M}=10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$) in Fig. 2 should bracket the location of the majority of CVs in the mass–spectral type diagram. The figure suggests that the spectral type is a good indicator of the donor mass, with a typical uncertainty $`\mathrm{\Delta }M_2\approx 0.1\mathrm{M}_{\odot }`$, for any given $`SpT`$ earlier than M0-M2. For later $`SpT`$ this mass “determination” becomes impractical as the curves in the figure flatten and $`\mathrm{\Delta }M_2`$ is large. Even with extreme evolutionary cases, the unevolved sequence with very high transfer rate ($`\dot{M}=10^{-7}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$) more typical for supersoft X–ray binaries than for CVs, and the evolved sequence starting on the terminal main–sequence, the range covered by the secondary stars in the $`SpT`$–mass diagram remains surprisingly narrow (Fig. 2; see Kolb, King & Baraffe 1999 for more details).
Although the observations compiled by Smith & Dhillon (1998) do not seem to support this prediction, the observational errors on $`M_2`$ are far too large for any definitive conclusions to be drawn from this mismatch. On the contrary, with improved and more reliable mass determinations, one may hope to constrain the range of $`\dot{M}`$ and the relative importance of evolved and unevolved systems in the CV population.
The figures also apply to secondaries in LMXBs. Evolutionary considerations suggest that, unlike CVs, most short–period LMXBs form with an evolved main–sequence donor with mass $`>1`$ $`\mathrm{M}_{\odot }`$ (King & Kolb 1997, Kalogera et al. 1998).
## 3 The orbital period–spectral type diagram
A second invaluable diagnostic diagram for CVs is the orbital period–spectral type diagram (PS diagram) — the “HR diagram” analogue for CV secondary stars. In a semi–detached binary the Roche lobe filling star’s mean density $`\rho `$ determines the orbital period $`P`$ almost uniquely (e.g. King 1988), $`P_h=k/\rho ^{1/2}`$, with $`k\simeq 8.85`$ being only a weak function of the mass ratio, $`P_h`$ the period in hr, and $`\rho `$ the mean density in solar units. Hence the PS diagram probes the donor structure in terms of two stellar parameters that are relatively easy to determine from observation. (The importance of the PS diagram has already been pointed out by Ritter 1994).
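As a quick numerical illustration of this relation (the crude lower-main-sequence scaling $`R\approx M`$ in solar units is assumed here only for the example):

```python
import numpy as np

def orbital_period_hours(m2, r2, k=8.85):
    """Orbital period of a semi-detached binary from the donor's mean
    density in solar units, P_h = k / rho**0.5; k depends only weakly
    on the mass ratio."""
    rho = m2 / r2**3              # mean density in solar units
    return k / np.sqrt(rho)

# Crude ZAMS-like donors with R ~ M (solar units), assumed for illustration
for m2 in (0.21, 0.4, 0.6, 0.8):
    print(f"M2 = {m2:.2f} Msun -> P = {orbital_period_hours(m2, m2):.1f} h")
```

Note that the 0.6 $`\mathrm{M}_{\odot }`$ case lands at $`P\approx 5.3`$ hr, consistent with the critical period of 5–6 hr discussed below.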
The ZAMS track defines an upper bound for the scatter of observed CV secondaries in Fig. 4. Systems immediately above and below the period gap have the same spectral type. The results above suggest that these CVs have the same donor mass, consistent with the standard period gap model according to which CVs evolve through the gap with constant secondary mass.
The locations of various evolutionary tracks above the period gap are shown in Fig. 5. Most tracks intersect the ZAMS track at a critical period $`P_{\mathrm{crit}}\simeq 5`$–$`6`$ hr, which corresponds to a donor with mass $`M_2\simeq 0.6`$ $`\mathrm{M}_{\odot }`$ marking the transition from mainly radiative to predominantly convective stars. A simple corollary from the results of Sec. 2 is that the donor mass on any evolutionary track is about the same as the one on the ZAMS track with the same spectral type. Deviations to shorter or longer periods from the ZAMS track mean that the donor is smaller or larger than the corresponding ZAMS star, respectively. From the morphology of tracks in Fig. 5 it is clear that unevolved high $`\dot{M}`$ sequences are excluded at long periods ($`P>P_{\mathrm{crit}}`$), but would explain the observed scatter for $`P<P_{\mathrm{crit}}`$. Likewise, the observations exclude evolved low $`\dot{M}`$ sequences at short periods, while these are clearly required for $`P>P_{\mathrm{crit}}`$.
This suggests that, overall, a secular mean mass transfer rate increasing with decreasing orbital period would provide a good fit to the observed PS diagram above the period gap. CVs would evolve along a low $`\dot{M}`$ track at long periods and along a high $`\dot{M}`$ track at short periods. To explain the scatter of systems at long periods the full spectrum of nuclear evolution prior to mass loss is required. Note that the sequence starting mass transfer from a star at the terminal main–sequence defines a lower bound for the data points in Fig. 4. It is difficult to see why CVs with somewhat evolved donors should dominate the population. One possibility is that unevolved systems are more susceptible to irradiation–driven mass transfer cycles (King et al. 1996), with a high-state $`\dot{M}`$ too high for them to be recognisable as CVs. The scatter at short periods ($`3`$–$`5`$ hr) in turn seems to imply a wide range of secular mean $`\dot{M}`$ values ($`10^{-9}`$–$`10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$). None of the orbital angular momentum loss mechanisms postulated so far generates this wide a range of $`\dot{M}`$ at a given $`P`$, while observational estimates for $`\dot{M}`$ always suggest that there is an even larger range (e.g. Patterson 1984, Warner 1995).
If the $`\dot{M}`$ range reflects the range of the secular mean, then the appearance of a gap with relatively sharp edges is more difficult to understand than with a uniform evolution driven by angular momentum losses insensitive to system parameters (Kolb 1996; Stehle et al. 1996). Yet the observed period distribution and its explanation by disrupted orbital braking cannot unambiguously rule out a large range of the secular mean transfer rate above the gap (Baraffe & Kolb 1999). The $`\dot{M}`$ range certainly has to level off rather steeply below $`10^{-9}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ to avoid filling the gap with systems that hardly detach. On the other hand, systems with $`\dot{M}>2\times 10^{-9}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ undergo period bounce before they reach the upper edge of the gap, and before the donor becomes fully convective (at a mass $`M_{\mathrm{conv}}`$). This bounce period increases with $`\dot{M}`$, while $`M_{\mathrm{conv}}`$ decreases. Hence the detached phase of high $`\dot{M}`$ systems extends from periods longer than 3 hr to periods shorter than 2 hr, and does not interfere with the “classical” gap. The time spent in the detached phase increases as well; in the case $`\dot{M}=10^{-8}`$ $`\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$ it is longer than 8 Gyr (assuming angular momentum losses by gravitational radiation). Only a small fraction of such systems would reach the orbital period minimum at 80 min within the age of the Galaxy, thus alleviating (but not solving) the problem of the missing period spike (see Kolb & Baraffe 1999) at this minimum period.
Our conclusions from the mass–spectral type diagram and orbital period–spectral type diagram can be tested by a set of more accurate mass and radius determinations of CV secondaries.
# Density matrix purification due to continuous quantum measurement
## Abstract
We consider the continuous quantum measurement of a two-level system, for example, a single-Cooper-pair box measured by a single-electron transistor or a double-quantum dot measured by a quantum point contact. While the approach most commonly used describes the gradual decoherence of the system due to the measurement, we show that when taking into account the detector output, we get the opposite effect: gradual purification of the density matrix. The competition between purification due to measurement and decoherence due to interaction with the environment can be described by a simple Langevin equation which couples the random evolution of the system density matrix and the stochastic detector output. The gradual density matrix purification due to continuous measurement may be verified experimentally using present-day technology. The effect can be useful for quantum computing.
preprint: Submitted to LT’22
The active research on quantum computing as well as the progress in experimental techniques have motivated renewed interest in the problems of quantum measurement, including the long-standing “philosophical” questions. In contrast to the usual case of averaging over a large ensemble of similar quantum systems, it is becoming possible to study experimentally the evolution of an individual quantum system. In this paper we consider the continuous measurement of a two-level system by a “weakly responding” detector which can be treated as a classical device.
While after averaging over the ensemble the continuous measurement leads to the gradual decoherence of the system density matrix, the situation is completely different in the case of an individual quantum system. In particular, the system evolution becomes dependent (“conditioned”) on the particular detector output. The theory of conditioned evolution of a pure wavefunction was developed relatively long ago, mainly for the purposes of quantum optics (see, e.g. Ref. and references therein). However, for solid state structures the problem of continuous quantum measurement with an account of the measurement result has only been addressed recently , with the main emphasis on the mixed quantum states and the detector nonideality.
The evolution of the density matrix $`\sigma `$ of a double-dot with the tunneling matrix element $`H`$ and energy asymmetry $`\epsilon `$ can be described by nonlinear equations
$`\dot{\sigma }_{11}=-\dot{\sigma }_{22}=-2(H/\hbar )\text{Im}\sigma _{12}`$ (1)
$`-\sigma _{11}\sigma _{22}(2\mathrm{\Delta }I/S_I)[I(t)-I_0],`$ (2)
$`\dot{\sigma }_{12}=i(\epsilon /\hbar )\sigma _{12}+i(H/\hbar )(\sigma _{11}-\sigma _{22})`$ (3)
$`+(\sigma _{11}-\sigma _{22})(\mathrm{\Delta }I/S_I)[I(t)-I_0]\sigma _{12}-\gamma \sigma _{12},`$ (4)
where $`I(t)`$ is the particular detector output (we assume electric current), $`I_0=(I_1+I_2)/2`$, $`I_1`$ and $`I_2`$ are the average currents corresponding to the two localized states of the double-dot, $`\mathrm{\Delta }I=I_2-I_1`$, $`S_I`$ is the low frequency spectral density of the detector shot noise, and the detector nonideality is described by the extra dephasing due to interaction with an “untrackable” environment, $`\gamma =\mathrm{\Gamma }-(\mathrm{\Delta }I)^2/4S_I`$, where $`\mathrm{\Gamma }`$ is the dephasing rate in the conventional approach (after ensemble averaging). In particular, the quantum point contact (QPC) can be an ideal detector, $`\gamma =0`$ (see, e.g. Ref. ), while the single-electron transistor (SET) in a typical operation point is a significantly nonideal detector, $`\gamma \simeq \mathrm{\Gamma }`$ .
Equations (2)–(4) allow us to calculate the evolution of the system density matrix if the detector output $`I(t)`$ is known from the experiment. They can also be used for simulation; then the term $`[I(t)-I_0]`$ should be replaced with $`[\mathrm{\Delta }I(\sigma _{22}-\sigma _{11})/2+\xi (t)]`$, where the random process $`\xi (t)`$ has zero average and $`S_\xi =S_I`$. (We use the Stratonovich formalism for stochastic equations.)
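A minimal simulation sketch along these lines is given below. All parameter values are illustrative assumptions; the Heun predictor-corrector step is used because it converges to the Stratonovich solution, and holding the noisy output fixed within each step is a further simplification adequate for illustration.

```python
import numpy as np

hbar = 1.0
H, eps = 1.0, 0.0           # tunneling matrix element and asymmetry (assumed)
dI, S = 1.0, 4.0            # detector response and shot-noise spectral density
gamma = 0.1 * dI**2/(4*S)   # slightly nonideal detector: gamma = 0.1*Gamma
dt, nsteps = 1e-3, 200_000
rng = np.random.default_rng(0)

def drift(s11, s12, u):
    """Right-hand side of the conditioned equations above; u = I(t) - I_0."""
    d11 = -2*(H/hbar)*s12.imag - s11*(1 - s11)*(2*dI/S)*u
    d12 = (1j*eps/hbar)*s12 + (1j*H/hbar)*(2*s11 - 1) \
          + (2*s11 - 1)*(dI/S)*u*s12 - gamma*s12
    return d11, d12

s11, s12 = 0.5, 0.0 + 0.0j                   # maximally mixed initial state
for _ in range(nsteps):
    xi = rng.normal(0.0, np.sqrt(S/(2*dt)))  # white noise with S_xi = S_I
    u = dI*(1 - 2*s11)/2 + xi                # dI*(s22 - s11)/2 + xi
    a11, a12 = drift(s11, s12, u)            # Heun predictor...
    b11, b12 = drift(s11 + a11*dt, s12 + a12*dt, u)   # ...and corrector
    s11 += 0.5*(a11 + b11)*dt
    s12 += 0.5*(a12 + b12)*dt
    s11 = min(max(s11, 0.0), 1.0)            # guard against overshoot

purity = s11**2 + (1 - s11)**2 + 2*abs(s12)**2   # Tr(sigma^2)
print(f"s11 = {s11:.3f}, |s12| = {abs(s12):.3f}, purity = {purity:.3f}")
```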
Figure 1 shows the result of such a simulation for a slightly nonideal detector, $`\gamma =0.1\mathrm{\Gamma }`$, in the case when the evolution starts from the maximally mixed state, $`\sigma _{11}=\sigma _{22}=0.5`$, $`\sigma _{12}=0`$. One can see that $`\sigma _{12}`$ gradually appears during the measurement, eventually leading to well-pronounced quantum oscillations. In the case $`\gamma =0`$ the density matrix becomes almost pure after a sufficiently long time. This gradual purification can be interpreted as being due to the gradual acquisition of information about the system. The detector nonideality, $`\gamma \ne 0`$, causes decoherence and competes with the purification due to measurement.
In contrast to QPC, the SET as a detector directly affects the two-level system asymmetry $`\epsilon `$ because of the fluctuating potential $`\varphi (t)`$ of SET’s central island. Since there is typically a correlation between fluctuations of $`I(t)`$ and $`\varphi (t)`$ , we should add into Eq. (4) the term $`i\sigma _{12}K[I(t)-(\sigma _{11}I_1+\sigma _{22}I_2)]=i\sigma _{12}K\xi (t)`$ where $`K=(d\epsilon /d\varphi )S_{\varphi I}/(S_I\hbar )`$. This allows the partial recovery of coherence, so that $`\gamma =\mathrm{\Gamma }-(\mathrm{\Delta }I)^2/4S_I-K^2S_I/4`$. The average asymmetry $`\epsilon `$ should also be renormalized to account for the backaction of the $`\overline{\varphi }`$ shift.
To observe the density matrix purification experimentally, it is necessary to record the detector output with a sufficiently wide bandwidth, $`\mathrm{\Delta }f\gg \mathrm{\Gamma }`$ (possibly, $`\mathrm{\Delta }f\sim 10^9`$ Hz), and plug it into Eqs. (1)–(2). Calculations will show the development of quantum oscillations with precisely known phase. Stopping the evolution by rapidly raising the barrier ($`H\to 0`$) when $`\sigma _{11}\simeq 1`$ and checking that the system is really localized in the first state, it is possible to verify the presented results.
The potential application in quantum computing is the fast initialization of the qubit state (not requiring relaxation to the ground state) after the intermediate measurements.
# Beta-relaxation of non-polymeric liquids close to the glass transition
## Abstract
Dielectric beta-relaxation in a pyridine-toluene solution is studied close to the glass transition. In the equilibrium liquid state the beta loss peak frequency is not Arrhenius (as in the glass) but virtually temperature-independent, while the maximum loss is strongly temperature-dependent. Both loss peak frequency and maximum loss exhibit thermal hysteresis. A new annealing-state independent parameter involving loss and loss peak frequency is identified. This parameter has a simple Arrhenius temperature-dependence and is unaffected by the glass transition.
Viscous liquids are characterized by relaxation times that increase strongly upon cooling towards the glass transition . The relaxation time of molecular rotation is monitored by dielectric relaxation experiments probing the linear response to a periodic external electric field . The dominant relaxation process is referred to as the alpha-process. For most viscous liquids, upon cooling the alpha-peak bifurcates just before the glass transition and an additional minor loss peak appears at higher frequencies . This is traditionally referred to as beta-relaxation (now sometimes termed Johari-Goldstein beta-relaxation to distinguish it from the mode-coupling theory’s “cage-rattling” at much higher frequencies). Beta-relaxation has also been observed in mechanical and thermal relaxation experiments. Here, we limit ourselves to dielectric beta-relaxation. Our purpose is to show that the conventional view that beta-relaxation is unaffected by the glass transition is not confirmed by experiments on non-polymeric liquids. We have not studied beta-relaxation in polymers, but believe based on the literature that beta-relaxation in polymers is probably not affected by the glass transition.
Beta-relaxation was first seen in polymers, where it was attributed to side-chain motion . In 1970 Johari and Goldstein found beta-relaxation in a number of viscous liquids of rigid molecules and conjectured that beta-relaxation should be considered as “a characteristic property of the liquid in or near the glassy state” . However, for some glass-forming liquids (e.g., glycerol) no beta-relaxation has been observed. Today, viscous liquids are sometimes classified according to whether or not they exhibit beta-relaxation , although there are recent intriguing speculations that beta-relaxation is indeed universal, with the beta-peak sometimes hiding under the alpha-peak .
There is no general agreement about the cause of beta-relaxation . It is unknown whether every molecule contributes to the relaxation or only those within “islands of mobility” . Similarly, it is not known whether small angle jumps or large angle jumps are responsible for beta-relaxation. Of course, a possible explanation of these disagreements is that beta-relaxation is non-universal .
As traditionally reported in the literature (see, e.g., ), beta-relaxation is characterized by a broad loss peak with Arrhenius temperature-dependent loss peak frequency and only weakly temperature-dependent maximum loss. In this picture, which is mainly based on measurements in the glassy phase, the glass transition has no effect on the temperature-dependence of the beta loss peak frequency. In our opinion, it is unlikely that the temperature-dependence of the loss peak frequency is unaffected by the glass transition, considering the well-known fact that the strength of the beta process in the glassy state decreases during annealing (in some cases to below the resolution limit ). Actually, no detailed investigations of beta-relaxation in the equilibrium liquid phase of non-polymeric liquids have been carried out. This may be because studying beta-relaxation above the glass transition temperature $`T_g`$ is difficult since there is only a tiny temperature-window (if any) where alpha- and beta-relaxations are well-separated.
Motivated by the above arguments, one of us recently investigated beta-relaxation in sorbitol and found that the temperature-dependence of both loss peak frequency and loss magnitude in the equilibrium liquid state is indeed different from what is found in the glassy state . This result was obtained on a system which - like most others - has a beta-relaxation that in the equilibrium liquid phase is not very well separated from the alpha-relaxation. The sorbitol results were mainly based on annealing experiments below $`T_g`$ and were to some extent inconclusive. Below, we present data for beta-relaxation in a liquid with a strong beta-peak which is well separated from the alpha-peak in a range of temperatures above $`T_g`$. The liquid is a 71%/29% mixture of pyridine and toluene, a system first studied by Johari . Toluene molecules have only a small dipole moment, so dielectric spectra mainly reflect motion of the pyridine molecules acting as probes of the overall dynamics of the solution .
The dielectric measuring cell used is a 22-layer gold-plated capacitor with empty capacitance 68 pF (layer distance 0.1 mm). The dielectric constant was measured over 9 decades of frequency using standard equipment: from 100 Hz to 1 MHz a HP4284A precision LCR meter was used, and from 1 mHz to 100 Hz a HP3458A multimeter in conjunction with a Keithley 5 MHz, 12-bit, arbitrary waveform generator was used. The dielectric loss was determined with a precision better than $`10^{-4}`$ in the whole frequency range. The measurements were carried out in a cryostat designed for long time annealing experiments, keeping temperature variations below 5 mK.
Figure 1a shows the dielectric loss at 125K, 126K, and 127K. The alpha- and beta-peaks are quite well separated. Despite this a procedure is needed to eliminate the alpha-tail influence on the beta-peak. From Fig. 1a we find that the alpha-peak follows a high-frequency power-law decay with exponent -0.47. In order to arrive at the “true” beta-peak this alpha-tail was subtracted by applying the following procedure: At each temperature the magnitude of the subtraction was uniquely determined by requiring that the beta-peak follows a low-frequency power-law. We used a power-law fit because a Gaussian, as sometimes used to fit beta-peaks , cannot fit our data. This way to eliminate the alpha-contribution involves the following assumptions: 1) The dielectric spectrum is a simple sum of alpha- and beta-relaxation and not, e.g., a Williams-convolution ; 2) In the relatively narrow temperature-interval under study the alpha-tail’s power-law decay has an exponent which is temperature-independent. Figure 1b shows eight normalized beta-peaks (119K-126K) after subtraction of alpha-tails. The figure shows that the subtraction procedure is consistent: The “corrected” beta-peaks do follow a low-frequency power-law to a good approximation.
Figure 2a shows the beta loss peak frequency $`f_{max}`$ and the maximum loss $`ϵ_{max}`$ as a function of inverse temperature for a cooling taking the equilibrium liquid into the glassy state at a rate of 1 K/h. The system was cooled in steps of 0.5 K. Dielectric loss was measured after annealing 30 minutes at constant temperature, immediately before cooling another 0.5 K. At high temperatures - in the equilibrium liquid state - the loss peak frequency is almost temperature-independent while the loss decreases sharply during the cooling . At low temperatures, the well-known Arrhenius temperature-dependence of loss peak frequency is observed and the maximum loss is much less temperature-dependent than in the liquid.
Figure 2b shows the beta loss peak frequency $`f_{max}`$ and the maximum loss $`ϵ_{max}`$ as a function of inverse temperature during a cooling through the glass transition followed by a subsequent faster reheating. Starting in the equilibrium liquid state the sample was cooled in steps of 0.5 K with measurements carried out after annealing 30 minutes at each temperature. The cooling continued until 119 K was reached. The sample was then heated in steps of 1.0 K every 30 minutes. The figure shows hysteresis like that found for all other quantities changing their temperature-dependence at the glass transition. The figure also shows, during both cooling and subsequent reheating, the temperature-dependence of the following quantity
$$X=f_{max}\left(ϵ_{max}\right)^\gamma .$$
(1)
Here, $`\gamma =1.19`$ is an empirical exponent. There is just one curve marking the temperature-dependence of $`X`$, showing that $`X`$ exhibits no thermal hysteresis. In particular, $`X`$ is independent of the annealing state. Surprisingly, $`X`$ has an Arrhenius temperature-dependence.
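The test behind Fig. 2b amounts to forming $`X`$ from the measured loss-peak parameters and checking $`\mathrm{ln}X`$ for linearity in $`1/T`$; a sketch of this analysis (the function and variable names are ours):

```python
import numpy as np

def arrhenius_check(T, f_max, eps_max, gamma=1.19):
    """Form X = f_max * eps_max**gamma (Eq. (1)) and fit ln X against 1/T.
    Small residuals on both the cooling and heating branches mean that X
    is Arrhenius and free of hysteresis. Returns the activation
    temperature (in K), the prefactor, and the fit residuals."""
    T = np.asarray(T, dtype=float)
    X = np.asarray(f_max) * np.asarray(eps_max)**gamma
    slope, intercept = np.polyfit(1.0/T, np.log(X), 1)
    residuals = np.log(X) - (slope/T + intercept)
    return -slope, np.exp(intercept), residuals
```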
We have found similar behavior for sorbitol, DBP, DEP, DMP, and PPG, although these liquids have beta-peaks that are less well-separated from the alpha-peak in the liquid state. The exponent $`\gamma `$ is non-universal (varying from 1 to 2). We have no model for these findings. Speculating, we note that the case $`\gamma =1`$ may be modelled by an asymmetric two-level system: If the large barrier is temperature-independent $`X`$ is Arrhenius . However, in order to explain the findings of Fig. 2, a rather peculiar temperature-dependence of the higher of the two energy minima must be assumed .
In conclusion, we have shown that the loss peak frequency of beta relaxation in a pyridine-toluene solution is almost temperature-independent above $`T_g`$ where, on the other hand, the maximum loss is strongly temperature-dependent. Thus, beta relaxation in the equilibrium liquid state has characteristics that are opposite of those found in the glassy state, where loss peak frequency is Arrhenius and maximum loss is only weakly temperature-dependent. It has furthermore been shown that the quantity $`X`$ of Eq. (1) is Arrhenius; in particular $`X`$ exhibits no thermal hysteresis around $`T_g`$. These results contradict the traditional view of beta-relaxation as being unaffected by the glass transition and show the need for further experimental as well as theoretical work in this field. The recent surprising findings by Wagner and Richert that liquids like o-terphenyl and salol, previously believed to have no beta-relaxation, do exhibit beta-relaxation deep in the glassy state after very fast quenchings emphasize the need for further work in this field.
###### Acknowledgements.
This work was supported in part by the Danish Natural Science Research Council.
## 1 Introduction
The quest for a detailed understanding of galaxy formation has become one of the central goals of modern astrophysics. On the observational side data from the Keck and Hubble Space telescopes have revolutionised our view of the high redshift Universe and there are claims that the epoch of galaxy formation has already been observed . From the theoretical point of view the problem is fundamental because its resolution involves the synthesis of work from a wide range of specialities. A full treatment requires consideration of the early Universe processes that create primordial density fluctuations, the microphysics and chemistry that precipitate star formation within giant molecular clouds, the energy exchange present in supernova feedback , the dissipational processes of cooling gas, the dynamics of galaxies moving within a dense environment and the large scale tidal torques which determine the angular momentum profiles of the resultant objects.
Analytic approaches to the problem founder partly because of the lack of symmetry. As high redshift observations show, real galaxies do not form in a smooth, spherically symmetric fashion but rather as complicated collections of bright knots which merge and evolve into normal galaxies . In one of the first attempts it was demonstrated that a wide variety of observed galaxy properties could be fitted by even the most basic of simulations. Unfortunately their computational volume (like that of ) was too small to produce a reliable correlation function and they had to stop the simulation at $`z=1`$, but this work clearly demonstrated the potential for simulations of this kind. Both these groups used smoothed particle hydrodynamics (SPH) to model the gas. In a complementary investigation a grid based code was used, providing a useful cross-check of the results. More recently several groups have attempted to tackle the problem by coupling a semi-analytic approach to a collisionless simulation and have again produced interesting results .
In this paper we simulate the process of galaxy formation within a representative volume of the Universe in two contrasting cosmologies. The volume is large enough – $`100\mathrm{Mpc}`$ on a side – that several thousand galaxies form and we can reliably measure the galaxy correlation function and study the effects of bias. Previous work by the Virgo Consortium predicted the magnitude of the bias that would be required to reconcile state-of-the-art collisionless simulations to the observed galaxy correlation function. Here we examine whether the inclusion of a gaseous component and basic physics can indeed produce this bias.
## 2 The simulation
The simulations we have carried out are state-of-the-art SPH calculations of 2 million gas plus 2 million dark matter particles in a box of side $`100\mathrm{Mpc}`$ using a parallel adaptive particle-particle particle-mesh plus SPH code. We have completed both a standard Cold Dark Matter (SCDM) run and a run with a positive cosmological constant ($`\mathrm{\Lambda }`$CDM). Both have the same parameters as detailed by . The baryon fraction was set from nucleosynthesis constraints, $`\mathrm{\Omega }_bh^2=0.015`$, and we assume an unevolving gas metallicity of 0.5 times the solar value. This leads to a gas mass per particle of $`2\times 10^9\mathrm{M}_{\odot }`$ for both runs. As we typically smooth over 32 SPH particles this gives us an effective minimum gas mass resolution of $`6.3\times 10^{10}\mathrm{M}_{\odot }`$. We employ a comoving Plummer gravitational softening of $`10h^{-1}\mathrm{kpc}`$, and the minimum SPH resolution was set to match this.
## 3 Underlying assumptions
This simulation is based upon two fundamental assumptions. The first is that the feedback of energy due to supernova explosions can be approximated by assuming that this process effectively imposes a mass resolution threshold. Objects below this mass cannot form whereas objects above this mass are unaffected. Comparing the simulated star formation rate to the observed one shows that such an approximation does not lead to a ridiculous star formation history. The second assumption is that once gas has cooled into tight, dense blobs it is effectively decoupled from the surrounding hot gas. This is equivalent to assuming that this gas has been converted into stars or is isolated by additional physics such as magnetic fields. The main further approximation we are forced to employ for computational reasons is a spatial resolution of $`10h^{-1}\mathrm{kpc}`$. This is over twice the value we would have liked to have used and leads to enhanced tidal disruption, drag and merging within the larger clusters of objects.
The microphysics of star formation and supernova explosions typically has parsec lengthscales and so happens far below our resolution limit. The combination of poor man’s feedback and decoupling the cold, dense gas effectively negates these effects (over which we have no control anyway) for all objects above our resolution threshold. All objects must cross this threshold at some time (as a large object cannot simply pop into existence), presumably via the merger of smaller, sub-resolution (and so unresolved) fragments. Our assumptions are equivalent to presuming that the absence of previous levels of the hierarchy has no effect on the subsequent evolution or properties of our objects and that these objects are large enough to resist the destructive effects of supernova explosions.
It should be stressed that because of our relatively poor physical resolution we can only predict accurate masses and positions for our objects. We do not have sufficient resolution to resolve internal structure and so cannot directly ascertain a morphological type for our galaxies, as each object typically only contains between 100 and 1000 particles.
## 4 Analysis
The presence of gas makes the identification of a set of objects which we would like to equate to galaxies very straightforward. Cooling leads to large collapse factors and small knots of cold gas residing within a much hotter halo. For the purposes of this paper we use a standard friends-of-friends group finder with a small linking length to reliably extract a group catalog from each simulation output time. At the end of the simulation there are over 2000 significant objects within each simulation volume, roughly the observed number density.
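The group finder referred to here is the standard friends-of-friends algorithm; a compact sketch is given below (periodic boundary conditions and any minimum-membership cut, both relevant in practice, are omitted for brevity):

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(pos, linking_length):
    """Label particles so that any pair closer than the linking length
    ends up in the same group (union-find over k-d tree pairs)."""
    parent = np.arange(len(pos))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i, j in cKDTree(pos).query_pairs(linking_length):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                 # union the two groups
    return np.array([find(i) for i in range(len(pos))])

# Example: group labels for random points with a short linking length
labels = friends_of_friends(np.random.rand(20000, 3), 0.01)
print("number of groups:", len(np.unique(labels)))
```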
The luminosity of any object is then computed by tracking each of the particles that finally reside inside the object backwards in time and recording the time at which each of them first became cold and dense. Standard population synthesis techniques (e.g. ) can then be applied to produce the luminosity of each object in any desired pass band.
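Schematically, the luminosity assignment is a sum over star particles treated as single stellar populations of the appropriate age; the power-law fade below is a crude stand-in, assumed only for illustration, for a real population-synthesis table:

```python
import numpy as np

def band_luminosity(t_formed, t_now, l_ssp):
    """Present-day luminosity of one object: every particle that became
    cold and dense at t_formed contributes the light of a single stellar
    population of age t_now - t_formed; l_ssp maps age to luminosity per
    particle in the chosen pass band."""
    return np.sum(l_ssp(t_now - np.asarray(t_formed)))

# Crude illustrative fade law (assumed), standing in for a synthesis model
l_toy = lambda age_gyr: np.maximum(age_gyr, 0.05)**-0.8
print(band_luminosity([1.0, 3.0, 9.0], 13.0, l_toy))   # times in Gyr
```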
In Fig. 1 we show a comparison between the observed K-band luminosity function and the SCDM simulation. The observations are neatly bracketed by the simulation within the uncertainties imposed by the choice of initial mass function. Clearly the shape of the observations is well reproduced, with no excess at the bright end. The faint end slope is not well reproduced because of the relatively high, resolution-imposed mass cut-off, which was deliberately chosen to be close to $`L_{}`$. For the $`\mathrm{\Lambda }`$CDM simulation the shape of the luminosity function is also well modelled, but the objects tend to be too bright. This is because the parameters chosen for this simulation lead to a higher global fraction of cold, dense gas, making all objects more luminous.
## 5 Conclusions
For the first time we have been able to model galaxy formation within a large enough volume of the Universe that reliable calculations can be made of the galaxy correlation and luminosity functions. As has been shown elsewhere in these proceedings () the observed correlation function is well fitted by these models which also simultaneously fit the observed luminosity functions. The galaxies in our models are not only reasonably distributed, they also have a sensible range of masses and formation times.
## Acknowledgments
The work presented in this paper was carried out as part of the programme of the Virgo Supercomputing Consortium using computers based at the Computing Centre of the Max-Planck Society in Garching and at the Edinburgh Parallel Computing Centre.
| no-problem/9906/astro-ph9906240.html | ar5iv | text |
# On the evolutionary status of Be stars
## 1 Introduction
The evolutionary status of classical Be stars is a frequently raised and yet unsolved question. The main issue is to determine whether the Be phenomenon appears at a given stage in the evolutionary track of every B star, or whether it originates in the conditions of formation of some stars, which include fast rotation and probably other factors. A fundamental element in this discussion is the study of Be stars in open clusters, in two different ways: i./ the determination of the Be star positions in the cluster photometric diagrams; and, ii./ the study of the abundance of Be stars as a function of the cluster age.
It is well known that Be stars usually occupy anomalous positions in the colour-magnitude diagrams, lying above the main sequence. Early attempts to explain the Be phenomenon suggested that Be stars occur during the core contraction phase following the exhaustion of hydrogen (Schmidt-Kaler 1964). Later, however, a significant fraction of Be stars was observed close to the ZAMS (Schild & Romanishin 1976), and today it is generally accepted that they occupy the whole main sequence band and different evolutionary states (Mermilliod 1982; Slettebak 1985), and therefore that they are not confined to any particular evolutionary phase. It is well established that the anomalous positions in the photometric diagrams can be explained in terms of the contribution of the circumstellar continuum emission to the photometric indices (Fabregat et al. 1996; Fabregat & Torrejón 1998; and references therein).
Extensive studies of the abundances of Be stars in open clusters have been done by Mermilliod (1982) and recently by Grebel (1997). Both authors obtained similar results, finding Be stars in clusters of all ages, with a peak frequency in clusters with turn-off at spectral types B1-B2, and a regular decrease with increasing age afterwards.
Nevertheless, these kinds of studies face some difficulties which make their conclusions somewhat uncertain. The purpose of this paper is to critically review the previous work in this field, and to present a new study of the abundance of Be stars in clusters of different ages, taking into account new considerations and observational results.
## 2 Critical review of previous work
In this section we will address the main drawbacks which affect the determination of the Be star abundance as a function of the cluster age. Whenever possible we will propose solutions to these problems.
### 2.1 The ages of the open clusters
The usual way to determine the age of a star cluster is by means of isochrone fitting. The most widespread technique is to transform the theoretical isochrone from the $`L`$ \- $`T_{\mathrm{eff}}`$ plane to the observational colour-magnitude plane, and then directly compare it with the observational photometric data.
In young open clusters, isochrone fitting is hampered by two main problems affecting the observational data. The usual presence of differential reddening across the cluster face widens the observed main sequence. For the clusters we are dealing with in this paper, the presence of Be stars, which generally occupy anomalous positions in the colour-magnitude diagrams, also contributes to a further main-sequence widening. Hence, the fit of a particular isochrone can be a very uncertain process. For instance, recent age determinations for the cluster with the highest Be star abundance in the Galaxy, NGC 663, are the following: 21 Myr (Leisawitz 1988), 9 Myr (Tapia et al. 1991), 12-15 Myr (Phelp & Janes 1994) and 23 Myr (this work, Sect. 3). The difference in the age determinations amounts to a factor of three.
As an attempt to solve this problem, we have investigated the different photometric indices which are commonly used as horizontal axis in the observational HR diagrams, with regard to the B star region of the main sequence. In Table 1 we present for each index its variation along the B star sequence, the photometric accuracy which is usually reached in the photometric data, the sampling of the main sequence – the ratio between the index variation and the accuracy – and how the interstellar reddening affects the index.
In view of this table, the best sampling is obtained with the $`(UB)`$ colour in the Johnson system and the $`c_1`$ index in the Strömgren system. $`c_1`$ has the additional advantage of being much less affected by reddening. Furthermore, the $`M_V`$ \- $`c_1`$ diagram allows an efficient segregation of the Be stars from the absorption line B stars (Fabregat et al. 1996). Therefore we propose that the most efficient way to determine reliable ages for very young clusters is the isochrone fitting to the observational $`M_V`$ \- $`c_1`$ HR diagram. In Section 3 we will base our discussion of the Be stars abundances as a function of the cluster age on ages determined in this way.
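As an illustration of the fitting procedure we advocate, the sketch below selects the age whose isochrone lies closest to the cluster sequence in the $`M_V`$ \- $`c_1`$ plane. The isochrone grid and the summed point-to-curve distance statistic are our own illustrative choices:

```python
import numpy as np

def best_isochrone_age(c1_obs, Mv_obs, isochrones):
    """Return the age (Myr) whose isochrone minimises the summed squared
    distance of the observed stars from the curve in the M_V - c1 plane.
    `isochrones` maps age -> (c1_iso, Mv_iso) arrays, i.e. theoretical
    isochrones already transformed to the observational plane; Be stars
    should be removed from (c1_obs, Mv_obs) beforehand, as argued above."""
    def cost(c1_iso, Mv_iso):
        d2 = (c1_obs[:, None] - c1_iso)**2 + (Mv_obs[:, None] - Mv_iso)**2
        return np.min(d2, axis=1).sum()    # nearest isochrone point per star
    return min(isochrones, key=lambda age: cost(*isochrones[age]))
```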
### 2.2 Determination of Be star frequencies
Spectroscopic surveys devoted to the detection of Be stars in open clusters are scanty in the literature. The only clusters for which the abundance of Be stars has been exhaustively studied are h and $`\chi `$ Persei (NGC 869 and NGC 884) and NGC 663. The last systematic, although not exhaustive, survey dates from 1976, with the work of Schild & Romanishin.
In recent years, new detection techniques based on CCD imaging photometry have been applied to study the Be star abundances in clusters in the Galaxy (Capilla & Fabregat 1999) and in the Magellanic Clouds (Grebel et al. 1992; Grebel 1997; Keller et al. 1999). These studies, however, only provide lower limits on the abundance for no more than 10 clusters. We are still far from having a statistically significant sample of open clusters with well determined Be star abundances.
### 2.3 Classical versus Herbig Be stars
In this paper we deal only with the abundances of classical Be stars. There exist other classes of early-type emission line stars. Among them, the most conspicuous are the so-called Herbig Ae/Be stars. These objects are pre-main sequence stars in which the line emission originates in circumstellar material remaining from the proto-stellar cloud from which the star was formed (for a recent review of Herbig Ae/Be stars, see Waters & Waelkens 1998). The observational characteristics of classical and Herbig Be stars, at least in the optical region, are very similar, making it a very difficult task to differentiate between the two types. An efficient segregation can be made in the far-infrared region, where the Herbig Ae/Be stars show an important excess caused by the presence of dust, which is lacking in classical Be stars.
Grebel (1997) includes in her study the clusters NGC 2244, NGC 6611 and IC 2944. All of them are very young open clusters (age $`<`$ 6 Myr) associated with bright emission nebulosities, and are still undergoing stellar formation (Hillenbrand et al. 1993; Pérez et al. 1987; Reipurth et al. 1997; de Winter et al. 1997). Hillenbrand et al. (1993) found a large number (27) of H$`\alpha `$ emission line stars in NGC 6611. They pay special attention to the study of the strong emitters W235 and W503, and show evidence that these stars are Herbig Ae/Be stars instead of classical Be stars. For the rest of the stars the situation is much more uncertain, but they concluded that all emission line stars in NGC 6611 are pre-main sequence objects instead of classical Be stars. De Winter et al. (1997), in a smaller sample in the same cluster, found 11 emission line stars. They classified three of them as Herbig Ae/Be stars, and agreed with Hillenbrand et al. that most probably the rest of the emission line objects are also pre-main sequence stars.
In the same way, Reipurth et al. (1997) analysed 7 emission line stars in the H ii region IC 2944. They classified all of them as young pre-main sequence objects, probably Herbig Ae/Be stars. Van den Anker et al. (1997) found 5 emission line stars in the star-forming cluster NGC 6530. They classified 3 as Herbig Ae/Be, and concluded that the remaining two most probably are of the same type.
We can conclude that, despite the difficulty of differentiating between classical and Herbig Be stars, there are in the literature several positive identifications of Herbig Ae/Be stars in the youngest, star-forming open clusters. Conversely, no bona fide classical Be star has yet been reported among them.
## 3 The Be star abundance as a function of the cluster age
There are only four galactic open clusters with a known distinctly high abundance of Be stars, namely more than 15 Be stars, or more than 25% of Be stars among their observed B stars. They are NGC 663, NGC 869, NGC 884 and NGC 3766. As we argued in Sect. 2.1, we will assume for these clusters the ages determined through isochrone fitting to the observational $`M_V`$ \- $`c_1`$ HR diagram. These ages are 14 Myr for NGC 869 and NGC 884 (Fabregat et al. 1996) and 24 Myr for NGC 3766 (Shobbrook 1985, 1987). For NGC 663, in Fabregat et al. (1996) we assumed an age of 21 Myr, because it was the age determination found in the literature which showed the best agreement with our uvby data. However, in Fig. 2 of this reference it can be seen that the 21 Myr isochrone does not give the best fit to the data in the $`M_V`$ \- $`c_1`$ diagram. A much better fit is obtained with an age of 23 Myr, and we will assume this value as the cluster age. We conclude that, when using isochrone fitting to the $`M_V`$ \- $`c_1`$ diagram as the age estimator, the galactic clusters with the highest frequency of Be stars occupy the very narrow age interval of 14-24 Myr.
Grebel (1997) reports on three more clusters in the Magellanic Clouds, with Be star abundances comparable to, or even higher than, those of the clusters referred to in the above paragraph. The ages of these clusters fall in the same range as those of the above galactic clusters. They are NGC 330 (19 Myr), NGC 2004 (20 Myr) and NGC 1818 (25 Myr). Notice that the age reported by Grebel for NGC 1818 is between 25 and 30 Myr, but looking at her Fig. 1 we find that 25 Myr fits the data better. The same conclusion is reached by Van Bever and Vanbeveren (1997).
Dieball and Grebel (1998) studied three more clusters in the LMC, namely SL 538 (18$`\pm `$2 Myr), NGC 2006 (22.5$`\pm `$2.5 Myr) and KMHK 19 (16 Myr). They found between 5% and 12% of Be stars among the observed cluster B stars.
Keller et al. (1999) also searched for Be stars in the clusters NGC 330 and NGC 346 in the SMC, and NGC 1818, NGC 1948, NGC 2004 and NGC 2100 in the LMC. They found a large number of Be stars in all these clusters, with frequencies ranging from 13% to 34% of the clusters' main-sequence B stars. For the three clusters not in common with Grebel (1997), the age determinations in the literature are 15 Myr for NGC 1948 (Vallenari et al. 1993) and 18 Myr for NGC 2100 (Cassatella et al. 1996). We will exclude NGC 346 from this discussion because it is a much younger cluster, embedded in N66, the largest and brightest H ii region in the SMC (Kudritzki et al. 1989; Massey et al. 1989).
All the age determinations for the Magellanic Clouds clusters are derived from $`BVRI`$ photometry, and so we consider them less reliable than the ages obtained from uvby photometry, for the reasons explained in Sect. 2.1. Keller et al. consider these ages uncertain by factors of 2 to 3. Despite this fact, the coincidence between these ages and the age interval we determined from the uvby photometry of the galactic clusters is striking. Only one Magellanic Clouds cluster falls, very marginally, outside the age interval of 14-24 Myr, namely NGC 1818 (25 Myr).
In Table 2 we summarize the ages and Be star abundances of all the ”Be star rich” clusters discussed. When comparing the abundances of the Galactic and Magellanic Clouds clusters, it should be noted that abundances in the latter are derived from single-epoch surveys, while in the former they come from more than 50 years of study. In fact, it turns out that the Be star abundance of the Magellanic Clouds clusters is significantly higher, and this can be attributed to the different metallicity (Maeder et al. 1999).
From the above, we conclude that the clusters with a high abundance of Be stars occupy a very narrow range of ages, namely between 14 and 25 Myr. For older clusters the percentage of Be stars decreases, as shown by Mermilliod (1982) and Grebel (1997). We now have to study the abundances in the younger clusters. There are not enough observational data in the literature to make a complete analysis, but we have been able to collect several pieces of evidence pointing towards the same conclusion: the marked paucity of Be stars in clusters younger than 10 Myr. In the following paragraphs we review some of them.
We have performed a first search in the WEBDA database of open cluster data (Mermilliod 1999). We found 64 clusters younger than 10 Myr. Among them, 9 clusters contain 3 or more Be stars. They include NGC 2244 (3 Be stars), NGC 6530 (18), NGC 6611 (20) and IC 2944 (8). All these clusters have been discussed in Sect. 2.3, where we showed that their emission line objects are Herbig Be stars instead of classical Be stars. NGC 6823 (5) lies in a bright H ii region with associated dark clouds (Stone 1988). NGC 7380 (3) is associated with the molecular cloud regions Sh2-142 and NGC 7380 E, and contains pre-main sequence stars, among which several Herbig Ae/Be stars have been identified (Chavarría-K. et al. 1994). IC 1590 (4) is embedded in the nebulosity of NGC 281, also identified as the bright H ii emission region Sharpless 184 (Guetter & Turner 1997). One more cluster, IC 1805 (2), is located in an H ii region and associated with a large molecular cloud (Ninkov et al. 1995; Heyer et al. 1996). For the same reasons given in Sect. 2.3, we consider that the emission line objects in these last four clusters are likely to be pre-main sequence objects. The two remaining Be star rich clusters are NGC 884 (17) and NGC 957 (5). NGC 884 has been discussed in the above paragraphs, where we assumed and justified an age of 14 Myr. Meynet et al. (1993) also derived an age of 14 Myr from isochrone fitting to $`UBV`$ photometric data. The age of 8 Myr in the WEBDA database is clearly wrong. The same can be said for NGC 957, whose age in the database is 6 Myr. This cluster has an age of 15 Myr in the last edition of the Lyngå catalogue (Lyngå 1987), which would place it in the age range we have established for the maximum Be star abundance. Maeder, Grebel & Mermilliod (1999) assumed ages of 13 and 16 Myr respectively for NGC 884 and NGC 957.
Among the remaining 54 clusters younger than 10 Myr in the WEBDA database, one contains two Be stars (NGC 6871) and three contain one Be star each (NGC 6383, NGC 7235 and Hogg 16). Even in these very few cases doubts still remain about the cluster ages and the nature of the emission line objects. The emission line star in NGC 6383 has been studied by Thé et al. (1985), who could not decide whether it is a classical Be star or a pre-main sequence object. A recent determination of the age of Hogg 16 gives a result of 25 Myr (Vázquez & Feinstein 1991). The 50 remaining clusters younger than 10 Myr in the WEBDA database have no Be stars detected so far.
Balona and co-workers (Balona 1994; Balona & Koen 1994; Balona & Laney 1995, 1996) obtained CCD uvby$`\beta `$ photometry for several young open clusters. They did not make any particular investigation of the Be star content of the clusters they observed, but we have searched their photometric lists for objects with emission in the H$`\beta `$ line.
A $`\beta `$ index equal to 2.55 corresponds to an equivalent width of the H$`\beta `$ line equal to 0, i.e., a photospheric absorption line completely filled in by emission (Fabregat & Torrejón 1999). Hence, $`\beta <2.55`$ indicates that the H$`\beta `$ line is in emission. We have searched the above-mentioned photometric lists for OB stars ($`(b-y)_0<0.05`$) with $`\beta <2.55`$. This search will also detect, as well as classical Be stars, other kinds of early-type emission line objects, like Of and OBIa stars. To exclude these stars we have introduced the additional restriction $`M_V>-4.5`$. A proof of the reliability of this last restriction can be found in the following data: the two emission line stars in NGC 3293 brighter than $`-4.5`$ have spectral classifications in the literature; they are star 3, with type B0.5Ib (Feast 1958), and star 4, type B0Ib (Morgan et al. 1955). In NGC 6231 there are 8 emission line stars brighter than $`-4.5`$, and 7 of them have spectral types given by Levato & Malaroda (1980). Two are OBI supergiants and three more are Of. Even if the remaining three are Be stars – Oe in this case – this would not affect the main conclusions of this work, as we will comment on later.
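Applied to a photometric list, the three cuts of this paragraph reduce to a one-line filter. A minimal sketch follows; the record layout is of course hypothetical:

```python
def be_candidates(stars):
    """Select Be candidates from a uvby-beta photometric list using the
    cuts quoted in the text; each record is assumed to carry the
    dereddened colour `by0`, H-beta index `beta` and absolute magnitude `Mv`."""
    return [s for s in stars
            if s.by0 < 0.05       # OB-star colour cut, (b-y)_0 < 0.05
            and s.beta < 2.55     # H-beta filled in by emission
            and s.Mv > -4.5]      # fainter than -4.5: rejects Of and OBIa stars
```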
60% of the Be stars in NGC 663, NGC 869 and NGC 884 observed by Fabregat et al. (1996) would have been detected by applying the above criteria. This percentage can be considered the typical detection capability of a photometric survey. Photometric surveys never detect the whole content of Be stars, for the two following reasons: i./ surveys through photometric filters only detect stars with a high level of line emission, losing the mild emitters, which can only be identified by spectroscopic means; ii./ the Be phenomenon is variable, and at a given time only a fraction of Be stars are in a phase of line emission.
After all these considerations, the final results of our survey of the photometric data published by Balona and co-workers are given in Table 4. Except in the case of NGC 2362, the ages reported have been derived from the pulsational properties of the $`\beta `$ Cephei stars present in each cluster by Balona et al. (1997). As can be seen, the clusters in the age interval 4-10 Myr almost completely lack Be stars. No Be stars have been found in the two younger clusters, with ages of 4-5 Myr. A few have been detected in the two older ones, with ages of 9-12 Myr.
## 4 Discussion
The general picture which emerges from the analysis in the previous section is the following: star forming clusters, associated with bright emission nebulae, are rich in emission line stars, but these are most likely pre-main sequence objects related to Herbig Ae/Be stars. When the nebula dissipates, the process of star formation stops – at least with regard to the massive stars – and the clusters are devoid of early-type emission line objects. Classical Be stars start to appear in clusters with ages of around 10 Myr, and reach their maximum abundance in the 14-25 Myr interval. For older clusters the Be star abundance decreases with age, as shown by Mermilliod (1982) and Grebel (1997).
The decrease of the Be star abundance with age after the 14-25 Myr peak is a reflection of the dependence of the Be star abundance on spectral type. It is well known that the maximum abundance occurs for spectral types B1-B2 (Zorec & Briot 1997). Clusters older than 25 Myr have their turnoff at type B3 or later, and hence they are expected to contain lower abundances than clusters with B1-B2 main sequence stars. Clusters older than 100 Myr have their turnoff at B8 or later, and their lack of Be stars is an obvious reflection of their lack of any kind of B stars.
Conversely, the lack of Be stars in clusters younger than 10 Myr has evident implications for the evolutionary status discussion. These clusters have their turnoff at type B1 or earlier, and hence their B star sequence is complete, including the spectral types at which the Be star abundance reaches its maximum. The lack of Be stars in these clusters implies that a Be star cannot be a very young object.
Be stars appear in clusters with turnoff at B1, and reach their maximum abundance in clusters with turnoff at B2. As most Be stars belong to these types, we have to conclude that Be stars are much closer to the end of the main sequence than to the ZAMS.
This result contradicts the findings of Mermilliod (1982) and Slettebak (1985), already mentioned in the introduction, who stated that Be stars occupy the whole main sequence band from the ZAMS to the TAMS. This claim is mainly based on photometric data, in the $`UBV`$ system, of Be stars in open clusters. For a B star of a given subtype, the difference in $`(B-V)`$ between its position at the ZAMS and at the end of the main sequence is lower than 0.1 mag. To firmly conclude that a Be star is on or near the ZAMS, a photometric accuracy of 0.02 mag for the underlying star of the Be object would be required. If we consider all the problems which affect the photometry of Be stars, this accuracy seems not to be within reach. Mermilliod (1982) uses the $`(U-B)`$ colour, for which the difference between ZAMS and TAMS is larger. But Be stars tend to move leftwards in the $`M_V`$ \- $`(U-B)`$ diagram due to the excess in the $`U`$ magnitude caused by the circumstellar emission in the Balmer continuum (Kaiser 1989). The same effect is present in the Strömgren $`c_1`$ index, as shown by Fabregat et al. (1996). This effect can displace a strong emitter from the TAMS to the ZAMS and even leftwards. Mermilliod already realized this effect when he stated that the $`(U-B)`$ colours are affected by the Be phenomenon. Hence we consider our result, based on the analysis of Be star abundances in open clusters, more reliable than results based on photometric data which are strongly affected by the circumstellar continuum emission. On the other hand, both authors are aware that most Be stars occur on the evolved part of the main sequence (Mermilliod 1992) and considerably off the ZAMS (Slettebak 1985).
There is additional evidence indicating that Be stars are somewhat evolved objects. In the younger clusters containing early-type Be stars, late-type Be stars are scarce or completely lacking. This was first noted by Sanduleak (1979, 1990). He found 26 Be stars in NGC 663, and among them only 2 later than B5. His objective prism survey was complete to mag. 14, which at the cluster reddening and distance reaches spectral type B6-B7V. He concludes that Be stars in NGC 663 are primarily confined to spectral types earlier than B5.
Capilla & Fabregat (1999) performed CCD Balmer-line photometry of NGC 663, NGC 869 and NGC 884. Their images are deep enough to cover all the B type range. They detected a total of 25 Be stars in the three clusters, and among them only two are later than B5.
These results can be interpreted in the same evolutionary terms as before. In clusters with ages in the interval 14-24 Myr, the stars earlier than B5 have spent more than half of their lives on the main sequence, while the late B stars are still in the first half of the main sequence phase. The Be phenomenon occurs among the former and not among the latter.
### 4.1 Be stars as post-mass-transfer binary systems
It has been suggested that Be stars could be the result of the evolution of close binary systems. The transfer of matter and angular momentum would spin up the mass gainer to very high rotation rates. It is well known that rapid rotation is a common characteristic of Be stars, and hence a key ingredient of the Be phenomenon. The products of close binary evolution are therefore good candidates to develop the Be phenomenon (Pols et al. 1991). Moreover, several Be stars are definitely post-mass-transfer systems. They are the Be/X-ray systems, in which a neutron star orbits an early-type Be star, accreting matter from the dense stellar wind and thereby generating X-rays. The properties of Be stars in Be/X-ray binaries are not different from those of the rest of the Be stars. For a recent review of the properties of Be/X-ray binaries, see Negueruela (1998).
Our conclusions on the evolutionary status of Be stars are consistent with the hypothesis that Be stars are post-mass-transfer systems. The Be phenomenon would occur after the mass transfer phase in the evolution of a close binary. This would explain the scarcity of Be stars in very young clusters: the primary star of the system must complete its main sequence evolution before the mass transfer begins and the Be star is formed, and hence a Be star cannot be a very young object.
However, the interpretation of the Be phenomenon as the result of close binary evolution faces important problems, both theoretical and observational. The computations of Pols et al. (1991) can only account for about half the population of Be stars. The recent study of Van Bever & Vanbeveren (1997), with updated models of close binary evolution, reveals that only a minority of Be stars (less than 20% and possibly as low as 5%) can be due to close binary evolution. On observational grounds, the models of close binary evolution predict a population of Be star plus white dwarf systems ten times more abundant than the Be/X-ray binaries. These systems should be observable as low luminosity X-ray sources. The search conducted by Meurs et al. (1992) failed to detect the predicted population of Be+WD systems.
Hence we have to conclude that, despite its consistency with the results of our analysis, the model of close binary evolution does not provide a satisfactory explanation of the Be star phenomenon. Moreover, it has to be considered that this model is ad hoc, because it only justifies the formation of a rapidly rotating B star, but does not explain how the Be phenomenon arises from it.
### 4.2 Evolution through the main sequence
The main conclusion of our study is that Be stars are evolved main sequence stars, closer to the TAMS than to the ZAMS. This would imply the existence of some evolutionary change able to produce the Be star phenomenon during the main sequence lifetime. This is not in agreement with the classical theory of stellar evolution, which predicts that the main sequence is a quiet evolutionary stage in which no major changes in the stellar structure occur.
However, in the modern literature there is growing evidence of important changes occurring during the main sequence stage. Lyubimkov (1996, 1998) has shown that the abundances of helium and nitrogen in O and early B stars increase during the main sequence. This change is not monotonic. The initial helium abundance $`He/H`$ = 0.08-0.09 is maintained during the first half of the main sequence lifetime. Subsequently, $`He/H`$ abruptly increases approximately twofold in a short interval of relative ages $`t/t_{\mathrm{MS}}`$ (where $`t_{\mathrm{MS}}`$ is the main sequence lifetime) between 0.5 and 0.7, and this enhanced $`He/H`$ remains constant until the main sequence stage is complete. Recent evolutionary models take this effect into account, and attribute the light element enhancement to early mixing produced by rotationally induced turbulent diffusion (Denissenkov 1994; Talon et al. 1997; Maeder 1997).
Our results are consistent with the Be phenomenon appearing at the same evolutionary age, $`t/t_{\mathrm{MS}}\simeq 0.5`$. To show this we have to keep in mind that the highest percentage of Be stars corresponds to spectral types B1-B2 (Zorec & Briot 1997). The age of 10 Myr, at which Be stars start to appear, corresponds to an evolutionary age of $`t/t_{\mathrm{MS}}`$ = 0.5 for a star of about 10 $`M_{}`$ and solar metallicity (the discussion in this and the next paragraph is based on data from Table 47 and Fig. 1 in Schaller et al. 1992). Such a star at this age has a spectral type of B1, where the abundance of Be stars reaches its maximum. In clusters younger than 8 Myr, stars are reaching the TAMS at 20 $`M_{}`$ or higher, and at spectral types earlier than B1. The lower abundance of Be stars among these earlier types, and the low relative number of such massive stars, explain the paucity of Be stars in these very young clusters. O8-9.5e stars can, however, start to appear at ages as early as 3 Myr, which can explain the few cases of clusters younger than 10 Myr with a few Be stars. NGC 6231, discussed in Section 3, could be one of these cases.
The maximum Be star abundance occurs between 14 and 25 Myr. 14 Myr corresponds to $`t/t_{\mathrm{MS}}`$ = 0.6 for a 9 $`M_{}`$ star, at spectral type B1, i.e. at the beginning of the maximum abundance of Be stars. 25 Myr is the end of the main sequence lifetime for a 9 $`M_{}`$ star, at spectral type B3. Stars of lower mass reach the relative age of $`t/t_{\mathrm{MS}}`$ = 0.5 at spectral types of B3 or later, where the abundance of Be stars decreases. This explains the decreasing abundance of Be stars after an age of 25 Myr.
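The arithmetic behind these age-to-spectral-type statements is simply the ratio of cluster age to main-sequence lifetime. A sketch follows; the rounded lifetimes are our own approximate reading of the Schaller et al. (1992) solar-metallicity grid, not tabulated values:

```python
# Approximate main-sequence lifetimes (Myr), solar metallicity (assumed).
T_MS_MYR = {9.0: 25.0, 10.0: 20.0, 12.0: 16.0, 15.0: 11.5}

def relative_age(mass_msun, age_myr):
    """Fractional main-sequence age t/t_MS."""
    return age_myr / T_MS_MYR[mass_msun]

print(relative_age(10.0, 10.0))   # 0.50: a 10 Msun star at 10 Myr is half-way
print(relative_age(9.0, 14.0))    # 0.56: close to the quoted t/t_MS = 0.6
print(relative_age(9.0, 25.0))    # 1.00: TAMS, the end of the Be-rich interval
```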
We propose the hypothesis that the Be phenomenon is an evolutionary effect, appearing halfway through the main sequence lifetime, and that it is related to the light element enhancement which occurs at the same evolutionary phase. It is now widely accepted that magnetic fields near the stellar surface could be the cause of the enhanced mass loss which characterizes the Be phenomenon. The mechanisms proposed to explain the mixing at this stage, which imply movement of plasma near the stellar surface, coupled with the rotation of the star, could originate and maintain a magnetic field via a dynamo-related effect. The characteristic high rotational velocity of Be stars would play a major role in: i./ inducing the movement of matter via turbulent diffusion; and, ii./ enhancing the magnetic field strength via the dynamo effect. Hence our hypothesis provides a natural explanation of the influence of high rotational velocity on the Be phenomenon.
A direct proof of this hypothesis could be obtained by studying whether Be stars have an enhanced helium abundance. Unfortunately, the contamination of the photospheric spectrum by the circumstellar emission lines makes the abundance analysis of Be stars an almost impossible task. Such an analysis can be performed on Be stars observed in a disk-loss phase, i.e., a phase in which the circumstellar disk has dissipated and the photospheric spectrum is directly observable. This has been done by Lyubimkov et al. (1997) for the Be star X Persei, who obtained an enhanced helium abundance of $`He/H`$ = 0.19. However, this case is not conclusive, because X Persei is a Be/X-ray binary which underwent mass transfer in the past, and hence it is not possible to know whether the helium overabundance is due to internal processes in the current primary star or whether it is caused by external accretion from the original primary. Helium abundance studies of isolated Be stars observed during disk-loss phases are required to prove our suggestions.
## 5 Conclusions
We have presented a study of the abundance of Be stars in open clusters as a function of the cluster age, using whenever possible ages determined through Strömgren $`uvby`$ photometry. For the first time in studies of this kind, we have considered classical and Herbig Be stars separately.
The main results obtained can be summarized as follows:
* Clusters associated with emitting nebulosities and undergoing stellar formation are rich in emission line objects, which most likely are all pre-main sequence objects. No bona fide classical Be star has yet been identified among them.
* Clusters younger than 10 Myr and without associated nebulosity almost completely lack Be stars, despite having a complete, unevolved B-star main sequence.
* Classical Be stars appear at an age of 10 Myr, and reach their maximum abundance in the age interval 14-25 Myr.
We interpret our results as indicating that the Be phenomenon is an evolutionary effect which appears in the second half of the main sequence lifetime of a B star. This conclusion is supported by other facts, such as the lack of late-type Be stars in young clusters rich in early-type Be stars.
We propose the hypothesis that the Be phenomenon could be related to major structural changes happening at an evolutionary age of $`t/t_{\mathrm{MS}}`$ = 0.5, which also lead to the recently discovered non-monotonic helium abundance enhancement. The semiconvection or turbulent diffusion responsible for the helium and nitrogen enrichment, coupled with the high rotational velocity, can originate magnetic fields via the dynamo effect. It is now widely accepted that many observed phenomena are due to Be star photospheric activity related to the presence of magnetic fields. Our hypothesis provides a natural explanation of the relationship between the Be phenomenon and the high rotational velocity characteristic of Be stars.
It should be noted, however, that our results on the Be star frequencies in open clusters come from scarce and inhomogeneous sets of data, and this lends a somewhat speculative component to our conclusions. To check our results and proposed explanations, the following observational data would be of critical importance:
* A systematic study of the Be star frequencies in a significant number of clusters of different ages, both in the Galaxy and in the Magellanic Clouds. It would be of exceptional interest to know the abundances in Magellanic Clouds clusters younger than 10 Myr in which high-mass star formation has already finished.
* The determination of cluster ages in a homogeneous system. We propose for this purpose the use of the Strömgren $`uvby`$ system.
* The determination of the helium and light element abundances of Be stars undergoing disk-loss phases.
* In a more general context, the determination of the helium abundance in B stars with evolutionary ages $`t/t_{\mathrm{MS}}`$ $`>`$ 0.5. Early type B stars in clusters with turn-off at B1-B2 would be especially well suited for this purpose.
We are currently undertaking observational programs to address the above questions.
To conclude, we would like to comment that in recent years many relations and cross-links between classical Be stars and several other types of peculiar hot stars have been put forward, clearly showing that the Be phenomenon is not just an isolated problem of stellar astrophysics. On the other hand, the aforementioned results on light element enhancements in the atmospheres of hot stars during their main sequence lifetime are not compatible with classical stellar evolutionary models, and demonstrate that there are important gaps in our knowledge of massive star evolution. In this paper we propose that both phenomena are related, and hence that the understanding of the Be phenomenon could be the key to advancing our understanding of major issues in massive star formation and evolution.
###### Acknowledgements.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the NASA’s Astrophysics Data System Abstract Service.
| no-problem/9906/astro-ph9906006.html | ar5iv | text |
# Mass Transfer Processes in Classical Nova Systems Around Their Outburst Events and Beyond
novae, cataclysmic variables — stars: evolution — stars: individuals (V1974 Cyg, V1425 Aql, DN Gem) — white dwarfs
Dissertation Summary
Electronic mail: ar@astro.keele.ac.uk; Thesis work conducted at: Dept. of Physics and Astron., Tel Aviv University, Tel Aviv, 69978, Israel; Ph.D. Thesis directed by: Prof. Elia Leibowitz; Ph.D. Degree awarded: 1999
A continuous photometric study of 12 novae was carried out on more than 365 nights during the past four years at the Wise Observatory, with the aim of extending our knowledge of these systems. Table 1 presents a summary of the observations and the main results.
It was believed that the accretion disc is destroyed by the nova outburst, and that it takes only a few decades for the disc to reform. The main aim of the project was, therefore, to try to find observational evidence for the presence of an accretion disc within the binary nova system shortly after its outburst event. Positive results were found in one case (permanent superhumps in Nova V1974 Cygni 1992 - Retter A., Leibowitz E.M., Ofek E.O., 1997, MNRAS, 286, 745), and most probably in another case (the classification of Nova V1425 Aql 1995 as an intermediate polar system - Retter A., Leibowitz E.M., Kovo-Kariti O., 1998, MNRAS, 293, 145). In addition, we used the Eccentric Disc model in order to derive the binary parameters of Nova V1974 Cygni 1992 (Retter et al. 1997).
Further use of the theory and simple assumptions led to a prediction for the future of superhumps in classical nova systems, and in particular for the permanent superhumps in Nova V1974 Cygni 1992 (Retter A., Leibowitz E.M., 1998, MNRAS, 296, L37). They will either continue to prevail in its light curve while the brightness of the nova stays well above its pre-outburst level, or the system will continue to decay and eventually evolve into a regular SU UMa system, with superhumps appearing only during superoutbursts.
The detection of permanent superhumps in Nova Cygni 1992 and the observed evolution of two other old classical novae - V603 Aquilae 1918 and CP Puppis 1942 - led us to suggest that non-magnetic short-period nova systems are the progenitors of permanent superhump systems, which later in their lifetimes might evolve into SU UMa systems (Retter & Leibowitz, 1998).
We also found a photometric period in the light curve of Nova DN Gem 1912, and proposed that it is the orbital period of the binary system (Retter A., Leibowitz E.M., Naylor T., 1999, MNRAS, accepted). This interpretation makes DN Gem the fourth nova inside the period gap of the cataclysmic variable period distribution, and it bolsters the idea that there is no gap for classical novae. However, the number of known nova periods is still too small to establish this idea statistically. We further argued that the modulation is driven by an irradiation effect.
We also detected periodic variations in Nova V705 Cas 1993 (Retter A., Leibowitz E.M., 1995, IAUC, 6234; Leibowitz E.M. et al., in preparation), and in Nova Sgr 1998 (Lipkin Y., Retter A., Leibowitz E.M., 1998, IAUC, 6963). The observations of Nova Aql 1993 and Nova Oph 1994 revealed a possible photometric period in each. The reduction and analysis of the data of both novae are under way. Irregular variations were found in the light curves of Nova Cas 1995, DM Gem and RW UMi. In 1997, I led an international photometric campaign on Nova V1974 Cyg 1992. These data are being analyzed.
| no-problem/9906/hep-ph9906216.html | ar5iv | text |
# Abstract
We give a brief overview on the present status of Generalized Vector Dominance as appplied to vector-meson production and the total photoabsorption cross section in the region of small $`x_{bj}`$. We comment on how GVD originates from QCD notions such as color transparency.
## 1 THE BASIC QUESTION
Concerning DIS at low values of the scaling variable, $`x_{bj}`$, a basic question has been around for about thirty years: when does the virtual photon behave hadronlike, is it when $`Q^2\to 0`$, or is it when $`x_{bj}\to 0`$ with $`Q^2`$ fixed and arbitrarily large? Here, “hadronlike” behaviour includes the transition of the (virtual) photon to (massive) $`q\overline{q}`$ states and their subsequent diffractive forward scattering from the proton, in generalization of the role of the low-lying vector mesons in photoproduction. There is qualitative experimental evidence for this picture of generalized vector dominance (GVD) at low $`x_{bj}`$ and large $`Q^2`$,
* the existence of high-mass diffractive production discovered at HERA ,
* the similarity in shape (thrust, sphericity) of the states diffractively produced in DIS and the ones produced in $`e^+e^{}`$ annihilation,
* the persistence of shadowing in $`\gamma ^*A`$ collisions for $`x_{bj}\to 0`$ at fixed $`Q^2>>0`$.
Quantitatively, one starts from the mass dispersion relation for $`\sigma _T(W^2,Q^2)`$,
$$\sigma _T=\int dm^2\int dm^{\prime 2}\frac{\rho _T(W^2,m^2,m^{\prime 2})\,m^2m^{\prime 2}}{(Q^2+m^2)(Q^2+m^{\prime 2})},$$
(1)
and its generalization to the longitudinal photon absorption cross section, $`\sigma _L`$, where the spectral weight function is related to the product of the $`\gamma ^*\to q\overline{q}`$ transition (in the initial and the final state in the forward Compton amplitude) and the imaginary part of the $`q\overline{q}`$-proton forward scattering amplitude. Frequently, the diagonal approximation, $`\rho \propto \delta (m^2-m^{\prime 2})`$, is adopted, which requires $`\sigma _{q\overline{q}p}\propto 1/m_{q\overline{q}}^2`$ to obtain scaling for $`\sigma _T`$.
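To see how the diagonal approximation produces scaling, eq. (1) can be evaluated numerically. In the sketch below the entire spectral weight is modelled as $`\rho _T(m^2)\propto 1/m^4`$ above a threshold $`m_0^2`$; this toy weight, our simplest reading of the $`\sigma _{q\overline{q}p}\propto 1/m_{q\overline{q}}^2`$ requirement combined with the level density (normalisation dropped), is an assumption of ours, not the fitted GVD spectrum:

```python
import numpy as np
from scipy.integrate import quad

M0SQ = 0.5   # GeV^2, effective lower limit of the mass spectrum (illustrative)

def sigma_T(Q2):
    """Eq. (1) in the diagonal approximation:
    sigma_T = int dm^2 rho_T(m^2) m^4 / (Q^2 + m^2)^2,
    with the toy weight rho_T(m^2) = 1/m^4 (normalisation dropped)."""
    integrand = lambda m2: (m2**-2) * m2**2 / (Q2 + m2)**2
    value, _ = quad(integrand, M0SQ, np.inf)
    return value

for Q2 in (2.0, 10.0, 50.0):
    print(Q2, Q2 * sigma_T(Q2))   # Q^2 * sigma_T -> constant: Bjorken scaling
```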
## 2 DIAGONAL GVD
Lack of space does not permit me to reproduce the phenomenologically successful representation of $`\sigma _{\gamma ^*p}(W^2,Q^2)`$ at low $`x_{bj}`$, including photoproduction, by GVD. I have to refer to ref. . The diagonal approximation, nevertheless, cannot be the full story. After all, diffraction dissociation exists in hadron reactions, and there is no particular reason in a gluon-exchange picture that would forbid different masses, $`m_{q\overline{q}}\ne m_{q\overline{q}}^{\prime }`$, for ingoing and outgoing $`q\overline{q}`$ states in the forward Compton amplitude.
## 3 OFF-DIAGONAL GVD IN VECTOR-MESON PRODUCTION
Reformulating and extending the off-diagonal GVD ansatz for elastic vector meson production, recent work by Schuler, Surrow and myself yields a satisfactory representation of the transverse cross section and the longitudinal-to-transverse ratio, $`R`$, for elastic $`\rho ^0`$, $`\varphi `$ and $`J/\psi `$ production. The theoretical prediction for $`\sigma _{T,\gamma ^*p\to Vp}`$ is based on
$$\sigma _{T,\gamma ^*p\to Vp}=\frac{m_{V,T}^4}{(Q^2+m_{V,T}^2)^2}\sigma _{\gamma p\to Vp}(W^2).$$
(2)
I refer to ref. for the prediction for $`R`$. The inclusion of off-diagonal transitions with destructive interference yields $`m_{V,T}^2<m_V^2`$, where $`m_V`$ stands for the mass of the vector meson being produced. As an example, in fig. 1, I show $`\varphi `$ production. The curves (2-par. fit) are based on $`m_{\varphi ,T}^2=0.40m_\varphi ^2`$ and $`\sigma _{\gamma p\to \varphi p}=1.0\mu b`$. The theoretical curves for $`J/\psi `$ production in fig. 2 were obtained by the replacements $`m_\varphi ^2\to m_{J/\psi }^2`$ and $`\sigma _{\gamma p\to \varphi p}\to \sigma _{\gamma p\to J/\psi p}`$.
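For concreteness, eq. (2) with the fitted values just quoted can be evaluated in a few lines; treating $`\sigma _{\gamma p\to \varphi p}(W^2)`$ as flat at the energies of fig. 1 is our simplification:

```python
M_PHI2 = 1.019**2            # GeV^2, phi(1020) mass squared
M_PHI_T2 = 0.40 * M_PHI2     # fitted transverse mass scale (2-par. fit, text)
SIGMA_GP_PHIP = 1.0          # mu b, fitted photoproduction cross section

def sigma_T_phi(Q2):
    """Eq. (2) for elastic phi production: the GVD propagator factor
    times the photoproduction cross section."""
    return (M_PHI_T2 / (Q2 + M_PHI_T2))**2 * SIGMA_GP_PHIP

print(sigma_T_phi(0.0))      # Q^2 = 0: recovers the photoproduction value
print(sigma_T_phi(5.0))      # steep (Q^2 + m_T^2)^-2 fall-off at large Q^2
```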
## 4 OFF-DIAGONAL GVD FROM QCD
This is work in progress in collaboration with Cvetic and Shoshi . Starting from the QCD notion of color transparency and an impact-parameter representation for $`\sigma _{\gamma ^*p}^{tot}`$, we obtain a representation for $`\sigma _{\gamma ^*p}^{tot}`$ of the form (1). The spectral weight function turns out to be much like the one conjectured a long time ago . Color transparency, as fulfilled in a two-gluon exchange ansatz, provides the destructive interference necessary for convergence and scaling in (1), thus resolving what has sometimes been called the “Gribov paradox”.
| no-problem/9906/astro-ph9906045.html | ar5iv | text |
# Cygnus X-2: the Descendant of an Intermediate-Mass X-Ray Binary
## 1 Introduction
Cygnus X-2 has long been considered a typical low-mass X-ray binary containing a neutron star (e.g., Smale 1998) with a relatively long orbital period ($`P=9.84`$d; Cowley, Crampton & Hutchings 1979). Recent spectroscopic observations by Casares, Charles & Kuulkers (1998) and a study of the ellipsoidal light variation by Orosz & Kuulkers (1998) yielded accurate values for the mass ratio ($`q=0.34\pm 0.04`$), the individual masses ($`M_x=1.78\pm 0.23M_{}`$ and $`M_c=0.60\pm 0.13M_{}`$ for the neutron star and the companion, respectively), and the radius of the companion ($`R_c=7.0\pm 0.5R_{}`$; unpublished). These observations confirm the present low mass of the companion.
Rather surprisingly, however, Casares et al. (1998) also showed that the spectral type of the companion is, to within two subtypes, A9III, and that the spectral type does not vary with orbital phase. Using the binary parameters above and adopting a temperature of $`7400`$K for this spectral type (Straižys & Kuriliene 1981), we obtain a luminosity $`L_c\simeq 130L_{}`$ for the companion star. On the other hand, we can also estimate the expected temperature and luminosity for a low-mass subgiant that is consistent with the binary parameters of Cyg X-2: using the calibrated stellar models by Han (1995), we find an expected temperature of 4500 K and an expected luminosity of 18$`L_{}`$. This much higher temperature and luminosity of the companion star could in principle be caused by X-ray heating (also see Cowley et al. 1979). However, the fact that the spectral type does not vary with orbital phase, despite an inclination $`i\simeq 62\mathrm{°}`$ (Orosz & Kuulkers 1998), would imply extremely efficient redistribution of the irradiation energy around the star by irradiation-induced circulation currents, much more efficient than is, for example, observed in the companion of Her X-1 (Kippenhahn & Thomas 1979). The spectra of Casares & Charles (1998) also show a helium absorption line whose strength varies with orbital phase. The very presence of this line on the companion indicates regions of much higher temperature than found on an A9 star. Since the line is strongest on the illuminated side of the secondary, this suggests that it is caused by external illumination and heating, while the non-varying component of spectral type A9 reflects the true intrinsic spectral type of the companion.
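The luminosities quoted here follow from the Stefan-Boltzmann law in solar units, $`L/L_{}=(R/R_{})^2(T/T_{})^4`$. A two-line check (with $`T_{}=5772`$ K assumed):

```python
def luminosity_lsun(r_rsun, teff_k, t_sun=5772.0):
    """Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)^2 (T/Tsun)^4."""
    return r_rsun**2 * (teff_k / t_sun)**4

print(luminosity_lsun(7.0, 7400.0))   # ~132 Lsun: the observed A9 companion
print(luminosity_lsun(7.0, 4500.0))   # ~18 Lsun: the expected low-mass subgiant
```

Note that at the same Roche-lobe-filling radius of 7$`R_{}`$, the two quoted temperatures reproduce both quoted luminosities, which is the crux of the discrepancy.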
Since a spectral type A9 is not consistent with a low-mass subgiant, this immediately suggests that the original mass of the companion must have been much higher and that most of the mass of the companion must have been lost from the system. Indeed, in §2 we will show that the present parameters of Cyg X-2 can be understood if, at the beginning of the mass-transfer phase, the companion was a somewhat evolved star of mass $`3.5M_{}`$ that lost most of its envelope as a result of highly non-conservative mass transfer. Cyg X-2 may therefore be related to a hitherto little studied class of intermediate-mass X-ray binaries (IMXBs). In §3 we discuss some of the implications of this conclusion and how it may affect our understanding of X-ray binaries in general.
We note that a similar suggestion concerning the evolutionary status of Cyg X-2 was recently made independently by King & Ritter (1999). However, as we will show in §2, our best model (case AB) differs substantially from their suggested model (early case B), both in its details and its predictions.
## 2 Binary Calculations
In order to explore the possibility that Cyg X-2 is the descendant of an intermediate-mass X-ray binary, we performed a series of binary calculations using an up-to-date, standard, Henyey-type stellar evolution code (Kippenhahn, Weigert & Hofmeister 1967). Our calculations use solar metallicity ($`Z=0.02`$), an initial hydrogen abundance of $`X=0.70`$, a mixing-length parameter $`\alpha =2`$, and a standard prescription for the mass-transfer rate $`\dot{M}`$ (see, e.g., Ritter 1988). For each binary-evolution sequence, one needs to specify what fraction, $`\beta `$, of the mass lost by the donor is accreted by the neutron star, and the specific angular momentum of any matter that is lost from the system. We somewhat arbitrarily choose $`\beta `$ to be 1/2, and limit the maximum accretion rate to the Eddington accretion rate, taken to be $`\dot{M}=2\times 10^{-8}M_{}\mathrm{yr}^{-1}`$ and kept constant throughout each run. We further assume that the mass lost from the system carries away the specific angular momentum of the accreting neutron star, which has an initial mass of 1.4$`M_{}`$. We include angular-momentum loss due to gravitational radiation (although it is of no real importance here), but do not consider magnetic braking, since during all slow evolutionary phases the envelopes of our donor stars tend to be fully radiative (for a general background review see, e.g., Ritter 1996). We have also performed some calculations that include magnetic braking and found that this would not change any of the main conclusions in this paper.
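The orbital response to these prescriptions can be sketched in a few lines. For a circular orbit, $`a=J^2(M_d+M_a)/(GM_d^2M_a^2)`$, and matter leaving with the accretor's specific orbital angular momentum carries $`j=JM_d/[(M_d+M_a)M_a]`$ per unit mass. A minimal Euler integrator for this "isotropic re-emission" mode is given below; it is our own simplified sketch (the Eddington cap and the full stellar response are omitted), not the code used in the paper:

```python
def orbit_step(Md, Ma, a, dMd, beta=0.5):
    """One Euler step of non-conservative transfer: the donor loses |dMd|
    (dMd < 0), a fraction beta is accreted, and the remainder leaves the
    system with the specific orbital angular momentum of the accretor.
    Uses a = J^2 (Md+Ma) / (G Md^2 Ma^2) in logarithmic-derivative form."""
    M = Md + Ma
    dMa = -beta * dMd                            # accreted by the neutron star
    dM = (1.0 - beta) * dMd                      # total system mass change
    dlnJ = (1.0 - beta) * dMd * Md / (M * Ma)    # j_lost = J * Md / (M * Ma)
    dlna = 2.0 * dlnJ + dM / M - 2.0 * dMd / Md - 2.0 * dMa / Ma
    return Md + dMd, Ma + dMa, a * (1.0 + dlna)

# The orbit shrinks while the 3.5 Msun donor is the heavier star and
# expands after the mass ratio is reversed (toy initial values):
Md, Ma, a = 3.5, 1.4, 20.0      # masses in Msun, separation in Rsun
for _ in range(2000):
    Md, Ma, a = orbit_step(Md, Ma, a, -1e-3)
```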
The evolution of a binary is very sensitive to the evolutionary state of the donor star at the beginning of the mass-transfer phase. For example, if the donor is initially relatively unevolved (early case A; Kippenhahn & Weigert 1967), it will subsequently mimic a single star of the same actual mass rather than its initial mass (e.g., Hellings 1983), provided that it is not too far out of thermal equilibrium. Since the companion of Cyg X-2 does not presently resemble a single star of the same mass, this type of evolution cannot be applicable. To illustrate this evolution, we show in Figure 1 the evolutionary track in the Hertzsprung-Russell (H-R) diagram of a 3.5$`M_{}`$ donor star that is somewhat evolved (marked ‘case A’). At the beginning of the mass-transfer phase, it has used up only half of its initial hydrogen supply in the center. After a short, rapid mass-transfer phase (dashed portion), the star spends most of the remaining main-sequence phase in the low-mass region of the main sequence (dot-dashed portion) and then evolves like a low-mass star up the Hayashi line (solid portion). It never resembles a star like the companion of Cyg X-2 (the parameters for Cyg X-2 are shown as a boxed region in Fig. 1).
In order to explain the presently overluminous companion, the secondary must have been relatively evolved at the beginning of the mass-transfer phase. This constraint allows two possibilities: either the secondary was near the end of the main sequence and had already developed a core structure typical of a post-main-sequence intermediate-mass star (case AB), or it had already left the main sequence and filled its Roche lobe while evolving through the Hertzsprung gap (case B).
### 2.1 The Case AB Scenario
The curves marked ‘case AB’ in Figure 1 and Figure 2 illustrate the evolution of a 3.5$`M_{}`$ star that fills its Roche lobe for the first time near the end of the main sequence, when its central hydrogen abundance has been reduced to $`X_c\simeq 0.10`$. The exact value of $`X_c`$ is not crucial, but should probably be less than $`0.2`$. The panels in Figure 2 show the radius and Roche-lobe radius (panel a), the orbital period (panel b), the masses of the two components and the secondary’s core mass (panel c), and the mass-loss rate from the donor (panel d) since the beginning of the mass-transfer phase. As is most evident from the last panel, one can clearly distinguish three separate phases, one very rapid phase and two slower phases.
The rapid initial phase
Because of the large initial mass ratio, the initial mass-transfer rate is of order $`10^{-5}M_{}\mathrm{yr}^{-1}`$, and most of the mass lost from the companion has to be ejected from the system. The high $`\dot{M}`$ implies a mass-loss time scale that is short compared to the thermal time scale of the donor, and the donor star is therefore driven significantly out of thermal equilibrium (i.e., it is undersized and underluminous for its mass). The mass-transfer rate only starts to decrease significantly after the mass ratio has been reversed and the system, and hence the Roche lobe, begin to expand. At this stage, mass transfer is driven entirely by the thermal expansion of the companion. The rapid phase ends once the companion has re-established thermal equilibrium (after $`2\times 10^6`$ yr, of order the thermal time scale of the star). By this time, the secondary’s mass has decreased to 0.900$`M_{}`$. Since the material that is now exposed at the surface has undergone partial CNO burning, its composition shows the signature of CNO processing (enhanced nitrogen, decreased carbon and oxygen) and the surface hydrogen mass fraction is reduced to 0.55 (from 0.7).
The hydrogen core-burning phase
Once the star has returned to thermal equilibrium, the further evolution is driven by the nuclear evolution of the core (hydrogen core burning). During this phase, which in this example lasts $`6\times 10^7`$ yr, the mass-transfer rate is of order $`4\times 10^{-10}M_{}\mathrm{yr}^{-1}`$ and the mass decreases to 0.876$`M_{}`$. The phase ends when hydrogen has been exhausted in the core and the system becomes temporarily detached, accompanied by a small hook in the H-R diagram (Fig. 1).
The hydrogen shell-burning phase
After the exhaustion of central hydrogen, the star expands again and soon starts to fill its Roche lobe for a second time. Initially the evolution of the secondary resembles that of a low-mass star evolving off the main sequence. However, its behavior changes qualitatively once it has lost all of the material that was outside the convective core at the beginning of the mass-transfer phase (which had a mass of $`0.5M_{}`$). At this point, the surface hydrogen abundance drops to 0.1 (the central hydrogen abundance in the core at the beginning of mass transfer), and the star now has many of the characteristics of a non-degenerate helium star, except for the fact that the evolution, and hence mass transfer, are driven by hydrogen burning in an outward-moving shell. Unlike the case of early case B (see § 2.2), the star now has no tendency to become a giant, which would lead to mass transfer on a thermal time scale (we have tested this by evolving such a star without further mass loss). The mass-transfer rate is therefore determined by the time scale for hydrogen-shell burning and lies in the range of $`2\times 10^{-9}M_{}\mathrm{yr}^{-1}`$ to $`7\times 10^{-8}M_{}\mathrm{yr}^{-1}`$ (see the inset in Fig. 2(d)). This phase ends when the hydrogen-rich envelope has almost been exhausted and hydrogen burning stops. The star still continues to evolve, now as a detached star, burns helium in the core as an O subdwarf (with a luminosity $`\sim 10L_{}`$ and temperature $`\sim 30,000`$K) and eventually ends its evolution as a CO white dwarf with an unusually low mass of $`0.414M_{}`$. The final neutron-star mass in this calculation is $`1.64M_{}`$.
The appearance of the secondary
In the initial rapid phase (dashed portion of curve AB in Fig. 1), most of the transferred mass has to be ejected from the system. Since the mass-loss rate exceeds the Eddington rate by several orders of magnitude, the system is unlikely to look like a typical X-ray binary. Since this phase is rather short-lived ($`\sim 10^6`$ yr), relatively few systems should be found in this phase. One possibly related, somewhat more massive observed counterpart is the famous system SS433, which appears to eject most of the transferred mass by means of two relativistic jets (e.g., Margon 1984).
On the other hand, in the two nuclear-evolution-driven phases, the system would in most respects resemble a typical low-mass X-ray binary (LMXB) and would almost certainly be observationally classified as a low-luminosity LMXB in the core-burning phase, and a high-luminosity LMXB in the shell-burning phase. Both phases are also relatively long-lived ($`6\times 10^7`$ and $`3\times 10^7\text{ yr}`$) and hence relatively more systems should be found in these phases than in the rapid phase (see § 3 for discussion).
In this particular model, the star spends a large fraction of the shell-burning phase near the location in the H-R diagram where Cyg X-2 lies (the dashed portion of track AB in Fig. 1). Indeed, during the phase when the orbital period increases from $`7`$ to $`10`$ d, the model provides an excellent description of the main observed properties of Cyg X-2 (see § 1; the mass of the secondary is in the range 0.45 to 0.5$`M_{}`$). An important prediction of this model is that the surface composition should be the same as the core composition of the secondary at the beginning of the first mass-transfer phase, i.e., be severely hydrogen depleted (in this particular model, $`X\simeq 0.1`$) and show strong evidence of CNO processing. (We caution, however, that there are a sufficient number of uncertainties in the binary-evolution model, in particular associated with the angular-momentum loss from the system, that somewhat different initial binary parameters can almost certainly also reproduce the observed properties of Cyg X-2.)
### 2.2 An Early Case B Scenario?
An alternative model for Cyg X-2 is that it started to fill its Roche lobe in the Hertzsprung gap (early case B; also see Kolb 1998 and, in particular, King & Ritter 1999). We have calculated this type of evolution for a 3.5$`M_{}`$ star that has just left the main sequence (dashed curve marked ‘case B’ in Fig. 1). The initial evolution is similar to the previous case. The mass-transfer rate (dashed curve in Fig. 2(d)) reaches a peak of $`10^{-5}M_{}\text{ yr}^{-1}`$ and starts to drop once the mass ratio has been reversed. In the subsequent phase, which is somewhat but not much slower, mass transfer is driven by the thermal evolution of the envelope (the star is trying to evolve across the Hertzsprung gap to become a giant). This phase ends and mass transfer stops completely once most of the hydrogen envelope has been lost and the hydrogen-burning shell has been extinguished. The secondary continues to evolve as a detached star, burning helium as an O subdwarf, and finally becomes a CO white dwarf of mass $`0.617M_{}`$ (the neutron star has only a slightly increased final mass of $`1.427M_{}`$).
As is clear from Figure 2(d), apart from the initial phase, the mass-transfer history is very different from the case AB scenario above. In the slower phase, mass transfer is driven by the thermal expansion of the secondary rather than the nuclear evolution of the core or shell. Since the thermal time scale is much shorter than the evolutionary time scale during the hydrogen-shell burning phase in the case AB scenario, the mass-transfer rate in the case B scenario is $`3\times 10^{-7}M_{}\text{ yr}^{-1}`$, an order of magnitude above the Eddington rate. It is not clear whether such a high mass-transfer rate would be consistent with the observed properties of Cyg X-2. This problem is somewhat reduced if the initial secondary mass was lower ($`\sim 3M_{}`$). However, all case B scenarios suffer from the generic problems that the mass-transfer rate is significantly super-Eddington and that the characteristic lifetime of this phase is very short (of order the thermal time scale of the star, $`10^6\text{ yr}`$). In all the case B models we have calculated, the secondaries spend very little time in the H-R diagram in the general neighborhood of Cyg X-2. For even lower masses, the thermal time scale would be longer, but the star would have a tendency to evolve towards the Hayashi line, which again would be inconsistent with the inferred parameters of the secondary in Cyg X-2. We have performed a whole series of early-case-B binary sequences. None of them was able to reproduce all of the observationally inferred parameters of Cyg X-2 simultaneously (i.e., the position in the H-R diagram, the constituent masses, and $`\dot{M}`$). While we cannot strongly rule out an early-case-B scenario for Cyg X-2, keeping in mind the general uncertainties in modeling non-conservative mass transfer, we conclude that it is more likely that Cyg X-2 is at present in a hydrogen-shell-burning phase of a case AB scenario.
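The order-of-magnitude argument can be made explicit with a thermal (Kelvin–Helmholtz) time-scale estimate; the radius and luminosity adopted below for a 3.5$`M_{}`$ star leaving the main sequence are rough illustrative values, not output of the evolution code.

```python
# Rough thermal time scale of the donor and the implied transfer rate.
G, Msun, Rsun, Lsun, yr = 6.674e-8, 1.989e33, 6.957e10, 3.828e33, 3.156e7

M = 3.5 * Msun
R = 3.0 * Rsun        # assumed radius just beyond the main sequence
L = 80.0 * Lsun       # assumed luminosity (L ~ M^3.5 scaling)

t_kh = G * M**2 / (R * L) / yr
print(f"t_KH ~ {t_kh:.1e} yr")                  # ~1.6e6 yr

M_env = 2.0           # Msun of envelope to be transferred (assumed)
print(f"Mdot ~ {M_env / t_kh:.1e} Msun/yr")     # ~1e-6 Msun/yr
```

Both numbers agree, to within factors of a few, with the $`10^6\text{ yr}`$ lifetime and the strongly super-Eddington rate quoted above.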
## 3 The Importance of Intermediate-Mass X-Ray Binaries (IMXBs)
An important implication of our modeling of Cyg X-2 is that it suggests that there may be a whole class of X-ray binaries which either have intermediate-mass secondaries now or had them in the past. Such systems have received hardly any attention in the past (see, however, Pylyser & Savonije 1988, 1989; Kolb 1998). To some degree, this is the result of observational findings, since previously only one such system (Her X-1/HZ Her) had been unambiguously identified. In addition, the theoretically predicted large initial mass-transfer rates are much higher than those typically inferred for observed X-ray binaries. However, as our calculations show, this rapid phase is relatively short-lived and very few systems should be found in it. It is also not clear what they would look like. A further theoretical uncertainty is how the mass is lost in this rapid phase. As noted earlier, SS433 may provide an observational answer to both of these questions (the appearance initially and the mass-loss mechanism). On the other hand, from an evolutionary point of view, IMXBs should be relatively common, since they are much easier to form and do not require the same amount of fine-tuning as LMXBs (see, for example, Bhattacharya & van den Heuvel 1991). The observational properties of IMXBs would vary significantly, depending on the initial mass of the secondary and the evolutionary phase at the beginning of the mass-transfer phase.
We have initiated a systematic investigation of IMXBs and found that, in the longer-lasting evolutionary phases, they typically resemble ‘classical’ LMXBs (Podsiadlowski & Rappaport 1999). This suggests that perhaps a significant fraction of X-ray binaries, presently classified as LMXBs, are actually IMXBs or their descendants. This may have important implications for our understanding of X-ray binaries. For example, it may help to explain some of the systematic differences between Galactic and globular-cluster LMXBs, the observed period and luminosity distributions, and may help to resolve the apparent discrepancy between the number of millisecond pulsars and their putative progenitors (e.g., Kulkarni & Narayan 1988).
## 4 IMXBs and Millisecond Pulsars
There is a group of about two dozen radio pulsars that are found in nearly circular orbits with low-mass white-dwarf companions (see, e.g., Taylor, Manchester & Lyne 1993; Rappaport et al. 1995). The orbital periods of these systems range from less than 1 day to 1232 days. It has been conjectured that this type of system evolves from a low-mass ($`\sim 1M_{}`$) subgiant or giant donor star which transfers its envelope to a neutron star via stable Roche-lobe overflow (see, e.g., Joss, Rappaport & Lewis 1987; Rappaport et al. 1995). What remains is a low-mass white dwarf (the core of the giant) in a wide, nearly circular orbit about the neutron star, which has been spun up by the accretion process.
Joss, Rappaport, & Lewis (1987) and Rappaport et al. (1995) showed that there is a nearly unique relation between the final orbital period, $`P`$, of such a “recycled” binary pulsar and the mass of the white dwarf, $`M_{\mathrm{wd}}`$ (see also Savonije 1987, and Refsdal & Weigert 1970, 1971 for related discussions). In short, this relation results from the fact that the radius of a low-mass giant is a nearly unique function of its core mass. It then follows for a Roche-lobe filling star in a binary that $`P`$, at the end of mass transfer, is a function of only the core mass of the donor. Thus, the binary remains as a fossil relic of the giant donor star and its degenerate He core. The theoretical relation between $`P`$ and $`M_{\mathrm{wd}}`$ is found to be roughly consistent with the inferred masses of the white dwarf companions in these systems. Unfortunately, this model cannot be confirmed to the level of confidence that one would like. First, for most of these systems, only the mass function is measured, and thus the uncertainty in the white dwarf mass is rather large. Second, there are a number of these systems for which the orbital period is too short to be explained by this scenario (i.e., $`<3`$ d). Finally, there are a few systems where the lower limit on the mass of the white dwarf is only marginally consistent with the theoretical relationship.
The last of these difficulties could be mitigated if some of these systems formed with donors of intermediate mass. We have shown in this work that neutron stars with intermediate-mass donor stars can attain a final evolutionary state which is very much like that of the binary radio pulsars discussed here. In particular, the system which we use to model Cyg X-2 would end its evolution with a 0.42$`M_{}`$ CO white dwarf in a nearly circular 12-day orbit about a spun-up neutron star. Since the initial mass transfer in this system occurred when the 3.5$`M_{}`$ donor star was late in its main-sequence phase (late case A), the core was neither completely hydrogen depleted nor degenerate. This combination enables the final orbital period to remain relatively short for a white-dwarf mass as large as 0.42$`M_{}`$. By contrast, in the more standard low-mass giant scenario the corresponding final orbital period with this same white dwarf mass would have been $`\sim 400`$ days. Put another way, in this same scenario, a system with a 12-day orbital period would be expected to have $`M_{\mathrm{wd}}\approx 0.21M_{}`$. In the future, deep spectroscopy of the companion white dwarfs (see, e.g., van Kerkwijk & Kulkarni 1997), as well as the possible detection of the Shapiro delay in the arrival times of the radio pulses (see, e.g., Kaspi, Taylor, & Ryba 1994), could substantially improve the determination of the white dwarf masses and thereby discriminate between low-mass and intermediate-mass progenitors.
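The contrast between the two scenarios can be illustrated with a toy version of the $`P`$–$`M_{\mathrm{wd}}`$ relation. The core-mass–radius coefficients below are assumed for illustration (in the spirit of the Rappaport et al. 1995 fit), and the donor is approximated by its bare core near the end of mass transfer; the sketch reproduces the qualitative trend, not the published numbers.

```python
# Toy P_orb(M_wd) for the low-mass-giant scenario: core-mass--radius
# relation + Eggleton (1983) Roche-lobe radius + Kepler's third law.
import math

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10

def giant_radius(mc):                      # mc in Msun -> radius in cm
    return 5500.0 * mc**4.5 / (1.0 + 4.0 * mc**4) * Rsun   # assumed fit

def roche_fraction(q):                     # R_L/a for mass ratio q
    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + math.log(1 + q**(1/3)))

def period_days(mc, m_ns=1.8):             # donor ~ bare core of mass mc
    a = giant_radius(mc) / roche_fraction(mc / m_ns)
    return 2 * math.pi * math.sqrt(a**3 / (G * (mc + m_ns) * Msun)) / 86400

for mc in (0.21, 0.30, 0.42):
    print(f"M_wd = {mc:.2f} Msun  ->  P ~ {period_days(mc):6.0f} d")
# gives ~9 d, ~80 d and ~570 d: a 0.42 Msun white dwarf indeed implies a
# period of hundreds of days in this scenario, as stated in the text.
```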
It is also possible that neutron stars accreting from intermediate mass donor stars could attain a final evolutionary state consisting of a very low-mass white dwarf in a short orbital period binary (e.g., $`<1`$d), and help explain binary radio pulsars with such properties. We plan to explore such scenarios in future work.
Thus, evolutionary scenarios and population synthesis studies that are employed to reproduce the distribution of orbital periods and white-dwarf masses in binary radio pulsars must include consideration of intermediate-mass donor stars.
## 5 Conclusions
We have shown that the observationally inferred parameters of the X-ray binary Cyg X-2 can be understood if the secondary had an initial mass of $`3.5M_{}`$ and started to transfer mass near the end of the main sequence (or, less likely, just after leaving the main sequence). Cyg X-2 is therefore related to a class of intermediate-mass X-ray binaries that has been little studied before. Our favored model for Cyg X-2 (case AB) predicts that the secondary should, at present, be severely hydrogen depleted. We suggest that a significant fraction of the X-ray binaries that are presently classified as LMXBs could be related to this class. This could have far-reaching implications for our understanding of X-ray binaries. Detailed future observations of the secondaries in X-ray binaries as well as systematic binary population synthesis studies should help to assess the importance of this relatively unexplored evolutionary channel.
The authors are grateful to P. Charles and D. Chakrabarty for stimulating discussions. This research was supported in part by NASA ATP Grant NAG5-4057.
Figure Captions
Figure 1. Evolutionary tracks of the secondaries in three binary calculations in the Hertzsprung-Russell diagram. The secondary has an initial mass of $`3.5M_{}`$ and the primary, assumed to be a neutron star, an initial mass of $`1.4M_{}`$ in all calculations. The dotted curve shows the track of a $`3.5M_{}`$ star without mass loss. The mass-loss tracks, labelled case A, AB and B, start at different evolutionary phases of the secondary (‘case A’: the middle of the main sequence; ‘case AB’: the end of the main sequence; ‘case B’: just after the main sequence). The dashed portions in each track indicate the rapid initial mass-transfer phase, the dot-dashed and solid portions the slow phases where mass transfer is driven by hydrogen core burning and hydrogen shell burning, respectively (only in case A and case AB). The beginning and end points of the various phases are marked by solid bullets; the numbers next to them give the mass of the secondary at these points. The boxed region labelled ‘Cyg’ indicates the observationally determined parameter region for the secondary in Cyg X-2. The tracks after mass transfer has ceased are not shown.
Figure 2. Key binary parameters for the case AB binary calculation as a function of time (with arbitrary offset). Panel (a): radius (solid curve) and Roche-lobe radius (dot-dashed curve) of the secondary; panel (b): the orbital period (solid curve); panel (c): the mass of the secondary (solid curve), of its hydrogen-exhausted core (dotted curve), and of the primary (dot-dashed curve); panel (d): the mass-loss rate from the secondary (solid curve); the inset shows a blow-up of the second slow mass-transfer phase (hydrogen shell burning). The dashed curves in panels (b) and (d) show the orbital period and mass-transfer rate for the case B calculation for comparison.
# The optical counterparts to Be/X-ray binaries in the Magellanic Clouds
## 1 Introduction
The Magellanic Clouds (MC’s) present a unique opportunity to study stellar populations in galaxies other than our own. Their structure and chemical composition differs from that of the Galaxy, yet they are close enough to allow study with modest sized ground based telescopes. The study of any stellar population in an external galaxy is of interest as any differences with the same population in our own Galaxy will have implications on the evolutionary differences of the stars within the galaxies. A High Mass X-ray Binary (HMXB) consists of a compact object (neutron star or black hole) in orbit around a non-degenerate massive (OB type) star. X-ray emission is the result of accretion of material onto the compact object from the massive companion. The HMXB’s can be divided into those with supergiant companions, and those with Be star companions. A Be star is defined as an early type luminosity class III–V star which has at some time shown emission in the Balmer lines. This Balmer emission, along with a significant infrared excess, is believed to originate in the circumstellar material which forms a disc around the star. The X-ray emission in these systems is transient in nature, as the orbit is wide and eccentric, and the neutron star passes through the densest regions of the circumstellar disc at periastron only. In the supergiant systems, the companion fills or is close to filling its Roche lobe, and mass transfer occurs through Roche lobe overflow or via a strong stellar wind removing $`\sim 10^{-8}M_{}\text{ yr}^{-1}`$. These systems tend to be more persistent than the Be/X-ray binaries, sometimes showing flaring events on short timescales. (For a comprehensive review of X-ray binaries, see Lewin, van Paradijs & van den Heuvel 1995.)
Observations of the HMXB’s in the Magellanic Clouds appear to show marked differences in the populations. The X-ray luminosity distribution of the MC sources appears to be shifted to higher luminosities relative to the Galactic population. There also seems to be a higher incidence of sources suspected to contain black holes (see Clark et al. 1978; Pakull 1989; Schmidtke et al. 1994). Clark et al. (1978) attribute the higher luminosities to the lower metal abundance of the MC’s, whilst Pakull (1989) refers to evolutionary scenarios of van den Heuvel & Habets (1984) and de Kool et al. (1987) which appear to favour black hole formation in low metal abundance environments.
In order to study the differences between the HMXB populations of the Magellanic Clouds and the Galaxy, it is desirable to determine the physical parameters of as many systems as possible. We can then investigate whether the distributions of mass, orbital period, or spectral type are significantly different. Because of the small sample size of known Be/X-ray binaries in the Magellanic Clouds we have searched the fields of a number of unidentified X-ray sources suspected to be HMXB’s in an attempt to identify more Be/X-ray binaries.
Table 1 lists the X-ray sources in the Magellanic Clouds observed during this study. The sample was chosen to include unidentified X-ray sources from which pulsations have been detected, or which show other characteristics that strongly suggest an HMXB status.
In Table 1, column 4 gives the uncertainty in X-ray position in arcseconds. Many of the sources are ROSAT detections with uncertainties of the order of a few arcseconds. In galactic fields, sub-ten-arcsecond resolution would normally be adequate for an unambiguous optical identification, but owing to the crowded nature of Magellanic Cloud fields, even some of the ROSAT sources have X-ray positional uncertainties that allow several possible optical candidates.
We obtained CCD images of the fields through $`BV(R)_C`$ and H$`\alpha `$ filters in order to identify early type stars within the fields, and to search for H$`\alpha `$ emission from these stars. We have also obtained medium- and low-resolution spectroscopy of most candidates in order to confirm the presence of H$`\alpha `$ emission, and to measure radial velocities to allow confirmation of SMC or LMC membership.
In the following sections we describe in more detail the observations and subsequent analysis, and present resulting data for identified candidates.
## 2 Observations
### 2.1 CCD Photometry
CCD imaging was performed at the SAAO during 1996 October 1–7. All observations were made using the 1.0-m telescope and Tek8 CCD, plus the 3$`\times `$ Shara focal reducer. The resulting pixel scale was 1.05” per pixel, with a total image size of 519$`\times `$519”. All fields were observed through R and H$`\alpha `$ filters. In addition, most fields were observed through B and V filters. The H$`\alpha `$ filter used was an interference filter centered on 6562 Å, with a width of 50 Å. A complete log of observations is shown in Table 2 and finding charts for all the targets are presented in Figure 1.
The use of the focal reducer, whilst necessary to provide a field of view adequate to search larger X-ray error circles, introduced significant vignetting which was not satisfactorily removed by flat-field corrections. Analysis showed that flat-field errors were below the 1% level within 4’ of the image centre. In subsequent analysis we therefore rejected any measurements of objects that lay further than 4’ from the image centre.
Due to the crowded nature of the fields, profile fitting photometry was necessary. PSF fitting photometry was carried out using IRAF/DAOPHOT. In each field between 30 and 50 stars were used to model the PSF. Instrumental magnitudes were transformed to the standard system using observations of a set of E and F region standards.
An H$`\alpha `$ magnitude scale was calibrated by defining the zero point such that the R–H$`\alpha `$ index has a value of zero for main sequence, non-emission line stars, and becomes positive for emission line stars.
For each field observed, an emission-colour diagram was plotted, with the R-H$`\alpha `$ emission index on the vertical axis and the B-V colour index on the horizontal axis. As demonstrated by Grebel (1997), when such a diagram is plotted for Magellanic Cloud fields, where all objects lie at the same distance and are affected by the same reddening, Be stars can be clearly distinguished by their blue colour and high emission index. Comparing photometry of other objects observed during this run with photometry obtained on other occasions without use of the focal reducer indicated that errors of up to 0.1 mag could have resulted from fitting a poorly sampled PSF. In addition, some observations were made in conditions which were slightly below photometric quality, introducing systematic errors. As the method of identification of Be star candidates is through their relative positions in the emission-colour diagram, systematic errors do not undermine our results.
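In practice the selection reduces to a simple cut in this diagram. The sketch below assumes arrays of calibrated DAOPHOT magnitudes; the emission cut of 0.2 mag is the criterion used later in Section 3.7, while the exact blue colour cut is an illustrative choice.

```python
# Be-candidate selection from the emission-colour diagram.
import numpy as np

def be_candidates(B, V, R, Ha, min_excess=0.2, max_bv=0.2):
    """Indices of stars that are both blue and Halpha-bright."""
    emission_index = R - Ha     # zero for non-emission main-sequence stars
    colour = B - V
    return np.where((emission_index >= min_excess) & (colour < max_bv))[0]
```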
### 2.2 Optical spectroscopy
Optical spectra were obtained with the 1.9-m telescope at the SAAO and the Cassegrain spectrograph with SITe 1 CCD. On 1998 February 3 and 4 spectra were obtained with a spectral range of 6295–7042 Å and dispersion of 0.43 Å/pixel. On 1998 February 5, low resolution spectra were obtained with a range of 3800–7780 Å, and a dispersion of 2.3 Å/pixel. A log of spectroscopic observations is shown in Table 3. All spectra were reduced using tasks in IRAF’s KPNOSLIT package. Spectra were optimally extracted, and wavelength scales were applied using arc-lamp spectra obtained before and after each target observation.
On 1998 February 5, a flux standard was observed, as well as a smooth spectrum standard for the removal of telluric features. All low resolution spectra have subsequently been flux calibrated, and telluric features to the red of H$`\alpha `$ have been removed.
## 3 Results and discussion of individual sources
### 3.1 RX J0032.9-7348
This source was discovered by Kahabka & Pietsch (1996, hereafter KP1996) in ROSAT pointed observations made in 1992 December and 1993 April. The unabsorbed bolometric luminosity they derive from the 1993 observations is $`2.5\times 10^{36}\text{ erg s}^{-1}`$, whilst the 1992 December flux was a factor of $`\sim `$6 less. From the X-ray spectrum, the length of the X-ray high state (at least 5 days in 1993 April), and the long term variability, KP1996 propose a likely HMXB nature for the source. The X-ray position was determined to an accuracy of $`\pm `$62 arcseconds (KP1996).
We obtained $`BVR`$ and H$`\alpha `$ images of the field on the night of 1996 October 1. Figure 2 shows the emission-colour diagram for stars in the field, the left hand plot showing all stars in the field, the right hand plot showing only those stars within a field centered on the X-ray position, with a radius of 124 arcseconds (twice the positional uncertainty). On this plot, we further identify those stars that lie within the X-ray error circle by plotting with filled circles. The field population is predominantly evolved red stars, with only six early type stars detected within the 124 arcsecond area. Of these early type stars, two show clear excess H$`\alpha `$ flux. The strongest H$`\alpha `$ excess is seen from a star which lies within the X-ray error circle ($`\sim `$12 arcseconds from the X-ray position, marked as Object 1 in Figures 1(a) and 2). The other H$`\alpha `$ source (object 3 in Figure 2) lies $`\sim `$100 arcseconds to the north of the X-ray position. Besides object 1, only one other object within the error circle is identified as an early type star from its B-V colour (object 2, which we identify with GSC 0914101338).
Our medium resolution spectrum of Object 1 (Fig. 3(a)) shows H$`\alpha `$ in emission with an equivalent width of EW(H$`\alpha `$) = -35 Å. No He I (6678 Å) feature is seen above the level of the continuum noise. The H$`\alpha `$ line is single peaked, and centered on $`6566.5\pm 0.5`$ Å. Assuming that the deviation from the rest wavelength of H$`\alpha `$ is purely due to the radial velocity of the star, we derive a velocity of $`171\pm 23\text{ km s}^{-1}`$, consistent with the systemic radial velocity for the SMC of $`166\pm 3\text{ km s}^{-1}`$ found by Feast (1961). This object is the most probable optical counterpart for RX J0032.9-7348 (but see the discussion on chance probability in Section 4).
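The velocities quoted here and below follow directly from the Doppler shift of the H$`\alpha `$ centroid; small differences with respect to the quoted values reflect the adopted rest wavelength.

```python
# Radial velocity from the Halpha line centre.
C_KMS = 299792.458
HA_REST = 6562.8               # Angstrom (adopted rest wavelength)

def radial_velocity(lam_obs, dlam=0.5):
    v = (lam_obs - HA_REST) / HA_REST * C_KMS
    return v, dlam / HA_REST * C_KMS

print(radial_velocity(6566.5))   # ~(169, 23) km/s, cf. 171 +/- 23 above
```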
### 3.2 RX J0049.1-7250
This source was discovered with ROSAT (KP1996) in pointed observations. It appears highly absorbed, and is variable by a factor of more than 10. KP1996 concluded that the source probably lay behind the SMC, with a maximum luminosity of $`10^{38}\text{ erg s}^{-1}`$, but they could not rule out a time-variable background AGN nature for the source.
A new X-ray pulsar was discovered by the RXTE satellite during observations centered on the position of SMC X-3. Pulsations were detected with a period of 74.8$`\pm `$0.4 seconds (Corbet et al. 1998b). Follow up observations with the ASCA satellite made on 1997 November 13 detected pulsations with a period of 74.675$`\pm `$0.006 seconds (Yokogawa & Koyama 1998). The 2 arcminute X-ray error circle from the ASCA observations contained the ROSAT error circle for RX J0049.1-7250. Kahabka & Pietsch (1998) report that the source showed a high degree of variability, having gone undetected in a 35 ksec ROSAT HRI observation on 1997 May 9-25, and concluded that a Be/X-ray binary nature was probable for the source.
A finder chart is shown in Figure 1(b) with the ROSAT error circle marked. An emission-colour diagram similar to Figure 2 was produced for this field. Only one Be star is found within the X-ray error circle, lying only 3 arcseconds from the X-ray position (marked as Object 1 in Fig. 1(b)). A second Be star (Object 2 in the figure) lies 25 arcseconds from the X-ray position.
Based on the positional coincidence, Object 1 is most probably the counterpart to the X-ray pulsar, although Object 2 cannot be wholly dismissed at this stage.
### 3.3 AX J0051-722
This source was first detected as a 91.12 second pulsar in RXTE observations (Corbet et al. 1998a), although it was initially confused with the nearby 46 second pulsar 1WGA J0053.8-7226 (Buckley et al. 1998). Further observations with ASCA revealed two pulsars in the field with an approximate 2 to 1 ratio in periods, the 91 second period belonging to the new source, AX J0051-722, whilst observations with ROSAT reduced the positional uncertainty to 10 arcseconds. We performed spectroscopic observations of the brightest object in this error circle on 1998 February 3 (see finder chart in Figure 1(c)). The spectrum (Figure 3(b)) shows the H$`\alpha `$ line strongly in emission, with an equivalent width of -22 Å. The centre of the line corresponds to a velocity of $`165\pm 23\text{ km s}^{-1}`$, consistent with SMC membership.
We have no photometry of objects in this field, but estimate $`V\sim 15`$ from Digitised Sky Survey images. This, together with the H$`\alpha `$ emission and radial velocity, indicates an early Be star in the SMC. With an X-ray positional uncertainty of only 10 arcseconds, we conclude that this Be star is the optical counterpart to the X-ray pulsar.
### 3.4 RX J0051.8-7231
RX J0051.8-7231 was discovered in Einstein observations. The X-ray error circle included the bright SMC star AV 111 (Figure 1(d)), which was suggested as the optical counterpart (Bruhweiler 1987, Wang & Wu 1992).
The source displays time variability, as demonstrated by an increase in luminosity by more than a factor of ten in $`\sim `$1 year, and by a factor of $`\sim `$5 in 5 days (Kahabka & Pietsch 1996). The X-ray luminosity, time-variability and hard spectrum led Kahabka & Pietsch to suggest a Be/X-ray binary nature for the source.
Israel et al. (1995) detected pulsations from the X-ray source, with a period of 8.9 seconds. They subsequently obtained optical CCD images of the field (Israel et al. 1997), and concluded that both AV111 and a second star (Star 1) within their error circle displayed H$`\alpha `$ activity.
In fact, neither AV111 nor Star 1 lies within the error circle derived by Kahabka & Pietsch, which contains only one object visible in the DSS image, star 2. On 1998 February 2 we obtained a medium resolution H$`\alpha `$ spectrum of this object, which is shown in Figure 3(c). H$`\alpha `$ is in emission with EW(H$`\alpha `$) = -3.0$`\pm `$1.0 Å.
The ROSAT error circle of Kahabka & Pietsch seems to favour the identification of star 2 as the optical counterpart to RX J0051.8-7231, though in the light of the H$`\alpha `$ activity detected by Israel et al. (1997) we cannot rule out star 1, whilst AV111 appears a less likely candidate.
### 3.5 1WGA J0054.9-7226
This source appears in a number of catalogues based on Einstein observations of the SMC (Inoue, Koyama & Tanaka 1983; Bruhweiler et al. 1987; Wang & Wu 1992), and was detected by ROSAT (KP1996). Kahabka and Pietsch (KP1996) rejected an X-ray binary nature for the source on the grounds of the lack of time variability (on a timescale of hours).
Observations made with the $`RXTE`$ satellite on 1998 January 20 detected a 59 second pulsar, with a positional uncertainty of $`\pm `$10 arcminutes, consistent with the position of RX J0054.9-7226 (Marshall & Lochner 1998). Subsequent observations with $`SAX`$ reduced the positional uncertainty to a radius of 50 arcseconds, confirming the identification with RX J0054.9-7226, and refining the pulse period to 58.969 seconds (Santangelo & Cusumano 1998). The uncertainty in X-ray position was further reduced to a 10 arcsecond radius from the analysis of archival ROSAT data by Israel (1998).
We obtained $`BVR`$ and H$`\alpha `$ images of the field on the night of 1996 October 5. An emission-colour diagram for the field revealed four objects within the X-ray error circle, all identified as early type stars by their B-V colours. Of these, only one shows strong H$`\alpha `$ emission, with an R-H$`\alpha `$ value of 0.49 (the object indicated in Figure 1(e)).
We obtained spectra of Object 1 on the nights of 1998 February 4 and 5. The medium resolution H$`\alpha `$ spectrum obtained is shown in Figure 3(d). The H$`\alpha `$ line shows strong emission, with an equivalent width of EW(H$`\alpha `$) = -25$`\pm `$2 Å, and a radial velocity of $`137\pm 28\text{ km s}^{-1}`$, consistent with SMC membership. The low resolution spectrum in Figure 4 shows H$`\beta `$ also clearly in emission, with EW(H$`\beta `$) = -2.0$`\pm `$1.0 Å.
### 3.6 RX J0101.0-7321
This X-ray source was discovered in ROSAT pointed observations in 1991 October. Observations approximately 6 months later failed to detect the source, suggesting a transient nature (Kahabka & Pietsch 1996). KP1996 claim that the source is most likely associated with a 15-16th magnitude Be star. To the authors’ knowledge, no observations of this star have previously been published.
We obtained only $`R_C`$ and H$`\alpha `$ images of the field of RX J0101.0-7321, on 1996 October 1. A plot of the measured H$`\alpha `$ magnitudes against the measured $`R_C`$-band magnitudes shows that a clear linear relationship exists between R and H$`\alpha `$; the scatter at magnitudes R $`>`$ 15.5 is mostly due to uncertainties in the H$`\alpha `$ magnitudes. Three points show H$`\alpha `$ excesses which appear to be much greater than the local scatter of points. Objects 2 and 3 each lie a few arcminutes from the X-ray position, whilst the error associated with this position is only 11 arcsec. Object 1 is only 10 arcsec from the X-ray position (see the finder chart in Figure 1(f)).
On 1998 February 3 we obtained an H$`\alpha `$ spectrum of Object 1. The spectrum, shown in Figure 3(e), has a strong H$`\alpha `$ emission line, with EW(H$`\alpha `$) = -60 Å. The peak of the H$`\alpha `$ line is at a wavelength of 6566.0$`\pm `$0.5 Å, corresponding to a velocity of $`148\pm 23\text{ km s}^{-1}`$. These data confirm that Object 1 is a Be star in the SMC. As this is the only such object within several arcminutes of the X-ray position, we identify Object 1 as the optical counterpart, and confirm a Be/X-ray binary nature for this X-ray source.
### 3.7 EXO 0531.1-6609
This source was discovered by EXOSAT during deep observations of the LMC X-4 region in 1983 (Pakull et al. 1985). It was detected again in 1985 by the SL2 XRT experiment. The lack of detection in EXOSAT observations made between these dates demonstrates the transient nature of the source. The object was identified with a Be star by Pakull (private communication). The counterpart proposed by Pakull is the northern component of a close double.
The components of this double are marked 1 and 2 in Figure 1(g). The positions of the two objects in the emission-colour diagram show that both are early type stars. With our chosen criterion that a Be star has R-H$`\alpha \ge 0.2`$, Object 1 is the only Be star within the X-ray error circle, with R-H$`\alpha `$ = 0.21; Object 2 has R-H$`\alpha `$ = 0.15. Within the uncertainties, however, our data do not favour one object over the other as the emission-line object.
On 1998 February 3 we obtained a spectrum of the northern component of the double. The resulting spectrum, shown in Figure 3(f), confirms the presence of H$`\alpha `$ emission, with EW(H$`\alpha `$) = -10.0$`\pm `$1.0 Å. The line is centered on a wavelength of 6568.7$`\pm `$0.5 Å, corresponding to a velocity of $`272\pm 23\text{ km s}^{-1}`$. On 1998 February 5 we obtained a low resolution, flux calibrated spectrum of this object (shown in Figure 5), showing H$`\beta `$ also in emission with EW(H$`\beta `$) = -0.5$`\pm `$0.2 Å. No spectrum has yet been obtained of the southern component of the double.
In order to determine which object is the counterpart to the X-ray source, it will be necessary to obtain an X-ray position to sub-arcsecond accuracy, possible with the forthcoming AXAF mission, or to find optical/infrared variations in one of the objects which correlate with X-ray behaviour.
### 3.8 H0544-655
This source was discovered with the HEAO-1 scanning modulation collimator by Johnston, Bradt & Doxsey (1979). The brightest object within the X-ray error circle (star 1 in Figure 1, and in Figure 6 of Johnston, Bradt & Doxsey 1979) was found to be a variable B0-1 star (van der Klis et al. 1983 and references therein) but no emission lines have been observed in its spectrum to identify it as a Be star. van der Klis et al. (1983) published photometry which showed a negative correlation between optical magnitudes and colour indices, typical of Be stars whose variability is due to variations in the circumstellar disc. The authors expressed concern at the lack of other obvious Be star spectral characteristics, but suggested that the object may be a Be star in a low state of activity.
An emission-colour diagram for objects in the field was created as previously described. In the 4 arcminute radius area searched, only one object displays the colours indicative of a Be star. This object is identified as Object 1 in Figure 1, and corresponds to Object 1 of Johnston, Bradt & Doxsey (1979).
We obtained optical spectra of this object in 1998 February; these are shown in Figures 3(g) and 6. The H$`\alpha `$ line is clearly in emission, with an equivalent width measured from the medium resolution spectrum (Figure 3(g)) of EW(H$`\alpha `$) = -8.7$`\pm `$1.0 Å. The profile is double peaked with a peak separation of $`181\pm 30\text{ km s}^{-1}`$; the mean velocity of these peaks is $`282\pm 20\text{ km s}^{-1}`$, consistent with the LMC velocity of $`275\text{ km s}^{-1}`$ given by Westerlund (1997), but lower than the $`369\pm 42\text{ km s}^{-1}`$ measured for Balmer absorption lines in this object by van der Klis et al. (1983). The low resolution spectrum in Figure 6 also shows weak H$`\beta `$ emission with EW(H$`\beta `$) = -0.7$`\pm `$0.5 Å.
To the authors’ knowledge, these observations represent the first detection of emission lines in the spectrum of this star. We confirm an LMC Be star nature and, owing to the lack of other emission line objects in or near the X-ray error circle, we conclude that this star is the optical counterpart of the X-ray source.
## 4 Discussion
The number of Be/X-ray binary systems known in the Magellanic Clouds now stands at 12 in the SMC and 7 in the LMC. These numbers compare with 29 known in our own Galaxy. Scaling simply by mass, we should expect the numbers of Be/X-ray binaries in the LMC and SMC to be 0.1 and 0.01 times the number in the Galaxy, respectively. In fact we find, especially in the SMC, what appears at first to be an abnormally large Be/X-ray binary population.
The discrepancy might be explained through consideration of a number of issues:
a) Studies of cluster populations in the SMC have shown a higher proportion of Be stars amongst early type stars than in the Galaxy; this may be due to the effects of metallicity on a radiatively driven wind. If a higher proportion of B type stars in the SMC have circumstellar envelopes, then for a given population of B star/Neutron star binaries, more SMC systems would be accreting X-ray systems.
b) The star formation history of the SMC may have resulted in a hump in the stellar age distribution such that a higher proportion of SMC stars are of the appropriate age to have evolved into Be/X-ray binary systems. Such a scenario would require an increase in star formation activity $`\sim 10^7`$ years ago. Supporting evidence for such an effect comes from the HI work of Staveley-Smith et al. (1997), who deduce from studies of six expanding shells in the SMC a dynamical age of 5.4 Myr. They conclude that there must have been an exceptional degree of coherent star formation throughout the SMC at this time.
The photometric method used to identify Be star candidates has proved successful. In each case where spectroscopic observations have been made, the Be star nature of the object has been confirmed. The strength of this approach is illustrated by Figure 7, which shows the relationship between the measured R–H$`\alpha `$ index from photometric data and the EW(H$`\alpha `$) of each object for which both such data were available. There appears to be a direct correlation, with the exception of one possible point – that of RX J0101.0-7321. This seemingly anomalous point may be explained by variability, as the photometric and spectroscopic observations were made over 1 year apart. The indication then is that the Be star counterpart in this system underwent some change between 1996 October and 1998 February, which resulted in a large increase in the amount of circumstellar material. Apart from this kind of inherent difficulty, this approach is clearly an excellent method for identifying the counterparts to High Mass X-ray Binaries.
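The near-linear correlation in Figure 7 can be rationalized with a simple counting argument: line emission of equivalent width $`|EW|`$ adds a fraction $`|EW|/W_{\mathrm{eff}}`$ to the flux in the narrow H$`\alpha `$ filter of width $`W_{\mathrm{eff}}=50`$ Å. The sketch below uses this argument, neglecting the continuum slope and any line flux outside the filter; it is our simplification, not a calibration derived from the data.

```python
# Expected R-Halpha index for a given Halpha emission equivalent width.
import math

W_EFF = 50.0                   # Angstrom, Halpha filter width

def index_from_ew(ew_abs):
    return 2.5 * math.log10(1.0 + ew_abs / W_EFF)

for ew in (10.0, 25.0, 35.0):
    print(f"|EW| = {ew:4.0f} A  ->  R-Ha ~ {index_from_ew(ew):.2f}")
# |EW| = 10 A -> 0.20 (cf. 0.21 for EXO 0531.1-6609);
# |EW| = 25 A -> 0.44 (cf. 0.49 for 1WGA J0054.9-7226).
```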
In addition, when examining so many fields one must be careful about the chance probability of finding a Be star unrelated to the X-ray source. From the 5 fields presented here, a total of 28 objects were found lying within twice the error circle radius with an H$`\alpha `$ index of (R-H$`\alpha `$)$`>`$0.2. The total area covered by this sample is 103,000 sq. arcsec. This gives an average Be star rate of almost exactly 1 per sq. arcmin, so clearly one has to be careful when working with arcminute-size error circles (i.e. RX J0032.9-7348 in this work). Ultimately, either the error circles have to be reduced by a significant factor, or simultaneous variability has to be observed between the X-ray and the optical/IR bands, before definite confirmation of the counterpart is established.
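Treating the measured surface density as a Poisson rate makes the point explicit:

```python
# Probability of at least one chance Be star inside an error circle.
import math

RHO = 28.0 / 103000.0          # Be stars per sq. arcsec (this work)

def p_chance(radius_arcsec):
    return 1.0 - math.exp(-RHO * math.pi * radius_arcsec**2)

for r in (3, 10, 62):
    print(f'r = {r:2d}"  ->  P = {p_chance(r):.2f}')
# r =  3" -> 0.01 and r = 10" -> 0.08, but r = 62" -> 0.96: within an
# arcminute-sized error circle a chance coincidence is almost guaranteed.
```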
## 5 Conclusions
We have identified several X-ray sources with Be stars in the Small Magellanic Cloud and demonstrated a reliable technique for doing so. The exceptionally large number of these systems in the SMC can probably be attributed to an unusually large amount of star formation $`\sim 10^7`$ years ago.
### Acknowledgments
We are grateful to the staff at the SAAO for their assistance, and to Dr. M. Pakull for communications regarding certain sources. The Shara Focal Reducer is on loan from Dr Michael Shara (STScI). JBS acknowledges the receipt of a Southampton University studentship.
# Two-dimensional pattern formation in surfactant-mediated epitaxial growth
## Abstract
The effects of a surfactant on two-dimensional pattern formation in epitaxial growth are explored theoretically using a simple model, in which an adatom becomes immobile only after overcoming a large energy barrier as it exchanges positions with a surfactant atom, and subsequent growth from such a seed is further shielded. Within this model, a fractal-to-compact island shape transition can be induced by either decreasing the growth temperature or increasing the deposition flux. This and other intriguing findings are in excellent qualitative agreement with recent experiments.
Because of stress effects, heteroepitaxial growth typically proceeds via the formation of three-dimensional (3D) islands, leading to rough films. However, it was discovered nearly a decade ago that the use of a surfactant can lead to layer-by-layer growth and drastically reduced film roughness. Since then, much effort has been devoted to the study of surfactant-mediated growth in both hetero- and homo-epitaxial systems. In these studies, it has been observed that the surfactant atoms can not only modify the 3D growth mode, but often induce the formation of fractal-like 2D islands. To date, little effort has been devoted to the understanding of the precise formation mechanisms for such fractal islands in the presence of surfactant atoms. Such understanding is vitally important because the morphology and the distribution of the 2D islands formed at submonolayer coverages can severely influence the growth mode in the multilayer regime.
Two-dimensional pattern formation is itself an important area of statistical physics. In their classic work, Witten and Sander demonstrated that a fractal island can be formed when random walkers join a seed by hit-and-stick (without any relaxation). More recently, 2D pattern formation within the context of dynamical island growth in submonolayer epitaxy has become the subject of intensive study, to a large extent advanced by the capability of scanning tunneling microscopy (STM) in characterizing such islands. These studies have firmly established that islands can become more fractal-like if the growth temperature is decreased at a given deposition flux, or the deposition flux is increased at a given growth temperature. However, most of the earlier studies of 2D pattern formation had been focused on model homo- or hetero-epitaxial systems without surfactants. Only very recently have the effects of Pb as a surfactant on the formation of 2D Ge islands grown on Si(111) been studied systematically by Hwang et al. They found, most surprisingly, that the fractal-to-compact transition is induced by lowering the temperature or by increasing the deposition flux. These observations are in clear contradiction with traditional expectations, and the underlying physical mechanisms for such transitions are still unclear. Michely et al. have also observed a compact-to-fractal transition of Pt islands on Pt(111) by decreasing the deposition flux, possibly caused by the presence of CO impurities. In another experiment of Sb-induced growth of C60 films on NaCl(100), a compact-fractal-compact transition was observed by increasing the temperature.
In this Letter, we use a novel model to explore the effects of a monolayer of surfactant atoms on 2D pattern formation in epitaxial growth. The model contains a minimum number of key assumptions, each based on sound physical grounds. First, an adatom needs to overcome a rate-limiting potential energy barrier in order to exchange positions with a surfactant atom and become immobile. Second, for other adatoms to join such a seed and form a stable island, they still need to overcome a repulsive potential energy barrier surrounding the seed. Third, only islands formed inside the surfactant layer are stable. Our study of this simple model leads to various intriguing findings on both the morphology and the distribution of the 2D islands formed under surfactant action. Most notably, a fractal-to-compact island transition can be induced by either decreasing the growth temperature or increasing the deposition flux. We also obtain the characteristic dependences of the island density as a function of temperature ($`T`$), flux ($`F`$), and coverage ($`\theta `$), and rationalize our findings based on a simple physical picture emphasizing the shielding effect on the incoming adatoms by the surfactant atoms surrounding the islands. Our findings are in excellent qualitative agreement with the observations of Ge growth on Pb-covered Si(111), and may find different degrees of applicability in other surfactant-induced growth systems as well.
We start with an ideally flat substrate of material A, covered with a complete surfactant layer of material S. Atoms of a different type, B, are deposited onto the surfactant layer at a given deposition rate. We consider the case where the coverage of S is sufficiently high, such that the adatom islands, once formed, are always surrounded by the surfactant atoms. As our first study, we consider a simple model that catches the essential physics involved in the shape transitions but with a minimum number of input parameters (hereafter referred to as the first model). Three elementary rate processes are emphasized in this model: diffusion ($`dif`$) of a B-type atom on top of the surfactant layer; a B-type adatom diving ($`d`$) from above to below the surfactant layer (via place exchange with an S-type atom); and the aided diving ($`ad`$) of a subsequent B atom to join the first one. We denote the activation barriers of these three processes by $`V_{dif}`$, $`V_d`$, and $`V_{ad}`$, respectively, and the corresponding rates by $`R_{dif}`$, $`R_d`$, and $`R_{ad}`$, with $`R=\nu \mathrm{exp}(-V/kT)`$. The three barriers satisfy the inequality chain $`V_{dif}\ll V_{ad}<V_d`$. $`V_{dif}`$ is the smallest because adatom diffusion is often significantly enhanced due to the passivation of the surface by the surfactant layer. $`V_d`$ is the largest, making it the rate-limiting process for eventual formation of a stable island. The first inequality reflects the fact that there exists an effective repulsive wall surrounding the seed atom or an island formed underneath the surfactant layer. The existence of such a repulsive potential to the incoming adatoms due to the presence of the surfactant atoms surrounding an island has been proposed previously, and its effect on the island density has been explored very recently. An isolated adatom can hop with the rate $`R_{dif}`$, or exchange down with the rate $`R_d`$. Here, for simplicity, the in-plane mobility of the B atoms underneath the surfactant layer is taken to be negligible. We also ignore the reverse exchange process in which a B-type atom resurfaces to the top of the surfactant layer, corresponding to the case where a B atom strongly favors the underneath site. On the other hand, islands formed on top of the surfactant layer can still dissociate, and are therefore unstable. When a B atom hops to a site which has $`n_d`$ static B-type nearest neighbors, it remains stuck there until it exchanges down with the rate $`n_dR_{ad}`$. Because the activation barrier for this process must be in between $`V_{ad}/n_d`$ and $`V_{ad}`$, for simplicity we choose the rate $`n_dR_{ad}`$ to take into account the effect of the $`n_d`$ neighboring static atoms without introducing another parameter at this stage. Using the definition of classical nucleation theory, we actually have two critical island sizes: $`i^{}=\mathrm{\infty }`$ and $`i^{}=0`$ for the upper layer and the lower layer, respectively. In contrast to the earlier irreversible “hit-and-stick” or “hit-stick-relax” models, the stable islands in the present study consist only of down-exchanged atoms. Overall, the current model is consistent with the fact that the binding energy between A and B is typically much larger than that between A and S.
We primarily use kinetic Monte Carlo (KMC) simulations to study this model; later we also briefly describe the main results from rate equation analysis. The KMC simulations were carried out on a square $`200\times 200`$ lattice, though simulations using a lattice of triangular geometry yield qualitatively similar conclusions. We take a small diffusion barrier $`V_{dif}=0.59`$ eV, reflecting the fast adatom diffusion on top of the surfactant layer. The barriers of the exchange (diving) and the aided exchange processes are taken as $`V_d=0.90`$ eV and $`V_{ad}=0.82`$ eV, respectively. A common attempt frequency $`\nu =4.1671\times 10^{10}T`$ s$`{}^{-1}`$ is used for all processes, with $`T`$ given in K.
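With these parameters the hierarchy of rates, and hence the strength of the shielding, can be read off directly (a minimal sketch; $`k_B`$ in eV/K):

```python
# Arrhenius rates of the three elementary processes of the first model.
import math

K_B = 8.617e-5                          # eV/K

def rate(V, T):
    return 4.1671e10 * T * math.exp(-V / (K_B * T))   # nu(T) * Boltzmann

for T in (300.0, 340.0):
    r_dif, r_ad, r_d = (rate(V, T) for V in (0.59, 0.82, 0.90))
    print(f"T = {T:.0f} K:  R_dif = {r_dif:.1e}  R_ad = {r_ad:.1e}"
          f"  R_d = {r_d:.1e}  s^-1")
# At 300 K an adatom makes ~1.6e5 hops per dive and a stuck edge atom
# waits ~5 s before its aided exchange, so later arrivals are shielded;
# at 340 K that wait drops to ~0.1 s and the shielding becomes weak.
```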
The temperature dependence of the island shapes obtained at $`F=0.005`$ ML/s and $`\theta =0.1`$ ML is shown in Fig. 1. At 300 K, the islands are typically compact (Fig. 1a); at 340 K, the islands are typically fractal-like. The transition from compact to fractal patterns takes place approximately at 315 K. Fig. 1b shows the pattern at 310 K, just below the transition temperature; the islands are still compact, though there are some ramified structures in the outer part of the islands. Fig. 1c is the pattern at 320 K, just above the transition; here the islands are predominantly fractal-like.
The intriguing temperature dependence described above can be understood by considering the shielding effect of the adatoms stuck around the edge of a nucleation seed or an island of down-exchanged atoms in the sublayer. At high temperatures, such surrounding adatoms can easily dive into the sublayer at their initial points of sticking; once they manage to exchange into the sublayer, their mobility is severely limited, making the whole situation very similar to classic hit-and-stick diffusion-limited aggregation. On the other hand, at lower temperatures, such stuck adatoms and the surfactant atoms surrounding them effectively block incoming adatoms from reaching a seed atom or an island in the sublayer. Therefore, these incoming adatoms have a chance to leave their initial points of sticking, and after some random walking can restick at different points of the same island. Such processes effectively lead to relaxation around the edge of an island, resulting in more compact island morphology.
Figure 2 shows an interesting nonmonotonic dependence of the island density on temperature. The minimum in island density is located right at the temperature at which the compact-to-fractal transition in island morphology has been observed. We note that the temperature dependence shown here is similar to that obtained previously by Meyer and Behm, but the two cases differ in physical origins. In their case, islands formed either by nucleation of two mobile adatoms or by meeting of one mobile and one trapped adatom are both stable, and the minimum in the island density as a function of the temperature is associated with the transition from the nucleation-dominant to the exchange-prominent region. In our case, islands formed on top of the surfactant layer are unstable, and the rate-limiting process for the nucleation of a stable island is the diving of an adatom into the sublayer. Therefore, our system is always in the exchange-prominent region. Nevertheless, when the temperature is low, a stable seed atom in the surfactant layer may not necessarily grow into a stable island because of the effective shielding of the incoming adatoms. But those seeds which manage to grow into islands will grow even faster as their sizes increase. The decrease in island density with temperature is caused by the increased mobility of the adatoms in searching for such islands. On the other hand, above the transition temperature, the shielding effect is very weak, and every seed atom is likely to grow into a stable island. The island density increase with temperature reflects the enhanced rate in creating such seeds.
Figure 3 displays the island patterns obtained at different deposition rates. Here the growth temperature is fixed at $`T=300`$ K and the coverage is again 0.1 ML. Fig. 3a is an ideal fractal pattern at a flux of 0.0001 ML/s. Fig. 3b is still a fractal pattern at a flux of 0.001 ML/s, though there are islands that are more compact. In Fig. 3c the flux is 0.0025 ML/s and the island shape already becomes compact. In Fig. 3d the islands are ideal compact patterns, obtained at a flux of 0.028 ML/s. Because the shielding effect increases with flux, the flux-induced fractal-to-compact transition is also driven by the shielding effect, but here the transition is from weak shielding to strong shielding as the flux increases.
Figure 4 shows the flux dependence of the stable island density, $`N_s`$, obtained at $`T=300`$ K and $`\theta =0.1`$ ML. The curve can be divided into three regimes: the low-flux fractal regime, where the dependence is very weak; the intermediate flux regime, where a scaling law can be well defined ($`N_s\propto F^\beta `$ with $`\beta =0.40`$); and the high-flux compact regime, where the island density has saturated. When the maximal saturation island density (obtained at different coverages) is plotted as a function of flux, a much larger scaling exponent is obtained ($`\beta =0.7`$).
We have also carried out a limited rate equation analysis of the above model. In this approach, we introduce the island perimeter $`L_d`$ and parameterize the area of an island by $`S_d=pL_d^q`$, where $`p`$ is a constant and $`q`$ is the dimension of the islands. For compact islands we have $`q=2`$ and for fractal patterns $`1<q<2`$. In the low-coverage limit, we have the following rate equations:
$$\frac{d}{dt}n_a=F\theta (t_0-t)-\alpha _dn_a-\alpha _bn_aN_s(L_d-n_b)$$

(1)

$$\frac{d}{dt}(N_sn_b)=-\alpha _en_bN_s+\alpha _bn_aN_s(L_d-n_b)$$

(2)

$$\frac{d}{dt}N_s=\alpha _dn_a$$

(3)
where $`n_a`$ is the density of the movable active atoms, $`n_b`$ is the number of edge atoms per stable island, $`\alpha _d`$ and $`\alpha _e`$ can be taken as $`R_d`$ and $`R_{ad}`$, and $`\alpha _b`$ is the capture constant. We have introduced the step function $`\theta (t_0-t)`$ to reflect the fact that the STM imaging was typically some time after the deposition time $`t_0`$. Letting $`N_a`$ be the total number of active atoms, one has $`N_a=n_a+N_sn_b`$. When the flux is low or the temperature is high, very few adatoms remain active at the end of deposition. The island patterns can then be determined mainly by the $`t<t_0`$ region. In such situations, a steady-state approximation can be made for $`N_a`$, similar to what has been done previously. Then a scaling law for the island density can be derived: $`N_s\propto F^{n_F}\theta ^{n_\theta }`$, where $`n_F=0`$, as suggested in the low-flux regime shown in Fig. 4; and $`n_\theta =(q-1)/(2q-1)`$. The constant $`q`$ can be easily determined from the island shapes only in the limit of very low flux or very high temperature. In the compact and intermediate regimes, both the $`t<t_0`$ and $`t>t_0`$ regions should be considered. It is then more involved to obtain a simple analytical scaling law.
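For a numerical check, Eqs. (1)–(3) can also be integrated directly once the perimeter is closed through $`S_d=pL_d^q`$. The closure used below (estimating $`S_d`$ from the mean number of down-exchanged atoms per island) and the values of $`\alpha _b`$, $`p`$ and $`q`$ are our assumptions; $`\alpha _d`$ and $`\alpha _e`$ are set to the 300 K values of $`R_d`$ and $`R_{ad}`$, and $`\alpha _b`$ is taken of order $`R_{dif}`$.

```python
# Direct integration of the rate equations (1)-(3), per lattice site.
from scipy.integrate import solve_ivp

F, t0 = 0.005, 20.0                # flux (ML/s), deposition time (theta = 0.1)
a_d, a_e, a_b = 9.5e-3, 0.21, 1.5e3    # alpha_d, alpha_e, alpha_b (see text)
p, q = 1.0, 2.0                    # compact islands

def rhs(t, y):
    n_a, B, N_s, M = y             # B = N_s*n_b; M = atoms inside islands
    n_b = B / N_s if N_s > 0 else 0.0
    L_d = (M / (N_s * p)) ** (1.0 / q) if N_s > 0 else 0.0
    capture = a_b * n_a * N_s * max(L_d - n_b, 0.0)
    return [F * (t < t0) - a_d * n_a - capture,   # dn_a/dt, Eq. (1)
            -a_e * B + capture,                   # d(N_s n_b)/dt, Eq. (2)
            a_d * n_a,                            # dN_s/dt, Eq. (3)
            a_d * n_a + a_e * B]                  # down-exchanged atoms

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0, 1e-12, 1e-12], rtol=1e-8)
print(f"N_s ~ {sol.y[2, -1]:.2e} islands per site")
```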
The temperature- or flux-induced fractal-compact transition in the island shapes predicted within the present model provides, on a qualitative level, the theoretical basis for the transitions observed in Pb-induced growth of Ge on Si(111). Because there are only three parameters in our first model, the agreement in the main features between theory and experiment should be viewed as excellent. In particular, the shielding effect emphasized in the present model plays the essential role in causing the shape transition.
We have tested the applicability of our simple model to more realistic growth systems in several respects, such as KMC simulations on a triangular lattice and variations of the three basic model parameters. We have also considered physical effects beyond the first model, including the binding energy of the adatoms in an island formed above the surfactant layer, detachment of the adatoms trapped around the edge of a stable island, and the possibility of simultaneous exchange of multiple adatoms. These lattice and parameter variations, as well as the improvements beyond the first model, do not change the central qualitative findings of the present work, namely, the counter-intuitive fractal-compact transition caused by the temperature or the flux. Of course, one should expect that, on a quantitative level, the island density will depend differently on the temperature, flux, and coverage. We should also note that the fractal-compact transitions predicted here are expected to be observable even if the stable islands are formed on top of the surface but surrounded by a sufficiently high coverage of impurity atoms, as long as those impurity atoms can effectively hinder the growth of the stable islands by shielding. Furthermore, the phenomena are not limited to heteroepitaxial growth only: even in surfactant-induced homoepitaxy, similar phenomena are likely to occur if the shielding effect associated with the impurity or surfactant atoms is sufficiently effective.
In summary, we have theoretically explored the effects of a monolayer of surfactant atoms on two-dimensional pattern formation in epitaxial growth by using a simple but physically sensible model. We find that a fractal-to-compact island transition can be induced by either decreasing the growth temperature or increasing the deposition flux. Furthermore, the flux and temperature dependence of the island density on the fractal side is very different from that on the compact side. The counter-intuitive predictions on the island morphological evolution can be rationalized based on the shielding effects. Our findings on the shape transitions are in excellent qualitative agreement with recent observations, while the predicted nonmonotonic temperature dependence of the island density is yet to be confirmed in future experimental studies.
We thank I.-S. Hwang and T. T. Tsong for stimulating discussions, and R. J. Behm and J. J. de Miguel for helpful correspondences. This research was supported by the Chinese Natural Science Foundation (Grant No. 19810760328), by the Chinese State Key Project of Basic Research on Rare Earth, by Oak Ridge National Laboratory, managed by Lockheed Martin Energy Research Corp. for the U.S. Department of Energy under Contract No. DE-AC05-96OR22464, and by the U.S. National Science Foundation (Grant No. DMR-9705406).
# Internal Jet Structure in Dijet Production in Deep-Inelastic Scattering
## 1 INTRODUCTION
The internal structure of jets is sensitive to the mechanism by which a complex aggregate of observable hadrons evolves from a hard process. It is expected that the internal structure of jets depends mainly on the type of the primary parton, quark or gluon, from which it originated and to a lesser extent on the particular hard scattering process. Measurements of the internal structure of jets have been made in $`p\overline{p}`$ collisions and in $`e^+e^{-}`$ annihilations. At the $`e^\pm p`$ collider HERA jet shapes have been investigated in photoproduction ($`Q^2\approx 0\mathrm{GeV}^2`$) and in deep-inelastic scattering at $`Q^2>100\mathrm{GeV}^2`$.
Here we present the measurements of internal jet structure in a sample of inclusive dijet events with transverse jet energies of $`E_{T,\mathrm{Breit}}>5\mathrm{GeV}`$ in the kinematic range of $`10<Q^2\le 120\mathrm{GeV}^2`$ and $`2\cdot 10^{-4}\le x_{\mathrm{Bj}}\le 8\cdot 10^{-3}`$. Jets are defined in the Breit frame by the $`k_{\perp }`$ and the cone jet algorithm. The analysis is based on data taken in 1994 with the H1 detector at HERA, operated with positrons of energy $`E_e=27.5\mathrm{GeV}`$ colliding with protons of energy $`E_p=820\mathrm{GeV}`$. The data correspond to an integrated luminosity of $`\mathcal{L}_{\mathrm{int}}\approx 2\text{pb}^{-1}`$. Two observables, jet shapes and subjet multiplicities, are studied. Both observables are corrected for detector effects and are presented as a function of the transverse jet energy and the jet pseudo-rapidity.
## 2 OBSERVABLES
The jet shape $`\mathrm{\Psi }(r)`$ is defined as the average fractional transverse jet energy that lies in a subcone of radius $`r`$ concentric with the jet axis
$$\mathrm{\Psi }(r)\equiv \frac{1}{N_{\mathrm{jets}}}\underset{\mathrm{jets}}{\sum }\frac{E_T(r)}{E_{T,\mathrm{jet}}}.$$
(1)
$`N_{\mathrm{jets}}`$ is the total number of jets in the sample and $`E_T(r)`$ is the transverse energy within a subcone of radius $`r`$. A natural variable for studying the internal structure of jets with the $`k_{\perp }`$ cluster algorithm is the multiplicity of subjets, resolved at a resolution scale which is a fraction of the jet’s transverse energy. For each jet in the sample the clustering procedure is repeated for all particles assigned to the jet. The clustering is stopped when the distances $`y_{ij}`$ between all particles $`i,j`$ are above some cut-off $`y_{\mathrm{cut}}`$
$$y_{ij}=\frac{\mathrm{min}(E_{T,i}^2,E_{T,j}^2)}{E_{T,\mathrm{jet}}^2}\frac{\left(\mathrm{\Delta }\eta _{ij}^2+\mathrm{\Delta }\varphi _{ij}^2\right)}{R_0^2}>y_{\mathrm{cut}}$$
(2)
where $`R_0`$ is set to $`R_0=1.0`$. The remaining (pseudo-)particles are called subjets. The parameter $`y_{\mathrm{cut}}`$ defines the minimal relative transverse energy between subjets inside the jet and thus determines the extent to which the internal jet structure is resolved.
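Both observables are straightforward to compute from a list of jet constituents. The sketch below assumes each particle is given as $`(E_T,\eta ,\varphi )`$ and uses a simple $`E_T`$-weighted recombination scheme for the subjet clustering; the function names and the recombination choice are ours, not those of the H1 analysis code.

```python
import numpy as np

def jet_shape(particles, jet_axis, r):
    """Psi(r) for a single jet, Eq. (1): E_T fraction inside subcone r."""
    eta_j, phi_j = jet_axis
    def dR(eta, phi):
        dphi = np.mod(phi - phi_j + np.pi, 2 * np.pi) - np.pi
        return np.hypot(eta - eta_j, dphi)
    et_in = sum(et for et, eta, phi in particles if dR(eta, phi) < r)
    return et_in / sum(et for et, _, _ in particles)

def n_subjets(particles, et_jet, y_cut, r0=1.0):
    """Recombine constituents until all y_ij of Eq. (2) exceed y_cut."""
    objs = [list(p) for p in particles]
    while len(objs) > 1:
        y, i, j = min((min(a[0], b[0])**2 / et_jet**2
                       * ((a[1] - b[1])**2 + (a[2] - b[2])**2) / r0**2, i, j)
                      for i, a in enumerate(objs)
                      for j, b in enumerate(objs) if i < j)
        if y > y_cut:
            break
        a, b = objs[i], objs[j]
        et = a[0] + b[0]   # E_T-weighted recombination (one common choice)
        objs = [o for k, o in enumerate(objs) if k not in (i, j)]
        objs.append([et, (a[0]*a[1] + b[0]*b[1]) / et,
                         (a[0]*a[2] + b[0]*b[2]) / et])
    return len(objs)
```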
## 3 RESULTS
The radial dependence of the jet shape $`\mathrm{\Psi }(r)`$ for the $`k_{\perp }`$ algorithm is shown in Fig. 1 in different ranges of the pseudo-rapidity and transverse jet energy in the Breit frame. The predictions of three QCD models are overlaid. The jet shape $`\mathrm{\Psi }(r)`$ increases faster with $`r`$ for jets at larger $`E_T`$, indicating that these jets are more collimated. Jets towards the proton direction (at larger values of $`\eta _{\text{Breit}}`$) are broader than jets towards the photon direction (smaller $`\eta _{\text{Breit}}`$).
The QCD models LEPTO, ARIADNE and HERWIG all show $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ dependencies similar to those seen in the data. LEPTO gives a good overall description of the data, although it tends to produce broader jets in the proton direction. A good description is also obtained by the ARIADNE model, except for jets at smaller pseudo-rapidities, where the jet shapes tend to be too narrow. For the HERWIG model the jet shapes are slightly narrower than those in the data in all $`E_{T,\mathrm{Breit}}`$ and $`\eta _{\text{Breit}}`$ regions.
Fig. 2 shows the jet shapes and the subjet multiplicities for the $`k_{\perp }`$ algorithm as predicted by the LEPTO parton shower model, separately for quark and gluon jets at $`E_{T,\mathrm{Breit}}>8\mathrm{GeV}`$ and $`\eta _{\text{Breit}}<1.5`$. Within this model gluon jets are broader than quark jets. The same prediction is obtained by the HERWIG parton shower model. In the phase space considered here, LEPTO and HERWIG (in agreement with next-to-leading order calculations) predict a fraction of approximately 80% photon-gluon fusion events with two quarks in the partonic final state. The jet samples of these models are therefore dominated by quark jets. Both model predictions for the jet shapes and the subjet multiplicities therefore mainly reflect the properties of the quark jets, as can be seen in Fig. 2. These predictions give a good description of the data. Thus we conclude that the jets we observe are consistent with being mainly initiated by quarks.
Fig. 3 shows the jet shapes for the cone algorithm in the backward region ($`\eta _{\text{Breit}}<1.5`$) for two regions of $`E_{T,\mathrm{Breit}}`$. As seen for the $`k_{\perp }`$ algorithm in Fig. 1, jets with larger transverse energy $`E_{T,\mathrm{Breit}}`$ are more collimated.
The results are compared to the jet shapes measured by OPAL in $`\gamma \gamma `$ collisions in similar $`E_T`$ regions. Although these jets are produced in a different scattering process, their jet shapes are very similar to those measured in $`ep`$ collisions. This indicates that the internal jet structure does not depend on the underlying hard process.
# Quantization ambiguity, ergodicity, and semiclassics
## Abstract
A simple argument shows that eigenstates of a classically ergodic system are individually ergodic on coarse-grained scales. This has implications for the quantization ambiguity in ergodic systems: the difference between alternative quantizations is suppressed compared with the $`O(\hbar ^2)`$ ambiguity in the integrable case. For two-dimensional ergodic systems in the high-energy regime, individual eigenstates are independent of the choice of quantization procedure, in contrast with the regular case, where even the ordering of eigenlevels is ambiguous. Surprisingly, semiclassical methods are shown to be much more precise for chaotic than for integrable systems.
For many years, it has been widely recognized that “quantizing” a given classical system is inherently an ambiguous procedure, as a large family of quantum Hamiltonians may have the same classical limit . For example, given the classical dynamics of a particle constrained to move on a closed loop, different choices of boundary condition give rise to different phases relating classical paths of different winding number. Knowledge of these Aharonov-Bohm phases is of course necessary to construct a semiclassical dynamics (which includes interference between classical paths), and thus many semiclassical theories correspond to the same classical dynamics. Physically, this $`O(\hbar )`$ or gauge ambiguity may be associated with the possibility of varying the magnetic flux enclosed by the loop.
This is not all, however: there are also many quantum theories, differing at $`O(\hbar ^2)`$ or higher in the Hamiltonian, which all have the same semi-classical limit. There are many ways of seeing this $`O(\hbar ^2)`$ ambiguity; one of the simplest is to imagine making a canonical transformation on the classical phase space, applying the canonical quantization prescription to the new coordinates, and then transforming back to the original coordinate system. Generically, one then obtains a new quantum Hamiltonian which differs from the original by $`O(\hbar ^2)`$ plus higher order terms:
$$\widehat{H}^{\prime }=\widehat{H}+\hbar ^2\widehat{A}+\mathrm{\cdots },$$
(1)
where the operator $`\widehat{A}`$ has a well-defined classical limit . As classical dynamics is of course independent of the choice of coordinate system, this implies that quantization is inherently ambiguous at second order in $`\hbar `$.
An even more striking case is that of a particle constrained to move on a two-dimensional surface embedded in three-dimensional space. Here, the non-trivial metric contained in the kinetic term gives rise to obvious operator-ordering ambiguities in canonical quantization; this has led to much discussion in the literature over whether a term proportional to the local Gaussian curvature $`R`$ of the surface should be added to the Hamiltonian, and if so, what the proportionality constant should be (different prescriptions suggesting $`\hbar ^2R/8`$, $`\hbar ^2R/6`$, and $`\hbar ^2R/12`$ as the “correct” answer) . In the path integral approach, ambiguities at the same order arise in choosing how to incorporate the metric into the kernel and in deciding at what point in the infinitesimal time interval to evaluate functions of the metric . Physically, one may define dynamics on a constraint surface through a limiting process, where the strength of restoring forces causing the particle to live on the surface is taken to infinity . Classically, this procedure is known to give an unambiguous constrained dynamics; in quantum mechanics, $`O(\hbar ^2)`$ differences arise depending on the precise way in which the strength of the constraining potential is taken to infinity at various places along the surface . The ambiguity here has a clear physical meaning: to determine the true quantum mechanics one needs to know the mechanism through which the particle is bound to the surface of constraint; it is not sufficient to know only the intrinsic properties of the surface itself. Similarly, there is no “correct” answer to the problem of quantizing a classical double pendulum : quantum dynamics at $`O(\hbar ^2)`$ is determined by the precise way in which one takes to infinity the rigidity of the two rods.
In a two-dimensional system, the energy spacing between adjacent levels is $`O(\hbar ^2)`$, i.e. of the same order as the quantization ambiguity demonstrated above. It would then seem not to be possible to uniquely determine the eigenvalues and eigenstates of a two-dimensional system given only the classical dynamics on the two-dimensional surface. Similarly, it should not be generally possible to compute semiclassically the levels and wavefunctions of a two-dimensional quantum system, because a given semiclassical calculation has many quantum theories corresponding to it, with Hamiltonians related as in Eq. 1. Recently, however, it was shown (and numerically confirmed) that in strongly chaotic systems long-time semiclassical methods can in fact be used to compute quantum properties to accuracy much better than a level spacing, i.e. that individual wavefunctions and eigenenergies can be semiclassically resolved . This creates an apparent paradox, to be resolved in the present paper.
The answer is somewhat surprising: it is found that (i) the quantization ambiguity is greatly reduced for classically ergodic as compared with regular systems; (ii) in two dimensions, the ambiguity for ergodic systems is small compared to a level spacing, in contrast with the integrable case where even the ordering of eigenlevels is ambiguous; and (iii) unexpectedly, semiclassical methods are valid to much longer times in strongly chaotic as compared with integrable systems, allowing individual eigenenergies to be easily resolved. The key to these surprising results is the coarse-grained ergodicity of individual wavefunctions in classically ergodic systems. More precisely, in the classical limit, the quantum expectation value of any classically defined operator over individual eigenstates must approach the ergodic, microcanonical average of the operator, for almost all eigenstates. In effect, the fraction of eigenstates that deviate from ergodicity when smoothed over a finite mesh size in phase space must tend to zero in the $`\hbar \rightarrow 0`$ limit. This behavior has been studied mathematically under a variety of technical assumptions by Shnirelman, Zelditch, and Colin de Verdiere . Here we present a simple physical argument that demonstrates the generality of the result.
Let $`\widehat{A}`$ be an arbitrary quantum operator having the smooth phase space function $`A(q,p)`$ as its classical limit. Now we select any energy $`E_0`$ and choose a classically small energy window $`\mathrm{\Delta }E`$ such that the microcanonical average $`a(E)`$ of the function $`A(q,p)`$ is as close as one likes to a constant, $`a_0`$, for all $`E\in [E_0,E_0+\mathrm{\Delta }E]`$. Notice that in the $`\hbar \rightarrow 0`$ limit, the classically infinitesimal energy window will contain an arbitrarily large number of quantum eigenstates, $`N\sim \hbar ^{-d}`$ in $`d`$ dimensions. We wish to show that for any $`ϵ`$, there exists $`\hbar `$ such that
$$\overline{\left(\langle n|\widehat{A}|n\rangle -a_0\right)^2}<ϵ^2,$$
(2)
where the average is taken over all wavefunctions $`|n\rangle `$ in the energy window $`[E_0,E_0+\mathrm{\Delta }E]`$.
Consider the quantum correlator
$$f(t)=\mathrm{tr}\widehat{A}^{\dagger }\widehat{A}(t)=\mathrm{tr}\widehat{A}^{\dagger }e^{i\widehat{H}t}\widehat{A}e^{-i\widehat{H}t},$$
(3)
where the trace is over all states in the energy window. Now define the time-averaged correlator:
$`F(T)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2\pi }T}}{\displaystyle \int 𝑑te^{-t^2/2T^2}f(t)}`$ (4)
$`=`$ $`{\displaystyle \frac{1}{N}}{\displaystyle \underset{n,n^{\prime }=1}{\overset{N}{\sum }}}\left|\langle n|\widehat{A}|n^{\prime }\rangle \right|^2e^{-(E_n-E_{n^{\prime }})^2T^2/2}.`$ (5)
$`F_{\mathrm{cl}}(T)`$, the time-averaged correlator of $`A(q,p)`$ and $`A(q(t),p(t))`$, is the classical counterpart of $`F(T)`$, and must by the definition of ergodicity tend to the ergodic value $`a_0^2`$ as $`T\rightarrow \mathrm{\infty }`$. Now we simply choose $`T`$ large enough so that $`F_{\mathrm{cl}}(T)`$ is as close as we like to its long-time asymptotic value (say, within $`O(ϵ^2)`$), and then choose $`\hbar `$ small enough so that the Ehrenfest time $`T_{\mathrm{Ehr}}`$ (the time at which classical–quantum correspondence breaks down) is large compared with $`T`$. \[Notice that for a hard chaotic system, the breakdown time $`T_{\mathrm{Ehr}}\sim \lambda ^{-1}\mathrm{log}\hbar ^{-1}`$ as $`\hbar \rightarrow 0`$, where $`\lambda `$ is the Lyapunov exponent; however we make no specific assumption here about the system apart from its being ergodic.\] We then have
$`{\displaystyle \frac{1}{N}}{\displaystyle \underset{n=1}{\overset{N}{\sum }}}\left|\langle n|\widehat{A}|n\rangle \right|^2\le F(\mathrm{\infty })\le F(T)=a_0^2+O(ϵ^2).`$ (6)
Now $`a_0`$ is of course the mean value of the matrix elements $`\langle n|\widehat{A}|n\rangle `$ (which can be seen formally by considering the correlation of $`\widehat{A}(t)`$ with the identity operator), so Eq. 6 implies that the variance of the matrix elements is bounded by $`ϵ^2`$ (proving the statement of Eq. 2), and goes to zero in the $`\hbar \rightarrow 0`$ limit.
In the presence of symmetry-induced degeneracies, the argument above clearly holds for any choice of orthonormal eigenbasis $`|n\rangle `$. Furthermore, the difference between the first two quantities in Eq. 6 is also bounded above by $`O(ϵ^2)`$; this implies
$$\langle n|\widehat{A}|n^{\prime }\rangle \rightarrow 0$$
(7)
for almost all degenerate eigenstates $`|n\rangle `$ and $`|n^{\prime }\rangle `$ in the $`\hbar \rightarrow 0`$ limit.
As an aside, we mention that the constraint $`F(\mathrm{\infty })\le F(T)`$ (obtained from Eq. 4) allows us to circumvent the usual problem of noncommutativity of the $`\hbar \rightarrow 0`$ and $`T\rightarrow \mathrm{\infty }`$ limits. Specifically, quantum approach to ergodicity at short times in the sense of $`F(T)\approx a_0^2`$ guarantees quantum ergodicity at longer times where individual eigenstates are resolved.
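The inequality chain in Eq. 6 is easy to check numerically in a toy model. Below, a random-matrix ensemble stands in for the chaotic Hamiltonian and observable (an assumption made purely for illustration; RMT is a stronger statement than the coarse-grained ergodicity used in the argument).

```python
import numpy as np

# Toy check of Eq. 6: diagonal matrix elements versus the smoothed
# correlator F(T), with GOE random matrices standing in for H and A.
rng = np.random.default_rng(0)
N = 400
H = rng.normal(size=(N, N)); H = (H + H.T) / np.sqrt(2 * N)
A = rng.normal(size=(N, N)); A = (A + A.T) / np.sqrt(2 * N)
E, V = np.linalg.eigh(H)

diag = np.einsum('in,ij,jn->n', V, A, V)        # <n|A|n>
print("mean, variance of <n|A|n>:", diag.mean(), diag.var())

Anm = V.T @ A @ V                               # <n|A|n'>
for T in (1.0, 10.0, 100.0):                    # F(T) from Eq. 5
    W = np.exp(-0.5 * (E[:, None] - E[None, :])**2 * T**2)
    print("F(%g) =" % T, (np.abs(Anm)**2 * W).sum() / N)
```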
The statistical argument presented here leaves open the possibility of a zero measure set of states grossly violating the ergodic condition $`\langle n|\widehat{A}|n\rangle \approx a_0`$. This is not a deficiency in the argument: infinite sequences of highly non-ergodic states (namely, the bouncing-ball wavefunctions) have been shown to exist in chaotic systems with marginally stable classical orbits . The fraction of such states does go to zero in the classical high-energy limit, in accordance with the predictions of the theorem.
We also note that no assumptions have been made in the argument about “hard chaos” properties such as hyperbolicity or mixing; implications for the elimination of quantization ambiguity in two dimensions thus hold for all classically ergodic systems.
Coarse-grained quantum ergodicity is a much weaker and more general result than random matrix theory (RMT), which has been conjectured to be a statistical description of classically ergodic wavefunctions . The latter conjecture suggests ergodic behavior of quantum wavefunctions on single-wavelength scales, and is violated by scars and other effects of short-time dynamics . For example, the quantum wavefunctions of the Sinai billiard, a paradigm of classical chaos, have been shown to have inverse participation ratios (IPR’s) that diverge in the classical limit away from their ergodic and RMT values . This strongly non-ergodic wavefunction behavior on scales of size $`\hbar `$ is entirely consistent with ergodicity on coarse-grained scales as discussed above. It is easy to see that the ergodicity argument breaks down if $`\widehat{A}`$ does not have a well-defined classical limit. We may for example let $`\widehat{A}`$ be a projection onto a localized test state $`\widehat{A}=|a\rangle \langle a|`$, but then we are not guaranteed to find a time $`T`$ where classical–quantum correspondence between $`F(T)`$ and $`F_{\mathrm{cl}}(T)`$ still holds, and simultaneously $`T\ll T_{\mathrm{Ehr}}`$.
Comparing the result of Eq. 2 with Eq. 1, we see that the leading effect of a change in the quantization prescription is to shift all energy levels classically close to $`E_0`$ by a constant displacement $`\hbar ^2a_0`$ (comparable in size to a level spacing); perturbation theory then gives:
$$\delta E_n\equiv E_n^{\prime }-E_n=\hbar ^2(a_0+O(ϵ))$$
(8)
with $`ϵ\ll 1`$ for small $`\hbar `$ (high energy). In the special case of RMT, the energy hypersurface has $`M\sim \hbar ^{1-d}`$ Planck-sized cells: since the deviations $`ϵ`$ in the matrix elements $`\langle n|\widehat{A}|n\rangle `$ arise from uncorrelated fluctuations in these cells, we then have
$$ϵ_{\mathrm{RMT}}\sim \frac{1}{\sqrt{M}}\sim \hbar ^{(d-1)/2}.$$
(9)
The relative shift between nearby energy levels compared to a level spacing is
$$\frac{\delta E_n-\delta E_m}{\hbar ^d}\sim \hbar ^{2-d}ϵ\stackrel{\mathrm{RMT}}{\sim }\hbar ^{(3-d)/2}.$$
(10)
The overall energy shift $`\hbar ^2a_0`$ is of course unphysical, corresponding simply to changing the Hamiltonian by a constant; only energy differences are measurable, and from the first relation in Eq. 10 we see that for $`d=2`$ these become independent of one’s choice of quantization in the $`\hbar \rightarrow 0`$ limit (i.e. for highly excited eigenstates). Energy splittings $`E_n-E_m`$ which are already classically large can of course change by $`O(\hbar ^2)`$ as one considers different quantum systems with the same semiclassical limit. However, the ratio $`\delta (E_n-E_m)/(E_n-E_m)\rightarrow 0`$ for almost all states $`|n\rangle `$, $`|m\rangle `$ in the semiclassical limit for $`d=2`$, whether $`|n\rangle `$ and $`|m\rangle `$ are nearest neighbor levels or are far apart in energy. Thus, in the classical limit of a $`d=2`$ constrained surface, the way in which the particle is bound to the surface has no effect on any measurable quantity, provided that motion on the constraint surface itself is ergodic. \[In the case of mixed phase space, states living on regular islands move up or down relative to the rigid chaotic sea, as the quantization is varied.\]
The result of Eq. 10 is consistent with the finding in that in caustic-free chaotic $`d=2`$ systems, long-time semiclassical dynamics approaches the quantum answer, and that the critical dimension for breakdown of the semiclassical approximation at the Heisenberg time is $`d=3`$ for chaotic systems, as compared with $`d=2`$ in the integrable case.
For a simple numerical example of the above results, we consider a discrete-time map on a two-dimensional toroidal phase space $`(q,p)\in [0,1)\times [0,1)`$ (notice that this scales equivalently to a two-dimensional autonomous Hamiltonian system and can be thought of as a Poincaré section of the latter):
$`p\rightarrow \stackrel{~}{p}`$ $`=`$ $`p-V^{\prime }(q)\mathrm{mod}\mathrm{\hspace{0.33em}1}`$ (11)
$`q\rightarrow \stackrel{~}{q}`$ $`=`$ $`q+\stackrel{~}{p}\mathrm{mod}\mathrm{\hspace{0.33em}1}.`$ (12)
The above equations of motion can be obtained from the time-periodic (“kicked”) Hamiltonian
$$H=\frac{p^2}{2}+V(q)\underset{n}{\sum }\delta (t-n).$$
(13)
We let the potential be given by
$$V(q)=\pm \frac{q^2}{2}+\frac{0.4}{(2\pi )^2}\mathrm{sin}2\pi q+rh^2\mathrm{cos}2\pi q,$$
(14)
where the $`-`$ sign gives completely chaotic dynamics , while the $`+`$ sign leads to a (mostly) regular classical phase space. The $`\mathrm{sin}2\pi q`$ term is added to break parity symmetries and make the dynamics nonlinear and generic in both cases, while $`r`$ parametrizes a one-parameter family of possible quantizations.
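Classically, the $`rh^2\mathrm{cos}2\pi q`$ term is of order $`h^2`$ and drops out, so the classical map depends only on the first two terms of $`V(q)`$. A few lines of code suffice to iterate Eqs. (11)–(12) and compare the two cases (the initial conditions and iteration counts here are arbitrary choices):

```python
import numpy as np

# Classical kicked map of Eqs. (11)-(12) with V'(q) from Eq. (14);
# the O(h^2) quantization term is dropped in the classical limit.
def v_prime(q, sign):
    return sign * q + 0.4 / (2 * np.pi) * np.cos(2 * np.pi * q)

def orbit(q, p, sign, n=2000):
    traj = np.empty((n, 2))
    for i in range(n):
        p = (p - v_prime(q, sign)) % 1.0
        q = (q + p) % 1.0
        traj[i] = q, p
    return traj

chaotic = orbit(0.3, 0.2, sign=-1)   # '-' sign: fully chaotic
regular = orbit(0.3, 0.2, sign=+1)   # '+' sign: mostly regular
```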
A sample piece of the regular (solid curve) and chaotic (dashed curve) spectrum is shown in Fig. 1, as a function of the quantization parameter $`r`$. We see in the regular case that the eigenvalues often cross each other as the quantization is varied, making impossible a semiclassical ordering of the spectrum, while in the chaotic case the eigenvalues never shift by an amount comparable to a mean level spacing ($`0.0117`$ in this calculation). We notice that in our example the trace of $`A(q,p)=r\mathrm{cos}2\pi q`$ is zero, so the overall spectral shift $`\hbar ^2a_0`$ of Eq. 8 vanishes as well. For a different $`A(q,p)=r\left[\mathrm{cos}2\pi q\pm c\right]`$, there would be a secular upward or downward trend in all the eigenvalues (regular and chaotic) as $`r`$ increased, but no physical quantities would be affected.
Despite the exponential proliferation of classical paths in the chaotic case, semiclassical calculations past the Heisenberg time can be performed to arbitrary precision with only a power-law amount of effort, using an iterative approach, and individual semiclassical eigenvalues and eigenstates can be extracted . Semiclassically obtained eigenvalues for the chaotic system are indicated by arrows in Fig. 1 and are observed to agree well with the quantum results (independent of quantization). In the regular case, there is of course little meaning to semiclassically computed individual eigenvalues, and these are not shown.
In Fig. 2 we show the mean squared change in eigenlevel position, as a function of effective Planck’s constant (inverse momentum) ranging from $`h=1/64`$ to $`1/1028`$, when the quantization parameter $`r`$ is changed from $`0.0`$ to $`3.0`$. The $`\times `$’s represent the regular case, and show that the quantization ambiguity there is large and energy-independent. The data points marked by squares correspond to the chaotic system, and clearly follow the linear law predicted by Eq. 10 (for $`d=2`$). Finally, the plusses come from the “slow ergodic” sawtooth potential map defined by
$$V(q)=0.3\left|q-\frac{1}{2}\right|,$$
(15)
where the quantization ambiguity is now put in the kinetic term: $`H=\mathrm{\cdots }+rh^2\mathrm{cos}2\pi p`$. Here we do not expect quantum wavefunctions to be ergodic on the scale of a single channel in momentum space , but they will be ergodic on scales $`\sim 1/|\mathrm{log}\hbar |`$ as $`\hbar \rightarrow 0`$. We then expect $`ϵ\sim 1/(\alpha +\beta \mathrm{log}\hbar )`$ in Eq. 10, which behavior is indeed observed in Fig. 2. We see that ergodicity without chaos is sufficient to obtain unambiguous quantization of $`d=2`$ systems, though the error in the high-energy limit approaches zero more slowly in the slow ergodic than in the fully hyperbolic case.
It is a pleasure to thank M. Wilkinson for initially stimulating this work and E. J. Heller for many fruitful discussions. This research was supported by the National Science Foundation under Grant No. 66-701-7557-2-30.
# Explicit expressions of spin wave functions
## I Introduction
To describe particles with high spins in amplitude analysis, one needs to construct the explicit expressions of wave functions. Detailed properties of the amplitudes are needed in tensor analysis to give independent invariant amplitudes free of kinematic singularities and zeros . We will give the explicit expressions of the canonical and helicity wave functions for massive particles with arbitrary spins in this paper. These wave functions satisfy Rarita-Schwinger conditions .
We will discuss quantum states in section II. Spin wave functions are given in section III and section IV.
## II Quantum States
Let $`L(p)`$ be a Lorentz transformation that satisfies
$$p^\mu =L^{\mu }{}_{\nu }(p)k^\nu .$$
(1)
For massive particles one can choose the standard momentum to be $`(k^\mu )=(w;\stackrel{}{0})`$. $`w`$ is the mass of the particle. The space-time metric is taken as $`(g^{\mu \nu })=\mathrm{diag}\{1,-1,-1,-1\}.`$ Now we can define one particle states as
$$|p,\sigma \rangle =U(L(p))|k,\sigma \rangle $$
(2)
with $`U(L(p))`$ the unitary representation of $`L(p)`$ in Hilbert space. The one particle states satisfy
$$\widehat{p}^\mu |p,\sigma \rangle =p^\mu |p,\sigma \rangle .$$
(3)
We choose the orthonormality condition to be
$$\langle p^{\prime },\sigma ^{\prime }|p,\sigma \rangle =(2\pi )^3(2p^0)\delta (\stackrel{}{p}^{\prime }-\stackrel{}{p})\delta _{\sigma ^{\prime }\sigma }.$$
(4)
Under Lorentz transformations,
$$U(\mathrm{\Lambda })|p,\sigma \rangle =\underset{\sigma ^{\prime }}{\sum }D_{\sigma ^{\prime }\sigma }(W(\mathrm{\Lambda },p))|\mathrm{\Lambda }p,\sigma ^{\prime }\rangle ;$$
(5)
where
$$W(\mathrm{\Lambda },p)\equiv L^{-1}(\mathrm{\Lambda }p)\mathrm{\Lambda }L(p)$$
(6)
is the Wigner rotation and $`\{D_{\sigma ^{\prime }\sigma }\}`$ furnishes a representation of $`SO(3).`$ We also use the notation $`|\stackrel{}{p},j,m\rangle \equiv |p,j,m\rangle `$.
There are infinitely many ways to define a Lorentz transformation that satisfies equation (1). Canonical states and helicity states are the two types most commonly used.
If one defines the Lorentz transformation in equation (2) to be a pure Lorentz boost
$$\begin{array}{ccc}\hfill L(p)& =& L(\stackrel{}{p})\hfill \\ & \equiv & R(\phi ,\theta ,0)L_z(|\stackrel{}{p}|)R^{-1}(\phi ,\theta ,0),\hfill \end{array}$$
(7)
the canonical state is obtained. Here $`R(\phi ,\theta ,0)`$ is the rotation that takes the $`z`$-axis to the direction of $`\stackrel{}{p}`$ with spherical angles $`(\theta ,\phi )`$, and the boost $`L_z(|\stackrel{}{p}|)`$ takes the four-momentum $`(k^\mu )=(w;\stackrel{}{0})`$ to $`(\sqrt{w^2+\stackrel{}{p}^2};0,0,|\stackrel{}{p}|)`$. For a particle of spin-$`j`$, $`\sigma \rightarrow (j,m)`$. It can be shown that for the canonical states, the transformation law (5) becomes
$$U(\mathrm{\Lambda })|\stackrel{}{p},j,m\rangle =\underset{m^{\prime }}{\sum }D_{m^{\prime }m}^j(L^{-1}(\stackrel{}{\mathrm{\Lambda }}p)\mathrm{\Lambda }L(\stackrel{}{p}))|\stackrel{}{\mathrm{\Lambda }}p,j,m^{\prime }\rangle .$$
(8)
$`D_{m^{\prime }m}^j`$ is the ordinary $`D`$-function. In particular, under a rotation $`R`$,
$$U(R)|\stackrel{}{p},j,m\rangle =\underset{m^{\prime }}{\sum }D_{m^{\prime }m}^j(R)|\stackrel{}{R}p,j,m^{\prime }\rangle .$$
(9)
Defining the Lorentz transformation in another way leads to helicity states:
$$\begin{array}{ccc}\hfill L(p)& =& L(\stackrel{}{p})R(\phi ,\theta ,0)\hfill \\ & \equiv & R(\phi ,\theta ,0)L_z(|\stackrel{}{p}|).\hfill \end{array}$$
(10)
We have
$$U(\mathrm{\Lambda })|\stackrel{}{p},j,\lambda \rangle =\underset{\lambda ^{\prime }}{\sum }D_{\lambda ^{\prime }\lambda }^j(L^{-1}(\stackrel{}{\mathrm{\Lambda }}p)\mathrm{\Lambda }L(\stackrel{}{p}))|\stackrel{}{\mathrm{\Lambda }}p,j,\lambda ^{\prime }\rangle $$
(11)
and
$$U(R)|\stackrel{}{p},j,\lambda \rangle =|\stackrel{}{R}p,j,\lambda \rangle .$$
(12)
The two types of definitions are related to each other by
$$|\stackrel{}{p},j,\lambda \rangle _{helicity}=\underset{m}{\sum }D_{m\lambda }^j(\phi ,\theta ,0)|\stackrel{}{p},j,m\rangle _{canonical}.$$
(13)
We see that the definition of state depends on the choice of Lorentz transformation in equation (1). There is a definition called spinor state , which is different from that of equation (2) and does not depend on the specific choice of Lorentz transformation; but it makes things more complex and is seldom used.
Now we write quantum states in terms of creation and annihilation operators:
$$|\stackrel{}{p},\sigma \rangle =\sqrt{(2\pi )^32p^0}a^{\dagger }(\stackrel{}{p},\sigma )|0\rangle ,$$
(14)
with $`|0\rangle `$ the vacuum state. Quantum fields are given by
$`\psi _l^{(+)}(x)={\displaystyle \int \frac{d^3p}{\sqrt{\left(2\pi \right)^32p^0}}\underset{\sigma }{\sum }U_l(\stackrel{}{p},\sigma )a(\stackrel{}{p},\sigma )e^{-ipx}},`$ (15)
$`\psi _l^{(-)}(x)={\displaystyle \int \frac{d^3p}{\sqrt{\left(2\pi \right)^32p^0}}\underset{\sigma }{\sum }V_l(\stackrel{}{p},\sigma )a^{\dagger }(\stackrel{}{p},\sigma )e^{ipx}},`$ (16)
$`U(\mathrm{\Lambda },a)\psi _l^{(\pm )}(x)U^{-1}(\mathrm{\Lambda },a)={\displaystyle \underset{l^{\prime }}{\sum }}G_{ll^{\prime }}(\mathrm{\Lambda }^{-1})\psi _{l^{\prime }}^{(\pm )}(\mathrm{\Lambda }x+a).`$ (17)
The coefficient functions, $`U_l`$ and $`V_l`$, are wave functions in momentum space. $`a^\mu `$ are parameters for translation. $`\{G_{ll^{\prime }}\}`$ furnishes a representation of the Lorentz group. One finds that wave functions satisfy
$`{\displaystyle \underset{l^{\prime }}{\sum }}G_{ll^{\prime }}(\mathrm{\Lambda })U_{l^{\prime }}(\stackrel{}{p},\sigma )={\displaystyle \underset{\sigma ^{\prime }}{\sum }}D_{\sigma ^{\prime }\sigma }(W(\mathrm{\Lambda },p))U_l(\stackrel{}{\mathrm{\Lambda }}p,\sigma ^{\prime }),`$ (18)
$`{\displaystyle \underset{l^{\prime }}{\sum }}G_{ll^{\prime }}(\mathrm{\Lambda })V_{l^{\prime }}(\stackrel{}{p},\sigma )={\displaystyle \underset{\sigma ^{\prime }}{\sum }}D_{\sigma ^{\prime }\sigma }^{*}(W(\mathrm{\Lambda },p))V_l(\stackrel{}{\mathrm{\Lambda }}p,\sigma ^{\prime });`$ (19)
so we can define wave functions as
$`U_l(\stackrel{}{p},\sigma )`$ $`=`$ $`{\displaystyle \underset{l^{\prime }}{\sum }}G_{ll^{\prime }}(L(\stackrel{}{p}))U_{l^{\prime }}(\stackrel{}{k},\sigma ),`$ (20)
$`V_l(\stackrel{}{p},\sigma )`$ $`=`$ $`{\displaystyle \underset{l^{\prime }}{\sum }}G_{ll^{\prime }}(L(\stackrel{}{p}))V_{l^{\prime }}(\stackrel{}{k},\sigma ).`$ (21)
For massive particles, $`\stackrel{}{k}=\stackrel{}{0}`$.
## III Wave Functions for Integral Spin Particles
If the index $`l`$ of the previous section is chosen as a Lorentz index, one arrives at vector fields:
$$\begin{array}{c}G(\mathrm{\Lambda })^{\mu }{}_{\nu }=\mathrm{\Lambda }^{\mu }{}_{\nu },\\ U^\mu (\stackrel{}{p},\sigma )=L(p)^{\mu }{}_{\nu }U^\nu (\stackrel{}{0},\sigma ),\\ V^\mu (\stackrel{}{p},\sigma )=L(p)^{\mu }{}_{\nu }V^\nu (\stackrel{}{0},\sigma ).\end{array}$$
(22)
We use the following infinitesimal generators of the Lorentz group:
$$\begin{array}{ccc}\left(J_{1}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& -i\\ 0& 0& i& 0\end{array}\right),& \left(J_{2}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& 0& i\\ 0& 0& 0& 0\\ 0& -i& 0& 0\end{array}\right),& \left(J_{3}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& 0& -i& 0\\ 0& i& 0& 0\\ 0& 0& 0& 0\end{array}\right),\\ \left(K_{1}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& i& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),& \left(K_{2}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& 0& i& 0\\ 0& 0& 0& 0\\ i& 0& 0& 0\\ 0& 0& 0& 0\end{array}\right),& \left(K_{3}{}^{\mu }{}_{\nu }\right)=\left(\begin{array}{cccc}0& 0& 0& i\\ 0& 0& 0& 0\\ 0& 0& 0& 0\\ i& 0& 0& 0\end{array}\right);\end{array}$$
(23)
and get the explicit expressions of canonical wave functions ($`E`$ is the energy of the particle)
$$\begin{array}{ccc}\hfill \left(e_c^\mu (\stackrel{}{p},0)\right)& =& \left(\begin{array}{c}\frac{|\stackrel{}{p}|}{w}\mathrm{cos}\theta \\ \frac{1}{2}\left(\frac{E}{w}-1\right)\mathrm{sin}2\theta \mathrm{cos}\phi \\ \frac{1}{2}\left(\frac{E}{w}-1\right)\mathrm{sin}2\theta \mathrm{sin}\phi \\ \frac{1}{2}\left(\frac{E}{w}-1\right)(1+\mathrm{cos}2\theta )+1\end{array}\right),\hfill \\ \hfill \left(e_c^\mu (\stackrel{}{p},\pm 1)\right)& =& \mp \frac{1}{\sqrt{2}}\left(\begin{array}{c}\frac{|\stackrel{}{p}|}{w}\mathrm{sin}\theta e^{\pm i\phi }\\ \left(\frac{E}{w}-1\right)\mathrm{sin}^2\theta \mathrm{cos}\phi e^{\pm i\phi }+1\\ \left(\frac{E}{w}-1\right)\mathrm{sin}^2\theta \mathrm{sin}\phi e^{\pm i\phi }\pm i\\ \left(\frac{E}{w}-1\right)\mathrm{cos}\theta \mathrm{sin}\theta e^{\pm i\phi }\end{array}\right);\hfill \end{array}$$
(24)
while helicity wave functions are
$$\begin{array}{ccc}\hfill \left(e_h^\mu (\stackrel{}{p},0)\right)& =& \left(\begin{array}{c}\frac{|\stackrel{}{p}|}{w}\\ \frac{E}{w}\mathrm{sin}\theta \mathrm{cos}\phi \\ \frac{E}{w}\mathrm{sin}\theta \mathrm{sin}\phi \\ \frac{E}{w}\mathrm{cos}\theta \end{array}\right),\hfill \\ \hfill \left(e_h^\mu (\stackrel{}{p},\pm 1)\right)& =& \frac{1}{\sqrt{2}}\left(\begin{array}{c}0\\ \mp \mathrm{cos}\theta \mathrm{cos}\phi +i\mathrm{sin}\phi \\ \mp \mathrm{cos}\theta \mathrm{sin}\phi -i\mathrm{cos}\phi \\ \pm \mathrm{sin}\theta \end{array}\right).\hfill \end{array}$$
(25)
We have
$$U(\stackrel{}{p},\sigma )=V^{*}(\stackrel{}{p},\sigma )=e(\stackrel{}{p},\sigma ).$$
(26)
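As a cross-check of Eq. (24), the boost $`L(\stackrel{}{p})=R(\phi ,\theta ,0)L_z(|\stackrel{}{p}|)R^{-1}(\phi ,\theta ,0)`$ of Eq. (7) can be built numerically from the generators of Eq. (23) and applied to the rest-frame vectors. A minimal sketch follows; the rest-frame phase conventions $`e(\stackrel{}{0},\pm 1)=\mp (0,1,\pm i,0)/\sqrt{2}`$ are our assumption.

```python
import numpy as np
from scipy.linalg import expm

# Vector-representation generators of Eq. (23) that enter L(p)
J2 = np.zeros((4, 4), complex); J2[1, 3], J2[3, 1] = 1j, -1j
J3 = np.zeros((4, 4), complex); J3[1, 2], J3[2, 1] = -1j, 1j
K3 = np.zeros((4, 4), complex); K3[0, 3] = K3[3, 0] = 1j

def L_canonical(alpha, theta, phi):
    """L(p) = R(phi,theta,0) L_z R^{-1}(phi,theta,0), Eq. (7)."""
    R = expm(-1j * phi * J3) @ expm(-1j * theta * J2)
    return R @ expm(-1j * alpha * K3) @ np.linalg.inv(R)

rest = {0: np.array([0, 0, 0, 1], complex),
        +1: -np.array([0, 1, +1j, 0]) / np.sqrt(2),
        -1: +np.array([0, 1, -1j, 0]) / np.sqrt(2)}

w, alpha, theta, phi = 1.0, 0.7, 0.5, 1.2       # sample kinematics
sin, cos = np.sin, np.cos
p = w * np.array([np.cosh(alpha), np.sinh(alpha) * sin(theta) * cos(phi),
                  np.sinh(alpha) * sin(theta) * sin(phi),
                  np.sinh(alpha) * cos(theta)])
for s, e0 in rest.items():
    e = L_canonical(alpha, theta, phi) @ e0
    print(s, p[0] * e[0] - p[1:] @ e[1:])        # p·e = 0 (transversality)
```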
Wave functions for higher integral spins can be defined recurrently by
$$e_{\mu _1\mu _2\mathrm{\cdots }\mu _j}(\stackrel{}{p},j,\sigma )=\underset{\sigma _{j-1}^{\prime },\sigma _j}{\sum }(j-1,\sigma _{j-1}^{\prime };1,\sigma _j|j,\sigma )e_{\mu _1\mu _2\mathrm{\cdots }\mu _{j-1}}(\stackrel{}{p},j-1,\sigma _{j-1}^{\prime })e_{\mu _j}(\stackrel{}{p},\sigma _j).$$
(27)
Using the C-G coefficient relation
$$\begin{array}{cc}& \underset{\sigma _3^{\prime },\sigma _4^{\prime },\mathrm{\cdots },\sigma _n^{\prime }}{\sum }(j_1,\sigma _1;j_2,\sigma _2|j_1+j_2,\sigma _3^{\prime })(j_1+j_2,\sigma _3^{\prime };j_3,\sigma _3|j_1+j_2+j_3,\sigma _4^{\prime })\mathrm{\cdots }\hfill \\ & \times (j_1+j_2+\mathrm{\cdots }+j_{n-1},\sigma _n^{\prime };j_n,\sigma _n|j_1+j_2+\mathrm{\cdots }+j_n,\sigma _n^{\prime }+\sigma _n)\hfill \\ \hfill =& \left[\underset{i=1}{\overset{n}{\prod }}\frac{(2j_i)!}{(j_i+\sigma _i)!(j_i-\sigma _i)!}\right]^{\frac{1}{2}}\left\{\frac{\left[\underset{i=1}{\overset{n}{\sum }}(j_i+\sigma _i)\right]!\left[\underset{i=1}{\overset{n}{\sum }}(j_i-\sigma _i)\right]!}{\left(2\underset{i=1}{\overset{n}{\sum }}j_i\right)!}\right\}^{\frac{1}{2}},\hfill \end{array}$$
(28)
we find
$$\begin{array}{cc}& e_{\mu _1\mu _2\mathrm{\cdots }\mu _j}(\stackrel{}{p},j,\sigma )\hfill \\ \hfill =& \underset{\sigma _1,\sigma _2,\mathrm{\cdots },\sigma _j}{\sum }\left\{\frac{2^j(j+\sigma )!(j-\sigma )!}{(2j)!\underset{i=1}{\overset{j}{\prod }}[(1+\sigma _i)!(1-\sigma _i)!]}\right\}^{\frac{1}{2}}\delta _{\sigma _1+\sigma _2+\mathrm{\cdots }+\sigma _j,\sigma }e_{\mu _1}(\stackrel{}{p},\sigma _1)e_{\mu _2}(\stackrel{}{p},\sigma _2)\mathrm{\cdots }e_{\mu _j}(\stackrel{}{p},\sigma _j).\hfill \end{array}$$
(29)
It is easy to show
$$\mathrm{\Lambda }^{\mu _1}{}_{\nu _1}\mathrm{\Lambda }^{\mu _2}{}_{\nu _2}\mathrm{\cdots }\mathrm{\Lambda }^{\mu _j}{}_{\nu _j}e^{\nu _1\nu _2\mathrm{\cdots }\nu _j}(\stackrel{}{p},j,\sigma )=\underset{\sigma ^{\prime }}{\sum }D_{\sigma ^{\prime }\sigma }^j(W(\mathrm{\Lambda },\stackrel{}{p}))e^{\mu _1\mu _2\mathrm{\cdots }\mu _j}(\stackrel{}{\mathrm{\Lambda }}p,j,\sigma ^{\prime }).$$
(30)
$`e_{\mu _1\mu _2\mathrm{\cdots }\mu _j}(\stackrel{}{p},j,\sigma )`$ satisfies all of the Rarita-Schwinger conditions: space-like, symmetric and traceless.
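Eq. (29) is directly implementable. The sketch below builds the rank-$`j`$ tensor from any set of spin-1 vectors $`e^\mu (\stackrel{}{p},s)`$ and checks symmetry and tracelessness for $`j=2`$; the rest-frame vectors and phase conventions of the previous snippet are assumed.

```python
import numpy as np
from itertools import product
from math import factorial

def tensor_wf(e1, j, sigma):
    """Rank-j polarization tensor from the explicit sum of Eq. (29)."""
    out = np.zeros((4,) * j, dtype=complex)
    for ss in product((-1, 0, 1), repeat=j):
        if sum(ss) != sigma:
            continue
        norm = np.prod([float(factorial(1 + s) * factorial(1 - s)) for s in ss])
        coef = np.sqrt(2.0**j * factorial(j + sigma) * factorial(j - sigma)
                       / factorial(2 * j) / norm)
        term = e1[ss[0]]
        for s in ss[1:]:
            term = np.multiply.outer(term, e1[s])   # e ⊗ e ⊗ ...
        out += coef * term
    return out

e1 = {0: np.array([0, 0, 0, 1], complex),
      +1: -np.array([0, 1, +1j, 0]) / np.sqrt(2),
      -1: +np.array([0, 1, -1j, 0]) / np.sqrt(2)}
e2 = tensor_wf(e1, 2, 1)                  # spin-2, sigma = +1, at rest
g = np.diag([1.0, -1.0, -1.0, -1.0])
print(np.allclose(e2, e2.T), np.einsum('mn,mn->', g, e2))  # symmetric, traceless
```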
## IV Wave Functions for Half-integral Spin Particles
The convention of $`\gamma `$-matrices used here follows that of Bjorken and Drell , so the generators of the Lorentz group are
$$\begin{array}{cc}\stackrel{}{J}=\frac{1}{2}\left(\begin{array}{cc}\stackrel{}{\tau }& 0\\ 0& \stackrel{}{\tau }\end{array}\right),& \stackrel{}{K}=\frac{i}{2}\left(\begin{array}{cc}0& \stackrel{}{\tau }\\ \stackrel{}{\tau }& 0\end{array}\right);\end{array}$$
(31)
with $`\stackrel{}{\tau }`$ the Pauli matrices.
The spin-$`\frac{1}{2}`$ canonical wave functions are (here $`\alpha =\mathrm{ln}\left((E+|\stackrel{}{p}|)/w\right)`$)
$$\begin{array}{cc}U_c(\stackrel{}{p},\frac{1}{2})=\left(\begin{array}{c}\mathrm{cosh}\frac{\alpha }{2}\\ 0\\ \mathrm{cos}\theta \mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{sin}\theta e^{i\phi }\mathrm{sinh}\frac{\alpha }{2}\end{array}\right),& U_c(\stackrel{}{p},-\frac{1}{2})=\left(\begin{array}{c}0\\ \mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{sin}\theta e^{-i\phi }\mathrm{sinh}\frac{\alpha }{2}\\ -\mathrm{cos}\theta \mathrm{sinh}\frac{\alpha }{2}\end{array}\right);\\ V_c(\stackrel{}{p},\frac{1}{2})=\left(\begin{array}{c}\mathrm{sin}\theta e^{-i\phi }\mathrm{sinh}\frac{\alpha }{2}\\ -\mathrm{cos}\theta \mathrm{sinh}\frac{\alpha }{2}\\ 0\\ \mathrm{cosh}\frac{\alpha }{2}\end{array}\right),& V_c(\stackrel{}{p},-\frac{1}{2})=-\left(\begin{array}{c}\mathrm{cos}\theta \mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{sin}\theta e^{i\phi }\mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{cosh}\frac{\alpha }{2}\\ 0\end{array}\right);\end{array}$$
(32)
and the helicity wave functions are
$$\begin{array}{cc}U_h(\stackrel{}{p},\frac{1}{2})=\left(\begin{array}{c}\mathrm{cos}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{sin}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{cos}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{sin}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\end{array}\right),\hfill & U_h(\stackrel{}{p},-\frac{1}{2})=\left(\begin{array}{c}-\mathrm{sin}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{cos}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{sin}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ -\mathrm{cos}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\end{array}\right);\hfill \\ V_h(\stackrel{}{p},\frac{1}{2})=\left(\begin{array}{c}\mathrm{sin}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ -\mathrm{cos}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ -\mathrm{sin}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{cos}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\end{array}\right),\hfill & V_h(\stackrel{}{p},-\frac{1}{2})=-\left(\begin{array}{c}\mathrm{cos}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{sin}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{sinh}\frac{\alpha }{2}\\ \mathrm{cos}\frac{\theta }{2}e^{-i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\\ \mathrm{sin}\frac{\theta }{2}e^{i\frac{\phi }{2}}\mathrm{cosh}\frac{\alpha }{2}\end{array}\right).\hfill \end{array}$$
(33)
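A quick numerical check of Eq. (32): boost the rest-frame spinor with $`\mathrm{exp}(-i\alpha \stackrel{}{K}\widehat{n})`$ built from the generators of Eq. (31). The sketch below assumes the standard Bjorken–Drell sign conventions referred to in the text.

```python
import numpy as np
from scipy.linalg import expm

tau = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], complex)]

def boost(alpha, nhat):
    """exp(-i alpha K.n) with the boost generators K of Eq. (31)."""
    tn = sum(n * t for n, t in zip(nhat, tau))
    K = 0.5j * np.block([[np.zeros((2, 2)), tn], [tn, np.zeros((2, 2))]])
    return expm(-1j * alpha * K)

alpha, theta, phi = 0.9, 0.6, 0.4
nhat = (np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi),
        np.cos(theta))
U = boost(alpha, nhat) @ np.array([1, 0, 0, 0], complex)   # sigma = +1/2
closed_form = np.array([np.cosh(alpha / 2), 0,
                        np.cos(theta) * np.sinh(alpha / 2),
                        np.sin(theta) * np.exp(1j * phi) * np.sinh(alpha / 2)])
print(np.allclose(U, closed_form))   # True
```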
Spin-$`n+\frac{1}{2}`$ wave functions read
$$\begin{array}{cc}& U_{\mu _1\mu _2\mathrm{\cdots }\mu _n}(\stackrel{}{p},n+\frac{1}{2},\sigma )\hfill \\ \hfill =& \underset{\sigma _1,\sigma _2,\mathrm{\cdots },\sigma _{n+1}}{\sum }\left\{\frac{2^n(n+\frac{1}{2}+\sigma )!(n+\frac{1}{2}-\sigma )!}{(2n+1)!\underset{i=1}{\overset{n}{\prod }}[(1+\sigma _i)!(1-\sigma _i)!]}\right\}^{\frac{1}{2}}\delta _{\sigma _1+\sigma _2+\mathrm{\cdots }+\sigma _{n+1},\sigma }\hfill \\ & \times e_{\mu _1}(\stackrel{}{p},\sigma _1)e_{\mu _2}(\stackrel{}{p},\sigma _2)\mathrm{\cdots }e_{\mu _n}(\stackrel{}{p},\sigma _n)U(\stackrel{}{p},\sigma _{n+1});\hfill \end{array}$$
(34)
$$\begin{array}{cc}& V_{\mu _1\mu _2\mathrm{\cdots }\mu _n}(\stackrel{}{p},n+\frac{1}{2},\sigma )\hfill \\ \hfill =& \underset{\sigma _1,\sigma _2,\mathrm{\cdots },\sigma _{n+1}}{\sum }\left\{\frac{2^n(n+\frac{1}{2}+\sigma )!(n+\frac{1}{2}-\sigma )!}{(2n+1)!\underset{i=1}{\overset{n}{\prod }}[(1+\sigma _i)!(1-\sigma _i)!]}\right\}^{\frac{1}{2}}\delta _{\sigma _1+\sigma _2+\mathrm{\cdots }+\sigma _{n+1},\sigma }\hfill \\ & \times e_{\mu _1}^{*}(\stackrel{}{p},\sigma _1)e_{\mu _2}^{*}(\stackrel{}{p},\sigma _2)\mathrm{\cdots }e_{\mu _n}^{*}(\stackrel{}{p},\sigma _n)V(\stackrel{}{p},\sigma _{n+1}).\hfill \end{array}$$
(35)
They satisfy Dirac equations and Rarita-Schwinger conditions ; especially
$$\gamma ^{\mu _k}U_{\mu _1\mu _2\mathrm{\cdots }\mu _k\mathrm{\cdots }\mu _n}(\stackrel{}{p},n+\frac{1}{2},\sigma )=0,$$
(36)
$$\gamma ^{\mu _k}V_{\mu _1\mu _2\mathrm{\cdots }\mu _k\mathrm{\cdots }\mu _n}(\stackrel{}{p},n+\frac{1}{2},\sigma )=0.$$
(37)
# Cosmological and astrophysical tests of quantum gravity
## Abstract
Physics in the vicinity of an ultraviolet stable fixed point of a quantum field theory is parametrized by a renormalization group invariant macroscopic length scale, the correlation length $`\xi ,`$ with the quantum effective action a function of this length scale. Numerical simulations of quantum gravity suggest the existence of just such a fixed point. Since the quantum effective action is a function only of $`\xi ,`$ the cosmological constant must be $`k\xi ^{-2}`$ with $`k`$ a pure number. Higher derivative terms are also parametrized by this length scale, so in particular the effective Newtonian dynamics of a test particle is modified at acceleration scales of order $`1/\xi .`$ Thus, renormalization group effects in quantum gravity provide a natural link between the phenomenological acceleration scale associated with galactic rotation curves and the value of the cosmological constant favoured by recent supernovae observations.
Quantum gravity is a difficult subject with a host of conceptual and computational problems that we are far from resolving. It is the aim of this paper to point out that even in the absence of a complete theory, general arguments based on the properties of the renormalization group in quantum field theory suggest that quantum gravity may provide simple explanations for some astrophysical and cosmological puzzles.
Let us begin by considering an approach to quantum gravity reviewed by Weinberg. Many quantum field theories can be formulated in arbitrary numbers of spacetime dimensions. Examples are Yang–Mills theories, the nonlinear sigma model (NLSM), and general relativity (GR). In such cases there is typically a spacetime dimension in which the coupling constant in these theories is dimensionless, called the upper critical dimension (ucd). For the NLSM and GR the ucd is 2 while for Yang–Mills theories it is 4. Above the ucd, the coupling constant has dimensions of length to positive powers and the theory is perturbatively non-renormalizable—as one probes physics at shorter distance scales, the effective coupling in these theories increases on dimensional grounds alone. Suppose that exactly at the ucd the theory is asymptotically free, in other words quantum loop effects tend to make the coupling decrease at shorter distances. If one formally treats the dimension of spacetime as an analytic variable following Wilson and Fisher, one can find a critical value of the coupling such that these two effects exactly cancel and the theory is scale invariant at a dimension $`D`$ larger than the ucd. This behaviour is encoded in a renormalization group equation $`\partial G/\partial \mathrm{ln}\rho \equiv \beta (G)=ϵG-\beta _0G^2+\mathrm{\cdots }`$ where $`ϵ\equiv D-D_c,`$ $`D_c`$ is the ucd, $`G`$ is a dimensionless coupling, related to the coupling constant in $`D`$ dimensions by $`G_D=\rho ^ϵG,`$ $`\rho `$ is the cutoff and $`\beta _0>0`$ for asymptotically free theories. If $`D-D_c`$ is small, one can find a fixed point i.e. $`G_c:\beta (G_c)=0`$ with $`G_c`$ small (in perturbation theory, $`G_c\approx ϵ/\beta _0`$) so that one can trust the qualitative features found in perturbative computations.
An important consequence of the existence of an ultraviolet stable fixed point is the existence of a macroscopic physical correlation length $`\xi ,`$ independent of $`\rho .`$ $`\xi `$ characterizes scaling violations as one considers physics at longer distances around the ultraviolet stable fixed point, marking the transition from physics described by the ultraviolet stable fixed point at $`G_c,`$ with the effective coupling $`G(r/\xi )`$ growing with increasing $`r,`$ to physics described by some other fixed point (in fact, such an infrared fixed point need not even exist at finite $`G`$): $`\xi =\rho \mathrm{exp}\left(\int ^G𝑑G^{\prime }/\beta (G^{\prime })\right)\sim \rho |G-G_c|^{1/\beta ^{\prime }(G_c)}.`$ While its existence is demonstrable in perturbation theory, $`\xi `$ cannot be determined in perturbation theory. Fixing $`\xi `$ specifies $`G:`$ this is the phenomenon of dimensional transmutation.
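Dimensional transmutation here is concrete enough to evaluate numerically. Assuming the one-loop form $`\beta (G)=ϵG-\beta _0G^2`$ quoted above, with purely illustrative values $`ϵ=\beta _0=\rho =1`$ and an arbitrary normalization point, one can watch $`\xi `$ diverge as $`GG_c`$:

```python
import numpy as np
from scipy.integrate import quad

eps, b0, rho = 1.0, 1.0, 1.0        # illustrative values only
Gc = eps / b0                        # ultraviolet stable fixed point
beta = lambda g: eps * g - b0 * g**2

def xi(G, G_ref=2.0 * Gc):
    # xi = rho * exp(int^G dG'/beta(G')), normalized at an arbitrary G_ref
    val, _ = quad(lambda g: 1.0 / beta(g), G_ref, G)
    return rho * np.exp(val)

for G in (1.5, 1.2, 1.05, 1.01):     # G -> Gc from above: xi diverges
    print(G, xi(G))
```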
The effective action obtained by systematically computing in renormalized perturbation theory is a function of $`\xi `$ alone. All the terms in the effective action are directly computable from the bare action with no arbitrariness. One cannot compute this effective action analytically in dimensions far above the ucd because the Wilson–Fisher $`ϵ`$ expansion is not a good quantitative guide, but in principle one can compute this effective action numerically. The only features that I need are the existence of $`\xi `$ and the fact that the effective action is independent of any particular field configuration. Its equation of motion incorporates quantum corrections and one must use this quantum equation of motion for all physics with no change in the value of $`\xi .`$ This may seem surprising but in fact it has to be the case since the appearance of $`\xi `$ is an ultraviolet effect—in other words, it is the ultraviolet stable fixed point that implies the existence of $`\xi `$ and this is independent of the macroscopic field configuration at which one chooses to evaluate the effective action.
In the present work, I adopt a phenomenological stance and attempt to understand qualitative aspects of physics near the ultraviolet stable fixed point in quantum gravity. Happily, a more quantitative and constructive approach may not be too far in the future. Numerical simulations of lattice gravity have made great progress and support the qualititive features suggested by the $`ϵ`$ expansion. I focus on the recent lattice gravity results of Hamber and Williams in the Regge calculus approach. For $`G>G_c`$ the effective coupling at a physical separation $`r`$ behaves as
$$G(r)=G_c\left[1+c_1\left(\frac{r}{\xi }\right)^{1/\nu }+c_2\left(\frac{r}{\xi }\right)^{2/\nu }+\mathrm{\cdots }\right]$$
(1)
where $`\nu \equiv -1/\beta ^{\prime }(G_c).`$ In the case of pure gravity $`1/\nu \approx 2.8`$ and the beta function has a negative slope at $`G_c,`$ so the effective coupling increases with distance. Further, $`1/\xi `$ is found to be very small close to the fixed point, on the order of the inverse anti–de Sitter radius (since the simulations are Euclidean). While the precise value of $`\nu `$ will change when matter is added, the negative slope of the beta function has been linked to the fact that gravity is attractive, so the sign of $`\nu `$ should not change.
For general relativity, in the present framework, one expects then that the effective action will be a function of a correlation length $`\xi `$ alone. Thus, for example, the cosmological constant term in the effective action will naturally be $`k\sqrt{g}(\xi ^2G_\xi )^{-1}`$ with $`k`$ a computable pure number, and $`G_\xi `$ the value of Newton’s constant determined non-perturbatively by $`\xi .`$ ($`k`$ could be zero but numerical work does not suggest this.) Since the effective value of Newton’s constant grows with distance, it is natural to expect that important aspects of the physics will be incorporated in non-polynomial functions of derivatives. This is exactly analogous to Yang–Mills theory where physics at large momentum transfer is simple, while at small momentum transfer one sees qualitatively different physics characterized by the appearance of the scale $`\mathrm{\Lambda }_{\mathrm{QCD}}.`$ In particular, if one considers ordinary Newtonian dynamics for a test particle, we would expect the appearance of an acceleration scale $`1/\xi .`$ (The modification of Newtonian dynamics is not due to the simple increase in the coupling constant, just as confinement in QCD is not due to just the logarithmic growth of the effective coupling.) Lastly, note that string theory does not preclude the existence of a non–perturbative ultraviolet stable fixed point for quantum gravity relevant for describing physics at scales much below the string scale.
Why should the scale dependence appear as an acceleration scale? For example, one might imagine that a frequency scale might appear for orbital motion. Here one must recall that the effective action’s scale dependence is independent of the geometry under consideration—the scale dependence must be something universal that can be defined in gravity for any geometry. Based on the principle of equivalence, I suggest that the only universal quantity that one can expect is an acceleration scale. This is clearly a speculation, but at least qualitative confirmation within the $`ϵ`$ expansion should be possible. Another obvious motivation for an acceleration scale is provided by a comparison to the difference in small and large momentum transfer processes in QCD.
How do these qualitative features match cosmological and astrophysical observations? I consider cosmology first. Recent observational work on supernovae appears to support a source term in the Friedman equation with positive energy and negative pressure, similar to a cosmological constant. The ratio of the energy in the cosmological constant–like term to the critical energy density $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is $`0.6`$–$`0.7`$ for a spatially flat universe, which is difficult to explain from a particle physics point of view.
If one identifies the Hubble length $`H_0^{-1}=\xi ,`$ the existence of an ultraviolet stable fixed point in quantum gravity implies $`\mathrm{\Lambda }`$ is naturally the same order of magnitude as $`H_0^2.`$ Why should the present value of the Hubble length be the correlation length $`\xi `$? As explained above, the quantum effective action is a function of $`\xi `$ for any geometry. While one could invoke anthropic considerations, only within a larger theory can one ask this question intelligently, just as only within a larger theory can one expect to compute the ratio of $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ to the mass of the electron. The constancy of $`\xi `$ is important since variations of Newton’s constant are strongly constrained.
An explanation for the order of magnitude equality of $`\mathrm{\Omega }_{\mathrm{matter}}`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ requires a more complete theory as well, see for example. It is interesting to consider the effects of this scale on cosmology with spatial curvature, which is also a viable option. Spatial curvature appears as an integration constant in the equations of motion—it is possible albeit unnatural to have the scale of an integration constant different from the scale in the equations of motion.
Now consider galaxy scale physics. As is well–known, optical surface brightness in the disks of spiral galaxies falls off exponentially with radius. On the other hand, measured circular speeds of rotation as a function of distance from the center of the galaxy approach 220 $`(L/L_{*})^{0.22}`$ km/s at distances on the order of twice the disk length scale. If starlight traces mass, the measured circular velocities should fall off as $`r^{-1/2}`$. While the evidence is most obvious for spiral galaxies, similar puzzles exist for other galaxies. One can either explain this by positing the existence of dark matter, such as a preponderance at larger radii of stars with low masses or more exotic forms of non–luminous matter, or one can suppose that there is new dynamics that sets in at these large distances.
There is a large body of phenomenological work reviewed in that is referred to as the modified Newtonian dynamics (MOND) programme, aimed at explaining observed rotation curves in terms of modifications of gravity at small accelerations. The tenets of MOND are that the actual acceleration of a particle $`𝐠`$ is related to the Newtonian acceleration $`𝐠_N`$ by $`\mu (g/a_0)𝐠=𝐠_N,`$ where $`\mu `$ is a (matrix) function with the important feature that it involves a fundamental acceleration $`a_0.`$ Explaining galaxy rotation curves requires that $`\mu (x)\rightarrow 1`$ as $`x\rightarrow \mathrm{\infty }`$, and $`\mu (x)\rightarrow x`$ as $`x\rightarrow 0`$. The MOND programme does a good job of explaining the observed rotation curves, including the Tully–Fisher relation. Phenomenology gives $`a_0\approx 2\times 10^{-8}\mathrm{cm}/\mathrm{s}^2`$ which is equivalent to a length scale of order the present Hubble length, a fact that has found no explanation. There are no experimental constraints from terrestrial measurements on observed deviations from Newtonian dynamics at the accelerations of interest here. Milgrom has argued already that MOND requires strongly non–local higher–derivative terms.
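The flattening of rotation curves follows directly from the two limits of $`\mu `$. A minimal sketch for a point mass, using the common interpolating function $`\mu (x)=x/\sqrt{1+x^2}`$ (our choice for illustration; the text does not single out a specific form):

```python
import numpy as np

G, M = 4.30e-6, 1.0e11          # kpc (km/s)^2 / Msun; Msun (point mass)
a0 = 2e-13 * 3.086e16           # 2e-8 cm/s^2 in (km/s)^2 / kpc

r = np.linspace(2, 60, 12)      # kpc
gN = G * M / r**2               # Newtonian acceleration
# mu(x) = x/sqrt(1+x^2) lets mu(g/a0) g = gN be solved in closed form:
g = np.sqrt(gN**2 / 2 + np.sqrt(gN**4 / 4 + gN**2 * a0**2))
v = np.sqrt(g * r)              # circular speed, km/s
print(np.round(v, 1))           # flattens near (G*M*a0)**0.25 ~ 230 km/s
```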
As discussed above, non–polynomial higher–derivative corrections will naturally appear in the quantum effective action with a scale set by $`\xi `$. While it is not possible to compute these in a quantitative analytic form at present, the relation between the cosmological constant and the MOND acceleration scale is exactly what one expects in the quantum effective action from the existence of an ultraviolet stable fixed point. The small value of the acceleration scale is directly related to the large value of the correlation length. Note particularly in light of the comments above regarding the constancy of $`\xi `$ that the light observed from distant galaxies was emitted at a different value of the Hubble parameter, but the rotation curves are fit by an acceleration scale that is equivalent to the present Hubble parameter. Furthermore, the predicted increase in the effective Newtonian attraction at larger separations is qualitatively consistent with the form suggested by phenomenology.
In conclusion, I have argued here that the dependence of the effective action of quantum gravity on the renormalization group invariant correlation length may account for the effective value of the cosmological constant and for the rotation curves of galaxies. There is no way to separate the cosmological implications and the implications for small accelerations of the appearance of $`\xi `$—quantum field theory predicts a universal scale parameter. The relations suggested in the present letter are conservative and can be tested numerically. The qualitative features of quantum gravity are in analogy with the proven physics of the NLSM, and are supported by the available lattice data. In ending I must mention that quantum gravity has been suggested previously as a source of infrared effects—see for example . The connections considered here have not been suggested in the literature—see, however, for other work connecting a universal acceleration scale and a cosmological constant.
I am grateful to J. Cohn, H. Hamber, G. Lifschytz, M. Milgrom, J. Peebles, L. Randall, B. Ratra, P. Steinhardt, C. Thompson and M. White for their help. This work was supported in part by NSF grant PHY-9802484.
# Direct exchange in the edge-sharing spin-1/2 chain compound MgVO3
## Abstract
Bandstructure calculations with different spin arrangement for the spin-chain compound MgVO<sub>3</sub> have been performed, and paramagnetic as well as magnetic solutions with ferro- and antiferromagnetically ordered chains are found, the magnetic solutions being by 0.22 eV per formula unit lower than the paramagnetic one. The orbital analysis of the narrow band crossing the Fermi level in the paramagnetic solution reveals that the band has almost pure vanadium $`3d`$ character, the lobes of the relevant $`d`$-orbitals at the neighboring in-chain sites being directed towards each other, which suggests direct exchange. The tight-binding analysis of the band confirms the strong exchange transfer between neighboring in-chain V-ions. Besides, some additional superexchange transfer terms are found, which give rise both to in-plane coupling between the chains and to frustration, the dominant frustration occurring due to the interchain interactions.
Spin chains and ladders are of fundamental interest for solid state physics due to their peculiar properties. In recent years many spin-chain compounds have been found, mostly cuprates and vanadates. There one can distinguish between the corner-sharing compounds with a 180° (T)ransition metal–(L)igand–T bond (Sr<sub>2</sub>CuO<sub>3</sub> ) and the edge-sharing compounds with a 90° T–L–T bond (Li<sub>2</sub>CuO<sub>2</sub>, CuGeO<sub>3</sub>, MgVO<sub>3</sub> ). In cuprates the relevant $`d`$ orbitals are directed towards the ligand ions, which results in a strong antiferromagnetic superexchange interaction for a 180° T–L–T bond and a weaker ferromagnetic coupling for a 90° T–L–T bond according to the Goodenough-Kanamori-Anderson (GKA) rules. Deviations from the GKA rules are known for CuGeO<sub>3</sub> due to the presence of side groups.
Recently, the compound MgVO<sub>3</sub> was proposed as a new candidate for a model spin-chain system and magnetic susceptibility measurements were presented. The data suggest short range antiferromagnetic spin correlations with the constant of the high-temperature Curie-Weiss law $`\theta \approx -100`$ K. The data were analyzed within a $`1D`$ spin-1/2 Heisenberg model with the nearest neighbor $`J_1`$ and the next nearest neighbor $`J_2`$ exchange couplings, and a frustration $`\alpha =J_2/J_1`$ close to the critical value $`\alpha _c=0.24`$ was found. Here we present bandstructure calculations for this compound and a corresponding orbital and tight-binding (TB) analysis.
The base-centered orthorhombic crystal structure of MgVO<sub>3</sub> was determined by Bouloux et al. and is shown in Fig. 1. It consists of edge-sharing VO<sub>2</sub> chains running along the $`y`$-direction which are coupled in the $`x`$-direction by V–O–O–V bonds. For a convenient presentation of the orbital analysis the coordinate system is rotated by $`90^{\circ }`$ about the $`x`$-axis relative to the standard one, so that the space group reads as $`Bm2_1b`$ instead of the $`Cmc2_1`$ given in Ref. .
One notes a slight tilting of the VO<sub>5</sub> pyramids out of the $`z`$-direction and an asymmetric coordination of V and O1 ions. If one distorts the structure in a way depicted in Table I both the tilting and the asymmetry are removed, a center of inversion appears, and the space group becomes $`Bmmb`$. All the following results were obtained using the model structure, because this simplification considerably reduces the computational efforts, and, as we have checked for the non-spin-polarized case, the deviation from the result obtained with the real crystal structure is negligible.
The calculation of the bandstructure was performed using the full-potential nonorthogonal local-orbital minimum-basis bandstructure scheme (FPLO) within the local spin density approximation (LSDA). The calculation was non-relativistic; the exchange and correlation potential was taken from Ref. . The set of valence orbitals was chosen to be Mg: $`3s3p3d`$, V: $`3s3p4s4p3d`$ and O: $`2s2p3d`$. The inclusion of the vanadium $`3s3p`$ states turned out to be unavoidable since the V–O2 distance of about $`3.11a_0`$ is small enough to yield a slight overlap of these states. The oxygen and magnesium $`3d`$ orbitals were included to increase the basis completeness; though not occupied, they contribute to the overlap density. The extent of the basis orbitals, controlled by a confining potential $`(r/r_0)^4`$, was optimized with respect to the total energy.
In our calculation the Bloch state $`|𝐤\nu 〉`$ is composed of overlapping atomic-like orbitals $`|Lij〉`$ centered at the atomic sites $`j`$ in the unit cell $`i`$ with coordinates $`𝐑_{ij}`$:
$$|𝐤\nu 〉=\sum _{Lij}C_{Lij}^{𝐤\nu }e^{\mathrm{i}𝐤𝐑_{ij}}|Lij〉,$$
(1)
with the normalization condition $`〈𝐤\nu |𝐤\nu 〉=1`$. $`L`$ stands for $`\{nlm\sigma \}`$ denoting the main, orbital, and magnetic quantum numbers and the projection of spin, respectively.
We have performed three kinds of computation: one non-spin-polarized and two spin-polarized, with collinear and anticollinear spin polarization at the neighboring in-chain vanadium ions. In the latter case we assumed ferromagnetic order along the $`x`$- and $`z`$-directions. In both spin-polarized calculations we have found magnetic (ferro- and antiferro-) solutions with the total energy about 0.22 eV per formula unit lower than that of the paramagnetic solution of the non-spin-polarized calculation. The LSDA accuracy does not allow one to determine which of the magnetic states is preferable. The magnetic moment $`(n_{↑}-n_{↓})\mu _\mathrm{B}`$ of the vanadium ion in both magnetic solutions is close to the saturated value $`1\mu _\mathrm{B}`$. The magnetic moments of the other ions are much smaller; the only appreciable one occurs at the apex oxygen (O2), with a value of about $`0.05\mu _\mathrm{B}`$, directed opposite to the moment of the neighboring vanadium ion.
The results of all three calculations are presented in Fig. 2. The paramagnetic solution has metallic character with a half-filled conduction band at the Fermi level, whereas the band splits in two in the magnetic solutions and an insulating gap opens, of about 0.5 eV and 0.8 eV in the ferro- and antiferromagnetic cases, respectively. However, it can be expected that the real gap is caused mainly by the electron correlation and is considerably larger than the one produced by the magnetic ordering. The correlation gap should persist in the paramagnetic case as well.
FIG. 2.: Band structure and density of states of MgVO<sub>3</sub> for the paramagnetic (a), ferromagnetic (b) and antiferromagnetic (c) solutions. The symmetry point coordinates of the $`B`$-base centered orthorhombic Brillouin zone are given in units $`\pi (2/a,1/b,2/c)`$. The majority and minority (tinier dots and shaded DOS) spin parts are shown.
For a deeper understanding of the electronic structure we have performed an orbital analysis for the paramagnetic case. The weight $`W_{Lj}^{𝐤\nu }`$ of the orbital $`|Lj〉`$ in the Bloch state $`|𝐤\nu 〉`$ (Eq. 1) was taken as
$$W_{Lj}^{𝐤\nu }=\sum _i|C_{Lij}^{𝐤\nu }|^2.$$
(2)
The sum of all weights $`\sum _{Lj}W_{Lj}^{𝐤\nu }`$ is approximately unity, with some deviation due to the nonorthogonality of the basis orbitals. Accordingly, the orbital weights were normalized with respect to this sum.
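As an aside, the bookkeeping of Eq. (2) amounts to a few lines of array arithmetic. The sketch below (our own illustration, independent of the FPLO code; a randomly generated coefficient array stands in for the $`C_{Lij}^{𝐤\nu }`$) sums $`|C|^2`$ over unit cells and renormalizes so that the weights add up to one:

```python
import numpy as np

# Hypothetical coefficients C[L, i] of one Bloch state |k,nu>:
# rows = orbital labels L (at fixed site j), columns = unit cells i.
rng = np.random.default_rng(0)
C = rng.normal(size=(5, 100)) + 1j * rng.normal(size=(5, 100))

# Eq. (2): raw weight of orbital L is the sum over cells of |C|^2.
W_raw = np.sum(np.abs(C) ** 2, axis=1)

# The basis is nonorthogonal, so the raw weights need not add to unity;
# normalize with respect to their sum, as described in the text.
W = W_raw / W_raw.sum()

for L, w in enumerate(W):
    print(f"orbital {L}: weight {w:.3f}")
```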
According to the analysis, the valence bands consist of a lower oxygen $`2p`$-orbital complex which is separated by 3 eV from the upper vanadium $`3d`$-orbitals (cf. Fig. 2). The $`3d`$-orbitals are split according to the standard crystal field rules. At (0,1,0) their energy rises in the order: $`d_{x^2-y^2}`$, $`d_{yz}`$, $`d_{zx}`$, $`d_{3z^2-r^2}`$, $`d_{xy}`$. Only the lowest $`3d_{x^2-y^2}`$-orbital is partially occupied, being half-filled with two electrons per two vanadium atoms in the primitive unit cell. The corresponding band (see Fig. 3) is very narrow, having a width of only 0.8 eV. The weight of the V: $`3d_{x^2-y^2}`$ orbital in the band is larger than 70 percent, but there are considerable contributions of up to 20 percent from the O1: $`2p_x`$ and the V: $`3d_{yz}`$ orbitals, as well as smaller contributions from some other orbitals. Comparing the situation with that in cuprates having edge-sharing CuO<sub>2</sub> chains (Li<sub>2</sub>CuO<sub>2</sub> or CuGeO<sub>3</sub>) one observes an important distinction: in the cuprates the relevant Cu: $`3d`$-orbital is directed towards the oxygen ions, whereas in the present case it is directed towards the neighboring vanadium ions. This implies an indirect superexchange mechanism via the intermediate ligand in the cuprates, but probably a dominant direct exchange process between neighboring vanadium ions and a much smaller superexchange hopping to the second in-chain neighbor in MgVO<sub>3</sub>. Let us note that the different orientation of the relevant $`3d`$-orbital in vanadates and cuprates with similar crystal structure has to be a rather common feature, because in vanadates the energetically lowest $`3d`$-orbital is half-filled, in contrast to the highest one in cuprates.
The coupling between the neighboring chains occurs mainly in the $`xy`$-plane via the intermediate O1: $`2p_x`$ orbitals. This is manifested by the strong dispersion along the $`(k_x,0,0)`$ direction (see Fig. 3) of the lower part of the band, which has a remarkable contribution from this orbital, whereas the upper part, having a contribution only from the O1: $`2p_y`$ orbital, is almost dispersionless. Fig. 4, showing the relevant orbitals at the $`\mathrm{\Gamma }`$-point, illustrates the interchain coupling via $`\sigma `$-bonds of neighboring O1: $`2p_x`$ orbitals.
To quantify the result a TB analysis has been performed for the relevant band of the paramagnetic solution. The hopping processes to the nearest ($`t_{1y}`$) and the next nearest ($`t_{2y}`$) in-chain neighbors, as well as two hopping processes ($`t_x,t_{xy}`$) to the next chain and one ($`t_{xyz}`$) possible coupling in the $`z`$-direction were taken into account:
$$E_0-E_𝐤=2t_{1y}\mathrm{cos}\frac{k_yb}{2}+2t_{2y}\mathrm{cos}k_yb+2t_x\mathrm{cos}k_xa+4t_{xy}\mathrm{cos}k_xa\mathrm{cos}\frac{k_yb}{2}+8t_{xyz}\mathrm{cos}\frac{k_xa}{2}\mathrm{cos}\frac{k_yb}{2}\mathrm{cos}\frac{k_zc}{2},$$
yielding the parameters (in meV): $`t_{1y}=125`$, $`t_{2y}=20`$–$`26`$, $`t_x=50`$, $`t_{xy}=20`$ and a tiny value of 3 for $`t_{xyz}`$. The data confirm that the ratio $`t_{2y}/t_{1y}`$, a measure of the frustration in the chain direction, is much smaller than the corresponding value for CuGeO<sub>3</sub> and Li<sub>2</sub>CuO<sub>2</sub>. It should be noted that the parameter $`t_{1y}`$ is much larger than the corresponding inter-ladder hopping in NaV<sub>2</sub>O<sub>5</sub> given in Ref. , probably due to a mutual compensation of different transfer paths in the ladder compound.
An estimate of the corresponding exchange integrals of the effective Heisenberg Hamiltonian
$$\widehat{H}=\frac{1}{2}\sum _{ij}J_j𝐒_i\cdot 𝐒_{i+j}$$
can be performed in the one-band Hubbard description, which gives an antiferromagnetic exchange of roughly $`J_j=4t_j^2/U`$ with a value of $`U`$ still to be determined. Using the experimental value of $`J_{1y}≈10`$ meV given in Ref. one gets $`U=6.25`$ eV, which seems to be slightly overestimated. Most probably, additional ferromagnetic processes (possibly via the $`3d_{yz}`$ orbital or indirect processes via oxygen) have to be included to improve the estimate. Our TB analysis suggests a rather large value for $`J_x`$. It gives two-dimensional antiferromagnetic order with the frustration terms $`J_{2y}`$ and $`J_{xy}`$ (see Fig. 4). Though $`t_{2y}`$ and $`t_{xy}`$ are of nearly the same value, $`J_{xy}`$ dominates due to the larger coordination number. In a simple mean field approach one can define an effective frustration exchange $`J_{\mathrm{eff}}=2J_{xy}+J_{2y}`$, yielding an effective frustration $`J_{\mathrm{eff}}/J_{1y}`$ of roughly $`0.1`$; this is, however, a very preliminary analysis.
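These numbers can be reproduced in a few lines. The sketch below (our own illustration) applies $`J_j=4t_j^2/U`$ with the $`U=6.25`$ eV fixed above and forms the mean-field combination $`J_{\mathrm{eff}}=2J_{xy}+J_{2y}`$:

```python
# Hopping integrals from the tight-binding fit (eV).
t = {"1y": 0.125, "2y": 0.023, "x": 0.050, "xy": 0.020}
U = 6.25  # eV, fixed by matching J_1y to the experimental ~10 meV

# One-band Hubbard estimate of the antiferromagnetic exchange.
J = {key: 4 * tj**2 / U for key, tj in t.items()}
for key, val in J.items():
    print(f"J_{key} = {1e3 * val:.2f} meV")

# Mean-field effective frustration: J_eff = 2*J_xy + J_2y.
J_eff = 2 * J["xy"] + J["2y"]
print(f"J_eff / J_1y = {J_eff / J['1y']:.2f}")  # roughly 0.1
```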
Thus, we have found that the VO<sub>2</sub> chains of MgVO<sub>3</sub> indeed have a spin-1/2 structure. In contrast to the crystallographically similar edge-sharing cuprates, the relevant $`3d`$ orbital is directed towards the nearest in-chain transition metal (V) ions. This suggests a direct antiferromagnetic exchange process between the neighboring in-chain spins. A similar process was proposed earlier for the inter-ladder exchange coupling in MgV<sub>2</sub>O<sub>5</sub>. The importance of the direct vanadium $`d`$-$`d`$ transfer was also anticipated in Ref. for CaV<sub>4</sub>O<sub>9</sub>, a compound with a V-V distance slightly larger (3.00 Å) than in MgVO<sub>3</sub> (2.96 Å), where a value of 80 meV for the transfer was reported. Besides the direct exchange we have also found some additional superexchange terms which give rise to the coupling between the chains in the $`x`$-direction and to the frustration. The dominant frustration occurs between neighboring chains, in contrast to the naive picture of frustration due to next-nearest in-chain neighbor superexchange. This suggests that the experimental data require an analysis in the framework of a two- rather than a one-dimensional Heisenberg model.
###### Acknowledgements.
We thank Christoph Geibel for useful discussions.
# Modelling the Origin of Astrophysical Jets from Galactic and Extra-galactic Sources
## 1 Introduction
One of the most prominent signatures of activity around Active Galactic Nuclei (AGN) is the presence of mass outflows and jets. AGNs produce cosmic jets through which immense amounts of matter and energy are ejected out of the cores of the galaxies . The structure of these jets can reach sizes of several million light years and extend far beyond their host galaxies into the vastness of intergalactic space. Similarly, microquasars <sup>3</sup><sup>3</sup>3Microquasars are stellar-mass black holes in our galaxy which mimic, on a smaller scale, the phenomena seen in quasars. Unlike AGN and Quasar jets (where the extent of the jets may reach several million light years), the double sided jets coming out of these objects can have sizes of up to a few light years only. have also been discovered very recently, where the mass outflows are formed from stellar mass black hole candidates .
Fig. 1: VLA radio image of Cygnus A after repeated correction for atmospheric effects and calibration errors (Observation by R. H. Perley and J. W. Dreher). The bright spot in the center is the core of the galaxy. Two bright patches on either side of the core (one at the top right and the other at the lower left corner of the image) represent the radio emitting lobes, while the thin band connecting the central spot with the top right patch is the jet shooting out from the core (Reproduced with kind permission from R. H. Perley and J. W. Dreher)
Looking back at the era of infancy of radio observations of extragalactic sources, one sees that in 1953 Jennison and Das Gupta discovered that the radio emission from Cygnus A originated in two amorphous blobs straddling symmetrically the associated optical galaxy rather than in the galaxy itself (Figure 1). Subsequent observations of other powerful radio emitting sources revealed that this is a rather general phenomenon. Initially it was thought that these radio emitting blobs had been directly shot out of the core of the galaxy; however, this idea created some dynamical problems (the adiabatic loss problem; see, for example, for details), which prompted theoreticians to propose a “black box model” to explain the phenomenon. In early theoretical contributions, what people put forward was basically a black box sitting at the dynamical center of the galaxy, doing something interesting so that the radio emitting lobes are continuously fueled by matter and energy channelled from the galactic center. There is no need to go into the history of the subsequent discussion (and there is no scope to do so either, due to the limitation of space; interested readers may have a look at and \[6-7\]); suffice it to say that from the present status of observational evidence , we are in a strong position to say that these channels of matter and energy, or jets (as they were first named by Baade and Minkowski in 1954 ), are a ubiquitous feature of AGNs, Young Stellar Objects (YSOs) and of some small scale prototypes of AGNs, SS433 for example, which is believed to harbour a neutron star at its center. (For a detailed discussion of the SS433 jet, see \[10-11\].) Although much work has been done since the first theoretical contribution to this black box model approach , on how such jets interact with their surroundings and on how such interaction may convey information about the morphology of different extragalactic radio sources, the fundamental problem of what exactly is happening inside the black box still remains unresolved.
From the observational point of view, probably the most attractive feature of these astrophysical jets is that they are the most prominent and visible signatures of AGNs. Hence, studying the jets has been one of the most exhaustive parts of the research carried out by observational astrophysicists over many years, and a huge “zoo” of different jet species has emerged.
On the other hand, from the theoretical front, the non-stellar activities around AGNs are thought to be produced by a powerful engine sitting at the dynamical center of the galaxy . Because the high luminosity produced by AGNs is concentrated in a very small volume, it has been strongly argued that these engines are basically powered by accretion onto massive black holes. This “black hole hypothesis”, namely that essentially all AGNs contain $`10^6`$–$`10^9M_{\odot }`$ black holes \[13-14\] and that these objects together with their orbiting accretion disks are the prime movers for most of the powerful activities of AGNs, including the formation of bipolar outflows and relativistic jets, is further supported by recent observational evidence, especially from the VLBI observations and the HST data, where the signature of the so called “jet disk symbiosis” is supposed to have been detected . That means that for most (if not all) AGNs and microquasars, the jets and the accretion disks around the central compact object are symbiotically related . Probably this has to be the case in reality because, in the absence of any binary companion, the jet is supposed to be the only outlet for the intrinsic angular momentum of the interstellar/intergalactic matter accreting onto an isolated compact object. So the accretion powered outflows are not merely an incidental by-product of the mass flow through the disk but, in fact, are a necessary ingredient in the accretion process, in that they constitute the main mechanism for removing excess angular momentum of the inflowing matter. Hence, it is quite logical to conclude that jet formation and accretion onto isolated black holes are not two different issues to be studied disjointedly; they must be strongly correlated, and it is necessary to study the accretion and the jet within the same framework. On the other hand, the major difference between ordinary stellar outflows and the outflows/jets from the vicinity of a black hole or a neutron star is that the latter objects do not have their own atmospheres, and outflows/jets in this case have to be generated from the inflowing material only.
Keeping these basic facts in mind, our aim was to study theoretically the mass outflow from galactic/extragalactic sources more realistically than has been attempted so far. The existing models which study the origin, acceleration and collimation of mass outflow in the form of jets from AGNs and Quasars are roughly of three types. The first type of solutions confine themselves to the jet properties only, completely decoupled from the internal properties of accretion disks \[18-20\]. In the second type, efforts are made to correlate the internal disk structure with that of the outflow using hydrodynamic, magnetohydrodynamic or electromagnetic considerations ( and references therein, \[21-24\]). In the third type, numerical simulations are carried out to actually see how matter is deflected from the equatorial plane towards the axis \[25-31\]. On the analytical front, though the wind type and accretion type solutions come out of the same set of governing equations \[23-24\], until now there has been no attempt to obtain an estimate of the outflow rate from the inflow rate. A theoretical model has very recently been developed \[32-40\] which, for the first time we believe, can compute (semi-analytically and semi-numerically) the absolute value of the mass outflow rate from the matter accreting onto galactic and extra-galactic black holes, using combinations of exact transonic accretion and wind topologies which form a self-consistent inflow-outflow system. The simplicity of black holes and neutron stars lies in the fact that they do not have atmospheres. But the inflowing matter surrounding them does, and methods similar to those employed for stellar atmospheres should be applicable to the accreting matter surrounding them. The approach in our model is precisely this. We first determine the properties of the inflow and outflow and identify solutions to connect them. In this manner we self-consistently determine what fraction of the matter accreting onto these compact objects comes out as outflows/jets.
If $`\dot{M}_{in}`$ is the rate of accretion onto a compact object (the amount of matter falling in per unit time) and $`\dot{M}_{out}`$ is the rate of outflow (the amount of matter being ‘kicked out’ per unit time as wind), we call the ratio $`\left(\frac{\dot{M}_{out}}{\dot{M}_{in}}\right)`$ the ‘Mass Outflow Rate’ and denote it by $`R_{\dot{m}}`$. Thus $`R_{\dot{m}}`$ is a measure of the ratio of the outflow rate to the inflow rate of matter. The major aim of our work is to compute the absolute value of $`R_{\dot{m}}`$ in terms of the inflow parameters and to study the dependence of $`R_{\dot{m}}`$ on those parameters. The computation has been carried out for a Schwarzschild black hole using the Paczyński-Wiita pseudo-Newtonian potential , which mimics the Schwarzschild space-time in an excellent fashion.
For matter accreting with considerable intrinsic angular momentum (formation of accretion disks), we establish \[32-36\] that the bulk of the outflow comes from the CENtrifugal pressure dominated BOundary Layer (called CENBOL, the formation of which will be explained in the next section). We find that $`R_{\dot{m}}`$ varies anywhere from a few percent to close to a hundred percent depending on the initial parameters of the inflow, the degree of compression of matter near the CENBOL and the polytropic index of the flow. Our model thus not only provides a sufficiently plausible estimation of $`R_{\dot{m}}`$, but is also able to study the variation of this rate as a function of the various parameters governing the flow \[33-36\].
We have also studied the mass outflow from spherical accretion with zero intrinsic angular momentum \[37-40\]. It has been shown that a self-supported spherical pair-plasma mediated standing shock may be produced even for accretion with zero angular momentum. We have taken this shock surface as the generating surface of mass outflow for spherical inflow and have compared the results with those obtained for outflows generated from the CENBOL (the disk-outflow system).
The plan of this article is as follows:
In the next section we describe our model for the disk-outflow system (matter accreting onto a compact object with considerable intrinsic angular momentum) and summarize the results obtained in this case. In §3, we present how we model the inflow-outflow system for zero angular momentum quasi-spherical accretion. Finally, in §4, we discuss some possible extensions of our work.
## 2 Disk-Outflow System:
### 2.1 Formation of CENBOL and the Outflow Geometry
Before we proceed further, let us describe the basic properties of the rotating inflow and outflow. A rotating inflow with a given specific angular momentum (angular momentum per unit mass) entering a black hole will have almost constant angular momentum close to the black hole for any moderate viscous stress. This is because the viscous time scale to transport angular momentum is generally much longer than the infall time scale (since near the black hole the flow ‘advects’ inward with enormously large radial velocity), and even though at the outer edge of the accretion disk the angular momentum distribution may be Keplerian or even super-Keplerian, matter would be highly sub-Keplerian close to the black hole. This happens because the flow has to enter through the horizon with the velocity of light, and the presence of this large inertial (ram) force, in addition to the usual gravitational and centrifugal forces, makes the flow sub-Keplerian. This almost constant angular momentum produces a very strong centrifugal force which increases much faster than the gravitational force (while the centrifugal force varies inversely with the cubic power of the radial distance measured from the central gravitating body, the gravitational attraction falls off obeying a modified inverse square rule, because we are using pseudo-Newtonian geometry instead of a fully general relativistic treatment) and becomes comparable to it at some specific radial distance, the location of which is easy to compute. Here (actually, a little farther out, due to thermal pressure) matter starts piling up and produces the centrifugal pressure supported boundary layer (CENBOL). Closer to the black hole, gravity always wins, and matter enters the horizon supersonically after passing through a sonic point. The CENBOL may or may not have a sharp boundary, depending on whether standing shocks form or not. Generally speaking, in a polytropic flow, if the polytropic index $`\gamma >1.5`$ then shocks do not form, and if $`\gamma <1.5`$ only a region of the parameter space forms the shock . In any case, the CENBOL forms. In this region the flow becomes hotter and denser (matter is either ‘shock-compressed’ or compressed by the maximization of the polytropic pressure of the inflow) and for all practical purposes behaves as a stellar atmosphere so far as the formation of outflows is concerned. <sup>4</sup><sup>4</sup>4Inflows onto neutron stars behave similarly, except that the ‘hard-surface’ inner boundary condition dictates that the flow remains subsonic between the CENBOL and the surface, rather than becoming supersonic as in the case of a black hole . A part of the hot and dense accreted matter, with shock-generated higher entropy density (piled up on the CENBOL), is then ‘squirted’ out as outflow. In cases where the shock does not form, the region around the pressure maximum achieved just outside the inner sonic point of the inflow would also drive the flow outwards. The picture of the outflow we have in mind is that the outflow is thermally and centrifugally accelerated but confined by the external pressure of the ambient medium.
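To make the force balance concrete, the following sketch (our own illustration, not taken from \[32-36\]) locates the radii where the centrifugal force of a constant angular momentum flow equals the pseudo-Newtonian gravitational pull on the equatorial plane. We measure $`r`$ in Schwarzschild radii and $`\lambda `$ in units of $`2GM/c`$, so the Paczyński-Wiita potential reads $`\varphi (r)=-1/(2(r-1))`$. An exact balance requires $`\lambda `$ above the marginally stable value of about 1.84, so the sketch uses $`\lambda =2.0`$ for illustration; for smaller $`\lambda `$ (such as the 1.75 used later) thermal pressure completes the job of halting the flow.

```python
from scipy.optimize import brentq

# Units: r in Schwarzschild radii (r_g = 2GM/c^2), lambda in 2GM/c,
# Paczynski-Wiita potential phi(r) = -1/(2(r - 1)), G = M = c = 1.
def force_balance(r, lam):
    """Centrifugal force minus pseudo-Newtonian gravity on the
    equatorial plane for a constant angular momentum flow."""
    return lam**2 / r**3 - 1.0 / (2.0 * (r - 1.0)**2)

lam = 2.0  # must exceed lambda_ms ~ 1.84 for an exact balance
r_inner = brentq(force_balance, 1.2, 2.5, args=(lam,))
r_outer = brentq(force_balance, 3.0, 8.0, args=(lam,))
print(f"centrifugal force balances gravity at r = {r_inner:.2f} "
      f"and r = {r_outer:.2f} Schwarzschild radii")
# Between the two radii the centrifugal term dominates, so inflowing
# matter decelerates and piles up near r_outer, forming the CENBOL.
```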
Outflow rates from accretion disks around black holes and neutron stars must be related to the properties of the CENBOL, which in turn depend on the inflow parameters. Subsonic outflows originating from the CENBOL would pass through sonic points and reach large distances as in a wind solution. Figure 2 is a schematic diagram showing the geometry of the disk-jet system proposed in our model. The arrows show the axis of the whirling jet, D(K) stands for the Keplerian part of the disk and D(SK) stands for the sub-Keplerian part. The CENBOL forms somewhere inside D(SK), and J stands for the hollow conical jet structure.
Fig. 2:Geometry of the disk-jet system
There are two surfaces of utmost importance in flows with angular momentum. One is the ‘funnel wall’, where the effective potential (the sum of the gravitational potential and the specific rotational energy) vanishes. In the case of a purely rotating flow, this is the ‘zero pressure’ surface. Flows cannot enter inside the funnel wall because the pressure would be negative (Fig. 3). The other surface is called the ‘centrifugal barrier’. This is the surface where the radial pressure gradient of a purely rotating flow vanishes; it is located outside the funnel wall simply because the flow pressure is higher than zero on this surface. Flow with inertial pressure easily crosses this ‘barrier’ and either enters the black hole or flows out as wind, depending on its initial parameters (a detailed classification of the parameter space is in ). In our model the outflow generally hugs the ‘funnel wall’ and goes out in between these two surfaces (see for detail).
### 2.2 Model Description, Solution Procedure and Results
#### 2.2.1 Model Description
We consider thin, axisymmetric polytropic inflows in vertical equilibrium (otherwise known as 1.5-dimensional flows ). We ignore the self-gravity of the flow, and viscosity is assumed to be significant only at the shock, so that entropy is generated there. We carry out the calculations using the Paczyński-Wiita potential, which mimics the surroundings of a Schwarzschild black hole. Considering the inflow to be polytropic, we explore both polytropic and isothermal outflows.
For polytropic outflows, the specific energy $``$ is assumed to remain fixed throughout the flow trajectory as it moves from the disk to the jet. At the shock, entropy is generated and hence the outflow is of higher entropy for the same specific energy.
For isothermal outflow, we assume that the outflow has exactly the same temperature as that of the post-shock flow, but the energy is not conserved as matter goes from the disk to the jet. In other words, the outflow is kept in a thermal bath at the temperature of the post-shock flow. The temperature of the outflow is obtained from the proton temperature of the advective region of the disk. The proton temperature is obtained using the Comptonization, bremsstrahlung, inverse bremsstrahlung and Coulomb processes ( and references therein). In both models of the outflow, we assume that the flow is primarily radial.
#### 2.2.2 Solution Procedure
Let us suppose that matter first enters through the outer sonic point and passes through a shock (see for the parameter space classification). At the shock, part of the incoming matter, having higher entropy density, is likely to return back as wind through a sonic point other than the one through which it just entered. Thus a combination of topologies, one from the accretion region and the other from the wind region, is required to obtain a full solution. In the absence of shocks, the flow is likely to bounce back at the pressure maximum of the inflow, and since the outflow would be heated by photons, and thus have a smaller polytropic constant, the flow would leave the system through an outer sonic point different from that of the incoming solution. Thus finding a complete self-consistent solution boils down to finding the outer sonic point of the outflow and the mass flux through it \[33-36\].
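As a minimal sketch of this step (our own illustration; the full procedure of \[33-36\] includes rotation and the modified shock conditions), consider a purely radial polytropic flow in the Paczyński-Wiita potential $`\varphi (r)=-1/(2(r-1))`$ with $`r`$ in Schwarzschild radii and $`G=M=c=1`$. The regularity condition at the critical point gives $`v_c^2=a_c^2=(r_c/2)\varphi ^{}(r_c)`$, so the sonic radius follows from the conserved specific energy $`=a_c^2(\gamma +1)/(2(\gamma -1))+\varphi (r_c)`$:

```python
from scipy.optimize import brentq

# Radial polytropic flow in the Paczynski-Wiita potential,
# phi(r) = -1/(2(r-1)); r in Schwarzschild radii, G = M = c = 1.
def energy_at_critical_point(rc, gamma):
    """Specific energy of a transonic flow whose sonic point is rc.
    Uses v_c^2 = a_c^2 = (rc/2) phi'(rc) from the regularity condition."""
    ac2 = rc / (4.0 * (rc - 1.0)**2)
    return ac2 * (gamma + 1.0) / (2.0 * (gamma - 1.0)) - 1.0 / (2.0 * (rc - 1.0))

def sonic_radius(E, gamma):
    """Outer sonic point for a given specific energy E."""
    return brentq(lambda r: energy_at_critical_point(r, gamma) - E, 2.0, 1.0e6)

E = 0.0005          # specific energy used for the inflow in the text
for gamma_o in (1.3, 1.15, 1.05):
    rc = sonic_radius(E, gamma_o)
    print(f"gamma_o = {gamma_o}: outer sonic point at r ~ {rc:.0f}")
```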
A supply of the parameters $``$ (specific energy of the inflow), $`\lambda `$ (specific angular momentum of the inflow), $`\gamma `$ (polytropic index of the inflow) and $`\gamma _o`$ (polytropic index of the outflow) makes a self-consistent computation of $`R_{\dot{m}}`$ possible (see for detail). All physical quantities are measured in geometric units. It is to be noted that when outflows are produced, one cannot use the usual Rankine-Hugoniot relations at the shock location, since the mass flux is no longer conserved in accretion but is partly lost in the outflow. Accordingly, we modified the standard Rankine-Hugoniot condition .
#### 2.2.3 Results
By simultaneously solving the proper set of equations in the appropriate geometry (see for detail), we get the combined flow topologies presented in Figure 4, which shows a typical solution combining the accretion and the outflow. The Mach number (the ratio of the mechanical to the thermal velocity of matter) is plotted along the ordinate, while the distance from the central object (in units of the Schwarzschild radius) is plotted on a logarithmic scale along the abscissa. The input parameters are $`=0.0005`$, $`\lambda =1.75`$ and $`\gamma =4/3`$, corresponding to relativistic inflow. The solid curve with an incoming arrow represents the pre-shock region of the inflow, and the long-dashed curve with an arrow inward represents the post-shock inflow which enters the black hole after passing through the inner sonic point (I). The solid vertical line at $`X_{s3}`$ (the leftmost vertical transition) with double arrow represents the shock transition obtained with the exact Rankine-Hugoniot condition (i.e., with no mass loss). The actual shock location obtained with the modified Rankine-Hugoniot condition is farther out than the original location $`X_{s3}`$. Three vertical lines connected with the corresponding dotted curves represent three outflow solutions for the parameters $`\gamma _o=1.3`$ (top), $`1.15`$ (middle) and $`1.05`$ (bottom). The outflow branches shown pass through the corresponding sonic points. It is evident from the figure that the outflow moves along solution curves completely different from the ‘wind solution’ of the inflow, which passes through the outer sonic point ‘O’. The leftmost shock transition ($`X_{s3}`$) is obtained from the unmodified Rankine-Hugoniot condition, while the other transitions are obtained when the mass outflow is taken into account. The mass loss ratio $`R_{\dot{m}}`$ in these cases is $`0.256`$, $`0.159`$ and $`0.085`$ respectively.
We can summarize the results (see for details) of our calculation as follows:
a) It is possible that most of the outflows are coming from the centrifugally supported boundary layer (CENBOL) of the accretion disks.
b) The outflow rate generally increases with the proton temperature of CENBOL. In other words, winds are, at least partially, thermally driven. This is reflected more strongly when the outflow is isothermal.
c) Even though specific angular momentum of the flow increases the size of the CENBOL, and one would have expected a higher mass flux in the wind, we find that the rate of the outflow is actually anti-correlated with the $`\lambda `$ of the inflow. On the other hand, if the angular momentum of the outflow is reduced, we find that the rate of the outflow is correlated with $`\lambda `$ of the outflow. This suggests that the outflow is partially centrifugally driven as well.
d) The ratio $`R_{\dot{m}}`$ is generally anti-correlated with the inflow accretion rate. That is, disks of lower luminosity would produce higher $`R_{\dot{m}}`$.
e) Generally speaking, the supersonic region of the inflow does not have pressure maxima. Thus, outflows emerge from the subsonic region of the inflow, whether the shock actually forms or not.
If we introduce an extra radiation pressure term (with a term like $`\mathrm{\Gamma }/r^2`$ in the radial force equation, where $`\mathrm{\Gamma }`$ is the contribution due to radiative processes), particularly important for neutron stars, the outcome is significant. In the inflow, outward radiation pressure weakens gravity and thus the shock is located farther out. The temperature is cooler and therefore the outflow rate is lower. If the term is introduced only in the outflow, the effect is not significant . However, we understand that inclusion of only the $`\frac{\mathrm{\Gamma }}{r^2}`$ term does not capture the whole picture of the various radiative processes taking place in the disk-jet system, and a more general and exact form of the radiative force term is to be included in the set of equations governing the inflow-outflow system.
An interesting situation arises when the polytropic index of the outflow is large and the compression ratio of the flow is also very high. In this case, the flow virtually bounces back as wind, and the outflow rate can be equal to the inflow rate or even higher, thereby evacuating the disk. In this range of parameters most, if not all, of our assumptions may break down completely because the situation could become inherently time-dependent. It is possible that some black hole systems, including that in our own galactic center, may have undergone such an evacuation phase in the past and gone into a quiescent phase (see \[33-36\] for detail).
Strong winds are suspected to be present in Sgr $`A^{*}`$ at our galactic center \[45-46\]. We have shown that when the inflow rate itself is low (as is the case for Sgr $`A^{*}`$; $`10^{-3}`$–$`10^{-4}`$ of the Eddington rate), the mass outflow rate is very high, almost to the point of evacuating the disk. This prompted us to speculate strongly that the spectral properties of our galactic center could be explained by the inclusion of winds using our model .
## 3 Outflow from Bondi Type Accretion
### 3.1 In Search of a Suitable Surface
For some black hole models of active galactic nuclei, the inflow may not have an accretion disk . Accretion is then quasi-spherical, with almost zero or negligible angular momentum (Bondi type accretion). The absence of angular momentum rules out the possibility of the formation of a Rankine-Hugoniot shock as well as of a polytropic pressure maximum. So, for quasi-spherical accretion, CENBOL formation (as discussed in the earlier section) is not possible. It has been shown \[49-50\] that for quasi-spherical accretion onto black holes, a steady state situation may develop close to the black hole where a standing collisionless shock may form, due to plasma instabilities and the nonlinearity introduced by small density perturbations. This is because, after crossing the sonic point, the infalling matter (in plasma form) becomes highly supersonic. Any small perturbation and slowing down of the infall velocity will create a piston and produce a shock. A spherically symmetric shock produced in such a way will accelerate a fraction of the inflowing plasma to relativistic energies. The shock-accelerated relativistic particles suffer essentially no Compton loss and are assumed to lose energy only through proton-proton ($`pp`$) collisions. These relativistic hadrons are not readily captured by the black hole; rather, a considerable energy density of these relativistic protons would be maintained to support a standing, collisionless, spherical shock around the black hole . Thus, a self-supported standing shock may be produced even for accretion with zero angular momentum. In this work, we take this pair-plasma pressure mediated shock surface as the alternative to the CENBOL: it can be treated as an effective physical hard surface which, in principle, mimics an ordinary stellar surface as far as the mass outflow is concerned.
The condition necessary for the development and maintenance of such a self-supported spherical shock is satisfied for the high Mach number solutions . Keeping this in mind, in the present work we concentrate only on low energy accretion in order to obtain high shock Mach numbers. Considering low energy ($`≲0.001`$) accretion, we assume that particles accreting toward the black hole are shock accelerated via first order Fermi acceleration, producing relativistic protons. These relativistic protons usually scatter several times before being captured by the black hole. The energized particles, in turn, provide sufficient outward pressure to support a standing, collisionless shock. A fraction of the energy flux of the infalling matter is assumed to be converted into radiation at the shock standoff distance through hadronic ($`pp`$) collisions and mesonic ($`\pi ^\pm ,\pi ^0`$) decay. Pions generated by this process decay into relativistic electrons and neutrinos/antineutrinos and produce high energy $`\gamma `$ rays (see for details). The electrons produce the observed non-thermal radiation by synchrotron emission and inverse Compton scattering. The overall efficiency of this mechanism depends largely on the shock location. The luminosity produced by this fraction is used to obtain the shock location in the present work.
### 3.2 At the End of the Search
At the shock surface, the density of the post-shock material shoots up and the velocity falls; infalling matter starts piling up on the shock surface. The post-shock relativistic hadronic pressure then gives a kick to the piled-up matter, the result of which is the ejection of outflow from the shock surface. For this type of inflow, accretion is known to proceed smoothly after a shock transition, since successful subsonic solutions have been constructed for accretion onto black holes embedded within normal stars with the boundary condition $`u=c`$, where $`u`$ is the infall velocity of matter and $`c`$ is the velocity of light in vacuum. The fraction of energy converted and the shock compression ratio $`R_{comp}`$, along with the ratio of post-shock relativistic hadronic pressure to infalling ram pressure at a given shock location, are obtained from the steady state shock solution of Ellison and Eichler \[52-53\]. The shock location as a function of the specific energy $``$ of the infalling matter and the accretion rate is then self-consistently obtained using the above mentioned quantities. We then calculate the mass outflow rate $`R_{\dot{m}}`$ from the shock surface using a combination of exact transonic inflow-outflow solutions and study the dependence of $`R_{\dot{m}}`$ on the various physical entities governing the inflow-outflow system .
### 3.3 Model Description, Solution Procedure and Results
#### 3.3.1 Model Description and Solution Procedure
We assume that a Schwarzschild type black hole quasi-spherically accretes low energy ($`≲0.001`$) fluid obeying a polytropic equation of state. We also assume that the accretion rate with which the fluid is being accreted is not a function of $`r`$ ($`r`$ being the radial distance measured from the central object, scaled in units of the Schwarzschild radius). For simplicity of calculation, we choose geometric units to measure all the relevant quantities. We ignore the self-gravity of the flow, and the calculation is done using the Paczyński-Wiita potential which mimics the surroundings of a Schwarzschild black hole. As already mentioned, we assume that a steady, collisionless shock forms at a particular distance (measured in units of the Schwarzschild radius) due to instabilities in the plasma flow. We also assume that for our model the effective thickness of the shock is small compared to the shock standoff distance. For simplicity of calculation, we assume that the outflow is also quasi-spherical.
It is obvious from the above discussion that $`R_{\dot{m}}`$ should have some complicated functional dependences on the inflowing parameters through the shock location (see for detail).
#### 3.3.2 Results
By simultaneously solving the proper set of equations in the appropriate geometry (see for detail), we get the combined flow topologies presented in Figure 5, which shows a typical solution combining the accretion and the outflow. The Mach number (the ratio of the mechanical to the thermal velocity of matter) is plotted along the ordinate, while the distance from the central object (in units of the Schwarzschild radius) is plotted on a logarithmic scale along the abscissa. The input parameters are $``$ = 0.001, $`\dot{M}_{in}`$ = 1.0 Eddington rate ($`_d`$ stands for the Eddington rate in the figure) and $`\gamma =\frac{4}{3}`$, corresponding to relativistic inflow. The solid curve with an arrow represents the pre-shock region of the inflow, and the solid vertical line with double arrow at $`X_{pps}`$ (the subscript $`pps`$ stands for pair plasma mediated shock) represents the shock transition. The location of the shock is obtained using eqs. (4) for the particular set of inflow parameters mentioned above. Three dotted curves show three different outflow branches corresponding to different polytropic indices of the outflow: $`\gamma _o`$ = 1.3 (leftmost curve), 1.275 (middle curve) and 1.25 (rightmost curve). It is evident from the figure that the outflow moves along solution curves completely different from the “wind solution” (solid line marked with an outward directed arrow) of the inflow, which passes through the sonic point $`P_s`$. The mass loss ratio $`R_{\dot{m}}`$ for these cases is 0.0023, 0.00065 and 0.00014 respectively.
We can summarize our results (see for details) obtained in this case as follows:
a) It is possible that outflows in quasi-spherical Bondi type accretion onto a Schwarzschild black hole come from the pair plasma pressure mediated shock surface.
b) The outflow rate monotonically increases with the specific energy of the inflow and nonlinearly increases with the Eddington rate of the infalling matter.
c) $`R_{\dot{m}}`$, in general, correlates with $`\gamma _o`$ but anticorrelates with $`\gamma `$.
d) Generally speaking, as our model deals with high shock Mach number (low energy accretion) solutions, outflows in our work always originate from the supersonic branch of the inflow, i.e., the shock is always located inside the sonic point.
e) Unlike the mass outflow in the disk-outflow case around black holes \[33-36\], here we find that the value of $`R_{\dot{m}}`$ is distinctly small. This is because matter is ejected due to the pressure of the relativistic plasma pairs, which is much less than the pressure generated in the presence of significant angular momentum. However, in the present work we have dealt only with high Mach number solutions, which means matter is accreting with very low energy (‘cold inflow’, as it is described in the literature). This is another possible reason for obtaining a low mass loss rate. If, instead of high Mach number solutions, we were to use low Mach number solutions, i.e., high energy accretion, the mass outflow would be considerably higher (this is obvious because it has already been established in the present work that $`R_{\dot{m}}`$ increases with $``$; see for detail).
## 4 Future Perspectives
Based on the work already done, our goals for future work are essentially the following:
1. We have carried out our calculations for the Schwarzschild black hole using the Paczyński-Wiita pseudo-Newtonian potential. We would now like to extend our calculations to a fully general relativistic framework, so that the parameter space for the calculation gets modified and the mass-loss rate calculated in this way can be compared with our previous results.
2. In our work, we assumed that the magnetic field is absent. Magnetized winds from accretion disks have so far been considered in the context of a Keplerian disk and not in the context of the sub-Keplerian flows on which we concentrate here. It is not unreasonable to assume that our prime surface for wind formation, the CENBOL, would still form when magnetic fields are present, and since the Alfven speed is, by definition, higher than the sound speed, the acceleration would also be higher than what we computed here. Moreover, the introduction of a toroidal magnetic field in our model, with its associated “hoop” stress, would lead to a better understanding of the “collimation problem” of the jets.
3. Recently it has been suggested ( & references therein) that significant nucleosynthesis is possible in the centrifugal pressure supported dense and hot region of the accretion flow which deviates from the disk around black holes. Attempts have been made to compute the composition changes and the energy generation due to such nuclear processes as a function of the radial distance from the black hole. We suggest that the outflows produced from this region would carry away the modified composition and contaminate the atmospheres of the surrounding stars and of the galaxies in general. Unlike the present calculation, where the outflow consists of $`m_p`$ only (a proton jet), we will try to take the weighted average of the heavier elements produced by nucleosynthesis in advective accretion disks as the constituent elements of the outflow.
4. Finally, we would like to carry out all of our calculations (done for Schwarzschild black hole) in Kerr space-time to bring the whole picture into focus.
Epilogue
It is to be noted that although the existence of astrophysical outflows and jets from galactic and extragalactic sources is well known, their rates are not. Similarly, to date there is no definitive model in the literature which can handle the origin of these outflows in a self-consistent way. Hence we think that the formation and dynamics (acceleration and collimation) of these outflows are open problems in present day theoretical astrophysics. If, along with our present analysis of mass outflow in Schwarzschild geometry, we can carry out our calculation in Kerr geometry as well, we strongly believe that these combined calculations could shed new light on the origin and energetics of astrophysical jets.
Acknowledgements
The author expresses his gratitude to Prof. S. K. Chakrabarti for introducing him to this subject and for his guidance, and to Dr. S. Chakrabarti and Mr. A. Ray of his Institute for carefully going through the manuscript and for providing valuable suggestions to improve the quality of this article. He is also thankful to Prof. A. R. Rao of the Tata Institute of Fundamental Research for constructive interaction during the YATI conference. The present article is basically an organized summary of a couple of seminars and colloquia presented by the author at various Institutes and conferences during the period November 1998 to March 1999. The author benefited greatly from discussions with several faculty members, post doctoral fellows and research students of the Institutes he visited, and with some of the participants of the last Texas Symposium (Dec. 1998) held in Paris. Among them, special thanks go to Prof. P. A. Charles, Dr. T. L. Grey and Dr. P. Saha from the Dept. of Astrophysics, University of Oxford; Prof. W. Kundt from the Institut für Astrophysik, University of Bonn; Prof. P. Biermann, Dr. H. Falcke, Dr. T. Ensslin and Dr. Yiping Yang of the Max Planck Institute for Radio Astronomy; Prof. B. Czerny and Ms. A. Rozanska from the Nicolaus Copernicus Astronomical Centre, Warsaw, Poland; Prof. M. Begelman from JILA, Colorado; Prof. M. Ostrowski from Obserwatorium Astronomiczne, Krakow, Poland; Prof. J. V. Narlikar, Prof. A. Kembhavi, Dr. S. Bose, Dr. R. Misra and Dr. R. Srianand from the Inter University Centre for Astronomy and Astrophysics; and Prof. Gopal Krishna from the National Centre for Radio Astrophysics.
# A discrete form of the Beckman-Quarles theorem for rational eight-space
Apoloniusz Tyszka
footnotetext: AMS (1991) Subject Classification: Primary 51M05, Secondary 05C12
Abstract. Let $`𝐐`$ be the field of rational numbers. We prove that: (1) if $`x,y∈𝐑^n`$ $`(n>1)`$ and $`|xy|`$ is constructible by means of ruler and compass then there exists a finite set $`S_{xy}⊆𝐑^n`$ containing $`x`$ and $`y`$ such that each map from $`S_{xy}`$ to $`𝐑^n`$ preserving unit distance preserves the distance between $`x`$ and $`y`$, (2) if $`x,y∈𝐐^8`$ then there exists a finite set $`S_{xy}⊆𝐐^8`$ containing $`x`$ and $`y`$ such that each map from $`S_{xy}`$ to $`𝐑^8`$ preserving unit distance preserves the distance between $`x`$ and $`y`$.
Theorem 1 may be viewed as a discrete form of the classical Beckman-Quarles theorem, which states that any map from $`𝐑^n`$ to $`𝐑^n`$ ($`2≤n<∞`$) preserving unit distance is an isometry, see -. Theorem 1 was announced in and proved there in the case where $`n=2`$. A stronger version of Theorem 1 can be found in , but we need the elementary proof of Theorem 1 as an introduction to Theorem 2.

Theorem 1. If $`x,y∈𝐑^n`$ ($`n>1`$) and $`|xy|`$ is constructible by means of ruler and compass then there exists a finite set $`S_{xy}⊆𝐑^n`$ containing $`x`$ and $`y`$ such that each map from $`S_{xy}`$ to $`𝐑^n`$ preserving unit distance preserves the distance between $`x`$ and $`y`$.

Proof. Let us denote by $`D_n`$ the set of all non-negative numbers $`d`$ with the following property: if $`x,y∈𝐑^n`$ and $`|xy|=d`$ then there exists a finite set $`S_{xy}⊆𝐑^n`$ such that $`x,y∈S_{xy}`$ and any map $`f:S_{xy}→𝐑^n`$ that preserves unit distance preserves also the distance between $`x`$ and $`y`$. Obviously $`0,1∈D_n`$. We first prove that if $`d∈D_n`$ then $`\sqrt{2+2/n}d∈D_n`$. Assume that $`d>0`$, $`x,y∈𝐑^n`$ and $`|xy|=\sqrt{2+2/n}d`$. Using the notation of Figure 1 we show that
$`S_{xy}:=⋃\{S_{ab}:a,b∈\{x,y,\stackrel{~}{y},p_1,p_2,…,p_n,\stackrel{~}{p}_1,\stackrel{~}{p}_2,…,\stackrel{~}{p}_n\},|ab|=d\}`$
satisfies the condition of Theorem 1. Figure 1 shows the case $`n=2`$, but the equations below Figure 1 describe the general case $`n≥2`$; $`z`$ denotes the centre of the $`(n-1)`$-dimensional regular simplex $`p_1p_2…p_n`$.
Figure 1
$`1≤i<j≤n`$
$`|y\stackrel{~}{y}|=d`$, $`|xp_i|=|yp_i|=|p_ip_j|=d=|x\stackrel{~}{p}_i|=|\stackrel{~}{y}\stackrel{~}{p}_i|=|\stackrel{~}{p}_i\stackrel{~}{p}_j|`$
$`|x\stackrel{~}{y}|=|xy|=2|xz|=2\sqrt{\frac{n+1}{2n}}d=\sqrt{2+2/n}d`$
Assume that $`f:S_{xy}→𝐑^n`$ preserves distance $`1`$. Since
$`S_{xy}⊇S_{y\stackrel{~}{y}}∪⋃_{i=1}^nS_{xp_i}∪⋃_{i=1}^nS_{yp_i}∪⋃_{1≤i<j≤n}S_{p_ip_j}`$
we conclude that $`f`$ preserves the distances between $`y`$ and $`\stackrel{~}{y}`$, $`x`$ and $`p_i`$ ($`1≤i≤n`$), $`y`$ and $`p_i`$ ($`1≤i≤n`$), and all distances between $`p_i`$ and $`p_j`$ ($`1≤i<j≤n`$). Hence $`|f(y)f(\stackrel{~}{y})|=d`$ and $`|f(x)f(y)|`$ equals either $`0`$ or $`\sqrt{2+2/n}d`$. Analogously we have that $`|f(x)f(\stackrel{~}{y})|`$ equals either $`0`$ or $`\sqrt{2+2/n}d`$. Thus $`f(x)≠f(y)`$, so $`|f(x)f(y)|=\sqrt{2+2/n}d`$ which completes the proof that $`\sqrt{2+2/n}d∈D_n`$. Therefore, if $`d∈D_n`$ then $`(2+2/n)d=\sqrt{2+2/n}(\sqrt{2+2/n}d)∈D_n`$. We next prove that if $`x,y∈𝐑^n`$, $`d∈D_n`$ and $`|xy|=(2/n)d`$ then there exists a finite set $`Z_{xy}⊆𝐑^n`$ containing $`x`$ and $`y`$ such that any map $`f:Z_{xy}→𝐑^n`$ that preserves unit distance satisfies $`|f(x)f(y)|≤|xy|`$; this result is adapted from . It is obvious in the case where $`n=2`$, therefore we assume that $`n>2`$ and $`d>0`$. In Figure 2, $`z`$ denotes the centre of the $`(n-1)`$-dimensional regular simplex $`p_1p_2…p_n`$. Figure 2 shows the case $`n=3`$, but the equations below Figure 2 describe the general case where $`n≥3`$.
Figure 2
$`1≤i<j≤n`$
$`|xp_i|=|yp_i|=d,|p_ip_j|=\sqrt{2+2/n}d,|zp_i|=\sqrt{1-1/n^2}d`$
$`|xy|=2|xz|=2\sqrt{|xp_i|^2-|zp_i|^2}=2\sqrt{d^2-(1-1/n^2)d^2}=(2/n)d`$
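Both distance identities above are easy to check numerically. The sketch below (our own verification, not part of the original argument) embeds a regular $`(n-1)`$-simplex with a given edge in $`𝐑^n`$ as scaled standard basis vectors, places the two apexes on the line through its centre, and confirms $`|xy|=\sqrt{2+2/n}d`$ for the Figure 1 configuration and $`|xy|=(2/n)d`$ for the Figure 2 configuration:

```python
import numpy as np

def apex_pair(n, edge, apex_dist):
    """Regular (n-1)-simplex with the given edge length, embedded in R^n
    as scaled standard basis vectors; returns the two apexes x, y that
    lie at distance apex_dist from every vertex."""
    p = (edge / np.sqrt(2)) * np.eye(n)      # vertices p_1, ..., p_n
    z = p.mean(axis=0)                        # centre of the simplex
    u = np.ones(n) / np.sqrt(n)               # unit normal of its hyperplane
    h = np.sqrt(apex_dist**2 - np.sum((p[0] - z)**2))   # apex height
    return z + h * u, z - h * u

n, d = 8, 1.0
# Figure 1: edge d, apexes at distance d  ->  |xy| = sqrt(2 + 2/n) d
x, y = apex_pair(n, d, d)
print(np.linalg.norm(x - y), np.sqrt(2 + 2 / n) * d)
# Figure 2: edge sqrt(2 + 2/n) d, apexes at distance d  ->  |xy| = (2/n) d
x, y = apex_pair(n, np.sqrt(2 + 2 / n) * d, d)
print(np.linalg.norm(x - y), 2 * d / n)
```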
Define:
$`Z_{xy}:=⋃_{1≤i<j≤n}S_{p_ip_j}∪⋃_{i=1}^nS_{xp_i}∪⋃_{i=1}^nS_{yp_i}`$
If $`f:Z_{xy}→𝐑^n`$ preserves distance $`1`$ then $`|f(x)f(y)|=|xy|=(2/n)d`$ or $`|f(x)f(y)|=0`$, hence $`|f(x)f(y)|≤|xy|`$. If $`d∈D_n`$, then $`2d∈D_n`$ (see Figure 3).
Figure 3
$`|xy|=2d`$
$`S_{xy}=S_{xs}∪S_{sy}∪Z_{yt}∪S_{xt}`$
From Figure 4 it is clear that if $`d∈D_n`$ then all distances $`kd`$ (where $`k`$ is a positive integer) belong to $`D_n`$.
Figure 4
$`|xy|=kd`$
$`S_{xy}=⋃\{S_{ab}:a,b∈\{w_0,w_1,…,w_k\},|ab|=d∨|ab|=2d\}`$
From Figure 5 it is clear that if $`d∈D_n`$ then all distances $`d/k`$ (where $`k`$ is a positive integer) belong to $`D_n`$. Hence $`𝐐∩(0,∞)⊆D_n`$.
Figure 5
$`|xy|=d/k`$
$`S_{xy}=S_{\stackrel{~}{x}\stackrel{~}{y}}∪S_{\stackrel{~}{x}x}∪S_{xz}∪S_{\stackrel{~}{x}z}∪S_{\stackrel{~}{y}y}∪S_{yz}∪S_{\stackrel{~}{y}z}`$
Observation. If $`x,y∈𝐑^n`$ ($`n>1`$) and $`\epsilon >0`$ then there exists a finite set $`T_{xy}(\epsilon )⊆𝐑^n`$ containing $`x`$ and $`y`$ such that for each map $`f:T_{xy}(\epsilon )→𝐑^n`$ preserving unit distance we have $`||f(x)f(y)|-|xy||≤\epsilon `$.
Proof. It follows from Figure 6.
Figure 6
$`|xz|,|zy|∈𝐐∩(0,∞)`$, $`|zy|≤\epsilon /2`$
$`T_{xy}(\epsilon )=S_{xz}∪S_{zy}`$
Note. The above part of the proof can be found in .
If $`a,b∈D_n`$, $`a>b>0`$ then $`\sqrt{a^2-b^2}∈D_n`$ (see Figure 7, cf.).
Figure 7
$`|xy|=\sqrt{a^2-b^2}`$
$`S_{xy}=S_{sx}∪S_{xt}∪S_{st}∪S_{sy}∪S_{ty}`$
Hence $`\sqrt{3}a=\sqrt{(2a)^2-a^2}∈D_n`$ and $`\sqrt{2}a=\sqrt{(\sqrt{3}a)^2-a^2}∈D_n`$. Therefore $`\sqrt{a^2+b^2}=\sqrt{(\sqrt{2}a)^2-(\sqrt{a^2-b^2})^2}∈D_n`$.
In Figure 8, $`z`$ denotes the centre of the $`(n-1)`$-dimensional regular simplex $`p_1p_2…p_n`$; the figure shows the case $`n=2`$, but the equations below Figure 8 describe the general case where $`n≥2`$. This construction shows that if $`a,b∈D_n`$, $`a>b>0`$, $`n≥2`$ then $`a-b∈D_n`$, hence $`a+b=2a-(a-b)∈D_n`$.
Figure 8
$`|xy|=ab,|xz|=aD_n,|yz|=bD_n`$
$`|p_ip_j|=\sqrt{2+2/n}D_n,|zp_i|=\sqrt{1^2(1/n)^2}D_n,1i<jn`$
$`|xp_1|=\sqrt{|xz|^2+|zp_1|^2}=\mathrm{}=|xp_n|=\sqrt{|xz|^2+|zp_n|^2}D_n`$
$`|yp_1|=\sqrt{|yz|^2+|zp_1|^2}=\mathrm{}=|yp_n|=\sqrt{|yz|^2+|zp_n|^2}D_n`$
$`S_{xy}=\bigcup _{1\le i<j\le n}S_{p_ip_j}\cup \bigcup _{i=1}^nS_{xp_i}\cup \bigcup _{i=1}^nS_{yp_i}\cup T_{xy}(b)`$
In order to prove that $`D_n\backslash \{0\}`$ is a multiplicative group it remains to observe that if positive $`a,b,c\in D_n`$, then $`\frac{ab}{c}\in D_n`$ (see Figure 9, cf.).
Figure 9
$`m`$ is a positive integer
$`b<2mc`$
$`S_{AB}=S_{OA}\cup S_{OB}\cup S_{O\stackrel{~}{A}}\cup S_{O\stackrel{~}{B}}\cup S_{A\stackrel{~}{A}}\cup S_{B\stackrel{~}{B}}\cup S_{\stackrel{~}{A}\stackrel{~}{B}}`$
If $`a\in D_n`$, $`a>1`$, then $`\sqrt{a}=\frac{1}{2}\sqrt{(a+1)^2-(a-1)^2}\in D_n`$; if $`a\in D_n`$, $`0<a<1`$, then $`\sqrt{a}=1/\sqrt{\frac{1}{a}}\in D_n`$. Thus $`D_n`$ contains all non-negative real numbers contained in the real quadratic closure of $`𝐐`$. This completes the proof.

Remark 1. Let $`𝐅\subseteq 𝐑`$ be a euclidean field, i.e. $`\forall x\in 𝐅`$ $`\exists y\in 𝐅`$ ($`x=y^2`$ or $`-x=y^2`$) (cf. ). Our proof of Theorem 1 gives that if $`x,y\in 𝐅^n`$ ($`n>1`$) and $`|xy|`$ is constructible by means of ruler and compass then there exists a finite set $`S_{xy}\subseteq 𝐅^n`$ containing $`x`$ and $`y`$ such that each map from $`S_{xy}`$ to $`𝐑^n`$ preserving unit distance preserves the distance between $`x`$ and $`y`$.
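As a quick check of the square-root step that closes the proof (an added verification): $`(a+1)^2-(a-1)^2=4a`$, so $`\frac{1}{2}\sqrt{(a+1)^2-(a-1)^2}=\frac{1}{2}\sqrt{4a}=\sqrt{a}`$, and the Figure 7 rule applies because $`a+1>a-1>0`$ whenever $`a>1`$.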
Theorem 2. If $`x,y\in 𝐐^8`$ then there exists a finite set $`S_{xy}\subseteq 𝐐^8`$ containing $`x`$ and $`y`$ such that each map from $`S_{xy}`$ to $`𝐑^8`$ preserving unit distance preserves the distance between $`x`$ and $`y`$.
Proof. Denote by $`R_8`$ the set of all $`d\ge 0`$ with the following property:
if $`x,y\in 𝐐^8`$ and $`|xy|=d`$ then there exists a finite set $`S_{xy}\subseteq 𝐐^8`$ such that $`x,y\in S_{xy}`$ and any map $`f:S_{xy}\to 𝐑^8`$ that preserves unit distance preserves also the distance between $`x`$ and $`y`$.
Obviously $`0,1\in R_8`$. We need to prove that if $`x,y\in 𝐐^8`$ then $`|xy|\in R_8`$. We show that configurations from Figures 1-5 and 7 (see the proof of Theorem 1) exist in $`𝐐^8`$. We start from simple lemmas.
Lemma 1 (see ). If $`A`$ and $`B`$ are two different points of $`𝐐^n`$ then the reflection of $`𝐐^n`$ with respect to the hyperplane which is the perpendicular bisector of the segment $`AB`$, is a rational transformation (that is, takes rational points to rational points).
Lemma 2 (in the real case cf. p.173 and ). If $`A,B,\stackrel{~}{A},\stackrel{~}{B}\in 𝐐^8`$ and $`|AB|=|\stackrel{~}{A}\stackrel{~}{B}|`$ then there exists an isometry $`I:𝐐^8\to 𝐐^8`$ satisfying $`I(A)=\stackrel{~}{A}`$ and $`I(B)=\stackrel{~}{B}`$.
Proof. If $`A=\stackrel{~}{A}`$ and $`B=\stackrel{~}{B}`$ then $`I=id(𝐐^8)`$. If $`A=\stackrel{~}{A}`$ and $`B\ne \stackrel{~}{B}`$ then the reflection of $`𝐐^8`$ with respect to the hyperplane which is the perpendicular bisector of the segment $`B\stackrel{~}{B}`$, satisfies the condition of Lemma 2 in virtue of Lemma 1. Assume that $`A\ne \stackrel{~}{A}`$. Let $`I_1:𝐐^8\to 𝐐^8`$ denote the reflection of $`𝐐^8`$ with respect to the hyperplane which is the perpendicular bisector of the segment $`A\stackrel{~}{A}`$. If $`I_1(B)=\stackrel{~}{B}`$ then the proof is complete. In the opposite case let $`B_1=I_1(B)`$, $`B_1\in 𝐐^8`$ according to Lemma 1. Let $`I_2:𝐐^8\to 𝐐^8`$ denote the reflection of $`𝐐^8`$ with respect to the hyperplane which is the perpendicular bisector of the segment $`B_1\stackrel{~}{B}`$. Since $`|\stackrel{~}{A}B_1|=|I_1(A)I_1(B)|=|AB|=|\stackrel{~}{A}\stackrel{~}{B}|`$ we conclude that $`I_2(\stackrel{~}{A})=\stackrel{~}{A}`$. Therefore $`I=I_2I_1`$ satisfies the condition of Lemma 2.

Corollary. Lemma 2 ensures that if some configuration from Figures 1-5 and 7 exists in $`𝐐^8`$ for a fixed $`x,y\in 𝐐^8`$, then this configuration exists for any $`x,y\in 𝐐^8`$ with the same $`|xy|`$.
Lemma 3. LAGRANGE’S FOUR SQUARE THEOREM. Every non-negative integer is the sum of four squares of integers, and therefore every non-negative rational is the sum of four squares of rationals, see .
Lemma 4. If $`a,b`$ are positive rationals and $`b<2a`$ then there exists a triangle in $`𝐐^8`$ with sides $`b,a,a`$.
Proof. Let $`a^2-(b/2)^2=k^2+l^2+m^2+n^2`$ where $`k,l,m,n`$ are rational according to Lemma 3. Then the triangle
$`[b/2,0,0,0,0,0,0,0]`$ $`[-b/2,0,0,0,0,0,0,0]`$ $`[0,k,l,m,n,0,0,0]`$
has sides $`b,a,a`$.
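A minimal numeric sketch of Lemma 4 (an added illustration, not from the paper; the four-square decomposition for the example $`a=2`$, $`b=1`$ is supplied by hand) verifies with exact rational arithmetic that the three points above form a triangle with sides $`b,a,a`$ in $`𝐐^8`$:

```python
from fractions import Fraction as F

def dist2(p, q):
    # squared Euclidean distance, kept exact with rationals
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Example: a = 2, b = 1, so a^2 - (b/2)^2 = 15/4 = k^2 + l^2 + m^2 + n^2
# with k = 3/2, l = 1, m = n = 1/2 (one possible rational decomposition).
a, b = F(2), F(1)
k, l, m, n = F(3, 2), F(1), F(1, 2), F(1, 2)
assert k**2 + l**2 + m**2 + n**2 == a**2 - (b / 2) ** 2

P1 = [b / 2, 0, 0, 0, 0, 0, 0, 0]
P2 = [-b / 2, 0, 0, 0, 0, 0, 0, 0]
P3 = [0, k, l, m, n, 0, 0, 0]

assert dist2(P1, P2) == b**2  # side b
assert dist2(P1, P3) == a**2  # side a
assert dist2(P2, P3) == a**2  # side a
print("triangle with sides", b, a, a, "realized in Q^8")
```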
Now we turn to the main part of the proof. Rational coordinates of the following configuration are taken from .
$`x=[0,0,0,0,0,0,0,0]`$
$`y=(3/8)[3,0,0,0,1,1,1,2]`$ $`\stackrel{~}{y}=(1/6)[8,1,1,3,1,0,1,2]`$
$`p_1=[1,0,0,0,0,0,0,0]`$ $`p_2=(1/2)[1,1,0,0,0,0,1,1]`$
$`p_3=(1/2)[1,1,0,0,0,0,1,1]`$ $`p_4=(1/2)[1,0,1,0,0,1,0,1]`$
$`p_5=(1/2)[1,0,1,0,0,1,0,1]`$ $`p_6=(1/2)[1,0,0,1,1,0,0,1]`$
$`p_7=(1/2)[1,0,0,1,1,0,0,1]`$ $`p_8=(1/2)[1,0,0,0,1,1,1,0]`$
Let $`I:𝐐^8\to 𝐐^8`$ denote the reflection with respect to the hyperplane which is the perpendicular bisector of the segment $`y\stackrel{~}{y}`$. By Lemma 1 we have $`\stackrel{~}{p}_i=I(p_i)\in 𝐐^8`$ ($`1\le i\le 8`$). It is easy to check that points $`x,y,\stackrel{~}{y},p_i,\stackrel{~}{p}_i`$ ($`1\le i\le 8`$) form the configuration from Figure 1 for $`d=1`$. The Corollary ensures that $`3/2=\sqrt{2+2/8}\cdot d=|xy|\in R_8`$.
Points $`(3/2)x,(3/2)y,(3/2)\stackrel{~}{y},(3/2)p_i,(3/2)\stackrel{~}{p_i}`$ ($`1\le i\le 8`$) form the configuration from Figure 1 for $`d=\sqrt{2+2/8}=3/2`$. The Corollary ensures that $`2+1/4=\sqrt{2+2/8}\cdot d=|(3/2)x-(3/2)y|\in R_8`$.
The following points:
$`p_1=[3/2,0,0,0,0,0,0,0]`$ $`p_2=[3/4,3/4,0,0,0,0,3/4,3/4]`$
$`p_3=[3/4,3/4,0,0,0,0,3/4,3/4]`$ $`p_4=[3/4,0,3/4,0,0,3/4,0,3/4]`$
$`p_5=[3/4,0,3/4,0,0,3/4,0,3/4]`$ $`p_6=[3/4,0,0,3/4,3/4,0,0,3/4]`$
$`p_7=[3/4,0,0,3/4,3/4,0,0,3/4]`$ $`p_8=[3/4,0,0,0,3/4,3/4,3/4,0]`$
$`x=[3/4,0,0,0,1/4,1/4,1/4,1/2]`$ $`y=[15/16,0,0,0,5/16,5/16,5/16,5/8]`$
form the configuration from Figure 2 for $`d=1`$. Therefore, in virtue of the Corollary, if $`x,y\in 𝐐^8`$ and $`|xy|=(2/8)d=1/4`$, then there exists a finite set $`Z_{xy}\subseteq 𝐐^8`$ containing $`x`$ and $`y`$ such that any map $`f:Z_{xy}\to 𝐑^8`$ that preserves unit distance satisfies $`|f(x)-f(y)|\le |xy|`$.
As in the proof of Theorem 1 we can prove that $`2\in R_8`$ and all integer distances belong to $`R_8`$. In the same way using the Corollary we can prove that all rational distances belong to $`R_8`$, because by Lemma 4 there exists a triangle in $`𝐐^8`$ with sides $`d,kd,kd`$ ($`d,k`$ are positive integers, see Figure 5).
Finally, we prove that $`|xy|\in R_8`$ for arbitrary $`x,y\in 𝐐^8`$. It is obvious if $`|xy|=1/2`$ because $`1/2`$ is rational. Let us assume that $`|xy|\ne 1/2`$. We have: $`|xy|^2\in 𝐐\cap (0,\mathrm{\infty })`$. Let $`|xy|^2=k^2+l^2+m^2+n^2`$ where $`k,l,m,n`$ are rationals according to Lemma 3.
The following points:
$`s=[-||xy|^2-1/4|,0,0,0,0,0,0,0]`$ , $`x=[0,0,0,0,0,0,0,0]`$
$`t=[||xy|^2-1/4|,0,0,0,0,0,0,0]`$ , $`y=[0,k,l,m,n,0,0,0]`$
form the configuration from Figure 7 for $`a=||xy|^2+1/4|\in 𝐐\cap (0,\mathrm{\infty })\subseteq R_8`$ and $`b=||xy|^2-1/4|\in 𝐐\cap (0,\mathrm{\infty })\subseteq R_8`$. The Corollary ensures that $`|xy|=\sqrt{a^2-b^2}\in R_8`$. This completes the proof of Theorem 2.
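The choice of $`a`$ and $`b`$ can be checked directly (an added verification): writing $`q=|xy|^2`$, $`a^2-b^2=(q+1/4)^2-(q-1/4)^2=q=|xy|^2`$, so the Figure 7 configuration indeed returns $`\sqrt{a^2-b^2}=|xy|`$; the case $`|xy|=1/2`$ was treated separately precisely so that $`b=|q-1/4|>0`$ here.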
Remark 2. Theorem 2 implies that any map $`f:𝐐^8\to 𝐐^8`$ which preserves unit distance is an isometry.
Remark 3. It is known that the injection of $`𝐐^n`$ ($`n\ge 5`$) which preserves the distances $`d`$ and $`d/2`$ ($`d`$ is positive and rational) is an isometry, see . The general result from implies that any map $`f:𝐐^n\to 𝐐^n`$ ($`n\ge 5`$) which preserves the distances $`1`$ and $`4`$ is an isometry. On the other hand, from (for $`n=1,2`$) and (for $`n=3,4`$) it may be concluded that there exist bijections of $`𝐐^n`$ ($`n=1,2,3,4`$) which preserve all distances belonging to $`\{k/2:k=1,2,3,\mathrm{}\}`$ and which are not isometries.
Remark 4. J. Zaks informed the author (private communication, May 2000) that he proved the following:
1. (cf. Remark 3): Let $`k`$ be any integer, $`k\ge 2`$; every mapping from $`𝐐^n`$ to $`𝐐^n`$, $`n\ge 5`$, which preserves the distances $`1`$ and $`k`$ is an isometry.
2. Theorem 2 holds for all even $`n`$ of the form $`n=4t(t+1)`$, $`t\ge 2`$, as well as for all odd values of $`n`$ which are a perfect square greater than $`1`$, $`n=x^2`$, and which, in addition, are of the form $`n=2y^2-1`$. The construction is a modified version of the proof of Theorem 2.
References
1. F. S. Beckman and D. A. Quarles Jr., On isometries of euclidean spaces, Proc. Amer. Math. Soc., 4 (1953), 810-815.
2. W. Benz, Real geometries, BI Wissenschaftsverlag, Mannheim, 1994.
3. U. Everling, Solution of the isometry problem stated by K. Ciesielski, Math. Intelligencer 10 (1988), No.4, p.47.
4. B. Farrahi, On distance preserving transformations of Euclidean-like planes over the rational field, Aequationes Math. 14 (1976), 473-483.
5. B. Farrahi, A characterization of isometries of rational euclidean spaces, J. Geom. 12 (1979), 65-68.
6. M. Hazewinkel, Encyclopaedia of mathematics, Kluwer Academic Publishers, Dordrecht 1995.
7. H. Lenz, Der Satz von Beckman-Quarles im rationalen Raum, Arch. Math. (Basel) 49 (1987), 106-113.
8. L. J. Mordell, Diophantine equations, Academic Press, London - New York, 1969.
9. A. Tyszka, A discrete form of the Beckman-Quarles theorem, Amer. Math. Monthly 104 (1997), 757-761.
10. A. Tyszka, Discrete versions of the Beckman-Quarles theorem, Aequationes Math. 59 (2000), 124-133.
11. J. Zaks, On the chromatic number of some rational spaces, Ars. Combin. 33 (1992), 253-256.
Technical Faculty
Hugo Kołła̧taj University
Balicka 104, PL-30-149 Kraków, Poland
rttyszka@cyf-kr.edu.pl
http://www.cyf-kr.edu.pl/~rttyszka
Environment and Energy Injection Effects in GRB Afterglows
Z. G. Dai and T. Lu
Department of Astronomy, Nanjing University, Nanjing 210093, China
ABSTRACT
In a recent paper (Dai & Lu 1999), we have proposed a simple model in which the steepening in the light curve of the R-band afterglow of the gamma-ray burst (GRB) 990123 is caused by the adiabatic shock which has evolved from an ultrarelativistic phase to a nonrelativistic phase in a dense medium. We find that such a model is quite consistent with observations if the medium density is about $`3\times 10^6\mathrm{cm}^{-3}`$. Here we discuss this model in more detail. In particular, we investigate the effects of synchrotron self absorption and energy injection. A shock in a dense medium becomes nonrelativistic rapidly after a short relativistic phase. The afterglow from the shock at the nonrelativistic stage decays more rapidly than at the relativistic stage. Since some models for GRB energy sources predict that a strongly magnetic millisecond pulsar may be born during the formation of a GRB, we discuss the effect of such a pulsar on the evolution of the nonrelativistic shock through magnetic dipole radiation. We find that after the energy which the shock obtains from the pulsar is much more than the initial energy of the shock, the afterglow decay will flatten significantly. When the pulsar energy input effect disappears, the decay will steepen again. These features are in excellent agreement with the afterglows of GRB 980519, GRB 990510 and GRB 980326. Furthermore, our model fits very well all the observational data of GRB 980519 including the last two detections.
Subject headings: gamma-rays: bursts – stars: pulsars – shock waves
1. INTRODUCTION
In the standard afterglow shock model (for a review see Piran 1999), a gamma-ray burst (GRB) afterglow is usually believed to be produced by synchrotron radiation or inverse Compton scattering in an ultrarelativistic shock wave expanding in a homogeneous medium. As more and more ambient matter is swept up, the shock gradually decelerates and the emission from such a shock fades, dominating at the beginning in X-rays and progressively at optical to radio bands. In this model, there are two limiting cases (adiabatic and highly radiative) for hydrodynamical evolution of a relativistic shock. These cases have been well studied both analytically (e.g., Mészáros & Rees 1997; Wijers, Rees & Mészáros 1997; Waxman 1997a, b; Reichart 1997; Sari 1997; Vietri 1997; Katz & Piran 1997; Sari, Piran & Narayan 1998; etc) and numerically (e.g., Panaitescu, Mészáros & Rees 1998; Huang et al. 1998; Huang, Dai & Lu 1998). A partially radiative (intermediate) case has been investigated (Chiang & Dermer 1998; Cohen, Piran & Sari 1998; Dai, Huang & Lu 1999; Huang, Dai & Lu 1999a). All the studies are based on the following basic assumptions: (1) the total energy of the shock is released impulsively before its formation; (2) the medium swept up by the shock is homogeneous and its density ($`n`$) is that of the interstellar medium, $`1\mathrm{cm}^{-3}`$; and (3) the electron and magnetic field energy fractions of the shocked medium and the index ($`p`$) in the accelerated electrons’ power-law distribution are constant during the whole evolution stage. The standard model is successful at explaining the overall features of late afterglows of some bursts such as GRB 970508: the light curves behave according to a single unbroken power law with decay index of $`\alpha \simeq -1`$ as long as the observations continued (Zharikov et al. 1998).
Each of these assumptions has been varied to discuss why some observed afterglows deviate from that expected by the standard afterglow model. For example, the R-band light curve of GRB 970508 afterglow peaks around two days after the burst, and there is a rather rapid rise before the peak which is followed by a long power-law decay. There are two models explaining this special feature: (i) Rees and Mészáros (1998) envisioned that a postburst fireball may contain shells with a continuous distribution of Lorentz factors. As the external forward shock sweeps up ambient matter and decelerates, internal shells will catch up with the shock and supply energy into it. A detailed calculation shows that this model can explain this special feature well (Panaitescu, Mészáros and Rees 1998). (ii) Dai & Lu (1998a) considered continuous energy injection from a strongly magnetized millisecond pulsar into the shock through magnetic dipole radiation. This model can also account well for the observations. It is very clear that these models don’t use basic assumption (1).
There are several models in the literature that discuss the effect of inhomogeneous media on afterglows (Dai & Lu 1998b; Mészáros, Rees & Wijers 1998; Vietri 1997), dropping the second assumption. Generally, an $`n\propto r^{-k}`$ ($`k>0`$) medium is expected to steepen an afterglow’s temporal decay. Recently, Chevalier & Li (1999a, b) found that a Wolf-Rayet star wind likely leads to an $`n\propto r^{-2}`$ medium, and thus if GRB 980519 resulted from the explosion of such a massive star, subsequent evolution of a relativistic shock in this medium is consistent with the steep decay in the R-band light curve of the afterglow from this burst. Another way of dropping the second assumption is that the density of an ambient medium is invoked to be as high as $`n\sim 10^6\mathrm{cm}^{-3}`$. Recent observations show that the temporal decay of the R-band afterglow of GRB 990123 steepened about 2.5 days after this burst (Kulkarni et al. 1999; Castro-Tirado et al. 1999; Fruchter et al. 1999). Dai & Lu (1999) (hereafter DL99) proposed a plausible model in which a shock expanding in a dense medium has evolved from a relativistic phase to a nonrelativistic phase. They found that this model fits well the observational data if the medium density is about $`3\times 10^6\mathrm{cm}^{-3}`$. They further suggested that such a medium could be a supernova or supranova or hypernova ejecta. Of course, the steepening in the light curves of the afterglows of these two bursts may be due to lateral spreading of a jet, as analyzed by Rhoads (1999) and Sari, Piran & Halpern (1999). However, numerical studies of Panaitescu & Mészáros (1999), Moderski, Sikora & Bulik (1999), Huang et al. (1999b) and Huang, Dai & Lu (2000) show that the break of the light curve is weaker and much smoother than the one analytically predicted when the light travel effects related to the lateral size of the jet and a realistic expression of the lateral expansion speed are taken into account.
In basic assumption (3), the electron and magnetic field energy fractions of the shocked medium may not vary during the whole evolution, as argued by Wang, Dai & Lu (1999a), who analyzed all the observational data including both the prompt optical flash and the afterglow of GRB 990123. However, the assumption that $`p`$ is constant might be inconsistent with the early afterglow from GRB 970508 (Djorgovski et al. 1997).
In this paper, we discuss the model proposed by DL99 in more detail, by taking into account both the synchrotron self-absorption effect in the shocked medium and the energy injection effect of Dai & Lu (1998a, c). Therefore, our present analysis, in fact, relaxes assumptions (1) and (2). So far the bursts whose afterglow decay steepens include GRB 980519, GRB 980326 and GRB 990510 besides GRB 990123. In particular, for the former two of these bursts, optical observations several days later lie far above a power-law decline, implying possible energy injection at such a late stage. In section 2, we analyze the spectrum and light curve of radiation from a shock expanding in a dense medium. In section 3, we compare our model with observations related to GRB 980519 and infer all intrinsic parameters and the redshift of this burst. We discuss properties of GRB 990510 and GRB 980326 in section 4, and in the final section we give a discussion and conclusion.
2. SHOCK EVOLUTION
2.1. Relativistic Phase
For simplicity, we assume that a relativistic shock expanding in a dense medium is adiabatic. The evolution of a partially radiative shock depends on both the efficiency with which the shock transfers its bulk kinetic energy to electrons and magnetic fields and on the efficiency with which the electrons radiate their energy (Dai, Huang & Lu 1999). Here we don’t consider such a shock. The Blandford-McKee (1976) self similar solution gives the Lorentz factor of an adiabatic relativistic shock,
$$\gamma =\frac{1}{4}\left[\frac{17E_0(1+z)^3}{\pi nm_pc^5t_\oplus ^3}\right]^{1/8}=1\,E_{52}^{1/8}n_5^{-1/8}t_\oplus ^{-3/8}[(1+z)/2]^{3/8},$$
(1)
where $`E_0=E_{52}\times 10^{52}\mathrm{ergs}`$ is the total isotropic energy, $`n_5=n/10^5\mathrm{cm}^{-3}`$, $`t_\oplus `$ is the observer’s time since the gamma-ray trigger in units of 1 day, $`z`$ is the redshift of the source generating this shock, and $`m_p`$ is the proton mass. We assume $`\gamma =1`$ when $`t_\oplus =t_b`$. This implies
$$n_5=E_{52}t_b^{-3}[(1+z)/2]^3.$$
(2)
For $`t_\oplus >t_b`$, the shock will be in a nonrelativistic phase. In the following we will see different spectra and light curves from this shock before and after the time $`t_b`$. We first analyze the relativistic case.
As usual, only synchrotron radiation from the shock is considered. To analyze the spectrum and light curve, one needs to know three crucial frequencies: the synchrotron peak frequency ($`\nu _m`$), the cooling frequency ($`\nu _c`$), and the self-absorption frequency ($`\nu _a`$). We assume a power law distribution of the electrons accelerated by the shock: $`dn_e^{\prime }/d\gamma _e\propto \gamma _e^{-p}`$ for $`\gamma _e\ge \gamma _{em}`$, where $`\gamma _e`$ is the electron Lorentz factor and $`\gamma _{em}=610ϵ_e\gamma `$ is the minimum Lorentz factor. We further assume that $`ϵ_e`$ and $`ϵ_B`$ are the electron and magnetic energy density fractions of the shocked medium respectively. The $`\nu _m`$ is the characteristic synchrotron frequency of an electron with Lorentz factor of $`\gamma _{em}`$, while the $`\nu _c`$ is the characteristic synchrotron frequency of an electron which cools on the dynamical age of the shock. According to Sari et al. (1998), therefore, these two frequencies, measured in the observer’s frame, can be written as
$$\nu _m=\frac{\gamma \gamma _{em}^2}{1+z}\frac{eB^{\prime }}{2\pi m_ec}=7.0\times 10^8(ϵ_e/0.1)^2ϵ_{B,-6}^{1/2}E_{52}^{1/2}t_\oplus ^{-3/2}[(1+z)/2]^{1/2}\mathrm{Hz},$$
(3)
$$\nu _c=\frac{18\pi em_ec(1+z)}{\sigma _T^2B^{\prime 3}\gamma t_\oplus ^2}=2.2\times 10^{17}ϵ_{B,-6}^{-3/2}E_{52}^{-1/2}n_5^{-1}t_\oplus ^{-1/2}[(1+z)/2]^{-1/2}\mathrm{Hz},$$
(4)
where $`ϵ_{B,-6}=ϵ_B/10^{-6}`$, $`B^{\prime }=(32\pi ϵ_B\gamma ^2nm_pc^2)^{1/2}`$ is the internal magnetic field strength of the shocked medium and $`\sigma _T`$ is the Thompson scattering cross section. The self-absorption frequency has been estimated to be
$$\nu _a=6.0\times 10^{11}(ϵ_e/0.1)^{-1}ϵ_{B,-6}^{1/5}E_{52}^{1/5}n_5^{3/5}[(1+z)/2]^{-1}\mathrm{Hz},$$
(5)
where $`p=2.8`$ has been used (Granot, Piran & Sari 1998; Wijers & Galama 1998). This estimate is valid only for $`\nu _a<\nu _m`$. Since $`\nu _m`$ decreases as $`t_\oplus ^{-3/2}`$ and $`\nu _a`$ is time invariant, we define a time $`t_a`$ based on $`\nu _m(t_a)=\nu _a`$:
$$t_a=0.01(ϵ_e/0.1)^2ϵ_{B,-6}^{1/5}E_{52}^{1/5}n_5^{-2/5}[(1+z)/2]\mathrm{days}.$$
(6)
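Equation (6) follows by equating (3) and (5) (a worked step added here for clarity): $`\nu _m(t_a)=\nu _a`$ gives $`t_a^{3/2}=(7.0\times 10^8/6.0\times 10^{11})(ϵ_e/0.1)^3ϵ_{B,-6}^{3/10}E_{52}^{3/10}n_5^{-3/5}[(1+z)/2]^{3/2}`$, so $`t_a\propto (ϵ_e/0.1)^2ϵ_{B,-6}^{1/5}E_{52}^{1/5}n_5^{-2/5}[(1+z)/2]`$ with coefficient $`(7/6000)^{2/3}\simeq 0.011`$, consistent with the prefactor in equation (6).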
When $`\nu _a>\nu _m`$, the self-absorption coefficient $`\alpha _\nu \propto \gamma \gamma _{em}^{p-1}B^{\prime (p+2)/2}\nu ^{-(p+4)/2}`$ (Rybicki & Lightman 1979) and the width of the shock $`\mathrm{\Delta }r\sim r/\gamma \propto \gamma t_\oplus `$, so, based on the optical depth $`\tau =\alpha _\nu \mathrm{\Delta }r=0.35`$, we find that the self-absorption frequency decays as $`\nu _a\propto t_\oplus ^{-(3p+2)/[2(p+4)]}`$. Thus we have
$$\nu _a=6.0\times 10^{11}(ϵ_e/0.1)^{-1}ϵ_{B,-6}^{1/5}E_{52}^{1/5}n_5^{3/5}[(1+z)/2]^{-1}(t_\oplus /t_a)^{-\frac{3p+2}{2(p+4)}}\mathrm{Hz}.$$
(7)
The observed synchrotron radiation peak flux can be obtained by
$$F_{\nu _m}=\frac{N_e\gamma P_{\nu _m}^{\prime }(1+z)}{4\pi D_L^2}=10ϵ_{B,-6}^{1/2}E_{52}n_5^{1/2}\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}m\mathrm{Jy},$$
(8)
where $`N_e`$ is the total number of swept-up electrons and $`P_{\nu _m}^{\prime }=m_ec^2\sigma _TB^{\prime }/(3e)`$ is the radiated power per electron per unit frequency in the frame comoving with the shocked medium. For a flat universe with $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, the distance to the source $`D_L=2c/H_0(1+z-\sqrt{1+z})`$.
After having the peak flux and three break frequencies, we can write the spectrum and light curve of synchrotron radiation. For high frequency $`\nu >\nu _{am}\equiv \mathrm{max}(\nu _a,\nu _m)`$, we find
$$F_\nu =\{\begin{array}{cc}(\nu /\nu _m)^{-(p-1)/2}F_{\nu _m}\propto \nu ^{-(p-1)/2}t_\oplus ^{3(1-p)/4}\hfill & \mathrm{if}\nu _{am}<\nu <\nu _c\hfill \\ (\nu _c/\nu _m)^{-(p-1)/2}(\nu /\nu _c)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t_\oplus ^{(2-3p)/4}\hfill & \mathrm{if}\nu >\nu _c.\hfill \end{array}$$
(9)
If $`p\simeq 2.8`$, then the temporal decay index $`\alpha =3(1-p)/4\simeq -1.35`$ for emission from slow-cooling electrons or $`\alpha =(2-3p)/4\simeq -1.6`$ for emission from fast-cooling electrons. In addition, the low-frequency ($`\nu <\nu _{am}`$) radiation should be discussed in two cases: (i) for $`\nu _a<\nu _m`$, the spectrum and light curve can be written
$$F_\nu =\{\begin{array}{cc}(\nu _a/\nu _m)^{1/3}(\nu /\nu _a)^2F_{\nu _m}\propto \nu ^2t_\oplus ^{1/2}\hfill & \mathrm{if}\nu <\nu _a\hfill \\ (\nu /\nu _m)^{1/3}F_{\nu _m}\propto \nu ^{1/3}t_\oplus ^{1/2}\hfill & \mathrm{if}\nu _a<\nu <\nu _m;\hfill \end{array}$$
(10)
(ii) for $`\nu _a>\nu _m`$, we can obtain the spectrum and light curve,
$$F_\nu =(\nu _a/\nu _m)^{-(p-1)/2}(\nu /\nu _a)^{5/2}F_{\nu _m}\propto \nu ^{5/2}t_\oplus ^{5/4}.$$
(11)
These equations show that the flux of the low-frequency radiation increases with time.
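For quick reference (an added illustration, not from the paper; function and variable names are ours), a short script evaluates the temporal decay indices implied by equations (9)-(11) for a given electron index $`p`$:

```python
# Temporal indices alpha (F_nu ∝ t^alpha) in the relativistic phase, eqs. (9)-(11).
def relativistic_indices(p):
    return {
        "nu_am < nu < nu_c (eq. 9, slow cooling)": 3 * (1 - p) / 4,
        "nu > nu_c (eq. 9, fast cooling)": (2 - 3 * p) / 4,
        "nu < nu_m with nu_a < nu_m (eq. 10)": 1 / 2,
        "nu < nu_a with nu_a > nu_m (eq. 11)": 5 / 4,
    }

for regime, alpha in relativistic_indices(2.8).items():
    print(f"{regime}: alpha = {alpha:+.2f}")
# -> -1.35 (slow cooling), -1.60 (fast cooling); the low-frequency flux rises.
```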
2.2. Nonrelativistic Phase
After sweeping up sufficient ambient matter, the shock will eventually go into a nonrelativistic phase, viz., $`t_\oplus >t_b`$. In the following we analyze the spectrum and light curve of the synchrotron radiation from such a shock, by assuming $`\nu _a>\nu _m`$.
2.2.1. Without Any Energy Injection
We first consider the widely-studied case without any energy to be input into the shock after the GRB. In this case, the shock’s velocity $`v\propto t_\oplus ^{-3/5}`$ and its radius $`r\propto vt_\oplus \propto t_\oplus ^{2/5}`$. According to DL99, thus, the synchrotron peak frequency, the cooling frequency, the self-absorption frequency, and the peak flux are derived as
$$\nu _m=7.0\times 10^8(ϵ_e/0.1)^2ϵ_{B,-6}^{1/2}E_{52}^{1/2}t_b^{-3/2}[(1+z)/2]^{1/2}(t_\oplus /t_b)^{-3}\mathrm{Hz},$$
(12)
$$\nu _c=2.2\times 10^{17}ϵ_{B,-6}^{-3/2}E_{52}^{-1/2}n_5^{-1}t_b^{-1/2}[(1+z)/2]^{-1/2}(t_\oplus /t_b)^{-1/5}\mathrm{Hz},$$
(13)
$$\nu _a=6.0\times 10^{11}(ϵ_e/0.1)^{-1}ϵ_{B,-6}^{1/5}E_{52}^{1/5}n_5^{3/5}[(1+z)/2]^{-1}(t_b/t_a)^{-\frac{3p+2}{2(p+4)}}(t_\oplus /t_b)^{-\frac{3p-2}{p+4}}\mathrm{Hz},$$
(14)
and
$$F_{\nu _m}=10ϵ_{B,-6}^{1/2}E_{52}n_5^{1/2}\left(\frac{t_\oplus }{t_b}\right)^{3/5}\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}m\mathrm{Jy}.$$
(15)
Based on these equations, we further derive the spectrum and light curve,
$$F_\nu =\{\begin{array}{ccc}(\nu _a/\nu _m)^{-(p-1)/2}(\nu /\nu _a)^{5/2}F_{\nu _m}\propto \nu ^{5/2}t_\oplus ^{11/10}\hfill & \mathrm{if}\nu <\nu _a\hfill & \\ (\nu /\nu _m)^{-(p-1)/2}F_{\nu _m}\propto \nu ^{-(p-1)/2}t_\oplus ^{(21-15p)/10}\hfill & \mathrm{if}\nu _a<\nu <\nu _c\hfill & \\ (\nu _c/\nu _m)^{-(p-1)/2}(\nu /\nu _c)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t_\oplus ^{(4-3p)/2}\hfill & \mathrm{if}\nu >\nu _c.\hfill & \end{array}$$
(16)
We easily see that for high-frequency radiation the temporal decay index $`\alpha =(21-15p)/10`$ for emission from slow-cooling electrons or $`\alpha =(4-3p)/2`$ for emission from fast-cooling electrons. If $`p\simeq 2.8`$, then $`\alpha \simeq -2.1`$ or $`-2.2`$. Comparing this with the relativistic result, we conclude that the afterglow decay steepens at the nonrelativistic stage.
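For definiteness (worked arithmetic added here), at $`p=2.8`$ the two indices in equation (16) evaluate as

$$\alpha =\frac{21-15p}{10}=\frac{21-42}{10}=-2.1,\qquad \alpha =\frac{4-3p}{2}=\frac{4-8.4}{2}=-2.2,$$

to be compared with $`-1.35`$ and $`-1.6`$ in the relativistic phase.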
2.2.2. With Energy Injection from Pulsars
Some models for GRB energy sources (for a brief review see Dai & Lu 1998c) predict that during the formation of an ultrarelativistic fireball required by GRB, a strongly magnetized millisecond pulsar will be born. If so, the pulsar will continuously input its rotational energy into the forward shock of the postburst fireball through magnetic dipole radiation because electromagnetic waves radiated by the pulsar will be absorbed in the shocked medium (Dai & Lu 1998a, c). Since an initially ultrarelativistic shock discussed in this paper rapidly becomes nonrelativistic in a dense medium, we next investigate the evolution of a nonrelativistic adiabatic shock with energy injection from a pulsar. The total energy of the shock is the sum of the initial energy and the energy which the shock has obtained from the pulsar:
$$E_0+\int _0^{t_\oplus }L\,dt_\oplus =E_{\mathrm{tot}}\propto v^2r^3,$$
(17)
where $`L`$ is the stellar spindown power $`\propto (1+t_\oplus /T)^{-2}`$ ($`T`$ is the initial spindown time scale). The term on the right-hand side is consistent with the Sedov solution. Please note that $`L`$ can be thought of as a constant for $`t_\oplus <T`$, while $`L`$ decays as $`t_\oplus ^{-2}`$ for $`t_\oplus \gg T`$. Because of this feature, we easily integrate the second term on the left-hand side of equation (17). We now define a time at which the shock has obtained energy $`E_0`$ from the pulsar, $`t_c=E_0/L`$, and assume $`t_c\ll T`$. We next analyze the evolution of the afterglow from such a shock at three stages.
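Before going through the stages, the injected-energy integral is worth writing out (an added step; $`L_0`$ denotes the initial spindown power): with $`L=L_0(1+t_\oplus /T)^{-2}`$,

$$\int _0^{t_\oplus }L\,dt_\oplus =L_0T\left[1-\left(1+\frac{t_\oplus }{T}\right)^{-1}\right]=\frac{L_0Tt_\oplus }{T+t_\oplus }\simeq \{\begin{array}{cc}L_0t_\oplus ,\hfill & t_\oplus \ll T\hfill \\ L_0T,\hfill & t_\oplus \gg T,\hfill \end{array}$$

so the injected energy grows linearly until $`t_\oplus \sim T`$ and then saturates; $`t_c=E_0/L_0`$ marks the time at which the injected energy first matches $`E_0`$.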
First, at the initial stage, $`t_\oplus \ll t_c`$, viz., the second term on the left-hand side of equation (17) can be neglected. The evolution of the afterglow is the same as in the above case without any energy injection.
Second, for $`T>t_\oplus \gg t_c`$, the term $`E_0`$ in equation (17) can be neglected. At this stage, the shock’s velocity $`v\propto t_\oplus ^{-2/5}`$, its radius $`r\propto t_\oplus ^{3/5}`$, the internal field strength $`B^{\prime }\propto t_\oplus ^{-2/5}`$ and the typical electron Lorentz factor $`\gamma _{em}\propto t_\oplus ^{-4/5}`$. Thus, we obtain the synchrotron peak frequency $`\nu _m\propto \gamma _{em}^2B^{\prime }\propto t_\oplus ^{-2}`$, the cooling frequency $`\nu _c\propto B^{\prime -3}t_\oplus ^{-2}\propto t_\oplus ^{-4/5}`$, the self-absorption frequency $`\nu _a\propto t_\oplus ^{-2(p-1)/(p+4)}`$, and the peak flux $`F_{\nu _m}\propto N_eP_{\nu _m}^{\prime }\propto r^3B^{\prime }\propto t_\oplus ^{7/5}`$. According to these scaling laws, we derive the spectrum and light curve of the afterglow
$$F_\nu =\{\begin{array}{ccc}(\nu _a/\nu _m)^{-(p-1)/2}(\nu /\nu _a)^{5/2}F_{\nu _m}\propto \nu ^{5/2}t_\oplus ^{7/5}\hfill & \mathrm{if}\nu <\nu _a\hfill & \\ (\nu /\nu _m)^{-(p-1)/2}F_{\nu _m}\propto \nu ^{-(p-1)/2}t_\oplus ^{(12-5p)/5}\hfill & \mathrm{if}\nu _a<\nu <\nu _c\hfill & \\ (\nu _c/\nu _m)^{-(p-1)/2}(\nu /\nu _c)^{-p/2}F_{\nu _m}\propto \nu ^{-p/2}t_\oplus ^{2-p}\hfill & \mathrm{if}\nu >\nu _c.\hfill & \end{array}$$
(18)
It can be seen that for high-frequency radiation the temporal decay index $`\alpha =(12-5p)/5\simeq -0.4`$ for emission from slow-cooling electrons or $`\alpha =2-p\simeq -0.8`$ for emission from fast-cooling electrons if $`p\simeq 2.8`$. This shows that the afterglow decay may significantly flatten due to the effect of the pulsar.
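The velocity scaling used in this stage follows from equation (17) in one line (a worked step added here): with the injected term dominating and $`L`$ nearly constant for $`t_\oplus <T`$,

$$Lt_\oplus \propto v^2r^3\propto v^2(vt_\oplus )^3=v^5t_\oplus ^3\Rightarrow v\propto t_\oplus ^{-2/5},\qquad r\propto vt_\oplus \propto t_\oplus ^{3/5}.$$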
Third, for $`t_\oplus \gg T`$, the power of the pulsar due to magnetic dipole radiation rapidly decreases as $`L\propto t_\oplus ^{-2}`$, and the evolution of the shock is hardly affected by the stellar radiation. Thus, the evolution of the afterglow at this stage will be the same as in the above case without any energy injection.
3. OBSERVED AND INFERRED PARAMETERS OF GRB 980519
We have shown that as an adiabatic shock expands in a dense medium from an ultrarelativistic phase to a nonrelativistic phase, the decay of radiation from such a shock will steepen, subsequently may flatten if a strongly magnetic millisecond pulsar continuously inputs its rotational energy into the shock through magnetic dipole radiation, and finally the decay will steepen again due to disappearance of the stellar effect. We next show that these effects can fit very well the observed afterglow of GRB 980519.
GRB 980519 was one of the brightest of the bursts detected by the BeppoSAX satellite (Muller et al. 1998; in ’t Zand et al. 1999). The BATSE measured fluence above 25 keV was $`(2.54\pm 0.41)\times 10^{-5}\mathrm{ergs}\mathrm{cm}^{-2}`$, which places it among the top 12% of BATSE bursts (Connaughton 1998). An X-ray afterglow was detected by the BeppoSAX NFI (Nicastro et al. 1999). The optical afterglow $`8.5`$ hours after the burst presented the most rapid fading of the well-detected GRB afterglows except for GRB 990510, consistent with $`t_\oplus ^{-2.05\pm 0.04}`$ in BVRI (Halpern et al. 1999), while the power-law decay index of the X-ray afterglow, $`\alpha _X=-2.07\pm 0.11`$ (Owens et al. 1998), was in agreement with the optical. The spectrum in optical band alone is well fitted by a power law $`\nu ^{-1.20\pm 0.25}`$, while the optical and X-ray spectra together can also be fitted by a single power law of the form $`\nu ^{-1.05\pm 0.10}`$. In addition, the radio afterglow of this burst was observed by the VLA at 8.3 GHz, and its temporal evolution was $`t_\oplus ^{0.9\pm 0.3}`$ between 1998 May 19.8UT and 22.3UT (Frail, Taylor & Kulkarni 1998).
We now analyze the observed afterglow data of GRB 980519 based on our model. We assume that for this burst, the forward shock evolved from an ultrarelativistic phase to a nonrelativistic phase in a dense medium at $`\sim 8`$ hr after the burst. So, the detected afterglow, in fact, was the radiation from a nonrelativistic shock. This implies $`\gamma \simeq 1`$ at $`t_b\simeq 1/3`$ days. From equation (2), therefore, we find
$$n_5\simeq 27E_{52}[(1+z)/2]^3.$$
(19)
If $`p\simeq 2.8`$, and if the observed optical afterglow was emitted by slow-cooling electrons and the X-ray afterglow from fast-cooling electrons, then according to equation (16), the decay index $`\alpha _R=(21-15p)/10\simeq -2.1`$ and $`\alpha _X=(4-3p)/2\simeq -2.2`$, in excellent agreement with observations. Furthermore, the model spectral index at the optical to X-ray band and the decay index at the radio band, $`\beta =-(p-1)/2\simeq -0.9`$ and $`\alpha =1.1`$, are quite consistent with the observed ones, $`-1.05\pm 0.10`$ and $`0.9\pm 0.3`$, respectively.
We next continue to take into account three observational results. First, on May 21.6UT, the Keck II 10m telescope detected the R-band magnitude $`R=23.03\pm 0.13`$, corresponding to the flux $`F_R\simeq 3.5\mu \mathrm{Jy}`$ at $`t_\oplus \simeq 2`$ days (Gal et al. 1998). Considering this result in the second sub-equation of (16) together with equations (12), (15) and (19), we can derive
$$ϵ_e^{1.8}ϵ_{B,-6}^{0.95}E_{52}^{1.95}\left(\frac{1+z}{2}\right)^{1.95}\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}\simeq 1.7,$$
(20)
where $`p\simeq 2.8`$ has been assumed.
Second, the BeppoSAX observed the X-ray (2-10 keV) flux $`F_X\simeq 1.3\times 10^{-2}\mu `$Jy at $`t_\oplus \simeq 0.65`$ days (Nicastro et al. 1999). Inserting this result into the third sub-equation of (16) together with equations (12), (13), (15) and (19), we can also derive
$$ϵ_e^{1.8}ϵ_{B,-6}^{0.2}E_{52}^{1.2}\left(\frac{1+z}{2}\right)^{0.7}\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}\simeq 0.14.$$
(21)
Third, the VLA detected the radio flux $`F_{8.3\mathrm{GHz}}=102\pm 19\mu \mathrm{Jy}`$ on May 22.3UT in 1998 (Frail et al. 1998). This may result from the self-absorption effect in the shocked medium. Thus, combining it with the first sub-equation of (16) together with equations (12), (14), (15) and (19) leads to
$$ϵ_e^{-0.32}ϵ_{B,-6}^{0.24}E_{52}^{1.28}\left(\frac{1+z}{2}\right)^{0.26}\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^{-2}\simeq 3.2.$$
(22)
In addition, the total energy of the adiabatic shock, $`E_0`$, is approximately equal to that released initially in gamma-rays (Piran 1999). This implies
$$E_{52}\simeq 10\left(\frac{1+z}{2}\right)\left(\frac{\sqrt{1+z}-1}{\sqrt{2}-1}\right)^2.$$
(23)
From equations (19)-(23), we infer intrinsic parameters of the shock and the redshift of the burst:
$$ϵ_e\simeq 0.16,\quad ϵ_{B,-6}\simeq 280,\quad E_{52}\simeq 0.27,\quad n_5\simeq 3.4,\quad z\simeq 0.55.$$
(24)
Our inferred value of $`ϵ_e`$ is near the equipartition value, in agreement with the result of Wijers & Galama (1998) and Granot, Piran & Sari (1998), while our $`ϵ_B`$ is also close to the value inferred from GRB 971214 and GRB 990123 (Wijers & Galama 1998; Galama et al. 1999; Wang et al. 1999a). After considering these reasonable parameters, Wang, Dai & Lu (1999b) numerically studied the trans-relativistic evolution of the shock and found that our dense medium model can provide an excellent fit to all the observational data of the radio afterglow from GRB 980519 shown in Frail et al. (1999).
If the late afterglow of GRB 980519 had still decayed according to equation (16), the inferred R-band fluxes on the 60th day and 66th day would have been nearly two orders of magnitude smaller than the observed values. This would lead to the argument that the emission on these two days came from the host galaxy of the burst (Sokolov et al. 1998; Bloom et al. 1998). We note that, despite excellent seeing conditions on the Keck II telescope, Bloom et al. (1998) found little evidence for extension expected of a host galaxy. This implies that there may exist some mechanism by which the shock at the late stage had been renewed. As suggested in the above section, this mechanism is that a strongly magnetized millisecond pulsar had supplied its rotational energy to the shock through magnetic dipole radiation. We can see from the second sub-equation of (18) that when $`t_c\ll t_\oplus <T`$, the R-band afterglow decay index $`\alpha _R=(12-5p)/5\simeq -0.4`$, where $`p\simeq 2.8`$ has been assumed. Combining this result with the observed flux $`F_R\simeq 0.2\mu \mathrm{Jy}`$ on the 60th day and the decay power law in several days after the burst, we infer $`t_c\simeq 4`$ days. According to the definition of $`t_c=29E_{52}B_{s,13}^{-2}P_{\mathrm{ms}}^4`$ days, where $`B_{s,13}`$ is the surface magnetic field strength of the pulsar in units of $`10^{13}`$ G and $`P_{\mathrm{ms}}`$ is its initial period in units of 1 ms (Dai & Lu 1998a, c), we can obtain a constraint on the stellar parameters: $`B_{s,13}\simeq 2.7E_{52}^{1/2}P_{\mathrm{ms}}^2\simeq 1.7P_{\mathrm{ms}}^2`$. Moreover, our model requires $`T>66`$ days, which leads to $`P_{\mathrm{ms}}<0.8`$, where we have used the definition of the stellar spindown timescale, $`T=120B_{s,13}^{-2}P_{\mathrm{ms}}^2`$ days. Therefore, if GRB 980519 resulted from a pulsar, and if the property of the late afterglow was caused by the effect of the stellar magnetic dipole radiation, then this pulsar may be a strongly magnetized millisecond or even submillisecond one.
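A short script (an added illustration; it simply traces the two relations quoted above, $`t_c=29E_{52}B_{s,13}^{-2}P_{\mathrm{ms}}^4`$ days and $`T=120B_{s,13}^{-2}P_{\mathrm{ms}}^2`$ days, using the coefficient $`B_{s,13}\simeq 1.7P_{\mathrm{ms}}^2`$ given in the text) makes the period bound explicit:

```python
# Pulsar constraints for GRB 980519: t_c ~ 4 d fixes B_s,13 ~ 1.7 P_ms^2,
# and the spindown time T = 120 B_s,13^-2 P_ms^2 days must exceed ~66 days.
def B13(P_ms):
    return 1.7 * P_ms**2  # field in units of 10^13 G

def T_days(P_ms):
    return 120.0 * P_ms**2 / B13(P_ms) ** 2

for P in (0.5, 0.8, 1.0):
    print(f"P = {P:.1f} ms: B_s ~ {B13(P):.2f}e13 G, T ~ {T_days(P):.0f} d")
# T > 66 d holds only for P_ms below about 0.8, as stated in the text.
```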
4. PROPERTIES OF OTHER BURSTS
4.1. GRB 990510
GRB 990510 was detected by the BeppoSAX Gamma-Ray Burst Monitor (Piro et al. 1999) as a bright and complex GRB composed by two well separated and multi-peaked pulses with a total duration of about 75 s (Amati et al. 1999). Its fluence was among the highest of the BeppoSAX localized events, after GRB 990123, GRB 980329 and GRB 970111. It was also detected by BATSE (Kippen et al. 1999) and its fluence ($`>20`$ keV) was $`(2.56\pm 0.09)\times 10^{-5}\mathrm{erg}\mathrm{cm}^{-2}`$, in the top 9% of the BATSE burst fluence distribution. The burst appears at $`z\simeq 1.62`$ (Vreeswijk et al. 1999), which leads to an isotropic energy of $`1.4\times 10^{53}\mathrm{ergs}`$. The burst’s afterglow was detected and monitored at X-ray and optical bands. Even though the X-ray afterglow decay light curve is not unlike previously seen X-ray afterglow decays (Kuulkers et al. 1999), the optical afterglow displays its special feature: all the temporal decays at VRI bands steepened about 1.2 days after the burst (Harrison et al. 1999; Stanek et al. 1999; Bloom et al. 1999a; Marconi et al. 1999). Initially the optical decay index was $`\alpha _1=-0.82\pm 0.02`$, but about 1.2 days later the index became $`\alpha _2=-2.18\pm 0.05`$. The consistency of $`\alpha _1`$, $`\alpha _2`$ and the breaking time means that the breaking is wide band. This is the first clear observation of a wide band break (Bloom et al. 1999a; Harrison et al. 1999).
One simple interpretation for this steepening seems that we have been seeing evidence for a spreading jet (Rhoads 1999; Sari et al. 1999; Bloom et al. 1999a). As shown numerically in Panaitescu & Mészáros (1999), Moderski et al. (1999) and Huang et al. (1999b, 2000), however, the evolution of a spreading jet may not lead to a marked steepening. Another possible interpretation is that the effect of a strongly magnetized millisecond pulsar on the evolution of a nonrelativistic adiabatic shock in a dense medium has been becoming unimportant. If initially the pulsar was able to change the evolution of the shock for GRB 990510 through magnetic dipole radiation, and if the optical afterglow came from fast-cooling electrons in the shocked medium, then according to the third sub-equation of (18), the temporal decay index $`\alpha =2-p\simeq -0.82\pm 0.02`$. This requires $`p\simeq 2.8`$, which is quite consistent with the value inferred from GRB 980519. When the effect of the pulsar on the shock disappeared, the optical afterglow decayed based on the third sub-equation of (16), viz., $`\alpha =(4-3p)/2\simeq -2.2`$, in excellent agreement with the observations. Furthermore, the observed breaking time should be equal to the stellar spindown timescale in our model, which constrains the pulsar’s field strength: $`B_{s,13}\simeq 10P_{\mathrm{ms}}`$. Thus, the central engine of this burst could be a millisecond magnetar.
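The field constraint follows in one line (a worked step added here): identifying the observed break time with the spindown time, $`T=120B_{s,13}^{-2}P_{\mathrm{ms}}^2\mathrm{days}\simeq 1.2\mathrm{days}`$ gives

$$B_{s,13}^2\simeq \frac{120}{1.2}P_{\mathrm{ms}}^2=100P_{\mathrm{ms}}^2\Rightarrow B_{s,13}\simeq 10P_{\mathrm{ms}}.$$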
4.2. GRB 980326
The afterglow of GRB 980326 also had a rapid decline. Groot et al. (1998) derived a temporal decay index of $`\alpha =-2.1\pm 0.13`$ and a spectral index of $`\beta =-0.66\pm 0.7`$ in the optical band. This initial decay index, which is similar to that of GRB 980519, suggests the evolution of a nonrelativistic adiabatic shock in a dense medium. There is another observational result similar to the case of GRB 980519: the decay of the observed optical afterglow began to flatten about 5 days after the burst; this is not the contribution of the host galaxy because it is not present at a later time (Bloom & Kulkarni 1998; Bloom et al. 1999b). Consequently, the late afterglow might be interpreted as a different phenomenon. Bloom et al. (1999b) suggested that this late afterglow could result from a supernova associated with GRB 980326. In our model, this is understood to be the emission from the nonrelativistic shock into which energy has been input by a strongly magnetized millisecond pulsar. According to the light curve shown in Figure 2 of Bloom et al. (1999b), we infer $`t_c\simeq 5`$ days. Thus, the surface field strength of the pulsar could be $`B_{s,13}\simeq 2.4E_{52}^{1/2}P_{\mathrm{ms}}^2`$.
5. DISCUSSIONS AND CONCLUSIONS
There have been two kinds of plausible models for GRB energy sources in the literature: one relating to pulsars and another to stellar-mass black holes. Dai & Lu (1998c) have summarized possible progenitors involving strongly magnetic millisecond pulsars: accretion-induced collapses of magnetized white dwarfs, mergers of two neutron stars if the equation of state for neutron matter is moderately stiff to stiff, and phase transitions of neutron stars to strange stars. The pulsar progenitor models also include the birth of magnetars in supernova explosions suggested by Wheeler et al. (1999). The rotational energy of such pulsars at the moment of their formation is as high as a few $`\times 10^{53}`$ ergs. The efficiency of transformation of the rotational energy to the energy of a relativistic outflow and then to the energy of high-frequency radiation may be as high as almost 100% (Usov 1994; Blackman, Yi & Field 1996). Such pulsars have been suggested to generate possibly anisotropic outflows (Dai & Lu 1998a; Smolsky & Usov 1999), and thus may explain the energetics of GRBs, including GRB 990123 as an extreme event if the energy flux from the source at the line of sight is only about ten times more than the energy flux averaged over all directions. Two by-products in this kind of models are millisecond pulsars and relativistic forward shocks forming during the collision of the outflows with ambient media. It is natural to expect that the central pulsars affect the evolution of postburst shocks and in turn the afterglows from these shocks through magnetic dipole radiation.
In the second kind of GRB source models, Fryer, Woosley & Hartmann (1999) have summarized possible progenitors involving black hole accretion disks: neutron star-neutron star binary mergers, black hole-neutron star mergers, black hole-white dwarf mergers, massive star core collapses, and black hole-helium star mergers. This kind of models should include the supranovae proposed by Vietri & Stella (1998).
Some of the source models mentioned above, e.g., failed supernovae (Woosley 1993), hypernovae (Paczyński 1998), supranovae (Vietri & Stella 1998), phase transitions of neutron stars to strange stars (Dai & Lu 1998a; DL99) and birth of magnetars (Wheeler et al. 1999), may lead to dense media prior to the occurrence of GRBs. In addition, dense media have also been discussed in the context of GRBs. For example, Katz (1994) suggested collisions of relativistic nucleons with a dense cloud as an explanation of the delayed hard photons from GRB 940217. DL99 discussed the evolution of an adiabatic shock in a dense medium to explain the steepening feature in the light curve of the R-band afterglow from GRB 990123.
Based on these arguments, following DL99, we discuss the evolution of an adiabatic shock expanding in a dense medium from an ultrarelativistic phase to a nonrelativistic phase in more detail in this paper. In particular, we discuss the effects of synchrotron self absorption and energy injection on the afterglow from this shock. In a dense medium, the shock becomes nonrelativistic rapidly after a short relativistic phase. This transition time varies from several hours to a few days when the medium density is from $`10^5`$ to a few $`\times 10^6\mathrm{cm}^{-3}`$, and the shock energy from $`10^{51}`$ to $`10^{54}`$ ergs. The afterglow from the shock at the nonrelativistic stage decays more rapidly than at the relativistic stage, while the decay index varies from $`-1.35`$ to $`-2.1`$ if the spectral index of the accelerated electron distribution, $`p=2.8`$, and the radiation comes from those slow-cooling electrons. Since some models mentioned above predict that a strongly magnetic millisecond pulsar may be born during the formation of a GRB, we also discuss the effect of such a pulsar on the evolution of a nonrelativistic shock through magnetic dipole radiation, in contrast to the case discussed in Dai & Lu (1998a, c). We find that after the energy which the shock obtains from the pulsar is much more than the initial energy of the shock, the afterglow decay will flatten significantly and the decay index will become $`-0.4`$. When the pulsar energy input effect disappears, the index will return to $`-2.1`$. These features are in excellent agreement with the afterglows of GRB 980519 and GRB 980326. Furthermore, our model fits very well all the observational data of GRB 980519 including the last two detections. Of course, if an afterglow of our interest comes from fast-cooling electrons in the shocked medium, the decay index of this afterglow will first be $`-1.6`$ during the relativistic phase and subsequently $`-2.2`$ at the nonrelativistic stage in the case of $`p=2.8`$. If the energy input effect of the pulsar during this stage becomes very important, the decay index will be $`-0.8`$. When this effect disappears, the index will become $`-2.2`$ again. The latter values of the index are quite consistent with the observations of GRB 990510. This requires that the time before the pulsar is able to affect the evolution of the shock is only as short as a few hours. It should be pointed out that whether and when a pulsar significantly affects the evolution of a shock largely depends upon the following three parameters: the shock’s initial energy, the dipole magnetic field and period of the pulsar.
The flattening of the late optical afterglow light curves of GRB 980519 and GRB 980326 has also been interpreted as being due to Ib/c supernovae, which is based on the obvious reddening of the observed afterglow spectrum. In our model, this flattening is due to the energy input effect of millisecond pulsars through magnetic dipole radiation, and the expected afterglow spectrum would be the typical synchrotron spectrum without any dust effect. If dust exists in the vicinity of the pulsars, however, dust sublimation may lead to reddening of the afterglow spectrum as suggested by Waxman & Draine (1999). In explaining the steepening of the afterglow light curve of GRB 990510, we have envisioned the disappearance of the pulsar energy input effect. In our model, radio afterglows first brighten as $`t_\oplus ^{5/4}`$ with the pulsar energy input effect, and subsequently fade down as $`t_\oplus ^{(4-3p)/2}`$ when this effect becomes unimportant and the observed frequency is smaller than the synchrotron self-absorption frequency. Such a steepening has also been understood to be due to the lateral expansion of relativistic jets (Rhoads 1999; Sari et al. 1999). Thus, radio afterglows from jets first decay as $`t_\oplus ^{-1/3}`$ and then as $`t_\oplus ^{-p}`$ (Sari et al. 1999). Therefore, there is an obvious difference in radio afterglow light curves between the pulsar energy input model and the jet model. We expect that future observations will provide evidence for or against the pulsar energy input model.
In summary, following DL99, we propose a model for several afterglows, in which a shock expanding in a dense medium is refreshed by energy injection if its central engine is a strongly magnetized millisecond pulsar. We show that this model explains well the features of the afterglows from GRBs 980519, 980326 and 990510.
We would like to thank the anonymous referee for valuable comments and suggestions, and Y. F. Huang and D. M. Wei for helpful discussions. This work was supported partially by the National Natural Science Foundation of China (grants 19825109 and 19773007) and partially by the National 973 Project.
REFERENCES
Amati, L., Frontera, F., Costa, E., & Feroci, M. 1999, GCN Circ. 317
Blackman, E. G., Yi, I., & Field, G. B. 1996, ApJ, 473, L79
Blandford, R. D., & McKee, C. F. 1976, Phys. Fluids, 19, 1130
Bloom, J. S., & Kulkarni, S. R. 1998, GCN Circ. 161
Bloom, J. S., Kulkarni, S. R., Djorgovski, S. G., Gal, R. R., Eichelberger, A., & Frail, D. A. 1998, GCN Circ. 149
Bloom, J. S. et al. 1999a, GCN Circ. 323
Bloom, J. S. et al. 1999b, Nature, 401, 453
Castro-Tirado, A. J. et al. 1999, Science, 283, 2069
Chiang, J., & Dermer, C. D. 1998, astro-ph/9803339
Chevalier, R. A., & Li, Z. Y. 1999a, ApJ, 520, L29
Chevalier, R. A., & Li, Z. Y. 1999b, astro-ph/9908272
Cohen, E., Piran, T., & Sari, R. 1998, ApJ, 509, 717
Connaughton, V. 1998, GCN Circ. 86
Dai, Z. G., Huang, Y. F., & Lu, T. 1999, ApJ, 520, 634
Dai, Z. G., & Lu, T. 1998a, Phys. Rev. Lett., 81, 4301
Dai, Z. G., & Lu, T. 1998b, MNRAS, 298, 87
Dai, Z. G., & Lu, T. 1998c, A&A, 333, L87
Dai, Z. G., & Lu, T. 1999, ApJ, 519, L155 (DL99)
Djorgovski, S. G. et al. 1997, Nature, 387, 876
Frail, D. A., Taylor, G. B., & Kulkarni, S. R. 1998, GCN Circ. 89
Frail, D. A., et al. 1999, ApJL, submitted (astro-ph/9910060)
Fruchter, A. S. et al. 1999, ApJ, in press (astro-ph/9902236)
Fryer, C. L., Woosley, S. E., & Hartmann, D. H. 1999, ApJ, submitted
(astro-ph/9904122)
Gal, R. R. et al. 1998, GCN Circ. 88
Galama, T. J. et al. 1999, Nature, 398, 394
Granot, J., Piran, T., & Sari, R. 1998, astro-ph/9808007
Groot, P. J. et al. 1998, ApJ, 502, L123
Halpern, J. P., Kemp, J., Piran, T., & Bershady, M. A. 1999, ApJ, submitted
(astro-ph/9903418)
Harrison, F. A. et al. 1999, astro-ph/9905306
Huang, Y. F., Dai, Z. G., & Lu, T. 1998, A&A, 336, L69
Huang, Y. F., Dai, Z. G., & Lu, T. 1999a, MNRAS, 309, 513
Huang, Y. F., Dai, Z. G., Wei, D. M., & Lu, T. 1998, MNRAS, 298, 459
Huang, Y. F., Gou, L. J., Dai, Z. G., & Lu, T. 1999b, ApJ, submitted (astro-ph/9910493)
Huang, Y. F., Dai, Z. G., & Lu, T. 2000, MNRAS, submitted
in ’t Zand, J. J. M., Heise, J., van Paradijs, J., & Fenimore, E. E. 1998, ApJ, in press
Katz, J. I. 1994, ApJ, 432, L27
Katz, J. I., & Piran, T. 1997, ApJ, 490, 772
Kippen, R. M. 1999, GCN Circ. 322
Kulkarni, S. R. et al. 1999, Nature, 398, 389
Kuulkers, E. et al. 1999, GCN Circ. 326
Marconi, G., Israel, G. L., Lazzati, D., Covino, S., & Ghisellini, G. 1999, GCN Circ. 329
Mészáros, P., & Rees, M. J. 1997, ApJ, 476, 232
Mészáros, P., Rees, M. J., & Wijers, R. A. M. J. 1998, ApJ, 499, 301
Moderski, R., Sikora, M., & Bulik, T. 1999, astro-ph/9904310
Muller, J. M. et al. 1998, IAU Circ. 6910
Nicastro, L. et al. 1999, A&A, in press (astro-ph/9904169)
Owens, A. et al. 1998, A&A, 339, L37
Paczyński, B. 1998, ApJ, 494, L45
Panaitescu, A., & Mészáros, P. 1999, ApJ, in press (astro-ph/9806016)
Panaitescu, A., Mészáros, P., & Rees, M. J. 1998, ApJ, 503, 315
Piran, T. 1999, Phys. Rep., 314, 575
Piro, L. et al. 1999, GCN Circ. 304
Rees, M. J., & Mészáros, P. 1998, ApJ, 496, L1
Reichart, D. E. 1997, ApJ, 485, L57
Rhoads, J. 1999, ApJ, in press (astro-ph/9903399)
Rybicki, G. B. & Lightman, A. P. 1979, Radiative Processes in Astrophysics (New York: Wiley)
Sari, R. 1997, ApJ, 489, L37
Sari, R., Piran, T., & Halpern, J. P. 1999, astro-ph/9903339
Sari, R., Piran, T., & Narayan, R. 1998, ApJ, 497, L17
Smolsky, M. V., & Usov, V. V. 1999, astro-ph/9905142
Sokolov, V., Zharikov, S., Palazzi, L., & Nicastro, L. 1998, GCN Circ. 148
Stanek, K. Z. et al. 1999, astro-ph/9905304
Usov, V. V. 1994, MNRAS, 267, 1035
Vietri, M. 1997, ApJ, 488, L105
Vietri, M., & Stella, L. 1998, ApJ, 507, L45
Vreeswijk, P. M. et al. 1999, GCN Circ. 324
Wang, X. Y., Dai, Z. G., & Lu, T. 1999a, MNRAS, submitted (astro-ph/9906062)
Wang, X. Y., Dai, Z. G., & Lu, T. 1999b, MNRAS, submitted (astro-ph/9912492)
Waxman, E. 1997a, ApJ, 485, L5
Waxman, E. 1997b, ApJ, 489, L33
Waxman, E., & Draine, B. T. 1999, ApJ, submitted (astro-ph/9909020)
Waxman, E., Kulkarni, S. R., & Frail, D. A. 1998, ApJ, 497, 288
Wheeler, J. C., Yi, I., Höflich, P., & Wang, L. 1999, ApJ, in press
Wijers, R. A. M. J., & Galama, T. J. 1998, ApJ, in press (astro-ph/9805341)
Wijers, R. A. M. J., Rees, M. J., & Mészáros, P. 1997, MNRAS, 288, L51
Woosley, S. 1993, ApJ, 405, 273
Zharikov, S. V., Sokolov, V. V., & Baryshev, Yu V. 1998, A&A, 337, 356