no-problem/9902/nucl-th9902009.html
## 1 Introduction

The density dependence of vector-meson masses suggested by Brown/Rho (B/R) scaling has stimulated a lot of interest. In particular, the CERES dilepton experiments provided strong evidence that the properties of the $`\rho `$ mesons are nontrivially modified in hadronic matter. An excess of dileptons with low invariant mass, as well as strength missing from the region of the free $`\rho `$ mass, are found in the experiments, although these determinations have not been very quantitative up to now. However, experiments now underway with the TPC should determine with good accuracy just how much strength is left at the free $`\rho `$-meson pole during the time of overlap of the heavy nuclei up until freezeout, that is, during the fireball.

The simplest and most economical explanation for the observed low-mass dileptons is given in terms of quasiparticles (both fermions and bosons) whose masses drop according to B/R scaling, thereby making an appealing link to the chiral structure of the hadronic vacuum. In an alternative view, Rapp, Chanfray and Wambach (R/W) claimed that the excess of low-mass dileptons can also follow from conventional many-body physics. On rather general grounds, this “alternative” description was in a sense anticipated, as discussed by one of the authors . In analogy to the quark-hadron duality in heavy-light meson decay processes, one may view B/R scaling as a “partonic” picture and R/W as a hadronic one. One way of succinctly summarizing the situation is that the former is a top-down approach and the latter a bottom-up one. The link between B/R scaling and the Landau quasiparticle interaction $`F_1`$ established in  is one specific indication of this “duality.”

Indeed, in , Brown et al argued that the R/W explanation could be interpreted as a density-dependent $`\rho `$-meson mass, calculated in a hadron language (in contrast to that of the constituent quarks used by Brown and Rho). In particular, it was suggested in ref.  that if one replaced the $`\rho `$-meson mass $`m_\rho `$ by the mass $`m_\rho ^{*}(\rho )`$ at the density being considered, one would arrive at a description, in hadron language, which at high densities appeared dual to the Brown/Rho one in terms of constituent quarks. These developments involved the interpretation of a collective isobar-hole excitation as an effective vector meson field operating on the ground state of the nucleus, i.e.,

$$\frac{1}{\sqrt{A}}\sum _i[N^{*}(1520)_iN_i^{-1}]^{1^{-}}\sim \sum _i[\rho (x_i)\;\mathrm{or}\;\omega (x_i)]\,|\mathrm{\Psi }_0\rangle _s,$$ (1)

with the antisymmetrical (symmetrical) sum over neutrons and protons giving the $`\rho `$-like ($`\omega `$-like) nuclear excitation. The dropping vector meson masses could then be calculated in terms of the mixing of the nuclear collective state, eq.(1), with the elementary vector meson through the mixing matrix elements of fig. 1. The collective nuclear mode built up in this way can be identified as an analog of the state in the degenerate schematic model of Brown for the giant dipole resonance . An important development which lends support to the assumption eq.(1) was furnished by Friman, Lutz and Wolf. From empirical values of amplitudes such as $`\pi +N\rightarrow \rho +N`$, etc., they constructed the $`\rho `$-like and $`\omega `$-like states anticipated by our assumption eq.(1). Thus our input assumption receives substantial empirical support.
Furthermore, one can obtain the coupling constants of the nucleon to three collective states from their work, that to the $`\rho `$-like excitation being close to the one used by Brown et al in .

In this paper, we reformulate the heuristic idea described in  in a more specific form by clearly stating the set of assumptions we make in implementing the strategy. Our principal aim is to construct a model that interpolates between the R/W theory, valid near zero density, and the B/R theory, valid near the chiral phase transition density. Within the schematic two-level (“collective” field and “elementary” field) model defined by the coupling matrix element $`M_{ij}`$ of fig.1, we assume that the self-energy $`\mathrm{\Sigma }`$, eq.(8), that enters the dispersion formula (to be given below) encodes the mechanism to interpolate between the two regimes. In , it was suggested[^1] that going to B/R scaling from R/W theory corresponds to replacing the $`m_\rho ^2`$ appearing in the denominator of (8) by $`m_\rho ^{*2}`$.

[^1]: We have no convincing argument for the validity of this procedure. Our conjecture is as follows. To zeroth order in density, the $`\rho N^{*}N`$ coupling is of the form $`\frac{f}{m}q_0`$ with a dimensionless constant $`f`$, where $`q_0`$ is the fourth component of the four-vector of the $`\rho `$ meson. If one writes this as $`Fq_0`$ with $`F=f/m`$, then one should compute the medium renormalization of the constant $`F`$, which will then depend on density $`\rho `$. In order for the vector meson mass to go to zero at some high density so as to match B/R scaling, it is required that $`F(\rho )q_0\rightarrow \mathrm{constant}\ne 0`$. For $`q=|\stackrel{}{q}|=0`$ which we are considering, this can be satisfied if $`F(\rho )\sim m^{*-1}`$, modulo an overall constant. This is essentially the content of the proposal of ref. .

In this paper, we shall show that this is indeed consistent with what is expected at $`\rho \approx 0`$ and $`\rho \approx \rho _c`$. Specifically, with the present construction, $`m_\rho ^{*}`$ goes to zero at $`\rho _c\approx 2.75\rho _0`$ as in the Nambu-Jona-Lasinio calculation . Without this replacement in R/W, however, the $`\rho `$ mass can never go to zero at any density. This is because the self-energy carries the prefactor $`q_0^2/m_\rho ^2`$, so that it would vanish as the pole position $`q_0=m_\rho ^{*}\rightarrow 0`$, resulting in a contradiction. Furthermore, at any density there will always be two states of the $`\rho `$ quantum number in R/W, whereas in B/R all of the strength ($`A(\omega )`$ defined later) goes into the lower one as $`m_\rho ^{*}\rightarrow 0`$, with the width going to zero as well since the phase space for decay goes to zero. We identify this state as the effective $`\rho `$ degree of freedom as one approaches the critical density. At lower densities we cannot make this identification, because of the two different $`\rho `$-states, so our model has a clear interpretation only at $`\rho \approx 0`$ and $`\rho \approx \rho _c`$.

Since we have a simple schematic model which can describe (roughly) the Rapp/Wambach or Brown/Rho regimes, depending on whether one scales with $`m_\rho `$ or $`m_\rho ^{*}`$, we can easily calculate the general amount of strength to be found in low-mass dileptons, and the strength removed from the free-$`\rho `$ pole. We adopt the following strategy. We calculate the weighting factor $`Z`$ for the two states, nuclear collective and elementary vector, which mix.
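To make the two-level mixing concrete, the following minimal sketch (in Python) diagonalizes a schematic 2×2 Hamiltonian for the collective and elementary states and reads off the weight $`Z`$ as the squared amplitude of the elementary $`\rho `$ in each eigenstate. The mixing matrix element of 200 MeV and the density-independent unperturbed energies are illustrative assumptions, not the inputs of the full model:

```python
import numpy as np

# Schematic two-level model: an "elementary" rho at m_rho mixes with the
# collective [N*(1520) N^-1] state at Delta_E through one matrix element M.
# M = 200 MeV is an invented illustrative number, not a fitted value.
m_rho, delta_E, M = 770.0, 580.0, 200.0          # all in MeV

H = np.array([[delta_E, M],
              [M,       m_rho]])

energies, vectors = np.linalg.eigh(H)            # columns = eigenstates

for E, v in zip(energies, vectors.T):
    # Z = squared amplitude of the elementary rho in this eigenstate
    print(f"E = {E:6.1f} MeV   Z = {v[1]**2:.2f}")
```

In the full treatment below, the mixing enters instead through the energy-dependent self-energy $`\mathrm{\Sigma }`$ of eq.(8), so $`Z`$ has to be defined via the derivative formula, eq.(12).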
The large imaginary part of the energy of the former state makes it difficult to show in detail exactly how its strength is distributed, but we know that the amount of strength in that state – in our two-level model – must be just the strength removed from the higher state by the mixing. This strength will be found at low invariant masses. We make rough estimates, by including or not including various widths, of the energies at which this lower strength will be formed.

We note here that in our two-level model, the state originating from the elementary $`\rho `$ (or $`\omega `$) is pushed up substantially in energy. We believe much of this displacement to be an artefact of our two-level model: there is substantial strength with $`\rho `$ (or $`\omega `$) quantum numbers lying above the single $`\rho `$ (or $`\omega `$) excitation we have chosen, and this strength would push the upper $`\rho `$ (or $`\omega `$) back down. Whereas some upward shift of the strength originally in the elementary $`\rho `$ (or $`\omega `$) may occur, our two-level model certainly overdoes the shift. We do not believe this defect greatly changes the amount of strength shifted to lower invariant mass, however. Of course the total strength is conserved, so the amount of strength shifted to lower energies must be that missing from the higher state. However, we note that the spectral strength $`A(\omega )`$ is related to $`Z(\omega )`$ by a factor

$$A(\omega )=\frac{Z(\omega )}{2\omega }.$$ (2)

This is clear because the sum rule on the $`A(\omega )`$, essentially oscillator strength, must be just that of the (energy-weighted) Thomas-Reiche-Kuhn sum rule. It is the quantity $`A(\omega )`$ which enters into the rate equation for dilepton production. Thus if the nuclear collective state is pushed down to an energy $`\omega \approx m_\rho /2`$, with about $`25\%`$ of the strength being removed from the elementary $`\rho `$ pole, then one finds roughly equal spectral weights in the low-energy region and in the region of the elementary $`\rho `$. Because of the much larger Boltzmann factor in the low-energy region, a factor of several more dileptons will come from it than from the $`\rho `$-pole, given a temperature $`T\approx 150`$ MeV.

## 2 The $`\rho `$-Meson in Nuclear Matter

The in-medium $`\rho `$-meson propagator is given by

$$D_\rho (q_0,\stackrel{}{q})=1/[q_0^2-\stackrel{}{q}^2-(m_\rho ^0)^2-\mathrm{\Sigma }_{\pi \pi }(q_0,\stackrel{}{q})-\mathrm{\Sigma }_{\rho N^{*}N}(q_0,\stackrel{}{q})]$$ (3)

where $`m_\rho ^0`$ is the bare mass. The real part of $`\mathrm{\Sigma }_{\pi \pi }`$ is taken into account approximately by defining $`m_\rho ^2=(m_\rho ^0)^2+\mathrm{Re}\mathrm{\Sigma }_{\pi \pi }=(770)^2\mathrm{MeV}^2`$ . The imaginary part is taken to be

$$\mathrm{Im}\mathrm{\Sigma }_{\pi \pi }(q_0,\stackrel{}{q})=-m_\rho \mathrm{\Gamma }_{\pi \pi }(q_0,\stackrel{}{q}).$$ (4)

Then we get[^2]

[^2]: From the particle data book, we find $`\mathrm{\Gamma }_{\pi \pi }=150`$ MeV (full width). In ref.  (compare this with that in ref. ), the authors used the following form for $`\mathrm{\Gamma }_{\pi \pi }`$: $$\mathrm{\Gamma }_{\pi \pi }=\frac{p(q_0)^3}{p_0^3}\left(\frac{2\mathrm{\Lambda }_\rho ^2+m_\rho ^2}{2\mathrm{\Lambda }_\rho ^2+q_0^2}\right)\mathrm{\Gamma }_{\pi \pi }^0$$ (5) with $`\mathrm{\Gamma }_{\pi \pi }^0=120`$ MeV and $`p_0\equiv p(q_0=m_\rho )`$. Here $`p`$ refers to the pion momentum ($`p=|\stackrel{}{p}|`$) and $`q_0`$ to the energy of the $`\rho `$-meson.
$$D_\rho (q_0,\stackrel{}{q})=1/[q_0^2-\stackrel{}{q}^2-m_\rho ^2(q_0)+im_\rho \mathrm{\Gamma }_{\pi \pi }(q_0)-\mathrm{\Sigma }_{\rho N^{*}N}]$$ (6)

where $`m_\rho (q_0)`$ is the energy-dependent mass, with the energy dependence lodged in the self-energy. The $`\rho `$-meson dispersion relation (at $`\stackrel{}{q}=0`$) is given by

$$q_0^2=m_\rho ^2+\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}(q_0).$$ (7)

Solving this equation is equivalent to locating the zeros of the real part of the inverse $`\rho `$-meson propagator, eq.(6).

### 2.1 The Rapp/Wambach approach

We start with the crucial ingredients of the R/W (Rapp/Wambach) theory . The $`\rho `$-meson self-energy coming from the particle-hole excitation $`N^{*}(1520)N^{-1}`$ is

$$\mathrm{\Sigma }_{\rho N^{*}N}(q_0)=f_{\rho N^{*}N}^2\frac{8}{3}\frac{q_0^2}{m_\rho ^2}\frac{\rho _0}{4}\left(\frac{2\mathrm{\Delta }E}{(q_0+i\mathrm{\Gamma }_{tot}/2)^2-(\mathrm{\Delta }E)^2}\right)$$ (8)

where $`\mathrm{\Delta }E=M_{N^{*}}-M_N\approx 1520-940=580`$ MeV and $`\mathrm{\Gamma }_{tot}=\mathrm{\Gamma }_0+\mathrm{\Gamma }_{med}`$, where $`\mathrm{\Gamma }_0`$ is the full width of $`N^{*}(1520)`$ in free space, $`120`$ MeV. The $`\mathrm{\Gamma }_{med}`$ represents medium corrections to the width of $`N^{*}(1520)`$ . In this calculation, we shall simply replace the integration over the Fermi momentum by the nuclear density $`\rho _0`$, an approximation presumably good at low density. The real part of $`\mathrm{\Sigma }_{\rho N^{*}N}(q_0)`$ then takes the form

$$\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}=f_{\rho N^{*}N}^2\frac{4}{3}\frac{q_0^2}{m_\rho ^2}\rho _0\frac{\mathrm{\Delta }E\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)}{\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)^2+\mathrm{\Gamma }_{tot}^2q_0^2}.$$ (9)

This leads to the $`\rho `$-meson dispersion relation (for $`\stackrel{}{q}=0`$)

$$q_0^2=m_\rho ^2+\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}(q_0)=m_\rho ^2+f_{\rho N^{*}N}^2\frac{4}{3}\frac{q_0^2}{m_\rho ^2}\rho _0\frac{\mathrm{\Delta }E\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)}{\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)^2+\mathrm{\Gamma }_{tot}^2q_0^2}.$$ (10)

The $`Z`$-factor that represents the spectral weight of the upper state is, in general, defined by

$$Z=\left(1-\frac{\partial \mathrm{\Sigma }}{\partial q_0^2}\right)^{-1}.$$ (11)

To get this quantity, we first evaluate $`\frac{\partial \mathrm{\Sigma }_{\rho N^{*}N}}{\partial q_0^2}`$. Due to the width of $`N^{*}(1520)`$, $`\mathrm{\Sigma }_{\rho N^{*}N}`$ has an imaginary part. We shall define

$$Z=\left(1-\frac{\partial }{\partial q_0^2}\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}\right)^{-1},$$ (12)

taking the real part of $`\mathrm{\Sigma }_{\rho N^{*}N}`$ so as to make the $`Z`$-factor real.[^3]

[^3]: There is a point which should be clarified. If we solve the $`\rho `$-meson dispersion relation (eq.(10)), we will possibly get one real and two complex-valued solutions for $`q_0`$. For the real solution, our definition of the $`Z`$-factor (eq.(12)) should be correct. But in the case of the complex solutions, we are not sure whether this definition still makes sense. Of course, we can use a sum rule for the $`Z`$-factor to estimate the $`Z`$-factor corresponding to the complex solutions. This point needs further study.
Defining $`x=\frac{q_0^2}{m_\rho ^2}`$, we get

$$\frac{\partial }{\partial q_0^2}\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}=\frac{\partial }{\partial x}\left(\frac{\mathrm{Re}\mathrm{\Sigma }_{\rho N^{*}N}}{m_\rho ^2}\right)=c\,\frac{(2x-c_1-c_2)\left[(x-c_1-c_2)^2+4c_2x\right]-x(x-c_1-c_2)\left[2(x-c_1-c_2)+4c_2\right]}{\left[(x-c_1-c_2)^2+4c_2x\right]^2}$$ (13)

where $`c=f_{\rho N^{*}N}^2\frac{4}{3}\frac{\rho _0}{m_\rho ^3}\frac{\mathrm{\Delta }E}{m_\rho }`$, $`c_1=\frac{(\mathrm{\Delta }E)^2}{m_\rho ^2}\approx 0.567`$ and $`c_2=\frac{1}{4}\frac{\mathrm{\Gamma }_{tot}^2}{m_\rho ^2}`$. We could obtain the zeros of the real part of the inverse $`\rho `$-propagator and the $`Z`$-factor graphically, from figures like Fig.3 of ref.; here we shall instead obtain them by directly solving eq.(10) and evaluating eq.(13).

• Case of $`\mathrm{\Gamma }_{tot}=0`$

For simplicity, let us set $`\mathrm{\Gamma }_{tot}=0`$. The relevant equations simplify to

$$q_0^2=m_\rho ^2+f_{\rho N^{*}N}^2\frac{4}{3}\frac{q_0^2}{m_\rho ^2}\rho _0\frac{\mathrm{\Delta }E}{q_0^2-(\mathrm{\Delta }E)^2},\qquad \frac{\partial \mathrm{\Sigma }_{\rho N^{*}N}}{\partial q_0^2}=\frac{\partial }{\partial x}\left(\frac{\mathrm{\Sigma }_{\rho N^{*}N}}{m_\rho ^2}\right)=-f_{\rho N^{*}N}^2\frac{4}{3}\frac{\rho _0}{m_\rho ^3}\frac{\mathrm{\Delta }E}{m_\rho }\frac{(\mathrm{\Delta }E)^2/m_\rho ^2}{\left(x-(\mathrm{\Delta }E)^2/m_\rho ^2\right)^2}.$$ (14)

Written in terms of the quantity $`x\equiv q_0^2/m_\rho ^2`$, the dispersion relation reads

$$x=1+0.208\frac{x}{x-0.567}$$ (15)

where we have used $`\frac{f_{\rho N^{*}N}^2}{4\pi }=5.5`$ from ref.  and $`\rho _0\approx \frac{1}{2}m_\pi ^3`$. The solutions are

$$q_0^{-}\approx 498\,\mathrm{MeV},\;q_0^{+}\approx 897\,\mathrm{MeV}.$$ (16)

The formula for the $`Z`$-factor,

$$Z=\left(1+0.118\frac{1}{(x-0.567)^2}\right)^{-1},$$ (17)

yields the corresponding $`Z`$-factors

$$Z(q_0^{-})\approx 0.16,\;Z(q_0^{+})\approx 0.84.$$ (18)

Naively extrapolated to a higher density, say $`\rho =2.5\rho _0`$, the results come out to be

$$q_0^{-}\approx 436\,\mathrm{MeV},\;q_0^{+}\approx 1023\,\mathrm{MeV}$$ (19)

and

$$Z(q_0^{-})\approx 0.17,\;Z(q_0^{+})\approx 0.83.$$ (20)

• Case of $`\mathrm{\Gamma }_{tot}=\mathrm{\Gamma }_0=120`$ MeV

Substituting $`\mathrm{\Gamma }_0=120`$ MeV and $`\mathrm{\Gamma }_{med}=0`$ into the dispersion relation at normal nuclear density $`\rho _0`$,

$$x=1+0.208x\frac{x-0.567-\frac{\mathrm{\Gamma }_{tot}^2}{4m_\rho ^2}}{\left(x-0.567-\frac{\mathrm{\Gamma }_{tot}^2}{4m_\rho ^2}\right)^2+\frac{\mathrm{\Gamma }_{tot}^2}{m_\rho ^2}x},$$ (21)

we obtain the solutions

$$x=\frac{q_0^2}{m_\rho ^2}=1.24,\;0.49+i\,0.267,\;0.49-i\,0.267.$$ (22)

Taking the real part of the solutions, we get[^4]

[^4]: As stated, we do not know how to interpret physically the imaginary parts of the solutions. Of course, the imaginary part tells us that the pole is located off the real axis.

$$q_0^{-}\approx \sqrt{0.49m_\rho ^2}=541\,\mathrm{MeV},\;q_0^{+}\approx 857\,\mathrm{MeV}.$$ (23)

The $`Z`$-factor for the $`q_0^{+}`$ state is calculated to be

$$Z(q_0^{+})\approx 0.86$$ (24)

with the remaining strength going to the lower state.
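The quoted numbers are easy to reproduce: clearing the denominator of eq.(15) gives a quadratic in $`x`$, whose roots and $`Z`$-factors from eq.(17) come out as in eqs.(16) and (18). A minimal numerical check (Python; the coefficients 0.208, 0.567 and 0.118 are taken from the text):

```python
import numpy as np

# Eq. (15): x = 1 + 0.208*x/(x - 0.567), with x = q0^2/m_rho^2, Gamma = 0.
# Clearing the denominator gives x^2 - (1 + 0.567 + 0.208)*x + 0.567 = 0.
m_rho = 770.0                                     # MeV
for x in sorted(np.roots([1.0, -(1.0 + 0.567 + 0.208), 0.567])):
    q0 = m_rho * np.sqrt(x)
    Z = 1.0 / (1.0 + 0.118 / (x - 0.567) ** 2)    # eq. (17)
    print(f"q0 = {q0:4.0f} MeV   Z = {Z:.2f}")
# -> q0 =  498 MeV, Z = 0.16   and   q0 =  897 MeV, Z = 0.84
```

Scaling the coefficient 0.208 linearly with density (0.52 at $`\rho =2.5\rho _0`$) reproduces eqs.(19) and (20) in the same way.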
For $`\rho =2.5\rho _0`$, we get

$$x=0.34,\;0.55,\;1.7$$ (25)

corresponding to

$$q_0=449\,\mathrm{MeV},\;571\,\mathrm{MeV},\;1004\,\mathrm{MeV}.$$ (26)

The corresponding $`Z`$-factors are[^5]

[^5]: We interpret these three states of $`\rho `$-meson quantum number as the “elementary” $`\rho `$, $`N^{*}(1520)N^{-1}\pi N`$ and $`N^{*}(1520)N^{-1}`$. See fig.2.

$$Z(449)=0.21,\;Z(571)=0.056,\;Z(1004)=0.83.$$ (27)

The $`\rho `$-meson dispersion relation with $`\mathrm{\Gamma }_0=120`$ MeV at $`\rho =\rho _0`$ and $`\rho =2.5\rho _0`$ is shown in fig.2.

### 2.2 The B/R approach

As stated in the Introduction, we propose that approaching B/R scaling from hadronic excitations is effected by replacing $`\frac{q_0^2}{m_\rho ^2}`$ by $`1`$ in the $`\mathrm{\Sigma }_{\rho N^{*}N}`$ that enters the dispersion relation . Let us see how this ansatz works out in reproducing the structure of B/R scaling as density increases. From (10), we get a minimally modified dispersion relation for the $`\rho `$-meson in medium

$$q_0^2=m_\rho ^2+f_{\rho N^{*}N}^2\frac{4}{3}\rho _0\frac{\mathrm{\Delta }E\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)}{\left(q_0^2-(\mathrm{\Delta }E)^2-\frac{1}{4}\mathrm{\Gamma }_{tot}^2\right)^2+\mathrm{\Gamma }_{tot}^2q_0^2}.$$ (28)

In this formula, we shall assume that in medium $`\mathrm{\Delta }E`$ remains unchanged (an assumption valid to leading order in $`1/N_c`$) while the width $`\mathrm{\Gamma }_{tot}`$ may be affected by density.

• Case of $`\mathrm{\Gamma }_{tot}=0`$

In contrast to the R/W approach, this is a situation which is actually realizable as density approaches the chiral transition density $`\rho _c`$, since the phase space for $`\rho `$ decay goes to zero at that density. Let us consider what happens at normal nuclear density ($`\rho _0`$) in the limit of zero width. The solutions are

$$q_0^{-}=406.7\,\mathrm{MeV},\;q_0^{+}=873.9\,\mathrm{MeV}$$ (29)

with the corresponding $`Z`$-factors

$$Z(q_0^{-})=0.285,\;Z(q_0^{+})=0.714.$$ (30)

For $`\rho =2.5\rho _0`$, we obtain

$$q_0^{-}=136\,\mathrm{MeV},\;q_0^{+}=956\,\mathrm{MeV}$$ (31)

and

$$Z(q_0^{-})=0.356,\;Z(q_0^{+})=0.646.$$ (32)

Since the width should vanish near the critical density $`\rho _c`$, the dispersion formula with zero width should approach the correct one near it. Figure 3 shows indeed that $`m_\rho ^{*}\rightarrow 0`$ as $`\rho \rightarrow 2.75\rho _0`$, as found in .

• Case of $`\mathrm{\Gamma }_{tot}=\mathrm{\Gamma }_0=120`$ MeV

The results at normal nuclear density are

$$q_0=423\,\mathrm{MeV},\;567\,\mathrm{MeV},\;878\,\mathrm{MeV};\qquad Z(423)=0.34,\;Z(567)=0.0799,\;Z(878)=0.73.$$ (33)

For $`\rho =2.5\rho _0`$, we get

$$q_0=146\,\mathrm{MeV},\;577\,\mathrm{MeV},\;951\,\mathrm{MeV};\qquad Z(146)=0.369,\;Z(577)=0.027,\;Z(951)=0.657.$$ (34)

We compare in fig.3 the in-medium $`\rho `$-meson mass and $`Z`$-factors in B/R and R/W.

### 2.3 The $`m_\rho ^{*}`$ as an order parameter

In , an argument was given that the in-medium mass of the $`\rho `$-meson can be taken, roughly, as an order parameter for the chiral phase transition. Figure 4 shows that our model described above predicts $`m_\rho ^{*}`$ dropping roughly linearly in density. This is consistent with the behavior of the quark condensate in medium,

$$\frac{\langle \overline{q}q\rangle ^{*}}{\langle \overline{q}q\rangle }\approx 1-\frac{\sigma _N\rho _N}{f_\pi ^2m_\pi ^2}$$ (35)

where the star denotes finite density (or temperature) and $`\rho _N`$ the nuclear (vector) density. Indeed, we would find from eq.(35) roughly the same $`\rho _c`$ as in fig.3 for $`m_\rho ^{*}\rightarrow 0`$ by setting $`\langle \overline{q}q\rangle ^{*}=0`$.
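This estimate is easy to check numerically. The sketch below (Python) evaluates eq.(35) with the standard values $`\sigma _N\approx 45`$ MeV, $`f_\pi =93`$ MeV, $`m_\pi =138`$ MeV and $`\rho _0\approx 0.17`$ fm⁻³, which are our assumed inputs rather than numbers quoted in the text:

```python
# Rough check of eq. (35): density at which <qbar q>* extrapolates to zero.
# sigma_N = 45 MeV and rho_0 = 0.17 fm^-3 are assumed standard values.
hbarc   = 197.327          # MeV fm
sigma_N = 45.0             # MeV, pion-nucleon sigma term (assumed)
f_pi    = 93.0             # MeV
m_pi    = 138.0            # MeV
rho_0   = 0.17 * hbarc**3  # nuclear matter density converted to MeV^3

slope = sigma_N * rho_0 / (f_pi**2 * m_pi**2)   # condensate drop per rho_0
print(f"condensate ratio at rho_0 : {1.0 - slope:.2f}")
print(f"<qbar q>* -> 0 at rho ~ {1.0/slope:.1f} rho_0")
# -> roughly 2.8 rho_0, close to the 2.75 rho_0 at which m_rho* vanishes
```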
Thus our model has the quark condensate, on average, dropping roughly linearly with density $`\rho `$.

## 3 The $`\omega `$-meson in nuclear matter

In this section, we apply the same two-level model to the $`\omega `$-meson channel. We shall consider both the R/W and B/R approaches.

### 3.1 The R/W approach

For this calculation, all we have to do is replace $`f_{\rho N^{*}N}`$ by $`f_{\omega N^{*}N}`$ and $`m_\rho `$ by $`m_\omega `$ in eq.(8). A priori, we do not know how to relate $`f_{\omega N^{*}N}`$ to $`f_{\rho N^{*}N}`$. Assuming a generalized VDM would give the relation $`f_{\omega NN}=3f_{\rho NN}`$, but there is no reason, theoretical or empirical, to believe that such a relation should be reliable. We shall instead resort to the empirical result of Friman et al . From their fig. 4, we find

$$f_{\omega N^{*}N}^2\approx 4.4f_{\rho N^{*}N}^2.$$ (36)

• Case of $`\mathrm{\Gamma }_{tot}=0`$

At normal nuclear matter density, the dispersion formula (corresponding to eq.(10) for the $`\rho `$ meson) is

$$x=1+0.863\frac{x}{x-0.55}$$ (37)

where $`x=q_0^2/m_\omega ^2`$. The solutions are

$$q_0^{-}\approx 395\,\mathrm{MeV},\;q_0^{+}\approx 1149\,\mathrm{MeV}$$ (38)

with the corresponding $`Z`$-factors

$$Z(q_0^{-})\approx 0.155,\;Z(q_0^{+})\approx 0.845.$$ (39)

The behavior of the $`\omega `$ mass is compared with that of the $`\rho `$ mass in fig.5. Note that the stronger coupling makes the $`\omega `$ mass fall faster than the $`\rho `$ mass.

• Case of $`\mathrm{\Gamma }_0=120`$ MeV

In this case, we find

$$q_0^1=403\,\mathrm{MeV},\;q_0^2=576\,\mathrm{MeV},\;q_0^3=1145\,\mathrm{MeV}$$ (40)

and

$$Z(q_0^1)=0.174,\;Z(q_0^2)=0.0293,\;Z(q_0^3)=0.856$$ (41)

at normal nuclear matter density. For comparison, we quote the values of Friman et al :

$$q_0^{-}\approx 328\,\mathrm{MeV},\;q_0^{+}\approx 1384\,\mathrm{MeV}$$ (42)

and

$$Z(q_0^{-})\approx 0.125.$$ (43)

### 3.2 The B/R approach

Even if we can extract the $`\omega N^{*}N`$ coupling constant from experiments at zero density, there is no reason to expect that constant to remain unchanged in medium. Indeed, we have reasons to believe that the ratio $`(f_{\omega N^{*}N}/f_{\rho N^{*}N})^2`$ will decrease as density increases. For this reason, we shall consider two cases: (1) a density-independent coupling constant; (2) a density-dependent coupling constant.

#### 3.2.1 With density-independent $`f_{\omega N^{*}N}`$

With B/R scaling, the factor 4.4 determined empirically in (36) for matter-free space turns out to give an unreasonably low critical density ($`\rho _c\approx 0.7\rho _0`$) at which the collective $`\omega `$ mass vanishes. While the $`\omega `$ mass is expected to drop faster than the $`\rho `$ mass, as explained below, it does not seem reasonable that the $`\omega `$ mass should vanish much before the $`\rho `$ mass does. To see what happens if one takes a constant coupling somewhat larger than the $`\rho N^{*}N`$ coupling, we take for illustration

$$f_{\omega N^{*}N}^2\approx 1.6f_{\rho N^{*}N}^2.$$ (44)

We have no particular reason to take this number, but it gives a qualitative idea as to how things go. The results are summarized in fig.6 for the case with $`\mathrm{\Gamma }_{tot}=0`$. Since $`|f_{\omega N^{*}N}|>|f_{\rho N^{*}N}|`$, the $`\omega `$ mass drops to zero faster than the $`\rho `$ mass: $`m_\omega ^{*}\rightarrow 0`$ for $`\rho \rightarrow \rho _c\approx 1.7\rho _0`$. We shall suggest below that this feature of the $`\omega `$ properties may be interpreted in terms of an “induced symmetry breaking” (ISB) in the vector channel.
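For orientation, the following sketch (Python) traces the lower branch of the zero-width B/R dispersion relation as a function of density: for the $`\rho `$, the coefficient 0.208 of eq.(15) is scaled linearly in density, and for the $`\omega `$ it is multiplied by the constant factor 1.6 of eq.(44). The value $`m_\omega =782`$ MeV and the strictly linear density scaling of the self-energy are our assumptions in reading the text; also, only near $`\rho _c`$ is the lower branch identified with the in-medium mass (cf. Sec. 2.2):

```python
import numpy as np

# Lower branch of the B/R-type dispersion (eq. (28) with Gamma_tot = 0),
# written in x = q0^2/m^2:   x = 1 + a*c/(x - c1),   c = rho/rho_0,
# i.e. the quadratic  x^2 - (1 + c1)*x + (c1 - a*c) = 0.
def lower_branch(c, a, c1, m):
    disc = (1.0 + c1) ** 2 - 4.0 * (c1 - a * c)
    x_minus = 0.5 * ((1.0 + c1) - np.sqrt(disc))
    return m * np.sqrt(x_minus) if x_minus > 0.0 else 0.0

for c in (1.0, 2.0, 2.5, 2.75):
    m_r = lower_branch(c, 0.208,       0.567, 770.0)   # rho, cf. eq. (15)
    m_w = lower_branch(c, 0.208 * 1.6, 0.55,  782.0)   # omega, cf. eq. (44)
    print(f"rho = {c:4.2f} rho_0 :  m_rho* ~ {m_r:4.0f} MeV,"
          f"  m_omega* ~ {m_w:4.0f} MeV")
# the rho branch vanishes near 2.7 rho_0, the omega branch near 1.7 rho_0
```

This reproduces the quoted roots (eqs. (29) and (31)) to within a few MeV, given the rounding of the constants, and the critical densities of figs. 3 and 6.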
#### 3.2.2 Density-dependent $`f_{\omega N^{*}N}`$

The fact that empirically $`f_{\omega N^{*}N}^2/f_{\rho N^{*}N}^2\approx 4`$ at zero density indicates that the vector dominance model (VDM) – which would give 9 – fails. This is not surprising: there is no reason to expect the VDM to work in the baryon sector, particularly where baryon resonances are involved. On the other hand, as density approaches the chiral phase transition point, we would expect the system to become a Fermi liquid of quasiquarks to which the vector degrees of freedom corresponding to the $`\rho `$ and the $`\omega `$ couple in a $`U(2)`$-symmetric way. This would mean that the ratio $`R\equiv f_{\omega N^{*}N}^2/f_{\rho N^{*}N}^2`$ goes to 1 as $`\rho \rightarrow \rho _c`$. Here we shall implement this possibility in the dispersion formula for in-medium $`\omega `$’s, assuming that the constant $`f_{\rho N^{*}N}`$ depends little on density. It turns out that the simplest possible linear interpolation between $`\rho =0`$ and $`\rho =\rho _c`$ gives too rapid a decrease of the $`\omega `$ mass, which then vanishes at a much lower density than that of the $`\rho `$. Different parameterizations have been tried, and we report two of them which appear to be reasonable. One is the following log-type parametrization

$$f_{\omega N^{*}N}^2=f_{\rho N^{*}N}^2\frac{4.4}{1+3.4\mathrm{log}(1+1.72(c/2.8))}$$ (45)

where $`c`$ is defined by $`\rho =c\rho _0`$. In this parametrization, $`\rho _c`$ is chosen as $`\rho _c=2.8\rho _0`$. The results are given in fig.7. As an alternative parametrization, we take

$$f_{\omega N^{*}N}^2=f_{\rho N^{*}N}^2\left(4.4-3.4(c/2.8)^{\frac{1}{3}}\right).$$ (46)

Figures 7 and 8 show that the two parameterizations (45) and (46) give qualitatively the same results.[^6] Because of the initially stronger coupling constant, the $`\omega `$ mass falls faster than the $`\rho `$ mass at the beginning, flattens in the middle and then approaches zero at the critical density together with the $`\rho `$.

[^6]: The density at which the $`\omega `$ mass goes to zero differs slightly, but we do not believe that any importance can be attached to this difference.

### 3.3 The dropping $`\omega `$ mass and a high-density “ISB phase”

The stronger $`\omega N^{*}N`$ coupling relative to the $`\rho N^{*}N`$ coupling leads naturally, in the present model, to the prediction that in medium the $`\omega `$ mass falls faster than the $`\rho `$ mass as density increases. This is expected in both the R/W and B/R approaches. In B/R, however, this leads to the additional prediction that the $`\omega `$ mass would go to zero (in the chiral limit) either before or at the $`\rho `$-meson’s $`\rho _c`$, depending on whether the ratio $`R`$ remains constant (independent of density) or goes to 1 at $`\rho =\rho _c`$. There is no known theoretical reason to favor one scenario over the other. However, the fact that the $`\omega `$ mass falls faster than the $`\rho `$ mass – which is essentially dictated in the present formalism by the fact that $`R>1`$ – is consistent with the phase structure of dense matter previously arrived at in quark language by Langfeld et al . Briefly, the scenario given by  is as follows. If quark-quark interactions have a strength in the vector channel comparable to what is found in one-gluon exchange, then an induced Lorentz symmetry breaking could take place at a critical chemical potential $`\mu _c`$ at which chiral symmetry would be restored, i.e., $`\langle \overline{q}q\rangle =0`$, and the baryon density $`\langle \overline{B}\gamma _0B\rangle `$ would have a discontinuous increase as $`\mu `$ exceeds $`\mu _c`$, indicating a first-order transition.
The consequence is that a low-energy collective state carrying the quantum numbers of an $`\omega `$ meson should emerge at $`\mu _c`$ as a pseudo-Goldstone vector boson. Our proposal is that the collective $`N^{*}`$-hole excitation of the $`\omega `$ quantum number built in our schematic degenerate model be identified with the low-mass ISB state described in quark language in refs.. In this “dual” description, as the inverse $`\omega `$ propagator vanishes at the point where the mass of the lower $`\omega `$ branch goes to zero, the $`\omega `$ field develops an “induced VEV” $`\langle \delta \omega _0\rangle _{\rho _c}`$, so that there would be a discontinuity at $`\rho _c`$ in $`\langle \omega _0\rangle `$. Translated into the baryon (or quark) density, this means that there will be a jump in density at the critical chemical potential $`\mu _c`$ if one looks at the density vs. $`\mu `$. One could think of this as a chiral symmetry restoration in dense matter in a way analogous to what was obtained in ERGF (exact renormalization group flow) by Berges et al. . A more appealing way of viewing the present scenario is that it provides a hadronic counterpart of the quark-model scenario of Langfeld et al.. It is amusing to note that the ISB phenomenon in the quark sector is encoded in the empirical fact in the hadronic sector that $`|f_{\omega N^{*}N}|>|f_{\rho N^{*}N}|`$ for $`\rho <\rho _c`$.

## 4 Conclusion

We have constructed a schematic model of the Rapp/Wambach theory, emphasizing the role of the $`[N^{*}(1520)N^{-1}]`$ isobar-hole state. This model turns out to be essentially the same as the degenerate schematic model of Brown , except for the $`q_0`$-dependence of the coupling of the $`\rho `$ meson to $`N^{*}N`$. In fact, when this $`q_0`$-dependence is cancelled by introducing $`m_\rho ^{*}`$ as the mass scaling factor of the Lagrangian, the model becomes precisely that of Brown . We show that this latter model gives the same results as Brown/Rho scaling in the limit $`\rho \rightarrow \rho _c`$, where $`\rho _c`$ is the chiral restoration density, and we propose to use it as an interpolation formula between R/W and B/R scaling. This then provides a possible mechanism to arrive at B/R scaling from the hadronic side, that is, in the bottom-up way.

New results from the TPC now in service with the CERES collaboration should pin down the strength at the $`\rho `$-meson poles, or at least an average of this strength over the various densities encountered in this experiment. These will confirm or refute our scenario: since the expected strength entering into the dilepton rate is given by $`A(\omega )=\frac{Z(\omega )}{2\omega }`$, the lower-energy component(s) will be progressively more enhanced at higher densities. Furthermore, given a temperature of $`T\approx 150`$ MeV, there will be larger Boltzmann factors at lower energies, so the net result will be that more leptons will come out of the lower-energy state(s).

We have also suggested that the $`\omega `$ mass should fall faster than the $`\rho `$ mass until one approaches the chiral phase transition, and that the collective $`N^{*}`$-hole excitation of the $`\omega `$ quantum number in the schematic degenerate model is the hadronic (“Cheshire-cat”) description of the pseudo-Goldstone vector boson generated by an induced breaking of Lorentz symmetry in dense medium.

### Acknowledgments

We are grateful for helpful discussions with Kurt Langfeld and Hyun Kyu Lee. One of us (YK) is indebted to Hyun Kyu Lee for support and encouragement. The work of YK was partially supported by KOSEF (Grant No.
985-0200-001-2) and the U.S. Department of Energy under Grant No. DE–FG02–88ER40388 and that of RR and GEB by the U.S. Department of Energy under Grant No. DE–FG02–88ER40388.
no-problem/9902/nucl-th9902057.html
# On the Backbending Mechanism of ⁴⁸Cr

## Abstract

The mechanism of backbending in ⁴⁸Cr is investigated in terms of the Projected Shell Model and the Generator Coordinate Method. It is shown that both methods are reasonable shell model truncation schemes. These two quite different quantum mechanical approaches lead to a similar conclusion: the backbending is due to a band crossing involving an excited band which is built on simultaneously broken neutron and proton pairs in the “intruder” subshell $`f_{7/2}`$. It is pointed out that this type of band crossing is usually known to cause the second backbending in rare-earth nuclei.

Investigation of the yrast band of the ⁴⁸Cr nucleus has recently become a particularly interesting subject in nuclear structure studies because of the full $`pf`$-shell model calculation on the one hand and the spectroscopic measurements on the other. It is a light nucleus for which an exact shell model diagonalization is feasible, yet it exhibits remarkable high-spin phenomena usually observed in heavy nuclei: large deformation, a typical rotational spectrum, and backbending, in which the regular rotational spectrum is disturbed by a sudden irregularity at a certain spin. This nucleus is therefore an excellent example for theoretical studies, providing a unique testing ground for various approaches.

The intrinsic- and laboratory-frame descriptions of this nucleus were presented in a paper by the Strasbourg-Madrid collaboration . These approaches are complementary views of the same problem from two extremes. On the one hand, the cranked Hartree-Fock-Bogoliubov (CHFB) description interprets the problem in terms of the intrinsic frame on which the lowest rotational band (yrast band) is built. It can provide nice physical insight but does not treat the angular momentum as a good quantum number. On the other hand, the $`pf`$-shell model (pf-SM) approach solves the problem fully quantum mechanically and provides the exact solution of the Hamiltonian within the $`pf`$-shell. However, in the pf-SM a single shell model configuration does not correspond to any excitation mode of the deformed nucleus, and therefore millions of many-body basis states are necessary even to represent the lowest eigenstate of the Hamiltonian. Consequently, the physical insight is lost and interpretation of the result becomes very difficult.

The purpose of the present work is to clarify the physics associated with the yrast spectrum of the nucleus ⁴⁸Cr from two different quantum mechanical viewpoints. To extract the physics, it is desirable to use a shell model basis which has a good classification scheme, in the sense that a simple configuration corresponds (approximately) to a low excitation mode of the nucleus. This suggests using a deformed basis corresponding to the optimal set of basis states. In fact, a basis truncation can be most easily done by selecting low-lying states if a proper deformed basis is used. To carry out a shell-model-type calculation with such a basis, the broken rotational symmetry (and particle number conservation, if necessary) has to be restored. This can be done by using the projection method to form a many-body basis in the laboratory frame. After this procedure, one diagonalizes the Hamiltonian. Such an approach lies conceptually between the two extreme methods mentioned above and takes advantage of both. This is exactly the philosophy on which the Projected Shell Model (PSM) is based.
The PSM uses the Nilsson+BCS representation as the deformed quasiparticle (qp) basis. Before performing a calculation for ⁴⁸Cr, one has to find out where the optimal basis is. The experimental lifetime measurement suggests an axially symmetric deformation of $`\beta \approx 0.28`$ near the ground state of ⁴⁸Cr, which roughly corresponds to $`\epsilon _2=0.25`$. We simply take this information and build our shell model basis at this deformation. The set of multi-qp states relevant for our shell model configuration space is

$$|\mathrm{\Phi }_\kappa \rangle =\{|0\rangle ,\;a_{\nu _1}^{\dagger }a_{\nu _2}^{\dagger }|0\rangle ,\;a_{\pi _1}^{\dagger }a_{\pi _2}^{\dagger }|0\rangle ,\;a_{\nu _1}^{\dagger }a_{\nu _2}^{\dagger }a_{\pi _1}^{\dagger }a_{\pi _2}^{\dagger }|0\rangle \},$$ (1)

where the $`a^{\dagger }`$’s are the qp creation operators, the $`\nu `$’s ($`\pi `$’s) denote the neutron (proton) Nilsson quantum numbers, which run over properly selected (low-lying) orbitals, and $`|0\rangle `$ is the Nilsson+BCS vacuum or 0-qp state. As in the usual PSM calculations, we will use the Hamiltonian

$$\widehat{H}=\widehat{H}_0-\frac{1}{2}\chi \sum _\mu \widehat{Q}_\mu ^{\dagger }\widehat{Q}_\mu -G_M\widehat{P}^{\dagger }\widehat{P}-G_Q\sum _\mu \widehat{P}_\mu ^{\dagger }\widehat{P}_\mu ,$$ (2)

where $`\widehat{H}_0`$ is the spherical single-particle Hamiltonian, which in particular contains a proper spin-orbit force whose strengths (i.e. the Nilsson parameters $`\kappa `$ and $`\mu `$) are taken from Ref. . The second term in the Hamiltonian is the Q-Q interaction, and the last two terms are the monopole and quadrupole pairing interactions, respectively. It was shown that these interactions simulate the essence of the most important correlations in nuclei, so that even a realistic force has to contain at least these components implicitly in order to work successfully in structure calculations. The interaction strengths are determined as follows. The Q-Q interaction strength $`\chi `$ is adjusted by the self-consistent relation such that the input quadrupole deformation $`\epsilon _2`$ and the one resulting from the HFB procedure coincide with each other . The monopole pairing strength $`G_M`$ is taken to be $`G_M=\left[22.5-18.0(N-Z)/A\right]/A`$ for neutrons and $`G_M=22.5/A`$ for protons, which was first introduced in Ref. . This choice of $`G_M`$ seems to be appropriate for the single-particle space employed in the present calculation, in which three major shells ($`N=1,2,3`$) are used for both neutrons and protons. Finally, the quadrupole pairing strength $`G_Q`$ is assumed to be proportional to $`G_M`$, the proportionality constant, usually taken in the range 0.16 – 0.20, being fixed to 0.20 in the present work.

The eigenvalue equation of the PSM for a given spin $`I`$ takes the form

$$\sum _{\kappa ^{\prime }}\left\{H_{\kappa \kappa ^{\prime }}^I-E^IN_{\kappa \kappa ^{\prime }}^I\right\}F_{\kappa ^{\prime }}^I=0,$$ (3)

where the Hamiltonian and norm matrix elements are respectively defined by

$$H_{\kappa \kappa ^{\prime }}^I=\langle \mathrm{\Phi }_\kappa |\widehat{H}\widehat{P}_{KK^{\prime }}^I|\mathrm{\Phi }_{\kappa ^{\prime }}\rangle ,\qquad N_{\kappa \kappa ^{\prime }}^I=\langle \mathrm{\Phi }_\kappa |\widehat{P}_{KK^{\prime }}^I|\mathrm{\Phi }_{\kappa ^{\prime }}\rangle ,$$ (4)

and $`\widehat{P}_{MK}^I`$ is the angular momentum projection operator. The expectation value of the Hamiltonian with respect to a “rotational band” $`\kappa `$, $`H_{\kappa \kappa }^I/N_{\kappa \kappa }^I`$, is called a band energy. When the band energies are plotted as functions of spin $`I`$, we call the plot a band diagram .
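Numerically, eq.(3) is a generalized eigenvalue problem with a non-trivial norm matrix, since the projected basis states are not orthogonal. A minimal sketch (Python with SciPy; the 3×3 matrices are invented numbers for illustration, not actual PSM matrix elements):

```python
import numpy as np
from scipy.linalg import eigh

# Toy version of the PSM eigenvalue problem (3): H F = E N F, with a
# non-orthogonal projected basis, so the norm matrix N is not the identity.
H = np.array([[0.0, 1.2, 0.4],
              [1.2, 3.0, 0.9],
              [0.4, 0.9, 5.0]])   # MeV, stand-in for <Phi_k|H P|Phi_k'>
N = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])   # stand-in for <Phi_k|P|Phi_k'>

E, F = eigh(H, N)                 # solves H F = E N F
print("band-mixed energies (MeV):", np.round(E, 3))
```

In practice, basis states with (nearly) vanishing norm are discarded before the diagonalization, as noted for the GCM below.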
The band diagram will provide a useful tool for interpreting the results. We have carried out not only the PSM calculation but also a calculation based on the Generator Coordinate Method (GCM), with the same Hamiltonian as used in the pf-SM . This is because the PSM Hamiltonian is quite schematic as it stands, so we felt it necessary to confirm the result by another theory using the same Hamiltonian as the pf-SM. The relation between the PSM and GCM will be discussed later. We will first explain briefly how the GCM calculation is performed. For given quadrupole moment $`q_\mu `$ and spin $`I`$, we look for the minimum of

$$\widehat{H}^{\prime }\equiv \widehat{H}+c_1\sum _{\mu =0,\pm 2}(\widehat{Q}_\mu -q_\mu )^2+c_2[\widehat{J}_x-\sqrt{I(I+1)}]^2,$$ (5)

where $`c_1`$ and $`c_2`$ are predefined positive constants. This procedure generates a constrained Hartree-Fock (CHF) state $`|q,\gamma ,I\rangle `$, where

$$q_0=\sqrt{\frac{5}{4\pi }}q\mathrm{cos}\gamma ,\qquad q_{\pm 2}=\sqrt{\frac{5}{8\pi }}q\mathrm{sin}\gamma .$$ (6)

It is useful to plot the energy surface $`\langle q,\gamma ,I|\widehat{H}|q,\gamma ,I\rangle `$ in the $`q`$-$`\gamma `$ plane for each $`I`$, which usually shows several local minima. It will help us interpret the results, as we will soon see. We then project the Slater determinants corresponding to various $`q`$ and $`\gamma `$ onto good angular momentum $`I`$ and diagonalize the Hamiltonian with them, the eigenvalue equation being again of the form Eq.(3) with $`|\mathrm{\Phi }_\kappa \rangle =\{|q,\gamma ,I\rangle \}`$. The total number of mesh points in the $`q`$-$`\gamma `$ parameter space is 66 in the present calculation, so that the size of the eigenvalue equation for a given spin $`I`$ is at most $`66(2I+1)`$ (some of the states are discarded due to vanishing norm). This is another way of truncating the shell model basis .

In Fig. 1, the results of the PSM and GCM for the $`\gamma `$-ray energy $`E_\gamma =E(I)-E(I-2)`$ along the yrast band, together with that of the pf-SM reported in Ref. , are compared with the newest experimental data . One sees that the four curves are bunched together over the entire spin region, indicating excellent agreement of the three theories with each other and with the data. The sudden drop in $`E_\gamma `$ occurring around spins 10 and 12 corresponds to the backbending in the yrast band of ⁴⁸Cr. In Fig. 2, the three theoretical results for B(E2) are compared with the data . All three theories use the same effective charges (0.5e for neutrons and 1.5e for protons). Again, one sees that the theories agree not only with each other but also with the data quite well. The B(E2) values decrease monotonically after spin 6 (where the first band crossing takes place in the PSM; see Fig. 3 and the discussion below). This implies a monotonic decrease of the intrinsic Q-moment as a function of spin, finally reaching the spherical regime at higher spins. This feature was explicitly discussed in Ref.  within the CHFB framework.

Figs. 1 and 2 indicate that both the PSM and GCM are reasonable shell model truncation schemes, as they reproduce the result of the pf-SM very well. Let us study their band diagrams to understand why and how the backbending occurs. As mentioned before, a band diagram displays the band energies of various configurations before they are mixed by the diagonalization procedure, Eq.(3). It can provide a transparent picture of band crossings. Irregularity in a spectrum may appear if a band is crossed by another one at a certain spin.
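As a purely illustrative aside, such an irregularity is read off a level scheme as a decrease of $`E_\gamma (I)=E(I)-E(I-2)`$ with increasing $`I`$. The sketch below (Python, with invented level energies rather than the ⁴⁸Cr values) flags such drops:

```python
# Backbending shows up as a drop in E_gamma(I) = E(I) - E(I-2) along the
# yrast band.  The level energies (MeV) below are invented for illustration.
yrast = {0: 0.0, 2: 0.8, 4: 2.0, 6: 3.6, 8: 5.6, 10: 7.2, 12: 8.6, 14: 11.0}

prev = None
for I in sorted(yrast)[1:]:
    e_gamma = yrast[I] - yrast[I - 2]
    marker = "  <- backbend" if prev is not None and e_gamma < prev else ""
    print(f"I = {I:2d}:  E_gamma = {e_gamma:4.2f} MeV{marker}")
    prev = e_gamma
```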
We remark in this connection that a small crossing angle implies a smooth change in the yrast band, while a large crossing angle implies a sudden change, leading to a backbending . In Fig. 3, the band diagram of the PSM is shown. Different configurations are distinguished by different types of lines, and the filled circles represent the yrast states obtained after the configuration mixing. Among several 2-qp bands which start at energies of 2 – 3 MeV, two (one solid and one dashed line) cross the ground band at spin 6. They are the neutron 2-qp and proton 2-qp bands consisting of two $`f_{7/2}`$ quasiparticles with $`\mathrm{\Omega }=3/2`$ and $`5/2`$ coupled to total $`K=5/2-3/2=1`$, forming the so-called $`s`$-band. The crossing angle is relatively small, so the yrast band smoothly changes its structure from the 0-qp to the 2-qp states around spin 6. Therefore, no clear effect of this (first) band crossing is seen in the yrast band (cf. Fig. 1). These $`\mathrm{\Omega }=3/2`$ and $`5/2`$ Nilsson states are nearly spherical, with the $`j=7/2`$ component dominating (95% and 98%, respectively). This is nothing other than the property which characterizes the intruder states.

The above two ($`K=1`$) 2-qp bands can combine into a ($`K=2`$) 4-qp band which represents simultaneously broken neutron and proton pairs. In Fig. 3, this 4-qp band (one of the dash-dotted lines, which becomes the lowest band for $`I\ge 10`$) shows a unique behavior as a function of spin. As spin increases, it goes down first but turns up at spin 6. This behavior has its origin in the spin alignment of a decoupled band, as discussed at length in Ref. . Because of this, it can sharply cross the 2-qp bands between spins 8 and 10 and becomes the lowest band thereafter, so that the yrast band gets its main component from this 4-qp band. This is seen in the band diagram as a (second) band crossing. Thus, we can interpret the backbending in ⁴⁸Cr as a consequence of the simultaneous breaking of the $`f_{7/2}`$ neutron and proton pairs.

A similar band crossing picture emerges from the band diagram of the GCM (here we use the word “band” symbolically). In Fig. 4, the two most prominent bands in the GCM are shown together with the yrast band obtained after the diagonalization. The one labelled the deformed band is associated with a prolate minimum, while the one labelled the spherical band is associated with zero deformation, which becomes a minimum only at higher spins. Fig. 4 shows the competition between these two bands, which are built by diagonalizing the Hamiltonian within small regions of the $`q`$-$`\gamma `$ plane around the respective local minima in the energy surface. Therefore, in the GCM, the backbending in ⁴⁸Cr can be interpreted as due to the crossing between the deformed and spherical bands. The latter dominates beyond spin 10, and this explains the sudden decrease of the Q-moment in the pf-SM calculation. While the deformed band in the GCM corresponds obviously to an admixture of the 0-qp and a 2-qp band in the PSM, we note that the spherical band in Fig. 4 behaves similarly to the 4-qp band in Fig. 3. In fact, the 4-qp band of the PSM can be considered a spherical band because the main part of its wavefunction is a product of the $`j=7/2`$ components, as mentioned before, while the spherical band of the GCM can be thought of as the 4-qp band because it consists mainly of $`f_{7/2}`$ neutrons and protons, as confirmed by evaluating the occupation number of each shell.
We thus have the same physics in two different languages and may conclude that the backbending of ⁴⁸Cr is due to the band crossing caused by a simultaneous neutron and proton pair breaking in the $`f_{7/2}`$ shell. We remark that this statement differs from that of a recent paper based on the CHFB, which claims that the backbending in ⁴⁸Cr is not due to a level crossing.

To conclude, both the PSM and GCM are reasonable shell model truncation schemes, and they suggest consistently that the backbending of ⁴⁸Cr is due to a band crossing. The PSM carries out the configuration mixing in terms of qp excitations, while the GCM does so in terms of Slater determinants that belong to different nuclear shapes. We have shown in the present context that they describe the same physics. However, in the PSM terminology, the backbending in ⁴⁸Cr is not due to the crossing between the 0-qp and 2-qp bands (the so-called $`g`$-$`s`$ band crossing) but rather to the one between a 2-qp and a 4-qp band. This is remarkable because such a band crossing is known to lead to the second backbending in rare-earth nuclei . The $`g`$-$`s`$ band crossing, which usually leads to the first backbending, shows no prominent effect in ⁴⁸Cr except for the decrease in B(E2) values. This is because the spin alignment of a 2-qp band cannot become large enough in light nuclei, as the maximal value is limited to $`J=6`$ in the intruder subshell $`f_{7/2}`$, which is only half of what is possible ($`J=12`$) in the intruder subshell $`i_{13/2}`$ of rare-earth nuclei.

One basic question still remains. The PSM uses a schematic Hamiltonian, while the GCM uses the same realistic Hamiltonian as the pf-SM. How can the PSM deliver a result similar to the latter two? The answer to this question was actually given in Appendix B of Ref. . It was proved that the band energies of the ground and intruder bands do not depend on the details of the Hamiltonian, and that any (rotation-invariant) Hamiltonian which gives similar values for (1) the fluctuation of the angular momentum, (2) the Peierls-Yoccoz moment of inertia and (3) the qp excitation energies will lead to essentially the same result. The first two conditions are ground state properties that determine the scaling of a band diagram, while the last one fixes the relative positions of the various bands, reflecting the shell filling of the nucleus in question. The detailed spin dependence, such as the signature rules (for example, even/odd spins are favoured/unfavoured in an even-even system), originates solely from the kinematics of angular momenta. Note that the yrast band is essentially the envelope of the band energies, cf. Fig. 3. This explains why the PSM works so nicely even with a simple schematic Hamiltonian.

Finally, we briefly comment on the GCM. Its wavefunction is a superposition of various projected CHF states but, for heavier systems, constrained HFB states should be used to take the strong pairing correlations into account more efficiently. Intuitively speaking, the CHF states corresponding to various nuclear shapes represent different moments of inertia and compete with one another, which is probably a new way of interpreting the backbending. This formalism has the obvious advantage that the normally deformed and superdeformed bands in one nucleus may be described simultaneously on the same footing.
The present work is supported in part by Grant-in-Aid for Scientific Research (A)(2)(10304019) from the Ministry of Education, Science and Culture of Japan. K.H. and Y.S. acknowledge A.P. Zuker for a conversation during the Drexel Shell Model Workshop in Philadelphia, 1996, which stimulated them to carry out a PSM analysis of the nucleus ⁴⁸Cr, on which a part of the present work is based.
no-problem/9902/cond-mat9902276.html
# Combined study of KNbO₃ and KTaO₃ by different techniques of photoelectron and X-ray emission spectroscopy

## Introduction

Potassium niobate and tantalate are traditional benchmark systems for testing new theoretical developments in the study of ferroelectric materials. The starting point of any first-principles treatment aimed at lattice dynamics or dielectric response is the knowledge of the ground-state electronic structure. That of KNbO₃ and KTaO₃ is believed to be quite well known as a result of a long development, beginning with the empirical parameter-adjusting schemes at the end of the 1970s , followed by first-principles self-consistent treatments , and finally refined in precise all-electron total energy calculations by different methods . In the evaluation of one-electron Kohn-Sham energies, excellent agreement exists nowadays between different technical implementations of density functional theory (DFT) – see, for example, Ref. . However, the Hartree-Fock formalism provides a fundamentally different description of the one-particle excitation spectrum, and hence a somewhat different dielectric constant, than the DFT treatment . As long as ab initio calculations of quasiparticle excitation spectra for such moderately complex systems as cubic perovskites are not yet routinely feasible, electron and X-ray spectroscopy remain important tools for the experimental evaluation of the electronic structure.

X-ray photoelectron spectroscopy (XPS) has been applied in several cases to probe the density of states (DOS) distribution in the valence band (VB) of KNbO₃ and KTaO₃ . Moreover, angle-resolved ultraviolet photoelectron spectra have been measured for KTaO₃ and explained in terms of a full-potential relativistic theory of photoemission . While often quite successful in the study of metals, electron spectroscopy techniques face severe problems in dielectrics due to sample-charging effects. This problem does not arise in X-ray emission spectroscopy (XES). The larger exit depth of soft X-ray emission, as compared to that of photoelectrons, may be advantageous in many cases because the quality of the sample surface preparation is not so crucial. Moreover, due to dipole selection rules the XES is element-sensitive, which makes it possible to probe the partial state distributions related to different atoms. It has been shown that not only the DOS but also momentum-resolved information regarding the VB dispersion can be extracted from soft X-ray resonant inelastic spectra (XRIS) . Therefore, XES complements electron spectroscopy in the study of the electronic structure of insulating materials.

In the present work, we concentrate on the partial DOS probed by several X-ray emission spectra in KNbO₃ and KTaO₃, analyzed in comparison with the VB XPS. The interpretation of the spectra (including the dipole transition matrix elements) was done on the basis of calculations by the all-electron full-potential linearized augmented plane-wave (FLAPW) method.

## Experiment and calculation details

The soft X-ray fluorescence experiments were performed at Beamline 8.0 of the Advanced Light Source at Lawrence Berkeley Laboratory. The undulator beamline is equipped with a spherical grating monochromator, and the resolving power was set to $`E/\mathrm{\Delta }E`$=500. The fluorescence end station with a Rowland circle grating spectrometer provided a resolving power of about 300 at 200 eV.
The XPS measurements were done using a Perkin Elmer PHI 5600ci Multitechnique System with monochromatized Al $`K\alpha `$ radiation (bandwidth 0.3 eV FWHM). In order to obtain clean surfaces for the XPS measurements, the KNbO₃ single crystal was cleaved under UHV conditions. The X-ray Nb $`L\beta _{2,15}`$ and Nb $`L\gamma _1`$ fluorescence spectra were obtained with a Stearat spectrometer . A quartz crystal ($`d`$=0.334 nm) curved to $`R`$=500 cm was used as the dispersive element to analyze the photons. X-ray emission spectra were detected by a flow proportional counter by scanning along the Rowland circle, with an energy resolution of $`\pm `$0.2 eV.

The FLAPW calculations were done with the WIEN97 implementation of the code . With its extended basis of augmented plane waves, the FLAPW method allows a practical description of the electronic states up to relatively high energies in the conduction band, which is important for analyzing the trends in the X-ray absorption and the resonant X-ray emission intensities . The experimental lattice constant a=7.553 a.u. was used, and the atomic sphere radii were set to 1.95 a.u. (K), 1.85 a.u. (Nb) and 1.65 a.u. (O). The lattice was assumed to be an ideal cubic perovskite, because the effect of the ferroelectric distortion on the DOS is known to be negligible at the level of comparison with experimentally broadened spectra. The local density approximation for exchange and correlation was used, according to the prescription of Perdew and Wang . The states in the VB were treated semi-relativistically, the core states fully relativistically. The DOS and emission spectra were calculated using the tetrahedron method, for $`12\times 12\times 12`$ divisions of the whole Brillouin zone. Whereas our calculated partial and total DOS agree with the previous calculations cited above, the calculated X-ray emission (and absorption) spectra, including the energy dependence of the dipole transition matrix elements, present, to our knowledge, new information. We utilize our calculated results to understand the trends in our resonant X-ray emission spectra.

## Results and discussion

In Fig. 1 the Nb $`M_{4,5}`$ emission spectra of KNbO₃ are shown for various excitation energies near the Nb $`3d`$ threshold. Four features are observed, labeled $`A`$ through $`D`$. The most dramatic changes in the fine structure of the Nb $`M_{4,5}`$ emission spectra are found for excitation energies between 206.5 and 212.3 eV, where the new features $`B`$ and $`D`$ appear with changes in excitation energy. It was suggested in Ref.  that the Nb $`M_{4,5}`$ emission only reveals the $`M_5`$ ($`3d_{5/2}`$) features because the $`M_4`$ ($`3d_{3/2}`$) hole is filled by radiationless transitions. It was therefore unexpected when we found the excitation energy dependence of the Nb $`M_{4,5}`$ emission to be distorted by the spin-orbit splitting of the Nb $`3d`$ levels, and hence the band dispersion effects to be blurred. Fig. 2 displays the calculated Nb $`M_{4,5}`$ emission spectra, based on the Nb $`5p`$ and Nb $`4f`$ DOS and modulated by the dipole transition probabilities. The resulting spectrum consists of two (identical) contributions, corresponding to the individual $`M_4`$ and $`M_5`$ spectra, that have been shifted apart by the value of the calculated Nb $`3d`$ spin-orbit splitting (2.88 eV) and summed up with relative weights 2:3.
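The construction just described amounts to a shift-and-add of one computed partial spectrum. A minimal sketch (Python; the Lorentzian is only a stand-in for the actual matrix-element-weighted Nb 5p emission profile, and the energy grid is arbitrary):

```python
import numpy as np

# Compose an M_4,5 spectrum from one partial profile: duplicate it, shift
# by the calculated 3d spin-orbit splitting (2.88 eV) and sum with
# statistical weights 2:3 (M_4 : M_5).
E = np.linspace(180.0, 215.0, 700)        # photon energy grid, eV (arbitrary)

def partial(E, center=200.0, width=1.5):  # placeholder single-hole profile
    return width / ((E - center) ** 2 + width ** 2)

split = 2.88                              # eV, Nb 3d_{3/2}-3d_{5/2} splitting
m5 = 3.0 * partial(E)                     # M_5 (3d_{5/2}) component, weight 3
m4 = 2.0 * partial(E - split)             # M_4 replica, shifted up by 2.88 eV
total = (m4 + m5) / 5.0                   # summed M_4,5 spectrum

print(f"M5 peak at {E[np.argmax(m5)]:.2f} eV, "
      f"M4 peak at {E[np.argmax(m4)]:.2f} eV")
```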
It was found that the contribution of the Nb $`4f`$ states is negligible in the VB, so that only the $`5p\rightarrow 3d`$ transition is important for the interpretation of the Nb $`M_{4,5}`$ XES. The O $`2s`$ states contribute to the Nb $`M_5`$ XES due to the O $`2s`$–Nb $`5p`$ hybridization. The corresponding features in the Nb $`M_4`$ and $`M_5`$ XES near $`-16`$ eV and $`-13`$ eV are broadened in the experimental spectra but still recognizable in Fig. 1 as peaks $`A`$ and $`B`$.

Going back to the discussion of the energy dependence of the Nb $`M_{4,5}`$ emission spectra (Fig. 1), we note that according to our XPS measurements the Nb $`M_5`$ ($`3d_{5/2}`$) and Nb $`M_4`$ ($`3d_{3/2}`$) binding energies are 207.23 and 210 eV, respectively (relative to the vacuum level). Consequently, the emission features in the spectra excited between 206.5 and 209.6 eV are generated by the refilling of the Nb $`M_5`$ ($`3d_{5/2}`$) hole, because the excitation of Nb $`M_4`$ is not possible below 210 eV. In the excitation energy range between 210.5 and 216.9 eV, the additional features $`B`$ and $`D`$ appear as a result of contributing transitions into the Nb $`M_4`$ ($`3d_{3/2}`$) hole. The sharp increase in intensity in the emission spectra at excitation energies from 216.9 eV to 221.1 eV can be attributed to the threshold of the $`3d\rightarrow 4f`$ absorption. This is illustrated by the calculated Nb $`M_4`$ absorption spectrum in Fig. 3. The onset of the $`3d\rightarrow 4f`$ absorption occurs around 235 eV. Whereas the $`3d\rightarrow 4f`$ contribution dominates in the $`M_{4,5}`$ absorption spectrum, the $`3d\rightarrow 5p`$ process gives rise to the absorption at about 218 eV. The strong enhancement of the emission (see Fig. 1) occurs when the excitation energy exceeds the $`M_5`$ threshold ($`E_{exc}`$=208.5 eV), the $`M_4`$ threshold ($`E_{exc}`$=211.5 eV) and the $`3d\rightarrow 5p`$ threshold ($`E_{exc}`$=221.1 eV), as displayed in the absorption spectrum (Fig. 3). This sharp increase in the absorption intensity, and hence in the resonant emission, appears at each of these steps as the excitation energy increases, first for the $`M_5`$ component and then for the $`M_4`$. The emission spectrum corresponding to the excitation energy of 221.1 eV in Fig. 1 exhibits, as compared with that for $`E_{exc}`$=216.9 eV, strongly enhanced $`M_5`$ but relatively unchanged $`M_4`$ intensity. Therefore, this selective excitation of the Nb $`M_5`$ XES can also be used to obtain experimental information about the Nb $`5p`$ DOS undistorted by overlap with the Nb $`M_4`$ XES.

Apart from Nb $`M_{4,5}`$, some other XES measurements are useful to provide complementary information about the electronic state distribution at the Nb site. In Fig. 4, the Nb $`M_3`$ ($`4d\rightarrow 3p_{3/2}`$), Nb $`L\beta _{2,15}`$ ($`4d_{5/2,3/2}\rightarrow 2p_{3/2}`$) and Nb $`L\gamma _1`$ ($`4d_{3/2}\rightarrow 2p_{1/2}`$) spectra are shown in comparison with the VB XPS. The spectra are brought to a common scale of binding energies, based on the binding energies of the corresponding core levels. Since the $`M_4`$ and $`M_5`$ spectra cannot be easily separated, they are plotted with respect to the $`3d_{5/2}`$ binding energy (207.23 eV). The Nb $`M_{2,3}`$ spectra have not, to our knowledge, been measured before, since they were listed neither in the Bearden tables  nor in the last systematic study of the ultrasoft XES in $`4d`$ transition metals . These spectra lie in an energy region convenient for synchrotron study, and their two components ($`3p_{1/2}`$ and $`3p_{3/2}`$) are well separated, by about 15 eV.
4 was obtained with an exposure time of about 20 min. Despite their low intensity, the $`M_{2,3}`$ spectra are potentially suitable for the study of the VB, probably including dispersion effects. The overall difference between the Nb $`M_3`$ and Nb $`L`$ spectra on the one hand and the Nb $`M_{4,5}`$ spectra on the other is that the former probe primarily the occupied Nb $`4d`$ DOS, centered near the bottom of the VB, whereas the latter reveal the contribution of the occupied states of $`l=1`$ symmetry at the Nb site, which are diffuse and hybridize strongly with the VB states of other atoms. It can be seen from Fig. 4 that, on the common energy scale, the maximum of the $`M_5`$ spectrum lies roughly in the middle of the VB. The O $`K_\alpha `$ spectrum was placed on the common energy scale using the O $`1s`$ binding energy (530.1 eV). This spectrum reveals the predominant distribution of the O $`2p`$ states near the top of the VB – a fact well known from band structure calculations (see, for example, Ref. ). The shape of the spectrum shows no development as the excitation energy varies, the reason being that no vacant O $`2p`$ states are available near the bottom of the conduction band, so that the O $`K_\alpha `$ emission is essentially incoherent with the corresponding absorption. The $`3p`$ state of potassium does not hybridize with any of the states contributing to the spectra discussed above and is visible only in the XPS. Potassium tantalate was studied earlier by XPS in comparison with potassium niobate . In the present work, we concentrate on the Ta $`N_{2,3}`$ ($`5d6s\to 4p_{3/2,1/2}`$) emission spectra, which have not been reported before. The $`N_2`$ and $`N_3`$ spectra are far separated in energy (by $`\sim `$62 eV) and probe primarily the Ta states well represented in the VB and at the bottom of the conduction band. As is known from the calculated band structure (see Ref. ), these bands exhibit strong energy dispersion. In Fig. 5, the sequence of Ta $`N_3`$ spectra is shown for several excitation energies, obtained with 5–10 min exposure per spectrum. The intensity is therefore much higher than that of the counterpart Nb $`M_{2,3}`$ spectra in KNbO<sub>3</sub>. Moreover, some development may be seen in the spectra depending on the excitation energy. While still absent at $`E_{exc}`$=399.4 eV, the X-ray emission is clearly visible at $`E_{exc}`$=401.4 eV, i.e. well below the Ta $`4p_{3/2}`$ excitation energy (404.0 eV). The displaced peak position at $`E_{exc}`$=401.4 eV may be an indication of X-ray resonant Raman scattering, which enables excitations just below the threshold. Similar trends have been observed in Ref. below and through the Ti $`L_3`$ absorption threshold in another perovskite system, (Ba,Sr)TiO<sub>3</sub>. The XPS of KTaO<sub>3</sub> does not differ much from earlier studies . The O $`K_\alpha `$ emission spectrum is similar (but not completely identical) to that in KNbO<sub>3</sub>. A separate analysis of our spectroscopy studies on KTaO<sub>3</sub> will be described elsewhere . Summarizing, we have compared the available spectroscopic experimental information about the electronic structure of KNbO<sub>3</sub> (including the new Nb $`M_{4,5}`$ XES, the Nb $`L\beta _{2,15}`$ and Nb $`L\gamma _1`$ XES, the O $`K_\alpha `$ XES and the VB XPS) with the results of first-principles calculations. The contributions from the decay of the $`3d_{5/2}`$ and $`3d_{3/2}`$ holes via valence emission are identified by tuning the excitation energy.
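The threshold logic used above to assign the emission features can be stated compactly. The sketch below is an illustration based only on the XPS binding energies quoted in the text (207.23 eV and 210 eV relative to the vacuum level); it lists which Nb $`3d`$ core holes a given excitation energy can create.

```python
def accessible_holes(e_exc, be_m5=207.23, be_m4=210.0):
    """Nb 3d core holes that an excitation energy e_exc (eV) can create,
    using the XPS binding energies quoted in the text."""
    holes = []
    if e_exc >= be_m5:
        holes.append("M5 (3d_5/2)")
    if e_exc >= be_m4:
        holes.append("M4 (3d_3/2)")
    return holes

for e_exc in (206.5, 209.6, 212.3, 216.9, 221.1):
    print(e_exc, accessible_holes(e_exc))   # only M5 is reachable below 210 eV
```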
As regards spectra potentially suitable for the study of band dispersion in the perovskites in question, we found that the Nb $`M_{2,3}`$ spectra have very low intensity, whereas the two components of Nb $`M_{4,5}`$ overlap strongly, which complicates the analysis. The Ta $`N_{2,3}`$ spectra are apparently free from both of these disadvantages. A possible implication for structural studies of perovskites – provided the band dispersion studies turn out to be successful – is the analysis of the band structure distortion in lower-symmetry phases with the use of angle-resolved resonant X-ray emission. ## Acknowledgements This work was supported by the Russian Foundation for Fundamental Research (Projects 96-15-96598 and 98-02-04129), the NATO Linkage Grant (HTECH.LG 971222), the DFG-RFFI Project, the Swedish Natural Science Research Council (NFR) and the Göran Gustavsson Foundation for Research in Natural Sciences and Medicine. AVP, BS and MN acknowledge the support of the German Research Society (SFB 225).
## Acknowledgments I wish to thank the organisers of the “Strong and Electroweak Matter 98” conference for a really enjoyable atmosphere. Financial support from CICYT, Spain, project AEN97-1693, is acknowledged.
# Mirror Matter MACHOs ## I Introduction The nature of the dark matter in the universe, for which there is considerable observational evidence, is a mystery. There are a number of experiments in progress to resolve this mystery. Here we will address the issues raised by the microlensing experiments which monitor millions of stars in the neighbouring Large Magellanic Cloud (LMC) to see if any of them suddenly brighten for a certain duration of time and then fade away again. The brightening observed is attributed to a lensing effect due to the passage of a dark massive object, presumably from our galaxy, in front of the LMC star. The duration $`\mathrm{\Delta }t`$ of the brightening of a MACHO event has been calculated and is proportional to $`\sqrt{m}/v`$ where $`m`$ is the mass of the object and $`v`$ is its velocity. If one assumes that the object is from our galaxy, its velocity is determined; as a result, from $`\mathrm{\Delta }t`$, one can deduce the macho mass. Based on the 14 events from the MACHO and the EROS collaborations that are attributable to MACHO events, one obtains a best fit mass of $`0.5M_{\odot }`$ for the MACHOs. Also it has been established that there are no MACHO candidates with masses between $`10^{-7}M_{\odot }`$ and $`10^{-2}M_{\odot }`$. It is also expected that these objects can comprise as much as 30 to 50% of the halo mass fraction. The question then arises: “what are these objects?”. The simplest possibilities are conventional baryonic objects such as red, brown or white dwarfs, whose masses are expected to be in this range, or neutron stars. It has however been argued by Hegyi and Olive that a large class of baryonic candidates are incompatible with observations. More recently, Freese et al have made a detailed study of the possibility that they could be red, brown or white dwarfs and have found such interpretations to be highly problematic. (For a more recent analysis of some of these issues, see .) We will summarize these difficulties in a subsequent section. Accepting them for the moment, their conclusion then leads us to search for alternative explanations. Our finding in this paper is that if there exists a mirror universe with identical particle and force content to the visible universe prior to gauge symmetry breaking, then for a certain choice of the symmetry parameters, the maximum mass of the “mirror” stars is of order $`0.5M_{\odot }`$ and they could therefore be the machos observed in the microlensing experiments. Since they are made of mirror baryons, they avoid all the problems encountered by machos made of conventional baryonic matter. We also estimate the main sequence lifetime of the mirror stars and find that in the parameter range of interest it is much less than the age of the universe. As a result, the machos are in the form of black holes. Let us note that precisely such models have recently been proposed to accommodate all the neutrino observations by identifying the lightest of the mirror neutrinos with the sterile neutrino needed in understanding the LSND result together with the solar and the atmospheric neutrino results. While this is the main result of our paper, our study also applies to the question of how the world of familiar matter would look if the masses were all scaled by a common factor. Similar questions have been studied in the past.
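The statement that $`\mathrm{\Delta }t\propto \sqrt{m}/v`$ follows from the Einstein-radius crossing time. The following sketch evaluates it under illustrative assumptions that are not fixed by the text: a source distance of 50 kpc (the LMC), a lens halfway along the line of sight, and a typical halo velocity of 200 km/s.

```python
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
C = 2.998e10       # cm s^-1
MSUN = 1.989e33    # g
KPC = 3.086e21     # cm

def einstein_crossing_time(m_sun, v_kms=200.0, d_source_kpc=50.0, x=0.5):
    """Crossing time t_E = R_E / v of the Einstein radius
    R_E = sqrt(4 G m D_s x (1 - x) / c^2); note t_E ~ sqrt(m)/v."""
    d_s = d_source_kpc * KPC
    r_e = math.sqrt(4.0 * G * m_sun * MSUN * d_s * x * (1.0 - x)) / C
    return r_e / (v_kms * 1.0e5) / 86400.0      # days

print(f"{einstein_crossing_time(0.5):.0f} days")   # ~60 days for 0.5 M_sun
```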
The main results of this letter are the following: we review the scaling laws for the maximum and minimum values of the stellar masses as the masses of the electrons, W-bosons, protons and neutrons vary together ($`m_i\to \zeta m_i`$). The maximum value, which is of particular interest for us, is derived by using an argument in the literature that the radiation pressure inside a stable compact stellar object should not exceed the gas pressure inside it, and by finding how that condition scales as the elementary particle masses vary. We find that, for $`\zeta `$ of order 15, mirror stars have a maximum mass of the order of the $`0.5M_{\odot }`$ needed for MACHOs in the halo. We give qualitative arguments to support the hypothesis that the initial stellar mass function (IMF) for the mirror sector is likely to peak near the maximum value for $`\zeta `$ in the parameter region of interest to neutrino physics. These two arguments ($`\zeta \approx 15`$ explains both the MACHO masses and the neutrino results) in our opinion strengthen the conjecture that mirror matter machos are the dark massive halo objects in our galaxy seen in the microlensing data. ## II Scaling laws for stellar masses and main sequence lifetimes We begin with a brief overview of the mirror universe model and the parameters describing fundamental forces in the mirror sector. As mentioned, one considers a duplicate version of the standard model with an exact mirror symmetry which is broken in the process of gauge symmetry breaking. All particles and parameters of the mirror sector will be denoted by a prime over the corresponding familiar sector symbol- e.g. mirror quarks are $`u^{},d^{},s^{},`$ etc and mirror Higgs field as $`H^{}`$, mirror QCD scale as $`\mathrm{\Lambda }^{}`$ . We assume that $`<H^{}>/<H>=\mathrm{\Lambda }^{}/\mathrm{\Lambda }\equiv \zeta `$. Since one expects the masses of the neutron and proton to be given by the scale $`\mathrm{\Lambda }`$ and the charged lepton (and current quark) masses to be given by $`<H>`$, scaling both parameters by the same amount implies that all fermion masses that are relevant to the discussion of stellar structure scale by the same amount, i.e. we have $`m_i\to \zeta m_i`$ with $`i=n,p,e,W,Z.`$ Furthermore, this gives weak cross sections varying as $`\zeta ^{-4}`$ for fixed values of energy. With these simple rules, assuming that the electroweak and strong coupling constants do not change, we can say a great deal about how the properties of stars would change. We start with the four equations of stellar structure: $`dP/dr=-G\rho (r)M(r)/r^2`$ (1) $`dM(r)/dr=4\pi r^2\rho (r)`$ (2) $`L(r)/4\pi r^2=-(16/3)\sigma _{SB}(T^3/\rho \kappa )dT/dr`$ (3) $`dL/dr=4\pi r^2ϵ(r)\rho (r)`$ (4) where $`\kappa (r)`$ is the opacity (cross section per unit mass) at radius $`r`$, $`\sigma _{SB}`$ the Stefan-Boltzmann constant, $`L(r)`$ the luminosity at radius $`r`$, and $`ϵ(r)`$ the rate of energy generation per unit mass at radius $`r`$. We will need three terms in the equation of state (below), taken one or two at a time: $`P=(\rho /m)kT+(4\sigma _{SB}/3c)T^4+(h^2/2m_e)(3/8\pi )^{2/3}(\rho /m)^{5/3}`$ (5) where the three terms represent gas pressure, radiation pressure, and (non-relativistic) degenerate electron pressure. $`m`$ is the nucleon mass, $`m_e`$ that of the electron. We have neglected such niceties as keeping track of how many objects there are for each $`m`$ of gas (2 for H, 3/4 for He, etc). We will make standard, illuminating if crude, approximations in order to understand the $`\zeta `$ behavior of the solutions to the above equations.
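Before turning to the derivation, the scaling rules just stated can be collected in one place. This is a minimal bookkeeping sketch; the $`\zeta ^{-2}`$ mass-limit scalings anticipate Equations (10) and (13) below.

```python
def mirror_scalings(zeta):
    """Scaling of mirror-sector quantities relative to the familiar sector:
    particle masses go as zeta, weak cross sections (at fixed energy) as
    zeta**-4, and both stellar mass limits as zeta**-2 (Eqs. (10), (13))."""
    return {
        "mass_ratio": zeta,               # m_e, m_p, m_n, M_W all ~ zeta
        "weak_sigma_ratio": zeta**-4.0,   # sigma ~ G_F**2 ~ M_W**-4
        "m_min_msun": 0.07 * zeta**-2.0,  # Equation (10)
        "m_max_msun": 70.0 * zeta**-2.0,  # Equation (13)
    }

print(mirror_scalings(15.0)["m_max_msun"])   # ~0.31 M_sun, of the order of
                                             # the MACHO best-fit mass
```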
First we write $`P=\rho GM/R,\rho =3M/4\pi R^3`$ (6) where $`P`$ and $`\rho `$ are roughly core averages. Here $`M`$ and $`R`$ are the mass and radius of the core or star; our approximations are not good enough to be precise on such points. (In practice we will adjust $`M`$ to be the mass of the whole star, and $`R`$ will fall short of the core radius for the sun.) Equation (6) gives the useful relation $`P=(4\pi /3)^{1/3}GM^{2/3}\rho ^{4/3}`$ (7) To find the minimum mass of a star, we neglect the radiation pressure term in Equation (5), insert into Equation (7), solve for $`T`$ and maximize with respect to $`\rho `$, giving $`kT=(G^2/2)(8\pi /3)^{2/3}M^{4/3}m^{8/3}m_e/h^2`$ (8) Following Phillips , we set $`T=T_{ig}`$, the lowest temperature that gives sufficient burning to match energy escape, and solve for $`M`$, obtaining $`M_{min}\sim [h^2kT_{ig}/(m_eG^2m^{8/3})]^{3/4}`$ (9) We know from more detailed analysis that $`M_{min}`$ is of the order of $`0.07M_{\odot }`$ and $`T_{ig}\sim 10^6K`$. We use Equation (9) to obtain the variation with $`\zeta `$. $`m`$ and $`m_e`$ go as $`\zeta `$. $`T_{ig}`$, in principle, must be found by solving the four coupled Equations (1-4). Roughly, however, nuclear binding energies will go linearly with $`\zeta `$, so we will approximate the variation of $`T_{ig}`$ as linear as well. We will see below that the solution of the approximate equations gives $`T`$ varying with $`\zeta `$ (numerically) not greatly differently. We thus obtain $`M_{min}\propto \zeta ^{-2}`$ (10) We can also, again following Phillips , use Equation (5) to find the maximum mass of a (main sequence) star. As the mass of the star gets bigger, the core temperature rises. Therefore, of the three terms in the expression for the pressure in Equation (5), we expect $`P_g`$ and $`P_r`$ to dominate. Following Phillips, we parameterize them as fractions of the total pressure $`P`$ as below: $`P_g=\beta P,P_r=(1-\beta )P`$ (11) We eliminate $`T`$ and solve Equation (11) for $`P`$, obtaining $`\beta P=[(\rho k/m)^4(\beta ^{-1}-1)/(4\sigma _{SB}/3c)]^{1/3}`$ (12) Using Equation (7) again then gives $`M_{max}\sim [(1-\beta )c/\sigma _{SB}]^{1/2}G^{-3/2}(k/m)^2/\beta ^2`$ (13) As $`\beta `$ approaches $`0`$, the energy density is increasingly dominated by photons (relativistic particles) and stars become unstable. Taking a cutoff around $`\beta \sim 1/2`$ gives a maximum stellar mass around $`70M_{\odot }`$. Thus the range for stars is roughly $`0.07M_{\odot }`$ to $`70M_{\odot }`$. From Equation (13) one sees, in the approximation that instability sets in at the same $`\beta `$ independent of $`\zeta `$, that $`M_{max}`$ varies as $`\zeta ^{-2}`$ (like $`M_{min}`$). It is similarly easy to see from the standard expression for the Chandrasekhar mass that it too varies as $`\zeta ^{-2}`$. Note that, in a model with $`m_e`$ varying linearly with $`\zeta `$, but $`m`$ constant, both $`M_{CH}`$ and $`M_{max}`$ would be independent of $`\zeta `$ while $`M_{min}`$ would go as $`\zeta ^{-3/4}`$ (since a higher mass for the electron would permit contraction to higher densities before Pauli repulsion becomes important). Note that such is the case for the mirror matter model investigated in references in order to solve the neutrino puzzles. We now consider stellar burning as $`\zeta `$ varies. We approximate Equation (3) as $`L=(16\pi /3)^2\sigma _{SB}(RT)^4/(\kappa M)`$ (14) where $`\kappa `$ is the opacity, for which we keep just the $`\gamma e`$ scattering contribution, i.e.
we take $`\kappa `$ as<sup>*</sup><sup>*</sup>*Omitting other contributions to the opacity tends to overestimate $`\kappa `$ as $`\zeta `$ increases and hence underestimate the luminosity and overestimate the main sequence lifetime. For our purpose, therefore, our assumption about $`\kappa `$ is a conservative one. $`\kappa =\sigma _T/m=\zeta ^{-3}\kappa _{\odot }`$ (15) Since the rate of energy generation is determined by the weak interaction rate for $`p+p\to e^++d+\nu `$, we approximate Equation (4) by $`L=ϵM`$ (16) with $`ϵ=\overline{\sigma v}E_{pp}\rho /m^2`$ (17) We take $`\overline{\sigma v}`$ from Clayton and, as above, continue in $`\zeta `$, obtaining $`ϵ=(\rho E_{pp}/m^2)\zeta ^{-3}f(T_6/\zeta )`$ (18) where $`T_6=T/10^6`$, $`E_{pp}`$ is the energy release for the full $`pp`$ chain into other than neutrinos, and $`f(x)=3\times 10^{-37}x^{-2/3}e^{-33.81x^{-1/3}}[1+0.021x^{1/3}+0.01x^{2/3}+9.5\times 10^{-4}x]`$ (19) One factor of $`\zeta ^{-1}`$ in Equation (18) comes from the parenthesis preceding $`\zeta ^{-3}`$ and two come from the behavior of the weak cross section, taking into account energies increasing with $`\zeta `$. Equating the two expressions above for $`L`$ permits us to solve for $`RT`$ as a function of $`T`$, while Equations (5) and (7) give $`RT=MGm/k`$. Combining these results we can write, $`M^4=4(3/16\pi )^3(k/mG)^7(ϵ_{\odot }\kappa _{\odot }/\rho _{\odot }\sigma _{SB})T^3\zeta ^{-13}f(T_6/\zeta )/f(15)`$ (20) where we have normalized to the temperature ($`T_6=15`$) at the center of the sun. Equation (20) gives an approximation to the variation of stellar masses with $`\zeta `$. With it, we can solve for $`R`$, $`\rho `$, $`L`$, and the main sequence lifetime $`t_{MS}=(0.1Mc^2/L)`$. In Figure 1 (a,b) we give the results, as a function of $`M`$, for the temperature $`T`$, radius $`R`$, and main sequence lifetime $`t_{MS}`$ for $`\zeta =1.0,\mathrm{},15`$. We have inserted an overall factor to scale the main sequence lifetime of the sun ($`\zeta =1`$, $`<T_6>\approx 7.5`$) to $`10^{10}`$ years. The scale units are $`10^6K`$, $`10^{10}cm`$, and $`10^9years`$ for the three quantities. It should be noted that, in the approximations made, the solar (core) radius comes out to about $`0.3\times 10^{10}cm`$ while, in real life, two thirds of the sun’s mass (with temperature $`T_6>7`$) extends out to about $`1.5\times 10^{10}cm`$. We see from Figure (1) that the ranges of radii and main sequence lifetimes fall with increasing $`\zeta `$ while that of temperatures increases. In Figure (2) we address this increase in more detail by plotting $`T`$ against $`\zeta `$ for four values of an index that runs from 1-100 as $`M`$ varies from $`M_{min}`$ to $`M_{Max}`$ in equal logarithmic increments. Roughly, we see that $`T`$ goes as $`\zeta ^{4/3}`$, so that the assumption above that $`T_{ig}\propto \zeta `$ is not grossly out of line. It should be noted that, for massive stars (with $`\zeta =1`$), the pp cycle on which the above considerations are based is replaced by CNO cycles (Clayton, ref.), in which C, N, and O catalyze He production from H at high enough temperatures. Since the weak interactions in the cycle are all weak decays, which increase, in rate, linearly with $`\zeta `$, we expect that massive stars, for large $`\zeta `$, should have even shorter main sequence lifetimes than those of Figure 1. Thus the lifetimes in Figure 1 should be considered upper bounds. However estimating the abundances of C, N, O could be quite complicated (see below).
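Equations (19) and (20) are simple enough to evaluate numerically. The sketch below, under the normalization assumption that $`M=1`$ at the solar calibration point ($`\zeta =1`$, $`T_6=15`$) so that the dimensional prefactor of Equation (20) drops out, inverts the mass-temperature relation by bisection; it illustrates the scalings rather than reproducing the figures.

```python
import math

def f_pp(x):
    """Equation (19): temperature factor of the pp-chain rate, x = T_6/zeta."""
    return (3.0e-37 * x ** (-2.0 / 3.0) * math.exp(-33.81 * x ** (-1.0 / 3.0))
            * (1.0 + 0.021 * x ** (1.0 / 3.0) + 0.01 * x ** (2.0 / 3.0)
               + 9.5e-4 * x))

def mass_rel(t6, zeta):
    """M(T, zeta) from Equation (20), in units of the mass at the
    calibration point (zeta = 1, T_6 = 15); ratios remove the prefactor."""
    m4 = (t6 / 15.0) ** 3 * zeta ** -13.0 * f_pp(t6 / zeta) / f_pp(15.0)
    return m4 ** 0.25

def central_t6(m, zeta, lo=0.5, hi=1.0e5):
    """Invert Equation (20) by bisection; mass_rel increases with T."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass_rel(mid, zeta) < m else (lo, mid)
    return lo

print(f"T_6 = {central_t6(1.0, 1.0):.1f}")   # ~15: recovers the calibration
```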
We will use, below, the unsurprising result from Figure (1) that for $`\zeta >5`$ main sequence lifetimes are all much shorter than the age of the universe; they fall roughly as $`\zeta ^{-3}`$. ## III Mirror vs Baryonic MACHOs In this section we show that mirror matter MACHOs (MMMs) provide a good explanation of the microlensing events, one within which the microlensing data can determine the parameter $`\zeta `$. Finally, we discuss briefly tests of the model. First, mirror matter resolves a number of MACHO problems. Fields, Freese and Graff , in a very detailed work, raise several problems with baryonic MACHO candidates, including: –All baryonic candidates require that the MACHO population be near the minimum of the range permitted by observations and that $`\mathrm{\Omega }_B`$ be near the maximum of the range permitted by BBNS theory if the sum of individual baryon components is to be less than the total baryon number. MMMs avoid this problem completely since mirror matter does not enter into the baryon budget. –Neutron stars and black holes from supernovae cannot fit the fact that lensing events point to MACHO masses around $`0.5M_{\odot }`$. We will see below that mirror matter does provide an explanation of the mass observation. –Brown dwarf explanations conflict with a growing number of observations that show that the index $`\beta `$ in the initial stellar mass function (IMF), $`N(m)\propto m^{-\beta }`$, is over 2 (2.35 is a commonly accepted value ) for $`m>M_{\odot }`$ whereas it is well under 2 for $`m<M_{\odot }`$; under 2 means that most of the mass is in the higher mass stars. The conflict is because the frequency of microlensing events appears to require a MACHO population with a total mass on the order of up to half $`\mathrm{\Omega }_B`$, while the decrease in $`\beta `$ for low masses precludes such a result. Since mirror matter is not baryonic, it has no such problem. –Finally, the “favorite candidate,” white dwarfs, suffer from several problems raised by Fields, Freese and Graff , including: (i) not being seen – for example in the Hubble deep field (see Flynn, Bahcall, and Gould ) as they should be, since $`0.5M_{\odot }`$ dwarfs can only cool slowly; (ii) a need for a large population of galactic massive stars and their supernovae to produce galactic winds to cleanse the galaxy of the processed material in the white dwarf ejecta; and (iii) a contradiction between the amount of carbon the progenitors would produce and the amount observed. None of these problems would arise with MMM progenitors. We turn now to the question of why the MMMs should have masses around $`0.5M_{\odot }`$. This would be the case (see Figure (1)) if we have $`\zeta `$ greater than, say, 15, since the maximum stellar mass then falls in the region between 0.5 and 1.0 solar masses. The work of shows in some detail that such values are just what is required to provide a simultaneous solution of the atmospheric, solar, and LSND neutrino problems (and provide warm dark matter in addition). There are, furthermore, reasons why (a) the mirror matter stellar IMF would peak near the maximum mass and (b) remnant masses would be similar to initial masses. Both stem from the decrease in cross sections with increasing $`\zeta `$. Current theories of star formation (see, for example, Adams and Fatuzzo and references therein) from molecular cloud core collapse require a mechanism to stop accretion during collapse, i.e. to limit the size of the star.
Such mechanisms are based on scattering, but cross sections for scattering of molecules, atoms, ions, and electrons off photons, atoms and molecules will fall as $`\zeta ^{-2}`$. Thus it is not unreasonable to expect that the mirror matter IMF should become more and more strongly peaked near $`M_{Max}`$ as $`\zeta `$ increases. Additionally, we might expect a modest increase in $`\zeta ^2M_{max}`$ over the Section 2 estimate, since scattering processes that create instability as $`\beta \to 0`$ will become less effective, leading to smaller $`\beta `$ values and, through Equation (13), larger values of $`M_{max}`$. The cross section decrease also predicts that there should be little mass loss between the initial star and the remnant. As $`\zeta `$ increases, the radius of the star decreases and neutrino cross sections decrease. Thus neutrino confinement times decrease sharply and it is doubtful that (Type II) supernova shocks can be formed, thereby creating the population of $`M_{Max}`$ black holes that are detected as MACHOs. These two $`\zeta >15`$ features – decreased mass loss and mirror star masses peaking around $`0.5M_{\odot }`$ – result in fixing $`\zeta `$ in a range optimal from the point of view of fitting the neutrino results in reference . In sum, MMMs appear to have a number of positive features as the explanation of microlensing events. Finally, we turn to the question of observational tests of the MMM hypothesis. The following come to mind: –Absence of any optical observations of lensing objects as lensing events accumulate; –Within the qualitative picture above, relatively strong peaking of lens masses into a narrow range; –Possible detection of the black holes by some new method; unfortunately estimates by Heckler and Kolb show that, even with new instruments (the Sloan digital sky survey telescope), black holes under $`10M_{\odot }`$ could saturate the halo mass without being detectable from the signal from interstellar material infall. –Possible future detection of black hole MACHO binaries, if they exist in sufficient number, through the emission of gravitational waves in experiments such as LIGO, VIRGO, TAMA and GEO. ## IV Lucky to Be Alive Finally we note some of the implications for the familiar world from the above results on varying $`\zeta `$. As noted in Section I, there is a growing literature on the changes that would obtain if standard model, and other, parameters were different. The present investigation adds to those results. The two most important changes as $`\zeta `$ grows would appear to be the absence of supernovae and the decrease in stellar lifetimes. These imply that, as $`\zeta `$ grows, there would be lower abundances of heavy elements in the interstellar medium with which to form planets and carbon based life forms on them, as well as less time in orbit around main sequence stars during which it would be possible for the latter to occur. Although rates for radiative processes would increase linearly with $`\zeta `$, main sequence lifetimes would fall as $`\zeta ^{-3}`$ as shown in Figure 1. It is the factor of $`p^3`$ in phase space that apparently results in increasing rates and decreasing cross sections. As $`\zeta `$ decreases, the combination of decreasing rates and increasing cross sections would be likely to interfere with the current models of star formation cited above. For example, collapse times ($`[G\rho ]^{-1/2}`$) would increase like $`\zeta ^{-2}`$ while cross sections for scattering that would tend to disperse the collapsing cloud would increase like $`\zeta ^{-2}`$.
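The lifetime argument of this section can be made concrete with the rough $`\zeta ^{-3}`$ law quoted above, normalized to the $`10^{10}`$ years adopted for the sun. A small sketch:

```python
def t_ms_years(zeta, t_sun=1.0e10):
    """Rough main-sequence lifetime: t_MS falls as zeta**-3 (see text),
    normalized to 1e10 yr for the sun at zeta = 1."""
    return t_sun * zeta ** -3.0

for zeta in (1.0, 5.0, 15.0):
    print(f"zeta = {zeta:4.1f}:  t_MS ~ {t_ms_years(zeta):.1e} yr")
# For zeta ~ 15, t_MS ~ 3e6 yr, far below the age of the universe, so
# mirror machos would long since have become compact remnants.
```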
In conclusion, we have studied the variation of stellar masses and lifetimes as the masses of the elementary particles $`m_e,m_p,m_n,M_W`$ all vary in the same way (given by the parameter $`\zeta `$). We conclude that for a value of $`\zeta \approx 15`$, the maximum mass of the mirror stars is around half a solar mass; as a result, they could be viable candidates for the MACHOs observed in the various microlensing searches<sup>†</sup><sup>†</sup>†Note that, in the mirror matter model of references the neutron is unstable for $`\zeta `$ in the range of interest, so stellar structures become either mirror white dwarfs or black holes. Our considerations with respect to the maximum mass of such objects would still obtain. The result would be prompt black hole formation and the application to the MACHO problem would be the same as in this letter.. The many problems encountered in trying to explain the $`0.5M_{\odot }`$ machos as white dwarfs, brown dwarfs, etc. are now easily avoided. We present a crude analysis of the dependence of the “main sequence” lifetime of the mirror stars on the $`\zeta `$ variable and find that for such $`\zeta `$ values the mirror macho is most likely a black hole, since the main sequence stellar lifetimes are much shorter than the age of the universe. We also propose several tests of this hypothesis. We are grateful to D. Clayton, S. Nussinov and D. Rosenbaum for many useful discussions. We are grateful to K. Freese and W. K. Rose for reading the manuscript and for comments. The work of R. N. M. is supported by the National Science Foundation grant under no. PHY-9802551 and the work of V. L. T. is supported by the DOE under grant no. DE-FG03-95ER40908. Figure Caption Figure 1 (a, b): Temperature T, radius R and the main sequence lifetime $`t_{MS}`$ of the mirror stars as a function of the stellar mass M for six different values of $`\zeta =1.0,\mathrm{},15`$. The units for the above are $`10^6`$K for T, $`10^{10}`$ cm for R and $`10^9`$ yrs. for $`t_{MS}`$. Figure 2. Variation of temperature T as a function of $`\zeta `$ for 4 of 100 equal mass steps between $`M_{min}`$ (step 1) and $`M_{max}`$ (step 100). Step 3 for $`\zeta =1`$ corresponds to the sun.
# X-ray reflection spectra from ionized slabs ## 1 INTRODUCTION Compton reflection is an important component of the X-ray spectrum of many compact objects (Guilbert & Rees 1988; Lightman & White 1989), particularly of Active Galactic Nuclei (AGN) and Black Hole Candidate (BHC) sources. It is produced when the primary X-ray emission from the source strikes Thomson-thick matter which Compton scatters X-rays into our line of sight. In many cases the accretion flow, probably a disc, is the major scattering medium. If that matter were solely composed of hydrogen then the reflection continuum would have the same spectral shape as the primary emission in the X-ray band, decreasing above about 30 keV due to Compton recoil. Other elements such as oxygen and iron, if they are not highly ionized, can absorb the lower energy X-rays and so flatten the reflection continuum and, in particular for iron, add fluorescent line emission (George & Fabian 1991; Matt, Perola & Piro 1991). Measurements of the strength and shape of the reflection spectrum can yield the geometry, velocity, gravitational potential depth and abundances of the scattering medium. As the X-ray irradiation of the medium becomes more intense, so the matter becomes ionized. This reduces the effects of absorption, progressively from lower to higher energies as the intensity increases (Lightman & White 1988; Done et al 1992; Ross & Fabian 1993; Życki et al 1994). An important signature of ionized reflection is a strong iron edge. Iron is a strong absorber when the gas is neutral, but absorption both above and below the edge is so strong that the change in observed flux across the edge is small, particularly if the primary continuum is also observed. When much of the oxygen in the gas is completely ionized the edge appears much stronger because the absorption below it is reduced. Of course the energy of the iron emission line shifts up in energy from 6.4 through 6.7 to 6.97 keV as iron is ionized, but Doppler shifts may confuse precise estimates of the observed energy. The line can be weak when iron is ionized to Fe xvii–Fe xxiii since line photons can be resonantly scattered and destroyed by Auger events (Ross, Fabian & Brandt 1996). Moreover, as the fraction of completely stripped iron increases the line may be significantly broadened by Comptonization and so become less apparent in the observed spectrum (Matt, Fabian & Ross 1996). Relativistic blurring when the reflection is from an accretion disc can further merge the line and edge so that neither is distinct in the final spectrum. The net result can be a weak broad absorption trough which starts at about 7 keV. Unlike the reflection spectrum from relatively neutral gas, which depends mainly upon the relative abundance of the elements and can be computed in a straightforward manner by the Monte-Carlo technique, the spectrum when the gas is partially ionized requires detailed numerical calculation to obtain the temperature, ionization structure and resultant Comptonization (Ross 1979). We have previously computed examples appropriate for simple accretion discs around AGN and BHC, concentrating mainly on ionization parameters $`\xi =4\pi F/n_\mathrm{H}\lesssim 10^3`$ where Auger destruction can be important.
Since, however, the precise density structure of the outer few Thomson depths of an accretion disc where the reflection spectrum is formed is unknown (the total disc thickness may be hundreds of Thomson depths), we consider a wider range of conditions here and highlight the range at higher values of $`\xi \gtrsim 10^4`$, the details of which have been largely ignored in previous work. This range of $`\xi `$ may be particularly relevant to the spectra of many BHC in the low/hard state, which show only a small reflection component (Ebisawa 1991; Ebisawa et al 1994, 1996; Gierliński et al 1997; Życki, Done & Smith 1998; Done & Życki 1998). One popular interpretation for this is that the central parts of the disc out to 30 – 50 gravitational radii are missing and replaced by a hot cloud which gives no intrinsic reflection (Gierliński et al 1997; Życki et al 1998; Poutanen 1998). What reflection is seen comes from the outer disc. We point out the alternative that, in a geometry where the primary X-rays are produced above a disc, an apparent lack of reflection could indicate high reflection. In principle, there is no difference in the spectrum expected from a single hot cloud and one with a perfect mirror bisecting it. In practice, Compton reflection is not perfect, but departures from a uniform albedo can be small in the 1–30 keV range where most spectral fitting takes place. We show here that the small departures from uniformity due to iron features that occur in a highly ionized disc can account for the small reflection signature seen, without the need to illuminate the outer disc. This solution may also be relevant to the X-ray spectra of many quasars, which also show little evidence for reflection yet are generally assumed to be powered by disc accretion (Nandra et al 1995, 1997). We note that values of $`\xi \approx 10^4`$ appear to imply an accretion rate close to the Eddington limit for a standard accretion disc (Ross & Fabian 1993 show that $`\xi =7\times 10^4f^3,`$ where $`f`$ is the Eddington ratio). However, if there is a patchy corona above the disc (see e.g. Stern et al 1995), rather than a continuous one, with a covering fraction of $`f_\mathrm{A}`$, then $`\xi >10^4`$ requires $`f>f_\mathrm{A}/7`$, which is only a per cent or so when $`f_\mathrm{A}\approx 0.1`$. Moreover, these estimates assume a uniform density for the disc right to its surface. The outer few Thomson depths could well have a lower density and thus mean that $`\xi `$ is underestimated. In this paper we present and discuss detailed, self-consistent computations of the temperature and ionization structure of slabs of gas ionized by the incident radiation and of the resulting reflection spectra. We consider both a constant density for the gas and a Gaussian fall-off. The spectra will be compared with X-ray observations of BHC in future work. ## 2 Computations ### 2.1 Method We consider a slab of gas whose surface is illuminated by radiation with a power-law spectrum of photon index $`\mathrm{\Gamma }`$. The radiative transfer is calculated for the upper layer of the slab that is responsible for producing the reflected spectrum. In order to concentrate solely on the effects of the illumination, no radiation is taken to enter the treated layer from the remainder of the slab beneath it. The method used has been described in detail by Ross & Fabian (1993). The illuminating radiation is treated analytically in a ‘one-stream’ approximation.
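The Eddington-ratio estimate quoted above can be checked directly. The sketch below simply inverts the Ross & Fabian (1993) relation $`\xi =7\times 10^4f^3`$ for a standard disc; the patchy-corona correction discussed in the text is not modelled here.

```python
def eddington_ratio_for_xi(xi):
    """Invert xi = 7e4 * f**3 (Ross & Fabian 1993, standard disc)."""
    return (xi / 7.0e4) ** (1.0 / 3.0)

print(f"f = {eddington_ratio_for_xi(1.0e4):.2f}")   # ~0.5: near Eddington
```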
The diffuse radiation that results from Compton scattering of the incident radiation and from emission within the gas is treated using the Fokker-Planck/diffusion method of Ross, Weaver & McCray (1978) modified for plane-parallel geometry. The local temperature and ionization state of the gas are found by solving the equations for thermal and ionization equilibrium, so that they are consistent with the local radiation field. Hydrogen and helium are assumed to be fully ionized, while the following ionization stages of the most abundant metals are treated: C v–vii, O v–ix, Mg ix–xiii, Si xi–xv, and Fe xvi–xxvii. For a given value of $`\mathrm{\Gamma }`$, the temperature and ionization state of the outer portion of the slab is expected to depend primarily on the value of the ionization parameter, $$\xi =\frac{4\pi F}{n_\mathrm{H}},$$ (1) where $`F`$ is the total illuminating flux (from $`0.01`$ to $`100\mathrm{keV}`$) and $`n_\mathrm{H}`$ is the hydrogen number density. For uniform-density slabs, we vary $`\xi `$ by changing the total illuminating flux while keeping the hydrogen number density fixed at $`n_0=10^{15}\mathrm{cm}^{-3}`$. This is a typical density that might be found in an AGN accretion disc. At the higher density expected for a BHC accretion disc, the effects of three-body recombination and heating due to free-free absorption cause the temperature and ionization state to depend somewhat on $`n_\mathrm{H}`$ as well as $`\xi `$ (e.g., see Ross 1979). However, this should have very little effect on the X-ray spectral features due to iron, which should remain similar to those calculated here. ### 2.2 Results Figure 1 shows the results for a uniform slab illuminated by a $`\mathrm{\Gamma }=2`$ spectrum with an ionization parameter $`\xi =10^4`$. The illuminating and reflected spectra are displayed as $`EF_E`$ versus $`E`$, where $`E`$ is the photon energy and $`F_E`$ is the spectral energy flux. The slab is highly reflecting, with $`F_E(\mathrm{out})/F_E(\mathrm{in})>63`$ per cent throughout the 2–20 keV spectral band. This is because the gas is highly ionized. At the illuminated surface, the iron is 85 per cent fully ionized, 14 per cent Fe xxvi, and 1 per cent Fe xxv. The Fe xxvi fraction peaks at 45 per cent around Thomson depth $`\tau _\mathrm{T}\approx 2.5`$. Fe xxv is the dominant ion for $`3.5\lesssim \tau _\mathrm{T}\lesssim 7.5`$. Silicon and magnesium (not shown in Fig. 1) are fully ionized throughout the regions where Fe xxv–xxvii dominate. For $`\tau _\mathrm{T}\gtrsim 10`$, iron ions have all their L-shell electrons, while magnesium and silicon ions have filled K-shells. Despite the high ionization of the surface layers, the reflected spectrum shows features due to iron K$`\alpha `$ emission and K-shell absorption. Most of the K$`\alpha `$ photons emerge in a broad Comptonized emission feature that blends smoothly into the Compton-smeared absorption feature. Only a small fraction of the K$`\alpha `$ photons emerge in narrow Fe xxv and Fe xxvi line cores at 6.7 and 7.0 keV, respectively; these are shown in Fig. 1 with a spectral resolution $`\delta E/E\approx 2`$ per cent. Fe xxvi K$`\alpha `$ photons are subject to resonance trapping. Many are removed from the narrow line core by an initial Compton scattering and then are further Comptonized as they diffuse outward to the surface. The Fe xxv intercombination line is not subject to resonance trapping, but most of these photons are produced at such great depth that they also suffer repeated Compton scatterings before escaping.
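For orientation, the illuminating fluxes implied by Equation (1) at the fixed density $`n_0=10^{15}\mathrm{cm}^{-3}`$ used here are easily tabulated; this is a trivial sketch of the parametrization, not part of the transfer calculation.

```python
import math

def flux_for_xi(xi, n_h=1.0e15):
    """Total illuminating flux F (erg cm^-2 s^-1, 0.01-100 keV) giving
    ionization parameter xi = 4*pi*F/n_H at hydrogen density n_h (cm^-3)."""
    return xi * n_h / (4.0 * math.pi)

for xi in (30.0, 1.0e3, 1.0e4, 1.0e5):
    print(f"xi = {xi:8.0f}:  F = {flux_for_xi(xi):.2e} erg cm^-2 s^-1")
```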
To clarify the contribution of the Comptonized line, Fig. 1 also shows the reflected spectrum when the iron K$`\alpha `$ emission is artificially suppressed. The K$`\alpha `$ photons form an important part of the reflected continuum for $`4\lesssim E\lesssim 9\mathrm{keV}`$. Also shown in Fig. 1 is the reflected spectrum if all iron features are suppressed by setting the iron abundance to zero. Relative to this smooth spectrum, the actual reflection spectrum is enhanced for $`E<7.5\mathrm{keV}`$ and depleted for $`E>7.5\mathrm{keV}`$. For soft X-rays ($`E\lesssim 1.5\mathrm{keV}`$ in this case) not shown in Fig. 1, the emergent spectral flux exceeds the incident flux due to bremsstrahlung emission by the hot surface layers and “inverse Compton” upscattering of even softer photons. The total flux leaving the surface over the entire spectral range under consideration ($`0.01\mathrm{keV}\le E\le 100\mathrm{keV}`$) equals the total incident flux, as required by the condition of thermal equilibrium. The reflected spectrum declines steeply above $`\sim 50\mathrm{keV}`$ because of the sharp cutoff assumed at $`100\mathrm{keV}`$. Extending the illuminating spectrum to higher energies (with an exponential cutoff, say) would raise this portion of the emergent spectrum via Compton downscattering of higher energy photons, but would have little effect on the iron ionization structure and spectral features. The temperature at the illuminated surface is found to be $`1.5\times 10^7\mathrm{K}`$. In the highly ionized gas there, heating due to Compton downscattering of hard photons dominates over heating due to photoabsorption. Therefore the temperature should not exceed the “Compton temperature,” $`T_\mathrm{C}`$, at which Compton heating would be balanced solely by cooling due to “inverse Compton” upscattering of soft photons. This condition is given by $$4kT_\mathrm{C}\int _{E_1}^{E_2}u_EdE=\int _{E_1}^{E_2}u_E\left(E-\frac{21E^2}{5m_\mathrm{e}c^2}\right)dE,$$ (2) where $`u_E`$ is the spectral energy density of the radiation, while $`E_1`$ and $`E_2`$ are the lower and upper limits, respectively, of the spectrum under consideration. The right-hand side of this equation includes the reduction in Compton heating due to first-order Klein-Nishina corrections to the scattering cross section (see Ross 1979). Setting $`u_E\propto E^{1-\mathrm{\Gamma }}`$ for the illuminating radiation, this gives $`T_\mathrm{C}=1.9\times 10^7\mathrm{K}`$ for $`\mathrm{\Gamma }=2`$. The temperature that we find at the surface is slightly lower than the Compton temperature due to additional cooling by bremsstrahlung emission. This disagrees with the results of Życki et al. (1994), who found a surface temperature exceeding $`4\times 10^7\mathrm{K}`$ for illumination with $`\xi =10^4`$ by a spectrum that was only slightly flatter ($`\mathrm{\Gamma }=1.9`$). The Monte Carlo calculation of Życki et al. only treated photons in the range $`0.15\mathrm{keV}\le E\le 100\mathrm{keV}`$ and did not include the thermal emission by the gas itself. This leads to underestimation of the inverse Compton cooling rate and thus to the high surface temperature (and the steep temperature gradient) that they found. Figure 2 shows a series of reflection spectra under similar conditions with ionization parameters ranging from $`\xi =30`$ to $`\xi =10^5`$. The model with the highest ionization parameter, $`\xi =10^5`$, is an excellent reflector and exhibits negligibly small spectral features due to iron.
This is because the surface layer is fully ionized to great depth, with Fe xxvi not becoming dominant until $`\tau _\mathrm{T}\approx 8`$. The Compton reflection produces a slight steepening in the reflected spectrum compared to the illumination; the reflected component mimics a power law with $`\mathrm{\Gamma }=2.14`$ in the 2–20 keV band. The temperature at the illuminated surface is found to be $`1.9\times 10^7\mathrm{K}`$, the full Compton temperature for a $`\mathrm{\Gamma }=2`$ power law spectrum. When the illuminating flux is reduced so that $`\xi =3\times 10^4`$, the K$`\alpha `$ emission and K-shell absorption features due to iron begin to become apparent. Fully-ionized iron dominates for $`\tau _\mathrm{T}\lesssim 5`$, so the emission and absorption features are weak and are highly broadened by Compton scattering. When $`\xi `$ is reduced to $`10^4`$, the broad spectral features due to iron become more important. This model has already been discussed in detail. With $`\xi `$ reduced to $`3\times 10^3`$, less than half of the iron is fully ionized at the illuminated surface, and Fe xxv becomes dominant at $`\tau _\mathrm{T}\approx 1`$. Now an important narrow emission line due to Fe xxv can be seen in addition to a Compton-broadened emission feature in Fig. 2. This is primarily due to emission of the intercombination line following recombination to excited states. When $`\xi `$ is further reduced to $`10^3`$, Fe xxv dominates at the illuminated surface, and the narrow Fe xxv K$`\alpha `$ line is quite strong. The tiny emission feature just above 8.8 keV is due to radiative recombination directly to the ground level of Fe xxv. Similarly, Si xiv produces two emission features: one at $`2.01\mathrm{keV}`$ due to K$`\alpha `$ emission and the other just above $`2.67\mathrm{keV}`$ due to radiative recombination. (See Życki et al. 1994 for other examples of radiative recombination emission features.) The narrow Fe K$`\alpha `$ emission line is suppressed in the models with $`\xi =300`$ and $`\xi =100`$. This is because ions in the range Fe xvii–Fe xxiii dominate at the illuminated surface, and their K$`\alpha `$ photons are assumed to be destroyed by the Auger effect during resonance trapping (see Ross & Fabian 1993; Życki & Czerny 1994; Ross et al. 1996). The narrow line seen when $`\xi =300`$ is due to a small amount of Fe xxv near the surface. On the other hand, the narrow line seen when $`\xi =100`$ is due to Fe xvi, the least ionized species that we treat, which then dominates for $`\tau _\mathrm{T}\gtrsim 0.5`$. Finally, for $`\xi =30`$ the reflection is similar to that of a cold, neutral slab, and the narrow emission line at 6.4 keV is strong. The ionization structure in the outer layers of the illuminated slab depends on the spectral form of the illumination as well as on the ratio of total flux to gas density expressed by the parameter $`\xi `$. Figure 3 shows the results with $`\xi =10^4`$ again, but when the illuminating spectrum is a flatter power law with $`\mathrm{\Gamma }=1.5`$. Now a greater fraction of the illuminating photons lie in the 9–20 keV range that is so important in producing fully photoionized iron.
As a result, Fe xxvii dominates to greater depth ($`\tau _\mathrm{T}\approx 5.5`$) than when $`\mathrm{\Gamma }=2`$, and the Compton-broadened emission and absorption features due to iron are not as strong. One of the uncertainties in modelling Compton reflection is the density structure of the illuminated gas. In the models presented above, the gas density has been assumed to be uniform with depth. This is the case, for example, in the standard Shakura & Sunyaev (1973) theory of accretion discs (also see Laor & Netzer 1989). However, this probably is not realistic even for a bare accretion disc (e.g., see Shakura, Sunyaev & Zilitinkevich 1978; Shimura & Takahara 1993), and it certainly cannot be the case when the surface has strong external illumination. The heating of the outermost layers by the impinging radiation should result in a decrease in density there. In order to see the general effect of such a decrease in density, let us arbitrarily assume that a constant-density slab (with $`n_\mathrm{H}=n_0`$) is topped by a “boundary layer” in which the density decreases with height in a gaussian manner, $$n_\mathrm{H}(z)=n_0\mathrm{exp}\left(-\frac{z^2}{h^2}\right).$$ (3) Here $`z`$ is the height above the base of the boundary layer, and $`h`$ is its characteristic thickness. Since the illuminating radiation is expected to have an important effect down to a Thomson depth of a few, we let the boundary layer have a total Thomson depth $$\tau _\mathrm{T}=1.2n_0\sigma _\mathrm{T}h\frac{\sqrt{\pi }}{2}=5,$$ (4) where the free electron density is assumed to be $`n_\mathrm{e}=1.2n_\mathrm{H}`$. Figure 4 shows the result when such a gas is illuminated by a $`\mathrm{\Gamma }=2`$ power law with a flux that would yield an ionization parameter $`\xi _0=4\pi F/n_0=10^4`$ for gas at the base density. Since the gas density is lower in the boundary layer, the effective ionization parameter is higher there. As a result, iron is 88 per cent fully ionized at Thomson depth $`\tau _\mathrm{T}=1`$, compared to only 69 per cent fully ionized for the uniform-density slab shown in Fig. 1. In fact, Fe xxvii dominates all the way down to $`\tau _\mathrm{T}\approx 3.5`$. The broad, Comptonized, iron K$`\alpha `$ emission and K-shell absorption features in the reflected spectrum are not as strong as for the uniform-density slab. Of particular importance is the fact that the narrow K$`\alpha `$ line cores, which were already weak in the uniform-density case, are now almost totally suppressed. ## 3 Discussion For X-ray illumination with $`\xi \approx 10^4`$ or higher, the features in the reflected spectrum due to iron K$`\alpha `$ emission and K-shell absorption are weak and are smeared out by Compton scattering. Any narrow Fe K$`\alpha `$ line cores are extremely weak. These effects are further enhanced when the illuminating spectrum is harder (flatter) or when there is a dropoff in gas density due to the heating by the external illumination. Such effects may come into play in the formation of the X-ray spectra of Black Hole Candidates. In the past, Ginga spectra of BHCs have been interpreted as exhibiting broad iron absorption features (“smeared edges”) with very weak Fe K$`\alpha `$ lines (Ebisawa 1991; Tanaka 1991; Ebisawa et al. 1994). This led to the suggestion that the line is suppressed by resonant Auger destruction in Fe xvii–xxiii (Ross & Fabian 1993; Ueda, Ebisawa & Done 1994; Ross et al. 1996).
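The Gaussian boundary layer of Equations (3) and (4) above is fully specified by $`n_0`$ and the total Thomson depth. A short sketch, assuming the $`n_0=10^{15}\mathrm{cm}^{-3}`$ of the uniform models, solves Equation (4) for the thickness $`h`$ and tabulates the profile:

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross section (cm^2)

def boundary_layer(n0=1.0e15, tau_total=5.0):
    """Thickness h from Eq. (4): tau_T = 1.2 n0 sigma_T h sqrt(pi)/2."""
    return tau_total / (1.2 * n0 * SIGMA_T * math.sqrt(math.pi) / 2.0)

h = boundary_layer()
print(f"h = {h:.2e} cm")                      # ~7e9 cm for n0 = 1e15 cm^-3
for i in range(4):
    z = 0.5 * i * h
    print(f"z = {z:.2e} cm   n_H = {1.0e15 * math.exp(-(z/h)**2):.2e} cm^-3")
```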
However, recent spectral studies of Ginga, EXOSAT, and ASCA observations of BHCs in the low/hard state have found broad Fe K$`\alpha `$ emission features as well as K-shell absorption features (Życki, Done & Smith 1997; Życki et al. 1998; Done & Życki 1998). These studies have treated the broadening of the features as being due to relativistic smearing by the reflecting accretion disc. The $`\xi `$ values derived for the illumination have been very low, and the weakness of the iron features has been taken to imply that the disc subtends a solid angle of illuminating radiation considerably smaller than $`2\pi `$. (Done & Życki conclude that the broadening of the disc K$`\alpha `$ line makes it difficult to detect with ASCA, so the narrow line found by Ebisawa et al. 1996 comes from the companion star.) This has led to the conclusion that the optically thick accretion disc only extends inward to a few tens of gravitational radii. As an alternative, the broad and weak iron features could be due to illumination of the disc with $`\xi \approx 10^4`$. Figure 5 shows the incident and reflected spectra for illumination with $`\xi =10^4`$ and a power-law index $`\mathrm{\Gamma }=1.7`$. If the primary X-ray emission is from a corona immediately above the accretion disc, the reprocessor subtends a full $`\mathrm{\Omega }=2\pi `$, and the total observed spectrum is the sum of the illuminating flux (due to the half of the radiation emitted in the outward direction) and the reflected flux. Fig. 5 shows the result after relativistic smearing if this total flux is assumed to originate from a narrow annulus at radius $`7R_S`$ inclined at $`30^{\circ }`$ around a Kerr black hole. Over the 2–20 keV range, the best-fitting power law now has a slope of $`\mathrm{\Gamma }=1.79`$. The ratio of the observed spectrum to the best-fit power-law model shows broad emission and absorption features markedly similar to those found by Done & Życki (1998) for the EXOSAT spectrum of Cyg X-1. There are several features to note here: Comptonization of the edge means that it begins around 6 keV (dash-dot curve in Fig. 1) and now resembles a symmetrical trough more than a conventional photoelectric absorption edge; the Comptonized and smeared line emission fills in the lower energy part of this trough, so that the whole feature mimics an edge at about 7 keV. Encouraged by the similarity between our model and the observed spectra shown by Done & Życki (1999), we have fitted a grid of our models to the brightest of the archival EXOSAT spectra. The models are relativistically blurred using the Kerr metric kernel of Laor (1990). The best fit over the 3–15 keV range indicates $`\xi =7400`$ and $`\mathrm{\Gamma }=1.5`$ for an iron abundance twice the Solar value (Fig. 6, upper and centre panels), from a disc inclined at less than 30 deg and extending from $`1.235m`$ to $`100m`$, with surface emissivity varying as $`(\mathrm{radius})^{-3}`$. In the lower panel we show the ratio of the data to the best-fitting power-law model, which strongly resembles the similar plot by Done & Życki (1999) for all the EXOSAT data. It is interesting to compare the result of our calculation with that given by the PEXRIV code (Magdziarz & Zdziarski 1995) in the XSPEC package for a value of $`\xi =5000`$, which is within the range allowed by that code (we correct for the different energy ranges used to define $`\xi `$ in our codes). This is shown in Figure 7.
There is clearly a large difference, particularly in the position of the iron edge (the PEXRIV code does not predict the iron line properties, which must be added separately). The PEXRIV code does not take account of the Comptonization of the features in the outer, most highly ionized layers. We advise against the use of this code for high values of $`\xi `$, and recommend instead the use of our approach or that of Życki et al (1994; see also Böttcher, Liang & Smith 1998). A detailed model for an ionized accretion disc will necessarily require a range of $`\xi `$ to be present, both from different radii and, if the corona is patchy, at different distances from the ionizing source. The results shown in this paper will be useful as a guide to situations dominated by highly ionized matter, for example where the size of separate patches of a corona exceeds their height above the disc. We intend to pursue a detailed comparison with observed spectra in future work. ## ACKNOWLEDGEMENTS RRR, ACF and AJY thank the College of the Holy Cross, the Royal Society and PPARC, respectively, for support.
# On the Reconstruction of Random Media using Monte Carlo Methods ## I Introduction A better understanding of the transport properties of random media, such as fluid flow in sandstones or the electrical conductivity of composites, requires the micro-structure as input . Digitized, three-dimensional micro-structures of natural sandstones are difficult and expensive to obtain . Thus there is a need for simulation algorithms that are able to provide representative micro-structures from statistical probability functions. Recently various algorithms have been proposed for the reconstruction of random micro-structures . In this paper we investigate a simulated annealing method that enforces agreement between the correlation functions of the original and the reconstructed micro-structure . To save computation time, the authors of have evaluated the correlation functions only in certain directions, assuming isotropy of the medium. The objective of this paper is to study the effect of this simplification on the final, reconstructed configurations. We test the effects on two examples: (i) the correlation function of a Fontainebleau sandstone and (ii) an artificial correlation function with damped oscillations. The results show that, at least for the oscillating correlation function, this yields significantly different configurations. More importantly, the reconstructions are anisotropic as a result of the simplified evaluation of the correlation functions. ## II The reconstruction method We follow the reconstruction algorithm proposed in . The reconstruction is performed on a $`d`$-dimensional hyper-cubic lattice $`ℤ^d`$ ($`d=2`$ for the results presented below). Whether a lattice point lies within pore space or matrix space is indicated by the characteristic function $$\chi (\stackrel{}{x})=\chi (x_1,x_2,\mathrm{},x_d)=\{\begin{array}{c}0\text{for}\stackrel{}{x}\in ℙ\hfill \\ 1\text{for}\stackrel{}{x}\in 𝕄\hfill \end{array}$$ (1) with $`x_i=0,1,\mathrm{},M_i-1`$, where $`ℙ`$ denotes the pore space and $`𝕄`$ the matrix space of a two-phase porous medium. The $`x_i`$ are in units of the lattice spacing $`a`$. The porosity $`\varphi `$ is given as $`\varphi =\frac{1}{N}\sum _{j=1}^N\left(1-\chi (\stackrel{}{x}_j)\right)`$ where $`N=\prod _{i=1}^dM_i`$ is the total number of lattice sites. Simulated annealing is an iterative technique for combinatorial optimization problems. The iteration steps are denoted by a subscript $`t`$. The optimum is found by lowering a fictitious temperature $`T_t`$ that controls the acceptance or rejection of configurations with ”energy” (or cost function) $`E_t`$. The energy function used in our simulations is defined as $$E_t=\sum _{k=1}^Jw_k\sum _{\stackrel{}{r}\in 𝔻_k}\left(g_t^k(\stackrel{}{r})-g_{\mathrm{ref}}^k(\stackrel{}{r})\right)^2$$ (2) where $`𝔻_k\subset ℤ^d`$ is a subset of lattice points and $`g_t^k`$ is the $`k`$th function of a set of $`J`$ statistical probability functions calculated for the configuration of step $`t`$. For example, $`g_t^k`$ may be a $`k`$-point correlation function. The real-valued factor $`w_k`$ is a weight for the $`k`$th function. Hence, the energy can be understood as a measure of the deviations of the probability functions $`g^k`$ from predefined reference functions $`g_{\mathrm{ref}}^k`$. The simulated annealing algorithm consists of the following steps. 1. Initialization: The 0’s and 1’s are randomly distributed with given porosity $`\varphi `$. 2. Two lattice points of different phase are chosen at random and exchanged.
In this way the porosity $`\varphi `$ is conserved. 3. The ”energy” $`E_t`$ of the current configuration is calculated according to Equation (2). 4. The ”temperature” $`T_t`$ is adjusted according to a fixed cooling schedule. 5. The new configuration created by the exchange of the two points is accepted with probability $$p=\mathrm{min}\left(1,\mathrm{exp}\left(-\frac{E_t-E_{t-1}}{T_t}\right)\right).$$ (3) In case of rejection, the two points are restored and the old configuration is left unchanged. 6. Return to step 2. As can be seen from Equation (3), configurations with lower energy are immediately accepted, while the acceptance of a configuration with higher energy is controlled by the temperature $`T`$. In order to obtain a configuration with minimal energy $`E`$ or, in other words, a configuration with minimal deviations of the probability functions $`g^k`$ from their reference functions $`g_{\mathrm{ref}}^k`$, the temperature $`T`$ has to be lowered in a suitable way. The algorithm is applicable to various functions $`g^k`$. Most authors propose the two-point correlation function . Assuming homogeneity and ergodicity, the two-point correlation function can be defined for our case as $$g(\stackrel{}{r})=\frac{\langle \chi (\stackrel{}{x})\chi (\stackrel{}{x}+\stackrel{}{r})\rangle -(1-\varphi )^2}{\varphi -\varphi ^2}.$$ (4) where the average $`\langle \mathrm{}\rangle `$ indicates a spatial average over all lattice sites $`\stackrel{}{x}`$. If $`g_{t-1}(\stackrel{}{r})`$ is known, a single update step of the annealing process requires the recalculation of the correlations of the two pixels that are exchanged in step 2 with all other pixels. The numerical effort to obtain $`g_t(\stackrel{}{r})`$ is therefore proportional to $`N`$, where $`N`$ is the total number of lattice points. In the reconstruction of three-dimensional porous media with $`N\approx 10^6\mathrm{}10^7`$ this leads to unacceptably long calculation times and therefore one has to find ways to speed up this calculation. One possibility for reducing the numerical effort is to truncate $`g`$ at a value $`r_c`$ for which $`g(\stackrel{}{r})\approx 0`$ for $`r=|\stackrel{}{r}|\ge r_c`$ holds. Below we set $`g(\stackrel{}{r})=0`$ for $`|\stackrel{}{r}|\ge r_c`$, where $`r_c`$ is a parameter in the reconstruction. For isotropic media, where $`g(\stackrel{}{r})=g(r)`$ with $`|\stackrel{}{r}|=r`$, it was suggested in to calculate $`g(r)`$ only in certain directions by setting $`\stackrel{}{r}=r\stackrel{}{e}_k`$ where $`\stackrel{}{e}_k`$ is an arbitrary unit vector. In the two-dimensional reconstructions presented below, $`\stackrel{}{e}_k`$ will be set to the radial unit vector in a polar coordinate system, $`\stackrel{}{e}_k=\stackrel{}{e}_{\phi _k}=\stackrel{}{e}_x\mathrm{cos}\phi _k+\stackrel{}{e}_y\mathrm{sin}\phi _k`$, where $`\stackrel{}{e}_x`$ and $`\stackrel{}{e}_y`$ are the unit vectors of the Cartesian coordinate system and $`\phi _k`$ is the angle between $`\stackrel{}{e}_x`$ and $`\stackrel{}{e}_{\phi _k}`$. Hence, instead of Equation (4) we use $$g^k(r)=\frac{\langle \chi (\stackrel{}{x})\chi (\stackrel{}{x}+r\stackrel{}{e}_k)\rangle -(1-\varphi )^2}{\varphi -\varphi ^2}$$ (5) with $`r=0,1,\mathrm{},r_c`$ in the simplified reconstruction scheme. Since (5) is a set of $`J`$ one-dimensional correlation functions, the numerical effort is reduced by a factor of roughly $`Jr_c/N`$ as compared to (4).
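The complete loop (steps 1-6) is easy to state in code. The following scaled-down sketch matches $`g(r)`$ along the horizontal and vertical directions ($`J=2`$) for the reference function of Equation (6) below; for clarity it recomputes the energy from scratch after each swap, whereas the method described above updates only the terms involving the two exchanged pixels. Lattice size, $`r_c`$ and the cooling constant are reduced here so the example runs quickly; they are not the paper's values.

```python
import numpy as np

def correlation(chi, direction, r_c):
    """Two-point correlation g(r) of Equation (5) along an integer lattice
    direction (dx, dy), with periodic boundaries; chi = 1 in matrix space."""
    phi = 1.0 - chi.mean()                      # porosity
    g = np.empty(r_c + 1)
    for r in range(r_c + 1):
        shifted = np.roll(chi, (r * direction[0], r * direction[1]), (0, 1))
        g[r] = ((chi * shifted).mean() - (1.0 - phi) ** 2) / (phi - phi ** 2)
    return g

def anneal(shape=(48, 48), porosity=0.5, r_c=20, steps=2000, tau=200.0):
    """Scaled-down sketch of annealing steps 1-6 with J = 2 directions."""
    rng = np.random.default_rng(1)
    rows, cols = shape
    chi = (rng.random(shape) >= porosity).astype(float)   # step 1
    r = np.arange(r_c + 1)
    g_ref = np.exp(-r / 8.0) * np.cos(r)                  # Equation (6)
    dirs = [(1, 0), (0, 1)]
    def energy_of(c):                                     # Equation (2)
        return 0.5 * sum(((correlation(c, d, r_c) - g_ref) ** 2).sum()
                         for d in dirs)
    energy = energy_of(chi)
    for t in range(steps):
        temp = np.exp(-t / tau)                           # step 4 (schedule)
        i, j = rng.integers(rows), rng.integers(cols)     # step 2: pick two
        k, l = rng.integers(rows), rng.integers(cols)     # pixels ...
        if chi[i, j] == chi[k, l]:
            continue                                      # ... of different phase
        chi[i, j], chi[k, l] = chi[k, l], chi[i, j]       # trial swap
        new_energy = energy_of(chi)                       # step 3
        d_e = new_energy - energy
        if d_e <= 0.0 or rng.random() < np.exp(-d_e / temp):
            energy = new_energy                           # step 5: accept
        else:
            chi[i, j], chi[k, l] = chi[k, l], chi[i, j]   # reject: restore
    return chi, energy

chi, e_final = anneal()
print(f"final energy: {e_final:.4f}")
```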
The above algorithm is now used to reconstruct two-dimensional, isotropic media with correlation function $$g_{\mathrm{ref}}(r)=\mathrm{exp}\left(-\frac{r}{8}\right)\mathrm{cos}(\omega r)$$ (6) with $`r`$ in units of the lattice spacing and $`\omega =1`$. The same function was used by the original authors to exemplify their algorithm. In the evaluation of the correlation functions periodic boundary conditions are assumed. We use an exponential decrease of the temperature $$T_t=\mathrm{exp}\left(-\frac{t}{16\cdot 10^5}\right).$$ (7) The remaining parameters are the lattice size $`M_1=M_2=400`$ and the porosity $`\varphi =0.5`$. They are the same as in . The following reconstructions have all been initialized with the same random seed. The algorithm terminated when the configuration did not change for 20000 subsequent update steps. ## III Results In the first reconstruction, the correlation function was calculated only in the horizontal and vertical direction, i.e. Equations (2) and (5) were used with $`J=2`$, $`\stackrel{}{r}_1=r\stackrel{}{e}_0`$ and $`\stackrel{}{r}_2=r\stackrel{}{e}_{\pi /2}`$. Both correlation functions $`g^1,g^2`$ had the same weight $`w_1=w_2=0.5`$ and the same reference function $`g_{\mathrm{ref}}^1=g_{\mathrm{ref}}^2`$ given in Equation (6) was used. The correlation function was truncated at $`r_c=100`$. Figure 1 shows the final configuration of the reconstruction. A similar pattern was found in . The pattern consists of stripes in direction $`\stackrel{}{e}_{\pi /4}`$ and in direction $`\stackrel{}{e}_{-\pi /4}`$. The distance between the stripes is determined by the cosine term. On a larger length scale, the pattern organizes into several regions in which all lines are parallel. The typical size of these regions is roughly 20 lattice spacings, in view of Equation (6) and Figure 2. Clearly, the pattern is not isotropic as it should be. The stripes are preferentially directed along the directions $`\stackrel{}{e}_{\pi /4}`$ and $`\stackrel{}{e}_{-\pi /4}`$. Also, one expects that the oscillation frequency $`\omega `$ of the correlation function in direction $`\stackrel{}{e}_{\pm \pi /4}`$ is not $`\omega =1`$ but $`\omega =\sqrt{2}`$. Figure 2 shows the correlation functions for the configuration of Figure 1 in the directions of the unit vectors $`\stackrel{}{e}_0,\stackrel{}{e}_{\pi /2}`$ and $`\stackrel{}{e}_{\pi /4}`$. The first and second have been used for the reconstruction and hence show good agreement with the reference function, while the latter deviates drastically. The correlation in direction $`\stackrel{}{e}_{\pi /4}`$ is better described by the function $$f(r)=\frac{1}{2}\left(\mathrm{exp}\left(-\frac{r}{8}\right)\mathrm{cos}\left(\sqrt{2}r\right)+\mathrm{exp}\left(-\frac{r}{8}\right)\right),$$ (8) shown as the dotted line. This may be interpreted as the arithmetic mean of the correlation function for regions (described above) with stripes perpendicular to the direction $`\stackrel{}{e}_{\pi /4}`$, given by the first term in Equation (8), and the correlation function for regions with stripes parallel to the direction $`\stackrel{}{e}_{\pi /4}`$, given by the second term. Of course, the same correlation function is found in direction $`\stackrel{}{e}_{-\pi /4}`$. One step towards an isotropic reconstruction may be to use $`J=4`$, and to force agreement of the correlations not only in two but in four directions $`\stackrel{}{e}_0`$, $`\stackrel{}{e}_{\pi /2}`$, $`\stackrel{}{e}_{\pi /4}`$ and $`\stackrel{}{e}_{-\pi /4}`$. The resulting configuration is shown in Figure 3. 
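(A remark on evaluation cost, relevant again for Figure 5b below: the full correlation function of Equation (4) can also be measured for an entire configuration at moderate cost with fast Fourier transforms and then radially binned to a one-dimensional $`g(r)`$. The sketch below is ours, assuming periodic boundaries as in the text.)

```python
import numpy as np

def g_radial(chi, n_bins):
    # Full two-point correlation, Eq. (4), for every lattice vector r via the
    # convolution theorem, then radially binned to a one-dimensional g(r).
    M1, M2 = chi.shape
    phi = 1.0 - chi.mean()                   # chi = 1 on matrix space
    F = np.fft.fft2(chi)
    corr = np.fft.ifft2(F * np.conj(F)).real / (M1 * M2)   # <chi(x) chi(x+r)>
    g2d = (corr - (1.0 - phi) ** 2) / (phi - phi ** 2)
    # Euclidean length of each lattice vector (minimum-image convention);
    # n_bins should not exceed min(M1, M2) // 2.
    x = np.minimum(np.arange(M1), M1 - np.arange(M1))
    y = np.minimum(np.arange(M2), M2 - np.arange(M2))
    rr = np.hypot(*np.meshgrid(x, y, indexing="ij"))
    idx = np.digitize(rr.ravel(), np.arange(n_bins + 1)) - 1
    return np.array([g2d.ravel()[idx == i].mean() for i in range(n_bins)])
```

However one organizes the bookkeeping, the full evaluation remains far more expensive than the directional one, consistent with the factor of roughly 50 in computation time quoted below for the Fontainebleau example.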
The pattern is significantly different from the pattern of Figure 1. There are not only stripes parallel to the diagonal directions but also more rounded formations and concentric circles at some points. The correlation functions for Figure 3 are plotted in Figure 4. Surprisingly, we find that it is not possible to obtain agreement with the reference correlation function. The resulting correlation functions in the horizontal, vertical and diagonal direction all deviate strongly from the reference function, especially at the first minimum. Additional simulations with different cooling schedules, damping factors and frequencies of the correlation function did not give better agreement. We have also varied the system sizes from $`200\times 200`$ up to $`1000\times 1000`$. The results were identical. Figure 5 shows the two-dimensional reconstructions of a Fontainebleau sandstone with porosity $`\varphi =0.135`$. The correlation function was truncated at $`r_c=50`$. The reconstruction of Figure 5a used the correlation function only in the horizontal $`\stackrel{}{e}_0`$ and vertical $`\stackrel{}{e}_{\pi /2}`$ direction, as suggested in . The reconstruction shown in Figure 5b, however, uses the full two-dimensional correlation function according to (4), where the two-dimensional correlation function was radially binned in the calculations to obtain a one-dimensional function which can be compared to the one-dimensional, isotropic correlation function of the original sandstone. We emphasize that in this calculation the two-dimensional correlation function was evaluated without restrictions or simplifications. Therefore, the calculation took about 50 times longer than the simplified reconstruction. Again, differences are visible, although they are smaller than those in the reconstructions using (6). The shapes of the pores in the full reconstruction (Figure 5b) appear smoother, and there is not as much ”dust” visible as in the simplified (restricted) reconstruction shown in Figure 5a. The correlation functions plotted in Figure 6 also reveal that the reconstructed micro-structure in Figure 5a is strongly anisotropic, while the micro-structure in Figure 5b is isotropic as it should be. In summary, we note that the statistical reconstruction of porous media with a predefined two-point correlation function often requires some reduction of the numerical effort. Especially the reconstruction of three-dimensional porous media in reasonable time does not seem possible without such simplifications in the calculation of the correlation function. Of course this problem is exacerbated if one wishes to include three-point or even higher order correlation functions. We applied a simplification proposed and used in , which samples the correlation function only in certain directions. The effect of this simplification on the final configurations may in some cases be negligible, but in general strong anisotropy and patterns significantly different from those of a proper isotropic reconstruction may appear as a result. ## Acknowledgments We thank B. Biswal for comments and helpful discussions, K. Höfler, S. Schwarzer and M. Müller for technical advice and significant parts of the C++ code, and the Deutsche Forschungsgemeinschaft for financial support through the GKKS Stuttgart. ## Figure captions * Simplified reconstruction of the damped oscillating correlation function given in Equation (6) by restricting the correlation function evaluation to the horizontal and vertical directions. 
* Correlation functions for the reconstruction of Figure 1. The solid line is the reference function of the reconstruction given in Equation (6). $`+`$ and $`\times `$ are the values of the correlation function of Figure 1 along the horizontal and vertical direction, respectively. The values of the correlation function in direction $`\stackrel{}{e}_{\pi /4}`$ are plotted with $``$. The dotted line is the estimate given in Equation (8) for the correlation function along the $`\stackrel{}{e}_{\pi /4}`$-direction. * Simplified reconstruction of the damped oscillating correlation function of Equation (6) by restricting the correlation function evaluation to the four directions $`\stackrel{}{e}_0`$, $`\stackrel{}{e}_{\pi /2}`$, $`\stackrel{}{e}_{\pi /4}`$ and $`\stackrel{}{e}_{-\pi /4}`$. * Correlation functions for the reconstruction of Figure 3. The reference function is plotted as a solid line. Note the mismatch in the first minimum and maximum. * (a) Two-dimensional simplified reconstruction of the correlation function of a Fontainebleau sandstone by restricting the correlation function evaluation to the horizontal and vertical directions. (b) Reconstruction with the same reference correlation function as in (a) but complete evaluation of the correlation function, i.e. without directional restrictions. * Correlation function for the reconstructions of Figure 5. The reference function is plotted as a solid line. The correlation functions calculated in the horizontal, vertical and diagonal direction refer to the configuration of Figure 5a. The complete correlation function is calculated from the configuration of Figure 5b.
# Comment on ”Quantum Theory of Dispersive Electromagnetic Fields” ## Abstract Recently Drummond and Hillery \[Phys. Rev. A 59, 691 (1999)\] presented a quantum theory of dispersion based on the analysis of a coupled system of the electromagnetic field and atoms in the multipolar QED formulation. The theory has led to explicit mode-expansions for various field-operators in a homogeneous medium characterized by an arbitrary number of resonant transitions with different frequencies. In this Comment, we draw attention to a similar multipolar study by Juzeliūnas \[Phys. Rev. A 53, 3543 (1996); A 55, 929 (1997)\] on the field quantization in a discrete molecular (or atomic) medium. A comparative analysis of the two approaches is carried out, highlighting both common and distinctive features. Recently Drummond and Hillery have presented a quantum theory of dispersive electromagnetic modes in a medium characterized by an arbitrary number of resonant transitions with different frequencies. The radiation field and the matter were assumed to constitute a single dynamical system. In the homogeneous (plane wave) case, explicit expansions have been obtained for the field-operators in terms of the operators for creation and annihilation of polaritons (elementary excitations of a combined system containing the radiation and the matter). Yet, the authors of the paper have overlooked a closely related study that also provides explicit mode-expansions for various quantized fields, such as the macroscopic displacement field, given by (in the Heisenberg picture) : $$\overline{𝐝}^{\perp }(𝐫,t)=\sum _{𝐤,m}\overline{𝐝}_{𝐤,m}^{\perp }(𝐫,t);$$ (1) with $$\overline{𝐝}_{𝐤,m}^{\perp }(𝐫,t)=i\sum _{\lambda =1}^{2}\left(\frac{\epsilon _0\hslash kv_g^{\left(m\right)}}{2V_0}\right)^{1/2}n^{\left(m\right)}𝐞^{\left(\lambda \right)}\left(𝐤\right)\left[e^{i\left(𝐤\cdot 𝐫-\omega _k^{\left(m\right)}t\right)}P_{𝐤,m,\lambda }-e^{-i\left(𝐤\cdot 𝐫-\omega _k^{\left(m\right)}t\right)}P_{𝐤,m,\lambda }^+\right]$$ (2) where $`P_{k,m,\lambda }^+`$ ($`P_{k,m,\lambda }`$) is the Bose operator for creation (annihilation) of a polariton characterized by a wave-vector $`k`$, a polarization index $`\lambda `$, and also an extra index $`m=1,2,\mathrm{},M+1`$ that labels branches of polariton dispersion, $`M`$ being the number of excitation frequencies accommodated by each molecule forming the medium. Here also $`𝐞^{\left(\lambda \right)}\left(𝐤\right)`$ is a unit polarization vector, $`V_0`$ is the quantization volume, $`\omega _k^{\left(m\right)}=ck/n^{\left(m\right)}`$ is the polariton frequency, $`n^{\left(m\right)}\equiv n\left(\omega _k^{\left(m\right)}\right)`$ is the refractive index (calculated at $`\omega _k^{\left(m\right)}`$), and $`v_g^{\left(m\right)}=d\omega _k^{\left(m\right)}/dk`$ is the branch-dependent group velocity. In what follows we compare the formalism of Drummond and Hillery to that of Juzeliūnas , highlighting common and distinctive features of the two approaches. Both studies consider a similar coupled system of the radiation field and the matter, exploiting the same multipolar formulation of Quantum Electrodynamics (QED). (Yet, different techniques have been employed to represent the operators of interest in terms of the normal polariton modes.) Moreover, in either approach an arbitrary number of transition frequencies (of electronic or vibrational origin) has been included for the material medium. 
As a result, the above mode-expansion (1)-(2) reproduces the same functional dependence on the group velocity and the refractive index as the corresponding expansion for the macroscopic displacement field in the one-dimensional case, given by Eq. (5.17) of ref. . Such a functional dependence is also consistent with the earlier narrow-band Lagrangian approach by Drummond . Furthermore, the mode-expansion (1)-(2) is equivalent to the three-dimensional displacement operator derived by Drummond and Hillery , as long as the spatial dispersion is neglected in the corresponding Eq. (8.24) of ref. . The same holds for other field-operators, such as the operator for the transverse electric field whose mode-components are related to Eq. (2) via a relationship of the classical type : $`\overline{𝐝}_{𝐤,m}^{\perp }\left(𝐫\right)=\epsilon _0\epsilon _r^{\left(m\right)}\overline{𝐞}_{𝐤,m}^{\perp }\left(𝐫\right)`$, the emerging relative dielectric permittivity $`\epsilon _r^{\left(m\right)}\equiv \left(n^{\left(m\right)}\right)^2`$ being a branch-dependent quantity. This is also in agreement with ref. . Both studies take special care in making sure that the (equal-time) commutation relationships between the various operators are preserved in their diagonal (polariton) representation. The study has checked the commutation relationships involving the field-operators $`\overline{𝐝}^{\perp }`$, $`𝐞^{\perp }`$, $`\overline{𝐚}^{\perp }`$, $`\overline{𝐩}^{\perp }`$ and $`\overline{𝐡}^{\perp }`$. Their correctness appears to be ensured by the following equalities : $$\begin{array}{ccc}\sum _mv_g^{\left(m\right)}n^{\left(m\right)}=c\hfill & \text{and}\hfill & \sum _mv_g^{\left(m\right)}/n^{\left(m\right)}=c\hfill \end{array}$$ (3) Drummond and Hillery have also exploited one of these equalities in analyzing the commutation relationships. In addition, a few more complex equalities have been presented when dealing with the material operators. A distinctive feature of the approach by Drummond and Hillery is the inclusion of spatial dispersion, i.e. dependence of the dielectric permittivity not only on the frequency, but also on the wave-vector . This does not alter the form of the above relationships, nor the mode-expansions of other macroscopic field-operators presented earlier , yet the meaning of the group velocity is to be modified . It is noteworthy that the spatial dispersion has been included in the ’effective mass’ approximation through an extra differential term featured in Eqs. (6.2) and (6.8) of ref. : the term represents the coupling between infinitely close dipole-oscillators comprising the continuous dielectric medium. One might argue that such an approach is not fully consistent with the spirit of the multipolar QED formulation in which there is no direct coupling between the dipole-oscillators . However, this is perhaps the only way to include the spatial dispersion within the continuous model of the dielectric considered. On the other hand, the approach assumes the matter to be discrete, the constituent molecules (dipole-oscillators) forming a cubic lattice. Here the effects of spatial dispersion (as well as other intermolecular coupling) are contained in the initial multipolar Hamiltonian for the radiation field coupled to the discrete medium, all interatomic coupling being mediated exclusively via the transverse virtual photons . As a result, the spatial dispersion is implicit in the general analysis of ref. up to Eq. (3.40). The effect has been omitted in the subsequent long wave-length approximation made in Eq. (4.1) of ref. 
followed by the explicit results. In fact, the spatial dispersion plays only a minor role for the optical (photon-like) modes characterized by small wave-vectors. One can recover the spatial dispersion in a relatively straightforward manner using the discrete model , however this is beyond the scope of the present Comment. Note that, in contrast to the continuous approach , the spatial dispersion is then characterized exclusively by the microscopic parameters of the discrete system, there being no need to introduce an extra parameter describing the effect. Using a discrete approach, one can also recover the local field effects from first principles. In doing this, the theory systematically treats the Umklapp processes playing an important role in the multipolar QED formulation. As a result, the required local-field corrections emerge intrinsically in the refractive index and the group velocity entering the mode-expansions of the field-operators . Furthermore, the discrete approach allows us to consider not only the macroscopic operators , but also the operators for the local and microscopic fields. For instance, the mode-components of the local displacement operator are related to those for the macroscopic displacement operator as : $`𝐝_{𝐤,m}^{\perp }\left(𝐫_\zeta \right)=\left(\epsilon _r^{\left(m\right)}\right)^{-1}\left[\left(\epsilon _r^{\left(m\right)}+2\right)/3\right]\overline{𝐝}_{𝐤,m}^{\perp }\left(𝐫_\zeta \right)`$. This appears to be very helpful for the analysis of various molecular-radiation processes in dielectric media , such as spontaneous emission. Finally, Drummond and Hillery have pointed out that the solution to the eigenvalue equation $`\omega ^2=c^2k^2/n^2`$ given by Eq. (7.5), ’is unique for any given modal frequency, but has forbidden regions which indicate a resonance, or absorption, band’. This would be absolutely true for spatially non-dispersive media, as illustrated in fig. 2 of ref. . However, Eq. (7.5) of ref. contains the spatial dispersion (in the ’effective mass’ approximation), so that $`n=n(\omega ,k)`$. Inclusion of such a spatial dispersion yields more than one value of $`k`$ for certain frequencies , the additional solutions representing the exciton-like modes characterized by much larger $`k`$.
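As a numerical aside (ours, not taken from either paper), the equalities (3) are easy to verify for the simplest nontrivial case of a single resonance. The sketch below assumes a lossless Lorentz dielectric, $`\epsilon (\omega )=1+\omega _p^2/(\omega _0^2-\omega ^2)`$, for which the dispersion relation $`c^2k^2=\omega ^2\epsilon (\omega )`$ has two polariton branches at each $`k`$; all parameter values are illustrative.

```python
import numpy as np

c, w0, wp = 1.0, 1.0, 1.0          # illustrative units

def branches(k):
    # c^2 k^2 = w^2 eps(w) reduces to a quadratic in w^2:
    # w^4 - (w0^2 + wp^2 + c^2 k^2) w^2 + c^2 k^2 w0^2 = 0
    b = -(w0**2 + wp**2 + (c * k) ** 2)
    d = (c * k * w0) ** 2
    return np.sort(np.sqrt(np.roots([1.0, b, d]).real))

def group_velocity(k, dk=1e-6):
    return (branches(k + dk) - branches(k - dk)) / (2.0 * dk)

k = 1.3
w = branches(k)
n = c * k / w                      # refractive index on each branch
vg = group_velocity(k)
print(np.sum(vg * n) / c)          # -> 1.0, first equality of Eq. (3)
print(np.sum(vg / n) / c)          # -> 1.0, second equality of Eq. (3)
```

Both sums come out equal to $`c`$ to the accuracy of the finite difference, for any $`k`$ and any choice of $`\omega _0`$ and $`\omega _p`$.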
## 1 Introduction On the strength of the recent report of atmospheric neutrino oscillations , as well as other previous indications of solar and accelerator neutrino oscillations, neutrino masses are now considered to be almost established experimentally. Yet there is no clear theoretical consensus as to the origin of neutrino masses. In the standard model, the usual way is to add three right-handed neutrino singlets with large Majorana masses and use the canonical seesaw mechanism to obtain small Majorana masses for $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$. On the other hand, other mechanisms are available , the simplest alternative being the addition of a heavy scalar triplet . There is another important theoretical reason for going beyond the minimal standard model, i.e. supersymmetry. However, the minimal supersymmetric standard model (MSSM) keeps the neutrinos massless because it contains no extra fields or interactions which could make them massive. Of course, one may simply add three right-handed neutrino singlet superfields to the MSSM and invoke the canonical seesaw mechanism as before. On the other hand, given the particle content of the MSSM, one may also allow new, lepton-number nonconserving terms in the Lagrangian which would then induce nonzero neutrino masses . In this talk, I will briefly review this latter situation where $`R`$-parity is usually assumed to be violated, and point out its potential problem with leptogenesis, ending with a proposal of radiative neutrino masses with $`R`$-parity conservation. ## 2 MSSM and $`R`$-Parity The well-known superfield content of the MSSM is given by $`Q_i=(u_i,d_i)_L\sim (3,2,1/6),`$ (1) $`u_i^c\sim (3^{},1,-2/3),`$ (2) $`d_i^c\sim (3^{},1,1/3),`$ (3) $`L_i=(\nu _i,l_i)_L\sim (1,2,-1/2),`$ (4) $`l_i^c\sim (1,1,1);`$ (5) $`H_1=(\overline{\varphi }_1^0,\varphi _1^{-})\sim (1,2,-1/2),`$ (6) $`H_2=(\varphi _2^+,\varphi _2^0)\sim (1,2,1/2).`$ (7) Given the above transformations under the standard $`SU(3)\times SU(2)\times U(1)`$ gauge group, the corresponding superpotential should contain in general all gauge-invariant bilinear and trilinear combinations of the superfields. However, to forbid the nonconservation of both baryon number ($`B`$) and lepton number ($`L`$), each particle is usually assigned a discrete $`R`$-parity: $$R\equiv (-1)^{3B+L+2j},$$ (8) which is assumed to be conserved by the allowed interactions. Hence the MSSM superpotential has only the terms $`H_1H_2`$, $`H_1L_il_j^c`$, $`H_1Q_id_j^c`$, and $`H_2Q_iu_j^c`$. Since the superfield $`\nu _i^c\sim (1,1,0)`$ is absent, $`m_\nu =0`$ in the MSSM as in the minimal standard model. Neutrino oscillations are thus unexplained. Phenomenologically, it makes sense to require only $`B`$ conservation (to make sure that the proton is stable), but to allow $`L`$ violation (hence $`R`$-parity violation) so that the additional terms $`L_iH_2`$, $`L_iL_jl_k^c`$, and $`L_iQ_jd_k^c`$ may occur. Note that they all have $`\mathrm{\Delta }L=1`$. From the bilinear terms $$\mu H_1H_2+ϵ_iL_iH_2,$$ (9) we get a $`7\times 7`$ neutralino-neutrino mass matrix $$\left[\begin{array}{ccccc}M_1& 0& g_1v_1& g_1v_2& g_1u_i\\ 0& M_2& g_2v_1& g_2v_2& g_2u_i\\ g_1v_1& g_2v_1& 0& \mu & 0\\ g_1v_2& g_2v_2& \mu & 0& ϵ_i\\ g_1u_i& g_2u_i& 0& ϵ_i& 0\end{array}\right],$$ (10) where $`v_{1,2}=\langle \varphi _{1,2}^0\rangle /2`$ and $`u_i=\langle \stackrel{~}{\nu }_i\rangle /2`$, with $`i=e,\mu ,\tau `$. Note first the important fact that a nonzero $`ϵ_i`$ implies a nonzero $`u_i`$ . 
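The tree-level content of Eq. (10) is easy to exhibit numerically. The sketch below is ours: all parameter values are illustrative, and sign conventions for the entries vary between papers. It builds the $`7\times 7`$ matrix with $`u_i/ϵ_i`$ equal for all $`i`$ and diagonalizes it; two eigenvalues then vanish, anticipating the remark that follows.

```python
import numpy as np

# Illustrative values only (GeV); not taken from the text.
M1, M2, mu = 100.0, 200.0, 150.0
g1, g2 = 0.36, 0.65
v1, v2 = 50.0, 150.0
eps = np.array([0.1, 0.2, 0.3])          # epsilon_i, i = e, mu, tau
u = 0.02 * eps                           # sneutrino vevs with u_i/eps_i constant

# Upper-left 4x4 block: (gaugino, higgsino) sector of Eq. (10).
A = np.array([[M1,    0.0,   g1*v1, g1*v2],
              [0.0,   M2,    g2*v1, g2*v2],
              [g1*v1, g2*v1, 0.0,   mu  ],
              [g1*v2, g2*v2, mu,    0.0 ]])
# 4x3 block coupling the neutralinos to the three neutrinos.
B = np.vstack([g1*u, g2*u, np.zeros(3), eps])
M = np.block([[A, B], [B.T, np.zeros((3, 3))]])

evals = np.sort(np.abs(np.linalg.eigvalsh(M)))
print(evals)   # the two smallest entries are (numerically) zero:
               # two neutrino combinations stay massless at tree level
```

Perturbing $`u_i`$ away from exact proportionality to $`ϵ_i`$ lifts one of the two zero eigenvalues, so a second neutrino then becomes massive at tree level.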
Note also that if $`u_i/ϵ_i`$ is the same for all $`i`$, then only one linear combination of the three neutrinos gets a tree-level mass. From the trilinear terms, neutrino masses are also obtained , now as one-loop radiative corrections. Note that these occur as the result of supersymmetry breaking and are suppressed by $`m_d^2`$ or $`m_l^2`$. ## 3 $`L`$ Nonconservation and the Universe As noted earlier, the $`R`$-parity nonconserving interactions have $`\mathrm{\Delta }L=1`$. Furthermore, the particles involved have masses at most equal to the supersymmetry breaking scale, i.e. a few TeV. This means that their $`L`$ violation together with the $`B+L`$ violation by sphalerons would erase any primordial $`B`$ or $`L`$ asymmetry of the Universe . To avoid such a possibility, one may reduce the relevant Yukawa couplings to less than about $`10^{-7}`$, but a typical minimum value of $`10^{-4}`$ is required for realistic neutrino masses. Hence the existence of the present baryon asymmetry of the Universe is unexplained if neutrino masses originate from these $`\mathrm{\Delta }L=1`$ interactions. This is a generic problem of all models of radiative neutrino masses where the $`L`$ violation can be traced to interactions occurring at energies below $`10^{13}`$ GeV or so. Consider the prototype (Zee) model of radiative neutrino masses . It is not supersymmetric and it only adds one charged scalar singlet $`\chi ^\pm `$ and a second Higgs doublet to the minimal standard model. Call the two Higgs doublets $`\mathrm{\Phi }_{1,2}`$; then the trilinear coupling $`\chi ^{-}(\varphi _1^+\varphi _2^0-\varphi _1^0\varphi _2^+)`$ is allowed, as well as the Yukawa coupling $`\chi ^+(\nu _il_j-l_i\nu _j)`$. Hence there is an effective dimension-5 operator $`\nu _i\nu _j\varphi _1^0\varphi _2^0`$ which renders the neutrinos massive , but it is again suppressed by $`m_l^2`$. Note that the new interactions have $`\mathrm{\Delta }L=2`$. ## 4 Supersymmetric Radiative Neutrino Masses and Leptogenesis It has been shown recently that naturally small Majorana neutrino masses may be obtained from heavy scalar triplets and if the latter have masses of order $`10^{13}`$ GeV, their decays could generate a lepton asymmetry which then gets converted into the present baryon asymmetry of the Universe through the electroweak phase transition. The same role may be attributed to the scalar singlets of the Zee model if they are heavy enough, but then to obtain realistic radiative neutrino masses, unsuppressed Yukawa couplings are needed. Consider now the following supersymmetric extension of the Zee model. Since all the interactions are either $`\mathrm{\Delta }L=0`$ or $`\mathrm{\Delta }L=2`$, $`R`$-parity is conserved. Because of the requirement of supersymmetry, there is a doubling of the scalar superfields: $`\chi _1^+\sim (1,1,1;+,-),`$ (11) $`\chi _2^{-}\sim (1,1,-1;+,-),`$ (12) $`H_{1,3}\sim (1,2,-1/2;+,\pm ),`$ (13) $`H_{2,4}\sim (1,2,1/2;+,\pm ).`$ (14) A fourth family of leptons is then added: $`(N_1^0,E^{-})\sim (1,2,-1/2;-,-),`$ (15) $`N_2^0\sim (1,1,0;-,-),`$ (16) $`E^+\sim (1,1,1;-,-).`$ (17) In the above, the assignments of these superfields under a discrete $`Z_2\times Z_2^{\prime }`$ symmetry are also displayed. The first is merely the one usually assumed to obtain $`R`$-parity; the second is used to distinguish the new particles from those of the MSSM. 
The relevant terms in the $`R`$-parity preserving superpotential of this model are then given by $`W`$ $`=`$ $`\mu _{12}(h_1^0h_2^0-h_1^{-}h_2^+)`$ (18) $`+`$ $`\mu _{34}(h_3^0h_4^0-h_3^{-}h_4^+)`$ $`+`$ $`m_\chi \chi ^+\chi ^{-}`$ $`+`$ $`(m_E/v_1)(h_1^0E^{-}-h_1^{-}N_1^0)E^+`$ $`+`$ $`f_i(\nu _ih_3^{-}-l_ih_3^0)E^+`$ $`+`$ $`f_j^{\prime }(\nu _jE^{-}-l_jN_1^0)\chi _1^+`$ $`+`$ $`f_{24}(h_2^+h_4^0-h_2^0h_4^+)\chi _2^{-},`$ where $`v_{1,2}`$ are the vacuum expectation values of $`h_{1,2}^0`$. The unsuppressed one-loop diagram generating neutrino masses is shown in Fig. 1 of Ref. . Note that the effective supersymmetric dimension-5 operator $`L_iL_jH_2H_2`$ is indeed realized. Assuming the masses of the scalar leptons of the fourth family to be equal to $`M_{SUSY}`$, the neutrino mass matrix is then given by $$\frac{(f_if_j^{\prime }+f_i^{\prime }f_j)f_{24}v_2^2m_E\mu _{12}\mu _{34}}{16\pi ^2v_1M_{SUSY}^2m_\chi }\mathrm{ln}\frac{m_\chi ^2}{M_{SUSY}^2}.$$ (19) To get an estimate of the above expression, let $`f_i=f_j^{\prime }=f_{24}=1`$, $`m_E=v_1`$, $`\mu _{12}=\mu _{34}=M_{SUSY}`$; then $$m_\nu =\frac{1}{8\pi ^2}\frac{v_2^2}{m_\chi }\mathrm{ln}\frac{m_\chi ^2}{M_{SUSY}^2}.$$ (20) Assuming $`v_2\sim 10^2`$ GeV, $`m_\chi \sim 10^{13}`$ GeV, and $`M_{SUSY}\sim 10^3`$ GeV, a value of $`m_\nu \simeq 0.6`$ eV is obtained. This is just one order of magnitude greater than the square root of the $`\mathrm{\Delta }m^2\simeq 5\times 10^{-3}`$ eV<sup>2</sup> needed for atmospheric neutrino oscillations . Reducing the above dimensionless couplings slightly from unity would fit the data quite well. Since $`m_\chi \sim 10^{13}`$ GeV is now allowed, leptogenesis should be possible as demonstrated in Ref. . ## 5 Neutrino Oscillations It has recently been shown that the structure of Eq. (19) for the $`\mu \tau `$ sector is naturally suited for the large mixing solution of atmospheric neutrino oscillations. To be more specific, the $`2\times 2`$ submatrix of Eq. (19) for the $`\mu \tau `$ sector can be written as $$m_0\left[\begin{array}{cc}2\mathrm{sin}\alpha \mathrm{sin}\alpha ^{\prime }& \mathrm{sin}(\alpha +\alpha ^{\prime })\\ \mathrm{sin}(\alpha +\alpha ^{\prime })& 2\mathrm{cos}\alpha \mathrm{cos}\alpha ^{\prime }\end{array}\right],$$ (21) where $`\mathrm{tan}\alpha =f_\mu /f_\tau `$ and $`\mathrm{tan}\alpha ^{\prime }=f_\mu ^{\prime }/f_\tau ^{\prime }`$. The eigenvalues of the above are then given by $`m_0(c_1\pm 1)`$, where $`c_1=\mathrm{cos}(\alpha -\alpha ^{\prime })`$, and the effective $`\mathrm{sin}^22\theta `$ for $`\nu _\mu \nu _\tau `$ oscillations is $`(1-c_2)/(1+c_2)`$, where $`c_2=\mathrm{cos}(\alpha +\alpha ^{\prime })`$. If $`\mathrm{tan}\alpha \simeq \mathrm{tan}\alpha ^{\prime }\simeq 1`$, then $`c_1\simeq 1`$ and $`c_2\simeq 0`$. In that case, maximal mixing between a heavy $`(2m_0)`$ and a light $`(s_1^2m_0/2)`$ neutrino occurs as an explanation of the atmospheric data. If it is assumed further that $`f_e<<f_{\mu ,\tau }`$ and $`f_e^{\prime }<<f_{\mu ,\tau }^{\prime }`$, then the small-angle matter-enhanced solution of solar neutrino oscillations may be obtained as well. ## 6 Collider Phenomenology The above model has the twin virtues of an acceptable neutrino mass matrix and the possibility of generating a lepton asymmetry of the Universe. It is also phenomenologically safe because all the additions to the standard model do not alter its known successes. Neither the fourth family of leptons nor the two extra Higgs doublets mix with their standard-model analogs because they are odd under the new discrete $`Z_2^{\prime }`$ symmetry. 
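(As a numerical aside to Sec. 5 before proceeding: the eigenvalue and mixing formulas quoted after Eq. (21) can be checked directly. The short sketch below uses sample angles of our own choosing, purely for illustration.)

```python
import numpy as np

def mass_matrix(alpha, alpha_p, m0=1.0):
    # The 2x2 (mu, tau) block of Eq. (21).
    return m0 * np.array(
        [[2*np.sin(alpha)*np.sin(alpha_p), np.sin(alpha + alpha_p)],
         [np.sin(alpha + alpha_p),         2*np.cos(alpha)*np.cos(alpha_p)]])

alpha, alpha_p = 0.7, 0.8                     # tan(alpha) = f_mu/f_tau, etc.
evals = np.sort(np.linalg.eigvalsh(mass_matrix(alpha, alpha_p)))
c1 = np.cos(alpha - alpha_p)
c2 = np.cos(alpha + alpha_p)
print(evals, np.sort([c1 - 1.0, c1 + 1.0]))   # eigenvalues are m0 (c1 +/- 1)
print((1.0 - c2) / (1.0 + c2))                # effective sin^2(2 theta)
```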
In particular, $`H_3`$ and $`H_4`$ do not couple to the known quarks and leptons, hence flavor-changing neutral currents are suppressed here as in the standard model. The lepton-number violation of this model is associated with $`m_\chi `$, which is of order $`10^{13}`$ GeV. However, the fourth family of leptons should have masses of order $`10^2`$ GeV and be observable at planned future colliders. The two extra Higgs doublets should also be observable at an energy scale of order $`M_{SUSY}`$. The soft supersymmetry-breaking terms of this model are assumed to break $`Z_2^{\prime }`$ without breaking $`Z_2`$. Hence there will still be a stable LSP (lightest supersymmetric particle) and a fourth-family lepton will still decay into ordinary leptons. For example, because $`\stackrel{~}{h}_3^0`$ mixes with $`\stackrel{~}{h}_1^0`$, the decay $$E^{-}\to \mu ^{-}\stackrel{~}{h}_3^0(\stackrel{~}{h}_1^0)\to \mu ^{-}\tau ^+\tau ^{-}$$ (22) is possible and would make a spectacular signature. ## 7 Conclusion In conclusion, the issue of neutrino masses in supersymmetry has been addressed in this talk. The assumption of $`R`$-parity nonconservation is shown to be generically inconsistent with leptogenesis because the lepton-number violating interactions would act in conjunction with the $`B+L`$ violating sphaleron processes and erase any pre-existing $`B`$ or $`L`$ or $`B-L`$ asymmetry of the Universe. This constraint means that any $`R`$-parity violation must be very small, so that it is of negligible phenomenological interest and cannot contribute significantly to neutrino masses. This conclusion also applies to models of radiative neutrino masses with suppressed Yukawa couplings, such as the Zee model. However, it has also been shown that realistic neutrino masses in supersymmetry are possible beyond the MSSM with $`R`$-parity conservation, where lepton-number violation is by two units and occurs at the mass scale of $`10^{13}`$ GeV. The specific model presented also predicts new particles which should be observable in the future at the LHC (Large Hadron Collider). ACKNOWLEDGEMENT I thank the organizers George Zoupanos, Nick Tracas, and George Koutsoumbas for their great hospitality at Corfu. This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837.
## 1 Abstract We briefly review and extend our discussion of the ROSAT detection of the extraordinarily luminous ($`>10^{42}`$ erg/s), partly extended ($`>`$30 kpc diameter) X-ray emission from the ultraluminous infrared galaxy NGC 6240. The ‘standard’ model of starburst outflow is contrasted with alternatives, and a comparison with the X-ray properties of ellipticals is performed. ## 2 Introduction The double-nucleus galaxy NGC 6240 is outstanding in several respects: its infrared H<sub>2</sub> 2.121$`\mu `$m and \[FeII\] 1.644$`\mu `$m line luminosities and the ratio of H<sub>2</sub> to bolometric luminosities are the largest currently known (van der Werf et al. 1993). Its huge far-infrared luminosity of $`10^{12}L_{\odot }`$ (Wright et al. 1984) comprises nearly all of its bolometric luminosity. Hence, owing to its low redshift of $`z`$=0.024, NGC 6240 is one of the nearest members of the class of ultraluminous infrared galaxies (hereafter ULIRGs).<sup>1</sup>We continue to refer to NGC 6240 as a ULIRG but note that, owing to the method used to integrate over the IRAS bands and the adopted value of $`H_0`$, most authors now attribute an IR luminosity $`<10^{12}L_{\odot }`$ to NGC 6240, rendering it a LIRG instead of a ULIRG. Its optical morphology (e.g., Zwicky et al. 1961, Fried & Schulz 1983) and its large stellar velocity dispersion of 360 km/s (among the highest values ever found in the center of a galaxy: e.g., Doyon et al. 1994) suggest that it is a merging system on its way to become an elliptical. Like other ULIRGs, the object contains a compact, luminous CO(1-0) emitting core of molecular gas (Solomon et al. 1997). Within this core most of the ultimate power source of the FIR radiation appears to be hidden. There is now growing evidence that LIRGs are predominantly powered by star formation and that the AGN contribution increases with FIR luminosity (e.g., Shier et al. 1996, Lutz et al. 1998a, Rigopoulou et al. 1998; see Sanders & Mirabel 1996 and Genzel et al. 1998 for recent reviews), while essentially all of the HyLIRGs contain QSOs (e.g., Hines et al. 1995 and these proceedings). In the ‘transition region’ around $`L_{\mathrm{FIR}}\sim 10^{12}L_{\odot }`$ it then requires a careful object-by-object analysis to find out the major power source. Concerning NGC 6240, at least four scenarios have been suggested: heating of dust by a superluminous starburst, by an AGN, by an old stellar population, and by UV radiation from molecular cloud collisions. In particular, previous hints for an AGN included (i) the strength of NIR recombination lines (de Poy et al. 1986; depending on the applied reddening correction, though), (ii) the presence of compact bright radio cores (Carral et al. 1990; but see Colbert et al. 1994), (iii) the discovery of a high-excitation core in the southern nucleus with HST (Barbieri et al. 1994, 1995, Rafanelli et al. 1997), and (iv) the detection of the \[OIV\] 25.9$`\mu `$m emission line with ISO (Lutz et al. 1996 – but see Lutz et al. 1998b, who discovered this line in a number of starburst galaxies; Egami, this meeting). X-rays are a powerful tool to investigate both the presence of an AGN and starburst-superwind activity. It is the aim of the present contribution to briefly review and discuss our finding of evidence for a hard X-ray component in the ROSAT PSPC spectrum of NGC 6240 (Schulz et al. 1998, SKBB hereafter) and the detection of luminous extended emission based on ROSAT HRI data (Komossa et al. 
1998) in combination with the discovery of an FeK line and hard X-ray component by ASCA (first reported by Mitsuda 1995). Luminosities given below were calculated using $`H_0=50`$ km/s/Mpc. ## 3 Scenarios to explain the luminous X-ray emission of NGC 6240 ### 3.1 Spectral properties and origin of the hard component In our analysis of ROSAT PSPC data of NGC 6240 we tested a large variety of models to explain the X-ray spectrum (SKBB). One-component fits turned out to be unlikely. E.g., a single Raymond-Smith model requires a huge absorbing column along the line-of-sight, the consequence being an intrinsic (absorption corrected) luminosity of $`L_{\mathrm{X},0.1-2.4}\simeq 4\times 10^{43}`$ erg/s, almost impossible to reach in any starburst-superwind scenario. The one model that does not require excess absorption is a single black body. Although physically implausible, this description does allow one to derive a lower limit on the intrinsically emitted X-ray luminosity which any model has to explain: $`L_\mathrm{X}\gtrsim 2.5\times 10^{42}`$ erg/s in the (0.1-2.4) keV band (Fricke & Papaderos 1996 obtained $`3.8\times 10^{42}`$ erg/s by fitting a thermal bremsstrahlung model and allowing for some excess absorption). Successful two-component models require the presence of a hard X-ray component, in the form of either very hot thermal emission ($`kT\simeq `$ 7 keV) or a powerlaw.<sup>2</sup>The requirement of a second component in the ROSAT band can be omitted if strongly depleted metal abundances are allowed for. This has been reported for other objects as well, and the low inferred X-ray abundances are quite puzzling given that other methods yield much higher abundances (as also stressed by Netzer at this meeting). A recent discussion of this issue is given in Komossa & Schulz (1998) and Buote & Fabian (1998), who give arguments in favour of two-component X-ray spectral models of $`\sim `$solar abundances in the Raymond-Smith component instead of single-component models of very subsolar abundances. In particular, Buote & Fabian conclude that the necessity of a second X-ray component cannot be circumvented by details of the modelling of the Fe L emission. Due to its large luminosity of several $`10^{42}`$ erg/s we interpret the hard component to arise from an AGN. Both the essential lack of non-X-ray evidence for an unobscured AGN and the high equivalent width of the FeK line observed by ASCA (e.g., Mitsuda et al. 1995) suggest we see the AGN mainly in scattered light. Indeed, the X-ray spectrum can be successfully described by a ‘warm scatterer’ (cf. Fig. 4 of Komossa et al. 1998), highly ionized material seen in reflection which could also explain the strong FeK line seen by ASCA. Our model, which was suggested to explain the hard component, is quite similar to the one suggested by Netzer et al. (1998) who, however, explained the whole ASCA spectrum in terms of scattering. In this respect, we emphasize that the widely extended X-ray component detected with the ROSAT HRI (Komossa et al. 1998; cf. next Sect.) is likely of different origin, given the low efficiency of a hugely extended scattering mirror (SKBB). With an X-ray luminosity in scattered emission of a few $`10^{42}`$ erg/s one obtains an intrinsic luminosity of order 10<sup>44-45</sup> erg/s, depending on the covering factor of the scatterer. Various fits of ASCA spectra (e.g., Mitsuda 1995, Kii et al. 1997, Iwasawa 1998, Netzer et al. 
1998, Nakagawa, these proceedings), revealing the extension of the hard component up to 10 keV, support this conclusion; the various approaches differ in the description of the soft component(s) and the amount of absorption of the hard component, though. In any case, the AGN contributes an appreciable fraction of the total $`L_{\mathrm{bol}}(\mathrm{NGC}\mathrm{\hspace{0.17em}6240})=4\times 10^{45}`$ erg/s. If $`L_{\mathrm{bol}}(\mathrm{AGN})\simeq L_{Edd}`$, a black hole mass of $`M_{\mathrm{bh}}\simeq 10^7M_{\odot }`$ results. NGC 6240 is expected to form an $`L_{\ast }`$ elliptical galaxy rather than a giant elliptical after having completed its merging epoch (Shier & Fischer 1997). However, to match the relation $`M_{\mathrm{bh}}\simeq 0.002M_{\mathrm{gal}}`$ (Lauer et al. 1997) for the evolved elliptical, the black hole still has to grow by an order of magnitude, which would require another $`10^9`$ yrs of accretion while the merger is settling down. Alternatively, the present accretion rate could be below the Eddington rate. The inferred X-ray luminosity is an appreciable fraction of the FIR luminosity, suggesting that both the starburst (e.g., Lutz et al. 1996) and the AGN power the FIR emission of this ULIRG. ### 3.2 Extended emission The HRI images (Komossa et al. 1998) reveal that part of the huge X-ray luminosity arises in a roughly spherical source with strong ($`2\sigma `$ above background) emission out to a radius of 20<sup>′′</sup> ($`\simeq `$14 kpc; Fig. 1). Hence, NGC 6240 is the host of one of the most luminous extended X-ray sources in isolated galaxies (see Fig. 2, where $`L_\mathrm{X}`$ is compared with a sample of elliptical galaxies and Arp 220). Analytical estimates based on the Mac Low & McCray (1988) models show that the extended emission can be explained by superwind-shell interaction from the central starburst (SKBB). A puzzle is the high circular symmetry of the X-ray bubble, in contrast to the bicone symmetry expected in a wind-driven supershell scenario, so it seems worthwhile to look for further potential contributors to the X-ray emission. An additional small contribution may come from a wind induced by the large velocity dispersion of 350 km/s (Lester & Gaffney 1994), leading to shocks in the gas expelled by the red giant population. Another interesting point is that the extended X-ray bubbles around elliptical galaxies are usually brighter in the inflow phases or ‘when caught on the verge of experiencing their central cooling catastrophe’ (Ciotti et al. 1991, Friaca & Terlevich 1998). Although time scales and details for an ongoing merger are certainly different, it is conceivable that NGC 6240 experiences a lack of heating when a major starburst period has ended. In this case, a cooling flow would commence, boosting $`L_\mathrm{X}`$ and presumably shock heating the ISM in the central kiloparsecs. Due to fragmentation, shock velocities could be enhanced, causing the LINER-like line ratios in the two nuclei (gravitational centers) and, with lower velocities, exciting the molecular cloud complex between the nuclei, leading to the extreme H<sub>2</sub> luminosity found there (van der Werf et al. 1993). Its exceptional X-ray properties make NGC 6240 a prime target for future X-ray satellites like XMM and AXAF. ###### Acknowledgements. St.K. acknowledges support from the Verbundforschung under grant No. 50 OR 93065 and thanks the organizers for the very efficient workshop and the pleasant workshop atmosphere. To appear in the proc. 
of the Ringberg workshop on ‘Ultraluminous Galaxies: Monsters or Babies’ (Ringberg castle, Sept. 1998); Ap&SS, in press
# Hard X-ray emitting black hole fed by accretion of low angular momentum matter ## 1 Introduction There is general agreement that the observed properties of galactic black hole candidates and of active galactic nuclei (AGN) could be best explained in the framework of accretion disks around black holes. However, no theoretical accretion disk model could explain all the basic properties of these sources. In particular, the best known, standard Shakura & Sunyaev (1973) disk model (SSD) predicts a temperature of the accreted matter far too low to explain the hard X-ray emission ($`h\nu \gtrsim 10`$ keV) that is observed. The observed hard X-ray emission could be explained by postulating the existence of a very hot plasma, with electron temperature $`T_e\sim 10^9`$ K, in which soft photons emitted by the SSD are boosted to higher energies by the inverse Compton effect. The question is, how does such a hot plasma form in black hole accretion flows? The very popular disk-corona (DC) model (e.g., Liang & Price 1977; Haardt & Maraschi 1993) postulates the existence of the $`T_e\sim 10^9`$ K plasma in the form of a hot corona above the cold disk. Because the physics of DC models is largely ad hoc, a typical specific DC model contains a set of free tunable parameters. In the Shapiro, Lightman & Eardley (1976, SLE) hot, optically thin accretion disk model, ions are heated by viscous dissipation of their orbital energy, and inefficiently cooled by the Coulomb interaction with electrons. Thus, the ions have a temperature close to the virial temperature, $`10^{11}`$–$`10^{12}`$ K. Electrons are very efficiently cooled by a variety of radiative mechanisms, and this reduces their temperature to about $`10^9`$–$`10^{10}`$ K, which is sufficient for an explanation of the hard X-ray radiation. The SLE model is, however, violently thermally unstable and therefore cannot describe real objects. Detailed models of black hole optically thin accretion flows in which cooling is dominated by advection (ADAF) have recently been constructed in many papers (see recent reviews in Abramowicz, Björnsson & Pringle 1998 and Kato, Fukue & Mineshige 1998). ADAFs are hot, with an electron temperature of about $`10^9`$–$`10^{10}`$ K, and underluminous, $`L\ll L_{Edd}`$. Here $`L_{Edd}=1.3\times 10^{38}(M/M_{\odot })`$ erg s<sup>-1</sup> is the Eddington luminosity corresponding to the black hole mass $`M`$. No ADAF solution is possible above a limiting accretion rate that is roughly $`0.1\dot{M}_{Edd}=0.1L_{Edd}/c^2`$. In some black hole sources, however, observations point to accretion rates and radiative efficiencies that are much too high for the standard ADAF model to explain (see the review in Szuszkiewicz, Malkan & Abramowicz 1996). On the theoretical side, no satisfactory model of an SSD-ADAF transition has been worked out, and thus, in order to account for the presence of cold matter together with hot ADAF plasma, one still uses phenomenological arguments (see Kato & Nakamura 1998). A model of an inhomogeneous inner region of an accretion disk was proposed by Krolik (1998). In this model the accretion flow consists of clouds moving in a hot, magnetized intercloud medium, which can in principle explain the emission of hard X-rays. Such a structure could result from a dynamical photon bubble instability in radiation pressure supported disks (Spiegel 1977; Gammie 1998). Due to the significant complexity of describing the inhomogeneous medium, the model contains a number of phenomenological assumptions. 
In particular, the assumption of stability of such a configuration is questionable. Motivated by the above difficulties of the existing models for black hole accretion flow in explaining the co-existence of the hot and cold components, we have constructed a new model that has the following properties. (1) Both very hot, hard X-ray emitting gas and sufficiently cool gas, consistent with the detection of the fluorescent iron line, are present very close to the central black hole. (2) A significant part (up to $`50\%`$) of the total luminosity ($`\sim 10^{-1}`$–$`10^{-2}L_{Edd}`$) of the object is emitted in hard X-rays. (3) The flow is stationary and globally stable for large accretion rates ($`\dot{M}\lesssim \dot{M}_{Edd}`$). The key element of the model is that the accreted matter initially has a low angular momentum. We keep in mind two kinds of objects in which low angular momentum accretion onto a black hole may occur. First, black holes which are fed by accretion from the wind blowing off an OB star in binary systems (Illarionov & Sunyaev 1975a). The most popular object of this kind is the X-ray binary Cygnus X-1 (see Liang & Nolan 1984). Second, luminous X-ray quasars and AGN, where the central supermassive black hole is fed by the matter lost from the stars of a slowly rotating central stellar cluster (Illarionov 1988). ## 2 Low angular momentum accretion Let us consider matter with a low characteristic specific angular momentum $`\overline{\mathrm{\ell }}`$, accreted quasi-spherically onto the central black hole. By low $`\overline{\mathrm{\ell }}`$ we mean a value which corresponds to a Keplerian orbit with radius $`r_0=2\overline{\mathrm{\ell }}^2/r_gc^2`$ in the range $`3r_g<r_0\lesssim 100r_g`$. Here $`r_g=3\times 10^5(M/M_{\odot })`$ cm is the gravitational radius of a black hole with the mass $`M`$. Matter with angular momentum smaller than that in the indicated range could not form an accretion disk around the Schwarzschild black hole because the innermost stable orbit around such a hole is located at $`r_{ms}=3r_g`$. Matter with $`\overline{\mathrm{\ell }}`$ higher than in the indicated range could form an accretion disk that extends from the black hole to large radial distances, $`r\gtrsim 100r_g`$: this would correspond to the previously studied SSD. At large radii, $`r\gg r_0`$, the low angular momentum accretion flow closely resembles spherical Bondi accretion (Bondi 1952). Approximate models of spherical accretion onto a luminous X-ray central source were studied by several authors (e.g. Ostriker et al. 1976; Bisnovatyi-Kogan & Blinnikov 1980). It was shown by Igumenshchev, Illarionov & Kompaneets (1993) that inside the Compton radius $`r_C=10^4(10^8K/T_C)r_g`$, where the Compton temperature $`T_C`$ is determined by the ‘average’ photon energy of the source, accretion is almost spherical and supersonic. Our model shows that at smaller radii, $`r\lesssim r_0`$, the flow significantly deviates from spherical accretion. Fluid elements tend to cross the equatorial plane at the radius which corresponds to the Keplerian orbit for the angular momentum of the element. This leads to the formation of shocks above and below the equatorial plane at $`r\lesssim r_0`$. By crossing the shocks, protons reach the virial temperature $`T_p\sim 10^{12}(r_g/r)`$ K. 
In the presence of soft photons from the thin accretion disk, electrons in the post-shock region are efficiently cooled by inverse Comptonization. These cold electrons also efficiently cool the protons via Coulomb collisions. Such a plasma undergoes runaway cooling due to the intense bremsstrahlung-Compton processes at layers where the Compton $`y`$-parameter reaches the order of unity. At these layers protons lose most of their thermal energy, and the plasma condenses into the thin and cold disk. The thin disk spreads to very large radii, $`r\gg r_0`$, due to viscous diffusion (von Weizsäcker 1948, also see Pringle 1981) from the region of condensation at $`r\lesssim r_0`$. It is convenient to consider two different parts of the thin disk. The inner part, $`r\lesssim r_0`$, is an accretion disk, where matter moves inward and angular momentum is transported outward. Matter mainly enters the black hole through this accretion disk, which means that the radiative efficiency of the system is as high as in the standard SSD model ($`\simeq 6\%`$ for the Schwarzschild black hole). At the outer part of the disk, $`r\gtrsim r_0`$, angular momentum is transported outward, removing its excess from the accretion flow to large radii. ## 3 Numerical method To study the details of the model briefly described in the previous Section, we have simulated the low angular momentum accretion of weakly magnetized plasma onto a black hole with the help of two-dimensional time-dependent hydrodynamical calculations. Our code is based on the PPM hydrodynamical scheme (Colella & Woodward 1984), and solves the non-relativistic Navier-Stokes equations in spherical coordinates, assuming azimuthal symmetry. We separately calculate the internal energy balance of electrons and protons for the electron-proton plasma. The energies of electrons and protons are coupled through Coulomb collisions. Electrons are cooled by bremsstrahlung and Compton mechanisms. The cold and thin accretion disk at the equatorial plane emits the soft photons needed for the Compton cooling. The thickness of the disk is not resolved in our models. We approximately calculate the energy release in the disk, and the corresponding radiation flux, which directly affects the efficiency of the Compton cooling, using the standard SSD model. In the model, the accretion rate in the cold disk equals the condensation rate of the hot plasma into the disk, which is directly calculated in the numerical simulations. We neglect multiple photon scattering when calculating the Compton cooling of the plasma. This approximation is quite reasonable for the optically thin flows ($`\tau \lesssim 1`$) which are characteristic of our models. We do not solve the transfer equation for the radiation emitted by the thin disk. However, we calculate the radiation density (which is used to find the plasma energy losses) at each point above and below the disk by neglecting the absorption of photons. In our simulations, protons are heated by shocks, adiabatic compression and viscous dissipation. We take into account all the components of the viscous stress tensor corresponding to shear in all directions. The bulk viscosity is not considered. The kinematic viscosity coefficient is taken in the standard $`\alpha `$-parameterization form: $`\nu =\alpha c_s^2/\mathrm{\Omega }_K`$, where $`\alpha `$ is a constant, $`c_s`$ is the isothermal sound speed, and $`\mathrm{\Omega }_K=(r_g/r)^{3/2}c/\sqrt{2}r_g`$ is the Keplerian angular velocity. We use $`\alpha =0.1`$ in the numerical models. 
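As a small illustration of this prescription (ours; the sound speed value below is an arbitrary placeholder, not taken from the simulations), $`\mathrm{\Omega }_K`$ and $`\nu `$ can be evaluated as follows:

```python
import numpy as np

C = 3.0e10                              # speed of light [cm/s]

def gravitational_radius(m_msun):
    # r_g = 3e5 (M / M_sun) cm, as defined in Sec. 2.
    return 3.0e5 * m_msun

def omega_K(r, rg):
    # Keplerian angular velocity: Omega_K = (r_g/r)^(3/2) c / (sqrt(2) r_g).
    return (rg / r) ** 1.5 * C / (np.sqrt(2.0) * rg)

def nu_viscosity(c_s, r, rg, alpha=0.1):
    # Standard alpha parameterization: nu = alpha c_s^2 / Omega_K.
    return alpha * c_s ** 2 / omega_K(r, rg)

# Example: a 10 M_sun black hole, r = 30 r_g, isothermal sound speed 10^9 cm/s.
rg = gravitational_radius(10.0)
print(nu_viscosity(1.0e9, 30.0 * rg, rg))   # kinematic viscosity [cm^2/s]
```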
No artificial numerical viscosity was used in the calculations. At the outer boundary we assume a supersonic matter inflow with a spherically symmetric density distribution and the specific angular momentum distributed in the polar direction consistently with rigid rotation: $`\mathrm{\ell }(\theta )=\mathrm{\ell }_{max}\mathrm{sin}^2(\theta )`$. The parameter $`\mathrm{\ell }_{max}`$ determines (roughly) the maximum radius, $`r_d=2\mathrm{\ell }_{max}^2/r_gc^2`$, inside which the condensation of the hot plasma to the thin disk takes place. For convenience, we will use the parameter $`r_d`$ rather than $`\mathrm{\ell }_{max}`$ when describing the results of the numerical simulations. At the inner boundary, $`r_{in}=3r_g`$, and at the equatorial plane, where the condensation occurs, we assume total absorption of the inflowing matter. ## 4 Results and discussions We follow the evolution from an initial state until a time when a stationary flow pattern is established. The models show a strong dependence on two parameters: the accretion rate $`\dot{M}`$ and $`r_d`$. We have calculated several models with different $`\dot{M}`$ and $`r_d`$. Two examples of stationary accretion flows with $`r_d=30r_g`$ and two different $`\dot{M}=0.25`$ and $`0.5\dot{M}_{Edd}`$ are presented in Fig. 1. One can clearly see the two stationary shock structures which develop in each model. The structures are more compact for larger $`\dot{M}`$. The nature of the inner shock, at radial distances $`r<r_d`$ above and below the equatorial plane, has already been discussed in Section 2. In these shocks, the supersonic accretion flow is slowed down before condensation to the thin disk. We expect that, in the hot post-shock plasma, the inverse Comptonization of soft photons emitted from the thin disk provides the main contribution to the hard X-ray luminosity of this type of accretion flow. In this region the maximum proton temperature is $`T_p\sim 10^{11}`$ K and the electron temperature is $`T_e\sim 10^9`$ K in both models shown in Fig. 1. The distribution of $`T_e`$ in the post-shock region is quite uniform. Note that our calculations do not resolve the regions close to the thin disk surface, where the main part of the energy of the condensing matter is released, and where the Comptonized spectrum is formed. We are limited in spatial resolution by the sizes of the numerical cells, and the scale of the condensation region is much smaller than the cell size. This lack of resolution can be demonstrated by an estimate of the $`y`$-parameter, calculated as the integral $`y=\int (kT_e/m_ec^2)𝑑\tau `$, where $`\tau `$ is the Thomson optical depth and the integration is taken in the $`z`$-direction from the equatorial plane to the outer boundary. For the larger accretion rate model, $`\dot{M}=0.5\dot{M}_{Edd}`$, the maximum value of $`y`$ is only of order $`0.1`$, whereas the optical depth in the vertical direction does not exceed $`0.5`$. The spectrum of the escaping radiation is most probably formed at layers with $`y\sim 1`$ and $`\tau \sim 1`$, and its hardness is determined by the temperature of these layers. To resolve the layers where the spectrum is formed, one should increase the resolution of the numerical scheme by more than a factor of ten in comparison with the present one (we currently use $`n_r\times n_\theta =150\times 100`$). As an alternative approach we propose to study the vertical structure of the condensation region using a one-dimensional numerical scheme with the boundary conditions taken from the two-dimensional models. 
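The $`y`$-parameter estimate is simple to reproduce on a vertical grid column; a minimal sketch (the profiles below are toy placeholders of our own, not the simulation data):

```python
import numpy as np

SIGMA_T = 6.652e-25    # Thomson cross-section [cm^2]
K_B = 1.381e-16        # Boltzmann constant [erg/K]
ME_C2 = 8.187e-7       # electron rest energy m_e c^2 [erg]

def compton_y(z, T_e, n_e):
    # y = int (k T_e / m_e c^2) d tau, with d tau = sigma_T n_e dz,
    # integrated along the vertical (z) direction by the trapezoidal rule.
    f = (K_B * T_e / ME_C2) * SIGMA_T * n_e
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

# Toy vertical profiles above the disk (illustrative values only):
z = np.linspace(0.0, 1.0e9, 400)             # height [cm]
T_e = 1.0e9 * np.exp(-z / 5.0e8)             # electron temperature [K]
n_e = 3.0e15 * np.exp(-z / 3.0e8)            # electron number density [cm^-3]

tau = np.sum(0.5 * SIGMA_T * (n_e[1:] + n_e[:-1]) * np.diff(z))
print(tau, compton_y(z, T_e, n_e))           # roughly 0.6 and 0.06 here
```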
Radiative shocks of the discussed type could be thermally unstable (see Saxton et al. 1998), and their instabilities may drive quasi-periodic oscillations with a time scale of order $`10r_g/c`$, which could explain the observed high frequency variability of the X-ray flux (Cui et al. 1997). The shapes of the outer shock depend mainly on $`\dot{M}`$ and vary from an oblate spheroid to a torus (see Fig. 1). The formation of this shock is connected with the presence of the centrifugal barrier: supersonically moving matter is stopped at $`r\sim r_d`$ by the centrifugal force. The possibility of such centrifugally driven shocks in accretion flows was first pointed out by Fukue (1987). In the pre-shock region, between the outer boundary and the outer shock, the viscous transport of angular momentum is not important due to the supersonic accretion velocities. The slowing down of the matter inflow at the outer shock makes the viscous transport of angular momentum important in the outer post-shock region. The angular momentum is effectively transported outward there. This outward transport, from the rapidly rotating inner parts to the slowly rotating outer ones, is balanced by the inward advection of angular momentum by the inflowing matter. This balance, as well as the efficient cooling of the plasma via the Compton mechanism, plays a significant role in limiting the size of the outer post-shock region and in the creation of the stationary shock structures. We found that when the Compton cooling is (artificially) switched off, no stationary solutions of this type are possible. The dependence of the flow structure on the parameters $`r_d`$ and $`\dot{M}`$ can be used to explain the observed spectral variability of galactic black hole candidates. For example, the change of spectral state from ‘hard’ to ‘soft’ in the case of Cygnus X-1 (see Tanaka & Lewin 1995) can be explained in terms of a variation of $`r_d`$. In the hard state (small $`r_d`$) most of the accreting matter goes through the inner shocks at radius $`\sim 10r_g`$. In this case a significant part of the radiation emerges in hard X-rays. In the soft state (large $`r_d`$) the accreting matter joins the thin disk at radii $`\sim 100r_g`$. Close to the black hole, accretion then proceeds through the thin disk only. Obviously, hard X-ray emission is suppressed in this case. Both parameters $`r_d`$ and $`\dot{M}`$ are very sensitive to the conditions of mass exchange between binary companions in the case of wind-fed accretion (Illarionov & Sunyaev 1975b). A small change in the velocity of the wind of an OB star companion leads to a significant variation of both $`r_d`$ and $`\dot{M}`$, with amplitudes that could explain the observed variability. ## 5 Conclusion A new model of accreting black holes has been proposed, which self-consistently explains the existence of a two-component (hot and cold) plasma in the vicinity of black holes. The model assumes that at large radial distances the accretion flow has a quasi-spherical geometry and a low characteristic angular momentum. The main parameters of the model are the accretion rate and the amount of angular momentum carried by the accretion flow. The model can naturally explain, without phenomenological assumptions, the origin of the hard X-ray excess observed from stellar-mass black holes (Galactic black hole candidates), as well as from supermassive black holes (quasars and AGN). 
More hydrodynamical and radiative transfer simulations are needed to construct detailed spectra of accreting black holes, which can then be compared with observations. This study was supported in part by a grant from the Royal Swedish Academy of Sciences, by RFBR grant 97-02-16975, and by NORDITA under the Nordic Project ‘Non-linear phenomena in accretion disks around black holes’. We would like to thank the referee for many helpful suggestions.
no-problem/9902/hep-th9902088.html
ar5iv
text
## 1 Introduction The holographic principle proposes that the maximum number of degrees of freedom in a volume is proportional to the surface area. This principle is based on earlier studies by Bekenstein of maximum entropy bounds within a given volume. One argument used to motivate the holographic principle is as follows. Consider a region of space with volume $`V`$, bounded by an area $`A`$, which contains an entropy, $`S`$, and assume that this entropy is larger than that of a black hole with the same surface area. Now throw additional energy into this region to form a black hole. Assuming that the Bekenstein-Hawking formula, $`S=A/4`$, actually gives the entropy of the black hole, we conclude that the generalized second law of thermodynamics has been violated. To avoid this contradiction, the holographic principle proposes that the entropy inside a given region must satisfy $`S/A<1`$. However, this line of reasoning implicitly assumes that the black hole forms in an otherwise static background. In what follows, we examine how this argument changes in more general, time-dependent, spacetimes, such as those encountered in cosmology. We argue that the holographic bound is replaced by the simple requirement that physics respects the generalized second law of thermodynamics. For static backgrounds this reduces to the holographic bound but, in general, the maximum entropy permitted inside a region is not related to its area by a simple formula. Fischler and Susskind have proposed a generalization of the holographic principle to certain cosmological backgrounds. This proposal has been studied further in Refs. . For flat and open universes with time independent equations of state, we find that their bound is in accord with the generalized second law, and we propose a refinement of the Fischler-Susskind bound that applies to an inflationary universe after reheating. Fischler and Susskind found that closed universes violate their cosmological holographic bound, and speculated that such backgrounds were either inconsistent, or that new behavior sets in as the bound is violated. We argue that the evolution of closed universes does not violate the generalized second law, and that such backgrounds are thus self-consistent. A related problem is the application of the holographic principle to a volume inside the event horizon of a black hole. The naïve holographic bound can easily be violated in such a region. Although this evolution respects the generalized second law, it appears that the price an observer pays for violating the holographic bound is to eventually encounter a curvature singularity. However, it is possible for this fate to be delayed for cosmologically long time scales. ## 2 The Story So Far Fischler and Susskind realized that while the holographic bound, $`S/A<1`$, applies to an arbitrary region in the static case, its application to cosmological spacetimes is more subtle. Specifically, the homogeneous energy density, $`\rho `$, of simple cosmological models implies a homogeneous entropy density, $`s`$. Inside a (spatial) volume $`V\sim R^3`$, the total entropy is $`S=sV`$, but $`S/A\sim sR`$. Consequently, for a fixed $`s`$ it is always possible to choose a volume large enough to violate the holographic bound. Fischler and Susskind resolve this problem by stipulating that the holographic bound only applies to regions smaller than the cosmological (particle) horizon, which corresponds to the forward light-cone of an event occurring at (or infinitesimally after) the initial singularity.
The comoving distance to the horizon, $`r_H`$, is $$r_H=\int _0^t\frac{1}{a(t^{\prime })}dt^{\prime }$$ (1) while the corresponding physical distance is $$d_H=a(t)r_H=a(t)\int _0^t\frac{1}{a(t^{\prime })}dt^{\prime }.$$ (2) Here $`a(t)`$ is the scale factor of the Robertson-Walker metric, and obeys the evolution equations $$\left(\frac{\dot{a}}{a}\right)^2=H^2=\frac{\rho }{3}-\frac{k}{a^2}$$ (3) $$\frac{\ddot{a}}{a}=-\frac{\left(\rho +3p\right)}{6}$$ (4) where $`k`$ takes the values $`\pm 1`$ and 0, for solutions with positive, negative and zero spatial curvature. For a perfect fluid, in a flat ($`k=0`$) universe, whose pressure and density satisfy $`p=\omega \rho `$, the solution of equations (3) and (4) is straightforward $$a(t)=a_0\left(\frac{t}{t_0}\right)^q,\qquad q=\frac{2}{3}\frac{1}{1+\omega }.$$ (5) In particular, if $`\omega =0`$ we recover the equation of state for dust, while $`\omega =1/3`$ is the appropriate value for a hot (relativistic) gas or radiation. In general, $$d_H=\frac{t}{1-q}.$$ (6) The comoving entropy density is constant, so with $`k=0`$ it follows that when measured over the horizon volume, $$\frac{S}{A}\propto t^{1-3q}.$$ (7) If $`q<1/3`$ ($`\omega >1`$) the holographic bound is violated at late times but, as Fischler and Susskind explain, such a cosmological model is not viable since a perfect fluid with $`\omega >1`$ has a speed of sound greater than the speed of light. In realistic cosmological models the equation of state is far from that of a perfect fluid with constant $`\omega `$. Even simple models of the big bang combine dust and radiation and make a transition between $`\omega =1/3`$ and $`\omega =0`$, since the energy density of radiation drops faster than the density of dust as the universe expands. More importantly, during an inflationary epoch in the primordial universe $`\ddot{a}`$ is, by definition, positive; so by equation (4) the pressure must be negative, with $`\omega <-1/3`$. One of the original motivations for inflation was that it endows the primordial universe with a substantial entropy density. Inflationary models generate entropy after inflation has finished, when energy is transferred from the scalar field which drove the inflationary expansion to radiation and ultra-relativistic particles. This process is referred to as reheating, and the equation of state usually changes from $`\omega <-1/3`$ to that of a radiation dominated universe whose subsequent evolution is described by the “standard” model of the hot big bang. The comoving entropy density is not constant, and $`S/A`$ is thus a more complicated function of time than it is in models with constant $`\omega `$. The maximum temperature, $`T`$, attained after inflation is model dependent, and the resulting entropy density is proportional to $`T^3`$ only if we assume a relativistic gas. Inflation can make the cosmological horizon arbitrarily large; for instance in a class of realistic models it may be $`10^{1000}`$ times greater than in the absence of inflation. Applying the original Fischler-Susskind formulation of the holographic principle leads to a value of $`S/A`$ massively greater than unity for almost any realistic inflationary model. This difficulty is noted by Rama and Sarkar, and they propose various smaller volumes over which to measure the entropy. In general, their formulation is not consistent with the one we propose in the next section.
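As a quick numerical illustration of equations (5)-(7) (our own check, not part of the original argument), the following sketch evaluates the expansion exponent $`q`$, the horizon distance $`d_H`$, and the scaling exponent of $`S/A`$ for a few constant equations of state:

```python
# Scaling relations for a flat FRW universe with constant equation of state
# p = w*rho: a ~ t^q with q = 2/(3(1+w)), d_H = t/(1-q), S/A ~ t^(1-3q).

def q_exponent(w):
    return 2.0 / (3.0 * (1.0 + w))

for name, w in [("dust", 0.0), ("radiation", 1.0/3.0), ("stiff w>1", 1.5)]:
    q = q_exponent(w)
    sa_exp = 1.0 - 3.0*q
    trend = "grows (bound violated at late times)" if sa_exp > 0 else "decays"
    print(f"{name:10s} w={w:4.2f}  q={q:.3f}  d_H={1/(1-q):.2f} t  "
          f"S/A ~ t^{sa_exp:+.2f} -> {trend}")
```

Dust and radiation give decaying $`S/A`$, while any fluid with $`\omega >1`$ gives a growing ratio, reproducing the violation noted above.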
## 3 Holography and the Generalized Second Law One of the initial motivations for the holographic principle was based on the generalized second law of thermodynamics. The generalized second law states $$\delta S_{mat}+\delta S_{bh}\geq 0,$$ (8) where $`S_{mat}`$ is the entropy of matter outside black holes, and $`S_{bh}`$ is the Bekenstein-Hawking entropy of the black holes. This law has not been proven, but is expected to follow from most of the current approaches to quantum gravity and has been tested in many non-trivial situations. Assuming that this law is correct, the holographic principle follows from it if we consider a region of space embedded in an approximately static background (such as Minkowski space, or anti-de Sitter space), as discussed in the introduction. Our main interest is to study the formulation of holographic style bounds in a general background. In time dependent situations, we propose that the general principle which replaces the holographic principle is simply that the generalized second law of thermodynamics holds. In the examples considered below, we assume that changes are quasi-static, so that at all times the entropy is maximized to an arbitrarily good approximation, subject to constraints. In these situations we can make the stronger statement: for all time, the entropy is maximized subject to the constraints. For volumes embedded in certain backgrounds we may use these principles to deduce holographic style bounds on the entropy, but this does not appear to be possible in general. We will now illustrate these observations with a number of examples. ### 3.1 Flat Universe Let us consider isotropic, homogeneous and spatially flat cosmologies. The comoving entropy density in these models is constant. (This discussion, like that of Fischler and Susskind, assumes that a flat or open universe necessarily expands indefinitely, while a closed universe recollapses in a finite time. However, a spatially flat or open universe with a negative vacuum energy density (cosmological constant) can recollapse, just as a positive vacuum energy can cause a closed universe to expand indefinitely. Our discussion can easily be generalized to these cases.) In order to formulate a holographic bound, we need to introduce a length scale that defines the size of the spatial region under consideration. We argue that the relevant length scale is the Hubble length $`1/H`$. Physically, $`1/H`$ is the distance at which a point appears to recede at the speed of light, due to the overall expansion of the universe. To see that this is the relevant length scale, consider a small gravitational perturbation of this background. Small perturbations to a spatially flat, homogeneous and isotropic universe with wavelengths larger than $`1/H`$ do not grow with time, provided the equation of state remains constant. If perturbations do not grow, black holes cannot form, $`\delta S_{bh}=0`$, and the generalized second law reduces to $`\delta S_{mat}\geq 0`$, which is satisfied by any physically reasonable equation of state. Perturbations with wavelengths shorter than the Hubble length will tend to collapse and form black holes via the Jeans instability. Thus consistency with the generalized second law suggests a holographic bound may hold for regions smaller than the Hubble volume. If $`\omega `$ is constant the particle horizon, $`d_H`$, and $`1/H`$ are related to one another by a factor of order unity, and we recover the Fischler-Susskind formulation.
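To make the last statement explicit (a small check we add here, valid under the constant-$`\omega `$ assumptions of equation (5)): since $`H=q/t`$ for $`a\propto t^q`$, the ratio of the particle horizon to the Hubble length is

$$\frac{d_H}{1/H}=\frac{t/(1-q)}{t/q}=\frac{q}{1-q},$$

which equals $`1`$ for radiation ($`q=1/2`$) and $`2`$ for dust ($`q=2/3`$), indeed of order unity.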
However, if inflation has taken place the particle horizon is much larger than $`1/H`$, which depends only on the instantaneous expansion rate, and not on the integrated history of the universe. As we will see later, the Hubble volume is the relevant region to consider in this case. In principle, any initial value of $`S/A`$ is consistent with the generalized second law. However, if $`S/A<1`$ initially, this condition is satisfied at all later times, provided $`\omega `$ is fixed and less than unity. With some additional assumptions, we can also bound $`S/A`$ at early times. As an example, consider the energy density and entropy density of a relativistic gas at temperature $`T`$: $$\rho =\frac{\pi ^2}{30}n_{*}T^4,$$ (9) $$s=\frac{2\pi ^2}{45}n_{*}T^3,$$ (10) where $`n_{*}`$ is the number of bosonic degrees of freedom plus $`7/8`$ times the number of fermionic degrees of freedom. Using equation (3) to relate $`\rho `$ and $`H`$, we find $$\frac{S}{A}\propto \sqrt{n_{*}}T,$$ (11) up to constant factors. Since the density must be less than unity for quantum gravitational corrections to be safely ignored, the maximum temperature is proportional to $`n_{*}^{-1/4}`$, and the maximum value of $`S/A`$ inside a Hubble volume is proportional to $`n_{*}^{1/4}`$. Violating $`S/A<1`$ significantly at a sub-Planckian energy therefore requires an enormous value of $`n_{*}`$. Thus, in the absence of fine tuning the holographic bound originally proposed by Fischler and Susskind is satisfied at all post-Planckian times. ### 3.2 Closed Universe For an isotropic closed universe with fixed equation of state, Fischler and Susskind found that even if $`S/A<1`$ initially on particle horizon sized regions, it could be violated at later times. This violation is possible even while the universe is still in its expansion phase. On the other hand, the generalized second law is expected to hold over regions the size of the particle horizon in a closed universe. To be definite, suppose this violation occurs while the universe is still in its expansion phase. One certainly expects that a region with an excessive entropy density could begin to collapse via the Jeans instability and form a black hole. However, if the size of this region is similar to that of the particle horizon, its collapse will necessarily take at least a Hubble time. Consequently, a violation of the holographic bound of Fischler and Susskind remains consistent with the second law for cosmologically long time-scales. Of course, in order to find $`S/A>1`$ well before the closed universe reaches its final singularity, we must consider a volume that is a substantial fraction of the overall universe. The collapse of this region into a single black hole is not a small perturbation of the background Friedmann universe. Thus, the evolution equations for the unperturbed universe are not expected to accurately describe the entropy density of the collapsing region. ### 3.3 Open Universe The behavior of isotropic open (negative spatial curvature) universes is similar to that of isotropic flat universes. If $`S/A<1`$ initially, it remains so at later times. An argument that $`S/A<1`$ remains valid at earlier times can likewise be made in a similar way to the flat case. ### 3.4 Inside a Black Hole Another interesting time dependent background is the region inside the event horizon of a black hole. The generalized second law will apply to a spatial volume inside the event horizon if the volume is out of thermal contact with other regions.
However, it is straightforward to argue that the entropy can exceed the surface area in such a volume whose size is on the order of the horizon size. Consider a large ball of gravitationally collapsing dust. The entropy of the ball is approximately constant during the course of the collapse. However, the size of the ball can contract to zero at the singularity, giving rise to a violation of the holographic bound. We see no reason why an observer inside such a region should not be able to actually measure a violation of the holographic bound. A direct measurement is difficult since the observer will typically hit the singularity within a light-crossing time of the black hole horizon. However, if the observer has the additional information that the entropy density is constant, s/he can infer a violation of holography via local measurements. ### 3.5 The Inflationary Universe We can view an inflationary universe as a Friedmann universe with a time dependent equation of state. During the reheating phase at the end of inflation there is a sharp change in the equation of state, as energy is transferred from the inflaton field to radiation (or ultra-relativistic particles). This raises the entropy of the universe in a homogeneous way. After this sudden increase in the entropy density it is possible to violate Fischler and Susskind’s bound when it is applied to regions the size of the particle horizon. Of course, a sharp homogeneous increase in the entropy density is permitted by the generalized second law. The process of reheating is model dependent. To simplify the discussion, assume that reheating takes place instantaneously. After reheating, the equation of state need not change significantly. The post-inflationary universe closely resembles a homogeneous and isotropic universe which never inflated, with the exception that the particle horizon of the universe after inflation is much larger than that of a universe which did not pass through an inflationary phase. We may therefore adopt the results for Friedmann universes with a constant equation of state. The post-inflationary universe is accurately approximated by an isotropic Friedmann spacetime, so if $`S/A<1`$ when measured over a Hubble volume at the end of inflation, this inequality will continue to be satisfied at later times. Moreover, immediately after reheating the energy density is typically well below the Planck scale, so $`S/A\ll 1`$ in the absence of extreme fine-tuning, as discussed above. This bound differs from that of Rama and Sarkar, and we obtain no specific constraints on inflationary models beyond the usual assumption that the energy density is sub-Planckian during and after inflation. ## 4 Conclusions We have proposed that the holographic principle should be replaced by the generalized second law of thermodynamics in time dependent backgrounds. In static backgrounds, the generalized second law reduces to the holographic principle of ’t Hooft and Susskind. In cosmological backgrounds corresponding to isotropic flat and open universes with a fixed equation of state, the second law implies the entropy bound of Fischler and Susskind over regions the size of the particle horizon. However, for closed universes, and inside black hole event horizons, a useful holographic bound cannot be deduced from the second law. Finally, we proposed a modified version of the holographic bound which applies to spatial regions of the post-inflationary universe that are smaller than the Hubble volume.
## Acknowledgments We thank Robert Brandenberger for useful comments. This work was supported in part by DOE grant DE-FE0291ER40688-Task A.
no-problem/9902/hep-ph9902479.html
ar5iv
text
# PROTON STRUCTURE IN TRANSVERSE SPACE AND THE EFFECTIVE CROSS SECTION ## I Introduction While parton distributions represent the relevant non perturbative input for most large $`p_t`$ processes at present energies, they do not exhaust the information on the hadron structure. In fact the information in the parton distributions corresponds to the average number of partons of a given kind and with a given momentum fraction $`x`$ which is seen when probing the hadron with the resolution $`Q^2`$. In this respect a qualitative change occurred when it was shown that the observation of hadronic interactions with multiple parton collisions is experimentally feasible. In a multiparton collision event different pairs of partons interact independently at different points in the transverse plane and the process, as a consequence, depends in a direct way on the actual distribution of the interacting partonic matter in transverse space. The simplest case is the double parton scattering. The non perturbative input to the process is the double parton distribution $`D_2(x,x^{\prime };𝐛)`$, depending on the momentum fractions of the interacting partons and on their relative distance in the transverse plane $`𝐛`$. If all partons are uncorrelated and the dependence on the different degrees of freedom is factorized, one may write $`D_2(x,x^{\prime };𝐛)=g_{eff}(x)g_{eff}(x^{\prime })F(𝐛)`$, where $`g_{eff}(x)`$ is the usual effective parton distribution, namely the gluon plus 4/9 of the quark and anti-quark parton distributions. When the interacting hadrons are identical, $`F(𝐛)`$ is equal to the overlap in the transverse plane of the matter distributions of the two hadrons (normalized to one) as a function of $`𝐛`$, which now represents the impact parameter. The effective cross section $`\sigma _{eff}`$ is introduced by the inclusive double scattering process, which is proportional to $`1/\sigma _{eff}=\int d^2b\,F^2(𝐛)`$. The effective cross section is then a well defined property of the hadronic interaction and, at least if the simplest possibility for the hadronic structure is realized, it is both energy and cutoff independent. Analogously the dimensional scale factors, which one may introduce in relation to triple, quadruple etc. partonic collisions, are energy and cutoff independent properties of the interaction. Initially the searches for double parton collisions were sparse and not very consistent. CDF however has recently claimed the observation of a large number of double parton scatterings. The measured value of the effective cross section, $`\sigma _{eff}=14.5\pm 1.7_{-2.3}^{+1.7}`$ mb, is sizably smaller than expected naively and it is an indication of important correlation effects in the hadron structure. Correlations on the other hand are a manifestation of the links between the different constituents of the hadronic bound state and, in fact, it is precisely this sort of connection which one would like to learn about as a result of experiments probing the hadron structure. In the present paper we point out a rather natural possible source of correlations, which is able to give rise to a sizable reduction of the value of the effective cross section, as compared to the most naive expectation. In fact we show that, when linking the transverse size of the gluon and sea distributions to the actual configuration of the valence, the expected value of $`\sigma _{eff}`$ is sizably decreased.
In the next paragraph we recall the main features of multiple partonic collision processes in the simplest uncorrelated case, in the following section we correlate the distributions of gluons and sea quarks with the configuration of the valence and, in the final paragraph, we discuss our results. ## II Multiple parton collisions The non-perturbative input to the multiple parton collision processes is the many body parton distribution. In most cases the simplest uncorrelated case, with factorized dependence on the actual degrees of freedom, is considered and the probability distribution is expressed as a Poissonian. The probability of a configuration with $`n`$ partons with fractional momenta $`x_1,\mathrm{\dots },x_n`$ and with transverse coordinates $`𝐛_1,\mathrm{\dots },𝐛_n`$ is then expressed as $$\mathrm{\Gamma }(x_1,𝐛_1,\mathrm{\dots },x_n,𝐛_n)=\frac{D(x_1,𝐛_1)\mathrm{\dots }D(x_n,𝐛_n)}{n!}\mathrm{exp}\left\{-\int D(x,𝐛)dxd^2b\right\}$$ (1) where $`D(x,𝐛)=g_{eff}(x)f(𝐛)`$ is the average number of partons with momentum fraction $`x`$ and transverse coordinate $`𝐛`$. The function $`g_{eff}(x)`$ is the effective parton distribution while $`f(𝐛)`$ represents the partonic matter distribution in transverse space. The many body parton distribution is an infrared divergent quantity, so one needs to introduce an infrared cut-off. A natural cutoff in this context is the lowest value of momentum transfer which allows a scattered parton to be recognized as a jet in the final state. Given the many-body parton distribution one can work out all possible multi-parton interactions. If rescatterings are neglected, namely if every parton is allowed to interact with large momentum transfer (larger than the infrared cutoff) at most once, one may write a simple analytic expression for the hard cross section $`\sigma _H`$, corresponding to the cross section counting all inelastic hadronic events with at least one hard partonic interaction: $$\sigma _H=\int d^2\beta \left[1-e^{-\sigma _SF(\beta )}\right]=\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}\int d^2\beta \frac{\left(\sigma _SF(\beta )\right)^n}{n!}e^{-\sigma _SF(\beta )}$$ (2) here $`\beta `$ is the impact parameter of the hadronic collision, $`F(\beta )=\int d^2b\,f(𝐛)f(𝐛-\beta )`$ and $`\sigma _S`$ is the usual expression for the integrated inclusive cross section to produce jets, namely the convolution of the parton distributions and of the partonic cross section. The unitarized expression of $`\sigma _H`$ in Eq.2, which takes into account the possibility of many parton-parton interactions in an overall hadron-hadron collision, is well behaved in the infrared region and it corresponds to a Poissonian distribution of multiple parton collisions at a fixed value of the impact parameter of the hadronic interaction. If one works out from Eq.2 the average number of partonic collisions one obtains (a numerical check of this identity is sketched below) $$\langle n\rangle \sigma _H=\int d^2\beta \,\sigma _SF(\beta )=\sigma _S$$ (3) so that $`\sigma _S`$ represents the inclusive cross section normalized with the multiplicity of partonic interactions.
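The following sketch is ours; the Gaussian overlap $`F(\beta )`$ and the sample value of $`\sigma _S`$ are assumptions made purely for illustration. It integrates Eq. 2 numerically and checks that $`\langle n\rangle \sigma _H=\sigma _S`$ as in Eq. 3:

```python
import numpy as np
from scipy.integrate import quad

sigma_S = 3.0                    # sample single-scattering cross section, fm^2
a2 = 0.5                         # width^2 of the Gaussian overlap, fm^2

def F(beta):                     # normalized overlap: integral F d^2beta = 1
    return np.exp(-beta**2 / (4.0*a2)) / (4.0*np.pi*a2)

# sigma_H = integral d^2beta [1 - exp(-sigma_S F)]
sigma_H, _ = quad(lambda b: 2*np.pi*b*(1 - np.exp(-sigma_S*F(b))), 0, np.inf)

# <n> sigma_H = integral d^2beta sigma_S F(beta), which should equal sigma_S
n_avg_sigma_H, _ = quad(lambda b: 2*np.pi*b*sigma_S*F(b), 0, np.inf)

print(f"sigma_H       = {sigma_H:.4f} fm^2 (smaller than sigma_S, as unitarized)")
print(f"<n> * sigma_H = {n_avg_sigma_H:.4f} fm^2")
print(f"sigma_S       = {sigma_S:.4f} fm^2")
```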
The inclusive cross section for a double parton scattering, $`\sigma _D`$, normalized in an analogous way with the multiplicity of double parton collisions, $`\langle n(n-1)/2\rangle \sigma _H`$, is given by $$\left\langle \frac{n(n-1)}{2}\right\rangle \sigma _H=\frac{1}{2}\int d^2\beta \,\sigma _S^2F^2(\beta )=\sigma _D$$ (4) One may then introduce the effective cross section: $$\sigma _D=\frac{1}{2}\frac{\sigma _S^2}{\sigma _{eff}}$$ (5) which is therefore expressed in terms of the overlap of the partonic matter distributions of the two interacting hadrons as $$\sigma _{eff}=\frac{1}{\int d^2\beta \,F^2(\beta )}$$ (6) In fig.1 two different analytic forms for $`f(𝐛)`$ are compared: a gaussian and the projection of a sphere in transverse space. The radius of the distributions has been fixed in such a way that in both cases the RMS radius has the same value which, following CDF, has been set equal to $`0.56`$fm. For the cutoff we have used $`5`$GeV and, in order to reproduce the value of the integrated hard cross section for producing minijets measured by UA1, we have multiplied the partonic cross section, computed in the perturbative QCD parton model, by the appropriate $`k`$-factor. Both choices give a similar qualitative result: the effective cross section is constant with energy (and cutoff independent) while the ‘total’ hard cross section $`\sigma _H`$ grows rapidly as a function of the c.m. energy. The values obtained for $`\sigma _{eff}`$ are however too large, by roughly a factor of two, as compared to the value quoted by experiment (a numerical estimate for the uncorrelated gaussian case is sketched below). One has to add that in the experimental analysis of CDF all events with triple parton collisions have been removed from the sample of inelastic events with double parton scatterings. The resulting value for the effective cross section, $`\sigma _{eff}|_{CDF}`$, is therefore larger than the quantity discussed above and usually considered in the literature. The disagreement with the most naive picture is therefore even stronger than apparent when comparing the result of the uncorrelated calculation with the quoted value of $`\sigma _{eff}`$. In the picture of multiparton interactions just recalled all correlations are neglected, and one may therefore claim that the experimental evidence is precisely that correlations play an important role in the many-body parton distribution of the hadron. In the next paragraph we propose therefore a slight modification of the picture, linking the transverse size of the gluon and sea distributions to the configuration in transverse space of the valence quarks. ## III A simple model for the partonic matter distribution in the proton Charged matter is distributed in the proton according to the charge form factor, which is well represented by an exponential expression in coordinate space. The information refers to a large extent to the distribution in space of the valence quarks, which can then be found in various different configurations in transverse space with a given probability distribution. Less is known about the distribution in space of the neutral matter component of the proton structure, namely the gluons. In this respect one may consider two different extreme possibilities: * the distribution in space of sea quarks and gluons has no relation with the distribution of valence quarks, * or, rather, its transverse size is linked closely to the actual configuration in space of the valence quarks. The no correlation hypothesis, which has been ruled out by the measurement of $`\sigma _{eff}`$, would imply the first possibility.
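To give an order of magnitude for the uncorrelated expectation, the following sketch (ours) evaluates Eq. 6 numerically for a gaussian $`f(𝐛)`$ whose three-dimensional RMS radius is set to the value $`0.56`$fm used above; the gaussian profile and the numerical setup are our assumptions for illustration only:

```python
import numpy as np
from scipy.integrate import quad

r_rms = 0.56                     # 3D RMS matter radius, fm (value used in the text)
a2 = r_rms**2 / 3.0              # 2D Gaussian width^2: <b^2> = 2*a2 = (2/3)<r^2>

def F(beta):                     # overlap of two identical Gaussian profiles
    return np.exp(-beta**2 / (4.0*a2)) / (4.0*np.pi*a2)

integral, _ = quad(lambda b: 2.0*np.pi*b*F(b)**2, 0.0, np.inf)
sigma_eff = 1.0/integral         # fm^2; 1 fm^2 = 10 mb

print(f"uncorrelated sigma_eff = {10.0*sigma_eff:.1f} mb")   # ~26 mb
print("CDF quotes 14.5 mb, i.e. roughly a factor of two smaller")
```

For a gaussian the integral can also be done analytically, $`\sigma _{eff}=8\pi a^2`$ with $`a^2=\langle r^2\rangle /3`$; the same exercise for the triple overlap gives $`\tau =3/4`$ in the uncorrelated gaussian case, which may serve as a reference value for the $`\tau `$-factor introduced in Sec. IV.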
We try therefore to work out here a simple model where the distribution of the whole partonic matter in the hadron is driven by the actual configuration of the valence quarks. The parton distribution in Eq.1 can be modified by separating the valence quarks from gluons and sea quarks and by correlating the transverse size of the distributions of sea quarks and gluons with the transverse size of the actual configuration of the valence. If however one simply rescales the distribution function in transverse space $`f(𝐛)`$ by rescaling the radius while keeping the normalization constant, one is imposing a conservation constraint on the number of gluons. The number is in fact the same in each configuration of the proton, both when it is squeezed to a small transverse size and when it is expanded to a relatively large dimension. The picture therefore is not consistent with the common belief that the energy of the gluon field grows basically because of the growth of the distance between the valence quarks. It seems therefore more reasonable to remove the constraint on the normalization and to keep rather fixed the maximum value of $`f(𝐛)`$ in all configurations. We modify therefore the simplest Poissonian expression in Eq.1 and we express the probability distribution $`\mathrm{\Gamma }`$ as follows: $$\mathrm{\Gamma }(x_1,𝐛_1,\mathrm{\dots },x_n,𝐛_n)=\phi (𝐁_D,𝐁)q_v(X_1)q_v(X_2)q_v(X_3)\times \frac{1}{n!}\left[g(x_1)f(b,𝐛_1)\frac{b^2}{\langle b^2\rangle }\mathrm{\dots }g(x_n)f(b,𝐛_n)\frac{b^2}{\langle b^2\rangle }\right]\mathrm{exp}\left\{-\frac{b^2}{\langle b^2\rangle }\int g(x)dx\right\}$$ (8) To simplify the notation we have not written explicitly the dependence of $`\mathrm{\Gamma }`$ on the coordinates of the valence quarks, $`X`$ and $`𝐁`$. The dependence of $`\mathrm{\Gamma }`$ on the momentum fractions of the valence quarks $`X`$ is factorized and $`q_v(X)`$ is the usual distribution of valence quarks as a function of the momentum fraction $`X`$. The function $`f(b,𝐛_i)`$ represents the distribution of gluons and sea quarks in transverse space. It is a function of the transverse coordinate of the considered parton, $`𝐛_i`$, and it depends on the scale factor $`b`$ whose value is given by the actual configuration of the valence in transverse space. The transverse coordinates of the three valence quarks are $$𝐁_1=\frac{1}{2}𝐁_D+𝐁$$ (9) $$𝐁_2=\frac{1}{2}𝐁_D-𝐁$$ (10) $$𝐁_3=-𝐁_D$$ (11) The dependence on the transverse coordinates of the valence is given by $`\phi (𝐁_D,𝐁)`$, that is the integral over the longitudinal coordinates $`Z_D,Z`$ of $`\varphi (𝐑_D,𝐑)`$, representing the valence structure of the proton in coordinate space. Explicitly we use the exponential form $$\varphi (𝐑_D,𝐑)=\frac{\lambda _D^3\lambda ^3}{(8\pi )^2}\mathrm{exp}\left\{-(\lambda _DR_D+\lambda R)\right\}$$ (12) where $$\lambda _D=\frac{2\sqrt{3}}{\sqrt{\langle r^2\rangle }}$$ (13) $$\lambda =\frac{4}{\sqrt{\langle r^2\rangle }}$$ (14) and $`\sqrt{\langle r^2\rangle }=0.81`$fm is the proton charge radius. In a given configuration of the valence, the average number of gluons and sea quarks is not equal to the overall average number $`g(x)`$ (with $`g(x)`$ we indicate here the sum of the gluon and sea quark distributions). It is rather equal to $`g(x)\,b^2/\langle b^2\rangle `$, where $`\langle b^2\rangle `$ is the average of $`b^2`$ over the probability distribution of the valence.
To make a definite choice we take $`b=B_D`$, in such a way that $$\langle b^2\rangle =\langle B_D^2\rangle =\int d𝐑_Dd𝐑\,B_D^2\,\varphi (𝐑_D,𝐑)$$ (15) The density of gluons and sea quarks in transverse space is then constant in the middle of the proton in all the different configurations of the valence quarks, and it is equal to the value assumed in the average configuration. The actual number of gluons and sea quarks is therefore a function of the configuration taken by the valence, compact configurations giving rise to small numbers while more extended configurations give rise to larger numbers. Summing over all possible configurations of gluons and sea quarks one obtains, from Eq.8, the probability distribution of the valence in transverse space, $`\phi (𝐁_D,𝐁)`$. The average number of gluons and sea quarks with given momentum fraction $`x`$ is $$\int d𝐁_Dd𝐁d𝐛\,\phi (𝐁_D,𝐁)g(x)f(B_D,𝐛)\frac{B_D^2}{\langle B_D^2\rangle }=g(x)$$ (16) while the average number of partonic collisions not involving the valence, $`n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )`$, for a given configuration of the valence and for a given value of the impact parameter $`\beta `$, is expressed as $$n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )=\int d𝐛\,f(B_D,𝐛-\beta )\frac{B_D^2}{\langle B_D^2\rangle }f(B_D^{\prime },𝐛)\frac{(B_D^{\prime })^2}{\langle B_D^2\rangle }\,\sigma _S(g+q_s,g+q_s)$$ (17) The average number of interactions of valence quarks with gluons and sea is written in an analogous way. If one integrates the average number of partonic collisions $`n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )`$ over all configurations of the valence quarks of the two hadrons, with the corresponding weights, and over the hadronic impact parameter $`\beta `$, one obtains the single scattering inclusive cross section $`\sigma _S`$: $$\int d𝐁_Dd𝐁d𝐁_D^{\prime }d𝐁^{\prime }d^2\beta \,\phi (𝐁_D,𝐁)\phi (𝐁_D^{\prime },𝐁^{\prime })n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )=\sigma _S(g+q_s,g+q_s)$$ (18) An analogous expression for the interactions of valence quarks with gluons and sea quarks is readily written. All average quantities are therefore the well known ones of the perturbative QCD-parton model. The inclusion of the transverse degrees of freedom in the relations above allows one to write down promptly all the expressions corresponding to the various multiparton collision processes. For the double parton scattering case one has: $$\sigma _D=\frac{1}{2}\int d𝐁_Dd𝐁d𝐁_D^{\prime }d𝐁^{\prime }d^2\beta \,\phi (𝐁_D,𝐁)\phi (𝐁_D^{\prime },𝐁^{\prime })\left[n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )\right]^2$$ (19) and for the hard cross section $$\sigma _H=\int d𝐁_Dd𝐁d𝐁_D^{\prime }d𝐁^{\prime }d^2\beta \,\phi (𝐁_D,𝐁)\phi (𝐁_D^{\prime },𝐁^{\prime })\left[1-\mathrm{exp}\left\{-n(𝐁_D,𝐁,𝐁_D^{\prime },𝐁^{\prime },\beta )\right\}\right]$$ (20) The effective cross section is obtained, as in the uncorrelated case, from $`\sigma _D`$ by using Eq.5. ## IV Discussion The values of $`\sigma _H`$ and of $`\sigma _{eff}`$, derived from the correlated parton distribution described in the previous paragraph, are plotted in fig.2, where two different analytic expressions for $`f(𝐛)`$ are considered. In fig.2a $`f(𝐛)`$ is the projection of a sharp-edged sphere in transverse space, while in fig.2b $`f(𝐛)`$ is a gaussian. In each figure we draw the inclusive cross section $`\sigma _S`$, which is the fast growing short-dashed curve, the hard cross section $`\sigma _H`$ (continuous curves), and the effective cross section $`\sigma _{eff}`$ (long-dashed curves). Both for $`\sigma _H`$ and for $`\sigma _{eff}`$ we draw two different curves.
The higher curve refers to the case where the gluon and sea distributions are kept fixed and equal to the average configuration, irrespective of the configuration of the valence, while the lower curve refers to the case where the configurations of gluons and sea are correlated in radius with the configuration of the valence, as described in the previous paragraph. The lower threshold for the transverse momentum of the produced jets has been put equal to $`5`$GeV. In the correlated case, when considering a sphere for $`f(𝐛)`$, we choose for the radius of the sphere the value of $`B_D`$. The RMS of the radius of the sphere, averaged with the probability distribution of the valence, is then $`\sqrt{\langle B_D^2\rangle }=0.66`$fm, namely it is equal to $`\sqrt{(2/3)\langle r^2\rangle }`$, with $`\sqrt{\langle r^2\rangle }=0.81`$fm the RMS proton charge radius. In the gaussian case we fix the size of the distribution by requiring the same RMS value for the radius of the distribution as in the case of the sphere. As one may see in fig.2, while the value of $`\sigma _{eff}`$ is too large when the distribution of gluons and sea quarks is kept fixed, the value of $`\sigma _{eff}`$ is sizably reduced when it is correlated with the distribution of the valence quarks. With our choice the value of the effective cross section turns out to be of the right size. Obviously the result could have been significantly different with a different, but still plausible, choice of the (correlated) radius of the gluon and sea distributions. We do not claim therefore that the simple mechanism discussed here is the only solution to the problem posed by the smallness of the observed value of $`\sigma _{eff}`$. Rather we point out that the source of correlations discussed here is, on one side, a minimal modification of the uncorrelated many-body parton distribution and, on the other, it looks like a natural possibility, which could explain a substantial amount of the difference between the observed value of $`\sigma _{eff}`$ and the result of the uncorrelated calculation. As shown in fig.2, our simple model, while reducing sizeably the expectation for $`\sigma _{eff}`$, does not modify dramatically the expectation for $`\sigma _H`$. It is therefore worthwhile making predictions for other possible observables, in order to have an independent indication on the actual model. We have then estimated the triple parton scattering cross section. The triple scattering cross section, being proportional to $`\sigma _S^3`$, introduces a new dimensional quantity other than the effective cross section $`\sigma _{eff}`$. In the uncorrelated case the new dimensional quantity has a given value, proportional to $`\sigma _{eff}^2`$, and the proportionality factor depends on the actual form of $`f(𝐛)`$. Also in the correlated case one may however write $$\sigma _T=\frac{1}{3!}\frac{\sigma _S^3}{\tau \sigma _{eff}^2}$$ (21) The observation of the triple parton scattering process therefore allows one to measure the dimensionless quantity $`\tau `$, which, like $`\sigma _{eff}`$, is a well defined quantity related to the geometrical properties of the interacting hadrons. The actual expectations for the $`\tau `$-factor according to the model discussed here are shown in the table. ## V Conclusions The observation of multiple parton collisions allows the measurement of a whole set of quantities which characterize the interaction and which are directly connected to the geometrical properties of the hadron structure.
The first indication in this direction is the measurement of the effective cross section of double parton collisions performed by CDF. The measured value of the effective cross section rules out the simplest uncorrelated picture of the many body parton distribution of the hadron. In this note we have shown that by correlating the transverse dimension of the gluon and sea quark distributions with the transverse dimension of the valence one can obtain for the effective cross section a value much closer to the experimental indication. In order to have the possibility to test the model, at least to some extent, we have then worked out our prediction for the factor $`\tau `$, characterizing the triple parton scattering cross section. The simplest expectation for $`\sigma _{eff}`$ and, more in general, for the scale factors of the different multiple parton collision processes, is that they are cutoff and energy independent. The situation however changes when more elaborate structures are considered. Also in the simple model discussed here, in fact, $`\sigma _{eff}`$ shows a slight energy and cutoff dependence. The origin is the following: in our case partons are organized in two different structures in transverse space; on one side one has the valence quarks and on the other the gluons and sea quarks. When changing the energy or the lower cutoff in $`p_t`$ one is varying the relative number of interacting sea quarks and gluons with respect to the valence quarks. This variation is then reflected in the relative weight of the rates of collisions of partons which have different distributions in transverse space and, as a consequence, $`\sigma _{eff}`$ is slightly modified. Acknowledgements This work was partially supported by the Italian Ministry of University and of Scientific and Technological Research by means of the Fondi per la Ricerca scientifica - Università di Trieste.
no-problem/9902/astro-ph9902262.html
ar5iv
text
# Tales of tails in cosmology ## 1 Introduction The late time mild inflationary (LTMI) scenario has recently been proposed to solve the age of the universe problem, and the puzzle of the discrepancy between the locally and globally measured values of the Hubble parameter. In this paper, we discuss the homogeneous Klein–Gordon equation for a scalar field from the point of view of the Huygens’ principle and wave tails, with an eye to its cosmological applications. In the LTMI scenario, the scalar field is allowed to couple explicitly to the Ricci curvature of spacetime (see Eq. (2.2) below). The physical reasons to consider a nonminimal coupling term are many, and are summarized in Ref. ; indeed, the nonminimal coupling is forced upon us by the physics of the scalar field. The study of the Huygens’ principle and of scalar tails leads to unexpected physics. The main goal of the present paper is the application of these results to cosmology, in particular to LTMI, continuing the program initiated in Refs. . An interesting issue in the physics of wave propagation is the validity of the Huygens’ principle. A field satisfying a linear wave equation can propagate “sharply” along the characteristic surfaces, or with “tails” of radiation, reverberations that degrade the information carried by an initially delta–like pulse, and that violate the Huygens’ principle. To be clear, we adopt the physical definition of the Huygens’ principle due to Hadamard. Assume that a delta–like pulse of radiation (light, for example) is emitted by a point–like source in $`P`$, at the time $`t=0`$. If, at the time $`t>0`$, the radiation is entirely confined to the surface of the sphere of center $`P`$ and radius $`r=ct`$ (where $`c`$ is the speed of light), one says that the Huygens’ principle is satisfied. If, on the contrary, there is radiation at radii $`r`$ such that $`r<ct`$, there are tails of radiation: the waves are spread over all radii. A precise mathematical definition is found in Sec. 2. (Unfortunately, the terminology commonly used in the literature is misleading; it would be more appropriate to refer to the “Huygens’ property” instead of the “Huygens’ principle”.) It is known that, given a wave equation in a curved spacetime $`(M,g_{ab})`$, the Huygens’ principle is generally violated by its solutions, due to the following possibilities: the presence of a mass term in the wave equation satisfied by the field, the dimensionality of spacetime, and backscattering off the background curvature of spacetime. The first of these causes is trivial and well known. Moreover, in this paper there are no tails due to the spacetime dimensionality. Backscattering off the spacetime curvature, on the other hand, is nontrivial, and the presence or absence of tails for scalar, electromagnetic, and gravitational waves has been established only for a handful of spacetime metrics $`g_{ab}`$. It is not surprising that the study of violations of the Huygens’ principle has fruitful applications to cosmology, in view of the fact that scalar fields are widely used in this area, especially in inflationary theories of the early universe, and as candidates for dark matter in today’s universe. Also, it is worth reminding the reader that tails of gravitational waves due to the spacetime curvature near compact sources have received attention in conjunction with the data analysis of the large interferometric detectors of gravitational waves.
The relevance of tails for cosmological gravitational waves is, instead, unclear. The plan of the paper is as follows: in Sec. 2, a theorem valid for massive fields of spin $`s\geq 1/2`$ satisfying wave equations is recalled, and analogous results are derived for the massive spin 0 field. The importance of a correct formulation of the Huygens’ principle for physical applications is emphasized. Then, we proceed to study an “ultrapathological” case of wave propagation for a scalar field in a curved space. In Sec. 3, which is the most relevant to cosmology, the late time mild inflationary scenario of the universe is studied, and it is shown that this scenario essentially coincides with the ultrapathological space of Sec. 2. Alternative mechanisms to achieve late time mild inflation are discussed. Section 4 presents the conclusions. ## 2 Massive fields in curved spaces and the tail–free property Massive fields of arbitrary spin satisfying wave equations in a curved space have been studied for a long time, both from the mathematical and the physical (classical and quantum) point of view. In this paper, we restrict ourselves to the classical aspects of the physics of wave propagation, in particular the violation of the Huygens’ principle and the occurrence of tails of radiation for a field satisfying a wave equation. It is required that the fields considered live in the spacetime $`(M,g_{ab})`$, where $`M`$ is a four–dimensional smooth manifold, $`g_{ab}`$ is the metric tensor, and $`\nabla _a`$ is the associated covariant derivative operator. (The metric signature is $`-+++`$. The speed of light and Planck’s constant assume the value unity. The Ricci tensor is given by $`R_{\mu \rho }=\mathrm{\Gamma }_{\mu \rho ,\nu }^\nu -\mathrm{\Gamma }_{\nu \rho ,\mu }^\nu +\mathrm{\Gamma }_{\mu \rho }^\alpha \mathrm{\Gamma }_{\alpha \nu }^\nu -\mathrm{\Gamma }_{\nu \rho }^\alpha \mathrm{\Gamma }_{\alpha \mu }^\nu `$ in terms of the Christoffel symbols $`\mathrm{\Gamma }_{\alpha \beta }^\delta `$, and $`R=R_{\mu }^{}{}_{}{}^{\mu }`$. The abstract index notation is used.) We begin by considering massive fields with spin $`s\geq 1/2`$, which have recently been the subject of renewed interest ; the following theorem is valid (we refer the reader to Ref. for the relevant equations and a proof): Theorem 1: A solution of the homogeneous wave equation for a massive field with spin $`s\geq 1/2`$ on the spacetime $`(M,g_{ab})`$ obeys the Huygens’ principle if and only if $`(M,g_{ab})`$ is a spacetime of constant curvature and the Ricci scalar satisfies $$R=\frac{6m^2}{s},$$ (2.1) where $`m`$ is the mass of the field. The formulation of the Huygens’ principle used in Theorem 1 and in its proof is crucial. In fact, although the Huygens’ principle for the solutions of a wave equation was formulated by Hadamard in a clear and physically meaningful way as the absence of tails of radiation, several other definitions have been introduced in the literature over the years: the characteristic propagation property, progressing–wave propagation, etc. These definitions are a priori inequivalent, and they are all loosely referred to as the “Huygens’ principle”. This improper terminology is often a source of confusion and misinterpretations of mathematical results (see Refs. for a clarification of the relationships between at least some of the various definitions proposed in the literature). In the following, we consider the analogue of Theorem 1 for the case of the massive scalar field ($`s=0`$).
To this end, we first provide an unambiguous definition of the Huygens’ principle. A scalar field $`\varphi `$ in a source–free region of spacetime satisfies the homogeneous Klein–Gordon equation $$g^{ab}\nabla _a\nabla _b\varphi -m^2\varphi -\xi R\varphi =0,$$ (2.2) where the dimensionless constant $`\xi `$ describes the direct coupling between the field $`\varphi `$ and the Ricci curvature $`R`$ of spacetime. The formal solution of Eq. (2.2) is given by a Green function representation in a normal domain $`𝒩`$ of spacetime not containing sources as $$\varphi (x)=\int _{\partial 𝒩}dS^{a^{\prime }}(x^{\prime })\,G(x^{\prime },x)\overleftrightarrow{\nabla }_{a^{\prime }}\varphi (x^{\prime }),$$ (2.3) where $`\partial 𝒩`$ is the boundary of the normal domain $`𝒩`$, $`dS^{a^{\prime }}(x^{\prime })`$ is the oriented volume element on the hypersurface $`\partial 𝒩`$ at $`x^{\prime }`$, and $$f_1\overleftrightarrow{\nabla }f_2\equiv f_1\nabla f_2-f_2\nabla f_1$$ (2.4) for any pair of differentiable functions $`(f_1,f_2)`$. For physical reasons, we restrict ourselves to the consideration of the retarded Green function $`G_R(x^{\prime },x)`$, which is a solution of the wave equation (2.2) with an impulsive source located at $`x`$, $$\left[g^{a^{\prime }b^{\prime }}(x^{\prime })\nabla _{a^{\prime }}\nabla _{b^{\prime }}-m^2-\xi R(x^{\prime })\right]G(x^{\prime },x)=\delta (x^{\prime },x).$$ (2.5) $`\delta (x^{\prime },x)`$ is the delta function on spacetime such that, for each test function $`f`$, $$\int d^4x^{\prime }\sqrt{-g(x^{\prime })}f(x^{\prime })\delta (x^{\prime },x)=f(x).$$ (2.6) The retarded Green function $`G_R(x^{\prime },x)`$ admits the decomposition $$G_R(x^{\prime },x)=\mathrm{\Sigma }(x^{\prime },x)\delta _R(\mathrm{\Gamma }(x^{\prime },x))+V(x^{\prime },x)\mathrm{\Theta }_R(\mathrm{\Gamma }(x^{\prime },x)).$$ (2.7) $`\mathrm{\Gamma }(x^{\prime },x)`$ is the square of the proper distance between $`x^{\prime }`$ and $`x`$ computed along the unique geodesic connecting $`x^{\prime }`$ and $`x`$ in the normal domain $`𝒩`$; $`\mathrm{\Gamma }=0`$ corresponds to the light cones. $`\delta _R`$ and $`\mathrm{\Theta }_R`$ are, respectively, the Dirac delta distribution and the Heaviside step function with support in the past of $`x^{\prime }`$. The functions $`\mathrm{\Sigma }`$ and $`V`$ are uniquely determined in a given spacetime metric. The non–vanishing of $`V(x^{\prime },x)`$ corresponds to the presence of wave tails propagating inside the light cone, while the first contribution to $`G_R`$, weighted by the coefficient $`\mathrm{\Sigma }(x^{\prime },x)`$, describes sharp propagation along the light cone. The structure (2.7) of the retarded Green function is qualitatively the same for the wave equations satisfied by fields of higher spin in a curved space. Here, we omit writing these equations and the corresponding Green functions explicitly, for the sake of brevity. However, it is important to remember that the formulation of the Huygens’ principle used in Theorem 1 corresponds to the absence of tails (i.e. $`V(x^{\prime },x)=0`$ for all spacetime points $`x^{\prime },x`$ in Eq. (2.7)). Following Ref. , one considers a neighborhood $`U(x)`$ of the spacetime point $`x\in M`$, and one Taylor–expands $`G_R(x^{\prime },x)`$, obtaining $$\mathrm{\Sigma }(x^{\prime },x)=\frac{1}{4\pi }+r_1(x^{\prime },x),$$ (2.8) $$V(x^{\prime },x)=-\frac{1}{8\pi }\left[m^2+\left(\xi -\frac{1}{6}\right)R(x)\right]+r_2(x^{\prime },x),$$ (2.9) where the remainders $`r_{1,2}(x^{\prime },x)\to 0`$ as $`x^{\prime }\to x`$. When the neighborhood $`U(x)`$ has a small diameter ($`x^{\prime }`$ near $`x`$), there is a tail ($`V(x^{\prime },x)\neq 0`$) unless the effective mass $`m_{eff}(x)`$ given by $$m_{eff}^2(x)=m^2+\left(\xi -\frac{1}{6}\right)R(x)$$ (2.10) vanishes. We introduce Definition 1: the field $`\varphi `$ obeying Eq.
(2.2) satisfies the Huygens’ principle at the spacetime point $`x`$ if $`V(x^{\prime },x)\to 0`$ for $`x^{\prime }\to x`$ in a normal neighborhood of $`x`$. Definition 2: the field $`\varphi `$ obeying Eq. (2.2) satisfies the Huygens’ principle if the latter is satisfied at every spacetime point $`x`$. Then, a straightforward consequence of Eqs. (2.7), (2.9) is the Lemma: The solution of Eq. (2.2) with $`\xi \neq 1/6`$ in the spacetime $`(M,g_{ab})`$ satisfies the Huygens’ principle at $`x`$ if and only if $$R(x)=\frac{6m^2}{1-6\xi }.$$ (2.11) The case $`\xi =1/6`$ is special; in this case there are no tails if and only if $`m=0`$, irrespective of the curvature (the value $`\xi =1/6`$ is of physical significance – see below). We also have Theorem 2: A sufficient condition for a solution of Eq. (2.2) with $`\xi \neq 1/6`$ to satisfy the Huygens’ principle in the spacetime $`(M,g_{ab})`$ is that the latter is a constant curvature space and $`R=6m^2/(1-6\xi )`$. So far, our considerations have been limited to the mathematical aspects of the propagation of a scalar field in a curved space. At this point, it is interesting to examine the subject from the physical point of view. The physical reasons for the occurrence of tails are: i): The field is massive ($`m\neq 0`$). For example, the solutions of the Klein–Gordon equation (2.2) in the four–dimensional Minkowski space $`(R^4,\eta _{ab})`$ have tails whenever $`m\neq 0`$. ii): The dimensionality of spacetime. For example, the solutions of Eq. (2.2) in the $`k`$–dimensional Minkowski space have tails for odd $`k`$, but not for even $`k>2`$. In this paper, we restrict ourselves to the case of a four–dimensional manifold. iii): Backscattering of the waves off a potential and/or the spacetime curvature. This is the most interesting case and in this section we consider only a non self–interacting field; hence the potential is absent and we are concerned solely with the backscattering off the background curvature. The extension to a self–interacting field is straightforward. Moreover, in this paper the dimension of spacetime is fixed to four, and we are not concerned with tails due to odd spacetime dimension. Although the study of the conditions for the absence of wave tails for massive fields of arbitrary spin (e.g. Refs. ) is legitimate from the mathematical point of view, it is not easy to justify from the physical perspective. In fact, a field with $`m\neq 0`$ will have a tail due to the fact that it is massive (this tail is present even in flat space) and due to the backscattering off the background curvature of spacetime. The absence of tails means that the two effects exactly cancel each other. This situation corresponds to a field with nonzero intrinsic mass that propagates sharply along the light cone, a phenomenon that has no experimental or observational support. A wave tail is indeed a desirable feature for a massive field; there is not much point in requiring that the Huygens’ principle be satisfied on a curved space and in deriving the conditions under which this “principle” is satisfied. As a matter of fact, these conditions are very restrictive, as is suggested by Theorems 1 and 2. In other words, the Huygens’ principle is not a fundamental principle like, say, the equivalence principle, and its violation is very realistic. We conclude this section with an example relevant for cosmology, which will be used later in Sec. 3.
In this example, the balance between tails due to a mass term and those due to backscattering off the background curvature is achieved exactly at every spacetime point. Keeping in mind Theorem 2, we consider the de Sitter space of constant curvature $`R`$, and a test scalar field satisfying Eq. (2.2), with a mass given by $$m=\left[R\left(\frac{1}{6}-\xi \right)\right]^{1/2}$$ (2.12) for $`\xi <1/6`$. Then $`V(x^{\prime },x)=0`$ and the field propagates sharply along the light cone at every spacetime point. However, its intrinsic mass $`m`$ can be made arbitrarily large by suitably choosing the Ricci curvature (or the constant $`\xi `$, or both), while the effective mass given by Eq. (2.10) vanishes. We will call this example the ultrapathological spacetime. Of course, one could also consider its counterpart obtained by using the anti–de Sitter space and $`\xi >1/6`$. ## 3 Late time mild inflation In this section, we proceed to apply to cosmology the previous considerations on scalar wave tails. In Ref. it was argued that, if the Einstein equivalence principle is valid (i.e. in any metric theory of gravity in which the nature of $`\varphi `$ is nongravitational), then in the limit $`x^{\prime }\to x`$ the solutions of Eq. (2.2) and the corresponding Green functions must have the same structure as in flat space. This corresponds to the local approximation of the spacetime $`(M,g_{ab})`$ with its tangent Minkowski space. The flat space retarded Green function $$G_R^{(M)}(x^{\prime },x)=\frac{1}{4\pi }\delta _R(\mathrm{\Gamma }(x^{\prime },x))-\left(\frac{m^2}{8\pi }+r_3(x^{\prime },x)\right)\mathrm{\Theta }_R(\mathrm{\Gamma }(x^{\prime },x)),$$ (3.1) where $`r_3(x^{\prime },x)\to 0`$ as $`x^{\prime }\to x`$, must be reproduced in the $`x^{\prime }\to x`$ limit, and this requirement leads to the prescription $`\xi =1/6`$ for the value of the coupling constant. This result was rederived and confirmed in Ref. , and it can be physically interpreted as the fact that, in the absence of a scalar field mass, no scale must appear in the local solution to the wave equation, in analogy with the flat space situation. (Note that this is not guaranteed by setting $`\xi =0`$; in this case the curvature scale would survive in the Green function, which is the solution for an impulsive source used in the physical definition of the Huygens’ principle given by Hadamard.) The prescription $`\xi =1/6`$ has many consequences for cosmological inflation. In fact, the success of many inflationary scenarios depends strongly on the fine tuning of the parameter $`\xi `$, which is impossible once the value of $`\xi `$ is fixed to the conformal value $`1/6`$. If inflation is driven by a quantum scalar field, the Einstein equivalence principle probably cannot be imposed. The equivalence principle is likely to be violated at the quantum level, and the prescription $`\xi =1/6`$ is not applicable in the quantum regime. However, it is a common belief that inflation is a classical phenomenon. Moreover, there are other prescriptions for the value of the coupling constant $`\xi `$ (see references in ) that are valid for quantum fields, and they differ according to the physical nature of the field $`\varphi `$. The existence of tails of radiation, and the issue of the value of $`\xi `$, are relevant also for other areas of cosmology and of theoretical physics.
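As a small numerical illustration of the tuning expressed by Eqs. (2.10) and (2.12) (our own check, with an arbitrary Hubble rate as input), one can verify that in de Sitter space, where $`R=12H^2`$, the mass (2.12) makes the effective mass vanish for any $`\xi <1/6`$, no matter how large the intrinsic mass becomes:

```python
import numpy as np

H = 1.0                      # de Sitter Hubble rate (arbitrary units)
R = 12.0 * H**2              # Ricci scalar of de Sitter space

for xi in [-80.0, 0.0, 1.0/7.0]:          # sample couplings with xi < 1/6
    m = np.sqrt(R * (1.0/6.0 - xi))       # tail-free mass, Eq. (2.12)
    m_eff_sq = m**2 + (xi - 1.0/6.0) * R  # effective mass squared, Eq. (2.10)
    print(f"xi = {xi:+8.3f}: intrinsic m = {m:7.3f} H, m_eff^2 = {m_eff_sq:.2e}")
```

For $`\xi =-80`$ the intrinsic mass is about $`31H`$, yet $`m_{eff}^2`$ vanishes identically: the field carries a large mass but propagates without a tail.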
Currently, cosmology faces two problems raised by recent observations: the age of the universe problem (the age of certain globular clusters is larger than the age of the universe inferred from the method of Cepheid variables ), and the discrepancy between the local and the global (based on the Sunyaev–Zel’dovich effect ) measures of the Hubble parameter $`H_0`$. In order to reconcile theory and observations, it has been proposed that the universe undergoes short periods of piece–wise exponential expansion that interrupt the matter–dominated era after star formation (“late time mild inflation” or LTMI) . ### 3.1 The proposed mechanism for LTMI is physically pathological The mechanism proposed in to achieve LTMI is based on a classical, massive, non self–interacting scalar field nonminimally coupled to the Ricci curvature of spacetime, and satisfying Eq. (2.2), in the context of general relativity. The authors of Ref. assume an Einstein–de Sitter universe and a baryon density of order $`\mathrm{\Omega }_m=0.01`$ (in units of the critical density $`\rho _c=3H^2/8\pi G`$). The Einstein equations for a mixture of dust and a scalar field are $$H^2=\frac{8\pi G}{3}\left(\rho _m+\rho _\varphi \right),$$ (3.2) $$\dot{H}+H^2=-\frac{4\pi G}{3}\left(\rho _m+\rho _\varphi +3P_\varphi \right),$$ (3.3) where $`H=\dot{a}/a`$ is the Hubble parameter, $`a(t)`$ is the scale factor of the Einstein–de Sitter line element, and an overdot denotes differentiation with respect to the comoving time $`t`$. $`\rho _m`$ is the energy density of dust, $`P_m=0`$, and the energy density and pressure of the scalar field component of the cosmic fluid are given by $$\rho _\varphi =\left(1-8\pi G\xi \varphi ^2\right)^{-1}\left[\frac{(\dot{\varphi })^2}{2}+F(\varphi )+6\xi H\varphi \dot{\varphi }\right],$$ (3.4) $$P_\varphi =\left(1-8\pi G\xi \varphi ^2\right)^{-1}\left[\left(\frac{1}{2}-2\xi \right)\dot{\varphi }^2-F(\varphi )-2\xi \varphi \ddot{\varphi }-4\xi H\varphi \dot{\varphi }\right],$$ (3.5) respectively, where $`F(\varphi )=m^2\varphi ^2/2`$. The Klein–Gordon equation becomes $$\ddot{\varphi }+3H\dot{\varphi }+m^2\varphi +6\xi \left(\dot{H}+2H^2\right)\varphi =0.$$ (3.6) A period of LTMI corresponds to the particular solution $$H_{*}=\left(\frac{m^2}{12|\xi |}\right)^{1/2},\varphi _{*}^2=\frac{1}{8\pi G|\xi |},$$ (3.7) for which $`\rho _\varphi \ne 0`$, $`P_\varphi =-\rho _\varphi `$. Due to the onset of instabilities, the exponential expansion soon stops and is followed by an oscillatory decay (due to the fact that $`m\ne 0`$). The values of the parameters $`m`$ and $`\xi `$ have to be adjusted in order to fit the observations; in particular, a negative value of $`\xi `$ and a rather large (compared to unity) value of its modulus are essential for a successful LTMI . The authors of Ref. chose $`\xi =-80`$ and $`m=10^{-31}`$ eV (although this is more an example than a best fit of the observational data, it gives an idea of the orders of magnitude of the parameters $`\xi ,m`$ needed for an interesting LTMI). In the light of the result of Ref. explained at the beginning of this section, the value of the coupling constant $`\xi `$ is fixed to $`1/6`$ in general relativity, and the LTMI scenario does not work. It is in principle possible that LTMI can be achieved in the context of a theory of gravity and of the boson field in which the prescription $`\xi =1/6`$ does not apply. In this case, one still has to deal with the other prescriptions for the value of $`\xi `$ existing in the literature (see for a review).
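As an aside, the stationarity of the LTMI configuration (3.7) can be verified with a minimal numerical sketch of Eqs. (3.6)–(3.8). The back-reaction through Eqs. (3.2)–(3.3) is ignored here (the Hubble rate is frozen at $`H_{*}`$), and the units ($`8\pi G=1`$, $`m=1`$) are an illustrative choice; this is a check, not part of the analysis of Ref. :

```python
# Check that (H_*, phi_*) of Eq. (3.7) is an equilibrium of the Klein-Gordon
# equation (3.6), and that the effective mass mu of Eq. (3.8) vanishes there.
import numpy as np
from scipy.integrate import solve_ivp

xi, m = -80.0, 1.0                          # Ref.'s coupling; m in arbitrary units
H_star = np.sqrt(m**2 / (12.0 * abs(xi)))   # Eq. (3.7)
phi_star = np.sqrt(1.0 / abs(xi))           # Eq. (3.7) with 8*pi*G = 1

mu_sq = m**2 + 6.0 * xi * (0.0 + 2.0 * H_star**2)  # Eq. (3.8), de Sitter Hdot = 0
print(mu_sq)                                # -> 0.0: the effective mass vanishes

def kg_rhs(t, y, H=H_star, Hdot=0.0):       # Eq. (3.6) at fixed H
    phi, dphi = y
    return [dphi, -3.0*H*dphi - m**2*phi - 6.0*xi*(Hdot + 2.0*H**2)*phi]

print(kg_rhs(0.0, [phi_star, 0.0]))         # -> [0, 0]: (3.7) is stationary
sol = solve_ivp(kg_rhs, (0.0, 50.0), [1.01 * phi_star, 0.0])
print(sol.y[0, -1] / phi_star)              # stays displaced at ~1.01
```

At fixed $`H_{*}`$ a displaced field merely stays displaced, since $`\mu =0`$; the saddle-type instability discussed in Sec. 3.2 below appears only when the metric perturbation is coupled in.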
A more serious problem is that, even ignoring the prescription $`\xi =1/6`$ coming from the Einstein equivalence principle, each phase of LTMI is extremely close to the ultrapathological spacetime described at the end of the previous section. In fact, LTMI corresponds to the vanishing of the effective mass given by $$\mu ^2=m^2+6\xi \left(\dot{H}+2H^2\right),$$ (3.8) while the ultrapathological space corresponds to the vanishing of $`m_{eff}`$ given by Eq. (2.10), $$m_{eff}^2=m^2+\left(6\xi -1\right)\left(\dot{H}+2H^2\right).$$ (3.9) For $`|\xi |1`$, $`m_{eff}\mu `$ and LTMI essentially reproduces the ultrapathological space. Using the value $`\xi =-80`$ of Ref. , one obtains from Eq. (3.8) that $`H_{*}^21.0416\times 10^{-3}m^2`$, while the ultrapathological case corresponds to $`H_{*}^21.0395\times 10^{-3}m^2`$. A very substantial part of the tail of $`\varphi `$ due to the intrinsic mass $`m`$ is cancelled by the tail due to the backscattering off the background curvature of spacetime. The cancellation becomes more and more precise as $`|\xi |`$ increases, which makes inflation more and more pronounced . ### 3.2 Alternative mechanisms for LTMI The mechanism used in Ref. to achieve LTMI is clearly unphysical. Is there a realistic mechanism that works? In order to answer this question, one possibility is adding a nontrivial (i.e. not a pure mass term) potential $`F(\varphi )`$ to the picture. However, one then defines the intrinsic mass of the scalar field in the late time mild inflationary state $`(H_{*},\varphi _{*})`$ as $`m_\varphi ^2=d^2F/d\varphi ^2(\varphi _{*})`$, and one is facing again the problem of the cancellation between the tail due to the intrinsic mass $`m`$ and the tail due to the backscattering off the background curvature. A way out of this dilemma could be the consideration of the linear potential $`F(\varphi )=\lambda \varphi `$, for which $`m_\varphi =0`$. In this case, the Einstein equations (3.2), (3.3), supplemented by the expressions for the energy density and pressure of the scalar field (3.4), (3.5), admit, for $`\xi <0`$, the de Sitter solution $$H_{*}^2=\left(\frac{\pi G}{6|\xi |}\right)^{1/2}\lambda ,\varphi _{*}^2=\frac{1}{24\pi G|\xi |}.$$ (3.10) In principle, one can stop a late time inflation of this kind; in the original mechanism for LTMI proposed in Ref. , the exit from the exponential expansion was due to the Lyapunov instability of the de Sitter solution against small perturbations. For a field in a linear potential the de Sitter solution is also unstable. In fact, consider the universe in a state that is a small perturbation of the $`(H_{*},\varphi _{*})`$ inflationary state, $$\varphi =\varphi _{*}(1+x),H=H_{*}(1+y),$$ (3.11) where $`x`$ and $`y`$ are small compared to unity. After straightforward calculations, one obtains the evolution equations for the perturbations, $$\left(\begin{array}{c}\dot{x}\\ \dot{y}\end{array}\right)=\left(\begin{array}{cc}a_1& a_2\\ a_3& a_4\end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right),$$ (3.12) where $$a_1=\alpha ,$$ (3.13) $$a_2=a_4=4\alpha ,$$ (3.14) $$a_3=\frac{2\alpha }{3\xi -2},$$ (3.15) $$\alpha =\left(\frac{\pi G}{6|\xi |}\right)^{1/4}\lambda ^{1/2},$$ (3.16) or $$\dot{\underline{x}}=\alpha M\underline{x}.$$ (3.17) The matrix $`M`$ has real eigenvalues $$s_{1,2}=\frac{3}{2}\left(1\pm \sqrt{1-\frac{16\xi }{2-3\xi }}\right),$$ (3.18) where the discriminant $`\mathrm{\Delta }=(18-75\xi )(2-3\xi )^{-1}>0`$ for $`\xi <0`$.
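Two quick numerical checks of the formulas of this subsection may be useful; the eigenvalue expression used below is the reconstructed Eq. (3.18), and units with $`m=1`$ are assumed:

```python
# Numerical check of the near-cancellation quoted above for xi = -80,
# and of the signs of the eigenvalues of Eq. (3.18).
import numpy as np

xi = -80.0
H2_ltmi = 1.0 / (12.0 * abs(xi))            # Eq. (3.7): mu^2 = 0, Hdot = 0
H2_ultra = 1.0 / (2.0 * (1.0 - 6.0 * xi))   # m_eff^2 = 0 in Eq. (3.9), Hdot = 0
print(H2_ltmi, H2_ultra)    # ~1.0417e-3 and ~1.0395e-3, the values quoted above

s = 1.5 * (1.0 + np.array([1.0, -1.0]) * np.sqrt(1.0 - 16.0*xi / (2.0 - 3.0*xi)))
print(s)                    # one positive and one negative eigenvalue
```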
Since $`s_1`$, $`s_2`$ have opposite signs, $`(H_{*},\varphi _{*})`$ is a saddle point, and describes an unstable equilibrium. A perturbation can grow and break the exponential expansion. Another possibility that one can naturally think of in order to avoid the ultrapathological spacetime consists in requiring that, during LTMI, the growth of the scale factor be accelerated, but not exponential. For example, one can search for piece–wise power–law inflationary solutions $`a=a_0t^p`$, where $`p>1`$. Then, the Ricci curvature is not constant and the exact cancellation of mass and curvature tails can occur at most at a single instant in the history of the universe; because of the monotonic behaviour of the Ricci curvature $`R=6p(2p-1)t^{-2}`$, the equation $`m_{eff}=0`$ has only one root. This instant of pathological behaviour can be avoided by making the piece–wise period of inflation sufficiently short. A power–law inflationary solution for a universe driven by a nonminimally coupled scalar field with the potential $$F(\varphi )=A\varphi ^n,n>6,$$ (3.19) was found in Ref. . One has, for this solution, $$p=2\frac{1+(n-10)\xi }{(n-4)(n-6)|\xi |}.$$ (3.20) In general relativity, the prescription $`\xi =1/6`$ yields $`p=2/(n-6)`$, which corresponds to i) accelerated expansion if $`6<n<8`$, ii) a coasting universe if $`n=8`$, and iii) a decelerated universe which still expands faster than $`a(t)=a_0t^{2/3}`$ if $`8<n<9`$. Hence, in principle, one can achieve periods of LTMI in general relativity, with the potential (3.19). The detailed analysis of these alternative mechanisms for LTMI and their comparison with the cosmological observations are beyond the scope of this paper, which focuses on tails of radiation. In addition, it would be desirable to identify the scalar field in the potential (3.19) with some known field from high energy physics, which is not done in the phenomenological approach to LTMI. ## 4 Discussion and conclusions The violation of the Huygens’ principle and the presence of tails of radiation have been studied for many years in the context of mathematical physics. Only recently has it been realized that tails of radiation have important physical applications. An example in astrophysics is given by the tails in the gravitational radiation emitted by compact objects. These tails are relevant for the correct data analysis (matched filtering) in the large laser interferometric detectors of gravitational waves (LIGO, VIRGO, GEO600, TAMA, …). In classical field theory in curved spaces, a counterintuitive result is that the absence of pathologies in the propagation of scalar fields fixes the value of the coupling constant $`\xi `$ of the scalar field to the Ricci curvature at $`1/6`$ . This prescription has far–reaching consequences for cosmological inflation ( and references therein), for the cosmic no–hair theorems , and possibly for other areas of cosmology and of theoretical physics . In the present paper, we have considered the idea of LTMI, recently proposed to solve the age of the universe problem and the puzzle of the discrepancy between the local and global measures of the Hubble parameter. While the idea of LTMI appears to be very valuable, unfortunately the mechanism employed to achieve it (a classical, massive, non self–interacting scalar field nonminimally coupled to the Ricci curvature) is not viable, because it corresponds to the extremely pathological physics discussed in Sec. 2. In fact, the LTMI scenario almost exactly reproduces the ultrapathological spacetime.
Alternative mechanisms to generate LTMI are discussed in Sec. 3, and the possibility of having LTMI with a negatively coupled, self–interacting scalar field is not ruled out in general relativity. However, one must be willing to pay the price of introducing a suitable scalar field potential and obtaining a less–than–exponential expansion of the universe during LTMI. The approach to LTMI is purely phenomenological, and no serious attempt is made to identify the scalar field with a known field from a high energy physics theory. The hypothetical possibility of identifying the scalar field with a superlight Proca field clearly does not work if the field is self–interacting (apart from the problem that a homogeneous vector field would introduce anisotropy, and a nontrivial distribution of this vector field would need to be considered ). The analysis of the LTMI scenario would have to be redone if a vector field instead of a scalar one were used as a source term in the right hand side of the Einstein equations. On the other hand, fields of different spin in the same background metric have different behaviour with respect to tails. For example, the Maxwell field satisfies the Huygens’ principle in a Friedmann–Robertson–Walker space; in fact, the latter is conformally flat, and the Maxwell equations are conformally invariant. The tail–free property of the Maxwell field in Minkowski space is then transferred to the Friedmann–Robertson–Walker space . Hopefully, a viable mechanism will be found which is capable of successfully implementing the idea of LTMI. Work in this direction is in progress. The model of the universe analogous to that of LTMI, but with $`\xi >0`$, does not give rise to inflation and was considered in Ref. in order to explain the reported periodicity in the redshift of galaxies . A look at the latter model with the knowledge of scalar field tails is also instructive, and leads to information on the nature of the correct theory of gravity, should the reported redshift periodicity turn out to be genuine and not an artifact of incomplete or faulty statistics. Previous literature and the present paper show that tails of scalar fields and nonminimal coupling to the Ricci curvature are very relevant for cosmology, and not only for inflationary theories. Moreover, tails and nonminimal ($`\xi \ne 0`$) coupling are forced upon us in almost all situations of physical interest. Thus, the study of these phenomena is not optional; rather, it is necessary in cosmology. ## Acknowledgments V.F. is grateful to Varun Sahni for stimulating discussions, to the RggR group for a visit at the Université Libre de Bruxelles, where this paper was begun, and to L. Niwa for a reading of the manuscript. This work was partially supported by EEC grants numbers PSS\* 0992 and CT1\*–CT94–0004, and by OLAM, Fondation pour la Recherche Fondamentale, Brussels.
# Quantum Field Theory in a Topology Changing Universe ## I Introduction The open inflation model proposed recently by Hawking-Turok revived an interest in the wave functions of quantum cosmology, and provoked a debate on the boundary conditions of the Universe . The present Lorentzian spacetime is supposed to emerge quantum mechanically from a Euclidean spacetime. Leading proposals for such wave functions are the Hartle–Hawking no-boundary wave function , the Linde wave function and the Vilenkin tunneling wave function . Though there is still no unanimous agreement on the boundary condition of the Universe, it is generally accepted that the Universe which tunneled quantum mechanically through the Euclidean spacetime should have either an exponentially growing or decaying wave function or a combination of both branches . But the issue of the quantum theory of matter fields in a topology changing universe such as the tunneling universe has not yet been raised seriously. It is the purpose of this paper to investigate a consistent quantum theory of a scalar field in the Euclidean region of a topology changing universe such as the tunneling universe and to construct quantum states explicitly there. To treat the quantum field in a curved spacetime two typical methods have been used: in the conventional approach the underlying spacetime is fixed as a background and the matter field is quantized on it , and in the other approach, known as semiclassical (quantum) gravity, the semiclassical Einstein equation with the quantum back-reaction of the matter field and the time-dependent Schrödinger equation are derived from the Wheeler-DeWitt equation for the gravity-matter system . In both approaches the functional Schrödinger equation for the scalar field in the Euclidean region obeys a non-unitary (diffusion-like) evolution equation. So the quantization rules of the Minkowski spacetime may not be applied directly. To construct the consistent quantum theory in the Euclidean region we propose a method in which one first quantizes the Wick-rotated gravity-matter system of the Euclidean geometry, derives the time-dependent Schrödinger equation and then transforms the quantum states back via the Wick rotation into those of the Lorentzian geometry. This method is consistent because the Wick rotations are well defined, and so is the time-dependent Schrödinger equation in the Euclidean region of the Euclidean geometry, just as in the Lorentzian region of the Lorentzian geometry. It is also useful in that one is able to find the quantum states explicitly using the Liouville-Neumann method, which has already been used to find quantum states of the scalar field in the Lorentzian regions of spacetime . The organization of the paper is as follows. In Sec. II we quantize the gravity-scalar system in the Lorentzian geometry and derive the semiclassical Einstein equation and the time-dependent Schrödinger equation from the Wheeler-DeWitt equation in the Lorentzian region where the gravitational wave function oscillates. The region where the gravitational wave function exhibits an exponential behavior corresponds to a Euclidean region of spacetime. We focus in particular on the Schrödinger equation in the Euclidean region of spacetime. In Sec. III we quantize the Euclidean gravity (geometry) coupled to the minimal scalar field and derive the time-dependent Schrödinger equation together with the semiclassical Einstein equation in the region corresponding to the Euclidean region of the Lorentzian geometry.
Quantum states are found using the Liouville-Neumann method. In Sec. IV the Wick rotation is employed to transform these quantum states defined in terms of the Euclidean geometry into those defined in terms of the Lorentzian geometry. ## II Quantum Theory in Lorentzian Geometry As a simple but interesting quantum cosmological model, let us consider the closed FRW universe minimally coupled to an inflaton, a minimal scalar field. The action for the gravity with a cosmological constant $`\Lambda `$ and the scalar field takes the form $$I=\frac{m_P^2}{16\pi }\int d^4x\sqrt{-g}\left[R-2\Lambda \right]+\frac{m_P^2}{8\pi }\int d^3x\sqrt{h}K+\int d^4x\sqrt{-g}\left[-\frac{1}{2}g^{\mu \nu }\partial _\mu \varphi \partial _\nu \varphi -V(\varphi )\right],$$ (1) where $`m_P^2=1/G`$ is the Planck mass squared. The surface term for the gravity has been introduced to yield the correct Einstein equation for the closed universe. In the Lorentzian FRW universe with the metric in the ADM formulation $$ds_L^2=-N^2(t)dt^2+a^2(t)d\Omega _3^2,$$ (2) the action becomes $$I_L=\int dt\left[-\frac{3\pi m_P^2}{4}\left(\frac{a}{N}\left(\frac{\partial a}{\partial t}\right)^2-NV_g(a)\right)+2\pi ^2a^3\left(\frac{1}{2N}\left(\frac{\partial \varphi }{\partial t}\right)^2-NV(\varphi )\right)\right],$$ (3) where $$V_g(a)=a-\frac{\Lambda }{3}a^3.$$ (4) In the above equation we dropped the second order derivative term, which is to be cancelled by a boundary action. By introducing the canonical momenta $$\pi _a=-\frac{3\pi m_P^2a}{2N}\frac{\partial a}{\partial t},\pi _\varphi =\frac{2\pi ^2a^3}{N}\frac{\partial \varphi }{\partial t},$$ (5) one obtains the Hamiltonian constraint $$\mathcal{H}_L=\widehat{H}_g(\pi _a,a)+\widehat{H}_L(\pi _\varphi ,\varphi ,a)=0,$$ (6) where $`\widehat{H}_g(\pi _a,a)`$ $`=`$ $`-{\displaystyle \frac{1}{3\pi m_P^2a}}\pi _a^2-{\displaystyle \frac{3\pi m_P^2}{4}}V_g(a),`$ (7) $`\widehat{H}_L(\pi _\varphi ,\varphi ,a)`$ $`=`$ $`{\displaystyle \frac{1}{4\pi ^2a^3}}\pi _\varphi ^2+2\pi ^2a^3V(\varphi )`$ (8) are the Hamiltonians for the gravity and scalar field, respectively. The Dirac quantization leads to the Wheeler-DeWitt equation for the Lorentzian geometry $$\left[-\frac{\hbar ^2}{3\pi m_P^2a}\frac{\partial ^2}{\partial a^2}+\frac{3\pi m_P^2}{4}V_g(a)+\frac{\hbar ^2}{4\pi ^2a^3}\frac{\partial ^2}{\partial \varphi ^2}-2\pi ^2a^3V(\varphi )\right]\Psi _L(a,\varphi )=0,$$ (9) where we neglected the operator ordering ambiguity. Before we derive the equation for quantum fields in the Euclidean region, we review briefly how to obtain the time-dependent Schrödinger equation in the Lorentzian region from the semiclassical (quantum) gravity point of view.
In the Lorentzian region, where the wave function for the gravitational field oscillates, we first adopt the Born-Oppenheimer idea to expand the total wave function according to different mass scales $$\Psi _L(a,\varphi )=\psi _L(a)\Phi _L(\varphi ,a),$$ (10) and obtain the gravitational field equation with the back-reaction of the matter field $$\left[-\frac{\hbar ^2}{3\pi m_P^2a}D^2+\frac{3\pi m_P^2}{4}V_g(a)-\langle \widehat{H}_L\rangle -\frac{\hbar ^2}{3\pi m_P^2a}\langle \overline{D}^2\rangle \right]\psi _L(a)=0.$$ (11) Here, $`D`$ and $`\overline{D}`$ denote the covariant derivatives $$D=\frac{\partial }{\partial a}+iA(a),\overline{D}=\frac{\partial }{\partial a}-iA(a),$$ (12) with an effective gauge potential from the scalar field $$A(a)=i\frac{\langle \Phi _L|\frac{\partial }{\partial a}|\Phi _L\rangle }{\langle \Phi _L|\Phi _L\rangle },$$ (13) and $`\langle \widehat{H}_L\rangle `$ and $`\langle \overline{D}^2\rangle `$ denote the expectation values of the corresponding operators $$\langle \widehat{H}_L\rangle =\frac{\langle \Phi _L|\widehat{H}_L|\Phi _L\rangle }{\langle \Phi _L|\Phi _L\rangle },\langle \overline{D}^2\rangle =\frac{\langle \Phi _L|\overline{D}^2|\Phi _L\rangle }{\langle \Phi _L|\Phi _L\rangle }.$$ (14) By putting Eq. (10) into Eq. (9) and by subtracting Eq. (11), one gets the equation for the matter field $$-\frac{2\hbar ^2}{3\pi m_P^2a}\frac{1}{\psi _L}\left(D\psi _L\right)\left(\overline{D}\Phi _L\right)+\left(\widehat{H}_L-\langle \widehat{H}_L\rangle \right)\Phi _L-\frac{\hbar ^2}{3\pi m_P^2a}\left(\overline{D}^2-\langle \overline{D}^2\rangle \right)\Phi _L=0.$$ (15) Since $`A(a)`$ is a gauge potential and a $`c`$-number, the geometric phases for the wave function and quantum state $$\psi _L(a)=e^{-i{\scriptscriptstyle \int }A\,da}\tilde{\psi }_L,\Phi _L=e^{i{\scriptscriptstyle \int }A\,da}\tilde{\Phi }_L,$$ (16) remove the gauge potentials from the covariant derivatives $`D`$ and $`\overline{D}`$ in Eqs. (11) and (15). However, the total wave function (10) keeps the same form $`\Psi _L=\tilde{\psi }_L\tilde{\Phi }_L`$. From now on we shall work with the wave function and quantum state (16), drop the tildes for simplicity and ignore the last terms in Eqs. (11) and (15), which are small compared with the other terms. We then follow the de Broglie-Bohm interpretation and set the gravitational wave function in the form $$\psi _{L(II)}(a)=F(a)\mathrm{exp}\left[\pm \frac{i}{\hbar }S_{L(II)}(a)\right],$$ (17) Here, $`(II)`$ denotes an oscillatory region of the Lorentzian geometry (see Fig. 1) and the $`\pm `$ signs correspond to the expanding and collapsing branches of the universe, respectively. The real part gives rise to the Hamilton-Jacobi equation $$\frac{1}{3\pi m_P^2a}\left(\frac{\partial S_{L(II)}}{\partial a}\right)^2+\frac{3\pi m_P^2}{4}V_g-\frac{1}{3\pi m_P^2a}V_q-\langle \widehat{H}_L\rangle =0,$$ (18) where $$V_q(a)=\hbar ^2\frac{\partial ^2F/\partial a^2}{F}$$ (19) is the quantum potential. The imaginary part leads to the continuity equation $$F\frac{\partial ^2S_{L(II)}}{\partial a^2}+2\frac{\partial F}{\partial a}\frac{\partial S_{L(II)}}{\partial a}=0.$$ (20) The contribution $`V_q`$ from the quantum potential will also be ignored, which is at most of one-loop or higher orders. By integrating the $`c`$-number $`\langle \widehat{H}_L\rangle `$ and writing it as a phase factor of $`\Phi _L=e^{i{\scriptscriptstyle \int }\langle \widehat{H}_L\rangle }\tilde{\tilde{\Phi }}_L`$ in Eq.
(15) and once again dropping the tilde for simplicity, one also obtains the time-dependent Schrödinger equation for the scalar field $$i\hbar \frac{\partial }{\partial t}\Phi _L(\varphi ,t)=\widehat{H}_L(\frac{\hbar }{i}\frac{\partial }{\partial \varphi },\varphi ,t)\Phi _L(\varphi ,t),$$ (21) where $`t`$ is the cosmological (WKB) time $$\frac{\partial }{\partial t}=-\frac{2}{3\pi m_P^2a}\frac{\partial S_{L(II)}}{\partial a}\frac{\partial }{\partial a}.$$ (22) By identifying the cosmological time (22) with the comoving time in Eq. (2) and by making use of $$\frac{\partial S_{L(II)}}{\partial a}=-\frac{3\pi m_P^2a}{2}\frac{\partial a}{\partial t},$$ (23) one sees that Eq. (18) becomes indeed the semiclassical Einstein equation $$\left(\frac{\partial a/\partial t}{a}\right)^2+\frac{1}{a^2}-\frac{\Lambda }{3}=\frac{4}{3\pi m_P^2a^3}\langle \widehat{H}_L\rangle .$$ (24) The spacetime regions are divided according to whether the effective potential for the gravitational field $$V_L(a)=\frac{3\pi m_P^2}{4}V_g(a)-\langle \widehat{H}_L\rangle $$ (25) takes positive or negative values. For the sake of simplicity, we assume that the quantum back-reaction of the scalar field is insignificant compared with $`V_g`$, so that the effective potential $`V_L`$ has the simple form in Fig. 1. The region I of Fig. 1, where $`V_L`$ is positive, corresponds to a part of Euclidean spacetime, whereas the region II, where $`V_L`$ is negative, corresponds to a part of Lorentzian spacetime. Being mostly interested in the quantum creation of the universe from the Euclidean region of the tunneling universe to the Lorentzian region, we focus on the region I of Fig. 1. Though the gravitational motion is prohibited classically in the region I, it is, however, permitted quantum mechanically. In this region one is tempted to continue analytically the wave function (17) to get $$\psi _{L(I)}(a)=F(a)\mathrm{exp}\left[-\frac{1}{\hbar }S_{L(I)}(a)\right],$$ (26) whose dominant contribution to Eq. (11) leads to the Hamilton-Jacobi-like equation $$\left(\frac{\partial S_{L(I)}}{\partial a}\right)^2=3\pi m_P^2aV_L.$$ (27) At the same time one is able to obtain from Eq. (15) the time-dependent Schrödinger equation $$\hbar \frac{\partial }{\partial s}\Phi _L(\varphi ,s)=\widehat{H}_L(\frac{\hbar }{i}\frac{\partial }{\partial \varphi },\varphi ,s)\Phi _L(\varphi ,s),$$ (28) where $`s`$ is a Euclidean analog of the cosmological time defined by $$\frac{\partial }{\partial s}=\pm \frac{2}{3\pi m_P^2a}\frac{\partial S_{L(I)}}{\partial a}\frac{\partial }{\partial a}.$$ (29) But the scalar field Hamiltonian $`\widehat{H}_L`$ keeps the same form. Then the following questions are raised. What is the meaning of the parameter $`s`$? Is it an analytic continuation of the cosmological time or a Wick-rotated Euclidean time? How does one solve Eq. (28), an apparently time-dependent diffusion-like equation? What are the quantization rule $`[\widehat{\varphi },\widehat{\pi }_\varphi ]`$ and the meaning of quantum states of this non-unitary evolution? To answer these questions and to follow the analogy with quantum theory of the Lorentzian spacetime, we shall consider the quantum theory of the scalar field by quantizing the Euclidean geometry.
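Before doing so, the division of Fig. 1 into the regions I and II can be illustrated numerically. The minimal sketch below locates the turning point $`a_0`$ where $`V_L`$ of Eq. (25) changes sign; the values of $`\Lambda `$ and of the constant back-reaction $`\langle \widehat{H}_L\rangle `$, and the units $`m_P=1`$, are illustrative assumptions only:

```python
# Locate the turning point a_0 separating the Euclidean region I (V_L > 0)
# from the Lorentzian region II (V_L < 0), as in Fig. 1.
import numpy as np
from scipy.optimize import brentq

Lam, H_exp = 3.0, 0.05            # illustrative Lambda and constant <H_L>

def V_L(a):
    V_g = a - (Lam / 3.0) * a**3  # Eq. (4)
    return 0.75 * np.pi * V_g - H_exp   # Eq. (25) with m_P = 1

a0 = brentq(V_L, 0.5, 1.5)        # sign change near a = sqrt(3/Lambda)
print(a0)                          # turning point a_0
print(V_L(0.9 * a0) > 0, V_L(1.1 * a0) < 0)   # region I inside, II outside
```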
## III Quantum Theory in Euclidean Geometry To obtain the quantum cosmological model for the Euclidean spacetime, we perform the Wick rotation $`t=-i\tau `$ and consider the Euclidean metric $$ds^2=N^2(\tau )d\tau ^2+a^2(\tau )d\Omega _3^2.$$ (30) From the Euclidean action $$I_E=\int d\tau \left[\frac{3\pi m_P^2}{4}\left(\frac{a}{N}\left(\frac{\partial a}{\partial \tau }\right)^2+NV_g(a)\right)-2\pi ^2a^3(\tau )\left(\frac{1}{2N}\left(\frac{\partial \varphi }{\partial \tau }\right)^2+NV(\varphi )\right)\right],$$ (31) we obtain the Hamiltonian constraint $$\mathcal{H}_E=\frac{1}{3\pi m_P^2a}\pi _{E,a}^2-\frac{3\pi m_P^2}{4}V_g(a)-\frac{1}{4\pi ^2a^3}\pi _{E,\varphi }^2+2\pi ^2a^3V(\varphi )=0,$$ (32) where $$\pi _{E,a}=\frac{3\pi m_P^2a}{2N}\frac{\partial a}{\partial \tau },\pi _{E,\varphi }=-\frac{2\pi ^2a^3}{N}\frac{\partial \varphi }{\partial \tau }.$$ (33) The Hamiltonian constraint (32) leads to the Wheeler-DeWitt equation for the Euclidean geometry $$\left[-\frac{\hbar ^2}{3\pi m_P^2a}\frac{\partial ^2}{\partial a^2}-\frac{3\pi m_P^2}{4}V_g(a)+\frac{\hbar ^2}{4\pi ^2a^3}\frac{\partial ^2}{\partial \varphi ^2}+2\pi ^2a^3V(\varphi )\right]\Psi _E(a,\varphi )=0.$$ (34) It should be remarked that the Wick rotation changed both signs of the kinetic terms of the scalar and gravitational fields. This is the reason why the Euclidean action cannot be made positive definite for the gravity-matter system. We now wish to obtain the semiclassical Einstein equation and the time-dependent Schrödinger equation in the Euclidean region I of Fig. 1. Since the sign of the kinetic term of the gravitational field was reversed due to the Wick rotation, the wave function $`\Psi _E`$ of the Wheeler-DeWitt equation (34) now oscillates in the region I. This is in contrast with the behavior of the wave function $`\Psi _L`$. Thus we may use the semiclassical quantum gravity approach in Sec. II and Ref. . As in the Lorentzian spacetime we may expand the total wave function in the form of Eq. (10) and set the wave function for the gravity in the form $$\psi _E(a)=F(a)\mathrm{exp}\left[\pm \frac{i}{\hbar }S_E(a)\right].$$ (35) The real part of the resultant gravitational field equation, which is similar to Eq. (11) with the reversed signs for both kinetic terms, gives rise to the Hamilton-Jacobi equation $$\frac{1}{3\pi m_P^2a}\left(\frac{\partial S_E}{\partial a}\right)^2-\frac{3\pi m_P^2}{4}V_g+\langle \widehat{H}_E\rangle =0,$$ (36) where $$\widehat{H}_E=\frac{\hbar ^2}{4\pi ^2a^3}\frac{\partial ^2}{\partial \varphi ^2}+2\pi ^2a^3V(\varphi )$$ (37) is the scalar field Hamiltonian in the Euclidean geometry. As in the Lorentzian geometry, one can get the semiclassical Einstein equation in the Euclidean geometry $$\left(\frac{\partial a/\partial \tau }{a}\right)^2-\frac{1}{a^2}+\frac{\Lambda }{3}=-\frac{4}{3\pi m_P^2a^3}\langle \widehat{H}_E\rangle .$$ (38) We note that the Euclidean region I corresponds to the region where the effective potential for the gravitational field $$V_E(a)=\frac{3\pi m_P^2}{4}V_g-\langle \widehat{H}_E\rangle $$ (39) takes positive values. To consolidate this point further, let us recall that $`\langle \widehat{H}_E\rangle `$ is a Wick rotation of $`\langle \widehat{H}_L\rangle `$, as will be shown later. So the region where $`V_E`$ is positive coincides with the region I where $`V_L`$ is positive, too. However, the wave function for the quantized Euclidean geometry oscillates in this region, in contrast with the exponential behavior of the wave function for the quantized Lorentzian geometry.
Therefore, we are able to obtain the time-dependent unitary Schrödinger equation $$i\hbar \frac{\partial }{\partial \tau }\Phi _E(\varphi ,\tau )=\widehat{H}_E(\frac{\hbar }{i}\frac{\partial }{\partial \varphi },\varphi ,\tau )\Phi _E(\varphi ,\tau ),$$ (40) where $`\tau `$ is the cosmological time defined as $$\frac{\partial }{\partial \tau }=\frac{2}{3\pi m_P^2a}\frac{\partial S_E}{\partial a}\frac{\partial }{\partial a}.$$ (41) Finally we turn to the task of finding explicitly the quantum states of the scalar field obeying Eq. (40). In Ref. the Liouville-Neumann method has been used to construct the Hilbert spaces for quantum inflatons in the FRW background, exactly for a quadratic potential and approximately for a generic potential. Similarly we look for the operators that satisfy the Liouville-Neumann equation $$i\hbar \frac{\partial }{\partial \tau }\left\{\begin{array}{c}\widehat{A}^{\dagger }\\ \widehat{A}\end{array}\right\}+[\left\{\begin{array}{c}\widehat{A}^{\dagger }\\ \widehat{A}\end{array}\right\},\widehat{H}_E]=0.$$ (42) Two independent Liouville-Neumann operators are found $`\widehat{A}^{\dagger }(\tau )`$ $`=`$ $`i\left(\phi _E(\tau )\widehat{\pi }_\varphi -a^3(\tau ){\displaystyle \frac{\partial \phi _E(\tau )}{\partial \tau }}\widehat{\varphi }\right),`$ (43) $`\widehat{A}(\tau )`$ $`=`$ $`-i\left(\phi _E^{*}(\tau )\widehat{\pi }_\varphi -a^3(\tau ){\displaystyle \frac{\partial \phi _E^{*}(\tau )}{\partial \tau }}\widehat{\varphi }\right),`$ (44) where $`\phi _E`$ is a complex solution to the equation $$\frac{\partial ^2\phi _E(\tau )}{\partial \tau ^2}+3\left(\frac{\partial a(\tau )/\partial \tau }{a(\tau )}\right)\frac{\partial \phi _E(\tau )}{\partial \tau }-\frac{\delta ^2V(\widehat{\varphi })}{\delta \widehat{\varphi }^2}\phi _E(\tau )=0.$$ (45) Gaussian states are obtained by taking the expectation value of Eq. (45) with respect to the ground state defined by $`\widehat{A}(\tau )|0(\tau )\rangle =0`$, and by solving the following equation $$\frac{\partial ^2\phi _E(\tau )}{\partial \tau ^2}+3\left(\frac{\partial a(\tau )/\partial \tau }{a(\tau )}\right)\frac{\partial \phi _E(\tau )}{\partial \tau }-\langle 0(\tau )|\frac{\delta ^2V(\widehat{\varphi })}{\delta \widehat{\varphi }^2}|0(\tau )\rangle \phi _E(\tau )=0.$$ (46) It should be noted that Eq. (45) can also be obtained by the Wick rotation of the Lorentzian equation $$\frac{\partial ^2\phi _L(t)}{\partial t^2}+3\left(\frac{\partial a(t)/\partial t}{a(t)}\right)\frac{\partial \phi _L(t)}{\partial t}+\frac{\delta ^2V(\widehat{\varphi })}{\delta \widehat{\varphi }^2}\phi _L(t)=0.$$ (47) Note also that the inverted potential in Eq. (46) can be obtained through the mean-field approximation and the Wick rotation of the Heisenberg equation of motion in the Lorentzian region $$\frac{\partial ^2\widehat{\varphi }_L}{\partial t^2}+3\left(\frac{\partial a(t)/\partial t}{a(t)}\right)\frac{\partial \widehat{\varphi }_L(t)}{\partial t}+\frac{\delta V(\widehat{\varphi }_L)}{\delta \widehat{\varphi }_L}=0.$$ (48) All these aspects are expected in the Wick rotation of quantum theory in the Minkowski spacetime. ## IV Transformation between Lorentzian and Euclidean Quantum Geometries In the tunneling universe of Fig. 1, the Lorentzian geometry is sewn to the Euclidean geometry. There should be a matching condition or surgery of the two geometries. Classically, for a smooth matching the extrinsic curvature should be continuous across the boundary. In the FRW universe where the Lorentzian spacetime is connected to the Euclidean spacetime, the extrinsic curvature is given by $`\pi _a`$. We also require that the geometric quantities $`a,\pi _a`$ and physical quantities $`\varphi ,\pi _\varphi `$ be continuous.
Sometimes all these conditions are taken to mean the continuity of the wave function of the Wheeler-DeWitt equation across the boundary, just as the wave function of a quantum mechanical system is continuous across the boundary of a tunneling barrier. Though one is tempted to continue Eq. (18) analytically to describe the quantum theory in the Euclidean region, we have seen that such a prescription does not provide a good picture, particularly for a gravity-matter system. In the quantum Lorentzian geometry, the quantum theory of the scalar field in the Euclidean region I is defined in an ad hoc manner via the non-unitary Schrödinger equation. Besides, the quantum operators $`\widehat{\pi }_\varphi `$ and $`\widehat{\varphi }`$, and all the quantization rules, are defined in exactly the same manner as in the Lorentzian region. This is obviously not a Wick rotation. On the other hand, in the quantum Euclidean geometry the oscillatory behavior of the Wheeler-DeWitt equation in the same region I enables one to apply the semiclassical quantum gravity approach to obtain a well-defined quantum theory of the scalar field. To get a quantum picture for the scalar field in the Euclidean region there should be a transformation of the Hilbert space constructed in the quantum Euclidean geometry into that of the quantum Lorentzian geometry. In the de Broglie-Bohm interpretation the canonical momenta are related to the actions $$\pi _a=\frac{\partial S_{L(II)}}{\partial a},\pi _{E,a}=\frac{\partial S_E}{\partial a}.$$ (49) $`\pi _a`$ is well defined in the region II $`(aa_0)`$, since $$\left(\pi _a\right)^2=-3\pi m_P^2aV_L(a)\ge 0,$$ (50) whereas $`\pi _{E,a}`$ is well defined in the region I $`(aa_0)`$, since $$\left(\pi _{E,a}\right)^2=3\pi m_P^2aV_E(a)\ge 0.$$ (51) To find the momentum $`\pi _a`$ of the Lorentzian geometry in the Euclidean region I we transform $`\pi _{E,a}`$ back by the inverse Wick rotation $`\tau =is`$. Hence, momenta in the Lorentzian and Euclidean geometries are related by the following transformations $`\pi _{E,a}`$ $`=`$ $`i\pi _a,`$ (52) $`\pi _{E,\varphi }`$ $`=`$ $`i\pi _\varphi .`$ (53) By making use of Eqs. (49) and (52) we recover the gravitational field wave function (26) of the Lorentzian geometry from that of the Euclidean geometry: $$\Psi _E(a)=F(a)\mathrm{exp}\left[\pm \frac{i}{\hbar }S_E(a)\right]\to \Psi _L(a)=F(a)\mathrm{exp}\left[-\frac{1}{\hbar }S_{L(I)}(a)\right].$$ (54) We turn to the transformation of quantum states of the scalar field. In the region II the scalar field has the energy expectation value with respect to the symmetric Gaussian state $`\langle \widehat{\varphi }\rangle =\varphi _c=0`$ $$\langle \widehat{H}_L\rangle =\pi ^2\hbar ^2a^3\frac{\partial \phi _{L(II)}^{*}}{\partial t}\frac{\partial \phi _{L(II)}}{\partial t}+2\pi ^2a^3\left[\mathrm{exp}\left(\frac{\hbar ^2}{2}\phi _{L(II)}^{*}\phi _{L(II)}\frac{\partial ^2}{\partial \varphi _c^2}\right)-1\right]V(\varphi _c=0).$$ (55) Similarly, in the region I of the Euclidean geometry the energy expectation value is given by $$\langle \widehat{H}_E\rangle =\pi ^2\hbar ^2a^3\frac{\partial \phi _E^{*}}{\partial \tau }\frac{\partial \phi _E}{\partial \tau }+2\pi ^2a^3\left[\mathrm{exp}\left(\frac{\hbar ^2}{2}\phi _E^{*}\phi _E\frac{\partial ^2}{\partial \varphi _c^2}\right)-1\right]V(\varphi _c=0).$$ (56) Thus $`\langle \widehat{H}_E\rangle `$ is the true Wick rotation of $`\langle \widehat{H}_L\rangle `$. This justifies the fact that the region I of the Lorentzian geometry coincides with the region where $`V_E`$ is positive and the wave function oscillates. The Wick rotation transforms $`\langle \widehat{H}_E\rangle `$ back into $`\langle \widehat{H}_L\rangle `$ and recovers the positive signature of the kinetic term. Likewise, Eq. (28) is the Wick rotation of Eq. (40).
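As an illustration of the mode equations, Eq. (45) can be integrated on the Euclidean de Sitter background $`a(\tau )=\sqrt{3/\Lambda }\mathrm{cos}(\sqrt{\Lambda /3}\tau )`$, the bounce solution discussed below, for the quadratic potential $`V=m^2\varphi ^2/2`$. The sketch evolves a real solution only (the complex $`\phi _E`$ needed for the operators (43), (44) follows from two such runs), and all values are illustrative:

```python
# Integrate Eq. (45), phi'' + 3(a'/a) phi' - m^2 phi = 0, on the Euclidean
# de Sitter background; for this a(tau), a'/a = -H tan(H tau) with
# H = sqrt(Lambda/3). Units and initial data are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

Lam, m = 3.0, 1.0
H = np.sqrt(Lam / 3.0)

def rhs(tau, y):
    phi, dphi = y
    a_prime_over_a = -H * np.tan(H * tau)
    return [dphi, -3.0 * a_prime_over_a * dphi + m**2 * phi]

sol = solve_ivp(rhs, (0.0, 1.4), [1.0, 0.0], rtol=1e-8)  # stop before a -> 0
print(sol.y[0, -1])   # the mode grows monotonically: an inverted potential
```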
Therefore, in the region I all quantum states of the scalar field in the Lorentzian geometry are obtained by Wick rotating those in the Euclidean geometry. How can we find quantum states directly in the Lorentzian geometry? For this purpose we should find the quantization rule in the region I $$[\widehat{\varphi },\widehat{\pi }_\varphi ]=\hbar ,$$ (57) which follows from Eq. (53) and the standard quantization in the Euclidean geometry $$[\widehat{\varphi }_E,\widehat{\pi }_{E,\varphi }]=i\hbar .$$ (58) Though not firmly established, we may use the non-unitary version of the Liouville-Neumann equation $$\hbar \frac{\partial }{\partial s}\widehat{\mathcal{O}}+[\widehat{\mathcal{O}},\widehat{H}_L]=0.$$ (59) Two operators are found $`\widehat{A}_+(s)`$ $`=`$ $`\phi _{L(I)}(s)\widehat{\pi }_\varphi -a^3{\displaystyle \frac{\partial \phi _{L(I)}(s)}{\partial s}}\widehat{\varphi },`$ (60) $`\widehat{A}_{-}(s)`$ $`=`$ $`\phi _{L(I)}^{*}(s)\widehat{\pi }_\varphi +a^3{\displaystyle \frac{\partial \phi _{L(I)}^{*}(s)}{\partial s}}\widehat{\varphi },`$ (61) where $`\phi _{L(I)}`$ is a complex solution to the equation $$\frac{\partial ^2\phi _{L(I)}(s)}{\partial s^2}+3\left(\frac{\partial a/\partial s}{a}\right)\frac{\partial \phi _{L(I)}(s)}{\partial s}+\frac{\delta ^2V(\widehat{\varphi })}{\delta \widehat{\varphi }^2}\phi _{L(I)}(s)=0.$$ (62) The operators $`\widehat{A}_+(s)`$ and $`\widehat{A}_{-}(s)`$ play the same roles as $`\widehat{A}^{\dagger }(\tau )`$ and $`\widehat{A}(\tau )`$, respectively. Note that Eq. (62) is the Wick rotation of Eq. (45). Finally we discuss the interpretation of a bounce solution of the semiclassical Einstein equation (38) in the Euclidean geometry in terms of a temperature for the tunneling regime in the Lorentzian geometry . Without the back-reaction of matter, Eq. (38) has a periodic solution $$a(\tau )=\sqrt{\frac{3}{\Lambda }}\mathrm{cos}\left(\sqrt{\frac{\Lambda }{3}}\tau \right).$$ (63) The periodic solution (63) is the analytic continuation of the de Sitter solution to the semiclassical Einstein equation (24) in the Lorentzian geometry $$a(t)=\sqrt{\frac{3}{\Lambda }}\mathrm{cosh}\left(\sqrt{\frac{\Lambda }{3}}t\right).$$ (64) When we adopt the standard interpretation of finite temperature fields in the Minkowski spacetime, the period of Eq. (63) corresponds to an inverse temperature $$\tau =\frac{1}{T}=\frac{2\pi }{\sqrt{\frac{\Lambda }{3}}}.$$ (65) This coincides with the temperature for the de Sitter spacetime obtained by other methods. However, with the back-reaction (56), Eq. (38) reads $$\left(\frac{\partial a/\partial \tau }{a}\right)^2-\frac{1}{a^2}+\frac{\Lambda }{3}=-\frac{4\pi }{3m_P^2}\left\{\hbar ^2\frac{\partial \phi _E^{*}}{\partial \tau }\frac{\partial \phi _E}{\partial \tau }+2\left[\mathrm{exp}\left(\frac{\hbar ^2}{2}\phi _E^{*}\phi _E\frac{\partial ^2}{\partial \varphi _c^2}\right)-1\right]V(\varphi _c=0)\right\}.$$ (66) The task of finding the temperature for the gravity-matter system is equivalent to solving both Eqs. (66) and (45) or (46) and finding a periodic solution. This requires further study . ## V Conclusion We have studied quantum field theory of matter in the universe undergoing a topology change from the Euclidean region into the Lorentzian region. It is shown that the semiclassical gravity derived from canonical quantum gravity provides a consistent scheme for quantum field theory in such topology changing universes. The Lorentzian and Euclidean regions of spacetime are classified according to the behavior of the wave function of the gravitational field with the quantum back-reaction of matter included. In the Lorentzian region the gravitational wave function oscillates.
Provided a cosmological time is properly chosen along the trajectory of the oscillating gravitational wave function, the semiclassical Einstein equation has the same form as the classical Einstein equation with the quantum back-reaction of matter as a source. The time-dependent Schrödinger equation is identical to that of canonical quantum field theory. On the other hand, in the Euclidean region the gravitational wave function shows either an exponentially growing or decaying behavior or a superposition of them. One may derive the semiclassical Einstein equation and time-dependent Schrödinger equation in the sense of analytic continuation. However, it is found that the Schrödinger equation evolves like a diffusion equation, not preserving unitarity, and the quantization rule differs from the usual one in the Lorentzian spacetime. In order to construct a consistent quantum theory of the matter field we have proposed a scheme in which the gravity-matter system is Wick-rotated in the Euclidean region of spacetime and the semiclassical (quantum) gravity is derived from the Wheeler-DeWitt equation for the Wick-rotated Euclidean geometry. The time-dependent Schrödinger equation is well defined as in the Lorentzian region and quantum states are found using the Liouville-Neumann method. Finally these quantum states are transformed via the inverse Wick rotation back into those of the Lorentzian geometry. This quantum field theory applies to a Universe that emerged quantum mechanically from a Euclidean region of spacetime. It would be interesting to see the physical consequences of the quantum fields for different boundary conditions of the Universe. Another physically interesting problem requiring further study is to find the periodic solution of both the semiclassical Einstein equation and the matter field equation in the Euclidean geometry and to interpret the period as the inverse temperature for the gravity-matter system in the tunneling regime of the Lorentzian geometry . ###### Acknowledgements. The author wishes to acknowledge the financial support of the Korea Research Foundation under contract No. 1998-001-D00364 and through the BSRI Program under contract No. 1998-015-D00129.
# THE RR LYRAE PERIOD-AMPLITUDE RELATION AS A CLUE TO THE ORIGIN OF THE OOSTERHOFF DICHOTOMY ## 1 INTRODUCTION It has often been assumed that the period-amplitude relation for fundamental mode RR Lyrae (RRab) stars is a function of metal abundance. Forty years ago, in a study of the period-$`m_{pg}`$ amplitude diagram for approximately 50 field RRab stars, Preston (1959) demonstrated that the more metal-poor and more metal-rich stars appeared to define two sequences well separated in amplitude. Subsequently, Dickens & Saunders (1965) and Dickens (1970) found similar results for RRab stars in globular clusters. Then, in a study of six well observed clusters, Sandage (1981b, hereafter referred to as S81b) quantified this result. He measured a period shift, $`\mathrm{\Delta }\mathrm{log}P`$, for each cluster relative to the mean period-amplitude relation for M3 and found a correlation between $`\mathrm{\Delta }\mathrm{log}P`$ and metal abundance in the sense that the more metal poor RR Lyrae variables had longer periods. As a result of this, some investigators have used the period-amplitude relation as an indicator of metal abundance, particularly in faint systems where metal abundance is difficult to estimate by other methods. Sandage’s (S81b) result was based mainly on photographic data, but in the last ten years, CCD detectors have been widely used for observations of RR Lyrae variables in globular clusters. CCDs are linear detectors and this makes the photometry more accurate. They also have a higher quantum efficiency than photographic emulsions so that exposure times can be shorter. As a result, the time and magnitude at maximum and minimum light can be more precisely established and it is possible to derive more accurate amplitudes. Another problem with earlier studies was that, in some cases, stars with non-repeating light curves, stars that exhibit the Blazhko effect, were included in the samples. The amplitude of light variation for Blazhko stars varies over time scales longer than the basic pulsation period. Thus, if Blazhko variables are not identified they introduce scatter into the period-amplitude (P-A) diagram. To address this problem, Jurcsik & Kovács (1996, hereafter JK) recently devised a compatibility test for identifying Blazhko variables. The purpose of our investigation is to use $`V`$ amplitudes derived from CCD photometry to re-examine the P-A relation for RR Lyrae variables in globular clusters of both Oosterhoff types and to apply JK’s test so that Blazhko variables can be identified. In Table 1, we list \[Fe/H\], horizontal branch classification and Oosterhoff (1939, 1944) type for seven clusters for which published $`V`$ photometry is available. The \[Fe/H\] is on the system of Jurcsik (1995, hereafter J95) and the horizontal branch (HB) classification is indicated by the quantity (B-R)/(B+V+R) defined by Lee, Demarque & Zinn (1994). A negative value for this quantity indicates that most of the HB stars are on the red side of the instability strip and a high positive value indicates that most are on the blue side. ## 2 THE COMPATIBILITY TEST OF JURCSIK & KOVÁCS The modus operandi of JK was to characterize the light curve systematics of a sample of 74 RRab stars with normal light curves by studying the interrelations among the Fourier parameters. Specifically, they derived a set of 9 equations for calculating the Fourier amplitudes $`A_1`$ to $`A_5`$ and the phase differences $`\varphi _{21}`$ to $`\varphi _{51}`$.
If, for a particular star, the calculated value for any one of the parameters is not in good agreement with the observed value, the star’s light curve is considered to be peculiar. They illustrated the effectiveness of their method with a study of RV UMa, an RR Lyrae star that exhibits the Blazhko effect. An independent demonstration of the validity of JK’s test comes from a study of V12 in NGC 6171 (M107) by Clement & Shelton (1997, hereafter CS97). CS97 observed this star in two different years and the light curve appears to repeat well from cycle to cycle. Nevertheless, the JK compatibility test indicates that the light curve of V12 is peculiar. It turns out that this is indeed the case if one compares the light curve of CS97 with an earlier one published by Dickens (1970). Both the shape and amplitude of the curve of V12 changed dramatically between the two epochs. ## 3 THE PERIOD-AMPLITUDE RELATION OF RR LYRAE VARIABLES ### 3.1 The Oosterhoff type I clusters M3 and M107 In the two upper panels of Figure 1, we show the period-$`V`$ amplitude relations for the Oosterhoff type I (OoI) globular clusters, M3 and M107. The data for M3 are from Kaluzny et al. (1998, hereafter KHCR) and for M107 from CS97. To establish which RRab stars in M3 and M107 had peculiar light curves, we applied JK’s compatibility test using equations recently derived by Kovács & Kanbur (1998) from a sample of 257 RRab stars. What we see in the upper two panels of the figure is that most of the RRab stars with peculiar light curves (the open circles) have lower amplitudes than other stars with the same period. According to Szeidl (1988) and JK, the maximum amplitude of a Blazhko variable fits the period-amplitude relation for regular RRab stars. Thus these stars are probably Blazhko variables which were not observed at a time when the amplitude was at its maximum. In the M3 plot, the squares represent three regular RRab stars (V14, V65 and V104) that are brighter than the other stars. KHCR concluded that these three stars are probably in a more advanced evolutionary state than the others. In the lower panel of Figure 1, we plot the RRab stars, V29 in M4 and V8 and V28 in M5. These are stars that JK classified as normal and for which published $`V`$ photometry is available. Clementini et al. (1994) observed M4 and Storm et al. (1991) observed M5. Also plotted are the $`V`$ amplitudes that CS97 derived from Reid’s (1996) observations of RRc stars in M5. The straight line shown in each panel is a least squares fit to the principal sequence of regular RRab stars in M3 (the solid circles). We can readily see that the regular RRab stars in the three OoI clusters (M107, M4 and M5) fit the P-A relation for M3. There is no evidence for a shift in $`\mathrm{log}P`$ even though all three of these clusters are more metal rich than M3. The situation may be different, however, for the RRc stars. The curve to the left of $`\mathrm{log}P=-0.4`$ in each panel of the diagram is a fit to the RRc stars in M3 and in this case, the P-A relations for M5 and M107 are shifted to shorter periods than M3. ### 3.2 The Oosterhoff type II clusters M9 and M68 In the upper panel of Figure 2, we plot the P-$`A_V`$ diagram for M9, based on data published by Clement & Shelton (1999). The solid straight line is a least squares fit to RRab stars in M9 and the dashed line is the fit to M3 shown in Figure 1. The two curves are the fits to the RRc stars in M3 and M9.
M3 is among the most metal poor of the OoI clusters and M9 is among the most metal rich of the OoII clusters, but the diagram shows that there is a definitive period shift between the two, for both RRab and RRc stars. This is the Oosterhoff dichotomy. In the center panel of Figure 2, the M68 data of Walker (1994) are plotted. JK found that two RRab stars in M68 (V23 and V35) had normal curves and so these are plotted as solid circles. The remaining RRab stars are plotted as open circles. The M68 RRab stars with peculiar light curves generally have lower amplitudes than those with normal curves, like M3 and M107 in Figure 1. However, this trend is not apparent in M9. Perhaps this is because the stars with irregular light curves are observed at maximum amplitude. In the lower panel of Figure 2, we plot amplitudes derived from the observations of Carney et al. (1992) for two M92 RRab stars classified as regular by JK. Also included in the lower panel are the three bright stars in M3. All of the regular RRab stars plotted in Figure 2 seem to fit one P-A relation; there is no correlation with metallicity for the OoII clusters. Also there is no evidence that the periods of the RRc stars in M68 are longer than those in M9, even though M68 is more metal poor. The plots of Figures 1 and 2 demonstrate that the P-A relation for RRab stars is not a function of metal abundance. Rather, it is related to Oosterhoff type. Lee, Demarque & Zinn (1990, hereafter LDZ) have proposed that evolution away from the ZAHB plays a role in the Oosterhoff dichotomy. If this is correct, then the fact that we have found two different P-A relations suggests that there may be one P-A relation for ZAHB stars and another for stars that are more evolved. ZAHB stars have helium fusion in the core, but after the core helium is consumed, the helium fusion occurs in a shell. Apparently, this circumstance is more important than metal abundance for determining the P-A relation of fundamental mode pulsators. The situation for the RRc stars (the first overtone pulsators) may be different. It is possible that the P-A relation for RRc stars in OoI clusters depends on metal abundance. ### 3.3 The unique case of M3 For OoI clusters like M3, the models of LDZ predict that RR Lyrae stars evolve blueward across the instability strip on the ZAHB, but when they evolve away from the ZAHB, they become brighter and redder. Clement et al. (1997) found evidence for blueward evolution of the M3 star V79 because of a mode switch. Before 1962, V79 was an RRab star with a period of 0.483 days, but in 1996, it was an RRd star with the first overtone mode dominant. This mode switch has since been confirmed by Corwin, Carney & Allen (1999) and Clement & Goranskij (1999) who found that it occurred in 1992. The mean V magnitude of V79 is $`15.71`$, comparable to the mean magnitude ($`15.69`$) of the 21 RRab stars that fit the P-A relation for OoI clusters (the solid dots in Fig. 1), presumably all ZAHB stars. However, the three stars V14, V65 and V104 are brighter. Consequently, KHCR concluded that they are in a more advanced evolutionary state. In addition, these three bright stars fit the P-A relation for the OoII RRab stars better, and their mean period is 0.625 days, which is an appropriate value for an OoII cluster. In Table 2, we list their periods, period shifts ($`\mathrm{\Delta }\mathrm{log}P`$), their mean $`V`$ magnitudes and $`\mathrm{\Delta }V`$ relative to other RRab stars with the same amplitude (i.e.
the ones that fit on the straight line of Figure 1). The above discussion indicates that there are RR Lyrae variables with characteristics of the two Oosterhoff groups in this one cluster. Assuming there is no variation in metal abundance among the stars of M3, this is further evidence that the P-A relation for RRab stars is not a function of metal abundance. If the P-A relation is not correlated with metal abundance, then why was such a correlation found by previous investigators? One reason is that OoII clusters are, in general, more metal poor than OoI clusters. As a consequence of this, a difference in the P-A relation for clusters of the different Oosterhoff groups could be attributed to a difference in metal abundance. However, Sandage’s analysis indicated that the $`\mathrm{\Delta }\mathrm{log}P`$-metal abundance correlation exists among clusters of one Oosterhoff type. This is documented in column 11 of Table 7 in his paper (S81b). The P-A relations for the OoI clusters M4 and NGC 6171 (M107) are shifted to short periods compared with M3. We believe that this apparent correlation probably occurs because of a selection effect in the choice of the M3 sample. The M3 data were taken from the study of Roberts and Sandage (1955, hereafter RS) whose objective was to determine reliable colors for RR Lyrae variables, and so they excluded stars with non-repeating light curves. As a result, stars like those plotted as open circles in our P-A relation for M3 were not included in their study. This makes the P-A relation for M3 appear to be shifted to longer periods than those of the other OoI clusters. In addition, the fact that some of the M3 RR Lyrae variables have OoII characteristics must also be a contributing factor. (It must also be acknowledged that in clusters like M4 and M107, the period at which the transition between fundamental and overtone mode pulsation occurs is shorter than in M3. However, the short period fundamental mode pulsators in these clusters have non-repeating light curves.) M3 is not the only cluster that has RR Lyrae variables with the characteristics of both Oosterhoff groups. Omega Centauri is another. In a study of $`\omega `$ Centauri, Butler, Dickens & Epps (1978, hereafter BDE) commented that although it is generally assumed to be an OoII cluster, some of its RR Lyrae variables have OoI characteristics. This can account for the S81b finding that its P-A relation is shifted to shorter periods than that of M15 and to longer periods than that of M3. ### 3.4 The Period-Luminosity-Amplitude Relation Sandage (1981a, hereafter S81a) showed that there is a period-luminosity-amplitude relation for RRab stars in the sense that, for a given amplitude, brighter stars have longer periods. He demonstrated this with photographic observations of two clusters, M3 (RS) and $`\omega `$ Centauri (BDE). His approach was to calculate the mean apparent bolometric magnitude for the RR Lyrae variables in a particular cluster and then use van Albada & Baker’s (1971, hereafter VAB) equation relating pulsation period, mass, temperature and absolute bolometric magnitude to derive a ‘reduced’ period for each star. This is the period the star would have if its $`m_{bol}`$ had the same value as the cluster mean. A plot of amplitude against ‘reduced’ period shows much less scatter than a regular period-amplitude plot, and if the masses of the RR Lyrae stars in a particular cluster are the same, this implies that the amplitude is a function of temperature.
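The ‘reduced’ period construction can be sketched as follows, using the constant-mass, constant-temperature slope $`\mathrm{\Delta }\mathrm{log}P/\mathrm{\Delta }m_{bol}=-0.34`$ from the VAB equation quoted in the next paragraph, and taking $`V`$ as a proxy for $`m_{bol}`$; the input values are hypothetical placeholders rather than measurements:

```python
# Reduce each star's log P to the value it would have at the cluster mean
# magnitude; a star brighter than the mean has its period shortened.
import numpy as np

logP = np.array([-0.30, -0.25, -0.20])   # hypothetical log periods
V = np.array([15.69, 15.60, 15.55])      # hypothetical mean V magnitudes
V_mean = 15.69                           # adopted cluster mean

logP_reduced = logP + 0.34 * (V - V_mean)   # d(logP)/d(m_bol) = -0.34
print(logP_reduced)
```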
S81b’s correlation between $`\mathrm{\Delta }\mathrm{log}P`$ and metal abundance was derived from P-A relations plotted with ‘reduced’ periods. Nevertheless, the plot for NGC 6171 shows considerable scatter. We believe that this can be attributed to the fact that Blazhko variables were included in his sample. The P-L-A relation does not apply to Blazhko variables. It is significant that Sandage showed that the P-L-A relation holds for both $`\omega `$ Centauri and M3 because these two clusters have stars with properties of both Oosterhoff groups. Even though the P-A relation for RRab stars is a function of Oosterhoff type, a P-L-A relation seems to be valid for stars in a cluster that belongs to both groups. If there is a unique period-amplitude relation for RRab stars on the ZAHB, then the P-L-A relation can be used to estimate the apparent magnitude of the ZAHB in any cluster, regardless of its Oosterhoff type, as long as it has RRab stars with normal light curves. However, an examination of the data in Table 2 indicates that the period shift (at constant amplitude) for the three bright stars in M3 cannot be completely accounted for by a difference in luminosity. According to VAB’s pulsation equation, $`\mathrm{\Delta }\mathrm{log}P/\mathrm{\Delta }m_{bol}`$ is $`-0.34`$ for constant mass and temperature, but the mean $`\mathrm{\Delta }\mathrm{log}P/\mathrm{\Delta }V`$ derived for the three bright stars in M3 is $`-0.48`$ with $`\sigma =0.10`$. Thus there must be a difference in mass and/or temperature as well. ## 4 IMPLICATIONS FOR AGES OF GLOBULAR CLUSTERS In recent years, there has been considerable discussion in the literature (e.g. Chaboyer, Demarque & Sarajedini 1996; Stetson, Vandenberg & Bolte 1996) about the range of ages of galactic globular clusters and also about the question of whether or not metal poor clusters are older than metal rich clusters. These issues have not yet been resolved, but our results may have some impact on this problem. In some investigations, e.g. Gratton (1985), the cluster age is derived from the difference between the ZAHB and the main sequence turnoff ($`\mathrm{\Delta }V_{TO}^{ZAHB}`$). In these cases, the faintest stars on the HB in the vicinity of the RR Lyrae instability strip are assumed to be ZAHB objects, but if they are, in fact, at a more advanced evolutionary state, the apparent luminosity of the ZAHB is overestimated. As a result, $`\mathrm{\Delta }V_{TO}^{ZAHB}`$ is also overestimated and this leads to overestimation of the cluster age. This could cause an apparent age-metallicity relation. It will be very interesting to see if other studies of RR Lyrae variables confirm our conclusion that the period-amplitude relation for fundamental mode pulsators depends on evolutionary state and not on metal abundance. We would like to thank Pierre Demarque and Norman Simon for discussing this work during the course of the investigation. They both encouraged us to try to understand the significance of our results and we hope we have succeeded. Thanks are also due to Jason Rowe for his assistance in the preparation of the diagrams. The work has been supported by the Natural Sciences and Engineering Research Council of Canada.
# Timing analysis of the X-ray transient source XTE J1806–246 (2S1803–245) ## 1 Introduction The X-ray transient source XTE J1806–246 (=2S1803–245=MX1803–24) was discovered by SAS–3 in May 1976 (Jernigan 1976; Jernigan et al. 1978). An isolated X-ray burst from this region of the sky was detected by the Wide Field Camera 1 (WFC1) on the BeppoSAX observatory on Apr. 2, 1998. The All Sky Monitor (ASM) of the Rossi X-ray Timing Explorer satellite detected the beginning of an X-ray outburst of the source on Apr. 16, 1998 (Marshall et al. 1998). Observations in other wavebands revealed the probable radio (Hjellming, Midouszewski & Rupen 1998) and optical (Hynes, Roche & Haswell 1998) counterparts of the object. Quasi-periodic oscillations (QPO) were discovered by Wijnands & van der Klis (1998) in the power density spectrum (PDS) of the source in data obtained on May 3, 1998. In this Letter we present the results of a timing analysis of the PCA/RXTE experiment data, discuss the power density spectrum of the source, and report the detected dependences of the QPO parameters on energy band and X-ray flux. ## 2 Observations and analysis We analyzed archival data obtained by the RXTE observatory during the outburst of the source in Apr.-July 1998. Brief information about the observations is presented in Table 1. The data were analyzed according to the RXTE Cook Book recipes using the FTOOLS, version 4.2 tasks. For background estimation we used the VLE model for observations when the X-ray flux was high, and the L7/240 model for the observations corresponding to low flux. The data were collected in different modes (Standard 2, Event Mode, Single Binned and Binned Mode) with the best timing resolution of 16 $`\mu `$s in hard energy bands ($`\gtrsim `$13 keV), and of 8 ms and 125 $`\mu `$s in soft energy bands ($`\lesssim `$13 keV). The X-ray flux from the source appears to be highly erratic, with significant variability on all time scales. To obtain broad-band power density spectra we used data of two types. For the PDS at frequencies higher than 0.05 Hz we used 16 sec data segments with an 8 ms time resolution. This gave us a set of power density spectra between 1/16 and 64 Hz, which were subsequently averaged and corrected for deadtime-modified Poissonian noise (Zhang et al. 1995). To obtain the PDS in a lower frequency band ($`5\times 10^{-4}`$ Hz – $`\sim `$0.1 Hz) we used the 16-second time resolution data of Standard 2 mode, because it allowed us to take into account the influence of the background variation on the PDS, which might be of importance at such frequencies. ## 3 Results ### 3.1 Power density spectra The power density spectra for different observations are presented in Fig. 1. The power law representing the Very Low Frequency Noise (VLFN) component dominates the frequency band $`10^{-4}`$–1 Hz for all PDSs. The High Frequency Noise (HFN) component at frequencies higher than 1 Hz can be approximated either by another power law with an exponential cut-off at frequencies of 10–20 Hz or by a wide Lorentzian. The slope of the VLFN component was in the range $`\alpha \approx 1.0`$–1.5 and its fractional variability was from 1.5 to 4 percent in the energy band 2–13 keV. The amplitude of the VLFN fractional variability increases with energy (see Fig. 3 for observation #4) while the slope of the power law remains constant. In the observation of May 3, 1998, a significant ($`>10\sigma `$) QPO peak was detected, in agreement with the earlier report by Wijnands & van der Klis (1998).
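As an illustration of the procedure described above, a minimal sketch of the averaged PDS and a Lorentzian QPO fit is given below. This is not the authors' pipeline: the segment length follows the 16-s/8-ms choice in the text, the Poisson noise is approximated as a constant Leahy level rather than with the full deadtime model of Zhang et al. (1995), and all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def leahy_pds(counts, dt):
    """Leahy-normalized PDS of one evenly binned light-curve segment."""
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
    freq = np.fft.rfftfreq(len(counts), dt)
    return freq[1:], power[1:]          # drop the zero-frequency term

def averaged_pds(counts, dt=0.008, seg_len=2000):
    """Average the PDS over 16-s segments (2000 bins of 8 ms)."""
    nseg = len(counts) // seg_len
    segs = counts[: nseg * seg_len].reshape(nseg, seg_len)
    freq = np.fft.rfftfreq(seg_len, dt)[1:]
    power = np.mean([leahy_pds(s, dt)[1] for s in segs], axis=0)
    return freq, power

def qpo_model(f, norm, f0, fwhm, noise):
    """Lorentzian QPO on top of an (assumed constant) Poisson noise level,
    a simplification of the deadtime-modified noise treatment."""
    return norm * (fwhm / 2) ** 2 / ((f - f0) ** 2 + (fwhm / 2) ** 2) + noise

# freq, power = averaged_pds(counts_8ms)       # counts_8ms: 8-ms binned counts
# popt, _ = curve_fit(qpo_model, freq, power, p0=[5.0, 9.0, 5.0, 2.0])
```

Fitting the averaged spectrum rather than individual segments keeps the per-bin scatter roughly Gaussian, which is what makes a least-squares Lorentzian fit of the kind reported below reasonable.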
The central frequency of the QPO, obtained by fitting a Lorentzian profile to the power density spectrum averaged over the whole observation (2–13 keV energy band), is $`f=9.11\pm 0.07`$ Hz, with the width of the Lorentzian profile $`FWHM=5.4\pm 0.2`$ Hz. Its fractional variability amplitude in the frequency range $`10^{-4}`$–64 Hz was $`4.7\pm 0.1\%`$. The amplitude of the QPO varies with energy similarly to the variation of the VLFN (Fig. 3). It is remarkable that this observation was taken when the X-ray flux from the source was the highest among all PCA observations. The X-ray light curve of XTE J1806–246 during its outburst of 1998 is presented in Fig. 2. It is worth noting that the difference in the X-ray flux detected in observations #3 and #4 is only $`\sim `$5%, but the PDSs are qualitatively different. ### 3.2 QPO parameters In the observation of May 3, 1998 we detected significant changes in the QPO parameters: the central frequency, the width and the amplitude of the fractional variability. Fig. 4 shows the variation of the QPO amplitude and frequency in comparison with the X-ray flux variation (all values were computed for the energy band 3–13 keV). Because both the X-ray flux from the source and the QPO parameters demonstrated time variability, we looked for correlations between them, which might be important for the overall understanding of the QPO phenomenon. One may note three distinctive time intervals during the observation of May 3. The first corresponds to “low” flux (below 7100 cnts/s) and “strong” QPO (rms higher than 5%) with “low” frequency (below 10 Hz). In the second interval the flux is “medium” (7100–7400 cnts/s), the QPO is “weak” (rms $`\lesssim 5\%`$) and still of “low” frequency. During the third interval the flux is “high” ($`\gtrsim `$ 7400 cnts/s), the QPO is “strong” and of a higher frequency (10-12 Hz). While there is no global correlation across these three intervals, we have found some evidence for short-term correlations within each of them. The results are presented in Fig. 5. Parameter variations were calculated by subtracting the mean value for each interval. For the flux-frequency dependence, data for the first interval were separated from the later data. To obtain the flux-rms dependence, the first two intervals were analyzed together and the third one separately. A flux-frequency correlation and a flux-rms anticorrelation are clearly seen on short time scales, but the influence of other processes, not related to the QPO, might be the reason why such correlations do not extend over longer time intervals. We would like to note that the measurement of the total X-ray flux and the determination of the QPO parameters were done by completely independent methods, so any correlation between them must be physical rather than methodological. We were concerned about a possible systematic cross-correlation between the QPO amplitude and QPO frequency values, both determined in the same procedure, but our analysis has not revealed any evidence of such a correlation. ### 3.3 Color-color diagram We used data in different energy channels to build a color-color diagram (CCD), which allows one to follow spectral changes of the source (Hasinger & van der Klis 1989). The CCD for the data of the first 7 analyzed observations is shown in Fig. 6. For this CCD we used four energy bands of the PCA spectrum: 2.1–3.5 keV, 3.5–6.4 keV, 6.4–9.7 keV and 9.7–16.0 keV. The overall shape of the distribution of points resembles the diagrams of some Z-sources, such as GX17+2 and Sco X-1 (see Hasinger & van der Klis 1989).
However, we did not find a clear correlation between the QPO and one of the branches on the diagram. Instead, we found that the QPO region corresponds to a lower value of the hard color regardless of the value of the soft color. ## 4 Discussion Quasi-periodic oscillations in power density spectra have been detected from many sources of different nature (see van der Klis 1995 for a review and references therein). We discuss here a strong QPO in the PDS of the X-ray source XTE J1806–246, which can be considered a typical X-ray transient. The nature of this source is more or less clear because of the definite detection of an X-ray burst from it (Muller et al. 1998), so one can consider XTE J1806–246 a neutron star with a low magnetic field. X-ray bursts are more typical of atoll sources (see the classification in Hasinger & van der Klis 1989). However, the shape of the color-color diagram, the PDS dominated by the VLFN and HFN components, and the fact that the QPO was observed at the maximum of the X-ray light curve strongly suggest the classification of XTE J1806–246 as another Z-source; such sources are known to be neutron stars, probably with a magnetic field stronger than that of typical X-ray bursters but weaker than that of pulsars (van der Klis 1994). As mentioned by Wijnands & van der Klis (1998), it would then be the first known Z-transient. The other reputed Z-sources are all persistent X-ray sources (Hasinger & van der Klis 1989). The origin of QPOs in Z-sources remains controversial. Some of the proposed models include the beat frequency model (Alpar & Shaham 1985), hot spots in a boundary layer (Hameury et al. 1985), obscuration of the central X-ray source by an accretion disk (Stella 1986), etc. (for reviews see Lamb 1988, van der Klis 1995 and references therein). The correlation between the X-ray flux and the QPO frequency that we found on short time scales could be an indication that the frequency of the QPO is linked with the typical radius of the accretion disk in the system, which in turn is correlated with the luminosity of the source. The anticorrelation between the source flux and the amplitude of the fractional variability associated with the QPO demonstrates that short-term increases of the X-ray flux (microflares) might be caused by a mechanism not related to the QPO. The Very Low Frequency Noise was detected in all PCA/RXTE observations of XTE J1806–246, in all energy bands, with the amplitude of the fractional variability increasing towards higher energies. The striking similarity of the energy dependences of the QPO and VLFN amplitudes suggests that these components originate in the same region of the binary system. The energy spectrum of the source is probably formed by a hard, highly variable component and a soft, constant component, which explains the increase of the QPO and VLFN amplitudes with energy. ###### Acknowledgements. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. The work has been supported in part by RFBR grant 96-15-96343. The authors would like to acknowledge the helpful comments of the anonymous referee. We are grateful to Ms. K. O'Shea for the language editing of the manuscript.
# The Color Distributions of Globular Clusters in Virgo Elliptical Galaxies ## 1. Introduction The first detection of bimodality in the globular cluster system of an elliptical galaxy was made by Zepf & Ashman (1993) using the photometry of the M49 globular cluster (GC) system taken by Couture et al. (1991). Better data with larger samples of clusters subsequently confirmed the detection (Zepf et al. 1995) and added several galaxies to the list of galaxies with detected bimodal distributions. Additional studies (Geisler et al. 1996; Neilsen, Tsvetanov & Ford 1998) show that the radial color gradients in the mean color of the globular clusters of M49 and M87 can be attributed to variations in the spatial distributions of the clusters contributing to each peak. This spatially varying bimodal distribution was predicted by the merger model of Ashman & Zepf (1992), in which the blue clusters are those of spiral galaxies which merged to form the elliptical, and the red clusters formed during the mergers. Cote et al. (1998) have proposed an alternate model in which only the red population is truly associated with the host galaxy, while the blue clusters were formed in other (smaller) members of the galaxy cluster and were captured during mergers or stripped from their original hosts through interactions. Under this model, the red clusters are metal rich because the large mass of the galaxy prevents metals from escaping. The blue clusters now follow the gravitational potential of the cluster as a whole, which is strongly centered on the central giant ellipticals. There are serious objections to both models. The Cote et al. (1998) model has difficulty explaining the rotation of the GC system of M87, seen by Kissler-Patig & Gebhardt (1998) in the data set of Cohen, Blakeslee, & Ryzhov (1998). Furthermore, the stellar metallicity of the halos of elliptical galaxies is higher than that expected from stripping models (Harris, Harris & McLaughlin 1998). The Ashman & Zepf (1992) model implies a correlation between the relative numbers of red and blue clusters and the specific frequency of globular cluster systems, which is not seen. By obtaining high precision color measurements of large, well defined samples of GC’s in elliptical galaxies located in different parts of a galaxy cluster, further tests of these models become possible. For both the merger model and the tidal stripping model, one would expect the color of the blue population to be roughly the same in all galaxies. In the tidal stripping model, the size of the galaxy is expected to determine the location of the red peak, so a correlation between the color of the red peak and the luminosity of the galaxy is to be expected. (This correlation may be masked if a significant fraction of the stars were obtained in mergers.) However, in the merger model the shape and location of the color peak is dependent on the details of the merger history, and one would expect significant variation among the sample. ## 2. Observations, Reduction and Analysis The Hubble Space Telescope archive provided all of the data used in this study. The sample includes those elliptical galaxies in the Virgo cluster for which there are WFPC-2 observations with images in filters which approximate the $`V`$ and $`I`$ bands (F555W or F606W and F814W, respectively), and which are deep enough to see a significant fraction of the globular clusters in a galaxy at 15-20 Mpc. Table 1 lists the basic parameters of the data taken.
The raw data were calibrated using the WFPC2 pipeline procedure. The different exposures were then aligned, cosmic ray rejected, and combined using standard software. The custom program SBFtool was used both to determine the amplitude of the surface brightness fluctuations and to detect, classify, and perform photometry on candidate objects in the images. First, we model the galaxy and sky background using a combination of ellipse fitting and spatial frequency filtering techniques. We mark groups of four or more adjacent pixels significantly brighter than the model and noise as potential object pixels. Sets of pixels consistent with the shape of a PSF or a distant globular cluster convolved with a PSF form the set of candidate globular clusters. Because globular clusters at the distance of Virgo are partially resolved in HST WFPC-2 images, simple aperture photometry becomes complicated; the aperture correction will vary from object to object. Instead, we fit the data to a library of model GC’s constructed using King profiles and PSF’s created using tinytim (Krist 1997), covering a range of both core and tidal radii. (For similar approaches to photometry on partially resolved globular clusters, see Grillmair et al. (1999) and Holtzman et al. (1996).) In central, high signal-to-noise $`(S/N>30)`$ pixels, where errors in the model fitting become significant compared to the noise, the flux is measured directly. The flux outside these pixels is estimated using the best fit model. The remaining catalog still contains contaminating objects, particularly background galaxies. The removal of objects with colors outside of the range $`0.5<V-I<1.5`$ (where practically all globular clusters fall), highly asymmetric objects, and objects where the best fitting model provided a poor reduced $`\chi ^2`$ fit significantly reduces the contamination. For the study of the color distribution, we consider only objects with signals greater than 3000 e⁻ in each filter. These objects are significantly above the detection threshold in both filters, and the uncertainty in the color of fainter objects due to counting statistics alone is significant. The bright magnitude cutoff reduces the smoothing of the color distribution due to measurement error, ensures that the catalog is complete down to a well defined limit, and minimizes contamination due to remaining background galaxies. In all data sets except NGC 4365, NGC 4660, and NGC 4458, this limit is fainter than the expected peak of the GC magnitude distribution. The limit in the M86 data set is close to the expected peak of the GC distribution. The procedure outlined by Holtzman et al. (1995) guided the conversion of the measured flux in the HST filters to standard $`V`$ and $`I`$ magnitudes. ## 3. Results Figure 1 displays the color distributions of each galaxy. The light grey shaded area shows the distribution using a traditional histogram. The histogram is not the ideal representation of the data; bin sizes narrow enough to detect fine structure will also display significant noise, and the choice of phase can also have a significant effect on the appearance. Simonoff (1996) describes and compares a variety of alternatives for estimating the underlying probability distribution, including the variable width Epanechnikov kernel.
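The variable-width estimator defined by the formulas in the following paragraphs is straightforward to implement. The sketch below is a hypothetical implementation, not the code used for Figure 1: it normalizes the estimate as a probability density (a factor 1/n per point) and rescales the local bandwidths by their geometric mean, two common conventions that the text does not specify.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 3/4 (1 - u^2) on (-1, 1], zero outside."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def adaptive_kde(x_eval, data, h_nu, n_iter=3):
    """Variable-width Epanechnikov estimate with h_i = h_nu * f(x_i)^(-1/2),
    iterated from a crude uniform starting estimate."""
    f_hat = np.full(len(data), 1.0 / (data.max() - data.min()))
    for _ in range(n_iter):
        lam = f_hat ** -0.5
        lam /= np.exp(np.mean(np.log(lam)))   # keep mean bandwidth ~ h_nu
        h = h_nu * lam
        f_hat = np.array([np.mean(epanechnikov((xi - data) / h) / h)
                          for xi in data])
    lam = f_hat ** -0.5
    lam /= np.exp(np.mean(np.log(lam)))
    h = h_nu * lam
    return np.array([np.mean(epanechnikov((x - data) / h) / h)
                     for x in x_eval])

# colors = ...  # V-I colors of the cluster candidates
# f_hat = adaptive_kde(np.linspace(0.5, 1.5, 200), colors, h_nu=0.05)
```

The adaptive bandwidth widens the kernel where the data are sparse and narrows it where they are dense, which is what suppresses spurious wiggles in the tails without over-smoothing the peaks.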
For any given value, one of the simplest ways to measure the density of points at that value is to count the number of points within some distance $`h`$ of that value: $$\widehat{f}(x)=\sum _{i=1}^{n}\frac{1}{h}K\left(\frac{x-x_i}{h}\right)$$ where $$K(u)=\{\begin{array}{cc}\frac{1}{2}\hfill & \text{if }-1<u\le 1\text{,}\hfill \\ 0\hfill & \text{otherwise.}\hfill \end{array}$$ The appropriate choice of $`h`$ is a function both of the form of the underlying distribution and of the density of data points; smaller values of $`h`$ are warranted when there are a larger number of data points. Use of an unnecessarily large value for $`h`$ will result in an overly smooth estimate of $`f(x)`$. One can accommodate the varying density of data points by substituting $`h(x)=h_\nu \times f(x_i)^{-1/2}`$ for $`h`$. Clearly, an estimate of $`f(x)`$ must be made to apply this method, but an iterative process beginning with a crude (e.g., uniform) estimate provides stable results in a few iterations. A second improvement that can be made is in the choice of the function $`K(u)`$. It can be shown that an estimate made using the Epanechnikov kernel, $$K(u)=\{\begin{array}{cc}\frac{3}{4}(1-u^2)\hfill & \text{if }-1<u\le 1\text{,}\hfill \\ 0\hfill & \text{otherwise,}\hfill \end{array}$$ minimizes the mean integrated square difference between $`f(x)`$ and $`\widehat{f}(x)`$, provided that $`f^{\prime \prime }`$ is continuous, $`f^{\prime \prime \prime }`$ is square integrable, and $`K(u)\ge 0`$ (Simonoff 1996). In Figure 1, we present the color distributions using a simple histogram and approximations made using two values of the reference kernel width, $`h_\nu `$. The thick line is the smoothing using a reference kernel width that would be optimal for a Gaussian distribution with a standard deviation equal to that of the data, and estimated using a constant kernel width. This value will over-smooth the data when applied with a variable kernel width and $`\widehat{f}(x)<1`$, particularly if the distribution is not Gaussian; features seen in this smoothing are very likely to be real. A Kolmogorov-Smirnov (K-S) test comparing this smoothed distribution to the data confirms that it is significantly over-smoothed. The thin line shows the smoothing using a kernel width such that the smoothed curve can be excluded by a K-S test at the 50% level, giving an indication of what the true distribution may be. However, features seen in this line cannot be regarded as having been reliably detected. The KMM algorithm (Ashman, Bird, & Zepf 1994) provides a statistical test for comparing the likelihood of the underlying distribution being a single or a double Gaussian. The KMM algorithm returns a likelihood ratio test statistic, which is a measure of the improvement in the fit of a two-Gaussian model over a single-Gaussian one. From this we calculate the $`p`$ value, the probability of measuring this statistic from a single Gaussian distribution. Low $`p`$ values reject the hypothesis that the examined distribution resulted from a single Gaussian distribution. They do not necessarily reject other (possibly unimodal) models for the distribution, however. Because the presence of contaminating objects outside the main distribution significantly reduces the effectiveness of the KMM algorithm, we have removed objects with colors far from the main distribution (which are probably contaminating background galaxies) from our sample before applying the KMM algorithm to our data (see Ashman et al.
1994 for a more complete discussion of the effects of such a truncation). Table 1 presents the various physical properties of each galaxy, including the absolute $`B`$ magnitude (calculated using SBF distances and RC3 apparent magnitudes), $`B-V`$ color, and Hubble type; the $`V`$ magnitude cutoff and the total number of clusters considered in the color distribution; and the $`p`$ value statistic and distribution locations from the KMM algorithm (mode 1 and mode 2). ## 4. Discussion For four of the eight data sets where a reasonably large number of clusters have been detected $`(N>100)`$, two peaks are clearly identifiable even where the data are over-smoothed. Furthermore, in each of these four cases the locations of the peaks are consistently near $`V-I=1.01`$ and $`V-I=1.26`$. NGC 4365, NGC 4473, and M59 show single peaks, broader than the individual peaks in galaxies with bimodal distributions; the data are consistent with the base distribution being either a single broad peak or two peaks too close to be resolved. Although a single Gaussian distribution for these galaxies cannot be excluded, the relatively low $`p`$ value and best peak values (for the two-Gaussian fit) close to 1.01 and 1.26 are suggestive of bimodality. M86 features a large population of globular clusters, but the distribution appears smoothly unimodal with a peak at $`V-I\approx 1.03`$. The width and color of the peak are comparable to the width and color of the blue peak in the bimodal galaxies. The difference seems to be its lack of a red peak. In other properties, such as its luminosity and X-ray emission, it is similar to other bright galaxies in the sample. The only other remarkable feature is that M86 is bluer than any of the bright galaxies. This may indicate that the processes which result in a detectable red peak in the GC population also redden the star light of the galaxy as a whole. We do not emphasize the remaining four populations $`(N<100)`$ both because of the reduced overall statistics and because the small number of clusters increases the effect of contamination by background galaxies on the overall appearance of the distribution. While the positions of the modes of the color distribution appear uniform, the relative contribution of each peak to the distribution does not. The lack of a red peak in M86, and the strength of the red peak in the remainder of the bright galaxies, is a dramatic illustration of this variation. In the model where the red population is supposed to form with the host galaxy and the blue population is collected from neighbors, the mass of the galaxy is regarded as the cause of the high metallicity of the clusters which originated with the host galaxy, so one expects a significant variation in the position of the red peak with galaxy mass. Our data set shows no such trend, although the range of magnitudes may be too small for such a trend to be detected. Why the red peak should show such consistency is equally unclear in the merger model of formation, as the distribution of red clusters should vary depending on the details of the merger history of the galaxy. It is possible, though, that the breadth of the red peak hides multiple populations formed through mergers, which typically result in a similar overall peak. We would like to thank Holland Ford and Patrick Cote for useful advice and discussions, and the referee for comments which improved this presentation.
Support for this work was provided by NASA through grant GO-7543 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.
# Characteristics of Alpha, Gamma and Nuclear Recoil Pulses from NaI(Tl) at 10-100 keV Relevant to Dark Matter Searches V.A. Kudryavtsev$`^a`$, M.J. Lehner$`^a`$, C.D. Peak$`^a`$, T.B. Lawson$`^a`$, P.K. Lightfoot$`^a`$, J.E. McMillan$`^a`$, J.W. Roberts$`^a`$, N.J.C. Spooner$`^a`$, D.R. Tovey$`^a`$, C.K. Ward$`^a`$, P.F. Smith$`^b`$, N.J.T. Smith$`^b`$ $`^a`$ Department of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH, UK $`^b`$ Rutherford Appleton Laboratory, Chilton, OX11 0QX, UK Abstract Measurements of the shapes of scintillation pulses produced by nuclear recoils, alpha particles and photons in NaI(Tl) crystals at visible energies of 10-100 keV have been performed in order to investigate possible sources of background in NaI(Tl) dark matter experiments and, in particular, the possible origin of the anomalous fast time constant events observed in the UK Dark Matter Collaboration experiments at Boulby mine. Pulses initiated by X-rays (via the photoelectric effect close to the surface of the crystal) were found not to differ from those produced by high-energy photons (via Compton electrons inside the crystal) within experimental errors. However, pulses induced by alpha particles (degraded from an external MeV source) were found to be $`\sim 10\%`$ faster than those of nuclear recoils, but insufficiently fast to account for the anomalous events. PACS: 29.40Mc, 14.80 Ly Keywords: Scintillation detectors; Dark matter; WIMP; Radioactive sources; Pulse-shape discrimination Corresponding author: V. A. Kudryavtsev, Department of Physics and Astronomy, University of Sheffield, Hicks Building, Hounsfield Rd., Sheffield S3 7RH, UK Tel: +44 (0)114 2224531; Fax: +44 (0)114 2728079 E-mail: v.kudryavtsev@sheffield.ac.uk 1. Introduction Several NaI-based experiments searching for Weakly Interacting Massive Particles (WIMPs), as possible constituents of galactic dark matter, use pulse-shape discrimination to separate the nuclear recoil signals expected from WIMP elastic scattering from electron recoils due to gamma background (see for example ). The feasibility of this technique arises because pulses initiated by nuclear recoils in NaI(Tl) are known to be typically $`\sim 30\%`$ faster than those due to electron recoils. In each case the integrated scintillation pulses can be adequately fitted by assuming an exponential decay with time constant $`\tau `$. The resulting distribution of $`\tau `$ can be approximated by a gaussian in $`\mathrm{ln}(\tau )`$. The shape of the pulses can then be characterised by the mean value of $`\tau `$, $`\tau _o`$, in the gaussian. The difference between $`\tau _o`$ values for nuclear and electron recoils is of the order of $`30\%`$ for measured energies above 20 keV, decreasing almost to zero at 2 keV. This difference, divided by the distribution width, is a measure of the discrimination, which can be improved by optimising the crystal operating temperature. Assuming operation of the technique at energies higher than $`\sim `$4 keV then, apart from gammas, other potential sources of background are neutrons and alpha particles. Neutron interactions are expected to yield events indistinguishable from WIMP interactions (hence neutrons are usually used to determine the response of NaI to nuclear recoils, since neutron scattering on nuclei is similar to WIMP scattering in terms of nuclear recoil generation).
However, neutron background can be sufficiently suppressed by shielding the detectors from neighbouring radioactivity and using deep underground sites to avoid neutrons from cosmic-ray muons. In the case of alpha particles, any background at low energies cannot arise from the likely small contamination of uranium and thorium in NaI(Tl), because the alphas from the uranium and thorium decay chains have energies exceeding 1 MeV. However, high energy alphas from activity in surrounding materials may potentially penetrate into the crystal and deposit energy in the surface layers. Their path in the sensitive surface layer of a crystal, and hence their energy deposition, can then be small depending on the point of their production. Thus such alphas could produce background events and may be responsible for the anomalous fast time constant events (pulses faster than recoil-like pulses) observed in UKDMC NaI(Tl) experiments undertaken at Boulby mine. In this paper we report a characterisation of the shape of pulses induced by neutrons, alphas and electron recoils in NaI(Tl) in order to investigate possible sources of background in NaI(Tl) dark matter experiments. Similar measurements have been performed previously in NaI(Tl) (see, for example, and references therein). However, presented here are results obtained with a single NaI(Tl) crystal at low energies (10-100 keV), allowing direct comparison of the pulses induced by the different particles in the energy range of prime interest for dark matter experiments. 2. Experimental set-up and analysis technique Measurements were performed using a Hilger Analytical Ltd. unencapsulated NaI(Tl) crystal of dimensions 2 inch × 2 inch diameter, cooled to 11°C using a purpose-built copper cryostat, with temperature control such that the variation did not exceed 0.5°C. Further details of the apparatus can be found in . The temperature chosen was close to that of the crystals used in the UK Dark Matter Collaboration (UKDMC) experiments at Boulby mine. The crystal was viewed through a pair of silica light guides by two 3-inch ETL type 9265 photomultiplier tubes (PMTs). Integrated pulses from the PMTs were digitised using a LeCroy 9430 oscilloscope driven by a Macintosh computer running Labview-based data acquisition software identical to that used with the UKDMC dark matter detectors. The digitised pulse shapes were passed to the computer and stored on disk. For the final analysis the sum of the pulses from the two PMTs was used. The apparatus was found to yield a total light output corresponding to $`\sim `$5.5 photoelectrons/keV. Energy calibrations for the tests were performed with a $`^{57}`$Co gamma source (122 keV). A $`^{60}`$Co gamma source was used to obtain pulse shapes from high-energy gammas. These gammas undergo Compton scattering inside the crystal, producing electrons which deposit the energy measured by the detector. To measure the shapes of the integrated pulses induced by nuclear recoils (sodium and iodine), the crystal was irradiated by neutrons from a $`^{252}`$Cf source. Neutron energies were decreased by shielding the source with a 4 cm thick lead block. All aforementioned sources were placed outside the copper vessel containing the crystal, light guides and PMTs. An $`^{241}`$Am source was used to irradiate the crystal with alpha particles. However, since the path length of alphas does not exceed tens of microns, the source was attached to the bare cylindrical face of the crystal inside the cryostat.
To decrease the energy of the alphas down to the keV range of interest, a few thin ($`\sim `$10 $`\mu `$m) layers of plastic were placed between the source and the crystal. The 60 keV gamma-line from the $`^{241}`$Am source also allowed an independent energy calibration of the detector and a measurement of the shape of the pulses initiated by electrons near the crystal surface. Such electrons are produced via the photoelectric effect. Analysis was performed, using the procedure described in , by fitting an exponential to each integrated pulse to obtain the index of the exponent, $`\tau `$: faster pulses (due to nuclear recoils, for example) have smaller values of $`\tau `$ than slower pulses such as those from gammas. For each experiment, histograms were generated of the number of detected events versus the value of $`\tau `$, referred to here as $`\tau `$-distributions. The $`\tau `$-distributions are known to depend on the measured energy. To reduce this dependence, the energy range (0-100 keV) was subdivided into energy bins of 10 keV width. We note that the energy threshold was about 10 keV. As shown in (see also references therein), the $`\tau `$-distribution for each population of pulses can be approximated by a gaussian in $`\mathrm{ln}(\tau )`$ (for a more detailed discussion of the distributions see and references therein): $$\frac{dN}{d\tau }=\frac{N_o}{\tau \sqrt{2\pi }\mathrm{ln}w}\mathrm{exp}\left[-\frac{(\mathrm{ln}\tau -\mathrm{ln}\tau _o)^2}{2(\mathrm{ln}w)^2}\right]$$ (1) The $`\tau `$-distributions were fitted with a gaussian in $`\mathrm{ln}(\tau )`$ with several free parameters, as follows. In the case of events from the $`^{60}`$Co gamma source a 3-parameter fit was used, with free parameters $`\tau _o`$, $`w`$ and $`N_o`$. In the experiments with the $`^{252}`$Cf neutron source both neutrons and gammas (from the source as well as from local radioactivity) were detected. The resulting $`\tau `$-distribution can thus be fitted with two gaussians. However, the parameters $`\tau _{o\gamma }`$ and $`w`$ for events initiated by gammas (effectively by Compton electrons) are known from the experiments with the $`^{60}`$Co gamma source. Assuming the value of $`w`$ (called the width parameter) for the neutron distribution (where the pulses are due to nuclear recoils) is the same as that of the gamma distribution, since the width is determined mainly by the number of photoelectrons, again a 3-parameter fit can be applied. In this case the free parameters are: the number of neutrons, $`N_{on}`$, the number of gammas, $`N_{o\gamma }`$, and the mean value of the exponent for the neutron distribution, $`\tau _{on}`$. In practice, for direct comparison of the gamma distributions obtained with different sources, the $`\tau `$-distribution of gamma events measured with the gamma source was used, instead of the gaussian fit, to approximate the distribution of gamma events measured with the neutron source. In the experiments with the alpha source both alphas and gammas were detected. To approximate the resulting distribution, we again used the $`\tau `$-distribution of the gamma events measured with the $`^{60}`$Co gamma source and a gaussian fit to the alpha distribution with 3 free parameters: the number of alphas, $`N_{o\alpha }`$, the number of gammas, $`N_{o\gamma }`$, and the mean value of the exponent of the alpha distribution, $`\tau _{o\alpha }`$.
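A minimal sketch of the single-population fit of Eq. (1) is given below. The two-population cases described above additionally include the scaled gamma-source template; the initial-guess numbers here are illustrative only, and curve_fit is a stand-in for whatever fitting package was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_gaussian(tau, n0, tau_o, w):
    """Eq. (1): a Gaussian in ln(tau) with mean tau_o and width parameter w
    (w must exceed 1 for the width ln(w) to be positive)."""
    return (n0 / (tau * np.sqrt(2.0 * np.pi) * np.log(w))
            * np.exp(-(np.log(tau) - np.log(tau_o)) ** 2
                     / (2.0 * np.log(w) ** 2)))

# tau_bins, counts = ...   # histogrammed time constants in one energy bin (ns)
# popt, _ = curve_fit(ln_gaussian, tau_bins, counts, p0=[1e4, 300.0, 1.3])
# n0, tau_o, w = popt      # tau_o is the quantity compared in Table 1
```

Fitting in ln(tau) rather than tau reflects the roughly multiplicative scatter of the fitted time constants, which is why the distribution looks Gaussian on a logarithmic axis.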
Furthermore, by making use of the 60 keV X-rays from the $`^{241}`$Am alpha source it was also possible to evaluate $`\tau _{oX}`$, attributed to pulses due to X-ray events initiated via the photoelectric effect near the surface of the crystal. Finally, to compare the 4 populations of events (initiated by gammas, neutrons, alphas and X-rays), we compared the values of $`\tau _{o\gamma }`$, $`\tau _{on}`$, $`\tau _{o\alpha }`$ and $`\tau _{oX}`$. 3. Results and discussion Measured $`\tau `$-distributions for events in two example energy bins are plotted in Figure 1a (30-40 keV, alpha source), 1b (55-65 keV, alpha source), 1c (30-40 keV, neutron source) and 1d (55-65 keV, neutron source). Plus signs show the data collected with the aforementioned sources. Open squares correspond to the data collected with the $`^{60}`$Co gamma source, normalised using the best fit procedure. Dotted curves show the fits to the neutron (alpha) distributions. Data collected with the gamma source (squares in Figure 1) match well the right-hand parts of the distributions obtained with the neutron or alpha sources (these parts correspond to gamma events detected in the experiments with the neutron or alpha sources). This is true also for the 55-65 keV range with the alpha source, where the gamma (right-hand) part of the $`\tau `$-distribution is dominated by X-rays from the $`^{241}`$Am source. The amplitude of the right-hand peak at 55-65 keV (Figure 1b) is several times larger than that at 30-40 keV (Figure 1a) (note the logarithmic scale of the $`y`$-axes), showing the presence of the strong 60 keV line superimposed on the background due to Compton electrons. This means that the $`\tau `$-distribution, and hence the basic shape of the pulses due to the X-rays, does not differ from that of the Compton electrons initiated by high-energy gammas. It is also clear that the positions of the left-hand peaks in the experiment with the alpha source (Figures 1a and 1b) are shifted to the left with respect to the positions of the left-hand peaks in the experiment with the neutron source (Figures 1c and 1d, respectively). This is an indication that $`\tau _{o\alpha }`$ is less than $`\tau _{on}`$ and, hence, that the pulses due to alphas are faster than the pulses due to nuclear recoils (note that it was shown in that pulses due to sodium recoils are indistinguishable from those initiated by iodine recoils at all energies of interest). Figure 2 shows the fits to the $`\tau `$-distributions from the gamma- (solid curve), neutron- (dashed curve) and alpha- (dotted curve) induced events for the energy bin 30-40 keV. The total number of events in each case is normalised to unity. The results are summarised quantitatively in terms of $`\tau _o`$ in Table 1. The typical error in $`\tau _o`$ is of the order of 2-5 ns (arising from the statistics of the fit), except for the first energy bin in the experiment with the alpha source, where the error of $`\tau _{o\alpha }`$ is 15 ns, being higher due to the smaller number of detected alphas. The value of $`\tau _{oX}`$, obtained from the fit to the right-hand peak of the $`\tau `$-distribution at 55-65 keV with the alpha source (see Figure 1), is 322$`\pm `$2 ns, in good agreement with the value of $`\tau _{o\gamma }`$ (2nd column of Table 1).
However, this does not agree with the conclusion of , where the shapes of the pulses due to X-rays from an $`^{241}`$Am source and Compton electrons from high-energy gammas were found to be different (the shape of the pulses due to X-rays was found to be similar to that of nuclear recoils). The values of $`\tau _{o\alpha }`$ (4th column of Table 1) are on average $`\sim 10\%`$ smaller than those of $`\tau _{on}`$ (3rd column of Table 1). The values of $`\tau _o`$ are known to vary from one crystal to another depending on the growth technology, Tl doping, temperature and other factors. However, ratios such as $`\tau _{on}`$ to $`\tau _{o\gamma }`$ are known to be quasi-independent of the crystal for fixed energy and temperature (we found this ratio to decrease from 0.80 down to 0.76 with increasing energy from 10 to 80 keV in the crystal with anomalous events, currently under operation in the Boulby mine). The ratios $`\tau _{on}/\tau _{o\gamma }`$, $`\tau _{o\alpha }/\tau _{o\gamma }`$ and $`\tau _{o\alpha }/\tau _{on}`$ are shown in the 5th, 6th and 7th columns of Table 1, respectively. The first two decrease slightly with increasing energy, while $`\tau _{o\alpha }/\tau _{on}`$ remains almost constant. The average ratio is $`<\tau _{o\alpha }/\tau _{on}>`$ = $`0.90\pm 0.01`$. The ratio $`<\tau _{o\alpha }/\tau _{on}>`$ is higher than the ratio $`<\tau _{oa}/\tau _{on}>`$ = $`0.79\pm 0.04`$ found for the anomalous fast events (pulses faster than recoil-like pulses) observed in the UKDMC experiment at Boulby mine. This suggests that the anomalous events are not produced by external high energy alphas degraded in energy by a non-scintillating layer of material, assuming the ratio $`<\tau _{o\alpha }/\tau _{on}>`$ does not depend on the crystal. 4. Conclusions The form of the pulses initiated by gammas, alphas, nuclear recoils and X-rays has been analysed in terms of the mean value, $`\tau _o`$, of the gaussian distribution of exponent indices (see eq. (1)). The value of $`\tau _{oX}`$ for events initiated by X-rays (using the 60 keV line from an $`^{241}`$Am source) was found to be the same as that of events induced by Compton electrons from high-energy gammas. The values of $`\tau _{o\alpha }`$ for alpha events are smaller (by $`\sim 10\%`$ on average) than those of $`\tau _{on}`$ for nuclear recoils induced by neutrons. However, the ratio $`\tau _{o\alpha }/\tau _{on}`$ is higher than the corresponding ratio of the anomalous events to nuclear recoil events observed in the UKDMC experiment. This suggests that the anomalous events are not produced by external high energy alphas degraded in energy by a non-scintillating layer of material, assuming the ratio $`\tau _{o\alpha }/\tau _{on}`$ does not depend on the crystal. 5. Acknowledgements The authors wish to thank PPARC, Zinsser Analytic (J.E.M.), Hilger Analytical Ltd. (J.W.R.), and Electron Tubes Ltd. (J.W.R.) for their support.
# Thermalization Mechanisms in Compact Sources ## 1 Introduction Observations with hard X-ray/$`\gamma `$-ray satellites such as CGRO OSSE, RXTE, and BeppoSAX indicate that the X/$`\gamma `$ spectra cut off at a few hundred keV for the majority of active galactic nuclei (see Zdziarski, this volume; Matt, this volume) and for the hard states of galactic black hole candidates (see Zdziarski, this volume; Grove, this volume). It is generally believed that Comptonization by a quasi-thermal population of electrons (or pairs) is responsible for the formation of the X/$`\gamma `$ spectra from these sources. In the spectral modeling codes (e.g., Poutanen & Svensson 1996), it is normally assumed that the Comptonizing particles have a Maxwellian distribution. It has been pointed out several times that thermalization by Coulomb scattering may not be fast enough compared to various cooling mechanisms (such as Compton cooling) and that the particle distribution therefore will differ from a Maxwellian (e.g., Dermer & Liang 1989; Fabian 1994). On the other hand, it has been noticed that another thermalization mechanism, synchrotron self-absorption, may operate in compact plasmas (Ghisellini, Guilbert, & Svensson 1988; Ghisellini & Svensson 1989). Here, we review the physics of these two thermalization mechanisms, and explore in which contexts each of them may operate. ## 2 Thermalization by Coulomb Scattering The approximate time scale, $`t_\mathrm{C}`$, for thermalization by Coulomb (Møller) scattering between electrons in nonrelativistic plasmas has long been known (e.g., Spitzer 1956; see Stepney 1983 for relativistic corrections): $$t_\mathrm{C}=4\frac{t_\mathrm{T}}{\mathrm{ln}\mathrm{\Lambda }}\mathrm{\Theta }^{3/2}(\pi ^{1/2}-1.2\mathrm{\Theta }^{1/4}+2\mathrm{\Theta }^{1/2}),$$ (1) where $`t_\mathrm{T}=(n_e\sigma _\mathrm{T}c)^{-1}`$ is the Thomson time, $`n_e`$ is the electron density, $`\mathrm{ln}\mathrm{\Lambda }\approx 10`$–20 is the Coulomb logarithm, and $`\mathrm{\Theta }\equiv kT_e/m_ec^2`$ is the dimensionless temperature. Only with the advent of X-ray astronomy emerged the understanding that the electrons may reach mildly relativistic temperatures of $`10^9`$ K or more, and that electron-positron pairs may be created. In such conditions, electron-positron (Bhabha) scattering also becomes important. Here, the term “Coulomb scattering” will be used for all types of “Coulomb” interactions. ### 2.1 The Relaxation Process Detailed numerical simulations of the relaxation and thermalization process were performed by Dermer & Liang (1989). As small-angle scatterings normally dominate and the fractional energy change per scattering is small, a Fokker-Planck approach is appropriate. Dermer & Liang evaluated the energy exchange and diffusion coefficients assuming the plasma to have a Maxwellian distribution. Using these coefficients in the simulations is equivalent to studying the relaxation of a test particle distribution in a Maxwellian background plasma. Nayakshin & Melia (1998) relaxed the Maxwellian assumption and computed self-consistent Fokker-Planck coefficients using the real particle distributions. A Monte Carlo approach was taken by Pilla & Shaham (1997) who treated the time evolution of both the pair and photon distributions in an infinite system and who besides Coulomb interactions, bremsstrahlung, and Compton scatterings also included pair production and annihilation.
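For orientation, Eq. (1) is trivial to evaluate numerically. The sketch below assumes a Coulomb logarithm of 15, within the 10–20 range quoted above; it is an illustrative helper, not code from any of the cited simulations.

```python
import numpy as np

def t_c_over_t_t(theta, ln_lambda=15.0):
    """Eq. (1): Coulomb relaxation time in units of the Thomson time
    t_T = 1/(n_e sigma_T c); ln_lambda = 15 is an assumed mid-range value."""
    return (4.0 / ln_lambda) * theta ** 1.5 * (
        np.pi ** 0.5 - 1.2 * theta ** 0.25 + 2.0 * theta ** 0.5)

for theta in (0.01, 0.1, 1.0):
    print(theta, t_c_over_t_t(theta))
# The ~Theta^(3/2) growth of this time scale is why Coulomb thermalization
# loses out to Compton and synchrotron cooling at mildly relativistic
# temperatures (Sec. 2.2).
```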
Figure 1 (from Dermer & Liang 1989) shows the relaxation of a test electron distribution in a background thermal electron plasma of temperature 511 keV, or, equivalently, $`\mathrm{\Theta }`$ = 1. The test electrons had initially a Gaussian distribution centered at 1 MeV and a FWHM of 0.28 MeV. The diffusion process dominates initially and broadens the electron distribution, which first relaxes at lower energies; only later does the Maxwellian tail form. Figure 7 in Nayakshin & Melia (1998) shows the similar process, but here all electrons are initially Gaussian (i.e., there is no Maxwellian background). If the initial distribution is broader than a Maxwellian, then the energy exchange coefficient dominates initially and the low energy electrons gain energy while the higher energy electrons lose energy, thereby narrowing the distribution towards a Maxwellian (see fig. 6 in Nayakshin & Melia 1998). ### 2.2 Influence of Cooling Processes on the Steady Electron Distribution With increasing temperature, the Coulomb energy exchange rate decreases. Various cooling processes such as bremsstrahlung, Compton cooling and synchrotron cooling increase with temperature, and eventually the thermalization process will be inhibited by the cooling, first noticeable as a truncation of the Maxwellian tail. Stepney (1983) noticed that bremsstrahlung cooling will prevent thermalization for temperatures larger than about 5 $`\times 10^{10}`$ K, and Baring (1987) performed further analysis for additional cooling processes, as did Ghisellini, Haardt, & Fabian (1993). Including Compton and synchrotron losses in the Fokker-Planck equation allows for the determination of the steady distribution function under the influence of these cooling processes. Results from Dermer & Liang (1989) are shown in Figure 2. It is seen that, for increasing energy densities of radiation and magnetic fields, the high energy tail of the electron distribution becomes increasingly truncated and the effective temperature of the distribution becomes smaller. What, then, are the conditions for losses to dominate over Coulomb thermalization (see, e.g., Fabian 1994)? The nonrelativistic cooling time scale can be written as (see Coppi, this volume) $$t_{\mathrm{cool}}\approx R/[c\ell _\mathrm{B}(1+U_{\mathrm{rad}}/U_\mathrm{B})],$$ (2) where $`R`$ is the size of the region, $`\ell _\mathrm{B}`$ is the magnetic compactness defined in equation (4) below, and $`U_{\mathrm{rad}}`$, $`U_\mathrm{B}`$ are the energy densities in radiation and magnetic fields, respectively. Comparing with equation (1), one finds that Coulomb scattering cannot maintain a Maxwellian when $$\ell _\mathrm{B}(1+U_{\mathrm{rad}}/U_\mathrm{B})>\frac{\tau _\mathrm{T}\mathrm{ln}\mathrm{\Lambda }}{4\mathrm{\Theta }^{3/2}},$$ (3) where $`\tau _\mathrm{T}=n_e\sigma _\mathrm{T}R`$ is the Thomson depth. ### 2.3 Can Coulomb Collisions Thermalize Pair Coronae and Active Pair Regions? Two different numerical codes (Stern et al. 1995a; Poutanen & Svensson 1996) have been used to study radiative transfer and Comptonization in pure pair coronae in energy and pair balance (see, e.g., Svensson 1997 for a review). For coronae of a given geometry and in energy balance, there exists a unique $`T_e`$–$`\tau _\mathrm{T}`$ relation, where $`T_e`$ is the volume-averaged coronal temperature and $`\tau _\mathrm{T}`$ is a characteristic Thomson scattering optical depth of the coronal region. In Figure 3a, this relation is shown for different geometries.
The results for active regions are connected by dotted curves. For comparison we also show the slab results from Stern et al. (1995b) using an iterative scattering method code (dashed curve). Solving the pair balance for the obtained combinations of ($`\mathrm{\Theta }`$, $`\tau _\mathrm{T}`$) gives a unique dissipation compactness, $`\ell _{\mathrm{diss}}`$ (see Ghisellini & Haardt 1994 for a discussion). Here, the local dissipation compactness, $`\ell _{\mathrm{diss}}\equiv (L_{\mathrm{diss}}/h)(\sigma _\mathrm{T}/m_ec^3)`$, characterizes the dissipation, with $`L_{\mathrm{diss}}`$ being the power providing uniform heating in a cubic volume of size $`h`$ in the case of a slab of height $`h`$, or in the whole volume in the case of an active region of size $`h`$. Figure 3b shows the $`\mathrm{\Theta }`$ vs. $`\ell _{\mathrm{diss}}`$ relations. The question arises whether the electrons can thermalize or not for the conditions, $`\mathrm{\Theta }`$, $`\tau _\mathrm{T}`$, and $`\ell _{\mathrm{diss}}`$, in Figure 3. Energy exchange and thermalization through Møller ($`e^\pm e^\pm `$) and Bhabha ($`e^+e^-`$) scattering compete with various loss mechanisms, with Compton losses being the most important for our conditions. The thermalization is slowest and the Compton losses largest for the higher energy particles in the Maxwellian tail. Instead of using the approximate equation (3), we use the detailed simulations by Dermer & Liang (1989, their fig. 8) to find the critical compactness above which the deviation of the electron distribution at the Maxwellian mean energy is more than a factor $`e\simeq 2.7`$. The dash-dotted and dash-dot-dot-dotted curves in Figure 3b show this critical compactness for slabs and for surface spheres, respectively. In agreement with Ghisellini, Haardt, & Fabian (1993), we find that Møller and Bhabha scattering cannot compete with Compton losses in our pair slabs and active regions. The problem then arises of what mechanism can thermalize the apparently thermal electron distribution in compact sources. One such mechanism is cyclo/synchrotron absorption. ## 3 Thermalization by Cyclo/Synchrotron Absorption ### 3.1 A Brief History of Synchrotron Thermalization Ever since the classical interpretation by Shklovskii in the 1950s of the radiation from the Crab nebula as being synchrotron radiation, this process has played an important role in our interpretation of the non-thermal radiation from a wide variety of astronomical objects. In general, the electron distribution has been assumed to be a power law or nearly a power law. Much less attention has been paid to what happens to the electron distribution at self-absorbed electron energies. The theory for synchrotron radiation was developed in the 1950s (see, e.g., reviews by Ginzburg and Syrovatskii 1965, 1969; Pacholczyk 1970). The emission and absorption coefficients for single relativistic electrons as well as for ensembles of relativistic electrons having power law or thermal distributions were calculated in the 1950s and 1960s. For power law distributions, the absorption coefficient increases towards lower photon energies. Below some photon frequency, $`\nu _{\mathrm{abs}}`$, the source becomes optically thick to synchrotron self-absorption, resulting in an intensity proportional to $`\nu ^{5/2}`$, obtained from the ratio of the emission and absorption coefficients (Le Roux 1961). A finite $`\nu _{\mathrm{abs}}`$, of course, requires that the source is finite. Below we call this and its consequences “finite source effects”.
This early work was, in general, applied to extended sources where the cooling time at self-absorbed particle energies is longer than other relevant time scales (such as the age of the source or the dynamical time scales). It was therefore natural to assume that the self-absorbing electron distribution below the Lorentz factor, $`\gamma _{\mathrm{abs}}`$, of the electron (emitting at the frequency $`\nu _{\mathrm{abs}}`$ where the source becomes optically thick) is unaffected by self-absorption and simply maintains the power law distribution of the injected electrons. In the late 1960s, it became increasingly clear that, in compact sources or on long time scales, the self-absorbed electron distribution $`N(\gamma )`$ will evolve under the influence of synchrotron emission and absorption. What are then the possible equilibrium solutions at self-absorbing Lorentz factors towards which $`N(\gamma )`$ would relax? In the important papers by Rees (1967) and McCray (1969), it was shown that power law distributions, $`N(\gamma )\propto \gamma ^{-s}`$ with $`s=2`$ and 3, are equilibrium solutions to the kinetic equations. Rees (1967), however, also found that the solution with $`s=3`$ is unstable and would evolve away from $`s=3`$ if slightly perturbed. McCray (1969) showed this explicitly by numerically calculating the time-dependent evolution of initial power law distributions in an infinite source. Rees predicted and McCray confirmed that the high energy electrons in a flat ($`s<3`$) initial power law would tend to evolve into a quasi-Maxwellian distribution. McCray (1969), furthermore, emphasized the importance of finite source effects on the evolution. In particular, for power laws with $`s<3`$, the self-absorbing electrons would gain energy, absorbing slightly more energy than they emit, while the electrons radiating in the optically thin limit lose energy by radiating much more energy than they absorb. All electrons would therefore tend to gather at $`\gamma _{\mathrm{abs}}`$, developing a peak there (as was already emphasized by Rees 1967). It must be emphasized that the relaxation of self-absorbing electrons takes place through the energy exchange with the radiation field, which in its turn is determined by the particle distribution. This is the “synchrotron boiler”, a terminology coined by Ghisellini, Guilbert, & Svensson (1988). ### 3.2 Rise and Fall of yet another Paradigm In a series of papers in the 1970s, Norman and coworkers further developed the concept of the Plasma Turbulent Reactor (PTR) introduced by Kaplan and Tsytovich (1973). Originally the turbulence feeding the electrons was thought to consist of plasmons. In practice, however, the PTR is exactly the self-absorbing synchrotron source considered here, as photons are the only plasma modes with a sufficiently small damping rate that they can mediate energy transfer from one electron to another (Norman 1977). Norman and ter Haar (1975) and Norman (1977) essentially repeated the analysis of McCray (1969), using quite different notation and definitions but arriving at the same conclusions: that $`N(\gamma )\propto \gamma ^{-2}`$ and $`N(\gamma )\propto \gamma ^{-3}`$ are the only steady power law equilibrium solutions. It is important that they noted that the $`N(\gamma )\propto \gamma ^{-2}`$ solution corresponds to a finite electron flux upwards along the energy axis, while $`N(\gamma )\propto \gamma ^{-3}`$ corresponds to zero electron flux.
They argued that $`N(\gamma )\propto \gamma ^{-3}`$ was the most physical solution, as the synchrotron time scales establishing this distribution are shorter than other time scales. Although being aware of possible finite source effects, they considered them not to influence the electron distribution at Lorentz factors $`\lesssim \gamma _{\mathrm{abs}}`$. The self-absorbed solution, $`N(\gamma )\propto \gamma ^{-3}`$, was considered sufficiently important in explaining power law spectra from a variety of sources that Norman and ter Haar (1975) called the PTR a new astrophysical paradigm. Norman and coworkers, however, do not seem to have considered the stability of the $`N(\gamma )\propto \gamma ^{-3}`$ solution. The work of Rees (1967) and McCray (1969) indicates that a Maxwellian distribution may be the only stable equilibrium solution. This was, however, not rigorously established, causing Ghisellini, Guilbert & Svensson (1988, GGS88) to numerically determine the steady solutions of the kinetic equations including physical boundary conditions (i.e., correct Fokker-Planck coefficients at subrelativistic energies, and the accounting for finite source effects at large energies). As $`\gamma _{\mathrm{abs}}`$ typically is of the order 10-100 in compact radio sources and the development of the self-absorbed distribution takes place at mildly relativistic energies, they used expressions and equations valid at any energy. Furthermore, in order to obtain steady solutions the particle injection had to be balanced by a sink term (escape or reacceleration). Injecting a power law proportional to $`\gamma ^{-3}`$ (i.e., with the equilibrium slope), GGS88 found that the steady solution was a Maxwellian with a temperature corresponding to the mean energy of the injected electrons. Similarly, an injected power law proportional to $`\gamma ^{-2}`$ (essentially corresponding to monoenergetic injection at some large Lorentz factor $`\gg \gamma _{\mathrm{abs}}`$) led to the establishment of a Maxwellian just below $`\gamma _{\mathrm{abs}}`$. The injected electrons cool until reaching $`\gamma _{\mathrm{abs}}`$, where they thermalize, exchanging energy with the self-absorbed radiation field. Additional studies were made by de Kool, Begelman, & Sikora (1989) and Coppi (1990). With these works it appears that the PTR paradigm of Norman and ter Haar (1975) has been shown to be invalid. ### 3.3 Relaxation by Cyclo/Synchrotron Absorption The works that have so far explicitly demonstrated the formation of a Maxwellian through synchrotron self-absorption are the numerical simulations of GGS88, Coppi (1990), and Ghisellini, Haardt, & Svensson (1998). Here, we review some of the results in the last paper. In the numerical simulations, a kinetic equation for the electron distribution is solved. The kinetic equation, which also in this case takes the form of a Fokker-Planck equation (see the derivation in McCray 1969), includes Compton and synchrotron cooling, synchrotron absorption (heating), electron injection, and electron escape. Even though various source geometries are discussed, the radiation field is assumed to be given by the steady slab solution, which is correct to order unity. The simulations consider a region of size $`R`$ with a magnetic field of strength $`B`$, into which some distribution of electrons is injected with a power $`L`$.
The electrons are assumed to escape at the speed $v_{\mathrm{esc}}=c\beta_{\mathrm{esc}}=R/t_{\mathrm{esc}}$, where $t_{\mathrm{esc}}$ is the escape time. Convenient parameters describing compact sources are the injection compactness, $\ell_{\mathrm{inj}}$, and the magnetic compactness, $\ell_{\mathrm{B}}$, defined as
$$\ell_{\mathrm{inj}}=\frac{L}{R}\frac{\sigma_{\mathrm{T}}}{m_{\mathrm{e}}c^3};\qquad \ell_{\mathrm{B}}=\frac{\sigma_{\mathrm{T}}}{m_{\mathrm{e}}c^2}\,R\,U_{\mathrm{B}},$$ (4)
where $\sigma_{\mathrm{T}}$ is the Thomson cross section and $U_{\mathrm{B}}=B^2/8\pi$ is the magnetic field energy density. Note that $U_{\mathrm{rad}}/U_{\mathrm{B}}\approx(9/16\pi)(\ell_{\mathrm{inj}}/\ell_{\mathrm{B}})(1+\tau_{\mathrm{T}})$, where the numerical factor depends on the source geometry. The problem we consider has the following parameters: $\ell_{\mathrm{inj}}$, $\ell_{\mathrm{B}}$, $\beta_{\mathrm{esc}}$, and either $R$ or $B$. Further parameters are those describing the shape of the injected electron distribution. For steady state electrons emitting and absorbing synchrotron photons, the cooling (emission) and absorption/diffusion time scales are balanced and thus equal. The synchrotron cooling time scale can thus be taken to be the thermalization time scale. From Equation (2), it is then clear that self-absorbing electrons will thermalize before they escape when $\ell_{\mathrm{B}}\gtrsim 1$. The simulations shown below have $\ell_{\mathrm{B}}=10$ and 30.

Figure 4 shows the relaxation due to cyclo/synchrotron absorption of an electron distribution towards the equilibrium Maxwellian distribution. The injected electrons have a Gaussian energy distribution peaking at $\gamma=10$. Each curve is labeled by the time in units of $R/c$. The shape of the equilibrium distribution is reached in about $0.1R/c$, about equal to the cyclo/synchrotron cooling time. With the assumed input parameters, the synchrotron terms (emission, absorption and energy diffusion) in the kinetic equation are dominant over Compton losses. Gains and losses in this case almost perfectly balance. As a result, the equilibrium electron distribution is a Maxwellian. Figure 4 also shows that the high energy part of the Maxwellian distribution is formed earlier than the low energy part, as the high energy electrons exchange photons more efficiently. A slower evolution takes place after $0.1(R/c)$, as the balance between electron injection and electron escape is achieved on a time scale of a few $t_{\mathrm{esc}}$. Only then have both the shape and the amplitude of the electron distribution reached their equilibrium values.

### 3.4 Influence of Cooling Processes on the Steady Electron Distribution

The equilibrium distributions for different values of the injected compactness are shown in Figure 5. The magnetic compactness is set to $\ell_{\mathrm{B}}=30$, corresponding to $B=10^4$ G for $R=10^{13}$ cm (from Eq. 4). In all cases, the injected distribution is a peaked function with an exponential high energy cut-off. The mean injected Lorentz factor is $\langle\gamma\rangle\approx 5$ and essentially all electrons are below $\gamma_{\mathrm{abs}}$. It is apparent from Figure 5 that the electron distribution is a quasi-Maxwellian at all energies as long as $\ell_{\mathrm{inj}}\ll\ell_{\mathrm{B}}$. This is a consequence of an almost perfect balance between synchrotron gains (absorption) and losses, while Compton losses are only a small perturbation.
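As a concrete check of these definitions, the following minimal Python sketch (not from the papers under review) evaluates the compactness parameters of Eq. (4) in CGS units, together with the Thomson-depth and thermalization-threshold relations quoted in the next paragraph; the default $\beta_{\mathrm{esc}}=1$ is an assumption chosen to match the quoted numbers.

```python
# Minimal sketch (assumed values, CGS units): compactness parameters of
# Eq. (4) and the Coulomb-vs-synchrotron relations quoted in the text.
import math

SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
ME_C2   = 8.187e-7    # electron rest energy [erg]
C       = 2.998e10    # speed of light [cm/s]

def ell_inj(L, R):
    """Injection compactness, l_inj = (L/R) sigma_T / (m_e c^3)."""
    return (L / R) * SIGMA_T / (ME_C2 * C)

def ell_B(B, R):
    """Magnetic compactness, l_B = (sigma_T / m_e c^2) R U_B."""
    return SIGMA_T / ME_C2 * R * B**2 / (8.0 * math.pi)

def tau_T(l_inj, beta_esc=1.0, gamma_mean=5.0):
    """Thomson depth from the injection/escape balance (see text)."""
    return 3.0 / (4.0 * math.pi) * l_inj / (beta_esc * gamma_mean)

def theta_crit(l_inj, l_B, gamma_mean=5.0, ln_lambda=20.0):
    """Temperature above which synchrotron self-absorption dominates
    thermalization over Coulomb scattering (see next paragraph)."""
    return 0.11 * (ln_lambda / gamma_mean)**(2.0/3.0) * (l_inj / l_B)**(2.0/3.0)

print(ell_B(1e4, 1e13))          # ~32: l_B ~ 30 for B = 1e4 G, R = 1e13 cm
print(ell_inj(1.1e43, 1e13))     # ~30 for an (assumed) L ~ 1e43 erg/s source
print(tau_T(0.1), tau_T(100.0))  # ~5e-3 and ~5, as quoted below
print(theta_crit(1.0, 30.0))     # ~0.03, the quoted threshold
```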
As $\ell_{\mathrm{inj}}$ increases towards $\ell_{\mathrm{B}}$, Compton losses become increasingly important, competing with the synchrotron processes. At high energies, losses overcome gains, and the electrons diffuse downwards in energy until subrelativistic energies are reached. In this energy regime, the increased efficiency of synchrotron gains (relative to losses) halts the systematic downward diffusion in energy, and a Maxwellian can form (see Ghisellini & Svensson 1989). The temperature of this part of $N(\gamma)$ can be obtained by fitting a Maxwellian to the low energy part of the distribution, up to energies just above the peak of the electron distribution. The resulting temperatures are plotted in Figure 6 as a function of $\ell_{\mathrm{inj}}$. For $\ell_{\mathrm{inj}}\lesssim 1$, the temperature is approximately constant, while it decreases for $\ell_{\mathrm{inj}}\gtrsim 1$.

From Equation (3) (with $U_{\mathrm{rad}}/U_{\mathrm{B}}$ set to zero), we see that thermalization by synchrotron self-absorption dominates when $\ell_{\mathrm{B}}>\tau_{\mathrm{T}}\ln\Lambda/4\Theta^{3/2}$ (assuming $\Theta\lesssim 1$). The Coulomb process thus dominates for small temperatures and large $\tau_{\mathrm{T}}$ (i.e., large electron densities). We need to know $\tau_{\mathrm{T}}$ for our simulations. The balance between electron injection and escape in our model gives a Thomson optical depth of $\tau_{\mathrm{T}}=(3/4\pi)(\ell_{\mathrm{inj}}/\beta_{\mathrm{esc}}\langle\gamma\rangle)$. For the simulations in Figure 5, the optical depth increases from $\tau_{\mathrm{T}}=5\times 10^{-3}$ for $\ell_{\mathrm{inj}}=0.1$ to $\tau_{\mathrm{T}}=5$ for $\ell_{\mathrm{inj}}=100$. Using the expression for $\tau_{\mathrm{T}}$, we find that thermalization by synchrotron self-absorption then dominates over Coulomb scattering for temperatures $\Theta>0.11(\ln\Lambda/\langle\gamma\rangle)^{2/3}(\ell_{\mathrm{inj}}/\ell_{\mathrm{B}})^{2/3}$. For the parameters of the simulations in Figures 5 and 6 and $\ln\Lambda=20$, the condition becomes $\Theta>0.03\,\ell_{\mathrm{inj}}^{2/3}$, which is plotted as the solid line in Figure 3. One sees that synchrotron self-absorption dominates the thermalization for all cases with $\ell_{\mathrm{inj}}$ smaller than about 10. For the cases $\ell_{\mathrm{inj}}=30$ and 100, one cannot neglect Coulomb thermalization.

### 3.5 Spectra from Steady Electron Distributions

In Figure 7, the radiation spectra corresponding to four of the equilibrium electron distributions in Figure 5 are shown. Each spectrum consists of several continuum components:

* a self-absorbed synchrotron spectrum (S);
* a Comptonized synchrotron spectrum (SSC);
* a reprocessed thermal soft component (bump);
* a component from Comptonization of thermal bump photons (IC);
* a Compton reflection component.

Details of the spectral calculations are given in Ghisellini, Haardt, & Svensson (1998). Some features in Figure 7 may be noticed. For $\ell_{\mathrm{inj}}<1$, the Compton $y$-parameter is less than unity (see Fig. 6), making Compton losses relatively unimportant compared with the self-absorbed synchrotron radiation. The large value of $\Theta$ makes the Comptonized spectra bumpy. The 2–10 keV band is dominated by the SSC component, rather than by the IC.
The thermal bump and the X-ray flux are thus not directly related. This is contrary to the common interpretation of the X-ray emission in Seyfert galaxies as being due to Comptonization of thermal bump photons. For $\ell_{\mathrm{inj}}>1$, Compton cooling dominates and limits the $y$-parameter to unity. The smooth IC power law dominates over the S and SSC components. For $\ell_{\mathrm{inj}}\lesssim 3$, the high energy spectral cut-off can be described by an exponential, since the electron distribution is a quasi-Maxwellian in the entire energy range. For $\ell_{\mathrm{inj}}\gtrsim 3$, the electron distribution is more complex (see Fig. 5), resulting in a more complex spectral cut-off. The choice of $R=10^{13}$ cm (or $B=10^4$ G) in Figure 7 corresponds to the case of active galactic nuclei (AGN). Ghisellini, Haardt, & Svensson (1998) also study the case of galactic black holes, choosing $R=10^7$ cm (or $B=10^7$ G) for the same set of $\ell_{\mathrm{B}}$ and $\ell_{\mathrm{inj}}$. The large magnetic field and small size move the S and bump peaks to larger frequencies. The main difference is that now the synchrotron component is not completely self-absorbed, leading to an optically thin synchrotron component from the highest energy electrons. The larger supply of soft synchrotron photons enhances the SSC component relative to the IC component as compared to the AGN case.

## 4 Final Remarks

First, we note that $\ell_{\mathrm{B}}>1$ is needed for synchrotron self-absorption to operate efficiently (from eq. 2). A Maxwellian is then formed with the same mean energy as the injected electrons, assuming that self-absorption operates at essentially all electron energies of interest. If, furthermore, Compton cooling is important, i.e., if $\ell_{\mathrm{inj}}>\ell_{\mathrm{B}}$, then the Maxwellian is modified and shifted to lower energies (temperature). Second, we note that the criterion for Coulomb vs synchrotron thermalization in the case when all electrons are self-absorbing (i.e. radiating optically thick synchrotron radiation) is more complex (eq. 3). Essentially the same criterion is valid for Coulomb thermalization in the case when the major part of the electrons radiate optically thin synchrotron radiation (or Compton radiation), truncating the Maxwellian. Which of the two cases applies depends on whether the energy, $(\gamma_{\mathrm{abs}}-1)m_ec^2$, of the electrons radiating at the photon energy where the absorption optical depth is unity is much larger or much smaller than $kT_e$. There should also be a region in parameter space where both Coulomb and synchrotron thermalization operate simultaneously (assuming that most electrons radiate optically thick radiation). Here, Coulomb thermalization should dominate at lower electron energies and synchrotron thermalization at larger energies. However, nobody seems so far to have solved the Fokker-Planck equations to study the thermalization process including both Coulomb scattering and synchrotron self-absorption.

Ultimately, the thermalization process should be put in a realistic context. Mahadevan & Quataert (1997) studied the importance of thermalization in advection-dominated flows onto black holes under the conditions considered in such flows (e.g., close to free-fall, equipartition magnetic fields).
Comparing the thermalization time scales with the accretion time scale (equivalent to $t_{\mathrm{esc}}$ in our discussion), they found that thermalization did not occur at large radii and small accretion rates. However, at sufficiently large accretion rates, synchrotron thermalization becomes important, and at even larger rates (and thus larger densities) Coulomb thermalization starts operating. Another scenario for generating the X-ray radiation from compact sources is that of a corona or magnetic flares atop an accretion disk. The typical condition for the flare regions is that the magnetic energy density should dominate the radiation energy density, i.e., that $\ell_{\mathrm{B}}\gtrsim\ell_{\mathrm{inj}}>1$, which should ensure that cyclo/synchrotron self-absorption acts as a very efficient thermalizing mechanism in such regions.

###### Acknowledgements. I appreciate a more than decade-long collaboration with G. Ghisellini on the issues discussed in this review. I thank J. Poutanen and A. Beloborodov for valuable comments. This work is supported by the Swedish Natural Science Research Council and the Swedish National Space Board.
# Starlight in the Universe

(To appear in Physica Scripta, Proceedings of the Nobel Symposium, Particle Physics and the Universe, Enkoping, Sweden, August 20-25, 1998.)

## Introduction

There is little doubt that the last few years have been very exciting times in galaxy formation and evolution studies. The remarkable progress in our understanding of faint galaxy data, made possible by the combination of Hubble Space Telescope (HST) deep imaging and ground-based spectroscopy, has permitted us to shed new light on the evolution of the stellar birthrate in the universe, to identify the epoch $1\lesssim z\lesssim 2$ where most of the optical extragalactic background light was produced, and to set important constraints on galaxy evolution scenarios. The explosion in the quantity of information available on the high-redshift universe at optical wavelengths has been complemented by the detection of the far-IR/sub-mm background by DIRBE and FIRAS onboard the COBE satellite, and by theoretical progress made in understanding how cosmic structure forms from initial density fluctuations JPO. The IR data have revealed the ‘optically-hidden’ side of galaxy formation, and shown that a significant fraction of the energy released by stellar nucleosynthesis is re-emitted as thermal radiation by dust. The underlying goal of all these efforts is to understand the growth of cosmic structures, the internal properties of galaxies and their evolution, the mechanisms that shaped Hubble’s morphological sequence, and ultimately to map the transition from the cosmic ‘dark age’ to an ionized universe populated with luminous sources. While one of the important questions to have emerged recently is the nature (starbursts or active galactic nuclei?) and redshift distribution of the ultraluminous sub-mm sources discovered by SCUBA, of perhaps equal interest is the possible existence of a large population of faint galaxies still undetected at high redshifts, as the color-selected ground-based and Hubble Deep Field (HDF) samples include only the brightest and bluest star-forming objects. In any hierarchical clustering (‘bottom-up’) scenario (the cold dark matter model being the best studied example), subgalactic structures are the first non-linearities to form. High-$z$ dwarf galaxies and/or mini-quasars (i.e. an early generation of stars and accreting black holes in dark matter halos with circular velocities $v_c\sim 50\ \mathrm{km\ s^{-1}}$) may then be among the main sources of UV photons and heavy elements at early epochs. In this talk I will focus on some of the open issues and controversies surrounding our present understanding of the history of the conversion of cold gas into stars within galaxies, and of the evolution of luminous sources in the universe. An Einstein-de Sitter (EdS) universe ($\Omega_M=1$, $\Omega_\Lambda=0$) with $h=H_0/100\ \mathrm{km\ s^{-1}\ Mpc^{-1}}=0.5$ will be adopted in the following.

## Counting galaxies

Much observing time has been devoted in the past few years to the problem of the detection of galaxies at high redshifts, as it was anticipated that any knowledge of their early luminosity and color evolution would set important constraints on the history of structure and star formation in the universe. As the best view to date of the optical sky at faint flux levels, the HDF imaging survey has rapidly become a key testing ground for models of galaxy evolution.
The field, an undistinguished portion of the northern sky at high galactic latitudes (the data from a southern deep field are being analyzed as we speak), is essentially a deep core sample of the universe, acquired with the HST in a 10-day exposure. With its depth – reaching 5-$\sigma$ limiting AB magnitudes of roughly 27.7, 28.6, 29.0, and 28.4 in $U,B,V,$ and $I$<sup>2</sup> – and four-filter strategy to provide constraints on the redshift and age distribution of galaxies in the image, the HDF has offered the astronomical community the opportunity to study the galaxy population in unprecedented detail W96.

<sup>2</sup>To get a feeling for the depth of this survey, note that $AB=29$ mag corresponds to the flux at Earth from a 100 Watt light bulb at a distance of 10 million kilometers.

There are about 3000 galaxies in the HDF, corresponding to $2\times 10^6$ deg$^{-2}$ down to the faint limit of the images. The galaxy counts are shown in Figure 1 in four bandpasses centered at roughly 300, 450, 600, and 800 nm. A compilation of existing ground-based data is also shown, together with the predictions of no-evolution models, i.e. models in which the absolute brightness, volume density, and spectra of galaxies do not change with time. In all four bands, the logarithmic slope $\alpha$ of the galaxy number–apparent magnitude counts, $\log N(m)=\alpha m$, flattens at faint magnitudes, e.g., from $\alpha=0.45$ in the interval $21<B<25$ to $\alpha=0.17$ for $25<B<29$. The slope of the galaxy counts is a simple cosmological probe of the early history of star formation. The flattening at faint apparent magnitudes cannot be due to the reddening of distant sources as their Lyman break gets redshifted into the blue passband,<sup>3</sup> since the fraction of Lyman-break galaxies at $B\approx 25$ is only of order 10%. Moreover, an absorption-induced loss of sources could not explain the similar flattening of the galaxy counts observed in the $V$ and $I$ bands. Rather, the change of slope suggests that the surface density of luminous galaxies declines beyond $z\sim 1.5$.

<sup>3</sup>For galaxies with $z>2$ ($z>3.5$), the H I Lyman edge shifts into the 300 (450) nm HDF bandpass. Neutral hydrogen, which is ubiquitous both within galaxies and in intergalactic space, strongly absorbs ultraviolet light, creating a spectral discontinuity that can be used to identify young, high-redshift galaxies S96.

## The brightness of the night sky

The extragalactic background light (EBL) is an indicator of the total luminosity of the universe. It provides unique information on the evolution of cosmic structures at all epochs, as the cumulative emission from galactic systems and active galactic nuclei (AGNs) is expected to be recorded in this background. The contribution of known galaxies to the optical EBL can be calculated directly by integrating the emitted flux times the differential galaxy number counts down to the detection threshold. The leveling off of the counts is clearly seen in Figure 2, where the function $i_\nu=10^{-0.4(m+48.6)}\times N(m)$ is plotted against apparent magnitude in all bands Pozz98.
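As an illustration of this count integration, a short numerical sketch follows; the two-slope counts model below is a hypothetical stand-in for the real $N(m)$, with an arbitrary normalization, meant only to show how the faint-end flattening makes the integral converge.

```python
# Minimal sketch of the count integration in the text: the EBL from
# resolved sources is the sum over magnitude bins of f_nu(m) * N(m),
# with f_nu = 10**(-0.4*(m + 48.6)) erg/s/cm^2/Hz (AB zero point).
# counts_per_mag is a hypothetical two-slope model mimicking the
# observed flattening from alpha = 0.45 to 0.17 around B ~ 25.
import numpy as np

def counts_per_mag(m, m_break=25.0, a_bright=0.45, a_faint=0.17, norm=1.0):
    """dN/dm per deg^2; the normalization here is arbitrary."""
    a = np.where(m < m_break, a_bright, a_faint)
    return norm * 10.0**(a * (m - m_break))

m = np.arange(18.0, 29.0, 0.1)
f_nu = 10.0**(-0.4 * (m + 48.6))              # erg s^-1 cm^-2 Hz^-1
i_nu = np.trapz(f_nu * counts_per_mag(m), m)  # per deg^2
print(i_nu / 3.046e-4, "erg s^-1 cm^-2 Hz^-1 sr^-1 (arbitrary norm)")
```

With the bright-end slope 0.45 the integrand $i_\nu$ rises towards the break, while the faint-end slope 0.17 makes the contribution per magnitude fall off beyond it; this is the behavior seen in Figure 2.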
While counts having a logarithmic slope $\alpha\geq 0.40$ continue to add to the EBL at the faintest magnitudes, it appears that the HDF survey has achieved the sensitivity to capture the bulk of the extragalactic light from discrete sources (an extrapolation of the observed counts to brighter and/or fainter magnitudes would typically increase the sky brightness by less than 20%). To $AB=29$ mag, the sky brightness from resolved galaxies in the $I$-band is $\approx 2\times 10^{-20}\ \mathrm{erg\ cm^{-2}\ s^{-1}\ Hz^{-1}\ sr^{-1}}$, increasing roughly as $\lambda^2$ from 2000 to 8000 Å. The flattening of the number counts has the interesting consequence that the galaxies that produce $\sim 60\%$ of the blue EBL have $B<24.5$. They are then bright enough to be identified in spectroscopic surveys, and are indeed known to have median redshift $z=0.6$ Li96. The quite general conclusion is that there is no evidence in the number–magnitude relation down to very faint flux levels for a large amount of star formation at high redshift. Note that these considerations do not constrain the rate of starbirth at early epochs, only the total (integrated over cosmic time) amount of stars – hence background light – being produced, and they neglect the effect of dust reddening.

Figure 3 shows the total optical EBL from known galaxies together with the recent COBE results. The value derived by integrating the galaxy counts Pozz98 down to very faint magnitude levels [because of the flattening at faint magnitudes of the number–magnitude relation, most of the contribution to the optical EBL comes from relatively bright galaxies] implies a lower limit to the EBL intensity in the 0.3–2.2 $\mu$m interval of $I_{\mathrm{opt}}\approx 12\ \mathrm{nW\ m^{-2}\ sr^{-1}}$.<sup>4</sup> When combined with the FIRAS and DIRBE measurements ($I_{\mathrm{FIR}}\approx 16\ \mathrm{nW\ m^{-2}\ sr^{-1}}$ in the 125–5000 $\mu$m range), this gives an observed EBL intensity in excess of $28\ \mathrm{nW\ m^{-2}\ sr^{-1}}$. The correction factor needed to account for the residual emission in the 2.2 to 125 $\mu$m region is probably $\lesssim 2$ Dwe98. (We shall see below how a population of dusty AGNs could make a significant contribution to the FIR background.) In the rest of this talk I will adopt a conservative reference value for the total EBL intensity associated with star formation activity over the entire history of the universe of $I_{\mathrm{EBL}}=40\,I_{40}\ \mathrm{nW\ m^{-2}\ sr^{-1}}$.

<sup>4</sup>The direct detection of the optical EBL at 3000, 5500, and 8000 Å derived from HST data RAB implies values that are about a factor of two higher than the integrated light from galaxy counts.

## Modeling galaxy evolution

In the past few years two different approaches have been widely used to interpret faint galaxy data RSE. In the simplest version of what I will call the ‘traditional’ scheme, a one-to-one mapping between galaxies at the present epoch and their distant counterparts is assumed: one starts from the local measurements of the distribution of galaxies as a function of luminosity and Hubble type and models their photometric evolution assuming some redshift of formation and a set of parameterized star formation histories Tin80. These, together with an initial mass function (IMF) and a cosmological model, are then adjusted to match the observed number counts, colors, and redshift distributions.
Beyond the intrinsic simplicity of assuming a well defined collapse epoch and pure luminosity evolution thereafter, the main advantage of this kind of approach is that it can easily be made consistent with the classical view that ellipticals and spiral galaxy bulges (both redder than spiral disks and containing less gas) formed early in a single burst of duration 1 Gyr or less Bern. Spiral galaxies, by contrast, are characterized by a slower metabolism, i.e. star formation histories that extend to the present epoch. In these models, typically, much of the action happens at high redshifts.

A more physically motivated way to interpret the observations is to construct semianalytic hierarchical models of galaxy formation and evolution WF91. Here, one starts ab initio from a power spectrum of primordial density fluctuations, and follows the formation and hierarchical merging of the dark matter halos that provide the early seeds for later galaxy formation. Baryonic gas gets accreted onto the halos and is shock-heated. Various prescriptions for gas cooling, star formation, feedback, and dynamical friction are adopted, and tuned to match the statistical properties of both nearby and distant galaxies. In this scenario, there is no period when bulges and ellipticals form rapidly as single units and are very bright: rather, small objects form first and merge continually to make larger ones. Galaxies do not evolve as isolated objects, and the rate of interaction was higher in the past. The bulk of the galaxy population is predicted to have been assembled quite recently, and most galaxies never experience star formation rates in excess of a few solar masses per year.

## Star formation history

Recently, it has become common to follow an alternative method, which focuses on the emission properties of the galaxy population as a whole. It traces the cosmic evolution with redshift of the galaxy luminosity density and offers the prospect of an empirical determination of the global star formation history of the universe and the IMF of stars, independently of the merging histories, complex evolutionary phases, and possibly short-lived star formation episodes of individual galaxies. The technique relies on two basic properties of stellar populations: a) the UV-continuum emission in all but the oldest galaxies is dominated by short-lived massive stars, and is therefore a direct measure, for a given IMF and dust content, of the instantaneous star formation rate; and b) the rest-frame near-IR light is dominated by near-solar mass evolved stars, the progenitors of which make up the bulk of a galaxy’s stellar mass, and is more sensitive to the past star formation history than the blue (and UV) light. By modeling the “emission history” of the universe at ultraviolet, optical, and near-infrared wavelengths from the present epoch to high redshifts, one should be able to shed light on some key questions in galaxy formation and evolution studies: Is there a characteristic epoch of star and metal formation in galaxies? What fraction of the luminous baryons observed today were already locked into galaxies at early epochs? Are high-$z$ galaxies obscured by dust? Do spheroids form early and rapidly? Is there a universal IMF? The comoving volume-averaged history of star formation follows a relatively simple dependence on redshift. Its latest version, uncorrected for dust extinction, is plotted in Figure 4 (left).
The measurements are based upon the rest-frame UV luminosity function (at 1500 and 2800 Å), assumed to be from young stellar populations M96. The prescription for a ‘correct’ de-reddening of these values has been the subject of an ongoing debate. Dust may play a role in obscuring the UV continuum of Canada-France Redshift Survey (CFRS, $0.3<z<1$) and Lyman-break ($z\approx 3$) galaxies, as their colors are too red to be fitted with an evolving stellar population and a Salpeter IMF M98. Figure 4 (right) depicts an extinction-corrected version of the same plot. The best-fit cosmic star formation history (shown by the dashed line) with such a universal correction produces a total EBL of $\approx 37\ \mathrm{nW\ m^{-2}\ sr^{-1}}$. About 65% of this is radiated in the UV$+$optical$+$near-IR between 0.1 and 5 $\mu$m; the total amount of starlight that is absorbed by dust and reprocessed in the far-IR is $\approx 13\ \mathrm{nW\ m^{-2}\ sr^{-1}}$. Because of the uncertainties associated with the incompleteness of the data sets, the photometric redshift technique, dust reddening, and the UV-to-SFR conversion, these numbers are only meant to be indicative. On the other hand, this very simple model is not in obvious disagreement with any of the observations, and is able, in particular, to provide a reasonable estimate of the galaxy optical and near-IR luminosity density.

## The stellar baryon budget

With the help of some simple stellar population synthesis tools it is possible at this stage to make an estimate of the stellar mass density that produced the integrated light observed today. The total bolometric luminosity of a simple stellar population (a single generation of coeval, chemically homogeneous stars) having mass $M$ can be well approximated by a power law in time for all ages $t\gtrsim 100$ Myr,
$$L(t)=1.3\,L_\odot\,\frac{M}{M_\odot}\left(\frac{t}{1\ \mathrm{Gyr}}\right)^{-0.8}$$ (1)
(cf. Bu95), where we have assumed solar metallicity and a Salpeter IMF truncated at 0.1 and 125 $M_\odot$. In a stellar system with arbitrary star formation rate per unit cosmological volume, $\dot{\rho}_\star$, the comoving bolometric emissivity at time $t$ is given by the convolution integral
$$\rho_{\mathrm{bol}}(t)=\int_0^t L(\tau)\,\dot{\rho}_\star(t-\tau)\,d\tau.$$ (2)
The total background light observed at Earth ($t=t_H$) is
$$I_{\mathrm{EBL}}=\frac{c}{4\pi}\int_0^{t_H}\frac{\rho_{\mathrm{bol}}(t)}{1+z}\,dt,$$ (3)
where the factor $(1+z)$ in the denominator accounts for the energy lost to cosmic expansion when converting from observed to radiated (comoving) luminosity density. From the above equations it is easy to derive, in an EdS cosmology,
$$I_{\mathrm{EBL}}\approx 740\ \mathrm{nW\ m^{-2}\ sr^{-1}}\,\frac{\dot{\rho}_\star}{\mathrm{M_\odot\ yr^{-1}\ Mpc^{-3}}}\left(\frac{t_H}{13\ \mathrm{Gyr}}\right)^{1.87}.$$ (4)
The observations shown in Figure 3 therefore imply a “fiducial” mean star formation density of $\dot{\rho}_\star=0.054\,I_{40}\ \mathrm{M_\odot\ yr^{-1}\ Mpc^{-3}}$. The total stellar mass density observed today is
$$\rho_\star(t_H)=(1-R)\int_0^{t_H}\dot{\rho}_\star(t)\,dt\approx 5\times 10^8\,I_{40}\ \mathrm{M_\odot\ Mpc^{-3}}$$ (5)
(corresponding to $\Omega_\star=0.007\,I_{40}$), where $R$ is the mass fraction of a generation of stars that is returned to the interstellar medium, $R\approx 0.3$ for a Salpeter IMF. The optical/FIR background therefore requires that about 10% of the nucleosynthetic baryons today Bur98 are in the form of stars and their remnants. The predicted stellar mass-to-blue light ratio is $M/L_B\approx 5$.
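A minimal numerical transcription of this arithmetic (a sketch only, assuming the constant star-formation-rate scaling of Eqs. 4–5 rather than the full convolution of Eqs. 1–3):

```python
# Sketch of the Eq. (4)-(5) arithmetic (EdS, t_H = 13 Gyr assumed):
# the 'fiducial' SFR density implied by the EBL, and the present-day
# stellar mass density for a constant star-formation history.
def sfr_from_ebl(I_ebl, t_H=13.0):
    """Msun/yr/Mpc^3 implied by an EBL of I_ebl nW/m^2/sr (Eq. 4)."""
    return I_ebl / (740.0 * (t_H / 13.0)**1.87)

def rho_star_today(sfr, t_H=13.0, R=0.3):
    """Msun/Mpc^3 after a Hubble time of constant SFR (Eq. 5)."""
    return (1.0 - R) * sfr * t_H * 1e9

sfr = sfr_from_ebl(40.0)      # reference EBL, i.e. I_40 = 1
print(sfr)                    # ~0.054 Msun/yr/Mpc^3
print(rho_star_today(sfr))    # ~5e8 Msun/Mpc^3, i.e. Omega_* ~ 0.007
```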
These values are quite sensitive to the lower-mass cutoff of the IMF, as very low mass stars can contribute significantly to the mass but not to the integrated light of the whole stellar population. A lower cutoff of 0.5 $\mathrm{M_\odot}$ instead of the 0.1 $\mathrm{M_\odot}$ adopted would decrease the mass-to-light ratio (and $\Omega_\star$) by a factor of 1.9 for a Salpeter function.

## Two extreme scenarios

Based on the agreement between the $z\approx 3$ and $z\approx 4$ luminosity functions at the bright end, it has recently been argued Ste98 that the decline in the luminosity density of faint HDF Lyman-break galaxies observed in the same redshift interval M96 may not be real, but simply due to sample variance in the HDF. When extinction corrections are applied, the emissivity per unit comoving volume due to star formation may then remain essentially flat for all redshifts $z\gtrsim 1$ (see Fig. 4). While this has obvious implications for hierarchical models of structure formation, the epoch of first light, and the reionization of the intergalactic medium (IGM), it is also interesting to speculate on the possibility of a constant star-formation density at all epochs $0\leq z\leq 5$. Figure 5 shows the time evolution of the blue and near-IR rest-frame luminosity density of a stellar population characterized by a Salpeter IMF, solar metallicity, and a (constant) star formation rate of $\dot{\rho}_\star=0.054\ \mathrm{M_\odot\ yr^{-1}\ Mpc^{-3}}$ (needed to produce the observed EBL). The predicted evolution appears to be a poor match to the observations: it overpredicts the local $B$ and $K$-band luminosity densities, and underpredicts the 1 $\mu$m emissivity at $z\approx 1$.

At the other extreme, we know from stellar population studies that about half of the present-day stars are contained in spheroidal systems, i.e. elliptical galaxies and spiral galaxy bulges, and that these stars formed early and rapidly. The expected rest-frame blue and near-IR emissivity of a simple stellar population with formation redshift $z_{\mathrm{on}}=5$ and total mass density equal to the mass in spheroids observed today is shown in Figure 5. In this model the near-IR and blue emissivities at $z=4$–5 are comparable with the values observed at $z=1$. HST-NICMOS deep observations may be able to test similar scenarios for the formation of elliptical galaxies at early times.

## The mass density in black holes

Recent dynamical evidence indicates that supermassive black holes reside at the center of most nearby galaxies. The available data (about 30 objects) show a strong correlation (but with a large scatter) between bulge and black hole mass Mag98, with $M_{\mathrm{bh}}=0.006\,M_{\mathrm{bulge}}$ as a best fit. The total mass density in spheroids today is $\Omega_{\mathrm{bulge}}=0.0036^{+0.0024}_{-0.0017}$ Fuk98, implying a mean mass density of dead quasars
$$\rho_{\mathrm{bh}}=1.5^{+1.0}_{-0.7}\times 10^6\ \mathrm{M_\odot\ Mpc^{-3}}.$$ (6)
Since the observed energy density from all quasars is equal to the emitted energy divided by the average quasar redshift Sol82, the total contribution to the EBL from accretion onto black holes is
$$I_{\mathrm{bh}}=\frac{c^3}{4\pi}\frac{\eta\,\rho_{\mathrm{bh}}}{\langle 1+z\rangle}\approx 18\ \mathrm{nW\ m^{-2}\ sr^{-1}}\ \eta_{0.1}\,\langle 1+z\rangle^{-1},$$ (7)
where $\eta_{0.1}$ is the efficiency for transforming accreted rest-mass energy into radiation (in units of 10%).
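As an order-of-magnitude check (a sketch in CGS units; the efficiency and mean quasar redshift below are illustrative assumptions, not values fixed by the text), Eq. (7) can be evaluated directly:

```python
# Sketch of the Eq. (7) estimate: EBL from past accretion onto the
# local black hole population, I_bh = (c^3/4pi) eta rho_bh / <1+z>.
import math

C    = 2.998e10              # speed of light [cm/s]
MSUN = 1.989e33              # solar mass [g]
MPC  = 3.086e24              # megaparsec [cm]

rho_bh = 1.5e6 * MSUN / MPC**3        # Eq. (6), in g/cm^3
eta, mean_1pz = 0.1, 3.0              # assumed: 10% efficiency, <z> ~ 2

I_cgs = C**3 / (4.0 * math.pi) * eta * rho_bh / mean_1pz  # erg/s/cm^2/sr
print(I_cgs * 1e6, "nW m^-2 sr^-1")   # erg/s/cm^2 -> nW/m^2 is a factor 1e6
```

For $\langle 1+z\rangle=3$ this returns several nW m$^{-2}$ sr$^{-1}$, consistent with the scaling of Eq. (7).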
Quasars at $z\lesssim 2$ could then make a significant contribution to the brightness of the night sky if dust-obscured accretion onto supermassive black holes is an efficient process Hae98, Fab.<sup>5</sup>

<sup>5</sup>It might be interesting to note in this context that a population of AGNs with strong intrinsic absorption (Type II quasars) is actually invoked in many current models for the X-ray background Mad94, Com95.

## The end of the ‘dark ages’

The epoch of reionization marked the end of the ‘dark ages’ during which the ever-fading primordial background radiation cooled below 3000 K and shifted first into the infrared and then into the radio. Darkness persisted until early structures collapsed and cooled, forming the first stars and quasars that lit the universe up again Rees98. The application of the Gunn-Peterson constraint on the amount of smoothly distributed neutral material along the line of sight to distant objects requires the hydrogen component of the diffuse IGM to have been highly ionized by $z\approx 5$ SSG, and the helium component by $z\approx 2.5$ DKZ. From QSO absorption studies we also know that neutral hydrogen at high $z$ accounts for only a small fraction, $\sim 10\%$, of the nucleosynthetic baryons LWT. A substantial population of dwarf galaxies having star formation rates $<0.3\ \mathrm{M_\odot\ yr^{-1}}$, and a space density in excess of that predicted by extrapolating the best-fit Schechter function to faint magnitudes, may be expected to form at early times in hierarchical clustering models, and has recently been proposed MR98, MHR as a possible candidate for photoionizing the IGM at these early epochs.

Establishing the character of cosmological ionizing sources is an efficient way to constrain competing models for structure formation in the universe, and to study the collapse and cooling of small mass objects at early epochs. The study of the candidate sources of ionization at $z=5$ can be simplified by noting that the breakthrough epoch (when all radiation sources can see each other in the hydrogen Lyman continuum) occurs much later in the universe than the overlap epoch (when individual ionized zones become simply connected and every point in space is exposed to ionizing radiation). This implies that at high redshifts the ionization equilibrium is actually determined by the instantaneous UV production rate MHR. The fact that the IGM is rather clumpy and still optically thick at overlapping, coupled to recent observations of a rapid decline in the space density of radio-loud quasars and of a large population of star-forming galaxies at $z\gtrsim 3$, has some interesting implications for rival ionization scenarios and for the star formation activity in the interval $3\lesssim z\lesssim 5$.

The existence of a decline in the space density of bright quasars at redshifts beyond $\sim 3$ was first suggested by O82, and has since then been the subject of a long-standing debate. In recent years, several optical surveys have consistently provided new evidence for a turnover in the QSO counts HS90, WHO, Sc95, KDC. The interpretation of the drop-off observed in optically selected samples is equivocal, however, because of the possible bias introduced by dust obscuration arising from intervening systems. Radio emission, on the other hand, is unaffected by dust, and it has recently been shown Sha that the space density of radio-loud quasars also decreases strongly for $z>3$.
This argues that the turnover is indeed real and that dust along the line of sight has a minimal effect on optically selected QSOs. In this case the QSO emission rate of hydrogen ionizing photons per unit comoving volume drops by a factor of 3 from $z=2.5$ to $z=5$, as shown in Figure 6.

Galaxies with ongoing star formation are another obvious source of Lyman-continuum photons. Since the rest-frame UV continuum at 1500 Å (redshifted into the visible band for a source at $z\approx 3$) is dominated by the same short-lived, massive stars which are responsible for the emission of photons shortward of the Lyman edge, the needed conversion factor, about one ionizing photon every 10 photons at 1500 Å, is fairly insensitive to the assumed IMF and independent of the galaxy history for $t\gg 10^7$ yr. Figure 6 (right) shows the estimated Lyman-continuum luminosity density of galaxies at $z\approx 3$. The data point assumes a value of $f_{\mathrm{esc}}=0.5$ for the unknown fraction of ionizing photons which escapes the galaxy H I layers into the intergalactic medium. One should note that, while highly reddened galaxies at high redshifts would be missed by the Lyman-break color technique (which isolates sources that have blue colors in the optical and a sharp drop in the rest-frame UV), it seems unlikely that very dusty objects (with $f_{\mathrm{esc}}\ll 1$) would contribute in any significant manner to the ionizing metagalactic flux.

## Reionization

When an isolated point source of ionizing radiation turns on in a neutral medium, the ionized volume initially grows in size at a rate fixed by the emission of UV photons, and an ionization front separating the H II and H I regions propagates into the neutral gas. Most photons travel freely in the ionized bubble, and are absorbed in a transition layer. The evolution of an expanding H II region is governed by the equation
$$\frac{dV_I}{dt}-3HV_I=\frac{\dot{N}_{\mathrm{ion}}}{\overline{n}_\mathrm{H}}-\frac{V_I}{\overline{t}_{\mathrm{rec}}},$$ (8)
where $V_I$ is the proper volume of the ionized zone, $\dot{N}_{\mathrm{ion}}$ is the number of ionizing photons emitted by the central source per unit time, $\overline{n}_\mathrm{H}$ is the mean hydrogen density of the expanding IGM, $H$ is the Hubble constant, and $\overline{t}_{\mathrm{rec}}$ is the hydrogen mean recombination timescale,
$$\overline{t}_{\mathrm{rec}}=[(1+2\chi)\,\overline{n}_\mathrm{H}\,\alpha_B\,C]^{-1}\approx 0.3\ \mathrm{Gyr}\left(\frac{\Omega_b h^2}{0.02}\right)^{-1}\left(\frac{1+z}{4}\right)^{-3}C_{30}^{-1}.$$ (9)
One should point out that the use of a volume-averaged clumping factor, $C\equiv\langle n_{\mathrm{HII}}^2\rangle/\overline{n}_{\mathrm{HII}}^2$, in the recombination timescale is only justified when the size of the H II region is large compared to the scale of the clumping, so that the effect of many clumps (filaments) within the ionized volume can be averaged over. The validity of this approximation can be tested by numerical simulations (see Figure 7). Across the I-front the degree of ionization changes sharply over a distance of the order of the mean free path of an ionizing photon. When $\overline{t}_{\mathrm{rec}}$ is much smaller than the Hubble time, the growth of the H II region is slowed down by recombinations in the highly inhomogeneous medium, and its evolution can be decoupled from the expansion of the universe.
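To make the behavior of Eq. (8) concrete, here is a toy integration for a steady source, neglecting cosmic expansion ($H=0$); the source rate, IGM density, and redshift chosen below are illustrative assumptions, not values from the text. In this limit the ionized volume simply relaxes, on a timescale $\overline{t}_{\mathrm{rec}}$, to the Strömgren-like value $V_S=\dot{N}_{\mathrm{ion}}\overline{t}_{\mathrm{rec}}/\overline{n}_\mathrm{H}$:

```python
# Toy Euler integration of Eq. (8) with H = 0 (assumed source/IGM values):
# dV/dt = Ndot/n_H - V/t_rec relaxes to V_S = Ndot * t_rec / n_H.
import math

ndot  = 1e57                  # ionizing photons/s (a bright QSO, assumed)
n_H   = 1.1e-5                # proper mean H density at z = 3 [cm^-3]
t_rec = 0.3 * 3.15e16         # s, from Eq. (9) at 1 + z = 4, C_30 = 1

dt, V = t_rec / 200.0, 0.0
for _ in range(2000):         # integrate for ~10 recombination times
    V += (ndot / n_H - V / t_rec) * dt

V_S = ndot * t_rec / n_H
R_S_Mpc = (3.0 * V_S / (4.0 * math.pi))**(1.0/3.0) / 3.086e24
print(V / V_S, R_S_Mpc)       # -> ~1 (saturation) and a ~20 Mpc radius
```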
In analogy with the individual H II region case, it can be shown that the hydrogen component in a highly inhomogeneous universe is completely reionized when the number of photons emitted above 1 ryd in one recombination time equals the mean number of hydrogen atoms. At any given epoch there is a critical value for the emission rate of ionizing photons per unit cosmological comoving volume,
$$\dot{\mathcal{N}}_{\mathrm{ion}}(z)=\frac{\overline{n}_\mathrm{H}(0)}{\overline{t}_{\mathrm{rec}}(z)}=(10^{51.2}\ \mathrm{s^{-1}\ Mpc^{-3}})\,C_{30}\left(\frac{1+z}{6}\right)^3\left(\frac{\Omega_b h^2}{0.02}\right)^2,$$ (10)
which is independent of the (unknown) previous emission history of the universe: only rates above this value will provide enough UV photons to ionize the IGM by that epoch. One can then compare our estimate of $\dot{\mathcal{N}}_{\mathrm{ion}}$ to the inferred contribution from QSOs and star-forming galaxies. The uncertainty on this critical rate is difficult to estimate, as it depends on the clumpiness of the IGM (scaled in the expression above to the value inferred at $z=5$ from numerical simulations GO97) and on the nucleosynthesis-constrained baryon density. The evolution of the critical rate as a function of redshift is plotted in Figure 6 (right). While $\dot{\mathcal{N}}_{\mathrm{ion}}$ is comparable to the quasar contribution at $z\gtrsim 3$, there is some indication of a deficit of Lyman-continuum photons at $z=5$. For bright, massive galaxies to produce enough UV radiation at $z=5$, their space density would have to be comparable to the one observed at $z\approx 3$, with most ionizing photons being able to escape freely from the regions of star formation into the IGM. This scenario may be in conflict with direct observations of local starbursts below the Lyman limit, showing that at most a few percent of the stellar ionizing radiation produced by these luminous sources actually escapes into the IGM Le95.<sup>6</sup>

<sup>6</sup>At $z=3$ Lyman-break galaxies radiate into the IGM more ionizing photons than QSOs if $f_{\mathrm{esc}}\gtrsim 30\%$.

It is interesting to convert the derived value of $\dot{\mathcal{N}}_{\mathrm{ion}}$ into a “minimum” star formation rate per unit (comoving) volume, $\dot{\rho}_\star$:
$$\dot{\rho}_\star(z)=\dot{\mathcal{N}}_{\mathrm{ion}}(z)\times 10^{-53.1}\,f_{\mathrm{esc}}^{-1}\approx 0.013\,f_{\mathrm{esc}}^{-1}\left(\frac{1+z}{6}\right)^3\ \mathrm{M_\odot\ yr^{-1}\ Mpc^{-3}}.$$ (11)
The star formation density given in the equation above is comparable with the value directly “observed” (i.e., uncorrected for dust reddening) at $z\approx 3$ M98. The conversion factor assumes a Salpeter IMF with solar metallicity, and has been computed using a population synthesis code BC98. It can be understood by noting that, for each 1 $M_\odot$ of stars formed, 8% goes into massive stars with $M>20\,M_\odot$ that dominate the Lyman-continuum luminosity of a stellar population. At the end of the C-burning phase, roughly half of the initial mass is converted into helium and carbon, with a mass fraction released as radiation of 0.007. About 25% of the energy radiated away goes into ionizing photons of mean energy 20 eV. For each 1 $M_\odot$ of stars formed every year, we then expect
$$\frac{0.08\times 0.5\times 0.007\times 0.25\times M_\odot c^2}{20\ \mathrm{eV}}\,\frac{1}{1\ \mathrm{yr}}\approx 10^{53}\ \mathrm{phot\ s^{-1}}$$ (12)
to be emitted shortward of 1 ryd.

## Conclusions

Recent studies of the volume-averaged history of stellar birth point to an era of intense star formation at $z\approx 1$–1.5.
The optical datasets imply that a fraction close to 65% of the present-day stars was produced at $z>1$, and only 25% at $z>2$. About half of the stars observed today would be more than 9 Gyr old, and only 10% would be younger than 5 Gyr.<sup>7</sup> There is no ‘single epoch of galaxy formation’: rather, it appears that galaxy formation is a gradual process. Numerous uncertainties remain, however, particularly the role played by dust in obscuring star-forming objects. Our first glimpse of the history of galaxies to $z\approx 4$ leads to the exciting question of what happened before. Substantial sources of ultraviolet photons must have been present at $z\gtrsim 5$ to keep the universe ionized, perhaps low-luminosity quasars HL98 or a first generation of stars in dark matter halos with virial temperature $T_{\mathrm{vir}}\approx 10^4$–$10^5$ K OG96, HL97. Early star formation provides a possible explanation for the widespread existence of heavy elements in the Ly$\alpha$ forest Cow95, while reionization by QSOs may produce a detectable signal in the radio extragalactic background at meter wavelengths Mad97. A detailed exploration of such territories must await projected facilities like the Next Generation Space Telescope and the Square Kilometer Radio Telescope.

<sup>7</sup>Unlike the measured number densities of objects and rates of star formation, the integrated stellar mass density does not depend on the assumed cosmological model.

Acknowledgments

Support for this work was provided by NASA through ATP grant NAG5–4236.
## 1 Introduction

The idea of planning a survey aimed especially at the detection of clusters of galaxies came long ago, following a suggestion by Riccardo Giacconi. At that time Riccardo, in collaboration with Chris Burrows and Richard Burg, was studying the possibility of designing X-ray optics with good resolution over a large field of view. This would optimize the time it would take to carry out a survey over a large solid angle. Once it had been demonstrated that the design was under control, we started to develop the technology to realize the prototype. This took some years and finally we succeeded. In these proceedings we will briefly outline the science goals and the general plan of the mission. Further details are given in the phase A study that will be submitted to the Italian Space Agency in November 1998 and in specialized papers published especially by the X-ray technology group headed by Oberto Citterio. At this meeting, however, we present for the first time the excellent results of the X-ray metrology carried out on the 60-cm shell prototype, the most difficult and critical shell to be made. This is a breakthrough in the field comparable, in all respects, to the result of the first Ritchey-Chretien optics for ground-based optical telescopes.

WAXS/WFXT is an excellent and unique survey mission with a strong Italian heritage. The ROSAT all sky survey is too shallow, and the ROSAT deep surveys have too small a solid angle. AXAF will not be dedicated to large surveys and does not have the field of view to discover a sizeable number of objects. The disadvantage of the XMM serendipitous survey is that it will be spread over thousands of pointings in different directions; this is not suitable for measuring the Large Scale Structure. ABRIXAS, while being a complement to the proposed mission, will not have adequate sensitivity and angular resolution for our science goals. (The rather limited resolution makes identification directly from the survey difficult and places a heavy demand on the telescope time needed for the optical follow-up.) WAXS will complement the above missions by accomplishing original science and by creating a unique catalogue for follow-up observations. Both large ground-based telescopes and space missions will make use of the WAXS source catalogues for many years to come. A comparison with the most important missions that are ready to fly is significant and illuminating. This is shown in Fig. I.1, where we plot the area of the survey versus the limiting sensitivity. The baseline of the mission includes an ultra-deep survey that almost reaches the sensitivity of the AXAF deep survey, but over an area more than twenty times larger. The confusion limit is, assuming we reach the optimum resolution as we expect, well below the XMM confusion limit.

The work I am describing below is the result of the creativity and dedication of many scientists and of Italian and foreign institutions. To them goes the merit of the content; I am, however, responsible for the form and for any inaccuracies in the text. The main contributors will be acknowledged at the end of the contribution. I would like from the start, however, to express my gratitude to the team of the Brera Observatory who, especially in this last year, worked with extreme dedication on this project. The collaboration of Steve Murray, Alan Wells and Cosimo Chiarelli went beyond duty and could be explained only as a result of true friendship and very deep interest in the project.
I will discuss part of the science goals in section II and illustrate the expected performance of the instrument in section III. In section IV I will give a brief summary of the mission planning. At the time of writing it is not yet known whether the mission will be approved. I hope to convince the reader that this mission is a unique opportunity to extend our understanding of the Structure and Origins of the Cosmos.

## 2 Science goals

The unique features of the X-ray sky make it possible to select groups, clusters, and AGN from X-ray images and to use these classes of objects to map the large scale structure of the Universe at high redshift ($z>1$). Compared to optical images, X-ray images are relatively sparse and dominated by the distant, extragalactic sources. A convincing example is given in Fig. II.1, where we reproduce a patch of sky from the second generation Digitised Sky Survey plates, $30^{\prime}\times 30^{\prime}$ in size and corresponding to a ROSAT-PSPC pointed observation (targeted at QSO1404+286). In this optical image there are 2176 objects brighter than the plate limit $m_B\approx 23$, about half of which are galaxies. Contours from the ROSAT X-ray image are overlaid on the optical picture. The X-ray data reveal two clusters, ‘A’ at $z=0.36$ and ‘C’ at $z=0.55$, and a group of galaxies, ‘B’, at $z=0.12$. About 20 of the 26 detected sources are distant AGNs.

The design of the WAXS/WFXT mission is indeed based on the scientific goal of detecting clusters of galaxies at high redshifts. Optical surveys are extremely limited, even if complementary, in detecting clusters. This is because for distant clusters optical surveys generally detect only the very tip of the luminosity function, that is, the brightest galaxies. The galaxy background, however, increases tremendously with distance, so that the cluster galaxies are confused with the background and clusters are difficult to detect even at $z\approx 0.8$–1.0. In the X-ray band, clusters of galaxies appear as extended objects, and an angular resolution of 15 arcseconds over a large field of view is sufficient to separate clusters from point sources at large redshift. In any cosmology the minimum angular diameter of a cluster is about 30 arcseconds, occurring at $z\approx 1.25$. Finally, it is important to detect clusters at various redshifts and over a very large area. This is necessary to obtain a statistically significant number of bright clusters, which are very rare, and to study their evolution and clustering properties. Thus, the primary mission requirement is to conduct two surveys: a 900 square degree shallow survey to detect the brightest clusters over a large solid angle of contiguous area, and a 100 square degree deep survey to probe more deeply into the cluster X-ray luminosity function to study evolution. In Fig. II.2 we compare explicitly the effective sky coverages as a function of the X-ray flux limit for the three most representative present surveys and the WAXS Shallow and Deep surveys. Note how in both cases the WAXS surveys represent a step of an order of magnitude with respect to existing surveys. Note also that these flux limits are conservative, as they have been based on the requirement of collecting 50–100 counts from a cluster at the limit. In practice, we can very probably push our detection limit down by a factor of two in both surveys. Based on the experience gained with the RDCS survey (Rosati et al. 1998), and the similar survey by Vikhlinin et al.
(1998b), we define as “typical” a cluster with an extension of 1 arcmin radius. This represents the median angular size (roughly twice the 50% power radius) of the clusters in the RDCS sample after de-convolving the ROSAT-PSPC PSF. In Fig. II.3 we plot the limiting flux as a function of the exposure time, for a signal-to-noise ratio $(S/N)=5$ and for a source with an extension of 1 arcmin radius. For a cluster described by a Raymond-Smith thermal model, with $kT=5$ keV and 0.3 solar abundance (filtered by a galactic absorbing column density equal to $3\times 10^{20}$ cm$^{-2}$), the conversion factor is 1 cts s$^{-1}$ $=1.2\times 10^{-11}$ erg s$^{-1}$ cm$^{-2}$ (between 0.5–2.0 keV), which is accurate to within 10% for clusters with temperatures between 2 and 10 keV. We considered an instrumental and cosmic background of $10^{-3}$ cts s$^{-1}$ arcmin$^{-2}$, which is about two times the diffuse X-ray background in the WFXT energy band and should provide a conservative upper limit. In Fig. II.3 we also show the S/N sensitivity curve in the case of a total background two times lower, and the $S/N=10$ and $S/N=50$ sensitivity curves. If we take an exposure time of $>10^5$ s for the deep area (100 sq.deg.) and $>10^4$ s for the shallow area (900 sq.deg.), from Fig. II.3 we can estimate a limiting flux (0.5–2.0 keV) for our “typical cluster” of $10^{-14}$ erg cm$^{-2}$ s$^{-1}$ and $5\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ for the deep and shallow area, respectively. These exposure times ensure that, at the faintest fluxes here considered, at least 50 to 100 net counts will be accumulated from a typical cluster. Based on the RDCS survey, the corresponding signal-to-noise is enough to discriminate between a point-like and an extended source, thus allowing a robust list of cluster candidates to be defined by using X-ray data alone. This clearly implies that we shall also be able to detect clusters at fainter fluxes.

A very important cosmological probe is provided by the evolution of clusters. The main reason for this is that clusters correspond to the high peaks of the primordial density field (e.g. Kaiser 1984), so that their abundance (i.e. the number of clusters within a given mass range) is highly sensitive to the details of the underlying mass density field. The typical mass of rich clusters ($\approx 10^{15}\,M_\odot$) is close to the average mass within a sphere of 16 Mpc radius in an unperturbed Universe, so that the local ($z<0.2$) abundance of clusters is expected to place a constraint on the r.m.s. fluctuations on the same scale, what is called $\sigma_8$ (with lengths normally expressed using $H_0=100$ km s$^{-1}$ Mpc$^{-1}$). This is basically a measure of the normalization of the power spectrum, and the Press-Schechter (1974) theory easily shows that the cluster abundance is highly sensitive to $\sigma_8$ for a given value of the density parameter $\Omega_0$ (see Borgani et al. 1998 for details). At the same time, once the local abundance of clusters (i.e. $\sigma_8$) is fixed, its evolution back in time will mainly depend on $\Omega_0$. Therefore, if we are able to trace the cluster abundance to high redshifts in a reliable way, we shall directly constrain the value of the cosmological density parameter.
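As a toy illustration of this argument (a sketch, not the survey's actual modeling: the power-law form of $\sigma(M)$, its slope, and the EdS growth factor $D\propto(1+z)^{-1}$ are simplifying assumptions), a Press-Schechter estimate shows both the steep dependence of the local abundance on $\sigma_8$ and the strong evolution expected when $\Omega_0=1$:

```python
# Toy Press-Schechter abundance of massive clusters (EdS, h = 1 units):
# dn/dlnM = sqrt(2/pi) (rho_bar/M) nu |dln(sigma)/dlnM| exp(-nu^2/2),
# with nu = delta_c / sigma(M, z) and a power-law sigma(M) assumed here.
import math

DELTA_C = 1.686          # spherical-collapse threshold
RHO_BAR = 2.78e11        # mean density [Msun/Mpc^3] for Omega_0 = 1
M8      = 6.0e14         # mass [Msun] in an 8 Mpc sphere (sets sigma_8 scale)

def dn_dlnM(M, z=0.0, sigma8=0.6, alpha=0.3):
    """Comoving abundance per ln M [Mpc^-3]; D(z) = 1/(1+z) in EdS."""
    sigma = sigma8 * (M / M8)**(-alpha) / (1.0 + z)
    nu = DELTA_C / sigma
    return math.sqrt(2.0 / math.pi) * RHO_BAR / M * alpha * nu \
        * math.exp(-0.5 * nu * nu)

for s8 in (0.5, 0.6, 0.7):
    print("z=0, sigma8=%.1f:" % s8, dn_dlnM(1e15, sigma8=s8))
for z in (0.0, 0.5, 1.0):
    print("z=%.1f:" % z, dn_dlnM(1e15, z=z))  # steep drop with z for EdS
```

In a low-density universe the growth factor saturates at late times, so the predicted drop with redshift is much milder; this is why the high-$z$ cluster abundance discriminates between values of $\Omega_0$.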
The problem is that we cannot observe directly the abundance of clusters within a defined mass range, as required by the theory, and we have to resort to some kind of observable which can be connected as closely as possible to mass. Cluster masses can be measured through galaxy velocity dispersions, but this is very time-consuming and for the moment limited to local samples. Weak-lensing maps are also a promising technique, but again it is difficult to collect systematic observations for large samples. In this respect, X-ray selected clusters offer the best opportunity, as they have measurable properties that can be linked to mass in a more direct way than optically selected systems. The best parameter would be the X-ray temperature, which offers the most direct route to mass. Although the situation is improving (Henry 1997), X-ray temperatures are still not available in a systematic way for large samples of clusters. An easier way is to use X-ray luminosities. Considerable efforts have therefore concentrated in the last few years on trying to detect signs of evolution with redshift in the X-ray luminosity function (XLF), to be then related to the mass function through the luminosity-temperature relation. After the pioneering results from the EMSS survey (Gioia et al. 1990), in the last couple of years there has been a major burst of work tackling this problem, all based on serendipitous searches for clusters over deep ROSAT pointed observations from the public archive (Rosati et al. 1998, Vikhlinin et al. 1998b; see Rosati 1998 for a review). These studies, which were able to reach fluxes as faint as $2\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$, have shown that there is practically no evolution between $z=0$ and $z=0.8$ in the abundance of clusters of moderate luminosity ($L<L_\ast\approx 4\times 10^{44}$ erg s$^{-1}$), while there is a hint of evolution in the abundance of very luminous systems. Vikhlinin et al. (private comm.), in fact, find no luminous cluster in their distant redshift bin ($z>0.5$), while 9 would be predicted on the basis of the local XLF. Fig. II.4 shows a comparison of the XLF in local and distant samples. Note how in this figure, prior to the latest Vikhlinin et al. result, the only evidence for evolution is the $2\sigma$ deficiency in the EMSS XLF at high $z$ (starred symbols). These serendipitous surveys are in general limited by the difficult compromise between depth (i.e. flux limit) and sky coverage, as we further show in Fig. II.5 for the RDCS: the data cover different ranges of luminosity at different redshifts and direct comparison is difficult. The only way to improve significantly on these measures of evolution is to enlarge the covered area at similar fluxes, and to go fainter on comparable areas. This is exactly what the Shallow and Deep WAXS surveys will do. Determining the existence of luminous clusters at $z>1$, providing a robust measure of the XLF at different redshifts and establishing firmly the evolution of clusters of galaxies is a primary goal of the mission.

The full power of the survey concerning the statistics of large-scale structure will be exploited through the measurements of redshifts for WAXS groups and clusters. While “local” ($z<0.2$) surveys of rich clusters, such as the REFLEX survey, give a coarse view of the large-scale distribution of matter, the WAXS survey will be unique in producing a large sample of groups at $z<0.1$, which will give a more detailed description of local large-scale structure.
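For reference, the flux-limit estimate behind Fig. II.3 above, which sets the size of these cluster and group samples, can be reproduced with a few lines assuming pure source-plus-background Poisson statistics inside a 1-arcmin-radius extraction region (a sketch only; a real detection also involves the PSF, vignetting, and the source-detection algorithm):

```python
# Sketch of the S/N = 5 limiting flux for a 1'-radius cluster, using the
# background and counts-to-flux conversion quoted in the text (Fig. II.3).
import math

BKG  = 1.0e-3             # cts/s/arcmin^2, instrumental + cosmic
AREA = math.pi * 1.0**2   # arcmin^2, extraction region of 1' radius
CONV = 1.2e-11            # (erg/cm^2/s) per (cts/s), 0.5-2 keV, kT = 5 keV

def flux_limit(t_exp, sn=5.0):
    """Solve S/N = s/sqrt(s + b) for the source counts s, then convert."""
    b = BKG * AREA * t_exp
    s = 0.5 * (sn**2 + math.sqrt(sn**4 + 4.0 * sn**2 * b))
    return s / t_exp * CONV

print(flux_limit(1e4))    # ~5e-14 erg/cm^2/s (shallow survey)
print(flux_limit(1e5))    # ~1e-14 erg/cm^2/s (deep survey, ~100 counts)
```

With these inputs the $10^4$ s and $10^5$ s exposures return $\approx 5\times 10^{-14}$ and $\approx 1.2\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$, with of order 40–100 source counts, matching the numbers quoted above.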
The full power of the survey concerning the statistics of large-scale structure will be exploited through the measurement of redshifts for WAXS groups and clusters. While “local” (z<0.2) surveys of rich clusters, such as the REFLEX survey, give a coarse view of the large-scale distribution of matter, the WAXS survey will be unique in producing a large sample of groups at z<0.1, which will give a more detailed description of the local large-scale structure. Even more importantly, this will be based on objects selected through X-ray emission, a clean tracer of mass. The Shallow survey will detect nearly 500 groups within z<0.1. A direct example of the power of using X-ray selected clusters for studying the large-scale structure of the Universe has recently been provided by the ROSAT-ESO Flux Limited X-ray cluster survey (REFLEX, Guzzo et al. 1995, Boehringer et al. 1998). While the above mentioned ROSAT deep surveys are all based on searches for serendipitous cluster sources in archival pointed observations (see Rosati 1998 for a review), the REFLEX survey is a wide-angle project based on the ROSAT All-Sky Survey (RASS), which has a flux limit about two orders of magnitude brighter (1×10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-2</sup> for clusters). The preliminary results from REFLEX represent a very good example of the effectiveness of X-ray selected clusters as tracers of large-scale structure. In Fig. II.6 we plot the estimate of the power spectrum from a preliminary version of the REFLEX data. This is computed using only 230 clusters with z<0.1, while the whole survey is going to contain 475 clusters with z<0.3 and flux brighter than 3×10<sup>-12</sup> erg s<sup>-1</sup> cm<sup>-2</sup>, over an area of 4.24 sr. The WAXS surveys will be able both to increase the detail of this measurement in the local (z<0.2) Universe, by using X-ray selected groups and poor clusters, and, most importantly, to perform the same measurement as a function of redshift, out to z∼1, using X-ray luminous rich clusters. This will represent an unprecedented probe of the evolution of the large-scale structure of the Universe, constraining the cosmological parameters. Very little is known about the physics of the intercluster medium in superclusters, primarily because these systems are rare (only 3 superclusters are known with masses comparable to Shapley 8) and because they are very hard to map due to their large angular size. The possible detection of a significant number of rich superclusters in the WAXS surveys would provide a unique opportunity for their detailed study. A 30 Mpc diameter supercluster will subtend approximately 1 deg at z=0.5. With the large contiguous survey area of the WAXS surveys, available only to this specific mission, we can readily map the X-ray background and provide strong limits on any diffuse emission within the core of such a supercluster. A very conservative estimate of the minimum detectable surface brightness enhancement is 30% of the X-ray background brightness around 1 keV. For a 0.5-1 keV plasma filling a 10 Mpc supercluster, this corresponds to a central density of 1-2×10<sup>-5</sup> cm<sup>-3</sup>. The diffuse intercluster gas in these superstructures can be detected in the WAXS surveys out to substantial redshifts. According to our visibility simulations, the diffuse gas in a Shapley-like supercluster would be detected out to z≈0.5 in the Deep survey and out to z≈0.2 in the Shallow survey. While other X-ray missions (e.g., XMM) may detect comparable numbers of clusters and superclusters, they will be randomly spread over the sky, and the primary science objective – studying large scale structure – is feasible only with the large contiguous surveys proposed here.
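For orientation, the ∼1 deg scale quoted above for a 30 Mpc supercluster at z=0.5 follows from elementary geometry. The sketch below assumes an Einstein-de Sitter universe with H<sub>0</sub>=50 km s<sup>-1</sup> Mpc<sup>-1</sup>; both assumptions are ours, chosen because they reproduce the quoted figure.

```python
import math

C_OVER_H0 = 2.998e5 / 50.0   # Hubble distance in Mpc for H0 = 50 km/s/Mpc

def theta_deg(size_mpc, z):
    """Angular size in an Einstein-de Sitter universe, where the angular
    diameter distance is d_A = 2(c/H0)[1 - 1/sqrt(1+z)]/(1+z)."""
    d_a = 2.0 * C_OVER_H0 * (1.0 - 1.0 / math.sqrt(1.0 + z)) / (1.0 + z)
    return math.degrees(size_mpc / d_a)

print(f"30 Mpc at z=0.5: {theta_deg(30.0, 0.5):.1f} deg")  # ~1.2 deg
print(f"30 Mpc at z=0.2: {theta_deg(30.0, 0.2):.1f} deg")  # ~2.0 deg
```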
A sensitive X-ray survey like WAXS should also be able to map directly the warm/hot gas present between clusters, trapped inside the potential wells of filaments and superstructures. In fact, at redshifts below 0.5 even the diffuse gas filling the deepest parts of the supercluster and filament potential wells is starting to be detected in X-rays (see, e.g., Wang et al. 1997, Connolly et al. 1996). This makes it possible to map directly the large scale distribution of the warm/hot baryons, the dominant baryonic component of the matter in the Universe (Cen & Ostriker 1998). The volume fraction of the warm/hot gas is only a few percent, while the relative mass fraction is up to ∼50% at z>0.5 (see Fig. 2 in Cen & Ostriker 1998), indicating that the filamentary Cosmic Web is the repository of such abundant baryonic material. At the same redshifts the overall emission of the warm/hot gas with kT>0.5 keV is just a factor ∼3 below the overall emission of the hot gas in rich clusters (see Fig. 4 in Cen & Ostriker), indicating that such warm/hot gas is a major contributor to the soft (≲1 keV) diffuse X-ray background. Also, Colberg et al. (1998) showed that clusters accrete matter from a few preferred directions, defined by the filamentary structures, and that the accretion persists over cosmologically long times. A spectacular example is shown in Fig. II.7. The observational study of AGNs, along with the theoretical study of their formation, is another avenue toward a better understanding of the origin of structure. AGNs dominate the deep X-ray images, comprising approximately 80% of all the sources (Hasinger et al. 1998; Schmidt et al. 1998) at high galactic latitude. Once the clusters (and the bright stars) have been identified, one can reasonably assume that most of the remaining sources are AGNs. In the present baseline for the two high latitude WAXS surveys (100 sq.deg. at S<sub>X</sub>>4×10<sup>-15</sup> erg s<sup>-1</sup> cm<sup>-2</sup> and 900 sq.deg. at S<sub>X</sub>>3×10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup>, at 5σ) we expect to detect 30,000-40,000 AGNs, approximately equally divided between the two surveys. These numbers will increase by about 50% if the detection limit is pushed down to 4σ. While most of these AGNs will be the “classical” broad-line quasars, a non-negligible fraction is expected to consist of absorbed AGNs, with N<sub>H</sub>>10<sup>21</sup> cm<sup>-2</sup> (Comastri et al. 1995). At the 5σ limiting fluxes quoted above, we will detect ∼15 and ∼200 AGNs per square degree in the Shallow and Deep surveys, respectively. The surface density of AGNs in the Deep survey is significantly higher than those which will be obtained in the forthcoming optical surveys on large areas. For example, the Sloan Digital Sky Survey (SDSS) will measure redshifts of ∼10<sup>5</sup> QSOs in 10<sup>4</sup> sq.deg. (Margon 1998), while the expected surface density of AGNs in the 2dF sample is 30-40 per sq.deg. It should be noted that the optical surveys will select magnitude-limited samples of AGNs with colors as the primary selection criterion. Since in the optical band AGNs are a small fraction of the total number of objects, the statistical uncertainties in the colors at faint magnitudes and the difficulty of separating point-like objects from the much more numerous faint, extended galaxies make it very difficult to assess the level of completeness of faint optically selected samples. From this point of view, the X-ray selection is highly superior because a very large fraction (>70%) of X-ray sources are known to be AGNs.
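Taking the 5σ surface densities quoted above at face value, the expected AGN totals follow directly; the logN-logS slope printed below is our own back-of-the-envelope inference from those two numbers, not a fitted source-count relation.

```python
import math

# 5-sigma surface densities quoted above (per sq.deg at the flux limit):
S_SHALLOW, N_SHALLOW = 3e-14, 15.0    # 900 sq.deg survey
S_DEEP,    N_DEEP    = 4e-15, 200.0   # 100 sq.deg survey

# Effective slope of N(>S) ~ S^(-alpha) implied by the two densities:
alpha = math.log(N_DEEP / N_SHALLOW) / math.log(S_SHALLOW / S_DEEP)
print(f"implied log N - log S slope: alpha = {alpha:.2f}")

total = 900.0 * N_SHALLOW + 100.0 * N_DEEP
print(f"expected AGNs: {900.0 * N_SHALLOW:.0f} (shallow) + "
      f"{100.0 * N_DEEP:.0f} (deep) = {total:.0f}")
# -> 13500 + 20000 = 33500, i.e. within the 30,000-40,000 quoted above
#    and roughly equally divided between the two surveys.
```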
The comoving spatial density of AGNs detected in the WAXS surveys will be ∼10<sup>-5</sup> Mpc<sup>-3</sup>, i.e. comparable to that of galaxy groups at low redshift. Thus, AGNs are promising tracers of the large scale structure, although they cannot substitute for clusters in a quantitative measurement of the matter power spectrum. The physical cause of AGN activity is not yet well understood, and therefore AGNs can be arbitrarily biased (or anti-biased) with respect to the total matter distribution. Nevertheless, AGNs should be suitable for qualitatively mapping the large scale structure. For this purpose, the high spatial density of X-ray selected QSOs and our square survey geometry are highly desirable. The evolution with redshift of the clustering strength is much more controversial, with contradictory results obtained by different groups using samples of ∼1000 quasars spanning the entire redshift range (see, for example, Andreani & Cristiani 1992). These results will soon be improved by the analysis of the forthcoming 2dF quasar sample, which ultimately will contain about 30,000 quasars with m<sub>B</sub><21.0 in 750 sq.deg. Even this sample, however, will not provide much information for z>2.5, because of the relatively bright limiting magnitude and, therefore, the relatively small number of objects at such high redshifts. Conversely, WAXS will allow the detection of a large number of quasars at z>2.5. A precise estimate of how many such quasars will be detected is highly uncertain, because very little is known about the X-ray luminosity function (XLF) at these redshifts. While most of the AGNs will be in the range 0.5<z<2.5, more than 2,000 quasars (most of them in the Deep survey) are expected to be detected at z>2.5. More pessimistic models, with a decreasing comoving density beyond z=2.5, still predict about 1,000 such objects. These estimates are consistent with the available optical identifications in the deep ROSAT survey in the Lockman Hole (Schmidt et al. 1998). Note that in this survey, covering only ∼0.2 sq.deg., the highest-redshift X-ray selected quasar (z=4.45; Schneider et al. 1998) was detected, with a flux higher than the limit of the WAXS Deep survey. WAXS is an excellent and unique mission for such a project. The ROSAT all-sky survey is too shallow and the ROSAT deep surveys have too small solid angles. Serendipitous surveys with AXAF and XMM will eventually cover a large area at a limiting flux similar to that of WAXS. The disadvantage of a serendipitous survey, however, is that it will be spread over thousands of pointings in different directions, which complicates the optical follow-up and makes the detailed study of the AGN spatial distribution impossible. Rapid and large amplitude variability is common among AGNs, both for radio-quiet Seyfert galaxies and for radio-loud objects, and it is the main defining property of blazars. For the Shallow survey, each field of view will be observed in a single passage for approximately 10<sup>4</sup> seconds, while for the small area Deep survey each field will be observed ∼10 times, within a total period of a few months. In this area we will therefore be able to detect variability on both short timescales (of the order of hours) and long timescales (from a few weeks to a few months). The latter timescales are particularly interesting, since they have not yet been well sampled, except for a few selected sources. Time variability studies on these scales will be impossible with other missions (with the exception of a small number of “famous” sources).
The number of AGNs subject to this variability analysis is huge (essentially, the brightest 30% of all objects), allowing us to define for the first time the variability properties of all classes of AGNs. We will also include in the mission plan a Galactic Plane Survey (GPS), to better understand the structure of the Milky Way and its X-ray properties. Given the strong dependence of coronal activity on rotation and age, any flux limited X-ray survey will preferentially detect active stars, which are observable to larger distances than low-activity ones. This explains why flux limited surveys carried out with Einstein, EXOSAT and ROSAT have typically shown a large fraction of active stars, either young stars or RS CVn binaries (Favata et al. 1993, 1995; Tagliaferri et al. 1994; note that for RS CVn binaries the high rotation and high coronal activity are due to tidal interaction rather than young age). X-ray observations thus provide a unique way to investigate the distribution of active stars, and in particular of young stars, in the Galaxy up to distances of a few kpc. There are several factors that are expected to influence the distribution of X-ray active stars in the solar neighborhood, all of which are still poorly understood. A first dominant component is expected to arise from the structure of the galactic disk, with young stars strongly concentrated toward the galactic plane and rapidly decreasing in number at higher galactic latitudes. To first approximation, this distribution should be only weakly dependent on galactic longitude, provided the sampled volume remains sufficiently close to the Sun (up to a few hundred parsecs, as in most X-ray surveys carried out so far). However, as soon as we increase the sensitivity, we will start exploring larger and larger volumes, and the radial distribution of stars in the disk (e.g. the spiral arm structure) will become increasingly important. For sufficiently high sensitivity, a clear asymmetry between the directions of the galactic center and anticenter should become readily apparent. Even at limited sensitivity, the distribution of young active stars should be markedly different at different galactic latitudes, owing to the finite scale height (∼100 pc) of their density perpendicular to the galactic plane. A cross-correlation of the RASS survey (at a flux limit of 2×10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>) with the Tycho catalogue (which is complete down to m<sub>V</sub>=10.5) has recently revealed additional structure in the spatial distribution of X-ray active stars, besides the general decrease with galactic latitude (Guillout et al. 1998a, b; see Fig. II.8). This density enhancement, which is particularly prominent in the third and fourth quadrants of galactic longitude (i.e. between l=180 deg and l=360 deg), is in very good agreement with the expected position of the so-called Gould Belt (GB). This is a large-scale ring structure of recent star formation which had previously been identified on the basis of the spatial distribution of OB associations, and which appears to be inclined by about 20 deg to the galactic plane. Fig. II.9 shows a model prediction at a sensitivity a factor of 4 higher and for stars down to m<sub>V</sub>=15 (simulation courtesy of P. Guillout); the galactic plane structure and the GB are now detectable much more clearly.
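A minimal sketch of the disk component described above: a uniform mid-plane population with an exponential vertical profile, counted out to a fixed flux-limited horizon. The scale height and horizon are illustrative round numbers of ours, and spiral arms and the Gould Belt are deliberately ignored.

```python
import numpy as np

def relative_counts(b_deg, h=100.0, d_max=500.0, n=4000):
    """Relative number of flux-detectable disk stars toward latitude b.

    Assumptions (ours): uniform mid-plane density with vertical profile
    exp(-|z|/h), h ~ 100 pc as quoted for young stars; a fixed detection
    horizon d_max (pc) standing in for the flux limit; Sun in the plane.
    """
    b = np.radians(abs(b_deg))
    d = np.linspace(0.0, d_max, n)
    integrand = d**2 * np.exp(-d * np.sin(b) / h)  # volume element x density
    return float(np.sum(integrand) * (d[1] - d[0]))

norm = relative_counts(0.0)
for b in (0, 10, 30, 60, 90):
    print(f"b = {b:2d} deg: {relative_counts(b) / norm:.2f}")
# Counts drop steeply away from the plane (to a few percent at the pole),
# reproducing the strong latitude dependence described above.
```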
For a few selected regions at low galactic latitudes in Cygnus and in Taurus, covering 64.5 and 70 sq.deg respectively, a complete optical identification program of the RASS sources has been carried out (Motch et al. 1997), showing that ∼85% of the RASS sources at low galactic latitudes are indeed stars. A GPS at a sensitivity of 1-2×10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup> has been carried out (cf. Morley et al. 1996, Pye et al. 1997, Sciortino et al. 1998) using a number of individual PSPC pointings in the range of galactic longitudes from l=180 deg to 270 deg and at very low galactic latitudes (|b|<0.3 deg). The sensitivity is much higher than that of the RASS, but the total survey area is only 2.5 sq.deg, which makes the results statistically uncertain and dependent on local fluctuations. With a scan rate of 0.3 arcsec/s and a FOV of 1 deg × 1 deg, current estimates of the WFXT sensitivity indicate that a limiting sensitivity of 2×10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup> (appropriate for moderately absorbed, N<sub>H</sub><10<sup>21</sup> cm<sup>-2</sup>, thermal sources with a temperature of 10<sup>7</sup> K) in the spectral band 0.35 - 8 keV can be reached for point sources with a single scan (i.e. for an exposure time of 10 ksec). With the goal of surveying ∼500 sq.deg. at this limiting sensitivity, and assuming a 15% overlap between adjacent strips, we estimate an overall observing time of ∼7×10<sup>6</sup> s, equivalent, for a 65% average observing efficiency, to ∼4 months of elapsed time to be devoted by WFXT to the GPS. Hence, at the same limiting sensitivity, WFXT will cover an area 200 times larger than the ROSAT GPS based on pointed observations. With respect to the areas studied in detail by Motch et al. (1997) in Cygnus and Taurus, the improvement in area coverage by WFXT will be only a factor of 3.7, but the sensitivity will be an order of magnitude higher, making accessible a volume 100 times larger than with the RASS. In all these cases, the step forward with respect to previous observations allowed by the proposed WFXT GPS will be enormous.
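The GPS observing-time budget above can be checked with straightforward arithmetic. The only extra assumption in the sketch below (ours) is that the 15% overlap applies along both axes of the survey grid, which is what reproduces the quoted ∼7×10<sup>6</sup> s.

```python
# Assumed inputs, all taken from the figures quoted in the text:
area = 500.0          # sq.deg to be covered
fov = 1.0             # deg, field-of-view side
overlap = 0.15        # overlap between adjacent fields (applied on both axes)
t_field = 1.0e4       # s per field, i.e. one scan
efficiency = 0.65     # average observing efficiency

n_fields = area / (fov * (1.0 - overlap)) ** 2
t_observe = n_fields * t_field            # total on-source time, s
t_elapsed = t_observe / efficiency        # wall-clock time, s

print(f"{n_fields:.0f} fields, {t_observe:.1e} s observing, "
      f"{t_elapsed / 86400 / 30:.1f} months elapsed")
# -> ~692 fields, ~6.9e6 s observing, ~4.1 months elapsed,
#    matching the ~7e6 s and ~4 month figures quoted above.
```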
An ultra-deep survey of a few square degrees has been included in the mission plan, to detect the faintest possible sources before source confusion becomes limiting. This survey requires that we reach the goal of better than 15 arcsec angular resolution for the optics. Should this not be the case, the ultra-deep survey will not be made, and we will instead extend the area of the shallow and deep surveys. We have simulated a very deep WFXT image using the BeppoSAX simulator, assuming a Half Energy Width of 12 arcsec over the whole field of view (1 deg), and to this we have applied the Wavelet Transform algorithm already developed at OAB. At 10<sup>6</sup> seconds integration time (see Fig. II.10) we are getting close to the confusion limit. However, we still see the sources quite well. In Fig. II.11 we show the comparison between the input sources and those detected by the algorithm. Although we did not spend too much time refining the algorithm for the WAXS case, the agreement is excellent. We can recover all sources down to a flux limit of ∼6×10<sup>-16</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. Moreover, our simulations have also shown that we are able to distinguish accurately between point-like and extended objects for sources with only 50 counts.

## 3 The Instrument

The configuration of the satellite, shown in Fig. III.1, is the result of trade-off studies and of the detailed configuration design and analysis. The WFXT is an assembly of several structural and functional elements; in the following we describe the mirror module and illustrate the overall response. The Mirror Support Adapter (MSA) is the outermost element; its main function is to support the WFXT, connecting the telescope to the satellite structure. The next element, moving towards the core, is the case: it connects the top of the MSA to the front spider. The front spider is the element supporting the mirror shells. In the front spider, two circular “C”-shaped rings are connected by 16 radial spokes. The spokes have a rectangular cross section; moving from the inner ring to the outer ring, their height decreases linearly while their width increases. Underneath the front spider are placed two flat masks (the X-ray pre-collimator). They are connected to the case through a circular “C”-shaped ring, the X-ray pre-collimator support. On the other side, the front spider supports the 25 mirror shells (MSs), which are attached to it with glue. The 25 MSs are concentrically disposed and are divided into two groups: the outermost one, formed by MSs #1 to #9, and the innermost one, formed by MSs #10 to #25. The front spider also supports a cylindrical element: the fiducial light mechanical interface. The MSA top is connected to the rear spider. The rear spider is an assembly of two circular rings of rectangular cross section, connected by 16 spokes. The main function of these spokes is to support the thermal baffles and the electron diverter. The thermal baffles are three concentric cylindrical shells. The final mirror module consists of the MSs, the thermal pre- and post-collimators and an X-ray pre-collimator (see Fig. III.2). In the Phase A study we considered in detail three models for the WFXT Mirror Module, based on different technologies:

- Mirror Module with Nickel Mirror Shells (Model A);
- Mirror Module with SiC Mirror Shells (Model B);
- Mirror Module with Al2O3 Mirror Shells (Model C).

All the models have the same interfaces with respect to the tube of the satellite and the fiducial light system. The design of each model has been done taking into account:

- the optical specifications;
- the interface requirements;
- the experience that Media Lario has accumulated as the party responsible for the mirror module design and manufacturing of other X-ray astronomy projects (JET-X, XMM).

For each model we built a Finite Element Model (FEM) and performed a thermo-structural analysis (including the impact on the optical performance of the Mirror Module). A prototype SiC carrier has been manufactured by Morton International (USA), and C. Zeiss (D) has made the replica. This corresponds to the largest mirror shell of the WFXT telescope. This shell has been tested at the X-ray PANTER facility. Measurements were carried out with the PSPC and CCD detectors at 0.5 and 1.5 keV. The results of these tests are very encouraging: we have obtained HEW values of about 15 arcsec (Fig. III.3). The breakthrough is that we have fully demonstrated that, by adopting the polynomial design, it is possible to have an almost constant HEW over a large field of view (±30 arcmin). This is an outstanding result when compared with the performance of, e.g., the JET-X Wolter I design. We point out, however, that the HEW values are higher than what we would expect from the design and manufacturing errors.
We have identified a variation of the epoxy thickness at the front and back entrance of the mirror shell, which has caused a variation in the mirror profile and in turn the image blur. This problem can easily be solved, and a new mirror shell is being manufactured to confirm our analysis. In particular, extra- and infra-focal PSPC images show that the image degradation took place in a limited azimuth sector of the mirror shell. By extrapolating the performance of the azimuth portion of the mirror not degraded by the epoxy thickness variation, we are able to derive an approximate value of the best HEW that can be obtained once this problem is solved. The procedure involves the de-convolution of the PSPC resolution and yields a HEW of ∼12 arcsec. This demonstrates that the mirror fabrication technology meets the requirement for the WFXT program.
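The de-convolution mentioned above is more elaborate than this, but its size can be illustrated with the usual quadrature approximation for combining resolutions. Both input values below are placeholders of ours, chosen only so that the output is consistent with the ∼12 arcsec figure quoted in the text.

```python
import math

def mirror_hew(hew_measured, hew_detector):
    """Quadrature estimate of the intrinsic mirror HEW, assuming the
    measured blur is the convolution of roughly Gaussian-like mirror
    and detector responses (an approximation, not the full
    de-convolution used in the PANTER analysis)."""
    return math.sqrt(hew_measured**2 - hew_detector**2)

# Illustrative placeholder values (arcsec): a measured HEW from the
# undamaged azimuth sector and an assumed PSPC contribution.
print(f"{mirror_hew(15.0, 9.0):.0f} arcsec")  # -> 12 arcsec
```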
The collecting area of the telescope at 1.5 keV is ∼360 cm<sup>2</sup> (∼310 cm<sup>2</sup> after convolution with the filter and CCD response). The energy resolution at 1.5 keV is ΔE/E ≈ 10%. The CCD focal plane is based on the heritage of the technology developed for JET-X, XMM and AXAF, using the best technology and detectors available today. The convolution of the mirror shells with the filter and detector shows that we have a rather good response in the range of interest (0.2 - 8 keV). The response curve of the instrument is plotted in Fig. III.4. The high angular resolution distinguishes extended clusters of galaxies from point-like sources at any redshift, and the good sensitivity reaches a large number of very faint and distant objects, as required to achieve the science goals of the mission.

## 4 A simulated mission

A typical mission plan has been computed for the two possible orbital inclinations: 3.5 and 50 deg. The purpose here is to show that the scientific goals can really be accomplished in the minimum mission baseline of approximately two years. The proposed mission plans are computed with the aim of keeping the number of large satellite movements (e.g. switching among different targets) as low as possible. All factors reducing the observing efficiency (Earth and solar interference, radiation belt crossings, etc.) have been taken into account. For simplicity, in the table we list only the simulation for the 3.5 degree orbit. Note that in the last column, total, we give the percentage of the accumulation time achieved after each pointing, up to the full integration time at the specified target (100%). The data will be promptly released after completion of the observations, and in any case within one year of the end of the mission. They will enter the public domain through the delivery of the calibrated data to the appropriate centers. The final calibration will be implemented and released at the end of the mission, when all LMC data will be available. Copies of the archives will be set up in Italy, the USA, the UK and Germany and will be regulated by common rules. The clean, calibrated data will be kept available online on disks; at the end of the mission this service could be under the responsibility of the ASI-SDC. When justified, a large set of data could be transferred to the requesting group by the most convenient means of data transfer (DAT tapes, CD-ROMs). The WFXT Science Team, coordinated by the PI, will release at the end of the mission the final and official catalog of the detected sources, along with information such as coordinates, fluxes, source extension, etc.

## 5 Acknowledgements

The Scientific Institutes involved in the hardware aspects during the Phase A study are: Brera Astronomical Observatory (OAB), Smithsonian Astrophysical Observatory (SAO), University of Leicester (UL), Max-Planck-Institut für Extraterrestrische Physik (MPE). The participating industries are: ALENIA, MEDIA-LARIO, TELESPAZIO, OFFICINE GALILEO. PI: G. Chincarini (OAB); co-PI: S. Murray (SAO); Co-Is: J. Trümper (MPE), A. Wells (UL); Telescope PI: O. Citterio (OAB); PM: G. Tagliaferri (OAB); PS: S. Sciortino (OAPA). Six panels headed by S. Sciortino coordinated all the work for the science proposal. The panels were chaired by: Colafrancesco (LSS), Chincarini (Clusters of galaxies), Zamorani (AGN), Forman (Galaxies), Pallavicini (Stars) and Watson (Compact Objects). Contributions to this work came from Antonuccio-Delogu, Arnaboldi, Bandiera, Bardelli, Boehringer, Bonometto, Borgani, Campana, Catalano, Cavaliere, Covino, Della Ceca, De Grandi, De Martino, Fiore, Garilli, Ghisellini, Giacconi, Giommi, Girardi, Giuricin, Governato, Guillout, Guzzo (who revised the final version of the science proposal), Iovino, Israel, Lazzati, Le Fevre, Longo, Maccacaro, Maccagni, Mardirossian, Matarrese, Micela, Molendi, Molinari, Moscardini, Murray, Norman, Perola, Osborne, Pye, Ramella, Randich, Robba, Rosati, Scaramella, Stella, Stewart, Tagliaferri, Tozzi, Trinchieri, Vikhlinin, Vittorio, Wolter. Members of the following Institutions expressed direct interest in the mission: ASI (SDC), Bologna (OA), Catania (OA), Firenze (OA), Milano (IFCTR, UNI), Napoli (OA), Padova (UNI, OA), Palermo (OA, UNI), Perugia (INFN), Roma (OA-UNI), Trieste (OA), Marsiglia (CNRS, LAS), Baltimore (JHU), Cambridge (MIT), Princeton (UNI), Munich (ESO, MPE), Copenhagen (DSRI). Indeed, this paper is the result of cutting and pasting parts of the Phase A proposal.

## 6 References

Allen, S. W., Fabian, A. C., 1998, MNRAS, 297, L63
Andreani, P., Cristiani, S., 1992, ApJL, 398, L13
Bahcall, N. A., 1979, ApJ, 232, 689
Bardeen, J. M., et al., 1986, ApJ, 304, 15
Bardelli, S., et al., 1997, Astroph. Lett. & Comm., 36, 251
Bode, N., et al., 1998, preprint
Boehringer, H., Guzzo, L., Collins, C. A., Neumann, D. M., Schindler, S., et al. (the REFLEX Team), 1998, ESO Messenger, 94, in press (astro-ph/9809382)
Borgani, S., Rosati, P., Tozzi, P., & Norman, C., 1998, ApJ, in press
Branduardi-Raymont, G., et al., 1994, MNRAS, 270, 947
Briel, U., Henry, P., 1995, A&A, 302, 9
Bryan, G., 1997, PhD Thesis
Carroll, S. M., Press, W. H., Turner, E. L., 1992, ARA&A, 30, 499
Cen, R., Ostriker, J. P., 1998, Science, in press (astro-ph/9806281)
Colberg, J. M., White, S. D. M., Jenkins, A., Pearce, F. R., 1997, submitted to MNRAS, astro-ph/9711040
Collins, C. A., Burke, D. J., Romer, A. K., Sharples, R. M., Nichol, R. C., 1997, ApJ, 479, L117
Connolly, A. J., et al., 1996, ApJ, 473, L67
Comastri, A., Setti, G., Zamorani, G., Hasinger, G., 1995, A&A, 296, 1
Dalton, G. B., Croft, R. A. C., Efstathiou, G., Sutherland, W. J., Maddox, S. J., Davis, M., 1994, MNRAS, 271, L47
David, L. P., Jones, C., and Forman, W., 1995, ApJ, 445, 578
De Grandi, S., Guzzo, L., Boehringer, H., Molendi, S., Chincarini, G., 1998, ApJ, submitted
Donnelly, R. H., Faber, S., O’Connell, R., 1990, ApJ, 354, 52
Durret, F., Forman, W., Gerbal, D., Jones, C., Vikhlinin, A., 1998, A&A, 335, 41
Ebeling, H., Edge, A. C., Fabian, A. C., Allen, S. W., Crawford, C. S., Bohringer, H., 1997, ApJ, 479, L101
Efstathiou, G., 1995, MNRAS, 272, L25
Eke, V. R., Cole, S., Frenk, C. S., 1996, MNRAS, 282, 263
Elvis, M., et al., 1981, ApJ, 246, 20
Ettori, S., Fabian, A. C., White, D. A., 1998, MNRAS, 289, 787
Fabbiano, G., Trinchieri, G., 1985, ApJ, 296, 430
Favata, F., Barbera, M., Micela, G., Sciortino, S., 1993, A&A, 277, 428
Favata, F., Barbera, M., Micela, G., Sciortino, S., 1995, A&A, 295, 147
Favata, F., Micela, G., Sciortino, S., Vaiana, G. S., 1992, A&A, 272, 124
Forman, W., Jones, C., & Tucker, W., 1985, ApJ, 293, 102
Fukugita, M., Hogan, C. J., and Peebles, P. J. E., 1997, astro-ph/9712020
Geller, M. J., Huchra, J. P., 1989, Science, 246, 897
Gioia, I. M., Maccacaro, T., Schild, R. E., Wolter, A., Stocke, J. T., Morris, S. L., Henry, J. P., 1990, ApJS, 72, 567
Griffiths, R. E., Georgantopoulos, I., Boyle, B. J., Stewart, G. C., Shanks, T., Della Ceca, R., 1995, MNRAS, 275, 77
Groth, E. J., Peebles, P. J. E., 1977, ApJ, 217, 385
Guillout, P., Haywood, M., Motch, C., Robin, A. C., 1996, A&A, 316, 89
Guillout, P., Sterzik, M. F., Schmitt, J. H. M. M., Motch, C., Egret, D., Voges, W., Neuhäuser, R., 1998a, A&A, in press
Guillout, P., Sterzik, M. F., Schmitt, J. H. M. M., Motch, C., Neuhäuser, R., 1998b, A&A, in press
Guzzo, L., Boehringer, H., Briel, U., Chincarini, G., Collins, C. A., et al., 1995, in “Wide Field Spectroscopy and the Distant Universe”, S. J. Maddox & A. Aragon-Salamanca, eds., World Scientific, Singapore, p. 205
Guzzo, L., Strauss, M. A., Fisher, K. B., Giovanelli, R., & Haynes, M. P., 1997, ApJ, 489, 37
van Haarlem, M. P., Frenk, C. S., White, S. D. M., 1997, MNRAS, 287, 817
Hanami, H., et al., 1998, preprint
Hasinger, G., Burg, R., Giacconi, R., Hartner, G., Schmidt, M., Trümper, J., Zamorani, G., 1993, A&A, 275, 1
Hasinger, G., Burg, R., Giacconi, R., Schmidt, M., Trumper, J., Zamorani, G., 1998, A&A, 329, 482
Hauser, M. G. & Peebles, P. J. E., 1973, ApJ, 185, 757
Henry, J. P., 1997, ApJ, 489, L1
Henry, J. P., Gioia, I. M., Maccacaro, T., Morris, S. L., Stocke, J. T., Wolter, A., 1992, ApJ, 386, 408
Hjorth, J., Oukbir, J., van Kampen, E., 1998, MNRAS, in press (astro-ph/9802293)
Hudson, M. J., Ebeling, H., 1997, ApJ, 479, 621
Jenkins, A., et al., 1998, ApJ, 499, 20
Kaiser, N., et al., 1998, ApJ, submitted (astro-ph/9809268)
Landy, S. D., Szalay, A. S., 1993, ApJ, 412, 64
Long, K. S., Helfand, D. J., Grabelsky, D., 1981, ApJ, 248, 925
Maccacaro, T., Schild, R. E., Wolter, A., Henry, J. P., 1991, ApJS, 76, 813
Maddox, S. J., Efstathiou, G., Sutherland, W. J., and Loveday, J., 1990, MNRAS, 242, 43P
Micela, G., Sciortino, S., Serio, S., Vaiana, G. S., Bookbinder, J., Golub, L., Harnden, F. R. Jr., Rosner, R., 1985, ApJ, 292, 172
Micela, G., Sciortino, S., Vaiana, G. S., Schmitt, J. H. M. M., Stern, R., Harnden, F. R. Jr., Rosner, R., 1988, ApJ, 325, 798
Micela, G., Sciortino, S., Vaiana, G. S., Harnden, F. R. Jr., Rosner, R., Schmitt, J. H. M. M., 1990, ApJ, 348, 557
Micela, G., Sciortino, S., Favata, F., 1993, ApJ, 412, 618
Mo, H. J., Jing, Y. P., White, S. D. M., 1996, MNRAS, 284, 189
Morley, J. E., Pye, J. P., Warwick, R. S., Pilkington, J., 1996, MPE Report 263, 659
Moscardini, L., et al., 1998, preprint
Motch, C., Guillout, P., Haberl, F., Pakull, M., Pietsch, W., Reinsch, K., 1997, A&A, 318, 111
Neuhäuser, R., 1997, Science, 276, 1363
Ostriker, J. P., Cen, R., 1998, submitted to Science, astro-ph/9806281
Ostriker, J. P., Steinhardt, P. J., 1995, Nature, 377, 600
Page, M. J., Mason, K. O., McHardy, I. M., et al., 1997, MNRAS, 291, 324
Pallavicini, R., Golub, L., Rosner, R., Vaiana, G. S., Ayres, T., Linsky, J. L., 1981, ApJ, 248, 279
Pallavicini, R., 1989, A&A Review, 1, 177
Pallavicini, R., 1998, Space Science Review, in press
Peebles, P. J. E., 1980, The Large Scale Structure of the Universe (Princeton: Princeton Univ. Press)
Perlman, E. S., Padovani, P., Giommi, P., et al., 1998, AJ, 115, 1253
Pogosyan, D., Bond, R., Kofman, L., Wadsley, J., 1998, preprint
Ponman, A., et al., 1994, Nature, 369, 462
Postman, M., 1998, in “Evolution of Structure: from Recombination to Garching” (August 1998), A. Banday et al., eds., in press (astro-ph/9810088)
Pye, J. P., Morley, J. E., Warwick, R. S., Pilkington, J., Micela, G., Sciortino, S., Favata, F., 1997, in “Cool Stars in Clusters and Associations: Magnetic Activity and Age Indicators” (G. Micela, R. Pallavicini, S. Sciortino, eds.), Memorie SAIt, 68, 1089
Press, W. H., Schechter, P., 1974, ApJ, 187, 425
Randich, S., 1997, in “Cool Stars in Clusters and Associations: Magnetic Activity and Age Indicators” (G. Micela, R. Pallavicini, S. Sciortino, eds.), Memorie SAIt, 68, 971
Rosati, P., Della Ceca, R., Norman, C., Giacconi, R., 1998, ApJ, 492, L21
Rosati, P., 1998, in “Wide Field Surveys in Cosmology”, Proceedings of the XIVth IAP meeting, S. Colombi & Y. Mellier, eds., in press (astro-ph/9810054)
Rosner, R., Golub, L., Vaiana, G. S., 1985, ARA&A, 23, 413
Scharf, C. A., Mushotzky, R. F., 1997, ApJ, 482, L13
Schmidt, M., Hasinger, G., Gunn, J., et al., 1998, A&A, 329, 495
Schneider, D. P., Schmidt, M., Hasinger, G., et al., 1998, AJ, 115, 1230
Sciortino, S., 1993, in “Physics of Solar and Stellar Coronae”, J. Linsky and S. Serio (eds.), Kluwer Academic Publishers, 221
Sciortino, S., Damiani, F., Favata, F., Micela, G., Pye, J., 1998, Astron. Nachr., 319, 108
Sciortino, S., Favata, F., Micela, G., 1995, A&A, 296, 370
Seward, F. D., Mitchell, M., 1981, ApJ, 243, 736
Shanks, T., Boyle, B. J., 1994, MNRAS, 271, 753
Sidoli, L., et al., 1998, A&A, 336, L81
Slezak, E., Durret, F., Guibert, J., Lobo, C., 1998, A&AS, 129, 281
Snowden, S. L., Petre, R., 1994, ApJ, 436, L123
Stocke, J. T., Morris, S. L., Gioia, I. M., et al., 1991, ApJS, 76, 813
Tagliaferri, G., Cutispoto, G., Pallavicini, R., Randich, S., Pasquini, L., 1994, A&A, 285, 272
Tananbaum, H., Tucker, W., Prestwich, A., Remillard, R., 1997, ApJ, 476, 83
Vikhlinin, A., McNamara, B. R., Forman, W., Jones, C., Quintana, H., & Hornstrup, A., 1998a, ApJ, 498, L21
Vikhlinin, A., Forman, W., Jones, C., & Murray, S., 1995, ApJ, 451, 553
Vikhlinin, A., McNamara, B. R., Forman, W., Jones, C., Quintana, H., & Hornstrup, A., 1998b, ApJ, 502, 558
Vogeley, M. S., 1998, in “Ringberg Workshop on Large-Scale Structure” (astro-ph/9805160)
Wang, Q. D., Connolly, A. J., Brunner, R. J., 1997, ApJ, 487, L16
Wang, Q., et al., 1991, ApJ, 374, 475
West, M. J., Jones, C., Forman, W., 1995, ApJ, 451, L5
White, N., et al., 1995, in X-ray Binaries, eds. White, Parmar & van den Heuvel
Xia, X.-Y., et al., 1998, ApJ, 496, L9
Zucca, E., et al., 1993, ApJ, 407, 470

Figure I.1: Comparison of the survey area versus the limiting sensitivity. The baseline of the mission almost reaches the sensitivity of the AXAF deep survey over an area more than twenty times larger. The confusion limit is well below the XMM confusion limit.

Figure II.1: 30’×30’ optical image from the DSS plates, with the X-ray contours from a pointed ROSAT observation superimposed. There are 26 X-ray sources in this area; 20 are distant AGNs. Sources ’A’ and ’C’ are two clusters of galaxies at z=0.36 and z=0.55 respectively, while ’B’ is a poorer group at z=0.12.
Figure II.2: The effective area covered on the sky by the three most representative X-ray deep cluster surveys, the EMSS (Henry et al. 1992), the RDCS (Rosati et al. 1998) and CfA (Vikhlinin et al. 1998a), compared to the areas of the WAXS Shallow and Deep surveys. Note that the flux limits for the WAXS surveys are rather conservative (adapted from Rosati 1998).

Figure II.3: The limiting flux reachable as a function of the integration time for a typical cluster, at a given S/N ratio.

Figure II.4: Comparison of the local (z<0.3) and distant (0.3<z<0.8) cluster X-ray luminosity functions. Filled circles and the dot-dashed line represent the local XLF as measured by De Grandi et al. (1998); filled triangles and stars are, respectively, the RDCS (Rosati et al. 1998) and EMSS (Henry et al. 1992) results in the 0.3<z<0.6 range; lozenges are the RDCS estimates for the 0.5<z<0.85 bin.

Figure II.5: The filled circles give the cluster X-ray luminosity function from the RDCS survey (Rosati et al. 1998), in three redshift bins at median redshifts (top to bottom) of 0.17, 0.31 and 0.58. The three XLFs have been displayed with a vertical shift of Δlog(φ)=2 for clarity. The two panels refer to two different cosmological models: an Einstein-de Sitter model, and a flat model with Ω<sub>0</sub>=0.3 and a cosmological constant. Note how the data from the more distant bin marginally overlap those at smaller redshifts (Borgani et al. 1998).

Figure II.6: A preliminary estimate of the power spectrum of cluster density fluctuations from the REFLEX survey, currently the best available sample of X-ray selected clusters from the ROSAT All-Sky Survey. Note how, using clusters, the sampling of density fluctuations is optimal on large scales (k<0.2), i.e. specifically where more information is needed and where this becomes difficult when using galaxy redshift surveys (Schuecker et al., in preparation; Boehringer et al. 1998).

Figure II.7: Output of a hydrodynamic/N-body simulation of the formation of a cluster of galaxies and the surrounding filamentary structures, within a 32 Mpc cube (Bode et al. 1998). The pictures show different quantities projected from a slab 8 Mpc deep at the center of the cube. Clockwise from top-left, they show the distribution of (a) the dark matter, (b) the gas, (c) the X-ray emission, and (d) the corresponding temperature field. Note how well the cluster and surrounding groups in the X-ray image trace the overall mass distribution.

Figure II.8: All-sky distribution in galactic coordinates of the 8593 Tycho stars detected as PSPC sources above the limiting flux of 2×10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>. The density enhancement at low galactic latitude is clearly visible, as well as an evident enhancement that has been associated with a physical structure, the so-called Gould Belt. This has an ellipsoidal shape with a semi-major axis of about 500 pc and a semi-minor axis of about 340 pc, with the Sun located inside it, about 150 to 250 pc off center (from Guillout et al. 1998a).

Figure II.9: A simulation of the projected all-sky angular distribution (in galactic coordinates) of the X-ray emitting stars down to m<sub>V</sub>=15 and to f<sub>X</sub>=5×10<sup>-14</sup> erg s<sup>-1</sup> cm<sup>-2</sup>, according to the model of the spatial distribution of young stars recently proposed by Guillout et al. (1998b).
Note that both the galactic disk population and the Gould Belt population will be adequately sampled by the proposed low-latitude surveys (simulation courtesy of P. Guillout).

Figure II.10: Detection by the wavelet algorithm on a simulation of an ultra-deep WAXS field. The size of the circles is related to the physical dimension of the detected sources.

Figure II.11: Comparison between the input log N – log S and that derived from the analysis.

Figure III.1: View of the WFXT parts.

Figure III.2: WFXT Mirror Module conceptual design.

Figure III.3: Image spot of the outermost WFXT mirror shell tested at the PANTER facility. The four images (from left to right and top to bottom) refer to off-axis angles of 0’, 10’, 20’ and 30’. Images were taken at an energy of 1.5 keV with a CCD detector.

Figure III.4: Effective area (mirror + CCD + filter) for the WFXT telescope. Different curves refer to the different materials used to coat the mirror surfaces.
# Dust lanes causing structure in the extended narrow line region of early-type Seyfert galaxies

## 1 Introduction

In Seyfert galaxies, when radiation is obscured by an inner optically thick “torus” (Antonucci (1993)), ultraviolet (UV) radiation can escape along a conical beam, causing an “ionization cone” to be observed in emission line maps (e.g., Evans et al. (1991), Pogge (1989)). The observed morphology and luminosity of the ionization cone can be influenced by the density distribution of the ambient media (e.g., as simulated by Mulchaey, Wilson & Tsvetanov (1996)). For example, in NGC 4151 extended emission is produced by the intersection of the ionization cone with the disk of the galaxy (Pedlar et al. (1993); Boksenberg et al. (1995); Wilson & Tsvetanov (1994)). In samples of Seyfert galaxies the orientation of \[OIII\] and Hα+\[NII\] line emission is generally near that of the galaxy major axis (Nagar et al. (1999), Mulchaey & Wilson (1995)). This suggests that the availability and distribution of dense galactic gas play an important role in determining the width and orientation of the ionization cone. Extended emission-line and radio morphologies are often co-spatial or aligned in Seyfert galaxies (e.g., Unger et al. (1987); Pogge (1988)). This supports the idea that ionizing photons preferentially escape along the radio axis. The connection between the radio ejecta of Seyfert nuclei and their extended emission line regions is evident from the similar spatial extents observed in high angular resolution radio interferometric observations compared to Hubble Space Telescope (HST) images (e.g., recently Simpson et al. (1997); Falcke, Wilson & Simpson (1998); Mulchaey et al. (1994); Ferruit, Wilson & Mulchaey (1998)). However, the morphologies observed in lines from ionized gas, \[OIII\]λ5007Å and Hα+\[NII\]λλ6548,6583Å, can be complex, showing S-shaped features, partial loops, and curves suggestive of bow shocks. Except for observations showing that high excitation gas tends to form a cone-shaped morphology, no pattern has emerged connecting the complicated morphologies observed in the ionized gas distribution among different Seyfert galaxies. In particular, it is not clear what role the distribution of ambient galactic gas plays compared to energetic hydrodynamical processes caused by the jet. In this paper we take the opportunity provided by high angular resolution near-infrared imaging with NICMOS on HST to make color maps with previously observed visible wavelength HST/WFPC2 images. We find that the resulting color maps show extinction from dust features with higher signal to noise than was possible with color maps made previously from the existing WFPC2 images alone. This could be because the large wavelength separation between the visible and near-infrared images makes it easier to identify features with low column depths of dust. We present here a comparison between the morphology observed in these color maps (tracing dust features) and that displayed in \[OIII\] and Hα+\[NII\] line emission from ionized gas. Using these color maps we can examine how the distribution of ambient dense gas in the galaxy affects the morphology of the line emission. We chose Seyfert galaxies with existing high quality images observed with WFPC or WFPC2 and NICMOS, and with recent published literature discussing their ionization cone morphology. The galaxies are also of early type, so as to minimize confusion caused by star formation and by extreme extinction from dust.
This limited our sample to 4 galaxies: Markarian 573, NGC 3516, NGC 2110 and NGC 5643. All are Seyfert 2 galaxies except for NGC 3516, which is a Seyfert 1. The Hubble types range from SB0 to SAB. A number of dynamical models or scenarios have been proposed to explain the individual galaxy morphologies. In NGC 3516 a bent bipolar mass outflow model was suggested (Goad & Gallagher (1987); Mulchaey et al. (1992)) to account for the S-shape of the ionized gas. Alternatively, a precessing jet model might also be appropriate (Veilleux, Tully, & Bland-Hawthorn (1993)). In Markarian 573 the inner knots nearest the radio jet may represent deflection of the jet by ambient or entrained clouds, whereas the inner arcs at 1″.8 may represent bow shocks driven by the jets (Ferruit et al. (1999)). Despite the similarities in morphology between the inner arcs and the outer arcs at 3″.6, this mechanism is unlikely to apply to the outer arcs. In NGC 2110 a pair of curved features of ionized gas is observed which are offset from the radio jet itself (Mulchaey et al. (1994)). This might be consistent with a model where gas is swept up or ejected by the jet, but the relative positions of the radio emission and the emission line features remain difficult to explain (Mulchaey et al. (1994)). Despite the morphological similarity between the inner and outer emission features in NGC 2110, the outer S-shaped region of ionized gas at 4″ from the nucleus is suspected to be caused by a different mechanism, that of ambient interstellar gas photoionized by the central source (Mulchaey et al. (1994)). Many of these studies and interpretations were hampered by the lack of information about the distribution of ambient galactic material. In this paper we search for a simple explanation for the variety of emission line morphologies observed in these galaxies.

## 2 Comparison of near-infrared/visible color maps with emission line maps

WFPC, WFPC2 (visible) and NICMOS (near-infrared) images were taken from the HST archive. For more information on the visible band images see the original papers discussing the HST observations (on NGC 3516, Ferruit et al. (1998); on NGC 5643, Simpson et al. (1997); on NGC 2110, Mulchaey et al. (1994); on Markarian 573, Ferruit et al. (1999), Falcke et al. (1998)). NICMOS Camera 2 images in the F160W filter (centered at 1.60 μm) were primarily from the snapshot program 7330 (Regan & Mulchaey). For NGC 2110 we used the narrow band image in the F200N filter (centered at 2.00 μm) from GO program 7869. The NICMOS images were reduced with nicred (McLeod (1997)) with on-orbit flats and darks. In Figures 1-4 we show ionized gas traced in either \[OIII\] or H$`\alpha `$+\[NII\] for the four galaxies. We also show color maps constructed from NICMOS and WFPC2 images. The color maps trace the morphology of molecular gaseous structures predominantly on the near side of the galaxy (in front of most of the stars). They are currently the only feasible way to compare the location of the molecular gas with the emission line structures in the ionization cones.

### 2.1 NGC 5643

The isophotes in the outer parts of the NICMOS Camera 2 F160W (1.6 μm) image are roughly aligned with a large scale bar (with major axis PA ≈ 90°, extending to a radius of r<sub>b</sub> ≈ 30″; Mulchaey, Regan & Kundu (1997)). No inner bar is detected in this image.
The color map shows a pair of extinction features consistent with leading dust lanes along, but offset from, the major axis of the bar. Knots observed in \[OIII\] are probably not directly associated with any dust features observed in the galaxy. However, the southern side of the ionization cone displays a component of diffuse emission which appears to be bounded by the dust lane seen on the south-east side of the galaxy. The dust lane appears to be very slightly offset to the south from the diffuse component of the line emission. This could be explained by a projection effect if the UV radiation beam from the central source (or from shocks caused by the jet, e.g., Dopita & Sutherland (1995)) illuminates material somewhat above, but not in, the plane of the galaxy. This offset could also be explained by a model where dense material originally from the dust lane is entrained by moving material associated with the jet (PA ≈ 90°, Morris et al. (1985)).

### 2.2 NGC 2110

For NGC 2110 the deepest visible broad band image available was in the F606W filter, which is somewhat contaminated by line emission. However, the extent of the dust lanes can be seen outside the ionization cone and includes large areas which are not contaminated. The overall pattern is that of spiral dust lanes which could be part of spiral arms. In particular, dust features are observed at about 4″ and 8″ north of the nucleus, roughly corresponding to the two arcs seen in the Hα and \[OIII\] emission maps (Mulchaey et al. (1994)). A dust lane is also observed to the south of the nucleus, corresponding to a broad curve of line emission about 2″ south of the nucleus. The dust features appear to be offset from the line emission, being slightly more distant from the nucleus (see the above discussion on NGC 5643).

### 2.3 NGC 3516

On large scales NGC 3516 is barred, with major axis PA ≈ 170° and extending to a radius of r<sub>b</sub> ≈ 13″ (Mulchaey et al. (1997)). In the F160W image the isophotes are slightly elongated in a direction roughly perpendicular to this outer bar (at r ≈ 3″), so the galaxy may be doubly barred. However, the morphology of the dust features observed in extinction in the color map and the gas kinematics are not consistent with what would be expected from gas in the plane of the galaxy (see discussion in Ferruit et al. (1998), Veilleux, Tully, & Bland-Hawthorn (1993)). When gas exists above the plane of the galaxy a warped configuration is most likely (e.g., Tubbs (1980)). To the south of the nucleus a curved dust feature is observed in the color map. The shape of this dust feature corresponds quite well with the morphology of the \[OIII\] emission and suggests that dust is associated with the ionized gas. To the north west and south east of the nucleus extinction features are observed which are not co-spatial with bright line emission. We note that the F547M image we used to make the color map should be free of line emission.

### 2.4 Markarian 573

On large scales Markarian 573 is barred, with major axis PA ≈ 0° and extending to a radius of r<sub>b</sub> ≈ 10″ (e.g., Alonso-Herrero et al. (1998)). In the F160W image there is a strong elongation almost exactly perpendicular to the larger scale bar at r ≈ 2″, which corresponds to an inner bar (noted by Pogge & DeRobertis (1993)).
There is also a pair of dust lanes slightly offset from the major axis of this inner bar (Pogge & DeRobertis (1993)), which would be consistent with leading dust lanes along this bar; however, a warped dusty disk oriented perpendicular to the jet might also be present. Unfortunately, the F606W filter is strongly contaminated by line emission. However, in the F606W/F160W color map we can trace the extent of dust features outside the ionized gas. To the east of the nucleus two linear dust features are connected to the two south-eastern arcs of line emission, suggesting that they are likely to continue within the region of line emission. To the south there is also a dust lane which connects to the outer arc. The morphology of the line emission and dust lanes is reminiscent of the double spiral arms which are sometimes observed at the ends of a bar (e.g., in NGC 1365; Lindblad, Lindblad, & Athanassoula (1996)). The linear feature of high excitation emission within an arcsec of the nucleus does not correspond to any dust features and so is probably directly associated with the jet (as discussed in Ferruit et al. (1999), Falcke et al. (1998)).

### 2.5 Interpretation

Spiral arms and bars can cause gas mass surface density contrast ratios (between arm and inter-arm, or dust lane and inter-lane) of greater than a factor of a few in the plane of the galaxy (e.g., Hausman & Roberts (1984), Athanassoula (1992)). Extinctions measured from our color maps are similar to those estimated from visible color maps and range over A<sub>V</sub> ∼ 0.5-1.5 mag in the dust features (e.g., Simpson et al. (1997), Mulchaey et al. (1994), Ferruit et al. (1998)), corresponding to N<sub>H</sub> ∼ 3-9×10<sup>21</sup> cm<sup>-2</sup> using a standard ratio of total hydrogen column depth to color excess. Outside the dust features the visible to infrared colors match those on the opposite side of the galaxy nucleus, allowing us to limit A<sub>V</sub> ≲ 0.1 mag inter-arm (or inter-lane). This implies that the minimum surface density contrast ratio between arm and inter-arm (or dust lane and inter-lane) is a factor of a few. The dust features that are evident in our visible/infrared color maps therefore represent a significant source of dense galactic gas compared to that outside these features. Densities estimated from emission line diagnostics are relatively high (∼30 cm<sup>-3</sup> for NGC 3516, Ulrich & Péquignot (1980); 100-850 cm<sup>-3</sup> depending on the arc in Markarian 573, Ferruit et al. (1998); ∼50 cm<sup>-3</sup> in a particular knot in NGC 5643, Simpson et al. (1997)). Using these densities to estimate the total column of ionized hydrogen from the Hα line strength, we estimate that N<sub>H,ion</sub> ∼ a few ×10<sup>21</sup> cm<sup>-2</sup> in Markarian 573, NGC 2110 and NGC 3516, and a factor of 10 lower in NGC 5643. These column depths are a significant fraction of those estimated above from the color excesses or extinctions in the dust features. The high densities and large column depths of ionized hydrogen suggest that a nearby source of dense galactic gas is required to account for their existence. The dust lanes provide such a nearby source of dense gas, which does not exist outside these regions.
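As a rough consistency check of the column comparison above (ours, with assumptions flagged in the comments): the gas-to-extinction ratio used below is the commonly adopted Galactic value, which may differ from the conversion used in the papers cited, and the ∼10 pc path length through an emission-line cloud is purely illustrative.

```python
# Assumed conversions (see lead-in): standard Galactic gas-to-extinction
# ratio, and an illustrative ~10 pc path through an emission-line region.
NH_PER_AV = 1.9e21   # cm^-2 per magnitude of A_V (standard Galactic value)
CM_PER_PC = 3.086e18

def nh_from_extinction(a_v):
    """Total hydrogen column implied by an extinction A_V (mag)."""
    return NH_PER_AV * a_v

def nh_ionized(n_e, path_pc):
    """Ionized hydrogen column for density n_e (cm^-3) over path_pc (pc)."""
    return n_e * path_pc * CM_PER_PC

print(f"A_V = 1 mag            -> N_H     ~ {nh_from_extinction(1.0):.1e} cm^-2")
print(f"n_e = 100 cm^-3, 10 pc -> N_H,ion ~ {nh_ionized(100.0, 10.0):.1e} cm^-2")
# Both are a few x 1e21 cm^-2, so the ionized columns are indeed a sizeable
# fraction of the columns traced by the dust lanes.
```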
The emission measure of a line is proportional to the density squared and so is a strong function of the gas density. This implies that denser material should produce brighter line emission. If UV photons preferentially escape along a conical radiation beam, the densest material illuminated by this beam will be the easiest to see in a line emission map. Alternatively, in models where the interaction of the radio ejecta with the ambient medium could also produce ionizing radiation (e.g., Dopita & Sutherland (1995)), we would also expect a bias towards detecting denser gas associated with dust features. When dense gas exists in the plane of the galaxy we therefore expect preferentially to see line emission near this plane, associated with the denser media that are traced by dust lanes.

## 3 Discussion

For 4 early-type Seyfert galaxies we have demonstrated that dust lanes lie near features seen in line emission maps of the extended emission line region. Although the correspondence between emission line and dust features is not perfect, it is plausible that “missing” dust features that are seen in line emission lie on the far side of the galaxy and are not seen as shadows in the color maps. In NGC 5643 a component of diffuse line emission is bounded by a dust lane on the south-eastern side of the ionization cone. In NGC 2110 three spiral dust lanes have curvature and locations similar to three arcs seen in the line emission maps. In Markarian 573, on the east side, two linear dust lanes merge into the two south-eastern line emission arcs; to the south a linear dust lane merges into the south-eastern outer arc. In NGC 3516, to the south west of the nucleus, patchy dust features exist near the location of bright line emission; the dust features continue outside the region of ionized emission to the north west and south east of the nucleus. The proximity of dust features to line emission suggests that the morphology of the line emission is affected by spiral arms or bars (except in NGC 3516, where the dust may be part of a gas warp). Dense gas in these dust features is more likely to result in bright line emission when illuminated by a UV source or when affected by a jet. This suggests that the line emission luminosity might depend on the orientation of the jet or ionization cone with respect to the galaxy plane, or on the scale height of the galactic gas. When spiral arms or bars are present, moderate deviations from circular motion caused by streaming are expected. This could explain some (but not all) of the peculiarities observed in the velocity fields (e.g., in Markarian 573; Ferruit et al. (1999)). However, we note that not all of the line emission features are associated with dust lanes. This is illustrated by Markarian 573 and NGC 5643, where many of the features observed in the \[OIII\] emission map do not have nearby dust lanes. We would expect that galaxies with a larger fraction of active star formation than the galaxies presented here may have much more complicated ionized gas structure because of their multiple sources of UV radiation (e.g., as seen in Circinus; Maiolino et al. (1999)). The galaxies studied here have ionization cones which extend hundreds of pc from the galaxy nucleus, well outside a gas disk exponential scale length. Ionization cones on small scales, within a gas exponential scale length, would not be expected to be so sensitive to the distribution of galactic gas in the plane of the galaxy. Support for this work was provided by NASA through grant number GO-07869.01-96A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
We also acknowledge support from NASA projects NAG5-3042 and NAG5-3359. We acknowledge helpful discussions and correspondence with Chien Peng and Don Garnett.
# Crossing of Specific Heat Curves in some correlated Fermion systems ## Abstract Specific heat versus temperature curves for various pressures, or magnetic fields (or some other external control parameter), have been seen to cross at a point or in a very small range of temperatures in many correlated fermion systems. We show that this behavior is related to the vicinity of a quantum critical point in these systems, which leads to a crossover at some temperature $`T^{*}`$ from the quantum to the classical fluctuation regime. The temperature at which the curves cross turns out to be near $`T^{*}`$. We have discussed the case of the normal phase of liquid helium-3 and the heavy fermion systems CeAl<sub>3</sub> and UBe<sub>13</sub> in detail within the spin fluctuation theory. When the crossover scale is any homogeneous function of these control parameters there is always crossing at a point. There has been a surge of interest in correlated fermionic systems for the last ten years. This has led to a recognition that the usual mean field or Hartree-Fock description of interacting fermionic systems is not enough, in particular when the effective space dimension of the system is low or when the system is near a quantum phase transition, due to the effects of characteristic low energy quantum fluctuations. For example, systems near a metal-insulator transition or near a magnetic instability, high temperature superconductors, heavy fermions and liquid <sup>3</sup>He all show temperature dependence of their properties at low temperatures which differs from that expected in a normal Fermi liquid . One phenomenon which had been observed long ago is that in some systems the specific heat curves as a function of temperature, for various values of external parameters (e.g. pressure, magnetic field), cross at a point or at least within a very narrow regime of temperature. This phenomenon was initially observed for <sup>3</sup>He by Brewer et al. and has been seen later on in a variety of materials, ranging from systems close to the metal-insulator transition to heavy fermions. The variety of materials in which this phenomenon has been observed leads one to believe that there is some kind of universality in this behavior. In a recent publication, Vollhardt has given a thermodynamic interpretation to this universality. The argument relies on a smooth crossover between the behavior of the entropy at temperatures low compared to the degeneracy temperature and the high temperature classical limit. As such, the question of why such crossings are prominently seen in systems with highly enhanced magnetic susceptibility or effective mass remains unanswered. Here we propose that the operative cause is the proximity to a quantum critical point (or $`T=0`$ critical point). Vicinity to a quantum critical point is usually marked by an enhancement of the effective mass and of the spin or density (charge) response of a system at low temperatures. This in turn introduces a low energy scale which marks a crossover from quantum to classical behavior in the temperature dependence of various physical properties. In most materials the above-mentioned crossing of the specific heat occurs near this crossover temperature. This scenario is quite general and holds for transitions involving conserved (for example, the ferromagnetic) as well as nonconserved (the antiferromagnetic) order parameters. The examples discussed in the present letter have been chosen to represent both of these order parameter fluctuations.
We use the microscopic spin fluctuation theory to discuss the behavior in detail. This theory has the low energy scale inherently built into it. Consider first the case of liquid <sup>3</sup>He. It is a Fermi system with a degeneracy temperature of about 5 K. It has some interesting normal state properties. For example, it behaves like a dense classical liquid down to 0.5 K and like a degenerate Fermi liquid below 0.2 K. It has a large (nuclear) spin susceptibility, about 10 to 25 times the free Fermi gas or Pauli susceptibility $`\chi _P`$, depending on pressure. The coefficient of the linear term in the specific heat is also large. In the spin fluctuation theory presented below, the liquid is regarded as being near a ferromagnetic instability. In this theory the temperature variation of various physical quantities is governed by transverse and longitudinal spin fluctuations. Though the actual transition does not take place, the effect of fluctuations is observable over a wide temperature range at low temperatures. In the following we use some results from our earlier works to discuss the crossing point in the specific heat curves. We consider first fluctuations above the transition to a ferromagnetic phase. The spin fluctuation contribution to the free energy within the mean fluctuation field approximation (or quasi harmonic approximation) at temperature $`T`$ for systems near a ferromagnetic instability is given by , $$\mathrm{\Delta }\mathrm{\Omega }=\frac{3T}{2}\sum _{q,m}\mathrm{ln}\left\{1-U\chi _{qm}^0+\lambda T\sum _{q^{\prime },m^{\prime }}D_{q^{\prime }m^{\prime }}\right\}.$$ (1) Here $`D_{q,m}`$ is the fluctuation propagator, which is related to the inverse dynamical susceptibility, $`\chi _{qm}^0`$ is the free Fermi gas (Lindhard) response function, and $`\lambda `$ is the fluctuation coupling constant. Considering only the thermal part of the integral and ignoring the zero point part, we perform the frequency summation and obtain $$\mathrm{\Delta }\mathrm{\Omega }_{Th}=\frac{3}{\pi }\sum _q\int _0^{\infty }\frac{d\omega }{e^{\omega /\tau }-1}\mathrm{arctan}\left\{\frac{\pi \omega /4q}{\alpha (\tau )+\delta q^2}\right\},$$ (2) where $`\alpha (\tau )`$ is the inverse of the spin susceptibility in units of the Pauli susceptibility. The wavevector $`q`$ is given in units of the Fermi momentum $`k_F`$ and the energy in units of the Fermi energy ($`\tau =T/T_F`$). For a free Fermi gas $`\gamma =1/2`$, $`\delta =1/12`$. The correction to the specific heat is given by $$\frac{\mathrm{\Delta }C}{k_B}=-3\tau ^2\sum _q\left[\left(\frac{2}{\tau }\frac{\partial y}{\partial \tau }+\frac{\partial ^2y}{\partial \tau ^2}\right)\varphi (y)+\left(\frac{\partial y}{\partial \tau }\right)^2\frac{\partial \varphi (y)}{\partial y}\right]$$ (3) The function $`\varphi (y)`$ is given by $`\mathrm{ln}y-1/2y-\psi (y)`$, where $`\psi (y)`$ is the digamma function and $`y=q(\alpha (\tau )+\delta q^2)/(\pi ^2\gamma \tau )`$. $`\varphi (y)`$ is related to the fluctuation self energy summed over frequency. It varies as $`1/2y`$ for $`y\ll 1`$ and as $`1/12y^2`$ for $`y\gg 1`$. Clearly the calculation of the specific heat correction involves the temperature dependence of the spin susceptibility. A self consistent equation for the temperature dependence of $`\alpha (T)`$ within the one spin fluctuation approximation is given by , $$\alpha (\tau )=\alpha (0)+\frac{\lambda }{\pi }\sum _qq\varphi (y).$$ (4) For a finite $`\alpha (0)`$ there are two regions of temperature.
For $`\tau <\alpha (0)`$, which corresponds to $`y\gg 1`$, one gets an enhanced Pauli susceptibility with standard paramagnon theory corrections, $`\alpha (\tau )=\alpha (0)+a\tau ^2/\alpha (0)`$, where $`a`$ turns out to be $`0.44`$. At higher temperatures ($`\alpha (0)<\tau <1`$) one finds $`\alpha (\tau )\sim \tau ^n`$ with an exponent $`1\le n\le 4/3`$. This result for the susceptibility mimics the classical Curie-Weiss behavior. Notice that even in the degenerate regime ($`\tau <1`$), the susceptibility of the Fermi system behaves like that of a collection of classical spins. This behavior agrees well with the experimental results of Thompson et al. . The parameter $`\alpha (0)T_F`$ is the low energy scale which arises naturally in the spin fluctuation theory. The corresponding low temperature ($`\tau \ll \alpha (0)`$) correction to the specific heat is $$\frac{\mathrm{\Delta }C}{k_B}=\sum _q\frac{\pi ^2\tau }{4q(\alpha +\delta q^2)}.$$ (5) The phase space integral reproduces the standard paramagnon mass enhancement result, $`-\tau \mathrm{ln}\alpha `$, for $`\mathrm{\Delta }C`$. In the classical regime, $`\alpha (0)\ll \tau \ll 1`$, where the small $`y`$ approximation holds and $`\alpha (\tau )`$ varies as $`\tau `$, $`\mathrm{\Delta }C`$ falls as $`1/\tau ^2`$ and vanishes at higher temperatures. The main point of the above discussion is that there are two regimes for the specific heat, similar to the regimes in the susceptibility variation. The behavior of the specific heat in these two regimes is qualitatively different. At low temperature there is an enhanced linear rise of the specific heat correction with temperature, leading to a peak and thereafter a slow fall as the temperature increases. The peak marks the transition from the quantum to the classical spin fluctuation regime. Considered as a function of $`\alpha (0)T_F`$, the temperature dependence of the specific heat is more revealing. Below a certain temperature $`T_{cr}`$ the specific heat decreases as $`\alpha (0)T_F`$ increases, while above it the behavior is reversed (see Fig. 1). $`T_{cr}`$ clearly marks the crossing and is of the order of $`\alpha (0)T_F`$. The spin fluctuation theory has only one parameter, namely $`\alpha (0)T_F`$. The pressure or magnetic field dependence of physical quantities is realised through the dependence of $`\alpha (0)T_F`$ on them. Whenever $`\alpha (0)T_F`$ is a homogeneously increasing or decreasing function of these parameters, the specific heat curves will cross at a point. In this case $`\partial C/\partial (\alpha (0)T_F)=0`$ at $`T=T_{cr}`$ also means $`\partial C/\partial X=0`$ at the same temperature, where $`X`$ is an external control parameter like pressure or magnetic field. The latter equation is the condition for crossing of the curves at a point. For liquid <sup>3</sup>He the specific heat is plotted in Fig. 2 as a function of temperature for various values of pressure, assuming a linear reduction of $`\alpha (0)T_F`$ with pressure. The linear scaling is experimentally observed at pressures above about 15 bar; at smaller pressures there is some departure. The peak in $`C(T)`$ appears around $`0.15`$ K. To calculate the specific heat, the free Fermi gas part ($`\pi ^2T/2T_F`$) has been added to $`\mathrm{\Delta }C(T)`$. The value of $`\alpha (\tau )`$ has been calculated self consistently using Eq. (4) and then used as an input in the specific heat calculation. The coupling constant $`\lambda `$ has been chosen to be $`0.08`$ and the cutoff for the momentum sum to be $`1.2k_F`$. The crossing temperature is related to $`\alpha (0)T_F`$, which in general depends on pressure. The crossing point shifts slightly towards the high temperature side with increasing cutoff and decreasing $`\lambda `$, but the nature of the crossing is not affected.
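To make the self-consistency step concrete, the following is a minimal numerical sketch (ours, not from the original work) of the iteration of Eq. (4), using the values quoted above, $`\lambda =0.08`$ and a cutoff of $`1.2k_F`$. The momentum sum is modeled as an integral with a $`q^2dq`$ measure whose normalization is absorbed into $`\lambda `$ (a schematic choice, not necessarily the paper's exact prescription), but it exhibits the two regimes: $`\alpha (\tau )-\alpha (0)\propto \tau ^2`$ at low $`\tau `$, crossing over to a roughly linear, Curie-Weiss-like growth at higher $`\tau `$.

```python
import numpy as np
from scipy.special import digamma
from scipy.integrate import quad

GAM, DELTA = 0.5, 1.0 / 12.0   # free-Fermi-gas values of gamma and delta
LAM, QCUT = 0.08, 1.2          # coupling lambda and momentum cutoff (units of k_F)

def phi(y):
    # phi(y) = ln y - 1/(2y) - psi(y): ~ 1/(2y) for y << 1, ~ 1/(12 y^2) for y >> 1
    return np.log(y) - 0.5 / y - digamma(y)

def alpha_tau(alpha0, tau, n_iter=50):
    """Iterate Eq. (4) to self-consistency.  The momentum sum is modeled as an
    integral with a q^2 dq measure, its normalization absorbed into LAM (a
    schematic choice made purely for illustration)."""
    a = alpha0
    for _ in range(n_iter):
        integrand = lambda q: q**3 * phi(q * (a + DELTA * q * q)
                                         / (np.pi**2 * GAM * tau))
        a = alpha0 + (LAM / np.pi) * quad(integrand, 1e-8, QCUT)[0]
    return a

alpha0 = 0.03   # zero-temperature inverse susceptibility (illustrative value)
for tau in (0.003, 0.01, 0.03, 0.1, 0.3):
    a = alpha_tau(alpha0, tau)
    print(f"tau = {tau:5.3f}   alpha = {a:.4f}   (alpha-alpha0)/tau^2 = {(a - alpha0) / tau**2:8.3f}")
```

The printed $`\alpha (\tau )`$ feeds directly into the specific heat correction of Eq. (3); repeating the calculation for a family of $`\alpha (0)`$ values mimicking different pressures is what produces the crossing curves of Fig. 2.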
There are some heavy fermion materials in which the specific heat curves cross. We consider the cases of CeAl<sub>3</sub> and UBe<sub>13</sub> . CeAl<sub>3</sub> does not undergo either a magnetic or a superconducting transition, while UBe<sub>13</sub> becomes a superconductor at 0.9 K at normal pressure. The present discussion pertains to their normal state properties only. Heavy fermions are characterised by a large linear temperature dependent term in the specific heat and a large low temperature spin susceptibility . In this regime the resistivity also shows a T<sup>2</sup> behavior characteristic of a Fermi liquid. Above a certain temperature $`T^{*}`$, the susceptibility starts showing Curie-Weiss behavior, indicating the existence of interacting local moments on the f-shells. The change from local-moment to Pauli-like behavior of the susceptibility as the temperature is reduced marks the onset of coherence in these systems. In UBe<sub>13</sub> this coherence regime is less visible because of the onset of superconductivity, but once the superconductivity is suppressed by the application of pressure the coherence is restored . Since at present a clear microscopic understanding of the behavior of heavy fermions is lacking, one has to take recourse to various levels of phenomenology. It is possible that the unusual low temperature dependence of physical properties in UBe<sub>13</sub>, for example, is due to its being a non-Fermi liquid of as yet unknown origin. We take the point of view here that this behavior can be described in terms of proximity to a quantum critical point, which is also known to lead to temperature dependences different from Fermi liquid theory (for example ). Because of the similarity to liquid <sup>3</sup>He, at the phenomenological level it is tempting to apply the spin fluctuation theory to these materials, with $`\alpha (0)T_F`$ playing the role of the crossover temperature $`T^{*}`$. However, there is a difference. While <sup>3</sup>He can be considered close to a ferromagnetic transition, most heavy fermion materials seem close to an antiferromagnetic instability. In the present work we therefore treat the heavy fermions in the coherence regime as a nearly antiferromagnetic Fermi liquid. We have calculated the specific heat corrections by writing the equations for the susceptibility enhancement and specific heat near an antiferromagnetic instability. The formalism remains the same except that the factor $`\omega /q`$ in Eq. (2) is replaced by $`\omega `$, to take care of the low energy behavior of the fluctuation propagator . The difference is due to the fact that the order parameter no longer remains a conserved quantity. Further, to reproduce the huge effective mass observed, the fluctuation modes are essentially dispersionless in heavy fermions , namely the coefficient of the $`q^2`$ term in $`y`$ vanishes, i.e., $`\delta \to 0`$. In this case the leading contribution to the specific heat is $`\tau /\alpha (0)`$ at low temperatures. In the same range of temperatures the leading temperature correction to the zero temperature susceptibility is $`\tau ^2/\alpha ^2(0)`$. In Figs. 3 and 4 the specific heat curves for CeAl<sub>3</sub> and UBe<sub>13</sub> are plotted as functions of temperature for various pressures. The value of $`\gamma `$ has been taken to be 0.185 and the cutoff $`q_c`$ is 2.0.
The fluctuation coupling $`\lambda `$ is 5 x 10<sup>-4</sup> for CeAl<sub>3</sub> and 2 x 10<sup>-4</sup> for UBe<sub>13</sub>, and decreases slightly with pressure. The parameter $`\alpha (0)T_F`$ is of the order of the crossing temperature, with a weak linear pressure dependence; the variation with pressure is within 10%. In contrast to <sup>3</sup>He, here $`\alpha (0)T_F`$ increases with pressure. This is because in <sup>3</sup>He pressure brings the atoms closer and thereby increases the interaction, while in heavy fermions the reduction in the lattice parameter enhances the hybridization between the conduction electrons and the f electrons; the antiferromagnetic exchange between the local moments and the conduction electrons is thereby enhanced, leading to a nonmagnetic ground state. It is seen that the curves cross within a small regime close to the experimental crossing point. Beyond the crossing point the deviation from the experimental curves is large. In fact, in heavy fermions the curves cross at two points, the second point being away from the crossover temperature $`T^{*}`$, though still at temperatures far below $`T_F`$. The reason for the second crossing cannot be found in a single parameter theory like the present one. It might be due to some other low lying modes like crystal field excitations or phonons . So far we have discussed the ferro- and antiferromagnetic quantum critical points. In a phenomenological model attempting to incorporate some aspects of strong correlations near the Mott transition, Rice et al. generalized the Brinkman-Rice theory to finite temperatures by introducing an extra ansatz for the entropy. It was applied to the case of UBe<sub>13</sub> and later to liquid <sup>3</sup>He . At a low energy scale, which is related to the reduction of double occupancy, there is a crossover from Pauli to Curie behavior of the susceptibility. However, the specific heat curves for liquid <sup>3</sup>He at various pressures then seem to cross over a wide range of temperatures, unlike the experimental findings . Recently, the metal-insulator transition has been discussed within the single band Hubbard model in infinite dimensions by Georges and Krauth . A low energy scale, related to the vanishing quasiparticle weight, arises on the metallic side of the transition. The specific heat curves cross at a temperature around this scale. However, the theory gives a second crossing around the energy scale $`U`$. We have used the terms quantum and classical in the discussion above because the temperatures below $`\alpha (0)T_F`$ essentially define a regime where one gets Fermi liquid behavior, whereas at high temperatures the fluctuations get correlated, resulting in classical behavior of the susceptibility. The distinction, quantum versus classical, becomes clear when one takes the limit $`\alpha (0)\to 0`$ (the quantum critical point). In that case the Curie law for the susceptibility is obtained down to zero temperature , while in the opposite limit ($`\alpha (0)\sim 1`$) one gets the Pauli susceptibility; in either of these limits the curves for the specific heat do not cross. Acknowledgement: We are grateful to Prof. T. V. Ramakrishnan for a critical reading of the manuscript.
# The kinematics and the origin of the ionized gas in NGC 4036 ## 1 Introduction NGC 4036 has been classified S0<sub>3</sub>(8)/Sa in RSA (Sandage & Tammann 1981) and S0<sup>-</sup> in RC3 (de Vaucouleurs et al. 1991). Its total apparent magnitude is $`V_T=10.66`$ mag (RC3). This corresponds to a total luminosity $`L_V=4.2\times 10^{10}`$ $`\mathrm{L}_{V,\odot }`$ at the assumed distance of $`d=V_0/H_0=30.2`$ Mpc, where $`V_0=1509\pm 50`$ $`\mathrm{km\,s^{-1}}`$ (RSA) and assuming $`H_0=50`$ $`\mathrm{km\,s^{-1}\,Mpc^{-1}}`$. At this distance the scale is 146 pc arcsec<sup>-1</sup>. We measured the kinematics of the stars and the ionized gas along the galaxy major axis, and we also derived their distributions in the nuclear regions by means of ground-based $`V`$-band and HST narrow-band imaging respectively. ## 2 Modeling the stellar kinematics We apply to the observed stellar kinematics the Jeans modeling technique introduced by Binney et al. (1990), developed by van der Marel et al. (1990) and van der Marel (1991), and extended to two-component galaxies by Cinzano & van der Marel (1994). The best-fit model to the observed major-axis stellar kinematics is shown in Fig. 1. The bulge is an oblate isotropic rotator ($`k=1`$) with $`(M/L_V)_b=3.4`$ $`(M/L)_{\odot }`$. The exponential disk has $`(M/L_V)_d=3.4`$ $`(M/L)_{\odot }`$, with a velocity dispersion profile given by $`\sigma (r)=155\,e^{-r/r_\sigma }`$ $`\mathrm{km\,s^{-1}}`$ with scale-length $`r_\sigma =27.4^{\prime \prime }=4.0`$ kpc. The derived bulge and disk masses are $`M_b=9.8\times 10^{10}`$ $`\mathrm{M}_{\odot }`$ and $`M_d=4.8\times 10^{10}`$ $`\mathrm{M}_{\odot }`$, adding up to a total (bulge$`+`$disk) luminous mass of $`M_T=14.5\times 10^{10}`$ $`\mathrm{M}_{\odot }`$. The disk-to-bulge and disk-to-total luminosity ratios are $`L_d/L_b=0.58`$ and $`L_d/L_T=0.36`$. ## 3 Modeling the gaseous kinematics At small radii both the ionized gas velocity and velocity dispersion are comparable to the stellar velocity and velocity dispersion, for $`r\lesssim 9^{\prime \prime }`$ and $`r\lesssim 5^{\prime \prime }`$ respectively. Moreover a change in the slope of the \[O II\]$`\lambda 3726`$ intensity radial profile is observed inside $`r\approx 8^{\prime \prime }`$; its gradient appears to be somewhat steeper towards the center. The velocity dispersion and intensity profiles of the ionized gas suggest that it is distributed in two components: a small inner spheroidal component and a disk. We decomposed the \[O II\]$`\lambda 3726`$ intensity profile as the sum of an $`R^{1/4}`$ gaseous spheroid and an exponential gaseous disk, and the gas spheroid turned out to be the dominant component up to $`r\approx 8^{\prime \prime }`$. We built dynamical models for the ionized gas in NGC 4036 (Fig. 2). The gas was assumed to be distributed in a dynamically hot spheroidal component and a dynamically cold disk component, and to consist of collisionless individual clumps (cloudlets) which orbit in the total potential. We made two different sets of assumptions, based on two different physical scenarios for the gas cloudlets. Model A: In a first set of models we described the gaseous component as consisting of collisionless cloudlets which can be considered to be in hydrostatic equilibrium. The gaseous spheroid is characterized by a density distribution and flattening different from those of the stars.
Its major-axis luminosity profile was assumed to follow an $`R^{1/4}`$ law. The flattening of the spheroid $`q`$ was kept as a free parameter. To derive the kinematics of the gaseous spheroid and disk we solved the Jeans equations. Model B: In a second set of models we assumed that the emission observed in the gaseous spheroid and disk arises from material that was recently shed from stars. Different authors (Bertola et al. 1984, 1995b; Fillmore et al. 1986; Kormendy & Westpfahl 1989; Mathews 1990) suggested that the gas lost (e.g. in planetary nebulae) by stars is heated by shocks to the virial temperature of the galaxy within $`10^4`$ years, a time shorter than the typical dynamical time of the galaxy. Hence in this picture the ionized gas and the stars have the same true kinematics, while their observed kinematics differ only because of the line-of-sight integration over their different spatial distributions. ## 4 Do drag forces affect the kinematics of the gaseous cloudlets? The discrepancy between model and observations could be explained by properly taking into account the drag interaction between the ionized gas cloudlets of the gaseous spheroid and the hot component of the interstellar medium (Mathews 1990). To gain some qualitative insight into the effects of a drag force on the gas kinematics, we studied the case of a gaseous nebula moving in the spherical potential generated by a homogeneous mass distribution of density $`\rho `$ which, starting on a circular orbit, is decelerated by a drag force $`\mathbf{F}_{\mathrm{drag}}=-(k_{\mathrm{drag}}v/m)\mathbf{v}`$, where $`m`$ and $`\mathbf{v}`$ are the mass and the velocity of the gaseous cloud and the constant $`k_{\mathrm{drag}}`$ is given following Mathews (1990). We numerically solved the equations of motion of a nebula to study the time dependence of the radial and tangential velocity components $`\dot{r}`$ and $`r\dot{\psi }`$. We fixed the potential by assuming a circular velocity of 250 $`\mathrm{km\,s^{-1}}`$ at $`r=1`$ kpc. Following Mathews (1990) we took an equilibrium radius for the gaseous nebula of $`a_{\mathrm{eq}}=0.37`$ pc. We find that $`\ddot{\psi }<0`$ and $`\ddot{r}>0`$: the clouds spiral towards the galaxy center, as expected. Moreover the drag effects are greater for clouds starting with higher velocities, and are therefore negligible for the slowly moving clouds in the very inner region of NGC 4036. If the nebulae are homogeneously distributed in the gaseous spheroid, only the tangential component $`r\dot{\psi }`$ of their velocities contributes to the observed velocity. No contribution derives from the radial component $`\dot{r}`$ of their velocities. In fact, for each nebula moving towards the galaxy center which is also approaching us, we expect to find along the line of sight a receding nebula falling towards the center from the same galactocentric distance with an opposite line-of-sight component of its $`\dot{r}`$. However, the radial components of the cloudlet velocities (typically 30-40 $`\mathrm{km\,s^{-1}}`$) are crucial to explain the velocity dispersion profile and to understand how the difference between the observed velocity dispersions and the model B predictions arises. If the clouds are decelerated by the drag force, their orbits become more radially extended and the velocity ellipsoids acquire a radial anisotropy.
So we expect that (in the region of the gaseous spheroid) including drag effects in our gas modeling would give a velocity dispersion profile steeper than the one predicted by our isotropic model B, in better agreement with the observations. ## 5 Discussion and conclusions The modeling of the stellar and gas kinematics in NGC 4036 shows that the observed velocities of the ionized gas, moving in the gravitational potential determined from the stellar kinematics, cannot be explained without taking the gas velocity dispersion into account. In the inner regions of NGC 4036 the gas is not moving at the circular velocity. A better match with the observed gas kinematics is found by assuming the ionized gas to be made of collisionless clouds in a spheroidal and a disk component, for which the Jeans equations can be solved in the gravitational potential of the stars (i.e., model A). A much better agreement is achieved by assuming that the ionized gas emission comes from material which has recently been shed from the bulge stars (i.e., model B). If this gas is heated to the virial temperature of the galaxy (ceasing to produce emission lines) within a time much shorter than the orbital time, it shares the same ‘true’ kinematics as its parent stars. If this is the case we would observe different kinematics for the ionized gas and the stars due only to their different spatial distributions. An HST H$`\alpha `$$`+`$\[N II\] image of the nucleus of NGC 4036 confirms the smoothness of the emission distribution, except for a complex emission structure inside $`3^{\prime \prime }`$, as we expect for the gas spheroidal component. This kinematical modeling leaves open the questions about the physical state (e.g. the lifetime of the emitting clouds) and the origin of the dynamically hot gas. We tested the hypothesis that the ionized gas is located in short-lived clouds shed by evolved stars (e.g. Mathews 1990), finding satisfactory agreement with our observational data. These clouds may be ionized by the parent stars, by shocks, or by the UV flux from hot stars (Bertola et al. 1995a). The comparison with the more recent and detailed gas data of Fisher (1997) opens the possibility of further improving the modeling, if the drag effects on the gaseous cloudlets (due to the diffuse interstellar medium) are taken into account. These arguments indicate that the dynamically hot gas in NGC 4036 has an internal origin. This does not exclude the possibility that the gaseous disk is of external origin, as discussed for S0's by Bertola et al. (1992). Spectra at higher spatial resolution are needed to understand the structure of the gas inside $`3^{\prime \prime }`$.
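As a closing numerical note on the drag calculation of Sect. 4, the sketch below (our own illustration, not part of the original analysis) integrates the planar equations of motion of a single cloud in the harmonic potential of a homogeneous sphere, normalized to a circular velocity of 250 km s<sup>-1</sup> at $`r=1`$ kpc as above, with the drag law $`\mathbf{F}_{\mathrm{drag}}=-(k_{\mathrm{drag}}v/m)\mathbf{v}`$. The value of `KDRAG` is an arbitrary illustrative choice; the text instead fixes $`k_{\mathrm{drag}}`$ through the equilibrium radius $`a_{\mathrm{eq}}=0.37`$ pc following Mathews (1990), which we do not reproduce here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic potential of a homogeneous sphere, normalized so that the circular
# velocity is 250 km/s at r = 1 kpc; the time unit is then kpc/(km/s) ~ 0.98 Gyr.
OMEGA = 250.0          # v_c / r_c in (km/s) per kpc
KDRAG = 2.0e-3         # drag constant k_drag/m -- an illustrative value only

def rhs(t, u):
    """u = (r, rdot, psi, psidot); drag acceleration = -(KDRAG * v) * velocity."""
    r, rdot, psi, psidot = u
    v = np.hypot(rdot, r * psidot)
    racc = r * psidot**2 - OMEGA**2 * r - KDRAG * v * rdot
    psiacc = -2.0 * rdot * psidot / r - KDRAG * v * psidot
    return [rdot, racc, psidot, psiacc]

u0 = [1.0, 0.0, 0.0, OMEGA]                 # start on the circular orbit
sol = solve_ivp(rhs, (0.0, 0.2), u0, max_step=1e-4)

r, rdot = sol.y[0], sol.y[1]
print("final radius [kpc]:", r[-1])         # < 1: the cloud spirals inward
print("max |rdot| [km/s] :", np.abs(rdot).max())
```

Starting from the circular orbit, the cloud spirals inward while a radial velocity component builds up, in line with the qualitative behaviour described above.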
# Confining solutions of $`SU(3)`$ Yang - Mills theory ## I Introduction The strong, nuclear interaction (quantum chromodynamics or QCD) is thought to be described by a quantized SU(3) gauge theory. In this paper we will examine solutions to the classical field equations of an SU(3) gauge theory. The reason for investigating these classical field configurations is to see if they might shed some light on the confinement mechanism which is hypothesized to occur in the strong interaction. Although a full explanation of the confinement mechanism may require that one consider the fully quantized theory, the solutions presented in this paper have properties which mimic the behaviour of various phenomenological explanations of confinement. In particular the various solutions exhibit a bag-like structure similar to the bag models of confinement, an almost linearly increasing potential such as those used in the study of heavy quark bound states , and a string-like structure as found in the dual superconducting picture of confinement. The drawback of the classical configurations presented here is that they all have infinite field energy when their energy densities are integrated over all space. This can be compared with the finite energy monopole and dyon solutions of Yang-Mills field theory . At the classical level one might expect that only solutions which have fields that become infinite (and thus have an infinite field energy) are capable of giving a confining behaviour. In the context of $`SU(2)`$ Yang-Mills theory it has been shown that, at the classical level, finite energy solutions, like monopoles, do not lead to confinement, while infinite energy solutions do lead to confinement. Quantum effects may modify these classical solutions so as to soften the infinite field strengths and energies, in the same way that quantum effects soften the singularity of the Coulomb solution in E&M. ## II Spherically symmetric ansatz The ansatz for the $`SU(3)`$ gauge field we take as in : $$A_0=\frac{2\phi (r)}{\mathbf{i}r^2}\left(\lambda ^2x-\lambda ^5y+\lambda ^7z\right)+\frac{1}{2}\lambda ^a\left(\lambda _{ij}^a+\lambda _{ji}^a\right)\frac{x^ix^j}{r^2}w(r),$$ (1) $$A_i^a=\left(\lambda _{ij}^a-\lambda _{ji}^a\right)\frac{x^j}{\mathbf{i}r^2}\left(f(r)-1\right)+\lambda _{jk}^a\left(\epsilon _{ilj}x^k+\epsilon _{ilk}x^j\right)\frac{x^l}{r^3}v(r),$$ (2) here $`\lambda ^a`$ are the Gell-Mann matrices; $`a=1,2,\mathrm{},8`$ is a color index; the Latin indices $`i,j,k,l=1,2,3`$ are the space indices; $`\mathbf{i}^2=-1`$; and $`r,\theta ,\phi `$ are the usual spherical coordinates. Substituting Eqs. (1) - (2) into the Yang - Mills equations $$\frac{1}{\sqrt{g}}\partial _\mu \left(\sqrt{g}F^{a\mu }{}_{\nu }\right)+f^{abc}F^{b\mu }{}_{\nu }A_\mu ^c=0,$$ (3) gives the following system of equations for $`f(r),v(r),w(r)`$ and $`\phi (r)`$: $$r^2f^{\prime \prime }=f^3-f+7fv^2+2vw\phi -f\left(w^2+\phi ^2\right),$$ (4) $$r^2v^{\prime \prime }=v^3-v+7vf^2+2fw\phi -v\left(w^2+\phi ^2\right),$$ (5) $$r^2w^{\prime \prime }=6w\left(f^2+v^2\right)-12fv\phi ,$$ (6) $$r^2\phi ^{\prime \prime }=2\phi \left(f^2+v^2\right)-4fvw.$$ (7) This set of equations is difficult to solve even numerically, thus we will investigate various simplified cases in which only two of the functions are nonzero. Under this assumption there are three cases. In the first case, $`(f,w=0)`$ or $`(v,w=0)`$, Eqs. (4) - (7) reduce to a form similar to the system of equations studied in , which yields the well known dyon solutions. We will examine the cases where $`w=\phi =0`$, and $`f=\phi =0`$ (or $`v=\phi =0`$). ### A The $`SU(3)`$ bag In this case we set $`w=\phi =0`$ so that Eqs. (4) - (7) reduce to the following form: $$r^2f^{\prime \prime }=f^3-f+7fv^2,$$ (8) $$r^2v^{\prime \prime }=v^3-v+7vf^2.$$ (9) To simplify the equations further we take $`f(r)=v(r)=q(r)/\sqrt{8}`$. This reduces Eqs. (8) - (9) to $$r^2q^{\prime \prime }=q(q^2-1)$$ (10) This is the Wu-Yang equation. In addition to the monopole solutions of this equation it is also known that it possesses a solution which becomes infinite on a spherical surface . If one lets this spherical surface be at $`r=r_0`$, then in the limit $`r\to r_0`$ the solution approaches $$q(r)\to \frac{\sqrt{2}r_0}{r_0-r}$$ (11) Using Eq. (11) to find $`f(r),v(r)`$ and inserting these back into Eq. (2) shows that the $`A_i^a`$ gauge field develops a singularity on the sphere of radius $`r=r_0`$. It is easy to solve Eq. (10) numerically (for this work we used the Mathematica numerical differential equation solver routine). In solving Eq. (10) we took the function $`q(r)`$ to have, near $`r=0`$, a series expansion of the form $$q(r)=1+q_2\frac{r^2}{2!}+\cdots $$ (12) where $`q_2`$ is some constant. Choosing a specific $`q_2`$ at some radius $`r=r_i`$ sets the initial conditions on $`q(r_i),q^{\prime }(r_i)`$ for the numerical solution. The choice of these initial conditions determines the radius at which $`q(r)`$ becomes singular. In Fig. 1 we show a typical example of $`q(r)`$. This type of field configuration is somewhat similar to a bag-like structure, and it has been shown that such structures lead to the confinement of a test particle placed in the field of this solution . If complex gauge fields are allowed, or if scalar fields are introduced into the field equations, it is possible to find analytical solutions which possess gauge fields that are singular on some spherical surface of radius $`r=r_0`$. Several authors have remarked on the mathematical similarity between the above solution and the Schwarzschild solution of general relativity, which leads to a gravitational type of confinement.
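As a cross-check of this behaviour without Mathematica, the following minimal Python sketch (ours, not from the paper) integrates Eq. (10) from the series start of Eq. (12), using the Fig. 1 values $`q_2=0.1`$ and $`r_i=0.001`$, and stops once $`q(r)`$ blows up. The blow-up threshold and tolerances are arbitrary choices, and the printed $`r_0`$ is only a rough estimate of the singular radius appearing in Eq. (11).

```python
import numpy as np
from scipy.integrate import solve_ivp

q2, r_i = 0.1, 1e-3                 # series parameter and starting radius (Fig. 1)

def rhs(r, y):                      # Eq. (10): r^2 q'' = q (q^2 - 1)
    q, dq = y
    return [dq, q * (q * q - 1.0) / (r * r)]

# Series start, Eq. (12): q(r_i) = 1 + q2 r_i^2/2, q'(r_i) = q2 r_i
y0 = [1.0 + 0.5 * q2 * r_i**2, q2 * r_i]

def stop(r, y):                     # terminate once q(r) exceeds 10^6
    return y[0] - 1.0e6
stop.terminal = True

sol = solve_ivp(rhs, (r_i, 50.0), y0, events=stop, rtol=1e-10, atol=1e-12)

r0 = sol.t[-1]                      # rough estimate of the singular radius
print("q(r) blows up near r0 =", r0)
r_t, q_t = sol.t[-4], sol.y[0][-4]  # compare with Eq. (11) just below r0
print("q(numeric)/q(Eq. 11)  =", q_t / (np.sqrt(2) * r0 / (r0 - r_t)))
```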
### B The $`SU(3)`$ bunker Here we examine the $`f=\phi =0`$ case. The case $`v=\phi =0`$ is entirely analogous. From Eqs. (4) - (7) the equations for the ansatz functions become $$r^2v^{\prime \prime }=v^3-v-vw^2,$$ (13) $$r^2w^{\prime \prime }=6wv^2.$$ (14) Near $`r=0`$ we took the series expansion form for $`v`$ and $`w`$ as $$v=1+v_2\frac{r^2}{2!}+\cdots ,$$ (15) $$w=w_3\frac{r^3}{3!}+\cdots $$ (16) where $`v_2,w_3`$ are constants which determine the initial conditions on $`v`$ and $`w`$, as in the last section. In the asymptotic limit $`r\to \infty `$ the form of the solutions to Eqs. (13) - (14) approaches $$v\to A\mathrm{sin}\left(x^\alpha +\varphi _0\right),$$ (17) $$w\to \pm \left[\alpha x^\alpha +\frac{\alpha -1}{4}\frac{\mathrm{cos}\left(2x^\alpha +2\varphi _0\right)}{x^\alpha }\right],$$ (18) $$3A^2=\alpha (\alpha -1).$$ (19) where $`x=r/r_0`$ is a dimensionless radius and $`r_0,\varphi _0`$, and $`A`$ are constants. The second, strongly oscillating term in $`w(r)`$ is kept since it contributes to the asymptotic behaviour of $`w^{\prime \prime }`$. As in the previous case we did not find an analytical solution for Eqs. (13) - (14),
but it is straightforward to solve these equations numerically. A typical solution is shown in Fig. 2. The strongly oscillating behaviour of $`v(r)`$ results in the space part of the gauge field of Eq. (2) being strongly oscillating. The ansatz function $`w(r)`$ increases as some power of $`x`$ as $`x\to \infty `$, and would lead to the confinement of a test particle placed in the background field of this solution. (For the bunker solution there is some subtlety associated with the confinement of the test particle, due to pair creation when the test particle scatters off the potential. This is essentially related to the Klein paradox and is discussed in Refs. .) The type of confinement given by this bunker solution is different from that of the bag-like solution of the previous subsection. First, the confining behaviour of the bag-like solution came from the “magnetic” part of the gauge field ($`A_i^a`$) through the ansatz functions $`v(r),f(r)`$, while in the present case it is the “electric” part of the gauge field ($`A_0^a`$) which gives confinement, through the ansatz function $`w(r)`$. Second, the bag-like solution confines a test particle by the field strength becoming infinitely large at some finite value of $`r`$, while the present solution confines a test particle by the field strength increasing without bound as $`r\to \infty `$. The power law with which $`w(r)`$ increases changes as $`r`$ increases. In Fig. 3 we show a plot of $`\mathrm{Log}(w)`$ versus $`\mathrm{Log}(x)`$ for the solution of Fig. 2. At around $`\mathrm{Log}(x)\approx 0.7`$ the slope of the line (and therefore the power law increase of $`w(r)`$) changes from $`\alpha \approx 2.8`$ to $`\alpha \approx 1.3`$. Depending on the initial conditions we found that for $`x`$ near the origin $`\alpha `$ was in the range $`2-3`$, while as $`x`$ became large $`\alpha `$ decreased to the range $`1.2-1.8`$. In studies of heavy quark bound states a potential which increases as $`r\to \infty `$ is often used to successfully model the excited states of these systems. In these studies the increase is usually linear in $`r`$. The “magnetic” and “electric” fields associated with this solution can be found from $`A_\mu ^a`$, and have the following behaviour: $$H_r^a\sim \frac{v^2-1}{r^2},\qquad H_\phi ^a\sim v^{\prime },\qquad H_\theta ^a\sim v^{\prime },$$ (20) $$E_r^a\sim \frac{rw^{\prime }-w}{r^2},\qquad E_\phi ^a\sim \frac{vw}{r},\qquad E_\theta ^a\sim \frac{vw}{r},$$ (21) here for $`E_r^a,H_\theta ^a`$, and $`H_\phi ^a`$ the color index is $`a=1,3,4,6,8`$, and for $`H_r^a,E_\theta ^a`$ and $`E_\phi ^a`$ it is $`a=2,5,7`$. The asymptotic behaviour of $`H_\phi ^a,H_\theta ^a`$ and $`E_\phi ^a,E_\theta ^a`$ is dominated by the strongly oscillating function $`v(r)`$. If quantum corrections were applied to this solution, it is expected that these strongly oscillating fields would be smoothed out and would not play a significant role in the large $`r`$ limit. From Eqs. (20) - (21) and the asymptotic form of $`v(r),w(r)`$, the radial components of the “magnetic” and “electric” fields have the following asymptotic behaviour: $$H_r^a\sim \frac{1}{r^2},\qquad E_r^a\sim \frac{1}{r^{2-\alpha }}.$$ (22) where the strongly oscillating portion of $`H_r^a`$ is assumed not to contribute in the limit of large $`r`$ due to smoothing by quantum corrections. The radial “electric” field falls off more slowly than $`1/r^2`$ (since $`\alpha >1`$), indicating the presence of a confining potential. The $`1/r^2`$ fall off of $`H_r^a`$ indicates that this solution carries a “magnetic” charge. This was also true for the simple solutions discussed in Refs. .
It can also be shown in the same way that the bag-like solution of the previous section carries a “magnetic” charge. This leads to the result that if a test particle is placed in the background field of either the bag or the bunker solution, this composite system will have unusual spin properties (i.e. if the test particle is a boson the system will behave as a fermion, and if the test particle is a fermion the system will behave as a boson). Just as for the bag solution, the biggest drawback of the present solution is its infinite field energy. The bunker solution has an asymptotic energy density proportional to $$4\frac{v^{\prime 2}}{r^2}+\frac{2}{3}\left(\frac{w^{\prime }}{r}-\frac{w}{r^2}\right)^2+4\frac{v^2w^2}{r^4}+\frac{2}{r^4}\left(v^2-1\right)^2\sim \frac{2}{3}\frac{\alpha ^2(\alpha -1)(3\alpha -1)}{x^{4-2\alpha }}$$ (23) Since we found $`\alpha >1`$, this energy density yields an infinite field energy when integrated over all space. This can be compared with the finite field energy monopole and dyon solutions . However, as remarked previously, it has been demonstrated that the finite energy monopole solutions do not trap a test particle while the infinite energy solutions do. What is the physical meaning of this solution? As in the case of the bag-like solution, one can examine the motion of a test particle in the background field of the bunker solution, and find in this way that the test particle will tend to remain confined due to the increasing gauge potential. Another possible interpretation is that this solution is the Yang-Mills analog of the Coulomb potential in electrostatics. An electron can exist as an asymptotic state while a quark cannot. Therefore, the bunker solution can be thought of as the far field of a color charge - a “quark”. The fact that the bunker solution possesses an infinite field energy then indicates that an isolated quark is not allowed as an observable free state. The Coulomb solution of electrostatics also possesses an infinite field energy, but the manner in which the field energy becomes infinite is different than for the bunker solution. Any point electric charge such as the electron has a singularity at $`r=0`$, but the “quark” field of the bunker solution has a singularity at $`r=\infty `$. To follow through on this interpretation of the bunker solution as an isolated “quark”, one should investigate what happens when two bunker solutions are placed in the vicinity of one another. In this way one might hope that the combination of two bunker solutions would lead to a localized, finite energy field configuration. Then if one tried to separate the two “quarks” the field energy would become infinite. However the nonlinear character of the classical SU(3) field equations makes this a difficult problem, beyond the scope of the present work. Finally it can be noted that this solution is in a sense asymptotically free, since at $`r=0`$ the gauge potential $`A_\mu ^a\to 0`$.
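A minimal Python sketch of the bunker integration (again ours, using the Fig. 2 values $`v_2=0.1`$, $`w_3=2.0`$, $`x_i=0.001`$; the integration range and tolerances are arbitrary) starts Eqs. (13) - (14) from the series (15) - (16) and prints the local logarithmic slope $`\alpha =d\mathrm{ln}w/d\mathrm{ln}x`$, whose decrease with $`x`$ is the behaviour displayed in Fig. 3:

```python
import numpy as np
from scipy.integrate import solve_ivp

v2, w3, x_i = 0.1, 2.0, 1e-3        # initial conditions quoted for Fig. 2

def rhs(x, y):                      # Eqs. (13)-(14)
    v, dv, w, dw = y
    return [dv, (v**3 - v - v * w * w) / (x * x),
            dw, 6.0 * w * v * v / (x * x)]

# Series start, Eqs. (15)-(16): v = 1 + v2 x^2/2!, w = w3 x^3/3!
y0 = [1.0 + 0.5 * v2 * x_i**2, v2 * x_i,
      w3 * x_i**3 / 6.0, 0.5 * w3 * x_i**2]

sol = solve_ivp(rhs, (x_i, 40.0), y0, rtol=1e-9, atol=1e-12, dense_output=True)

# Local logarithmic slope alpha(x) = d ln w / d ln x, cf. Fig. 3
for x in (1.0, 5.0, 30.0):
    v, dv, w, dw = sol.sol(x)
    print(f"x = {x:5.1f}   alpha ~ {x * dw / w:5.2f}   v = {v:+.3f}")
```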
## III The gauge “string” Let us write down the following ansatz: $$A_t^2=f(\rho ),$$ (24) $$A_z^5=v(\rho ),$$ (25) $$A_\phi ^7=\rho w(\rho ),$$ (26) here we use the cylindrical coordinate system $`z,\rho ,\phi `$. The color index $`a=2,5,7`$ corresponds to an embedding of $`SU(2)`$ in $`SU(3)`$. Using Eqs. (24) - (26) the Yang - Mills equations become $$f^{\prime \prime }+\frac{f^{\prime }}{\rho }=f\left(v^2+w^2\right),$$ (27) $$v^{\prime \prime }+\frac{v^{\prime }}{\rho }=v\left(f^2+w^2\right),$$ (28) $$w^{\prime \prime }+\frac{w^{\prime }}{\rho }-\frac{w}{\rho ^2}=w\left(f^2+v^2\right),$$ (29) Let us examine the simple case $`w=0`$, which reduces Eqs. (27) - (29) to $$f^{\prime \prime }+\frac{f^{\prime }}{\rho }=fv^2,$$ (30) $$v^{\prime \prime }+\frac{v^{\prime }}{\rho }=vf^2.$$ (31) At the origin $`\rho =0`$ the solution has the following form: $$f=f_0+f_2\frac{\rho ^2}{2}+\cdots ,$$ (32) $$v=v_0+v_2\frac{\rho ^2}{2}+\cdots .$$ (33) Substituting Eqs. (32) - (33) into (30) - (31) we find that $$f_2=\frac{1}{2}f_0v_0^2,$$ (34) $$v_2=\frac{1}{2}v_0f_0^2.$$ (35) The asymptotic behaviour of the ansatz functions $`f,v`$ and the energy density $`\mathcal{E}`$ can be given as $$f\to 2\left[x+\frac{\mathrm{cos}\left(2x^2+2\varphi _1\right)}{16x^3}\right],$$ (36) $$v\to \sqrt{2}\frac{\mathrm{sin}\left(x^2+\varphi _1\right)}{x},$$ (37) $$\mathcal{E}\sim f^{\prime 2}+v^{\prime 2}+f^2v^2\to \mathrm{const},$$ (38) where $`x=\rho /\rho _0`$ is a dimensionless radius, and $`\rho _0,\varphi _1`$ are constants. To solve the system of Eqs. (30) - (31) for all $`\rho `$ we again used numerical methods. A typical solution for $`f`$ and $`v`$ is shown in Fig. 4. As in the solution of the previous section, we have a confining potential $`A_t^2=f(\rho )`$ and a strongly oscillating potential $`A_z^5=v(\rho )`$. Depending on the relationship between $`v_0`$ and $`f_0`$, the energy density near $`\rho =0`$ will show either a hollow (i.e. an energy density less than the asymptotic value) or a hump (i.e. an energy density greater than the asymptotic value). On account of this, and of the cylindrical symmetry of this solution, we call this the “string” solution. The quotation marks indicate that this is a string from an energetic point of view, not from the potential ($`A_\mu ^a`$) or field strength ($`F_{\mu \nu }^a`$) point of view. After quantization the oscillating functions will most likely vanish and only the confining potential and the constant energy density will remain. This “string”-like solution can be thought of as describing the classical gauge field between two “quarks”. Similar string-like configurations are thought to occur in the dual superconductor picture of confinement, and lattice calculations (nonperturbative quantization) may also give evidence for such structures.
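The same integrator handles the string system. The sketch below (ours) starts Eqs. (30) - (31) from the series (32) - (35) with the Fig. 4 values $`f_0=v_0=0.75`$, and monitors $`f/x`$, which should settle to a constant (linear confinement, cf. Eq. (36)), together with the combination $`f^{\prime 2}+v^{\prime 2}+f^2v^2`$ entering the energy density of Eq. (38):

```python
import numpy as np
from scipy.integrate import solve_ivp

f0 = v0 = 0.75                      # initial values quoted for Fig. 4
x_i = 1e-3

def rhs(x, y):                      # Eqs. (30)-(31)
    f, df, v, dv = y
    return [df, f * v * v - df / x, dv, v * f * f - dv / x]

# Series start, Eqs. (32)-(35): f2 = f0 v0^2/2, v2 = v0 f0^2/2
f2, v2 = 0.5 * f0 * v0**2, 0.5 * v0 * f0**2
y0 = [f0 + 0.5 * f2 * x_i**2, f2 * x_i,
      v0 + 0.5 * v2 * x_i**2, v2 * x_i]

sol = solve_ivp(rhs, (x_i, 25.0), y0, rtol=1e-9, atol=1e-12, dense_output=True)

for x in (5.0, 15.0, 25.0):
    f, df, v, dv = sol.sol(x)
    # f/x approaches a constant slope; E tracks the energy density of Eq. (38)
    print(f"x = {x:4.1f}   f/x = {f / x:.3f}   E ~ {df**2 + dv**2 + (f * v)**2:7.3f}")
```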
## IV Discussion In this paper we have examined several non-trivial classical solutions of $`SU(3)`$ Yang - Mills theory. Each of these solutions demonstrates some type of confining behaviour, indicating that this may be a general property of the classical $`SU(3)`$ Yang - Mills theory, and also that some form of this behaviour may carry over to the quantized theory. These infinite energy solutions to Eqs. (4) - (7) represent typical solutions to the classical field equations in the sense that they arise for a wide range of initial conditions. In contrast, the simple SU(3) monopole and dyon solutions investigated in Refs. are unique solutions in the sense that they arise only for certain initial conditions. In addition, the infinite energy solutions investigated here give rise to a classical type of confining behaviour which neither the SU(3) solutions of Refs. nor the finite energy solutions possess. The physical significance of the spherically symmetric cases is motivated by noting the similarities between these solutions and various phenomenological models of confinement. The first solution will confine a quantum test particle via the spherical singularity in the “magnetic” part of the gauge field, in a manner similar to some bag models. Studies of such bag-like field configurations with scalar and spinor test particles have been carried out . In both cases it was found that the test particles were confined inside $`r=r_0`$, and in Ref. a somewhat realistic spectrum of hadron masses was obtained in this way. The second solution has the “electric” part of the gauge field increasing like $`r^\alpha `$ for large $`r`$, with $`\alpha >1`$. If this field configuration is taken as representing the far field of an isolated “quark”, then the infinitely increasing field strength can be taken to indicate the impossibility of isolating an individual “quark”. In contrast, isolated electrons exist in nature since they generate electric fields which decrease at infinity. The third solution has a string-like structure from an energetic point of view. Similar string-like structures are found in the dual superconductor picture of confinement. Just as two interacting electrons generate an electric field which is essentially the superposition of the electric fields of the individual electrons, so two interacting quarks are thought to generate a string-like flux tube which runs from one quark to the other. The “string” solution obtained above is a classical model of such a field distribution. It appears as a string-like structure on the background of a field with constant energy density. The strongly oscillating components of this and the bunker solution will most likely be smoothed out once quantum effects are taken into account. ## V Acknowledgements This work has been funded in part by the National Research Council under the Collaboration in Basic Science and Engineering Program. The mention of trade names or commercial products does not imply endorsement by the NRC. List of figure captions Fig. 1. The $`q(r)`$ function for the $`SU(3)`$ bag. The initial conditions for this solution were $`q_2=0.1`$ and $`r_i=0.001`$. Fig. 2. The $`w(x)`$ confining function and the $`v(x)`$ oscillating function of the $`SU(3)`$ bunker solution. The initial conditions for this particular solution were $`v_2=0.1`$, $`w_3=2.0`$, and $`x_i=0.001`$. Fig. 3. A plot of $`\mathrm{Log}(w)`$ versus $`\mathrm{Log}(x)`$ for the solution of Fig. 2, showing the different power law behaviour in the small $`x`$ and large $`x`$ regions. Fig. 4. The $`SU(3)`$ “string” solution with the linearly confining function $`f(x)`$ and the strongly oscillating function $`v(x)`$. The initial conditions for this solution were $`f_0=0.75`$, $`v_0=0.75`$ and $`x_i=0.001`$.
# cond-mat/9902159 “X-Ray Edge” Singularities in Nanotubes and Quantum Wires with Multiple Subbands ## I Introduction It is well known that one-dimensional (1d) metals are profoundly influenced by interactions. The generic behavior for a 1d system with a single conduction band is that of a Luttinger liquid, in which the quasiparticle excitations of the non-interacting system are converted into spin and charge collective modes. These collective modes are largely orthogonal to the bare quasiparticle states close to the Fermi surface, hence leading to a dramatic power-law suppression of the tunneling Density of States (DOS) at the Fermi energy. A similar vanishing occurs in the momentum-dependent single-particle spectral function, signaling a complete breakdown of the quasiparticle picture. Indeed, tunneling experiments evidence the novel nature of the Luttinger liquid much more strongly than do other measures such as four-probe resistance and optical conductivity, which are essentially probes of collective modes. There now appears to be increasing evidence for Luttinger liquid behavior in carbon nanotubes, which are perhaps the most ideal experimental quantum wires to date. In this paper, we address the behavior of the tunneling DOS far above the Fermi energy. As the voltage bias between a tunneling tip and the quantum wire is increased, it becomes possible to add an electron not only to the conduction band(s), but to higher energy unoccupied subbands. Such multiple subband structure exists both in carbon nanotubes and in semiconductor quantum wires. In the former case, the unoccupied subbands are essentially states of different angular momentum around the graphitic cylinder. For semiconductor systems, the higher subbands arise from different standing-wave modes transverse to the wire’s propagation direction in the confinement region. In a non-interacting model, the density of states would exhibit van Hove singularities at the subband edges. In one dimension, these singularities are divergent, giving a contribution $`\rho _0(ϵ)\propto \sqrt{m}(ϵ-\mathrm{\Delta })^{-1/2}\mathrm{\Theta }(ϵ-\mathrm{\Delta })`$ for energies just above the subband edge at $`ϵ=\mathrm{\Delta }`$ ($`m`$ is the subband effective mass). An asymmetric peak structure has indeed been observed in STM measurements of individual nanotubes on gold surfaces. How do interactions affect these van Hove singularities? A simplified, though unphysical, model in which the mass of the higher subband is taken to be infinite provides considerable insight. In this limit the higher energy subbands can be replaced by discrete, localized levels. The “x-ray edge” problem of a localized level interacting with a conduction sea was solved by Nozieres and de Dominicis, and is one of the first demonstrations of an orthogonality catastrophe. Physically, the core hole is “dressed” through interactions with conduction electrons, which see the hole as a scattering center. A bare or undressed hole is then orthogonal to its dressed counterpart, since an infinite number of conduction electrons are available to scatter off of it. This leads to a broadening and reduction of the tunneling density of states from a sharp delta-function to a power law singularity. While this effect is superficially similar to the suppression of spectral weight in the Luttinger liquid, it is in fact quite distinct. It is due not to the absence of a well-defined core hole excitation, but to its mixing with the conduction sea.
The distinction is emphasized by the fact that x-ray edge singularities are present in any dimension, not just in 1d. Suppose now that the mass of the higher subband is taken finite. Since for the tunneling density of states only a single particle is being added to the metal, one is faced with understanding the behavior of a heavy particle in a Luttinger liquid. This problem has been investigated by a number of authors in a different context. These authors were primarily concerned with the mobility of the heavy particle in response to an external electric field at finite temperature. For the tunneling DOS, one is interested in a rather different property – essentially, the overlap between the two ground states in the presence and absence of the heavy particle. This overlap may be thought of as a boundary condition changing operator, and is completely outside the Hilbert space of the problem in which the heavy particle is always present. In this paper, we demonstrate that x-ray edge effects persist even in this finite mass case. They reduce the naive van Hove singularities in the tunneling DOS by a power-law amount. Like the singularities in the original x-ray edge problem (but unlike the Luttinger singularities at low bias), this modification does not, however, signal the destruction of sharp quasiparticle excitations in the higher subbands. We argue that a necessary and sufficient condition for the presence of such finite-energy singularities is a conserved quantum number distinguishing the states of the higher subband from the conduction states. In the case of the carbon nanotube, this is an angular momentum quantum number. For a semiconductor wire, such a good quantum number exists in the ideal case of a symmetric confining potential, in which case the second subband has odd parity with respect to reflection, while the lowest subband has even parity. If such a distinguishing quantum number is absent, we expect the van Hove peak to be rounded and rendered completely nonsingular. The case when only forward scattering interactions are present between the higher subband particles and the conduction sea is asymptotically exactly soluble by bosonization methods, as we outline here. As discussed in Ref. , this forward scattering model is in fact an excellent approximation for single-walled carbon nanotubes with diameters $`D\sim 1`$ nm. Within this model, we predict a reduced density of states singularity $$\rho (ϵ)\sim \rho _0\left(\frac{\mathrm{\Delta }}{ϵ-\mathrm{\Delta }}\right)^{\frac{1}{2}-\beta }\mathrm{\Theta }(ϵ-\mathrm{\Delta }),$$ (1) where $`\mathrm{\Theta }(x)`$ is the Heaviside step function, and the orthogonality exponent $`\beta \approx 0.3`$ for typical metallic nanotubes. The form of Eq. 1 is expected to apply to the second subband in semiconductor quantum wires as well (see the concluding remarks for a discussion of experiments). ## II Forward Scattering Charge Model In this section we present a simple Forward Scattering Charge Model (FSCM) describing only forward scattering interactions, i.e. those processes involving small momentum exchange, in the total charge channel. We describe the conduction electrons by a Luttinger model, valid near the Fermi energy, which we take to be $`E=0`$: $$H_0^c=\sum _{i\alpha }\int dx\,v_F\left[\psi _{Ri\alpha }^{\dagger }(-i\partial _x)\psi _{Ri\alpha }-\psi _{Li\alpha }^{\dagger }(-i\partial _x)\psi _{Li\alpha }\right].$$ (2) Here $`v_F`$ is the Fermi velocity, and $`\alpha =\uparrow ,\downarrow `$ labels the electron spin.
We have also included an additional “flavor” index $`i=1\mathrm{}N_f`$ to allow for extra degenerate bands (with the same Fermi velocity) at the Fermi energy. For metallic carbon nanotubes, $`N_f=2`$ and $`i`$ should be interpreted as a sublattice index. No such special degeneracy is present for a semiconductor quantum wire, so $`N_f=1`$ in this case. Eq. 2 can be rewritten using bosonization. We follow the conventions of Ref. . One has $`\psi _{R/L;i\alpha }\sim e^{i(\varphi _{i\alpha }\pm \theta _{i\alpha })}`$, where the dual fields satisfy $`[\varphi _{i\alpha }(x),\theta _{j\beta }(y)]=i\pi \delta _{ij}\delta _{\alpha \beta }\mathrm{\Theta }(x-y)`$ ($`\mathrm{\Theta }(x)`$ is a Heaviside step function). Then $`H_0^c=\sum _{i,\alpha }\int dx\,\mathcal{H}_0^c(\theta _{i\alpha },\varphi _{i\alpha })`$, with $$\mathcal{H}_0^c(\theta ,\varphi )=\frac{v_F}{2\pi }[(\partial _x\theta )^2+(\partial _x\varphi )^2].$$ (3) The slowly varying electronic density in a given channel is given by $`\rho _{i\alpha }\sim \psi _{Ri\alpha }^{\dagger }\psi _{Ri\alpha }+\psi _{Li\alpha }^{\dagger }\psi _{Li\alpha }=\partial _x\theta _{i\alpha }/\pi `$. Physically, $`\theta `$ can be understood as a displacement or phonon field, while $`\varphi `$ carries the phase of the quantum wavefunction. It is simplest to work in a rotated basis of collective modes. For $`N_f=1`$, define $`\theta _{\rho /\sigma }=(\theta _{\uparrow }\pm \theta _{\downarrow })/\sqrt{2}`$. For $`N_f=2`$, let $`\theta _{i,\rho /\sigma }=(\theta _{i\uparrow }\pm \theta _{i\downarrow })/\sqrt{2}`$ and $`\theta _{\mu \pm }=(\theta _{1\mu }\pm \theta _{2\mu })/\sqrt{2}`$, with $`\mu =\rho ,\sigma `$. With similar definitions for the $`\varphi `$ fields, canonical commutators are preserved, and $`H_0^c=\sum _a\int _x\mathcal{H}_0^c(\theta _a,\varphi _a)`$, where $`a`$ is summed over the $`2N_f`$ rotated boson fields. Because we are interested only in energies near the putative van Hove singularity, the unoccupied 1d subband can be described by a non-relativistic electron operator $`d,d^{\dagger }`$: $$H_0^d=\int dx\,d_\alpha ^{\dagger }\left[-\frac{1}{2m}\partial _x^2+\mathrm{\Delta }\right]d_\alpha .$$ (4) Here $`\mathrm{\Delta }`$ is the gap to the first subband and $`m`$ is an effective mass. The electron field satisfies $`\{d_\alpha (x),d_\beta ^{\dagger }(x^{\prime })\}=\delta _{\alpha \beta }\delta (x-x^{\prime })`$. In the case of a carbon nanotube, there are actually multiple degenerate subbands at energy $`\mathrm{\Delta }`$. This degeneracy is unimportant within the FSCM, as the tunneling DOS involves only states with a single excited electron. We therefore neglect this additional degeneracy. The interactions in the FSCM are written as a single term coupling only the total charge density, $$H_{\mathrm{int}}=\frac{1}{2}\int dx\,dx^{\prime }\,\rho _{\mathrm{tot}}(x)V(x-x^{\prime })\rho _{\mathrm{tot}}(x^{\prime }),$$ (5) where $$\rho _{\mathrm{tot}}=e\left(d^{\dagger }d+\sum _{i\alpha }\rho _{i\alpha }\right)=e\left(d^{\dagger }d+\frac{\sqrt{2N_f}}{\pi }\partial _x\theta _\rho \right).$$ (6) Here and in what follows we abbreviate $`\theta _{\rho +}=\theta _\rho `$ for the $`N_f=2`$ case. A phenomenological form for the potential is sufficient for our purposes. We take $`V(x)=\mathrm{exp}(-|x|/\xi )/\sqrt{x^2+W^2}`$, modeling the smoothing of the interaction on the scale of the wire width by $`W`$ and including a screening length $`\xi `$, determined, e.g., by the distance to an external gate (any dielectric constant can be included by rescaling $`e^2\to e^2/ϵ`$). While it is possible to proceed directly with the non-local form in Eq.
5, near the van Hove singularity (within an energy of order $`v_F/W`$, up to a weak logarithmic factor) it is sufficient to make the local approximation $`V(x)\approx \delta (x)\int dx^{\prime }V(x^{\prime })`$. This gives $`H_{\mathrm{int}}=\int dx\,\mathcal{H}_{\mathrm{int}}`$, with

$$\mathcal{H}_{\mathrm{int}}=e^2\mathrm{ln}(\xi /W)\left(\frac{\sqrt{2N_f}}{\pi }\partial _x\theta _\rho +d^{\dagger }d\right)^2.$$ (7)

## III Solution of FSCM

To determine the effects of the interaction, it is convenient to employ a path integral formulation. Quantum mechanical expectation values are evaluated as functional integrals over classical fields in imaginary time with respect to a measure $`\mathrm{exp}(-\int dx\,d\tau \,\mathcal{L})`$, where $`\mathcal{L}`$ is a Lagrange density. Standard techniques give

$$\mathcal{L}=\frac{i}{\pi }\partial _x\theta _\rho \,\partial _\tau \varphi _\rho +d^{\dagger }\partial _\tau d+\mathcal{H},$$ (8)

with

$$\mathcal{H}=\mathcal{H}_0^c+\mathcal{H}_0^d+\mathcal{H}_{\mathrm{int}}=\frac{v_\rho }{2\pi g}\left[\partial _x\theta _\rho +\gamma \,d^{\dagger }d\right]^2+\frac{gv_\rho }{2\pi }(\partial _x\varphi )^2+\mathcal{H}_0^d.$$ (9,10)

Here we have defined the plasmon velocity $`v_\rho =\sqrt{v_F(v_F+(4N_fe^2/\pi \mathrm{}\hbar )\mathrm{ln}(\xi /W))}`$ with $`\hbar `$ Planck’s constant, the Luttinger parameter (“conductance”) $`g=v_F/v_\rho `$, and $`\gamma =\frac{\pi }{\sqrt{2N_f}}(1-g^2)`$. It may be tempting to proceed by perturbation theory in $`\gamma `$. Indeed, for properties of the conduction electrons at energies small compared to $`\Delta `$, this is a perfectly sensible procedure: as no heavy particles are present, the properties of the conduction sea are completely unaffected. However, the same is not true for the tunneling density of states. This is obtained from the heavy particle Green’s function, $`G(x,\tau )\equiv \langle d(x,\tau )d^{\dagger }(0,0)\rangle `$, via $`\rho (\epsilon )=-\pi ^{-1}\mathrm{Im}\int dk/2\pi \,G(k,i\omega \to \omega +i0^+)`$. The first perturbative correction to $`G`$, obtained from the self-energy diagram in Fig. 1a, is logarithmically divergent. Although we will not proceed along this route, this logarithmic divergence can be controlled to leading order using a renormalization group procedure which treats in a self-consistent fashion both this self-energy correction and the additional vertex renormalization given by the diagram in Fig. 1b. The results of this calculation are confirmed by a non-perturbative analysis, to which we now turn. The model in Eq. 10 is solved by a canonical transformation, or change of variables in the path integral:

$$\theta _\rho (x)=\tilde{\theta }_\rho (x)-\gamma \int _{-\mathrm{}}^{x}dx^{\prime }\,d^{\dagger }(x^{\prime })d(x^{\prime }),$$ (11)

$$d(x)=e^{i\gamma \varphi _\rho (x)/\pi }\tilde{d}(x).$$ (12)

Eqs. 11-12 embody the physical process in which the conduction sea adiabatically adjusts to the heavy particle. In particular, Eq. 11 represents the depletion of the conduction electron density near the heavy particle due to Coulomb repulsion. Eq. 12 represents phase shifts of these conduction electrons when the heavy particle is introduced. Formally, the exponential of the dual ($`\varphi `$) field in Eq. 12 is a Jordan-Wigner “string” operator which has been attached to the heavy particle.
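A small numerical illustration (ours; the parameter values are assumed, not taken from the paper) of how the interaction strength maps onto $`g`$ and onto the orthogonality exponent $`\beta =(1-g^2)^2/(4N_fg)`$ used below:

```python
import numpy as np

# Sketch (ours, illustrative numbers): Luttinger parameter g = vF/v_rho from
# the definition of v_rho above, and the orthogonality exponent
# beta = (1-g^2)^2/(4 Nf g) quoted below Eq. (15). alpha = e^2/(hbar vF) ~ 2.7
# for vF ~ 8e5 m/s (Gaussian units) is an assumed value; a dielectric
# constant eps reduces it to alpha/eps.
Nf = 2
alpha = 2.7
for eps, log_xiW in [(1.0, 2.0), (2.0, 2.0), (2.0, 3.0)]:
    u = 4*Nf*(alpha/eps)/np.pi * log_xiW    # (4 Nf e^2/(pi hbar vF)) ln(xi/W)
    g = 1.0/np.sqrt(1.0 + u)                # since v_rho = vF sqrt(1+u)
    beta = (1 - g**2)**2/(4*Nf*g)
    print(f"eps={eps}, ln(xi/W)={log_xiW}: g={g:.2f}, beta={beta:.2f}")
# g ~ 0.3 (strong Coulomb interaction) gives beta ~ 0.3, the quoted estimate.
```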
In the new variables, the hamiltonian density becomes

$$\mathcal{H}=\frac{v_\rho }{2\pi g}(\partial _x\tilde{\theta }_\rho )^2+\frac{gv_\rho }{2\pi }(\partial _x\varphi )^2+\mathcal{H}_0^d[\tilde{d},\tilde{d}^{\dagger }]+\tilde{\mathcal{H}}_{\mathrm{int}},$$ (13)

with a different residual interaction term

$$\tilde{\mathcal{H}}_{\mathrm{int}}=\left[\frac{\gamma ^2}{2m\pi ^2}(\partial _x\varphi _\rho )^2-i\frac{\gamma }{2m\pi }\partial _x^2\varphi _\rho \right]\tilde{d}^{\dagger }\tilde{d}.$$ (14)

It might appear that the transformations have actually worsened the problem, as the interaction in Eq. 14 naively appears more complicated than the original in Eq. 10. However, a closer inspection shows that the new couplings in Eq. 14 are dimensionally weaker by one inverse power of length than the original forms. As the original interaction was marginal, the terms in Eq. 14 are in fact irrelevant in the renormalization group sense. This can be verified by an explicit calculation of their effects on the $`\tilde{d}`$ Green’s function. Apart from a constant renormalization of the subband gap, the leading order diagrams (Figs. 1a,1c) give self-energy contributions proportional to $`(\omega -\Delta )^3`$. The irrelevance of the couplings in Eq. 14 indicates that at long times and distances, the transformed fermion and boson correlation functions asymptotically factorize. Thus at energies close to the threshold energy $`\Delta `$, the $`\tilde{d}^{\dagger }`$ operator creates good quasiparticles which propagate independently of the conduction sea. The tunneling DOS, however, involves the addition of a bare electron created by the $`d^{\dagger }`$ operator. Factorization implies

$$G(x,\tau )=G^0(x,\tau )/\left|\Lambda \sqrt{x^2+v_\rho ^2\tau ^2}\right|^\beta ,$$ (15)

where $`G^0`$ is the non-interacting Green’s function describing free propagation in the unoccupied subband, $`\beta =\gamma ^2/2\pi ^2g=(1-g^2)^2/(4N_fg)`$, and $`\Lambda `$ is a momentum cutoff ($`O(k_F)`$). When Fourier transformed, the space-time product above becomes a convolution, which physically represents the emission of plasmon waves by the added electron. Analytic continuation to real frequency gives the modified van Hove singularity in Eq. 1.

## IV Discussion

The preceding analysis demonstrates the persistence of a well-defined edge in the tunneling spectrum in the presence of forward scattering charge interactions with the conduction electrons. The existence of such a finite energy singularity hinges on the inability of the heavy particle to truly decay or mix with the many other (conduction subband) excitations coexisting at the same energy. This is ensured within the FSCM due to heavy particle charge conservation. If there are no true distinguishing quantum numbers of the excited state, decay is possible and the singularity is rounded, as can be verified by explicit calculation using, e.g., Fermi’s golden rule. We have already argued that for ideal conducting nanotubes and symmetric semiconductor quantum wires, at least the first excited subband is protected from decay in this way. In any experiment, various non-ideal perturbations will lead to some rounding. Thermal smearing (from thermal excitation both internally and in the tunneling lead) limits the resolution to $`\epsilon \sim k_BT`$. More significant rounding arises from hybridization of the one-dimensional electronic states with the bulk. This is probably the most significant effect in recent tunneling experiments using nanotubes on metallic gold substrates.
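The convolution just mentioned can be checked directly. In the sketch below (ours), the bare edge $`\rho _0(w)\sim w^{-1/2}`$ is convolved with the plasmon-emission kernel $`K(w)\sim w^{\beta -1}`$ implied by Eq. 15, with $`w`$ measured from the threshold $`\Delta `$; the substitution $`w=\epsilon t`$ factorizes the energy dependence and exposes the reduced edge of Eq. 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as euler_beta

# Sketch (ours): dos(eps) = Int_0^eps dw w^(beta-1) (eps-w)^(-1/2)
#                         = eps^(beta-1/2) * B(beta, 1/2),
# i.e. the naive (eps-Delta)^(-1/2) divergence softened to (-1/2+beta).
b = 0.3
tint, _ = quad(lambda t: t**(b-1)*(1-t)**(-0.5), 0, 1, limit=200)
print(f"t-integral = {tint:.4f}, Euler B(beta,1/2) = {euler_beta(b, 0.5):.4f}")
for eps in (1e-3, 1e-2, 1e-1):
    print(f"eps-Delta = {eps:.0e}: dos ~ {eps**(b-0.5)*tint:.2f}")
```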
This effect could be greatly reduced by using an insulating substrate or, better, a freely suspended tube. Even for nanotubes on an insulating substrate, the asymmetry in dielectric constants allows some degree of mixing of different subband states. Fortunately, this is likely to be a weak effect due to the delocalization of the subband states around the circumference of the cylinder. Finally, impurities or defects lead to smearing of the ideal density of states on the scale of the inverse elastic scattering time. Additional processes left out of the FSCM may lead to significant modifications of the physics, although they cannot remove the edge singularity. The forward scattering assumption itself is always correct at energies sufficiently close to $`\Delta `$, since the heavy particle cannot accommodate a large change in momentum without a corresponding large increase in its energy. There are, however, forward scattering interactions outside the total charge channel. The most natural example is the Kondo interaction, $`\mathcal{H}_K=\lambda (\psi _R^{\dagger }\stackrel{}{\sigma }\psi _R+\psi _L^{\dagger }\stackrel{}{\sigma }\psi _L)\cdot d^{\dagger }\stackrel{}{\sigma }d`$. For $`m=\mathrm{\infty }`$, one has negligible effects for ferromagnetic ($`\lambda >0`$) coupling and perfect screening of the heavy spin for antiferromagnetic coupling. For finite $`m`$, one can perform a perturbative renormalization group calculation which demonstrates that small ferromagnetic coupling remains irrelevant for $`m<\mathrm{\infty }`$. Likewise, for antiferromagnetic exchange, the problem scales to strong coupling. The nature of the strong-coupling excitation(s) is not completely clear, though preliminary investigations of some lattice models suggest the formation of a propagating charge $`e`$ singlet particle. As for the infinite mass model, we expect a large (universal?) contribution to the orthogonality exponent $`\beta `$ in this case. In the cases of nanotubes and quantum wires, as for most itinerant systems, the bare Kondo couplings are ferromagnetic, and we thus expect insignificant modifications in the spin channel. The antiferromagnetic case could perhaps be relevant, however, for certain excited states in ladder materials. The extra degeneracies in the unoccupied subbands in nanotubes might also lead to some effects of this type for pseudospin, but only very close to the band edge as all corrections to the FSCM are small in this case. We conclude by summarizing the experimental predictions for nanotubes and symmetric quantum wires. While the tunneling density of states for insulating systems should show the bare $`(\epsilon -\Delta )^{-1/2}`$ van Hove singularities (up to the two-particle continuum), conducting systems should display a reduction of the edge singularity at the first subband to $`(\epsilon -\Delta )^{-1/2+\beta }`$. For a $`1.4`$ nm diameter nanotube, we estimate $`\beta \approx 0.3`$. The singularities at all higher subbands in metallic quantum wires should be rounded.

###### Acknowledgements.

I would like to thank Aris Moustakas, Steve Simon, and Xenophon Xotos for useful discussions.
no-problem/9902/astro-ph9902093.html
ar5iv
text
# Multiband polarimetric and total intensity imaging of 3C 345

## 1 Introduction

The QSO 3C 345 ($`V=16`$ mag, $`z=0.595`$) is one of the best examples of a radio source jet in a core-dominated radio source. It has been monitored with VLBI since 1979 (Unwin et al., 1983; Biretta et al., 1986; Brown et al., 1994; Wardle et al., 1994; Rantakyrö et al., 1995; Zensus et al., 1995a; Zensus et al., 1995b; Lobanov, 1996; Taylor, 1998). On arcsecond scales the source contains a compact region at the base of a 4″ jet that is embedded in a diffuse steep-spectrum halo (Kollgaard et al., 1989). From astrometric measurements, the parsec-scale core has been shown to be stationary within uncertainties of 20 $`\mu `$as yr$`{}^{-1}`$ (Bartel et al., 1986). The parsec-scale jet consists of several prominent enhanced emission regions (jet components) apparently ejected at different position angles (P.A. ranging from 240° to 290°) with respect to the jet core. The components move along curved trajectories that can be modelled by a simple helical jet. The curvature of the trajectories may be caused by some periodic process at the jet origin, like orbital motion in a binary black hole system or Kelvin-Helmholtz instabilities (Steffen et al., 1995; Qian et al., 1996; Hardee, 1987). 3C 345 was observed with VLBI during the first half of the 90’s almost every 6 months at different frequencies. The component motions were well monitored through these observations (cf. Lobanov, 1996), allowing precise determination of component motions. The present work extends this effort with an enhanced data set at four frequencies, including polarization data. Assuming a standard Friedmann cosmology, with $`H_0=100h`$ km s$`{}^{-1}`$ Mpc$`{}^{-1}`$ and $`q_0=0.5`$, an angular scale of 1 mas corresponds to 3.79 $`h^{-1}`$ pc for a cosmological redshift of 0.595. An apparent angular motion of 1 mas yr$`{}^{-1}`$ represents an apparent speed of $`\beta _{\mathrm{app}}=19.7h^{-1}`$.

## 2 Observations and Imaging.

We observed 3C 345 with the VLBA at 3 epochs (1995.84, 1996.41, and 1996.81) at 22, 15, 8.4, and 5 GHz, using a bandwidth of 16 MHz. At each frequency, the source was observed for about 14 hrs, using 5-minute scans and interleaving all observing frequencies. Some calibrator scans (on 3C 279, 3C 84, NRAO 91, OQ 208, and 3C 286) were inserted during the observations. The data were correlated at the NRAO VLBA correlator in Socorro, New Mexico, USA (the National Radio Astronomy Observatory is operated by Associated Universities, Inc., under cooperative agreement with the National Science Foundation).

### 2.1 Total Intensity Images.

The data were fringe-fitted and calibrated in AIPS and imaged using the differential mapping program DIFMAP (Shepherd et al., 1994). High dynamic range images (1:1000) were obtained for all epochs and frequencies. At the higher frequencies (15 and 22 GHz), we identified the core D and two main components (tentatively identified as C8 and C7) in the inner 3 mas (Ros et al., 1999). The components travel outwards with respect to the core, with observed apparent superluminal speeds of about $`5c`$. At 5 and 8.4 GHz the jet extends up to 20 mas distances from the core. Fig. 1 shows the images at 8.4 and 5 GHz for all 3 observing epochs.
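The two conversion factors quoted above follow from standard formulas; a short check (ours) using the Mattig relation for $`q_0=1/2`$:

```python
import numpy as np

# Sketch (ours): reproduce the quoted conversions for z = 0.595 in a
# q0 = 1/2 Friedmann cosmology with H0 = 100 h km/s/Mpc (h = 1 below).
c_km_s = 2.998e5
H0 = 100.0                                  # km/s/Mpc, i.e. h = 1
z = 0.595
D_C = 2*c_km_s/H0 * (1 - 1/np.sqrt(1+z))    # comoving distance, Mpc
D_A = D_C/(1+z)                             # angular-diameter distance, Mpc
mas = np.pi/180/3600/1000                   # 1 mas in radians
print(f"1 mas -> {D_A*mas*1e6:.2f} pc")     # expect ~3.79 h^-1 pc
yr, Mpc_km = 3.156e7, 3.086e19
v_app = mas/yr * D_A*Mpc_km * (1+z)         # km/s; (1+z) from time dilation
print(f"1 mas/yr -> beta_app = {v_app/c_km_s:.1f}")   # expect ~19.7 h^-1
```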
Model fitting of Gaussian profile components to the jet at 5 and 8.4 GHz includes the inner components seen at the higher frequencies, and additionally more extended components of the jet further out. The proper motions of these regions are typically of $`\sim 4.5c`$. At $`\sim 6.2`$ mas from the core (P.A. $`-70^{\circ }`$) the emission can be fitted for the six images with a component, having a total flux density of $`\sim 0.6`$ Jy at 8.4 GHz and $`\sim 0.7`$ Jy at 5 GHz. The proper motion for this component is $`\sim 6c`$. It is most plausibly identified with the C4 component discussed by, e.g., Lobanov (1996). Component C2 is fitted at a position $`\sim 12`$ mas away from the core (P.A. $`-70^{\circ }`$), with a flux density of $`\sim 0.1`$ Jy both at 8.4 and 5 GHz. The extended nature of this component makes proper motion determinations very uncertain. At 5 GHz another component, C1, separated $`\sim 20`$ mas from the core (P.A. $`-55^{\circ }`$), with a flux density above 0.2 Jy for all three epochs, permits us to trace the northern extended jet emission (the FWHM of the elliptical component is about 10 mas). A comparison with previous model-fitting results will provide a better understanding of the kinematic properties of the jet.

### 2.2 Polarized Images.

Recently, important progress has been made in VLBI polarimetric observations. Polarization calibration at high frequencies is now possible using the target source itself as a calibrator (Leppänen et al., 1995). This technique is non-iterative and is insensitive to structure in the polarization calibrator. We have applied this approach to determine the instrumental polarization of the observing stations and to image the linear polarization in 3C 345. We do not apply Faraday rotation corrections to the observed electric vector position angles presented here, relying on the results of Taylor (1998), who reports small rotation measures in 3C 345 at frequencies higher than 5 GHz. In Ros et al. (1999) we show some polarization results of this monitoring at 22 GHz, where the electric vector is roughly parallel to the jet direction in the inner part of the jet. Lister et al. (1998) report similar vector alignment with the jet at 43 GHz, for some blazars in the inner jet regions. At lower frequencies, the distribution of the polarized emission differs significantly from that seen at 22 and 15 GHz. The core is less polarized ($`m\sim 1`$%) than at higher frequencies, showing an electric vector oblique to the jet direction. Both facts may be caused by the blending of differently oriented polarized components. The degree of polarization at 5 GHz in the jet reaches values over 15%. We present an image of the third epoch of our monitoring in Fig. 2. Both previous epochs display very similar features for the jet. These findings seem to be consistent with the conclusions of Brown et al. (1994), who reported, at 5 GHz, a weakly polarized core in 3C 345 and a fractional polarization reaching 15% in the jet. The electric vector position angles were found to be variable in the core from one epoch to the other but perpendicular to the jet for all epochs. The images obtained by Taylor (1998) at 8.4 GHz are similar to our results, and also imply a magnetic field aligned with the jet direction at 3-10 mas distances from the core. Cawthorne et al. (1993) suggested that the longitudinal component of the magnetic field can increase with the distance from the core as a result of shear from the dense emission line gas near the nucleus.
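The fractional polarization $`m`$ and the electric-vector position angle quoted here follow from the Stokes images in the standard way; a minimal sketch (ours, with illustrative pixel values):

```python
import numpy as np

# Minimal sketch (ours): fractional linear polarization and electric-vector
# position angle (EVPA) from Stokes I, Q, U values, as mapped in Fig. 2.
def pol_quantities(I, Q, U):
    P = np.hypot(Q, U)                        # polarized intensity
    m = P/I                                   # fractional polarization
    evpa = 0.5*np.degrees(np.arctan2(U, Q))   # deg, measured N through E
    return m, evpa

I, Q, U = 1.0, -0.10, 0.12                    # illustrative values (Jy/beam)
m, evpa = pol_quantities(I, Q, U)
print(f"m = {100*m:.1f}%, EVPA = {evpa:.1f} deg")
```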
At larger distances from the core the shocks may be too weak to dominate the underlying field, resulting in the observed electric field perpendicular to the jet.

## 3 Conclusions

We have studied the QSO 3C 345 with the VLBA at three epochs and four frequencies, analyzing the properties of its total intensity and polarized emission. We have monitored the kinematics of the radio source, identifying components and proper motions. The polarized intensity images show the alignment between the jet direction and the electric vector position angle in the inner regions of the jet at high frequencies, and the orthogonality of the electric vector to the jet direction in the outer regions of the jet. These results reliably confirm previously reported observations (Brown et al., 1994; Leppänen et al., 1995; Taylor, 1998) covering different frequencies and jet regions in 3C 345. Continuing the multiband monitoring of 3C 345 with VLBI, enhanced by regular polarimetric studies, can provide important clues to the physics of the parsec-scale jet. Higher-frequency and higher-resolution observations (such as those provided by orbital VLBI with HALCA) should help to better constrain the models of this QSO.
no-problem/9902/gr-qc9902008.html
ar5iv
text
# The Inner Structure of Black Holes

## Abstract

We study the gravitational collapse of a self-gravitating charged scalar-field. Starting with a regular spacetime, we follow the evolution through the formation of an apparent horizon, a Cauchy horizon and a final central singularity. We find a null, weak, mass-inflation singularity along the Cauchy horizon, which is a precursor of a strong, spacelike singularity along the $`r=0`$ hypersurface. The inner black hole region is bounded (in the future) by singularities. This resembles the classical inner structure of a Schwarzschild black hole and it is remarkably different from the inner structure of a charged static Reissner-Nordström or a stationary rotating Kerr black hole.

The simple picture describing the exterior of a black-hole is in dramatic contrast with its interior. The singularity theorems of Penrose and Hawking predict the occurrence of inevitable spacetime singularities as a result of a gravitational collapse in which a black-hole forms. According to the weak cosmic censorship conjecture , these spacetime singularities are hidden beneath the black-hole’s event-horizon. However, these theorems tell us nothing about the nature of these spacetime singularities. In particular, the final outcome of a generic gravitational collapse is still an open question. Our physical intuition regarding the nature of these inner singularities and the outcome of gravitational collapse is largely based on the spherical Schwarzschild black hole solution and the idealized Oppenheimer-Snyder collapse model . The Schwarzschild black hole contains a strong spacelike central singularity. All the matter that falls into the black hole crashes into this singularity within a finite proper time. The Schwarzschild singularity is unavoidable. This behaviour is manifested in the Penrose diagram describing the conformal structure of a spacetime in which a Schwarzschild black hole forms (see Fig. 1). However, spherical collapse is not generic. We expect some angular momentum, and this might change this picture drastically. The inner structure of a stationary rotating, Kerr, black hole contains a strong inner timelike singularity, which is separated from external observers by both an apparent horizon and a Cauchy horizon (CH). A free-falling test particle cannot reach this singularity. Instead it will cross a second Cauchy horizon and emerge from a white hole into another asymptotically flat region. A remarkably similar structure exists in a charged Reissner-Nordström black hole (see Fig. 1). We do not expect to find charged collapse in nature. However, this similarity motivates us to study spherically symmetric charged gravitational collapse as a simple toy model for a realistic generic rotating collapse (which is at best axisymmetric). Does the inner structure of a Reissner-Nordström black hole describe the generic outcome of gravitational collapse? Novikov studied the collapse of a charged shell and found that the shell will reach a minimal radius and bounce back, emerging into another asymptotically flat region - a different universe. The idea of reaching other universes via a black hole’s interior is rather attractive. It immediately captured the imagination of the popular audience and SciFi authors coined the “technical” term “Stargate” for this phenomenon. However, as predictability is lost at the CH, this leads to serious conceptual problems. We are faced with two gravitational collapse models.
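For reference (our addition, using the standard Reissner-Nordström formulas with $`G=c=1`$), the horizon structure underlying this discussion:

```python
import numpy as np

# Our addition (textbook Reissner-Nordstrom relations): outer and inner
# horizon radii r± = M ± sqrt(M^2 - Q^2) and surface gravities
# kappa± = (r+ - r-)/(2 r±^2). The inner horizon r- is the Cauchy horizon
# whose (in)stability is at issue here; kappa- sets the blueshift rate.
def rn_structure(M, Q):
    disc = np.sqrt(M**2 - Q**2)        # requires |Q| <= M
    rp, rm = M + disc, M - disc
    return rp, rm, (rp - rm)/(2*rp**2), (rp - rm)/(2*rm**2)

for QoverM in (0.5, 0.9, 0.99):
    rp, rm, kp, km = rn_structure(1.0, QoverM)
    print(f"Q/M={QoverM}: r+={rp:.3f}, r-={rm:.3f}, "
          f"kappa+={kp:.3f}, kappa-={km:.3f}")
```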
The “traumatic” collapse to Schwarzschild, in which nothing can escape the central singularity, and the “fascinating” collapse to Kerr or Reissner-Nordström, in which a generic infalling observer might escape unharmed to another Universe. Which of the two possibilities is the generic one? Penrose, who was the first to address this issue, pointed out that small perturbations, which are remnants of the gravitational collapse outside the collapsing object, are infinitely blueshifted as they propagate in the black-hole’s interior parallel to the Cauchy horizon. The resulting infinite energy leads to a curvature singularity. Matzner et al. have shown that the CH is indeed unstable to linear perturbations. This indicates that the CH might be singular - “Stargate” might be closed. A detailed modeling of this phenomenon suggests that the CH inside charged or spinning black-holes is transformed into a null, weak singularity . The CH singularity is weak in the sense that an infalling observer who hits this null singularity experiences only a finite tidal deformation . Nevertheless, curvature scalars (namely, the Newman-Penrose Weyl scalar $`\mathrm{\Psi }_2`$) diverge along the CH, a phenomenon known as mass-inflation . Despite this remarkable progress the physical picture is not complete yet. The evidence for the existence of a null, weak CH singularity is mostly based on perturbative analysis. The pioneering work of Gnedin and Gnedin was a first step towards a full non-linear analysis. They have demonstrated the appearance of a central spacelike singularity deep inside a charged black-hole coupled to a (neutral) scalar-field. Much insight was gained from the numerical work of Brady and Smith, who studied the same problem. These authors established the existence of a null mass-inflation singularity along the CH. Furthermore, they showed that the singular CH contracts to meet the central $`r=0`$ spacelike singularity. More recently, Burko demonstrated that there is a good agreement between the numerical results and the predictions of the perturbative approach. Still, the mass-inflation scenario has never been demonstrated explicitly in a collapsing situation beginning from a regular spacetime. All previous numerical studies began with a singular Reissner-Nordström spacetime with an additional infalling scalar field. We demonstrate here explicitly that mass-inflation takes place during a dynamical charged gravitational collapse. We show that the generic black hole that forms in a charged collapse is engulfed by singularities in all future directions. We consider the gravitational collapse of a self-gravitating charged scalar-field. The physical model is described by the coupled Einstein-Maxwell-charged scalar equations. We solve the coupled equations using a characteristic method. Our scheme is based on double null coordinates: a retarded null coordinate $`u`$ and an advanced null coordinate $`v`$. The axis, $`r=0`$, is along $`u=v`$. For $`v\gg M`$ our null ingoing coordinate $`v`$ is proportional to the Eddington-Finkelstein null ingoing coordinate $`v_e`$. These coordinates allow us to begin with a regular initial spacetime (at approximately past null infinity), calculate the formation of the black-hole’s event horizon, and follow the evolution inside the black-hole all the way to the central and the CH singularities. Fig. 2 describes the numerical spacetime that we find. The upper panel (Fig.
2a) displays the radius $`r(u,v)`$ as a function of the ingoing null coordinate $`v`$ along a sequence of outgoing ($`u=const`$) null rays that originate from the non-singular axis $`r=0`$. One can distinguish between three types of outgoing null rays: (i) The outer-most (small-$`u`$) rays, which escape to infinity. (ii) The intermediate outgoing null rays, which approach a fixed radius $`r_{CH}(u)`$ at late times ($`v\to \mathrm{\infty }`$), indicating the existence of a CH. (iii) The inner-most (large-$`u`$) rays, which terminate at the singular section of the $`r=0`$ hypersurface. These outgoing rays reach the $`r=0`$ singularity in a finite $`v`$, without intersecting the CH. This structure is drastically different from the Reissner-Nordström spacetime, in which all outgoing null rays which originate inside the black-hole intersect the CH. Moreover, while in a Reissner-Nordström spacetime the CH is a stationary null hypersurface, here $`r_{CH}(u)`$ depends on the outgoing null coordinate $`u`$. The CH contracts and reaches the inner $`r=0`$ singularity. The CH is smaller if the charge is smaller, and if the charge is sufficiently small it is difficult (numerically) to notice the existence of a CH in the solution. Fig. 2b depicts the $`r(u,v)`$ contour lines. The outermost contour line corresponds to $`r=0`$; its left section (a straight line $`u=v`$) is the non-singular axis and its right section corresponds to the central singularity at $`r=0`$. Since $`r_{,v}<0`$ along this section, the central singularity is spacelike. Previously $`r_{,v}=0`$ indicated the existence of an apparent horizon (which is first formed at $`u\approx 1`$ for this specific solution). The CH is a null hypersurface located at $`v\to \mathrm{\infty }`$. This follows because the intermediate outgoing null rays (in the range $`1\lesssim u\lesssim 2.1`$ for this specific solution) terminate at a finite ($`u`$-dependent) radius $`r_{CH}(u)`$. The singular CH contracts to meet the central ($`r=0`$) spacelike singularity (along the $`u\approx 2.1`$ outgoing null ray). Thus, the null CH singularity is a precursor of the final spacelike singularity along the $`r=0`$ hypersurface. As expected from the Mass Inflation scenario the mass function $`m(u,v)`$ (and the curvature) diverge exponentially along the outgoing null rays (see Fig. 3a). The mass function increases not only along the outgoing ($`u`$=const) null rays (as $`v`$ increases) but also along ingoing ($`v`$=const) null rays (as $`u`$ increases). The weakness of the singularity is demonstrated here by the metric function $`g_{uv}`$ (see Fig. 3b) which approaches a finite value at the CH. This confirms the analytical analysis of Ori , according to which a suitable coordinate transformation can produce a non-singular metric. Our numerical solution has put together all the different ingredients found in the previous analyses into a single coherent picture. The inner structure of a black hole that forms in a gravitational collapse of a charged scalar-field is remarkably different from the inner structure of a Reissner-Nordström (or Kerr) black hole (see Fig. 1). The inner region is bounded by singularities in all future directions: a spacelike singularity forms on $`r=0`$ and a null singularity forms along the CH, which contracts and meets the spacelike singularity at $`r=0`$. This structure is much closer to the “traditional” Schwarzschild inner structure than to the seemingly more generic Reissner-Nordström or Kerr structures. However, while the spacelike singularity is strong, the null singularity along the CH is weak.
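As a toy illustration of the exponential divergence of the mass function (ours, not extracted from the numerics): in the Poisson-Israel/Ori picture the mass inflates along the CH as $`m(v)\propto v^{-p}e^{\kappa v}`$, where $`\kappa `$ is an inner-horizon surface gravity and $`p`$ a Price-tail power; the values below are illustrative choices.

```python
import numpy as np

# Toy illustration (ours): the mass-inflation form m(v) ~ m0 v^(-p) e^(kappa v).
# kappa, p, m0 are illustrative, not values fitted to this work's solution.
kappa, p, m0 = 0.5, 12, 1.0
for v in (20, 40, 60, 80):
    log10_m = (kappa*v - p*np.log(v) + np.log(m0))/np.log(10)
    print(f"v={v:3d}: log10 m(v) ~ {log10_m:6.1f}")
# The exponential wins over the power-law prefactor: the mass parameter blows
# up along the CH even though the metric perturbation itself stays small.
```

Despite this exponential divergence of the mass function, the CH singularity itself remains weak in the sense just described.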
Matter is able to cross this singularity without being crushed by tidal forces. Thus, in spite of this “singular” picture, “Stargate” might not be completely closed after all (provided that the travelers are willing to suffer a strong, yet finite distortion). These travelers will not have, of course, the slightest idea of what awaits them beyond the CH. The weakness of the CH singularity leaves open the question of predictability beyond the CH.

ACKNOWLEDGMENTS

This research was supported by a grant from the Israel Science Foundation. TP thanks W. Israel for helpful discussions.
no-problem/9902/cond-mat9902275.html
ar5iv
text
# Quantum Spinodal Decomposition in Multicomponent Bose-Einstein Condensates

## Abstract

We investigate analytically the non-equilibrium spatial phase segregation process of a mixture of alkali Bose-Einstein condensates. Two stages (I and II) are found, in analogy to the classical spinodal decomposition. The coupled non-linear Schrödinger equations enable us to give a quantitative picture of the present dynamical process in a square well trap. Both time and length scales in the stage I are obtained. We further propose that the stage II is dominated by the Josephson effect between different domains of the same condensate, different from scenarios in the classical spinodal decomposition. Its time scale is estimated.

PACS #: 03.75.Fi; 64.75.+g

The recent realizations of two and three component alkali Bose-Einstein condensates (BEC’s) in one trap provide us with new systems to explore the physics in otherwise unachievable parameter regimes . These systems have two noticeable advantages: the easy control of experimental parameters and the relative simplicity of the mathematical description. A direct comparison between theoretical calculations and experimental observations can be made. The purpose of the present paper is to explore one of the main questions in non-equilibrium statistical physics using those new systems: spinodal decomposition in a binary-solution system. It is a typical example of phase ordering dynamics: the growth of order through domain coarsening when a system is quenched from the homogeneous phase into a broken-symmetry phase . Systems quenched from a disordered phase into an ordered phase do not order instantaneously. Instead, different length scales set in as the domains form and grow with time, and different broken symmetry phases compete to select the equilibrium state. We show that it is possible to have an analogous spinodal decomposition in BEC’s, which manifests all the main phenomenology except those constrained by the trap size. In the following, we shall study the main features in this dynamical evolution process, starting from the homogeneous unstable state of BEC’s. To differentiate the present situation from the usual one, we shall call the present one the quantum spinodal decomposition, and the previous ones the classical spinodal decomposition. We start from the time dependent non-linear Schrödinger equations

$$i\hbar \frac{\partial }{\partial t}\psi _1=\left[-\frac{\hbar ^2}{2m_1}\nabla ^2+(U_1(x)-\mu _1)+G_{11}|\psi _1|^2\right]\psi _1+G_{12}|\psi _2|^2\psi _1,$$ (1)

and

$$i\hbar \frac{\partial }{\partial t}\psi _2=\left[-\frac{\hbar ^2}{2m_2}\nabla ^2+(U_2(x)-\mu _2)+G_{22}|\psi _2|^2\right]\psi _2+G_{12}|\psi _1|^2\psi _2.$$ (2)

Here $`\psi _j(x,t)`$, $`m_j`$, $`U_j`$ with $`j=1,2`$ are the effective wave function, the mass, and the trapping potential of the $`j`$th condensate. The repulsive interaction between the $`j`$th condensate atoms is specified by $`G_{jj}`$, and that between 1 and 2 by $`G_{12}`$. The Lagrangian multipliers, the chemical potentials $`\mu _1`$ and $`\mu _2`$, are fixed by the relations $`\int d^3x\,|\psi _j(x,t)|^2=N_j,\ j=1,2`$, with $`N_j`$ the number of the $`j`$th condensate atoms. Eq. (1) and (2) are mean field equations, since we treat the effective wave functions as $`c`$ numbers, corresponding to the Hartree-Fock-Bogoliubov approximation.
They provide a good description for the slow dynamics in both alkali BEC’s and the superfluid He4 on length scales larger than the range of the microscopic interaction. Experimentally, the trapping potentials $`\{U_j\}`$ are simple harmonic in nature. For the sake of simplicity and to illustrate the physics we shall consider a square well trapping potential $`U_j=U`$: zero inside and large (infinite) outside, unless otherwise explicitly specified. We consider the strong mutual repulsive interaction

$$G_{12}>\sqrt{G_{11}G_{22}}.$$ (3)

In this regime the equilibrium state for two Bose-Einstein condensates is a spatial segregation of the two condensates, where two phases, the weakly and strongly segregated phases, characterized by the healing length and the penetration depth, have been predicted . We shall use Eq. (1) and (2) under the condition (3) to study a highly non-linear dynamical process: The two condensates are initially in a homogeneously mixed state, then eventually approach the phase segregated state. In the same mean field treatment as in Ref., we find that this dynamical process can be classified into two main stages: The initial highly non-equilibrium dynamical growth in the stage I, where the dynamics is governed by the fastest growth mode, and the stage II of approach to equilibrium where the dynamics is governed by the slowest mode. The stage II is typical of a relaxation process near equilibrium. However, we shall show again it is governed by a quantum effect, namely, the Josephson effect. Stage I: Fastest Growth Mode. The coupled non-linear Schrödinger equations have an obvious homogeneous solution: Inside the trap the condensate densities $`|\psi _j|^2=\rho _{j0}`$, $`\rho _{j0}=N_j/V,`$ with $`V`$ the volume of the square well potential trap, and the chemical potentials $`\mu _1=G_{11}\rho _{10}+G_{12}\rho _{20}`$ and $`\mu _2=G_{22}\rho _{20}+G_{12}\rho _{10}`$. This is the initial condition of the present problem. To look for the fastest growth mode out of the homogeneous state, we start with small fluctuations from the homogeneous state. This is consistent with the usual stability analysis . Our approach here is to emphasize the connection with the physics of the classical spinodal decomposition and the role played by the Josephson relationships. Define

$$\psi _j(x,t)=\sqrt{\rho _j(x,t)}\,e^{i\theta _j(x,t)},$$ (4)

and define the density fluctuations $`\delta \rho _j=\rho _j-\rho _{j0}`$ and the phase fluctuations $`\theta _j`$, and assume they are small: $`|\delta \rho _j|/\rho _j,|\theta _j|\ll 1`$. The definition of the phase fluctuations here has made use of the implicit assumption that there is no net current in the condensate. To the linear order, we have from Eqs.(1,2), after eliminating the phase variables,

$$\frac{\partial ^2}{\partial t^2}\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right)=\left(\begin{array}{cc}b_1&\frac{\rho _{10}}{m_1}G_{12}\nabla ^2\\ \frac{\rho _{20}}{m_2}G_{12}\nabla ^2&b_2\end{array}\right)\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right),$$ (5)

with $`b_j=-\frac{\hbar ^2}{4m_j^2}\nabla ^4+\frac{\rho _{j0}}{m_j}G_{jj}\nabla ^2`$. We look for the solution of the form $`\left(\begin{array}{c}\delta \rho _1\\ \delta \rho _2\end{array}\right)=\left(\begin{array}{c}A\\ B\end{array}\right)e^{i(\mathbf{q}\cdot \mathbf{r}-\omega t)},`$ with $`A,B`$ constants. There are two branches of solution for Eq. (5). For one branch, the frequency is always real. For another branch, which we denote by $`\omega _{-}`$, it can be imaginary.
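A short numerical sketch (ours, with 87Rb-like parameter values) diagonalizes Eq. (5) for plane waves, exhibits the unstable branch when condition (3) holds, and locates the fastest-growing mode whose closed form appears below:

```python
import numpy as np

# Sketch (ours): plane-wave branches of Eq. (5) for equal masses/densities,
# with G12/sqrt(G11*G22) = 1.04 as used later in the text.
hbar = 1.0546e-34
m = 1.443e-25                       # kg (87Rb)
a = 50e-10                          # m, scattering length
rho = 1e20                          # m^-3  (10^14 cm^-3)
G = 4*np.pi*hbar**2*a/m
G12 = 1.04*G

def omega2_minus(q):                # lower branch omega_-^2 of Eq. (5)
    e = hbar**2*q**4/(4*m**2)
    M = np.array([[e + rho*G*q**2/m, rho*G12*q**2/m],
                  [rho*G12*q**2/m, e + rho*G*q**2/m]])
    return np.linalg.eigvalsh(M)[0]

qs = np.linspace(1e4, 3e6, 4000)    # m^-1
w2 = np.array([omega2_minus(q) for q in qs])
qmax = qs[np.argmin(w2)]            # fastest-growing mode
print(f"unstable for q < {qs[w2 < 0].max():.2e} m^-1")
print(f"l_I = 1/q_max = {1e6/qmax:.2f} um")                 # ~1.5 um
print(f"t_I = 1/|w_max| = {1e3/np.sqrt(-w2.min()):.1f} ms") # ~6 ms
```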
An imaginary frequency $`\omega _{-}`$ shows that the initial homogeneously mixed state is unstable. It is straightforward to verify that the sufficient condition for the appearance of an imaginary frequency for small enough wave number $`q`$ is the validity of the inequality Eq.(3). The modes with imaginary frequencies will then grow exponentially with time. Unlike the usual situation near equilibrium, this growth from the present non-equilibrium homogeneously mixed state will be dominated by the fastest growth mode. This is precisely the same case as in the initial stage of the classical spinodal decomposition. To get a concrete understanding of the physical implications of Eq. (5) in the stage I we consider a case relevant to recent experiments where particles of the two condensates have the same mass, $`m_1=m_2=m`$. In this case we find the wavenumber corresponding to the most negative $`\omega _{max}^2`$ is

$$q_{max}^2=\frac{m}{\hbar ^2}\left[\sqrt{b_0^2+4\rho _{10}\rho _{20}G_{11}G_{22}\left(\frac{G_{12}^2}{G_{11}G_{22}}-1\right)}-b_0\right],$$ (6)

and

$$\omega _{max}=i\frac{\hbar }{2m}q_{max}^2,$$ (7)

with $`b_0=\rho _{10}G_{11}+\rho _{20}G_{22}`$. The physics implied in Eqs. (6,7) is as follows. Starting from the initial homogeneous mixture of the two condensates, on the time scale given by

$$t_I=1/|\omega _{max}|,$$ (8)

domain patterns of the phase segregation with the characteristic length

$$l_I=1/q_{max}$$ (9)

will appear. Particularly, for the weakly segregated phase, $`G_{12}^2/G_{11}G_{22}-1\to 0`$, we have the length scale $`l_I^{-1}=\sqrt{2}/\sqrt{\mathrm{\Lambda }_1^2+\mathrm{\Lambda }_2^2}`$, and for the strongly segregated phase, $`G_{12}^2/G_{11}G_{22}-1\to \mathrm{\infty }`$, we have $`l_I^{-1}=\sqrt{2}/\sqrt{\mathrm{\Lambda }_1\mathrm{\Lambda }_2}`$. Here $`\mathrm{\Lambda }_j=\xi _j/\sqrt{G_{12}/\sqrt{G_{11}G_{22}}-1}`$ and $`\xi _j=\sqrt{(\hbar ^2/2m_j)(1/\rho _{j0}G_{jj})}`$ are the penetration and healing lengths in the binary BEC mixture . Those length and time scales can be measured experimentally. We will come back to the experimental situation below. After the stage I of fast growth into the domain pattern characterized by the length scale $`l_I`$, the system will gradually approach the true ground state of the complete phase segregation: one condensate in one region and the second condensate in another region. This stage is slow, dominated by the slowest mode, and is the subject of stage II. It is now evident that the stage I of the growth of binary BEC’s shares the same phenomenology of the initial stage of the classical spinodal decomposition: the domination of the fastest growth mode, the appearance of domains of segregated phases, and the conservation of particle numbers. There are, however, two important differences. First, the dynamical evolution of the binary BEC’s is governed by coupled nonlinear time-dependent Schrödinger equations, not by a nonlinear diffusion equation supplemented with the continuity equation, the Cahn-Hilliard equation. There is no external relaxation process for the present wave functions. Secondly, the energy of the binary BEC’s is conserved during the growth process, not as in the case of classical spinodal decomposition where the system energy always decreases. Stage II: Merging and Oscillating between Domains. The BEC binary mixture occurs in a trap. This finite size effect of the droplet leads to the broken symmetry of the condensate profiles , which tends to separate the condensates in mutually isolated regions.
This implies that there is no contact between different domains of the same condensate formed in the stage I. The classical spinodal process involves diffusion. An estimate of the diffusion constant for the BEC system can be made from kinetic theory. The ratio of the time scales for quantum and classical particle transport is of the order of the ratio of the de Broglie wavelength to the BEC cloud size, which is much smaller than one for the experimental systems of interest. The classical diffusion process is thus not important. Because the domains are not connected and because the diffusion for the BEC mixture is extremely slow, all the mechanisms for the late stage classical spinodal decomposition are not applicable. We propose that it is the Josephson effect that is responsible for the approach to equilibrium in the stage II. Two models for the Josephson effect, the ‘rigid pendulum’ model and the ‘soft pendulum’ model will be discussed. They both give the same time scale when the ‘Rabi’ frequency is small. Let us consider the specific case of two domains of condensate 1 separated by a domain with width $`d`$ of condensate 2. The ability of condensate 1 to tunnel through condensate 2 is described by the penetration depth $`\mathrm{\Lambda }`$, as discussed in Ref. . Hence the probability of condensate 1 to tunnel through condensate 2 can be estimated as

$$p=e^{-2d/\mathrm{\Lambda }},$$ (10)

when $`p`$ is much smaller than 1. The finiteness, though small, of the tunneling probability suggests that it is the Josephson effect responsible for the relaxation process in the stage II. The Josephson effect leads to the merging of two domains of the same condensate. The dynamics of such motion may be governed by the ‘rigid pendulum’ Hamiltonian for a Josephson junction : $`H(\varphi ,n)=E_J(1-\mathrm{cos}\varphi )+\frac{1}{2}E_Cn^2,`$ where $`E_J`$ is the Josephson coupling energy determined by the tunneling probability, $`n=(n_x-n_y)/2`$ is the particle number difference between the numbers of particles, $`n_x`$ and $`n_y`$, in the two domains, and $`E_C\equiv \partial \mu /\partial n`$ is the ‘capacitive’ energy due to interactions. In the absence of external constraints, $`\mu =E_Cn`$. The phase difference $`\varphi `$ between the two domains is conjugated to $`n`$, as in usual Josephson junctions. Under the appropriate condition, such as low temperature and smallness of the capacitive energy, there may be an oscillation between the two domains of condensate 1 separated by condensate 2. In such a case, we may estimate that the oscillation period $`t_{II}=2\pi /\omega _{JP}`$, with the so-called Josephson plasma frequency

$$\omega _{JP}=\frac{\sqrt{E_CE_J}}{\hbar }.$$ (11)

For small tunneling probability, the Josephson junction energy may be estimated as $`E_J=n_T^{1/3}\hbar \omega _0e^{-2d/\mathrm{\Lambda }}`$, and the capacitive energy as $`E_C=\frac{2}{5}\left(n_T/2\right)^{-0.6}\left(15a_{11}/a_0\right)^{0.4}\hbar \omega _0`$. Here $`\omega _0`$ is the harmonic oscillator frequency for condensate 1 in a harmonic trap, $`a_0=\sqrt{\hbar /m_1\omega _0}`$ is the corresponding oscillator length, the $`a_{11}`$ is the scattering length of condensate 1, and $`n_T=n_x+n_y`$ is the total number of particles in domain $`x`$ and $`y`$.
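A numerical sketch (ours) of the Josephson plasma period implied by these estimates; the trap frequency and the ratio $`d/\mathrm{\Lambda }`$ below are illustrative assumptions, not values fitted to experiment:

```python
import numpy as np

# Sketch (ours): t_II = 2 pi / omega_JP from the E_J and E_C estimates above,
# for assumed trap parameters (87Rb mass, a11 ~ 50 Angstrom).
hbar = 1.0546e-34
m = 1.443e-25
a11 = 50e-10
nT = 1e6
omega0 = 2*np.pi*400.0                 # assumed trap frequency (rad/s)
a0 = np.sqrt(hbar/(m*omega0))          # oscillator length
for d_over_Lam in (2.0, 3.0, 4.0):     # domain width over penetration depth
    EJ = nT**(1/3) * hbar*omega0 * np.exp(-2*d_over_Lam)
    EC = 0.4*(nT/2)**(-0.6) * (15*a11/a0)**0.4 * hbar*omega0
    t_II = 2*np.pi*hbar/np.sqrt(EC*EJ)
    print(f"d/Lambda = {d_over_Lam}: t_II = {t_II*1e3:6.0f} ms")
# The period is exponentially sensitive to the domain width d.
```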
The oscillatory time scale between the domains is determined by the Josephson plasma frequency

$$t_{II}^{-1}=\left(\frac{2a_0}{15a_{11}n_T}\right)^{2/15}\frac{\omega _0}{2\pi }e^{-d/\mathrm{\Lambda }}.$$ (14)

The rigid pendulum Hamiltonian would give a good description when $`n\ll n_T`$. Another description of the Josephson effect uses the ‘soft pendulum’ Hamiltonian proposed in Ref. . In the parameter regime relevant to current experimental situations both of them give the same order of magnitude estimate for $`t_{II}`$. Given that the Josephson effect is the dominant mechanism in the stage II, the time scale to arrive at the ground state will be determined by the Josephson effect at the final two domains, in which the two domains of condensate 1 are separated by the domain of condensate 2, in the manner of a 1 2 1 2 spatial configuration for the case of equal numbers of the condensates. The width $`d`$ of each domain is then $`D/4`$, with $`D`$ the size of the trap. According to the above analysis, the largest time scale determined by Eq.(12), the slowest mode, in the stage II is $`t_{II}`$. The arrangement of a 1 2 1 spatial configuration may also occur here, in which it is more likely for condensate 2 to tunnel through 1 to the edge of the trap, because of the larger tunneling probability. We next turn our attention to the experimental situation. The first question is whether the quantum spinodal decomposition discussed above can happen or not. In terms of the atomic scattering lengths of condensate atoms $`a_{jj}`$, the interactions are $`G_{jj}=4\pi \hbar ^2a_{jj}/m_j`$. The typical value of $`a_{jj}`$ for <sup>87</sup>Rb is about $`50`$ Å. The typical density realized for the binary BEC mixture is about $`\rho _{i0}\sim 10^{14}\,\mathrm{cm}^{-3}`$. Hence the healing length is $`\xi =\sqrt{(\hbar ^2/2m)(1/G_{jj}\rho _{j0})}=\sqrt{1/(8\pi a_{jj}\rho _{j0})}\approx 3000`$ Å. For the different hyperfine states of <sup>87</sup>Rb, it is known now that $`G_{12}/\sqrt{G_{11}G_{22}}>1`$. Hence the ground state of the phase segregated phase can be realized. Also, that both the healing length and the penetration depth are smaller than the trap size validates the application of results obtained in the square well trapping potential to the harmonic one. Therefore, the quantum spinodal decomposition can happen. Experimentally, starting from the initially homogeneous state, after a short period of time a domain pattern does appear. Then a damped oscillation between the domain pattern has been observed. Eventually the binary BEC mixture settles into the segregated phase . If we take $`G_{12}/\sqrt{G_{11}G_{22}}=1.04`$, the penetration depth is $`\mathrm{\Lambda }=\xi /\sqrt{G_{12}/\sqrt{G_{11}G_{22}}-1}\approx 1.5\,\mu `$m. The length scale $`l_I`$ determined by Eq.(9) in the stage I is $`\approx 1.5\,\mu `$m, which is the same order of magnitude in comparison with experimental data. This length is also comparable to the domain wall width seen experimentally. The corresponding time scale $`t_I`$ (Eq. (8)) is then 6 ms, again the same order of magnitude, though both estimated $`l_I`$ and $`t_I`$ are somewhat smaller. For stage II, $`d/\mathrm{\Lambda }`$ is of the order of $`n_d`$, the number of domains formed in stage I. For the experiment of interest to us, $`n_d\approx 2`$. If we assume that the damped oscillation to equilibrium observed experimentally is the stage II discussed here, taking the total number of particles $`N_1=10^6`$, we find the period according to Eq.
(12) is 30 ms, comparable to the experimental value. For the stage I, our above analysis shows that it is insensitive to the damping. At this moment we do not have a reliable estimation of the damping in the Josephson oscillation, whose existence has been indicated experimentally. Nevertheless, given the uncertainty in the value of $`G_{12}`$ or $`a_{12}`$, we conclude that the stage II of the quantum spinodal decomposition may have been observed. Periodic-like structures have also been observed in the phase segregation of spin 1 Na mixtures . For the definiteness of analysis we have presently focused on the binary case, but we believe a similar physical picture of quantum spinodal decomposition applies to that case as well. Finally, we point out that the length and time scales for Rb mixtures are controlled by the difference of $`r=G_{12}/\sqrt{G_{11}G_{22}}`$ and 1. Since $`r`$ is close to 1 experimentally, the data can provide a very sensitive estimate of $`r-1`$. Since this parameter determines both the time and length scale of stage I, it is a self-consistent check on the physical picture provided here. This work was supported in part by a grant from NASA (NAG8-1427), and by Swedish NFR (PA). One of us (PA) thanks the Bartol Research Institute as well as the Department of Physics at University of Delaware for the pleasant hospitality, where the main body of the work was completed. We also thank D. S. Hall for sending us their data.
no-problem/9902/nucl-th9902026.html
ar5iv
text
# Effect of q-deformation in the NJL gap equation

## Abstract

We obtain a $`q`$-deformed algebra version of the Nambu–Jona-Lasinio model gap equation. In this framework we discuss some hadronic properties such as the dynamical mass generated for the quarks, the pion decay constant and the phase transition present in this model. PACS numbers: 11.30.Rd, 03.65.Fd, 12.40.-y Keywords: Deformed Algebras, Hadronic Physics, Effective Models

The concept of symmetry is of fundamental importance in physics; the breaking of a symmetry and its associated phase transition are universal phenomena appearing in many branches in physics, such as nuclear and solid state physics, although the broken symmetries and the physical systems involved are quite different. Y. Nambu was the first to realize this universal aspect of dynamical symmetry breaking . The Nambu-Jona-Lasinio (NJL) model is very adequate to study the breaking of chiral symmetry and the generation of a dynamical mass for the quarks due to the appearance of condensates. On the other hand, in the last few years the study of $`q`$-deformed algebras has turned out to be a fertile area of research. The use of $`q`$-deformed algebra in the description of some many-body systems has led to the appearance of new features when compared to the non-deformed case . In particular, it seems to be a very elegant framework to describe perturbations from some underlying symmetry structure. From the many applications of $`q`$-deformation ideas existing in the literature, ranging from optics to particle physics, we would like to pinpoint three of them: the investigation of the behavior of the second order phase transition in a $`q`$-deformed Lipkin model , the good agreement with the experimental data obtained through a $`\kappa `$-deformed Poincaré phenomenological fit to the dynamical mass and rotational and radial excitations of mesons , and the purely $`su_q(2)`$-based mass formula for quarks and leptons developed by using an inequivalent representation . It therefore sounds reasonable to study the influence of the $`q`$-deformation on the mass generation mechanism due to the breaking of chiral symmetry. To be definite, in this work we intend to investigate the effects of the $`q`$-deformation on the phase transition of the NJL model, stimulated by an analogy with the $`q`$-deformed Lipkin model, where the phase transition is smoothed down when the Lipkin Hamiltonian is deformed . Recently, the thermodynamical properties of a free quantum group fermionic system with two “flavors” were studied . In particular, a $`su_q\left(N\right)`$-covariant representation of the fermionic algebra for arbitrary $`N`$ in terms of ordinary creation and annihilation operators was given there. The $`su_q\left(2\right)`$-covariant algebra is given by the following relations

$$\{\psi _1,\overline{\psi }_1\}=1-\left(1-q^{-2}\right)\overline{\psi }_2\psi _2\qquad \{\psi _2,\overline{\psi }_2\}=1,$$ (1)

$$\psi _1\psi _2=-q\psi _2\psi _1\qquad \overline{\psi }_1\psi _2=-q\psi _2\overline{\psi }_1,$$ (2)

$$\{\psi _1,\psi _1\}=0\qquad \{\psi _2,\psi _2\}=0.$$ (3)

The usual $`su(2)`$ covariant fermionic algebra is recovered when $`q=1`$. Later, the pure nuclear pairing force version of the Bardeen-Cooper-Schrieffer (BCS) many-body formalism was extended in such a way as to replace the usual fermions by quantum group covariant ones satisfying appropriate anticommutation relations for a $`su_q(N)`$-fermionic algebra .
Using the $`su_q\left(2j+1\right)`$-covariant representation of the fermionic algebra, a $`q`$-covariant form of the BCS approximation was constructed and the $`q`$-analog to the BCS equations along with the quantum gap equation was derived. The quantum gap was shown to depend explicitly on the deformation parameter, and it is reduced as the deformation increases. The Nambu–Jona-Lasinio model was first introduced to describe the nucleon-nucleon interaction via a four-fermion contact interaction. Later, the model was extended to quark degrees of freedom, becoming an effective model for quantum chromodynamics (QCD). The Lagrangian of the NJL model is given by

$$\mathcal{L}_{NJL}=\overline{\psi }i\gamma ^\mu \partial _\mu \psi +\mathcal{L}_{int},$$ (4)

$$\mathcal{L}_{int}=G\left[\left(\overline{\psi }\psi \right)^2+\left(\overline{\psi }i\gamma _5\tau \psi \right)^2\right].$$ (5)

Linearizing the above interaction in a mean field approach, the last term does not contribute if the vacuum is parity and Lorentz invariant. The Lagrangian with the linearized interaction is then

$$\mathcal{L}_{NJL}=\overline{\psi }i\gamma ^\mu \partial _\mu \psi +2G\langle \overline{\psi }\psi \rangle \overline{\psi }\psi .$$ (6)

Regarding this Lagrangian as a Dirac Lagrangian for massive quarks we obtain a dynamical mass for the quarks

$$m=-2G\langle \overline{\psi }\psi \rangle ,$$ (7)

where $`\langle \overline{\psi }\psi \rangle `$ is the vacuum expectation value of the scalar density $`\overline{\psi }\psi `$, representing the quark condensates. Eq. (7) describes how the dynamical mass is generated with the appearance of the quark condensates. The quarks are massless if the condensate vanishes. We now turn to the $`q`$-deformation of the NJL gap equation. Following Ref. , we write the creation and annihilation operators of the $`su_q\left(2j+1\right)`$ fermionic algebra as

$$A_{jm}=a_{jm}\prod _{i=m+1}^{j}\left(1+Qa_{ji}^{\dagger }a_{ji}\right),$$ (8)

$$A_{jm}^{\dagger }=a_{jm}^{\dagger }\prod _{i=m+1}^{j}\left(1+Qa_{ji}^{\dagger }a_{ji}\right),$$ (9)

where $`Q=q^{-1}-1,`$ $`j=1/2`$ and $`m=\pm 1/2`$. The first consequence of the above deformation is that only the operators corresponding to $`m=-\frac{1}{2}`$ are modified. Since in the NJL model we deal with quark (anti-quark) creation and annihilation operators, this feature is important because only negative helicity quark (anti-quark) operators will be deformed. Explicitly, we have

$$A_{-}=a_{-}\left(1+Qa_+^{\dagger }a_+\right),\qquad A_{-}^{\dagger }=a_{-}^{\dagger }\left(1+Qa_+^{\dagger }a_+\right),$$ (10)

$$A_+=a_+,\qquad A_+^{\dagger }=a_+^{\dagger },$$ (11)

where $`+`$ $`(-)`$ stands for the positive (negative) helicity. We are now in a position to obtain the deformed gap equation by introducing a BCS-like vacuum and proceeding similarly to the standard Bogoliubov-Valatin approach . The $`q`$-deformed BCS vacuum reads

$$|NJL\rangle =\prod _{\mathbf{p},s=\pm 1}\left[\mathrm{cos}\theta (p)+s\,\mathrm{sin}\theta (p)B^{\dagger }(\mathbf{p},s)D^{\dagger }(-\mathbf{p},s)\right]|0\rangle $$ (12)

and the quark fields are expressed in terms of $`q`$-deformed creation and annihilation operators as

$$\psi _q(x,0)=\underset{s}{\sum }\int \frac{d^3p}{\left(2\pi \right)^3}\left[B(\mathbf{p},s)u(\mathbf{p},s)e^{i\mathbf{p}\cdot \mathbf{x}}+D^{\dagger }(\mathbf{p},s)v(\mathbf{p},s)e^{-i\mathbf{p}\cdot \mathbf{x}}\right].$$ (13)

The $`q`$-deformed quark and anti-quark creation and annihilation operators $`B`$, $`B^{\dagger }`$, $`D`$, and $`D^{\dagger }`$ are expressed in terms of the non-deformed ones according to Eqs.
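A quick numerical check (ours) that the representation of Eqs. (10)-(11) indeed satisfies the $`q`$-covariant relations (1)-(3), using a Jordan-Wigner construction of the two ordinary fermion modes on the 4-dimensional Fock space:

```python
import numpy as np

# Sketch (ours): verify relations (1)-(3) for psi1 = a_-(1 + Q n_+),
# psi2 = a_+, with Q = 1/q - 1, on the two-mode fermionic Fock space.
a = np.array([[0., 1.], [0., 0.]])     # single-mode annihilator
Z = np.diag([1., -1.])                 # Jordan-Wigner string
I2, I4 = np.eye(2), np.eye(4)
a_m = np.kron(a, I2)                   # a_-  (deformed mode)
a_p = np.kron(Z, a)                    # a_+  (undeformed mode)

q = 0.8
Q = 1.0/q - 1.0
n_p = a_p.conj().T @ a_p
psi1 = a_m @ (I4 + Q*n_p)              # Eq. (10)
psi2 = a_p                             # Eq. (11)
norm = lambda M: np.max(np.abs(M))

# relation (1): {psi1, psi1^dag} = 1 - (1 - q^-2) psi2^dag psi2
lhs = psi1 @ psi1.conj().T + psi1.conj().T @ psi1
rhs = I4 - (1 - q**-2)*(psi2.conj().T @ psi2)
print("rel (1):", norm(lhs - rhs))                 # ~ 1e-16
print("rel (2):", norm(psi1 @ psi2 + q*psi2 @ psi1))
print("rel (3):", norm(psi1 @ psi1))
```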
(10) and (11):

$$B_{-}=b_{-}\left(1+Qb_+^{\dagger }b_+\right),\qquad B_{-}^{\dagger }=b_{-}^{\dagger }\left(1+Qb_+^{\dagger }b_+\right),$$ (14)

$$D_{-}=d_{-}\left(1+Qd_+^{\dagger }d_+\right),\qquad D_{-}^{\dagger }=d_{-}^{\dagger }\left(1+Qd_+^{\dagger }d_+\right),$$ (15)

$$B_+=b_+,\qquad B_+^{\dagger }=b_+^{\dagger },$$ (16)

$$D_+=d_+,\qquad D_+^{\dagger }=d_+^{\dagger },$$ (17)

(in the above equations we simplified the notation: $`B(\mathbf{p},s)\to B_s`$, $`b(\mathbf{p},s)\to b_s`$, etc.). We would like to stress that, as discussed in Ref. , the deformed vacuum differs from the non-deformed one only by a phase and, therefore, the effects of the deformation come solely from the modified field operators. Additionally, the $`q`$-deformed NJL Lagrangian, constructed using $`\psi _q`$ instead of $`\psi `$, is invariant under the quantum group $`SU_q(2)`$ transformations. This can be seen by using the two-dimensional representation of the $`SU_q(2)`$ unitary transformation given in Ref. . The deformed gap equation is

$$m=-2G\langle \overline{\psi }\psi \rangle _q,$$ (18)

where $`\langle \overline{\psi }\psi \rangle _q`$ is the $`q`$-deformed condensate calculated using the BCS-like vacuum, Eq. (12), and Eq. (13),

$$\langle \overline{\psi }\psi \rangle _q=\langle NJL\left|\overline{\psi }_q\psi _q\right|NJL\rangle =\langle \overline{\psi }\psi \rangle +\langle NJL\left|\mathcal{Q}\right|NJL\rangle ,$$ (19,20)

where $`\langle \overline{\psi }\psi \rangle `$ is the non-deformed condensate and $`\langle NJL\left|\mathcal{Q}\right|NJL\rangle `$ represents all non-vanishing matrix elements arising from the $`q`$-deformation of the quark fields. The contribution of these $`q`$-deformed matrix elements is

$$\langle NJL\left|\mathcal{Q}\right|NJL\rangle =Q\int \frac{d^3p}{\left(2\pi \right)^3}\left[\mathrm{sin}2\theta (p)-\mathrm{sin}2\theta (p)\mathrm{cos}2\theta (p)\right].$$ (21)

The calculation of the deformed condensate will be performed in a similar way as in the usual case . It also requires a regularization procedure since the NJL interaction is not perturbatively renormalizable. For reasons of simplicity a non-covariant three-momentum cutoff is applied, yielding

$$\langle \overline{\psi }\psi \rangle _q=-\frac{3m}{\pi ^2}\left[\left(1-\frac{Q}{2}\right)\int _0^\mathrm{\Lambda }dp\,\frac{p^2}{\sqrt{\mathbf{p}^2+m^2}}+\frac{Q}{2}\int _0^\mathrm{\Lambda }dp\,\frac{p^3}{\mathbf{p}^2+m^2}\right]$$ (22)

for each quark flavor. At this point we see that the dynamical mass is again given by a self-consistent equation since the condensate depends also on the mass. Inserting Eq. (22) into Eq. (18) we obtain the deformed NJL gap equation

$$m=\frac{2Gm}{\pi ^2}\left[\left(1-\frac{Q}{2}\right)\int _0^\mathrm{\Lambda }dp\,\frac{p^2}{\sqrt{\mathbf{p}^2+m^2}}+\frac{Q}{2}\int _0^\mathrm{\Lambda }dp\,\frac{p^3}{\mathbf{p}^2+m^2}\right].$$ (23)

It is easy to see that for $`Q=0`$ $`(q=1)`$, we recover the NJL gap equation in its more familiar form

$$m=\frac{2Gm}{\pi ^2}\int _0^\mathrm{\Lambda }dp\,\frac{p^2}{\sqrt{\mathbf{p}^2+m^2}}+m_0,$$ (24)

where $`m_0`$ appears only if we consider the current quark mass term $`\mathcal{L}_{mass}=-m_0\overline{\psi }\psi `$ in the NJL Lagrangian Eq. (4). The pion decay constant is calculated from the vacuum to one pion axial vector current matrix element, which, in the simple 3D non-covariant cutoff we are using , is given by

$$f_\pi ^2=N_cm^2\int _0^\mathrm{\Lambda }\frac{d^3p}{\left(2\pi \right)^3}\frac{1}{\left(\mathbf{p}^2+m^2\right)^{3/2}},$$ (25)

for each quark color. The deformed calculation of $`f_\pi `$ is performed directly by substituting into Eq. (25) the dynamical mass obtained from Eq. (23), instead of deforming the axial current in the calculation of its matrix element between the vacuum and the one-pion state.
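The self-consistent solution of Eq. (23) is easily obtained numerically. The sketch below (ours) bisects on the nontrivial root; the cutoff and coupling are illustrative NJL-like magnitudes, not the parameter set used for the tables in this paper:

```python
import numpy as np
from scipy.integrate import quad

# Sketch (ours): a nonzero mass solves 1 = (2G/pi^2)[(1-Q/2) I1(m) + (Q/2) I2(m)],
# Eq. (23). With Eq. (24) at m -> 0, the critical coupling is G_c = pi^2/Lambda^2.
Lam, G = 0.65, 30.0          # GeV, GeV^-2 (illustrative)

def F(m, Q):
    i1, _ = quad(lambda p: p**2/np.sqrt(p**2 + m**2), 0, Lam)
    i2, _ = quad(lambda p: p**3/(p**2 + m**2), 0, Lam)
    return 2*G/np.pi**2 * ((1 - Q/2)*i1 + (Q/2)*i2)

def gap_mass(Q):
    if F(1e-6, Q) < 1:       # below critical coupling: only the trivial m = 0
        return 0.0
    lo, hi = 1e-6, Lam       # F(m) decreases monotonically with m
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if F(mid, Q) > 1 else (lo, mid)
    return 0.5*(lo + hi)

for q in (1.0, 1.1, 1.2):    # q = 1 non-deformed; q > 1 gives Q < 0
    Q = 1.0/q - 1.0
    print(f"q = {q}: m = {gap_mass(Q)*1e3:.0f} MeV")
# The dynamical mass grows with the deformation, mirroring the enhancement of
# the condensate discussed below.
```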
As in the non-deformed case, the $`q`$-gap equation has non-trivial solutions when the coupling $`G`$ exceeds a critical value $`G_{crit}`$ related to the cutoff. Figure 1 depicts the sharp phase transition at $`G=G_{crit}`$ separating the Wigner-Weyl and Nambu-Goldstone phases, corresponding to different realizations of chiral symmetry. Figure 1 also shows the deformed condensate values as a function of $`q`$. We can see the enhancement of the condensate’s value due to the presence of the $`q`$-deformation. The dynamical mass is accordingly modified through the deformed gap equation (23), as can be seen in Table I, along with the corresponding values of the pion decay constant, $`f_\pi `$. The behavior of the condensate around the critical coupling, $`G_c,`$ is similar for both the deformed and non-deformed cases, meaning that the adopted procedure to $`q`$-deform the underlying $`su(2)`$ algebra in a two-flavor NJL model does not change the behavior of the phase transition around $`G_c`$. Table II presents the behavior of the coupling constant for two typical dynamical mass values for different $`q`$’s. The analysis of this table shows that the coupling constant $`G`$ decreases with $`q`$, for a given value of the dynamical mass, meaning that to acquire a given mass we need a weaker coupling when the algebra is deformed. This indicates that the deformation of the $`su(2)`$ algebra incorporates effects into the NJL interaction which are propagated to the physical quantities such as the condensate, the dynamical mass and the pion decay constant. The formalism developed in Ref. allows $`q`$-values smaller than one (which corresponds to $`Q>0`$). It is worth mentioning that in this case the $`q`$-deformation effect goes in the opposite direction, namely, the condensate value and the dynamical mass decrease for $`q<1`$. To summarize, the main objective of this work was to analyze the influence of the $`q`$-deformation in the NJL model. We studied the deformation of the underlying $`su(2)`$ algebra in a two-flavor version of the model and investigated an important feature of the $`su(2)`$ chiral symmetry breaking, namely the dynamical mass generation, through the incorporation of helicity non-conserving terms directly in the fermionic operators. The main effect of the $`q`$-deformation is to effectively enhance the coupling strength of the NJL four-fermion interaction, leading to an increase in the value of the quark condensate. The dynamical mass, which is related to the presence of the condensate, is correspondingly increased. We also looked closely at the behavior of the phase transition around the critical point, which is still sharp, meaning that the new contributions arising from the deformation of the condensate do not play the role of explicit chiral symmetry breaking terms . Acknowledgments The authors are grateful to D. Galetti and B. M. Pimentel for helpful discussions. V. S. T. would like to acknowledge Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) for financial support and M. Malheiro for very helpful discussions concerning the $`q`$-deformation in the NJL model.
# Cold inelastic collisions between lithium and cesium in a two-species magneto-optical trap ## 1 Introduction Cold collisions between trapped, laser-cooled atoms have been the subject of extensive research in the past years walk94:adv . In contrast to collisions of thermal atoms, the collision process between cold atoms is extremely sensitive to the long-range part of the inter-atomic interaction allowing precise determination of molecular potentials mill93:prl and atomic lifetimes mcalex96:pra . In the presence of light fields, molecular excitation during the collisional process is non-negligible, leading to phenomena such as light-induced collisions pren88:optlett ; sesk89:prl , photoassociation mill93:prl , optical shielding of inelastic processes bali94:epl and formation of cold ground-state molecules fior98:prl . Investigations have almost exclusively concentrated on binary collisions between atoms of the same species walk94:adv ; supt94:optlett . Light-induced cold collisions between two different species (heteronuclear collisions) strongly differ from single-species collisions (homonuclear collisions). The excited-state interaction potential for two different species is of much shorter range (van-der-Waals potential $`\propto 1/R^6`$ at large interatomic separations $`R`$) than the excited-state potential for two atoms of the same species (resonant-dipole potential $`\propto 1/R^3`$). In the homonuclear case, the duration of the cold collision is much longer than the excited-state lifetimes gems98:prl so that the dynamics of the collisional process greatly depends on the atom-light interaction during the collision process. For the heteronuclear case, even a cold collision takes less time than the lifetimes of the excited atomic states. The collision process is therefore essentially determined by the asymptotic states which are initially prepared, much like classical “hot” collisions. However, the low temperatures of laser-cooled atoms lead to a large extension of the molecular wavepacket formed during the cold collision. The wavepacket spreads over typically some fraction of an optical wavelength which is of the same order of magnitude as the range of the interaction potentials. A light-induced cold collision between two different species is therefore highly quantum-mechanical with mainly the s-wave scattering distribution determining the cross-section, in contrast to homonuclear collisions involving excited atoms juli91:pra . Only recently, simultaneous trapping of two different atomic species has been reported sant95:pra ; wind98 ; shaf98:prl . In this article, we present the first investigation of inelastic cold collisions between lithium and cesium, i.e. the lightest and the heaviest stable alkali. This extreme combination opens intriguing perspectives for future experiments related to the large difference in mass and electron affinity of the two atomic species, e.g., sympathetic cooling of lithium by optically cooled cesium engl98:applphys and the formation of cold polar molecules with large electric dipole moment which could be trapped electrostatically seka95:jetp . In our experiments, both species are simultaneously confined in a combined magneto-optical trap. Trap loss is studied by analyzing the decay of the trapped particle number after interruption of the loading flux, both in the presence and in the absence of the other species.
By choosing appropriate trap parameters, different trap loss processes based on inelastic collisions between lithium and cesium are identified, and the corresponding cross sections and rate coefficients are determined. The specific features of inelastic cold collisions between two different species are introduced in Sec. 2 with emphasis on the peculiarities of the Li-Cs system. The combined magneto-optical trap for simultaneous confinement of lithium and cesium is described in Sec. 3. In Sec. 4 detailed quantitative studies of trap loss through inelastic Li-Cs collisions are presented. Sec. 5 summarizes the main results. ## 2 Two-species cold collisions ### 2.1 Quasi-molecular potentials When two cold atoms approach each other, the interaction between the atoms leads to the formation of quasi-molecular states. The leading term in the long-range part of the interaction arises from the dipole-dipole interaction. If both atoms are in their ground state, the potential energy is given by the well-known van-der-Waals expression $`W_{gg}=-C_6/R^6`$. The coefficient $`C_6`$ can be estimated by treating the two atoms A and B as simple two-level systems with transition frequencies $`\omega _i=2\pi c/\lambda _i`$ and electric dipole moments $`d_i`$ ($`i`$ = A or B). Second-order perturbation theory yields $$C_6\approx \frac{4d_A^2d_B^2}{\hbar (\omega _A+\omega _B)}.$$ (1) The van-der-Waals interaction between two ground state atoms is thus always attractive as shown in Fig. 1. If one collision partner is in an excited state, the nature of the interaction depends on whether both atoms belong to the same species, or whether two different species collide. For the homonuclear quasi-molecule, the interaction potential is given by the resonant-dipole interaction $`W_{ge}=C_3^{\prime }/R^3`$ with the perturbative two-level result $`C_3^{\prime }\approx \pm 2d^2`$. In the heteronuclear case with atom A in the excited state and atom B in the ground state, one obtains a van-der-Waals potential $`W_{ge}=C_6^{\prime }/R^6`$ with $$C_6^{\prime }\approx \frac{4d_A^2d_B^2}{\hbar (\omega _A-\omega _B)}.$$ (2) The relative value of the transition energies determines the character of the interaction. If the collision partner with the larger (smaller) resonance frequency is excited, the interaction is generally repulsive (attractive) as indicated in Fig. 1 for the case of lithium and cesium. As we will see, this general feature of the excited-state van-der-Waals interaction has important implications for the collisional properties of two different species. The van-der-Waals coefficients $`C_6`$ and $`C_6^{\prime }`$ differ by the factor $`(\omega _A-\omega _B)/(\omega _A+\omega _B)\ll 1`$, resulting in a much steeper potential for the excited state than for the ground state (see Fig. 1). Although the two-level approximation is an oversimplified model for the complex level schemes of real atoms, the numerical values for the coefficients $`C_6`$, $`C_3^{\prime }`$ and $`C_6^{\prime }`$ derived from the two-level approach reproduce the right orders of magnitude for alkali dimers. For more accurate determination of the long-range molecular potentials, elaborate models including spin-orbit effects and interacting molecular states have been developed buss87:chemphys . ### 2.2 Inelastic processes Inelastic collisions in a trap lead to loss of atoms when the kinetic energy gain of the colliding atom is larger than the trap depth. If the energy gain is smaller than the trap depth, the atom is retained in the trap, but the inelastic collision represents a significant heating mechanism.
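Before proceeding, a quick numerical aside on the two-level estimates of Sec. 2.1: the ratio of the coefficients in Eqs. (1) and (2) follows from the D2 wavelengths alone. The short Python sketch below only restates that ratio; the two-level picture itself is, as noted above, an order-of-magnitude model:

```python
# Two-level estimate of Eqs. (1) and (2) for Li (671 nm) and Cs (852 nm):
# the ratio C6/C6' = (w_A - w_B)/(w_A + w_B) needs only the wavelengths.
lam_li, lam_cs = 671e-9, 852e-9        # D2 wavelengths in meters
w_li, w_cs = 1 / lam_li, 1 / lam_cs    # frequencies up to a common factor 2*pi*c
ratio = (w_li - w_cs) / (w_li + w_cs)
print(f"C6/C6' ~ {ratio:.3f}  ->  C6' ~ {1/ratio:.1f} x C6")
# -> C6/C6' ~ 0.119, i.e. the excited-state potential is roughly 8 times steeper.
```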
Due to the low temperatures achieved in a magneto-optical trap (MOT), the initial kinetic energy of the collision partners can be neglected with respect to the interaction energy. In the presence of light fields, two basic processes were identified for cold inelastic collisions involving optical excitation of the colliding pair juli91:pra ; gall89:prl : fine-structure changing collisions (FC) and radiative escape (RE). For two different species, an exoergic energy-exchange reaction $`\mathrm{A}^{*}+\mathrm{B}\to \mathrm{A}+\mathrm{B}^{*}+\hbar (\omega _\mathrm{A}-\omega _\mathrm{B})`$ may also take place. Due to the repulsive $`\mathrm{A}^{*}`$-B potential and the large energy defect associated with this reaction as compared to other inelastic processes, we conjecture that this process has negligible influence. When a photon from the light field is absorbed during the collision, the colliding partners are accelerated towards each other on the strongly attractive potential of the excited state. The FC mechanism is based on coupling of the excited molecular state to another fine-structure or hyperfine-structure state with lower asymptotic energy, which occurs at typical distances smaller than 10 Å. The kinetic energy gain of the atom pair is the difference between the absorbed photon energy and the energy of the lower excited fine-structure state. The RE mechanism relies on the spontaneous emission of a photon during acceleration on the excited molecular potential. The gain of kinetic energy is then given by the difference in energy between the absorbed and the emitted photon. Both mechanisms involve one collision partner in the excited state. The excitation probability of the collisional quasi-molecule is largest when the detuning $`\delta `$ of the light field from the atomic resonance is compensated by the interaction energy. The corresponding internuclear distance is called the Condon point $`R_C`$, defined by the condition $`W_{ge}(R_C)-W_{gg}(R_C)=\hbar \delta `$. For typical detunings of a MOT ($`\delta `$ from $`-\mathrm{\Gamma }`$ to $`-6\mathrm{\Gamma }`$, with $`\mathrm{\Gamma }`$ denoting the inverse lifetime of the excited state), the Condon point has values around 500–2000 Å for homonuclear collisions with the long-range $`1/R^3`$ resonant-dipole potential, and much smaller values around 50–150 Å for heteronuclear collisions with the shorter-range $`1/R^6`$ excited-state van-der-Waals potential. At distances smaller than the Condon point, the colliding atoms quickly decouple from the light field due to the increasing energy shifts induced by the interatomic interaction. Taking typical relative velocities $`\overline{v}=0.1`$–$`1`$ m/s in a MOT and typical radiative lifetimes $`\mathrm{\Gamma }^{-1}\approx 30`$ ns, the semiclassical probability of reaching small internuclear distances on an excited-state molecular potential (“survival probability”) is small for homonuclear collisions, but might get close to unity for heteronuclear ones. In addition to the excited-state inelastic collisions, collisions involving both colliding atoms in the ground state may occur. In particular, hyperfine-changing collisions (HFC) releasing the ground-state hyperfine energy, similar to the FC mechanism in the excited state, can play a role for losses in shallow traps. ### 2.3 Cold lithium-cesium collisions The special case of a cold Li-Cs collision shows some peculiar features. The lithium and cesium level schemes for the ground and first excited states are shown in Fig. 2.
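The Condon-point ranges quoted above can be checked against the two-level coefficients. In the sketch below the dipole moments are an assumed order-of-magnitude input (we take $`d=4ea_0`$ for both atoms purely for illustration); with them, the heteronuclear and homonuclear Condon points fall within the quoted 50–150 Å and 500–2000 Å windows:

```python
import numpy as np

hbar, eps0 = 1.0546e-34, 8.854e-12
e, a0, c = 1.602e-19, 0.529e-10, 2.998e8

# Assumed, order-of-magnitude dipole moments (two-level picture only):
d_li = d_cs = 4 * e * a0                     # a few e*a0 for alkali D lines
u_li = d_li**2 / (4 * np.pi * eps0)          # d^2/(4 pi eps0), in J m^3
u_cs = d_cs**2 / (4 * np.pi * eps0)

dw = 2 * np.pi * c * (1/671e-9 - 1/852e-9)   # w_Li - w_Cs
C6p = 4 * u_li * u_cs / (hbar * dw)          # heteronuclear excited state, Eq. (2)
C3 = 2 * u_cs                                # homonuclear resonant dipole, ~2 d^2

gamma_cs = 2 * np.pi * 5.2e6                 # Cs D2 linewidth
delta = 3 * gamma_cs                         # |detuning|, a typical MOT value

R_het = (C6p / (hbar * delta))**(1/6)        # heteronuclear Condon point
R_hom = (C3 / (hbar * delta))**(1/3)         # homonuclear Condon point
print(f"R_C(Li-Cs) ~ {R_het*1e10:5.0f} Angstrom")   # ~ 1e2, within 50-150
print(f"R_C(Cs-Cs) ~ {R_hom*1e10:5.0f} Angstrom")   # ~ 1e3, within 500-2000
```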
Lifetimes for the Li and Cs excited states are $`(\mathrm{\Gamma }_{\mathrm{Li}})^{-1}=27`$ ns and $`(\mathrm{\Gamma }_{\mathrm{Cs}})^{-1}=30`$ ns, respectively. The $`S_{1/2}`$–$`P_{3/2}`$ transitions (D2 line) at 671 nm for Li and 852 nm for Cs are used for cooling and trapping. Due to the difference in transition energy, the quasi-molecular potential for a Li-Cs collision is repulsive for all substates with Li$`^{*}`$+Cs asymptotes, and attractive for all substates with Li+Cs$`^{*}`$ asymptotes buss87:chemphys as indicated in Fig. 1. The repulsive long-range interaction for the Li$`^{*}`$-Cs pair has already been demonstrated experimentally by spectroscopic measurements in a hot Li-Cs vapor vadl83:prl . Due to the small initial kinetic energy with respect to the interatomic interaction potential, a Li$`^{*}`$-Cs pair is prevented from reaching small internuclear separations where inelastic processes can occur. Possible inelastic channels for Li-Cs collisions are therefore hyperfine-changing collisions for ground-state Li and Cs, as well as FC, HFC and RE collisions involving excited Cs. Momentum and energy conservation require that only 5%, i.e. $`m_{\mathrm{Li}}/(m_{\mathrm{Li}}+m_{\mathrm{Cs}})`$ ($`m_i`$ = mass of the atoms), of the released energy is transferred to the heavier collision partner Cs. When the two-species MOT is optimized for maximum capture velocity for each species (trap depth $`h\times 15`$ GHz, with $`h`$ = Planck’s constant), Li loss is induced by Cs when the total energy release is larger than $`h\times 16`$ GHz, while the energy release for Li-induced Cs loss has to be larger than $`h\times 300`$ GHz. The hyperfine splittings of Li and Cs are therefore too small to induce trap loss. However, for a slightly shallower Li-MOT, the hyperfine energy of the ground-state Cs may just be sufficient to induce loss of Li, while the Cs collision partner remains in the MOT. With laser cooling, much lower temperatures are achieved for Cs ($`\sim 50\mu `$K) than for Li ($`\sim 1`$ mK), mainly because of the great difference in photon recoil energy $`\hbar ^2k^2/m`$, with $`m`$ denoting the mass of the atom and $`\hbar k`$ the momentum of an absorbed photon. The mean speed for Li atoms at $`T_{\mathrm{Li}}=1`$ mK is $`\overline{v}_{\mathrm{Li}}=1.7`$ m/s. This has to be compared to the mean Cs speed $`\overline{v}_{\mathrm{Cs}}=0.1`$ m/s for a Cs temperature of $`T_{\mathrm{Cs}}=50\mu `$K. The Cs atoms can therefore be considered at rest before the collision, and the mean relative velocity $`\overline{v}_{\mathrm{LiCs}}`$ between cold Li and Cs is solely determined by the Li temperature. ## 3 Combined cesium-lithium trap A schematic view of the experiment is presented in Fig. 3. The apparatus is an extension of a MOT for Li which is described in detail in Ref. schun98a:optcomm . The combined magneto-optical trap for lithium and cesium consists of three mutually orthogonal pairs of counter-propagating laser beams for each species with opposite circular polarization, intersecting at the center of an axially symmetric magnetic quadrupole field. Field gradients are 14 G/cm along the vertical axis, and 7 G/cm in the horizontal directions. The light field of the MOT is formed by retroreflected beams with a 1/$`e^2`$-diameter of 15 mm. Total laser power is about 15 mW for the Cs-MOT at 852 nm and 27 mW for the Li-MOT at 671 nm. Completely separate optics are used for the two wavelengths. The same windows are used for each trapping laser beam at 852 nm and 671 nm.
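As an aside, the mean thermal speeds quoted in Sec. 2.3 follow directly from the Maxwell–Boltzmann mean speed $`\overline{v}=\sqrt{8k_BT/\pi m}`$ (a quick consistency check in Python; the choice of this definition of the mean speed is ours):

```python
import numpy as np
kB, u = 1.381e-23, 1.661e-27   # Boltzmann constant, atomic mass unit

def v_mean(T, mass_amu):
    """Mean thermal speed sqrt(8 kB T / (pi m))."""
    return np.sqrt(8 * kB * T / (np.pi * mass_amu * u))

print(f"v(Li at 1 mK)   = {v_mean(1e-3, 7):.2f} m/s")    # ~1.7 m/s
print(f"v(Cs at 50 uK)  = {v_mean(50e-6, 133):.2f} m/s") # ~0.1 m/s
```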
The light is coupled into the vacuum chamber with a small angle between the 671 nm-beam and the 852 nm-beam. The laser beams are provided exclusively by diode lasers. For the trapping of cesium, a diode laser is operated close to the $`6S_{1/2}(F=4)\to 6P_{3/2}(F=5)`$ cycling transition of the cesium D2 line at 852 nm. To avoid optical pumping into the other hyperfine ground state, a second laser beam from a diode laser resonant with the $`6S_{1/2}(F=3)\to 6P_{3/2}(F=4)`$ transition is superimposed with the trapping beam. Both lasers are frequency-stabilized relative to absorption lines from Cs vapor cells at room temperature. The error signal of the servo loops is provided by the frequency-dependent circular dichroism of Cs vapor in a glass cell, to which a longitudinal magnetic field of some tens of Gauss is applied. The dichroism is measured as the difference in absorption between the left- and right-circular components of a linearly polarized beam. Trapping of lithium is accomplished with diode lasers in a master-slave injection-locking scheme as described in schun98a:optcomm . The lasers operate close to the $`2S_{1/2}(F=2)\to 2P_{3/2}`$ transition and the $`2S_{1/2}(F=1)\to 2P_{3/2}`$ transition, respectively, of the lithium D2 line at 671 nm (the excited-state hyperfine splitting of Li is of the same order as the natural linewidth and thus cannot be resolved). One of the lasers is locked to Doppler-free absorption lines measured by radio-frequency spectroscopy bjor83:applphys . The second laser is stabilized with respect to the first by a tunable offset-frequency lock schun98b:revsci . Both MOTs are loaded from effusive atomic beams which can be interrupted by mechanical shutters (see Fig. 3). The Cs oven, at a temperature of typically 85 °C, is continuously filled during operation by running a current of $`2`$ A through a set of nine Cs dispensers. The Cs MOT accumulates atoms from the slow-velocity tail ($`v\lesssim 10`$ m/s) of the Maxwell distribution. Typically, close to $`10^6`$ atoms (at a detuning $`\delta _{\mathrm{Cs}}=-1.5\mathrm{\Gamma }_{\mathrm{Cs}}`$) are trapped with a loading time constant of several seconds. Lithium has to be evaporated at much higher temperatures. The small mass of Li results in much higher atom velocities. Atoms with velocity $`v\lesssim 600`$ m/s are decelerated in a compact Zeeman slower by an additional laser beam at 671 nm schun98a:optcomm . The trapped atoms are shielded from the Li atomic beam by a small beam block schun98a:optcomm . At a Li oven temperature of 450 °C, the loading rate is around $`10^8`$ atoms/s, yielding up to $`10^9`$ trapped Li atoms. The steady-state number of trapped Li atoms can be adjusted over a wide range by decreasing the loading flux through attenuation of the Zeeman-slowing laser beam. Densities for the Cs and the Li MOT range between $`10^9`$ and $`10^{10}`$ atoms/cm³. The fluorescence of the trapped atoms is monitored by two calibrated photodiodes with narrow-band interference filters at 852 nm and 671 nm, respectively. Shape and position of the two atomic clouds are measured with two CCD cameras. From these measurements, the number of trapped atoms and the density are determined. The cameras are looking from different directions, yielding 3D information on the positions of the Li and Cs clouds. The clouds are not necessarily overlapping.
To superimpose both clouds, we found it most simple and reproducible to shift the Li onto the Cs cloud by slightly focusing the retroreflected beams at 671 nm and thus introducing a controlled radiation-pressure imbalance. The Li cloud turned out to be more sensitive to a radiation-pressure imbalance than the Cs cloud. Cooling in the Li-MOT is based on Doppler forces schun98a:optcomm , while polarization-gradient forces are acting on the trapped Cs stea92:josab . As a consequence of the different mechanisms, temperatures of the Li cloud in the MOT are above the Doppler temperature ($`T_{\mathrm{Li}}\approx 1`$ mK), while the Cs cloud is cooled to sub-Doppler temperatures ($`T_{\mathrm{Cs}}\approx 50\mu `$K). Fig. 4 shows a fluorescence picture of simultaneously trapped Li and Cs atoms. Due to its much lower temperature, the Cs cloud occupies a much smaller volume than the Li cloud, as indicated by the density profiles in Fig. 4. This particular property of the Li/Cs system greatly simplifies quantitative collisional studies. Binary inelastic collisions between lithium and cesium lead to loss from the two-species MOT. One indication of this loss is a decrease of the steady-state particle number of one species when the other species is also loaded into the combined MOT. In Fig. 5, the temporal evolution of the trapped particle number during loading is shown to illustrate the influence of inter-species collisions. First, only Cs is loaded into the MOT until the steady-state number is reached. Then, as Li is also filled into the trap by opening the atomic beam shutter, the number of trapped Cs decreases, which indicates inelastic Li-Cs collisions resulting in a trap loss of Cs. After a new steady state has established, loading of Cs is stopped by shuttering the Cs beam, and the light field at 852 nm is interrupted for a short moment, resulting in quick escape of all Cs atoms. Without Cs, the number of trapped Li further increases, which shows that inter-species collisions also induce Li loss. ## 4 Quantitative studies ### 4.1 Measurement procedures Inelastic collisions can be studied quantitatively by measuring rate coefficients for the loss of particles from the trap. The temporal evolution of the trapped particle number $`N_A`$ for the species A in the presence of species B is described by the rate equation $$\frac{dN_\mathrm{A}}{dt}=L_\mathrm{A}-\alpha _\mathrm{A}N_\mathrm{A}-\beta _\mathrm{A}\int n_\mathrm{A}^2d^3r-\gamma _{\mathrm{AB}}\int n_\mathrm{A}n_\mathrm{B}d^3r$$ (3) where $`n_{\mathrm{A},\mathrm{B}}`$ denote the local densities and $`L_\mathrm{A}`$ the loading rate for species A. The loss rate coefficient $`\alpha _\mathrm{A}`$ in the second term of Eq. 3 characterizes trap loss by collisions with background particles. Inelastic binary collisions between trapped particles are described by the last two terms in Eq. 3. The rate coefficients $`\beta _\mathrm{A}`$ and $`\gamma _{\mathrm{AB}}`$ denote the loss rate coefficients for collisions between atoms of the same species and between different atomic species, respectively. These coefficients can be expressed in terms of trap-loss cross sections $`\beta _\mathrm{A}=\overline{v}_{\mathrm{AA}}\sigma _\mathrm{A}`$ and $`\gamma _{\mathrm{AB}}=\overline{v}_{\mathrm{AB}}\sigma _{\mathrm{AB}}`$, where $`\overline{v}_{\mathrm{AA}}`$ and $`\overline{v}_{\mathrm{AB}}`$ denote the relative speed between two atoms of species A and between species A and B, respectively. The rate coefficients for trap loss in Eq.
3 can be inferred from the decay of the fluorescence signal after interruption of the loading flux for species A ($`L_\mathrm{A}=0`$ in Eq. 3). Species B is still continuously loaded into the two-species MOT ($`L_\mathrm{B}\ne 0`$). The fluorescence signal from the MOT is proportional to the particle number when the cloud of trapped atoms is not optically thick, which is well fulfilled in all our measurements. Analysis of the data is simplified by the fact that, for low numbers of trapped particles, the cloud extension is determined solely by the temperature (temperature-limited regime) town95:pra . In this regime, the root-mean-square radius $`r_\mathrm{A}`$ of the Gaussian spatial density distribution is independent of the number of particles, which we have carefully checked for our two-species MOT schlo98:diplom . Thus, the quadratic loss term can be written as $`\beta _\mathrm{A}N_\mathrm{A}^2/\sqrt{8}V_\mathrm{A}`$ where we call $`V_\mathrm{A}=\left(\sqrt{2\pi }r_\mathrm{A}\right)^3`$ the volume of the species A cloud. The volume $`V_A`$ stays constant during the decay of the trapped particle number. In addition, the Li cloud is generally much larger than the Cs cloud (see Fig. 4). The third term in Eq. (3) therefore simplifies to $`\gamma \widehat{n}_{\mathrm{Li}}N_{\mathrm{Cs}}`$ where $`\widehat{n}_{\mathrm{Li}}=N_{\mathrm{Li}}/V_{\mathrm{Li}}`$ denotes the Li peak density. With these simplifications, the decay of the trapped particle number $`N_\mathrm{A}`$ is described by $$\frac{dN_\mathrm{A}}{dt}=-\left(\alpha _\mathrm{A}+\frac{\gamma _{\mathrm{AB}}}{V_{\mathrm{Li}}}N_\mathrm{B}\right)N_\mathrm{A}-\frac{\beta _\mathrm{A}}{\sqrt{8}V_\mathrm{A}}N_\mathrm{A}^2$$ (4) where A and B stand for Li or Cs. In the general case, $`N_\mathrm{A}`$ and $`N_\mathrm{B}`$ are coupled by the inelastic inter-species collisions (see Fig. 5). However, when the loading flux for species B is large compared to the loss rate by inter-species collisions, i.e. $`L_\mathrm{B}\gg \gamma _{\mathrm{BA}}N_\mathrm{A}N_\mathrm{B}/V_{\mathrm{Li}}`$, the steady-state particle number $`N_{\mathrm{B},0}`$ is not influenced by the presence of species A. In this case, the rate equations for A and B become decoupled, and Eq. 4 has the simple analytical solution $$N_\mathrm{A}(t)=\frac{N_{\mathrm{A},0}e^{-\stackrel{~}{\alpha }_\mathrm{A}t}}{1+\frac{N_{\mathrm{A},0}}{\sqrt{8}V_\mathrm{A}}\frac{\beta _\mathrm{A}}{\stackrel{~}{\alpha }_\mathrm{A}}\left(1-e^{-\stackrel{~}{\alpha }_\mathrm{A}t}\right)}$$ (5) with the effective decay rate coefficient $`\stackrel{~}{\alpha }_\mathrm{A}=\alpha _\mathrm{A}+\gamma _{\mathrm{AB}}N_{\mathrm{B},0}/V_{\mathrm{Li}}`$. The coefficients $`\alpha _\mathrm{A}`$ and $`\beta _\mathrm{A}`$ in Eq. (4) are determined by fitting Eq. (5) to the fluorescence decay without the species B loaded into the MOT ($`N_{\mathrm{B},0}=0`$). A typical example is depicted in Fig. 6. We have found no influence of the trapping light for species B on the rate coefficients for species A. However, to exclude any possible ambiguities, the trapping light for species B is not interrupted during the measurements without species B being loaded into the trap. In addition, we observe no influence of the species B atomic beam on the decay characteristics of species A when opening the beam shutter and interrupting one arm of the MOT laser beams so that no species B atoms are trapped. As shown in Fig. 6, the fluorescence decay changes significantly when the second species is also confined in the two-species MOT.
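The fitting procedure based on Eq. (5) can be sketched as a standard nonlinear least-squares fit. All numbers below are synthetic and purely illustrative; in the actual analysis $`\alpha _\mathrm{A}`$ and $`\beta _\mathrm{A}`$ are first fixed by the single-species decay, and $`\gamma _{\mathrm{AB}}`$ then follows from the shift of the effective decay rate:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, N0, a_eff, b):
    """Eq. (5): N(t) = N0 e^{-a t} / (1 + (N0 b / a)(1 - e^{-a t})),
    with a = alpha_eff and b = beta_A / (sqrt(8) V_A)."""
    e = np.exp(-a_eff * t)
    return N0 * e / (1.0 + (N0 * b / a_eff) * (1.0 - e))

# Synthetic fluorescence-decay data (all values made up for illustration):
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 120)                  # time in s
truth = dict(N0=1e6, a_eff=0.08, b=5e-8)     # 1/s and 1/(atoms s)
data = decay(t, **truth) * (1 + 0.02 * rng.normal(size=t.size))

popt, _ = curve_fit(decay, t, data, p0=(8e5, 0.05, 1e-8))
N0_fit, a_eff_fit, b_fit = popt
print(f"alpha_eff = {a_eff_fit:.3f} 1/s,  beta/(sqrt(8) V) = {b_fit:.2e} 1/s")
# With alpha_A and beta_A fixed from a single-species run, the inter-species
# coefficient follows from gamma_AB = (alpha_eff - alpha_A) * V_Li / N_B0.
```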
By adjusting the loading fluxes, the particle number $`N_{\mathrm{B},0}`$ is decoupled from the decay of species A, which is checked by monitoring the fluorescence of species B. Besides the initial particle number $`N_{\mathrm{A},0}`$, the inter-species rate coefficient $`\gamma _{\mathrm{AB}}`$ is the only free fitting parameter used, since the single-species parameters $`\alpha _\mathrm{A}`$ and $`\beta _\mathrm{A}`$ are kept fixed to the values determined without B. In this way, the determination of $`\gamma _{\mathrm{AB}}`$ is uncorrelated with the evaluation of $`\alpha _\mathrm{A}`$ and $`\beta _\mathrm{A}`$, which greatly reduces the fitting errors. Experimental errors in the determination of the particle numbers $`N_{\mathrm{A},0}`$ and $`N_{\mathrm{B},0}`$ are 25%, while the trap volumes $`V_\mathrm{A}`$ and $`V_\mathrm{B}`$ are accurate to within 15%. The errors given in the following refer to the combination of experimental errors with the statistical errors of the fitting procedures. Not included are possible systematic errors in the particle number determination, which we estimate to be about 50%. Comparison of the values for $`\beta _\mathrm{A}`$ from our measurements with previous values measured in single-species MOTs provides a consistency check for our data analysis and for the influence of systematics. ### 4.2 Lithium-induced cesium loss Following the procedures described in the preceding section, we have studied the trap loss of Cs atoms resulting from inelastic Li-Cs collisions. Li and Cs are loaded for about 30 s to their steady-state particle numbers ($`N_{\mathrm{Cs},0}\approx 10^6`$ at $`\delta _{\mathrm{Cs}}=-1.5\mathrm{\Gamma }_{\mathrm{Cs}}`$, $`N_{\mathrm{Li},0}\approx 10^8`$ at $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$). The decay of the Cs fluorescence is monitored with and without Li after the Cs loading flux is interrupted. In addition, the Li fluorescence is observed during the decay of the Cs fluorescence to verify that $`N_{\mathrm{Li},0}`$ is independent of $`N_{\mathrm{Cs}}`$. For each set of measurements, a camera picture is taken to measure the spatial volume $`V_{\mathrm{Li}}`$ of the Li cloud. We first investigate the influence of the population of the lithium $`2P_{3/2}`$ excited state on $`\gamma _{\mathrm{CsLi}}`$. After the Cs loading is interrupted, the average Li excitation is adjusted by periodically chopping the trapping light. The chopping frequency of $`100`$ kHz is slow compared to the internal dynamics of the Li atoms determined by $`\mathrm{\Gamma }_{\mathrm{Li}}`$, but fast compared to the dynamics of the trapped particles. Therefore, the average excitation of the Li atoms scales linearly with the ratio of the on/off time intervals (duty cycle). (The Li loading rate changes with the duty cycle, resulting in a decay of $`N_{\mathrm{Li},0}`$ to a new steady-state value. Therefore, Eq. 5 cannot be used in this measurement; the Li decay is measured by monitoring the Li fluorescence, and the observed decay of $`N_{\mathrm{Li},0}`$ is incorporated into Eq. 4.) At 100 % duty cycle (no chopping), the average excited-state population $`\mathrm{\Pi }_{\mathrm{Li}^{*}}`$ is 0.06(1) at a detuning $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$. To determine the population, the fluorescence rate is measured as a function of detuning for a fixed number of trapped atoms. The excited-state population is then deduced from two-level theory by fitting a Lorentzian to the data.
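For reference, the steady-state excited-state fraction of a two-level atom is $`\mathrm{\Pi }=(s_0/2)/(1+s_0+(2\delta /\mathrm{\Gamma })^2)`$. An effective saturation parameter $`s_0\approx 5`$ (an assumption on our part, not a measured value) reproduces the quoted $`\mathrm{\Pi }_{\mathrm{Li}^{*}}=0.06`$ at $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$:

```python
def pi_excited(s0, delta_over_gamma):
    """Steady-state excited-state fraction of a two-level atom:
    Pi = (s0/2) / (1 + s0 + (2*delta/Gamma)**2)."""
    return 0.5 * s0 / (1.0 + s0 + (2.0 * delta_over_gamma) ** 2)

# Assumed effective saturation s0 ~ 5 at delta = -3 Gamma:
print(f"Pi_Li* = {pi_excited(5.0, -3.0):.3f}")   # -> 0.060
```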
The volume $`V_{\mathrm{Li}}`$ of the Li MOT increases with decreasing duty cycle from 1.3(1) mm³ at 100 % duty cycle to 3.0(3) mm³ at 30%. The Li temperature does not change significantly with the duty cycle ($`T_{\mathrm{Li}}=0.9(2)`$ mK at $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$). The data presented in the left graph of Fig. 7 show that the Cs loss rate coefficient $`\gamma _{\mathrm{CsLi}}`$ has the constant value of $`\gamma _{\mathrm{CsLi}}=1.1(2)\times 10^{-10}`$ cm³/s (at $`\delta _{\mathrm{Cs}}=-1.5\mathrm{\Gamma }_{\mathrm{Cs}}`$ and $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$). The coefficient exhibits no significant dependence on the Li excitation. This observation can be regarded as a direct consequence of the repulsive interaction between excited Li and ground-state Cs as discussed in Sec. 2.3. Excited Li in the MOT therefore does not contribute to the trap loss of Cs. In a second set of measurements, the dependence of $`\gamma _{\mathrm{CsLi}}`$ on the detuning $`\delta _{\mathrm{Li}}`$ of the trapping light for Li is investigated. As shown in the right graph of Fig. 7, the inter-species rate coefficient steadily increases with increasing detuning, from $`0.6(1)\times 10^{-10}`$ cm³/s at $`\delta _{\mathrm{Li}}=-1\mathrm{\Gamma }_{\mathrm{Li}}`$ to $`3.0(6)\times 10^{-10}`$ cm³/s at $`\delta _{\mathrm{Li}}=-6\mathrm{\Gamma }_{\mathrm{Li}}`$. Changing $`\delta _{\mathrm{Li}}`$ has two major consequences: the temperature of Li increases with increasing detuning as demonstrated in schun98a:optcomm , and the excitation probability for Li is modified. The dashed line in Fig. 7 gives the dependence of $`\overline{v}_{\mathrm{Li}}=(\frac{8}{3}k_BT_{\mathrm{Li}}/m_{\mathrm{Li}})^{1/2}`$ on the Li detuning as measured for our Li MOT schun98a:optcomm . As discussed in Sec. 2.3, the Li velocity determines the average relative velocity between lithium and cesium, $`\overline{v}_{\mathrm{LiCs}}\approx \overline{v}_{\mathrm{Li}}`$. Since the change in $`\overline{v}_{\mathrm{Li}}`$ essentially reproduces the measured trend of the rate coefficient, it follows that the cross section $`\sigma _{\mathrm{CsLi}}=\gamma _{\mathrm{CsLi}}/\overline{v}_{\mathrm{CsLi}}`$ for Li-induced Cs loss is independent of the Li detuning ($`\sigma _{\mathrm{CsLi}}=0.7(2)\times 10^4`$ Å² at $`\delta _{\mathrm{Cs}}=-1.5\mathrm{\Gamma }_{\mathrm{Cs}}`$). The data again indicate that the Li excitation plays no role in inelastic Li-Cs collisions. The rate coefficient $`\gamma _{\mathrm{CsLi}}=1.1(2)\times 10^{-10}`$ cm³/s at $`\delta _{\mathrm{Cs}}=-1.5\mathrm{\Gamma }_{\mathrm{Cs}}`$ is about one order of magnitude larger than the homonuclear coefficient $`\beta _{\mathrm{Cs}}=2.0(4)\times 10^{-11}`$ cm³/s measured under the same conditions but without lithium in the trap schlo98:diplom (the value of $`\beta _{\mathrm{Cs}}`$ is consistent with earlier measurements in a Cs MOT by Sesko et al. sesk89:prl ). The corresponding cross sections $`\sigma _{\mathrm{CsLi}}=0.7(2)\times 10^4`$ Å² and $`\sigma _{\mathrm{Cs}}=2.0(4)\times 10^4`$ Å², however, are of the same order of magnitude due to the much smaller relative velocities of the Cs atoms (see Sec. 2.3). To investigate the influence of optical Cs excitation, the detuning $`\delta _{\mathrm{Cs}}`$ of the Cs-trapping light is switched to a given value after interruption of the Cs loading flux.
The Li detuning is kept fixed at $`\delta _{\mathrm{Li}}=-3\mathrm{\Gamma }_{\mathrm{Li}}`$, resulting in a constant number of $`N_{\mathrm{Li},0}=5(2)\times 10^6`$ Li atoms at a density $`\widehat{n}_{\mathrm{Li}}=1.7(7)\times 10^9`$ cm⁻³ in the MOT. As shown in the left graph in Fig. 8, one observes a decrease of the rate coefficient by a factor of five for increasing detuning $`\delta _{\mathrm{Cs}}`$. At higher detuning, $`\gamma _{\mathrm{CsLi}}`$ rises again. Changing the detuning of the Cs MOT has two effects: excitation of the Cs $`6P_{3/2}`$ state depends on the detuning, and the Cs trap becomes shallower at larger detunings. In addition, the Cs temperature becomes lower at larger detuning due to polarization-gradient cooling stea92:josab , but this does not affect the rate coefficient since the relative velocity is determined by the Li temperature only. The increase of the rate coefficient at $`\delta _{\mathrm{Cs}}\lesssim -4\mathrm{\Gamma }_{\mathrm{Cs}}`$ can be attributed to the decrease of the MOT depth. The MOT might eventually become shallow enough to allow for trap loss due to Li-Cs collisions changing the Cs hyperfine structure. From a simplified yet realistic picture for the capture range of the Cs MOT lind92:pra , we expect this process to become relevant at detunings below $`-4\mathrm{\Gamma }_{\mathrm{Cs}}`$, consistent with the observed increase of $`\gamma _{\mathrm{CsLi}}`$. The decrease of $`\gamma _{\mathrm{CsLi}}`$ with detuning for $`\delta _{\mathrm{Cs}}\gtrsim -4\mathrm{\Gamma }_{\mathrm{Cs}}`$, however, must be related to the change in the Cs excitation. The average population of the Cs in the $`P_{3/2}`$ state in the MOT decreases with the detuning. As described in Sec. 2.2, the relevant inelastic processes for trap loss occur at internuclear distances around 10 Å. Due to the large relative velocities of about 1 m/s, excitation of Cs survives over an internuclear distance of about 300 Å. The Cs atoms might therefore be excited at separations where the interatomic interaction energy is not yet relevant ($`R_C\approx 100`$ Å), and still reach the inner interaction zone. The modification of the excitation probability by the interaction potential therefore plays a minor role for the probability of an excited atom reaching the inner interaction zone in the excited state. It therefore seems appropriate to expect a linear increase of $`\gamma _{\mathrm{CsLi}}`$ with the average excited-state population in the MOT. To support this picture, the right graph in Fig. 8 shows the same data as in (a), but now plotted versus the average population $`\mathrm{\Pi }_{\mathrm{Cs}^{*}}`$ of the Cs $`P_{3/2}`$ state. The excited-state population is measured as described above for Li. The rate coefficient scales in proportion to the average $`P_{3/2}`$ population, indicating that excitation relevant for the inelastic processes indeed occurs at large internuclear distances, where the modification of the energy through the quasi-molecular potential can be neglected. In addition, the rapid decrease of the rate coefficient with decreasing Cs excitation shows that inelastic Li-Cs$`^{*}`$ collisions are the main channel for Li-induced Cs trap loss. The strong decrease of the rate coefficient with increasing detuning constitutes an important difference to homonuclear collisions, where the trap loss rate is found to increase with increasing detuning sesk89:prl .
In the homonuclear case, the colliding atoms are decoupled from the light field already at distances around 1000 Å due to the long-range resonant-dipole interaction. The rate coefficient for homonuclear collisions can be increased by primarily exciting the atoms at smaller internuclear separations, i.e. at larger detunings from resonance for the attractive interaction potential, resulting in a larger survival probability. ### 4.3 Cesium-induced lithium loss The investigation of Cs-induced Li loss from the MOT proceeds similarly to the experiments on Li-induced Cs loss. Cesium is permanently loaded, and at a given moment the Li loading flux is interrupted for a measurement of the Li trap decay. It now has to be ensured that the steady-state number $`N_{\mathrm{Cs},0}`$ of trapped Cs is independent of the number of trapped Li $`N_{\mathrm{Li}}`$, i.e., the Cs loading rate has to be chosen large compared to the Li-induced Cs loss rate. By decreasing the Li loading flux, the steady-state Li particle number in the MOT is adjusted to values comparable to the largest achievable Cs particle number ($`N_{\mathrm{Li},0}\approx N_{\mathrm{Cs},0}\approx 10^6`$). Under these conditions, the Cs fluorescence shows only a marginal dependence on the number of trapped Li atoms, so that Eq. 5 can be used to analyze the data. At the corresponding low Li densities, the decay of the Li fluorescence was found to be purely exponential, indicating that quadratic Li loss can be neglected ($`\beta _{\mathrm{Li}}N_{\mathrm{Li},0}/\sqrt{8}V_{\mathrm{Li}}\ll \alpha _{\mathrm{Li},\mathrm{eff}}`$ in Eq. 5). From the energetic considerations discussed in Sec. 2.3, trap escape of a Cs atom through an inelastic Li-Cs collision has to be accompanied by the loss of the involved Li atom, since the largest share of the released energy is taken by the Li. In Fig. 9, the ratio between the loss rate coefficient for Cs-induced Li loss, $`\gamma _{\mathrm{LiCs}}`$, and the coefficient for Li-induced Cs loss, $`\gamma _{\mathrm{CsLi}}`$, is depicted as a function of the Li detuning $`\delta _{\mathrm{Li}}`$. For $`\delta _{\mathrm{Li}}\lesssim -3\mathrm{\Gamma }_{\mathrm{Li}}`$ one observes $`\gamma _{\mathrm{LiCs}}\approx \gamma _{\mathrm{CsLi}}`$, which shows that both collision partners simultaneously leave the trap. Since the Cs trap escape is essentially determined by collisions involving excited Cs, this collision channel is also the main source for Li loss. Interestingly, at smaller detunings, an additional loss channel for Li atoms opens which is not accompanied by the loss of the Cs atom. A possible process releasing sufficient energy for the escape of Li without providing enough energy for Cs is represented by inelastic collisions between Cs and Li both in the ground state (hyperfine-changing collisions, see Sec. 2.2). In particular, in the MOT nearly all ground-state Cs atoms occupy the $`6S_{1/2}(F=4)`$ level. Collisions changing the hyperfine state of the Cs would transfer around $`h\times 9`$ GHz of energy to the Li atom, which corresponds roughly to the Li trap depth at small detunings (for Li, the trap depth steadily increases with detuning in the parameter ranges considered here schun98a:optcomm ). To further investigate the hypothesis that the additional Li loss is a manifestation of hyperfine-changing collisions, we have changed the Li trap depth by square-wave modulation of the Li trapping light kawa93:pra as explained in the preceding section. At full duty cycle, the trap depth is estimated from the laser power to be around $`h\times 15`$ GHz.
Lowering the duty cycle thus reduces the trap depth sufficiently to allow for the onset of loss through hyperfine-structure change of the Cs ground state. As shown in Fig. 10 for $`\delta _{\mathrm{Li}}=-4\mathrm{\Gamma }_{\mathrm{Li}}`$, the loss rate coefficient for Cs-induced Li loss $`\gamma _{\mathrm{LiCs}}`$ drastically increases when the duty cycle is reduced below a critical value of about 40%. At a duty cycle of 20%, a rate coefficient of $`\gamma _{\mathrm{LiCs}}=5(1)\times 10^{-10}`$ cm³/s is measured, corresponding to a cross section of $`\sigma _{\mathrm{LiCs}}=3(1)\times 10^4`$ Å². The square-wave modulation method has formerly been used to identify fine-structure-changing collisions in a pure Li MOT, which release $`h\times 5`$ GHz of energy to each Li collision partner kawa93:pra . In these experiments, a sudden increase of the rate coefficient $`\beta _{\mathrm{Li}}`$ with decreasing duty cycle was observed when the duty cycle was lowered beyond the value corresponding to a $`h\times 5`$ GHz trap depth. We have performed equivalent measurements on $`\beta _{\mathrm{Li}}`$ for the same trap parameters as the data set shown in Fig. 10, but with maximum Li loading flux to achieve large numbers of trapped Li ($`N_{\mathrm{Li},0}\approx 10^8`$). This leads to a measurable influence of $`\beta _{\mathrm{Li}}`$ on the trap loss schlo98:diplom . The rate coefficient $`\beta _{\mathrm{Li}}`$ increases from $`5(2)\times 10^{-12}`$ cm³/s for duty cycles between 60% and 100% to $`1\times 10^{-10}`$ cm³/s at duty cycles below 40% (these values are consistent with rate coefficients from Li-MOT measurements by Kawanaka et al. kawa93:pra and Ritchie et al. ritch95:pra ). We find that the sudden increase in $`\beta _{\mathrm{Li}}`$ sets in at a slightly lower critical duty cycle than the increase of $`\gamma _{\mathrm{LiCs}}`$ shown in Fig. 10. This indicates that the corresponding kinetic energy gain transferred to the lithium through an inelastic ground-state Li-Cs collision must be larger than $`h\times 5`$ GHz. The only process releasing sufficient energy to explain the observations is therefore an inelastic collision changing the Cs hyperfine state. Note that inelastic Li-Cs collisions changing the Li excited-state fine structure, which would release $`h\times 10`$ GHz and which are relevant for trap loss through inelastic Li-Li collisions kawa93:pra ; ritch95:pra , are excluded by the repulsive quasi-molecular potential (see Sec. 2.3). ## 5 Conclusions Our results can be summarized in the following picture of binary inelastic Li-Cs collisions in a combined magneto-optical trap. Lithium and cesium approach each other with a mean relative velocity of about 1 m/s, which is determined by the lithium temperature. Since the MOT is operated with near-resonant light, atoms can absorb a trapping photon when the interaction energy is still small compared to $`\hbar \delta `$, i.e. at internuclear separations larger than the Condon point at about 100 Å. When the lithium absorbs a trapping photon at 671 nm, the excited Li and ground-state Cs repel each other and inelastic processes are prevented (optical shielding). The rate coefficient for trap loss by Li-Cs collisions is therefore found to be independent of the average Li excitation in the two-species MOT. When an 852 nm photon is absorbed by the Cs, Li and Cs are accelerated by the attractive molecular potential. Due to the comparatively large relative velocity, the Cs excitation survives over distances around 300 Å.
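The 300 Å survival distance invoked here is simply the distance travelled within one radiative lifetime; a one-line check:

```python
tau_cs = 30e-9            # Cs excited-state lifetime, s
v_rel = 1.0               # typical Li-Cs relative speed, m/s (set by the Li)
d = v_rel * tau_cs        # distance travelled within one radiative lifetime
print(f"survival distance ~ {d*1e10:.0f} Angstrom")   # -> 300
```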
The probability is therefore high to reach very small internuclear distances in the excited state. The excited quasi-molecular wavepacket might even oscillate for some periods in the molecular potential well before spontaneous emission occurs. Inelastic Li-Cs processes such as changes of the Cs fine-structure state or the spontaneous emission of a red-detuned photon are then likely to take place. Both processes release sufficient energy for the escape of both atoms from the trap, and our trap loss experiments do not distinguish between them. The cross section for such inelastic Li-Cs collisions scales with the average Cs excitation in the MOT, and acquires a value of $`\sigma _{\mathrm{CsLi}}=\sigma _{\mathrm{LiCs}}=0.7(2)\times 10^4`$ Å² at maximum excited-state populations around $`\frac{1}{2}`$, which corresponds to a trap loss rate coefficient of $`\gamma _{\mathrm{CsLi}}=\gamma _{\mathrm{LiCs}}=1.1(2)\times 10^{-10}`$ cm³/s. Collisions involving lithium and cesium in the ground state generally do not transfer sufficient energy to overcome the trap energy barrier. If, however, the lithium trap depth is decreased below about $`h\times 9`$ GHz, lithium atoms eventually escape from the trap after having undergone a Li-Cs collision in which the cesium changes its hyperfine ground state. The cesium atom will be retained in the trap since only 5% of the released energy is transferred to the cesium. Our measurements yield a cross section larger than $`\sigma _{\mathrm{LiCs}}=3\times 10^4`$ Å² for a ground-state Li-Cs collision with change of the Cs hyperfine state. The lower bound for the corresponding rate coefficient is $`\gamma _{\mathrm{LiCs}}=5(1)\times 10^{-10}`$ cm³/s. Homonuclear Cs trap loss collisions give about the same inelastic cross sections as Li-Cs collisions, while homonuclear Li collisions have a cross section which is more than one order of magnitude smaller. Due to the small extension of the ground-state and excited-state interaction potentials, the Li-Cs cross sections are essentially determined by the s-wave contribution. To the best of our knowledge, the short-range part of the Li-Cs molecular potential has not yet been theoretically investigated. Detailed knowledge of the fine details of the short-distance molecular potential is necessary to perform calculations on the relative importance of fine-structure changing and radiative-escape processes and to estimate trap-loss cross sections. Our investigations of inelastic processes between lithium and cesium represent an important step towards a new class of experiments with binary atomic mixtures. Such mixtures open new perspectives for collisional studies in conservative potentials like magnetic or optical traps, for the formation and trapping of cold polar molecules through, e.g., photoassociation, or for the investigation of two-species Bose condensates law97:prl . Starting from our two-species MOT, we plan to transfer the cold lithium and cesium simultaneously into a far-detuned optical dipole trap grim98:adv for the investigation of elastic Li-Cs collisions, with the prospect to sympathetically cool lithium with optically cooled cesium engl98:applphys . ###### Acknowledgements. Fruitful discussions with O. Dulieu are gratefully acknowledged. We thank D. Schwalm for his encouragement and support. This work was supported in part by the Deutsche Forschungsgemeinschaft in the frame of the Gerhard-Hess-Programm.
# Effect of magnetic field on over-doped HTc superconductors: Conflicting predictions of various HTc theories ## I Introduction In a high-accuracy $`NMR`$ experiment on near-optimally doped $`YBCO`$, Gorny et al. found that a magnetic field of $`14.8`$ Tesla shifts $`T_c`$ down by as much as $`8`$ K, while the spin pseudo-gap remains unaffected (as measured by $`\left(T_1T\right)^{-1}`$). They concluded that this is evidence that ”…hence the pseudo-gap is unrelated to superconducting fluctuations”. Here, we make a testable prediction: that over-doped $`HT_c`$ samples (which normally do not show a spin pseudo-gap state), when subjected to a moderate magnetic field, will unprecedentedly reveal a spin pseudo-gap above $`T_c\left(B\right)`$, starting from approximately the original $`T_c\left(B=0\right)`$ of zero magnetic field. Our prediction is based both on a new phenomenological interpretation of the experiments (in contrast with the original interpretation of Gorny et al.), and further on a microscopic stripes theoretical approach. As elaborated below, the above prediction is not shared by several other current HTc theories. The general difference in the predictions of various theoretical approaches for the effect of a magnetic field on over-doped HTc cuprates can be understood from the difference in the position of the spin pseudo-gap line, $`T^{*}`$, in the two theoretical phase diagrams depicted in Figure-1. Therefore, repeating the experiments of Gorny et al. on over-doped HTc samples constitutes a crucial experiment to determine the proper form of the HTc phase diagram in the over-doped region, and hence to provide further theoretical constraints. ## II Two phase diagrams: Conflicting predictions The spin pseudo-gap is often referred to as a peculiarity of underdoped and near-optimal doped HTc superconductors, where the gap evolves smoothly as the temperature is increased through $`T_c`$ and remains significant up to a cross-over temperature $`T^{*}>T_c`$. This is in sharp contrast to the behavior of over-doped cuprates (doping $`x>0.2`$) and conventional superconductors, where the gap closes at $`T\approx T_c`$. Hence, it is common to find references to the over-doped cuprates as more conventional (even by theorists who otherwise advocate non-conventional mechanisms). At the under-doped and optimal regions of doping ($`0.05<x<0.2`$), there is a growing agreement that two cross-over temperatures can be identified. A doping-dependent cross-over temperature $`T_0\left(x\right)`$ is experimentally marked by a broad maximum in the spin susceptibility $`\chi _0\left(T\right)`$. In addition, below $`T_0\left(x\right)`$, $`T_1T`$ decreases linearly in temperature and there is a downward deviation of the in-plane resistivity $`\rho _{ab}\left(T\right)`$. At a lower temperature $`T^{*}\left(x\right)`$, a second cross-over occurs, when a pseudo-gap feature appears in NMR, ARPES, neutron scattering, and specific heat measurements. Below $`T^{*}\left(x\right)`$, $`\chi _0\left(T\right)`$ continues to decrease even more rapidly down to $`T_c`$, but $`T_1T`$ exhibits a minimum followed by an increase as the temperature is lowered further, which is suggestive of spin gap formation. (Sometimes both cross-overs are referred to as ”pseudo-gaps”, which remains a cause for confusion.) In the over-doped region, the continuation of the pseudo-gap lines below $`T_c`$ in Figure-1 should be understood as ”what would be if there were no superconducting phase”, which is exactly the situation a magnetic field creates.
The difference between phase diagrams (A) and (B) in Figure-1 is in the over-doped region, and the proposed NMR experiment will reveal the correct one (and thus pose a challenge to the other theories). (a) Figure-1A is the one most commonly found in the literature. Near an over-doping point $`x_{over}\approx 0.2`$, where $`T^{*}\approx T_0=T_c`$, the pseudo-gap lines end sharply (i.e., cross the $`T_c`$ line). Explicit examples of such phase diagrams are currently drawn by Pines and collaborators (who advocate a spin-fluctuation-exchange mechanism), and by the Rome group of Castellani, DiCastro, Grilli and collaborators (who advocate a quantum-critical-point fluctuations mechanism). In addition, if the pseudo-gap below $`T^{*}`$ is not related to pairing, then there is no reason for it to be correlated with $`T_c`$ in the over-doped region. The choice of the $`x_{over}\approx 0.2`$ point is not arbitrary. There are various experimental indicators for a significant qualitative change in the cuprates beyond this point; a prime example is the experiment of Boebinger which implies a metal-insulator quantum phase transition. The physical significance of each cross-over line (and of the $`x_{over}\approx 0.2`$ point) is, of course, varying between theories. For over-doping $`x>x_{over}\approx 0.2`$ in Figure-1A, it means that $`T^{*}\approx T_0<T_c`$. Therefore, if Figure-1A is the correct phase diagram, then for an over-doped HTc sample under a magnetic field $`B`$ of about 8–14 Tesla, the following predictions are implied: 1. Though $`T_c`$ will be suppressed by a few degrees, there will remain no signature of spin pseudo-gap behavior above $`T_c\left(B\right)`$. 2. In particular, $`\chi _0\left(T\right)`$ will continue to increase with decreasing temperature down to $`T_c`$, in sharp contrast with pseudo-gap behavior, where below $`T^{*}`$ it decreases rapidly down to $`T_c`$. (b) In contrast, we now introduce arguments in favor of the phase diagram depicted in Figure-1B, where the $`T^{*}`$ cross-over line merges continuously with $`T_c`$, but it is still ”there” (as a pairing mechanism) and can be revealed in the appropriate NMR experiment. To explain the pseudo-gap phenomenon below $`T^{*}`$, a general argument based on superconducting phase fluctuations was introduced by Emery & Kivelson and elaborated by Millis and collaborators . As depicted in Figure-2, the superconducting transition temperature $`T_c`$ is determined by the lower of two parameters: the pairing temperature $`T_{pair}\approx \mathrm{\Delta }\left(0\right)/2`$, and the phase ordering temperature $`T_\theta `$. The establishment of a significant pairing amplitude is determined by the pairing energy scale, which is given by the spin gap $`\mathrm{\Delta }\left(T\right)`$. The classical phase ordering temperature $`T_\theta `$ is obtained by considering the disordering effects of only the classical phase fluctuations, as $`T_\theta \sim V_0`$, where $`V_0=\frac{\hbar ^2n_s(0)a}{4m^{*}}`$ is the zero-temperature value of the “phase stiffness” (which sets the energy scale for the spatial variation of the superconducting phase). In conventional weak-coupling BCS superconductors $`T_\theta \gg T_{pair}`$ and hence $`T_c=T_{pair}`$. Note that the phase stiffness can be reduced either by increasing the effective quasiparticle mass (i.e., when $`m^{*}\gg m_e`$), or by a low superfluid density $`n_s(0)`$. It is argued that in the HTc cuprates $`n_s(0)`$ is indeed low enough that phase fluctuations become important (see further discussion below).
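To see that $`V_0`$ can indeed compete with $`T_c`$ in the cuprates, one may evaluate the phase stiffness for illustrative, assumed numbers (the values below are representative guesses, not fitted parameters):

```python
hbar, kB, me = 1.0546e-34, 1.381e-23, 9.109e-31

# Illustrative cuprate-like inputs (assumptions only):
n_s = 1e27       # superfluid density, m^-3  (~1e21 cm^-3)
a = 1e-9         # length scale entering V0, ~10 Angstrom
m_star = 2 * me  # effective quasiparticle mass

V0 = hbar**2 * n_s * a / (4 * m_star)      # phase stiffness, in Joules
print(f"T_theta ~ V0/kB ~ {V0 / kB:.0f} K")  # ~1e2 K, comparable to Tc
```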
The density of mobile charge carriers, and hence the superfluid density $`n_s(0)`$, naturally increases with increased doping $`x`$. Figure-2A depicts the resulting theoretical phase diagram in the absence of a magnetic field (and neglecting competition with other order parameters such as AFM). In the under-doped and optimal-doped regions, where $`T_{pair}>T_\theta `$, $`T_c`$ is determined by the phase ordering temperature $`T_\theta `$. In particular, there is a temperature range $`T_c<T<T_{pair}`$ where there is significant pairing amplitude without global phase coherence. Therefore, within this framework we make the identification of the spin pseudo-gap temperature $`T^{*}=T_{pair}`$. In contrast, in the over-doped region, $`T_c`$ is determined by $`T_{pair}`$, i.e., by the pairing energy scale as in conventional weak-coupling BCS. Focussing here on the over-doped region, one would at first be led to the conclusion that the above picture entails that the effect of a magnetic field on an over-doped sample would be similar to the case of conventional superconductors. It is important to understand that the microscopic pairing mechanism and the phase coherence mechanism are distinct. The effect of a magnetic field is sensitive to microscopic details which are not part of the macroscopic phase fluctuation theory. In weak-coupling BCS theory, a significant part of the pairing energy is an outcome of the overlap between the pairs (due to the large pair coherence length), which leads to coherent scattering among many pairs within the single-pair coherence length. In this sense, phase coherence (on a local scale) between pairs also affects the pairing scale (i.e., the gap magnitude) even though phonons are not affected. This is not the case in HTc superconductors, where the $`d`$-wave pair coherence length is on the order of one lattice spacing. Interpreting the experimental findings of Gorny et al. within the above framework, we are led to the statement that a magnetic field suppresses the superconducting phase coherence, while the pairing mechanism (leading to the spin pseudo-gap) remains much less affected. (At present we draw this conclusion phenomenologically, irrespective of microscopic origins. A candidate microscopic theory leading to such phenomena will be discussed below.) Thus, the effect of a moderate magnetic field on the phase diagram is as depicted in Figure-2B. The down-shift of $`T_\theta `$ by a magnetic field requires some explanation: In the absence of a magnetic field, the phase coherence temperature $`T_\theta `$ is a result only of phase fluctuations. Within Ginzburg-Landau theory, the magnetic field energy density is competing with the condensation energy, i.e., with phase coherence. Therefore, in the presence of a magnetic field, $`T_\theta `$ is determined by adding the phase fluctuation contribution on top of the usual phase-coherence-frustrating effect of the magnetic field. For an over-doped HTc sample in a magnetic field, the following predictions are implied: 1. Under a sufficiently strong magnetic field $`H`$ (e.g., 8–14 Tesla), a spin pseudo-gap will be revealed at $`T^{*}\approx T_c\left(H=0\right)`$ above $`T_c\left(H\right)`$, i.e. at approximately the original $`T_c`$ in the absence of a magnetic field. This is an unprecedented prediction. 2. The dependence of $`T_c\left(H\right)`$ on the magnetic field will also be quite peculiar, and different from what is observed in under-doped samples.
Initially, as long as $`T_\theta \left(H\right)>T_{pair}\left(H\right)=T_c\left(H\right)`$, there will be very little suppression of $`T_c\left(H\right)`$, owing to the relatively small suppression of the pairing mechanism. Yet, beyond a critical magnetic field defined by $`T_\theta \left(H_{cr}\right)=T_{pair}\left(H_{cr}\right)=T_c\left(H_{cr}\right)`$, there will be a more rapid suppression of $`T_c`$ with increased magnetic field (since now $`T_c\left(H\right)=T_\theta \left(H\right)<T_{pair}\left(H\right)`$).

We add the remark that the effect of a magnetic field on phase coherence and pair fluctuation effects may depend on microscopic details. In a weak-coupling BCS model of a d-wave superconductor, Esching et al. concluded that a magnetic field also reduces the fluctuation corrections to $`\left(T_1T\right)^{-1}`$, i.e., leads also to a lower $`T^{*}`$. Similarly, Pines (, page 14) states that moderate magnetic fields will have a dephasing effect on the pairing channel (via AFM spin fluctuation exchange) and thus significantly suppress $`T^{*}`$ (in contrast with our phenomenological assumption above, and with the stripe model described below).

Emery, Kivelson and collaborators have elaborated a theoretical approach to HTc which is based on coupled fluctuating spin and charge stripes in real space. The stripes are local phase-separated electronic structures made of quasi-one-dimensional hole-rich conducting electronic filaments (referred to as "hole-lines"); confined between them are narrow ladder-like half-filled regions (which thus have substantial AFM correlations). The systematics of phase fluctuations, mentioned above, suggests that pairing on a high energy scale does not require interaction between metallic charge stripes. Instead, pairing is established first on single stripes, independently, at a temperature $`T^{*}`$ (the single stripe is modelled as a 1D electron gas coupled to the various low-energy states of an insulating ladder-like environment). Below $`T^{*}`$, each charge stripe can be regarded as a spin-gapped, one-dimensional extended "grain" with enhanced pairing. In turn, $`T_c`$ is controlled by the Josephson coupling required to establish phase coherence across an array of stripes.

Another way of looking at the situation is to compare the superfluid density $`n_s(0)`$ with the number of particles $`n_P`$ involved in pairing. In BCS theory, at $`T=0`$, $`n_P`$ is of order $`\mathrm{\Delta }_0/E_F`$ (where $`E_F`$ is the Fermi energy) and $`n_s(0)`$ is given by all the particles in the Fermi sea; i.e. $`n_P\ll n_s(0)`$. For Bose condensation $`n_P=n_s(0)`$. In the stripes model of high temperature superconductors, $`n_P\gg n_s(0)`$: most of the electrons in the Fermi sea participate in the spin gap below $`T^{*}`$ (since both the electronic ladder environment and the hole-lines develop the spin gap), but the superfluid density of the doped insulator is small, since the mobile charge density, proportional to $`x`$, includes only the charges in the hole-lines. Though the stripes are extended objects, their effective one-dimensionality entails that a magnetic field does not significantly alter the electron pairing dynamics on individual stripes, and thus $`T^{*}\left(H\right)`$ is predicted to remain almost constant. Similarly, the lack of large fluctuation diamagnetism between $`T^{*}`$ and $`T_c`$ is readily understood, since an applied magnetic field does not drive any significant orbital motion until coherence develops in two- (and ultimately three-) dimensional patches, close to $`T_c`$.
Below $`T^{*}`$, Josephson coupling between stripes leads to the establishment of global phase coherence. Hence, as in conventional granular superconductors, a magnetic field suppresses the phase coherence between stripes. In conclusion, the effect of a magnetic field on the microscopic dynamics of the stripes model leads to the same predictions which were deduced above from a phenomenological re-interpretation of the experiments of Gorny et al., owing to the separation of the pairing and phase coherence scales.

As an additional remark, notice that the under-doped end of the $`T^{*}`$ line is drawn as going down sharply towards zero below $`x=0.04`$. This is also a testable consequence of the spin-gap-proximity-effect mechanism in the stripes approach. Between doping $`x=0.06`$ and $`x=0.02`$, we may envision two extreme scenarios leading to the same consequence for the spin gap of the effective AFM ladder environment (between each two hole-rich lines), which in turn affects the total spin gap: 1) If the hole-line filling fraction remains constant, then the AFM ladder triples its width, which implies that the theoretical maximum spin gap decreases by a factor $`e^{-3}\approx \frac{1}{20}`$. 2) If the width of the ladder environment remains constant, then the hole-line filling "overflows" and dissolves the stripe structure.

## III Summary

The NMR experiment of Gorny et al. indicates that a magnetic field shifts $`T_c`$ down while the spin pseudo-gap (whatever its origin) remains relatively unaffected (i.e., $`T^{*}\approx `$ constant). We point out that in any model in which superconducting pairing and phase ordering are governed by distinct physics, distinct dependences on parameters of the pairing scale and of the superconducting $`T_c`$ are to be expected, in conflict with the conclusion of Gorny et al. If the pseudo-gap is associated with local pairing, then it implies that a magnetic field only weakly suppresses the pairing energy scale (unlike weak-coupling BCS).

Current theoretical approaches were conceived to agree with the known experimental results in under-doped and optimally doped cuprates. Yet their implied characterizations of the over-doped region ($`x>0.2`$) are distinct. As depicted in Figure-1, the difference is highlighted by the continuation of the $`T^{*}`$ line in the over-doped region. In this paper we argued that NMR experiments can reveal those differences. In particular, we present the following argument and prediction: (1) Let us assume that there is only one and the same mechanism of HTc superconductivity in all the cuprates and over the whole doping range (from under- to over-doping). (2) Assume that the spin pseudo-gap below $`T^{*}`$ is indeed a precursor of the superconducting gap, i.e., an outcome of a developed local pairing amplitude in the absence of global phase coherence. (3) The over-doped cuprates are characterized by $`T_c=T^{*}`$ in the absence of an external magnetic field. (4) It follows from the experiment of Gorny et al. that a magnetic field suppresses the superconducting phase coherence temperature $`T_c`$, while the pairing mechanism remains much less affected (as measured by $`\left(T_1T\right)^{-1}`$). Therefore, we predict that in an over-doped sample ($`x>0.2`$) in a moderate magnetic field (8-14 Tesla) a spin pseudo-gap will be revealed, starting from approximately the original $`T_c`$, at $`T^{*}\approx T_c\left(H=0\right)`$, above $`T_c\left(H\right)`$ (while there was no pseudo-gap in the absence of a magnetic field).
The above prediction is a natural consequence of the spin-gap-proximity-effect mechanism advanced by Emery, Kivelson and Zachar, but it is not shared by several other leading theoretical approaches. Hence, the result of performing the suggested NMR experiment can serve to provide new theoretical constraints. On the one hand, if the pseudo-gap phenomenon proves to be only a curiosity of under-doping, then it does not reflect an essential part of the pairing mechanism. On the other hand, if the suggested experiment reveals a hitherto unobserved pseudo-gap region also in over-doped samples, then it will strengthen the view that the under-doped materials exemplify the essential physics of HTc, which only becomes progressively obscured (owing to the similar energy scales of otherwise distinct phenomena) in optimally doped and over-doped samples, and not vice versa.
# Shock formation for forced Burgers equation and application

## 1 Introduction

Usually it is not a simple task to prove shock formation for quasi-linear equations. We refer to for an exposition of techniques. For the forced Burgers equation there are usually some initial data which do not lead to the formation of shocks. In this paper we use an integral-geometry approach to prove shock formation for at least some leaves of the foliation composed of the graphs of solutions of the forced Burgers equation. This approach is inspired by Hopf's famous theorem in Riemannian geometry, which can be naturally interpreted as a result on the formation of shocks.

Acknowledgements. The paper was written during my stay at the University of Cambridge. I would like to thank Robert MacKay for his hospitality and for useful discussions. I would also like to thank Yakov Sinai for helpful discussions on his paper on Burgers equations.

## 2 The main result

Consider the inviscid Burgers equation
$$f_t+ff_q+F=0$$ (2.1)
We shall assume that the force $`F(q,t)=u_q(q,t)`$, where the potential function $`u(q,t)`$ satisfies the following requirements:

(2.2a) $`u`$ is of class $`C^2`$ and 1-periodic in $`q`$, $`u(q+1,t)=u(q,t)`$;

(2.2b) for a positive constant $`K`$, $`\int _0^1u_q^2(q,t)\,dq<K`$ holds for all $`t`$.

We consider initial values for $`f(q,t)`$ depending on a parameter $`\alpha `$,
$$f_\alpha (q,t)|_{t=0}=\phi (\alpha ,q)$$
where $`\phi `$ is a $`C^2`$-function satisfying:

(2.3a) $`\phi (\alpha ,q)`$ is 1-periodic in $`q`$, for any $`\alpha `$;

(2.3b) for any fixed $`q`$, the mapping $`\alpha \mapsto \phi (\alpha ,q)`$ is a $`C^2`$-diffeomorphism of $`R`$.

The geometrical meaning of (2.3) is that the graphs of $`\phi (\alpha ,q)`$ form a $`C^2`$-foliation of the cylinder $`S^1\times R`$. That is why we refer to such initial data as foliated initial data.

###### Theorem 1

Let $`u(q,t)`$ be the potential of a non-zero force $`F`$ satisfying (2.2a,b). Then for any foliated initial data $`\phi (\alpha ,q)`$ satisfying (2.3a,b) there always exist values of $`\alpha `$ such that the corresponding solutions of the Burgers equation (2.1) develop shock singularities in a finite (positive or negative) time.

###### Remark 1

The only case when shocks are not created is the case of zero force with initial data $`\phi (\alpha ,q)=\phi (\alpha )`$. It should be mentioned that there are many potentials satisfying (2.2a,b) such that some solutions of (2.1) do not form shocks. For instance, this is always the case if $`u`$ is smooth enough and periodic in both $`q`$ and $`t`$: in this case KAM theory applies and yields many solutions of (2.1) periodic in $`q`$ and $`t`$.

The next result shows that it is not necessarily true that the shocks described by Theorem 1 always appear in forward time.

###### Theorem 2

Let $`u`$ be any $`C^2`$-function periodic in $`q`$ satisfying the following:

(2.4a) $`u(q,t)\equiv 0`$ for $`t\ge T`$, where $`T>0`$;

(2.4b) for all $`0\le t\le T`$, $`|u_{qq}(q,t)|<\left(\frac{\pi }{T}\right)^2`$.

Then there exist foliated initial data at $`t=0`$ for (2.1) such that all the shocks are created in backward time.

## 3 Proofs

For the proof of Theorem 1 we will need the following

###### Lemma 1

Let $`V`$ be a $`C^1`$-vector field on $`R^2`$ with the following property:
$$\mathrm{div}\,V\ge C\,\|V\|^2$$ (3.1)
for a positive constant $`C`$. Then $`V\equiv 0`$.

###### Remark 2

This lemma does not generalise to higher dimensions: there are smooth vector fields on $`R^n`$ for $`n\ge 3`$ satisfying (3.1) everywhere.
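Before turning to the proofs, the mechanism of Theorem 1 can be probed numerically: along a characteristic of (2.1) one integrates the Newton equations together with their linearisation (the Jacobi field $`\xi =\partial q/\partial \beta `$), and a shock forms on a leaf at the first time $`\xi `$ vanishes. The sketch below does this for one illustrative choice of potential and initial slope; both are assumptions for demonstration, not objects from the theorems.

```python
import math

# Characteristics of f_t + f f_q + u_q = 0:  dq/dt = p,  dp/dt = -u_q(q, t).
# Along a characteristic the Jacobi field obeys  xi'' + u_qq(q, t) xi = 0,
# and a shock (crossing of characteristics) occurs when xi first vanishes.
# The potential u = 0.5 sin(2 pi q) below is an illustrative assumption.

def u_q(q, t):  return 0.5 * 2 * math.pi * math.cos(2 * math.pi * q)
def u_qq(q, t): return -0.5 * (2 * math.pi) ** 2 * math.sin(2 * math.pi * q)

def shock_time(q0, p0, slope0, dt=1e-4, t_max=50.0):
    """Return the first time the Jacobi field xi vanishes, starting from
    xi(0) = 1, xi'(0) = slope0 = f_q(q0, 0), or None if none is found."""
    q, p, xi, eta, t = q0, p0, 1.0, slope0, 0.0
    while t < t_max:
        p -= dt * u_q(q, t)          # semi-implicit Euler step
        q += dt * p
        eta -= dt * u_qq(q, t) * xi
        xi += dt * eta
        t += dt
        if xi <= 0.0:
            return t                 # gradient blow-up: shock
    return None

print(shock_time(0.75, 0.0, -1.0))   # shock near t ~ 0.30 for this leaf
print(shock_time(0.75, 0.0, 0.0))    # t ~ 0.35: shock driven purely by the
                                     # force, even for a locally flat leaf
```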
Proof of Lemma 1. Integrate (3.1) over the circle $`S_r`$. We obtain
$$\int _{S_r}\mathrm{div}\,V\,d\omega _r\ge C\int _{S_r}\|V\|^2\,d\omega _r$$ (3.2)
where $`d\omega _r`$ is the standard measure on $`S_r`$. The left-hand side of (3.2) can easily be written in the form
$$\int _{S_r}\mathrm{div}\,V\,d\omega _r=\frac{d}{dr}\int _{B_r}\mathrm{div}\,V\,d(\mathrm{vol})=\frac{d}{dr}\left(\int _{S_r}\langle V,n\rangle \,d\omega _r\right)$$ (3.3)
where $`\partial B_r=S_r`$ and $`n`$ is the unit normal to $`S_r`$ (the second equality is the divergence theorem). The right-hand side of (3.2) can be estimated by the Cauchy-Schwarz inequality:
$$\int _{S_r}\|V\|^2\,d\omega _r\ge \left(\int _{S_r}\langle V,n\rangle \,d\omega _r\right)^2\Big/\int _{S_r}\|n\|^2\,d\omega _r=\frac{1}{2\pi r}\left(\int _{S_r}\langle V,n\rangle \,d\omega _r\right)^2$$ (3.4)
Combining (3.2) with (3.3) and (3.4) we obtain
$$\phi ^{\prime }(r)\ge \frac{C}{2\pi r}\,\phi (r)^2,\qquad \text{where }\phi (r)=\int _{S_r}\langle V,n\rangle \,d\omega _r.$$ (3.5)
It is easy to see that the only solution of (3.5) which is finite for all $`r>0`$ is $`\phi \equiv 0`$: notice that $`\phi (r)=\int _{B_r}\mathrm{div}\,V\,d(\mathrm{vol})\ge 0`$ by (3.1), and if $`\phi (r_0)>0`$ for some $`r_0`$, then integrating (3.5) forces $`\phi `$ to blow up at a finite radius. But then $`V\equiv 0`$, by (3.3) and (3.2). $`\mathrm{}`$

Proof of Theorem 1. The proof goes by contradiction. Let $`\phi (\alpha ,q)`$ be foliated initial data for (2.1) which do not lead to the formation of shocks. Note that the characteristics of (2.1) are given by the Newton equations
$$\dot{q}=p,\qquad \dot{p}=-u_q$$ (3.6)
The periodicity assumption (2.2a) implies that the flow $`g^t`$ of (3.6) is complete. Then the graphs $`\left\{p=f_\alpha (q,t)\right\}`$ form a $`C^2`$-foliation of the space $`R(p)\times S^1(q)\times R(t)`$. Define the function $`\omega (p,q,t)`$ by the rule
$$\omega (f_\alpha (q,t),q,t)=\frac{\partial f_\alpha }{\partial q}(q,t).$$
Then $`\omega `$ is $`C^1`$, and it is easy to see that the following equation holds true:
$$\omega _t+p\,\omega _q-u_q\,\omega _p+\omega ^2+u_{qq}=0$$ (3.7)
Integrating (3.7) with respect to $`q`$ over $`S^1`$ we obtain
$$\frac{\partial }{\partial t}\int \omega \,dq-\frac{\partial }{\partial p}\int \omega \,u_q\,dq=-\int \omega ^2\,dq$$ (3.8)
Denote $`V_1(p,t)=-\int \omega \,dq`$, $`V_2(p,t)=\int \omega \,u_q\,dq`$ and set $`V=(V_1,V_2)`$. Then equation (3.8) implies that the field $`V`$ on the plane $`R^2(t,p)`$ satisfies
$$\mathrm{div}\,V=\int \omega ^2\,dq$$ (3.9)
The Cauchy-Schwarz inequality applied to the right-hand side of (3.9), together with the assumption (2.2b), implies that the field $`V`$ meets the assumption (3.1) of the lemma. But then $`V`$ vanishes identically, and so does $`\omega `$ (by (3.9)) and also $`u_q`$ (by (3.7) and periodicity). This completes the proof. $`\mathrm{}`$

Proof of Theorem 2. Fix a number $`\alpha `$ and consider the family $`M_\alpha `$ consisting of those characteristics which for $`t\ge T`$ can be written as
$$q(t,\beta )=\alpha (t-T)+\beta $$ (3.10)
It follows that $`M_\alpha `$ is ordered and defines a smooth foliation of the semi-cylinder $`S^1\times \left\{t\ge 0\right\}`$. Indeed, the Jacobi field $`\xi _\beta (t)=\frac{\partial q}{\partial \beta }(t,\beta )`$ satisfies the linearised equation
$$\ddot{\xi }_\beta +u_{qq}(q(t,\beta ),t)\,\xi _\beta =0$$ (3.11)
with $`\dot{\xi }_\beta (T)=0`$ by (3.10). Comparing the equation (3.11) with $`\ddot{\xi }+\left(\frac{\pi }{T}\right)^2\xi =0`$ on the segment $`[0,T]`$, one immediately concludes by (2.4b) that $`\xi _\beta (t)\ne 0`$ for all $`t`$ in $`[0,T]`$. This implies that $`\frac{\partial q}{\partial \beta }(t,\beta )>\mathrm{const}>0`$, and thus $`M_\alpha `$ is a smooth foliation. To each $`M_\alpha `$ there naturally corresponds the solution $`f_\alpha (q,t)`$ of (2.1) defined by the rule
$$f_\alpha (q(t,\beta ),t)=\frac{\partial q}{\partial t}(t,\beta ).$$
The family of solutions $`f_\alpha (q,t)`$ defines the foliated initial data $`\phi _\alpha (q)=f_\alpha (q,0)`$, for which, obviously, the solutions exist for all positive time.
The proof is completed. $`\mathrm{}`$

## 4 An Application

Consider the quasi-linear system $`U_t=A(U)U_q`$ for $`U=(u_1(q,t),\mathrm{},u_n(q,t))`$ of the following form:
$$(u_k)_t=(n-k+1)\,u_{k-1}(u_1)_q-(u_{k+1})_q\qquad \text{for }k=1,\mathrm{},n$$ (4.1)
where $`u_{n+1}\equiv 0`$ and $`u_0`$ is a constant parameter. This system appears naturally in the study of polynomial integrals of the Hamiltonian system (3.6). It turns out that (4.1) has a remarkable Hamiltonian form and infinitely many conservation laws. It is an open question whether all smooth solutions of (4.1) defined on the whole cylinder $`S^1(q)\times R(t)`$ are simple-wave solutions (see for partial results). We apply Theorem 1 to the elliptic regimes of (4.1), i.e. to those solutions for which $`A(U)`$ has no real eigenvalues ($`n`$ is automatically even in this case). We shall report on the strictly hyperbolic case elsewhere.

Let me start with an example.

Example. In the case $`n=2`$, the system has the form
$$\left(\begin{array}{c}u_1\\ u_2\end{array}\right)_t=\left(\begin{array}{cc}2u_0& -1\\ u_1& 0\end{array}\right)\left(\begin{array}{c}u_1\\ u_2\end{array}\right)_q$$ (4.2)
For a solution $`U=(u_1,u_2)`$ lying in the elliptic domain, i.e. satisfying $`u_1(q,t)>u_0^2`$, introduce the function
$$E(t)=\int _{S^1}(u_1)^\gamma \,dq\qquad \text{for }\gamma \in (0,1).$$ (4.3)
Then a direct computation using (4.2) yields
$$\ddot{E}(t)=\gamma (\gamma -1)\int _{S^1}(u_1)^{\gamma -2}\left((u_1)_t^2-2u_0\,(u_1)_t(u_1)_q+u_1\,(u_1)_q^2\right)dq$$
Since $`u_1>u_0^2`$, the integrand is non-negative, and hence, by the choice of $`\gamma `$ in $`(0,1)`$ (so that $`\gamma (\gamma -1)<0`$), the function $`E(t)`$ is a concave positive function on the whole line; such a function must be constant. Thus $`\ddot{E}\equiv 0`$, and then $`u_1,u_2`$ are constants.

For higher $`n`$ we prove the following:

###### Theorem 3

Let $`n`$ be even and greater than two, and let $`U=(u_1(q,t),\mathrm{},u_n(q,t))`$ be a smooth solution of (4.1) defined on $`S^1(q)\times R(t)`$ satisfying:

(4.3a) $`U`$ is such that the matrix $`A(U)`$ has no real eigenvalues;

(4.3b) $`\int _{S^1}u_1^2(q,t)\,dq<K`$, for some positive constant $`K`$.

Then $`U\equiv \mathrm{const}`$.

Proof of Theorem 3. For a solution $`U`$ introduce the polynomial function
$$F=\frac{1}{n+1}p^{n+1}+u_0p^n+u_1p^{n-1}+\mathrm{}+u_n$$
It can easily be checked that the system (4.1) expresses the fact that the function $`F(p,q,t)`$ satisfies the equation
$$F_t+pF_q-(u_1)_qF_p=0$$
i.e. $`F`$ has constant values along the Hamiltonian flow of (3.6) (with $`u=u_1`$). Moreover, it turns out that the characteristic polynomial of $`A(U)`$ satisfies
$$\mathrm{det}(A(U)-\lambda I)=F_p(\lambda ,u_0,u_1,\mathrm{},u_n)$$
But then, by the assumption (4.3a), $`F_p`$ does not vanish, and hence the levels of $`F`$ determine a foliation consisting of graphs of solutions of (2.1) with the potential $`u_1(q,t)`$. It then follows from Theorem 1 that $`U`$ must be constant. $`\mathrm{}`$

## 5 Open questions

We formulate here some natural open questions.

1. It is not clear whether the growth condition (2.2b) is really essential for Theorem 1 to be true. Moreover, since Theorem 1 is used for the proof of Theorem 3, we had to assume (4.3b). But the argument in the Example indicates that this assumption may probably be omitted.

2. The lemma used for the proof of Theorem 1 does not generalise to higher dimensions. It would be interesting to find other integral-geometric tools applicable in higher dimensions. One such tool was suggested in for potentials periodic both in space and in time.

3.
It is important to understand whether there exist non-zero potential forces satisfying (2.2a,b) such that all orbits have no conjugate points. Our method does not imply that the force $`F`$ must vanish in this case, though it comes close to that. Such a dichotomy is well known in this type of question (see for example ).
# Meron-Cluster Solution of Fermion Sign Problems

## Abstract

We present a general strategy to solve the notorious fermion sign problem using cluster algorithms. The method applies to various systems in the Hubbard model family as well as to relativistic fermions. Here it is illustrated for non-relativistic lattice fermions. A configuration of fermion world-lines is decomposed into clusters that contribute independently to the fermion permutation sign. A cluster whose flip changes the sign is referred to as a meron. Configurations containing meron-clusters contribute $`0`$ to the path integral, while all other configurations contribute $`1`$. The cluster representation describes the partition function as a gas of clusters in the zero-meron sector.

preprint: DUKE-TH-99-183, MIT-CTP 2821/99

The numerical simulation of fermions is a notorious problem that hinders progress in understanding high-temperature superconductivity, QCD at non-zero chemical potential, and many other important problems in physics. One of the main problems originates from the minus-signs associated with Fermi statistics, which prevent us from interpreting the Boltzmann factor in a fermionic path integral as a positive probability. When the sign of the Boltzmann factor is incorporated in measured observables, the fluctuations in the sign give rise to dramatic cancellations. Especially for large systems at low temperatures, this leads to relative statistical errors that are exponentially large in both the volume and the inverse temperature. This makes it impossible in practice to study such systems with standard numerical methods.

Here, for the first time, we completely eliminate a severe sign problem in the simulation of a non-relativistic system of interacting lattice fermions using a cluster algorithm. The solution of the problem proceeds in two steps. The idea of the first step is to use cluster algorithm techniques to reduce the problem of canceling many contributions $`\pm 1`$ to the problem of averaging over non-negative contributions $`0`$ and $`1`$. This step solves one half of the sign problem, as discussed below. In large volumes and at small temperatures one still generates vanishing contributions to the average sign most of the time, and only very rarely does one encounter a contribution $`1`$. In order to solve the other half of the problem, a second step is necessary which guarantees that contributions $`0`$ and $`1`$ are generated with similar probabilities. The idea behind the second step is to include a Metropolis decision in the process of cluster decomposition. The two basic ideas behind our algorithm are general and apply to a variety of systems. In this paper, we illustrate our method for a simple model which serves as a testing ground for the new ideas.

Let us consider a fermionic path integral $`Z_f=\sum _n\mathrm{Sign}[n]\,\mathrm{exp}(-S[n])`$ over configurations $`n`$ with a Boltzmann weight of sign $`\mathrm{Sign}[n]=\pm 1`$ and magnitude $`\mathrm{exp}(-S[n])`$. Here $`S[n]`$ is the action of a corresponding bosonic model with partition function $`Z_b=\sum _n\mathrm{exp}(-S[n])`$. A fermionic observable $`O[n]`$ is obtained in a simulation of the bosonic ensemble as
$$\langle O\rangle _f=\frac{1}{Z_f}\underset{n}{\sum }O[n]\,\mathrm{Sign}[n]\,\mathrm{exp}(-S[n])=\frac{\langle O\,\mathrm{Sign}\rangle }{\langle \mathrm{Sign}\rangle }.$$ (1)
The average sign in the simulated bosonic ensemble is
$$\langle \mathrm{Sign}\rangle =\frac{Z_f}{Z_b}=\mathrm{exp}(-\beta V\mathrm{\Delta }f).$$ (2)
The last equality (valid for large $`\beta V`$) points to the heart of the sign problem.
The expectation value of the sign is exponentially small in both the volume $`V`$ and the inverse temperature $`\beta `$, because the difference between the free energy densities $`\mathrm{\Delta }f=f_f-f_b`$ of the fermionic and bosonic systems is necessarily positive. Even in an ideal simulation of the bosonic ensemble which generates $`N`$ completely uncorrelated configurations, the relative statistical error of the sign (again for large $`\beta V`$) is
$$\frac{\mathrm{\Delta }\mathrm{Sign}}{\langle \mathrm{Sign}\rangle }=\frac{\sqrt{\langle \mathrm{Sign}^2\rangle -\langle \mathrm{Sign}\rangle ^2}}{\sqrt{N}\,\langle \mathrm{Sign}\rangle }=\frac{\mathrm{exp}(\beta V\mathrm{\Delta }f)}{\sqrt{N}}.$$ (3)
Here we have used $`\langle \mathrm{Sign}^2\rangle =1`$. To determine the average sign with sufficient accuracy one needs to generate on the order of $`N=\mathrm{exp}(2\beta V\mathrm{\Delta }f)`$ configurations. For large volumes and small temperatures this is impossible in practice.

It is possible to solve one half of the problem if one can match all contributions $`-1`$ with $`1`$ to give $`0`$, such that only a few unmatched contributions $`1`$ remain. Then effectively $`\mathrm{Sign}=0,1`$ and hence $`\langle \mathrm{Sign}^2\rangle =\langle \mathrm{Sign}\rangle `$. This reduces the relative error to
$$\frac{\mathrm{\Delta }\mathrm{Sign}}{\langle \mathrm{Sign}\rangle }=\frac{\sqrt{\langle \mathrm{Sign}\rangle -\langle \mathrm{Sign}\rangle ^2}}{\sqrt{N^{\prime }}\,\langle \mathrm{Sign}\rangle }=\frac{\mathrm{exp}(\beta V\mathrm{\Delta }f/2)}{\sqrt{N^{\prime }}}.$$ (4)
One gains an exponential factor in statistics, but one still needs to generate $`N^{\prime }=\sqrt{N}=\mathrm{exp}(\beta V\mathrm{\Delta }f)`$ independent configurations in order to accurately determine the average sign. This is because one generates exponentially many vanishing contributions before one encounters a contribution $`1`$.

In several cases cluster algorithms provide an explicit matching of contributions $`1`$ and $`-1`$ using an improved estimator. Cluster algorithms are a very efficient tool for simulating quantum spin systems. In particular, the method can be implemented directly in the Euclidean time continuum. The basic idea behind these algorithms is to decompose a configuration into $`N_C`$ clusters of spins which can be flipped independently. Averaging analytically over the $`2^{N_C}`$ configurations generated by the cluster flips, one can construct improved estimators for various physical quantities. As we will show, using an improved estimator for the fermion sign, cluster algorithms can solve the sign problem if the clusters contribute independently to the sign and a reference cluster orientation with a positive weight always exists. This means that the flip of any given cluster either changes the sign or not, independent of the orientation of all the other clusters. A cluster algorithm for lattice fermions was first presented in with the hope of finding such an improved estimator. Unfortunately, in that algorithm the clusters do not affect the sign independently of one another. Still, cluster algorithms have been used for fermion models. For systems with no severe sign problem these algorithms work much better than standard numerical methods, but they do not solve the fermion sign problem. A solution to a sign problem using cluster algorithms was first found in a bosonic model with a complex action, the 2-d $`O(3)`$ model at vacuum angle $`\theta =\pi `$. The cluster independence was achieved by constructing a non-standard action. In that model, clusters whose flip changes the sign are half-instantons, which are usually referred to as merons.
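The content of eqs. (3) and (4) can be checked with a toy sampler: draw the sign either as $`\pm 1`$ or as $`0,1`$ with the same small mean, and compare the relative errors. The mean value and sample size below are arbitrary assumptions, chosen only to exhibit the square-root improvement in the exponent.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_sign = 5e-3       # plays the role of exp(-beta V Delta f); assumed value
N = 2_000_000          # number of "independent configurations"

# Naive estimator: Sign = +/-1 with <Sign> = mean_sign.
pm = rng.choice([1.0, -1.0], size=N, p=[(1 + mean_sign) / 2, (1 - mean_sign) / 2])

# Improved estimator: Sign = 0 or 1 with the same mean
# (0 whenever the configuration contains merons, 1 otherwise).
zo = rng.choice([1.0, 0.0], size=N, p=[mean_sign, 1 - mean_sign])

for name, s in (("+/-1 sampling", pm), ("0/1 sampling ", zo)):
    rel_err = s.std(ddof=1) / (np.sqrt(N) * s.mean())
    print(f"{name}: <Sign> = {s.mean():.2e}, relative error = {rel_err:.1%}")

# The +/-1 sampler needs ~ 1/<Sign>^2 configurations for O(1) accuracy, the
# 0/1 sampler only ~ 1/<Sign>: precisely the gain from eq. (3) to eq. (4).
```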
In this paper we extend the meron concept to fermionic models by demanding cluster independence. For non-relativistic spinless fermions hopping on a $`d`$-dimensional cubic lattice of size $`V=L^d`$ with periodic boundary conditions, this leads us to the Hamiltonian
$$H=\underset{x,i}{\sum }[\frac{t}{2}(c_x^+c_{x+\widehat{i}}+c_{x+\widehat{i}}^+c_x)+U(n_x-\frac{1}{2})(n_{x+\widehat{i}}-\frac{1}{2})],$$ (5)
with $`U\ge t>0`$. Here $`\widehat{i}`$ is a unit vector in the $`i`$-direction, $`c_x^+`$ and $`c_x`$ are fermion creation and annihilation operators obeying the standard anticommutation relations, and $`n_x=c_x^+c_x`$ is the occupation number of the lattice site $`x`$. Since $`U>0`$, two fermions or two holes on neighboring lattice sites repel each other, while a fermion and a hole attract one another. This is a simple example of a fermionic model for which the sign problem can be solved completely using a meron-cluster algorithm.

Let us now discuss our model and algorithm in more detail. Following , we introduce a space-time lattice with $`2dM`$ time-slices and spacing $`\epsilon =\beta /M`$ in the Euclidean time direction, and we insert complete sets of occupation number $`n(x,t)=0,1`$ eigenstates at each time-slice to express the partition function as a path integral. The magnitude $`\mathrm{exp}(-S[n])`$ of the Boltzmann factor is a product of four-site interactions associated with space-time plaquette configurations $`[n(x,t),n(x+\widehat{i},t),n(x,t+1),n(x+\widehat{i},t+1)]`$. The sign factor $`\mathrm{Sign}[n]=\pm 1`$ has a topological meaning. The occupied sites form fermion world-lines which are closed in Euclidean time. Particles may be exchanged during their Euclidean time evolution, and the fermion world-lines define a permutation of particles. According to the Pauli principle, $`\mathrm{Sign}[n]`$ is just the sign of that permutation.

In the following we restrict ourselves to $`U=t`$. Then the bosonic system without the sign factor is the antiferromagnetic spin-1/2 quantum Heisenberg model, and $`n(x,t)=0`$ and $`1`$ correspond to spin $`-1/2`$ and $`1/2`$, respectively. The staggered occupation (the analog of the staggered magnetization), $`O[n]=\epsilon \sum _{x,t}(-1)^{x_1+x_2+\mathrm{}+x_d}(n(x,t)-\frac{1}{2})`$, and the corresponding susceptibility $`\chi =\langle O^2\,\mathrm{Sign}\rangle /\beta V\langle \mathrm{Sign}\rangle `$ are important observables.

The algorithm decomposes a configuration into closed loops of lattice points which may be flipped independently. When a loop is flipped, the occupation numbers of all points on the loop are changed from $`0`$ to $`1`$ and vice versa. Each lattice point participates in two space-time plaquette interactions $`[n(x,t),n(x+\widehat{i},t),n(x,t+1),n(x+\widehat{i},t+1)]`$. On each interaction plaquette the lattice points are connected in pairs, and a sequence of connected points defines a loop-cluster. For space-time plaquette configurations $`[0,0,0,0]`$ and $`[1,1,1,1]`$ the lattice points are connected with their time-like neighbors, for configurations $`[0,1,1,0]`$ and $`[1,0,0,1]`$ they are connected with their space-like neighbors, and for configurations $`[0,1,0,1]`$ and $`[1,0,1,0]`$ they are connected with their time-like neighbors with probability $`p=2/(1+\mathrm{exp}(\epsilon U))`$ and with their space-like neighbors with probability $`1-p`$. After identifying the clusters, they are flipped independently with probability $`1/2`$.
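For concreteness, the plaquette break-up rules just quoted can be written as a small routine: given the four occupation numbers on a space-time plaquette, it returns time-like or space-like pair connections, with the probabilistic choice $`p=2/(1+\mathrm{exp}(\epsilon U))`$ for the two configurations without a hop. This is a sketch of the stated rules only; the bookkeeping that stitches the pairs into closed loops is omitted.

```python
import math, random

def breakup(n, eps, U):
    """Pair connections on a space-time plaquette, following the rules in the text.
    n = (n1, n2, n3, n4): occupations at (x,t), (x+i,t), (x,t+1), (x+i,t+1).
    Returns 'time' (connect 1-3 and 2-4) or 'space' (connect 1-2 and 3-4)."""
    if n in ((0, 0, 0, 0), (1, 1, 1, 1)):
        return 'time'
    if n in ((0, 1, 1, 0), (1, 0, 0, 1)):       # a fermion hops between slices
        return 'space'
    if n in ((0, 1, 0, 1), (1, 0, 1, 0)):       # antiparallel, no hop
        p = 2.0 / (1.0 + math.exp(eps * U))
        return 'time' if random.random() < p else 'space'
    raise ValueError("configuration carries no weight in this model")

random.seed(1)
print(breakup((0, 1, 0, 1), eps=0.1, U=1.0))    # 'time' with p = 2/(1+e^0.1) ~ 0.95
```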
A remarkable property of the cluster rules is that $`\mathrm{Sign}[n]=\prod _{i=1}^{N_C}\mathrm{Sign}[C_i]`$, where $`C_i,i=1,\mathrm{},N_C`$ denotes the oriented clusters in a configuration. By properly flipping the clusters, one can reach a reference configuration (the first configuration in figure 1) in which all even lattice sites are occupied and all odd sites are empty. In the reference orientation $`\mathrm{Sign}[C_i]=1`$. When the cluster is flipped, $`\mathrm{Sign}[C_i]=-1`$ if $`N_w+N_h/2`$ is odd and $`1`$ otherwise. Here $`N_w`$ is the temporal cluster winding number and $`N_h`$ is the number of times the cluster hops to a neighboring lattice point. This relation follows directly from the fermionic anticommutation relations. Following , we refer to clusters whose flip changes the sign as merons. The flip of a meron-cluster permutes the fermions and changes the topology of the fermion world-lines. Since flipping all clusters does not change the fermion sign, the number of meron-clusters is always even. Two fermion configurations together with a meron-cluster are shown in figure 1.

The improved estimator for $`\langle \mathrm{Sign}\rangle `$ is the average over the $`2^{N_C}`$ configurations obtained from independently flipping the $`N_C`$ clusters in all possible ways. For configurations that contain merons the average sign is zero, because flipping a single meron leads to a cancellation of signs $`\pm 1`$. Only the configurations without merons contribute to $`\langle \mathrm{Sign}\rangle `$, and their contribution is always $`1`$. This solves one half of the sign problem, as discussed before.

Let us now consider an improved estimator for $`\langle O^2\,\mathrm{Sign}\rangle `$, which is needed to determine the susceptibility $`\chi `$. The staggered occupation, $`O[n]=\sum _CO_C`$, is a sum of staggered occupations of the clusters, $`O_C=\epsilon \sum _{(x,t)\in C}(-1)^{x_1+x_2+\mathrm{}+x_d}(n(x,t)-\frac{1}{2})`$. When a cluster is flipped, its staggered occupation changes sign. In a configuration without merons, where $`\mathrm{Sign}[n]=1`$ for all relative cluster flips, the average of $`O[n]^2\,\mathrm{Sign}[n]`$ over all $`2^{N_C}`$ configurations is $`\sum _C|O_C|^2`$. For configurations with two merons the average is $`2|O_{C_1}||O_{C_2}|`$, where $`C_1`$ and $`C_2`$ are the two meron-clusters. Configurations with more than two merons do not contribute to $`\langle O^2\,\mathrm{Sign}\rangle `$. Thus, the improved estimator for the susceptibility is given by
$$\chi =\frac{\langle \sum _C|O_C|^2\,\delta _{N,0}+2|O_{C_1}||O_{C_2}|\,\delta _{N,2}\rangle }{V\beta \,\langle \delta _{N,0}\rangle },$$ (6)
where $`N`$ is the number of meron-clusters in a configuration. Hence, to determine $`\chi `$ one must sample the zero- and two-meron sectors only.

The probability to find a configuration without merons is exponentially small in the space-time volume, since it is equal to $`\langle \mathrm{Sign}\rangle `$. Thus, although we have increased the statistics tremendously with the improved estimators, without a second step one would still need exponentially large statistics to accurately determine $`\chi `$. One goal of the second step is to eliminate all configurations with more than two merons. This enhances both the numerator and the denominator in eq.(6) by an exponentially large factor, but leaves their ratio unchanged. We start with an initial configuration with zero or two merons. For example, a completely occupied configuration has no merons. We then visit all plaquette interactions one after the other and choose new cluster connections between the four sites according to the cluster rules.
If the new connection increases the number of merons beyond two, it is not accepted and the old connection is kept for that plaquette. To decide if the meron number changes, one needs to examine the clusters affected by the new connection. Although this requires a computational effort proportional to the cluster size (and hence to the physical correlation length), this is no problem, because one gains a factor that is exponentially large in the volume. The above procedure obeys detailed balance, because configurations with more than two merons do not contribute to the observables we consider. Also, one can show that the algorithm is still ergodic. The simple reject step eliminates almost all configurations with weight $`0`$ and is the essential step in solving the other half of the fermion sign problem.

Since for large space-time volumes the two-meron sector is much larger than the zero-meron sector, without further improvements one would still need statistics quadratic (but no longer exponential) in the space-time volume to accurately measure $`\chi `$. The remaining problem can be solved with a re-weighting technique similar to the one used in . To enhance the zero-meron configurations in a controlled way, we introduce a trial probability $`p_t(N)`$ for each $`N`$-meron sector. We set $`p_t(N)`$ for $`N>2`$ to infinity and use it in a Metropolis accept-reject step for the newly proposed cluster connection on a specific plaquette interaction. A new connection that changes the meron number from $`N`$ to $`N^{\prime }`$ is accepted with probability $`p=\mathrm{min}[1,p_t(N)/p_t(N^{\prime })]`$. In particular, configurations with $`N^{\prime }>2`$ are never generated, because then $`p_t(N^{\prime })=\mathrm{\infty }`$ and $`p=0`$. After visiting all plaquette interactions, each cluster is flipped with probability $`1/2`$, which completes one update sweep. After re-weighting, the zero- and two-meron configurations appear with similar probabilities. This completes the second step in our solution of the fermion sign problem. The re-weighting of the zero- and two-meron configurations is taken into account in the final expression for the susceptibility as
$$\chi =\frac{\langle \sum _C|O_C|^2\,\delta _{N,0}\,p_t(0)+2|O_{C_1}||O_{C_2}|\,\delta _{N,2}\,p_t(2)\rangle }{V\beta \,\langle \delta _{N,0}\rangle \,p_t(0)}.$$ (7)

We have implemented the meron-cluster algorithm in (2+1) dimensions and have tested it using exact diagonalization results on small lattices. Table 1 contains a comparison of results obtained with two algorithms using the same number of sweeps in both cases. The first algorithm ($`\mathrm{A}_1`$) has the improved estimators and solves one half of the sign problem. The second algorithm ($`\mathrm{A}_2`$) has both the improved estimators and the additional Metropolis step, and thus also solves the other half of the problem. The algorithm $`\mathrm{A}_2`$ is clearly superior once the average sign becomes small. In particular, we have applied $`\mathrm{A}_2`$ to systems of size $`V=12^2`$ at a low temperature $`\beta U=8`$. This is far beyond the reach of standard fermion algorithms and even of the algorithm $`\mathrm{A}_1`$. It should be noted that our model has a very severe sign problem, which persists after integrating out the fermions even at half-filling.

Cluster representations in general and the meron concept in particular are more than mere algorithmic tools. In fact, we have shown that the fermionic partition function can be expressed as a classical statistical mechanics system of clusters. The cluster formulation is a novel type of bosonization which works in any dimension.
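Putting the pieces together, one sweep of the update described above might be organised as in the sketch below: propose a new break-up on each plaquette, reject it outright if the meron number would exceed two, otherwise accept with the re-weighting probability $`\mathrm{min}[1,p_t(N)/p_t(N^{\prime })]`$, and finally flip every cluster with probability 1/2. The routines `propose_breakup`, `meron_number` and `flip_clusters` are hypothetical placeholders for the model-specific bookkeeping.

```python
import random

def sweep(config, plaquettes, p_t, propose_breakup, meron_number, flip_clusters):
    """One schematic update sweep of the meron-cluster algorithm.
    p_t[N] is the trial weight of the N-meron sector; proposals with more
    than two merons are rejected outright."""
    N = meron_number(config)                    # current sector, N <= 2
    for plaq in plaquettes:
        old = config.connection[plaq]
        propose_breakup(config, plaq)           # new connection via cluster rules
        N_new = meron_number(config)            # inspect only the affected clusters
        if N_new > 2 or random.random() >= min(1.0, p_t[N] / p_t[N_new]):
            config.connection[plaq] = old       # keep the old connection
        else:
            N = N_new
    flip_clusters(config, prob=0.5)             # each cluster flipped independently
    return N
```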
In this formulation the Pauli principle manifests itself through the vanishing Boltzmann weight of any configuration containing meron-clusters. If we ignore the fermion permutation sign, the theory describes a gas of merons and non-merons with a large configuration space. Including the sign factor forces even numbers of merons to be bound into non-merons. As a consequence, in agreement with the Pauli principle, the configuration space is very restricted. The merons allow us to simulate fermions with local bosonic variables. This is much more efficient than integrating out the fermions, which leads to non-local bosonic effective actions.

While the details of our algorithm are specific to the fermion model we have considered, the two basic ideas behind it are general and apply to a variety of models. They lead to a complete solution of the fermion sign problem for models of relativistic staggered fermions as well as for non-relativistic fermions with spin. In applications of the meron-cluster algorithm to systems in the Hubbard model family we have so far not found high-temperature superconductivity. Meron-cluster algorithms are also applicable to quantum spin models in an arbitrary magnetic field, for which a similar type of sign problem arises. Similarly, one can solve the sign problem resulting from a complex action in the 2-d $`O(3)`$ model at non-zero chemical potential or at non-zero vacuum angle $`\theta `$. The next challenge is to find applications of this method to QCD at non-zero baryon density. It seems likely that progress along the lines discussed here can be made in the quantum link D-theory formulation of the problem.

U.-J. W. would like to thank the physics department of Duke University, where part of this work was done, for its hospitality, and the A. P. Sloan foundation for its support. This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreements DE-FC02-94ER40818 and DE-FG02-96ER40945.
# Discovery of the Acoustic Faraday Effect in Superfluid <sup>3</sup>He-B

Y. Lee, T.M. Haard, W.P. Halperin and J.A. Sauls

Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA

(submitted to Nature)

Acoustic waves provide a powerful tool for studying the structure of matter. The speed, attenuation and dispersion of acoustic waves give useful details of the molecular forces and of the microscopic mechanisms for absorption and scattering of acoustic energy. In solids both compressional and shear waves occur, so-called longitudinal and transverse sound. However, normal liquids do not support shear forces, and consequently transverse waves do not propagate in liquids, with one notable exception. In 1957 Landau predicted that the quantum liquid phase of <sup>3</sup>He might exhibit transverse sound at sufficiently low temperatures, where the restoring forces for shear waves are supplied by the collective action of the particles in the fluid. Shear waves in liquid <sup>3</sup>He involve displacements of the fluid transverse to the direction of propagation. The displacement defines the polarization direction of the wave, similar to electromagnetic waves. We have observed rotation of the polarization of transverse sound waves in superfluid <sup>3</sup>He-B in a magnetic field. This magneto-acoustic effect is the direct analogue of the magneto-optical effect discovered by Michael Faraday in 1845, where the polarization of an electromagnetic wave is rotated by a magnetic field along the propagation direction.

Superfluidity in <sup>3</sup>He results from the binding of the <sup>3</sup>He particles with nuclear spin $`s=1/2`$ into molecules called "Cooper pairs" with binding energy $`2\mathrm{\Delta }`$. The pairs undergo a type of Bose-Einstein condensation having a close analogy to the Bardeen-Cooper-Schrieffer condensation phenomenon associated with superconductivity in metals. One important difference is that the pairs that form the condensate in <sup>3</sup>He have total spin $`S=1`$ and an orbital wave function with relative angular momentum $`L=1`$ ($`p`$-wave). This is in contrast to superconductors, which are formed with Cooper pairs of electrons having $`S=0`$ and $`L=0`$ ($`s`$-wave) or, as is the case of high temperature superconductors, $`S=0`$ and $`L=2`$ ($`d`$-wave). In superfluid <sup>3</sup>He, the spin and orbital angular momentum vectors are locked at a fixed angle to one another. This is called broken relative spin-orbit symmetry. The equilibrium superfluid state is described as a condensate of Cooper pairs with total angular momentum $`J=0`$, where $`J=L+S`$. In addition, the Cooper pairs can be resonantly excited by sound waves to quantum states with total angular momentum $`J=2`$. This is reminiscent of diatomic molecules, which have similar excited states.

The above description applies to the B-phase of superfluid <sup>3</sup>He, the most stable phase at low pressure. The acoustic Faraday effect occurs in <sup>3</sup>He-B as a consequence of spontaneously broken relative spin-orbit symmetry. An applied field magnetically polarizes the spins of the Cooper pairs which, through coupling to their orbital motion, rotates the polarization of transverse sound.
The rotational excitations of Cooper pairs are essential to our observation of the propagation of transverse acoustic waves in <sup>3</sup>He-B, since they significantly increase the sound velocity, making the sound mode much easier to detect; the closer the sound energy is to the energy of the Cooper pair excited state, the stronger is this effect. Furthermore, the Cooper pair excited states have a linear Zeeman splitting with magnetic field. Of the five ($`2J+1`$) Zeeman sub-states there is one, $`m_J=+1`$, which couples to right circularly polarized transverse sound, and a second, $`m_J=-1`$, which couples to left circularly polarized sound. Thus the speeds of these two transverse waves are different in a magnetic field. We call this property acoustic birefringence. It leads to the acoustic Faraday effect, where the magnetic field rotates the polarization direction of linearly polarized sound. Our measurements show that the rotation angle can be as large as $`1.4\times 10^7`$ deg/cm-Tesla, much larger than the usual magneto-optical Faraday effect.

Excitation and detection of transverse sound is provided by a high-Q (about 3000) AC-cut quartz transducer with a fundamental resonance frequency of 12 MHz. It generates and detects shear waves with a specific linear polarization. The detection method is based on measurement of the electrical impedance of the transducer using a frequency-modulated cw-bridge spectrometer. All measurements were performed at 82.26 MHz, the $`7^{th}`$ harmonic of the transducer, with frequency modulation at 400 Hz and an amplitude of 3 kHz. The electrical impedance of the transducer is a direct measure of the acoustic impedance of the surrounding liquid <sup>3</sup>He in the acoustic cavity that is shown in Fig. 1.

Linearly polarized waves are excited by the transducer, reflected from the opposite surface of the acoustic cell, and detected by the same transducer. Under conditions of high attenuation there is no reflected wave, and the acoustic response is determined by the bulk acoustic impedance, $`Z_a=\rho \omega /q`$, where $`\rho `$ is the density of the liquid, $`\omega `$ is the sound frequency, $`q=k+i\alpha `$ is the complex wave number, $`\alpha `$ is the attenuation, and $`2\pi /k`$ is the wavelength. A change in either the attenuation or the phase velocity, $`C_\varphi =\omega /k`$, produces a change in the impedance $`Z_a`$. On cooling into the superfluid, the acoustic response shown in Fig. 2 varies smoothly with temperature in this highly attenuating region. If the attenuation is low, there is interference between the source and reflected waves which modulates the local acoustic impedance detected by the transducer. Consequently the acoustic response oscillates as the phase velocity changes with temperature. The oscillations in Fig. 2 at low temperatures correspond to interference between outgoing and reflected waves, and so they indicate the existence of some form of propagating wave. Each period of the oscillations corresponds to a change in velocity sufficient to increase, or decrease, by unity the number of half wavelengths in the cavity. The amplitude of the oscillations increases as the temperature is reduced, indicating that the attenuation of the sound mode decreases with decreasing temperature. The features labeled $`A`$ and $`B`$ in Fig. 2 are identified with known physical processes for sound absorption in superfluid <sup>3</sup>He-B.
Feature $`A`$ corresponds to the onset of the dissociation of Cooper pairs by sound, where $`\hbar \omega =2\mathrm{\Delta }(T_A)`$. In the temperature range between $`T_A`$ and the superfluid transition temperature, $`T_c`$, the attenuation of the liquid is extremely high owing to this mechanism. The point $`B`$ corresponds to resonant absorption of sound at $`\hbar \omega =1.5\mathrm{\Delta }(T)`$ by the excited Cooper pairs with angular momentum $`J=2`$. Transverse sound is extinguished below this temperature. Early attempts to observe transverse sound in the normal phase of <sup>3</sup>He were inconclusive, and furthermore, it was originally expected that the transverse mode would be suppressed in the superfluid phase. More recent theoretical work clarified the role of the Cooper pair excitations, showing that they increase the transverse sound speed, which results in a more robust propagating transverse acoustic wave at low temperatures in superfluid <sup>3</sup>He-B. The first experimental evidence for this can be found in the acoustic impedance measurements of Kalbfeld, Kucera, and Ketterson.

The proof that the impedance oscillations correspond to a propagating transverse sound mode is given in Fig. 3. In Fig. 3a we show data sets at a pressure of $`4.42\,\mathrm{bar}`$ in magnetic fields of $`52\,\mathrm{G}`$, $`101\,\mathrm{G}`$ and $`152\,\mathrm{G}`$. The principal feature is that the magnetic field modulates the zero-field oscillations shown in Fig. 2. Our detector is sensitive only to linearly polarized transverse sound having a specific direction. Application of a field of $`52\,\mathrm{G}`$ in the direction of wave propagation suppresses the oscillations near $`T=0.465T_c`$ that were present in zero field. This corresponds to a $`90^{\circ }`$ rotation of the polarization of the first reflected transverse sound wave, making the polarization orthogonal to the detection direction. Doubling the magnetic field restores the transverse sound oscillations at this temperature. The oscillations are suppressed once again by tripling the field to $`152\,\mathrm{G}`$. Also note that near the points labeled $`90^{\circ }`$ and $`270^{\circ }`$ there are smaller-amplitude impedance oscillations with a shorter period than the primary oscillations. These come from interference of doubly reflected waves within the cavity. We demonstrate this fact with a simple, but powerful, simulation of the acoustic impedance oscillations, shown in Fig. 3b.

In zero field, superfluid <sup>3</sup>He-B is non-magnetic and non-birefringent. Linearly polarized transverse sound is the superposition of two circularly polarized waves having the same velocity and attenuation. Application of a magnetic field gives rise to acoustic circular birefringence through the Zeeman splitting of the excited states of the Cooper pairs that couple to the transverse sound modes; thus, right- and left-circularly polarized waves propagate with different speeds, $`C_\pm =C_\varphi \pm \delta C_\varphi `$. For magnetic fields well below $`1\,\mathrm{kG}`$ the difference in propagation speeds is linear in the magnetic field, $`\delta C_\varphi \propto H`$. This implies that a linearly polarized wave generated by the transducer undergoes Faraday rotation of its polarization as it propagates. Upon reflection from the opposite wall of the cavity, the linearly polarized wave with $`\mathbf{q}\parallel \mathbf{H}`$ reverses direction. The reflected wave propagates with the polarization rotating with the same handedness relative to the direction of the field, i.e.
the rotation of the polarization accumulates after reflection from a surface. The spatial period for rotation of the polarization by $`360^{\circ }`$ is
$$\mathrm{\Lambda }=4\pi \left(\frac{C_\varphi }{\omega }\right)\left|\frac{C_\varphi }{\delta C_\varphi }\right|.$$ (1)
The Faraday effect produces a sinusoidal modulation of the impedance oscillations as a function of magnetic field, with a period that is inversely proportional to the field, i.e. $`\mathrm{\Lambda }\propto 1/H`$. The constant of proportionality in magneto-optics is called the Verdet constant, $`\mathrm{V}=2\pi /H\mathrm{\Lambda }`$.

In Fig. 3b we show the result of our numerical calculation of the sound wave amplitude in the direction detected by the transducer. The oscillations shown in the figure come from interference between the source wave and multiply reflected waves. The calculation uses the attenuation and phase velocity measured in zero field. The Verdet constant is obtained from the measurement at $`52\,\mathrm{G}`$. The simulation reproduces all the observed features of the impedance as a function of temperature, including the maximum in the modulation at $`T/T_c=0.415,H=101\,\mathrm{G}`$ and the minimum at $`T/T_c=0.415,H=152\,\mathrm{G}`$, which confirms that the Faraday period is proportional to $`1/H`$. The simulation also produces the fine-structure oscillations in the impedance near the points labeled $`90^{\circ }`$ and $`270^{\circ }`$. The fine structure is observed when the polarization rotates by an odd multiple of $`90^{\circ }`$ upon a single round trip in the cell. Then waves that traverse the cell twice are $`180^{\circ }`$ out of phase relative to the source wave, and consequently the period of the impedance oscillations is halved. The amplitude of these oscillations is substantially reduced because of attenuation over the longer path length. This structure provides proof that the impedance oscillations are modulated by the Faraday effect for propagating transverse waves.

The impedance data from our experiments were analyzed to obtain the spatial period for the rotation of the polarization, and were found to be in agreement with the theoretical prediction for the Faraday rotation period. The theoretical results for the period can be expressed in the form
$$\mathrm{\Lambda }=K\frac{\sqrt{T/T_+-1}}{gH}$$ (2)
for fields $`H\ll 1\,\mathrm{kG}`$ and temperatures above and near the extinction point $`B`$. The temperature $`T_+`$ corresponds to the extinction of transverse sound by resonant excitation of Cooper pairs with $`J=2,m_J=+1`$, at a slightly higher temperature than the $`B`$ extinction point in zero field, as shown in the inset to Fig. 2 (e.g. at $`H=100\,\mathrm{G}`$, $`T_+-T_B\approx 1\,\mu \mathrm{K}`$). The magnitude of the Faraday rotation period depends on accurately known superfluid properties, contained in the parameter $`K`$, as well as on one parameter that is not well established, the Landé g-factor, $`g`$, for the Zeeman splitting of the $`J=2`$ Cooper pair excited state. Movshovich et al. analyzed the splitting of the $`J=2`$ multiplet in the absorption spectrum of longitudinal sound to find a value of $`g=0.042`$. In that experiment it was not possible to resolve the splitting except for fields above $`2\,\mathrm{kG}`$. At these high fields the non-linear field dependence due to the Paschen-Back effect becomes comparable to the linear Zeeman splitting, which makes it difficult to determine the Landé g-factor accurately.
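A minimal version of the interference calculation behind Fig. 3b can be sketched as follows: decompose the launched linear polarization into circular components travelling at $`C_\pm =C_\varphi \pm \delta C_\varphi `$, attenuate and phase-advance each component over every traversal of the cavity, and project the returning field onto the transducer axis. All numerical values below are placeholders; the actual calculation uses the attenuation and phase velocity measured in zero field.

```python
import numpy as np

def detected_amplitude(omega, C_phi, dC, alpha, D, n_refl=6):
    """Transducer-axis amplitude summed over waves that crossed the cavity
    (length D) 1..n_refl times.  The circular components travel at
    C_phi +/- dC (dC grows linearly with H), so the linear polarization
    rotates along the path and the rotation accumulates on reflection."""
    total = 0.0 + 0.0j
    for m in range(1, n_refl + 1):
        L = m * D                                  # path length after m traversals
        phase = omega * L / C_phi                  # common propagation phase
        rot = 0.5 * omega * L * (1.0 / (C_phi - dC) - 1.0 / (C_phi + dC))
        total += np.exp(-alpha * L) * np.exp(1j * phase) * np.cos(rot)
    return total

omega = 2 * np.pi * 82.26e6                        # the measurement frequency
for dC in (0.0, 0.03, 0.06):                       # m/s; proportional to H
    A = detected_amplitude(omega, C_phi=500.0, dC=dC, alpha=100.0, D=6.35e-3)
    print(f"dC = {dC:4.2f} m/s   |A| = {abs(A):.3f}")

# Once the round-trip rotation reaches 90 degrees the singly reflected wave is
# orthogonal to the detector and only doubly reflected waves interfere, halving
# the oscillation period -- the fine structure near the 90 and 270 points.
```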
We have analyzed our measurements of the acoustic Faraday effect to determine the g-factor with high accuracy at low fields, which eliminates the complication of the high-field Paschen-Back effect. We find $`g=0.020\pm 0.002`$. Our significantly smaller value of the Landé g-factor has the interpretation that there are important $`L=3`$ ($`f`$-wave) pairing correlations in the superfluid condensate, about $`7\%`$ of the dominant $`p`$-wave interactions.
# Universal and non-universal properties of cross-correlations in financial time series

## Abstract

We use methods of random matrix theory to analyze the cross-correlation matrix C of price changes of the largest 1000 US stocks for the 2-year period 1994-95. We find that the statistics of most of the eigenvalues in the spectrum of C agree with the predictions of random matrix theory, but there are deviations for a few of the largest eigenvalues. We find that C has the universal properties of the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large inverse participation ratios at both edges of the eigenvalue spectrum, a situation reminiscent of results in localization theory.

There has been much recent work applying physics concepts and methods to the study of financial time series. In particular, the study of correlations between price changes of different stocks is both of scientific interest and of practical relevance in quantifying the risk of a given stock portfolio. Consider, for example, the equal-time correlation of stock price changes for a given pair of companies. Since the market conditions may not be stationary, and the historical records are finite, it is not clear if a measured correlation of price changes of two stocks is just due to "noise" or genuinely arises from the interactions among the two companies. Moreover, unlike most physical systems, there is no "algorithm" to calculate the "interaction strength" between two companies (as there is for, say, two spins in a magnet). The problem is that although every pair of companies should interact either directly or indirectly, the precise nature of the interaction is unknown.

In some ways, the problem of interpreting the correlations between individual stock-price changes is reminiscent of the difficulties experienced by physicists in the fifties in interpreting the spectra of complex nuclei. Large amounts of spectroscopic data on the energy levels were becoming available but were too complex to be explained by model calculations, because the exact nature of the interactions was unknown. Random matrix theory (RMT) was developed in this context, to deal with the statistics of energy levels of complex quantum systems. With the minimal assumption of a random Hamiltonian, given by a real symmetric matrix with independent random elements, a series of remarkable predictions were made and successfully tested on the spectra of complex nuclei. RMT predictions represent an average over all possible interactions. Deviations from the universal predictions of RMT identify system-specific, non-random properties of the system under consideration, providing clues about the nature of the underlying interactions.

In this letter, we apply RMT methods to study the cross-correlations of stock price changes. First, we demonstrate the validity of the universal predictions of RMT for the eigenvalue statistics of the cross-correlation matrix. Second, we calculate the deviations of the empirical data from the RMT predictions, obtaining information that enables us to identify cross-correlations between stocks not explainable purely by randomness.

We analyze a data base containing the price $`S_i(t)`$ of stock $`i`$ at time $`t`$, where $`i=1,\mathrm{},1000`$ denotes the largest 1000 publicly-traded companies and the time $`t`$ runs over the 2-year period 1994-95.
From this time series, we calculate the price change $`G_i(t,\mathrm{\Delta }t)`$, defined as
$$G_i(t,\mathrm{\Delta }t)\equiv \mathrm{ln}S_i(t+\mathrm{\Delta }t)-\mathrm{ln}S_i(t),$$ (1)
where $`\mathrm{\Delta }t=30`$ min is the sampling time scale. The simplest measure of correlations between different stocks is the equal-time cross-correlation matrix C, which has elements
$$C_{ij}\equiv \frac{\langle G_iG_j\rangle -\langle G_i\rangle \langle G_j\rangle }{\sigma _i\sigma _j},$$ (2)
where $`\sigma _i\equiv \sqrt{\langle G_i^2\rangle -\langle G_i\rangle ^2}`$ is the standard deviation of the price changes of company $`i`$, and $`\langle \mathrm{}\rangle `$ denotes a time average over the period studied.

We analyze the statistical properties of C by applying RMT techniques. First, we diagonalize C and obtain its eigenvalues $`\lambda _k`$, with $`k=1,\mathrm{},1000`$, which we rank-order from the smallest to the largest. Next, we calculate the eigenvalue distribution and compare it with recent analytical results for a cross-correlation matrix generated from finite uncorrelated time series. Figure 1 shows the eigenvalue distribution of C, which deviates from the predictions of Ref. for large eigenvalues $`\lambda _k\ge 1.94`$ (see caption of Fig. 1). This result is in agreement with the results of Ref. for the eigenvalue distribution of C on a daily time scale.

To test for universal properties, we first calculate the distribution of the nearest-neighbor spacings $`s\equiv \lambda _{k+1}-\lambda _k`$. The nearest-neighbor spacing is computed after transforming the eigenvalues in such a way that their distribution becomes uniform, a procedure known as unfolding. Figure 2(a) shows the distribution of nearest-neighbor spacings for the empirical data and compares it with the RMT predictions for real symmetric random matrices. This class of matrices shares universal properties with the ensemble of matrices whose elements are distributed according to a Gaussian probability measure, the Gaussian orthogonal ensemble (GOE). We find good agreement between the empirical data and the GOE prediction,
$$P_{\mathrm{GOE}}(s)=\frac{\pi s}{2}\mathrm{exp}\left(-\frac{\pi }{4}s^2\right).$$ (3)
A second independent test of the GOE is the distribution of next-nearest-neighbor spacings between the rank-ordered eigenvalues. This distribution is expected to be identical to the distribution of nearest-neighbor spacings of the Gaussian symplectic ensemble (GSE), as verified by the empirical data [Fig. 2(b)].

The distribution of eigenvalue spacings reflects correlations only of consecutive eigenvalues, but does not contain information about correlations of longer range. To probe any "long-range" correlations, we first calculate the number variance $`\mathrm{\Sigma }^2`$, which is defined as the variance of the number of unfolded eigenvalues in intervals of length $`L`$ around each of the eigenvalues. If the eigenvalues are uncorrelated, $`\mathrm{\Sigma }^2\sim L`$. For the opposite case of a "rigid" eigenvalue spectrum, $`\mathrm{\Sigma }^2`$ is a constant. For the GOE case, we find the "intermediate" behavior $`\mathrm{\Sigma }^2\sim \mathrm{ln}L`$, as predicted by RMT [Fig. 2(c)]. A second way to measure "long-range" correlations in the eigenvalues is through the spectral rigidity $`\mathrm{\Delta }`$, defined to be the least-squares deviation of the unfolded cumulative eigenvalue density from a fit to a straight line in an interval of length $`L`$. For uncorrelated eigenvalues, $`\mathrm{\Delta }\sim L`$, whereas for the rigid case $`\mathrm{\Delta }`$ is a constant. For the GOE case we find $`\mathrm{\Delta }\sim \mathrm{ln}L`$, as predicted by RMT [Fig. 2(d)].
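The spectrum analysis described above is easy to reproduce on surrogate data. The sketch below builds the cross-correlation matrix of $`N`$ uncorrelated Gaussian return series of length $`L`$ and compares its eigenvalues with the analytical bounds $`\lambda _\pm =(1\pm \sqrt{N/L})^2`$ for random cross-correlation matrices; in the real data, eigenvalues beyond $`\lambda _+`$ are the candidates for genuine correlations. The dimensions are illustrative choices matching the rough size of the data set.

```python
import numpy as np

rng = np.random.default_rng(42)
N, L = 1000, 6500                 # companies x records (~2 years of 30-min data)

G = rng.standard_normal((N, L))   # surrogate uncorrelated "price changes"
G = (G - G.mean(1, keepdims=True)) / G.std(1, keepdims=True)
C = G @ G.T / L                   # cross-correlation matrix, eq. (2), C_ii = 1
lam = np.linalg.eigvalsh(C)

Q = L / N
lam_lo, lam_hi = (1 - np.sqrt(1 / Q)) ** 2, (1 + np.sqrt(1 / Q)) ** 2
print(f"empirical spectrum  : [{lam.min():.3f}, {lam.max():.3f}]")
print(f"random-matrix bounds: [{lam_lo:.3f}, {lam_hi:.3f}]")   # ~ [0.37, 1.94]

# For uncorrelated series essentially all eigenvalues fall inside the bounds;
# the deviations above lambda ~ 1.94 seen in the stock data signal real
# cross-correlations that survive the finite-record noise.
```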
Having demonstrated that the eigenvalue statistics of C satisfies the RMT predictions, we now proceed to analyze the eigenvectors of C. RMT predicts that the components of the normalized eigenvectors of a GOE matrix are distributed according to a Gaussian probability distribution with mean zero and variance one. In agreement with recent results , we find that eigenvectors corresponding to most eigenvalues in the “bulk” ($`\lambda _k\lesssim 2`$) follow this prediction. On the other hand, eigenvectors with eigenvalues outside the bulk ($`\lambda _k\gtrsim 2`$) show marked deviations from the Gaussian distribution. In particular, the vector corresponding to the largest eigenvalue $`\lambda _{1000}`$ deviates significantly from the Gaussian distribution predicted by RMT. The component $`\ell `$ of a given eigenvector relates to the contribution of company $`\ell `$ to that eigenvector. Hence, the distribution of the components contains information about the number of companies contributing to a specific eigenvector. In order to distinguish between one eigenvector with approximately equal components and another with a small number of large components, we define the inverse participation ratio

$$I_k\equiv \sum _{\ell =1}^{1000}[u_{k\ell }]^4,$$ (4)

where $`u_{k\ell }`$, $`\ell =1,\dots ,1000`$, are the components of eigenvector $`k`$. The physical meaning of $`I_k`$ can be illustrated by two limiting cases: (i) a vector with identical components $`u_{k\ell }=1/\sqrt{N}`$ has $`I_k=1/N`$, whereas (ii) a vector with one component $`u_{k1}=1`$ and all the others zero has $`I_k=1`$. Therefore, $`I_k`$ is related to the reciprocal of the number of vector components significantly different from zero. Figure 3 shows $`I_k`$ for eigenvectors of a matrix generated from uncorrelated time series with a power law distribution of price changes. The average value of $`I_k`$ is $`\langle I\rangle \approx 3\times 10^{-3}\sim 1/N`$, indicating that the vectors are extended—i.e., almost all companies contribute to them. Fluctuations around this average value are confined to a narrow range. On the other hand, the empirical data show deviations of $`I_k`$ from $`\langle I\rangle `$ for a few of the largest eigenvalues. These $`I_k`$ values are approximately 4-5 times larger than $`\langle I\rangle `$, which suggests that there are groups of approximately 50 companies contributing to these eigenvectors. The corresponding eigenvalues are well outside the bulk, suggesting that these companies are correlated . Surprisingly, we also find that there are $`I_k`$ values as large as $`0.35`$ for vectors corresponding to the smallest eigenvalues $`\lambda _k\approx 0.25`$ . These deviations from the average are two orders of magnitude larger than $`\langle I\rangle `$, which suggests that the vectors are localized—i.e., only a few companies contribute to them. The small values of the corresponding eigenvalues suggest that these companies are uncorrelated with each other. The presence of vectors with large $`I_k`$ also arises in the theory of Anderson localization. In the context of localization theory, one frequently finds “random band matrices” containing extended states with small $`I_k`$ in the middle of the band, whereas edge states are localized and have large $`I_k`$.
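A short sketch of Eq. (4) on surrogate data, together with the two limiting cases quoted above; for eigenvectors with Gaussian components the expected average is ⟨I⟩ = 3/N, consistent with the value ≈ 3×10⁻³ cited for N = 1000.

```python
# Sketch: inverse participation ratio, Eq. (4), for eigenvectors of a
# correlation matrix built from uncorrelated surrogate series.
import numpy as np

rng = np.random.default_rng(1)
N, T = 1000, 6448                       # assumed sizes, as before
G = rng.standard_normal((N, T))
C = np.corrcoef(G)
lam, U = np.linalg.eigh(C)              # columns of U are eigenvectors u_k

ipr = np.sum(U**4, axis=0)              # I_k = sum_l u_{kl}^4

# Limiting cases quoted in the text:
uniform = np.full(N, 1.0 / np.sqrt(N))
single = np.zeros(N); single[0] = 1.0
print("uniform vector:", np.sum(uniform**4), "= 1/N")
print("single-component vector:", np.sum(single**4), "= 1")
print("average IPR:", ipr.mean(), "(expected 3/N for Gaussian components)")
```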
Our finding of localized states for small and large eigenvalues of the cross-correlation matrix C is reminiscent of Anderson localization and suggests that C may be a random band matrix. In summary, we find that most of the eigenvalues in the spectrum of the cross-correlation matrix of stock price changes agree surprisingly well with the universal predictions of random matrix theory. In particular, we find that C satisfies the universal properties of the Gaussian orthogonal ensemble of real symmetric random matrices. We find through the analysis of the inverse participation ratio of its eigenvectors that C may be a random band matrix, which may support the idea that a metric can be defined on the space of companies and that a distance can be defined between pairs of companies. Hypothetically, the presence of localized states may allow us to draw conclusions about the “spatial dimension” of the set of stocks studied here and about the “range” of the correlations between the companies. We thank M. Barthélémy, N.V. Dokholyan, X. Gabaix, U. Gerland, S. Havlin, R.N. Mantegna, Y. Lee, C.-K. Peng and D. Stauffer for helpful discussions. LANA thanks FCT/Portugal for financial support. The Center for Polymer Studies is supported by NSF.
UOSTP 99-105 SNUST 99-02 hep-th/9902173 revised version

# Cosmic Holography<sup>1</sup><sup>1</sup>1 Work supported in part by BK-21 Initiative Program and KRF International Collaboration Grant 1998-010-192.

Dongsu Bak<sup>1</sup> and Soo-Jong Rey<sup>2</sup>

Physics Department, University of Seoul, Seoul 130-743 Korea<sup>1</sup>

Physics Department & Center for High-Energy Physics, Seoul National University, Seoul 151-742 Korea<sup>2</sup>

dsbak@mach.uos.ac.kr, sjrey@gravity.snu.ac.kr

Abstract: A version of the holographic principle for cosmology is proposed, which dictates that the particle entropy within the cosmological apparent horizon should not exceed the gravitational entropy associated with the apparent horizon. It is shown that, in the Friedmann-Robertson-Walker (FRW) cosmology, the open Universe as well as a restricted class of flat cases are compatible with the principle, whereas the closed Universe is not. It is also found that an inflationary universe after the big bang is incompatible with the cosmic holography.

The holographic principle in quantum gravity was first suggested by ’t Hooft and, later, extended to string theory by Susskind. The most radical part of the principle is that the degrees of freedom of a spatial region reside not in the bulk but on the boundary. Further, the number of boundary degrees of freedom per Planck area should not exceed unity. Recently, the holographic principle was applied to the standard cosmological context by Fischler and Susskind. This Fischler-Susskind version of the cosmological holographic principle demands that the particle entropy contained in a volume of the particle horizon should not exceed the area of the horizon in Planck units. String cosmology has also been tested against the Fischler-Susskind holographic principle. In both cases, the matter contents as well as the spacetime geometry of the universe are restricted by the holographic principle alone, and the results appear to be consistent with the recent measurement of the redshift-to-distance relation and the theory of large scale structure formation. In applying the holography in the cosmological context, several outstanding questions still remain unanswered. One of them concerns a natural choice of the holographic boundary. Fischler and Susskind have chosen it to be the particle horizon, but it is not clear if this choice is consistent with other physical principles. In this note, we will propose a simple choice of the boundary surface based on the concept of the cosmological apparent horizon, that is, the boundary hypersurface of an anti-trapped region, which has the topology of $`𝐒^2`$. It turns out that there is a natural gravitational entropy associated with the apparent horizon, and the associated holographic principle demands that the particle entropy inside the apparent horizon should not exceed the apparent-horizon gravitational entropy. Moreover, the holography based on the apparent horizon obeys the first law of thermodynamics, in sharp contrast to that based on the particle horizon. We shall apply the proposed principle to the FRW cosmology and show that, in both the standard cosmology and the string cosmology, the open universe as well as a restricted class of flat universes are compatible, while the closed universe is not. We shall further show that the inflationary scenario for the standard cosmology is not compatible with the cosmic holography.
Cosmological Apparent Horizon and Gravitational Entropy: We shall consider the spatially homogeneous and isotropic universe described by the FRW metric,

$$ds^2=-dt^2+a^2(t)\frac{dr^2}{1-kr^2}+a^2(t)r^2d\mathrm{\Omega }_{d-1}^2,$$ (1)

where $`k=0,-1,+1`$ corresponds to a flat, open or closed Universe, respectively. Using the spherical symmetry, the metric can be rewritten as

$$ds^2=h_{ab}dx^adx^b+\stackrel{~}{r}^2(𝐱)d\mathrm{\Omega }_{d-1}^2,$$ (2)

where $`x^0=t`$, $`x^1=r`$ and the two-metric $`h_{ab}=\mathrm{diag}[-1,\frac{a^2}{1-kr^2}]`$ is introduced. A dynamical apparent horizon is defined by $`h^{ab}\partial _a\stackrel{~}{r}\partial _b\stackrel{~}{r}=0`$, which implies that the vector $`\nabla \stackrel{~}{r}`$ is null (or degenerate) on the apparent horizon surface. The explicit evaluation of the condition reads

$$\stackrel{~}{r}_{\mathrm{AH}}=\frac{1}{\sqrt{H^2+\frac{k}{a^2}}},$$ (3)

where $`H=\dot{a}/a`$ is the Hubble parameter. The expansions $`\theta _{\mathrm{IN}}`$ ($`\theta _{\mathrm{OUT}}`$) of the ingoing (outgoing) null geodesic congruences are given by

$`\theta _{\mathrm{IN}}=H-{\displaystyle \frac{1}{\stackrel{~}{r}}}\sqrt{1-{\displaystyle \frac{k\stackrel{~}{r}^2}{a^2}}},`$ $`\theta _{\mathrm{OUT}}=H+{\displaystyle \frac{1}{\stackrel{~}{r}}}\sqrt{1-{\displaystyle \frac{k\stackrel{~}{r}^2}{a^2}}}.`$ (4)

A region of spherically symmetric spacetime is referred to as trapped (antitrapped) if the expansions of both in- and out-going null geodesics, normal to the spatial $`d-1`$ sphere with a radius $`\stackrel{~}{r}`$ centered at the origin, are negative (positive). The region will be called normal if ingoing rays have negative expansion but the outgoing rays have positive expansion. The region $`\stackrel{~}{r}>\left(H^2+\frac{k}{a^2}\right)^{-\frac{1}{2}}`$ is, then, antitrapped, whereas the region $`\stackrel{~}{r}<\left(H^2+\frac{k}{a^2}\right)^{-\frac{1}{2}}`$ is normal (assuming $`H>0`$). The boundary hypersurface of the antitrapped spacetime region is nothing but the apparent horizon surface. The ingoing rays outside the horizon actually propagate in the direction of growing $`\stackrel{~}{r}`$, whereas the ingoing rays inside the horizon move toward the origin. In general, the radius of the apparent horizon, $`\stackrel{~}{r}_{\mathrm{AH}}`$, changes in time. But, for example, in the de Sitter universe where $`a(t)=a_0e^{Ht}`$ with a constant $`H`$ and $`k=0`$, the apparent horizon, $`\stackrel{~}{r}_{\mathrm{AH}}=\frac{1}{H}`$, is constant in time and agrees with the cosmological event horizon of the de Sitter space. The antitrapped region outside of the apparent horizon in the de Sitter space can never be seen by the comoving observer located at the origin. However, in a generic situation, the apparent horizon evolves in time and the visibility of the outside antitrapped region depends on the time development of the apparent horizon. In case $`\stackrel{~}{r}_{\mathrm{AH}}`$ becomes smaller in time, the spatial region outside the horizon can never be seen. On the other hand, if it grows, the spatial region outside of the horizon at a given time may be observed at later times. The situation here is reminiscent of what happens with the black-hole apparent horizon. Namely, the trapped region can never be seen by an outside observer if the horizon of the black hole grows by infalling matter, while a once-trapped region may become normal if the apparent horizon shrinks by evaporation of the black hole in the presence of Hawking radiation.
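A minimal numerical illustration of Eqs. (3) and (4): for a matter-dominated flat universe (an assumed example), the ingoing expansion changes sign exactly at $`\stackrel{~}{r}_{\mathrm{AH}}`$, separating the normal from the antitrapped region.

```python
# Sketch: apparent-horizon radius, Eq. (3), and the null expansions, Eq. (4),
# for an FRW universe; flat matter-dominated case chosen for illustration.
import numpy as np

def r_apparent(H, k, a):
    """Proper apparent-horizon radius (H^2 + k/a^2)^(-1/2)."""
    return 1.0 / np.sqrt(H**2 + k / a**2)

def expansions(r, H, k, a):
    """theta_IN, theta_OUT of in-/out-going null congruences at proper radius r."""
    root = np.sqrt(1.0 - k * r**2 / a**2)
    return H - root / r, H + root / r

t = 1.0
H, k, a = 2.0 / (3.0 * t), 0, t**(2.0 / 3.0)   # a(t) ~ t^(2/3), H = 2/(3t)
r_ah = r_apparent(H, k, a)
for r in (0.5 * r_ah, r_ah, 2.0 * r_ah):
    th_in, th_out = expansions(r, H, k, a)
    print(f"r/r_AH={r/r_ah:.1f}: theta_IN={th_in:+.3f}, theta_OUT={th_out:+.3f}")
# inside: theta_IN < 0 (normal); at r_AH: theta_IN = 0; outside: both > 0 (antitrapped)
```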
For the total energy inside a sphere of radius $`\stackrel{~}{r}`$, we introduce an energy defined by

$$E\equiv \frac{d(d-1)𝒱_d}{16\pi G}\stackrel{~}{r}^{d-2}(1-h^{ab}\partial _a\stackrel{~}{r}\partial _b\stackrel{~}{r}),$$ (5)

where $`𝒱_d=\frac{\pi ^{\frac{d}{2}}}{\mathrm{\Gamma }\left(\frac{d}{2}+1\right)}`$ denotes the volume of the $`d`$-dimensional unit ball. This is actually the direct $`(d+1)`$-dimensional generalization of the $`(3+1)`$-dimensional one given by Misner and Sharp. It is interesting to note that the energy surrounded by the apparent horizon is given by $`E=\frac{d(d-1)𝒱_d}{16\pi G}\stackrel{~}{r}_{\mathrm{AH}}^{d-2}`$, which agrees with the expression for the mass of the $`(d+1)`$-dimensional Schwarzschild black hole once the apparent horizon is replaced by the event horizon of the black hole. In terms of the energy-momentum tensor of matter $`T^{ab}`$, that is, the projection of the $`(d+1)`$-dimensional energy-momentum tensor $`T^{\alpha \beta }`$ to the normal direction of the $`d-1`$ sphere, one may define the work density by

$$w\equiv -\frac{1}{2}T^{ab}h_{ab},$$ (6)

and the energy-supply vector by

$$\psi _a\equiv T_a^b\partial _b\stackrel{~}{r}+w\partial _a\stackrel{~}{r}.$$ (7)

As noted in Ref. , the work density at the apparent horizon may be viewed as the work done by the change of the apparent horizon, and the energy-supply at the horizon is the total energy flow through the apparent horizon. The Einstein equation relates these quantities by

$$\nabla E=A\psi +w\nabla V,$$ (8)

where $`A=d𝒱_d\stackrel{~}{r}^{d-1}`$ and $`V=𝒱_d\stackrel{~}{r}^d`$. This equation may be interpreted as a unified first law. The entropy is associated with the energy-supply term, which in fact can be rewritten, again with the help of the Einstein equations, as

$$A\psi =\frac{\kappa }{8\pi }\nabla A+\stackrel{~}{r}^{d-2}\nabla \left(\frac{E}{\stackrel{~}{r}^{d-2}}\right),$$ (9)

with the surface gravity $`\kappa `$ defined by

$$\kappa \equiv \frac{1}{2\sqrt{-h}}\partial _a\left(\sqrt{-h}h^{ab}\partial _b\stackrel{~}{r}\right).$$ (10)

At the apparent horizon, the last term in (9) drops out and, then, the dynamic entropy of gravity is identified with $`S=\frac{A}{4}`$, a quarter of the area of the apparent horizon measured in Planck units. This is the direct $`(d+1)`$-dimensional generalization of the dynamic entropy introduced by Hayward for the $`(3+1)`$ case. More precisely, the dynamic entropy associated with the apparent horizon is

$$S_{\mathrm{AH}}=\frac{d𝒱_d}{4}\stackrel{~}{r}_{\mathrm{AH}}^{d-1}.$$ (11)

We now apply the definition to the FRW universe dictated by

$`H^2+{\displaystyle \frac{k}{a^2}}={\displaystyle \frac{16\pi }{d(d-1)}}\rho ,`$ $`{\displaystyle \frac{\ddot{a}}{a}}=-{\displaystyle \frac{8\pi }{d(d-1)}}[(d-2)\rho +dp],`$ (12)

with the energy-momentum conservation

$$\frac{d}{dt}(\rho a^d)+p\frac{d}{dt}a^d=0.$$ (13)

The projected ($`1+1`$)-dimensional energy-momentum tensor for the FRW cosmology reads

$$T_{ab}=\mathrm{diag}\left[\rho ,\frac{pa^2}{1-kr^2}\right].$$ (14)

From (12), the Misner-Sharp energy can be evaluated, in terms of the matter density, as

$$E=\frac{d(d-1)𝒱_d}{16\pi }\stackrel{~}{r}^d\left(H^2+\frac{k}{a^2}\right)=𝒱_d\stackrel{~}{r}^d\rho ,$$ (15)

which is the matter density multiplied by the flat volume in $`d`$ spatial dimensions. One should note that the flat volume differs from the spatial volume (i.e. $`V_p=d𝒱_da^d\int _0^r\frac{r^{d-1}dr}{\sqrt{1-kr^2}}`$) of radius $`\stackrel{~}{r}`$ in the case of the open or closed universe. This discrepancy arises from the gravity contribution to the energy in addition to the matter contribution.
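The horizon quantities just defined are simple to evaluate. The sketch below computes $`\stackrel{~}{r}_{\mathrm{AH}}`$, the Misner-Sharp energy it encloses, and the entropy (11) in Planck units (G = 1); the input values of H, k, a are illustrative. For d = 3 it reproduces $`E=\stackrel{~}{r}_{\mathrm{AH}}/2`$, the Schwarzschild mass relation noted above.

```python
# Sketch: Misner-Sharp energy inside the apparent horizon and the dynamic
# entropy, Eq. (11), in Planck units (G = 1).
import numpy as np
from math import gamma, pi

def V_d(d):
    """Volume of the d-dimensional unit ball."""
    return pi**(d / 2) / gamma(d / 2 + 1)

def horizon_quantities(d, H, k, a):
    r_ah = 1.0 / np.sqrt(H**2 + k / a**2)                 # Eq. (3)
    E = d * (d - 1) * V_d(d) / (16 * pi) * r_ah**(d - 2)  # energy inside horizon
    A = d * V_d(d) * r_ah**(d - 1)                        # horizon area
    return r_ah, E, A / 4.0                               # entropy S = A/4

r_ah, E, S = horizon_quantities(d=3, H=1e-3, k=0, a=1.0)
print(f"r_AH = {r_ah:.1f}, E = {E:.1f} (= r_AH/2 for d=3), S_AH = {S:.3e}")
```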
Having clarified the issue of the cosmological entropy, we now state a version of the cosmic holographic principle based on the cosmological apparent horizon: the particle entropy inside the apparent horizon can never exceed the apparent-horizon gravitational entropy. The main difference from the Fischler-Susskind version of the cosmological holography lies in the choice of the horizon; namely, in the Fischler-Susskind proposal, the particle horizon and a quarter of the associated area for the gravitational entropy are chosen for the holography. In the cosmological context, two different kinds of horizon based on light paths have appeared. The particle horizon, which specifies the visible region for a comoving observer at time t, is expressed as

$$\stackrel{~}{r}_{\mathrm{PH}}=a(t)G^{-1}\left(\int _{t_I}^t\frac{dt^{}}{a(t^{})}\right),$$ (16)

where $`G(x)\equiv \int _0^x\frac{dy}{\sqrt{1-ky^2}}`$ and $`t_I`$ represents the initial moment of the Universe. (In case the universe has no beginning, $`t_I=-\infty `$.) On the other hand, the cosmological event horizon, which specifies the boundary of the spatial region to be seen in the future by the comoving observer, reads

$$\stackrel{~}{r}_{\mathrm{EH}}=a(t)G^{-1}\left(\int _t^{t_F}\frac{dt^{}}{a(t^{})}\right),$$ (17)

where $`t_F`$ is the final moment of the universe. This is contrasted with the fact that the apparent horizon in (3) does not refer to the initial or final moment, where our ability of physical description often breaks down.

Cosmic Holography Tested Against the FRW Universe: The holographic principle may restrict the matter contents of our universe because it involves the particle entropy of the universe. Since the matter contents mold the geometry and evolution of the Universe, a Universe conforming to the holographic principle may well belong to a restricted class. The cosmic holography condition leads to the inequality

$$\sigma \mathrm{Vol}_{\mathrm{AH}}(t)\le \frac{A_{\mathrm{AH}}(t)}{4},$$ (18)

where $`\mathrm{Vol}_{\mathrm{AH}}(t)=d𝒱_d\int _0^{r_{\mathrm{AH}}(t)}dr\frac{r^{d-1}}{\sqrt{1-kr^2}}`$ is the coordinate volume inside the apparent horizon. The left side of (18) is the total particle entropy inside the apparent horizon, where the comoving entropy density $`\sigma `$ is constant in time. In testing the condition for the FRW cosmology, we shall restrict ourselves to matter with the simple equation of state $`p=\gamma \rho `$. In the flat Universe ($`k=0`$), the condition is explicitly

$$\frac{4\sigma }{da^{d-1}(t)\dot{a}(t)}\le 1.$$ (19)

Since $`a(t)=a_0t^{\frac{2}{d(1+\gamma )}}`$ for $`\gamma \ne -1`$ and $`a(t)=a_0e^{H_0t}`$ for $`\gamma =-1`$, one concludes that the holography condition is satisfied at all times after the Planck time for $`|\gamma |\le 1`$, once it is satisfied at the Planck time, $`t\sim t_P`$. Matter with $`|\gamma |>1`$, which is also inconsistent with special relativity, is not compatible with the cosmic holography condition. Let us now turn to the case of the open universe. For the discussion of open and closed universes, it is convenient to introduce the conformal time $`\eta `$,

$$\eta =\int ^t\frac{dt^{}}{a(t^{})}.$$ (20)

The holography condition now reads

$$\frac{4\sigma \int _0^{\chi (\eta )}d\xi \mathrm{sinh}^{d-1}\xi }{a^{d-1}(\eta )\mathrm{sinh}^{d-1}(\chi (\eta ))}\le 1,$$ (21)

where we define $`\chi (\eta )`$ by $`\mathrm{sinh}\chi (\eta )=r_{\mathrm{AH}}(\eta )`$.
The solutions of the equation of motion (12) are given by

$$a(\eta )=a_0\left(\mathrm{sinh}|(K-1)\eta |\right)^{\frac{1}{K-1}},$$ (22)

where $`K=\frac{d(1+\gamma )}{2}`$, $`\eta \in (0,\infty )`$ for $`K-1>0`$ and $`\eta \in (-\infty ,0)`$ for $`K-1<0`$. Here the initial moment corresponds to $`\eta =0`$ for $`K-1>0`$ and $`-\infty `$ for $`K-1<0`$. Using the definition of the apparent horizon in (3), one finds $`\chi (\eta )=|(K-1)\eta |`$. Using the explicit solution, one may easily show that, for $`\gamma \le 1`$, the holography condition is satisfied once it is satisfied at the initial moment around the Planck epoch. For $`\gamma >1`$, the maximum of the left hand side of (21) occurs at some finite time after the initial moment, and the holography is respected if this maximum satisfies the bound. But if one assumes that the holography bound is saturated at the Planck epoch, the case of $`\gamma >1`$ is rejected by the holography condition. So far the restrictions on the matter contents due to our holography condition are not very different from those of the Fischler-Susskind holography. But we will see that there is some difference in the case of the closed Universe. The holography condition for the closed Universe ($`k=1`$) is

$$\frac{4\sigma \int _0^{\chi (\eta )}d\xi \mathrm{sin}^{d-1}\xi }{a^{d-1}(\eta )\mathrm{sin}^{d-1}(\chi (\eta ))}\le 1,$$ (23)

where we define $`\chi (\eta )`$ by $`\mathrm{sin}\chi (\eta )=r_{\mathrm{AH}}(\eta )`$. The solutions of the equation of motion (12) are given by

$$a(\eta )=a_0\left(\mathrm{sin}|(K-1)\eta |\right)^{\frac{1}{K-1}},$$ (24)

where $`\eta \in (0,\frac{\pi }{|K-1|})`$. A Universe with $`K-1<0`$ (i.e. $`\gamma <\frac{2}{d}-1`$) begins with an infinite scale factor $`a(\eta )`$, and we shall not discuss these cases since it is clear on observational grounds that our Universe began with a small scale factor. Noting $`r_{\mathrm{AH}}(\eta )=\mathrm{sin}(|K-1|\eta )`$, one obtains $`\chi (\eta )=|K-1|\eta `$. One then finds that universes with $`\gamma >\frac{2}{d}-1`$ are not compatible with the cosmic holography condition. Namely, even if one may satisfy the bound at an initial moment, it is badly violated before reaching the big crunch. This is because the term $`\int _0^{\chi (\eta )}d\xi \mathrm{sin}^{d-1}\xi `$ grows monotonically with time. The situation is worsened if one assumes that the bound saturates at the Planck time scale. In the Fischler-Susskind case, a careful analysis shows that the region disfavored by the holography condition is only $`\gamma \ge \frac{4}{d}-1`$. With $`d=3`$, the universe with matter of $`\gamma =\frac{4}{d}-1=\frac{1}{3}`$ corresponds to the radiation-dominated Universe, which is their marginal bound. On the contrary, the cosmic holography condition seems to disfavor any closed Universe, clearly an over-restrictive condition. We now consider inflationary models of our $`d=3`$ Universe. As illustrated in Ref. , the particle entropy-area ratio $`\alpha (t)\equiv \frac{4\sigma \mathrm{Vol}_{\mathrm{AH}}(t)}{A_{\mathrm{AH}}(t)}`$ at the time of decoupling is<sup>2</sup><sup>2</sup>2The estimation used in the Ref. relies upon the particle-horizon based holography, but, in the case of standard cosmology, the particle horizon differs from the apparent horizon by a constant factor of order one. Thus the estimation is still valid for our version of the holography.

$$\alpha (t_D)\approx 10^{-28},$$ (25)

where the decoupling time is $`t_D\sim 10^{56}`$ in Planck units.
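Before evaluating the inflationary case, the two bounds just obtained can be checked numerically. The sketch below evaluates the left-hand sides of Eq. (19) and of Eq. (23) along the solution (24) for d = 3; σ and a₀ are illustrative normalizations (the flat case is normalized to saturate the bound at the Planck time t = 1), so only the trends matter: the flat-case ratio decreases for |γ| ≤ 1, while the closed-universe ratio grows without bound towards the big crunch.

```python
# Numerical check of the holography bounds: Eq. (19) for the flat Universe and
# Eq. (23) along the closed-Universe solution (24). d = 3 throughout.
import numpy as np
from scipy.integrate import quad

d = 3

def lhs_flat(t, gamma, a0=1.0):
    """LHS of Eq. (19) for a(t) = a0 * t^(2/(d(1+gamma)))."""
    p = 2.0 / (d * (1.0 + gamma))
    sigma = d * a0**d * p / 4.0          # chosen so the bound saturates at t = 1
    return 4.0 * sigma / (d * (a0 * t**p)**(d - 1) * (a0 * p * t**(p - 1)))

def lhs_closed(eta, gamma, sigma=1e-3, a0=1.0):
    """LHS of Eq. (23) with a(eta) from Eq. (24)."""
    K = d * (1.0 + gamma) / 2.0
    chi = abs(K - 1.0) * eta
    a = a0 * np.sin(chi)**(1.0 / (K - 1.0))
    num, _ = quad(lambda x: np.sin(x)**(d - 1), 0.0, chi)
    return 4.0 * sigma * num / (a**(d - 1) * np.sin(chi)**(d - 1))

for gamma in (0.0, 1.0 / 3.0, 1.0):      # dust, radiation, stiff matter
    print("flat, gamma =", gamma,
          [round(lhs_flat(t, gamma), 3) for t in (1, 10, 100)])

# closed radiation Universe (K - 1 = 1): eta runs over (0, pi)
for eta in (0.5, 1.5, 2.5, 3.0):
    print("closed, eta =", eta, round(lhs_closed(eta, 1.0 / 3.0), 3))
```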
Since $`\alpha (t)`$ is proportional to $`t^{-\frac{1}{2}}`$ during the radiation dominated era, the expression for the ratio in that era reads

$$\alpha (t)=10^{-28}\left[\frac{t_D}{t}\right]^{\frac{1}{2}}.$$ (26)

This shows that $`\alpha (t)\le 1`$ at all times after the Planck time in case the Universe starts off as a radiation-dominated Universe after the Planck time. However, if one assumes an inflationary period after the Planck epoch and an exit to the radiation-dominated Universe, the above conclusion is drastically changed. To illustrate this, let us note that in the de Sitter phase $`\alpha (t)`$ scales like $`e^{-(d+1)Ht}=e^{-4Ht}`$ with constant $`H`$. For example, the inflation factor $`P\equiv e^{Ht_E}/e^{Ht_B}\sim 10^{100000}`$ for the chaotic inflationary scenario, where $`t_E`$ and $`t_B`$ denote respectively the exit time and the beginning of the inflation, is obtained from the theory of galaxy formation. This implies that $`\alpha (t_B)`$ is bigger than $`\alpha (t_E)`$ by a factor $`10^{400000}`$. The cosmic holography condition is clearly incompatible with this result. Furthermore, any model where $`\alpha (t)`$ scales as $`t^C`$ with $`C<-\frac{1}{2}`$ after the Planck epoch violates the holographic principle, so any typical post-big-bang inflationary model that solves the flatness problem by an amplification of the scale factor appears to be incompatible with the holography. Does only the pre-big-bang super-inflationary scenario survive the restrictive holography condition or, otherwise, should one resort to the regularity of the Planck epoch to solve the flatness and horizon problems? One is thus led to conclude that the holography based on the apparent horizon, despite its aesthetically simple and appealing features, such as compatibility with the first law of thermodynamics, is not totally satisfactory when applied to cosmology. Nevertheless, we trust that our proposal based on the apparent horizon bears an important core of truth, provided further specification is supplied on the holographic surfaces and the bounded regions therein. After the present work appeared, Bousso made an interesting proposal concerning a covariant entropy bound valid for all surfaces in all physical spacetimes. In particular, Bousso established that the Bekenstein bound holds if the surface permits complete, future-directed, ingoing null geodesic congruences. This singles out the apparent horizon as the largest admissible holographic surface, thus confirming our proposal. Bousso’s proposal has also nicely remedied the difficulties we have posed in the present paper. Most relevantly, it was shown that the holography bound is satisfied only if the surface obeys the completeness condition and, when dealing with the closed universe, the entropy ought to be interpreted as that on null geodesic congruences directed toward the smaller part of the universe. Before closing, we would like to make brief comments on the string cosmology. If one applies the new cosmic holographic principle to the string cosmology model considered in Ref. , it is straightforward to show that it selects out the open Universe together with flat universes with $`|\gamma |\le \frac{1}{\sqrt{d}}`$, while closed universes are ruled out. Here one must note that the apparent horizon should be defined with the Einstein-frame metric instead of the string-frame metric, because the gravitational entropy is associated with the Einstein-frame metric, although the relevant physics should be independent of the choice of description.
In view of Bousso’s proposal , the above results should be interpreted as a condition on the completeness of the apparent horizon. The above results hold for both the pre-big-bang branch and the post-big-bang branch, so that time reversal symmetry is respected by the cosmic holography. In the analysis using the Fischler-Susskind version, any flat Universe with matter is ruled out by the holography condition. This difference stems from the fact that the particle horizon in the pre-big-bang cosmology is infinite, whereas the apparent horizon is finite in the pre-big-bang branch.
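Returning to the inflationary argument above, the arithmetic is compactly summarized as follows; the inflation factor P and the decoupling numbers are those quoted in the text (d = 3, Planck units).

```python
# Sketch: the entropy-to-area ratio alpha(t). Eq. (26) gives the radiation-era
# value; extrapolated back to the Planck time it just reaches unity. A de Sitter
# stage scales alpha by exp(-(d+1)*H*t), so running the clock backwards through
# an inflation factor P multiplies alpha by P^(d+1).
import numpy as np

t_D = 1e56                                  # decoupling time, Eq. (25)
alpha_rad = lambda t: 1e-28 * np.sqrt(t_D / t)
print("alpha at the Planck time:", alpha_rad(1.0))        # ~1, marginal

d = 3
log10_P = 1e5                               # chaotic-inflation factor 10^100000
log10_ratio = (d + 1) * log10_P             # alpha(t_B)/alpha(t_E) = P^(d+1)
print("log10[alpha(t_B)/alpha(t_E)] =", log10_ratio)      # 4e5, i.e. 10^400000
```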
## 1 Introduction

One of the challenges in the study of heavy–ion collisions is the understanding of multifragmentation in relation to liquid–gas phase transitions. In spite of many experimental and theoretical efforts these processes have not been fully understood yet. This is largely due to the fact that a heavy ion collision is a dynamical process where the state of nuclear matter varies strongly in space and time, and which during much of the reaction is not in global or even local equilibrium. This is seen, e.g., by looking at local momentum space distributions in transport calculations, which are found to be highly anisotropic even during the compression phase of the collision. Only at the later stages of the reaction do the local momentum distributions become more and more thermalized, without necessarily leading to global thermal equilibrium. Therefore, non–equilibrium effects are important for a reliable description of heavy ion collisions . The influence of these non–equilibrium effects on the determination of the equation of state of nuclear matter has been discussed in Ref. . In this contribution we concentrate on the question of the applicability of thermodynamical concepts in the non–equilibrium situation of heavy ion collisions and, in particular, with respect to phase transitions and multifragmentation. We want to discuss the question whether, starting from a transport description of a heavy ion collision, where in principle everything is known about the system, a thermodynamical picture of multifragmentation can be deduced.

## 2 Determination of temperature

In this work we make use of two different methods of determining temperature: In the first, we determine local temperatures by fitting the local momentum distributions (obtained in our case from relativistic transport calculations) to covariant Fermi–Dirac distributions at finite temperature in the local rest frame . Non–equilibrium effects are taken into account by allowing a parametrization of the momentum space distribution in terms of two thermalized Fermi spheres (or covariantly by ellipsoids). With this method we obtain a local microscopic temperature, $`T_{loc}`$. In the second method we follow the experimental procedure of fitting fragment energy spectra. These are generated in our calculations by applying a phase space coalescence algorithm to the final stages of the transport calculations. As in experimental analyses these spectra are interpreted in a Siemens–Rasmussen or blast model , which assumes a thermalized freeze–out configuration of nucleons and fragments with a collective radial flow profile and a unique temperature . In this model the fragment spectra are given by

$$\frac{dN}{dE}pE\int \mathrm{exp}(-\gamma E/T)\left[\frac{\mathrm{sinh}\alpha }{\alpha }\left(\gamma +\frac{T}{E}\right)-\frac{T}{E}\mathrm{cosh}\alpha \right]n(\beta )\beta ^2d\beta ,$$ (1)

where $`n(\beta )`$ is the flow profile, $`\gamma =(1-\beta ^2)^{-1/2}`$, and T is the global temperature ($`\alpha \equiv \gamma \beta p/T`$). The flow profile is obtained from the simulation. The remaining parameter is then the temperature $`T`$, which is fitted to experimental, resp. generated, fragment spectra. In experimental analyses a global temperature is assumed, which characterizes the shapes of all fragment spectra. This is not obvious and should be clarified in the analyses of our transport calculations.
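As a sketch of how Eq. (1) is evaluated in practice, the routine below integrates the blast-model spectrum numerically for a given temperature; the Gaussian flow profile n(β) and all parameter values are illustrative assumptions (in the text the profile is taken from the transport simulation).

```python
# Sketch of the blast-model spectrum, Eq. (1), for a fragment of mass m.
import numpy as np
from scipy.integrate import quad

def blast_spectrum(E_kin, m, T, beta_f):
    """Unnormalized dN/dE for kinetic energy E_kin (all energies in MeV)."""
    E = E_kin + m                                   # total energy
    p = np.sqrt(E**2 - m**2)

    def integrand(beta):
        gam = 1.0 / np.sqrt(1.0 - beta**2)
        alpha = gam * beta * p / T
        bracket = ((np.sinh(alpha) / alpha) * (gam + T / E)
                   - (T / E) * np.cosh(alpha))
        n_beta = np.exp(-0.5 * ((beta - beta_f) / 0.05)**2)   # assumed profile
        return np.exp(-gam * E / T) * bracket * n_beta * beta**2

    val, _ = quad(integrand, 1e-6, 0.999)
    return p * E * val

m_N = 939.0                                         # nucleon mass, MeV
for Ek in (50.0, 100.0, 200.0):
    print(Ek, blast_spectrum(Ek, m_N, T=80.0, beta_f=0.3))
```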
## 3 Analysis of spectator matter (semi–central collisions)

We first discuss the thermodynamical properties of spectator matter in semi–central Au on Au reactions at intermediate energies. This reaction has been studied extensively by the ALADIN collaboration . We determine the spectator temperature from fits to local momentum distributions (for more details see Ref. ). Fig. 1 shows the time evolution of the temperature in the spectator (left side) for different beam energies. When the spectators are clearly developed in the transport calculations, after about $`40`$ fm/c, their temperatures approach a rather constant value of about $`T\approx 5`$ MeV, which remains fairly stable up to about 80 fm/c, and furthermore is rather independent of the incident energy considered.

Fig. 1. Left: Temperature evolution in the spectator in semi–central Au on Au reactions at different beam energies indicated in the figure. Right: Density–pressure trajectories for the spectator matter for the same reaction at 600 AMeV. The solid and dashed curves represent longitudinal and transverse pressure, respectively. The squares and circles are the values at different times starting from $`t=35`$ fm/c in steps of $`5`$ fm/c. The dotted curves are the nuclear matter isothermal equation of state for $`T=5`$ and $`7`$ MeV (lower and upper curve, respectively).

These results are in good agreement with experiments of the ALADIN collaboration, which from measurements with different “thermometers” determine the same value of $`T\approx 5`$ MeV, depending only moderately on the beam energy of the reaction. We also generate pressure–density trajectories for the spectator matter as a function of time (right side of Fig. 1). Dynamical instabilities should arise when the pressure increases with decreasing density, indicating a negative effective compressibility, which occurs here at $`t\approx 50`$ fm/c. The system at this stage therefore enters an instability region and should break up into fragments. Comparing to the nuclear matter isothermal equation of state for temperatures of $`T=5`$ and $`7`$ MeV, corresponding to the range of spectator temperatures in Fig. 1, one sees that the thermodynamical conditions, as determined here, are close to but not identical to those of equilibrated nuclear matter. Only at the final stages does the spectator closely follow the nuclear matter behavior at temperatures of about $`T\approx 5`$ MeV. The densities at the instability condition are about $`1/3`$–$`1/2`$ of saturation density. It thus appears that the spectator closely approaches a freeze–out configuration in thermal and chemical equilibrium.

## 4 Analysis of the fireball (central collisions)

Fig. 2. Temperatures (left) and radial flow (right) obtained from blast model fits to fragment energy spectra as a function of the beam energy. The theoretical results are shown for two mean field models (see text). The data are taken from .

In central collisions the situation is rather different. If very central events are selected experimentally using charged particle multiplicities or theoretically at polar angles near mid rapidity , there is no spectator matter. Rather one observes a hot dense fireball which expands isotropically, as found by our calculations and also by other groups . Thus, assuming thermalization, one can use Eq. (1) to extract the mean collective radial flow $`\beta _f=\langle \beta \rangle `$ and a slope temperature $`T_{slope}`$ from fits to the fragment energy spectra. Fig.
2 shows the energy dependence of these quantities as determined from our calculations and from experiments for central Au on Au collisions. Two parametrizations of the mean field (non–linear Walecka model and configuration dependent Dirac–Brueckner mean fields ) were used in the calculations to demonstrate the moderate dependence of $`\beta _f`$ and $`T`$ on the mean field. As seen in Fig. 2 the experimental data for the radial flow are reproduced very well. The comparison of the extracted slope temperatures $`T_{slope}`$ (left side of Fig. 2) is, however, only qualitative. It is of interest to discuss the relation of these slope temperatures to the local temperatures $`T_{loc}`$ determined from the momentum distribution of the calculation (we used a Maxwell–Boltzmann distribution here, in order to be consistent with Eq. (1), but the difference is $`\lesssim 5`$ MeV in the final stages of the collision). It is also of interest to make the comparison separately for different fragment masses, in order to determine whether a freeze–out scenario with a unique temperature is realistic.

Fig. 3. Temperatures (left) and radial flow (right) for fragments of different mass as determined from blast model fits. Also simultaneous fits to all fragments including nucleons (filled circle) and only to fragments with $`A_f\ge 2`$ (filled triangle) are shown. The open diamond in the left figure is the local temperature at freeze–out obtained from a fit to local momentum space distributions (see text).

The results of this analysis are shown in Fig. 3. Here we show the slope temperatures and radial flow velocities from blast model fits to the spectra of different fragment masses $`A_f`$. The temperature increases and the radial flow decreases with increasing mass. In the coalescence picture such a behavior is reasonable, since a larger fragment has to be generated further inside the fireball, where the flow velocity is smaller and the temperature higher. In Fig. 3 we also show the result of a simultaneous blast model fit to all fragments and to fragments with mass $`A_f\ge 2`$. Since the fragment multiplicities are roughly exponential and thus dominated by the nucleons, the results for all fragments are close to those for $`A_f=1`$ alone. On the other hand the fit to the heavier fragments alone has lower radial flow and higher temperature, and is the one compared in Fig. 2 to the corresponding experimental value. Also shown in Fig. 3 is the local temperature from the momentum distributions, determined at about $`35`$ fm/c. At this time the fireball in the calculations approaches a freeze–out configuration (nucleon–nucleon collisions cease) in equilibrium (pressure isotropic). It is then a consistent check that the local temperature for this situation agrees approximately with the one determined from a blast model fit to the $`A_f=1`$ energy spectra.

## 5 Conclusions

We have studied the thermodynamical state of nuclear matter in heavy ion collisions by analyzing local phase space configurations and by analyzing fragment energy spectra. For the spectator, consistency of the temperatures and breakup conditions with results of the ALADIN collaboration was found. For the participant matter we have in addition applied a blast model analysis to fragment spectra generated in the coalescence model. We see that the slope temperatures in such a description do not yield a unique value for all fragments. This does not favour the picture of a freeze–out configuration in thermodynamical equilibrium.
Rather it appears that fragment emission is a dynamical process which occurs over an extended stage of the heavy ion reaction.
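For completeness, a sketch of the slope-temperature extraction itself: mock fragment spectra are generated from the single-velocity (one flow shell) limit of Eq. (1) and refit by least squares. All numbers are illustrative, not the simulation results of Figs. 2 and 3.

```python
# Sketch: extracting a slope temperature and flow velocity from a fragment
# spectrum, using the single-velocity Siemens-Rasmussen form (the beta
# integral of Eq. (1) collapsed to one shell). Energies and masses in MeV.
import numpy as np
from scipy.optimize import curve_fit

def sr_spectrum(E_kin, T, beta, m=939.0, norm=1.0):
    E = E_kin + m
    p = np.sqrt(E**2 - m**2)
    gam = 1.0 / np.sqrt(1.0 - beta**2)
    a = gam * beta * p / T
    return norm * p * E * np.exp(-gam * E / T) * (
        (np.sinh(a) / a) * (gam + T / E) - (T / E) * np.cosh(a))

E = np.linspace(25.0, 400.0, 40)
rng = np.random.default_rng(2)
data = sr_spectrum(E, T=70.0, beta=0.3) * rng.normal(1.0, 0.05, E.size)

popt, cov = curve_fit(lambda E, T, b, n: sr_spectrum(E, T, b, norm=n),
                      E, data, p0=(50.0, 0.2, 1.0),
                      bounds=([10.0, 0.01, 0.0], [200.0, 0.9, 10.0]))
print(f"T_slope = {popt[0]:.1f} MeV, beta_f = {popt[1]:.3f}")
```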
## ABSTRACT

Results obtained with BeppoSAX observations of blazars within various collaborative programs are presented. The spectral similarity “paradigm”, whereby the spectral energy distributions of blazars follow a sequence, leading to a unified view of the whole population, is briefly illustrated. We concentrate on recent observations of flares and associated spectral variability for three objects at the “blue” end of the spectral sequence, namely PKS 2155–304, Mkn 421 and Mkn 501. The results are discussed in terms of a general analytic synchrotron self-Compton interpretation of the overall spectrum. The physical parameters of the quasi-stationary emission region can be derived with some confidence, while the variability mechanism(s) must be complex.

## 1 INTRODUCTION

One of the most poorly understood phenomena in AGN is the origin of relativistic jets. Their existence, previously inferred from indirect arguments (Blandford & Rees 1978), was spectacularly demonstrated by the observation of superluminal expansion of the knots in radio jets on the parsec scale. The unified scheme of AGN postulates that all radio-loud AGN possess relativistic jets and that blazars are the subset where the jet happens to point at a small angle to the line of sight. Because the plasma flows in the jet at relativistic speed, the emitted radiation is concentrated in a narrow cone (beam) along the direction of motion. An observer at small angle to the beam will receive radiation strongly enhanced by Doppler boosting. In blazars the non-thermal continuum received from the jet is largely dominant over the more isotropic radiation emitted by the surrounding gas or stars. Therefore they are the best laboratory to probe the processes at work in relativistic jets. Understanding the radiation mechanisms allows reconstructing the spectra of relativistic particles and discussing the mechanisms of particle acceleration and energy transport along the jet, and ultimately their origin.

## 2 THE SPECTRAL ENERGY DISTRIBUTIONS OF BLAZARS

Observationally, blazars are characterised by a strong radio core with flat or inverted spectrum, and by an extremely luminous broad band continuum, highly variable at all wavelengths. After the launch of the CGRO it was discovered that this continuum extends into the $`\gamma `$-ray range and, most importantly, that the $`\gamma `$-ray luminosity represents a large fraction of the total emitted power. The presence or absence of broad emission lines in the optical-UV spectrum has led to distinguishing quasar-like objects from BL Lac type objects. However there are arguments to believe that these differences, rather than representing a genuine dichotomy in the type of processes occurring at the nucleus, may arise from different physical conditions outside the jet (e.g., Bicknell 1994). Support for this unifying view comes from a comparison of the overall spectral shapes of different subclasses of blazars. To this end we collected multiwavelength data for three complete samples of blazars: the 2 Jy sample of flat spectrum radio-loud quasars (FSRQ), the 1 Jy sample of BL Lac objects and the X-ray selected BL Lacs from the Einstein Slew Survey (see Fossati et al. 1998 for full information and references). The results are shown in Fig. 1, where the three samples have been merged and the total sample has been binned according to radio luminosity only.
From the figure it is apparent that:

* All the spectral energy distributions (SED) in the $`\nu f_\nu `$ representation are characterized by two peaks, indicating two main spectral components.
* The first peak falls at lower frequency for higher luminosity objects.
* The second peak frequency seems to correlate with the first one, as indicated by the $`\gamma `$-ray and X-ray slopes. The curves drawn as reference correspond to a fixed ratio between the two peak frequencies.

Although one should be aware that selection biases may affect the results (see Fossati et al. 1997), Figure 1 suggests that the SEDs of all blazars are globally similar and lie along a continuous spectral sequence. For the most luminous objects the first peak falls at frequencies lower than the optical band, while for the least luminous ones the reverse is true. Thus highly luminous objects have steep (“red”) optical-UV continua, while low luminosity objects with peak frequency beyond the UV range have flat (“blue”) optical-UV continua. For brevity we will refer to objects on the high and low luminosity ends of the sequence as “red” or “blue” blazars.

## 3 IMPORTANCE OF MULTIWAVELENGTH VARIABILITY

It is generally thought that the first spectral component is due to synchrotron radiation. The spectra from the radio to the submillimeter range most likely involve superposed contributions from different regions of the jet with different self-absorption turnovers. From infrared (IR) frequencies upwards the synchrotron emission should be thin and could be produced in a single zone of the jet, allowing adoption of a homogeneous model. The second spectral component (peaking in $`\gamma `$-rays) could be produced by the high energy electrons responsible for the synchrotron component, upscattering soft photons via the inverse Compton process (IC) (e.g., Sikora 1994; Ulrich, Maraschi, & Urry 1997 (UMU97), and references therein). An immediate consequence of this interpretation is that changes in the electron population should produce correlated variability in the two components. Different models for the soft photon source(s) (the synchrotron photons themselves, synchrotron self-Compton, SSC; photons from a possible accretion disk, or related broad line region, EC; synchrotron photons backscattered from gas clouds close to the jet, “mirror Compton”, MC; see e.g., UMU97, and references therein) imply different possible patterns of variability of the IC component. For instance in the SSC model the Compton component must vary with the synchrotron component and with larger amplitude (in the Thomson regime), while in the EC model variations of the IC component without associated variations in the synchrotron component should be possible (Ghisellini & Maraschi 1996). However in the case of the MC model variability could closely mimic the SSC type. Clearly the study of correlated variability at frequencies close to the two peaks of the SED is an essential tool to constrain models. It is important to stress that in blue blazars the X-ray range represents the high energy end of the synchrotron component. Deriving from extremely energetic electrons with short lifetimes, the X-ray emission from blue blazars is rapidly variable. The associated IC emission falls beyond the EGRET energy range and is detectable in the TeV band for the few brightest objects. This offers the possibility of ground based monitoring at the highest energies (VHE) and makes the study of the X-ray/TeV correlation extremely interesting.
For red blazars the corresponding energy ranges are from the IR to the UV (1-10 eV) for the synchrotron component and from the MeV to the GeV bands for the IC component, while X-rays represent the low energy end of the IC component. Correlation studies of the IR to UV emission on one hand with the X-ray to $`\gamma `$-ray emission on the other have been performed by several groups (see UMU97 and refs therein). The most intensively observed object has been 3C 279. Repeated intensive multiwavelength campaigns have detected a series of high and low states with rather regular spectral variations. The synchrotron intensity is indeed correlated with the IC intensity, at least on long time scales, with $`\gamma `$-rays showing the largest variability amplitude. The BeppoSAX data obtained in 1997, when the source was in an intermediate intensity state, confirm this trend (see Fig. 1, from Maraschi 1998). Unfortunately, the exhaustion of the spark-chamber gas is causing substantial degradation of the EGRET sensitivity, practically preventing further monitoring in $`\gamma `$-rays. At the same time the developing capabilities of the ground based Cherenkov telescopes allow improved monitoring at TeV energies, shifting the focus of multiwavelength studies from red to blue blazars. Therefore we will concentrate here on BeppoSAX results concerning three bright blue blazars detected in the TeV band, PKS 2155–304, Mkn 501 and Mkn 421. Other important results obtained with BeppoSAX concern the spectral variability of the third blue BL Lac detected at TeV energies (Giommi et al. 1998a), the X-ray spectra of bright, red blazars (e.g., Ghisellini et al. 1998) and of other BL Lac samples (e.g., Wolter et al. 1998, Padovani et al. 1998).

## 4 PKS 2155–304

PKS 2155–304 is one of the brightest BL Lacertae objects in the X-ray band and one of the few detected in $`\gamma `$-rays by the EGRET experiment on CGRO (Vestrand et al. 1995). It was observed by BeppoSAX during the PV phase (Giommi et al. 1998b). No observations at other wavelengths simultaneous with the $`\gamma `$-ray ones were ever obtained for this source, yet it is essential to measure the IC and synchrotron peaks at the same time in order to constrain emission models unambiguously (e.g., Dermer et al. 1997, Tavecchio, Maraschi, & Ghisellini 1998). For these reasons, having been informed by the EGRET team of their observing plan and of the positive results of the first days of the CGRO observation, we asked to swap a prescheduled target of our BeppoSAX blazar program with PKS 2155–304. In 1997 November 11-17 (Sreekumar & Vestrand 1997) the $`\gamma `$-ray flux from PKS 2155–304 was very high, roughly a factor of three greater than the previously published value from this object. BeppoSAX pointed at PKS 2155–304 for about 1.5 days starting on November 22. Quick-look analysis indicated that also the X-ray flux was close to the highest recorded levels (Chiappetti & Torroni 1997). A paper on these data is currently submitted for publication (Chiappetti et al. 1998). Here we summarise the most important results.

i) Light Curves

Fig. 2 (left panel) shows the light curves binned over 1000 s obtained in different energy bands, 0.1-1.5 keV (LECS) and 3.5-10 keV (MECS). The light curves show clear high amplitude variability: three peaks can be identified. The most rapid variation observed (the decline from the peak at the start of the observation) has a halving timescale of about $`2\times 10^4`$ s, similar to previous occasions (see e.g., Urry et al. 1997).
No shorter time scale variability is detected, although our observations would have been sensitive to doubling timescales of order $`10^3`$ s. The variability amplitude is energy dependent, being larger at higher energies. The hardness ratio correlates positively with the flux, indicating that states with higher flux have harder spectra. We looked for time lags between variations at different energies, as suggested by previous ASCA observations of the same source and of Mkn 421 (Makino this volume; Takahashi et al. 1996). The possible presence of a soft lag is indicated by the fact that the maximum hardness ratio occurs before maximum intensity. Using the Discrete Correlation Function (Edelson & Krolik 1988) and fitting its maximum with a Gaussian, we estimate from its peak a lag of $`0.49\pm 0.08`$ (1-$`\sigma `$). A more detailed discussion of the issue of lags in this source, including comparison with previous observations and error estimates through Monte Carlo simulations, is given in Treves et al. (1998).

ii) Spectra

We found that the joint LECS and MECS spectra could not be adequately fitted by either a single or a broken power law (bpl). A single power law is unacceptable even in each spectral range separately, while a bpl model with Galactic absorption yields good fits to the data of each instrument, with consistent slopes at the high energy end of the LECS and at the low energy end of the MECS. We therefore adopted this model as an approximation to a more realistic continuously curved spectral shape (see Giommi et al. 1998b). The spectral indices derived from separate bpl fits to the LECS and MECS data integrated over the whole observation are reported in Table 1, which allows comparison with the other two sources discussed below. The change in slope between the softest (0.1-1 keV) and hardest (3-10 keV) energy ranges is $`\approx 0.8`$. Fitting together the MECS and PDS data yields spectral parameters very similar to those obtained for the MECS alone. The residuals show that the PDS data are consistent with an extrapolation of the MECS fits up to about 50 keV. Above this energy the PDS data show an excess, indicating a flattening of the spectrum. The spectrum at the flare peak is harder than at lower intensity, as can be seen by computing directly the ratio of the count rate spectra as a function of energy, yet the spectral change is small and for bpl fits of the separate instruments the derived parameters are only marginally different. The deconvolved spectra are shown in Fig. 2 (right panel).

## 5 Mkn 421

Mkn 421, closely similar to PKS 2155–304 in brightness and spectral shape at UV and X-ray wavelengths, was observed by BeppoSAX in April 1998 as part of a large multiwavelength campaign based on a week of continuous observation with ASCA and simultaneous monitoring with the available Cherenkov telescopes, Whipple, HEGRA and CAT. The BeppoSAX observations were scheduled before the ASCA ones in order to extend the coverage in time. A well defined, pronounced flare was observed at the beginning of the observation, reaching its peak in about 3 hrs and decaying in half a day. Due to its bright state the source was also clearly visible in the high energy instrument, the PDS.

i) Light Curves

The light curves in three energy bands, derived from the LECS, MECS and PDS, all normalised to their respective average intensity, are shown in Fig. 3.
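An aside on method: the inter-band lag estimates quoted above for PKS 2155–304, and below for Mkn 421, rely on the Discrete Correlation Function (DCF) of Edelson & Krolik (1988). The following is a minimal sketch of a DCF on synthetic light curves; the bin sizes, the mock flare shape, and the 4000 s shift are illustrative assumptions, not the actual data.

```python
# Sketch of the discrete correlation function (DCF) used for the lag searches.
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    """DCF of two light curves on (possibly uneven) time grids."""
    u1 = (f1 - f1.mean()) / f1.std()
    u2 = (f2 - f2.mean()) / f2.std()
    udcf = np.outer(u1, u2)                       # unbinned correlations
    dt = t2[None, :] - t1[:, None]                # all pairwise lags
    out = []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        sel = (dt >= lo) & (dt < hi)
        out.append(udcf[sel].mean() if sel.any() else np.nan)
    return np.array(out)

t = np.arange(0.0, 1.2e5, 1000.0)                 # 1000-s bins, as in the text
soft = np.exp(-0.5 * ((t - 5e4) / 1.5e4)**2)      # mock flare in the soft band
hard = np.interp(t - 4000.0, t, soft)             # hard band delayed by 4000 s
bins = np.arange(-2e4, 2e4, 2000.0)
corr = dcf(t, soft, t, hard, bins)
centers = 0.5 * (bins[:-1] + bins[1:])
print("DCF peak near lag =", centers[np.nanargmax(corr)], "s (soft leads hard)")
```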
The amplitude of variability increases and the decay time scales decrease with increasing energy, although the latter effect is difficult to quantify without a specific model for the light curve. It is interesting to explore whether the light curves at different frequencies show lags, as observed previously in this source by Takahashi et al. (1996). A first analysis with the DCF method reveals that, contrary to the case of PKS 2155–304 and of the 1994 flare of Mkn 421, in the present flare the soft photons lead the medium energy ones by about 1500 s. The significance of this result needs to be assessed by a reliable estimate of the errors of the DCF method; however, we can definitely exclude that lags or leads larger than this value are present.

ii) Spectra

The shape of the X-ray spectra is qualitatively similar to the case of PKS 2155–304 and similar considerations apply. The spectral indices derived from separate bpl fits to the LECS and MECS data for 1998 April 21, which cover the whole flare, and for 1997 May 4 are reported in Table 1. The change in slope between the softest (0.1-1 keV) and hardest (3-10 keV) energy ranges is $`\approx 0.8`$ in both cases. Deconvolved spectra are compared in Fig. 3. It is apparent that the peak in the power distribution moves to higher energies with increasing intensity, reaching 1 keV during the flare.

## 6 Mkn 501

BeppoSAX observations of Mkn 501 in April 1997 revealed a completely new behavior. The spectra showed that at that epoch the synchrotron component peaked at 100 keV or higher energies, implying a shift of at least two orders of magnitude of the peak energy with respect to the quiescent state (Pian et al. 1998). Correspondingly the source was extremely bright in the TeV band and exhibited rapid flares (Catanese et al. 1997, Aharonian et al. 1997). The source was reobserved with BeppoSAX at three epochs, on 28, 29 April and 1 May 1998, for $`\sim 10`$ hours each, simultaneously with ground-based optical and TeV Cherenkov telescopes (Whipple and HEGRA). For all epochs, fits to all the data with either a single or a broken power-law are unacceptable. The “curvature” in the LECS-MECS range is however smaller than for the previous two sources, and a bpl fit to the joint LECS-MECS spectra gives a satisfactory result. The joint MECS, HPGSPC and PDS spectra are also well fitted with a bpl model. The spectral indices, obtained by fixing the value of the hydrogen column density to the Galactic value ($`1.73\times 10^{20}`$ cm<sup>-2</sup>, from Lockman & Savage 1995), are reported in Table 1. The deconvolved spectra of 1998 are compared in Fig. 4 (right panel) with that of the 1997 flare. The 2-10 keV flux observed from Mkn 501 in April-May 1998 was close to that measured on 7 April 1997, namely the lowest observed with BeppoSAX, but substantially higher than observed historically. In 1998 the synchrotron peak was located at an energy of $`\sim 20`$ keV, much lower than in April 1997, but still exceptionally high compared to the two sources discussed above and even to most blue blazars, except possibly 1ES2344+514 (Giommi et al. 1998a). As in the cases shown above, the synchrotron peak is at higher energies during brighter states, and in a given energy range the spectra flatten with increasing intensity. However, spectral variations are small at energies much below the peak. The TeV flux measured in April-May 1998 by the Whipple and HEGRA telescopes was definitely lower than observed the previous year at the beginning of the simultaneous X-ray and TeV outburst.
Since the X-ray spectra at the two epochs differed mainly in the 20-100 keV band, we are led to conclude that the TeV emission is most likely produced through IC scattering off the high energy electrons which radiate in the hard X-ray band via the synchrotron mechanism. If this hypothesis is correct, then, analogously to the synchrotron radiation peak, the IC peak must also have shifted toward lower energies, causing significant spectral changes in the TeV band.

## 7 PRESENT UNDERSTANDING AND OPEN PROBLEMS

The simplest model one can consider, applicable to all the sources discussed above, attributes the high energy radiation to SSC emission from a homogeneous spherical region of radius R, whose motion can be characterized by a Doppler factor $`\delta `$, pervaded by a magnetic field B and filled with relativistic particles whose energy distribution is described by a bpl (the latter corresponds to 4 parameters: two indices $`n_1`$, $`n_2`$, a break Lorentz factor $`\gamma _b`$ and a normalization constant). This seven parameter model is strongly constrained if observations provide a determination of the two slopes (in the X- and $`\gamma `$-ray bands), the frequency and flux of the synchrotron peak, and the frequency and flux of the IC peak. Assuming $`R=ct_{var}`$, the system is practically closed. We refer to Tavecchio, Maraschi & Ghisellini (1998) for a general analytic procedure to determine the physical parameters of different sources in this class of models. The main point we wish to stress is that there is little uncertainty in the model parameters if both peaks can be measured simultaneously, and even when some of the values (e.g., the peak energy of the IC component) are lacking, the shape of the SED constrains the parameters to relatively restricted ranges. In fact, different authors find closely similar parameters even when using somewhat different formulations of the SSC model (e.g., Mastichiadis & Kirk 1997, Ghisellini, Maraschi, Dondi 1996 for Mkn 421). As an illustration, we show in Fig. 4 (left panel) two SEDs computed for PKS 2155–304 with the SSC model described above (the parameters are reported in the figure caption). The aim was to reproduce the lower and higher X-ray states of November 1997, together with the $`\gamma `$-ray data from the discovery observation (Vestrand, Stacy, & Sreekumar 1995) and the brighter $`\gamma `$-ray state of November 1997 (Sreekumar & Vestrand 1997). We (arbitrarily) assumed that the lower and higher X-ray intensity states correspond to the lower and higher $`\gamma `$-ray states, respectively. In order to account for the flaring state, leaving the other parameters unchanged, the break energy of the electron spectrum had to be shifted to higher energies by a factor 1.5. As a consequence, both the synchrotron and IC peaks increase in flux and move to higher energies. However, for the latter the effects are reduced with respect to the “quadratic” relations expected in the Thomson limit, since for these very high energy electrons the suppression due to the Klein-Nishina regime plays an important role.<sup>1</sup><sup>1</sup>1The Compton emission is computed here with the usual step approximation for the Klein-Nishina cross section, i.e. $`\sigma =\sigma _T`$ for $`\gamma \nu _t<mc^2/h`$ and $`\sigma =0`$ otherwise, where $`\gamma `$ is the Lorentz factor of the electron and $`\nu _t`$ is the frequency of the target photon. The models predict TeV emission at a detectable level.
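To make the SSC bookkeeping concrete, the sketch below locates the two peaks for a trial parameter set, using the standard synchrotron characteristic frequency and the Thomson-limit Compton peak capped by the same step-function Klein-Nishina cutoff as in the footnote. The parameter values (B, δ, γ_b) are illustrative guesses, not the fits shown in Fig. 4; the redshift is that of PKS 2155–304.

```python
# Rough sketch of the homogeneous SSC peak relations: from trial parameters,
# locate the observed synchrotron and inverse-Compton peak frequencies.
# 3.7e6 Hz is the standard synchrotron characteristic frequency for B in gauss.

def peak_frequencies(B, delta, gamma_b, z):
    nu_syn = 3.7e6 * B * gamma_b**2 * delta / (1.0 + z)   # observed sync peak
    nu_ic_thomson = (4.0 / 3.0) * gamma_b**2 * nu_syn     # Thomson limit
    # Klein-Nishina ceiling: scattered photons cannot exceed ~ gamma_b m_e c^2,
    # i.e. gamma_b * (8.2e-7 erg) / (6.63e-27 erg s), boosted into the observer frame
    nu_kn = gamma_b * (8.2e-7 / 6.63e-27) * delta / (1.0 + z)
    return nu_syn, min(nu_ic_thomson, nu_kn)

nu_s, nu_c = peak_frequencies(B=0.5, delta=15.0, gamma_b=2e5, z=0.116)
print(f"sync peak ~ {nu_s:.2e} Hz, IC peak ~ {nu_c:.2e} Hz")
# With these trial values the IC peak is set by the Klein-Nishina ceiling,
# landing near a TeV, consistent with the suppression discussed above.
```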
Indeed, towards the completion of this work, we have been informed of the detection of high energy $`\gamma `$-rays by the Mark 6 telescope. In November 1997 the source was seen at its highest flux (Chadwick et al. 1998). The time averaged flux corresponds to $`4.2\times 10^{-11}`$ ph cm<sup>-2</sup> s<sup>-1</sup> above 300 GeV (and extending up to $`>`$ 3 TeV). In fact, the model for the lower intensity state reproduces the TeV emission flux level remarkably well. More observations are needed to study the correlation between the TeV and X-ray fluxes in this source.

## 8 DISCUSSION AND CONCLUSIONS

The X-ray spectra and spectral variability of the three sources discussed above, which are the brightest blue blazars in the X-ray band, appear extremely coherent in the sense that they can be described using analogous spectral laws, where differences between different sources and different states of the same source can be understood as changes in the energy at which the peak power is emitted. Moreover, their TeV emission is generally correlated with the X-ray intensity level within the same source. This behavior can be well understood in the context of the SSC process in a relativistic jet, which seems to account quite convincingly for the SED of blue blazars. In these objects, of relatively low intrinsic power, the synchrotron and IC components tend to peak at the highest energies (X-ray and TeV energies, respectively) and the synchrotron photons dominate the seed radiation field upscattered to $`\gamma `$-ray energies. The models allow us to establish the physical parameters of the jet plasma with relatively little uncertainty, and there is general agreement that, at least for the three sources discussed above, the required values of the Doppler factors are quite high (δ ≳ 10), the magnetic fields are in the range 0.1–1 G, and the critical electron energies (the break energy in the bpl SSC model) are of order $`10^5`$, with values as high as $`10^6`$ for Mkn 501 during the 1997 high state. While the assessment of the radiation mechanisms and physical parameters in the jet seems quite reliable, the inferred variations in the critical electron energies, which seem to correlate well with brightness, potentially contain important clues for an understanding of the variability and of the modes of particle acceleration and injection. Time dependent models for the evolution of the energy distributions of electrons subject to acceleration, injection, propagation and energy losses are required. The problem is in general complex and only some simplified cases have been treated up to now (e.g., Kirk et al. 1998, Dermer et al. 1998, Makino 1998). In addition, light travel time effects through the emitting region may be important (Chiaberge & Ghisellini 1998). An important point is the measurement and interpretation of lags of the soft photons with respect to the harder ones. These can be due to radiative cooling if the population of injected (accelerated) electrons has a low energy cut off, or possibly a sharp low energy break, as clearly shown by Kazanas et al. (1998). If so, the observed lag $`\tau _{obs}`$ depends only on the value of the magnetic field (assuming synchrotron losses are dominant, as is roughly the case for these sources) and on $`\delta `$.
Their relation can be expressed as
$$B\delta ^{1/3}=300\left(\frac{1+z}{\nu _1}\right)^{1/3}\left[\frac{1-(\nu _1/\nu _0)^{1/2}}{\tau _{obs}}\right]^{2/3}G$$ (1)
where $`\nu _1`$ and $`\nu _0`$ represent the frequencies (in units of $`10^{17}`$ Hz) at which the observed lag has been measured. It is interesting to note that the value of the soft lag inferred for PKS 2155–304 from the BeppoSAX observations yields a B and $`\delta `$ combination consistent with the parameters obtained independently from the spectral fitting, which supports the radiative interpretation of the observed X-ray variability (Tavecchio, Maraschi, & Ghisellini 1998). However, the soft lead possibly present in the 1998 flare of Mkn 421 should probably be related to the acceleration time scale (Kirk et al. 1998), implying that the time dependence of the acceleration/injection events themselves plays a significant role. In conclusion, the time resolved continuum spectroscopy made possible by sensitive and broad band instruments like ASCA and BeppoSAX has allowed us to reconstruct the spectra of the emitting high energy electrons in blazars and to follow their evolution in time. Therefore X-ray data of the present quality allow us to probe not only the radiation mechanisms but also the fundamental processes of particle acceleration and transport in relativistic jets.
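A direct transcription of Eq. (1) is straightforward; the lag, frequencies and redshift used below are placeholder values for illustration (z = 0.116 is the redshift of PKS 2155–304), not the measured BeppoSAX lag.

```python
# B * delta^(1/3) implied by a soft lag tau_obs measured between a soft
# frequency nu1 and a hard frequency nu0 (Eq. (1) above), in Gauss.
def b_delta_third(tau_obs_s, nu1_hz, nu0_hz, z):
    nu1 = nu1_hz / 1e17                          # Eq. (1) uses units of 1e17 Hz
    term = (1.0 - (nu1_hz / nu0_hz) ** 0.5) / tau_obs_s
    return 300.0 * ((1.0 + z) / nu1) ** (1.0 / 3.0) * term ** (2.0 / 3.0)

# Illustrative inputs: a 1500 s lag between 0.5 keV and 5 keV photons.
print(b_delta_third(1500.0, 1.2e17, 1.2e18, z=0.116))   # ~1.7 G
```

For these inputs the combination $`B\delta ^{1/3}`$ is of order 1 G, consistent with the fields of 0.1–1 G and Doppler factors δ ≳ 10 quoted above.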
## 9 REFERENCES

Aharonian, F., et al. 1997, A&A, 327, L5
Bicknell, G. V. 1994, ApJ, 422, 542
Blandford, R. D., & Rees, M. J. 1978, in Pittsburgh Conference on BL Lac Objects, ed. A. M. Wolfe, 328
Catanese, M., et al. 1997, ApJ, 487, L143
Chadwick, P. M., et al. 1998, ApJ, in press (astro-ph/9810209)
Chiaberge, M., & Ghisellini, G. 1998, MNRAS, submitted (astro-ph/9810263)
Chiappetti, L., & Torroni, V. 1997, IAU Circ., 6776, 2
Chiappetti, L., et al. 1998, submitted
Dermer, C. D., Sturner, S. J., & Schlickeiser, R. 1997, ApJS, 109, 103
Dermer, C. D. 1998, ApJ, 501, L157
Edelson, R. A., & Krolik, J. H. 1988, ApJ, 333, 646
Fossati, G., et al. 1997, MNRAS, 289, 136
Fossati, G., et al. 1998, MNRAS, 299, 433
Ghisellini, G., Maraschi, L., & Dondi, L. 1996, A&AS, 120, 503
Ghisellini, G., & Maraschi, L. 1996, ASP Conf. Ser., 110, 436
Ghisellini, G., et al. 1998, Nucl. Phys. B Proc. Suppl., 69, 427
Giommi, P., et al. 1998a, Nucl. Phys. B Proc. Suppl., 69, 407
Giommi, P., et al. 1998b, A&A, 333, L5
Kazanas, D., Titarchuk, L. G., & Hua, X.-M. 1998, ApJ, 493, 708
Kirk, J. G., Rieger, F. M., & Mastichiadis, A. 1998, A&A, 333, 452
Lockman, F. J., & Savage, B. D. 1995, ApJS, 97, 1
Makino, F. 1998, to be published in “BL Lac Phenomenon”
Maraschi, L., et al. 1994, ApJ, 435, L91
Maraschi, L. 1998, Nucl. Phys. B Proc. Suppl., 69, 389
Maraschi, L., et al. 1998, to be published in “Tutti i colori degli AGN”, third Italian conference on AGN, Roma, May 18-21, Memorie SAIt (astro-ph/9808177)
Mastichiadis, A., & Kirk, J. G. 1997, A&A, 320, 19
Padovani, P., et al. 1998, in Proc. of the Conference “BL Lac Phenomenon” (Turku, Finland), PASP Conf. Ser., ed. L. Takalo, in press
Pian, E., et al. 1998, ApJ, 492, L17
Sikora, M. 1994, ApJS, 90, 923
Sreekumar, P., & Vestrand, W. T. 1997, IAU Circ., 6774, 2
Takahashi, T., et al. 1996, ApJ, 470, L89
Tavecchio, F., Maraschi, L., & Ghisellini, G. 1998, ApJ, in press (astro-ph/9809051)
Treves, A., et al. 1998, in Proc. of the Conference “BL Lac Phenomenon” (Turku, Finland), PASP Conf. Ser., ed. L. Takalo, in press (astro-ph/9811244)
Ulrich, M. H., Maraschi, L., & Urry, C. M. 1997, Ann. Rev. Astron. Astrophys., 35, 445
Urry, C. M., et al. 1997, ApJ, 486, 799
Vestrand, W. T., Stacy, J. G., & Sreekumar, P. 1995, ApJ, 454, L93
Wehrle, A. E., et al. 1998, ApJ, 497, 178
Wolter, A., et al. 1998, A&A, 335, 899
# Kondo Screening in Gapless Magnetic Alloys

## Abstract

The low-energy physics of a spin-$`\frac{1}{2}`$ Kondo impurity in a gapless host, where a density of band states $`\rho _0(ϵ)=|ϵ|^r/(|ϵ|^r+\beta ^r)`$ vanishes at the Fermi level $`ϵ=0`$, is studied by the Bethe ansatz. The growth of the parameter $`\mathrm{\Gamma }_r=\beta \mathrm{g}^{-1/r}`$ (where $`\mathrm{g}`$ is an exchange constant) is shown to drive the system ground state from the Kondo regime with the screened impurity spin to the Anderson regime, where the impurity spin is unscreened; however, in a weak magnetic field $`H`$ it exceeds its free value, $`S_i(H)>\frac{1}{2}`$, due to a strong coupling to the band. It is shown also that a sufficiently strong potential scattering at the impurity site destroys the Anderson regime.

A growing body of theoretical studies of unconventional magnetic alloys, initiated by Withoff and Fradkin, shows that the standard picture of the Kondo effect in metals should be fundamentally revised in the case of so-called “gapless” hosts, where an effective density of band states vanishes precisely at the Fermi level $`ϵ_F`$ as $`|ϵ-ϵ_F|^r`$ with $`r>0`$. “Poor-man’s” scaling arguments, large-$`N`$ studies, and numerical renormalization group calculations show that the Kondo screening of the impurity spin in gapless systems occurs only if an effective electron-impurity coupling exceeds some critical value. Otherwise, an impurity decouples from the band. However, a Bethe ansatz (BA) analysis of the ground state properties of an infinite-$`U`$ Anderson impurity, both in a BCS superconductor (a “gapped” Fermi system) and in a gapless host, has shown no weak-coupling regime in the low-energy behavior of the system. The ground state of an unconventional Anderson system preserves the basic characteristic features of the metallic version. The appearance of a sufficiently small gap or a pseudogap in the band dispersion results only in some corrections to the standard solution. In contrast to the metallic version, the unconventional Anderson systems exhibit, however, a nonuniversal behavior, which could explain discrepancies between the BA solution and results of studies based on scaling arguments. In this Letter, we employ hidden integrability of a spin-$`\frac{1}{2}`$ Kondo impurity in an unconventional host to explore the low-energy physics of gapless systems, where an effective density of band states can be modeled by
$$\rho _0(ϵ)\equiv \frac{dk(ϵ)}{dϵ}=\frac{|ϵ-ϵ_F|^r}{|ϵ-ϵ_F|^r+\beta ^r},\qquad r>0.$$ (1)
Here, $`ϵ_F`$ is the Fermi energy, $`k(ϵ)`$ is the inverse band dispersion, and the parameter $`\beta `$ characterizes the size of the domain with a nonmetallic behavior of $`\rho _0(ϵ)`$. In the Bethe ansatz approach to the theory of dilute magnetic alloys, pioneered by Wiegmann and Andrei, the spectrum of a free host is alternatively described in terms of interacting Bethe particles rather than in terms of free electrons with spin “up” and “down”, while an impurity plays the role of an additional scattering center for Bethe particles. Because of the separation of the charge and spin degrees of freedom, the spectrum of Bethe excitations, in general, contains charge excitations, spin waves, and their bound complexes. In the Kondo model, the electron-impurity scattering is energy independent; therefore the Bethe spectrum of the model does not contain charge complexes. The ground state of the system is composed of charge excitations and spin waves only. The spin waves screen the impurity spin in the zero-temperature limit $`T\to 0`$.
In the Anderson model, in contrast, the ground state is composed of charge complexes, in which two charge excitations are bound to a spin wave. Since charge complexes are singlets, the spin of a spinless Anderson impurity is naturally quenched in the ground state of the system. In unconventional hosts, scattering amplitudes acquire an additional energy dependence because of the energy dependent density of band states. In the Anderson system, the scattering amplitudes essentially depend on energy already in a metallic host. The additional dependence only renormalizes them slightly near the Fermi level, which does not lead to any drastic changes in the low-energy physics of the system in comparison with the metallic version. In the Kondo system, the situation is clearly very different. Since the scattering amplitudes in the standard Kondo model are energy independent, the appearance of an energy dependence in an unconventional host could really lead to drastic changes in the physics of the system. As in the Anderson models, the Bethe spectrum of the gapless Kondo systems is shown to contain charge complexes. As usual, only the simplest complexes contribute to the ground state of the system. Therefore, one may restrict further consideration to (i) charge excitations, (ii) spin waves, and (iii) the simplest charge complexes, in which two charge excitations are bound to a spin wave. To simplify our terminology, we use hereafter the terms “particles” and “complexes” to refer to charge excitations and charge complexes, respectively. One can propose two different physical scenarios for the system behavior when the parameter $`\beta `$ increases from its metallic value $`\beta =0`$. (i) One may expect that at arbitrarily large $`\beta `$ the ground state of a gapless system preserves the basic characteristic features of the standard Kondo model, so that the impurity spin is screened. However, the Kondo temperature then decreases to extremely low values at large $`\beta `$, and the Kondo effect thus practically disappears. (ii) In the second scenario, one may expect that the ground states of a gapless system with a sufficiently large $`\beta `$ and of the metallic system are qualitatively different, so that the impurity spin in a gapless host is unscreened. In terms of the BA language, it is obvious that the only way to suppress the Kondo screening is to reconstruct the ground state in such a way that all spin waves are built into singlet complexes, as takes place in the Anderson systems. To explore the low-energy physics of the system, we derive the thermodynamic BA equations for the renormalized (fundamental) energies of Bethe excitations at a finite temperature $`T`$ and then study the limit $`T\to 0`$. Solving these equations at $`T=0`$, we find the ground state of the system as a state in which all states of Bethe excitations with negative energies are filled out, while all states with positive energies are empty. To derive the thermodynamic BA equations one has to fix the exponent $`r`$ in Eq. (1). In the cases of $`r=\frac{1}{2}`$, $`r=1`$, and $`r=2`$, the bare energies of complexes are negative; therefore they essentially affect the ground state properties of the system. However, this alone is not yet sufficient to suppress the Kondo screening. The second physical scenario presupposes that complexes expel all particles and spin waves from the ground state of the system. Only in this case is the Kondo screening suppressed completely.
The qualitative behaviors of the system in all three cases mentioned above are very similar, while the BA mathematics in the $`r=\frac{1}{2}`$ and $`r=1`$ cases is more tedious. To keep our mathematics as simple as possible, we focus in this Letter on the $`r=2`$ case, which is, however, of particular physical interest. We show that at a sufficiently large energy scale of complexes $`\mathrm{\Gamma }_r=\beta \mathrm{g}^{-1/r}`$, where $`\mathrm{g}`$ is an effective coupling constant, the renormalized energy of particles is positive over the whole band. The growth of the parameter $`\mathrm{\Gamma }_r`$ thus drives the ground state of the system from the Kondo type, in which spin waves screen the impurity spin, to the Anderson type, in which all spin waves are built into singlet complexes and, therefore, the impurity spin is unscreened, $`S_i=\frac{1}{2}`$. Nevertheless, as in the case of the Anderson system, we still deal with a strong-coupling regime of our system. To clarify this point we study the magnetic properties of the ground state of the system in the Anderson-type regime. Although the impurity spin is unscreened and the magnetic susceptibility of the impurity $`\chi _i`$ diverges in a weak external magnetic field $`H`$, $`\chi _i\propto H^{-1/3}`$, the impurity does not behave like a “free” localized magnetic moment. Indeed, in a magnetic field, a part of the complexes decay into particles and spin waves. While the spin waves disappear to create a finite magnetization of the host, the particles bring a positive contribution to the unscreened impurity spin. Thus, at $`T=0`$ the impurity spin in a magnetic field exceeds its free magnitude, $`S_i(H)>\frac{1}{2}`$, which clearly has no analogy in free-impurity behavior. An effective 1D Hamiltonian of the system is written in terms of the Fermi operators $`c_\sigma (ϵ)`$ which refer to a band electron with spin $`\sigma =\uparrow ,\downarrow `$ in an $`s`$-wave state of energy $`ϵ`$,
$$\mathcal{H}=\sum _\sigma \int \frac{dϵ}{2\pi }\,ϵ\,c_\sigma ^{\dagger }(ϵ)c_\sigma (ϵ)+\sum _{\sigma ,\sigma ^{\prime }}\int \frac{dϵ}{2\pi }\int \frac{dϵ^{\prime }}{2\pi }I(ϵ,ϵ^{\prime })\,c_\sigma ^{\dagger }(ϵ)\left(\stackrel{}{\sigma }_{\sigma \sigma ^{\prime }}\cdot \stackrel{}{S}\right)c_{\sigma ^{\prime }}(ϵ^{\prime })$$ (2)
Here, $`\stackrel{}{\sigma }`$ are the Pauli matrices and $`\stackrel{}{S}`$ is the impurity spin operator. The electron energies and momenta in Eq. (2) and hereafter are taken relative to the Fermi values, which are set equal to zero. The effective exchange coupling, $`I(ϵ,ϵ^{\prime })=\frac{1}{2}I\sqrt{\rho _0(ϵ)\rho _0(ϵ^{\prime })}`$, involves the exchange coupling constant $`I`$ and the density of band states $`\rho _0(ϵ)`$. At an arbitrary density of band states, the model (2) is diagonalized by the following BA equations:
$$\mathrm{exp}(ik_jL)\theta _{\frac{1}{2}}(u_j+1/\mathrm{g})=\prod _{\alpha =1}^{M}\theta _1(u_j-\lambda _\alpha ),$$ (4)
$$\theta _1(\lambda _\alpha +1/\mathrm{g})\prod _{j=1}^{N}\theta _1(\lambda _\alpha -u_j)=\prod _{\beta =1}^{M}\theta _2(\lambda _\alpha -\lambda _\beta ),$$ (5)
where $`\theta _\nu (x)=(x-i\nu /2)/(x+i\nu /2)`$, $`k_j=k(\omega _j)`$ and $`\omega _j`$ are electron momenta and energies, $`N`$ is the total number of electrons on an interval of size $`L`$, and $`M\le N/2`$ is the number of electrons with spin “down”.
The eigenenergy $`E`$ and the $`z`$ component of the total spin of the system $`S^z`$ are given by
$$E=\sum _{j=1}^{N}\omega _j,\qquad S^z=\frac{1}{2}+\frac{N}{2}-M,$$ (6)
and the energy dependence of the charge “rapidity” $`u_j=u(\omega _j)`$ reads
$$u(\omega )=\frac{2}{I}\frac{1}{\rho _0(\omega )}-\frac{3}{32}I\rho _0(\omega )-\frac{1}{\mathrm{g}},$$ (7)
where $`\mathrm{g}^{-1}=2/I-3I/32\approx 2/I`$ is an effective coupling constant. The second term in $`u(\omega )`$ is much smaller than the first one at all $`\omega `$; however, as will be clearly seen in what follows, this term plays a crucial role in the low-energy physics of the system, and must be kept. In a metal, where $`\beta =0`$, and hence $`\rho _0=1`$ and $`u_j=0`$, Eqs. (4) and (5) reduce to the BA equations of the standard Kondo model. However, from the point of view of the BA mathematics, they are similar to the BA equations of the Anderson rather than the Kondo system. As in the Anderson model, apart from particles with real energies and momenta, the BA equations (4) and (5) admit also complexes in which $`2n`$ charge excitations are bound to a spin complex of order $`n`$. Thus, the energy dependence of the charge rapidity in an unconventional host essentially enriches the Bethe spectrum of the Kondo system. As in the Anderson system, only complexes of the lowest order, $`n=1`$, contribute to the low-energy physics of the system. Therefore, we restrict our consideration to the simplest complexes, in which two charge excitations with complex energies $`\omega _\pm (\lambda )`$,
$$u(\omega _\pm )=\lambda \pm \frac{i}{2},$$ (8)
and corresponding momenta $`k_\pm (\lambda )\equiv k[\omega _\pm (\lambda )]`$ are bound to a spin wave with a rapidity $`\lambda `$, provided that $`\text{Im}k_+(\lambda )>0`$. The bare energy of a complex,
$$\xi _0(\lambda )=\omega _+(\lambda )+\omega _{-}(\lambda )=-2\mathrm{\Gamma }X(\lambda )-2\gamma x(\lambda ),$$ (10)
where $`\mathrm{\Gamma }=\beta /\sqrt{\mathrm{g}}`$, $`\gamma =\frac{3}{32}\mathrm{g}\mathrm{\Gamma }=\frac{3}{32}\beta \sqrt{\mathrm{g}}`$, and
$$X(\lambda )=\sqrt{2}\frac{d}{d\lambda }\left(\lambda +\sqrt{\lambda ^2+1/4}\right)^{1/2},$$ (11)
$$x(\lambda )\simeq \frac{1}{(\lambda +1/\mathrm{g})\sqrt{\lambda }},$$ (12)
is negative at all $`\lambda `$. Here, we took into account the smallness of the second term in Eq. (7), which results in the small second term in Eq. (10), $`\gamma \ll \mathrm{\Gamma }`$. Moreover, for the latter we use only its asymptotic form at $`\lambda \gg 1`$.
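As a quick consistency check of the statement that the bare energy is negative at all λ, the following sketch (with assumed values of g and Γ; energies in units of the bandwidth) evaluates ξ₀(λ) numerically, using x(λ) only in its large-λ asymptotic form, as in the text.

```python
# Numerical check that xi_0(lambda) = -2*Gamma*X - 2*gamma*x < 0 everywhere;
# X(lambda) = sqrt(2) * d/dlambda (lambda + sqrt(lambda^2 + 1/4))^(1/2) > 0.
import numpy as np

g = 0.2                                # assumed effective coupling constant
Gamma = 1.0                            # beta / sqrt(g), illustrative value
gamma = (3.0 / 32.0) * g * Gamma       # note gamma << Gamma

def X(lam):
    root = np.sqrt(lam * lam + 0.25)
    return np.sqrt(2.0) * 0.5 * (1.0 + lam / root) / np.sqrt(lam + root)

def x_asym(lam):                       # asymptotic form, valid for lam >> 1
    return 1.0 / ((lam + 1.0 / g) * np.sqrt(lam))

lam = np.linspace(-10.0, 10.0, 2001)
xi0 = -2.0 * Gamma * X(lam)
xi0[lam > 1.0] -= 2.0 * gamma * x_asym(lam[lam > 1.0])
print(xi0.max() < 0.0)                 # True: the bare complex energy is negative
```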
In the standard manner, the thermodynamic BA equations of our model for the renormalized energies of particles, $`\epsilon (\omega )`$, spin waves, $`\kappa (\lambda )`$, and complexes, $`\xi (\lambda )`$, are found to be
$$\epsilon (\omega )=\omega -\frac{1}{2}H-\int _{-\infty }^{+\infty }d\lambda \,a_1[u(\omega )-\lambda ]F[\kappa (\lambda )]+\int _{-\infty }^{+\infty }d\lambda \,a_1[u(\omega )-\lambda ]F[\xi (\lambda )],$$ (14)
$$\kappa (\lambda )=H+\int _{-\infty }^{+\infty }d\omega \,u^{\prime }(\omega )a_1[\lambda -u(\omega )]F[\epsilon (\omega )]+\int _{-\infty }^{+\infty }d\lambda ^{\prime }\,a_2(\lambda -\lambda ^{\prime })F[\kappa (\lambda ^{\prime })],$$ (15)
$$\xi (\lambda )=\xi _0(\lambda )+\int _{-\infty }^{+\infty }d\omega \,u^{\prime }(\omega )a_1[\lambda -u(\omega )]F[\epsilon (\omega )]+\int _{-\infty }^{+\infty }d\lambda ^{\prime }\,a_2(\lambda -\lambda ^{\prime })F[\xi (\lambda ^{\prime })].$$ (16)
Here, $`F[f(x)]\equiv T\mathrm{ln}\{1+\mathrm{exp}[-f(x)/T]\}`$, $`u^{\prime }(\omega )=du/d\omega `$, $`a_\nu (x)=(2\nu /\pi )(\nu ^2+4x^2)^{-1}`$, and $`H`$ is an external magnetic field. In the zero-temperature limit, $`T\to 0`$, all states with negative energies must be filled out, while all states with positive energies must be empty. In the absence of a magnetic field, the magnetization of the host must be equal to zero. This implies that at $`H=0`$ the number of particles in the system is twice as large as the number of spin waves. Therefore, at $`H=0`$ the ground state is composed of charge complexes only if
$$\kappa (\lambda )>0,\quad \lambda \in (-\infty ,+\infty );\qquad \epsilon (\omega )>0,\quad \omega \in (-ϵ_F,ϵ_F).$$ (17)
The energy of spin waves is easily seen from Eq. (15) to be positive, provided $`\epsilon (\omega )>0`$. Therefore, in the limit $`T\to 0`$, the conditions (17) reduce to $`\epsilon (\omega )>0`$ at $`H=0`$, where
$$\epsilon (\omega )=\omega -\frac{1}{2}H-\int _{-\infty }^{+\infty }d\lambda \,a_1[u(\omega )-\lambda ]\xi (\lambda ),$$ (19)
and the energy $`\xi (\lambda )`$ is found from the equation
$$\xi (\lambda )=\xi _0(\lambda )-\int _{-\infty }^{+\infty }d\lambda ^{\prime }\,a_2(\lambda -\lambda ^{\prime })\xi (\lambda ^{\prime }).$$ (20)
Inserting the solution of Eq. (20) into Eq. (19), one obtains
$$\epsilon (\omega )=\omega -\frac{1}{2}H+2\mathrm{\Gamma }\int _{-\infty }^{+\infty }d\lambda \,s[u(\omega )-\lambda ]X(\lambda )+2\gamma \int _{-\infty }^{+\infty }d\lambda \,s[u(\omega )-\lambda ]x(\lambda ),$$ (22)
where $`s(x)=[2\mathrm{cosh}(\pi x)]^{-1}`$. As $`\omega \to 0`$ the rapidity $`u(\omega )\to \infty `$, and we find
$$\epsilon (\omega )\approx \omega -\frac{1}{2}H+|\omega |+\gamma \frac{|\omega |^3}{\mathrm{\Gamma }^3}+𝒪\left(|\omega |^5\right).$$ (23)
At $`\omega <0`$, the first and third terms cancel, while the fourth term determines a small positive contribution to the particle energy. In a sufficiently weak magnetic field, the function $`\epsilon (\omega )`$ is negative in a small domain between the points $`\mathrm{\Omega }_{-}\approx -\mathrm{\Gamma }(H/2\gamma )^{1/3}`$ and $`\mathrm{\Omega }_+\approx H/4`$, where $`\mathrm{\Omega }_\pm `$ are found from the equation $`\epsilon (\mathrm{\Omega }_\pm )=0`$. As $`\omega \to -\infty `$ the function $`\epsilon (\omega )\to \omega +\text{const}`$, and hence it has a third zero, at some point $`\omega =\mathrm{\Omega }<0`$.
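The zeros quoted above follow directly from the expansion (23); a minimal numerical sketch with illustrative values of H, Γ and γ confirms them:

```python
# Roots of eps(w) = w - H/2 + |w| + gamma*|w|^3/Gamma^3 (the expansion (23)):
# eps < 0 only on (Omega_-, Omega_+) in a weak field.
from scipy.optimize import brentq

H, Gamma, gamma = 1e-3, 1.0, 0.02      # illustrative parameter values

def eps(w):
    return w - 0.5 * H + abs(w) + gamma * abs(w) ** 3 / Gamma ** 3

omega_minus = brentq(eps, -1.0, -1e-12)   # for w < 0: -H/2 + gamma*|w|^3/Gamma^3
omega_plus = brentq(eps, 1e-12, 1.0)      # for w > 0: ~ 2w - H/2
print(omega_minus, -Gamma * (H / (2.0 * gamma)) ** (1.0 / 3.0))   # both ~ -0.292
print(omega_plus, H / 4.0)                                        # both ~ 2.5e-4
```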
The critical magnitudes of the parameters at which particles and spin waves disappear from the ground state of the system are clearly determined by the condition $`\mathrm{\Omega }=-ϵ_F`$, or
$$2𝒢\int _{-\infty }^{+\infty }d\lambda \,s(𝒢^2-\lambda )\left[X(\lambda )+\frac{3}{32}\mathrm{g}x(\lambda )\right]=1,$$ (24)
where $`𝒢=\mathrm{\Gamma }_{\mathrm{cr}}/ϵ_F`$. In the absence of the second term in the brackets, the left-hand side of Eq. (24) is less than $`1`$, but it is asymptotically very close to $`1`$ already at $`𝒢\approx 1`$. Therefore, the estimate $`\mathrm{\Gamma }_{\mathrm{cr}}\approx ϵ_F`$ works very well at all reasonable values of the coupling constant $`\mathrm{g}`$. Thus, at $`\mathrm{\Gamma }<\mathrm{\Gamma }_{\mathrm{cr}}`$, the ground state of the system is composed of particles, spin waves, and complexes. The complexes essentially affect the ground state properties; however, as in a metal, the “free” spin waves, not built into complexes, completely screen the impurity spin in the absence of a magnetic field, $`S_i=0`$. At $`\mathrm{\Gamma }>\mathrm{\Gamma }_{\mathrm{cr}}`$, the scattering on complexes renormalizes the energy of particles to positive values over the whole band. In other words, particles and spin waves are completely expelled from the ground state of the system, which is now composed of singlet complexes only. Therefore, the impurity spin is unscreened and equal to its free magnitude, $`S_i=\frac{1}{2}`$, in the absence of a magnetic field. Although the impurity spin is unscreened, the impurity is easily seen to be strongly coupled to the band. In the continuous limit, Eqs. (4) and (5) for the system ground state take the form of integral equations for the densities of states of particles, $`\rho (\omega )`$, and complexes, $`\sigma (\lambda )`$:
$$\rho (\omega )=\frac{1}{L}\delta [u(\omega )+1/\mathrm{g}]+\frac{1}{2\pi }\rho _0(\omega )-u^{\prime }(\omega )\int _{-\infty }^{+\infty }d\lambda \,a_1[u(\omega )-\lambda ]\sigma (\lambda ),$$ (26)
$$\sigma (\lambda )=\frac{1}{L}\mathrm{\Delta }(\lambda +1/\mathrm{g})+\frac{1}{2\pi }\frac{dp(\lambda )}{d\lambda }-\int _{-\infty }^{+\infty }d\lambda ^{\prime }\,a_2(\lambda -\lambda ^{\prime })\sigma (\lambda ^{\prime })-\int _{\mathrm{\Omega }_{-}}^{\mathrm{\Omega }_+}d\omega \,a_1[\lambda -u(\omega )]\rho (\omega ).$$ (27)
Here, $`p(\lambda )=k_+(\lambda )+k_{-}(\lambda )`$ is the momentum of a complex, while the functions $`\delta [u(\omega )]=u^{\prime }(\omega )a_{\frac{1}{2}}[u(\omega )]`$ and $`\mathrm{\Delta }(\lambda )=a_{\frac{3}{2}}(\lambda )-a_1(\lambda )-a_{\frac{1}{2}}(\lambda )`$ describe the scattering of particles and complexes at the impurity site. Separating the densities into the host and impurity parts, $`\rho (\omega )=\rho _h(\omega )+L^{-1}\rho _i(\omega )`$, we find for the impurity spin
$$S_i=\frac{1}{2}+\frac{1}{2}\int _{\mathrm{\Omega }_{-}}^{\mathrm{\Omega }_+}d\omega \,\rho _i(\omega ).$$ (28)
Since complexes carry no spin, they do not contribute to the impurity spin. At $`H=0`$, $`\mathrm{\Omega }_\pm =0`$, and the impurity spin is equal to its free magnitude, $`S_i=\frac{1}{2}`$. In the presence of an arbitrarily weak field, $`\mathrm{\Omega }_\pm \ne 0`$, and the unscreened impurity spin acquires a positive contribution due to the strong coupling to the host band. In a weak field, the last term on the right-hand side of Eq. (27) is small and can be omitted in the zero-order computations.
Then, taking into account that $`\mathrm{\Delta }(\lambda )\propto \lambda ^{-4}`$ as $`\lambda \to \infty `$, one obtains $`\rho _i(\omega )\approx \delta [u(\omega )]`$ as $`H\to 0`$, and the impurity spin is found to be
$$S_i=\frac{1}{2}+\frac{1}{8\pi }\left(\frac{H}{2\gamma }\right)^{2/3}+𝒪(H^2).$$ (29)
While the magnetic susceptibility of the host vanishes, $`\chi _h\propto H^2`$, the impurity susceptibility, $`\chi _i\propto H^{-1/3}`$, diverges as $`H\to 0`$. Thus, the Nozières theory based on the Fermi liquid approach does not generalize to the case of a gapless host. Finally, it should be noted that the unscreened-impurity-spin regime is very sensitive to potential (spin independent) scattering at the impurity site. While in a metal potential scattering does not play any essential role, in a gapless host it essentially affects the expression for the charge rapidity $`u(\omega )`$. If a potential scattering with a coupling constant $`V`$ is taken into account, the expression (7) takes the form
$$u(\omega )=\frac{2}{I}[\rho _0^{-1}(\omega )-1]+\frac{\left(V+\frac{1}{4}I\right)\left(V-\frac{3}{4}I\right)}{2I}[\rho _0(\omega )-1].$$
At $`V=0`$ the second, positive term has been shown to result in the expelling of particles from the system ground state. It is clear that if $`V`$ lies outside the interval $`(-I/4,3I/4)`$, the sign of this term is negative, which immediately destroys the Anderson (unscreened impurity spin) regime. I am thankful to S. John for stimulating discussions.
# Periodicity-dependent stiffness of periodic hydrophilic-hydrophobic hetero-polymers

## Abstract

From extensive Monte Carlo simulations of a Larson model of perfectly periodic heteropolymers (PHP) in water, a striking stiffening is observed as the period of the alternating hydrophobic and hydrophilic blocks is shortened. At short period and low temperature, needle-like conformations are the stable conformation. As temperature is increased, thermal fluctuations induce kinks and bends. At large periods compact oligomeric globules are observed. From the generalized Larson prescription, originally developed for modelling surfactant molecules in aqueous solutions, we find that the shorter the period, the more stretched the PHP. This novel effect is expected to stimulate polymer synthesis and trigger research on the rheology of aqueous periodic heteropolymer solutions. PACS Nos. 82.70.-y; 87.15.Aa; 61.41.+e

Almost all the important “molecules of life”, e.g., DNA, RNA and proteins, are hetero-polymers. Therefore, in order to gain insight into the in-vivo “structure” and “function” of these macromolecules, in recent years physicists and chemists have been studying the in-vitro structure and dynamics of simpler hetero-polymers consisting of only two different types of monomers. The sequence distribution is totally random in what are known as random heteropolymers (RHP). The RHP are of special interest to theorists also because of their close relation with the random energy model and spin glasses; these similarities and the unusual properties of the RHP are consequences of the combination of quenched disorder and a special type of frustration arising from the competing interactions in the RHP. Very recently, random heteropolymers with correlated sequence distribution have also been considered theoretically. On the other hand, perfectly periodic heteropolymers (PHP) have begun to receive attention only very recently. Orlandini and Garel carried out what may be loosely called the first in-vacuo Monte Carlo (MC) simulations of PHP. The aim of this paper is to report the results of in-vitro MC simulations of a very simple model of PHP in water, to demonstrate a novel dependence of the stiffness of the PHP on the periodicity of the hydrophilic (or hydrophobic) segments. We follow the recent reformulation of the Larson model of surfactants in water to model the PHP in water. In the spirit of lattice gas models, the system is modelled as a simple cubic lattice of size $`L_x\times L_y\times L_z`$. Each of the molecules of water can occupy a single lattice site. A surfactant occupies several lattice sites, successive pairs of which are connected by a nearest-neighbour bond of fixed length. We shall refer to each site on the surfactants as a monomer. The primary structure of each PHP can be described by the symbol $`I_pO_pI_pO_p\cdots I_pO_p`$, where $`I`$ and $`O`$ refer to the hydrophilic and hydrophobic monomers; the basic building block $`I_pO_p`$, whose segments are each of length $`L_p`$, is repeated $`n`$ times such that $`2L_pn=L_a`$ is the total length of the PHP. No monomer is allowed to occupy a site which is already occupied by a water molecule. Besides, no two monomers of the PHP are allowed to occupy the same site simultaneously. If the chain consisted of only hydrophilic monomers it would behave exactly as a self-avoiding walk in vacuo, because of the complete identity between the hydrophilic monomers and the molecules of water.
On the other hand, if it consisted of only hydrophobic monomers it would collapse, forming a compact globule. What makes the model PHP so interesting is the competition between these two conformations arising from the competing hydrophilic-hydrophobic effects. For the convenience of computation, we have reformulated the model of PHP in terms of classical Ising-spin-like variables, generalizing the corresponding formulation for the single-chain surfactants. In this reformulation, a classical Ising-spin-like variable $`S`$ is assigned to each lattice site; $`S_i=1`$ if the $`i`$-th lattice site is occupied by a water molecule. If the $`j`$-th site is occupied by a monomer belonging to a PHP, then $`S_j=1,-1`$ depending on whether the monomer at the $`j`$-th site is hydrophilic or hydrophobic, respectively. The temperature $`T`$ of the system is measured in units of $`J/k_B`$, where $`J`$ denotes the strength of the interaction between a spin and its six nearest neighbours. This reformulation in terms of Ising-spin-like variables has been successfully used in studying a wide variety of phenomena exhibited by various types of surfactant molecules in aqueous media, and should not be confused with magnetic polymers. Besides, molecular dynamics simulations of similar molecular models have also been carried out to study the spontaneous formation of self-assemblies of surfactant molecules. Both the position of the center of mass and the conformation of the PHP are random in the initial state of the system. The allowed moves of the PHP are the same as those of the small surfactants in the Larson model (see ref), namely, reptation, buckling and anti-buckling (also called pull) and kink movement. Starting from the initial state, the system is allowed to evolve following the standard Metropolis algorithm: each attempted move of the PHP is accepted with certainty if $`\mathrm{\Delta }E<0`$, and with a probability proportional to $`\mathrm{exp}(-\mathrm{\Delta }E/T)`$ if $`\mathrm{\Delta }E\ge 0`$, where $`\mathrm{\Delta }E`$ is the change in energy that would be caused by the proposed move of the PHP. In order to collect information on the qualitative features of the conformations of the PHP we have directly looked at many snapshots of the PHP at various stages of MC updating of the state of the system. We have also computed several different quantities which provide important quantitative information on various aspects of the conformation of the PHP. A gross measure of the “size” of the PHP in water is given by its radius of gyration
$$R=\sum _{j=1}^{L_a}(\stackrel{}{r}_j-\stackrel{}{R}_{cm})^2$$ (1)
where $`\stackrel{}{r}_j`$ is the position vector of the $`j`$-th monomer and $`\stackrel{}{R}_{cm}`$ is the position of the center of mass, defined as $`\stackrel{}{R}_{cm}=(1/L_a)\sum _{j=1}^{L_a}\stackrel{}{r}_j`$. Insight into the composition of the local neighbourhood of an arbitrary hydrophilic monomer can be gained by computing the quantities $`N_{ii},N_{io}`$ and $`N_{iw}`$, which are the average numbers of its nearest-neighbour sites that are occupied by a hydrophilic monomer, a hydrophobic monomer and a water molecule, respectively. Similarly, the composition of the local neighbourhood of an arbitrary hydrophobic monomer is reflected in the numbers $`N_{oi},N_{oo}`$ and $`N_{ow}`$, which are the average numbers of its nearest-neighbour sites that are occupied by a hydrophilic monomer, a hydrophobic monomer and a water molecule, respectively.
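As an illustration of the observables just defined (a sketch of the definitions, not the production simulation code), consider a conformation stored as an ordered list of lattice coordinates with labels $`S_j=+1`$ (hydrophilic) or $`-1`$ (hydrophobic); the occupancy dictionary mapping occupied sites to 'i', 'o' or 'w' is an assumed helper, with unoccupied sites defaulting to water.

```python
# Sketch of R (Eq. (1)) and of the neighbour counts N_ii, N_io, N_iw for a
# PHP conformation on the simple cubic lattice. Illustration only.
import numpy as np

def radius_of_gyration(coords):
    """R of Eq. (1): summed squared distance of the monomers from the
    center of mass, in lattice units."""
    r = np.asarray(coords, dtype=float)
    return float(np.sum((r - r.mean(axis=0)) ** 2))

def neighbour_counts(coords, labels, occupancy):
    """Average numbers of hydrophilic ('i'), hydrophobic ('o') and water
    ('w') nearest neighbours of a hydrophilic monomer."""
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    n_ii = n_io = n_iw = 0
    hydrophilic = [c for c, s in zip(coords, labels) if s == +1]
    for x, y, z in hydrophilic:
        for dx, dy, dz in steps:
            kind = occupancy.get((x + dx, y + dy, z + dz), 'w')
            n_ii += kind == 'i'
            n_io += kind == 'o'
            n_iw += kind == 'w'
    n = max(len(hydrophilic), 1)
    return n_ii / n, n_io / n, n_iw / n
```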
Obviously, $`N_{io}=N_{oi}`$ as, throughout this paper, we consider PHP consisting of equal numbers of hydrophilic and hydrophobic segments of the same length $`L_p`$. Suppose an index $`j`$ ($`j=1,2,\dots ,L_a`$) labels the monomers sequentially along the primary structure of the PHP chain from one fixed end. The $`i,j`$-th element, $`C_{ij}`$, of the contact map $`C`$ is defined to be non-zero if and only if, in at least one of its equilibrium configurations, the $`i`$-th and the $`j`$-th monomers (irrespective of whether hydrophilic or hydrophobic) are not nearest neighbours along the chain but occupy two nearest-neighbour lattice sites. The contact map has been used to reconstruct the three-dimensional conformation of bio-polymers. For a given $`L_a`$, $`L_p`$ and $`T`$, after equilibration, we have computed the above-mentioned quantities of our interest. Then we have repeated the calculations for several values of $`L_a`$, $`L_p`$ and $`T`$. All the data reported in this letter, however, have been generated for $`L_a=400`$, corresponding to the longest PHP for which we could sample, after equilibration, the sufficiently large number of configurations required for averaging. For a fixed $`L_a=400`$, typical snapshots of the PHP for a few different $`L_p`$ are shown in figs.1a-d. The PHP is very stiff for $`L_p=4`$ (fig.1a). For intermediate values of $`L_p`$, e.g., $`L_p=40`$ (fig.1b) and $`L_p=50`$ (fig.1c), it has a necklace-like conformation where “beads” of hydrophobic monomers are connected by hydrophilic chains. Finally, when $`L_p`$ is of the same order as $`L_a`$, e.g., $`L_p=100`$ (fig.1d), the hydrophobic monomers form a large collapsed globule surrounded by hydrophilic monomers. Each of the hydrophilic (hydrophobic) monomers has a tendency to have hydrophilic (hydrophobic) nearest neighbours and to avoid having hydrophobic (hydrophilic) nearest neighbours. The snapshots shown in fig.1 also indicate that a longer $`L_p`$ enables the PHP to satisfy these tendencies. This can be shown more quantitatively (fig.2) by plotting $`N_{ii}`$, $`N_{io}`$, $`N_{iw}`$, $`N_{oo}`$ and $`N_{ow}`$ against $`f=L_p/L_a`$ at a fixed temperature $`T=2.0`$. One striking feature of the PHP is that the shorter the period, the more stretched the PHP, as shown by the snapshots in fig.1. This trend of variation is reflected in the structure of the contact maps, shown in figs.3a-d, corresponding to figs.1a-d, respectively. In the contact map for $`f=L_p/L_a=0.01`$ there are very few non-zero elements outside the diagonal backbone of the map. With increase of $`f`$, more and more non-zero elements far from the diagonal backbone appear, signalling folding or collapse of the PHP. This trend of variation of the “size” of the PHP can also be seen quantitatively in fig.4, where we plot the radius of gyration $`R`$ of the PHP as a function of $`f`$. Finally, keeping $`f`$ fixed at a small value, say $`f=0.01`$, corresponding to which the PHP is very stiff, if we raise $`T`$, the $`R`$ of the PHP falls monotonically with increasing $`T`$ (fig.5), as expected, because of stronger thermal fluctuations. In summary, in this letter we have developed a Larson-type model of a periodic hetero-polymer. By carrying out MC simulations of these model PHP, each consisting of equal numbers of hydrophilic and hydrophobic monomers, we have investigated the effects of varying the period on its conformations in equilibrium.
We have observed that, at a given temperature, the smaller the ratio $`f=L_p/L_a`$, the stiffer the PHP. We would like to emphasize that the stiffness of the PHP at a fixed temperature decreases with increasing $`L_p=L_a/(2n)`$, where $`n`$ is the number of segments of each type, in spite of the fact that $`nL_p`$, the total number of hydrophobic monomers, remains fixed for the given $`L_a`$. This prediction, we believe, can be tested directly in laboratory experiments. We thank E. Domany for enlightening discussions on contact maps and L. Santen for help with the graphics. This work is supported by the SFB341 Köln-Aachen-Jülich and the German-Israeli Foundation. \* On leave from the Physics Department, I.I.T., Kanpur 208016, India.
# The CLEO-III Ring Imaging Cherenkov Detector

## 1 INTRODUCTION

The CLEO detector is undergoing a major upgrade (CLEO-III) in conjunction with a luminosity upgrade of the CESR electron-positron collider (CESR Phase-III). This upgrade will increase the luminosity of the machine by more than a factor of 10, to ℒ = $`2\times 10^{33}`$ cm<sup>-2</sup>sec<sup>-1</sup>, or ∼20 fb<sup>-1</sup>/yr, allowing unprecedented sensitivity to study CP violation in charged $`B`$ decays as well as the phenomenology of rare $`B`$ decay modes (with BR ∼ $`10^{-6}`$). Charged hadron identification is crucial in distinguishing these decay modes. Typically, one wants highly efficient $`\pi /K`$ separation with mis-identification probabilities ≲ $`10^{-2}`$ over the full momentum range of secondaries from $`B`$-hadrons produced at the $`\mathrm{{\rm Y}}(4S)`$ resonance. Achieving this capability in modern particle detection has heretofore been elusive. We at CLEO believe that the best way to accomplish this task is to construct a Ring Imaging Cherenkov (RICH) Detector.

## 2 GENERAL PRINCIPLES OF A PROXIMITY-FOCUSED RICH

The CLEO-III RICH detector consists of three components: radiator, expansion volume, and photon detector. No focusing is used; this is called “proximity-focusing”. When an incident charged particle with sufficient momentum ($`\beta >1/n`$) passes through a radiator medium, it emits photons at an angle $`\mathrm{\Theta }`$ via the Cherenkov effect; some photons are internally reflected due to the large refractive index $`n`$ of the radiator, and some escape. These latter photons propagate in a transparent expansion volume, sufficiently large to allow the Cherenkov cone to expand in size (as much as other spatial constraints allow). The photons are imaged by a two-dimensional pad detector, a photosensitive multi-wire chamber which records their spatial positions. The resulting images are portions of conic sections, distorted by refraction and truncated by internal reflection at the boundaries of media with different optical densities. Thus, knowing the track parameters of the charged particle and the refractive index of the radiator, one can reconstruct the Cherenkov angle $`\mathrm{\Theta }=\mathrm{cos}^{-1}(1/n\beta )`$ and extract the particle mass. This elegant and compact approach was pioneered by the Fast-RICH Group. In order to achieve efficient particle identification with low fake rates, we set as a design goal a system capable of $`\pi /K`$ separation with 4$`\sigma `$ significance ($`N_\sigma =\mathrm{\Delta }\mathrm{\Theta }/\sigma _\mathrm{\Theta }`$) at 2.65 GeV/$`c`$, the mean maximum momentum for two-body $`B`$-decays at a symmetric $`e^+e^-`$ collider. At this momentum, the $`\pi `$-$`K`$ Cherenkov angle difference $`\mathrm{\Delta }\mathrm{\Theta }=14.4`$ mrad, which, along with $`1.8\sigma `$ $`dE/dx`$ identification from the central Drift Chamber, requires a Cherenkov angle resolution $`\sigma _\mathrm{\Theta }=4.0`$ mrad per track. Using the relation $`\sigma _\mathrm{\Theta }=\sigma _{\mathrm{\Theta }\mathrm{pe}}/\sqrt{N_{\mathrm{pe}}}`$, we can establish round-number benchmarks for our design: a resolution of 14 mrad per photoelectron, and a photoelectron yield of 12 pe per track.

## 3 BASIC DETECTOR DESIGN

The overall RICH design is cylindrical, with compact photon detector modules at the outer radius and radiator crystals at the inner radius, forming thirty 12° sectors in azimuth. A schematic of the RICH is given in Ref. .
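A back-of-the-envelope check of the design numbers quoted above (a sketch only: it uses the in-radiator angle with an assumed n = 1.50 and ignores refraction at the radiator exit):

```python
# Cherenkov angles in LiF for pions and kaons at 2.65 GeV/c, and the
# per-track resolution implied by the two round-number benchmarks.
import math

def cherenkov_angle(p_gev, m_gev, n=1.50):
    beta = p_gev / math.hypot(p_gev, m_gev)      # beta = p / E
    return math.acos(1.0 / (n * beta))           # Theta = acos(1/(n*beta))

p = 2.65
d_theta = cherenkov_angle(p, 0.1396) - cherenkov_angle(p, 0.4937)
print(1e3 * d_theta)                 # ~14.3 mrad, cf. the 14.4 mrad quoted
print(14.0 / math.sqrt(12.0))        # ~4.0 mrad per track from the benchmarks
```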
The RICH resides between the central Drift Chamber and the CsI Calorimeter. The ensuing space budget constrains the detector to fit in 80–100 cm in radius, and 2.5 m in length (82% of the solid angle). The mass budget restricts the thickness to 12% $`X_\mathrm{o}`$ to avoid significantly degrading the performance of the Calorimeter. However, the driving constraint of the design in many respects is the choice of Triethylamine (TEA) as a photosensor, which is both chemically aggressive and manifests a quantum efficiency in the VUV regime (135–165 nm). This greatly restricts the available materials: optical materials must be transparent in the VUV, and construction materials need to be chemically resistant to TEA and low outgassing. Each detector component and related design issues are discussed in turn.

## 4 CRYSTAL RADIATORS

The baseline design for the radiator is an array of individual planar LiF crystals (footnote: the LiF and CaF<sub>2</sub> crystals are grown and polished by OPTOVAC, North Brookfield, MA), each $`170\times 170`$ mm<sup>2</sup> and 10 mm thick, mounted on an inner carbon fiber cylinder. Due to the high refractive index of LiF ($`n=1.50`$ at 150 nm), a large number of Cherenkov photons are internally reflected, resulting in only a partial ring being imaged (about 1/3 of the initial cone). This is especially severe for incident track angles under 15°, where the radiator would have to be tilted on the inner cylinder for the incident track to exceed this angle. In order to improve this situation, a novel radiator geometry has been implemented, cf. Figure 1. This “sawtooth” radiator, with its inner surface cut in profile to resemble the teeth of a saw, allows photons to cross the surface at near normal incidence. This reduces the photon loss by internal reflection and, as a consequence of the refraction angle, also the dominant chromatic error contribution to the resolution. Detailed Monte Carlo simulations indicate that all performance parameters are better with a sawtooth radiator, especially so at small values of the incident track angle. Hence the central region of the detector will use 120 sawtooth radiators (47% of the solid angle coverage), and the outer regions will use 300 planar radiators.

## 5 EXPANSION VOLUME

The expansion volume is essentially empty space, 157 mm in radial distance, filled with pure N<sub>2</sub> gas. There are no support structures in the interior of the detector, which would be obstructions to photon propagation. However, structural rigidity must be maintained with low mass, and so our mechanical design calls for aluminum end-flanges glued to the inner cylinder, and a detector module support frame, with reinforcing rib and box structures added to the modules. Importantly, UV photons will be lost if the expansion volume is not well-sealed from O<sub>2</sub> and H<sub>2</sub>O contamination, and from any possible leakage from the photon detector volume. Redundant gas seals are used in our design, and the N<sub>2</sub> will be exchanged at a flow rate sufficient to maintain high transparency.

## 6 PHOTON DETECTORS

The photon detector is a compact, photosensitive asymmetric multi-wire chamber, shown in Figure 2, filled with CH<sub>4</sub> carrier gas bubbled through liquid TEA at 15°C (5.5% vapor concentration). TEA has a peak QE of 33% at 150 nm and a spectral bandwidth of 135–165 nm.
The detection sequence is: a photon passes through a thin UV-transparent CaF<sub>2</sub> window and is converted to a single electron by ionizing a TEA molecule. The single photoelectron drifts towards, then avalanches near, the 20 $`\mu `$m Ø Au-W anode-wires, and induces a charge signal on the array of $`8.0\times 7.5`$ mm<sup>2</sup> cathode-pads, providing a spatial point for the photoelectron. The design maximizes the photon conversion efficiency by having the wire-window gap be many photoabsorption lengths ($`ϵ=99.9`$%, with ℓ<sub>abs</sub> = 0.5 mm at 150 nm), and maximizes the anode-cathode charge coupling ($`C\approx 78\%`$) by having the wire-pad gap be as small as practical (1 mm). Both may be optimized at once for a fixed thickness by making the chamber asymmetric. The CaF<sub>2</sub> window crystals must be deposited with metallized traces in order to act as field electrodes. Moreover, the crystals are large ($`191\times 308`$ mm<sup>2</sup>) and quite thin (2 mm), in order to minimize photon absorption and radiation length, and must be mounted with no mechanical stress to avoid inducing fractures. Our design utilizes Ultem hinges, to which the CaF<sub>2</sub> windows are glued, to relieve stresses. All construction materials in the chamber volume must be low outgassing and TEA-compatible. Extensive tests of TEA effects on various materials and adhesives of interest have been made.

## 7 READOUT ELECTRONICS

The choice of readout electronics is governed by the Furry (exponential) charge distribution of a single photoelectron avalanche, cf. Figure 3, and also by the time allowed for the readout. The most likely charge is zero, but the tail is long, and so it is necessary to have analog front-end electronics with low noise and large dynamic range in order to maximize the photoelectron detection efficiency. The charge information is necessary in accurately determining the centroid of the photoelectron, as well as in disentangling the overlap of two nearby charge distributions. The latter requires high segmentation. Given the modest pad size over the large detector area, there are 230,400 total electronics channels. Occupancy is low ($`<`$1%) and so sparsification is required. The front-end signal processor is the Viking VA\_RICH chip, a new custom-designed 64-channel VLSI chip (footnote: this Viking chip was designed and manufactured by IDE AS, Oslo, Norway) incorporating these requirements, with measured rms noise $`\mathrm{ENC}=130e^{-}+9C_{\mathrm{det}}e^{-}/\mathrm{pF}`$ (≈150$`e^{-}`$ for a modest $`C_{\mathrm{det}}\approx 2`$ pF), a shaping time of 1–3 $`\mu `$sec, and good linearity up to $`\pm 4.5\times 10^5e^{-}`$ input. The chip has input protection, a preamplifier/shaper, sample & hold circuitry, and a differential current output multiplexer. The analog signal travels over a 6 m long cable from the detector to a VME databoard with receiver, 12-bit ADC and sparsifier.

## 8 BEAM-TEST SETUP

In order to test our understanding of the design and behavior of this detector, a comprehensive beam test was performed. The first two completed photon detectors of the CLEO-III RICH were mounted on an aluminum box simulating the expansion volume, and equipped with one planar and two sawtooth LiF radiators (cf. Figure 4). The beam test was performed in a muon halo beam in the Meson East area of Fermilab, downstream of Experiment E866.
The setup consisted of the RICH itself, its supporting gas and HV systems, trigger scintillators, and a charged-particle tracking system (2 MWPCs with 0.7 mm spatial resolution per station and a combined ∼1 mrad track angle resolution). Data were taken with the RICH box rotated at various polar and azimuthal angles to simulate the different incident track angles expected in CLEO-III. Photon detector operation was stable during the three week beam test, running at a nominal gain of $`4\times 10^4`$. The readout for the beam test consisted of 240 VA\_RICH chips and 8 VME databoards (for 15360 channels). After common-mode subtraction the remaining incoherent rms noise observed was 400$`e^{-}`$, providing an average signal-to-noise ratio for photoelectrons of 100:1.

## 9 BEAM-TEST RESULTS

Photoelectrons (pe) are reconstructed by determining topological clusters of pads with pulse height above a 5$`\sigma `$ pedestal cut ($`\sigma =400e^{-}`$). The overlap of multiple photoelectrons in a given cluster is disentangled by using the pulse height profile. The unbiased centroid is then found as the location of the photoelectron. (At operating voltage, pad multiplicities are 2.2 pads per cluster, and 1.1 pe per cluster.) From each photoelectron position, the original photon trajectory is optically traced back through all media to the center of the radiator, and the Cherenkov angle is reconstructed. Charged track clusters are distinguished from photon clusters by total charge and by the number of pads in the cluster. Data have been taken at a variety of track angles; in the following discussion only the datasets for the plane radiator at 30° track incidence and for the sawtooth radiator at 0° are considered in detail. Figure 5 shows a cumulative event display for all ring images in these two datasets. For the plane radiator one arc of the Cherenkov ring is visible, while for the sawtooth radiator two arcs in opposition are visible, with the lower one largely outside of the fiducial region of the detectors. Acceptance is lowered by this image truncation, and by mechanical transmission losses from construction elements in the detector. The acceptance for contained plane-radiator images is the maximum realistic acceptance for a full RICH system, which is about 85%; the acceptance for sawtooth images is approximately 50% in the two-sector beam test setup. Results from the analysis of the 30° plane radiator dataset are shown in Figure 6; only images confined to a single detector are used. The distribution of Cherenkov angle for single photoelectrons has an asymmetric tail and modest background; it is fit with a Crystal-Ball (footnote: the Crystal-Ball lineshape is a Gaussian with an exponential tail at higher angles; the resolution (σ) is extracted from the full-width at half-maximum) lineshape plus polynomial background, yielding a single photoelectron Cherenkov angle resolution $`\sigma _{\mathrm{\Theta }\mathrm{pe}}=(13.2\pm 0.05\pm 0.56)`$ mrad with a background fraction of 9.2% under the image, to be compared with a Monte Carlo estimate of 13.5 mrad. Errors quoted are first statistical, then systematic, with the latter taken from two different fitting procedures, i.e., two methods of background estimation. This background is not electronic noise; rather, it is principally due to out-of-time hadronic showers from an upstream beam dump, and there will be no such background in the CLEO-III running conditions. The Cherenkov angle per track is found as the arithmetic mean of all photoelectrons in an image.
There is an image cut of $`\pm 3\sigma _{\mathrm{\Theta }\mathrm{pe}}`$ and a systematic alignment correction applied. The resulting distribution of Cherenkov angle per track is fit to a Gaussian, and gives the angle resolution per track $`\sigma _{\mathrm{\Theta }\mathrm{trk}}=(4.54\pm 0.02\pm 0.23)`$ mrad, which compares favorably with the Monte Carlo estimate of 4.45 mrad. The systematic error is estimated from the variation (footnote: this variation has a number of root causes, each at the few percent level: the expansion volume transparency was monitored to be above 95%, there are variations in transparency over each radiator face, etc.; hence the systematic error is estimated to be at the 5% level) between different datasets taken at the same track angle, which are repeatable to 5%. The photoelectron yield $`N_{\mathrm{pe}}=(12.9\pm 0.07\pm 0.36)`$ pe per track is extracted from the area under the single photoelectron peak followed by background subtraction. Again, systematic errors dominate and are given by different methods of background estimation. (Here the beam-test Monte Carlo makes no prediction for $`N_{\mathrm{pe}}`$ but rather uses the measurement as an input parameter.) This yield exceeds our benchmark of 12 pe/track. A similar analysis for the 0° sawtooth radiator dataset, cf. Figure 7, gives a single photoelectron Cherenkov angle resolution $`\sigma _{\mathrm{\Theta }\mathrm{pe}}=(11.7\pm 0.03\pm 0.42)`$ mrad with a background fraction of 12.0% (compared with 11.1 mrad from Monte Carlo), an angle resolution per track $`\sigma _{\mathrm{\Theta }\mathrm{trk}}=(4.49\pm 0.01\pm 0.22)`$ mrad (4.28 mrad from Monte Carlo), and a photoelectron yield $`N_{\mathrm{pe}}=(10.4\pm 0.04\pm 1.0)`$ pe/track, background subtracted. Adjusted for the full 85% geometric acceptance, $`N_{\mathrm{pe}}`$ becomes 18.8 pe/track. Figure 8 provides a summary of beam-test results from all datasets at all incident angles. The measured Cherenkov angle resolution per track from the plane radiator data (denoted by squares) increases as a function of the incident track angle due to the increase in emission-point error (footnote: the Cherenkov angle resolution per track is dominated by chromatic and emission-point errors; the chromatic error is larger at small track angles, but they become comparable at large track angles). The beam-test Monte Carlo simulation gives the light dashed curve in Figure 8, which represents the data well. However, the per track resolution, e.g. 4.54 mrad at 30°, is larger than that naively calculated from statistics, i.e. $`13.2\mathrm{mrad}/\sqrt{12.9}=3.68\mathrm{mrad}`$. Monte Carlo studies indicate that the sources of the increased resolution are the MWPC tracking errors (the principal cause, 2.3 mrad at 30°) and the beam background (1.2 mrad at 30°). The tracking errors per se obviously cannot change with rotation of the RICH box; rather, they effectively increase the emission-point error through an incorrect track impact point on the radiator face, and hence become more prominent with track angle. In CLEO-III the background will be much reduced, and the tracking error contribution will be smaller yet still significant. In order to estimate the ultimate performance of this RICH, an extrapolation was made based on the beam-test Monte Carlo. The background and tracking errors are associated only with our beam test, so both were removed from the simulation for this study.
The resulting photoelectron yield was then corrected for the geometric acceptance of the beam-test setup and scaled to “full acceptance”, defined as 85% of the solid angle covered by a cylindrical RICH. The result of this “full acceptance” extrapolation for the per track resolution for the plane radiator is shown as the light solid curve in Figure 8, which is flat in track angle and below our benchmark of 4 mrad for CLEO-III. The measured per track resolution from the sawtooth radiator data (denoted by circles in Figure 8) also increases with track angle, as expected, again due to the increase in emission-point error. However, the value of the measured per track resolution is larger than expected. This has several sources: acceptance, MWPC tracking errors, beam background, and sawtooth profile effects. Geometric acceptance is the largest contribution; it is approximately 50% for all track angles. By naive statistical calculation this increases the per track resolution by 35%. Monte Carlo studies show that tracking errors are the next largest contribution (e.g. 1.9 mrad at 0°), and are exacerbated in this configuration because one of the arcs in the image is outside the detector fiducial. The beam background is approximately constant for all track angles (e.g. 1.3 mrad at 0°). Sawtooth profile effects are defined as deviations of the real sawtooth radiator from an ideal sawtooth (e.g. rounding of the edges of the teeth). However, our simulation indicates that profile effects contribute little to the broadening of the resolution, since mechanical imperfections are offset by a reduced transmission through the radiator in these same areas. However, even taking into account all these effects, the beam-test Monte Carlo, which gives the heavy dashed curve in Figure 8, does not represent the data completely. It consistently underestimates the resolution, indicating that there are additional subtle systematic effects associated with the sawtooth radiator yet to be investigated. The “full acceptance” extrapolation for the per track resolution for the sawtooth radiator, the heavy solid curve in Figure 8, is flatter in track angle and closer to our expectations in value. A more sophisticated approach is made in Figure 9, which shows the Cherenkov resolution per track for the 0° sawtooth dataset as a function of photoelectron yield. One may read off the per track resolution at the measured yield of 10.4 pe/track and extrapolate to the expected yield of 18.8 pe/track, giving a result of 3.5 mrad. Since this curve is derived from the data, it automatically takes into account statistical and systematic effects. Hence we have met our benchmark of 4 mrad.

## 10 CONCLUSIONS and OUTLOOK

A beam test of the first two sectors of the CLEO-III RICH Detector has been successfully carried out. The results obtained fulfill the CLEO-III requirements for 4$`\sigma `$ $`\pi /K`$ separation, particularly a Cherenkov angle resolution of about 4 mrad. The CLEO-III RICH Detector is in the final phase of construction.
At present 85% of the photon detectors have been built (with 40% fully tested); all CaF₂ windows, 78% of the LiF planar radiators, and 51% of the LiF sawtooth radiators have been delivered; and all readout chips have been acquired and tested. Completion and installation is expected in Summer 1999. ## ACKNOWLEDGMENTS We would like to thank Fermilab for providing us with the dedicated beam time for our test, the Computing Division for its excellent assistance, and our colleagues from E866 for their hospitality in the beamline.
no-problem/9902/astro-ph9902074.html
ar5iv
text
# TIME VARIATIONS OF SOLAR NEUTRINO. THE MAIN ARGUMENTS PRO AND SOME INFERENCES Rivin Yu.R., Obridko V.N. I. Arguments in support of time variations of the high-energy solar neutrino flux recorded with the Homestake detector. 1. In a separate run of measurements of solar neutrinos with the Homestake detector, the ratio of the number of $`{}_{}{}^{37}Ar`$ atoms per day (signal) to the average measuring error (noise) is $`∼1`$. For the annual mean values, the signal-to-noise ratio is $`∼3`$–$`4`$, which allows us to analyze cyclic (and, less reliably, “quasi-biennial”) variations of the solar neutrino flux, $`P_\nu `$ (Rivin et al., 1983; Rivin, 1989-1993; Rivin & Obridko, 1997). 2. According to the interpretation of the $`P_\nu `$ curve and its spectrum suggested in the works cited above, these variations have two independent sources. One is the processes inside the convection zone, which produce oscillations with a quasi-period $`T≈11`$ years. The other source is associated with thermal processes in the core that have a characteristic period of $`2`$–$`5`$ years, i.e. the “quasi-biennial” variations (Sacurai, 1979; Sacurai & Hasegawa, 1985; Rivin et al., 1983). Up to 1985, the neutrino flux had displayed a good correlation with the Wolf numbers, W, varying in anti-phase with an 11-year period (Rivin et al., 1983; Rivin, 1993; Davis, 1984). After 1985 (in solar cycle 22), the correlation broke down: the maximum values of $`W`$ in cycles 21 and 22 were approximately equal, whereas the maximum of $`P_\nu `$ in cycle 22 was significantly lower than in the previous one. The correlation could be somewhat improved by smoothing the original curves (removing the short-periodic oscillations), but it still remained poor after 1985, not only in the Homestake data but also in the Kamiokande-II data. This fact raised serious doubts as to whether any time variations of $`P_\nu `$ existed at all. However, the reason for this discrepancy was explained later (Obridko & Rivin, 1995, 1996; Rivin & Obridko, 1997; Rivin, 1997). It was shown that the Wolf numbers did not adequately represent cyclic variations of the modulus of the deep quadrupole solar magnetic field, $`B_q`$, which proved to remain correlated with $`P_\nu `$; the correlation even became better during the past 20 years. The point is that the amplitude of the $`B_q`$ variations, which modulate the neutrino flux, is severalfold smaller in cycle 22 than in cycle 21, and this ratio of amplitudes in two successive cycles differs strongly from that of the Wolf number series. Later on, this work was used to construct a model of 11-year variations of $`B`$ (Obridko & Rivin, 1996c; Rivin, 1997a, 1998a-c). The model involves three spatial coordinates in addition to the temporal one, and provides a simplified mechanism that describes the observations better than the available theoretical models. The observation series of solar neutrinos is too short to justify any definite conclusions; however, the coincidence in shape of the $`P_\nu `$ and $`B_q`$ amplitude variations provides additional evidence in support of our hypothesis of the 11-year cycle and other variations of $`P_\nu `$.
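As an illustration of the kind of cyclicity analysis referred to in points 1 and 2 above, one can fit sinusoids of trial periods to a series of annual means and inspect the resulting least-squares power. The sketch below uses a synthetic stand-in series (an 11-year plus a quasi-biennial component buried in noise), since the Homestake rates themselves are not reproduced here; it only demonstrates the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for annual-mean capture rates: an 11-yr cycle,
# a quasi-biennial (here 2.3-yr) component, and measurement noise.
t = np.arange(1970, 1995, dtype=float)           # years
y = (np.sin(2 * np.pi * t / 11.0)
     + 0.8 * np.sin(2 * np.pi * t / 2.3)
     + 0.7 * rng.standard_normal(t.size))

def ls_power(t, y, period):
    """Squared amplitude of a sinusoid of a given period, fitted by least squares."""
    w = 2 * np.pi / period
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0]**2 + coef[1]**2

periods = np.linspace(1.5, 15.0, 500)
power = np.array([ls_power(t, y - y.mean(), p) for p in periods])
print("strongest trial period: %.1f yr" % periods[np.argmax(power)])
```

With real annual means one would, in addition, propagate the quoted measuring errors into the significance of each peak.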
Note that the present-day argument pro and contra time variations of $`P_\nu `$ has a historical analogy. G.Schwabe, who discovered the 11-year cycle of solar activity in 1843, could not publish his results for several years, because authoritative scientists of that time believed them implausible, given that the observed diurnal Wolf numbers displayed a scatter far larger than the annual mean values. The publication was finally promoted by A.Humboldt; however, another 50–60 years had to pass before the solar cycle was broadly recognized by the scientific community. We hope that recognition of the 11-year cycle and other variations of $`P_\nu `$ will not take as long. The quasi-biennial variation of $`P_\nu `$ is much more obscured by the recording errors than the 11-year cycle and, besides, it is additionally distorted by a break in the operation of the detector in 1985–1986. Nevertheless, Rivin (1993) was able to reveal a good correlation of $`P_\nu `$ with the respective variations of the Wolf numbers, and a somewhat worse correlation with $`B_q`$. Thus, the correlation of $`P_\nu `$ with the Wolf numbers and $`B_q`$ over two periods ($`T≈11`$ years and $`T≈2`$–$`5`$ years) was established reliably enough to infer real time variations of the neutrino flux and even their modulation sources. Another important feature of the time variations of the neutrino flux is described in (Rivin et al., 1983; Rivin, 1989, 1993): the amplitudes of the $`P_\nu `$ variations with $`T≈11`$ years and $`T≈2`$–$`5`$ years are approximately equal, unlike the case of $`W`$ and $`B_q`$, where the amplitude of the 11-year cycle is much larger than the amplitude of the oscillations with the shorter period. This fundamental difference was not taken into account in statistical studies (e.g. in the work by Bahcall (1989), in the chapter devoted to neutrino time variations), which erroneously led to the conclusion that time variations in $`P_\nu `$ were absent. The analysis would have been more correct if the short-period variations had been filtered out from the data series before processing. Besides, taking into account all said above, it would be reasonable to use as a characteristic of the solar magnetic field (especially after 1986) the modulus of the large-scale field measured directly in the photosphere, rather than the Wolf numbers (Obridko & Rivin, 1995a, 1996a). 3. The works by Akhmedov (1997) and Massetti & Storini (1996) served as a starting point for our analysis of seasonal variations in $`P_\nu `$. As a result, an annual wave with extrema at the equinoxes (higher in spring and lower in autumn than at the solstices) was isolated by two different methods. It was suggested to be due to the Voloshin-Vysotsky-Okun (VVO) mechanism (Voloshin et al., 1986) operating in a magnetic field with a significant spatial asymmetry (Rivin & Obridko, 1998). This variation, however, was not revealed in the data from Kamiokande-II and SuperKamiokande quoted by Suzuki (1996). The explanation may be as follows: as a result of the modulation by the 11-year cycle, the amplitude of the annual wave is much smaller in the years of minimum than in the years of maximum of cycle 21, and it may become pronounced again in a year or two. Thus, the joint analysis of the Homestake $`P_\nu `$ data and the $`B_q`$ measurements in the photosphere allows us to infer two probable mechanisms in the Sun responsible for the modulation of the neutrino flux, $`P_\nu `$.
One is the modulus of the magnetic quadrupole, generated at the base of the transition layer between the convection and radiative zones, where the differential rotation gives way to rigid rotation (this is where the 11-year cycle and the annual wave take their origin). The other source is associated with thermal fluctuations of the core (the quasi-biennial variations). Since the quadrupole magnetic field is likely to rotate mainly with a period $`T≈27`$ days, one should expect to observe the same period in $`P_\nu `$. This inference can probably be verified in the future using SuperKamiokande and other up-to-date detectors. Preliminary diurnal $`P_\nu `$ data from the SuperKamiokande detector were given by Suzuki (1996) for 175 days after 1.04.1996. These data did not reveal any pronounced fluctuations with $`T≈27`$ days, though the scatter of individual values was quite significant. However, this may be due to the fact that, 1996 being a year of solar minimum, the amplitude of the 27-day variation of $`B_q`$ (and hence of the corresponding $`P_\nu `$ wave) was too small to be positively detected. In a year or two the situation may change. Unfortunately, the data from other detectors (e.g. Neutrino-96) are practically useless for the analysis of $`P_\nu `$ and are not easily accessible. II. Some speculations on the $`P_\nu (t)`$ modulation by $`B_q`$ The transverse magnetic field modulating the neutrino flux is usually estimated from the sunspot magnetic field measured in the photosphere (Voloshin et al., 1986). However, this procedure does not take into account the following: 1) The nature of sunspot magnetic fields is not quite clear. Are they “fragments” of the toroidal field that tear off and emerge as magnetic tubes in the photosphere, or do they proceed from the cut-off fields in the magnetic tube generation region? In the latter case, their intensity must be much lower than the intensity of the original toroidal field. Over the more than hundred-year history of observations, the sunspot magnetic fields have not displayed the cyclic variations of their intensity (H) that are characteristic of the corresponding toroidal dynamo field ($`B`$). Some authors (e.g., Vitinsky et al., 1986) argue that such periodicity in $`H`$ is absent, and that the cyclic variation of solar activity is only manifested in the changing number of sunspots. This means that the sunspot magnetic fields are most likely due to the cut-off fields in the magnetic flux generation region (the generation mechanism was described, in particular, by Parker (1975)). In this case, the obtained estimate seems to be merely a lower limit on the real magnetic field value near the generation region. 2) Does the intensity of sunspot fields correspond to that of the fields at the base of the convection zone? This question arises in connection with the new model of cyclic variations of $`B_q`$ suggested by Obridko & Rivin (1996c) and developed by Rivin (1997, 1998). The model suggests the existence of two independent, but related, regions of magnetic field generation in the Sun. The upper dynamo mechanism at the centre of the convection zone generates the field of a quasi-axial dipole, changing with a period $`T≈22`$ years. This is where $`≈99\%`$ of the sunspots observed in the photosphere occur. Taking into account all said above about the possible origin of sunspot fields, one can suggest that the intensity of the toroidal field generated by the $`\alpha \omega `$-dynamo may reach $`1`$–$`5`$ kG in this region (Steshenko, 1967).
The lower dynamo of the quadrupole magnetic field, with a probable period $`T≈27`$ days, operates at the bottom of the convection zone, where the rotation is rigid. (It is an $`\alpha \alpha `$ or some other dynamo mechanism, but not $`\alpha \omega `$.) Due to the large radial density gradient in the convection zone (five to six orders of magnitude), part of the upper field is “pumped” away from the generation region down to the base of the convection zone as a result of anisotropic turbulent pumping (ATP). As the ATP mechanism is nonlinear, the pumping is accompanied by detection (rectification), so that a cycle with $`T≈11`$ years appears, and by enhancement of the dipole field, and the latter modulates the lower dynamo and the amplitude of the variations of the quadrupole magnetic field. It can be suggested that the neutrino flux from the solar interior is modulated simultaneously. In this case, the phase shift of about 1 year between the quadrupole and the neutrino flux variations, on the one hand, and the dipole field, on the other, is due to the downward pumping of the dipole field, which takes several times as long as the emergence of the quadrupole magnetic tubes to the photosphere. The emergence velocity depends on the magnetic field $`B`$ (according to Parker (1975), $`V_b∼B/\sqrt{4\pi \rho }`$, where $`\rho `$ is the matter density). Hence, one should expect in the modulation region the existence of strong magnetic fields with $`T≈11`$ years (probably 10-100 kG) and even stronger quadrupole fields, which could ensure a rapid ($`≤1`$–$`2`$ months) emergence of the magnetic tubes. For the present, it is difficult to estimate the radial dimension of the modulation region and of the lower dynamo region of the quadrupole magnetic field. Judging from the relative difference between the real and the model seismic velocities (see Fig. 12 in Kosovichev et al., 1997), it amounts to a small fraction of $`R_{\odot }`$, where $`R_{\odot }`$ is the radius of the Sun. The suggested model of generation of the 11-year magnetic cycle by two different mechanisms (Rivin, 1998b,c) accounts for the absence of the semi-annual wave in the $`P_\nu `$ variation that would follow from the VVO hypothesis. This problem is discussed in detail in (Bahcall, 1989; Obridko & Rivin, 1995a, 1996a, 1998c). According to the VVO model, the neutrino flux is modulated by the toroidal components of the axisymmetric quasi-dipole magnetic field in the solar convection zone. Only in this case could the components of opposite sign in the two hemispheres form an equatorial gap where the field is weak, i.e. where the flux modulation must be absent. However, the model by Rivin (1998b,c) shows that no such gap is possible in the Sun: first, because the field in the modulation region is asymmetric between the hemispheres and, second, because the equatorial zone is filled with the magnetic tubes of the toroidal quadrupole large-scale magnetic field emerging from the transition layer to the photosphere. With all these considerations taken into account, the VVO model quite adequately describes the cyclic modulation and the annual wave of the solar neutrino flux. The mechanism of the quasi-biennial oscillations needs further investigation on the basis of up-to-date measurements with various detectors and the analysis of large-scale and local (sunspot) magnetic fields in the Sun. III. Conclusion The present-day knowledge brings us close to the problem of describing and simulating the cyclic variations of magnetic fields in the solar interior; however, we still cannot estimate the magnetic field in the lower dynamo region.
Nevertheless, the correlation of the cyclic variations of $`P_\nu `$ and $`B_q`$ shows that the main characteristics of the magnetic fields, of the modulation region in the solar interior, and of the magnetic moment of the solar neutrino are such as to suggest the reality of the neutrino modulation effect. It is a task for the nearest future to determine each of these values more exactly from observations of the large-scale magnetic fields on the solar surface, from helioseismologic data, as well as by detecting solar neutrino fluxes with up-to-date facilities that allow the study of their time variations. One of the urgent tasks is to analyze the probability of a variation with $`T≈27`$ days and its subharmonics, and to compare this variation (if discovered) with the corresponding variation of the quadrupole solar magnetic field. The authors are grateful to E.I.Prutenskaya for technical assistance in preparing the manuscript. The work was done under the sponsorship of the Russian Foundation for Basic Research (grant N 96-02-17054) and the State Astronomy Program (grant N 4-264). References Akhmedov, E.Kh. // hep-ph/9705451. 21.5.1997. Bahcall, J.N. // Neutrino Astrophysics. Cambridge Univ. Press, 1989. Davis, R., Cleveland, B.N., Rowley, J.K. // Intersec. Part. and Nucl. Phys. Conf., Steamboat Springs, 23-30 May 1984. New York. 1984. P.1037. Kosovichev, A.G. et al. // Solar Phys. 1997. V.170. P.43. Massetti, S., Storini, M. // Astrophys. J. 1996. V.472. P.287. Neutrino 96 / Eds. Kari Enqvist et al. Helsinki. World Scientific. 1996. Obridko, V.N., Rivin, Yu.R. // Izv. RAN, Seriya Fizicheskaya. 1995a. V.59. N9. S.110 (1995a, Bull. Russ. Acad. Sci. Phys. 59. N9). Obridko, V.N., Rivin, Yu.R. // Commissions 27 and 42 of the IAU, Information Bulletin on Variable Stars. 1995b. P.1. Obridko, V.N., Rivin, Yu.R. // Astron. & Astrophys. 1996a. V.308. P.951. Obridko, V.N., Rivin, Yu.R. // Problemy Geokosmosa. Book of Abstracts. June 17-23, 1996. S.-Petersburg. 1996b. P.90. Obridko, V.N., Rivin, Yu.R. // Astron. J. 1996c. V.73. N5. S.812 (1996c, Astron. Report. 40. N5). Parker, E.N. // Astrophys. J. 1975. V.198. P.205. Rivin, Yu.R., Gavryuseva, E.A., Gavryusev, V.G., Kosheleva, L.V. // Geomagnitnye variatsii, elektricheskie polya i toki / Ed. Levitin A.E. M.: IZMIRAN. 1983a. P.153. Rivin, Yu.R., Gavryuseva, E.A., Gavryusev, V.G., Kosheleva, L.V. // Issledovanie muonov i neitrino v bolshikh vodnykh ob'emakh / Ed. Kolomeec. Alma-Ata: KazGU. 1983b. P.33. Rivin, Yu.R. // Astron. Tsirk. 1989a. N1539. P.22. Rivin, Yu.R. // 5th Simpozium KAPG po solnechno-zemnoi fizike. Samarkand. 1989b. P.22. Rivin, Yu.R. // Astron. Tsirk. 1991. N1551. P.26. Rivin, Yu.R. // Astronom. J. 1993. V.70. N2. P.392 (1993, Astron. Report. 33. N2). Rivin, Yu.R. // Magnitnye polya Solntsa i gelioseismologiya / Ed. Dergachev V.A. S.-Petersburg. 1994. P.52. Rivin, Yu.R. // Sovremennye problemy solnechnoi aktivnosti. S.-Petersburg: GAO. 1997a. Tezisy dokladov. P.76. Rivin, Yu.R. // Sovremennye problemy solnechnoi aktivnosti. S.-Petersburg: GAO. 1997b. Tezisy dokladov. P.80. Rivin, Yu.R. // Sovremennye problemy solnechnoi aktivnosti / Eds. Makarov V.I., Obridko V.N. S.-Petersburg: GAO. 1997c. P.218. Rivin, Yu.R., Obridko, V.N. // Astronom. J. 1997. V.74. N1. P.83 (1997, Astron. Report. 41. N1). Rivin, Yu.R. // Izv. RAN, Seriya Fizicheskaya. 1998a. V.62. N6. P.1263 (1998a, Bull. Russ. Acad. Sci. Phys. 74. N6). Rivin, Yu.R. // Izv. RAN, Seriya Fizicheskaya. 1998b. V.62. N9. P.1867 (1998b, Bull. Russ. Acad. Sci. Phys. 74. N9).
Rivin, Yu.R. // Solar Phys. 1998c. (in press). Rivin, Yu.R., Obridko, V.N. // Astronom. J. 1998d. (in press). (1998, Astron. Report., in press). Sacurai, K. // Nature. 1979. V.278. N5700. P.146. Sacurai, K. // 19th Int. Cosmic Ray Conf., La Jolla, Aug. 11-23, 1985. Washington, D.C. 1985. P.430. Steshenko, N.V. // Izv. Krymskoy Astronom. Observatorii. 1967. V.37. P.21. Suzuki, Y. // 17th Int. Conf. on Neutrino Phys. and Astrophysics, Neutrino-96, Helsinki, June 13-19, 1996. World Scientific. 1996. P.73. Voloshin, M.B., Vysotsky, M.I., Okun, L.B. // Zh. Eksperiment. i Teoret. Fiziki. 1986. V.91. P.754. Vitinsky, Yu.I., Kopecky, M., Kuklin, G.V. Statistika pyatnoobrazovatelnoi deyatelnosti Solntsa. M.: Nauka. 1986. 296 p.
no-problem/9902/cond-mat9902019.html
ar5iv
text
# Reply to Comment on “Theory of Spinodal Decomposition” (Submitted to Physical Review Letters 06 July 1995; revised 18 April 1996) In his Comment to my paper Rutenberg notes that “the basis of my analysis of conserved scalar phase-ordering dynamics, to apply only the global conservation constraint $$\int d𝐱\psi =N,$$ (1) is incorrect”, because “for physical conserved systems, which evolve by mass transport, the stronger local conservation law embodied by the continuity equation” $$\partial \psi /\partial t=-\nabla \cdot 𝐣$$ (2) holds. This question needs clarification. In short, my theory has no contradiction whatsoever with the local conservation law (2). It describes the evolution of all three types of thermodynamic systems: without a conservation law (NCOP), with the global conservation law (1) (GCOP), and with the local one (2) (LCOP). In all these cases the order parameter (OP) $`\delta \psi (𝐱,t)=\psi (𝐱,t)-\psi _0(𝐱)`$ dynamics near the extremal $`\psi _0(𝐱)`$ is described by the same evolution equation $$\partial \psi /\partial t=-\mathrm{\Gamma }\delta ℱ/\delta \psi $$ (3) with different thermodynamic potential functionals $`ℱ\{\psi \}`$. For an NCOP system $`ℱ\{\psi \}F\{\psi \}=\int d𝐱f\{\psi \}`$, where $`F\{\psi \}`$ is the free energy functional and $`f\{\psi \}`$ is the specific free energy. For GCOP and LCOP systems $`ℱ\{\psi \}\mathrm{\Omega }\{\psi \}=F-\mu _gN`$, where the constant $`\mu _g`$ is a global chemical potential defined by (1) and the equilibrium condition $`\delta ℱ/\delta \psi |_{\psi =\psi _0(𝐱)}=0`$. The difference between $`F`$ and $`\mathrm{\Omega }`$ is that the conservation law (1) is taken into account. It should be noted that for GCOP and LCOP systems (1) must also be used directly for the selection of valid solutions of (3). So there is no difference between GCOP and LCOP evolution; that is why I have used the same notation (COP) for them in . In writing the evolution equation (3) I proceed from the classical point of view that a system has a thermodynamic potential $`ℱ\{\psi \}`$ if and only if it belongs to the dynamical class of so-called potential systems. For potential systems the OP evolution near the extremal $`\psi _0(𝐱)`$ is described by equation (3). More generally, $`ℱ\{\psi \}`$ is the system’s Lyapunov functional for the attractor $`\psi _0`$ if $`\dot{ℱ}<0`$ for $`\psi ≠\psi _0`$ and $`\dot{ℱ}=0`$ for $`\psi =\psi _0`$. The dynamics of a potential system consists of relaxation toward the minimum $`ℱ_0=ℱ\{\psi _0\}`$, and the existence of the Lyapunov functional guarantees the system’s global asymptotic stability at $`\psi =\psi _0`$. In more physical language, the condition $`\dot{ℱ}<0`$ for $`\psi ≠\psi _0`$ and $`\dot{ℱ}=0`$ for $`\psi =\psi _0`$ is nothing more nor less than the second law of thermodynamics. Rutenberg states that (2) imposes a supplementary constraint on the OP evolution. This is not right, because for LCOP we can always find from (2) and (3) the OP flux j, being a priori an unknown quantity, by writing a transport equation $$\nabla \cdot 𝐣=\mathrm{\Gamma }\delta \mathrm{\Omega }/\delta \psi .$$ (4) If $`\mathrm{rot}𝐣=0`$, we can rewrite (4) as $$𝐣=-\mathrm{\Gamma }\nabla \mathrm{\Phi }/(4\pi ).$$ (5) Here $`\mathrm{\Phi }(𝐱,t)=\int d𝐱^{}\{[\delta \mathrm{\Omega }\{\psi (𝐱^{},t)\}/\delta \psi ]/|𝐱-𝐱^{}|\}`$ is a non-local characteristic of the system, the so-called scalar potential of the vortex-free vector field $`𝐣(𝐱,t)`$. In the linear approximation, when $`f=[A\psi ^2+C(\nabla \psi )^2]/2`$, we get from (4) and (5) $$\nabla \cdot 𝐣=\mathrm{\Gamma }(A\psi -C\mathrm{\Delta }\psi ),$$ (6) and $`𝐣(𝐤,\omega )=\omega _\psi (𝐤)\delta \psi (𝐤,\omega )𝐤/k^2`$ with $`\omega _\psi (𝐤)=-\mathrm{i}\mathrm{\Gamma }(A+Ck^2)`$.
The condition $`𝐣(𝐤,\omega )|_{𝐤=0}=0`$ is guaranteed by $`\delta \psi (𝐤,\omega )|_{𝐤=0}=0`$, which follows from (1). We see that j consists of two transport modes: a dilatation mode, which corresponds to an extension or a contraction of the OP, and a diffusion mode, which corresponds to Fickian diffusion. If we suppose that the characteristic time of OP dilatation $`\tau _A=1/(\mathrm{\Gamma }A)`$ is much greater than the characteristic time of OP diffusion $`\tau _C=1/(\mathrm{\Gamma }Ck^2)`$, i.e. $`k^2≫1/\xi ^2`$ ($`\xi =\sqrt{C/A}`$ is the correlation length of the OP fluctuations), we get Fick’s law in its classical form $$𝐣=-𝒟\nabla \psi $$ (7) with the diffusion coefficient $`𝒟=\mathrm{\Gamma }C`$. To derive Fick’s law in the form $$𝐣=-\lambda \nabla \mu _l,$$ (8) where $`\mu _l=\mu _l(𝐱,t)`$ is a local chemical potential and $`\lambda `$ is a transport coefficient, it is necessary to introduce a local equilibrium assumption. Let us divide the system into cells of a small volume $`V_l=l^d`$ with $`l≪1/k`$. In each cell the diffusion mode of $`𝐣(𝐤,\omega )`$ dies away after the time $`\tau _l=l^2/(\mathrm{\Gamma }C)`$, much smaller than the relaxation time $`\tau _C`$ of the diffusion mode in the system as a whole. Then we may regard all local thermodynamic variables and functions, such as the OP $`\psi `$, the free energy $`F_l`$, the entropy $`S_l`$, the pressure $`P_l`$, the chemical potential $`\mu _l`$, etc., as constants within the cell. It is the assumption of local equilibrium that makes it possible to meaningfully define the local free energy $`F_l(𝐱,t)`$, which is the same function of the local thermodynamic variables as the equilibrium free energy is of the equilibrium thermodynamic parameters. That is why the fundamental differential relation $`\mathrm{d}F_l=-S_l\mathrm{d}T-P_l\mathrm{d}V+V_l\mu _l\mathrm{d}\psi `$ is valid locally. Replacing $`F\{\psi \}=\int _{V_l}d𝐱[A\psi ^2+C(\nabla \psi )^2]/2`$ by the local free energy $`F_l(\psi )=V_lA\psi ^2/2`$ and using the formulas of traditional equilibrium thermodynamics, we can calculate the local chemical potential $`\mu _l(𝐱,t)=(1/V_l)\mathrm{d}F_l(\psi )/\mathrm{d}\psi =A\psi (𝐱,t)`$ and find (8) with $`\lambda =\mathrm{\Gamma }C/A`$. If we are interested in self-organisation effects, we must consider the situation where $`\xi ^2k^2≲1`$ and j is defined by (6), not by (7) or (8). We see that Fick’s law in the form (8) is itself a consequence of the OP evolution equation, so it cannot be used for its derivation. Fick’s law in the form (7) describes a special case of OP transport: diffusion without dilatation. Fick’s law in the form (8) is an approximate expression for the diffusion flux, not only in the sense that it does not take into account the higher-order gradient terms of the chemical potential $`\mu _l`$, but also in the sense that it is a law of linear thermodynamics, for which the local equilibrium assumption is compulsory. The general expression for the OP flux (5) has nothing to do with the Fick’s-law form $$𝐣=-\lambda \nabla \delta F/\delta \psi $$ (9) criticized in and accepted in as “motivated phenomenologically”. The use of (8) with a formally defined “local chemical potential” $`\mu (𝐱,t)=\delta F/\delta \psi `$ has no thermodynamic foundation and is incorrect. This fully applies to the equation $$\partial \psi /\partial t=\lambda \nabla ^2\delta F/\delta \psi .$$ (10) An origin of the misunderstanding is, as has been noted in , the false adoption of Fick’s law in the form (9) as a fundamental general law which can be used as a basis for the derivation of the OP evolution equation of nonequilibrium thermodynamic systems with COP.
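The distinction between the potential dynamics (3) supplemented by the global constraint (1) and the postulated equation (10) is already visible at the linear level, where each Fourier mode relaxes independently. A minimal sketch (the parameter values are illustrative, not taken from any fit):

```python
import numpy as np

Gamma, A, C, lam = 1.0, 1.0, 1.0, 1.0   # illustrative values
k = np.linspace(0.0, 3.0, 7)            # wavenumbers of Fourier modes

# Relaxation rate of delta-psi_k under eq. (3); the k = 0 mode itself is
# excluded by the global conservation law (1).
rate_eq3 = Gamma * (A + C * k**2)

# Relaxation rate under the postulated eq. (10)
rate_eq10 = lam * k**2 * (A + C * k**2)

for kk, r3, r10 in zip(k, rate_eq3, rate_eq10):
    print(f"k = {kk:4.2f}   eq.(3): {r3:6.2f}   eq.(10): {r10:6.2f}")

# Under (3) long-wavelength modes relax at the finite rate Gamma*A (the
# dilatation mode), while under (10) the rate vanishes as k -> 0, since
# (10) conserves the order parameter mode by mode.
```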
Fick’s law (8) fails completely in nonlinear dynamics. In the nonlinear approximation, when $`f\{\psi \}=[A\psi ^2+B\psi ^4/2+C(\nabla \psi )^2+D(\mathrm{\Delta }\psi )^2+E(\psi \nabla \psi )^2]/2`$, equation (4) takes the form $$\nabla \cdot 𝐣=\mathrm{\Gamma }[A\psi +B\psi ^3-C\mathrm{\Delta }\psi +D\mathrm{\Delta }^2\psi -E\psi ^2\mathrm{\Delta }\psi ].$$ (11) We see that j now includes not only the dilatation mode and the diffusion mode, but also a cross term $`-E\psi ^2\mathrm{\Delta }\psi `$, which is responsible for the dilatation-diffusion coupling. In this case we cannot, even in principle, separate dilatation and diffusion effects. Some words about the example given by Rutenberg: it does not show any inconsistency of my theory with the local conservation law (2), but it brilliantly demonstrates the incorrectness of (10). Let us, following , take “a special initial condition” and “require that the dissipative dynamics be invariant under $`\psi →-\psi `$ and that $`F\{\psi \}`$ is minimized by $`\psi =\pm 1`$”, i.e. consider two systems that are uniform everywhere except for a small sphere of reversed phase, one of which has $`\psi =+1`$ and the other $`\psi =-1`$ in the background, respectively. For simplicity, let us omit the gradient term in Rutenberg’s free energy functional and take $`F\{\psi \}=\int d𝐱(\psi ^2-1)^2`$. Then the initial state of the system is evidently an equilibrium one in all three cases, NCOP, GCOP or LCOP; therefore no changes of $`\psi (𝐱,t)`$ in time should be observed. However, following , we get from (10) that “the spheres evolve”. This is nonsense. This circumstance can be discovered directly from (10) if we note that, by the condition $`\nabla ^2\delta F/\delta \psi =0`$, it defines a false equilibrium state which is, in the general case, different from the real equilibrium one. The latter must always minimize $`F\{\psi \}`$, i.e. must be determined by the condition $`\delta F/\delta \psi =0`$.
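The closing argument can be made concrete in a few lines: any uniform state satisfies ∇²(δF/δψ) = 0 and is therefore stationary under (10) even when δF/δψ ≠ 0, i.e. even when it does not minimize F. A sketch with the simplified functional F{ψ} = ∫dx (ψ²−1)² used above (grid and parameter values are arbitrary):

```python
import numpy as np

# dF/dpsi for the simplified functional F{psi} = int dx (psi^2 - 1)^2
dF = lambda psi: 4.0 * psi * (psi**2 - 1.0)

psi = np.full(64, 0.5)   # uniform state that does NOT minimize F
Gamma, lam, dx = 1.0, 1.0, 0.1

# Periodic 1D Laplacian by finite differences
lap = lambda f: (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

print("eq.(3):  max|d(psi)/dt| =", np.abs(-Gamma * dF(psi)).max())   # nonzero
print("eq.(10): max|d(psi)/dt| =", np.abs(lam * lap(dF(psi))).max()) # zero
```

The unconstrained (NCOP) form of (3) drives such a state toward a true minimum of F, whereas (10) leaves it frozen in the “false equilibrium” singled out by the condition ∇²(δF/δψ) = 0.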
no-problem/9902/astro-ph9902156.html
ar5iv
text
# Wide binaries in the Orion Nebula Cluster ## 1 Introduction Amongst the stars around us in the Galactic field, single systems like our own are a minority. Most stars are found in binaries, or in systems of even higher multiplicity. Recent surveys of the nearby stellar population, extending to 20 pc or so, indicate that about 60 per cent of systems in the solar neighbourhood are binary. In younger populations, such as those in nearby star-forming regions, the degree of binarity is often even higher. Surveys of the low-mass dark cloud complexes in Taurus-Auriga, Ophiuchus, Chamaeleon, Lupus, and Corona Australis reveal a fraction of binary systems roughly twice as high as that of the solar neighbourhood in the range of binary separations detected (typically 10–1500 AU). Thus single stars are rare in these clusters. An exception to this trend is the Orion Nebula Cluster (ONC) in the Orion A giant molecular cloud, which has a binary fraction roughly the same as the solar neighbourhood value, at least over the separation range of ∼25–500 AU. The ONC differs from the other regions mentioned in that it is densely clustered, and contains a number of massive O and B type stars. In particular, the core of the ONC, known as the Trapezium cluster after the eponymous four central OB stars which dominate it, contains some 1000 stars with a peak density of 2–5$`\times 10^4`$ stars $`\mathrm{pc}^{-3}`$. This compares to core densities in the Taurus-Auriga complex of only around 10 stars $`\mathrm{pc}^{-3}`$. It has been argued, based on estimates of star formation rates, that dense OB associations like the ONC are responsible for the majority of star formation in the Galaxy, and are therefore representative of the typical environment for early stellar evolution. The results on binarity in star-forming regions would seem to support this conclusion. However, the high density of the ONC has meant that the surveys carried out there so far have only been able to detect binaries separated by less than about 500 AU. It would be very useful to examine the binary fraction at wider separations, but in doing so one is increasingly likely to observe chance projected alignments rather than actual bound pairs. Bate, Clarke & McCaughrean analysed the mean surface density of companions in the centre of the ONC using data from Prosser et al. and McCaughrean et al., and found weak evidence there for a deficit of binaries with separations greater than 500 AU. But to do better, and to detect wider binaries properly, one must use the fact that they have a common proper motion across the sky. Specifically, one looks for stars whose relative velocity is less than the critical value needed to escape their mutual gravitational attraction (which can be determined by making an assumption or estimate of the stars’ masses). In this work we attempt to detect common proper motion binaries in the ONC using the data obtained and catalogued by Jones & Walker (1988; hereafter also JW), and thence determine whether the number found is consistent with the field star binary distribution over the same separation range. ## 2 Finding common proper motion pairs The JW study used a series of optical wavelength photographic plates taken over a baseline of about 20 years, and obtained proper motion measurements for over 1000 stars.
Compared to deeper and higher spatial resolution surveys carried out using the Hubble Space Telescope at optical wavelengths and ground-based adaptive optics techniques at infrared wavelengths, the JW data reveal only a fraction of the total stellar population. However, it remains the best proper motion survey to date, and covers a relatively large area on the sky, extending roughly 15 arcmin or 2 pc from the cluster centre, taken to be $`\theta ^1`$ Ori C, the most massive of the four Trapezium stars. Ideally, one would have data in *three* dimensions for position and velocity, plus the mass of each star, as it would then be a simple task to identify all the bound pairs within the cluster, and give an accurate value for the binary fraction. Instead, because the data provide information only in two dimensions, one can do no more than place an upper limit on the number of possible bound stars. The approach taken here is to look for *apparent binaries* in the JW data – pairs whose 2D positions and velocities allow them to be bound if one ignores the third dimension – and then to compare the number of these with the number of similar pairs observed in a randomly generated model cluster with known parameters. Hillenbrand found the mean stellar mass in the Trapezium to be approximately 0.8 $`\mathrm{M}_{\odot }`$; here we make the assumption that all the stars are of solar mass. The distance to the ONC is taken to be 470 pc (the value used by JW), at which 1 pc corresponds to 7.3 arcmin on the sky. We test for binarity at projected separations up to 5000 AU, beyond which the critical relative velocity (the escape velocity for a pair at that separation) is lower than the mean error in the JW velocity data ($`0.8\mathrm{km}\mathrm{s}^{-1}`$). A lower limit of 1000 AU is imposed by the observational resolution of ∼2 arcsec. Thus our study is complementary to previous work, where binaries were sought at separations roughly ten times smaller. An analysis of the JW data gives the following initial results. Eliminating stars with less than 80 per cent membership probability leaves 894 stars in the cluster catalogue, which give a total of $`894\times (894-1)/2=399171`$ pairs. Of these, 192 have apparent separations between 1000 and 5000 AU. Assuming solar masses and using the JW velocity measurements (ignoring errors for the time being), three of these 192 have relative velocities lower than the critical velocity for their separation. It is perhaps likely that if data for the radial dimension were known, these binaries would turn out to be unbound. But it is also likely that the effect of errors has been to ‘disrupt’ other apparent binaries in the cluster. These complications – in particular the projection effect – mean that we cannot directly compare our result with a prediction from the Duquennoy & Mayor (or any other) period distribution. Instead, as mentioned above, the comparison is made by constructing an ensemble of model clusters whose observable parameters match the JW data statistically, but which contain the population of binaries we want to compare with. ## 3 Simulated clusters The model clusters have a three-dimensional density distribution corresponding to an isothermal sphere with a flat core: $$\rho (r)=\{\begin{array}{cc}\rho _0\hfill & \text{if }r≤R_{\mathrm{core}}\hfill \\ \rho _0\left(\frac{R_{\mathrm{core}}}{r}\right)^2\hfill & \text{if }r≥R_{\mathrm{core}}\hfill \end{array}$$ (1) where $`R_{\mathrm{core}}=0.04\mathrm{pc}`$.
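A minimal sketch of this construction (positions drawn from the cored isothermal sphere (1), Maxwellian velocities with a 4 km s⁻¹ three-dimensional dispersion, projection onto the sky, and the critical-velocity test for apparent binaries) is given below; apart from the values quoted in the text, the numerical choices are illustrative, and the observational corrections described next are omitted.

```python
import numpy as np

G = 4.30e-3                  # gravitational constant [pc (km/s)^2 / Msun]
R_CORE, R_MAX = 0.04, 5.0    # core and outer radius [pc], values from the text
SIGMA_3D = 4.0               # three-dimensional velocity dispersion [km/s]
M_STAR = 1.0                 # all stars assumed to be of solar mass [Msun]
AU_PER_PC = 206265.0

rng = np.random.default_rng(1)

def sample_radii(n):
    """Radii from the cored isothermal sphere (1): p(r) ~ min(r, R_CORE)^2."""
    out = []
    while len(out) < n:
        r = rng.uniform(0.0, R_MAX, 4 * n)
        keep = rng.uniform(0.0, 1.0, r.size) < np.minimum(r, R_CORE)**2 / R_CORE**2
        out.extend(r[keep][: n - len(out)])
    return np.asarray(out)

n = 900
r = sample_radii(n)
mu, phi = rng.uniform(-1, 1, n), rng.uniform(0, 2 * np.pi, n)
s = np.sqrt(1 - mu**2)
pos = r[:, None] * np.column_stack([s * np.cos(phi), s * np.sin(phi), mu])  # pc
vel = rng.normal(0.0, SIGMA_3D / np.sqrt(3.0), (n, 3))                      # km/s

# Apparent binaries: project onto the sky (drop z), keep pairs with
# 1000 AU < separation < 5000 AU and 2D relative speed below escape speed.
n_apparent = 0
for i in range(n - 1):
    d = pos[i + 1:, :2] - pos[i, :2]
    sep = np.hypot(d[:, 0], d[:, 1])                   # projected sep [pc]
    dv = np.linalg.norm(vel[i + 1:, :2] - vel[i, :2], axis=1)
    v_crit = np.sqrt(2.0 * G * 2.0 * M_STAR / sep)     # escape speed [km/s]
    in_range = (sep > 1000.0 / AU_PER_PC) & (sep < 5000.0 / AU_PER_PC)
    n_apparent += np.count_nonzero(in_range & (dv < v_crit))
print("apparent binaries in this realization:", n_apparent)
```

An ensemble of such realizations, after the observational corrections described below, gives the distributions against which the three observed pairs are compared.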
A similar distribution was shown by Bate, Clarke, & McCaughrean to give a good fit to the projected density distribution of the JW data. Figure 1 shows a comparison between the simulated and observed data. A finite radius of 5 pc is chosen, within which a sufficient number of stars are distributed according to (1) to ensure that, after the corrections discussed below, roughly 900 appear projected within the same radius as the JW survey ($`2`$ pc). Each system is then assigned a velocity chosen from a Maxwellian distribution with a dispersion of 4 $`\mathrm{km}\mathrm{s}^{-1}`$, equal (in three dimensions) to that found by Jones & Walker for most of the cluster. Two different binary populations are considered: one with a binary fraction of zero (i.e. no binaries at all), and the other with a population matching that found by Duquennoy & Mayor. In the latter, the periods have a log-normal distribution peaking at 180 years, which corresponds to a typical separation for solar mass stars of 30 AU. Once a cluster has been generated with these parameters, we mimic the process of observing it, applying the same errors and restrictions to the data as were present in Jones & Walker’s survey. For example, the velocity errors in the JW catalogue were found to be log-normally distributed with a peak at 0.8 $`\mathrm{km}\mathrm{s}^{-1}`$, and we apply noise to the velocities in the model cluster using the same distribution. Similarly, the JW survey had a resolution limit of about 2 arcsec or 1000 AU, so any model stars which appear closer in projection are merged into one. We must also correct for the fact that some of the binaries present in Duquennoy & Mayor’s population would have appeared as single stars to Jones & Walker, because one companion would have been fainter than they could detect. In the outer part of the cluster, JW give their detection limit as $`I≈16`$, but in the centre, where there is bright background emission from the Orion Nebula, this limit is reduced. Indeed, within about 2 arcmin (0.3 pc) of the centre they apparently detect no stars fainter than $`I=14`$. We take the mean reddening of $`A_V=2.4`$ magnitudes measured towards the optically-visible stars by Herbig & Terndrup, which corresponds to a mean reddening in the $`I`$ band of 1.4 magnitudes. Then, in the outer regions, the JW study is sensitive to stars with intrinsic $`I`$ magnitudes as faint as 14.6, which at the assumed distance and an age of 1 Myr (see §4) corresponds to a mass of roughly 0.2 $`\mathrm{M}_{\odot }`$. The brighter detection limit within 0.3 pc corresponds to a mass of 0.6 $`\mathrm{M}_{\odot }`$. The minimum companion mass in the wide binaries included in Duquennoy & Mayor’s survey was also about 0.2 $`\mathrm{M}_{\odot }`$, this limit being determined by the magnitude range of the data they used – see Patience et al. We therefore choose to apply a correction only to the centre of our simulated clusters, removing a certain fraction of the binary components whose projected distance from $`\theta ^1`$ Ori C is less than 0.3 pc. Denoting this fraction by $`X`$, we need to evaluate the following integral: $$X=\int _{0.6}^{\mathrm{}}\left(\frac{\int _{0.2}^{0.6}s(m,\mu )d\mu }{\int _{0.2}^ms(m,\mu )d\mu }\right)f(m)dm$$ (2) where $`f(m)`$ is the stellar mass function in the cluster, and $`s(m,\mu )`$ is the companion mass function – the distribution of companions of mass $`\mu `$ to primaries of mass $`m`$.
For simplicity, we assume that the two mass functions match in the region where $`\mu ≤m`$: $$s(m,\mu )=\{\begin{array}{cc}f(\mu )\hfill & \text{if }\mu ≤m\hfill \\ 0\hfill & \text{if }\mu >m\hfill \end{array}$$ (3) (The primary is defined to be the more massive star.) Hillenbrand found the mass distribution of stars in the ONC to be reasonably fitted by a Miller-Scalo mass function – a semi log-normal, falling from a maximum at around 0.1 $`\mathrm{M}_{\odot }`$. Using this function for $`f(m)`$, we evaluate the integral numerically, and obtain a result of 0.75 for $`X`$. With these corrections applied, each model cluster is then analysed for apparent binaries, as was done with the JW data in the previous section. The results for ensembles of 2000 clusters, plotted in Figure 2, show that the clusters with zero binary fraction, when seen in projection, are far more likely to show three common proper motion binaries than the clusters with a Duquennoy & Mayor binary population. Specifically, we find that in the former case a result of three apparent binaries occurs in 427 (21 per cent) of the 2000 clusters, while for the Duquennoy & Mayor clusters which have been corrected for mass detection limits it occurs in just 13 (0.7 per cent) of them. A result of three binaries is 49 centiles away from the median of the latter distribution, which for a Gaussian curve would correspond to a distance of just over 2.3 times the standard deviation. ## 4 Discussion Over the bulk of the cluster, Jones & Walker found a roughly constant three-dimensional velocity dispersion of about 4 $`\mathrm{km}\mathrm{s}^{-1}`$, increasing only slightly towards the core. They also found some evidence for anisotropy in the velocity dispersion towards the outer regions of the cluster, increasing in the radial direction – the direction towards and away from the cluster centre – and decreasing in the tangential direction. In terms of separation, 4 $`\mathrm{km}\mathrm{s}^{-1}`$ corresponds to a hard/soft binary limit of about 30 AU, much closer than the binaries we have considered here. Our wide binaries are therefore well into the regime where one would expect disruption through encounters with other cluster members. The typical encounter time $`\tau `$ of a binary with another star in the cluster can be crudely estimated as $$\tau =\frac{1}{n\sigma S}$$ (4) where $`n`$ is the stellar density, $`\sigma `$ the velocity dispersion, and $`S`$ the binary cross section (taken simply to be $`\pi a^2`$, with $`a`$ the semi-major axis, since we ignore the effect of gravitational focusing for wide binaries). In the core, taking a stellar density of $`2\times 10^4`$ $`\mathrm{pc}^{-3}`$, we expect a binary with $`a=1000`$ AU to have $`\tau `$ less than $`10^5`$ years. Further out, $`\tau `$ increases as the density drops, surpassing $`10^7`$ years at about 1 pc from the cluster centre (assuming an isothermal sphere distribution). In her study of masses and ages in the ONC, Hillenbrand found the great majority of stars to be less than a million years old. Since this is significantly shorter than the disruption timescale for wide (1000–5000 AU) binaries in the outer regions of the cluster, their apparent absence could be taken either as evidence that none were formed in that separation range, or as an indication that all the stars in the outer cluster have at some stage passed through a denser environment, such as the core.
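The estimates from equation (4) involve only unit conversions; a small helper makes them explicit. The outer-region density used in the second call is an assumed illustrative value, since the text quotes only the scaling with density.

```python
import math

KM_S_IN_PC_PER_YR = 1.0227e-6   # 1 km/s expressed in pc/yr
AU_PER_PC = 206265.0

def encounter_time(n_pc3, sigma_kms, a_au):
    """Eq. (4): tau = 1/(n sigma S), with S = pi a^2; returns tau in years."""
    a_pc = a_au / AU_PER_PC
    S = math.pi * a_pc**2                                   # pc^2
    return 1.0 / (n_pc3 * sigma_kms * KM_S_IN_PC_PER_YR * S)

# Peak core density (upper end of the 2-5 x 10^4 pc^-3 range quoted earlier)
print(f"core : tau = {encounter_time(5e4, 4.0, 1000):.1e} yr")   # < 1e5 yr
# Assumed density of ~60 pc^-3 near 1 pc (isothermal-sphere falloff)
print(f"outer: tau = {encounter_time(60.0, 4.0, 1000):.1e} yr")  # > 1e7 yr
```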
Taking the core radius to be about 0.2 pc, as found by Hillenbrand & Hartmann (note that this is not the same core radius as was used previously in generating the model clusters: that value, 0.04 pc, simply parameterised the isothermal sphere distribution used to match Jones & Walker’s data), a star moving at 4 $`\mathrm{km}\mathrm{s}^{-1}`$ will take some $`10^5`$ years to pass through, so that even one visit would suffice to disrupt most of these wide binaries. But the ONC is a young cluster, and it is not clear that the majority of stars will have visited the core in the time available. Brandner & Köhler have examined the period distribution of binaries in two subgroups of the Scorpius-Centaurus OB association, and found significantly more wide ($`a>200`$ AU) binaries in the subgroup containing fewer massive B type stars. It is plausible that the binary period distribution in star-forming regions might be determined at the time of formation, and be influenced by conditions like the temperature and density of the cloud, as has been proposed by Durisen & Sterzik. However, it is difficult to exclude the effects of subsequent dynamical evolution, which can happen on a short timescale in these dense environments, and which can account for a deficit of wide binaries independently of the initial distribution. This effect has recently been investigated in dynamical simulations of clusters in equilibrium, cold collapse, and expansion by Kroupa, Petr & McCaughrean. An overview of dynamical processes in young clusters is provided by Bonnell & Kroupa. In conclusion, we observe that about twenty per cent of the binaries in Duquennoy & Mayor’s survey are wider than 1000 AU, so with a total binary fraction of 0.6, they comprise around fifteen per cent of G dwarf stars in the Galactic field. A similar figure can be assumed for the M dwarfs in Fischer & Marcy’s survey. If such systems do not come from regions like Orion – whether because they are never formed or because they are dynamically disrupted – then a significant number of field stars must be formed elsewhere, possibly in cooler or less dense environments. This raises the more general question of what ‘mix’ of star forming regions is required to produce the field distribution we see today, and demonstrates the utility of binary stars as a diagnostic probe for such syntheses. Further collation of binary statistics in different star forming regions, over the widest range of separations, should continue to shed light on these questions. ## 5 Acknowledgements We thank Ian Bonnell, Pavel Kroupa, and Melvyn Davies for valuable help and discussions. A. Scally is grateful for the support of a European Union Marie Curie Fellowship.
no-problem/9902/astro-ph9902290.html
ar5iv
text
# Measuring and Modelling the Redshift Evolution of Clustering: the Hubble Deep Field North ## 1 Introduction Clustering properties represent a fundamental clue about the formation and evolution of galaxies. Several large spectroscopic surveys have measured the correlation function of galaxies in the local universe, studying its dependence on morphological type or absolute magnitude (Santiago & da Costa 1990; Park et al. 1994; Loveday et al. 1995; Benoist et al. 1996; Tucker et al. 1997). Higher values of the correlation length $`r_0`$ are observed for elliptical galaxies (or galaxies with brighter absolute magnitude), while lower values are obtained for late-type galaxies (or galaxies with fainter absolute magnitude). This difference in the clustering strength suggests that the various galaxy populations are not related in a straightforward way to the distribution of the matter. To account for these observations, one has to consider as a first approach that galaxies are biased tracers of the matter distribution, $`\xi _{\mathrm{gal}}(r)=b^2(M)\xi _\mathrm{m}(r)`$ (Kaiser 1984), where $`\xi _{\mathrm{gal}}(r)`$ refers to the spatial correlation function of the galaxies, $`\xi _\mathrm{m}(r)`$ refers to the spatial correlation function of the mass and $`b(M)`$ represents the bias associated with different galaxy populations. Here $`M`$ describes the intrinsic properties of the objects (like mass, luminosity, etc.). Deep spectroscopic surveys have made it possible to reach higher redshifts and study the evolution of galaxy clustering. For example, the Canada-France Redshift Survey (CFRS; Le Fèvre et al. 1996) samples the universe up to $`z∼1`$ while the K-selected galaxy catalogue by Carlberg et al. (1997) reaches $`z∼1.5`$. From these data it has been possible to find a clear signal for evolution in the clustering strength: the correlation length is three times smaller at high redshifts ($`z∼1`$) than its local value. In addition, Carlberg et al. (1997) have found segregation effects between the red and blue samples similar to those observed locally. A common approach is to assume that the galaxy sample traces the underlying mass density fluctuation [$`b(M,z)=1`$, or at least $`b(M,z)=constant`$], and fit the clustering evolution of the mass with the parametric form $`\xi (r,z)=\xi (r,0)(1+z)^{-(3+ϵ)}`$ (Peebles 1980), where $`ϵ`$ describes the evolution of the mass distribution due to gravitational instability. Such an assumption makes it straightforward to discriminate between different cosmological models. From N-body simulations, Colín, Carlberg & Couchman (1997) found faster evolution in the Einstein-de Sitter (hereafter EdS) universe than in an open universe with matter density parameter $`\mathrm{\Omega }_{0\mathrm{m}}=0.2`$ ($`ϵ≈0.8`$ and $`ϵ≈0.2`$, respectively). Carlberg et al. (1997) obtained from their data a small value of $`ϵ`$ which would be quite difficult to reconcile with an EdS universe, while Le Fèvre et al. (1996) found a value $`0≲ϵ≲2`$, still consistent with any fashionable cosmological model. However, using the galaxy clustering evolution directly to derive the relevant properties of the mass is a questionable practice, due to the bias acting as a complicating factor. Different samples select a mixture of galaxy masses, and the effective bias, which is expected in current hierarchical galaxy formation theories to depend on redshift and mass [i.e. $`b(M,z)`$], plays a key role in the observed evolution of clustering.
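To make the ϵ parametrization concrete: for a power-law correlation function ξ(r,z) = (r/r₀(z))^(−γ), the form above implies r₀(z) = r₀(0)(1+z)^(−(3+ϵ)/γ) in proper coordinates. The sketch below shows how strongly the inferred evolution depends on ϵ; the slope γ = 1.8 and the local r₀ are illustrative assumptions, not values taken from the text.

```python
import numpy as np

gamma = 1.8      # assumed power-law slope of xi(r)
r0_local = 5.0   # illustrative local correlation length [h^-1 Mpc]

def r0_proper(z, eps):
    """Proper correlation length for xi(r,z) = xi(r,0) (1+z)**-(3+eps)."""
    return r0_local * (1.0 + z) ** (-(3.0 + eps) / gamma)

# eps = gamma - 3 corresponds to clustering fixed in comoving coordinates;
# eps = 0 to clustering fixed in proper coordinates.
for eps in (gamma - 3.0, 0.0, 0.8):
    row = ", ".join(f"z={z}: {r0_proper(z, eps):.2f}" for z in (0, 1, 3))
    print(f"eps = {eps:+.1f} -> r0 [proper h^-1 Mpc]: {row}")
```

Even modest changes in ϵ produce large differences by z ≈ 3, which is why the strong LBG clustering discussed next cannot be accommodated by this parametrization alone.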
Exciting progress in this field has been achieved with the recent discovery of a large number of galaxies at $`z∼3`$ (Lyman-Break Galaxies, hereafter LBGs) using the U-dropout technique (Steidel et al. 1996). For the first time, the high-$`z`$ universe is probed via a population of quite “normal” galaxies, in contrast with the previous surveys dominated by QSOs or radio galaxies. The LBG samples offer the opportunity to estimate in a narrow time-scale ($`2.6≲z≲3.4`$) the number densities, luminosities, colours, sizes, morphologies, star formation rates (SFR), chemical abundances, dynamics and clustering of these primordial galaxies. By using different catalogues and statistical techniques, Giavalisco et al. (1998, hereafter G98) and Adelberger et al. (1998, hereafter A98) have measured the correlation length $`r_0`$ of this population. The values they found are at least comparable to that of present-day spiral galaxies ($`r_0=2`$–$`4h^{-1}`$ Mpc when an EdS universe is assumed). Such strong clustering at $`z∼3`$ is inconsistent with clustering evolution modelled in terms of the $`ϵ`$ parameter for any value of $`ϵ`$ (G98). By comparing the correlation amplitudes with the predictions for the mass correlation, G98 and A98 obtained (for an EdS universe) a linear bias $`b≈4.5`$ and $`b≈6`$, respectively. These results suggest that the LBGs formed preferentially in massive dark matter haloes. An alternative way to extend the present information over a larger range of redshifts is to use photometric measurements of redshifts in deep multicolour surveys. This technique, based on the comparison between theoretical (and/or observed) spectra and the observed colours in various bands, makes it possible to derive a redshift estimate for galaxies which are one or two magnitudes fainter than the deepest limit for spectroscopic surveys (even with 10 m-class telescopes). An optimal combination of deep observations and the photometric redshift technique has been attained with the Hubble Deep Field (HDF) North. Photometric redshifts have been used to search for high-redshift galaxies (Lanzetta, Yahil & Fernández-Soto 1996) and to investigate the evolution of their luminosity function and SFR (Sawicki, Lin & Yee 1996; Madau et al. 1996; Gwyn & Hartwick 1996; Franceschini et al. 1998), their morphology (Abraham et al. 1996; van den Bergh et al. 1996; Fasano et al. 1998) and clustering properties (Connolly, Szalay & Brunner 1998; Miralles & Pelló 1998; Magliocchetti & Maddox 1999; Roukema et al. 1999). A critical issue is the statistical uncertainty of the photometric redshifts, which strongly depends on the number of bands following at the various redshifts the main features of a galaxy spectral energy distribution (hereafter SED), in particular the 4000 Å break and the 912 Å Lyman break. The aim of this paper is to measure the galaxy clustering evolution in the full redshift range $`0≤z≤4.5`$, using the photometric redshifts of a galaxy sample with $`I_{AB}≤28.5`$ in the HDF North (including infrared data, i.e. Fernández-Soto, Lanzetta & Yahil 1999, hereafter FLY99), and to carry out an extended comparison of the results with the theoretical predictions of different current galaxy formation scenarios based on variants of the Cold Dark Matter model. This comparison will be performed using the techniques introduced by Matarrese et al. (1997) and Moscardini et al.
(1998), which allow a detailed modelling of the evolution of galaxy clustering, accounting both for the non-linear dynamics of the dark matter distribution and for the redshift evolution of the galaxy-to-mass bias factor. Our sample probes a population fainter than the spectroscopic LBGs, and an inter-comparison of their clustering properties will be useful to address the differences in the nature of the two populations. However, the photometric redshift approach should be used with some caution when reaching such faint limits. In fact, uncertainties and systematic errors are expected to be larger than those estimated from the comparison of photometric and spectroscopic redshifts, which is typically limited to $`I_{AB}≲26`$. This problem is particularly relevant for the analysis of the angular correlation function, since in this statistic all galaxies at a given redshift contribute with the same weight. This is different, for example, from what happens when these objects are used to estimate the star formation rate history, where brighter objects, with smaller uncertainties in the redshift determination, have more weight. For these reasons we try to provide a rough estimate of the errors in the redshift estimates at faint magnitudes, by comparing the results of different photometric redshift techniques and by using Monte Carlo simulations. This in turn provides the necessary information to define optimal redshift bin sizes (i.e. minimizing the effects of the redshift uncertainties) for the clustering analysis. The plan of the paper is as follows. In Section 2, we present the photometric database and we describe the photometric redshift technique. In Section 3, we investigate the reliability of the photometric redshift estimates. In Section 4, we present the results for the angular correlation function computed in different redshift ranges. Section 5 is devoted to a comparison of these results with the theoretical predictions of different cosmological models belonging to the general class of the Cold Dark Matter scenario. Finally, discussion and conclusions are presented in Section 6. ## 2 The photometric redshift measurement ### 2.1 The photometric database As a basis for the present work, we have used the photometric catalogue produced by FLY99 for the HDF-North using the source extraction code SExtractor (Bertin & Arnouts 1996). In addition to the four optical WFPC2 bands (Williams et al. 1996), infrared observations in the J, H and Ks bands (Dickinson et al. 1999) are incorporated. A particularly valuable feature of the FLY99 catalogue is that the optical images are used to model spatial profiles that are fitted to the infrared images in order to measure optimal infrared fluxes and uncertainties. In this way, for the large majority of the objects, an estimate of the infrared flux is available down to the faintest magnitudes. This is a definite advantage for the derivation of photometric redshifts. The analysis described below has been applied to the F300W, F450W, F606W, F814W, J, H, Ks magnitudes of 1023 objects down to $`I_{AB}≤28.5`$ (here we note that the magnitude $`I_{AB}`$ refers directly to the photometric catalogue given by FLY99 and not to their best-fit $`I_{AB}`$ reported in their photometric redshift catalogue). ### 2.2 The photometric redshift technique Various authors have explored a number of different approaches to estimate redshifts of galaxies from deep broad-band photometric databases.
Empirical relations between magnitudes and/or colours and redshifts have been calibrated using spectroscopic samples (Connolly et al. 1995; Wang, Bahcall & Turner 1998). Other techniques are based on the comparison of the observed colours of galaxies with those expected from template SEDs, either observed (Lanzetta et al. 1996; FLY99) or theoretical (Giallongo et al. 1998) or a combination of the two (Sawicki, Lin & Yee 1997; hereafter SLY97). Bayesian estimation has also been used (Benítez 1998). #### 2.2.1 The synthetic spectral libraries The type of approach followed in the present work is based on the comparison of observed colours with theoretical SEDs and has been described by Giallongo et al. (1998). Here we summarize its main ingredients: 1. The SEDs are derived from the GISSEL library (Bruzual & Charlot 1999). The spectral synthesis models are governed by a number of free parameters listed in Table 1. The star formation rate for a galaxy with a given age is governed by the assumed e-folding star formation time-scale $`\tau `$. Several values of $`\tau `$ and of the galaxy ages are necessary to reproduce the different observed spectral types. We also have to assume a shape for the initial mass function (IMF). As shown by Giallongo et al. (1998), the photometric redshift estimate is not significantly changed by using different IMFs. Here we restricted our analysis to a Salpeter IMF. 2. In addition to the GISSEL parameters, we have added the internal reddening for each galaxy by applying the observed attenuation law of local starburst galaxies derived by Calzetti, Kinney & Storchi-Bergmann (1994) and Calzetti (1997). The different values of the reddening excess are listed in Table 1. We have also included the Lyman absorption produced by the intergalactic medium as a function of redshift in the range $`0≤z≤5`$, following Madau (1995). As a result we obtained a library of $`2.5\times 10^5`$ spectra, which can be used to derive the colours as a function of redshift for all the model galaxies with an age smaller than the Hubble time at the given redshift (which is cosmology-dependent; the adopted cosmological parameters are also given in Table 1). #### 2.2.2 Estimating redshifts To measure the photometric redshifts we used a standard $`\chi ^2`$ fitting procedure comparing the observed fluxes $`F_{\mathrm{obs}}`$ (and corresponding uncertainties) with the GISSEL templates $`F_{\mathrm{tem}}`$: $$\chi ^2=\sum _i\left[\frac{F_{\mathrm{obs},i}-sF_{\mathrm{tem},i}}{\sigma _i}\right]^2,$$ (1) where $`F_{\mathrm{obs},i}`$ and $`\sigma _i`$ are the fluxes observed in a given filter $`i`$ and their uncertainties, respectively; $`F_{\mathrm{tem},i}`$ are the fluxes of the template in the same filter; the sum runs over the seven filters. The template fluxes have been normalized to the observed ones by choosing the factor $`s`$ which minimizes the $`\chi ^2`$ value ($`∂\chi ^2/∂s=0`$): $$s=\sum _i\left[\frac{F_{\mathrm{obs},i}F_{\mathrm{tem},i}}{\sigma _i^2}\right]/\sum _i\left[\frac{F_{\mathrm{tem},i}^2}{\sigma _i^2}\right].$$ (2) In the GISSEL library the models provide fluxes emitted per unit mass (in $`M_{\odot }`$), so the normalization parameter $`s`$, which rescales the template fluxes to the observed ones, provides a rough estimate of the observed galaxy mass. We have limited the range of models accepted in the $`\chi ^2`$ comparison to the interval $`10^7`$–$`10^{14}M_{\odot }`$.
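The fitting step defined by equations (1) and (2) is compact enough to sketch directly; here the template grid is a random placeholder array (in the real analysis it is the redshifted, attenuated GISSEL library), so only the algebra is meaningful:

```python
import numpy as np

def best_fit(f_obs, sig, templates):
    """Minimize eq. (1) over a template grid, with s from the closed form (2).

    f_obs, sig : observed fluxes and their errors in the seven bands
    templates  : array (n_templates, 7) of model fluxes per unit mass
    Returns (best index, normalization s, minimum chi^2).
    """
    w = 1.0 / sig**2
    s = (templates * f_obs * w).sum(axis=1) / (templates**2 * w).sum(axis=1)
    chi2 = (((f_obs - s[:, None] * templates) / sig) ** 2).sum(axis=1)
    i = int(np.argmin(chi2))
    return i, s[i], chi2[i]

rng = np.random.default_rng(2)
templates = rng.random((1000, 7))       # placeholder template grid
f_true = 3.0 * templates[123]           # fake object: template 123, s = 3
sig = 0.05 * np.ones(7)
f_obs = f_true + rng.normal(0.0, sig)
print(best_fit(f_obs, sig, templates))  # recovers index 123 and s ~ 3
```

Scanning the minimum χ² as a function of redshift then yields the probability function described next.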
We derived the $`\chi ^2`$ probability function (CPF) as a function of $`z`$ using the lowest $`\chi ^2`$ value at each redshift. To quantify the redshift uncertainties we derived the interval corresponding to the standard increment $`\mathrm{\Delta }\chi ^2=1`$. At the same time the CPF is analyzed to detect the presence, if any, of secondary peaks with a multi-thresholding algorithm (typically we decompose the normalized CPF into ten levels). We notice that our estimates of the photometric redshifts change by less than 2% if we adopt a different cosmology [($`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$, $`\mathrm{\Omega }_{0\mathrm{\Lambda }}=0`$) or ($`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$, $`\mathrm{\Omega }_{0\mathrm{\Lambda }}=0.7`$)], and our mass estimates are nearly unchanged.

## 3 Comparison with previous works and simulations

### 3.1 Spectroscopic vs. photometric redshifts

In Figure 1 we show the comparison of our estimates of the photometric redshifts $`z_{\mathrm{phot}}`$ with the 106 spectroscopic redshifts $`z_{\mathrm{spec}}`$ up to $`z\simeq 5`$ listed in the FLY99 catalogue (see references therein). Our values are generally consistent with the observed spectroscopic redshifts within the estimated uncertainties over the full redshift range. The r.m.s. dispersion $`\sigma _z`$ for different redshift intervals is reported in Table 2. At redshifts lower than 1.5, two galaxies have photometric redshifts which appear clearly discrepant: galaxy # 191 (the number refers to the FLY99 numbering) with $`z_{\mathrm{phot}}\simeq 1.05`$ vs. $`z_{\mathrm{spec}}\simeq 0.37`$, and galaxy # 619 with $`z_{\mathrm{phot}}\simeq 0.95`$ vs. $`z_{\mathrm{spec}}\simeq 0.37`$. FLY99 and SLY97 also found $`z_{\mathrm{phot}}\simeq 0.88`$ for these two objects. As discussed in the next section, the techniques used in SLY97, in FLY99 and in the present work are significantly different; therefore, if the spectroscopic redshifts are correct, both objects must have a truly peculiar SED. For example, the various SEDs used in these works do not include spectra with strong emission lines (starbursts, AGN, …). In any case, judging from the observed spectra, the two spectroscopic redshifts are themselves very uncertain (see http://astro.berkeley.edu/davisgrp/HDF/). Disregarding these two objects, the dispersion at $`z<1.5`$ decreases from $`\sigma _z\simeq 0.13`$ to $`\sigma _z\simeq 0.09`$. These values are consistent with the photometric redshift estimates obtained in previous works and compiled by Hogg et al. (1998). At redshifts $`z\ge 1.5`$ the dispersion is $`\sigma _z=0.24`$ if galaxy # 687, which shows a catastrophic disagreement (it is found at low redshift also by FLY99, while there is no clear association in the SLY97 catalogue), is discarded. Direct inspection of the original frames shows that in this case the photometry can be incorrect due to the complex morphology of this object, which was assumed to be a single unit.

### 3.2 Comparison with other photometric redshifts

The relatively good agreement of the photometric redshifts with the spectroscopic ones shows the reliability of our method at bright magnitudes. Obviously, the same accuracy cannot be expected at fainter magnitudes, below the spectroscopic limit $`I_{AB}\simeq 26`$. The uncertainty in the identification of the characteristic features (4000 Å and Lyman break) in the observed SEDs necessarily increases when the errors in the photometry become larger.
In order to obtain a rough idea of the uncertainty in the domain inaccessible to spectroscopy, we have compared the results of our code with those obtained with other photometric methods. FLY99 and SLY97 used the four spectra provided by Coleman, Wu & Weedman (1980), which reproduce different star formation histories, or equivalently different galaxy types (E/S0, Sbc, Scd and Irr). The wavelength coverage of these template spectra is however too small (1400–10000 Å) to allow a direct comparison with the full range of photometric data (3000–25000 Å). To bypass this problem, both groups extrapolated the infrared SEDs by using the theoretical SEDs of the GISSEL library corresponding to the four spectral types. In the UV, SLY97 again used an extrapolation based on GISSEL, while FLY99 used the observations of Kinney et al. (1993). SLY97 enlarged the SED library with two spectra of young galaxies with constant star formation (from the GISSEL library) and interpolated between the six spectra to reduce the aliasing effect due to the sparse sampling of the SEDs. The comparison of the two approaches with spectroscopic redshifts has been carried out by the respective authors: the uncertainties are typically $`\sigma _z\simeq 0.10`$–$`0.15`$ at $`z\le 1.5`$ and reach $`\sigma _z\simeq 0.20`$–$`0.25`$ at higher redshifts. In the SLY97 analysis, only the four optical bands were used to estimate the photometric redshifts. To carry out a fair comparison, we have set up a code based on a library similar to that used by SLY97 and recomputed the photometric redshifts with the FLY99 catalogue (hereafter called the Coleman Extended model: CE). The comparison between the three methods is shown in Figure 2 (upper panels). The three redshift distributions are shown in the lower panels of the same figure. From these plots, we observe that:

1. For $`I_{AB}\le 26`$ the three methods are compatible within $`\mathrm{\Delta }z\simeq 0.5`$. A small number ($`\simeq `$ 2%) of catastrophic discrepancies ($`\mathrm{\Delta }z\ge 1`$) is observed. Excluding these objects, we find r.m.s. dispersions $`\sigma _z\simeq 0.12`$ and $`\sigma _z\simeq 0.23`$ between the GISSEL and CE models at $`z\le 1.5`$ and $`1.5<z\le 5`$, respectively. In the high-redshift range a systematic shift is observed, with $`z_{\mathrm{GIS}}-z_{\mathrm{CE}}\simeq -0.15`$. Between the GISSEL and FLY99 models, the dispersions are $`\sigma _z\simeq 0.16`$ and $`\sigma _z\simeq 0.26`$ at $`z\le 1.5`$ and $`1.5<z\le 5`$, respectively, with a systematic shift in the high-redshift range $`z_{\mathrm{GIS}}-z_{\mathrm{FLY99}}\simeq +0.18`$. These results are compatible with the uncertainties based on the spectroscopic sample. Finally, the three resulting redshift distributions are in good agreement.

2. For $`I_{AB}\le 28.5`$ the number of objects with $`\mathrm{\Delta }z\ge 1`$ increases and represents 6% of the full sample in both cases. Excluding these objects, we find dispersions $`\sigma _z\simeq 0.18`$ and $`\sigma _z\simeq 0.26`$ between the GISSEL and CE models at $`z\le 1.5`$ and $`1.5<z\le 5`$, respectively. For the high-redshift range a systematic shift is still observed, with $`z_{\mathrm{GIS}}-z_{\mathrm{CE}}\simeq -0.11`$. Comparing the GISSEL and FLY99 models, the dispersions are $`\sigma _z\simeq 0.22`$ and $`\sigma _z\simeq 0.32`$ at $`z\le 1.5`$ and $`1.5<z\le 5`$, respectively, with a larger systematic shift in the high-redshift range $`z_{\mathrm{GIS}}-z_{\mathrm{FLY99}}\simeq +0.31`$.
3. The large shift for $`z\ge 1.5`$ observed with FLY99 is due to a feature appearing in their redshift distribution, with a large number of sources between $`1.2\le z\le 2`$, not observed in the two other models (Figure 2, lower right panel). The interval $`1.2\le z\le 2`$ is critical for the photometric determination of the redshifts, due to the lack of strong features. In fact, the Lyman-alpha break is not yet observed in the $`F300W`$ band and the break at $`\simeq 4000`$ Å is located between the $`F814W`$ and $`J`$ bands. Therefore the estimates rest basically on the continuum shape of the templates. As shown by FLY99 in their Figure 6, their photometric redshifts suffer from a systematic underestimate with respect to the spectroscopic ones around $`z\simeq 2`$. This may be due to an inadequacy of the UV extrapolation used by FLY99 in reproducing the UV shape of the high-$`z`$ objects. This effect disappears at higher redshifts because of the U-dropout effect. As a check, we have added to the four templates of FLY99 a spectrum of an irregular galaxy with constant star formation rate (with higher UV flux). In this case the excess of galaxies with $`1.2\le z\le 2`$ disappears and the objects are redistributed in better agreement with the two other methods.

4. Our GISSEL model produces a smaller number of objects at $`z\ge 3.5`$ with respect to the two other approaches. The discrepant objects (found at lower redshift by the GISSEL code) are generally fitted with a significant reddening excess ($`\mathrm{E(B-V)}\gtrsim 0.3`$). Note that in general objects found at $`z\ge 3.5`$ by the GISSEL code are also at high redshift with the other techniques.

### 3.3 Comparison with the NICMOS F110W and F160W observations

Recently, deep NICMOS images have been obtained in the area corresponding to chip 4 of the WFPC2 camera in the HDF-North (Thompson et al. 1999). The observations have been carried out in the two filters $`F110W`$ and $`F160W`$ and reach $`F160W_{AB}\simeq 28.8`$ (at 3$`\sigma `$). We have matched each NICMOS detection (from the published catalogue) with the FLY99 catalogue, and we consider in our analysis the 164 objects detected in both NICMOS filters. These data provide a crucial check thanks to their depth and high spatial resolution, and also to the spectral coverage of the $`F110W`$ band. This filter fills the gap between the $`F814W`$ filter and the standard $`J`$ filter and makes it possible to detect the 4000 Å break at $`z\simeq 1.2`$. We have recomputed the photometric redshifts with our GISSEL models using the four optical bands and replacing the J, H, Ks filters with the $`F110W`$ and $`F160W`$ filters. The results are shown in Figure 3. This subsample shows a good agreement between the NICMOS and the J, H, Ks photometry and corroborates the reliability of the infrared measurements performed by FLY99. The redshift agreement in the range $`0\le z\le 5`$ is better than $`|\mathrm{\Delta }z|=0.5`$ up to magnitudes $`I_{AB}\le 28.5`$, and only 5/164 objects present discrepancies with $`|\mathrm{\Delta }z|\ge 1`$.

### 3.4 Comparison with Monte Carlo simulations

As a final check we performed Monte Carlo simulations to study the effect of photometric errors on our redshift estimates. To do so we have added to the original fluxes of the 1067 galaxies of the FLY99 catalogue a Gaussian random noise with r.m.s. equal to the flux uncertainty in each band. This operation has been repeated 20 times to produce a catalogue of approximately 21,000 simulated galaxies, for which we have re-estimated the photometric redshifts with our code.
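A minimal sketch of this Monte Carlo test, and of the bin-migration bookkeeping used below for Table 3, could look as follows (Python; the function names are ours, and the photometric-redshift step itself stands in for any of the estimators discussed above):

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(fluxes, errors, n_real=20):
    """Gaussian perturbation of the catalogue fluxes, with r.m.s.
    equal to the quoted error in each band, repeated n_real times.
    fluxes, errors : (n_gal, n_band) arrays; the result has shape
    (n_real, n_gal, n_band)."""
    noise = rng.normal(size=(n_real,) + fluxes.shape)
    return fluxes[None, :, :] + noise * errors[None, :, :]

def bin_migration(z_orig, z_sim, edges):
    """Per-bin fraction of galaxies scattered out of their original
    redshift bin ('lost') and fraction of a bin's simulated content
    coming from other bins ('contamination').  Assumes every bin is
    populated and all redshifts fall within the edges."""
    i = np.digitize(z_orig, edges) - 1
    j = np.digitize(z_sim, edges) - 1
    nbin = len(edges) - 1
    lost = np.array([np.mean(j[i == b] != b) for b in range(nbin)])
    contam = np.array([np.mean(i[j == b] != b) for b in range(nbin)])
    return lost, contam
```

Each of the 20 realizations is then fed through the same redshift code, and `z_sim` is compared with the original `z` galaxy by galaxy.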
In Figure 4, we show the distribution of the differences $`\mathrm{\Delta }z`$ between the simulated redshifts $`z_{\mathrm{sim}}`$ and the original ones $`z`$ ($`\mathrm{\Delta }z=z_{\mathrm{sim}}-z`$) for different magnitude and redshift ranges. Several comments can be made from this figure:

1. The median value of the redshift difference is very close to zero ($`\lesssim 0.05`$) for any magnitude and redshift range. The dispersion around the peak, $`\sigma _z`$, increases towards fainter magnitudes and higher redshifts. In Table 3 we report $`\sigma _z`$ for galaxies with $`I_{AB}\le 28.5`$ in different redshift ranges. These dispersions are compatible with those obtained above from the comparison between different codes.

2. Table 3 also reports the number of simulated galaxies put in a redshift bin different from their original one because of the photometric errors (Column 3). These results show that the fraction of lost original galaxies varies between 15% and 25% at any redshift for $`I_{AB}\le 28.5`$. In the redshift range $`0\le z\le 0.5`$, the discrepant objects are distributed in a high-redshift tail between $`1\le z\le 4`$. For the three bins with $`z\ge 1.5`$, the discordant objects are preferentially located in a secondary peak at low $`z`$ ($`0\le z_{\mathrm{sim}}\le 1`$).

3. The galaxies lost from an original bin are a contaminating factor for the other bins. We can estimate this contamination for each bin, which is also reported in Table 3 (Column 4), together with the contaminating fraction due only to the adjacent bins (Column 5). We can see that the contamination plays a different role at different redshifts. For $`0\le z\le 0.5`$, the contamination is quite large ($`\simeq 30\%`$) and is not dominated by the adjacent bin (which represents only one third of the total): here the main source of contamination is high-redshift galaxies put at low redshift. For the other bins the contamination is close to 20% and is essentially due to the adjacent bins.

## 4 The angular correlation function

### 4.1 Definition of the redshift bin sizes and subsamples

We have limited our analysis to the region of the HDF with the highest signal-to-noise ratio, excluding the area of the PC, the outer parts of the three WFPC chips and the inner regions corresponding to the junctions between the chips. In this area we included in our sample all galaxies brighter than $`I_{AB}=28.5`$. This procedure leads to a slight reduction of the overall number of galaxies: our final sample contains 959 of the original 1023 objects. To correctly compute the angular correlation function (ACF) the following points have to be taken into account:

1. the relatively small field of view of the HDF (its angular size corresponds to $`\simeq 1h^{-1}`$ Mpc at $`z\simeq 1`$, with $`q_0=0.5`$);

2. the accuracy of the photometric redshifts;

3. the number of objects in each redshift bin, in order to reduce the shot noise and achieve sufficient sensitivity to the clustering signal.

As a consequence, relatively large redshift bins are required: according to Figure 2 and Table 3, a minimum redshift bin size of $`\mathrm{\Delta }z=0.5`$ (corresponding to $`\mathrm{\Delta }z\simeq 2\times \sigma _z`$) is required for $`z\le 1.5`$. At higher redshifts, due to the uncertainties in the redshifts and the relatively low surface densities, a more appropriate bin size is $`\mathrm{\Delta }z=1`$. Moreover, these large bin sizes can reduce the effects of redshift distortion and, most importantly, attenuate the sample variance caused by the small area covered by the HDF North (approximately 4 arcmin$`^2`$).
A refined approach to the treatment of the sample variance has recently been proposed by Colombi, Szapudi & Szalay (1998). Finally, we note that the contamination discussed in the previous section can introduce a dilution of the clustering signal. In the worst case, assuming that the contaminating population is uncorrelated, it introduces a dilution of about $`(1-f)^2`$ (where $`f`$ corresponds to the contaminating fraction reported in Table 3). This correction factor has been used to define upper limits to the clustering estimates, which are shown in the following figures.

### 4.2 The computation of the Angular Correlation Function

The angular correlation function $`\omega (\theta )`$ measures the excess of galaxy pairs in two solid angles separated by the angle $`\theta `$ with respect to a random Poisson distribution. The angular separations used for the computation of $`\omega (\theta )`$ cover the range from 5 arcsec up to 80 arcsec, with logarithmic bins of width $`\mathrm{\Delta }\mathrm{log}\theta =0.3`$. The lower limit makes it possible to avoid a spurious signal at small scales due to the multiple deblending of resolved bright spirals and irregulars; the upper cut-off is almost half the size of the HDF and corresponds to the maximum separation at which the ACF provides a reliable signal. To derive the ACF in each redshift interval, we used the estimator defined by Landy & Szalay (1993):

$$\omega _{\mathrm{est}}(\theta )=\frac{DD(\theta )-2DR(\theta )+RR(\theta )}{RR(\theta )},$$ (3)

where DD is the number of distinct galaxy–galaxy pairs, DR is the number of galaxy–random pairs and RR refers to random–random pairs with separation between $`\theta `$ and $`\theta +\mathrm{\Delta }\theta `$. The random catalogue contains 20,000 sources covering the same area as our sample. In Figure 5 we show the measured ACF for each redshift bin. The uncertainties are the Poisson errors derived by Landy & Szalay (1993) for this estimator. Adopting a power-law form for the ACF, $`\omega (\theta )=A_\omega \theta ^{-\delta }`$, we derive the amplitude $`A_\omega `$ assuming $`\delta =\gamma -1=0.8`$. Here $`\gamma `$ is the slope of the spatial correlation function, which is also assumed to follow a power law. Formally, we could obtain both $`A_\omega `$ and $`\delta `$ as free parameters from the least-squares fitting but, due to the limited sample, we prefer to fix $`\delta `$ and leave $`A_\omega `$ as the only free parameter. The value of the slope we assume is larger than the estimates obtained by Le Fèvre et al. (1996) in the analysis of the CFRS catalogue (which covers the interval $`0\le z\le 1`$), and is smaller than the estimates obtained for LBGs by G98 at $`z\simeq 3`$. Nevertheless, the adopted value is still consistent with both within the respective uncertainties. The value of the slope could also depend on the magnitude, as discussed by Postman et al. (1998). To estimate the amplitude of the ACF, given the small size of the field, we introduce the integral constraint $`IC`$ in our fitting procedure as $`\omega _{\mathrm{est}}=\omega _{\mathrm{true}}-IC=A_\omega \times (\theta ^{-0.8}-B)`$. The quantity $`IC=A_\omega \times B`$ has been computed by a Monte Carlo method using the same geometry as the HDF and masking the excluded regions. In this computation, we adopt the same value for the slope ($`\delta =0.8`$) and derive $`B=0.044`$ (for $`\theta `$ measured in arcsec). The best fits for the ACF in each redshift bin are shown as solid lines in Figure 5.
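As an illustration of Equation (3) and of the amplitude fit with the integral constraint, consider the following sketch (Python; flat-sky coordinates in arcsec, which is adequate for a field of this size; the brute-force pair counting and the unweighted least-squares fit are simplifications we adopt here, not the authors' implementation):

```python
import numpy as np

def norm_pairs(x1, y1, x2, y2, bins, auto=False):
    """Normalized histogram of pair separations.  For an auto count
    the self-pairs fall at zero separation, below the first bin edge
    (5 arcsec), and each distinct pair is counted twice, hence the
    n*(n-1) normalization."""
    r = np.hypot(x1[:, None] - x2[None, :], y1[:, None] - y2[None, :])
    counts = np.histogram(r.ravel(), bins=bins)[0].astype(float)
    n1, n2 = len(x1), len(x2)
    return counts / (n1 * (n1 - 1) if auto else n1 * n2)

def landy_szalay(xd, yd, xr, yr, bins):
    """ACF estimator of Eq. (3)."""
    dd = norm_pairs(xd, yd, xd, yd, bins, auto=True)
    rr = norm_pairs(xr, yr, xr, yr, bins, auto=True)
    dr = norm_pairs(xd, yd, xr, yr, bins)
    return (dd - 2.0 * dr + rr) / rr

def fit_amplitude(theta, w_est, B=0.044, delta=0.8):
    """Least-squares amplitude of w_est = A * (theta**(-delta) - B),
    i.e. the power law corrected for the integral constraint
    IC = A * B (theta in arcsec; Poisson weights omitted here)."""
    model = theta**(-delta) - B
    return np.sum(w_est * model) / np.sum(model**2)

# Logarithmic bin edges from 5 to ~80 arcsec, steps of 0.3 in log(theta).
bins = 5.0 * 10**(0.3 * np.arange(5))
theta = np.sqrt(bins[:-1] * bins[1:])    # geometric bin centres
```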
The amplitudes $`A_\omega `$ obtained from the best fits are listed in Table 5, together with the adopted magnitude limits and the number of galaxies used. We also give the measured amplitude for the galaxies with $`I_{AB}\le 28.5`$ and $`0\le z\le 6`$. None of these values is corrected for the contamination factor. In Figure 6 we compare our values of $`A_\omega `$ (at 10 arcsec) with other published data (Connolly et al. 1998; G98; Magliocchetti & Maddox 1999). The values of $`A_\omega `$ take into account the adopted redshift bin sizes: at a given redshift, a larger $`\mathrm{\Delta }z`$ implies a smaller $`A_\omega `$, due to the increasing number of foreground and background galaxies relative to the unchanged number of physically correlated pairs ($`A_\omega \propto \mathrm{\Delta }z^{-1}`$; see e.g. Connolly et al. 1998). Then, if we assume that $`A_\omega `$ does not strongly evolve inside the redshift bin, we can rescale the original amplitudes as $`A_\omega \times \mathrm{\Delta }z`$, which allows a more direct comparison. From this figure we note that our results are in good agreement with those of Connolly et al. (1998) and slightly smaller than those obtained for LBGs by G98. The agreement with Magliocchetti & Maddox (1999) is worse, though still consistent within the error bars. In this figure we also show the possible effect of the contamination factor discussed in the previous section. This correction increases all the values; the corrected points should be regarded as upper limits, because of the underlying assumption that the contaminating population is uncorrelated. Moreover, we notice that our estimate in the redshift bin $`0\le z\le 0.5`$ can be affected by the lack of nearby bright galaxies in the HDF. For this reason, this point will not be considered in the following comparison between observational results and model predictions.

## 5 Comparison with theoretical models

### 5.1 The formalism

We can now predict the behaviour of the angular correlation function $`\omega (\theta )`$ for our galaxy sample in various cosmological structure formation models. The angular two-point function for a sample extended in the redshift direction over an interval $`\mathcal{Z}`$ can be written in terms of the spatial correlation function using the relativistic Limber equation (Peebles 1980). We adopt here the Limber formula as given in Matarrese et al. (1997), namely

$$\omega _{\mathrm{obs}}(\theta )=N^{-2}\int _{\mathcal{Z}}dz\left(\frac{dr}{dz}\right)^{-1}\mathcal{N}^2(z)\int _{-\infty }^{+\infty }du\,\xi _{\mathrm{gal}}[r(u,\theta ,z),z],$$ (4)

where $`r(u,\theta ,z)=\sqrt{u^2+r^2(z)\theta ^2}`$ in the small-angle approximation (e.g. Peebles 1980). The relation between the comoving radial coordinate $`r`$ and the redshift $`z`$ is given in full generality by

$$r(z)=\frac{c}{H_0\sqrt{|\mathrm{\Omega }_0|}}\mathcal{S}\left(\sqrt{|\mathrm{\Omega }_0|}\int _0^z\left[\left(1+z^{\prime }\right)^2\left(1+\mathrm{\Omega }_{0\mathrm{m}}z^{\prime }\right)-z^{\prime }\left(2+z^{\prime }\right)\mathrm{\Omega }_{0\mathrm{\Lambda }}\right]^{-1/2}dz^{\prime }\right),$$ (5)

where $`\mathrm{\Omega }_0\equiv 1-\mathrm{\Omega }_{0\mathrm{m}}-\mathrm{\Omega }_{0\mathrm{\Lambda }}`$, with $`\mathrm{\Omega }_{0\mathrm{m}}`$ and $`\mathrm{\Omega }_{0\mathrm{\Lambda }}`$ the density parameters of the non-relativistic matter and cosmological constant components, respectively. In this formula, for an open universe model ($`\mathrm{\Omega }_0>0`$) $`\mathcal{S}(x)\equiv \mathrm{sinh}(x)`$; for a closed universe ($`\mathrm{\Omega }_0<0`$) $`\mathcal{S}(x)\equiv \mathrm{sin}(x)`$; while in the EdS case ($`\mathrm{\Omega }_0=0`$) $`\mathcal{S}(x)\equiv x`$.
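Equation (5) is straightforward to evaluate numerically; a sketch (Python, with $`c/H_0=2997.9h^{-1}`$ Mpc, so that distances come out in $`h^{-1}`$ Mpc; the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def comoving_r(z, om=0.3, ol=0.7):
    """Comoving radial coordinate of Eq. (5), in h^-1 Mpc."""
    c_over_H0 = 2997.9  # c/H0 in h^-1 Mpc
    integrand = lambda zp: ((1.0 + zp)**2 * (1.0 + om * zp)
                            - zp * (2.0 + zp) * ol)**-0.5
    chi, _ = quad(integrand, 0.0, z)
    ok = 1.0 - om - ol              # Omega_0 of the text
    if abs(ok) < 1e-8:              # EdS / flat case: S(x) = x
        return c_over_H0 * chi
    s = np.sqrt(abs(ok))
    if ok > 0:                      # open: S = sinh
        return c_over_H0 / s * np.sinh(s * chi)
    return c_over_H0 / s * np.sin(s * chi)  # closed: S = sin
```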
In the Limber equation above, $`\mathcal{N}(z)`$ is the redshift distribution of the catalogue (whose integral over the entire redshift interval is $`N`$), given by $`\mathcal{N}(z)=\int _{\mathcal{M}}d\mathrm{ln}M\,\mathcal{N}(z,M)`$, with $`\mathcal{N}(z,M)=4\pi g_c(z)\varphi (z,M)\overline{n}_c(z,M)`$, where $`\overline{n}_c(z,M)`$ is the expected number of galaxies per comoving volume at redshift $`z`$ and $`\varphi (z,M)`$ is the isotropic catalogue selection function. The quantity $`\mathcal{N}(z,M)`$ represents the number of objects actually present in the catalogue with redshift in the range $`z,z+dz`$ and intrinsic properties (like mass, luminosity, …) in the range $`M,M+dM`$ ($`\mathcal{M}`$ representing the overall interval of variation of $`M`$). In the latter integral we also defined the comoving Jacobian

$$g_c(z)\equiv r^2(z)\left[1+\frac{H_0^2}{c^2}\mathrm{\Omega }_0r^2(z)\right]^{-1/2}\frac{dr}{dz}.$$ (6)

In what follows we will assume a simple model for our galaxy distribution, where galaxies are associated in a one-to-one correspondence with their hosting dark matter haloes. The advantage of this model is that haloes can be simply characterized by their mass $`M`$ and formation redshift $`z_f`$. Since haloes merge continuously into larger-mass ones, one can safely assume that their formation redshift coincides with the observation one, namely $`z_f=z`$. This simple model of galaxy clustering was named the ‘transient’ model in Matarrese et al. (1997) and Moscardini et al. (1998); Coles et al. (1998) adopted it to describe the clustering of LBGs. This model is more appropriate at high redshifts, where merging dominates, while at low redshifts it can only be a rough approximation. Recently Baugh et al. (1999) showed that this simple model under-predicts the clustering at low redshift because it does not take into account the possibility that a single halo hosts more than one galaxy. Indeed, as discussed in Moscardini et al. (1998), a ‘galaxy conserving’ bias model is likely to provide a better description of the galaxy clustering evolution at low redshift. In practice, in our modelling we select a minimum mass $`M_{\mathrm{min}}`$ for the haloes hosting our galaxies, i.e. we take $`\varphi (z,M)=\theta (M-M_{\mathrm{min}})`$, with $`\theta `$ the Heaviside step function, and we compute the corresponding value of the effective bias $`b_{\mathrm{eff}}`$ (see the equation below) at each redshift. In what follows we will consider two possibilities: i) $`M_{\mathrm{min}}`$ fixed to a sensible value (we will show results obtained by using $`10^{10}`$, $`10^{11}`$ and $`10^{12}`$ $`h^{-1}M_{\odot }`$); ii) $`M_{\mathrm{min}}=M_{\mathrm{min}}(z)`$ chosen to reproduce a relevant set of observational data. For the latter case we will adopt two different strategies: in the first we choose $`M_{\mathrm{min}}(z)`$ so that the theoretical $`\mathcal{N}(z)`$ fits the observed one in each redshift bin (e.g. Mo & Fukugita 1996; Moscardini et al. 1998; A98; Mo, Mao & White 1999); in the second we adopt at each redshift the median of the mass distribution estimated by our GISSEL model. Actually, this model gives a rough estimate of the baryonic mass; to convert it to the mass of the hosting dark matter halo we multiply by a factor of 10, which corresponds to a baryonic fraction close to that predicted by the standard theory of primordial nucleosynthesis. Varying this factor in the range from 5 to 20 produces only small changes in the following results.
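For reference, the volume factor of Equation (6) can be evaluated directly from the `comoving_r` sketch given above, using a central finite difference for $`dr/dz`$ (again an illustration under our own conventions, with $`r`$ in $`h^{-1}`$ Mpc):

```python
def g_c(z, om=0.3, ol=0.7, dz=1e-4):
    """Comoving Jacobian of Eq. (6); builds on comoving_r above."""
    r = comoving_r(z, om, ol)
    drdz = (comoving_r(z + dz, om, ol)
            - comoving_r(z - dz, om, ol)) / (2.0 * dz)
    ok = 1.0 - om - ol                           # Omega_0 of the text
    curv = (1.0 + ok * (r / 2997.9)**2)**-0.5    # (H0/c)^2 r^2 term
    return r**2 * curv * drdz
```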
As a first, though accurate, approximation the galaxy spatial two-point function can be taken to be linearly proportional to that of the mass, namely $`\xi _{\mathrm{gal}}(r,z)\simeq b_{\mathrm{eff}}^2(z)\xi _\mathrm{m}(r,z)`$, where

$$b_{\mathrm{eff}}(z)\equiv \mathcal{N}(z)^{-1}\int _{\mathcal{M}}d\mathrm{ln}M^{\prime }\,\mathcal{N}(z,M^{\prime })b(M^{\prime },z)$$ (7)

is the effective bias of our galaxy sample and $`\xi _\mathrm{m}`$ the matter covariance function. The bias parameter $`b(M,z)`$ for haloes of mass $`M`$ at redshift $`z`$ in a given cosmological model can be modelled as (Mo & White 1996)

$$b(M,z)=1+\frac{1}{\delta _c}\left(\frac{\delta _c^2}{\sigma _M^2D_+^2(z)}-1\right),$$ (8)

where $`\sigma _M^2`$ is the linear mass variance averaged over the scale $`M`$, extrapolated to the present time ($`z=0`$), $`\delta _c`$ is the critical linear overdensity for spherical collapse ($`\delta _c=\mathrm{const}=1.686`$ in the EdS case, while it depends slightly on $`z`$ for more general cosmologies) and $`D_+(z)`$ is the linear growth factor of density fluctuations (e.g. $`D_+(z)=(1+z)^{-1}`$ in the EdS case). In comparing our theoretical predictions on clustering with the data, we will always adopt for the galaxy redshift distribution $`\mathcal{N}(z)`$ the observed one. Nevertheless, consistency requires that the predicted halo redshift distribution for a given minimum halo mass always exceeds (because of the effects of the selection function) the observed galaxy one. For the calculation of the effective bias, where we need $`\mathcal{N}(z,M)`$, one might adopt the Press & Schechter (1974) recipe to compute the comoving halo number density (per unit logarithmic interval of mass); it reads

$$\overline{n}_c(z,M)=\sqrt{\frac{2}{\pi }}\frac{\overline{\varrho }_0\delta _c}{MD_+(z)\sigma _M}\left|\frac{d\mathrm{ln}\sigma _M}{d\mathrm{ln}M}\right|\mathrm{exp}\left[-\frac{\delta _c^2}{2D_+^2(z)\sigma _M^2}\right]$$ (9)

(with $`\overline{\varrho }_0`$ the mean mass density of the Universe at $`z=0`$). However, a number of authors have recently shown that the Press–Schechter formula does not provide an accurate description of the halo abundance in both the large- and small-mass tails (see e.g. the discussion in Sheth & Tormen 1999). Also, the simple Mo & White (1996) bias formula of Equation (8) has been shown not to reproduce correctly the correlation of low-mass haloes in numerical simulations. Several alternative fits have recently been proposed (Jing 1998; Porciani, Catelan & Lacey 1999; Sheth & Tormen 1999; Jing 1999). An accurate description of the abundance and clustering properties of the dark matter haloes corresponding to our galaxy population will be obtained here by adopting the relations introduced by Sheth & Tormen (1999), which were obtained by fitting the distribution of the halo population in the GIF simulations (Kauffmann et al. 1999): this technique simultaneously improves the performance of both the mass function and the bias factor. The relevant formulas, replacing Eqs. (8) and (9) above, read

$$b(M,z)=1+\frac{1}{\delta _c}\left(\frac{a\delta _c^2}{\sigma _M^2D_+^2(z)}-1\right)+\frac{2p}{\delta _c}\left(\frac{1}{1+[\sqrt{a}\delta _c/(\sigma _MD_+(z))]^{2p}}\right)$$ (10)

and

$$\overline{n}_c(z,M)=\sqrt{\frac{2aA^2}{\pi }}\frac{\overline{\varrho }_0\delta _c}{MD_+(z)\sigma _M}\left[1+\left(\frac{D_+(z)\sigma _M}{\sqrt{a}\delta _c}\right)^{2p}\right]\left|\frac{d\mathrm{ln}\sigma _M}{d\mathrm{ln}M}\right|\mathrm{exp}\left[-\frac{a\delta _c^2}{2D_+^2(z)\sigma _M^2}\right],$$ (11)

respectively.
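The Sheth–Tormen expressions translate directly into code. A sketch, written in terms of the peak height $`\nu \equiv \delta _c/(\sigma _MD_+(z))`$ and using the parameter values quoted in the next paragraph ($`\sigma _M`$ and $`D_+`$ must be supplied by a power-spectrum module, which we leave as an assumption here):

```python
import numpy as np

A_ST, a_ST, p_ST = 0.3222, 0.707, 0.3   # values quoted just below
delta_c = 1.686                          # EdS value, see text

def bias_st(nu):
    """Halo bias of Eq. (10), with nu = delta_c/(sigma_M * D_+(z));
    setting a_ST = 1, p_ST = 0 recovers Mo & White, Eq. (8)."""
    anu2 = a_ST * nu**2
    return (1.0 + (anu2 - 1.0) / delta_c
            + (2.0 * p_ST / delta_c) / (1.0 + anu2**p_ST))

def n_c(nu, M, rho0, dlnsig_dlnM):
    """Comoving halo abundance per unit ln M of Eq. (11); setting
    a_ST = 1, p_ST = 0 and A_ST = 0.5 recovers Press & Schechter,
    Eq. (9).  rho0 is the mean mass density at z = 0."""
    anu2 = a_ST * nu**2
    return (A_ST * np.sqrt(2.0 * anu2 / np.pi) * (rho0 / M)
            * (1.0 + anu2**(-p_ST)) * np.abs(dlnsig_dlnM)
            * np.exp(-anu2 / 2.0))
```

The effective bias of Equation (7) then follows by integrating `bias_st` weighted by `n_c` over all masses above $`M_{\mathrm{min}}`$.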
In these formulas $`a=0.707`$, $`p=0.3`$ and $`A\simeq 0.3222`$, while one recovers the standard (Mo & White and Press & Schechter) relations for $`a=1`$, $`p=0`$ and $`A=1/2`$. The computation of the clustering properties of any class of objects is completed by the specification of the matter covariance function $`\xi _\mathrm{m}(r,z)`$ and its redshift evolution. To this purpose we follow Matarrese et al. (1997) and Moscardini et al. (1998), who used an accurate method, based on the original ansatz of Hamilton et al. (1991), to evolve $`\xi _\mathrm{m}(r,z)`$ into the fully non-linear regime. Specifically, we use here the fitting formulas proposed by Peacock & Dodds (1996). As recently pointed out by various authors (e.g. Villumsen 1996; Moessner, Jain & Villumsen 1998), when the redshift distribution of faint galaxies is estimated by applying an apparent magnitude limit, the magnification bias due to weak gravitational lensing modifies the relation between the intrinsic galaxy spatial correlation function and the observed angular one. Modelling this effect within the present scheme would be highly desirable, but is certainly beyond the scope of our work. Nevertheless, we note that this magnification bias would generally lead to an increase of the apparent clustering of high-$`z`$ objects above that produced by the intrinsic galaxy correlations, by an amount which depends on the amplitude of the fluctuations of the underlying matter distribution.

### 5.2 Structure formation models

We will consider here a set of cosmological models belonging to the general class of Cold Dark Matter (CDM) scenarios. The linear power spectrum of these models can be represented by $`P_{\mathrm{lin}}(k,0)\propto k^nT^2(k)`$, where we use the fit for the CDM transfer function $`T(k)`$ given by Bardeen et al. (1986), with the “shape parameter” $`\mathrm{\Gamma }`$ defined as in Sugiyama (1995). To fix the amplitude of the power spectrum (generally parameterized in terms of $`\sigma _8`$, the r.m.s. fluctuation amplitude inside a sphere of $`8h^{-1}`$ Mpc), we either fit the local cluster abundance, following the Eke, Cole & Frenk (1996) analysis of the temperature distribution of X-ray clusters (Henry & Arnaud 1991), or the level of fluctuations observed by COBE (Bunn & White 1997). In particular, we consider the following models:

A version of the standard CDM (SCDM) model with $`\sigma _8=0.52`$, which reproduces the local cluster abundance but is inconsistent with COBE data.

The so-called $`\tau `$CDM model (White, Gelmini & Silk 1995), with shape parameter $`\mathrm{\Gamma }=0.21`$.

A COBE-normalized tilted model, hereafter called TCDM (Lucchin & Matarrese 1985), with $`n=0.8`$, $`\sigma _8=0.52`$ and a high (10 per cent) baryonic content (e.g. White et al. 1996; Gheller, Pantano & Moscardini 1998); the normalization of the scalar perturbations, which takes into account the production of gravitational waves predicted by inflationary theories (e.g. Lucchin, Matarrese & Mollerach 1992; Lidsey & Coles 1992), makes it possible to fit simultaneously the CMB fluctuations observed by COBE and the local cluster abundance.

The three models above are all flat and without a cosmological constant. We also consider:

A cluster-normalized open CDM model (OCDM), with matter density parameter $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ and $`\sigma _8=0.87`$, which is also consistent with COBE data.

Finally, a cluster-normalized low-density CDM model ($`\mathrm{\Lambda }`$CDM), with $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ but a flat geometry provided by the cosmological constant, and with $`\sigma _8=0.93`$, which is also consistent with COBE data.

A summary of the parameters of the cosmological models used here is given in Table 4.
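For concreteness, the Bardeen et al. (1986) transfer function and the $`\sigma _8`$ normalization entering Table 4 can be sketched as follows (Python; the unnormalized spectrum is rescaled by $`(\sigma _8^{\mathrm{target}}/\sigma _8^{\mathrm{raw}})^2`$, and the fixed log-spaced quadrature is a simplification we adopt here):

```python
import numpy as np

def t_bbks(k, gamma):
    """Bardeen et al. (1986) CDM transfer function; k in h Mpc^-1,
    gamma the Sugiyama (1995) shape parameter."""
    q = k / gamma
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4)**(-0.25))

def sigma_r(r, gamma, n=1.0):
    """r.m.s. linear fluctuation in a top-hat sphere of radius r
    (h^-1 Mpc) for the unnormalized spectrum P(k) = k^n T^2(k);
    the high-k oscillations of the window are strongly damped, so a
    dense log-spaced trapezoidal rule is sufficient here."""
    k = np.logspace(-4, 2, 2000)             # h Mpc^-1
    x = k * r
    w = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3   # top-hat window
    integrand = k**3 * k**n * t_bbks(k, gamma)**2 * w**2 / (2.0 * np.pi**2)
    return np.sqrt(np.trapz(integrand, np.log(k)))

# Example: rescale P(k) so that sigma_8 matches Table 4 (0.52 for SCDM).
amp = (0.52 / sigma_r(8.0, gamma=0.5))**2  # gamma value illustrative
```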
### 5.3 Results

In Figure 7 we compare the observed amplitude of the ACF with the predictions of the various cosmological models. For consistency with the analysis performed on the observational data in the previous section, the theoretical results have been obtained by fitting the predictions over the same range of angular separations and with the same stepping $`\mathrm{\Delta }\mathrm{log}\theta =0.3`$. A fixed slope of $`\delta =0.8`$ is also used in the following analysis. Notice that this value is only a rough estimate of the best-fit slopes: generally the resulting values are smaller ($`\delta \simeq 0.6`$) in all redshift intervals and for all the models. The discrepancy is larger for TCDM and $`\tau `$CDM ($`\delta \simeq 0.3`$–$`0.4`$) and can lead to some ambiguity in the interpretation of the results (see the discussion on the effective bias below). In each panel the solid lines show the results obtained when we use different (but constant in redshift) values of $`M_{\mathrm{min}}`$ ($`10^{10}`$, $`10^{11}`$ and $`10^{12}`$ $`h^{-1}M_{\odot }`$, from bottom to top). These results can be regarded as a reference for the minimum mass necessary to reproduce the observed clustering strength. However, the assumption that the catalogue samples the same class of objects at every redshift, i.e. objects with the same typical minimum mass, is unlikely to be realistic. In fact, we expect that at high redshifts the sample tends to select more luminous, and on average more massive, objects than at low redshifts. This is supported by the distribution of the galaxy masses inferred from the GISSEL model, shown in Figure 8. The solid line, which represents the median mass, is an increasing function of redshift: from $`z\simeq 0`$ to $`z\simeq 4`$ its value changes by at least a factor of 30. In Figure 8 we also show the masses necessary to reproduce the observed galaxy density at each redshift. In general, they are compatible with the GISSEL distribution, but the redshift dependence differs between the various cosmological models considered here. For the EdS models (left panel) the different curves are quite similar and almost constant, with typical values of $`\simeq 10^{10.5}h^{-1}M_{\odot }`$. On the contrary, for the OCDM and $`\mathrm{\Lambda }`$CDM models (shown in the right panel) $`M_{\mathrm{min}}(z)`$ is an increasing function of redshift: at $`z\simeq 0`$, $`M_{\mathrm{min}}\simeq 10^{10}h^{-1}M_{\odot }`$, while at $`z\simeq 4`$, $`M_{\mathrm{min}}\simeq 10^{11.5}h^{-1}M_{\odot }`$. The amplitudes of the ACF obtained by adopting these $`M_{\mathrm{min}}(z)`$ values are also shown in Figure 7. In general, all the models are able to reproduce the qualitative behaviour of the observed clustering amplitudes, i.e. a decrease from $`z=0`$ to $`z\simeq 1`$–$`1.5`$ and an increase at higher redshifts. The EdS models are in rough agreement with the observational results when a minimum mass of $`10^{11}h^{-1}M_{\odot }`$ is used at all redshifts. As discussed above, this mass is slightly larger than the one required to fit the observed $`\mathcal{N}(z)`$. The situation for the OCDM and $`\mathrm{\Lambda }`$CDM models is different.
The amount of clustering measured would require that the objects involved have minimum masses smaller than $`10^{10}h^{-1}M_{\odot }`$ at redshifts $`z\simeq 1`$–$`1.5`$ and of the order of $`10^{11.5}h^{-1}M_{\odot }`$ at redshifts $`1.5\lesssim z\lesssim 3`$, while at $`z\simeq 4`$, $`M_{\mathrm{min}}\simeq 10^{12}h^{-1}M_{\odot }`$ is needed to reproduce the clustering strength. The small values required at low redshifts are probably due to the kind of biasing model adopted, in an epoch when merging starts to be less important. This is particularly true for open models and flat models with a large cosmological constant, where the growth of perturbations is frozen by the rapid expansion of the universe. On the contrary, the need to explain the high amplitude of clustering at $`z\simeq 4`$ with very massive objects can be in conflict with the observed abundance of galaxies at this redshift, which requires smaller minimum masses. If the spatial correlation function can be written in the simple form $`\xi _{\mathrm{gal}}(r,z)=[r/r_0(z)]^{-\gamma }`$, it is possible to obtain the comoving correlation length $`r_0(z)`$ and the r.m.s. galaxy density fluctuation $`\sigma _8^{\mathrm{gal}}(z)`$, under the assumption that the clustering does not strongly evolve inside each redshift bin used for the amplitude measurements (see Magliocchetti & Maddox 1999 for the relevant formulas in the framework of different cosmological models). The values of the comoving $`r_0(z)`$ obtained from our data are listed in Table 5 for three different cosmologies. In Figure 9 we compare our values of $`r_0`$ as a function of $`z`$ with a compilation of values taken from the literature; the results are given under the assumption of an EdS universe. From this figure, one can notice that $`r_0`$ shows a small decline from $`z\simeq 0`$ to $`z\simeq 1`$–$`1.5`$, followed by an increase at higher $`z`$. At $`z\ge 2`$ the clustering amplitude is comparable to or higher than that observed at $`z\simeq 0.25`$. An implication of the results shown in this figure is that the evolution of galaxy clustering cannot be properly described by the standard parametric form $`\xi (r,z)=\xi (r,z=0)(1+z)^{-(3+\epsilon -\gamma )}`$, where $`\epsilon `$ models the gravitational evolution of the structures. Due to the dependence of the bias on redshift and mass, the evolution of galaxy clustering is related to the clustering of the mass in a complex way. This has already been noticed by G98 in their study of LBGs at $`z\simeq 3`$ (see also Moscardini et al. 1998 for a theoretical discussion of the problem). In the plot of the correlation length $`r_0`$ we also present the results for $`z<1`$ obtained by Le Fèvre et al. (1996) from the estimates of the projected correlation function of the CFRS. We do not show in the figure the correlation lengths obtained by Carlberg et al. (1997), who performed the same analysis using a K-selected sample, because they adopted a different cosmological model: their estimates of $`r_0`$ (with $`q_0=0.1`$) are approximately a factor of 1.5 larger than the CFRS results in a comparable magnitude and redshift range. Our results are lower than these previous estimates and show that the objects selected by our catalogue at low redshifts tend to have different clustering properties. This suggests a dependence of the clustering properties on the selection of the sample, which is even more evident at high redshift. In fact, our value of $`r_0`$ at $`z\simeq 3`$ is smaller than that obtained by A98 and G98 for their LBG catalogues at the same redshift.
To measure the clustering properties, A98 used a bright sample of 268 spectroscopically confirmed galaxies and derived $`r_0\simeq 4h^{-1}`$ Mpc; G98 used a larger sample of 871 galaxies and derived a value two times smaller ($`r_0\simeq 2h^{-1}`$ Mpc). Our value, referring to galaxies with $`I_{AB}\le 28.5`$, is $`r_0\simeq 1.7h^{-1}`$ Mpc. Notice that this value is a lower limit, since it does not take into account the effects of contamination. All these values of the correlation length are obtained by assuming an EdS universe. This decrease of $`r_0`$ suggests that at fainter magnitudes we observe less massive galaxies, which are intrinsically less correlated. This is in qualitative agreement with the prediction of the hierarchical galaxy formation scenario (e.g. Mo, Mao & White 1999). On the other hand, such an interpretation is only marginally consistent with the higher value at $`z\simeq 3`$ reported by Magliocchetti & Maddox (1999), computed with the same FLY99 catalogue. In order to better display the relation between the clustering strength and the abundance of a given class of objects (defined as haloes with mass larger than a given $`M_{\mathrm{min}}`$), in Figure 10 we show, for the different cosmological models, the relation between the predicted correlation length $`r_0`$ and the expected surface density, i.e. the number of objects per square arcminute. The quantity $`r_0`$ shown in this figure is defined as the comoving separation where the predicted spatial correlation is unity; the number density is computed by suitably integrating the modified Press–Schechter formula [Equation (11)] over the given redshift range. In the left panel, showing the results for the interval $`2.5\le z\le 3.5`$, we also plot the results obtained in this work (points at high density, with their associated upper limits due to contamination effects) and those coming from the LBG analyses of A98 and G98, which correspond to a lower abundance. All the models are able to reproduce the observed scaling of the clustering length with the abundance, and no discrimination can be made between them. Similar conclusions have been reached by Mo, Mao & White (1999). The right panel shows the same plot at $`z\simeq 4`$, where the only observational estimates come from this work and from Magliocchetti & Maddox (1999). Here the situation seems to be more interesting: the observed clustering is quite high and, in the framework of the hierarchical models, seems to require a low abundance for the relevant objects. This density starts to be in conflict with the observed one (which represents a lower limit, due to the unknown effect of the selection function) for some of the models considered here, for example the OCDM model. Thus, if our results are confirmed by future observations, the combination of the clustering strength and galaxy abundance at redshift $`z\simeq 4`$ could be a discriminating test for the cosmological parameters. An alternative way to study the clustering properties is given by the observed r.m.s. galaxy density fluctuation $`\sigma _8^{\mathrm{gal}}`$. Its redshift evolution is shown in the upper panels of Figure 11 for three cosmological models: an Einstein–de Sitter universe (left panel); an open universe with $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ and vanishing cosmological constant (central panel); a flat universe with $`\mathrm{\Omega }_{0\mathrm{m}}=0.3`$ and a cosmological constant (right panel).
In the same plot we also show the theoretical predictions computed using linear theory, with the cosmological models normalized to reproduce the local cluster abundance. Since the corresponding values of $`\sigma _8^\mathrm{m}`$ at $`z=0`$ (reported in Table 4) are smaller than unity, we can safely compute the redshift evolution by adopting linear theory. As shown in Moscardini et al. (1998), the differences between these estimates and those obtained by using the fully non-linear method described above are always smaller than 3% at $`z=0`$, and consequently negligible at higher redshifts. The comparison suggests that, while some anti-bias is present at low redshift, the high-redshift galaxies are strongly biased with respect to the dark matter. This strongly supports the theoretical expectation of biased galaxy formation, with a bias parameter evolving with $`z`$. Finally, the lower panels of Figure 11 report directly the values of the bias parameter $`b`$ as deduced from our catalogue. The results show that $`b`$ is a strongly increasing function of redshift in all cosmological models: from $`z\simeq 0`$ to $`z\simeq 4`$ the bias changes from $`b\simeq 1`$ to $`b\simeq 5`$ in the EdS models and from $`b\simeq 0.5`$ to $`b\simeq 3`$ in the OCDM and $`\mathrm{\Lambda }`$CDM models. This qualitative behaviour is what is expected in the framework of the hierarchical models of galaxy formation, as confirmed by the curves of the effective bias computed by using Equation (7) with $`M_{\mathrm{min}}=10^{10}`$, $`10^{11}`$ and $`10^{12}h^{-1}M_{\odot }`$. The observed bias is well reproduced when a minimum mass of $`10^{11}h^{-1}M_{\odot }`$ is adopted for SCDM, in agreement with the discussion of the results for the correlation amplitude $`A_\omega `$. On the contrary, the study of the bias parameter for the other two EdS models (TCDM and $`\tau `$CDM) seems to suggest a smaller value of $`M_{\mathrm{min}}\simeq 10^{10}h^{-1}M_{\odot }`$. The discrepancy is due to the fact that the computation of the correlation amplitudes has been made by adopting a fixed slope of $`\delta =0.8`$, which is not a good estimate of the best-fit value for these two models. For the OCDM and $`\mathrm{\Lambda }`$CDM models a minimum mass of $`M_{\mathrm{min}}\simeq 10^{11}h^{-1}M_{\odot }`$ gives an effective bias in agreement with the observations for $`1.5\lesssim z\lesssim 3`$, while a smaller (larger) minimum mass is required at lower (higher) redshifts. We can analyze the properties of the present-day descendants of our high-$`z`$ galaxies, assuming that the large majority of them contains only one of our high-redshift galaxies (see e.g. Baugh et al. 1998). Following Mo, Mao & White (1999), we can obtain the present bias factor of these descendants by evolving $`b(z)`$ from the formation redshift $`z`$ down to $`z=0`$, according to the ‘galaxy-conserving’ model (Matarrese et al. 1997; Moscardini et al. 1998); this gives

$$b(M,0)=1+D_+(z)\left[b(M,z)-1\right],$$ (12)

where, for $`b(M,z)`$, we can use the effective bias obtained for our galaxies by dividing the observed galaxy r.m.s. fluctuation on $`8h^{-1}`$ Mpc by that of the mass, which depends on the background cosmology. For the galaxies at $`z\simeq 3`$ we find $`b(M,0)\simeq 1.4,1.3,1.3`$ for the EdS, OCDM and $`\mathrm{\Lambda }`$CDM models, respectively. The values of $`b(M,0)`$ that we obtain can be directly compared with those of normal bright galaxies, which have $`b_0\simeq 1/\sigma _8`$, i.e. approximately 1.9 in the EdS universe and 1.1 in the OCDM and $`\mathrm{\Lambda }`$CDM models.
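In practice Equation (12) is a one-liner; the worked example below uses the EdS growth factor $`D_+(z)=(1+z)^{-1}`$ and simply inverts the numbers quoted above ($`b(M,0)\simeq 1.4`$ at $`z\simeq 3`$ corresponds to an effective bias $`b\simeq 2.6`$ for the $`z\simeq 3`$ galaxies; the function name is ours):

```python
def descendant_bias(b_z, d_plus):
    """'Galaxy-conserving' evolution of Eq. (12): present-day bias of
    the descendants of objects with bias b_z at a redshift where the
    linear growth factor (normalized to unity today) is d_plus."""
    return 1.0 + d_plus * (b_z - 1.0)

# EdS example: D_+(z) = 1/(1+z); an effective bias of ~2.6 at z = 3
# evolves to the b(M,0) ~ 1.4 quoted in the text.
print(descendant_bias(2.6, 1.0 / 4.0))   # -> 1.4
```

The comparison of these evolved values with $`b_0\simeq 1/\sigma _8`$ drives the conclusions that follow.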
Consequently, in the EdS universe the descendants of our galaxies at $`z\simeq 3`$ appear to be less clustered than the present-day bright galaxies and can be found among field galaxies. On the contrary, the values resulting for the OCDM and $`\mathrm{\Lambda }`$CDM models seem to imply that the descendants are clustered at least as strongly as the present-day bright galaxies, so they could be found among the brightest galaxies or inside clusters. This is in agreement with the findings of Mo, Mao & White (1999) for the LBGs (see also Mo & Fukugita 1996; Governato et al. 1998; Baugh et al. 1999). If we repeat the analysis using our galaxies at redshift $`z\simeq 4`$, we find $`b(M,0)\simeq 1.8,1.7,1.6`$ for the EdS, OCDM and $`\mathrm{\Lambda }`$CDM models, respectively. The ratios between the correlation amplitudes of the descendants and those of the normal bright galaxies are $`\simeq 0.9,2.3,2.2`$. This result confirms that in the EdS models the descendants have clustering properties comparable to “normal” galaxies, while in the non-EdS models they appear to be very bright and massive galaxies.

## 6 Discussion and Conclusions

In this paper we have measured the clustering properties of a faint galaxy sample in the HDF North (Fernández-Soto et al. 1999) over the redshift range $`0\le z\le 4.5`$, using photometric redshift estimates. This technique makes it possible both to isolate galaxies in relatively narrow redshift intervals, reducing the dilution of the clustering signal (in comparison with magnitude-limited samples; Villumsen, Freudling & da Costa 1997), and to measure the clustering evolution over a very large redshift interval for galaxies fainter than the spectroscopic limits. The comparison with spectroscopic measurements shows that, for galaxies brighter than $`I_{AB}\simeq 26`$, our accuracy is close to $`\sigma _z\simeq 0.1`$ for $`z\le 1.5`$ and $`\sigma _z\simeq 0.2`$ for $`z\ge 1.5`$. We have checked the reliability of our photometric redshifts in the critical interval $`1.2\le z\le 2`$ by replacing the J, H, Ks photometry of Dickinson et al. (1999) with the $`F110W`$, $`F160W`$ measurements in the HDF-N sub-area observed with NICMOS (Thompson et al. 1999). The new photometry is in general consistent with the IR photometry of Fernández-Soto et al. (1999), and our photometric redshifts are not significantly changed. In order to infer the confidence level for the galaxies beyond the spectroscopic limits ($`26\le I_{AB}\le 28.5`$), we have compared our results first with those obtained by other photometric codes and second with Monte Carlo simulations. The first comparison shows that the resulting dispersion is $`\sigma _z\simeq 0.20`$ at $`z\le 1.5`$ and increases at higher redshifts ($`\sigma _z\simeq 0.30`$), with possible systematic shifts ($`z_{\mathrm{GIS}}-z_{\mathrm{CE}}\simeq -0.15`$ and $`z_{\mathrm{GIS}}-z_{\mathrm{FLY99}}\simeq +0.3`$). The second comparison, with Monte Carlo simulations (made to determine the effects of photometric errors on the redshift estimates), shows that the resulting r.m.s. dispersion is compatible with the estimates obtained by comparing the different codes: for galaxies with $`I_{AB}\le 28.5`$ we found $`\sigma _z\simeq 0.2`$–$`0.3`$, with a maximum of $`\sigma _z=0.35`$ in the redshift range $`1.5\le z\le 2.5`$. The fraction of simulated galaxies incorrectly put in a bin different from the original one because of photometric errors is close to 20%. The dominant source of contamination in a given redshift bin is the r.m.s.
dispersion in the redshift estimates, with the exception of the bin $`0\le z\le 0.5`$, where the contamination is due to high-redshift galaxies ($`z\ge 1`$) improperly put at low $`z`$. Because of this contamination, our clustering measurements at any redshift should be considered as lower limits. Assuming that the contaminating population is uncorrelated, we have corrected our original measurements for the $`(1-f)^2`$ dilution, where $`f`$ is the contaminating fraction; the corrected values should be regarded as upper limits. As a consequence of the redshift uncertainties, we have chosen to compute the angular correlation function $`\omega (\theta )`$ in large bins, with $`\mathrm{\Delta }z=0.5`$ at $`z\le 1.5`$ and $`\mathrm{\Delta }z=1.0`$ at $`z\ge 1.5`$. The resulting $`\omega (\theta )`$ has been fitted with a standard power-law relation with fixed slope $`\delta =0.8`$. This value can be questioned because of the present lack of knowledge about the redshift evolution of the slope and its dependence on the different classes of objects. In order to avoid systematic biases in the analysis of the results, the theoretical predictions have been treated with the same basic assumptions. The behaviour of the amplitude of the angular correlation function at 10 arcsec ($`A_\omega `$) shows a decrease up to $`z\simeq 1`$–$`1.5`$, followed by a slow increase. The comoving correlation length $`r_0`$ computed from the clustering amplitudes shows a similar trend, but its value depends on the cosmological parameters. Finally, we have compared our $`\sigma _8^{\mathrm{gal}}`$ with that of the mass predicted for three cosmologies, in order to estimate the bias. In all cases we found that the bias is an increasing function of redshift, with $`b(z\simeq 0)\simeq 1`$ and $`b(z\simeq 4)\simeq 5`$ (for the EdS universe), and $`b(z\simeq 0)\simeq 0.5`$ and $`b(z\simeq 4)\simeq 3`$ (for the open and $`\mathrm{\Lambda }`$ universes). This result confirms and extends in redshift the results obtained by Adelberger et al. (1998) and Giavalisco et al. (1998) for a Lyman-break galaxy catalogue at $`z\simeq 3`$, suggesting that these high-redshift galaxies are located preferentially in the rarer and denser peaks of the underlying matter density field. We have compared our results with the theoretical predictions of a set of different cosmological models belonging to the class of CDM scenarios. With the exception of the SCDM model, all the models are consistent with both the local observations and the COBE measurements. We model the bias by assuming that the galaxies are associated in a one-to-one correspondence with their hosting dark matter haloes, defined by a minimum mass $`M_{\mathrm{min}}`$. Moreover, we assume that the haloes continuously merge into more massive ones. The values of $`M_{\mathrm{min}}(z)`$ used in these computations refer either to a fixed mass, to the median mass derived from our GISSEL model, or to the value required to reproduce the observed density of galaxies at each redshift. The comparison shows that all the galaxy formation models presented in this work can reproduce the redshift evolution of the observed bias and correlation strength. The halo masses required to match the observations depend on the adopted background cosmology. For the EdS universe, the SCDM model reproduces the observed measurements if a typical minimum mass of $`10^{11}h^{-1}M_{\odot }`$ is used, while the $`\tau `$CDM and TCDM models require a lower typical mass of $`10^{10}`$–$`10^{10.5}h^{-1}M_{\odot }`$.
For the OCDM and $`\mathrm{\Lambda }`$CDM models, the mass is a function of redshift, with $`M_{\mathrm{min}}\simeq 10^{10}h^{-1}M_{\odot }`$ at $`z\le 1.5`$, $`\simeq 10^{11.5}h^{-1}M_{\odot }`$ between $`1.5\le z\le 3`$ and $`\simeq 10^{12}h^{-1}M_{\odot }`$ at $`z\simeq 4`$. The higher masses required at high $`z`$ to reproduce the clustering strength in these models are a consequence of the smaller bias they predict at high redshifts compared with the EdS models. We notice that at very low $`z`$ both the OCDM and $`\mathrm{\Lambda }`$CDM models overpredict the clustering and, consequently, the bias. Two effects may be responsible for this failure. First, the one-to-one correspondence between haloes and galaxies may be an inappropriate description at low $`z`$, where a more complex picture might be required. Second, we have assumed that merging continues to be effective at low $`z`$ when, on the contrary, the fast expansion of the universe acts against this process, particularly in these models. As a consequence of the dependence of the bias on redshift and on the selection criteria of the samples, the behaviour of the galaxy clustering cannot provide a straightforward prediction for the behaviour of the underlying matter clustering. For this reason, the parametric form $`\xi (r,z)=\xi (r,z=0)(1+z)^{-(3+\epsilon -\gamma )}`$, where $`\epsilon `$ models the gravitational evolution of the structures, cannot correctly describe the observations for any value of $`\epsilon `$. Another prediction of the hierarchical models is the dependence of the clustering strength on the limiting magnitude of the samples. At $`z\simeq 3`$, we have compared our clustering measurements with the previous results obtained for the LBGs by Adelberger et al. (1998) and Giavalisco et al. (1998). The three samples correspond to different galaxy densities (our density in the HDF is approximately 65 times higher than that of the LBGs of Adelberger et al. 1998), and the clustering strength decreases with increasing density. This result is in excellent agreement (both qualitatively and quantitatively) with the clustering strength predicted by the hierarchical models as a function of the halo density: more abundant haloes are less clustered than less abundant ones (see also Mo et al. 1998). Moreover, this result, which is independent of the adopted cosmology, supports our assumption of a one-to-one correspondence between haloes and galaxies at high redshift (see also Baugh et al. 1999), because otherwise we would expect a higher small-scale clustering at the observed density. As also noticed by Adelberger et al. (1998) for the LBGs, such a result seems to be incompatible with a model assuming a stochastic star formation process, which would predict that observable galaxies have a wider range of masses; in that case the correlation strength would be lower than observed, because of the contribution of the most abundant (and less clustered) haloes. Moreover, it seems possible to exclude that a very large fraction (more than 50%) of massive galaxies is missed by the observations because of dust obscuration, since then the correlation strength would be incompatible with the observed density. Consequently, one of the main results of Adelberger et al. (1998), namely the existence of a strong relation between the halo mass and the absolute UV luminosity, due to the fact that more massive haloes host the brighter galaxies, seems to be supported by the present work also for galaxies ten times fainter. We have estimated the clustering properties at the present epoch of the descendants of our high-redshift galaxies.
To do so, we have assumed that each descendant hosts only one of our high-redshift galaxies. The resulting local bias for the descendants of the galaxies at $`z\simeq 3`$ is $`b(z=0)\simeq 1.4,1.3,1.3`$ for the EdS, OCDM and $`\mathrm{\Lambda }`$CDM models, respectively. Considering the galaxies at $`z\simeq 4`$, we obtain $`b(z=0)\simeq 1.8,1.7,1.6`$, respectively. These values seem to indicate that in the case of the EdS universe the descendants are field or normal bright galaxies, while for the OCDM and $`\mathrm{\Lambda }`$CDM models they can be found among the brightest and most massive galaxies (preferentially inside clusters). As already noted, at $`z\simeq 3`$ the clustering strength and the observed density of galaxies are in good agreement with the theoretical predictions for any fashionable cosmological model. At $`z\simeq 4`$, the present analysis seems to be more discriminating. Although our estimate should be regarded as tentative and needs future confirmation, we find a remarkably high correlation strength, and for some models the observed density of galaxies starts to be inconsistent with the required theoretical halo density. The relation between the clustering properties and the number density of very high redshift galaxies therefore provides an interesting way to investigate the cosmological parameters. The difference in the predicted masses (a factor of $`\simeq `$ 15 to 30 at $`z\simeq `$ 3 and 4) between the EdS and non-EdS universe models is also in principle testable in terms of measured velocity dispersions. The present results have been obtained in a relatively small field, for which the effects of cosmic variance may be important (see Steidel 1998 for a discussion). Nevertheless, they show a possibility of constraining cosmological parameters which becomes particularly exciting in view of the rapidly growing wealth of multi-wavelength photometric databases in various deep fields and of the availability of 10m-class telescopes for spectroscopic follow-up in the optical and near-infrared.

## Acknowledgments

We are grateful to H. Aussel, C. Benoist, M. Bolzonella, A. Bressan, S. Charlot, S. Colombi, S. d’Odorico, H. Mo, R. Sheth and G. Tormen for useful discussions. We thank K. Lanzetta, A. Fernández-Soto and A. Yahil for making available the photometric optical and IR catalogues of the HDF North. We also thank the anonymous referee for comments which allowed us to improve the presentation of this paper. Many thanks to P. Bristow for carefully reading the manuscript. This work was partially supported by the Italian MURST, CNR and ASI, and by the TMR programme Formation and Evolution of Galaxies set up by the European Community. S. Arnouts was supported during this work by a Marie Curie Fellowship.
no-problem/9902/hep-ph9902206.html
ar5iv
text
# Neutrino Absorption Tomography of the Earth’s Interior using Isotropic Ultra-high Energy Flux ## The nature of the Earth’s interior has traditionally been deduced by indirect physical methods. An early, noteworthy result was Cavendish’s 1798 deduction that the Earth must have a dense core, obtained by “weighing the Earth” gravitationally. Current measurements are based largely on seismic wave propagation, which is rather indirect and has substantial intrinsic uncertainties . Adding extra information, from studies ranging from the vibrational modes of the Earth as an elastic body, to temperature constraints , to the detailed composition of the core , fails to remove the ambiguities. Controversies currently exist: for example, seismic data has indicated that there may be an unsymmetrical differentially rotating element in the core , with contenders to explain this including a very large single crystal. Independent measurements of the density profile would be of considerable value. Here we discuss a novel way to take a rather direct ‘snapshot’ of the nucleon density in the Earth’s interior, by considering tomography with ultra-high energy neutrinos of cosmic origin. The principle of neutrino tomography is essentially the same as that of X-ray tomography, except for substituting penetrating neutrinos to serve in place of X-rays. By measuring neutrino absorption along different paths through a solid body, one can deduce the nucleon density in the interior of the object. The results would be utterly independent of the geophysical model, and directly measure the nucleon density. The interaction strength of neutrinos with other fundamental particles increases strongly with energy, and has been well measured at several high energy accelerators . The energy range of these direct measurements runs from below $`10^8`$ eV up to almost $`10^{14}`$ eV, the latter being achieved recently at the accelerator HERA in Hamburg. For almost all of the ultra-high energy (UHE) region of incident energy above $`10^{13}`$ eV, the cross section has not been measured directly. However, $`\sigma `$ can be calculated by exploiting relations in the Standard Model between electron-initiated reactions which have been measured, the neutrino initiated reactions desired, and the evolution of active quark and anti-quark pairs with energy . The uncertainty due to theory in these calculations is small, leading to a fundamental interaction which is sufficiently well known for the purposes of tomography. The flux of UHE neutrinos from cosmic sources cannot yet be considered established, and several pilot experiments in the TeV (1 TeV = $`10^{12}`$ eV) energy range are underway to measure it. The BAIKAL experiment operates in lake water; the AMANDA project detects neutrinos interacting in the Antarctic ice cap. A third scheme called RICE exists in the pilot stage, and uses a novel radio detection strategy which is the most effective method above 100 TeV. The optimal energy range for neutrino tomography is roughly 10-1000 TeV, a region where these existing pilot projects have some overlap. However, current detectors are small and would give a marginal or insufficient event rate for Earth tomography. With better resources, employing a few hundred optimally tuned detectors, our calculations indicate that one should be able to say something useful about the Earth’s interior. However our primary focus is a future detector on a much larger scale, with a detection volume of order 1 $`\mathrm{km}^3`$ (KM3). 
With fluxes of the order of current astrophysical estimates, a KM3 detector should be able to perform definitively. Our approach differs from previous studies of neutrino absorption tomography. The older studies concentrated on exploiting point neutrino sources assumed to have a steady time dependence . Reconstruction of the density profile is done by observing periodic occultation of the sources due to the Earth’s rotation. This method relies on the rate obtained from limited point sources, and is also subject to errors if the energy dependence of the primary spectrum is poorly determined. The geometry will not work for a detector located at the South Pole. Kuo et al. investigated Earth tomography in the context of the DUMAND II array, concluding that a time scale of ‘from years to decades’ was needed to obtain sufficient data with this method. These results are important, but we offer a complementary and more promising scheme. We consider neutrinos coming from unresolved active galactic nuclei, gamma ray bursts, secondary emissions from cosmic rays whose directions are scrambled by cosmic magnetic fields, and other possible galactic or cosmological sources of ultra high energy neutrinos. Integrating over the Universe, this diffuse flux should be nearly isotropic, with a sizable component in the optimal energy region. Such a flux has several advantages. For example, one overcomes the serious problem of binning events by arrival times to incorporate the effects of the Earth’s rotation. There is far less ambiguity due to any possible time dependent fluctuations in the flux of an energetic point source. Most attractively, the entire Earth density profile can be obtained unambiguously, by a simple inversion of a well-measured observable in the data, namely the angular distribution. The overall normalization of the flux is not needed, as we arrange the calculation so that it drops out of the determination of the density profile. The energy dependence of the primary neutrino flux is also determined by the procedure. This is unexpected and rather miraculous, but it occurs because the interaction cross section and detector efficiencies are energy dependent. Given initial data on the angular distribution, and supposing poorly determined initial data, or guesses, on the energy spectrum (always a problem in cosmic ray physics), our procedure iterates the energy spectrum to obtain consistency with the angular distribution and density. Put another way, a faulty energy spectrum would be inconsistent, and by iteration the angular distribution and measured energy flux after attenuation determine the energy spectrum. This is quite interesting and may serve as a good method to measure the incident energy spectrum. If one assumes that the density profile of the Earth is already well known, then the angular distribution strongly overdetermines the problem, and one might even deduce the energy dependence of the cross section, contributing a powerful check on fundamental physics. To illustrate these remarks, consider the angular distribution of neutrinos passing through the Earth (Fig. 1). With an isotropic primary flux, the angular dependence comes from differing amounts of matter traversed en route to the detector. 
The effect is expressed by an evolution equation for the flux, $`\mathrm{\Phi }`$, as a function of distance $`z`$ traversed: $$\frac{d\mathrm{ln}\mathrm{\Phi }(E,z)}{dz}=-n(z)\sigma _{\mathrm{eff}}(E).$$ (1) Here $`\sigma _{\mathrm{eff}}`$ is a known ‘effective’ cross section which incorporates charged current cross sections, neutral current cross sections, and neutral current regeneration for neutrinos of energy $`E`$. We measure the polar angle $`\theta `$ with respect to the nadir (Fig. 1). We assume spherical symmetry, so that the density $`n=n(r)`$ is a positive definite function of distance $`r`$ from the Earth’s center. Let us assume momentarily that the measurement is dominated by a sufficiently narrow range of neutrino energies, so that the variation of (1) with energy can be neglected. By taking the logarithm of the flux, the overall flux normalization is an additive constant that drops out of the angular distribution. Solving (1) for the surviving neutrino flux that can be measured at a detector site located near the Earth’s surface, one obtains, $$\mathrm{\Phi }_{\mathrm{surv}}(E,\theta )=\mathrm{\Phi }_\nu (E)e^{-\sigma _{\mathrm{eff}}(E)Rn(R)f(\theta )},$$ (2) where $`\mathrm{\Phi }_\nu `$ is the incident neutrino flux, $`R`$ is the Earth’s radius and the function $`f(\theta )`$ is proportional to the integrated nucleon density along the chord $`0<z<2R\mathrm{cos}(\theta )`$. It is convenient to measure $`r`$ in units of the Earth’s radius $`R`$. Then, $$f(\theta )=\frac{1}{n(R)}\int _{\mathrm{sin}^2(\theta )}^1\frac{n(r)d(r^2)}{\sqrt{r^2-\mathrm{sin}^2(\theta )}}.$$ (3) Given data for the angular dependence over the region $`0<\theta <\frac{\pi }{2}`$, this particular transform can be inverted; the result is: $$n(r)=-\frac{n(R)}{\pi }\int _{\mathrm{sin}^{-1}(r)}^{\frac{\pi }{2}}\frac{df(\theta )}{d\theta }\frac{d\theta }{\sqrt{\mathrm{sin}^2(\theta )-r^2}}.$$ (4) This analytic result shows that the angular distribution is sufficient to give the density profile. The result is simpler than might be expected, because the particular spherical geometry of the problem has been exploited. Deviations from spherical symmetry are of interest, so that relaxing our assumptions can be contemplated, but our goal here is to prove the practicality of the simplest scheme when confronting realistic difficulties. The primary difficulty appears to be statistical fluctuations from small expected data sets, but we will find that these appear to be under control on the scale of KM3. We now turn to the question of energy dependence that was sidestepped above. The simple procedure above can be applied within a small energy bin. One might then imagine requiring that in each angular bin, we also bin the data in energy. With limited statistics and limited energy resolution of the detector, we find that such a method is unlikely to be practical. Yet integrating over energy does not commute with the angular inversion, so a priori the energy integrated angular distribution does not appear to be adequate. To get around these problems, we created an alternative procedure in which the Earth’s density profile and the incident flux are iteratively improved. We start by assuming a trial function for the density profile, which will yield our initial guess for the attenuation factor $`f(\theta )`$. This can be used, along with data on the energy dependence of the observed flux integrated over nadir angle, to obtain our first guess for the incident energy spectrum. 
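The inversion (4) is straightforward to implement numerically. The sketch below is a minimal illustration with an assumed two-shell toy density (the numbers are illustrative, not the PREM profile used later in the text); the substitutions $`r^2=\mathrm{sin}^2(\theta )+w^2`$ in (3) and $`u=r^2+w^2`$ in (4) remove the integrable endpoint singularities before quadrature.

```python
import numpy as np

# Toy two-shell "Earth": scaled radius r in [0, 1], density in units of n(R).
# Illustrative numbers only; a realistic run would use the PREM profile.
def n_true(r):
    return np.where(r < 0.55, 2.4, 1.0)

# Eq. (3) after substituting r^2 = sin^2(theta) + w^2:
# f(theta) = 2 * int_0^{cos theta} n(sqrt(sin^2 theta + w^2)) dw
def f_of_theta(theta, nfun, nw=2000):
    w = np.linspace(0.0, np.cos(theta), nw)
    return 2.0 * np.trapz(nfun(np.sqrt(np.sin(theta)**2 + w**2)), w)

# Eq. (4) with u = sin^2(theta), g(u) = f(theta(u)) and then u = r^2 + w^2:
# n(r)/n(R) = -(2/pi) * int_0^{sqrt(1-r^2)} g'(r^2 + w^2) dw
def invert(nfun, nu=800, nw=800):
    u = np.linspace(0.0, 1.0, nu)
    g = np.array([f_of_theta(np.arcsin(np.sqrt(ui)), nfun) for ui in u])
    dg = np.gradient(g, u)                      # finite-difference g'(u)
    def n_rec(r):
        w = np.linspace(0.0, np.sqrt(1.0 - r**2), nw)
        return -(2.0 / np.pi) * np.trapz(np.interp(r**2 + w**2, u, dg), w)
    return n_rec

n_rec = invert(n_true)
for r in (0.2, 0.4, 0.7, 0.9):
    print(f"r = {r:.1f}   true = {float(n_true(r)):.2f}   recovered = {n_rec(r):.2f}")
```

The density step is recovered up to a smearing of a few grid points introduced by finite differencing of $`f(\theta )`$, the same qualitative limitation that affects sharp features in the full analysis.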
The attenuation function $`f(\theta )`$ can then be further improved by using the calculated value of the incident flux. Again this is compared with the angular distribution of the observed flux integrated over energy. At no stage is it necessary to have the joint distribution in energy and in angle. This procedure is repeated until convergence is obtained for both the incident flux and $`f(\theta )`$. The same procedure has then converged to the Earth’s density profile. In order to obtain convergence and a unique solution, it is necessary to fix one boundary condition, which is taken to be the value of the density of the Earth near the surface. Since the surface density is known with reasonable accuracy, the boundary condition should not introduce any bias. In fact the result is overdetermined, because two moments of the density are already known: these are the total mass of the Earth (just as Cavendish used), and the Earth’s moment of inertia. But the entire procedure can be carried out without making use of these moments, so in practice the density is over-determined. This is important, because one can expect only a crude measurement of the energy distribution from a realistic cosmic ray detector. We used standard Monte Carlo methods to study the feasibility of the iterated inversion technique. The simulations used the Preliminary Reference Earth Model (PREM) for the Earth’s density profile. The range of neutrino energies was restricted to lie between $`10`$ and $`10^4`$ TeV. Our numerical results show that the optimal lower limit in energy is between 10 and 50 TeV, since below this value the Earth is essentially transparent. The optimal upper limit is between $`10^3`$ and $`10^4`$ TeV, beyond which the number of events is expected to be too small to be of much use for tomography. We employed a generic form of the diffuse AGN neutrino flux, $`\mathrm{\Phi }_\nu (E)=\mathrm{\Phi }_oE^{-2}`$ for $`10\mathrm{TeV}<E<10^4\mathrm{TeV}`$. This form is within the range of current theoretical predictions. For example, in the energy range of interest, the AGN model of Stecker, Done, Salamon and Sommers (SDSS) gives a flux $`E^{-1}`$. A model due to Szabo and Protheroe (SP) , while not now thought to be correctly normalized, has the neutrino spectrum falling like $`E^{-2}`$. We take an agnostic position on the flux, and address uncertainties by simply renormalizing results at the end. We simulated data for a generic UHE neutrino telescope, for the purposes of study defined in two ways: in one extreme, for simplicity, we took the detector response to be independent of neutrino energy and angle of incidence. The other extreme is the case of a detector with the energy and angular response calculated for a radio array . The radio method has a response strongly increasing with energy, making the flat response to the higher-energy part of the spectrum the more conservative choice. Meanwhile we do not have sufficient information on the angular response of optical detection. Since we believe that a realistic detection scheme would combine the strengths of both optical and radio detection, the two cases should give a reasonable range of results without excluding either or getting bogged down in details: in fact, the results were so similar that we simply report the simpler (isotropic and flat) response. Of course, detector response for a particular experimental situation, as well as realistic energy resolution and pointing accuracy, can always be incorporated. 
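A stripped-down version of this iteration can be written in a few lines. Everything below (the toy cross section, attenuation profile, incident spectrum and grids) is an illustrative assumption; note also the exact degeneracy $`f(\theta )\to f(\theta )+\delta `$, $`\mathrm{\Phi }(E)\to \mathrm{\Phi }(E)e^{\sigma (E)\delta }`$ of the model, which is removed here by pinning $`f`$ at the horizon, playing the role of the surface-density boundary condition discussed above.

```python
import numpy as np

E = np.logspace(1, 4, 20)                       # energy grid, TeV
th = np.linspace(0.05, np.pi / 2, 12)           # nadir-angle grid
sigma = (E / 100.0)**0.4                        # toy effective cross section
f_true = np.cos(th) * (1.0 + 2.0 * np.exp(-(th / 0.5)**2))  # toy attenuation
phi_true = E**-2.0                              # toy incident spectrum

model = phi_true[:, None] * np.exp(-sigma[:, None] * f_true[None, :])
D = model.sum(axis=1)       # data: observed flux integrated over nadir angle
A = model.sum(axis=0)       # data: observed flux integrated over energy

f = np.cos(th)              # trial profile: uniform-density Earth
for iteration in range(20):
    # Step 1: incident spectrum consistent with D(E), given the current f.
    phi = D / np.exp(-sigma[:, None] * f[None, :]).sum(axis=1)
    # Step 2: attenuation consistent with A(theta), given the current phi;
    # the predicted count decreases monotonically in f, so bisect per bin.
    for j in range(th.size):
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if (phi * np.exp(-sigma * mid)).sum() > A[j]:
                lo = mid
            else:
                hi = mid
        f[j] = 0.5 * (lo + hi)
    f -= f[-1]              # boundary condition: zero chord at the horizon

print("max |f - f_true|      :", np.max(np.abs(f - f_true)))
print("max |phi/phi_true - 1|:", np.max(np.abs(phi / phi_true - 1.0)))
```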
Our simulation generated $`N`$ events distributed in nadir angle, $`0^{\circ }<\theta <90^{\circ }`$, and energy, 10 TeV $`<E_\nu <10^4`$ TeV, according to Eqn. (2). We normalized our calculation so that the total number of events observed per antenna between 100 TeV and $`10^4`$ TeV is about 100 per year. This is about one fourth of the event rate calculated in , which takes into account subsequent changes in estimates of the incident flux. While we thus normalize our rates to radio detection, any combination of methods can be rescaled in an obvious way. The Monte Carlo data were divided into 20 energy $`E`$ bins, with widths increasing like $`E^2`$. The data were also divided into 10 angular bins chosen to be equally spaced in radius from the center of the Earth. These bins were chosen to obtain the density at roughly uniform intervals. We made no attempt to discover an optimal binning procedure. However the solid angle subtended by the central bin, namely the one containing the center of the Earth, is very small, and hence it puts severe requirements on the total number of events needed to say something useful about the density in this region. This also happens to be one of the most interesting regions for geophysics. Depending on the total number of events and the optimization scheme, one may wish to adjust this bin to get a better measurement. We found that for a wide range of trial density profiles convergence of the iterative procedure was obtained within about 5 iterations. We plot in Fig. 2 an example of successive approximations to the attenuation function $`f(\theta )`$, with the solid line showing the final result. Fig. 3 shows the successive approximations for the incident flux in energy. The plots show that rapid convergence occurs with the scheme chosen. The figures show explicitly that a considerable error in the incident energy spectrum can be tolerated, with the final energy spectrum converging to the actual spectrum. The final extracted density profile, along with the PREM density profile, is shown in Fig. 4. The step between the core and lower mantle is very well resolved, while the inner core is not. Since the density converged to the proper value, all of its moments also converged, showing that the total mass and moment of inertia were consistent, or: neutrinos can “weigh the Earth”. In practice the known mass density moments provide an excellent handle on the overall consistency of the final result. Statistical error bars on the derived density are acceptable (Fig. 4), although one would probably want to optimize the central core region further. The results were obtained by assuming that a detector with 1000 antennas is deployed for two years. The number is ambitious but within the scope of planning for future arrays. Because the errors are statistical, the same result would be obtained for an incident flux normalized 5 times lower in 10 years of running. Put yet another way, even a modest array of 200 antennas might say something useful on the time scale of 10 years. In Fig. 5 we also show the final result assuming the optimistic flux estimates, but only 100 antennas for a running time of 2 years. In this case we divide the Earth’s radius into only five bins instead of ten, in order to get a reasonable number of events in the central bin. The step between the core and lower mantle remains well resolved. We hasten to add that there can be many uncertainties in realistic experimental design, which only further study can address. 
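For the event generation step itself, a minimal accept/reject sampler in the spirit of Eqn. (2) might look as follows (the $`E^{-2}`$ spectrum, toy cross section and uniform-density attenuation repeat the assumptions of the sketches above; the histogram bins in $`\mathrm{sin}\theta `$ mimic the equal-radius angular binning, and the near-empty first bin shows directly how little solid angle the central bin subtends):

```python
import numpy as np

def sample_events(n, seed=1):
    # Accept/reject sampling of (E, theta) with weight proportional to
    # Phi(E) * exp(-sigma(E) f(theta)) * sin(theta); proposals are
    # log-uniform in E over 10..10^4 TeV and uniform in nadir angle.
    rng = np.random.default_rng(seed)
    E_acc, th_acc = [], []
    while len(E_acc) < n:
        E = 10.0**rng.uniform(1.0, 4.0, 8192)
        th = rng.uniform(0.0, np.pi / 2, 8192)
        # the log-uniform proposal absorbs one power of E: E * Phi(E) = 1/E
        w = (1.0 / E) * np.sin(th) * np.exp(-(E / 100.0)**0.4 * np.cos(th))
        keep = rng.uniform(0.0, 0.1, 8192) < w    # w <= 0.1 on this domain
        E_acc.extend(E[keep])
        th_acc.extend(th[keep])
    return np.array(E_acc[:n]), np.array(th_acc[:n])

E, th = sample_events(20000)
counts, _ = np.histogram(np.sin(th), bins=np.linspace(0.0, 1.0, 11))
print("events per equal-radius angular bin, centre outwards:", counts)
```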
During a period of a few years to a decade, a $`KM3`$ neutrino telescope will be fulfilling a primary mission as a fundamentally new kind of instrument for observing the cosmos. We believe that with the same kind of detector, neutrino tomography could also provide important and independent information about the Earth’s interior. Acknowledgements: We thank Geoff Abers, John Doveton, Doug McKay and R. P. Singh for useful comments. Supported by DOE grant number DE-FGO2-98ER41079, the KU General Research Fund, NSF-K\*STAR Program under the Kansas Institute for Theoretical and Computational Science and DAE grant number DAE/PHY/96152.
no-problem/9902/cond-mat9902070.html
ar5iv
text
# Wetting at Non-Planar Substrates: Unbending & Unbinding ## Abstract We consider fluid wetting on a corrugated substrate using effective interfacial Hamiltonian theory and show that breaking the translational invariance along the wall can induce an unbending phase transition in addition to unbinding. Both first order and second order unbending transitions can occur at and out of coexistence. Results for systems with short-ranged and long-ranged forces establish that the unbending critical point is characterised by hyperuniversal scaling behaviour. We show that, at bulk coexistence, the adsorption at the unbending critical point is a universal multiple of the adsorption for the corresponding planar system. Recently, the subject of fluid adsorption and wetting on structured (non-planar) and heterogeneous substrates has begun to receive considerable attention . This work is not only a natural extension of studies of wetting on idealised planar surfaces but it is also of more fundamental interest, since the broken translational invariance along the wall necessarily leads to competition between surface tension and direct molecular effects. Thus, we may anticipate that new interesting phenomena (phase transitions, scaling, universality) will emerge which do not occur for planar systems. In this letter, we report results of extensive numerical calculations, supported by approximate non-perturbative analysis and scaling theory, of wetting on a periodic (corrugated) substrate. These reveal that novel first and second order transitions can take place, directly related to the inhomogeneity along the wall. For long-ranged forces, the phase transition, referred to as unbending, only occurs for sufficiently large wall corrugations (beyond the range of previously employed perturbative methods ), dependent on the wave vector of the corrugation. In contrast, for short-ranged forces, the critical threshold is wave vector independent and rather weak. There are three aspects of our work that we emphasize in particular. Firstly, the unbending transition precedes a wetting (unbinding) transition occurring at a higher temperature (and at bulk two-phase coexistence). For second order unbinding transitions, on which we concentrate, the location of the wetting transition is unaffected by wall corrugation. Secondly, the location of the unbending line and critical point, as well as the interface structure, only depend on the amplitude and period of the wall corrugation function through hyperuniversal scaling variables analogous to those encountered in the theory of finite-size effects at bulk critical points . As a consequence, the unbending critical point is associated with non-trivial universal amplitude ratios which relate the adsorptions in the non-planar and corresponding planar system. Finally, unbending is directly related to nonlinear bifurcation phenomena occurring in dynamical systems, a subject whose mathematical aspects continue to attract attention . To begin, we describe the results of a specific mean-field (MF) model of unbending and unbinding which also serves to illustrate important scaling properties which we shall later put in a more general context. For simplicity, we assume that the wall has a corrugated sinusoidal shape $`\psi (x)=a\mathrm{cos}(qx)`$, which breaks the translational invariance in one direction only. 
Following the work of earlier authors , we take as our starting point the (reduced) standard effective interfacial model $$H[\ell ]=\frac{1}{L}\int _L𝑑x\left[\frac{\mathrm{\Sigma }}{2}\left(\frac{\partial \ell }{\partial x}\right)^2+W(\ell -\psi )\right]$$ (1) restricted to the space of periodic solutions which is sufficient for our description of equilibrium phenomena. Here, $`\mathrm{\Sigma }`$ is the surface stiffness, $`W`$ is the binding potential and $`\ell (x)`$ is the collective coordinate measuring the height of the interface relative to the mean position of the wall whose period $`L`$ satisfies $`q=2\pi /L`$. We also restrict ourselves to a MF description in which the equilibrium profiles $`\ell _\nu `$ are obtained by minimising Eq. (1). The importance of fluctuation effects will be discussed later in the context of scaling theory . We start by considering systems with short-ranged forces at bulk two-phase coexistence and write $$W(\ell )=-\mathrm{\Delta }Te^{-\ell }+\beta e^{-2\ell };$$ (2) so that both the film thickness $`\ell `$ and corrugation amplitude $`a`$ are measured in units of the bulk correlation length. With this potential (and positive $`\beta `$) the planar system undergoes a second-order unbinding transition at $`\mathrm{\Delta }T\equiv T_w-T=0`$ such that the MF interface thickness and the transverse correlation length diverge at that critical point as $`\ell _\pi \sim -\mathrm{log}(\mathrm{\Delta }T)`$ and $`\xi _{\parallel }\sim \mathrm{\Delta }T^{-1}`$, corresponding to standard wetting critical exponents $`\beta _𝖲=0(\mathrm{log})`$ and $`\nu _{\parallel }=1`$ respectively . For $`a\ne 0`$, the MF configuration(s) are the solutions of the Euler-Lagrange equation $$\mathrm{\Sigma }\ell _\nu ^{\prime \prime }(x)=W^{\prime }(\ell _\nu -\psi ),$$ (3) solved subject to periodic boundary conditions and where the prime denotes differentiation w.r.t. the argument. This deceptively simple looking nonlinear equation can show multiple solutions and bifurcations corresponding to different possible phases for the equilibrium interface configuration. Whilst a full analytic solution is not possible, it is straightforward to show that the solutions exhibit an important scaling property which allows us to collapse results obtained for different periods $`L=2\pi /q`$ onto a universal surface phase diagram. To see this, we introduce the new variables $`\eta \equiv \ell -\psi -\ell _\pi `$ and $`t\equiv qx`$ so that (3) becomes $$\ddot{\eta }=\mathrm{\Delta }\stackrel{~}{T}^{\,2}(e^{-\eta }-e^{-2\eta })+a\mathrm{cos}t,$$ (4) which is the equation of a forced inverted nonlinear oscillator. Here the overdot corresponds to differentiation w.r.t. $`t`$ whilst the temperature, stiffness and substrate periodicity are combined in the rescaled temperature variable $`\mathrm{\Delta }\stackrel{~}{T}\equiv \mathrm{\Delta }T/q\sqrt{2\beta \mathrm{\Sigma }}`$. Consequently, any new phase transition induced by the corrugation amplitude $`a`$ is not affected by the value of the wall periodicity $`q`$ which only acts to rescale the temperature deviation from $`T_w`$. In Fig. 1, we show plots of the mean interface thickness $`\ell _0`$, defined as the average $`\langle \ell (x)\rangle _x`$, as a function of $`\mathrm{\Delta }\stackrel{~}{T}`$ for various $`a`$, obtained by numerically minimising Eq. (1). 
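Such curves can be reproduced by minimising the discretised functional (1) directly on one period. The sketch below does this for the short-ranged potential (2); all parameter values are illustrative (units with $`\mathrm{\Sigma }=\beta =1`$), chosen so that $`\mathrm{\Delta }\stackrel{~}{T}\simeq 3`$ and $`a=3.5`$ lie near the unbending region, where flat and bent starting profiles may relax to different local minima.

```python
import numpy as np
from scipy.optimize import minimize

Sigma, beta = 1.0, 1.0
q, a = 0.1, 3.5              # corrugation wavevector and amplitude
dT = 0.42                    # Delta T = T_w - T; here dT_tilde ~ 3.0
N = 512
x = np.linspace(0.0, 2.0 * np.pi / q, N, endpoint=False)
dx = x[1] - x[0]
psi = a * np.cos(q * x)

def H(l):                    # discretised Eq. (1) with potential (2)
    dl = (np.roll(l, -1) - l) / dx
    u = l - psi
    return np.mean(0.5 * Sigma * dl**2 - dT * np.exp(-u) + beta * np.exp(-2.0 * u))

def gradH(l):                # analytic gradient of the discretised functional
    lap = (np.roll(l, -1) - 2.0 * l + np.roll(l, 1)) / dx**2
    u = l - psi
    return (-Sigma * lap + dT * np.exp(-u) - 2.0 * beta * np.exp(-2.0 * u)) / N

l_pi = np.log(2.0 * beta / dT)    # planar thickness minimising W alone
for start, label in [(l_pi + np.zeros(N), "flat start"),
                     (l_pi + psi, "bent start")]:
    res = minimize(H, start, jac=gradH, method="L-BFGS-B")
    print(f"{label}: H = {res.fun:.5f}, mean thickness = {res.x.mean():.3f}")
```

Scanning $`\mathrm{\Delta }T`$ and recording which relaxed branch has the lower energy traces out the first-order line of the surface phase diagram.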
It can be seen that, whilst the location of the unbinding transition is unaffected by the wall corrugation, a new phase transition occurs for corrugation amplitudes $`a>a_𝖼\simeq 2.914`$ and $`\mathrm{\Delta }\stackrel{~}{T}>\mathrm{\Delta }\stackrel{~}{T}_𝖼\simeq 2.12`$. The surface phase diagram is shown in Fig. 2 and exhibits the termination of the first-order phase boundary at an unbending critical point as well as representative shapes of the coexisting interfacial phases at the transition. Again, we emphasize the universal value of the critical corrugation amplitude $`a_𝖼`$ (which is independent of $`q`$) whilst the temperature shift from $`T_w`$ satisfies $`\mathrm{\Delta }T_𝖼(q)\propto q`$. Before we discuss further scaling properties that emerge from the exact minimization of (1), we describe an approximate treatment of the model which recovers the unbending transition and yields relatively good values for the critical point. To this end, we suppose that the interface configuration, and consequently the free-energy, can be parametrized by two variables by restricting ourselves to profiles of the form $`\ell (x)\approx \ell _0+(1-ϵ)\psi (x)`$. Thus, $`\ell _0`$ is the average interface displacement whilst $`ϵ`$ measures the extent of interfacial corrugation. The bounding value $`ϵ=1`$ corresponds to a completely flat configuration whereas $`ϵ=0`$ refers to a configuration with identical corrugation to the wall. Substituting this parametrized profile shape into the Hamiltonian, Eq. (1), and minimising w.r.t. $`\ell _0`$, we are led to the following approximate expression for the dependence of the free-energy $`F`$ on the interface corrugation parameter $`ϵ`$, $$\frac{2}{\mathrm{\Sigma }q^2}F(ϵ)=\frac{a^2}{2}(1-ϵ)^2-\mathrm{\Delta }\stackrel{~}{T}^{\,2}\frac{I_0^2(ϵa)}{I_0(2ϵa)}$$ (5) where $`I_0`$ denotes the modified Bessel function of zero-order. The two terms on the r.h.s. represent the competition between the surface tension and binding potential effects which are each minimized separately by $`ϵ=1`$ (flat interface) and $`ϵ=0`$ (corrugated interface) respectively. Plots of $`F(ϵ)`$ for various $`a`$ moving along the unbending line are shown in Fig. 3 and illustrate the possibility of phase coexistence between bent and rather flat states for sufficiently large $`a`$. The locus of the unbending transition in the surface phase diagram obtained in this approximate manner is shown as the dashed line in Fig. 2 and agrees reasonably well with the exact numerical result. Note that the solutions will only depend on $`\mathrm{\Delta }\stackrel{~}{T}`$ and $`a`$, as in the exact solution. This method also has a distinct advantage over previously adopted perturbative treatments (involving an expansion about the planar system) which, whilst not without merit, cannot handle the occurrence of distinct branches (i.e. a bifurcation) in the free-energy . We also note that the location of the unbending critical point within this approximate non-perturbative method can be determined with an elegant graphical construction . We consider now the same phenomena for systems with long-ranged (dispersion) forces. For this case, we use the binding potential $$W(\ell )=-\frac{\mathrm{\Delta }T}{\ell ^2}+\frac{\beta }{\ell ^3}$$ (6) which again describes a continuous unbinding transition in the planar system as $`T\to T_w`$ . 
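Before moving on to the long-ranged case, we note that the approximate free energy (5) is a one-line function of $`ϵ`$, so the graphical picture behind Fig. 3 is easy to regenerate; the parameter values below are illustrative, with $`a=3.5>a_𝖼`$ and rescaled temperatures straddling the transition.

```python
import numpy as np
from scipy.special import i0

def F(eps, a, dTt):          # Eq. (5): the combination 2 F(eps) / (Sigma q^2)
    return 0.5 * a**2 * (1.0 - eps)**2 - dTt**2 * i0(eps * a)**2 / i0(2.0 * eps * a)

eps = np.linspace(0.0, 1.0, 2001)
a = 3.5
for dTt in (2.5, 3.0, 3.5):  # rescaled temperatures around the transition
    print(f"dT_tilde = {dTt}: global minimum at eps = "
          f"{eps[np.argmin(F(eps, a, dTt))]:.2f}")
```

As the rescaled temperature grows, the global minimum jumps from the flat branch ($`ϵ`$ near 1) to the corrugated branch ($`ϵ`$ near 0), the hallmark of a first-order unbending transition.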
For this system, the film thickness and transverse correlation length diverge as $`\ell _\pi \sim \mathrm{\Delta }T^{-1}`$ and $`\xi _{\parallel }\sim \mathrm{\Delta }T^{-5/2}`$, corresponding to critical exponents $`\beta _𝖲=1`$ and $`\nu _{\parallel }=5/2`$ respectively . Turning to the non-planar geometry, we make the judicious change of variables $`\eta \equiv (\ell -\psi )/\ell _\pi `$ and $`t\equiv qx`$ which again reduces the Euler-Lagrange equation (3) to that of a forced inverted nonlinear oscillator: $$\ddot{\eta }=\mathrm{\Delta }\stackrel{~}{T}^{\,2}(\frac{1}{\eta ^3}-\frac{1}{\eta ^4})+\stackrel{~}{a}\mathrm{cos}t.$$ (7) Once more, the two scaling variables $`\mathrm{\Delta }\stackrel{~}{T}^{\,2}\equiv 2\mathrm{\Delta }T/\mathrm{\Sigma }q^2\ell _\pi ^4`$ and $`\stackrel{~}{a}\equiv a/\ell _\pi `$ determine the multiplicity of solutions and hence the surface phase diagram. Plots of the mean interface position $`\ell _0`$ vs. $`\mathrm{\Delta }\stackrel{~}{T}`$ for different $`a`$ obtained from the numerical minimization of Eq. (1) are, in essence, the same as those shown in Fig. 1 for short-ranged forces and, therefore, are not presented here. The numerical values for the scaled variables at the unbending critical point are $`\stackrel{~}{a}_𝖼\simeq 2.061`$ and $`\mathrm{\Delta }\stackrel{~}{T}\simeq 8.66`$ which imply a wave-vector dependence $`a_𝖼(q)\sim q^{-2/5}`$ and $`\mathrm{\Delta }T_𝖼(q)\sim q^{2/5}`$ for the critical corrugation amplitude and temperature shift, respectively. The MF results described above suggest that the location of the unbending critical point can be understood using scaling theory. To this end, we suppose that, in the planar system, the excess free-energy per unit area contains a singular contribution $`F_\pi ^{\text{sing}}\sim \mathrm{\Delta }T^{\,2-\alpha _𝖲}`$ (with $`\alpha _𝖲=0`$ and $`-1`$ for the model potentials (2) and (6), respectively ). In the non-planar system, we conjecture that the corresponding quantity is described by the scaling function $$\mathrm{\Delta }F_\nu ^{\text{sing}}=\mathrm{\Delta }T^{\,2-\alpha _𝖲}W(a\mathrm{\Delta }T^{\beta _𝖲},q\mathrm{\Delta }T^{-\nu _{\parallel }})$$ (8) where $`W(x,y)`$ is the scaling function whose variables correspond to the hyperuniversal combination of lengthscales $`a/\ell _\pi `$ and $`q\xi _{\parallel }`$ . Since the singularity in the free-energy at the unbending critical point occurs for $`\mathrm{\Delta }T\ne 0`$, we are immediately led to the prediction for the critical corrugation amplitude and temperature $$a_𝖼(q)\sim q^{-\frac{\beta _𝖲}{\nu _{\parallel }}};\mathrm{\Delta }T_𝖼(q)\sim q^{\frac{1}{\nu _{\parallel }}}$$ (9) consistent with our explicit results, provided that for short-ranged forces we interpret $`\beta _𝖲/\nu _{\parallel }`$ as zero and not logarithmic. We believe that the existence of a finite critical threshold even in the $`q\to 0`$ limit is a surprising finding of our work. These scaling ideas can be extended to the interface structure at the unbending critical point where the hyperuniversal nature of the scaling variables $`x`$ and $`y`$ plays an important role. Here, we concentrate on systems with long-ranged forces for which $`\beta _𝖲\ne 0`$ where the definition of universal critical amplitudes is more straightforward. We suppose that, in the vicinity of the unbending critical point, the mean interface thickness in the non-planar system is described by the scaling law $$\ell _0=\ell _\pi \mathrm{\Lambda }(\frac{a}{\ell _\pi },q\xi _{\parallel })$$ (10) where $`\mathrm{\Lambda }(x,y)`$ is a universal scaling function. 
As a consequence, precisely at the unbending critical point, the mean film thickness $`\ell _0^𝖼`$ is a universal multiple of the corresponding planar adsorption (at the same temperature). Thus, we define the universal critical amplitude ratio $$R\equiv \frac{\ell _0^𝖼}{\ell _\pi }\quad \text{at}\quad a=a_𝖼(q),\mathrm{\Delta }T=\mathrm{\Delta }T_𝖼(q)$$ (11) which we have numerically determined as $`R\simeq 1.321`$ (independent of $`q`$) calculated using our MF theory with the binding potential (6). Note that the definition of $`R`$ is equivalent to the ratio of adsorptions in the non-planar and planar systems. Other universal critical amplitudes can also be defined. For example, at the unbending critical point, the shift in the mean interface height relative to the planar system satisfies $$R^{\prime }\equiv \frac{\ell _0^𝖼-\ell _\pi (\mathrm{\Delta }T_𝖼)}{a_𝖼(q)}$$ (12) with $`R^{\prime }`$ also independent of $`q`$. The advantage of this definition is that it is also appropriate for systems in which $`\beta _𝖲=0(\mathrm{log})`$. We have numerically determined that $`R^{\prime }=0.640`$ and $`0.156`$ for the potentials (2) and (6) respectively. To finish our article, we make two pertinent remarks. Firstly, we have established that for $`a>a_𝖼`$ the first order unbending transition also occurs out of the two-phase coexistence for sufficiently small bulk ordering field $`\overline{h}`$. The results of our numerical calculations for short-ranged forces including an additional $`\overline{h}\ell `$ term in the binding potential are shown in Fig. 4. The existence of an unbending line extending out of bulk two-phase coexistence is analogous to prewetting at (planar) first-order phase transitions. Secondly, we have established that unbending also occurs for first order wetting transitions in non-planar systems although the scaling behaviour is less obvious. A section of the surface phase diagram in the $`(T,\overline{h})`$ plane thus shows both prewetting and unbending lines. While this at first appears similar to prefilling on a wedge, there are profound and subtle differences between unbending and prefilling relating to the order of these transitions and their relation with wetting. In summary, we have shown that for non-planar systems an additional interfacial phase transition is associated with unbinding. The critical point of the unbending transition exhibits novel scaling and observable universal critical properties. Further work should concentrate on more general wall shapes, calculations with more microscopic models and also aim to establish whether the values of the universal critical amplitudes presented here are substantially affected by including fluctuation effects beyond mean-field level. At present, simulation studies seem best equipped to answer this latter question although renormalization group analysis may be possible. C.R. is on leave from the Departamento de Física Teórica de la Materia Condensada, Universidad Autónoma de Madrid, and acknowledges financial support from La Caixa and The British Council.
no-problem/9902/astro-ph9902378.html
ar5iv
text
# GRBs: when do blackbody spectra look like non-thermal ones? ## 1 Motivation Gamma-ray bursts (GRBs) still remain an unresolved mystery of modern astrophysics in spite of recent progress in the observations of their X-ray, optical and radio counterparts. Not only the nature of the internal engine, but even the mechanism of the gamma-ray emission is unclear. Studying the spectra of GRBs is one of the keys that can unlock this great mystery in the future. Observations of the GRB spectra (Band et al. 1993) show that, in general, they are well described by a low-energy power law with the exponent $`\alpha `$, being exponentially cut off at $`E\sim E_0`$, and by a high-energy power law with the exponent $`\beta `$. Though the values of $`(\alpha ,\beta ,E_0)`$ can be different for individual bursts, they usually are in the range of $`(-1.5\dots -0.5,-3\dots -2,100\dots 200\text{keV})`$. Note that in this paper we consider the photon spectrum $`N(E)`$ or $`N(\nu )`$, the differential energy flux density $`F_\nu =h\nu N(h\nu )`$, and the $`\nu F_\nu `$ distribution. By default, all the power indices in this paper refer to $`N(E)`$. The power-law appearance of the spectra can possibly be explained by the hypothesis of their nonthermal origin. The synchrotron shock mechanism (Tavani 1996), where the GRB emission is produced by an optically thin relativistic plasma in a weak magnetic field, is one of those models which give a good agreement with the observed spectra. Cohen et al (1997) find that the low-energy spectral index $`\alpha `$ in the time-integrated spectra of GRBs is usually in the range from $`-2/3`$ to $`-3/2`$, as predicted by the synchrotron shock model. The limits of this range correspond to the synchrotron spectra of an instantaneous sample of electrons and of one integrated over their radiative decay (Rybicki & Lightman 1979). However, Crider, Liang & Preece (1997) have shown, on the basis of the analysis of the time-resolved spectra of 99 GRBs, that neither the synchrotron shock nor the simple inverse Compton mechanism can explain the instantaneous GRB spectra and their evolution: the time-resolved spectral slope $`\alpha `$ is often outside the limits of the synchrotron model and does not change monotonically with time, as the inverse Compton model predicts. While these models of gamma-ray bursts (which generally fit the observations) have some difficulties in matching them in detail, we can present here a blackbody model that should be at least not worse than the other current ones. The conflict of the optically thick model for GRBs with observations was discussed already by Paczyński (1986) and Goodman (1986). Paczyński (1986) mentioned: ‘The observed spectra are averaged over large fractions of a second, and this may be responsible for the shallow slope of the low energy part of the spectrum’. The problem was raised recently by Band & Ford (1997). They have posed the question ‘whether burst spectra are narrowband on short time-scales’. So, the question is: are the observed broadband GRB spectra formed by time integration of an evolving quasi-blackbody instantaneous spectrum, or not? Band & Ford (1997) found no evidence for narrowband emission down to 1 ms time-scale. In the present paper we consider time-scales that are shorter for an observer. It is well known that assuming high values of the Lorentz factor $`\mathrm{\Gamma }`$ of the GRB ejecta is necessary to solve the compactness problem (Guilbert, Fabian & Rees 1983, Paczyński 1986, Goodman 1986, Krolik & Pier 1991, Rees & Mészáros 1992, Piran 1996). 
The typical time-scale of the variability of the gamma-ray emission $`\mathrm{\Delta }t\sim 10^{-2}`$ seconds implies a size of the emitting region $`R<c\mathrm{\Delta }t`$, as small as $`10^3`$ km. The enormous number of gamma photons in such a small volume should produce electron-positron pairs which make the emitting region optically thick. This conflicts with the observed nonthermal spectra unless one supposes that the emitting region moves towards the observer at a relativistic speed with Lorentz factor $`\mathrm{\Gamma }`$; then its size would be $`\mathrm{\Gamma }^2c\mathrm{\Delta }t`$, and the optical depth correspondingly smaller. We propose an important supplement to this solution of the compactness problem. In our version, the relativistic motion is still required, in order to provide the formation of an integrated spectrum from an ensemble of thermal ones. It is known that a sum of different thermal blackbody spectra can produce a power-like spectrum looking like a nonthermal one. It happens, e.g., in the classical case of the Shakura-Sunyaev thin accretion disk (Shakura & Sunyaev 1973). As shown in this paper, a similar approach can provide an analogous result in the case of a relativistically moving emitter. Evidently, in any realistic situation the spectrum produced by an optically thick body is never a pure blackbody, because of the opacity (and hence emission) dependence on wavelength, the effects of sphericity (see Mihalas 1978) etc. For us the black body is just a ‘toy’ model which is however far enough from the spectra of an optically thin plasma, invoked by others for explaining GRBs. Ryde & Svensson (1999) consider another basic model (a non-thermal one) and show that the observed spectra result from the time integration. Our approach is more radical than that. By the spectrum formation model presented in this paper we do not introduce a new physical model of gamma-ray bursts. We simply point out the fact that the observed non-thermal spectrum can be produced by an optically thick body. The assumptions needed for this seem not to be very unnatural. If such a picture can be worked out as a physical one (not only the ‘toy’ model), then new classes of GRB models become possible, producing ‘dirty’ fireballs, e.g. by the neutrino annihilation (Goodman, Dar, & Nussinov 1987). On the GRB models with a moderately high baryon load see Woosley (1993), Ruffert et al. (1997), Fuller & Shi (1998), Fryer & Woosley (1998), Popham, Woosley, & Fryer (1998). ## 2 The model of spectrum formation Let us assume that the emitting surface is moving towards the observer with $`\mathrm{\Gamma }\sim 10^3`$ – it can be an expanding shell, or a blob, or a ‘bullet’, or an ‘internal shock’ (e.g. Piran 1998) – and is producing at each instant a pure blackbody spectrum (which has a resemblance to the real spectra of optically thick plasmas). Due to the well known effect, if the emitter is moving towards the observer with the velocity $`v`$ corresponding to $`\mathrm{\Gamma }=(1-v^2/c^2)^{-1/2}`$, then the emitter and observer time-scales differ by a factor of $`2\mathrm{\Gamma }^2`$ (e.g. Rees & Mészáros 1992, Shaviv & Dar 1995, Piran 1998, Dar 1998). Here and below we assume that all clocks are synchronized in the observer’s rest frame, i.e. the effect is purely kinematical (see Fig.1); moreover it is Galilean, not truly relativistic (in the sense that Relativity plays no role in this effect). The Lorentz factor $`\mathrm{\Gamma }`$ is here simply a measure of the deviation of $`v`$ from $`c`$, and nothing else. 
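The numbers involved are worth making explicit (a trivial sketch; $`\mathrm{\Gamma }`$ and the observer window are the fiducial values used below):

```python
# Kinematic stretching between emission and detection for an emitter
# approaching the observer with Lorentz factor Gamma (pure kinematics).
Gamma = 1.0e3
c = 3.0e10                 # cm/s
tau_obs = 1.0e-2           # 10 ms of observer time
tau_emit = 2.0 * Gamma**2 * tau_obs
print(f"emission interval: {tau_emit:.1e} s  (~{tau_emit / 3600.0:.1f} hours)")
print(f"distance travelled by the emitter: {c * tau_emit:.1e} cm")
```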
The difference of the emitter and observer time-scales means that, for example, $`\tau =10`$ ms, the time of integration by an observer, corresponds to $`\tau ^{\prime }\approx 5`$ hours of emission time (Fig.1). During this long time the emitting object can expand and cool significantly, so the spectra it produces in the beginning and at the end of the observation interval $`\tau `$ can differ drastically. Therefore, the observed spectrum is formed by an integration of some cooling sample of instantaneous spectra. For simplicity, we assume that the temperature $`T`$ and the area $`A`$ of the emitting object change with time as described by the following power laws: $$T=T_0(t/t_0)^{-\theta }=T_0(t^{\prime }/t_0^{\prime })^{-\theta };A=A_0(t/t_0)^\sigma =A_0(t^{\prime }/t_0^{\prime })^\sigma .$$ (1) Here and below the primed time variables will refer to the emission time, while the non-primed ones to the detection time. ### 2.1 Analytic treatment Let us consider a set of arbitrary elementary spectra. If the members of the set have a parameter distributed according to a power law, then the integration of elementary spectra leads quite often to the formation of a power spectrum. Let us show this with a simple example (Fig. 2). We would like to denote the elementary (instant) spectrum as $`n(E)`$, and the resulting (integral) one as $`N(E)`$. The spectra will be integrated in time from $`t_0`$ to $`t_1=t_0+\tau `$, where $`t_1\gg t_0`$. 1. Let the elementary spectrum be $`n(E)\propto E^\beta `$ (Fig. 2a) in the high energy part of the spectrum ($`E>E_0`$) and constant if $`E<E_0`$, where $`E_0`$ evolves in time like $`t^{-\theta }`$ and at $`t_1`$ reaches the value $`E_1=E_0(t_1/t_0)^{-\theta }`$. Then the observed integral spectrum should be $$N(E)=\int _{t_0}^{t_1}A(t)n(E,t)𝑑t\propto \{\begin{array}{cc}E^\beta ,\hfill & E>E_0\hfill \\ E^{\beta -1/\theta },\hfill & E_1<E<E_0\hfill \\ \text{const},\hfill & E<E_1\hfill \end{array},$$ (2) i.e. have two power-law parts, the harder of which reflects the high-energy tail of the elementary spectrum and the softer accounts for the elementary spectrum evolution. 2. Let us now examine the case of a stepwise elementary spectrum (Fig. 2b) described by a Heaviside $`\mathrm{\Theta }`$-function: $`n(E)=\mathrm{\Theta }(E_0-E)`$, where $`E_0`$ evolves as in the previous example. Then the integral spectrum should be $$N(E)\propto \{\begin{array}{cc}0,\hfill & E>E_0\hfill \\ E^{-\frac{\sigma +1}{\theta }},\hfill & E_1<E<E_0\hfill \\ \text{const},\hfill & E<E_1\hfill \end{array},$$ (3) i.e the Heaviside step is smoothed into a power function. The discontinuity of the above $`N(E)`$ at the point $`E_0`$ is an artifact of the approximation: in fact, there is not an exact power function, but one very close to it if $`t_1\gg t_0`$ as supposed. 3. So we have shown that the integration of both power and stepwise spectra leads to a power-law behaviour between $`E_1`$ and $`E_0`$. The elementary spectrum with a Planck (Wien) high-energy tail (Fig. 2c) lies between the power and stepwise cases; it is not as steep as the stepwise one but steeper than the power one. So it is natural to expect a similar result (power-law behaviour) for the integral spectrum. ### 2.2 The integration of the blackbody elementary spectrum Now we have come to the time integration of the blackbody Planck spectrum. 
$$n(E,t)=A(t)\frac{E^2}{\mathrm{exp}[E/T(t)]-1}.$$ (4) Here we measure $`E`$ and $`T`$ in the same units, say, $`T_0`$, and let us measure time in units of $`t_0`$, so instead of $$A(t)=A_0\left(\frac{t}{t_0}\right)^\sigma ,T(t)=T_0\left(\frac{t}{t_0}\right)^{-\theta }$$ (5) we have simply $$A(t)=A_0t^\sigma ,T(t)=t^{-\theta }.$$ (6) The observed integral spectrum is $$N(E)=\int _1^{t_1}𝑑tA(t)\frac{E^2}{\mathrm{exp}[E/T(t)]-1}=A_0\int _1^{t_1}𝑑t\frac{t^\sigma E^2}{\mathrm{exp}(Et^\theta )-1}.$$ (7) Introducing $`y=Et^\theta `$, we rewrite this as $$N(E)=A_0\frac{E^{2-(\sigma +1)/\theta }}{\theta }\int _E^{Et_1^\theta }𝑑y\frac{y^{(\sigma +1)/\theta -1}}{\mathrm{exp}(y)-1}.$$ (8) From this general expression we derive the asymptotic cases. 1. The most interesting case is when $`E<1`$, that is $`E<kT_0`$ in standard units, and $`Et_1^\theta \gg 1`$. One should remember that $`t_1`$ is always greater than unity, so the latter inequality is true when $`E`$ is not too small. Then we find, replacing the lower integration limit by zero, and the upper one by infinity, $$N(E)\approx A_0\frac{E^{2-(\sigma +1)/\theta }}{\theta }\int _0^{\mathrm{\infty }}𝑑y\frac{y^{(\sigma +1)/\theta -1}}{\mathrm{exp}(y)-1}.$$ (9) The value of the integral is not interesting for us now. Thus, we produce a power-law spectrum with the exponent $`2-(\sigma +1)/\theta `$. Say, for $`\sigma =2`$ and $`\theta =3/4`$ we find the spectrum $`N(E)\propto E^{-2}`$. For $`\sigma =2`$ and $`\theta =1`$ we find the spectrum $`N(E)\propto E^{-1}`$ (flat $`F_\nu \propto \nu ^0`$), etc. See the numerical examples below. 2. When $`E\ll 1`$ and $`Et_1^\theta \ll 1`$, we have $`y\ll 1`$, so we are in the Rayleigh-Jeans (RJ) regime, $`\mathrm{exp}(y)-1\approx y`$, and $$N(E)\approx A_0\frac{E^{2-(\sigma +1)/\theta }}{\theta }\int _E^{Et_1^\theta }𝑑yy^{(\sigma +1)/\theta -2}\propto E.$$ (10) 3. For high frequencies, $`E>1`$ and $`Et_1^\theta \gg E>1`$, the flux reduces to $$N(E)\approx A_0\frac{E^{2-(\sigma +1)/\theta }}{\theta }\int _E^{Et_1^\theta }𝑑yy^{(\sigma +1)/\theta -1}e^{-y}\approx A_0\frac{E^{2-(\sigma +1)/\theta }}{\theta }E^{(\sigma +1)/\theta -1}e^{-E}\propto E^1e^{-E}.$$ (11) So, here, in the Wien regime, for any $`\sigma `$ and $`\theta `$ we have in standard units $`N(E)\propto E^1\mathrm{exp}(-E/kT_0)`$ \[contrary to $`N_b(E)\propto E^2\mathrm{exp}(-E/kT_0)`$ for the blackbody of temperature $`T_0`$\]. ### 2.3 Numerical Examples Fig. 3 presents the results of numerical integration of the $`4`$ cases of elementary spectra with various model parameters. It also illustrates the correctness of the above analytical estimates. One can compare these spectra with fig. 4 from Chiang & Dermer (1998) where a similar time integration is done, but in a different physical situation. For a fixed pair of $`\sigma `$ and $`\theta `$ the spectrum consists of two power laws and one exponential (Wien) high-energy part. Therefore, it is in some sense similar to the Band (1993) function which also has two power law parts, so it can be expected to fit the observations as well. The moderately high energy part ($`E_1<E<E_0`$) has the power law spectrum with the exponent depending both on the cooling ($`\theta `$) and expansion ($`\sigma `$) laws. The dynamical range, i.e. the spectral width of this part is $`E_1/E_0=(t_1/t_0)^{-\theta }\ll 1`$. Of course it depends on the integration time $`\tau =t_1-t_0\approx t_1`$, and should be smaller for high temporal resolution. This can be a serious test of the present model. The highest energy part ($`E>E_0`$) represents the exponential breakdown, which may be observed or not, depending on the value of $`E_0`$. Also, for such energies, there may exist some other (optically thin?) 
radiation mechanisms which can provide more intense emission than the proposed blackbody one. The low energy ($`E<E_1`$) part of the spectrum in our model should have only one possible value of the slope. This is clear from the analytical considerations: $`\alpha =1`$ (see eqn. 10). This $`\alpha `$ is close to being consistent with many observations (Crider et al. 1997), but the observed variety of spectra is much richer than the simple RJ case, and there are claims that some GRB’s do show here the spectra predicted by the synchrotron model (Cohen et al. 1997). We can demonstrate that, with a small sophistication, our blackbody model can reproduce those spectra as well. Introducing new parameters is usually a means to improve a fit, but it also makes the latter physically less reliable. In what follows, we will keep one parameter constant; let us take $`\sigma =2`$, as the most natural choice. Instead, we can introduce a physically motivated additional parameter $`f_{\mathrm{hard}}`$ as the fraction of the time when the value of $`\theta `$ is constant, assuming that after some time, $`f_{\mathrm{hard}}\tau `$, the temperature power law changes. In the examples below, for $`\sigma `$ fixed at the constant value $`2`$, we allow the value of $`\theta `$ to change a bit. For illustration, we have taken the GRB spectra given in (Cohen et al. 1997) in the form of postscript files and superimposed them onto our fits. The spectra from (Cohen et al. 1997) are all integrated in time; it would be better to have time-resolved spectra. But in any case it is not possible to have time resolution better than 1 ms, and for the illustration of our idea the spectra used are quite good. As shown by Figs. 4–7, our black body model can provide good fits for the GRB spectra which were claimed to give evidence for synchrotron radiation. ## 3 Discussion and Conclusions We have assumed that at each moment the spectrum of the gamma-ray burst emission is close to the black body one. After the integration in time over the typical temporal resolution of the observations it produces a spectrum which can be similar to the observed ‘non-thermal’ GRB spectrum. In reality, both the instantaneous spectrum and its true time evolution can deviate significantly from our simplified assumption. So in reality one can have a much richer variety of observed spectra. In our work, we wish only to point out a simple fact: that the observed non-thermal spectrum can be produced by an optically thick expanding body under fairly natural assumptions. 
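The quantitative core of this claim, the slope formula of Sec. 2.2, can be verified numerically in a few lines (a sketch in units $`T_0=t_0=1`$; the values of $`\sigma `$, $`\theta `$ and $`t_1`$ are arbitrary illustrative choices):

```python
import numpy as np

def N_of_E(E, sigma, theta, t1, nt=6000):
    # Eq. (7): time-integrated Planck spectra with T = t^-theta, A = A_0 t^sigma
    t = np.linspace(1.0, t1, nt)
    return np.trapz(t**sigma * E**2 / np.expm1(E * t**theta), t)

sigma, theta, t1 = 2.0, 1.0, 1.0e3
E = np.logspace(-2.0, -1.0, 10)          # E_1 = t1^-theta << E << E_0 = 1
N = np.array([N_of_E(Ei, sigma, theta, t1) for Ei in E])
slope = np.polyfit(np.log(E), np.log(N), 1)[0]
print(f"fitted slope: {slope:.2f}; predicted 2 - (sigma+1)/theta = "
      f"{2.0 - (sigma + 1.0) / theta:.2f}")
```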
So in standard picture, and in our picture as well, if we see a pulse of GRB lasting $`1`$ ms, the size of the shell should have grown from $`10^6`$ cm up to $`10^6`$ light milliseconds $`=10^3`$ light seconds $`10^{14}`$ cm, since $`R2\mathrm{\Gamma }^2c\times 1`$ ms. Now one can only start speculating, where do the next pulses of GRB come from. These can be internal or external shocks (Piran 1998), or light reflections (Shaviv & Dar 1995, Drozdova & Panchenko 1997), etc. However, there are arguments (Fenimore, Ramirez, & Sumner 1997) that one shell expanding forever is not able to produce GRB pulses which only show a slight ‘hard to soft’ evolution for hundreds of pulses. Already for the first pulse the shell had to expand from $`10^6`$ cm to $`10^{14}`$ cm. Therefore, a model where a central engine repeats shooting shells or bullets for the whole duration of the GRB is preferred (see also Dar 1998). Thus, we have $`R`$ like $`10^6`$ cm and $`t_0310^5`$ sec, and $`t_1`$ (or $`\tau `$) like $`10^3`$ second, so $`7`$ orders of magnitude for the dynamical range of a power-law spectrum in our model is quite plausible. This covers the range from keV to GeV. The evidence for hard, TeV, emission associated with GRBs remains inconclusive (Padilla et al. 1998). Only if the extremely hard TeV photons are detected, as suggested by Totani (1998), then one should invoke a truly non-thermal emission mechanism. We do not say that our shell has the thickness in the end like $`10^{14}`$ cm, it must be geometrically thin, so more dense – it is optically thick. But its radius $`R`$ is of course $`10^{14}`$ cm, with $`dRR`$. It can be loaded with baryons to some extend, not violating the energy limits of course (Krolik & Pier 1991). This is good if one has something like stripping the surface layers of neutron stars (Blinnikov et al. 1984; Eichler et al. 1989; Ruffert et al. 1997). Reaching the size $`10^{14}`$ cm our shell (or bullet) has expanded and cooled enough to become transparent in the end. The shell traveled this distance $`10^3`$ seconds according to our clocks, but one should not forget that it kept running almost with speed of light, the light that it had produced. So the difference in time of the beginning of the flash, that we see on Earth, and its end is only 1 millisecond. While our shell is still very near the centre, the engine has shot already the second shell (or bullet, Fig. 8), then the 3rd one, …, the 100th, etc. If the total GRB duration observed on Earth was a few seconds, all the shots of the central engine were done when our first shell was like (few seconds/$`10^3`$ seconds) smaller than in the end, so its radius was like $`10^{11}`$ to $`10^{12}`$ cm. At this time it was very optically thick, but one should not forget that it moves so fast, that the light of the 2nd, 3rd, …, 100th etc. shells can reach the first shell only after the first one is far away, $`10^{14}`$ cm from the centre, and absolutely transparent. If instead of shells we have bullets, moving at some small angles to us there is no problem of transparency. They can cool down and become small solid bodies (this is perhaps not probable, since they must be heated up by ISM). In reality, not only time, but also space integration takes place. As shown by Rees (1966), (see also Drozdova & Panchenko 1997, Sari 1998) in the case of an expanding emitting shell an observer simultaneously detects radiation produced in different moments of time (thus, with different temperatures) on the ellipsoidal or egg-like surface. 
The integration over this surface can give the same effect as the integration over time done in this paper, but we do not perform it here because the result strongly depends on the unknown geometry of the emitting surface. To conclude, we found that a variety of observed ‘nonthermal’ GRB spectra can be well reproduced by the time-integrated emission of black-body spectra. The most critical test of our model can be the discovery of the temporal resolution dependence of the power-law spectrum range (here $`E_1\dots E_0`$). However, it can be smoothed by a space integration. The main advantage of the proposed model is that it allows the baryon load to be limited not by the optical thickness, but by energy considerations only (one cannot accelerate too many baryons because of their high rest mass). Acknowledgements. Our work is partly supported by RBRF grants 96-02-16352 and 96-02-19756, INTAS ‘Thermonuclear Supernovae’, ISTC 97-370, and Russian Federal programs ‘Astronomy’ and ‘Science Schools’. The work of IEP was made possible by the INTAS 96-0315 and RFBR 98-02-16801 grants. Part of the work was done while SIB was visiting Stockholm Observatory under the grant of the Swedish Royal Academy of Sciences, and he is grateful to Peter Lundqvist and Claes Fransson for their hospitality, to Felix Ryde for his data on GRBs, and to Claes-Ingvar Björnsson for stimulating comments. The support of MPA, Garching, and encouragement by Wolfgang Hillebrandt are gratefully acknowledged.
Figure 1 caption: The CDF data [] for $`Bd\sigma /dp_T`$ (in nb/GeV) for $`J/\psi `$ production at 1.8 TeV with $`-0.6\le \eta \le 0.6`$, compared to the model predictions with $`\langle k_T\rangle =\{0,0.7,1.0\}`$ GeV, respectively.

# Issues in Quarkonium Production

TIFR/TH/99-09, February 1999, hep-ph/yymmxxx

K. Sridhar (sridhar@theory.tifr.res.in), Department of Theoretical Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400 005, India.

Invited talk presented at the 13th International Conference on Hadron Collider Physics, Mumbai, India, 14-20 January 1999.

## Abstract

In this talk, I start with a brief introduction to Non-Relativistic QCD (NRQCD) and its applications to quarkonium physics. This theory has provided a consistent framework for the physics of quarkonia; in particular, the colour-octet Fock components predicted by NRQCD have important implications for the phenomenology of charmonium production in experiments. The application of NRQCD to $`J/\psi `$ production at the Tevatron and the tests of the theory in other experiments are discussed. In particular, the apparent disagreement of NRQCD with results from HERA on inelastic photoproduction of $`J/\psi `$ is discussed, and it is shown that the results are rather susceptible to intrinsic transverse momentum smearing. The photoproduction data, therefore, do not provide a good test of NRQCD. It is argued that NRQCD may be tested stringently by looking for the production of other charmonium resonances at the Tevatron, because the production rates for these resonances can be predicted within the NRQCD framework.

Over the last few years, there has been a considerable advance in the understanding of quarkonium physics due to the development of the non-relativistic effective field theory of QCD, called non-relativistic QCD (NRQCD) . The Lagrangian for this effective theory is obtained from the full QCD Lagrangian by neglecting all states with momenta larger than a cutoff of the order of the heavy quark mass, $`m`$, and accounting for this exclusion by introducing new interactions in the effective Lagrangian, which are local since the excluded states are relativistic. Beyond the leading order in $`1/m`$ the effective theory is non-renormalisable. The scale $`m`$ is an ultraviolet cut-off for the physics of the bound state; however, the latter is more intimately tied to the scales $`mv`$ and $`mv^2`$, where $`v`$ is the relative velocity of the quarks in the bound state. The physical quarkonium state admits a Fock expansion in $`v`$, and it turns out that the $`Q\overline{Q}`$ states appear in either colour-singlet or colour-octet configurations in this series. Of course, the physical state must be a colour singlet, so a colour-octet $`Q\overline{Q}`$ state is connected to the physical state by the emission of one or more soft gluons. In spite of the non-perturbative nature of the soft gluon emissions, the effective theory still gives useful information about the intermediate octet states. This is because the dominant transitions from colour-octet to physical colour-singlet states occur via E$`1`$ or M$`1`$ transitions, with higher multipoles suppressed by powers of $`v`$. It then becomes possible to use the usual selection rules for these radiative transitions to keep track of the quantum numbers of the octet states, so that the production of a $`Q\overline{Q}`$ pair in an octet state can be calculated and its transition to a physical singlet state can be specified by a non-perturbative matrix element.
The cross-section for the production of a meson $`H`$ then takes on the following factorised form:

$$\sigma (H)=\underset{n=\{\alpha ,S,L,J\}}{\sum }\frac{F_n}{m^{d_n-4}}\langle 𝒪_n^H({}^{2S+1}L_J)\rangle ,$$ (1)

where the $`F_n`$’s are the short-distance coefficients and the $`𝒪_n`$ are local 4-fermion operators, of naive dimension $`d_n`$, describing the long-distance physics. The short-distance coefficients are associated with the production of a $`Q\overline{Q}`$ pair with the colour and angular momentum quantum numbers indexed by $`n`$. These involve momenta of the order of $`m`$ or larger and can be calculated in a perturbation expansion in the QCD coupling $`\alpha _s(m)`$. The $`Q\overline{Q}`$ pair so produced has a separation of the order of $`1/m`$, which is pointlike on the scale of the quarkonium wavefunction, which is of order $`1/(mv)`$. The non-perturbative long-distance factor $`𝒪_n^H`$ is proportional to the probability for a pointlike $`Q\overline{Q}`$ pair in the state $`n`$ to form a bound state $`H`$. The existence of the colour-octet components of the quarkonium wave function is the new feature of the NRQCD approach. Before the development of NRQCD, the production and decay of quarkonia were treated within the framework of the colour-singlet model . In this model, it is assumed that the $`Q\overline{Q}`$ pair is formed in the short-distance process in a colour-singlet state. The corrections from terms higher order in $`v`$ were neglected. While this model gave a reasonable description of low-energy $`J/\psi `$ data, it was known that it was incomplete because of an inconsistency in the treatment of the $`P`$-state quarkonia. This was due to a non-factorising infra-red divergence, noted first in the application of the colour-singlet model to $`\chi _c`$ decays , and the proper resolution of this problem was obtained only by including the colour-octet components in the treatment of the $`P`$-states . The colour-octet components, however, had a more dramatic impact on the phenomenology of $`P`$-state charmonium production at large $`p_T`$ at the Tevatron $`p\overline{p}`$ collider, where the colour-singlet model was seen to fail miserably. While the inclusion of the colour-octet components for the $`P`$-states was necessary from the requirement of theoretical consistency, there was no such problem with the $`S`$ states because the corresponding amplitude was finite and the colour-octet components were suppressed compared to the colour-singlet component by $`O(v^4)`$. But the data on direct $`J/\psi `$ and $`\psi ^{}`$ production at the Tevatron seem to indicate an important contribution from the colour-octet components for the $`S`$-states as well . While it is clear that the correct description of the Tevatron large-$`p_T`$ data requires that the colour-octet components of the quarkonium wave function be taken into account, the major problem is that the corresponding long-distance matrix elements are a priori unknown and can be obtained only by fitting to the Tevatron data . The direct $`J/\psi `$ production cross section in the NRQCD approach receives contributions from the colour-singlet $`{}_{}{}^{3}S_{1}^{[1]}`$ channel and the colour-octet $`{}_{}{}^{3}P_{J}^{[8]}`$, $`{}_{}{}^{1}S_{0}^{[8]}`$ and $`{}_{}{}^{3}S_{1}^{[8]}`$ channels. The non-perturbative parameter for the colour-singlet channel is known from $`J/\psi `$ leptonic decay.
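To make the bookkeeping in eq. (1) concrete, here is a toy evaluation. The channel list and the operator dimensions ($`d_n=6`$ for the $`S`$-wave operators, $`d_n=8`$ for the $`P`$-wave one) follow standard NRQCD counting, but every numerical value below is a placeholder, not a fitted matrix element:

```python
# Toy bookkeeping for the NRQCD factorisation formula of eq. (1):
#     sigma(H) = sum_n F_n / m**(d_n - 4) * O_n
# The channels are the ones named in the text for direct J/psi production;
# all numbers are placeholders purely to illustrate the structure.
m = 1.5  # heavy-quark mass in GeV (illustrative)

channels = {
    # name        (F_n,    d_n,  O_n)   -- placeholder values
    "3S1[1]":     (1.0e-3, 6,    1.16),    # colour singlet
    "3S1[8]":     (5.0e-2, 6,    1.0e-2),  # octet, fragmentation-like p_T shape
    "1S0[8]":     (2.0e-2, 6,    1.0e-2),
    "3PJ[8]":     (2.0e-2, 8,    1.0e-2),  # P-wave: higher operator dimension
}

sigma = sum(F / m**(d - 4) * O for (F, d, O) in channels.values())
print(f"sigma(H) ~ {sigma:.3e}  (arbitrary units)")
```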
Given the colour-singlet matrix element as input, the three non-perturbative parameters $`𝒪({}_{}{}^{3}P_{J}^{[8]})`$, $`𝒪({}_{}{}^{1}S_{0}^{[8]})`$ and $`𝒪({}_{}{}^{3}S_{1}^{[8]})`$ (which we call the matrix elements $`M_1`$, $`M_2`$ and $`M_3`$, respectively) are extracted from a fit to the CDF data. It turns out that for $`p_T>4`$ GeV, the $`p_T`$ dependences of the short-distance coefficients corresponding to the $`{}_{}{}^{3}P_{J}^{[8]}`$ and the $`{}_{}{}^{1}S_{0}^{[8]}`$ channels are identical. The $`{}_{}{}^{3}S_{1}^{[8]}`$ channel, on the other hand, has a different $`p_T`$ distribution, because fragmentation-type contributions are present only for this channel. Consequently, the shape of the experimental $`p_T`$ distributions can be used to determine $`M_3`$ separately, but only a linear combination of $`M_1`$ and $`M_2`$ (i.e. $`M_1/m_c^2+M_2/3`$) can be fitted. Clearly it is important to have other tests of NRQCD, and much effort has been made recently to understand the implications of these colour-octet channels for $`J/\psi `$ production in other experiments. We discuss some of these below.

1. The prediction for prompt $`J/\psi `$ production at LEP in the colour-singlet model is of the order of $`3\times 10^{-5}`$, which is almost an order of magnitude below the experimental number for the branching fraction obtained from LEP . Recently, the colour-octet contributions to $`J/\psi `$ production in this channel have been studied, and it is found that the inclusion of the colour-octet contributions in the fragmentation functions results in a prediction for the branching ratio of $`1.4\times 10^{-4}`$, which is compatible with the measured values of the branching fraction from LEP . A more accurate analysis, resumming large logarithms in $`E_{J/\psi }/M_Z`$ ignored in Ref. , has recently been performed .

2. The production of $`J/\psi `$ in low-energy $`e^+e^{}`$ machines can also provide a stringent test of the colour-octet mechanism . In this case, the colour-octet contributions dominate near the upper endpoint of the $`J/\psi `$ energy spectrum, and the signature for the colour-octet process is a dramatic change in the angular distribution of the $`J/\psi `$ near the endpoint.

3. One striking prediction of the colour-octet fragmentation process, both for $`p\overline{p}`$ colliders and for $`J/\psi `$ production at the $`Z`$-peak, is that the $`J/\psi `$ coming from the process $`g\to J/\psi X`$ is produced in a transversely polarised state . For colour-octet $`c\overline{c}`$ production, this is predicted to be a 100% transverse polarisation, and heavy-quark spin symmetry will then ensure that non-perturbative effects which convert the $`c\overline{c}`$ to a $`J/\psi `$ will change this polarisation only very mildly. This spin-alignment can, therefore, be used as a test of colour-octet fragmentation.

4. The colour-octet components are found to dominate the production processes in fixed-target $`pp`$ and $`\pi p`$ experiments. Using the colour-octet matrix elements extracted from elastic photoproduction data, it is possible to get a very good description of the $`\sqrt{s}`$-dependence and also of the $`x_F`$ and rapidity distributions. More recently, NLO corrections to the fixed-target cross-sections have been calculated .

5. The associated production of $`J/\psi +\gamma `$ is also a crucial test of the colour-octet components and of the fragmentation picture . Similar tests can be conceived of with double $`J/\psi `$ production at the Tevatron .

6.
$`J/\psi `$ and $`\psi ^{}`$ production in $`pp`$ collisions at a centre-of-mass energy of 14 TeV at the LHC also provides a crucial test of colour-octet fragmentation . Recently, $`J/\psi +\gamma `$ production at the LHC has also been studied .

One important cross-check is the inelastic photoproduction of $`J/\psi `$ at the HERA $`ep`$ collider . The inelasticity of the events is ensured by choosing $`z\equiv p_p\cdot p_{J/\psi }/p_p\cdot p_\gamma `$ to be sufficiently smaller than one and, in addition, by requiring $`p_T>1`$ GeV. The surprising feature of the comparisons of the NRQCD results with the data from HERA is that the colour-singlet model prediction is in agreement with the data, while including the colour-octet component leads to violent disagreement with the data at large $`z`$. While the colour-singlet cross section dominates in most of the low-$`z`$ region, the colour-octet contribution increases steeply in the large-$`z`$ ($`0.8<z<0.9`$) region, and this rise is not seen in the data. In these comparisons, the values of the non-perturbative matrix elements are taken to be those determined from a fit to the Tevatron large-$`p_T`$ data. Naively, one would think that this points to a failure of NRQCD. But this conclusion is premature. The reason is that while at the Tevatron the measured $`p_T`$ of the $`J/\psi `$ is greater than about 5 GeV, at HERA the $`p_T`$ can be as small as $`𝒪(1)`$ GeV. At such small values of $`p_T`$ (and also for $`z`$ very close to unity), there could be significant perturbative and non-perturbative soft-physics effects. One way to explore the effect of such contributions is to include transverse momentum smearing of the partons inside the proton and to study the effects of the parton transverse momentum, $`k_T`$, on the $`J/\psi `$ distributions both at the Tevatron and at HERA. It has been demonstrated that the $`z`$ distribution measured at HERA is particularly sensitive to the effects of $`k_T`$ smearing, and that inelastic photoproduction at HERA, with the present kinematic cuts, is not a clean test of NRQCD (other effects, such as soft-gluon resummation and the breakdown of NRQCD factorisation near $`z=1`$, have been discussed in the context of this discrepancy). In Fig. 1, the results of the fits to the Tevatron data are shown for three different values of the average $`k_T`$, $`\langle k_T\rangle `$, viz. $`\langle k_T\rangle =0,0.7,1.0`$ GeV. It is observed that the effect of the $`k_T`$ smearing on the parameters extracted from the data is very modest. Fig. 1 shows that the fits to the data when $`k_T`$ smearing is included are very good and comparable in quality to the case $`\langle k_T\rangle =0`$. Taking these fitted values of the parameters, inelastic $`J/\psi `$ photoproduction at HERA is considered, for the same choice of parton distributions, scales etc. as used in the Tevatron fits. The $`z`$ distribution, for $`\sqrt{s_{\gamma p}}=100`$ GeV and $`p_T>1`$ GeV, is compared with the data from HERA in Fig. 2. Again, the theoretical curves in Fig. 2 are for $`\langle k_T\rangle =0,0.7,1.0`$ GeV. In the absence of smearing, $`\langle k_T\rangle =0`$, we see that the colour-octet component makes a large contribution at $`z`$ close to 1 which is not supported by the data. However, the introduction of $`k_T`$ makes a substantial change to the octet contribution. Whereas the effect of $`k_T`$-smearing is very small for large-$`p_T`$ production at the Tevatron, these effects are found to be very important for $`J/\psi `$ production at HERA.
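A minimal sketch of what such $`k_T`$ smearing does to a falling $`p_T`$ spectrum; the unsmeared toy shape and all numerical values are illustrative assumptions, not the actual NRQCD cross section:

```python
import numpy as np

# Sketch: Gaussian intrinsic-k_T smearing of a toy p_T spectrum. A real
# analysis would convolve the 2D parton k_T distribution into the full
# short-distance cross section; everything here is illustrative.
def unsmeared(pt):
    m = 3.1  # J/psi mass, GeV
    return pt / (pt**2 + m**2)**3     # toy spectrum shape

def smeared(pt, kt_avg, n=200_000, rng=np.random.default_rng(1)):
    # 2D Gaussian k_T with <|k_T|> = kt_avg  =>  sigma = kt_avg/sqrt(pi/2)
    sig = kt_avg / np.sqrt(np.pi / 2)
    kx, ky = rng.normal(0.0, sig, n), rng.normal(0.0, sig, n)
    ptv = np.hypot(pt + kx, ky)       # shift the transverse momentum vector
    return np.mean(unsmeared(ptv))

for pt in (1.0, 5.0):
    w0 = unsmeared(pt)
    for kt in (0.7, 1.0):
        print(f"pt={pt} GeV, <kT>={kt} GeV: smeared/unsmeared = "
              f"{smeared(pt, kt) / w0:.2f}")
# The ratio departs strongly from 1 at pt ~ 1 GeV (the HERA cut) and is
# close to 1 at pt ~ 5 GeV (the Tevatron regime), as described above.
```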
In particular, the smearing significantly reduces the size of the cross section, and the $`z`$ distribution also becomes flatter, in better agreement with the HERA data. It is safe to conclude that while a direct comparison of the NRQCD predictions with the $`z`$-dependence of the inelastic photoproduction cross section for $`J/\psi `$ at HERA shows a marked disagreement between the two, such a comparison is misleading. The inelastic photoproduction process does not provide a clean test of NRQCD because of the very low $`p_T`$-cut ($`1`$ GeV) used in the HERA experiments, which makes the data very susceptible to effects like $`k_T`$ smearing. Better tests of NRQCD may be obtained by studying other observables at the Tevatron itself. The study of the polarisation of the produced $`J/\psi `$ mentioned earlier is one example; in the following, we will discuss the production of other charmonium resonances whose cross-sections can be predicted in NRQCD. One important feature of the NRQCD Lagrangian is that it has an approximate heavy-quark symmetry, which is valid to $`O(v^2)\sim 0.3`$. The implication of this symmetry is that the non-perturbative parameters have a weak dependence on the magnetic quantum number. Using this symmetry, some non-perturbative matrix elements can be expressed in terms of others already determined from the Tevatron data. In particular, the $`{}_{}{}^{1}P_{1}^{}`$ matrix elements can be inferred from the Tevatron data on $`\chi `$ production and, therefore, the production of the $`{}_{}{}^{1}P_{1}^{}`$ charmonium state, $`h_c`$, can be predicted in NRQCD . The production of the $`h_c`$ is interesting in its own right: charmonium spectroscopy predicts this state to lie at the centre-of-gravity of the $`\chi _c({}_{}{}^{3}P_{J}^{})`$ states. While the E760 collaboration at Fermilab has reported the first observation of this resonance, its existence needs further confirmation. The cross-section for $`h_c`$ production at the Tevatron energy ($`\sqrt{s}=1.8`$ TeV) has been presented in Ref. . For 20 pb⁻¹ total luminosity and $`p_T`$ integrated between 5 and 20 GeV, we expect of the order of 650 events in the $`J/\psi +\pi `$ channel. Of these, the contribution from the colour-singlet channel is a little more than 40, while the octet channel gives more than 600 events. The colour-octet dominance is more pronounced at large $`p_T`$. Recent results on $`J/\psi `$ production from CDF are based on a total luminosity of 110 pb⁻¹. For this sample, more than 3000 events can be expected to come from the decay of the $`h_c`$ into a $`J/\psi `$ and a $`\pi `$. With this large event rate, the $`h_c`$ should certainly be observable if the $`\pi ^0`$ coming from its decay can be reconstructed efficiently. A similar prediction for the absolute production rate can be made for $`\eta _c`$ production , where the two-photon decay mode of the $`\eta _c`$ has been considered. Heavy-quark symmetry allows the $`\eta _c`$ cross-section to be determined in terms of the non-perturbative parameters $`M_1`$ and $`M_2`$ obtained from $`J/\psi `$ data. But as explained before, the $`J/\psi `$ data do not allow a separate determination of these parameters, only a linear combination of them. We can saturate the linear combination with either $`M_1`$ or $`M_2`$, and we obtain the $`\eta _c`$ event rate in both these cases.
For the integrated event rate, with a $`p_T`$-cut of 5 GeV and assuming an integrated luminosity of 110 pb⁻¹, we find that the number of $`\eta _c\to \gamma \gamma `$ events lies between 425 and 7700, depending on whether $`M_1`$ or $`M_2`$ saturates the linear combination. The sensitivity of the event rate to $`M_1`$ and $`M_2`$ shows that an experimental measurement of the $`\eta _c`$ cross-section will allow an accurate determination of these non-perturbative parameters. We reiterate that the rates for $`h_c`$ and $`\eta _c`$ are $`predictions`$ of NRQCD; it is not possible to make similar predictions in alternative approaches to quarkonium production like colour evaporation . In conclusion, NRQCD provides a predictive theoretical framework for quarkonium physics. In particular, the anomalies in $`J/\psi `$ production at the Tevatron are properly understood using NRQCD. Several other tests of the theory, proposed in the literature, have been discussed. In particular, the inelastic photoproduction of $`J/\psi `$ at HERA was discussed, and it was shown that the apparent disagreement of the experimental results with the predictions of NRQCD is misleading. Because of the low values of $`p_T`$ in the photoproduction case, we find that the effect of $`k_T`$-smearing is important and that, indeed, for $`\langle k_T\rangle \sim 0.7`$ GeV, the discrepancy between theory and experiment is no longer observed. On the other hand, the inclusion of $`k_T`$ smearing has a very modest effect on the large-$`p_T`$ $`J/\psi `$ data from the Tevatron. Better tests of NRQCD may be obtained by studying other observables at the Tevatron itself, such as the polarisation of the produced $`J/\psi `$ or the production of other charmonium resonances whose cross-sections can be predicted in NRQCD.
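As a quick cross-check of the event rates quoted above, one can scale the $`h_c`$ yield with luminosity (a sketch; detection efficiencies are ignored, which is an assumption):

```python
# Luminosity scaling of the h_c -> J/psi + pi yield quoted in the text:
# ~650 events in 20 pb^-1 scaled to the 110 pb^-1 CDF sample.
n_events_20, lumi_old, lumi_new = 650, 20.0, 110.0
print(f"expected events: {n_events_20 * lumi_new / lumi_old:.0f}")
# ~3600, consistent with the 'more than 3000' quoted above.
```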
# A Quantum Field Theory Warm Inflation Model

## Abstract

A quantum field theory warm inflation model is presented that solves the cosmological horizon/flatness problems. An interpretation of the model is given from supersymmetry and superstring theory.

preprint: VAND-TH-98-01. To appear in Proceedings COSMOS 98, Monterey, CA, November 1998.

It has been known for a long time to us (for a review of earlier work see ) and perhaps a longer time to Nature that inflation is a very attractive solution to the cosmological puzzles. Yet despite its simple picture, a dynamical realization of inflation has proven to be an arduous task. For a long time cosmologists adhered to the notion that a de Sitter expansion regime would necessitate a rapid depletion of the radiation energy density $`\rho _r`$, thus creating a supercooled environment during inflation. Warm inflation cosmology has clarified this misconception by demonstrating the viability of concurrent radiation production during an inflationary regime. Furthermore, the warm inflation picture has a couple of immediate conceptual advantages. Firstly, the dynamics is completely free of questions about the quantum-to-classical transition. The scalar inflaton field is in a well defined classical state, thus immediately justifying the application of a classical evolution equation. Also, the fluctuations of the inflaton, which are the metric perturbations , are classical. Secondly, the dynamics underlying warm inflation is based on the best understood nonequilibrium regime, the state perturbed from thermal equilibrium. In this regime, a self-consistent prescription for dynamics is well defined. These two points imply that the dynamics is free of conceptual ambiguity, thus permitting a clear road towards a theory. Notwithstanding, the challenge is to find models that satisfy the requirements of this prescription. In this talk, a quantum field theory warm inflation model is presented, based on the analysis in , that solves the horizon/flatness problems. The model obtains, from the elementary dynamics of particle physics, cosmological scale factor trajectories that begin in a radiation-dominated regime, enter an inflationary regime and then smoothly exit back into a radiation-dominated regime, with nonnegligible radiation throughout the evolution. The basic idea of our implementation of warm inflation is quite simple: a scalar field, which we call the inflaton, interacts with several other fields through shifted couplings $`g^2(\phi -M_i)^2\chi _i^2`$ and $`g(\phi -M_i)\overline{\psi }\psi `$ to bosons and fermions, respectively. The mass sites $`M_i`$ are distributed over some range. As the inflaton relaxes toward its minimum energy configuration, it will decay into all fields that are light and coupled to it. In turn this generates an effective viscosity. That this indeed happens has been demonstrated in detail in Refs. . In order to satisfy one of the requirements of a successful inflation (60 or so e-folds), overdamping must be very efficient. The purpose of distributing the masses $`M_i`$ is to increase the interval of $`\phi `$ in which light particles emerge through the shifted couplings.
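The effect of distributing the masses can be seen in a small numerical sketch; it assumes the uniform ladder of mass sites $`M_i=iT_M/g`$ specified below, and all numerical values are illustrative only:

```python
import numpy as np

# Sketch of the distributed-mass mechanism: with sites M_i = i*T_M/g, the
# fields at site i have mass ~ g|phi - M_i|, so the sites within ~T/g of
# the rolling inflaton amplitude phi are light and can be excited.
# All numbers below are illustrative choices, not values from the text.
g, T_M, T, N_M = 0.1, 1.0, 1.0, 1000       # T_M ~ T during warm inflation
M = np.arange(1, N_M + 1) * T_M / g        # ladder of mass sites

for phi in (0.0, 2005.0, 5005.0, 9995.0):
    m_site = g * np.abs(phi - M)           # field-dependent masses
    n_light = int(np.sum(m_site < T))      # sites lighter than T
    print(f"phi = {phi:7.0f}:  {n_light} light sites")
# As phi rolls along the ladder, the window of light sites sweeps with it,
# so the inflaton always has light fields to decay into (the dissipation).
```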
The basic Lagrangian that we will consider is that of a scalar field $`\varphi `$ interacting with $`N_M\times N_\chi `$ scalar fields $`\chi _{jk}`$ and $`N_M\times N_\psi `$ fermion fields $`\psi _{jk}`$,

$$\mathcal{L}[\varphi ,\chi _{jk},\overline{\psi }_{jk},\psi _{jk}]=\frac{1}{2}(\partial _\mu \varphi )^2-\frac{m^2}{2}\varphi ^2-\frac{\lambda }{4!}\varphi ^4+\sum _{j=1}^{N_M}\sum _{k=1}^{N_\chi }\left\{\frac{1}{2}(\partial _\mu \chi _{jk})^2-\frac{f_{jk}}{4!}\chi _{jk}^4-\frac{g_{jk}^2}{2}\left(\varphi -M_j\right)^2\chi _{jk}^2\right\}+\sum _{j=1}^{N_M}\sum _{k=1}^{N_\psi }\left\{i\overline{\psi }_{jk}\partial \!\!\!/\,\psi _{jk}-h_{jk}(\varphi -M_j)\overline{\psi }_{jk}\psi _{jk}\right\},$$ (3)

where all coupling constants are positive: $`\lambda `$, $`f_{jk},g_{jk}^2,h_{jk}>0`$. For simplicity, we consider in the following $`f_{jk}=f`$, $`g_{jk}=h_{jk}=g`$. Also, we will set $`N_\psi =N_\chi /4`$, which along with our choice of couplings implies a cancellation of radiatively generated vacuum energy corrections in the effective potential . We call this kind of model a distributed mass model (DMM), where the interaction of $`\varphi `$ with the $`\chi _{jk}`$ and $`\psi _{jk}`$ fields establishes a mass scale distribution for the $`\chi _{jk}`$ and $`\psi _{jk}`$ fields, which is determined by the mass parameters $`\{M_i\}`$. Thus the $`\chi _{jk}`$ and $`\psi _{jk}`$ effective field-dependent masses, $`m_{\chi _{jk}}(\varphi ,T,\{M\})`$ and $`m_{\psi _{jk}}(\varphi ,T,\{M\})`$, respectively, can be constrained even when $`\varphi =\phi `$ is large. The mass sites are chosen to be $`M_i=iT_M/g`$, where $`T_M`$ is a constant of order the temperature $`T`$ during warm inflation. The above Lagrangian has been realized from an effective N=1 global SUSY theory with superpotential

$$W(\mathrm{\Phi },\{X_i\})=4m\mathrm{\Phi }^2+\lambda \mathrm{\Phi }^3+\sum _{i=1}^{N_M}\left[4\mu _iX_i^2+f_iX_i^3+\lambda _i^{}\mathrm{\Phi }^2X_i+\lambda _i^{\prime \prime }\mathrm{\Phi }X_i^2\right],$$ (4)

which represents an inflaton interacting with the modes of a string. Here $`\mathrm{\Phi }`$ is a single chiral superfield which represents the inflaton, and $`X_i`$, $`i=1,\mathrm{},N_M`$, are a set of chiral superfields that interact with the inflaton. All the superfields have their antichiral counterparts $`\overline{\mathrm{\Phi }}`$, $`\{\overline{X}_i\}`$ appearing in the kinetic and Hermitian conjugate (h.c.) terms. In the chiral representation, the expansion of the superfields in terms of the Grassmann variable $`\theta `$ is $`\mathrm{\Phi }=\varphi +\theta \psi +\theta ^2F`$ and $`X_i=\chi _i+\theta \psi _i+\theta ^2F_i`$, $`i=1,\mathrm{},N_M`$. Here $`\varphi =(\varphi _1+i\varphi _2)/\sqrt{2}`$ and $`\chi _i=(\chi _1+i\chi _2)/\sqrt{2}`$ are complex scalar fields, as are $`F`$ and $`\{F_i\}`$, and $`\psi `$ and $`\{\psi _i\}`$ are Weyl spinors. By definition, the inflaton $`\mathrm{\Phi }`$ characterizes the state of the vacuum energy through a nonzero amplitude in the bosonic sector $`\langle \varphi \rangle \equiv \phi \ne 0`$.
The DM-model is realized for the case $`\mu _i=gM_i/2`$, $`\lambda _i^{}=0`$, and $`\lambda _i^{\prime \prime }=2g`$, for which the masses of the $`\chi _i,\psi _i`$ fields are respectively $`m_{\chi _i}^2=g^2(\phi -M_i)^2-2gm\phi -(3g\lambda /4)\phi ^2`$ and $`m_{\psi _i}^2=g^2(\phi -M_i)^2`$. At $`\phi =0`$, the masses of the $`\chi _i,\psi _i`$ pair are equal, as required by supersymmetry. On the other hand, a nonzero inflaton field amplitude, $`\phi \ne 0`$, implies a soft breaking of supersymmetry, which in turn permits mass differences. It has been checked that the soft-breaking terms do not cause any problems for the results in . The hierarchy of mass levels in the above model, Eqs. (3) and (4), is reminiscent of the mass levels of a string. Clearly the above superpotential, and thus the DM-model, captures this basic feature of strings. Also, since the DM-model can be derived from F-term SUSY, i.e. the superpotential, it is a natural model in the technical sense of renormalizability. Further details on matters related to the SUSY origin of the DM-model and its string interpretation can be found in . The next task is to derive the effective equation of motion for $`\phi `$ that describes the dissipative dynamics. The basic idea underlying dissipative dynamics is very simple. The decay products of $`\varphi `$ and the fields to which it couples create, in a sense, a viscous fluid which in turn acts to slow the motion of $`\phi `$. The 1-loop effective equation of motion for the scalar field $`\varphi `$ is obtained by setting $`\varphi =\phi +\eta `$ in Eq. (3) and imposing $`\langle \eta \rangle =0`$. Then, from Weinberg’s tadpole method, the 1-loop evolution equation for $`\phi `$ (for a homogeneous field) is

$$\ddot{\phi }+3H\dot{\phi }+m^2\phi +\frac{\lambda }{6}\phi ^3+\frac{\lambda }{2}\phi \langle \eta ^2\rangle +g^2\sum _i^{N_M}\sum _j^{N_\chi }(\phi -M_i)\langle \chi _{ij}^2\rangle +g\sum _i^{N_M}\sum _j^{N_\chi /4}\langle \psi _{ij}\overline{\psi }_{ij}\rangle =0.$$ (6)

In the above, the term $`3H\dot{\phi }`$ describes the energy redshift of $`\phi `$ due to the expansion of the Universe. This term arises naturally once we start with a background expanding metric in Eq. (3). In the warm-inflation regime of interest here, the thermalization condition must hold, which requires the characteristic time scales (given by the inverse of the decay widths) of the fields in Eq. (3) to be faster than the expansion time scale, $`H\ll \mathrm{\Gamma }`$, where the $`\mathrm{\Gamma }`$ are the decay widths given in . In this case, the calculation of the (renormalized) thermal averages in Eq. (6) can be approximated just as in the Minkowski space-time case. A systematic perturbative evaluation of the averages in the adiabatic, strongly dissipative regime was presented in and re-derived in with an extension to fermions. Based on this systematic perturbative approach, the effective equation of motion for $`\phi `$ is

$$\ddot{\phi }+V_{\mathrm{eff}}^{}(\phi ,T)+\eta (\phi )\dot{\phi }=0,$$ (7)

where $`V_{\mathrm{eff}}^{}(\phi ,T)=\partial V_{\mathrm{eff}}(\phi ,T)/\partial \phi `$ is the field derivative of the 1-loop finite-temperature effective potential, which can be computed by standard methods. $`\eta (\phi )\equiv \eta ^\mathrm{B}(\phi )+\eta ^\mathrm{F}(\phi )`$ is a field-dependent dissipation coefficient; the explicit expressions are given in .
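A minimal sketch of the overdamped dynamics in eq. (7), with a constant dissipation coefficient standing in for the field-dependent $`\eta (\phi )`$ (an assumption of this sketch) and the cubic force $`V^{}\lambda \phi ^3/6`$ used in the analysis below; all numbers are illustrative:

```python
# Sketch: integrate phi'' + V'(phi) + eta*phi' = 0 (eq. (7)) with a simple
# Euler scheme. eta is held constant here as a stand-in for the field-
# dependent eta(phi); lam and the initial conditions are placeholders.
lam = 1e-9                        # quartic self-coupling (illustrative)
eta = 1.0                         # dissipation coefficient (placeholder)
Vp = lambda phi: lam * phi**3 / 6.0

phi, dphi, t, dt = 1.0e3, 0.0, 0.0, 1.0e-2
for _ in range(200_000):
    ddphi = -Vp(phi) - eta * dphi
    dphi += ddphi * dt
    phi += dphi * dt
    t += dt

# In the overdamped limit the solution tracks phi' ~ -V'(phi)/eta (slow roll
# as a *consequence* of the dissipation, not an input).
print(f"phi(t={t:.0f}) = {phi:.2f},  slow-roll residual: "
      f"{abs(dphi + Vp(phi) / eta):.2e}")
```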
This equation of motion is subject to the thermalization condition and to the adiabatic condition that the dynamical time-scale of $`\phi `$ must be much larger than the typical collision time-scale ($`\mathrm{\Gamma }^{-1}`$), i.e. $`|\phi /\dot{\phi }|\gg \mathrm{\Gamma }^{-1}`$. The model outlined above has been analyzed in the regime where $`V^{}(\phi ,T)\approx \lambda \phi ^3/6`$ dominates. To enforce this condition, further constraints are imposed on the parameter space. There is insufficient space to present the complete solution here, but it can be found in . To summarize the results, we find observationally large numbers of e-folds, $`N_e>60`$, in the regime $`g<1`$, $`N\sim 10`$, $`\phi /T\sim 10^3`$, and $`\lambda \sim 10^{-9}`$. In addition, a large number of mass sites $`M_i`$ is necessary, $`N_M\sim 10^3`$. The total number of particle fields necessary for the dynamics in this regime, $`NN_M\sim 10^4`$, is not inconsistent with the particle content of excited states in string theory . In summary, the model described above has two appealing features. Firstly, since the dynamics is derived from a first-principles treatment of thermal field theory, which drives inflation from the natural dynamics of a scalar field, slow roll is a consequence of the dynamics and not an input. Secondly, the model offers an interesting connection to high-energy unification, through its relation to superstrings. Finally, as an aside, it is interesting to examine whether small warm inflations, $`N_e\sim 1`$, can be implemented as reheating phases after supercooled inflation.
# Quantum Dew

## Abstract

We consider phase separation in a nonequilibrium Bose gas with an attractive interaction between the particles. Using numerical integrations on a lattice, we show that the system evolves into a state that contains drops of Bose-Einstein condensate suspended in uncondensed gas. When the initial gas is sufficiently rarefied, the rate of formation of this quantum dew scales with the initial density as expected for a process governed by two-particle collisions.

PACS: 98.80.Cq, 03.75.Fi. PURD-TH-99-03, CERN-TH/99-21, hep-ph/9902272

The theory of interacting Bose gases has been an important part of quantum statistical mechanics ever since Bogoliubov’s seminal work . In particular, nonequilibrium Bose gases are interesting from at least two points of view. First, such gases can now be produced in the laboratory via modern cooling techniques. A dramatic demonstration of the resulting nonlinear dynamics is the Bose-Einstein condensation (BEC) observed in alkali vapors . Second, nonequilibrium gases of elementary particles frequently arise in cosmological scenarios and could have played an important role in the evolution of the universe. One way in which nonequilibrium Bose gases arise in cosmology is via the decay of a coherently oscillating field. This mechanism could be important, for instance, at the end of an inflationary stage, i.e. during the reheating after inflation. Indeed, it has been found that in some inflationary models the oscillating inflaton field decays rapidly and completely into a gas that contains both the inflaton quanta and other types of Bose particles . These gases have very large occupation numbers in low-momentum modes and almost no occupation in high-momentum modes; they are highly nonthermal. The possibility of existence of such gases is by no means limited to the postinflationary era. In particular, there are indications that nonbaryonic cold dark matter constitutes a significant fraction of the matter in the universe at present. At the epoch of galaxy formation, gravitational instability develops on a variety of scales, which may lead to the formation of small-scale dark matter clumps in galaxy halos. Dark matter particles trapped in a gravitational well are out of thermal equilibrium and are nonrelativistic. Typically, interparticle interactions are very small but, if the particles are bosons, the relaxation time can in certain cases be comparable to the age of the universe . This opens the possibility of Bose-Einstein condensation and the formation of Bose stars . One proposed precursor of those is the axion minicluster , but modern particle models contain a variety of other fields of potential interest in this respect: majoron, dilaton, moduli, to name a few. Thus, it is important to investigate the evolution of nonequilibrium Bose gases under various conditions. If the interaction between the particles is repulsive, and the energy density is sufficiently low, a Bose-Einstein condensate will form. The process of Bose-Einstein condensation in this case has been studied theoretically in a number of papers . The question we want to address in this paper is what happens to a nonequilibrium Bose gas if the interaction between its particles is attractive, at least within a certain range of interparticle distances. There is hardly any doubt that an attractive interaction will lead to clumping and phase separation, and statements to that effect have appeared in the recent literature . However, it has remained unclear whether the clumps will be in the normal or the superfluid state.
In addition, the kinetics of the clumping needs to be elucidated. Our main result, obtained via numerical integrations, is that the clumps are drops of Bose-Einstein condensate, i.e. each of them is characterized by a macroscopic order parameter. These drops remained suspended in uncondensed gas for as long as we could follow the evolution, although they did grow somewhat at the expense of the gas. Because Bose condensation in the drops is attributable to the quantum statistics of the particles, we call such drops quantum dew. The coherent, macroscopically ordered nature of quantum dew may be important in cosmological (as well as laboratory) applications. Suppose for example that the particles it is made of can decay into some other particles. The macroscopically populated mode of a coherent clump may work as a laser ; as a result, quantum dew may decay much faster than an incoherent clump would. The purpose of the present paper is to prove the coherent nature of the clumps and to study the kinetics of the appearance and growth of dew drops. For this purpose, we have chosen the simplest model with an attractive interaction and, nevertheless, a stable ground state. From the point of view of cosmological applications, perhaps the most important effects left out of this simple model are the expansion of the universe and the gravitational attraction. The expansion dilutes the gas available for clumping and thus slows the clumping down. The gravitational attraction works in the opposite direction. The net effect of these opposing tendencies can in principle be found via numerical integrations, and we plan to return to this important question in the future. The model contains a nonrelativistic complex Bose field $`\psi `$ with the following equation of motion

$$2mi\frac{\partial \psi }{\partial t}=-\nabla ^2\psi -|\psi |^2\psi +g_6|\psi |^4\psi .$$ (1)

The field is normalized so that the attractive cubic term on the right-hand side has a coefficient of unity. The corresponding coupling $`g_4`$ then appears in the commutation relation:

$$[a_\text{k},a_{\text{k}^{}}^{\dagger }]=|g_4|\delta _{\text{k}\text{k}^{}},$$ (2)

the annihilation operators $`a_\text{k}`$ being defined via $`\psi (\text{r})=V^{-1/2}\sum _\text{k}a_\text{k}\mathrm{exp}(i\text{k}\text{r})`$ in a finite volume $`V`$. The quintic term in (1) is repulsive, and it becomes important when $`\psi ^{}\psi `$ approaches $`g_6^{-1}`$. (The coupling $`g_4`$ appearing in (2) is related to $`\lambda `$ of the relativistic $`\lambda \varphi ^4/4`$ potential via $`g_4=3\lambda /2m`$, and to the scattering length $`a`$ of a nonrelativistic Bose gas via $`g_4=8\pi a`$; here $`\hbar =1`$. The physical density is $`\psi ^{}\psi /|g_4|`$.) Our integrations are set up as follows. In the initial state the occupation numbers $`n_\text{k}=\langle a_\text{k}^{\dagger }a_\text{k}\rangle `$ have a Gaussian distribution over momenta

$$n_\text{k}=A\mathrm{exp}(-k^2/k_0^2).$$ (3)

The population of the homogeneous mode is $`n_0=A`$, which is not considerably larger than the population of other modes with small $`k`$. In this sense, there is no macroscopic condensate in the initial state. Now, we assume that $`g_4`$ in (2) is small compared to $`A`$ in (3). Then, we can neglect the commutator of $`a`$ and $`a^{\dagger }`$ compared to the typical magnitude of $`a`$ itself, cf. Bogoliubov . As a result, the problem becomes classical and can be integrated on a lattice. This classical approximation has been used to study the nonlinear dynamics of relativistic Bose fields at large occupation numbers and the process of Bose-Einstein condensation .
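The setup just described can be realised in a few lines. The sketch below uses the grid and couplings quoted below in the text and, for the time evolution, a Strang split-step integrator rather than the authors’ Crank-Nicolson scheme (both conserve the particle number exactly); the time step and step count are illustrative choices:

```python
import numpy as np

# Sketch of the lattice realisation of eqs. (1)-(3), in the units of the
# text (2m = 1): initial state n_k = A exp(-k^2/k0^2) with random phases,
# evolved with a norm-conserving Strang split-step integrator.
N, L, k0, A = 64, 2.25, 2.0 * np.pi, 5.0
g6 = 1.0 / 3600.0
rng = np.random.default_rng(0)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)           # lattice momenta
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

n_k = A * np.exp(-k2 / k0**2)                          # eq. (3)
a_k = np.sqrt(n_k) * np.exp(2j * np.pi * rng.random(k2.shape))
psi = np.fft.ifftn(a_k) * N**3 / np.sqrt(L**3)         # psi = V^{-1/2} sum_k a_k e^{ik.r}
print(f"rms |psi| = {np.sqrt(np.mean(np.abs(psi)**2)):.2f}")   # ~5.3

dt = 2.0e-4
half_kinetic = np.exp(-0.5j * k2 * dt)                 # exact in Fourier space
for _ in range(500):                                   # evolve to t = 0.1
    psi = np.fft.ifftn(half_kinetic * np.fft.fftn(psi))
    # potential step: i dpsi/dt = (-|psi|^2 + g6 |psi|^4) psi
    psi *= np.exp(1j * (np.abs(psi)**2 - g6 * np.abs(psi)**4) * dt)
    psi = np.fft.ifftn(half_kinetic * np.fft.fftn(psi))

print(f"dew fraction (|psi| > 8): {np.mean(np.abs(psi) > 8.0):.3f}")
```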
In the present work, we use this classical approximation to demonstrate the formation of quantum dew in the model (1). This use involves no contradiction in terms: quantum dew is an effect of quantum statistics when one thinks in terms of individual particles, but it comes out as an effect of classical evolution in the collective, field-theoretical description. Notice that eq. (3) determines only the absolute values of $`a_\text{k}`$; their phases are chosen as uncorrelated random numbers. We choose the parameters of the model in such a way that $`\langle \psi ^{}\psi \rangle \ll g_6^{-1}`$; the angular brackets denote averaging over the lattice. This means that in the initial state the attractive interaction is much more important than the repulsive one. In this case, we expect that, in appropriate dimensionless units, the time scale $`t_c`$ of the initial collapse of the gas into clumps depends only on the single remaining parameter of nonlinearity

$$\xi =\frac{\langle \psi ^{}\psi \rangle }{2mϵ},$$ (4)

where $`ϵ`$ is the average kinetic energy per particle in the initial state; $`\xi `$ is of order the ratio of the initial potential energy of attraction to the initial kinetic energy. A similar parameter for an atomic gas in a trap will be introduced below. We can write

$$t_c^{-1}=ϵF(\xi ),$$ (5)

where $`F`$ is some function obeying the conditions $`F(0)=0`$ and $`F(1)\sim 1`$. The form of $`F(\xi )`$ at small $`\xi `$ is established below. For the initial distribution (3), the initial parameter of nonlinearity is $`\xi =k_0A/12\pi ^{3/2}`$. We choose units of time so that $`2m=1`$. We also use units of length in which $`k_0=2\pi `$, i.e. we measure lengths in units of the particles’ typical initial de Broglie wavelength. Except where stated otherwise, we consider the case of moderate nonlinearity $`A=5`$, which corresponds to $`\xi =0.47`$. We use $`g_6^{-1}=3600`$. The results below are from integrations on a $`64^3`$ cubic lattice with side $`L=2.25`$ (in the above length units) and periodic boundary conditions. The state of the system was updated via a second-order in time algorithm based on the Crank-Nicholson method for the diffusion equation. The algorithm conserves the number of particles exactly. Energy non-conservation was below 2% for the entire integration time. Fig. 1 shows two snapshots of the field, at times $`t=0.1`$ and $`t=20`$. Dots have been placed on all lattice sites at which $`|\psi |>30`$ (the root-mean-square value of $`|\psi |`$ is 5.3). Drops of dew are clearly seen. A movie of the evolution of this picture from $`t=0.1`$ to $`t=20`$ shows that the drops of dew move around, gradually slowing down, and occasionally coalesce. The overall growth of the number of sites with $`|\psi |>30`$ continues even at $`t=20`$, the latest time in our computation, but at that time it is already quite slow. If we define that grid points with $`|\psi |\le 8`$ belong to the gas, and correspondingly grid points with $`|\psi |>8`$ belong to the dew ($`|\psi |=8`$ is approximately the boundary between the gas and the dew at $`t=20`$, see Fig. 3 below), we find that around 15% of all particles are in the gas, and around 85% had condensed into the dew by the time $`t=20`$. These fractions, however, may be altered when gravity is included. Fig. 2 shows the initial stages of the condensation process: we plot the fraction of particles that are in the dew as a function of time. We observe two distinct stages: a rapid collapse followed by a slower chaotic evolution. Because $`\xi \sim 1`$ at $`A=5`$, we estimate the time of collapse $`t_c`$ from (5) as $`t_c\sim k_0^{-2}`$.
For $`k_0=2\pi `$ this gives $`t_c\sim 0.025`$, in good agreement with the data of Fig. 2. In the regime of weak nonlinearity, $`\xi \ll 1`$, we expect the collapse to be due to two-particle collisions, in which case $`F(\xi )\propto \xi ^2`$. (With this form of $`F(\xi )`$, the estimate (5) for the time of the collapse coincides with the estimate of the condensation time of Ref. , which can also be obtained from a solution to the Boltzmann equation .) The time of the collapse then has to scale as $`A^{-2}`$ when we decrease $`A`$ and keep all other parameters fixed. Results of integrations with different values of $`\xi <1`$ confirm this; see Fig. 2. For an atomic gas confined in a trap at some temperature $`T`$, one can introduce an initial parameter of nonlinearity $`\xi _T`$:

$$\xi _T=\frac{4\pi \hbar ^2|a|n}{mT},$$ (6)

where $`n`$ is a typical gas density, and $`a`$ is the scattering length, which in the present case is negative. In (6), $`k_B=1`$, but we have restored $`\hbar `$. Let us use for estimates $`n=(mT_0/3.31\hbar ^2)^{3/2}`$ and $`T=T_0`$, where $`T_0`$ is the temperature of BEC of an ideal monoatomic gas in a given trap and with a given number of particles. Bradley et al. quote $`T_0=300`$ nK and $`a=-27.3a_0`$ for their experiment with trapped ⁷Li ($`a_0`$ is the Bohr radius); using these values we obtain $`\xi _T=0.006`$. Estimating the rate of the collapse as $`\hbar t_c^{-1}\sim T_0\xi _T^2`$, we find $`t_c\sim 1`$ s. We thus expect that quantum dew can be observed in the laboratory in traps of a sufficiently large size. The onset of the slower chaotic evolution indicates that a chemical quasiequilibrium between the dew and the gas has been reached, i.e. the processes of evaporation of particles from the existing dew drops and condensation back onto them are approximately (but not exactly) balanced. This interpretation is supported by the following test. The probability distribution of the absolute value of the field over lattice sites shows two distinct peaks: one at large $`|\psi |`$, corresponding to the dew drops, and another at small $`|\psi |`$, corresponding to the gas of particles, see Fig. 3. If at some instant we remove the gas, i.e. set $`\psi =0`$ at all sites where we had $`|\psi |<10`$, and then continue the evolution, the gas reappears, while the number of sites occupied by the dew decreases down to another slowly evolving value. Apparently, the dew partially evaporates, so as to restore the chemical quasiequilibrium. Finally, Fig. 4 illustrates the coherent nature of the dew. It shows the field $`\psi `$ at $`t=0.2`$ on a section of our integration cube parallel to the $`x`$–$`y`$ plane. For visual clarity only even-even numbered sites are included. The length of an arrow represents $`|\psi |`$, and the angle clockwise from 12 noon represents $`\mathrm{arg}\psi `$. We see that the sites occupied by the dew (i.e. having large $`|\psi |`$) are in drops, and each such drop is coherent—the arrows point approximately in the same direction. The direction of the arrows in each drop rotates with time, just as in the homogeneous case , but these directions are different for different drops. Similar slices at later times show that the clumping becomes more pronounced and the dew is still coherent, while the remaining gas (occupying sites with small $`|\psi |`$) is incoherent. Eq. (1) has stable nontopological solitons of the form

$$\psi (\text{r},t)=\chi (\text{r})\mathrm{exp}(i\omega t)$$ (7)

(nonrelativistic analogs of Q-balls ). As we continue to truncate the gas, i.e.
to remove particles from sites with progressively smaller $`|\psi |`$, and to evolve the system between these truncations, we expect to eventually reach a state in which solitons float in vacuum (or in gas of a very small density). The changes in the probability distribution function (p.d.f.) of $`|\psi |`$ resulting from this procedure are shown in Fig. 5. We interpret the limiting form to which the p.d.f. converges in the middle range of $`|\psi |`$ as corresponding to the wall profile of the nontopological solitons. Computation of $`\psi `$ at the center of a soliton in the thin-wall approximation gives $`|\psi |_c=(3/4g_6)^{1/2}\approx 52`$, in good agreement with the position of the peak in the p.d.f. at large $`|\psi |`$. Like the nontopological solitons produced in a decay of an unstable homogeneous condensate , quantum dew may work as cold dark matter. To summarize, our main results are: (i) a numerical proof that the clumps of matter formed in a nonequilibrium gas with an attractive interaction are coherent drops of Bose-Einstein condensate; (ii) evidence that the rapid collapse of particles into drops of this quantum dew is followed by a slower evolution, during which the dew is in approximate chemical equilibrium with the surrounding gas; (iii) evidence that at weak nonlinearity the rate of the initial collapse is consistent with being determined by two-particle collisions. We thank A. Kusenko and M. Shaposhnikov for useful discussions. This work was supported in part by DOE grant DE-FG02-91ER40681 (Task B) and NSF grant PHY-9501458. S.K. thanks the ITP, Santa Barbara, for hospitality during the completion of this work.
# H i in Early-Type Galaxies

## 1 Introduction

Early-type galaxies constitute quite a heterogeneous group of galaxies. Many of the properties of these systems vary from galaxy to galaxy and, more interestingly, vary systematically with luminosity and environment. Many of the differences may be related to the amount of gas (and dissipation) present during the formation and the evolution of the galaxies. Moreover, evidence is accumulating now that some early-type galaxies probably have a long-lived interstellar medium (ISM; e.g. Knapp 1998). To help understand the mechanisms and processes behind the differences between different early-type galaxies, and to study the ISM observed in some of these galaxies, it may be worthwhile to study the systematics of the properties of the neutral hydrogen in early-type galaxies as a function of luminosity and environment. To this end, we have observed a large number of early-type galaxies with the Australia Telescope Compact Array and the Very Large Array (Morganti et al. 1997a,b, 1998a,b). In particular, we have tried to observe galaxies over a range of luminosities in order to see whether the H i properties of low-luminosity early-type galaxies (which we define here as galaxies with absolute blue magnitude in the range $`-16`$ to $`-19`$) vary systematically with luminosity. Here we give a brief overview of the results obtained so far. In section 2 we give a brief overview of properties of early-type galaxies observed in other wavebands that are relevant for the discussion. In section 3 we briefly discuss the H i content of early-type galaxies, and in section 4 we summarize the morphology and the kinematics of the H i as a function of luminosity and environment, and the relation of these properties to other properties of the galaxies.

## 2 Properties at other wavelengths

There are several properties of early-type galaxies that vary systematically with luminosity:

Stellar rotation. Some early-type galaxies are ‘pressure supported’, while in other galaxies the rotation of the stellar component is important for the dynamics of the system. In general, in luminous galaxies the random motions dominate, while in lower-luminosity systems the stellar rotation becomes more important (e.g., Davies et al. 1982).

Isophotal shape. In many early-type galaxies, the isophotes are not perfect ellipses. In lower-luminosity galaxies, the isophotes are more often disky, while in luminous systems they tend to be more often box shaped.

Core properties. Imaging studies performed with HST have shown that the central density distributions vary systematically with luminosity (e.g., Lauer 1997). Low-luminosity systems usually have steeper cores than more luminous systems.

Excitation of the ionized gas. Many early-type galaxies have optical emission lines in their spectrum. The character of these lines changes systematically with luminosity. In low-luminosity systems, the spectrum is usually that of H ii regions. In luminous galaxies it corresponds to a LINER spectrum (Sadler 1987). This indicates that the ionization mechanism is different in these two types of galaxies.

Star formation history. The star formation history appears to change systematically with luminosity. For example, the relative abundance of Mg with respect to Fe correlates with velocity dispersion. Luminous galaxies typically have [Mg/Fe] $`\sim 0.4`$, while fainter galaxies have values around 0. This indicates that the enrichment history of the ISM changes systematically with luminosity.
Lower-luminosity galaxies also show a larger spread in the Mg-$`\sigma `$ relation (e.g., Bender 1996), again pointing to a different star formation history. Many low-luminosity ellipticals in fact display star formation in their central parts. It appears that disky galaxies have stronger H$`\beta `$ indices, indicating that some star formation occurred recently (de Jong & Davies 1997).

X-ray emission. The amount of X-ray emission correlates strongly with optical luminosity. In large ellipticals, part of the X-ray emission originates from a halo of hot gas, while in smaller ellipticals the X-ray emission is due only to X-ray binaries (e.g., Canizares et al. 1987).

Many of these differences between galaxies can be explained by different amounts of gas present during their formation and evolution, and in many models of galaxy formation the gas supply is a key factor (e.g., Kauffmann 1996). For example, the differences between boxy and disky galaxies, the importance of rotation vs. anisotropy, and the different central density distributions are possibly a consequence of the relative importance of gas. Obviously, since stars form from gas, the different star formation histories must be related to different gas contents during the evolution. Considering the relation between gas and these different properties, it is worthwhile to investigate the systematics of the H i properties of early-type galaxies.

## 3 H i content

Before discussing the H i properties of early-type galaxies, it is important to define which types of galaxies are considered and in which environment the galaxies are. Table 1 lists the detection rates for different kinds of early-type galaxies. A few things are evident from this table. First, only a small fraction of ‘pure’ elliptical galaxies have detectable amounts of H i. But as soon as the optical morphology shows some peculiarity, the probability of detecting H i increases dramatically. This result has often been interpreted to mean that the origin of the H i in elliptical galaxies is external (e.g. Knapp et al. 1985). It implies that if we investigate the characteristics of the H i in these galaxies (morphology and kinematics), one is considering a subset of the whole population of early-type galaxies, namely those for which it is likely that some interaction/accretion has occurred in the recent past, and it is important to keep this in mind. Nevertheless, it is important not to restrict samples to ‘pure’ ellipticals with no optical peculiarities, since the H i-rich galaxies may represent an important phase in the evolution of many early-type galaxies. The table does, however, suggest that there may be a second origin for the H i in early-type galaxies. The detection rate also depends strongly on how much stellar disk is present in a galaxy. This could imply that the presence of H i is not due to a recent accretion in all early-type galaxies. The fact that the H i content is related to the fundamental structure of a galaxy could suggest that some of the disky galaxies may have a long-lived ISM. It appears that many early-type galaxies, especially those in the field, indeed often have an ISM with similar characteristics to the ISM in spirals, the main difference being that early-type galaxies have less of it (see e.g., Knapp 1998). It is often stated that low-luminosity early-type galaxies are more likely to have H i.
This is usually based on a study by Lake and Schommer (1984), who observed a small sample of low-luminosity early-type galaxies and found that the detection rate of their sample was significantly higher than that of more luminous early-type galaxies as known at the time. However, when using the larger samples of H i data on early-type galaxies that are available now, the situation appears to be somewhat different from that suggested by Lake and Schommer. In figure 1 we show the detection rates of early-type galaxies as a function of luminosity, as derived from the compilation of data of Bregman et al. (1992). The figure shows that galaxies brighter than absolute magnitude $`-22`$ appear to be poorer in H i than galaxies fainter than this limit. However, for galaxies in the magnitude range $`-16`$ to $`-21.5`$, the detection rate appears to be reasonably flat. There is no strong evidence that low-luminosity galaxies (absolute magnitude between $`-16`$ and $`-19`$) are richer in H i than galaxies in the range $`-19`$ to $`-22`$. A similar conclusion was obtained by Knapp et al. (1985). It appears that a more correct statement about H i content would be that the most luminous galaxies are poor in H i. Galaxies fainter than $`M_B=-16`$ appear to have H i more often, although the number of galaxies for which data are available is small. It is, however, somewhat difficult to derive strong conclusions from the compilation of Bregman et al., because it consists of a mix of field and cluster galaxies, and differential environmental effects are possibly important.

## 4 H i Morphology and Kinematics

One interesting difference between low-luminosity galaxies ($`-16>M_B>-19`$) and more luminous galaxies ($`-19>M_B>-22`$) is the range of H i morphology and kinematics observed in the two groups. From our data, together with data in the literature (e.g. Lake et al. 1987), there are now about 10 H i data cubes available for low-luminosity early-type galaxies. Almost without exception, the H i in these galaxies is in a disk with a regular morphology and kinematics. In some galaxies there is evidence that (part of) the H i may have been accreted recently, but in several galaxies the structure of the H i is very regular and there is no evidence from the kinematics that a recent accretion has occurred. To illustrate this, in figure 2 we give the total H i image and the velocity field of the galaxy NGC 802 ($`M_B=-18`$). The H i in this galaxy is very centrally concentrated. The velocity field shows, however, that the gas is rotating around the optical major axis, as in polar-ring galaxies. This suggests that the H i in NGC 802 was accreted after the main stellar body had formed. Another low-luminosity galaxy that shows a very similar H i configuration is NGC 855 (Walsh et al. 1990). An example of a regular H i disk in a low-luminosity galaxy is given in figure 3, where we show the total H i image and a position-velocity map taken along the major axis of this galaxy. Also here, the H i is quite centrally concentrated. The position-velocity map in figure 3 shows that the H i in NGC 2328 is in a regularly rotating disk, aligned with the optical body. In contrast to the low-luminosity galaxies, the range in H i morphology in the more luminous galaxies ($`-19>M_B>-22`$) is much broader. For this group, in most galaxies the H i shows an irregular morphology, indicating that the gas is accreting onto the galaxy, or is left over from a recent merger event.
A good example of this is NGC 5266 (figure 4; Morganti et al. 1997a). This is a minor-axis dust-lane elliptical with a large amount of H i ($`\sim 10^{10}`$ $`M_{}`$, $`M_{\mathrm{HI}}/L_B\approx 0.2`$). Almost all the H i is in an elongated structure parallel to the optical major axis. Most of this gas is rotating in a reasonably regular fashion, although several subsystems can be identified that are not in stable circular rotation. Interestingly, this large-scale H i structure is perpendicular to the inner minor-axis dust lane, and some H i associated with this dust lane is also detected (a few percent of the H i mass). Clearly, NGC 5266 is a system where a large amount of H i has been accreted recently, or is a remnant of a recent merger, and the H i is still settling in the galaxy.

Interestingly, there are now a few luminous early-type galaxies known that do have very regular H i structures. A very good example is the E4 galaxy NGC 807 (figure 5). Deep H i observations reveal a low-surface-brightness H i disk that shows no signs of having been accreted recently. Figure 5 gives the position-velocity map of this H i disk, clearly showing its regular rotation. The evolution of this disk is very slow, and the disk can be quite old. Often, these regular H i structures have a depression or hole in the centre that is filled in by a disk of ionized gas with kinematics very similar to those of the H i disk. A good example of this is the dust-lane galaxy NGC 3108.

Another striking difference between the H i structures seen in low-luminosity early-type galaxies and those in more luminous galaxies is that the central surface brightnesses are quite different. In the low-luminosity galaxies, the H i is quite centrally concentrated, with central H i surface densities of at least 4 $`M_{}`$ pc⁻². These densities are high enough for star formation to occur on a reasonably large scale, and indeed star formation is observed in the centres of these galaxies. Outside the centre, the surface densities of the H i are below 1 $`M_{}`$ pc⁻², and perhaps only sporadic star formation can occur there. In contrast, the surface densities in the more luminous galaxies are much lower, even in the galaxies with a regular H i disk or ring. The peak surface densities are typically around 1 $`M_{}`$ pc⁻², too low for large-scale star formation to occur. Figure 6 shows the H i density profiles of a low-luminosity elliptical and of a more luminous E4 galaxy. The difference between these profiles is quite typical of what is observed in most galaxies.

## 5 Connection with other properties

The range of H i properties observed in early-type galaxies is quite large, but there appear to be a few systematic trends in the data, and in particular some of the H i properties may be connected to properties observed in other wavebands. Many low-luminosity early-type galaxies that have H i have this H i in a regularly rotating disk. In the optical, these galaxies also show a disky morphology and are rotationally supported. One possibility is that the observed H i disk is the normal gaseous counterpart of the stellar disk structure in these galaxies. The central surface densities of the H i are high enough for star formation to occur, and indeed star formation is observed in the centres of many of these galaxies, and the optical spectrum of the emission lines is that of H ii regions.
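As a rough numerical illustration of this surface-density argument, the sketch below compares two idealized H i profiles with a nominal star-formation threshold; only the ∼4 and ∼1 $`M_{}`$ pc⁻² values come from the text, while the exponential shapes and scale lengths are invented for illustration.

```python
import numpy as np

r_kpc = np.linspace(0.0, 12.0, 121)
sigma_lowlum = 6.0 * np.exp(-r_kpc / 1.0)   # centrally concentrated disk
sigma_lum = 1.0 * np.exp(-r_kpc / 8.0)      # extended low-density disk

SF_THRESHOLD = 4.0  # M_sun/pc^2, roughly what widespread star formation needs

for name, sig in (("low-luminosity", sigma_lowlum), ("luminous", sigma_lum)):
    r_sf = r_kpc[sig >= SF_THRESHOLD]
    if r_sf.size:
        print(f"{name}: above threshold out to ~{r_sf.max():.1f} kpc")
    else:
        print(f"{name}: never reaches the star-formation threshold")
```

Under these assumed profiles, only the centrally concentrated disk crosses the threshold, mirroring the qualitative contrast described above.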
The higher densities of the H i could also be related to the steeper cores that are observed in low-luminosity galaxies, although other mechanisms could also be responsible for that. It appears that the H i properties of low-luminosity early-type galaxies fit in with the other properties of these galaxies, indicating that the ISM in these galaxies has played an important role in determining their structure.

The regular H i disks/rings observed in the more luminous galaxies could be similar in origin and character to those observed in the smaller galaxies (e.g. Morganti et al. 1998b), except that some mechanism must be responsible for keeping the central surface density of the H i low. A clue to this mechanism could be that the centres of these regular structures are often filled in by a disk of ionized gas that shows similar kinematics. The conditions appear to be such that high H i surface densities cannot build up because the H i in the centre gets ionized. This could be connected to the fact that more luminous early-type galaxies often have a halo of hot gas that could interact with the H i and ionize it (e.g. Goudfrooij 1998). This could also explain the different excitation of the optical gas that is observed.

In a few low-luminosity galaxies there is still evidence that the H i was accreted recently, a process that appears to occur more often in the more luminous galaxies. The different characteristics of the H i in these galaxies suggest that the accretion of H i proceeds in a different way in low-luminosity galaxies compared to the more luminous ones. In low-luminosity galaxies, accretion appears to result in a more regular H i structure. This difference could also be due to interactions with a halo of hot gas playing a role in luminous galaxies. If H i falls into a more luminous galaxy, it could get partially ionized and may not have time to settle into a disk-like structure. In NGC 4696 such an interaction could be occurring (Sparks et al. 1989; de Jong et al. 1990), although this galaxy is in a cluster and may not be representative of the galaxies we have studied. Another factor affecting the way gas is accreted could be the environment. Several of the more luminous galaxies we studied are in small groups of galaxies, while the low-luminosity galaxies are more isolated. Interactions and accretions are of course more common in small groups, so less relaxed H i structures should be expected there. Also, the luminous galaxies with regular H i structures tend to be more isolated, consistent with the idea that environment plays an important role in the evolution of H i in early-type galaxies.

Acknowledgements. The optical images shown in this paper are taken from the Digital Sky Survey. These images are based on photographic data obtained using the UK Schmidt Telescope. The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright (c) the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166.

## References

Bender, R. 1996, in New Light on Galaxy Evolution, IAU Symp. 171, eds. R. Bender, R. Davies, Kluwer, p. 181
Bregman, J.N., Hogg, D.E., Roberts, M.S. 1992, ApJ, 387, 484
Canizares, C.R., Fabbiano, G., Trinchieri, G. 1987, ApJ, 312, 503
Davies, R.L., Efstathiou, G., Fall, S.M., Illingworth, G.D., Schechter, P. 1982, ApJ, 266, 41
de Jong, T., Norgaard-Nielsen, H.U., Jorgensen, H.E., Hansen, L. 1990, A&A, 232, 317
de Jong, R., Davies, R. 1997, MNRAS, 285, 1
Goudfrooij, P. 1998, in Star Formation in Early-Type Galaxies, eds. P. Carral, J. Cepa, ASP Conf. Proc., in press (astro-ph/980957)
Kauffmann, G. 1996, MNRAS, 281, 487
Knapp, G.R., Turner, E.L., Cunniffe, P.E. 1985, AJ, 90, 54
Knapp, G.R. 1998, in Star Formation in Early-Type Galaxies, eds. P. Carral, J. Cepa, ASP Conf. Proc., in press (astro-ph/9808266)
Lake, G., Schommer, R.A. 1984, ApJ, 280, 107
Lake, G., Schommer, R.A., van Gorkom, J.H. 1987, ApJ, 314, 57
Lauer, T. 1997, in The Second Stromlo Symposium: The Nature of Elliptical Galaxies, eds. M. Arnaboldi, G.S. Da Costa, P. Saha, ASP Conf. Ser. Vol. 116, p. 113
Morganti, R., Sadler, E., Oosterloo, T., Pizzella, A., Bertola, F. 1997a, AJ, 113, 937
Morganti, R., Sadler, E., Oosterloo, T. 1997b, in The Second Stromlo Symposium: The Nature of Elliptical Galaxies, eds. M. Arnaboldi, G.S. Da Costa, P. Saha, ASP Conf. Ser. Vol. 116, p. 354
Morganti, R., Oosterloo, T., Tsvetanov, Z. 1998a, AJ, 115, 915
Morganti, R., Oosterloo, T., Sadler, E.M., Vergani, D. 1998b, in Star Formation in Early-Type Galaxies, ASP Conf. Ser., in press
Sadler, E.M. 1987, in Structure and Dynamics of Elliptical Galaxies, IAU Symp. 127, p. 125
Sparks, W.B., Macchetto, F., Golombek, D. 1989, ApJ, 345, 153
Walsh, D.E.P., van Gorkom, J.H., Bies, W.E., Katz, N., Knapp, G.R., Wallington, S. 1990, ApJ, 352, 532
# A note on Farey sequences and Hausdorff dimension

## Abstract

We prove that the Farey sequences can be expressed in equivalence classes labeled by a fractal parameter which looks like a Hausdorff dimension $`h`$ defined within the interval $`1<h<2`$. The classes $`h`$ satisfy the same properties as the Farey series, and for each value of $`h`$ there exists an algebraic equation.

From considerations about the Fractional Quantum Hall Effect (FQHE) we have found a connection between a fractal parameter or Hausdorff dimension $`h`$ and the Farey series. Thus, we have the following theorem: The elements of the Farey series belong to distinct equivalence classes labeled by a fractal parameter $`h`$ defined in the interval $`1<h<2`$, such that these classes satisfy the same properties observed for those fractions. Also, for each value of $`h`$ there exists an algebraic equation . The fractal parameter $`h`$ is related to $`\nu `$ (an irreducible number $`\frac{p}{q}`$, with $`p`$ and $`q`$ integers) as follows

$`h-1=1-\nu ,\ \ 0<\nu <1;\qquad h-1=\nu -1,\ \ 1<\nu <2;`$ (1)

$`h-1=3-\nu ,\ \ 2<\nu <3;\qquad h-1=\nu -3,\ \ 3<\nu <4;`$ (2)

$`h-1=5-\nu ,\ \ 4<\nu <5;\qquad h-1=\nu -5,\ \ 5<\nu <6;`$ (3)

$`h-1=7-\nu ,\ \ 6<\nu <7;\qquad h-1=\nu -7,\ \ 7<\nu <8;`$ (4)

$`h-1=9-\nu ,\ \ 8<\nu <9;\qquad h-1=\nu -9,\ \ 9<\nu <10;`$ (5)

$`etc.`$ (6)

We can extract, for example, the classes

$`\{\frac{1}{3},\frac{5}{3},\frac{7}{3},\frac{11}{3},\dots \}_{h=\frac{5}{3}},\ \{\frac{5}{14},\frac{23}{14},\frac{33}{14},\frac{51}{14},\dots \}_{h=\frac{23}{14}};`$ (7)

$`\{\frac{4}{11},\frac{18}{11},\frac{26}{11},\frac{40}{11},\dots \}_{h=\frac{18}{11}},\ \{\frac{7}{19},\frac{31}{19},\frac{45}{19},\frac{69}{19},\dots \}_{h=\frac{31}{19}};`$ (8)

$`\{\frac{10}{27},\frac{44}{27},\frac{64}{27},\frac{98}{27},\dots \}_{h=\frac{44}{27}},\ \{\frac{3}{8},\frac{13}{8},\frac{19}{8},\frac{29}{8},\dots \}_{h=\frac{13}{8}};`$ (9)

$`\{\frac{3}{7},\frac{11}{7},\frac{17}{7},\frac{25}{7},\dots \}_{h=\frac{11}{7}},\ \{\frac{4}{9},\frac{14}{9},\frac{22}{9},\frac{32}{9},\dots \}_{h=\frac{14}{9}},`$ (10)

$`\{\frac{5}{11},\frac{17}{11},\frac{27}{11},\frac{39}{11},\dots \}_{h=\frac{17}{11}},\ \{\frac{6}{13},\frac{20}{13},\frac{32}{13},\frac{46}{13},\dots \}_{h=\frac{20}{13}},`$ (11)

$`\{\frac{2}{5},\frac{8}{5},\frac{12}{5},\frac{18}{5},\dots \}_{h=\frac{8}{5}},`$ (12)

and from these we can consider the series $`(h,\nu )`$

$`(\frac{5}{3},\frac{1}{3})(\frac{18}{11},\frac{4}{11})(\frac{13}{8},\frac{3}{8})(\frac{8}{5},\frac{2}{5})`$ (13)

$`(\frac{11}{7},\frac{3}{7})(\frac{14}{9},\frac{4}{9})(\frac{17}{11},\frac{5}{11})(\frac{20}{13},\frac{6}{13})\dots `$ (14)

The classes $`h`$ satisfy all the properties of the Farey series:

P1. If $`h_1=\frac{p_1}{q_1}`$ and $`h_2=\frac{p_2}{q_2}`$ are two consecutive fractions $`\frac{p_1}{q_1}>\frac{p_2}{q_2}`$, then $`|p_2q_1-q_2p_1|=1`$.

P2. If $`\frac{p_1}{q_1}`$, $`\frac{p_2}{q_2}`$, $`\frac{p_3}{q_3}`$ are three consecutive fractions $`\frac{p_1}{q_1}>\frac{p_2}{q_2}>\frac{p_3}{q_3}`$, then $`\frac{p_2}{q_2}=\frac{p_1+p_3}{q_1+q_3}`$.

P3. If $`\frac{p_1}{q_1}`$ and $`\frac{p_2}{q_2}`$ are consecutive fractions in the same sequence, then among all fractions between the two, $`\frac{p_1+p_2}{q_1+q_2}`$ is the unique reduced fraction with the smallest denominator.

For more details about the Farey series see . All these properties can be verified for the classes considered above as an example. Another example is

$`(h,\nu )=(\frac{11}{6},\frac{1}{6})(\frac{9}{5},\frac{1}{5})(\frac{7}{4},\frac{1}{4})(\frac{5}{3},\frac{1}{3})(\frac{8}{5},\frac{2}{5})(\frac{3}{2},\frac{1}{2})(\frac{7}{5},\frac{3}{5})(\frac{4}{3},\frac{2}{3})(\frac{5}{4},\frac{3}{4})(\frac{6}{5},\frac{4}{5})(\frac{7}{6},\frac{5}{6})\dots ,`$ (17)

where the $`\nu `$ sequence is the Farey series of order $`6`$. Thus, we observe that because of the fractal spectrum (Eq. 1) we can write down any Farey series of rational numbers.

In summary, we have extracted from considerations about the FQHE a beautiful connection between number theory and physics. We have shown that the Farey series can be arranged into equivalence classes labeled by a fractal parameter $`h`$ which looks like a Hausdorff dimension. For each value of $`h`$ we have an algebraic equation derived from the functional equation

$$\xi =\left\{𝒴(\xi )-1\right\}^{h-1}\left\{𝒴(\xi )-2\right\}^{2-h},$$ (18)

where $`\xi `$ is an exponential function. Thus, there exists a relation between algebraic equations and the Farey series. The connection between a geometric parameter related to the paths of particles (a charge-flux system called fractons, identified with holes of a multiply connected space) defined in the context of a two-dimensional physical system and rational numbers deserves deeper investigation.
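The statements above are easy to verify mechanically. The following is a minimal sketch (Python) that builds the Farey sequence of order 6, implements the spectrum (1)-(6) through a helper `h_class` (a name used only in this sketch), and checks P1, P2 and the $`h=\frac{5}{3}`$ class:

```python
from fractions import Fraction

def farey(n):
    """Farey fractions of order n, strictly between 0 and 1, ascending."""
    return sorted({Fraction(p, q) for q in range(2, n + 1)
                                  for p in range(1, q)})

def h_class(nu):
    """Fractal parameter h via h - 1 = |nu - (2j - 1)| on 2j-2 < nu < 2j,
    which reproduces Eqs. (1)-(6)."""
    j = int(nu) // 2 + 1
    return 1 + abs(nu - (2 * j - 1))

F6 = farey(6)
print(" ".join(str(f) for f in F6))   # 1/6 1/5 1/4 1/3 2/5 1/2 3/5 ...

# P1: for neighbours p1/q1 < p2/q2 in the series, |p2*q1 - q2*p1| = 1
assert all(abs(b.numerator * a.denominator - b.denominator * a.numerator) == 1
           for a, b in zip(F6, F6[1:]))

# P2: the middle of three consecutive fractions is the mediant
assert all(b == Fraction(a.numerator + c.numerator,
                         a.denominator + c.denominator)
           for a, b, c in zip(F6, F6[1:], F6[2:]))

# the class h = 5/3 collects nu = 1/3, 5/3, 7/3, 11/3, ...
assert {h_class(Fraction(k, 3)) for k in (1, 5, 7, 11)} == {Fraction(5, 3)}
```

The same checks pass for the other classes listed in (7)-(12).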
# Magnetic ordering in layered high temperature superconductors

## Abstract

The scenario of two-step magnetic ordering in layered HTS after charge ordering is discussed. With decreasing temperature a transition from 3D Heisenberg spin behavior to 2D XY coupling of the Cu spins occurs in the di-stripes. Further decrease of the temperature leads to the spin glass transition at $`T_g`$.

G.G. Sergeeva

Kharkov Institute of Physics and Technology, Academicheskaya 1, 310108, Kharkov, Ukraine

Keywords: charge ordering; magnetic ordering; hole concentration in $`CuO_2`$ planes; 2D XY model; quasi-2D XY model; 3D spin glass transition

1. INTRODUCTION

The conclusion about the decisive role of fluctuational antiferromagnetic (AFM) excitations with 3D Heisenberg spin behavior in HTS has by now been reliably substantiated. The opening of a "spin pseudogap" on the Fermi surface was at first connected with strong AFM correlations. But later the propositions that the formation of the pseudogap as well as superconductivity are associated with charge ordering (a dynamic analog of phase separation) were discussed (see reviews ) and experimentally confirmed . A simple example of charge ordering was observed in Bi2212, where in the Cu-O plane metallic (met-) stripes with orthorhombic structure and stripes of the dielectric (di-) tetragonal phase with short range AFM correlations were found at $`T<T_{ch}`$ ($`T_{ch}`$ is the temperature of charge ordering) . In spite of the substantial anisotropy of the exchange constants of the in-plane ($`J_0`$) and inter-plane ($`J_1`$) interactions, $`J_0/J_1>10^3`$, strong AFM fluctuations in layered HTS prevent 2D Heisenberg ordering in the di-stripes. In this paper the scenario of two-step magnetic ordering in layered HTS is discussed, based on the known properties of the 2D XY model [6-7]. It is shown that after charge ordering in the Cu-O planes a transition from 3D Heisenberg spin behavior to 2D XY coupling of the Cu spins occurs at $`T=T_{BKT}=T_{sp}<T_{ch}`$ in the di-stripes, where $`T_{BKT}`$ is the Berezinskii-Kosterlitz-Thouless temperature . Further decrease of the temperature leads to the spin glass transition at $`T_g`$, which was predicted for the quasi-2D XY model in Ref. 8.

2. RESULTS

It is known that in the 2D XY model a phase transition occurs which leads to the formation of a phase with power-law decay of the in-plane correlations, and to peculiarities of the temperature dependencies of the resistivity, magnetic susceptibility and specific heat at $`T<T_{sp}`$ . For layered systems the order parameter of the quasi-2D XY model is $`q=0`$ at $`T>T_{sp}`$ and $`q\sim J_1^{\mathrm{\Delta }/(2-\mathrm{\Delta })}`$ at $`T<T_{sp}`$, where $`\mathrm{\Delta }`$ is the scaling dimensionality . If $`p_{cr}`$ is the hole concentration in the $`CuO_2`$ planes at which the compound becomes a superconductor, and if the effective hole concentration in the di-stripes $`p_{sh}^{*}<p_{cr}`$, the temperature of spin ordering $`T_{sp}`$ may be large enough, $`T_{sp}(p_{sh})\sim T_N(p_{sh}^{*})`$ (here $`p_{sh}`$ is the hole concentration in the $`CuO_2`$ planes and $`T_N`$ is the Neel temperature). Taking into account that the exchange constant $`J_1\ll J_0`$, as well as that the 2D XY spin ordering at $`T_{sp}`$ occurs independently in each di-stripe, the order parameter $`q\ll 1`$ and the sample remains non-magnetized. But after the application of a magnetic field a weak net magnetization can be observed.
A further decrease of the temperature can lead to a 3D spin glass transition at $`T_g\ll T_{sp}`$, if each di-stripe is considered as a 2D XY model with random anisotropy and if the coupling between the layers has the special form $`J_1\mathrm{cos}n(\mathrm{\Theta }_{i+1}-\mathrm{\Theta }_i)`$ ($`\mathrm{\Theta }_i`$ is the angle which determines the direction of the spin in the $`i`$-th plane, $`n`$ is the anisotropy order). Such a transition was predicted for the quasi-2D XY model and was observed for HTS in $`\mu `$SR measurements .

3. DISCUSSION

For magnetic compounds with a high anisotropy of the exchange constants the two-step spin ordering is well known, and peculiarities of the temperature dependencies of the resistivity, magnetic susceptibility and specific heat at the temperature of the 2D XY ordering were observed. The first indirect evidence that spins may cross over to 2D XY-like behavior at $`T\sim 20`$ K was observed in neutron measurements of a single crystal of $`La_{2-x}Sr_xCuO_4`$ which is in the intermediate region of $`x=0.04`$, with neither long range AFM order nor superconductivity . There is ample experimental evidence of two-step magnetic ordering in compounds with $`p_{sh}<p_{cr}`$, obtained in neutron and magnetic measurements and $`\mu `$SR studies (see Ref. 10 and references therein). The magnetic phase diagram as a function of $`p_{sh}`$ for $`La_{2-x}Sr_xCuO_4`$ and $`Y_{1-x}Ca_xBa_2Cu_3O_{6.02}`$ ($`0<x<0.11`$) was obtained in $`\mu `$SR measurements . The coexistence of the spin glass state and the superconducting state for hole concentrations $`0.06<p_{sh}<0.10`$ was found. The observations of oscillations of the attenuation decrement after the application of a magnetic field, and of a residual magnetization at 300 K, in Bi2223 ceramics with 30% Bi2212 are indirect evidence of 2D XY magnetic ordering and of a weak net magnetization of the di-stripes (see Ref. 11 and references therein). It would be interesting to carry out measurements of $`T_{sp}`$ for $`p_{sh}>p_{cr}`$ such as were performed for $`p_{sh}<p_{cr}`$. These measurements of the temperature of 2D XY magnetic ordering seem to be a very important and experimentally feasible step in studies of the nature of HTS.

References

1. V. Barzykin and D. Pines, Phys. Rev. B 52, 13585 (1995).
2. S.G. Ovchinnikov, Usp. Fiz. Nauk 167, 1042 (1997); G.G. Sergeeva et al., Fiz. Nizk. Temp. 24, 1029 (1998) [Low Temp. Phys. 24, 771 (1998)].
3. A.A. Zakharov et al., Physica C 223, 157 (1994).
4. Ch. Niedermayer et al., Phys. Rev. Lett. 80, 3843 (1998).
5. A. Bianconi and M. Missori, J. Phys. I (Paris) 4, 361 (1994).
6. V.L. Berezinskii, JETP 59, 907 (1970); ibid. 61, 1144 (1971); J.M. Kosterlitz, D.J. Thouless, J. Phys. C 6, 1181 (1973).
7. V.L. Berezinskii, A.Ya. Blank, JETP 64, 723 (1973).
8. Vik. Dotsenko and M.V. Feigelman, JETP 83, 345 (1982).
9. B. Keimer et al., Phys. Rev. B 46, 14034 (1992).
10. F.C. Chou et al., Phys. Rev. Lett. 75, 2204 (1995).
11. B.G. Lazarev et al., Fiz. Nizk. Temp. 22, 819 (1996) [Low Temp. Phys. 22, 4629 (1996)].
# Semiclassical Calculation of Transition Matrix Elements for Atoms in External Fields

## Abstract

Closed orbit theory is generalized to the semiclassical calculation of cross-correlated recurrence functions for atoms in external fields. The cross-correlation functions are inverted by a high resolution spectral analyzer to obtain the semiclassical eigenenergies and transition matrix elements. The method is demonstrated for dipole transitions of the hydrogen atom in a magnetic field. This is the first semiclassical calculation of individual quantum transition strengths from closed orbit theory.

In 1925, Heisenberg pushed open the door to quantum mechanics when he proposed a quantum theory consisting only of "in principle observable" quantities – matrix elements $`A(n,m)`$ –, whose physical meaning was hinged upon the correspondence principle: For high quantum numbers $`n`$ and $`|n-m|\ll n`$ the matrix elements were to turn into the Fourier amplitudes of the corresponding physical observable $`A(t)`$ associated with a classical periodic orbit, viz. $`A(n,m)\mathrm{exp}(i(E(n)-E(m))t/\hbar )\to A_\tau (n)\mathrm{exp}(i\tau \omega (n))`$, with $`\tau =n-m`$ and $`\omega (n)`$ the frequency of the periodic orbit. By a consistent application of the translation rules to the quantization condition for actions, $`\mathrm{\Delta }S/\mathrm{\Delta }n=h`$, he arrived at the canonical commutation relations, underlying the whole of quantum mechanics. Establishing the connection between the information encoded in classical orbits and quantum mechanical transition amplitudes thus proved to be one of the fundamental questions of quantum physics. The correspondence principle is silent about transition amplitudes involving low-lying states. It was, therefore, a great success when, more than sixty years later, closed orbit theory – a variant of periodic orbit theory – came up with the discovery that there exists an intimate connection between classical orbits and transition amplitudes even in cases where one of the states lies in the deep quantum regime. In closed orbit theory, developed by Du and Delos and Bogomolny , the transition amplitudes are given as the sum of two terms, one a smoothly varying part (as a function of energy) and the other a superposition of sinusoidal modulations. The frequencies, amplitudes, and phases of the modulations are obtained directly from information contained in the closed classical orbits. When the resulting transition amplitudes are Fourier transformed, the sinusoidal modulations produce sharp peaks in the Fourier transform recurrence spectra. Closed orbit theory has been applied to the interpretation of photoabsorption spectra of atoms in external fields and has been most successful in explaining the quantum mechanical recurrence spectra qualitatively and even quantitatively in terms of the closed orbits of the underlying classical system . However, up to now practical applications of closed orbit theory have always been restricted to the semiclassical calculation of low resolution spectra, for two reasons. Firstly, the closed orbit sum requires, in principle, the knowledge of all orbits up to infinite length, which are not normally available from a numerical closed orbit search, and, secondly, the infinite closed orbit sum suffers from fundamental convergence problems .
It is therefore usually believed that the calculation of individual transition matrix elements, e.g., of the dipole operator $`D`$, $`\langle \varphi _i|D|\psi _f\rangle `$, which describe the transition strengths from the initial state $`|\varphi _i\rangle `$ to final states $`|\psi _f\rangle `$, is a problem beyond the applicability of semiclassical closed orbit theory, i.e., the domain of quantum mechanical methods. It is the purpose of this Paper to demonstrate that high-precision quantum transition amplitudes between low-lying and highly excited states can be obtained within the framework of closed orbit theory using solely the information contained in closed classical orbits. In this way we can establish a connection between classical orbits and quantum mechanical matrix elements that goes far beyond what was conceived of in the early days of quantum mechanics. To that end, we slightly generalize closed orbit theory to the semiclassical calculation of cross-correlated recurrence functions. We then adopt the method of Refs. to harmonically invert the cross-correlated recurrence signal and to extract the semiclassical eigenenergies and transition matrix elements. Results will be presented for the photo excitation of the hydrogen atom in a magnetic field. The oscillator strength $`f`$ for the photo excitation of atoms in external fields can be written as

$$f(E)=-\frac{2}{\pi }(E-E_i)\mathrm{Im}\langle \varphi _i|DG_E^+D|\varphi _i\rangle ,$$ (1)

where $`|\varphi _i\rangle `$ is the initial state at energy $`E_i`$, $`D`$ is the dipole operator, and $`G_E^+`$ the retarded Green's function of the atomic system. The basic steps in the derivation of closed orbit theory are to replace the quantum mechanical Green's function in (1) with its semiclassical Van Vleck-Gutzwiller approximation and to carry out the overlap integrals with the initial state $`|\varphi _i\rangle `$. Here we go one step further by introducing a cross-correlation matrix

$$g_{\alpha \alpha ^{\prime }}=\langle \varphi _\alpha |DG_E^+D|\varphi _{\alpha ^{\prime }}\rangle $$ (2)

with $`|\varphi _\alpha \rangle `$, $`\alpha =1,2,\mathrm{},L`$ a set of independent initial states. As will be shown below, the use of cross-correlation matrices can considerably improve the convergence properties of the semiclassical procedure. In the following we will concentrate on the hydrogen atom in a magnetic field (for reviews see ) with $`\gamma =B/(2.35\times 10^5\mathrm{T})`$ the magnetic field strength in atomic units. The system has a scaling property, i.e., the shape of periodic orbits does not depend on the scaling parameter, $`w=\gamma ^{-1/3}=\hbar _{\mathrm{eff}}^{-1}`$, and the classical action scales as $`S=sw`$, with $`s`$ the scaled action. As, e.g., in Ref. , we consider scaled photoabsorption spectra at constant scaled energy $`\stackrel{~}{E}=E\gamma ^{-2/3}`$ as a function of the scaling parameter $`w`$. We choose dipole transitions between states with magnetic quantum number $`m=0`$. Note that the following ideas can be applied in an analogous way to atoms in electric fields. Following the derivation of Refs. , the semiclassical approximation to the fluctuating part of $`g_{\alpha \alpha ^{\prime }}`$ in Eq.
2 reads

$`g_{\alpha \alpha ^{\prime }}^{\mathrm{sc}}(w)=w^{1/2}\underset{\mathrm{co}}{\sum }\frac{(2\pi )^{5/2}}{\sqrt{|m_{12}^{\mathrm{co}}|}}\sqrt{\mathrm{sin}\vartheta _i^{\mathrm{co}}\mathrm{sin}\vartheta _f^{\mathrm{co}}}`$ (3)

$`\times 𝒴_\alpha (\vartheta _i^{\mathrm{co}})𝒴_{\alpha ^{\prime }}(\vartheta _f^{\mathrm{co}})e^{i\left(s_{\mathrm{co}}w-\frac{\pi }{2}\mu _{\mathrm{co}}+\frac{\pi }{4}\right)},`$ (4)

with $`s_{\mathrm{co}}`$ and $`\mu _{\mathrm{co}}`$ the scaled action and Maslov index of the closed orbit (co), $`m_{12}^{\mathrm{co}}`$ an element of the monodromy matrix, and $`\vartheta _i^{\mathrm{co}}`$ and $`\vartheta _f^{\mathrm{co}}`$ the initial and final angle of the trajectory with respect to the magnetic field axis. The angular functions $`𝒴_\alpha (\vartheta )`$ depend on the states $`|\varphi _\alpha \rangle `$ and the dipole operator $`D`$, and are given as a linear superposition of Legendre polynomials, $`𝒴_\alpha (\vartheta )=\sum _lc_{l\alpha }P_l(\mathrm{cos}\vartheta )`$. For low-lying initial states with principal quantum number $`n`$ only a few coefficients $`c_{l\alpha }`$ with $`l\le n`$ are nonzero. Explicit formulas for the calculation of the coefficients can be found in Refs. . The problem now is to extract the semiclassical eigenenergies and transition matrix elements from Eq. 4, because the closed orbit sum does not converge. We therefore adopt the idea of Ref. , where we proposed to adjust the Fourier transform of a non-convergent Dirichlet series like the semiclassical expression (4) to the functional form of its quantum mechanical analogue. The Fourier transform of $`w^{-1/2}g_{\alpha \alpha ^{\prime }}^{\mathrm{sc}}(w)`$ yields the cross-correlated recurrence signals

$$C_{\alpha \alpha ^{\prime }}^{\mathrm{sc}}(s)=\underset{\mathrm{co}}{\sum }𝒜_{\alpha \alpha ^{\prime }}^{\mathrm{co}}\delta (s-s_{\mathrm{co}}),$$ (5)

with the amplitudes

$`𝒜_{\alpha \alpha ^{\prime }}^{\mathrm{co}}=\frac{(2\pi )^{5/2}}{\sqrt{|m_{12}^{\mathrm{co}}|}}\sqrt{\mathrm{sin}\vartheta _i^{\mathrm{co}}\mathrm{sin}\vartheta _f^{\mathrm{co}}}`$ (6)

$`\times 𝒴_\alpha (\vartheta _i^{\mathrm{co}})𝒴_{\alpha ^{\prime }}(\vartheta _f^{\mathrm{co}})e^{i\left(-\frac{\pi }{2}\mu _{\mathrm{co}}+\frac{\pi }{4}\right)}`$ (7)

being determined exclusively by closed orbit quantities. The corresponding quantum mechanical cross-correlated recurrence functions, i.e., the Fourier transforms of $`w^{-1/2}g_{\alpha \alpha ^{\prime }}^{\mathrm{qm}}(w)`$, read

$$C_{\alpha \alpha ^{\prime }}^{\mathrm{qm}}(s)=-i\underset{k}{\sum }b_{\alpha k}b_{\alpha ^{\prime }k}e^{-iw_ks},$$ (8)

with $`w_k`$ the eigenvalues of the scaling parameter, and

$$b_{\alpha k}=w_k^{-1/4}\langle \varphi _\alpha |D|\psi _k\rangle $$ (9)

proportional to the transition matrix element for the transition from the initial state $`|\varphi _\alpha \rangle `$ to the final state $`|\psi _k\rangle `$. The method to adjust (5) to (8) for fixed states $`|\varphi _\alpha \rangle `$ and $`|\varphi _{\alpha ^{\prime }}\rangle `$ is that of harmonic inversion, as discussed in Ref. . However, information theoretical considerations then yield an estimate for the required signal length, $`s_{\mathrm{max}}\sim 4\pi \overline{\varrho }(w)`$ , which may result in an unfavorable scaling because of the rapid proliferation of closed orbits with increasing period. Moreover, it is a special problem to resolve nearly degenerate states and to detect states with very low transition strengths from the harmonic inversion of a single function $`C_{\alpha \alpha ^{\prime }}^{\mathrm{sc}}(s)`$. Note also that the element of the monodromy matrix $`m_{12}`$ and the values of the angular functions $`𝒴_\alpha (\vartheta _i)`$ and $`𝒴_{\alpha ^{\prime }}(\vartheta _f)`$ are intrinsically intertwined in Eq. 7 for the amplitudes $`𝒜_{\alpha \alpha ^{\prime }}^{\mathrm{co}}`$, i.e., a single function $`C_{\alpha \alpha ^{\prime }}^{\mathrm{sc}}(s)`$ does not contain independently the information from the monodromy matrix and the starting and returning angles $`\vartheta _i`$ and $`\vartheta _f`$ of the closed orbits. In this Paper we therefore propose to apply an extension of the method to the harmonic inversion of cross-correlation functions , which has recently also served as a powerful tool for the semiclassical calculation of tunneling splittings . The idea is that the informational content of an $`L\times L`$ time signal is increased roughly by a factor of $`L`$ as compared to a $`1\times 1`$ signal. The additional information is gained with the set of linearly independent angular functions $`𝒴_\alpha (\vartheta )`$, $`\alpha =1,2,\mathrm{},L`$ in Eq. 7, evaluated at the starting and returning angles $`\vartheta _i`$ and $`\vartheta _f`$ of the closed orbits. Note that the cross-correlation matrix (5) is constructed by using independently the information of the closed orbit quantities, i.e., the elements $`m_{12}`$ of the monodromy matrix and the angles $`\vartheta _i`$ and $`\vartheta _f`$. For a given number of closed orbits the accuracy of semiclassical spectra can be significantly improved with the help of the cross-correlation approach, or, alternatively, spectra of similar accuracy can be obtained from a closed orbit cross-correlation signal with a significantly reduced signal length. Here we give only a qualitative and brief description of the method. The details of the numerical procedure for solving the generalized harmonic inversion problem (8) have been presented in Refs. . The idea is to recast the nonlinear fit problem as a linear algebraic problem . This is accomplished by associating the signal $`C_{\alpha \alpha ^{\prime }}(s)`$ (to be inverted) with a time cross-correlation function between an initial state $`\mathrm{\Phi }_\alpha `$ and a final state $`\mathrm{\Phi }_{\alpha ^{\prime }}`$,

$$C_{\alpha \alpha ^{\prime }}(s)=\langle \mathrm{\Phi }_{\alpha ^{\prime }}|e^{-is\widehat{H}_{\mathrm{eff}}}|\mathrm{\Phi }_\alpha \rangle ,$$ (10)

where the fictitious quantum dynamical system is described by an effective Hamiltonian $`\widehat{H}_{\mathrm{eff}}`$. The latter is defined implicitly by relating its spectrum to the set of unknown spectral parameters $`w_k`$ and $`b_{\alpha k}`$. Diagonalization of $`\widehat{H}_{\mathrm{eff}}`$ would yield the desired $`w_k`$ and $`b_{\alpha k}`$. This is done by introducing an appropriate basis set in which the matrix elements of $`\widehat{H}_{\mathrm{eff}}`$ are available only in terms of the known signals $`C_{\alpha \alpha ^{\prime }}(s)`$. The Hamiltonian $`\widehat{H}_{\mathrm{eff}}`$ is assumed to be complex symmetric, even in the case of a bound system, which makes the harmonic inversion stable with respect to "noise" due to the imperfections of the semiclassical approximation. We now demonstrate the method of harmonic inversion of the cross-correlated closed orbit recurrence functions (5) for the example of the hydrogen atom in a magnetic field at constant scaled energy $`\stackrel{~}{E}=-0.7`$. This energy was also chosen for detailed experimental investigations of the helium atom .
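Before turning to the physical example, here is a minimal numerical illustration of the inversion step. It is not the filter-diagonalization code of the references above, but a simple matrix-pencil variant of the same linear-algebraic idea: it extracts the $`w_k`$ and amplitudes from a signal of the form (8). All input numbers (frequencies, amplitudes, grid) are invented for the test.

```python
import numpy as np

def harmonic_inversion(c, ds, K):
    """Fit c[n] = sum_k a_k z_k**n with z_k = exp(-1j*w_k*ds) by a
    matrix-pencil method; returns K frequencies w_k and amplitudes a_k."""
    N = len(c)
    L = N // 3                                  # pencil parameter
    # Hankel data matrix whose rows are time-shifted windows of the signal
    Y = np.array([c[i:i + N - L] for i in range(L + 1)])
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :K]
    # shift invariance of the signal subspace: U[1:] ~ U[:-1] @ F
    F = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
    zk = np.linalg.eigvals(F)
    wk = 1j * np.log(zk) / ds                   # from z_k = exp(-i w_k ds)
    # amplitudes from a linear least-squares fit to the whole signal
    M = zk[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    ak = np.linalg.lstsq(M, c, rcond=None)[0]
    return wk, ak

# synthetic test signal of the form (8): C(s) = -i sum_k b_k^2 exp(-i w_k s)
w_true = np.array([16.37, 16.42, 17.80])        # invented 'eigenvalues'
b_true = np.array([0.9, 0.2, 0.6])              # invented 'amplitudes'
ds = 0.05
s = np.arange(0, 60, ds)
C = -1j * (b_true**2 * np.exp(-1j * np.outer(s, w_true))).sum(axis=1)

wk, ak = harmonic_inversion(C, ds, 3)
order = np.argsort(wk.real)
print(np.round(wk[order].real, 3))                # -> [16.37 16.42 17.8]
print(np.round(np.sqrt(1j * ak[order]).real, 3))  # -> [0.9  0.2  0.6]
```

The two closely spaced test frequencies are separated here from a signal shorter than the Fourier uncertainty limit would require, which is the essential advantage exploited in the text; the cross-correlation extension stacks several such signals into one generalized eigenvalue problem.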
Note also that the element of the monodromy matrix $`m_{12}`$ and the values of the angular functions $`𝒴_\alpha (\vartheta _i)`$ and $`𝒴_\alpha ^{}(\vartheta _f)`$ are intrinsically intertwined in Eq. 7 for the amplitudes $`𝒜_{\alpha \alpha ^{}}^{\mathrm{co}}`$, i.e., a single function $`C_{\alpha \alpha ^{}}^{\mathrm{sc}}(s)`$ does not contain independently the information from the monodromy matrix and the starting and returning angles $`\vartheta _i`$ and $`\vartheta _f`$ of the closed orbits. In this Paper we therefore propose to apply an extension of the method to the harmonic inversion of cross-correlation functions , which has recently also served as a powerful tool for the semiclassical calculation of tunneling splittings . The idea is that the informational content of an $`L\times L`$ time signal is increased roughly by a factor of $`L`$ as compared to a $`1\times 1`$ signal. The additional information is gained with the set of linearly independent angular functions $`𝒴_\alpha (\vartheta )`$, $`\alpha =1,2,\mathrm{}L`$ in Eq. 7 evaluated at the starting and returning angles $`\vartheta _i`$ and $`\vartheta _f`$ of the closed orbits. Note that the cross-correlation matrix (5) is constructed by using independently the information of the closed orbit quantities, i.e., the elements $`m_{12}`$ of the monodromy matrix and the angles $`\vartheta _i`$ and $`\vartheta _f`$. For a given number of closed orbits the accuracy of semiclassical spectra can be significantly improved with the help of the cross-correlation approach, or, alternatively, spectra with similar accuracy can be obtained from a closed orbit cross-correlation signal with a significantly reduced signal length. Here we only give a qualitative and brief description of the method. The details of the numerical procedure of solving the generalized harmonic inversion problem (8) have been presented in Refs. . The idea is to recast the nonlinear fit problem as a linear algebraic problem . This is accomplished by associating the signal $`C_{\alpha \alpha ^{}}(s)`$ (to be inverted) with a time cross-correlation function between an initial state $`\mathrm{\Phi }_\alpha `$ and a final state $`\mathrm{\Phi }_\alpha ^{}`$, $$C_{\alpha \alpha ^{}}(s)=\mathrm{\Phi }_\alpha ^{}|e^{is\widehat{H}_{\mathrm{eff}}}\mathrm{\Phi }_\alpha ,$$ (10) where the fictitious quantum dynamical system is described by an effective Hamiltonian $`\widehat{H}_{\mathrm{eff}}`$. The latter is defined implicitly by relating its spectrum to the set of unknown spectral parameters $`w_k`$ and $`b_{\alpha k}`$. Diagonalization of $`\widehat{H}_{\mathrm{eff}}`$ would yield the desired $`w_k`$ and $`b_{\alpha k}`$. This is done by introducing an appropriate basis set in which the matrix elements of $`\widehat{H}_{\mathrm{eff}}`$ are available only in terms of the known signals $`C_{\alpha \alpha ^{}}(s)`$. The Hamiltonian $`\widehat{H}_{\mathrm{eff}}`$ is assumed to be complex symmetric even in the case of a bound system, which makes the harmonic inversion stable with respect to “noise” due to the imperfections of the semiclassical approximation. We now demonstrate the method of harmonic inversion of the cross-correlated closed orbit recurrence functions (5) for the example of the hydrogen atom in a magnetic field at constant scaled energy $`\stackrel{~}{E}=0.7`$. This energy was also chosen for detailed experimental investigations of the helium atom . 
We investigate dipole transitions from the initial state $`|\varphi _1=|2p0`$ with light polarized parallel to the magnetic field axis to final states with magnetic quantum number $`m=0`$. For this transition the angular function in Eq. 7 reads $`𝒴_1(\vartheta )=(2\pi )^{1/2}2^7e^4(4\mathrm{cos}^2\vartheta 1)`$. For the construction of a $`2\times 2`$ cross-correlated recurrence signal we use for simplicity as a second transition formally an outgoing $`s`$-wave, i.e., $`D|\varphi _2Y_{0,0}`$, and, thus, $`𝒴_2(\vartheta )=\text{const}`$. A numerical closed orbit search yields 1395 primitive closed orbits (2397 orbits including repetitions) with scaled action $`s/2\pi <100`$. With the closed orbit quantities at hand it is straightforward to calculate the cross-correlated recurrence functions in (5). The real parts of the functions $`C_{11}^{\mathrm{sc}}(s)`$, $`C_{12}^{\mathrm{sc}}(s)`$, and $`C_{22}^{\mathrm{sc}}(s)`$ with $`s/2\pi <50`$ are presented in Fig. 1. The imaginary parts are not shown because they qualitatively resemble the real parts. Note that for symmetry reasons $`C_{21}^{\mathrm{sc}}(s)=C_{12}^{\mathrm{sc}}(s)`$. We have inverted the $`2\times 2`$ cross-correlated recurrence functions in the region $`0<s/2\pi <100`$. The resulting semiclassical photoabsorption spectrum is compared with the exact quantum spectrum in Fig. 2a for the region $`16<w<21`$ and in Fig. 2b for the region $`34<w<40`$. The upper and lower parts in Fig. 2 show the exact quantum spectrum and the semiclassical spectrum, respectively. Note that the region of the spectrum presented in Fig. 2b belongs well to the experimentally accessible regime with laboratory field strengths $`B=6.0\mathrm{T}`$ to $`B=3.7\mathrm{T}`$. The overall agreement between the quantum and semiclassical spectrum is impressive, even though a line by line comparison still reveals small differences for a few matrix elements. It is important to note that the high quality of the semiclassical spectrum could only be achieved by our application of the cross-correlation approach. For example, the two nearly degenerate states at $`w=36.969`$ and $`w=36.982`$ cannot be resolved and the very weak transition at $`w=38.894`$ with $`2p0|D|\psi _f^2=0.028`$ is not detected with a single $`(1\times 1)`$ recurrence signal of the same length. However, these hardly visible details are indeed present in the semiclassical spectrum in Fig. 2b obtained from the harmonic inversion of the $`2\times 2`$ cross-correlated recurrence functions. In conclusion, we have demonstrated that closed orbit theory is not restricted to describe long-range modulations in quantum mechanical photoabsorption spectra of atoms in external fields but can well be applied to extract individual eigenenergies and transition matrix elements from the closed orbit quantities. This is achieved by a high resolution spectral analysis (harmonic inversion) of cross-correlated closed orbit recurrence signals. For the hydrogen atom in a magnetic field we have obtained, for the first time, transition matrix elements between low-lying and highly excited Rydberg states using exclusively classical closed orbit data. It will be straightforward, and rewarding, to apply the method to atoms in electric fields. We acknowledge fruitful discussions with V. Mandelshtam. This work was supported in part by the Sonderforschungsbereich No. 237 of the Deutsche Forschungsgemeinschaft. J.M. thanks the Deutsche Forschungsgemeinschaft for a Habilitandenstipendium (Grant No. Ma 1639/3).
# 3D Anderson transition for two electrons in 2D

## Abstract

It is shown that the Coulomb interaction can lead to delocalization of two-electron states in a two-dimensional (2D) disordered potential, in a way similar to the Anderson transition in three dimensions (3D). At fixed disorder strength the localized phase corresponds to low electron density and large values of the parameter $`r_s`$.

Contrary to the well established theoretical result , according to which noninteracting electrons are always localized in a 2D disordered potential, the pioneering experiment by Kravchenko et al. demonstrated the existence of a metal-insulator transition for real interacting electrons in 2D. The ensemble of experimental data obtained by different groups clearly indicates the important role played by interaction. In the majority of experiments the Coulomb energy of the electron-electron interaction $`E_{ee}`$ is significantly larger than the Fermi energy $`E_F`$ estimated for a noninteracting electron gas in the absence of disorder. The ratio of these energies is characterized by the dimensionless parameter $`r_s=1/(\sqrt{\pi n_s}a_B^{*})\sim E_{ee}/E_F`$, where $`n_s`$ is the electron density in 2D, and $`a_B^{*}=\hbar ^2ϵ_0/m^{*}e^2`$, $`m^{*}`$, $`ϵ_0`$ are the effective Bohr radius, electron mass and dielectric constant, respectively. Such large $`r_s`$ values as 10 - 30 have been reached experimentally . At these $`r_s`$ the electrons are located far from each other and it is natural to assume that in this regime the interaction effects will be dominated by pair interaction. The important role of the residual two-body interaction is also clear from the fact that in the Hartree-Fock (mean field) approximation the problem is again reduced to a one-particle 2D disordered potential with localized eigenstates . The problem of two electrons interacting in the localized phase is rather nontrivial. Indeed, recently it has been shown that a short range repulsive/attractive interaction between two particles can destroy one-particle localization and lead to the creation of pairs propagating over a distance much larger than their size . The pair size is of the order of the one-particle localization length $`l_1`$. Inside this length the collisions between the particles destroy the quantum interference, which results in their coherent propagation over a distance $`l_c\gg l_1`$. The important point is that only pairs can propagate over a large distance. Indeed, particles separated by a distance $`R\gg l_1`$ have exponentially small overlap, the interaction between them is weak, and such states are localized as in the noninteracting case. According to the theoretical estimates, in 2D the localization length $`l_c`$ grows exponentially with $`l_1`$ according to the relation $`\mathrm{ln}(l_c/l_1)\sim \kappa >1`$. Here $`\kappa \sim \mathrm{\Gamma }_2\rho _2`$, where $`\mathrm{\Gamma }_2\sim U^2/(Vl_1^2)`$ is the interaction induced transition rate between localized states in, e.g., the 2D Anderson model, $`\rho _2\sim l_1^4/V`$ is the density of two-particle states directly coupled by interaction, $`V`$ is the hopping between nearest sites, $`U`$ is the on-site (nearest-site) interaction, and the energy is taken in the middle of the band. In a sense the above estimate is similar to the case of one-particle localization in 2D, where $`\mathrm{ln}l_1\sim k_F\ell \propto (V/W)^2`$ and the product of the Fermi wave vector $`k_F`$ and the mean free path $`\ell `$ is proportional to a local diffusion rate ; $`W`$ is the strength of the on-site disorder.
Indeed, in the same manner the interaction induced diffusion rate of a pair is given by $`D_2\sim l_1^2\mathrm{\Gamma }_2\sim V\kappa /l_1^2\propto \mathrm{ln}l_c`$. According to the above estimates $`l_c`$ should vary smoothly with the effective interaction strength characterized by the dimensionless parameter $`\kappa `$. However, this consideration is valid only for a short range interaction, while the analysis of the long range Coulomb interaction requires a separate study. The investigation of this case is also dictated by the experiments, where the electrons are not screened and are located far from each other ($`r_s\gg 1`$). As we will see later, the interaction effects play an important role even at low density, when the electrons are far from each other ($`R\gg l_1`$), and there the interaction can lead to a delocalization transition similar to the one in the 3D Anderson model. It is convenient to study this transition by means of level spacing statistics, as was done for the 3D one-particle case in . To analyze the effect of the Coulomb interaction between two electrons, let us consider the 2D Anderson model with diagonal disorder ($`-W/2<E_i<W/2`$), hopping $`V`$, lattice constant $`a=1`$ and the interaction $`U/|𝐫_1-𝐫_2|`$. In these notations $`r_s=U/(2V\sqrt{\pi n_s})`$ and it is convenient to introduce another dimensionless parameter $`r_L=Ul_1/(2\sqrt{\pi }V)`$, which is equal to the $`r_s`$ value at $`n_s=1/l_1^2`$. We will consider the case with $`U\gg V`$ and $`r_s\gg 1`$, when the average distance between the electrons $`R=|𝐫_1-𝐫_2|`$ is much larger than their noninteracting localization length: $`R\sim 1/\sqrt{n_s}=(r_s/r_L)l_1\gg l_1\gg 1`$. In this case the two-body interelectron interaction has a dipole-dipole form and is of the order of $`U_{dd}\sim U\mathrm{\Delta }𝐫_1\mathrm{\Delta }𝐫_2/R^3\sim Ul_1^2/R^3`$. Indeed, the first two terms in the expansion of the Coulomb interaction give only mean field corrections to the one-particle potential, and the nontrivial two-body term appears only in second order in the electron displacements $`\mathrm{\Delta }𝐫_1\sim \mathrm{\Delta }𝐫_2\sim l_1`$ near their initial positions $`𝐫_{1,2}`$. The matrix element of this dipole-dipole interaction between localized noninteracting eigenstates is of the order of $`U_s\sim U\sum \mathrm{\Delta }𝐫_1\mathrm{\Delta }𝐫_2\psi ^4/R^3\sim U/R^3`$. Here $`\psi \sim \mathrm{exp}(-|\mathrm{\Delta }𝐫_{1,2}|/l_1)/l_1`$ are localized one-electron states, and due to localization the sum runs over $`l_1^4`$ sites, each term in the sum having a random sign. According to the Fermi golden rule these matrix elements give the interaction induced transition rate $`\mathrm{\Gamma }_e\sim U_s^2\rho _2\sim U_{dd}^2/V`$, where the density of coupled states in the middle of the energy band is still $`\rho _2\sim l_1^4/V`$, since due to localization only jumps over a distance $`l_1`$ are allowed. These interaction induced matrix elements mix two-electron states if $`\kappa _e\sim \mathrm{\Gamma }_e\rho _2>1`$, which corresponds to $`R<l_1(Ul_1/V)^{1/3}`$ (a similar estimate for electrons in 3D was given in Ref. 1b). Since $`l_1\gg 1`$ the condition $`R\gg l_1`$ is still satisfied. For $`\kappa _e>1`$ these transitions lead to diffusion with the rate

$$D_e\sim l_1^2\mathrm{\Gamma }_e\sim V\kappa _e/l_1^2$$ (1)

This diffusion expands into an effective 3D space.
Indeed, the center of mass of the two electrons diffuses in the 2D lattice plane, and in addition the electrons diffusively rotate on a ring of radius $`R`$ and width $`l_1`$. The radius of the ring is related to the e-e energy $`E\sim U/R`$, which remains constant. Since $`R\gg l_1`$ it takes a long time to make one rotation along the ring. As for the 3D Anderson model, this diffusion becomes delocalized when the hopping is larger than the level spacing between directly coupled states, namely:

$$\chi _e\sim \kappa _e^{1/6}r_L^{4/3}/r_s>1$$ (2)

Formally the situation corresponds to a quasi-two-dimensional case with $`M_{ef}\sim \pi R/l_1=\pi r_L^{1/3}\gg 1`$ parallel planes (the number of circles of size $`l_1`$ in the ring), so that the pair localization length $`l_c`$ jumps from $`l_c\sim l_1`$ for $`\kappa _e<1`$ to $`l_c\sim l_1\mathrm{exp}(\pi \kappa _er_L^{1/3})\gg l_1`$ above the transition $`\kappa _e>1`$. The transition is sharp and similar to the 3D Anderson transition when $`r_s>r_L\gg 1`$. If the electrons were able to move inside the ring, then $`M_{ef}`$ would be even larger ($`M_{ef}\sim r_L^{2/3}`$). It is important to stress that the parameter $`\chi _e`$, which determines the delocalization border and measures the effective strength of the two-body interaction, decreases with the increase of $`r_s`$. At first glance this seems to contradict the common lore according to which the larger $`r_s`$ is, the stronger is the e-e interaction. The reason for this apparent contradiction with (2) is simply that $`r_s`$ compares $`E_{ee}`$ with $`E_F`$ computed in the absence of disorder. In the presence of not very weak disorder ($`r_D=E_{ee}/W\lesssim 1`$ and $`r_L\gg 1`$) the one-electron states are localized and form the basis of the Coulomb glass . In this Coulomb glass phase the e-e interaction becomes weaker and weaker with the growth of the average distance between the electrons $`R\sim n_s^{-1/2}\propto r_s`$, in natural agreement with (2). The transition border (2) was obtained for excited states. However, it is clear that if the interaction is not able to delocalize the excited states, then the low energy states will also remain localized, since the two-electron density $`\rho _2`$ drops at low energy. In this sense (2) determines the upper border for $`r_s`$. To study the delocalization transition (2), the level spacing statistics $`P(s)`$ is determined numerically for different system sizes $`L`$. To follow the transition from the localized phase with Poisson statistics $`P_P(s)`$ to the delocalized one with Wigner-Dyson statistics $`P_{WD}(s)`$, it is convenient to use the parameter $`\eta =\int _0^{s_0}(P(s)-P_{WD}(s))ds/\int _0^{s_0}(P_P(s)-P_{WD}(s))ds`$, where $`s_0=0.4729\mathrm{}`$ is the intersection point of $`P_P(s)`$ and $`P_{WD}(s)`$ . In this way $`\eta =1`$ corresponds to the Poissonian case, and $`\eta =0`$ to $`P_{WD}(s)`$. The dependence of $`\eta `$ on the one-electron energy $`ϵ=E/2`$, counted from the ground state, is shown in Fig. 1 for different disorder $`W`$ and interaction strength $`U`$. Usually ND=4000 realizations of disorder are used, and in addition averaging over a small energy interval allows one to increase the total statistics for $`P(s)`$ and $`\eta `$ from $`NS=12000`$ for low energy states up to $`NS=10^6`$ at high energies, where the density of levels is larger. The matrix diagonalization is done in the one-electron eigenbasis truncated at high energies, which allows one to study low-energy two-electron excitations (with energy $`E`$) at large system sizes $`L\le 24`$.
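As a practical aside, $`\eta `$ as defined above can be evaluated from a sample of unfolded spacings without any histogram binning, since only the cumulative distributions up to $`s_0`$ enter. A minimal sketch (the spacing samples here are synthetic stand-ins for the diagonalization output):

```python
import numpy as np

S0 = 0.4729  # intersection point of P_P(s) and P_WD(s)

def eta(spacings):
    """eta = int_0^{s0}(P - P_WD) ds / int_0^{s0}(P_P - P_WD) ds,
    with the empirical cumulative distribution standing in for P."""
    s = np.asarray(spacings, dtype=float)
    s /= s.mean()                                  # unit mean spacing
    F_emp = np.mean(s <= S0)                       # int_0^{s0} P(s) ds
    F_wd = 1.0 - np.exp(-np.pi * S0**2 / 4.0)      # Wigner-Dyson CDF at s0
    F_p = 1.0 - np.exp(-S0)                        # Poisson CDF at s0
    return (F_emp - F_wd) / (F_p - F_wd)

rng = np.random.default_rng(1)
poisson_like = rng.exponential(size=20000)               # localized phase
wigner_like = np.sqrt(rng.exponential(size=20000) * 4.0 / np.pi)
print(round(eta(poisson_like), 2))  # -> ~1.0
print(round(eta(wigner_like), 2))   # -> ~0.0
```

The Wigner-Dyson sample is drawn by inverting its cumulative distribution $`F(s)=1-e^{-\pi s^2/4}`$.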
Periodic boundary conditions are used for the one-electron states; the Coulomb interaction is taken between electrons in one cell of size $`L`$ and with 8 charge images in the 8 nearby cells. A Coulomb interaction periodic in one cell gave similar results. Only the triplet case was considered, but the singlet case should give similar results . The results of Fig. 1a show that at fixed interaction and strong disorder $`W/V=15`$ the $`P(s)`$ statistics approaches the Poisson distribution ($`\eta =1`$) at large system size $`L`$ and large $`r_s=UL/(2\sqrt{2\pi }V)`$. This means that all states are localized. For smaller disorder the situation becomes different (Fig. 1b,c). While near the ground state still $`\eta \approx 1`$ for large $`L`$, the tendency is inverted above some critical energy $`ϵ_c`$, where $`\eta \to 0`$. All curves $`\eta (ϵ)`$ for different $`L`$ cross at one point, in a way similar to the 3D Anderson transition studied in . This result can be understood in the following way. At strong Coulomb interaction $`U\gg V`$ the excitation energy $`ϵ`$ is related to the distance between the electrons $`R`$: $`ϵ\sim U/R`$ (a similar relation was used in for the Coulomb glass). At higher $`ϵ`$ the distance $`R`$ becomes smaller, the interaction is stronger, and for $`ϵ>ϵ_c`$ the delocalization border $`R\sim U/ϵ\sim l_1r_L^{1/3}`$ (2) is crossed and the states become delocalized. Since the distance $`R`$ is related to the two-electron energy $`E=2ϵ\sim U/R`$, the spacing statistics $`P(s)`$, which is local in energy and therefore also in $`R`$, is not influenced by states where the particles are far from each other. In this sense the situation is different from the case of a short range interaction. According to the above arguments, $`\stackrel{~}{ϵ_c}=ϵ_cl_1^{4/3}/B`$ should remain constant when $`l_1`$ changes with disorder. The value of $`l_1`$ can be extracted from the average inverse participation ratio $`\xi _1=1/\sum |\psi |^4`$ computed for one-particle states in the middle of the band ($`l_1\sim \sqrt{\xi _1}`$). For $`L=24`$ and $`W/V=10;7;5`$ we have respectively $`\xi _1=11.6;36.7;84.2`$, which with $`ϵ_c/B\approx 0.6;0.28;0.16`$ (the case $`W/V=5`$ is not shown) gives $`\stackrel{~}{ϵ_c}=3.08\pm 0.01`$, in satisfactory agreement with the above expectations. The variation of $`\eta `$ with the interaction $`U`$ is shown in Fig. 1d. According to it, $`\eta `$ increases with the decrease of $`U`$ (states become more localized), in agreement with the general estimate (2). The analysis above allows one to understand the dependence of $`\eta `$ on $`ϵ`$ and $`L`$. Another reason for the decrease of $`\eta `$ at higher $`ϵ`$ is related to the fact that the two-electron density of states $`\rho _2`$ grows with energy, which allows levels to be mixed more easily. A more detailed theory should take this fact into account but also analyze the variation of the rate $`\mathrm{\Gamma }_e`$ with $`ϵ`$. The results in this direction will be published elsewhere . The $`P(s)`$ statistics for two electrons in 2D near the critical point $`ϵ_c/B`$ is shown in Fig. 2. Its comparison with the critical statistics in the 3D Anderson model taken from (see also ) demonstrates that both statistics are really very close, in agreement with the arguments given above. At the critical point the value of $`\eta _c`$ is close to its value in the Anderson model ($`\eta _c=0.20`$). The small deviations from this value in the case of 2D electrons ($`\eta _c\approx 0.25`$ ($`W/V=10`$); $`0.17`$ ($`W/V=7`$)) can be attributed to the fact that the parameter $`l_1^{1/3}`$ was not sufficiently large.
The investigation of the case with larger $`l_1`$ requires a significant increase of the system size $`L>24`$. Indeed, for $`L=24`$ and $`W/V=5`$ the localization length becomes comparable with $`L`$ ($`l_1\sim \sqrt{\xi _1}\approx 9`$), which gives a decrease of $`\eta _c`$ to $`\approx 0.13`$. Of course, one cannot expect that the simple model of two electrons considered above will explain the variety of experimental results obtained by different groups . However, it shows some tendencies which are in agreement with the experiment. Indeed, at large $`r_s`$ (density lower than some critical $`n_c`$) the experiments demonstrate a transition from metal to insulator. According to Fig. 4 in , the density at the transition $`n_c\propto 1/r_s^2`$ drops exponentially with the increase/decrease of the mobility/disorder $`\mu \propto 1/W^2`$. This qualitatively agrees with the estimate (2), according to which near the transition $`\mathrm{log}n_c\sim \mathrm{log}(1/r_s)\sim -\mathrm{log}r_L\propto -1/W^2`$. However, the condition $`r_s\gg r_L`$ seems not to be well satisfied, and apparently multi-electron effects should also be taken into account. Another interesting experimental result (Fig. 2 in ) shows that the conductivity $`\sigma _c`$ near the critical point grows with the increase of the density $`n_c`$ or the disorder $`W`$. This is in qualitative agreement with the estimate (1), according to which $`\sigma _c\sim D_e/V\sim 1/l_1^2\sim r_L^{-2}\sim r_s^{-8/3}\sim n_c^{4/3}`$, since near the critical point (2) $`\kappa _e\sim 1`$ and $`r_s\sim r_L^{4/3}`$. It is also interesting to remark that the scaling index $`\nu \approx 1.5`$ found in is close to the index $`\nu \approx 1.5`$ near the 3D Anderson transition (the fact that in 3D $`\nu \approx s`$ can be related to the observed symmetry of the I-V curves). I thank Y. Hanein and A. Hamilton for stimulating discussions of experimental results, D. Braun for the possibility to use the data of Ref. , and K. Frahm for a useful suggestion.
# Dynamical Scaling: the Two-Dimensional XY Model Following a Quench

## I Introduction

The study of non-equilibrium dynamics in systems with continuous symmetries has burgeoned . Liquid-crystalline systems , evolving after being quenched into an ordered phase, provide picturesque examples of topological defects and their interactions. Evolving systems of topological defects are also found in applications from cosmology to quantum Hall ferromagnets . A relatively simple system with a continuous symmetry is the two-dimensional XY ferromagnet with no disorder, which supports singular vortices that carry topological charge and have logarithmic interactions. The equilibrium properties have spawned a rich and fertile literature, punctuated by the work of Kosterlitz and Thouless . More recently, the non-equilibrium behavior of the 2D XY model following a quench to below the Kosterlitz-Thouless critical temperature, $`T_{KT}`$, has been studied theoretically and also experimentally with specially prepared liquid-crystal systems. Related 2D liquid-crystal systems have also been studied theoretically and experimentally . Following a quench at $`t=0`$ from a disordered phase into an ordered phase, a crucial issue is whether there is dynamical scaling at late times $`t`$, where

$$C(r,t)\equiv \langle \stackrel{}{\varphi }(x,t)\cdot \stackrel{}{\varphi }(x+r,t)\rangle =f(r/L).$$ (1)

Here, $`\stackrel{}{\varphi }`$ is the XY order parameter, $`f(x)`$ is a time-independent scaling function for the two-point correlations, and $`L(t)`$ is a growing length-scale that captures all of the correlation dynamics. The explicit or implicit assumption of dynamical scaling underpins most theoretical descriptions of phase-ordering structure . Unfortunately, apart from a limited number of solvable systems, there exist no theoretical approaches to determine dynamical scaling a priori. Indeed, the presence or absence of dynamical scaling remains an unresolved issue in the 2D XY model . This is surprising, since simple systems that break scaling are seen as exceptions . For example, the weak scaling violations in the conserved spherical model identified by Coniglio and Zannetti are due to non-commuting spherical and asymptotic-time limits, related to similar phenomena in equilibrium critical dynamics . Stronger scaling violations are found in one- and two-dimensional systems with non-singular topological textures . These systems segregate into domains of similarly charged textures, similar to the morphologies seen in reaction-diffusion $`A+B\to \mathrm{\varnothing }`$ systems . The domain-size and the texture separation provide distinct growing length-scales. Within this context, the difficulty in resolving scaling in the 2D XY model can be understood. Viewed as a plasma of overdamped charged vortices with logarithmic interactions , quenched from high temperatures, the 2D XY model sits exactly at the marginal dimension ($`d=2`$) below which segregated morphologies with strong scaling violations are expected, and above which a mixed morphology with only one length-scale, the particle separation, is seen . Such particle systems are expected to scale, with no domain structure, at the marginal dimension ; however, the asymptotic regime could set in quite late. With dissipative dynamics and the assumption of dynamical scaling, the predicted asymptotic growth-law of the characteristic length-scale is

$$L(t)\simeq A(t/\mathrm{ln}[t/t_0])^{1/2},$$ (2)

where $`A`$ and $`t_0`$ are the non-universal amplitude and time-scale, respectively.
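As a side note, extracting $`A`$ and $`t_0`$ from measured lengths is a simple two-parameter fit to Eq. 2; a minimal sketch (the "data" below are synthetic, generated from Eq. 2 itself with a few percent of noise):

```python
import numpy as np
from scipy.optimize import curve_fit

def growth_law(t, A, t0):
    """L(t) = A * (t / ln(t/t0))**(1/2), valid for t >> t0."""
    return A * np.sqrt(t / np.log(t / t0))

rng = np.random.default_rng(0)
t = np.logspace(2, 5, 30)
L = growth_law(t, 1.3, 2.5) * (1 + 0.02 * rng.standard_normal(t.size))

(A, t0), _ = curve_fit(growth_law, t, L, p0=(1.0, 1.0),
                       bounds=([0.0, 1e-3], [10.0, 50.0]))
print(f"A = {A:.2f}, t0 = {t0:.2f}")        # close to 1.3 and 2.5

# an uncorrected power-law fit over the same window gives an effective
# exponent noticeably below 1/2, showing why the log factor matters
x = np.polyfit(np.log(t), np.log(L), 1)[0]
print(f"effective exponent: {x:.3f}")       # ~0.42 here
```

Over only a few decades a pure power law mimics the log-corrected form quite well, which is one reason the logarithmic factor is hard to pin down in simulations and experiments.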
This growth-law characterizes the correlations through a length $`L_{1/2}(t)`$, where $`C(L_{1/2},t)=1/2`$, as well as the vortex separation through a length-scale $`L_v(t)`$, where the vortex density $`\rho _{def}=1/L_v^2`$. These lengths will only differ by prefactors and by subdominant contributions at late times. \[Eqn. 2 also describes the annihilation time of an isolated vortex-antivortex pair with an initial separation $`L`$ .\] The logarithmic factor is crucial, and stems from the logarithmic vortex mobility. The same growth-law is expected in liquid-crystal films with vortices . The analytical evidence for scaling violations is mostly suggestive: explicit violations in four-point correlations and multiple energy-scales seen in energy-scaling calculations . These would indicate multiple lengths which differ at most by logarithmic factors, consistent with the marginal dimensionality within a reaction-diffusion context . Indeed, approximation schemes for correlation functions in the 2D XY model typically find scaling but with no logarithmic factors (see, e.g., ; see also ). Additionally, the 2D XY model quenched between two temperatures below $`T_{KT}`$, and coarse-grained to a fixed scale to eliminate bound vortex pairs, is solvable and dynamically scales without any logarithmic factor, $`L(t)\sim t^{1/2}`$. Previous numerical evidence for scaling violations is stronger. Cell-dynamical simulations of XY models quenched to $`T=0`$ by Blundell and Bray found that two-point correlations did not scale well with respect to the defect separation $`L_v`$, though they scaled with respect to the correlation length $`L_{1/2}`$ (see also ). Mondello and Goldenfeld also found indications of multiple length-scales. Simulations of nematic films by Zapotocky et al. found a variety of effective growth exponents, though again the correlation function appeared to scale (see also ). Other simulations on the 2D XY model at finite temperatures have recovered the expected growth law , and have found dynamical scaling . Simulations of quenches to $`T=0`$ in hard-spin systems found dynamical scaling of correlations even though the dynamics froze at late times! Experiments on liquid-crystal systems, following the pioneering work by Shiwaku et al. , have recovered the $`t^{1/2}`$ growth of defect separation after a quench, though with insufficient resolution to determine logarithmic factors and with some difficulties in achieving an unbiased (symmetric) quench . When measured, the structure and other two-point correlations are consistent with dynamical scaling . In this paper we want to clarify the existence or absence of dynamical scaling in the 2D XY model. A successful strategy can then be applied more generally to systems that seem to violate scaling, in particular to systems with more complicated collections of defects . We first discuss the appropriate definition of dynamical scaling, within the context of systems relaxing after a quench. We then derive approximate forms for various correlation functions via Gaussian-closure techniques, which impose scaling. While we do not expect them to exactly match the measured correlations, they are used to normalize the measured values in order to enhance our sensitivity to scaling or its absence. In combination with the growth-law, we have a “null hypothesis” which would be broken by scaling violations. We present our simulation data and find no evidence for scaling violations.
We then explicitly reconstruct a two-point gradient correlation function, within the periodic system using only the vortex positions and charges, and find it significantly different from the unreconstructed scaling form. However both correlations scale with respect to the defect density. This indicates that both topological (vortex) and non-topological (“spin-wave”) contributions to the order parameter are asymptotically relevant, with characteristic lengths that remain asymptotically proportional. ## II Dynamical Scaling In phase-ordering, dynamical scaling colloquially means that there is a single characteristic length scale growing in time. This leads to a rough-and-ready symptom of dynamical scaling violations: multiple length-scales with distinct growth-laws, see for example . While useful as a guide, this approach has limitations. One must first identify each asymptotic growth law, i.e. the effective exponent after it is constant in time and before finite-size effects of the sample become important. Practically, at most one or two decades in time are available in simulations if a $`5\%`$ exponent variation is tolerated, and often less than a decade in experiments. When the scaling prediction for the growth-law is not a priori known, this approach on its own is dangerous. Indeed, sub-dominant corrections to the asymptotic growth law can depend on the method used to extract the length-scale . Even the observation of two asymptotically distinct length-scales does not demonstrate that they are dynamically interconnected. A silly example helps here: consider a sample made from gluing together a conserved binary-alloy system (asymptotic growth law $`t^{1/3}`$), and a non-conserved order-disorder alloy system (growth law $`t^{1/2}`$). Clearly two growth-laws could be observed in the hybrid, but they should not imply scaling violations. \[Such dynamically independent sub-systems would lead to correlation functions that are sums of scaling functions.\] The situation is more complicated when both lengths are observed within a homogeneous sample, such as the asymptotic behavior of monopoles and vortex lines in bulk nematics . Non-trivial inter-relationships of observed lengths can generally only be resolved with the help of simplified dynamical models, for example see . A more precise definition of dynamical scaling is that two-point equal-time correlations have a time-independent scaling form, see Eqn. 1, which also implies scaling of the structure factor $$S(k,t)\equiv \langle \vec{\varphi }(𝐤,t)\vec{\varphi }(𝐤,t)\rangle =L^dg(kL),$$ (3) where $`g(x)`$ is a time-independent scaling function. This is directly measured in scattering experiments, can be well approximated analytically, and is easy to extract from simulations. For systems with singular topological defects, such as domain walls, hedgehogs, vortices, or vortex lines, a generalized Porod's law connects the density of defect core $`\rho _{def}`$ to the asymptotics of the structure via $$S(k)\sim \rho _{def}k^{(d+n)},kL1,$$ (4) where $`n`$ characterizes the defect type \[for the 2D XY model, $`n=d=2`$\]. This directly implies that the length derived from the defect density, $`L_v`$, is asymptotically proportional to the correlation length $`L_{1/2}`$ when the correlations dynamically scale. This definition is still incomplete, since systems can satisfy Eqn. 3 yet have distinct lengths intimately connected by the dynamics; for example, in the 1D XY model .
Additionally, higher-point correlations can be constructed in the 2D XY model which explicitly do not scale . Should these be viewed as violations of dynamical scaling? Fortunately a self-contained definition of dynamical scaling exists, introduced by Bray and Rutenberg . In order to calculate the rate of free-energy dissipation in a coarsening system, they additionally require the scaling of the time-derivative correlation function $`T(r,t)`$ $`\equiv `$ $`\langle \partial _t\vec{\varphi }(x,t)\partial _t\vec{\varphi }(x+r,t)\rangle =(\dot{L}/L)^2F(r/L)`$ (5) where $`F`$ is a new time-independent scaling function and $`\dot{L}\equiv dL/dt`$. Note that power-law growth, with or without additional logarithmic factors, implies that the prefactor $`(\dot{L}/L)^2\sim 1/t^2`$. If dynamic scaling holds both for $`T(r,t)`$, as just defined, and for $`C(r,t)`$, then the growth exponent can be determined through a self-consistent energy-scaling approach . This restricted definition of dynamical scaling, of both $`C(r,t)`$ and $`T(r,t)`$, picks up the scaling violations of the 1D XY model , and clearly separates the role of two-point from higher-point correlations . We use this restricted definition here, and recommend it in the study of systems where dynamical scaling is questioned but Eqn. 1 seems to be satisfied. ## III Dynamics We study purely dissipative quenches of 2D XY models from well above to below the Kosterlitz-Thouless transition temperature $`T_{KT}`$. Because of the line of critical points in the 2D XY model, the correlations in quenches to $`0<T<T_{KT}`$ have a modified scaling form . Essentially, critical equilibrium correlations have no characteristic length-scale, and so the standard coarse-graining that makes temperature irrelevant to large-scale correlations is impossible. However, there is no indication that temperature changes dynamical scaling, or its absence, in the 2D XY model. Accordingly, in this paper, we only investigate quenches to $`T=0`$. The non-conserved coarse-grained dynamics are $`F[\vec{\varphi }]`$ $`=`$ $`{\displaystyle \int d^2x\left[(\nabla \vec{\varphi })^2+V_0(\vec{\varphi }^21)^2\right]},`$ (6) $`\partial _t\vec{\varphi }`$ $`=`$ $`\mathrm{\Gamma }\delta F/\delta \vec{\varphi },`$ (7) $`\langle \vec{\varphi }(𝐱,0)\vec{\varphi }(𝐱^{},0)\rangle `$ $`=`$ $`\mathrm{\Delta }\delta (𝐱𝐱^{}),`$ (8) where $`\mathrm{\Gamma }`$ is a kinetic coefficient that sets the time-scale, $`V_0`$ is the potential strength that sets the ‘hardness’ of the vector spins, and $`\mathrm{\Delta }`$ characterizes the initial disordered state. The orientation of the two-component order-parameter $`\vec{\varphi }(𝐱)`$ defines an angle $`\theta (𝐱)\in [0,2\pi ]`$, which is identical to the XY phase. The numerical implementation of the dynamics is discussed below in Sec. IV A. In overview of the evolution: we start with a random high-temperature configuration and quench to $`T=0`$. The order parameter locally equilibrates, but competition between degenerate ground-states leads to topologically stable vortices, with integer charges. The annihilation of oppositely charged vortices drives the subsequent dynamics, and characterizes one possible growing length scale, the vortex separation $`L_v`$. Of course, the order-parameter field around a moving vortex is not rigidly comoving , and so non-singular “spin-wave” distortions are generated by the dynamics even at $`T=0`$. The dynamics, emphasizing the vortices, can be visualized with a Schlieren pattern, see Fig.
1, analogous to those used in the study of liquid-crystal films . ### A Scaling Correlations from Gaussian Closure Several approximation schemes eliminate high-order correlations in the evolution equation for two-point correlations . We use a Gaussian-closure approximation, which gives quite good two-point correlations. We will use the results to normalize our correlations. This allows for a more sensitive test of scaling properties than has been possible before, and also highlights weaknesses of this approach (see also ). For general $`O(n)`$ fields, we start with the Bray-Puri-Toyoki (BPT) approach . We introduce an auxiliary field $`\vec{m}`$ parallel to the order parameter, $`\widehat{m}=\widehat{\varphi }`$. The zeros of $`\vec{m}`$ match the positions of the topological defect cores, while $`|\vec{m}|`$ is roughly the distance to the closest defect core. Assuming a Gaussian probability distribution for $`\vec{m}`$ results in two-point correlations between $`(r_1,t_1)`$ and $`(r_2,t_2)`$: $$C_g(r,t_1,t_2)=\frac{n\gamma }{2\pi }\left[B(\frac{1}{2},\frac{n+1}{2})\right]^2F(\frac{1}{2},\frac{1}{2};\frac{n+2}{2};\gamma ^2),$$ (9) where $`r=|𝐫_2𝐫_1|`$, $`B(x,y)`$ is the beta function, and $`F(a,b;c;z)`$ is the hypergeometric function. The result is expressed in terms of the normalized two-point, two-time correlation function of $`\vec{m}`$: $`\gamma =\langle m(1)m(2)\rangle /[\langle m^2(1)\rangle \langle m^2(2)\rangle ]^{1/2}`$. The various approximation schemes differ in the manner of determining $`\gamma `$. We use the systematic approach introduced by Bray and Humayun , which produces $$\gamma (r,t_1,t_2)=\left(\frac{4t_1t_2}{(t_1+t_2)^2}\right)^{d/4}\mathrm{exp}(r^2/[4(t_1+t_2)]),$$ (10) where $`d`$ is the spatial dimension. For equal-time correlations, we obtain the scaling form $`C_g(r,t)=f_{BPT}(x)`$, where $`x=r/L`$ and $`L(t)=(4t)^{1/2}`$. This highlights a problem with all existing correlation-closure approaches as applied to 2D XY models: while they recover a scaling form, they miss the logarithmic factor in the growth-law . The same scaling variable is used in the time-derivative correlation function $$T_g(r,t)=\frac{1}{16t^2}\left[\gamma ^2x^4C_{\gamma \gamma }(x)+\gamma (x^44x^2+2d)C_\gamma (x)\right],$$ (11) where $`C_\gamma \equiv \partial C_g/\partial \gamma `$ and $`C_{\gamma \gamma }\equiv \partial ^2C_g/\partial \gamma ^2`$. ## IV Simulation ### A Simulation Methods We use a standard CDS update for soft spins, $`\vec{\varphi }(𝐢,t)`$, on a periodic lattice, where $`t`$ is now a discrete integer time and $`𝐢`$ is the position: $$\vec{\varphi }(𝐢,t+1)=\frac{D}{4}\underset{𝐣}{\sum }\left[\vec{\varphi }(𝐣,t)\vec{\varphi }(𝐢,t)\right]+E\widehat{\varphi }(𝐢,t)\mathrm{tanh}\left[\left|\vec{\varphi }(𝐢,t)\right|\right],$$ (12) where $`\widehat{\varphi }=\vec{\varphi }/\left|\vec{\varphi }\right|`$ is the unit vector and the sum runs over the nearest neighbors $`𝐣`$ of site $`𝐢`$. We use the standard values $`D=0.5`$ and $`E=1.3`$. The dynamics are stable and have the same attractors as Eqn. 6. We do not observe pinning effects in quenches to $`T=0`$ (see also ). The random initial conditions are chosen uniformly for each component from $`[0.1,0.1]`$. We identify vortices with three methods that prove equally effective: by looking for the zeros of the vector field, by looking for plaquettes around which the phase rotates through $`\pm 2\pi `$, and by finding the peaks of the local energy density $`E_𝐢=\underset{𝐣}{\sum }\vec{\varphi }(𝐢)\vec{\varphi }(𝐣)`$, where the sum is over nearest neighbors of site $`𝐢`$.
Due to the periodic boundary conditions, the system has no net vorticity. In addition to tracking the number of vortices, we measure several correlations of the “hardened” order parameter, $`\widehat{\varphi }(𝐣,t)`$: $$C(r,t)=\langle \widehat{\varphi }(𝐣,t)\widehat{\varphi }(𝐣+𝐫,t)\rangle .$$ (13) The average $`\langle \mathrm{}\rangle `$ is over the independent sets of initial conditions, and includes a spherical average and an average over lattice sites $`𝐣`$. The structure factor is also calculated: $$S(k,t)=\langle \vec{\varphi }(𝐤,t)\vec{\varphi }(𝐤,t)\rangle .$$ (14) We also measure the time-derivative correlation function, $$T(r,t)=\langle \delta _t\vec{\varphi }(𝐣)\delta _t\vec{\varphi }(𝐣+𝐫)\rangle ,$$ (15) where $`\delta _t\vec{\varphi }=\vec{\varphi }(t+1)\vec{\varphi }(t)`$ is a finite-difference approximation of the time derivative. To probe the distinction between vortex and non-vortex contributions to correlations, we measure a phase-gradient correlation function: $`D(r,t)`$ $`\equiv `$ $`\langle \nabla \theta (𝐣+𝐫,t)\nabla \theta (𝐣,t)\rangle ,`$ (16) $`=`$ $`h(r/L)/L^2,`$ (17) where the second line is the natural scaling ansatz for the correlations. Note that $`\langle \nabla \theta \rangle =0`$. We then reconstruct the vortex contribution $`D_r(r,t)`$ directly from the charges and locations of the vortices at a given time. From the vortex positions we build up the phase field $`\stackrel{~}{\theta }(𝐣)`$ using the periodic image of the minimal-energy solution for each single vortex, $`\nabla ^2\stackrel{~}{\theta }=0`$, due to Grønbech-Jensen : $`d\theta /dx`$ $`=`$ $`\pi {\displaystyle \underset{n=\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{sin}(2\pi y)/[\mathrm{cosh}(2\pi (x+n))\mathrm{cos}(2\pi y)],`$ (18) $`d\theta /dy`$ $`=`$ $`\pi {\displaystyle \underset{n=\mathrm{\infty }}{\overset{\mathrm{\infty }}{\sum }}}\mathrm{sin}(2\pi x)/[\mathrm{cosh}(2\pi (y+n))\mathrm{cos}(2\pi x)],`$ (19) where $`x`$ and $`y`$ are the relative coordinates of the vortex in a system of size unity. The solutions for every vortex (with $`\pm 1`$ factors for vortices and anti-vortices, respectively) were added together for every point in the system to obtain the fully-periodic minimal-energy phase-field consistent with the vortex configuration. \[Direct reconstruction of the order-parameter field $`\vec{\varphi }`$ proved intractable due to various counter-charge effects imposed by the periodic boundary conditions. In principle we could use our $`\theta `$ reconstruction to recover the order-parameter field with additional line-integrations.\] To obtain more accurate vortex positions, we first identify the lattice plaquette by windings or energy peaks, then we use bilinear interpolation to more accurately locate the zero of the order parameter within the plaquette. The sign of the vortex is determined by the winding of the phase field around the plaquette. ### B Simulation Results We simulate a $`512\times 512`$ system, averaging over $`40`$ independent samples. We check that there are no significant finite-size effects in comparison to a $`256\times 256`$ system, with $`20`$ samples. The data for the reconstructed correlations are currently restricted to the $`256\times 256`$ system. In Fig. 2 a), we plot $`C(r,t)`$ with respect to distance scaled by $`L_{1/2}`$ \[$`C(L_{1/2},t)=1/2`$\]. The scaling is excellent, and the Gaussian-closure result ($`f_{BPT}`$, solid line) is indistinguishable from the data. In Fig. 2 b), however, the scaling collapse is not good with respect to the vortex separation \[$`L_v`$, where $`\rho =1/L_v^2`$\].
It is difficult to tell from this second plot alone whether scaling simply has a later onset time, or if scaling violations are indicated. This must be determined by a direct comparison of the length-scales $`L_v`$ and $`L_{1/2}`$, as well as by a study of the time-derivative correlations $`T(r,t)`$, as discussed in Sec. II. By normalizing the correlations with the Gaussian-closure result, $`C_g`$, we can sensitively probe scaling with the real-space correlations, see Fig. 3. While $`C_g`$ is clearly too small at large scaled distance, correlations scale relatively well for $`t\mathrm{}1000`$. The structure factor scales with respect to $`L_k\equiv 1/\langle k\rangle `$, its inverse first moment, see Fig. 4. Also shown (solid line) is the Fourier transform of the Gaussian-closure prediction, which slightly but systematically under- and over-estimates the structure. By using a log-log plot we emphasize the $`S(k)\sim \rho _{def}k^{4}`$ generalized Porod tail for $`k/\langle k\rangle \mathrm{}2`$, as per Eqn. 4. The good scaling of the Porod tail, which is determined by the vortex density, indicates that $`L_k\mathrm{}L_v`$ asymptotically. We now directly test the assumption that all lengths asymptotically have the scaling growth-law of Eqn. 2 by plotting $`t/L^2`$ vs $`\mathrm{ln}t`$ for $`L_{1/2}`$, $`L_k`$, and $`L_v`$, in Fig. 5. The scaling prediction is a linear plot, with non-universal slope and intercept given by the amplitude $`A`$ and time-scale $`t_0`$. \[Both of these can vary from one length-scale to another.\] Linearity is observed for $`\mathrm{ln}t\mathrm{}7.7`$ ($`t\mathrm{}2200`$), in agreement with Fig. 3. We have fit them with straight lines with the same amplitude $`A`$ but different $`t_0`$. The correlation length $`L_{1/2}`$ has the strongest corrections to scaling, which is one cause of the bad scaling of $`C(r,t)`$ when plotted vs. $`L_v`$ in Fig. 2 b). It is worth noting that the growing length scales can also be well fit using effective exponents of $`0.42`$, $`0.40`$, and $`0.40`$ ($`\pm 0.01`$), respectively, without logarithmic factors; see also . However, if these effective exponents were asymptotically valid, and hence disagreed with the scaling prediction of Eqn. 2, we would not see scaling in the correlations . While the two-point correlations $`C(r,t)`$ and $`S(k,t)`$ support dynamical scaling, we must also investigate the time-derivative correlation function, $`T(r,t)`$, as discussed in Sec. II. In Fig. 6, we scale lengths with respect to $`L_{1/2}`$, and remove the prefactor in Eqn. 5 by plotting $`T(r,t)/T(L_{1/2},t)`$. While scaling only sets in for $`t\mathrm{}2000`$, it is supported by the data. This correlation function has much more structure than the equal-time correlations, such as a local maximum at $`x\approx 2.3`$ and a logarithmic divergence at small $`x`$ due to fast vortex annihilations. As a result, it provides a more stringent test of the Gaussian-closure approximation. We find significant discrepancies, the first to be found in two-point correlation functions. Further confirmation of scaling in $`T(r,t)`$ is found by exploring the time dependence of the amplitude $`T(L_{1/2},t)\sim t^{\mu }`$, see Fig. 7. The scaling form in Eqn. 5 gives $`\mu =2`$ (independent of logarithms), and we find $`\mu =2.0\pm 0.1`$. This is consistent with scaling. In combination with the scaling of $`C(r,t)`$ \[and $`S(k,t)`$\], and the consistency of the growth laws of all measured length scales with the scaling result, we conclude that the quenched 2D XY model dynamically scales.
In the equilibrium 2D XY model, the singular (vortex) and non-singular (spin-wave) degrees of freedom have independent contributions to the free energy . Could it be possible for such distinct ‘singular’ and ‘non-singular’ length-scales to exist in phase-ordering systems (see, e.g., )? If a separation of vortices and spin-waves occurs, we expect spin-wave contributions to have a characteristic scale $`L\sim t^{1/2}`$, i.e. to grow faster, with no logarithmic factor . In that case, the direct correlations should either have scaling violations due to the different length scale, or the spin-waves should be asymptotically irrelevant, leaving the direct and reconstructed correlations asymptotically equal at late times. As can be seen from the snapshots of $`|\nabla \theta |`$ in Fig. 8, the reconstruction maintains the vortex locations and is periodic. Indeed, the reconstruction provides the minimal-energy configuration consistent with the vortex positions; in other words, any ‘non-singular’ contribution is absent. In Fig. 9, the correlation functions for the direct and reconstructed fields are shown as a function of the scaled distance. We first notice that both correlations scale with respect to $`L_v=\rho ^{1/2}`$ but with different functional forms. $`D_r`$ has a sharper knee at $`r\rho ^{1/2}\approx 0.7`$, for example. This knee reflects the faster decay of $`|\nabla \theta |`$ away from the vortex core in the reconstructed configurations, as is apparent in Fig. 8. The significant differences between the bare and reconstructed correlations in the scaling limit indicate that both vortex and “spin-wave” contributions are relevant to the direct correlations, and that the separation seen in static properties does not hold in the dynamics. The Porod plot of the Fourier-transformed correlations, see Fig. 10, further highlights the differences (note the $`k\to 0^+`$ intercepts). It is interesting that while $`\langle \nabla \theta \rangle =0`$, the scaling curve has a non-conserved character. This is similar to correlations in globally-conserved systems . We also observe a $`k^{2}`$ Porod tail for $`k/\rho ^{1/2}\mathrm{}2`$, which is expected from Fourier transforming the real-space scaling ansatz, Eqn. 17, and setting the amplitude of the Porod tail proportional to the vortex density $`1/L^2`$. The Porod tail has the same amplitude for the direct and reconstructed correlations, reflecting the singular structure of the vortex cores. ## V Summary and Discussion We find no evidence for scaling violations in the 2D XY model. All lengths, $`L_{1/2}`$, $`L_v`$, and $`L_k`$, have the same asymptotic form given by Eqn. 2, albeit with different non-universal coefficients. Real-space correlations, structure, and time-derivative correlations all scale as expected. Phase-gradient correlations, reconstructed from the vortex locations to have minimal energy and hence no spin-wave contributions, differ significantly from the direct correlations, indicating that both vortex and non-singular “spin-wave” contributions are asymptotically relevant. We expect similar results to hold in closely related planar liquid-crystal systems. We have also shown how Gaussian-closure approximations can be useful to sensitively explore scaling. This has the added benefit of testing the approximation schemes. In particular, we find significant discrepancies with respect to the measured time-derivative correlations, $`T(r,t)`$. More generally, we emphasize the role of sensitive null-like tests in checking apparent scaling violations.
For example, we plot the length-scales vs the expected growth law so that linear behavior is expected if scaling is obeyed. When scaling predictions are available, and in the face of transient corrections to scaling, this is preferable to the measurement of effective exponents. One can never absolutely rule out scaling violations, if only because simulations and experiments can never reach $`t\to \mathrm{\infty }`$. Each length in a system that dynamically scales will generically have different corrections to scaling. In quenches of the 2D XY model the leading correction is described well by $`t_0`$, the time-scale of the logarithmic factor. Since scaling violations seem to be rare in quenched systems, the assumption should be that systems dynamically scale in the absence of strong evidence to the contrary, including the inability to perform a scaling collapse with any length-scale for either $`C(r,t)`$ or $`T(r,t)`$. This provides a self-consistent confirmation of dynamical scaling, provided the lengths used for the collapse are consistent with the same asymptotic growth-law. For some systems, including this one, the scaling growth-law can be independently determined. This is invaluable when long-lived (logarithmically decaying) corrections to scaling are expected. The scaling of some other lengths in the problem can sometimes also be required for consistency. In this case the defect-separation scale $`L_v`$ is needed to set the Porod amplitude, and hence must be consistent with the lengths $`L_k`$ and $`L_{1/2}`$ extracted from $`S(k,t)`$ and $`C(r,t)`$, respectively. ## VI Acknowledgments F. Rojas thanks CONACYT (Mexico) and EPSRC (UK) grant GR/J24782, while A. D. Rutenberg thanks the NSERC, and le Fonds pour la Formation de Chercheurs et l’Aide à la Recherche du Québec. We would like to thank Alan Bray, Rob Wickham, and Martin Zapotocky for useful discussions.
no-problem/9902/math9902093.html
ar5iv
text
# Dimensions of quantized tilting modules ## 1. Introduction Let $`G`$ be a simply connected algebraic group. In , G. Lusztig proved the existence of a bijection between two finite sets: the set of two-sided cells in the affine Weyl group attached to $`G`$ (this set is defined combinatorially) and the set of unipotent $`G`$-orbits. The proof in is quite involved and this bijection remains rather mysterious. In , J. Humphreys suggested a natural conjectural construction of Lusztig’s bijection using cohomology of tilting modules over algebraic groups in characteristic $`p>0`$ or, similarly, over quantum groups at a root of unity. In , the author proved some partial results towards this Conjecture of Humphreys. Now this Conjecture is known to be true in the quantum group case thanks to the (unpublished) work of R. Bezrukavnikov. For an element $`w`$ of a finite Weyl group $`W_f`$ one defines the number $`a(w)`$ as the Gelfand-Kirillov dimension of the simple highest weight module $`L(w\cdot 0)`$ over the corresponding semisimple Lie algebra. Generalizing this, G. Lusztig defined the $`a`$-function on any Coxeter group, see . The $`a`$-function takes a constant value on any two-sided cell and appears to be very useful in the theory of cells in Coxeter groups. J. Humphreys suggested that his construction of Lusztig’s bijection is compatible with the theory of the $`a`$-function in the following way: the dimension of any tilting module corresponding to a two-sided cell $`A`$ is divisible by $`p^{a(A)}`$ and generically is not divisible by a higher power of $`p`$ (here $`p`$ is the characteristic of the field in the algebraic group case and the order of the root of unity in the quantum group case). In , the author proved that the first statement (divisibility of dimensions by $`p^{a(A)}`$) is a consequence of Humphreys’ Conjecture, so it is a consequence of Bezrukavnikov’s work. The second statement (generic indivisibility by $`p^{a(A)+1}`$) seems to be harder. The main result of this note is that in the quantum group case, for any cell $`A`$ there exists a tilting module $`T`$ corresponding to $`A`$ such that its dimension is not divisible by $`p^{a(A)+1}`$. So we determine the $`p`$-component of the dimension of certain tilting modules, which seems to be of some interest independently of Humphreys’ Conjecture. We will follow the notations of . Let $`(Y,X,\mathrm{})`$ be a simply connected root datum of finite type. Let $`p`$ be a prime number bigger than the Coxeter number $`h`$. Let $`\zeta `$ be a primitive $`p`$-th root of unity in $`ℂ`$. Let $`U`$ be the quantum group with divided powers associated to these data. Let $`𝒯`$ be the category of tilting modules over $`U`$, see e.g. . Recall that any tilting module is a sum of indecomposable ones, and indecomposable tilting modules are classified by their highest weights, see loc. cit. Let $`X_+`$ be the set of dominant weights, and for any $`\lambda \in X_+`$ let $`T(\lambda )`$ denote the indecomposable tilting module with highest weight $`\lambda `$. The tensor product of tilting modules is again a tilting module. Let us introduce the following preorder relation $`\le _T`$ on $`X_+`$: $`\lambda \le _T\mu `$ iff $`T(\lambda )`$ is a direct summand of $`T(\mu )\otimes \text{(some tilting module)}`$. We say that $`\lambda \sim _T\mu `$ if $`\lambda \le _T\mu `$ and $`\mu \le _T\lambda `$. Obviously, $`\sim _T`$ is an equivalence relation on $`X_+`$. The equivalence classes are called weight cells. The set of weight cells has a natural order induced by $`\le _T`$.
It was shown in that the partially ordered set of weight cells is isomorphic to the partially ordered set of two-sided cells in the affine Weyl group $`W`$ associated with $`(Y,X,\mathrm{})`$ ($`W`$ is a semidirect product of the finite Weyl group $`W_f`$ with the dilated coroot lattice $`pY`$). Let $`G`$ and $`𝔤`$ be the simply connected algebraic group and the Lie algebra (both over $`ℂ`$) associated to $`(Y,X,\mathrm{})`$, and let $`𝒩`$ be the nilpotent cone in $`𝔤`$, i.e. the variety of $`ad`$-nilpotent elements. It is well known that $`𝒩`$ is a union of finitely many $`G`$-orbits called nilpotent orbits. Using the theory of support varieties one defines the Humphreys map $`H:`$ { the set of weight cells} $`\to `$ { the set of closed $`G`$-invariant subsets of $`𝒩`$}, see . The construction is as follows: it is known that the cohomology ring $`\text{H}^{\bullet }(u)`$ of the small quantum group $`u\subset U`$ is isomorphic to the ring of regular functions on $`𝒩`$ (this is a Theorem due to V. Ginzburg and S. Kumar, see ); now let $`A`$ be a weight cell and take any weight $`\lambda \in A`$; then $`\text{Ext}^{\bullet }(T(\lambda ),T(\lambda ))`$ is naturally a module over $`\text{H}^{\bullet }(u)`$, so it can be considered as a coherent sheaf on $`𝒩`$; finally, the Humphreys map $`H(A)`$ is just the support of this sheaf. The Conjecture due to J. Humphreys says that the image of the map $`H`$ consists of irreducible varieties, i.e. the closures of nilpotent orbits; moreover, J. Humphreys conjectured that this map coincides with Lusztig’s bijection between the set of two-sided cells in the affine Weyl group and the set of nilpotent orbits, see and . In particular, the Humphreys map should preserve Lusztig’s $`a`$-function; this function is equal to half of the codimension in $`𝒩`$ of the nilpotent orbit and is defined purely combinatorially on the set of two-sided cells, see . The aim of this note is to show that the Humphreys map does not decrease the $`a`$-function: for a weight cell $`A`$ corresponding to a two-sided cell $`\underset{¯}{A}`$ in $`W`$ we have the inequality $`\text{codim}_𝒩H(A)\ge 2a(\underset{¯}{A})`$. This inequality follows easily from the definition of $`H`$, Theorem 4.1 in and the Main Theorem below: Main Theorem. Let $`A`$ be a weight cell corresponding to a two-sided cell $`\underset{¯}{A}`$ in the affine Weyl group. Then there exists a weight cell $`B\le _TA`$ and a regular weight $`\lambda \in B`$ such that $`\mathrm{dim}T(\lambda )`$ is not divisible by $`p^{a(\underset{¯}{A})+1}`$ provided $`p`$ is sufficiently large. Remark. It follows from Humphreys’ Conjecture proved by Bezrukavnikov that in the Main Theorem $`B=A`$. The proof of this Theorem is based on formulas for characters of indecomposable tilting modules obtained by W. Soergel in . In what follows we will freely use notations and results from and . Warning. The character formulas for tilting modules use certain Kazhdan-Lusztig elements in the Iwahori-Hecke algebra of $`W`$, and in modules thereof. The Iwahori-Hecke algebra is an algebra over $`ℤ[v,v^{1}]`$, and we will only need its specialization at $`v=1`$. So all the notions related to it (e.g. Kazhdan-Lusztig bases) will be understood in the specialization $`v=1`$. ## 2. Proof of the Main Theorem. Let $`\le `$ denote the Bruhat order on the affine Weyl group $`W`$. For any $`x\in W`$ let $`(1)^x`$ denote the sign of $`x`$, that is $`(1)^{l(x)}`$ where $`l(x)`$ is the length of $`x`$. ### 2.1. We may and will suppose that our root system $`R`$ is irreducible. Let $`S`$ be the set of simple reflections in the affine Weyl group $`W`$.
For any $`s\in S`$ let $`W_s`$ be the parabolic subgroup generated by $`S\setminus \{s\}`$. The subgroup $`W_s`$ is finite. There exists a unique point $`p\mu _s\in X\otimes ℝ`$ invariant under the $`W_s`$-action. In general, $`\mu _s\notin X`$, but the denominators of its coordinates contain only bad primes for $`R`$. In particular, let $`s_a\in S`$ be the unique affine reflection. Then $`W_{s_a}=W_f`$ is the finite Weyl group. There exists a natural projection $`W\to W_f`$, $`x\mapsto \overline{x}`$. This projection embeds all the subgroups $`W_s`$ into $`W_f`$. Recall from that the set $`W^f`$ of minimal length representatives of the cosets $`W_f\backslash W`$ is identified with the set of dominant alcoves. ### 2.2. Recall that any two-sided cell of $`W`$ intersects nontrivially some $`W_s`$, see . In the group algebra of $`W_s`$ there are two remarkable bases: the Kazhdan-Lusztig base $`\underset{¯}{\overset{~}{H}}_w`$, $`w\in W_s`$, and the dual Kazhdan-Lusztig base $`\underset{¯}{H}_w`$, $`w\in W_s`$ (notations from ). Recall that $`\underset{¯}{H}_w=\underset{x\le w}{\sum }p_{x,w}x`$ and $`\underset{¯}{\overset{~}{H}}_w=\underset{x\le w}{\sum }p_{x,w}(1)^{xw}x`$ where $`p_{x,w}`$ are the values at 1 of the Kazhdan-Lusztig polynomials. Let $`V=X\otimes ℝ`$ be the reflection representation of $`W_f`$. For any $`s\in S`$ the restriction of $`V`$ to $`\overline{W}_s`$ is isomorphic to the reflection representation $`V_s`$ of $`W_s`$. We refer the reader to for the definition and properties of Lusztig’s $`a`$-function. This function is defined on the set of elements of a Coxeter group and takes values in $`ℤ_{\ge 0}`$. We will use the following properties of the $`a`$-function: (i) The $`a`$-function is constant on any two-sided cell, see 5.4. (ii) Suppose that $`w\in W_0\subset W`$ where $`W_0`$ is a parabolic subgroup of $`W`$. Then the values of the $`a`$-function of $`w`$ calculated with respect to the Coxeter groups $`W_0`$ and $`W`$ coincide, see 1.9 (d). (iii) Let $`w\in W_s`$. The element $`\underset{¯}{\overset{~}{H}}_w`$ acts trivially on $`S^i(V_s)`$ for $`i<a(w)`$, see . The space $`S^{a(w)}(V_s)`$ contains exactly one irreducible component (the special representation) such that the elements $`\underset{¯}{\overset{~}{H}}_{w^{}}`$, $`w^{}\sim _{LR}w`$, act nontrivially on it, see loc. cit. Moreover, these elements generate an action of the full matrix algebra on this component, see Chapter 5. We will say that this special representation corresponds to $`w`$. Convention. The equivalence relation $`\sim _{LR}`$ depends on the ambient group, e.g. if $`w_1,w_2\in W_s`$ then $`w_1\sim _{LR}w_2`$ in $`W`$ does not imply $`w_1\sim _{LR}w_2`$ in $`W_s`$. In what follows the equivalence relation $`\sim _{LR}`$ is considered with respect to $`W_s`$. In spite of this we apply the notation $`\le _{LR}`$ with respect to $`W`$. We hope that this does not cause ambiguity in what follows. ### 2.3. Let $`\mathrm{\Delta }(\lambda )=\underset{\alpha \in R_+}{\prod }\frac{\langle \lambda ,\alpha ^{\vee }\rangle }{\langle \rho ,\alpha ^{\vee }\rangle }`$ be the Weyl polynomial. For any $`w\in W_s`$ and $`y\in W_f`$ let us consider the following polynomial in $`\lambda `$ and $`\mu `$: $$\mathrm{\Delta }(y,W_s,w,\mu ,\lambda )=\underset{x\le w}{\sum }p_{x,w}\mathrm{\Delta }(\mu +y\overline{x}y^{1}\lambda ).$$ Lemma. The lowest degree term of $`\mathrm{\Delta }(y,W_s,w,\mu ,\lambda )`$ in $`\mu `$ has degree $`a(w)`$. Proof. It is well known that the polynomial $`\mathrm{\Delta }(\lambda )`$ is skew-symmetric with respect to the $`W_f`$-action: $`\mathrm{\Delta }(y\lambda )=(1)^y\mathrm{\Delta }(\lambda )`$. Hence $$\mathrm{\Delta }(y,W_s,w,\mu ,\lambda )=\mathrm{\Delta }(1,W_s,w,y^{1}\mu ,y^{1}\lambda ).$$ So for the proof of the Lemma it is enough to consider the case $`y=1`$.
Using the skew-symmetry of $`\mathrm{\Delta }(\lambda )`$ with respect to the $`W_f`$-action once again, we have: $`(*)\mathrm{\Delta }(1,W_s,w,\mu ,\lambda )={\displaystyle \underset{x\le w}{\sum }}p_{x,w}(1)^x\mathrm{\Delta }(\overline{x}^{1}\mu +\lambda ).`$ The element $`\underset{x\le w}{\sum }p_{x,w}(1)^xx^{1}`$ equals $`\underset{¯}{\overset{~}{H}}_{w^{1}}`$, see e.g. , proof of Theorem 2.7. Since $`a(w)=a(w^{1})`$ (see ), the result follows from 2.2. ### 2.4. Now we consider $`\mathrm{\Delta }(\mu +\lambda )`$ as a polynomial in two variables $`\mu ,\lambda \in V`$. The action of the Weyl group $`W_f`$ on the space $`S^{\bullet }(V\oplus V)`$ of all polynomials in two variables $`\mu `$ and $`\lambda `$ via the variable $`\mu `$ is well-defined and preserves the degrees of polynomials with respect to both $`\mu `$ and $`\lambda `$. Lemma. Let $`W_s`$ act on the polynomial $`\mathrm{\Delta }(\mu +\lambda )`$ by the rule $`x\mathrm{\Delta }(\mu +\lambda )=\mathrm{\Delta }(\overline{x}\mu +\lambda )`$. Then the representation generated by the summand of degree $`a(w)`$ in $`\mu `$ contains the special representation corresponding to $`w^{1}`$. Proof. Let $`E_1\subset S^{a(w)}(V)`$ be the special representation of $`W_s`$ corresponding to $`w^{1}`$. According to sect.3, the $`W_f`$-representation $`E`$ generated by $`E_1`$ is irreducible, occurs with multiplicity 1 in the space of polynomials of degree $`a(E_1)=a(w)`$ and does not occur in the spaces of polynomials of lower degree. Moreover, $`E`$ lies in the space of harmonic polynomials, which is identified with the cohomology of the flag variety $`\text{H}^{2a(w)}(G/B)`$. Hence the Lemma is reduced to the following statement: #### 2.4.1. Lemma. (R. Bezrukavnikov) Let $`W_f`$ act on the polynomial $`\mathrm{\Delta }(\mu +\lambda )`$ by the rule $`x\mathrm{\Delta }(\mu +\lambda )=\mathrm{\Delta }(x\mu +\lambda )`$. Then the representation generated by the summand of degree $`i`$ in $`\mu `$ contains any irreducible constituent of $`\text{H}^{2i}(G/B)`$. Proof. We identify the cohomology space $`\text{H}^{\bullet }(G/B\times G/B)`$ with the space of harmonic polynomials in two variables $`\mu `$ and $`\lambda `$ (with respect to the group $`W\times W`$). It is well known that the diagonal class is represented by $`\mathrm{\Delta }(\mu +\lambda )`$. Using Poincaré duality we identify $`\text{H}^{\bullet }(G/B\times G/B)`$ with $`\text{End}(\text{H}^{\bullet }(G/B))`$ (this identification is not $`W\times W`$-equivariant since the fundamental class is $`W`$-antiinvariant, but it is $`W`$-equivariant with respect to the action of the first copy of $`W`$). Now any vector $`v\in \text{H}^{\bullet }(G/B)`$ defines a $`W`$-equivariant map $`\text{End}(\text{H}^{\bullet }(G/B))\to \text{H}^{\bullet }(G/B)`$, $`x\mapsto xv`$, where $`W`$ acts on $`\text{End}(\text{H}^{\bullet }(G/B))=\text{H}^{\bullet }(G/B)\otimes (\text{H}^{\bullet }(G/B))^{}`$ via the first factor, and under this map $`1\mapsto v`$. The diagonal class $`\mathrm{\Delta }(\mu +\lambda )`$ corresponds to $`1\in \text{End}(\text{H}^{\bullet }(G/B))`$ and the summand of degree $`i`$ in $`\mu `$ corresponds to $`1\in \text{End}(\text{H}^{2i}(G/B))`$. The result follows.
### 2.5. Let $`N`$ denote the degree of the polynomial $`\mathrm{\Delta }(\lambda )`$. Lemma. Let us fix $`\mu `$ such that $`\langle \mu ,\alpha ^{\vee }\rangle \ne 0`$ for any $`\alpha \in R`$. Then there exists $`w^{}\in W_s`$, $`w^{}\sim _{LR}w`$, such that the summand of $`\mathrm{\Delta }(y,W_s,w^{},\mu ,\lambda )`$ of degree $`Na(w)`$ in $`\lambda `$ is nontrivial. Proof. We may and will assume that $`y=1`$. By the formula $`(*)`$ we have $`\mathrm{\Delta }(1,W_s,w_1,\mu ,\lambda )=\underset{¯}{\overset{~}{H}}_{w_1^{1}}\mathrm{\Delta }(\mu +\lambda )`$ where $`W_s`$ acts on $`\mathrm{\Delta }(\mu +\lambda )`$ via the variable $`\mu `$. Since the elements $`\underset{¯}{\overset{~}{H}}_{w_1^{1}}`$, $`w_1\sim _{LR}w`$, generate an action of the full matrix algebra on the special representation corresponding to $`w^{1}`$ by 2.2, Lemma 2.4 shows that the set of summands of degree $`a(w)`$ in $`\mu `$ of $`\underset{¯}{\overset{~}{H}}_{w_1^{1}}\mathrm{\Delta }(\mu +\lambda )`$, where $`w_1`$ runs through all $`w_1\sim _{LR}w`$, contains a basis over the field of rational functions in $`\lambda `$ of the special representation corresponding to $`w^{1}`$. Evidently these summands are exactly the summands of $`\mathrm{\Delta }(1,W_s,w_1,\mu ,\lambda )`$ of degree $`Na(w)`$ in the variable $`\lambda `$. Our Lemma claims that this set contains at least one nonzero element when we specialize $`\mu `$ to any weight satisfying the conditions of the Lemma. Consider the ideal generated by this set in the ring of polynomials in $`\mu `$ with coefficients which are rational functions in $`\lambda `$. Evidently, the Lemma is a consequence of the following statement: #### 2.5.1. Lemma. Let $`U`$ be an irreducible $`W_f`$-submodule of $`S^{\bullet }(V)`$ not contained in $`(S^+(V))^{W_f}`$. In other words, $`U`$ projects nontrivially to $`S^{\bullet }(V)/(S^+(V))^{W_f}=\text{H}^{2\bullet }(G/B)`$. Then the zero set of the ideal of $`S^{\bullet }(V)`$ generated by $`U`$ is contained in the union of the hyperplanes $`\langle \mu ,\alpha ^{\vee }\rangle =0`$, $`\alpha \in R`$. Proof. Evidently, the ideal generated by $`U`$ is $`W_f`$-invariant. By Poincaré duality, for any $`0\ne v\in \text{H}^i(G/B)`$ there exists $`v^{}\in \text{H}^{2Ni}(G/B)`$ such that $`vv^{}`$ represents the fundamental class of $`G/B`$. Hence the ideal generated by $`U`$ contains an element $`\omega \in S^N(V)`$ which projects nontrivially on $`\text{H}^{2N}(G/B)`$. The alternation $`\omega ^{}=\frac{1}{|W_f|}\underset{w\in W_f}{\sum }(1)^ww(\omega )`$ is also contained in our ideal and projects nontrivially on $`\text{H}^{2N}(G/B)`$. But $`\omega ^{}`$ should be a nonzero multiple of the Weyl polynomial $`\mathrm{\Delta }(\lambda )`$, since the Weyl polynomial is, up to scalar, the unique $`W`$-antiinvariant in $`S^N(V)`$. The Lemma is proved. ### 2.6. Let $`\underset{¯}{A}\subset W`$ be a two-sided cell. Choose $`W_s`$ such that $`W_s\cap \underset{¯}{A}\ne \varnothing `$ (this is possible by Theorem 4.8(d)). Let us fix $`w_1\in W_s\cap \underset{¯}{A}`$. We choose $`y\in W^f`$ minimal with the property: (\**) For some $`w\in W_s`$ such that $`w\sim _{LR}w_1`$ the summand of $`\mathrm{\Delta }(\overline{y},W_s,w,yp\mu _s,\overline{y}\lambda )`$ of degree $`Na(w)`$ in $`\lambda `$ is nonzero. By Lemma 2.5 such $`y`$ exists, since there exists $`y\in W^f`$ such that $`y\mu _s`$ lies strictly inside the dominant Weyl chamber. In the following Lemma we use the notations of . Lemma. Let $`y\in W^f`$ and $`w\in W_s`$ be as above. Then the element $`\underset{¯}{N}_y\underset{¯}{H}_w\in 𝒩`$ is a sum of elements $`\underset{¯}{N}_x`$, $`x\le _{LR}\underset{¯}{A}`$, with positive integral coefficients, and hence can be considered as the character of a tilting module in a regular block. Proof.
By the formulae in the end of Proposition 3.4 of we have: $$\underset{¯}{N}_1\underset{¯}{H}_x=\{\begin{array}{cc}\underset{¯}{N}_x& \text{if }x\in W^f\\ 0& \text{if }x\notin W^f.\end{array}$$ So, $`\underset{¯}{N}_y\underset{¯}{H}_w=\underset{¯}{N}_1\underset{¯}{H}_y\underset{¯}{H}_w`$ and the Lemma follows from the definition of cells, together with the positivity properties of multiplication in the Iwahori-Hecke algebra, see e.g. §3. ### 2.7. Proof of the Main Theorem We can rewrite the element $`\underset{¯}{N}_y\underset{¯}{H}_w`$ as $$\underset{¯}{N}_y\underset{¯}{H}_w=\underset{¯}{N}_1\underset{y_1\in W^f,y_1\le y}{\sum }n_{y_1,y}\underset{x\le w}{\sum }p_{x,w}H_{y_1x}.$$ Let $`\lambda _1`$ be a regular weight from the fundamental alcove. The dimension of the tilting module $`T`$ in the linkage class of $`\lambda _1`$ with character given by $`\underset{¯}{N}_y\underset{¯}{H}_w`$ is equal to $$\underset{y_1\in W^f,y_1\le y}{\sum }n_{y_1,y}\underset{x\le w}{\sum }p_{x,w}\mathrm{\Delta }(y_1x\lambda _1+\rho ).$$ Now let us write $`\lambda _1=\rho +p\mu _s+\lambda `$. We have $$\text{dim}T=\underset{y_1\le y}{\sum }n_{y_1,y}\underset{x\le w}{\sum }p_{x,w}\mathrm{\Delta }(y_1p\mu _s+\overline{y_1x}\lambda )=$$ $$=\underset{y_1\le y}{\sum }n_{y_1,y}\mathrm{\Delta }(\overline{y}_1,W_s,w,y_1p\mu _s,\overline{y}_1\lambda ).$$ According to (\**), for some $`w\sim _{LR}w_1`$ the polynomial $`\text{dim}T`$ has a nonvanishing summand of degree $`Na(\underset{¯}{A})`$ in $`\lambda `$. Hence, for $`p\gg 0`$ it is possible to choose $`\lambda `$ such that this summand is not divisible by $`p`$ and $`\lambda _1`$ lies in the lowest alcove. The Main Theorem is proved. Acknowledgements. This note is a result of conversations with many mathematicians. Especially I wish to thank R. Bezrukavnikov, M. Finkelberg, J. Humphreys, J. C. Jantzen and G. Rybnikov for their generous help and extremely useful discussions. I am grateful to the referee for careful reading of the paper and useful comments.
no-problem/9902/astro-ph9902023.html
ar5iv
text
# 1 Introduction When the strategy for the Las Campanas Redshift Survey (LCRS; ) was first being formulated in the mid-1980’s, the goals were two-fold: first, to sample a fair volume of the local Universe (what are the largest coherent structures? at what point does the Universe start to look smooth?), and, second, to study clustering on large scales ($`\xi _{\mathrm{gg}}(s)`$, void statistics, …). Therefore, it was decided to survey an unfilled “checkerboard” pattern in both the north and the south galactic caps out to a redshift of $`z\sim 0.2`$. To facilitate these ends, it was decided to use a 50-fiber multi-object spectrograph (MOS) to obtain the radial velocities. This was circa 1986/87. A few years into the survey (circa 1990/91), the survey strategy evolved into what became its final form: a set of six filled-in slices – three in the north galactic cap, and three in the south. Also during this time frame, the MOS was upgraded to 112 fibers. The maps that were gradually built up over the course of the survey eventually began to show an intriguing picture – that of a Universe which looks smooth on large scales! Visually, there was little or no evidence in the LCRS slices for high-contrast coherent structures on scales $`\mathrm{}100h^{1}`$ Mpc. Quantitative evidence has tended to support this initial view (e.g., see Fig. 8 of and Fig. 1 of ). At the other extreme, on relatively small scales ($`<10h^{1}`$ Mpc), the LCRS has generally confirmed what had been observed in previous, shallower surveys (, ). But what about scales near the transition to homogeneity (50 – 200$`h^{1}`$ Mpc)? What can the LCRS tell us about clustering on these scales? ## 2 Results On very large scales ($`>50h^{1}`$ Mpc), it makes sense to think of an individual LCRS slice as a 2-dimensional plane, since the third dimension of slice width does not contribute much to our understanding of clustering on these scales (Fig. 1). Now consider a “toy” Universe composed of osculating spherical voids of diameter $`100h^{1}`$ Mpc. In such a Universe, an LCRS slice would intersect several voids at various random positions. Clearly, the measured cross-sectional diameters of these intersections will never overestimate the true diameter of the spheres (in this case, $`100h^{1}`$ Mpc). On average, the diameter of the cross-section of a sphere randomly intersecting a plane is $$\langle D_{\mathrm{cross}\text{-}\mathrm{section}}\rangle =\frac{\pi }{4}D_{\mathrm{sphere}}\approx \frac{3}{4}D_{\mathrm{sphere}}.$$ (1) Thus, in such a “toy” Universe, we would expect that an LCRS slice would typically underestimate the average void diameter by about $`1/4`$. This projection effect in real space is basically the same as the aliasing of power from large- to small-scales in Fourier space. In real-space, however, this effect seems rather more benign and correctable. With the above discussion in mind, what – if any – clustering do we see on scales of 50 – 200$`h^{1}`$ Mpc? First, consider a couple of visual representations of the LCRS $`-12^{\circ }`$ Slice, the most densely and most homogeneously sampled of the six LCRS slices. Looking at a standard velocity-RA plot of this slice (Fig. 2), we may notice that the largest coherent high-contrast structures tend to form the walls of underdense regions (“voids”) 50 – 100$`h^{1}`$ Mpc in diameter. To remove radial and field-to-field selection effects, one can generate a smoothed number-density contrast map (Fig.
3); therein, our initial suspicions are confirmed: high-density ($`\delta \rho /\rho >2.5`$) regions tend to surround low-density ($`\delta \rho /\rho <1`$) “voids” with diameters of 50 – 100$`h^{1}`$ Mpc. Second, we note that both the LCRS 2D power spectrum and the large-scale ($`>10h^{1}`$ Mpc) 3D spatial autocorrelation function indicate features of excess clustering on scales $`\sim 100h^{1}`$ Mpc. We note that the results from the 2D power spectrum show higher statistical significance, since the signal at these scales (see Fig. 1) is essentially 2D; at these scales, some of the signal is washed out in the 3D autocorrelation function analysis. Third, Doroshkevich et al.’s core-sampling analysis of the LCRS measures the mean free path between 2D sheets and between 1D filaments. Comparing ’s Fig. 12 with our Fig. 3, it is apparent that their sheet-like structures correspond roughly with regions of smoothed $`\delta \rho /\rho >2.5`$; their rich filaments, with regions of smoothed $`1.5<\delta \rho /\rho <2`$. Doroshkevich et al. measure the mean free path between sheets to be $`\sim `$ 80 – 100 $`h^{1}`$ Mpc . Fourth, Einasto et al. give evidence that Abell Clusters in rich superclusters tend to lie within a 3D “chessboard” with a grid size of $`120h^{1}`$ Mpc . Einasto and his colleagues are presently performing a similar analysis for LCRS galaxies in rich environments ; in fact, our Fig. 3 is from the early stages of that analysis. Results, however, are still pending. ## 3 Conclusions There does seem to be something going on in the LCRS at a scale of $`\sim 100h^{1}`$ Mpc, but this result needs confirmation using other techniques (e.g., a 2D $`\xi _{\mathrm{gg}}`$) and using other large surveys covering different regions and/or having different selection effects (e.g., the ESP , 2dF , SDSS , …). Acknowledgements. We wish to thank the following for fruitful discussions regarding the topic of this paper: Andrei Doroshkevich, Jaan Einasto, Richard Fong, Yasuhiro Hashimoto, Robert Kirshner, Stephen Landy, Augustus Oemler, and Paul Schechter.
no-problem/9902/cond-mat9902340.html
ar5iv
text
# Quantum Transport in Disordered Mesoscopic Ferromagnetic Films ## Abstract The effect of impurity and domain-wall scattering on the electrical conductivity of disordered mesoscopic magnetic thin films is studied by use of computer simulation. The results indicate a reduction of resistivity due to a domain wall, which is consistent with the explanation in terms of the dephasing caused by the domain wall. PACS number: 73.50.-h The electrical transport properties of ferromagnetic metals have attracted much interest recently, see e.g. . In the present work we study the quantum transport in mesoscopic wires that contain a magnetic domain wall. The motion of the electrons passing through a wire that contains a magnetic domain wall is affected by various physical processes. As the electron approaches the domain wall it experiences a change in potential energy, leading to a reflection and hence to a reduction of the conductivity. However, unless the domain wall is unrealistically narrow (compared to the Fermi wavelength of the electrons) this reduction has been shown to be negligibly small in the case of a spin-independent collision time . In the presence of a domain wall the spin of the electron will change as the electron passes through the wire. This rotation will lead to a mixing of spin-up and spin-down components. Assuming that the (Boltzmann) collision time is spin-dependent, this mixing then results in an increase of the resistivity, a scenario that has been proposed to explain the experimental results on thin Co films at room temperature . Spin-dependent scattering is the essential ingredient in models for electron transport in magnetic materials that exhibit giant magnetoresistance (GMR) . In disordered systems at low temperatures the quantum interference, which becomes important as a result of random spin-independent impurity scattering, also strongly influences the electron transport properties. Theoretical work has shown that the domain wall suppresses the interference (and thus weak localization) due to impurity scattering, resulting in a decrease of the resistivity . Very recently there have been several experimental studies of the resistivity in mesoscopic wires of ferromagnetic metals . The results suggest a reduction of resistivity due to a domain wall, and interestingly the effect increases upon lowering the temperature; below 50 K and 20 K , respectively. This reduction might be related to the quantum decoherence caused by the wall, but other, classical mechanisms for the reduction have also been proposed , and further studies are needed to clarify its origin. The purpose of the present paper is to study the interplay of the domain wall and spin-independent impurity scattering in more detail and to compare quantitatively the theoretical prediction of the Kubo-formula approach with first-principles quantum mechanical calculations. The geometry of the model system is shown in Fig.1. The electrons are assumed to move in a two-dimensional metallic strip with a single magnetic domain wall. The Hamiltonian for this model reads $$H=\frac{1}{2m^{\ast }}\left(𝐩e𝐀/c\right)^2\mu _B\sigma 𝐌+V,$$ (1) where $`𝐩=(p_x,p_y)`$ is the momentum operator of an electron with effective mass $`m^{\ast }`$, and $`\sigma =(\sigma ^x,\sigma ^y,\sigma ^z)`$ denotes the Pauli spin matrices. $`𝐌=𝐌(x,y)`$ describes the magnetization in the material and $`V=V(x,y)`$ represents the potential due to non-magnetic impurities.
We neglect the vector potential $`𝐀`$ resulting from the sum of the atomic magnetic-dipole contributions because, in the case of a thin wire, it has little effect on the electron transport. Following , we assume that the magnetic domain wall can be described by $$M_x(x,y)=M_0\mathrm{sech}\left(\frac{xx_0}{\lambda _w}\right)$$ (2) and $$M_z(x,y)=M_0\mathrm{tanh}\left(\frac{xx_0}{\lambda _w}\right),$$ (3) with $`x_0`$ the center of the domain wall and $`\lambda _w`$ a measure of its extent. Note that $`M_z^2(x,y)+M_x^2(x,y)=M_0^2`$, so that at each point $`(x,y)`$ the magnitude of the magnetization is constant. For a schematic picture of how the magnetization changes with $`x`$, see Fig.1. As a model for each impurity we take a square potential barrier, i.e. $$V_n(x,y)=\{\begin{array}{cc}0,& \hfill (x,y)\notin S_n\\ V_0,& \hfill (x,y)\in S_n\end{array}$$ (4) where $`S_n`$ denotes the area of the square with label $`n`$. The position of each square is drawn from a uniform random distribution, rescaled to an area of size $`L_x\times L_y`$ (see Fig.1). The concentration of impurities $`c`$ is given by $`c=\sum _{n=1}^NS_n/(L_xL_y)`$ where $`N`$ denotes the total number of impurities. The potential entering in Eq. (1) is given by $`V=V(x,y)=\sum _{n=1}^NV_n(x,y)`$. We will follow two routes to study the effect of the domain wall on the electrical conductivity: 1) by solving the time-dependent Schrödinger equation (TDSE) and 2) through an extension of the Kubo-formula-based theory of Tatara and Fukuyama . The results of these two fundamentally different approaches can be compared by making use of the Landauer formula , relating the conductivity $`\sigma `$ to the transmission coefficient $`T`$. In the TDSE approach the procedure to calculate the transmission coefficient $`T`$ consists of three steps. First the incoming electrons are represented by a wave packet with average momentum $`𝐩=ℏ𝐤=(ℏk_F,0)`$. For concreteness we take this initial state to represent electrons with spin up only, i.e. $$\mathrm{\Psi }(x,y,t=0)=(\psi _{\mathrm{}}(x,y,t=0),\psi _{\mathrm{}}(x,y,t=0))=(\psi _{\mathrm{}}(x,y,t=0),0),$$ (5) and $`\int dxdy|\mathrm{\Psi }(x,y,t=0)|^2=1`$. The second step involves the solution of the TDSE $$iℏ\frac{\mathrm{\Psi }(x,y,t)}{t}=H\mathrm{\Psi }(x,y,t)$$ (6) for sufficiently long times. The method we use to solve the TDSE has been described at length elsewhere , so we omit the details here. As indicated in Fig.1, we place imaginary detection screens at various $`x`$-positions. The purpose of each screen is to record the accumulated current that passes through it (the wave function is not modified by this detection process). Dividing the transmitted current (detector 2, see Fig.1) by the incident current (detector 1) yields the transmission coefficient $`T`$. As the simulation package that we use solves the TDSE subject to Dirichlet boundary conditions, some precautions have to be taken in order to suppress artifacts due to reflections from the boundaries at $`x=0`$ and $`x=L`$. We have chosen to add to $`V`$ an imaginary linear potential that is non-zero near the edges of the sample, as indicated by the gray strips in Fig.1, and found that the resulting absorption of intensity is adequate for the present purpose. For numerical work it is convenient to rewrite the TDSE (6) in a dimensionless form. Taking the Fermi wavelength $`\lambda _F`$ as the characteristic length scale of the electrons, the energy is measured in units of the Fermi energy $`E_F=h^2/(2m\lambda _F^2)`$ and time in units of $`ℏ/E_F`$.
For our model simulations we have taken $`L=100\lambda _F`$, $`L_y=6.5\lambda _F`$, $`\mu _BM_0=0.4E_F`$, $`V_0=100E_F`$ and $`S_n=0.25\lambda _F^2`$. In Figs.2 and 3 we show some snapshots of the probability distribution for the spin-up (top) and spin-down (bottom) part of the electron wave, moving through an impurity-free region. Initially, at $`t=0`$, the probability for having electrons with spin-down is zero. As the wave moves to the right, the $`M_x`$ component of the magnetization causes the spin to rotate, resulting in a conversion of electrons with spin-up into electrons with spin-down. For realistic values of the strength (i.e. $`\mu _BM_0<E_F`$) and width of the domain wall (i.e. $`\lambda _w>\lambda _F`$) the conversion will be almost 100 $`\%`$ (for all practical purposes), which leads to a negligibly small reflection . We have chosen $`\lambda _w=2\lambda _F,\mathrm{\dots },16\lambda _F`$, which may be reasonable in the case of a very narrow wire or a strong anisotropy. In the presence of impurities two new effects appear: 1. As a result of the scattering by the potential barriers electrons will be reflected, leading to a reduction of the transmission coefficient in the sense of Boltzmann transport. At the same time interference among scattered electrons leads to weak localization, and this quantum mechanical effect also suppresses the transmission. Obviously these effects are present in the absence of a domain wall as well. 2. As a result of the presence of the domain wall, electrons that are backscattered and have their spin reversed due to the wall no longer interfere with electrons whose spin is unchanged. Hence the effect of the domain wall is to reduce the enhanced backscattering due to the interference. On the basis of this argument it is to be expected that in the presence of a domain wall the transmission coefficient can be larger than in its absence. In our simulations the contribution due to quantum interference effects resulting from the presence of the domain wall can be separated from all other contributions by a simple procedure: we compute the ratio of the transmission with $`(T)`$ and without $`(T_0)`$ a domain wall. Some representative results of our calculations are depicted in Figs.4-8. The simulation data shown are obtained from a single realization of the impurity distribution. No ensemble averaging of the transmission coefficient has been performed. The transmission in the absence of the wall ($`T_0`$) is plotted in Fig. 4 as a function of the impurity concentration in the case of $`L_x=16`$. In Figs. 5 and 6 we show the ratio $`T/T_0`$ as a function of the impurity concentration $`c`$, for $`L_x=8`$ and $`L_x=16`$ respectively. The two sets of simulation data in Fig. 5 correspond to different impurity configurations, and the difference between the two is due to a different interference pattern. The enhancement alluded to above is clearly present. The effect of conversion of the electron spin by the wall is amplified considerably by the quantum interference at larger impurity concentration: the larger the scattering, the more effective the domain wall is in converting electrons with spin-up into electrons with spin-down. In Figs.7 and 8 we present results for domain walls of different width $`\lambda _w`$, keeping fixed the area in which the impurities are present ($`L_x=4`$ and $`L_x=8`$, respectively). The net result of increasing $`\lambda _w`$ in this case is to reduce the effectiveness of the $`M_x\sigma ^x`$ term in the Hamiltonian.
Indeed, by increasing $`\lambda _w`$, $`M_x(x,y)`$ becomes smoother, hence less effective in the sense that fewer electrons flip their spin. Let us compare these results with the analytical result based on the Kubo formula, which is obtained by extending the theory of Tatara and Fukuyama . In the absence of a domain wall the conductivity in two dimensions, with the effect of weak localization taken into account, is given by $`\sigma _0`$ $`=`$ $`{\displaystyle \frac{e^2n\tau }{m}}-{\displaystyle \frac{2e^2}{\pi \hbar }}{\displaystyle \frac{1}{V}}{\displaystyle \sum _q}{\displaystyle \frac{1}{q^2}}`$ (7) $`=`$ $`{\displaystyle \frac{e^2}{h}}n\lambda _Fl\left(1-{\displaystyle \frac{\lambda _F}{l}}{\displaystyle \frac{2}{\pi ^3}}{\displaystyle \frac{L_x}{L_y}}\right),`$ (8) where $`n`$ is the electron density, $`\tau `$ and $`l(\equiv \hbar k_F\tau /m)`$ being the elastic lifetime and the mean free path, respectively. We have carried out the $`q`$-summation in one dimension, since $`L_y`$ is much smaller than the inelastic diffusion length in the absence of the wall, which should be regarded as infinite in the simulation here. The transmission coefficient $`T_0`$ is related to the conductivity by $`\sigma _0=(e^2/h)(L_x/L_y)[T_0/(1-T_0)]`$ and thus $$T_0\approx \frac{\beta }{\beta +\nu c}\left[1-\frac{\nu c^2}{\beta +\nu c}\frac{2}{\pi ^3}\frac{1}{\alpha }\frac{L_x}{L_y}\right],$$ (9) where $`\beta \equiv n\lambda _F^2\alpha `$, $`\nu \equiv (L_x/L_y)`$ and the mean free path is related to $`c`$ through $`l=\alpha \lambda _F/c`$. We treat $`\alpha `$ and $`\beta `$ as fitting parameters. The solid curve in Fig. 4 is obtained for $`\alpha =0.05`$ and $`\beta =6`$ (or equivalently $`l\approx 0.5\lambda _F\approx 3k_F^{-1}`$ for $`c=0.1\%`$, which appears to be reasonable). The dotted line is the classical contribution to $`T_0`$ (i.e. the first term in (9)) and it is larger than $`T_0`$ at large $`c`$. In the presence of a domain wall the conductivity is expressed as $$\sigma =\frac{e^2}{h}nl\lambda _F\left[1-\frac{1}{2\pi ^2}\frac{\lambda _F^2}{\lambda _wL}-\frac{2}{\pi ^2}\frac{\lambda _F}{l}\left(\frac{L_w}{L_y}\mathrm{tan}^{-1}\frac{L_x}{\pi L_w}\right)\right],$$ (10) where the second term is the classical contribution from the wall reflection and the third term is a weak localization correction with the effect of the wall included. The effect of the wall is to cause dephasing among the electrons, as represented by the inelastic diffusion length $`L_w\equiv \sqrt{D\tau _w}`$. Here $`\tau _w`$ is the inelastic lifetime due to the spin-flip scattering by the wall, $`\tau _w^{-1}\approx (\lambda _FE_F)^2/(24\pi ^2\lambda _wL_x\mathrm{\Delta }^2\tau )`$ ($`\mathrm{\Delta }\equiv \mu _BM_0`$ denoting the Zeeman splitting). The expression for $`T/T_0`$ is then obtained as $$\frac{T}{T_0}=1+\frac{\nu c^2}{\beta +\nu c}\frac{1}{\alpha }\left[\frac{2}{\pi ^3}\frac{L_x}{L_y}\left(1-\frac{\pi L_w}{L_x}\mathrm{tan}^{-1}\frac{L_x}{\pi L_w}\right)-\frac{1}{2\pi ^2}\frac{\lambda _F^2}{\lambda _wL_x}\right].$$ (11) The result is plotted as solid lines in Figs. 5-8. The classical contribution (the last term) is negligibly small compared with the quantum correction in the region we are interested in, and thus the enhancement of the transmission by the wall is seen. We have used the same value of the parameter $`\beta =6`$, but with different $`\alpha `$ ($`\alpha =0.05`$ for Fig. 5 but $`\alpha =0.02`$ for Figs. 6-8). We think this dependence of $`\alpha `$ on $`L_x`$ is due to the ambiguity in relating the mean free path in the Kubo formula to $`c`$ in the simulation.
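For a concrete feel for these expressions, the following sketch evaluates Eqs. (9) and (11) numerically. It is an illustration only: the fit parameters are the ones quoted above, but the value chosen for the dephasing length $`L_w`$ is an assumption (in the theory it follows from $`\tau _w`$ and the diffusion constant, which we do not compute here).

```python
import numpy as np

def t0(c, lx, ly, alpha, beta):
    """Transmission without a wall, Eq. (9): the Boltzmann term times
    the weak-localization correction."""
    nu = lx / ly
    classical = beta / (beta + nu * c)
    wl = 1.0 - (nu * c**2 / (beta + nu * c)) * (2.0 / np.pi**3) / alpha * (lx / ly)
    return classical * wl

def t_ratio(c, lx, ly, l_w, lam_w, alpha, beta):
    """Enhancement T/T0 in the presence of a wall, Eq. (11);
    all lengths are in units of lambda_F."""
    nu = lx / ly
    wl = (2.0 / np.pi**3) * (lx / ly) * (
        1.0 - (np.pi * l_w / lx) * np.arctan(lx / (np.pi * l_w)))
    classical = 1.0 / (2.0 * np.pi**2 * lam_w * lx)
    return 1.0 + (nu * c**2 / (beta + nu * c)) / alpha * (wl - classical)

# The enhancement grows with impurity concentration (cf. Figs. 5 and 6);
# l_w = 5 lambda_F is an assumed, purely illustrative dephasing length.
for c in (0.05, 0.1, 0.2):
    print(c, t_ratio(c, lx=16.0, ly=6.5, l_w=5.0, lam_w=2.0, alpha=0.02, beta=6.0))
```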
The results of Eq. (11) thus obtained describe the simulation data well. Acknowledgements This work was partially supported by the “Stichting Nationale Computer Faciliteiten (NCF)”, the NWO Priority Program on Massive Parallel Processing and a Grant-in-Aid for Scientific Research of the Japanese Ministry of Education, Science and Culture.
# BaCu2Si2O7: a new quasi 1-dimensional $`S=1/2`$ antiferromagnetic chain system ## I Introduction The amazing structural diversity of copper-oxide compounds makes these materials very useful as model systems for fundamental studies of low-dimensional magnetism. In most cases their unique properties result from the particular topologies of the corresponding spin networks formed by magnetic Cu<sup>2+</sup> ions. Examples of such networks are found in CuGeO<sub>3</sub> (linear chain: spin-Peierls), CaCuGe<sub>2</sub>O<sub>6</sub> (isolated dimer), BaCuSi<sub>2</sub>O<sub>6</sub> (two-dimensional bilayer), Sr<sub>2</sub>CuO<sub>3</sub> (rigid linear chain), SrCu<sub>2</sub>O<sub>3</sub> (two-leg ladder), and SrCuO<sub>2</sub> (edge-shared double linear chain), etc. Magnetic interactions in all these systems are determined by the microscopic spatial coordination of Cu<sup>2+</sup> and O<sup>2-</sup>. The so-called corner-sharing (∠Cu-O-Cu ≈ 180°) and edge-sharing (∠Cu-O-Cu ≈ 90°) configurations have been extensively studied. A far richer spectrum of properties is expected from systems with intermediate bond angles, and the search for realizations of such coupling geometries has become important. The rather poorly studied BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> is a good example of a zig-zag chain network of corner-sharing CuO<sub>4</sub> plaquettes (Fig. 1) with an intermediate value of the ∠Cu-O-Cu bond angle, 124°. The crystal structure is orthorhombic, space group $`Pnma`$, with the cell constants being $`a=6.862(2)`$ Å, $`b=13.178(1)`$ Å, and $`c=6.897(1)`$ Å. The spin chains run along the $`c`$ crystallographic axis. In the present paper we report magnetic susceptibility measurements and inelastic neutron scattering experiments on this material. We find it to be an excellent model of weakly-coupled quantum $`S=1/2`$ chains, exhibiting a crossover from 1-D behavior at high temperatures to 3-D behavior and long-range AF (Néel) order at low temperatures. As anticipated for a system with intermediate values of bond angles, the magnetic properties are extremely sensitive to slight modifications in atomic positions. This is demonstrated by preliminary susceptibility results for BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>, which, unlike the isomorphous BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>, orders in a weak-ferromagnetic structure with a small net magnetization. ## II Experimental Polycrystalline samples of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> (or BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>) were prepared by the conventional solid-state reaction method, using BaCO<sub>3</sub>, SiO<sub>2</sub> (or GeO<sub>2</sub>), and CuO as starting materials. The polycrystalline rods were crushed to fine powder for the neutron powder diffraction studies. Single crystals were grown using the floating-zone technique from a sintered polycrystalline rod. In the inelastic neutron scattering experiments we utilized an as-grown single-crystal rod (5 mm diameter $`\times `$ 10 mm long). A small fragment cut to a 3 mm<sup>3</sup> cube was used in the single-crystal susceptibility measurement. All magnetic susceptibility experiments were done with a commercial SQUID magnetometer ($`\chi `$-MAG7, Conductus, Co. Ltd.) in the temperature range of 2 to 300 K. Neutron powder diffraction experiments were carried out on the 400-cell CRG diffractometer D1B at the Institut Laue Langevin (Grenoble, France).
Single-crystal neutron scattering studies were performed with the ISSP-PONTA triple-axis spectrometer installed at the 5G beam hole of the Japan Research Reactor 3M at the Japan Atomic Energy Research Institute (Tokai, Japan). Pyrolytic graphite PG(002) reflections were used in the monochromator and analyzer. The collimation setup was 60’-80’-40’-80’, with a PG filter positioned after the sample. The final neutron energy was fixed at $`E_f`$ = 14.7 meV. The sample was mounted in a standard “ILL Orange” cryostat with the (0, $`k`$, $`l`$) reciprocal-space plane in the scattering plane of the spectrometer, and the data were collected in the temperature range 2 - 15 K. ## III Results ### A Magnetic susceptibility The temperature dependences of the DC magnetic susceptibility measured under $`H`$ = 10 kOe in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> are shown in Fig. 2(a). The experimental curve for BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> shows a sharp peak around 9 K and a broad maximum around 150 K. The BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> data were fitted to the theoretical Bonner-Fisher (BF) curve for a one-dimensional $`S=1/2`$ quantum antiferromagnet (solid lines). A Heisenberg exchange constant $`J\approx 280`$ K (24.1 meV) was obtained from this analysis. Note that the BF fit becomes rather poor in the low-temperature region, suggesting the onset of 3-dimensional spin correlations in this regime. To identify the low-temperature ordered structure of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> below 9.2 K, the anisotropy of the magnetic susceptibility was studied in a single-crystal sample. The measured temperature dependences are shown in Fig. 2(b). Signs of long-range Néel ordering are clearly seen. Below 9.2 K a substantial drop is observed in $`\chi _c`$, while both $`\chi _a`$ and $`\chi _b`$ change little from 9.2 K down to 2 K. This clearly points to the presence of antiferromagnetic long-range order below $`T_N`$ = 9.2 K, with the crystallographic $`c`$ axis being the magnetic easy axis of the system. Note that at $`T\to 0`$ $`\chi _c`$ retains a substantial non-zero value, suggesting a reduction of the ordered moment, presumably due to the 1-D nature of the system. As expected, the one-dimensional character of the paramagnetic phase is observed in single crystals as well. The susceptibility shows broad maxima around 180 K along all three crystallographic directions. Fitting these data to a BF curve, and assuming an anisotropic $`g`$-factor, we obtain $`g_a`$ = 2.5 and $`g_b=g_c`$ = 2.2. For BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> a BF analysis (dashed line in Fig. 2(a)) of the high-$`T`$ part of the experimental $`\chi (T)`$ curve yields $`J\approx 540`$ K (46.5 meV), i.e., much larger than for BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>. In the low-temperature region the magnetic susceptibilities of the two isomorphous compounds become qualitatively different. The drastic increase of the magnetization in BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> observed below 8.5 K is attributed to the appearance of a uniform spontaneous magnetization. This increase is particularly well seen in the low-field measurement (Fig. 2(a), inset). The saturation value of the magnetization ($`6\times 10^{-2}`$ emu/g) is orders of magnitude less than the 22.15 emu/g one would obtain if all the Cu<sup>2+</sup> spins were aligned ferromagnetically.
The clear dominance of $`c`$-axis antiferromagnetism in the paramagnetic phase of BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> suggests that the spontaneous uniform moment below 8.5 K is a result of a canted weak-ferromagnetic spin arrangement. ### B Neutron diffraction. As a first step towards determining the magnetic structure of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> we performed neutron powder diffraction experiments at 2 K and 15 K, i.e., below and above the Néel temperature. Subtraction of the powder scan data collected at these two temperatures did not reveal any sign of magnetic Bragg reflections. This failure to observe any magnetic signal in the powder experiment can be attributed to the small value of the ordered moment and to the fact that all magnetic reflections appear on top of strong nuclear peaks (see below), which is not surprising for a structure with 8 magnetic Cu<sup>2+</sup> ions per unit cell; they are thus very difficult to detect. Even in the subsequent single-crystal experiment, only limited information on the magnetic structure of the AF phase of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> could be obtained. Only one magnetic reflection, namely $`(011)`$, could be reliably measured at low temperatures. The $`(011)`$ nuclear reflection is allowed by crystal symmetry, but its intensity is fortunately relatively weak. The inset (a) in Fig. 3 shows rocking curves of the $`(011)`$ Bragg peak measured above (open circles) and below (solid circles) the magnetic transition temperature. The difference between these two scans is shown in the inset (b) of Fig. 3. The measured peak intensity is plotted against temperature in the main panel of Fig. 3. The observed increase upon cooling through $`T_N=9.2`$ K is attributed to long-range magnetic ordering. Intensities of several other Bragg peaks, including $`(031)`$, $`(013)`$, $`(051)`$, $`(071)`$, $`(053)`$, and $`(035)`$, were also measured as functions of temperature. However, the nuclear contribution to these reflections is much larger than that for $`(011)`$, and no intensity increase beyond the experimental error bars could be observed upon cooling through $`T_N`$.
Even in the vicinity of the 3-D magnetic zone-center $`(011)`$ the excitation energy appears to extrapolate to a non-zero value, i.e., the spectrum has a gap $`\mathrm{\Delta }_{(011)}1.5`$ meV. ## IV Discussion All our experimental results point to that both BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> should be considered as quasi-1D systems, dominated by strong intrachain antiferromagnetic exchange interactions. Long-range ordering at low temperatures occurs owing to a much weaker interchain coupling. The subtleties of the low-temperature magnetic structures and magnetic anisotropy, as well as the variations of the intrachain coupling constant $`J`$ in the two systems are expected to be related to the details of the microscopic atomic arrangement. ### A Microscopic structure and magnetic interactions Rather remarkable is the drastic (a factor of 2) difference in the intrachain AF exchange constants of the two isomorphous systems BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>. The AF spin chains in these species are formed by corner-sharing CuO<sub>4</sub> plaquettes, and the main contribution to AF interactions is expected to be the standard superexchange mechanism involving the shared O sites. A single chain is visualized in Fig. 6(a). Only the Cu sites and the shared oxygen sites are shown. The Cu<sup>2+</sup> ions are almost perfectly lined up. However, the Cu-O-Cu bond angle is substantially smaller than $`180^{}`$. The difference in $`J`$ is most likely due to a difference in this crucial bonding angle: $`124^{}`$ in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and $`135^{}`$ in the Ge-based system. Indeed we know that a larger bond angle is more favorable for superexchange involving oxygen. Subtle structural differences between the two systems are also to be held responsible for the difference in the low-$`T`$ behavior. In addition to single-ion anisotropy that manifests itself in the anisotropy of the $`g`$-factor and the gap in the spin wave spectrum (see discussion below), one should also consider the possibility of Dzyaloshinskii-Moriya asymmetric exchange interaction. The latter is allowed by local Cu-site symmetry in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>, and may be responsible for the weak ferromagnetism of BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>. The correlation between the microscopic structure and magnetic properties of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub> is indeed a very interesting problem. However, at the present stage we do not have sufficient information required for a more detailed discussion of this subject, and suggest that further experiments, possibly using polarized neutron diffraction, should be performed to clarify the magnetic structures in the ordered state. ### B Weakly coupled quantum spin chains Ignoring for now such subtleties as magnetic anisotropy, spin canting in the ordered state and the possibility of DM-interaction, we shall turn to discussing what makes the two new systems really valuable for our cause, namely the 1-D quantum antiferromagnetic aspect of their magnetic properties. In the following sections we shall demonstrate that the observed behavior of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> can be very well understood within the framework of existing theories for weakly-coupled $`S=1/2`$ AF chains. 
Particularly useful for describing such systems are the chain-MF (Mean Field) and the corresponding chain-RPA (Random Phase Approximation) models. These approaches were previously shown to work extremely well for a number of well-characterized compounds (see for example Refs. and references therein). #### 1 A model Hamiltonian Before we can apply these MF-RPA theories to our particular system, we have to construct a model spin Hamiltonian for BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>. A slight complication arises from a non-Bravais arrangement of magnetic ions in the crystal. Indeed, the fractional cell coordinates of Cu<sup>2+</sup> are 1) $`(0.25\delta _x,\delta _y,0.75+\delta _z)`$, 2) $`(0.25+\delta _x,\delta _y,0.25+\delta _z)`$, 3) $`(0.75\delta _x,0.5\delta _y,0.75\delta _z)`$, 4) $`(0.75+\delta _x,0.5+\delta _y,0.25\delta _z)`$, 5) $`(0.75+\delta _x,\delta _y,0.25\delta _z)`$, 6) $`(0.75\delta _x,\delta _y,0.75\delta _z)`$, 7) $`(0.25+\delta _x,0.5+\delta _y,0.25+\delta _z)`$, and 8) $`(0.25\delta _x,0.5\delta _y,0.75+\delta _z)`$, where $`\delta _x=0.028`$, $`\delta _y=0.004`$, and $`\delta _z=0.044`$. The shifts $`\delta _x`$ and $`\delta _z`$ present few problems, as they do not disturb the equivalence of nearest-neighbor Cu-Cu bonds along the $`a`$ and $`c`$ axes, respectively. $`\delta _y`$, on the other hand, leads to alternating nearest-neighbor Cu-Cu distances along the $`b`$ axis. At present we shall ignore this slight alternation, assuming $`\delta _y0`$ and postulating nearest-neighbor Cu-Cu bonds to be equivalent along the $`b`$ axis as well. The simplest nearest-neighbor Heisenberg Hamiltonian can then be written as $$\widehat{H}=\underset{m,n,p}{}𝐒_{m,n,p}\left[J_x𝐒_{m+1,n,p}+J_y𝐒_{m,n+1,p}+J_z𝐒_{m,n,p+1}\right]$$ (1) Here $`m`$, $`n`$, and $`p`$ enumerate the spins $`𝐒_{m,n,p}`$ along the $`a`$, $`b`$, and $`c`$ axes, respectively. As BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> is clearly a quasi 1-D antiferromagnet, $`J_z>0`$ and $`|J_z||J_y|,|J_x|`$. #### 2 Reduction of ordered moment One of the most important predictions of the chain-MF model is the relation between ordering temperature, magnitude of interchain coupling, and saturation moment at $`T0`$. At the MF level the actual signs of magnetic interactions are not important and these properties are determined by the mean interchain coupling constant $`|J_{}|=(|J_x|+|J_y|)/2`$. Schulz gives the relation between $`m_0`$, $`T_N`$ and $`|J_{}|`$: $$|J_{}|=\frac{T_N}{1.28\sqrt{\mathrm{ln}(5.8J/T_N)}},$$ (2) $$m_01.017\sqrt{\frac{|J_{}|}{J}}.$$ (3) In our case of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> the susceptibility data ($`T_N`$ = 0.793 meV and $`J`$ = 24.1 meV) gives $`m_0=0.11\mu _B`$. To test this theoretical prediction we have to determine the spin structure in the ordered state. Unfortunatelly, with only one measured magnetic reflection we can only speculate on this. Let us consider an approximate collinear magnetic state, with all spins strictly along $`(001)`$, as suggested by $`\chi (T)`$ measurements. The two simplest possible AF structures (A and B) consistent with the presence of a substantial $`(011)`$ magnetic peak are shown in Figs. 7(a) and (b). We can estimate the magnitude of saturation moment that would be required in structures A and B to produce a $`(011)`$ magnetic peak of measured intensity. 
To test this theoretical prediction we have to determine the spin structure in the ordered state. Unfortunately, with only one measured magnetic reflection we can only speculate on this. Let us consider an approximate collinear magnetic state, with all spins strictly along $`(001)`$, as suggested by the $`\chi (T)`$ measurements. The two simplest possible AF structures (A and B) consistent with the presence of a substantial $`(011)`$ magnetic peak are shown in Figs. 7(a) and (b). We can estimate the magnitude of the saturation moment that would be required in structures A and B to produce a $`(011)`$ magnetic peak of the measured intensity. Comparing the measured magnetic intensity of $`(011)`$ to that of two strong nuclear reflections, $`(020)`$ and $`(022)`$, and making use of the known room-temperature crystal structure, for the saturated moment $`m_0`$ of Cu<sup>2+</sup> we get $`m_0=0.16\mu _B`$ and $`m_0=0.55\mu _B`$ for structures A and B, respectively. A simple calculation of nuclear and magnetic structure factors indicates that a magnetic moment as large as $`0.55\mu _B`$ would be easily detected in a powder experiment at ($`h`$, $`k`$, $`l`$-odd) positions. The lack of any clear magnetic signal in our powder data suggests that of the two proposed structures only structure A can be realized in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>. For structure A the estimated ordered moment is in rather good agreement with the prediction of Eq. (3). We shall thus use structure A as a working assumption for the spin arrangement in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>. For the Hamiltonian (1) to stabilize this type of ground state we have to assume the exchange interactions to be ferromagnetic along the $`a`$ axis ($`J_x<0`$) and antiferromagnetic along the $`b`$ axis ($`J_y>0`$). #### 3 Spin dynamics As is well known, for an isolated antiferromagnetic $`S=1/2`$ chain the dynamic structure factor can be described as a two-spinon excitation continuum. To a good approximation $$S(q,\omega )\propto \frac{1}{\sqrt{\omega ^2-\omega _q^2}}\mathrm{\Theta }(\omega -\omega _q),$$ (4) where $`\omega _q`$ is the lower bound of the continuum given by $$\hbar \omega _q=\frac{\pi }{2}J_{\parallel }|\mathrm{sin}(q)|,$$ (5) and $`J_{\parallel }`$ is the intrachain coupling constant. This type of behavior has been seen experimentally in a number of systems, in particular KCuF<sub>3</sub> (Refs. ) and CuGeO<sub>3</sub> (Ref. ). As mentioned above, spin correlations in weakly coupled quantum $`S=1/2`$ spin chains are very well described by the chain-RPA model. As the system becomes ordered in three dimensions, the spectrum develops a mass gap $`\mathrm{\Delta }(\vec{Q}_{\perp })`$ that depends on the transverse momentum transfer $`\vec{Q}_{\perp }`$. The bandwidth of the transverse dispersion is of the order of $`J_{\perp }`$. This is in striking contrast with classical spin wave theory, where the transverse bandwidth is proportional to $`\sqrt{J_{\parallel }J_{\perp }}`$. The mass gap goes to zero only at the 3-D magnetic zone-centers, i.e., at the positions of magnetic Bragg reflections. A sharp single-magnon mode is the lowest-energy excitation and is split off from the lower bound of the continuum. A two-magnon continuum then starts at $`2\mathrm{\Delta }(\vec{Q}_{\perp })`$. The dispersion of the magnon branch is given by $$(\hbar \omega _{\vec{Q}})^2=\frac{\pi ^2}{4}J_{\parallel }^2\mathrm{sin}^2(Q_{\parallel })+\mathrm{\Delta }^2(\vec{Q}_{\perp }).$$ (6) For the interchain coupling geometry of KCuF<sub>3</sub> (equal nearest-neighbor ferromagnetic interactions along the $`a`$ and $`b`$ axes) the expression for $`\mathrm{\Delta }(\vec{Q}_{\perp })`$ has been derived in Ref. . Near the bottom of the 1-D dispersion the single mode approximation (SMA) works very well, and to a good approximation the dynamic structure factor at $`T=0`$ is then given by $$S(\vec{Q},\omega )\propto \frac{1}{\omega }\delta (\omega -\omega _{\vec{Q}}).$$ (7) For $`\hbar \omega \gg |J_x|,|J_y|`$, on the other hand, it is more appropriate to use Eq. (4) for isolated chains.
The dynamic structure factor $`S(\vec{Q},\omega )`$ for the non-Bravais spin lattice in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> can be expressed through the dynamic structure factor $`S_0(\vec{Q},\omega )`$ of an equivalent system with the same exchange constants and a Bravais spin lattice. The latter is obtained by setting $`\delta _x`$, $`\delta _y`$ and $`\delta _z`$ to zero. It is straightforward to show that: $`S(\vec{Q},\omega )=\mathrm{cos}^2\left(2\pi h\delta _x\right)\mathrm{cos}^2\left(2\pi l\delta _z\right)S_0(\vec{Q},\omega )`$ (8) $`+\mathrm{cos}^2\left(2\pi h\delta _x\right)\mathrm{sin}^2\left(2\pi l\delta _z\right)S_0(\vec{Q}+\{001\},\omega )`$ (9) $`+\mathrm{sin}^2\left(2\pi h\delta _x\right)\mathrm{cos}^2\left(2\pi l\delta _z\right)S_0(\vec{Q}+\{100\},\omega )`$ (10) $`+\mathrm{sin}^2\left(2\pi h\delta _x\right)\mathrm{sin}^2\left(2\pi l\delta _z\right)S_0(\vec{Q}+\{101\},\omega ).`$ (11) Here we have defined $`Q=(\frac{2\pi }{a}h,\frac{2\pi }{b}k,\frac{2\pi }{c}l)`$. In our particular case $`\delta _x,\delta _z\ll 1`$, so, for not too large momentum transfers, $`S(\vec{Q},\omega )\approx S_0(\vec{Q},\omega )`$. In other words, we can safely analyze the measured inelastic scans assuming an idealized Bravais arrangement of Cu<sup>2+</sup> sites. We can now rewrite Eqs. (4) and (5) to match the notation introduced above for BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>: $`S(\vec{Q},\omega )\propto {\displaystyle \frac{1}{\sqrt{\omega ^2-\omega _{Q_{\parallel }}^2}}}\mathrm{\Theta }(\omega -\omega _{Q_{\parallel }}),`$ (12) $`\hbar \omega _{Q_{\parallel }}={\displaystyle \frac{\pi }{2}}J_z|\mathrm{sin}(\pi l)|.`$ (13) The result for the transverse dispersion $`\mathrm{\Delta }(\vec{Q}_{\perp })`$ derived for the case of KCuF<sub>3</sub> by Essler et al. can also easily be adapted for the coupling geometry in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>: $$(\hbar \omega _{\vec{Q}})^2=\frac{\pi ^2}{4}J_z^2\mathrm{sin}^2(\pi l)+A^2\frac{J_y-J_x}{4}\left[J_y-J_x+J_x\mathrm{cos}(\pi h)+J_y\mathrm{cos}(\pi k)\right]+D^2.$$ (14) In this formula we have introduced an empirical anisotropy gap $`D`$. The dimensionless mass gap $`A`$ is given by $`A\approx 6.175`$. #### 4 Analysis of inelastic data The dynamic structure factor, Eq. (12), convoluted with the 4-dimensional spectrometer resolution function, was used to analyze the two measured constant-$`E`$ scans. Reasonably good fits to the data (solid lines in Fig. 4) were obtained with only three adjustable parameters, namely $`J_z`$, an intensity prefactor, and a flat background for each scan. The refined value $`J_z=19.84`$ meV is in agreement with the previous estimate, 24.1 meV, based on the experimental $`\chi (T)`$ curve. For analyzing the $`b`$-axis dispersion we used the SMA given by Eq. (7), also convoluted with the resolution function. As we do not have any data for the dispersion along the $`a`$ axis, to reduce the number of parameters we have assumed $`|J_x|=|J_y|\equiv J_{\perp }`$. The relevant adjustable parameters for the fit were thus $`J_{\perp }`$, the anisotropy constant $`D`$, responsible for the gap at $`(011)`$, and an intensity prefactor. The fitting procedure gives $`J_{\perp }=0.29(2)`$ meV and $`D=1.59(4)`$ meV. The resulting simulated scans are shown as solid lines in Fig. 4.
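The content of Eq. (14) can be made concrete with a short numerical sketch (ours, for illustration). With the sign convention adopted above ($`J_x=-J_{\perp }`$, ferromagnetic along $`a`$; $`J_y=+J_{\perp }`$, antiferromagnetic along $`b`$) the exchange term in Eq. (14) vanishes at the 3-D zone center $`(011)`$, so the gap there is set by $`D`$ alone; the value of $`J_z`$ used below is the susceptibility estimate, although it only affects the steep dispersion along the chain.

```python
import numpy as np

JZ, J_PERP, D, A = 24.1, 0.29, 1.59, 6.175   # meV; A is dimensionless
JX, JY = -J_PERP, J_PERP                     # FM along a, AF along b

def magnon_energy(h, k, l):
    """Magnon dispersion of Eq. (14), in meV, for (h, k, l) in r.l.u."""
    chain = (np.pi**2 / 4.0) * JZ**2 * np.sin(np.pi * l) ** 2
    trans = A**2 * (JY - JX) / 4.0 * (
        (JY - JX) + JX * np.cos(np.pi * h) + JY * np.cos(np.pi * k))
    return np.sqrt(chain + trans + D**2)

print(magnon_energy(0, 1, 1))   # ~1.6 meV: the anisotropy gap at (011)
print(magnon_energy(0, 0, 1))   # ~2.4 meV: top of the dispersion along b
```

The difference between the two printed values, roughly 0.8 meV, is consistent with the transverse bandwidth of $`\approx 1`$ meV quoted above.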
On the other hand, $`J_{\perp }`$ can be estimated from the susceptibility data using Eq. (2), which gives $`J_{\perp }`$ = 0.27 meV for BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub>, in excellent agreement with what we find from the analysis of the transverse dispersion. ## V Concluding remarks The purpose of this paper was to introduce BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> as a new model $`S=1/2`$ quantum antiferromagnet. The particular combination of intrachain and transverse coupling constants makes future neutron scattering studies of this material particularly promising. Indeed, BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> is a better 1-D compound than KCuF<sub>3</sub>, with a smaller ratio $`T_N/J_{\parallel }`$. At the same time the interchain interactions in the new silicate are sufficiently strong to make the mass gap easily observable with inelastic neutron scattering techniques. The use of cold neutrons should enable an experimental study of the double gap, i.e., the separation between the magnon branch and the 2-particle continuum. This effect in BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> is expected to be similar to the double gap found in the dimerized state of CuGeO<sub>3</sub>, but is caused by interchain interactions rather than by dimerization within the chains. When this work was in progress we became aware of similar susceptibility data on polycrystalline samples of BaCu<sub>2</sub>Si<sub>2</sub>O<sub>7</sub> and BaCu<sub>2</sub>Ge<sub>2</sub>O<sub>7</sub>. These results are consistent with our data. ###### Acknowledgements. We would like to thank T. Masuda, T. Yamada, and Z. Hiroi for valuable discussions. We also thank K. Nakajima and T. Yosihama for technical assistance with the neutron experiments. This work is supported in part by the U.S.-Japan Cooperative Program on Neutron Scattering, the Grant-in-Aid for Scientific Research on Priority Area “Mott transition”, the Grant-in-Aid for COE Research “SCP coupled systems”, and a Grant-in-Aid for Scientific Research (A) of the Ministry of Education, Science, Sports, and Culture. Work at Brookhaven National Laboratory was carried out under Contract No. DE-AC02-76CH00016, Division of Material Science, U.S. Department of Energy.
# NOE: a neutrino experiment for the CERN-Gran Sasso Long Baseline project ## 1 Introduction The scientific goal of the NOE long baseline (LBL) experiment is the measurement of neutrino masses by looking at $`\nu _\mu \to \nu _e`$ and $`\nu _\mu \to \nu _\tau `$ oscillations. The philosophy of the NOE design is to obtain oscillation sensitivity by looking for the $`\tau `$ decay ($`\nu _\mu \to \nu _\tau `$ oscillation) or for an electron excess ($`\nu _\mu \to \nu _e`$ oscillation), and by measuring a deficit of muons in the apparent NC/CC ratio. The main experimental hint for the $`\nu `$ oscillation search in the region of low $`\mathrm{\Delta }m^2`$ ($`10^{-2}÷10^{-3}eV^2`$) comes from the muon deficit observed in atmospheric neutrino flux measurements . Recent results from the LBL reactor neutrino experiment CHOOZ excluded neutrino oscillations in the $`\overline{\nu }_e`$ disappearance mode for $`\mathrm{sin}^22\theta >0.18`$ at large $`\mathrm{\Delta }m^2`$ . Taking into account the confirmations of the atmospheric neutrino anomaly and the negative CHOOZ result, an LBL experiment has to fulfill the following requirements: 1. $`\nu _\tau `$ tagging. The search for $`\nu _\tau `$ appearance is mandatory to confirm the oscillation phenomenon. This search requires high detector performance to tag the $`\tau `$ decay. 2. Measurement of the ratio NC/CC. This robust and unambiguous test is important to investigate the existence of a neutrino oscillation signal. There is no doubt that this measurement can be done only with a massive detector. 3. Atmospheric neutrinos. After the latest results from Superkamiokande, suggesting smaller values of $`\mathrm{\Delta }m^2`$, interest in atmospheric neutrinos has risen. It would be interesting to test this effect using a massive apparatus based on a technique different from that of the water Čerenkov detectors. 4. Fast response. If a beam from CERN to Gran Sasso becomes available in the next years, strong competition with the American and Japanese LBL programs is foreseen: at present, the 7 kton NOE project can adequately compete with the 8 kton MINOS detector and with K2K. According to these remarks the NOE program can be summarized as follows: * Direct $`\nu _\tau `$ appearance by kinematical $`\tau `$ decay reconstruction and inclusive (NC/CC) $`\nu _\mu `$ disappearance. * Investigation of $`\nu _\mu \to \nu _e`$ oscillations in a mixing angle region two orders of magnitude beyond the CHOOZ limit. * Atmospheric neutrino studies. In order to improve the $`\nu _\tau `$ search, the apparatus has been equipped with a Transition Radiation Detector (TRD) interleaved between calorimetric modules (CAL). The combination of TRD and CAL information strongly improves $`e`$, $`\mu `$, and $`\pi `$ identification, thus permitting the study of the $`\tau `$ decay. In particular the $`\tau `$ appearance can be detected through the $`\tau \to e\nu \nu `$ channel, with a clean signature, taking advantage of the low background (residual $`\nu _e`$ beam contamination). Moreover, the good electron identification in the TRD and the low $`\pi ^o`$ background allow a high $`\mathrm{sin}^22\theta `$ sensitivity in the search for $`\nu _\mu \to \nu _e`$ oscillations, thus considerably enlarging the region investigated by CHOOZ. ## 2 The NOE detector The detector (Fig. 1) is a typical fixed target apparatus consisting of a sequence of 12 basic modules (BM).
Each BM is composed of a lighter part (TRD target), in which the vertex, $`e`$-identification and kinematics are defined, followed by a scintillating fiber calorimeter devoted to energy measurement and event containment. The Basic Module (BM) is shown in Fig. 2. The appearance measurement is performed using events generated in the TRD ($`2.4`$ kton), while disappearance measurements are performed by looking at events generated both in the TRD and in the calorimeter (total mass $`7`$ kton). The calorimetric element is an 8 meter long bar filled with iron ore and scintillating fibers (extruded scintillator strips with wavelength-shifter fibers have also been studied). The calorimetric module is made of alternating planes of crossed bars. Each calorimetric bar consists of several logical cells with square cross section. The iron ore is radiopure and practically cost free. A lot of R$`\&`$D has been performed to improve the scintillating fiber performance. The present production of $`2`$ mm diameter scintillating fibers provides an attenuation length $`\lambda =4.5`$ m and a light yield L $`\approx `$ 20 pe/MeV. These figures allow the construction of $`8`$ m long bars. Further investigations to improve the fiber characteristics are in progress: longer fibers would allow an increase of the NOE cross section ($`9\times 9m^2`$) and therefore of the total mass ($`8`$ kton). It is worth noting the very high intrinsic granularity of the proposed calorimeter: the average distance between the fibers inside the absorber is of the order of $`3`$ mm. The fibers are grouped together at each side of the calorimetric bar and sent to single-pixel or multipixel photodetectors. The energy resolutions for electrons and hadrons in the calorimeter have been evaluated by means of a GEANT Monte Carlo simulation. They are, respectively, $`\sigma (E)/E=0.01+0.17/\sqrt{E}`$ and $`\sigma (E)/E=0.08+0.42/\sqrt{E}`$. The TRD module consists of 32 vertical layers of $`8\times 8m^2`$ area, each made of a polyethylene foam radiator ($`\rho \approx 100mg/cm^3`$) and 256 proportional tubes ($`3\times 3cm^2`$ cross section), filled with an $`Ar`$ (60%) - $`Xe`$ (30%) - $`CO_2`$ (10%) mixture, already tested in the MACRO experiment. Consecutive layers have tubes rotated by $`90^{\circ }`$. A graphite wall of $`5`$ cm thickness is set in front of each of the first 24 layers of the TRD module, acting as a $`174`$ ton target for $`\nu _e`$ and $`\nu _\tau `$ interactions, which are then identified in the following layers. The last target wall is followed by 8 TRD layers in order to identify the secondary particles. Each target wall corresponds to $`0.25X_0`$, while the entire TRD basic module corresponds to about $`7X_0`$ and $`3.5\lambda _I`$. The total length is about $`3.76m`$. This large number of proportional-tube layers permits a determination of the muon energy by means of multiple measurements of the energy loss $`dE/dx`$. By combining the information coming from both subdetectors (TRD and CAL), the discrimination between $`e`$, $`\mu `$ and $`\pi `$ is greatly improved, allowing the study of several neutrino oscillation channels. ## 3 $`\nu _\tau `$ appearance and requirements on the $`\nu `$ beam The rate of $`\nu _\tau `$ CC events is given by $$R_\tau =A\int \sigma _\tau P_{osc}\mathrm{\Phi }𝑑E,$$ (1) where E is the energy, $`\sigma _\tau `$ the $`\nu _\tau `$ CC cross section, $`P_{osc}`$ the oscillation probability, $`\mathrm{\Phi }`$ the muon neutrino flux and $`A`$ the number of target nucleons in the detector. The search for $`\nu _\tau `$ requires that the term $`\sigma _\tau P_{osc}\mathrm{\Phi }`$ be large.
Therefore a dedicated $`\nu `$-beam has to provide most of its flux in the energy range where the factor $`\sigma _\tau P_{osc}`$ is largest. Assuming the mixing of two neutrinos, the oscillation probability is $$P_{osc}=\mathrm{sin}^22\theta \mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E),$$ (2) where $`L=731km`$ is the CERN - Gran Sasso distance. We have to take into account that the $`\tau `$ cross section grows slowly with energy above a threshold of about $`3.5GeV`$. The factor $`\sigma _\tau P_{osc}`$ is shown in Fig. 3 for different values of $`\mathrm{\Delta }m^2`$. The optimal energy is about $`15GeV`$ for $`\mathrm{\Delta }m^2=0.01eV^2`$ and decreases gently with $`\mathrm{\Delta }m^2`$ towards a limiting value of about $`10GeV`$. In the following, the beam from Ref. and 5 years of data taking are assumed. ## 4 $`\tau `$ appearance searches The tau appearance search is performed on the basis of a kinematical identification of the $`\tau `$ decay. The $`\tau \to e\nu \nu `$ decay is the favoured channel for this search, due to the low background level and the good electron identification capability of the TRD. It is worth noting that in the region of the atmospheric anomaly the oscillation probability is $`50÷100`$ times higher than expected in NOMAD; as a consequence, a much lower background rejection power is required. In order to check the overall NOE performance, a complete chain of event simulation and analysis has been performed. Event generators that include Fermi motion, $`\tau `$ polarization and nuclear rescattering inside the nucleus have been used to simulate quasi-elastic, resonance and deep inelastic events. Generated events are processed by a GEANT-based Monte Carlo in which the calorimeter and TRD geometrical set-ups are described in detail, down to a scale of a few $`mm`$. The fiber attenuation length, Birks saturation, photoelectron fluctuations and readout electronics non-linearities for both the TRD and the calorimeter have been taken into account. DSTs of processed events ($`\tau \to e\nu \nu `$, $`\nu _\mu `$ NC and $`\nu _e`$ CC) have been produced and analysed. Electron identification is performed by looking for high energy releases in the TRD and in the calorimeter readout elements in fully contained events. The electron candidate is the one that maximizes the collected energy in a $`5^{\circ }`$ cone centered at the interaction vertex. The electron direction is reconstructed by weighting hit positions by collected energy. With the present algorithms an angular resolution of $`0.6^{\circ }`$ and a $`180MeV/c`$ resolution on the measurement of the transverse momentum are achieved. The remaining part of the event is used to reconstruct the hadronic component. The obtained resolution on the measurement of the transverse momentum is $`420MeV/c`$. Topological cuts on the electromagnetic shower are applied to reject $`\nu _\mu `$ NC events with a $`\pi ^o`$ faking an electron. Fig.6 shows a typical neutral current event with a $`\pi ^o`$: the electromagnetic shower does not start at the event vertex, allowing an easy rejection of the event. Work is in progress to improve the reconstruction efficiency and the $`\nu _\mu `$ NC rejection. Additional cuts are performed to reduce the background (a minimal sketch of this selection is given below): * total reconstructed energy $`<15GeV`$, * electron energy $`>1.5GeV`$, * the component of the electron momentum perpendicular to the hadronic jet direction $`Q_{lep}>0.75GeV/c`$, * transverse mass $`M_T=\sqrt{4p_T^ep_T^m\mathrm{sin}^2(\varphi _{em}/2)}<2GeV`$, * the $`\varphi _{eh}`$ - $`\varphi _{mh}`$ correlation as shown in Fig. 5.
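A minimal sketch of this selection, assuming hypothetical names for the reconstructed event quantities, could look as follows; the two-dimensional $`\varphi _{eh}`$ - $`\varphi _{mh}`$ correlation cut is left out, since it is read off from Fig. 5 rather than expressed by a simple threshold.

```python
import math

def transverse_mass(pt_e, pt_m, phi_em):
    """M_T = sqrt(4 p_T^e p_T^m sin^2(phi_em / 2)), as defined in the cut list."""
    return math.sqrt(4.0 * pt_e * pt_m * math.sin(phi_em / 2.0) ** 2)

def passes_selection(ev):
    """Apply the kinematical cuts listed above to one reconstructed event.

    `ev` is a dict with hypothetical field names: energies in GeV, momenta in
    GeV/c, azimuthal angles in radians (these names are our illustration,
    not part of the original analysis code)."""
    if ev["E_total"] >= 15.0:
        return False
    if ev["E_electron"] <= 1.5:
        return False
    if ev["Q_lep"] <= 0.75:
        return False
    if transverse_mass(ev["pt_e"], ev["pt_m"], ev["phi_em"]) >= 2.0:
        return False
    return True

# Example event with purely illustrative numbers:
event = {"E_total": 9.0, "E_electron": 3.2, "Q_lep": 1.1,
         "pt_e": 1.4, "pt_m": 0.8, "phi_em": 0.6}
print(passes_selection(event))   # -> True
```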
The resulting efficiencies and residual backgrounds are reported in Table 1. ## 5 Neural network for the ratio NC/CC The measurement of the NC/CC ratio is performed by looking at the observable $`R_{obs}`$= (no $`\mu `$)/$`\mu `$, where (no $`\mu `$) is the number of events not showing a muon track while $`\mu `$ is the number of events with a reconstructed muon. The procedure to perform this measurement is well known . The oscillation probability can be written as: $$P=\frac{R_{obs}-R_{th}-ϵ(R_{th}+1)}{R_{obs}(1-\eta B)+\eta (1-B)}$$ (3) where $`ϵ`$ is the ratio of $`\nu _e`$ to $`\nu _\mu `$ in the beam, B is the branching ratio for the $`\tau `$ decay into a muon, $`R_{th}`$=$`\sigma _{NC}`$/$`\sigma _{CC}`$ and $`\eta `$=$`\sigma _\tau `$/$`\sigma _\mu `$. The last ratio has to be carefully evaluated. For instance, considering the NGS beam and $`\mathrm{\Delta }m^2\approx 10^{-2}÷10^{-3}eV^2`$, $`\eta `$ ranges between 0.25 and 0.32. In this measurement the systematics associated with $`R_{th}`$ play, of course, an important role and can in principle be reduced by using a near detector. Nevertheless, near and far beams are not identical, introducing a possible systematic error that requires a detailed study. From an experimental point of view the separation of (no $`\mu `$) events from $`\mu `$ events is usually supposed to be straightforward. On the contrary, this measurement becomes difficult at low energies, where the muons have a shorter path length. In the NOE analysis the identification of charged and neutral current events has been improved by means of a neural network. The algorithm uses 24 topological, geometrical and calorimetric event parameters as input. The network has been trained with $`\nu _\mu `$ CC and NC Monte Carlo events with energy uniformly distributed in the range $`0÷50GeV`$. The results of this analysis are shown in Fig. 7. The $`\mu `$ event recognition is mainly dependent on the track length. For neutrino energies $`E_\nu \gtrsim 5GeV`$ a good selection/rejection efficiency is obtained. As expected, the efficiency decreases with decreasing energy, as the muon track becomes shorter. We are presently working to improve the NC and CC separation at low energy. This measurement (Fig. 8) allows a search for $`\nu _\mu \to \nu _\tau `$ oscillations down to $`\mathrm{\Delta }m^2=1\times 10^{-3}eV^2`$. ## 6 Conclusions The combined use of two subdetectors (TRD and CAL) allows a search for a $`\tau `$ appearance signal in events generated in the $`2.4`$ kton TRD target, where electron identification and vertex and kinematics reconstruction perform at their best. Nevertheless, the whole $`8`$ kton mass can be exploited for disappearance oscillation tests. Such measurements can be carried out at the same time by using an appropriate neutrino beam. A full analysis has recently been performed to demonstrate the feasibility of both measurements. In Fig. 8 the sensitivity of the NOE experiment to $`\nu `$ oscillations is shown.
# Molecular-Dynamics Simulation of a Glassy Polymer Melt: Rouse Model and Cage Effect ## I Introduction In 1953 P. E. Rouse proposed a simple model to describe the dynamics of a polymer chain in dilute solution . The model considers the chain as a sequence of Brownian particles which are connected by entropic harmonic springs. Being immersed in a structureless solvent, the chain experiences a random force through the incessant collisions with the (infinitesimally small) solvent particles. The random force is assumed to act on each monomer separately and to create a monomeric friction coefficient. The model therefore contains chain connectivity, a local friction and a local random force. All non-local interactions between monomers distant along the backbone of the chain, such as excluded-volume or hydrodynamic interactions, are neglected. In dense melts, where both interactions are screened, the Rouse model was shown to describe the viscoelastic properties of short-chain melts . For the polyethylene melts studied in it was also shown by neutron spin-echo experiments and computer simulations that the conformational dynamics can be reasonably well described by the Rouse model on length scales below the radius of gyration and time scales below the longest relaxation time of the chains (Rouse time). However, by construction (see section II), it can also only be applicable on length scales above the statistical segment length, $`b`$, of the chains and on time scales larger than $`b^2\zeta /k_\mathrm{B}T`$, where $`\zeta `$ is the monomer friction coefficient. On length scales below the statistical segment length, local stiffness effects due to the intramolecular potentials become important, and there are several suggestions in the literature on how to incorporate these into a modified Rouse model . For a short-chain polymer melt undergoing a glass transition it is also well established that the dramatic increase in relaxation times, observable for instance in the viscoelastic response, can be accounted for within the Rouse model by fitting a temperature-dependent monomer friction coefficient in the form of a Vogel-Fulcher-Tammann (VFT) law $$\zeta (T)=\zeta _{\mathrm{\infty }}\mathrm{exp}\left[\frac{E}{T-T_0}\right].$$ (1) The extrapolated temperature of divergence, $`T_0`$, in this law (complete structural arrest), together with the (also extrapolated) vanishing of the excess configurational entropy of the glass with respect to the crystal (Kauzmann paradox ), leads to theories assuming an underlying phase transition for the glassy freezing . Especially the Gibbs-DiMarzio theory for polymers is capable of reproducing many phenomenological properties of the polymer glass transition; the prediction of a vanishing configurational entropy, however, could be shown to arise from too crude approximations in calculating the high temperature limit . The physical significance of these extrapolated singularities is therefore questionable. The VFT law has one further characteristic feature. There exists a temperature region where the behavior of the viscosity turns over from a gradual high-temperature increase upon lowering the temperature to a very steep increase in the supercooled region approaching the viscosimetric glass transition temperature $`T_\mathrm{g}`$. In this temperature region a change in the physical relaxation mechanism occurs. At high temperatures the mean square displacement of the particles directly crosses over from short-time ballistic to long-time diffusive motion.
In this crossover region a plateau regime intervenes where the particles are temporarily trapped in a cage formed by their neighbors, until some thermally activated process leads to an escape from the cage. Experimentally, this two-step process is best observed in intermediate scattering functions, and it is well established for all fragile to intermediate glass forming systems , i.e., those where the viscosity is describable by the VFT law, but is also observable for strong glass formers at very high temperatures (where the viscosity follows an Arrhenius law). Theoretically it is the mode coupling theory of the glass transition (MCT) that focuses on this temperature region. In its idealized version (neglecting the activated processes that lead to an escape from the cage) it predicts an ergodic to non-ergodic transition at a dynamical critical temperature $`T_\mathrm{c}`$, which phenomenologically seems to be the same as the inflection point in the VFT law (Fischer et al. in ) marking the center of the crossover region. In this work we want to answer the question of how this two-step relaxation process induced by the cage effect influences the Rouse model description of the dynamics of short-chain polymer melts. We have shown in previous work that the two-step relaxation process occurring in our model upon supercooling the system is compatible with the MCT predictions, and have determined the dynamical critical temperature $`T_\mathrm{c}=0.45`$ and the exponent parameter $`\lambda =0.635`$ governing the algebraic divergence of correlation times at this temperature in the idealized version of the MCT. Here we will analyze the conformational relaxation of the chains and their self-diffusion properties in the same temperature region in terms of the Rouse model. Section II will introduce our simulation model and a short collection of results from the Rouse model for later reference. In section III we will then look at the behavior of the Rouse modes, and section IV will focus on the self-diffusion properties of our model. Section V will present our conclusions. ## II Simulation Model and Theoretical Background In this section we will give a short introduction to our simulation model and technique and then summarize some pertinent results from the analytic solution of the Rouse model. A more detailed description of our simulation model can be found in reference . ### A Simulation Model Our simulation model consists of bead-spring chains of length $`N=10`$. All beads interact through a Lennard-Jones potential $$U_{\mathrm{LJ}}(r_{ij})=4ϵ\left[\left(\frac{\sigma }{r_{ij}}\right)^{12}-\left(\frac{\sigma }{r_{ij}}\right)^6\right],$$ (2) which is truncated at $`2\times 2^{1/6}\sigma `$ and shifted so as to vanish smoothly at that point. $`\sigma =1`$ defines the length scale and $`ϵ=1`$ defines the temperature scale of our model. Bonded neighbors in a chain furthermore interact through a FENE potential $$U_{\mathrm{FENE}}(r_{ij})=-15R_0^2\mathrm{ln}\left[1-\left(\frac{r_{ij}}{R_0}\right)^2\right]$$ (3) with $`R_0=1.6`$. The resulting equilibrium bond length is $`l_0=0.96`$.
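To make the model definition concrete, here is a small Python sketch (ours, for illustration) of the two potentials; the numerical check at the end locates the minimum of the total bond potential, which lies close to the quoted equilibrium bond length, since the LJ shift does not move the minimum.

```python
import numpy as np

SIGMA, EPS, R0 = 1.0, 1.0, 1.6
R_CUT = 2.0 * 2.0 ** (1.0 / 6.0) * SIGMA

def u_lj(r):
    """Truncated and shifted Lennard-Jones potential, Eq. (2)."""
    def lj(x):
        return 4.0 * EPS * ((SIGMA / x) ** 12 - (SIGMA / x) ** 6)
    return np.where(r < R_CUT, lj(r) - lj(R_CUT), 0.0)

def u_fene(r):
    """FENE bond potential, Eq. (3); diverges as r approaches R0."""
    return -15.0 * R0**2 * np.log(1.0 - (r / R0) ** 2)

# Equilibrium bond length: the minimum of U_LJ + U_FENE along a bond
r = np.linspace(0.8, 1.1, 30001)
print(r[np.argmin(u_lj(r) + u_fene(r))])   # close to the quoted l_0 = 0.96
```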
For a Lennard-Jones potential that is truncated in the minimum (soft spheres) this is the Kremer-Grest model, for which it was shown that the conformational relaxation of short melt chains in the high-temperature regime can be rather well described by the Rouse model . Note, however, that the latter model (because of the strictly repulsive interaction between the monomers) does not yield a physically reasonable equation of state for the polymer melt. Also, its friction coefficient depends on temperature only weakly, unlike equation (1). We performed Molecular Dynamics simulations in the canonical (NVT) ensemble using the Nosé–Hoover thermostat. For each temperature, however, the equilibrium density was determined by an isobaric-isothermal (NpT) simulation before the canonical simulation was started. In this way we follow a constant pressure path upon cooling . ### B Rouse model The Rouse model is defined through the following equation of motion for the repeat units of a polymer chain, $`𝒓_n`$ being the position of the $`n`$-th effective monomer at time $`t`$, $$\zeta \mathrm{d}𝒓_n(t)=\frac{3k_\mathrm{B}T}{b^2}\left(𝒓_{n+1}(t)-2𝒓_n(t)+𝒓_{n-1}(t)\right)\mathrm{d}t+\mathrm{d}𝑾_n(t),$$ (4) where $`b`$ is the statistical segment length of the chains and defines the length scale of the model, $`\zeta `$ is the monomer friction coefficient and the $`\mathrm{d}𝑾_n(t)`$ denote random forces modeled as Gaussian white noise: $`\langle \mathrm{d}W_{n\alpha }(t)\mathrm{d}W_{m\beta }(t^{\prime })\rangle =\delta _{nm}\delta _{\alpha \beta }\delta (t-t^{\prime })\mathrm{d}t.`$ The Rouse modes are defined as the cosine transforms of the position vectors, $`𝒓_n`$, of the monomers. For the discrete polymer model under consideration they can be written as $$𝑿_p(t)=\frac{1}{N}\sum _{n=1}^{N}𝒓_n(t)\mathrm{cos}\left[\frac{(n-1/2)p\pi }{N}\right],p=0,\mathrm{\dots },N-1.$$ (5) The normalized time-correlation function of the Rouse modes is given by $$\mathrm{\Phi }_{pq}(t)=\frac{\langle 𝑿_p(t)𝑿_q(0)\rangle }{\langle 𝑿_p(0)𝑿_q(0)\rangle }=\mathrm{exp}\left[-\frac{t}{\tau _p(T)}\right],p=1,\mathrm{\dots },N-1$$ (6) with ($`p,q\ne 0`$) $$\langle 𝑿_p(0)𝑿_q(0)\rangle =\frac{b^2}{8N[\mathrm{sin}(p\pi /2N)]^2}\delta _{pq}\stackrel{p/N\ll 1}{\longrightarrow }\frac{Nb^2}{2\pi ^2p^2}\delta _{pq}$$ (7) and $$\tau _p(T)=\frac{\zeta (T)b^2}{12k_\mathrm{B}T[\mathrm{sin}(p\pi /2N)]^2}\stackrel{p/N\ll 1}{\longrightarrow }\frac{\zeta (T)N^2b^2}{3\pi ^2k_\mathrm{B}Tp^2}.$$ (8) According to equations (6), (7) and (8) the Rouse modes should have the following properties: (1.) They are orthogonal at all times. (2.) Their correlation function decays exponentially. (3.) The normalized correlation functions for different mode indices, $`p`$, and temperatures can be scaled onto a common master curve when the time axis is divided by $`\tau _p(T)`$. We will examine to what extent these properties are realized by the studied model. In order to make this comparison we use the following relations $$b^2=\frac{\langle R^2\rangle }{N-1}\quad \text{and}\quad \frac{\zeta (T)b^2}{k_\mathrm{B}T}=\frac{\langle R^2\rangle }{N(N-1)D},$$ (9) where $`\langle R^2\rangle `$ ($`\approx 12.3`$) is the mean squared end-to-end distance, and $`D`$ is the diffusion coefficient of a chain. The first relation holds because of the Gaussian chain statistics, which is well established in dense melts, and thus fixes the length scaling between the simulation and the Rouse model. Use of the second relation means that the diffusive behavior at late times is employed to fix the translation between the time scale of the simulation and that of the theory.
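In a simulation, Eq. (5) and the static prediction of Eq. (7) translate directly into code. The following sketch (ours, with illustrative names) computes the Rouse modes of one chain and the corresponding Gaussian-chain amplitude, and checks the two against each other on ideal random-walk chains:

```python
import numpy as np

def rouse_modes(r):
    """Discrete cosine transform of Eq. (5); r has shape (N, 3) and row p of
    the result is X_p for p = 0, ..., N-1."""
    n_mon = r.shape[0]
    n = np.arange(1, n_mon + 1)
    p = np.arange(n_mon)[:, None]
    weights = np.cos((n[None, :] - 0.5) * p * np.pi / n_mon)
    return weights @ r / n_mon

def rouse_amplitude(b2, p, n_mon):
    """Static prediction <X_p . X_p> of Eq. (7) for the discrete chain."""
    return b2 / (8.0 * n_mon * np.sin(p * np.pi / (2.0 * n_mon)) ** 2)

# Self-test on ideal random-walk chains (N = 10, b^2 = 1): the sample average
# of |X_1|^2 over many chains approaches the prediction of Eq. (7).
rng = np.random.default_rng(1)
amps = []
for _ in range(2000):
    bonds = rng.normal(size=(9, 3)) / np.sqrt(3.0)     # <bond^2> = 1
    chain = np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])
    amps.append(np.sum(rouse_modes(chain)[1] ** 2))
print(np.mean(amps), rouse_amplitude(1.0, 1, 10))   # agree after averaging
```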
Using equation (5) for the Rouse modes the position vectors of the monomers can be written as $$𝒓_n(t)=𝑿_0(t)+2\sum _{p=1}^{N-1}𝑿_p(t)\mathrm{cos}\left[\frac{(n-1/2)p\pi }{N}\right],n=1,\dots ,N,$$ (10) which implies for the mean square displacement of the $`n`$-th monomer $$\left\langle \left[𝒓_n(t)-𝒓_n(0)\right]^2\right\rangle =g_3(t)+8\sum _{p=1}^{N-1}\langle |𝑿_p(0)|^2\rangle \left[1-\mathrm{\Phi }_p(t)\right]\mathrm{cos}^2\left[\frac{(n-1/2)p\pi }{N}\right],$$ (11) where $`g_3(t)=\langle [𝑹_{\mathrm{cm}}(t)-𝑹_{\mathrm{cm}}(0)]^2\rangle `$ denotes the mean square displacement of the chains’ center of mass. Note that only the orthogonality of the Rouse modes, i.e., $`\mathrm{\Phi }_{pq}(t)\propto \delta _{pq}`$, enters the derivation of equation (11). ## III Rouse Modes: Time and Temperature Dependence For the Hamiltonian chosen for our simulation we find that the stiffness of the chains, as for instance given by the characteristic ratio, only weakly depends on temperature. We find $`C_N=R^2/[(N-1)l_0^2]=1.52`$ for $`T=1.0`$ and $`C_N=1.46`$ for $`T=0.46`$. So there is a slight shrinking of the chains due to the non-bonded interactions. The assumption of Gaussian statistics for all intramolecular distances leads to equation (6) for the correlation function of the amplitudes of the Rouse modes. We find for our model that the static cross correlations between different modes $`p`$ and $`q`$ are always two to three orders of magnitude smaller than the static autocorrelation of the modes $`p`$, shown in Figure (1). We can see that the mode amplitudes are independent of temperature, as could be expected from the behavior of the characteristic ratio. Their qualitative trend as a function of mode number is the same as for the Rouse prediction of equation (7). Quantitatively, however, already the second Rouse mode shows a small deviation from the Rouse prediction (full curve in Figure (1), which was plotted using the calculated value for the autocorrelation of mode $`p=1`$ according to equation (7), using (9) to relate to quantities measured in the simulation). Up to mode $`p=6`$ (this is the scale of a dimer) the behavior can be described by a power law with an effective exponent $`x=2.2`$, whereas the small-$`p`$ expansion of equation (7) with exponent $`x=2`$ fails for $`p>3`$. These deviations from the Rouse prediction stem from the fact that the assumption of Gaussian distributed intramolecular distances breaks down on the length scale $`R/p`$ connected with mode $`p`$, where $`R`$ is the end-to-end distance of the chain. They depend on the details of the polymer model under investigation. Let us now turn to the dynamics of the Rouse modes. In Figure (2) we show the dynamic autocorrelation function of the first Rouse mode as a function of scaled time. The time scale $`\tau _1`$ is defined as the time value where the mode autocorrelation function has decayed to a value of $`\mathrm{\Phi }_1(\tau _1)=0.3`$. According to equation (6) this should lead to a master plot showing a single exponential decay when the autocorrelation functions for different temperatures are compared. In Figure (2) we included all simulated temperatures from $`T=0.49`$ to $`T=1.0`$, and we can see that the time-temperature superposition principle is fulfilled nicely. The master curve can, however, not be described by a single exponential decay, as is obvious from the figure. If the complete decay is included into a single exponential fit, a law $`\mathrm{exp}(-1.2t/\tau _1)`$ is obtained.
The prefactor of $`1.2`$ in the argument of the exponential compensates for the $`0.3`$ definition versus the $`\mathrm{e}^{-1}`$ definition of the relaxation time. If only the last $`85\%`$ of the decay are included into the Kohlrausch-Williams-Watts fit ($`A_p\mathrm{exp}\{-\mathrm{ln}(A_p/0.3)(t/\tau _1)^{\beta _p}\}`$) with the amplitude $`A_p`$ and the exponent $`\beta _p`$ as fit parameters, this law gets a prefactor of $`A_1=0.98`$ and an exponent $`\beta _1=0.98`$, which is equal to one within our error bars. These two fit curves bracket the observed scaling function for scaled times below one. The reason for the deviation of the scaling function from the single exponential decay becomes more obvious when we look at the mode autocorrelation function for higher $`p`$. As an example we show in Figure (3) the scaling plot for $`p=5`$ for $`5`$ temperatures ranging from $`T=0.47`$ to $`T=1`$. Here the time-temperature superposition only works for the late stages of the decay (scaled times of about $`0.5`$ and larger), the so-called $`\alpha `$-process. The scaling range between the curves at different temperatures, however, increases upon lowering the temperature. The dashed curve in Figure (3) is the same exponential decay as in Figure (2). The master function for $`p=5`$ is significantly stretched compared to the single exponential decay. Using again the last $`85\%`$ of the decay for a fit to the Kohlrausch-Williams-Watts law, we get a stretching exponent of $`\beta _5=0.81`$ (dotted line). This stretching exponent for the $`\alpha `$-relaxation is, however, independent of temperature in the shown temperature interval, which covers the high temperature liquid region ($`T=1.0`$) as well as the supercooled fluid region ($`T=0.47`$). This is the same result as was found for a Monte Carlo simulation of a polymer lattice model undergoing a glass transition, in contrast to a theoretical model calculation discussed in the literature. The decrease in the Kohlrausch exponent $`\beta _p`$ upon supercooling that is predicted in that model calculation is not observed in our simulation. The stretching, however, strongly depends on mode number, and the values for the exponent $`\beta _p`$ for the different Rouse modes are collected in Table $`1`$. This mode number dependence was also found in the lattice simulation as well as in an atomistic simulation of a polyethylene melt. For scaled times below about $`0.5`$, the $`\alpha `$-scaling breaks down. For $`p=5`$ we can resolve the development of a two-step decay for temperatures $`T\le 0.7`$ with an intervening plateau that increases in length upon cooling. This is the manifestation of the cage effect in the Rouse modes. The consequences of this caging for the structural relaxation behavior of our model were analyzed in detail in our previous work. It was not observable for $`p=1`$ because the amplitude of the plateau is too close to one in that case (even for $`p=9`$ it is still larger than $`0.9`$), but it leads to the discussed deviations from the single exponential decay. This plateau regime is called $`\beta `$-relaxation within mode coupling theory, and the decay off the plateau should be describable by a von Schweidler-like law $$\mathrm{\Phi }_5\left(\frac{t}{\tau _5}\right)=f_5^\mathrm{c}-B_1\left(\frac{t}{\tau _5}\right)^b+B_2\left(\frac{t}{\tau _5}\right)^{2b},$$ (12) where we included the first correction to the leading order in analogy to the predicted behavior of the incoherent scattering function.
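A sketch of how a von Schweidler fit of the form (12) can be set up with standard tools. The data arrays below are synthetic placeholders for the measured mode correlator; only $`f_5^\mathrm{c}`$, $`B_1`$ and $`B_2`$ are treated as free parameters, while the exponent $`b`$ is held fixed, as in the analysis described next.

```python
import numpy as np
from scipy.optimize import curve_fit

b = 0.75  # von Schweidler exponent, taken as fixed input

def von_schweidler(x, f_c, B1, B2):
    """Plateau decay with leading correction, eq. (12); x = t / tau_5."""
    return f_c - B1 * x**b + B2 * x**(2 * b)

# x_data, phi_data: scaled times t/tau_5 and correlator Phi_5 (placeholders)
x_data = np.logspace(-4, np.log10(0.5), 200)
phi_data = von_schweidler(x_data, 0.95, 0.5, 0.1)  # synthetic stand-in data

window = (x_data >= 1e-4) & (x_data <= 0.5)        # beta-regime fit interval
params, _ = curve_fit(von_schweidler, x_data[window], phi_data[window],
                      p0=(0.9, 0.5, 0.1))
print("f_c, B1, B2 =", params)
```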
We take the von Schweidler exponent $`b=0.75`$ from our previous work and fit the von Schweidler law to the data in the time interval $`10^{-4}\le t/\tau _5\le 0.5`$ and obtain a very good description of the asymptotic scaling in this $`\beta `$-regime with the parameters $`f_5^\mathrm{c},B_1`$ and $`B_2`$ quoted in the caption of Figure (3). Because of the mode number dependent stretching a time-mode superposition cannot be expected to hold. In Figure (4a) we show the failure of the time-mode superposition for the high temperature liquid case ($`T=1`$) and Figure (4b) shows the same for the supercooled state $`(T=0.49)`$. Here we can also observe the development of the $`\beta `$-plateau with increasing mode number, with a plateau value of about $`0.9`$ for $`p=9`$. When we look at the temperature dependence of the mode relaxation times $`\tau _p`$ within the Rouse model, equations (8) and (9) tell us that the quantity $`\tau _pD/R^2`$, where $`D`$ is the center of mass diffusion coefficient and $`R^2`$ the squared end-to-end distance of the chains, should be independent of temperature. Figure (5) shows that this is indeed the case and that the dependence on mode index can be very well described by the Rouse prediction up to $`p=4`$. The improved agreement with the Rouse model in comparison to the static behavior in Figure (1) suggests that the deviations from the conformational assumptions in the Rouse model are partially compensated by deviations from the dynamic assumptions going into the model, as they are manifest in the stretching of the modes. In Figure (6) we finally analyze the temperature dependence of the mode relaxation times within the framework of the mode coupling theory of the glass transition. To this end, we plot the measured relaxation times double-logarithmically as a function of the distance to the mode coupling critical temperature $`T_\mathrm{c}=0.45`$ as determined for our model in our previous work. For about one decade in the reduced temperature we observe an algebraic behavior as predicted for the $`\alpha `$-relaxation time within MCT. For $`T\le 0.49`$ deviations from the algebraic divergence at $`T_\mathrm{c}`$ occur, where ergodicity restoring processes that were neglected in the idealized MCT calculation start to become important. The exponent we observe is $`\gamma _p=1.83\pm 0.02`$, which is within the error bars equal to the exponent observed for the temperature dependence of the self-diffusion coefficient of the chains, $`\gamma _D\approx 1.82`$. The error bar for $`\gamma _p`$ indicates the scattering between the fits to the different modes when fixing $`T_\mathrm{c}=0.45`$. The exponent is distinctly different from the $`\gamma =2.09`$ obtained from a $`\beta `$-analysis of the incoherent scattering function in our previous work. The agreement of the temperature dependence of an orientational correlation time as given by $`\tau _1`$ and a translational correlation time definable by $`R^2/D`$ was also found experimentally from a comparison of dielectric and pulsed-field gradient NMR data. It also gave rise to the scaling observed in Figure (5). We can conclude from Figure (6) that all modes show a freezing transition at the same temperature $`T_\mathrm{c}=0.45`$ and with the same exponent $`\gamma _p`$, in contrast to an earlier analysis which, however, had to employ a frozen matrix assumption. If one treats matrix and probe chain dynamics self-consistently, a unique freezing temperature is again obtained.
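The $`\alpha `$-scaling analysis of Figure (6) amounts to fitting $`\tau _p(T)\propto (T-T_\mathrm{c})^{-\gamma _p}`$ with $`T_\mathrm{c}=0.45`$ held fixed. Here is a minimal sketch of such a fit; the temperature grid and relaxation times are synthetic placeholders standing in for the measured $`\tau _p`$ of each mode.

```python
import numpy as np

T_C = 0.45  # mode coupling critical temperature from previous work

def fit_gamma(T, tau):
    """Fit tau ~ (T - T_c)^(-gamma) by linear regression in log-log space."""
    slope, intercept = np.polyfit(np.log(T - T_C), np.log(tau), 1)
    return -slope, np.exp(intercept)

# placeholder data: temperatures in the algebraic regime, fake times
T = np.array([0.50, 0.52, 0.55, 0.60, 0.70, 0.85, 1.00])
tau = 2.0 * (T - T_C)**(-1.83)           # stand-in for measured tau_p
gamma, prefactor = fit_gamma(T, tau)
print("gamma =", round(gamma, 3))         # recovers 1.83 for the synthetic data
```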
In order to interpret the difference in exponent between the incoherent scattering function and the conformational relaxation times of a chain as given by the Rouse modes, we have to keep in mind that the relaxation times at the mode coupling $`T_\mathrm{c}`$ do not actually diverge but stay finite. When we compare the temperature dependence of $`\tau _q(T)/\tau _q(T_\mathrm{c})`$ for a given momentum transfer $`q`$ with the temperature dependence of $`\tau _p(T)/\tau _p(T_\mathrm{c})`$ for a given Rouse mode $`p`$, the latter quantity decays much more slowly with increasing distance $`T-T_\mathrm{c}`$, due to the presence of connectivity correlations between monomers of the same chain. ## IV Rouse Model and Mean Square Displacements In this section we want to analyze the translational behavior of central monomers of a chain, $`g_1(t)=\langle [𝒓_{N/2}(t)-𝒓_{N/2}(0)]^2\rangle `$, and of the center of mass of a chain, $`g_3(t)`$, as a function of temperature. Figure (7) displays these functions in the high temperature case ($`T=1`$). For short times both displacements are ballistic and equal to $`\langle 𝒗^2\rangle t^2=3Tt^2`$ and $`\langle 𝒗^2\rangle t^2/N`$, respectively, in reduced units where $`k_\mathrm{B}=1`$ and the monomer mass is set to one. Then we see a crossover to a subdiffusive behavior in $`g_1`$ which is induced by the connectivity of the chains (Rouse mode dominated regime). The exponent in this regime is $`0.63`$ instead of the Rouse prediction of $`0.5`$, which is a deviation generally found in simulations and mostly attributed to the shortness of the chains, leading to an early crossover from the Rouse mode regime to the long time free diffusion limit. This last regime is seen to occur when the mean square displacements of the monomers are equal to the squared end-to-end distance of the chains. The center of mass displacement $`g_3`$ reaches the free diffusion limit at earlier times, namely when it is equal to the mean squared radius of gyration of the chains. The full line in Figure (7) proves in another way that the Rouse modes are eigenmodes of the chains. This curve is calculated from the mode autocorrelation functions using equation (11). As a consistency check we exactly reproduce the monomer mean square displacement curve. We have to use all Rouse modes to obtain this exact agreement. If one only wants to describe the time dependence for $`g_1(t)\le R_g^2`$, the first five Rouse modes suffice for our model. Also included in the Figure is a horizontal line at $`6r_{\mathrm{sc}}^2`$ indicating the size of the next neighbor cage as obtained in our previous work. For this displacement value we observe at high temperatures the crossover from the short-time ballistic motion to the short time diffusion and further the onset of the connectivity dominated regime. Both occur at the same distance scale, as the length scale of the non-bonded interaction, $`\sigma `$, and the bond length, $`l_0`$, are approximately equal. For the supercooled melt this picture changes qualitatively, as can be seen in Figure (8). For this temperature of $`T=0.48`$ we were only able to propagate the chains for about two squared radii of gyration (after equilibration for about one order of magnitude longer in time). In the supercooled melt a plateau-like regime, which extends for this temperature for about two decades in time, intervenes between the short time ballistic regime and the connectivity dominated regime where the subdiffusive behavior again shows the exponent $`0.63`$.
This plateau is at the value of the cage size and it is centered on the $`\beta `$-relaxation time scale $`t_\epsilon `$ of MCT. The MCT prediction for the mean squared monomer displacement $`(\mathrm{\Delta }r)^2`$, averaged over all monomers, for the $`\beta `$-regime is $$g_1(t)\approx (\mathrm{\Delta }r)^2=6r_{\mathrm{sc}}^2-6h_{\mathrm{msd}}\left[\frac{t_0}{t_\epsilon }\right]^ag(t/t_\epsilon )-6h_{\mathrm{msd}}C_a\left[\frac{t_0}{t}\right]^{2a}-6h_{\mathrm{msd}}B^2C_b\left[\frac{t_0}{t_\epsilon }\right]^{2a}\left[\frac{t}{t_\epsilon }\right]^{2b}.$$ (13) The parameters $`r_{\mathrm{sc}}^2=0.009`$, $`h_{\mathrm{msd}}t_0^a=0.0045`$, $`t_\epsilon =4.933`$, $`a=0.352`$, $`b=0.75`$, $`B=0.476`$, $`C_at_0^a=0.3`$ and $`C_bt_0^a=0.25`$ are taken from our previous work. As was already discussed in that work, the MCT prediction for the $`\beta `$-regime for our model allows for a consistent description of the cage effect on the mean square monomer displacement. The MCT curve would predict a crossover from the breakup of the cage to the free diffusion of the particles, as no connectivity effects are included in the theory. For a polymer fluid, however, this behavior is altered and we obtain a crossover to the mode dominated regime, $`t^{0.63}`$, which is subdiffusive with a smaller exponent than the cage breakup described by the von Schweidler $`t^{0.75}`$ law (in leading order). Figure (8) tells us also why an analysis of the caging process in a polymer melt within the framework of a theory developed for simple liquids can work at all. The typical distance traveled by a monomer at the plateau is of the order of $`10^{-1}`$ in units of the Lennard-Jones length scale, which in turn is approximately equal to the bond length in our model. For this length scale connectivity effects are not yet felt by the monomers, even in the high temperature regime shown in Figure (7). Only the late stage of the cage breakup (late $`\beta `$-process) and the structural relaxation ($`\alpha `$-process) are influenced by the connectivity of the chains. It is also noteworthy that the analysis in terms of the Rouse modes works throughout the cage region, as can be concluded from the full line in Figure (8), obtained in the same way as for Figure (7). Furthermore the caging process is not only observed in the monomer mean square displacement, but in the displacement of the center of mass of the chains as well. The difference between a polymer melt and a simple liquid in the $`\beta `$-$`\alpha `$ crossover region is elucidated from a different angle in Figure (9). If we plot the mean square displacement for all temperatures as a function of time scaled by the center of mass diffusion coefficient at that temperature, we obtain the set of curves displayed in the inset of Figure (9). The envelope master curve of this set of curves is shown in the main part of Figure (9) in comparison to the same master curve constructed for a binary Lennard-Jones fluid. For large times the data for the two models have to agree by construction. But whereas for the Lennard-Jones fluid a direct crossover from the cage effect to the free diffusion occurs, the polymer exhibits the intervening connectivity dominated regime for length scales between the bond length, $`l\approx 1`$, and the end-to-end distance $`R`$. In this regime the observed mean square displacement curve drops below the MCT description, which is here displayed as an effective von Schweidler law $`6r_1^2+A_1(Dt)^{0.75}`$, with $`r_1=0.087`$ and $`A_1=11.86`$.
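The construction behind Figure (9), rescaling the time axis by $`D(T)`$ and comparing the envelope with an effective von Schweidler law, can be sketched as follows; the data containers are placeholders for the measured $`g_1(t)`$ curves and diffusion coefficients.

```python
import numpy as np

def effective_von_schweidler(Dt, r_sq, A):
    """Effective von Schweidler description, 6 r^2 + A (D t)^0.75."""
    return 6.0 * r_sq + A * Dt**0.75

def master_curve(msd_curves, D_of_T):
    """Rescale the time axis of each g1(t) by the diffusion coefficient D(T).
    msd_curves: dict mapping T -> (t, g1) arrays from the simulation.
    The envelope of the rescaled family forms the master curve of Fig. (9)."""
    return {T: (D_of_T[T] * t, g1) for T, (t, g1) in msd_curves.items()}

# comparison values quoted in the text for the monomer displacement
Dt = np.logspace(-4, 1, 300)
fit = effective_von_schweidler(Dt, r_sq=0.087**2, A=11.86)
```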
The same type of fit is possible for the displacement of the center of mass of the chains, as is shown in Figure (10). The center of mass of a chain exhibits qualitatively the same crossover from the cage regime to the free diffusion regime as seen for the simple liquids. The fit values for the effective von Schweidler law in this case are $`r_3=0.041`$ and $`A_3=2.68`$. ## V Conclusions We have shown in this work that the Rouse modes remain eigenmodes of the dynamics of short melt chains from the high temperature down to the supercooled fluid state. They are orthogonal, and for the smallest mode numbers they obey the predictions of the Rouse model for their static amplitudes. When the length scale probed by a mode is too short to allow for Gaussian distributed intramolecular distances on that scale, deviations from the static and dynamic scaling predictions occur. Nevertheless, when one uses the actual Rouse mode correlation functions to predict the time-dependent mean square displacements, one obtains perfect agreement with the simulation results. The most important deviation from the Rouse model prediction is a mode number dependent stretching of the time autocorrelation function of the mode amplitude. This stretching is, within our accuracy, not temperature dependent for temperatures above the mode coupling critical temperature of our model which were accessible for a dynamic analysis starting from equilibrated melts. We therefore have to conclude that it is not so much connected with the approach to the glass transition, but with deviations from the simple Rouse model due to intramolecular stiffness and intermolecular interaction effects. For temperatures in the supercooled fluid region the mode autocorrelation functions develop a clear two-step decay similar to the behavior of the intermediate scattering function as discussed previously for the same model. The decay in the plateau region is compatible with the $`\beta `$-analysis following MCT presented earlier. Furthermore, the mode relaxation times follow the $`\alpha `$-scaling behavior of MCT with the same dynamic critical temperature obtained from the intermediate scattering function. The exponent of the apparent algebraic divergence is the same as for the center of mass self-diffusion coefficient of the chains and different from the exponent seen in the intermediate scattering function. The difference can be understood in terms of additional correlations between monomers connected along the same chain, which are not included in the MCT, a theory developed for simple liquids. This connectivity leads to the Rouse prediction of a subdiffusive monomer displacement at intermediate times between the short time ballistic regime and the long time free diffusion limit. In the supercooled melt the cage effect, which leads to the discussed two-step decays, intervenes between the short time ballistic motion of a monomer and the subdiffusive Rouse behavior. Therefore there is no direct crossover from the cage breakup to a free diffusion behavior as for simple liquids, but a crossover to the connectivity dominated regime with a smaller exponent than the von Schweidler exponent, which describes the cage breakup. Since the higher Rouse modes do not contribute to the center of mass displacement (which is the zeroth Rouse mode), the center of mass of the chains exhibits qualitatively the same crossover from the cage region to free diffusion as observed for simple liquids.
The applicability of the MCT to the supercooled polymer melt rests entirely on the fact that the caging process occurs on a length scale of about one tenth of the bond length. On this scale the monomers just start to feel the connectivity to their neighbors, and therefore only the late stages of the cage breakup and the structural relaxation are affected by the connectivity of the chains. The applicability of the Rouse model starts for length scales larger than the bond length and time scales of the order of the structural relaxation time of the melt. ## Acknowledgment We are indebted to Drs. W. Kob, M. Fuchs and I. Alig for helpful discussions. In the course of this work, we have profited from generous grants of simulation time by the computer center at the University of Mainz and the HLRZ Jülich, which are gratefully acknowledged, as well as from financial support by the Deutsche Forschungsgemeinschaft under SFB262/D2.
# Passive scalar intermittency in compressible flow ## Abstract A compressible generalization of the Kraichnan model (Phys. Rev. Lett. 72, 1016 (1994)) of passive scalar advection is considered. The dynamical role of compressibility on the intermittency of the scalar statistics is investigated for the direct cascade regime. Simple physical arguments suggest that an enhanced intermittency should appear for increasing compressibility, due to the slowing down of Lagrangian trajectory separations. This is confirmed by a numerical study of the dependence of intermittency exponents on the degree of compressibility, by a Lagrangian method for calculating simultaneous $`N`$-point tracer correlations. In the last few years, much effort has been devoted to the study of statistical properties of scalar quantities advected by random flows with short memory. Remarkable progress in understanding intermittency and anomalous scaling has been achieved for the Kraichnan model of passive scalar advection by random, Gaussian, incompressible and white-in-time velocity fields. A crucial property of the model is that equal-time correlation functions obey closed equations of motion. Analytical treatments are thus feasible, and a general mechanism for intermittency has been identified. Its source has been found in zero modes of the operators governing the Eulerian dynamics of $`N`$-point correlation functions. Concerning numerical studies of the Kraichnan model, efficient Lagrangian methods have been recently proposed, and thanks to them both the limit of vanishing intermittency corrections, for which perturbative predictions are available, and the non-perturbative region have been successfully investigated. A compressible generalization of the Kraichnan model has been recently proposed, and the existence of very different behaviors for the Lagrangian trajectories, depending on the degree of compressibility, has been shown analytically. For weak compressibility, the well-known direct cascade of the passive scalar energy takes place. This is associated, from a Lagrangian point of view, with the explosive separation of initially close trajectories, a feature characterizing the direct energy cascade for the incompressible Kraichnan model as well. On the contrary, when the compressibility is strong enough, particles collapse: both a non-intermittent inverse cascade of tracer energy exciting large scales and a suppression of the short-scale dissipation occur. The relation between intermittency and compressibility is the main issue of the present short communication. As already highlighted, because compressibility inhibits the separation between Lagrangian trajectories, the resulting scalar transport slows down and scaling properties may be affected. Our remark here is that the slowing down of Lagrangian separations plays an essential role in characterizing intermittency in the direct cascade regime. This can be easily grasped from the following considerations. In the direct cascade regime, typical trajectories are stretched, whereas contractions are rare and thus affect only the extreme tails of the pdf of scalar differences. Furthermore, within a Lagrangian framework, scalar correlations are essentially governed by the time spent by particles with their mutual distances smaller than the integral scale of the problem.
The stretching process, typical of the direct energy cascade, is thus intermittent because contracted trajectories cause strong fluctuations of the time needed to reach the integral scale. When compressibility is present, even if weakly, trapping effects are amplified due to the slowing down of Lagrangian separations. It then follows that the dynamical role of collapsing trajectories increases for increasing compressibility, and the same should happen for the intermittency. It is worth noting that the trapping mechanism, enhanced by the compressibility, works in the same direction as that induced by lowering the spatial dimension $`d`$: it is indeed observed perturbatively that when $`d`$ is reduced an increased intermittency arises, a fact corroborated by numerical evidence comparing results of the incompressible Kraichnan model in two and three dimensions. These considerations will be here quantitatively supported by numerical simulations. The compressible generalization of the Kraichnan model is governed by the equation (for the Eulerian dynamics) $$\partial _t\theta (𝒓,t)+𝒗(𝒓,t)\cdot \nabla \theta (𝒓,t)=\kappa \nabla ^2\theta (𝒓,t)+f(𝒓,t),$$ (1) where, as for the incompressible case, the velocity and the forcing are zero mean, Gaussian independent processes, both homogeneous, isotropic and white-in-time. The velocity is self-similar, with the 2-point correlation function: $$\langle v_\alpha (𝒓,t)v_\beta (𝒓^{\prime },t^{\prime })\rangle =\delta (t-t^{\prime })\left[d_{\alpha \beta }^0-d_{\alpha \beta }(𝒓-𝒓^{\prime })\right],$$ (2) where $`d_{\alpha \beta }(𝒓)`$, the so-called eddy-diffusivity, is fixed by isotropy and scaling behavior along the scales: $$d_{\alpha \beta }(𝒓)=r^\xi \left\{\left[A+(d+\xi -1)B\right]\delta _{\alpha \beta }+\xi \left[A-B\right]\frac{r_\alpha r_\beta }{r^2}\right\},$$ (4) where $`d`$ is the dimension of the space. The degree of compressibility is controlled by the ratio $`\wp \equiv 𝒞^2/𝒮^2`$, where $`𝒮^2\equiv A+(d-1)B`$ is proportional to $`\langle (\nabla 𝒗)^2\rangle `$ and $`𝒞^2\equiv A`$ is proportional to $`\langle (\nabla \cdot 𝒗)^2\rangle `$; this ratio satisfies the inequality $`0\le \wp \le 1`$. The statistics of the forcing term is defined by the 2-point correlation function $$\langle f(𝒓,t)f(𝒓^{\prime },t^{\prime })\rangle =\delta (t-t^{\prime })\chi (|𝒓-𝒓^{\prime }|),$$ (5) where $`\chi `$ is chosen nearly constant for distances $`|𝒓-𝒓^{\prime }|`$ smaller than the integral scale $`L`$ and rapidly decreasing for $`r\gg L`$. It is worth remarking that equation (1) physically describes the evolution of a tracer, that is a quantity which is conserved along the Lagrangian trajectories in absence of diffusivity and forcing. To characterize the advection of a density, one should consider the equation $$\partial _t\rho (𝒓,t)+\nabla \cdot \left(𝒗(𝒓,t)\rho (𝒓,t)\right)=\kappa \nabla ^2\rho (𝒓,t)+f(𝒓,t),$$ (6) which in the ideal case ($`\kappa =0`$, $`f=0`$) enjoys the conservation of the total mass. The density advection equation has also a wide realm of physical applications and would deserve a detailed study in its own right, as well as a specific numerical approach. Hereafter we shall limit ourselves to the case of tracer advection ruled by (1).
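To fix the notation, here is a small sketch evaluating the eddy-diffusivity tensor of eq. (4) and the compressibility degree $`\wp `$ for given coefficients $`A`$ and $`B`$; the function names and the example values are our own assumptions.

```python
import numpy as np

def eddy_diffusivity(r_vec, A, B, xi, d=3):
    """Eddy-diffusivity tensor d_ab(r) of eq. (4); note d_ab(0) = 0."""
    r = np.linalg.norm(r_vec)
    rr = np.outer(r_vec, r_vec) / r**2
    iso = (A + (d + xi - 1.0) * B) * np.eye(d)
    return r**xi * (iso + xi * (A - B) * rr)

def compressibility_degree(A, B, d=3):
    """wp = C^2 / S^2 with C^2 = A and S^2 = A + (d-1) B."""
    return A / (A + (d - 1.0) * B)

# incompressible limit: A = 0 gives wp = 0; B = 0 gives the maximal wp = 1
print(compressibility_degree(A=0.0, B=1.0), compressibility_degree(A=1.0, B=0.0))
```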
Exploiting the $`\delta `$-correlation in time, equations for the even scalar correlations (odd correlations being trivially zero) in the stationary state can be deduced; for the generic $`N`$-point correlation function $`C_N^\theta \equiv \langle \theta (r_1)\mathrm{\cdots }\theta (r_N)\rangle `$ the expression reads: $$\mathcal{M}_NC_N^\theta =\sum _{i<j}\chi \left(\frac{r_{ij}}{L}\right)\langle \theta (r_1)\underset{\widehat{i}\widehat{j}}{\mathrm{\cdots }}\theta (r_N)\rangle ,$$ (7) with $`r_{ij}\equiv r_i-r_j`$, where the hatted indices indicate that the fields $`\theta (r_i)`$ and $`\theta (r_j)`$ are omitted from the product, and $`\mathcal{M}_N`$ is the differential operator given by: $$\mathcal{M}_N=\sum _{1\le n<m\le N}d_{\alpha \beta }(𝒓_n-𝒓_m)\partial _{r_{n\alpha }}\partial _{r_{m\beta }}-\kappa \sum _{1\le n\le N}\partial _{r_n}^2.$$ (9) As for the incompressible case, this model has a Gaussian limit for $`\xi \to 0`$, and the perturbative expansion at small $`\xi `$’s can be done as in the incompressible case. Accordingly, the calculation performed in the weakly compressible case (i.e. $`\wp <d/\xi ^2`$) corresponding to the direct cascade regime leads to the expression for the intermittent correction $`\mathrm{\Delta }_N^\theta `$ to the normal scaling exponent $`(2-\xi )N/2`$ of the $`N`$-point structure function $`S_N^\theta (r)=\langle [\theta (𝒓)-\theta (\mathrm{𝟎})]^N\rangle \propto r^{(2-\xi )N/2-\mathrm{\Delta }_N^\theta }`$; namely: $$\mathrm{\Delta }_N^\theta =\frac{N(N-2)(1+2\wp )}{2(d+2)}\xi +O(\xi ^2).$$ (10) The perturbative approach gives thus a first clue that compressibility works to enhance intermittent corrections. We are however interested in checking that this is a general and robust feature associated with compressibility, and thus that it is present for generic $`\xi `$. This problem is not accessible by perturbative techniques; numerical methods are generally needed to investigate it. With this purpose in mind, we have developed a new Lagrangian numerical method (a different viewpoint with respect to earlier ones), where the strategy is now formulated in terms of a first exit time problem. The method consists in the Monte Carlo simulation of Lagrangian trajectories according to the stochastic differential equation $$\dot{𝒓}_n=𝒗(𝒓_n,t)+\sqrt{2\kappa }\dot{w}_n,$$ (11) where the $`w_n`$ are independent Wiener processes. The evolution of the probability $`P_N(t,𝒙|t_0,𝒙_0)`$ that the $`N`$ Lagrangian tracers have a configuration $`𝒙=(𝒓_1,\mathrm{\dots },𝒓_N)`$ at time $`t`$, given their initial configuration $`𝒙_0`$ at time $`t_0`$, is ruled by the Fokker-Planck equation $$\frac{\partial }{\partial t}P_N(t,𝒙|t_0,𝒙_0)+\mathcal{M}_N^{\dagger }(𝒙)P_N(t,𝒙|t_0,𝒙_0)=0,$$ (12) where the operator $`\mathcal{M}_N^{\dagger }`$ is the adjoint of (9). As a consequence of (12) the probability obeys also the backward Kolmogorov equation $$-\frac{\partial }{\partial t_0}P_N(t,𝒙|t_0,𝒙_0)+\mathcal{M}_N(𝒙_0)P_N(t,𝒙|t_0,𝒙_0)=0.$$ (13) We now introduce the Green function $$G(𝒙,𝒙_0)=\int _{t_0}^{\mathrm{\infty }}dt\,P_N(t,𝒙|t_0,𝒙_0),$$ (14) which enjoys the following properties $`\mathcal{M}_N^{\dagger }(𝒙)G(𝒙,𝒙_0)`$ $`=`$ $`\delta (𝒙-𝒙_0),`$ (15) $`\mathcal{M}_N(𝒙_0)G(𝒙,𝒙_0)`$ $`=`$ $`\delta (𝒙-𝒙_0).`$ (16) Let us define the characteristic size of a configuration of $`N`$ particles as $`R(𝒙)=[(\sum _{i<j}|𝒓_i-𝒓_j|^2)/(N(N-1)/2)]^{1/2}`$. We now impose Dirichlet (absorbing) boundary conditions at $`R(𝒙)=L\gg R(𝒙_0)`$, and compute numerically the first exit time from the volume of configuration space limited by the boundary, which is expressed in terms of the Green function as
$$T_L(𝒙_0)=\int _{R(𝒙)<L}d𝒙\,G(𝒙,𝒙_0).$$ (17) A trivial consequence of the property (16) is that $$\mathcal{M}_N(𝒙_0)T_L(𝒙_0)=1,$$ (18) an equation whose structure resembles that of (7); indeed we can conclude, similarly to what happens for correlation functions, that $`T_L(𝒙_0)`$ must amount to the sum of an inhomogeneous solution plus a linear combination of zero modes $`f_j`$ of the operator $`\mathcal{M}_N`$: $$T_L(𝒙_0)=\sum _jC_jL^{\gamma -\sigma _j}f_j(𝒙_0)+\text{inhomog. term},$$ (19) where the explicit dependence on $`L`$ has been extracted taking advantage of the scaling properties of $`\mathcal{M}_N`$ (with $`\gamma =2-\xi `$), $`\sigma _j`$ is the scaling exponent of the zero mode $`f_j`$ and $`C_j`$ is a constant independent of $`L`$. Among the non trivial zero-modes $`f_j`$, only the functions which depend on all the coordinates can contribute to the $`N`$-th order structure function. We would like to extract this contribution leaving aside all the others: it is easy to realize that this result can be achieved by performing a linear combination of the exit times with different initial conditions. This operation will also remove the inhomogeneous term. If we denote by $`𝒟_i(𝝆)`$ the operator acting on functions of the $`N`$-particle coordinates as $`𝒟_i(𝝆)F(𝒓_1,\mathrm{\dots },𝒓_i,\mathrm{\dots },𝒓_N)=F(𝒓_1,\mathrm{\dots },𝒓_i+𝝆,\mathrm{\dots },𝒓_N)-F(𝒓_1,\mathrm{\dots },𝒓_i,\mathrm{\dots },𝒓_N)`$ we will have $$\mathrm{\Sigma }_N(L)=\prod _i𝒟_i(𝝆)T_L(𝒙_0)\propto L^{\gamma -\zeta _N}$$ (20) where $`\zeta _N=(2-\xi )N/2-\mathrm{\Delta }_N^\theta `$ is the scaling exponent of the structure function $`S_N^\theta (r)\propto r^{\zeta _N}`$. Whenever $`𝒙_0=\mathrm{𝟎}`$, due to the symmetry of the $`f_j`$’s under exchange of particle coordinates, the expression for $`\mathrm{\Sigma }_N(L)`$ takes a simple form, which, for example, for $`N=4`$ reads as $`\mathrm{\Sigma }_4(L)=2T_L(\mathrm{𝟎},\mathrm{𝟎},\mathrm{𝟎},\mathrm{𝟎})-8T_L(𝝆,\mathrm{𝟎},\mathrm{𝟎},\mathrm{𝟎})+6T_L(𝝆,𝝆,\mathrm{𝟎},\mathrm{𝟎})`$. Summarizing: the numerical method consists in the Monte Carlo simulation of Lagrangian trajectories of $`N`$ particles advected by a rapidly changing velocity field, according to the Fokker-Planck equation (12); average first exit times outside a volume of size $`L`$ are computed for different arrangements of the initial conditions, and then linearly combined according to (20) in order to extract the scaling exponent $`\zeta _N`$. As a final remark, the numerical method here employed can be viewed as a merging of the two Lagrangian methods introduced by Frisch, Mazzino and Vergassola and by Gat and Zeitak. Namely, it borrows from the first one the idea of subtracting exit times of different initial conditions to extract the only zero mode that contributes to the structure functions, while it inherits from the second the spirit of working with particle configurations (shapes). The advantages of the present method with respect to the latter mainly reside in the evaluation of first exit times rather than of residence times, a fact which substantially reduces the computational cost. We present the numerical results obtained for the scaling of the fourth-order structure function $`S_4(r;L)\equiv \langle (\theta (𝒓)-\theta (\mathrm{𝟎}))^4\rangle `$ in three dimensions. As previously mentioned, when the dimension $`d`$ of the space is lowered, fluctuations increase and as a consequence the number of realizations needed to have a clean scaling grows as well; the addition of compressibility further enhances this effect. For the first numerical experiments with the new method, we have thus opted for $`d=3`$.
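A schematic and heavily simplified sketch of the first-exit-time strategy: the $`N`$ particles are evolved with velocity increments drawn jointly Gaussian with the covariance implied by eq. (2), the exit time from $`R(𝒙)<L`$ is recorded, and $`\mathrm{\Sigma }_4`$ is formed from the linear combination above. The choice of `D0`, the fixed time step, and all names are our own assumptions; in particular the crude covariance synthesis is only a stand-in for a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, xi, kappa = 3, 0.75, 1e-3
A, B = 0.2, 1.0                       # sets the compressibility degree wp

def d_ab(r_vec):
    """Eddy-diffusivity tensor d_ab(r), eq. (4); d_ab(0) = 0."""
    r = np.linalg.norm(r_vec)
    if r == 0.0:
        return np.zeros((d, d))
    rr = np.outer(r_vec, r_vec) / r**2
    return r**xi * ((A + (d + xi - 1) * B) * np.eye(d) + xi * (A - B) * rr)

def exit_time(x0, L, dt=1e-3, D0=None, nmax=200000):
    """First exit time of N particles from R(x) < L (single realization).
    Velocity increments are jointly Gaussian with covariance
    [D0 delta - d_ab(r_n - r_m)] dt; the constant D0 only moves the center
    of mass and must be large enough to keep the covariance (nearly) PSD."""
    x = np.array(x0, dtype=float)      # shape (N, d)
    N = len(x)
    if D0 is None:
        D0 = (2.0 * L)**xi * (A + (d + xi - 1) * B)   # crude heuristic
    for step in range(1, nmax + 1):
        cov = np.empty((N * d, N * d))
        for n in range(N):
            for m in range(N):
                cov[n*d:(n+1)*d, m*d:(m+1)*d] = D0 * np.eye(d) - d_ab(x[n] - x[m])
        dv = rng.multivariate_normal(np.zeros(N * d), cov * dt,
                                     check_valid="ignore")
        # advect, then add molecular diffusion as in eq. (11)
        x += dv.reshape(N, d) + np.sqrt(2 * kappa * dt) * rng.standard_normal((N, d))
        sep = [np.sum((x[i] - x[j])**2) for i in range(N) for j in range(i+1, N)]
        if np.sqrt(np.mean(sep)) >= L:  # absorbing boundary R(x) = L reached
            return step * dt
    return nmax * dt                    # no exit within nmax steps

def sigma4(L, rho, samples=100):
    """2 T(0,0,0,0) - 8 T(rho,0,0,0) + 6 T(rho,rho,0,0), averaged exit times."""
    z, r = np.zeros(3), np.array([rho, 0.0, 0.0])
    T = lambda x0: np.mean([exit_time(x0, L) for _ in range(samples)])
    return 2*T([z, z, z, z]) - 8*T([r, z, z, z]) + 6*T([r, r, z, z])
```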
The method has been tested by performing the analysis of the incompressible limit $`\wp =0`$ for different values of $`\xi `$: the anomaly $`\mathrm{\Delta }_4^\theta =2\zeta _2-\zeta _4`$ has always been found to be compatible with previously published results. The computation of $`\mathrm{\Sigma }_2(L)`$ – a quantity which can be evaluated analytically – has provided another stringent test for this method. Varying the degree of compressibility $`\wp `$, we have studied in the direct cascade regime the connection between the slowing down of Lagrangian trajectories and intermittency at the two distinct values $`\xi =0.75`$ and $`\xi =1.1`$. Notice that for these two values of $`\xi `$, the condition ($`\wp <d/\xi ^2`$) for the direct cascade of energy to take place is verified for the entire range of values $`0\le \wp \le 1`$ of the compressibility. Different motivations account for this choice; first of all we avoided the region of $`\xi `$ close to $`0`$ ($`\gamma \simeq 2`$), where capturing the subdominant anomalous exponents is numerically expensive, and furthermore the results are known from the perturbative expansion. Second, when $`\xi `$ is close to $`2`$ ($`\gamma \simeq 0`$) non-local effects are very strong and the range of values of $`\wp `$ (i.e. $`\wp <d/\xi ^2`$) pertaining to the direct cascade is narrower. Figs. 1 and 2 show the behavior of $`\mathrm{\Sigma }_4(L)`$ for the two values of $`\xi `$ under consideration and for different values of $`\wp `$; all curves display a fairly good power law scaling. According to the relation (20) the scaling exponent is $`\gamma -\zeta _4=-\gamma +\mathrm{\Delta }_4^\theta `$, so that the curves become flatter and flatter as the anomaly grows. It is thus evident from our results that when compressibility increases, the intermittent correction to the normal scaling grows as well. Notice that the ratio between $`\mathrm{\Sigma }_4`$ and the dominant contribution to each term of the sum scales as $`L^{-\zeta _4}`$. As a consequence, small values of $`\xi `$ (which correspond to large values of $`\zeta _4`$) require a larger amount of statistics to make the subdominant contribution emerge. This is the reason why the scaling region for $`\xi =0.75`$ is smaller than that for $`\xi =1.1`$. Finally, our results are summarized in Fig. 3, which shows the anomaly $`2\zeta _2-\zeta _4`$ vs the compressibility factor $`\wp `$ for $`\xi =0.75`$ (squares joined by a dot-dashed line) and $`\xi =1.1`$ (circles joined by a dashed line). The error bars are obtained, as in previous work, by analyzing the fluctuations of local scaling exponents over octave ratios of values for $`L`$, a method which gives a very conservative estimate of the errors. The effectiveness of the first exit time computation is somewhat balanced by the need for a huge number of realizations to achieve a satisfactory statistical convergence. This drawback is particularly visible for large $`L`$, where the signal is rather noisy. In conclusion, we have shown in the context of the compressible Kraichnan model that there is a tight relationship between the intermittency of passive scalar statistics and the compressibility of the advecting velocity field. This result can be easily understood from the Lagrangian viewpoint. Intermittency arises whenever the particles experience long periods of inhibited separation: since compressible flows are characterized by the presence of trapping regions, an enhancement of intermittency can be reasonably expected.
The validity of this argument has been assessed by means of a numerical Lagrangian method. We acknowledge innumerable discussions on the subject matter with M. Vergassola. Simulations were performed in the framework of the SIVAM project of the Observatoire de la Côte d’Azur. Some of them made use of the computing facilities of CINECA.
# Phase separation in overdoped Y<sub>1-0.8</sub>Ca<sub>0-0.2</sub>Ba<sub>2</sub>Cu<sub>3</sub>O<sub>6.96-6.98</sub> ## Introduction The unusual metallic properties of the high $`T_c`$ cuprate superconductors are difficult to reconcile with a homogeneous electronic state. Since inhomogeneities in the electronic structure may lift the translational invariance of the underlying lattice, it is natural to measure both the atomic structure, using short-range (or local) structural probes, and the average crystallographic structure, using diffraction techniques RoeKal . Anomalous atomic displacements may then be extracted from careful comparisons between the local and the crystallographic structure. The anomalous electronic structure of the cuprate superconductors is frequently discussed in terms of dynamic inhomogeneities, for instance a mixture of microscopically segregated phases. The notorious nonstoichiometry of all known superconducting cuprates, even of their optimally doped phases, causes many static inhomogeneities, thus adding a constraint to the analysis of anomalous atomic displacements. Advantageously, the metallic CuO<sub>2</sub>-planes of the cuprate superconductors are the structurally most perfect blocks. Thus significant structural anomalies in the planes can be safely related to nontrivial electronic inhomogeneities. We have recently shown that upon oxygen doping of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> the locally measured spacing between the Cu2 and O2,3 layers (“dimpling”) in the CuO<sub>2</sub>-planes keeps track of the planar hole concentration RoeKal . The data from Y $`K`$-edge EXAFS comprise the atomic structure of the 8 next-neighboured CuO<sub>2</sub>-plaquettes, and thus the effective charge in only 32 Cu2–O23 bonds. Oxygen doping for $`x\le x_{opt}=6.92`$ has been found to increase the dimpling; in other words, the increasing number of oxygen holes bends the Cu2–O23 bonds out of plane towards the Ba-layer. At the onset of the overdoped regime, $`x\simeq 6.95`$, the dimpling and thus also the number of holes exhibit a sharp maximum KalLoe . A concomitant displacive transformation of the crystallographic structure, however, seems to block a further increase of the dimpling. Although further doping from $`x\simeq 6.95`$ towards $`7`$ increases the nominal hole concentration, the dimpling starts to decrease. In this contribution we report on Y $`K`$-EXAFS measurements of the dimpling in a series of overdoped compounds, YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.96-6.98</sub>, additionally overdoped by substitution of Y<sup>3+</sup> with 2–20% Ca<sup>2+</sup>. ## Experimental Details The polycrystalline samples were from the same batches studied previously by diffraction with x-rays and neutrons, and by magnetometry BoeFau . Up to 20% Ca could be homogeneously dissolved in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> while the oxygen content was kept at the highest values possible: $`x=6.96`$–$`6.98`$, i.e. surprisingly always $`<7.00`$. Ca EXAFS of the same samples confirmed that calcium has replaced yttrium ($`>97`$%), and not barium. The EXAFS spectra ($`T=20`$–$`60`$ K) were recorded at the European Synchrotron Radiation Facility (ESRF) using the double crystal spectrometer at BM29. Details of the spectroscopic technique and of the data analysis are given in RoeKal . ## Results Fig. 1 exhibits the dimpling of the CuO<sub>2</sub>-planes in Y<sub>1-y</sub>Ca<sub>y</sub>Ba<sub>2</sub>Cu<sub>3</sub>O<sub>6.96-6.98</sub> ($`T=25`$ K) for $`y=0.02`$–$`0.2`$ as extracted from the Y EXAFS (large full circles, solid line).
Comparison is made with the results from neutron diffraction (small circles, dashed line) by Böttger et al. BoeFau . From the Y EXAFS the dimpling is found to be independent of the Ca concentration up to 9% (0.281(2) Å); it then undergoes a step-like decrease by about 0.015 $`\mathrm{\AA }`$, and flattens around 0.265(4) Å for 14–20 % Ca. The position of the step may be located around 12% Ca (dotted vertical line). The discontinuous behaviour of the dimpling from the Y EXAFS is at variance with the continuous behaviour obtained from the refinement of the average crystallographic structure BoeFau . For better comparison the diffraction data are offset by +0.01 Å, showing that both methods yield the same overall variations, and that the discontinuity from the EXAFS work is clearly outside the scatter of the data points from the diffraction work. ## Discussion The step-like variation of the dimpling with increasing Ca concentration points to a percolative transition induced by Ca doping. Fig. 2 exhibits a plausible scheme demonstrating the occurrence of percolative paths for concentrations around 16% Ca. Here it is assumed that the holes doped by Ca<sup>2+</sup> are predominantly screened by the nearest-neighbour (nn) Y cells, thus creating a cross-like cluster of 5 distorted cells. This suggests that these holes are trapped and do not contribute to the density of the mobile carriers. Considering the Ca impurities as percolating sites in a random process, the exact theory for a square 2-D lattice StaAha predicts the critical percolation to occur at a site concentration of 59.2746%. Then straightforwardly the critical percolation for the cross-like clusters, each centered at a Ca impurity, is expected at $`59.2746\%/5\approx 12`$% Ca. We conclude that the observed step-like decrease of the dimpling around 12% Ca is connected with this percolation threshold. Conversely, the observation of a percolation threshold at 12% Ca indicates that the percolating sites are 5 cells large. The solid solution thus segregates into two phases: i. the matrix of undistorted Y cells, and ii. distorted clusters at the Ca sites. ## Concluding Remarks We conclude that doping of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> with heterovalent cations substituting Y is electronically not equivalent to oxygen doping in the chain layer. Thus generalized phase diagrams treating Ca<sup>+2</sup> and O<sup>-2</sup> dopants in Y<sub>1-y</sub>Ca<sub>y</sub>Ba<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> on an equal footing are questionable. In particular the superconducting transition temperature of dually doped Y<sub>1-y</sub>Ca<sub>y</sub>Ba<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> is not expected to follow a parabolic behaviour of $`T_c`$ vs. hole concentration. ## Acknowledgments Beamtime at the ESRF was granted under proposals HS376 and HS533. We thank S. Thienhaus for help during the data acquisition.
# Hubbard Fermi surface in the doped paramagnetic insulator ## Abstract We study the electronic structure of the doped paramagnetic insulator by finite temperature Quantum Monte-Carlo simulations for the 2D Hubbard model. Throughout we use the moderately high temperature $`T=0.33t`$, where the spin correlation length has dropped to $`<1.5`$ lattice spacings, and study the evolution of the band structure with hole doping. The effect of doping can be best described as a rigid shift of the chemical potential into the lower Hubbard band, accompanied by a transfer of spectral weight. For hole dopings $`<20`$% the Luttinger theorem is violated, and the Fermi surface volume, inferred from the Fermi level crossings of the ‘quasiparticle band’, shows a similar doping dependence as predicted by the Hubbard I and related approximations. Since the pioneering works of Hubbard, the metal-insulator transition in a paramagnetic metal has been the subject of intense study. Despite this, our theoretical understanding of this phenomenon is quite limited. Hubbard’s original solutions to the problem, the so-called Hubbard I-III approximations, have recently faced some criticism. One fact which is frequently held against his approximations or the closely related two-pole approximations is the difficulty of reconciling them with the Luttinger theorem. This can hardly be a surprise in that all of these approaches rely on splitting the electron creation operator into two ‘particles’ which are exact eigenstates of the interaction term $`H_U`$: $`c_{i,\sigma }^{\dagger }`$ $`=`$ $`c_{i,\sigma }^{\dagger }n_{i,\overline{\sigma }}+c_{i,\sigma }^{\dagger }(1-n_{i,\overline{\sigma }})`$ (1) $`=`$ $`\widehat{d}_{i,\sigma }^{\dagger }+\widehat{c}_{i,\sigma }^{\dagger },`$ (2) with $`[H_U,\widehat{d}_{i,\sigma }^{\dagger }]=U\widehat{d}_{i,\sigma }^{\dagger }`$, and $`[H_U,\widehat{c}_{i,\sigma }^{\dagger }]=0`$. The interaction term is therefore treated exactly; approximations are made to the kinetic energy. This is precisely the opposite situation as compared to the perturbation expansion in $`U`$, which leads to the Luttinger theorem. At half-filling the two ‘particles’ $`\widehat{d}_{i,\sigma }^{\dagger }`$ and $`\widehat{c}_{i,\sigma }^{\dagger }`$, whose energies of formation differ by $`U`$, then form the two separate Hubbard bands. The effect of doping in both the Hubbard I approximation and the two-pole approximations consists in the chemical potential cutting gradually into the top of the lower Hubbard band, in much the same fashion as in a doped band insulator. On the other hand the spectral weight along the lower Hubbard band deviates from the free-particle value of $`1`$ per momentum and spin, so that the Fermi surface volume (obtained from the requirement that the integrated spectral weight up to the Fermi energy be equal to the total number of electrons) is not in any ‘simple’ relationship to the number of electrons - the Luttinger theorem must be violated. In this manuscript we wish to address the question as to what really happens if a paramagnetic insulator is doped away from half-filling, by a Quantum Monte Carlo (QMC) study of the 2D Hubbard model. We use the value $`U/t=8`$ and work throughout at the moderately high temperature $`T=0.33t`$. This temperature is small compared to both the bandwidth, $`U`$, and the gap in the single particle spectrum (see Figure 1). The main effect of $`T`$ is the destruction of antiferromagnetic order, as discussed in our previous paper.
We therefore believe that our study realizes to good approximation the situation for which Hubbard’s solutions were originally designed: a paramagnetic system in the limit of large $`U`$, at a temperature which is small on the relevant energy scales. Below, we present results for the single particle spectral function and its doping dependence. These data show that the Hubbard I approximation is in fact considerably better than commonly believed: the effect of doping indeed consists mainly of a progressive shift of the chemical potential $`\mu `$ into the band structure of the insulator. The Fermi surface volume, if determined in an ‘operational’ way from the single particle spectral function, indeed is not consistent with the Luttinger theorem. We start with a brief discussion of the band structure at half-filling, see Figure 1, which shows the single particle spectral function. We note that this is quite consistent with previous QMC work. For comparison the two bands predicted by the Hubbard I approximation, $$E_\pm (𝒌)=\frac{1}{2}[(ϵ_𝒌+U)\pm \sqrt{ϵ_𝒌^2+U^2}]$$ (3) are also shown as the dashed dispersive lines ($`ϵ_𝒌=-2t(\mathrm{cos}(k_x)+\mathrm{cos}(k_y))`$ is the noninteracting dispersion). These provide at best a rough fit to those parts of the spectral function which have high spectral weight. Inspection of the numerical spectra shows quite a substantial difference between the numerical and the Hubbard-type band structures: the latter always give two bands, whereas in the numerical spectra one can rather unambiguously identify $`4`$ of them, denoted as $`B`$, $`A`$, $`A^{\prime }`$ and $`B^{\prime }`$ (see the Figure). None of these bands shows any indication of antiferromagnetic symmetry; together with the short spin correlation length this shows that we are really in the paramagnetic phase. We found that to model this $`4`$-band structure one can introduce two additional dispersionless bands at energies of $`\overline{E}_\pm =\frac{U}{2}\pm ϵ`$. We now allow mixing between each of these dispersionless bands and the respective Hubbard band, as would be described by the Hamiltonian matrix $`H_\pm =\left(\begin{array}{cc}E_\pm (𝒌)& V\\ V& \overline{E_\pm }(𝒌)\end{array}\right).`$ (6) Using the values $`ϵ=3t`$ and $`V=t`$, the resulting $`4`$-band structure provides at least a qualitatively correct fit to the numerical data. We stress that at present we have no ‘theory’ for these two additional bands. Equation (6) is just a phenomenological ansatz to fit the numerical band structure. We note, however, that a $`4`$-band structure which has some similarity with our results has recently been obtained by Pairault et al. using a strong coupling expansion. We do not, however, pursue this issue further but turn to our main subject, the effect of hole doping. Figure 2 shows the development of $`A(𝒌,\omega )`$ with doping. Thereby the $`A(𝒌,\omega )`$ for different hole concentrations have been overlaid so as to match dominant features, and the chemical potentials for the different hole concentrations are marked by lines. It is quite obvious from this Figure that the $`2`$ bands seen at half-filling in the photoemission spectrum persist with an essentially unchanged dispersion. The chemical potential gradually cuts deeper and deeper into the $`A`$ band, forming a hole-like Fermi surface centered on $`(\pi ,\pi )`$, the top of the lower Hubbard band.
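For reference, here is a small sketch evaluating the Hubbard I bands of eq. (3) and the phenomenological 4-band fit obtained by diagonalizing the 2x2 blocks of eq. (6). The parameter values are those quoted above; the pairing of each flat band with "its" Hubbard band follows our reading of the text, and the sampled momenta are our own choice.

```python
import numpy as np

t, U = 1.0, 8.0
EPSILON, V = 3.0, 1.0   # flat-band offset and mixing of the 4-band fit

def eps_k(kx, ky):
    """Noninteracting nearest-neighbor dispersion."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky))

def hubbard1_bands(kx, ky):
    """Hubbard I bands E_-(k), E_+(k) of eq. (3)."""
    e = eps_k(kx, ky)
    root = np.sqrt(e**2 + U**2)
    return 0.5 * (e + U - root), 0.5 * (e + U + root)

def four_bands(kx, ky):
    """Eigenvalues of the two 2x2 blocks of eq. (6): each Hubbard band is
    mixed with a dispersionless band at U/2 -/+ EPSILON (our reading)."""
    e_minus, e_plus = hubbard1_bands(kx, ky)
    out = []
    for E, Ebar in ((e_minus, U / 2 - EPSILON), (e_plus, U / 2 + EPSILON)):
        h = np.array([[E, V], [V, Ebar]])
        out.extend(np.linalg.eigvalsh(h))
    return sorted(out)

# a few high-symmetry momenta of the Brillouin zone
for kx, ky in [(0.0, 0.0), (np.pi, 0.0), (np.pi, np.pi)]:
    print((round(kx, 2), round(ky, 2)),
          [round(E, 3) for E in four_bands(kx, ky)])
```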
The only deviation from a rather simple rigid-band behavior is an additional transfer of spectral weight: the part of the $`A`$-band near $`(\pi ,\pi )`$ gains in spectral weight, whereas the $`B`$-band loses weight. The loss of the $`B`$ band cannot make up for the increase of the $`A`$ band, but rather there is an additional transfer of weight from the upper Hubbard bands, predominantly the $`A^{\prime }`$ band. This effect is quite well understood. The $`A^{\prime }`$ band seems to be affected most strongly by the hole doping, and in fact the rather clear two-band structure visible near $`(\pi ,\pi )`$ at half-filling rapidly gives way to one broad ‘hump’ of weight. Apart from the spectral weight transfer, however, the band structure on the photoemission side is almost unaffected by the hole doping - the dispersion of the $`A`$-band becomes somewhat wider but does not change appreciably. In that sense we see at least qualitatively the behavior predicted by the Hubbard I approximation. Next, we focus on the Fermi surface volume. Some care is necessary here: first, we cannot actually be sure that at the high temperature we are using there is still a well-defined Fermi surface. Second, the criterion we will be using is the crossing of the $`A`$ band through the chemical potential. It has to be kept in mind that this may be quite misleading, because band portions with tiny spectral weight are ignored in this approach (a point discussed at length in the literature). When thinking of a Fermi surface as the constant energy contour of the chemical potential, we have to keep in mind that portions with low spectral weight may be overlooked. On the other hand the fact that a peak with appreciable weight crosses from photoemission to inverse photoemission at a certain momentum is independent of whether we call this a ‘Fermi surface’ in the usual sense, and should be reproduced by any theory which claims to describe the system. It therefore has to be kept in mind that in the following we are basically studying a ‘spectral weight Fermi surface’, i.e. the locus in $`𝐤`$ space where an apparent quasiparticle band with high spectral weight crosses the chemical potential. With these caveats in mind, Figures 3 and 4 show the low-energy peak structure of $`A(𝒌,\omega )`$ for all allowed momenta of the $`8\times 8`$ cluster in the irreducible wedge of the Brillouin zone, and for different hole concentrations. In all of these spectra there is a pronounced peak, whose position shows a smooth dispersion with momentum. Around $`(\pi ,\pi )`$ the peak is clearly above $`\mu `$, whereas in the center of the Brillouin zone it is below. The locus in $`𝒌`$-space where the peak crosses $`\mu `$ forms a closed curve around $`(\pi ,\pi )`$, and it is obvious from the Figure that the ‘hole pocket’ around $`(\pi ,\pi )`$ increases very rapidly with $`\delta `$. To estimate the Fermi surface volume $`V_F`$ we assign a weight $`w_𝐤`$ of $`1`$ to momenta $`𝐤`$ where the peak is below $`\mu `$, $`0.5`$ if the peak is right at $`\mu `$ and $`0`$ if the peak is above $`\mu `$. Our assignments of these weights are given in Figure 3. The fractional Fermi surface volume then is $`V_F=\frac{1}{N}\sum _𝐤w_𝐤`$, where $`N=64`$ is the number of momenta in the $`8\times 8`$ cluster. Of course, the assignment of the $`w_𝐤`$ involves a certain degree of arbitrariness.
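The estimate itself is a simple weighted count over the 64 cluster momenta; here is a sketch of it, compared with the Luttinger expectation. The weight map below is a made-up illustration with a hole pocket around $`(\pi ,\pi )`$, not the published assignment of Figure 3.

```python
import numpy as np

def fermi_volume(weights):
    """Fractional Fermi surface volume V_F = (1/N) sum_k w_k.
    weights: 8x8 array with entries 1 (peak below mu), 0.5 (at mu), 0 (above)."""
    w = np.asarray(weights, dtype=float)
    return w.sum() / w.size

def luttinger_volume(delta):
    """Luttinger prediction for the occupied fraction of the Brillouin zone
    (per spin), (1 - delta)/2, at hole doping delta."""
    return (1.0 - delta) / 2.0

# illustrative weight map only: a hole pocket centered on (pi,pi)
w = np.ones((8, 8))
w[2:7, 2:7] = 0.0                                      # peaks above mu
w[1, 2:7] = w[7, 2:7] = w[2:7, 1] = w[2:7, 7] = 0.5    # peaks right at mu
print(fermi_volume(w), luttinger_volume(delta=0.05))
```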
It can be seen from Figures 3 and 4, however, that our $`w_𝐤`$ would in any way tend to underestimate the Fermi surface volume, so that the obtained $`V_F`$ data points rather have the character of a lower bound to the true $`V_F`$. Even if we take into account some small variations of $`V_F`$ due to different assignments of the weight factors, however, the resulting $`V_F`$ versus $`\delta `$ curve can never be made consistent with the Luttinger volume, see Figure 5. The deviation from the Luttinger volume is quite pronounced at low doping. $`V_F`$ approaches the Luttinger volume for dopings $`\approx 20`$%, but due to our somewhat crude way of determining $`V_F`$ we cannot really decide when precisely the Luttinger theorem is obeyed. The Hubbard I approximation approaches the Luttinger volume for hole concentrations of $`\approx 50`$%, i.e. the steepness of the drop of $`V_F`$ is not reproduced quantitatively. The latter is somewhat improved in the so-called $`2`$-pole approximation. For example the Fermi surface given by Beenen and Edwards for $`n=0.94`$ obviously is very consistent with the spectrum in Figure 3 for $`n=0.95`$. In summary, we have studied the doping evolution of the single particle spectral function for the paramagnetic phase of the 2D Hubbard model, starting out from the insulator. As a surprising result, we found that in this situation the Hubbard I and related approximations give a qualitatively quite correct picture. The main discrepancy between the Hubbard I and the so-called $`2`$-pole approximation and our numerical spectra is the number of ‘bands’ of high spectral weight, which is $`4`$ in the numerical data. This is no reason for concern, because we have seen that adding two more bands allows for a quite reasonable fit to the numerical band structure, and one might expect that finding a somewhat more intricate decoupling scheme for the Hubbard I approximation or a suitable $`4`$-pole approximation should not pose a major problem. The greatest success of the Hubbard-type approximations, however, is a qualitatively quite correct description of the evolution of the ‘Fermi surface’. The effect of doping consists of the progressive shift of the chemical potential into the topmost band observed at half-filling, accompanied by some transfer of spectral weight. The Fermi surface volume, determined in an ‘operational way’ from the band crossings, violates the Luttinger theorem for low hole concentrations and does not appear to be in any simple relationship to the electron density. The Luttinger sum rule is recovered only for hole concentrations around $`20`$%. It is interesting to note in this context that a recent study of the momentum distribution in the t-J model by high-temperature series expansion has also provided some evidence for a ‘Fermi surface’ which encloses a larger volume than predicted by the Luttinger theorem. The criterion used there was a maximum of $`|\nabla _𝒌n_𝒌|`$, i.e. the locus of the steepest drop of $`n_𝒌`$. This would in fact be quite consistent with the present results. However, the same caveat as in the present case applies, i.e. this criterion will overlook Fermi level crossings of bands with low spectral weight. In our opinion the strange dependence of $`V_F`$ on electron density makes it questionable whether the ‘spectral weight Fermi surface’ in our data is a true constant energy contour for a system of ‘quasiparticles’.
It is possible that at the temperature we are studying a Fermi surface in the usual sense no longer exists, and that the Hubbard I approximation merely reproduces the spectral weight distribution in this case. As our data show, however, for that purpose the approximation is considerably better than commonly believed. Zero temperature studies for the doped t-J and Hubbard models are only possible by using exact diagonalization, in which case the shell effects due to the small system size require special care. One crucial point is the very different shape of the quasiparticle dispersion at zero temperature. Whereas the $`A`$ band is at least topologically equivalent to a nearest neighbor hopping dispersion, with minimum at $`(0,0)`$ and maximum at $`(\pi ,\pi )`$, the zero temperature data show a second-nearest neighbor dispersion with a nearly degenerate band maximum along the antiferromagnetic zone boundary and a shallow absolute maximum at $`(\pi /2,\pi /2)`$. Hole doping at zero temperature, however, has a qualitatively very similar effect to that found in the present case: the chemical potential simply cuts into the quasiparticle band of the insulator, which thus is populated by hole-like quasiparticles. Again, these ‘hole pockets’ violate the Luttinger theorem, indicating that the breakdown of adiabatic continuity in the low doping regime persists also at low temperatures. We thank W. Hanke for useful comments. This work was supported by DFN Contract No. TK 598-VA/D03 and by BMBF (05SB8WWA1); computations were performed at HLRS Stuttgart, LRZ München and HLRZ Jülich.
# The Top Ten List of Gravitational Lens Candidates from the HST Medium Deep Survey ## 1 Introduction The HST Medium Deep Survey (MDS) (Griffiths et al. 1994a; Griffiths et al. 1994b; Ratnatunga et al. (1999)) has comprised parallel WFPC2 observations of just over 400 random fields for the systematic study of the evolution of faint galaxies, as well as being a serendipitous survey which has resulted in the discovery of many interesting objects. The survey has provided a unique set of data for over 150,000 galaxies, wherein the selection of a correspondingly large number of field galaxies has been possible down to I$`<`$25 (Griffiths et al. (1996)). The discovery of two quadruple-type lenses from the HST MDS, viz. HST 12531$`-`$2914 and HST 14176$`+`$5226, has previously been reported (Ratnatunga et al. (1995)), and subsequent spectroscopic observations of HST 14176$`+`$5226 have provided confirmation that the system is indeed a gravitational lens (Crampton et al. (1996); Moustakas and Davis (1998)). Observations and modeling of lens systems can be used to constrain directly the distribution of dark matter in the lensing galaxies. The characteristic separation $`\mathrm{\Delta }\theta `$ of lensed images depends on the total lensing mass and the distances between the observer, the lens, and the source object. This allows the mass-to-light ratio of the lens galaxy to be determined directly if the lens and source redshifts are known. The shape of the lensing mass distribution can also be constrained, using the configuration and light distribution of the lensed images. Gravitational lenses can also be used to provide constraints on the geometry of the universe. The image separations $`\mathrm{\Delta }\theta `$ depend on the angular size distances to the lens and background source, which in turn depend on global cosmology. In order to measure the cosmological constant $`\mathrm{\Lambda }`$, for instance, it has been suggested that strong gravitational lenses might be used, i.e. isolated galaxies or clusters of galaxies for which the gravitational potential results in multiple imaging of a background source object (Paczynski and Gorski (1981); Alcock and Anderson (1986); Gott, Park, and Lee (1989)). The use of the lens number counts (or the optical depth) has also been advocated, since this is very sensitive to $`\mathrm{\Lambda }`$ (Fukugita et al. (1992)). Maoz and Rix (1993) and Kochanek (1996) have applied this method, obtaining upper limits of $`\mathrm{\Lambda }\lesssim 0.7`$. Predictions of lensing probabilities by galaxies are very sensitive to $`\mathrm{\Lambda }`$. If $`\mathrm{\Lambda }\approx 1`$, for example, we would expect to see about 10 times more lensed systems than if $`\mathrm{\Lambda }=0`$ (Maoz and Rix (1993); Kochanek (1993)). Lensing constraints on cosmology are not entirely free from systematic errors, however. The biggest problem is the lack of information on the lens galaxies at $`z>0.4`$, which still allows significant error in the estimate of $`\mathrm{\Lambda }`$. Elliptical galaxies dominate the expected lensing rates, due to the large concentration of mass near their centers (e.g. Keeton and Kochanek (1997)), so any statistical bias needs to be determined by comparing the non-lensing elliptical galaxies at comparable redshifts. At the faint magnitudes $`V>20`$ surveyed by the HST MDS we might expect to discover a few gravitational lenses in each square degree of sky.
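For orientation, the angular scale involved is set, in the simplest case of a point-mass lens $`M`$, by the standard Einstein radius (a textbook relation rather than a result of this survey): $$\theta _E=\sqrt{\frac{4GM}{c^2}\frac{D_{LS}}{D_LD_S}},$$ where $`D_L`$, $`D_S`$ and $`D_{LS}`$ are the angular size distances to the lens, to the source, and from lens to source. Typical image separations are $`\mathrm{\Delta }\theta \approx 2\theta _E`$, and it is through these distances that the dependence on global cosmology enters.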
Unfortunately the total area of sky surveyed by the WFPC2 is still less than about 1 square degree, and most of the lens candidates discovered are too small and faint for ground-based spectroscopic follow-up with even the largest telescopes. The sample we are locating today probably needs the next generation of space-based instrumentation for more detailed studies. ## 2 Gravitational Lenses in the Medium Deep Survey Miralda-Escudé and Lehár (1992) have pointed out that, for the same lens galaxy population that is responsible for the known lensed radio sources, there should be $`\sim 100`$ lensed faint galaxies per square degree. With small image separations ($`\mathrm{\Delta }\theta \lesssim 1^{\prime \prime }`$) and faint background sources, these lensed systems cannot be detected from the ground. However, they should be readily detectable using HST observations, and could represent the largest untapped mine of lensed systems. A large statistical sample would greatly increase the importance of lensing for studies of galaxy evolution and cosmology. Among the sample of field galaxies in the MDS, we have discovered ten gravitational lens candidates. The discovery of two lenses, HST 12531$`-`$2914 and HST 14176$`+`$5226 (Figures 1-2), has been reported already (Ratnatunga et al. (1995)), and the spectroscopic observation of HST 14176$`+`$5226 has provided confirmation that the system is a gravitational lens (Crampton et al. (1996); Moustakas and Davis (1998)). The other eight lens candidates were discovered subsequently, and their HST images are shown in Figures 3-10, in the light of filters F814W (I), F606W (V) and, when available, F450W (B). Each image extracted from an HST WFPC2 observation is $`6\stackrel{\prime \prime }{\mathrm{.}}4`$ on each side. Typically, the lensed (source) images are blue arcs, while the lensing galaxy appears to be a red elliptical galaxy, thus making them very good candidates for gravitational lensing. ## 3 Simple models of the Gravitational Lens systems A two-dimensional image fitting procedure was developed to model the first two HST lenses, as described in Ratnatunga et al. (1995). This simple modeling approach used singular isothermal ellipsoid potentials (Kormann, Schneider and Bartelmann (1994)), and this same approach was applied to each of the gravitational lenses to derive the best fitting model in each case. Our approach does not require careful astrometry or photometric measurement of the faint, under-sampled WFPC2 images. The model is derived by fitting the full 2D image rather than a list of estimated positions and magnitudes. This approach, which is more CPU intensive, makes use of the full extended nature of the light distribution of the lens and lensed images and also uses the observational errors associated with each image pixel. The choice of the number of parameters fitted was made interactively so as to obtain a good convergence while not exceeding the number of significant parameters. In a few cases the system was observed in several filters and the lensed images in some filters were too faint or not sufficiently well resolved from the brighter lens galaxies. For these cases it was not possible to fit all the parameters, and for the adopted model we took the geometric parameters and the description of the potential from the filter in which the system was observed best. The centroid of the gravitational potential of the lens was assumed to be the same as that of the light of the lensing galaxy.
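To illustrate the pixel-level fitting idea in the simplest possible setting (a circular singular isothermal sphere in place of the full ellipsoid, synthetic data in place of WFPC2 frames, and no PSF convolution, which the actual modeling does include; the full set of free parameters is described in the next paragraph):

```python
import numpy as np

npix, scale = 64, 0.1                  # pixels and arcsec per pixel
x = (np.arange(npix) - npix/2)*scale
tx, ty = np.meshgrid(x, x)             # image-plane angles theta

def sis_model(theta_E, src_x, src_y, src_r, amp):
    """Ray-trace a round Gaussian source through a singular isothermal sphere."""
    r = np.hypot(tx, ty) + 1e-12
    bx = tx - theta_E*tx/r             # lens equation: beta = theta - alpha(theta)
    by = ty - theta_E*ty/r
    return amp*np.exp(-((bx - src_x)**2 + (by - src_y)**2)/(2*src_r**2))

rng = np.random.default_rng(0)
sigma = 0.02                           # per-pixel noise level
data = sis_model(0.8, 0.13, 0.0, 0.08, 1.0) + sigma*rng.standard_normal((npix, npix))

def chi2(params):
    return np.sum((data - sis_model(*params))**2/sigma**2)

# Crude grid search over the Einstein radius with the other parameters held
# fixed; the actual procedure fits all parameters simultaneously.
grid = np.linspace(0.5, 1.1, 25)
best = min(grid, key=lambda tE: chi2((tE, 0.13, 0.0, 0.08, 1.0)))
print("best-fit Einstein radius ~", round(best, 2), "arcsec")
```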
The axis ratio and orientation of the mass (potential) were left as free parameters in the model. If the lensed object is extended, with a half-light radius larger than a pixel, then the axis ratio and orientation of the lensed galaxy were also left as free parameters. The image profiles of the lens and the lensed objects were selected to be disk-like or bulge-like. If the lensed object was point-like within the WFPC2 resolution then a Gaussian profile was adopted. In a few cases the lens was better fitted by a disk-plus-bulge model, as could be seen when the lens galaxy was analyzed alone through the regular MDS pipeline analysis software. However, to keep the modeling simple and limit the number of parameters fitted, the lens was modeled with a single component. Images of the candidate lenses in each filter are illustrated for F814W (I), F606W (V) and, when available, F450W (B), from the top down. On each horizontal strip of five images, we have from left to right 1) the region selected for analysis, 2) the model of the WFPC2 observation, 3) the model without convolution by the HST PSF, 4) the residual image after subtraction of the model image from the observed image and 5) the model without the lensing galaxy. The corresponding tables give the fitted parameters, with errors for each model. More details and FITS files of the observations and fitted models are available from the MDS website http://archive.stsci.edu/mds/. ## 4 Discussion of each lens system HST 14176+5226 is the first MDS HST “Einstein Cross” lens candidate, found in the “Groth-Westphal strip” (GWS). This is the brightest lens system discovered with the HST and it has been confirmed spectroscopically. The elliptical lensing galaxy has a redshift of z=0.803 and a color index (V-I)=1.9 mag. The lensed source at redshift z=3.4 (Crampton et al. (1996); Moustakas and Davis (1998)) appears to be a QSO with an apparent color (V-I)=0.5 mag. The impact radius (the angular distance between the source and the centroid of the lens) is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$13. The image magnification is about 2.4 magnitudes, making each component about 0.9 mag brighter than the source QSO. The modeled image gives a very good fit to the observed image, with a normalized $`\chi ^2`$ of unity. HST 12531-2914 is the second MDS HST “Einstein Cross” lens candidate. The elliptical lensing galaxy has an apparent color (V-I)=2.0 mag. The lensed source appears to be point-like with a color (V-I)=0.2 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$06. The image magnification is about 2.3 magnitudes, making each component about 0.8 mag brighter than the point-like source. The model gives a very good fit with a normalized $`\chi ^2`$ of unity. HST 14164+5215 is a pair of images symmetrically placed around a brighter galaxy in the GWS. It is a difficult candidate to model, since the differential image distortion so close to the WFPC2 pyramid edge is unknown and probably significant. This distortion was ignored in the preliminary model presented here and may account for the poor fit if the candidate is confirmed spectroscopically. Since the lensing galaxy is a disk with a 20% bulge, a disk profile was adopted for the single component lens model. The bulge is apparent at the center in the residual image. The lens galaxy has an apparent color (V-I)=1.0 mag. The lensed source appears to be point-like with a color (V-I)=0.7 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$1. The image magnification is about 3.5 magnitudes, making each component about 2.7 mag brighter than the source.
The model gives a very good fit with a normalized $`\chi ^2`$ just larger than unity (1.07). HST 15433+5352 is a very good strong lens candidate. The red elliptical lensing galaxy has an apparent color (V-I)=1.4 mag and (B-V)=2.1 mag. The much bluer lensed source appears to be extended, with a color (V-I)=0.4 mag and (B-V)=0.4 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$16. The image magnification is about 2.2 magnitudes. The model gives a reasonably good fit with a normalized $`\chi ^2`$ just larger than unity. HST 01247+0352 is a good candidate for a strongly lensed pair. The red spherical elliptical lensing galaxy (E0) has an apparent color (V-I)=2.1 mag. The bluer lensed source appears to be point-like with a color (V-I)=0.7 mag. Two much fainter images can be seen near the detection limit, which might make this a quad system. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$08. The image magnification is about 3.5 magnitudes, making each component about 2.8 mag brighter than the source. The model gives a reasonably good fit with a normalized $`\chi ^2`$ close to unity. HST 01248+0351 is a good candidate for a strongly lensed pair in the same WFPC2 field as HST 01247+0352. The edge-on disk lensing galaxy has a color (V-I)=1.2 mag. The bluer lensed source appears to be point-like with a color (V-I)=0.7 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$05. The image magnification is about 3.1 magnitudes, making each component about 2.4 mag brighter than the source. The model gives a good fit with a normalized $`\chi ^2`$ close to unity. Spectroscopy is required to confirm this candidate as a gravitational lens. It is included in this sample as the best candidate for lensing by a disk (Keeton and Kochanek (1998)). HST 16302+8230 could be an “Einstein ring” but clearly needs spectroscopic confirmation, since it may also be a polar ring galaxy. In our opinion it is the least probable entry in our top ten candidate list, but the most intriguing. It has been nicknamed “the London Underground” since it resembles that logo. The right half of the image is much fainter for an unknown reason. The “ring” is clearly much bluer than the central galaxy. The edge-on disk lensing galaxy is barely visible in the V band and was not detected in the B band, which shows just a part of the ring. If it is a lensed source, then it appears to be extended with a color (V-I)=0.9 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$14. The image magnification is about 2.1 magnitudes. The model gives a reasonably good fit with a normalized $`\chi ^2`$ close to unity. HST 16309+8230 is an arc in the same field as HST 16302+8230. The lensing elliptical galaxy has an apparent color (V-I)=1.4 mag and (B-V)=2.2 mag. The significantly distorted lensed source is an extended edge-on disk-like galaxy with colors (V-I)=1.4 mag and (B-V)=0.4 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$5. The image magnification is about 0.9 magnitudes. The model gives a good fit with a normalized $`\chi ^2`$ close to unity. HST 12368+6212 is an arc in the Hubble Deep Field (HDF). The lensing elliptical galaxy has an apparent color (V-I)=1.0 mag and (B-V)=2.0 mag. The significantly distorted lensed source is extended, with colors (V-I)=0.3 mag and (B-V)=0.2 mag. The impact radius is 0$`\stackrel{\prime \prime }{\mathrm{.}}`$4. The image magnification is about 1.7 magnitudes. The model gives a good fit with a normalized $`\chi ^2`$ just larger than unity.
HST 18078+4600 is an arc caused by the gravitational potential of a small group of 4 galaxies. Since the software at hand, developed for Ratnatunga et al. (1995), assumed a single lensing galaxy, the outer two galaxy images were masked out in the image analysis which was used to derive the approximate lens model. The two central galaxies, with a color (V-I)=2.0 mag, were merged in the model and were not detected in the B band. The significantly distorted lensed source is extended, with colors (V-I)=0.3 mag and (B-V)=0.2 mag. The image magnification is about 0.7 magnitudes. The model gives a good fit in V and B with a normalized $`\chi ^2`$ just larger than unity, but a poor fit in I owing to the use of a single lens galaxy approximation in the model. ## 5 Survey Area The number of strong gravitational lenses as a function of redshift is very sensitive to the value of the cosmological constant $`\mathrm{\Lambda }`$ (Fukugita et al. (1992); Kochanek (1992)) and could be used as an estimator of the latter quantity if all of the complex detection thresholds which govern completeness are well understood and known quantitatively. These ten gravitational lenses were discovered serendipitously on WFPC2 images processed via the data analysis pipeline of the HST Medium Deep Survey and were not found as a result of a very systematic or quantitative search. In view of the very large range of data quality amongst the pure parallel MDS observations, and because of overlapping fields, it is not possible to be precise about the number of fields surveyed, and we have deliberately rounded the numbers to reflect their approximate nature. In the full survey we processed over 500 WFPC2 fields, each covering about 4.77 square arcmin of sky. Overlap between WFPC2 fields is on average about 20% for the MDS survey. All of the gravitational lenses were, however, discovered on the roughly 130 WFPC2 fields which were designated “priority one”, for which we had three or more exposures in each of two or more filters. Although only just over 10% of the “priority one” fields had exposures in the three BVI filters, 50% of the lenses discovered have observations in all three filters. Since the pointings were random, this implies a very strong selection effect in terms of data quality. Clearly we found the “Top ten list” of lenses in the best data available to the Medium Deep Survey. ## 6 Incompleteness Two factors distinguish these WFPC2 fields from the rest. More exposures usually indicated a longer total exposure time and, maybe more importantly, better cosmic-ray rejection. Removal of cosmic rays from single exposures clearly runs the risk of losing the lensed images in the cleaning process, and this remains true for stacks of two images, which require cleaning of the non-negligible number of pixels hit by cosmic rays in both images. In Figure 11 we show the limiting magnitude for completeness of the object catalog as a function of total integration time, for all WFPC2 images processed through the MDS pipeline in the 3 main filters. The “priority one” MDS fields are plotted with circles, with filled circles showing those fields on which gravitational lenses were discovered. It appears that a limiting magnitude fainter than 25.1 mag in F450W, 24.8 in F606W and 24.3 in F814W, requiring total exposure times typically longer than 1 hour, is needed for the discovery of a gravitational lens.
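The bookkeeping behind the survey-area estimates derived next, and behind the per-component magnitudes quoted in §4, is elementary; a minimal sketch using the round numbers from the text:

```python
# Survey area: N fields of 4.77 sq arcmin each, less ~20% overlap.
def survey_area_sq_deg(n_fields, overlap=0.20, field_arcmin2=4.77):
    return n_fields*field_arcmin2*(1.0 - overlap)/3600.0

for n in (160, 130, 90):
    print(n, "fields ->", round(survey_area_sq_deg(n), 2), "sq deg")
# 160 -> 0.17, 130 -> 0.14, 90 -> 0.1, matching the estimates below

# Magnification quoted in magnitudes <-> flux magnification factor.
def mag_factor(delta_mag):
    return 10**(0.4*delta_mag)

# e.g. a total magnification of 2.4 mag is a flux factor of ~9.1; split
# over four quad images, 2.5*log10(9.1/4) ~ 0.9 mag per component.
print(round(mag_factor(2.4), 1))
```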
Assuming that limiting magnitude is the only factor, the survey of about 160 WFPC2 fields, less about 20% overlap, leads to an estimate of the total survey area of about 0.17 square degrees. The actual process of discovery of the lenses included a manual component, whereby the most likely candidates for gravitational lensing were picked out during an inspection stage of the MDS pipeline or from the residual images after removal of the maximum likelihood scale-free model for the galaxy. Typically the lensed object was much bluer than the lensing galaxy, and was seen better in the F450W or F606W filter than in F814W, which was by default the first filter used for parallel observations of limited total time on each field. The availability of an image in other filters was useful to ensure that the colors of the lensed images were about the same. So the fact that all the lenses discovered in the MDS are on “priority one” fields is probably not coincidental. If good images in two or more filters (MDS priority one) were the only factor, then the survey consisted of about 130 WFPC2 fields with an estimated area of about 0.14 square degrees. If we take both conditions, i.e. limiting magnitude and MDS “priority one”, then we have about 90 fields or about 0.1 square degrees, which is a lower estimate. An incomplete sample of gravitational lenses can, however, be used to measure $`\mathrm{\Lambda }`$ if we know or can average over unknown properties of the lensing galaxies such as the mass distribution (Im, Griffiths and Ratnatunga (1997)). The expected bias caused by the orientation of the mass should be random and could be averaged out. However, a selection effect that is caused by internal extinction within the lens galaxy may lead to a systematic bias that does not average out. When a statistically significant sample of spectroscopically confirmed candidates becomes available, the MDS database is an ideal place for the comparison of the properties of the lensing ellipticals and non-lensing ellipticals, to see if lenses are found preferentially in low-extinction systems. We have presented a “Top Ten list” of the lens candidates. In most of these cases the lensed images are well resolved from the lensing galaxy. Most of the candidates were picked out while processing the data through the MDS pipeline. The current sample unfortunately suffers the same selection biases as the serendipitously discovered sample of gravitational lenses known at brighter magnitudes. At the expense of having a slightly incomplete sample, the lenses modeled in this paper were selected to have the highest probability of being good gravitational lens candidates. If one lowers the standard of acceptance, one can pick out hundreds more possible lens candidates, a small fraction of which may turn out to be gravitational lenses. After removal of the smooth axisymmetric galaxy model images, the residuals sometimes arrange themselves symmetrically into possible gravitational lens candidates. Most of these residual images are probably bright regions of star formation within the very faint spiral arms around the central bulge of a galaxy. It is thus very unlikely that most of these objects are lenses. A spectroscopic study of a significant sample of these objects is unfortunately not practical using current ground-based observations, and these candidates therefore do not represent a scientifically usable sample. They have thus been excluded from the present list. A case such as this was recently published by Fischer, Schade & Barrientos (1998).
In Figure 12 we show the region around this galaxy, which appears to show an extended face-on spiral arm structure with the bright regions aligned along the arms. In addition, the best-fit lens model of the observed image configuration requires the adoption of a complex potential which is significantly misaligned from the light distribution. Although it will remain a candidate until spectroscopic observation is done, we suspect that these images are of star-formation regions, as found in the case reported in Glazebrook et al. (1994). ## 7 Conclusions Gravitational lenses discovered using HST are especially useful because the lens galaxies are at great distances (typically $`z_L\sim 0.6`$), allowing for an independent and new method for the study of both galaxy evolution and cosmology. A total of ten good gravitational lens candidates have been discovered in the HST Medium Deep Survey (MDS) and archival primary observations using WFPC2. Seven are “strong lens” candidates which appear to have multiple images of the source. Three are cases where the image of the source galaxy has been significantly distorted into an arc. We have summarized the data on all ten candidates and described them with simple models assuming singular isothermal potentials. This paper is based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The Medium-Deep Survey was funded by STScI grant G02684 et seq. and by the HST WFPC2 Science Team under JPL subcontract 960772 from NASA contract NAS7-918. We acknowledge the contributions of Dr Lyman Neuschaefer, who was associated with the MDS pipeline. We thank Kristin Wintersteen and Julie Snyder, who did High School Projects at CMU with the MDS database. We thank Drs. Stefano Casertano, Myungshin Im and Joseph Lehar for useful discussions, and our referee for useful comments.
# Untitled Document This paper has been withdrawn by the authors, and instead we have replaced hep-ph/9807214 by this new version.
# Completions of cellular algebras ## 0. Introduction Cellular algebras, which were introduced by Graham and Lehrer in , are a class of associative algebras defined in terms of a “cell datum” satisfying certain axioms. Several interesting classes of finite dimensional algebras, including certain Hecke algebras and Brauer algebras, can be described in this way. One of the strengths of the theory of cellular algebras is that it provides a complete list of absolutely irreducible modules for the algebra over a field. Unfortunately, this property is no longer true if one allows an algebra satisfying the cellular axioms to be infinite dimensional, and there is a simple counterexample to show this. The goal of this paper is to give a natural definition of cellularity for a general algebra which not only agrees with the original definition if the algebra is finite dimensional, but which also produces results analogous to those in the finite dimensional case if the algebra is infinite dimensional. The material in §1 is expository, giving a summary of some of the important results in the theory of cellular algebras, and outlining the obstructions to the theory in infinite dimensions. The main problem is that although the definition of “cellular” makes sense for infinite as well as finite dimensional algebras, the results of assume tacitly that the algebra is finite dimensional. If the algebra is infinite dimensional, the results can fail badly even for simple examples. In §2, we introduce the notion of a cell datum of “profinite type”. For such a cell datum, the associated partially ordered set is allowed to be infinite, provided that it satisfies a certain finitary condition. If the condition is satisfied, the infinite dimensional algebra of profinite type may be completed with respect to a natural topology, which depends on the cell datum, resulting in a “procellular” algebra. The desired classification for absolutely irreducible modules can then be obtained by considering “smooth” representations of the completed algebra. In §3, we look at a nontrivial natural example of such a procellular algebra. This example arises as a completion of Lusztig’s algebra $`\dot{U}`$, which appears in , or alternatively as a projective limit of $`q`$-Schur algebras. We also discuss briefly another very interesting natural example of a procellular algebra which arises from a certain completion of the affine Temperley–Lieb algebra. ## 1. Cellular algebras ### 1.1 Definitions We start by reviewing some of the basic properties of cellular algebras. ###### Definition 1.1.1 (\[8, Definition 1.1\]) Let $`R`$ be a commutative ring with identity. A cellular algebra over $`R`$ is an associative unital algebra, $`A`$, together with a cell datum $`(\mathrm{\Lambda },M,C,\ast )`$ where 1. $`\mathrm{\Lambda }`$ is a partially ordered set (or “poset” for short), ordered by $`<`$. For each $`\lambda \in \mathrm{\Lambda }`$, $`M(\lambda )`$ is a finite set (the set of “tableaux” of type $`\lambda `$) such that $$C:\coprod _{\lambda \in \mathrm{\Lambda }}\left(M(\lambda )\times M(\lambda )\right)\to A$$ is injective with image an $`R`$-basis of $`A`$. 2. If $`\lambda \in \mathrm{\Lambda }`$ and $`S,T\in M(\lambda )`$, we write $`C(S,T)=C_{S,T}^\lambda \in A`$. Then $`\ast `$ is an $`R`$-linear involutory anti-automorphism of $`A`$ such that $`(C_{S,T}^\lambda )^{\ast }=C_{T,S}^\lambda `$. 3.
If $`\lambda \in \mathrm{\Lambda }`$ and $`S,T\in M(\lambda )`$ then for all $`a\in A`$ we have $$aC_{S,T}^\lambda \equiv \underset{S^{\prime }\in M(\lambda )}{\sum }r_a(S^{\prime },S)C_{S^{\prime },T}^\lambda \mathrm{mod}A(<\lambda ),$$ where $`r_a(S^{\prime },S)\in R`$ is independent of $`T`$ and $`A(<\lambda )`$ is the $`R`$-submodule of $`A`$ generated by the set $$\{C_{S^{\prime \prime },T^{\prime \prime }}^\mu :\mu <\lambda ,S^{\prime \prime }\in M(\mu ),T^{\prime \prime }\in M(\mu )\}.$$ We use Definition 1.1.1 only when $`\mathrm{\Lambda }`$ is a finite set, although such an assumption is omitted in . There are many interesting examples of finite dimensional algebras which satisfy these axioms, some of which are described in detail in . These include the Brauer algebra, the Temperley–Lieb algebra, the Hecke algebra of type $`A`$ (using the Kazhdan–Lusztig basis as the cell basis) and Jones’ annular algebra. The theory of cellular algebras may be applied to find, among other things, criteria for semisimplicity and a complete set of absolutely irreducible modules over a field. In this paper, we will mostly be concerned with the latter application, and to understand it, we recall the notion of cell modules. ###### Definition 1.1.2 (\[8, Definition 2.1\]) For each $`\lambda \in \mathrm{\Lambda }`$ define the left $`A`$-module $`W(\lambda )`$ as follows: $`W(\lambda )`$ is a free $`R`$-module with basis $`\{C_S:S\in M(\lambda )\}`$ and $`A`$-action defined by $$aC_S=\underset{S^{\prime }\in M(\lambda )}{\sum }r_a(S^{\prime },S)C_{S^{\prime }}\qquad (a\in A,S\in M(\lambda )),$$ where the $`r_a`$ are the structure constants of Definition 1.1.1. This is called the cell representation of $`A`$ corresponding to $`\lambda \in \mathrm{\Lambda }`$. Associated with each cell representation $`W(\lambda )`$ is a bilinear form $`\varphi _\lambda `$ which is of key importance in classifying the irreducible modules. ###### Lemma 1.1.3 Let $`\lambda \in \mathrm{\Lambda }`$. Then for any elements $`S_1,S_2,T_1,T_2\in M(\lambda )`$, we have $$C_{S_1,T_1}^\lambda C_{S_2,T_2}^\lambda \equiv \varphi _1(T_1,S_2)C_{S_1,T_2}^\lambda \mathrm{mod}A(<\lambda )$$ where $`\varphi _1(T_1,S_2)\in R`$ is independent of $`T_2`$ and $`S_1`$. ###### Demonstration Proof This follows from \[8, Lemma 1.7\] in the case where $`a=1`$ in that proof. ∎ ###### Definition 1.1.4 (\[8, Definition 2.3\]) For $`\lambda \in \mathrm{\Lambda }`$, define $`\varphi _\lambda :W(\lambda )\times W(\lambda )\to R`$ by $`\varphi _\lambda (C_S,C_T)=\varphi _1(S,T)`$, where $`S,T\in M(\lambda )`$, and extend $`\varphi _\lambda `$ bilinearly. From now on, we assume that $`R`$ is a field. We now state one of the key results of the paper , which is the main one we aim to generalise to infinite dimensions. This result genuinely requires $`\mathrm{\Lambda }`$ to be finite. ###### Theorem 1.1.5 (Graham, Lehrer) For $`\lambda \in \mathrm{\Lambda }`$, the subspace $`\text{rad}(\lambda )`$ of $`W(\lambda )`$ given by $$\text{rad}(\lambda ):=\{x\in W(\lambda ):\varphi _\lambda (x,y)=0\text{ for all }y\in W(\lambda )\}$$ is an $`A`$-submodule of $`W(\lambda )`$. The quotient module $`L(\lambda ):=W(\lambda )/\text{rad}(\lambda )`$ is absolutely irreducible. Define $`\mathrm{\Lambda }_0:=\{\lambda \in \mathrm{\Lambda }:\varphi _\lambda \ne 0\}`$. Then the set $`\{L(\lambda ):\lambda \in \mathrm{\Lambda }_0\}`$ is a complete set of equivalence classes of absolutely irreducible $`A`$-modules. ###### Demonstration Proof This follows from \[8, Proposition 3.2, Theorem 3.4\]. ∎
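For later reference, the criterion in Theorem 1.1.5 admits a convenient matrix reformulation, which is standard for cellular algebras although not spelled out above: with respect to the basis $`\{C_S:S\in M(\lambda )\}`$ the form $`\varphi _\lambda `$ has a Gram matrix $$G(\lambda ):=\left(\varphi _\lambda (C_S,C_T)\right)_{S,T\in M(\lambda )},$$ and over a field one has $`\mathrm{dim}L(\lambda )=\mathrm{rank}G(\lambda )`$; in particular $`\lambda \in \mathrm{\Lambda }_0`$ exactly when $`G(\lambda )`$ is not the zero matrix.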
### 1.2 Obstructions to cellular algebras in infinite dimensions Note that in the definition of a cellular algebra, there was no requirement that the set $`\mathrm{\Lambda }`$ be finite, although the sets $`M(\lambda )`$ for a fixed $`\lambda `$ must be finite. (Removing the latter assumption would mean that the cell modules would be infinite dimensional, which would be very inconvenient.) The following simple example (which is also discussed in the remarks following \[12, Theorem 3.1\]) shows that Theorem 1.1.5 fails fairly badly for a general cellular algebra with an infinite poset $`\mathrm{\Lambda }`$. ###### Proposition 1.2.1 Let $`R`$ be an algebraically closed field and let $`A`$ be the ring of polynomials $`R[x]`$. Let $`\mathrm{\Lambda }`$ be the set of natural numbers (including $`0`$), ordered by the reverse of the usual order. Let $`M(\lambda )`$ be a set containing one element for each $`\lambda \in \mathrm{\Lambda }`$. If $`S\in M(\lambda )`$, we define $`C(S,S)=x^\lambda `$. Let $`\ast `$ be the identity map on $`A`$. Then $`(\mathrm{\Lambda },M,C,\ast )`$ is a cell datum for $`A`$ over $`R`$, in the sense of Definition 1.1.1. ###### Demonstration Proof We omit the proof, since it is almost trivial. ∎ Although this algebra $`R[x]`$ satisfies the cellular axioms, Theorem 1.1.5 fails severely when applied to it. ###### Proposition 1.2.2 The irreducible representations of $`R[x]`$ over $`R`$ are obtained by evaluation at $`x`$ and are therefore in bijection with the elements of $`R`$. The set $`\mathrm{\Lambda }_0`$ consists only of the element $`0\in \mathrm{\Lambda }`$. Therefore Theorem 1.1.5 is false in general if $`\mathrm{\Lambda }`$ is infinite. ###### Demonstration Proof The first assertion follows from Schur’s lemma, since $`R`$ is algebraically closed. The second assertion follows from the fact that $`x^rx^r\equiv 0`$ modulo $`A(<r)`$ unless $`r=0`$, in which case $`x^rx^r\equiv x^r`$ modulo $`A(<r)`$. It is now obvious that the set $`\mathrm{\Lambda }_0`$ does not index the absolutely irreducible $`R[x]`$-modules over $`R`$, which is the third assertion. ∎ We observe that the only irreducible representation of $`R[x]`$ which extends to a “continuous” representation of the power series ring $`R[[x]]`$ is the one which is indexed by the element $`0\in \mathrm{\Lambda }_0`$. This suggests that in order to find an analogue of Theorem 1.1.5 in the infinite dimensional case, one should consider modules for a suitable completion of the original algebra. ## 2. Procellular algebras From now on, we assume that $`R`$ is a field unless otherwise stated. ### 2.1 Construction of procellular algebras ###### Definition 2.1.1 A coideal $`I`$ of the poset $`\mathrm{\Lambda }`$ is a subset of $`\mathrm{\Lambda }`$ for which $`a\in I`$ and $`a<b`$ imply $`b\in I`$. The coideal $`\langle x_1,x_2,\mathrm{\dots },x_r\rangle `$ generated by the elements $`x_1,x_2,\mathrm{\dots },x_r`$ consists of the elements of $`\mathrm{\Lambda }`$ given by $$\{\lambda \in \mathrm{\Lambda }:x_i\le \lambda \text{ for some }1\le i\le r\}.$$ ###### Definition 2.1.2 We say that the cell datum $`(\mathrm{\Lambda },M,C,\ast )`$ for an algebra $`A`$ is of profinite type if $`\mathrm{\Lambda }`$ is infinite and if for each $`a\in \mathrm{\Lambda }`$, the coideal $`\langle a\rangle `$ is a finite set. The coideals of the poset $`\mathrm{\Lambda }`$ will be used to define cellular quotients of the infinite dimensional algebra $`A`$. We will then make an inverse system using these quotients. To achieve this, we make the following definitions. ###### Definition 2.1.3 Let $`\mathrm{\Lambda }`$ be the poset of a cell datum of profinite type.
We denote the set of finite coideals of $`\mathrm{\Lambda }`$, ordered by inclusion, by $`\mathrm{\Pi }`$. If $`P\in \mathrm{\Pi }`$, we write $`A_P`$ for the free $`R`$-module with basis parametrised by the set $$\{C_P(S,T):S,T\in M(\lambda ),\lambda \in P\}.$$ We write $`I_P`$ for the $`R`$-submodule of $`A`$ spanned by all elements $`C(S,T)`$ where $`S,T\in M(\lambda )`$ for $`\lambda \notin P`$. ###### Proposition 2.1.4 Let $`P\in \mathrm{\Pi }`$ as above. Then $`A_P`$ is a homomorphic image $`\psi _P(A)`$ of $`A`$ corresponding to the quotient of $`A`$ by the ideal $`I_P`$. The algebra $`A_P`$ is a finite dimensional cellular algebra which inherits a cell datum from $`A`$ by restriction of the poset $`\mathrm{\Lambda }`$ to the set $`P`$. ###### Demonstration Proof This follows from the defining axioms for a cellular algebra. ∎ ###### Definition 2.1.5 Maintain the above notation. Let $`P_1,P_2\in \mathrm{\Pi }`$ and suppose $`P_1\supseteq P_2`$. We define the $`R`$-linear map $`\psi _{P_1,P_2}:A_{P_1}\to A_{P_2}`$ via the conditions $$\psi _{P_1,P_2}(C_{P_1}(S,T))=\{\begin{array}{cc}C_{P_2}(S,T)\hfill & \text{ if }S,T\in M(\lambda )\text{ where }\lambda \in P_2,\hfill \\ 0\hfill & \text{ otherwise.}\hfill \end{array}$$ ###### Lemma 2.1.6 The maps $`\psi `$ of Definition 2.1.5 are surjective algebra homomorphisms. ###### Demonstration Proof This follows from the fact that $`\psi _{P_1,P_2}\circ \psi _{P_1}=\psi _{P_2}`$. ∎ We now use the maps $`\psi `$ to define an inverse limit of cellular algebras. This idea has its roots in \[10, §6.4\]. ###### Definition 2.1.7 Let $`A`$ be a cellular algebra with a cell datum of profinite type. Consider the inverse system whose elements are the sets $`A_P`$ for $`P\in \mathrm{\Pi }`$ and whose homomorphisms are given by $$\psi _{P_1,P_2}:A_{P_1}\to A_{P_2}$$ whenever $`P_1\supseteq P_2`$. Denote the associated inverse limit by $`\widehat{A}`$. We call an algebra $`\widehat{A}`$ arising in this way a procellular algebra. ###### Demonstration Remark The terminology “procellular” is by analogy with profinite groups, which are inverse limits of finite groups. ### 2.2 Properties of procellular algebras An element of a procellular algebra $`\widehat{A}`$ can be regarded as a certain infinite combination of the basis elements $`C(S,T)`$ of $`A`$, as follows. ###### Proposition 2.2.1 There is a canonical bijection between elements of $`\widehat{A}`$ and formal infinite sums $$\underset{(T,T^{\prime })}{\sum }a(T,T^{\prime })C(T,T^{\prime }),$$ as $`(T,T^{\prime })`$ ranges over the domain of $`C`$. The element of $`\widehat{A}`$ corresponding to this sum projects in the inverse system to the element of $`A_P`$ given by $$\underset{(T,T^{\prime })}{\sum }a(T,T^{\prime })\psi _P(C(T,T^{\prime })).$$ ###### Demonstration Remark The properties of the map $`\psi _P`$ (see Proposition 2.1.4) mean that the second sum appearing above is finite. ###### Demonstration Proof By arguments familiar from §2.1, we see that the elements of $`A_P`$ as defined above give (as $`P`$ varies over $`\mathrm{\Pi }`$) an element of $`\widehat{A}`$. It is clear that this correspondence is an injection. It remains to show that every element of $`\widehat{A}`$ is of this form. Consider an element $`C(T,T^{\prime })\in A`$, where $`T,T^{\prime }\in M(\lambda )`$. Note that $`\langle \lambda \rangle \in \mathrm{\Pi }`$. Define $`a(T,T^{\prime })`$ to be the coefficient of $`\psi _{\langle \lambda \rangle }(C(T,T^{\prime }))`$ in $`A_{\langle \lambda \rangle }`$. Then for any $`P`$ such that $`\lambda \in P`$, the coefficient of $`\psi _P(C(T,T^{\prime }))`$ in $`A_P`$ must also be $`a(T,T^{\prime })`$, by consideration of $`\psi _{P,\langle \lambda \rangle }`$.
It follows that $`\widehat{A}`$ is determined solely by the coefficients $`a(T,T^{\prime })`$, which proves the assertion. ∎ ###### Lemma 2.2.2 The procellular algebra $`\widehat{A}`$ admits an algebra anti-automorphism $`\widehat{\ast }`$ of order 2 which arises from the anti-automorphisms $`\ast _P`$ on the cellular algebras $`A_P`$. ###### Demonstration Proof This is immediate from the construction of the inverse system. ∎ ###### Definition 2.2.3 Let $`P\in \mathrm{\Pi }`$. We define $`\widehat{I}_P`$ to be the ideal of $`\widehat{A}`$ given by the infinite sums $$\underset{(T,T^{\prime })}{\sum }a(T,T^{\prime })C(T,T^{\prime })$$ where $`a(T,T^{\prime })\ne 0`$ only if $`T,T^{\prime }\in M(\lambda )`$ for some $`\lambda \notin P`$. The procellular algebra $`\widehat{A}`$ is equipped with a natural topology arising from the poset $`\mathrm{\Lambda }`$. It will turn out that $`\widehat{A}`$ is in fact the completion of $`A`$ with respect to this topology. ###### Proposition 2.2.4 Let the set $`\{\widehat{I}_P:P\in \mathrm{\Pi }\}`$ be a base of neighbourhoods of $`0\in \widehat{A}`$. This gives $`\widehat{A}`$ the structure of a Hausdorff, complete topological ring in which the operation $`\widehat{\ast }`$ is a homeomorphism. ###### Demonstration Remark By a topological ring, we mean a ring in which addition, negation and multiplication are continuous operations, so that in particular, addition is a homeomorphism. The topology may be specified by giving a base of neighbourhoods for $`0`$, meaning that any open neighbourhood of $`0`$ contains one of the neighbourhoods in the base. The open sets are those arising from the structure of the ring as a topological abelian group under addition. ###### Demonstration Proof The assertion that $`\widehat{A}`$ acquires the structure of a topological ring is clear, because the neighbourhoods in the base are ideals. We show that the intersection of the ideals $`\widehat{I}_P`$ is trivial. Let $`0\ne x\in \bigcap _{P\in \mathrm{\Pi }}\widehat{I}_P`$, and suppose $`C(T,T^{\prime })`$ appears with nonzero coefficient $`a(T,T^{\prime })`$ in the infinite sum expansion of $`x`$. If $`T,T^{\prime }\in M(\lambda )`$, then $`\psi _{\langle \lambda \rangle }(x)\ne 0`$, and therefore $`x\notin \widehat{I}_{\langle \lambda \rangle }`$, which is a contradiction. It follows that $`\widehat{I}_{\langle \lambda \rangle }`$ and $`x+\widehat{I}_{\langle \lambda \rangle }`$ are disjoint open sets which contain the points $`0`$ and $`x`$ respectively. We deduce that the topology is Hausdorff. Completeness can be checked from the infinite sum realisation of $`\widehat{A}`$. Finally we observe that $`\widehat{\ast }`$ is of order 2, is an algebra homomorphism, and fixes the ideals $`\widehat{I}_P`$ setwise. This proves the last assertion. ∎ ###### Corollary 2.2.5 The algebra $`A`$ embeds canonically in $`\widehat{A}`$ by considering the finite sums of $`C(T,T^{\prime })`$ as special cases of the infinite sums of Proposition 2.2.1. Furthermore, $`\widehat{A}`$ is the completion of $`A`$ with respect to the topology defined above. ###### Demonstration Proof It is clear that the embedding works as stated. We also observe that any infinite sum can be approximated arbitrarily closely by a finite sum in the given topology, because all sets $`P\in \mathrm{\Pi }`$ are finite. It follows that $`A`$ embeds densely and thus that $`\widehat{A}`$ is the completion of $`A`$. ∎ ### 2.3 Representations of procellular algebras As might be expected from the theory of topological groups, we will concentrate on the “smooth” representations of the procellular algebras. We are also only interested in finite dimensional representations. We define the smooth representations as follows, based on \[1, Definition 1.1\].
###### Definition 2.3.1 A representation $`\rho `$ of $`\widehat{A}`$ on a space $`V`$ is smooth if the annihilator of every vector of $`V`$ is open. ###### Lemma 2.3.2 Let $`\rho `$ be a finite dimensional representation of $`\widehat{A}`$ on a space $`V`$. Then $`\rho `$ is smooth if and only if $`\mathrm{ker}\rho `$ is open. ###### Demonstration Proof Suppose $`\rho `$ is smooth. Then $`\mathrm{ker}\rho `$ is the intersection of a finite number of annihilators of vectors $`v`$ and is therefore open. Conversely, assume $`\mathrm{ker}\rho `$ is open, and pick $`v\in V`$. Then the annihilator of $`v`$ is a union of cosets of $`\mathrm{ker}\rho `$, and is therefore open. ∎ We will also refer to “smooth modules”, with the obvious meaning. ###### Definition 2.3.3 Let $`\lambda \in \mathrm{\Lambda }`$. The cell module $`\widehat{W}(\lambda )`$ for $`\widehat{A}`$ is the left module obtained by considering the cell module $`W(\lambda )`$ for $`A_{\langle \lambda \rangle }`$ as a module for $`\widehat{A}`$ via the homomorphism $`\psi _{\langle \lambda \rangle }`$. The module $`\widehat{W}(\lambda )`$ inherits the bilinear form $`\varphi _\lambda `$ from $`W(\lambda )`$. We define $`\widehat{L}(\lambda )`$ as the $`\widehat{A}`$-module corresponding to the $`A_{\langle \lambda \rangle }`$-module $$L(\lambda )=W(\lambda )/\text{rad}(\lambda ).$$ The importance of the modules $`\widehat{L}(\lambda )`$ is demonstrated by the following theorem (compare with Theorem 1.1.5). ###### Theorem 2.3.4 Let $`A`$ be a cellular algebra with a datum of profinite type, and let $`\widehat{A}`$ be the corresponding procellular algebra. Define $`\mathrm{\Lambda }_0:=\{\lambda \in \mathrm{\Lambda }:\varphi _\lambda \ne 0\}`$. Then the set $`\{\widehat{L}(\lambda ):\lambda \in \mathrm{\Lambda }_0\}`$ is a complete set of equivalence classes of absolutely irreducible smooth $`\widehat{A}`$-modules. ###### Demonstration Proof It is clear from the properties of $`L(\lambda )`$ that the module $`\widehat{L}(\lambda )`$ is absolutely irreducible for each $`\lambda `$. To check the smoothness property, we observe that $`\widehat{I}_{\langle \lambda \rangle }`$ is an open set which lies in the kernel of the representation corresponding to $`\widehat{W}(\lambda )`$. The kernel of $`\widehat{L}(\lambda )`$ is a union of cosets of this open set and is therefore open. Thus by Lemma 2.3.2, the module is smooth. We see that two modules $`\widehat{L}(\lambda )`$ and $`\widehat{L}(\mu )`$ are nonisomorphic by considering them as modules for the finite dimensional cellular algebra $`A_{\langle \lambda ,\mu \rangle }`$. It remains to prove that the given modules exhaust the set of absolutely irreducible smooth $`\widehat{A}`$-modules. Let $`L`$ be such a module. Then the kernel of $`L`$ is open, so $`\widehat{I}_P`$ annihilates $`L`$ for some $`P\in \mathrm{\Pi }`$. Thus $`L`$ is an absolutely irreducible $`A_P`$-module in a canonical way. Proposition 2.1.4 shows that $`A_P`$ is naturally a finite dimensional cellular algebra, which means by Theorem 1.1.5 that $`L`$ is the simple module $`L(\mu )`$ for some $`\mu \in P`$. The definition of the modules $`\widehat{L}(\lambda )`$ means that $`L`$, regarded as an $`\widehat{A}`$-module, is nothing other than $`\widehat{L}(\mu )`$. ∎ ## 3. Examples It is now clear that the power series counterexample in Proposition 1.2.2 is compatible with Theorem 2.3.4. Any smooth module $`L`$ for the procellular algebra $`\widehat{R[x]}=R[[x]]`$ must satisfy the property that $`x^n\cdot L=0`$ for sufficiently large $`n`$, meaning that $`x\cdot L=0`$.
Thus the only smooth module for the procellular algebra corresponds to the element $`0\in \mathrm{\Lambda }_0`$. This is a trivial example, but there are also interesting examples of procellular algebras, as we explain in the next section. ### 3.1 The algebras $`\dot{U}`$ In \[13, §29\], Lusztig essentially proves that the algebras $`\dot{U}`$, which are associated with Dynkin diagrams of various types, are cellular. (This result is referred to as the Peter–Weyl theorem.) The analogues of the finite dimensional algebras $`A_P`$ are also considered. The algebras $`\dot{U}`$ are equipped with canonical bases, $`\dot{B}`$, which have many beautiful properties which are developed in . We concentrate in this section on the case where $`\dot{U}`$ corresponds to a Dynkin diagram of type $`A`$, because this has interesting connections with the $`q`$-Schur algebra, and because it is better understood. Since we are only introducing these algebras for illustrative purposes, we shall not go into the full details. The interested reader is referred to the literature for more information. The $`q`$-Schur algebra $`S_q(n,r)`$, which first appeared in , is a finite dimensional algebra over a base ring containing an indeterminate $`q`$ and its square root $`q^{1/2}`$. This algebra was proved by the author to be cellular with respect to a Murphy-type basis \[10, Proposition 6.2.1\]. Du introduced a canonical basis $`\{\theta _{S,T}\}`$ for this algebra, and the $`q`$-Schur algebra is also cellular with respect to the $`\theta `$-basis. ###### Proposition 3.1.1 The $`q`$-Schur algebra $`S_q(n,r)`$ has a basis $`\{\theta _{S,T}\}`$, where $`S,T`$ are semistandard tableaux of the same shape with $`r`$ boxes each and entries from the integers $`\{1,2,\mathrm{\dots },n\}`$. This gives the $`q`$-Schur algebra the structure of a cellular algebra, where $`\mathrm{\Lambda }=\mathrm{\Lambda }^+(n,r)`$ ordered by the dominance order (as in \[9, §2\]), $`M(\lambda )`$ is the set of semistandard tableaux of shape $`\lambda `$ (as above), $`C`$ is the map $`C(S,T)=\theta _{S,T}`$ and $`\ast `$ is the anti-automorphism sending $`\theta _{S,T}`$ to $`\theta _{T,S}`$. ###### Demonstration Remark Recall that “semistandard” means that the entries increase weakly along the rows, and strictly down the columns of the tableau. This result is a generalisation of the example of the Hecke algebra of type $`A`$ appearing in \[8, Example 1.2\]. ###### Demonstration Proof Apart from the assertion involving $`\ast `$, this is simply \[6, Theorem 5.3.3\]. It follows from the definition of the $`\theta `$-basis given in that there is an anti-automorphism sending $`\theta _{\lambda ,\mu }^w`$ to $`\theta _{\mu ,\lambda }^{w^{-1}}`$, where $`\lambda ,\mu `$ are compositions of $`r`$ into at most $`n`$ pieces, and $`w`$ is a certain double coset representative in the symmetric group $`S_r`$. The assertion about $`\ast `$ comes from properties of the Robinson–Schensted correspondence used in \[6, §5.2\]. It is a well-known property of this correspondence that inversion of the group element corresponds to exchange of the pair of tableaux. In the notation of semistandard tableaux, the anti-automorphism of the previous paragraph is nothing other than $`\ast `$. ∎ The terminology “canonical” for this basis is justified, as the following result shows. ###### Proposition 3.1.2 (Du) For each $`r`$, there is an epimorphism of algebras $$\pi _r:\dot{U}(sl_n)\to S_q(n,r).$$ This takes elements of the basis $`\dot{B}`$ to zero or to canonical basis elements $`\theta _{S,T}`$.
For each basis element $`b\in \dot{B}`$, there exists an $`r`$ such that $`\pi _r(b)\ne 0`$. ###### Demonstration Proof This follows from \[5, Theorem 3.5\] and the remarks in \[4, §5.5\]. ∎ ###### Proposition 3.1.3 Let $`\mathrm{\Lambda }`$ be the set of weights associated to finite dimensional simple modules for the Lie algebra $`sl_n`$ over the complex numbers, ordered by the usual dominance order, where $`\lambda <\mu `$ means $`\lambda `$ dominates $`\mu `$. Let $`M(\lambda )`$ be the set of semistandard tableaux with $`l_i`$ boxes in the $`i`$-th row, where $`l_i\ge l_j`$ for $`i<j`$, $`l_n=0`$ and $`\lambda _i=l_i-l_{i+1}`$. Let $`C`$ be the map taking a pair $`(S,T)`$ of semistandard tableaux of the same shape to the element $`b\in \dot{B}`$ corresponding to the canonical basis element $`\theta _{S,T}`$ of the $`q`$-Schur algebra $`S_q(n,r)`$, where $`r=\sum _il_i`$ is the total number of boxes. Let $`\ast `$ be the map induced by sending $`\theta _{S,T}`$ to $`\theta _{T,S}`$. Then $`(\mathrm{\Lambda },M,C,\ast )`$ is a cell datum for $`\dot{U}`$, and the datum is of profinite type. ###### Demonstration Note One can check this directly using the theory of quantum groups developed in , but the proof we sketch below is based on inverse systems of $`q`$-Schur algebras. ###### Demonstration Proof The cell structure is essentially inherited from the cellular structure of the algebras $`S_q(n,r)`$ given in Proposition 3.1.1 and the homomorphisms of Proposition 3.1.2. As mentioned in \[4, §5.5\], one can construct an inverse limit of $`q`$-Schur algebras using the epimorphisms $`\psi _{i+jn}`$ from $`S_q(n,i+(j+1)n)`$ to $`S_q(n,i+jn)`$, which are dual to the $`q`$-analogue of the coalgebra injection corresponding to multiplication by the determinant map. The effect of $`\psi `$ on the $`\theta `$-basis is known explicitly \[4, 5.4 (b)\] and takes a $`\theta `$-basis element to another $`\theta `$-basis element or to zero. The properties of cell ideals in cellular algebras now imply that if $`\theta _{S_1,T_1}`$ and $`\theta _{S_2,T_2}`$ are in the same cell (i.e. $`S_1,T_1,S_2,T_2\in M(\lambda )`$ for the same $`\lambda `$), then their images under $`\psi `$ are either both zero, or both nonzero and in the same cell. It now follows that $`\dot{U}`$ does inherit a cellular structure from the inverse system described above. We can thus identify a basis element $`b`$ with its images under the maps $`\psi `$, the “lowest” of which corresponds to a basis element $`\theta `$ whose tableaux have leftmost columns with fewer than $`n`$ entries. The fact that the dominance order is the relevant order can be deduced from properties of the poset order in $`S_q(n,r)`$. Using these facts, we can check that the assertions in the statement hold. ∎ ###### Demonstration Remark One can check from the Robinson–Schensted correspondence that the effect of $`\psi `$ is as follows. If $`S`$ and $`T`$ have a column of length $`n`$ then $`S^{\prime },T^{\prime }`$ are obtained from $`S,T`$ respectively by removal of the leftmost columns, and $`\theta _{S,T}`$ maps to $`\theta _{S^{\prime },T^{\prime }}`$. If this is not the case then $`\theta _{S,T}`$ is mapped to zero. This phenomenon of column removal was also studied in \[10, §6.4\]. ### 3.2 Further remarks on procellular algebras The cell datum for $`\dot{U}`$ given in §3.1 gives rise to a procellular algebra $`\widehat{U}`$ which has interesting properties. Du \[4, §5.5\] calls the algebra $`\widehat{U}`$ the completion of $`U`$ with respect to $`q`$-Schur algebras.
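Schematically (our paraphrase of the construction just described, not Du’s notation), for each residue class $`i`$ modulo $`n`$ the determinant maps assemble the $`q`$-Schur algebras into a tower of surjections $$\mathrm{}\to S_q(n,i+2n)\to S_q(n,i+n)\to S_q(n,i),$$ and the procellular completion $`\widehat{U}`$ is assembled from the inverse limits of these towers, taken along the maps $`\psi `$, with $`0\le i<n`$.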
The algebra $`\widehat{U}`$ has some convenient properties not shared by $`\dot{U}`$, such as the existence of a multiplicative identity. The algebra $`\dot{U}`$ has a positivity property for its structure constants with respect to $`\dot{B}`$, which can be easily seen (by the results in §3.1) to be equivalent to the positivity of the structure constants for the $`\theta `$-basis. The positivity property for the $`\theta `$-basis was proved by the author in . In both cases, the structure constants have interpretations in the geometrical framework of perverse sheaves. The completion $`\widehat{U}(sl_n)`$, as well as containing the algebra $`\dot{U}(sl_n)`$, contains the quantized enveloping algebra $`U(sl_n)`$ in the sense of Drinfel’d and Jimbo. This was shown in \[10, Theorem 6.4.19\]. Several more of the main results concerning cellular algebras go over to procellular algebras essentially verbatim, so we will not discuss them explicitly here. Another interesting example of a procellular algebra is the completion of the affine Temperley–Lieb algebra. In this case, the smooth modules for the completed algebra correspond to finite dimensional indecomposable modules of the original algebra. Furthermore, the procellular completion depends on a nonzero parameter; altering this parameter produces different families of indecomposable modules. The reader is referred to for further details. ## Acknowledgements The author is grateful to Steffen König and Changchang Xi for conversations and correspondence relevant to this work, and to the referee for helpful suggestions. ## References
1. P. Cartier, Representations of $`p`$-adic groups: a survey, Proc. Sympos. Pure Math. 33 (1979), part 1, 111–155.
2. R. Dipper and G.D. James, The $`q`$-Schur algebra, Proc. L.M.S. 59 (1989), 23–50.
3. J. Du, Kazhdan–Lusztig bases and isomorphism theorems for $`q`$-Schur algebras, Contemp. Math. 139 (1992), 121–140.
4. J. Du, $`q`$-Schur Algebras, Asymptotic Forms, and Quantum $`SL_n`$, J. Alg. 177 (1995), 385–408.
5. J. Du, Global IC Bases for Quantum Linear Groups, J. Pure Appl. Alg. 114 (1996), 25–37.
6. J. Du and H. Rui, Based algebras and standard bases for quasi-hereditary algebras, preprint.
7. K. Erdmann and R.M. Green, On Representations of Affine Temperley–Lieb Algebras, II, Pac. J. Math., to appear.
8. J.J. Graham and G.I. Lehrer, Cellular Algebras, Invent. Math. 123 (1996), 1–34.
9. J.A. Green, Combinatorics and the Schur algebra, J. Pure Appl. Alg. 88 (1993), 89–106.
10. R.M. Green, Ph.D. thesis, University of Warwick, 1995.
11. R.M. Green, Positivity Properties for $`q`$-Schur Algebras, Proc. Cam. Phil. Soc. 122 (1997), 401–414.
12. S. König and C. Xi, On the structure of cellular algebras, to appear in the Proceedings of the 8th International Conference on Representations of Algebras.
13. G. Lusztig, Introduction to Quantum Groups, Birkhäuser, 1993.
# Digital Library Technology for Locating and Accessing Scientific Data #### KEYWORDS: Information Retrieval, Scientific Data, Astronomy Data, Scientific Information Systems ## INTRODUCTION In the last few years we have seen the evolution of the Internet and WWW from a loose connection of information sources toward information-rich digital libraries (e.g., ). In our view, a digital library is an organized way to locate, access, and analyze digital artifacts of many kinds, federating many data services to create a virtual information space. The evolving digital library may play a key role for scientists by providing a unified environment for information discovery and access. In particular, the digital library can go beyond the traditional library and provide direct, immediate location of and access to both literature and data. At NCSA, the Emerge project has been constructing the basic infrastructure required for interoperable searching and for analysis of many kinds of data from many heterogeneous data services distributed about the network. In our current work, we are implementing a prototype to demonstrate the effectiveness of this technology for searching and accessing astronomy data. This prototype illustrates how our flexible architectures can be applied to a collection of existing systems to create an enhanced environment for information discovery. This work has been a collaboration of astronomers and computer scientists at NCSA, NASA, the University of Ulster, and elsewhere. The NCSA Astronomy Digital Image Library (ADIL) has been the key testbed for demonstrating the technology. In this paper, we describe our model and prototype implementation for interoperable search and analysis as applied to scientific digital libraries. Our model places constraints on the standards necessary for meaningful interoperability. In particular, we will show that the Dublin Core alone is insufficient to support the metadata associated with complex scientific data. However, appropriate standards can facilitate richer forms of research tuned to a distributed scientific environment. ### Digital Libraries for Science For scientists, the digital library is particularly important because of the central role of digital data and analysis, and the need for timely and rich information exchange. Today, scientific discovery and communication routinely create and use many kinds of significant digital components: * raw data * analyzed data * imagery * analysis environments * simulations * notes, letters, and reports * published articles These digital artifacts are complex and inter-related; for example, a digitally published article “points” to the data, instrumentation, and software which are described and interpreted by the text. Similarly, the digital representations of simulations and analyzed data such as images are most useful and valid in the context of the published documentation and scientific reports. An archive of scientific data will contain pointers from the data to published articles which explain and validate them; and scientific articles contain pointers to the data which they report. Similarly, theoretical results in the form of computational models are intended to be correlated with relevant observational and experimental data. There are many repositories of scientific data already on line, and each new scientific project almost inevitably produces significantly larger amounts of digital data.
Through the use of the World Wide Web and URLs, scientific information is already becoming a rich web of connected digital information. However, it remains a significant and lasting challenge for humans to exploit this richness, to discover, access, and understand the knowledge that may reside in or be created from digital resources. We believe that these archives should be an integral part of the digital library of the future, bringing together all types of scientific information resources in a single environment. The Emerge project at NCSA is developing practical infrastructure for this new type of digital library. In this vision, a student or researcher could “go to the library” to ask a scientific research question. For example, a researcher could inquire about the climate in Illinois in recent years. Even today, the library would provide pointers to published literature about weather, vegetation, wildlife, and so on, much of which is available on-line. The results should also provide pointers to relevant climate data, satellite imagery, computational models, and resources such as email archives. In most cases, these resources already exist and are available on the Web, but locating and accessing this diverse set of materials would be difficult without the organizing and facilitating role of a new kind of research library. This kind of digital library will not only make routine scientific information finding more efficient, it will enable cross-discipline and synergistic discovery, since the investigator will likely be presented with information from many unexpected sources. ### A Case Study: Astronomy and Space Science Data In recent decades, we have experienced a golden age for the exploration of the universe. New ground- and space-based instruments and powerful computing systems have produced an explosion of astronomy and space science data. This explosion has driven the development of data archives, digital libraries, and other network-based services that make it easier to access research-quality information. The success of such services has created environments within which one can gather knowledge from diverse sources to address new scientific questions. ### The Astronomy Digital Image Library (ADIL) Testbed The NCSA Astronomy Digital Image Library (ADIL) has been the key testbed for demonstrating the technology. The ADIL was developed with support from NASA and the National Science Foundation to address some of the challenges of distributing scientific data over the network. Its specific mission is to collect fully processed astronomical images in FITS format (a standard astronomical image format) and make them available to the research community and the interested public via the World Wide Web. The ADIL allows users to search, browse, and download astronomical images. As we will discuss below, this can be a non-trivial process when the images are not in the usual GIF or JPEG formats. The ADIL is more than a tool for astronomers looking for images to augment their research. It is also a means for authors who wish to share their images with the community. While many of the Library’s images come from observatories, the core of the collection comes from individual authors. The ADIL provides a way to upload the images to the Library, along with any supporting data, where they can be processed and made available to the Library users. Authors deposit images into the Library in the form of collections we refer to as “projects”. 
Normally, an author would make a deposit at the end of some scientific study when the resulting publication is going to press; all the fully processed images associated with that paper would make up the project. In this way, the ADIL is part of the new paradigm for scientific publishing. ## EXTENDING THE WWW MODEL: A ‘CONVERSATION WITH THE DATA’ In the conventional WWW model, which is biased toward small, text-oriented documents, a data location service usually returns a set of URLs pointing to documents which the user must visit–i.e. download to the client–to view and analyze. For scientific data, “search-and-download” is not a practical model because the objects are typically not “documents”, but rather large, complex objects (datasets) stored in formats not supported by standard browsers (such as FITS or HDF). In earlier work, we described the need for a “conversation with the data” which extends the standard Web model. The basic scenario is: 1. search to locate candidate data objects 2. browse and select the objects 3. download selected data for further analysis Scientific archives have adapted to the Web by integrating a browsing stage into the information discovery process (e.g. ). In so-called server-side browsing, the data provider presents a preview of a dataset which might include a GIF or JPEG rendering of the data and a display of some subset of the associated metadata, all packaged as an HTML document. The ADIL is a good example of such a data service. This model can work very well when interacting with a single data provider and a set of datasets that isn’t too large. However, this model becomes quite laborious for the user when trying to interact with more than a few heterogeneous data providers because: 1. a single question must be entered differently into each of the providers’ custom interfaces 2. each HTML-formatted query response and associated browsing documents must be visited for visual interpretation in a series of individual, stateless requests, e.g., a list of links to URLs. 3. browsing of the data items is limited to what is provided by each data provider’s interface. Also, some kinds of user interaction are difficult to implement with server-side browsing. For instance, drawing a bounding box or dynamically fiddling with color maps is difficult to implement well on a server. To address these issues we extend the “search-browse-and-download” model by adding: 1. stateful communication with a data provider, 2. support for standard query profiles, 3. support for a standard record format appropriate for scientific data. The result is a more fluid, automated, and efficient interaction with multiple data providers. Users can interact with data from one provider while queries to other providers are being processed. The browsing can occur with different levels of detail. Perhaps the most powerful feature is that clients can take greater control of the browsing by plugging in specialized visualizers for quick plotting or manipulation of the results. ## INFRASTRUCTURE FOR INFORMATION DISCOVERY FOR SCIENTIFIC RESEARCH Information discovery is increasingly the most critical component of scientific research. As scientists work to solve problems, they need multi-modal access to geographically-distributed collections of large and highly structured data sets. Discovering which data sets are potentially relevant to a particular problem involves more or less elaborate characterizations of the data in terms of domain-specific attributes. 
Furthermore, examining candidate data sets to locate the most relevant ones involves highly specialized interactions with the data. Yet this diversity must not come at the cost of interoperability. For science to progress, it is crucial that scientists be able to locate information from many different scientific domains when attempting to solve a problem in their own domain. Cross-disciplinary researchers should not be burdened with a different set of information discovery software tools for each discipline they work in, especially where those software tools perform essentially the same functions. Also, data services should provide data not only to individual end users but also to other services which add value to them. Even within a single discipline, it may be necessary to query many data repositories in order to locate all the data relevant to a scientific question. For instance, the NASA Space Science Data System Technical Working Group reports a real scenario based on the investigation of sulfur ($`S_2`$) on comets. This investigation turned out to require data from multiple sources, including several spacecraft, ground-based telescopes, the Hubble Space Telescope, and published (and unpublished) scientific literature. The data were retrieved from many different sources, in widely different formats. (, Appendix 2) A distributed information discovery infrastructure should be built which emphasizes standard search protocols, file formats, and general-purpose tools. It should be designed in such a way that profiles, formats, and browsers specific to a particular domain can be easily plugged into the infrastructure and shared between data providers and consumers. In a sense, the WWW already provides a semblance of such an infrastructure. HTTP supports a variety of file formats, and forms-based CGI services can be used to implement search tools which return views of information with additional forms controls for manipulating the view. However, this mode of using the WWW does little to advance a standard query syntax or means of defining metadata schemas and profiles. Also, it fails to separate the user interface for information retrieval (the HTML forms) from the delivery of information itself (the metadata in the page). Search results returned as part of an HTML page are not standardized and cannot easily be compared to similar results from a different service. ### NCSA Emerge: Practical Infrastructure For Information Discovery The NCSA Emerge Project is addressing these issues, with the goal of designing infrastructure to create unified information discovery across heterogeneous databases and of developing free software based on standards (Z39.50, XML). The Emerge software includes: * the Gazelle gateway, which adds Z39.50 to a database (using Z39.50 is recommended but not required) * the Gazebo search gateway, which manages searches across multiple heterogeneous databases * a Java client toolkit, which communicates with the Gazebo gateway, creating queries and presenting results. This infrastructure is being developed for several applications, including engineering literature and medical research databases, as well as astronomy. 
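To make the division of labour concrete, the following is a minimal sketch of how a Gazebo-style gateway can fan a single abstract query out to heterogeneous back ends and normalize the results. It is written in Python with invented names (`DataSource`, `federated_search`, the query triples); the actual Emerge components are Java-based and speak Z39.50, so this is an illustration of the architecture, not the real API.

```python
# Hypothetical sketch of Gazebo-style query fan-out.  Each back end
# registers a translator from an abstract query to its native syntax and
# a normalizer from its native records to a common record dictionary.

from concurrent.futures import ThreadPoolExecutor

class DataSource:
    def __init__(self, name, translate, search, normalize):
        self.name = name            # e.g. "ADIL"
        self.translate = translate  # abstract query -> native query string
        self.search = search        # native query -> iterable of raw records
        self.normalize = normalize  # raw record -> common dict

    def run(self, abstract_query):
        native = self.translate(abstract_query)
        return [self.normalize(r) for r in self.search(native)]

def federated_search(sources, abstract_query):
    """Send one abstract query to every registered source in parallel."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(s.run, abstract_query): s for s in sources}
        return {futures[f].name: f.result() for f in futures}

# An abstract query expressed as (attribute, relation, value) triples,
# loosely in the spirit of Z39.50 type-1 queries:
query = [("object", "=", "M31"), ("wavelength", "<", "1mm")]
```

The point of the sketch is the separation of concerns: the client speaks one abstract language, and all source-specific translation lives behind the gateway.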
### Profiles In order to build a distributed search infrastructure which serves the scientific community, we see the following requirements: * Profiles which are extensible to particular domains * Protocols for remote access to data collections * Query syntax and semantics for searching data collections * File formats for data sets or subsets (e.g., XML document types) * Flexible record formats for metadata describing data items (e.g., XML document types) In order to search across such a diversity of sources with a single query, there must be some sort of “common denominator”, a common set of search terms with shared semantics. The Dublin Core and related W3C efforts provide this kind of standard for many kinds of “document-like objects”. However, scientific data require metadata considerably beyond the Dublin Core. For example, in addition to the Dublin Core categories, the ADIL supports searches by: * Sky position (e.g., in galactic coordinates) * Astronomical Object (e.g., M31) * Type of object (e.g., galaxy) * Wavelength Still other types of metadata are needed for archives of planetary data, such as orbital positions and descriptions of atmospheres and clouds. While each discipline–and maybe even each project and instrument–may require some unique metadata, we believe there is enough common ground to establish standard profiles for broad classes of scientific data, just as the Dublin Core has done for documents. (See our proposals for astronomy metadata in .) There must also be standards for the format of the results of queries for data: a structured record that describes the kinds of objects returned as data, such as images, tables, and datasets in various formats such as FITS. USMARC records have served this role for many years for bibliographic material, but new, more flexible record formats are needed to support scientific data. The W3C RDF and schema initiative provides a sound framework for expressing such records, but it is critical for communities to work to establish the appropriate standards. ### AML: Astronomical Markup Language, a metadata standard for astronomy The Astronomical Markup Language (AML) addresses the need for standardized metadata for astronomy data. AML is an XML language describing various kinds of data useful in astronomy, and is aimed at being an exchange format for astronomical data, and especially metadata, over the Internet. AML is both a proposed profile standard and a prototype implementation. Results of a search can be formatted as an AML document, that is, as an XML document containing a description of the resource using the AML DTD. The AML document can be processed by a program or presented by a browser. Guillaume has created a Java applet to browse AML documents as easily as one would browse HTML documents, but with some additional features specific to astronomical data. For example, the AML applet displays astronomical coordinates, and displays measurements with the relevant units and uncertainties. The use of AML is an improvement for both the information providers and the users (who are astronomers). For the information providers, XML separates the data from the user interface, so that different data can be used with different user interfaces without any difficulty. A small institute could also focus on the information, and let other institutes provide user interfaces. For the users, the use of the AML browser provides a uniform and unified way to access various data coming from different servers. 
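As an illustration of machine processing of such records, here is a sketch in Python (standard library only) that parses an AML-like record. The element names, attributes, and URL below are invented for the example; the real AML DTD, defined in the cited AML work, specifies seven object types with their own structure.

```python
# Parsing a hypothetical AML-like record with xml.etree.  Element and
# attribute names are illustrative only, not the actual AML DTD.

import xml.etree.ElementTree as ET

record = """
<aml>
  <image>
    <title>HI map of M81</title>
    <coordinates system="J2000" ra="148.888" dec="69.065"/>
    <wavelength unit="cm" value="21"/>
    <link type="data" format="image/fits"
          href="http://adil.example.edu/data/m81.fits"/>
  </image>
</aml>
"""

root = ET.fromstring(record)
for image in root.findall("image"):
    title = image.findtext("title")
    coords = image.find("coordinates").attrib
    data_links = [l.get("href") for l in image.findall("link")
                  if l.get("type") == "data"]
    print(title, coords["ra"], coords["dec"], data_links)
```

Because the record is structured data rather than presentation markup, a program can extract coordinates, units, and data links without any screen-scraping, which is exactly the advantage over HTML described above.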
Finally, users wanting to get and process the data automatically can use the AML documents directly, as the AML browser applet does. XML is much more useful for this purpose than HTML, because HTML documents contain a mix of information about both the user interface and the data. The AML language is organized as seven types of objects. An AML document is a collection of AML objects, describing different types of information. AML objects may contain links to other AML objects, and to external objects such as data, images, or documents. The AML objects are summarized in Table 1. The AML language can be easily extended, for example by adding a “Set of images” object. AML records are designed to allow programs to automatically process and analyze the metadata. Guillaume has demonstrated techniques for automatically clustering astronomical information sources, e.g., applying a graph partitioning algorithm to the keywords and links in AML records. One outstanding feature of this work was that information from diverse sources was successfully correlated, because the AML records are standardized. It is easy to imagine how this work could be extended to support filtering for specific users and selective dissemination of information. ## INTERACTION WITH THE DATA The search system described above locates information sources, and returns AML records to describe them. From the AML records, the user identifies data that appear to be of interest. There may in fact be a large number of large datasets, so it is important for the user to select subsets and subsamples from the data. For instance, it may be the case that only one region or time period is required, or only certain measurements are relevant. Sometimes the metadata itself is not sufficient to make the selection, in which case the user needs to browse the data itself. A low-resolution “thumbnail” image may be viewed, and the user may ask to pan and zoom around the image, to examine in detail areas of possible interest. Regions of an image may be selected with a bounding box, and data from tables may be examined, from which particular columns (fields) and rows may be selected. It may be useful to make simple histograms or other plots, to identify characteristics of the data, and it may be useful to manipulate the color tables or other aspects of the display to highlight features of the imagery. The dataset may contain tables of data, or other multidimensional data structures, all of which need to be efficiently navigated in a similar fashion. When the precise data of interest is identified, the user then requests subsets and subsamples to be downloaded for detailed analysis. At this point, the data will be input to data analysis programs or simulations. These may range from simple graphing and spreadsheets, up to complex, multi-supercomputer environments. In any case, the results may ultimately be published, adding new documents and datasets to the library. In the case of the ADIL, the FITS data might be filtered, combined with data and models, and visualized. This might be done using AIPS++ or a similar package. The results of the analysis would be saved as one or more FITS images, which might be entered in the ADIL when the study is published in a journal. Because of the complexity of scientific data, even the retrieval step itself can sometimes involve fairly complex calculations above and beyond the boolean matching typical of bibliographic data; e.g., applying a pattern recognition algorithm to a database of images. 
Furthermore, data may need to be processed and formatted even before it is browsed. Finally, the scientific investigation may involve analysis of multiple data sets to produce a composite data product distilled from diverse data from several sources. Today, these types of activities are carried out routinely using a heterogeneous, ad-hoc assortment of applications, typically specialized applications requiring access to data on local disks. In the future, this will increasingly be done using “workbenches” (i.e., specialized Web portals such as ) and ubiquitous computational GRIDs. ## A PROTOTYPE IMPLEMENTATION Over the past few years, NCSA Project 30 has been constructing a prototype which provides a sophisticated “conversation with the data” for astronomy data. Our prototype uses NCSA Emerge and AML as the basis of a system to locate, browse, and retrieve astronomy data from the NCSA Astronomy Digital Image Library and other data services. The data sources are already available through standard Web interfaces which return HTML. We have added the ability to use Z39.50 to query them, installing the Gazelle Z39.50 gateway on the data server if needed. The Gazebo GUI implements a query construction interface, which presents one or more profiles, i.e., standard sets of query terms and meanings. The client configuration is loaded from the Gazebo gateway, so the same client can have many “views” of the information space. The current prototype implements both a “simple” query interface (a single list of keywords), and an “advanced” interface (a graphical interface to construct a boolean expression). The prototype supports a general-purpose profile for bibliographic searching, and a specialized astronomy profile. The results of the query are returned as AML records, as well as HTML. Creating AML (XML) records is usually a straightforward extension of the existing code that generates HTML. The Gazebo GUI sends queries, encoded in the XML-based Gazebo abstract search language, to the Gazebo gateway to be executed on a set of target data sources. Gazebo translates each query into the native query syntax of each target data source, and executes it remotely using the native search protocol of each target data source. This behavior is highly configurable. Requesting result records is handled similarly; Gazebo translates the GUI’s requests for results into the target data sources’ protocols. The result records returned by typical data sources are more or less structured data. Gazebo can return them unmodified or it can process them through external CGI scripts which can translate them from arbitrary file formats into any MIME type. This is useful, for instance, for providing HTML views of records in non-text formats. The records are passed from the gateway to the GUI, which displays them appropriately. The GUI displays the number of records returned by each server, and retrieves and shows the short records as requested. When a full record is requested, the GUI retrieves the record and launches an appropriate applet to display it. If the record is HTML, it is displayed with the ICE HTML viewer. When the record is AML, the AML browser is invoked to display it. The AML record may contain pointers to abstracts and/or datasets. The user may follow these links to view the actual data. The abstract will be viewed with the HTML viewer. When a FITS dataset is selected, the Horizon Image Browser will be launched to browse the data, and download it if desired. 
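For concreteness, here is a rough Python sketch of the bounding-box subsetting step described above. The prototype itself performs this with the (Java) Horizon Image Browser; this version instead assumes the astropy package is available, and the file name is a placeholder.

```python
# Extract a rectangular cutout from a FITS image.  A sketch only; the
# prototype's actual browsing code is Java (Horizon Image Browser).

from astropy.io import fits

def cutout(path, x0, x1, y0, y1, outpath):
    """Write the region [x0:x1, y0:y1] of the first HDU to a new file."""
    with fits.open(path) as hdul:
        data = hdul[0].data              # array indexed as [y, x]
        header = hdul[0].header.copy()
        sub = data[y0:y1, x0:x1]
        # Keep the world coordinate system consistent with the cutout by
        # shifting the reference pixel (CRPIX is 1-indexed in FITS).
        if "CRPIX1" in header:
            header["CRPIX1"] -= x0
        if "CRPIX2" in header:
            header["CRPIX2"] -= y0
        fits.PrimaryHDU(data=sub, header=header).writeto(
            outpath, overwrite=True)

cutout("m81.fits", 100, 356, 100, 356, "m81_cut.fits")   # placeholder names
```

Shifting the reference pixel rather than rewriting the full WCS is the usual minimal bookkeeping that keeps a cutout scientifically usable downstream.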
## CONCLUSION AND RELATION TO OTHER WORK We have constructed a complete environment for locating astronomy information, for examining and browsing metadata, and for browsing and accessing both the text and the data. Our system is unique in that we support both text and data, using a general, standards-based protocol. We have defined new protocols for describing astronomy data, and created a much more complex “conversation” than most systems can support. The flexible configuration and interoperable standards we use make it comparatively easy to add databases. It is important to reiterate that the Emerge software is extremely flexible, and is used for several application communities. The astronomy-specific features are replaceable modules, so the system can be customized for different user communities. The Gazebo gateway superficially resembles many conventional Web gateways and portals. However, we use Z39.50 to distribute the queries and AML to return the results. These standards assure much greater interoperability than Web CGI and HTML. Z39.50 has been widely used by libraries for many years, and there are many efforts to federate Z39.50 services, such as the CIC Virtual Electronic Library. There is also a well-established effort to standardize metadata for bibliographic resources, e.g., the Dublin Core. Our work is important because it shows that Z39.50 can be used with scientific data. Our protocol development and the AML extend the principles of the Dublin Core to a significant body of scientific data. The AML uses standard XML, but is not directly related to the still evolving W3C metadata efforts. As RDF standards become established, AML can presumably be aligned with them. For instance, the XML RDF schema and the XML-Data proposal are likely to be important, and the AML will follow these standards as they become established. The Gazebo gateway and GUI implement a Query protocol using XML. The W3C is currently in the early stages of defining a standard for representing queries in XML. When this standard matures, Gazebo will support it appropriately. ## ACKNOWLEDGEMENTS This work was partly funded by the NASA Office of Space Science, the National Science Foundation, and the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign. NCSA is funded in part by the National Science Foundation, the Advanced Research Projects Agency, corporate partners and the state and University of Illinois. Earlier parts of this work were funded by Project Horizon, a NASA cooperative agreement. Some parts of this work were funded by the NSF/DARPA/NASA Digital Library Initiative, and by the National Cancer Institute.
no-problem/9902/astro-ph9902141.html
ar5iv
text
# 1 Introduction ## 1 Introduction Recent observational breakthroughs have made possible the measurement of the Star Formation Rate (SFR) history of the universe from rest–frame UV fluxes of moderate– and high–redshift galaxies (Lilly et al. 1996, Madau et al. 1996, 1998). The strong peak observed at $`z\simeq 1.5`$ seems to be correlated with the decrease of the cold–gas comoving density in damped Lyman–$`\alpha `$ systems between $`z=2`$ and $`z=0`$ (Lanzetta et al. 1995, Storrie–Lombardi et al. 1996). These results nicely fit in a view where star formation in bursts triggered by interaction/merging consumes and enriches the gas content of galaxies as time goes on. Such a view is qualitatively predicted within the paradigm of hierarchical growth of structures in which galaxy formation is a continuous process (see e.g. Baugh et al. 1998). However, these observational data come from optical surveys that probe the rest–frame UV and visible emission of high–$`z`$ galaxies. In the early universe, what fraction of star/galaxy formation was hidden by dust that absorbs UV/visible starlight and thermally re–radiates at longer wavelengths? In the local universe (and thanks to IRAS and ISO observations), we know that about 30 % of the bolometric luminosity of galaxies is radiated in the IR (Soifer & Neugebauer 1991), and that a large fraction of dust heating is due to young stellar populations (Genzel et al. 1998). Now, IR/submm observations are beginning to unveil what actually happened at higher redshift. We might have kept so far the prejudice that high–redshift galaxies have little extinction, simply because their heavy–element abundances are low (typically 1/100 to 1/10 of solar at $`z>2`$). However, low abundances do not necessarily mean low extinction. For instance, if we assume that dust grains have a size distribution similar to that of our Galaxy ($`n(a)da\propto a^{-3.5}da`$ with $`a_{min}\le a\le a_{max}`$), and are homogeneously distributed in a region with radius $`R`$, the optical depth varies as $`\tau \propto a_{min}^{-0.5}R`$ while the total dust mass varies as $`M_{dust}\propto a_{max}^{0.5}R^3`$. For given dust mass and size distribution, there is more extinction where grains are small, and close to the heating sources. This is probably the reason why Thuan et al. (1998) observed significant dust emission in the extremely metal–poor galaxy SBS0335-052. In this context, we hereafter briefly review the attempts to correct the UV fluxes emitted by high–redshift galaxies for the effect of extinction, as well as recent measurements of the “Cosmic Infrared Background” (hereafter CIRB), and deep surveys at FIR/submm wavelengths. These observations strongly suggest that a significant fraction of the young stellar populations is hidden by dust. Finally, we propose a semi-analytic modeling of galaxy formation and evolution in which the computation of dust extinction and emission is explicitly implemented. This model is helpful for preparing forthcoming observations in the FIR/submm range. ## 2 The issue of extinction in high–redshift galaxies Deep spectroscopic surveys and the development of the powerful UV drop–out technique have led to the reconstruction of the cosmic SFR comoving density (Lilly et al. 1996, Steidel & Hamilton 1993, Steidel et al. 1996, 1999, Madau et al. 1996, 1998). However, a complete assessment of the effect of extinction on UV fluxes emitted by young stellar populations, and of the luminosity budget of star–forming galaxies is still to come. 
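Before turning to the UV-based estimates, note that the grain-size scaling invoked in the Introduction is easy to verify numerically. The following minimal Python sketch assumes the MRN-like distribution $`n(a)\propto a^{-3.5}`$ quoted above and scipy for the integration; the grain-size limits are illustrative Milky-Way-like values, not taken from the paper.

```python
# Numerical check: for n(a) ~ a^(-3.5), the total geometric cross-section
# (~ integral of a^2 n da, hence the optical depth at fixed R) is dominated
# by the smallest grains, while the dust mass (~ integral of a^3 n da) is
# dominated by the largest.

from scipy.integrate import quad

def sigma_tot(a_min, a_max):      # proportional to tau at fixed R
    return quad(lambda a: a**2 * a**-3.5, a_min, a_max)[0]

def mass_tot(a_min, a_max):       # proportional to M_dust at fixed R
    return quad(lambda a: a**3 * a**-3.5, a_min, a_max)[0]

a_min, a_max = 5e-3, 0.25         # micron; illustrative values
for f in (2.0, 4.0):              # shrink a_min by a factor f
    print(f, sigma_tot(a_min / f, a_max) / sigma_tot(a_min, a_max))
# For a_max >> a_min the printed ratio tends to f**0.5, i.e.
# tau ~ a_min^(-1/2); similarly mass_tot grows as a_max^(+1/2).
```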
The cosmic SFR density determined only from UV observations of the Canada–France Redshift Survey has been recently revisited with a multi–wavelength approach including IR, submm, and radio observations. The result is an upward correction of the previous values by an average factor 2.9 (Flores et al. 1999). At higher redshift, various authors have attempted to estimate the extinction correction and to recover the fraction of UV starlight absorbed by dust (e.g. Meurer et al. 1997, Pettini et al. 1998). It turns out that the observed slope $`\alpha `$ of the UV spectral energy distribution $`F_\lambda (\lambda )\propto \lambda ^\alpha `$ (say, around 2200 Å) is flatter than the standard value $`\alpha _0\simeq -2.5`$ computed from models of spectrophotometric evolution. The derived extinction corrections are large and differ according to the method. For instance, Pettini et al. (1998) and coworkers fit a typical extinction curve (the Small Magellanic Cloud one) to the observed colors, whereas Meurer et al. (1997) and coworkers use an empirical relation between $`\alpha `$ and the FIR to 2200 Å luminosity ratio in local starbursts. The former authors derive $`\langle E(B-V)\rangle \simeq 0.09`$ resulting in a factor 2.7 absorption at 1600 Å, whereas the latter derive $`\langle E(B-V)\rangle \simeq 0.30`$ resulting in a factor 10 absorption. This discrepancy suggests a sort of bimodal distribution of the young stellar populations: the first method would take into account the stars detected in the UV with relatively moderate reddening/extinction, while the second one would phenomenologically add the contributions of these “apparent” stars and of heavily–extinguished stars. Fig. 1 shows the cosmic SFR comoving density in the early version (no extinction), and after the work by Flores et al. (1999) at $`z<1`$ and the extinction corrections. ## 3 The diffuse IR/submm background and submm counts A lower limit of the “Cosmic Optical Background” (hereafter COB) is currently estimated by summing up the faint counts obtained in the Hubble Deep Field (HDF), and from ground–based observations. The shallowing of the slope suggests that the counts are close to convergence. In the submm range, Puget et al. (1996) discovered an isotropic component in the FIRAS residuals between 200 $`\mu `$m and 2 mm. This measurement was confirmed by subsequent work in the cleanest regions of the sky (Guiderdoni et al 1997), and by an independent determination (Fixsen et al 1998). The analysis of the DIRBE dark sky has also led to the detection of the isotropic background at 240 and 140 $`\mu `$m, and to upper limits at shorter wavelengths down to 2 $`\mu `$m (Schlegel et al. 1998, Hauser et al. 1998). Recently, a measurement at 3.5 $`\mu `$m was proposed by Dwek & Arendt (1998). The results of these analyses seem in good agreement, though the exact level of the background around 200 $`\mu `$m is still a matter of debate. The controversy concerns the correction for the amount of Galactic dust in the ionized gas uncorrelated with the HI gas. It appears very likely that this isotropic background is the long–sought CIRB. As shown in fig. 2, its level is about 5–10 times the no–evolution prediction based on the local IR luminosity function determined by IRAS. There is about twice as much flux in the CIRB as in the COB. If the dust that emits at IR/submm wavelengths is mainly heated by young stellar populations, the sum of the fluxes of the CIRB and COB gives the level of the Cosmic Background associated with stellar nucleosynthesis (Partridge and Peebles 1967). 
The bolometric intensity (in W m<sup>-2</sup> sr<sup>-1</sup>) is: $$I_{bol}=\int \frac{ϵ_{bol}}{4\pi }\frac{dl}{(1+z)^4}=\frac{c\eta }{4\pi }\frac{\rho _Z(z=0)}{(1+z_{eff})}$$ (1) where the physical emissivity due to young stars at cosmic time $`t`$ is $`ϵ(t)=\eta (1+z)^3d\rho _Z(t)/dt`$ and $`z_{eff}`$ is the effective redshift for stellar He and metal nucleosynthesis. The census of the local density of heavy elements $`\rho _Z(z=0)\simeq 1\times 10^7`$ $`M_{}`$ Mpc<sup>-3</sup> gives an expected bolometric intensity of the background $`I_{bol}\simeq 50(1+z_{eff})^{-1}`$ nW m<sup>-2</sup> sr<sup>-1</sup>. This value is roughly consistent with the observations for $`z_{eff}\simeq 1`$ – 2. Of course, it is not clear yet whether star formation is responsible for the bulk of dust heating, or there is a significant contribution of AGNs. In order to address this issue, one has first to identify the sources that are responsible for the CIRB. At low $`z`$, it is well known that the IRAS satellite has discovered “luminous IR galaxies” (hereafter LIRGs), mostly interacting systems, and the spectacular “ultraluminous IR galaxies” (hereafter ULIRGs), which are mergers and emit more than 95 % of their energy in the IR (see e.g. the review by Sanders and Mirabel 1996). The question of the origin of dust heating in these heavily–extinguished objects is a difficult one, because both starburst and AGN rejuvenation can be fueled by gas inflows triggered by interaction. However, according to Genzel et al. (1998), the starburst generally contributes 50–90 % of the heating in local ULIRGs. Now, it is very likely that the high–redshift counterparts of the local LIRGs and ULIRGs are largely responsible for the CIRB, but the redshift evolution of the fraction and power of the AGNs that are harbored in these distant objects is still unknown. Various submm surveys have been achieved or are in progress. The FIRBACK program is a deep survey of 4 deg<sup>2</sup> at 175 $`\mu `$m with the ISOPHOT instrument aboard ISO. The analysis of about 1/4 of the Southern fields (that is, of 0.25 deg<sup>2</sup>) unveils 24 sources (with $`S_\nu >100`$ mJy), corresponding to a surface density five times larger than the no–evolution predictions based on the local IR luminosity function (Puget et al. 1998). The total catalogue of the 4 deg<sup>2</sup> will include about 275 sources (Dole et al. 1999). The radio and optical follow–up for identification is still in progress. This strong evolution is confirmed by the other 175 $`\mu `$m deep survey by Kawara et al. (1998). Various deep surveys at 850 $`\mu `$m have been achieved with the SCUBA instrument at the JCMT (Smail et al. 1997, Hughes et al. 1998, Barger et al. 1998, Eales et al. 1998). They also unveil a surface density of sources (with $`S_\nu >2`$ mJy) much larger than the no–evolution predictions (by two or three orders of magnitude!). The total number of sources so far discovered in SCUBA deep surveys now reaches about 40 (see e.g. Blain et al. 1998). The tentative optical identifications seem to show that some of these objects look like distant ULIRGs (Smail et al. 1998, Lilly et al. 1999). In the HDF, 4 of the brightest 5 sources seem to lie between redshifts 2 and 4 (Hughes et al. 1998), but the optical identifications are still a matter of debate (Richards, 1998). The source SMM 02399-0136 at $`z=2.803`$, which is gravitationally amplified by the foreground cluster A370, is clearly an AGN/starburst galaxy (Ivison et al. 1998, Frayer et al. 1998). Fig. 
3 gives an account of the faint counts in the submm range. ## 4 Modeling dust spectra in a semi–analytic framework Various models have been proposed to account for the FIR/submm emission of galaxies and to predict forthcoming observations. The level of sophistication (and complexity) increases from pure luminosity and/or density evolution extrapolated from the IRAS local luminosity function with $`(1+z)^n`$ laws, and modified black–body spectra, to physically–motivated spectral evolution. Guiderdoni et al. (1998) proposed a consistent modeling of IR/submm galaxy counts in the paradigm of hierarchical clustering. Only stellar heating is taken into account. The IR/submm spectra of galaxies are computed in the following way: (i) follow chemical evolution of the gas; (ii) implement extinction curves which depend on metallicity as in the Milky Way, the LMC and SMC; (iii) compute $`\tau _\lambda `$; (iv) assume the so–called “slab” geometry where the star and dust components are homogeneously mixed with equal height scales; (v) compute a spectral energy distribution by assuming a mix of various dust components. The contributions are fixed in order to reproduce the observational correlation of IRAS colours with total IR luminosity (Soifer & Neugebauer 1991). These FIR/submm spectra are implemented in a semi–analytic model of galaxy formation and evolution. This type of model has been very effective in computing the optical properties of galaxies in the paradigm of hierarchical clustering. We only extend this approach to the IR/submm range, and take the standard CDM case with $`H_0`$=50 km s<sup>-1</sup> Mpc<sup>-1</sup>, $`\mathrm{\Omega }_0=1`$, $`\mathrm{\Lambda }=0`$, and $`\sigma _8=0.67`$. We assume a Star Formation Rate $`SFR(t)=M_{gas}/t_{\star}`$, with $`t_{\star}=\beta t_{dyn}`$ and a Salpeter IMF ($`x=1.35`$). The efficiency parameter $`1/\beta =0.01`$ gives a nice fit to local spirals. The robust result of this type of modeling is a cosmic star formation rate history that is too flat with respect to the data. As a phenomenological way of reproducing the steep rise of the cosmic SFR history from $`z=0`$ to $`z=1`$, we introduce a “burst” mode of star formation involving a mass fraction that increases with $`z`$ as $`(1+z)^4`$, with ten times higher efficiencies $`1/\beta =0.1`$. In order to reproduce the level of the CIRB, we have to assume that a small fraction of the gas mass (typically less than 10 %) is involved in star formation with a top–heavy IMF in heavily–extinguished objects (ULIRG–type galaxies). The so–called “model E” with these assumptions reproduces fairly well the cosmic SFR and luminosity densities, as well as the CIRB (see fig. 1 and 2). Fig. 3 gives the predictions of number counts at 15, 60, 175, and 850 $`\mu `$m for this model. The agreement of the predictions with the data seems good enough to suggest that these counts do probe the evolving population contributing to the CIRB. The model shows that 15 % and 60 % of the CIRB respectively at 175 $`\mu `$m and 850 $`\mu `$m are built up by objects brighter than the current limits of ISOPHOT and SCUBA deep fields. The predicted median redshift of the ISO–HDF is $`z\simeq 0.8`$. It increases to $`z\simeq 1.2`$ for the deep ISOPHOT surveys, and to $`z\simeq 2`$ for SCUBA, though the latter value is very sensitive to the details of the evolution. An extension of the spectra and counts to the near–IR, optical and ultraviolet ranges is in progress (Devriendt et al. 1999, Devriendt & Guiderdoni 1999). A fit of a typical ULIRG is proposed in fig. 
4 as an example of what can be obtained with these extended spectra. ## 5 Future instruments Fig. 5 gives the far–UV to submm spectral energy distribution that is typical of a $`L_{IR}=10^{12}`$ $`L_{bol}`$ ULIRG at various redshifts. This model spectrum is taken from the computation of Devriendt et al. (1999). The reader should note the specific behavior of the observed flux at submm wavelengths, where the shift of the 60 – 100 $`\mu `$m rest–frame emission bump counterbalances distance dimming. The instrumental sensitivities of various past and on–going satellite and ground–based instruments are plotted on this diagram: the IRAS Very Faint Source Survey at 60 $`\mu `$m, ISOCAM at 15 $`\mu `$m, ISOPHOT at 175 $`\mu `$m, the IRAM interferometer at 1.3 mm, SCUBA at 450 and 850 $`\mu `$m, and various surveys with the VLA. Forthcoming missions and facilities include WIRE, SIRTF, SOFIA, the PLANCK High Frequency Instrument, the FIRST Spectral and Photometric Imaging REceiver, and the imaging modes of the SUBARU IRCS and VLT VIRMOS instruments. Finally, the capabilities of the NGST, MMA/LSA and Infrared Space Interferometer (DARWIN) are also plotted. The final sensitivity of the next–generation instruments observing at FIR and submm wavelengths (WIRE, SIRTF, SOFIA, PLANCK, FIRST) is going to be confusion-limited. However, the observation of a large sample of ULIRG–like objects in the redshift range 1–5 should be possible. More specifically, the all–sky shallow survey of PLANCK HFI, and the medium–deep survey of FIRST SPIRE (to be launched by ESA in 2007), will respectively produce bright ($`S_\nu >`$ a few 100 mJy) and faint ($`S_\nu >`$ a few 10 mJy) counts that will be complementary. A 10 deg<sup>2</sup> survey with SPIRE will result in about $`10^4`$ sources. The $`250/350`$ and $`350/500`$ colors are suited to picking out sources which are likely to be at high redshifts. These sources can eventually be followed at 100 and 170 $`\mu `$m by the FIRST Photoconductor Array Camera & Spectrometer and by the FTS mode of SPIRE, to get the spectral energy distribution at $`200\le \lambda \le 600`$ $`\mu `$m with a typical resolution $`R=\lambda /\mathrm{\Delta }\lambda =20`$. After a photometric and spectroscopic followup, the submm observations should readily probe the bulk of (rest–frame IR) luminosity associated with star formation. The reconstruction of the cosmic SFR comoving density will thus take into account the correct luminosity budget of high–redshift galaxies. However, the spatial resolution of the submm instruments will be limited, and only the MMA/LSA should be able to resolve the FIR/submm sources and study the details of their structure. ## 6 Conclusions 1. There is now strong evidence that high–redshift galaxies emit much more IR luminosity than predicted from the local IR luminosity function without evolution. The submm counts seem to unveil the bright end of the population that is responsible for the CIRB. 2. The issue of the relative contributions of the starbursts and AGNs to dust heating is still unsolved. Local ULIRGs seem to be dominated by starburst heating, but the behavior at higher redshift is unknown. 3. It is difficult to correct for the influence of dust on the basis of the optical spectra alone. Multi–wavelength studies are clearly necessary to address the history of the cosmic SFR density, through a correct assessment of the luminosity budget. 4. 
Under the assumption that starburst heating is dominant, simple models in the paradigm of hierarchical clustering do reproduce the current IR/submm data. 5. Current studies of faint counts at submm wavelengths will guide models for the preparation of observing strategies with forthcoming instruments: e.g., SIRTF, SOFIA, the PLANCK High Frequency Instrument, the FIRST Spectral and Photometric Imaging REceiver, and the MMA/LSA. A large number of high–redshift sources should be observable with these IR/submm instruments.
no-problem/9902/astro-ph9902159.html
ar5iv
text
# X-ray and lensing results on the cluster around the powerful radio galaxy 4C+55.16 ## 1 Introduction 4C+55.16 (= 0831+557) is a radio galaxy at a redshift of 0.240 (Lawrence et al 1996) in the Pearson-Readhead (1981, 1988) 5 GHz flux-density-limited complete sample. The radio source is compact and powerful ($`1.1\times 10^{26}`$W Hz<sup>-1</sup>sr<sup>-1</sup> at 5 GHz). In the high resolution radio maps obtained from MERLIN and VLBI (Whyborn et al 1985; Pearson & Readhead 1988), highly irregular, small-scale structures have been resolved. The radio spectrum of the compact core shows a turnover around 1 GHz (Component C in Whyborn et al 1985), resembling the class of Giga-hertz Peaked Spectrum (GPS) sources (e.g., O’Dea 1998) whilst the mini double-lobe with a projected size of 11″ (54 kpc at a redshift of 0.24) has a steep, low-frequency spectrum. A sum of the two components results in a flat radio spectrum over the GHz range. A blue continuum contributes $`50`$ per cent of the light at 3850Å (Heckman et al 1983) and its optical emission-line spectrum shows a medium ionization state (e.g., Heckman et al 1983 based on the \[OII\]$`\lambda 5007`$/\[OII\]$`\lambda 3727`$ ratio; Whyborn et al 1985; \[OIII\]$`\lambda 5007`$/H$`\beta 4`$, Lawrence et al 1996). X-ray emission was detected during the ROSAT All Sky Survey (RX J08349+5534, Brinkmann et al 1995; Laurent-Muehlei et al 1996; Bade et al 1998). The ROSAT papers assumed that the X-ray emission originates in an active nucleus residing in 4C+55.16. However, as our imaging and spectral study using the ROSAT HRI and ASCA data show below, the observed X-rays appear to be dominated by cluster emission surrounding the radio galaxy. The optical CCD images of 4C+55.16 were obtained by Hutchings, Johnson & Pyke (1988) using the Canada-France-Hawaii Telescope (CFHT) with $`B`$ and $`R`$ filters. There is a blue feature about 15 arcsec southwest from the centre of the galaxy, which we tentatively identify as a gravitationally lensed arc. We find a good agreement between the cluster masses estimated from X-ray and lensing techniques. ## 2 Observations and data reduction 4C+55.16 was observed with ASCA (Tanaka, Inoue & Holt 1994) and ROSAT HRI (Pfeffermann et al 1987). ASCA provides moderate resolution imaging and spectra in the 0.5–10 keV band while the ROSAT HRI provides high resolution (FWHM $``$ 5 arcsec) 0.1–2.4 keV imaging. A summary of the observations is given in Table 1. The four detectors onboard ASCA, the Solid state Imaging Spectrometers (SIS; S0 and S1), the Gas Imaging Spectrometers (GIS; G2 and G3) were operating normally. The radio galaxy was pointed at the nominal position on the best-calibrated CCD chip of each SIS detector. The lowest energy event threshould in the SIS was set at 0.47 keV. Standard calibration (used for the Rev2 processing) and data reduction techniques were employed, using FTOOLS (version 4.0) provided by the ASCA Guest Observer Facility at Goddard Space Flight Center. The SIS data were corrected for the effects of the Dark Frame Error (DFE) and ‘Echo’ and hot/flickering pixels were removed. The ROSAT HRI observation was short (16.6 ks). The soft X-ray image revealed extended X-ray emission which could not be resolved with ASCA (Section 3). The data were also analysed by the deprojection technique (Section 5). There is no evidence for X-ray variation in both ASCA and ROSAT data. 
## 3 The ROSAT HRI image The X-ray source detected at the position of 4C+55.16 is clearly extended beyond the Point Spread Function (PSF) of the HRI. The raw HRI image has been smoothed with the adaptive kernel method, ASMOOTH (Ebeling, White & Rangarajan 1998), using a gaussian kernel and a characteristic smoothing threshold, above the local background, of $`4\sigma `$. Fig. 1 shows the X-ray intensity contours of the smoothed data overlaid on the optical Digitized Sky Survey (DSS) image. The X-ray emission is sharply peaked at the centre of the extended image, as observed in strong cooling flow clusters. The X-ray peak is displaced from the galaxy nucleus by $`\sim 7`$ arcsec, well within the mean displacement (25 arcsec) between X-ray and optical peaks seen in cooling flow clusters observed with the ROSAT HRI (Peres et al 1997). Although the displacement is consistent with the positional uncertainty of ROSAT pointing ($`\sim 10`$ arcsec), if it is real, the galaxy resides in an X-ray cavity, as seen in the Perseus cluster (Böhringer et al 1993). There are three point-like sources detected in the HRI field. One of them shows an offset from a possible optical counterpart, similar in amplitude and direction to that for 4C+55.16. Therefore the displacement between the X-ray and optical peaks may be merely due to a pointing error. The elongation of the X-ray image is prominent at low surface-brightness levels. However, this may be partly due to contamination from point sources in the field, particularly in the SW region. The present observation is too short to investigate such faint X-ray morphology. ## 4 The ASCA X-ray spectrum The spectral analysis was performed using XSPEC (version 10.0). The MEKAL model (the original MEKA code, described by Kaastra 1992, with modified Fe-L line emissivity by Liedahl et al 1995) for an optically thin, collisional ionization equilibrium plasma was used with solar abundances taken from Anders & Grevesse (1989). The photoelectric absorption model was taken from Morrison & McCammon (1983). The Galactic absorption at the position of 4C+55.16 is estimated to be $`N_\mathrm{H}`$ = $`4.2\times 10^{20}`$cm<sup>-2</sup> from the HI measurements of Dickey & Lockman (1990). Absorption column densities obtained from the spectral fits are then excesses above the Galactic value. Quoted errors on the best-fit spectral parameters are 90 per cent confidence regions for one parameter of interest. The ASCA spectrum shows a clear line feature at 5.4 keV, which is in excellent agreement with the rest-frame Fe K$`\alpha `$ line emission at $`6.7`$ keV expected from a thin thermal plasma with a temperature of several keV at the redshift ($`z=0.240`$) of 4C+55.16. The agreement between the redshifts of the Fe K$`\alpha `$ line and the radio galaxy strongly supports the presence of a cluster around the radio galaxy, rather than in the background as was suspected by Hutchings et al (1988). The thermal emission model (MEKAL) provides a slightly better fit to the data (0.6–9 keV from the SIS; 0.9–10 keV from the GIS) than the model of a power-law plus a gaussian line for the Fe K$`\alpha `$ line feature (see Table 2). The strong, narrow Fe K$`\alpha `$ line at 6.7 keV is unlikely for an active galaxy but is naturally explained by thermal emission from cluster gas whose X-ray emission has been spatially resolved by the ROSAT HRI. There is evidence for multi-phase gas in the ASCA spectrum. 
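The line identification above amounts to a simple redshift check, which the following few lines of Python reproduce (the K$`\beta `$ feature referred to in the code is discussed later in this section):

```python
# Rest-frame Fe K line energies shifted to the observed frame at the
# redshift of 4C+55.16.
z = 0.240
for name, e_rest in [("Fe K-alpha", 6.7), ("Fe K-beta", 7.9)]:
    print(name, round(e_rest / (1.0 + z), 2), "keV observed")
# Fe K-alpha 5.4 keV observed  (matches the feature seen with ASCA)
# Fe K-beta 6.37 keV observed  (near the marginal 6.4 keV feature)
```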
Fits to the ASCA data above and below 3 keV with a single thermal emission model (MEKAL) give significantly different values of temperature (Table 3). The data below 3 keV also require excess absorption ($`\mathrm{\Delta }N_\mathrm{H}=0.98_{-0.32}^{+0.34}\times 10^{21}`$cm<sup>-2</sup> measured in the Earth frame) above the Galactic value. The extrapolation of the best-fit model for the 3–10 keV data leaves excess emission down to 1 keV, followed by a decline towards 0.6 keV which is probably due to absorption (Fig. 2). This indicates a multi-phase gas consisting of at least an absorbed, cool component and a less absorbed, ambient medium. This is characteristic of a cooling flow. A fit to the whole band data (0.6–8 keV for the SIS; 0.9–10 keV for the GIS) with a multi-phase model (Model-c in Table 2), consisting of a single temperature MEKAL with Galactic absorption and the cooling flow model (Johnstone et al 1992) modified by extra absorption at the source, gives temperature, metal abundance, mass deposition rate and excess absorption column for the cooled component of $`kT=5.4_{-0.9}^{+1.4}`$ keV, $`Z=0.50_{-0.13}^{+0.12}`$$`Z_{}`$, Ṁ $`=1100_{-410}^{+240}`$M yr<sup>-1</sup>, and $`N_\mathrm{H}`$$`=4.9_{-1.3}^{+3.4}\times 10^{21}`$cm<sup>-2</sup>, respectively. The temperature is listed for the ambient medium (i.e., before cooling), the metallicity is assumed to be identical between the two components, and the absorption column density is corrected for the galaxy redshift. The quality of the fit is acceptable and comparable to that of the single-phase model (see Table 2). The covering fraction of the cold absorption must be larger than 0.9 (90 per cent lower limit). The observed fluxes in the 0.5–2 keV and 2–10 keV bands obtained from the GIS are $`1.72\times 10^{-12}`$erg cm<sup>-2</sup> s<sup>-1</sup> and $`2.56\times 10^{-12}`$erg cm<sup>-2</sup> s<sup>-1</sup>, respectively. In the best-fit multi-phase model, about half of the total flux comes from the cool component. Excess line-like emission at a rest energy of 7.9 keV (6.4 keV in the observed frame) is marginally detected in the SIS data (Fig. 3). It can be identified with Fe K$`\beta `$. This feature is barely seen in the G3 data but not in the G2 data, which show an unusual deficit between 6 and 7 keV, probably due to some anomaly in the detector. Since there is an instrumental feature at 6.4 keV in the SIS spectrum, the SIS results should be treated with caution. If the feature in the SIS data is real, the Fe K$`\beta `$ emission is underestimated by a factor of $`2.3\pm 1.5`$ by the best-fit thermal emission model. An anomalous Fe K$`\beta `$/Fe K$`\alpha `$ ratio has been observed in the central region of a few cooling flow clusters, and has been interpreted as the effect of resonant scattering of line photons in the cluster core (e.g., Akimoto et al 1996; Molendi et al 1998). This interpretation is, however, not applicable to 4C+55.16 because the whole cluster emission is observed. Deeper observations are required to confirm the existence of this emission feature. ## 5 Deprojection analysis of the HRI data and the X-ray mass model We have carried out a deprojection analysis of the ROSAT HRI data. An azimuthally-averaged X-ray surface brightness profile was constructed for the cluster. This was background-subtracted, corrected for telescope vignetting and re-binned into 12 arcsec bins to provide sufficient counts in each annulus for a reliable statistical analysis to be carried out. 
With the X-ray surface brightness profile as the primary input, and under assumptions of spherical symmetry and hydrostatic equilibrium, the deprojection technique yields the basic properties of the intracluster gas (temperature, density, pressure, cooling rate) as a function of radius. The deprojection method requires the total mass profile for the cluster to be specified. We have iteratively determined the mass profile (which has been parameterized as an isothermal sphere; Equation 4-125 of Binney & Tremaine 1987) that results in a deprojected temperature profile which is isothermal within the region probed by the ROSAT data and which is consistent with the best-fit temperatures determined from the ASCA spectra, using the cooling-flow model (Section 4; the validity of the assumption of isothermal mass-weighted temperature profiles in cluster cores is discussed by Allen 1998b). The best-fitting mass model has a core radius of $`60\pm 20`$ kpc and a velocity dispersion of $`820_{-70}^{+100}`$ km s<sup>-1</sup>. The primary results from the deprojection analysis are as follows: for an assumed Galactic column density of $`4.2\times 10^{20}`$cm<sup>-2</sup>, we determine the mean cooling time within the central 12 arcsec bin of $`t_{\mathrm{cool}}=2.0_{-0.2}^{+0.3}\times 10^9`$ yr, a cooling radius (beyond which the cooling time exceeds a Hubble time) of $`r_{\mathrm{cool}}=180_{-40}^{+130}`$ kpc, and an integrated mass deposition rate within the cooling radius of $`\dot{M}=460_{-140}^{+260}`$M yr<sup>-1</sup>. If we correct for intrinsic absorption in the cluster, as determined from the ASCA data, these values are adjusted to $`t_{\mathrm{cool}}=1.5_{-0.2}^{+0.2}\times 10^9`$ yr, $`r_{\mathrm{cool}}=270_{-10}^{+50}`$ kpc, and $`\dot{M}=970_{-450}^{+270}`$M yr<sup>-1</sup>. ## 6 Lensing analysis and comparison with the X-ray results The mass model determined from the X-ray data may be compared to the mass implied by the observed lensing configuration in the cluster (Section 1). Since only a single, putative gravitational arc is seen, and to be consistent with the X-ray analysis, we have only carried out a simple, spherically-symmetric analysis of the lensing data. For a spherical mass distribution, the projected mass within the tangential critical radius, which we assume to be equal to the arc radius, $`r_{\mathrm{arc}}=15`$ arcsec (71.8 kpc), is given by $$M_{\mathrm{arc}}(r_{\mathrm{arc}})=\frac{c^2}{4G}\left(\frac{D_{\mathrm{arc}}}{D_{\mathrm{clus}}D_{\mathrm{arc}-\mathrm{clus}}}\right)r_{\mathrm{arc}}^2$$ (1) where $`D_{\mathrm{clus}}`$, $`D_{\mathrm{arc}}`$ and $`D_{\mathrm{arc}-\mathrm{clus}}`$ are respectively the angular diameter distances from the observer to the cluster, the observer to the lensed object, and the cluster to the lensed object. Fig. 4 shows the projected mass within the critical radius as a function of the redshift of the arc (solid curve). The horizontal dashed and dotted lines mark the best-fit (projected) mass measurement and 90 per cent confidence limits determined from the X-ray data, within the same radius ($`3.8_{-0.6}^{+1.0}\times 10^{13}`$ M). We see that the X-ray and lensing mass measurements are consistent for any arc redshift $`z_{\mathrm{arc}}>0.7`$. The best match between the X-ray and lensing mass measurements is obtained for an arc redshift of 1.5. ## 7 Discussion The ROSAT HRI image of the powerful radio galaxy 4C+55.16 shows extended X-ray emission peaking at the radio galaxy, indicating cluster emission with a strong cooling flow. 
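For reference, the lensing comparison of Section 6 is easy to reproduce numerically. The sketch below evaluates equation (1) in the paper's stated cosmology (H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>, q<sub>0</sub> = 0.5, i.e. Einstein–de Sitter), for which the angular diameter distance has a closed form, and recovers the quoted arc radius and projected mass.

```python
# Check of equation (1) in an Einstein-de Sitter cosmology.

import math

C_KMS, H0 = 299792.458, 50.0       # km/s and km/s/Mpc
G = 4.301e-9                       # Mpc (km/s)^2 / Msun
KPC = 1.0e-3                       # Mpc per kpc

def d_ang(z1, z2=None):
    """EdS angular diameter distance (Mpc) from z1 to z2 (observer if None)."""
    if z2 is None:
        z1, z2 = 0.0, z1
    return (2.0 * C_KMS / H0) / (1.0 + z2) * (
        1.0 / math.sqrt(1.0 + z1) - 1.0 / math.sqrt(1.0 + z2))

z_cl, z_arc = 0.240, 1.5
r_arc = 15.0 / 206265.0 * d_ang(z_cl)          # 15 arcsec in Mpc at z_cl
m_arc = (C_KMS**2 / (4.0 * G)) * d_ang(z_arc) \
        / (d_ang(z_cl) * d_ang(z_cl, z_arc)) * r_arc**2
print(r_arc / KPC, m_arc)   # ~71.7 kpc and ~3.8e13 Msun, as quoted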
A spectral study of the ASCA data suggests the X-ray emitting gas to be multi-phase. An absorbed, cool component is found in the spectrum. The multi-phase spectral analysis indicates that the temperature of the ambient cluster medium is $`kT\simeq 5.4`$ keV. A single-phase model fitted to the data gives a temperature lower by $`\sim 1`$ keV, typical of a cooling flow cluster. The mass deposition rate of the cooling flow, $`1100_{-410}^{+240}`$M yr<sup>-1</sup>, derived from the spectral analysis is consistent with that ($`970_{-450}^{+270}`$M yr<sup>-1</sup>) estimated from the image analysis when corrected for excess absorption. Agreement between mass deposition rates derived from the two methods has been found for other distant cooling-flow clusters (Allen 1998b). The optical spectrum of 4C+55.16 (Lawrence et al 1996) is indeed very similar to those of other cooling-flow galaxies (Crawford et al 1999). The H$`\alpha `$ luminosity is about $`8\times 10^{42}`$erg s<sup>-1</sup> (Lawrence et al 1996). The relatively large Balmer decrement (H$`\alpha `$/H$`\beta \simeq 5.5`$) suggests significant reddening, which is often observed in cooling flows. The inferred absorption column density is slightly smaller than observed in the X-ray spectrum. The absorption-corrected 2–10 keV and bolometric luminosities, computed from the multi-phase model, are $`8.0\times 10^{44}`$erg s<sup>-1</sup> and $`2.2\times 10^{45}`$erg s<sup>-1</sup>, respectively (H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup> and q<sub>0</sub> = 0.5). About 60 per cent of the bolometric luminosity is due to the cooling flow. The bolometric luminosity exceeds that predicted for the single-phase temperature of 4 keV from the correlation between emission-weighted cluster temperature and luminosity (Mushotzky 1984; Edge & Stewart 1991; David et al 1993; Fabian et al 1994; Mushotzky & Scharf 1997; White et al 1997). A similar discrepancy is found for the other strong cooling flow clusters (e.g., Fabian et al 1994; Allen & Fabian 1998a; Markevitch 1998). Taking the temperature derived from the multi-phase spectral analysis, 4C+55.16 fits well the $`kT_\mathrm{X}`$–$`L_{\mathrm{Bol}}`$ correlation obtained from a similar analysis of other luminous ($`L_{\mathrm{Bol}}>10^{45}`$erg s<sup>-1</sup>) clusters for which the effect of cooling flows is included (Allen & Fabian 1998a), and is consistent with $`L_{\mathrm{Bol}}\propto T_\mathrm{X}^2`$ expected from simple gravitational collapse models for cluster formation (Kaiser 1986; Navarro, Frenk & White 1995). As shown in Section 6 (and Fig. 4), the mass estimated using the tentatively-identified lensing arc and the gravitational mass derived from the X-ray deprojection analysis are in good agreement. Moreover, the core radius of $`60\pm 20`$ kpc measured for 4C+55.16 is similar to the best-fit mean value of $`\sim 50`$ kpc measured in the six lensing cooling-flow clusters studied by Allen (1998b). The inferred metallicity of this cluster gas is $`\sim 0.5`$$`Z_{}`$, which is not unusual, but is certainly one of the higher values measured among the ASCA cluster sample compiled by Allen & Fabian (1998b). They showed that cooling-flow clusters show higher metallicity than non cooling-flow clusters, and suggest that the sharply peaked X-ray brightness profiles may give apparently high values of the emissivity-weighted metallicity in cooling flow clusters when there is a metallicity gradient in the cluster core. Thus the high metallicity in 4C+55.16 may be due to a steep metallicity gradient towards the cluster centre. 
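As a cross-check on the reddening argument above, the Balmer decrement can be converted into an equivalent absorbing column. The numbers below are standard assumed values, not taken from the paper: a case-B intrinsic H$`\alpha `$/H$`\beta `$ ratio of about 2.85, Galactic extinction-curve coefficients k(H$`\beta `$) ≈ 3.6 and k(H$`\alpha `$) ≈ 2.5, and the Galactic gas-to-dust ratio of Bohlin et al. (1978).

```python
# Rough sketch: Balmer decrement -> E(B-V) -> equivalent N_H.
# All coefficients are standard assumed values, not from this paper.

import math

ratio_obs, ratio_int = 5.5, 2.85      # observed and case-B intrinsic
k_hb, k_ha = 3.6, 2.5                 # approximate Galactic curve values

ebv = 2.5 / (k_hb - k_ha) * math.log10(ratio_obs / ratio_int)
nh = 5.8e21 * ebv                     # Bohlin et al. (1978) gas-to-dust
print(round(ebv, 2), f"{nh:.1e}")     # E(B-V) ~ 0.65, N_H ~ 3.8e21 cm^-2
# Smaller than, but comparable to, the ~4.9e21 cm^-2 fitted to the
# X-ray spectrum, consistent with the statement above.
```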
The high metallicity measured in the spectrum also rules out a significant contribution from the active nucleus to the observed hard X-ray emission around the Fe K band, since otherwise the line would be less prominent. We conclude that all of the available evidence points to the environment of 4C+55.16 being like that of other distant, massive cooling flow clusters. 4C+55.16 is yet another powerful radio galaxy surrounded by a strong cooling flow. Unlike Cygnus A and 3C295, 4C+55.16 is a compact radio source. Detailed high spatial resolution observations with AXAF will be required to determine the interaction between the radio source and the dense surrounding intracluster medium. Probably the most comparable cluster to 4C+55.16 is PKS 0745–19, which also contains a strong cooling flow and exhibits gravitationally-lensed arcs (Allen, Fabian & Kneib 1996). However, the amorphous radio source shows a steep spectrum and is almost two orders of magnitude less powerful than 4C+55.16 at 5 GHz. Massive cooling flows have not been found so far around ‘pure’ compact, GPS cores (e.g., O’Dea et al 1996). The gas pressure at the centre of a cooling flow is $`P=nT\simeq 10^7`$ cm<sup>-3</sup> K (Heckman et al 1989). This leads to a free-free absorption optical depth $`\tau _{\mathrm{ff}}\simeq 0.6P_7^2\nu _9^{-2}T_4^{-7/2}l_1`$ at a frequency $`10^9\nu _9`$ Hz, through a cloud of length $`10l_1`$ pc at pressure $`10^7P_7`$ cm<sup>-3</sup> K and temperature $`10^4T_4`$ K. It is therefore plausible that the turnover seen in the radio spectrum of the compact core at $`\sim 1.7`$ GHz (Whyborn et al 1985) is due to free-free absorption in the H$`\alpha `$ emitting gas close to the nucleus. The covering fraction in such clouds must however decrease rapidly away from the nucleus in order that the radio knot observed $`\sim 100`$ pc north-west of the nucleus (Whyborn et al 1985; Pearson & Readhead 1988) is not absorbed. The inverted spectrum of the counter-jet of NGC1275, which also resides in a cooling flow, is suspected to be due to free-free absorption on the pc scale (Vermeulen, Readhead, & Backer 1994). Free-free absorption is favoured for the spectral turnover in GPS sources by Begelman (1997) and Bicknell, Dopita & O’Dea (1997). Part of the observed optical emission-line luminosity from the innermost part of 4C+55.16 can be expected from shocked gas surrounding the jets of such a powerful and relatively young radio source (e.g., Bicknell et al 1997), as well as from the cooling flow. ## Acknowledgements We thank the ASCA and ROSAT teams for their efforts in the operation of the satellites and the calibration and maintenance of the software. The optical image we used in Fig. 1 was taken from the Digitized Sky Survey, which was produced at the Space Telescope Science Institute (ST ScI) under U.S. Government grant NAG W-2166. We thank the Royal Society (ACE, ACF, SE) and PPARC (KI, SWA) for support.
no-problem/9902/astro-ph9902167.html
ar5iv
text
# X–ray Observations of BL Lacertae During 1997 Outburst and its Association with Quasar-like Characteristics ## 1 INTRODUCTION The eponymous BL Lacertae was identified as a counterpart of the variable radio source VRO 42.22.01 by Schmitt (1968); subsequent work by Oke, Neugebauer, & Becklin (1969) as well as DuPuy et al.(1969) revealed a featureless spectrum, devoid of emission or absorption lines. Strittmatter et al.(1972) suggested that BL Lac-type objects are akin to Quasi-Stellar Radio Sources, but distinguished by the absence of emission lines. Detailed spectroscopy of BL Lacertae by Miller & Hawley (1977) revealed an absorption line system at a redshift of 0.069, presumably due to the galaxy hosting BL Lacertae. At the 1978 conference devoted to BL Lac-type objects, it was suggested that the presence or absence of optical and/or UV emission lines is perhaps less relevant to the physical structure of blazars than their rapid variability, the existence of compact radio sources, and a large degree of polarization (cf. numerous articles in Wolfe 1978). The term “blazar,” dating back to that conference, includes both “lineless” BL Lac objects as well as “lined” quasars showing the same characteristics as BL Lac objects, plus emission lines. Independently, further observations of compact radio sources, particularly of their variability and of the detection of superluminal expansion, led to the suggestion that at least the matter responsible for the radio emission is moving at a relativistic speed at an angle close to the line of sight, perhaps in a jet-like structure (see, e.g., Blandford & Rees 1978). The most convincing evidence of the similarity between the two sub-classes of blazars is the fact that both often show strong and variable GeV $`\gamma `$–ray emission (cf. von Montigny et al.1995). Of all classes of extragalactic sources, only blazars are strong GeV $`\gamma `$–ray emitters. In contrast, ordinary radio-quiet quasars and Seyfert galaxies, regardless of their optical, UV, and X–ray fluxes, are not detected above $`\sim 1`$ MeV. In general, blazars show two distinct peaks in their $`E\times F(E)`$ spectra, with one located in the infrared-through-X–ray band, and another located in the MeV to GeV (or even TeV) $`\gamma `$–rays (cf. von Montigny et al.1995). (In some cases, other, narrower peaks are present, but these are generally attributed to isotropic emission due to the host galaxy or the accretion disk.) The rapid variability observed in the $`\gamma `$–rays implies that the $`\gamma `$–ray source must be very compact. The simplest way to avoid an excessive opacity to $`\gamma \gamma `$ pair production is to invoke relativistic boosting of $`\gamma `$–ray emission; most likely, the entire continuum emission in blazars is Doppler-boosted via a jet-like structure. BL Lacertae itself indeed shows GeV emission in the EGRET observations; in fact, the $`\gamma `$–ray flux ($`E>`$ 100 MeV) increased from an upper limit of $`30\times 10^{-8}`$ photons cm<sup>-2</sup> s<sup>-1</sup> in Oct. 1994 to a detection of $`40\pm 12\times 10^{-8}`$ photons cm<sup>-2</sup> s<sup>-1</sup> in Jan. 1995 (cf. Catanese et al.1997). X–ray surveys conducted over the last 20 or so years with satellites such as HEAO-A, Einstein Observatory, ROSAT, and Asca revealed that BL Lac objects are generally strong X–ray emitters, with the X–ray spectrum generally well-described as a power law. (For a recent review, see, e.g., Kubo et al.1998.)
Those surveys revealed that the X–ray emission from BL Lac objects obeys a peculiar correlation: the ratio of the X–ray to optical fluxes was greater for objects where the ratio of the radio-to-optical fluxes was smaller (cf. Maraschi 1988). This correlation suggested that instead of dividing blazars by the presence/absence of emission lines, a better classification would rely on the overall spectrum. In this context, the BL Lac objects with the low energy peak located in the UV or X–rays – usually found via X–ray surveys – would be labeled as “High-energy peaked BL Lacs” or HBLs (cf. Giommi, Ansari, & Micol 1995), while those with the lower energy peak in the IR were labeled as “Low-energy peaked BL Lacs” or LBLs. The LBLs show broad-band spectra similar to blazars associated with lined, compact, flat radio-spectrum quasars (cf. Sambruna, Maraschi, & Urry 1996), which we will call here “quasar-hosted blazars,” or QHBs. In this classification BL Lacertae is an LBL. Its spectrum peaks at $`\sim 10^{14}`$ Hz (cf. Kawai et al.1991), and, historically, the optical spectrum was devoid of any emission lines. This changed within the few years preceding May 1995. A serendipitous observation at the Hale Observatories, intended for calibration purposes (Vermeulen et al.1995), revealed emission lines with equivalent H$`\alpha `$ width of $`\sim 7`$ Å or more and FWHM of $`\sim 4000`$ km s<sup>-1</sup>. Corbett et al.(1996) confirmed the presence of the lines and inferred that the line flux increased four-fold. BL Lacertae, therefore, lost its defining characteristics. Interestingly enough, this happened around the epoch of the increase of the GeV flux. Following reports that BL Lacertae entered an active, high state in June 1997 (Noble et al.1997), an impromptu monitoring campaign was organized. We observed it with the Rossi X–Ray Timing Explorer (RXTE) (Madejski, Jaffe, & Sikora 1997). The questions to be addressed were: what are the changes in the X–ray emission associated with the emergence of emission lines and the occurrence of the flare, and what constraints can the broad-band data impose on models of blazar jets? We report on X–ray observations (RXTE, as well as previous X–ray observations) in Sec. 2. In Secs. 3 and 4, we put the X–ray observations in the context of the broad-band spectrum and discuss the most plausible models for the radiative processes in the source, and in Sec. 5 we summarize our results. ## 2 OBSERVATIONS Prior to the RXTE observation in July 1997 – which we describe in Sec. 2.3 – BL Lacertae was observed with the ROSAT PSPC and by Asca. We extracted these data from the HEASARC archives, and found that they show a softer continuum and lower flux than the RXTE data. In addition, the well-exposed Asca data are very helpful in determining the low energy (photoelectric) absorption, which is probably not related to the source, and is assumed to be non-variable. ### 2.1 Asca Observations and Spectral Fitting Asca observed BL Lacertae on 1995 November 22 for approximately 30 ks. The Asca data were screened using the ftool ascascreen and the standard screening criteria. The pulse-height data for the source were extracted using spatial regions with a diameter of $`3^{\prime }`$ (for SISs) and $`4^{\prime }`$ (for GISs) centered on the nominal position of BL Lacertae, while background was extracted from source-free regions of comparable size away from the source. The PHA data were subsequently rebinned to provide at least 20 counts per spectral bin.
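The grouping to at least 20 counts per bin is a standard prerequisite for $`\chi ^2`$ fitting. A minimal sketch of such a grouping step is shown below; it is a generic illustration (not the actual ftool implementation), and the helper name is hypothetical.

```python
import numpy as np

def group_min_counts(counts, min_counts=20):
    """Group consecutive PHA channels until every output bin holds at least
    `min_counts` counts; a trailing underfilled group is merged into the
    previous bin. Returns (bin_edges, binned_counts)."""
    edges, binned, acc = [0], [], 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            edges.append(i + 1)
            binned.append(acc)
            acc = 0
    if acc > 0 and binned:            # merge leftover channels into last bin
        binned[-1] += acc
        edges[-1] = len(counts)
    return np.array(edges), np.array(binned)

# toy example with Poisson channel counts
rng = np.random.default_rng(0)
edges, binned = group_min_counts(rng.poisson(4.0, size=512))
assert binned.min() >= 20
```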
For the SIS data, we used the response matrix as appropriate for the observation epoch, generated via the ftool sisrmg v. 1.1; for the GIS data, we used the nominal (v. 4.0) response matrices. For both instruments, we used the telescope effective areas via the ftool ascaarf v. 2.72. The details of the observation (including the net counting rates) are given in Table 1. We fitted the full-band PHA data using a simple power law absorbed at the low energies by neutral gas with Solar composition and cross-sections as given by Morrison & McCammon (1983); the results of the fits are given in Table 1. Using the data for all four Asca detectors over their full bandpass (0.6 – 10 keV), we get the following best fit (yielding $`\chi ^2`$ of 737 for 778 PHA channels): energy power law index $`\alpha =0.94\pm 0.04`$, and equivalent hydrogen column density of absorbing gas $`N_{\mathrm{H},\mathrm{X}\mathrm{ray}}=2.7\pm 0.2\times 10^{21}`$ cm<sup>-2</sup> (all errors are $`90\%`$ confidence regions, meaning that they are determined from the values of fitted parameters at $`\chi _{\mathrm{min}}^2+2.7`$). While this fit is statistically acceptable, the value of the absorbing column in this simple absorbed power law model may be inconsistent with the Galactic value. Since BL Lacertae is located relatively close to the Galactic plane, it is necessary to consider the possible contribution from the Galactic molecular gas in addition to that associated with the usual 21 cm atomic hydrogen measurement. In the case of BL Lacertae, this was detected in emission in <sup>12</sup>CO (Kazes & Crovisier 1981; Bania, Marscher, & Barvainis 1991) and in <sup>13</sup>CO (Crovisier, Kazes, & Brillet 1984), as well as in absorption in <sup>12</sup>CO (Marscher, Bania, & Wang 1991). The absorbing column consists therefore of two components: that associated with neutral hydrogen, inferred from the 21 cm measurements of Dickey et al.(1983) of $`N_{\mathrm{H},21\mathrm{c}\mathrm{m}}`$ of $`1.8\times 10^{21}`$ cm<sup>-2</sup>, and the molecular component. Bania et al.(1991) measure an integrated CO emission $`W_{\mathrm{CO}}`$ of 4.6 K km s<sup>-1</sup>, adopt the ratio $`N_{\mathrm{H},\mathrm{mol}}`$ / $`W_{CO}`$ of $`6\times 10^{20}`$ K<sup>-1</sup> km<sup>-1</sup> s cm<sup>-2</sup>, and infer that the column of the molecular component corresponds to $`N_{\mathrm{H},\mathrm{mol}}`$ of $`2.8\times 10^{21}`$ cm<sup>-2</sup>, yielding the total column $`N_{\mathrm{H},\mathrm{tot}}`$ of $`4.6\times 10^{21}`$ cm<sup>-2</sup>, significantly larger than $`N_{\mathrm{H},\mathrm{X}\mathrm{ray}}`$. There are two possible reasons for this discrepancy. Regarding the CO measurements, the conversion of $`W_{CO}`$ to $`N_{\mathrm{H},\mathrm{mol}}`$ may be unreliable (and direction-dependent); de Vries, Heithausen, & Thaddeus (1987) suggest that towards Ursa Major, the conversion of $`N_{\mathrm{H},\mathrm{mol}}`$ / $`W_{CO}`$ of $`1\pm 0.6\times 10^{20}`$ K<sup>-1</sup> km<sup>-1</sup> s cm<sup>-2</sup> may be more appropriate. When applied to the case of BL Lacertae, this would yield $`N_{\mathrm{H},\mathrm{mol}}`$ of $`0.5\times 10^{21}`$ cm<sup>-2</sup>, and $`N_{\mathrm{H},\mathrm{tot}}`$ of $`2.3\times 10^{21}`$ cm<sup>-2</sup>, which is now less than $`N_{\mathrm{H},\mathrm{X}\mathrm{ray}}`$. 
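The column-density bookkeeping above can be verified directly; the sketch below simply redoes the arithmetic with the conversion factors quoted from the cited works, and reproduces the totals given in the text.

```python
# Total absorbing column toward BL Lacertae: atomic (21 cm) plus molecular
# (from the integrated CO emission W_CO), for two published conversion
# factors N_H,mol / W_CO. Units: cm^-2 and K km/s.
NH_ATOMIC = 1.8e21        # Dickey et al. (1983)
W_CO = 4.6                # Bania et al. (1991)

for x_conv, label in [(6.0e20, "Bania et al. (1991)"),
                      (1.0e20, "de Vries et al. (1987), Ursa Major")]:
    nh_mol = x_conv * W_CO
    print(f"{label}: N_H,mol = {nh_mol:.2e}, "
          f"N_H,tot = {NH_ATOMIC + nh_mol:.2e} cm^-2")
# -> 4.6e21 and 2.3e21 cm^-2, bracketing the X-ray value of 2.7e21;
#    an intermediate conversion factor would reconcile the two (see below).
```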
If the value of $`N_{\mathrm{H},21\mathrm{c}\mathrm{m}}`$ is indeed accurate, one could obtain an agreement between $`N_{\mathrm{H},\mathrm{X}\mathrm{ray}}`$ and $`N_{\mathrm{H},\mathrm{tot}}`$ using the conversion $`N_{\mathrm{H},\mathrm{mol}}`$ / $`W_{CO}`$ of $`2\times 10^{20}`$ K<sup>-1</sup> km<sup>-1</sup> s cm<sup>-2</sup>, which is probably acceptable. Alternatively, if the $`N_{\mathrm{H},\mathrm{tot}}`$ of $`4.6\times 10^{21}`$ derived by Bania et al.(1991) is indeed correct, this would imply that our simple power law model for the X–ray spectrum emitted by BL Lacertae is incorrect. This would then require that the intrinsic spectrum hardens towards higher energies. Such a gradually hardening spectrum is entirely possible, given the fact that the 1997 RXTE observation – with the bandpass extending beyond Asca’s – implies an even harder index than that in the simple power law model applied to the Asca data. As an alternative, we thus adopt a model consisting of a sum of two power laws, both absorbed by the fixed column suggested by Bania et al.(1991) of $`4.6\times 10^{21}`$ cm<sup>-2</sup>. (We note here that a commonly used broken power law model for a description of such a hardening (concave) spectrum is unphysical, and we do not consider it here.) In this case, we obtain the “soft” power law index (dominating below $`\sim 1`$ keV) of $`3.4\pm 0.7`$ and the “hard” index (dominating above $`\sim 1`$ keV) of $`0.88_{-0.14}^{+0.09}`$. This yields $`\chi ^2`$ of 736 for 778 PHA channels. We conclude that we cannot distinguish between the two models purely on the statistical basis. Nonetheless, the double power law model, if correct, may be attractive, suggesting that the Asca data reveal simultaneously the soft component – presumably the “tail” of the synchrotron emission – and the hard component, presumably the onset of the Compton component. However, we stress that this is not a unique representation of the Asca data, and more precise X–ray observations (with an instrument of better spectral resolution) are required to measure the absorbing column accurately via detection of the individual absorption edges. In any case, the observed 2 – 10 keV flux of BL Lacertae during the 1995 November observation for either of the above models is $`9\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, with a 10% nominal error. In the 0.5 – 2 keV band (useful for a comparison with the ROSAT data), it is $`3.6\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, again with a 10% nominal error. ### 2.2 ROSAT PSPC Observations and Spectral Fitting ROSAT observed BL Lacertae with the PSPC starting on December 22, 1992, in two short pointings about two days apart. The data were reduced using standard ROSAT PSPC procedures via the ftool xselect v. 1.4. The source PHA file was extracted from the event data using a circular extraction region of $`3^{\prime }`$ in diameter, while the background was extracted from an annular region with the inner and outer diameters of $`10^{\prime }`$ and $`20^{\prime }`$, respectively. The data were fitted using the detector response matrix pspcb\_gain2\_256.rmf and telescope effective area file prepared using the ftool pcarf v. 2.1.1. The resulting counting rate was $`0.16\pm 0.01`$ count s<sup>-1</sup>, with no measurable variability apparent in the data. A 2 ks PSPC observation of a relatively faint, strongly absorbed source such as BL Lacertae, taken with an instrument with a relatively soft X–ray response, yields data with only moderate statistical quality.
Nonetheless, an interesting conclusion can be drawn by comparing these data against the Asca and the 21 cm / <sup>12</sup>CO data sets. An absorbed power law model of the form as above yields an acceptable fit ($`\chi ^2`$ is 30.4 for 40 PHA bins) with $`\alpha =4.1(-1.6,+2.4)`$ and $`N_H=7\pm 3\times 10^{21}`$ cm<sup>-2</sup>; however, this best-fit value of the absorbing column is significantly higher than that inferred from the 21 cm / <sup>12</sup>CO or the Asca data. The most likely explanation is that the underlying continuum is more complex, curving downward. Independently of the severe conflict with the absorption measured by other instruments, such a steep power law cannot extend to lower energies indefinitely, and thus it must have a convex shape. Instead, such a shape may be approximated as a broken power law (which can be a physically realistic approximation of a spectrum; see, e.g., the comment after Eq. 20), or an exponentially cutoff power law. Again, the quality of the data is modest, but to shed some light on the possible shape of the X–ray spectrum of BL Lacertae above $`\sim 1`$ keV in a very low state, we assumed that the absorbing column is indeed $`2.7\times 10^{21}`$ cm<sup>-2</sup> and that the index below the break is the same as observed in Asca, $`\alpha _{lo}=0.9`$. With this, we infer that for a broken power law model, the break energy $`E_b`$ is $`1.0<E_b<1.6`$ keV, and the index above the break is $`\alpha _{hi}>2.2`$ ($`\chi ^2`$ is 29.3). For an exponentially cutoff power law model with an underlying index $`\alpha =0.9`$, we infer the e-folding energy $`E_c`$ to be $`1.0<E_c<2.3`$ keV, with $`\chi ^2`$ of 34.8. Therefore a broken power law yields a better fit. The fits are summarized in Table 1; regardless of the model, the 0.5 – 2 keV flux is $`1.5\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, with a nominal 10% error; this is roughly a half of the Asca flux in the same bandpass. In summary, when BL Lacertae was relatively faint, its intrinsic spectrum showed convex curvature and was well described by a phenomenological model of a broken power law. ### 2.3 RXTE Observations and Spectral Fitting RXTE observations consisted of seven pointings over six days, listed in Table 2; each pointing covered one or two orbits, interrupted only by the Earth occultations. These observations were scheduled during low-background orbits, i.e. those relatively free of passages through the South Atlantic Anomaly (SAA). The PCA instrument (Jahoda et al.1996) consists of 5 individual passively collimated, co-aligned, gas-filled proportional counter X–ray detectors, sensitive over the bandpass of 2 – 60 keV, each having an open area of $`\sim `$ 1300 cm<sup>2</sup>. The field of view of the instrument has roughly a triangular response, with FWHM of $`1^\mathrm{o}`$. The HEXTE instrument (Rothschild et al.1998) consists of two detector clusters, each having four NaI / CsI scintillation counters, sensitive over the bandpass of 15 – 250 keV, with the total effective area of $`\sim `$ 1600 cm<sup>2</sup>, and a field of view also of $`1^\mathrm{o}\times 1^\mathrm{o}`$. For the PCA data, only three or four of the five detectors were operating. For maximum consistency among all observations, we used only the three detectors (known as PCU 0, 1, and 2) which were turned on for all the observations.
The selection criteria were: the elevation angle over the Earth limb greater than $`10^\mathrm{o}`$, and pointing direction less than $`0.02^\mathrm{o}`$ away from the nominal position of the source at RA(2000) $`=22^\mathrm{h}02^\mathrm{m}43^\mathrm{s}.2`$, Dec(2000) = $`+42^\mathrm{o}16^{\prime }40^{\prime \prime }`$. #### 2.3.1 PCA data For a source as faint as BL Lacertae, more than half of the counts collected in the PCA are due to unrejected instrumental and cosmic X–ray background. To maximize the signal-to-noise ratio, we used only the top layer PCA data from the “Standard 2” mode. The PCA background, instrumental plus cosmic, has been modeled from observations of blank (i.e. not near known X–ray sources) high latitude ($`|b|>30^\mathrm{o}`$) sky. The raw counting rate varies with satellite latitude (i.e. with a period about half of the 96 minute orbital period) and with activation induced by the South Atlantic Anomaly (SAA). At least three time constants are present in the unrejected background ($`\sim `$ 20 minutes, $`\sim `$ 4 hours, and $`\sim `$ 4 days). The latitude variation is primarily due to the instantaneous particle environment while the activation is primarily due to the recent history of passages through the SAA. The background is parameterized by the so-called “L7” rate, which is derived from the two-fold coincidence Lower Level Discriminator (LLD) event rates present in the Standard 2 data (Jahoda et al.1996). In particular, “L7” consists of the instantaneous sum of signals derived from coincident “events” on anodes L1+R1, L2+R2, L1+L2, R1+R2, R3+L3, R2+R3, L2+L3 (Zhang et al.1993). This rate tracks the particle-induced background as well as the short and long time constants. To this model, a time dependent term is added. The rate measured by the HEXTE particle monitor (the only detector onboard RXTE which operates during the SAA passages) is integrated through each SAA pass, and a term proportional to the sum of recent SAA rates $`\times e^{-(tt_{saa})/\tau }`$ is included. The spectral shape of this activation term is assumed to be constant, and the amplitude is determined by comparison of the observed total background in orbits just following SAA passages with the observed background in orbits far from SAA passages. The distribution of residuals to a background-subtracted count rate from a single blank sky region suggests a residual 1 $`\sigma `$ systematic error of 0.15 count s<sup>-1</sup> (3 PCUs, 2 - 10 keV) (see Jahoda et al.1999, in preparation). The resulting background-subtracted count rates for the 7 observations are shown in Table 2 and Fig. 1. Visual inspection indicates that the source varies significantly, and that the variability is undersampled; we can only state that the variability time scale is a day or less. Since the data for the background estimation were collected from blank sky observations, the average Cosmic X–ray Background (CXB) is included in the background estimate, by construction. The contribution of the CXB flux to the total observed flux in the PCA data can be scaled from the measurement given by Marshall et al.(1980) of 3.2 keV cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> keV<sup>-1</sup> at 10 keV, with a spectral shape well-approximated as a thermal bremsstrahlung with $`kT=40`$ keV. Since the solid angle of the PCA collimator is $`3.2\times 10^{-4}`$ sr (Jahoda et al.1996), the 2 - 10 keV flux is $`1.7\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>.
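This CXB number can be checked with a quick integral. The sketch below approximates the bremsstrahlung shape as a pure exponential normalised to the Marshall et al.(1980) value at 10 keV; the published spectrum includes an additional Gaunt-factor power-law term, so the result only roughly matches the quoted flux.

```python
import numpy as np
from scipy.integrate import quad

KT = 40.0            # bremsstrahlung temperature [keV]
OMEGA = 3.2e-4       # PCA collimator solid angle [sr]
KEV2ERG = 1.602e-9

# Energy flux density S(E) [keV cm^-2 s^-1 sr^-1 keV^-1], crudely modeled
# as an exponential pinned to 3.2 at 10 keV.
norm = 3.2 / np.exp(-10.0 / KT)
s = lambda e: norm * np.exp(-e / KT)

flux_kev, _ = quad(s, 2.0, 10.0)         # keV cm^-2 s^-1 sr^-1
flux_cgs = flux_kev * OMEGA * KEV2ERG    # erg cm^-2 s^-1 through the beam
print(f"CXB (2-10 keV) through the PCA beam: {flux_cgs:.2e} erg/cm2/s")
# ~1.5e-11, close to the 1.7e-11 quoted from the exact spectral shape
```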
While it is possible to estimate the contribution of the mean CXB flux, the CXB is not uniform from all directions in the sky. The fluctuations in the CXB on the solid angle scale of the PCA collimator can be estimated by scaling the HEAO-1 A2 fluctuations by the square root of the ratio of the solid angles of the A2 and PCA detectors (Shafer 1983; Mushotzky & Jahoda 1992, Fig. 1, with the correction that the mean counting rate should be 3.5 count s<sup>-1</sup>, and not 5.6 count s<sup>-1</sup> as stated in the caption). The 1 $`\sigma `$ CXB fluctuation is about $`10\%`$ of the CXB contribution, or $`\sim `$ 0.2 count s<sup>-1</sup> per PCA detector (2 – 10 keV). While the variability can be reliably measured at much smaller levels, this represents a limiting uncertainty in the determination of the absolute flux for observations using the PCA alone. #### 2.3.2 HEXTE data The HEXTE background is continuously measured, as each cluster alternately rocks to one of two off-source pointings (on alternate 16 second intervals for this observation). This gives an effectively simultaneous background measurement at four positions around the source. This background is dominated by internal activation (Gruber et al.1996). A significant instrument deadtime (15 – 40%), however, introduces some uncertainty in the absolute flux level. This deadtime is largely particle-induced and is therefore significant even for faint sources. A deadtime correction factor is calculated from the charged particle rates to bring the exposure to within a few percent of its actual value (based on 16 s timescale observations of the Crab). The resulting residuals for the background estimation using this procedure are 1% of background or less (Rothschild et al.1998). However, while BL Lacertae was detected with HEXTE at $`\sim 0.5`$ net count s<sup>-1</sup>, each data segment had too low a signal-to-noise ratio to measure the spectrum, and we could only do so for the summed data (cf. Table 2). #### 2.3.3 Spectral fitting of the RXTE data We fit the PHA spectrum from each observation to a model including a simple power law which is photoelectrically absorbed at low energies by neutral gas with Solar abundances and with cross-sections given by Morrison & McCammon (1983), with a fixed column density of $`2.7\times 10^{21}`$ cm<sup>-2</sup> as determined from the Asca observations. (We note that adopting a column of $`4.6\times 10^{21}`$ cm<sup>-2</sup> does not change our conclusions significantly.) In our fits, we used the instrumental response matrix generated via ftool pcarmf v. 3.5, which corrects for the slight PCA gain drift ($`\sim `$ 1% over 2 years), and the energy-to-channel conversion table pca\_e2c\_e03v04.fits, as appropriate for the observation epoch of BL Lacertae. The results in Table 2 indicate that the data for each pointing are well-described by the absorbed power law model, but we do observe spectral variability from one pointing to another, with the energy power law index $`\alpha `$ varying from $`0.42\pm 0.08`$ to $`0.80\pm 0.06`$, with no clear correlation of the index with the flux level. To determine the average spectral shape to the highest possible energies (where additional information is gained from the HEXTE data), we also co-added all data. For simultaneous analysis of PCA and HEXTE data, there is some uncertainty in the relative normalization which must be taken into account.
This difference is not determined uniquely as yet, but in general, the PCA agrees more closely with previous data (namely OSSE and Ginga) and is therefore taken as the baseline against which the HEXTE data are normalized. This factor is generally $`\sim 0.7`$ for the fits using the effective areas released in ftools v. 4.1. The addition of the HEXTE data does not change the resulting spectral fit parameters, but implies that the hard X–ray spectrum observed in PCA extends to higher energies ($`\sim 70`$ keV); the summed PCA and HEXTE spectra are plotted in Fig. 2. The best-fit spectral model gives a mean 2 – 10 keV flux of $`2.6\pm 0.3\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. This value has an additional associated uncertainty of $`0.2\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup> from the fluctuations of the CXB. ## 3 DISCUSSION The RXTE observations of BL Lacertae conducted in July 1997 show that the source was bright and variable in the X–ray band, with the X–ray spectrum significantly harder than observed during the periods of lower brightness and activity; the mean 2 – 10 keV flux was $`2.6\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, about 3 times greater than that observed by Asca in November 1995. A direct comparison with the ROSAT PSPC observation in December 1992 is not possible, but the 0.5 – 2 keV flux in 1992 was about half of that in Nov. 1995, so the mean July 1997 flux must have been at least 6 times greater than in December 1992. BL Lacertae was also observed by Ginga in 1988 (Kawai et al.1991). The June 15 1988 observation implies a 2 – 10 keV flux of $`7.6\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> with $`\alpha `$ of $`0.71\pm 0.07`$, while for the July 17 1988 observation, the 2 – 10 keV flux was $`5.5\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> with $`\alpha `$ of $`1.16\pm 0.24`$. In general, the X–ray spectrum of BL Lacertae appears to be harder when the source is brighter. This is illustrated in Fig. 3, where we plot the unfolded summed PCA spectrum together with the Asca GIS and ROSAT PSPC spectra. The analysis of the Asca data by Kubo et al.(1998), as well as many papers published earlier (cf. Sambruna et al.1996; Worrall & Wilkes 1990), implies that HBL-type blazars generally have relatively soft X–ray ($`\alpha >1`$) spectra, while QHBs show harder spectra, with $`\alpha <1`$. The spectra of LBLs, the class of objects to which BL Lacertae belongs, are intermediate. For BL Lacertae, the shape of the spectrum is related to the state of the source. Interestingly, the X–ray spectral characteristics of BL Lacertae appear to be “HBL-like” when the emission lines are weak or absent, and “QHB-like” when the emission lines are strong; an intriguing possibility (although by no means certain, depending on the details of the Galactic absorption; cf. Section 2.1) is that during the intermediate state of the source, shortly after the emergence of the emission lines, the spectrum simultaneously consisted of both the LBL-like (hard) and HBL-like (soft) components, with comparable fluxes at $`\sim 1`$ keV. The observations of blazars imply that the entire continuum (including both the low- and high-energy components mentioned in Sec. 1) arises in a relativistic jet. The polarization and the local power-law shape of the spectrum of the low energy component suggest that the process responsible for emission in this spectral region is synchrotron radiation by highly relativistic electrons, accelerated by an as yet unknown mechanism.
For the high energy peak, the best current model is Compton-upscattering of soft photons by the same electrons that produce the synchrotron radiation via interaction with the magnetic field. The origin of those soft photons is under debate: they can be either synchrotron photons internal to the jet, as in synchrotron self-Compton models (cf. Blandford & Königl 1979; Königl 1981; Ghisellini & Maraschi 1989), or they can be external to the jet, either from the accretion disk (Dermer, Schlickeiser, & Mastichiadis 1992), or else from broad emission line clouds and/or intercloud material (Sikora, Begelman, & Rees 1994). Perhaps the best current picture has the former mechanism dominating the $`\gamma `$–ray emission in HBL-type blazars, which usually do not show broad emission lines (but see, e.g., Padovani et al.1998), while the latter dominates in QHBs, blazars associated with quasars (cf. Madejski et al.1997). In this context, the soft X–ray spectra of HBLs are the “tails” of the synchrotron component (and thus produced by the most energetic electrons), while the hard spectra of QHBs are emitted via the Compton process by the lower energy electrons. The overall broad-band spectrum of BL Lacertae, including the July 1997 data, is plotted in Fig. 4. The two peaks generally present in blazar spectra are apparent in the plot. However, the fact that the $`\gamma `$–ray spectrum is hard – with a spectral index which is lower (harder) than the two-point spectral index between the hard X–rays and the beginning of the EGRET spectrum – and that it lies below the extrapolation of the X–ray power-law spectrum, suggests that the total high energy spectrum consists of two separate components. Below, we follow the suggestion presented by us at the November 1997 HEAD meeting that the high energy spectrum of this source as measured in July 1997 actually consists of two components, one radiated via the synchrotron self-Compton process, dominating in the X–ray band, and another radiated via external-radiation-Compton, dominating in the GeV $`\gamma `$–ray band, and in the context of such a scenario, we estimate the physical parameters of the radiating plasma. ## 4 THEORETICAL MODELS In our study of the radiative processes operating in the jet of BL Lacertae, we use the instantaneous spectrum averaged over the available July 1997 data that are simultaneous with the EGRET observation (Fig. 4). The optical data (Bloom et al.1997) show that the July 1997 high state of BL Lacertae is a superposition of many flares (and probably the same is true for other spectral bands). We thus interpret these flares as a result of the formation of relativistic shocks, which, within a given distance range in a jet, effectively accelerate relativistic particles. We assume that the transverse size of these shocks is $$a\simeq c\mathrm{\Delta }t\mathrm{\Gamma },$$ (1) where $`\mathrm{\Delta }t`$ is the observed time scale of the flare and $`\mathrm{\Gamma }`$ is the bulk Lorentz factor of the radiating matter. We investigate two models, the SSC (synchrotron-self-Compton), where the GeV radiation results from Comptonization of the intrinsic synchrotron radiation, and the ERC (external-radiation-Compton), where the GeV radiation is produced by Comptonization of the broad emission line light.
The models are specified by adopting the following parameters describing the activity of BL Lac in July 1997:

- location of the peak of the low-energy (synchrotron) component: $`h\nu _S\simeq 1`$ eV;
- location of the peak of the high-energy (Compton) component: $`h\nu _C\simeq 10`$ GeV;
- synchrotron luminosity: $`L_S\simeq 2\times 10^{45}`$ erg s<sup>-1</sup>;
- Compton luminosity: $`L_C\simeq 8\times 10^{45}`$ erg s<sup>-1</sup>;
- time scale of a flare: $`\mathrm{\Delta }t\simeq 8`$ hrs;
- typical energies corresponding to broad emission line frequencies: $`h\nu _L\simeq 10`$ eV;
- energy spectral index in the X–ray band: $`\alpha _X\simeq 0.5`$.

To determine the parameters of the ERC model, we also need to know the luminosity of the external radiation. We derive it by using the measurements of the $`H_\alpha `$ line in June 1995 (Corbett et al.1996) and by assuming that the line intensity ratios in BL Lacertae are the same as those in quasars. Using the “line-bolometric” correction (Celotti, Padovani & Ghisellini 1997), we find $`L_{BEL}\simeq 4\times 10^{42}`$ erg s<sup>-1</sup> and adopt it for the July 1997 flare. ### 4.1 Synchrotron Self-Compton In the SSC model, $`\nu _C=\nu _{SSC}`$, and we have $$h\nu _S\simeq \gamma _m^{\prime 2}(B^{\prime }/B_{cr})\mathrm{\Gamma }m_ec^2$$ (2) and $$h\nu _C\simeq h\nu _S\gamma _m^{\prime 2},$$ (3) where $`B^{\prime }`$ is the intensity of the magnetic field, $`B_{cr}\equiv 2\pi m_e^2c^3/he\simeq 4.4\times 10^{13}`$ Gauss, and $`\gamma _m^{\prime }`$ is the Lorentz factor at which the energy distribution of electrons has a high energy break/cutoff. All primed quantities are measured in the comoving frame of the flow in the active region. Assuming that the observer is located at an angle $`\theta _{obs}\simeq 1/\mathrm{\Gamma }`$ from the jet axis and noting that $`L_S\simeq \mathrm{\Gamma }^4L_S^{\prime }`$, we find that the ratio of the SSC peak luminosity to the synchrotron luminosity is given by (Sikora et al.1994; Ghisellini, Maraschi, & Dondi 1996) $$\frac{L_C}{L_S}=\frac{u_S^{\prime }}{u_B^{\prime }}\simeq \frac{L_S^{\prime }}{4\pi a^2c}\frac{8\pi }{B^{\prime 2}}\simeq \frac{2L_S}{c^3\mathrm{\Delta }t^2\mathrm{\Gamma }^6B^{\prime 2}}.$$ (4) Now, combining eqs. (1) – (4), and substituting the values adopted by us for BL Lacertae, we find $$\mathrm{\Gamma }\simeq \left(\frac{L_S^2}{L_C}\frac{\nu _C^2}{\nu _S^4}\frac{1}{\mathrm{\Delta }t^2}\frac{2m_e^2c}{h^2B_{cr}^2}\right)^{1/4}\simeq 100,$$ (5) $$B^{\prime }=\frac{1}{\mathrm{\Gamma }}\frac{\nu _S^2}{\nu _C}\frac{hB_{cr}}{m_ec^2}\simeq 10^{-4}\mathrm{Gauss},$$ (6) and $$\gamma _m^{\prime }=\sqrt{\frac{\nu _C}{\nu _S}}\simeq 10^5.$$ (7) For such model parameters, the low energy spectral break predicted to arise due to synchrotron self-absorption should be located at a frequency $`\nu _a`$ given by $$\nu _a\simeq 1.7\times 10^2B^{\prime 1/7}(L_\nu \nu )_{\nu =\nu _a}^{2/7}\mathrm{\Delta }t^{-4/7}\mathrm{\Gamma }^{-5/7}\simeq 1.3\times 10^{10}\mathrm{Hz}.$$ (8) This is lower than the observed value by at least a factor of 10 – 30 (see, e.g., Bregman et al.1990). It should be emphasized here that because no signature of the high-energy break is observed up to the highest energies covered by EGRET, the value $`h\nu _C=10`$ GeV used here is actually the lowest plausible value of the location of the high energy break, and that for a larger $`h\nu _C`$, the output model parameters become even more extreme.
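The order-of-magnitude chain behind eqs. (2)–(7) is easy to evaluate numerically. The sketch below (illustrative only; cgs/eV bookkeeping, with the $`\simeq `$ signs treated as equalities) recovers the quoted SSC parameters to order of magnitude; exact agreement is not expected.

```python
import numpy as np

M_E_C2_EV, B_CR, C = 5.11e5, 4.4e13, 3.0e10   # eV, Gauss, cm/s
L_S, L_C = 2e45, 8e45                         # erg/s
E_S, E_C = 1.0, 1e10                          # h*nu at the two peaks [eV]
DT = 8 * 3600.0                               # flare time scale [s]

gamma_m = np.sqrt(E_C / E_S)                  # eq. (3)
# eq. (2): B'*Gamma = B_cr * h nu_S / (gamma_m'^2 m_e c^2), Gamma-independent
K = B_CR * E_S / (gamma_m**2 * M_E_C2_EV)     # = B' * Gamma [G]
# eq. (4): L_C/L_S = 2 L_S / (c^3 dt^2 Gamma^6 B'^2), with B' = K/Gamma:
Gamma = (2 * L_S**2 / (L_C * C**3 * DT**2 * K**2)) ** 0.25
B = K / Gamma

print(f"gamma_m' ~ {gamma_m:.0e}, Gamma ~ {Gamma:.0f}, B' ~ {B:.0e} G")
# -> gamma_m' ~ 1e5, Gamma of order 10^2, B' of order 1e-4 G
```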
### 4.2 External Radiation-Compton The discovery of broad emission lines in BL Lacertae (Vermeulen et al.1995; Corbett et al.1996) indicates that in this object, just as in the case of quasars, the subparsec jet is embedded in a diffuse radiation field. The energy density of this field as measured in the comoving frame of the blob is thus amplified by a factor $`\mathrm{\Gamma }^2`$. The rate of the electron energy losses due to Comptonization of this radiation can then be approximated by (Sikora et al.1994) $$\frac{d\gamma ^{\prime }}{dt^{\prime }}\simeq \frac{c\sigma _Tu_{ext}^{\prime }}{m_ec^2}\gamma ^{\prime 2},$$ (9) where $$u_{ext}^{\prime }\simeq \frac{L_{BEL}\mathrm{\Gamma }^2}{4\pi r^2c},$$ (10) and $`r`$ is the distance of an active region from the central object (black hole). For an opening half-angle of a jet $`\theta _j\simeq a/r\simeq 1/\mathrm{\Gamma }`$, and for an observer located within or close to the jet cone, the ratio of the ERC peak luminosity to the synchrotron peak luminosity is given by (Sikora 1997): $$\frac{L_C}{L_S}\simeq \frac{u_{ext}^{\prime }}{u_B^{\prime }}\simeq \frac{2L_{BEL}\mathrm{\Gamma }^2}{r^2cB^{\prime 2}}\simeq \frac{2L_{BEL}}{c^3\mathrm{\Delta }t^2B^{\prime 2}\mathrm{\Gamma }^2},$$ (11) where now $$\nu _C\simeq (\gamma _m^{\prime }\mathrm{\Gamma })^2\nu _L.$$ (12) Combining equations (2), (11), and (12), we obtain $$\mathrm{\Gamma }\simeq \left(\frac{2L_SL_{BEL}}{L_C}\right)^{1/4}\left(\frac{\nu _C}{\nu _L\nu _S}\frac{1}{\mathrm{\Delta }t}\frac{m_ec^{1/2}}{hB_{cr}}\right)^{1/2}\simeq 8,$$ (13) $$B^{\prime }\simeq \frac{\nu _S\nu _L\mathrm{\Gamma }}{\nu _C}\frac{hB_{cr}}{m_ec^2}\simeq 1\mathrm{Gauss},$$ (14) and $$\gamma _m^{\prime }\simeq \frac{1}{\mathrm{\Gamma }}\left(\frac{\nu _C}{\nu _L}\right)^{1/2}\simeq 4\times 10^3.$$ (15) For these parameters, $`\nu _a\simeq 4\times 10^{11}`$ Hz, which is consistent with the observations (Bregman et al.1990). In addition, the relatively low value of $`\mathrm{\Gamma }`$ is consistent with both the superluminal expansion data (cf. Mutel & Phillips 1982) and with the limits derived from the considerations of the compactness of the source as inferred from the variability data via opacity to pair production. ### 4.3 ERC Radiation plus SSC Radiation Provided that both ERC and SSC spectral components are produced in the Thomson regime and that the observer is located within or near the jet cone, the ratio of the peak luminosities of these two components can be approximated by the formula $$\frac{L_{SSC}}{L_{ERC}}\simeq \frac{u_S^{\prime }}{u_{ext}^{\prime }}\simeq \frac{1}{\mathrm{\Gamma }^4}\frac{L_S}{L_{BEL}}.$$ (16) Assuming that the $`\gamma `$–rays are produced by the ERC mechanism, we use $`\mathrm{\Gamma }\simeq 8`$ and $`\gamma _m^{\prime }\simeq 4\times 10^3`$ (see §4.2) in eq. (16) and obtain a luminosity of the SSC radiation of $`L_{SSC}\simeq 0.2L_{ERC}\simeq 0.6\times 10^{46}`$ erg s<sup>-1</sup> and a location of its peak at $`h\nu _{SSC}\simeq \gamma _m^{\prime 2}h\nu _S\simeq 15`$ MeV. The two spectral components, the SSC and the ERC, overlap for $`\mathrm{\Gamma }^2\nu _L<\nu <\nu _{SSC}`$. In this range, $$\frac{L_{SSC\nu }}{L_{ERC\nu }}\simeq \left(\frac{\gamma _{(SSC)}^{\prime }}{\gamma _{(ERC)}^{\prime }}\right)^{2(1-\alpha _X)}\frac{u_S^{\prime }}{u_{ext}^{\prime }}\simeq \left(\frac{h\nu _L}{m_ec^2}\frac{B_{cr}}{B^{\prime }}\frac{\mathrm{\Gamma }}{\gamma _m^{\prime 2}}\right)^{1-\alpha _X}\frac{L_S}{L_{BEL}}\frac{1}{\mathrm{\Gamma }^4}\simeq 10,$$ (17) where $`\gamma _{(ERC)}^{\prime }\simeq \sqrt{\nu /\nu _L}/\mathrm{\Gamma }`$ and $`\gamma _{(SSC)}^{\prime }\simeq \sqrt{\nu /\nu _S}`$ are the energies of the electrons contributing to the radiation at a frequency $`\nu `$ via the ERC and SSC processes, respectively.
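The ERC estimates (13)–(15) can be evaluated the same way. The sketch below follows the physical chain (eqs. 2, 11 and 12) rather than the closed forms, again treating the $`\simeq `$ signs as equalities; it reproduces the quoted values ($`\mathrm{\Gamma }\simeq 8`$, $`B^{\prime }\simeq 1`$ Gauss, $`\gamma _m^{\prime }\simeq 4\times 10^3`$) to within the accuracy of those approximations.

```python
import numpy as np

M_E_C2_EV, B_CR, C = 5.11e5, 4.4e13, 3.0e10   # eV, Gauss, cm/s
L_S, L_C, L_BEL = 2e45, 8e45, 4e42            # erg/s
E_S, E_C, E_L = 1.0, 1e10, 10.0               # h*nu at the peaks [eV]
DT = 8 * 3600.0                               # flare time scale [s]

# eq. (11) with r ~ c dt Gamma^2 fixes the product B'*Gamma:
Q = np.sqrt(2 * L_BEL * L_S / (L_C * C**3 * DT**2))    # = B'*Gamma [G]
# eqs. (2) + (12): B' = (nu_S nu_L Gamma / nu_C) h B_cr / (m_e c^2),
# hence Gamma^2 = Q * (E_C * m_e c^2) / (E_S * E_L * B_cr):
Gamma = np.sqrt(Q * E_C * M_E_C2_EV / (E_S * E_L * B_CR))
B = Q / Gamma
gamma_m = np.sqrt(E_C / E_L) / Gamma                   # eq. (15)

print(f"Gamma ~ {Gamma:.0f}, B' ~ {B:.1f} G, gamma_m' ~ {gamma_m:.0f}")
# -> Gamma ~ 10, B' ~ 0.9 G, gamma_m' ~ 3e3
```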
This combined synchrotron + SSC + ERC model predicts that all three spectral components should have a break due to adiabatic losses of electrons below a certain energy. Since radiative energy losses are dominated by the ERC process, the corresponding break energy of the electron distribution, $`\gamma _b^{\prime }`$, can be found from equating the time scale of the ERC energy losses, $`t_{ERC}\simeq \mathrm{\Gamma }\gamma ^{\prime }/(d\gamma ^{\prime }/dt^{\prime })_{ERC}`$, to the time scale of the propagation of the perturbed flow pattern, $`\mathrm{\Delta }t\mathrm{\Gamma }^2`$. Using eqs. (9) and (10), we obtain $$\gamma _b^{\prime }\simeq \frac{4\pi m_ec^4}{\sigma _T}\frac{\mathrm{\Delta }t\mathrm{\Gamma }}{L_{BEL}}\simeq 10^3.$$ (18) The presence of this break in the electron distribution should result in a change of slope of $`\mathrm{\Delta }\alpha \simeq 0.5`$ (Sikora et al.1994) in all three spectral components. Note that during a flare, the spectrum most likely would vary in time as a result of the effective change of the spectral slope due to the change of the location of this break. In our case, we model the time averaged spectrum; in particular, the adiabatic losses of electrons with $`\gamma ^{\prime }<\gamma _b^{\prime }`$ result in the change of the spectral slope at $`\nu (\gamma _b^{\prime })`$, and the data indeed show $`\mathrm{\Delta }\alpha \simeq 0.5`$. In the synchrotron component the break occurs around 0.2 eV, while in the SSC component it is around 0.75 MeV, and in the ERC spectral component around 0.5 GeV. As calculated from the electron kinetic equation, this break is very smooth, and since $`\gamma _b^{\prime }`$ is very close to the maximum electron energy, the “adiabatic” breaks should join smoothly with the intrinsic high energy breaks. This is illustrated in Fig. 4, where the data for the July 1997 flare are fitted with our ERC+SSC model. As one can see from Fig. 4, the SSC component, calculated self-consistently within the framework of the ERC model, fits the X–ray data reasonably well. We note that such a three-component spectral structure was also inferred by Kubo et al.(1998) for other QHB blazars. Our result is that the high energy ($`>3`$ keV) spectrum of BL Lacertae is more similar to blazars associated with quasars (QHBs) than to HBLs, further strengthening the inference that LBLs are weak-lined quasars, and that HBLs form a somewhat distinct subclass of BL Lac type objects. We also note that in the context of this model, the overall spectrum of BL Lacertae (and, by similarity, that of most other LBL-type blazars) is not expected to extend to the TeV energies. This is because the distribution of the relativistic electrons does not extend to sufficiently high energies to produce TeV $`\gamma `$–rays, while the second-order Comptonization is inefficient because of the drop in the Compton cross-section due to the Klein-Nishina limit. This is in contrast to HBLs, where electron energy distributions extend to a range that can be $`10^2`$ – $`10^3`$ times greater (Kubo et al.1998) than that derived above. ## 5 SUMMARY We summarize as follows: (1) The RXTE observations of BL Lacertae in July 1997, during the flare observed in the ground-based optical and $`\gamma `$–ray EGRET data, imply that the source was bright in the X–ray band. The X–ray spectrum was relatively hard, exhibiting both flux and spectral variability with a mean energy power law index $`\alpha `$ of 0.59. The source showed a peak in its X–ray flux nearly coincident with the peak of the GeV $`\gamma `$–ray flux.
(2) A comparison of the RXTE data to the archival Asca and ROSAT PSPC data implies that the spectrum of BL Lacertae is generally harder when the source is brighter. The Asca data possibly show an additional soft component above the extrapolation of the hard power law to lower energies (but the presence of this soft component is uncertain and subject to the details of the Galactic absorption). This general spectral behavior appears to be associated with an emergence of broad emission lines in BL Lacertae first reported in 1995. (3) The broad-band spectrum of BL Lacertae appears more similar to a blazar associated with a quasar than to the more common HBL-type, “certified lineless” sub-class of blazars, implying an association of the presence of the broad emission lines with the high energy portion of the overall spectrum. This further supports the suggestion of Vermeulen et al.(1995) that BL Lacertae is no longer a BL Lac object; this seems to be true of the high energy portion of its spectrum as well. (4) The broad-band spectrum of BL Lacertae cannot be readily fitted with either the synchrotron self-Compton (SSC) or the External Radiation Compton (ERC) model. However, a hybrid model, where the X–ray radiation arises via the SSC process and the GeV $`\gamma `$–ray radiation is produced via ERC, can fit the data well. From this model, we derive the bulk Lorentz factor of the jet $`\mathrm{\Gamma }\simeq 8`$, magnetic field $`B^{\prime }\simeq 1`$ Gauss, and Lorentz factors $`\gamma _m^{\prime }`$ of the electrons radiating in all three components at the peak of their respective $`\nu \times F(\nu )`$ distributions to be $`\simeq 4\times 10^3`$. We acknowledge the referee, Dr. Marscher, for his comments leading to significant improvements to this paper, and the support from NASA RXTE observing grants to GSFC via Universities Space Research Association (USRA), and Polish KBN grant 2P03D00415.
no-problem/9902/cond-mat9902072.html
ar5iv
text
# Experimental evidence of delocalized states in random dimer superlattices ## Abstract We study the electronic properties of GaAs-AlGaAs superlattices with intentional correlated disorder by means of photoluminescence and vertical dc resistance. The results are compared to those obtained in ordered and uncorrelated disordered superlattices. We report the first experimental evidence that spatial correlations inhibit localization of states in disordered low-dimensional systems, as our previous theoretical calculations suggested, in contrast to the earlier belief that all eigenstates are localized. In recent years, a number of tight-binding and continuous models of disordered one-dimensional (1D) systems have predicted the existence of sets of extended states, in contrast to the earlier belief that all the eigenstates are localized in 1D disordered systems. These systems are characterized by the key ingredient that structural disorder is short-range correlated. Due to the lack of experimental confirmation, there are still some controversies as to the relevance of these results and their implications on physical properties. In this context, some authors have proposed to find physically realizable systems that allow for a clear-cut validation of the above-mentioned purely theoretical prediction. Given that semiconductor superlattices (SL’s) have already been used successfully to observe electron localization due to disorder, these authors have suggested SL’s as ideal candidates for controllable experiments on localization or delocalization and related electronic properties. To the best of our knowledge, up to now there is no experimental verification of this theoretical prediction, owing to the difficulty of building nano-scale materials with intentional and short-range correlated disorder. However, the confirmation of this phenomenon is important both from the fundamental point of view and for the possibility of developing new devices based on these peculiar properties. In this work we present an experimental verification of this phenomenon in semiconductor nano-scale materials, taking advantage of the molecular beam epitaxy growth technique, which allows the fabrication of semiconductor nanostructures with monolayer controlled perfection. We grew several GaAs-Al<sub>0.35</sub>Ga<sub>0.65</sub>As SL’s and we studied their electronic properties by photoluminescence (PL) at low temperature and dc vertical transport in the dark. Indeed PL has been proven to be a good technique to study the electronic properties of disordered SL’s, giving transition energies comparable with theoretical calculations of the electronic levels. The electronic states were calculated using a Kronig-Penney model that has been shown to hold in this range of well and barrier thicknesses, with precise results. This allows the analysis of the experimental transition energies for PL and the assessment of the localization and delocalization properties of the SL’s. The details of the calculations and a schematic view of the conduction-band profiles of the three SL’s can be found in Ref. . The samples are three undoped SL’s grown by molecular beam epitaxy (MBE). All the SL’s have 200 periods and Al<sub>0.35</sub>Ga<sub>0.65</sub>As barriers $`3.2`$ nm thick. In the Ordered-SL all the 200 wells are identical with thickness $`3.2`$ nm (hereafter referred to as A wells). In the Random-SL, $`58`$ A wells are replaced by wells of thickness $`2.6`$ nm (hereafter referred to as B wells), and this replacement is done randomly.
The so-called Random dimer-SL is identical to the Random-SL, with the additional constraint that the B wells appear only in pairs. In the latter sample the disorder exhibits the desired short-range spatial correlations. In each sample, the SL is cladded on each side by $`100`$ nm of n-Al<sub>0.3</sub>Ga<sub>0.7</sub>As, Si doped to $`4\times 10^{18}`$ cm<sup>-3</sup>, with a $`50`$ nm n-GaAs buffer layer (doped to $`4\times 10^{18}`$ cm<sup>-3</sup>) on the substrate and a $`3`$ nm n-GaAs cap layer (doped to $`6\times 10^{18}`$ cm<sup>-3</sup>). We measured X-ray diffraction spectra of the SL’s with a double-crystal diffractometer, in order to check their structural parameters. The diffraction curves at ($`004`$) symmetric reflections for the two disordered samples show satellite peaks of order $`\pm 1`$ lying close to $`\pm 0.8`$ degrees with respect to the GaAs peak. These satellite peaks are located at identical positions for the two disordered SL’s, showing that the random SL’s have identical periods. Therefore, the dimer constraint intentionally introduced during sample growth is the only difference between the Random and Random dimer samples. The PL spectra were taken in the $`11`$ – $`300`$ K temperature range with a closed cycle cryostat, and were excited with $`514.5`$ nm light from an Ar<sup>+</sup>-ion laser (with an excitation intensity of approximately $`0.5`$ W/cm<sup>2</sup>). Photoluminescence was dispersed by a $`0.46`$ m Jobin Yvon monochromator and detected by a cooled photomultiplier using a standard lock-in technique. Figure 1 shows the PL spectra of the three SL’s at $`11`$ K. We observed that the energy of the near-band edge peaks depends on the sample, but the energy shift between them is almost independent of temperature over a wide range, as shown in Fig. 2. The PL peak for the Ordered-SL, which lies at $`1.688`$ eV, is due to recombination between electrons in the conduction-band and heavy-holes in the valence-band. We calculated the miniband structure of this SL with the Kronig-Penney model, using $`\mathrm{\Gamma }`$ effective masses (in units of the free electron mass) $`m_e^{*}=0.067`$ for GaAs and $`m_e^{*}=0.096`$ for Al<sub>0.35</sub>Ga<sub>0.65</sub>As. The expected miniband in the conduction-band lies in the range between $`1.68`$ and $`1.76`$ eV, measured from the (very narrow) heavy hole miniband. This calculation is in good agreement with the experimental PL spectrum, the calculated lower energy of the miniband being very close to the energy at which the PL intensity rises. Figure 3(a) shows a schematic energy diagram of the radiative transitions in the Ordered-SL. Let us now analyze the spectrum obtained for the Random-SL. The PL peak of this sample shifts towards higher energies compared with the other two samples. In the Random-SL the intentional disorder introduced by the random distribution of B wells ($`2.6`$ nm) localizes the electronic states. The calculated energy for the transition between electrons and holes in this case is $`1.72`$ eV, assuming that the exciton binding energy is the same in the three SL’s. This value is again in excellent agreement with the PL peak, as can be seen in Fig. 1. Figure 3(b) shows a schematic diagram of the radiative transitions between localized states in the Random-SL. The PL peak of the Random dimer-SL is at $`1.696`$ eV and, as can be clearly seen in Fig. 1, is red shifted with respect to the PL peak for the Random-SL.
As has been shown by Fujiwara, the red shift of the PL peak in semiconductor SL’s is due to the formation of a miniband through tunnelling of carriers between the GaAs wells. This result strongly supports previous theoretical predictions of the occurrence of a band of extended states in Random dimer-SL’s. We calculated the transmission coefficient for the Random dimer-SL according to Ref. and found that the energy difference between the onset for electron delocalization (that is, the energy at which the transmission suddenly rises, as can be seen in Fig. 4) and the heavy hole miniband is around $`1.70`$ eV, in good agreement with the experimental PL peak energy. Figure 3(c) presents a schematic diagram of the radiative transitions in the Random dimer-SL. Additionally, the PL line-width gives support to these findings. The PL full width at half maximum of the Ordered-SL is $`9.1`$ meV, increasing to $`13.2`$ meV and $`12`$ meV in the Random-SL and Random dimer-SL respectively, indicating that these last two samples reflect their intentional disorder in the optical spectra. To confirm the above interpretation of the PL spectra we have performed additional measurements of the resistance at low temperatures. The results for the temperature dependence of the resistance are shown in Fig. 5. The resistance of the Random dimer-SL is very similar to the resistance of the Ordered-SL for any temperature below $`40`$ K, and the small differences are due to the different miniband widths of the two SL’s (see Fig. 3). On the other hand, the Random-SL shows a much higher resistance in this range of temperatures. This is completely consistent with the above interpretation of the PL spectra, and it is clear evidence of the presence of extended states in the Random dimer-SL, which shows transport properties very similar to those of an Ordered-SL. Moreover, the resistance of the Random-SL still depends on temperature below $`30`$ K, while the resistance of the two other samples reaches a plateau. For low temperatures, transport properties in the presence of true extended states should be independent of temperature and, as can be seen in Fig. 5, this behavior is only observed in the Random dimer-SL and in the Ordered-SL, which is additional evidence of the presence of extended states in these samples. In summary, we have observed that the introduction of short-range correlations in a disordered semiconductor SL inhibits localization and gives rise to extended states, as expected theoretically. The positions of the electronic levels were calculated with the Kronig-Penney model, and the calculations show that the Ordered-SL and the Random dimer-SL exhibit extended electronic states. According to theoretical studies, these extended states in Random dimer-SL’s are not Bloch-like, as they are in Ordered-SL’s. The PL of the Random dimer-SL is red shifted with respect to the PL of the Random-SL, indicating the formation of delocalized extended states. The experimental PL energies are in very good agreement with the calculated electronic states. The temperature dependence of the resistance of the Random dimer-SL is very similar to that of the Ordered-SL. Both SL’s show no temperature dependence below $`30`$ K, as should be expected for transport in the presence of extended states. On the contrary, the resistance of the Random-SL is much higher at all temperatures and shows a temperature dependence, as would be expected for localized states.
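The qualitative contrast between the Random and Random dimer structures can be illustrated with a minimal one-dimensional transfer-matrix sketch such as the one below. It is only indicative: it assumes a single effective mass ($`m_e^{*}=0.067`$) on both sides of each interface, a conduction-band offset of about 0.3 eV for the Al<sub>0.35</sub>Ga<sub>0.65</sub>As barriers, and a simplified (even-start) placement of the dimer pairs; none of these choices reproduces the actual calculations of Ref. .

```python
import numpy as np

HBAR, M0, EV = 1.0546e-34, 9.109e-31, 1.602e-19
MEFF = 0.067 * M0            # single effective mass everywhere (simplification)
V_B = 0.30                   # assumed conduction-band offset [eV]
D_BAR, D_A, D_B = 3.2e-9, 3.2e-9, 2.6e-9   # layer thicknesses [m]

def layer_M(E, V, d):
    """Transfer matrix for (psi, psi') across a constant-potential layer;
    complex k handles both propagating and evanescent cases."""
    k = np.sqrt(2 * MEFF * (E - V) * EV + 0j) / HBAR
    return np.array([[np.cos(k * d), np.sin(k * d) / k],
                     [-k * np.sin(k * d), np.cos(k * d)]])

def transmission(E, wells):
    """T(E) through barrier/well/.../barrier between GaAs leads."""
    M = layer_M(E, V_B, D_BAR)
    for w in wells:
        M = layer_M(E, 0.0, D_A if w == "A" else D_B) @ M
        M = layer_M(E, V_B, D_BAR) @ M
    k0 = np.sqrt(2 * MEFF * E * EV) / HBAR
    t = 2j * k0 / (1j * k0 * (M[0, 0] + M[1, 1]) + k0**2 * M[0, 1] - M[1, 0])
    return abs(t) ** 2

rng = np.random.default_rng(1)

def well_sequence(dimer):
    """200 wells, 58 of them B; dimers placed at even sites for simplicity."""
    wells = ["A"] * 200
    if dimer:
        for s in rng.choice(np.arange(0, 200, 2), size=29, replace=False):
            wells[s] = wells[s + 1] = "B"
    else:
        for s in rng.choice(200, size=58, replace=False):
            wells[s] = "B"
    return wells

energies = np.linspace(0.02, 0.28, 300)
for label in ("random", "dimer"):
    T = np.array([transmission(E, well_sequence(label == "dimer")) for E in energies])
    print(f"{label}: max T = {T.max():.3f} at E = {energies[T.argmax()]:.3f} eV")
```

In such a toy model one expects the dimer sequence to retain a window of high transmission near the B-well resonance that the fully random sequence lacks; the quantitative energies depend entirely on the assumed offset and mass.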
To conclude, we have experimentally validated the existence of extended states in low-dimensional random systems with short-range correlations, where Anderson localization is inhibited. ###### Acknowledgements. Work in Italy has been supported by the INFM Network “Fisica e Tecnologia dei Semiconduttori III-V” and in Madrid by the CAM under Project No. 07N/0034/1998. E. D. and F. D.-A. thank A. Sánchez for collaboration on these topics during these years.
no-problem/9902/astro-ph9902137.html
ar5iv
text
## 1 Introduction

Neutron induced reactions on <sup>37</sup>Ar occur in the weak component of the s-process, where the most relevant neutron energies are situated in the keV range. In order to calculate the Maxwellian averaged cross section in this energy region, the reactions under investigation have to be studied with thermal as well as with resonance neutrons. The thermal value is often used to normalise the higher energy values and, besides, it is needed in the calculation of the Maxwellian averaged cross section as a summation of the thermal and resonance components.

## 2 Sample Preparation

Samples with a well-defined mass are of great importance for reliable reaction cross section measurements, so a lot of effort was put into the preparation and characterisation of suitable samples. A detailed description of the procedure is given in . <sup>37</sup>Ar atoms were produced via the <sup>37</sup>Cl(p,n)<sup>37</sup>Ar reaction by bombarding a NaCl target with 30 MeV protons from a cyclotron of the UCL at Louvain-la-Neuve (Belgium). These atoms were then ionised to the 1<sup>+</sup> state and implanted at 8 keV in a 20 $`\mu `$m thick Al-foil. Different samples were produced, containing $`10^{14}`$ up to $`5\times 10^{15}`$ atoms. These numbers were determined at the IRMM in Geel via the detection of the 2.6 keV K X-rays (emitted in the decay of <sup>37</sup>Ar) with a gas flow proportional counter.

## 3 Measurements with thermal neutrons

The experiments with thermal neutrons were performed at the end of the 87 m curved neutron guide H22 of the High Flux Reactor at the ILL in Grenoble (France). A well-thermalised flux of about $`5\times 10^8`$ neutrons cm<sup>-2</sup> s<sup>-1</sup> was available at the sample position. The <sup>37</sup>Ar samples were mounted in a vacuum chamber together with suitable surface barrier detectors, placed outside the neutron beam. A typical charged particle spectrum obtained during a 62 h neutron irradiation of one of the <sup>37</sup>Ar samples is shown in figure 1. The results of these measurements are summarised in table 1. Comparison of our results with those obtained by Asghar et al. shows that our values for both the (n<sub>th</sub>,$`\alpha _0`$) and (n<sub>th</sub>,p) cross sections are about two times smaller, which indicates that the discrepancy most likely lies in the determination of the number of <sup>37</sup>Ar atoms in the sample or in the determination of the neutron flux.

| Table 1. | | |
| --- | --- | --- |
| Reaction | Q-value (MeV) | Cross section (b) |
| <sup>37</sup>Ar(n<sub>th</sub>,$`\alpha _0`$)<sup>34</sup>S | 4.63 | $`1070\pm 80`$ |
| <sup>37</sup>Ar(n<sub>th</sub>,$`\alpha _1`$)<sup>34</sup>S | 2.50 | $`0.29\pm 0.05`$ |
| <sup>37</sup>Ar(n<sub>th</sub>,$`\gamma \alpha `$)<sup>34</sup>S | | $`\le 6`$ |
| <sup>37</sup>Ar(n<sub>th</sub>,p)<sup>37</sup>Cl | 1.60 | $`37\pm 4`$ |

## 4 Measurements with resonance neutrons

The measurements with resonance neutrons were carried out at a 9 m long flight path of the linear accelerator GELINA of the IRMM in Geel (Belgium), covering a neutron energy range from 10 meV up to 70 keV. The flux determination was done via the well known <sup>10</sup>B(n,$`\alpha `$)<sup>7</sup>Li reaction. An overview of the characteristics of the measurements is given in table 2. In none of the three measuring cycles were the (n,p), (n,$`\gamma \alpha `$) or (n,$`\alpha _1`$) reactions observed, as could be expected from their small thermal values.
| Table 2. | | | | |
| --- | --- | --- | --- | --- |
| linac frequency | detector | energy range | number of <sup>37</sup>Ar atoms | irradiation time |
| 100 Hz | ionisation chamber | 10 meV $`\le `$ E<sub>n</sub> $`\le `$ 1 keV | $`2.15\times 10^{15}`$ | 150 h |
| 800 Hz | ionisation chamber | 1 eV $`\le `$ E<sub>n</sub> $`\le `$ 15 keV | $`1.50\times 10^{15}`$ | 480 h |
| 800 Hz | surface barrier detector | 1 eV $`\le `$ E<sub>n</sub> $`\le `$ 70 keV | $`4.20\times 10^{15}`$ | 100 h |

In the 100 Hz measuring campaign a 1/v shape of the <sup>37</sup>Ar(n,$`\alpha _0`$) cross section could be established (figure 2). A second measuring cycle, with the linac operating at 800 Hz, provided us with cross section data for neutron energies up to 15 keV (figure 2). Two strong resonances were observed at 1.6 keV and at 2.5 keV, with resonance areas of $`(43\pm 9)`$ b keV and $`(33\pm 7)`$ b keV. In a third measuring cycle we used a surface barrier detector mounted in a vacuum chamber and realised good experimental conditions up to 70 keV neutron energy. Here, two smaller resonances were observed at 25 keV and at 40 keV, with resonance areas of the order of 12 b keV and 15 b keV respectively.

## 5 Maxwellian averaged cross section

The determination of the Maxwellian averaged cross section is based on a formula which calculates it as a sum of the 1/v extrapolation of the thermal value and the contributions of the resonances: $$\sigma _{kT}=\sigma _{th}\sqrt{\frac{25.3\times 10^{-6}}{kT}}+\frac{2}{\sqrt{\pi }}\sum _{res}A_{res}\frac{E_{res}}{\left(kT\right)^2}\mathrm{exp}\left(-\frac{E_{res}}{kT}\right).$$ (1) In eq. (1) $`\sigma _{th}`$ is the thermal cross section value in mb, $`kT`$ the stellar temperature in keV, $`E_{res}`$ the resonance energy in keV and $`A_{res}`$ the resonance area in mb keV. Our data result in very large values for the Maxwellian averaged cross section, e.g. 19 b at $`kT=2`$ keV, which is 7 times larger than the theoretically calculated one (figure 3).

## 6 Conclusion

For the first time, neutron induced reactions on <sup>37</sup>Ar have been studied over a neutron energy range from thermal energy up to 70 keV. Measurements with thermal neutrons were performed at the high flux reactor of the ILL, leading to cross section values for the (n,p), (n,$`\alpha _0`$) and (n,$`\alpha _1`$) reactions of $`(37\pm 4)`$ b, $`(1070\pm 80)`$ b and $`(290\pm 50)`$ mb respectively. For the (n,$`\alpha _0`$) reaction, measurements at the neutron spectrometer GELINA of the IRMM gave evidence for a perfect 1/v shape of the cross section in the low energy region and moreover revealed the existence of four resonances in the region up to 70 keV. The obtained resonance parameters combined with the thermal cross section value lead to very large values of the Maxwellian averaged cross section.
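As a quick numerical cross-check of eq. (1), the sketch below evaluates the Maxwellian averaged <sup>37</sup>Ar(n,$`\alpha _0`$) cross section from the thermal value and the four resonance areas quoted above (central values only); with these inputs the sum indeed reproduces the quoted value of about 19 b at kT = 2 keV.

```python
import math

def maxwellian_avg(kT, sigma_th_mb, resonances):
    """Maxwellian averaged cross section (mb) following eq. (1):
    1/v extrapolation of the thermal value plus narrow-resonance terms.
    kT in keV, sigma_th in mb, resonances as (E_res [keV], A_res [mb keV])."""
    sigma = sigma_th_mb * math.sqrt(25.3e-6 / kT)   # thermal point: 25.3 meV = 25.3e-6 keV
    for E_res, A_res in resonances:
        sigma += (2 / math.sqrt(math.pi)) * A_res * (E_res / kT**2) * math.exp(-E_res / kT)
    return sigma

sigma_th = 1070e3   # (n, alpha0) thermal value, 1070 b in mb
res = [(1.6, 43e3), (2.5, 33e3), (25.0, 12e3), (40.0, 15e3)]   # (keV, mb keV)
for kT in (2.0, 8.0, 30.0):
    print(f"kT = {kT:5.1f} keV   <sigma> = {maxwellian_avg(kT, sigma_th, res)/1e3:8.2f} b")
```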
no-problem/9902/quant-ph9902026.html
ar5iv
text
# COMPLETE POSITIVITY AND NEUTRON INTERFEROMETRY

F. Benatti, Dipartimento di Fisica Teorica, Università di Trieste, Strada Costiera 11, 34014 Trieste, Italy, and Istituto Nazionale di Fisica Nucleare, Sezione di Trieste

R. Floreanini, Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, and Dipartimento di Fisica Teorica, Università di Trieste, Strada Costiera 11, 34014 Trieste, Italy

Abstract: We analyze the dynamics of neutron beams in interferometry experiments using quantum dynamical semigroups. We show that these experiments could provide stringent limits on the non-standard, dissipative terms appearing in the extended evolution equations.

An open quantum system can be modeled in general as a small subsystem in interaction with a suitable large environment. Although the global dynamics of the compound system is described by unitary transformations generated by the total hamiltonian, the effective time evolution of the subsystem usually manifests dissipation and irreversibility. This reduced dynamics, obtained by eliminating (i.e. by tracing over) the environment degrees of freedom, turns out to be in general rather complicated. However, under some mild assumptions, which essentially ask for a weak coupling between system and environment, one obtains a subdynamics free from memory effects, which can be realized in terms of linear maps. Furthermore, this set of transformations possesses very basic and fundamental physical properties, like forward-in-time composition (the semigroup property), probability conservation, entropy increase and complete positivity. They form a so-called quantum dynamical semigroup.\[1-3\]

This description is rather universal and can be applied to model a large variety of different physical situations, ranging from the study of quantum statistical systems,\[1-3\] to the analysis of dissipative effects in quantum optics,\[4-6\] to the discussion of the interaction of a microsystem with a macroscopic measuring apparatus.\[7-9\] It has also been proposed as an effective description for phenomena leading to loss of quantum coherence,\[10-14\] induced by quantum gravity effects at Planck’s scale. The basic idea is that space-time should be topologically nontrivial at this scale, manifesting a complicated “foamy” structure; as a consequence, transitions from pure to mixed states could be conceivable (for more recent elaborations, see ). Analyses based on the study of the dynamics of strings also support, from a different point of view, the idea of loss of quantum coherence. In this respect, one can show that, in a rather model-independent way, a non-unitary, dissipative, completely positive subdynamics is the direct result of the weak interaction with an environment constituted by a gas of D0-branes, effectively described by a heat-bath of quanta obeying infinite statistics. These results suggest that it should be possible to observe dissipative, non-standard effects in many quantum systems. However, crude dimensional estimates show that these effects are in general tiny, and therefore unobservable in practice.
Nevertheless, a detailed analysis of the dynamics and decay of the neutral kaon system based on quantum dynamical semigroups shows that the non-standard contributions in the extended time-evolution equations could lead to testable effects.\[19, 20, 21-23\] These non-standard terms can be parametrized by six real phenomenological constants, whose presence modifies the expressions of the usual $`K`$-$`\overline{K}`$ observables, like decay rates and asymmetries. Although the present experimental data are not accurate enough to detect these modifications, the next generation of kaon experiments should be able to put stringent limits on the six non-standard parameters. In this respect, particularly promising are the experiments at $`\varphi `$-factories, where systems of correlated neutral kaons are copiously produced. One should note that the description of the kaon dynamics in terms of completely positive maps is in this case essential in order to obtain a consistent extension of the standard quantum mechanical time-evolution (for a complete discussion, see ).

Another system in which non-standard quantum evolutions based on dynamical semigroups can be studied is a neutron interferometer.\[25-28\] The capability of producing very slow neutron beams at reactors, together with the technological ability to produce and cut macroscopic silicon crystals with high precision, has made possible direct, very accurate tests of various physical phenomena.\[25-30\] In a typical experimental setup, a neutron beam is split into two components which travel along different paths and are subsequently brought together to interfere. The two components pass through a tiny slab of material before interfering; this produces a relative “phase shift” between the two split beams. By varying the relative orientation of the slab across the two beams, one obtains an interference figure. This figure changes under the action of various external phenomena, produced e.g. by the Earth’s gravity and rotation, or by an external magnetic field (the Earth’s magnetic field is usually properly screened); the corresponding theoretically calculated phase shifts induced by these phenomena have all been experimentally checked with high precision by analyzing the modified interference patterns.\[28-30\]

In the following, we shall analyze in detail the dynamics of the neutron beams in such interferometric devices under the hypothesis that the corresponding time-evolution is described by a quantum dynamical semigroup. We shall see that, at least in principle, neutron interferometry experiments could provide very accurate estimates of the non-standard, dissipative terms appearing in the corresponding evolution equations. A preliminary analysis based on recently published data from one of those experiments will also be presented.

States of a quantum system evolving in time can be suitably described by a density matrix $`\rho `$; this is a positive, hermitian operator, i.e. with positive eigenvalues, and constant trace. We shall analyze the evolution of neutrons in an abstract interferometer, where the original monoenergetic beam is split into two components that interfere at the end, giving rise to intensity fringe patterns in two possible exit beams. We can model this generic physical setup by means of a two-dimensional Hilbert space, taking as basis states those corresponding to the two components of the split beam inside the interferometer.
With respect to this basis, the density matrix $`\rho `$ describing the state of our physical system can be written as: $$\rho =\left(\begin{array}{cc}\rho _1& \rho _3\\ \rho _4& \rho _2\end{array}\right),$$ $`(1)`$ where $`\rho _4=\rho _3^{*}`$, and $`{}^{*}`$ signifies complex conjugation.

As explained in the introductory remarks, our analysis is based on the assumption that the evolution in time of the neutrons inside the interferometer is given by a quantum dynamical semigroup, i.e. by a completely positive, one parameter (=time) family of linear maps: $`\rho (0)\rightarrow \rho (t)`$. These maps are generated by equations of the following form: $$\frac{\partial \rho (t)}{\partial t}=-iH\rho (t)+i\rho (t)H+L[\rho (t)].$$ $`(2)`$ The first two terms on the r.h.s. of this equation are the standard quantum mechanical ones. They contain the effective (time-independent) hamiltonian $`H`$, which can be taken to be hermitian, since the fact that the neutrons are unstable can be neglected in interferometry experiments. The third piece $`L[\rho ]`$ is a linear map, whose form is fully determined by the requirement of complete positivity and trace conservation:\[1-3\] $$L[\rho ]=-\frac{1}{2}\sum _j\left(A_j^{\dagger }A_j\rho +\rho A_j^{\dagger }A_j\right)+\sum _jA_j\rho A_j^{\dagger }.$$ $`(3)`$ The operators $`A_j`$ must be such that $`\sum _jA_j^{\dagger }A_j`$ is a well-defined $`2\times 2`$ matrix; further, to assure entropy increase, the $`A_j`$ can be taken to be hermitian. In the absence of $`L[\rho ]`$, pure states (i.e. states of the form $`|\psi \rangle \langle \psi |`$) would be transformed into pure states. Instead, the additional piece in (3) produces dissipation and possible transitions from pure to mixed states.

As already mentioned, equations of the form (2), (3) have been used to describe various phenomena connected with open quantum systems; in particular, they have been applied to analyze the propagation and decay of the neutral kaon system.\[21-24\] Although the basic general idea behind these treatments is that quantum phenomena at Planck’s length produce loss of phase-coherence, it should be stressed that the form (2), (3) of the evolution equation is independent of the microscopic mechanism responsible for the dissipative effects. Indeed, equations (2) and (3) are the result of very basic physical assumptions, like probability conservation, entropy increase and complete positivity, and should therefore be regarded as phenomenological in nature.

Among the just mentioned physical requirements, complete positivity is perhaps the least intuitive. Indeed, it has not been enforced in previous analyses, in favor of the more obvious simple positivity. Simple positivity is in fact enough to guarantee that the eigenvalues of the density matrix $`\rho (t)`$ describing our system remain positive at any time; this is an unavoidable requirement in view of the probabilistic interpretation of $`\rho `$. Complete positivity is a stronger property, in the sense that it assures the positivity of the density matrix describing the states of a larger system, obtained by coupling in a trivial way the system under study with another arbitrary finite-dimensional one. At first, the requirement of complete positivity of (2) seems a mere technical complication. Nevertheless, it turns out to be essential in properly treating correlated systems, like the two neutral kaons coming from the decay of a $`\varphi `$-meson; it assures the absence of unphysical effects, like the appearance of negative probabilities, that could occur for merely simply positive dynamics.
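To make eqs. (2)-(3) concrete, here is a minimal numerical sketch that integrates the master equation for a generic two-level density matrix and verifies the two structural properties just discussed, trace conservation and positivity. The single Lindblad operator chosen below is an arbitrary toy example, not one of the physical parametrizations introduced later in the text.

```python
import numpy as np

def lindblad_rhs(rho, H, A_ops):
    """Right-hand side of eqs. (2)-(3): -i[H, rho] + L[rho]."""
    drho = -1j * (H @ rho - rho @ H)
    for A in A_ops:
        AdA = A.conj().T @ A
        drho += A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA)
    return drho

def evolve(rho0, H, A_ops, t, steps=20000):
    """Fixed-step RK4 integration of the master equation up to time t."""
    rho, dt = rho0.astype(complex), t / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, A_ops)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, A_ops)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, A_ops)
        k4 = lindblad_rhs(rho + dt * k3, H, A_ops)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

omega = 1.0
H = np.diag([+omega, -omega])              # eq. (4) with the constant E dropped
A = [0.05 * np.array([[0.0, 1.0], [1.0, 0.0]])]   # one hermitian Lindblad operator (toy)
rho0 = 0.5 * np.ones((2, 2))               # rho^(1) of eq. (11)
rho_t = evolve(rho0, H, A, t=10.0)
print("trace      =", np.trace(rho_t).real)          # stays 1
print("eigenvalues =", np.linalg.eigvalsh(rho_t))    # stay non-negative
```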
One should also add that the standard unitary quantum mechanical time evolution satisfies this property in a rather trivial way. For these reasons, in analyzing possible non-standard, dissipative effects even in simpler, non-correlated systems, the phenomenological equations (2) and (3) should be used.

In the particular case of neutron interferometers, as for the $`K`$-$`\overline{K}`$ system, a more explicit description of (2) and (3) can be given. In the chosen basis, the effective hamiltonian can be written as: $$H=\left(\begin{array}{cc}E+\omega & 0\\ 0& E-\omega \end{array}\right).$$ $`(4)`$ Indeed, the neutron beams inside the interferometer can be modeled as a two-level system, with $`E`$ the incident (kinetic) neutron energy. The splitting in energy $`2\omega `$ between the two internal beams can be induced by various physical effects. In the following, we shall consider the case of a thin slab of material (e.g. aluminium) inserted transversally to the two split beams. A slight rotation of this slab produces different effective interactions of the neutrons with the slab material in the two internal paths, yielding a non-vanishing $`\omega `$. Using an eikonal approximation, perfectly suitable for describing slow neutrons, one can theoretically compute this energy splitting in terms of the neutron-nuclear scattering parameters for the slab material.\[25-27\] Typically, one finds that $`\omega `$ is of the order of $`10^{-7}`$ eV.

The explicit form of the term $`L[\rho ]`$ in (3) can be most simply given by expanding the $`2\times 2`$ matrix $`\rho `$ in terms of the Pauli matrices $`\sigma _i`$ and the identity $`\sigma _0`$: $`\rho =\rho _\mu \sigma _\mu `$, $`\mu =0,1,2,3`$. In this way, the map $`L[\rho ]`$ can be represented by a symmetric $`4\times 4`$ matrix $`\left[L_{\mu \nu }\right]`$, acting on the column vector with components $`(\rho _0,\rho _1,\rho _2,\rho _3)`$. It can be parametrized by the six real constants $`a`$, $`b`$, $`c`$, $`\alpha `$, $`\beta `$, and $`\gamma `$: $$\left[L_{\mu \nu }\right]=-2\left(\begin{array}{cccc}0& 0& 0& 0\\ 0& a& b& c\\ 0& b& \alpha & \beta \\ 0& c& \beta & \gamma \end{array}\right),$$ $`(5)`$ with $`a`$, $`\alpha `$ and $`\gamma `$ non-negative. These parameters are not all independent; the condition of complete positivity of the time-evolution $`\rho \rightarrow \rho (t)`$ imposes the following inequalities: $$\begin{array}{ll}2R\equiv \alpha +\gamma -a\ge 0,\hfill & RS\ge b^2,\hfill \\ 2S\equiv a+\gamma -\alpha \ge 0,\hfill & RT\ge c^2,\hfill \\ 2T\equiv a+\alpha -\gamma \ge 0,\hfill & ST\ge \beta ^2,\hfill \\ RST\ge 2bc\beta +R\beta ^2+Sc^2+Tb^2.\hfill & \hfill \end{array}$$ $`(6)`$

As already observed, the dissipative correction (5) to the evolution equation (2) should be regarded as phenomenological; it is therefore difficult to give an a priori estimate of the magnitude of the non-standard parameters in (5). However, following the idea that the term $`L[\rho ]`$ originates from quantum effects at Planck’s scale, one expects the values of $`a`$, $`b`$, $`c`$, $`\alpha `$, $`\beta `$ and $`\gamma `$ to be very small, at most of the order of $`m_n^2/m_P\simeq 10^{-19}\mathrm{GeV}`$, where $`m_n`$ is the neutron mass, while $`m_P`$ is the Planck scale. The dissipative contribution $`L[\rho ]`$ in (2) is therefore at least three orders of magnitude smaller than that given by the standard hamiltonian terms: this allows an approximate analysis of the evolution equation (2), in which the term (5) can be treated as a small perturbation.
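A small helper makes the constraints (6) easy to scan; R, S and T below are the combinations defined in eq. (6). The numerical values fed in at the end have the 10<sup>-21</sup> GeV magnitude discussed later in the text and are purely illustrative.

```python
def completely_positive(a, b, c, alpha, beta, gamma):
    """Check the complete-positivity constraints of eq. (6) on the
    parameters of the dissipative matrix (5)."""
    R = 0.5 * (alpha + gamma - a)
    S = 0.5 * (a + gamma - alpha)
    T = 0.5 * (a + alpha - gamma)
    return all([
        R >= 0.0, S >= 0.0, T >= 0.0,
        R * S >= b**2, R * T >= c**2, S * T >= beta**2,
        R * S * T >= 2 * b * c * beta + R * beta**2 + S * c**2 + T * b**2,
    ])

# a = 0 forces gamma = alpha and b = c = beta = 0 (the simplified case used below):
print(completely_positive(0.0, 0.0, 0.0, 0.7e-21, 0.0, 0.7e-21))   # True
print(completely_positive(0.0, 0.0, 0.0, 0.7e-21, 0.0, 0.9e-21))   # False: T < 0
```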
For the considerations that follow, it will be sufficient to stop at first order in the perturbative expansion. For generic initial conditions, the time dependence of the four components of the corresponding solution $`\rho (t)`$ of (2) is explicitly given by: $$\begin{array}{ccc}& \rho _1(t)=(1-\gamma t)\rho _1+\gamma t\rho _2-\frac{C}{\omega }e^{-i\omega t}\mathrm{sin}(\omega t)\rho _3-\frac{C^{*}}{\omega }e^{i\omega t}\mathrm{sin}(\omega t)\rho _4,\hfill & (7a)\hfill \\ & \rho _2(t)=\gamma t\rho _1+(1-\gamma t)\rho _2+\frac{C}{\omega }e^{-i\omega t}\mathrm{sin}(\omega t)\rho _3+\frac{C^{*}}{\omega }e^{i\omega t}\mathrm{sin}(\omega t)\rho _4,\hfill & (7b)\hfill \\ & \rho _3(t)=-\frac{C^{*}}{\omega }e^{-i\omega t}\mathrm{sin}(\omega t)(\rho _1-\rho _2)+(1-At)e^{-2i\omega t}\rho _3+\frac{B}{2\omega }\mathrm{sin}(2\omega t)\rho _4,\hfill & (7c)\hfill \\ & \rho _4(t)=-\frac{C}{\omega }e^{i\omega t}\mathrm{sin}(\omega t)(\rho _1-\rho _2)+\frac{B^{*}}{2\omega }\mathrm{sin}(2\omega t)\rho _3+(1-At)e^{2i\omega t}\rho _4,\hfill & (7d)\hfill \end{array}$$ where the following convenient combinations of the non-standard parameters have been introduced: $$A=\alpha +a,\qquad B=\alpha -a+2ib,\qquad C=c+i\beta .$$ $`(8)`$

Any physical property of the diffracted neutron beams exiting the interferometer can be extracted from the solution (7) for the density matrix $`\rho (t)`$ by computing its trace with suitable hermitian operators. In particular, the observation of the neutron intensity pattern just outside the interferometer corresponds to the computation of the mean value of the following projector operators: $$𝒪_+=\frac{1}{2}\left(\begin{array}{cc}1& e^{i\theta }\\ e^{-i\theta }& 1\end{array}\right),\qquad 𝒪_{-}=\frac{1}{2}\left(\begin{array}{cc}1& e^{i(\theta +\pi )}\\ e^{-i(\theta +\pi )}& 1\end{array}\right),$$ $`(9)`$ which refer to the two possible exit beams in which a neutron can be found, having traveled the whole interferometer; the parameter $`\theta `$ is a phase which depends on the specific experimental setup. Then, the intensity $`I_\pm `$ of the interference figure in the two exit beams is given by: $$I_\pm (t)=\langle 𝒪_\pm \rangle =\mathrm{Tr}[𝒪_\pm \rho (t)].$$ $`(10)`$

In the basis we are using, the initial condition for a neutron entering the interferometer is given by one of the following two density matrices: $$\rho ^{(1)}=\frac{1}{2}\left(\begin{array}{cc}1& 1\\ 1& 1\end{array}\right),\qquad \rho ^{(2)}=\frac{1}{2}\left(\begin{array}{cc}1& -1\\ -1& 1\end{array}\right).$$ $`(11)`$ They correspond to the two possible choices of orientation of the incident neutron beam with respect to the interferometer, and give rise to the same final results; in the following, we shall work with $`\rho ^{(1)}`$. Inserting this initial condition in the time evolution given by (7), from (10) one obtains the following two interference patterns: $$I_\pm (t)=\frac{1}{2}\left\{1\pm \left[e^{-At}\mathrm{cos}\left(\theta +2\omega t\right)+\frac{|B|}{2\omega }\mathrm{sin}(2\omega t)\mathrm{cos}(\theta -\theta _B)\right]\right\},$$ $`(12)`$ where $`|B|`$ and $`\theta _B`$ are the modulus and phase of $`B`$ in (8); this formula holds for times such that $`At\ll 1`$. Since a neutron having traveled fully inside the interferometer can only be detected in one of the two exit beams, particle conservation requires $`I_+(t)+I_{-}(t)=1`$, as is evident from (12). The interference figures described by (12) are those produced by a perfectly monoenergetic neutron incident beam and an ideal interferometer.
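The first-order patterns (12) are easy to explore numerically. The sketch below writes them in terms of the phase φ = 2ωt and the dimensionless products At and Bt (for the flight times and parameter magnitudes quoted later in the text, At is of order 0.1), and checks particle conservation; the specific numbers are illustrative.

```python
import numpy as np

def intensities(phi, theta, At, Bt):
    """Interference patterns of eq. (12), written with phi = 2*omega*t and the
    dimensionless combinations A*t and B*t (Bt complex, Bt = |B|t e^{i theta_B})."""
    osc = (np.exp(-At) * np.cos(theta + phi)
           + abs(Bt) * np.sinc(phi / np.pi) * np.cos(theta - np.angle(Bt)))
    return 0.5 * (1 + osc), 0.5 * (1 - osc)   # I+, I-

phi = np.linspace(0.0, 6 * np.pi, 121)
Ip, Im = intensities(phi, theta=0.1, At=0.14, Bt=0.12 + 0.0j)
print(np.allclose(Ip + Im, 1.0))   # particle conservation, I+ + I- = 1
```

Note that (|B|/2ω) sin(2ωt) = |B|t · sin(φ)/φ, which is why the second term appears through the sinc function; this is also the origin of the sin φ/φ factor in eq. (15) below.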
In practice, the neutron momenta in the incident beam have a finite distribution of magnitudes and directions; furthermore, there are always slight imperfections in the construction of the interferometer, not to mention residual strains in the crystal itself. These effects can only partially be controlled, and produce significant attenuation in the intensity of the interference figures. Detailed calculations based on neutron optics allow precise estimates of the modifications that are needed in the spectra (12) in order to take those effects into account.\[25-27\] In keeping with our phenomenological point of view, we will not use those estimates directly, but rather modify the expressions (12) by introducing suitable unknown parameters. Denoting by $`N_\pm `$ the actual neutron countings at the two exit beams, one generalizes the spectra in (12) as: $$N_\pm (t)=N_\pm ^{(0)}\left\{1\pm 𝒞_\pm \left[e^{-At}\mathrm{cos}\left(\theta +2\omega t\right)+\frac{|B|}{2\omega }\mathrm{sin}(2\omega t)\mathrm{cos}(\theta -\theta _B)\right]\right\}.$$ $`(13)`$ The parameters $`𝒞_\pm `$, the so-called fringe contrast, take into account the previously mentioned intensity attenuation, while $`N_\pm ^{(0)}`$ are just normalization constants. Clearly, the accuracy of the determination of the parameters $`A`$ and $`B`$ from the measured data will increase as the fringe contrast gets closer to one. In actual experiments, one finds that the best values for $`𝒞_\pm `$ are usually around 0.6. Further, note that now particle conservation requires: $$N_+^{(0)}𝒞_+=N_-^{(0)}𝒞_-.$$ $`(14)`$

In order to be able to compare the phenomenological predictions (13) with actual experimental data, further elaboration is required. Although the intensity spectra in (13) are time-dependent, in an interference experiment the paths followed by the neutrons are fixed, so the evolution time $`t`$ is also fixed; it can only be modified by changing the wavelength (i.e. the energy) of the primary neutron beam. It follows that the phase $`\phi \equiv 2\omega t`$ that gives the interference figure can be varied only by changing the energy split $`\omega `$ between the two paths inside the interferometer, i.e. by changing the orientation of the material slab with respect to the neutron beams. Also, since $`t`$ is fixed, it is not possible a priori to extract the fringe contrast parameters $`𝒞_\pm `$ from the $`\phi `$-dependence of the intensities $`N_\pm `$. In other words, in comparing the behaviour in (13) with that given by the experiment, one needs to use the following form for the two intensity patterns: $$N_\pm (\phi )=N_\pm ^{(0)}\left\{1\pm \left[P_\pm \mathrm{cos}\left(\theta +\phi \right)+Q_\pm \frac{\mathrm{sin}\phi }{\phi }\right]\right\},$$ $`(15)`$ where $$P_\pm =𝒞_\pm e^{-At},\qquad Q_\pm =𝒞_\pm |B|t\mathrm{cos}\left(\theta -\theta _B\right).$$ $`(16)`$ A fit to the experimental data will give estimates for the parameters $`N_\pm ^{(0)}`$, $`P_\pm `$, $`Q_\pm `$ and $`\theta `$; at least in principle, this is sufficient to determine the non-standard constants $`A`$ and $`B`$.

We have performed a preliminary $`\chi ^2`$ fit of the formulas in (15) to recent experimental data, published in . In that experiment, a so-called skew-symmetric silicon interferometer and polarized neutrons were used to study the “geometrical phase” of the neutron wavefunction; however, in order to check the apparatus, standard interferometric spectra had also been taken.
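A fit of the form (15) is a four-parameter nonlinear least-squares problem per exit beam. The sketch below carries it out on synthetic Poisson-distributed counts, since the raw counts of the cited experiment are not reproduced here; the "true" values loosely mimic the (17b) beam, and the sign conventions of the ± beams are absorbed into P and Q.

```python
import numpy as np
from scipy.optimize import curve_fit

def pattern(phi, N0, P, Q, theta):
    """Counting rate of eq. (15) for one exit beam (beam signs absorbed into P, Q)."""
    return N0 * (1.0 + P * np.cos(theta + phi) + Q * np.sinc(phi / np.pi))

rng = np.random.default_rng(1)
phi = np.linspace(-4 * np.pi, 4 * np.pi, 80)
true_params = (366.0, 0.46, 0.06, 0.03)        # hypothetical stand-ins
counts = rng.poisson(pattern(phi, *true_params))

popt, pcov = curve_fit(pattern, phi, counts,
                       p0=(counts.mean(), 0.3, 0.0, 0.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
for name, val, err in zip(("N0", "P", "Q", "theta"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name:5s} = {val:8.3f} +/- {err:6.3f}")
```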
In this way, the experimental setup described in is an actual realization of the abstract interferometer discussed so far in deriving (15). Although the results of other recent interferometry experiments are also available, lacking a dedicated experiment we find the presentation of the data in the most suitable for the analysis of the consequences of the evolution equations (2) and (5). The results of our fit can be summarized as follows: $$N_+^{(0)}=942\pm 6,\quad P_+=0.17\pm 0.01,\quad Q_+=0.02\pm 0.02,\quad \theta =0.09\pm 0.05,$$ $`(17a)`$ $$N_-^{(0)}=366\pm 4,\quad P_-=0.46\pm 0.02,\quad Q_-=0.06\pm 0.02,\quad \theta =0.03\pm 0.03.$$ $`(17b)`$

Extracting estimates for the parameters $`A`$ and $`B`$ from these results requires the determination of the fringe contrasts $`𝒞_\pm `$, characterizing the “optical” properties of the neutron interferometer. These constants are in fact functions of the imaginary part of the refraction index of the material from which the interferometer is built. Therefore, the best way to obtain estimates for $`𝒞_\pm `$ is to measure the interference spectra with two neutron beams of wavelength $`\lambda `$ and $`\lambda /2`$, keeping the rest of the experimental setup unchanged. By comparing the corresponding fit estimates for the coefficients $`P_\pm `$ and $`Q_\pm `$, with the help of neutron optics theory, one is then able to extract the values of $`𝒞_\pm `$ at wavelength $`\lambda `$. Lacking a two-wavelength experiment, in the following we shall estimate $`𝒞_\pm `$ directly from the data.

In the standard quantum mechanical case, i.e. for $`A=B=0`$, one can easily obtain the coefficients $`𝒞_\pm `$ from the maximum $`N^{(\mathrm{max})}`$ and the minimum $`N^{(\mathrm{min})}`$ neutron counts of the experimental interference figures. Indeed, from (15) with $`A=B=0`$, one obtains: $$𝒞_\pm =\frac{N_\pm ^{(\mathrm{max})}-N_\pm ^{(\mathrm{min})}}{N_\pm ^{(\mathrm{max})}+N_\pm ^{(\mathrm{min})}}.$$ $`(18)`$ Although this relation is only approximately valid for nonvanishing $`A`$ and $`B`$, in practice one can still use (18) with confidence, since the systematic error that one thus makes in the evaluation of the parameters $`A`$ and $`B`$ can be estimated at the end to be much smaller than the purely experimental uncertainty. Using the experimental data and (18), one obtains: $`𝒞_+=0.19\pm 0.02`$ and $`𝒞_-=0.54\pm 0.03`$. As an independent test of the correctness of this evaluation, one can check that, within the errors, the relation (14) is perfectly satisfied. Note that the value of $`𝒞_+`$ is significantly smaller than one, while that of $`𝒞_-`$ is close to the best figures that can be attained in practice.\[26-28\] This difference in the fringe contrast of the two data samples results in a significantly less accurate determination of the non-standard parameters $`A`$ and $`B`$ from the results in $`(17a)`$ with respect to those in $`(17b)`$. The flight time $`t`$ of the neutrons inside the interferometer can be very accurately determined from the geometric specifications of the silicon interferometer used in , giving $`1/t=5.83\times 10^{-21}\mathrm{GeV}`$.
Further, on general grounds one expects $`\mathrm{Re}(B)`$ and $`\mathrm{Im}(B)`$ to be of the same order of magnitude (for the case of the neutral kaon system, see ); then, as a working assumption, we shall neglect the small phase $`\theta `$ with respect to $`\theta _B`$ in (16), so that only the real part of $`B`$ can be extracted from the estimates of $`Q_\pm `$. Putting everything together, one finally obtains from $`(17a)`$: $`A=(0.71\pm 0.73)\times 10^{-21}\mathrm{GeV}`$ and $`\mathrm{Re}(B)=(0.76\pm 0.49)\times 10^{-21}\mathrm{GeV}`$, while from $`(17b)`$: $`A=(0.84\pm 0.41)\times 10^{-21}\mathrm{GeV}`$ and $`\mathrm{Re}(B)=(0.65\pm 0.24)\times 10^{-21}\mathrm{GeV}`$. These two estimates are compatible, but, as expected, the second one is much more accurate. Alternatively, recalling the definitions (8), one can express the previous results as estimates for the parameters $`a`$ and $`\alpha `$ of (5); using the best values for $`A`$ and $`\mathrm{Re}(B)`$, one finds: $`a=(0.10\pm 0.24)\times 10^{-21}\mathrm{GeV}`$, $`\alpha =(0.74\pm 0.24)\times 10^{-21}\mathrm{GeV}`$. Although these values should be taken as indicative, they seem to suggest a possible nonvanishing value for $`\alpha `$, while $`a`$ is compatible with zero at the present level of accuracy.

If the non-standard parameter $`a`$ actually vanishes, the expression (5) for the extra term $`L[\rho ]`$ in the evolution equation (2) simplifies greatly. Indeed, for $`a=0`$, the inequalities (6) readily imply: $`\gamma =\alpha `$, $`b=c=\beta =0`$. In this case, the evolution equation (2) gives the most simple extension of ordinary quantum mechanics compatible with the condition of complete positivity. In this simplified situation, the combinations $`A`$ and $`B`$ in (8) both become equal to $`\alpha `$, so that the relations (16) are modified as: $$P_\pm =𝒞_\pm e^{-\alpha t}\simeq 𝒞_\pm (1-\alpha t),\qquad Q_\pm =𝒞_\pm \alpha t\mathrm{cos}\theta .$$ $`(19)`$ By eliminating $`\alpha t`$ from these two formulas, one is now able to determine the fringe contrast factors $`𝒞_\pm `$ from the fitted parameters $`P_\pm `$ and $`Q_\pm `$ without further assumptions: $`𝒞_+=0.20\pm 0.02`$, $`𝒞_-=0.52\pm 0.03`$. Note that these values are equal, within errors, to those determined before. Then, from either of the relations in (19), one obtains two determinations of the parameter $`\alpha `$, which combined finally give: $$\alpha =(0.71\pm 0.21)\times 10^{-21}\mathrm{GeV}.$$ $`(20)`$ Although this estimate for $`\alpha `$ points toward a nonvanishing value, roughly of the right order of magnitude for a quantum gravity or “stringy” origin, it should not be regarded as evidence for non-standard, dissipative effects in the dynamics describing neutron interferometry. Rather, it should be considered as a rough evaluation of the sensitivity that present neutron interferometry experiments can reach in testing quantum dynamical time evolutions of the form given in (2) and (5).

In closing, we would like to make a few comments on the existing literature on the subject. In our study of the effects of the environment on the propagation of the neutrons inside the interferometer, we have assumed that the refractive phenomena on the various silicon blades of the device are described by standard neutron optics. This theory is the result of a quantum mechanical analysis of the scattering of the neutron beams by the nuclei in the silicon crystal.
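Inverting eq. (8) for a and α is a two-line computation. The sketch below does it with uncorrelated Gaussian error propagation (an assumption, since the fit covariance between A and Re(B) is not available here) and reproduces the numbers quoted above from the (17b) best values.

```python
import math

def a_alpha(A, sA, ReB, sB):
    """Invert eq. (8): A = alpha + a, Re(B) = alpha - a,
    with uncorrelated-error propagation."""
    a     = 0.5 * (A - ReB)
    alpha = 0.5 * (A + ReB)
    s     = 0.5 * math.hypot(sA, sB)
    return (a, s), (alpha, s)

# Best values from the (17b)-based fit, in units of 1e-21 GeV
(a, sa), (al, sal) = a_alpha(0.84, 0.41, 0.65, 0.24)
print(f"a     = ({a:.2f} +/- {sa:.2f}) x 1e-21 GeV")
print(f"alpha = ({al:.2f} +/- {sal:.2f}) x 1e-21 GeV")
```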
For slow neutrons, the effects of these scattering phenomena can be effectively described by a model that resembles very closely the “geometric optics” of light propagation theory.\[25-27\] In principle, non-standard dissipative effects can also be present in neutron-nucleus scattering;\[31-33\] these phenomena can again be described by phenomenological equations of the form (2), (3), and could modify the predictions of standard neutron optics. A precise estimate of these modifications requires detailed computations that certainly go beyond the scope of the present investigation. In any case, it should be stressed that these possible dissipative effects in the neutron-nucleus interaction would mostly affect the estimate of the intrinsic interferometer parameters, like the fringe contrast or the phase $`\theta `$, as functions of the wavelength of the incident neutrons and the nuclear properties of the refractive material. In our study, these parameters have been obtained directly from the experimental data, so we expect little change in our analysis from these extra effects. Nevertheless, the entire topic certainly deserves further attention and we hope to come back to these problems in the future.

A study of possible phenomena violating quantum mechanics in neutron interferometry was originally presented in . There, an equation of the form (2) was also used to describe these effects, but without imposing the condition of complete positivity. Fitting an approximated formula for the exit-beam interference figures to the experimental data available at that time, limits on some of the parameters violating quantum mechanics were given. These limits have been further strengthened by later analyses, exploiting wavefront-splitting interference experiments (the analogs of Young’s two-slit experiment). These estimates turn out to be rather crude: they are based on a rough evaluation of the flight time of the neutrons inside the interferometer, rather than on a detailed analysis of the interference patterns. Furthermore, as already mentioned, failing to impose the condition of complete positivity on the evolution equation could lead to serious inconsistencies. We stress that to avoid these problems, one needs to adopt phenomenological descriptions based on equations (2) and (3).

Finally, the neutron interference experiments realized so far allow determining at best the values of only two of the six non-standard parameters in (5). The remaining ones could be estimated, at least in principle, by studying the behaviour of other observables $`𝒪`$, different from those appearing in (9). This would allow a direct test of the inequalities (6) and therefore of the hypothesis of complete positivity. In practice, however, the analysis of these new observables would correspond to the realization of completely different experimental setups. In this respect, a detailed study of possible non-standard, dissipative effects in neutron interferometry appears to be an exciting challenge not only theoretically, but experimentally as well.

REFERENCES

1. R. Alicki and K. Lendi, Quantum Dynamical Semigroups and Applications, Lect. Notes Phys. 286 (Springer-Verlag, Berlin, 1987)
2. V. Gorini, A. Frigerio, M. Verri, A. Kossakowski and E.C.G. Sudarshan, Rep. Math. Phys. 13 (1978) 149
3. H. Spohn, Rev. Mod. Phys. 53 (1980) 569
4. W.H. Louisell, Quantum Statistical Properties of Radiation (Wiley, New York, 1973)
5. C.W. Gardiner, Quantum Noise (Springer, Berlin, 1992)
6. M.O. Scully and M.S.
Zubairy, Quantum Optics (Cambridge University Press, Cambridge, 1997)
7. L. Fonda, G.C. Ghirardi and A. Rimini, Rep. Prog. Phys. 41 (1978) 587
8. H. Nakazato, M. Namiki and S. Pascazio, Int. J. Mod. Phys. B10 (1996) 247
9. F. Benatti and R. Floreanini, Phys. Lett. B428 (1998) 149
10. J. Ellis, J.S. Hagelin, D.V. Nanopoulos and M. Srednicki, Nucl. Phys. B241 (1984) 381
11. S. Coleman, Nucl. Phys. B307 (1988) 867
12. S.B. Giddings and A. Strominger, Nucl. Phys. B307 (1988) 854
13. M. Srednicki, Nucl. Phys. B410 (1993) 143
14. L.J. Garay, Phys. Rev. Lett. 80 (1998) 2508; Thermal properties of spacetime foam, gr-qc/9806047
15. S. Hawking, Comm. Math. Phys. 87 (1983) 395; Phys. Rev. D 37 (1988) 904; Phys. Rev. D 53 (1996) 3099
16. S. Hawking and C. Hunter, Gravitational entropy and global structure, hep-th/9808085
17. J. Ellis, N.E. Mavromatos and D.V. Nanopoulos, Phys. Lett. B293 (1992) 37; Int. J. Mod. Phys. A11 (1996) 1489
18. F. Benatti and R. Floreanini, Non-standard neutral kaons dynamics from D-branes statistics, hep-th/9811196
19. J. Ellis, J.L. Lopez, N.E. Mavromatos and D.V. Nanopoulos, Phys. Rev. D 53 (1996) 3846
20. P. Huet and M.E. Peskin, Nucl. Phys. B434 (1995) 3
21. F. Benatti and R. Floreanini, Nucl. Phys. B488 (1997) 335
22. F. Benatti and R. Floreanini, Phys. Lett. B401 (1997) 337
23. F. Benatti and R. Floreanini, Nucl. Phys. B511 (1998) 550
24. F. Benatti and R. Floreanini, Mod. Phys. Lett. A12 (1997) 1465; Banach Center Publications 43 (1998) 71; Comment on “Searching for evolutions of pure states into mixed states in the two-state system $`K`$-$`\overline{K}`$”, hep-ph/9806450
25. J.L. Staudenmann, S.A. Werner, R. Colella and A.W. Overhauser, Phys. Rev. A 21 (1980) 1419
26. S.A. Werner and A.G. Klein, Meth. Exp. Phys. A23 (1986) 259
27. V.F. Sears, Neutron Optics (Oxford University Press, Oxford, 1989)
28. Advance in Neutron Optics and Related Research Facilities, M. Utsuro, S. Kawano, T. Kawai and A. Kawaguchi, eds., J. Phys. Soc. Jap. 65, Suppl. A, 1996
29. K.C. Littrell, B.E. Allman and S.A. Werner, Phys. Rev. A 56 (1997) 1767
30. B.E. Allman, H. Kaiser, S.A. Werner, A.G. Wagh, V.C. Rakhecha and J. Summhammer, Phys. Rev. A 56 (1997) 4420
31. E.B. Davies, Ann. Inst. H. Poincaré A 29 (1978) 395; ibid. A 32 (1980) 361
32. R. Alicki, Ann. Inst. H. Poincaré A 35 (1981) 97; Z. Phys. A 307 (1982) 279
33. L. Lanz and B. Vacchini, Int. J. Theor. Phys. 36 (1997) 67; Phys. Rev. A 56 (1997) 4826
34. A.G. Klein, Phys. Lett. B151 (1985) 275; Physica B 151 (1988) 44
no-problem/9902/astro-ph9902139.html
ar5iv
text
# The recent pulse period evolution of SMC X-1

## 1 Introduction

SMC X-1 was detected during a rocket flight (Price et al. 1971). The discovery of eclipses with the Uhuru satellite with a period of 3.89 days (Schreier et al. 1972) established the binary nature of the source. The optical counterpart Sk 160 has been identified as a B0 I supergiant (Webster et al. 1972; Liller 1973). Optical photometry indicated the presence of an accretion disk influencing the optical light curve (van Paradijs & Zuiderwijk 1977). In X-rays both low- and high-intensity states have been observed, with an X-ray luminosity $`L_x`$ varying from $`10^{37}`$ erg s<sup>-1</sup> to $`5\times 10^{38}`$ erg s<sup>-1</sup> (Schreier et al. 1972; Tuohy & Rapley 1975; Seward & Mitchell 1981; Bonnet-Bidaud & van der Klis 1981). Angelini et al. (1991) discovered an X-ray burst from SMC X-1, probably of type II like those of the Rapid Burster, generated by an instability in the accretion flow. A $`60`$ day quasi-periodicity was suggested by Gruber & Rothschild (1984) from HEAO 1 (A4) data, and was confirmed by more recent RXTE observations (Levine et al. 1996; Wojdowski et al. 1998). A pulse period of $`P=0.71`$ s (Lucke et al. 1976), a neutron star mass $`M_\mathrm{x}=0.8`$–$`1.8M_{\odot }`$, a companion mass $`M_\mathrm{c}\simeq 19M_{\odot }`$ and a companion radius $`R_\mathrm{c}\simeq 18R_{\odot }`$ (Primini, Rappaport & Joss 1977) are well established. A decay of the orbital period, $`\dot{P}_{\mathrm{orb}}/P_{\mathrm{orb}}=-(3.36\pm 0.02)\times 10^{-6}\mathrm{yr}^{-1}`$, was found (Levine et al. 1993), probably due to tidal interaction between the orbit and the rotation of the companion star, which is supposed to be in the hydrogen shell burning phase. Li & van den Heuvel (1997) argued that the magnetic moment of SMC X-1 may be low like that of the bursting pulsar GRO J1744-28, i.e. $`\sim 10^{29}\mathrm{G}\mathrm{cm}^3`$.

## 2 Observations

The observations reported in this paper were performed with the PSPC and HRI detectors of the ROSAT satellite (Trümper 1983). In Table 1 a log of the observations analysed in this work is given. The observations were centered on SMC X-1. The October-1991 observations and the June-1993 observation were retrieved from the public ROSAT archive in November 1997. The ROSAT HRI observations were made by the first author of this paper. The recently discovered transient RX J0117.6-7330, located $`5^{\prime }`$ southeast of SMC X-1 (Clark, Remillard & Woo 1996), has not been detected in these observations.

### 2.1 High- and low-intensity X-ray states

SMC X-1 has been observed during the ROSAT all-sky survey (Kahabka & Pietsch 1996). The source was in a low-intensity state in a first pointed observation, and was found in a high state in a ROSAT PSPC pointing in October 1991, preceding the low-intensity state by $`\sim `$12 days. This limits the duration of this specific X-ray turn-off phase to less than 2 weeks. In this paper the pulse periods of SMC X-1 during three X-ray high states observed with the ROSAT HRI $`\sim `$4, $`\sim `$5.5, and $`\sim `$6.5 years after the 1991 high state observation are reported. Pulse period determinations from the ROSAT observations are summarized in Table 1.

### 2.2 Pulse periods and period derivatives

We have searched for the pulse period in the data from the high-state observations and from the low-state data following the first high state. In the present analysis the event times have been projected from the spacecraft to the solar-system barycenter, with standard EXSAS software employed (Zimmermann et al. 1994).
They have also been corrected for arrival time delays in the binary orbit by use of the ephemeris and the orbital solution given in Levine et al. (1993). This takes into account the change in the length of the orbital period and of the mid-eclipse ephemeris due to orbital decay (Wojdowski et al. 1998). Period uncertainties have been determined from the relation $`\delta P=P^2/(T_{obs}\times N_{bin})`$, with the exposure time $`T_{obs}`$ given in Table 1 and the number of phase bins $`N_{\mathrm{bin}}=10`$.

Periods of $`P=0.709113\pm 0.000003`$ s ($`\chi ^2=3000`$, 9 degrees of freedom), $`P=0.708600\pm 0.000002`$ s ($`\chi ^2=980`$, 9 degrees of freedom), $`P=0.70769\pm 0.00006`$ s ($`\chi ^2=56`$, 9 degrees of freedom), $`P=0.707065\pm 0.000010`$ s ($`\chi ^2=113`$, 9 degrees of freedom) and $`P=0.70670\pm 0.00002`$ s ($`\chi ^2=250`$, 9 degrees of freedom) were obtained during the October-1991, June-1993, December-1995, May-1997 and March-1998 high-intensity states, respectively (cf. Figure 1 and Table 1). From the October-1991 to the December-1995 high state a change in pulse period with a mean $`\dot{P}=-(1.08\pm 0.05)\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$ and from the June-1993 to the March-1998 high state a mean $`\dot{P}=-(1.25\pm 0.08)\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$ are derived. The period derivative derived over the $`\sim `$6 year interval from October 1991 to March 1998 is $`\dot{P}=-(1.18\pm 0.06)\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$, consistent with the mean $`\dot{P}=-1.20\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$ derived from previous observations (Levine et al. 1993).

The evolution of the pulse period with Julian date, using data from Henry & Schreier (1977), Kunz et al. (1993), Levine et al. (1993), Wojdowski et al. (1998) and the results from this work, is given in Figure 2. Also shown are the residuals with respect to a linear best fit with $`\dot{P}=-1.153\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$. It is very evident that the pulse period of SMC X-1 undergoes a period walk on a time scale of a few 1000 days (a few years). But the amplitude of this period walk is small ($`\sim 1.5\times 10^{-4}`$ s). It may be suspected that somewhere at the end of 1994 the “positive” deviation from the mean $`\dot{P}`$ was largest (cf. Figure 2). After this time the mean spin-up rate may have increased. It is not clear in which way the period walk continues. An explanation of this period walk in terms of a “freely” precessing neutron star is unlikely (cf. Bisnovatyi-Kogan & Kahabka 1993).

A pulse period search has also been performed for an observation during a low-intensity state, carried out in the time interval 16 to 19 October 1991 (cf. Table 1 and Figure 1). A period of $`P=0.709103\pm 0.000003`$ s ($`\chi ^2=71`$, 9 degrees of freedom) has been determined (cf. Figure 1). This period is close to the period determined during the 7-Oct to 8-Oct-1991 high-intensity state and consistent with the long-term negative $`\dot{P}`$ value. The significance of this period is $`8\sigma `$. The period derivative between the high and low state in October 1991 (with a time interval of $`\sim `$10 days) is $`\dot{P}=-(0.4`$–$`1.8)\times 10^{-11}\mathrm{s}\mathrm{s}^{-1}`$.

## 3 Discussion

Disk-fed magnetic neutron stars have been predicted to experience spin-up or spin-down episodes due to the torque exerted by the accretion disk.
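The mean period derivatives quoted above follow from simple finite differences of the period history. A minimal sketch (the epoch MJDs are approximate placeholders, so the propagated error here is smaller than the published one, which uses the exact Table 1 epochs):

```python
import numpy as np

# (MJD, P [s], dP [s]) for the five ROSAT high-state epochs (MJDs approximate)
epochs = [
    (48536.0, 0.709113, 3e-6),   # Oct 1991
    (49155.0, 0.708600, 2e-6),   # Jun 1993
    (50060.0, 0.707690, 6e-5),   # Dec 1995
    (50570.0, 0.707065, 1e-5),   # May 1997
    (50880.0, 0.706700, 2e-5),   # Mar 1998
]

def mean_pdot(e1, e2):
    """Mean period derivative (s/s) and its error between two epochs."""
    (t1, p1, s1), (t2, p2, s2) = e1, e2
    dt = (t2 - t1) * 86400.0
    return (p2 - p1) / dt, np.hypot(s1, s2) / dt

pdot, err = mean_pdot(epochs[0], epochs[-1])
print(f"Pdot = ({pdot/1e-11:.2f} +/- {err/1e-11:.2f}) x 1e-11 s/s")
```

With these inputs the October 1991 to March 1998 baseline gives Pdot close to the quoted -1.18 x 10<sup>-11</sup> s s<sup>-1</sup>.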
In the magnetically-threaded accretion disk model, first suggested by Ghosh & Lamb (1979a, 1979b), the spin-up rate is given by $$-2\pi I\dot{P}/P^2=\dot{M}(GMr_0)^{1/2}n(\omega _s),$$ (1) where $`I`$ is the moment of inertia of the neutron star, $`\dot{M}`$ the accretion rate, $`G`$ the gravitational constant and $`r_0`$ the inner edge of the accretion disk. The dimensionless torque $`n(\omega _s)`$, which includes the torque contributions from both matter accretion and magnetic stress, is a function of the “fastness parameter” $`\omega _s\equiv (r_0/r_c)^{3/2}`$, where $`r_c\equiv (GMP^2/4\pi ^2)^{1/3}`$ is the corotation radius. With the mean spin-up rate and X-ray luminosity observed in SMC X-1, equation (1) has two sets of solutions: (1) the magnetic moment of the pulsar, $`\mu `$, is less than a few $`\times 10^{29}\mathrm{G}\mathrm{cm}^3`$ with $`r_0\ll r_c`$; (2) $`\mu `$ is around $`10^{30}\mathrm{G}\mathrm{cm}^3`$ with $`r_0\simeq r_c`$ (Li & van den Heuvel 1997). If the X-ray intensity during the low state, which is lower than that during the high state by a factor of $`35`$–$`50`$, is due to a reduction in the mass accretion rate, the condition $`r_0\le r_c`$ implies $`\mu \lesssim 2\times 10^{29}\mathrm{G}\mathrm{cm}^3`$. However, Gruber & Rothschild (1984; see also Levine et al. 1996; Wojdowski et al. 1998) have suggested the possibility of modulation of the observed X-ray intensity by a tilted, precessing accretion disk like that in Her X-1 (Katz 1973), which would imply that the intrinsic X-ray luminosity or mass accretion rate of the pulsar could be quite steady. In this case the pulsar magnetic moment may lie between $`10^{29}\mathrm{G}\mathrm{cm}^3`$ and $`10^{30}\mathrm{G}\mathrm{cm}^3`$, though a magnetic moment as high as $`10^{30}\mathrm{G}\mathrm{cm}^3`$ seems less likely for SMC X-1, because of the following arguments. As seen in Fig. 2, the spin-up rate of the pulsar in the 1980s and 1990s varied around its mean value by $`\sim 20\%`$. The most straightforward explanation for this change is that the accretion rate has fluctuated by a similar (or somewhat larger) factor<sup>1</sup> (<sup>1</sup>Due to disk precession, the accretion torque also changes, but on a timescale much shorter than that of the pulse period variation). If $`\mu \simeq 10^{30}\mathrm{G}\mathrm{cm}^3`$, $`r_0`$ would be so close to $`r_c`$ that the pulsar would spin down when the accretion rate decreased by a small factor (less than 20%), in contradiction with the steady spin-up observed.

The above arguments are based on the classical accretion torque models, which, however, have encountered difficulties in explaining the period evolution of the X-ray pulsar Cen X-3, which shares many similarities with SMC X-1. Both pulsars are in a close binary system with a supergiant companion star overflowing its Roche lobe, accreting from a disk (Tjemkes et al. 1986) at a high rate, with X-ray luminosities close to or higher than the Eddington luminosity (Nagase 1989). Tidal torque between the distorted supergiant and the neutron star leads to an orbital decay at a similar rate in the two systems (cf. White et al. 1995). However, Cen X-3 exhibits a pulse period evolution quite different from that of SMC X-1. Prior to 1991, Cen X-3 had already been found to show a secular slow spin-up superposed with fluctuations and short episodes of spin-down. The more frequently sampled BATSE data show that Cen X-3 exhibits 10-100 d intervals of steady spin-up and spin-down at a much larger rate, and the long-term ($`\sim `$ years) spin-up trend is actually the average consequence of the frequent transitions between spin-up and spin-down (Finger et al. 1994).
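A back-of-envelope sketch of eq. (1) is given below. It uses a standard Alfven-like estimate for the inner disk radius, r0 = xi (mu^4 / 2GM Mdot^2)^(1/7) with xi ~ 0.52, and a constant slow-rotator torque n ~ 1.4; both are common assumptions, not the full Ghosh-Lamb torque function, and the luminosity and neutron-star parameters are round illustrative numbers.

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33          # cgs
M, R = 1.4 * Msun, 1.0e6              # neutron star mass and radius (assumed)
I = 1.0e45                            # moment of inertia, g cm^2 (assumed)
P = 0.7107                            # pulse period, s
L = 3.0e38                            # high-state luminosity, erg/s (illustrative)
Mdot = L * R / (G * M)                # accretion rate implied by L

def radii(mu):
    """Corotation radius and Alfven-like inner disk radius, cgs."""
    r_c = (G * M * P**2 / (4 * np.pi**2))**(1/3)
    r_0 = 0.52 * (mu**4 / (2 * G * M * Mdot**2))**(1/7)
    return r_c, r_0

for mu in (1e29, 1e30):               # magnetic moment, G cm^3
    r_c, r_0 = radii(mu)
    n = 1.4                           # dimensionless torque, slow-rotator value (assumed)
    Pdot = -n * Mdot * np.sqrt(G * M * r_0) * P**2 / (2 * np.pi * I)
    print(f"mu = {mu:.0e}: r0/rc = {r_0/r_c:.2f}, Pdot ~ {Pdot:.2e} s/s")
```

For mu = 10<sup>29</sup> G cm<sup>3</sup> this gives r0/rc well below unity and a spin-up rate of order -10<sup>-11</sup> s s<sup>-1</sup>, i.e. close to the observed value, while for mu = 10<sup>30</sup> G cm<sup>3</sup> the disk edge approaches corotation, where the constant-n approximation breaks down.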
Such spin behavior has been found in at least 4 out of 8 persistent X-ray pulsars observed with BATSE (Bildsten et al. 1997), and is difficult to explain in terms of classical accretion torque models, which would require finely tuned, step-function-like changes in the mass accretion rate. It is interesting to see whether the secular spin-up in SMC X-1 actually consists of transitions between short-term spin-up and spin-down, as in Cen X-3. If this is the case, it would indicate a larger instantaneous accretion torque, and hence a higher magnetic moment.

## 4 Summary

New values for the spin period of SMC X-1 have been determined from recent ROSAT observations. These observations clearly show that the steady spin-up of the neutron star has continued. This makes SMC X-1 exceptional among X-ray pulsars, in that no spin-down episode has been observed, though it is still unknown whether this spin-up is real or just the apparent superposition of more frequent spin-up and spin-down episodes. The magnetic moment of the pulsar could be as small as $`\sim 10^{29}\mathrm{G}\mathrm{cm}^3`$ if its spin trend can be described by classical accretion torque models. Detailed timing observations are strongly recommended to resolve this question, and will have important implications for theoretical models of angular momentum transfer between a magnetic neutron star and the surrounding accretion disk.

###### Acknowledgements.

P.K. thanks H. Henrichs and P. Ghosh for helpful discussions. I thank E.P.J. van den Heuvel for reading the article. This research was supported in part by the Netherlands Organisation for Scientific Research (NWO) through Spinoza Grant 08-0 to E.P.J. van den Heuvel. The ROSAT project is supported by the Max-Planck-Gesellschaft and the Bundesministerium für Forschung und Technologie (BMFT).
no-problem/9902/astro-ph9902140.html
ar5iv
text
# Nonuniqueness and Structural Stability of Self-Consistent Models of Elliptical Galaxies
no-problem/9902/astro-ph9902116.html
ar5iv
text
# 1 He I $`\lambda `$4922 : He II $`\lambda `$5411 Calibration Stars
no-problem/9902/hep-ph9902297.html
ar5iv
text
# TESTING B-BALL COSMOLOGY WITH THE CMB<sup>1</sup>

<sup>1</sup>Invited talk at Strong and Electroweak Matter ’98, 2-5 December 1998, Copenhagen, Denmark

## 1 AD condensate and B-ball decay

The quantum fluctuations of the inflaton field give rise to fluctuations of the energy density which are adiabatic . However, in the minimal supersymmetric standard model (MSSM), or its extensions, the inflaton is not the only fluctuating field. It is well known that the MSSM scalar field potential has many flat directions , along which a non-zero expectation value can form during inflation, leading to a condensate after inflation, the so-called Affleck-Dine (AD) condensate . When the Hubble rate becomes of the order of the curvature of the potential, given by the SUSY breaking mass $`m_S`$, the condensate starts to oscillate. At this stage B-violating terms are comparable to the mass term, so that the condensate acquires a net baryonic charge. In the AD baryogenesis scenario the subsequent decay of the condensate will then generate the observable baryon number . An important point is that the AD condensate is not stable but typically breaks up into non-topological solitons which carry baryon (and/or lepton) number and are therefore called B-balls (L-balls). For baryogenesis considerations, the most promising direction is the $`d=6`$ (“$`u^cu^cu^c`$”) direction , on which we shall focus in the following. The formation of the B-balls takes place with an efficiency $`f_B`$, likely to be in the range 0.1 to 1.

The properties of the B-balls depend on SUSY breaking and on the flat direction along which the AD condensate forms. We will consider SUSY breaking mediated to the observable sector by gravity. In this case the B-balls are unstable but long-lived, decaying well after the electroweak phase transition has taken place , with a natural order of magnitude for the decay temperature $`T_d\sim 𝒪(1)\mathrm{GeV}`$. This assumes that the reheating temperature after inflation, $`T_R`$, is less than about $`10^4`$ GeV. Such a low value of $`T_R`$ is in fact necessary in D-term inflation models, because the natural magnitude of the phase of the AD field, $`\delta _{\mathrm{CP}}`$, is of the order of 1 in D-term inflation, and along the $`d=6`$ direction AD baryogenesis implies that the baryon to entropy ratio is $`\eta _B\sim \delta _{\mathrm{CP}}(T_R/10^9\mathrm{GeV})`$, so that $`T_R\sim 𝒪(1)\mathrm{GeV}`$ would be the most natural choice. It is significant that a low reheating temperature can naturally be achieved in D-term inflation models, as these have discrete symmetries in order to ensure the flatness of the inflaton potential, which can simultaneously lead to a suppression of the reheating temperature .

## 2 Fluctuations of the AD field

The AD field $`\mathrm{\Phi }=\varphi e^{i\theta }/\sqrt{2}\equiv (\varphi _1+i\varphi _2)/\sqrt{2}`$ is a complex field and, in the currently favoured D-term inflation models , is effectively massless during inflation. Therefore both its modulus and phase are subject to fluctuations, with $$\delta \varphi _i(\vec{x})=\sqrt{V}\int \frac{d^3k}{(2\pi )^3}e^{i\vec{k}\cdot \vec{x}}\delta _{\vec{k}},$$ (1) where $`V`$ is a normalizing volume and where the power spectrum is the same as for the inflaton field, $$\frac{k^3|\delta _{\vec{k}}|^2}{2\pi ^2}=\left(\frac{H_I}{2\pi }\right)^2,$$ (2) where $`H_I`$ is the value of the Hubble parameter during inflation.
In D-term inflation models the phase of the AD field receives no order-$`H`$ corrections after inflation, and so its fluctuations are unsuppressed . The fluctuations of the phase correspond to fluctuations in the local baryon number density, or isocurvature fluctuations, while the fluctuations of the modulus give rise to adiabatic density fluctuations. For given background values $`\overline{\theta }`$ and $`\overline{\varphi }`$ (with $`\overline{\theta }`$ naturally of the order of 1), one finds $$\left(\frac{\delta \theta }{Tan(\overline{\theta })}\right)_k=\frac{H_I}{Tan(\overline{\theta })\overline{\varphi }}=\frac{H_Ik^{-3/2}}{\sqrt{2}Tan(\overline{\theta })\overline{\varphi }_I},$$ (3) where $`\varphi _I`$ is the value of $`\varphi `$ when the perturbation leaves the horizon. The magnitude of the AD field $`\mathrm{\Phi }`$ remains at the non-zero minimum of its potential until $`H\sim m_S`$, after which the baryon asymmetry $`n_B\propto Sin(\theta )`$ forms. Thus the isocurvature fluctuation reads $$\left(\frac{\delta n_B}{n_B}\right)_k\equiv \delta _B^{(i)}=\left(\frac{\delta \theta }{Tan(\overline{\theta })}\right)_k.$$ (4)

The adiabatic fluctuations of the AD field may dominate over the inflaton fluctuations, with potentially adverse consequences for the scale invariance of the perturbation spectrum, thus imposing an upper bound on the amplitude of the AD field . In the simplest D-term inflation model, the inflaton is coupled to the matter fields $`\psi _{-}`$ and $`\psi _+`$ carrying opposite Fayet-Iliopoulos charges through a superpotential term $`W=\kappa S\psi _{-}\psi _+`$ . At the one-loop level the inflaton potential reads $$V(S)=V_0+\frac{g^4\xi ^4}{32\pi ^2}\mathrm{ln}\left(\frac{\kappa ^2S^2}{Q^2}\right);\qquad V_0=\frac{g^2\xi ^4}{2},$$ (5) where $`\xi `$ is the Fayet-Iliopoulos term and $`g`$ the gauge coupling associated with it. COBE normalization fixes $`\xi =6.6\times 10^{15}\mathrm{GeV}`$. In addition, we must consider the contribution of the AD field to the adiabatic perturbation. During inflation, the potential of the $`d=6`$ flat AD direction is simply given by $$V(\varphi )=\frac{\lambda ^2}{32M^6}\varphi ^{10}.$$ (6) Taking both $`S`$ and $`\varphi `$ to be slowly rolling fields, one finds that the adiabatic part of the invariant perturbation is given by $$\zeta =\frac{\delta \rho }{\rho +p}=\frac{3}{4}\frac{\delta \rho _\gamma ^{(a)}}{\rho _\gamma }\propto \frac{V^{\prime }(\varphi )+V^{\prime }(S)}{V^{\prime }(\varphi )^2+V^{\prime }(S)^2}\delta \varphi .$$ (7) Thus the field which dominates the spectral index of the perturbation will be that with the largest values of $`V^{\prime }`$ and $`V^{\prime \prime }`$.

## 3 A lower bound on the isocurvature amplitude

The index of the power spectrum is given by $`n=1+2\eta -6ϵ`$, where $`ϵ`$ and $`\eta `$ are defined as $$ϵ=\frac{1}{2}M^2\left(\frac{V^{\prime }}{V}\right)^2,\qquad \eta =M^2\frac{V^{\prime \prime }}{V}.$$ (8) The present observational bounds imply $`|\mathrm{\Delta }n|\lesssim 0.2`$. It is easy to find that the requirement that the spectral index be acceptably close to scale invariance essentially reduces to the condition that the spectral index is dominated by the inflaton: $`V^{\prime }(\varphi )<V^{\prime }(S)`$ and $`V^{\prime \prime }(\varphi )<V^{\prime \prime }(S)`$. The latter requirement turns out to be slightly more stringent and implies an upper bound on the AD condensate field, $`\varphi \lesssim 0.48\left(g/\lambda \right)^{1/4}(M\xi )^{1/2}`$. As a consequence, there is a lower bound on the isocurvature fluctuation amplitude.
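To see the scale of this bound, the sketch below compares V″ of the two fields numerically. The expression used for the inflaton epoch, S_N² = g²M²N/(8π²), follows from integrating the slow-roll equation for the potential (5) and is our assumption here, as are the choices of the reduced Planck mass M = 2.4×10¹⁸ GeV and N = 50 e-folds.

```python
import numpy as np

Mpl = 2.4e18   # reduced Planck mass, GeV (assumed)
xi  = 6.6e15   # Fayet-Iliopoulos scale fixed by COBE, GeV
N   = 50       # e-folds before the end of inflation (assumed)

def Vpp_inflaton(g):
    """|V''(S)| of eq. (5) at the epoch S_N^2 = g^2 Mpl^2 N / (8 pi^2)."""
    S2 = g**2 * Mpl**2 * N / (8 * np.pi**2)
    return g**4 * xi**4 / (32 * np.pi**2 * S2)

def Vpp_AD(lam, phi):
    """V''(phi) of the d=6 flat direction, eq. (6)."""
    return 90 * lam**2 * phi**8 / (32 * Mpl**6)

g = lam = 1.0
# Largest phi for which the inflaton still dominates V'':
phi_max = (32 * Mpl**6 * Vpp_inflaton(g) / (90 * lam**2))**(1/8)
print(f"phi_max                         = {phi_max:.2e} GeV")
print(f"0.48 (g/lam)^1/4 (Mpl xi)^1/2   = {0.48*(g/lam)**0.25*np.sqrt(Mpl*xi):.2e} GeV")
print(f"check: Vpp_AD/Vpp_inflaton      = {Vpp_AD(lam, phi_max)/Vpp_inflaton(g):.2f}")
```

With these assumptions the crossover lands within about twenty percent of the quoted 0.48 (g/λ)^{1/4}(Mξ)^{1/2}, i.e. around 6×10¹⁶ GeV for g = λ = 1.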
Because the B-ball is essentially a squark condensate, in R-parity conserving models its decay produces both baryons and neutralinos ($`\chi `$), which we assume to be the lightest supersymmetric particles (LSPs), with $`n_\chi \simeq 3n_B`$. This case is particularly interesting, as the simultaneous production of baryons and neutralinos may help to explain the remarkable similarity of the baryon and dark matter neutralino number densities. With B-ball decay temperatures $`T_d\sim 𝒪(1)\mathrm{GeV}`$, the decay products no longer thermalize completely and, so long as $`T_d`$ is low enough that they do not annihilate after B-ball decay, retain the form of the original AD isocurvature fluctuation. Therefore in this scenario the cold dark matter particles can have both isocurvature and adiabatic density fluctuations, resulting in an enhancement of the isocurvature contribution relative to the baryonic case. The total LSP number density is the sum of the thermal relic density $`n_\chi ^{(th)}`$ and the density $`n_\chi ^{(B)}=3f_Bn_B`$ originating from the B-ball decay. (Their relative importance depends on $`T_R`$; for $`T_R<𝒪(1)\mathrm{GeV}`$ one would find $`n_\chi ^{(th)}\simeq 0`$.) The isocurvature fluctuation imposed on the CMB photons is then found to be

$$\frac{\delta \rho _\gamma ^{(i)}}{\rho _\gamma }\simeq \frac{4}{3}\left(1+\frac{m_B}{3f_Bm_\chi }\right)\left(\frac{\mathrm{\Omega }_\chi -\mathrm{\Omega }_\chi ^{(th)}}{\mathrm{\Omega }_m}\right)\delta _B^{(i)}\equiv \frac{4}{3}\omega \delta _B^{(i)},$$ (9)

where $`\rho _\chi ^{(B)}`$ is the LSP mass density from the B-ball decay and $`\mathrm{\Omega }_m`$ ($`\mathrm{\Omega }_\chi `$) is the total matter (LSP) density (in units of the critical density). Thus

$$\beta \equiv \left(\frac{\delta \rho _\gamma ^{(i)}}{\delta \rho _\gamma ^{(a)}}\right)^2=\frac{1}{9}\omega ^2\left(\frac{M^2V^{\prime }(S)}{V(S)Tan(\overline{\theta })\overline{\varphi }}\right)^2.$$ (10)

It then follows that the lower limit on $`\beta `$ is

$$\beta \gtrsim 2.5\times 10^{-2}g^{3/2}\lambda ^{1/2}\omega ^2Tan(\overline{\theta })^2.$$ (11)

Thus significant isocurvature fluctuations are a definite prediction of the AD mechanism. Isocurvature perturbations give rise to extra power at large angular scales but are damped at small angular scales. The amplitude of the rms mass fluctuations in an $`8h^{-1}`$ Mpc sphere, denoted as $`\sigma _8`$, is about an order of magnitude lower than in the adiabatic case. Hence COBE normalization alone is sufficient to set a tight limit on the relative strength of the isocurvature amplitude. Small isocurvature fluctuations are, however, beneficial, in the sense that they would improve the fit to the power spectrum in $`\mathrm{\Omega }_0=1`$ CDM models with a cosmological constant (or $`\mathrm{\Omega }_0=1`$, $`\mathrm{\Lambda }=0`$ CDM models with some hot dark matter). Detecting isocurvature fluctuations at the level of $`\beta \sim 10^{-4}`$ should be quite realistic at MAP and Planck. Thus the forthcoming CMB experiments offer a test not only of the inflationary Universe but also of the B-ball variant of AD baryogenesis.

## Acknowledgments

I wish to thank John McDonald for many discussions and an enjoyable collaboration. This work has been supported by the Academy of Finland under the contract 101-35224.
# BEYOND HTL: THE CLASSICAL KINETIC THEORY OF LANDAU DAMPING FOR SELF-INTERACTING SCALAR FIELDS IN THE BROKEN PHASE

## 1 Effective equation for the slow modes

The model investigated in this paper has the Lagrangian density

$$L=\frac{1}{2}(\partial _\mu \phi (x))^2-\frac{1}{2}m^2\phi ^2(x)-\frac{\lambda }{24}\phi ^4(x).$$ (1)

The field $`\phi (x)`$ is split into the sum of terms with low ($`p_0<\mathrm{\Lambda }`$) and high ($`p_0>\mathrm{\Lambda }`$) frequency Fourier-components, that is $`\phi (x)=\stackrel{~}{\mathrm{\Phi }}(x)+\varphi (x)`$, $`\langle \varphi (x)\rangle =0`$. The classical equation of motion for the low frequency component $`\stackrel{~}{\mathrm{\Phi }}(x)`$ is the following:

$$(\partial ^2+m^2)\stackrel{~}{\mathrm{\Phi }}(x)+\frac{\lambda }{6}[\stackrel{~}{\mathrm{\Phi }}^3(x)+3\stackrel{~}{\mathrm{\Phi }}(x)\varphi ^2(x)+3\stackrel{~}{\mathrm{\Phi }}^2(x)\varphi (x)+\varphi ^3(x)]=0.$$ (2)

The effective equation of motion arises upon averaging over the (quantum) fluctuations of the high frequency field $`\varphi (x)`$:

$$(\partial ^2+m^2)\stackrel{~}{\mathrm{\Phi }}(x)+\frac{\lambda }{6}[\stackrel{~}{\mathrm{\Phi }}^3(x)+3\stackrel{~}{\mathrm{\Phi }}(x)\langle \varphi ^2(x)\rangle ]=0.$$ (3)

The last term on the left hand side represents the source induced by the action of the high frequency modes. It is a functional of $`\stackrel{~}{\mathrm{\Phi }}(x)`$. The action of the effective field theory may be reconstructed from this equation. This approach is closely related to the Thermal Renormalisation Group equation of D’Attanasio and Pietroni.

In the broken phase the non-zero average value spontaneously generated below $`T_c`$ is separated from the low-frequency part, $`\stackrel{~}{\mathrm{\Phi }}(x)=\overline{\mathrm{\Phi }}+\mathrm{\Phi }(x)`$. The expectation value $`\overline{\mathrm{\Phi }}`$ is determined by the effective equation

$$m^2+\frac{\lambda }{2}\langle \varphi ^2(x)\rangle ^{(0)}+\frac{\lambda }{6}\overline{\mathrm{\Phi }}^2=0,$$ (4)

where we have introduced the indexed expectation value

$$\langle \varphi ^2(x)\rangle ^{(j)}\propto \mathrm{\Phi }^j$$ (5)

to denote that part of the full expectation value which is ”proportional” functionally to the j-th power of $`\mathrm{\Phi }(x)`$. Our present goal is to determine the effective linear dynamics of the $`\mathrm{\Phi }`$-field, therefore it is sufficient to study the linearised effective equation for $`\mathrm{\Phi }(x)`$:

$$(\partial ^2+\frac{\lambda }{3}\overline{\mathrm{\Phi }}^2)\mathrm{\Phi }(x)=-\frac{\lambda }{2}\overline{\mathrm{\Phi }}\langle \varphi ^2(x)\rangle ^{(1)}.$$ (6)

(In this equation $`\overline{\mathrm{\Phi }}`$ is the solution of (4).) Clearly, the linear response theory of (1) is contained in the induced current, determined by $`\langle \varphi ^2(x)\rangle ^{(1)}`$.

## 2 Statistics of the high frequency modes

For the computation of the leading effect of the high-frequency modes in the low frequency projection of the equation of motion it is sufficient to study the two-point function $`\langle \varphi (x)\varphi (y)\rangle `$. For its determination we follow the procedure carefully described by Mrówczyński and Danielewicz. Introducing the Wigner transform by the relation

$$\langle \varphi (x)\varphi (y)\rangle =\int \frac{d^4p}{(2\pi )^4}e^{ip(x-y)}\mathrm{\Delta }(X,p),\qquad X=\frac{x+y}{2},$$ (7)

one arrives at the following equations for $`\mathrm{\Delta }(X,p)`$:

$$\left(\frac{1}{4}\partial _X^2-p^2+M^2(X)\right)\mathrm{\Delta }(X,p)=0,$$
$$\left(p\cdot \partial _X+\frac{1}{2}\partial _XM^2(X)\cdot \partial _p\right)\mathrm{\Delta }(X,p)=0,$$ (8)

where $`M^2(x)=m^2+\lambda \stackrel{~}{\mathrm{\Phi }}^2(x)/2`$.
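The second equation of (8) transports $`\mathrm{\Delta }`$ along classical trajectories, a point made explicit in section 4 below. A toy 1+1-dimensional sketch of these characteristics, with an assumed Gaussian background profile and the signs read off eq. (8) as written, is:

```python
import numpy as np

# Characteristics of the transport equation in (8): Delta(X,p) is constant along
# dX/ds = p, dp/ds = (1/2) dM^2/dX, with M^2(X) = m^2 + (lam/2) Phi_tilde(X)^2.
# The Gaussian profile below is an assumed toy background, not taken from the paper.
lam = 0.5
phi = lambda x: np.exp(-x**2)               # toy slowly varying background
dphi = lambda x: -2.0 * x * np.exp(-x**2)
dM2dx = lambda x: lam * phi(x) * dphi(x)    # d/dx [m^2 + (lam/2) phi^2]
x, p, ds = -5.0, 1.2, 1e-3
for _ in range(10000):                      # kick-drift-kick update along the curve
    p += 0.5 * dM2dx(x) * ds
    x += p * ds
    p += 0.5 * dM2dx(x) * ds
print(f"endpoint of the characteristic: x = {x:.3f}, p = {p:.3f}")
```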
The quantity appearing in the induced source is related to the Wigner transform of the two-point function through the relation

$$\langle \varphi ^2(x)\rangle ^{(1)}=\int \frac{d^4p}{(2\pi )^4}\mathrm{\Delta }^{(1)}(x=X,p).$$ (9)

The important limitation on the range of validity of the effective dynamics is expressed by the assumption that the second derivative with respect to $`X`$ is negligible relative to $`p^2`$ and $`M^2`$ on the left hand side of the first equation of (8). Then this equation is transformed simply into a local mass-shell condition, while the second equation of (8) can be interpreted as a Boltzmann-equation for the phase-space “distribution function” $`\mathrm{\Delta }(X,p)`$. The background-independent solution $`\mathrm{\Delta }^{(0)}`$ is given by the well-known free correlator, slightly modified to account for the lower frequency cut appearing in the Fourier series expansion of $`\varphi (x)`$:

$$\mathrm{\Delta }^{(0)}(X,p)=(\mathrm{\Theta }(p_0)+\stackrel{~}{n}(|p_0|))2\pi \delta (p^2-M^2),\qquad \stackrel{~}{n}(p_0)=\frac{1}{e^{\beta |p_0|}-1}\mathrm{\Theta }(|p_0|-\mathrm{\Lambda }).$$ (10)

An iterated solution of the second equation of Eq. (8) starting from (10) yields

$$\mathrm{\Delta }^{(1)}(X,p)=-\frac{\lambda }{2}\overline{\mathrm{\Phi }}(p\cdot \partial _X)^{-1}\frac{\partial \mathrm{\Phi }(X)}{\partial X_\mu }\frac{\partial \mathrm{\Delta }^{(0)}(X,p)}{\partial p_\mu }.$$ (11)

## 3 The induced source

For the analysis of the induced source

$$j_{ind}(x)=-\frac{\lambda }{2}\overline{\mathrm{\Phi }}\langle \varphi ^2(x)\rangle ^{(1)}=\frac{\lambda ^2\overline{\mathrm{\Phi }}^2}{4}\int \frac{d^4p}{(2\pi )^4}(p\cdot \partial _x)^{-1}\partial _x\mathrm{\Phi }\cdot \partial _p\mathrm{\Delta }^{(0)}(x,p)$$ (12)

one takes its Fourier-transform with respect to $`x`$. Using the explicit expression (10) one easily recognizes the only non-trivial (non-local) contribution arises from the integral

$$\int \frac{d^4p}{(2\pi )^3}\frac{1}{p\cdot k}k_0\delta (p^2-M^2)\frac{d\stackrel{~}{n}}{dp_0}.$$ (13)

Its imaginary part determines the rate of Landau-damping. Simple integration steps lead to

$$\mathrm{Im}j(k_0,k)=\frac{\lambda ^2\overline{\mathrm{\Phi }}^2}{16\pi }\frac{k_0}{k}\mathrm{\Phi }(k)\mathrm{\Theta }(k^2-k_0^2)\int _{t_0}^{\mathrm{}}dt\frac{d\stackrel{~}{n}}{dt},$$
$$\stackrel{~}{n}(t)=\frac{1}{e^{\beta Mt}-1}\mathrm{\Theta }(Mt-\mathrm{\Lambda }),\qquad t_0=1/\sqrt{1-(k_0/k)^2}.$$ (14)

This integral is zero if $`\mathrm{\Lambda }>Mt_0`$, but for $`\mathrm{\Lambda }<Mt_0`$ it gives

$$\mathrm{Im}j(k_0,k)=\frac{\lambda ^2\overline{\mathrm{\Phi }}^2}{16\pi }\frac{k_0}{k}\mathrm{\Phi }(k)\mathrm{\Theta }(k^2-k_0^2)\frac{1}{e^{\beta M/\sqrt{1-(k_0/k)^2}}-1},$$ (15)

independent of the value of $`\mathrm{\Lambda }`$. The result has very transparent interpretation. In the HTL-limit, when only the modes with much higher frequencies than any mass scale in the theory are taken into account, no Landau-damping arises. The effective theory is local! Going beyond HTL ($`k_0\ll \mathrm{\Lambda }\ll M`$), the correct non-local dynamics (reflected also by the Landau damping) originating from the 1-loop self-energy contribution is recovered, when comparison with Boyanovsky et al. is made.
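A direct numerical look at (15) shows the exponential switch-off of the damping as $`k_0\rightarrow k`$; the overall prefactor (which lumps $`\lambda ^2\overline{\mathrm{\Phi }}^2\mathrm{\Phi }(k)/16\pi `$) and the ratio $`\beta M=M/T=5`$ are illustrative assumptions:

```python
import numpy as np

# Damping rate (15) as a function of x = k0/k inside the space-like region:
# the Bose factor 1/(exp(beta*M/sqrt(1-x^2)) - 1) suppresses it near the light cone.
beta_M = 5.0        # M/T, assumed value
prefac = 1.0        # lumps lambda^2 Phi_bar^2 Phi(k) / (16 pi), assumed
x = np.linspace(0.05, 0.95, 7)
rate = prefac * x / (np.exp(beta_M / np.sqrt(1.0 - x**2)) - 1.0)
for xi_, r in zip(x, rate):
    print(f"k0/k = {xi_:.2f}   Im j ~ {r:.3e}")
```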
## 4 Classical mechanical representation of the non-local dynamics

Recently we proposed to superimpose on the scalar field theory (1) a gas of relativistic scalar particles with the action

$$S_{mech}=-\underset{i}{\sum }\int d\tau M_{loc}[\overline{\mathrm{\Phi }},\mathrm{\Phi }(\xi _i(\tau ))],\qquad M_{loc}^2[\mathrm{\Phi }(\xi _i)]=m^2+\frac{\lambda }{2}\stackrel{~}{\mathrm{\Phi }}^2(\xi _i(\tau )),$$ (16)

where $`\xi _i(\tau )`$ denotes the world-line of the $`i`$-th particle of the gas. The mass of these particles varies with the field along the trajectory of the particles. The equation of motion of one of the particles is given by

$$M_{loc}(\xi )\frac{dp_\mu }{d\tau }=\frac{1}{2}\frac{\partial M_{loc}^2(\xi )}{\partial \xi _\mu }.$$ (17)

The kinetic equation for the collisionless evolution of the one-particle phase-space density $`f(x,p)`$ of this gas is

$$p_\mu \frac{\partial f(x,p)}{\partial x_\mu }+M_{loc}\frac{dp_\mu }{d\tau }\frac{\partial }{\partial p_\mu }f(x,p)=0,$$ (18)

which clearly agrees with the second equation of (8), while the solution of the first one has dictated the choice of the effective mass expression in (16). Variation of (16) with respect to $`\mathrm{\Phi }(x)`$ leads to the term corresponding to the induced source density in the wave equation:

$$j_{ind}=-\frac{\lambda }{2}\int \frac{d^3p}{(2\pi )^3p_0}(\overline{\mathrm{\Phi }}+\mathrm{\Phi }(x))f(x,p)$$ (19)

($`p_0^2=p^2+M^2`$). This expression is obtained by averaging the contributions of the different particle trajectories in the gas, passing nearby the point $`x`$ with any momentum $`p`$:

$$\int d\tau \underset{i}{\sum }\delta ^{(4)}(x-\xi _i(\tau ))=M_{loc}\int \frac{d^3p}{(2\pi )^3p_0}f(x,p).$$ (20)

Using the solution of the Boltzmann equation obtained upon iteration starting from the equilibrium Bose-Einstein factor, one finds the same expression for the non-local part of the source as was found above, implying the same result also for the rate of Landau damping.

## 5 Conclusion

We have presented two equivalent methods of treating the effective dynamics of the low-frequency fluctuations of self-interacting scalar fields in the broken phase of the theory. The dynamics is proved to be non-local if the effect of fluctuation modes below the mass scale $`M`$ is also taken into account. A fully local representation was proposed by superimposing a relativistic gas with a specially chosen field-dependent mass on the original field theory. The range of validity of the fully local version of the effective model can be ascertained only from its comparison with the result of lowering the separation scale $`\mathrm{\Lambda }`$ in the detailed integration over the fluctuations with different frequencies. From the comparison one learns that the combined kinetic plus field theory is equivalent to the effective theory for the modes with $`k_0\ll \mathrm{\Lambda }<M`$, that is, its validity goes beyond the Hard Thermal Loop approximation.

## Acknowledgements

This work has benefited from discussions with Antal Jakovác and Péter Petreczky. Helpful comments of U. Heinz and C. Manuel during SEWM 98 are gratefully acknowledged. We thank the organizers for the creative atmosphere of the meeting.
# Spin dynamics in high-$`T_C`$ superconductors

## Introduction

Over the last decade, a great deal of effort has been devoted to showing the importance of antiferromagnetic (AF) dynamical correlations for the physical properties of high-$`T_C`$ cuprates and consequently for the microscopic mechanism responsible for superconductivitymodel ; pines . To elucidate how these electronic correlations are relevant, it is then necessary to put the spectral weight of AF fluctuations on a more quantitative scale. Inelastic neutron scattering (INS) provides essential information on this matter as it directly measures the full energy and momentum dependences of the spin-spin correlation function. Recently, efforts have been made to determine them in absolute units by comparison with phonon scattering. The following definition, corresponding to $`\frac{1}{3}`$ of the total spin susceptibility, is usedrecentprb ,

$$\chi ^{\alpha \beta }(Q,\omega )=(g\mu _B)^2\frac{i}{\hbar }\int _0^{\infty }dt\,e^{i\omega t}\langle [S_Q^\alpha (t),S_{-Q}^\beta ]\rangle .$$ (1)

Our results are then directly comparable with both Nuclear Magnetic Resonance (NMR) results and theoretical calculations. Here, some aspects of the spin dynamics obtained in the bilayer system will be presented in relation to recent results reported by other groupsincdai ; mookinc . However, it is useful first to recall the main features of magnetic correlations in the $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{6+\mathrm{x}}`$ (YBCO) system over doping and temperaturerossat91 ; lpr ; sympo ; tony1 ; tony2 ; revue .

## Energy-dependences

We first emphasize the energy dependence of the spin susceptibility at the AF wave vector, $`Q_{AF}=(\pi ,\pi )`$, for x $`\gtrsim `$ 0.6 (or $`T_C\gtrsim `$ 60 K). $`Im\chi `$ in the normal state is basically well described in the underdoped regime by a broad peak centered around $`\sim `$ 30 meV (see Fig. 1)revue . Upon heating, the AF spin susceptibility spectral weight is reduced without noticeable renormalization in energy. Going into the superconducting state, a more complex line shape is observed, essentially because a strong enhancement of the peak susceptibility occurs at some energy. This new feature is referred to as the resonance peak, as it is basically resolution-limited in energy (see e.g. rossat91 ; tony1 ; dai ). With increasing doping, the resonant peak becomes the major part of the spectrumrevue . At each doping, the peak intensity at the resonance energy is characterized by a striking temperature dependence displaying a pronounced kink at $`T_C`$ tony97 ; epl ; dai ; tonynew . Therefore, this mode is a novel signature of the unconventional superconducting state of cuprates which has spawned considerable theoretical activity. Most likely, the magnetic resonance peak is due to electron-hole pair excitation across the superconducting energy gap tony1 ; revue . The resonance peak may or may not be located at the same energy as the normal state peak. Fig. 1 displays a case where the two occur at different energies. However, at lower doping, these two features are located around similar energies, $`\hbar \omega \simeq `$ 30-35 meV for x $`\simeq `$ 0.6-0.8revue ; epl ; tonynew . Indeed, the resonance energy more or less scales with the superconducting transition temperaturerevue ; tony97 ; epl whereas the normal state maximum does not shift much over the phase diagram for x $`\gtrsim `$ 0.6revue .
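For readers less familiar with how such absolute-unit susceptibilities relate to raw neutron intensities, a minimal sketch of the standard detailed-balance bookkeeping is given below; the damped-oscillator form chosen for Im$`\chi `$ is purely illustrative and is not a fit to the data discussed here:

```python
import numpy as np

# Standard relation between the measured dynamical structure factor and the
# susceptibility of eq. (1): S(Q,w) = [1 + n(w)] Im(chi)(Q,w) / pi, with n the
# Bose occupation factor. The model Im(chi) below is an assumed toy form.
def bose(w_meV, T_K, kB=0.0861733):   # kB in meV/K
    return 1.0 / (np.exp(w_meV / (kB * T_K)) - 1.0)

w = np.linspace(5.0, 60.0, 12)        # energy transfer (meV)
w0, G, A = 30.0, 10.0, 1.0e4          # peak position, width, scale (all assumed)
im_chi = A * G * w / ((w**2 - w0**2)**2 + G**2 * w**2)
S = (1.0 + bose(w, 100.0)) * im_chi / np.pi    # at T = 100 K
print(np.round(S, 3))
```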
Apart from the sharp resonance peak, the broad contribution (around $`\sim `$ 30 meV) is still discernible below $`T_C`$ as a shoulder, shown around $`\hbar \omega \simeq `$ 35 meV in Fig. 1revue . In the superconducting state, the situation looks more complex as the low energy spin excitations are removed below a threshold, the so-called spin gaprossat91 ; lpr ; revue , likely related to the superconducting gap itself. The non-resonant contribution has not received much attention so far. However, its spectral weight in the normal state is important and may be crucial for a mechanism for high-$`T_C`$ superconductivity based on antiferromagnetismpines . With increasing doping, the latter peak is continuously reduced: it becomes too weak to be measured in INS experiments in the overdoped regime $`\mathrm{YBCO}_7`$lpr ; tony1 ; tony2 ; revue . Using the same experimental setup and the same samplelpr ; revue , no antiferromagnetic fluctuations are discernible in the normal state above the nuclear background. Consistently, in the SC state, an isolated excitation around 40 meV is observed, corresponding to the resonance peak. Above $`T_C`$, an upper limit for the spectral weight can be giventony2 which is about 4 times smaller than in $`\mathrm{YBCO}_{6.92}`$revue . Assuming the same momentum dependence as $`\mathrm{YBCO}_{6.92}`$, it would give a maximum of the spin susceptibility less than 80 $`\mu _B^2/eV`$ at $`(\pi ,\pi )`$ in our units. Therefore, even though $`\mathrm{YBCO}_7`$ may be near a Fermi liquid picturerevue with weak magnetic correlations, the spin susceptibility at $`Q=(\pi ,\pi )`$ can still be $`\sim `$ 20 times larger than the uniform susceptibility measured by macroscopic susceptibility or deduced from the NMR Knight shiftpines . $`Im\chi `$ is thus naturally characterized in the superconducting state by two contributions having opposite doping dependences, the resonance peak becoming the major part of the spectrum with increasing doping. The discussion of Im$`\chi `$ in terms of two contributions has not been emphasized by all groupsdai . However, we would like to point out that this offers a comprehensive description consistent with all neutron data in YBCO published so far. In particular, it provides a helpful description of the puzzling modification of the spin susceptibility induced by zinc substitutionyvan ; tonyzn by noticing that, on the one hand, zinc reduces the resonant part of the spectrum and, on the other hand, it restores AF non-resonant correlations in the normal staterevue . Interestingly, the incommensurate peaks recently observed below the resonance peak in $`\mathrm{YBCO}_{6.6}`$incdai ; mookinc ; arai support the existence of two distinct contributions, as the low energy incommensurate excitations cannot belong to the same excitation as the commensurate sharp resonance peak. Finally, these two contributions do not have to be considered as independent and superimposed excitations: the occurrence of the sharp resonance peak clearly affects the full energy shape of the spin susceptibilityrossat91 ; revue ; tony97 ; epl ; dai . We shall see below that the spin susceptibility q-shape is also modified below $`T_C`$.

## Momentum-dependences

In momentum-space, although both contributions are most generally peaked around the commensurate in-plane wavevector $`(\pi ,\pi )`$, they exhibit different q-widths.
The resonance peak is systematically related to a doping-independent q-width, $`\mathrm{\Delta }q^{reso}=0.11\pm 0.02`$ Å⁻¹sympo (HWHM), and hence to a characteristic real-space distance, $`\xi =1/\mathrm{\Delta }q^{reso}\simeq 9`$ Å. Recent data dai ; epl ; tonynew ; arai agree with that conclusion. In contrast, the non-resonant contribution exhibits a larger and doping-dependent q-width, so that the momentum width displays a minimum versus energy at the resonance peak energysympo ; epl ; arai . Recently, in the underdoped regime $`x=0.6`$, Dai et al.incdai reported low temperature q-scans at $`\hbar \omega `$= 24 meV which were peaked at an incommensurate wavevector. Later, Mook et al.mookinc have detailed the precise position of these features, displaying a complex structure of the incommensurability with a square-like shape with four more intense corners at $`Q=(\pi ,\pi (1\pm \delta ))`$ and $`Q=(\pi (1\pm \delta ),\pi )`$ with $`\delta `$= 0.21. Interestingly, the energy where this structure is reported is systematically located in a small energy range below the resonance energy, $`E_r`$= 34 meVdai ; arai . Further, this structure is only clearly observed at temperatures below $`T_C`$. In the normal state, its existence remains questionable owing to background subtraction difficulties in such unpolarized neutron experimentsmookinc ; arai . A broad commensurate peak is unambiguously recovered above 75 K in polarized beam measurementsdai . To clarify that situation, we have performed a polarized neutron triple-axis experiment on an underdoped sample $`\mathrm{YBCO}_{6.7}`$ with $`T_C`$= 67 Ktony97 ; tonynew . The experiment was performed on IN20 at the Institut Laue Langevin (Grenoble) with a final wavevector $`k_F`$= 2.662 Å⁻¹ (experimental details will be reported elsewheretonynew ). Fig. 2 displays q-scans at $`\hbar \omega `$= 24 meV in the spin-flip channel at two temperatures: T=14.5 K and T= $`T_C`$ + 3 K. The polarization analysis, and especially the comparison of the two guide field configurations (H // Q and H $`\perp `$ Q), allows the phonon contributions to be removed unambiguouslytony2 . Surprisingly, the magnetic intensity is found to be basically commensurate at both temperatures. Tilted goniometer scans have been performed to pass through the locus of the reported incommensurate peaksmookinc : less magnetic intensity is measured there, meaning that there is no clear sign of incommensurability in that sample. However, Fig. 2 shows different momentum shapes at the two temperatures: a flatter top shape is found at low temperature, indicating that the momentum dependence of the spin susceptibility evolves with temperature. Fig. 3 underlines this point as it displays the temperature dependence of the intensity at both the commensurate wavevector and the incommensurate positions (along the (310) direction as reported in Ref. incdai ). Two complementary behaviors are found: at the commensurate position, the peak intensity is reduced at $`T_C`$dai whereas at the incommensurate position the intensity increases at a temperature which likely corresponds to $`T_C`$. As quoted by Dai et al.incdai , on cooling below $`T_C`$, the spectrum rearranges itself with a suppression at the commensurate point accompanied by an increase in intensity at incommensurate positions. Therefore, even though our $`\mathrm{YBCO}_{6.7}`$ sample does not exhibit well-defined incommensurate peaks, quite similar temperature dependences are observed in both samples.
Superconductivity likely triggers a redistribution of the magnetic response in momentum space, which may marginally result in an incommensurate structure in a narrow energy range. Interestingly, the sharp resonance peak occurs simultaneously. Thus, superconductivity affects the spin susceptibility shape in both momentum and energy space. The interplay between the resonant and the non-resonant contributions may then induce the incommensurate structure. In this respect, the magnetic incommensurability found in the $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ system would have a different origin, as the wavevector dependence of Im$`\chi `$ in LSCO remains essentially the same across the superconducting transitionmason .

## Concluding remarks

Energy and momentum dependences of the antiferromagnetic fluctuations in the high-$`T_C`$ cuprate YBCO have been discussed. The sharp resonance peak occurs exclusively below $`T_C`$. It is likely an intrinsic feature of the copper oxides, as it has recently been discovered in $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\delta }`$resobsco . This resonance peak is accompanied in underdoped samples by a broader contribution remaining above $`T_C`$.
# Quasars as Extreme Cases of Galaxies

## 1 Introduction

In recent years, evidence has been mounting that quasars are extreme cases of galaxies rather than being truly different phenomena (Athreya (1996); Barthel (1989); Pasachoff (1989)). One generally believes that the galaxies, as separate units, originated through some sort of gravitational instability. One assumes that a fluctuation in density either developed or pre-existed in the proto-galaxies from which the galaxies were to form. As a fluctuation grew in mass, it collapsed under the action of gravity, cooled, and eventually a galaxy was formed. If we assume that the quasars are extreme cases of galaxies, we must seek some characteristic physical parameters which are responsible for the observational differences of these objects. Field and Colgate (hereafter FC) (1976) considered the angular velocity of proto-galaxies as such a characteristic parameter. We will present a simple approach to this model using the Tully-Fisher relation (1977) and the Oegerle-Hoessel phenomenological formula (1991) to obtain a relation between the luminosity and the angular velocity of the galaxies. The FC model assumes that the size of the galaxies, the average mass of their constituent stars and their total energy output depend on the rotation rate of the proto-galaxies or, equally, on the balance of gravity with the centrifugal force at the end of the contraction process. As an example, assume two proto-galaxies with the same initial mass and size, but one with an angular velocity ten times that of the other. The centrifugal force will then be a hundred times weaker for the slowly rotating proto-galaxy. Such an object will eventually be about fifty times smaller in size and have, on the average, stars about a hundred times more massive than those of the fast-rotating one. Assuming that their constituent stars are of main-sequence type, the compact object will generate about $`2.5\times 10^3`$ times more energy than the extended one. According to the FC model, the compact and the extended sources in the preceding example are the representatives of a quasar and an ordinary galaxy, respectively. Here we investigate the distribution of quasars in different luminosity classes, taking into account their look-back time. The result seems to agree with the FC model, provided one assumes a so-called ”decay” mechanism for these objects. Of course some people do not accept the FC model and consider different scenarios that assume the quasars to be objects that by some evolutionary mechanism become dimmer and dimmer in the course of time (Kembhavi and Narlikar (1997)) or, alternatively, to be a certain phase in the process of galaxy formation (Haehnelt and Rees (1993)). However, these propositions need, in turn, further investigation. Furthermore, we have considered the space distribution of about $`40,000`$ ordinary galaxies extracted from the LEDA database (Paturel et al. (1996)). The result looks like the distribution of quasars associated with the lowest luminosity class. In other words, it seems that the galaxies and quasars may be brought under one and the same umbrella rather than being different phenomena. The latter claim is also supported by the behavior of the luminosity functions of these objects. The luminosity function analysis is also employed to introduce a critical angular velocity which specifies the branch point of the evolution of proto-galaxies into quasars and normal galaxies.
It is well known that the clusters of galaxies in space are linked in a filamentary network with great voids between them (Seeds (1994)). These observations concerning the large-scale structure of the universe may also be addressed by studying the distribution of the quasars. This is done, and a few filamentary structures and voids are resolved.

In section 2 the results and discussions are given. Section 3 is devoted to concluding remarks.

## 2 Results and Discussion

If one assumes that the quasars evolved from the slowly rotating proto-galaxies and, therefore, possess much more massive stars, one should accept that they must also evolve faster than the ordinary galaxies. Considering the previous example, a quasar formed in this way would have a half-life proportional to the inverse square of its typical stellar mass, if populated by main-sequence stars, and would evolve about $`10^4`$ times faster than the corresponding ordinary galaxy. Quasars containing stars with masses greater than eight solar masses may eventually disappear from contact with the rest of the universe as a result of collapse after consuming their energy sources. This process, if it occurs, may lead to an evolutionary decay mechanism working on these objects in the course of time. One possible way to test this phenomenon is to study the behavior of the distribution of quasars in space. Considering the look-back time of these relatively distant objects, one expects a nonuniform distribution for them. In other words, the plot of the number density of quasars versus distance or, in some sense, versus time, should reveal more quasars at large distances (i.e., long ago) than in nearby regions. However, this is not satisfied by Fig. 1, which shows a more or less uniform large-scale density distribution for them. We will return to this point later. The required parameters for the investigation of the quasars in the present work are obtained using their absolute magnitudes and redshifts given by Veron et al. (1991). Also the validity of Hubble's law, with the value 75 $`\mathrm{km}/\mathrm{sec}/\mathrm{Mpc}`$ for the Hubble constant, is assumed. The completeness of the data is tested by the well-known $`V/V_m`$ method, first used by Maarten Schmidt to study the space distribution of a complete sample of radio quasars from the 3CR catalogue (Schmidt (1968)).

Let us come back to the FC model and introduce an alternative approach to it on the basis of the Tully-Fisher relation, which is

$$v=Al^{0.22},$$ (1)

where $`l`$ and $`v`$ are the luminosity and the circular rotation velocity of the galaxies, respectively, and $`A`$ is a constant. On the other hand, one may consider the Oegerle-Hoessel phenomenological formula,

$$r=Bv^{1.33}<SB>^{-0.83},$$ (2)

where $`r`$ and $`<SB>`$ are the characteristic radius and mean surface brightness of the galaxies, respectively, and $`B`$ is a constant. One then uses Eqs. (1) and (2) to derive

$$l=C\omega ^{-0.7},$$ (3)

where $`\omega `$ is the angular velocity and $`C`$ is a constant of proportionality. Equation (3) implies that the luminosity increases as the angular velocity decreases, consistent with the implication of the FC model. The corresponding data for galaxies of known S0 morphological type are plotted in Fig. 2. Here, the units of the luminosity and the angular velocity are $`10^{33}erg/sec`$ and $`10^{-15}rad/sec`$, respectively. Fitting a function of the form $`l=constant.\omega ^\alpha `$ to this figure gives the value $`-0.78`$ for $`\alpha `$, consistent with Eq. (3).
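The quoted slope can be reproduced with a simple straight-line fit in log-log space; the synthetic points below merely stand in for the S0 sample of Fig. 2 (the exponent $`-0.78`$ is injected by hand):

```python
import numpy as np

# Power-law fit l = C * omega^alpha done as a linear fit of log l vs log omega.
# Synthetic data with slope -0.78 replace the actual S0 galaxy sample.
rng = np.random.default_rng(1)
omega = np.logspace(-1, 1, 60)                     # units of 1e-15 rad/sec
l_obs = 3.0 * omega**(-0.78) * rng.lognormal(0.0, 0.2, omega.size)
alpha, logC = np.polyfit(np.log10(omega), np.log10(l_obs), 1)
print(f"alpha = {alpha:.2f}, C = {10**logC:.2f}")  # alpha ~ -0.78
```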
A question which arises here is: how do quasars with different angular velocities evolve? According to the FC model, the quasars made from proto-galaxies with lower angular velocities will evolve faster than the normal galaxies made from proto-galaxies with higher angular velocities. Therefore, if one divides the full range of the complete sample of observed quasars into different luminosity classes, one expects that the evolution behavior will look different for different classes, and the discrepancy encountered before may be removed. To do this we have classified the data into 5 luminosity classes, in such a way that the luminosity increases with the order of the class. The ranges of luminosity are not necessarily equal for each class and are chosen arbitrarily to be 0.0-0.5, 0.5-8.0, 8.0-40.5, 40.5-128.0 and 128.0-312.5 in units of $`10^{45}erg/s`$, respectively. The corresponding distributions are plotted in Figs. 3 to 7. As an overall view, it is clear from these figures that the decay mechanism is more pronounced for the luminous quasars, as expected. The space distribution of quasars associated with the 1st class, i.e. the dimmest quasars, as plotted in Fig. 3, is almost uniform. The situation demonstrates a group of quasars that consume their energy sources at a relatively low rate. It means that the decaying process goes rather slowly for this class. The situation is different for the 2nd and 3rd classes, as shown in Figs. 4 and 5. The slope of the distribution changes sign gradually for these classes, while the quasars plotted in the left portion of the diagram are gradually diminished. This may be interpreted to mean that the corresponding quasars produce energy at a higher rate than those in the first class. In Fig. 6, it is seen that quasars of the 4th luminosity class are no longer observed at distances less than about 2 Gpc. In other words, they disappeared more than about 6.5 billion years ago due to their relatively high rate of energy output. The situation is still more pronounced for the 5th class of quasars, which are observed only at distances greater than about 2.7 Gpc and therefore date back at least about 8.7 billion years, as shown in Fig. 7. A rapid increase in the density of quasars at the right extremes of the 4th and 5th classes, seen in Figs. 6 and 7, may be considered responsible for a remarkable increase in the density of observed quasars in the large-distance limit, as shown in Fig. 1. Therefore, one may conclude that the FC model, modified by admitting the notions of a decay mechanism and the look-back time, is supported by observations. By the same procedure we have investigated the distribution of a sample of about 40,000 normal galaxies in space. The result is plotted in Fig. 8. It looks like the distribution of quasars associated with the first luminosity class, probably indicating a common origin for these objects. As a supporting idea, the luminosity functions of the quasars and normal galaxies are investigated and plotted in Fig. 9. It is seen from this figure that the absolute magnitude of observed quasars starts more or less from the value at which that of the galaxies ends. This may be considered as further observational evidence that these objects are the same phenomenon, with different manifestations running from Fig. 3 to Fig. 8.
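The bookkeeping behind Figs. 3-7 can be sketched as follows; the redshifts and absolute magnitudes are random placeholders for the catalogue entries, and the magnitude-to-luminosity conversion assumes a solar bolometric calibration:

```python
import numpy as np

# Absolute magnitude -> luminosity, redshift -> Hubble-law distance, then
# binning into the five luminosity classes quoted in the text.
c_km_s, H0 = 2.998e5, 75.0                     # km/s and km/sec/Mpc
L_sun, M_sun = 3.8e33, 4.8                     # erg/s, bolometric (assumed)
edges = np.array([0.0, 0.5, 8.0, 40.5, 128.0, 312.5]) * 1e45   # erg/s

rng = np.random.default_rng(2)
z = rng.uniform(0.05, 3.0, 1000)               # placeholder redshifts
M_abs = rng.uniform(-28.0, -23.0, 1000)        # placeholder absolute magnitudes

d_Mpc = c_km_s * z / H0                        # linear Hubble-law distance
L = L_sun * 10.0**(-0.4 * (M_abs - M_sun))     # erg/s
cls = np.digitize(L, edges)                    # 1..5 = luminosity class
for c in range(1, 6):
    sel = cls == c
    mean_d = d_Mpc[sel].mean() if sel.any() else float("nan")
    print(f"class {c}: N = {sel.sum():4d}, <d> = {mean_d:7.1f} Mpc")
```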
Another observational result which may be obtained from Fig. 9 is the introduction of a so-called ”critical angular velocity”. Proto-galaxies with angular velocities less than this critical value will eventually evolve to form quasars, and those with greater values will finally make normal galaxies. This is implied by Eq. (3), indicating a one-to-one correspondence between the absolute magnitudes and the angular velocities. In addition, the peak values of the number of quasars and galaxies may correspond to the so-called ”most probable angular velocities”, at which the proto-galaxies tend to form quasars and galaxies at the highest rate. As another result, one may consider the distribution of the quasars, assuming them to be at cosmological distances, to recognize their correlation in forming large-scale filamentary structures and voids. They are intrinsically more luminous than ordinary galaxies, can be observed as the farthest objects and thus may be employed for considerably deep exploration. In this respect, the entire sky map of the quasars is plotted in galactic coordinates, as shown in Fig. 10. The region of missing quasars is due to the dust clouds in our galaxy which block our view of other quasars. Note the voids and the clumpy, filament-like distribution of quasars. Their typical dimensions are comparable to that of the ”Great Wall”. As an example, the size of one denoted by A, with approximate galactic latitude and longitude of $`+52^{\mathrm{deg}}`$ and $`80^{\mathrm{deg}}`$, respectively, is about 200 Mpc. Note some other structures around the galactic north and south poles.

## 3 Concluding Remarks

A modified version of the FC model governing the formation and evolution of quasars and galaxies is considered. The notions of look-back time and a decay process for these objects are inherent properties of the model. The logic behind it, however, is different. In particular, no need arises to postulate the existence of supermassive stars, which are not well understood yet. An arrangement of the quasars in different luminosity classes and investigation of their evolution behavior via each class provides a reasonable attempt to unify the origin of these objects with that of the normal galaxies. The unifying aspect is also supported by investigating the luminosity functions of these objects. Another interesting phenomenological idea is the introduction of the notion of a critical angular velocity. The proto-galaxies may eventually evolve to quasars or galaxies depending on whether their angular velocities are less or greater than the critical value. Further, the most probable rate of formation of these objects seems to correspond to certain values of the angular velocities, which may be obtained using Fig. 9 and Eq. (3). A few filamentary structures and voids may be distinguished on the entire sky map of the quasars plotted in galactic coordinates. It seems that quasars, if they are at cosmological distances, are suitable candidates for this purpose.

###### Acknowledgements.

We are grateful to Professor Y. Sobouti for his helpful comments. We have made use of data from the Lyon-Meudon Extragalactic Database (LEDA) compiled by the LEDA team at the CRAL-Observatoire de Lyon (France). We have also made use of data from the Scientific Report of the European Southern Observatory compiled by Veron, M. P. and Veron, P.
# Quantum Conductors in a Plane

When electrons are confined to move in a plane, strange things happen. For example, under normal circumstances, they are not expected to conduct electricity at low temperatures. The absence of electrical conduction in two dimensions at zero temperature has been one of the most cherished paradigms in solid state physics. In fact, the 1977 physics Nobel prize was awarded, in part, for the formulation of the basic principle on which this result is based. However, recent experiments on a dilute electron gas confined to move at the interface between two semiconductors pose a distinct counterexample to the standard view. Transport measurements reveal that as the temperature is lowered, the resistivity drops without any signature of the anticipated up-turn required by the standard account. It is the possible existence of a new conducting state and hence a new quantum phase transition in two dimensions that is the primary focus of this session. In the absence of a magnetic field, the only quantum phase transition known to exist in two dimensions (2D) that involves a conducting phase is the insulator-superconductor transition. Consequently, this session focuses on the general properties of quantum phase transitions, the evidence for the new conducting state in a 2D electron gas and the range of phenomena that can occur in insulator-superconductor transitions. Unlike classical phase transitions, such as the melting of ice, all quantum phase transitions occur at the absolute zero of temperature. While initially surprising, this state of affairs is expected as quantum mechanics is explicitly a zero-temperature theory of matter. As such, quantum phase transitions are not controlled by changing system parameters such as the temperature, as in the melting of ice, but rather by changing some external parameter such as the number of defects or the magnitude of an applied magnetic field. In all instances, the underlying quantum mechanical states are transformed between ones that either look different topologically or have distinctly different magnetic properties. Two examples of quantum phase transitions are the disorder-induced metal-insulator transition and the insulator-superconductor transition. In a clean crystal, electrons form perfect Bloch waves or traveling waves and move unimpeded throughout the crystal. When defects (disorder) are present, electrons can become characterized by exponentially-decaying states which cannot carry current at zero temperature because of their confined spatial extent. In a plane, the localization principle establishes that as long as electrons act independently, only localized states form whenever disorder is present. However, if for some reason electrons are attracted to one another, for example through a third party, they can form pairs. Such pairs constitute the charge carriers in a superconductor and are called Cooper pairs. Superconductors are perfect conductors of electricity and therefore have a vanishing resistance. However, formation of Cooper pairs is not a sufficient condition for superconductivity. If one envisions dividing a material into partitions, insulating behavior obtains if each partition at each snapshot in time has the same number of Cooper pairs. That is, the state is static. However, if the number of pairs fluctuates between partitions, transport of Cooper pairs is possible and superconductivity obtains.
The fundamental physical principle that drives all quantum phase transitions is quantum uncertainty or quantum entanglement. A superconductor can be viewed as an entangled state containing all possible configurations of the Cooper pairs. Scattering a single Cooper pair would require disrupting each configuration in which that Cooper pair resides. Since each Cooper pair exists in each configuration (of which there are an infinite number), such a scattering event is highly improbable. We refer to a superconducting state then as possessing phase coherence, that is, rigidity to scattering. Insulators lack phase coherence. In the insulating state, the certainty in the particle number within each partition is counterbalanced by the complete loss of phase coherence. In a superconductor, phase certainty gives rise to infinite uncertainty in the particle number. Consequently, the product of the number uncertainty times the uncertainty in phase is the same on either side of the transition, as dictated by the Heisenberg uncertainty principle. In essence, quantum uncertainty is to quantum phase transitions what thermal agitation is to classical phase transitions. Both transform matter from one state to another. In the experiments revealing the new conducting phase, the tuning parameter is the concentration of charge carriers. For negatively-charged carriers, such as electrons, a positive bias voltage is required to adjust the electron density: the more positive, the higher the electron density. Subsequently, if the electrons are confined to move laterally at the ultra-thin (25Å) interface between two semiconductors, transport will be two-dimensional as it is confined to a plane. Devices of this sort constitute a special kind of transistor, not too dissimilar from those used in desktop computers. As illustrated in Fig. (1), when the electron density is slowly increased beyond $`10^{11}/cm^2`$, the resistivity changes from increasing (insulating behavior) to decreasing as the temperature decreases, the signature of conducting behavior. At the transition between these two limits, the resistivity is virtually independent of temperature. While it is still unclear ultimately what value the resistivity will acquire at zero temperature, the marked decrease in the resistivity above a certain density is totally unexpected and, more importantly, not predicted by any theory. Whether we can correctly conclude that a zero-temperature transition exists between two distinct phases of matter is still not settled, however. Nonetheless, the data do possess a feature common to quantum phase transitions, namely scale invariance. In this context, scale invariance simply implies that the data above the flat region in Fig. (1) all look alike. This also holds for the data below the flat region in Fig. (1). As a consequence, the upper and lower families of resistivity curves at various densities can all be made to collapse onto just two distinct curves by scaling each curve with the same density-dependent scale factor. The resultant curves have slopes of opposite sign, as shown in the inset of Fig. (1). It is difficult to reconcile this bipartite structure unless the two phases are in fact distinct electrically at zero temperature. These experiments lead naturally to the question: what is so special about the density regime probed? We know definitively that at high and ultra-low densities, a 2D electron gas is localized by disorder.
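As a concrete (and deliberately oversimplified) illustration of the scaling collapse described above, suppose the resistivity followed a toy activated form on both sides of the critical density; then a single density-dependent temperature scale maps every curve onto one of two master branches:

```python
import numpy as np

# Toy two-branch scaling collapse: rho(T, n) = exp((n_c - n)/T) is an assumed
# stand-in for the measured resistivity. Rescaling T by T0(n) = |n - n_c|
# maps every curve onto exp(+1/u) (insulating) or exp(-1/u) (conducting).
n_c = 1.0
T = np.linspace(0.1, 1.0, 5)
for n in [0.8, 0.9, 1.1, 1.2]:
    rho = np.exp((n_c - n) / T)
    u = T / abs(n - n_c)                      # scaled temperature
    branch = np.exp(np.sign(n_c - n) / u)
    assert np.allclose(rho, branch)           # lies exactly on one master branch
    print(f"n = {n}: collapses onto exp({'+' if n < n_c else '-'}1/u)")
```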
Because the Coulomb interaction decays as $`1/r`$ (with $`r`$ the separation between the electrons) whereas the kinetic energy decays as $`1/r^2`$, Coulomb interactions dominate at low density. At sufficiently low electron densities, the electrons form a crystal. It is precisely between the ultra-low crystalline limit and the non-interacting regime that the possibly new conducting phase resides. This density regime represents one of the yet-unconquered frontiers in solid state physics. Experimentally, it is clear that whatever happens in this intermediate density regime is far from ordinary, as evidenced by the observed destruction of the conducting phase by an applied in-plane magnetic field. As an in-plane magnetic field can only polarize the spins, the conducting phase is highly sensitive to the spin state, a key characteristic of superconductivity. Experimentally, a direct transition from a superconductor to an insulator in 2D has been observed by two distinct mechanisms. The first is simply by decreasing the thickness of the sample. This effectively changes the scattering length and hence is equivalent to changing the amount of disorder. As a result, Cooper pairs remain intact throughout the transition. While single electrons are localized by disorder, Cooper pairs in a superconducting state are not. Under normal circumstances, Cooper pairs give rise to a zero resistance state at $`T=0`$. The second means by which a superconducting state can be transformed to an insulator in 2D is by applying a perpendicular magnetic field. A perpendicular magnetic field creates resistive excitations called vortices (the dual of Cooper pairs) which frustrate the onset of global phase coherence. Surprisingly, however, in both the disorder- and magnetic-field-tuned transitions, the resistivity has been observed to flatten on the ‘superconducting’ side. The non-vanishing of the resistivity is indicative of a lack of phase coherence. Phase fluctuations are particularly strong in 2D and are well known to widen the temperature regime over which the resistivity drops to zero. However, the precise origin of the flattening of the resistivity (an indication of a possible metallic state) at low temperatures is not known. Ultimately, the resolution of the experimental puzzles raised here must be settled by further experiments. But a natural question that arises is, are the two phenomena related? This question is particularly germane because the only excitations proven to survive the localizing effect of disorder in 2D are Cooper pairs. It is partly for this simple reason, and for other more complex arguments, that superconductivity has been proposed to explain the new conducting state in 2D. Because phase fluctuations create a myriad of options (‘metal’ or superconductor at T=0) for Cooper pairs in a plane, measurements sensitive to pair formation must augment the standard transport measurements to definitively settle whether Cooper pair formation is responsible for the new conducting state in a 2D electron gas. But maybe some yet-undiscovered conducting spin singlet state exists that can survive the localizing effect of disorder. But maybe not, and possibly only ‘classical’ trapping effects are responsible for the decrease of the resistivity on the conducting side. While the former cannot be ruled out, the latter seems unlikely as new experiments reveal the new conducting phase is tied to the formation of a Fermi surface and related to the plateau transitions in the quantum Hall effect.
This implies that a deep quantum mechanical principle is indeed responsible for the new conducting state; perhaps, as has been suggested, the proximity of the new conducting phase to a strongly-correlated insulator mediates pairing, as in the copper-oxide superconductors.
BA-TH/99-332, gr-qc/9902060 (to appear in Phys. Rev. D)

# Inflation and Initial Conditions in the Pre-Big Bang Scenario

M. Gasperini

Dipartimento di Fisica, Università di Bari, Via G. Amendola 173, 70126 Bari, Italy, and Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy

The pre-big bang scenario describes the evolution of the Universe from an initial state approaching the flat, cold, empty, string perturbative vacuum. The choice of such an initial state is suggested by the present state of our Universe if we accept that the cosmological evolution is (at least partially) duality-symmetric. Recently, the initial conditions of the pre-big bang scenario have been criticized as they introduce large dimensionless parameters allowing the Universe to be “exponentially large from the very beginning”. We agree that a set of initial parameters (such as the initial homogeneity scale, the initial entropy) larger than those determined by the initial horizon scale, $`H^{-1}`$, would be somewhat unnatural to start with. However, in the pre-big bang scenario, the initial parameters are all bounded by the size of the initial horizon. The basic question thus becomes: is a maximal homogeneity scale of order $`H^{-1}`$ necessarily unnatural if the initial curvature is small and, consequently, $`H^{-1}`$ is very large in Planck (or string) units? In the absence of experimental information one could exclude “a priori”, for large horizons, the maximal homogeneity scale $`H^{-1}`$ as a natural initial condition. In the pre-big bang scenario, however, pre-Planckian initial conditions are not necessarily washed out by inflation and are accessible (in principle) to observational tests, so that their naturalness could also be analyzed with a Bayesian approach, in terms of “a posteriori” probabilities.

Figure 1: Qualitative evolution of the horizon scale and of the proper size of a homogeneous region for (a) standard de Sitter inflation, and (b) pre-big bang superinflation, represented in the Einstein frame as a contraction. The time direction coincides with the vertical axis. The three horizontal spatial sections correspond, from top to bottom, to the present time, to the end and to the beginning of inflation. The shaded area represents the horizon, and the dashed lines its time evolution. The full curves represent the time evolution of the border of the homogeneous region, controlled by the scale factor.

Recently, the validity of the pre-big bang scenario as a viable inflationary model has been questioned on the grounds of its initial conditions. The main criticism raised against models in which the Universe evolves from the flat, zero-interactions, string perturbative vacuum is mainly based on two points. The first concerns the homogeneity problem, in particular the largeness of the initial homogeneous region in string (or Planckian) units; the second concerns the flatness problem, and in particular the two large dimensionless parameters (the inverse of the string coupling and of the curvature, in string units) characterizing the Universe at the beginning of inflation. The fact that, as a consequence of these large numbers, “the pre-big bang Universe must be very huge and homogeneous from the very beginning” is quoted as a serious problem, supporting the conclusion that “the current version of the pre-big bang scenario cannot replace usual inflation”.
I agree with the remarks concerning the initial size of the Universe (indeed, the need for an initial state with a Universe very large in Planck units was already noted in the first paper on the pre-big bang scenario and, even before, in the context of string-driven superinflation; in particular, the condition on the duration of inflation, reported in as eq. (8), was already derived in). The large initial size of the Universe is only part of the conditions to be imposed at the onset of pre-big bang inflation, and I also agree with the fact that a successful pre-big bang scenario requires an initial state characterized by very small (or very large) dimensionless ratios measuring the initial curvature and coupling constant, possibly leading to a fine-tuning problem, as first pointed out in. I disagree, however, with the conclusion presented in, and I would like to point out some arguments, hoping to clarify a different point of view on a large initial Universe. I will concentrate, in particular, on the largeness of the initial horizon scale, which can be thought to be at the root of the various objections discussed in. The large dimensionless ratios of the initial state, when referred to the Einstein frame in which the Planck length is fixed, correspond indeed to a small initial curvature in Planck units, and thus to a large horizon (in Planck length units), allowing a large homogeneous domain as initial condition. I do not pretend, of course, to provide a final answer to all problems. The modest aim of this paper is to stress that the problems raised in reduce, in the end, to the question of whether the horizon scale, irrespective of its size, may be a natural scale for determining the inflationary initial conditions (in particular, the size of the initial homogeneous region), and to suggest the possibility that the answer is not negative “a priori”, at least when the initial conditions are imposed well inside the classical regime, as in the case of the pre-big bang scenario.

Let me start by recalling that the kinematical problems of the standard scenario can be solved by two classes of accelerated backgrounds. Consider, for instance, the flatness problem, requiring a phase in which the ratio $`r=k/(a^2H^2)=k/\dot{a}^2`$ decreases, so as to compensate its growth up to the present value $`r<1`$ during the subsequent phase of standard evolution. By parametrizing the scale factor as $`a\sim |t|^\beta `$, the decrease of $`r\sim |t|^{2(1-\beta )}`$ can be arranged either by 1) $`\beta >1`$, $`t\rightarrow +\mathrm{\infty }`$, or 2) $`\beta <1`$, $`t\rightarrow 0_{-}`$. Both classes of backgrounds are accelerated, as $`\mathrm{sign}\dot{a}=\mathrm{sign}\ddot{a}`$. The first class corresponds to power-inflation, and includes de Sitter inflation in the limit $`\beta \rightarrow \mathrm{\infty }`$. The second class includes superinflation for $`\beta <0`$, and accelerated contraction for $`0<\beta <1`$. The main kinematic difference between the two classes is the behaviour of the event horizon, whose proper size is defined by

$$d_e(t)=a(t)\int _t^{t_M}dt^{\prime }a^{-1}(t^{\prime }).$$ (1)

Here $`t_M`$ is the maximal future extension of the cosmic time coordinate for the inflationary manifold. Therefore, $`t_M=+\mathrm{\infty }`$ for the first class, and $`t_M=0`$ for the second class of backgrounds. In both cases we find that the integral converges, and that $`d_e(t)\simeq |H|^{-1}(t)`$, so that the horizon size is constant or growing for class 1), shrinking for class 2), following the inverse behaviour of the curvature scale.
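Both statements about the horizon can be checked symbolically; the sketch below (using the representative exponents $`\beta =3/2`$ and $`\beta =1/2`$ for the two classes) confirms that the integral in (1) converges and that $`d_e`$ is proportional to $`|t|`$, i.e. to $`|H|^{-1}`$:

```python
import sympy as sp

# Event horizon (1) for power-law scale factors a ~ |t|^beta.
t, tp = sp.symbols("t tp", positive=True)   # tp plays the role of t'

# Class 1: a ~ t^beta with beta > 1 and t_M = +infinity (beta = 3/2 here)
beta1 = sp.Rational(3, 2)
d_e1 = t**beta1 * sp.integrate(tp**(-beta1), (tp, t, sp.oo))
print(sp.simplify(d_e1))    # -> 2*t, i.e. t/(beta-1), proportional to |H|^{-1}

# Class 2: a ~ (-t)^beta with 0 < beta < 1 and t_M = 0; substitute u = -t > 0
u, up = sp.symbols("u up", positive=True)
beta2 = sp.Rational(1, 2)
d_e2 = u**beta2 * sp.integrate(up**(-beta2), (up, 0, u))
print(sp.simplify(d_e2))    # -> 2*u, i.e. |t|/(1-beta), proportional to |H|^{-1}
```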
The phase of pre-big bang evolution, in particular, is dual to a phase of standard, decelerated evolution: its accelerated kinematics is characterized by a growing curvature scale (i.e. growing $`|H|`$), and may be represented as superinflation, in the string frame, or accelerated contraction, in the Einstein frame. In order to recall the criticism of we will now compare the kinematics of standard de Sitter inflation and pre-big bang superinflation, for an oversimplified cosmological model in which the standard radiation era begins at the Planck scale, and is immediately preceded by a phase of accelerated (inflationary) evolution. Also, for the sake of simplicity, we will identify at the end of inflation the present value of the string length $`L_s`$ with the Planck length $`L_p`$ (at tree-level, they are related by $`L_p=gL_s=\mathrm{exp}(\varphi /2)L_s`$, with a present dilaton expectation value $`g\simeq 0.1`$–$`0.01`$). At the beginning of the radiation era the horizon size is thus controlled by the Planck length $`L_p\simeq L_s`$, while the proper size of the homogeneous and causally connected region inside our present Hubble radius, rescaled down to the Planck epoch according to the standard decelerated evolution of $`a(t)`$, is of order $`10^{30}L_p`$, unnaturally larger than the horizon by the factor $`10^{30}`$. During the inflationary epoch, the ratio

$$\frac{\mathrm{proper}\mathrm{size}\mathrm{of}\mathrm{horizon}}{\mathrm{proper}\mathrm{size}\mathrm{of}\mathrm{homogeneous}\mathrm{region}}\simeq \frac{H^{-1}(t)}{a(t)}\propto \eta $$ (2)

must thus decrease at least by the factor $`10^{30}`$, so as to push the homogeneous region outside the horizon by the amount required by the subsequent decelerated evolution. Since the above ratio evolves linearly in conformal time $`\eta =\int a^{-1}dt`$, the condition of sufficient inflation can be written as

$$|\eta _f|/|\eta _i|<10^{-30},$$ (3)

where $`\eta _i`$ and $`\eta _f`$ mark, respectively, the beginning and the end of the inflationary epoch. Let us now compare de Sitter inflation, $`a\sim (-\eta )^{-1}`$, with a typical dilaton-dominated superinflation, $`a\sim (-t)^{-1/\sqrt{3}}\sim (-\eta )^{-1/(\sqrt{3}+1)}`$ (the same discussed in). In the standard de Sitter case the horizon and the Planck length are constant, $`H^{-1}\simeq L_s\simeq L_p`$; as we go back in time, according to eq. (3), $`a(t)`$ decreases by the factor $`10^{30}`$ so that, at the beginning of inflation, we find a homogeneous region just of size $`L_p`$, like the horizon. In the superinflation case, on the contrary, during the conformal time interval (3), $`a(t)`$ decreases only by the factor $`10^{30/(1+\sqrt{3})}\simeq 10^{11}`$ (i.e. $`a_i/a_f\simeq 10^{-11}`$), so that the size of the homogeneous region, at the beginning of inflation, is still large in string units, $`10^{30\sqrt{3}/(1+\sqrt{3})}L_s\simeq 10^{19}L_s`$. The situation is even worse in Planck units since, at the beginning of inflation, the string coupling $`\mathrm{exp}(\varphi /2)`$, and thus the Planck length $`L_p`$, are reduced with respect to their final values by the factor $`L_p/L_s=|\eta _f/\eta _i|^{\sqrt{3}/2}\simeq 10^{-15\sqrt{3}}`$, so that $`10^{19}L_s\simeq 10^{45}L_p`$. This, by the way, is exactly the initial size of the homogeneous region evaluated in the Einstein frame in which $`L_p`$ is constant, and the above dilaton-driven evolution is represented as a contraction, with $`a\sim (-\eta )^{1/2}`$ (see Fig. 1 for a qualitative illustration of the differences between de Sitter inflation and pre-big bang inflation).
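The exponents quoted in this example follow from simple arithmetic, which can be verified at a glance:

```python
import math

# Arithmetic behind the dilaton-dominated example with |eta_i/eta_f| = 10^30.
r3 = math.sqrt(3.0)
print(30 / (1 + r3))                 # ~ 10.98 : a(t) changes by ~ 10^11
print(30 * r3 / (1 + r3))            # ~ 19.02 : homogeneous patch ~ 10^19 L_s
print(15 * r3)                       # ~ 25.98 : L_p/L_s ~ 10^-26 at the onset
print(30 * r3 / (1 + r3) + 15 * r3)  # ~ 45.0  : 10^19 L_s ~ 10^45 L_p
```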
1 provides an acceptable example of an inflationary scenario, as the initial homogeneity scale is contained within a single domain of Planckian size. Case (b), on the contrary, is not satisfactory because of the initial homogeneity on scales much greater than Planckian, $`10^{19}L_s\approx 10^{45}L_p`$. Quoting Ref. , this situation “is not much better than the situation in the non-inflationary big bang cosmology, where it was necessary to assume that the initial size of the homogeneous part of our Universe was greater than $`10^{30}L_p`$”. I would like to stress, however, that in case (b) the initial homogeneous region is large in Planck units, but not larger than the horizon itself. Indeed, during superinflation, the horizon scale shrinks linearly in cosmic time. As we go backwards in time, for the particular example that we are considering, the horizon increases by the factor $`H_i^{-1}/H_f^{-1}=|t_i|/|t_f|=(\eta _i/\eta _f)^{\sqrt{3}/(1+\sqrt{3})}`$, so that, at the beginning of inflation, $`H^{-1}\approx 10^{30\sqrt{3}/(1+\sqrt{3})}L_s\approx 10^{19}L_s\approx 10^{45}L_p`$, i.e. the horizon size is just the same as that of the homogeneous region (as illustrated in Fig. 1). In this sense, both initial conditions, in cases (a) and (b), seem equally natural. The difference is that in case (b) the initial horizon is large in Planck units, while in case (a) it is of order one. This is an obvious consequence of the different curvature scales at the beginning of inflation. The question about the naturalness of the initial conditions thus seems to concern the unit of length used, in particular, to measure the size of the initial homogeneous domain and, more generally, to characterize the initial geometric configuration at the onset of inflation: which basic length scale has to be used, the Planck (or string) length, or the radius of the causal horizon? This, I believe, is the question to be answered. Providing a definite answer may require a careful analysis, which is outside the scope of this brief paper. Let me note that, according to , it is the Planck (or string) scale that should provide the natural units for the size of the initial homogeneous patches and for the initial curvature and coupling scale. This is certainly reasonable when initial conditions are imposed on a cosmological state approaching the high-curvature, quantum gravity regime. In the pre-big bang scenario, however, initial conditions are to be imposed when the Universe is deeply inside the low-curvature, weak coupling, classical regime. In that regime the Universe does not know about the Planck length, and the causal horizon $`H^{-1}`$ could represent a natural candidate for controlling the set of initial conditions. As far as homogeneity is concerned, however, I am not suggesting that the horizon (which is the maximal homogeneity scale) should always be assumed as the natural scale of homogeneity. I am suggesting that this possibility should be discussed on the basis of some quantitative and objective criterion, as attempted for instance in , and not discarded a priori, as in (see also for a discussion of “generic” initial conditions in a string cosmology context). One might think that, accepting the horizon size as a natural homogeneity scale, there is no need for inflation to explain our present homogeneous Universe . This is not the case, however, because if we go back in time without inflation our Universe must start in the past from a homogeneous region unnaturally larger than the horizon (see Fig. 1). 
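All the exponents quoted above follow from elementary arithmetic; a few lines of Python (a purely illustrative check, not part of the original argument) make the bookkeeping explicit:

```python
# Check of the quoted exponents for the dilaton-driven example with
# |eta_i/eta_f| = 10^30 (all numbers are decimal exponents).
from math import sqrt
r3 = sqrt(3.0)
shrink   = 30.0/(1.0 + r3)        # a_i/a_f = 10^-shrink            -> ~11
size_Ls  = 30.0 - shrink          # initial region, string units    -> ~19
Lp_shift = 15.0*r3                # L_s/L_p reduction at the onset   -> ~26
print(round(shrink, 1), round(size_Ls, 1), round(size_Ls + Lp_shift, 1))
# -> 11.0 19.0 45.0 : i.e. 10^19 L_s ~ 10^45 L_p, which is also the initial
#    horizon, since 30*sqrt(3)/(1+sqrt(3)) equals 30 - 30/(1+sqrt(3))
```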
Only with inflation does the homogeneous region, going back in time, re-enter the horizon. So, only if there is inflation is an initial homogeneity scale of the order of the horizon scale enough to reproduce our present Universe. Also, one might think, as noted in , that the classical homogeneity of the horizon might be destroyed by quantum fluctuations amplified during the contraction preceding the onset of the inflationary era, in such a way as to prevent the formation of a large homogeneous domain. This problem has been recently discussed in for the case of a homogeneous string cosmology background with negative spatial curvature: it has been shown that quantum fluctuations die off much faster than classical inhomogeneities as they approach the initial perturbative vacuum, and remain negligible throughout the perturbative pre-big bang phase. For classical perturbations, however, the situation is different, and no general result is presently available. The initial amplitude of the classical inhomogeneities is not normalized to a vacuum fluctuation spectrum, the results of cannot be applied, and inflation can occur successfully or not depending on the initial distribution of the classical amplitudes. Finally, one might argue that a large initial horizon, assuming a saturation of the bound imposed by the holographic principle in a cosmological context , implies a large initial entropy, $`S=`$ (horizon area in Planck units), and thus a small probability for the initial configuration. Indeed, if $`S`$ is large, the probability that such a configuration be obtained through a process of quantum tunnelling (proportional to $`\mathrm{exp}[-S]`$) is exponentially suppressed, as emphasized in . However, in the pre-big bang scenario, quantum effects such as tunnelling or reflection of the Wheeler-De Witt wave function are expected to be important towards the end of inflation , not at the beginning, as they may be effective for exiting , eventually, from the inflationary regime, not for entering it and explaining the origin of the initial state. A large entropy of the initial state, in the weakly coupled, highly classical regime, can only correspond to a large probability of such a configuration (proportional to $`\mathrm{exp}[S]`$), as expected for classical and macroscopic configurations. In conclusion, let me come back to the large dimensionless parameters characterizing the initial state of pre-big bang inflation . The physical meaning of those parameters, i.e. the fact that the initial string coupling and curvature are very small in string (or Planck) units, is to be understood as a consequence of the perturbative initial conditions, suggested by the underlying duality symmetries. On the other hand, whenever inflation starts at curvature scales smaller than Planckian, the initial state is necessarily characterized by a large dimensionless ratio – the inverse of the curvature in Planck units. If one believes that such large numbers should be avoided, then one should be prepared to accept the fact that natural initial conditions are only possible in the context of models in which inflation starts at the Planck scale: for instance chaotic inflation, as pointed out in . This is a rather strong conclusion, which rules out, as a satisfactory explanation of our present cosmological state, not only the pre-big bang scenario, but any model in which inflation starts at scales smaller than Planckian (unless we have a scenario with different stages of inflation responsible for solving different problems). 
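For orientation, the size of the entropy at stake is easy to quantify (an order-of-magnitude sketch; the saturation of the holographic bound is the assumption stated above):

```python
from math import log10
horizon_lp = 1e45            # initial horizon radius in Planck units, from above
S = horizon_lp**2            # S ~ horizon area in Planck units (holographic bound)
print(f"S ~ 10^{log10(S):.0f}")   # -> 10^90: exp(-S) tunnelling is hopelessly
                                  # suppressed, while the classical weight exp(+S)
                                  # is correspondingly enormous
```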
Even for a single stage of inflation very close to the Planck scale, however, we are not free of problems, as we are led, eventually, to the following question: can we trust the naturalness of inflationary models like chaotic inflation, in which classical general relativity is applied to set up initial conditions at Planckian curvature scales, i.e. deeply inside the non-perturbative, quantum gravity regime? The Planckian regime is certainly problematic to deal with, both in the string and in the standard inflationary scenario: in string cosmology, in particular, it prevents a simple solution of the “graceful exit” problem . The pre-big bang scenario, however, tries to look back in time beyond the Planck scale by using the powerful tools of superstring theory, in particular its duality symmetries. According to duality, the pre-Planckian Universe initially approaches the state of a low-energy system, and initial conditions are to be set up in a regime well described by the lowest order effective action, in which all quantum and higher-order corrections are small and under control. It is true, however, that the presence of the Planckian regime can indirectly affect the initial conditions in a string cosmology context as well, as it imposes a finite duration on the low-energy dilaton-driven phase: the initial homogeneity scale, as a consequence, has to be large enough to emerge with the required size at the Planck epoch, and to avoid the need for a further period of high-curvature, Planckian inflation . It should be stressed, finally, that the main difference from the standard scenario, in which all trace of the pre-Planckian cosmological state is washed out by inflation, is probably the fact that the pre-Planckian history may become visible, in the sense that its phenomenological consequences can be tested (at least in principle) even today . So, while in the context of standard inflation the naturalness criterion can be safely applied to select an initial state at the Planck scale, it seems difficult (in my opinion) to apply the same criterion in a string cosmology context, and to discard a model of pre-Planckian evolution only on the grounds of the large parameters characterizing the initial conditions. Such initial conditions have consequences accessible to observational tests, and the analysis of the “a posteriori” probabilities with the Bayesian approach of suggests that a state with a large initial horizon may become “a posteriori” natural, because of the duality symmetries intrinsic to the pre-big bang scenario. However, much further work is certainly needed before a final conclusion is reached. Irrespective of the final results, such work will certainly improve our present understanding of string theory and of the physics of the early Universe. ###### Acknowledgements. I wish to thank Raphael Bousso, Nemanja Kaloper, Andrei Linde, and Gabriele Veneziano for stimulating discussions and helpful comments (not necessarily in agreement with the personal point of view presented in this paper).
## 1 Introduction Supersymmetry is at present the only known framework in which the Higgs sector of the Standard Model (SM), so crucial for its internal consistency, is natural . The minimal version of the Supersymmetric Standard Model (MSSM) contains two Higgs doublets $`(H_1,H_2)`$ with opposite hypercharges: $`Y(H_1)=-1`$, $`Y(H_2)=+1`$, so as to generate masses for up- and down-type quarks (and leptons), and to cancel gauge anomalies. After spontaneous symmetry breaking induced by the neutral components of $`H_1`$ and $`H_2`$ obtaining vacuum expectation values, $`\langle H_1\rangle =v_1`$, $`\langle H_2\rangle =v_2`$, $`\mathrm{tan}\beta =v_2/v_1`$, the MSSM contains two neutral $`CP`$-even ($`h`$, $`H`$), one neutral $`CP`$-odd ($`A`$), and two charged ($`H^\pm `$) Higgs bosons . Because of gauge invariance and supersymmetry, all the Higgs masses and the Higgs couplings in the MSSM can be described (at tree level) in terms of only two parameters, which are usually chosen to be $`\mathrm{tan}\beta `$ and $`m_A`$, the mass of the $`CP`$-odd Higgs boson. In particular, all the trilinear self-couplings of the physical Higgs particles can be predicted theoretically (at the tree level) in terms of $`m_A`$ and $`\mathrm{tan}\beta `$. Once a light Higgs boson is discovered, the measurement of these trilinear couplings can be used to reconstruct the Higgs potential of the MSSM. This will go a long way toward establishing the Higgs mechanism as the basic mechanism of spontaneous symmetry breaking in gauge theories. Although the measurement of all the Higgs couplings in the MSSM is a difficult task, preliminary theoretical investigations by Plehn, Spira and Zerwas , and by Djouadi, Haber and Zerwas (DHZ) , of the measurement of these couplings at the LHC and at a high-energy $`e^+e^{-}`$ linear collider, respectively, are encouraging. We have considered in detail the question of possible measurements of the trilinear Higgs couplings of the MSSM at a high-energy $`e^+e^{-}`$ linear collider. We assume that such a facility will operate at an energy of 500 GeV with an integrated luminosity per year of $`\mathcal{L}_{\mathrm{int}}=500\text{ fb}^{-1}`$ . (This is a factor of 10 more than the earlier estimate.) In a later phase one may envisage an upgrade to an energy of 1.5 TeV. The trilinear Higgs couplings that are of interest are $`\lambda _{Hhh}`$, $`\lambda _{hhh}`$, and $`\lambda _{hAA}`$, involving both the $`CP`$-even and $`CP`$-odd Higgs bosons. The couplings $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ are rather small with respect to the corresponding trilinear coupling $`\lambda _{hhh}^{\mathrm{SM}}`$ in the SM (for a given mass of the lightest Higgs boson $`m_h`$), unless $`m_h`$ is close to the upper value (decoupling limit). The coupling $`\lambda _{hAA}`$ remains small for all parameters. Throughout, we include one-loop radiative corrections to the Higgs sector in the effective potential approximation. In particular, we take into account the parameters $`A`$ and $`\mu `$, the soft supersymmetry breaking trilinear parameter and the bilinear Higgs(ino) parameter in the superpotential, respectively, and as a consequence the left–right mixing in the squark sector, in our calculations. We thus include all the relevant parameters of the MSSM in our study . Related work has recently been presented by Dubinin and Semenov . 
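As a concrete illustration of the two-parameter tree-level structure, the following minimal sketch uses the standard tree-level formulas only (the one-loop corrections employed in the paper are deliberately omitted, so the numbers are indicative):

```python
import math

MZ = 91.19  # GeV

def tree_masses_and_alpha(mA, tan_beta):
    """Tree-level m_h, m_H (GeV) and CP-even mixing angle alpha (rad); mA != MZ."""
    beta = math.atan(tan_beta)
    s, c2b = mA**2 + MZ**2, math.cos(2.0*beta)
    d = math.sqrt(s*s - (2.0*mA*MZ*c2b)**2)
    mh, mH = math.sqrt(0.5*(s - d)), math.sqrt(0.5*(s + d))
    alpha = 0.5*math.atan(math.tan(2.0*beta)*s/(mA**2 - MZ**2))
    if alpha > 0.0:               # pick the branch with -pi/2 <= alpha <= 0
        alpha -= math.pi/2
    return mh, mH, alpha

print(tree_masses_and_alpha(300.0, 2.0))   # ~ (53 GeV, 309 GeV, -0.51 rad)
```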
For a given value of $`m_h`$, the values of these couplings significantly depend on the soft supersymmetry-breaking trilinear parameter $`A`$, as well as on $`\mu `$, and thus on the resulting mixing in the squark sector. Since the trilinear couplings tend to be small, and depend on several parameters, their effects are somewhat difficult to estimate. The dominant source of multiple production of the Higgs ($`h`$) boson is through Higgs-strahlung of $`H`$, and through production of $`H`$ in association with the $`CP`$-odd Higgs boson. This source of multiple production can be used to extract the trilinear Higgs coupling $`\lambda _{Hhh}`$. The non-resonant fusion mechanism for multiple $`h`$ production, $`e^+e^{-}\to \nu _e\overline{\nu }_ehh`$, involves two trilinear Higgs couplings, $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$, and is useful for extracting $`\lambda _{hhh}`$. ## 2 The Higgs Sector of the MSSM At the tree level, the Higgs sector of the MSSM is described by two parameters, which can be conveniently chosen as $`m_A`$ and $`\mathrm{tan}\beta `$ . There are, however, substantial radiative corrections to the $`CP`$-even neutral Higgs masses and couplings . They are, in general, positive, and they shift the mass of the lightest MSSM Higgs boson upwards. The Higgs mass falls rapidly at small values of $`\mathrm{tan}\beta `$. Since the LEP experiments are obtaining lower bounds on the mass of the lightest Higgs boson, they are beginning to rule out significant parts of the small-$`\mathrm{tan}\beta `$ parameter space, depending on the model assumptions. ALEPH finds a lower limit of $`m_h>72.2`$ GeV, irrespective of $`\mathrm{tan}\beta `$, and a limit of $`88`$ GeV for $`1<\mathrm{tan}\beta <2`$ . We take $`\mathrm{tan}\beta =2`$ to be a representative value. ## 3 Trilinear Higgs couplings In units of $`gm_Z/(2\mathrm{cos}\theta _\mathrm{W})=(\sqrt{2}G_F)^{1/2}m_Z^2`$, the relevant tree-level trilinear Higgs couplings are given by $`\lambda _{Hhh}^0`$ $`=`$ $`2\mathrm{sin}2\alpha \mathrm{sin}(\beta +\alpha )-\mathrm{cos}2\alpha \mathrm{cos}(\beta +\alpha ),`$ (3.1) $`\lambda _{hhh}^0`$ $`=`$ $`3\mathrm{cos}2\alpha \mathrm{sin}(\beta +\alpha ),`$ (3.2) $`\lambda _{hAA}^0`$ $`=`$ $`\mathrm{cos}2\beta \mathrm{sin}(\beta +\alpha ),`$ (3.3) with $`\alpha `$ the mixing angle in the $`CP`$-even Higgs sector, which can be calculated in terms of the parameters appearing in the $`CP`$-even Higgs mass matrix. The dominant one-loop radiative corrections are proportional to $`(m_t/m_W)^4`$ . The trilinear couplings depend significantly on $`m_A`$, and thus also on $`m_h`$. This is shown in Fig. 1, where we compare $`\lambda _{Hhh}`$, $`\lambda _{hhh}`$ and $`\lambda _{hAA}`$ for three different values of $`\mathrm{tan}\beta `$, and the SM quartic coupling $`\lambda ^{\mathrm{SM}}`$ (which also includes one-loop radiative corrections ). At low values of $`m_h`$, the MSSM trilinear couplings are rather small. For some value of $`m_h`$ the couplings $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ start to increase in magnitude, whereas $`\lambda _{hAA}`$ remains small. The values of $`m_h`$ at which they start becoming significant depend crucially on $`\mathrm{tan}\beta `$. For $`\mathrm{tan}\beta =2`$ (Fig. 1a) this transition takes place around $`m_h\approx 90`$–100 GeV, whereas for $`\mathrm{tan}\beta =5`$ the critical value of $`m_h`$ increases to 100–110 GeV (see Fig. 1b). 
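The tree-level couplings (3.1)–(3.3) are straightforward to evaluate directly; the sketch below (tree level only, using the tree-level mixing angle, so it reproduces neither the one-loop values nor the figures) shows their growth with $`m_A`$ at $`\mathrm{tan}\beta =2`$:

```python
import math

MZ = 91.19  # GeV

def trilinear_tree(mA, tan_beta):
    beta = math.atan(tan_beta)
    alpha = 0.5*math.atan(math.tan(2.0*beta)*(mA**2 + MZ**2)/(mA**2 - MZ**2))
    if alpha > 0.0:
        alpha -= math.pi/2                        # -pi/2 <= alpha <= 0
    bpa = beta + alpha
    lam_Hhh = 2.0*math.sin(2.0*alpha)*math.sin(bpa) - math.cos(2.0*alpha)*math.cos(bpa)
    lam_hhh = 3.0*math.cos(2.0*alpha)*math.sin(bpa)
    lam_hAA = math.cos(2.0*beta)*math.sin(bpa)
    return lam_Hhh, lam_hhh, lam_hAA

for mA in (150.0, 300.0, 800.0):
    print(mA, [round(v, 2) for v in trilinear_tree(mA, 2.0)])
# lam_Hhh and lam_hhh grow in magnitude with m_A while lam_hAA remains small,
# as described in the text (decoupling limit: lam_hhh -> 3 cos^2(2 beta))
```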
In this region, the actual values of $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ (for a given value of $`m_h`$) change significantly if $`A`$ becomes large and positive. A non-vanishing squark-mixing parameter $`A`$ is thus quite important. Also, for special values of the parameters, the couplings may vanish . To sum up the behaviour of the trilinear couplings, we note that $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ are small for $`m_h<100`$–120 GeV, depending on the value of $`\mathrm{tan}\beta `$. However, as $`m_h`$ approaches its maximum value, which is reached rapidly as $`m_A`$ becomes large, $`m_A>200`$ GeV, these trilinear couplings become large. Thus, as functions of $`m_A`$, the trilinear couplings $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ are large for most of the parameter space. We also note that, for large values of $`\mathrm{tan}\beta `$, $`\lambda _{Hhh}`$ tends to be relatively small, whereas $`\lambda _{hhh}`$ becomes large, if also $`m_A`$ (or, equivalently, $`m_h`$) is large. ## 4 Production mechanisms The different mechanisms for the multiple production of the MSSM Higgs bosons in $`e^+e^{-}`$ collisions have been discussed by DHZ. The dominant mechanism for the production of multiple $`CP`$-even light Higgs bosons ($`h`$) is through the production of the heavy $`CP`$-even Higgs boson $`H`$, which then decays via $`H\to hh`$. The heavy Higgs boson $`H`$ can be produced by $`H`$-strahlung, in association with $`A`$, and by the resonant $`WW`$ fusion mechanism. These mechanisms for multiple production of $`h`$ $`\begin{array}{ccc}e^+e^{-}& \to & ZH,AH\\ e^+e^{-}& \to & \nu _e\overline{\nu }_eH\end{array}\},H\to hh,`$ (4.3) are shown in Fig. 2. All the diagrams of Fig. 2 involve the trilinear coupling $`\lambda _{Hhh}`$. A background to (4.3) comes from the production of the pseudoscalar $`A`$ in association with $`h`$ and its subsequent decay to $`hZ`$ $$e^+e^{-}\to hA,\qquad A\to hZ,$$ (4.4) leading to $`Zhh`$ final states. A second mechanism for $`hh`$ production is double Higgs-strahlung in the continuum with a $`Z`$ boson in the final state, $$e^+e^{-}\to Z^{*}\to Zhh.$$ (4.5) We note that the non-resonant analogue of the Feynman diagram of Fig. 2b involves, apart from the coupling $`\lambda _{Hhh}`$, the trilinear Higgs coupling $`\lambda _{hhh}`$ as well. Finally, there is a mechanism of multiple production of the lightest Higgs boson through non-resonant $`WW`$ fusion in the continuum (see Section 7): $$e^+e^{-}\to \overline{\nu }_e\nu _eW^{*}W^{*}\to \overline{\nu }_e\nu _ehh.$$ (4.6) It is important to note that all the diagrams of Fig. 2 involve the trilinear coupling $`\lambda _{Hhh}`$ only. On the other hand, the non-resonant analogue of Fig. 2b, and Fig. 3c, involve both the trilinear Higgs couplings $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$. ## 5 Higgs-strahlung and Associated Production of $`H`$ The dominant source for the production of multiple Higgs bosons ($`h`$) in $`e^+e^{-}`$ collisions is through the production of the heavier $`CP`$-even Higgs boson $`H`$, either via Higgs-strahlung or in association with $`A`$, followed, if kinematically allowed, by the cascade decay $`H\to hh`$. The cross sections for these processes can be found in . In Fig. 4 we plot the cross sections for the $`e^+e^{-}`$ centre-of-mass energy $`\sqrt{s}=500\mathrm{GeV}`$, as functions of the Higgs mass $`m_H`$ and for $`\mathrm{tan}\beta =2.0`$. For large values of the mass $`m_A`$ of the pseudoscalar Higgs boson, all the Higgs bosons, except the lightest one ($`h`$), become heavy and decouple from the rest of the spectrum. 
At values of $`\mathrm{tan}\beta `$ that are not too large, the trilinear $`Hhh`$ coupling $`\lambda _{Hhh}`$ can be measured by the decay process $`H\to hh`$, which has a width proportional to $`\lambda _{Hhh}^2`$. However, this is possible only if the decay is kinematically allowed, and the branching ratio is sizeable. In Fig. 5 we show the branching ratios (at $`\mathrm{tan}\beta =2`$) for the main decay modes of the heavy $`CP`$-even Higgs boson as a function of the $`H`$ mass. Apart from the $`hh`$ decay mode, the other important decay modes are $`H\to WW^{*}`$, $`ZZ^{*}`$. We note that the couplings of $`H`$ to gauge bosons can be measured through the production cross sections for $`e^+e^{-}\to \nu _e\overline{\nu }_eH`$; therefore the branching ratio $`BR(H\to hh)`$ can be used to measure the triple Higgs coupling $`\lambda _{Hhh}`$. For increasing values of $`\mathrm{tan}\beta `$, the $`Hhh`$ coupling gradually gets weaker (see Fig. 1), and hence the prospects for measuring $`\lambda _{Hhh}`$ diminish. This is also indicated in Fig. 5, where we show the $`H`$ branching ratios for $`\mathrm{tan}\beta =5`$. There is actually a sizeable region in the $`m_A`$–$`\mathrm{tan}\beta `$ plane where the decay $`H\to hh`$ is kinematically forbidden. This is shown in Fig. 6, where we also display the regions where the $`H\to hh`$ branching ratio is in the range 0.1–0.9. Clearly, in the forbidden region, $`\lambda _{Hhh}`$ cannot be determined from resonant production. ## 6 Double Higgs-strahlung and Triple $`h`$ Production For small and moderate values of $`\mathrm{tan}\beta `$, the study of decays of the heavy $`CP`$-even Higgs boson $`H`$ provides a means of determining the triple-Higgs coupling $`\lambda _{Hhh}`$. In order to extract the coupling $`\lambda _{hhh}`$, other processes involving two-Higgs ($`h`$) final states must be considered. The $`Zhh`$ final states, which can be produced in the non-resonant double Higgs-strahlung $`e^+e^{-}\to Zhh`$ , could provide one possible opportunity, since this process involves the coupling $`\lambda _{hhh}`$. These non-resonant processes have also been investigated . We show in Fig. 7 the $`Zhh`$ cross section, with $`\stackrel{~}{m}=1\mathrm{TeV}`$. The structure around $`m_h=70\mathrm{GeV}`$ (in the case of no mixing) is due to the vanishing and near-vanishing of the trilinear coupling. In the case of no mixing, there is a broad minimum from $`m_h\approx 78`$ to 90 GeV, followed by an enhancement around $`m_h\approx 90`$–100 GeV. This structure is due to the vanishing of the branching ratio for $`H\to hh`$, which is kinematically forbidden in the region $`m_h\approx 78`$–90 GeV, see Fig. 6 (this coincides with the opening up of the channel $`H\to WW`$), followed by an increase of the trilinear couplings. This particular structure depends considerably on the exact mass values $`m_H`$ and $`m_h`$. Thus, it depends on details of the radiative corrections and on the mixing parameters $`A`$ and $`\mu `$. ## 7 Fusion Mechanism for Multiple-$`h`$ Production A two-Higgs ($`hh`$) final state in $`e^+e^{-}`$ collisions can also result from the $`WW`$ fusion mechanism, which can either be a resonant process as in (4.3), or a non-resonant one like (4.6). Since the neutral-current couplings are smaller than the charged-current ones, the cross section for the $`ZZ`$ fusion mechanism in (4.3) and (4.6) is an order of magnitude smaller than the $`WW`$ fusion mechanism, and is here ignored. 
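Before turning to the fusion mechanism, here is a small sketch of the width behind $`BR(H\to hh)`$ discussed above. The prefactor convention $`\mathrm{\Gamma }=G_Fm_Z^4\lambda _{Hhh}^2\beta _h/(16\sqrt{2}\pi m_H)`$ is an assumption consistent with the coupling normalization of Section 3, not a formula quoted from the paper:

```python
import math

G_F, MZ = 1.16637e-5, 91.19      # GeV^-2, GeV

def gamma_H_hh(mH, mh, lam_Hhh):
    if 2.0*mh >= mH:
        return 0.0                                    # kinematically forbidden
    beta_h = math.sqrt(1.0 - 4.0*mh*mh/(mH*mH))       # two-body phase space
    # assumed convention: lambda normalized to (sqrt(2) G_F)^(1/2) m_Z^2
    return G_F*MZ**4*lam_Hhh**2*beta_h/(16.0*math.sqrt(2.0)*math.pi*mH)

print(gamma_H_hh(250.0, 90.0, 1.4))   # ~0.06 GeV; scales as lambda_Hhh^2
```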
The $`WW`$ fusion cross section for $`e^+e^{-}\to H\overline{\nu }_e\nu _e`$ can be written as (see also ) $$\sigma (e^+e^{-}\to H\overline{\nu }_e\nu _e)=\frac{G_F^3m_W^4}{64\sqrt{2}\pi ^3}\left[\int_{\mu _H}^1dx\int_x^1\frac{dy}{\left[1+(y-x)/\mu _W\right]^2}\mathcal{F}(x,y)\right]\mathrm{cos}^2(\beta -\alpha ).$$ (7.1) This cross section is plotted in Fig. 4 for the centre-of-mass energy $`\sqrt{s}=500`$ GeV, and for $`\mathrm{tan}\beta =2.0`$, as a function of $`m_H`$. The resonant fusion mechanism, which leads to $`[hh]`$ + [missing energy] final states, is competitive with the process $`e^+e^{-}\to HZ\to [hh]`$ + [missing energy], particularly at high energies. Since the dominant decay of $`h`$ will be into $`b\overline{b}`$ pairs, the $`H`$-strahlung and the fusion mechanism will give rise to final states that will predominantly include four $`b`$-quarks. On the other hand, the process $`e^+e^{-}\to AH`$ will give rise to six $`b`$-quarks in the final state, since the $`AH`$ final state typically yields three-Higgs $`h[hh]`$ final states. Besides the resonant $`WW`$ fusion mechanism for the multiple production of $`h`$ bosons, there is also a non-resonant $`WW`$ fusion mechanism: $$e^+e^{-}\to \nu _e\overline{\nu }_ehh,$$ (7.2) through which the same final state of two $`h`$ bosons can be produced. The cross section for this process, which arises through $`WW`$ exchange as indicated in Fig. 3, can be written in the “effective $`WW`$ approximation” as $$\sigma (e^+e^{-}\to \nu _e\overline{\nu }_ehh)=\int_\tau ^1dx\frac{\mathrm{d}L}{\mathrm{d}x}\widehat{\sigma }_{WW}(x),$$ (7.3) where $`\tau =4m_h^2/s`$. Here, the cross section is written as a $`WW`$ cross section, at invariant energy squared $`\widehat{s}=xs`$, folded with the $`WW`$ “luminosity” : $$\frac{\mathrm{d}L(x)}{\mathrm{d}x}=\frac{G_\mathrm{F}^2m_W^4}{2}\left(\frac{v^2+a^2}{4\pi ^2}\right)^2\frac{1}{x}\left\{(1+x)\mathrm{log}\frac{1}{x}-2(1-x)\right\},$$ (7.4) where $`v^2+a^2=2`$. The $`WW`$ cross section receives contributions from several amplitudes, according to the diagrams (a)–(d) in Fig. 3. We have evaluated these contributions . Our approach differs from that of DHZ in that we do not project out the longitudinal degrees of freedom of the intermediate $`W`$ bosons. Instead, we follow the approach of Ref. , where transverse momenta are ignored everywhere except in the $`W`$ propagators. We show in Fig. 8 the $`WW`$ fusion cross section, at $`\sqrt{s}=1.5\mathrm{TeV}`$, as given by Eqs. (7.1) and (7.3), with $`\stackrel{~}{m}=1\mathrm{TeV}`$. The structure is reminiscent of Fig. 7, and the reasons for this are the same. Notice, however, that the scale is different. ## 8 Sensitivity to $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ In Fig. 9 we have indicated in the $`m_A`$–$`\mathrm{tan}\beta `$ plane the regions where $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ might be measurable for $`\sqrt{s}=500\mathrm{GeV}`$. We identify regions according to the following criteria : * Regions where $`\lambda _{Hhh}`$ might become measurable are identified as those where $`\sigma (H)\times \text{BR}(H\to hh)>0.1\text{ fb}`$ (solid), with the simultaneous requirement $`0.1<\text{BR}(H\to hh)<0.9`$ [see Figs. 5–6]. In view of the recent, more optimistic, view on the luminosity that might become available, we also give the corresponding contours for 0.05 fb (dashed) and 0.01 fb (dotted). * Regions where $`\lambda _{hhh}`$ might become measurable are those where the continuum $`WW\to hh`$ cross section [Eq. (7.3)] is larger than 0.1 fb (solid). 
Also included are contours at 0.05 (dashed) and 0.01 fb (dotted). Such regions are given for two cases of the mixing parameters $`A`$ and $`\mu `$, as indicated. We have excluded from the plots the region where $`m_h<72.2\mathrm{GeV}`$, according to the LEP lower bound . This corresponds to low values of $`m_A`$. With an integrated luminosity of 500 fb⁻¹, the contours at 0.1 fb correspond to 50 events per year. This will of course be reduced by efficiencies, but should indicate the order of magnitude that can be reached. At $`\sqrt{s}=500\mathrm{GeV}`$, with a luminosity of 500 fb⁻¹ per year, the trilinear coupling $`\lambda _{Hhh}`$ is accessible in a considerable part of the $`m_A`$–$`\mathrm{tan}\beta `$ parameter space: at $`m_A`$ of the order of 200–300 GeV and $`\mathrm{tan}\beta `$ up to the order of 5. With increasing luminosity, the region extends somewhat to higher values of $`m_A`$. The “steep” edge around $`m_A\approx 200\mathrm{GeV}`$ (where increased luminosity does not help) is determined by the vanishing of $`\text{BR}(H\to hh)`$, see Fig. 6. The coupling $`\lambda _{hhh}`$ is accessible in a much larger part of this parameter space, but with a moderate luminosity, “large” values of $`\mathrm{tan}\beta `$ are accessible only if $`A`$ is small. It should be stressed that the requirements discussed here are necessary, but not sufficient, conditions for the trilinear couplings to be measurable. We also note that there might be sizeable corrections to the $`WW`$ approximation, and that it would be desirable to incorporate the dominant two-loop corrections to the trilinear couplings. ## 9 Conclusions We have presented the results of a detailed investigation of the possibility of measuring the MSSM trilinear couplings $`\lambda _{Hhh}`$ and $`\lambda _{hhh}`$ at an $`e^+e^{-}`$ collider. Where there is an overlap, we have confirmed the results of Ref. . Our emphasis has been on taking into account all the parameters of the MSSM Higgs sector. We have studied the importance of mixing in the squark sector, as induced by the trilinear coupling $`A`$ and the bilinear coupling $`\mu `$. At moderate energies ($`\sqrt{s}=500\mathrm{GeV}`$) the range in the $`m_A`$–$`\mathrm{tan}\beta `$ plane that is accessible for studying $`\lambda _{Hhh}`$ changes quantitatively for non-zero values of the parameters $`A`$ and $`\mu `$. As far as the coupling $`\lambda _{hhh}`$ is concerned, however, there is a qualitative change from the case of no mixing in the squark sector. If $`A`$ is large, then high luminosity is required to reach “high” values of $`\mathrm{tan}\beta `$. At higher energies ($`\sqrt{s}=1.5\mathrm{TeV}`$), the mixing parameters $`A`$ and $`\mu `$ change the accessible region of the parameter space only in a quantitative manner. This research was supported by the Research Council of Norway, and (PNP) by the University Grants Commission, India, under project number 10-26/98(SR-I).
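As a numerical companion to the effective-$`WW`$ approximation of Section 7, the sketch below implements the luminosity (7.4) and the fold (7.3); the flat subprocess cross section is a placeholder assumption standing in for the actual $`WW\to hh`$ result, which is not reproduced here:

```python
import math
from scipy.integrate import quad

G_F, MW = 1.16637e-5, 80.4            # GeV^-2, GeV

def dL_dx(x, v2_plus_a2=2.0):         # eq. (7.4)
    pref = 0.5*G_F**2*MW**4*(v2_plus_a2/(4.0*math.pi**2))**2
    return pref*((1.0 + x)*math.log(1.0/x) - 2.0*(1.0 - x))/x

def sigma_fold(s, mh, sigma_hat_ww):  # eq. (7.3), with tau = 4 m_h^2/s
    tau = 4.0*mh*mh/s
    val, _ = quad(lambda x: dL_dx(x)*sigma_hat_ww(x*s), tau, 1.0)
    return val

# toy: unit subprocess cross section, just to exercise the fold at sqrt(s) = 1.5 TeV
print(sigma_fold(1500.0**2, 100.0, lambda shat: 1.0))
```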
# Structure formation with strings plus inflation: a new paradigm ## I Introduction Structure formation theories fall broadly within two classes: inflation scdm and defects csreviews . In inflationary scenarios the structure of the Universe originates from microphysical quantum fluctuations, which get stretched to cosmological scales by inflationary expansion. In topological defect scenarios, as the Universe cools down, high temperature symmetries are spontaneously broken. Remnants of the unbroken phase, called topological defects, may survive the transition, and later seed fluctuations in the CMB and CDM. A major drawback of inflationary theories is that they are far removed from particle physics models. Attempts to improve on this state of affairs have been made recently, resorting to supersymmetry Cas+89 ; DTermInfl ; rachel ; LytRio98 ; Cop+94 ; Tka+98 ; sug . In these models one identifies flat directions in the potentials, which are enforced by a (super)symmetry. Such flat directions produce “slow-roll inflation”. In order to stop inflation one must tilt the potential, allowing the fields to roll down. In so-called D-term supersymmetric inflationary scenarios, inflation stops with a symmetry-breaking phase transition, at which a U(1) symmetry is spontaneously broken, leading to the formation of cosmic strings. This is only the most natural of a whole class of models of so-called hybrid inflation. Hence a network of cosmic strings is formed at the end of inflation. If one takes the standard cosmology (sCDM), of $`\mathrm{\Omega }=1`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_b=0.05`$ and $`H_0=50`$ km s⁻¹ Mpc⁻¹, one finds that neither strings nor inflation fit the COBE normalized large scale structure power spectrum. However, the failings of the inflationary and defect sCDM models are to a certain extent complementary, and an obvious question is whether they can help each other to improve the fit to the data. Using our recent calculations for local cosmic strings chm1 and the by now familiar inflationary calculations cmbfast , we are able to demonstrate that the answer is yes chm2 . Even with Harrison-Zeldovich initial conditions and no inflation-produced gravitational waves, the large-angle CMB spectrum is mildly tilted, as preferred by COBE data kris . The CMB spectrum then rises into a thick Doppler bump, covering the region $`\ell =200`$–600, modulated by soft secondary undulations. More importantly, the standard CDM anti-biasing problem is cured, giving way to a slightly biased scenario of galaxy formation. The cosmic string biasing problem is also cured. Similar results have been reported by two other groups stinf1 ; stinf2 . ## II Model building The general features of structure formation with strings plus inflation do not depend on the concrete underlying inflationary model. We illustrate these models by considering the $`D`$-term inflation model, in which the strings plus inflation scenario finds an attractive expression. To begin with, we define the reduced Planck mass $`M=1/\sqrt{8\pi G}`$. We recall that a supergravity theory is defined by two functions of the chiral superfields $`\mathrm{\Phi }_i`$: the function $`G(\overline{\mathrm{\Phi }},\mathrm{\Phi })`$, which is related to the Kähler potential $`K(\overline{\mathrm{\Phi }},\mathrm{\Phi })`$ and the superpotential $`W(\mathrm{\Phi })`$ by $`G=K+M^2\mathrm{ln}(|W|^2/M^6)`$, and the gauge kinetic function $`f_{AB}(\overline{\mathrm{\Phi }},\mathrm{\Phi })`$. 
The scalar potential $`V`$ is composed of two terms, the $`F`$-term $$V_F=M^2e^{G/M^2}\left(G_i(G^{-1})_j^iG^j-3M^2\right)$$ (1) and the $`D`$-term $$V_D=\frac{1}{2}g^2\mathrm{Re}f_{AB}^{-1}D^AD^B$$ (2) where $`g`$ is the U(1) gauge coupling, $`G^i=\partial G/\partial \mathrm{\Phi }_i`$, and $`G_i=\partial G/\partial \overline{\mathrm{\Phi }}^i`$. The function $`D^A`$ is given by $`D^A`$ $`=`$ $`G^i(T^A)_i^j\varphi _j+\xi ^A,`$ (3) where the Fayet-Iliopoulos terms $`\xi ^A`$, which we take to be positive, can be non-zero only for those $`(T^A)_i^j`$ which are U(1) generators. We see that in order to have a positive potential energy density, either the $`F`$-term or the $`D`$-term must be non-zero. In order to have inflation, there must be a region in field space where the slow-roll conditions $`\epsilon \equiv \frac{1}{2}M^2|V^i/V|^2\ll 1`$ and $`|\eta |\equiv |\mathrm{min}\,\mathrm{eig}\,M^2V_j^i/V|\ll 1`$ are satisfied, where by the notation in the second condition we mean that the smallest eigenvalue of the matrix is much less than unity. In $`D`$-term inflation, the conditions are satisfied because the fields move along a trajectory for which $`\mathrm{exp}(G/M^2)`$, $`G^i`$ and $`G^i(T^A)_i^j\varphi _j`$ all vanish, leaving a tree-level potential energy density of $`g^2\xi ^A\xi ^A/2`$. Thus the potential is completely flat before radiative corrections are taken into account. At the end of inflation, if the fields are to relax to the supersymmetric minimum with $`D^A+\xi ^A=0`$, the U(1) gauge symmetries are necessarily broken, assuming their corresponding Fayet-Iliopoulos terms are non-zero. Thus strings are inevitable: the only question is how much inflation there is before the fields attain the minimum. ## III Calculations The spectrum of the perturbations from $`D`$-term inflation is calculable rachel , and can be expressed in terms of $`N`$, the number of $`e`$-foldings between the horizon exit of cosmological scales today and the end of inflation, which occurs at $`|\eta |=1`$. One finds $$\frac{\ell (\ell +1)C_{\ell }^\mathrm{I}}{2\pi T_{\mathrm{CMB}}^2}\approx \frac{1}{4}|\delta _H(k)|^2\approx \frac{(2N+1)}{75}\left(\frac{\xi ^2}{M^4}\right),$$ (4) where $`T_{\mathrm{CMB}}=2.728`$ K is the temperature of the microwave background, and $`\delta _H(k)`$ is the matter perturbation amplitude at horizon crossing. The corrections to this formula, which is zeroth order in slow-roll parameters, are not more than a few per cent. The inflationary fluctuations in this model are almost scale-invariant (Harrison-Zeldovich) and have a negligible tensor component DTermInfl . The string contribution is uncorrelated with the inflationary one, and is proportional to $`(G\mu )^2`$, where $`\mu `$ is the string mass per unit length, given by $`\mu =2\pi \xi `$. We can write it as $$\frac{\ell (\ell +1)C_{\ell }^\mathrm{S}}{2\pi T_{\mathrm{CMB}}^2}=\frac{𝒜^\mathrm{S}(\ell )}{16}\left(\frac{\xi ^2}{M^4}\right),$$ (5) where the function $`𝒜^\mathrm{S}(\ell )`$ gives the amplitude of the fractional temperature fluctuations in units of $`(G\mu )^2`$. Allen et al. all+ report $`𝒜^\mathrm{S}(\ell )\approx 60`$ on large angular scales, with little dependence on $`\ell `$. Our simulations give $`𝒜^\mathrm{S}(\ell )\approx 120`$, with a fairly strong tilt. The source of the difference is not altogether clear: our simulations are based on a flat-space code which neglects the energy losses of the strings through Hubble damping. The simulations of Allen et al. do include Hubble damping, which would tend to reduce the string density and hence the normalisation. 
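Since eqs. (4) and (5) share the common factor $`\xi ^2/M^4`$, the implied strings-to-inflation ratio follows from a one-line division (an illustrative sketch; $`N=60`$ and the two quoted normalisations are the inputs discussed in the text):

```python
def R_SI(A_S, N=60):                      # C_l^S / C_l^I from eqs. (4)-(5)
    return (A_S/16.0)/((2.0*N + 1.0)/75.0)

print(round(R_SI(60), 2), round(R_SI(120), 2))   # ~2.3 and ~4.6, i.e. roughly the
                                                 # 3:1 and 4:1 proportions quoted below
```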
However, they have a problem of lack of dynamic range, and may therefore be missing some power from strings at early times, and therefore at higher $`\ell `$. Jeannerot rachel took the Allen–Shellard normalisation and $`N\approx 60`$, and found that the proportion of strings to inflation is roughly $`3:1`$. With our normalisation, the approximate ratio is $`4:1`$. In any case this ratio is far from a robust prediction in strings plus inflation models, as it depends on the number of $`e`$-foldings and on the string normalisation, both of which are uncertain. We will therefore leave it as a free parameter. For definiteness we shall parametrize the contributions due to strings and inflation by the strings-to-inflation ratio $`R_{\mathrm{SI}}`$, defined as the ratio in $`C_{\ell }`$ at $`\ell =5`$, that is $`R_{\mathrm{SI}}=C_5^S/C_5^I`$. It is curious to note that the number of e-foldings required for solving the flatness problem still leaves room for tuning $`R_{\mathrm{SI}}`$ between nearly 0 and 1. ## IV Results In Figs. 1 and 2 we present power spectra in the CMB and CDM produced by a sCDM scenario, by cosmic strings, and by strings plus inflation. We have assumed the traditional choice of parameters, setting the Hubble parameter $`H_0=50`$ km sec⁻¹ Mpc⁻¹, the baryon fraction to $`\mathrm{\Omega }_b=0.05`$, and assumed a flat geometry, no cosmological constant, 3 massless neutrinos, standard recombination, and cold dark matter. The inflationary perturbations have a Harrison-Zeldovich or scale invariant spectrum, and the amount of gravitational radiation (tensor modes) produced during inflation is assumed to be negligible. We now summarise the results. * The CMB power spectrum shape in these models is highly exotic. The inflationary contribution is close to being Harrison-Zeldovich. Hence it produces a flat small-$`\ell `$ CMB spectrum. The admixture of strings, however, imparts a tilt. Depending on $`R_{\mathrm{SI}}`$ one may tune the CMB plateau tilt between 1 and about 1.4, without invoking primordial tilt or inflation-produced gravity waves. * The proverbial inflationary Doppler peaks are transfigured in these scenarios into a thick Doppler bump, covering the region $`\ell =200`$–600. The height of the peak is similar for sCDM and strings, with standard cosmological parameters. The Doppler bump is modulated by small undulations, which cannot truly be called secondary peaks. By tuning $`R_{\mathrm{SI}}`$ one may achieve any degree of secondary-oscillation softening. This provides a major loophole in the argument linking inflation with secondary oscillations in the CMB power spectrum andbar ; inc . If these oscillations were not observed, inflation could still survive, in the form of the models discussed in this Letter. * In these scenarios the LSS of the Universe is almost all produced by inflationary fluctuations. However COBE-scale CMB anisotropies are due to both strings and inflation. Therefore COBE-normalized CDM fluctuations are reduced by a factor $`(1+R_{\mathrm{SI}})`$ in strings plus inflation scenarios. This is equivalent to multiplying the sCDM bias by $`\sqrt{1+R_{\mathrm{SI}}}`$ on all scales, except the smallest, where the string contribution may be non-negligible. Given that sCDM scenarios produce too much structure on small scales (too many clusters) this is a desirable feature. ## V Praise for the model “Strings plus inflation” is interesting first of all as an inflationary model. 
Its “flat potential” is not the result of a finely tuned coupling constant, but the result of a symmetry. Hence in some sense these models achieve inflation without fine tuning. The only free parameters are the number of inflationary e-foldings and the scale of symmetry breaking. These parameters also fix the absolute (and therefore relative) normalizations of string and inflationary fluctuations. “Strings plus inflation” models are also pervaded by a higher component of particle physics, when compared with other inflationary models. The structure formation paradigm resulting from this scenario is highly exotic and worth considering just by itself. Regarded in the abstract, structure formation may be due to two types of mechanism: active and passive perturbations. Passive fluctuations are due to an apparently acausal imprint in the initial conditions of the standard cosmic ingredients, which are then left to evolve by themselves. Active perturbations are due to an extra cosmic component, which evolves causally (and often non-linearly), and drives perturbations in the standard cosmic ingredients at all times. Inflationary fluctuations are passive. Defects are the quintessential active fluctuation. A scenario combining active and passive perturbations would bypass most of the current wisdom on what to expect in either scenario. It is believed that the presence or absence of secondary Doppler peaks in the CMB power spectrum tests the very fundamental nature of inflation, whatever its guise andbar . In the mixed scenarios we shall consider, inflationary scenarios could produce spectra with any degree of secondary-oscillation softening. The combination of these two scenarios smooths the hard edges of either separate component, leaving a much better fit to LSS and CMB power spectra. We illustrated this point in this review, but left out a couple of issues currently under investigation which we now summarise. The CDM power spectrum in these scenarios has a break at very small scales, where string-produced CDM fluctuations become dominant over inflationary ones. This aspect was particularly emphasized in stinf2 , and there is some observational evidence in favour of such a break. An immediate implication of this result is that it is easier to form structure at high redshifts steidel ; lalfa . In bmw it is shown that even with Hot Dark Matter, these scenarios produce enough damped Lyman-$`\alpha `$ systems to account for the recent high-redshift observations. Another issue currently under investigation is the timing of structure formation steidel . Active models drive fluctuations at all times, and therefore produce a time-dependence in $`P(k)`$ different from that of passive models. The effect is subtle, but works so as to slow down structure formation. Hence for the same normalization nowadays there is more structure at high redshifts in string scenarios. Overall we end up with a picture in which the CMB is produced by both strings and inflation, the current large scale structure of the Universe is produced by inflation except on the very small scales, but most of the structure at high redshift is produced by strings. In such models there would also be intrinsic non-Gaussianity at the scale of clusters, with interesting connections with the work of james . ## Acknowledgments JM and CC thank the organizers for an excellent meeting. We acknowledge financial support from the Beit Foundation (CC), PPARC (MH), and the Royal Society (JM), and also from the EC (JM and CC).
# Comment on “Singularities in axially symmetric solutions of Einstein-Yang Mills and related theories, by Ludger Hannibal, [hep-th/9903063]” ## Abstract We point out that the statements in [hep-th/9903063] concerning the regularity of static axially symmetric solutions in Yang-Mills-dilaton (YMD) and Einstein-Yang-Mills(-dilaton) (EYMD) theory are incorrect, and that the non-singular local gauge potential of the YMD solutions is twice differentiable. We have constructed numerically static axially symmetric solutions in Yang-Mills-dilaton (YMD) and Einstein-Yang-Mills-dilaton (EYMD) theory , employing a singular form of the gauge potential. For the solutions of YMD theory we have recently demonstrated explicitly that the singular gauge potential can be locally gauge transformed into a well-defined gauge potential . Following that, Hannibal has claimed in his paper : “We show that the solutions of $`SU(2)`$ Yang-Mills-dilaton and Einstein-Yang-Mills-dilaton theories described in a sequence of papers by Kleihaus and Kunz are not regular in the gauge field part” (*). Here we comment on this paper only as far as our work is concerned. In , static axially symmetric Ansätze for the $`SU(2)`$ gauge potential have been considered and regularity conditions for the gauge field functions parameterizing the Ansatz have been derived . Comparing the properties of the gauge field functions employed in with these regularity conditions, it was then concluded in that “the solutions constructed by Kleihaus and Kunz do not have the regular form”. While the solutions constructed numerically in are presented within a singular form of the gauge potential, in the sense that the gauge potential is not well defined on the $`z`$-axis and at the origin , this only means that regularity of the solutions is not guaranteed a priori. It does not mean that the solutions are not regular. Since the gauge potential transforms under gauge transformations, a regular gauge potential $`\widehat{A}_\mu `$ can be gauge transformed into a singular gauge potential $`A_\mu `$ by a singular gauge transformation. Both gauge potentials would describe the same physical solution. From the observation that the gauge potential $`A_\mu `$ is in a singular form, one could neither conclude that there is no regular gauge potential $`\widehat{A}_\mu `$ nor, in particular, that the physical solution is not regular. Therefore, the claim (*) in the abstract of Hannibal's paper is incorrect and misleading. Hannibal then considered the gauge transformation , which transforms locally the singular gauge potential of the YMD solutions into a non-singular gauge potential. He recognizes that the transformed gauge potential is continuous at $`\theta =0`$ (i.e. at $`x=y=0`$), but he claims that “the potentials are still possibly not differentiable at $`\theta =0`$”. However, it is a trivial task to check that the transformed gauge potential is differentiable. 
For instance, for winding number $`n=2`$ and $`z>0`$, the non-singular gauge potential is given explicitly in , in Cartesian coordinates, $`\widehat{A}_x`$ $`=`$ $`\frac{x}{12r^4}\left[r\partial _r\stackrel{~}{H}_{12}-3\stackrel{~}{H}_{12}\right]\rho ^2\tau _\phi ^2-\frac{y}{6r^4}\stackrel{~}{H}_{43}\rho ^2\tau _\rho ^2+\frac{y}{2r^2}\left[f^2+2g-1\right]\tau _3,`$ (1) $`\widehat{A}_y`$ $`=`$ $`\frac{y}{12r^4}\left[r\partial _r\stackrel{~}{H}_{12}-3\stackrel{~}{H}_{12}\right]\rho ^2\tau _\phi ^2+\frac{x}{6r^4}\stackrel{~}{H}_{43}\rho ^2\tau _\rho ^2-\frac{x}{2r^2}\left[f^2+2g-1\right]\tau _3,`$ (2) $`\widehat{A}_z`$ $`=`$ $`\frac{1}{4r^3}\stackrel{~}{H}_{12}\rho ^2\tau _\phi ^2,`$ (3) expanded to lowest order in the variable $`\rho =\sqrt{x^2+y^2}`$. Using $`\rho ^2\tau _\phi ^2=-2xy\tau _1+(x^2-y^2)\tau _2`$ and $`\rho ^2\tau _\rho ^2=(x^2-y^2)\tau _1+2xy\tau _2`$, one sees immediately that the gauge potential (3) is differentiable at $`x=y=0`$. (Note that the functions $`\stackrel{~}{H}_{12},\stackrel{~}{H}_{43},f,g`$ and their derivatives are bounded functions of $`r=\sqrt{x^2+y^2+z^2}`$. The function $`f`$ used here and in should not be confused with the function $`f`$ in .) In addition it is evident that the gauge potential is twice differentiable at $`x=y=0`$, provided one can show that no functions like $`\rho x`$, $`\rho y`$ arise from the next-to-leading order terms. But this is straightforward, since it concerns only the function $`\widehat{F}_4`$, multiplying the matrix $`\tau _3`$, see . Observing from that the functions $`H_3`$ and $`\mathrm{\Gamma }_{(z)}^{(n)}`$ are odd in $`\theta `$ up to order $`\theta ^3`$ (inclusively), while the function $`H_4`$ is even in $`\theta `$ up to order $`\theta ^2`$, it is easy to see that the function $`\widehat{F}_4`$ is odd in $`\theta `$ up to order $`\theta ^3`$. Thus the next-to-leading order term vanishes, indeed. At the origin a similar consideration shows that the next-to-leading order terms contribute only functions which are twice differentiable at the origin. Consequently, the gauge transformed gauge potential is twice differentiable, even in the lowest order of the expansion . Nevertheless, we have carried out the expansion near the positive $`z`$-axis to the next order and have found that the next-to-leading order terms vanish for all gauge field functions of the YMD solutions with winding number $`n=2,3`$ . To conclude, it has not been shown in that the static axially symmetric solutions constructed numerically in are not regular. Furthermore, the non-singular gauge potentials obtained to lowest order in are continuous, differentiable and twice differentiable at $`x=y=0`$ and at the origin. Consequently, the singular gauge potential of the solutions obtained in can locally be gauge transformed into regular form.
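The key algebraic step, that the angular factors are polynomials in $`(x,y)`$, can be verified mechanically (a minimal sympy sketch, using the $`n=2`$ identities in the sign convention reconstructed above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
rho2 = x**2 + y**2
c, s = x/sp.sqrt(rho2), y/sp.sqrt(rho2)           # cos(phi), sin(phi)
sin2phi, cos2phi = 2*s*c, c**2 - s**2             # n = 2 angular functions
print(sp.simplify(rho2*(-sin2phi) + 2*x*y))       # 0: rho^2 tau_phi^2 -> -2xy, x^2-y^2
print(sp.simplify(rho2*cos2phi - (x**2 - y**2)))  # 0
# hence terms like x*(x^2 - y^2)/(12 r^4) in eqs. (1)-(3) are polynomials in (x, y)
# times bounded radial functions, manifestly differentiable at x = y = 0
```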
# A FEW SIMPLE OBSERVATIONS ON PION-CONDENSATION IN NUCLEI ## Abstract We present a few simple observations on the occurrence of $`\pi `$-condensation in Nuclei, aimed at clarifying the nature of the $`\pi `$-condensation implied by the coherent nuclear $`\pi `$-N-$`\mathrm{\Delta }`$ interaction, proposed in 1990 to explain the puzzling emergence of the Shell-Model. We show that such condensation is totally unrelated to the one proposed by A. B. Migdal at the beginning of the ’70s, which can easily be shown not to occur at the normal nucleon density $`\rho _N\approx 0.17`$ fm⁻³. preprint: MITH 99/1 In 1990 a new approach to the dynamics of the Nucleus and of Nuclear matter was proposed, based on an analogy between the coherent $`\pi `$-Nucleon QCD interaction in nuclear matter and the coherent QED interaction in ordinary condensed matter . The rapidly growing research program aimed at elucidating the rôle of the coherent electrodynamical interactions among the constituents (atoms and molecules) of ordinary condensed matter lent itself in a surprisingly natural way to a far-reaching generalization to nuclear matter, which could finally clarify several points of Nuclear Physics that had remained mysterious, at least to the natural philosopher, if not to the expert of the field. The mysteries we are referring to can all be essentially encapsulated in the following question: why is the Shell-Model (SM) such a good (approximate) description of the structure of the Nucleus? A question that has puzzled the more thoughtful students of Nuclear Physics since its proposal by Mayer and Jensen almost 50 years ago . Let's analyze the origin and motivations of the puzzle which the SM poses to our physical intuition. Since the seminal ideas of Yukawa nobody has ever put in doubt the notion that the nucleons of the Nucleus are held together by a “nucleostatic force”, the Yukawa interaction, arising from the virtual exchange of $`\pi `$-mesons between pairs of nucleons. The finiteness of the $`\pi `$-mass, as is well known, implies the exponential decay of such a force as $`e^{-m_\pi r}`$, giving it the rather short range $`R_{1\pi }=\frac{1}{m_\pi }\approx 1.4`$ fm. It is also well known that the basic $`\pi `$-exchange interaction has an important spin-isospin structure, which leads to repulsion instead of attraction in well defined spin-isospin channels. The same can be said of all other one-boson exchanges, which have, however, much smaller ranges . Thus the only universal dynamical mechanism of attraction between nucleons, irrespective of their spin and isospin, responsible for the existence of highly complex Nuclei, has been identified in the $`2\pi `$-exchange, involving the virtual transition to the $`\mathrm{\Delta }`$(1232) as well. The range of this kind of nuclear Van der Waals force is thus $`R_{2\pi }=\frac{1}{2m_\pi }\approx 0.7`$ fm, a remarkably small distance, approximately equal to the radius of the nucleons. 
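In natural units these ranges follow immediately (a trivial check, using ħc ≈ 197.3 MeV fm):

```python
hbarc, m_pi = 197.327, 139.6           # MeV fm, MeV
print(round(hbarc/m_pi, 2))            # R_1pi ~ 1.41 fm
print(round(hbarc/(2.0*m_pi), 2))      # R_2pi ~ 0.71 fm
```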
Let us now take an assembly of nucleons and squeeze them into a volume so small that their average mutual distance is comparable with $`R_{2\pi }`$. (As a matter of fact, for the actual average nucleon density $`\rho _N\approx 0.17`$ fm⁻³ the intranucleon average distance $`a_N\approx \rho _N^{-1/3}\approx 1.81`$ fm turns out to be remarkably large, a fact that can be appreciated by computing the average of $`\mathrm{exp}\left(-\frac{|\stackrel{}{x}_1-\stackrel{}{x}_2|}{R_{2\pi }}\right)`$ for two nucleons with a gaussian density distribution of radius $`R_N`$, which yields the small value 0.05 for $`R_N\approx 0.7`$ fm, the nucleon's radius.) Then what kind of equilibrium configuration can we expect for such a system? A dense plasma with a stable neutralizing background bears a close resemblance to our nucleon system: in fact the short range pionic interaction is a rather accurate mock-up of the neutralizing background's interaction with the plasma, which is Debye-screened at a distance comparable with the distances between charges. And the physics of such a dense plasma is well known, resembling a kind of jelly where the charges, the seeds, oscillate around their equilibrium position with the “plasma frequency”. (In order to have some idea about the structure of such a jelly, it is amusing to pursue in a crude way the analogy with a dense plasma. The plasma frequency is then $$\omega _p\approx \frac{g}{m_N^{1/2}}\left(\frac{N}{V}\right)^{1/2}=\frac{g}{m_N^{1/2}}\left(\frac{1}{a_N}\right)^{3/2}$$ (1) where $`g\approx 1`$ yields at $`r=R_{2\pi }`$ the reasonable potential $`V_{2\pi }\approx 100`$ MeV and $`\omega _p\approx 40`$ MeV. The typical oscillation amplitude is then $`\delta =\frac{1}{(m_N\omega _p)^{1/2}}\approx 10^{-13}`$ cm, a rather reasonable value, too.) Wouldn't it, then, be reasonable to expect such a “jellium” structure to accurately represent the dynamics of the Nucleus as well? To such a question the answer of the SM is a surprising, incontrovertible no. The nucleons of the Nucleus revolve around it in global orbits, much in the same way as the electrons whirl around the Nucleus in the Atom. Very strange, isn't it? Indeed, something that would have advised the nuclear physicists to look somewhere else in search of a physically realistic basis for the remarkable phenomenological success of the SM. From a historical point of view it is interesting to contemplate the initial skepticism and the later wonder of the leading nuclear physicists when confronted with the simplicity and the effectiveness of the SM, skepticism and wonder that through habit the successive generations came to completely forget, deeply involved on one hand in the complicated calculations of nuclear structure, and on the other in checking the “self-consistency” of the SM: a task that completely overlooked the fundamental question: why does a dense plasma, governed in QED by a similar set of interactions, have a dynamical behaviour which is completely different from that of the Nucleus? The 1990 paper, referred to above, finally succeeded in identifying a completely new interaction mechanism, whose basic structure was just what is needed to make sense, in a realistic way, of the SM and of other suitable aspects of nuclear dynamics. (In and in Chapter 11 of the book one can find a number of applications of this novel approach.) In a nutshell the fundamental idea is that the Nucleon is just one level of the s-wave three non-strange quark system, whose excited state is $`\mathrm{\Delta }`$(1232), lying some 300 MeV above it. 
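The footnoted estimates are easy to reproduce numerically (an illustrative sketch in natural units; identifying the gaussian width with $`R_N`$ is an assumption, so the Monte Carlo value should be read as an order of magnitude):

```python
import numpy as np

hbarc, m_N, rho, g = 197.327, 939.0, 0.17, 1.0   # MeV fm, MeV, fm^-3, coupling

a_N     = rho**(-1.0/3.0)                    # ~1.81 fm, intranucleon distance
omega_p = g*np.sqrt(rho*hbarc**3/m_N)        # ~37 MeV, plasma frequency, eq. (1)
delta   = hbarc/np.sqrt(m_N*omega_p)         # ~1.05 fm ~ 10^-13 cm, amplitude
print(round(a_N, 2), round(omega_p, 1), round(delta, 2))

# Monte Carlo for <exp(-|x1-x2|/R_2pi)>: gaussians of width R_N, centres a_N apart
rng = np.random.default_rng(0)
R_2pi, R_N, n = 0.7, 0.7, 200_000            # fm
x1 = rng.normal(0.0, R_N/np.sqrt(3.0), size=(n, 3))
x2 = rng.normal([a_N, 0.0, 0.0], R_N/np.sqrt(3.0), size=(n, 3))
r12 = np.linalg.norm(x1 - x2, axis=1)
print(round(np.exp(-r12/R_2pi).mean(), 3))   # ~0.05-0.1: the 2-pi tail barely overlaps
```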
These two levels are strongly coupled to the $`\pi `$-field (itself a quark-antiquark system in s-wave), which induces the transitions $`\pi +N\to \mathrm{\Delta }(1232)\to \pi +N`$ etc. The similarity of this physical system with the familiar Laser should not escape the attention of anybody. However, in the generally accepted view a system of this kind will “lase” if and only if it is “inverted”, i.e. if through some suitable device - the pump - one brings a large number of atoms to the excited level. Furthermore it is important to place the system in a well-tuned optical cavity in order to prevent photons from leaking out and being lost for the coherent laser evolution. If this were always true (it is certainly true in the operational conditions of Lasers) the mechanism we are envisaging would be totally irrelevant, but it turns out that, contrary to what is generally believed, this is not always true. As demonstrated in 1973 by K. Hepp and E. Lieb , a system governed by the Dicke Hamiltonian (such as the laser) above a certain density and below a certain temperature spontaneously undergoes a Superradiant Phase Transition (SPT) to a Laser-like state, where matter and a number of resonant modes of the e.m. field interact coherently, oscillating in phase. And this without any need for either pumps or cavities. This crucial and revolutionary result, which for mysterious reasons has had no impact on the Physics community, was rediscovered and generalized by one of us (G.P.) in 1987, and is the focal point of the book in Ref. . Based upon it the Nucleus becomes a “bona-fide” Pionic Laser, whose two levels are just the Nucleon and the $`\mathrm{\Delta }`$(1232), and, as shown in Ref. , the couplings of both N(940) and $`\mathrm{\Delta }`$(1232) to the $`\pi `$-field are strong enough to meet the conditions for a SPT. In this way, through the coherent interaction with the $`\pi `$-field, which gets trapped in the region where the collective “N-$`\mathrm{\Delta }`$ current” is localized, i.e. within the Nucleus, the assembly of Nucleons reaches a completely novel ground state, where N's and $`\mathrm{\Delta }`$'s oscillate in phase and their “orbits” are not constrained to be localized, for the binding $`\pi `$-field is spread out throughout the Nucleus, and not peaked around the single Nucleons as envisaged by the short-range “nucleostatic” potential. As a matter of fact, as argued in Ref. , the SM just describes the ground state of a finite number of Fermions confined by their collective interactions within the nuclear volume. In a certain sense we may say that in the new approach the Nucleus owes its existence to a “condensation” of the $`\pi `$-field within the spatial extent of the Nucleus. But it is clear that such a “condensate” is of a very peculiar type, characterized by its well defined phase relation with the N-$`\mathrm{\Delta }`$ oscillations, and by the collective, coherent character of its interaction with the N-$`\mathrm{\Delta }`$ system. In spite of the remarkably successful phenomenology that one can deduce from the precise quantum field theoretical formulation expounded in Refs. , these ideas have found no interest nor resonance in the community of nuclear physics. 
The “mystery” of such a consistent neglect of both the conceptual difficulties of nucleostatic forces vis-à-vis the SM and the nuclear structure in general, and of the satisfactory and theoretically compelling solution by the coherent nuclear interaction sketched above, has recently been lifted on the occasion of a review of our work requested by a funding Agency. We have finally learnt that this approach has been forsaken by the community because it violates a well-known result in Nuclear Physics, which goes back to the beginning of the 70’s and is associated mainly with the work of the Russian physicist A. B. Migdal . According to this work, which has been subsequently refined in many ways<sup>§</sup><sup>§</sup>§For a simple but very clear account see the book by Ericson and Weise , $`\pi `$-condensation at the actual nuclear densities $`\rho _N\simeq 0.17`$ fm<sup>-3</sup> is ruled out by strong repulsion effects which push the critical density to $`\rho _c\gtrsim 3\rho _N`$, way above what can be realized in a Nucleus. It is the purpose of this last observation to clarify why the above argument is totally irrelevant for assessing the validity of the approach of the Coherent Nucleus. In simple terms, as described in Ref., the problem of $`\pi `$-condensation is dealt with by analysing the propagator of the $`\pi `$-field $`D(\omega ,\stackrel{}{k})`$ in a gas of Nucleons of density $`\rho `$. The condition of condensation is then reduced to finding whether the inverse propagator ($`\mathrm{\Pi }`$ is the self-energy function) $$D^{-1}(\omega ,\stackrel{}{k})=\omega ^2-\stackrel{}{k}^{\mathrm{\hspace{0.17em}2}}-m_\pi ^2-\mathrm{\Pi }(\omega ,\stackrel{}{k})$$ (2) has a zero at $`\omega =0`$, identifying the critical density $`\rho _{crit}`$ as that density for which the pole of $`D(\omega ,\stackrel{}{k})`$ is at $`\omega =0`$, i.e. $$D^{-1}(0,\stackrel{}{k})=-\stackrel{}{k}^{\mathrm{\hspace{0.17em}2}}-m_\pi ^2-\mathrm{\Pi }(0,\stackrel{}{k})=0.$$ (3) The negative, generally accepted, conclusion about $`\pi `$-condensation in ordinary Nuclei stems from the results of a calculation of the $`\pi `$-propagator which sums incoherently the (particle-hole) contributions of each of the Nucleons. In this way one obtains for the $`\pi `$ self-energy $$\mathrm{\Pi }(\omega ,\stackrel{}{k})=-\frac{\stackrel{}{k}^{\mathrm{\hspace{0.17em}2}}\chi _0(\omega ,\stackrel{}{k})}{1+g^{}\chi _0(\omega ,\stackrel{}{k})}$$ (4) where the susceptibility function $`\chi _0(\omega ,\stackrel{}{k})`$ receives contributions from both N(940) and $`\mathrm{\Delta }`$(1232) and is proportional to the nuclear density $`\rho `$, as implied by the incoherent sum. $`g^{}`$ is the “correlation parameter”, originating from short-range repulsion. It should by now be abundantly clear that the pion condensation that is predicted by the coherent $`\pi `$-N-$`\mathrm{\Delta }`$ interaction is totally unrelated to the one familiar to the nuclear physicists, and that the impossibility of the latter cannot have any bearing on the likelihood of the former, which, besides its conceptual advantages, has on its side an impressive number of successes . To conclude, whereas incoherent $`\pi `$-condensation is definitely ruled out by both theory and experiment, the coherent “superradiant” process that produces a coherent $`\pi `$-condensate appears not only solidly rooted in theory but also supported by experiments, beginning with the stunning effectiveness of the SM.
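To make the logic of Eqs. (2)-(4) concrete, here is a minimal numerical sketch of how $`\rho _{crit}`$ is located. It is purely illustrative: the static, linear parameterization $`\chi _0=c\rho `$ and the values of $`c`$, $`g^{}`$ and the probe momentum $`k`$ are assumptions of ours, not numbers taken from the literature.

```python
import numpy as np

# Locate the critical density of Eq. (3): the first zero of
# D^{-1}(0,k) = -k^2 - m_pi^2 - Pi(0,k), with Pi(0,k) from Eq. (4).
# Toy inputs (assumed): chi_0 = c*rho, c = 10 fm^3, g' = 0.6, k = 2*m_pi.
m_pi = 0.7            # pion mass in fm^-1 (hbar = c = 1)
k = 2.0 * m_pi        # probe momentum
gp = 0.6              # short-range correlation parameter g'
c = 10.0              # assumed susceptibility slope [fm^3]

def D_inv(rho):
    chi0 = c * rho
    Pi = -k**2 * chi0 / (1.0 + gp * chi0)   # Eq. (4): p-wave attraction
    return -k**2 - m_pi**2 - Pi             # Eq. (3) at omega = 0

rho = np.linspace(1e-4, 1.0, 20000)         # densities in fm^-3
i = np.argmax(D_inv(rho) >= 0.0)            # first zero crossing
print(f"rho_crit ~ {rho[i]:.2f} fm^-3 ~ {rho[i]/0.17:.1f} rho_N")
# With these toy numbers rho_crit ~ 3 rho_N; for g' >= k^2/(k^2+m_pi^2) = 0.8
# no zero exists at all: the short-range repulsion quenches the condensation.
```

The sketch only displays the mechanism: the $`g^{}`$ term in the denominator caps the p-wave attraction, so a modest repulsion pushes the critical density far above $`\rho _N`$, which is precisely the Migdal-type conclusion recalled above, and which says nothing about the coherent condensate discussed in this paper.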
no-problem/9903/astro-ph9903400.html
ar5iv
text
# Gamma Ray Burst Beaming Constraints from Afterglow Light Curves ## 1 Beamed Gamma Ray Burst Afterglow Models At the Rome meeting I presented a derivation of the dynamical behavior of a beamed gamma ray burst (GRB) remnant and its consequences for the afterglow light curve (cf. Rhoads 1999 \[Paper I\]). Here, I summarize these results and apply them to test the range of beaming angles permitted by the optical light curve of GRB 970508. Suppose that ejecta from a GRB are emitted with initial Lorentz factor $`\mathrm{\Gamma }_0`$ into a cone of opening half-angle $`\zeta _\mathrm{m}`$ and expand into an ambient medium of uniform mass density $`\rho `$ with negligible radiative energy losses. Let the initial kinetic energy and rest mass of the ejecta be $`E_0`$ and $`M_0`$, and the swept-up mass and internal energy of the expanding blast wave be $`M_{\mathrm{acc}}`$ and $`E_{\mathrm{int}}`$. Then energy conservation implies $`\mathrm{\Gamma }E_{\mathrm{int}}\approx \mathrm{\Gamma }^2M_{\mathrm{acc}}c^2\approx E_0\approx \text{constant}`$ so long as $`1/\mathrm{\Gamma }_0\ll M_{\mathrm{acc}}/M_0\ll \mathrm{\Gamma }_0`$. The swept-up mass is determined by the working surface area: $`\mathrm{d}M_{\mathrm{acc}}/\mathrm{d}r\approx \pi \rho (\zeta _\mathrm{m}r+c_st_{\mathrm{co}})^2`$, where $`c_s`$ and $`t_{\mathrm{co}}`$ are the sound speed and time since the burst in the frame of the blast wave $`+`$ accreted material. Once $`\mathrm{\Gamma }\lesssim 1/\zeta _\mathrm{m}`$, $`c_st_{\mathrm{co}}\gtrsim \zeta _\mathrm{m}r`$ and the dynamical evolution with radius $`r`$ changes from $`\mathrm{\Gamma }\propto r^{-3/2}`$ to $`\mathrm{\Gamma }\propto \mathrm{exp}(-r/r_\mathrm{\Gamma })`$ (Rhoads 1998; Paper I). The relation between observer frame time $`t_{\oplus }`$ and radius $`r`$ also changes, from $`t_{\oplus }\propto r^4`$ to $`t_{\oplus }\propto \mathrm{exp}(2r/r_\mathrm{\Gamma })`$. Thus, at early times $`\mathrm{\Gamma }\propto t_{\oplus }^{-3/8}`$, while at late times $`\mathrm{\Gamma }\propto t_{\oplus }^{-1/2}`$. The characteristic length scale is $`r_\mathrm{\Gamma }=\left(E_0/\pi c_s^2\rho \right)^{1/3}`$, and the characteristic observed transition time between the two regimes is $`t_{\oplus ,b}\approx 1.125(1+z)\left(E_0c^3/[\rho c_s^8\zeta _\mathrm{m}^2]\right)^{1/3}\zeta _\mathrm{m}^{8/3}`$, where $`z`$ is the burst’s redshift. We assume that swept-up electrons are injected with a power law energy distribution $`N(\mathcal{E})\propto \mathcal{E}^{-p}`$ for $`\mathcal{E}=\gamma _em_ec^2>\mathcal{E}_{\mathrm{min}}\approx \xi _em_pc^2\mathrm{\Gamma }`$, with $`p>2`$, and contain a fraction $`\xi _e`$ of $`E_{\mathrm{int}}`$. This power law extends up to the cooling break, $`\mathcal{E}_{\mathrm{cool}}`$, at which energy the cooling time is comparable to the dynamical expansion time of the remnant. Above $`\mathcal{E}_{\mathrm{cool}}`$, the balance between electron injection (with $`N_{\text{inj}}(\mathcal{E})\propto \mathcal{E}^{-p}`$) and cooling gives $`N(\mathcal{E})\propto \mathcal{E}^{-(p+1)}`$. We also assume a tangled magnetic field containing a fraction $`\xi _B`$ of $`E_{\mathrm{int}}`$. The comoving volume $`V_{\mathrm{co}}`$ and burster-frame volume $`V`$ are related by $`V_{\mathrm{co}}\approx V/\mathrm{\Gamma }\propto M_{\mathrm{acc}}/\mathrm{\Gamma }`$, so that $`B^2=8\pi \xi _BE_{\mathrm{int}}/V_{\mathrm{co}}\propto \mathrm{\Gamma }^2`$ and $`B\propto \mathrm{\Gamma }`$. The resulting spectrum has peak flux density $`F_{\nu ,\oplus ,m}\propto \mathrm{\Gamma }BM_{\mathrm{acc}}/\mathrm{max}(\zeta _\mathrm{m}^2,\mathrm{\Gamma }^{-2})`$ at an observed frequency $`\nu _{\oplus ,\mathrm{m}}\propto \mathrm{\Gamma }B\mathcal{E}_{\mathrm{min}}^2/(1+z)\propto \mathrm{\Gamma }^4/(1+z)`$. 
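These scalings are easy to check with a short numerical integration. The sketch below is our own illustration in dimensionless units ($`E_0=\rho =c=1`$, with an assumed $`\zeta _\mathrm{m}=0.1`$), not the computation of Paper I; it simply integrates the energy-conservation and swept-up-mass relations quoted above.

```python
import numpy as np

# Toy integration of the beamed-remnant dynamics (E0 = rho = c = 1).
E0, rho, c = 1.0, 1.0, 1.0
cs = c / np.sqrt(3.0)            # relativistic sound speed
zeta_m = 0.1                     # assumed opening half-angle
r, dr, M, t_co = 1e-3, 1e-4, 1e-12, 0.0
hist = []
while r < 10.0:
    Gamma = np.sqrt(E0 / (M * c**2))    # from Gamma^2 M_acc c^2 ~ E0
    if Gamma < 2.0:                     # stop when no longer ultrarelativistic
        break
    M += np.pi * rho * (zeta_m * r + cs * t_co)**2 * dr   # working surface
    t_co += dr / (Gamma * c)            # comoving time increment
    r += dr
    hist.append((r, Gamma))
r_arr, G_arr = map(np.array, zip(*hist))
slope = np.gradient(np.log(G_arr), np.log(r_arr))   # local d(ln Gamma)/d(ln r)
print(slope[100], slope[-1])
# ~ -3/2 early on; the decline steepens toward the exponential
# regime once Gamma drops to ~ 1/zeta_m.
```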
Additional spectral features occur at the frequencies of optically thick synchrotron self absorption (which we shall neglect) and the cooling frequency $`\nu _{\oplus ,\mathrm{cool}}`$ (which is important for optical observations of GRB 970508). The cooling break frequency follows from the relations $`\gamma _{\mathrm{cool}}\approx (6\pi m_ec)/(\sigma _T\mathrm{\Gamma }B^2t_{\oplus })`$ (Sari, Piran, & Narayan 1998; Wijers & Galama 1998) and $`\nu _{\oplus ,\mathrm{cool}}\propto \mathrm{\Gamma }B\gamma _{\mathrm{cool}}^2\propto (\mathrm{\Gamma }^4t_{\oplus }^2)^{-1}`$. In the power law regime, $`F_{\nu ,\oplus ,m}\propto t_{\oplus }^0`$, $`\nu _{\oplus ,\mathrm{m}}\propto t_{\oplus }^{-3/2}`$, and $`\nu _{\oplus ,\mathrm{cool}}\propto t_{\oplus }^{-1/2}`$; while in the exponential regime, $`F_{\nu ,\oplus ,m}\propto t_{\oplus }^{-1}`$, $`\nu _{\oplus ,\mathrm{m}}\propto t_{\oplus }^{-2}`$, and $`\nu _{\oplus ,\mathrm{cool}}\propto t_{\oplus }^0`$. The spectrum is approximated by a broken power law, $`F_\nu \propto \nu ^\beta `$, with $`\beta \approx 1/3`$ for $`\nu <\nu _{\oplus ,\mathrm{m}}`$, $`\beta \approx -(p-1)/2`$ for $`\nu _{\oplus ,\mathrm{m}}<\nu <\nu _{\oplus ,\mathrm{cool}}`$, and $`\beta \approx -p/2`$ for $`\nu >\nu _{\oplus ,\mathrm{cool}}`$. The afterglow light curve follows from the spectral shape and the time behavior of the break frequencies. Asymptotic slopes are given in table 1. For the $`\mathrm{\Gamma }\lesssim 1/\zeta _\mathrm{m}`$ regime, we study the evolution of break frequencies numerically. The results for $`\nu _{\oplus ,\mathrm{m}}`$ and $`F_{\nu ,\oplus ,m}`$ are given in Paper I. For $`\nu _{\oplus ,\mathrm{cool}}`$, a good approximation is $`\nu _{\oplus ,\mathrm{cool}}=\left[5.89\times 10^{13}\left(t_{\oplus }/t_{\oplus ,b}\right)^{-1/2}+1.34\times 10^{14}\right]\text{Hz}`$ $`\times `$ $`\left({\displaystyle \frac{1}{1+z}}\right)\left({\displaystyle \frac{c_s}{c/\sqrt{3}}}\right)^{17/6}\left({\displaystyle \frac{\xi _B}{0.1}}\right)^{-3/2}`$ $`\times `$ $`\left({\displaystyle \frac{\rho \text{cm}^3}{10^{-24}\text{g}}}\right)^{-5/6}\left({\displaystyle \frac{E_0/10^{53}\text{erg}}{\zeta _\mathrm{m}^2/4}}\right)^{-2/3}\left({\displaystyle \frac{\zeta _\mathrm{m}}{0.1}}\right)^{4/3}.`$ ## 2 Application to GRB 970508 In the best-sampled GRB afterglow light curve yet available (the GRB 970508 R band data), the optical spectrum changed slope at $`t_{\oplus }\approx 1.4\text{day}`$, suggesting the passage of the cooling break through the optical band (Galama et al 1998). We explore the range of acceptable beaming angles for this burst by fitting the afterglow light curve for $`1.3\text{day}\lesssim t_{\oplus }\lesssim 95\text{day}`$ assuming that $`\nu _{\oplus ,\mathrm{cool}}<c/0.7\mu m`$. The range of acceptable energy distribution slopes $`p`$ for swept-up electrons is taken from the optical colors. Precise measurements for $`2\text{day}\lesssim t_{\oplus }\lesssim 5\text{day}`$ give $`F_\nu \propto \nu ^\beta `$ with $`\beta =-1.10\pm 0.08`$ (Zharikov, Sokolov, & Baryshev 1998), so that $`p=2.20\pm 0.16`$. We take this value to hold throughout the range $`1.3\text{day}\lesssim t_{\oplus }\lesssim 95\text{day}`$, thus assuming that $`p`$ does not change as the afterglow evolves. We subtract the host galaxy flux ($`R_H=25.55\pm 0.19`$; Zharikov et al 1998) from all data points before fitting. We fixed values of $`R_H`$ and $`p`$, and then executed a grid search on the break time $`t_{\oplus ,b}`$ and normalization of the model light curve. Results are summarized in table 2 and figure 1. The final $`\chi ^2`$ per degree of freedom is $`\approx 4`$. These large $`\chi ^2`$ values make meaningful error estimates on parameters difficult. Let us suppose $`\chi ^2`$ is large because details omitted from the models (clumps in the ambient medium or blast wave instabilities) affect the light curve, and so attach an uncertainty of $`0.1\text{mag}`$ to each predicted flux. 
Adding this in quadrature to observational uncertainties when computing $`\chi ^2`$, we obtain $`\chi ^2/\text{d.o.f.}\approx 1`$. Error estimates based on changes in $`\chi ^2`$ then rule out $`\mathrm{lg}(t_{\oplus ,b}/\text{day})<3.5`$ at about the 90% confidence level even for our “maximum beaming” case ($`p=2.04`$, $`R_H=25.36`$). To convert a supposed break time $`t_{\oplus ,b}`$ into a beaming angle $`\zeta _\mathrm{m}`$, we need estimates of the burst energy per steradian and the ambient density. Wijers & Galama (1998) infer $`E_0/\mathrm{\Omega }=3.7\times 10^{52}\text{erg}/(4\pi \text{Sr})`$ and $`\rho =5.8\times 10^{-26}\text{g}/\text{cm}^3`$. Combining these values with $`t_{\oplus ,b}\gtrsim 10^{3.5}\text{day}`$ gives $`\zeta _\mathrm{m}\gtrsim 0.5\text{rad}\approx 30\mathrm{deg}`$. $`E_0/\mathrm{\Omega }`$ and $`\rho `$ are substantially uncertain, but because $`\zeta _\mathrm{m}\propto (\rho /E_0)^{1/8}`$, the error budget for $`\zeta _\mathrm{m}`$ is dominated by uncertainties in $`p`$ rather than in $`E_0`$ or $`\rho `$. This beaming limit implies $`\mathrm{\Omega }\gtrsim 0.75\text{Sr}`$, which is $`\approx 6\%`$ of the sky. GRB 970508 was at $`z\ge 0.835`$ (Metzger et al 1997). We then find gamma ray energy $`E_\gamma =2.8\times 10^{50}\text{erg}\times (\mathrm{\Omega }/0.75\text{Sr})(d__\mathrm{L}/4.82\text{Gpc})^2\left(1.835/[1+z]\right)`$. If the afterglow is primarily powered by different ejecta from the initial GRB, as when a “slow” wind ($`\mathrm{\Gamma }_0\approx 10`$) dominates the ejecta energy, then our beaming limit applies only to the afterglow emission. The optical fluence implies $`E_{\mathrm{opt}}=3.4\times 10^{49}\text{erg}\times (\mathrm{\Omega }/0.75\text{Sr})(d__\mathrm{L}/4.82\text{Gpc})^2\left(1.835/[1+z]\right)`$. The irreducible minimum energy is thus $`3.4\times 10^{49}\text{erg}`$, using the smallest possible redshift and beaming angle. We have reduced the beaming uncertainty, from the factor $`\mathrm{\Gamma }_0^2\approx 300^2\approx 10^5`$ allowed by $`\gamma `$-ray observations alone to a factor $`(4\pi \text{Sr})/(0.75\text{Sr})\approx 20`$, and thus obtain the most rigorous lower limit on GRB energy requirements yet. ###### Acknowledgements. I thank Re’em Sari for useful comments.
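As a final cross-check of the conversion quoted above, the break-time expression of Sect. 1 can be inverted directly. The script below is our own check, not part of the original fits; in particular the identification $`E_0/\zeta _\mathrm{m}^2\approx 2\pi (E_0/\mathrm{\Omega })`$ for a two-sided cone is an assumption on our part.

```python
import math

# Invert t_b ~ 1.125 (1+z) [ (E0/zeta^2) c^3 / (rho c_s^8) ]^(1/3) zeta^(8/3)
# for zeta, holding the energy per unit solid angle fixed (cgs units).
c, z = 2.998e10, 0.835
cs = c / math.sqrt(3.0)
rho = 5.8e-26                         # g cm^-3 (Wijers & Galama 1998)
E_per_sr = 3.7e52 / (4.0 * math.pi)   # erg/sr  (Wijers & Galama 1998)
E0_over_zeta2 = 2.0 * math.pi * E_per_sr  # assumes Omega ~ 2 pi zeta^2 (two cones)
t_b = 10**3.5 * 86400.0               # break-time lower limit in seconds

K = 1.125 * (1.0 + z) * (E0_over_zeta2 * c**3 / (rho * cs**8))**(1.0 / 3.0)
zeta_m = (t_b / K)**(3.0 / 8.0)       # since t_b = K * zeta^(8/3) at fixed E0/zeta^2
print(f"zeta_m ~ {zeta_m:.2f} rad ~ {math.degrees(zeta_m):.0f} deg")
# -> about 0.5 rad (~30 deg), in line with the limit quoted above
```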
no-problem/9903/astro-ph9903010.html
ar5iv
text
# Precise radial velocities of Proxima Centauri. Based on observations collected at the European Southern Observatory, La Silla ## 1 Introduction Searches for companion objects to Proxima Centauri (Prox Cen, GJ551, V645 Cen, HIP70890), the nearest star ($`d=1.2948\pm 0.0041`$ pc; M5Ve), date back 20 years to the astrometric work by Kamper & Wesselink (1978) and the infrared photometric scanning by Jameson et al. (1983). The most extensive search so far consists of astrometric monitoring with the HST FGS #3 (Benedict et al. 1998a), where an astrometric precision of $`0.002^{\prime \prime }`$ per axis and a detection limit for an astrometric variation of $`0.001^{\prime \prime }`$ was achieved, thereby strongly constraining the mass of a possible companion. Limits to the $`K`$-band magnitude of objects within projected separations of $`1`$–$`10`$ AU from Prox Cen were found by Leinert et al. (1997) to be $`K=12.9`$–$`15.1`$ mag, i.e. $`\mathrm{\Delta }K=3`$–$`5`$ mag below the empirical end of the main sequence. Recently the efforts to detect substellar objects near Prox Cen culminated in the announcement of a possible companion by Schultz et al. (1998), who used the HST FOS as a coronographic camera. These authors reported excess light near Prox Cen seen in two images separated by 103 d. Within this time span, the suspected object appeared to have moved in separation from $`0.23^{\prime \prime }`$ to $`0.34^{\prime \prime }`$ (indicating a separation near $`0.5`$ AU) and in P.A. from $`45^{\circ }`$ to $`100^{\circ }`$. If interpreted as a companion, the object would be $`7`$ mag fainter in the FOS red detector. However, a subsequent observation with the HST WFPC2 at two epochs (separated by 21 d) by Golimowski & Schroeder (1998) could not verify the existence of any companion object to Prox Cen within a separation from $`0.09^{\prime \prime }`$ to $`0.85^{\prime \prime }`$ ($`0.11`$–$`1.1`$ AU). The authors concluded that they should have seen the object at only $`3.7`$ mag fainter than Prox Cen in their images taken at $`1\mu `$m. Consequently, they suggested that the excess light seen by Schultz et al. (1998) was an instrumental effect. In this paper we report on 4 years of precise radial velocity (RV) monitoring of Prox Cen and contribute new evidence to the debate on a substellar companion. ## 2 The planet search program at the ESO CES Our planet search program of 39 late-type stars using high-precision RVs was begun at ESO La Silla in Nov. 1992. Prox Cen was first observed in July 1993. We used the ESO 1.4m CAT telescope and the CES spectrograph equipped with the f/4.7 Long Camera and ESO CCDs $`\mathrm{\#}30`$ or $`\mathrm{\#}34`$. The obtained resolving power, central wavelength and spectrum length were $`100,000`$, $`5389`$ Å, and $`48`$ Å. For high measurement precision for differential RVs we self-calibrated the spectrograph using an iodine ($`I_2`$) gas absorption cell temperature-controlled at $`50^{\circ }`$ C (Kürster et al. 1994; Hatzes et al. 1996; Hatzes & Kürster 1994). To obtain RV measurements we model the stellar spectra as observed through the iodine cell using a ‘pure’ stellar spectrum (recorded without the iodine cell in the light path) and a ‘pure’ iodine ($`I_2`$) spectrum from dome flat measurements. The resulting RV data are then corrected to the solar system barycenter via the JPL ephemeris DE200. For stars brighter than 5.5 mag our short-term (i.e. single night), best case long-term, and working long-term precisions (i.e. 
obtained under all observing conditions) are $`4`$–$`7`$, $`11`$, and $`20`$–$`25\mathrm{ms}^{-1}`$, respectively (Kürster et al. 1994, 1998, 1999). Prox Cen is unique in our sample in that it is by far the faintest star we have observed ($`V=11.01`$ mag). ## 3 The RV measurements for Proxima Cen Tab. 1 shows the journal of observations of our differential RV measurements. A total of 58 spectra from 29 nights were available. Before further analysis the spectra were combined into nightly bins (col. 1) as outlined below, with the bins containing between 1 and 5 spectra (col. 2). While self-calibration with an iodine cell is an excellent method to overcome instrumental instabilities such as instrumental drifts, other instabilities such as focus and alignment changes or instrument vibrations require (in principle) additional modelling of the instrumental profile (IP; Butler et al. 1996; Valenti et al. 1995). At low signal-to-noise (S/N) ratios such as obtained for Prox Cen, IP reconstruction becomes impossible. However, to some extent one can overcome the remaining measurement uncertainty by modelling the observed spectra (star+iodine) with various combinations of pure star and iodine spectra. Ten different pure star and pure iodine pairs (of the same night) were available to build models for each of the 58 star+iodine spectra. Based on goodness-of-fit, some of the most inadequate models were rejected. Col. 3 gives the total number of the accepted star+iodine models for all the spectra in each nightly bin (sum over all spectra and models). For an intercomparison of the different models their RV zero points have to be matched. To do this we subtracted for each model the mean RV for the whole time series. Subsequently, we averaged for each star+iodine spectrum the RV data from the individual models. At last, nightly (bin) averages for the Julian day (col. 4) and RV data corrected to zero mean (col. 5) were calculated. A first estimate of the RV error (col. 6) was then based on the rms scatter in each data bin together with a propagation of the error introduced by the process of matching the zero points for the different models. However, we found a positive correlation with a correlation coefficient of $`+0.514`$ between the total number of models in a bin (col. 3) and the error estimate (col. 6), meaning that these errors tend to be smaller when estimated from smaller numbers of models. This indicates that these error estimates are not representative of the true errors, in particular those from small numbers of models. As an independent estimate of the weight of each data bin, col. 7 shows the combined S/N ratio per spectral pixel (mean values) of each data bin. We know from simulations (Hatzes & Cochran 1992) that the RV measurement error $`\mathrm{\Delta }RV\propto (S/N)^{-1}`$, which serves to estimate the RV error. Choosing the constant of proportionality such that the mean of the resulting errors is equal to the total scatter in the RV measurements ($`53.9\mathrm{ms}^{-1}`$) we obtain the equivalent RV errors listed in col. 8 of Tab. 1. Fig. 1 displays a time series of our RV data (29 bins) for Prox Cen with error bars corresponding to these equivalent errors. ## 4 Period search To look for a periodic signal that could manifest the presence of a companion, we searched the frequency range $`f_{\mathrm{min}}=1/T\mathrm{\dots }f_{\mathrm{max}}=1/(2\mathrm{\Delta }t)`$, or $`0.0007\mathrm{\dots }0.5494\mathrm{d}^{-1}`$, where $`T`$ is the total time baseline and $`\mathrm{\Delta }t`$ is the minimum separation between data points. 
Thus a period range of $`1.8202\mathrm{\dots }1428.6`$ d was searched. The choice of the maximum frequency was made in analogy to the Nyquist criterion which, however, is well defined only for equidistant data sampling. Since most of our data sampling is much cruder than our minimum sampling, any signals at periods shorter than about 5 d should be treated with care. Two different types of periodogram were used: a) the Scargle periodogram (Scargle 1982), which is equivalent to least squares sine fitting with equal weight for all data points; the power is proportional to the square of the amplitude of the corresponding sine wave; b) a sine-fitting routine that takes into account data errors by minimizing $`\chi ^2`$; we used the equivalent errors (col. 8 of Tab. 1). Fig. 2 shows only the Scargle periodogram, since the $`\chi ^2`$ minimization approach yielded a very similar result that does not change the interpretation. From 10,000 runs of a bootstrap randomization scheme (see Kürster et al. 1996; Murdoch et al. 1993) we determined the levels of the false alarm probability $`\varphi `$ (FAP) corresponding to various power levels. As shown in Fig. 2, all periodogram peaks are insignificant, having $`\varphi >90\%`$. In contrast to searching a period range, signals at a priori known periods can be significant at smaller power levels $`z`$. For the Scargle periodogram the FAP is given by $`\varphi =1-(1-e^{-z})^n`$, where $`n`$ is the number of independent frequencies in the search interval (Scargle 1982). Hence for a signal at a single a priori known period $`\varphi =e^{-z}`$. Periods that may be present are the period of the stellar rotation and that of the activity cycle. RV searches strongly benefit from ancillary information such as the knowledge of these periods, which aids the interpretation of RV data. Star spots and inhomogeneous granulation patterns in active stars cause distortions in stellar absorption lines; when sufficient resolution and/or signal-to-noise is lacking, these distortions can be mis-interpreted as RV shifts that vary with the rotation period (rotational modulation) or with the activity cycle. Estimates for the rotation period of Prox Cen were given by Benedict et al. (1998b; based on HST FGS photometry), finding $`P_{\mathrm{rot}}=83.5`$ d plus variability at the first harmonic, and by Guinan & Morgan (1996; monitoring of the MgII h+k flux with IUE), who found $`31.5\pm 1.5`$ d. Benedict et al. (1998b) also estimate the length of Prox Cen’s activity cycle to be $`1100`$ d. We do not find significant FAPs at any one of these periods nor at first harmonics or periods twice as large (relevant in case the literature values are first harmonics themselves). Allowing for some uncertainty in these period values we also searched their vicinities, finding $`\varphi =0.47\%(0.16\%)`$ for $`P=150.4`$ d and $`\varphi =1.32\%(0.92\%)`$ for $`P=30.4`$ d, where theoretical values, $`\varphi =e^{-z}`$, as well as values bootstrapped from 10,000 runs (values in brackets) are given. At best, i.e. only if one allows for considerable errors in the original period estimates, a marginal detection of (1) a period twice as long as the rotation period of Benedict et al. (1998b) and (2) the rotation period of Guinan & Morgan (1996) might be indicated. Reconfiguration of active regions causes amplitude and phase changes in rotationally modulated signals, complicating their detection in data that extend over many rotation cycles. However, our results concur with the findings by Saar et al. 
(1998) that predict an activity-related RV scatter of $`<10\mathrm{ms}^{-1}`$ for rotation periods $`>16`$ d. It appears that the RV variation seen for Prox Cen is representative of our measurement precision for this faint star, and cannot be attributed to intrinsic stellar variability. ## 5 Limits to companion parameters Lacking a clear RV signal in our Prox Cen data, we used a Monte Carlo simulation to derive upper limits to the mass of still possible companions in the period range $`0.75`$–$`3000`$ d. Random data sets with the same temporal sampling and with the rms scatter of our data were created. We added sinusoidal signals with different periods, amplitudes, and phases, and evaluated their periodograms. At each period the amplitude was determined for which 99% of the periodograms showed a power corresponding to $`1\%`$ FAP. Hence the combined confidence for the detection of a sinusoidal signal is 98%. For eccentric orbits it may be somewhat lower. Fig. 3 shows the $`m\mathrm{sin}i`$ ($`m`$ the planet mass, $`i`$ the orbital inclination) that corresponds to RV amplitudes at this confidence level, as a function of period and separation. We assumed a stellar mass of $`0.12\mathrm{M}_{\odot }`$ (for an M5Ve star; Kirkpatrick & McCarthy 1994). To account for the unknown inclination one can use its probability distribution (for random orientation of the orbits). The probability that $`i`$ exceeds some angle $`\theta `$ is given by $`p(i>\theta )=\mathrm{cos}(\theta )`$. From this one can construct confidence intervals for the true companion mass. There is a confidence of 90% (95%, 99%) that the true mass is no more than a factor 2.294 (3.203, 7.088) larger than the $`m\mathrm{sin}i`$. Mass limits for these confidence intervals are also included in Fig. 3 with the corresponding labels. Values in brackets are the combined confidence levels accounting for both the confidence of the inclination and the confidence of detection. We choose the curve with the highest confidence (97% combined) as the RV derived upper mass limit. We also got upper limits (99% confidence) to the companion mass from HST FGS astrometry (Benedict, priv. comm.; cf. Benedict et al. 1998a) that are included in Fig. 3. Being most stringent at longer periods, they complement our RV derived limits, constraining the period range $`50`$–$`600`$ d. When combining them with our RV mass limits we can exclude massive planets around Prox Cen over a wide period range, as detailed in Sect. 6. ## 6 Conclusions 1. Prox Cen does not have a close ($`\approx 0.4`$ AU) brown dwarf companion as suggested by Schultz et al. (1998). 2. RV derived upper mass limits range from $`1.1`$ to $`3.7\mathrm{M}_{\mathrm{Jup}}`$ for periods from a few days to a few weeks. 3. In the period range $`50`$–$`600`$ d (separations $`0.13`$–$`0.69`$ AU) the RV derived mass limits range from $`3.4`$ to $`8.3\mathrm{M}_{\mathrm{Jup}}`$; in this interval the more stringent astrometry even indicates the absence of objects from $`1.1`$ down to $`0.22\mathrm{M}_{\mathrm{Jup}}`$, i.e. below Saturn mass for periods $`\gtrsim 370`$ d. 4. Hence no massive planets $`>3.7\mathrm{M}_{\mathrm{Jup}}`$ exist in orbits with periods of $`0.75`$–$`600`$ d, i.e. at $`0.008`$–$`0.69`$ AU. 5. At periods $`>600`$–$`3000`$ d (separations $`>0.69`$ AU) RV derived mass limits range from $`8.3`$ to $`22\mathrm{M}_{\mathrm{Jup}}`$. 6. At the level of our measurement precision the RV data are not notably affected by stellar activity. ###### Acknowledgements. We are grateful to F. Benedict for communicating to us the astrometric mass limits. 
We thank the ESO OPC for generous allocation of observing time. The support of the La Silla 3.6m+CAT team and the Remote Control Operators at ESO Garching was invaluable for obtaining these data. APH and WDC acknowledge support by NASA grant NAG5-4384 and NSF grant AST-9808980.
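The geometrical confidence factors quoted in Sect. 5 are easy to reproduce independently. The short script below (our own check, not part of the original reduction) evaluates them together with the Scargle false-alarm probability used in Sect. 4.

```python
import numpy as np

# For randomly oriented orbits p(i > theta) = cos(theta), so with
# confidence cos(theta) the true mass is at most m*sin(i)/sin(theta).
for conf in (0.90, 0.95, 0.99):
    theta = np.arccos(conf)
    print(f"{conf:.0%}: true mass < {1.0 / np.sin(theta):.3f} * m sin i")
# -> factors 2.294, 3.203, 7.088, as quoted in Sect. 5

def scargle_fap(z, n):
    """False-alarm probability of Scargle power z among n independent
    frequencies (Scargle 1982); for a single a priori period n = 1."""
    return 1.0 - (1.0 - np.exp(-z))**n
```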
no-problem/9903/physics9903028.html
ar5iv
text
# Decay laws for three-dimensional magnetohydrodynamic turbulence ## Abstract Decay laws for three-dimensional incompressible magnetohydrodynamic turbulence are obtained from high-resolution numerical simulations using up to $`512^3`$ modes. For the typical case of finite magnetic helicity $`H`$ the energy decay is found to be governed by the conservation of $`H`$ and the decay of the energy ratio $`\mathrm{\Gamma }=E^V/E^M`$. One finds the relation $`(E^{5/2}/ϵH)\mathrm{\Gamma }^{1/2}/(1+\mathrm{\Gamma })^{3/2}=const`$, $`ϵ=-dE/dt`$. Use of the observation that $`\mathrm{\Gamma }(t)\propto E(t)`$ results in the asymptotic law $`E\propto t^{-0.5}`$ in good agreement with the numerical behavior. For the special case $`H=0`$ the energy decreases more rapidly, $`E\propto t^{-1}`$, where the transition to the finite-$`H`$ behavior occurs at relatively small values. Many plasmas, especially in astrophysics, are characterized by turbulent magnetic fields, the best-known and most readily observable example being the solar wind. The convenient framework to describe such turbulence is magnetohydrodynamics (MHD). Here one ignores the actual complicated dissipation processes, which occur on the smallest scales and would usually require a kinetic treatment, assuming that the main turbulent scales are essentially independent thereof. Instead dissipation is modeled by simple diffusion terms. If, moreover, interest is focussed on the intrinsic turbulence dynamics, one can also ignore the largest scales in the system, which depend on the specific way of turbulence generation, restricting consideration to a small open homogeneous domain of the globally inhomogeneous turbulence. Homogeneous MHD turbulence has become a paradigm in fundamental turbulence research, which has been receiving considerable attention. It is well known that 2D and 3D MHD turbulence have many features in common concerning, in particular, the cascade properties. In both cases there are three quadratic ideal invariants: the energy $`E=\frac{1}{2}\int (v^2+B^2)dV`$, the cross helicity $`K=\int 𝐯\cdot 𝐁dV`$, and a purely magnetic quantity, the magnetic helicity $`H=\int 𝐀\cdot 𝐁dV`$ in 3D and the mean-square magnetic potential $`H^\psi =\int \psi ^2dV`$ in 2D, which both exhibit an inverse cascade. Many theoretical predictions do not distinguish between 2D and 3D, concerning, e.g., the tendency toward velocity and magnetic field alignment or the spectral properties. Thus it is not surprising that numerical studies of MHD turbulence have been mostly concentrated on two-dimensional simulations, where high Reynolds numbers can be reached much more readily, see e.g., . While 2D simulations are now being performed with up to $`N^2=4096^2`$ modes (or, more accurately, collocation points) , studies of 3D MHD homogeneous turbulence have to date been restricted to relatively low Reynolds numbers using typically $`N^3=64^3`$ modes, e.g., , precluding an inertial range scaling behavior. Also in Ref. , where a somewhat higher Reynolds number could be reached by using $`180^3`$ modes, attention was focussed primarily on the process of turbulence generation from smooth initial conditions and the properties of the prominent spatial structures, current and vorticity sheets. In this Letter we present results of a numerical study of freely decaying 3D MHD turbulence with spatial resolution up to $`512^3`$ modes. 
We discuss the decay laws of the integral quantities, in particular the energy $`E`$ and the ratio of kinetic and magnetic energies $`\mathrm{\Gamma }=E^V/E^M`$, and their dependence on the quasi-constant value of $`H`$. The energy decay is found to follow a simple law, which is determined by $`\mathrm{\Gamma }(t)`$ and $`H`$. While most previous studies have been restricted to the case of negligible magnetic helicity $`H\simeq 0`$, we focus attention on the properties of the turbulence for finite $`H`$, which is more typical for naturally existing MHD turbulence occurring mostly in rotating systems. We find that for finite $`H`$ the energy decays significantly more slowly than for $`H\simeq 0`$. This behavior is primarily caused by the rapid decrease of the energy ratio $`\mathrm{\Gamma }`$, which has the same decay time as the energy. The 3D incompressible MHD equations, written in the usual units, $$\partial _t𝐁-\nabla \times (𝐯\times 𝐁)=\eta _\nu (-1)^{\nu -1}\nabla ^{2\nu }𝐁,$$ (1) $$\partial _t𝐰-\nabla \times (𝐯\times 𝐰)-\nabla \times (𝐣\times 𝐁)=\mu _\nu (-1)^{\nu -1}\nabla ^{2\nu }𝐰,$$ (2) $`𝐰=\nabla \times 𝐯,𝐣=\nabla \times 𝐁,`$ are solved in a cubic box of size $`2\pi `$ with periodic boundary conditions. The numerical method is a pseudo-spectral scheme with spherical mode truncation as conveniently used in 3D turbulence simulations (instead of full dealiasing by the 2/3 rule chosen in most 2D simulations). Initial conditions are $$𝐁_𝐤=a\mathrm{e}^{-k^2/k_0^2-i\alpha _𝐤},𝐯_𝐤=b\mathrm{e}^{-k^2/k_0^2-i\beta _𝐤},$$ (3) which are characterized by random phases $`\alpha _𝐤`$, $`\beta _𝐤`$ and satisfy the conditions $`𝐤\cdot 𝐁_𝐤=𝐤\cdot 𝐯_𝐤=0`$ as well as $`E=1`$ and $`\mathrm{\Gamma }=1`$. Further restrictions on $`𝐁_𝐤`$ and $`𝐯_𝐤`$ arise by requiring specific values of $`H`$ and $`K`$, respectively. The wavenumber $`k_0`$, the location of the maximum of the initial energy spectrum, is chosen as $`k_0=4`$, which allows the inverse cascade of $`H_𝐤`$ to develop freely during the simulation time of 10-20 eddy turnover times. This implies a certain loss of inertial range, i.e., a reduction in Reynolds number, but the sacrifice is unavoidable in the presence of inverse cascade dynamics. Choosing $`k_0\simeq 1`$ would lead to magnetic condensation in the lowest-$`k`$ state, which would affect the entire turbulence dynamics. We have used both normal diffusion $`\nu =1`$ and hyperdiffusion $`\nu =2`$. Apart from the fact that inertial ranges are wider and $`H`$ is much better conserved for $`\nu =2`$ than for $`\nu =1`$, no essential differences are found between the two cases. The generalized magnetic Prandtl number $`\eta _\nu /\mu _\nu `$ has been set equal to unity. Table I lists the most important parameters of the simulation runs. The energy decay law is a characteristic property of a turbulent system. In hydrodynamic turbulence the decay rate depends on the energy spectrum at small $`k`$. Assuming time invariance of the Loitsianskii integral $`\mathcal{L}=\int _0^{\mathrm{\infty }}dl\,l^4\langle v_l(x+l)v_l(x)\rangle `$ the energy has been predicted to follow the similarity law $`E\propto t^{-10/7}`$ . The invariance of $`\mathcal{L}`$ has, however, been questioned, see e.g., . Both closure theory and low-Reynolds number simulations yield a significantly slower decrease, $`E\propto t^{-1}`$. Experimental measurements of the energy decay law $`t^{-n}`$ are rather difficult and do not give a uniform picture, $`n`$ ranging between 1.3 and 2 . 
The invariance of the Loitsianskii integral has recently also been postulated for MHD turbulence , where $`\mathcal{L}_{\mathrm{MHD}}`$ is defined in analogy to $`\mathcal{L}`$ in terms of the longitudinal correlation function $`\langle z_l^\pm (x+l)z_l^\pm (x)\rangle `$ of the Elsaesser fields $`𝐳^\pm =𝐯\pm 𝐁`$. Since $`z^2\sim E`$, this assumption gives $`\mathcal{L}_{\mathrm{MHD}}\sim L^5E=const`$, where $`L`$ is the macroscopic scale length of the turbulence. In addition the expression for the energy transfer $`dE/dt=-ϵ\sim -z^4/(LB_0)`$ was used, which formally accounts for the Alfvén effect ,. These relations give $`(dE/dt)B_0/E^{11/5}=const`$ and hence $`E\propto t^{-5/6}`$, treating $`B_0`$ as constant. One may, however, argue that the Alfvén effect is only important on small scales $`l\ll L`$, while on the scale $`L`$ of the energy-containing eddies $`B_0`$ is not constant but $`B_0\sim E^{1/2}`$ (except for the case that $`B_0`$ is an external field, which would, however, make the turbulence strongly anisotropic), hence $`ϵ\sim E^{3/2}/L`$, which would give the same result $`n=10/7`$ as predicted for hydrodynamic turbulence. Low-resolution numerical simulations indicate $`n\simeq 1`$, which is also found in recent simulations of compressible MHD turbulence . For finite magnetic helicity, $`H`$ provides a constant during energy decay, which for high Reynolds number is more robust than the questionable invariance of the Loitsianskii integral. It is true that in contrast to the 2D case, where $`E^M`$ and $`H^\psi `$ are tightly coupled, such that $`E^M\to 0`$ implies $`H^\psi \to 0`$, in 3D a state with $`H=0`$ and finite magnetic energy is possible. But this is only a special and not typical case, since in nature magnetic turbulence usually occurs in rotating systems, which give rise to finite magnetic helicity. If the process of turbulence decay is self-similar, which also implies that the energy ratio $`\mathrm{\Gamma }`$ remains constant, the energy decay law follows from a simple argument . With the scale length $`L=E^{3/2}/ϵ`$, the dominant scale of the energy-containing eddies, we have $$H\sim E^ML\sim EL,$$ (4) since owing to the assumed self-similarity $`E^M\sim E^V\sim E`$. Inserting $`L`$ gives $$\frac{dE}{dt}=-ϵ\simeq -\frac{E^{5/2}}{H},$$ (5) which has the similarity solution $`E\propto t^{-2/3}`$. In Fig. 1 the ratio $`E^{5/2}/(ϵH)`$ is plotted for the runs from Table I with $`H\ne 0`$ and small initial correlation $`\rho _0`$. The figure shows that this quantity is not constant, but increases in time. Moreover, there is a significant scatter of the different curves. Integration yields a slower asymptotic energy decay than predicted, $`n\simeq 0.5`$–$`0.55`$. (The log-log representation of $`E(t)`$, often given in the literature to make a power law behavior visible, is misleading, since the major part of such a curve refers to the transition period of turbulence generation. The solution $`(t-t_{\ast })^{-n}`$ approaches the power law $`t^{-n}`$ only asymptotically for $`t\gg t_{\ast }`$, where $`t_{\ast }`$ is not accurately known. We therefore prefer to plot the decay law in the primary differential form.) We can attribute this discrepancy to the fact that the turbulence does not decay in a fully self-similar way. Indeed the energy ratio $`\mathrm{\Gamma }`$ is found to decrease rapidly, in contrast to the 2D case, where $`\mathrm{\Gamma }`$ decays much more slowly, typically logarithmically , . 
(The ratio of viscous and resistive dissipation $`ϵ^\mu /ϵ^\eta `$, however, remains constant just as in the 2D case , which simply reflects the basic property that dissipation takes place in current sheets and that these are also vorticity sheets, i.e., the location of viscous dissipation.) Let us incorporate the dynamic change of $`\mathrm{\Gamma }`$ in the theory of the energy decay. Assuming that the most important nonlinearities arise from the $`𝐯`$ contributions in the MHD equations, Eq. (5) is replaced by $$ϵ\sim (E^V)^{1/2}\frac{E}{L}=\frac{\mathrm{\Gamma }^{1/2}}{(1+\mathrm{\Gamma })^{3/2}}\frac{E^{5/2}}{H},$$ (6) using the relation (4). Figure 2 shows that $`(E^{5/2}/ϵH)\mathrm{\Gamma }^{1/2}/(1+\mathrm{\Gamma })^{3/2}`$ is indeed nearly constant for $`t>2`$, when turbulence is fully developed, and the scatter in Fig. 1 is strongly reduced. Hence relation (6) is generally valid for finite magnetic helicity. It is also independent of the magnitude of the dissipation coefficients and the character of the dissipation ($`\nu =1`$ or 2), as long as $`H`$ is well conserved. Also the time evolution of the energy ratio $`\mathrm{\Gamma }`$ exhibits a uniform behavior, which is demonstrated in Fig. 3. The slight shift of the uppermost curve, corresponding to the smallest value of $`H`$ (run 4), is due to the smaller drop of $`\mathrm{\Gamma }`$ during the very first phase of turbulence generation $`t<0.5`$ not included in the figure. Moreover, we find that $`\mathrm{\Gamma }(t)`$ is proportional to $`E(t)`$, $`\mathrm{\Gamma }\simeq cE/H`$, $`c=0.1`$–$`0.15`$, as seen in Fig. 4, where $`\mathrm{\Gamma }/(E/H)`$ is plotted. Inserting this result in Eq. (6) we obtain the differential equation for $`E`$, which in the asymptotic limit $`\mathrm{\Gamma }\ll 1`$ becomes $$\frac{dE}{dt}\simeq -0.5\frac{E^3}{H^{3/2}}$$ (7) with the similarity solution $`E\propto t^{-0.5}`$. For finite $`\mathrm{\Gamma }`$ the theory predicts a somewhat steeper decay, flattening asymptotically to $`t^{-0.5}`$ as $`\mathrm{\Gamma }`$ becomes small, which is exactly the behavior of $`E(t)`$ observed in the simulations. (Note that if $`E(t)`$ is plotted on the traditional log-log scale, which overrates the transition period $`t\lesssim 1`$, a steeper decay would be suggested.) The relation $`\mathrm{\Gamma }\propto E`$ now gives also the similarity law for the kinetic energy, $`E^V\propto t^{-1}`$. This theory does not apply to the special case $`H=0`$. Here we find indeed a different decay law, $`E\propto t^{-1}`$ from run 3, which is consistent with previous simulations at lower Reynolds numbers and with the prediction in Ref. . The transition to the slower decay for finite $`H`$ occurs at relatively small values, 0.1–0.2 of the maximum possible value. We have also studied the effect of an initial velocity and magnetic field alignment $`\rho _0=K/E`$. For small $`\rho _0<0.1`$ the alignment, after increasing initially, tends to saturate at some small value, which is due to the fact that $`K`$ is less well conserved than $`H`$. For higher $`\rho _0>0.3`$ (runs 9 and 10 in Table I) the alignment becomes very strong, which as expected slows down the energy decay drastically. In conclusion we have presented a new phenomenology of the energy decay in 3D incompressible MHD turbulence, which agrees very well with direct numerical simulations at relatively high Reynolds numbers. We consider in particular the case of finite magnetic helicity $`H`$, which is typical for naturally occurring magnetic turbulence. 
The energy decay is governed by the conservation of $`H`$ and the time evolution of the energy ratio $`\mathrm{\Gamma }=E^V/E^M`$. We find that the relation $`(E^{5/2}/ϵH)\mathrm{\Gamma }^{1/2}/(1+\mathrm{\Gamma })^{3/2}\approx const`$ is satisfied for most $`H`$-values and is independent of the magnitude of the dissipation coefficients and the order of the diffusion operator, provided the Reynolds number is sufficiently high such that $`H`$ is well conserved. The kinetic energy is found to decrease more rapidly than the magnetic one, in contrast to the behavior in 2D; in particular we find $`\mathrm{\Gamma }\propto E`$. This proportionality leads to a simple energy decay law, $`dE/dt\propto -E^3`$, or $`E\propto t^{-0.5}`$. We also obtain the similarity law for the kinetic energy $`E^V\propto t^{-1}`$. For the special case $`H=0`$ the energy decays more rapidly, $`E\propto t^{-1}`$, which agrees with previous simulations at lower Reynolds numbers. The transition to the finite-$`H`$ behavior occurs at relatively small values of $`H`$. Results concerning the spatial scaling properties of 3D MHD turbulence will be published in a subsequent paper. The authors would like to thank Andreas Zeiler for providing the basic version of the code, Antonio Celani for developing some of the diagnostics, and Reinhard Tisma for optimizing the code for the CRAY T3E computer.
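As an illustration of how relations (6) and (7) fix the asymptotics, the model decay law can be integrated in a few lines. The sketch below uses a toy normalization: $`H`$, the proportionality constant $`c`$ and the order-unity constant closing relation (6) are assumed values, not the fitted ones.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dE/dt = -eps with eps from relation (6) and Gamma = c*E/H.
H, c_gam, C6 = 1.0, 0.12, 1.0     # helicity, Gamma-E slope, const of (6)

def rhs(t, y):
    E = y[0]
    G = c_gam * E / H
    eps = (E**2.5 / H) * np.sqrt(G) / ((1.0 + G)**1.5 * C6)
    return [-eps]

t = np.logspace(-2, 4, 400)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0], t_eval=t, rtol=1e-8)
slope = np.gradient(np.log(sol.y[0]), np.log(t))   # local d(ln E)/d(ln t)
print(slope[-1])
# -> the logarithmic slope settles at -0.5 once Gamma << 1, as in Eq. (7);
#    holding Gamma fixed instead would reproduce E ~ t^(-2/3) as in Eq. (5).
```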
no-problem/9903/astro-ph9903317.html
ar5iv
text
# Two-component model for the chemical evolution of the Galactic disk ## 1 Introduction Since the existence of the thick disk of our Galaxy was confirmed by Gilmore & Reid (1983) more than ten years ago, it has been generally accepted that a complete description of the thick disk, including its scale length, scale height, density normalization, metallicity and kinematical properties, is a necessary step towards understanding the Galaxy formation, halo collapse, and disk dynamical and chemical evolution. Unfortunately, the characteristics of this population remain controversial, especially its density profile. Several attempts have been made to deduce this parameter by remote star counts and field star surveys (Buser & Kaeser 1985, Gilmore, Reid & Hewett 1985, Reid & Majewski 1993, Buser, Rong & Karaali 1998). But the results are still quite uncertain. This is partly due to the lack of a complete sample of thick disk stars, since its members cannot be easily distinguished from those of the thin disk and/or the halo in most observable distributions. Moreover, the determination of thick disk characteristics requires large star samples in various directions well distributed in both longitude and latitude (Robin et al. 1996), which cannot be obtained easily at the present time. Recently, the study of the chemical evolution of the Galactic disk has proven to be a powerful tool to explore the formation and evolution of our Galaxy. Numerous models have been put forward in detail (Matteucci & Francois 1989, Ferrini et al. 1994, Giovagnoli & Tosi 1995, Prantzos & Aubert 1995, Timmes et al. 1995, Carigi 1996, Pilyugin & Edmunds 1996a, Chiappini, Matteucci & Gratton 1997, Allen et al. 1998, Thon & Meusinger 1998, Prantzos & Silk 1998). Among them, Chiappini, Matteucci & Gratton (1997, hereafter CMG97) were the first to take into account the effect of the thick disk. They assumed that there are two main accretion episodes. The first is responsible for the formation of the thick disk, and the second, delayed relative to the first, forms the thin disk. The predictions of their best-fitting model are in good agreement not only with the observed metallicity distribution , but also with the observed number of very low metallicity stars (Rocha-Pinto & Maciel 1996, hereafter RM96). This encourages us to carry out a more detailed analysis of the disk evolution based on the new chemical constraints. In the present paper, the two-component model for the Galactic disk evolution (such as CMG97) is adopted, in which the local surface density of the thick disk at the present time is chosen to be one of the free parameters. The infall rate is assumed to be of Gaussian form instead of an exponentially decreasing one. The quantitative comparison between model predictions and the observations, i.e., the new G-dwarf metallicity distribution obtained by Hou et al (1998), is used for the $`\chi ^2`$-test of the best-fitting model. The outline is as follows. In section 2, we present a brief description of the observational constraints available up to now, of which the most important is the G-dwarf metallicity distribution. Section 3 presents the model and its main ingredients. In section 4, we present best fittings of four different models, which are the closed-box, one-component, pre-thin and post-thin models respectively, to the observations. Discussions of the models are also included in section 4. Our conclusions are shown in the last section. 
## 2 Observational constraints A successful model of the chemical evolution of the galactic disk should reproduce the main observational features of both the solar neighborhood and the whole disk. Our set of constraints includes: (1) the G-dwarf metallicity distribution in the solar neighbourhood (Hou et al 1998); (2) radial abundance gradients at the present time; (3) the age-metallicity relation (AMR); (4) the correlation between \[O/Fe\] and \[Fe/H\]; (5) radial profiles for the gas surface density; and (6) the variations of the Star Formation Rate (SFR) across the disk. The first one is selected as the observational constraint to quantitatively estimate the best-fitting model in this paper, since G dwarfs cover the whole life of the Galactic disk and their metallicity distribution can reflect the local chemical enrichment history. The others are used for comparisons between the best-fit model predictions and observations. ### 2.1 G-dwarf metallicity distribution The metallicity distribution of G dwarfs in the solar neighborhood is one of the most important constraints on the chemical evolution of the Galactic disk. Since G dwarfs have lifetimes comparable to the estimated age of the Galaxy, they represent a sample which has never been depleted by stellar evolution, accumulating since the first episodes of low-mass star formation. Therefore, a complete sample of these stars in the solar neighborhood carries memory of the local star formation and chemical enrichment history. Pagel & Patchett (1975) derived a cumulative G-dwarf metallicity distribution, based on a volume-limited sample of $`132`$ G dwarfs within about 25 pc of the Sun. Pagel (1989) revised the previous data of Pagel & Patchett (1975) by means of a new calibration between the ultraviolet excess $`\delta `$(U-B) and \[Fe/H\]. Later, Rana (1991) and Sommer-Larsen (1991) independently revised the distribution of Pagel (1989), taking into account the dynamical heating effect on the observed distribution. RM96 derived a G-dwarf metallicity distribution in the solar neighborhood, using $`uvby`$ photometry and up-to-date parallaxes. RM96 introduced a chemical criterion, according to which all stars with \[Fe/H\] $`<`$ -1.2 are considered to be halo members and excluded from the final sample. The distribution of RM96 comprises 287 G dwarfs within 25 pc from the Sun and differs from the classic one by having a prominent single peak around \[Fe/H\] = - 0.2 (see RM96 for details). Recently, Hou et al (1998) collected a new, enlarged sample of G dwarfs within 25 pc from the Sun. The stars are selected from the third Catalogue of Nearby Stars (Gliese & Jahreiss 1991). The $`uvby`$ data are taken from the catalogues of Olsen (1993) and Hauck & Mermilliod (1990). No chemical criterion was introduced in Hou et al (1998), since observational evidence showed that, in the metallicity interval -1.5 $`<`$ \[Fe/H\] $`<`$ -1.0, the fraction of thick disk stars in the solar neighbourhood appears to be as high as 60$`\%`$ (Nissen & Schuster 1997). This is one of the main differences between the distribution of Hou et al (1998) and that of RM96. The final sample contains 382 G dwarfs with photometric data. The adopted metallicity calibration and kinematic correction are the same as those of RM96. Following Pagel (1989) and RM96, Hou et al (1998) have also corrected the distribution for observational errors and cosmic scatter. The results of Hou et al (1998) are shown in Table 1, in which the sample is divided into 19 bins. 
The first column of Table 1 is the metallicity range of each bin. The raw distribution $`(\mathrm{\Delta }N_0)_i`$ is presented in column 2. The third column is the weight factor $`f_i`$ for the scale height correction according to Sommer-Larsen (1991). The fourth presents $`\delta (\mathrm{\Delta }N_0)_i`$, the correction factors for the observational errors and cosmic scatter. The resulting relative distribution is given in the last column, where $`N=\sum _{i=1}^{19}[\frac{(\mathrm{\Delta }N_0)_i}{f_i}+\delta (\mathrm{\Delta }N_0)_i]`$ is the total number of G dwarfs after correction. In the last column, the second to fourth bins are grouped to yield a mean value for the distribution, which is similar to the method used in RM96. This procedure is also applied to the fifth to seventh bins. Table 1 shows that the resulting distribution of Hou et al (1998) differs from that of RM96 by having a larger width and a smaller amplitude of the single peak. Moreover, the metal-poor tail of the new distribution (Hou et al 1998) extends to \[Fe/H\]=-1.5. ### 2.2 Abundance gradients Furthermore, radial metallicity variations of the interstellar medium (ISM) can constrain models of Galaxy formation and chemical evolution. From extensive studies of optical emission lines in HII regions, Shaver et al. (1983) derived an oxygen abundance gradient of the order of -0.07 dex/kpc. Afflerbach et al. (1996) have indirectly deduced a similar result by measuring electron temperature variations in a set of ultra-compact HII regions. A relatively flatter gradient has been obtained by Vilchez & Esteban (1996) for the outer Galaxy, using spectroscopic observations of a sample of HII regions towards the Galactic anti-center. The recent radial profile of oxygen in the Galaxy can also be traced by observations of B-type stars, with main sequence ages less than 1.0 Gyr. A series of medium- to high-resolution spectroscopic observations of early B-type main-sequence objects have been published (Smartt et al. 1997 and references therein). Using this homogeneous sample, Smartt et al. (1997) derived an oxygen abundance gradient of -0.07 $`\pm `$ 0.01 dex/kpc between Galactocentric distances of 6 kpc $`\le r\le `$ 18 kpc, which is in good agreement with nebular studies. Gummersbach et al (1998) determine the stellar parameters and abundances of several elements for 16 early B main-sequence stars at Galactocentric distances 5.0 kpc $`\le r\le `$ 14.0 kpc by reanalyzing and extending the observations of Kaufer et al (1994). An oxygen abundance gradient of -0.07 $`\pm `$ 0.01 dex/kpc is derived, typical for normal spiral galaxies of similar Hubble type. 
Their abundance analysis was made with theoretical LTE model atmospheres, based on extensive high-resolution, high-S/N spectroscopic observations of carefully selected field stars. The resulting AMR of Edvardsson et al. (1993) was used in the present study. However, this AMR does not constitute a tight constraint on the chemical model, since there is considerable scatter. Moreover, the results of the survey of Edvardsson et al. (1993) concerning the O vs. Fe relationship for field stars are used in this study. As for metal-poor stars, the correlation between \[O/Fe\] and \[Fe/H\] is taken from Barbuy (1988). The radial Galactic profiles of atomic and molecular hydrogen are discussed in Lacey & Fall (1985). An updated discussion is given in Dame (1993). Inside the solar circle, the molecular and atomic gas are found in roughly equal amounts. However, the surface density of atomic hydrogen, which seems to be constant from $`4`$ kpc to $`15`$ kpc, dominates the gas profile outside the solar circle. The radial distribution of the sum of atomic and molecular hydrogen given in Dame (1993) is adopted in this paper. The radial distribution of the present SFR in the Galaxy is taken from Gusten & Merger (1983), Lyne et al (1983) and Guibert et al (1978). Data are based on several tracers of star formation: Lyman continuum photons from HII regions, pulsars and supernova remnants. It is normalized to the present SFR in the solar neighbourhood (as in Lacey & Fall 1985), since the absolute values depend on poorly known conversion factors. ## 3 The model It is assumed that the Galactic disk is sheet-like, and that it originates and grows only from the infall of primordial gas. The disk is considered as a system of independent rings, each $`1`$ kpc wide. No radial inflows or outflows are considered, and the center of each ring is located at its median Galactocentric radius. The ring centered at Galactocentric distance $`r_{\odot }=8.5`$ kpc is labeled the solar neighbourhood. The age of the disk is adopted to be 13.0 Gyr (Rana 1991). ### 3.1 Basic equations and nucleosynthesis The instantaneous-recycling approximation (IRA) is relaxed, but instantaneous mixing of the gas with the stellar ejecta is assumed, i.e., the gas is characterized by a unique composition at each epoch of time. We solve numerically the classical set of equations of Galactic chemical evolution (Tinsley 1980, Pagel 1997) as $`{\displaystyle \frac{d\mathrm{\Sigma }_{tot}(r,t)}{dt}}`$ $`=`$ $`f(r,t),`$ (1) $`{\displaystyle \frac{d\mathrm{\Sigma }_{gas}(r,t)}{dt}}`$ $`=`$ $`-\psi (r,t)+{\displaystyle \int _{m_t}^{100}}(m-m_r)\psi (r,t-\tau _m)\varphi (m)dm+f(r,t),`$ (2) $`{\displaystyle \frac{d[Z_i(r,t)\mathrm{\Sigma }_{gas}(r,t)]}{dt}}`$ $`=`$ $`-Z_i(r,t)\psi (r,t)+{\displaystyle \int _{m_t}^{100}}my_{i,m}\psi (r,t-\tau _m)\varphi (m)dm+Z_{i,f}f(r,t),`$ (3) where $`\mathrm{\Sigma }_{tot}(r,t)`$ and $`\mathrm{\Sigma }_{gas}(r,t)`$ are the total and gas surface densities, respectively, in the ring centered at Galactocentric distance $`r`$ at evolution time $`t`$; $`f(r,t)`$ is often called the infall or accretion rate; $`\psi (r,t)`$ is the star formation rate (SFR) and $`\varphi (m)`$ is the initial mass function (IMF); $`m_r`$ and $`\tau _m`$ are the remnant mass and the lifetime of a star of initial mass $`m`$, respectively, and $`m_t`$ is the corresponding initial mass of a star whose main-sequence lifetime $`\tau _m`$ equals the evolution time $`t`$ (the turnoff mass). Here the mass range of the IMF is taken from $`0.1M_{\odot }`$ to $`100M_{\odot }`$. 
The mass of element $`i`$ in the gas evolves via star formation (putting metals from the ISM into stars), ejection, and gas inflows, according to equation (3), where $`y_{i,m}`$ is the stellar yield of element $`i`$, i.e., the mass fraction of a star of initial mass $`m`$ that is converted to element $`i`$ and ejected, and $`Z_{i,f}`$ is the mass abundance of element $`i`$ in the infalling gas, which is assumed to be primordial in this study: $`Z_{O,f}=Z_{Fe,f}=0`$. It should be emphasized that the second terms on the right-hand sides of equations (2) and (3) also include the contribution of Type Ia supernovae (Type Ia SNs), which is presented in detail in Matteucci & Greggio (1986). The constant $`A`$ in equation (9) of Matteucci & Greggio (1986) describes the fraction of systems with total mass in the appropriate range which eventually succeed in giving rise to a Type Ia SN event, and in this study it is fixed by requiring the best fit to the metal-rich tail of the G-dwarf metallicity distribution in the solar neighbourhood. It is also assumed that every star ejects its envelope just after leaving the main sequence. The adopted relation between the main-sequence lifetime $`\tau _m`$ (in units of Gyr) and the stellar initial mass $`m`$ (in units of $`M_{\odot }`$) is (Larson 1974): $$logm=1.983-1.054\sqrt{(log\tau _m+2.52)}.$$ (4) For the sake of simplicity, we assume that, except for Type Ia SNs, any star evolves as a single star even if it is the member of a binary system. All massive stars ($`m>9M_{\odot }`$) explode as Type II supernovae (Type II SNs), leaving behind a neutron star of mass $`m_R=0.5M_{\odot }`$ (Prantzos & Silk 1998). The final stage of intermediate/low mass stars ($`m\le 9M_{\odot }`$) is white dwarfs, and the final-initial mass relation is taken from Weidemann (1984). Type Ia SNs are thought to originate from carbon deflagration in C-O white dwarfs in binary systems. The method used to include the contribution of Type Ia SNs is the same as that of Matteucci & Greggio (1986). In this paper, we only consider the evolution of iron and oxygen. The oxygen and iron production for Type II SNs and Type Ia SNs are taken from Woosley & Weaver (1995) and Woosley (1997), respectively. Recently, using the evolutionary tracks of the Geneva group up to the early asymptotic giant branch (AGB) in combination with a synthetic thermal-pulsing AGB model, van den Hoek & Groenewegen (1997) calculated in detail the chemical evolution and yields of six elements up to the end of the AGB. Their results showed that low-mass stars ($`m<3M_{\odot }`$) produce small amounts of oxygen, yet intermediate mass stars ($`3M_{\odot }\le m\le 8M_{\odot }`$) destroy their initial oxygen through Hot Bottom Burning (HBB). Therefore, it is reasonable in this paper to neglect the oxygen production by intermediate/low mass stars compared with that of massive stars. 
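Equation (4) and its inverse, the turnoff mass $`m_t`$ at a given evolution time, are straightforward to implement; the minimal sketch below is our own illustration of the relation, not the code used for the models.

```python
import math

def lifetime_gyr(m):
    """Main-sequence lifetime (Gyr) of a star of initial mass m (in M_sun),
    from Eq. (4): log m = 1.983 - 1.054*sqrt(log tau_m + 2.52)."""
    return 10.0**(((1.983 - math.log10(m)) / 1.054)**2 - 2.52)

def turnoff_mass(t_gyr):
    """Invert Eq. (4): the initial mass whose lifetime equals t_gyr."""
    return 10.0**(1.983 - 1.054 * math.sqrt(math.log10(t_gyr) + 2.52))

print(lifetime_gyr(1.0))    # ~ 10.5 Gyr for a 1 M_sun star
print(turnoff_mass(13.0))   # ~ 0.94 M_sun: turnoff mass at the adopted disk age
```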
### 3.2 The infall rate Currently popular models of galaxy formation are semi-analytic models within the framework of the hierarchical structure formation paradigm (White & Rees 1978, White & Frenk 1991, Wechsler et al 1998, Mo et al 1998), which allow one to model the astrophysical processes involved in galaxy formation in a simplified but physical way (Kauffmann et al 1993, Somerville & Primack 1998, Kauffmann et al 1998, Primack et al 1998). These models are in good agreement with a broad range of local galaxy observations, including the correlation between luminosity and circular velocity for spirals (the Tully-Fisher relation), the B-band luminosity function, cold gas contents, metallicities, and colors. These models postulate that the formation of a galaxy is mainly regulated by gas cooling, dissipation, star formation, and supernova feedback. For simplicity, however, Galactic chemical evolution models assume that the integrated effect of these processes can be represented by an infall rate, which is a function of evolution time and Galactocentric distance. The form of the infall rate is adjusted to satisfy the constraint of the G-dwarf metallicity distribution in the solar neighbourhood. Although an exponentially decreasing infall rate is widely used, we adopt here a Gaussian form. The physical motivation for this choice is that, because of its small initial surface density, the local disk initially accretes only a small amount of the surrounding gas; as the disk mass and gravitational potential build up, the accretion rate gradually increases, but it starts decreasing when the gas reservoir is depleted (Prantzos & Silk 1998). Numerous models for the formation of the thick disk have been put forward since the confirmation of its existence by Gilmore & Reid (1983) (see Majewski 1993 for details). The models fall into either "top-down" scenarios (the pre-thin model), where the formation of the thick disk precedes that of the thin disk, or "bottom-up" scenarios (the post-thin model), where the thick disk is the result of some action on or by the thin disk. In the pre-thin disk model, the formation of the thick disk is a transitional phase during the general contraction of the Galaxy. This model views the thick disk as a dissipative, rotationally supported structure, and the halo as non-dissipative and supported by the kinetic pressure provided by large, anisotropic velocity dispersions. The post-thin model attributes the formation of the thick disk to processes occurring after the gas has completely collapsed into a thin disk. Possible physical processes are: (1) secular kinematic diffusion of thin disk stars (Norris 1987); (2) violent thin disk heating by the accretion of a satellite galaxy (Quinn et al 1993), where the required events must not occur too late in the disk lifetime, so that the gas can cool again and form stars in the thin disk; (3) halo response to the disk potential (Gilmore & Reid 1983). The post-thin model leaves two important observational signatures. First, the thick disk is a separate population distinct from the thin disk and the halo. Second, no gradient can be generated in the thick disk by these events, although a pre-existing gradient may survive the merger. In this study, both the pre-thin and post-thin models for the formation of the Galactic disk are considered. Following CMG97, we assume that there are two main infall episodes in both cases. The rate of mass accretion (in units of $`M_{}pc^{-2}Gyr^{-1}`$) in each ring can be expressed as $$f(r,t)=\frac{A(r)}{\sqrt{2\pi }\sigma _t}e^{-(t-\tau _t)^2/2\sigma _t^2}+\frac{B(r)}{\sqrt{2\pi }\sigma _d}e^{-(t-\tau _d)^2/2\sigma _d^2},$$ (5) where $`\tau _t`$ and $`\tau _d`$ (in units of Gyr) are the times of maximum infall for the thick and thin disk respectively, and $`\sigma _t`$ and $`\sigma _d`$ (in units of Gyr) are the corresponding half-widths. It is assumed that $`\sigma _t\sim \tau _t`$ and $`\sigma _d\sim \tau _d`$, i.e., the value of the half-width varies only a little around that of the formation time-scale (Prantzos & Silk 1998).
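A minimal sketch of the two-episode infall rate of eq. (5) follows; the amplitudes and Gaussian parameters in the example call are illustrative placeholders, not fitted values.

```python
import numpy as np

# Sketch of the two-episode Gaussian infall rate of eq. (5); A and B are the
# (not yet normalised) thick- and thin-disk amplitudes at a given radius.

def infall_rate(t, A, B, tau_t=1.0, sigma_t=1.0, tau_d=4.0, sigma_d=4.0):
    thick = A / (np.sqrt(2 * np.pi) * sigma_t) * \
        np.exp(-(t - tau_t) ** 2 / (2 * sigma_t ** 2))
    thin = B / (np.sqrt(2 * np.pi) * sigma_d) * \
        np.exp(-(t - tau_d) ** 2 / (2 * sigma_d ** 2))
    return thick + thin

t = np.linspace(0.0, 13.0, 5)
print(infall_rate(t, A=10.0, B=45.0))  # in M_sun pc^-2 Gyr^-1
```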
In the pre-thin model, the first infall episode forms the thick disk, which originated from a fast dissipative collapse such as that suggested by Eggen, Lynden-Bell & Sandage (1962) (Sandage 1990, Majewski 1993, CMG97). The second infall episode, delayed with respect to the first, forms the thin disk component, with a time-scale much longer than that of the thick disk. The infall rate $`f(r,t)`$ is normalized to the local disk density at the present time, i.e., $`\int _0^{t_g}f(r,t)dt=\mathrm{\Sigma }_{tot}(r,t_g)`$. Assuming that the present total surface densities of both the thin and thick disks decrease exponentially with Galactocentric distance, with the same scale-length $`r_0`$, the amplitudes $`A(r)`$ and $`B(r)`$ in the pre-thin model can be written as $`A(r)`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Sigma }_{thick}(r_{},t_g)}{\int _0^{t_g}\frac{e^{-(t-\tau _t)^2/2\sigma _t^2}}{\sqrt{2\pi }\sigma _t}dt}}e^{-\frac{r-r_{}}{r_0}}`$ (6) $`B(r)`$ $`=`$ $`\{\begin{array}{cc}0\hfill & \mathrm{if}t<t_{max}\hfill \\ \frac{\mathrm{\Sigma }_{tot}(r_{},t_g)-\mathrm{\Sigma }_{thick}(r_{},t_g)}{\int _{t_{max}}^{t_g}\frac{e^{-(t-\tau _d)^2/2\sigma _d^2}}{\sqrt{2\pi }\sigma _d}dt}e^{-\frac{r-r_{}}{r_0}}\hfill & \mathrm{if}t_{max}\le t\le t_g\hfill \end{array}`$ (9) respectively, where $`t_g`$ is the age of the Galactic disk, $`t_{max}`$ is the epoch at which the formation of the thin disk begins, $`\mathrm{\Sigma }_{tot}(r_{},t_g)`$ is the present total surface density in the solar neighborhood, and $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ is the local surface density of the thick disk at the present time, which is one of the free parameters of our model. We adopt $`r_0`$ = 2.7 kpc (Robin et al 1996, Kent 1992) and $`\mathrm{\Sigma }_{tot}(r_{},t_g)`$ = 55.0 $`M_{}pc^{-2}`$ (Rana 1991, Sackett 1997) for the Galactic disk. There are four free parameters in the pre-thin model: $`\tau _t,\tau _d,t_{max}`$ and $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$. Contrary to the pre-thin model, the post-thin model assumes that the first infall episode forms the thin disk; the thick disk then forms later as a result of some action on or by the thin disk. Using the same method described above, $`A(r)`$ and $`B(r)`$ in the post-thin model can be written as $`A(r)`$ $`=`$ $`\{\begin{array}{cc}0\hfill & \mathrm{if}t<t_{max}\hfill \\ \frac{\mathrm{\Sigma }_{thick}(r_{},t_g)}{\int _{t_{max}}^{t_g}\frac{e^{-(t-\tau _d)^2/2\sigma _d^2}}{\sqrt{2\pi }\sigma _d}dt}e^{-\frac{r-r_{}}{r_0}}\hfill & \mathrm{if}t_{max}\le t\le t_g,\hfill \end{array}`$ (12) $`B(r)`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Sigma }_{tot}(r_{},t_g)-\mathrm{\Sigma }_{thick}(r_{},t_g)}{\int _0^{t_g}\frac{e^{-(t-\tau _t)^2/2\sigma _t^2}}{\sqrt{2\pi }\sigma _t}dt}}e^{-\frac{r-r_{}}{r_0}},`$ (13) respectively, where each parameter has the same notation as in Eqs. (6) and (9) except $`t_{max}`$, which now represents the epoch at which the thick disk begins to form. There are also four free parameters in the post-thin model: $`\tau _t,\tau _d,t_{max}`$ and $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$.
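The normalisations above can be evaluated numerically, as in the following sketch for the pre-thin model, eqs. (6) and (9); the Gaussian parameters and the value of $`\mathrm{\Sigma }_{thick}`$ are assumed illustrative values.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the pre-thin normalisations, eqs. (6) and (9); r0 and Sigma_tot
# are the values quoted in the text, the remaining parameters are assumed.

t_g, r_sun, r0 = 13.0, 8.5, 2.7
sigma_tot, sigma_thick = 55.0, 15.0
tau_t = sigma_t = 1.0
tau_d = sigma_d = 4.0
t_max = 1.0

def gauss(t, tau, sig):
    return np.exp(-(t - tau) ** 2 / (2 * sig ** 2)) / (np.sqrt(2 * np.pi) * sig)

def A(r):  # thick-disk amplitude, eq. (6)
    norm, _ = quad(gauss, 0.0, t_g, args=(tau_t, sigma_t))
    return sigma_thick / norm * np.exp(-(r - r_sun) / r0)

def B(r):  # thin-disk amplitude, eq. (9), active only for t >= t_max
    norm, _ = quad(gauss, t_max, t_g, args=(tau_d, sigma_d))
    return (sigma_tot - sigma_thick) / norm * np.exp(-(r - r_sun) / r0)

print(A(r_sun), B(r_sun))
```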
### 3.3 SFR and IMF In the majority of chemical evolution models, the star formation rate (SFR) is assumed to depend on some power of the gas surface density (Prantzos & Aubert 1995, Tosi 1996, CMG97). Based on gravitational instability, Wang & Silk (1994) developed a self-consistent model to derive the global star formation rate as a function of radius in galactic disks. The resulting star formation rate depends not only on the gas surface density, but is also proportional to the epicyclic frequency $`\kappa `$. Since $`\kappa \propto r^{-1}`$, we adopt a star formation rate similar to that of Prantzos & Aubert (1995), which can be expressed as (in units of $`M_{}pc^{-2}Gyr^{-1}`$): $$\psi (r,t)=\nu \mathrm{\Sigma }_{gas}^n(r,t)/r$$ (14) where $`\mathrm{\Sigma }_{gas}(r,t)`$ and $`r`$ are in units of $`M_{}pc^{-2}`$ and $`kpc`$, respectively. The power-law index $`n=1.4`$ is adopted (Prantzos & Silk 1998), which is to some degree similar to that of Kennicutt (1998). The value of $`\nu `$ is derived from the condition that the present observed gas surface density in the solar neighborhood be reproduced. We adopt $`\mathrm{\Sigma }_{gas}(r_{},t_g)`$ = 10.0 $`M_{}pc^{-2}`$ (Scoville & Sanders 1987, Prantzos & Aubert 1995, Sackett 1997). The adopted stellar initial mass function (IMF) is taken from Kroupa et al. (1993), in which the IMF is described by a three-slope power law, $`\varphi (m)\propto m^{-(1+x)}`$. In the high-mass region, the IMF has a relatively steep slope of $`x`$ = 1.7, while it flattens in the low-mass range ($`x`$ = 1.2 for $`0.5M_{}\le m\le 1.0M_{}`$ and $`x`$ = 0.3 for $`m<0.5M_{}`$). The adopted IMF is normalized to $`\int _{0.1}^{100.0}m\varphi (m)dm=1`$. ## 4 Results and discussions ### 4.1 $`\chi ^2`$-test The observed G-dwarf metallicity distribution is treated as the tightest observational constraint on chemical evolution models of the Galactic disk (CMG97). The metallicity of local disk stars in the sample of Hou et al (1998) extends down to \[Fe/H\] = -1.5, which is consistent with the observations of the thick disk (Majewski 1993, Nissen & Schuster 1997). However, the metal-weak tail of the thick disk is excluded from the distribution of RM96 by means of a chemical criterion. Since the main aim of this work is to predict the general properties of the thick disk, we select the distribution of Hou et al (1998) as the observational constraint. The $`\chi ^2`$ of the goodness-of-fit to the observed data is calculated as $$\chi ^2(n)=\underset{i=1}{\overset{n}{\sum }}(\frac{y_{mi}-y_{oi}}{\sigma _i})^2,$$ (15) where $`y_{mi}`$ and $`y_{oi}`$ are the model-generated and observed data in the $`i`$th bin respectively, $`\sigma _i`$ is the error of the observed data in the $`i`$th bin, and $`n`$ is the total number of bins. For the metallicity distribution, Hou et al (1998) have already divided the observed data into $`19`$ bins. The error $`\sigma _i`$ is given by $$\sigma _i(\frac{\mathrm{\Delta }N_i}{N})=\sqrt{(\frac{\sqrt{\mathrm{\Delta }N_i}}{N})^2+(\frac{\mathrm{\Delta }N_i}{N}\frac{1}{\sqrt{N}})^2}=\sqrt{\frac{\mathrm{\Delta }N_i}{N}\frac{1}{N}+(\frac{\mathrm{\Delta }N_i}{N}\frac{1}{\sqrt{N}})^2},$$ (16) where $`\mathrm{\Delta }N_i/N`$ is the relative number of G-dwarfs in the $`i`$th bin and $`N`$ is the total number of G-dwarfs in the corrected sample. For comparison, we consider four different models: closed-box, one-component, pre-thin and post-thin. The differences among these models lie in the treatment of the infall rate. We perform model calculations over a broad range of free-parameter combinations, where free parameters exist, to obtain the best fit to the new metallicity distribution. Using the $`\chi ^2`$-test, the best-fit results for these individual models are shown in Table 2 and are discussed in detail in the following subsections.
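A sketch of how the $`\chi ^2`$ of eq. (15) and the bin errors of eq. (16) can be evaluated is given below; the bin counts are made-up illustrative numbers, not the data of Hou et al (1998), and the mapping from $`\chi ^2`$ to a confidence level through the survival probability is our assumption, since the text does not spell out its exact definition.

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the goodness-of-fit test, eqs. (15)-(16), with fake data.

def chi_square(y_model, y_obs, sigma):
    return np.sum(((y_model - y_obs) / sigma) ** 2)   # eq. (15)

def bin_errors(dN, N):
    frac = dN / N
    return np.sqrt(frac / N + (frac / np.sqrt(N)) ** 2)   # eq. (16)

dN = np.array([5, 20, 40, 25, 10])                  # illustrative bin counts
N = dN.sum()
y_obs = dN / N
y_model = np.array([0.06, 0.22, 0.38, 0.24, 0.10])  # illustrative model values
chisq = chi_square(y_model, y_obs, bin_errors(dN, N))
# Assumed confidence-level mapping: chi^2 survival probability with n-1 dof.
print(chisq, chi2.sf(chisq, df=len(dN) - 1))
```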
The characteristics of the models are shown in the first column of Table 2. The resulting values of the free parameters, the $`\chi ^2`$ and the model confidence levels for the best fit are shown in the second, third and fourth columns, respectively. Figure 1 presents the best fits to the new G-dwarf metallicity distribution for the closed-box model (dotted line), the one-component model (dot-dashed line), the pre-thin model (long-dashed line) and the post-thin model (full line). ### 4.2 Closed-box model The closed-box model considers the whole disk as an isolated system. Thus $`A(r)=B(r)=0`$, and no free parameter exists. Figure 1 shows that the closed-box model predicts a larger number of metal-poor stars than observed. This is the well-known G-dwarf problem. The failure of the closed-box model is also evident in Table 2, where its confidence level is too low for the model to be acceptable. Several possible explanations of the G-dwarf problem have been proposed (see brief reviews in Francois et al 1990, Malinie et al 1993). Generally, a good agreement between model predictions and observations is obtained by models that assume the disk formed by infall of primordial gas (Pilyugin & Edmunds 1996b, CMG97). ### 4.3 One-component model The one-component model treats the thick and thin disks as a single disk, which corresponds to an infall rate with only one Gaussian term, i.e., $`A(r)=0,B(r)\ne 0`$. After the normalization mentioned above, there is only one free parameter, $`\tau _d`$, the infall timescale of the whole disk. Figure 1 shows that, for the one-component model, the goodness-of-fit between the best-fit model predictions and the observations seems acceptable, especially for the shape of the single peak of the G-dwarf metallicity distribution. However, the $`\chi ^2`$-test gives a confidence level of only some 30$`\%`$ for the one-component model, whereas it can reach 70$`\%`$ for the two-component model (Table 2). This means that the one-component model is not the best, which is consistent with the observational result that the thick disk is kinematically and chemically different from the thin disk (see brief reviews in Majewski 1993). It is therefore necessary to treat the thick and thin disks differently if one wants to investigate Galactic chemical evolution in detail. ### 4.4 Two-component model For the two-component model, both the pre-thin and post-thin variants are considered. The differences between these two models lie in the different treatments of the infall rate, described in detail in section 3.2. There are four free parameters in both the pre-thin and post-thin models: $`\tau _t,\tau _d,t_{max},\mathrm{\Sigma }_{thick}(r_{},t_g)`$. It should be emphasized that $`t_{max}`$ has a different meaning in each model. Figure 1 shows that, for these two models, the best-fit model predictions are in good agreement with the observations, which is confirmed by the results of the quantitative tests (Table 2). From Figure 1 and Table 2 it is difficult to distinguish which model is better. One important parameter predicted by the best fit of our two-component model is the local surface density of the thick disk at the present time. On the other hand, the density ratio of the thick to the thin disk is usually deduced from studies of star counts. In Table 3, we present the comparison between our model predictions and the data compiled from the literature.
Since the density normalization and scale-height of the thick disk are anti-correlated when fitted simultaneously (Reid & Majewski 1993, Majewski 1993, Robin et al. 1996), the scale heights of the thick disk are also shown in the second column. The previous results for the space density ratio of the thick to the thin disk (the parameter $`D`$) are shown in the third column. Taking the scale-heights of the thick and thin disks as $`h_t`$ = 1000 pc and $`h_d`$ = 300 pc respectively, as usual (Majewski 1993), the local space density ratio of the thick to the thin disk at the present time predicted by our model can be obtained from the following equation: $$D\approx \frac{\mathrm{\Sigma }_{thick}(r_{},t_g)}{\mathrm{\Sigma }_{tot}(r_{},t_g)-\mathrm{\Sigma }_{thick}(r_{},t_g)}\frac{h_d}{h_t},$$ (17) where $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ = 15.0 $`M_{}pc^{-2}`$ and 10.0 $`M_{}pc^{-2}`$ for the best fits of the pre-thin and post-thin models, respectively. Moreover, equation (17) can also be used to deduce $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ from the values of $`D`$ and $`h_t`$ published in the literature. The results are shown in the fourth column of Table 3, with $`h_d`$ = 300 pc and $`\mathrm{\Sigma }_{tot}(r_{},t_g)`$ = 55.0 $`M_{}pc^{-2}`$. From Table 3, it can be seen that the previous density normalizations of the thick disk span a wide range (from 0.02 to 0.11). This is partly due to the difficulty of separating the kinematical, chemical and spatial characteristics of the thick disk from those of the halo and thin disk. Moreover, when published star counts and color distributions in neighboring fields are compared, star density discrepancies are sometimes larger than the photometric random errors (Ojha et al. 1996, Robin et al. 1996). The results of more recent surveys, however, are in good agreement within the errors (Ojha et al. 1996, Robin et al. 1996, Buser et al 1998). Table 3 shows that the value of $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ predicted by the post-thin model is consistent with most of the previous results from the literature, while that predicted by the pre-thin model is larger than the previous data from studies of star counts. To illustrate this quantitatively, Figure 2 shows $`\chi ^2`$ as a function of $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ within a reasonable range for both the pre-thin model (long-dashed curve) and the post-thin model (full curve), with the other free parameters fixed at $`\tau _t`$=1.0 Gyr, $`t_{max}`$=1.0 Gyr and $`\tau _d`$=4.0 Gyr. The value of $`\chi ^2`$ is very sensitive to $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$, which suggests that the thick disk has a great influence on Galactic chemical evolution. In Figure 2, three short horizontal lines indicate model confidence levels of 30$`\%`$ (dotted line), 50$`\%`$ (long-dashed line) and 70$`\%`$ (full line), respectively. The points in Figure 2 indicate the values of $`\chi ^2`$ obtained when the values of $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ from ten literature sources (Table 3) are adopted, for both the pre-thin model (open squares) and the post-thin model (open circles). Figure 2 shows that, for the post-thin model, five points have a model confidence level larger than 70$`\%`$, while only two points exceed 70$`\%`$ for the pre-thin model. This suggests that the post-thin model is better than the pre-thin model.
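Equation (17) and its inversion are simple enough to check numerically; in the sketch below the scale heights and surface densities are those quoted in the text.

```python
# Sketch of the thick-to-thin density ratio, eq. (17), and its inversion,
# with h_t = 1000 pc, h_d = 300 pc and Sigma_tot = 55 M_sun/pc^2 as in the text.

def density_ratio(sigma_thick, sigma_tot=55.0, h_t=1000.0, h_d=300.0):
    return sigma_thick / (sigma_tot - sigma_thick) * h_d / h_t

def sigma_thick_from_D(D, sigma_tot=55.0, h_t=1000.0, h_d=300.0):
    # invert eq. (17) to recover Sigma_thick from a published ratio D
    x = D * h_t / h_d
    return sigma_tot * x / (1 + x)

print(density_ratio(10.0))       # post-thin best fit, ~0.067
print(density_ratio(15.0))       # pre-thin best fit, ~0.113
print(sigma_thick_from_D(0.05))  # e.g. an illustrative literature value of D
```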
Further evidence favouring the post-thin scenario for the formation of the thick disk comes from the following facts. First, the thick disk is kinematically distinct from the thin disk and shows no kinematic gradients (Ojha et al 1994 a,b). Second, Gilmore et al (1995) studied the metallicity distribution of thick disk stars up to about $`3kpc`$ from the Galactic plane and found that thick disk stars show no vertical abundance gradient. This argues against dissipational settling as the formation process of the thick disk (Freeman 1996). ### 4.5 Discussions Based on the above discussion, we treat the post-thin model with $`\tau _t`$=1.0 Gyr, $`t_{max}`$=1.0 Gyr, $`\tau _d`$=4.0 Gyr and $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$= 10.0 $`M_{}pc^{-2}`$ as the best-fit model. Figure 3 presents the comparison between our best-fit model predictions for the AMR and the observations. The full line shows our model predictions and the points are the observational data taken from Edvardsson et al (1993). Figure 3 shows that, at the beginning of the formation of the thick disk ($`t=t_{max}=1.0Gyr`$), the iron abundance of the ISM decreases slightly owing to the increasing infall rate of primordial gas. After that phase, the model predicts that the metallicity increases smoothly with time. The overall tendency of this relation is consistent with the mean observations, but the present model cannot reproduce the large observed scatter. Nordstrom et al. (1997) discussed in detail the main hypotheses for the origin of this scatter, such as star formation in an inhomogeneous gaseous medium, orbital diffusion in a homogeneous galaxy, and mergers or accretion events. However, a physical mechanism that reproduces the observed scatter in the AMR without violating other observational constraints has not yet been identified. Figure 4 compares the behavior of \[O/Fe\] vs. \[Fe/H\] predicted by the best-fit model (full line) with the observations. The observed data are taken from Edvardsson et al (1993) (asterisks) and Barbuy (1988) (full squares). Our model predicts a small loop at \[Fe/H\] = -1.6; a similar behavior is predicted by the best-fit model of CMG97. Figure 4 shows that our model prediction is in good agreement with the observations, which suggests that the relative stellar yields of oxygen and iron adopted here are reasonable. Contrary to the case of the solar neighbourhood, the available observations of the Milky Way disk offer information mainly about its current status, not its past history. There is therefore much more freedom in constructing a model. Up to now, chemical evolution models of the Galactic disk have considered the disk as a system of independent rings. This oversimplification generally ignores the possibility of radial inflows produced in gaseous disks, e.g. by viscosity or by the infall of gas with a specific angular momentum different from that of the underlying disk (Prantzos & Silk 1998). Fortunately, a radial variation of the infall time-scale may play a similar role. In our best-fit model, the infall timescale of the thin disk $`\tau _d`$ is assumed to be radially dependent, taking lower values in the inner disk ($`\tau _d`$=2 Gyr at $`r`$=2 kpc) and larger ones in the outer disk ($`\tau _d`$=4 Gyr at $`r`$=8.5 kpc). In Figure 5, we compare the radial distribution of oxygen abundances predicted by our best-fit model (full line) with the observations. The observed data for HII regions are from Shaver et al (1983) (asterisks), Fich & Silkey (1991) (open circles) and Vilchez & Esteban (1996) (open squares). The observations of early B-type main-sequence stars are taken from Smartt et al (1997) (full squares) and Gummersbach et al (1998) (crosses).
The figure shows that the predicted oxygen abundances are larger than most of the observations. Moreover, our model predicts that the abundance gradient in the inner region (-0.01 dex/kpc for $`r\le 8.5`$ kpc) is flatter than that in the outer region (-0.07 dex/kpc for $`r\ge 8.5`$ kpc). This seems to contradict the observations of HII regions, which suggest a flatter oxygen abundance gradient in the outer region. This disagreement is also found in most chemical evolution models of the Galactic disk (see brief reviews in Tosi 1996) and is probably due to the simplifications of the present models. Samland et al. (1997) developed a chemodynamical evolution model for the Galactic disk and presented a better fit to the observed variation of the oxygen abundance across the whole disk. Therefore, this disagreement could probably be resolved by taking into account the influence of dynamical evolution on the chemical evolution and the effect of gas heating on the star formation rate. Figure 6 presents the comparison between the predictions of our best-fit model for the radial profile of the present SFR (full line) and the observations. The observed data are normalized to the SFR in the solar neighborhood. Data are based on several tracers of star formation: Lyman continuum photons from HII regions (full squares, from Gusten & Mezger 1983); pulsars (open circles, from Lyne et al 1985); and supernova remnants (crosses, from Guibert et al 1978). The good agreement between our model prediction and the observations indicates that our model predicts a reasonable star formation history for the Galactic disk. Finally, in Figure 7, we present the comparison of the radial distribution of the present gas surface density between our best-fit model predictions (full line) and the observations (dashed lines). The two dashed lines are reproduced from Prantzos & Aubert (1995). The lower dashed line is the sum of atomic and molecular hydrogen given in Dame (1993), corrected for a helium contribution of 30$`\%`$. The upper one is obtained by adopting a gas surface density in the solar neighbourhood of 16 $`M_{}pc^{-2}`$ and scaling the curve of Dame (1993) accordingly (Prantzos & Aubert 1995). Given the uncertainties in the observational data, the model is also in good agreement with the observed profile. ## 5 Conclusions In this work, we introduce a two-component model for the chemical evolution of the Galactic disk, which assumes that the formation of the thick and thin disks occurred in two main accretion episodes. The infall rate is assumed to be Gaussian in time. Both the pre-thin and post-thin scenarios for the formation of the Galactic disk are considered. The local surface density of the thick disk at the present time is chosen as one of the free parameters. Following Prantzos & Silk (1998), we also assume that the SFR is not only proportional to the $`n`$-th power of the gas surface density, but also inversely proportional to the Galactocentric distance. Comparing model predictions with the new metallicity distribution in the solar neighbourhood (Hou et al 1998), we use the $`\chi ^2`$-test to derive the best fits and to compare the adequacy of four different models: the closed-box, one-component, pre-thin and post-thin models. Moreover, comparisons between the predictions of our best-fit model and the main observational constraints are presented. The main results can be summarized as follows. 1. Our results suggest that the post-thin model for the formation of the Galactic disk should be preferred.
This is consistent with the observational evidence that the thick disk is chemically and kinematically distinct from the thin disk and shows no vertical abundance gradient. 2. The goodness-of-fit of the model predictions for the metallicity distribution to the observations, $`\chi ^2`$, is very sensitive to the local surface density of the thick disk at the present time. This suggests that it is necessary to treat the thick and thin disks differently if one wants to investigate Galactic chemical evolution in detail. 3. The post-thin model predicts $`\mathrm{\Sigma }_{thick}(r_{},t_g)`$ = 10.0 $`M_{}pc^{-2}`$. The resulting space density ratio of the thick to the thin disk is consistent with the previous data from recent studies of star counts. However, the pre-thin model predicts a larger value of the local thick disk density. 4. The predictions of our best-fit model are in good agreement not only with the observed data in the solar neighborhood, but also with the main observational features of the Galactic disk. However, contrary to the observations of HII regions, our model predicts that the oxygen abundance gradient in the outer region is steeper than that in the inner region. ###### Acknowledgements. The authors wish to thank Prof. Peng Qiuhe of Nanjing University for helpful discussions. This research was supported in part by a grant from the National Natural Science Foundation of China and in part by a grant from the Young Lab of Shanghai Observatory.
# TOPOLOGICAL CENSORSHIP KRISTIN SCHLEICH and DONALD M. WITT Department of Physics, University of British Columbia Vancouver, BC V6T 1Z1, Canada ABSTRACT Classically, all topologies are allowed as solutions to the Einstein equations. However, one does not observe any topological structures on medium range distance scales, that is, scales that are smaller than the size of the observed universe but larger than the microscopic scales at which quantum gravity becomes important. Recently, Friedman, Schleich and Witt have proven that there is topological censorship on these medium range distance scales: the Einstein equations, locally positive energy, and local predictability of physics imply that any medium distance scale topological structures cannot be seen. More precisely, we show that the topology of physically reasonable isolated systems is shrouded from distant observers; in other words, there is a topological censorship principle. 1. Introduction An interesting observed fact about our universe is that its spatial topology is trivial, that is, it is continuously deformable to a region of three dimensional Euclidean space $`𝐑^3`$. This is true on a remarkably large range of distance scales, ranging from fermis, the scale of interactions of high energy particles, to megaparsecs, the scale of intergalactic distances. The theory of general relativity does not select a trivial topology; all 3-manifolds occur as the spatial topology of solutions to the Einstein equations.<sup>1</sup> Thus general relativity allows spacetimes that contain arbitrarily large numbers of wormholes and other complicated topological structures. In fact, there even exist inflationary universes with an arbitrarily wide range of topology.<sup>2</sup> Moreover, the dynamics of general relativity do not allow the topology to change. Why don’t we see topological structures in our spacetime? At first the answer to this question might seem obvious; we see no topological structures because they are not there. However, there is a more interesting possibility; such topological structures may indeed be present in our universe, but cannot be observed. This second possibility was more formally stated as the topological censorship conjecture by Friedman and Witt: The topology of any physically reasonable isolated system is shrouded. That is, a spacetime may contain isolated topological structures such as wormholes; however, there is no method by which an experimenter can determine that a spacetime contains such non-Euclidean topology and report this result to a distant observer. Recently Friedman, Schleich and Witt have proven this conjecture.<sup>3</sup> Today, instead of emphasizing the technical details used in proving the result, I would like to give an informal and intuitive sketch of how this proof works and mention some of its consequences for general relativity and cosmology. 2. Background One of the fundamental tenets of classical physics is physical predictability; that is, given knowledge of the initial condition of the system, the laws of physics allow one to determine the behavior of the system at all future (and past) times. For example, the motion of a particle in a potential is physically predictable; given the position and velocity of the particle at one instant of time, the equations of motion, when solved, determine the trajectory of the particle for all time.
Similarly, a spacetime is physically predictable if information about its geometry and matter sources at one instant of time allows the determination of its spacetime geometry for all times via the Einstein equations. For those familiar with general relativity, physically predictable spacetimes are termed globally hyperbolic spacetimes.<sup>4</sup> Such a definition is needed because not all solutions of the Einstein equations are physically predictable. For example, solutions that contain closed timelike curves or naked singularities are not physically predictable; however, such solutions exhibit properties that patently violate the laws of physics. Thus it is the properties of physically predictable spacetimes that are of great interest, as our universe is of this type. Topology describes the properties of a space that are independent of the metric, that is, of the distances between points in the space. A familiar two dimensional example is given by the torus. Observe that there is one hole in this space; moreover, this hole remains under any continuous deformation of the surface, that is, under any continuous twisting or stretching of the surface. A sphere has no hole; therefore, there is no continuous deformation of the torus that takes it into a sphere. This observation is formally stated in terms of topology: a sphere and a torus have different topology. This difference in topology can be seen in the properties of curves on the torus and the sphere: one can find closed curves on the torus that are noncontractible, that is, they cannot be continuously shrunk to a point. Such curves are those that loop about the hole in the torus. However, all closed curves on the sphere are contractible. In fact, the properties of curves on the two spaces rigorously characterize their topology. How would an experimenter go about determining the topology of spacetime? An important fact to remember is that experimenters, like all other massive objects, travel on timelike paths through their spacetime. Moreover, any measurement that an experimenter performs relies on information that also travels on a timelike or null path. Note especially that the path need not be everywhere timelike or everywhere null; for example, information could be carried from one point to another first by a massive particle travelling a timelike path that then releases a photon travelling a null path. Thus a determination of the topology of spacetime must be carried out using information that travels a path that is either timelike or null at all points along its course. Such paths are referred to as causal curves.<sup>*</sup> <sup>*</sup>An observant reader may worry that our universe is not two dimensional, but four dimensional; however, properties of curves in it still characterize its topology. Our universe is a physically predictable spacetime and thus its topology is completely carried by the topology of the three dimensional spatial hypersurface; physical predictability implies the topology at one time must determine it for all times. The topology of a three dimensional hypersurface can be characterized by finding noncontractible curves, as in the two dimensional case. Therefore, by laying out causal curves in the four dimensional spacetime, an experimenter can actively probe its topology. 1. A two dimensional spacetime with nontrivial topology. An observer can detect this topology through nontrivial timelike curves such as c as shown on the left, or through nontrivial null curves, as shown on the right.
The observer cannot detect the topology using a spatial curve such as s, as information cannot travel along such a curve. For example, an experimenter living in the two dimensional spacetime illustrated in figure 1 can determine that the spatial topology is a circle by discovering that there are noncontractible causal curves in the spacetime. The experimenter can find such curves, for example, by laying out a rope while walking around the space counterclockwise, as illustrated by curve $`c`$ in figure 1. The resulting loop of rope clearly will not be contractible. Note that the experimenter cannot use a noncontractible spacelike curve such as $`s`$ to determine the topology, as it is physically impossible to lay out such a curve. It is especially important to note that the experimenter can probe the topology using distant objects that emit information, such as massive particles or light, travelling along causal curves. For example, the experimenter may be able to use light travelling two distinct paths from a star to probe the topology, as also illustrated in figure 1. It is apparent that the causal structure of spacetime, that is, which points of the spacetime can be connected to each other by causal curves, is important to the topological censorship theorem; after all, the question at hand is whether or not causal curves can thread the topology and come out at any time, even infinitely far in the future. Therefore, a concrete way of talking about such curves is needed. This terminology, and a very useful pictorial way of representing causal structure, Penrose diagrams, are based on the causal structure of Minkowski space. (Ref. 4, p. 118 provides a clear discussion of terminology used in causal structure and Penrose diagrams, as well as a comprehensive discussion of many of the other concepts used in the theorem; references to the original literature can also be found in this text.) 2. The causal structure of Minkowski spacetime. On the left, Minkowski spacetime is represented in spherical coordinates as a two dimensional diagram by suppressing the angular coordinates. On the right is the Penrose diagram for Minkowski space. Points once off at infinite distance such as future and past timelike infinity and future and past null infinity now appear at finite distance. An illustration of Minkowski spacetime with metric $$ds^2=-dt^2+dr^2+r^2(d\theta ^2+\text{sin}^2\theta d\varphi ^2)$$ $`(1)`$ is given in the left side of figure 2. Note that there is no concrete way in this diagram to represent where the timelike geodesics or the null geodesics end up in the infinite future. For example, it is not clear from this diagram that null geodesics go to a different infinity than timelike geodesics. The reason is obvious; the coordinates $`r`$ and $`t`$ have infinite ranges; therefore there is no way to draw where infinity is. However, this is not a problem with studying causal structure; rather it is a problem with the choice of coordinates. But physics is coordinate invariant, so one can instead write the metric of Minkowski space in a set of coordinates such that these infinite points occur at finite coordinate values.
Defining $`t^{\prime }`$ and $`r^{\prime }`$ by $$\text{2}t=\text{tan}(\frac{t^{\prime }+r^{\prime }}{2})+\text{tan}(\frac{t^{\prime }-r^{\prime }}{2}),\text{2}r=\text{tan}(\frac{t^{\prime }+r^{\prime }}{2})-\text{tan}(\frac{t^{\prime }-r^{\prime }}{2})$$ the Minkowski metric becomes $$ds^2=\mathrm{\Omega }^2(t^{\prime },r^{\prime })\left(-dt^{\prime 2}+dr^{\prime 2}+\mathrm{sin}^2r^{\prime }(d\theta ^2+\text{sin}^2\theta d\varphi ^2)\right)$$ $`(2)`$ where $`\mathrm{\Omega }(t^{\prime },r^{\prime })=\frac{1}{2}\text{sec}(\frac{t^{\prime }+r^{\prime }}{2})\text{sec}(\frac{t^{\prime }-r^{\prime }}{2})`$. The new coordinates have finite ranges, $`r^{\prime }\ge 0`$, $`-\pi <t^{\prime }+r^{\prime }<\pi `$, and $`-\pi <t^{\prime }-r^{\prime }<\pi `$; thus the full spacetime can now be represented in a finite diagram. Finally, one uses the fact that two spacetimes related by a conformal transformation have the same causal structure to further simplify the discussion; this fact means that one need not represent the factor of $`\mathrm{\Omega }^2`$ in the above metric to concretely illustrate the causal structure. The resulting diagram, the Penrose diagram of Minkowski spacetime, is given in the right side of figure 2. By changing coordinates, the infinite future and past are now clearly described. Timelike geodesics, for example the path travelled by an observer who remains at a fixed radial coordinate position $`r`$, begin at past timelike infinity, $`i^{-}`$, and end at future timelike infinity, $`i^+`$. Similarly, null geodesics, for example the path travelled by a photon travelling radially inward to the coordinate origin, begin at past null infinity, $`𝒥^{-}`$, and end at future null infinity, $`𝒥^{+}`$. It is now clear that the future infinities of null geodesics and timelike geodesics are distinct. Observe that not all timelike curves end at $`i^+`$; curves corresponding to accelerated timelike observers can reach $`𝒥^{+}`$. Finally, radially directed photons travel along paths at 45 degree angles in this diagram; thus any timelike or null curve leaving a point in this spacetime must have tangent lying between the inward directed radial null geodesic and the outward directed radial null geodesic. Therefore, this Penrose diagram neatly encapsulates the information about the causal structure of Minkowski spacetime.
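One can also verify numerically that the coordinate change of eq. (2) brings infinity to finite coordinate values; the following sketch, with an arbitrarily chosen null direction, shows $`t+r`$ diverging while $`t-r`$ stays fixed as the boundary $`t^{\prime }+r^{\prime }\rightarrow \pi `$ is approached.

```python
import numpy as np

# Numerical sketch of the compactification of eq. (2): holding t' - r' fixed
# while t' + r' -> pi follows an outgoing null direction toward future null
# infinity, where t + r diverges but t - r stays finite.

def minkowski_coords(tp, rp):
    u = np.tan((tp + rp) / 2.0)   # = t + r
    v = np.tan((tp - rp) / 2.0)   # = t - r
    return (u + v) / 2.0, (u - v) / 2.0

for eps in (0.5, 0.1, 0.01):
    tp = (np.pi - eps - np.pi / 2) / 2.0   # t' - r' = -pi/2 held fixed
    rp = (np.pi - eps + np.pi / 2) / 2.0   # t' + r' = pi - eps -> pi
    t, r = minkowski_coords(tp, rp)
    print(t + r, t - r)                    # t + r grows without bound; t - r = -1
```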
It is clear that the technique used to concretely discuss the causal structure of Minkowski spacetime can be applied to illustrate the causal structure of other spacetimes. Of course, spacetimes with different metrics and topology will not have Penrose diagrams identical to figure 2. However, spacetimes that resemble or approach Minkowski space in regions of the spacetime will have similar causal structure in those regions. In particular, spacetimes containing isolated topological structures will have similar causal structure far away from the topology. An isolated topological structure is, as implied by its name, one that can be isolated from the rest of the spacetime. More precisely, one can place a sphere around the topology at a given instant in time and "cut out" the topology by excising this sphere and everything inside it from the spatial hypersurface. When one does so, the remaining space has the topology of three dimensional Euclidean space minus a ball. Note that the sphere surrounding the topology might be very large and contain many topological structures; for example, there might be a large number of wormholes inside the sphere. Additionally, the metric of the spacetime outside the evolution of the excised ball approaches Minkowski spacetime as one goes infinitely far away; that is, one can find a set of coordinates such that as one goes to infinite spatial distance, the metric is Eq. (1) plus $`1/r`$ correction terms. A spacetime that satisfies these conditions is termed an asymptotically flat spacetime. The Schwarzschild solution is a canonical example of an asymptotically flat spacetime, though it is obvious that there are myriad examples of such spacetimes, including those with isolated topological structures. It is important for understanding the theorem to observe that there may be more than one asymptotically flat region for a spacetime containing isolated topological structures; the topological structures can potentially connect several different copies of $`𝐑^3`$, each of which admits a metric approaching that of Minkowski space. (A spatial analog of this feature is provided by figure 4, although it is included in this paper for other purposes; it contains two spatially asymptotically flat regions connected by a throat.) Additionally, note that as the spacetime approaches Minkowski spacetime in each asymptotically flat region, intuitively its causal structure also approaches that of Minkowski spacetime. Indeed this is the case; the Penrose diagram for an asymptotically flat spacetime will contain one or more regions which have the same causal structure as Minkowski spacetime. 3. The Topological Censorship Theorem The Einstein equations can be coupled to a wide range of matter, but certain general properties characterize classical physical matter, that is, sources that are not quantum in nature. These properties are called energy conditions. The energy condition used in the proof of the topological censorship theorem is the null energy condition.<sup>6</sup> Physically, this condition states that an observer travelling along either a timelike or null curve measures the energy in their local frame to be positive at any point in the spacetime. This condition is satisfied by all classical sources of matter found in nature, such as gas, dust, radiation, and electromagnetic fields, as well as idealized sources such as classical scalar fields. It is implied by each of the other classical energy conditions: the weak energy condition, the strong energy condition and the dominant energy condition. Therefore, the null energy condition is a very reasonable and physical restriction. Given the above, it is now possible to state the theorem: Theorem. If an asymptotically flat globally hyperbolic spacetime satisfies the null energy condition, then every causal curve from past null infinity to future null infinity is deformable to $`\gamma _0`$. The curve $`\gamma _0`$ is a representative causal curve with past endpoint at $`𝒥^{-}`$ and future endpoint at $`𝒥^{+}`$ that lies in the asymptotically flat region; i.e. it is a curve that does not pass through any of the topology of the spacetime. 3. A Penrose diagram illustrating the statement of the theorem. The theorem proves that any causal curve traversing the topology in the shaded region cannot reach future null infinity. Thus the curve $`\gamma `$ in this diagram either does not go through the topology or is spacelike at points along its course. Figure 3 is a Penrose diagram of a spacetime used to illustrate the theorem; for convenience this spacetime is assumed to have only one asymptotic region.
Note especially that the causal properties of the shaded region, that is, the region containing topology, are not faithfully illustrated; only those of the asymptotically flat region, that with the causal structure of the radially distant part of Minkowski spacetime, have been correctly diagrammed. The theorem states that any causal curve $`\gamma `$ that can reach an observer in the asymptotically flat region is deformable to the trivial curve $`\gamma _0`$. This means that $`\gamma `$ cannot traverse any topological structure, because if it did, it could not be deformed into the trivial one, as it would hook on the topology. It follows that all causal curves that enter a topological structure cannot come out again; therefore there is no way to actively probe the topology. The proof of the theorem is based on a lemma that applies to simply connected spacetimes. A simply connected spacetime is one for which all closed curves are contractible. For example, a sphere and a plane are simply connected, but a torus is not. Thus simply connected spacetimes have very special topology. Lemma. Suppose one has an asymptotically flat simply connected spacetime that satisfies the null energy condition. Then no 2-surface $`\tau `$ that is outer trapped with respect to $`𝒥`$ can be seen from $`𝒥^{+}`$. A surface is said to be outer trapped if radially outward directed null rays are converging.<sup>7</sup> Note that radially outward is defined by the direction one goes to reach the asymptotically flat region of the observer. In order to understand the motivation behind this definition, consider the behavior of light emitting surfaces in both Minkowski and curved spacetime. First take a sphere at one instant of time in Minkowski space and release radially outward directed photons from its surface. At some small interval of time later, say one second, consider the surface formed by these photons. As the spacetime geometry is flat, this photon sphere will have larger area than that of the original sphere. This result seems obvious; however, if one works in a curved spacetime the result can be very different and depends on the curvature of the spacetime in the region of the sphere. An example is illustrated in figure 4; the surface $`\tau `$ in this particular spacetime has a photon sphere that has larger area at later time, very similar to the situation in Minkowski space. However, the surface $`\tau ^{\prime }`$ has a photon sphere with smaller area at later time; the curvature of the spacetime in the neighborhood of $`\tau ^{\prime }`$ forces the light rays to converge even though they begin by heading in the radially outward direction. Thus this surface is outer trapped. Physically, what is happening is that the curvature of spacetime is so strong that even radially outward directed light rays are being forced inward. 4. Illustration of an outer trapped surface. The picture represents a three dimensional spatial slice of a spacetime; one dimension is suppressed. $`\tau `$ and $`\tau ^{\prime }`$ are spatial spheres in the hypersurface. $`\tau `$ is not outer trapped; $`\tau ^{\prime }`$ is outer trapped. Given this physical picture of an outer trapped surface, one intuitively gathers that no information from it or interior to it can escape out to asymptotic infinity to be detected by the observer; no signal travels faster than light, and radially outward directed light is travelling outward at the maximal rate.
Therefore, if radially directed light is forced spatially inward by the curvature, all other signals, radial or nonradial, will be as well. Thus you cannot see inside a trapped surface. This is precisely what the Lemma proves rigorously. I will not go into the proof here, but note that it uses standard techniques from the singularity theorems.<sup>5</sup> This lemma is for a simply connected spacetime, but the theorem applies to all asymptotically flat spacetimes. The connection is made through the concept of a universal cover. A universal cover is a spacetime that is related to the original spacetime through properties of curves; curves that are noncontractible in the original space are unwrapped to become contractible curves in the universal cover. Again it is easiest to illustrate this concept through an example. An illustration of the cylinder spacetime and its universal cover is given in Figure 5. In the original spacetime, $`b`$ is a trivial curve; $`c`$, $`s`$ and $`d`$ are all nontrivial curves. Its universal cover is a plane; one can wrap the plane around the cylinder to cover it an infinite number of times. Each point in the cylinder is covered by an infinite number of points in the plane; for example, $`q`$ corresponds to an infinite number of points in the plane as indicated. Now noncontractible curves in the cylinder spacetime are identified with contractible curves connecting different copies of the starting point $`q`$ to $`p`$ in the universal covering spacetime. For example, the curve $`c`$ is identified with a curve attaching one copy of $`q`$ to $`p`$, and the curve $`d`$ is identified with a curve attaching a different copy of $`q`$ to $`p`$. Clearly, by construction the universal cover is a simply connected spacetime. 5. An example of a universal covering space. On the right is the universal cover of the spacetime illustrated on the left. All noncontractible curves in the spacetime on the left are unwrapped to contractible curves in the universal cover. Although we have illustrated the definition of a universal cover in a particular example, certain features of this example are generic. Of particular importance is the fact that noncontractible curves in the original spacetime correspond to contractible curves connecting copies of the original starting and ending points in the covering spacetime. This implies that the covering spacetime of an asymptotically flat spacetime with nontrivial topology always has multiple asymptotically flat regions even if the original spacetime had only one. Most importantly, any curve in the original spacetime that traverses the topology necessarily connects two distinct asymptotic regions in the covering spacetime. 6. An illustration of the proof of the theorem by contradiction. If the curve $`\gamma `$ of figure 3 traverses a topological structure, then it corresponds to a curve $`\mathrm{\Gamma }`$ in the universal cover whose beginning is in a different copy of the asymptotic region than that containing its end and the trivial curve $`\gamma _0`$. We now have all the tools to discuss the proof of the theorem. The proof is by contradiction. Suppose the theorem is false. Then there is a causal curve $`\gamma `$ from $`𝒥^{-}`$ to $`𝒥^{+}`$ that passes through the topology and thus is not deformable to the trivial curve $`\gamma _0`$. (Recall that figure 3 provides an example of such a spacetime.)
Now consider the universal covering space of the spacetime. (A schematic Penrose diagram of the universal covering space of figure 3 is given in figure 6.) As $`\gamma `$ passes through the topology, its corresponding curve $`\mathrm{\Gamma }`$ in the covering space must begin in a different asymptotic region than that containing $`\gamma _0`$. In this asymptotic region, the spacetime is becoming asymptotically flat; therefore, the curve $`\mathrm{\Gamma }`$ intersects arbitrarily large spatial spheres as it approaches the infinite past. Null curves that can reach $`𝒥^{+}`$ must be directed inward from these large spheres, as indicated in the inset of figure 6. Therefore photon spheres released from these surfaces corresponding to these null curves are shrinking in area. Thus these large spheres near the origin of the curve $`\mathrm{\Gamma }`$ are outer trapped with respect to an observer at $`𝒥^{+}`$. However, the assumption that $`\mathrm{\Gamma }`$ reaches $`𝒥^{+}`$ means that an observer can see these spheres at $`𝒥^{+}`$. But this conclusion contradicts the Lemma! Therefore, our assumption that $`\gamma `$ is a causal curve reaching $`𝒥^{+}`$ must be wrong. Thus there is no causal curve that passes through the topology and reaches future null infinity. As any $`\gamma `$ not deformable to $`\gamma _0`$ must correspond to a curve that goes to another asymptotic region in the universal covering spacetime, we conclude that all curves reaching future null infinity are deformable to $`\gamma _0`$ if they are causal. Q.E.D. 4. Discussion The consequences of the topological censorship theorem can be seen by considering a physically predictable spacetime with non-Euclidean topology, such as a handle attached to a plane. Note that this spacetime has one asymptotic region. Its universal cover will be a spacetime with multiple asymptotic regions. Now suppose that an experimenter wishes to probe the topology of this spacetime and communicate the results of the measurements to a distant observer near $`𝒥^{+}`$; note that this distant observer could even be the experimenter herself if she sends signals or uses signals from distant objects in the spacetime. In order to detect the handle, the path of some signal must traverse the handle and exit to $`𝒥^{+}`$; but this is forbidden by the theorem. Only causal paths that do not loop through the handle can communicate with $`𝒥^{+}`$, and such causal curves do not detect the existence of non-Euclidean topology. Thus general relativity prevents one from actively probing the topology of spacetime. So if the topology of the handle cannot be detected, what will the experimenter see? Note that curvature of spacetime such as that associated with a handle acts like a mass when viewed from a distant region. Moreover, one cannot probe the properties of this mass; the topology appears to be behind a horizon to the experimenter. Thus isolated topological structures appear to be black holes to outside observers, indistinguishable classically from black holes formed by the collapse of matter. Therefore, if our universe were full of isolated topological structures, they would appear to us as black holes. The theorem was proven for asymptotically flat spacetimes; however, in cosmology one would like to apply it to spacetimes that do not have the precise asymptotic behavior of the metric described in the theorem. This is no difficulty for the case where the scale of the isolated topological structure is small, for example when it is the size of the solar system (or smaller!)
or even of a galactic core. For these cases our universe is well approximated by an asymptotically flat spacetime. More generally, note that the key use of the asymptotic behavior of the metric is in showing that arbitrarily large spatial spheres are outer trapped in the region from which $`\mathrm{\Gamma }`$ originates. Intuitively, one expects this behavior to occur in spacetimes with more general behavior in the asymptotic regions, and that the theorem could be generalized to spacetimes whose metrics allow arbitrarily large spatial spheres. Indeed this is the case. Therefore, the topological censorship theorem applies quite generally to cases of spacetime relevant to cosmology. Finally, although the sketch of the topological censorship theorem uses the null energy condition, one can show that it can actually be rigorously proven for a weaker energy condition, the averaged null energy condition.<sup>8</sup> Physically, this energy condition states that the energy can be negative in small regions so long as it is positive when averaged along complete null geodesics. This implies that the topological censorship theorem can be applied not only to spacetimes with classical matter but may also apply to spacetimes containing certain types of quantum matter. 5. References 1. D. M. Witt, Phys. Rev. Lett. 57 (1986) 1386 and to be published. 2. J. Morrow-Jones and D. M. Witt, Phys. Rev. D 48 (1993) 2516. 3. J. L. Friedman, K. Schleich and D. M. Witt, Phys. Rev. Lett. 71 (1993) 1486. 4. S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Spacetime (Cambridge University Press, Cambridge, 1973) p. 206. 5. ibid., ch. 8. 6. ibid., p. 95. 7. ibid., p. 2. 8. A. Borde, Class. Quantum Grav. 4 (1987) 343.
# Quantum entanglement and classical communication through a depolarising channel ## 1 Introduction Entanglement is probably the most important resource in quantum information processing: quantum teleportation , quantum cryptography , quantum computation , quantum error correction are only some examples of its ubiquitous crucial role. In all the instances mentioned above its main use is transmitting, processing, and correcting quantum information. In this paper we would like to analyse whether it is also useful in one particular scenario of transmission of classical information along a noisy channel: the depolarising channel. It has been proved that, when multiple uses of the channel are allowed, entangled states can maximise distinguishability in a particular case of noisy channel: the two-Pauli channel . It is however common belief that, due to their fragility in the presence of noise, entangled states cannot improve the transmission of classical information when used as signal states, at least when the noise is isotropic - in the sense that will be explained in the following sections - as in the case of the depolarising channel. Although there is numerical evidence in this direction, no explicit proof has been produced so far, at least to the best of the authors’ knowledge. The aim of this paper is precisely to prove that the mutual information cannot be increased when signals of entangled states of two qubits are used with a memoryless depolarising channel. ## 2 Description of the Channel In the following we will consider a memoryless quantum channel acting on individual qubits. The channel is described by “action” operators $`A_k`$ satisfying $`\sum _kA_k^{\dagger }A_k=1𝐥`$ such that if we send through the channel a qubit in a state described by the density operator $`\pi `$, the output qubit state is given by the map $$\pi \rightarrow \mathrm{\Phi }(\pi )=\underset{k}{\sum }A_k\pi A_k^{\dagger }$$ (1) By definition a channel is memoryless when its action on arbitrary signals $`\pi _i`$, consisting of $`n`$ qubits (including entangled ones), is given by $$\mathrm{\Phi }(\pi _i)=\underset{k_1\mathrm{\dots }k_n}{\sum }(A_{k_n}\otimes \mathrm{\dots }\otimes A_{k_1})\pi _i(A_{k_1}^{\dagger }\otimes \mathrm{\dots }\otimes A_{k_n}^{\dagger })$$ (2) For the depolarising channel, on which we will concentrate our attention, the action operators take the following simple form $$A_0=\frac{1}{2}\sqrt{1+3\eta }\mathrm{\hspace{0.33em}}1𝐥,A_{x,y,z}=\frac{1}{2}\sqrt{1-\eta }\mathrm{\hspace{0.33em}}\sigma _{x,y,z}.$$ (3) Here $`\sigma _{x,y,z}`$ are the Pauli matrices, which are transformed under the action of the channel as $$\underset{k}{\sum }A_k\sigma _lA_k^{\dagger }=\eta \sigma _l.$$ (4) As we can see, the depolarising channel can be specified by the parameter $`\eta `$, whose meaning will be clear in the following section, or equivalently by the error probability $`p_e=3(1-\eta )/4`$. This gives us a complete description of the channel for any input state composed of an arbitrary number of qubits.
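As a concrete illustration, the following minimal sketch implements the single-qubit channel of eqs. (1) and (3) and checks the shrinking property of eq. (4); the value $`\eta =0.7`$ is an arbitrary example.

```python
import numpy as np

# Minimal sketch of the single-qubit depolarising channel, eqs. (1) and (3).

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kraus_ops(eta):
    return [0.5 * np.sqrt(1 + 3 * eta) * I2] + \
           [0.5 * np.sqrt(1 - eta) * s for s in (sx, sy, sz)]

def depolarise(rho, eta):
    return sum(A @ rho @ A.conj().T for A in kraus_ops(eta))

eta = 0.7
print(np.allclose(sum(A.conj().T @ A for A in kraus_ops(eta)), I2))  # completeness
print(np.allclose(depolarise(sx, eta), eta * sx))                    # eq. (4)
```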
For this more general strategy it has been shown that the amount of reliable information which can be transmitted per use of the channel is given by $$C_n=\frac{1}{n}\underset{\mathcal{E}}{\mathrm{sup}}\,I_n(\mathcal{E}),$$ (5) where $`\mathcal{E}=\{p_i,\pi _i\}`$ with $`p_i\geq 0`$, $`\sum_i p_i=1`$ is the input ensemble of states $`\pi _i`$ of $`n`$ – generally entangled – qubits and $`I_n(\mathcal{E})`$ is the mutual information $$I_n(\mathcal{E})=S(\rho )-\underset{i}{}p_iS(\rho _i),$$ (6) where the index $`n`$ stands for the number of uses of the channel. Here $$S(\chi )=-\text{tr}(\chi \mathrm{log}\chi )$$ (7) is the von Neumann entropy, $`\rho _i=\mathrm{\Phi }(\pi _i)`$ are the density matrices of the outputs and $`\rho =_ip_i\rho _i`$. Logarithms are taken to base 2. The advantage of the expression (6) is that it includes an optimisation over all possible POVMs at the output, including collective ones. Therefore no explicit maximisation procedure for the decoding at the output of the channel is needed. The interest in the possibility of using entangled states as channel input is that it cannot generally be excluded that $`I_n(\mathcal{E})`$ is superadditive for entangled inputs, i.e. we might have $`I_{n+m}>I_n+I_m`$ and therefore $`C_n>C_1`$. In this scenario the classical capacity $`C`$ of the channel is defined as $$C=\underset{n\to \mathrm{\infty }}{lim}C_n.$$ (8) For the depolarising channel a lower bound on $`C`$ is given by the one-shot capacity $`C_1`$ (see ), while upper bounds are given in . In this paper we will not attempt to find the classical capacity of the depolarising channel, as this would imply analysing signals with any degree of entanglement between $`n`$ qubits and performing the limit $`n\to \mathrm{\infty }`$. We will restrict ourselves to the simplest non-trivial case, namely $`n=2`$, and we will find the maximal mutual information $`I_2(\mathcal{E})`$. The question we want to address is: is it possible to increase the mutual information by entangling the two qubits, i.e. is $`C_2>C_1`$ for the depolarising channel? To answer this question with “no” we will show that, due to the isotropy of the depolarising channel, the mutual information for orthogonal entangled states of two qubits depends only on the degree of entanglement, and in particular that it is a decreasing function of the entanglement. We anticipate that, as we will show later, the mutual information cannot be increased by an alphabet of non-orthogonal states. To simplify our analysis we express the signal states in the “Bloch vector” representation $$\pi =\frac{1}{4}\{\mathbb{1}\otimes \mathbb{1}+\mathbb{1}\otimes \underset{k}{}\lambda _k^{(2)}\sigma _k+\underset{k}{}\lambda _k^{(1)}\sigma _k\otimes \mathbb{1}+\underset{kl}{}\chi _{kl}\,\sigma _k\otimes \sigma _l\},$$ (9) where as before $`\sigma _k`$ are the Pauli operators of the two qubits. Using Eq. (4) the output of the channel can be written as $$\rho =\mathrm{\Phi }(\pi )=\frac{1}{4}\{\mathbb{1}\otimes \mathbb{1}+\eta \underset{k}{}(\mathbb{1}\otimes \lambda _k^{(2)}\sigma _k+\lambda _k^{(1)}\sigma _k\otimes \mathbb{1})+\eta ^2\underset{kl}{}\chi _{kl}\,\sigma _k\otimes \sigma _l\}.$$ (10) This shows that the output states are linked to the input signals by an isotropic shrinking of the Bloch vectors $`\lambda _k^{(1)},\lambda _k^{(2)}`$ by a factor $`\eta `$ and of the tensor $`\chi _{kl}`$ by a factor $`\eta ^2`$.
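The quantities in Eqs. (5)–(7) are straightforward to evaluate numerically. The self-contained sketch below (our illustration; the helper names `channel_2`, `entropy` and `mutual_info` are ours) computes I₂(ℰ) for an arbitrary ensemble of pure two-qubit input states sent through two uses of the depolarising channel, per Eq. (2):
```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array(m, dtype=complex) for m in
          ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def kraus_ops(eta):
    return [0.5 * np.sqrt(1 + 3 * eta) * I2] + \
           [0.5 * np.sqrt(1 - eta) * s for s in PAULIS]

def channel_2(rho, eta):
    """Memoryless two-qubit action, Eq. (2): tensor products of Kraus ops."""
    ops = kraus_ops(eta)
    out = np.zeros(rho.shape, dtype=complex)
    for A in ops:
        for B in ops:
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def entropy(rho):
    """Von Neumann entropy in bits, Eq. (7)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def mutual_info(kets, probs, eta):
    """I_2 of Eq. (6) for an ensemble of pure input states (given as kets)."""
    outs = [channel_2(np.outer(k, k.conj()), eta) for k in kets]
    rho = sum(p * r for p, r in zip(probs, outs))
    return entropy(rho) - sum(p * entropy(r) for p, r in zip(probs, outs))

eta = 0.8
basis = list(np.eye(4, dtype=complex))   # |00>, |01>, |10>, |11>
i2 = mutual_info(basis, [0.25] * 4, eta)
analytic = (1 + eta) * np.log2(1 + eta) + (1 - eta) * np.log2(1 - eta)
print(i2, analytic)  # both ~1.062 bits for eta = 0.8, cf. Eq. (20) below
```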
We will now consider the following general signal state $$|\pi _i\rangle =\mathrm{cos}\vartheta _i|00\rangle +\mathrm{sin}\vartheta _i|11\rangle .$$ (11) In the above equation $`\vartheta _i[0,\pi /4]`$ parametrises the degree of entanglement between the two qubits that carry the signal, which we will measure with the usual entropy of entanglement $`S(\tau )=-\text{tr}\{\tau \mathrm{log}\tau \}`$ , where $`\tau =\text{tr}_1\pi _i=\text{tr}_2\pi _i`$ denotes the reduced density operator of one qubit (note that $`\tau `$ is the same for both qubits, as the trace can be taken indifferently over either of the two ). For the state (11) we have $`S_i=-\mathrm{cos}^2\vartheta _i\mathrm{log}\mathrm{cos}^2\vartheta _i-\mathrm{sin}^2\vartheta _i\mathrm{log}\mathrm{sin}^2\vartheta _i`$. We point out that this is the most general choice for a signal state. This can be easily verified observing that any pure state $`|\mathrm{\Psi }\rangle `$ of two qubits can be decomposed in the Schmidt basis as follows $$|\mathrm{\Psi }\rangle =c_1|u_1\rangle |v_1\rangle +c_2|u_2\rangle |v_2\rangle ,$$ (12) where $`c_i`$ are real and $`c_1^2+c_2^2=1`$, while $`\{|u_i\rangle \}`$ and $`\{|v_i\rangle \}`$ represent orthogonal bases for the first and the second qubit, respectively. These two bases will not in general have the same orientation in the Bloch vector representation, but for our purpose this is not an impediment because of the isotropy of the depolarising channel. In other words, since there is no privileged direction for the choice of basis $`\{|\mathrm{\hspace{0.17em}0}\rangle ,|\mathrm{\hspace{0.17em}1}\rangle \}`$ \- they do not even have to be the same for each of the two qubits - we are free to let them coincide with the two respective Schmidt bases. After the action of the depolarising channel, the density operator corresponding to the input state (11) takes the form $$\rho _i=\frac{1}{4}\{\mathbb{1}\otimes \mathbb{1}+\eta \mathrm{cos}2\vartheta _i(\mathbb{1}\otimes \sigma _z+\sigma _z\otimes \mathbb{1})+\eta ^2(\sigma _z\otimes \sigma _z+\mathrm{sin}2\vartheta _i(\sigma _x\otimes \sigma _x-\sigma _y\otimes \sigma _y))\}.$$ (13) In the basis $`\{|\mathrm{\hspace{0.17em}00}\rangle ,|\mathrm{\hspace{0.17em}01}\rangle ,|\mathrm{\hspace{0.17em}10}\rangle ,|\mathrm{\hspace{0.17em}11}\rangle \}`$ the output $`\rho _i`$ reads $$\rho _i=\frac{1}{4}\left(\begin{array}{cccc}1+2\eta \mathrm{cos}2\vartheta _i+\eta ^2& 0& 0& 2\eta ^2\mathrm{sin}2\vartheta _i\\ 0& 1-\eta ^2& 0& 0\\ 0& 0& 1-\eta ^2& 0\\ 2\eta ^2\mathrm{sin}2\vartheta _i& 0& 0& 1-2\eta \mathrm{cos}2\vartheta _i+\eta ^2\end{array}\right)$$ The corresponding eigenvalues are $$\alpha _1=\alpha _2=\frac{1}{4}(1-\eta ^2)$$ (14) $$\alpha _{3,4}(\vartheta _i)=\frac{1}{4}(1+\eta ^2\pm 2\eta \sqrt{\mathrm{cos}^22\vartheta _i+\eta ^2\mathrm{sin}^22\vartheta _i}).$$ (15) They depend on the degree of entanglement, but are independent of the choice of basis. For the maximisation of the second term of the mutual information $`I_2`$, given in Eq. (6), it is sufficient to optimise independently terms of the form $$S(\rho _i)=-\underset{j=1}{\overset{4}{}}\alpha _j(\vartheta _i)\mathrm{log}\alpha _j(\vartheta _i).$$ (16) Remember that the a priori probabilities add up to one, so neither the number of input states nor their probabilities enter in the maximisation of the second term. As we will show later, the first term in (6), and therefore the mutual information, is maximised for a set of orthogonal and equally probable states.
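The eigenvalues (14)–(15) can be checked directly against numerical diagonalization of the output matrix; a short sketch (our addition):
```python
import numpy as np

def output_state(theta, eta):
    """Two-qubit channel output of Eq. (13) in the computational basis."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.25 * np.array(
        [[1 + 2 * eta * c + eta**2, 0, 0, 2 * eta**2 * s],
         [0, 1 - eta**2, 0, 0],
         [0, 0, 1 - eta**2, 0],
         [2 * eta**2 * s, 0, 0, 1 - 2 * eta * c + eta**2]])

def alphas(theta, eta):
    """Closed-form eigenvalues, Eqs. (14)-(15), sorted ascending."""
    r = eta * np.sqrt(np.cos(2 * theta)**2 + eta**2 * np.sin(2 * theta)**2)
    return sorted([0.25 * (1 - eta**2)] * 2 +
                  [0.25 * (1 + eta**2 + 2 * r), 0.25 * (1 + eta**2 - 2 * r)])

theta, eta = 0.3, 0.8
assert np.allclose(np.sort(np.linalg.eigvalsh(output_state(theta, eta))),
                   alphas(theta, eta))
print("eigenvalues of Eq. (14)-(15) verified")
```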
We can search for the extremum of (16) analytically, leading to the requirement $$\mathrm{cos}2\vartheta _i\mathrm{sin}2\vartheta _i\{\mathrm{log}(1+\eta ^2+2\eta \sqrt{\mathrm{cos}^22\vartheta _i+\eta ^2\mathrm{sin}^22\vartheta _i})-\mathrm{log}(1+\eta ^2-2\eta \sqrt{\mathrm{cos}^22\vartheta _i+\eta ^2\mathrm{sin}^22\vartheta _i})\}=0.$$ (17) For $`\eta \mathrm{}0`$ there are two solutions, namely $$\vartheta _i=0,$$ (18) which corresponds to non-entangled input states and turns out to be the maximum, and $$\vartheta _i=\pi /4,$$ (19) which means maximal entanglement of the inputs and corresponds to the minimum of $`I_2`$. The explicit expression for the maximal mutual information corresponding to four equiprobable orthogonal non-entangled input states is given by $$I_2^{max}=(1+\eta )\mathrm{log}(1+\eta )+(1-\eta )\mathrm{log}(1-\eta ).$$ (20) Notice that this is equivalent to twice the one-shot capacity. So we have shown that in this case entanglement does not help to increase the mutual information . As promised, we now justify our choice of orthogonal states by proving that the mutual information cannot be increased by using non-orthogonal alphabets. We were able to maximise the two parts of the mutual information, defined in equation (6), independently of each other for the following reasons. Note first that $`S(\rho )`$ is maximal for $`\rho =\frac{1}{4}\mathbb{1}_1\otimes \mathbb{1}_2`$. For the depolarising channel this form is achieved by any set of four orthogonal equiprobable input states. The minimum of the term $`_ip_iS(\rho _i)`$ in the mutual information, however, does not depend on the orthogonality of the states, as only the eigenvalues of the output states determine the extremum. We noticed that each $`S(\rho _i)`$ can be minimised independently for each input state $`\pi _i`$. As we have shown before, $`S(\rho _i)`$ is minimal, and reaches the same minimum value, for any choice of input states of non-entangled qubits. Note that for these reasons the same mutual information could also be reached by using a larger alphabet of non-orthogonal non-entangled states and adjusted probabilities, but it can never be improved beyond the maximum value given in (20). As an illustration, in figure 1 we show the mutual information for the following set of equally probable orthogonal states $$|\pi _1\rangle =\mathrm{cos}\vartheta |00\rangle +\mathrm{sin}\vartheta |11\rangle \quad |\pi _2\rangle =\mathrm{sin}\vartheta |00\rangle -\mathrm{cos}\vartheta |11\rangle \quad |\pi _3\rangle =\mathrm{cos}\beta |01\rangle +\mathrm{sin}\beta |10\rangle \quad |\pi _4\rangle =\mathrm{sin}\beta |01\rangle -\mathrm{cos}\beta |10\rangle $$ (21) as a function of $`\vartheta `$ and $`\beta `$ for $`\eta =0.8`$. As we can see, the mutual information is a decreasing function of the degree of entanglement. In figure 2 we report the mutual information as a function of $`\eta `$ for uncorrelated states and maximally entangled states. As proved above, we can see that uncorrelated signals lead to a higher mutual information for any channel with $`0<\eta <1`$. To summarise, we have asked whether classical communication through a depolarising channel can be improved by entangling two uses of the channel. Our analytical results show that this is not the case: the mutual information is maximised when using orthogonal, equiprobable, non-entangled states. The generalisation to more than two qubits remains an open problem . ## 4 Acknowledgements We would like to thank C.H. Bennett, C. Fuchs, R. Jozsa, G. Mahler and J. Schlienz for helpful discussions. In particular, we thank A.
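A numerical scan of the ensemble (21) confirms this picture; the self-contained sketch below (our illustration) evaluates I₂(ϑ, β) and shows the maximum at ϑ = β = 0, where it equals the value in Eq. (20):
```python
import numpy as np

I2M = np.eye(2, dtype=complex)
PAULIS = [np.array(m, dtype=complex) for m in
          ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def two_qubit_channel(rho, eta):
    ops = [0.5 * np.sqrt(1 + 3 * eta) * I2M] + \
          [0.5 * np.sqrt(1 - eta) * s for s in PAULIS]
    out = np.zeros((4, 4), dtype=complex)
    for A in ops:
        for B in ops:
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def S(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def ensemble_21(theta, beta):
    """The four orthogonal signal states of Eq. (21)."""
    c, s = np.cos(theta), np.sin(theta)
    cb, sb = np.cos(beta), np.sin(beta)
    k00, k01, k10, k11 = np.eye(4, dtype=complex)
    return [c * k00 + s * k11, s * k00 - c * k11,
            cb * k01 + sb * k10, sb * k01 - cb * k10]

def I2(theta, beta, eta):
    outs = [two_qubit_channel(np.outer(k, k.conj()), eta)
            for k in ensemble_21(theta, beta)]
    rho = sum(outs) / 4
    return S(rho) - sum(S(r) for r in outs) / 4

eta = 0.8
print(I2(0.0, 0.0, eta))              # product states: ~1.062, Eq. (20)
print(I2(np.pi / 4, np.pi / 4, eta))  # maximally entangled: ~0.73, smaller
```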
Peres for constructive criticism. This work was supported in part by the European TMR Research Network ERB 4061PL95-1412 “The physics of quantum information”, by Ministero dell’Università e della Ricerca Scientifica e Tecnologica under the project “Amplificazione e rivelazione di radiazione quantistica” and by Deutsche Forschungsgemeinschaft under grant SFB 407. Part of this work was completed during the 1998 workshops on quantum information organised by ISI Foundation - Elsag-Bailey and by the Benasque Center for Physics.
no-problem/9903/hep-ex9903018.html
ar5iv
text
# A Search for CP(T) Violation in B Decays at OPAL ## I Introduction CP violation in the B-meson system has generated considerable experimental and theoretical interest, as potentially large effects are expected. Searches for CP(T) violation using the small sample of $`\mathrm{Z}^0\to \mathrm{b}\overline{\mathrm{b}}`$ decays at the LEP collider at CERN provide “proofs of principle” for analysis techniques which will be employed by future B-Factory experiments. Indirect CP violation is possible in the $`\mathrm{B}^0`$ system provided the weak eigenstates $`\mathrm{B}^0`$ and $`\overline{\mathrm{B}}^0`$ differ from the mass eigenstates $`\mathrm{B}_1`$ and $`\mathrm{B}_2`$: $$|\mathrm{B}_1>=\frac{(1+ϵ_B+\delta _B)|\mathrm{B}^0>+(1-ϵ_B-\delta _B)|\overline{\mathrm{B}}^0>}{\sqrt{2\left(1+\left|ϵ_B+\delta _B\right|^2\right)}}$$ (1) $$|\mathrm{B}_2>=\frac{(1+ϵ_B-\delta _B)|\mathrm{B}^0>-(1-ϵ_B+\delta _B)|\overline{\mathrm{B}}^0>}{\sqrt{2\left(1+\left|ϵ_B-\delta _B\right|^2\right)}},$$ (2) where $`ϵ_B`$ and $`\delta _B`$ parametrize CP and CPT violation, respectively. These parameters have been investigated using semileptonic b hadron decays, resulting in limits of order $`10^{-2}`$ on both $`ϵ_B`$ and $`\delta _B`$. In the Standard Model, Re($`ϵ_\mathrm{B}`$) is expected to be around $`10^{-3}`$, but it could be up to an order of magnitude larger in superweak models. A non-zero value of $`ϵ_\mathrm{B}`$ gives rise to a time-dependent rate asymmetry, $`\mathrm{A}(\mathrm{t})`$, in inclusive $`\mathrm{B}^0`$ vs. inclusive $`\overline{\mathrm{B}}^0`$ decays, defined as: $$\mathrm{A}(\mathrm{t})\equiv \frac{\mathrm{B}^0(\mathrm{t})-\overline{\mathrm{B}}^0(\mathrm{t})}{\mathrm{B}^0(\mathrm{t})+\overline{\mathrm{B}}^0(\mathrm{t})},$$ (3) where $`\mathrm{B}^0(\mathrm{t})`$ and $`\overline{\mathrm{B}}^0(\mathrm{t})`$ are the decay rates of $`\mathrm{B}^0`$ and $`\overline{\mathrm{B}}^0`$ mesons. For an unbiased selection of $`\mathrm{B}^0`$ and $`\overline{\mathrm{B}}^0`$ mesons, the time-dependent inclusive decay rate asymmetry can be rewritten in terms of the proper decay time $`\mathrm{t}`$: $$A(t)=a_{cp}\left[\frac{\mathrm{\Delta }m_d\tau _{B^0}}{2}\mathrm{sin}(\mathrm{\Delta }m_dt)-\mathrm{sin}^2\left(\frac{\mathrm{\Delta }m_dt}{2}\right)\right],$$ (4) where $`\mathrm{a}_{\mathrm{cp}}`$ is the CP-violating observable, $`\mathrm{\Delta }\mathrm{m}_\mathrm{d}`$ is the $`\mathrm{B}^0`$ oscillation frequency, and $`\tau _{\mathrm{B}^0}`$ is the $`\mathrm{B}^0`$ lifetime. For $`|ϵ_B|<<1`$, the parameter $`\mathrm{a}_{\mathrm{cp}}`$ is related to $`ϵ_B`$ by $`\mathrm{Re}(ϵ_\mathrm{B})=\mathrm{a}_{\mathrm{cp}}/4`$. Furthermore, CPT invariance implies that $`\tau _b=\tau _{\overline{b}}`$. If CPT were violated, the lifetimes of $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$ hadrons could differ: $$\tau _{b/\overline{b}}=\left[1\pm \frac{1}{2}\left(\frac{\mathrm{\Delta }\tau }{\tau }\right)_b\right]\tau _{av},$$ (5) where $`\tau _{\mathrm{av}}`$ is the average lifetime and $`(\mathrm{\Delta }\tau /\tau )_\mathrm{b}`$ is the fractional difference in lifetimes. ## II Inclusive CP(T) Tests The measurement of the time-dependent rate asymmetry, A(t), and the extraction of Re($`ϵ_B`$) proceed in several steps.
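To visualize the expected signal, the shape of Eq. (4) can be sketched numerically; the parameter values below are illustrative assumptions (typical Δm_d and τ_B0 values of that era), not numbers taken from this analysis:
```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters (assumed values, not from the paper):
dm = 0.47      # Delta m_d in ps^-1
tau = 1.55     # B0 lifetime in ps
a_cp = 0.15    # CP-violating observable, as in the dashed curve of Fig. 1

def asymmetry(t, a_cp, dm=dm, tau=tau):
    """Time-dependent inclusive rate asymmetry, Eq. (4)."""
    return a_cp * (0.5 * dm * tau * np.sin(dm * t) - np.sin(0.5 * dm * t)**2)

t = np.linspace(0.0, 15.0, 300)   # proper time in ps
plt.plot(t, asymmetry(t, a_cp))
plt.xlabel("proper time t [ps]")
plt.ylabel("A(t)")
plt.show()
```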
First, selected $`\mathrm{Z}^0\to \mathrm{q}\overline{\mathrm{q}}`$ events are divided into two hemispheres defined by the plane perpendicular to the thrust axis and containing the $`\mathrm{e}^+\mathrm{e}^{}`$ interaction point. A sample of about 400,000 $`\mathrm{Z}^0\to \mathrm{b}\overline{\mathrm{b}}`$ events is identified using the b-tagging techniques described in detail in . The b-tags rely on the presence of a displaced secondary vertex or a high momentum lepton. In each event, the hemisphere containing the b-tag is referred to as the “T-tagged” hemisphere. The b-tag has a hemisphere tagging efficiency $`ϵ_{\mathrm{hemi}}`$ of 37% for $`\mathrm{b}\overline{\mathrm{b}}`$ events and a non-b impurity of 13%. Next, the b hadron proper decay time, t, in the opposite “measurement” hemisphere is reconstructed by forming a secondary vertex, measuring the decay distance from the primary vertex, and estimating the b hadron energy. The quantity $`\mathrm{a}_{\mathrm{cp}}`$ is then extracted via a binned $`\chi ^2`$-fit to the observed time-dependent asymmetry in bins of reconstructed proper time. ### A Production Flavor Tag The production flavor estimate, $`\mathrm{Q}_\mathrm{T}`$, in the tagged hemisphere is the output of a neural net with the following inputs: * 1. Jet charge of the highest energy jet, $`\mathrm{Q}_{\mathrm{jet}}`$, with momentum weight $`\kappa =0.5`$. The jet charge is defined as: $$Q_{jet}=\frac{\mathrm{\Sigma }_i(p_i^l)^\kappa q_i}{\mathrm{\Sigma }_i(p_i^l)^\kappa },$$ (6) where $`p_i^l`$ is the longitudinal momentum component with respect to the jet axis and $`q_i`$ is the charge of track $`i`$. 2. Vertex charge, $`Q_{vtx}=\mathrm{\Sigma }_iw_iq_i`$, where $`w_i`$ is the weight for track $`i`$ to have come from a secondary instead of a primary vertex. Weights are derived from an artificial neural network with three inputs: the momentum of track $`i`$, the transverse momentum of track $`i`$ with respect to the jet axis, and the impact parameter. 3. Uncertainty on the vertex charge, $`\sigma _{Q_{vtx}}=\mathrm{\Sigma }_iw_i(1-w_i)q_i^2`$. 4. The product of the lepton charge and the output of the neural network used to select $`\mathrm{b}\to \mathrm{l}`$ decays, a lepton tag. The jet charge and vertex charge are not charge symmetric, due to detector effects resulting in differences in the rates and reconstruction of positively and negatively charged tracks. Offsets are evaluated using inclusive samples of T-tagged events and are subtracted prior to constructing $`\mathrm{Q}_\mathrm{T}`$. The lepton tag is diluted by $`\mathrm{b}`$ mixing, cascade $`\mathrm{b}`$ decays, $`\mathrm{c}`$ decays, and fake leptons. Separate networks with fewer inputs are trained for events with a vertex or a lepton only. The variable $`\mathrm{Q}_\mathrm{T}`$ is defined by $$Q_T=\frac{N_b(x)-N_{\overline{b}}(x)}{N_b(x)+N_{\overline{b}}(x)}$$ (7) $$|Q_T|=1-2\eta ,$$ (8) where $`\mathrm{N}_\mathrm{b}(\mathrm{x})`$ and $`\mathrm{N}_{\overline{\mathrm{b}}}(\mathrm{x})`$ are the numbers of MC $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$ hadron hemispheres with a given value of the artificial neural network output $`x`$ and $`\eta `$ is the mistag probability. If $`\mathrm{Q}_\mathrm{T}>0`$, then the tagged hemisphere is more likely to contain a $`\mathrm{b}`$ hadron than a $`\overline{\mathrm{b}}`$ hadron, and vice versa. If $`\mathrm{Q}_\mathrm{T}=0`$, both hypotheses are equally likely.
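For illustration, the jet charge of Eq. (6) is simple to compute from a track list; the following sketch uses hypothetical track momenta and charges:
```python
import numpy as np

def jet_charge(p_long, charges, kappa=0.5):
    """Momentum-weighted jet charge, Eq. (6):
    Q_jet = sum_i (p_i^l)^kappa * q_i / sum_i (p_i^l)^kappa."""
    w = np.asarray(p_long, dtype=float) ** kappa
    return float((w * np.asarray(charges, dtype=float)).sum() / w.sum())

# Toy track list (hypothetical values): longitudinal momenta in GeV, charges
p_long = [12.0, 7.5, 3.2, 1.1, 0.6]
charges = [+1, -1, +1, -1, -1]
print(jet_charge(p_long, charges))  # momentum-weighted average track charge
```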
The $`\mathrm{Q}_\mathrm{T}`$ flavor tag has some sensitivity to the decay flavor of the tagged hemisphere, which is not desirable in an inclusive measurement. Therefore, another tag $`\mathrm{Q}_\mathrm{M}`$ is applied in the opposite, or measurement, hemisphere. The $`\mathrm{Q}_\mathrm{T}`$ output in the T-tagged hemisphere, as well as the jet charge in the measurement hemisphere, $`\mathrm{Q}_\mathrm{M}`$ (with momentum weight $`\kappa =0`$), are combined to construct the composite variable: $$Q_2=2\left[\frac{(1-Q_T)(1+Q_M)}{(1-Q_T)(1+Q_M)+(1+Q_T)(1-Q_M)}\right]-1.$$ (9) Again, if $`\mathrm{Q}_2>0`$ ($`\mathrm{Q}_2<0`$), the so-called “M-tagged” hemisphere contains a $`\mathrm{b}`$-hadron ($`\overline{\mathrm{b}}`$-hadron) tag. The $`\mathrm{Q}_2`$ variable is designed to be sensitive to the production, but not the decay, flavor of the $`\mathrm{b}`$-hadron, thus avoiding biases in the reconstructed proper time measurement. After flavor tagging, 394119 events remain in the data sample. ### B Proper Decay Time Reconstruction The CP-violating parameter $`\mathrm{a}_{\mathrm{cp}}`$ can be extracted from the rate asymmetry distribution, A(t), as defined in Equation 4. This is accomplished by calculating the number of $`\mathrm{b}`$-hadron M-tags minus the number of $`\overline{\mathrm{b}}`$-hadron M-tags in bins of reconstructed proper time $`t`$ and performing a binned $`\chi ^2`$ fit to measure $`\mathrm{a}_{\mathrm{cp}}`$. The $`\mathrm{b}`$-hadron proper time is defined as: $$t=\frac{m_bL}{\sqrt{E_b^2-m_b^2}},$$ (10) where $`L`$ is the hadron decay length, $`E_b`$ is the $`\mathrm{b}`$-hadron energy, and $`m_b`$ is the mass of the $`\mathrm{b}`$-hadron, taken to be that of the $`\mathrm{B}^+`$ and $`\mathrm{B}^0`$ (5.279 GeV). The hadron decay length, $`L`$, is reconstructed in the measurement hemisphere by first forming a “seed” secondary vertex using the two tracks with the largest impact parameter, $`d_0`$, relative to the primary vertex in the highest energy jet. All tracks with $`p>0.5`$ GeV, $`|d_0|<1`$ cm, and $`\sigma _{d_0}<0.1`$ cm which are consistent with the “seed” vertex are then added to it via an iterative procedure. The secondary vertex must contain at least 3 tracks and have an invariant mass exceeding 0.8 GeV, assuming all constituent tracks are pions. To further eliminate badly reconstructed or fake secondary vertices, the secondary vertex must be kinematically consistent with a long-lived particle originating from the primary vertex. Secondary vertices meeting the above criteria are identified in approximately 70$`\%`$ of M-tagged hemispheres for both signal and background. The decay length $`L`$ between the primary and secondary vertices is then calculated using the jet axis as a constraint. The $`\mathrm{b}`$-hadron energy is computed by first estimating the energy of the $`\mathrm{b}`$-jet, treating the event as a 2-body decay of a $`\mathrm{Z}^0`$ into a $`\mathrm{b}`$-jet of mass $`\mathrm{m}_\mathrm{b}`$ and another object. The charged and neutral fragmentation energy, $`\mathrm{E}_{\mathrm{bfrag}}`$, is estimated using the procedure described in , involving the charged track weights $`\mathrm{w}_\mathrm{i}`$ and the unassociated electromagnetic calorimeter clusters weighted according to their angle with respect to the jet axis. The $`\mathrm{b}`$-hadron energy is then $`\mathrm{E}_\mathrm{b}=\mathrm{E}_{\mathrm{bjet}}-\mathrm{E}_{\mathrm{bfrag}}`$.
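The composite tag of Eq. (9) and the proper time of Eq. (10) can be sketched as follows (our illustration; the unit conversion assumes L in cm and E_b in GeV):
```python
import numpy as np

C_CM_PER_PS = 0.0299792458  # speed of light in cm per picosecond

def q2_combine(q_t, q_m):
    """Composite production-flavor tag, Eq. (9)."""
    num = (1.0 - q_t) * (1.0 + q_m)
    den = num + (1.0 + q_t) * (1.0 - q_m)
    return 2.0 * num / den - 1.0

def proper_time_ps(decay_length_cm, e_b_gev, m_b=5.279):
    """Reconstructed proper time, Eq. (10): t = m_b L / (p_b c)."""
    p_b = np.sqrt(e_b_gev**2 - m_b**2)  # b-hadron momentum in GeV
    return m_b * decay_length_cm / (p_b * C_CM_PER_PS)

# Hypothetical inputs: a b-like T-tag (Q_T > 0) pushes Q_2 negative,
# reflecting the opposite production flavor in the measurement hemisphere.
print(q2_combine(0.4, 0.1))        # ~ -0.31
print(proper_time_ps(0.3, 30.0))   # ~ 1.8 ps for L = 0.3 cm, E_b = 30 GeV
```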
The reconstructed proper time distribution described by Equation 10 is convolved with two Gaussians to account for detector resolution effects. The RMS widths of the resolution functions are 0.33 and 1.3 ps and are determined from Monte Carlo studies. About 65$`\%`$ of events lie within the narrower Gaussian. These resolution functions represent an average over all true decay proper times $`\mathrm{t}`$. The non-Gaussian effects apparent in small slices of $`\mathrm{t}`$, due to contamination from primary vertex tracks, are not critical in this analysis, as the result is not particularly dependent on an accurate decay time resolution. ## III Fit to Re($`ϵ_\mathrm{B}`$) For each of 34 time bins $`\mathrm{i}`$ (in the range $-2`$ to 15 ps), the asymmetry is calculated in 10 bins $`\mathrm{j}`$ of $`|\mathrm{Q}_2|`$: $$A_{ij}^{obs}=\frac{N_{ij}^b-N_{ij}^{\overline{b}}}{<\left|Q_2\right|>_{ij}(N_{ij}^b+N_{ij}^{\overline{b}})},$$ (11) and the error $`\sigma _{A_{ij}^{\mathrm{obs}}}`$ is given by: $$\sigma _{A_{ij}^{\mathrm{obs}}}=\frac{1-(<\left|Q_2\right|>_{ij}A_{ij}^{\mathrm{obs}})^2}{2<\left|Q_2\right|>_{ij}}\sqrt{\frac{N_{ij}^\mathrm{b}+N_{ij}^{\overline{\mathrm{b}}}}{N_{ij}^\mathrm{b}N_{ij}^{\overline{\mathrm{b}}}}},$$ where $`N_{ij}^\mathrm{b}`$ ($`N_{ij}^{\overline{\mathrm{b}}}`$) is the number of events with $`\mathrm{Q}_2>0`$ ($`\mathrm{Q}_2<0`$). The factor $`1/<\left|\mathrm{Q}_2\right|>_{\mathrm{ij}}`$ corrects for the tagging dilution (mis-tagging), which reduces the observed asymmetry for imperfectly tagged events. The 10 estimates of $`\mathrm{A}_{\mathrm{ij}}^{\mathrm{obs}}`$ in each bin $`i`$ are then averaged, weighting by $`(\sigma _{\mathrm{A}_{\mathrm{ij}}^{\mathrm{obs}}})^{-2}`$, to get $`\mathrm{A}_\mathrm{i}^{\mathrm{obs}}`$. A binned $`\chi ^2`$-fit to the reconstructed proper time, which accounts for the non-$`\mathrm{B}^0`$ background, the $`\mathrm{B}^0`$ lifetime, and the Gaussian time resolution functions, yields: $$a_{cp}=0.005\pm 0.055\pm 0.013$$ (12) $$Re(ϵ_B)=0.001\pm 0.014\pm 0.003.$$ (13) The asymmetry, A(t), as a function of reconstructed proper time $`\mathrm{t}`$ is shown in Figure 1. The dots denote OPAL data, the solid line the fit result, and the dashed line the expected asymmetry for $`\mathrm{a}_{\mathrm{cp}}=0.15`$. The systematic uncertainties are summarized in Table I. Detailed descriptions of the various contributions can be found in Reference . If the reconstruction efficiency for $`\mathrm{B}^0`$ decays to different numbers of charm hadrons is not the same, the expected asymmetry could take the form: $$A(t)=c_{cp}\mathrm{sin}\left(\mathrm{\Delta }m_dt\right)-a_{cp}\mathrm{sin}^2\left(\mathrm{\Delta }m_dt/2\right).$$ (14) Repeating the fit, letting both $`\mathrm{a}_{\mathrm{cp}}`$ and $`\mathrm{c}_{\mathrm{cp}}`$ vary, gives: $$a_{cp}=0.002\pm 0.055$$ (15) $$c_{cp}=0.026\pm 0.027.$$ (16) Differences in efficiency are not significant, as $`\mathrm{a}_{\mathrm{cp}}`$ does not change much. The systematic uncertainties on the measurement of $`\mathrm{c}_{\mathrm{cp}}`$ are listed in the second column of Table I.
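The structure of the binned χ² fit can be illustrated with synthetic data; the sketch below (our simplification, omitting the background and resolution terms included in the real fit) fits a_cp to asymmetry points generated from Eq. (4):
```python
import numpy as np
from scipy.optimize import curve_fit

dm, tau = 0.47, 1.55  # assumed Delta m_d [ps^-1] and B0 lifetime [ps]

def model(t, a_cp):
    """Fit model from Eq. (4); resolution and background terms omitted."""
    return a_cp * (0.5 * dm * tau * np.sin(dm * t) - np.sin(0.5 * dm * t)**2)

# Hypothetical binned asymmetry measurements A_i with errors sigma_i
t_bins = np.linspace(-2.0, 15.0, 34)
rng = np.random.default_rng(1)
sigma = np.full_like(t_bins, 0.02)
a_obs = model(t_bins, 0.15) + rng.normal(0.0, sigma)

# curve_fit with absolute_sigma performs the weighted (chi^2) fit
popt, pcov = curve_fit(model, t_bins, a_obs, sigma=sigma, absolute_sigma=True)
print(f"a_cp = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
```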
## IV Fit to $`(\mathrm{\Delta }\tau /\tau )_\mathrm{b}`$ The fractional difference between the $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$-hadron lifetimes is measured by dividing the data into 20 bins of $`\mathrm{Q}_2`$ (related to the $`\mathrm{b}`$/$`\overline{\mathrm{b}}`$-hadron purity) and performing a simultaneous fit of the reconstructed proper time distributions to the expected proper time distribution. The fit yields values for both $`\tau _{\mathrm{avg}}`$ and $`(\mathrm{\Delta }\tau /\tau )_\mathrm{b}`$. However, the result for $`\tau _{\mathrm{avg}}`$ has a large systematic uncertainty due to the time resolution function and should not be interpreted as a measurement of the average $`\mathrm{b}`$-hadron lifetime. The expected distribution accounts for time resolution effects, non-$`\mathrm{b}\overline{\mathrm{b}}`$ background, the lifetimes of $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$-hadrons, and a background component with a lifetime $`\tau _{\mathrm{bg}}`$. The fit result is: $$\left(\mathrm{\Delta }\tau /\tau \right)_b=0.001\pm 0.012\pm 0.008.$$ (17) The uncertainty in the flavor mistag rate dominates the systematic uncertainty on $`(\mathrm{\Delta }\tau /\tau )_\mathrm{b}`$. Reconstructed proper time distributions in 4 ranges of $`\mathrm{Q}_2`$ are shown in Figure 2. ## V Conclusion An inclusive sample of $`\mathrm{b}`$-hadron decays has been used to search for CP and CPT violation effects. No such effects are seen. From the time-dependent asymmetry of inclusive $`\mathrm{B}^0`$ decays, the CP violation parameter is measured to be: $$Re(ϵ_B)=0.001\pm 0.014\pm 0.003.$$ (18) This result agrees with the OPAL measurement using semileptonic b decays, $`\mathrm{Re}(ϵ_\mathrm{B})=0.002\pm 0.007\pm 0.003`$, and is also in agreement with other, less precise, results from CLEO and CDF. The fractional difference in the lifetimes of $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$-hadrons is also measured to be: $$\left(\mathrm{\Delta }\tau /\tau \right)_b=0.001\pm 0.012\pm 0.008.$$ (19) This is the first analysis accepted for publication which tests the equality of the $`\mathrm{b}`$ and $`\overline{\mathrm{b}}`$-hadron lifetimes. These results are summarized in Figures 3 and 4.
no-problem/9903/cond-mat9903195.html
ar5iv
text
# Resonant neutron scattering on the high Tc cuprates and 𝜋 and 𝜂 excitation of the 𝑡-𝐽 and Hubbard models ## Abstract We review the explanation of resonant neutron scattering experiments in $`YBa_2Cu_3O_{6+\delta }`$ and $`Bi_2Sr_2Ca_2O_{8+\delta }`$ materials from the point of view of a triplet excitation in the particle-particle channel, the $`\pi `$ excitation. The relation of these resonances to the superconducting condensation energy and their role in stabilizing the superconducting state are discussed. Due to superconducting fluctuations, the $`\pi `$ resonance may appear as a broad peak above Tc. The analogous problem to the $`\pi `$ excitation for the case of $`s`$-wave pairing, the $`\eta `$ excitation, is considered in the strong coupling limit, with an emphasis on the resonance precursors in a state with no long range order. Inelastic neutron scattering (INS) on optimally doped $`YBa_2Cu_3O_7`$ revealed a striking resonance at the commensurate wavevector $`Q=(\pi /a,\pi /a)`$ and energy $`41`$ meV . This resonance appears in the spin-flip channel only; therefore it is of magnetic origin and not due to scattering by phonons. The most remarkable feature of this resonance is that it appears only below Tc and therefore tells us about enhanced antiferromagnetic fluctuations in the superconducting state of this material. Similar resonances have later been found for underdoped $`YBa_2Cu_3O_{6+\delta }`$ at smaller energies but the same wavevector of commensurate antiferromagnetic fluctuations $`(\pi /a,\pi /a)`$. A new feature of the resonances in underdoped materials is that they no longer disappear above Tc but have precursors appearing at some temperature above the superconducting transition temperature. Recently, resonant peaks below Tc have also been observed in INS experiments on $`Bi_2Sr_2Ca_2O_{8+\delta }`$ materials . The simplest and most economical explanation for the observed resonances was to identify them with the $`\pi `$-excitation, a triplet excitation at momentum $`(\pi /a,\pi /a)`$ in the particle-particle channel : $`\pi ^{\dagger}=\sum_p(\mathrm{cos}p_x-\mathrm{cos}p_y)c_{p+Q\uparrow }^{\dagger}c_{-p\uparrow }^{\dagger}`$. This excitation is a well-defined collective mode for a wide range of lattice Hamiltonians, including the Hubbard Hamiltonian and the $`t`$-$`J`$ model. A simple way to visualize the $`\pi `$-excitation is to think of a spin-triplet pair of electrons sitting on nearest-neighbor sites, having center of mass momentum $`(\pi /a,\pi /a)`$ and the same relative wavefunction as a $`d`$-wave Cooper pair. When the system acquires long-range superconducting order, the $`d`$-wave Cooper pairs may be resonantly scattered into the $`\pi `$ pairs, and this gives rise to the sharp resonance seen in neutron scattering. Motivated by a recent proposal of Scalapino and White, we showed that the last argument may be strengthened: it may be energetically favorable for the system to become superconducting, so that the $`\pi `$ channel can contribute to the spin fluctuations and the system can lower its antiferromagnetic exchange energy. In this way, the $`\pi `$ excitation may be “promoted” from being a consequence of the $`d`$-wave superconductivity to being its primary cause. This argument is similar to the argument for the stabilization of the A phase of superfluid helium-3, where spin fluctuations are enhanced in the A phase relative to the B phase.
So the effect of the spin fluctuation feedback may lead to a stabilization of the A phase relative to the B phase, which would always be favored in a weak-coupling BCS analysis. The hypothesis that the $`\pi `$ excitation is the major driving force of the superconducting transition is supported by a comparison of the superconducting condensation energy and the change in the antiferromagnetic exchange energy due to the appearance of the resonance. For optimally doped $`YBa_2Cu_3O_7`$ the resonance lowers the exchange energy by $`18K`$ per unit cell , whereas the condensation energy for this material is $`12K`$ per unit cell. So the resonance by itself is sufficient to account for the superconducting condensation energy. At finite temperature there also seems to be a connection between the resonance intensity and the electronic specific heat, giving another indication of the important role played by the resonance in the thermodynamics of the superconducting transition (see H. Mook’s article in these proceedings). Another important aspect of the $`\pi `$ excitation is that it allows one to unify the spin $`SU(2)`$ and charge $`U(1)`$ symmetries into a larger $`SO(5)`$ symmetry and suggests a new effective low energy Lagrangian for the description of the high $`T_c`$ cuprates . From the point of view of the $`SO(5)`$ symmetry, the $`\pi `$ operator is one of the generators of the symmetry and the resonances observed in neutron scattering are pseudo-Goldstone bosons of the broken symmetry. A discussion of the $`\pi `$ excitation in the state with long range superconducting order has been given within a gauge invariant RPA formalism in and from the point of view of $`SO(5)`$ symmetry in . However, a thorough understanding of the precursors of this excitation in the normal state of underdoped materials is still lacking. In reference it was pointed out that the most likely origin of these precursors is the existence of strong superconducting fluctuations in the pseudogap regime of underdoped cuprates , an assumption phenomenologically supported by the correlation between the temperature at which the resonance precursors appear and the broadening of the singularity in the electronic specific heat above $`T_c`$. In weak coupling, precursors of the $`\pi `$ resonance have been identified with a process in which a $`\pi `$-pair and a preformed Cooper pair propagate in opposite directions (see Figure 1); however, a quantitative analysis of such a process is very difficult. In this article we would like to point out that there is an analogous problem to the $`\pi `$-excitation, the $`\eta `$ excitation of the negative $`U`$ Hubbard model, which allows a detailed study in the strong coupling limit, including an analysis of the resonance precursors in a state without long range order. So, in the remaining part of this article, we will review the $`\eta `$ excitation and the pseudospin $`SU(2)`$ symmetry that it gives rise to, and discuss the strong coupling limit of the negative $`U`$ Hubbard model.
As originally suggested by Yang and Zhang , the negative $`U`$ Hubbard model (we consider the $`d=3`$ case) $$\mathcal{H}=-t\underset{\langle ij\rangle \sigma }{}c_{i\sigma }^{\dagger}c_{j\sigma }+U\underset{i}{}(n_{i\uparrow }-\frac{1}{2})(n_{i\downarrow }-\frac{1}{2})-\mu \underset{i\sigma }{}c_{i\sigma }^{\dagger}c_{i\sigma }$$ (1) at half filling has an $`SO(4)`$ symmetry: in addition to the usual spin $`SU(2)`$ symmetry, the system possesses a pseudospin $`SU(2)`$ symmetry, generated by the operators $`\eta ^{\dagger}=\sum_pc_{p+Q\uparrow }^{\dagger}c_{-p\downarrow }^{\dagger}`$, $`\eta _0=\frac{1}{2}(N_e-N)`$, and $`\eta =\left(\eta ^{\dagger}\right)^{\dagger}`$, where $`N_e`$ is the total number of electrons and $`N`$ is the total number of sites. The $`\eta `$ operator introduced here is similar to the $`\pi `$ operator that we discussed earlier in that it acts in the particle-particle channel and carries momentum $`Q=(\pi /a,\pi /a)`$; however, it is a spin-singlet operator and creates pairs of electrons sitting on the same site. While the $`SO(5)`$ symmetry of the $`t`$-$`J`$ model is related to the competition between the antiferromagnetic and d-wave superconducting orders in that system, the pseudospin $`SU(2)`$ symmetry relates CDW and s-wave superconductivity in the negative $`U`$ Hubbard model. At half filling, when $`\mu =0`$, the symmetry is exact and there is a degeneracy between the CDW state and the state with superconducting order. Away from half filling the symmetry is explicitly broken and the superconducting state is energetically more favorable than the CDW state. The $`\eta `$ operator no longer commutes with the Hamiltonian but satisfies $`[\mathcal{H},\eta ^{\dagger}]=2\mu \eta ^{\dagger}`$. Therefore, when it acts on the ground state it creates an exact excited state at energy $`2\mu `$. In the normal state such an $`\eta `$ excitation does not contribute to the density fluctuation spectrum, since it is a charge-2 operator; however, when the system becomes superconducting and there is a finite probability of converting particles into holes, the $`\eta `$ resonance shows up as a resonance in the density fluctuation spectrum (one can also think of this as resonant scattering of s-wave Cooper pairs into $`\eta `$ pairs). Another way to see how the $`\eta `$ channel gets coupled to the density channel below $`T_c`$ is to notice that there is an exact commutation relation $`[\eta ^{\dagger},\rho _Q]=\mathrm{\Delta }`$, where $`\rho _Q=\sum_{p\sigma }c_{p+Q\sigma }^{\dagger}c_{p\sigma }`$ is the density operator at momentum $`Q`$ and $`\mathrm{\Delta }`$ is the s-wave superconducting order parameter. In the superconducting state the right hand side of this relation acquires an expectation value, so $`\rho _Q`$ and $`\mathrm{\Delta }`$ become conjugate variables and a resonant peak in one of them shows up as a resonance in the other. The question that we want to address is what happens if we do not have long range order but only strong superconducting fluctuations. Can they lead to precursors of the $`\eta `$ resonance in the density-density correlation function, and what is the form of such precursors? The limit that we want to consider is the strong coupling limit, $`U>>t`$. We perform a particle-hole transformation on a bipartite lattice, $$c_{i\uparrow }\to c_{i\uparrow },\qquad c_{i\downarrow }\to \left\{\begin{array}{c}c_{i\downarrow }^{\dagger},\;i\in A,\hfill \\ -c_{i\downarrow }^{\dagger},\;i\in B,\hfill \end{array}\right.$$ (4) where $`A`$ and $`B`$ denote the two sublattices. This transformation maps the negative $`U`$ Hubbard model at finite doping (i.e. finite $`\mu `$) into the positive $`U`$ Hubbard model at half filling, but in the presence of a magnetic field along the $`z`$-axis.
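The commutation relations quoted above are easy to verify by exact diagonalization on a small cluster. The following self-contained sketch (our illustration) builds a two-site negative-U Hubbard chain via a Jordan-Wigner construction and checks that η† commutes with the hopping and interaction terms, while the chemical potential term splits the η† state by 2μ (with the convention ℋ ⊃ −μN used in Eq. (1), the commutator comes out as −2μη†; the overall sign simply tracks the sign convention for μ):
```python
import numpy as np
from functools import reduce

# Jordan-Wigner fermion operators on a 2-site Hubbard chain
# (mode order: site0-up, site0-down, site1-up, site1-down).
a = np.array([[0, 1], [0, 0]], dtype=complex)   # on-site annihilator
Z = np.diag([1.0, -1.0]).astype(complex)
I = np.eye(2, dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

def c(j, n=4):
    """Annihilation operator for mode j with Jordan-Wigner string."""
    return kron(*([Z] * j + [a] + [I] * (n - j - 1)))

cs = [c(j) for j in range(4)]            # c_{0u}, c_{0d}, c_{1u}, c_{1d}
cds = [op.conj().T for op in cs]
n_op = [cd @ cc for cd, cc in zip(cds, cs)]

t, U, mu = 1.0, -4.0, -0.3               # negative-U Hubbard parameters
half = 0.5 * np.eye(16)

# H = -t sum_s (c0s^dag c1s + h.c.) + U sum_i (n_iu-1/2)(n_id-1/2) - mu*N
H_t = -t * (cds[0] @ cs[2] + cds[2] @ cs[0] + cds[1] @ cs[3] + cds[3] @ cs[1])
H_U = U * ((n_op[0] - half) @ (n_op[1] - half)
           + (n_op[2] - half) @ (n_op[3] - half))
N = sum(n_op)

# eta^dag = sum_i (-1)^i c_{i up}^dag c_{i down}^dag  (Q = pi phase)
eta_dag = cds[0] @ cds[1] - cds[2] @ cds[3]

comm = lambda A, B: A @ B - B @ A
# Exact pseudospin symmetry at half filling (mu = 0):
assert np.allclose(comm(H_t + H_U, eta_dag), 0)
# With the -mu*N term, eta^dag creates an exact eigenmode split by 2|mu|
assert np.allclose(comm(H_t + H_U - mu * N, eta_dag), -2 * mu * eta_dag)
print("eta commutation relations verified")
```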
The pseudospin $`SU(2)`$ symmetry goes over into the spin $`SU(2)`$ symmetry, and the $`\eta `$ resonance becomes a Larmor resonance in the transverse spin channel. The particle-hole transformation maps the superconducting order parameter into the antiferromagnetic order parameter in the $`x`$-$`y`$ plane and the CDW order parameter into the antiferromagnetic order parameter in the $`z`$ direction. The advantage of performing such a particle-hole transformation is that the strong coupling limit of the positive $`U`$ Hubbard model at half filling is well known: it is the Heisenberg model with nearest neighbor exchange interaction $`J=4t^2/U`$ . So the effective Hamiltonian in strong coupling is $$\mathcal{H}=J\underset{\langle ij\rangle }{}𝐒_i\cdot 𝐒_j+H_z\underset{i}{}S_i^z$$ (5) with $`H_z=2\mu `$. At $`T=0`$ the Heisenberg model in an external field will develop long range antiferromagnetic order in the plane perpendicular to the direction of the applied field (the $`x`$-$`y`$ plane in our case, which corresponds to superconducting order in the original negative $`U`$ model). When this happens, $`N_\pm `$ acquire expectation values and we can use the commutation relation $`[S_\pm ,N_z]=\pm N_\pm `$ to show that the Larmor resonance, which was originally present in the $`S_\pm `$ channels only, will appear in the $`N_z`$-$`N_z`$ correlation function as well. However, we are interested in the regime where we have only finite range (although strong) antiferromagnetic correlations, and we want to find possible precursors of the Larmor resonance in the $`N_z`$ channel. This effect is conveniently studied using the Schwinger boson representation of the Heisenberg model . On a bipartite lattice one represents the spin operators as $`S_i^+=a_i^{\dagger}b_i`$ and $`S_i^z=\frac{1}{2}(a_i^{\dagger}a_i-b_i^{\dagger}b_i)`$ on sublattice $`A`$, and $`S_i^+=-b_i^{\dagger}a_i`$ and $`S_i^z=\frac{1}{2}(b_i^{\dagger}b_i-a_i^{\dagger}a_i)`$ on sublattice $`B`$. Provided that the constraint $`a_i^{\dagger}a_i+b_i^{\dagger}b_i=1`$ is satisfied at each site and $`a`$ and $`b`$ obey the usual bosonic commutation relations, we easily recover the proper commutation relations of the spin $`SU(2)`$ algebra at each site. In terms of the $`a`$ and $`b`$ operators, the Hamiltonian (5) can be written as $$\mathcal{H}=-\frac{J}{2}\underset{\langle ij\rangle }{}A_{ij}^{\dagger}A_{ij}+\frac{H_z}{2}\underset{i}{}(-1)^i(a_i^{\dagger}a_i-b_i^{\dagger}b_i)+\underset{i}{}\lambda _i(a_i^{\dagger}a_i+b_i^{\dagger}b_i-1)$$ (6) where $`A_{ij}=a_ia_j+b_ib_j`$ and the last term enforces the constraint at each site. In the mean field approximation this Hamiltonian may be diagonalized in terms of quasiparticles $`\alpha _{a\mu ,k}`$ ($`a=1,2`$, $`\mu =\pm `$, with the momentum $`k`$ running over the magnetic zone only) with dispersion $`\omega _{a\pm ,k}=\omega _k\pm H_z/2`$, where $`\omega _k=\sqrt{\lambda ^2-4Q^2\gamma _k^2}`$ and $`\gamma _k=\mathrm{cos}k_x+\mathrm{cos}k_y`$.
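As a quick consistency check of the Schwinger boson representation introduced above, the su(2) algebra closes on each site (our addition, using [a, a†] = [b, b†] = 1):
```latex
% Closure of the su(2) algebra in the Schwinger representation (sublattice A):
[S^{+},S^{-}] = [a^{\dagger}b,\,b^{\dagger}a]
            = a^{\dagger}a\,(b^{\dagger}b+1) - b^{\dagger}b\,(a^{\dagger}a+1)
            = a^{\dagger}a - b^{\dagger}b = 2S^{z},
\qquad
[S^{z},S^{\pm}] = \pm S^{\pm}.
```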
Mean-field parameters $`\lambda `$ and $`Q`$ have to be found by minimizing the free energy $$F=\frac{2}{\beta }\underset{k}{}ln\left\{sinh[\frac{\beta }{2}(\omega _k+\frac{H_z}{2})]\right\}+\frac{2}{\beta }\underset{k}{}ln\left\{sinh[\frac{\beta }{2}(\omega _k-\frac{H_z}{2})]\right\}-2\lambda N+\frac{4Q^2}{J}N$$ (7) In terms of the $`\alpha `$ operators, the uniform and staggered spin operators can be written as $$M_+=\frac{1}{N}\underset{k}{}\left(\alpha _{1+,k}^{\dagger}\alpha _{2-,k}-\alpha _{2+,k}^{\dagger}\alpha _{1-,k}\right)\qquad M_z=\frac{1}{N}\underset{ak}{}\left(\alpha _{a+,k}^{\dagger}\alpha _{a+,k}-\alpha _{a-,k}^{\dagger}\alpha _{a-,k}\right)$$ (8) $$N_z=\frac{1}{N}\underset{a\mu k}{}(-1)^a\left\{cosh(2\theta _k)\,\alpha _{a\mu ,k}^{\dagger}\alpha _{a\mu ,k}+\frac{1}{2}sinh(2\theta _k)[\alpha _{a\mu ,k}^{\dagger}\alpha _{a\mu ,-k}^{\dagger}+\alpha _{a\mu ,k}\alpha _{a\mu ,-k}]\right\}$$ (9) $$N_x=\frac{1}{N}\underset{\mu k}{}\{cosh(2\theta _k)\,\alpha _{1\mu ,k}^{\dagger}\alpha _{2\overline{\mu },k}+\frac{1}{2}sinh(2\theta _k)[\alpha _{1\mu ,k}^{\dagger}\alpha _{2\overline{\mu },-k}^{\dagger}+\alpha _{1\mu ,k}\alpha _{2\overline{\mu },-k}]\}$$ (10) where $`tanh(2\theta _k)=2Q\gamma _k/\lambda `$. From equation (8) we notice that the $`M_+`$ operator has the exact energy $`H_z`$, owing to the fact that the mean-field approximation for the Schwinger bosons does not break the spin symmetry. The correlation function for $`N_z`$ is given by $$D(\omega >0)=Im{\displaystyle \int }dte^{i\omega t}\theta (t)\langle N_z(t)N_z(0)\rangle =\frac{1}{N}\underset{k}{}\left(1+N(\omega _{+,k})+N(\omega _{-,k})\right)\delta (\omega -\omega _{+,k}-\omega _{-,k})$$ (11) where $`N(\omega )=1/(e^{\beta \omega }-1)`$. At $`T<T_c`$, the bosons $`\alpha _{1-,k=0}`$ and $`\alpha _{2-,k=0}`$ are condensed, so there is a $`\delta `$-function resonance in $`D(\omega )`$ at frequency $`H_z=2\mu `$ with weight $`sinh^2(2\theta _{k=0})=N_x^2/M_z`$, as dictated by the $`SU(2)`$ symmetry . There is also an additional broad peak, due to the $`\alpha _{a-,k\mathrm{}0}`$, that starts at energy $`2\mu `$ and then extends over a frequency range of around $`T`$, with an integrated weight proportional to $`T^{3/2}`$. Above the Néel ordering temperature the $`\delta `$-peak in $`D(\omega )`$ is absent; however, there is a broadened peak at an energy slightly larger than $`2\mu `$ (the lower energy threshold is given by $`\omega _{min}=\omega _{-,k=0}+\omega _{+,k=0}=2\mu +2\omega _{-,k=0}`$, and $`\omega _{-,k=0}`$ is small since it vanishes at $`T_c`$). The width of this broad feature is of order $`T`$ and the total spectral weight is proportional to $`T^{3/2}e^{-\omega _{-,k=0}/T}`$. As the temperature is lowered and the system approaches Bose condensation at $`T_c`$, the energy $`\omega _{-,k=0}`$ becomes smaller, leading to a smooth increase of the integrated intensity of the peak. So the intensity of the resonance changes continuously across $`T_c`$, but, as may be shown by a detailed analysis, there is a jump in its derivative. In Figure 2 we show a sketch of $`D(\omega )`$ for temperatures below and above $`T_c`$.
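Finally, the shape of D(ω) in Eq. (11) is easy to sketch numerically once λ and Q are specified; in the snippet below (our illustration, using the two-cosine γ_k quoted in the text) the mean-field parameters are simply assumed rather than determined self-consistently from Eq. (7):
```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed (not self-consistent) mean-field parameters, for illustration only
lam, Q, Hz, T = 2.1, 0.5, 0.4, 0.3
beta = 1.0 / T
ks = np.linspace(-np.pi, np.pi, 200, endpoint=False)
KX, KY = np.meshgrid(ks, ks)
gamma = np.cos(KX) + np.cos(KY)                # gamma_k as in the text

wk = np.sqrt(lam**2 - 4.0 * Q**2 * gamma**2)   # omega_k
wp, wm = wk + 0.5 * Hz, wk - 0.5 * Hz          # omega_{+,k}, omega_{-,k}
bose = lambda w: 1.0 / np.expm1(beta * w)      # Bose factor N(omega)

# Eq. (11): histogram the delta functions with their thermal weights
energies = (wp + wm).ravel()
weights = (1.0 + bose(wp) + bose(wm)).ravel()
plt.hist(energies, bins=200, weights=weights / energies.size)
plt.axvline(energies.min(), ls="--")           # threshold omega_min
plt.xlabel("omega")
plt.ylabel("D(omega) (arb. units)")
plt.show()
```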
Before concluding, we would like to remark that many features of the precursors of the $`\eta `$ excitation that we discussed here, such as the temperature-dependent shift to higher energies and the considerable broadening, are likely to be present for the $`\pi `$ excitation of the $`t`$-$`J`$ model as well. INS on underdoped $`YBa_2Cu_3O_{6+\delta }`$ reveals considerable broadening of the resonance above $`T_c`$; however, the current accuracy of the experiments does not allow one to see any changes in the energy of the resonance above the superconducting transition. This work is partially supported by the NSF at ITP (ED) and by NSF grant DMR-9814289. We acknowledge useful discussions with A. Auerbach and D.J. Scalapino.