cond-mat/0002074

# Two time scales and FDT violation in a Finite Dimensional Model for Structural Glasses
## Abstract
We study the breakdown of fluctuation-dissipation relations between time dependent density-density correlations and associated responses following a quench in chemical potential in the Frustrated Ising Lattice Gas. The corresponding slow dynamics is characterized by two well separated time scales, each associated with a constant value of the fluctuation-dissipation ratio. This result is particularly relevant given that activated processes dominate the long time dynamics of the system.
In recent years considerable progress has been achieved in the theoretical description of the glassy state of matter. A scenario for the observed slow dynamics of glass forming materials has emerged through detailed analysis of mean field (MF) spin glass models. The equations describing the off-equilibrium dynamics of these MF spin glasses simplify, above the transition, to the single equation of the Mode Coupling Theory for supercooled liquids. These approaches have been successful in explaining history dependence or aging effects and the nature of the two characteristic relaxations in glasses, the short time or $`\beta `$-relaxation and the structural long time $`\alpha `$-relaxation. In the $`\alpha `$-relaxation the system falls out of local equilibrium; as a consequence the Fluctuation-Dissipation Theorem (FDT) breaks down and can be replaced by the more general relation
$$R(t,t_w)=\frac{X(t,t_w)}{T}\frac{\partial C(t,t_w)}{\partial t_w},$$
(1)
where $`C(t,t_w)`$ is a two-time correlation function and $`R(t,t_w)`$ with $`t>t_w`$ is the associated response. $`T`$ is the heat bath temperature and $`X(t,t_w)`$ is a function that measures the departure from FDT: at equilibrium $`X=1`$ and the usual FDT is recovered, while in the out of equilibrium regime $`X<1`$. In the MF approximation the function $`X`$, called the "fluctuation-dissipation ratio" (FDR), turns out to depend on both times only through $`C(t,t_w)`$. Moreover, in MF models of glasses $`X`$ is a constant (when different from 1). This scenario reflects the existence of only two well separated time scales, the equilibrium or FDT scale and a longer one where the system is out of equilibrium. The FDR has been interpreted as an effective temperature, and it has been demonstrated that it is exactly the temperature that a thermometer would measure if it were coupled to the slowly relaxing modes of the system. Recently the first experimental determination of the FDR has been performed in glycerol.
This scenario, while appealing, is essentially based on an analogy between the physics of some MF spin glasses and the behavior of real finite structural glasses (the formal connection being valid only in the high temperature region). At this point it seems crucial to test the link between MF theories and realistic models in the glassy phase. In particular, we still do not know what the role of activated processes in realistic models will be. Activated processes are absent in completely connected models in the thermodynamic limit while, on the contrary, they dominate the relaxation dynamics below the glass transition temperature $`T_g`$ in real glasses.
The goal of the present letter is to go beyond the MF like description of structural glasses by considering a finite dimensional model, the frustrated Ising lattice gas (FILG) , which presents most of the relevant features of glass forming materials, in particular activated processes at low temperatures. Here we analyze the violation of FDT in the FILG in three dimensions through Monte Carlo simulations. The (very precise) results confirm the qualitative scenario of MF models of a constant FDR with large separation of time scales and set the stage for a detailed investigation of activated processes in realistic models of glasses.
The FILG is defined by the Hamiltonian:
$$H=-J\sum_{\langle ij\rangle}(\epsilon _{ij}\sigma _i\sigma _j-1)\,n_in_j-\mu \sum_{i}n_i.$$
(2)
At each site of the lattice there are two different dynamical variables: local density (occupation) variables $`n_i=0,1`$ ($`i=1\mathrm{}N`$) and internal degrees of freedom, $`\sigma _i=\pm 1`$. The usually complex spatial structure of the molecules of glass forming liquids, which can assume several spatial orientations, is in part responsible for the geometric constraints on their mobility. Here we are in the simplest case of two possible orientations, and the steric effects imposed on a particle by its neighbors are felt as restrictions on its orientation due to the quenched random variables $`\epsilon _{ij}=\pm 1`$. The first term of the Hamiltonian implies that when $`J\to \mathrm{\infty }`$ any frustrated loop in the lattice will have at least one hole and then the density will be $`\rho <1`$, preventing the system from reaching the close packed configuration. The system will then present "geometric frustration". Finally, $`\mu `$ represents a chemical potential ruling the system density (at fixed volume).
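As an illustration of Eq. (2), the following sketch (our own, not the authors' code; the array layout and function names are assumptions) evaluates the FILG energy of a given configuration on a periodic 3D cubic lattice:

```python
# Minimal sketch: energy of a FILG configuration, Eq. (2), on an L^3
# periodic cubic lattice. eps[a] holds the quenched couplings on the
# bonds along axis a; sigma and n are the spins and occupations.
import numpy as np

def filg_energy(n, sigma, eps, J=1.0, mu=1.0):
    """H = -J sum_<ij> (eps_ij sigma_i sigma_j - 1) n_i n_j - mu sum_i n_i."""
    H = 0.0
    for a in range(3):                       # three bond directions
        nj = np.roll(n, -1, axis=a)          # occupation of the +a neighbour
        sj = np.roll(sigma, -1, axis=a)      # spin of the +a neighbour
        H -= J * np.sum((eps[a] * sigma * sj - 1.0) * n * nj)
    H -= mu * np.sum(n)
    return H

rng = np.random.default_rng(0)
L = 8
eps = rng.choice([-1, 1], size=(3, L, L, L))   # quenched bond disorder
sigma = rng.choice([-1, 1], size=(L, L, L))
n = rng.integers(0, 2, size=(L, L, L))
print(filg_energy(n, sigma, eps, J=1.0, mu=1.0))
```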
The system presents a slow "aging" dynamics after a quench from a small value of $`\mu `$ characteristic of the liquid phase to a large $`\mu `$ corresponding to the glassy phase , which is equivalent to a sudden compression. In the present numerical experiments we always let the system evolve after a quench in $`\mu `$ with $`J`$ and $`T`$ fixed. The origin of time is set at the quench. After the quench the density slowly relaxes up to a critical value near $`\rho _c\simeq 0.675`$ (further details will be given in ). After a waiting time $`t_w`$ we fix the density to the value $`\rho =\rho (t_w)`$ (for technical reasons explained below) and a small random perturbation ($`\mu _i=\pm 1`$) is applied:
$$H^{\prime}(t)=H(t)-\epsilon (t)\sum_{i}\mu _in_i(t).$$
(3)
In all our numerical experiments the field is switched on at time $`t_w`$ and kept fixed for later times, that is $`\epsilon (t)=\epsilon \,\theta (t-t_w)`$. Then we measure the density-density autocorrelation function
$$C(t,t_w)=\frac{1}{N\rho }\sum_{i}\overline{\langle n_i(t)n_i(t_w)\rangle},$$
(4)
where $`\langle \cdots \rangle `$ and $`\overline{\cdots}`$ denote the averages over thermal histories and disorder realizations, respectively. At the same time we measure the associated response function integrated over time and divided by the perturbing field intensity, which defines the off-equilibrium compressibility
$$\kappa (t,t_w)=\frac{1}{\epsilon }\int_{-\mathrm{\infty }}^{t}R(t,s)\,\epsilon (s)\,ds=\int_{t_w}^{t}R(t,s)\,ds,$$
(5)
where, as usual, the response is defined as
$$R(t,t^{\prime})=\frac{1}{N\rho }\sum_{i}\frac{\delta \overline{\langle n_i(t)\rangle}}{\delta \epsilon (t^{\prime})}.$$
(6)
In the large times limit ($`t,t_w\to \mathrm{\infty }`$), $`X(t,t_w)`$ depends on both times only through the correlation $`C(t,t_w)`$. Then, integrating Eq. (1) from $`t_w`$ to $`t`$, we obtain a useful relation linking the correlation and the compressibility in the out of equilibrium regime
$$T\kappa (t,t_w)=\int_{C(t,t_w)}^{1}X(C)\,dC.$$
(7)
This is the key relation used to extract the FDR.
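Explicitly, using Eq. (1) and the fact that $`X`$ depends on its arguments only through $`C`$,

$$T\kappa (t,t_w)=T\int_{t_w}^{t}R(t,s)\,ds=\int_{t_w}^{t}X\big(C(t,s)\big)\,\frac{\partial C(t,s)}{\partial s}\,ds=\int_{C(t,t_w)}^{1}X(C)\,dC,$$

where the last step is the change of variable $`s\to C(t,s)`$ and uses the normalization $`C(t,t)=1`$ (which follows from $`n_i^2=n_i`$).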
In our case the perturbing term in the Hamiltonian, shown in Eq. (3), gives the integrated response the following form
$$\kappa (t,t_w)=\frac{1}{\epsilon N\rho }\left(\sum_{i}\overline{\langle [\mu _in_i(t)]_{\mathrm{av}}\rangle}-\sum_{i}\overline{\langle [\mu _in_i(t_w)]_{\mathrm{av}}\rangle}\right),$$
(8)
where $`[\cdots]_{\mathrm{av}}`$ denotes the average over the realizations of the random $`\mu _i`$. The second term can be ignored because the $`\mu _i`$ are random and completely uncorrelated with the configuration at time $`t_w`$.
Performing a parametric plot of the compressibility (or the integrated response) versus the correlation is a useful way of getting information about the different dynamical regimes present in the model, and in particular about the time-scale structure of the system. In fact, from Eq. (7) it is easy to see that, plotting $`T\kappa (t,t_w)`$ vs. $`C(t,t_w)`$, the FDR is simply obtained as minus the derivative of the curve, i.e.
$$X(C^{\ast})=-\frac{\partial [T\kappa (t,t_w)]}{\partial C(t,t_w)}\bigg|_{C(t,t_w)=C^{\ast}}.$$
(9)
There is already a considerable literature on this kind of analysis in systems with and without quenched disorder. Particularly relevant to the present discussion are the references where a constant FDR was found in model glasses with interactions of the Lennard-Jones type and in a purely kinetic lattice gas. The FILG has the advantage of being a Hamiltonian lattice model with short range interactions and, in this sense, it is more realistic than purely kinetic models and more accessible analytically and computationally than Lennard-Jones systems. Moreover, it is a valid on-lattice model for structural glasses and it may be simple enough for the application of statistical mechanics techniques.
We have simulated the FILG in 3D for linear sizes $`L=30`$ and $`60`$. Fixing the coupling constant $`J=1`$ and the temperature $`T=0.1`$, the system presents a glass transition around $`\mu =0.5`$.
In all our numerical experiments we have prepared the system in an initial state with low density, characteristic of the liquid phase, and then, at time zero, we have performed a sudden quench in $`\mu `$ to a value deep in the glass phase (the data presented here refer to $`\mu =1`$). As explained above, after a time $`t_w`$ a perturbation in the form of a small random chemical potential has been applied, and the density-density autocorrelation, Eq. (4), and the corresponding integrated response, Eq. (8), have been recorded. At time $`t_w`$ the density has been fixed to the value $`\rho =\rho (t_w)`$ for the following reason: the perturbing term in the Hamiltonian \[see Eq. (3)\] favors the occupation of roughly half of the sites (those with $`\mu _i=1`$) and the emptying of the remaining half. When the perturbation is switched on, the first half starts to be more filled than the second one and this increases the integrated response, as it should. At very long times, however, because the density reached at time $`t_w`$ is still a bit below the asymptotic value, the number of particles continues to grow and new particles are added with higher probability on the sites with $`\mu _i=-1`$, which are emptier. The response then becomes negative because of this systematic error. We avoid the negative responses by fixing the density at time $`t_w`$. In the limit $`t_w\to \mathrm{\infty }`$ we recover the right behavior in any case; however, with our choice the extrapolation is safer.
In all our simulations we have verified that the strength of the perturbation, $`\epsilon =0.02`$, is small enough to remain within the linear response regime. We have also checked that the system thermalizes at temperatures $`T\ge 0.3`$ (always with $`\mu =1`$) and satisfies FDT. Further details will be given in .
In Fig. 1 we show the main result of this letter: a parametric plot of the integrated response versus the density autocorrelation for different waiting times. The behavior of the curves is exactly the one predicted by MF theories for a glass former. Two distinct regimes can be clearly recognized. In the FDT or quasi-equilibrium regime, $`t-t_w\ll t_w`$ ($`\beta `$-relaxation), the points lie on the straight line given by
$$T\kappa (t,t_w)=1-C(t,t_w).$$
(10)
This first time regime then corresponds to a FDR $`X(t,t_w)=1`$, independent of $`t`$ and $`t_w`$. In this regime the system is in quasi-equilibrium, with the particles moving inside the cages formed by nearly frozen neighbors, and the temperature measured from particle fluctuations is that of the heat bath. When $`t-t_w\gg t_w`$ the system falls out of equilibrium, entering the aging regime, and the data in Fig. 1 depart from the FDT line. From the figure it is clear that the FDR is still a constant, but now $`0<X(t,t_w)<1`$. The constancy of the FDR in the out of equilibrium regime is one of the central predictions of MF approaches. Here we see that this is still valid for a finite dimensional system.
In Fig. 2 we compare the results for the FDR measured on two large systems, whose sizes are $`N=30^3`$ and $`N=60^3`$. Both curves correspond to $`t_w=10^4`$ and no finite size effects are evident. Fitting the data in the out of equilibrium regime to the straight line
$$T\kappa (t,t_w)=X(t_w)\left[q_{\mathrm{EA}}(t_w)-C(t,t_w)\right]+\left[1-q_{\mathrm{EA}}(t_w)\right],$$
(11)
allows us to compute the $`t_w`$-dependent FDR $`X(t_w)`$ and the Edwards-Anderson order parameter $`q_{\mathrm{EA}}(t_w)`$. In the large-time limit they should converge to the corresponding equilibrium values.
We show the linear fit in Fig. 2 in order to illustrate how well the data are described by Eq. (11). As one can see from Fig. 1, the slope $`X(t_w)`$ changes very little with $`t_w`$ and takes the same value (within errors) for the two largest waiting times. Our fits give $`X=0.64(3)`$ and $`q_{\mathrm{EA}}=0.92(1)`$. Compared with other works, the correlation range we are exploring may seem quite small. It should be kept in mind, however, that we are using non-connected correlation functions, which tend in the large-time limit to $`\rho ^2\simeq 0.44`$. So we are actually spanning half of the allowed range.
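As a hedged illustration of this fitting procedure (synthetic data built around the quoted values, not the actual simulation output), the extraction of $`X`$ and $`q_{\mathrm{EA}}`$ amounts to a straight-line regression on the aging branch of the parametric plot:

```python
# Sketch: extract X(t_w) and q_EA(t_w) from the aging branch via Eq. (11),
#   T*kappa = X*(q_EA - C) + (1 - q_EA) = -X*C + [X*q_EA + 1 - q_EA].
import numpy as np

def fit_fdr(C, Tk):
    a, b = np.polyfit(C, Tk, 1)       # Tk ~ a*C + b
    X = -a                            # FDR = minus the slope, Eq. (9)
    q_EA = (1.0 - b) / (1.0 - X)      # from b = X*q_EA + 1 - q_EA
    return X, q_EA

# synthetic aging-regime data consistent with X=0.64, q_EA=0.92
rng = np.random.default_rng(1)
C = np.linspace(0.55, 0.90, 50)
Tk = 0.64 * (0.92 - C) + (1 - 0.92) + 0.002 * rng.normal(size=C.size)
print(fit_fdr(C, Tk))                 # ~ (0.64, 0.92)
```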
In the inset of Fig. 2 we present the same kind of data ($`t_w=10^5,10^6`$) for the Edwards-Anderson (EA) model, which is expected to have more than two time scales. It is clear that in the EA model the slope changes along the curve and a straight line is not able to fit the whole data set. Moreover, higher-temperature data suggest that the slope still has to decrease (in modulus) for smaller correlations, making the linear fit even poorer.
An important difference between the present model and MF approaches should be evident: activated processes present in finite-dimensional systems should dominate the asymptotic dynamics, and as $`t_w\to \mathrm{\infty }`$ equilibrium dynamics should be restored (a similar behavior can be observed in MF models by considering small systems ). However, the time a macroscopic system needs to reach such an equilibrium (the thermalization time $`t_{\mathrm{eq}}`$) increases very rapidly with the system size, and the use of large sizes (here $`L=30`$ and $`60`$) prevents the system from reaching equilibrium on accessible time scales. In other words, we expect the FDR to depend explicitly on $`t_w`$ and to tend to 1 in the limit $`t_w\to \mathrm{\infty }`$. However, in the range $`1\ll t_w\ll t_{\mathrm{eq}}`$ (where we actually are) the FDR should settle onto some very long plateau. It is remarkable that these finite-time effects are so well described by MF theories.
In Fig. 3 we show the usual integrated response versus correlation plot for different temperatures, with the chemical potential always equal to $`\mu =1`$. We estimated, both from the density measurements and from the FDR, the glass transition to be located around $`T_g\simeq 0.2`$, so all the data refer to the glassy phase. The main result to be noted is the good parallelism between all the curves in the aging regime: they can be fitted with the same value of $`X`$ and different values of $`q_{\mathrm{EA}}`$, which increases with decreasing temperature. As $`T`$ approaches $`T_g`$ from below, FDT is recovered. Also, as $`q_{\mathrm{EA}}(T)`$ decreases, $`X`$ remains constant within the numerical precision, which is in contrast with MF models, where the FDR is usually proportional to the temperature ($`X\propto T`$). The exact behavior near the transition, that is, whether $`X`$ (and $`q_{\mathrm{EA}}`$) are continuous or not , is difficult to establish numerically and will be addressed in a future work.
In summary, we have found that the glassy phase of a realistic model of structural glass presents two well separated time scales, as found in MF models of spin glasses. The out of equilibrium long time dynamics can be characterized by a constant value of the fluctuation-dissipation ratio $`X`$ which is, to a very good approximation, independent of temperature in the glass phase. This last observation does not agree with MF predictions. Being a 3D model, the frustrated lattice gas is an interesting testing ground for a systematic study of activated processes, a main ingredient absent in MF models.
This work was partly supported by Brazilian agencies CNPq and FAPEMIG. JJA acknowledges the Abdus Salam ICTP (Trieste) for support during his stay, where part of this work was done.
hep-ph/0002246
LMU-00-02
# New Formulation of Matter Effects on Neutrino Mixing and $`CP`$ Violation
Zhi-zhong Xing (Electronic address: xing@theorie.physik.uni-muenchen.de)
Sektion Physik, Universität München, Theresienstrasse 37A, 80333 Munich, Germany
## Abstract
Within the framework of three lepton families we have derived an exact and compact formula to describe the matter effect on neutrino mixing and $`CP`$ violation. This model- and parametrization-independent result can be particularly useful for recasting the fundamental lepton flavor mixing matrix from a variety of long-baseline neutrino experiments.
Recent observation of the atmospheric and solar neutrino anomalies by the Super-Kamiokande Collaboration has provided robust evidence for massive neutrinos that mix among different flavors (throughout this work the LSND evidence for neutrino oscillations, which has not been independently confirmed by other experiments, will not be taken into account). It opens a convincing window on new physics beyond the Standard Model, and has important cosmological implications.
Although the non-accelerator neutrino experiments have yielded some impressive constraints on the parameter space of atmospheric and solar neutrino oscillations, a precise determination of the lepton flavor mixing angles and the leptonic $`CP`$-violating phase(s) has to rely on a new generation of accelerator experiments with very long baselines, including the possible neutrino factories. In such long-baseline neutrino experiments the earth-induced matter effects, which are likely to deform the neutrino oscillation behavior in vacuum and even to fake the genuine $`CP`$-violating asymmetries, must be taken into account. Singling out the "true" theory of lepton mass generation and $`CP`$ violation depends crucially upon how accurately the fundamental parameters of lepton flavor mixing can be measured and disentangled from the matter effects. It is therefore desirable to explore the analytical relationship between the genuine flavor mixing matrix and the matter-corrected one beyond the conventional two-flavor framework. Some attempts of this nature have so far been made, but they are subject to specific assumptions, approximations or parametrizations and need to be substantially improved.
In this note we present an exact and compact formula to describe the matter effect on lepton flavor mixing and $`CP`$ violation within the framework of three lepton families. The result is completely independent of the specific models of neutrino masses and the specific parametrizations of neutrino mixing. Therefore it will be particularly useful, in the long run, to recast the fundamental flavor mixing matrix from the precise measurements of neutrino oscillations in a variety of long-baseline neutrino experiments.
First let us define the $`3\times 3`$ lepton flavor mixing matrix in vacuum to be $`V`$. It links the neutrino mass eigenstates ($`\nu _1,\nu _2,\nu _3`$) to the neutrino flavor eigenstates ($`\nu _e,\nu _\mu ,\nu _\tau `$):
$$\left(\begin{array}{c}\nu _e\\ \nu _\mu \\ \nu _\tau \end{array}\right)=\left(\begin{array}{ccc}V_{e1}& V_{e2}& V_{e3}\\ V_{\mu 1}& V_{\mu 2}& V_{\mu 3}\\ V_{\tau 1}& V_{\tau 2}& V_{\tau 3}\end{array}\right)\left(\begin{array}{c}\nu _1\\ \nu _2\\ \nu _3\end{array}\right).$$
(1)
If neutrinos are massive Dirac fermions, $`V`$ can be parametrized in terms of three rotation angles and one $`CP`$-violating phase. If neutrinos are Majorana fermions, however, two additional $`CP`$-violating phases are in general needed to fully parametrize $`V`$. The strength of $`CP`$ violation in neutrino oscillations, no matter whether neutrinos are of the Dirac or Majorana type, depends only upon a universal parameter $`\mathcal{J}`$:
$$\mathrm{Im}\left(V_{\alpha i}V_{\beta j}V_{\alpha j}^{\ast}V_{\beta i}^{\ast}\right)=\mathcal{J}\sum_{\gamma ,k}\epsilon _{\alpha \beta \gamma }\epsilon _{ijk},$$
(2)
where $`(\alpha ,\beta ,\gamma )`$ and $`(i,j,k)`$ run over $`(e,\mu ,\tau )`$ and $`(1,2,3)`$, respectively. In specific models of fermion mass generation $`V`$ can be derived from the mass matrices of charged leptons and neutrinos. To test such theoretical models one has to compare their predictions for $`V`$ with the experimental data on neutrino oscillations. The latter are in most cases affected by matter effects and must be carefully handled.
In matter the effective Hamiltonian for neutrinos can be written as
$$\mathcal{H}_\nu =\frac{1}{2E}\left[V\left(\begin{array}{ccc}m_1^2& 0& 0\\ 0& m_2^2& 0\\ 0& 0& m_3^2\end{array}\right)V^{\dagger}+\left(\begin{array}{ccc}A& 0& 0\\ 0& 0& 0\\ 0& 0& 0\end{array}\right)\right],$$
(3)
where $`m_i`$ (for $`i=1,2,3`$) denote the neutrino masses, $`A=2\sqrt{2}G_\mathrm{F}N_eE`$ describes the charged-current contribution to $`\nu _ee^{-}`$ forward scattering, $`N_e`$ is the background density of electrons, and $`E`$ stands for the neutrino beam energy. The neutral-current contributions, which are universal for $`\nu _e`$, $`\nu _\mu `$ and $`\nu _\tau `$ neutrinos, lead only to an overall unobservable phase and have been neglected. Transforming $`\mathcal{H}_\nu `$ from the flavor eigenbasis to the mass eigenbasis, we obtain the effective mass-squared matrix of neutrinos:
$$\mathrm{\Omega }_\nu =\left(\begin{array}{ccc}m_1^2+A|V_{e1}|^2& AV_{e1}^{\ast}V_{e2}& AV_{e1}^{\ast}V_{e3}\\ AV_{e1}V_{e2}^{\ast}& m_2^2+A|V_{e2}|^2& AV_{e2}^{\ast}V_{e3}\\ AV_{e1}V_{e3}^{\ast}& AV_{e2}V_{e3}^{\ast}& m_3^2+A|V_{e3}|^2\end{array}\right).$$
(4)
This Hermitian matrix can be diagonalized through a unitary transformation: $`U^{\dagger}\mathrm{\Omega }_\nu U=\mathrm{Diag}\{\lambda _1,\lambda _2,\lambda _3\}`$, where the $`\lambda _i`$ denote the effective mass-squared eigenvalues of neutrinos. Explicitly one finds (the expressions for the $`\lambda _i`$ in Ref. have apparent printing errors, while those given in Ref. depend upon a specific parametrization of $`V`$):
$$\lambda _1=m_1^2+\frac{1}{3}x-\frac{1}{3}\sqrt{x^2-3y}\left[z+\sqrt{3\left(1-z^2\right)}\right],$$
$$\lambda _2=m_1^2+\frac{1}{3}x-\frac{1}{3}\sqrt{x^2-3y}\left[z-\sqrt{3\left(1-z^2\right)}\right],$$
$$\lambda _3=m_1^2+\frac{1}{3}x+\frac{2}{3}z\sqrt{x^2-3y},$$
(5)
where $`x`$, $`y`$ and $`z`$ are given by
$$x=\mathrm{\Delta }m_{21}^2+\mathrm{\Delta }m_{31}^2+A,$$
$$y=\mathrm{\Delta }m_{21}^2\mathrm{\Delta }m_{31}^2+A\left[\mathrm{\Delta }m_{21}^2\left(1-|V_{e2}|^2\right)+\mathrm{\Delta }m_{31}^2\left(1-|V_{e3}|^2\right)\right],$$
$$z=\mathrm{cos}\left[\frac{1}{3}\mathrm{arccos}\frac{2x^3-9xy+27A\mathrm{\Delta }m_{21}^2\mathrm{\Delta }m_{31}^2|V_{e1}|^2}{2\left(x^2-3y\right)^{3/2}}\right]$$
(6)
with $`\mathrm{\Delta }m_{21}^2\equiv m_2^2-m_1^2`$ and $`\mathrm{\Delta }m_{31}^2\equiv m_3^2-m_1^2`$. Note that the columns of $`U`$ are the eigenvectors of $`\mathrm{\Omega }_\nu `$ in the vacuum mass eigenbasis. After a lengthy calculation, we arrive at the elements of $`U`$ as follows:
$$U_{ii}=\frac{N_i}{D_i},\qquad U_{ij}=\frac{A}{D_j}\left(\lambda _j-m_k^2\right)V_{ei}^{\ast}V_{ej},$$
(7)
where $`(i,j,k)`$ run over $`(1,2,3)`$ with $`i\ne j\ne k`$, and
$$N_i=\left(\lambda _i-m_j^2\right)\left(\lambda _i-m_k^2\right)-A\left[\left(\lambda _i-m_j^2\right)|V_{ek}|^2+\left(\lambda _i-m_k^2\right)|V_{ej}|^2\right],$$
$$D_i^2=N_i^2+A^2|V_{ei}|^2\left[\left(\lambda _i-m_j^2\right)^2|V_{ek}|^2+\left(\lambda _i-m_k^2\right)^2|V_{ej}|^2\right].$$
(8)
One may check that $`A=0`$ leads to $`\lambda _i=m_i^2`$ and $`D_i^2=N_i^2`$, and then $`U`$ becomes the unit matrix.
It should be noted that the unitary matrix $`U`$ transforms the neutrino mass eigenstates in matter to those in vacuum. Therefore the lepton flavor mixing matrix in matter, denoted by $`V^\mathrm{m}`$, is a product of the lepton flavor mixing matrix in vacuum ($`V`$) and the matter-to-vacuum transformation matrix $`U`$: $`V^\mathrm{m}=VU`$. The elements of $`V^\mathrm{m}`$ turn out to be
$$V_{\alpha i}^\mathrm{m}=\frac{N_i}{D_i}V_{\alpha i}+\frac{A}{D_i}V_{ei}\left[\left(\lambda _i-m_j^2\right)V_{ek}^{\ast}V_{\alpha k}+\left(\lambda _i-m_k^2\right)V_{ej}^{\ast}V_{\alpha j}\right],$$
(9)
where $`\alpha `$ runs over $`(e,\mu ,\tau )`$ and $`(i,j,k)`$ over $`(1,2,3)`$ with $`i\ne j\ne k`$. Obviously $`A=0`$ leads to $`V_{\alpha i}^\mathrm{m}=V_{\alpha i}`$. This exact and compact formula shows clearly how the flavor mixing matrix in vacuum is corrected by the matter effects. Instructive analytical approximations can be made for Eq. (9), once the hierarchy of neutrino masses is experimentally known or theoretically assumed.
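As a numerical cross-check of Eqs. (5)-(9), the following sketch (our own illustration; the variable names and the toy mixing matrix are assumptions, not part of the paper) computes the effective eigenvalues and the matter-corrected mixing matrix, and reproduces the $`A\to 0`$ limit:

```python
# Sketch of Eqs. (5)-(9): effective mass-squared eigenvalues lambda_i and
# matter-corrected mixing matrix V^m = V U, for a 3x3 vacuum matrix V.
import numpy as np

def matter_mixing(V, m2, A):
    """V: 3x3 vacuum mixing matrix; m2: (m1^2, m2^2, m3^2) in eV^2; A in eV^2."""
    d21, d31 = m2[1] - m2[0], m2[2] - m2[0]
    Ve2 = np.abs(V[0])**2                    # |V_e1|^2, |V_e2|^2, |V_e3|^2
    x = d21 + d31 + A
    y = d21*d31 + A*(d21*(1 - Ve2[1]) + d31*(1 - Ve2[2]))
    arg = (2*x**3 - 9*x*y + 27*A*d21*d31*Ve2[0]) / (2*(x**2 - 3*y)**1.5)
    z = np.cos(np.arccos(np.clip(arg, -1.0, 1.0)) / 3)   # Eq. (6)
    r = np.sqrt(x**2 - 3*y)
    lam = m2[0] + np.array([x/3 - r*(z + np.sqrt(3*(1 - z**2)))/3,
                            x/3 - r*(z - np.sqrt(3*(1 - z**2)))/3,
                            x/3 + 2*z*r/3])              # Eq. (5)
    U = np.zeros((3, 3), dtype=complex)
    for i in range(3):
        j, k = [p for p in range(3) if p != i]
        Ni = ((lam[i]-m2[j])*(lam[i]-m2[k])
              - A*((lam[i]-m2[j])*Ve2[k] + (lam[i]-m2[k])*Ve2[j]))   # Eq. (8)
        Di = np.sqrt(Ni**2 + A**2*Ve2[i]*((lam[i]-m2[j])**2*Ve2[k]
                                          + (lam[i]-m2[k])**2*Ve2[j]))
        U[i, i] = Ni / Di                                            # Eq. (7)
        U[j, i] = A/Di * (lam[i]-m2[k]) * np.conj(V[0, j]) * V[0, i]
        U[k, i] = A/Di * (lam[i]-m2[j]) * np.conj(V[0, k]) * V[0, i]
    return lam, V @ U

# toy check with a real 1-2 rotation and the mass splittings quoted below
m2 = np.array([0.0, 5e-5, 3e-3])                         # eV^2
c, s = np.cos(np.radians(35)), np.sin(np.radians(35))
V = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
lam, Vm = matter_mixing(V, m2, A=1e-15)
print(np.allclose(lam, m2), np.allclose(Vm, V))          # True True as A -> 0
```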
The result obtained above is valid for neutrinos interacting with matter. As for antineutrinos, the corresponding formula for the flavor mixing matrix in matter can straightforwardly be obtained from Eq. (9) through the replacements $`V\to V^{\ast}`$ and $`A\to -A`$.
With the help of Eq. (9) we now calculate the universal $`CP`$-violating parameter in matter, $`\mathcal{J}_{\mathrm{m}}`$, defined through
$$\mathrm{Im}\left(V_{\alpha i}^\mathrm{m}V_{\beta j}^\mathrm{m}V_{\alpha j}^{\mathrm{m}\ast}V_{\beta i}^{\mathrm{m}\ast}\right)=\mathcal{J}_{\mathrm{m}}\sum_{\gamma ,k}\epsilon _{\alpha \beta \gamma }\epsilon _{ijk},$$
(10)
where $`(\alpha ,\beta ,\gamma )`$ and $`(i,j,k)`$ run over $`(e,\mu ,\tau )`$ and $`(1,2,3)`$, respectively. The unitarity of $`V^\mathrm{m}`$ allows us to simplify the lengthy calculation and arrive at an instructive relationship between $`\mathcal{J}_{\mathrm{m}}`$ and $`\mathcal{J}`$:
$$\mathcal{J}_{\mathrm{m}}=\mathcal{J}\frac{\mathrm{\Delta }m_{21}^2\mathrm{\Delta }m_{31}^2\mathrm{\Delta }m_{32}^2}{(\lambda _2-\lambda _1)(\lambda _3-\lambda _1)(\lambda _3-\lambda _2)}.$$
(11)
The same result has been obtained by Harrison and Scott using the Jarlskog determinant of lepton mass matrices, which is invariant for neutrinos in vacuum and in matter. Eq. (11) indicates that the matter contamination of $`CP`$- and $`T`$-violating observables is in general unavoidable. However, $`T`$ violation is expected to be less sensitive to matter effects than $`CP`$ violation, since the former is associated only with either neutrinos ($`+A`$) or antineutrinos ($`-A`$) while the latter is related to both of them.
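In code this rescaling is a one-liner; the sketch below (ours) takes the matter eigenvalues as returned e.g. by the `matter_mixing()` sketch above:

```python
# Sketch of Eq. (11): the Jarlskog invariant in matter.
def jarlskog_matter(J_vac, m2, lam):
    """m2: vacuum mass-squared values; lam: matter eigenvalues (same units)."""
    d21, d31, d32 = m2[1]-m2[0], m2[2]-m2[0], m2[2]-m2[1]
    return J_vac * d21 * d31 * d32 / (
        (lam[1]-lam[0]) * (lam[2]-lam[0]) * (lam[2]-lam[1]))
```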
To be more explicit we calculate the conversion probabilities of $`\nu _\alpha `$ (or $`\overline{\nu }_\alpha `$) to $`\nu _\beta ^{\prime}`$ (or $`\overline{\nu }_\beta ^{\prime}`$) neutrinos in matter. We obtain
$$P_\mathrm{m}(\nu _\alpha \to \nu _\beta ^{\prime})=-4\sum_{i<j}\left[\mathrm{Re}\left(V_{\alpha i}^\mathrm{m}V_{\beta j}^\mathrm{m}V_{\alpha j}^{\mathrm{m}\ast}V_{\beta i}^{\mathrm{m}\ast}\right)\mathrm{sin}^2\mathrm{\Delta }_{ij}\right]+8\mathcal{J}_{\mathrm{m}}\prod_{i<j}\mathrm{sin}\mathrm{\Delta }_{ij},$$
$$P_\mathrm{m}(\overline{\nu }_\alpha \to \overline{\nu }_\beta ^{\prime})=-4\sum_{i<j}\left[\mathrm{Re}\left(\tilde{V}_{\alpha i}^\mathrm{m}\tilde{V}_{\beta j}^\mathrm{m}\tilde{V}_{\alpha j}^{\mathrm{m}\ast}\tilde{V}_{\beta i}^{\mathrm{m}\ast}\right)\mathrm{sin}^2\tilde{\mathrm{\Delta }}_{ij}\right]-8\tilde{\mathcal{J}}_{\mathrm{m}}\prod_{i<j}\mathrm{sin}\tilde{\mathrm{\Delta }}_{ij},$$
(12)
where $`(\alpha ,\beta )`$ run over $`(e,\mu )`$, $`(\mu ,\tau )`$ or $`(\tau ,e)`$; $`\tilde{V}_{\alpha i}(A)\equiv V_{\alpha i}(-A)`$, $`\tilde{\mathrm{\Delta }}_{ij}(A)\equiv \mathrm{\Delta }_{ij}(-A)`$, and $`\tilde{\mathcal{J}}_{\mathrm{m}}(A)\equiv \mathcal{J}_{\mathrm{m}}(-A)`$; and $`\mathrm{\Delta }_{ij}\equiv 1.27(\lambda _i-\lambda _j)L/E`$ with $`L`$ the distance between the production and interaction points of $`\nu _\alpha `$ (in units of km) and $`E`$ the neutrino beam energy (in units of GeV). The probabilities of the $`\nu _\beta ^{\prime}\to \nu _\alpha `$ and $`\overline{\nu }_\beta ^{\prime}\to \overline{\nu }_\alpha `$ transitions can be read off from Eq. (12) with the replacements $`\mathcal{J}_{\mathrm{m}}\to -\mathcal{J}_{\mathrm{m}}`$ and $`\tilde{\mathcal{J}}_{\mathrm{m}}\to -\tilde{\mathcal{J}}_{\mathrm{m}}`$, respectively.
One can then define the $`CP`$- and $`T`$-violating asymmetries as
$$\mathcal{A}_{CP}=\frac{P_\mathrm{m}(\nu _\alpha \to \nu _\beta ^{\prime})-P_\mathrm{m}(\overline{\nu }_\alpha \to \overline{\nu }_\beta ^{\prime})}{P_\mathrm{m}(\nu _\alpha \to \nu _\beta ^{\prime})+P_\mathrm{m}(\overline{\nu }_\alpha \to \overline{\nu }_\beta ^{\prime})},$$
$$\mathcal{A}_{T}=\frac{P_\mathrm{m}(\nu _\alpha \to \nu _\beta ^{\prime})-P_\mathrm{m}(\nu _\beta ^{\prime}\to \nu _\alpha )}{P_\mathrm{m}(\nu _\alpha \to \nu _\beta ^{\prime})+P_\mathrm{m}(\nu _\beta ^{\prime}\to \nu _\alpha )}.$$
(13)
Note that $`\mathcal{A}_T=\mathcal{A}_{CP}`$ holds in vacuum (i.e., $`A=0`$), as a consequence of $`CPT`$ invariance. Any discrepancy between these two observables will definitely measure the matter effects in long-baseline neutrino experiments.
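A compact numerical transcription of Eqs. (12)-(13) could look as follows (an illustration under our own conventions: `Vm` and `lam` are the matter quantities for neutrinos, and the antineutrino quantities are obtained by rerunning the same computation with $`V\to V^{\ast}`$, $`A\to -A`$):

```python
# Sketch of Eqs. (12)-(13): oscillation probabilities in matter and the
# CP- and T-violating asymmetries, for flavour indices a != b (0=e,1=mu,2=tau).
import numpy as np
from itertools import combinations

def prob_m(Vm, lam, J_m, a, b, L_km, E_GeV):
    P, prod = 0.0, 1.0
    for i, j in combinations(range(3), 2):              # pairs with i < j
        D = 1.27 * (lam[i] - lam[j]) * L_km / E_GeV     # Delta_ij of the text
        q = Vm[a, i] * Vm[b, j] * np.conj(Vm[a, j]) * np.conj(Vm[b, i])
        P -= 4.0 * q.real * np.sin(D)**2
        prod *= np.sin(D)
    return P + 8.0 * J_m * prod

def asymmetry(P_forward, P_reversed):
    """A_CP if P_reversed is the antineutrino channel; A_T if it is beta->alpha."""
    return (P_forward - P_reversed) / (P_forward + P_reversed)
```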
Finally, let us give a numerical illustration of the matter-induced corrections to the flavor mixing matrix and to $`CP`$ (or $`T`$) violation in vacuum. The elements of $`V^\mathrm{m}`$, except for the Majorana phases, can be completely determined by four rephasing-invariant quantities (e.g., four independent $`|V_{\alpha i}^\mathrm{m}|`$, or three independent $`|V_{\alpha i}^\mathrm{m}|`$ plus $`\mathcal{J}_{\mathrm{m}}`$). As the solar and atmospheric neutrino oscillations in vacuum are essentially associated with the elements in the first row and the third column of $`V`$, it is favorable to choose $`|V_{e1}|`$, $`|V_{e2}|`$, $`|V_{\mu 3}|`$ and $`\mathcal{J}`$ as the four basic parameters. For illustration we take $`|V_{e1}|=0.816`$, $`|V_{e2}|=0.571`$, $`|V_{\mu 3}|=0.640`$, and $`\mathcal{J}=\pm 0.020`$ for neutrinos and antineutrinos (this specific choice corresponds to $`\theta _{12}\simeq 35^{\circ}`$, $`\theta _{23}\simeq 40^{\circ}`$, $`\theta _{13}\simeq 5^{\circ}`$, and $`\delta \simeq \pm 90^{\circ}`$ in the PDG-advocated parametrization of $`V`$). Such a choice is consistent with the CHOOZ experiment, the large-angle MSW solution to the solar neutrino problem, and a nearly maximal mixing in the atmospheric neutrino oscillation. The relevant neutrino mass-squared differences are typically taken to be $`\mathrm{\Delta }m_{21}^2=5\times 10^{-5}\mathrm{eV}^2`$ and $`\mathrm{\Delta }m_{31}^2=3\times 10^{-3}\mathrm{eV}^2`$. To calculate the $`CP`$- and $`T`$-violating asymmetries in the long-baseline neutrino experiments, we assume a constant earth density profile and take $`A=2.28\times 10^{-4}\mathrm{eV}^2E/[\mathrm{GeV}]`$. We also choose the baseline length $`L=732`$ km, corresponding to a neutrino source at Fermilab pointing toward the Soudan mine, or one at CERN pointing toward the Gran Sasso underground laboratory. Using these inputs as well as Eqs. (9)-(12), we first take $`(\alpha ,\beta )=(e,\mu )`$ and compute the asymmetries $`\mathcal{A}_{CP}`$ and $`\mathcal{A}_T`$ as functions of $`E`$ in the range $`2\mathrm{GeV}\le E\le 30\mathrm{GeV}`$. Then we compute the ratios $`|V_{\alpha i}^\mathrm{m}|/|V_{\alpha i}|`$ and $`\mathcal{J}_{\mathrm{m}}/\mathcal{J}`$ as functions of the matter parameter $`A`$, instead of $`E`$, in the range $`10^{-7}\mathrm{eV}^2\le A\le 10^{-2}\mathrm{eV}^2`$. The numerical results are shown in Figs. 1 and 2.
We observe that matter effects can be significant for the elements in the first and the second columns of $`V`$ if $`A\gtrsim 10^{-5}\mathrm{eV}^2`$. In comparison, the magnitudes of $`|V_{e3}|`$, $`|V_{\mu 3}|`$ and $`|V_{\tau 3}|`$ may be drastically enhanced or suppressed only for $`A>10^{-3}\mathrm{eV}^2`$. The neutrinos are relatively more sensitive to the matter effects than the antineutrinos.
The magnitude of $`\mathcal{J}_{\mathrm{m}}`$ decreases when the matter effect becomes significant (e.g., $`A\gtrsim 10^{-4}\mathrm{eV}^2`$). However, this does not imply that the $`CP`$- or $`T`$-violating asymmetries in realistic long-baseline neutrino oscillations would be smaller than their values in vacuum. Large matter effects can significantly modify the frequencies of neutrino oscillations and thus enhance (or suppress) the genuine signals of $`CP`$ or $`T`$ violation. As for the long-baseline neutrino experiment under consideration, the matter-induced effect in the $`T`$-violating asymmetry $`\mathcal{A}_T`$ is negligibly small. The matter effect on the $`CP`$-violating asymmetry in vacuum cannot be neglected, but the former is unlikely to fake the latter completely. We confirm numerically that the relationship $`\mathcal{A}_T=\mathcal{A}_{CP}`$, valid in vacuum, is violated in matter.
If the earth-induced matter effects can be well controlled, it is possible to recast the fundamental flavor mixing matrix $`V`$ from a variety of measurements of neutrino oscillations. Such a goal is expected to be reached at the neutrino factories.
In summary, we have derived an exact and compact formula to show the analytical relationship between the fundamental neutrino mixing matrix and the matter-corrected one within the framework of three lepton families. This model- and parametrization-independent result can be particularly useful for the study of flavor mixing and $`CP`$ violation in the long-baseline neutrino experiments. An extension of the present work, in which the mixing of a sterile neutrino with three active neutrinos can be incorporated, is in progress.
Acknowledgment: The author would like to thank H. Fritzsch for useful discussions.
astro-ph/0002122

# The VizieR database of Astronomical Catalogues
## 1 Introduction
The Centre de Données astronomiques de Strasbourg (CDS) has a very long experience in acquiring, cross-identifying, and distributing astronomical data (Genova et al. CDS (2000)): a collaboration for the exchange of what was then called machine-readable astronomical data started with the NASA-GSFC and the Astronomisches Rechen-Institut around 1970. This collaboration has been maintained over this 30-year period, and collaborations with other institutes for similar exchanges have been developed. The volume of data shared has of course increased, at a rate which has been exploding in recent years.
Compared to the late 60's, when the bulk of the machine-readable data consisted of a set of basic catalogues carefully keypunched, the situation has changed drastically, now that every instrument or detector generates megabytes or gigabytes of daily output. These huge data sets are normally not stored in data centers, but are processed in the observing center where the expertise exists to generate the best high-quality archives and catalogues in a form usable by astronomers who are not familiar with the instrument. The Data Centers' role is essentially to collect such "final" catalogues, or more generally high-quality data, i.e. data which either were published in the refereed scientific literature, or for which at least a paper describing these data and their context was accepted for publication in a refereed scientific journal.
Making efficient usage of the data distributed by the data centers (for instance for the analysis of the statistical properties of some interesting population of stars) often requires combining data coming from several data sets; this operation is far from simple, and this is why the first creation of CDS was SIMBAD, a database resulting from the cross-identification of the major catalogues, later expanded to thousands of catalogues and to the published literature (see Wenger et al. simbad (2000)).
The VizieR system results from a different approach: the astronomical catalogues are kept in their original form, but homogeneous descriptions of all these data sets are provided in order to maximize their usability. In other words, VizieR relies on a homogenization of the catalogue descriptions (also called metadata, i.e. data describing other data) to transform the set of machine-readable astronomical catalogues into a set of machine-understandable data. VizieR actually consists of an interface able to query this set of machine-understandable astronomical catalogues.
## 2 Astronomical Catalogues
Jaschek (cj (1989)) defined a catalogue as a long list of ordered data of a specific kind, collected for a particular purpose. What a long list means has evolved dramatically in the last decade: the new way of processing data actually resulted in a tremendous increase in both the number and the volume of astronomical catalogues. To illustrate the evolution in the domain of catalogued surveys, one can remember that the largest catalogues at the beginning of this century, called the Durchmusterungen (the Bonner, Cordoba and Cape Durchmusterungen), provided only a position and a visual estimate of the brightness for $`1.5\times 10^6`$ stars, and required over 50 years to be completed. Today, a catalogue gathering similar parameters (with an accuracy one order of magnitude better) is well represented by the USNO-A2.0 (Monet (1998)), which contains roughly $`5\times 10^8`$ sources, almost three orders of magnitude more. Even larger catalogues are being built: let us quote the GSC-II (Greene et al., 1998), which should contain all optical sources brighter than the $`18^{th}`$ magnitude, estimated at about $`2\times 10^9`$ objects.
The existence of these new mega-catalogues (which are, in fact, rather giga-catalogues) does not, however, mean that the old catalogues can simply be ignored: virtually any astronomical object can be subject to variability, possibly over periods of several centuries, and the discrepancies between old and newer results therefore have to be analyzed.
Another important source of tabular material consists of tables published in the astronomical literature. These tables are now almost always originally in digital form, and contain highly processed data whose usage can be precious; access to these electronic data is also essential for maintaining large databases like Simbad or NED.
The potential interest of the reusability of these tables led the Editors of the leading astronomical journals to distribute the tabular material in electronic form. The first realisations for A&A started in 1993 (see Ochsenbein & Lequeux aatables (1995)), and Table 1 summarizes the frequency of the availability of electronic tabular data among the publications in some of the main astronomical journals in recent years: not surprisingly, the Supplement Series, which were created essentially for the presentation of observational results, show a high rate of associated electronic data.
## 3 Astronomical Catalogues in the Data Centers
### 3.1 Current Contents
The growth of the collection of astronomical catalogues managed by data centers is illustrated by Table 2: the current set of available catalogues is now around 3,000, with an annual increase of about 15%. Note that the entity designated as a "catalogue" can represent a table of about 100 entries (e.g. the list of galactic globular clusters) as well as a multimillion-source catalogue (e.g. the USNO-A2.0).
In Table 2, the catalogues are grouped according to categories which were defined in the 70's, when the bulk of astronomical studies dealt with the properties of stars in the optical wavelength domain. Rather than regularly defining a new classification scheme following the evolution of the discipline, it was decided, in agreement with the other data centers, to assign designations to electronic tables according to the published paper, and to reserve assignment to the "traditional" categories for somewhat important catalogues or compilations. Simultaneously, it was decided to assign keywords to each catalogue, in order to allow easy retrieval of catalogues with similar contents and purposes.
Note that, while most of the catalogues contain data related to observations of astronomical sources, other types of data are also available, generally grouped in the "Miscellaneous" (VI) category: catalogues of atomic data like wavelength tables or results of the Opacity Project, tabulated results of stellar evolution models, ephemeris elements, etc.
### 3.2 Usage of astronomical catalogues
One of the main goals of the CDS is to promote the usage of reliable astronomical catalogues in the astronomical community. The "Catalogue Service" has been one of the major CDS services since the beginning of the CDS activity, and used to distribute catalogues on magnetic tapes and floppies; the service was implemented on the network as an FTP server in March 1992, immediately generating a large increase in the number of distributed files. The FTP activity is still increasing at a high rate, as can be inferred from Table 3: the current traffic is equivalent to a copy of the whole collection every month.
It is also interesting to quote the catalogues which are most frequently copied from the CDS archives, summarized in Table 4 for the last two years: not surprisingly, surveys, and what Jaschek (cj (1989)), in his section 5.2, designates as General Compilation Catalogues, are among the most popular catalogues. It is also interesting to note the large number of copies of the GSC catalogue (about 300 Mbytes): it was copied by over 500 nodes in the last 12 months, 4 times more than in the previous year; this could indicate that catalogues of this size can nowadays be quite easily managed on small computers.
## 4 Standardized Description of Astronomical Catalogues
Making use of the data contained in a set of rapidly evolving catalogues, as illustrated by Table 2, raises the problem of accessing and understanding accurately the parameters contained in catalogues which are constantly improved. Typical questions to be addressed are: does the catalogue contain colours; if yes, what is their reliability; are they expressed in a well-known standard system; are they taken from other publications or catalogues; how can the associated data file be processed? All these details which describe the data (the metadata) are traditionally presented in the introduction of the printed catalogue, or detailed in one or several published papers presenting and/or analyzing the catalogued data.
Metadata therefore play a fundamental role: first, scientists have to get information about the environment of the data in order to judge the suitability of the data for their project, such as date and/or method of acquisition, related publications, estimation of the internal and external errors, purpose of the data collection, etc.; but a minimal knowledge of the metadata is also required by the data processing system in order to merge or compare data from different origins: for instance, the comparison of data expressed in different units requires a unit-to-unit conversion which can be performed automatically only if the units are specified unambiguously.
This need for a description readable both by a computer and by a scientist led to a standardized way of documenting astronomical catalogues and tables, promoted by CDS from 1993 in the form of a dedicated ReadMe file associated with each catalogue (Ochsenbein readme (1994)). An example of such a file is presented in Fig. 1: it is a plain ascii file, quite easy for a scientist to interpret, and at the same time structured enough to be interpreted by dedicated software. The ReadMe description file starts with a header specifying the basic references (title, authors, references) and contains a few key sections introduced by standard titles like Description: or Byte-by-byte Description of file:. Such a file is relatively easy to produce by someone who knows the catalogue contents. The example of Fig. 1 represents the documentation of a very simple catalogue, made of just two data tables, each with a small set of parameters. The output catalogue of the Hipparcos mission (http://vizier.u-strasbg.fr/cgi-bin/Cat?I/239) is an example of a much more complex catalogue: it is composed of two fundamental large tables (HIP with $`10^5`$ stars and TYC with $`10^6`$ stars) and includes a dozen annex tables, but can still be described by the same kind of simple standardized documentation.
The most important part of the ReadMe file is the Byte-by-byte Description, which details the table structures in terms of formats, units, column naming or labels, existence of data (possibility of unspecified or null values), and brief explanations. Among the conventions, some fundamental parameters are assigned fixed labels, like the sky coordinates (components of right ascension RA... and declination DE... in Fig. 1); a prefix convention, detailed in Table 5, is also used to specify obvious relations between a value, its mean error, its origin, etc.
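To illustrate why such a layout is machine-understandable, here is a toy parser (our own simplification, not the actual CDS software) for one line of a Byte-by-byte Description, assuming the usual `Bytes Format Units Label Explanations` columns; a real reader would also exploit the prefix conventions of Table 5 to relate columns to one another:

```python
# Toy parser for a Byte-by-byte Description line such as
#   "  1-  8  F8.4   deg    RAdeg   Right ascension (J2000)"
import re

BBB = re.compile(r"""\s*(?P<first>\d+)\s*-?\s*(?P<last>\d+)?\s+
                     (?P<format>[AIFE]\d+(?:\.\d+)?)\s+
                     (?P<units>\S+)\s+(?P<label>\S+)\s+
                     (?P<explanation>.*)""", re.X)

def parse_bbb_line(line):
    m = BBB.match(line)
    if m is None:
        return None
    d = m.groupdict()
    d["first"] = int(d["first"])
    d["last"] = int(d["last"]) if d["last"] else d["first"]   # single-byte column
    return d

print(parse_bbb_line("  1-  8  F8.4   deg    RAdeg   Right ascension (J2000)"))
```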
This standardized way of presenting the metadata proved to be extremely useful, especially for data checking and format conversion: many errors were detected in old catalogues simply because a general checking mechanism became available. Tools have been developed for generating Fortran source code which loads the data into memory, or for converting the data into the FITS format, which is presently the most "universal" data format understood by data processing systems in astronomy, but unfortunately a data format which is not convenient outside this context (see e.g. Grøsbøl et al. fits (1988)).
In the six years since this standardized way of describing astronomical catalogues was defined, over 2,600 astronomical catalogues have been described by means of this ReadMe file, and the same conventions have been adopted by the other astronomical data centers and journals for the electronic publication of tables. The present (October 1999) figures for the number of standardized catalogues are summarized in the rightmost column of Table 2; earlier figures were presented in a previous paper (Ochsenbein russie (1997)).
It is expected that, in the future, authors will supply the documentation of their data in this simple form; this is already the case for a very significant fraction of the tables mailed to the CDS, and in order to help the authors, template files as well as a few tips on how to create the ReadMe file are accessible on the Web (http://vizier.u-strasbg.fr/doc/submit.htx). The ReadMe files and the data files are then checked by a specialist, who contacts the authors if errors are detected or when changes are necessary to increase the clarity or homogeneity of the description.
## 5 VizieR Organisation
VizieR (http://vizier.u-strasbg.fr/) is a natural extension of the usage of the metadata stored in the ReadMe files, implementing these metadata as tables managed by a relational database management system (RDBMS).
The first prototype of VizieR was the result of a fruitful collaboration between ESIS (European Space Information System, a project managed by ESRIN, a department of the European Space Agency) and the CDS; VizieR has been under the full responsibility of CDS since January 1996. It was presented at the 1996 AAS meeting (Ochsenbein et al., aas96 (1996)), and became fully operational in February 1996. This prototype was significantly upgraded in May 1997, just in time for the implementation of the final catalogues of the Hipparcos mission. The number of catalogues accessible within the VizieR system has since grown to 2,374 (Table 6).
The core of VizieR consists of the organisation of the meta dictionary, i.e. the set of metadata extracted from the standardized ReadMe descriptions discussed in section 4. Two main problems had to be solved, however: the access to very large catalogues (larger than a few million rows), for which RDBMS proved to be inefficient and which therefore require dedicated search methods; and the generation of links allowing one to connect two related pieces of information, like other tables in the same catalogue, or spectra and images from remote services.
### 5.1 META dictionary
The meta-dictionary consists of 3 main tables detailed below, and about 20 annex tables, all stored in a relational database:
1. METAcat describes the catalogues, a catalogue being defined as a set of related tables published together: typically a catalogue gathers a table of observations, a table of mean values, a table of references, a list of related images, etc.; METAcat details the authors, reference, title, and explanations of each stored catalogue. This table currently contains 2,374 rows (Table 6).
2. METAtab describes each data table stored in VizieR: table caption, number of rows, how to access the actual data, the equinox and epoch of the coordinates, etc. This table currently contains 6,071 rows (Table 6), i.e. the average catalogue is made of 2.6 tables.
3. METAcol details each of the 77,260 columns (Table 6) currently stored in VizieR: column name or label, the textual explanation of the column contents, datatypes (numeric or character) and storage mode within the database (integer or floating-point, maximal length of strings, etc.), units in which the data are stored in the database and units in which the data are presented to the user, edition formats, and a few flags used for searches (e.g. column used as primary key) or data presentation (e.g. column to be displayed in the default presentation of the result). The average table is therefore made of $`12.7`$ columns (in fact $`11.7`$, because each table contains an identification column in addition to the original set of columns).
Note that, since the set of META tables is itself described in VizieR, the meta-dictionary can be viewed and queried like any of the catalogues stored in VizieR, making it easy to locate e.g. tables with a large number of rows, or catalogues having the words mass loss in the description of one of their columns.
The annex tables of the meta-dictionary contain some definitions, like the list of known data-types (METAtypes) and keywords (METAkwdef), and other details like the acronyms used to designate well-known catalogues such as HIP or GSC (METAcro), the keywords associated with each catalogue (METAkwd), detailed notes and remarks (METAnot), and the list of those objects which are individually quoted in the ReadMe files (METAobj). A special indexing scheme (METAcell), explained briefly in section 5.5, was built to locate the existing objects in all catalogues in a single run. Details on how to generate links are stored in the METAmor table.
### 5.2 Links in VizieR
The interest of having a link, or an anchor in HTML terms, becomes obvious when a table contains a column representing a reference to an original paper, as for example in Véron and Véron's compilation of quasars (http://vizier.u-strasbg.fr/cgi-bin/VizieR?-source=7207/table1): once the rules to transform the contents of this column into an actual link to e.g. the ADS bibliographic service (http://adswww.harvard.edu/) are set up, details about the authors and references, or even the full article, can be displayed on the screen by a simple mouse click. Another frequent example is the possible expansion of some footnote symbol into the lengthy note detailed in some other table.
The links existing in VizieR may be classified in the following categories:
1. hard-wired links which are part of the standard description presented in section 4, like the existence of notes (stored in the METAnot table), or the r_ prefix (Table 5) which indicates a reference that may be detailed in a table of references;
2. internal links which connect tables of the same catalogue: such links may be expressed in terms of keys in the RDBMS terminology (definitions of columns as primary and/or foreign keys), by the existence of note flags, or by more complex relations stored in the METAmor table. Another type of internal link allows one to retrieve the spectra or images which are part of the catalogue, but which are stored as separate files.
3. VizieR links which refer to another catalogue within the VizieR system;
4. external links which refer to any other service, like bibliographic services, external databases or archives, image servers, etc.
While links of the first 3 categories can easily be maintained, the maintenance of the external links depends on modifications which are completely outside VizieR's control. These external links are maintained by the GLU system (Fernique et al., GLU (1998)), a system which (i) allows one to use symbolic names instead of hard-coded URLs, and (ii) translates these symbolic names with the help of a distributed dictionary in which each service provider keeps up to date the description of its own services, in terms of URL addresses and actual presentation of the query parameters.
### 5.3 VizieR feeding pipeline
On average, about one new catalogue (or 2.6 tables) is added daily into VizieR. Such figures imposed the following constraints on the addition of new tables into VizieR:
1. no human intervention is required to populate the database (the meta dictionary and the data tables): all metadata related to a catalogue can be found or computed on the basis of the documentation and configuration files which are read by the VizieR feeding pipeline;
2. we rely as much as possible on the standardized description of the catalogues presented in section 4: this means that the configuration file associated with each catalogue should be minimized, i.e. as few ad-hoc details as possible should be needed besides the ReadMe files.
The actual delay required to ingest a new catalogue into the system currently ranges from a few minutes to several days for the preparation of the ReadMe description file, depending on the initial presentation supplied by the authors and on the catalogue complexity (the delay can occasionally be longer when problems are encountered, requiring interactions with the authors); and from a few seconds up to an hour for the actual ingestion into VizieR from the standardized files.
### 5.4 Access to Very Large Catalogues
The second challenge is to provide fast access for querying the mega-catalogues introduced in section 2, a denomination somewhat arbitrarily assigned to catalogues having $`10^7`$ or more rows. Such large catalogues are essentially surveys used as reference catalogues, typically to find all objects detected in some region of the sky under some conditions of wavelength, time, object structure, etc. The set of such catalogues currently implemented is summarized in Table 7, but this set will grow rapidly in the near future with the continuation of the infra-red surveys and the emergence of surveys presently in preparation (SLOAN, GSC-II, NVSS, ...).
The limit of $`10^7`$ rows corresponds to a limit in the performance and time required to ingest tables into the relational database; the largest table, in terms of number of rows, currently stored in VizieR is the AC2000 catalogue (Urban et al., ac2000 (1997)), with $`4.62\times 10^6`$ rows.
The method used to access these very large catalogues consists of grouping the objects within carefully designed groups based essentially on the location in the sky, followed by a lossless compression obtained by replacing the actual values by offsets within the group; the actual results and performances are described in another paper (Derriere & Ochsenbein, adass99-poster (1999)). Each very large catalogue presently has its own organisation, which depends on its actual column contents and therefore requires a dedicated access program. VizieR stores in its META dictionary (see section 5.1) which program has to be called to actually access the catalogue, and the description of the columns as they are returned by the dedicated program.
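The offset idea can be illustrated with a few lines of code (a schematic of the principle only; the actual CDS storage format is more elaborate):

```python
# Principle of the lossless compression: within one sky box, store a single
# reference position and small non-negative integer offsets for the members.
import numpy as np

def compress_group(ra_mas, dec_mas):
    """ra_mas, dec_mas: integer source positions (milli-arcsec) in one box."""
    ra0, de0 = int(ra_mas.min()), int(dec_mas.min())
    dra = (ra_mas - ra0).astype(np.uint32)   # offsets need far fewer bits
    dde = (dec_mas - de0).astype(np.uint32)  # than full-range coordinates
    return ra0, de0, dra, dde                # exactly invertible: lossless
```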
### 5.5 Accessing all catalogues from a position in the sky
In order to allow a fast answer to the question "find all objects from all available catalogues around some target position", an indexing mechanism is necessary. The total number of object positions currently stored in VizieR, excluding the mega-catalogues, is about $`32\times 10^6`$ (Table 6); a classical indexation, in terms of relational DBMS, shows very poor performance, especially in the updating phase: the addition of a new catalogue can require up to 4.6 million modifications or additions, which becomes dramatically slow.
The method adopted for this indexation consists first of mapping the celestial coordinates into a set of boxes using a hierarchical spherical-cubic projection similar to the technique used by Simbad (Wenger et al., simbad (2000)), but down to level 8, which corresponds to a granularity of about $`20^{\prime}`$, or $`6\times 4^8`$ ($`4\times 10^5`$) individual boxes. The list of catalogues which exhibit sources in the region of the sky covered by the box is then stored for each of the defined boxes, allowing a fast answer to the question: "what is the list of catalogues which have a fair chance of having at least one source close to a specified target?" The final step consists of looking successively into the matching catalogues.
The method has the advantage of being hierarchical: 6 boxes are defined at level 0, 24 at level 1, …, and going down one step in the hierarchy consists in dividing each box into four parts. The indexing mechanism recursively groups contiguous non-empty boxes into a single box at the upper level, meaning that a dense survey covering the whole sky is represented by just the 6 boxes of level 0 in this index. In practice, the 1247 catalogues with positions are summarized in this index by $`3.9\times 10^6`$ elements (to be compared to the $`31.6\times 10^6`$ sources in Table 6), i.e. an average of about 3,000 elements per catalogue.
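The following Python sketch illustrates the idea of such a hierarchical spherical-cubic box index. It is only an illustration: the projection details, face numbering and bit layout are assumptions made for this example and are not the actual CDS implementation.

```python
import math

def cube_face_xy(lon_deg, lat_deg):
    # Project a position on the celestial sphere onto one of the 6
    # faces of a surrounding cube (simple gnomonic projection);
    # returns (face, x, y) with x, y in [0, 1).
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    v = (math.cos(lat) * math.cos(lon),
         math.cos(lat) * math.sin(lon),
         math.sin(lat))
    axis = max(range(3), key=lambda i: abs(v[i]))   # dominant axis
    face = 2 * axis + (0 if v[axis] >= 0 else 1)    # face number 0..5
    a, b = (v[i] / abs(v[axis]) for i in range(3) if i != axis)
    eps = 1e-12
    return face, min(0.5 * (a + 1.0), 1.0 - eps), min(0.5 * (b + 1.0), 1.0 - eps)

def box_number(lon_deg, lat_deg, level=8):
    # Each level splits a box into 4 quadrants, so level 8 yields
    # 6 * 4**8 (about 4e5) boxes of roughly 20 arcmin.
    face, x, y = cube_face_xy(lon_deg, lat_deg)
    n = face
    for _ in range(level):
        qx, qy = int(x * 2), int(y * 2)             # 2-bit quadrant
        n = 4 * n + 2 * qy + qx
        x, y = 2 * x - qx, 2 * y - qy
    return n

print(box_number(187.25, 2.05))   # one box among the 6 * 4**8 at level 8
```

Because the index is built by appending two bits per level, truncating the number recovers the enclosing box at any coarser level, which is exactly the grouping property exploited above.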
### 5.6 Current Contents
The status of the VizieR contents is presented in Table 6, where we distinguish those tables representing data about actual astronomical objects, which can be accessed by their position in the sky. In terms of number of available records, those containing celestial positions represent over 78% even when the mega-catalogues are omitted, even though only 32% of the tables are concerned. In other words, the average table dealing with actual astronomical objects contains around 16,000 rows; this is a theoretical mean, as can be seen from the histogram of the table populations in VizieR presented in Fig. 2, which shows a modal value around tables of 100 objects.
## 6 VizieR Interfaces
Several interfaces are currently available for accessing the data stored in VizieR: directly from a Web browser, by constructing the query according to the ASU conventions, or through the developing XML interfaces.
### 6.1 Access from a Browser
From a WWW browser, a "standard query" in VizieR consists of a few steps:
1. Locate the interesting catalogues in the VizieR Service<sup>6</sup><sup>6</sup>6http://vizier.u-strasbg.fr/cgi-bin/VizieR. This can be done in various ways illustrated in Fig. 3: from well-known catalogue acronyms like HIP or GSC, from a choice in the set of predefined keywords, from authors' names, or from a self-organizing (or Kohonen) map constructed on the basis of the keywords attached to the catalogues (Poinçot et al. kohonen (1998)). New possibilities for locating catalogues of interest for the user are currently under development.
2. Once a catalogue table (or a small set of catalogue tables) is located, for instance the Hipparcos Catalogue<sup>7</sup><sup>7</sup>7http://vizier.u-strasbg.fr/cgi-bin/VizieR?-source=I/239/hip\_main resulting from the Hipparcos mission, constraints about what to search and how to present the results can be specified, as:
* constraints based on the celestial coordinates, i.e. location in the neighbourhood of a target specified by its actual coordinates in the sky, or by one of its names as known in Simbad (see Wenger et al., simbad (2000))
* any other constraint on any of the columns of the table(s), like a minimal flux value, or the actual existence of some parameter (non-NULL value)
* which columns are to be displayed, and in which order the matching rows are to be presented.
By pushing the appropriate buttons, it is for instance easy to get the list of Hipparcos stars closer than 5 parsecs to the Sun, ordered by their increasing distance<sup>8</sup><sup>8</sup>8http://vizier.u-strasbg.fr/cgi-bin/VizieR?-source=I/239/hip\_main&-sort=-Plx&Plx=%3e=200.
3. Obtaining full details about one row is achieved by a mouse click in the first column of the result: for instance, the first row of the search for nearby stars described above leads to the VizieR Detailed Page with Hipparcos parameters and their explanations concerning Proxima Centauri<sup>9</sup><sup>9</sup>9http://vizier.u-strasbg.fr/cgi-bin/VizieR-5?-source=I/239/hip\_main&HIP=70890.
4. Finally, there may be correlated data, like notes or remarks, references, etc. In our example, Proxima Centauri is related to the $`\alpha `$ Cen multiple star system, whose components can be viewed from the link to the double and multiple stars (CCDM)<sup>10</sup><sup>10</sup>10http://vizier.u-strasbg.fr/cgi-bin/VizieR-6?-source=1239&-corr=PK=CCDM&CCDM==14396-6050 that appears in the detailed page.
The quantitative monthly usage of VizieR is presently (October 1999) about 40,000 external requests from 2700 different nodes; mirror copies were installed recently in the US<sup>11</sup><sup>11</sup>11http://adc.gsfc.nasa.gov/vizier/ and in Japan<sup>12</sup><sup>12</sup>12http://z13.mtk.nao.ac.jp/vizier/ in order to overcome transcontinental network congestion.
### 6.2 The ASU protocol
The uniform access to all catalogues is based on the so-called ASU<sup>13</sup><sup>13</sup>13http://vizier.u-strasbg.fr/doc/asu.html (Astronomical Standardized URL) protocol resulting from discussions between several institutes (CDS, ESO, CADC, Vilspa, OAT). The basic concept of ASU is a standardized way of specifying queries to remote catalogues in terms of HTTP requests: the target catalogue is specified by a -source=catalog\_designation parameter, the target sky position by a -c=name\_or\_position,rm=radius\_in\_arcmin parameter, the output format by -mime=type, and general constraints on parameters by column\_name=constraint. It should be noticed that the representation of a target by the name of an astronomical object (typically a star or galaxy name, e.g. 3C 273) implies the usage of a name server converting a target name into a position in the sky, which is typically achieved by a call to Simbad.
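As a concrete illustration, the query URLs quoted above can be assembled programmatically. The short Python sketch below builds such ASU-style URLs; the helper function and its defaults are hypothetical conveniences introduced here, and the exact grammar accepted by the server should be checked against the ASU document.

```python
from urllib.parse import urlencode, quote

def asu_url(source, target=None, radius_arcmin=2.0, **constraints):
    # Build an ASU-style HTTP query against VizieR using the
    # conventions described above: -source for the catalogue,
    # -c for a target position or name, column=constraint pairs.
    base = "http://vizier.u-strasbg.fr/cgi-bin/VizieR"
    params = [("-source", source)]
    if target is not None:
        params.append(("-c", "%s,rm=%g" % (target, radius_arcmin)))
    params.extend(constraints.items())
    # keep '/' and '=' readable, percent-encode the rest
    return base + "?" + urlencode(params, safe="/=", quote_via=quote)

# Hipparcos stars with parallax >= 200 mas, sorted by parallax,
# equivalent to the example URL quoted in section 6.1:
print(asu_url("I/239/hip_main", Plx=">=200", **{"-sort": "-Plx"}))
```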
### 6.3 The XML Interface
The output of a query to VizieR as presented in section 6.1 can hardly be used by an independent application for further data processing, such as the Aladin<sup>14</sup><sup>14</sup>14http://aladin.u-strasbg.fr/ visualisation tool (Bonnarel et al., Aladin (2000)), which makes it possible to superimpose the catalogued sources on top of an actual image of the sky: the application requires an accurate interpretation of the catalogue output in terms of celestial positions in order to find the exact location of each source. This means that Aladin has to figure out not only which columns represent the celestial coordinates, but also the accurate definitions of the system used to express the coordinates, their accuracy, etc.; in other words, the metadata about the celestial coordinates.
XML (eXtensible Markup Language) is an emerging standard which makes it possible to embed markup "tags" within a document; the key advantage of this language is that the same document can either be parsed by simple-minded programs (XML uses hierarchical structuring), or be displayed in the new generation of browsers (via an XSL style sheet which maps the markup "tags" into typographical specifications). This language presents other potential interests, especially regarding interoperability issues, facilitated by the emergence of generic tools able to process XML documents.
The XML layout of astronomical tables was discussed extensively with interested collaborators, and the agreed definitions were presented at a recent ADASS meeting (Ochsenbein et al. adass99 (1999)). The output of VizieR is readily available in this format<sup>15</sup><sup>15</sup>15http://vizier.u-strasbg.fr/cgi-bin/asu-xml, currently used by the Aladin image applet; it is hoped that it will facilitate the usage of the astronomical data in new contexts.
### 6.4 Current Developments
With the large set of homogenized catalogues, VizieR plays a central role in a data-mining project currently in development as a collaboration between ESO and CDS, in two main directions: (i) make use of the VizieR large set of described columns (over 70,000 currently) to build new methods for locating the catalogues best suited to a particular research topic; and (ii) develop automated cross-correlation tools which can take into account the largest possible set of meaningful parameters (Ortiz et al., portiz (1999)).
## 7 Conclusions
VizieR is an illustration of the benefits resulting from a homogeneous documentation of the existing astronomical catalogues, which facilitates the transformation of a set of heterogeneous data into a fully interactive database, furthermore able to interact with remote services. The interoperability issues between databases, in astronomy and probably in connected disciplines, will most likely be among the key developments necessary to allow scientists to make use of the existing high-quality data without the prerequisite of being familiar with the data.
###### Acknowledgements.
The long-term exchanges of data have been fundamental for these developments; more specifically, we wish to thank Jaylee Mead, Nancy G. Roman, Wayne H. Warren and Gail Schneider at NASA/ADC for decades of collaborative work, and the present director Cynthia Y. Cheung; and Olga Dluzhnevskaya at INASAN, the Russian data center. The support of INSU-CNRS and CNES is acknowledged, as well as the contribution of ESA-ESIS for the initial developments of VizieR, and more specifically Salim Ansari and Isabelle Bourekeb. The development of VizieR also resulted from fruitful discussions with Françoise Genova, Michel Crézé and Daniel Egret; the enthusiasm of James Lequeux and his involvement in the emergence of electronic tables in the A&A publication had a large impact on the accessibility of the astronomical data. We are also grateful to those who contributed to the more tedious, although critical, part of data standardisation: Simona Mei, Joseph Florsch and Patricio Ortiz at CDS; Gail Schneider and collaborators at NASA/ADC; Koichi Nakajima at ADAC/Japan; Veta S. Avedisova and collaborators at INASAN; and we would also like to thank the authors who participated in the elaboration of the documentation about their data and patiently answered all our questions.
# Scaling analysis of the magnetic-field-tuned quantum transition in superconducting amorphous In–O films
## Abstract
We have studied the magnetic-field-tuned superconductor-insulator quantum transition (SIT) in amorphous InโO films with different oxygen content and, hence, different electron density. While for states of the film near the zero-field SIT the two-dimensional scaling behaviour is confirmed, for deeper states in the superconducting phase the SIT scenario changes: in addition to the scaling function that describes the conductivity of fluctuation-induced Cooper pairs, there emerges a temperature-dependent contribution to the film resistance. This contribution can originate from the conductivity of normal electrons.
The scaling analysis is an important experimental tool for studying quantum phase transitions. For two-dimensional (2D) disordered superconductors, alongside the zero-field superconductor-insulator transition (SIT) driven by a change of disorder in the film, there exists a SIT that is induced by a normal magnetic field. A scenario of the field-induced SIT was proposed in Ref. : at zero temperature the normal magnetic field alters the state of a disordered film from superconducting at low fields, through a metallic one at the critical field $`B=B_c`$ with the universal sheet resistance $`R_c`$ close to $`h/4e^2\approx 6.4`$ k$`\mathrm{\Omega }`$, to an insulating state at fields $`B>B_c`$. The SIT was supposed to be continuous, with the correlation length $`\xi `$ of quantum fluctuations diverging as $`\xi \propto (B-B_c)^{-\nu }`$, where the critical index $`\nu >1`$. At a finite temperature the size of quantum fluctuations is restricted by the dephasing length $`L_\varphi \propto T^{-1/z}`$ with the dynamical critical index $`z`$, which determines the characteristic energy $`U\propto \xi ^{-z}`$ and is expected to be equal to $`z=1`$ for the SIT.
The ratio of these two length parameters defines the scaling variable $`u`$ so that near the transition point ($`T=0,B_c`$) all data $`R(T,B)`$ as a function of $`u`$ should fall on a universal curve
$$R(T,B)\simeq R_c\,r(u),\qquad u=(B-B_c)/T^{1/z\nu }.$$
(1)
Although small in the scaling region, temperature-dependent corrections to the critical resistance $`R_c`$, with a leading quadratic term, are expected .
The above theoretical description is based on the concept of electron pair localization, which has been supported by a recent publication . In that paper it is shown that for 2D superconducting films with sufficiently strong disorder, the region of fluctuation superconductivity, where the localized electron pairs (also called bosons or cooperons ) occur, should extend down to zero temperature. In this region the unpaired electrons are supposed to be localized because of the disorder in the film.
So far, a theory of the field-driven 3D quantum SIT has not been created. The idea of considering the quantum SIT for 3D disordered systems in zero magnetic field in terms of charged boson localization was at first not accepted, because the fluctuation superconductivity region was regarded as small. In fact, as was shown later in Ref. , the fluctuation region grows as the edge of single-electron localization is approached. This gives an opportunity to apply the scaling relation deduced for 3D boson localization also to the description of the field-induced SIT:
$$R(T,u)\propto T^{-1/z}\,\tilde{r}(u),$$
(2)
where $`\tilde{r}(u)`$ is a universal function and the scaling variable $`u`$ is assumed to have the same form as defined by Eq. (1).
From Eqs. (1) and (2) it follows that in the vicinity of $`B_c`$ the isotherms $`R(B)`$ are straight lines with slopes
$$\frac{\partial R}{\partial B}\propto T^{-(d-2+1/\nu )/z},$$
(3)
where $`d`$ is the system dimensionality. Because the behaviours of the resistance in the relations (1) and (2) are very different, the problem of the film dimensionality is of major importance.
Data obtained in experimental studies on $`a`$-In–O , $`a`$-Mo–Ge , and $`a`$-Mo–Si followed the 2D scaling relation (1), except for the universality of the $`R_c`$ value.
This was regarded as evidence for the existence of the SIT. The failure to satisfy the scaling relations in ultrathin Bi films was interpreted as an indication of the absence of the SIT and the observation of a crossover between different flux-flow regimes. These studies did not provide arguments backing boson localization. Such arguments first appeared with the interpretation of the resistance drop at high fields observed in $`a`$-In–O films .
Here, we perform a detailed study of the scaling relations near the field-induced SIT for different states of an $`a`$-In–O film. We find that the 2D scaling relation (1) holds for film states near the zero-field SIT but progressively fails as one moves away from the zero-field SIT. This failure is manifested by the appearance of an extra temperature-dependent term in the film resistance.
The experiments were performed on 200 Å thick amorphous In–O films evaporated by e-gun from a high-purity $`In_2O_3`$ target onto a glass substrate . This material proved to be very useful for investigations of the transport properties near the SIT . Oxygen deficiency compared to the fully stoichiometric insulating compound In<sub>2</sub>O<sub>3</sub> causes the film conductivity. By changing the oxygen content one can cover the range from a superconducting material to an insulator with activated conductance . The procedures to change the film state reversibly are described in detail in Ref. . To reinforce the superconducting properties of our films we used heating in vacuum at a temperature in the interval 70–110 °C until the sample resistance saturated. To shift the film state in the opposite direction we exposed the film to air at room temperature. As the film remains amorphous during these manipulations, it is natural to assume that the treatment used results mainly in a change of the total carrier concentration $`n`$ and that there is a critical concentration $`n_c`$ corresponding to the zero-field SIT.
The low-temperature measurements were carried out with a four-terminal lock-in technique at a frequency of 10 Hz using two experimental setups: a He<sup>3</sup> cryostat down to 0.35 K, and an Oxford TLM-400 dilution refrigerator in the temperature interval 1.2 K–30 mK. The ac current was equal to 1 nA and corresponded to the linear regime of response. The aspect ratio of the samples was close to one.
We investigated three different homogeneous states of the same amorphous In–O film . We characterize the sample state by its room temperature resistance $`R_r`$. Assuming that the disorder for all states is approximately the same, we have for the carrier density $`n\propto 1/R_r`$, i.e., the smaller $`R_r`$, the deeper the state in the superconducting phase and, hence, the larger the value of $`B_c`$.
The parameters of the investigated states are listed in Table I. State 1 is the closest to the zero-field SIT and state 3 is the deepest in the superconducting phase.
Sets of the isomagnetic curves $`R(T)`$ for all studied states are depicted in Fig. 1. For each set the curves can be divided roughly into two groups by the sign of the second derivative: the positive (negative) sign corresponds to insulating (superconducting) behaviour. Henceforth, the boundary isomagnetic curve $`R_c(T)`$ between superconductor and insulator, which corresponds to the boundary metallic state at $`T=0`$, will be referred to as the separatrix. While for state 1 it is easy to identify the horizontal separatrix in accordance with Eq. (1), for states 2 and 3 the fan and the separatrix are "tilted", i.e., each of the curves in the lower part of the fan has a maximum at a temperature $`T_{\mathrm{max}}`$ which shifts with $`B`$. To determine the separatrix $`R_c(T)`$ one has to extrapolate the maximum position to $`T=0`$, for which one would like to know the extrapolation law, as the accessible temperature range is restricted.
The absence of a horizontal separatrix for states 2 and 3 can also be established from the behaviour of isotherms $`R(B)`$ (Fig. 2). As seen from the figure, the isotherms of state 1 cross at the same point ($`B_c,R_c`$) whereas those of state 3 form an envelope.
To determine $`B_c`$ and $`R_c`$ for states 2 and 3 we use the simplest linear extrapolation to $`T=0`$ of the functions $`R(T_{\mathrm{max}})`$ and $`B(T_{\mathrm{max}})`$, see Fig. 3. The open symbols correspond to the maximum positions on the isomagnetic curves (Fig. 1) and the filled symbols represent the data obtained from the intersections of consecutive isotherms (Fig. 2): if two consecutive isotherms at close temperatures $`T_1`$ and $`T_2`$ intersect at a point ($`B_i,R_i`$), the isomagnetic curve at the field $`B_i`$ reaches its maximum $`R_i`$ at $`T_{\mathrm{max}}\approx (T_1+T_2)/2`$.
As seen from Fig. 3, the dependence $`B(T_{\mathrm{max}})`$ is weak, and so we believe that the linear extrapolation is adequate to extract $`B_c`$. In contrast, the accuracy of the determination of $`R_c`$ is poor.
The derivative $`\partial R/\partial B`$ near $`B_c`$ as a function of temperature is shown in Fig. 4. The exponent turns out to be the same within experimental uncertainty for the film states 1 and 3 and is in agreement with the results of Refs. , where the authors argued for the observation of the field-induced 2D SIT for states close to the zero-field SIT. This fact favours the 2D SIT scenario also for deeper film states in the superconducting phase.
Knowing $`B_c`$ and the scaling exponent, we can replot the experimental data as a function of the scaling variable $`u`$ (Fig. 5). As seen from Figs. 5a and b, for state 1 the data collapse onto a single curve, whereas for state 3 we obtain a set of similar curves shifted along the vertical axis. Formally subtracting the linear temperature term $`R_c\alpha T`$ (where $`\alpha `$ is a factor) from $`R(T,B)`$ does reveal the 2D scaling behaviour for state 3 (Fig. 5c). We note that the procedure of dividing the experimental data in Fig. 5b by $`R_c(T)`$, which corresponds to formula (2) for 3D scaling, does not lead to success.
Thus, we find that the 2D scaling holds for states near the zero-field SIT, while the data for deeper states in the superconducting phase are best described by the relation (1) with an additive temperature-dependent correction $`f(T)`$:
$$R(T,B)\simeq R_c[r(u)+f(T)].$$
(4)
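The collapse procedure described above is straightforward to reproduce. The following Python sketch illustrates it on synthetic isotherms generated from Eq. (4); the numerical values of the exponent, of $`\alpha `$ and the shape chosen for $`r(u)`$ are illustrative assumptions, not parameters fitted to our data.

```python
import numpy as np

def scaling_collapse(B, T, R, Bc, one_over_znu, alpha=0.0, Rc=1.0):
    # Scaling variable u = (B - Bc)/T**(1/z nu), and the resistance
    # with the empirical linear term Rc*alpha*T subtracted (Eq. (4));
    # alpha = 0 recovers the plain 2D scaling form of Eq. (1).
    u = (B - Bc) / T ** one_over_znu
    return u, R - Rc * alpha * T

# Synthetic isotherms (illustration only, not measured data): after
# subtraction of the linear term they collapse onto a single curve.
Bc, one_over_znu, alpha, Rc = 7.2, 0.8, 0.3, 6.4
for T in (0.05, 0.1, 0.3, 0.6):
    B = np.linspace(Bc - 1.0, Bc + 1.0, 11)
    R = Rc * (1.0 + np.tanh((B - Bc) / T ** one_over_znu) + alpha * T)
    u, Rs = scaling_collapse(B, T, R, Bc, one_over_znu, alpha, Rc)
    assert np.allclose(Rs, Rc * (1.0 + np.tanh(u)))
```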
To get a basis for the formal analysis of the experimental data we have to answer two questions: (i) whether our film is really 2D; and (ii) what is the physical origin of the temperature dependence of $`R_c(T)`$? In the first case we need to compare the film thickness $`h`$ with the characteristic lengths. These are the coherence length $`\xi _{sc}=c\hbar /2eB_{c2}l`$ (where $`l`$ is the mean free path in the normal state) in the superconducting state, and the dephasing length $`L_\varphi (T)\sim \hbar ^2/m\xi _{sc}T`$ that restricts the diverging correlation length $`\xi `$ in the vicinity of the quantum SIT. Knowing the normal-state film resistance $`R\approx 5`$ k$`\mathrm{\Omega }`$ at $`T\approx 4`$ K and assuming that we deal with an amorphous 3D metal, in which the mean free path is normally close to the lowest possible value $`l\sim 1/k_F`$, we estimate the length $`l\approx 8`$ Å. If we crudely evaluate the field $`B_{c2}`$ at $`B_c=7.2`$ T, as determined for state 3, we get an upper limit of $`\xi _{sc}\approx 500`$ Å and $`L_\varphi \approx 400`$ Å at $`T=0.5`$ K. This supports the 2D scenario of the quantum SIT, although in the normal state the film turns out to be 3D.
With respect to the temperature-dependent $`R_c(T)`$: at finite temperatures the conductivity of the film near $`B_c`$ should include the contribution from localized normal electrons in addition to the conductivity defined by the diffusion of fluctuation-induced Cooper pairs . It is the normal-electron conductivity that explains the non-universality of the critical resistance as well as the additional term in Eq. (4). We write this term in a general form because the linear extrapolation used is likely to break down in the vicinity of $`T=0`$.
So, all of the experimental observations can be reconciled with the 2D scaling scenario. Intriguingly, the same scaling behaviour has been established in a parallel magnetic field . Although not in favour of the 2D concept, this fact indicates that the restrictions imposed by the theory may be too severe.
We would like to mention an alternative way to account for the term $`f(T)`$ in Eq. (4): to introduce a temperature-dependent field $`B_c(T)`$ defined through the constancy of $`R_c`$. Formally, both ways are equivalent and correspond to shifts of the isotherms in Fig. 2 either along the $`R`$-axis or along the $`B`$-axis, so that in the vicinity of the transition a common crossing point is attained. In contrast to the normal behaviour of the critical fields in superconductors, the so-defined $`B_c(T)`$ increases with temperature. This can be interpreted in terms of temperature-induced boson delocalization.
In summary, in experiments on amorphous In–O films with different oxygen content we have found a change of the field-driven 2D SIT scenario as the film state departs from the zero-field SIT. For deep film states in the superconducting phase, in addition to the universal function of the scaling variable that describes the conductivity of fluctuation-induced Cooper pairs, there emerges a temperature-dependent contribution to the film resistance. This contribution can be attributed to the conductivity of normal electrons.
We gratefully acknowledge useful discussions with V. Dobrosavljevich and A.I. Larkin. This work was supported by Grants RFBR 99-02-16117 and RFBR-PICS 98-02-22037 and by the Programme โStatistical Physicsโ from the Russian Ministry of Sciences. |
## I Introduction
The physics of non-equilibrium systems includes a broad class of phenomena, such as the physics of steady states, relaxation and dynamics far from equilibrium. Dynamical processes ranging from those in the early universe, to ultra-relativistic heavy ion collisions and the formation of the quark-gluon plasma, all involve non-equilibrium physics in an essential manner, perhaps requiring an understanding of the physics beyond linear response. The physics of steady states contains interesting non-equilibrium phenomena, such as transport and spatially varying observables, and is also important to the study of systems where the time scales for local equilibration are smaller than the macroscopic time scales which might describe some global evolution of the system. Non-equilibrium steady states can be realized in many ways, such as the placement of a system in a thermal gradient, or in an environment which provides shearing, pressure gradients and so forth. In this article, we will examine the statistical mechanics of the non-equilibrium steady states of a classical field theory in thermal gradients. This will allow us to understand the behavior of the theory under these non-equilibrium conditions and to consider problems related to the range of validity of local equilibrium, linear response, equilibrium thermodynamics and statistical mechanics. In particular, the thermal gradients we study can be quite strong, and in such situations it is natural to ask whether the system always relaxes to local equilibrium.
When systems are near equilibrium, one expects linear response to provide a description of the transport coefficients. However, there is no means to address its regime of validity within the theory itself. Furthermore, even in this regime, in low dimensions, $`d\le 2`$, it has been argued using kinetic theory that linear response often does not hold; the Green-Kubo autocorrelation functions are expected to behave as $`t^{-d/2}`$, leading to divergent transport coefficients. Such divergences of the Green-Kubo integral have been observed in certain low dimensional systems such as the FPU model or the diatomic Toda lattice , and seem to be endemic to Hamiltonians which conserve total momentum. Other studies have also found thermal transport in the linear regime to diverge or have focused on somewhat exotic models , while strong thermal gradients have been studied in cellular automata.
In this work, we attempt to address many of the basic questions regarding the non-equilibrium properties of the $`\varphi ^4`$ theory on the lattice in (1+1) dimensions, by studying the model both near and far from equilibrium. We choose the $`\varphi ^4`$ theory since it is a prototypical model which appears in a variety of contexts, including particle physics. From the outset, we should point out that our work has two limitations, namely that it is classical and that it is a lattice field theory. On the other hand, we make no further approximations, and we analyze the model from first principles without any dynamical assumptions. This will allow us to answer interesting physical questions that cannot yet be addressed in the full quantum case. Furthermore, the approach we adopt can be generalized to other classical lattice field theories in a straightforward manner. Our main objective is to develop a comprehensive understanding of the underlying dynamics of the scalar field theory in thermal gradients and to lay the groundwork for further analysis. As such, we shall provide the necessary details of our methods for further work, in such a manner that they can be easily generalized to other models.
Classical field theory is relevant to the high temperature behavior of quantum field theories: for instance, it was recently used to derive properties of the standard electroweak model at finite temperature . Properties of the quantum $`\varphi ^4`$ theory in equilibrium have also been studied previously . However, the relation between the physical quantities in the classical lattice field theory and the quantum theory is far from trivial, and it is beyond the scope of this work. The dynamics of classical field theories is of interest in its own right, and in our case it can be thought of as the dynamics of an anharmonic chain. While we discuss the results of the (1+1) dimensional theory in this work, this is not an essential limitation of our model or approach; we find that the basic physical understanding developed in (1+1) dimensions carries over to results in (3+1) dimensions, although the latter will be discussed elsewhere. As such, our analysis in (1+1) dimensions is not specific to 1-d systems. It should be noted that the classical $`\varphi ^4`$ theory has been considered in various contexts in the past: the Lyapunov spectra have been computed in the microcanonical ensemble in massless and massive models. Equilibration of the model was studied in .
In addition to elucidating the physics underlying some of the non-equilibrium phenomena of field theories, we believe that our results shed light on the nature of non-equilibrium statistical ensembles. The extension of the Gibbs ensemble to the non-equilibrium steady state remains an open problem. However, the limited results to date suggest that the theory is far from trivial, including divergent Gibbs entropy and singular steady-state measures. While some approaches exist, such as maximum entropy or projection techniques to construct non-equilibrium operators, assumptions must be made about the non-equilibrium state in order to compute its properties. By using classical field theory as our starting point, we can use existing techniques to construct non-equilibrium steady states without the assumptions on the dynamics of the model which are symptomatic of other approaches. A seemingly simple question concerns the thermal profile $`T(x)`$ which develops in a system when it is in a thermal gradient. In other approaches, some form is often assumed for the profile $`T(x)`$, which in principle should be obtained dynamically, as we shall do here. Furthermore, in our study, whether local equilibrium is achieved is not an assumption but is determined dynamically by the system. We will see that there are qualitative differences between the behavior of the system in the various regimes. These are characterized in Table 1.
We would like to emphasize once again the motivation for this analysis: while the lattice model we work with does have well defined thermal transport, a single component scalar continuum field theory does not support thermal conductivity in the usual sense. It would be interesting to generalize to more complex theories with more than one conserved current, such as the two component scalar field theory, gauge theory with matter and so on. On the other hand, rather than introduce additional degrees of freedom and observables such as charge or matter density, we prefer first to focus on some important questions in the lattice theory. By focusing on the simplest lattice theory, we are able to clearly present an approach to the non-equilibrium statistical mechanics of classical lattice models from first principles, which can then be applied to other models like the ones mentioned above. Our results are also of interest for the statistical mechanics of many body systems out of equilibrium. As such, we do not attempt to make claims concerning the continuum limit, but rather concentrate on elucidating the physics of the lattice theory near and far from equilibrium.
In section 2, we describe the model we study and how we analyze the theory, paying particular attention to the way the temperature boundary conditions are implemented. In section 3, we discuss thermal transport in our theory. In section 4, we analyze the equilibrium physics of the model and, in section 5, the non-equilibrium physics. In particular, we study the thermal conductivity and its temperature dependence. We analyze the thermal profiles and establish that linear response theory works even for visibly curved profiles, up to certain thermal gradients. We further examine the relations between various physical quantities such as the entropy, speed of sound, heat capacity and thermal conductivity. We end with a discussion in section 6.
Table 1: Behavior of the $`\varphi ^4`$ theory under varying thermal gradients. Here, $`\ell `$ is the mean free path, as explained in section V.
| Regime | Properties |
| --- | --- |
| Global Equilibrium (GE) | $`\nabla T=0`$; $`f(\pi ,\varphi )\propto \mathrm{exp}[-H/T]`$. |
| Local Equilibrium I (LE-I) | $`\mathrm{\Delta }T/T\ll 1`$; $`\nabla T=`$ constant; Fourier's law holds globally; agreement with linear response theory. |
| Local Equilibrium II (LE-II) | $`\ell \nabla T/T\lesssim 1/10`$; $`\nabla T\ne `$ constant; Fourier's law holds locally; small deviations from linear response theory; existence of boundary temperature jumps. |
| Local Non-Equilibrium (LNE) | $`\ell \nabla T/T\gtrsim 1/10`$; $`\nabla T\ne `$ constant; local equilibrium description inadequate; definition of temperature ambiguous; no suitable definition for transport coefficients. |
## II The Model
We start with the $`\varphi ^4`$ Lagrangian (with the metric convention $`(-,+)`$),
$$\mathcal{L}=\frac{1}{2}\left(\frac{\partial \tilde{\varphi }(\tilde{x})}{\partial \tilde{x}_\mu }\right)^2+\frac{1}{2}\tilde{m}^2\tilde{\varphi }(\tilde{x})^2+\frac{\tilde{g}^2}{4}\tilde{\varphi }(\tilde{x})^4.$$
(1)
We discretize and perform the rescaling
$$\varphi _x(t)=a\tilde{g}\tilde{\varphi }(\tilde{x},\tilde{t}),\qquad t=\tilde{t}/a,\quad x=\tilde{x}/a,\quad m^2=\tilde{m}^2a^2,$$
(2)
where $`a`$ is the lattice spacing. We then obtain the corresponding Hamiltonian where the lattice spacing is scaled out
$$H(\pi ,\varphi )=\frac{1}{2}\sum _i\left[\pi _i^2+\left(\nabla \varphi _i\right)^2+m^2\varphi _i^2+\frac{1}{2}\varphi _i^4\right].$$
(3)
Here $`k=1,2,\mathrm{},L`$ runs over all sites in the lattice, and the lattice derivative is $`\nabla \varphi _k\equiv \varphi _{k+1}-\varphi _k`$. The resulting equations of motion are:
$$\dot{\varphi }_i=\pi _i,\qquad \dot{\pi }_i=(\nabla ^2\varphi )_i-m^2\varphi _i-\varphi _i^3.$$
(4)
Here, we defined the lattice Laplacian as $`(\nabla ^2\varphi )_k\equiv \varphi _{k+1}-2\varphi _k+\varphi _{k-1}`$.
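As an illustration of how Eq. (4) can be integrated numerically, the following Python sketch implements a symplectic leapfrog step for the chain (the leap-frog algorithm is one of the two integrators used later in this paper). The fixed ($`\varphi =0`$) boundary treatment, the lattice size and the step sizes are choices made for this example only.

```python
import numpy as np

def force(phi, m2=0.0):
    # (nabla^2 phi)_i - m^2 phi_i - phi_i^3, with phi = 0 assumed
    # outside the chain (fixed boundaries for this sketch).
    lap = -2.0 * phi
    lap[1:] += phi[:-1]
    lap[:-1] += phi[1:]
    return lap - m2 * phi - phi ** 3

def leapfrog(phi, pi, dt, nsteps, m2=0.0):
    # Kick-drift-kick leapfrog integration of Eq. (4).
    phi, pi = phi.copy(), pi.copy()
    for _ in range(nsteps):
        pi += 0.5 * dt * force(phi, m2)
        phi += dt * pi
        pi += 0.5 * dt * force(phi, m2)
    return phi, pi

def energy(phi, pi, m2=0.0):
    # Hamiltonian of Eq. (3), with the two edge bonds coupling the
    # endpoints to phi = 0, consistent with force() above.
    grad = np.diff(np.concatenate(([0.0], phi, [0.0])))
    return (0.5 * np.sum(pi ** 2) + 0.5 * np.sum(grad ** 2)
            + 0.5 * m2 * np.sum(phi ** 2) + 0.25 * np.sum(phi ** 4))

rng = np.random.default_rng(0)
phi0, pi0 = np.zeros(64), rng.normal(0.0, 1.0, 64)
phi1, pi1 = leapfrog(phi0, pi0, 0.01, 10000)
print(energy(phi0, pi0), energy(phi1, pi1))   # conserved to O(dt^2)
```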
### A Finite Temperature Equilibrium Dynamics: $`\nabla T=0`$
Starting from the microcanonical dynamics (4), we can develop a realization of the constant temperature dynamics using the global demons of Ref. . In this approach, auxiliary variables are added to the systems which dynamically emulate the presence of a heat bath. This type of dynamics, while not as rigorously understood, has been shown to converge much faster than the optimized hybrid Monte Carlo methods. Further, it does not suffer as much from critical slowing down.
When we are interested in studying the statistical properties of a system described by an action $`S(\phi )`$, where $`\phi =(\phi _1,\phi _2,\mathrm{},\phi _n)`$ are the degrees of freedom, we usually start with the definition of a statistical measure, such as
$$f\,d\mu (\phi )\propto \mathrm{exp}[-S(\phi )/T]\,d\mu (\phi ).$$
(5)
Here, $`d\mu (\phi )`$ might include constraints in the dynamical space, as in the case of motion on curved manifolds, such as Lie groups. Because we know the measure, steady-state values of observables are readily determined. For an arbitrary observable $`\mathcal{O}`$, we have
$$\langle \mathcal{O}\rangle =\frac{1}{Z}\int d\mu (\phi )\,e^{-S(\phi )/T}\,\mathcal{O},\qquad Z=\int d\mu (\phi )\,e^{-S(\phi )/T}.$$
(6)
While the approach we discuss is suited to general measures, in this article we use the measure $`d\mu (\phi )=d\phi `$ over the phase space, where $`\phi `$ will typically represent canonically conjugate coordinates and momenta, $`\phi =(\varphi ,\pi )`$, and $`S(\phi )`$ is taken to be a Hamiltonian, $`S(\varphi ,\pi )=H(\varphi ,\pi )`$. While the dynamics of the model may now be easily implemented using the equations of motion (4), often referred to as the molecular dynamics method, we would like to add finite temperature constraints to the equations. For this to happen, we must no longer evolve on the constant energy surface, so that $`S(\phi )`$ should no longer be conserved. The method we discuss here is reminiscent of Parisi and Wu's stochastic quantization, although the one adopted in our work is deterministic and time-reversal invariant. It is also a versatile approach in that it has been applied to systems with non-trivial measures, such as Lie algebraic Hamiltonians, equilibrium and non-equilibrium quantum systems, atomic clusters and molecules, magnetic materials and lattice models.
There are many formulations of this dynamics, initially motivated by the approaches of Nosรฉ and Hoover. Consider the following equations of motion for a thermostatted site labeled by $`k`$:
$$\dot{\varphi }_k=\pi _k,\qquad \dot{\pi }_k=-\frac{\partial S}{\partial \varphi _k}-\frac{dG(w_k)}{dw_k}F(\pi _k)-\frac{dG^{\prime }(w_k^{\prime })}{dw_k^{\prime }}F^{\prime }(\pi _k).$$
(7)
We have added two additional degrees of freedom, $`w_k,w_k^{\prime }`$, which couple through the forces indicated above. These extra degrees of freedom, so-called "demons", may be coupled either to the fields, $`\varphi _k`$, or to their "momenta", $`\pi _k`$. Here, we choose to couple them only to the $`\pi _k`$'s in order to have the ability to apply thermostats locally at any one site. The microcanonical limit is recovered when these extra degrees of freedom are decoupled. In this extended space, $`\phi =(\varphi _i,\pi _i,w_k,w_k^{\prime })`$, we define a new action which is the old one plus additional terms for the demons:
$$f(\varphi ,\pi ,w,w^{\prime })=\mathrm{exp}\left(-\left[S(\varphi ,\pi )+\sum _{k:\,\mathrm{thermostatted}\;\mathrm{sites}}\left(G(w_k)+G^{\prime }(w_k^{\prime })\right)\right]/T\right).$$
(8)
In contrast to microcanonical dynamics, where the Hamiltonian is a constant of the motion, $`f`$ is not preserved by the constant temperature dynamics. While the choice of the forces in the equations of motion, as well as that of $`f`$, is seemingly arbitrary, steady state expectation values will be independent of these under reasonably general conditions. Consequently, they are chosen to optimize the convergence of the physical variables.
To find the dynamics associated with the demons $`w_k,w_k^{\prime }`$, we simply require that $`f`$ satisfy a continuity (Liouville) equation in the configuration space $`\phi =(\varphi _i,\pi _i,w_k,w_k^{\prime })`$:
$$0=\frac{\partial f}{\partial t}+\sum _i\frac{\partial (\dot{\phi }_if)}{\partial \phi _i}.$$
(9)
This is equivalent to requiring that the master equation, enforcing conservation of probability under evolution of the ensemble, be satisfied. By substituting the equations of motion into the continuity equation, and using the definition of $`f`$, we can derive solutions for $`\dot{w}_k,\dot{w}_k^{\prime }`$:
$$\dot{w}_k=\pi _kF(\pi _k)-T\frac{dF(\pi _k)}{d\pi _k},\qquad \dot{w}_k^{\prime }=\pi _kF^{\prime }(\pi _k)-T\frac{dF^{\prime }(\pi _k)}{d\pi _k}.$$
(10)
By construction, this dynamics preserves the measure Eq. (8), so that time averages of observables along a given trajectory will converge to the configuration space average over the canonical measure. There is clearly some freedom in defining the dynamics; namely, the functions $`G(w),G^{\prime }(w^{\prime })`$ and $`F(\pi ),F^{\prime }(\pi )`$. The only restriction on $`G(w),G^{\prime }(w^{\prime })`$ is that the measure Eq. (8) lead to a finite integral; in general the auxiliary variables $`w`$ can have any desired measure. In practice, highly non-linear functions are impractical since they will require small integration time steps. For these reasons, it is convenient to take $`G(w),G^{\prime }(w^{\prime })`$ to be $`\mu w^2/2`$ or $`\mu ^{\prime }w^4/4`$, where $`\mu ,\mu ^{\prime }`$ are positive constants. The constant couplings $`\mu ,\mu ^{\prime }`$ of the demons to the physical degrees of freedom are in principle arbitrary as long as the phase space integration is finite. Choosing these couplings to be too weak will make the convergence slow, while choosing them to be too strong will lead to small time steps in the evolution, so the couplings are chosen to optimize the convergence of physical observables. A necessary condition on $`F(\pi ),F^{\prime }(\pi )`$, on the other hand, is that they be at least linear in their argument, the minimal requirement for the existence of the fluctuations in the phase space volume which allow for the exploration of the canonical measure. The precise relation to the fluctuations in a phase space volume $`\mathcal{V}`$, or equivalently, the instantaneous phase space compressibility, can be found using the divergence theorem
$$\frac{1}{\mathcal{V}}\frac{d\mathcal{V}}{dt}=-\int _\mathcal{V}d\phi \sum _k\left[\frac{dG(w_k)}{dw_k}\frac{dF(\pi _k)}{d\pi _k}+\frac{dG^{\prime }(w_k^{\prime })}{dw_k^{\prime }}\frac{dF^{\prime }(\pi _k)}{d\pi _k}\right],$$
(11)
where the index $`k`$ runs over the thermostatted sites. In this paper, we do not explore the effect of different choices of $`G(w),G^{\prime }(w^{\prime }),F(\pi ),F^{\prime }(\pi )`$. Such studies have been done on various other systems .
Finally, we note that the linearized equations of motion are evolved by the stability matrix, $`\partial \dot{\phi }_i/\partial \phi _j`$. The eigenvalues of this matrix are the Lyapunov exponents for the system. Hence we have the relation
$$\sum _i\lambda _i=\left\langle \sum _i\frac{\partial \dot{\phi }_i}{\partial \phi _i}\right\rangle .$$
(12)
For the canonical ensemble, the Liouville equation gives $`\sum _i\lambda _i=0`$, while in steady state non-equilibrium systems, $`\sum _i\lambda _i<0`$.
One specific realization of the finite temperature equilibrium dynamics we use couples the thermostats only at the endpoints of the system, $`k=1,L`$. With the choice of $`G(w)=w^4/4`$, $`G^{\prime }(w^{\prime })=w^{\prime 2}/2`$, $`F(\pi )=\pi /T`$, $`F^{\prime }(\pi )=\pi ^3/T`$, we obtain
$$\begin{array}{ll}\dot{\varphi }_k=\pi _k,& k=1,2,\mathrm{},L\\ \dot{\pi }_k=(\nabla ^2\varphi )_k-m^2\varphi _k-\varphi _k^3,& k=2,3,\mathrm{},L-1\\ \dot{\pi }_k=(\nabla ^2\varphi )_k-m^2\varphi _k-\varphi _k^3-w_k^3\pi _k/T-w_k^{\prime }\pi _k^3/T,& k=1,L\\ \dot{w}_k=\pi _k^2/T-1,\quad \dot{w}_k^{\prime }=\pi _k^4/T-3\pi _k^2,& k=1,L.\end{array}$$
(13)
Just thermostatting the two boundary points is sufficient to thermalize the entire system. We can then examine the interior of the system, far from the boundaries, to study the finite temperature theory. Either free or fixed boundary conditions were used for the $`\varphi `$ field, with no significant effect on the physical behavior of the theory. To compute observables, we use the fact that in this type of dynamics the time averages converge to the ensemble average with respect to the desired ensemble (8):
$$\overline{\mathcal{O}}=\lim _{t\to \infty }\frac{1}{t}\int _0^tdt^{\prime }\,\mathcal{O}(\varphi (t^{\prime }),\pi (t^{\prime }))=\langle \mathcal{O}\rangle _{EQ}=\frac{\int d\phi \,\mathcal{O}\,f}{\int d\phi \,f}.$$
(14)
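A minimal implementation of the thermostatted dynamics (13) is sketched below in Python. The packing of the state vector, the fixed boundary treatment and the second-order Runge-Kutta stepper are illustrative choices for this sketch; the production runs described below use higher-order integrators.

```python
import numpy as np

def rhs(state, L, T, m2=0.0):
    # Time derivatives of Eq. (13).  state packs (phi, pi, w1, w1p,
    # wL, wLp); only the two endpoint sites carry demons.  phi = 0
    # is assumed outside the chain (fixed boundaries).
    phi, pi = state[:L], state[L:2 * L]
    w1, w1p, wL, wLp = state[2 * L:]
    lap = -2.0 * phi
    lap[1:] += phi[:-1]
    lap[:-1] += phi[1:]
    dpi = lap - m2 * phi - phi ** 3
    dpi[0] -= w1 ** 3 * pi[0] / T + w1p * pi[0] ** 3 / T
    dpi[-1] -= wL ** 3 * pi[-1] / T + wLp * pi[-1] ** 3 / T
    dw = np.array([pi[0] ** 2 / T - 1.0,
                   pi[0] ** 4 / T - 3.0 * pi[0] ** 2,
                   pi[-1] ** 2 / T - 1.0,
                   pi[-1] ** 4 / T - 3.0 * pi[-1] ** 2])
    return np.concatenate([pi, dpi, dw])

def evolve(state, L, T, dt, nsteps, m2=0.0):
    # Midpoint (2nd order Runge-Kutta) integration; time averages of
    # observables along the trajectory estimate canonical averages.
    for _ in range(nsteps):
        k1 = rhs(state, L, T, m2)
        state = state + dt * rhs(state + 0.5 * dt * k1, L, T, m2)
    return state
```

Replacing the single $`T`$ in the $`k=1`$ and $`k=L`$ terms by two different endpoint temperatures $`T_1^0`$ and $`T_2^0`$ turns this into the non-equilibrium dynamics of Eq. (15) in the next subsection.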
### B Non-equilibrium Boundary Conditions: $`\nabla T\ne 0`$
One of the problems immediately encountered in the study of non-equilibrium systems is the nature of the steady state statistical distribution. While equilibrium statistical mechanics is well understood, once we apply thermal gradients to the system, much less is known about the system. To set up the non-equilibrium molecular dynamics, we use the demons to thermostat the end-points of our system in the same way we did in the equilibrium simulation. The only difference is that we now control the two endpoint temperatures separately. A consequence will be that the phase space distribution, $`f(\varphi ,\pi ,t)`$, will evolve to a non-smooth function that describes the non-equilibrium steady state.
We take the equations of motion at finite $`T`$ and now introduce two temperatures, $`T_1^0`$ and $`T_2^0`$. The superscript is needed to distinguish the thermostatted temperatures from those measured just inside the system, which can differ because of boundary jumps that we will analyze in detail. One set of equations we have used, similar to the equilibrium case in Eq. (13), is
$$\begin{array}{ll}\dot{\varphi }_k=\pi _k,& k=1,2,\mathrm{},L\\ \dot{\pi }_k=(\nabla ^2\varphi )_k-m^2\varphi _k-\varphi _k^3,& k=2,3,\mathrm{},L-1\\ \dot{\pi }_1=(\nabla ^2\varphi )_1-m^2\varphi _1-\varphi _1^3-w_1^3\pi _1/T_1^0-w_1^{\prime }\pi _1^3/T_1^0& \\ \dot{\pi }_L=(\nabla ^2\varphi )_L-m^2\varphi _L-\varphi _L^3-w_L^3\pi _L/T_2^0-w_L^{\prime }\pi _L^3/T_2^0& \\ \dot{w}_1=\pi _1^2/T_1^0-1,\quad \dot{w}_1^{\prime }=\pi _1^4/T_1^0-3\pi _1^2& \\ \dot{w}_L=\pi _L^2/T_2^0-1,\quad \dot{w}_L^{\prime }=\pi _L^4/T_2^0-3\pi _L^2.& \end{array}$$
(15)
One can see that the thermostats are applied to the endpoints $`k=1`$ and $`k=L`$. It should be noted that inside the boundaries, the dynamics of the system is that of the $`\varphi ^4`$ theory itself, with no other degrees of freedom. We have also considered variations of these thermostats, such as different forms of the interactions or increasing the number of sites at each end where we apply the demons. We will comment when these distinctions are relevant.
The equations of motion are solved on a spatial grid using two methods: fifth and sixth order Runge-Kutta, and leap-frog algorithms. We used from $`10^6`$ to $`10^9`$ time steps of size $`dt`$ from $`0.1`$ to $`0.001`$, with observables being sampled every $`\mathrm{\Delta }t=20`$–$`100\,dt`$. The lattice size was varied from $`L=20`$ to 8000.
A consequence of the non-equilibrium steady state is that the measure becomes singular with respect to the Liouville measure . We do not rely explicitly on the singular nature of the non-equilibrium measure. Rather, we use it to interpret certain observables in the non-equilibrium state. To see this we start with the two main equations for $`f(\varphi ,\pi ,t)`$; the continuity equation and the expression for the total derivative of a phase space valued function. If we denote the vector $`\phi `$ to include all degrees of freedom, $`\phi =(\varphi ,\pi ,w)`$, we have
$$\frac{df}{dt}=\frac{\partial f}{\partial t}+\sum _i\dot{\phi }_i\frac{\partial f}{\partial \phi _i}.$$
(16)
By combining this equation with the continuity equation (9), which holds both in equilibrium and in non-equilibrium, we derive
$$\frac{df}{dt}=-f\sum _i\frac{\partial \dot{\phi }_i}{\partial \phi _i}.$$
(17)
Solving for $`f`$, we obtain
$$f(t)=f(0)\mathrm{exp}\left(-\int _0^tdt^{\prime }\sum _i\frac{\partial \dot{\phi }_i}{\partial \phi _i}\right)=f(0)\mathrm{exp}\left(-t\left\langle \sum _i\frac{\partial \dot{\phi }_i}{\partial \phi _i}\right\rangle _{NE}\right)=f(0)\mathrm{exp}\left(-t\sum _i\lambda _i\right).$$
(18)
In these steps we have replaced the time average with the non-equilibrium ensemble average. The average of the divergence of the equations of motion is nothing more than the sum of the Lyapunov exponents. In non-equilibrium steady states, whether generated by thermal gradients, shearing, and so forth, the sum of the exponents is observed to become negative, signaling the presence of a fractal dimension. In the steady state,
$$f(t)\underset{t\to \infty }{\longrightarrow }\infty .$$
(19)
As the continuity equation is satisfied, the allowed phase space volume must be shrinking onto a set of measure zero, with respect to the original measure. A consequence of the divergence of the distribution function in the non-equilibrium steady state is that Gibbs entropy will also diverge,
$$S_G=-\langle \mathrm{log}f\rangle \to -\infty .$$
(20)
Although we do not know how to properly define the fractal measure for our non-equilibrium steady state, we are able to compute non-equilibrium expectation values with respect to it using time averages:
$$\langle \mathcal{O}\rangle _{NE}=\overline{\mathcal{O}}=\lim _{t\to \infty }\frac{1}{t}\int _0^tdt^{\prime }\,\mathcal{O}(\varphi (t^{\prime }),\pi (t^{\prime })).$$
(21)
We will verify this in the linear response regime where near-equilibrium results can be compared to thermal equilibrium predictions obtained using the linear response theory.
Simulations of many-body systems in non-equilibrium steady states have found that the steady state measure is typically singular with respect to the original equilibrium measure. As one moves further from thermal equilibrium, the available phase space contracts onto an ergodic fractal: the accessible points are dense in the phase space, but fractal in nature. The resulting loss of dimension is related to the transport coefficient. One can see this in our dynamics as well, since $`f`$ satisfies the continuity equation, so that total probability is conserved, yet Eq. (19) is satisfied. This means that the accessible phase space volume is contracted onto a set of measure zero. This type of "dimensional loss" in the steady state can be seen more rigorously in low dimensional systems like the Lorentz gas.
From the point of view of dynamical systems theory, it is possible to understand the steady state measures under certain special conditions, namely that the dynamics is hyperbolic or Anosov. Unfortunately the conditions for hyperbolicity are not satisfied for our system since the number of positive Lyapunov exponents can vary along a trajectory. Nevertheless, it is useful to note that for hyperbolic systems described by some flow, $`x(t)=S_tx`$, for time evolution operator $`S_t`$ and initial point $`x`$, the Sinai-Ruelle-Bowen theorem provides the existence of the steady state measure, denoted $`\mu _{SRB}`$. It can be shown that for a continuous function $`f(x)`$, there exists a unique measure $`\mu _{SRB}`$ such that
$$\lim _{t\to \infty }\frac{1}{t}\int _0^tdt^{\prime }\,f(x(t^{\prime }))=\frac{\int f(x)\,d\mu _{SRB}}{\int d\mu _{SRB}}.$$
(22)
This measure is known to be fractal for various non-equilibrium systems, and can be explicitly constructed for certain maps such as the modified Baker's map. Here a basis of fractal Takagi functions has been implemented.
For the Lorentz gas, it was shown numerically under general conditions, and rigorously proven in the linear response regime, that the steady state measure is singular. Here one can explicitly see that the dimensional loss $`\mathrm{\Delta }D`$ is proportional to the transport coefficient, which in this case is the electrical conductivity. While there is still some contention concerning the existence of singular measures in the extension of the Gibbs ensemble to the far from equilibrium steady state, the compelling evidence suggests that whatever its nature is, it is far from trivial.
## III Transport
One of the relevant observables is the stress-energy tensor,
$$\mathcal{T}^{\mu \nu }=\frac{\partial \mathcal{L}}{\partial (\partial _\mu \varphi )}\partial ^\nu \varphi +\eta ^{\mu \nu }\mathcal{L}.$$
(23)
From the continuity equation $`0=\partial _\mu \mathcal{T}^{\mu \nu }`$, we have
$`0={\displaystyle \frac{\partial }{\partial x^\mu }}\mathcal{T}^{0\mu }={\displaystyle \frac{\partial \mathcal{T}^{00}}{\partial t}}+{\displaystyle \frac{\partial \mathcal{T}^{0i}}{\partial x^i}}.`$ (24)
We can identify the heat flux as $`\mathcal{T}^{0i}`$. On the lattice and in one spatial dimension,
$$\mathcal{T}^{01}(x)=\pi (x)\nabla \varphi (x)\to \mathcal{T}_k^{01}=\pi _k(\varphi _{k+1}-\varphi _k).$$
(25)
Defining the lattice energy density consistently as
$$\mathcal{T}^{00}(x)=\frac{1}{2}\pi ^2+\frac{1}{2}(\nabla \varphi )^2+V(\varphi )\to \mathcal{T}_k^{00}=\frac{1}{2}\pi _k^2+\frac{1}{2}(\varphi _{k+1}-\varphi _k)^2+V(\varphi _k),$$
(26)
we find that the discrete version of the continuity equation is satisfied:
$$\frac{\partial }{\partial t}\mathcal{T}_k^{00}+\left(\mathcal{T}_k^{01}-\mathcal{T}_{k-1}^{01}\right)=0.$$
(27)
This establishes that $`\mathcal{T}_k^{01}`$ is constant in space and time in the steady state. On the other hand, there is no way to satisfy the spatial component of the continuity equation, $`\partial _t\mathcal{T}_k^{01}+\nabla \mathcal{T}_k^{11}=0`$. The reason for this can be understood as follows: while translation invariance in the time direction is preserved, translation invariance in the spatial direction has been lost due to the lattice discretization.
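In a simulation, the lattice flux and energy density of Eqs. (25) and (26) are simple to evaluate, and the flatness of the time-averaged flux across sites provides a practical check of the steady state. The Python sketch below is illustrative; the array layout and the open-chain edge convention are assumptions of this example.

```python
import numpy as np

def heat_flux(phi, pi):
    # Lattice heat flux of Eq. (25), defined on bonds k = 1 .. L-1:
    # T01_k = pi_k * (phi_{k+1} - phi_k).
    return pi[:-1] * (phi[1:] - phi[:-1])

def energy_density(phi, pi, m2=0.0):
    # Lattice energy density of Eq. (26); in this open-chain
    # convention the rightmost site carries no bond term.
    grad2 = np.zeros_like(phi)
    grad2[:-1] = (phi[1:] - phi[:-1]) ** 2
    V = 0.5 * m2 * phi ** 2 + 0.25 * phi ** 4
    return 0.5 * pi ** 2 + 0.5 * grad2 + V

def mean_flux_profile(snapshots):
    # Time-averaged flux per bond over decorrelated (phi, pi)
    # snapshots; in a steady state this profile must be flat
    # (site independent) up to statistical noise.
    return np.mean([heat_flux(phi, pi) for phi, pi in snapshots], axis=0)
```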
It is interesting to contrast this with the so-called FPU $`\beta `$ model, which has divergent thermal conductivity in one spatial dimension. In this case the Hamiltonian is
$$H_{FPU}=\frac{1}{2}\sum _k\left[p_k^2+(q_{k+1}-q_k)^2+\frac{\beta }{2}(q_{k+1}-q_k)^4\right],$$
(28)
so that the heat flux reads
$$\mathcal{T}^{01}=p_k(q_{k+1}-q_k)\left(1+\beta (q_{k+1}-q_k)^2\right).$$
(29)
If we think of this as originating from a continuum field theory, we would have the following (non-relativistic) Lagrangian and heat flux:
$$\mathcal{L}=\frac{1}{2}\left(\frac{\partial \varphi }{\partial t}\right)^2+\frac{1}{2}\left(\frac{\partial \varphi }{\partial x}\right)^2+\frac{\beta }{4}\left(\frac{\partial \varphi }{\partial x}\right)^4,\qquad \mathcal{T}^{01}=\frac{\partial \varphi }{\partial t}\frac{\partial \varphi }{\partial x}\left(1+\beta \left(\frac{\partial \varphi }{\partial x}\right)^2\right).$$
(30)
It is straightforward to check that the heat current satisfies the continuity equation (24). In this case, the thermal conductivity diverges as $`L^{2/5}`$ .
In the simulations we will define the temperature locally using the concept of an ideal gas thermometer, which measures the second moment of the momentum distribution. Specifically, at a site $`k`$ ($`x=ka`$) we define, in both equilibrium and non-equilibrium,
$$T(x)=T_k=\langle \pi _k^2\rangle .$$
(31)
We will see that this definition is sensible in these regimes in the context of both statistical mechanics and thermodynamics.
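In practice, the local temperature of Eq. (31) is estimated by averaging $`\pi _k^2`$ over decorrelated snapshots of a trajectory. A small Python sketch, where the array shapes are assumptions of the example:

```python
import numpy as np

def temperature_profile(pi_snapshots):
    # Ideal-gas thermometer of Eq. (31): T_k = <pi_k^2>, estimated
    # from momentum snapshots of shape (nsamples, L).
    pi2 = np.asarray(pi_snapshots) ** 2
    return np.mean(pi2, axis=0)

def temperature_profile_error(pi_snapshots):
    # Naive standard error of the estimate, valid when the samples
    # are taken far enough apart in time to be decorrelated.
    pi2 = np.asarray(pi_snapshots) ** 2
    return np.std(pi2, axis=0) / np.sqrt(pi2.shape[0])
```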
### A Green-Kubo Approach
The standard approach to near equilibrium or local equilibrium transport is to apply linear response theory or Green-Kubo formulas. In this approach, external fields are added to a Hamiltonian which generate the desired transport processes and expressions for the transport coefficients can be derived in the weak field limit. For thermal conduction, the arguments are somewhat heuristic since one does not have well defined external fields that produce heat flow. Nevertheless, the procedure empirically results in sensible expressions. The Green-Kubo approach expresses transport coefficients in terms of equilibrium autocorrelation functions. For the thermal conductivity, one has
$$\kappa (T)=\frac{1}{T^2}\int _0^\infty dt\int dx\,\langle \mathcal{T}^{01}(x,t)\,\mathcal{T}^{01}(x_0,0)\rangle _{EQ}$$
(32)
$$=\frac{1}{NT^2}\int _0^\infty dt\sum _{k,k^{\prime }=1}^N\langle \mathcal{T}^{01}(x_k,t)\,\mathcal{T}^{01}(x_k^{\prime },0)\rangle _{EQ}.$$
(33)
$`N(<L)`$ is the number of sites in the region inside the boundaries used in the computation. We typically choose the region to be as large as possible while excluding the boundary effects. While this expression is expected to hold near equilibrium, the region of its validity cannot be determined within the linear response theory itself.
Autocorrelation functions, such as Eq. (32), have been argued to decay algebraically rather than exponentially. Originally observed in early molecular dynamics simulations, the velocity-velocity autocorrelation function was found to decay as $`\langle v(t)v(0)\rangle _{EQ}\sim t^{-d/2}`$ up to times on the order of a few tens of mean free times. Using kinetic theory and some general assumptions, it was later argued that this is a generic feature of dynamical systems at long times. These long-time power law tails cause divergences in transport coefficients in a large class of low dimensional systems, including the FPU $`\beta `$ model (28)–(30). It is believed, however, that for theories without strict momentum conservation such a divergence can be absent. This is the case for our lattice model, due to the "on-site" nature of the potential, as we shall see below.
### B Non-Equilibrium Approach
We also compute the thermal conductivity directly by constructing ensembles near equilibrium which lead to constant gradient thermal profiles. This is arguably more fool-proof than the Green-Kubo formula since no assumptions are necessary for the computation. Here we first confirm Fourierโs law and then use it to compute the conductivity using
$$\kappa (T)=-\frac{\langle \mathcal{T}^{01}\rangle _{NE}}{\nabla T(x)}.$$
(34)
While Fourier's law is believed to be valid near equilibrium, what constitutes the "linear regime" is not known. We will see that it eventually breaks down as we move far from equilibrium, while still in a steady state. It is not clear at this point how to make a sensible definition of the thermal conductivity in that regime. One approach would be to try to characterize the "nonlinear" response by incorporating the dependence of the conductivity on higher order terms in the derivatives, such as $`(\nabla T)^3`$, $`(\nabla ^2T)(\nabla T)`$ and so on. We will try to clarify these issues below.
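A hedged sketch of the direct measurement of Eq. (34): fit the measured temperature profile in the bulk, away from the thermostatted boundaries where boundary jumps may contaminate the gradient, and divide the time-averaged flux by the fitted slope. The choice of the bulk window is an assumption of this example.

```python
import numpy as np

def kappa_fourier(T_profile, mean_flux, a=1.0, skip=None):
    # Eq. (34): kappa = -<T01>/grad(T).  The gradient is taken from a
    # linear fit of the measured profile over the bulk of the chain.
    T_profile = np.asarray(T_profile)
    L = len(T_profile)
    skip = L // 4 if skip is None else skip
    x = a * np.arange(L)
    slope, _ = np.polyfit(x[skip:L - skip], T_profile[skip:L - skip], 1)
    return -mean_flux / slope
```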
## IV Equilibrium Ensemble
When the boundary temperatures are equal, we recover equilibrium physics for our system. In the remainder of the paper we specialize to the $`m^2=0`$ case.
### A Thermalization
A test of the thermalization of the system is readily performed by studying the distribution functions of various quantities. If we take a single trajectory, $`(\varphi (t),\pi (t))`$, and histogram $`\pi _k(t)`$ at $`t=0`$, $`\mathrm{\Delta }t`$, $`2\mathrm{\Delta }t`$,โฆ, where $`\mathrm{\Delta }t`$ is some time step which ensures the points are reasonably well decorrelated, we will converge to the thermal distribution function
$$f(\pi _k)\propto \mathrm{exp}[-\pi _k^2/2T].$$
(35)
In Fig. 1 (left column), we show the computed equilibrium distributions for the momenta and heat flux at the center of a lattice with $`L=163`$. Here we have taken the endpoint temperatures to be $`T_1^0=T_2^0=1`$. The measured temperature in the middle is $`T=0.995(8)`$, and one can see that the measured momentum distribution (histogram, top left) agrees with the predicted one (solid, top left). In the bottom left we show the measured thermal distribution of $`\mathcal{T}^{01}`$, for which we do not have a theoretical prediction. Since we are in equilibrium, there is no heat flow, so $`\langle \mathcal{T}^{01}\rangle =0`$, which is evident from the symmetric nature of the thermal distribution $`f(\mathcal{T}^{01})`$. We have verified that when $`T_1^0=T_2^0`$, these boundary conditions reproduce the equilibrium canonical measure $`f_{eq}(\pi ,\varphi )\propto \mathrm{exp}[-H(\pi ,\varphi )/T_1]`$ at all points, including the thermostatted sites. The measured thermal profiles satisfy $`T(x)=T_1^0(=T_2^0)`$ for any $`x`$ within numerical error over the temperature range we investigated: $`T=0.01`$ to $`T=10`$. Hence the boundary conditions (15) do produce the desired physics for this theory.
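Such a comparison between the sampled momenta and the prediction of Eq. (35) is easy to script. The sketch below assumes a one-dimensional array of decorrelated $`\pi _k`$ values collected at a single site:

```python
import numpy as np

def check_momentum_distribution(pi_samples, T, nbins=60):
    # Histogram decorrelated pi_k values and compare them with the
    # normalized thermal prediction of Eq. (35).
    hist, edges = np.histogram(pi_samples, bins=nbins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    boltzmann = np.exp(-centers ** 2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)
    return centers, hist, boltzmann
```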
In our presentation of equilibrium and non-equilibrium steady state expectation values, we verify that stationary results are obtained. This is done by examining the time evolution of observables. For instance, if we want to measure the heat flow through the system, we measure it at all sites and verify that the time averages converge to the same value. A typical result is shown in Fig. 2, where the average of $`\mathcal{T}^{01}`$ is shown as a function of time at three sites, $`L/4`$, $`L/2`$ and $`3L/4`$, for $`L=800`$. One can see that the values eventually converge to the ensemble average. In this case the endpoint temperatures are distinct, so that there is a net heat flux. The corresponding equilibrium figures are similar, with convergence to $`\langle \mathcal{T}^{01}\rangle =0`$.
### B Auto-Correlation Functions
We compute auto-correlation functions for the components of the stress-energy tensor, including the correlator that enters the Green-Kubo formula for the thermal conductivity. We define the normalized correlation functions
$$C(t)=\frac{1}{\langle (\mathcal{T}^{\mu \nu }(0))^2\rangle }\left[\langle \mathcal{T}^{\mu \nu }(t)\mathcal{T}^{\mu \nu }(0)\rangle -\langle \mathcal{T}^{\mu \nu }(0)\rangle ^2\right].$$
(36)
In Fig. 3, we show the time dependence of $`C(t)`$ for $`\mathcal{T}^{00}`$ (dashes), $`\mathcal{T}^{01}`$ (solid) and $`\mathcal{T}^{11}`$ (dots), where the time is normalized by the mean free time $`\tau `$. (As we discuss below, $`\tau `$ will be on the order of the thermal conductivity.) One can see that these functions decay to half their initial values on the order of the mean free time. For larger times, $`t>10\tau `$, these functions oscillate about zero. This behavior is seen at all temperatures studied.
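A simple estimator for these normalized correlators, under the normalization convention printed in Eq. (36), might look as follows; the time series here is synthetic:

```python
import numpy as np

def normalized_autocorrelation(series, max_lag):
    # C(t) of Eq. (36) for a stationary scalar time series; the lag is in
    # units of the sampling interval, and the denominator is <O(0)^2>.
    mean = series.mean()
    norm = np.mean(series**2)
    return np.array([
        (np.mean(series[:len(series) - lag] * series[lag:]) - mean**2) / norm
        for lag in range(max_lag)
    ])

# usage with a synthetic stand-in for a sampled T^{01}(t):
rng = np.random.default_rng(1)
raw = rng.normal(size=50_000)
series = np.convolve(raw, np.ones(20) / 20, mode="same")  # add correlations
print(normalized_autocorrelation(series, max_lag=5))
```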
To compute the thermal conductivity, we are interested in the auto-correlation function of $`\mathcal{T}^{01}`$:
$$\kappa (T,t)=\frac{1}{NT^2}\int _0^tdt^{\prime }\sum _{k,k^{\prime }}\langle \mathcal{T}^{01}(x_k,t^{\prime })\mathcal{T}^{01}(x_{k^{\prime }},0)\rangle _{EQ},\qquad \kappa (T)=\underset{t\to \infty }{lim}\kappa (T,t).$$
(37)
By studying the convergence of $`\kappa (T,t)`$ to the thermal conductivity, we can explore the problems associated with the long-time tails. In Fig. 4 we plot $`\kappa (T,t)`$ as a function of time, where $`t`$ is normalized by the mean free time $`\tau `$. According to the predictions based on kinetic theory, in $`d=1`$ we would expect $`\kappa (T,t)\sim t^{1/2}`$, leading to an infinite conductivity. At the temperatures shown ($`T=1/50`$, 1/10, 1, 2), $`\kappa (T,t)`$ does apparently display behavior similar to $`t^{1/2}`$ (dashes) on time scales up to $`t\sim 10\tau `$, but on longer time scales the results converge to well defined values. Consequently, divergences such as those found in the FPU model are not present here, and long-time tails are at best a transient aspect of this dynamics.
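The running integral of Eq. (37) can be accumulated from the measured correlator with a cumulative trapezoidal rule; in the sketch below a synthetic damped correlator replaces the measured one:

```python
import numpy as np

T, dt = 1.0, 0.05
t = np.arange(400) * dt
C01 = np.exp(-t) * np.cos(0.3 * t)   # stand-in for the summed correlator

# cumulative trapezoidal rule: kappa(T, t) = (1/T^2) * int_0^t dt' C01(t')
increments = 0.5 * (C01[1:] + C01[:-1]) * dt
kappa_t = np.concatenate(([0.0], np.cumsum(increments))) / T**2
print("plateau value, kappa(T) ~", kappa_t[-1])
```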
The behavior of $`\kappa (T)`$ found from the Green-Kubo approach is summarized in Fig. 5 (crosses). This analysis will be continued when we discuss the direct measurements below.
### C Speed of Sound
To better understand the kinetic theory aspects of this finite temperature theory, we would like to know the thermal behavior of the "speed of sound". This can be estimated in a number of ways. A convenient approach is to use the $`\mathcal{T}^{01}`$ auto-correlation function. The sound speed, $`c_s`$, is defined here as the speed at which excitations travel through the system, and is the relevant velocity for the transport theory in our particular model. We note, however, that strictly speaking this is not "sound" in the hydrodynamic sense. We define
$$G(x,t;x_0,t_0)\equiv \langle \mathcal{T}^{01}(x_0,t_0)\mathcal{T}^{01}(x,t)\rangle _{EQ}.$$
(38)
Instead of the time dependence, we consider the spatial dependence of $`G(x,t;x_0,t_0)`$. Not only will it decorrelate in time, as we saw previously in the Green-Kubo integrals, but it will also decorrelate over space. A typical behavior of $`G(x,t;x_0,t_0)`$ is shown in Fig. 6. Here we choose $`x_0=0`$ to be the center of the lattice, and $`t_0=0`$. The temperature is $`T=1/10`$ and $`L=160`$. The autocorrelation function is shown for times $`t=0,30,60,90`$ where one can see the regions of high correlation separate. By measuring the rate at which the peaks separate, we can obtain an estimate for $`c_s`$ at that temperature. In Fig. 7, we plot the sound speed extracted in this manner against the temperature. There is little temperature dependence until $`T1/10`$, after which the speed begins to decrease with $`T`$, as one might naively expect. So although it shows some temperature dependence, $`c_s`$ is generally of order unity over this temperature range.
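The peak-tracking step can be sketched as follows; the correlator G below is a synthetic stand-in whose peaks move at a known speed, so the extraction can be checked against the input:

```python
import numpy as np

# Synthetic stand-in for the correlator of Eq. (38), peaks moving at 0.8.
x = np.linspace(-80, 80, 161)
times = np.array([0.0, 30.0, 60.0, 90.0])
c_true = 0.8
G = np.array([np.exp(-((np.abs(x) - c_true * t) ** 2) / 20.0) for t in times])

peaks = np.array([x[np.argmax(row * (x >= 0))] for row in G])  # right-moving peak
c_s = np.polyfit(times, peaks, 1)[0]                           # slope = speed
print(f"extracted c_s = {c_s:.2f} (input {c_true})")
```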
## V Non-Equilibrium Steady States
By controlling the boundary temperatures, we can begin to develop an understanding of the physics of hot scalar field theory, as it moves increasingly further from thermal equilibrium. We discuss the regimes we have found as we move away from equilibrium.
### A Near Equilibrium: $`T_1\simeq T_2`$
In the near equilibrium limit, the physics is consistent with linear response and Fourier's law. Here $`\mathcal{T}^{01}`$ develops a non-zero expectation value. As the temperatures $`T_1`$ and $`T_2`$ begin to separate, a constant gradient $`\nabla T`$ develops. This is illustrated in Fig. 8 for a lattice of $`L=8000`$, where a small gradient is developed around a temperature $`T=1/4`$ using boundary temperatures $`T_1^0=0.15`$ and $`T_2^0=0.35`$ (solid). A linear profile, expected from Fourier's law, is superimposed (dashes). While such linear profiles in principle provide a direct measure of the conductivity, obtained by dividing the measured heat flux by the average gradient, we perform a more careful analysis by taking increasingly smaller gradients about the same average temperature, thereby explicitly verifying Fourier's law. This is shown in Fig. 9 for the temperature $`T=1/4`$. All these points correspond to thermal profiles which are linear. As the boundary temperatures approach $`T=1/4`$, we see that the heat flux also decreases. The slope, $`\kappa (T)=-\langle \mathcal{T}^{01}\rangle /\nabla T`$, gives the thermal conductivity at this temperature.
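The slope extraction of Fig. 9 amounts to a linear fit of flux against gradient; below is a sketch with synthetic fluxes generated from the fitted conductivity of Eq. (39), with a little noise added:

```python
import numpy as np

rng = np.random.default_rng(2)
kappa_true = 2.83 / 0.25**1.35                    # Eq. (39) at T = 1/4
gradients = np.array([0.002, 0.004, 0.008, 0.016, 0.032])
fluxes = -kappa_true * gradients * (1 + 0.01 * rng.normal(size=5))

slope = np.polyfit(gradients, fluxes, 1)[0]       # slope = -kappa(T)
print(f"kappa(T=1/4) ~ {-slope:.2f} (input {kappa_true:.2f})")
```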
The measurements of $`\kappa (T)`$ in the range $`1/100<T<20`$ are summarized in Fig. 5, for both the direct measurements and the Green-Kubo integrals ($`\times `$). We find that the two methods agree and that the general behavior is well described by a power law:
$$\kappa (T)=\frac{A}{T^\gamma },\gamma =1.35(2),A=2.83(4).$$
(39)
This type of behavior is reminiscent of lattice phonons at high temperature. The fact that the direct measurements agree with the Green-Kubo integrals of the auto-correlation functions is sufficient to dispel the notion of asymptotic long-time tails in this system.
Another important aspect to verify is that we have achieved a bulk limit in the values of $`\kappa (T)`$ reported. In Fig. 10, we show the dependence of the direct measurements of $`\kappa `$ on the size $`L`$ of the system for several temperatures. The dashed lines are the predictions from the power law fit. One can see that a bulk limit is realized for relatively small lattices. On closer inspection, we find that the bulk limit is reached for smaller lattices when the temperature is higher. This can be understood as follows: the mean free path of the system is of the order of the thermal conductivity, as we shall see when we analyze the kinetic theory aspects of this problem, and the bulk limit is reached, roughly speaking, for lattices larger than the mean free path.
### B Far from Equilibrium: $`T_1\ll T_2`$
We now consider the behavior of the system as we move it further from equilibrium. One of the first characteristics to emerge is the development of curvature in the temperature profile. In Fig. 11 we plot a succession of steady state temperature profiles as we change the boundary temperatures from $`(T_1^0,T_2^0)=(0.3,0.7)`$ (dots) to $`(0.2,0.8)`$ (dashes) and finally to $`(0.1,2)`$ (solid). As the system moves away from the constant gradient profiles, it starts to feel the temperature dependence of the thermal conductivity, which decreases with increasing temperature. As a consequence, the hot end of the system cannot conduct heat as well as the lower temperature end. (Of course the converse of this argument would hold if the power law behavior had $`\gamma <0`$, in which case the curvature would have the opposite sign.)
Fig. 2, which was already mentioned when we discussed the convergence properties, displays the time evolution of the heat flux at three sites within a system thermostatted at $`(T_1^0,T_2^0)=(0.3,0.7)`$ (dots in Fig. 11). Regardless of how far the system is from equilibrium, the heat flux must be independent of position in the steady state, since there are no sources or sinks for heat inside the boundaries. We see that the values at the three sites converge to the same value in the steady state, namely a constant non-equilibrium heat flux independent of $`x`$.
Another property of the non-equilibrium system is the appearance of jumps in the temperature at the boundaries. This is illustrated in Fig. 12; in Fig. 12 (top), we show a typical non-equilibrium thermal profile with curvature (solid; the dashes will be discussed below). In the lower panels we examine the low and high temperature ends. $`T_1^0`$ and $`T_2^0`$ are the temperatures enforced by our boundary conditions. One can see that there is a difference between these temperatures and what one obtains by smoothly extrapolating the temperature profile near the edge. These are physical phenomena associated with the dynamics at the interface, and they can be understood quantitatively, as discussed in section V D below. We denote the points obtained through smooth extrapolation by $`(T_1,T_2)`$. If we now focus on the thermal profile away from the edges, we can understand the curvature in the temperature profile using Fourier's law. At or near equilibrium, we have found the power law behavior $`\kappa (T)=AT^{-\gamma }`$, as in Eq. (39). If we integrate Fourier's law and re-express the result in terms of the extrapolated temperatures $`(T_1,T_2)`$, we find
$$T(x)=\{\begin{array}{cc}T_1\left[1-\left(1-\left(\frac{T_2}{T_1}\right)^{1-\gamma }\right)\frac{x}{L}\right]^{\frac{1}{1-\gamma }},\hfill & \gamma \ne 1\hfill \\ T_1\left(\frac{T_2}{T_1}\right)^{x/L},\hfill & \gamma =1.\hfill \end{array}$$
(40)
Some previous efforts to model the temperature profiles in such systems exist, although these were generally model fits; furthermore, a full understanding requires a description of the boundary effects, which up to now has been lacking. We have found that this formula provides an excellent description, although it ultimately breaks down, as we see below. In Fig. 12, the dashed line is the description given by Eq. (40) with $`\gamma =1.35`$, which fits the data to within a few percent. It should be noted that the $`\gamma `$ used here was obtained from systems at or near equilibrium, as in Eq. (39), independently of the systems far from equilibrium. One can see that at the high temperature end there is a tendency to overshoot the measured behavior, as in Fig. 12 (bottom right). This analytic expression provides a good description of the physics as we move away from equilibrium, covering the first three regimes listed in Table 1, until Fourier's law fails to hold locally and we develop steady states which are locally non-equilibrium (LNE).
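Eq. (40) is easy to evaluate; the helper below (a sketch, using the fitted exponent from Eq. (39) and illustrative endpoint temperatures) compares the curved profile with naive linear interpolation between the extrapolated temperatures:

```python
import numpy as np

def temperature_profile(xi, T1, T2, gamma):
    # steady-state profile of Eq. (40); xi = x / L in [0, 1]
    if np.isclose(gamma, 1.0):
        return T1 * (T2 / T1) ** xi
    return T1 * (1 - (1 - (T2 / T1) ** (1 - gamma)) * xi) ** (1 / (1 - gamma))

xi = np.linspace(0, 1, 11)
T_curved = temperature_profile(xi, T1=0.2, T2=0.8, gamma=1.35)
T_linear = 0.2 + 0.6 * xi
print(np.round(T_curved - T_linear, 3))   # negative bowing, as in Fig. 11
```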
The analytic understanding of the thermal profile allows us to understand the behavior of the heat flux for systems not too far from equilibrium. Using Fourier's law and Eq. (40), we have
$$\langle \mathcal{T}^{01}\rangle _{NE}=\frac{A}{L(1-\gamma )}\left(T_1^{1-\gamma }-T_2^{1-\gamma }\right),\qquad \gamma \ne 1.$$
(41)
For $`\gamma =1`$, $`\langle \mathcal{T}^{01}\rangle _{NE}=(A/L)\mathrm{log}(T_1/T_2)`$. When compared to the measured heat flux using the extrapolated temperatures from the thermal profile, this formula is found to provide a very good description, typically to within a few percent. In the near equilibrium regime, where the temperature profile is visibly linear, the boundary jumps vanish, and we expect the flux to behave as
$$\langle \mathcal{T}^{01}\rangle _{NE}^0=-\kappa (T_{av})\frac{T_2-T_1}{L}$$
(42)
where the superscript denotes the linear response, or constant gradient limit. In general, these are related by
$$\langle \mathcal{T}^{01}\rangle _{NE}=\langle \mathcal{T}^{01}\rangle _{NE}^0\left\{1+\frac{\gamma (\gamma +1)}{24}\left(\frac{\mathrm{\Delta }T}{T_{av}}\right)^2+\mathcal{O}\left(\left[\frac{\mathrm{\Delta }T}{T_{av}}\right]^4\right)\right\},$$
(43)
where $`T_{av}=(T_2+T_1)/2`$ and $`\mathrm{\Delta }T=T_2-T_1`$. Note that these temperatures are the extrapolated ones, not the boundary temperatures $`(T_1^0,T_2^0)`$. In this way we see that the temperature profile will no longer be linear when $`\mathrm{\Delta }T/T_{av}\sim L\nabla T(x)/T(x)\gtrsim 1`$.
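A quick numerical check of the expansion (43) against the exact flux (41); the values A = 2.83 and gamma = 1.35 are the fit of Eq. (39), while L and T_av are illustrative:

```python
A, gamma, L, T_av = 2.83, 1.35, 1000.0, 0.25
for dT in [0.01, 0.05, 0.1, 0.2]:
    T1, T2 = T_av - dT / 2, T_av + dT / 2
    exact = A / (L * (1 - gamma)) * (T1 ** (1 - gamma) - T2 ** (1 - gamma))
    linear = -(A / T_av**gamma) * (T2 - T1) / L          # Eq. (42)
    series = linear * (1 + gamma * (gamma + 1) / 24 * (dT / T_av) ** 2)
    print(f"dT/T = {dT / T_av:.2f}:  exact/linear = {exact / linear:.5f},"
          f"  series/linear = {series / linear:.5f}")
```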
In this regime of curved thermal profiles, we see some differences from the thermal distributions measured in equilibrium. In Fig. 1 we display on the right side the measured steady-state non-equilibrium statistical distributions for the momenta (histogram, upper right), compared to the equilibrium thermal distribution expected at that temperature based on the notion of local equilibrium (solid). We find that as we move further from equilibrium, the momentum distributions are typically sharper than the gaussian expected from the ideal gas thermometer. In the lower right, we show the non-equilibrium distribution of the heat flux, which develops the asymmetry needed to give a non-vanishing expectation value $`\langle \mathcal{T}^{01}\rangle <0`$ for $`T_1^0<T_2^0`$.
An illustration of a system very far from equilibrium is the non-equilibrium steady state shown in Fig. 13. Here the ratio of the endpoint temperatures is 100, and the thermal profile is shown on a log scale (solid). The fit of Eq. (40) is shown by the dashed line. While we still measure temperature locally through the second moment, the measured distributions are no longer gaussian. In Fig. 14 (top) we show $`f(\pi )`$ in the center of the system. In the bottom panel, the ratio of Eq. (40) to the measured behavior in Fig. 13 is shown. Deviations up to 40% are evident on the low temperature end. Here, there is no accepted formalism to describe the dynamics and we are in the LNE regime of Table 1. We will return to this issue when we discuss local equilibrium in more detail in section V F.
### C Entropy and Kinetic Theory
While the Gibbs entropy for the theory diverges as it contracts onto a set of measure zero, not all measures of entropy behave in this manner. Irreversible thermodynamics provides a description of systems on scales much larger than the mean free path. When one has heat flow, one defines the local rate of entropy production as
$`\sigma (x)`$ $`=`$ $`\langle \mathcal{T}^{01}\rangle _{NE}{\displaystyle \frac{d}{dx}}{\displaystyle \frac{1}{T(x)}}`$ (44)
$`=`$ $`{\displaystyle \frac{A}{L^2(1-\gamma )^2T_1^\gamma }}\left[1-\left({\displaystyle \frac{T_2}{T_1}}\right)^{1-\gamma }\right]^2\left[1-\left(1-\left({\displaystyle \frac{T_2}{T_1}}\right)^{1-\gamma }\right){\displaystyle \frac{x}{L}}\right]^{\frac{2-\gamma }{\gamma -1}}>0.`$ (45)
Integrating this formula, we can compute the net rate of entropy production to be
$`\dot{S}_{irr}`$ $`=`$ $`{\displaystyle \int _0^L}dx\sigma (x)`$ (46)
$`=`$ $`\langle \mathcal{T}^{01}\rangle _{NE}\left({\displaystyle \frac{1}{T_2}}-{\displaystyle \frac{1}{T_1}}\right)`$ (47)
$`=`$ $`{\displaystyle \frac{A}{L(\gamma -1)T_1^\gamma }}\left(1-{\displaystyle \frac{T_1}{T_2}}\right)\left[1-\left({\displaystyle \frac{T_1}{T_2}}\right)^{\gamma -1}\right]>0.`$ (48)
This expression can be interpreted simply as saying that the global entropy production rate is the difference of the entropy production rates at the boundaries, coming from the demons. On the other hand, irreversible thermodynamics predicts a non-vanishing local rate of entropy production, $`\sigma (x)`$, which is at odds with the behavior of the coarse-grained local Boltzmann entropy computed below; the latter calculation is microscopically based and does not rely on any hydrodynamic limit of the theory.
If we envision the expansion of hot systems similar to RHIC collisions, the local entropy is an important quantity. From the statistical mechanics point of view, when one discusses the notion of entropy in a local frame, it is not the Gibbs entropy of the entire system that is needed. Hence we would like to consider Boltzmann's entropy. To understand the behavior of this entropy in increasingly non-equilibrium environments, we consider the $`n`$-body Boltzmann entropies, defined through the $`n`$-body distribution functions $`f_B^{(n)}`$. The best we can do, of course, is the coarse-grained limit of these quantities. In the "local frame" at $`x=x^{\prime }`$, we define them as integrals of the full phase space distribution $`f`$ over all quantities except the arguments of $`f_B^{(n)}`$:
$`f_B^{(1)}(\varphi (x^{\prime }),\pi (x^{\prime }))`$ $`=`$ $`f_B^{(1)}(\varphi _k,\pi _k)`$ (49)
$`f_B^{(2)}(\varphi (x^{\prime }),\varphi (x^{\prime \prime }),\pi (x^{\prime }),\pi (x^{\prime \prime }))`$ $`=`$ $`f_B^{(2)}(\varphi _k,\varphi _{k+1},\pi _k,\pi _{k+1})`$ (50)
$`\dots `$ (51)
The distributions $`f_B^{(k)}`$ are obtained by histogramming the corresponding degrees of freedom ($`(\varphi _k,\pi _k)`$ for $`f_B^{(1)}`$, $`(\varphi _k,\pi _k,\varphi _{k+1},\pi _{k+1})`$ for $`f_B^{(2)}`$, and so on), and are readily constructed under equilibrium and non-equilibrium conditions. In the equilibrium or non-equilibrium steady states, we then compute
$`S_B^{(1)}`$ $`=`$ $`-{\displaystyle \int }d\pi (x)d\varphi (x)f_B^{(1)}\mathrm{log}f_B^{(1)}`$ (52)
$`S_B^{(2)}`$ $`=`$ $`-{\displaystyle \int }d\pi (x)d\varphi (x)d\pi (x^{\prime })d\varphi (x^{\prime })f_B^{(2)}\mathrm{log}f_B^{(2)}`$ (53)
$`\dots `$ (54)
These satisfy the inequalities
$$S_G\le \cdots \le \frac{L}{2}S_B^{(2)}\le LS_B^{(1)}.$$
(55)
To compute $`S_B`$ we must coarse-grain the phase space. Hence we are actually computing the coarse-grained 1- and 2-body densities, which we denote $`f_\mathrm{\Delta }^{(k)}`$. These are related by
$$S_B^{(1)}\simeq -\sum f_\mathrm{\Delta }\mathrm{log}f_\mathrm{\Delta }+\mathrm{log}(\mathrm{\Delta }\pi _k\mathrm{\Delta }\varphi _k),\qquad \sum f_\mathrm{\Delta }=1.$$
(56)
We have computed these entropies and find that $`S_B^{(1)}`$ does not shift noticeably from its equilibrium value regardless of how far the system is from equilibrium (see Fig. 16). Further, $`S_B^{(2)}`$ $`(\le 2S_B^{(1)})`$ is only slightly less than its upper limit $`2S_B^{(1)}`$, and remains so even far from equilibrium. So unlike $`S_G`$ $`\left(\le LS_B^{(1)}\right)`$, $`S_B`$ is rather insensitive to the non-equilibrium nature of the system. Since these quantities are coarse-grained, it is not too surprising that this is so. It would be more revealing to consider the behavior of $`f_B^{(k)}`$ as the size of the bins decreases, but this is currently too computationally intensive.
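A sketch of the coarse-grained computation of Eq. (56) from samples of $`(\varphi _k,\pi _k)`$ at one site; gaussian samples stand in for equilibrium data:

```python
import numpy as np

def coarse_grained_entropy(phi, pi, bins=40):
    # S_B^(1) of Eq. (56): bin probabilities f_Delta, plus the log of the
    # cell volume, which restores the continuum normalization
    hist, phi_edges, pi_edges = np.histogram2d(phi, pi, bins=bins)
    f_delta = hist / hist.sum()
    cell = (phi_edges[1] - phi_edges[0]) * (pi_edges[1] - pi_edges[0])
    nonzero = f_delta[f_delta > 0]
    return -(nonzero * np.log(nonzero)).sum() + np.log(cell)

# for a pair of unit gaussians the exact differential entropy is
# 1 + log(2*pi) ~ 2.84, up to binning bias
rng = np.random.default_rng(3)
print(coarse_grained_entropy(rng.normal(size=10**5), rng.normal(size=10**5)))
```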
Thermodynamic quantities, such as the entropy, allow us to investigate the underlying dynamics of the theory and also probe the possible deviations of the physical observables from their equilibrium values under non-equilibrium conditions. We shall analyze these questions below. Let us first obtain the specific heat, $`C_V`$, from the equilibrium ensembles; $`C_V`$ may be obtained using the standard formula,
$$C_V=\frac{\langle E^2\rangle _{EQ}-\langle E\rangle _{EQ}^2}{T^2},$$
(57)
where $`E`$ is the energy per site. We find that the specific heat has a weak temperature dependence which may be fitted by a simple power law,
$$C_V=C_0T^{-\alpha },\qquad C_0=0.86(2),\alpha =0.025(6).$$
(58)
This entails that the energy per site of the lattice behaves as
$$E=\frac{C_0}{1-\alpha }T^{1-\alpha }\stackrel{\alpha \to 0}{\longrightarrow }E=C_0T.$$
(59)
Since the temperature dependence is weak, it is natural to compare $`C_V`$ against $`E/T`$ obtained locally, both in equilibrium and in non-equilibrium. Such a comparison of $`C_V`$ and $`E/T`$ for equilibrium, near equilibrium and far from equilibrium is shown in Fig. 15. We find that they all agree quite well. There seems to be an intriguing tendency for the non-equilibrium values of $`E/T`$ to be larger than their equilibrium counterparts; however, this cannot be resolved within the errors, and more investigation is necessary to see if it is a real effect.
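The fluctuation formula (57) is straightforward to apply to a time series of the energy; the sketch below uses synthetic data with the fitted value of $`C_0`$ built in, so the estimator can be checked:

```python
import numpy as np

rng = np.random.default_rng(4)
T, C_true = 1.0, 0.86
# synthetic energy-per-site series with variance C_true * T^2 built in
E = C_true * T + rng.normal(0.0, np.sqrt(C_true) * T, 10**6)

C_V = (np.mean(E**2) - np.mean(E) ** 2) / T**2   # Eq. (57)
print(f"C_V ~ {C_V:.3f} (input {C_true})")
```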
Let us now move on to the computation of the Gibbs entropy, $`S_G`$, in equilibrium. From the first law of thermodynamics, we obtain
$$S_G=\int ^T\frac{C_V}{T}dT,$$
(60)
so that
$$S_G(T)=S_{G,0}-\frac{C_0}{\alpha }\left(T^{-\alpha }-1\right)\stackrel{\alpha \to 0}{\longrightarrow }S_{G,0}+C_0\mathrm{ln}T.$$
(61)
In Fig. 16 we plot the computed Boltzmann entropy $`S_B^{(1)}`$ as a function of temperature for systems in equilibrium, as well as for systems near and far from equilibrium, and find a similar form, $`S_B^{(1)}=S_{B,0}+S_{B,1}\mathrm{log}T`$. While we cannot measure $`S_G`$ away from equilibrium, $`S_{B,1}\simeq C_0`$, so that the behavior of $`S_B^{(1)}`$ in and out of equilibrium is similar to that of $`S_G`$ in equilibrium. We also find that $`S_B^{(1)}`$ is quite insensitive to the departure from equilibrium. Consequently, Boltzmann's entropy and some of the thermodynamic concepts, such as temperature, still retain their relationships in these local states far from equilibrium, even when the momentum distributions are no longer gaussian. Other physical quantities were also studied for effects of the system being out of equilibrium, including $`\mathcal{T}^{11}`$,
$$\mathcal{T}^{11}=\frac{1}{2}\pi ^2+\frac{1}{2}\left(\nabla \varphi \right)^2-\frac{\varphi ^4}{4}.$$
(62)
We find that the results for $`\mathcal{T}^{11}`$ in equilibrium, near equilibrium and far from equilibrium are consistent with each other, in a manner similar to $`\mathcal{T}^{00}`$ in Fig. 15.
### D Boundary Temperature Jumps
Boundary temperature jumps in non-equilibrium steady states, such as systems subjected to thermal gradients, are well known, although it seems that their behavior has never had a suitable explanation. Systems sheared by moving walls display analogous jumps in the velocity profile, with the fluid near the wall moving slightly slower than the wall at large velocities. Such effects are known to be sources of error in experiments that measure transport coefficients. We find that it is possible to achieve a quantitative understanding of these effects using simple kinetic arguments.
We have found in this system that $`c_s`$ and $`C_V`$ are of order unity, and have at most a weak temperature dependence (see Fig. 7 and Fig. 15). The thermal conductivity is related to the mean free path by $`\kappa \sim C_Vc_s\ell `$ through a standard kinetic theory argument. Hence we expect $`\kappa \sim \ell `$. Strictly speaking, $`\ell `$ is the mean free path in equilibrium, but we will refer to it loosely as the mean free path also away from equilibrium, to use as a natural length scale in the system. It corresponds to the mean free path when the thermal gradients are not too strong (up to the LE-II regime in Table 1), but the standard kinetic theory arguments presumably break down when local equilibrium no longer holds. The boundary temperature jumps are due to the mean free path being non-zero and to the deviation of the system from equilibrium. When $`\ell \ll L`$,
$$T_i^0-T_i=\eta \frac{\partial T}{\partial n}|_{boundary},$$
(63)
where $`n`$ denotes the normal to the boundary. This formula should apply when the relative jump $`|T_i^0-T_i|/T_i`$ is small. On dimensional grounds, the coefficient $`\eta `$ should be of the order of the mean free path $`\ell `$, so that $`\eta \sim \kappa `$. We have verified this relation by plotting $`\eta `$, measured directly from thermal profiles, as a function of the extrapolated temperature $`T`$ at the boundary. The results are summarized in Fig. 18 (top). In the figure we display only the jumps at the low temperature end, since there the errors are much smaller. (The large gradient data at the high temperature end, while consistent with the low end data, are quite noisy.) We also show a fit to the data (dashes), which gives
$$\eta (T)=(6.1\pm 0.5)T^{-1.5\pm 0.1}.$$
(64)
On the same figure we also show the behavior of $`\kappa (T)`$, which indicates that $`\eta \sim \kappa `$ is consistent with the observed behavior.
An independent verification of the behavior of the jumps can be made by studying how the jumps depend on the heat flux. We let $`\eta =\alpha \kappa `$, where $`\alpha `$ is a constant to be determined. From Eq. (63), we can then relate the jump to the heat flux:
$$T_i-T_i^0\sim -\alpha \langle \mathcal{T}^{01}\rangle _{NE}\sim \alpha (T_2^0-T_1^0)\frac{\kappa (T_{av})}{L}+\cdots .$$
(65)
By plotting the boundary jumps directly as a function of the heat flux, in non-equilibrium steady states both near and far from equilibrium, we obtain the data in Fig. 18 (bottom). One can see that there is a simple linear relationship, whose slope gives $`\alpha =2.6(1)`$, consistent with the assumption $`\eta =\alpha \kappa `$. The understanding of these jumps, together with that of the temperature profile (40), provides a complete description of $`T(x)`$ in terms of the boundary temperatures $`(T_1^0,T_2^0)`$. As was alluded to earlier, these simple relations (63), (65) break down in the LNE region of Table 1, where the thermal gradients are too strong.
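For orientation, the leading term of Eq. (65), with the measured $`\alpha =2.6`$ and the power-law conductivity (39), predicts jumps of the following size (a sketch; the endpoint temperatures and sizes are illustrative):

```python
A, gamma, alpha = 2.83, 1.35, 2.6   # Eq. (39) fit and the measured alpha

def boundary_jump(T1_0, T2_0, L):
    # leading term of Eq. (65)
    T_av = 0.5 * (T1_0 + T2_0)
    return alpha * (A / T_av**gamma) * (T2_0 - T1_0) / L

for L in [100, 400, 1600]:
    print(L, round(boundary_jump(0.3, 0.7, L), 4))   # jumps shrink as 1/L
```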
A note on boundary conditions is in order: there is reasonable freedom in how the thermostats are implemented. We can vary the number of thermostatted sites or how the demons are coupled to the physical degrees of freedom. We find that changes in the way the thermostats are implemented bring about changes in the boundary jumps when we are sufficiently far from equilibrium. This also results in a different $`\langle \mathcal{T}^{01}\rangle `$, in a manner consistent with Eq. (41). Different thermostats will correspond to different values of $`\alpha `$, which is not an intrinsic parameter of the $`\varphi ^4`$ theory but rather reflects the different ways in which a heat bath might couple to the scalar field theory. In other words, as expected, different thermostats do not change our understanding of the physics at all.
In assessing the generality of Eq. (65), we note that the FPU $`\beta `$ model (28) displays temperature profiles that depend on $`L`$. In Refs. , it has been shown that the heat flux varies with the system size as $`\langle \mathcal{T}^{01}\rangle \propto L^{-0.55}`$. Consequently, one would expect the boundary jumps seen there to behave as $`\delta T\propto L^{-0.55}`$. A cursory analysis of those thermal profiles suggests that this is indeed the case.
### E Non-Equilibrium Distributions
It is worth making a few remarks on the non-equilibrium distribution functions we have seen. Many of the usual approaches to the non-equilibrium statistical properties of systems make certain model dependent choices from the outset. In some thermo-field approaches, the temperature profile $`T(x)`$ is chosen to have a specific form, resulting in a statistical operator of the type $`\mathrm{exp}(-\stackrel{~}{H}(\pi ,\varphi )/T(x))`$, where $`\stackrel{~}{H}(\pi ,\varphi )`$ might include contributions due to transport. In the approach of Zubarev, one assumes a particular behavior for the steady state distribution, which is then used to solve the dynamical equations.
The local non-equilibrium distribution, $`f_B^{(1)}(\varphi _k,\pi _k)`$, is shown in Fig. 17 for a system very far from equilibrium. While the distribution $`f`$ in the full configuration space is expected to be fractal, $`f_B^{(1)}(\varphi _k,\pi _k)`$ is only a projection onto a lower dimensional space, and should be smooth. The true non-equilibrium statistical operator will differ from those assumed in the above approaches through its correlations. One aspect of the fractal nature of $`f`$ is that the dynamical space is of reduced dimension, so that additional correlations will exist that are not present in those standard non-equilibrium approaches.
Another limit in which one obtains unusual thermal profiles is a "ballistic" limit, in which the mean free path is of the order of, or larger than, the size of the system. Because the excitations pass through the system rather readily, one has large boundary jumps at both ends. As a consequence, the thermal profile is almost flat, regardless of the values of the endpoint temperatures. In this case the system is still thermalized; the distributions of momenta at the endpoints and inside are gaussian. These results are included in the near-equilibrium data sets of our figures for the entropy, specific heat and so forth, and they fall in line with the non-ballistic results.
### F Local Equilibrium
While we do not yet have any precise criteria for when local equilibrium fails to be a good approximation, we can analyze various physical quantities and compare them to their values when local equilibrium holds. This should provide us with a measure of the dependence of the physics on local equilibrium. While field dependent observables, such as moments of $`\varphi (x)`$, will depend on the model under investigation, the momenta should be more robust. A natural place to start is with the analysis of cumulants. If local equilibrium holds, we should have
$$\langle \pi _k^2\rangle =T,\qquad \langle \pi _k^4\rangle =3T^2,\qquad \langle \pi _k^6\rangle =15T^3$$
(66)
and so forth. Equivalently, we can say that
$$\langle \langle \pi _k^4\rangle \rangle =\langle \pi _k^4\rangle -3\langle \pi _k^2\rangle ^2,\qquad \langle \langle \pi _k^6\rangle \rangle =\langle \pi _k^6\rangle -15\langle \pi _k^4\rangle \langle \pi _k^2\rangle +30\langle \pi _k^2\rangle ^3$$
(67)
vanish in local equilibrium. Hence one measure of the breaking of local equilibrium when we are far from equilibrium is
$$\frac{\langle \langle \pi _k^4\rangle \rangle }{3\langle \pi _k^2\rangle ^2}=\frac{\langle \pi _k^4\rangle }{3\langle \pi _k^2\rangle ^2}-1.$$
(68)
Similar expressions for higher cumulants are also possible.
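The diagnostic (68) is computed directly from momentum samples. As a sketch (with synthetic samples), a gaussian ensemble gives zero while a non-gaussian one does not:

```python
import numpy as np

def le_violation(pi_samples):
    # normalized fourth cumulant of Eq. (68); zero in local equilibrium
    m2 = np.mean(pi_samples**2)
    m4 = np.mean(pi_samples**4)
    return m4 / (3.0 * m2**2) - 1.0

rng = np.random.default_rng(5)
print(le_violation(rng.normal(size=10**6)))           # gaussian: ~ 0
print(le_violation(rng.uniform(-1, 1, size=10**6)))   # flat: exactly -0.4
```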
Non-equilibrium behavior begins to emerge in certain quantities as soon as $`T_1\ne T_2`$. For instance, $`\langle \pi ^4(x)\rangle /\langle \pi ^2(x)\rangle ^2\ne 3`$, the equality holding in equilibrium, with the deviation growing as one departs from equilibrium. One can also see that the non-equilibrium measure is not locally Boltzmann, since quantities such as $`\langle \pi (x)\varphi (x^{\prime })\rangle \ne 0`$ for $`x\ne x^{\prime }`$, as is expected for a system supporting some type of transport. However, we attribute the origin of this not to additional terms in the statistical distribution $`f`$, but rather to additional correlations in the non-equilibrium measure due to its reduced dimensionality, as expected from dynamical systems theory. Unfortunately, there is no way to estimate the dimensional loss explicitly without a direct measurement of the entire Lyapunov spectrum, which is a rather numerically intensive task.
As one moves away from equilibrium, linear response theory is expected to eventually break down. Such behavior is confirmed for our system; when the temperature gradient becomes too large, the formula for the heat flow (41) ceases to be valid. We display the relative difference between the measured current and the current obtained from linear response theory (41) in Fig. 19 (bottom). It can be seen that the relative deviations can be of order one for large thermal gradients, signaling the breakdown of linear response theory. The deviations are plotted against $`\kappa (T)\nabla T/T`$, which seems to be the natural scale, since $`\kappa `$ is, roughly speaking, the mean free path, the natural length scale in the problem, as discussed in section V D. One obvious possibility would be to interpret this as a nonlinear response of the thermal conductivity, $`\kappa (T,\langle \mathcal{T}^{01}\rangle )`$, which may, for instance, be parametrized as
$$\kappa (T,\langle \mathcal{T}^{01}\rangle )=\kappa _0(T)+\kappa _2(T)\langle \mathcal{T}^{01}\rangle ^2+\mathcal{O}\left(\langle \mathcal{T}^{01}\rangle ^4\right).$$
(69)
Such approaches have been discussed in the literature.
However, such an interpretation presupposes, perhaps tacitly, that the concept of local equilibrium holds and that the standard notion of temperature applies, amongst other things. Therefore, it is imperative to first check whether local equilibrium is achieved in this "non-linear" regime, and it is this point that we now investigate with care. Such questions have been asked previously, and in those situations local equilibrium was seen to be valid . First, we need to ask what constitutes local equilibrium. The concept has been defined in various ways, and it is not our intention here to discern the possible differences among the definitions. At the least, local equilibrium requires a Maxwellian momentum distribution for the class of Hamiltonians we work with, leading to the usual concept of temperature; this will be our point of investigation.
In Fig. 19 (top), we plot the fourth cumulant of $`\pi `$, (68), against $`\kappa (T)\nabla T/T`$. The cumulant, which is defined locally, quite clearly deviates from its equilibrium value under strong thermal gradients. Even in these situations, the thermostatted boundary sites remain in local equilibrium, as they should, as shown in Fig. 20. It can be seen that the deviations from local equilibrium start to occur at roughly the same value, $`\kappa (T)\nabla T/T\sim 1/10`$, where linear response theory breaks down. In Table 1, we identify this as a steady state which is locally non-equilibrium (LNE). We have verified that higher cumulants display similar behavior. Therefore, at least in our model, the breakdown of local equilibrium needs to be considered when the nonlinearity of the response is analyzed. Of course, this does not preclude the possibility of a nonlinear response as discussed above, but the nonlinearity needs to be disentangled from the deviations from local equilibrium with care.
We emphasize that a priori this need not be the case. It is in principle possible that there is a region where the concept of local equilibrium is still valid and yet linear response theory breaks down. In such a situation, a "non-linear response" theory would be quite appropriate for analyzing the physics. However, in the cases we studied such regimes do not exist, and the failure of linear response occurs simultaneously with the breakdown of local equilibrium.
## VI Conclusions
We have constructed non-equilibrium steady states for classical $`\varphi ^4`$ lattice field theory in one dimension, under conditions near and far from equilibrium. We obtained the temperature dependence of the thermal conductivity in the linear regime and found that the direct non-equilibrium measurements are consistent with the Green-Kubo results. The underlying dynamics of the theory was investigated, and physical quantities such as the speed of sound, the heat capacity and Boltzmann's entropy, together with their temperature dependence, were obtained. The results could be consistently understood within a kinetic theory approach. This understanding was further used to clarify the dynamics behind the temperature jumps that arise at the boundaries of the system. We also found that for temperature gradients that are not too large, the linear response law is adequate for understanding the behavior of the system, even though the temperature profile may be visibly non-linear. For even larger gradients, even though the system is in a steady state, linear response eventually ceases to hold, and local equilibrium is violated as well.
We have classified the steady states in Table 1, which identifies distinct dynamical regimes of the theory. It would be nice to develop more precise measures for these dynamical regimes, but it is clear that even a one component classical lattice field theory contains the means to understand the non-equilibrium physics of many-body systems. It would be interesting to extend these results to theories with phase transitions, and to multi-component theories, which would also allow an analysis of the Onsager reciprocity relations. The additional degrees of freedom will provide additional measurable quantities, but we expect many of the qualitative features of this simple model to persist.
We acknowledge support through grants from Keio University and DOE grant DE-FG02-91ER40608. We would like to thank Guy Moore and Larry Yaffe for enlightening discussions, and the Institute for Nuclear Physics at the University of Washington, where some of this work was conducted, for its hospitality.
# Logarithmic correction to the Bekenstein-Hawking entropy
## Abstract
The exact formula derived by us earlier for the entropy of a four dimensional non-rotating black hole within the quantum geometry formulation of the event horizon in terms of boundary states of a three dimensional Chern-Simons theory, is reexamined for large horizon areas. In addition to the semiclassical Bekenstein-Hawking contribution proportional to the area obtained earlier, we find a contribution proportional to the logarithm of the area together with subleading corrections that constitute a series in inverse powers of the area.
The derivation of the Bekenstein-Hawking (BH) area law for black hole entropy from the quantum geometry approach (and also, earlier, from string theory for some special cases) has led to a resurgence of interest in the quantum aspects of black hole physics in recent times. However, the major activity has remained focussed on confirming the area law for large black holes, which, as is well-known, was obtained originally on the basis of arguments of a semiclassical nature. The question arises as to whether any essential feature of the bona fide quantum aspect of gravity, beyond the domain of the semiclassical approximation, has been captured in these analyses. Indeed, as has been most eloquently demonstrated by Carlip , a derivation of the area law alone seems to be possible on the basis of a symmetry principle of the (semi)classical theory itself, without requiring a detailed knowledge of the actual quantum states associated with a black hole. The result seems to hold for an arbitrary number of spatial dimensions, so long as a particular set of isometries of the metric is respected. That quantum gravity has a description in terms of spin networks (or, for that matter, in terms of string states in a fixed background) appears to be of little consequence in obtaining the area law, although these proposed underlying structures also lead to the same behaviour, via alternative routes, in the semiclassical limit of arbitrarily large horizon area.
Although there is as yet no complete quantum theory of gravitation, one would in general expect the key features uncovered so far to lead to modifications of the area law which could not have been anticipated through semiclassical reasoning. Thus, the question of what is the dominant quantum correction due to these features of quantum gravity becomes one of paramount importance. Already in the string theory literature, examples of leading corrections to the area law have appeared, obtained by counting D-brane states describing special supersymmetric extremal black holes (interacting with massless vector supermultiplets). This has received strong support recently from semiclassical calculations in $`N=2`$ supergravity supplemented by ostensible stringy higher derivative corrections, which are incorporated using Wald's general formalism describing black hole entropy as a Noether charge . However, the geometrical interpretation of these corrections remains unclear. Further, there are subtleties associated with the direct application of Wald's formalism, which assumes a non-degenerate bifurcate Killing horizon, to the case of extremal black holes, which have degenerate horizons. Moreover, the string results do not pertain to generic (i.e., non-extremal) black holes of Einstein's general relativity, and are constrained by the unphysical requirement of unbroken spacetime supersymmetry.
In this paper, we consider the corrections to the semiclassical area law for generic four dimensional non-rotating black holes, due to key aspects of the non-perturbative quantum gravity (or quantum geometry) formulated by Ashtekar and collaborators . In , appropriate boundary conditions are imposed on the dynamical variables at the event horizon, considered as an inner boundary. These boundary conditions require that the Einstein-Hilbert action be supplemented by boundary terms describing a three dimensional $`SU(2)`$ Chern-Simons theory living on a finite "patch" of the horizon with a spherical boundary, punctured by links of the spin network bulk states describing the quantum spacetime geometry interpolating between asymptopia and the horizon. On this two dimensional boundary there exists an $`SU(2)`$ Wess-Zumino model whose conformal blocks describe the Hilbert space of the Chern-Simons theory modelling the horizon. An exact formula for the number of these conformal blocks, for arbitrary level $`k`$ and number of punctures $`p`$, was obtained by us two years ago . It has been shown that in the limit of large horizon area, given by arbitrarily large $`k`$ and $`p`$, the logarithm of this number duly yields the area law. Here we go one step further and calculate the dominant sub-leading contribution as a function of the classical horizon area or, equivalently, of the BH entropy itself.
On purely dimensional grounds, one would expect the entropy to have an expansion, for large classical horizon area, in inverse powers of area so that the BH term is the leading one,
$$S_{bh}=S_{BH}+\sum _{n=0}^{\infty }C_nA_H^{-n}$$
(1)
where, $`A_H`$ is the classical horizon area and $`C_n`$ are coefficients which are independent of the horizon area but dependent on the Planck length (Newton constant). Here the Barbero-Immirzi parameter has been "fitted" to the value which fixes the normalization of the BH term to the standard one. However, in principle, one could expect an additional term proportional to $`lnA_H`$ as the leading quantum correction to the semiclassical $`S_{BH}`$. Such a term is expected on general grounds pertaining to the breakdown of naive dimensional analysis due to quantum fluctuations, as is common in quantum field theories in flat spacetime and also in quantum theories of critical phenomena. We show, in what follows, that such a logarithmic correction to the semiclassical area law does indeed arise from the formula derived earlier, and we determine its coefficient.
We first briefly recapitulate the derivation of the general formula for the number of conformal blocks of the $`SU(2)_k`$ Wess Zumino model on a punctured 2-sphere appropriate to the black hole situation. This number can be computed in terms of the so-called fusion matrices $`N_{ij}^r`$
$$N_{\mathcal{P}}=\sum _{\{r_i\}}N_{j_1j_2}^{r_1}N_{r_1j_3}^{r_2}N_{r_2j_4}^{r_3}\cdots N_{r_{p-2}j_{p-1}}^{j_p}$$
(2)
Diagrammatically, this can be represented as shown in fig. 1 below.
Here, each matrix element $`N_{ij}^r`$ is $`1`$ or $`0`$, depending on whether the primary field $`[\varphi _r]`$ is allowed or not in the conformal field theory fusion algebra for the primary fields $`[\varphi _i]`$ and $`[\varphi _j]`$ ($`i,j,r=0,1/2,1,\dots ,k/2`$):
$$[\varphi _i]\times [\varphi _j]=\sum _rN_{ij}^r[\varphi _r].$$
(3)
Eq. (2) gives the number of conformal blocks with spins $`j_1,j_2,\dots ,j_p`$ on the $`p`$ external lines and spins $`r_1,r_2,\dots ,r_{p-2}`$ on the internal lines.
We then use the Verlinde formula to obtain
$$N_{ij}^r=\sum _s\frac{S_{is}S_{js}S_s^r}{S_{0s}},$$
(4)
where, the unitary matrix $`S_{ij}`$ diagonalizes the fusion matrix. Upon using the unitarity of the $`S`$-matrix, the algebra (2) reduces to
$$N_{\mathcal{P}}=\sum _{r=0}^{k/2}\frac{S_{j_1r}S_{j_2r}\cdots S_{j_pr}}{(S_{0r})^{p-2}}.$$
(5)
Now, the matrix elements of $`S_{ij}`$ are known for the case under consideration ($`SU(2)_k`$ Wess-Zumino model); they are given by
$$S_{ij}=\sqrt{\frac{2}{k+2}}sin\left(\frac{(2i+1)(2j+1)\pi }{k+2}\right),$$
(6)
where, $`i,j`$ are the spin labels, $`i,j=0,1/2,1,\dots ,k/2`$. Using this $`S`$-matrix, the number of conformal blocks for the set of punctures $`\mathcal{P}`$ is given by
$$N_{\mathcal{P}}=\frac{2}{k+2}\sum _{r=0}^{k/2}\frac{\prod _{l=1}^psin\left(\frac{(2j_l+1)(2r+1)\pi }{k+2}\right)}{\left[sin\left(\frac{(2r+1)\pi }{k+2}\right)\right]^{p-2}}.$$
(7)
Eq. (7) thus gives the dimensionality of the $`SU(2)`$ Chern-Simons states corresponding to a three-fold bounded by a two-sphere punctured at $`p`$ points. The black hole microstates are counted by summing $`N_{\mathcal{P}}`$ over all sets of punctures, $`N_{bh}=\sum _{\{\mathcal{P}\}}N_{\mathcal{P}}`$. The entropy of the black hole is then given by $`S_{bh}=lnN_{bh}`$.
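Eq. (7) is immediate to evaluate numerically; the sketch below computes $`N_{\mathcal{P}}`$ for an arbitrary set of punctures (the level and puncture numbers are illustrative):

```python
import numpy as np

def n_blocks(k, spins):
    # Eq. (7): number of conformal blocks for punctures carrying `spins`
    p = len(spins)
    theta = np.arange(1, k + 2) * np.pi / (k + 2)   # (2r+1) pi/(k+2)
    prod = np.ones_like(theta)
    for j in spins:
        prod = prod * np.sin((2 * j + 1) * theta)
    return (2.0 / (k + 2)) * np.sum(prod / np.sin(theta) ** (p - 2))

# ten spin-1/2 punctures at large level; for k large this approaches the
# number of SU(2) invariants in the tensor product of p doublets
print(n_blocks(k=100, spins=[0.5] * 10))
```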
We are however interested only in the leading correction to the semiclassical entropy which ensues in the limit of arbitrarily large $`A_H`$. To this end, recall that the eigenvalues of the area operator are given by
$$A_H=8\pi \beta l_{Pl}^2\sum _{l=1}^p[j_l(j_l+1)]^{\frac{1}{2}},$$
(8)
where, $`l_{Pl}`$ is the Planck length, $`j_l`$ is the spin on the $`l`$th puncture of the 2-sphere, and $`\beta `$ is the Barbero-Immirzi parameter . Clearly, the large area limit corresponds to the limits $`k\to \infty `$, $`p\to \infty `$. Now, from eq. (8), it follows that the number of punctures $`p`$ is largest for a given $`A_H`$ provided all spins $`j_l=\frac{1}{2}`$. Thus, for a fixed classical horizon area, we obtain the largest number of punctures $`p_0`$ as
$$p_0=\frac{A_H}{4l_{Pl}^2}\frac{\beta _0}{\beta },$$
(9)
where, $`\beta _0=1/\pi \sqrt{3}`$. In this approximation, the set of punctures $`\mathcal{P}_0`$ with all spins equal to one-half dominates over all other sets, so that the black hole entropy is simply given by
$$S_{bh}=lnN_{\mathcal{P}_0},$$
(10)
with $`N_{\mathcal{P}_0}`$ being given by eq. (7) with $`j_l=1/2`$.
Observe that $`N_{\mathcal{P}_0}`$ can now be written as
$$N_{\mathcal{P}_0}=\frac{2^{p_0+2}}{k+2}\left[F(k,p_0)-F(k,p_0+2)\right]$$
(11)
where,
$$F(k,p)=\sum _{\nu =1}^{[\frac{1}{2}(k+1)]}cos^p\left(\frac{\nu \pi }{k+2}\right).$$
(12)
The sum over $`\nu `$ in eq. (12) can be approximated by an integral in the limit $`k\to \infty `$, $`p_0\to \infty `$, with appropriate care being taken to restrict the domain of integration; one obtains
$$F(k,p_0)\approx \left(\frac{k+2}{\pi }\right)\int _0^{\pi /2}dx\mathrm{cos}^{p_0}x,$$
(13)
so that,
$$N_{\mathcal{P}_0}\approx \frac{2^{p_0+2}}{\pi (p_0+2)}B\left(\frac{p_0+1}{2},\frac{1}{2}\right),$$
(14)
where, $`B(x,y)`$ is the standard $`B`$-function. Using well-known properties of this function, it is straightforward to show that
$`lnN_{\mathcal{P}_0}`$ $`=`$ $`p_0ln2-{\displaystyle \frac{3}{2}}lnp_0-ln(2\pi )`$ (15)
$``$ $`-{\displaystyle \frac{5}{2}}p_0^{-1}+O(p_0^{-2}).`$ (16)
Substituting for $`p_0`$ as a function of $`A_H`$ from eq. (9), and setting the Barbero-Immirzi parameter $`\beta `$ to the "universal" value $`\beta _0ln2`$ , one obtains our main result
$$S_{bh}=S_{BH}-\frac{3}{2}ln\left(\frac{S_{BH}}{ln2}\right)+\mathrm{const.}+\cdots ,$$
(17)
where, $`S_{BH}=A_H/4l_{Pl}^2`$, and the ellipses denote corrections in inverse powers of $`A_H`$ or $`S_{BH}`$.
Admittedly, the above calculation is restricted to the leading correction to the semiclassical approximation. It has been done for fixed large $`A_H`$ by taking the spins on all the punctures to be 1/2, so that we have the largest number of punctures. But it is not difficult to argue that the coefficient of the $`lnA_H`$ term is robust, in that the inclusion of spin values higher than 1/2 does not affect it, although the constant term and the coefficients of the sub-leading corrections of $`O(A_H^{-1})`$ might be affected. The same appears to be true for values of the level $`k`$ away from the asymptotic value assigned above: the coefficient of $`lnA_H`$ is once again unaffected. Thus, the leading logarithmic correction with coefficient -3/2 that we have found for the black hole entropy is in this sense universal. Moreover, although we have set $`\beta =\beta _0ln2`$ in the above formulae, the coefficient of the $`lnA_H`$ term is independent of $`\beta `$, a feature not shared by the semiclassical area law.
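The -3/2 coefficient can also be checked numerically from Eq. (14); the sketch below (using only the B-function expression, with illustrative values of $`p_0`$) extracts the coefficient of $`lnp_0`$ from the exact large-$`k`$ formula:

```python
from math import lgamma, log, pi

def ln_N(p0):
    # ln of Eq. (14): 2^(p0+2) / (pi * (p0+2)) * B((p0+1)/2, 1/2)
    ln_B = lgamma((p0 + 1) / 2) + lgamma(0.5) - lgamma((p0 + 2) / 2)
    return (p0 + 2) * log(2) - log(pi * (p0 + 2)) + ln_B

f = lambda p0: ln_N(p0) - p0 * log(2)          # subtract the area-law piece
slope = (f(10**6) - f(10**3)) / (log(10**6) - log(10**3))
print(round(slope, 4))                          # approaches -3/2
```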
It is therefore clear that the leading correction (and maybe also the subleading ones) to the BH entropy is negative. One way to understand this could be the information-theoretic approach of Bekenstein : black hole entropy represents a lack of information about the quantum states which arise in the various ways of gravitational collapse that lead to the formation of black holes with the same mass, charge and angular momentum. Thus, the BH entropy is the "maximal" entropy that a black hole can have; incorporation of the leading quantum effects reduces the entropy. The logarithmic nature of the leading correction points to the possible existence of what might be called a "non-perturbative fixed point". That this happens in the physical world of four dimensions is perhaps not without interest.
Recently, the zeroth and first laws of black hole mechanics have been derived for situations with radiation present in the vicinity of the horizon, using the notion of the isolated horizon . Our conclusions above for the case of non-rotating black holes hold for such generalizations as well. Note however that while the foregoing analysis involves $`SU(2)_k`$ Chern-Simons theory, for large $`k`$ this reduces to a specific $`U(1)`$ theory, presumably related to the "gauge fixed" classical theory discussed in . The charge spectrum of this $`U(1)`$ theory is discrete and bounded from above by $`k`$. The $`SU(2)`$ origin of the theory thus provides a natural "regularization" for the calculation of the number of conformal blocks.
Note Added: After the first version of this paper appeared in the Archives, it has been brought to our attention that corrections to the area law in the form of a logarithm of the horizon area have been obtained earlier for extremal Reissner-Nordstrom and dilatonic black holes. These corrections are due to quantum scalar fields propagating in fixed classical backgrounds appropriate to these black holes. The coefficient of the $`lnA_H`$ term that appears in ref. is different from ours. This is only to be expected since, in contrast to ref. , our corrections originate from non-perturbative quantum fluctuations of spacetime geometry (for generic non-rotating black holes), in the absence of matter fields. Thus, this correction is finite and independent of any arbitrary "renormalization scale" associated with divergences due to quantum matter fluctuations in a fixed classical background.
We thank Prof. A. Ashtekar for many illuminating discussions and Prof. R. Mann for useful correspondence.
# The Physical Implementation of Quantum Computation
## I Introduction
<sup>*</sup>Prepared for Fortschritte der Physik special issue, Experimental Proposals for Quantum Computation, eds. H.-K. Lo and S. Braunstein.
The advent of quantum information processing, as an abstract concept, has given birth to a great deal of new thinking, of a very concrete form, about how to create physical computing devices that operate in the hitherto unexplored quantum mechanical regime. The efforts now underway to produce working laboratory devices that perform this profoundly new form of information processing are the subject of this book.
In this chapter I provide an overview of the common objectives of the investigations reported in the remainder of this special issue. The scope of the approaches, proposed and underway, to the implementation of quantum hardware is remarkable, emerging from specialties in atomic physics, in quantum optics, in nuclear and electron magnetic resonance spectroscopy, in superconducting device physics, in electron physics, and in mesoscopic and quantum dot research. This amazing variety of approaches has arisen because, as we will see, the principles of quantum computing are posed using the most fundamental ideas of quantum mechanics, ones whose embodiment can be contemplated in virtually every branch of quantum physics.
The interdisciplinary spirit which has been fostered as a result is one of the most pleasant and remarkable features of this field. The excitement and freshness that has been produced bodes well for the prospect for discovery, invention, and innovation in this endeavor.
## II Why quantum information processing?
The shortest of answers to this question would be, why not? The manipulation and transmission of information is today carried out by physical machines (computers, routers, scanners, etc.), in which the embodiment and transformations of this information can be described using the language of classical mechanics. But the final physical theory of the world is not Newtonian mechanics, and there is no reason to suppose that machines following the laws of quantum mechanics should have the same computational power as classical machines; indeed, since Newtonian mechanics emerges as a special limit of quantum mechanics, quantum machines can only have greater computational power than classical ones. The great pioneers and visionaries who pointed the way towards quantum computers, Deutsch, Feynman, and others, were stimulated by such thoughts. Of course, by a similar line of reasoning, it may well be asked whether machines embodying the principles of other refined descriptions of nature (perhaps general relativity or string theory) may have even more information processing capabilities; speculations exist about these more exotic possibilities, but they are beyond the scope of the present discussion.
But computing with quantum mechanics really deserves a lot more attention than wormhole computing or quantum-gravity computing; quantum computing, while far in the future from the perspective of CMOS roadmaps and projections of chip fab advances, can certainly be seen as a real prospect from the perspective of research studies in quantum physics. It does not require science fiction to envision a quantum computer; the proposals discussed later in this issue paint a rather definite picture of what a real quantum computer will look like.
So, how much is gained by computing with quantum physics over computing with classical physics? We do not seem to be near to a final answer to this question, which is natural since even the ultimate computing power of classical machines remains unknown. But the answer as we know it today has an unexpected structure; it is not that quantum tools simply speed up all information processing tasks by a uniform amount. By a standard complexity measure (i.e., the way in which the number of computational steps required to complete a task grows with the "size" $`n`$ of the task), some tasks are not sped up at all by using quantum tools (e.g., obtaining the $`n`$th iterate of a function, $`f(f(\cdots f(x)\cdots ))`$), some are sped up moderately (locating an entry in a database of $`n`$ entries), and some are apparently sped up exponentially (Shor's algorithm for factoring an $`n`$-digit number).
In other types of information processing tasks, particularly those involving communication, both quantitative and qualitative improvements are seen: for certain tasks (choosing a free day for an appointment between two parties out of $`n`$ days) there is a quadratic reduction in the amount of communicated data required if quantum states rather than classical states are transmitted. For some tasks (the "set disjointness problem", related to allocating non-overlapping segments of a shared memory in a distributed computation) the reduction of the required communication is exponential. Finally, there are tasks that are doable in the quantum world that have no counterpart classically: quantum cryptography provides an absolute secrecy of communication between parties that is impossible classically. And for some games, winning strategies become possible with the use of quantum resources that are not available otherwise.
This issue, and this chapter, are primarily concerned with the "hows" of quantum computing rather than the "whys", so we will leave behind the computer science after this extremely brief mention. There is no shortage of other places to obtain more information about these things; I recommend the recent articles by Aharonov and by Cleve ; other general introductions will give the reader pointers to the already vast specialized literature on this subject.
## III Realizing quantum computation
Let me proceed with the main topic: the physical realization of quantum information processing. As a guide to the remainder of the special issue, and as a means of reviewing the basic steps required to make quantum computation work, I can think of no better plan than to review a set of basic criteria that my coworkers and I have been discussing over the last few years for the realization of quantum computation (and communication), and to discuss the application of these criteria to the multitude of physical implementations that are found below.
So, without further ado, here are the
Five (plus two) requirements for the implementation of quantum computation
1. A scalable physical system with well characterized qubits
For a start, a physical system containing a collection of qubits is needed. A qubit (or, more precisely, the embodiment of a qubit) is simply a quantum two-level system like the two spin states of a spin 1/2 particle, like the ground and excited states of an atom, or like the vertical and horizontal polarization of a single photon. The generic notation for a qubit state denotes one state as $`|0\rangle `$ and the other as $`|1\rangle `$. The essential feature that distinguishes a qubit from a bit is that, according to the laws of quantum mechanics, the permitted states of a single qubit fill up a two-dimensional complex vector space; the general state is written $`a|0\rangle +b|1\rangle `$, where $`a`$ and $`b`$ are complex numbers, and the normalization convention $`|a|^2+|b|^2=1`$ is normally adopted. The general state of two qubits, $`a|00\rangle +b|01\rangle +c|10\rangle +d|11\rangle `$, is a four-dimensional vector, one dimension for each distinguishable state of the two systems. These states are generically entangled, meaning that they cannot be written as a product of the states of the two individual qubits. The general state of $`n`$ qubits is specified by a $`2^n`$-dimensional complex vector.
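As a small illustration of the product-state criterion (a sketch, not part of the requirements themselves): a two-qubit state is unentangled exactly when its 2x2 coefficient matrix has a single nonzero singular value:

```python
import numpy as np

def schmidt_coefficients(state):
    # singular values of the 2x2 coefficient matrix of a normalized
    # two-qubit state a|00> + b|01> + c|10> + d|11>
    return np.linalg.svd(np.asarray(state).reshape(2, 2), compute_uv=False)

bell = np.array([0, 1, 1, 0]) / np.sqrt(2)                # (|01> + |10>)/sqrt(2)
product = np.kron([1, 0], np.array([1, 1]) / np.sqrt(2))  # |0> times (|0>+|1>)/sqrt(2)
print(schmidt_coefficients(bell))     # two equal values: maximally entangled
print(schmidt_coefficients(product))  # one nonzero value: a product state
```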
A qubit being โwell characterizedโ means several different things. Its physical parameters should be accurately known, including the internal Hamiltonian of the qubit (which determines the energy eigenstates of the qubit, which are often, although not always, taken as the $`|0`$ and $`|1`$ states), the presence of and couplings to other states of the qubit, the interactions with other qubits, and the couplings to external fields that might be used to manipulate the state of the qubit. If the qubit has third, fourth, etc., levels, the computerโs control apparatus should be designed so that the probability of the system ever going into these states is small. The smallness of this and other parameters will be determined by the capabilities of quantum error correction, which will be discussed under requirement 3.
Recognizing a qubit can be trickier than one might think. For example, we might consider a pair of one-electron quantum dots that share a single electron between them as a two-qubit system. It is certainly true that we can denote the presence or absence of an electron on each dot by $`|0`$ and $`|1`$, and it is well known experimentally how to put this system into the โentangledโ state $`1/\sqrt{2}(|01+|10)`$ in which the electron is in a superposition of being on the left dot and the right dot. But it is fallacious to consider this as a two-qubit system; while the states $`|00`$ and $`|11`$ are other allowed physical states of the dots, superselection principles forbid the creation of entangled states involving different particle numbers such as $`1/\sqrt{2}(|00+|11)`$.
It is therefore false to consider this as a two-qubit system, and, since there are not two qubits, it is nonsense to say that there is entanglement in this system. It is correct to say that the electron is in a superposition of different quantum states living on the two different dots. It is also perfectly correct to consider this system to be the embodiment of a single qubit, spanned by the states (in the misleading notation above) $`|01`$ ("electron on the right dot") and $`|10`$ ("electron on the left dot"). Indeed, several of the viable proposals, including the ones by Schön, Averin, and Tanamoto in this special issue, use exactly this system as a qubit. However, false lines of reasoning like the one outlined here have sunk various proposals before they were properly launched (no such abortive proposals are represented in this book, but they can be found occasionally in the literature).
An amazing variety of realizations of the qubit are represented in this volume. There is a very well developed line of work that began with the proposal of Cirac and Zoller for an ion-trap quantum computer, in which, in its quiescent state, the computer holds the qubits in pairs of energy levels of ions held in a linear electromagnetic trap. Various pairs of energy levels (e.g., Zeeman-degenerate ground states, as are also used in the NMR approach discussed by Cory) have been proposed and investigated experimentally. The many neutral-atom proposals (see chapters by Kimble, Deutsch, and Briegel) use similar atomic energy levels of neutral species. These atomic-physics based proposals use other auxiliary qubits such as the position of atoms in a trap or lattice, the presence or absence of a photon in an optical cavity, or the vibrational quanta of trapped electrons, ions or atoms (in the Platzman proposal below this is the primary qubit). Many of the solid-state proposals exploit the fact that impurities or quantum dots have well characterized discrete energy level spectra; these include the spin states of quantum dots (see chapters by Loss and Imamoglu), the spin states of donor impurities (see Kane), and the orbital or charge states of quantum dots (see Tanamoto). Finally, there are a variety of interesting proposals which use the quantized states of superconducting devices, either ones involving the (Cooper-pair) charge (see Schön, Averin), or the flux (see Mooij).
2. The ability to initialize the state of the qubits to a simple fiducial state, such as $`|000\mathrm{\dots }`$
This arises first from the straightforward computing requirement that registers should be initialized to a known value before the start of computation. There is a second reason for this initialization requirement: quantum error correction (see requirement 3 below) requires a continuous, fresh supply of qubits in a low-entropy state (like the $`|0`$ state). The need for a continuous supply of 0s, rather than just an initial supply, is a real headache for many proposed implementations. But since it is likely that a demonstration of a substantial degree of quantum error correction is still quite some time off, the problem of continuous initialization does not have to be solved very soon; still, experimentalists should be aware that the speed with which a qubit can be zeroed will eventually be a very important issue. If the time it takes to do this initialization is relatively long compared with gate-operation times (see requirement 4), then the quantum computer will have to be equipped with some kind of "qubit conveyor belt", on which qubits in need of initialization are carried away from the region in which active computation is taking place, initialized while on the "belt", then brought back to the active place after the initialization is finished. (A similar parade of qubits will be envisioned in requirement 5 for the case of low quantum-efficiency measurements.)
There are two main approaches to setting qubits to a standard state: the system can either be โnaturallyโ cooled when the ground state of its Hamiltonian is the state of interest, or the standard state can be achieved by a measurement which projects the system either into the state desired or another state which can be rotated into it. These approaches are not fundamentally different from one another, since the projection procedure is a form of cooling; for instance, the laser cooling techniques used routinely now for the cooling of ion states to near their ground state in a trap are closely connected to the fluorescence techniques used to measure the state of these ions. A more โnaturalโ kind of cooling is advocated in many of the electron spin resonance based techniques (using quantum dots or impurities) in which the spins are placed in a strong magnetic field and allowed to align with it via interaction with their heat bath. In this kind of approach the time scale will be a problem. Since the natural thermalization times are never shorter than the decoherence time of the system, this procedure will be too slow for the needs of error correction and a โconveyor beltโ scheme would be required. Cooling by projection, in which the Hamiltonian of the system and its environment are necessarily perturbed strongly, will have a time scale dependent on the details of the setup, but potentially much shorter than the natural relaxation times. One cannot say too much more at this point, as the schemes for measurement have in many cases not been fully implemented (see requirement 5). In the NMR quantum computer implementations to date (see Cory below), cooling of the initial state has been foregone altogether; it is acknowledged that until some of the proposed cooling schemes are implemented (a nontrivial thing to do), NMR can never be a scalable scheme for quantum computing.
3. Long relevant decoherence times, much longer than the gate operation time
Decoherence times characterize the dynamics of a qubit (or any quantum system) in contact with its environment. The (somewhat overly) simplified definition of this time is that it is the characteristic time for a generic qubit state $`|\psi \rangle =a|0\rangle +b|1\rangle `$ to be transformed into the mixture $`\rho =|a|^2|0\rangle \langle 0|+|b|^2|1\rangle \langle 1|`$. A more proper characterization of decoherence, in which the decay can depend on the form of the initial state, in which the state amplitudes may change as well, and in which other quantum states of the qubit can play a role (in a special form of state decay called "leakage" in quantum computing), is rather more technical than I want to get here; but see the references for a good general discussion of all these. Even the simplest discussion of decoherence that I have given here should also be extended to include the possibility that the decoherence of neighboring qubits is correlated. It seems safest to assume that they will be neither completely correlated nor completely uncorrelated, and the thinking about error correction has taken this into account.
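In this simplified picture, decoherence simply damps the off-diagonal (coherence) terms of the density matrix on the decoherence time scale while leaving the populations alone. A minimal numerical sketch of that toy model (my own illustration; the time constant and units are arbitrary assumptions):

```python
import numpy as np

# Pure dephasing of the state (|0> + |1>)/sqrt(2): the coherences
# decay on the decoherence time T2, the populations do not.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
rho0 = np.array([[abs(a)**2, a * np.conj(b)],
                 [np.conj(a) * b, abs(b)**2]], dtype=complex)

T2 = 1.0   # decoherence time, arbitrary units
for t in [0.0, 0.5, 1.0, 5.0]:
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-t / T2)
    rho[1, 0] *= np.exp(-t / T2)
    print(t, np.round(rho, 3))
# For t >> T2 the state approaches the mixture |a|^2 |0><0| + |b|^2 |1><1|.
```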
Decoherence is very important for the fundamentals of quantum physics, as it is identified as the principal mechanism for the emergence of classical behavior. For the same reason, decoherence is very dangerous for quantum computing, since if it acts for very long, the capability of the quantum computer will not be so different from that of a classical machine. The decoherence time must be long enough that the uniquely quantum features of this style of computation have a chance to come into play. How long is โlong enoughโ is also indicated by the results of quantum error correction, which I will summarize shortly.
I have indicated that the โrelevantโ decoherence times should be long enough. This emphasizes that a quantum particle can have many decoherence times pertaining to different degrees of freedom of that particle. But many of these can be irrelevant to the functioning of this particle as a qubit. For example, the rapid decoherence of an electronโs position state in a solid state environment does not preclude its having a very long spin coherence time, and it can be arranged that this is the only time relevant for quantum computation. Which time is relevant is determined by the choice of the qubit basis states $`|0`$ and $`|1`$; for example, if these two states correspond to different spin states but identical orbital states, then orbital decoherence will be irrelevant.
One might worry that the decoherence time necessary to do a successful quantum computation will scale with the duration of the computation. This would place incredibly stringent requirements on the physical system implementing the computation. Fortunately, in one of the great discoveries of quantum information theory (in 1995-6), it was found that error correction of quantum states is possible and that this correction procedure can be successfully applied in quantum computation, putting much more reasonable (although still daunting) requirements on the needed decoherence times.
In brief, quantum error correction starts with coding; as in binary error correction codes, in which only a subset of all boolean strings are "legal" states, quantum error correction codes consist of legal states confined to a subspace of the vector space of a collection of qubits. Departure from this subspace is caused by decoherence. Codes can be chosen such that, with a suitable sequence of quantum computations and measurements of some ancillary qubits, the error caused by decoherence can be detected and corrected. As noted above, these ancillary qubits have to be continuously refreshed for use. I will not go much farther into the subject here; the references provide more detail. It is known that quantum error correction can be made fully fault tolerant, meaning that error correction operations can be successfully intermingled with quantum computation operations, that errors occurring during the act of error correction, if they occur at a sufficiently small rate, do no harm, and that the act of quantum computation does not itself cause an unreasonable proliferation of errors.
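The flavor of the procedure is conveyed by the simplest (and least powerful) example, the three-qubit bit-flip repetition code. The sketch below is my own illustration: a real device would extract the two parities with ancillary qubits and gates rather than by inspecting the state vector, but the diagnose-and-correct logic is the same.

```python
import numpy as np

# Encode a logical qubit as a|000> + b|111>; a single bit-flip error
# is located by the parities Z1Z2 and Z2Z3 and then undone.
a, b = 0.8, 0.6
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = a, b

def apply_x(state, qubit):   # bit flip on qubit 0, 1 or 2 (leftmost = 0)
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = state[i]
    return out

def parity(state, q1, q2):
    # Expectation value of Z_q1 Z_q2 (here always +1 or -1).
    signs = [(-1)**(((i >> (2 - q1)) ^ (i >> (2 - q2))) & 1) for i in range(8)]
    return float(np.sum(np.array(signs) * np.abs(state)**2))

corrupted = apply_x(state, 1)                 # an error strikes the middle qubit
s1, s2 = parity(corrupted, 0, 1), parity(corrupted, 1, 2)
# Syndrome table: (-1,+1) -> qubit 0, (-1,-1) -> qubit 1, (+1,-1) -> qubit 2.
which = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(round(s1), round(s2))]
recovered = apply_x(corrupted, which)
print(np.allclose(recovered, state))          # True: the error is corrected
```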
These detailed analyses have indicated the magnitude of decoherence time scales that are acceptable for fault-tolerant quantum computation. The result is that, if the decoherence time is $`10^4`$ to $`10^5`$ times the "clock time" of the quantum computer, that is, the time for the execution of an individual quantum gate (see requirement 4), then error correction can be successful. This is, to tell the truth, a rather stringent condition; quantum systems frequently do not have such long decoherence times. But sometimes they do, and our search for a successful physical implementation must turn towards these. At least this result says that the required decoherence rate does not become ever smaller as the size and duration of the quantum computation grows. So, once the desired threshold is attainable, decoherence will not be an obstacle to scalable quantum computation.
Having said this, it must be admitted that it will be some time before it is even possible to subject quantum error correction to a reasonable test. Nearly all parts of requirements 1-5 must be in place before such a test is possible. And even the most limited application of quantum error correction has quite a large overhead: roughly 10 ancillary qubits must be added for each individual qubit of the computation. Fortunately, this overhead ratio grows only logarithmically as the size of the quantum computation is increased.
In the short run, it is at least possible to design and perform experiments which measure the decoherence times and other relevant properties (such as the correlation of decoherence of neighboring qubits) of candidate implementations of qubits. With such initial test experiments, caution must be exercised in interpreting the results, because decoherence is a very system-specific phenomenon, depending on the details of all the qubitsโ couplings to various environmental degrees of freedom. For example, the decoherence time of the spin of an impurity in the bulk of a perfect semiconductor may not be the same as its decoherence time when it is near the surface of the solid, in the immediate neighborhood of device structures designed to manipulate its quantum state. Test experiments should probe decoherence in as realistic a structure as is possible.
4. A โuniversalโ set of quantum gates
This requirement is of course at the heart of quantum computing. A quantum algorithm is typically specified as a sequence of unitary transformations $`U_1`$, $`U_2`$, $`U_3`$, ..., each acting on a small number of qubits, typically no more than three. The most straightforward transcription of this into a physical specification is to identify Hamiltonians which generate these unitary transformations, viz., $`U_1=e^{iH_1t/\hbar }`$, $`U_2=e^{iH_2t/\hbar }`$, $`U_3=e^{iH_3t/\hbar }`$, etc.; then, the physical apparatus should be designed so that $`H_1`$ can be turned on from time $`0`$ to time $`t`$, then turned off and $`H_2`$ turned on from time $`t`$ to time $`2t`$, etc.
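To make the correspondence concrete, a deliberately trivial sketch (my own illustration; I use the common sign convention $`U=e^{-iHt/\hbar }`$ and natural units): turning on a fixed $`\sigma _x`$ Hamiltonian for a time $`t`$ generates a one-qubit rotation about the $`x`$ axis.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                      # natural units
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Turn on H = (hbar*omega/2) sigma_x for a time t with omega*t = pi/2:
# the generated unitary U = exp(-i H t / hbar) is an x-rotation.
omega, t = np.pi, 0.5
H = 0.5 * hbar * omega * sx
U = expm(-1j * H * t / hbar)

print(np.allclose(U @ U.conj().T, np.eye(2)))   # True: U is unitary
print(np.round(U @ np.array([1, 0]), 3))        # |0> -> (|0> - i|1>)/sqrt(2)
```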
Would that life were so simple! In reality what can be done is much less, but much less can be sufficient. Understanding exactly how much less is still enough, is the main complication of this requirement. In all the physical implementations discussed in this volume, only particular sorts of Hamiltonians can be turned on and off; in most cases, for example, only two-body (two-qubit) interactions are considered. This immediately poses a problem for a quantum computation specified with three-qubit unitary transformations; fortunately, of course, these can always be re-expressed in terms of sequences of one- and two-body interactions, and the two-body interactions can be of just one type, the โquantum XORโ or โcNOTโ. There are some implementations in which multi-qubit gates can be implemented directly.
However, this still leaves a lot of work to do. In some systems, notably in NMR (see Cory), there are two-body interactions present which cannot be turned off, as well as others which are switchable. This would in general be fatal for quantum computation, but the particular form of the fixed interactions permits their effects to be annulled by particular "refocusing" sequences of the controllable interactions, and it has recently been discovered that these refocusing sequences can be designed and implemented efficiently.
For many other systems, the two-body Hamiltonian needed to generate directly the cNOT unitary transformation is not available. For example, in the quantum-dot proposal described by Loss below, the only two-body interaction which should be easily achievable is the exchange interaction between neighboring spins, $`H\propto \vec{S}_i\cdot \vec{S}_{i+1}`$; in the Imamoglu chapter, the attainable interaction is of the XY type, i.e., $`H\propto S_{ix}S_{jx}+S_{iy}S_{jy}`$. An important observation is that with the appropriate sequence of exchange or XY interactions, in conjunction with particular one-body interactions (which are assumed to be more easily doable), the cNOT transformation can be synthesized. It is incumbent on each implementation proposal to exhibit such a sequence for producing the cNOT using the interactions that are naturally realizable.
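For the exchange case the essential resource is easy to exhibit numerically: evolving two spins under the Heisenberg exchange for a time $`t=\pi /J`$ gives the two-qubit SWAP gate up to a global phase, and half that time gives the entangling "square root of SWAP" out of which, together with one-qubit gates, a cNOT can be composed. A minimal check (my own illustration, with $`\hbar =1`$ and an assumed coupling $`J`$):

```python
import numpy as np
from scipy.linalg import expm

# Two-spin exchange Hamiltonian H = (J/4)(sx.sx + sy.sy + sz.sz),
# i.e. J S_i . S_j for spin-1/2 operators S = sigma/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
J = 1.0
H = (J / 4) * sum(np.kron(s, s) for s in (sx, sy, sz))

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

U = expm(-1j * H * (np.pi / J))            # evolve for t = pi/J
phase = U[0, 0] / SWAP[0, 0]               # strip the global phase
print(np.allclose(U, phase * SWAP))        # True: exchange generates SWAP

root = expm(-1j * H * (np.pi / (2 * J)))   # t = pi/(2J): "sqrt(SWAP)"
print(np.allclose(root @ root, U))         # True: it squares to SWAP
```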
Often there is also some sophisticated thinking required about the time profile of the two-qubit interaction. The naive description above uses a "square pulse" time profile, but often this is completely inappropriate; for instance, if the Hamiltonian can also couple the qubit to other, higher-lying levels of the quantum system, often the only way to get the desired transformation is to turn on and off the interaction smoothly and slowly enough that an adiabatic approximation is accurate (in a solid-state context, see also the references). The actual duration of the pulse will have to be sufficiently long that any such adiabatic requirement is satisfied; then typically only the time integral $`\int dt\,H(t)`$ is relevant for the quantum gate action. The overall time scale of the interaction pulse is also controlled by the attainable maximum size of the matrix elements of $`H(t)`$, which will be determined by various fundamental considerations, like the requirement that the system remain in the regime of validity of a linear approximation, and practical considerations, like the laser power that can be concentrated on a particular ion. Given these various constraints, the "clock time" of the quantum computer will be determined by the time interval needed such that two consecutive pulses have negligible overlap.
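When $`H(t)`$ commutes with itself at different times (a single-axis drive, for instance), the statement that only the pulse area matters is easy to verify. A minimal sketch (my own illustration; the two pulse shapes and the units are arbitrary assumptions):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def gate_from_pulse(envelope, times):
    # Evolve under H(t) = envelope(t) * sigma_x / 2.  Since H(t) commutes
    # with itself at all times, only the integral of the envelope matters.
    U = np.eye(2, dtype=complex)
    dt = times[1] - times[0]
    for t in times:
        U = expm(-1j * envelope(t) * sx / 2 * dt) @ U
    return U

times = np.linspace(0, 10, 4000)
area = np.pi / 2                    # target pulse area (rotation angle)

square = lambda t: area / 10.0      # flat pulse spread over the full window
gauss = lambda t: area * np.exp(-(t - 5)**2 / 2) / np.sqrt(2 * np.pi)

U1 = gate_from_pulse(square, times)
U2 = gate_from_pulse(gauss, times)
print(np.allclose(U1, U2, atol=1e-2))   # True: same gate, different shapes
```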
Another consideration, which does not seem to present a problem with any current implementation schemes, but which may be an issue in the future, is the classicality of the control apparatus. We say that the interaction Hamiltonian $`H(t)`$ has a time profile which is controlled externally by some โclassicalโ means, that is, by the intensity of a laser beam, the value of a gate voltage, or the current level in a wire. But each of these control devices is made up themselves of quantum mechanical parts. When we require that these behave classically, it means that their action should proceed without any entanglement developing between these control devices and the quantum computer. Estimates indicate that this entanglement can indeed be negligible, but this effect needs to be assessed for each individual case.
In many cases it is impossible to turn on the desired interaction between a pair of qubits; for instance, in the ion-trap scheme, no direct interaction is available between the ion-level qubits. In this and in other cases, a special quantum subsystem (sometimes referred to as a โbus qubitโ) is used which can interact with each of the qubits in turn and mediate the desired interaction: for the ion trap, this is envisioned to be the vibrational state of the ion chain in the trap; in other cases it is a cavity photon whose wavefunction overlaps all the qubits. Unfortunately, this auxiliary quantum system introduces new channels for the environment to couple to the system and cause decoherence, and indeed the decoherence occurring during gate operation is of concern in the ion-trap and cavity-quantum electrodynamics schemes.
Some points about requirement 4 are important to note in relation to the implementation of error correction. Successful error correction requires fully parallel operation, meaning that gate operations involving a finite fraction of all the qubits must be doable simultaneously. This can present a problem with some of the proposals in which the single โbus qubitโ is needed to mediate each interaction. On the other hand, the constraint that interactions are only among nearest neighbors in a lattice, as in many of the solid-state proposals, does allow for sufficient parallelism.
Quantum gates cannot be implemented perfectly; we must expect both systematic and random errors in the implementation of the necessary Hamiltonians. Both types of errors can be viewed as another source of decoherence and thus error correction techniques are effective for producing reliable computations from unreliable gates, if the unreliability is small enough. The tolerable unreliability due to random errors is in the same vicinity as the decoherence threshold, that is, the magnitude of random errors should be $`10^{-4}`$ to $`10^{-5}`$ per gate operation or so. It might be hoped that systematic errors could be virtually eliminated by careful calibration; but this will surely not always be the case. It seems harder to give a good rule for how much systematic error is tolerable: the conservative estimates give a very, very small number (the square of the above), but on the other hand there seems to be some evidence that certain important quantum computations (e.g., the quantum Fourier transform) can tolerate a very high level of systematic error (over- or under-rotation). Some types of very large errors may be tolerable if their presence can be detected and accounted for on the fly (we are thinking, for example, about charge switching in semiconductors or superconductors).
Error correction requires that gate operations be done on coded qubits, and one might worry that such operations would require a new repertoire of elementary gate operations for the base-level qubits which make up the code. For the most important error correction techniques, using the so-called "stabilizer" codes, this is not the case. The base-level toolkit is exactly the same as for the unencoded case: one-bit gates and cNOTs, or any gate repertoire that can produce these, are adequate. Sometimes the use of coding can actually reduce the gate repertoire required: in the work on decoherence-free subspaces and subsystems, codes are introduced using blocks of three and four qubits for which two-qubit exchange interactions alone are enough to implement general quantum computation. This simplification could be very useful in the quantum-dot or semiconductor-impurity implementations.
5. A qubit-specific measurement capability
Finally, the result of a computation must be read out, and this requires the ability to measure specific qubits. In an ideal measurement, if a qubit's density matrix is $`\rho =p|0\rangle \langle 0|+(1-p)|1\rangle \langle 1|+\alpha |0\rangle \langle 1|+\alpha ^{*}|1\rangle \langle 0|`$, the measurement should give outcome "0" with probability $`p`$ and "1" with probability $`1-p`$ independent of $`\alpha `$ and of any other parameters of the system, including the state of nearby qubits, and without changing the state of the rest of the quantum computer. If the measurement is "non-demolition", that is, if in addition to reporting outcome "0" the measurement leaves the qubit in state $`|0\rangle \langle 0|`$, then it can also be used for the state preparation of requirement 2; but requirement 2 can be fulfilled in other ways.
Such an ideal measurement as I have described is said to have 100% quantum efficiency; real measurements always have less. While the fidelity of a quantum measurement is not captured by a single number, the single quantum-efficiency parameter is often a very useful way to summarize it, just as the decoherence time is a useful if incomplete summary of the damage caused to a quantum state by the environment.
While quantum efficiency of 100% is desirable, much less is needed for quantum computation; there is, in fact, a tradeoff possible between quantum efficiency and other resources which results in reliable computation. As a simple example, if the quantum efficiency is 90%, then, in the absence of any other imperfections, a computation with a single-bit output (a so-called โdecision problemโ, common in computer science) will have 90% reliability. If 97% reliability is needed, this can just be achieved by rerunning the calculation three times. Much better, actually, is to โcopyโ the single output qubit to three, by applying two cNOT gates involving the output qubit and two other qubits set to $`|0`$, and measuring those three. (Of course, qubits cannot be โcopiedโ, but their value in a particular basis can.) In general, if quantum efficiency $`q`$ is available, then copying to somewhat more than $`1/q`$ qubits and measuring all of these will result in a reliable outcome. So, a quantum efficiency of 1% would be usable for quantum computation, at the expense of hundreds of copies/remeasures of the same output qubit. (This assumes that the measurement does not otherwise disturb the quantum computer. If it does, the possibilities are much more limited.)
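The arithmetic behind such reliability figures is elementary. A quick sketch (my own, assuming independent repetitions and a simple majority vote; it reproduces the 97% quoted above):

```python
from math import comb

def majority_reliability(p, n):
    # Probability that a majority vote over n independent runs is correct,
    # when each run is independently correct with probability p (n odd).
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_reliability(0.90, 3))   # 0.972
print(majority_reliability(0.90, 7))   # ~0.997
```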
Even quantum efficiencies much, much lower than 1% can be and are used for successful quantum computation: this is the "bulk" model of NMR (see Cory below), where macroscopic numbers of copies of the same quantum computer (different molecules in solution) run simultaneously, with the final measurement done as an ensemble average over the whole sample. These kinds of weak measurements, in which each individual qubit is hardly disturbed, are quite common and well understood in condensed-matter physics.
If a measurement can be completed quickly, on the timescale of $`10^{-4}`$ of the decoherence time, say, then its repeated application during the course of quantum computation is valuable for simplifying the process of quantum error correction. On the other hand, if this fast measurement capability is not available, quantum error correction is still possible, but it then requires a greater number of quantum gates to implement.
Other tradeoffs between the complexity and reliability of quantum measurement vs. those of quantum computation have recently been explored. It has been shown that if qubits can be initialized into pairs of maximally entangled states, and two-qubit measurements in the so-called Bell basis ($`\mathrm{\Psi }^\pm =|01\pm |10`$, $`\mathrm{\Phi }^\pm =|00\pm |11`$) are possible, then no two-qubit quantum gates are needed, one-bit gates alone suffice. Now, often this tradeoff will not be useful, as in many schemes a Bell measurement would require two-bit quantum gates.
But the overall message, seen in many of our requirements, is that, more and more, the theoretical study of quantum computation offers a great variety of tradeoffs for potential implementations: if X is very hard, it can be substituted with more of Y. Of course, in many cases both X and Y are beyond the present experimental state of the art; but a thorough knowledge of these tradeoffs should be very useful for devising a rational plan for the pursuit of future experiments.
## IV Desiderata for quantum communication
For computation alone, the five requirements above suffice. But the advantages of quantum information processing are not manifest solely, or perhaps even principally, in straightforward computation. There are many kinds of information-processing tasks, reviewed briefly at the beginning, that involve more than just computation, and for which quantum tools provide a unique advantage.
The tasks we have in mind here all involve not only computation but also communication. The list of these tasks that have been considered in the light of quantum capabilities, and for which some advantage has been found in using quantum tools, is fairly long and diverse: it includes secret key distribution, multiparty function evaluation as in appointment scheduling, secret sharing, and game playing.
When we say communication we mean quantum communication: the transmission of intact qubits from place to place. This obviously adds more features that the physical apparatus must have to carry out this information processing. We formalize these by adding two more items to the list of requirements:
6. The ability to interconvert stationary and flying qubits
7. The ability faithfully to transmit flying qubits between specified locations
These two requirements are obviously closely related, but it is worthwhile to consider them separately, because some tasks need one but not the other. For instance, quantum cryptography involves only requirement 7; it is sufficient to create and detect flying qubits directly.
I have used the jargon "flying qubits", which has become current in the discussions of quantum communication. Using this term emphasizes that the optimal embodiment of qubits that are readily transmitted from place to place is likely to be very different from the optimal qubits for reliable local computation. Indeed, almost all proposals assume that photon states, with the qubit encoded either in the polarization or in the spatial wavefunction of the photon, will be the flying qubit of choice, and indeed, the well developed technology of light transmission through optical fibers provides a very promising system for the transmission of qubits. I would note, though, that my colleagues and I have raised the possibility that electrons traveling through solids could provide another realization of the flying qubit.
Only a few completely developed proposals exist which incorporate requirements 6 and 7. Of course, there are a number of quite detailed studies of 7, in the sense that experiments on quantum cryptography have been very concerned with the preservation of the photon quantum state during transmission through optical fibers or through the atmosphere. However, these studies are rather disconnected from the other concerns of quantum computing. Requirement 6 is the really hard one; to date the only theoretical proposal sufficiently concrete that experiments addressing it have been planned is the scheme produced by Kimble and coworkers for unloading a cavity photon into a traveling mode via atomic spectroscopy, and loading it by the time-reversed process. Other promising concepts, like the launching of electrons from quantum dots into quantum wires such that the spin coherence of the electrons is preserved, need to be worked out more fully.
## V Summary
So, what is the "winning" technology going to be? I don't think that any living mortal has an answer to this question, and at this point it may be counterproductive even to ask it. Even though we have lived with quantum mechanics for a century, our study of quantum effects in complex artificial systems like those we have in mind for quantum computing is in its infancy. No one can see how or whether all the requirements above can be fulfilled, or whether there are new tradeoffs, not envisioned in our present theoretical discussions but suggested by further experiments, that might take our investigations down an entirely new path.
Indeed, the above discussion, and the other chapters of this special issue, really do not cover all the foreseeable approaches. I will mention two of which I am aware: first, another computational paradigm, that of the cellular automaton, is potentially available for exploitation. This is distinguished from the above โgeneral purposeโ approach in that it assumes that every bit pattern throughout the computer will be subjected to the same evolution rule. It is known that general-purpose computation is performable, although with considerable overhead, by a cellular automaton. This is true as well for the quantum version of the cellular automaton, as Lloyd indicated in his original work. New theoretical work by Benjamin shows very explicitly how relatively simple local rules would permit the implementation of some quantum computations. This could point us perhaps towards some sort of polymer with a string of qubits on its backbone that can be addressed globally in a spectroscopic fashion. Experiments are not oriented towards this at the moment, but the tradeoffs are very different, and I donโt believe it should be excluded in the future.
Second, even more speculative, but very elegant, is the proposal of Kitaev to use quantum systems with particular kinds of topological excitations, for example nonabelian anyons, for quantum computing. It is hard to see at the moment how to turn this exciting proposal into an experimental program, as no known physical system is agreed to have the appropriate topological excitations. But further research in, for example, the quantum Hall effect might reveal such a system; more likely, perhaps, is that further understanding of this approach, and that of Freedman and his colleagues, will shed more light on doing quantum computing using the โstandardโ approach being considered in this book.
I am convinced of one thing: the ideas of quantum information theory will continue to exert a decisive influence on the further investigation of the fundamental quantum properties of complex quantum systems, and will stimulate many creative and exciting developments for many years to come.
## Acknowledgments
I gratefully acknowledge support from the Army Research Office under contract number DAAG55-98-C-0041. I thank Alec Maassen van den Brink for a careful reading of this manuscript. |
THE CASE FOR INERTIA AS A VACUUM EFFECT:
A REPLY TO WOODWARD AND MAHOOD
York Dobyns
C-131 Engineering Quad, Princeton University
Princeton, NJ 08544-5263
Alfonso Rueda
Department of Electrical Engineering, ECS-561
California State University, Long Beach, CA 90840
Bernard Haisch<sup>a</sup>
<sup>a</sup>current address: California Institute for Physics and Astrophysics, 366 Cambridge Ave., Palo Alto, CA 94306, www.calphysics.org
Solar & Astrophysics Laboratory
Dept. L9-41, Bldg. 252, Lockheed Martin
3251 Hanover St., Palo Alto, CA 94304
Foundations of Physics, in press, Jan. 2000
ABSTRACT: The possibility of an extrinsic origin for inertial reaction forces has recently seen increased attention in the physical literature. Among theories of extrinsic inertia, the two considered by the current work are (1) the hypothesis that inertia is a result of gravitational interactions, and (2) the hypothesis that inertial reaction forces arise from the interaction of material particles with local fluctuations of the quantum vacuum. A recent article supporting the former and criticizing the latter is shown to contain substantial errors.
1. INTRODUCTION
Since the publication of Newtonโs Principia the default assumption of most physicists has been that inertia is intrinsic to mass. Theories of an extrinsic origin for inertia, however, have seen perennial if minor interest. Since the task of physics is to explore causative relationships among natural phenomena, it is appropriate for physicists to devote some work to asking how and why the property of mass arises to produce the phenomenon of inertia, rather than always and only treating it as a definitional property. Recent work, on the other hand, provides a more urgent reason to look into theories of extrinsic inertia: some of them suggest a resolution to one of the more intractable difficulties of current physical theory.
There appears to be a fundamental conflict between quantum theory and gravitational theory. Adler, Casey, and Jacob<sup>(1)</sup> have dubbed this the โvacuum catastropheโ to parallel the โultraviolet catastropheโ associated with blackbody radiation 100 years ago. Quantum field theory predicts a very large vacuum zero-point energy density, which according to general relativity theory (GRT) should have a huge gravitational effect. The discrepancy between theory and observation may be 120 orders of magnitude. As Adler et al. point out: โOne must conclude that there is a deep-seated inconsistency between the basic tenets of quantum field theory and gravity.โ
The problem is so fundamental that elementary quantum mechanics suffices to demonstrate its origin. The intensity of any physical field, such as the electromagnetic field, is associated with an energy density; therefore the average field intensity over some small volume is associated with a total energy. The Heisenberg uncertainty relation (in the $`\mathrm{\Delta }E\mathrm{\Delta }t`$ form) requires that this total energy be uncertain, in inverse proportion to the length of time over which it obtains. This uncertainty requires fluctuations in the field intensity, from one such small volume to another, and from one increment of time to the next; fluctuations which must entail fluctuations in the fields themselves, which must be seen to be more intense as the spatial and temporal resolution increases.
In the more formal and rigorous approach of quantum field theory, the quantization of the electromagnetic field is done "by the association of a quantum-mechanical harmonic oscillator with each mode $`\mathbf{k}`$ of the radiation field."<sup>(2)</sup> Application of the Heisenberg uncertainty relation to a harmonic oscillator immediately requires that its ground state have a non-zero energy of $`\hbar \omega /2`$, because a particle cannot simultaneously be exactly at the bottom of its potential well and have exactly zero momentum. The harmonic oscillators of the EM field are formally identical to those derived for a particle in a suitable potential well; thus there is the same $`\hbar \omega /2`$ zero-point energy expression for each mode of the field as is the case for a mechanical oscillator. Summing up the energy over the modes for all frequencies, directions, and polarization states, one arrives at a zero-point energy density for the electromagnetic field of
$$W=\int_0^{\omega _c}\rho (\omega )\,d\omega =\int_0^{\omega _c}\frac{\hbar \omega ^3}{2\pi ^2c^3}\,d\omega ,$$
$`(1)`$
where $`\omega _c`$ is a postulated cutoff in frequency. In conventional GRT, this zero-point energy density must be a source of gravity. This conflicts with astrophysical observations such as the size, age, and Hubble expansion of the Universe by as much as a factor of $`10^{120}`$. Moreover, in addition to the electromagnetic zero-point energy there is also zero-point energy associated with gluons and the $`W`$ and $`Z`$ vector bosons. From naïve mode counting it would seem that the gluons should contribute eight times as much zero-point energy as do the electromagnetic zero-point photons, since there are eight types of gluons. While this estimate could doubtless be refined with a more sophisticated examination of the gluon model, it nevertheless seems clear that the vacuum energy density of gluons must be at least comparable to, and could quite easily be an order of magnitude or so larger than, the vacuum energy density of photons. The massive vector bosons must likewise provide a contribution of roughly similar scale. The fields associated with other forces thus exacerbate a problem that is already difficult when only electromagnetism is considered.
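The size of the mismatch is easily reproduced numerically. The sketch below is our illustration only; it assumes the cutoff $`\omega _c`$ is placed at the Planck frequency and takes $`H_0`$ of roughly 70 km/s/Mpc, evaluating Eq. (1) in closed form, $`W=\hbar \omega _c^4/8\pi ^2c^3`$, and comparing with the energy density corresponding to the critical cosmological density:

```python
import numpy as np

hbar = 1.055e-34   # J s
c = 3.0e8          # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

# Zero-point energy density of Eq. (1) with a Planck-frequency cutoff.
omega_c = np.sqrt(c**5 / (hbar * G))              # ~1.9e43 rad/s
W_zpf = hbar * omega_c**4 / (8 * np.pi**2 * c**3)

# Energy density of the critical (closure) density for H0 ~ 70 km/s/Mpc.
H0 = 70e3 / 3.086e22                              # s^-1
rho_crit = 3 * H0**2 / (8 * np.pi * G)            # ~9e-27 kg/m^3
W_cosmo = rho_crit * c**2

print(f"W_zpf   ~ {W_zpf:.1e} J/m^3")             # ~6e+111
print(f"W_cosmo ~ {W_cosmo:.1e} J/m^3")           # ~8e-10
print(f"log10(ratio) ~ {np.log10(W_zpf / W_cosmo):.0f}")  # ~121
```

With these (adjustable) choices the discrepancy comes out at some 120 orders of magnitude, in line with the estimates quoted above.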
There is no accepted quantum theory of gravity, but "we might expect on the basis of studies of weak gravitational waves in general relativity that the field would also have a ground state energy $`\hbar \omega /2`$ for each mode and the two polarization states of the waves."<sup>(1)</sup> This too would only compound the problem.
One possible solution to the dilemma lies in the Dirac vacuum. According to theory, the fermion field of virtual quarks, leptons, and their antiparticles, should have negative energy. If there were precise pairing of fermions and bosons, as for example results from supersymmetry, there could be a compensating negative zero-point energy. Unfortunately, while supersymmetry is often used as a starting point in modern theoretical investigations, it has neither been proven necessary nor demonstrated empirically; indeed, the ongoing failure to observe superpartners for any known particles is a longstanding albeit minor embarrassment for the theory (see e.g. Ramond 1981<sup>(3)</sup>).
Another approach is more phenomenological in content. It comes from GRT, though its quantum-field-theoretic interpretation is usually connected to the Dirac vacuum approach. This technique uses the โcosmological constantโ of the Einstein equation to absorb or cancel the effects of an arbitrary energy density. This will be discussed in more detail in a later section; for now it is sufficient to note that both of these approaches require cancellation of opposed densities to an utterly fantastic degree of precision.
One might try taking the position that the zero-point energy must be merely a mathematical artifact of theory. It is sometimes argued, for example, that the zero-point energy is merely equivalent to an arbitrary additive potential energy constant. Indeed, the potential energy at the surface of the earth can take on any arbitrary value, but the falling of an object clearly demonstrates the reality of a potential energy field, the gradient of which is equal to a force. No one would argue that there is no such thing as potential energy simply because it has no well-defined absolute value. Similarly, gradients of the zero-point energy manifest as measurable Casimir forces, which indicates the reality of this sea of energy as well. Unlike the potential energy, however, the zero-point energy is not a floating value with no intrinsically defined reference level. On the contrary, the summation of modes tells us precisely how much energy each mode must contribute to this field, and that energy density must be present unless something else in nature conspires to cancel it.
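For scale, the standard ideal-conductor result for the Casimir pressure between parallel plates, $`P=\pi ^2\hbar c/240d^4`$, is readily evaluated (our illustration only):

```python
import numpy as np

hbar = 1.055e-34   # J s
c = 3.0e8          # m/s

def casimir_pressure(d):
    # Attractive pressure between ideal parallel conducting plates
    # at separation d: P = pi^2 hbar c / (240 d^4).
    return np.pi**2 * hbar * c / (240 * d**4)

for d in (0.1e-6, 0.5e-6, 1.0e-6):
    print(f"d = {d * 1e6:.1f} um : P = {casimir_pressure(d):.2e} Pa")
# ~13 Pa at 0.1 um, falling to ~1.3e-3 Pa at 1 um: small but measurable.
```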
Further arguments for the physical reality of zero-point fluctuations will also be addressed in later sections. For the current introductory purposes we may simply observe that Adler et al. <sup>(1)</sup> summarize the situation thus:
Quantum field theory predicts without ambiguity that the vacuum has an energy density many orders of magnitude greater than nuclear density. Measurement of the Casimir force between conducting plates and related forces verify that the shift in this energy is real, but considerations of gravity in the solar system and in cosmology imply stringent upper limits on the magnitude, which are in extreme conflict with the theoretical estimate, by some hundred orders of magnitude! Unless one considers an ad hoc constant cancellation term an adequate explanation then there appears to be a serious conflict between our concepts of the quantum vacuum and gravity; that is, there is a vacuum catastrophe.
None of the resolutions to this "vacuum catastrophe" suggested above is entirely satisfactory, but some speculative developments suggest one more potential alternative. We may consider the possibility that the electromagnetic and other zero-point fields really do exist as fundamental theoretical considerations mandate, but that their zero-point energies do not gravitate because it is the actions of these fields on matter that generate gravitational forces (which are mathematically represented by the curving of spacetime). The zero-point energies do not gravitate because the zero-point fields do not, indeed cannot, act upon themselves. The basis of such a zero-point gravitation theory was conjectured by Sakharov<sup>(4)</sup> and Zel'dovich<sup>(5)</sup> and has undergone a preliminary development by several authors (see e.g. Adler<sup>(6)</sup>). More recently, and in consonance with our approach, this situation appeared in a clearer manner in the attempt of Puthoff.<sup>(7)</sup>
We point to the potential importance and possible direction of a zero-point gravitation theory, but do not attempt to develop this ourselves. The principle of equivalence, however, dictates that if gravitation is an effect traceable to the action of zero-point fields on matter, then so must the inertia of matter be traceable to zero-point fields. Woodward and Mahood<sup>(8)</sup> find this approach vehemently objectionable, treating it as if it were a dangerous new heresy. In their paper they summarize some connections between gravity and inertia, but fail to see that this simply establishes relationships that must exist between the two regardless of whether gravity and inertia are due to zero-point fields or not. Their arguments about inertia leave the paradox between quantum theory and gravitation theory as unresolved as ever.
As alluded to above, the recent work of Haisch, Rueda, and Puthoff<sup>(9)</sup>, and more recent development by Rueda and Haisch<sup>(10)</sup>, derives inertial reaction forces from interactions with the zero-point fluctuations of the quantum vacuum. The contrary theory of Woodward and Mahood<sup>(8)</sup> builds on earlier work in gravity and GRT to suggest that inertia is an extrinsic result of interactions with the gravitational field arising from the overall mass distribution of the cosmos.
The current analysis consists largely of a rebuttal to this last reference, and a response to its criticisms. Due to the frequency of reference, we shall use WM to refer to Woodward and Mahood<sup>(8)</sup>, HRP to refer to Haisch, Rueda, and Puthoff<sup>(9)</sup>, and RH to refer to Rueda and Haisch.<sup>(10)</sup>
2. CRITIQUE OF GRAVITATIONAL INERTIA
2.1 General problems with a gravitational theory of inertia
One of the most striking features of the General Theory of Relativity is that it essentially banishes the concept of a gravitational force. Gravity, according to GR, is a distortion of the metric of spacetime. An object seen by a distant observer to be accelerating in a gravitational field is, in fact, pursuing a geodesic path appropriate to the spacetime geometry in its immediate vicinity: no accelerometer mounted on such an object will detect an acceleration.
The Principle of Equivalence, adopted by Einstein as a starting point in the construction of GR, asserts that the state of free-fall one would encounter in deep space, far from all gravitational sources, is in fact the same state one encounters while falling freely in a strong gravitational field.<sup>(11)</sup> As a corollary of this equivalence, an acceleration relative to the local free-fall geodesic has the same effects, whatever the local geometry. Near Earthโs surface, for example, geodesic paths accelerate toward Earthโs center. To hold an object at rest relative to Earthโs surface, therefore, requires that it be โacceleratedโ relative to this geodesic by the application of force; and, by Einsteinโs original formulation of equivalence, the effects of this acceleration are indistinguishable from those encountered in an accelerating reference frame in remote space (see, e.g. Einstein<sup>(12)</sup>).
In other words, the Principle of Equivalence asserts that gravitational โforcesโ as conventionally measured are inertial reaction forces โ pseudo-forces, as these are sometimes called. We thus see that any attempt to identify gravity as the source of inertia, within the context of GRT, suffers from an essential circularity. At the level of ordinary discourse, this is almost trivially obvious. We consider an extrinsic theory of inertia which claims that inertial reaction forces are gravitational forces. But the equivalence principle requires that gravitational forces are inertial reaction forces, so applying equivalence to the theoretical claim we see it reduce to the uninformative declaration that inertial reaction forces are inertial reaction forces.
To demonstrate that this is not simply linguistic play, let us consider the situation with a bit more rigor. The various extrinsic-inertia models discussed by WM all have the common feature that they mandate the appearance of a gravitational field in an accelerated frame of reference. This is, in fact, quite uncontroversial and in no way depends on the acceptance of Machโs principle. Traditional, non-Machian approaches to GRT note that an accelerating reference frame will see a space-time metric corresponding to a gravitational field pervading all space. This is quite unsurprising since the accelerating observer sees the entire Universe accelerating relative to itself, and how better to explain this than by a cosmic gravitational field? The Machian element comes in only when one requires that the source of this cosmic field should be the overall mass distribution of the cosmos, rather than an intrinsic property of spacetime.
Regardless of the source of the cosmic gravitational field, an object held at rest in it โ that is to say, any massive object sharing the motion of the accelerating reference frame โ will, of course, exert weight on whatever agency is holding it at rest. In the reference frame of the cosmos, on the other hand, the accelerating body is exerting the expected inertial reaction force on whatever agency is causing it to accelerate. Have we explained inertia via the cosmic gravitational field?
Unfortunately, the standard geometrical approach to GRT says otherwise. In the presence of a gravitational field, an unconstrained body must fall freely along a geodesic path. To alter its motion from this spontaneous condition, one must apply a force to it, creating an acceleration which will be noted by, for example, any accelerometer rigidly mounted on the body. Common experience requires that this will produce an inertial reaction force as the bodyโs inertia resists this acceleration. At this point we can identify three alternative explanations for the inertial reaction:
1. The inertia is intrinsic to the mass of the body. While this is consistent with observation it simply postulates inertia without explaining it.
2. The inertia is extrinsic to the mass, being the result of the interaction of the mass with some non-gravitational field. The ZPF-inertia theory of HRP falls into this class.
3. The inertia is extrinsic to the mass and results from the interaction of the mass with the apparent gravitational field. This gravitational explanation of inertia is the one WM are claiming.
To see how peculiar a theory of the third class above actually is, let us ask why the inertial reaction force appears at all in this theory. WM apparently believe that the presence of a gravitational field in the accelerating frame is a sufficient explanation: the reaction force is the bodyโs weight in this field. But why do bodies have weight in a gravitational field? In the standard formalism of geometrodynamics, gravity is not a force but a consequence of the local shape of spacetime. โWeightโ is actually the inertial reaction force that results from accelerating an object away from its natural geodesic path. But we are, here, trying to explain inertial reaction forces. To say that an inertial reaction force is the weight resulting from gravity in the accelerated frame explains nothing in geometrodynamics, because weight is already assumed to be an inertial reaction force and one is therefore positing inertial reactions to explain inertial reactions. Therefore, this โexplanationโ of the origin of inertial reaction forces is circular if one is operating in the standard geometrical interpretation of GRT.
It is, of course, possible to abandon this interpretation and presume that gravity actually does exert forces directly on objects, as in the original Newtonian theory. This, unfortunately, introduces a different circularity. The fact that a gravitational field appears in an accelerating frame is, as noted above, true in any formulation of GRT, Machian or not, and remains true whether inertia is intrinsic or extrinsic. The gravitational-inertia theory wishes to assert that this gravitational field is the cause of the inertial reaction force. But this is the same as the assumption that gravitational fields exert forces; we cannot claim to have explained inertia in this formalism when we incorporate our desired conclusion into the initial assumptions.
This would appear to be a very general problem with efforts to find a gravitational origin for inertia in the standard, geometrodynamic interpretation of GRT. There are, of course, ways around this. An argument by Sciama<sup>(13)</sup>, for example, finds a reaction force arising from a โgravito-magneticโ reaction with a presumed gravitational vector potential. It is, however, well worth noting that Sciamaโs argument is based on analogizing gravitation to electromagnetism, in the weak-field limit of GR. In this weak-field limit one typically does not work explicitly with the geometrical consequences of metric distortion, but rather represents interactions in terms of potentials and forces. The circularity noted above disappears, but with it the conceptual parsimony of GR. Indeed, as WM themselves assert (their section 3.2), Sciamaโs argument was originally conceived as a refutation of GRT.
General relativity, in reducing gravity to a consequence of geometry, offers a very hostile background to a gravitational theory of extrinsic inertia. GR shows how mass distorts spacetime, and allows one to calculate the trajectories unconstrained bodies will follow in the resulting distorted spacetime. It does not explain why a body, constrained by non-gravitational forces to travel on some trajectory that is not a geodesic, exerts an inertial reaction force proportional to its mass.
This is, of course, a trivial non-mystery if one naïvely presumes inertia to be intrinsic to mass. The attempt, however, to construct a gravitational theory of extrinsic inertia within geometrodynamics seems doomed to circularity.
2.2 Specific problems with WM argument
In fairness to WM they do seem aware, to a certain extent, of the circularity problem. At the end of their section 3.4 they devote a paragraph to an attempt to address it. Unfortunately, they dilute and weaken their argument by attempting to portray the circularity argument as a defense of ZPF-inertia theory, which it is not. Indeed, it would seem that the WM response to the circularity argument consists mainly of the complaint that ZPF theories do not successfully explain inertia either, which even if it were the case is irrelevant to the failure of gravitationally based theories to do so. One should bear in mind that the default explanation of inertia, currently highly favored by Ockham's Razor as the least hypothesis, is that inertia is intrinsic to mass. Various important elements of physical theory, such as the conservation of momentum, which flow quite naturally from a theory of intrinsic inertia, require complicated supporting arguments or may even be violated in a theory of extrinsic inertia. (It is worth noting that one of the authors of WM has in fact published articles – and obtained a U. S. Patent<sup>(14)</sup> – demonstrating ways in which a theory of extrinsic gravitational inertia allows local violations of momentum conservation.<sup>(15)</sup> While one might hope, and indeed the same papers claim, that momentum is still conserved globally, this is actually a meaningless assertion in the Machian perspective of this theory.)
In their section 3.2 WM make the peculiar claim that โGRT dictates that inertia is gravitationally induced irrespective of whether cosmic matter density is critical or not.โ This claim is odd, because it seems to be supported only by the assertion that in Robertson-Walker cosmologies the local metric is determined solely by the distribution of material sources within the current horizon. While this claim is true, it does not address the relationship between critical density and gravitational inertia. All of the arguments employed by WM require a specific value for the total gravitational potential $`\varphi `$ in order for inertial reaction forces to behave properly. This depends on the cosmic mass density $`\rho `$ in a Robertson-Walker cosmology. While WMโs demonstration that sources outside the horizon may safely be ignored is valid and useful, it falls badly short of explaining why the actual density of sources inside the horizon can also be ignored in declaring that physics is Machian and inertia results from gravity.
In section 3.3 WM provide a general discussion of the relation between Machโs principle and GRT. In the current context this is notable mostly for its complete omission of results suggesting that GRT is not only not a Machian theory, but in fact incompatible with Machโs principle. For example, the Lense-Thirring precession is often touted as an example of the โMachianโ dragging of inertial frames by a rotating mass, but recent work by Rindler<sup>(16)</sup> demonstrates that the equatorial Lense-Thirring effect is inconsistent with a Machian formulation. Granted, the Lense-Thirring rotation is such a minute effect that it has not been empirically tested, but it is an unambiguous prediction of GRT: to have an anti-Machian effect emerge from GRT impedes the joint claim of WM that GRT is the correct theory of gravity and that the Universe is Machian.
WM go on in section 3.4 to discuss an argument by Nordtvedt<sup>(17)</sup> concerning frame dragging in translational acceleration. They present as their eq. 3.7 the relation:
$$\delta \mathbf{a}=(4\varphi /c^2)\mathbf{a},$$
$`(2)`$
which relates the induced (frame-dragging) acceleration $`\delta \mathbf{a}`$ to the acceleration $`\mathbf{a}`$ of the accelerated mass and the gravitational potential $`\varphi `$ induced by that same mass. They point out that if $`4\varphi =c^2`$, then $`\delta \mathbf{a}=\mathbf{a}`$ and all inertial frames are dragged rigidly along with the inducing body. If one regards the universe at large as Nordtvedt's inducing body, and presumes that it has the appropriate value of $`\varphi `$ throughout, then any hypothetical acceleration of the universe would necessarily drag along all inertial frames; an alternative way of expressing this is to say that the bulk mass distribution of the cosmos defines which frames are inertial. So far this would appear to be an excellent demonstration of Mach's principle.
As a possible quibble we note that for $`\varphi >c^2/4`$ the "frame dragging" acceleration is greater than the acceleration of the inducing body, a bizarre result that seems very difficult to attribute to frame dragging. In fact, as WM acknowledge, Nordtvedt's derivation is of linear order in the mass, and is therefore of questionable validity for the large values of $`\varphi `$ they wish to apply. But this ranks only as a quibble, because the problem of inertia has not been addressed at all. Even if one, implausibly, stipulates the validity of eq. 2 over all $`\varphi `$, one has merely identified which states of motion are inertial reference frames: no explanation has been offered for the appearance of inertial reaction forces in non-inertial frames. We are once again facing the circularity problem of the previous section, with no progress toward an explanation. As noted above, WM have not successfully addressed this problem anywhere in their discussion of gravitational inertia.
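To make the quibble concrete (an illustrative number of our own, not taken from WM or Nordtvedt), eq. 2 fixes the dragging ratio as

$$\frac{\delta a}{a}=\frac{4\varphi }{c^2},$$

so that, for instance, $`\varphi =c^2/2`$ would give $`\delta a=2a`$: the "dragged" inertial frames would accelerate twice as hard as the body that supposedly drags them.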
The next difficulty in WM is perhaps best introduced by quoting their own argument, noting that $`\varphi `$ is their symbol for total gravitational potential as in eq. 2 above.
Since the locally measured value of $`\varphi `$ must be an invariant to preserve the principle of relativity, one might think that the gradient of the gravitational potential must vanish everywhere. Accordingly, it would seem that no local gravitational fields should exist. But the gradient of a locally measured invariant need not vanish if it is not a global invariant. The total gravitational potential is not a global invariant. As a result, the "coordinate" value of the gravitational potential in some frame of reference may vary from point to point, notwithstanding that the numerical value measured at each point is the same everywhere. And the gradient of the potential in these coordinates may be non-vanishing. As a familiar example of this sort of behavior we point to the vacuum speed of light – a locally measured invariant – in the presence of a gravitational field. As is well known, the speed of light in intense gravitational fields measured by non-local observers (that is, the "coordinate" speed of light) is often markedly different from the locally measured value. And for these non-local observers, the speed of light in general will have a non-vanishing gradient in their coordinates. (WM, section 4.2, excerpt from final paragraph.)
Clever as this argument and analogy may seem, it introduces a new paradox worse than the one they seek to evade. The speed of light in vacuum is deeply embedded in relativistic kinematics. If a given coordinate system measures an altered value of $`c`$ in some remote regions, it will also note distortions in lengths and time intervals in those regions such that it will expect an observer in that region to find the standard local value for $`c`$. The potential $`\varphi `$, on the other hand, is a dynamic variable, not a kinematic one. Where $`c`$ appears in such fundamental and inescapable relations as the velocity-addition rule, $`\varphi `$ is merely a potential; its value dictates how specific objects will move, not the nature of motion itself.
Let us posit the WM scheme of a locally invariant $`\varphi `$ that is nevertheless observed to vary and have a gradient in certain reference frames. The quantity $`\varphi `$ is, by definition, a gravitational potential: $`m_g\varphi `$ is the gravitational potential energy of an object with gravitational mass $`m_g`$. The value of $`\varphi `$ used in computing this quantity is, of course, the local value at the current position of the object. If $`\varphi `$ is a local invariant, no object can change its gravitational potential energy by moving from one location to another. A distant observer, seeing an object move from a region with potential $`\varphi _0`$ to a region at a different $`\varphi _1`$, would expect to see its kinetic energy change by the quantity $`m_g(\varphi _0\varphi _1)`$. A comoving observer, in contrast, observing that the gravitational potential energy is $`m_g\varphi `$ at both locations, does not expect any change in the relative velocity of the object with respect to the rest of the cosmos. These conflicting expectations cannot be reconciled.
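The conflict can be reduced to a single line of bookkeeping (a sketch in Newtonian language, which suffices for the weak-field intuition at issue; the notation follows eq. 2): for a body moving between the two regions,

$$\Delta E_{\mathrm{kin}}^{\mathrm{distant}}=m_g(\varphi _0-\varphi _1)\ne 0=\Delta E_{\mathrm{kin}}^{\mathrm{comoving}},$$

and the two expectations coincide only when $`\varphi _0=\varphi _1`$ everywhere, that is, only if the gradient that WM require vanishes after all.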
As if the above problems were not enough, this new perspective on $`\varphi `$ shows that the Nordtvedt frame-dragging effect of eq. 2 above is, rather than a support of the WM inertia theory, absolutely fatal to it. If $`\varphi `$ is a locally measured invariant due to the action of the entire cosmos, no local concentration of matter can affect $`\varphi `$, which leads to the startling conclusion that no body smaller than the Universe as a whole can produce any frame dragging effects whatsoever! WM require this locally invariant character for $`\varphi `$ in order to avoid having inertia behave unacceptably (that is, in a manner contrary to long-established observation) in the vicinity of gravitating masses. Yet the price of this local invariance is the disappearance of all local frame-dragging effects. And, again as WM themselves point out, Nordtvedt's frame-dragging effect is necessary for such quotidian phenomena as planetary orbits to display the proper invariance under arbitrary choices of coordinates.
In their section 4.3 WM refer to a "stronger version" of Mach's Principle, in which "…mass itself arises from the gravitational action of the distant matter in the universe on local objects – mass is just the total gravitational potential energy a body possesses." Unfortunately this does not work, at least not in the all-encompassing sense that WM seem to have in mind. In order to establish the gravitational potential energy of a body, one must have at least one kind of mass, the gravitational mass $`m_g`$, as a preexisting quantity, so that $`m_g\varphi `$ gives the total gravitational potential energy. This version of Mach's principle would allow one to derive the energetic content of mass and explain why $`E/c^2=m_g`$, but does not quite explain mass itself ex nihilo as WM appear to be claiming.
While certain other parts of WM's explication of gravitational inertia are flawed, these closely involve their criticisms of ZPF theories, and so discussion of them is better deferred to the next section.
3. CRITICISMS OF ZPF: ERRORS AND CORRECTIONS
WM raise numerous criticisms, both of the notion of quantum zero-point fluctuations and of the specific HRP theory of extrinsic inertia based on interactions with ZPF. Most of these are severely flawed. Before dealing with the WM criticisms in detail, it is worth noting that the strongest criticism is not one that they raise explicitly, though it is implied by certain of their other arguments. The exact identity between the inertial mass which resists accelerations, the gravitational mass which acts as a source term in the Einstein field equation, and the energetic-content mass $`E/c^2`$ follows quite naturally in simplistic intrinsic-inertia theories. It needs careful attention, though, in any theory of extrinsic inertia, and the ZPF-inertia theory put forward in HRP is not yet able to account for this identity. Since the ZPF-inertia theory is still in its early stages of development, this should not be considered either surprising, or a refutation of the theory.
The various points raised in WM actually address two distinct issues, the physical reality of ZPF and the theory that ZPF interactions are the cause of inertial reaction forces. Obviously the former issue is logically prior to the latter; it is also empirically of greater consequence, since the existence of ZPF-driven effects such as the Casimir force and the Lamb shift has been confirmed experimentally. Some alternative explanation for them must be found if we wish to keep our theories in consonance with reality. We will therefore address the existence of the ZPF first.
3.1 Elementary theoretical justification
The Introduction above, in explaining the $`120`$ order-of-magnitude discrepancy that motivates the search for a ZPF-inertia theory, already provided several strong arguments for considering the ZPF physically real. One further argument worthy of consideration, however, emerges from experiments in cavity quantum electrodynamics involving suppression of spontaneous emission. As Haroche and Raimond explain<sup>(18)</sup>:
These experiments indicate a counterintuitive phenomenon that might be called "no-photon interference." In short, the cavity prevents an atom from emitting a photon because that photon would have interfered destructively with itself had it ever existed. But this begs a philosophical question: How can the photon "know," even before being emitted, whether the cavity is the right or wrong size?
The answer is that spontaneous emission can be interpreted as stimulated emission by the ZPF, and that, as in the Casimir force experiments, ZPF modes can be suppressed, resulting in no vacuum-stimulated emission, and hence no "spontaneous" emission.<sup>(19)</sup>
3.2 The cosmological constant problem
WM object that "…if the ZPF really did exist, the gravitational effect of the energy resident in it would curl up the universe into a minute ball" (section 2.2, WM). This, of course, is precisely the vacuum catastrophe problem discussed in detail in the Introduction. When various solutions to that quandary were being discussed, it was pointed out that several of them require an implausibly precise cancellation between the ZPF energy density and other physical factors. However, one of those theoretical devices – the cosmological constant – suffers a fine-tuning problem, whether or not it is invoked to avoid the vacuum catastrophe. The general form of the Einstein field equation,
$$R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }R+\mathrm{\Lambda }g_{\mu \nu }=\frac{8\pi G}{c^4}T_{\mu \nu },$$
$`(3)`$
includes an arbitrary "cosmological" constant $`\mathrm{\Lambda }`$. This term can absorb any contribution from a uniform density such as the vacuum energy. As noted in the Introduction, actually matching the ZPF energy density would be a feat of remarkable precision. The fine-tuning problem persists even if one assumes that something else averts the vacuum catastrophe, because observational astronomy increasingly favors a cosmology with a small nonzero value of $`\mathrm{\Lambda }`$. Unfortunately, field-theoretic considerations suggest that "natural" values of $`\mathrm{\Lambda }`$ should be either exactly zero, or else correspond to an energy density (positive or negative) on the rough order of one Planck mass per Planck volume. We are thus confronted with a fine-tuning problem for $`\mathrm{\Lambda }`$ whether or not we wish to use it to resolve the ZPF energy density problem.
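The size of the mismatch is easy to estimate (rough arithmetic of our own, using standard values rather than anything quoted by WM). One Planck mass per Planck volume corresponds to a density

$$\rho _P\sim \frac{c^5}{\hbar G^2}\approx 5\times 10^{96}\ \mathrm{kg\,m^{-3}},$$

while the energy density associated with the observationally favored small $`\mathrm{\Lambda }`$ is of order $`10^{-26}\ \mathrm{kg\,m^{-3}}`$, so the "natural" and observed scales differ by roughly 120 orders of magnitude, which is exactly the discrepancy invoked in the Introduction.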
3.3 Local fluctuations versus nonlocal interactions
WM point out that "…any local fluctuational explanation can be reinterpreted as a non-local, retarded/advanced interaction with distant matter." (Section 4.4, emphasis in the original.) This may very well be true, but it can scarcely be taken as support for their thesis. Insofar as there is a consensus in the physics community on the issue of nonlocality, it would seem to be that nonlocality is to be avoided at almost any cost. WM refer to the well-established "nonlocal" interactions of quantum mechanics (earlier in their section 4.4 than the above quote) in an attempt to justify their preference for a nonlocal explanation of ZPF-driven effects. Unfortunately, what quantum mechanics refutes is not locality but the conjunction of locality with some aspects of objective realism. (The minimal part of realism that must be rejected has been labeled "contrafactual definiteness," the notion that it is meaningful to discuss the potential outcomes of experiments that might have been performed but in fact were not.) By observation, most physicists confronted with the failure of local realism prefer to abandon some aspect of realism rather than some part of locality.<sup>(20)</sup>
Other justifications WM present for preferring a theory that mixes retarded and advanced waves are the utility of Feynman-Wheeler absorber theory and the recent proposal of Cramer's "transactional interpretation" of quantum mechanics. Remarkable though the Feynman-Wheeler theory is, we should not lose sight of the fact that it is one of several formalisms that all account successfully for the non-observation of advanced waves. The "transactional interpretation," on the other hand, is by construction devoid of empirical content: all philosophical interpretations of quantum mechanics of necessity agree with all empirical predictions of QM and therefore permit no empirical preference for one over another. One's choice of QM interpretation is therefore a matter for philosophical aesthetics rather than scientific judgement.
Contrary to the claims of WM, standard relativity theory in no way demands the "radical timelessness" they advocate. At least, it does not do so as long as nonlocal interactions are kept from contaminating the theory. In a conventional relativistic world without nonlocality, time proceeds in a well-ordered fashion along every timelike worldline. The inability of observers in different states of motion to agree on the relative ordering of remote, spacelike-separated events is irrelevant; this ambiguity can never lead to causal confusion or allow "future" events to affect the "past." Essentially, this is because the conventional interpretation of relativity replaces the traditional view of past, present and future with a four-part division of reality. From any given event, the "future" encompasses everything in the future light cone, the "past" the entire contents of the past light cone. "Now," which a Newtonian physicist could conceptualize as a shared instant of simultaneity encompassing all space, has shrunk to the single space-time point of the event under consideration. And the rest of the universe is in a region commonly dubbed "elsewhere," a constellation of space-time events that can neither affect nor be affected by the event under consideration in any way. So long as all interactions are local, the potentially inconsistent time-ordering of events "elsewhere" can never lead to the slightest confusion between events in the past and events in the future, nor allow the latter to affect the former.
This of course breaks down if one admits of nonlocal interactions. By means of a nonlocal connection an event in the future light-cone can send a signal to an event "elsewhere," and cause a returning nonlocal signal to arrive at an event in the past. This should make it clear that it is not relativity, but relativity plus nonlocality, which demands the radical timelessness and its "very strange consequences" advocated by WM.
Having addressed WM's primary arguments against the physical reality of ZPF in general, we now turn to their arguments against the HRP theory of ZPF as the origin of inertia.
3.4 A Sketch of HRP's and RH's Claims
In the discussion by this name in their section 2.1, WM, in order to criticize the arguments of HRP and RH, present a simplified argument that in their terminology is intended to uncover "the crux of the whole business." A simplified argument which still contained the essential physical ingredients of the calculation would be a useful pedagogical as well as conceptual exercise. It must, however, remain physically accurate. Unfortunately this is not the case with the presentation of WM, which, despite their claim of "accurate formalism", is both misleading and erroneous.
Before discussing this presentation in detail, however, it seems desirable to clarify the motivations two of the current authors (AR and BH) had for producing the HRP and RH papers. The HRP paper involved a detailed calculation of the behavior of a Planck oscillator pushed by an external agent to move under uniform proper acceleration (so-called hyperbolic motion). In spite of some simplifying assumptions and a few fairly reasonable approximations, the mathematical development of the HRP article came out to be quite complex. The inertia effect was clearly obtained but assessment of the calculations and of the argument was challenging. It was not clear whether there was something in the vacuum, as viewed by an observer comoving with an accelerated frame, that could produce the effect predicted in HRP. Calculations in QED and QFT for a detector accelerated in a scalar vacuum field did not seem to find any anisotropy in the scalar field even though the well-known Unruh-Davies thermal background was predicted to occur.<sup>(21)</sup> It was necessary to check if the vector nature of the electromagnetic ZPF (as opposed to a scalar field) would produce the expected anisotropy in the vacuum background from the viewpoint of such a uniformly accelerated observer.
This problem was attacked and a confirmatory result emerged from the calculations. After approaching the problem in four different ways, as detailed in RH, it was clearly found in all four that an anisotropy appears in the ZPF Poynting vector and hence in the flux of momentum density. More than that, the anisotropy in the Poynting vector was of the precise form to produce a radiation pressure opposite to the acceleration and proportional to it in the subrelativistic case, and it also extended properly to the standard relativistic form of the inertial reaction 4-force at large speeds.
In their section 2.1 WM attempted to do two things, both of which were commendable in principle. First, they tried to present a simplified pedagogical view that would clearly illustrate the physics of the situation analyzed in the calculations presented in HRP and RH. Second, they attempted to relate the analysis of RH to that of HRP so that the physics of the inherent connection could easily be seen. We must report, however, that they were unfortunately unsuccessful in both of these endeavors. The main point of this part of their presentation was to replace eqs. (26) to (28) of HRP by the very simple proportionality relationship between the electric field $`\mathbf{E}_{zp}`$ and the velocity $`\mathbf{v}`$ of vibration of the subparticle component in the instantaneous inertial frame of reference at particle proper time $`\tau `$, in the form of WM eq. 2.1:
$$e\mathbf{E}_{zp}=k\mathbf{v}.$$
$`(4)`$
This enormous simplification had the following consequences:
(i) All $`\mathbf{E}`$-field frequency components and all components in all directions seemed to contribute with the same weight to the instantaneous velocity of the subparticle, contrary to the facts.
(ii) All those contributions appeared to come exactly in phase, contrary to the facts.
(iii) As a consequence of (i) and (ii) we get the physically very surprising feature that the electric field force was proportional to the velocity. (This might be called Aristotelian physics.) But we know this cannot happen unless energy is not conserved, or more precisely, unless energy goes to degrees of freedom that have not been accounted for in detail, as happens with a thermal reservoir. In reality the Planck oscillators interact with the ZPF in a dissipationless manner, so the dissipative force in the WM analysis is both inaccurate and misleading.
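The energetic objection in (iii) takes one line to state (our own bookkeeping, writing $`\mathbf{F}`$ for the net velocity-proportional force implied by WM eq. 2.1): such a force drains energy at the rate

$$\frac{dE}{dt}=\mathbf{F}\cdot \mathbf{v}=-k|\mathbf{v}|^2<0,$$

so it is dissipative by construction; unless that energy is handed to unaccounted degrees of freedom, as with a thermal reservoir, conservation fails, whereas the actual oscillator-ZPF interaction is dissipationless on average.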
After such a disastrous start in the first equation, it is tempting to simply discard the entirety of WM's subsequent argument. In particular, since WM eq. 2.3 depends on the inaccurate 2.1, it is itself invalid, and all conclusions drawn from it are suspect. However, there are additional and independent errors in the WM analysis which merit separate comment.
To reprise briefly the development of the HRP/RH argument given above: The inertialike reaction force appearing at the end of the HRP derivation implies the necessary existence of an anisotropy in the accelerated ZPF. However, earlier work in vacuum scalar fields found no such anisotropy. RH therefore investigated the existence of such anisotropy in vector fields, and found a net Poynting vector in accelerated vector ZPF by four separate lines of argument.
However, in RH no details on the particle were used since the analysis concentrated on the fields. The Poynting vector appears in the accelerated ZPF regardless of any entity that may interact with it. That interaction was introduced only at the end, in the form of a normalizing function $`\eta (\omega )`$ that quantified the momentum density passed to the accelerated object at every frequency. In contrast, the original HRP analysis modeled this interaction in great detail. In this case the Einstein-Hopf model was used, which implied only a first-order iterative solution and hence some degree of approximation. The considerable difference in methods between RH and HRP is the reason for the difference in appearance of the inertial mass expressions in RH and HRP. It seems likely that to derive the RH form from the expressions of HRP one would have had to pursue an iterative solution to many orders, going far beyond the Einstein-Hopf approximation.
The discussion presented by WM contrasts with the detailed analysis done in RH and HRP. For a serious discussion of the technical aspects of HRP (and to a lesser extent RH) we prospectively refer the interested reader to works presently in progress by Cole and Rueda, and by Cole.<sup>(22)</sup>
3.5 The problem of representing the accelerating body
Aside from the general flaws of WM section 2.1 noted above, we note that their simplified model includes the assumption that the "oscillator" interacting with the ZPF is in fact an elementary point charge. This is problematic. A point charge in classical theory has infinite self-energy, leading to some question of whether it is legitimate to deal with such objects except as an approximation good for long wavelengths and modest accelerations. This, unfortunately, is the exact opposite of the regime crucial to the ZPF-inertia theory. The empirical verification of quarks (or leptons) as pointlike extends only to length scales orders of magnitude longer than the wavelengths important to either the HRP or RH derivations. The representation of the particle/radiation interaction, in the one case by a generalized damping coefficient $`\mathrm{\Gamma }`$, in the other by an unspecified interaction function $`\eta (\omega )`$, seems appropriately cautious at our current level of ignorance.
3.6 The bare mass problem
In the discussion subsequent to their eq. 2.8 WM discuss the apparent circularity of using $`\mathrm{\Gamma }=2e^2/(3m_0c^3)`$, with a contribution from a "bare" mass $`m_0`$ with presumed inertial effects, in the HRP derivation that purports to identify the source of inertial mass. This is a valid criticism, which suggests that a reworking of the formalism is desirable. In fact the later work of RH presents such a reworking, with no reference to unobservable "bare" masses.
3.7 Quark and hadron masses
The extended discussion WM conduct in their section 2.2 on this issue touches on the general mass-equivalence problem which, as noted above, is a valid concern and an unmet challenge for the ZPF-inertia theory. However, the specific points made by WM are, as they themselves point out, largely answered by HRP; and their rebuttal of this answer appears to misunderstand it. As is clearly indicated in the text WM choose to quote, the authors explicitly propose a revised formalism in which the interaction is assumed to be dominated by a resonance frequency $`\omega _0`$, determined by the particle dynamics, rather than the ZPF cutoff frequency $`\omega _c`$. WM respond to this proposed model by asserting:
Well, $`\omega _c`$ isn't a "resonance" frequency. It is the upper limit in the integration over the frequency spectrum of the ZPF, and if that limit is not imposed, the result of that integration, and the inertial mass of the particle, is infinite irrespective of any resonances that may be present at finite frequencies. Remember, the spectral energy density of the ZPF goes as $`\omega ^3`$, so invoking a "low" frequency resonance will not suppress the cutoff unless the cutoff is assumed to lie quite close to the resonance frequency.
But this counterargument is clearly without merit. Any resonant phenomenon with a frequency response that falls off sharply enough for $`\omega >\omega _0`$ will have a converging and therefore finite integral in the reaction-force calculation. And the criterion for "sharply enough" is much less stringent than WM seem to imagine.
HRP present, in their eq. (3), the spectral energy density of the ZPF in an accelerated frame. We reproduce this equation (aside from a common factor $`d\omega `$ on both sides) here:
$$\rho (\omega )=\left[\frac{\omega ^2}{\pi ^2c^3}\right]\left[1+\left(\frac{a}{\omega c}\right)^2\right]\left(\frac{\hbar \omega }{2}+\frac{\hbar \omega }{e^{2\pi c\omega /a}-1}\right).$$
$`(5)`$
We can see that there are four terms when this expression is multiplied out. One has $`\omega ^3`$ spectral dependence and is in fact the unaltered $`\hbar \omega ^3/2\pi ^2c^3`$ ZPF spectrum itself. This means that an accelerated reference frame contains the same ZPF as in an inertial frame, plus three new components. Of these three, one is the thermal bath identified with the Davies-Unruh effect, one is not thermal but is, like thermal radiation, suppressed as $`e^{-\omega }`$ for large $`\omega `$, and the third and last has a spectral dependence of $`\omega `$. It is this last term, varying as $`\omega `$, not $`\omega ^3`$, which HRP propose as the source of the reaction force in their discussion consequent to this formula.
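For the reader's convenience we display the four terms explicitly (straightforward algebra on eq. 5; the groupings and labels are ours):

$$\rho (\omega )=\frac{\hbar \omega ^3}{2\pi ^2c^3}+\frac{\hbar a^2\omega }{2\pi ^2c^5}+\frac{\hbar \omega ^3}{\pi ^2c^3}\frac{1}{e^{2\pi c\omega /a}-1}+\frac{\hbar a^2\omega }{\pi ^2c^5}\frac{1}{e^{2\pi c\omega /a}-1},$$

where the first term is the inertial-frame ZPF, the third is the Davies-Unruh thermal bath, the fourth is the exponentially suppressed nonthermal component, and the second, proportional to $`\omega `$, is the acceleration-dependent term at issue.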
If we assume then that the radiation term responsible for the reaction force has a frequency dependence of $`\omega `$, it follows naturally that any resonance centered on a frequency $`\omega _0`$ will have a finite total reaction force integral, even in the limit $`\omega _c\to \mathrm{\infty }`$, so long as its frequency response falls off faster than $`\omega ^{-2}`$ for $`\omega \gg \omega _0`$. Even if we retain the assumption that the inertial reaction force derives from the full ZPF spectrum with its $`\omega ^3`$ energy density, a resonance falling off faster than $`\omega ^{-4}`$ will remain finite regardless of cutoff.
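A one-line check (our own, with an idealized power-law response $`\eta (\omega )=(\omega /\omega _0)^{-p}`$ above the resonance) makes the point:

$$\int _{\omega _0}^{\omega _c}\omega \left(\frac{\omega }{\omega _0}\right)^{-p}d\omega =\frac{\omega _0^2}{p-2}\left[1-\left(\frac{\omega _c}{\omega _0}\right)^{2-p}\right]\to \frac{\omega _0^2}{p-2}\quad (\omega _c\to \mathrm{\infty },\ p>2),$$

so for any $`p>2`$ the cutoff $`\omega _c`$ drops out of the answer entirely; with the full $`\omega ^3`$ spectrum the same computation converges for $`p>4`$.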
This point incidentally answers the objection WM raise to the notion of changes in resonance being responsible for the inertial mass of a proton. They object that, since the scale of a proton is 20 orders of magnitude larger than the Planck length, resonances due to the proton's structure are 20 orders of magnitude lower in frequency than the cutoff $`\omega _c`$. But we have just seen that the cutoff frequency is irrelevant. The difference between the electron mass of 0.511 MeV, the quark mass of $`\sim `$10 MeV, and the hadron mass of $`\sim `$940 MeV can, at least in principle, be accommodated by particle-specific resonances. These would almost certainly be different for a bound triplet of particles than some linear summation of individual resonances for three unbound particles.
If the electron has a resonant frequency $`\omega _e`$, we must presume that a "free" quark has a resonant frequency $`\omega _q\approx 20\omega _e`$ to account for their mass difference. The term "free" is used loosely, since of course color confinement demands that there really is no such thing as a free quark. What is commonly reported as quark mass is inferred from high-energy collisions between various sorts of projectiles and components within hadrons; the phenomenon of "asymptotic freedom" in quantum chromodynamics means that in such high-energy interactions the quark is little constrained by the color force and behaves almost as a free particle. On the other hand, in the low-energy state of an unexcited proton or neutron, the quarks are presumably distributed as widely as is consistent with color confinement – if they were more closely clustered than necessary, the resulting momentum uncertainty would equate to excess internal energy which would swiftly be emitted as gamma rays or possibly other particles. In the normal conditions within a proton or neutron, then, we would expect quarks to be strongly bound by the color force; and thus, there is plausible justification in principle for their resonance at a frequency $`\omega _p\approx 30\omega _q`$.
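The arithmetic behind those two ratios is simply this (our own paraphrase, assuming, as the discussion above implicitly does, that the inertial mass scales linearly with the resonance frequency):

$$\frac{\omega _q}{\omega _e}\approx \frac{10\ \text{MeV}}{0.511\ \text{MeV}}\approx 20,\qquad \frac{\omega _p}{\omega _q}\approx \frac{940\ \text{MeV}/3}{10\ \text{MeV}}\approx 30,$$

the factor of 3 reflecting the three quarks sharing the nucleon mass.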
Moreover, a less strained justification is available. The HRP derivation deals only with EM vacuum fluctuations, as does the RH analysis. WM, in castigating an implied model of gluons as vast clouds of charged dust (to produce EM-ZPF reaction effects), overlook the fact that gluons, too, have a vacuum fluctuation spectrum. This fact was pointed out in the introductory discussion of the vacuum catastrophe problem; it does not disappear merely because we are examining a different consequence of ZPF effects. Electrons, being colorless, do not interact at all with gluon fluctuations. We must expect, however, that colored quarks do so quite strongly. If the ZPF-inertia theory gives the correct explanation of inertial reactions, therefore, all color-bearing particles must experience intense inertial reaction effects from a field orders of magnitude stronger than electromagnetism.
We may note in passing that this disposes of another WM criticism, that elementary particles do not show inertial masses proportional to the squared particle charge $`e^2`$. Since both $`e^2`$ and $`\omega _0`$ are factors in the inertial mass, and a general theory for $`\omega _0`$ values is not yet available, we cannot expect $`m_i\propto e^2`$ to hold between different particles at even a heuristic level. Nor does the $`e^2`$ argument pay the slightest attention to the interaction of particles with fields other than the electromagnetic.
4. DISCUSSION AND CONCLUSIONS
In reviewing the arguments of Woodward and Mahood (1999), the following conclusions can clearly be seen:
1. Within the standard geometrical interpretation of general relativity, any attempt to identify gravity as the source of inertial reaction forces can succeed only by postulating the thesis it purports to prove. Such arguments can therefore be dismissed as circular.
2. While one can construct a gravitational theory for inertial reaction forces, as in the case of Sciama's 1953 theory, such theories are necessarily theories of explicit forces coupled to a source $`m_g`$, and therefore are quite distinct from the geometrical theory we know as general relativity.
3. The particular gravitational-inertia theory propounded by WM suffers a consistency problem in the handling of $`\varphi `$ as a quantity that (a) acts as a potential, (b) has a gradient, and (c) is a locally measured invariant. These three properties prove to be mutually incompatible.
4. The advocacy of WM for the philosophy of "radical timelessness" is, contrary to their own assertion, not a consequence of relativity but a consequence of their acceptance of nonlocal interactions in a relativistic framework.
5. The arguments of WM against the existence of quantum zero-point fluctuations are deeply flawed, being based in one case on a misunderstanding of the cosmological constant problem and in the second case on a willingness to adopt nonlocal interactions in a way which most working physicists would find unacceptable.
6. The arguments of WM against the HRP theory of extrinsic inertia arising from interactions with the ZPF make it clear that WM have misunderstood almost every important point of the argument. Their arguments are in most cases invalid, in some cases useful criticisms pointing to ways in which the theory needs to be strengthened and improved. In no case whatever do they constitute actual refutations.
Finally, we should note that among the possible theories of inertia the most plausible current contender, albeit also the least informative, remains the simplest: That inertia is inherent in mass. No theory of extrinsic inertia yet proposed has been able successfully to reproduce all of the observed phenomena which are trivial consequences of this simple premise. The alternative theories of extrinsic inertia require considerable further development before they can practically replace the standard interpretation of inertial reaction forces which has been thoroughly successful since the days of Newton.
ACKNOWLEDGEMENTS
B.H. and A.R. acknowledge support of this work by NASA contract NASW-5050.
REFERENCES
(1) R. J. Adler, B. Casey and O. C. Jacob, Am. J. Phys. 63, 720 (1995).
(2) R. Loudon, The Quantum Theory of Light, Clarendon Press, Oxford, (1982).
(3) P. Ramond, Field Theory – A Modern Primer. Benjamin/Cummings, Menlo Park CA USA, pp. 55 ff. (1981).
(4) A. D. Sakharov, Dokl. Acad. Nauk SSSR 177, 70 (1968); translated in Sov. Phys. Dokl 12, 1040 (1968).
(5) Ya. B. Zel'dovich, Usp. Fiz. Nauk. 95, 209 (1968); translated in Sov. Phys. Usp. 11 (3), 381 (1968).
(6) S. L. Adler, Rev. Mod. Phys. 54, 729 (1982).
(7) H. E. Puthoff, Phys. Rev. A 39, 2333 (1989).
(8) J. F. Woodward and T. Mahood, Found. Physics, in press (1999).
(9) B. Haisch, A. Rueda and H.E. Puthoff, Phys. Rev. A 49, 678 (1994).
(10) A. Rueda and B. Haisch, Physics Letters A 240, 115 (1998); A. Rueda and B. Haisch, Found. Physics 28, 1057 (1998).
(11) Strictly speaking this is true only for translational motions of spinless point particles. Since nothing in this discussion of inertial reactions depends either on physically extended bodies or on the presence or absence of tidal forces, this caveat is irrelevant to the remainder of the analysis. We likewise ignore the general relativistic spin-orbit interaction, since in any gravitational field there exist trajectories for which it vanishes.
(12) A. Einstein, Ann. der Phys. 49, p. 769 (1916) as quoted in H. C. Ohanian and R. Ruffini Gravitation and Spacetime. Second Edition. W. W. Norton & Company, New York, London, p. 53 (1994).
(13) D. W. Sciama, Mon. Not. Roy. Astron. Soc. 113, 34 (1953).
(14) U. S. Patent No. 5,280,864, "Method for Transiently Altering the Mass of Objects to Facilitate Their Transport or Change Their Stationary Apparent Weights," Inventor: James F. Woodward.
(15) James F. Woodward, 1997. "Mach's Principle and Impulse Engines: Toward a Viable Physics of Star Trek?" Presentation to NASA Breakthrough Propulsion Physics Workshop, Cleveland, Ohio, August 12-14, 1997, Proc. NASA Breakthrough Propulsion Physics Workshop, NASA/CP-1999-208694, p. xx (1999).
(16) W. Rindler, Physics Letters A 187, 236 (1994).
(17) K. Nordtvedt, Int. J. Theor. Phys. 27, 1395 (1988).
(18) S. Haroche and J. M. Raimond, Scientific American 268, No. 4, 54 (1993).
(19) W. McCrea, Q. J. Royal Astro. Soc., 27, 137 (1986).
(20) J. T. Cushing and E. McMullin (eds.), Philosophical Consequences of Quantum Theory. University of Notre Dame Press, Notre Dame, Indiana (1989).
(21) J.R. Letaw, Phys. Rev. D 23, 1709 (1981); P.G. Grove and A.C. Ottewill, Class. Quantum Grav. 2, 373 (1985).
(22) D.C. Cole and A. Rueda, 1999 (in preparation) and D.C. Cole, 1999 (in preparation). |
# Statistical properties of SGR 1806-20 bursts
## 1 Introduction
Soft gamma repeaters (SGR) are a rare class of objects characterized by their repetitive emission of low energy gamma-ray bursts. SGR bursts last $`\sim `$ 0.1 s and their spectra are usually well described by an optically thin thermal bremsstrahlung (OTTB) model with kT $`\sim `$ 20–40 keV. Three of the four known SGRs are associated with slowly rotating (P<sub>spin</sub> $`\sim `$ 5–8 s; Mazets et al. 1979, Kouveliotou et al. 1998, Hurley et al. 1999), ultra-strongly magnetized ($`B\sim 10^{14}`$ Gauss; Kouveliotou et al. 1998, Kouveliotou et al. 1999a) neutron stars positioned within or near young supernova remnants. For a review of the burst and persistent emission properties of SGRs, see Kouveliotou (1999b) and Hurley (2000).
Cheng et al. (1996) reported similarities between particular statistical properties of a sample of 111 SGR 1806-20 bursts (observed with the International Cometary Explorer, ICE, between 1979 and 1984) and earthquakes. They noted that the distribution of the event energies of both phenomena follow a power law, $`dN\propto E^{-\gamma }dE`$, with index $`\gamma \sim 1.6`$. Furthermore, they found that the cumulative waiting times between successive SGR bursts and earthquakes are similar. Laros et al. (1987) noted that the distribution of waiting times between successive SGR 1806-20 bursts follow a lognormal function, which was also seen between micro-glitches of the Vela pulsar (Hurley et al. 1994). Using the same data set, Palmer (1999) showed that, similar to earthquakes, some SGR 1806-20 bursts may originate from relaxation systems. Göğüş et al. (1999) studied a set of 1024 bursts from SGR 1900+14; 187 bursts were detected with the Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory (CGRO) and 837 bursts were detected with the Proportional Counter Array (PCA) on the Rossi X-ray Timing Explorer (RXTE) during an active period of the source in 1998. We found that their fluence distribution is consistent with a power law of index $`\gamma `$ = 1.66 over 4 orders of magnitude. The distribution of waiting times between successive bursts also follows a lognormal function, which peaks at $`\sim `$ 49 s. We discussed the idea that SGRs, like earthquakes and solar flares, are manifestations of self-organized critical systems (Bak, Tang & Wiesenfeld 1988). All of these results are consistent with the idea that SGR bursts are caused by starquakes, which are the result of a fracture of the crust of a magnetically-powered neutron star, or "magnetar" (Duncan & Thompson 1992; Thompson and Duncan 1995, 1996).
SGR 1806-20 exhibited sporadic bursting activity from the launch of BATSE (in April 1991) until November 1993 (Kouveliotou et al. 1994). In October 1996, the source entered a burst active phase. The reactivation initiated a series of pointed observations with the RXTE/PCA over a period of two weeks. These observations led to the discovery of 7.47 s pulsations from SGR 1806-20 and confirmed its nature as a magnetar (Kouveliotou et al. 1998). In these two weeks RXTE/PCA recorded a total of 290 bursts (examples of RXTE/PCA observations of SGR 1806-20 can be seen at http://gammaray.msfc.nasa.gov/batse/sgr/sgr1806/). In the BATSE data, SGR 1806-20 burst activity was persistent but variable from October 1996 up to October 1999 with a total of 116 recorded bursts. In this Letter, we present a comprehensive study of the statistical properties of SGR 1806-20 by combining several data bases. Sections 2, 3 and 4 describe the CGRO/BATSE, RXTE/PCA and ICE observations, respectively. Our results are presented in Section 5 and discussed in Section 6.
## 2 BATSE Observations
In our analysis we have used DISCriminator Large Area detector (DISCLA) data with coarse energy resolution (4 channels covering energies from 25 keV to $`\sim `$2 MeV), Spectroscopy Time-Tagged Event (STTE) data and Spectroscopy High Energy Resolution Burst (SHERB) data with fine energy binning (256 channels covering energies from 15 keV to $`\sim `$10 MeV) from the Spectroscopy Detectors. A detailed description of BATSE instrumentation and data types can be found in Fishman et al. (1989).
BATSE triggered on 74 bursts between September 1993 and June 1999. For 32 of the brightest events, STTE or SHERB data with detailed spectral information were obtained. The background subtracted spectra were fit to optically-thin thermal bremsstrahlung (OTTB) and power law models. The OTTB model, $`F(E)\propto E^{-1}\mathrm{exp}(-E/kT)`$, provided suitable fits (0.76 $`<`$ $`\chi _\nu ^2`$ $`<`$ 1.36) to all spectra, with temperatures ranging between 18 and 43 keV. The power law model failed to fit most of the spectra. The weighted mean of the OTTB temperatures for this sample of 32 events is $`20.8\pm 0.2`$ keV.
To increase our burst sample we performed an off-line search for untriggered BATSE events from SGR 1806-20 using a method explained in detail by Woods et al. (1999a). Figure 1 shows the overall BATSE burst activity history of SGR 1806-20. We limited our search during active phases of the source. We found, in addition to the 74 triggered events, 42 untriggered bursts during the time intervals 1993 September 13 – 1993 November 20 and 1995 September 7 – 1999 October 26. Of these 116 events, 111 events (triggered and untriggered) had DISCLA data and were sufficiently intense to allow spectral fitting. Because of the long DISCLA data integration time (1.024 s) compared to typical SGR burst durations ($`\sim `$ 0.1 s), we could estimate only the fluence for each event. We fit the background-subtracted source spectrum to an OTTB model with a fixed kT of 20.8 keV, a reasonable choice considering the fairly narrow kT distribution of the triggered bursts derived above. We find that the burst fluences range between $`1.4\times 10^{-8}`$ and $`4.3\times 10^{-6}`$ ergs cm<sup>-2</sup>. For a distance to SGR 1806-20 of 14.5 kpc (Corbel et al. 1997), and assuming isotropic emission, the corresponding energy range is $`3.5\times 10^{38}`$–$`1.1\times 10^{41}`$ ergs. In comparison, the energies of SGR 1900+14 bursts seen with BATSE range between $`1.1\times 10^{38}`$ and $`1.5\times 10^{41}`$ ergs (Göğüş et al. 1999) and those of SGR 1627-41 between $`8.0\times 10^{37}`$ and $`5.5\times 10^{41}`$ ergs (Woods et al. 1999b).
## 3 RXTE Observations
We performed 13 pointed observations of SGR 1806-20 with the RXTE/PCA, for a total effective exposure time of $`\sim `$ 141 ks between 1996 November 5 and 18. We searched PCA Standard 1 data (2-60 keV) with 0.125 s time resolution for bursts using the following procedure. For each 0.125 s bin, we estimated a background count rate by fitting a first order polynomial to 5 s of data before and after each bin with a 3 s gap between the bin searched and the background intervals. Bins with count rates exceeding 125 counts/0.125 s were assumed to include burst emission and were excluded from the background intervals. A burst was defined as any continuous set of bins with count rates above 5.5 $`\sigma `$ of the estimated background. For the typical PCA count rate of 12 – 18 counts/0.125 s in this energy band, the 5.5 $`\sigma `$ level corresponds to $`\sim `$ 20 – 25 counts in a 0.125 s bin. We found 290 events and measured the count fluence of each burst by simply integrating the background-subtracted counts over the bins covering the event.
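In code, the search procedure reads roughly as follows (a minimal sketch of our own, not the actual analysis pipeline; the function name is arbitrary, and we assume Poisson statistics, $`\sigma =\sqrt{\mathrm{background}}`$, which reproduces the 20 – 25 count threshold quoted above):

```python
import numpy as np

def find_burst_bins(counts, dt=0.125, side=5.0, gap=3.0,
                    veto_level=125, nsigma=5.5):
    """Flag 0.125 s bins containing burst emission.

    For every bin, fit a first-order polynomial to 5 s of data on each
    side (separated from the bin by a 3 s gap), excluding bins above the
    veto level, and compare the bin to the extrapolated background at
    the 5.5 sigma level (Poisson sigma assumed)."""
    counts = np.asarray(counts, dtype=float)
    nside, ngap = int(side / dt), int(gap / dt)
    n = len(counts)
    flagged = np.zeros(n, dtype=bool)
    for i in range(nside + ngap, n - nside - ngap):
        idx = np.r_[i - ngap - nside:i - ngap,
                    i + ngap + 1:i + ngap + 1 + nside]
        idx = idx[counts[idx] < veto_level]  # drop burst-contaminated bins
        if len(idx) < 2:
            continue
        bkg = np.polyval(np.polyfit(idx, counts[idx], 1), i)
        if counts[i] > bkg + nsigma * np.sqrt(max(bkg, 1.0)):
            flagged[i] = True
    return flagged
```

A burst is then any maximal run of consecutive flagged bins.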
To compare the integrated count fluences obtained with the PCA to the BATSE fluences, we determined a conversion factor between the two as follows. First, we searched for bursts observed with both instruments and found 8 such events (5 of which had triggered BATSE). Assuming a constant OTTB model as described in Section 2, we estimated the fluence of these bursts. We then computed the ratio of the BATSE fluence to the PCA counts of each common event. These ratios fall within a fairly narrow range ($`3.5\times 10^{-12}`$ and $`8.1\times 10^{-12}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup>). Their weighted mean is $`5.5\times 10^{-12}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup> with a standard deviation, $`\sigma `$ = $`1.3\times 10^{-12}`$ ergs cm<sup>-2</sup> counts<sup>-1</sup>. The mean is very close to the one estimated for SGR 1900+14 (Göğüş et al. 1999) and consistent with the idea that SGR bursts have a similar spectral shape. Using this conversion factor, we find that the fluences of the PCA bursts range from $`1.2\times 10^{-10}`$ to $`1.9\times 10^{-7}`$ ergs cm<sup>-2</sup> and the burst energies range from $`3.0\times 10^{36}`$ to $`4.9\times 10^{39}`$ ergs.
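The conversion factor itself is a one-liner (our own sketch; the paper does not state the exact weighting, so the usual inverse-variance weights are an assumption):

```python
import numpy as np

def conversion_factor(ratios, errors):
    """Inverse-variance weighted mean of the per-burst
    BATSE-fluence / PCA-counts ratios (hypothetical inputs)."""
    r, e = np.asarray(ratios), np.asarray(errors)
    w = 1.0 / e**2
    return np.sum(w * r) / np.sum(w)
```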
## 4 ICE Observations
From 1978 to 1986 the Los Alamos GRB detector on board the ICE satellite (Anderson et al. 1978) almost continuously observed the Galactic center region within which SGR 1806-20 is located. It detected 134 bursts from the source between 1979 January 7 and 1984 June 8 (Laros et al. 1987, 1990; Ulmer et al. 1993). Combining observational details given by Ulmer et al. (1993) and energy spectral information obtained by OTTB fits to bursts (at energies E $`>`$ 30 keV) given by Fenimore et al. (1994) and Atteia et al. (1987), we estimate that the ICE burst fluences range from $`1.5\times 10^{-8}`$ to $`6.5\times 10^{-6}`$ ergs cm<sup>-2</sup> and their corresponding isotropic energies are between $`3.6\times 10^{38}`$ and $`1.6\times 10^{41}`$ ergs.
## 5 Statistical Data Analysis and Results
From the previous 3 sections, we clearly see that the BATSE and ICE detection sensitivities are quite similar, with PCA extending the logN–logP distribution to lower values. We now combine all data bases to a common set, enabling several statistical analyses.
(i) Burst fluence distributions : To eliminate systematic effects due to low count statistics or binning, we have employed the maximum likelihood technique to fit the unbinned burst fluences. A power law fit to 92 BATSE fluence values between $`5.0\times 10^{-8}`$ and $`4.3\times 10^{-6}`$ ergs cm<sup>-2</sup> yields a power law exponent, $`\gamma `$ = $`1.76\pm 0.17`$ (68$`\%`$ confidence level). Bursts with fluences below $`5.0\times 10^{-8}`$ ergs cm<sup>-2</sup> were excluded to avoid undersampling effects due to lower detection efficiency. Figure 2 shows the BATSE fluences binned into equally spaced logarithmic steps (filled circles). Similarly, we fit the 266 PCA fluence values between $`1.7\times 10^{-10}`$ and $`1.9\times 10^{-7}`$ ergs cm<sup>-2</sup> to a power law model and obtain a best fit exponent value of 1.43 $`\pm `$ 0.06 (see Fig 2, diamonds for PCA). Finally, the 113 ICE fluences between $`1.8\times 10^{-7}`$ and $`6.5\times 10^{-6}`$ ergs cm<sup>-2</sup> yield $`\gamma `$ = 1.67 $`\pm `$ 0.15 (see Fig 2, squares for ICE). We find that the power law indices obtained for BATSE and ICE agree well with each other, while the index obtained from PCA is marginally lower.
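For a pure power law $`dN\propto S^{-\gamma }dS`$ above a threshold $`S_{\mathrm{min}}`$, the unbinned maximum likelihood estimate has a closed form, so the fit needs no numerical optimizer. The sketch below is a standard estimator of our own choosing; we cannot vouch that it reproduces the quoted uncertainties, which may have been derived differently:

```python
import numpy as np

def powerlaw_mle(fluences, smin):
    """Closed-form MLE for the index of dN/dS ~ S**(-gamma) for
    S >= smin, with its asymptotic 1-sigma error."""
    s = np.asarray(fluences, dtype=float)
    s = s[s >= smin]
    n = s.size
    gamma = 1.0 + n / np.sum(np.log(s / smin))
    return gamma, (gamma - 1.0) / np.sqrt(n)
```

Applied separately to the BATSE, PCA, and ICE samples with the thresholds quoted above, this is the kind of fit that yields the three indices in the text.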
We fit the ICE fluences to a power law $`\times `$ exponential model and to a broken power law model to search for evidence of a turnover claimed by Cheng et al. (1996). Neither model provides a statistically significant improvement over a single power law fit. It is important to note that there is no evidence of a high energy cut-off or a break in the energy distribution (see Fig 2).
(ii) Waiting times distribution: To measure the waiting times between successive SGR 1806-20 bursts, we identified 22 RXTE observation windows containing two or more bursts without any gaps. We then determined 262 recurrence interval times $`\mathrm{\Delta }`$T (i.e. time difference between successive bursts). Figure 3 shows a histogram of the $`\mathrm{\Delta }`$Ts, which range from 0.25 to 1655 s. We have fit the ($`\mathrm{\Delta }`$T)-distribution to a lognormal function and found a peak at $`\sim `$ 97 s (with $`\sigma `$ $`\sim `$ 3.6). This fit does not include waiting times less than 3 s to avoid contribution of double peaked events in which the second peak appears shortly ($`\sim `$ 0.25–3 s) after the first one. To correct for biases due to the RXTE observation window ($`\sim `$ 3000 s), we performed extensive numerical simulations and found that the intrinsic peak of the distribution should be at $`\sim `$ 103 s. Note that the observation windows with no bursts may represent a long-waiting-time tail which is additional to the lognormal distribution.
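The window correction can be illustrated with a few lines of simulation (our own toy version, not the analysis actually used; we interpret the quoted $`\sigma `$ as the multiplicative width of the lognormal, which is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def biased_peak(peak, sigma_mult, window=3000.0, nwin=20000):
    """Draw lognormal waiting times, keep only intervals that fit
    entirely inside a finite observation window, and return the peak
    (geometric mean) of the surviving sample."""
    mu, sig = np.log(peak), np.log(sigma_mult)
    kept = []
    for _ in range(nwin):
        t = 0.0
        while True:
            dt = rng.lognormal(mu, sig)
            if t + dt > window:
                break
            kept.append(dt)
            t += dt
    return np.exp(np.mean(np.log(kept)))

# Tuning `peak` until biased_peak(peak, 3.6) is ~97 s recovers an
# intrinsic peak slightly above the observed one, as quoted in the text.
```

The truncation preferentially removes long waiting times, so the observed peak sits below the intrinsic one (97 s versus 103 s above).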
To investigate the relation between the waiting time till the next burst ($`\mathrm{\Delta }`$$`T^+`$) and the intensity of each burst, we divided the 290 events sample into 6 intensity intervals, each of which contains approximately 50 events. We fit the $`\mathrm{\Delta }`$$`T^+`$-distribution also to a lognormal distribution and determined each peak mean-$`\mathrm{\Delta }`$$`T^+`$ (which range from 82 s to 148 s) and the mean counts for each of the 6 groups. We show in Figure 4 (a) that there is no correlation between $`\mathrm{\Delta }`$$`T^+`$ and the total burst counts (the Spearman rank-order correlation coefficient, $`\rho `$ = $`-0.2`$ with a probability that this correlation occurs in a random data set, P = 0.70). Similarly, we investigated the relation between the elapsed times since the previous burst ($`\mathrm{\Delta }`$$`T^{-}`$) and the intensity of the bursts. We find that mean-$`\mathrm{\Delta }`$$`T^{-}`$ extends from 77 s to 120 s. Figure 4 (b) shows that there is also no correlation between mean-$`\mathrm{\Delta }`$$`T^{-}`$ and the burst counts ($`\rho `$ = $`0.4`$, P = 0.46).
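The correlation test is standard; a sketch with hypothetical group means follows (the six values below are invented for illustration, and with only 6 points the chance probabilities are naturally large):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical group means for the 6 intensity intervals: mean burst
# counts and the lognormal-peak waiting time to the next burst (s).
mean_counts = np.array([40.0, 90.0, 160.0, 300.0, 700.0, 2500.0])
mean_dt_next = np.array([148.0, 82.0, 120.0, 95.0, 110.0, 100.0])

rho, p = spearmanr(mean_counts, mean_dt_next)
print(f"Spearman rho = {rho:+.2f}, chance probability P = {p:.2f}")
```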
## 6 Discussion
The fluence distributions of the SGR 1806-20 bursts seen with ICE and BATSE are well described by single power laws with indices 1.67 $`\pm `$ 0.15 and 1.76 $`\pm `$ 0.17, respectively, while RXTE bursts have an index of 1.43 $`\pm `$ 0.06. These indices are similar to those found for SGR 1900+14 (1.66, Göğüş et al. 1999) and SGR 1627-41 (1.62, Woods et al. 1999b). The ICE and BATSE values are consistent with one another, over nearly the same energy range but at different epochs. This suggests that SGR event fluence distributions may not vary greatly in time, therefore, we combine the ICE and BATSE values to calculate a "high-energy" index, $`\gamma `$ = 1.71 $`\pm `$ 0.11. The difference between the "low-energy" (RXTE) index and the "high-energy" index is insignificant ($`\sim `$ 2.3 $`\sigma `$); more "high-energy" data are needed to determine whether there is a break in the distribution.
Power law energy distributions have also been found for earthquakes with $`\gamma `$ = 1.4 to 1.8 (Gutenberg & Richter 1956; Chen et al. 1991; Lay & Wallace 1995), and solar flares, $`\gamma `$ = 1.53 to 1.73 (Crosby et al. 1993, Lu et al. 1993). This is typical behavior for self-organized critical systems. The concept of self-organized criticality (Bak, Tang & Wiesenfeld 1988) states that sub-systems self-organize due to some driving force to a critical state at which a slight perturbation can cause a chain reaction of any size within the system. SGR power law fluence distributions, along with a lognormal waiting time distribution, support the idea that the systems responsible for SGR bursts are in a state of self-organized criticality. We believe that in SGRs, the critical systems are neutron star crusts strained by evolving magnetic stresses (cf. Thompson & Duncan 1995).
Cheng et al. (1996) suggested that there is a high energy cut-off in the cumulative energy distribution of SGR 1806-20 bursts seen by ICE. In a cumulative energy distribution, the values of neighboring points are correlated, consequently, judging the significance of apparent deviations is very difficult. For these reasons we used a maximum likelihood fitting technique and displayed the differential energy distributions (e.g Fig.2). We find no evidence for a high-energy cut-off in the ICE data of SGR 1806-20 up to burst energies $`\sim 10^{41}`$ ergs. It should be noted, however, that a high energy cut-off or turnover must exist because otherwise the total energy diverges.
The distribution of waiting times of SGR 1806-20 bursts observed with RXTE is well described by a lognormal function, similar to that found by Hurley et al. (1994) for the bursts seen with ICE. The waiting times of the RXTE events are on average shorter than the ones observed with ICE, perhaps due to a different burst active phase of the source, to instrumental sensitivity (the PCA is more sensitive to weaker bursts than ICE, and the system displayed plenty of weaker bursts as well as strong ones in 1996), or a combination of both. Recently Göğüş et al. (1999) showed that the recurrence time distribution of SGR 1900+14 bursts observed with RXTE is also a lognormal function which peaks at $`\sim `$ 49 s. The lack of any correlation between the intensity and the waiting time until the next burst agrees well with the results of ICE observations of SGR 1806-20 (Laros et al. 1987). This behavior, also seen in SGR 1900+14 (Göğüş et al. 1999), confirms that the physical mechanism responsible for SGR bursts is different from systems where accretion-powered outbursts take place (e.g. the Rapid Burster, Lewin et al. 1976, and the Bursting Pulsar, Kouveliotou et al. 1996).
The burst activity of SGR 1806-20 over the last three years is considerably different from that of SGR 1900+14. After a long period with almost no bursts, BATSE recorded 200 bursts from SGR 1900+14 between 1998 May and 1999 January, with remarkably low activity thereafter. On the other hand, after SGR 1806-20 reactivated in 1996, it continued bursting at a lower rate, with 18 bursts in 1997, 32 in 1998 and 18 in 1999 through October. The latest RXTE observations of SGR 1806-20 in 1999 August revealed that smaller scale bursts are still occurring occasionally in this system, whereas contemporaneous RXTE observations of SGR 1900+14 do not show burst activity of any size. This continuation of burst activity may prevent the deposition of very large amounts of stress in the crust. Therefore, a giant flare from SGR 1806-20 in the near future, like the ones seen on 1979 March 5 from SGR 0526-66 (Mazets et al. 1979) and on 1998 August 27 from SGR 1900+14 (Hurley et al. 1999), may be less likely.
We are grateful to the referee, Dr. David Palmer, for his very constructive comments. We acknowledge support from NASA grant NAG5-3674 (E.G., J.v.P.), the cooperative agreement NCC 8-65 (P.M.W.); NASA grants NAG5-7787 and NAG5-7849 (C.K.); Texas Advanced Research Project grant ARP-028 and NASA grant NAG5-8381 (R.C.D.).
# Impurity spin dynamics in 2D antiferromagnets and superconductors
## Abstract
We discuss the universal theory of localized impurities in the paramagnetic state of 2D antiferromagnets where the spin gap is assumed to be significantly smaller than a typical exchange energy. We study the impurity spin susceptibility near the host quantum transition from a gapped paramagnet to a Néel state, and we compute the impurity-induced damping of the spin-1 mode of the gapped antiferromagnet. Under suitable conditions our results apply also to d-wave superconductors.
Doped antiferromagnets (AF) have been the subject of intense studies in the context of the cuprate high-temperature superconductors and other layered transition metal compounds. We present a quantum theory of a particular class of doped AF where it is possible to neglect the coupling between the spin and charge degrees of freedom and consider a theory of the spin excitations alone. Such a theory will apply to (i) quasi-2D "spin gap" insulators like $`\mathrm{SrCu}_2\mathrm{O}_3`$ or $`\mathrm{NaV}_2\mathrm{O}_5`$ in which a small fraction of the magnetic ions (Cu or V) are replaced by non-magnetic ions like Zn or Li and to (ii) high-temperature superconductors like $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_7`$ in which a small fraction of Cu has been replaced by non-magnetic Zn or Li. In the first case the spin gap $`\mathrm{\Delta }`$ is significantly smaller than the charge gap, justifying a theory of the spin excitations alone. In the second situation the effect of the fermionic quasiparticles in the superconducting state can be shown to be weak due to the linearly vanishing density of states at the Fermi level.
The effect of a (magnetic or non-magnetic) impurity can be probed by measuring the uniform spin susceptibility, which takes the form $`\chi =(g\mu _B)^2(A\chi _b+\chi _{\mathrm{imp}})`$ where $`A`$ is the total area of the AF, $`\chi _b`$ is the bulk response per unit area, and $`\chi _{\mathrm{imp}}`$ is the additional impurity contribution. In the paramagnetic ground state of the host each impurity induces a distortion of the host spin arrangement with a net magnetic moment $`S`$ associated with the impurity. The distortion is confined to the vicinity of the impurity which implies that the impurity susceptibility follows
$$\chi _{\mathrm{imp}}=\frac{S(S+1)}{3k_BT}\quad \text{as }T\to 0.$$
(1)
For a non-magnetic impurity in a spin-1/2 system we have $`S=1/2`$; for a general impurity eq. (1) can be used as definition of $`S`$.
The basis of our investigations is a boundary quantum field theory which describes a bulk AF together with arbitrary localized deformations. We focus on the vicinity of a quantum transition from a paramagnet to a magnetically ordered Néel state: Then the spin gap in the paramagnetic state is small compared to a typical nearest-neighbor exchange, $`\mathrm{\Delta }\ll J`$, which is the situation realized in many compounds. The field theory has been discussed in Ref. ; it consists of $`d+1`$-dimensional $`\varphi ^4`$ theory for the bulk ordering transition and a coupling to a local quantum impurity spin. The renormalization-group (RG) analysis shows that both the bulk and the boundary couplings are marginal for $`d=3`$ and flow to fixed-point values for $`d<3`$. This implies that the coupling between the bulk and impurity excitations becomes universal, and the spin dynamics in the vicinity of the impurity is completely determined by bulk parameters, the gap $`\mathrm{\Delta }`$ and the velocity of spin excitations $`c`$. Based on the RG results one obtains a number of universal properties in an expansion in $`\epsilon =3-d`$; we mention here the behavior of the uniform susceptibility at the bulk critical point. The system shows the Curie response of an irrational spin as $`T\to 0`$, $`\chi _{\mathrm{imp}}=\mathcal{C}_1/(k_BT)`$, where $`\mathcal{C}_1`$ is a universal number independent of microscopic details. The $`\epsilon `$ expansion result for $`\mathcal{C}_1`$ is
$$๐_1=\frac{S(S+1)}{3}\left[1+\left(\frac{33ฯต}{40}\right)^{1/2}\frac{7ฯต}{4}+\mathrm{}\right].$$
(2)
More detailed dynamic information can be obtained by a self-consistent diagrammatic method. The paramagnetic phase of the bulk is assumed to be dimerized, its spin-1 excitations can be described using triplet bosons $`t_{๐ค\alpha }`$. The impurity is represented by an additional spin $`S_\alpha `$ at site 0,
$`H={\displaystyle \underset{๐ค,\alpha }{}}ฯต_๐คt_{๐ค\alpha }^{}t_{๐ค\alpha }+{\displaystyle \frac{K}{\sqrt{N_s}}}{\displaystyle \underset{๐ค\alpha }{}}S_\alpha {\displaystyle \frac{t_{๐ค\alpha }^{}+t_{๐ค\alpha }}{\sqrt{ฯต_๐ค/J}}}`$ (3)
where $`J`$ is the host exchange constant, $`ฯต_๐ค`$ the energy of the spin-1 mode in the bulk, $`K`$ the coupling constant to the impurity spin, and $`N_s`$ the number of lattice sites. The impurity spin is represented by auxiliary fermions $`f`$, the impurity dynamics is contained in the fermion self-energy which arises from the scattering off the $`t`$ bosons. We employ a self-consistent non-crossing approximation (NCA) to calculate this self-energy; this approach follows from a saddle-point principle after generalizing the spin symmetry to SU($`N`$) and taking the limit $`N\mathrm{}`$. The NCA equations can be solved in the scaling limit; the value of the coupling $`K`$ drops out of all results for physical observables provided that $`\mathrm{\Delta }J`$ โ we obtain the same universal behavior as predicted by the RG. In fact, the results for susceptibility and impurity spin correlations agree with the one-loop RG result .
The diagrammatic approach can be easily applied to a system with a finite density of impurities $`n_{\mathrm{imp}}`$. The important observation is that the impact of the impurities is determined by a single energy scale $`\mathrm{\Gamma }n_{\mathrm{imp}}(\mathrm{}c)^d/\mathrm{\Delta }^{d1}`$. The AF in the absence of impurities shows a pole in the dynamic susceptibility $`\chi _๐(\omega )`$ at the AF wavevector $`๐`$. Our main concern is the fate of this collective peak upon the introduction of impurities. Scaling arguments predict that the susceptibility takes the form
$$\chi _๐(\omega )=\frac{๐}{\mathrm{\Delta }^2}\mathrm{\Phi }(\frac{\mathrm{}\omega }{\mathrm{\Delta }},\frac{\mathrm{\Gamma }}{\mathrm{\Delta }})(T=0),$$
(4)
where $`\mathrm{\Phi }`$ is a universal function, and $`๐`$ denotes the quasiparticle weight. In the absence of impurities we have $`\mathrm{\Phi }(\overline{\omega },0)=1/(1(\overline{\omega }+i0^+)^2)`$. The self-energy of the spin-1 bosons caused by the scattering at randomly distributed impurities is calculated using a self-consistent Born approximation. The equations for the Greenโs functions can be entirely written in terms of scaling functions with arguments $`\mathrm{}\omega /\mathrm{\Delta }`$ and $`\mathrm{\Gamma }/\mathrm{\Delta }`$, consistent with the scaling prediction (4). A numerical result for $`\mathrm{\Phi }`$ is shown in Fig. 1. The quasiparticle pole is broadened to an asymmetric line, with a tail at high frequencies. Our theory can be applied to a recent experiment where the inpurity-induced broadening of the spin-1 โresonance peakโ at energy $`\mathrm{\Delta }=40`$ meV in $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_7`$ has been observed. This experiment has $`n_{\mathrm{imp}}=0.005`$, $`\mathrm{\Gamma }=5`$ meV, $`\mathrm{\Gamma }/\mathrm{\Delta }=0.125`$. The half-width of the line is approximately $`\mathrm{\Gamma }`$, and this is in excellent accord with the measured linewidth of 4.25 meV, see Fig. 1. More tests of the predictions of our theory should be possible in the future. |
no-problem/0002/math0002002.html | ar5iv | text | # On finiteness of the number of boundary slopes of immersed surfaces in 3-manifolds
For any hyperbolic 3-manifold $`M`$ with totally geodesic boundary, there are finitely many boundary slopes for essential immersed surfaces of a given genus. There is a uniform bound for the number of such boundary slopes if the genus of $`M`$ or the volume of $`M`$ is bounded above. When the volume is bounded above, then area of $`M`$ is bounded above and the length of closed geodesic on $`M`$ is bounded below.
support: This paper grew out of work begun while the first two authors were visiting the Mathematical Sciences Research Institute in Berkeley in 1996-97. Research at MSRI is supported in part by NSF grant DMS-9022140. The first author was partially supported by NSF grant DMS-9704286. The second and third authors were partially supported by Outstanging Younth Fellowship of NSFC
We say that a proper immersion of a surface $`F`$ into $`M`$ is an essential surface if it is incompressible and $``$-incompressible, meaning that the immersion induces an injection of the fundamental group and relative fundamental group. Let $`c`$ be an essential simple loop on the boundary $`M`$ of a compact 3-manifold $`M`$. If there is a proper immersion of an essential surface $`F`$ into $`M`$ such that each component of $`F`$ is homotopic to a multiple of $`c`$, we call $`c`$ a boundary slope of $`F`$.
We are interested in the following two questions:
###### Questions
(1) Given a compact 3-manifold $`M`$ and a genus $`g`$, are there finitely many boundary slopes for immersed essential surfaces with genus at most $`g`$?
(2) Under what conditions is there a bound for the number of boundary slopes in (1) which is independent of the 3-manifold?
Many results in these directions have been obtained for various classes of 3-manifolds:
(1) If $`M`$ is a torus and the surfaces are embedded, Hatcher \[H\] showed that there are only finitely many boundary slopes, without any genus restriction.
(2) When the surfaces are embedded punctured spheres or tori, explicit bounds are known on the number of boundary slopes. These bounds are based on highly developed combinatorial methods in knot theory and the theory of representations of knot groups. See the survey papers \[Go\], \[Lu\] and \[Sh\].
(3) When $`M`$ is a torus and the surfaces are immersed, a positive answer to Question (1) has been obtained recently in \[HRW\]. When $`M`$ is hyperbolic, minimal surface theory is used to derive these bounds. For fixed genus $`g`$, these turn out to be quadratic functions of $`g`$, independent of $`M`$. See also recent work of Agol \[Agol\].
(4) If $`M`$ is an irreducible, $``$-irreducible, acylindrical, atoroidal 3-manifold and the surfaces are embedded, Scharlemann and Wu \[SW\] gave a positive answer to Question 1 using combinatorial arguments.
(5) Suppose $`M`$ is a torus and the surfaces are immersed. Baker has given examples to show that the bounded genus assumption cannot be dropped. Oertel, using branched surface theory, has found manifolds in which every slope is realized by the boundary of an immersed essential surface \[Oe\].
In this note we give a positive answer to Question (1), which extends (3) to the case where $`M`$ can contain high genus components and generalizes (4) from embedded to immersed surfaces.
###### Theorem 1
Suppose $`M`$ is $``$-irreducible, acylindrical, atoroidal 3-manifold. Then for any $`g`$, there are only finitely many $``$-slopes for essential surfaces of genus $`g`$.
Next we consider the question of obtaining bounds for the number of possible slopes which are independent of the particular manifold we are studying. It turns out that only the genus of the boundary of $`M`$ is relevant.
We define the genus of $`M`$ be the sum of the genus of the components of $`M`$.
###### Theorem 2
There is a function $`n(g,g_{})`$ such that there are at most $`n(g,g_{})`$ $``$-slopes for essential surfaces of genus $`g`$ in a $``$-irreducible, acylindrical, atoroi- dal 3-manifold whose boundary has genus equal to $`g_{}`$.
We can also obtain bounds on the number of boundary slopes in terms of hyperbolic geometry.
Definition. Let $`(V)`$ be the set of all hyperbolic 3-manifolds of totally geodesic boundary and with volume bounded above by $`V>0`$.
###### Theorem 3
There is a function $`n_1(g,V)`$ such that there are at most $`n_1(g,V)`$ $``$-slopes for essential surfaces of genus $`g`$ in a 3-manifold $`M(V)`$.
Theorem 3 follows from either Theorem 1 or Theorem 2, the fact that all maximum torus cusps have volumes $`>C>0`$ \[Ad\], and the following
###### Theorem 4
There is an integer $`g^{}>0`$ and a number $`L>0`$ such that if $`M(V)`$, then
(1) the genus of $`M`$ is at most $`g^{}`$.
(2) the length of any closed geodesic on $`M`$ is at least $`L`$.
Remark on Theorem 4. Theorem 4 can be restated as follows: For hyperbolic 3-manifolds with totally geodesic boundary and bounded volume, the areas of their boundaries have an upper bound, and the lengths of simple closed geodesics on their boundary have a lower bound. Neither of those two assertions is true in dimension 2. Surfaces of given area can have geodesic boundaries of any length.
###### Demonstration Proof of Theorem 1
If $`M`$ has any 2-sphere boundary components, we can fill them in with balls without changing the number of boundary slopes. Since any essential surface can be homotoped off of a splitting 2-sphere, we can without loss of generality assume that $`M`$ is irreducible. The number of boundary slopes of essential surfaces lying on a torus boundary component of $`M`$ is finite by \[HRW\], so we restrict attention to surfaces with boundary on a higher genus component of $`M`$. By Thurstonโs Geometrization Theorem for Haken manifolds, $`M`$ admits a complete hyperbolic structure of finite volume with totally geodesic boundary \[T\]. We assume that $`M`$ is equipped with such a hyperbolic structure. The totally geodesic boundary components consist of the non-torus boundary components of $`M`$. Since $`M`$ have only finitely many components, to prove Theorem 1, we need only to show that for each component of $`M`$ there are finitely many boundary slopes of proper essential surfaces of genus at most $`g`$.
Suppose $`F`$ is an incompressible, boundary incompressible proper immersion with $`F`$ consisting of $`n`$ copies of a slope $`l`$. Let $`DM`$ be the double of $`M`$ along its totally geodesic boundary components. $`DM`$ is Haken and atoroidal, and admits a hyperbolic structure obtained by doubling that of $`M`$. The double $`DF`$ of $`F`$ is incompressible, and therefore a theorem of Schoen-Yau and Sacks-Uhlenbeck shows that there is a least area representative of its homotopy class, denoted by $`DF^{}`$ \[SY\]. The intersection of $`DF^{}`$ with the incompressible least area (in fact totally geodesic) surface $`M`$ consists of curves essential on both $`DF^{}`$ and $`M`$. Since $`F`$ is boundary incompressible in $`M`$, the intersection $`F^{}`$ of $`DF^{}`$ with $`M`$ is a surface homotopic (rel boundary) in $`M`$ to $`F`$. Since $`DM`$ admits an isometry which is a reflection about $`M`$, $`DF^{}`$ is perpendicular to $`M`$. If not, we could reflect $`DF^{}M`$ and get a homotopic surface with lower area. So $`F^{}`$ is properly homotopic to $`F`$, $`F^{}`$ is perpendicular to $`M`$ and $`F^{}`$ is a (possibly multiply covered) geodesic.
Choosing geodesic orthogonal coordinates near the geodesic boundary of the surface $`F^{}`$, we have (line 7 of p.374, \[BM\])
$$ds^2=du^2+J^2(u,v)dv^2,(J(u,v)>0\text{ and }J(0,v)=1).$$
$`1`$
where the $`u`$-curves (those where $`v=`$ constant) are geodesics perpendicular to the boundary and the $`v`$-curves lie on the boundary when $`u=0`$.
The geodesic curvature in $`F^{}`$ of a curve $`t(u(t),v(t))`$ is given by Formula 10.4.7.1 of \[BM\],
$$\frac{1}{\sqrt{Eu_{}^{}{}_{}{}^{2}+Gv_{}^{}{}_{}{}^{2}}}(\frac{d\varphi }{dt}+\frac{1}{2\sqrt{EG}}(\frac{G}{u}v^{}\frac{E}{v}u^{})),$$
$`2`$
where $`\varphi `$ is the angle between the curve and the $`u`$-curves and the metric on $`F^{}`$ is given by
$$Edu^2+Gdv^2.$$
When we consider the $`v`$-curves, we have $`u^{}=0`$, $`v^{}=1`$, $`\varphi =\pi /2`$, $`E=1`$ and $`G=J^2`$. Substituting into (2), the geodesic curvature for a $`v`$-curve $`\{u=c\}`$ oriented as the boundary of $`\{0uc\}`$ is given by:
$$k_g=\frac{1}{J}\frac{J}{u}.$$
$`3`$
Orienting the curve as the boundary of $`\{uc\}`$ changes the sign and gives
$$k_g=\frac{1}{J}\frac{J}{u}.$$
$`3^{}`$
The Gaussian curvature of the surface is Formula 10.5.3.3 of \[BM\]
$$K=\frac{1}{J}\frac{^2J}{u^2}.$$
$`4`$
A direct computation shows that $`k_g`$ satisfies the following equation
$$\frac{k_g}{u}=K+k_g^2.$$
$`5`$
Since $`M`$ is of constant curvature $`1`$, we have
$$K=k_1k_21$$
by Gaussโs Formula (p.179 \[Sp\]), where $`k_1`$ and $`k_2`$ are the principle curvatures. Since $`F`$ is a minimal surface, we have $`k_1k_20`$, and hence $`K1`$. Then by (4) it follows that
$$\frac{^2J}{u^2}J.$$
$`6`$
Fixing $`v=v_0`$, by (6) we have
$$\frac{J}{u}(u,v_0)=\frac{J}{u}(u,v_0)\frac{J}{u}(0,v_0)$$
$$=_0^u\frac{^2J}{u^2}(s,v_0)๐s_0^uJ(s,v_0)๐s0.$$
$`7`$
(1), (3โ) and (7) imply that $`k_g<0`$, if $`u>0`$.
Now consider the function
$$h(u)=\frac{e^ue^u}{e^u+e^u}.$$
$`8`$
which is the solution to the differential equation
$$\frac{dh}{du}=1+h^2$$
$`9`$
with the initial condition $`h(0)=0`$. Note that $`h(u)<0`$ when $`u>0`$ and that the function $`k_gh`$ satisfies the differential inequality
$$\frac{d(k_gh)}{du}=K+k_g^2+1h^2$$
$$k_g^2h^2=(k_gh)(k_g+h)$$
$`10`$
by (5) and (9). We want to show that $`k_gh0`$.
Suppose on the contrary that on an interval $`[0,U]`$, $`k_gh`$ is somewhere positive. Pick a $`u_0[0,U]`$, such that $`k_gh`$ takes its positive maximum at $`u_0`$. We know $`u_00`$ since $`F^{}`$ is a geodesic and $`k_g=h=0`$ at $`0`$. Then
$$\frac{d(k_gh)}{du}$$
is zero if $`u_0(0,u)`$, and is $`0`$ if $`u_0=U`$. Hence
$$\frac{d(k_gh)}{du}0$$
at $`u_0`$. Since both $`k_g`$ and $`h`$ are negative at $`u_0`$, we have
$$(k_gh)(k_g+h)<0.$$
This contradicts (10), and so $`k_gh`$.
For $`t>0`$, let $`N_t(M)`$ be the subset of $`M`$ with distance $`t`$ from the boundary. There is a $`b>0`$ such that when $`t<b`$ then $`N_t(M)`$ is a collar of $`M`$.
Choose $`U<b`$ in the above and let $`N_U(F^{})`$ be the neighborhood of $`F^{}`$ with $`u`$ coordinates at most $`U`$. Clearly $`N_U(F^{})N_b(M)`$. Since $`N_b(M)`$ is a collar of $`M`$ and the surface $`F^{}`$ is $``$-incompressible, it follows that $`N_U(F^{})`$ is a collar of $`F^{}`$. Letting
$$F_U=\overline{F^{}N_U(F^{})},$$
each component of $`F_U`$ is in the same homotopy class in $`M`$ as the slope $`l`$ and
$$\mathrm{\#}F_U=\mathrm{\#}F^{}=\mathrm{\#}F=n.$$
By Gauss-Bonnet, we have that
$$_{F_U}K๐A+_{F_U}k_g๐s=2\pi (\chi (F))=2\pi (22gn).$$
Let $`d`$ be the length of the geodesic in the homotopy class of the slope $`l`$. Then the length of each component of $`F_U`$ is larger than $`d`$. Since $`K1`$ and $`k_gh<0`$ at $`U`$, we have
$$nhd2\pi (22gn).$$
Then we have
$$d\frac{2\pi (2g+n2)}{hn}\frac{2\pi (2g+1)}{h}.$$
$`11`$
Since $`g`$ is given and $`h=h(U)<0`$, $`d`$ is bounded above. There are only finitely many homotopy class of essential closed curves in $`M`$ containing elements of length less than a given constant. Therefore for any fixed $`g`$ there are only finitely many $``$-slopes for immersed incompressible, boundary incompressible surfaces of genus $`g`$. โ
###### Demonstration Proof of Theorem 2
In the proof of Theorem 1, if we only consider 3-manifolds whose totally geodesic boundary has a collar of width bounded below by $`U_{}`$, so that $`U>U_{}`$, then $`h=h(U)h(U_{})=\mathrm{tanh}U_{}0`$. Moreover if we consider only boundary slopes of length at least $`L>0`$, then by (11) we have
$$Ld\frac{2\pi (2g+1)}{\mathrm{tanh}U_{}}.$$
$`12`$
Let $`A(R)`$ be the area of $`D(R)`$, the hyperbolic disc with radius $`R`$. Let $`\mathrm{\Gamma }_L`$ be any lattice on the hyperbolic plane such that the distance of any two vertices has distance at least $`L`$. Then the number of vertices of $`\mathrm{\Gamma }_L`$ in $`D({\displaystyle \frac{2\pi (2g+1)}{\mathrm{tan}U_{}}})`$ is at most
$$n(g,U_{},L)=\frac{A(\frac{2\pi (2g+1)}{\mathrm{tan}U_{}}+L)}{A(L)}.$$
$`13`$
It follows that the number of boundary slopes for proper essential surfaces of genus at most $`g`$ is bounded by $`n(g,U_{},L)`$.
To show the existence of the function $`n(g,g_{})`$ we need to establish in the proof of the previous theorem:
1. A lower bound $`U_{}`$ to the width of a collar around $`M`$ for any hyperbolic metric on a manifold $`M`$ in which $`M`$ is totally geodesic of genus $`g_{}`$.
2. Given $`L>0`$, an upper bound on the number of curves of length $`L`$ lieing in a collar of $`M`$. This bound should depend only on the genus of $`M`$, and not on its geometry.
The existence of the first type of bound was established by Kojima and Miyamoto \[KM\], and by Basmajian \[Ba\]. On the boundary of $`M`$, the second type of bound is a consequence of the Margulis Lemma, or of its two dimensional version known as the โcollar lemmaโ (\[Bu\] and also Theorem 2.18 of \[Mu\]). We actually use a bound that holds in a collar neighborhood of $`M`$ in Theorem 1. However the projection from a collar of the boundary of a hyperbolic manifold with totally geodesic boundary to the boundary is length decreasing, so it suffices to consider curves lieing on the boundary.
More precisely, let $`S(x)=\mathrm{sinh}^1(1/\mathrm{sinh}(x/2))`$. For a given simple closed geodesic $`c`$ with length $`d_c`$ on a hyperbolic surface, let $`N(c)=\{x:d(x,c)S(d_c)\}`$. Then the collar lemma states that $`N(c)`$ is a collar. Moreover if $`c_1`$ and $`c_2`$ are disjoint simple closed geodesics, then $`N(c_1)`$ and $`N(c_2)`$ are disjoint. There is an $`L`$ such that if $`dL`$, then $`S(d)>d/2`$ and $`d>\mathrm{sinh}(d/2)`$; for example, we can choose $`L=1.75`$; then $`S(d)>S(L)>0.887>0.85=L/2d/2`$ and $`1.76L/\mathrm{sinh}(L/2)d/\mathrm{sinh}(d/2)`$. Then any two simple closed geodesics of length $`L`$ are disjoint. Moreover the area of $`N(c)`$ is
$$2d_c\mathrm{sinh}(S(d_c))=2d_c/\mathrm{sinh}(d_c/2)2d_c/d_c=2.$$
Hence the number of simple closed geodesics of length at most $`L`$ is bounded above by
$$2\pi (2g(F)2)/2=2\pi (g(F)1).$$
$`14`$
where $`g(F)`$ is the genus of $`F`$.
For simplicity, we first assume that $`M`$ is connected. By (13) and (14) we have
$$n(g,g_{})=\frac{A(\frac{2\pi (2g+1)}{\mathrm{tanh}U^{}}+L)}{A(L)}+2\pi (g_{}1).$$
$`15`$
By Lemma 3.1 of \[Ba\], we have the lower bound
$$U^{}=\frac{1}{4}log\frac{g_{}+1}{g_{}1}.$$
$`16`$
Moreover $`A(R)={\displaystyle \frac{4\pi }{1\mathrm{tanh}^2R/2}}`$. We can get an explicit value for $`n(g,g_{})`$ by plugging in these functions, though this does not appear to give sharp values.
In general suppose $`M`$ consists of $`k`$ torus components and $`l`$ components of genus $`g_i>1`$, $`i=1,\mathrm{},l`$. Then $`g_{}=_{i=1}^lg_i+k`$ and there are at most $`_{i=1}^ln(g,g_i)+kN(g)`$ boundary slopes for proper essential surfaces of genus $`g`$, where $`N(g)`$ is the uniform bound for the number of boundary slopes of proper essential surface of genus $`g`$ on a torus boundary component given in \[HRW\], and $`n(g,g_i)`$ is given by (15). One can verify that $`_{i=1}^ln(g,g_i)+N(g)n(g,g_{})`$. โ
###### Demonstration Proof of Theorem 4
Pick any infinite sequence of totally geodesic hyperbolic 3-manifolds $`\{M_n\}`$ in $`(V)`$. Consider the sequence $`\{D(M_n)\}`$, where $`D(M_n)`$ is the double of each $`M_n`$. Then the volume of the closed hyperbolic 3-manifold $`D(M_n)`$ is bounded by $`2V`$. By passing to a subsequence, we can assume that $`D(M_n)`$ has a Gromov limit $`M^{}`$. It is known that
(a) $`M^{}`$ is a complete hyperbolic 3-manifold of finite volume, which can be viewed as the complement of a hyperbolic link $`L`$ in a closed 3-manifold, and each $`D(M_n)`$ is obtained by a Dehn surgery on $`M^{}`$.
(b) Since each $`D(M_n)`$ admits a reflection $`r_n`$ (isometry) about its geodesic boundary, so does $`M^{}`$. Hence $`M^{}=D(M_{\mathrm{}})`$, where $`M_{\mathrm{}}`$ is a hyperbolic 3-manifold with totally geodesic boundary. Let $`r_{\mathrm{}}`$ be the reflection of $`D(M_{\mathrm{}})`$ about $`M_{\mathrm{}}`$. We have not claimed as yet that there is no cusp at $`M_{\mathrm{}}`$.
(c) Let $`TH_ฯต(P)`$ be the $`ฯต`$ thick part of $`P`$ for any hyperbolic 3-manifold $`P`$. Then for any $`ฯต>0`$ and $`1ฯตk1`$, there is an integer $`N`$ such that for $`n>N`$ there is a homeomorphism $`h_n:TH_ฯต(D(M))TH_ฯต(D(M_n))`$ which is a $`k`$-quasi-isometry. Moreover $`h_n`$ can be chosen to commute with the reflections.
For the result on the Gromov limit of closed hyperbolic 3-manifolds of bounded volume, see Chapter 6 of \[T1\], or Chapter E of \[BP\]. For the fact about reflections, one can argue as follow: As in the case of closed hyperbolic 3-manifolds, any sequence of hyperbolic 3-manifolds with totally geodesic boundary and bounded volume $`V`$ has a subsequence with Gromov limit $`M_{\mathrm{}}`$, which is a complete hyperbolic 3-manifold with totally geodesic boundary. Then the double $`D(M_{\mathrm{}})`$ will be the limit of the doubles.
Suppose there is a torus in $`M`$, which must result in cusps in $`M_{\mathrm{}}`$. Let the torus $`T`$ be the boundary component of $`TH_ฯต(M_{\mathrm{}})`$ corresponding to the cusp $`C`$, and let $`c`$ be a component of $`TM`$. Then $`h_n(T)D(M_n)`$ is invariant under the reflection $`r_n`$ about $`M_n`$, and it follows that $`h_n(c)`$ is a meridian of the Dehn filling solid torus on $`h_n(T)`$, and therefore $`h_n(c)M_n`$ is a trivial loop. However each cusp in $`M_{\mathrm{}}`$ can only be a limit of essential loops, and this is a contradiction. Hence $`M_{\mathrm{}}`$ contains no cusps.
Since $`M_{\mathrm{}}`$ contains no cusps, for small $`ฯต`$, $`M_{\mathrm{}}`$ is contained in the interior of the compact manifold $`TH_ฯตD(M_{\mathrm{}})`$. Moreover as the fixed point set of the reflection $`r_{\mathrm{}}|TH_ฯต(D(M_{\mathrm{}}))`$, $`M_{\mathrm{}}`$ is compact, therefore it is closed. Since $`M_n`$ converges to $`M_{\mathrm{}}`$ in the limit, it follows that
(1โ) the genus of $`M_n`$ is stable when $`n`$ is large enough,
(2โ) the length of the shortest simple closed geodesic on $`M_n`$ cannot converge to zero (otherwise there will be a cusp in $`M_{\mathrm{}}`$).
Now suppose (1) of Theorem 4 is not true. Then we can find a sequence $`\{M_n\}`$ in $`(V)`$ such that the genus of $`M_n`$ is $`>n`$. The genus of any subsequence must also tend to infinity, which contradicts (1โ); hence (1) of Theorem 4 is true. Similarly, if (2) of Theorem 4 is not true, then we can find a sequence $`\{M_n\}`$ in $`(V)`$ such that the length of the shortest geodesic of $`M_n`$ is $`<1/n`$, which contradicts (2โ). This finishes the proof of Theorem 4. โ
References.
\[Ad\] C. Adams, Volumes of n-cusped hyperbolic 3-manifolds, J. London Math. Soc. 1988, 38, 2, 555-565.
\[Agol\] I. Agol, Topology of Hyperbolic 3-manifolds, Ph.D. thesis, UCSD, 1998.
\[Ba\] A. Basmajian, Tubular neighborhoods of totally geodesic hypersurfaces in hyperbolic manifolds, Invent. Math. 117 (1994), 207-225.
\[BP\] R. Benedetti and C. Petronio, Lectures on Hyperbolic Geometry, Universitext, Springer-Verlag, (1991).
\[Bu\] P. Buser, The collar theorem and examples. Manuscripta Math. 25 (1978), 349โ357.
\[BG\] M. Berger and B. Gostiaux, Differential Geometry: Manifolds, curves and surfaces, GTM 115, Springer Verlag, Berlin, New York.
\[Go\] C. Gordon, Dehn surgery on knots, Proceedings of the International Congress of Mathematicians, Vol. I, II (Kyoto, 1990), 631โ642, Math. Soc. Japan, Tokyo, 1991.
\[HRW\] J. Hass, H. Rubinstein and S.C.Wang, Immersed surfaces in 3-manifo- lds, preprint (1998).
\[Ha\] A. Hatcher, On the boundary curves of incompressible surfaces, Pacific J. Math. 99 (1982), 373-377.
\[KM\] S. Kojima and Y. Miyamoto, The smallest hyperbolic $`3`$-manifolds with totally geodesic boundary. J. Differential Geom. 34 (1991), 175โ192.
\[Lu\] J. Luecke, Dehn surgery on knots in $`S^3`$, Proc. ICM Vol 2 (Zurich, 1994), 585-594.
\[Mu\] C. McMullen, Complex dynamics and renormalization, Ann. Math. Study, No, 135, PUP, 1994.
\[Oe\] U. Oertel, Boundaries of $`\pi _1`$-injective surfaces, Topology Appl. 78, (1997), 215-234.
\[SW\] M. Scharlemann and Y. Wu, Hyperbolic manifolds and degenerating handle additions, J. Aust. Math. Soc. (Series A) 55 (1993), 72-89.
\[Sh\] P. Shalen, Representations of 3-manifold groups and its application to topology, Proc. ICM Berkeley, (1986), 607-614.
\[Sp\] M. Spivak, A comprehensive Introduction to Differential Geometry, Vol. 4, Publish or Perish, Inc. Berkeley 1979.
\[SY\] R. Schoen and S.T. Yau, Existence of incompressible minimal surfaces and the topology of three-dimensional manifolds with nonnegative scalar curvature, Ann. of Math. (2) 110 (1979), 127-142.
\[T\] W. Thurston, Three dimensional manifolds, Kleinian groups and hyperbolic geometry, Bull. AMS, Vol. 6, (1982) 357-388.
\[T1\] W. Thurston, Geometry and Topology of 3-manifolds, Princeton University Lecture Notes, 1978. |
no-problem/0002/astro-ph0002289.html | ar5iv | text | # HST/NICMOS Imaging of the Planetary Nebula Hubble 12
## 1. Observations and Reductions
Images of Hubble 12 (Hb 12) were obtained with the HST/NICMOS instrument on 13 Nov 1997. The images in the F110W, F164N, and F166N were taken with NIC1, and the F160W, F187N, F190N, F212N, F215N images were obtained with NIC2. The MULTIACCUM mode and spiral dither patterns were used. The NICMOS CALNICA and CALNICB pipelines were used to reduce the data. In general a few dither sets were obtained in each filter; these were aligned and averaged to produce the final images. The bright central star makes the region near the core difficult to see clearly, and adds a number of image artifacts. The lines that run approximately from corner to corner are part of the diffraction pattern of the instrument. The horizontal and vertical lines that run through the star are array artifacts caused when the central star saturates in the long integrations. In future reductions we will attempt to subtract the point source in the core to better study the region around it.
## 2. Results and Discussion
Hb 12 has been notable primarily because it represents one of the clearest cases known of UV excited near-IR fluorescent H<sub>2</sub> emission (Dinerstein et al. 1988; Ramsay et al. 1993, Hora & Latter 1996, Luhman & Rieke 1996). Dinerstein et al. had mapped the inner structure and found it to be elliptical surrounding the central star; the deep H<sub>2</sub> images in Hora & Latter (1996) showed the faint bipolar lobes extending N-S, and the torus or โeyeโ-shaped strucure at the base of the lobes around the central star. The H<sub>2</sub> line ratios observed in the torus were in excellent agreement with predictions by theoretical H<sub>2</sub> fluorescence calculations (see also Luhman & Rieke 1996). Hora & Latter also detected \[Fe II\] line emission at 1.64 $`\mu `$m in a position along the edge of the shell, but not at the H<sub>2</sub> line peak emission location to the E of the central star.
Previous HST-WPC2 imaging by Sahai & Trauger (1998) in H$`\alpha `$ showed the inner structure to have an โhourglassโ shape, and a small bipolar structure in the core region, with lobes roughly E-W within a few tenths of an arcsec from the star. Welch et al. (1999a,b) obtained ground-based images in the \[Fe II\] line and nearby continuum and found that the line emission was also distributed along the inner hourglass nebula. The HST images presented here show the symmetry axes of the hourglass and the H<sub>2</sub> eye and bipolar nebula differ in their alignment by $``$ 5. A comparison of these images with the inner bipolar structure found by Sahai & Trauger shows that its alignment differs by $`20^{}`$ from the hourglass and outer H<sub>2</sub> lobes. The different orientation of the structures suggests that the central source may be precessing between discrete outflow events. Also, the structures seen in the H<sub>2</sub> image indicate other possible outflow events and remnants of other bipolar hourglass nebulae. Hb 12 may therefore be another example of a PN with multiple nested bipolar bubbles.
The inner hourglass is bright in the Paschen $`\alpha `$ and \[Fe II\] lines, but the outline of the โeyeโ appears only in the H<sub>2</sub> and the wide bandpass filters (in continuum plus H<sub>2</sub> line emission). This implies that the regions where only H<sub>2</sub> is detected are somehow shielded from what has ionized the inner hourglass. There is no evidence for \[Fe II\] emission from other regions in the nebula, and the density of the inner hourglass does not seem sufficient to provide effective shielding of the other regions from radiation from the central star, which might suggest shock excitation in an interacting wind. This hypothesis must be confirmed with investigation of the velocity structure in the nebula. The strong influence of FUV photons elsewhere in this object argues that the \[Fe II\] emission is excited by FUV photons in the PDR. We will be investigating this possibility through detailed chemical modeling.
We obtained high-resolution IR spectra of Hb 12 in the H<sub>2</sub> and \[Fe II\] lines using CSHELL at the IRTF (Kelly, Hora, & Latter 1999) which indicates that the N lobe is inclined towards us. We also have additional medium-resolution spectra of the faint extended H<sub>2</sub> lobes to determine the excitation properties of the outer nebula. We are in the process of analyzing these data along with the optical and IR imagery to understand the structure of this interesting and complex nebula.
## References
Dinerstein, H. L., Lester, D. F., Carr, J. S., & Harvey, P. M. 1988, ApJLett, 327, L27
Hora, J. L., & Latter, W. B. 1996, ApJ, 461, 288
Hora, J. L., Latter, W. B., & Deutsch, L. K. 1999, ApJSupp, in press
Kelly, D. M., Hora, J. L., & Latter, W. B. 1999, in preparation
Luhman, K., & Rieke, G. H. 1996, ApJ, 461, 298
Ramsay, S. K., Chrysostomou, A., Geballe, T. R., Brand, P. W. J. L., & Mountain, M. 1993, MNRAS, 263, 695
Sahai, R., & Trauger, J. T. 1998, AJ, 116, 1357
Welch, C. A., Frank, A., Pipher, J. L., Forrest, W. J., & Woodward, C. E. 1999a, ApJ, in press
Welch, C. A., et al., 1999b, this conference |
no-problem/0002/nucl-th0002005.html | ar5iv | text | # Quark Coulomb Interactions and the Mass Difference of Mirror Nuclei
\[
## Abstract
We study the Okamoto-Nolen-Schiffer (ONS) anomaly in the binding energy of mirror nuclei at high density by adding a single neutron or proton to a quark gluon plasma. In this high-density limit we find an anomaly equal to two-thirds of the Coulomb exchange energy of a proton. This effect is dominated by quark electromagnetic interactionsโrather than by the up-down quark mass difference. At normal density we calculate the Coulomb energy of neutron matter using a string-flip quark model. We find a nonzero Coulomb energy because of the neutronโs charged constituents. This effect could make a significant contribution to the ONS anomaly.
\]
The Okamoto-Nolen-Schiffer (ONS) anomaly is the long-standing discrepancy between the calculated and measured binding-energy differences of mirror nuclei . The anomaly is likely to arise from charge symmetry breaking (CSB) in the strong interaction , itself believed to originate from the up-down quark mass difference and electromagnetic effects in the Standard Model. Thus, the study of CSB is a useful tool to elucidate the structure of strongly-interacting nuclear systems.
The ONS anomaly can be calculated on several levels. Perhaps the simplest is the observation by B. A. Brown that the magnitude of the anomaly is approximately equal to the Coulomb exchange energy. If one adds an extra proton to a nucleus in a simple Hartree-Fock picture, there will be both a direct (Hartree) and exchange (Fock) Coulomb interaction with the other protons. If oneโarbitrarilyโneglects the Fock term, one obtains a better agreement with experiment.
At a different level, Blunden and Iqbal compute the ONS anomaly by calculating the contribution from $`\rho `$-$`\omega `$ mixing to the CSB component of the nucleon-nucleon (NN) interaction . Their CSB interaction can explain part of the anomaly. However, at present this is controversial, both in the choice of meson couplings and in the momentum dependence of $`\rho `$-$`\omega `$ mixing . There have been also a number of calculations of the anomaly based on the contribution from the up-down quark mass difference ($`\mathrm{\Delta }m`$). Indeed, Nakamura and coworkers calculate a CSB NN interaction using a constituent quark model where the short-range color hyperfine interaction depends explicitly on the quark masses. Moreover, the mass difference between the neutron and proton may be density dependent . Finally, there are other more recent model calculations, such as the one reported in Ref. .
Although the observation by Brown is not a dynamical explanation, it is an interesting characterization of the size of the anomaly. Could there be something wrong with the exchange term? As the nucleon is a composite object, could it be that the exchange energy of composite objects yields results significantly different from the exchange of point nucleons? One expects identical results if the composite scale of the nucleon is much smaller than the inter-particle spacing. However these scales are similar in nuclei. Moreover, although various calculations based on the up-down quark mass difference exist, we are not aware of any calculation of electromagnetic (EM) effects between quarks to the ONS anomaly.
The neutron-proton mass difference in free space is made up from comparable contributions of $`\mathrm{\Delta }m`$ and EM effects. Note that EM and $`\mathrm{\Delta }m`$ terms contribute with opposite signs to the neutron-proton mass difference. However, the ONS anomaly is sensitive to the density dependence of these contributions so the relative sign is unknown. In this letter we study EM effects involving the Coulomb exchange interactions of composite nucleons.
To clarify the importance of EM and $`\mathrm{\Delta }m`$ terms we consider a high-density limit of the ONS anomaly. We will show, with some mild assumptions, that in the high-density limit: (1) there is an ONS anomaly and (2) that it is dominated by EM effects with $`\mathrm{\Delta }m`$ being unimportant. Further, (3) the magnitude of the anomaly is simply related to the Coulomb exchange energy and (4) its sign is the same as that observed at lower densities. Finally, we will perform model calculations to see how relevant this high-density limit is to normal-density nuclei.
Consider very high-density symmetric nuclear matter. We assume that an electron gas makes the system electrically neutral. Thus the direct Coulomb interaction vanishes. Yet Coulomb exchange effects are still present. Now add either one proton or one neutron to the system and calculate the change in energy. First, model the system as a Fermi gas of elementary nucleons. An added proton will have a Coulomb exchange energy of
$$V_p=e^2\frac{k_F}{\pi }.$$
(1)
Here $`k_F`$ is the Fermi momentum of the proton and $`e`$ is its electric charge. In contrast, an added neutron has zero Coulomb exchange energy: $`V_n=0`$. Thus, the energy difference between an added proton and a neutron is just:
$$E_pE_n=V_p+M_pM_n=e^2\frac{k_F}{\pi }\mathrm{\Delta }M,$$
(2)
where $`\mathrm{\Delta }M=M_nM_p=1.29`$ MeV is the neutron-proton mass difference. Equation (2) is the simple expectation of a model with unexcited point nucleons.
Next we consider a quark-gluon plasma. We assume because of asymptotic freedom, that at very high density the system is nearly a free Fermi gas of quarks. This is because the strong coupling $`\alpha _S(k_F^2)`$ becomes small at the large momentum scale characterized by $`k_F`$ . When a proton is added it will dissociate into two up and one down quark. Therefore, the Coulomb exchange energy of these three quarks is
$$V_p^{(q)}=\left(\underset{i=1}{\overset{3}{}}e_i^2\right)\frac{k_F}{\pi },$$
(3)
where $`e_i`$ denotes the quark electric charge and $`k_F`$ is the quark Fermi momentum. Note that there are three times as many (valence) quarks as nucleons. However the quarks have an extra color degeneracy of three. As a result, the quark Fermi momentum in Eq. (3) is the same as the protonโs Fermi momentum in Eq. (1). The sum of the squares of the valence charges in a proton is $`(4/9+4/9+1/9)e^2=e^2`$. Because of this โnumerical accidentโ the quark Coulomb exchange energy is equal to the Coulomb exchange energy of an elementary proton.
An interesting difference arises when we add a neutron. In a quark-gluon plasma the Coulomb exchange energy is no longer zero because a neutron is made up of charged constituents. Moreover, the exchange energy is always negative independent of the sign of the charges; the contributions from positive and negative charges add rather than cancel. Indeed, the sum of the squares of the valence quark charges in a neutron is $`(4/9+1/9+1/9)e^2=2e^2/3`$. Thus, the neutron Coulomb energy is fully two thirds of that of a proton: $`V_n^{(q)}=\frac{2}{3}e^2k_F/\pi `$. The energy difference between an added proton and a neutron becomes:
$$E_p^{(q)}E_n^{(q)}=e^2\frac{k_F}{\pi }+\frac{2}{3}e^2\frac{k_F}{\pi }=\frac{1}{3}e^2\frac{k_F}{\pi }.$$
(4)
We choose to define an ONS anomaly $`\mathrm{\Delta }E_{\mathrm{ONS}}`$ as the actual energy difference, which we assume is given by Eq. (4), minus the hadronic-model expectation of Eq. (2)
$$\mathrm{\Delta }E_{\mathrm{ONS}}(E_p^{(q)}E_n^{(q)})(E_pE_n)=\frac{2}{3}e^2\frac{k_F}{\pi }+\mathrm{\Delta }M.$$
(5)
This anomaly arises, not because of an error in the protonโs energy but, because there is a nonzero Coulomb contribution for a (dissociated) neutron. In principle we should add to the above equation the contribution from the up-down quark mass difference. However, in the high-density limit, all contributions from $`\mathrm{\Delta }m`$ are suppressed by the large Fermi momentum. Indeed, the difference in the Fermi energy of free down and up quarks is: $`\sqrt{k_F^2+m_d^2}\sqrt{k_F^2+m_u^2}(m_d^2m_u^2)/2k_F`$. Thus, in the limit of very high density the total anomalyโincluding contributions from $`\mathrm{\Delta }m`$โbecomes dominated by Eq. (5). Moreover, the original mass difference between the neutron and proton ($`\mathrm{\Delta }M`$) โdisappearsโat high density because the contributions from $`\mathrm{\Delta }m`$ are suppressed and the Coulomb self-energies of the neutron and the proton are no longer relevant, as the quarks have rearranged themselves into a uniform free Fermi gas.
In summary, we expect that at high density there will be an ONS anomaly with a magnitude that is two-thirds that of the proton Coulomb exchange energy. Furthermore, EM effects dominate over the contribution from $`\mathrm{\Delta }m`$ and the sign of the anomaly is the same as that observed at normal density.
Our earlier discussion suggests that the Coulomb energy of pure neutron matter is nonzero. Below we focus on neutron matter because of the simple expectation that for point neutrons the Coulomb energy is zero. This may provide a signature of substructure.
Since the above statements are only strictly true in the limit of very high density, we investigate their implications at normal density by performing a model calculation of neutron matter composed of valence quarks. While a model is necessary, our philosophy is to use a โminimalโ one by demanding the following general features that any realistic model must posses. We require the many-quark wave function to (1) be explicitly anti-symmetric even for the exchange of quarks from different nucleons and (2) have cluster separability: the quark wave function of a nucleon removed to infinity must reduce to that of a free nucleon, without any unphysical long-range interactions. Finally, we demand that (3) quarks be confined and (4) for the wave function to reduce to free nucleons at low density and to a quark Fermi gas at high density. Perhaps, any model satisfying these general features can be used.
Conventional quark potential models with two-body confining interactions do not satisfy cluster separability as they generate unphysical long-range van der Waals interactions between nucleons. String-flip models on the other hand, do satisfy the four properties described above . Unfortunately, we are not aware of any other models which both satisfy these properties and allow a simple calculation. Thus, we employ the three-quark string-flip model discussed in Ref . The model has nonrelativistic constituent quarks of mass $`m_c`$ of fixed red, green, and blue colors. A system of $`A`$ nucleons is modeled with $`N=3A`$ quarks interacting via the following many-body potential: $`V=V_{RG}+V_{GB}+V_{BR}`$, where each term represents the optimal pairing of quarks. For example, the โred-greenโ component of the potential is defined as
$$V_{RG}=\mathrm{Min}\left\{\underset{j=1}{\overset{A}{}}v(๐ซ_j^{(R)}๐ซ_{P_j}^{(G)})\right\}.$$
(6)
Here $`๐ซ_j^{(R)}`$ is the coordinate of the $`j_{th}`$ red quark and $`๐ซ_{P_j}^{(G)}`$ is its green partner in the neutron. The minimum is over all $`A!`$ permutations $`P_j`$ of the set of $`A`$ green quarks. A harmonic string potential $`v(r)=kr^2/2`$ is used to confine the quarks and the Hamiltonian for the model becomes
$$H=\underset{i=1}{\overset{N}{}}\frac{๐_i^2}{2m_c}+V=\underset{i=1}{\overset{N}{}}\frac{_i^2}{2m_c}+V.$$
(7)
Each red quark is connected by harmonic strings to one and only one green and to one and only one blue quark. This insures that quarks will be confined into โcolor-neutralโ clusters. For three quarks the model reduces to the well-known harmonic oscillator quark model. For neutron matter there is a very large number of permutations or ways to connect the strings. We employ an implementation of the linear sum assignment algorithm by Burkard and Derigs that efficiently finds the optimal permutation in a time proportional to $`N^3`$ . This allows Monte Carlo simulations with hundreds of quarks.
The model has two dimension-full parameters: $`k`$ and $`m_c`$. Yet we are only interested in the harmonic-oscillator length $`b=(km_c)^{1/4}`$, as this sets the length scale for quark confinement. The root mean square radius of a nucleon is $`r^2^{1/2}=3^{1/4}b`$. Hence, to reproduce the experimental charge radius of the proton $`r^2^{1/2}=0.86`$ fm we choose $`b=1.13`$ fm. At the end we can rescale our results for other values of $`b`$.
We are interested in simulating neutron matter. Therefore we assign to red and green quarks an electromagnetic charge of $`e/3`$ and to blue quarks a charge of $`2e/3`$. For simplicity we do not include any other intrinsic degree of freedom, such as spin or isospin. The electromagnetic self-energy of an isolated neutron is ($`\alpha =e^2=1/137`$)
$$V_n^0=\sqrt{\frac{2}{9\pi }}\frac{\alpha }{r^2^{1/2}}=0.446\mathrm{MeV}.$$
(8)
A simple variational wave function for the many-quark system has been constructed in Ref. . It is given by
$$\mathrm{\Psi }=\mathrm{exp}\left(\lambda \frac{V}{kb^2}\right)\mathrm{\Phi },$$
(9)
with $`\mathrm{\Phi }`$ a product of Slater determinants for the red, green, and blue quarks. In Ref. $`\lambda `$ is a variational parameter characterizing the length scale for quark confinement. At low density a value of $`\lambda =1/\sqrt{3}`$ allows Eq. (9) to reproduce the gaussian wave function of a free nucleon. For simplicity we keep lambda fixed at $`\lambda =1/\sqrt{3}`$ for all densities. This insures that any change in the Coulomb energy of a neutron does not arise from an artificial change in this length scale.
We calculate the total Coulomb energy
$$V_{\mathrm{Coul}}^{\mathrm{tot}}=\underset{i<j}{\overset{N}{}}\frac{e_ie_j}{|๐ซ_i๐ซ_j|},$$
(10)
of a system of $`N=3A`$ quarks in a box of volume $`V`$ with antiperiodic boundary conditions. To minimize finite size effects we use periodic distances to compute the quark separation. The neutron density of the system is $`\rho _n=A/V`$. We use standard Metropolis Monte Carlo techniques to calculate the expectation value of the total Coulomb energy for the wave function given in Eq. (9).
Figure 1 shows the change in the Coulomb energy per neutron
$$\mathrm{\Delta }V\frac{1}{A}V_{\mathrm{Coul}}^{\mathrm{tot}}V_n^0,$$
(11)
as a function of density for systems with $`N`$=96 and 264 quarks. We have subtracted the neutron self-energy $`V_n^0`$ of Eq. (8) because this is included in the experimental neutron-proton mass difference. We find $`\mathrm{\Delta }V`$ to be nonzero.
FIG. 1. Change in Coulomb energy per neutron as a function of baryon density for pure neutron matter. The insert compares the model to a free Fermi gas (solid line) at high density.
At normal density $`\rho _n=0.08`$ fm<sup>-3</sup> and $`N=96`$: $`\mathrm{\Delta }V=78\pm 1\mathrm{keV}`$. The scale of this result suggests that changes in the Coulomb energies of quarks can make a significant contribution to the ONS anomaly. More refined models may give results which are of the same order of magnitude, given the ratio of the nucleon size to interparticle spacing. Furthermore, we expect an additional contribution from the up-down quark mass difference $`\mathrm{\Delta }m`$. Our result is somewhat smaller than the total observed anomaly of the order 200 keV in mass 15 and 300 keV in mass 39 . Note that, for simplicity, we have calculated the average Coulomb energy per neutron rather than the self-energy of a single valence neutron. These quantities are expected to be similar. Indeed, in a free Fermi gas the average Coulomb energy per proton is three fourths of that of Eq. (1).
Figure 2 shows $`\mathrm{\Delta }V`$ as a function of the nucleon root mean square radius or oscillator length at the fixed density of $`\rho _n=0.08`$ fm<sup>-3</sup>. Making the quark core of a nucleon smaller reduces $`\mathrm{\Delta }V`$, but not by much. Further, as the oscillator length is made very small the scale of the neutron self-energy $`V_n^0`$ grows and this can increase $`\mathrm{\Delta }V`$. Of course, if the nucleon core is small one must use a large meson cloud to account for the full proton charge radius. This meson cloud, which we have not included, could further increase $`\mathrm{\Delta }V`$.
FIG. 2. Change in the Coulomb energy per neutron as a function of the nucleon root mean square radius at the fixed neutron density of $`\rho _n=0.08`$ fm<sup>-3</sup>.
One should extend our results by using more elaborated quark models. It is important to study models with more intrinsic spin and flavor degrees of freedom along with more complete treatments of color. However, in these more complete models we still expect an exchange or dynamical correlation between quarks associated with the nucleonโs hard core. This correlation could lead to a nonzero Coulomb energy for neutrons. Note that we have used harmonic oscillator confining strings. Thus, our wave function has gaussian tails. Linear confinement may increase the tails and this should enhance the Coulomb exchange energy.
In conclusion, we have considered a high-density limit of the Okamoto-Nolen-Schiffer anomaly to clarify the role of electromagnetic interactions (EM) and of the up-down quark mass difference $`\mathrm{\Delta }m`$. We have added a single neutron or proton to a quark gluon plasma. In this high-density limit we find that: (1) there is an ONS anomaly, (2) it is dominated by EM interactions rather than by $`\mathrm{\Delta }m`$, and (3) its magnitude is two-thirds of the proton Coulomb exchange energy. We find an attractive Coulomb exchange energy for an added neutron because of the neutronโs charged constituents. This suggests that the ONS anomaly could be closely related to the nucleon substructure. We use a minimal string-flip quark model to calculate the Coulomb energy of pure neutron matter. The model wave function is fully anti-symmetric and satisfies cluster separability and quark confinement. At normal density, we find a nonzero Coulomb energy for neutron matter that could make a significant contribution to the ONS anomaly.
This work was supported in part by DOE grants DE-FG02-87ER40365, DE-FC05-85ER250000, and DE-FG05-92ER40750. |
no-problem/0002/astro-ph0002528.html | ar5iv | text | # 1 Introduction
## 1 Introduction
The coalescence and merging of two compact stars into a single object is a very common end-point of close binary evolution. Dissipation mechanisms such as friction in common envelopes, tidal dissipation, or the emission of gravitational radiation, are always present and cause the orbits of close binary systems to decay. This review will concentrate on the coalescence of compact binaries containing either two neutron stars (hereafter NS) or two white dwarfs (WD).
### 1.1 Double Neutron Stars
Many theoretical models of gamma-ray bursts (GRBs) rely on coalescing NS binaries to provide the energy of GRBs at cosmological distances (e.g., Eichler et al. 1989; Narayan, Paczyลski, & Piran 1992; Mรฉszรกros & Rees 1992; for recent reviews see Mรฉszรกros 1999 and Piran 1999). The close spatial association of some GRB afterglows with faint galaxies at high redshifts may not be inconsistent with a NS binary merger origin, in spite of the large recoil velocities acquired by NS binaries at birth (Bloom, Sigurdsson, & Pols 1999; but see also Bulik & Belczynski 2000). Currently the most popular models all assume that the coalescence of two NS leads to the formation of a rapidly rotating black hole (BH) surrounded by a torus of debris. Energy can then be extracted either from the rotation of the Kerr BH or from the material in the torus so that, with sufficient beaming, the gamma-ray fluxes observed from even the most distant GRBs can be explained (Mรฉszรกros, Rees, & Wijers 1999). However, it is important to understand the hydrodynamic processes taking place during the final coalescence before making assumptions about its outcome. In particular, as will be argued below (ยง2.2), it is not clear that the coalescence of two $`1.4M_{}`$ NS forms an object that will collapse to a BH on a dynamical timescale, and it is not certain either that a significant amount of matter will be ejected during the merger to form an outer torus around the central object (Faber & Rasio 2000).
Coalescing NS binaries are also important sources of gravitational waves that may be directly detectable by the large laser interferometers currently under construction, such as LIGO (Abramovici et al. 1992; see Barish & Weiss 1999 for a recent pedagogical introduction) and VIRGO (Bradaschia et al. 1990). In addition to providing a major new confirmation of Einsteinโs theory of general relativity (GR), including the first direct proof of the existence of black holes (see, e.g., Flanagan & Hughes 1998; Lipunov, Postnov, & Prokhorov 1997), the detection of gravitational waves from coalescing binaries at cosmological distances could provide accurate independent measurements of the Hubble constant and mean density of the Universe (Schutz 1986; Chernoff & Finn 1993; Markoviฤ 1993). Expected rates of NS binary coalescence in the Universe, as well as expected event rates in laser interferometers, have now been calculated by many groups. Although there is some disparity between various published results, the estimated rates are generally encouraging (see Kalogera 2000 for a recent review).
Many calculations of gravitational wave emission from coalescing binaries have focused on the waveforms emitted during the last few thousand orbits, as the frequency sweeps upward from $`10`$Hz to $`300`$Hz. The waveforms in this frequency range, where the sensitivity of ground-based interferometers is highest, can be calculated very accurately by performing high-order post-Newtonian (PN) expansions of the equations of motion for two point masses (see, e.g., Owen & Sathyaprakash 1999 and references therein). However, at the end of the inspiral, when the binary separation becomes comparable to the stellar radii (and the frequency is $`>1`$kHz), hydrodynamics becomes important and the character of the waveforms must change. Special purpose narrow-band detectors that can sweep up frequency in real time will be used to try to catch the last $`10`$ cycles of the gravitational waves during the final coalescence (Meers 1988; Strain & Meers 1991). These โdual recyclingโ techniques are being tested right now on the German-British interferometer GEO 600 (Danzmann 1998). In this terminal phase of the coalescence, when the two stars merge together into a single object, the waveforms contain information not just about the effects of GR, but also about the interior structure of a NS and the nuclear equation of state (EOS) at high density. Extracting this information from observed waveforms, however, requires detailed theoretical knowledge about all relevant hydrodynamic processes. If the NS merger is followed by the formation of a BH, the corresponding gravitational radiation waveforms will also provide direct information on the dynamics of rotating core collapse and the BH โringdownโ (see, e.g., Flanagan & Hughes 1998).
### 1.2 Double White Dwarfs
Coalescing WD binaries have long been discussed as possible progenitors of Type Ia supernovae (Iben & Tutukov 1984; Webbink 1984; Paczyลski 1985; see Branch et al. 1995 for a recent review). To produce a supernova, the total mass of the system must be above the Chandrasekhar mass. Given evolutionary considerations, this requires two C-O or O-Ne-Mg WD. Yungelson et al. (1994) showed that the expected merger rate for close WD pairs with total mass exceeding the Chandrasekhar mass is consistent with the rate of Type Ia supernovae deduced from observations. Alternatively, a massive enough merger may collapse to form a rapidly rotating NS (Nomoto & Iben 1985; Colgate 1990). Chen & Leonard (1993) speculated that most millisecond pulsars in globular clusters might have formed in this way. In some cases planets may also form in the disk of material ejected during the coalescence and left in orbit around the central pulsar (Podsiadlowski, Pringle, & Rees 1991). Indeed the very first extrasolar planets were discovered in orbit around a millisecond pulsar, PSR B1257$`+`$12 (Wolszczan & Frail 1992). A merger of two magnetized WD might lead to the formation of a NS with extremely high magnetic field, and this scenario has been proposed as a source of GRBs (Usov 1992).
Close WD binaries are expected to be extremely abundant in our Galaxy, even though their direct detection remains very challenging (Han 1998; Saffer, Livio, & Yungelson 1999). Iben & Tutukov (1984, 1986) predicted that $`20`$% of all binary stars produce close WD pairs at the end of their stellar evolution. More recently, theoretical estimates of the double WD formation rate in the Galaxy have converged to a value $`0.1\mathrm{yr}^1`$, with an uncertainty that may be only a factor of two (Han 1998; Kalogera 2000). The most common systems should be those containing two low-mass helium WD. Their final coalescence can produce an object massive enough to start helium burning. Bailyn (1993 and references therein) and others have suggested that some โextreme horizontal branchโ stars in globular clusters may be such helium-burning stars formed by the coalescence of two WD. Planets in orbit around a massive WD may also form following the binary coalescence (Livio, Pringle, & Saffer 1992).
Coalescing WD binaries are also important sources of low-frequency gravitational waves that should be easily detectable by future space-based laser interferometers. The currently planned LISA (Laser Interferometer Space Antenna; see Folkner 1998) should have an extremely high sensitivity (down to a characteristic strain $`h10^{23}`$) to sources with frequencies in the range $`10^4\mathrm{\hspace{0.17em}1}`$Hz. Han (1998) estimated a WD merger rate $`0.03\mathrm{yr}^1`$ in our own Galaxy. Individual coalescing systems and mergers may be detectable in the frequency range $`10`$$`100`$mHz. In addition, the total number ($`10^4`$) of close WD binaries in our Galaxy emitting at lower frequencies $`0.1`$$`10`$mHz (the emission lasting for $`10^2`$$`10^4`$yr before final merging) should provide a continuum background signal of amplitude $`h10^{20}`$$`10^{21}`$ (Hils et al. 1990). The detection of the final burst of gravitational waves emitted during an actual merger would provide a unique opportunity to observe in โreal timeโ the hydrodynamic interaction between the two degenerate stars, possibly followed immediately by a supernova explosion, nuclear outburst, or some other type of electromagnetic signal.
## 2 Coalescing Binary Neutron Stars
### 2.1 Hydrodynamics of Neutron Star Mergers
The final hydrodynamic merger of two NS is driven by a combination of relativistic and fluid effects. Even in Newtonian gravity, an innermost stable circular orbit (ISCO) is imposed by global hydrodynamic instabilities, which can drive a close binary system to rapid coalescence once the tidal interaction between the two stars becomes sufficiently strong. The existence of these global instabilities for close binary equilibrium configurations containing a compressible fluid, and their particular importance for binary NS systems, were demonstrated for the first time by Rasio & Shapiro (1992, 1994, 1995; hereafter RS1โ3) using numerical hydrodynamic calculations. These instabilities can also be studied using analytic methods. The classical analytic work for close binaries containing an incompressible fluid (e.g., Chandrasekhar 1969) was extended to compressible fluids in the work of Lai, Rasio, & Shapiro (1993a,b, 1994a,b,c, hereafter LRS1โ5). This analytic study confirmed the existence of dynamical instabilities for sufficiently close binaries. Although these simplified analytic studies can give much physical insight into difficult questions of global fluid instabilities, fully numerical calculations remain essential for establishing the stability limits of close binaries accurately and for following the nonlinear evolution of unstable systems all the way to complete coalescence.
A number of different groups have now performed such calculations, using a variety of numerical methods and focusing on different aspects of the problem. Nakamura and collaborators (see Nakamura & Oohara 1998 and references therein) were the first to perform 3D hydrodynamic calculations of binary NS coalescence, using a traditional Eulerian finite-difference code. Instead, RS used the Lagrangian method SPH (Smoothed Particle Hydrodynamics). They focused on determining the ISCO for initial binary models in strict hydrostatic equilibrium and calculating the emission of gravitational waves from the coalescence of unstable binaries. Many of the results of RS were later independently confirmed by New & Tohline (1997) and Swesty, Wang, & Calder (1999), who used completely different numerical methods but also focused on stability questions, and by Zhuge, Centrella, & McMillan (1994, 1996), who also used SPH. Zhuge et al. (1996) also explored in detail the dependence of the gravitational wave signals on the initial NS spins. Davies et al. (1994) and Ruffert et al. (1996, 1997) have incorporated a treatment of the nuclear physics in their hydrodynamic calculations (done using SPH and PPM codes, respectively), motivated by models of GRBs at cosmological distances. All these calculations were performed in Newtonian gravity, with some of the more recent studies adding an approximate treatment of energy and angular momentum dissipation through the gravitational radiation reaction (e.g., Janka et al. 1999; Rosswog et al. 1999), or even a full treatment of PN gravity to lowest order (Ayal et al. 2000; Faber & Rasio 2000).
All recent hydrodynamic calculations agree on the basic qualitative picture that emerges for the final coalescence (see Fig. 1). As the ISCO is approached, the secular orbital decay driven by gravitational wave emission is dramatically accelerated (see also LRS2, LRS3). The two stars then plunge rapidly toward each other, and merge together into a single object in just a few rotation periods. In the corotating frame of the binary, the relative radial velocity of the two stars always remains very subsonic, so that the evolution is nearly adiabatic. This is in sharp contrast to the case of a head-on collision between two stars on a free-fall, radial orbit, where shock heating is very important for the dynamics (RS1; Shapiro 1998). Here the stars are constantly being held back by a (slowly receding) centrifugal barrier, and the merging, although dynamical, is much more gentle. After typically $`1`$–$`2`$ orbital periods following first contact, the innermost cores of the two stars have merged and a secondary instability occurs: mass shedding sets in rather abruptly. Material (typically $`\sim 10\%`$ of the total mass) is ejected through the outer Lagrange points of the effective potential and spirals out rapidly. In the final stage, the spiral arms widen and merge together, forming a nearly axisymmetric thick disk or torus around the inner, maximally rotating dense core.
In GR, strong-field gravity between the masses in a binary system is alone sufficient to drive a close circular orbit unstable. In close NS binaries, GR effects combine nonlinearly with Newtonian tidal effects so that the ISCO is encountered at larger binary separations and lower orbital frequency than predicted by Newtonian hydrodynamics alone, or GR alone for two point masses. The combined effects of relativity and hydrodynamics on the stability of close compact binaries have only very recently begun to be studied, using both analytic approximations (basically, PN generalizations of LRS; see, e.g., Lai & Wiseman 1997; Lombardi, Rasio, & Shapiro 1997; Shibata & Taniguchi 1997), as well as numerical calculations in 3D incorporating simplified treatments of relativistic effects (e.g., Baumgarte et al. 1998; Marronetti, Mathews & Wilson 1998; Wang, Swesty, & Calder 1998; Faber & Rasio 2000).
Several groups have been working on a fully general relativistic calculation of the final coalescence, combining the techniques of numerical relativity and numerical hydrodynamics in 3D (Baumgarte, Hughes, & Shapiro 1999; Landry & Teukolsky 2000; Seidel 1998; Shibata & Uryu 2000). However this work is still in its infancy, and only very preliminary results of test calculations have been reported so far.
### 2.2 Black Hole Formation
The final fate of a NS–NS merger depends crucially on the NS EOS, and on the extraction of angular momentum from the system during the final merger. For a stiff NS EOS, it is by no means certain that the core of the final merged configuration will collapse on a dynamical timescale to form a BH. One reason is that the Kerr parameter $`J/M^2`$ of the core may exceed unity for extremely stiff EOS (Baumgarte et al. 1998), although Newtonian and PN hydrodynamic calculations suggest that this is never the case (see, e.g., Faber & Rasio 2000). More importantly, the rapidly rotating core may in fact be dynamically stable.
Take the obvious example of a system containing two identical $`1.35M_{\odot }`$ NS. The total baryonic mass of the system for a stiff NS EOS is then about $`3M_{\odot }`$. Almost independent of the spins of the NS, all hydrodynamic calculations suggest that about $`10\%`$ of this mass will be ejected into the outer torus, leaving at the center a maximally rotating object with baryonic mass $`2.7M_{\odot }`$. (Any hydrodynamic merger process that leads to mass shedding will produce a maximally rotating object, since the system will have ejected just enough mass and angular momentum to reach its new, stable quasi-equilibrium state.) Most stiff NS EOS (including the well-known "AU" and "UU" EOS of Wiringa et al. 1988; see Akmal et al. 1998 for a recent update) allow stable, maximally rotating NS with baryonic masses exceeding $`3M_{\odot }`$ (Cook, Shapiro, & Teukolsky 1994), i.e., well above the mass of the final merger core. Differential rotation (not taken into account in the calculations of Cook et al. 1994) can further increase this maximum stable mass very significantly (see Baumgarte, Shapiro, & Shibata 2000). Thus the hydrodynamic merger of two NS with stiff EOS and realistic masses is not expected to produce a BH. This expectation is confirmed by the preliminary full-GR calculations of Shibata & Uryu (2000), for polytropes with $`\mathrm{\Gamma }=2`$, which indicate collapse to a BH only when the two NS are initially very close to the maximum stable mass.
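The mass bookkeeping in this example is simple enough to spell out explicitly. A minimal sketch using only the numbers quoted above (note that the $`3M_{\odot }`$ baryonic total already includes the binding-energy excess over the $`2\times 1.35M_{\odot }`$ gravitational mass):

```python
m_baryon_total = 3.0   # total baryonic mass for a stiff EOS [M_sun], from text
f_ejected = 0.10       # fraction shed into the outer torus, from text

m_core = (1.0 - f_ejected) * m_baryon_total
print(m_core)  # 2.7 M_sun: below the >3 M_sun maximum baryonic mass of
               # maximally rotating models for stiff EOS (Cook et al. 1994)
```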
For slowly rotating stars, the same stiff NS EOS give maximum stable baryonic masses in the range $`2.5`$–$`3M_{\odot }`$, which may or may not exceed the total merger core mass. Therefore, collapse to a BH could still occur on a timescale longer than the dynamical timescale, following a significant loss of angular momentum. Indeed, processes such as electromagnetic radiation, neutrino emission, and the development of various secular instabilities (e.g., r-modes), which may lead to angular momentum losses, take place on timescales much longer than the dynamical timescale (see, e.g., Baumgarte & Shapiro 1998, who show that neutrino emission is probably negligible). These processes are therefore decoupled from the hydrodynamics of the coalescence. Unfortunately their study is plagued by many fundamental uncertainties in the microphysics.
### 2.3 The Importance of the Neutron Star Spins
The question of the final fate of the merger could also depend crucially on the NS spins and on the evolution of the fluid vorticity during the final coalescence. Close NS binaries are likely to be nonsynchronized. Indeed, the tidal synchronization time is almost certainly much longer than the orbital decay time (Kochanek 1992; Bildsten & Cutler 1992). For NS binaries that are far from synchronized, the final coalescence involves some new, complex hydrodynamic processes (Rasio & Shapiro 1999).
Consider for example the case of an irrotational system (containing two nonspinning stars at large separation; see LRS3). Because the two stars appear to be counter-spinning in the corotating frame of the binary, a vortex sheet (where the tangential velocity jumps discontinuously by $`\mathrm{\Delta }v\sim 0.1c`$) appears when the stellar surfaces come into contact. Such a vortex sheet is Kelvin-Helmholtz unstable on all wavelengths and the hydrodynamics is therefore extremely difficult to model accurately given the limited spatial resolution of 3D calculations. The breaking of the vortex sheet generates some turbulent viscosity so that the final configuration may no longer be irrotational. In numerical simulations, however, vorticity is quickly generated through spurious shear viscosity, and the merger remnant is observed to evolve rapidly (in just a few rotation periods) toward uniform rotation.
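To see why this instability is so hard to resolve numerically, note that for a vortex sheet between equal-density, incompressible fluids the linear Kelvin-Helmholtz e-folding time scales with wavelength, so the shortest resolved wavelengths always grow fastest. A rough order-of-magnitude sketch (this is a textbook linear estimate of our own, not a result from the simulations cited above):

```python
import numpy as np

c = 2.998e8
dv = 0.1 * c  # tangential velocity jump across the sheet (from the text)

def kh_growth_time(wavelength):
    """e-folding time of a KH mode on a vortex sheet between equal-density,
    incompressible fluids: 1 / (k dv / 2) = wavelength / (pi * dv)."""
    return wavelength / (np.pi * dv)

print(kh_growth_time(1.0e3))  # ~1e-5 s for a 1 km mode
print(kh_growth_time(1.0))    # ~1e-8 s for a 1 m mode, far below grid scales
```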
The final fate of the merger could be affected drastically by these processes. In particular, the shear flow inside the merging stars (which supports a highly triaxial shape; see Rasio & Shapiro 1999) may in reality persist long enough to allow a large fraction of the total angular momentum in the system to be radiated away in gravitational waves during the hydrodynamic phase of the coalescence. In this case the final merged core may resemble a Dedekind ellipsoid, i.e., it will have a triaxial shape supported entirely by internal fluid motions, but with a stationary shape in the inertial frame (so that it no longer radiates gravitational waves). This state will be reached on the gravitational radiation reaction timescale, which is no more than a few tens of rotation periods. On the (much longer) viscous timescale, the core will then evolve to a uniform, slowly rotating state and will probably collapse to a BH. In contrast, in all 3D numerical simulations performed to date, the shear is quickly dissipated, so that gravitational radiation never gets a chance to extract more than a small fraction ($`<10`$%) of the angular momentum, and the final core appears to be a uniform, maximally rotating object (stable to collapse) exactly as in calculations starting from synchronized binaries. However this behavior is most likely an artefact of the large spurious shear viscosity present in the 3D simulations.
In addition to their obvious significance for gravitational wave emission, these issues are also of great importance for models of GRBs that depend on energy extraction from a torus of material around the central BH. Indeed, if a large fraction of the total angular momentum is removed by the gravitational waves, rotationally-induced mass shedding may not occur at all during the merger, eventually leaving a BH with no surrounding matter, and no way of extracting energy from the system. Note also that, even without any additional loss of angular momentum through gravitational radiation, PN effects tend to reduce drastically the amount of matter ejected during the merger (Faber & Rasio 2000).
## 3 Coalescing White Dwarf Binaries
### 3.1 Hydrodynamics of White Dwarf Mergers
The results of RS3 for polytropes with $`\mathrm{\Gamma }=5/3`$ show that hydrodynamics also plays an important role in the coalescence of two WD, either because of dynamical instabilities of the equilibrium configuration, or following the onset of dynamically unstable mass transfer. Systems with mass ratios $`q\approx 1`$ must evolve into deep contact before they become dynamically unstable and merge. Instead, equilibrium configurations for binaries with $`q`$ sufficiently far from unity never become dynamically unstable. However, once these binaries reach their Roche limit, dynamically unstable mass transfer occurs and the less massive star is completely disrupted after a small number ($`<10`$) of orbital periods (see also Benz et al. 1990). In both cases, the final merged configuration is an axisymmetric, rapidly rotating object with a core–thick disk structure similar to that obtained for coalescing NS (RS2, RS3; see also Mochkovitch & Livio 1989).
### 3.2 The Final Fate: Collapse to a Neutron Star? Planets?
For two massive enough WD, the merger product may be well above the Chandrasekhar mass $`M_{Ch}`$. The object may therefore explode as a (Type Ia) supernova, or perhaps collapse to a NS. The rapid rotation and possibly high mass (up to $`2M_{Ch}`$) of the object must be taken into account for determining its final fate. Unfortunately, rapid rotation and the possibility of starting from an object well above the Chandrasekhar limit have not been taken into account in most previous theoretical calculations of โaccretion-induced collapseโ (AIC), which consider a nonrotating WD just below the Chandrasekhar limit accreting matter slowly and quasi-spherically (e.g., Canal et al. 1990; Nomoto & Kondo 1991; see Fryer et al. 1999 for a recent 2-D SPH calculation including rotation). Under these assumptions it is found that collapse to a NS is possible only for a narrow range of initial conditions. In most cases, a supernova explosion follows the ignition of the nuclear fuel in the degenerate core. However, the fate of a much more massive object with substantial rotational support and large deviations from spherical symmetry (as would be formed by dynamical coalescence) may be very different.
If a NS does indeed form, and later accretes some of the material ejected during the coalescence, a millisecond radio pulsar may emerge. Planets around this millisecond pulsar may be formed at large distances $`\sim 1`$ AU following the viscous evolution of the remaining material in the outer disk (Podsiadlowski, Pringle & Rees 1991; Phinney & Hansen 1993). This is one of the possible formation scenarios for the extraordinary planetary system discovered around the millisecond pulsar PSR B1257$`+`$12 (see Wolszczan 1999 for a recent update; Podsiadlowski 1993 for alternative planet formation scenarios). This system contains three confirmed Earth-mass planets in quasi-circular orbits (Wolszczan & Frail 1992; Wolszczan 1994). The planets have masses of $`0.015/\mathrm{sin}i_1\mathrm{M}_{\oplus }`$, $`3.4/\mathrm{sin}i_2\mathrm{M}_{\oplus }`$, and $`2.8/\mathrm{sin}i_3\mathrm{M}_{\oplus }`$, where $`i_1`$, $`i_2`$ and $`i_3`$ are the inclinations of the orbits with respect to the line of sight, and are at distances of 0.19 AU, 0.36 AU, and 0.47 AU, respectively, from the pulsar. In addition, the unusually large second and third frequency derivatives of the pulsar suggest the existence of a fourth, more distant and massive planet in the system (Wolszczan 1999). The simplest interpretation of the present best-fit values of the frequency derivatives implies a mass of about $`100/\mathrm{sin}i_4\mathrm{M}_{\oplus }`$ (i.e., comparable to Saturn's mass) for the fourth planet, at a distance of about $`38\mathrm{AU}`$ (i.e., comparable to Pluto's distance from the Sun), and with a period of about $`170\mathrm{yr}`$ in a circular, coplanar orbit (Wolszczan 1996; Joshi & Rasio 1997). However, if, as may well be the case, the first pulse frequency derivative is not entirely acceleration-induced, then the fourth planet can have a wide range of masses (Joshi & Rasio 1997). In particular, it can have a mass comparable to that of Mars (at a distance of $`\sim 9`$ AU), Uranus (at a distance of $`\sim 25`$ AU) or Neptune (at a distance of $`\sim 26`$ AU). The presence of this fourth planet, if confirmed, would place strong additional constraints on possible formation scenarios, as both the minimum mass and minimum angular momentum required in the protoplanetary disk would increase considerably (see Phinney & Hansen 1993 for a general discussion).
## Acknowledgements
This work was supported by NSF Grant AST-9618116, NASA ATP Grant NAG5-8460, and by an Alfred P. Sloan Research Fellowship. Our computational work is supported by the National Computational Science Alliance.
# Intrinsic and Cosmological Signatures in Gamma-Ray Burst Time Profiles: Time Dilation
## 1 Introduction
Many of the signatures of the cosmological time dilation and the radiation mechanisms of gamma-ray bursts (GRBs) are hidden in the temporal and spectral characteristics of GRBs. The subject of this paper is the analysis of the temporal properties of the bursts, and the correlations between intensities and timescales. We use the BATSE Time-to-Spill (TTS) data, which can give much higher time resolution than other forms of BATSE data for most bursts. The advantages and shortcomings of this data, our decomposition of the time profiles into pulses, and the evolution of burst characteristics are described in greater detail in the accompanying paper Lee et al. (2000). What follows is a brief summary. (See also Lee et al. (1996, 1998); Lee (2000).)
Many burst time profiles appear to be composed of a series of discrete, often overlapping, pulses, often with a *fast rise, exponential decay* (FRED) shape (Norris et al., 1996b). The different pulses may represent emission from distinct subevents within the gamma-ray burst source. Therefore, it may be useful to decompose burst time profiles in terms of individual pulses, each of which rises from background to a maximum and then decays back to background levels. We have analyzed gamma-ray burst time profiles by representing them in terms of a finite number of pulses, each of which is described by a small number of parameters.
We have used the phenomenological pulse model of Norris et al. (1996b) to decompose gamma-ray burst time profiles into distinct pulses. In this model, each pulse is described by five parameters with the functional form
$$I(t)=A\mathrm{exp}\left(-\left|\frac{t-t_{\text{max}}}{\sigma _{r,d}}\right|^\nu \right),$$
(1)
where $`t_{\text{max}}`$ is the time at which the pulse attains its maximum, $`\sigma _r`$ and $`\sigma _d`$ are the rise and decay times, respectively, $`A`$ is the pulse amplitude, and $`\nu `$ (the โpeakednessโ) gives the sharpness or smoothness of the pulse at its peak.
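For concreteness, a minimal numerical sketch of this pulse shape (the parameter values below are arbitrary illustrations, not fits to any actual burst):

```python
import numpy as np

def pulse(t, A, t_max, sigma_r, sigma_d, nu):
    """Norris et al. (1996b) pulse shape: sigma_r applies for t < t_max
    (rise) and sigma_d for t > t_max (decay); nu sets the peakedness."""
    sigma = np.where(t < t_max, sigma_r, sigma_d)
    return A * np.exp(-np.abs((t - t_max) / sigma) ** nu)

# A FRED-like pulse: fast rise, slower decay (arbitrary example values).
t = np.linspace(0.0, 10.0, 1001)
counts = pulse(t, A=100.0, t_max=2.0, sigma_r=0.3, sigma_d=1.5, nu=1.0)
```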
We have developed an interactive pulse-fitting program to perform this pulse decomposition on the BATSE TTS data, and used this program to fit pulses to all gamma-ray bursts in the BATSE 3B catalog (Meegan et al., 1996) up to trigger number 2000, in all of the four BATSE LAD energy channels for which TTS data is available and shows time variation beyond the normal Poisson noise for the background. We fit each channel of each burst separately. We have obtained 574 fits for 211 bursts, with a total of 2465 pulses.
In this paper, we focus on the possibility of distinguishing between intrinsic signatures in the temporal characteristics and those which arise from their cosmological distribution. A prominent example of this is the cosmological time dilation effect, which we expect to see since some, and possibly all, gamma-ray bursts originate at cosmological distances.
All timescales in GRBs will be lengthened by a factor of $`1+z`$ where $`z`$ is the redshift of the burst, as a result of cosmological time dilation (Paczyński, 1992; Piran, 1992). However, this seemingly straightforward test is not simple. First of all, given the great diversity in burst time profiles, it is difficult to decide which timescale is most appropriate for this test. It seems unlikely that any particular timescale is approximately the same in all bursts, so we expect to find time dilation as a statistical effect, rather than for individual bursts.
Secondly, redshifts are known only for a few bursts, so that for the vast majority of bursts we need to use another measure of distance or redshift. Most past analyses have used some measure of apparent GRB brightness for this purpose with the tacit assumption that the corresponding intrinsic brightness is a standard candle or has a very narrow distribution.
The observed apparent brightnesses of bursts are generally measured using either peak fluxes, which give the instantaneous intensity of bursts when they peak, or fluences, which measure the total output of bursts integrated over their entire durations. The brightness measures can also be divided another way, into photon measures and energy measures. Thus, there are several different measures of the apparent brightnesses of bursts. The BATSE burst catalogs give peak photon fluxes and total energy fluences for bursts. The pulse-fitting data presented here can be used to determine count fluxes and count fluences. Most previous work on the evidence for time dilation in burst time profiles has binned the bursts into two or three brightness classes using the peak flux as a measure of brightness, and compared a measure of total burst duration between these classes. Use of fluence as a brightness measure has been promoted by Petrosian & Lee (1996a) and Lloyd & Petrosian (1999).
In this paper, we use a number of different timescale and brightness measures. We will describe their correlations using power laws. Although cosmological models generally predict more complex relationships than a simple power law, it would be fruitless to attempt to fit anything more complex than a power law using the pulse-fitting data, which appears to have a large intrinsic scatter. To contrast the cosmological versus the intrinsic signatures, we compare the relations or correlations between strengths and timescales among bursts, which should contain the signatures of cosmological time dilations, with the same correlations among pulses of individual bursts, which can only contain the intrinsic effects. It is likely that some of these correlations are affected by selection effects in our fitting procedures. To investigate the importance of these, we have carried out extensive simulations which are described in the accompanying paper Lee et al. (2000). We use the results of these simulations to test whether or not the correlations we find are properties of the bursts or are products of our procedures. In the next section, we define the various timescales and burst strengths used in this analysis. The correlations relevant to the โtime dilationโ tests are discussed in Section 3 and the correlations between other quantities within bursts and among bursts are described in Section 4. In Section 5 we discuss the significance of these correlations.
It should be noted that many of the simulated bursts were affected by a truncation that almost never occurred in the actual BATSE TTS data. The TTS data is truncated at $`2^{20}`$ counts or 240 seconds, whichever occurs first. In nearly all of the actual bursts, the 240 second limit is reached first, while in many of the simulated bursts, the $`2^{20}`$ count limit is reached first. This truncation can shorten the observed time intervals between the first and last pulses in a burst, and between the two highest amplitude pulses in a burst, but not the observed pulse widths or the observed time intervals between consecutive pulses. *Therefore, all discussions of the first two kinds of time intervals in simulated bursts will only consider simulated bursts where no pulses were truncated by the $`2^{20}`$ count limit.*
## 2 Timescales and Intensities
We now describe the characteristics used in our correlation studies and the selection and procedural biases associated with each of them.
### 2.1 Intensities
We use peak count rates and count fluences as measures of burst intensity or strength. For individual pulses, the peak count rate is given by the amplitude $`A`$ and the count fluence by
$$\mathcal{F}=\int_{-\infty}^{\infty}I(t)\,dt=A\frac{\sigma _r+\sigma _d}{\nu }\mathrm{\Gamma }\left(\frac{1}{\nu }\right).$$
(2)
where $`\mathrm{\Gamma }`$ is the gamma function. For a burst, on the other hand, the peak count rate is $`A_{\text{max}}`$, the largest amplitude of the pulses in the burst, and the total count fluence is $`\mathcal{F}=\sum _i\mathcal{F}_i`$, summed over all pulses.
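As a consistency check, the closed-form pulse fluence can be compared with direct numerical integration of the pulse shape. A sketch reusing the `pulse` function from the earlier example (parameter values remain arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

A, t_max, sigma_r, sigma_d, nu = 100.0, 2.0, 0.3, 1.5, 1.0

# Closed form from eq. (2)
F_analytic = A * (sigma_r + sigma_d) / nu * gamma(1.0 / nu)

# Direct quadrature of the same pulse profile
F_numeric, _ = quad(pulse, -np.inf, np.inf,
                    args=(A, t_max, sigma_r, sigma_d, nu))
# Both evaluate to ~180 counts here (nu = 1, so Gamma(1/nu) = 1).
```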
### 2.2 Time Intervals Between Pulses
The most obvious timescale for individual pulses is the *pulse width*, which is given by
$$T_f=(\sigma _r+\sigma _d)\left(\mathrm{ln}\frac{1}{f}\right)^{\frac{1}{\nu }}.$$
(3)
where $`f`$ is the fraction of the peak height at which the width is measured, and $`\nu `$ is the "peakedness" parameter. In this paper, we use the case $`f=1/2`$, for which the width is the full width at half maximum (FWHM); a numerical check of eq. (3) is sketched after this paragraph. We will discuss the correlations between pulse width and intensity measures in the next section. Here we consider some other timescales, namely the *time intervals between pulses*, which may also be characteristic of the gamma-ray production mechanisms. There are several possible choices of time intervals. We will examine the *intervals between consecutive pulses* first, which may be subject to the following selection effects: two pulses with a short separation between their peaks may have a large overlap, and thus be identified as only one pulse. This will limit the shortest interval between pulses, introducing a selection bias. On the other hand, when two pulses have a long separation between them, additional smaller pulses may be resolved between them that would not be resolved if the separation were smaller. This will limit the longest intervals between consecutive pulses, introducing another selection bias.
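As flagged above, eq. (3) is straightforward to check numerically; a minimal sketch using the same arbitrary parameter values as the earlier examples:

```python
import numpy as np

sigma_r, sigma_d, nu = 0.3, 1.5, 1.0
f = 0.5  # fraction of peak height: full width at half maximum

# Eq. (3): T_f = (sigma_r + sigma_d) * (ln(1/f))**(1/nu)
T_fwhm = (sigma_r + sigma_d) * np.log(1.0 / f) ** (1.0 / nu)
print(T_fwhm)  # (0.3 + 1.5) * ln 2 ~ 1.25 for nu = 1
```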
Figure 1 shows the distributions of the intervals between the peak times $`t_{\text{max}}`$ of adjacent pulses for the simulations and the fits to simulations. It shows that the fitting procedure identifies pulses with longer separations correctly, but misses most pulses with shorter separations.
Figure 2 shows the time intervals between consecutive pulses for bursts with different numbers of pulses, as derived from our fits to the BATSE data and from the simulations. Note that here and in similar figures to follow, we show only data from channels 2 and 3. In general, channels 1 and 4 show similar behavior, but results from these channels have lower significance because they contain fewer pulses. Columns (a) of Table 1 give the Spearman rank-order correlation coefficients $`r_s`$ between these two quantities, and the probabilities that the observed correlations have occurred by chance. These show that pulses tend to be closer together in bursts with more pulses, in the actual bursts, in the simulated bursts, and in the fits to simulated bursts. One selection effect that may contribute to this result in actual bursts is that more complex bursts may simply be bursts with stronger signal-to-noise ratios, which allows more pulses to be resolved within any given time interval. Our analysis of the simulated bursts and the fits to the simulations shows similar results. This is as expected, since pulse peak times were generated independently of each other and of the number of pulses per burst, so more complex bursts will tend to have more pulses in any given time interval. The correlation is weaker for the fits to simulated bursts than for the original simulated bursts because the fitting procedure tends to miss pulses with shorter separations.
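The rank-order statistics quoted throughout can be reproduced with standard tools. A sketch with placeholder arrays standing in for the per-burst quantities of Table 1 (the values below are invented purely for illustration):

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholders: number of pulses per burst and the mean interval [s]
# between consecutive pulses in that burst.
n_pulses  = np.array([3, 5, 8, 12, 4, 7, 20, 6])
intervals = np.array([4.2, 2.9, 1.8, 1.1, 3.5, 2.0, 0.7, 2.6])

r_s, p_chance = spearmanr(n_pulses, intervals)
# r_s < 0 here: pulses are closer together in the more complex bursts.
```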
Another time interval, *the interval between the peak times of the first and last pulses* in a burst, might be expected to give a good measure of the *total duration* of the burst. However, the determination of this interval can be greatly affected by whether or not low amplitude pulses can be identified above background. This is essentially the same effect as the sensitivity of the $`T_{90}`$ interval to the signal-to-noise ratios of bursts (Norris, 1996; Lee & Petrosian, 1997).
Figure 3 and columns (b) of Table 1 compare the number of pulses in each burst with the time interval between the first and last pulses in each burst. They show that the time intervals between the first and last pulse are greater in bursts with more pulses, in actual bursts, in simulated bursts, and in fits to simulated bursts. In actual bursts, this may result from the selection effect described above: more complex bursts may simply have stronger signal-to-noise ratios, making it easier to identify earlier and later pulses. In the simulated bursts and the fits to simulated bursts, this is also as expected, since the peak times of pulses were generated independently of the number of pulses in each burst.
A third time interval, the interval *between the peak times of the two highest amplitude pulses* in a burst, may also represent a characteristic time scale for the entire burst. Determination of this interval should be less affected by the selection effects that we have seen with the intervals between consecutive pulses. However, the identification of the two highest pulses may be affected by whether a particular structure in a burst is identified as a single pulse with large amplitude or as multiple overlapping pulses with smaller amplitudes. The interval between the two highest amplitude pulses should be less influenced by the selection effects in the fitting procedure that affect the interval between the first and last pulses in a burst.
Figure 4 and Table 1, columns (c) show the correlations between the number of pulses in each burst and the time intervals between the two highest amplitude pulses in each burst, both for actual bursts, and for simulated bursts and fits to simulated bursts. It appears that unlike the first two time intervals described above, there is no tendency for the third time interval to be shorter or longer in bursts with more pulses. This suggests that the interval between the two highest pulses in each fit is not subject to the signal-to-noise selection effects that affect both the intervals between adjacent pulses and the interval between the first and last pulse in each burst.
The upshot of the above analysis is that the correlations between time intervals and numbers of pulses per burst (or complexity) in the simulated bursts is similar to that of the actual BATSE data, indicating that the simulated data provides a good representation of these aspects of the actual data, and can be used to determine the biases in the data and in the fitting procedure.
## 3 Time Dilation
We now consider the correlations between timescales and intensities among pulses within bursts and among the bursts to determine the presence of time dilation or time stretching and to test if this is due to cosmological redshift of the sources.
### 3.1 Peak Luminosity as a Standard Candle
If we assume that the peak luminosities of bursts are approximately a standard candle, then the correlations between pulse amplitudes and timescales can be used to test time dilation; for the pulse-fitting data, the relevant peak count rates are the amplitudes of the constituent pulses in bursts. It has previously been found that higher amplitude pulses have shorter durations (are narrower) (Davis et al., 1994; Norris et al., 1994; Davis, 1995), but it has been noted that this could be in part or entirely an intrinsic property of bursters (Norris et al., 1998). A potential problem with using peak flux as a distance measure for bursts observed by BATSE is that data binned to 64 ms have been typically used, so that the peak fluxes of bursts with sharp spikes may be underestimated. (See Lee & Petrosian (1997).) This should be less of a problem with the variable time resolution TTS data, where the time resolution is inversely proportional to the count rate and every spill represents the same number of counts. The pulse-fitting data from actual BATSE bursts shown in the upper panels of Figure 5 clearly shows that higher amplitude pulses tend to be narrower, or have shorter durations. Table 2 gives the Spearman rank-order correlation coefficients, which show that pulse amplitudes and pulse widths are inversely correlated in all energy channels. The table also gives fitted power laws for pulse amplitude as a function of pulse width. These were obtained by applying the ordinary least squares (OLS) bisector linear regression algorithm (Isobe et al., 1990; Lee, 2000).
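Since the OLS bisector is less familiar than ordinary least squares, a sketch of the slope estimate may be useful. This is our own implementation of the published formula of Isobe et al. (1990), applied here to the logarithms of amplitude and width:

```python
import numpy as np

def ols_bisector(x, y):
    """OLS bisector fit (Isobe et al. 1990): the line bisecting the
    OLS(Y|X) and OLS(X|Y) solutions, treating x and y symmetrically."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx   # OLS(Y|X) slope
    b2 = syy / sxy   # OLS(X|Y) slope, expressed as dy/dx
    slope = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return slope, ym - slope * xm

# Usage: slope, icept = ols_bisector(np.log10(width), np.log10(amplitude))
```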
The lower panels of Figure 5 show the pulse amplitudes versus pulse widths for all pulses in all simulated bursts combined, for the initial simulations and for the fits to the simulations. The fitting procedure tends to miss lower amplitude pulses, but does not appear to have strong selection effects in pulse width. However, the fitting procedure introduces an anticorrelation between pulse amplitudes and pulse widths, as shown in the last two rows of Table 2. By design, there is no correlation between pulse width and pulse amplitude in the initial simulation, but there is a negative correlation between pulse width and pulse amplitude in the fits to the simulations. However, this correlation appears to be weaker and has far less statistical significance than in the fits to actual BATSE bursts.
It is difficult to draw concrete conclusions from the correlations in the combined set of pulses. To distinguish cosmological from intrinsic correlations, we should compare the correlations among bursts and among pulses within individual bursts.
### 3.2 Cosmological Effects
For testing the first type of correlation, we use the peak fluxes of each of the bursts, *i.e.*, the amplitudes of the highest amplitude pulses, and the widths of the same pulses. These data and their analysis (shown in Figure 6 and columns (a) of Table 3) show a strong inverse correlation between peak pulse amplitude and pulse width in the actual BATSE bursts, but not in the simulated bursts or the fits to the simulated bursts. This suggests that the correlations observed in the fits to actual bursts observed by BATSE are not caused by selection effects in the fitting procedure, so they may arise from cosmological time dilation, intrinsic properties of the bursters, or selection effects arising from the BATSE triggering criteria.
### 3.3 Intrinsic Effects
A more unambiguous test of the second type of correlation, intrinsic correlations, can come from analysis of pulse widths and amplitudes of pulses within bursts, because correlations between pulse characteristics within bursts cannot be affected by the distances to the sources, and are less likely to be affected by selection effects due to the triggering process. To this end, we have carried out linear least squares fits to the logarithms of the pulse amplitudes and widths in all actual BATSE bursts, and simulated bursts (before and after fitting) which contain more than one pulse. The results are shown in Table 4, which gives the numbers and fractions of fits that show inverse correlations as determined from the Spearman coefficients, and the probabilities that this would occur by chance if there was no actual correlation, using the binomial distribution. It also gives the distributions of power-law indices (slopes), which we denote as $`\alpha `$, in four bins: $`\alpha <1`$, $`1<\alpha <0`$, $`0<\alpha <1`$, and $`\alpha >1`$. (For these bins, the results are identical for three different linear regression methods that are symmetric in the two variables being compared. See Isobe et al. (1990); Lee (2000).) The last column of Table 4 gives the median power law index from the OLS bisector method. For all energy channels, a significant majority of fits show inverse correlations between pulse widths and pulse amplitudes *within* bursts. When we examine the actual BATSE bursts for which the rank correlations have the greatest statistical significance, shown in the upper panels of Figure 7, we find that the vast majority of these show inverse correlations between pulse widths and pulse amplitudes; in the bursts where the correlations are positive, the correlations also tend to be less statistically significant. The pulse amplitudes most often vary as a small negative power of the pulse width. The power law indices are significantly different from those relating pulse amplitude to pulse width for the highest amplitude pulses in each burst (Petrosian et al., 1999). Ramirez-Ruiz & Fenimore (1999) found similar results for the sample of 28 complex bursts fitted by Norris et al. (1996b). As noted by those authors, this anticorrelation could be consistent with internal shock models of GRBs.
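The chance probabilities quoted in Table 4 amount to a sign test: under the null hypothesis, each burst is equally likely to show a positive or a negative correlation. A sketch of the computation (the counts below are placeholders, not the tabulated values):

```python
from scipy.stats import binom

n_bursts = 100   # multi-pulse bursts examined (placeholder)
n_inverse = 68   # of these, bursts showing an inverse correlation (placeholder)

# Probability of at least n_inverse "successes" out of n_bursts at p = 1/2
p_chance = binom.sf(n_inverse - 1, n_bursts, 0.5)
print(p_chance)  # ~2e-4 for these placeholder counts
```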
Because of the possible far-reaching effects of this result, it is important to ensure that this is not due to a selection or analysis bias. Our simulations can to some degree answer this question. Table 4 also shows that there are no correlations between amplitude and pulse width within the simulated bursts. In the fits to the simulations, however, more bursts show a negative correlation between pulse amplitude and pulse width than show a positive correlation. This asymmetry appears to be as large as it is for the fits to actual BATSE data, which would suggest that the observed tendency for higher amplitude pulses within bursts to be narrower arises largely from a selection effect in the pulse-fitting procedure. However, when we compare the fits to actual and simulated bursts for which the rank correlations have the greatest statistical significance, shown in the lower panels of Figure 7, we find a different result. In the simulated data, among the bursts whose correlations between pulse widths and pulse amplitudes have higher statistical significance, the fraction with positive correlations is similar to that in bursts where the rank correlations have weaker statistical significance; the asymmetry does not depend on the statistical significance of the correlations. This is unlike the fits to actual bursts, where almost all of the bursts with the most statistically significant correlations show a negative slope (Petrosian et al., 1999). Therefore, the observed inverse correlations between pulse widths and pulse amplitudes within actual bursts appear to arise in part from intrinsic properties of the sources.
However, some caution is necessary in the interpretation of these results. This is because we find correlations between the errors in the fitted pulse parameters by comparing the parameters used in the simulations with those obtained from the fits to the simulations. For simulated bursts consisting of a single pulse in both the original simulation and in the fit, the identification of pulses between the simulation and the fit is unambiguous and unaffected by the effects of missing pulses. Figure 8 shows that the errors in the fitted pulse amplitudes and the fitted pulse widths tend to have an inverse correlation; when the fitted amplitude is larger than the original amplitude, the fitted width tends to be smaller than the original width, and vice versa. The same effect also appears when we compare the highest amplitude pulses from all bursts, or all pulses matched between the simulations and the fits to the simulations. This selection effect may cause weak inverse correlations between pulse amplitude and pulse width within fits to actual or simulated bursts, so it may be another reason why a large majority of both actual BATSE bursts and fits to simulated bursts show an inverse correlation between pulse amplitude and pulse width within the bursts, as found here and by Ramirez-Ruiz & Fenimore (1999). *However, we conclude that the evidence for intrinsic correlation between pulse amplitude and width is weak and requires further study. Therefore, caution should be exercised in the interpretation of this result, in particular in using it as evidence against the external shock model.*
### 3.4 Other Timescales
Cosmological time dilation must affect all timescales within bursts, not only pulse widths. Some of these timescales may provide a more robust test of cosmological time dilation. This is because the use of a pulse width as a burst duration is subject to the following uncertainty. Because of the spectral shift due to cosmological redshift, for the dimmer, hence more distant, bursts, BATSE will be detecting photons from higher energies in the burst rest frame: the gamma-rays were originally produced at higher energies but had redshifted to lower energies by the time they were detected. Since both burst durations (Fenimore et al., 1995) and pulses (Lee et al., 2000) tend to be shorter at higher energies, this would weaken the correlations between amplitude and width due to time dilation.
We have seen earlier that the *intervals between the peak times* of the two highest amplitude pulses in each burst do not appear to increase or decrease with energy, so that cosmological redshift of photon energies should not affect these intervals. As shown in the upper panels of Figure 9 and columns (b) of Table 3, these intervals also show a significant inverse correlation with the amplitudes of the highest amplitude pulses in the actual bursts, so they are shorter for brighter bursts.
Such a trend does not seem to be present in the fits to the simulated data, and is not present in the initial simulated data by design. (See Figure 9, lower panels, and the bottom two rows of Table 3, columns (b).) The distributions are very similar for the simulated bursts and for the fits to the simulations, although the fits to simulations tend to miss points when both the peak amplitudes and the intervals between the two highest amplitude pulses are small. Therefore, it appears that the correlations observed in the fits to actual bursts observed by BATSE are not caused by selection effects in the fitting procedure, but may arise from cosmological time dilation or from intrinsic properties of the bursts. An early study of time dilation using the intervals between pulses found inconsistent results (Neubauer & Schaefer, 1996), but a number of later studies have found evidence of time dilation (Norris et al., 1996a; Deng & Schaefer, 1998a, b) consistent with our results.
To see if some kind of correlation is present among pulses within bursts, we compare pulse amplitudes with time intervals between pulses within bursts as follows: for each burst time profile consisting of three or more pulses, we order the individual pulses by decreasing pulse amplitude. Then we look for correlations between the amplitude of each pulse and the absolute value of the interval between it and the pulse with the next lower amplitude. The results are shown in Table 5. There appears to be a more frequent occurrence of inverse correlations than positive correlations between pulse amplitudes and intervals between pulses within bursts in the BATSE data, but this is statistically insignificant in all energy channels except possibly channel 1. This table also shows that the fitting procedure does not introduce any significant bias.
Finally, it should also be noted that the fitted power law indices for highest pulse amplitude versus width of the highest amplitude pulse are smaller than -1, which is inconsistent with purely cosmological effects. For a given variation in the highest pulse amplitude, the corresponding variation in pulse width is too great to be accounted for by only cosmological time dilation. We have also seen that within individual bursts, higher amplitude pulses have a strong tendency to be narrower, which must result from intrinsic properties of the GRB sources themselves. It seems likely that the observed correlation between the highest pulse amplitude and the width of the highest pulses in each burst could result from a combination of cosmological and non-cosmological effects.
One of the possible intrinsic effects that could contribute to the inverse correlations of pulse widths with pulse amplitudes is that the total energy in a burst, or within individual pulses, might tend to fall within a limited range, or might have an upper limit. This would be the case if, for example, the fluence of a burst were a better measure of distance than the peak flux. In the next section, we repeat the above tests using the fluence instead of peak flux as a measure of the strengths of bursts and pulses.
On the other hand, the power law indices for highest pulse amplitude versus the time interval between the peaks of the two highest pulses in each burst may be consistent with the expected results of cosmological time dilation alone. Furthermore, it seems likely that this correlation is less affected by intrinsic properties of bursters or by selection effects than the correlation between the highest pulse amplitude and the width of the same pulse in each burst. For example, if the range of radiated energy in entire bursts or in individual pulses, were limited by the production mechanism, or by selection effects, this would be far less likely to affect intervals between pulses than to affect pulse widths.
### 3.5 Integrated Luminosity as a Standard Candle
Petrosian & Lee (1996a) have suggested that the integrated luminosities of bursts, measured using either energies or photons, are likely to be better standard candles than their peak luminosities. This would be the case if the total energy output of bursters fall in a narrow range of values, and much of the variation in flux results from the broad range of burst durations. Petrosian & Lee (1996b); Lee & Petrosian (1997) have also found that the energy fluences of bursts and their durations show a positive correlation, which is the opposite of what cosmological time dilation should cause. In what follows we carry out similar tests for bursts and for pulses within individual bursts. We shall see that the count fluences of bursts and pulse widths show a positive correlation, while the count fluences of bursts and time intervals between pulses show no correlation, and neither of these effects can arise from cosmological effects. However, determining the significance of some of these correlations is difficult because the simulated bursts were generated with no correlations between pulse width and pulse amplitude, and therefore have a positive correlation between pulse width and pulse count fluence.
In Figure 10, we show that the pulse widths of the highest amplitude pulses have positive correlations with the total count fluences of each fit that appear to be significant in all energy channels except perhaps in channel 3. (See also Table 6, columns (a).) The positive correlation appears somewhat stronger in the fits to simulations than in the simulations. As mentioned above, this makes the interpretation of this result difficult.
Correlations between pulse width and pulse count fluence *within* bursts do not appear to have been studied before. In Table 7, we show the distribution and some moments of the power law index $`\beta `$, which is obtained from linear fits to the logarithms of the fluence and widths of pulses in individual bursts. As evident, a significant majority of fits in all energy channels and in the simulations show strong positive correlations between pulse width and pulse count fluence within individual bursts. (See Table 7 and the upper panels of Figure 11.) Pulse count fluences most often vary as a large positive power of the pulse width. (Because more bursts have $`|\beta |>1`$ than $`|\beta |<1`$, taking the median of the reciprocal of $`\beta `$ is more appropriate.)
The last two rows of Table 7 and the lower panels of Figure 11 show that the correlations in the fits to simulations are similar, though somewhat weaker in the original simulations, so that the observed correlation for the BATSE bursts is probably not a result of the fitting procedure.
Figure 12 shows that there are no significant correlations between the errors in the fitted count fluences and the fitted pulse widths for simulated bursts consisting of a single pulse in both the simulation and the fit. Therefore, the uncorrelated errors in the pulse count fluences and pulse widths would tend to smear out any existing correlations rather than to create correlations, which is what we have seen above.
The relation between the total count fluence and the time interval between the two highest amplitude pulses in the actual and simulated bursts is shown in Figure 13 and in columns (b) of Table 6. The two quantities have positive correlations in all energy channels in the actual BATSE bursts, as determined from the Spearman rank-order correlation coefficients. However, the correlation is statistically insignificant in all channels, except perhaps in channel 3.
The distributions of the total burst count fluence versus *the intervals between the peak times* of the two highest amplitude pulses in each burst are very similar for the simulated bursts and for the fits to the simulations, although the fits to simulations tend to miss points when the intervals between the two highest amplitude pulses are small. However, columns (b) of Table 6 show no significant correlation for either the simulations or the fits to the simulations. This indicates that any correlation that may be present in the BATSE bursts is intrinsic to the radiative process.
We can also compare the count fluences of individual pulses with the time intervals between pulses within bursts. The results, shown in Table 8, show no statistically significant correlations between these two quantities. The simulations and fits to simulations also show no statistically significant correlations.
In summary, all correlations between *pulse* count fluences and pulse widths are positive, and probably result from the simple fact that pulses of longer duration tend to contain more counts. The correlation between total *burst* count fluence and the width of the highest amplitude pulse in each burst is probably a result of this correlation and the fact that the majority of the total count fluence of a burst is often contained in a single pulse. The cosmological effects have been overwhelmed by other effects.
It is not clear why there appears to be no correlation between total burst count fluence and the interval between the two highest pulses in each burst. One possibility is that most of the observed bursts are sufficiently far away that the count fluence varies very little with luminosity distance. However, this would place many bursts at redshifts of $`z>10`$, which seems unlikely given current evidence.
## 4 Other Correlations
### 4.1 Correlations Between Flux and Fluence
Since the count fluence of a pulse scales as the product of its amplitude and its width, and a factor involving the peakedness $`\nu `$, or equivalently, since the amplitude of a pulse scales as its count fluence divided by its width, again with a factor involving $`\nu `$, various selection effects could cause observed pulse amplitudes and widths to have an inverse correlation or cause observed pulse count fluences and widths to have a positive correlation.
Figure 14 and Table 9 show that there are no strong correlations between the amplitudes of the highest amplitude pulses and the total count fluences of the BATSE bursts, in any energy channel. This result is somewhat unexpected, because even in the absence of cosmological effects, we would expect both peak flux and total fluence to scale approximately as the inverse square of the luminosity distance to the sources (the effects of the time dilation factor $`1+z`$ are much smaller), and hence to have a positive correlation with each other. The results from our simulations are not helpful because the simulated bursts were also generated with a strong positive correlation between pulse amplitude and pulse count fluence. It appears that selection effects in the pulse-fitting procedure tend to weaken these positive correlations, shown in the last two rows of Table 9, when we compare the simulations with the fits to the simulations. The absence of correlation in the actual bursts may indicate that the intrinsic range of the *effective durations*, *i.e.* the total fluences divided by the peak fluxes (Lee & Petrosian, 1997), is large enough to smear out distance effects expected in the distribution of fluences and peak fluxes. It also suggests that if one of the two brightness measures is a good indicator of distance, then the other cannot be, probably due to selection effects, or due to cosmological evolution of the sources.
However, when we consider the relation for pulses within individual bursts, we find that a significant majority of bursts in all energy channels show a positive correlation between pulse count fluence and amplitude within bursts. (See Table 10.) In every energy channel, the majority of bursts have pulse amplitudes varying as a small positive power $`\gamma `$ of the pulse count fluence within bursts. Most simulated bursts, as expected, show a positive correlation between pulse amplitude and pulse count fluence, but in the fits to the simulations, fewer bursts show a positive correlation. Therefore, the actual correlation in the BATSE bursts may have been weakened by selection effects in the pulse-fitting procedure.
Figure 15 shows an apparent positive correlation between the errors in the fitted pulse amplitudes and fitted count fluences for simulated bursts consisting of a single pulse in both the simulation and the fit. However, the Spearman rank-correlation coefficient shows no significant correlation between the two sets of errors. Therefore, the uncorrelated errors in the pulse amplitudes and pulse count fluences would tend to smear out any existing correlations rather than to create correlations, which is what we have seen above.
### 4.2 Correlations Between Pulse Amplitude and Pulse Asymmetry
It has been reported that when considering the averaged time profiles of bursts, the decay times from the peaks of bursts show an inverse correlation with peak flux, while the rise times to the peaks of bursts show a smaller inverse correlation or no variation at all with peak flux (Stern et al., 1997b, a; Litvak et al., 1998; Stern et al., 1999). Such a result could not come from cosmological time dilation, but would have to be caused by the burst production mechanism itself, or by some selection effect, perhaps resulting from the BATSE trigger criteria, which select for fast-rising bursts (Higdon & Lingenfelter, 1996) but are independent of burst decay times. It is possible that a similar effect could appear in the individual pulses comprising a burst, as a *positive* correlation between pulse amplitudes and pulse asymmetries as measured by the rise time to decay time ratios. Although there may be selection effects in the pulse-fitting procedure, most of these should affect both rise and decay times similarly, and therefore should not affect pulse asymmetry ratios.
For bursts consisting of a single pulse, the pulse rise and decay times are of course the rise and decay times for the entire burst. Figure 16 shows pulse asymmetries versus pulse amplitudes for these bursts. There do not seem to be any clear correlations in the actual BATSE bursts, but the range of pulse asymmetry ratios appears to be broader for lower amplitude bursts than for higher amplitude bursts. The latter effect could result from the lower signal-to-noise of lower amplitude pulses. The Spearman rank-order correlation coefficients shown in Table 11, columns (a), comparing pulse amplitudes and pulse asymmetries of single-pulse bursts essentially confirm this impression; the correlations for the actual BATSE bursts are very weak, and have different signs in the different energy channels. In the simulated bursts, there are clearly no correlations in either the initial simulations or in the fits to the simulations.
Although the properties of individual pulses in multiple-pulse bursts may be different from those of the entire bursts, it may still be useful to look for correlations between pulse amplitude and asymmetry for individual pulses in multiple-pulse bursts. For the highest amplitude pulses from each burst, plots of pulse asymmetry versus pulse amplitude are shown in Figure 17, which again show that pulse asymmetries span a larger range of values at lower amplitudes in the actual BATSE bursts. The Spearman rank-order correlation coefficients given in Table 11, columns (b), show marginally significant inverse correlations in energy channels 1 and 3, a strong inverse correlation in channel 2, and no correlation in channel 4. Again, the simulated bursts show no correlation at all.
Finally, we consider correlations between the amplitudes and asymmetries of pulses within bursts. Table 12 shows characteristics of the distributions of the power law indices $`\delta `$ obtained from fits to these quantities. There do not appear to be statistically significant correlations, except possibly in channel 3.
In summary, there is no clear evidence of any correlations between pulse amplitudes and pulse asymmetry, so that the variations of pulse rise and decay time with pulse amplitude do not appear to be significantly different.
## 5 Summary and Discussion
In this paper, we apply a pulse-fitting procedure to the TTS data from BATSE and determine the amplitudes, rise and decay times, and fluences of the pulses. We investigate the correlations between all of these parameters of pulses in individual bursts and among different bursts. The former gives a measure of correlations intrinsic to the energy and radiation generation in burst sources, while the latter are also affected by cosmological effects. Simulations are used to determine the biases of the pulse-fitting procedure.
If the peak luminosities of pulses or bursts are approximate standard candles, so that the peak fluxes would be good measures of distance, then we expect to find negative correlations between fluxes and timescales. We do find inverse correlations between the highest pulse amplitude within a burst and two different timescales, the width of the highest amplitude pulse and the time interval between the two highest amplitude pulses. The former correlation, between pulse amplitude and pulse width, which is expected from cosmological time dilation effects, is nevertheless not consistent with purely cosmological effects, but must be at least partially influenced by non-cosmological effects. These non-cosmological effects may include intrinsic properties of the burst sources, or selection effects due to the BATSE triggering procedure, but do not appear to be affected by the pulse-fitting procedure. Our study indicates that the latter correlation, between pulse amplitude and time intervals between pulses, may be less influenced by non-cosmological effects. The inverse correlation observed between pulse amplitude and pulse width within bursts results in part from selection effects in the pulse-fitting procedure, but also appears to result in part from intrinsic properties of the burst sources.
If the total radiated energies of bursts are approximate standard candles, so that the burst fluences would be good measures of distance, then we expect to find negative correlations between fluences and timescales. We find instead a *positive* correlation between the total burst count fluence and the width of the highest amplitude pulse, but no correlation with the time interval between the two highest amplitude pulses. The former correlation indicates that non-cosmological effects are stronger than any cosmological effects. This is supported by the positive correlation between pulse amplitude and pulse count fluence within bursts. However, it is not clear why total burst count fluence and time intervals between pulses show no correlation.
It is natural to expect that the peak flux of bursts and the total count fluence of bursts should both decrease essentially the same way (except for a factor of $`1+z`$) as the distance to the burst sources increases. This would suggest that there should be positive correlations between the peak flux of bursts and the total count fluence of bursts. Strangely, the highest pulse amplitude and the total count fluence of bursts appear to have no statistically significant correlation with each other, implying that the two measures of brightness cannot both be good standard candles; at least one, or more probably both, are poor measures of distance.
There do not appear to be any statistically significant correlations between pulse amplitude and pulse asymmetry, whether the comparison is i) of all pulses in all bursts combined, ii) of only the highest pulse in each burst, iii) of only the single-pulse bursts, or iv) of different pulses within multiple-pulse bursts. This implies that the differences between the variations of pulse rise and decay time with pulse amplitude are statistically insignificant, and both rise times and decay times tend to decrease as pulse amplitude increases.
We thank Jeffrey Scargle and Jay Norris for many useful discussions. This work was supported in part by Department of Energy contract DE-AC03-76SF00515.
# Hadronization in the Chromodielectric Model
## 1 Introduction
Quantum chromodynamics (QCD) is the widely accepted theory to describe the strong interactions between hadrons. This theory shows the well-known behavior of *asymptotic freedom*. Furthermore, lattice calculations show a phase transition from the hadronic world to a system of freely moving quarks and gluons, the Quark-Gluon Plasma (QGP). Heavy-ion experiments at CERN-SPS, at the recently started BNL-RHIC, and at the yet-to-come CERN-LHC, with energies of $\sqrt{s}=20, 200, 5500$ A GeV respectively, are designed to study the eventually formed QGP. But there is still a lack of a dynamical description, derived from first principles of QCD, of both the transition from hadrons to quarks and gluons and vice versa. In this talk I present a classical, molecular-dynamical model which contains explicitly the phenomenon of confinement and a dynamical mechanism for the formation of hadrons out of a large system of quarks and gluons.
## 2 The chromodielectric model (CDM)
We start with the Lagrange density originally introduced by Friedberg and Lee and intensively studied by several followers.
$$\mathcal{L} = \overline{q}\,(i\gamma_\mu D^\mu - m)\,q - \frac{1}{4}\kappa(\sigma)F_{\mu\nu}^a F^{\mu\nu a} + \frac{1}{2}(\partial_\mu\sigma)(\partial^\mu\sigma) - U(\sigma). \qquad (1)$$
The first term describes quarks, where $q$ is a Dirac spinor with color, spinor and flavor indices being suppressed and $m$ is a mass matrix for the different quark flavors. $D^\mu = \partial^\mu + igA^\mu$ is the covariant derivative, describing the minimal coupling to the gauge fields $A^\mu = \frac{\lambda^a}{2}A^{\mu a}$ with coupling constant $g$ and $\lambda^a$, $a=1\ldots 8$, being the Gell-Mann matrices.
The second term is the kinetic term for the gauge field in a medium, mediated via a dielectric function $\kappa(\sigma)$. The color field tensor is given by $F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a - gf^{abc}A_\mu^b A_\nu^c$, where the $f^{abc}$ are the structure constants of SU(3)<sub>c</sub> and one has $-\frac{1}{4}F_{\mu\nu}^a F^{\mu\nu a} = \frac{1}{2}(\vec{E}^a\cdot\vec{E}^a - \vec{B}^a\cdot\vec{B}^a)$. $\vec{E}$ and $\vec{B}$ are the color-electric and -magnetic fields.
The last two terms introduce a scalar field $\sigma$ with a quartic scalar potential $U(\sigma)$. This scalar field is designed to mimic the long-range behavior of non-perturbative QCD and is therefore purely classical. It acts like a medium as in classical electrodynamics, but with a dielectric constant $\kappa(\sigma)<1$. The potential $U(\sigma)$ is adjusted to have a global minimum at the vacuum expectation value (VEV) $\sigma=\sigma_v$ and a local minimum at $\sigma=0$. In the absence of color fields, the scalar field takes on its VEV everywhere.
### 2.1 Confinement mechanism
The mechanism of confinement in the CDM is based on an interplay of the color fields and the $\sigma$ field via the dielectric function $\kappa(\sigma)$ and the scalar potential, which are shown schematically in fig. 2. From the Lagrangian (1) one gets the equations of motion for the color field.
$$[D_\mu, \kappa(\sigma)F^{\mu\nu}] = j^\nu, \qquad (2)$$
where $j^\nu = g\,\overline{q}\gamma^\nu\lambda^a q\,\lambda^a/2$ is the color current of the quarks. (Note that this current is not conserved, due to the color-carrying gluons.) In an Abelian approximation the equations for the color fields reduce to the usual Maxwell equations $\partial_\mu(\kappa(\sigma)F^{\mu\nu a}) = j^{\nu a}$. The crucial point of the model is the choice of $\kappa(\sigma)$, which is supposed to contain all non-Abelian effects. It is unity in the absence of the $\sigma$ field and it vanishes when the scalar field takes on its VEV $\sigma_v$. If one considers a color charge distribution $\rho$ with vanishing total color projections (in the Abelian directions 3 and 8), which we call a white cluster, a color field is produced due to the Gauss law $\vec{\nabla}\cdot(\kappa(\sigma)\vec{E}^a) = \rho^a$. The field is only allowed where $\kappa(\sigma)>0$ ($\sigma<\sigma_v$). To suppress the scalar field costs an energy $U(0)=B$, and the vacuum exerts a pressure on the color field. If the transition from $\kappa=1$ to $\kappa=0$ is a rapid one, then one is left with a well-defined spatial region where the scalar field nearly vanishes and the color field is non-zero. All color field lines start and end on charges inside this volume, and therefore there are no Van der Waals-like interactions with other white clusters except for very short-ranged $\sigma$-effects. For that reason, if eventually two white subclusters form inside the cluster, they can separate from each other.
In addition, if the charge distribution has a non-vanishing total charge, the field energy deposited in this configuration is divergent, i.e., such configurations cannot be created.
### 2.2 Model equations
As the confinement mechanism in our model depends solely on the specific choice of the dielectric function $\kappa(\sigma)$ and the scalar potential $U(\sigma)$, we can neglect the spin dependences in the quark Lagrangian. Instead we replace it with a Lagrangian for classical, spinless charged particles that are coupled to the classical color field.
$$\mathcal{L}_p = -\sum_k m_k\sqrt{1-\dot{\vec{x}}_k^2}\;\rho_N(\vec{x}-\vec{x}_k) - j^{\mu a}A_\mu^a \qquad (3)$$

$$j^{\mu a} = g\sum_k q_k^a u_k^\mu\,\rho_N(\vec{x}-\vec{x}_k) \qquad (4)$$
with the color charge $q_k^a$ and the 4-velocity $u_k^\mu$. In our numerical realization we deal with extended particles with a fixed Gaussian distribution
$$\rho_N(\vec{x}-\vec{x}_k) = \left(2\pi r_0^2\right)^{-3/2} e^{-(\vec{x}-\vec{x}_k)^2/(2r_0^2)}, \qquad (5)$$

where $\sqrt{\langle\vec{x}^2\rangle} = \sqrt{3}\,r_0 = 0.7$ fm is chosen to fit the radius of the nucleon.
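As a quick numerical sanity check of Eq. (5), the following sketch (plain Python with NumPy; the box size and grid spacing are arbitrary choices, not values from the model) integrates the Gaussian form factor and verifies both its normalization and the quoted rms radius of 0.7 fm:

```python
import numpy as np

# Gaussian form factor rho_N of Eq. (5); r0 is fixed by sqrt(<x^2>) = sqrt(3)*r0 = 0.7 fm.
r0 = 0.7 / np.sqrt(3.0)  # fm

def rho_N(x, xk):
    """Normalized Gaussian density centered at the particle position xk."""
    d2 = np.sum((x - xk) ** 2, axis=-1)
    return (2.0 * np.pi * r0**2) ** (-1.5) * np.exp(-d2 / (2.0 * r0**2))

h = 0.05                      # fm, grid spacing (arbitrary)
g = np.arange(-2.0, 2.0, h)   # fm, 1d grid (arbitrary box)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
rho = rho_N(np.stack([X, Y, Z], axis=-1), np.zeros(3))
print("integral of rho_N:", rho.sum() * h**3)                                    # ~ 1
print("sqrt(<x^2>) [fm] :", np.sqrt((rho * (X**2 + Y**2 + Z**2)).sum() * h**3))  # ~ 0.7
```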
The color current (4) is consistent with the originally derived equations of motion (2) only in an Abelian sub-group of SU(3)<sub>c</sub>. This is related to the fact that the gluons in QCD carry color as well, whereas the color current carried by the gluons vanishes in the Abelian approximation.
To further simplify the numerical realization we neglect the magnetic fields $\vec{B}^a$. This is exact for static problems and for string-like yoyo excitations. The two decoupled sets of Maxwell equations reduce basically to the Gauss law for each field. To summarize, we now have the following equations of motion for the particles, the (electric) color field and the $\sigma$ field:
$$\dot{\vec{x}}_k = \frac{\vec{p}_k}{E_k} \qquad (6)$$

$$\dot{\vec{p}}_k = -q_k^a\int d^3x\left(\vec{\nabla}\varphi^a(\vec{x})\right)\rho_N(\vec{x}-\vec{x}_k) \qquad (7)$$

$$\vec{\nabla}\cdot\left(\kappa(\sigma)\vec{\nabla}\varphi^a(\vec{x})\right) = -\rho^a(\vec{x}) \qquad (8)$$

$$\frac{\partial^2\sigma}{\partial t^2} + U'(\sigma) = \nabla^2\sigma + \frac{1}{2}\kappa'(\sigma)\left(\vec{\nabla}\varphi^a(\vec{x})\right)\cdot\left(\vec{\nabla}\varphi^a(\vec{x})\right), \qquad (9)$$
where the prime denotes differentiation with respect to $\sigma$ and $\varphi^a(\vec{x})$ is the electric potential, which satisfies $\vec{E}^a = -\vec{\nabla}\varphi^a(\vec{x})$, $a\in\{3,8\}$. We choose for the scalar potential $U(\sigma) = B + a\sigma^2 + b\sigma^3 + c\sigma^4$ with $B=(150\,\text{MeV})^4$, $a=(489.9\,\text{MeV})^2$, $b=-15901\,\text{MeV}$, $c=163.1$ and for the VEV $\sigma_v=61.1\,\text{MeV}$. The dielectric function is $\kappa(\sigma) = \left(\exp\left(\alpha(\frac{\sigma}{\sigma_v}-\beta)\right)+1\right)^{-1}$, where $\alpha=7$ and $\beta=0.4$. The coupling constant is chosen to reproduce the string tension in a $q\bar{q}$ configuration and takes on the value $\alpha_S = g^2/(4\pi) = 2$.
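To make these parameter choices concrete, the short sketch below tabulates $U(\sigma)$ and $\kappa(\sigma)$ and confirms the two features required by the confinement mechanism: a local minimum at $\sigma=0$ with $U(0)=B$ and a global minimum near $\sigma_v$, where $\kappa$ is strongly suppressed. (The negative sign of $b$ is our reconstruction, as noted above.)

```python
import numpy as np

# U(sigma) and kappa(sigma) with the quoted parameters (energies in MeV).
B, a, b, c = 150.0**4, 489.9**2, -15901.0, 163.1
sigma_v, alpha, beta = 61.1, 7.0, 0.4

def U(s):
    return B + a * s**2 + b * s**3 + c * s**4

def kappa(s):
    return 1.0 / (np.exp(alpha * (s / sigma_v - beta)) + 1.0)

s = np.linspace(-10.0, 80.0, 20001)
print("U(0) = B        :", U(0.0))                     # local minimum, the bag constant
print("global min near :", s[np.argmin(U(s))], "MeV")  # ~ sigma_v = 61.1 MeV
print("kappa(0)        :", kappa(0.0))                 # ~ 1, perturbative vacuum
print("kappa(sigma_v)  :", kappa(sigma_v))             # ~ 0, confining vacuum
```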
### 2.3 Classification of particles
In QCD the quarks and gluons are represented as triplet and octet states of SU(3)<sub>c</sub> respectively. In our classical simulation we assign classical charges $q^a$ to the quarks. These charges are the diagonal entries of the $\lambda^a$, $a\in\{3,8\}$. Due to our approximation we neglect the non-Abelian part of the color fields. Instead we treat these 6 gluon fields as particles in the same formalism as the quarks, except that these particle-gluons carry both a color and an anti-color. The corresponding charges for quarks, anti-quarks and gluons are depicted in fig. 2. E.g. the particle $r$ in that scheme is a quark of color *red* and color charges 1 and $1/\sqrt{3}$ with respect to the 3- and the 8-field respectively.
As we have mentioned in section 2.1, the dynamics of our model forces the charged particles into white clusters, and the separation into white subclusters is allowed. We regard as hadrons only white clusters which cannot be divided into smaller ones. It turns out that there is only a finite set of those *irreducible white clusters* (IWC). The IWCs consist either of a quark and an anti-quark or of three quarks (anti-quarks), both with some gluon admixture, or they consist only of gluons, and can therefore easily be interpreted as mesons, baryons and glueballs respectively.
For the particle masses we use constituent quark masses to fit the low-lying hadronic spectrum. As our model does not depend on isospin, we treat the $u$\- and the $d$-quark as degenerate particles with the same mass. Only the light pion, which is assumed to be a Goldstone boson of chiral symmetry breaking, does not fit in our constituent-mass scheme and thus we do not incorporate pions in our model. The quark masses are fixed to be $m_{u,d}=400$ MeV, $m_s=550$ MeV and $m_c=1500$ MeV. Thus the masses of the lowest hadronic states are simply the sum of the quark masses of the IWC. For the gluons we take a mass $m_g=700$ MeV to reproduce the lightest glueball mass of 1400 MeV.
## 3 Hadronization
To simulate the hadronization out of a QGP, we start with an ensemble of quarks and massive gluons, distributed homogeneously in a sphere of radius $R=4$ fm in real space and according to a Boltzmann distribution with initial temperature $T_0=160$ MeV in momentum space. The relative number of different particles is given through the distribution $N_i \propto d_i\exp(-m_i/T_0)\left(\frac{m_iT_0}{2\pi}\right)^{3/2}$, where $d_i$ is a (spin, isospin and color) degeneracy factor (a small numerical sketch of these weights follows Eq. (11) below). The colors are chosen randomly with the constraint of overall color neutrality. After solving the Gauss law for the color fields in the first time step, the system is driven by the equations of motion (6). Due to the initial momenta the particles tend to leave the system but are bound by the formation of color strings. In this way the particles are reorganized to form in a first step white clusters and finally only IWCs. This scenario is shown in fig. 3. The hadrons have invariant masses $M_{\text{iwc}}^2 = E^2 - \vec{P}^2$ with particle energies $E_i$ and momenta $\vec{p}_i$:
$$E = \sum_i E_i + \int d^3x\left(\frac{1}{2}\dot{\sigma}^2 + \frac{1}{2}(\vec{\nabla}\sigma)^2 + U(\sigma) + \frac{1}{2}\kappa(\sigma)\vec{E}^a\cdot\vec{E}^a\right) \qquad (10)$$

$$\vec{P} = \sum_i \vec{p}_i - \int d^3x\,\dot{\sigma}\vec{\nabla}\sigma. \qquad (11)$$
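The relative initial abundances implied by the Boltzmann weights given above can be tabulated directly; in the sketch below the degeneracy factors $d_i$ are illustrative assumptions (spin, flavor, particle/antiparticle and color counting), not values quoted in the paper:

```python
import numpy as np

T0 = 160.0  # MeV, initial temperature
# (mass [MeV], assumed degeneracy d_i); masses as quoted in the text
species = {
    "u/d quarks": (400.0, 2 * 2 * 2 * 3),   # spin x flavor x q/qbar x color (assumption)
    "s quarks":   (550.0, 2 * 1 * 2 * 3),
    "c quarks":   (1500.0, 2 * 1 * 2 * 3),
    "gluons":     (700.0, 6),               # the six color/anti-color states (assumption)
}

def weight(m, d):
    # Boltzmann weight N_i ~ d_i exp(-m_i/T0) (m_i T0 / 2 pi)^(3/2)
    return d * np.exp(-m / T0) * (m * T0 / (2.0 * np.pi)) ** 1.5

w = {k: weight(m, d) for k, (m, d) in species.items()}
tot = sum(w.values())
for k, v in w.items():
    print(f"{k:11s} relative abundance {v / tot:.3f}")
```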
The resulting mass distribution is shown in fig. 4. The curves are fits to a Hagedorn distribution $dn/dm \propto m^{-a+3/2}\exp(-m(1/T - 1/T_h))$ and to an inverse power-law distribution $dn/dm \propto m^{-\tau}$. If we assume $T_h=160$ MeV and $a=3$ or $a=3/2$ in the Hagedorn case, we get a hadronic temperature $T=146$ MeV and $T=125$ MeV respectively.
# Ideally Efficient Irreversible Molecular Gears
## Abstract
Typical man-made locomotive devices use reversible gears, such as cranks, for transforming reciprocating motion into directed motion. Such gears are holonomic and have a transduction efficiency of unity. On the other hand, a typical gear of molecular motors is a ratchet rectifier, which is irreversible. We discuss which properties of the rectifier most influence the transduction efficiency and show that an appliance which locks under backward force can achieve an energetic efficiency of unity, without approaching reversibility. A prototype device based on the ratchet principle is discussed.
PACS No: 05.40.-a; 05.70.Ln; 87.10.+e
Man-made engines powering our cars, trains and ships, and molecular motors powering cells and subcellular units are energy transducers designed to transform chemical energy, stored in the form of fuel and oxygen, into mechanical work. Both can be considered as consisting of the working unit(s) and of a gear. A gear is used in order to transform the oscillatory motion of a piston (or a kinesin molecule) $x(t)$ into a continuous directed motion $X(t)\simeq vt$, or into continuous rotation $\phi(t)\simeq\omega t$. A typical gear used for technical applications is a crank-and-shaft mechanism. This gear is reversible, since the continuous rotation of the crank's axle causes the oscillatory piston's motion. The relation between $x$ and $\phi$ corresponds to a periodic, locally invertible function. The transformation of oscillations into directed motion implies symmetry breaking, determining the direction of motion. Cranks use spontaneous symmetry breaking: here both rotation directions are possible; the actual one is determined by initial conditions. The onset of motion is hard: too small oscillations cannot be transformed into a continuous rotation. Moreover, the holonomic nature of the gearing transformation implies synchronization of the working units, if several of them are used. Molecular motors, on the other hand, use rectifiers (such as a ratchet-and-pawl system) in which the spatial symmetry is lacking from the very beginning. Rectifiers are irreversible gears, as clearly illustrated by a common electric appliance: a diode rectifier transforms an alternating current into a direct one, but, being fed with a direct current, it does not produce an alternating one, only heat. Rectification has significant advantages compared to holonomic gearing. Thus, the soft onset of the motion allows for easy control at small velocities, and the asynchronous mode of operation is of great virtue in nanoscale cellular systems, since the synchronization of molecular-level reaction events (having stochastic, Poissonian character) is a problematic task. This property is often referred to as the ability to rectify noise.
The quality of a gear can be characterized by its energetic efficiency, i.e., by the quotient between the input energy and the useful work performed, so that the question of the energetics of gearing has recently received much attention within different theoretical frameworks. The energetic efficiency of a holonomic gear is unity, and the Second Law of thermodynamics implies that the energetic efficiency of any other isothermal gear cannot exceed this limit. On the other hand, the typical efficiencies of prototype ratchets (such as the rocked system of Refs. , transporting particles against a constant outer potential due to the work of an additive oscillating field) in the quasistatic regime (which guarantees the no-loss condition for typical thermodynamic appliances, Ref. ) are so poor that one wonders why Nature did not look for another mechanism to do the work. The exception is one of the systems ("system b") discussed in Ref. , which is essentially a synchronized, quasiequilibrium motor. In what follows we analyze in detail the thermodynamics of rectification and show that prototypical ratchet devices lack an important property of effective rectifiers, namely backward locking (known from common experience with the macroscopic ratchet-and-pawl mechanism). As we proceed to show, an ideal rectifier can perform as well as a crank, and moreover a minor variation of a simple ratchet rectifier can produce a gear whose performance is not too far from the ideal one.
Let us discuss the work produced by an isothermal system under changes of outer conditions. The mean energy of the system is given by $E = \sum_i e_ip_i$, where $e_i$ is the energy of a (micro)state $i$ and $p_i$ is the corresponding probability (occupation number). The energy change is then given by
$$dE = \sum_i de_i\,p_i + \sum_i e_i\,dp_i. \qquad (1)$$
In quasiequilibrium Eq.(1) corresponds to the form $`dE=\delta A+\delta Q`$ of the First Law of thermodynamics. Out of equilibrium, the first term still corresponds to the work of outer forces, but the second one shows some new, typically nonequilibrium, aspects.
Let us discuss a case when $i$ can be parametrized by continuous phase space coordinates $\mathbf{r} = (\mathbf{x},\dot{\mathbf{x}})$. In an overdamped situation (typical for biological systems) the kinetic degrees of freedom decouple from the spatial ones, $p(\mathbf{r}) = p(\mathbf{x})p(\dot{\mathbf{x}})$, with $p(\dot{\mathbf{x}})$ being the equilibrium Maxwell distribution, see Ref. . Thus we can fully concentrate on the coordinate space of the system. The energy changes due to the redistribution of occupation probabilities during time $dt$ can be expressed as:
$$\sum_i e_i\,dp_i = dt\int_V e(\mathbf{r})\frac{dp(\mathbf{r})}{dt}\,d\mathbf{r} = dt\left[\int_V \mathbf{j}(\mathbf{x})\cdot\mathrm{grad}\,U(\mathbf{x})\,dV - \oint_{\partial V} U(\mathbf{x})\,\mathbf{j}(\mathbf{x})\cdot d\boldsymbol{\sigma}\right] \qquad (2)$$
where the continuity equation $dp(\mathbf{x})/dt + \mathrm{div}\,\mathbf{j}(\mathbf{x}) = 0$ in coordinate space is used. Here $d\boldsymbol{\sigma}$ denotes the surface element of the system's outer boundary. The first term represents the heat absorbed from the bath per unit time and is equal to the Joule heat taken with an opposite sign. The second, surface term describes the work (produced within the system's volume) of the currents which are generated outside of the system. The energy balance in the system reads: $dE/dt = P_F + P_I + q$, where $P_F$ is the power of the outer forces, $P_I$ is the power of the outer currents, and $q$ is the heat absorbed by the system from the heat bath per unit time. For a device acting periodically or under a stochastic force with zero mean, $\overline{dE/dt} = 0$, so that $\overline{P_F} + \overline{P_I} + \overline{q} = 0$. Depending on the particular arrangement, the input work and the useful work of a gear can be differently distributed between $P_F$ and $P_I$. On the other hand the mean heat is always dissipated, $\overline{q} < 0$.
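For the reader's convenience, the integration-by-parts step behind Eq. (2) can be spelled out explicitly; in the overdamped reduction $e(\mathbf{r})$ reduces to the potential $U(\mathbf{x})$, and with the continuity equation and the divergence theorem one obtains

$$\sum_i e_i\,dp_i = dt\int_V U(\mathbf{x})\,\frac{dp(\mathbf{x})}{dt}\,dV = -dt\int_V U(\mathbf{x})\,\mathrm{div}\,\mathbf{j}(\mathbf{x})\,dV = dt\left[\int_V \mathbf{j}\cdot\mathrm{grad}\,U\,dV - \oint_{\partial V} U\,\mathbf{j}\cdot d\boldsymbol{\sigma}\right],$$

since $\mathrm{div}(U\mathbf{j}) = U\,\mathrm{div}\,\mathbf{j} + \mathbf{j}\cdot\mathrm{grad}\,U$.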
As an example let us consider a typical electrical arrangement consisting of an outer a.c. source of voltage $U_F(t)$, of a rectifier, and of an accumulator (maintaining a constant voltage $\Delta U$) switched in series, see the insert in Fig. 1. If a thermodynamic appliance achieves ideal efficiency, it typically achieves it in the quasistatic regime, since a finite-velocity mode of operation is inevitably connected with losses, Ref. . Confining ourselves to a quasistatic situation, we can describe the rectifier by a volt-ampere characteristic (load-current characteristic, LCC) $I(t) = I(U(t))$: the state of the whole system is characterized by the current $I(t)$ being a function of $U$, the potential difference at the rectifier. The useful work (charging the battery) is produced by the outer currents flowing against the battery's voltage, so that its value per unit time is $P = -P_I = -\Delta U\,I(t)$, and the Joule heat $Q = -q = U(t)I(t)$ is uselessly dissipated. The energy balance discussed before corresponds to Kirchhoff's law $U(t) = U_F(t) + \Delta U$. The efficiency of a rectifying device is given by: $\eta = \overline{P}/\overline{P_F} = \overline{P}/(\overline{P}+\overline{Q})$. Hence,
$$\eta = -\overline{I(t)}\,\Delta U/\overline{I(t)U_F(t)}. \qquad (3)$$
Note that Eq. (3) is valid for any one-dimensional rectifying device where the energy input takes place through the work of the outer forces, the useful work is produced against the constant field (by pumping particles uphill) and the Joule heat is dissipated, cf. Refs. . We get:
$$\eta = -\overline{I(\Delta U + U_F(t))}\,\Delta U/\overline{I(\Delta U + U_F(t))\,U_F(t)} \qquad (4)$$
Applying Eq.(4) to a system rectifying sinusoidal outer field $`U_F(t)=U_0\mathrm{sin}\omega t`$ one gets after the change of variable $`x=\mathrm{sin}\omega t`$:
$$\eta = -\frac{\int_{-1}^{1}dx\,\xi\,I\left(U_0(x+\xi)\right)/\sqrt{1-x^2}}{\int_{-1}^{1}dx\,x\,I\left(U_0(x+\xi)\right)/\sqrt{1-x^2}}, \qquad (5)$$
where $`\xi =\mathrm{\Delta }U/U_0`$. In order to understand what property of the system is important for achieving high efficiencies let us discuss a hypothetical appliance with a piecewise-linear LCC
$$I(U) = \begin{cases} g_+U & \text{for } U>0, \\ g_-U & \text{for } U<0, \end{cases} \qquad (6)$$
for which Eq. (5) can easily be evaluated analytically. The behavior of $\eta$ as a function of the outer potential $\Delta U$ is shown in Fig. 1 for different values of the backward conductivity $g_-$. The larger the backward resistance, the larger the maximal efficiency achieved. For $g_-\to 0$ the maximal efficiency of the gear tends to 1, and is attained for $\Delta U = -U_0$. In this case the rectifier is always switched in its backward direction (locked), so that the stalling case is essentially a no-current one. Thus, if for an irreversible mode of operation the stalling condition corresponds to the vanishing of the currents, the losses are suppressed and the ideal efficiency is reached. This finding can be compared with the results of Ref. , where the Carnot efficiency is achieved by a heat engine built of two ideal diode rectifiers at different temperatures.
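This trend is easy to reproduce numerically. The sketch below (plain Python/NumPy; the grid resolution and scanned $\Delta U$ values are arbitrary, and it uses the sign conventions reconstructed above, with the battery charged for $\Delta U<0$) evaluates Eq. (5) for the piecewise-linear LCC of Eq. (6):

```python
import numpy as np

def eta(dU, U0, lcc, n=20001):
    # Efficiency of Eq. (5) by quadrature; x = sin(theta) absorbs the
    # 1/sqrt(1-x^2) weight of the sinusoidal driving.
    th = np.linspace(-np.pi / 2.0, np.pi / 2.0, n)
    x = np.sin(th)
    I = lcc(U0 * (x + dU / U0))
    return -(dU / U0) * np.sum(I) / np.sum(x * I)

def make_lcc(g_plus, g_minus):
    # Piecewise-linear rectifier of Eq. (6)
    return lambda U: np.where(U > 0.0, g_plus * U, g_minus * U)

U0 = 1.0
for gm in (1e-2, 1e-3, 1e-4, 0.0):
    dUs = np.linspace(-0.99, -0.01, 99) * U0   # charging branch, Delta U < 0
    effs = [eta(d, U0, make_lcc(1.0, gm)) for d in dUs]
    k = int(np.argmax(effs))
    print(f"g_- = {gm:7.0e}: max eta = {effs[k]:.3f} at Delta U = {dUs[k]:+.2f}")
```

As expected, the maximal efficiency approaches unity only when the backward conductivity vanishes, with the maximum moving toward $\Delta U = -U_0$.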
Let us now turn to another question: how to build an appliance based on the ratchet principle whose efficiency tends to unity under idealized conditions. We confine ourselves to quasistatically operating systems as the only candidates for potentially ideal performance. The rocked one-dimensional ratchet performs badly, because its nonlinearity is too weak. The LCC of a ratchet rectifier can be obtained by using an adiabatic solution, Refs. , and leads to practically linear behavior at larger voltages of both signs, thus showing no locking behavior. As we have seen, locking is important for achieving high efficiencies. We also note that the phase space of a genuine ratchet-and-pawl appliance (showing locking) is at least two-dimensional.
Many typical ratchet appliances discussed in the literature can be considered as special cases of a generic two-dimensional ratchet model with two impenetrable saw-tooth boundaries in a homogeneous outer field (see Fig. 2a), which we call an oblique rectifier. Standard one-dimensional models correspond to the case when only a narrow current channel between the boundaries is present. The system works as a rocked ratchet if only the $x$-component of the field oscillates, and as a flashing appliance when only the $y$-component changes. The weakness of the corresponding nonlinearities is connected with the fact that the number of particles in the current channel does not depend on the field. On the other hand, a system with a broad channel in an oblique field can reach very high efficiencies due to locking.
Note that an oblique rectifier in a homogeneous outer field $\mathbf{F}$ can be described by an LCC: in a homogeneous field one has $Q = \int \mathbf{j}\cdot\mathbf{F}\,dV = F\int\left(\int j\,dy\right)dx$. Due to current conservation $\int j\,dy = I$, so that $Q = IF\int dx = IU$, where $U$ is the potential difference between the leftmost and the rightmost cross-sections of the system. On the other hand, $P = I\,\Delta U$ per definition.
In what follows we do not attempt to discuss in detail the LCC of a generic oblique appliance, and present only a qualitative discussion. In an oblique field the appliance can effectively be considered as consisting of an effective current channel and trapping pockets. Fig. 2b shows a cartoon corresponding to this strongly simplified picture. The current flowing through the channel is proportional to the $x$-component of the outer field $F$ and to the concentration $n(F)$ of particles in the channel, $I = \mu SFn(F)$. Here $S$ is the channel's cross-section and $\mu$ is the particles' mobility. The concentrations of the particles in the channel and at the opening of the neck connecting it with the pocket are equal; the particles' concentration in the pocket (having the typical energetic depth $\Delta u = Fd$, where $d$ is the distance from the neck to the pocket's body) is $n_p(F) = n\exp(-\Delta u/kT)$. Since the overall number of particles per rectifying unit is field-independent, one has $n(F)\left[\Omega_c + \Omega_p\exp(-Fd/kT)\right] = n_0(\Omega_c + \Omega_p)$, where $\Omega_c$ and $\Omega_p$ are the volumes of the channel and the pocket, respectively, and $n_0$ is the concentration in the absence of the outer field. From this the form of the LCC follows as:
$$I(U) = \frac{g_0U}{1 + a\exp(-U/U_T)}. \qquad (7)$$
Here $g_0 = \mu n_0S(1+\Omega_p/\Omega_c)/l$ is the zero-field conductivity, $U_T = kT/d$ is the characteristic field and $a = \Omega_p/\Omega_c$. For strong positive fields the behavior of the appliance is linear. For strong negative fields the particles get trapped in the pockets, and the current through the appliance decays exponentially. Thus the generic locking behavior shows up. Numerical evaluation of Eq. (5) using the LCC Eq. (7) leads to the results shown in Fig. 3. Here we plot $\eta(\Delta U)$ as a function of $\Delta U/U_0$, for the fixed values $g_0 = a = U_T = 1$ and for values of $U_0$ equal to 1, 3, 10 and 30. We note that the maximal efficiency grows with $U_0$ (for $U_0 = 30$ the maximal value of the efficiency exceeds 90%), and the position of this maximum shifts to the left, i.e., to values of $\Delta U$ approaching $-U_0$. The reason for the growth of the efficiency is the fact that the typical reverse differential resistance grows exponentially with the reverse voltage, so that in the limit of strong outer fields the efficiency tends to the ideal limit of 1.
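The same quadrature applied to Eq. (7) reproduces this behavior qualitatively; the minus sign in the exponential is our reconstruction (Ohmic for $U\gg0$, exponentially locked for $U\ll0$), and the scanned values are arbitrary:

```python
import numpy as np

g0, a, UT = 1.0, 1.0, 1.0  # zero-field conductivity, pocket/channel ratio, kT/d

def I_trap(U):
    # Trapping-rectifier LCC, Eq. (7)
    return g0 * U / (1.0 + a * np.exp(-U / UT))

def eta(dU, U0, n=20001):
    th = np.linspace(-np.pi / 2.0, np.pi / 2.0, n)
    x = np.sin(th)
    I = I_trap(U0 * (x + dU / U0))
    return -(dU / U0) * np.sum(I) / np.sum(x * I)

for U0 in (1.0, 3.0, 10.0, 30.0):
    dUs = np.linspace(-0.99, -0.01, 197) * U0
    effs = [eta(d, U0) for d in dUs]
    k = int(np.argmax(effs))
    print(f"U0 = {U0:4.0f}: max eta = {effs[k]:.2f} at Delta U/U0 = {dUs[k]/U0:+.2f}")
```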
In summary, irreversible gears have considerable advantages when compared with reversible ones. They allow for an asynchronous mode of operation, of great virtue for biological systems, since distinct reaction events can hardly be synchronized on molecular level. We have shown that irreversibility does not put any limits on the efficiency of energy transduction, i.e. that an ideal rectifying appliance can reach the efficiency of 1. The property important for the effective rectification is locking under backwards load. We show that small modifications of a generic ratchet system (an oblique rectifying appliance) lead to systems whose efficiency tends to this idealistic limit.
The author is indebted to Prof. A. Blumen and Prof. J. Klafter for fruitful discussions. Financial support by the Deutsche Forschungsgemeinschaft through the SFB 428 and by the Fonds der Chemischen Industrie is gratefully acknowledged.
Figure Captions
Fig. 1. The efficiency of a rectifier with the LCC of Eq. (6), switched according to the scheme shown in the insert. The battery is charged when $\Delta U<0$. The fat line corresponds to an ideal appliance with $g_-=0$. Three other curves correspond to $g_- = 10^{-4}$, $10^{-3}$ and $10^{-2}$, respectively.
Fig. 2. a) The oblique rectifying appliance, see text for details. When the outer field is strong enough the particles get trapped between the saw-teeth. The trapping potential is proportional to the outer field $F$. b) The cartoon of the appliance in a), used in our considerations: this simplified version consists of the current channel and the pockets, where the particles get trapped if the outer field points in the reverse direction.
Fig. 3. Efficiency of a trapping rectifier as a function of $\Delta U/U_0$ at different values of the outer field amplitude $U_0$. Note that at larger fields the maximal efficiency tends to unity due to locking.
# Post-glitch RXTE-PCA observations of the Vela pulsar
## 1 Introduction
We present observations of the Vela pulsar with the Proportional Counter Array (PCA) on the Rossi X-Ray Timing Explorer (RXTE). Our observations cover two distinct time-spans. The first part is very close to the glitch of 1996 October 13.394 UT (Flanagan 1996). It consists of three observations at one, four and nine days after the glitch. We analyzed these sets of data separately. The second series of observations was obtained in January 1997. All data sets of January 1997 were analyzed together. The exact dates of observations are given in Table 1.
We first performed a spectral analysis of our data, calculated the time-averaged flux for the different observations, and set upper limits on the flux change. Then, by using radio ephemerides, we detected the pulsations in the data and investigated the changes in pulse shape and pulse fraction. Finally, we compared our results with the theoretical expectations for the change in flux which might arise from glitch-induced energy dissipation in the neutron star.
Time averaged spectrum analysis is explained in section 2. The detected pulse shapes are presented in section 3. In section 4 we discuss the implications of our results.
## 2 Time Averaged Spectrum Analysis
The observation time-spans and total integration times are given in Table 1, together with the calculated model parameters and flux values. The analysis is carried out using FTOOLS 4.1.1 and XSPEC v10. Only the data coming from the first xenon layer were chosen, to increase the signal-to-noise ratio. The time intervals in which one or more of the five Proportional Counting Units (PCUs) are off, the elevation angle is less than 10 degrees, or the pointing offset is greater than 0.02 degrees were not included in the analysis, as recommended in the "Screening" section of "ABC of XTE" (RXTE GOF 1998). The background used is synthetic and is generated by the background estimator pcabackest. The background models are based on the rate of very large events, spacecraft activation, and cosmic X-ray emission. More information on background models can be found in Jahoda (1996). We have used the 2.2.1_v80 version of the response matrices. Although the matrices are not equally good for each PCU, to have good statistics we combined data coming from every PCU, contrary to the recommendation by Remillard (1997).
A comparison of background with the data led us to ignore the channels above 68, which approximately corresponds to the energy 25.7 keV (see Fig. 1.). The systematic errors were chosen to make the reduced $`\chi ^2`$ equal to unity in XSPEC. To have reasonable systematic errors we also had to ignore channels 0-7. The maximum energy for the seventh channel is 2.90 keV.
The hydrogen column density model used by XSPEC is valid for the energies 0.03-10 keV. Although this covers the ROSAT energy band (0.5-2.4 keV), the major portion of our spectrum (2.90-25.7 keV) falls outside this range. Therefore, we adopted the hydrogen column density $4\times10^{20}\ \text{atoms}\ \text{cm}^{-2}$ obtained in a ROSAT observation of the Vela pulsar (Ögelman et al. 1993), since at lower energies the spectral resolution of ROSAT is much better than that of the RXTE/PCA detectors.
Figures 2 and 3 are plots of the energy spectrum along with fitted models and residuals. The model parameters are given in Table 1. The quoted errors are for three sigma confidence levels. The calculated flux for each observation along with an upper and a lower limit are given in Table 1. The upper (lower) limits for the flux are calculated by setting the index of the power-law to the lower (upper) limits given by XSPEC, leaving the normalization of the power-law spectrum as the only free parameter, and refitting the spectrum.
We have also searched for a blackbody component to the spectrum in addition to the power-law, but this resulted in temperatures $\sim$10 times larger than what has been found by Ögelman et al. (1993), for all observations. Although the addition of a blackbody component improves the fit, the resulting temperature suggests that this is not physical but merely a result of an increase in the number of variables. This idea is supported by the fact that adding a bremsstrahlung component to the power-law, or changing the power-law to a broken power-law, gives a fit as good as the power-law plus blackbody combination.
## 3 Timing Analysis
The data analyzed consists of two parts. The earlier data sets extend between one and ten days after the glitch, where the post-glitch exponential relaxation of the pulse period prevails. The second data set, about three months after the glitch, does not display this rapid variation of the pulse frequency.
Finding a pulsation in the latter was straightforward. By using the Princeton ephemerides distributed with FTOOLS the pulse shape shown in Fig. 4(e) is obtained. This is a histogram of counts versus twice the phase, which is divided into 22 intervals. The histogram includes all photons detected in the first layer and in channels 8-68 inclusive. No filtering was done for elevation, offset or number of PCUs. Since background is synthetic it is not subtracted either.
Finding a pulse for earlier observations, which are very close to the glitch, proved to be difficult. There are two sets of ephemerides in the Princeton database that are relevant to these observations, and an additional one was provided by Claire Flanagan (private communication). The first two give the frequency, frequency derivative, and frequency second derivative, while the third one gives only frequency and frequency derivative. The time-spans covered by these ephemerides and the three observations in the earlier set are shown in Fig. 5. The reference epoch of the ephemerides are roughly 50370.7, 50372.0, and 50379.0 for Flanagan and Princeton ephemerides one and two, respectively. None of the ephemerides by itself gives a pulse for any of the three observations. Furthermore the ephemerides give different results for overlapping portions.
We therefore tried to combine the ephemerides. We interpolated the values of the frequency and frequency derivative by making a second order polynomial fit to the values given in the ephemerides. This yielded the expressions:
$$f = 0.2975280743\times10^{-18}\,t^2 - 0.16020626525\times10^{-10}\,t + 11.1962177095, \qquad (1)$$

$$\dot{f} = -0.242915\times10^{-23}\,t^2 + 0.242121\times10^{-17}\,t - 0.1622\times10^{-10}. \qquad (2)$$
The epoch for these expressions is the same as that of Flanagan's ephemeris. By using these values and not taking the higher derivatives into account, we calculated the phase for the arrival time of each photon. This method yielded reasonable pulse shapes for the first and second observations (see Fig. 4 (a) and (b)), but failed for the third one. The reference epoch of the second Princeton ephemeris is very near to the third observation; nevertheless, the use of the ephemeris, which represents an extended time-span of rapidly varying periods, fails to give a pulse shape by itself. We therefore tried another approach. We combined the frequencies, frequency derivatives, and frequency second derivatives given in the two Princeton ephemerides, made a fifth order polynomial fit, and threw away the fourth and fifth order terms. In this way we reduced the contribution of the second ephemeris. The final expression for the frequency is:
$$f = -9.68274932\times10^{-25}\,t^3 + 0.8\times10^{-18}\,t^2 - 1.59821\times10^{-11}\,t + 11.1962159427143. \qquad (3)$$
The epoch for this expression is the same as first Princeton ephemeris. The pulse shapes obtained for the second and third observations by this method are given in Fig. 4 (c) and (d).
## 4 Conclusions and Discussion
### 4.1 Time Averaged Spectrum
The power-law spectrum observed is in agreement with expectations deduced from previous observations of Vela at higher and lower energies (Ögelman et al. 1993; Kanbach et al. 1994; Strickman et al. 1996; Kuiper et al. 1998). At this part of the spectrum (2-20 keV), the contribution of the pulsar is very small compared to the contribution of the compact nebula surrounding it. As a result the pulse shapes have a very high DC level, as can be seen in Fig. 4.
The slightly higher residuals near 6 keV and lower residuals near 4 keV are not characteristics of observed sources, but are artifacts of PCA. This effect, which is a result of the L edge of Xenon, is reduced by the version of response matrices in use, but not completely removed.
Our main conclusion from the analysis is that the spectrum does not change from early post-glitch to late observations. It is a power-law with an index around 2 for all of the observations. The power-law index does not change significantly among the observations. The highest value calculated for the index is 2.107 and the lowest value is 2.009. This corresponds to a change of 5%, which is a fractional upper limit for the change of the power-law index during the observations.
The upper and lower limits of the flux calculated by the comparison explained in section 2 and presented in Table 1 are well within the range of systematic errors. We therefore adopt the systematic errors as the upper limits to any variation in flux.
There is seemingly a jump in the flux between 2-20 keV, from the first to the second observation. This observation is only four days away from the glitch. The pre-glitch temperature of the surface of the Vela pulsar is thought to be around 0.15 keV (Ögelman et al. 1993). Theoretical models (Van Riper et al. 1991; Umeda et al. 1993; Hirano et al. 1997) predict an increase by at most a factor of 8, which brings the temperature to 1.2 keV. Attempts to find a blackbody component in this observation did not give significantly different results from the other observations. This suggests that the observed flux changes may have little or nothing to do with changes in surface temperature.
Another possible interpretation is that there is an error in the analysis of this particular observation, possibly arising from the calculation of synthetic background. Vela is a faint source for PCA. An improved model in the estimation of background for faint sources has been released by the PCA Team in 1998. This model has been used throughout the calculations. There may be further improvements on the background models that could change the calculated flux. The presented flux is calculated by using the spectrum model, rather than by direct observation. Apart from this observation there is no apparent change in flux or count rate.
Treating the calculated fluxes as very high upper bounds to the Wien tail of possible blackbody radiation from the neutron star surface could in principle be used to rule out some of the models for the post-glitch thermal emission from the neutron stars. In practice this does not work since the surface temperature range of the Vela pulsar is far below the RXTE-PCA energies.
### 4.2 Timing Analysis and Pulse Shapes
The epoch of the second ephemeris taken from the Princeton database, 50379.0 MJD, is quite close to the third observation (see Fig. 5), but using the ephemeris alone for the observation does not give a pulse shape. This may be due to the existence of two distinct decay time scales of Vela, 3 days and 30 days, which were observed in all previous glitches and fall within the ranges of the ephemeris (Alpar et al. 1993). Our data are not good enough to determine any exponential decay time scales.
In view of the rapidly varying period at those epochs, the pulse shapes of the first part of the observations were obtained by a careful interpolation amounting to the construction of an ephemeris that can represent the rapid changes in the pulsar's timing parameters in this postglitch epoch. The phase difference of the two observed peaks in these shapes is the same as the phase difference in the second set of observations (in January 1997). This gives us some confidence in the resultant pulse shapes.
The pulse shapes obtained are not reliable for drawing conclusions on the changes of pulse shape or pulsed fraction, since both of these factors are sensitively dependent on ephemeris. This is best seen by comparing Fig. 4 (b) and (c). They belong to the same set of data but have obvious differences both in the pulse shape and pulsed fraction.
### 4.3 Possible Future Work
Extracting the contribution of the compact nebula from the spectrum may help to delineate effects of temperature changes on the neutron star surface. Although the pulsations are detected, they are not reliable enough to justify taking the off-peak photon counts as background to the peak photon counts to remove the effects coming from the DC signal. The field of view of the PCA detector is one degree (RXTE GOF 1996); consequently the compact nebula surrounding the Vela pulsar has a significant contribution to the observed spectrum. The images showing the emission from the pulsar and the sources around it, in particular the compact nebula, can be found in Markwardt (1998), Frail et al. (1997), Harnden et al. (1985), and Willmore et al. (1992).
When we divided the data from the second part of the observations into smaller time intervals, we observed that the pulse shape begins to disappear for data strings covering less than 30000 seconds. The exposure times of the data sets we used in this work are below 10000 seconds. This explains the uncertainty in pulse shapes and fractions. Future target-of-opportunity observations of the Vela pulsar by RXTE need to be allocated more observation time, and should contain observations made approximately 20 days after the glitch, since this is about the time that the surface temperature will reach its maximum according to theoretical models. Also, more detailed ephemerides fitting the post-glitch behavior of the pulsar are necessary to make deductions on changes in pulse shape.
Finally we note that the question of glitch-associated energy dissipation in the Vela pulsar has been addressed also with ROSAT observations. Comparison of observations at epochs before and after the glitch has not yielded stringent constraints on the glitch-related energy dissipation (Seward et al. 1999).
While this work was in preparation another analysis of RXTE/PCA observations of the Vela pulsar was published by Strickman et al. (1999). They have also detected pulsed emission and a power-law spectrum. Our analysis differs from theirs in two ways. Their phase-resolved spectra are obtained by taking "off-pulse" photons as background to "on-pulse" photons, whereas we calculated only time-averaged spectra. Another difference is that these authors used data coming from only the first xenon layer for energies below 8 keV, but included data coming from the other two layers for higher energies. In our analysis we used photons detected only in the first xenon layer. As a result of these differences, their power-law index is smaller than the value that we found.
###### Acknowledgements.
We thank Claire Flanagan for providing the ephemeris, Sally K. Goff for helping with the preparation of the manuscript, and an anonymous referee for useful comments. Some calculations in this paper were performed on the "tasman" computer at the METU Computer Center, which was made available by Çağrı Çöltekin. This analysis was made possible with the help and documentation provided by the RXTE-PCA team, for which we thank the members of the team, in particular Keith Jahoda. M.A.G., A.B., M.A.A. and H.B.Ö. acknowledge support from the Scientific and Technical Research Council of Turkey, TÜBİTAK, under grant TBAG-Ü 18. M.A.G. also acknowledges a scholarship provided by TÜBİTAK, and partial support from the National Science Foundation (DMR 91-20000) through the Science and Technology Center for Superconductivity. M.A.A. also acknowledges support from the Turkish Academy of Sciences.
# Properties of Radio-Selected Broad Absorption-Line Quasars from the FIRST Bright Quasar Survey
## 1. Introduction
Until recently, a search of the astronomical literature would have revealed that broad absorption lines (BALs) are seen in approximately 10% of optically selected quasars (Foltz et al. 1990; Weymann et al. 1991) and in exactly 0% of radio-loud quasars (Stocke et al. 1992). This dichotomy has puzzled astronomers for years. The BAL quasars can be divided into two classes, high-ionization and low-ionization, which are primarily defined by the presence of broad absorption by C IV $\lambda1549$ and Mg II $\lambda2800$, respectively. (Note that all low-ionization BAL quasars also show high-ionization absorption.) The high-ionization BAL (HiBAL) quasars are more common, including 10% of all optically selected quasars, while the rarer low-ionization BAL (LoBAL) quasars make up only 1% of optically selected quasars.
Prior to the FIRST survey there was only a single example of a LoBAL quasar whose spectrum shows strong absorption by metastable excited states of Fe II (Q 0059-2735, Hazard et al. 1987). Becker et al. (1997) reported the discovery of two more objects resembling Q 0059-2735 (FIRST J084044.5+363328 and J155633.8+351758), the second of which is radio-loud. We will refer to these as FeLoBAL quasars. Both of the new unusual quasars were found by making optical identifications of radio sources from the VLA FIRST survey (Faint Images of the Radio Sky at Twenty-cm; Becker, White, & Helfand 1995; White et al. 1997). Subsequently, Brotherton et al. (1998) identified five more radio-loud BAL quasars (two HiBAL and three LoBAL quasars) from a complete sample of radio-selected ultraviolet-excess quasars, firmly establishing the existence of radio-loud BAL quasars. Lastly, Wills, Brandt, and Laor (1999) have recently suggested that the radio-loud quasar PKS 1004+13 is also a BAL quasar.
For the past five years we have been developing several new radio-selected samples of quasars based on the VLA FIRST survey. The most extensive of these is the FIRST Bright Quasar Survey or FBQS (Gregg et al. 1996, hereafter FBQS1; White et al. 2000, hereafter FBQS2). The goal of the FBQS is to identify all quasars in the FIRST survey brighter than 17.8 on the POSS-I $E$ (red) plate. In the initial 2700 square degrees of the FIRST survey, we defined a sample of 1238 quasar candidates based on positional coincidence between a FIRST source and a POSS-I stellar object (see FBQS1 and FBQS2 for a detailed discussion of the candidate selection criteria). Spectra have been collected for 90% of these candidates, 636 of which have been identified as quasars. Among these are 29 which display BAL characteristics. We present the optical spectra and radio spectral indices of these BAL quasars, comparing their radio and optical properties to previous samples of optically selected BAL quasars. We discuss the selection biases inherent in the survey results and discuss why our sample differs from those based on optically selected samples.
## 2. Identification and Classification of BAL Quasars
The FBQS BAL quasars are defined to be any quasar which shows significant broad absorption blueward of either Mg II $\lambda2800$ or C IV $\lambda1549$. We have chosen not to employ any strict definition of a BAL, such as the "BALnicity" index of Weymann et al. (1991), which requires continuous absorption of at least 10% in depth spanning more than 2000 km s<sup>-1</sup>, discounting absorption closer than 3000 km s<sup>-1</sup> blueward of the emission peak. Weymann's highly conservative definition has the advantage of unambiguously distinguishing between associated absorbers and "classical BALs" but could unnecessarily exclude several potentially very interesting members of the class. We advocate classification of absorption systems based on their physical characteristics such as variability and partial coverage (cf. Barlow et al. 1997), and so we choose not to exclude any likely BAL quasars.
Even though we did not use "BALnicity" to define our sample, we have calculated the BALnicity index for each of our BALs. The values are given in Table 1. In all, seven of our BALs fail the BALnicity test, i.e., they have zero BALnicity. Three of these are LoBALs, for which by necessity the BALnicity was calculated from the Mg II absorption line, a line to which the test was never meant to be applied (Weymann et al. 1991). For example, Voit et al. (1993) found that Mg II absorption troughs are usually narrower than the C IV absorption troughs in LoBALs. Two of these LoBALs are in fact FeLoBALs, and the correctness of their inclusion is almost beyond question insofar as the conditions necessary for absorption by excited states of Fe strongly indicate an intrinsic system local to the active nucleus (FIRST J084044.5+363328 and FIRST J121442.3+280329). The inclusion of the third (FIRST J112220.5+312441) is problematical and will only be resolved with an observation of C IV. Inclusion of the four HiBALs which fail the test can be justified as follows. Two of them (FIRST J095707.4+235625 and J141334.4+421202) have nearly black C IV absorption spanning 4000 km s<sup>-1</sup> which is very unlikely to break up into a blend of narrow lines. FIRST J115023.6+281908 has three C IV absorption systems with velocities up to 11700 km s<sup>-1</sup>. Even though none of the absorbers individually is 2000 km s<sup>-1</sup> broad, the three taken together are very suggestive of an intrinsic BAL outflow. Lastly, the Si IV absorption lines in FIRST J160354.2+300209 show clear evidence of partial covering, which is normally taken to be a property of BALs (Arav et al. 1999).
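To make the procedure explicit, here is a sketch of one common reading of the Weymann et al. (1991) definition (plain Python/NumPy; the velocity grid and the toy trough are illustrative, and a real spectrum must be continuum-normalized first):

```python
import numpy as np

def balnicity(v, fnorm):
    """BALnicity index in km/s, after Weymann et al. (1991).

    v     : outflow velocity grid [km/s], increasing blueward of the line
    fnorm : continuum-normalized flux on that grid
    The absorption only counts once it has stayed below 90% of the continuum
    for a continuous 2000 km/s, and the first 3000 km/s are discounted.
    """
    bi, run = 0.0, 0.0
    dv = np.diff(v, prepend=v[0])
    for vi, fi, dvi in zip(v, fnorm, dv):
        if vi < 3000.0:
            continue
        if fi < 0.9:
            run += dvi
            if run > 2000.0:
                bi += (1.0 - fi / 0.9) * dvi
        else:
            run = 0.0
    return bi

# Toy example: a black trough from 5000 to 12000 km/s.
v = np.linspace(0.0, 25000.0, 2501)
f = np.where((v > 5000.0) & (v < 12000.0), 0.0, 1.0)
print(balnicity(v, f))  # ~ 5000 km/s: the trough width minus the 2000 km/s run-in
```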
The typical wavelength coverage of the FBQS spectra is 3800 to 8000 Å. For quasars with $0.5<z<1.7$, the Mg II $\lambda2800$ feature is shifted into the observed range, permitting identification of LoBALs. Somewhat higher redshift LoBALs can be identified through broad absorption by Al III $\lambda1860$, as in the case of FIRST J105427.1+253600. HiBAL quasars can be identified only for $z\gtrsim 1.4$, which brings C IV $\lambda1549$ well into the observed spectral range. (Since the observed wavelength coverage is not uniform for all the FBQS spectra, the redshift range over which C IV is observable differs from quasar to quasar.) Some of our HiBAL quasars may actually be unrecognized LoBAL or FeLoBAL quasars, since LoBALs also exhibit broad C IV $\lambda1549$ and other high-ionization species.
Table 1, partially excerpted from Table 2 in FBQS2, lists the FIRST catalog RA and Dec (J2000), recalibrated and extinction-corrected $E$ and $O$ magnitudes, red extinction corrections $A(E)$, FIRST peak and integrated radio flux densities, the computed BALnicity index (as defined in Weymann et al. 1991), the maximum outflow velocity in the absorption lines, and redshifts for the 29 BAL quasars identified to date in the FBQS. Also in Table 1 are the radio luminosity $L_R$ at a rest frequency of 5 GHz (calculated using the observed radio spectral indices from Table 2 and hence different from the values given in FBQS2), the absolute $B$ magnitude $M_B$, and the radio loudness $R^{*}$, the ratio of the 5 GHz radio flux density to the 2500 Å optical flux in the quasar rest frame (using $\alpha_{\mathrm{radio}}$ from Table 2 and assuming $\alpha_{\mathrm{opt}}=-1$; Stocke et al. 1992). We use the (APS-calibrated) APM $O$ magnitude (White et al. 2000) as a direct estimate of $B$, and we do not correct the optical magnitude for the emission-line contribution. The cosmological parameters $H_0 = 50\ \text{km}\ \text{s}^{-1}\ \text{Mpc}^{-1}$, $\Omega=1$, and $\Lambda=0$ are adopted. In the last column, we give the type of BAL. Our spectra for FIRST J112220.5+312441 and J115023.6+281908, while suggestive that these objects are BAL QSOs, are not definitive; this uncertainty is indicated by question marks next to the type in Table 1. Figures 1 and 3 show the spectra of the BAL quasars, plotted in the rest frame to facilitate the recognition of the sometimes complex absorption features.
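As an illustration of how such a radio-loudness parameter is assembled, the sketch below converts a 20 cm flux density and an $O$ magnitude into an $R^{*}$-like ratio. The magnitude zero point and the exact K-correction conventions are illustrative assumptions, and the input numbers are invented rather than taken from Table 1; see Stocke et al. (1992) and FBQS2 for the definitions actually used:

```python
def r_star(S14_mJy, O_mag, z, alpha_r, alpha_o=-1.0):
    """Rest-frame 5 GHz / 2500 A flux-density ratio (sketch, not the FBQS pipeline)."""
    # Radio: extrapolate the observed 1.4 GHz flux density to the observed
    # frequency that redshifts to 5 GHz in the rest frame (S_nu ~ nu^alpha).
    S_radio = S14_mJy * 1e-26 * (5.0 / (1.4 * (1.0 + z))) ** alpha_r  # erg/s/cm^2/Hz
    # Optical: flux density from the O (~B) magnitude with an AB-like zero
    # point of 48.60 (an assumption), extrapolated from 4400 A observed to
    # the observed wavelength 2500(1+z) A.
    S_4400 = 10.0 ** (-0.4 * (O_mag + 48.60))
    S_opt = S_4400 * (4400.0 / (2500.0 * (1.0 + z))) ** alpha_o
    return S_radio / S_opt

print(r_star(S14_mJy=25.0, O_mag=17.5, z=1.5, alpha_r=-0.5))  # order 10-100: radio-loud
```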
The BAL quasars in Table 1 divide nearly evenly into 15 HiBALs and 14 LoBALs. Both FIRST J105427.2+253600 and J132422.5+245222 are classified as LoBALs solely by the presence of Al III $\lambda1860$, since their Mg II is redshifted into the near IR. Four of the LoBALs belong to the rare class of FeLoBALs (Becker et al. 1997), characterized by the metastable Fe II absorption bands centered at 2350 and 2575 Å. These four FeLoBAL quasars vary markedly in the depth of their absorption features. Two of the other LoBAL quasars (FIRST J140806.2+305449 and J152350.4+391405) are unusual insofar as the spectra appear suppressed blueward of 2500 Å in the rest frame. Several of the BAL quasars, both high- and low-ionization, are almost devoid of obvious emission lines (e.g., FIRST J142703.6+270940, J142013.1+253404).
## 3. Radio Properties of the FIRST BAL Quasars
The FIRST Survey provides 20 cm maps with 5 arcsec angular resolution taken with the NRAO VLA (the NRAO is operated by Associated Universities, Inc., under a cooperative agreement with the National Science Foundation) in the B-configuration. To investigate the radio spectral indices and radio morphologies of the radio-selected BAL quasars, we have reobserved nearly all of the objects with the VLA in either A or D configurations (sometimes both) at 20 and 3.6 cm wavelength. The observed flux densities of the quasars at 20 cm from the three different VLA configurations (A, B, and D) are given in Table 2, along with the A and D configuration 3.6 cm flux densities. These are supplemented by data from the WENSS (Westerbork Northern Sky Survey; Rengelink et al. 1997) survey at 92 cm, the Green Bank 6 cm survey (Becker, White, & Edwards 1991), and the NVSS 20 cm survey (Condon et al. 1998). Angular resolutions for all the observations are listed in Table 2. Spectral indices are given for 28 of the BAL quasars; the sources have a mix of flat spectra (9 sources, $\alpha>-0.5$) and steep spectra (19 sources, $\alpha\le-0.5$), with 9 of the sources falling close to the dividing line ($-0.6\le\alpha\le-0.4$). Where possible, the spectral indices are based on simultaneous observations at two frequencies.
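For reference, a two-point spectral index in the $S_\nu\propto\nu^\alpha$ convention used here follows directly from two flux densities; the values below are illustrative, not entries from Table 2:

```python
import numpy as np

def spectral_index(S1_mJy, nu1_GHz, S2_mJy, nu2_GHz):
    """Two-point spectral index alpha, with S ~ nu^alpha (steep: alpha <= -0.5)."""
    return np.log(S1_mJy / S2_mJy) / np.log(nu1_GHz / nu2_GHz)

# 20 cm (1.4 GHz) and 3.6 cm (8.46 GHz) measurements of a hypothetical source
print(spectral_index(30.0, 1.4, 12.0, 8.46))  # ~ -0.51, borderline steep
```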
This heterogeneous set of radio observations spans a wide range of angular resolutions, so the measured flux densities may not be directly comparable; any spectral index derived from data with different angular resolutions is uncertain unless the radio source is effectively a point source at the highest resolution available. In that case, the flux densities from all the observations can be directly compared, assuming a nonvariable source. If all the flux density measurements at a given frequency for an object are the same independent of angular resolution, then the point source assumption is probably valid. If the flux density measured with low resolution is less than that at high-resolution, then the source is probably variable. If the low-resolution flux density is higher than the high-resolution value, the difference could arise from either resolution effects or variability.
The VLA observations provide some indication of the radio brightness distribution and morphology. Roughly 90% of the BAL quasars appear point-like at the FIRST resolution of $`5^{\prime \prime }`$. This is in sharp contrast to the parent population of quasars in the FBQS, which are evenly split between point-like and extended (based on a subset of several hundred quasars with a similar range of parameters to the BAL quasars: $`z>0.5`$ and a 20 cm flux density less than 50 mJy). Of 13 BAL quasars observed in the A configuration of the VLA, 11 are still unresolved at the level of $`1.5^{\prime \prime }`$.
### 3.1. Comments on Individual Radio Sources
FIRST J072418.4+415914 โ Appears to be variable at 20 cm.
FIRST J080901.3+275342 โ Slightly resolved in FIRST.
FIRST J084044.5+363328 — A second radio source positioned $`27^{\prime \prime }`$ away from 0840+3633 has a FIRST flux density of 2.5 mJy and appears extended. This source is too close for the NVSS to resolve from 0840+3633 and probably explains the higher NVSS flux density. The lower B-configuration 20 cm flux density was used to determine the spectral index in Table 2.
FIRST J093404.0+315331 โ Appears to be variable at 20 cm.
FIRST J115023.6+281907 โ Appears to be variable at 20 cm.
FIRST J140806.2+305449 โ Possibly a triple source, although a chance alignment of sources is more likely since both of the other sources break up into double sources in higher resolution images.
FIRST J152350.4+391405 โ Appears to be variable at 20 cm.
FIRST J160354.2+300209 โ Possibly a GigaHertz-peaked radio spectrum.
FIRST J164152.3+305852 โ Partially resolved by FIRST, consistent with the higher NVSS flux density but subsequent D configuration data suggests variability.
FIRST J165543.2+394520 โ Appears to be variable at 20 cm.
## 4. The FBQS BAL Quasar Fraction and Its Dependence on Radio-Loudness
The frequency of BALs within the FBQS can be derived by comparing the number of BAL quasars found to the number of quasars with rest-frame wavelength coverage (determined by the redshift and observed spectral range) that would have allowed absorption to be seen had it been present. For the redshift range relevant to LoBAL quasars, $`0.5\le z\le 1.7`$, there are $`\sim 350`$ quasars in the FBQS in which LoBALs could have been confirmed had they been present. In this same redshift range, we find 11 are LoBAL or FeLoBAL quasars (3$`\pm `$1%, where the uncertainty is the standard deviation of a binomial distribution). There are 100 quasars in the redshift range relevant to HiBALs, $`z\ge 1.4`$, in which the wavelength coverage would have permitted C IV absorption to be seen had it been present. Of these 100 quasars, eighteen show high-ionization broad absorption. This includes the 3 high-redshift LoBALs, so designated because they also show Al III absorption; the other 15 objects show only high-ionization absorption. Our BAL rate is therefore $`18\pm 3.8`$%. If we exclude from this those objects with zero BALnicity, our rate is reduced to 14%, which is roughly a 50% increase over the rate seen in optically selected samples ($`\sim `$9% in the LBQS; Foltz et al. 1990). It is worth noting that, of the unambiguous BAL quasars, i.e., those with nonzero BALnicity, one third are either LoBALs or FeLoBALs, while the comparable number for the LBQS is 10%. This strongly suggests that the frequency of LoBAL quasars is highly dependent on radio luminosity, as was already postulated in Becker et al. (1997).
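The quoted uncertainties follow from binomial counting statistics; a minimal sketch reproducing the two rates above:

```python
import math

def fraction_with_error(n_detected, n_total):
    """Fraction and its binomial standard deviation, sqrt(N p (1-p)) / N."""
    p = n_detected / n_total
    return p, math.sqrt(n_total * p * (1.0 - p)) / n_total

print(fraction_with_error(11, 350))   # ~(0.031, 0.009): 3 +/- 1 per cent
print(fraction_with_error(18, 100))   # ~(0.18, 0.038): 18 +/- 3.8 per cent
```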
Until the discovery of FIRST J155633.8+351758 by Becker et al. (1997) and several additional objects by Brotherton et al. (1998), it was believed that the BAL phenomenon did not occur in radio-loud quasars, i.e., those with $`R^{\ast }>10`$. The 29 FBQS BALs demonstrate otherwise. In Figure 5 we plot $`\mathrm{log}R^{\ast }`$ vs $`z`$ (where $`R^{\ast }`$ is taken from FBQS (2)). Consistent with the Becker et al. (1997) and Brotherton et al. (1998) results, the BAL quasars are not confined to the radio-quiet regime. Our data do suggest that the incidence of BALs decreases for radio-loud quasars with $`R^{\ast }>100`$. For the LoBALs in particular, $`R^{\ast }`$ never exceeds 35. In comparison, 38% of the quasars in the FBQS over the same redshift range ($`z>0.5`$) have $`R^{\ast }>35`$. The incidence of LoBALs is 5% for quasars with $`R^{\ast }<35`$. While based on a rather small sample, these statistics suggest that the frequency of LoBAL quasars is dependent on radio loudness. The likelihood of a quasar being a HiBAL, however, shows no obvious dependence on radio loudness. HiBALs are slightly under-represented in quasars with $`R^{\ast }>100`$, but this is not statistically significant. A better delineation of the frequency of BALs as a function of $`R^{\ast }`$ will have to wait until more sky is surveyed by the FBQS, but the existence of a population of radio-loud BALs, both low and high ionization, is now firmly established.
Using $`R^{\ast }`$ as a measure of radio loudness may be a little misleading for BAL quasars insofar as the BAL quasars are affected by reddening, which would reduce the optical magnitude and hence inflate the value of $`R^{\ast }`$. As we point out in section 4.2 (see Figure 6), if we use the alternative definition of radio-loud, i.e., $`L_R>10^{32}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> (Miller, Rawlings, and Saunders 1993), which is independent of the observed optical magnitude, we still find that a significant number of the FBQS BAL quasars are radio-loud.
The traditional measure of the significance of the broad absorption lines in a quasar spectrum is the BALnicity index (BI; Weymann et al. (1991)). In Figures 6(a) and 6(b) we plot the dependence of BI on the observed radio luminosity, for HiBAL and LoBAL quasars respectively. For HiBAL quasars, there is an anticorrelation between BI and $`L_R`$ (the Spearman rank correlation coefficient is $`-0.85`$, which would arise by chance with a probability of only $`6\times 10^{-5}`$). No such anticorrelation is apparent for the LoBAL quasars in Figure 6(b). In Figures 6(c) and 6(d), we plot the dependence of the maximum outflow velocity $`V_{max}`$ against $`L_R`$ for HiBAL and LoBAL quasars. There is a suggestion of an anticorrelation for HiBAL quasars (the Spearman rank correlation coefficient is $`-0.70`$; the chance probability is only 0.0037), though it is considerably less convincing than that with BI.
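The rank statistics can be reproduced with standard tools; the arrays below are toy stand-ins for the measured BI and $`L_R`$ values, not the actual data from Figure 6.

```python
import numpy as np
from scipy.stats import spearmanr

bi = np.array([2500.0, 1800.0, 900.0, 400.0, 150.0, 60.0])   # toy BI values
l_r = np.array([2e31, 8e31, 3e32, 9e32, 4e33, 2e34])         # toy L_R values

rho, p_value = spearmanr(bi, l_r)
print(rho, p_value)   # rho = -1 for this perfectly monotone toy case
```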
The lack of correlation between BI or $`V_{max}`$ and $`L_R`$ for the LoBAL quasars may simply reflect the lack of high radio luminosity LoBAL quasars. If HiBAL quasars with luminosities greater than $`10^{33}`$ ergs/s/Hz are omitted from the plots, at most a weak correlation is detectable in the less luminous objects.
### 4.1. Why the FBQS BAL Quasar Fraction is High
There are several possible reasons for the higher frequency of BAL quasars in the FBQS. One possible explanation is the looser definition of BAL used in this paper, a definition divorced from the BALnicity index. Since the fraction of BALs seen in the LBQS has only appeared as an AAS abstract (Foltz et al. (1990)), it is difficult to evaluate the magnitude of this effect. Another possible explanation is that the frequency of the BAL phenomenon depends on the radio emission, albeit a reversal of the old thesis that only radio-quiet quasars can have BALs (Stocke et al. (1992)). The FBQS quasars span the radio-quiet/radio-loud boundary, filling in what used to be considered a bimodal distribution which was perhaps the result of selection effects in other surveys (White et al. 2000). Based on a limited sample, Francis, Hooper, & Impey (1993) found that BAL quasars in the LBQS appeared primarily in this radio-intermediate regime; accepting their result as correct, the FBQS naturally includes BALs passed over, for whatever reasons, by optical surveys. It is easy to imagine that the objects in Figure 1 that do not have strong emission lines or that have significantly redder continua than typical quasars would be overlooked in surveys with optical selection criteria.
A related reason for the high incidence of BAL quasars is that our sample was selected using the red $`E`$ magnitude of the optical counterparts, while most quasar surveys (including the LBQS) are based on bluer $`B`$ magnitudes. A plot of the color of the FBQS quasars as a function of redshift is shown in Figure 7. The reddest objects in the figure are low-redshift objects, in which there is undoubtedly a large contribution of starlight. The BALs in general and the LoBALs in particular are predominant among the reddest quasars with $`z>0.5`$, accounting for over 50% of the quasars redder than $`O-E`$ of 1.3. The BAL quasars are redder than the average FBQS quasar by $`\sim 0.5`$ magnitude. Hence samples based on $`B`$ magnitudes have an effective magnitude cutoff 0.5 magnitudes higher for BAL quasars than for the non-BAL quasars, which, owing to the steep quasar number counts, would substantially reduce the observed incidence of BAL quasars. This effect is tantamount to a differential $`k`$-correction between BAL and non-BAL quasars and has been discussed in earlier studies (Boroson & Meyers 1992; Sprayberry & Foltz 1992). It is possible that the red FBQS BAL quasars represent the tip of the iceberg and that there remains a large population of BAL quasars that are yet redder and do not make it into the FBQS despite its weak color selection ($`O-E<2`$) (Becker et al. 1997).
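A rough sense of the size of this effect can be obtained by assuming cumulative quasar counts of the form $`N(<m)10^{\beta m}`$; the bright-end slope $`\beta `$ used below is an assumed, illustrative value rather than one derived from the LBQS.

```python
# Observed-incidence suppression in a B-limited survey if BAL quasars are
# Delta_m = 0.5 mag fainter in B, for cumulative counts N(<m) ~ 10**(beta*m).
# The bright-end slope beta is an assumed, illustrative value.
for beta in (0.6, 0.8):
    factor = 10.0 ** (-beta * 0.5)
    print(f"beta={beta}: incidence reduced by a factor {1.0/factor:.1f}")
```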
The red colors of the BAL quasars are not simply the effect of broad absorption lines suppressing the flux in the $`O`$ band (typically $`\sim `$0.2 mag), although this contributes part of the difference. The unabsorbed continuum itself appears red, especially in the LoBAL quasars, which is suggestive of dust (Brotherton et al. 1999; Yamamoto & Vansevicius 1999). The color difference extends to the infrared (Hall et al. 1997). Egami et al. (1996) presented the near-IR spectrum of Q 0059$`-`$2735, which displays a very large Balmer decrement (H$`\alpha `$/H$`\beta `$ = 7.6), almost the same as that seen in FIRST J155633.8+351758 (Dey 1998, priv. comm.). For case B recombination, H$`\alpha `$/H$`\beta `$ = 2.85; and "normal" blue quasars usually show H$`\alpha `$/H$`\beta `$ $`<`$ 4. (While case B is not likely to apply to the Balmer lines and the Balmer decrement, the empirical result is that the smallest Balmer decrements are consistent with case B and that the Balmer decrement has been shown to correlate with the continuum slope in a manner consistent with an intrinsic case B ratio and dust reddening (Baker 1997).) The observed Balmer decrements of these extreme objects then imply $`A_V\sim 3`$, which is consistent with the Brotherton et al. (1997) estimate for J155633.8+351758 based on spectropolarimetry. Because of the bright magnitude limit of the FBQS ($`E=17.8`$), and the redshifts required to see BALs in the optical, modest reddening will remove BAL quasars from the sample (especially the LoBAL quasars, which appear redder than other classes, e.g., Sprayberry and Foltz (1992)). If it suffered no intrinsic reddening, FIRST J155633.8+351758 would likely have been selected for inclusion in the FBQS.
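The quoted $`A_V`$ estimate can be sketched in the screen-dust approximation with an assumed Cardelli-like extinction curve (the $`k`$ values below are assumptions of this illustration):

```python
import math

def a_v_from_balmer(r_obs, r_int=2.85, k_ha=2.53, k_hb=3.61, r_v=3.1):
    """A_V from an observed Balmer decrement in the screen-dust approximation;
    k_ha, k_hb are assumed Cardelli-like extinction-curve values."""
    ebv = 2.5 / (k_hb - k_ha) * math.log10(r_obs / r_int)
    return r_v * ebv

print(a_v_from_balmer(7.6))   # ~3 mag, as quoted above
```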
### 4.2. Why the FBQS BAL Quasar Fraction is Low
The FBQS certainly misses BAL quasars. The magnitude limit discriminates against BAL quasars when the heavily absorbed spectral regions fall within the $`E`$ bandpass ($`6250\pm 180`$ Å). While less affected than the $`O`$ bandpass, $`E`$ magnitudes are still affected by dust reddening. The BAL quasar fraction we find, $`\sim `$18%, is then a lower limit to the actual BAL quasar fraction for radio-intermediate quasars. The LoBAL quasars, which can be significantly dust reddened, are more susceptible to color selection effects than HiBAL quasars.
Goodrich (1997) and Krolik & Voit (1998) have argued that the true fraction of BAL quasars is much higher ($`\sim `$30%). While we would agree that the true fraction is possibly this high, the FBQS sample undermines their position that BAL quasars are radio-moderate (Francis, Hooper, & Impey (1993)) because of optical attenuation rather than intrinsically strong radio emission. Figure 8 plots the radio luminosity of the FBQS quasars as a function of redshift, clearly showing that at least some BAL quasars are intrinsically strong radio sources.
Goodrich (1997) (see also Goodrich & Miller 1995) had a second reason for arguing that the optical continuum was suppressed and the true fraction of BAL quasars was underestimated: high polarization. The idea is that scattered and polarized light is present in all quasars, but only becomes noticeable when the direct light is somehow attenuated. Hutsemekers, Lamy, & Remy (1998) found that, on average, LoBAL quasars are more polarized than HiBAL quasars, which are in turn (slightly) more polarized than non-BAL quasars. This is consistent with the idea that LoBAL quasars possess more absorbing material and dust along the line of sight.
Estimating the true fraction of BAL quasars remains a difficult problem given the many unknowns that must be assumed or derived with incomplete information. What we can say with some certainty is that the true fraction of BAL quasars among radio-selected quasars is greater than 18%.
### 4.3. Radio Properties and Unified Schemes
The similarity of the emission lines in BAL quasars and normal quasars (Weymann et al. (1991)) suggests that BAL quasars are normal quasars seen at a viewing angle that intersects an outflow common to all quasars. The spectropolarimetry results have often been interpreted in terms of a preferred orientation: Goodrich & Miller (1995), Hines & Wills (1995), and Cohen et al. (1995) all suggest that BAL quasars are normal quasars seen along a line of sight skimming the edge of a disk or torus, with BAL clouds accelerated from its surface by a wind, and polarized continuum light scattered above along a less obscured path. LoBAL quasars are those seen at the largest inclinations, thus presenting the largest column densities.
The jets of quasars provide a way to measure orientation. The relativistic beaming model for radio sources (e.g., Orr & Browne 1982) unifies core-dominated (flat spectrum) and lobe-dominated (steep spectrum) radio sources by means of orientation: core-dominant objects are those viewed close to the jet axis, while lobe-dominant objects are those viewed at larger angles. Indeed, relativistic jets appear to be present in at least some radio-quiet quasars (e.g., Blundell & Beasley 1998), and a flat radio spectral index in a radio-quiet quasar may indicate a beamed source (e.g., Falcke, Sherwood, & Patnaik 1996).
We find that about two thirds of the FBQS BAL quasars have steep radio spectra ($`\alpha <-0.5`$), as expected for edge-on systems, but that the remaining third have flat spectra (including clearly radio-loud sources such as FIRST J141334.4+421202, as well as J155633.8+351758, which is not in this sample). This is inconsistent with the simple unified scheme, which predicts only steep spectrum sources for an edge-on geometry.
Similarly, Barvainis & Lonsdale (1997) found that the radio spectra of radio-quiet BAL quasars have a range of slopes, again including both flat and steep spectra, suggesting that BAL quasars are seen for a range of orientations with respect to the system (jet) axis.
The radio morphology of the FIRST BAL quasars is also unexpected for the unified edge-on scheme. VLA A array maps of our BAL QSOs show that 80% of the sources are unresolved at the $`0.2^{\prime \prime }`$ scale. This could be because they are small, "frustrated" or young sources, similar to compact steep spectrum sources, or because they are very core-dominated with the jet beamed toward us. The compactness of the radio emission, even in the radio-loud sources, favors their existence in gas-rich interacting systems which can confine the radio emission to small scales.
There is an alternative to "unification by orientation," which may be described as "unification by time," with BAL quasars characterized as young or recently refueled quasars. Boroson & Meyers (1992) found that LoBAL quasars constitute 10% of IR-selected quasars, greater than the 1% found in optically selected samples, and that LoBALs show very weak narrow \[O III\] $`\lambda `$5007 emission. Turnshek et al. (1997) found that 1/3 of weak \[O III\] $`\lambda `$5007 quasars show BALs. Because \[O III\] $`\lambda `$5007 is emitted from the extended narrow-line region (NLR), its weakness suggests that obscuring material with a large covering factor is present. We are unaware of any LoBAL quasars with significant \[O III\] $`\lambda `$5007 emission. Voit et al. (1993) argue that low-ionization BALs are a manifestation of a "quasar's efforts to expel a thick shroud of gas and dust," consistent with the scenario of Sanders et al. (1988) in which quasars emerge from dusty, gas-rich merger-produced ultraluminous infrared galaxies. The warm $`IRAS`$-selected BAL quasars ($`0.25<F_\nu (25\mu m)/F_\nu (60\mu m)<3`$) Markarian 231 (Smith et al. 1995), $`IRAS`$ 07598+6508 (Boyce et al. 1996), and PG 1700+518 (Hines et al. 1999; Stockton et al. 1998) all show evidence for recent mergers or interactions, including young starbursts.
While the geometry of BAL quasars and their relationship to non-BAL quasars remains an open question, our results do not appear to favor the popular notion that all BAL quasars are normal quasars seen edge-on. The FBQS results are more consistent with the unification by time picture.
## 5. Summary
We have investigated the properties of 29 radio-selected BAL quasars found in the FBQS. The sample comprises 15 high-ionization BAL quasars, and 14 low-ionization BAL quasars, 4 of which are rare FeLoBALs. At least 13 are formally radio-loud, unequivocally establishing the existence of a substantial population of radio-loud quasars exhibiting BAL spectral features.
The frequency of BAL quasars appears to be higher than that found in optically selected samples. Even so, because of selection effects and preferential reddening of LoBAL quasars, the FBQS almost certainly misses additional BAL quasars and the true frequency must be higher. The situation is complicated by indications that the frequency of BAL quasars peaks among the radio-moderate population and decreases for the extremes of radio-loudness. The BAL quasars show compact radio morphologies, and have a range in radio spectral indices. The radio properties do not support the popular scenario in which all BAL quasars are normal quasars seen edge-on. An alternative picture in which BALs are an early stage in the development of new or refueled quasars is preferred.
The success of the FIRST survey is in large measure due to the generous support of a number of organizations. In particular, we acknowledge support from the NRAO, the NSF (grants AST-98-02791 and AST-98-02732), the Institute of Geophysics and Planetary Physics (operated under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48), the Space Telescope Science Institute, NATO, the National Geographic Society (grant NGS No. 5393-094), Columbia University, and Sun Microsystems. We also acknowledge several very helpful comments from an anonymous referee. |
# Quantum computation by optically coupled steady atoms/quantum-dots inside a quantum electro-dynamic cavity
## Abstract
We present a model for quantum computation using $`n`$ steady 3-level atoms or 3-level quantum dots kept inside a quantum electro-dynamics (QED) cavity. Our model allows one-qubit operations and the two-qubit controlled-NOT gate, as required for universal quantum computation. The $`n`$ quantum bits are described by two energy levels of each atom/dot. An external laser and $`n`$ separate pairs of electrodes are used to address a single atom/dot independently of the others, via the Stark effect. The third level of each system and an additional common-mode qubit (a cavity photon) are used for realizing the controlled-NOT operation between any pair of qubits. Laser frequency, cavity frequency, and energy levels are far off-resonance, and they are brought to resonance by modifying the energy levels of a 3-level system using the Stark effect, only at the time of operation.
A computer which follows quantum mechanical principles has significant advantages over a classical computer. Implementing a quantum computer is based upon the implementation of basic quantum units called quantum bits (two-level systems) and communication among them. Logical operations of a quantum computer can be decomposed into a series of arbitrary one-qubit rotations plus a two-qubit controlled-NOT operation; thus this set of operations makes a universal quantum computer. A similar set, in which the controlled-NOT is replaced by the controlled-phase-shift gate \[$`|00\rangle \to |00\rangle `$, $`|01\rangle \to |01\rangle `$, $`|10\rangle \to |10\rangle `$, and $`|11\rangle \to -|11\rangle `$\], is also universal. $`n`$ two-level systems can have $`2^n`$ highly entangled (phase coherent) states, and a quantum computer takes advantage of performing unitary transformations in a parallel manner on these $`2^n`$ classical strings.
The main difficulties in implementing a quantum computer are the contradicting demands in terms of interaction with the environment. On one hand, a strong controlled interaction is desired in order to operate the computing algorithm (to switch the state of the qubits), but on the other hand uncontrolled interactions are strongly undesired since they cause decoherence of the qubits and hence loss of computing ability. All quantum systems lose their coherence after some time due to non-zero coupling with the environment. Thus the above problem is usually expressed as the need to increase the ratio between the decoherence time and the switching time: a quantum computer must perform all calculations within the decoherence time of the qubit.
Several theoretical and experimental attempts are currently ongoing to realize simple gates (such as the controlled-NOT gate between two qubits). The most realistic ones at the moment are ion-trap , liquid NMR , and cavity-QED . However, serious problems in scaling these systems, and/or in addressing particular qubits create the need for better suggestions or major modifications of these implementations.
The first interesting experiments were done on cavity-QED systems . The cavity-QED computation model is based on the idea of having two types of qubits (atoms and cavity-modes) and it was found very useful in implementing various gates. However, the requirement of mechanical control of atoms makes this model less desirable for quantum computing: the interaction time is controlled by the physical motion of the atoms inside the cavities, and having enough control to let the atom enter a cavity several times (whenever required by the algorithm) is difficult.
The long decoherence time of liquid-NMR and ion-trap systems, and the easy control of the qubits, make these systems serious candidates for quantum computation. But the problem is in scaling these systems with increasing number of qubits. For example, in the case of bulk liquid-NMR, the signal from the system decreases exponentially with the number of qubits. Thus the number of qubits comprising the quantum computer has to be small. The ion-trap computation model is based on interaction via a common-mode qubit and it has two main problems: (a) the addressing of an individual qubit by a separate laser directed to each ion cannot be implemented yet; (b) the use of only one type of two-qubit interaction, namely interaction via one common mode. The latter prevents running several gate operations simultaneously.
Proposals for solid-state and solid-state-NMR devices based on nanotechnology might be more promising for the far future. But even single-qubit systems have not been implemented in these settings, due to the difficulties in creating and controlling such a single qubit.
Clearly, more candidates for the realization of quantum computing devices are still needed, with the hope of a more diverse experimental effort. Such an effort, mainly in the direction of solid-state devices, but combining ideas from existing implementations, might lead to a system where single qubits can be addressed, scaling to a large number of qubits is made possible, and perhaps, in the future, even a system where fault-tolerant computation can be performed.
This paper suggests a model of a quantum computer that combines the advantages of other models . We shall show how to implement the universal set of gates containing the controlled-phase-shift and the arbitrary one qubit rotation.
As a first step we suggest a hypothetical model of atoms fixed in a cavity, with a pair of electrodes "directed" around each atom to control its energy-level spacing. In this first step, we combine the use of a common mode, as in the ion-trap computation model, with the two types of qubits suggested by cavity-QED models, to obtain better control in addressing a single qubit. Unfortunately, fixing atoms for the required time scales is not yet realistic. In general, atoms are in motion in all cavity-QED experiments.
In the second step, we suggest replacing the atoms by quantum dots, so the idea of "fixing" the qubits becomes more realistic. The technical ability of putting a single qubit in a single quantum dot, and the technical ability of putting a quantum dot in a cavity, exist separately. Combining them (while demanding also that the cavity be highly reflecting) is far from the ability of current experiments, but we hope to motivate this direction by showing that the resulting computation model is very promising. Recently, it has been experimentally shown that a single electron can be controlled in a quantum dot; the dot size and dielectric modulation are, however, large, $`(0.5\mu m)^3`$.
A sketch of the model for the proposed quantum computer (with steady atoms) is shown in Fig.1. Atoms are kept steady along the axis of the cavity. An external laser source is accessible to all atoms, and is directed perpendicular to the cavity axis. Electrodes around each atom (which we refer to as "Stark plates") are used to control its level spacing via the Stark effect, and are perpendicular to the laser and the cavity axis. When required in the protocol, a strong electric field is applied to the atom by changing the voltage on the electrodes. This field changes the energy-level separation (a thorough study has been done for Rydberg atoms in Ref.). The required electric field can be calculated easily once the energy levels and the wave functions of the system are chosen. We will assume that the on/off switching of the electric field is slow enough that the change in the original wave function is insignificant. At the same time, it must be fast relative to the time steps of the computation. The applied DC field has to induce shifts that are a fraction of the atomic energy-level separations.
Quantum bits of the computer are described by the ground state ($`|g\rangle `$) and the first excited state ($`|e_0\rangle `$) of the atom; a third level ($`|e_1\rangle `$) is used for the controlled-phase-shift operation. Rotation of an individual qubit is achieved by applying a laser pulse to all atoms, while only the qubit undergoing the transformation is on-resonance with the laser frequency; the others are far off-resonance.
Communication between any two qubits is done by a common-mode cavity photon, as described now. The photonic mode is in its ground state (zero photons) and a maximum of one cavity photon is present at the time of interaction between two qubits. The cavity's $`0`$-photon and $`1`$-photon states are denoted by $`|0\rangle `$ and $`|1\rangle `$ respectively. The atomic levels are kept far off-resonance with respect to the resonant frequency of the cavity, to avoid undesired interaction (in which a transition from excited state to ground state takes place while emitting a photon into the cavity). Desired energy levels are brought into resonance with the cavity by changing the Stark field only at the time of logic operations. To perform a controlled operation between any two qubits, we do the following: (a) The state of the first qubit and the vacuum state of the cavity are swapped. (b) The new cavity state is used to perform a controlled operation with another qubit; the third level of the atom is used for that purpose, yielding a controlled-phase-shift gate. (c) Finally, the cavity state is again swapped with the first qubit, so the cavity is back in its vacuum state, and the controlled operation between the two qubits is completed.
We assume that the time for a significant far off-resonance evolution is huge compared to the on-resonance evolution time. We also assume that the cavity is of high quality and has almost perfectly reflecting walls, so that the decoherence time of the cavity mode is much larger than the time between the two required swap operations. The frequency of the laser pulses is off-resonant with the cavity.
We will describe in detail the Hamiltonian leading to the single qubit rotations and the controlled-phase-shift operations. This is done by taking into account the fact that $`|g\rangle \leftrightarrow |e_0\rangle `$ and $`|e_0\rangle \leftrightarrow |e_1\rangle `$ are allowed dipole transitions, but $`|g\rangle \leftrightarrow |e_1\rangle `$ is not an allowed transition, due to the definite parity of the wave functions.
Let $`\omega _{g;e_0}`$ be the level separations of the qubit (a similar definition applies for $`\omega _{e_0;e_1}`$ and $`\omega _{g;e_1}`$). For the atomic levels with definite parity, we assume that the levels are chosen so that the difference frequencies $`\omega _{g;e_0}`$ and $`\omega _{e_0;e_1}`$ are nearly the same. We treat this case here, but one can easily treat the case where other transitions are allowed or forbidden.
In the following, we describe the steps to obtain the necessary operations involving only one atom at a time, by bringing its levels on-resonance with the laser frequency or the cavity mode. The other atoms are kept far off-resonance to avoid their interactions. If the initial levels are such that $`g;e_0`$ and $`e_0;e_1`$ are the allowed transitions, and $`\omega _{e_0;g}<\omega _{e_0;e_1}`$, then one way to choose the cavity and laser frequencies is $`\omega _{e_0;g}<\omega _{e_0;e_1}<\omega _l<\omega _c`$.
By increasing the level separations, the qubit can be brought to be on-resonance with the laser. This increase in level separation must be significant enough that the interaction of off-resonant atoms with the laser is insignificant. By increasing the level separations further, the qubit is brought to be on-resonance with the cavity photon. Each level separation increases with the applied electric field. While increasing the level separation $`\omega _{e_0;g}`$, the level separation $`\omega _{e_0;e_1}`$ will first come to resonance with the cavity. But we will assume that the switching time is much smaller than the inverse Rabi frequency of the atom-cavity system such that there is practically no effect of this resonance crossing.
One-qubit rotation is performed by changing the atomic levels so that $`\omega _l=\omega _{e_0}-\omega _g`$ and applying the laser pulse. The laser and the qubit involved interact on resonance (but $`\omega _l`$ is off-resonance with the cavity and with the other qubits). The Hamiltonian for the atomic levels in the presence of the laser field is:
$`H_1={\displaystyle \frac{\mathrm{\Omega }_l}{2}}[\sigma _{+}e^{i\varphi }+\sigma _{-}e^{-i\varphi }]\text{.}`$
where $`\sigma _{+}=|e_0\rangle \langle g|`$, $`\sigma _{-}=|g\rangle \langle e_0|`$, $`\varphi `$ is the phase factor of the laser at the location of the basic unit, and $`\mathrm{\Omega }_l`$ is the Rabi frequency due to the laser, $`\mathrm{\Omega }_l=E_0\mu _{ge_0}`$, where $`\mu _{ge_0}`$ is the dipole moment for the $`|g\rangle \to |e_0\rangle `$ transition and $`E_0`$ is the strength of the electric field.
If the interaction time between the laser pulse and the qubit is $`t=\frac{k\pi }{\mathrm{\Omega }_l}`$, then the time evolution operator is
$`\widehat{V}_m^k(\varphi )=\mathrm{exp}[-ik{\displaystyle \frac{\pi }{2}}(\sigma _{+}^{m}e^{i\varphi }+\sigma _{-}^{m}e^{-i\varphi })]`$
The process is an energy non-conserving process, and the system is fed energy from the laser field.
The Jaynes-Cummings Hamiltonian for a 2-level system which is on-resonance with the cavity photon is described by:
$`H_2=i{\displaystyle \frac{\mathrm{\Omega }_c}{2}}[\sigma _{+}\widehat{a}-\sigma _{-}\widehat{a}^{\dagger }]`$
where $`\widehat{a}`$ and $`\widehat{a}^{\dagger }`$ are the annihilation and creation operators for the common-mode photon, and $`\mathrm{\Omega }_c`$ is the photon Rabi frequency of the cavity-atom system.
If the interaction time between the cavity mode and the qubit is $`t=\frac{k\pi }{\mathrm{\Omega }_c}`$, then the time evolution operator is
$`\widehat{U}_m^k=\mathrm{exp}[-ik{\displaystyle \frac{\pi }{2}}[i\sigma _{+}^{m}\widehat{a}-i\sigma _{-}^{m}\widehat{a}^{\dagger }]]\text{.}`$
To get a controlled-phase-shift between two qubits (two atoms/QDs, say $`m`$ and $`n`$, such that $`m`$ is the control and $`n`$ is the target), we need two types of cavity-atom operations: a $`\pi `$ pulse for obtaining the swap operation, where the qubit levels $`g;e_0`$ and the cavity levels are used, and a $`2\pi `$ pulse using the third level and the cavity photon to obtain the atom-cavity controlled-phase-shift.
The operation is done in three steps:
(1) The levels $`|g\rangle _m`$ and $`|e_0\rangle _m`$ of the $`m`$th atom are brought into resonance with the cavity. The system is allowed to evolve on-resonance with the cavity for a time equal to $`\pi /\mathrm{\Omega }_c`$. At the end of this, a SWAP operation has exchanged the state of the atom with the state of the cavity (which starts in the vacuum state). After the interaction, the $`m`$th atom is in its ground state.
(2) The levels $`|e_0\rangle _n`$ and $`|e_1\rangle _n`$ are brought into resonance with the cavity and allowed to evolve for a time equal to $`2\pi /\mathrm{\Omega }_c`$. The result is that the state does not change if there is no photon in the cavity: $`|g\rangle _n|0\rangle \to |g\rangle _n|0\rangle `$, $`|e_0\rangle _n|0\rangle \to |e_0\rangle _n|0\rangle `$; also, there is no change if the $`n`$th atom is in its ground state: $`|g\rangle _n|1\rangle \to |g\rangle _n|1\rangle `$; however, if there is a photon in the cavity and the $`n`$th atom is in the excited state, it acquires a phase: $`|e_0\rangle _n|1\rangle \to -|e_0\rangle _n|1\rangle `$ (since a $`2\pi `$ rotation of a spinor gives an overall minus sign).
(3) The $`m`$th atom and the cavity are brought into resonance and the system is allowed to evolve for a time equal to $`\pi /\mathrm{\Omega }_c`$. At the end of this, a SWAP operation again exchanges the state of the $`m`$th atom with the state of the cavity.
This $`\pi `$ pulse ($`k=1`$), applied between the levels $`|g\rangle _m`$ and $`|e_0\rangle _m`$ by bringing these two levels (of the $`m`$th atom) on resonance with the cavity, swaps their states back. After the interaction, the cavity is back in the vacuum state, but the state of the qubits has changed.
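The phase bookkeeping of the three-step sequence can be checked numerically in a toy model. The sketch below assumes the phase conventions of the reconstructed $`\pi `$ and $`2\pi `$ pulse operators above; with these conventions the sequence reproduces a controlled phase up to a single-qubit $`Z`$ on the control, which can be absorbed into a one-qubit rotation.

```python
import numpy as np
from scipy.linalg import expm

# Hilbert space: atom m (|g>, |e0>), atom n (|g>, |e0>, |e1>), cavity (|0>, |1>).
# The one-photon truncation of the cavity is safe for these initial states.
Im, In = np.eye(2, dtype=complex), np.eye(3, dtype=complex)
a = np.array([[0, 1], [0, 0]], dtype=complex)       # cavity annihilation
sp_m = np.array([[0, 0], [1, 0]], dtype=complex)    # |e0><g| on atom m
sp_n = np.zeros((3, 3), dtype=complex)
sp_n[2, 1] = 1.0                                    # |e1><e0| on atom n

def pulse(which, k):
    """exp[ k (pi/2) (sigma_+ a - sigma_- a^dag) ] on the chosen atom + cavity."""
    Sp = np.kron(np.kron(sp_m, In), a) if which == "m" \
         else np.kron(np.kron(Im, sp_n), a)
    G = Sp - Sp.conj().T
    return expm(k * (np.pi / 2) * G)

# step 1: pi swap on m; step 2: 2*pi pulse on n's e0<->e1; step 3: pi swap on m
U = pulse("m", 1) @ pulse("n", 2) @ pulse("m", 1)

for m in (0, 1):                  # phases of the |m, n, cav=0> basis states
    for n in (0, 1):
        idx = m * 6 + n * 2
        print(m, n, round(U[idx, idx].real, 3))
# prints 1, 1, -1, 1: a controlled phase up to a single-qubit Z on the control
```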
A crucial issue in this model is the relative time scale between the cavity on-resonance and off-resonance with the 2-level system. When the cavity is off-resonance in the presence of a photon, a dressed state evolves. The relative time scale of the evolution is:
$`{\displaystyle \frac{\tau _{\mathrm{off}}}{\tau _{\mathrm{on}}}}=\sqrt{\left({\displaystyle \frac{\omega _c-\omega _{ge_0}}{\mathrm{\Omega }_c}}\right)^2+1}`$
where $`\omega _c`$ is the cavity frequency, $`\omega _{ge_0}=\omega _{e_0}-\omega _g`$, and $`\mathrm{\Omega }_c`$ is the Rabi frequency of the atom due to the cavity photon. The vacuum off-resonance phase evolution, $`\mathrm{\Omega }_c^2/(\omega _c-\omega _{ge_0})`$, must be small enough to make the off-resonant evolution insignificant. Another way to get rid of the vacuum off-resonance evolution is by nullifying the extra phase evolution with additional logic operations, or by taking the phase into account in every step of operations.
The spontaneous emission rate is quite low, or negligible, for a trapped atom. The ratio of the decoherence time to the time required for a single operation is $`\sim 10^6`$; that is, $`\sim 10^6`$ pulses can be applied within the coherence time. In the case of a Rydberg atom, for the $`50\to 51`$ transition, $`\omega _{ge_0}=5\times 10^{10}`$ Hz, the cavity length is $`1`$ cm, $`\mathrm{\Omega }_c\approx 4\times 10^5`$ Hz, and $`\delta =4\times 10^6`$ Hz, where $`\delta `$ is the detuning, i.e., $`\omega _{e;g}-\omega _l`$.
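Plugging these Rydberg numbers into the ratio above (taking $`\delta `$ as the relevant detuning, an identification assumed for this order-of-magnitude check) gives:

```python
import math

omega_rabi = 4e5   # Hz, Omega_c for the Rydberg cavity quoted above
delta = 4e6        # Hz, detuning

ratio = math.sqrt((delta / omega_rabi) ** 2 + 1)   # ~10: off-resonant evolution
print(ratio)                                       # is an order of magnitude slower
phase_rate = omega_rabi ** 2 / delta               # ~4e4 Hz residual vacuum
print(phase_rate)                                  # off-resonance phase evolution
```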
In the case of a quantum dot with transition energy $`\omega _{ge_0}\approx 1`$ meV ($`\sim 1`$ THz), $`\mathrm{\Omega }_c\approx 10^8`$ Hz and the cavity length is $`150\mu `$m. The phase coherence time should be much larger than $`1/\mathrm{\Omega }_c`$ to realize a system that is capable of performing nontrivial operations. The best reported values for decoherence times in a quantum dot system are comparable to $`1/\mathrm{\Omega }_c`$. These dots were, however, open in the sense that large electron reservoirs were connected to them. Isolated quantum dots specifically designed to suppress decoherence will be of paramount importance, not only here but also in other applications of coherent phenomena.
Measurement of the final state of the quantum computer is crucial for an experiment. In this proposed model the qubits are inside the cavity, and the state of a qubit can be measured by the following procedure: (a) transfer the quantum state of the qubit to the cavity by bringing the qubit into resonance with the cavity and waiting for half the time period of the atom-cavity Rabi oscillation; if the electron in the qubit is in the higher state, it will release a photon into the cavity. (b) This photon has to be detected in the cavity by a detector, which is a difficulty that all models of quantum computing suffer from.
Here we have presented a new model of a quantum computer using atoms or quantum dots inside a quantum cavity. A similar model can easily be designed for spin states inside a cavity by replacing the Stark effect with a Zeeman effect. With the advance of technology, it may be possible to fabricate steady atoms inside a cavity, or quantum dots inside a cavity, with long enough decoherence times. The important point of this model is that the qubits are easily addressed (and we do not require a separate laser addressing each one). Note that operations are done only when the cavity/laser and the atomic levels are on-resonance, while undesired interactions are avoided by keeping the far off-resonance condition for the other atoms. The main operations are done by an external laser and by controlling the voltage of the Stark plates from outside.
We are grateful to T. Mor and V. P. Roychowdhury for useful discussions throughout this work. The work of PP was supported by the Revolutionary Computing group at Jet Propulsion Laboratory, contract No. 961360, and grant No. 530-1415-01 from DARPA Ultra program. |
# P-vortices, nexuses and effects of gauge copies (ITEP-TH-12/2000)
## 1 Introduction
The old idea about the role of center vortices in confinement phenomena has been revived recently with the use of lattice regularization. Both gauge invariant and gauge dependent approaches were developed. The gauge dependent studies were done in a particular gauge, named the center gauge. Such a gauge leaves intact the center-group local gauge invariance. It is believed that gauge dependent P-vortices, defined on the lattice plaquettes, are able to locate thick gauge invariant center vortices and thus provide essential evidence for the center vortex picture of confinement. So far 3 different center gauges have been used in practical computations: the indirect center gauge, the direct center gauge and the Laplacian center gauge. It is known that the first two of these gauges suffer from the gauge copies problem. Many results supporting the above mentioned role of P-vortices were obtained in the direct center gauge. Recently the following feature of this gauge has been discovered: there are gauge copies which correspond to higher maxima of the gauge fixing functional $`F`$ (see below for definition) than usually obtained, and at the same time these new gauge copies produce P-vortices evidently with no center-vortex finding ability, since projected Wilson loops have no area law. It has been argued in that one can still use the direct center gauge to locate center vortices if one uses a gauge fixing algorithm avoiding the "bad" copies of . Below we subject this statement to a careful check. Another goal of our paper is to investigate properties of recently introduced new objects called nexuses or center monopoles. One can define a nexus in $`SU(N)`$ gauge theory as a 3D object formed by $`N`$ center vortices meeting at the center, or nexus, with zero (mod $`N`$) net flux. We use P-vortices in the center projection to define nexuses in $`SU(2)`$ lattice gauge theory.
## 2 Direct center gauge
Direct center gauge is defined by the maximization of the following functional of the lattice gauge field $`U_{n,\mu }`$:
$$F(U)=\frac{1}{4V}\sum _{n,\mu }\left(\frac{1}{2}\text{Tr}U_{n,\mu }\right)^2=\frac{1}{4V}\sum _{n,\mu }\frac{1}{4}\left(\text{Tr}_{adj}U_{n,\mu }+1\right),$$
(1)
with respect to local gauge transformations, and can be considered as the Landau gauge for the adjoint representation; $`V`$ is the lattice volume. Condition (1) fixes the gauge up to a $`Z(2)`$ gauge transformation. A fixed configuration can be decomposed into $`Z(2)`$ and coset parts: $`U_{n,\mu }=Z_{n,\mu }V_{n,\mu }`$, where $`Z_{n,\mu }=\text{sign}\text{Tr}U_{n,\mu }`$. Plaquettes constructed from the $`Z_{n,\mu }`$ field have values $`\pm 1`$. Those taking the value $`-1`$ compose the so-called P-vortices. P-vortices form closed surfaces in 4D space. Some evidence has been collected that P-vortices in the direct center gauge can serve to locate gauge invariant center vortices. It has been reported that projected Wilson loops, computed via the linking number of the static quark trajectories and P-vortices, have an area law with string tension $`\sigma _{Z(2)}`$ very close to the string tension of the nonabelian theory, $`\sigma _{SU(2)}`$. This fact has been called center dominance. Another important observation was that the density of P-vortices scales as a physical quantity. We inspect these statements using a careful gauge fixing procedure.
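A minimal sketch of the center projection and the P-vortex plaquettes, for SU(2) links stored as $`2\times 2`$ complex matrices on a periodic lattice (the array layout assumed below is an illustration, not a prescription):

```python
import numpy as np

def center_project(links):
    """Z(2) part of SU(2) links stored as (..., 2, 2) complex matrices:
    Z_{n,mu} = sign Tr U_{n,mu}."""
    return np.sign(np.trace(links, axis1=-2, axis2=-1).real)

def z2_plaquette(z, mu, nu):
    """Plaquette sign Z_{n,mu nu} on a periodic lattice; -1 marks a plaquette
    pierced by a P-vortex.  z has shape (4, L, L, L, L), direction index first."""
    return (z[mu] * np.roll(z[nu], -1, axis=mu)
                  * np.roll(z[mu], -1, axis=nu) * z[nu])

def p_vortex_density(z):
    """rho = (1/12V) sum_{n; mu>nu} (1 - Z_{n,mu nu})."""
    plaqs = [z2_plaquette(z, mu, nu) for mu in range(4) for nu in range(mu + 1, 4)]
    return np.mean([(1.0 - p) / 2.0 for p in plaqs])
```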
The most common method to fix a gauge of the type (1) is the relaxation algorithm, which carries out the maximization iteratively site by site. The relaxation is made more effective with the help of overrelaxation. It is known that another algorithm, simulated annealing, is more effective and very useful when the gauge copies problem becomes severe. Here we do not employ simulated annealing and apply the gauge fixing procedure explained in detail in ref. . We call it the RO (relaxation-overrelaxation) procedure.
The main problem of the direct center gauge fixing is that the functional $`F(U)`$ (1) has many local maxima. We call configurations corresponding to these local maxima gauge copies. They are in fact lattice Gribov copies. It is well known that for some gauge conditions which are formulated as the maximization of a nonlocal functional (e.g. Landau, Coulomb and Maximal Abelian gauges) the gauge dependent quantities depend strongly on the local maximum picked up, while finding the global maximum is impossible. Thus it is necessary to approach the global maximum as closely as possible. We follow the procedure proposed and checked in : for a given configuration we generate $`N_{cop}`$ gauge equivalent copies applying random gauge transformations, and fix the gauge for each gauge copy using the RO procedure. After that we compute the gauge dependent quantity $`X`$ on the gauge copy corresponding to the highest maximum of (1), $`F_{max}(N_{cop})`$. Averaging over statistically independent gauge field configurations and varying $`N_{cop}`$, we obtain the function $`X(N_{cop})`$ and extrapolate it to the $`N_{cop}\to \infty `$ limit. This should provide a good estimate for $`X`$ computed at the global maximum, unless the algorithm in use does not permit reaching the global maximum or its vicinity (a situation we also met in the present study). The main difference of the present study from the calculations performed earlier is that we use a higher number of gauge copies ($`1\le N_{cop}\le 20`$) than was used in refs. and make a careful analysis of the $`N_{cop}`$ dependence. Due to that, our results differ drastically from those reported previously.
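Schematically, the $`N_{cop}`$ prescription is the following loop; `random_gauge`, `ro_gauge_fix`, `F` and `measure_X` are placeholders for the actual lattice routines, not implementations of them:

```python
def best_copy_observable(U, n_cop, random_gauge, ro_gauge_fix, F, measure_X):
    """Fix n_cop random gauge copies of U and keep the observable measured on
    the copy with the largest gauge functional F (all callables are stand-ins)."""
    best_f, best_x = float("-inf"), None
    for _ in range(n_cop):
        copy = ro_gauge_fix(random_gauge(U))   # RO relaxation of a random copy
        f = F(copy)
        if f > best_f:
            best_f, best_x = f, measure_X(copy)
    return best_f, best_x
```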
Separately, we compute observables using the modified (LRO) gauge fixing procedure: every configuration is first fixed to the Landau gauge, and then the RO algorithm for the direct center gauge is applied. In this case the effect of a large number of gauge copies, $`N_{cop}`$, is not very important; we confirm the results of ref. .
Note that there exists another proposal for the general gauge fixing procedure which is free of gauge copies problem. In some particular limit this procedure corresponds to the search of the global maximum . There is also a class of gauge conditions , which do not suffer from the gauge copies problem.
## 3 Results
Our computations have been performed on a lattice $`L^4=12^4`$ for $`\beta =2.3,2.4`$ and $`L^4=16^4`$ for $`\beta =2.5`$. For $`\beta =2.3,2.4`$ ($`\beta =2.5`$) we study $`100`$ ($`50`$) statistically independent gauge field configurations. Using the gauge fixing procedure described above, we calculate the various observables as functions of the number of randomly generated gauge copies $`N_{cop}`$ ($`1\le N_{cop}\le 20`$).
(i) We confirm the conclusion of ref. that gauge copies generated via LRO procedure have higher maxima of $`F(U)`$ and thus are closer to the global maximum of $`F(U)`$. We found that $`F_{max}^{LRO}(N_{cop})>F_{max}^{RO}(N_{cop})`$ for any value of $`N_{cop}`$, at any considered value of $`\beta `$.
(ii) We find that the LRO procedure gives copies with a significantly lower density, $`\rho `$, of P-vortices than the RO procedure. We use the standard definition: $`\rho =\frac{1}{12V}\sum _{n;\mu >\nu }(1-Z_{n,\mu \nu })`$. Thus gauge copies generated by the RO and LRO procedures are indeed different even in the limit $`N_{cop}\to \infty `$.
(iii) The difference between the LRO and RO procedure results can be qualitatively explained as follows. Fixing the Landau gauge, we get a configuration almost without P-vortices; the subsequent RO procedure substantially increases the number of P-vortices, but a percolating cluster does not appear. The original gauge field configuration contains a lot of P-vortices, and the local RO procedure is not able to remove all large (and even wrapping) clusters of P-vortices. The field configuration after application of the LRO procedure contains many small P-vortex clusters; the field configuration after application of the RO procedure contains one large percolating cluster. It seems that this cluster is responsible for the area law behavior of the projected Wilson loops (see below).
(iv) The most important observable is the $`Z(2)`$-projected Creutz ratio $`\chi (I)`$, which we calculate using the procedure suggested in refs. , . $`\chi (I)`$ is defined through the projected Wilson loops, $`W_{Z(2)}(C)=\mathrm{exp}\{i\pi (\mathrm{\Sigma }_P,C)\}`$. Here $`(\mathrm{\Sigma }_P,C)`$ is the 4D linking number of the closed surface $`\mathrm{\Sigma }_P`$ formed by a P-vortex and the closed loop $`C`$.
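The Creutz ratio itself is the standard combination of Wilson-loop averages; a minimal sketch with a toy pure-area-law input, which returns the input string tension exactly:

```python
import math

def creutz_ratio(w, i):
    """chi(I) = -ln[ W(I,I) W(I-1,I-1) / (W(I,I-1) W(I-1,I)) ] from a table
    of (projected) Wilson-loop averages w[(R, T)]."""
    return -math.log(w[(i, i)] * w[(i - 1, i - 1)]
                     / (w[(i, i - 1)] * w[(i - 1, i)]))

# toy check: a pure area law W(R,T) = exp(-sigma R T) returns sigma exactly
sigma = 0.04
w = {(r, t): math.exp(-sigma * r * t) for r in range(1, 7) for t in range(1, 7)}
print(creutz_ratio(w, 4))   # 0.04
```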
In Fig.1 we show the dependence of $`\chi (I)`$ on $`N_{cop}`$ for $`\beta =2.5`$. It turns out that this dependence is nicely fitted by the function $`C_1+C_2/\sqrt{N_{cop}}`$. The reason for such a dependence is still to be understood. In Table 1 we give the ratio $`\sigma _{Z(2)}/\sigma _{SU(2)}`$ (the data for $`\sigma _{SU(2)}`$ are taken from ). $`\sigma _{Z(2)}`$ is computed from the $`\chi (I)`$ data for $`3\le I\le 4`$ on the $`12^4`$ lattice and for $`3\le I\le 6`$ on the $`16^4`$ lattice. For $`N_{cop}=3`$ (the number of gauge copies used in ) $`\sigma _{Z(2)}`$ is close to $`\sigma _{SU(2)}`$. But it becomes significantly lower for $`N_{cop}\to \infty `$. Thus the RO procedure results strongly depend on $`N_{cop}`$. It is important that $`\sigma _{Z(2)}`$ is 20-30% lower than $`\sigma _{SU(2)}`$ for $`N_{cop}\to \infty `$. This implies that even if one restricts oneself to the RO procedure, as suggested in , one cannot conclude that P-vortices indeed locate all center vortices well.
(v) For gauge copies generated by the LRO procedure we confirm the result of ref. that $`\chi (I)`$ is zero within statistical errors for any value of $`N_{cop}`$.
(vi) In Table 1 we also show the ratio $`2\rho /\sigma _{SU(2)}a^2`$ ($`\rho `$ is the density of P-vortices). As claimed in ref. , in the case of uncorrelated plaquettes carrying P-vortices, $`2\rho `$ coincides with the dimensionless string tension, $`\sigma _{SU(2)}a^2`$. The results presented in Table 1 show that the density of P-vortices is not proportional to $`\sigma _{SU(2)}a^2`$. We have found that for $`N_{cop}=3`$, $`\rho `$ is in good agreement with asymptotic scaling, as was found in . But for $`N_{cop}\to \infty `$, $`\rho `$ deviates from the two-loop asymptotic scaling formula.
(vii) We also investigate the properties of point-like objects called nexuses. On the 4D lattice we have conserved currents of nexuses, defined after the center projection. We calculate the phase, $`s_l`$, of the $`Z(2)`$ link variable: $`Z_l=\mathrm{exp}(i\pi s_l),s_l=0,1`$. Then we define the plaquette variable $`\sigma _P=\text{d}s\text{mod}2`$ ($`\sigma _P=0,1`$). The nexus current (or center monopole current) is then defined as $`^{\ast }j=\frac{1}{2}\delta ^{\ast }\sigma _P`$. These currents live on the surface of the P-vortex (on the dual 4D lattice), and the P-vortex flux goes through positive and negative nexuses in alternating order. An important characteristic of the cluster of currents is the condensate, $`C`$, defined as the percolation probability. As shown in ref. , the condensate $`C`$ of the nexus currents is the order parameter for the confinement-deconfinement phase transition. We found that $`C`$ is nonzero for the gauge copies obtained via the RO procedure (when the projected Wilson loops have an area law). $`C`$ is zero (in the thermodynamic limit $`L\to \infty `$) for gauge copies obtained using the LRO procedure (when the projected Wilson loops have no area law). It is interesting that for the RO procedure $`C`$ seems to scale as a physical quantity with dimension $`(mass)^4`$. This is illustrated in Fig.2, where we plot the $`\beta `$-dependence of the ratio $`C/(\sigma _{SU(2)}a^2)^2`$. Thus these new objects might be important degrees of freedom for the description of nonperturbative effects.
(viii) It is important to perform the same calculations for the indirect center gauge and for the Laplacian center gauge .
We thank Ph. de Forcrand and T. Kovacs for useful remarks. This study was partially supported by grants RFBR 96-15-96740, RFBR 99-01230a, INTAS 96-370 and Monbushu grant. |
# Asymmetric, arc minute scale structures around NGC 1275
## 1 INTRODUCTION
The Perseus cluster of galaxies (Abell 426) is one of the best studied clusters, due to its proximity ($`z=0.018`$, $`1^{\prime }`$ corresponds to $`\sim `$ 30 kpc for $`H_0=50\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$) and brightness. Detailed X-ray images were obtained with the Einstein IPC (Branduardi-Raymont et al. 1981) and HRI (Fabian et al. 1981) and the ROSAT PSPC (Schwarz et al. 1992, Ettori, Fabian, White 1999) and HRI (Böhringer et al. 1993; see also Heinz et al. 1998). The cluster has a prominent X-ray surface brightness peak at its center along with cool gas, which is usually interpreted as due to the pressure induced flow of gas releasing its thermal energy via radiation. The cooling flow is centered on the active galaxy NGC 1275, containing a strong core-dominated radio source (Per A, 3C 84) surrounded by a lower surface brightness halo (e.g. Pedlar et al. 1990, Sijbring 1993). Analysis of the ROSAT HRI observations of the central arcminute has shown that the X-ray emitting gas is displaced by the bright radio emitting regions (Böhringer et al. 1993), suggesting that the cosmic ray pressure is at least comparable to that of the hot intracluster gas. Many other studies explored correlations of X-ray, radio, optical, and ultraviolet emission (see e.g. McNamara, O'Connell & Sarazin 1996 and references therein). In this contribution, we discuss asymmetric structure in the X-ray surface brightness within $`\sim `$ 5 arcminutes of NGC 1275 and suggest that buoyant bubbles of relativistic plasma may be important in defining the properties of this structure.
## 2 IMAGES
The longest ROSAT HRI pointing towards NGC 1275 was made in August 1994 with a total exposure time of about 52 ksec. The $`8^{\prime }\times 8^{\prime }`$ subsection of the HRI image, smoothed with a $`3^{\prime \prime }`$ Gaussian, is shown in Fig. 1. The image is centered at NGC 1275. Two X-ray minima immediately to the north and south of NGC 1275 coincide (Böhringer et al. 1993) with bright lobes of radio emission at 332 MHz, mapped with the VLA by Pedlar et al. (1990). Another region of reduced brightness ($`1.5^{\prime }`$ to the north-west of NGC 1275) was detected earlier in Einstein IPC and HRI images (Branduardi-Raymont et al. 1981, Fabian et al. 1981). It was suggested that the reduced brightness in this region could be due to a foreground patch of photoabsorbing material or a pressure driven asymmetry in the thermally unstable cooling flow (Fabian et al. 1981). The complex shape of the X-ray surface brightness is much more clearly seen in Fig. 2, which shows the same image adaptively smoothed using the procedure of Vikhlinin, Forman, Jones (1996). The "compressed" isophotes in the figure delineate a complex spiral-like structure. Comparison of Fig. 2 and Fig. 1 shows that the same structure is present in both images, i.e. it is not an artifact of the adaptive smoothing procedure.
In order to estimate the amplitude of the substructure relative to the undisturbed ICM, we divided the original image (Fig. 1) by the azimuthally averaged radial surface brightness profile. The resulting image, convolved with the $`6^{\prime \prime }`$ Gaussian, is shown in Fig. 3. The regions having surface brightness higher than the azimuthally averaged value appear grey in this image and form a long spiral-like structure starting near the cluster center and ending $`5^{\prime }`$ from the center to the south-east. Of course the appearance of the excess emission as a "spiral" strongly depends on the choice of the "undisturbed" ICM model (which in the case of Fig. 3 is a symmetric distribution around NGC 1275). Other models would imply different shapes for the regions having excess emission. In particular, a substantial part of the substructure seen in Fig. 1 and 2 can be accounted for by a model consisting of a sequence of ellipses with varying centers and position angles (e.g. using the IRAF procedure ellipse due to Jedrzejewski, 1987). Nevertheless, the image shown in Fig. 3 provides a convenient characterization of the deviations of the X-ray surface brightness relative to the azimuthally averaged value. Comparison of Fig. 3 with Figs. 1 and 2 allows one to trace all features visible in Fig. 3 back to the original image.
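The division by the azimuthally averaged profile amounts to the following operation; this is a minimal sketch with an assumed radial bin width of one pixel.

```python
import numpy as np

def divide_by_radial_profile(img, x0, y0):
    """Divide an image by its azimuthally averaged radial profile about
    (x0, y0); values above 1 mark excess over the symmetric model."""
    y, x = np.indices(img.shape)
    rbin = np.hypot(x - x0, y - y0).astype(int)   # one-pixel radial bins
    counts = np.bincount(rbin.ravel())
    sums = np.bincount(rbin.ravel(), weights=img.ravel())
    profile = sums / np.maximum(counts, 1)        # azimuthally averaged profile
    return img / profile[rbin]
```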
Superposed onto the image shown in Fig. 3 are the contours of the radio image of 3C 84 at 1380 MHz (Pedlar et al. 1990). The radio image was obtained through the DRAGN atlas (http://www.jb.man.ac.uk/atlas, edited by J. P. Leahy, A. H. Bridle, and R. G. Strom). In this image, having a resolution of $`22\times 22`$ arcsec<sup>2</sup>, the central region is not resolved (unlike the higher resolution image of the central area used in Böhringer et al. 1993) and it does not show features corresponding to the gas voids north and south of the nucleus. The compact feature to the west of NGC 1275, visible both in X-rays and radio, is the radio galaxy NGC 1272. Fig. 3 hints at possible relations between some prominent features in the radio and X-rays. In particular, the X-ray underluminous region to the north-west of NGC 1275 (Branduardi-Raymont et al. 1981, Fabian et al. 1981) seems to coincide with a "blob" in radio. A somewhat better correlation is seen if we compare our image with the radio map of Sijbring (1993), with its better angular resolution, but again the correlation is not one to one (note that some random correlation is expected, since both the X-ray and radio emission are asymmetric and centrally concentrated around NGC 1275). A similar partial correlation of X-ray and radio images was also found for another well studied object, M87 (Böhringer et al. 1995). For M87, a relatively compact radio halo surrounds the source and some morphological similarities of the X-ray and radio images are observed. Gull and Northover (1973) suggested that buoyancy plays an important role in the evolution of the radio lobes. Böhringer et al. (1993, 1995) pointed out that buoyant bubbles of cosmic rays may affect the X-ray surface brightness distribution in NGC 1275 and M87. Below we speculate on the hypothesis that this mechanism is operating in both sources and that the disturbance of the X-ray surface brightness is related, at least partly, to the activity of the AGN in the past.
## 3 EVOLUTION OF THE OLD RADIO LOBES
The complex substructure of the X-ray emission in the Perseus cluster is seen at various spatial scales. At large scales (larger than $`10^{\prime }`$–$`20^{\prime }`$), excess emission to the east of NGC 1275 was observed in the HEAO-2 IPC and ROSAT images (Branduardi-Raymont et al. 1981, Fabian et al. 1981, Schwarz et al. 1992, Ettori, Fabian & White 1999). Schwarz et al. (1992), using ROSAT PSPC data, found that the temperature is lower in this region and suggested that there is a subcluster projected on the A426 cluster and merging with the main cluster. At much smaller scales ($`1^{\prime }`$), there are two X-ray minima (symmetrically located to the north and south of NGC 1275) which Böhringer et al. (1993) explained as due to the displacement of the X-ray emitting gas by the high pressure of the radio emitting plasma associated with the radio lobes around NGC 1275. As is clear from Figs. 1 and 2, substructure is also present at intermediate scales (arcminutes). We concentrate below on the possibility that at these spatial scales the disturbed X-ray surface brightness distribution is affected by bubbles of radio emitting plasma, created by the jets in the past and moving away from the center due to buoyancy.
Recently Heinz, Reynolds & Begelman (1998) argued that the time-averaged power of the jets in NGC 1275 exceeds $`10^{45}\mathrm{erg\; s}^{-1}`$. This conclusion is based on the observed properties (in particular the sharp boundaries) of the X-ray cavities in the central $`1^{\prime }`$, presumably inflated by the relativistic particles of the jet. Such a high power input is comparable to the total X-ray luminosity of the central $`6^{\prime }`$ region (i.e. $`\sim `$200 kpc) around NGC 1275. If the same power is sustained for a long time (e.g. the cooling time of the gas, $`\sim 10^{10}`$ years at a radius of 200 kpc) then the entire cooling flow region could be affected. Following Gull and Northover (1973) we assume that buoyancy (i.e. the Rayleigh-Taylor instability) limits the growth of the cavities inflated by the jets. After the velocity of rise due to buoyancy exceeds the expansion velocity, the bubble detaches from the jet and begins rising. As we estimate below, for a jet power of $`10^{45}\mathrm{erg\; s}^{-1}`$ the bubble at the time of separation from the jet should have a size of $`\sim `$10–20 kpc ($`\sim 1^{\prime }`$). The subsequent evolution of the bubble may resemble the evolution of a powerful atmospheric explosion or a large gas bubble rising in a liquid (e.g. Walters and Davison 1963, Onufriev 1967, Zhidov et al. 1977). If the magnetic field does not provide effective surface tension to preserve the quasi-spherical shape of the bubble, then it quickly transforms into a torus and mixes with the ambient cooling flow gas. The torus keeps rising until it reaches the distance from the center where its density (accounting for adiabatic expansion) is equal to the density of the ambient gas. Since the entropy of the ICM rises with distance from the center in the cooling flow region, the torus is unlikely to travel a very large distance from the center. The torus then extends in the lateral direction in order to occupy the layer having a similar mass density. Below we give order of magnitude estimates characterizing the formation and evolution of the bubble.
For simplicity we assume a uniform ICM in the cluster center, characterised by the density $`\rho _0`$ and pressure $`P_0`$. The bubble is assumed to be spherical. During the initial phase (Scheuer 1974, Heinz et al. 1998) jets with a power $`L`$ inflate a cocoon of relativistic plasma surrounded by a shell of compressed ICM. The expansion is supersonic, and from dimensional arguments it follows that the radius of the bubble $`r`$ as a function of time $`t`$ is given by the expression
$`r=C_1\left({\displaystyle \frac{L}{\rho _0}}t^3\right)^{1/5}`$ (1)
where $`C_1`$ is a numerical constant (see e.g. Heinz et al., 1998 for a more detailed treatment). At a later stage, expansion slows and becomes subsonic. The evolution of the bubble radius is then given by the expression
$`r=C_2\left({\displaystyle \frac{L}{P_0}}t\right)^{1/3}=\left({\displaystyle \frac{3}{4\pi }}{\displaystyle \frac{\gamma -1}{\gamma }}\right)^{1/3}\left({\displaystyle \frac{L}{P_0}}t\right)^{1/3}`$ (2)
where $`\gamma `$ is the adiabatic index of the relativistic gas in the bubble (i.e. $`\gamma =4/3`$). The above equation follows from the energy conservation law, if we equate the power of the jet with the change of internal energy plus the work done by the expanding gas at constant pressure: $`\frac{\gamma }{\gamma -1}P_04\pi r^2\dot{r}=L`$. The expansion velocity is then simply the time derivative of equation (1) or (2).
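For completeness, the intermediate integration step (implicit in the text) reads

$`{\displaystyle \frac{\gamma }{\gamma -1}}P_0{\displaystyle \frac{4\pi }{3}}r^3=Lt\Rightarrow r=\left({\displaystyle \frac{3}{4\pi }}{\displaystyle \frac{\gamma -1}{\gamma }}{\displaystyle \frac{L}{P_0}}t\right)^{1/3},`$

which is equation (2) with the constant $`C_2`$ written out explicitly.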
The velocity at which the bubble rises due to buoyancy can be estimated as
$`v_b=C_3\sqrt{{\displaystyle \frac{\rho _0-\rho _r}{\rho _0+\rho _r}}rg}=C_3\sqrt{{\displaystyle \frac{r}{R}}}\sqrt{{\displaystyle \frac{GM}{R}}}=C_3\sqrt{{\displaystyle \frac{r}{R}}}v_K`$ (3)
where $`C_3`$ is a numerical constant of order unity, $`\rho _r`$ is the mass density of the relativistic gas in the bubble, $`g`$ is the gravitational acceleration, $`R`$ is the distance of the bubble from the cluster center, $`M`$ is the gravitating mass within this radius and $`v_K`$ is the Keplerian velocity at this radius. In equation (3) we assumed that $`\rho _r\ll \rho _0`$ and therefore replaced the factor $`\frac{\rho _0-\rho _r}{\rho _0+\rho _r}`$ (the Atwood number) with unity. The presently observed configuration of the bubbles on either side of NGC 1275 suggests that $`r\sim R`$. Assuming that a similar relation is approximately satisfied during the subsequent expansion phase of the bubble, we can further drop the factor $`\sqrt{\frac{r}{R}}`$ in equation (3). Thus as a crude estimate we can assume that $`v_b\sim C_3v_K`$ ($`C_3\approx 0.5`$ is a commonly accepted value for incompressible fluids). Following Ettori, Fabian & White (1999) we estimate the Keplerian velocity taking the gravitating mass profile as a sum of the Navarro, Frenk & White (1995) profile for the cluster and a de Vaucouleurs (1948) profile for the galaxy. For the range of parameters considered in Ettori, Fabian & White (1999), the Keplerian velocity between a few kpc and $`\sim `$100 kpc falls in the range 600–900 km/s. We can now equate the expansion velocity (using equation (2) for subsonic expansion) and the velocity due to buoyancy in order to estimate the parameters of the bubble when it starts rising:
$`t_b=\left({\displaystyle \frac{1}{36\pi }}{\displaystyle \frac{\gamma -1}{\gamma }}\right)^{\frac{1}{2}}\left({\displaystyle \frac{L}{P_0}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{1}{C_3v_K}}\right)^{\frac{3}{2}}`$
$`\approx 1.6\times 10^7\left({\displaystyle \frac{L}{10^{45}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{P_0}{2\times 10^{-10}}}\right)^{-\frac{1}{2}}\left({\displaystyle \frac{v_K}{700}}\right)^{-\frac{3}{2}}\mathrm{years}`$ (4)
$`r_b=\left({\displaystyle \frac{L}{P_0C_3v_K}}{\displaystyle \frac{\gamma -1}{\gamma }}{\displaystyle \frac{1}{4\pi }}\right)^{\frac{1}{2}}`$
$`\approx 17\left({\displaystyle \frac{L}{10^{45}}}\right)^{\frac{1}{2}}\left({\displaystyle \frac{P_0}{2\times 10^{-10}}}\right)^{-\frac{1}{2}}\left({\displaystyle \frac{v_K}{700}}\right)^{-\frac{1}{2}}\mathrm{kpc}`$ (5)
Here $`t_b`$ and $`r_b`$ are the duration of the expansion phase and the radius of the bubble, respectively. In the above equations we neglected the contribution to the radius (and time) of the initial supersonic expansion phase. Thus for $`L\sim 10^{45}\mathrm{erg\; s}^{-1}`$ and for $`P_0=2\times 10^{-10}\mathrm{erg\; cm}^{-3}`$ (Böhringer et al. 1993) we expect $`r_b\sim 17`$ kpc, which approximately corresponds to the size of the X-ray cavities reported by Böhringer et al. (1993). If, as suggested by Heinz et al. (1998), the jet power is larger than $`10^{46}\mathrm{erg\; s}^{-1}`$, then the bubble size will exceed 50 kpc ($`>1^{\prime }`$) before the buoyancy velocity exceeds the expansion velocity. Of course these estimates of the expanding bubble are based on many simplifying assumptions (e.g. the constant pressure assumption in equation (2)). In a subsequent publication we consider the expansion of the bubble in more realistic density and temperature profiles expected in cluster cooling flows.
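These order-of-magnitude numbers are easy to verify; the short sketch below (our own code, using the fiducial values quoted in the text and $`C_3=0.5`$ as assumed above) reproduces equations (4) and (5):

```python
import numpy as np

kpc, yr = 3.086e21, 3.156e7            # cm, s
L, P0 = 1e45, 2e-10                    # jet power [erg/s], pressure [erg/cm^3]
vK, C3, gam = 700e5, 0.5, 4.0 / 3.0    # Keplerian velocity [cm/s], constants

r_b = np.sqrt(L / (P0 * C3 * vK) * (gam - 1) / gam / (4 * np.pi))
t_b = np.sqrt((gam - 1) / gam / (36 * np.pi) * L / P0) * (C3 * vK) ** -1.5

print(f"r_b ~ {r_b / kpc:.0f} kpc")    # -> ~17 kpc
print(f"t_b ~ {t_b / yr:.1e} yr")      # -> ~1.6e7 yr (= r_b / (3 C3 vK))
```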
According to e.g. Walters and Davison (1963), Onufriev (1967) and Zhidov et al. (1977), a large bubble of light gas rising through much heavier gas under a buoyancy force will quickly transform into a rotating torus, which consists of a mixture of smaller bubbles of heavier and lighter gases. This transformation occurs on the time scale of the Rayleigh-Taylor instability (i.e. $`t\sim r_b/v_b\sim t_b`$), and during this transformation the whole bubble changes its distance from the center by an amount $`\sim r_b`$. The torus then rises until its average mass density is equal to the mass density of the ambient gas. The rise is accompanied by adiabatic expansion and further mixing with the ambient gas. Accounting for adiabatic expansion, the mass density of the torus $`\rho _t(R)`$ will change during the rise according to
$`\rho _t(R)=\rho _0{\displaystyle \frac{\varphi }{(1-\varphi )\left(\frac{P_0}{P(R)}\right)^{1/\gamma _{cr}}+\varphi \left(\frac{P_0}{P(R)}\right)^{1/\gamma _{th}}}}`$ (6)
where $`P(R)`$ is the ICM pressure at a given distance from the center, $`\varphi `$ is the volume fraction of the ambient ICM gas mixed with the relativistic plasma at the stage of torus formation, and $`\gamma _{cr}`$ and $`\gamma _{th}`$ are the adiabatic indices of the relativistic plasma and the ICM. Note that in equation (6) we (i) neglected further mixing with the ICM during the rise of the torus and (ii) assumed the mixing to be macroscopic (i.e. separate bubbles of the relativistic plasma and ICM occupy the volume of the torus). The equilibrium position of the torus can be found if we equate the torus density $`\rho _t(R)`$ and the ICM density $`\rho (R)`$ and solve this equation for $`R`$. We consider two possibilities here. One possibility is to assume that in the inhomogeneous cooling flow the hot phase is almost isothermal and gives the dominant contribution to the density of the gas. Adopting a temperature of $`kT=6`$ keV for the hot phase and using the same gravitational potential as above, one can conclude that if roughly equal amounts (by volume) of the relativistic plasma and the ambient gas are mixed (i.e. $`\varphi \sim 0.5`$) during the formation of the torus, then it could rise 100–200 kpc before reaching an equilibrium position. Accounting for additional mixing will lower this estimate. Alternatively we can adopt the model of a uniform ICM with the temperature declining towards the center (e.g. decreasing from 6 keV at 200 kpc to 2 keV at 10 kpc). Then for the same value of mixing ($`\varphi \sim 0.5`$) the equilibrium position will be at a distance of $`\sim `$60 kpc from NGC 1275. Once at this distance, the torus as a whole will be in equilibrium and it will further expand laterally in order to occupy the equipotential surface at which the density of the ambient gas is equal to the torus density. If the cosmic rays and thermal gas within the torus are uniformly mixed (or a magnetic field binds the blobs of thermal plasma and cosmic rays), the torus will not move radially. If, on the contrary, separate (and unbound) blobs of relativistic plasma exist, then they will still be buoyant, but since their size is now much smaller than the distance from the cluster center the velocity of their rise will be much smaller than the Keplerian velocity. Analogously, overdense blobs (with uplifted gas) may then (slowly) fall back to the center.
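Equation (6) is straightforward to evaluate; the sketch below (our own illustration, with $`\gamma _{cr}=4/3`$ and $`\gamma _{th}=5/3`$) shows how the mean density of a macroscopically mixed torus drops as it rises into lower pressure layers:

```python
def rho_torus(P_ratio, phi=0.5, g_cr=4.0 / 3.0, g_th=5.0 / 3.0):
    """rho_t / rho_0 from eq. (6); P_ratio = P(R) / P0, phi = ICM fraction."""
    x = 1.0 / P_ratio                  # P0 / P(R), >= 1 for a rising torus
    return phi / ((1 - phi) * x ** (1 / g_cr) + phi * x ** (1 / g_th))

print(rho_torus(1.0))                  # 0.5: half the volume is ~massless
print(rho_torus(0.5))                  # ~0.31: further dilution on the rise
```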
We now consider how the radio and X-ray emission from the torus evolve with time. The duration of the rise phase of the torus will be at least several times longer than the time of the bubble formation (see eq. (3)), since the velocity of rise is a fraction of the Keplerian velocity (see e.g. Zhidov et al. 1977), i.e. $`t_{rise}\sim 10^8`$ years. Adiabatic expansion and the change of the transverse size of the torus in the spherical potential tend to further increase this estimate. Even if we neglect energy losses of the relativistic electrons due to adiabatic expansion, we can estimate an upper limit on the electron lifetime due to synchrotron and inverse Compton (IC) losses:
$`t=5\times 10^8\left({\displaystyle \frac{\lambda }{20\mathrm{cm}}}\right)^{1/2}\left({\displaystyle \frac{B}{\mu \mathrm{G}}}\right)^{1/2}\left({\displaystyle \frac{B_t}{\mu \mathrm{G}}}\right)^{-2}\mathrm{years}`$ (7)
where $`\lambda `$ is the wavelength of the observed radio emission, $`B`$ is the strength of the magnetic field, and $`\frac{B_t^2}{8\pi }`$ characterizes the total energy density of the magnetic field and the cosmic microwave background. This lifetime (of the electrons emitting at a given frequency) will be longest if the energy density of the magnetic field approximately matches the energy density of the microwave background, i.e. $`B\sim 3.5\mu `$G. Then the maximum lifetime of the electrons producing synchrotron radiation at 20 cm is $`\sim 5\times 10^7`$ years. This time is comparable to the time needed for the torus to reach its final position. Therefore, the torus could be either radio bright or radio dim during its evolution. If no reacceleration takes place, then the torus will end up as a radio dim region. We note here that, although the electrons may lose their energy via synchrotron and IC emission, the magnetic field and especially the relativistic ions have a much longer lifetime (e.g. Soker & Sarazin 1990, Tribble 1993) and will provide pressure support at all stages of the torus evolution.
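A quick numerical check of equation (7) (our own sketch; we take $`B_t^2=B^2+B_{CMB}^2`$ with $`B_{CMB}\approx 3.27\mu `$G, the CMB-equivalent field at $`z\approx 0`$, which is an assumption on our part):

```python
def t_sync(B, lam=20.0, B_cmb=3.27):
    """Electron lifetime from eq. (7) in years; B in muG, lam in cm."""
    return 5e8 * (lam / 20.0) ** 0.5 * B ** 0.5 / (B ** 2 + B_cmb ** 2)

print(f"{t_sync(3.5):.1e} yr")         # -> ~4e7 yr, i.e. a few 10^7 years
```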
As we assumed above, the bubble detaches from the jet when the expansion velocity of the bubble is already subsonic. This means that there will be no strongly compressed shell surrounding the bubble, and the emission measure along the line of sight going through the center of the bubble will be smaller than that for the undisturbed ICM; i.e., at the moment of detachment the bubble appears as an X-ray dim region. The X-ray brightness of the torus during the final stages of evolution (when the torus has the same mass density as the ambient ICM) depends on how the relativistic plasma is mixed with the ambient gas (Böhringer et al. 1995). If mixing is microscopic (i.e. relativistic and thermal particles are uniformly mixed over the torus volume on spatial scales comparable with the mean free path), then the emission measure of the torus is the same as that for a similar region of the undisturbed ICM. Since part of the pressure support in the torus is provided by the magnetic field and cosmic rays, the temperature of the torus gas must be lower than the temperature of the ambient gas (Böhringer et al. 1995). Thus emission from the torus will be softer than the emission from the ambient gas.
If, on the contrary, mixing is macroscopic (i.e. separate bubbles of relativistic and thermal plasma occupy the volume of the torus), then the torus will appear as an X-ray bright region (the average density is the same as that of the ICM, but only a fraction of the torus volume is occupied by the thermal plasma). For example, if half of the torus volume is occupied by bubbles of relativistic plasma, then the emissivity of the torus will be a factor of 2 larger than that of the ambient gas. The X-ray emission of the torus is again expected to be softer than the emission of the ambient gas, for two reasons: (i) gas uplifted from the central region has lower entropy than the ambient gas and therefore will have a lower temperature while maintaining pressure equilibrium with the ambient gas; (ii) gas uplifted from the central region can be multiphase, with stronger density contrasts between phases than the ambient gas, and as a result a dense, cooler phase would give a strong contribution to the soft emission. Cosmic rays may heat the gas, but at least for the relativistic ions the time scale for energy transfer is very long (comparable to the Hubble time). Trailing the torus could be filaments of cooling flow gas dragged by the rising torus, much as the rising (and rotating) torus after an atmospheric explosion drags the air in the form of a skirt.
We note here that the morphology predicted by such a picture is very similar to the morphology of the “ear-like” feature in the radio map of M87, reported by Böhringer et al. (1995). The “ear” could be a torus viewed from the side. The excess X-ray emission trailing the radio feature could then be due to the cooling flow gas uplifted by the torus from the central region. For Perseus, the X-ray underluminous region to the north-west of NGC 1275 could have the same origin (i.e. a rising torus). In fact, the whole “spiral” structure seen in Fig. 2 could be the remains of one very large bubble (e.g. with an initial size of the order of arcminutes, corresponding to a total jet power of $`10^{46}\mathrm{erg\; s}^{-1}`$) inflated by the nucleus over a period of $`10^8`$ years. Alternatively, multiple smaller bubbles produced at different epochs may contribute to the formation of the X-ray feature. If the jets maintain their direction over a long time, then a quasi-continuous flow of bubbles will tend to mix the ICM in these directions, uplifting the gas from the central region to larger distances. If the jet direction varies (e.g. precession of the jet on a time scale of $`10^8`$ years), then a complex pattern of disturbed X-ray and radio features may develop.
## 4 ALTERNATIVE SCENARIOS
Of course there are other possible explanations for the disturbed X-ray surface brightness. We briefly discuss a few alternative scenarios below.
Assuming that the undisturbed ICM is symmetric around NGC 1275 (as was assumed in Fig. 3), one may try to attribute the observed spiral-shaped emission to gas stripped from an infalling galaxy or group of galaxies. Stripped gas (if denser and cooler than the ICM) will be decelerated by ram pressure and will fall toward the center of the potential, producing a spiral-like structure. Rather narrow and long features tentatively associated with stripped gas were observed e.g. for the NGC 4921 group in Coma (Vikhlinin et al. 1996) and NGC 4696B in the Centaurus cluster (Churazov et al. 1999). We note here that to prevent stripping at much larger radii, the gas must be very dense (e.g. comparable to the molecular content of a spiral galaxy). A crude estimate of the gas mass needed to produce the observed excess emission (assuming a uniform cylindrical feature with a length of 200 kpc and radius of 15 kpc, located 60 kpc away from NGC 1275) gives values of the order of a few $`\times 10^{10}`$–$`10^{11}M_{\odot }`$. Here we adopted a density for the undisturbed ICM of $`10^{-2}\mathrm{cm}^{-3}`$ at this distance from NGC 1275, following the deprojection analyses of Fabian et al. (1981) and Ettori, Fabian & White (1999). The factor of two higher density within the feature would cause a $`\sim `$20–40% excess in the surface brightness. In the above estimate for the mass of hot gas in the filament, it is assumed that this medium is approximately homogeneous and in ionization equilibrium. If the medium is very clumpy, the radiative emission of the plasma would be enhanced and this would result in an overestimate of the relevant gas mass. Such clumps should be easily seen with the high angular resolution of Chandra. Also, if the medium consists of turbulently mixed hot and cold plasma, the very efficient excitation of lines in cold ions by hot electrons could lead to enhanced radiation (see e.g. Böhringer and Fabian 1989, Table 4), which may lead to an overestimate of the gas mass by up to an order of magnitude. The signature of this effect is a strongly line-dominated spectrum (see e.g. Böhringer and Hartquist 1987), which could be tested by Chandra or XMM, in particular for the important iron L-shell lines. Thus it is possible that the inferred gas mass could be lower by up to an order of magnitude, which makes the stripping scenario more likely; future observations with the new X-ray observatories can help to differentiate between these interpretations.
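The quoted mass range follows from a few lines of arithmetic (our own sketch; the mean mass of $`\mu \approx 1.4m_p`$ per hydrogen nucleus is our assumption):

```python
import numpy as np

kpc, m_p, Msun = 3.086e21, 1.67e-24, 1.99e33     # cgs units
V = np.pi * (15 * kpc) ** 2 * (200 * kpc)        # cylinder volume [cm^3]
dn = 1e-2                                        # excess density [cm^-3]
M = dn * 1.4 * m_p * V / Msun                    # mu ~ 1.4 m_p per H (assumed)
print(f"M ~ {M:.1e} Msun")                       # -> ~5e10 Msun
```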
As was suggested by Fabian et al. (1981), a large scale pressure-driven asymmetry may be expected in a thermally unstable cooling flow. This is perhaps the most natural explanation which does not invoke any additional physics. The same authors gave an estimate of the amount of neutral gas needed to explain the NW dip by photoabsorption: an excess hydrogen column density of around $`10^{22}\mathrm{cm}^{-2}`$ is required to suppress the soft count rate in this region.
Yet another possibility is that the motion of NGC 1275 with respect to the ICM causes the observed substructure. As pointed out in Böhringer et al. (1993), NGC 1275 is perhaps oscillating at the bottom of the cluster potential well, causing the excess emission $`\sim 1^{\prime }`$ to the east of the nucleus. Since the X-ray surface brightness peak is well centered on NGC 1275, it is clear that the galaxy drags the central part of the cooling flow as it moves in the cluster core. At distances larger than 2–3 arcminutes from NGC 1275, the cluster potential dominates over the potential of the galaxy. The gas at these distances should be very sensitive to the ram pressure of the ambient cluster gas and might give rise to asymmetric (and time-dependent) features.
The motion of NGC 1275 could also contribute to the X-ray structure through the formation of a “cooling wake” (David et al. 1994). If NGC 1275 is moving significantly, then inhomogeneities in the cooling gas would be gravitationally focussed and compressed into a wake. The wake would mark the, possibly complex, motion of NGC 1275 as it is perturbed by galaxies passing through the cluster core. Such a feature would be cool, since it arises from overdense concentrations of gas.
Finally, one can assume that the cooling gas may have some angular momentum (e.g. produced by mergers) and that the observed spiral structure simply reflects slow rotation of the gas combined with non-uniform cooling. Following Sarazin et al. (1995), one can assume that this gas will preserve the direction of its angular momentum and that this infalling material would eventually feed an AGN, in this case NGC 1275. One might then expect the radio jets to be aligned perpendicular to the rotation plane of the gas. At first glance, the “spiral” feature appears approximately face-on, suggesting that the jets should be directed along the line of sight, as indeed is derived from the radio observations (see Pedlar et al. 1990).
## 5 CONCLUSIONS
The X-ray surface brightness around NGC 1275 (the dominant galaxy of the Perseus cluster) is perturbed at various spatial scales. We suggest that on arcminute scales the disturbance is caused by bubbles of relativistic plasma inflated by the jets during the past $`\sim 10^8`$ years. The overall evolution of a buoyant bubble will resemble the evolution of a hot bubble during a powerful atmospheric explosion. Colder gas from the central region of the cooling flow may be uplifted by the rising bubbles and (in the case of continuous jet activity) may make several cycles (from the center to the outer regions and back) on time scales comparable to the cooling time of the gas in the cooling flow.
A very important result that can be inferred from this model is the total power output of the nuclear energy source in NGC 1275 in the form of relativistic plasma. This energy release, averaged over a time scale of about $`3\times 10^7`$ to $`10^8`$ years, is estimated as a function of the inflation time of the central radio lobes, the rise time of the inflated bubbles due to buoyancy forces, and the actual size of the central bubbles. A geometrically simplified model yields a power output of the order of $`10^{45}`$ erg s<sup>-1</sup>. This is comparable to the energy lost over the same time by thermal X-ray radiation from the entire central cooling flow region. This raises the question of where all this energy goes, especially if the energy release persists over a longer epoch, during which the relativistic electrons can lose their energy by radiation while the energy in protons and in the magnetic field is mostly conserved. The complicated X-ray morphology discussed in this paper may indicate long lasting nuclear activity, if we interpret the peculiar structure in the X-ray surface brightness as remnants of decaying radio lobe bubbles.
Detailed measurements of the morphology of the X-ray structure and of the temperature and abundance distributions with Chandra and XMM may test this hypothesis. The gas uplifted from the central region is expected to be cooler than the ambient gas and to have an abundance of heavy elements typical of the innermost region. If cosmic rays are mixed with the thermal gas, then the pressure, as derived from X-ray observations, may be lower than the pressure of the ambient gas.
###### Acknowledgements.
We thank the referees for several helpful comments and suggestions. We are grateful to Nail Inogamov and Nail Sibgatullin for useful discussions. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. W. Forman and C. Jones acknowledge support from NASA contract NAS8-39073. |
The normal state Fermi surface of pristine and Pb-doped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> from ARPES measurements and its photon energy independence.
## Abstract
We address the question as to whether the topology of the normal state Fermi surface of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>, as seen in angle resolved photoemission, depends on the photon energy used to measure it. High resolution photoemission spectra and Fermi surface maps from pristine and Pb-doped Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> are presented, recorded using both polarised and unpolarised radiation of differing energies. The data show clearly that no main band crosses the Fermi level along the $`\mathrm{\Gamma }`$$`\overline{M}`$Z direction in reciprocal space, even for a photon energy of 32 eV, thus ruling out the existence of a $`\mathrm{\Gamma }`$-centred, electron-like Fermi surface in this archetypal high T<sub>C</sub> superconductor. The true topology of the normal state Fermi surface remains that of hole-like barrels centred at the X,Y points of the Brillouin zone.
There is currently an ongoing and lively discussion regarding the true topology and character of the normal state Fermi surfaces (FS) of the high temperature superconductors (HTSC) in general, and of Bi2212 in particular. The “traditional” picture seen in angle-resolved photoemission spectroscopy (ARPES) is that of three different features with different origins: the main FS centred around the X(Y) points of the Brillouin zone (BZ), as predicted by band structure calculations; the so-called shadow FS due to antiferromagnetic spin correlations; and extrinsic features (diffraction replicas (DRs)) which result from a diffraction of the outgoing photoelectrons as they pass through the structurally modulated Bi-O layers.
Recently, ARPES data recorded with photon energies of 32–33 eV seemed to show a different picture and have been interpreted in terms of an electron-like FS centred around the $`\mathrm{\Gamma }`$ point. It has even been suggested that the ARPES-derived FS depends on the photon energy used in the experiment. This, of course, would constitute a revolution in our thinking about the normal state FS of the HTSC, and thus it is of utmost importance that this question be addressed quickly and clearly by a number of independent groups. In this contribution, we present ARPES investigations of Bi2212, with the aim of clearing up the controversy regarding the apparent photon energy dependence of the normal state FS topology as seen by photoemission.
We present a combination of energy distribution curves (EDCs) measured using polarized synchrotron radiation with angle-scanned photoemission data using unpolarised radiation at various photon energies and demonstrate that, as physical intuition dictates, the main FS of the Bi2212 materials is independent of the photon energy used to measure it in an ARPES experiment.
The synchrotron-based data were recorded using the U2-FSGM beamline at the BESSY I facility, with a sample temperature of 100 K, an overall energy resolution of 70 meV and an angular resolution of $`\pm 1^{\circ }`$, which gives $`\mathrm{\Delta }`$k $`\approx `$ 0.094 Å<sup>-1</sup> (i.e. 8.1% of $`\mathrm{\Gamma }`$X) in the case of 32 eV radiation. In all cases the crystals were aligned such that the high symmetry direction being scanned was parallel to the electric field vector of the incoming synchrotron radiation. For the $`\mathrm{\Gamma }`$Y scans the analyser was then swung downwards out of the plane spanned by the surface normal and the E-vector, whilst for the $`\mathrm{\Gamma }`$$`\overline{M}`$Z scans the energy analyser remained in the aforementioned plane. The angle-scanned ARPES experiments were performed at 300 K or 120 K using monochromated, unpolarised He I and He II radiation and a SCIENTA SES200 analyser enabling simultaneous analysis of both the E and k-distribution of the photoelectrons. The overall energy resolution was set to 30 meV and the angular resolution to $`\pm 0.38^{\circ }`$, which gives $`\mathrm{\Delta }`$k $`\approx `$ 0.028 Å<sup>-1</sup> (i.e. 2.4% of $`\mathrm{\Gamma }`$X) in the case of He I radiation. High quality single crystals of pristine and Pb-doped Bi2212, the latter grown from the flux in the standard manner, were cleaved in-situ to give mirror-like surfaces.
Returning to the current ARPES controversy: certain points are universally accepted. Firstly, there is a consensus that the “traditional” FS picture is correct for ARPES data recorded with low photon energies (h$`\nu \lesssim `$ 22 eV). Secondly, with respect to the high symmetry directions in k-space, the main FS crossing along the $`\mathrm{\Gamma }`$X direction is also generally accepted to be valid for all photon energies used to date. Thus, it is in fact the exact situation around the $`\overline{M}`$ point which is central to the debate, as it is in this region of k-space where the “closing” of the main FS arcs to give a $`\mathrm{\Gamma }`$-centred (electron-like) FS has been proposed.
Consequently, in order to investigate the validity of the “new” FS topology in detail, as well as to address the question as to whether the final states (17–20 eV above E<sub>F</sub>) accessed with lower photon energies are sufficiently high to guarantee their free-electron-like character, we have measured EDCs of Bi2212 using synchrotron radiation of different energies along the $`\mathrm{\Gamma }`$$`\overline{M}`$Z line in k-space. The data are shown in Fig. 1. For h$`\nu `$=32 eV, the $`\mathrm{\Gamma }`$$`\overline{M}`$ data are very similar to those reported in Ref. , having been recorded in the same experimental geometry. In particular, the reduction of spectral weight around $`\overline{M}`$ for h$`\nu `$=32 eV, and to a lesser extent for 40 eV photons, could indeed be seen as a sign of a FS crossing, followed by the re-appearance of the band between $`\overline{M}`$ and Z. However, a reduction of the spectral weight of the states related to the extended saddle-point singularity around $`\overline{M}`$ for h$`\nu `$ around 30 eV has been predicted to be due to matrix element effects alone in a recent theoretical treatment. Furthermore, for h$`\nu `$=50 eV, for which no-one would doubt the validity of a free-electron-like final state, the situation resembles that at lower photon energies, and thus we see no indication of a $`\mathrm{\Gamma }`$$`\overline{M}`$ FS crossing.
Since it is the h$`\nu `$=32 and 40 eV data which most significantly deviate from the commonly accepted picture, we devote the rest of the paper to their detailed discussion. The claims for a $`\mathrm{\Gamma }`$$`\overline{M}`$Z main FS crossing have been based not only on the intensity suppression around $`\overline{M}`$ as seen in Fig. 1, but also on an analysis of the k-dependence of both the total photoemission intensity (which is related to the momentum distribution n(k)) and of the magnitude of the ARPES intensity at the Fermi level (I(E<sub>F</sub>)) . Fig. 2a shows data for h$`\nu `$=32eV for both the $`\mathrm{\Gamma }`$Y (panel 1) and $`\mathrm{\Gamma }`$$`\overline{M}`$ (panel 2) directions. We start first with the uncontroversial $`\mathrm{\Gamma }`$Y direction. The grey-scale image and I<sub>int</sub> / I(E<sub>F</sub>) analysis shown in the panels marked 1 contain the โsignatureโ of a main FS crossing - with a sharp peak in I(E<sub>F</sub>) coinciding with a steep drop in I<sub>int</sub>. However, the analogous data for the $`\mathrm{\Gamma }`$$`\overline{M}`$ direction (Fig. 2a - panels marked 2) show a different behaviour: both the drop in the total ARPES intensity as well as the peak in I(E<sub>F</sub>) are considerably broader than their counterparts along $`\mathrm{\Gamma }`$Y. In particular, the I(E<sub>F</sub>) peak is more than a factor of three broader than was the case for the $`\mathrm{\Gamma }`$Y main FS crossing. The question then arises as to whether this I<sub>int</sub> / I(E<sub>F</sub>) characteristic for $`\mathrm{\Gamma }`$$`\overline{M}`$ (h$`\nu `$=32eV) is compatible with a main FS crossing. We believe that it is not, and will lay out our arguments for this in the following.
Firstly, assuming for the sake of argument the validity of the $`\mathrm{\Gamma }`$-centred FS, the data for $`\mathrm{\Gamma }`$Y and $`\mathrm{\Gamma }`$$`\overline{M}`$ both represent scans crossing the FS at right angles (see the sketch at the top of Fig. 2a). Why, then, should the I<sub>int</sub> and I(E<sub>F</sub>) analyses for the two directions be so different?
One argument that immediately springs to mind is based upon the fact that the photoemission features along the two directions (directly seen as white features in the I(E,k) images of Fig. 2a) have different dispersion relations, thus possibly leading to the anomalous width in both I<sub>int</sub> and I(E<sub>F</sub>) for $`\mathrm{\Gamma }`$$`\overline{M}`$. A stringent test of this argument would be to compare the $`\mathrm{\Gamma }`$$`\overline{M}`$ data with the I<sub>int</sub> / I(E<sub>F</sub>) characteristics of a band which not only crosses the main FS at right angles, but also displays the same dispersion relation as that observed along $`\mathrm{\Gamma }`$$`\overline{M}`$ for k $`\lesssim `$ k<sub>F</sub>. Ideally speaking, this test should also be carried out for h$`\nu `$=32 eV, but in practice this is hampered by severe difficulties in the location of a true right-angular FS crossing, which could not, of course, be along a high symmetry direction. This last point means that additional complications in the quantification of I<sub>int</sub> and I(E<sub>F</sub>) would also result from the strong matrix-element effects implicit in the use of polarised synchrotron radiation (e.g. h$`\nu `$=32 eV). Furthermore, the DR features which “decorate” the ARPES data of pure Bi2212 make it harder still to find a suitable main FS crossing to use as a test system.
Therefore, in order to determine the I<sub>int</sub> / I(E<sub>F</sub>) signature of a main (right-angular) FS crossing with dispersion equal to that seen along $`\mathrm{\Gamma }`$$`\overline{M}`$ for h$`\nu `$=32 eV, we turn to data from Pb-doped Bi2212, measured with unpolarised He I radiation (h$`\nu `$=21.2 eV). This approach has the following advantages: use of unpolarised radiation minimises the differences between datasets recorded with different azimuthal angles, and ARPES data from Pb-doped Bi2212 are simpler to interpret due to the absence of DR features. In Fig. 2b we show the comparison between Pb-doped Bi2212 ARPES data for $`\mathrm{\Gamma }`$Y (panel 3) and for a different direction in k-space (roughly from 0.4($`\mathrm{\Gamma }`$$`\overline{M}`$) towards Y), representing a right-angular crossing of the main FS (panel 4). As is evident from Figs. 2a and 2b, the dispersion relations of the bands in panels 2 and 4 are essentially identical; thus we have found a suitable candidate for our test. This search was only made possible by the use of the full-EDC FS map shown at the top of Fig. 2b. The lower panels of Fig. 2b show without any doubt that the I<sub>int</sub> and I(E<sub>F</sub>) characteristic of a main FS crossing is essentially unaffected by the steepness of the dispersion relation of the band coming up to the FS, as both panels 3 and 4 of Fig. 2b show sharp peaks in I(E<sub>F</sub>) coupled to a steep drop in I<sub>int</sub>. This, then, is in favour of our contention that the h$`\nu `$=32 eV $`\mathrm{\Gamma }`$$`\overline{M}`$ data shown in Figs. 1 and 2a do not signal a main FS crossing in the Bi2212-based materials.
A further argument is based upon an analysis of the binding energy position of the leading edge of the ARPES spectra. In our experience, based upon full-EDC FS maps comprising more than 4000 spectra, the leading edge of the spectra not only approaches E<sub>F</sub> as the band disperses up towards the FS, but also moves rapidly away from E<sub>F</sub> again once the band has crossed the FS (this is a consequence of the well-known incoherent background present in ARPES data of all HTSC). Thus, following the leading edge energy as a function of k, a main FS crossing exhibits a sharp dip centred at k<sub>F</sub>, as is illustrated in Fig. 3. Figs. 3a, 3b and 3c show analyses of the leading edge energy for the h$`\nu `$=32 eV data for $`\mathrm{\Gamma }`$Y (shown in panel 1 of Fig. 2a), and for the He I data shown in Fig. 2b (panel 3) and Fig. 2b (panel 4), respectively. In all cases, the main FS crossing, and thus k<sub>F</sub>, is characterised by a sharp dip or “V” in the leading edge energy. This behaviour is to be compared with that for the $`\mathrm{\Gamma }`$$`\overline{M}`$ direction (Figs. 3d–f), in which, regardless of the photon energy, no sharp, “V”-like structure is seen in the leading edge energy profiles centred around the proposed FS crossings (k<sub>F</sub> = 0.81 and 1.19 ($`\mathrm{\Gamma }`$$`\overline{M}`$)). Thus it is clear that the leading edge data of Fig. 3d (h$`\nu `$=32 eV) should be grouped with the data of Figs. 3e and 3f, which characterise flat-band, saddle-point behaviour, and not with the leading edge datasets describing a main FS crossing (Figs. 3a–3c).
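An operational definition sufficient for such an analysis (our own sketch; a fit to the edge midpoint behaves similarly) is the energy at which each EDC falls to half of its peak height on the E<sub>F</sub> side:

```python
import numpy as np

def leading_edge(I, E):
    """Leading-edge energy per k column of I[E, k]; E ascends towards E_F."""
    out = []
    for edc in I.T:
        above = np.nonzero(edc >= 0.5 * edc.max())[0]
        out.append(E[above[-1]])       # last point still above half maximum
    return np.array(out)

# a sharp 'V'-shaped dip of leading_edge(k) marks k_F at a main FS crossing;
# a broad, flat minimum signals saddle-point (flat band) behaviour instead
```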
Taking the arguments given above, the viewpoint that the observed FS “crossings” along the $`\mathrm{\Gamma }`$$`\overline{M}`$Z line in Bi2212 result, in fact, from the superposition of extrinsic DR features is considerably strengthened. In Ref. we argued that multi-order DRs combine to give a high intensity ribbon, visible in FS mapping data, running along the (0,$`-\pi `$)–($`\pi `$,0) line. The suppression of the spectral weight from the extended saddle-point singularity states, predicted for photon energies around 30 eV, would then lead to a “hollowing-out” of the ribbon, leaving its edges intense enough to appear as a pair of FS crossings on either side of the $`\overline{M}`$ point. In order to test this point, and bearing in mind the efficacy of FS maps recorded using unpolarised radiation and based upon real, uninterpolated EDCs, we have carried out such FS mapping experiments on Pb-doped Bi2212, in which the Pb substitution suppresses the incommensurate Bi-O modulation and thus “switches off” the DR features in the ARPES spectra.
Fig. 4 shows the FS maps, in which I(E<sub>F</sub>) for a 20 meV energy window (T=300 K) is plotted. Data recorded using He I radiation (h$`\nu `$=21.22 eV) are shown in Fig. 4a, whereas Figs. 4b and 4c contain smaller maps measured with He II (h$`\nu `$=40.8 eV) radiation which highlight the areas in k-space indicated by the dotted lines in Fig. 4a. In each case, the main hole-like FS centred at the X and Y points is clearly visible (solid white line). While these conclusions are beyond doubt for the He I data, consideration of Fig. 1 shows that a photon energy of 40 eV is still in the critical range for which an intensity suppression near $`\overline{M}`$ is observed. Thus, we point out that Fig. 4c (h$`\nu `$=40.8 eV) shows no indication of a FS crossing at the points 0.81 and 1.19 ($`\mathrm{\Gamma }`$$`\overline{M}`$) as suggested in Ref. , nor at any point along or near to the $`\mathrm{\Gamma }`$$`\overline{M}`$Z line. Therefore, the FS maps presented in Fig. 4, taken together with the detailed analysis of I<sub>int</sub> / I(E<sub>F</sub>) (Fig. 2) and of the leading edge energies of the ARPES spectra (Fig. 3), offer very strong additional support to the argument that the alleged FS crossings along the $`\mathrm{\Gamma }`$$`\overline{M}`$Z direction in pristine Bi2212 are, in fact, due to DR features. These dominate the ARPES spectra as a result of the matrix element-related suppression of the saddle-point emission near $`\overline{M}`$ for photon energies around 30 eV.
Thus, in summary, we can state that such DR-related “FS crossings” along $`\mathrm{\Gamma }`$$`\overline{M}`$Z in Bi2212 do not have any consequences for the true normal state FS topology of the Bi2212-based HTSC, which remains that of hole-like barrels centred at the X,Y points, independent of the photon energy used in the ARPES experiment.
We are grateful to the BMBF (05 SB8BDA 6), the DFG (Graduiertenkolleg “Struktur- und Korrelationseffekte in Festkörpern” der TU-Dresden) and the SMWK (4-7531.50-040-823-99/6) for financial support, and to U. Jännicke-Rössler and K. Nenkov for characterisation of the crystals. T.P. acknowledges an APART fellowship of the Austrian Academy of Sciences.
Chemical and dynamical evolution in gas-rich dwarf galaxies
## 1. Introduction
Blue compact dwarf galaxies (BCD) are gas-rich systems experiencing intense star formation. These galaxies have very simple structures and small sizes, and are very metal-poor. For these reasons, BCD are excellent laboratories in which to investigate the effect of a starburst on the chemical and dynamical evolution of the interstellar medium (ISM).
Previous dynamical and chemical studies of these galaxies have suggested the existence of a “differential galactic wind”, in the sense that after a starburst event these objects would lose mostly metals (Mac Low & Ferrara 1999; D'Ercole & Brighenti 1999; Pilyugin 1992, 1993; Marconi et al. 1994). However, none of these studies took detailed chemical and dynamical evolution into account at the same time. The aim of this paper is to include the effects (both energetic and chemical) of type II and type Ia SNe in a detailed dynamical model.
## 2. Model description
We consider a rotating gaseous component in hydrostatic isothermal equilibrium with the gravitational and centrifugal forces. The potential well is given by the sum of a spherical, quasi-isothermal dark halo and an oblate King profile. The resulting gas distribution resembles that observed in IZw18 in a region $`R\lesssim `$ 1 kpc and $`z\lesssim `$ 730 pc, which we call the “galactic region”.
To describe the evolution of the ISM we solve a set of time-dependent hydrodynamical equations, with source terms describing the rate of energy and mass return from the starburst. Mass is returned mostly by SNeII and intermediate-mass stars (IMS), while the energy is injected essentially by SNe. For the first time, we also take into account here the contribution of SNeIa. These supernovae start to explode after 29 Myr, i.e. at the end of the SNII activity, which terminates with the explosion of stars of 8 $`\mathrm{M}_{\odot }`$ (see Nomoto, Thielemann & Yokoi 1984).
Following Bradamante et al. (1998), we suppose that SNeII convert only 3% of their explosion energy into thermal energy of the ISM. SNeIa, instead, do not suffer radiative losses because they explode in a medium heated and diluted by the previous SNeII activity and release all their energy into the ISM.
We solve an ancillary set of equations which keeps track of the evolution in space and time of some specific elements lost by stars, namely H, He, C, N, O, Mg, Si, Fe. The yields of these elements are obtained following the nucleosynthetic prescriptions of various authors: Woosley & Weaver (1995) for SNeII, Renzini & Voli (1981) for IMS and Nomoto et al. (1984) for SNeIa. For more details, see Recchi et al. (2000).
The standard model, called M1, has a gaseous mass inside the galactic region of $`1.7\times 10^7\mathrm{M}_{\odot }`$ and a mass of gas turned into stars of $`6\times 10^5\mathrm{M}_{\odot }`$, in reasonable agreement with the observations of IZw18. We run two other models, obtained by reducing the burst luminosity by a factor of 0.6 (model M2) and by reducing the mass of gas by a factor of 0.25 (model M3). Moreover, we consider four nucleosynthetic options: an initial abundance of the ISM of $`Z=0`$ or $`Z=0.01\mathrm{Z}_{\odot }`$, and two possible values of the mixing length parameter, $`\alpha _{\mathrm{RV}}=0`$ and $`\alpha _{\mathrm{RV}}=1.5`$. In the models with $`\alpha _{\mathrm{RV}}=1.5`$ nitrogen can be produced in a primary way in IMS.
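For orientation, the implied energy budget can be sketched as follows (our own illustration: neither the IMF nor the energy per supernova is specified above, so a Salpeter IMF over 0.1–100 $`\mathrm{M}_{\odot }`$ and $`10^{51}`$ erg per SNII are assumed here):

```python
a, b = 1.35, 0.35                      # Salpeter exponents (xi ~ m^-2.35)
n_hi = (8.0 ** -a - 100.0 ** -a) / a   # unnormalised number of m > 8 Msun stars
m_tot = (0.1 ** -b - 100.0 ** -b) / b  # unnormalised total mass formed
N_SNII = 6e5 * n_hi / m_tot            # ~4.5e3 SNe for 6e5 Msun of stars
E_th = N_SNII * 1e51 * 0.03            # 3% thermalisation efficiency (Sect. 2)
print(f"N_SNII ~ {N_SNII:.0f}, E_th ~ {E_th:.1e} erg")
```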
## 3. Results
In model M1 a classical bubble develops as a consequence of the SNII explosions (see Fig. 1). It expands faster along the $`z`$ direction, where the ISM density gradient is steeper. The SNII wind stops before a possible breakout, and the subsequent SNIa wind is not strong enough to expand the cavity further. The size of the bubble thus does not change for nearly 300 Myr, although its shape varies irregularly because of the Kelvin-Helmholtz instabilities along the interface between the hot cavity and the surrounding gas. After $`\sim `$340 Myr the expanding ISM is diluted enough and the hot bubble finally breaks out through a funnel. Most of the SNII ejecta remain locked into the bubble wall inside the galaxy, while the SNIa elements, ejected later, are easily channelled along the funnel. Iron is mostly produced by SNeIa and, when the breakout occurs, most of it is lost. Thus the [$`\alpha `$/Fe] ratio of the gas turns out to be lower outside the galaxy than inside (see Fig. 2).
After $`\sim `$29 Myr [the burst in IZw18 is estimated to be 15–27 Myr old by Martin (1996)] the galactic abundances found in this model are in good agreement with those observed in IZw18, once the nucleosynthetic prescriptions with $`Z=0`$ and $`\alpha _{\mathrm{RV}}=1.5`$ are assumed. At this time a substantial fraction of N is produced by IMS in a primary way. An initial metallicity of $`Z=0.01\mathrm{Z}_{\odot }`$ (simulating a pre-enriched burst) worsens the agreement between the data and the model results. The observed dimensions of the dynamical structures are also in reasonable agreement with our results after $`\sim `$29 Myr.
Models M2 and M3 have similar dynamical behaviours. However, due to the different quantities of metals produced and gas mass lost, their abundances are overestimated (M3) or underestimated (M2) compared to IZw18. We also run a model similar to M1 but with a 100% efficiency of SNeII in heating the gas. In this case the galaxy is left devoid of gas 450 Myr after the burst, at variance with the substantial amount of ISM observed in IZw18.
## References
Bradamante, F., Matteucci, F. & D'Ercole, A. 1998, A&A, 337, 338
D'Ercole, A. & Brighenti, F. 1999, MNRAS, 309, 941
Mac Low, M.-M. & Ferrara, A. 1999, ApJ, 513, 142
Marconi, G., Matteucci, F. & Tosi, M. 1994, MNRAS, 217, 391
Martin, C.L., 1996, ApJ, 465, 680
Nomoto, K., Thielemann, F.K. & Yokoi K. 1984, ApJ, 286, 644
Pilyugin, L.S., 1992, A&A, 260, 58
Pilyugin, L.S., 1993, A&A, 277, 42
Recchi, S., Matteucci, F. & D'Ercole, A. 2000, submitted to MNRAS
Renzini, A. & Voli, M. 1981, A&A, 94, 175
Woosley, S.E. & Weaver, T.A. 1995, ApJS, 101, 181 |
ISO-SWS spectra of galaxies: continuum and features
Based on observations with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with participation of ISAS and NASA.
## 1 Introduction
Mid-infrared spectra of galaxies are rich in emission lines, and display prominent broader emission and absorption features due to the presence of various solids and/or large molecules in their interstellar medium (ISM). Significant variation from source to source suggests that these features may provide important diagnostics of the ISM conditions in galaxies.
The ground based and Kuiper Airborne Observatory spectra of the prototypical starburst M 82 by Gillett et al. (1975) and Willner et al. (1977) fully established the existence of the mid-infrared โunidentified infrared bandsโ (UIB) at 6.2, 7.7, 8.6, and 11.3$`\mu `$m in galaxy spectra. These emission bands are characteristic of C-C and C-H bonds in aromatic molecules. In this paper we will refer to them as โPAH featuresโ according to one of the most popular interpretations of their carrier, polycyclic aromatic hydrocarbon molecules <sup>1</sup><sup>1</sup>1Other suggested carriers include small grains of hydrogenated amorphous carbon (HACs), quenched carbonaceous composites (QCCs), or coal.. These detections and related work using the IRAS LRS (Cohen & Volk 1989) form the basic pre-ISO knowledge of mid-infrared spectral features in galaxies.
Considerable work has also been done from the ground but has been limited to the features found in atmospheric windows, mainly silicate absorption and PAH feature emission in the N band (e.g. Roche et al. 1991, and references therein) and the companion PAH feature in the L band (e.g. Moorwood 1986). The restriction to atmospheric windows increases problems in establishing the โcontinuumโ on which the features are superposed. This is a nontrivial task, even with full wavelength coverage, due to the crowding of mid-infrared emission and absorption features (especially in the 10$`\mu `$m region).
With the Short Wavelength Spectrometer SWS (de Graauw et al. 1996) on board the Infrared Space Observatory ISO (Kessler et al. 1996) high spectral resolution observations with good signal-to-noise (S/N) were obtained for a number of bright galaxies. Their main advantages lie in continuous wavelength coverage from 2.4 to 45$`\mu `$m and in the possibility to clearly separate features from nearby emission lines.
The interpretation of galaxy-integrated spectra strongly benefits from comparisons to similar observations of galactic sources, sometimes spatially resolved, allowing better isolation of the physical mechanisms at work. Recent ISO spectra of many galactic template objects, such as reflection nebulae (e.g. Boulanger et al. 1996, Cesarsky et al. 1996a, Verstraete et al. 1996, Moutou et al. 1998), planetary nebulae and circumstellar regions (e.g. Beintema et al. 1996), and HII regions (Roelfsema et al. 1996, Cesarsky et al. 1996b) have clearly demonstrated the importance of such template spectra. They prove that PAHs are an ubiquitous part of the ISM. Additional information on emission features of crystalline silicates comes from similar template observations with ISO of e.g. planetary nebulae (Waters et al. 1998), evolved stars (Waters et al. 1996), young stars (Waelkens et al. 1996), or LBVs in the LMC (Voors et al. 1999). Absorption features (silicates, ices) have been found e.g. in the Galactic center (Lutz et al. 1996, Chiar et al. 2000), young stellar objects (dโHendecourt et al. 1996, Whittet et al. 1996, Dartois et al. 1999), and in dark clouds in the solar neighborhood (Whittet et al. 1998).
In this paper we present an inventory of mid-infrared spectral features detected in high resolution (R$``$1500) ISO-SWS 2.4โ45$`\mu `$m spectra of the starburst galaxies M 82 and NGC 253, the Seyfert 2 galaxies Circinus and NGC 1068, and a position in the 30 Doradus star forming region of the Large Magellanic Cloud (Sect. 3). We briefly discuss possible feature identifications (Sect. 4.1) and highlight possible relations between these features and the physical state of the interstellar medium in galaxies (Sect. 4.2). We also address the issue of the continuum determination and the apparent depth of the silicate absorption at 9.7$`\mu `$m (Sect. 5). Finally (Sects. 6 and 7) we demonstrate the use of these ISO spectra as templates for future infrared missions such as SIRTF, with particular emphasis on potential identification problems at low resolving power that are caused by coincidences of lines and features.
All the spectra shown here exhibit a large number of atomic, ionic and molecular emission lines. These have been or will be discussed elsewhere, along with more details on observations and data processing (Circinus: Moorwood et al. 1996; M 82: Lutz et al. 1998b, Schreiber 1998; NGC 1068: Lutz et al. 2000; 30 Dor: Thornley et al., in prep.).
## 2 Observations and data reduction
The objects discussed here have been observed as part of the ISO guaranteed time project on bright galactic nuclei. Here we concentrate on full grating scans obtained in SWS01 mode, speed 4. This mode provides a full 2.4–45$`\mu `$m scan at a resolving power of approximately 1000–2000. For NGC 253, Circinus, and NGC 1068 the observations were centered on the nuclei. In the case of M 82 the observation was centered on the southwestern star formation lobe. For 30 Dor the apertures were lying roughly parallel to an ionized shell region, about 0.5′ away from the central stellar cluster. Table 1 summarizes the positions. Note that different parts of an SWS full grating scan are observed with different aperture sizes, varying between 14″$`\times `$20″ and 20″$`\times `$33″ (14″$`\times `$20″ for 2.4–12.0$`\mu `$m, 14″$`\times `$27″ for 12.0–27.5$`\mu `$m, 20″$`\times `$27″ for 27.5–29.0$`\mu `$m, and 20″$`\times `$33″ for 29.0–45$`\mu `$m, with some wavelength overlap between the bands).
We have processed the data using the SWS Interactive Analysis (IA) system (Lahuis et al. 1998, Wieprecht et al. 1998) and the ISO Spectral Analysis Package ISAP (Sturm et al. 1998). Dark current subtraction, scan direction matching, and flatfielding have been done interactively, and noisy detectors have been eliminated. In ISAP we clipped outliers and averaged the data of all 12 detectors for each AOT band, retaining the instrumental resolution. For those wavelength ranges affected by fringes, the averaged spectra were defringed using the FFT or iterative sine fitting options of the defringe module within ISAP.
To reduce noise for display purposes (Fig. 1), we have smoothed the data with a Gaussian filter to a uniform resolution of 1000. This slightly broadens the line widths of the narrow atomic and ionic emission lines, but does not affect the broad emission and absorption features discussed in this paper. We did not remove the flux jumps at the detector band limits. Part of these jumps may be real, because the sources are extended and because the aperture sizes change at some band edges (at approximately 12.0, 27.5 and 29.0$`\mu `$m). Another part simply reflects the flux calibration uncertainty, which is of the order of 20 per cent.
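Such a degradation to a common resolving power can be done with a wavelength-dependent Gaussian kernel; a minimal sketch (our own, assuming a roughly Gaussian native profile with resolving power R_native and approximately uniform sampling) is:

```python
import numpy as np

def smooth_to_R(lam, flux, R_native, R_target=1000.0):
    """Smooth to uniform resolving power; requires R_native > R_target."""
    out = np.empty_like(flux)
    for i, l0 in enumerate(lam):
        # kernel width: quadrature difference of target and native FWHM
        fwhm = l0 * np.sqrt(1.0 / R_target ** 2 - 1.0 / R_native ** 2)
        sig = fwhm / 2.3548
        w = np.exp(-0.5 * ((lam - l0) / sig) ** 2)
        out[i] = np.sum(w * flux) / np.sum(w)
    return out
```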
We carefully checked the reality of the features in the final spectra against the possibility of residual instrumental features from the Relative Spectral Response Function (RSRF), which might be caused e.g. by an improper dark current subtraction. For example, the detector RSRF exhibits absorption features at 11.05 and 34$`\mu `$m which might appear in emission in the calibrated spectrum. This is, however, at most an effect of the order of a few per cent of the continuum level. The features we see at these wavelengths in our spectra (see below) are stronger, so that they must be real. Furthermore, we checked whether a feature appears in both scan directions and in the majority of all detectors. Additional confirmation was possible for those features that lie in the overlap region of two different AOT bands and appear in both bands. At this stage of instrument calibration, we do not believe that any of the broad structures in band 3E (approx. 27.5–29.5$`\mu `$m) are real features. Also, features at the end of band 4 (43–45$`\mu `$m, e.g. in Circinus) cannot be trusted.
## 3 An inventory of features
In Tables 2 and 3 we give an inventory of features that we believe to be reliable detections. We consider a detection reliable if the feature has an amplitude of at least 3$`\sigma `$ of the noise level and if it fulfills the criteria described above. We also list a few uncertain detections in parentheses, e.g. features in compliance with the above criteria but with an amplitude of less than 3$`\sigma `$ (but note that the definition of the local noise level is in many cases somewhat uncertain). Most of these features have been described before in reports of ISO-SWS observations of galactic template sources (e.g. Moutou et al. 1996, Verstraete et al. 1996, Roelfsema et al. 1996, Beintema et al. 1996). Here we highlight a few of their characteristics, source by source. In Sect. 4.1 we briefly discuss possible identifications. Please note that many of the features are severely blended and thus the peak wavelengths given here are approximate.
We also list the fluxes of four of the most prominent PAHs in Tables 4 and 5. To measure these fluxes we defined continua by a linear interpolation between the following points: 2.50 and 3.65$`\mu `$m for the 3.3$`\mu `$m feature, 5.9 and 10.9$`\mu `$m for the features at 6.2 and 7.7$`\mu `$m, and 10.9 and 11.8$`\mu `$m for the 11.3$`\mu `$m feature. We then obtained the fluxes by integrating between the following band limits: 3.10–3.35$`\mu `$m, 6.0–6.5$`\mu `$m, 7.3–8.2$`\mu `$m, and 11.1–11.7$`\mu `$m. To give an indication of the relative contribution of these features to the infrared luminosity we also list their ratio to the continuum flux in the range 11.6–11.9$`\mu `$m. Due to the uncertainties involved in this measuring process (continuum shape, feature profile, etc.) all absolute and relative fluxes are only approximate. By definition, these fluxes ignore a possible, PAH-related "plateau" or "continuum" in the 6–9 and 10–13$`\mu `$m range (e.g. Boulanger et al. 1996).
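A short Python sketch of this bookkeeping follows; the continuum anchors and band limits are the ones quoted above, while `wave` and `flux` are hypothetical arrays holding the spectrum (wavelength in $`\mu `$m, flux density in consistent units).

```python
import numpy as np

# (continuum anchors) and (integration limits) in microns, as quoted above
BANDS = {
    "3.3":  ((2.50, 3.65), (3.10, 3.35)),
    "6.2":  ((5.9, 10.9),  (6.0, 6.5)),
    "7.7":  ((5.9, 10.9),  (7.3, 8.2)),
    "11.3": ((10.9, 11.8), (11.1, 11.7)),
}

def pah_flux(wave, flux, anchors, limits):
    """Linear continuum between the two anchor wavelengths, then
    integrate the continuum-subtracted flux between the band limits."""
    (w1, w2), (lo, hi) = anchors, limits
    f1, f2 = np.interp(w1, wave, flux), np.interp(w2, wave, flux)
    continuum = f1 + (f2 - f1) * (wave - w1) / (w2 - w1)
    sel = (wave >= lo) & (wave <= hi)
    return np.trapz(flux[sel] - continuum[sel], wave[sel])

# fluxes = {name: pah_flux(wave, flux, *spec) for name, spec in BANDS.items()}
```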
M 82: M 82 is a small galaxy undergoing a very powerful starburst, and is considered to be a prototype of starburst activity. Due to its proximity (3.63 Mpc, Freedman et al. 1994) it is the brightest galaxy in the infrared, with the infrared luminosity arising mainly from warm dust in the central region. Emission features at 8.7 and 11.3$`\mu `$m were first detected by Gillett et al. (1975), and the 3.3, 6.2 and 7.6$`\mu `$m features by Willner et al. (1977).
The main PAH features are clearly seen in the mid-infrared spectrum of M 82, shown in its entirety in Fig. 1 and in detail in the top panels of Fig 2. In addition, the spectrum shows a large number of weaker features which have previously been detected with ISO only in galactic template sources.
The 3.3$`\mu `$m feature has satellites at 3.4 and 3.5$`\mu `$m. It also shows a blue asymmetry, which might be due to a feature at 3.25$`\mu `$m (see Beintema et al. 1996). Two weak features at 5.25 and 5.65$`\mu `$m, that have been observed e.g. in the planetary nebula NGC 7027 (Beintema et al. 1996) and the photodissociation front of M 17 (Verstraete et al. 1996), are also present in M 82. The 6.2$`\mu `$m band has a red shoulder and shows an additional feature at 6.0$`\mu `$m. The 7.7$`\mu `$m feature consists of two bands at 7.6 and 7.8$`\mu `$m. A significant contribution of 7.7$`\mu `$m solid methane absorption (e.g. Whittet et al. 1996, Lutz et al. 1996) to this strong dip between 7.6 and 7.8$`\mu `$m is unlikely, since in M 82 related icy absorption features are shallower and extinction is lower (Sect. 5) than in the sources with clear methane absorptions. A weak feature is present at 10.6$`\mu `$m, which is confirmed in the average starburst spectrum of Lutz et al. 1998a (their Fig. 1, independent ISOPHOT-S data), and probably related to a feature seen by Beintema et al. (1996) in the spectrum of NGC 7027. The 11.3$`\mu `$m feature peaks at 11.2$`\mu `$m and shows the well-known asymmetric shape towards longer wavelengths (Witteborn et al. 1989). There is an additional component around 11.05$`\mu `$m, much too strong to be related to artifacts from the RSRF correction known to exist at this wavelength. A weak feature may be present near 12.0$`\mu `$m. The 12.7$`\mu `$m feature is very prominent in M 82. Moutou et al. (1998) found two emission features at 15.8 and 16.4$`\mu `$m in the spectrum of the galactic reflection nebula NGC 7023. On the basis of their laboratory work, they attributed these features to PAH molecules. These two features are clearly seen in M 82, and in some of our other template spectra. More emission features can be found at still longer wavelengths (20.5 and 33–34$`\mu `$m).
In addition we find two features that – to our knowledge – have not been reported before: at 7.0 and 8.3$`\mu `$m. A feature around 7.0$`\mu `$m is also present in NGC 253 (most clearly), and perhaps in 30 Dor. We found the same feature in ISO-SWS spectra of the cool, dusty envelope of the planetary nebula He 2-113 (see e.g. Waters et al. 1998, their Fig. 2). The average ISOPHOT-S spectra of starburst and normal galaxies (Lutz et al. 1998a, Helou et al. 2000) also show hints of a weak feature, blended with H<sub>2</sub> S(5) 6.91$`\mu `$m and \[Ar II\] 6.99$`\mu `$m (see also Sect. 6). The 8.3$`\mu `$m feature is also visible in the galactic template spectrum of NGC 7023 (Fig. 4), and perhaps in some of the compact HII regions shown in Roelfsema et al. (1996). We also want to mention here the 14.3$`\mu `$m band. An astronomical observation of this band has been reported only recently for the first time (Tielens et al. 1999). It is present in all our template spectra, with the exception of NGC 1068, in NGC 7023, and perhaps also in some of the circumstellar PAH spectra shown in Beintema et al. (1996). On the other hand, our spectra do not show some of the emission features that have been detected before in astronomical observations, e.g. at 4.65$`\mu `$m (Verstraete et al. 1996) and 13.3$`\mu `$m (Moutou et al. 1998).
There are very few absorption features in our spectrum of M 82. The trough around 10$`\mu `$m looks like a strong 9.7$`\mu `$m silicate feature. In Sect. 5 we argue, however, that the trough is mainly due to the strong PAH emission at 8.7 and 11.3$`\mu `$m. Also, there is no clear signature of a corresponding silicate absorption feature at 18$`\mu `$m. A broad absorption feature around 3.0$`\mu `$m is probably due to H<sub>2</sub>O-ice. Its optical depth ($`\tau \approx 0.2`$) is relatively small compared e.g. to the Galactic center ($`\tau =0.5`$, Lutz et al. 1996, Chiar et al. 2000). However, since the overall extinction estimates differ in the same sense, this is consistent with the M 82 line of sight having properties similar to those towards the Galactic center: a mixture of diffuse ISM and molecular cloud extinction, with some variance in the relative weight for different lines of sight (Chiar et al. 2000). These properties seem to be quite typical for starburst galaxies, as we find similar conditions in NGC 253 (see below) and in NGC 4945 (Spoon et al., in prep.).
The M 82 spectrum has the highest S/N ratio and shows the largest number of features in our sample. Combined with the ISO-LWS long wavelength spectrum (Colbert et al. 1999) it will be an important template for future missions.
NGC 253: NGC 253 is a nearby, almost edge-on barred spiral galaxy with a high level of circumnuclear starburst activity. At optical wavelengths the galaxy is heavily obscured by dust lanes in the central regions. The ionic emission lines in NGC 253 are of lower excitation (e.g. lower \[Ne III\]/\[Ne II\] ratio) than in M 82, suggesting a softer average radiation field (Thornley et al., in prep.). Despite these differences between the two galaxies, their spectra of broad emission features are remarkably similar. This is demonstrated in Fig. 3. Only beyond 9$`\mu `$m does a difference in the underlying continuum become obvious. NGC 253 has a stronger continuum at these longer wavelengths. In starburst galaxies this underlying continuum is usually attributed to very small grains (VSG) of dust (e.g. Désert et al. 1990). In Sect. 5 we will use the difference in the continua of M 82 and NGC 253 to characterise the shape of the VSG continuum.
Table 2 shows that nearly all features of M 82 are also seen in NGC 253. The very few exceptions may simply be due to the lower S/N ratio of the NGC 253 spectrum. The red shoulder of the 6.2$`\mu `$m band seen in M 82 is more prominent in NGC 253 and is probably due to an additional feature at 6.35$`\mu `$m. The 3.0$`\mu `$m ice absorption is also present, having an optical depth of $`\tau \approx 0.25`$.
30 Dor: The 30 Dor region in the LMC is the largest, most massive, and most luminous H II region in the Local Group. As a local template for massive star formation and its interaction with the interstellar medium, it is instructive to compare its spectrum to the galaxy spectra in our sample. A more detailed study of the mid-infrared fine structure emission lines in 30 Dor will be presented in an upcoming paper (Thornley et al., in prep.).
An inspection of Fig. 2 and Table 2 shows that the 30 Dor spectrum exhibits most of the PAH features found in the galaxy spectra. Lower S/N may contribute to the non-detection of some of the weak features. Compared to the two starburst galaxies, the features in 30 Dor are much weaker relative to the continuum (see also Tables 4 and 5) and show different ratios. Note, for instance, the high 6.2/7.7$`\mu `$m feature ratio, the unusually high 3.4/3.3 ratio ($`\sim `$ 0.5, see also Sect. 4.2), the shape of the 7.6/7.8$`\mu `$m features, or the complete absence of the 12.7$`\mu `$m feature. These differences may be partly due to the fact that the SWS apertures covered only part of the entire 30 Dor complex. Verstraete et al. (1996) use the case of M 17 to demonstrate the strong variations in PAH spectra going from the center of an H II region to the surrounding photodissociation region. On the other hand, the similarity of the 30 Dor spectrum to the integrated ISOPHOT-S spectrum of the dwarf galaxy NGC 5253 (Rigopoulou et al. 1999) strongly suggests that the weakness of the PAHs is not an aperture effect but reflects an intrinsic property of very active star formation in a low metallicity environment.
Silicate absorption at 9.7 and 18$`\mu `$m is, if at all present, very weak. No other absorption features can be detected.
NGC 1068: The nearby, prototypical Seyfert 2 galaxy NGC 1068 is a key object in the investigation and modeling of active galactic nuclei (AGNs). The ISO-SWS observations were centered on the active nucleus, with the aperture covering very little of the circumnuclear star forming "ring", which has a radius of $`\sim `$15″. In stark contrast to the starburst templates we have shown, this active nucleus spectrum shows very little PAH emission. The weaker features (e.g. 3.3, 6.2 or 12.7$`\mu `$m) are barely visible, if at all.
The continuum around 10$`\mu `$m is strong because of a central warm component heated by the AGN. Silicate absorption is clearly present, although centered at 9.4$`\mu `$m rather than at 9.7$`\mu `$m, but again surrounding weak PAH emission complicates its interpretation. Hydrocarbon absorption is seen at 3.4$`\mu `$m (see also Bridger et al. 1994), in contrast to our starburst templates, where it is not observed. Note, however, that in M 82 and NGC 253 a similar absorption feature could plausibly exist if the analogy to the Galactic center holds, but be filled in by the 3.3/3.4 PAH emission. On the other hand M 82 and NGC 253 show an H<sub>2</sub>O ice absorption at 3.0$`\mu `$m which is definitely absent in NGC 1068. These differences in absorption features are entirely plausible because of the different physical conditions in the obscuring regions. For the starbursts, they likely include diffuse ISM as well as molecular clouds that can host icy grains. Conversely, infrared polarimetry suggests that most of the near-infrared obscuration in NGC 1068 occurs within a few parsecs from the nucleus, possibly in the torus (e.g. Packham et al. 1997). Such an energetic environment will be much less favourable for the existence of icy grains.
Circinus: The Circinus Galaxy is a nearby spiral galaxy which shows Seyfert 2 activity (e.g. Moorwood & Glass 1984, Oliva et al. 1994, Moorwood et al. 1996, Oliva et al. 1998). Due to its proximity (5 times closer than NGC 1068) it has become another template object for the study of AGNs. The AGN is surrounded by circumnuclear star-forming regions, as is often the case in Seyfert nuclei residing in spirals. The ISO-SWS observations were centered on the active nucleus, but contrary to the observations of NGC 1068 the apertures covered a significant amount of this circumnuclear star formation. Hence, most of the dust emission features seen in the starburst templates are also found in Circinus, but with weaker line-to-continuum ratio (see the discussion in Sect. 4.2). A peculiarity of the Circinus spectrum is the set of very pronounced features in the 20–22$`\mu `$m region.
The observation was performed very early in the mission, when the observing strategy was not yet fully optimized. For instance, exposures of the internal flux calibration lamps, preceding observations of the scientific target, caused memory effects in the immediately following scans. The low flux level of band 4 ($`\lambda `$ > 29$`\mu `$m), relative to the preceding bands, is due to such a memory effect and an incorrect dark current subtraction. The same might be true for the apparent features near 44$`\mu `$m (which are not visible in the overlapping LWS spectrum). We did not attempt to improve the dark current subtraction further since this would involve subjective assumptions about the true dark current.
## 4 Mid-infrared features and the physical state of the ISM
### 4.1 Identification
#### 4.1.1 The 2-13$`\mu `$m region:
The emission features in this wavelength range have been extensively studied with ISO in galactic objects during the last few years. Comprehensive discussions of their identifications and characteristics can be found e.g. in Beintema et al. (1996), Moutou et al. (1998), Roelfsema et al. (1996), and Verstraete et al. (1996). They are most often attributed to PAH molecules. This is supported by recent laboratory studies (e.g. Roelfsema et al. 1996, Moutou et al. 1996). The only features in this range that have not been addressed in the literature so far are the ones at 7.0 and 8.3$`\mu `$m. In our sample of galaxies they are unambiguously detected only in M 82 and NGC 253 (the 8.3$`\mu `$m feature only in M 82), but as mentioned in Sect. 3 they seem to be present in other published spectra as well. It seems likely that they can be attributed to PAH modes, too.
#### 4.1.2 Features between 13 and 20$`\mu `$m:
Features in this range have attracted less attention in the past, because they are intrinsically weak (but see e.g. Beintema et al. 1996). These bands, however, are more sensitive to the molecular structure of PAHs, since they involve the motion of the molecule as a whole, therefore depending on the exact species (Léger et al. 1989). Hence, their observation could help to better constrain the composition of the interstellar mixture. The 13.6, 15.8, and 16.5$`\mu `$m features are also visible in the spectra of NGC 7023 and have been attributed to PAH bands in the past by Moutou et al. (1998), based on their laboratory work. The band at 14.3$`\mu `$m has been tentatively attributed to a phenyl bending mode by Tielens et al. (1999). Moutou et al. (1996) list a feature at this wavelength in their composite laboratory spectrum of a mixture of PAHs. Although weak, it might cause confusion with \[Ne V\] in low resolution spectra (see Sect. 6). Finally, the weak feature at 14.8$`\mu `$m (if real) could be due to the smallest PAH, benzene (Tielens et al. 1999).
#### 4.1.3 Features in the 20 to 45 $`\mu `$m region:
The number of modes in the laboratory PAH spectra of Moutou et al. (1996) decreases with increasing wavelength. Few species show emission beyond 20$`\mu `$m, e.g. near 21, 28 and 40$`\mu `$m. In this wavelength range other sources must be taken into account. A number of recent papers have reported the detection of crystalline silicates (olivines, forsterite, pyroxene, etc.) in objects like Luminous Blue Variables (Voors et al. 1999), dusty circumstellar disks (Waelkens et al. 1996, Waters et al. 1996), or Planetary Nebulae (Waters et al. 1998). In particular the feature at 34$`\mu `$m could be attributed to these kinds of sources. However, one would expect to see emission features at e.g. 23, 28, 40 and 43$`\mu `$m as well; none of these features are clearly detected in our spectra. On the other hand, even in some of the galactic templates, such as the planetary nebula NGC 6543 (e.g. Waters et al. 1996), not all of these features are present.
A feature near 20.5$`\mu `$m shows a striking variation in shape and central wavelength between the galaxies. This is particularly surprising since the galaxy spectra include a mixture of many different regions, and may suggest a carrier occurring only transiently in very special conditions. In Circinus this feature is most prominent, and peaks at the bluest wavelength (20.2$`\mu `$m). Circinus also shows a second peak at 21.7$`\mu `$m which is absent in the other spectra. A bump around 20.5$`\mu `$m is also seen in ISO-SWS spectra of M supergiants (Voors et al. 1999, Molster et al. 1999). It could be due to PAHs or alternatively metal oxides like FeO (Waters et al. 1996, Henning et al. 1995). IRAS-LRS and ISO-SWS spectroscopy have also detected a broad feature, centered at approximately 20.1$`\mu `$m, in carbon rich stars (Volk et al. 1999, García-Lario et al. 1999, Szczerba et al. 1999). Possible candidates that have been proposed include large PAH clusters or hydrogenated amorphous carbon grains, hydrogenated fullerenes, and nano-diamonds (see references in Volk et al. 1999). However, compared to these detections, the features in our spectra are much narrower.
A mixture of fullerene molecules of different degrees of hydrogenation (Webster 1995) might also explain the second peak in Circinus around 21.7$`\mu `$m, since the emission peak shifts from 23$`\mu `$m for fully hydrogenated fullerene (C<sub>60</sub>H<sub>60</sub>) to 19$`\mu `$m for non-hydrogenated fullerene (C<sub>60</sub>). None of the other galaxies exhibit this feature, but the SWS01 spectrum of NML Cyg (Voors et al. 1999) and of the galaxy NGC 4945 (S. Lord, private communication) also show a weak emission feature around 21.6–22$`\mu `$m.
The broad plateau at 33–34$`\mu `$m, i.e. under the strong lines of \[Si II\] and \[S III\], could be affected by detector memory effects. To remove such a possible instrumental effect we treated the two different scan directions of the SWS01 mode separately. The trailing wings of each line profile, i.e. the blue wing for the scan with increasing wavelength, and the red wing for the scan decreasing in wavelength, are much more distorted by memory effects than the leading wings. Therefore, we cut out these trailing wings before we averaged the spectra of the two scan directions. We are hence confident that most of the remaining plateau is real. Such a feature has been observed in many galactic targets and is generally attributed to crystalline silicates (olivine, e.g. Waters et al. 1998).
### 4.2 Variation of PAH features
Published mid-IR spectra of galactic template sources show a significant variation of intrinsic PAH ratios from source to source. For instance Roelfsema et al. (1996) see a drastic change in the relative intensities of the 7.7 and 8.6 bands with increasing intensity or hardness of the radiation field. Similar changes are seen e.g. in different regions of M 17 (Verstraete et al. 1996) or - for the 8.6/11.3 ratio - in the reflection nebula NGC 1333 (Joblin et al. 1996). PAHs exposed to intense and hard radiation fields can be ionized, lose hydrogen atoms, or be photodissociated; any of these effects may contribute to the observed variations in PAH ratios. According to Joblin et al. (1996) the ionization is best traced by the 3.4/3.3 and 8.6/11.3 ratios. A good hydrogenation indicator is the (12+12.7)/11.3 ratio. For instance, these authors find a high 3.4/3.3 ratio of 0.1 in radiation fields that are 10<sup>5</sup> times the standard interstellar radiation field.
Another important factor that can alter observed PAH ratios is extinction. Extinction will suppress the 6.2, 8.6 and 11.3$`\mu `$m features with respect to the one at 7.7$`\mu `$m. The 12.7/11.3 ratio is similarly affected, since the 11.3 feature is still in the wing of the 9.7$`\mu `$m silicate absorption. Details will depend on the applicable extinction law (see e.g. the Galactic center, Lutz et al. 1996). While extinction clearly affects PAH spectra in highly obscured sources like Ultraluminous Infrared Galaxies (ULIRGs, Lutz et al. 1998a) or the edge-on galaxy NGC 4945 (Spoon et al., in prep.), its effect will be less pronounced in the lower extinction sources of our sample.
Finally, in active galaxies, such as NGC 1068 or the Circinus Galaxy, PAH features can be diluted by an AGN-powered hot dust continuum. Genzel et al. (1998) and Lutz et al. (1998a) have used this as a diagnostic of the power sources of ULIRGs.
Our sample of galaxy spectra exhibits a similar trend in relative PAH strengths as the galactic templates. M 82 and NGC 253 have high 3.4/3.3 ratios, consistent with them being active starburst galaxies. 30 Dor seems to have an even higher 3.4/3.3 ratio, but the S/N ratio is not sufficient for a detailed analysis. However, a strong and hard, highly ionizing radiation field in 30 Dor is consistent with the results of Thornley et al. (1998), which are based on the ratios of fine structure emission lines, like \[Ne III\]/\[Ne II\]. In that context it is interesting to note again the complete absence of the 12.7$`\mu `$m feature in 30 Dor. In the two starburst galaxies M 82 and NGC 253 we see well-separated 7.6/7.8 and 8.6 features. The 8.6 band is much weaker than the 7.6/7.8 band, just as observed in "normal" HII regions. In the Seyfert galaxy NGC 1068, however, the 8.6 band is similar in strength to the 7.7 band, as it is seen in the ultracompact HII regions in M 17 (Cesarsky 1996b) or IRAS 18323-0242 (Roelfsema et al. 1996), where the UV radiation field is extremely strong.
NGC 1068, like many AGNs, shows an additional component of warm dust in the 10$`\mu `$m region. Unified models for Seyfert galaxies predict a dusty torus which would emit at these mid-infrared wavelengths (e.g. Pier & Krolik 1992). Hence, an alternative interpretation of the weak emission features on both sides of the silicate absorption might be self-absorbed silicate emission from the torus, i.e. the emissions we identified as PAH might simply be wings of a wide silicate emission maximum, the center of which is suppressed by absorption. However, the observed double peaks at 7.7/8.6 and 11.05/11.25, as well as the distinct rise in flux near 7.3$`\mu `$m are not reproduced by torus models and show that there must be some real, although weak, PAH emission on top of the continuum. The weakness of the PAH emission can be understood in terms of dilution by the hot dust continuum and destruction by the intense AGN radiation field.
NGC 1068 has a circumnuclear starburst region, and some of the PAH emission might be picked up from this region by the large SWS aperture. The match in shape with the (small-aperture) ground-based data of Roche et al. (1984) in the 8-13$`\mu `$m range, and comparisons to ground-based CO maps, suggest that this effect is of minor importance here.
In the Circinus Galaxy the 7.7/8.6 ratio is somewhere in between the two extrema M 82 and NGC 1068. The feature/continuum ratio of the PAH features, however, is much higher in Circinus than in NGC 1068. We attribute this to the fact that in the case of Circinus the large SWS aperture indeed picked up parts of the well-known circumnuclear star forming region.
## 5 Continuum placement and the depth of the silicate feature
An important diagnostic in many extragalactic studies is the depth of the silicate absorption feature at 9.7$`\mu `$m and the extinction derived from it. However, the presence of strong PAH bands on both sides of the 9.7$`\mu `$m silicate feature makes it very difficult to estimate the continuum level and the true depth of the silicate absorption. In particular ground-based 8–13$`\mu `$m data suffer from this problem. Because of their continuous wavelength coverage ISO-SWS01 spectra are well suited to shed more light on this question. In the following we will discuss this issue using the example of M 82.
Firstly, evidence against a strong 9.7$`\mu `$m absorption comes from the absence of a strong 18$`\mu `$m silicate absorption (see Fig. 1). According to Draine & Lee (1984) the expected ratio $`\tau _{sil}`$(18$`\mu `$m) / $`\tau _{sil}`$(9.7$`\mu `$m) is 0.4. Furthermore an analysis of hydrogen recombination lines in the ISO range yields a moderate A<sub>V</sub>(gas)$`\sim `$ 5 mag (for a uniform screen model - see Schreiber 1998).
Next we come back to the comparison of the M 82 spectrum to the spectrum of NGC 253 (Fig. 3). The two galaxies are similar in A<sub>V</sub> and exhibit a very similar PAH spectrum. The only distinction appears to be a stronger VSG continuum in NGC 253. A strong rise of the continuum in this range is typical for regions of intense UV flux (e.g. Désert et al. 1990, Vigroux et al. 1996) <sup>3</sup><sup>3</sup>3The spectrum of NGC 253 indicates lower ionization (e.g. a low \[Ne III\]/\[Ne II\] ratio), but due to the compactness of the starburst in NGC 253 the radiation field here is more intense than in M 82.. The difference between both spectra can be well fit by a power-law continuum that begins to rise at approximately 8–9$`\mu `$m.
Finally, we compare our spectrum of M 82 to the SWS01 spectrum of the galactic reflection nebula NGC 7023. Little extinction is expected in the line of sight to this nebula. The continuum under the PAH bands in NGC 7023 is very weak and starts to grow only beyond 20$`\mu `$m (Moutou et al. 1998). The flux density around 10$`\mu `$m is almost on the same level as the flux density shortward of the 6$`\mu `$m PAH band and at 15–20$`\mu `$m. It could be explained by an underlying PAH plateau, or by the wings of the 7.7/8.6 and 11.3 PAH features (which can be represented by Lorentz profiles - Boulanger et al. 1998, Mattila et al. 1999). We therefore assume that in NGC 7023 this region consists mainly of emission bands, with little or no continuum and silicate absorption. In Fig. 4 we overplot the (smoothed) NGC 7023 spectrum on that of M 82. The NGC 7023 spectrum has been multiplied by a factor 3.25 in order to normalize the PAH emission feature at 7.6$`\mu `$m to the one in M 82. The 3.3 and 11.3$`\mu `$m bands in M 82 are weaker compared to NGC 7023. This might be explained e.g. by the harder radiation field in M 82 (Joblin et al. 1996, see Sect. 4.2). Apart from this the two spectra are remarkably similar. Only at longer wavelengths does a component of hot, small dust grains begin to add to the M 82 continuum, as expected due to the much harder radiation field in M 82.
In view of these arguments we constructed a toy model in order to reproduce the observed spectrum of M 82. The model simply consists of the scaled spectrum of NGC 7023 plus a power-law continuum which starts at 8.5$`\mu `$m ($`f(\lambda 8.5)^\alpha `$). We use a power-law rather than a black body curve for sake of simplicity. A power-law produces a very good fit to the continuum up to 20–25$`\mu `$m. A black-body curve would be more problematic, because the dust may not have a single temperature, and might not be in thermal equilibrium. M 82 is an extended source for the SWS apertures. There is a flux jump around 12$`\mu `$m by a factor of about 1.3, corresponding to a similar change in aperture size. We hence multiplied the power law continuum by 1.3 longward of 12$`\mu `$m <sup>4</sup><sup>4</sup>4A more accurate correction for the change in aperture size would have to assume a light distribution (as a function of wavelength) and a model of the instrument beam profile. Such a correction tool is not yet available.. The free parameters - the scaling factors for the NGC 7023 spectrum and the power law, plus the power law index - were adjusted by hand; we did not pursue a formal fit. Fig. 5 shows that this simple model matches the observed spectrum remarkably well. There is no need to invoke any kind of extinction. Due to the uncertainties in the spectra, however, there is room for a moderate extinction, in accordance with the results from the recombination line studies (A<sub>V</sub> $`\sim `$ 5 mag). A slightly different power-law, modified by a modest amount of extinction, could fit the spectrum equally well. However, a strong overall extinction, as deduced from the ground based 8-13$`\mu `$m data (A<sub>V</sub> = 15–60 mag, Gillett et al. 1975), is clearly incompatible with the new ISO data. This is an example of the potential danger of overestimating the silicate absorption depth in baseline-limited data.
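In code the toy model amounts to one line per component. In the Python sketch below the scale factor 3.25 and the factor 1.3 jump longward of 12$`\mu `$m are taken from the text, while the power-law normalisation `norm` and index `alpha` stand in for the free parameters that were adjusted by hand.

```python
import numpy as np

def m82_toy_model(wave, f_n7023, norm, alpha, scale=3.25,
                  w_start=8.5, w_jump=12.0, jump=1.3):
    """Scaled NGC 7023 template plus a power law ~ (lambda - 8.5)^alpha
    switched on at w_start, multiplied by `jump` beyond w_jump to mimic
    the SWS aperture change; no extinction term is applied."""
    powerlaw = np.where(wave > w_start,
                        norm * np.maximum(wave - w_start, 0.0) ** alpha,
                        0.0)
    powerlaw = np.where(wave > w_jump, jump * powerlaw, powerlaw)
    return scale * f_n7023 + powerlaw
```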
## 6 The interpretation of low resolution spectra
Many ISO spectra of galactic and extragalactic objects have been taken in low resolution mode (ISOPHOT-S, ISOCAM-CVF). Also, surveys with future mid- and far-infrared space missions, e.g. of galaxies at higher redshifts, will likely be performed with a relatively low resolution. For example the IRS spectrometer on board SIRTF will have a resolution of approximately 50–100 (plus a medium resolution mode of R=600) in a wavelength range similar to ISO-SWS. In this low resolution mode SIRTF-IRS, being much more sensitive than ISO-SWS, will be a unique tool to detect emission features in spectra of faint high-z galaxies. Low resolution spectra, however, suffer from possible identification and interpretation problems caused by coincidences of atomic/ionic lines and solid state features. In our high resolution SWS01 galaxy spectra lines and features are well separated, and we can use these spectra as templates to identify and highlight the importance of possible confusion problems. In Fig. 6 we have smoothed and rebinned the M 82 and NGC 1068 spectra to a resolution of 50 to simulate e.g. an ISOCAM-CVF or a SIRTF-IRS spectrum.
In Table 2 we have indicated possible confusions with nearby molecular, atomic, and ionic lines. We want to mention three lines in particular: \[Ar II\] at 6.99$`\mu `$m, which might be confused with the underlying PAH emission (and the nearby H<sub>2</sub> S(5) line), \[Ne II\] at 12.8$`\mu `$m, which in fact has been confused in the past with the underlying 12.7$`\mu `$m PAH feature, and \[Ne V\] at 14.3$`\mu `$m, which also has been confused in the past with the nearby PAH emission. To get an indication of the relative contributions of the 12.7$`\mu `$m PAH flux and the \[Ne II\] line flux to the combined (line plus feature) flux in low resolution spectra we have measured both fluxes in our high resolution spectra. Table 6 summarizes the ratios of PAH/\[Ne II\] in all 5 templates. The values vary widely: in 30 Dor the flux is solely due to \[Ne II\], whereas in Circinus \[Ne II\] contributes only about 15% of the combined flux. For the 7.0$`\mu `$m feature, we find that the broad feature contributes 25% of the combined flux of feature, H<sub>2</sub>, and \[Ar II\] in NGC 253.
In the low resolution representation of M 82 in Fig. 6 the shape of the 14.3$`\mu `$m PAH resembles very much the shape of an unresolved line like the \[Ne III\] line at 15.5$`\mu `$m and can be mistaken for \[Ne V\]. The high resolution spectrum of M 82 (Fig. 2) clearly shows that there is no \[Ne V\] at 14.32$`\mu `$m but an emission feature plus a weak line of \[Cl II\] 14.37$`\mu `$m. Of all the strong fine structure emission lines only a few lines remain unambiguously detectable in low resolution spectra, like Br $`\beta `$, Br $`\alpha `$, \[Ne III\] 15$`\mu `$m, \[S III\] 18.7, 33.5$`\mu `$m, and \[Si II\] 34.8$`\mu `$m in M 82, or \[O IV\] 26$`\mu `$m and perhaps \[Ne V\] 24$`\mu `$m, and \[S IV\] 10.5$`\mu `$m in NGC 1068. Clearly, low resolution spectra are very well suited for PAH and continuum measurements. However, flux measurements of narrow lines, and – in some cases – even their identification, can be very difficult. For these purposes higher resolutions, as for instance provided by the R=600 mode of the SIRTF spectrometer, are definitely needed.
## 7 Conclusions
We have detected a large number of mid-infrared features in galaxy spectra, some of them previously unobserved, and discussed the dependence of the dust features on ISM conditions in galaxies. The spectral features vary considerably from source to source in presence and relative strength. Emission features are largely absent in the intense radiation field close to an AGN, and weak in a low metallicity, intensely star forming environment. Differences in the absorption spectra point to different physical properties of the obscuring regions in starburst and active galaxies.
The spectra presented here will be valuable template spectra for future mid- and far-infrared space missions such as SIRTF, SOFIA or FIRST. They provide important clues for the identification and interpretation of high redshift, dusty galaxies. The strongest PAH features can be used to provide redshift information in far-infrared photometric galaxy surveys (Simpson & Eisenhardt 1999, see also the example of 21396+3623, Rigopoulou et al. 1999). Furthermore, they affect galaxy number counts. For instance, Xu et al. (1998) have constructed semi-empirical galaxy SEDs to model the considerable PAH effects on number counts and redshift distributions. Finally, the continuum and the PAH features can be used to distinguish between starburst activity and active nuclei in high redshift galaxies, as has been demonstrated for local infrared bright galaxies (Genzel et al. 1998, Lutz et al. 1998a, Rigopoulou et al. 1999).
The advantage of the wide wavelength coverage of the SWS spectra has been used to illustrate the problem of the continuum definition and the true depth of the silicate absorption. We find that in our starburst templates the hot VSG dust continuum begins to rise around 8 to 9$`\mu `$m, and that it can be well fitted by a simple power-law up to 20–25$`\mu `$m. Finally we have demonstrated possible line identification problems in low resolution spectra.
The spectra presented here are available in electronic form from the authors. We want to note again that different parts of the spectra were observed through different aperture sizes, which should be taken into account for a detailed use as template spectra.
###### Acknowledgements.
We wish to thank George Helou for very fruitful discussions, and Bernhard Brandl for support with the SIRTF-IRS simulations. SWS and the ISO Spectrometer Data Center at MPE are supported by DLR under grants 50 QI 8610 8 and 50 QI 9402 3. The ISO Spectral Analysis Package (ISAP) is a joint development by the LWS and SWS Instrument Teams and Data Centers. Contributing institutes are CESR, IAS, IPAC, MPE, RAL and SRON.
# Wave-Particle duality at the Planck scale: Freezing of neutrino oscillations
## 1 Introduction
One of the most challenging quests in contemporary theoretical physics concerns the nature of space-time at the Planck scale, and deciphering gravitationally-induced modifications to the quantum realm, and vice versa. While some of these aspects can only be revealed by observations, Maxwellian arguments of consistency can also shed light on the joint realm of the gravity and the quantum.
One such Maxwellian argument was presented in Ref. . It says that quantum measurements in the Planck realm necessarily alter the local space-time metric in a manner that destroys the commutativity of the position measurements of two different particles. In addition, it also affects the fundamental commutator,
$`[x,p_x]=i\mathrm{}`$ (1)
The essential idea of the above argument resides in the observation that a position measurement collapses the wave function, say, in the following manner: $`\text{Position Measurement}:\langle \stackrel{}{r}|\psi (0\le r<\mathrm{\infty })\rangle \to \langle \stackrel{}{r}|\psi (0\le r\le R)\rangle `$. In case $`R`$ is of the order of Planck length, the gravitational effects associated with the wave function collapse become important as it necessarily invokes the collapse of the energy-momentum tensor. Hence, the local space-time metric changes. As shown in Ref. , this circumstance makes the position measurements of two distinguishable particles non-commutative.
As a consequence, non-locality must be an essential part of any attempt to merge the theory of general relativity with quantum mechanics. The derived non-commutativity easily extends to measurements of different components of the position vector of a single particle, and modifies the fundamental commutators of the Heisenberg algebra. A further essential conclusion beyond the stated gravitationally-induced non-locality is that space-time itself acquires a non-commutative character.
Some implications of such non-commutative space times have been studied, e.g., by Madore , and by Connes , however, from an entirely different viewpoint. Independently, efforts in string theories also arrive at gravitationally-modified fundamental uncertainty relations. In that context an early reference is the work of Veneziano , while a recent one is . In yet another line of argument, without invoking extended objects, and entirely within the framework of quantum mechanics and the theory of general relativity, Adler and Santiago also obtain similar modifications to the uncertainty principle (cf., ). A somewhat different argument, based on the existence of an upper bound for acceleration, also results in a gravitational modification to the uncertainty principle .
$`[\mathbf{x},\mathbf{p}]=i\mathrm{}\left[\mathbf{1}+\epsilon {\displaystyle \frac{\lambda _P^2\mathbf{p}^2}{\mathrm{}^2}}\right]`$ (2)
where $`\lambda _P=\sqrt{\mathrm{}G/c^3}`$ is the Planck length, and $`\epsilon `$ is some dimensionless number of the order of unity. In what follows I set $`\epsilon `$ equal to unity.
It is the purpose of this Letter to decipher the wave-particle duality as contained in (2). To make our argument, we first recapture the origin of the wave-particle duality in the absence of gravitational effects, and then immediately return to the stated objective.
### 1.1 Wave-Particle duality in the absence of gravity
The fundamental commutator, (1), encodes the fact that the intensity of matter and gauge fields cannot be arbitrarily reduced to zero, but is bounded from below. The first direct evidence for this circumstance came from Einstein's understanding of the photo-electric effect. It is precisely this commutator that lies behind the de Broglie relation, and the entire edifice of the wave-particle duality. To see this, recall that in configuration space, $`p_x=\frac{\mathrm{}}{i}\frac{\partial }{\partial x}`$, is a solution of the fundamental commutator (1), with eigenfunctions of the form $`\psi (p_x)=N\mathrm{exp}\left(\frac{i}{\mathrm{}}p_xx\right)`$. The spatial periodicity, $`\lambda =\frac{h}{|p_x|}`$, carried by $`\psi (p_x)`$, when extended to three dimensions, yields the well known de Broglie relation
$`\lambda ={\displaystyle \frac{h}{p}},`$ (3)
where $`p=|\stackrel{}{p}|`$ is the magnitude of the momentum vector associated with an object. A simple text-book algebraic exercise, with (1) as the physical input, gives the Heisenberg uncertainty relation
$`\mathrm{\Delta }x\mathrm{\Delta }p_x\ge \mathrm{}/2,\text{etc.}`$ (4)
In the absence of gravity, equations (1), (3), and (4) represent various inter-related aspects of the wave-particle duality. One immediately sees that as $`p`$ approaches the Planck scale, and then beyond, the de Broglie wavelength continuously shrinks to zero and allows quantum-mechanical probing of space-time to all length scales and energies. However, as already mentioned, if the gravitational effects in the quantum-measurement process are taken into consideration, these results are no longer true. Planck length, up to a factor of the order of unity, emerges as the limiting length scale beyond which space-time cannot be probed. This circumstance, therefore, immediately suggests that the relation (3) must undergo a change in which the left hand side of (3) saturates to, within a few times, the Planck length. It is precisely this that emerges in the following.
As long as the entire theoretical structure of the existing quantum field theories rests upon the wave-particle duality, it is necessary to fix the domain of its validity. The heaviest object for which the wave-particle duality (3) has been experimentally verified, so far, is the $`C_{60}`$ fullerene . In this context, the experiment sets the scale at $`m_{C_{60}}=1.20\times 10^{21}`$ g. This is already an impressive achievement. Yet, it is to be compared with the Planck mass, $`m_{Pl.}=(\mathrm{}c/G)^{1/2}=2.18\times 10^{5}`$ g. However, in order to study possible departures from the de Broglie wave-particle duality in the Planck regime, one may not even need to invoke the early universe directly. All one may need are Planck mass quantum objects, and an appropriate technique to study the associated interference phenomena. To be specific, such effects may become indirectly observable via extremely high-energy gamma rays, and high-energy neutrinos.
Here, and in the following, the notational distinction between the operator and $`c`$-number nature of objects, such as $`p_x`$ in (1), where it is an operator, and $`p_x`$ in $`\psi (p_x)`$, where it is a $`c`$-number, will be omitted and assumed apparent from the context.
## 2 Gravitationally-modified Wave-Particle duality: minimal modification, and some implications
In one spatial dimension (chosen as $`x`$), the gravitationally-modified position-momentum uncertainty relation immediately follows from the commutator (2), and reads:
$`\mathrm{\Delta }x\mathrm{\Delta }p_x\ge {\displaystyle \frac{\mathrm{}}{2}}\left[1+\left({\displaystyle \frac{\lambda _P\mathrm{\Delta }p_x}{\mathrm{}}}\right)^2+\left({\displaystyle \frac{\lambda _P\mathbf{p}}{\mathrm{}}}\right)^2\right].`$ (5)
It carries as a characteristic feature the Kempf-Mangano-Mann (KMM, ref.) lower bound on the position uncertainty:
$`\mathrm{\Delta }x_K=\lambda _P\left(1+{\displaystyle \frac{\lambda _P\mathbf{p}}{\mathrm{}}}\right)^{1/2}`$ (6)
Notice that $`\mathrm{\Delta }x_K`$ has a state dependence via $`\mathbf{p}`$. For a state of vanishing $`\mathbf{p}`$, one obtains the absolute minimal distance that can be probed quantum mechanically. This lowest bound does not depend on the particle species. Therefore, the existence of the "absolute minimal distance" suggests a new intrinsic property of the space-time itself.
An important implication of the KMM lower bound, $`\mathrm{\Delta }x_K`$, is that the de Broglie plane waves can no longer represent the physical wave functions, not even in principle. Thus the wave-particle duality must undergo a fundamental conceptual and quantitative change.
A non-relativistic modification to the de Broglie relation was presented in the pioneering KMM work. This case, however, is likely to be of limited interest in the Planck regime. Here, I present the gravitationally modified de Broglie relation without restrictions on the particle's momentum.
It is readily seen that the momentum space wave function consistent with the gravitationally modified uncertainty relations (5) reads:
$`\psi (p)`$ $`=`$ $`N\left(1+\beta p^2\right)^{\left[\kappa (\mathbf{p})/4\beta (\mathrm{\Delta }p)^2\right]}`$ (7)
$`\times \mathrm{exp}\left[i{\displaystyle \frac{\mathbf{x}}{\lambda _P}}\mathrm{tan}^{1}\left(\sqrt{\beta }p\right){\displaystyle \frac{\kappa (\mathbf{p})\mathbf{p}}{2(\mathrm{\Delta }p)^2\sqrt{\beta }}}\mathrm{tan}^{1}\left(\sqrt{\beta }p\right)\right]`$
where $`\kappa (\mathbf{p}):=1+\beta (\mathrm{\Delta }p)^2+\beta \mathbf{p}^2`$, and $`\beta :=\lambda _P^2/\mathrm{}^2`$. $`N`$ is a normalization factor. This represents an oscillatory function damped by a momentum-dependent exponential. I identify the oscillation length with the gravitationally modified de Broglie wave length:
$`\lambda =2\pi {\displaystyle \frac{\lambda _P}{\mathrm{tan}^{1}\left(\sqrt{\beta }p\right)}}`$ (8)
Introducing $`\overline{\lambda }_P:=2\pi \lambda _P`$ as the Planck circumference, and $`\lambda _{dB}`$ as the gravitationally unmodified de Broglie wave length, $`\lambda _{dB}=h/p`$, the above expression takes the form:
$`\lambda ={\displaystyle \frac{\overline{\lambda }_P}{\mathrm{tan}^{1}\left(\overline{\lambda }_P/\lambda _{dB}\right)}}\to \{\begin{array}{cc}\lambda _{dB}\hfill & \text{for low energy regime}\hfill \\ 4\lambda _P\hfill & \text{for Planck regime}\hfill \end{array}`$ (9)
In addition, for the specific non-relativistic regime considered by Kempf et al. , $`\lambda `$ reproduces their equation (44). This justifies the interpretation of the oscillatory length associated with KMM's $`\psi (p)`$ as the gravitationally modified de Broglie wavelength.
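As a simple numerical illustration (not part of the original derivation), Eq. (9) can be evaluated directly; the short Python sketch below exhibits both limits, using the standard value of the Planck length.

```python
import numpy as np

LAMBDA_P = 1.616e-35                    # Planck length [m]
LAMBDA_P_BAR = 2 * np.pi * LAMBDA_P     # "Planck circumference"

def modified_wavelength(lambda_dB):
    """Eq. (9): modified wavelength from the ordinary de Broglie
    wavelength lambda_dB = h/p (both in meters)."""
    return LAMBDA_P_BAR / np.arctan(LAMBDA_P_BAR / lambda_dB)

print(modified_wavelength(1e-10))              # low energy: ~1e-10 m
print(modified_wavelength(1e-45) / LAMBDA_P)   # Planck regime: -> 4.0
```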
The gravitationally induced modifications to (1), (3), and (4) are now contained in (2), (9), and (5). These latter equations constitute the minimal conceptual and quantitative changes in the nature of the wave-particle duality.
A brief discussion on immediate physical implications of the modified wave-particle duality in the Planck realm now follows.
### 2.1 Freezing of neutrino oscillations
To explore one of the concrete consequences of the above-presented modification to the wave-particle duality, note that the existing data suggests flavor eigenstates of neutrinos to be linear superpositions of different mass eigenstates:
$`|\nu _{\mathrm{}}\rangle ={\displaystyle \sum _{ȷ}}U_{\mathrm{}ȷ}|m_{ȷ}\rangle `$ (10)
where $`\mathrm{}=e,\mu ,\tau `$ is the flavor index, $`ȷ=1,2,3`$ enumerates the mass eigenstates, and $`U`$ is a $`3\times 3`$ unitary matrix. Several fundamental questions now arise. Is this low-energy, i.e. low in comparison to the Planck mass, construct still valid at the Planck scale? What is the time evolution of the flavor and mass eigenstates in the Planck realm? At a deeper and related level, does the non-commutative space-time still carry Poincaré symmetry? – for the very notions of mass and spin (which the underlying mass eigenstates carry) originate from the Casimir invariants associated with the Poincaré group. In addition, the equations governing the evolution of the states derive their form from the space-time symmetries. None of these questions has a readily available answer. An answer must, therefore, await future theoretical and observational input. The latter, for example, may come from the study of anomalous events around and beyond $`10^{20}`$ eV cosmic rays.
Under these circumstances we take note of the fact that low-energy neutrino oscillations owe their physical origin to the different de Broglie oscillation lengths associated with each of the underlying mass eigenstates. If one assumes that each of the mass eigenstates carries the same energy, then the flavor oscillations arise due to the different de Broglie oscillation lengths carried by each of the mass eigenstates. If this scenario is considered for neutrino oscillations, then it is clear that neutrino oscillations shall freeze at the Planck scale due to the above obtained gravitationally-induced modification to the wave-particle duality. In particular, I draw attention to the saturation of $`\lambda `$ as indicated in Eq. (9).
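The freezing can be made explicit with a small numerical sketch. In the equal-energy picture, and in units where $`\mathrm{}=c=1`$ with lengths measured in Planck lengths, the phase accumulated by a mass eigenstate over a baseline $`L`$ is $`2\pi L/\lambda =(L/\lambda _P)\mathrm{tan}^{1}(\lambda _Pp)`$, so a possible illustration (a toy calculation, not a full oscillation formalism) reads:

```python
import numpy as np

def phase_difference(E, m1, m2, L, lam_P=1.0):
    """Relative phase of two mass eigenstates of common energy E after a
    baseline L (hbar = c = 1, lengths in units of the Planck length),
    using the modified wavelength: phi = (L/lam_P) * arctan(lam_P * p)."""
    p1, p2 = np.sqrt(E**2 - m1**2), np.sqrt(E**2 - m2**2)
    return (L / lam_P) * (np.arctan(lam_P * p1) - np.arctan(lam_P * p2))

# As E grows past the Planck scale, both arctan factors saturate at pi/2,
# so the phase difference -- and with it the flavor oscillation -- freezes.
```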
In the ordinary neutrino oscillation phenomenology the flavor oscillations are not altered at any practical level if one considers the "equal energy," or "equal velocity", or "wave packet" approaches . Not knowing the answer to the questions posed above it is not yet possible to say if the Planck-scale freezing of neutrino flavor oscillations shall survive in all neutrino oscillation frameworks.
Our discussion here, therefore, is intended to bring attention to the fact that the gravitationally-induced modifications to the wave-particle duality may have significant physically observable consequences for the early universe.
### 2.2 Effect on H-atom
For comparison, to the lowest order in $`\lambda _P`$, the effect of the modification (5) on the ground state of an electron in an H-atom yields the following modified uncertainty-principle estimate:
$`\left(E_0\right)_g\approx {\displaystyle \frac{me^4}{2\mathrm{}^2}}\left[1{\displaystyle \frac{4m\lambda _P^2}{\mathrm{}^2}}\left({\displaystyle \frac{me^4}{2\mathrm{}^2}}\right)\right]`$ (11)
Identifying:
$`E_0={\displaystyle \frac{me^4}{2\mathrm{}^2}}`$ (12)
with the ground state level of the hydrogen atom without incorporating the gravitationally-induced correction to the uncertainty relation, one immediately notices that the effect of gravitational corrections is to reduce the magnitude of the ionization energy by $`2.5\times 10^{48}\text{eV}`$. This suggests that a space-time endowed with the KMM bound is in some sense a heat bath as it decreases the energy required to dissociate the H-atom.
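The quoted shift is easy to check numerically; the following short Python computation with standard SI values of the constants reproduces it.

```python
hbar = 1.0546e-34       # J s
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m / s
m_e = 9.109e-31         # electron mass, kg
eV = 1.602e-19          # J
E0 = 13.6 * eV          # hydrogen ground-state binding energy

lambda_P_sq = hbar * G / c**3                     # (Planck length)^2
shift = 4 * m_e * lambda_P_sq / hbar**2 * E0**2
print(shift / eV)                                 # ~2.5e-48 eV
```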
### 2.3 Coherence in the early universe and in biological systems
The wavelength $`\lambda `$ asymptotically approaches the constant value $`4\lambda _P`$ that is now universal for all particle species. As a consequence of this universality, a new type of coherence may emerge in the early universe and this may carry significance for the large-scale uniformity of the universe. It is also speculated that quantum mechanics plays a fundamental role in brain function, see, e.g., . Therefore, the new coherence may also carry significant implications for the functioning of the brain, and other biological systems, if important biological elements carry a mass of the order of $`M_P=\left(\mathrm{}c/G\right)^{1/2}=2.2\times 10^{5}\text{g}`$.
## 3 Conclusion
While the effects of the gravitationally-induced modification to the de Broglie wave-particle duality are negligible at low energies, their relevance can hardly be overestimated at the Planck scale. At present, there are already speculations that anomalous events around $`10^{20}`$ eV cosmic rays may be pointing towards a violation of the Lorentz symmetry . It is expected that the gravitationally-modified wave-particle duality carries with it deformations of the Poincaré symmetries. Some of these deformations can be studied with the recently-approved Gamma-ray Large Area Space Telescope (GLAST), and with other detectors.<sup>1</sup><sup>1</sup>1The reader is referred to references for the original proposal, and for details on the recent progress in this direction. A related proposal on gravitationally-induced modification of quantum evolution by Ellis, Hagelin, Nanopoulos, and Srednicki can be studied via flux equalization of the cosmic neutrinos as shown by Liu, Hu, and Ge . The possible freezing of neutrino oscillations in the early universe could carry significant impact on the formation of structure in the early universe.
In the context of this Letter, and two recent works ,<sup>2</sup><sup>2</sup>2Under the assumption of operational independence of the inertial and gravitational masses, these works establish a quantum-induced violation of the equivalence principle. the above discussion makes it clear that the conceptual foundations of the theory of general relativity and quantum mechanics are so rich that they impose concrete modifications onto each other in the interface region. Yet, a complete theory of quantum gravity shall carry "quantum" and "gravity" with new meanings - meanings that are yet to be deciphered from theory and observations in their entirety.
## Acknowledgements.
It is my pleasure to thank Achim Kempf for an extended e-discussion on the subject, and Mariana Kirchbach for a critical reading of the manuscript. This work was supported by CONACyT (Mexico), and ISGBG. |
## 1 Introduction
The measurement of the branching ratio for the decay $`b\to u\mathrm{}\overline{\nu }`$ provides the cleanest way to determine the $`|V_{ub}|`$ element in the CKM mixing matrix. Evidence of a non-zero value of $`|V_{ub}|`$ was first obtained by both ARGUS and CLEO by observing leptons produced in $`B`$ decays with momentum exceeding the kinematical limit for $`b\to c\mathrm{}\overline{\nu }`$ transitions . The extraction of $`|V_{ub}|`$ from the yield of leptons above the $`b\to c\mathrm{}\overline{\nu }`$ endpoint is subject to large systematic uncertainties. More recently, exclusive $`B\to \pi \mathrm{}\overline{\nu }`$ and $`B\to \rho \mathrm{}\overline{\nu }`$ decays have been observed by CLEO and their rates measured . Still the derivation of $`|V_{ub}|`$ from exclusive semileptonic decays, contrary to the case for $`|V_{cb}|`$ in $`B\to D^{*}\mathrm{}\overline{\nu }`$, has significant model dependence.
The extraction of $`|V_{ub}|`$ from the shape of the invariant mass of the hadronic system recoiling against the lepton in $`b\to u\mathrm{}\overline{\nu }`$ transitions was proposed several years ago and has recently been the subject of new studies . The proposed method starts from the observation that the hadronic system recoiling against the lepton in the decay has invariant mass lower than the charm mass for the majority of $`b\to u\mathrm{}\overline{\nu }`$ decays. The model dependence in predicting the shape of this invariant mass distribution has been claimed to be under control within about 10–15% if $`b\to u\mathrm{}\overline{\nu }`$ decays can be distinguished from the $`b\to c\mathrm{}\overline{\nu }`$ ones for masses of the hadronic system up to cut values close enough to the $`D`$ mass . This corresponds to a model uncertainty of 5–7% on the extraction of $`|V_{ub}|`$.
The predicted shape of the invariant mass distribution depends mainly on the kinematics of the heavy and spectator quarks inside the $`B`$ hadron and on the quark masses. However, from the experimental point of view, the hadronisation process, transforming the $`u\overline{q}`$ system into the observable hadronic final state, represents a significant source of additional model uncertainties.
Several models have been proposed to describe both steps of the decay process. This note summarizes the results of a study performed by developing a dedicated $`b\to u\mathrm{}\overline{\nu }`$ decay generator, BTOOL. This generator implements different prescriptions for the initial state kinematics and the resonance decomposition of the hadronic final states. Its results are used to define the model dependent systematics in the extraction of $`|V_{ub}|`$ from the hadronic mass spectrum in semileptonic $`B`$ decays.
## 2 The BTOOL Generator
The decay generator provides the four momenta of the stable decay products in $`b\to u\mathrm{}\overline{\nu }`$ transitions. This requires a model for the kinematics of the $`b`$ and spectator $`\overline{q}`$ quarks inside the $`B`$ hadron, the description of the $`Q^2`$ distribution of the virtual $`W`$ and of the kinematics in the $`b\to uW`$ and $`W\to \mathrm{}\overline{\nu }`$ decays, and finally a model for the hadronisation of the $`u\overline{q}`$ system. In the following the implementation of the decay in the generator is discussed. In order to study model dependences and systematic effects, different prescriptions have been adopted.
### 2.1 ACCMM Model
In the ACCMM model the $`B`$ hadron consists of the $`b`$ quark and the spectator $`\overline{q}`$ quark moving back-to-back in the $`B`$ rest frame. Their momenta $`p`$ are distributed according to the gaussian distribution:
$`\varphi (p)={\displaystyle \frac{4}{\sqrt{\pi }p_F^3}}e^{\frac{p^2}{p_F^2}}`$ (1)
where the width $`p_F`$ is known as Fermi motion and represents a parameter in the model. The normalisation is chosen such that
$`{\displaystyle \int _0^{+\mathrm{\infty }}}dp\mathrm{}p^2\varphi (p)=1.`$ (2)
The choice of the $`p_F`$ parameter and of the mass of the spectator quark $`m_q`$ are discussed in detail in the next section.
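Since $`p^2\varphi (p)`$ is the modulus distribution of a three-dimensional Gaussian vector, drawing the quark momentum is straightforward; a possible sampling sketch in Python (not necessarily the implementation used in BTOOL) is:

```python
import numpy as np

def sample_fermi_momentum(p_F, rng=np.random.default_rng()):
    """Draw |p| distributed as p^2 * phi(p), Eqs. (1)-(2): the modulus
    of a 3D Gaussian vector with per-component sigma = p_F / sqrt(2)."""
    return np.linalg.norm(rng.normal(0.0, p_F / np.sqrt(2.0), size=3))
```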
### 2.2 Parton Model
An alternative picture of the $`b`$ quark kinematics has been proposed as an application of the parton model to heavy quark decays . In this model the decay is considered in a frame where the $`B`$ hadron moves with large momentum (infinite momentum frame or Breit frame). In this frame the $`b`$ quark behaves as a free particle carrying a fraction $`z`$ of the $`B`$ momentum, $`p_b=zp_B`$.
$`f(z)={\displaystyle \frac{Nz(1z)^2}{((1z)^2+ฯต_bz)^2}}`$ (3)
where $`ฯต_b`$ is a free parameter. In this way, the parton model offers an advantage since the kinematics of the $`b`$ quark is described by a function that can be directly related with experimental data on $`b`$ fragmentation.
### 2.3 QCD Universal Structure Function
Recently there has been progress in defining the Fermi motion in the framework of QCD . This has been achieved in terms of a universal structure function describing the distribution of the light-cone residual momentum of the heavy quark inside the hadron. At leading order and in the large $`m_b`$ limit, the light-cone residual momentum $`k_+`$ can be expressed as the difference between the b quark pole mass and its effective mass $`m_b^{}`$ inside the hadron: $`m_b^{}=m_b+k_+`$. As a consequence $`k_+<\overline{\mathrm{\Lambda }}=m_Bm_b`$. An ansatz for the shape of the universal structure function has been suggested in the form:
$`f(z)=z^a(1-cz)e^{cz}`$ (4)
where $`z=1-\frac{k_+}{\overline{\mathrm{\Lambda }}}`$, and the coefficients $`a`$ and $`c`$ depend on the values of $`\overline{\mathrm{\Lambda }}`$ and of the kinetic energy operator as discussed in the next section.
### 2.4 Decay kinematics
The $`b`$ quark decays as $`b\rightarrow Wu`$, the $`W`$ and $`u`$ being emitted back to back in the $`b`$ rest frame with an isotropic distribution of their emission angle w.r.t. the $`b`$ direction. The virtual $`W`$ is characterized by an effective mass squared $`Q^2`$. The kinematics of the $`b`$ and $`W`$ decays correspond to those of two-body decays. Therefore the model dependence that propagates to the final state kinematics depends on the choice of the values of $`p_F`$ and $`m_q`$ and of the $`Q^2`$ distribution. By defining $`x^2=Q^2/m_b^2`$ the differential decay rate can be expressed as:
$`{\displaystyle \frac{d\mathrm{\Gamma }}{dx}}={\displaystyle \frac{G_F^2m_b^5|V_{ub}|^2}{192\pi ^3}}(F_0(x)-{\displaystyle \frac{2\alpha _s}{3\pi }}F_1(x))`$ (5)
The functions $`F_0(x)`$ and $`F_1(x)`$ describe the tree-level contribution and the QCD correction terms. Following Ref. , they can be written, in the limit $`m_u\rightarrow 0`$, as:
$`F_0(x)=2(1-x^2)^2(1+2x^2)`$ (6)
and
$`F_1(x)=(\pi ^2+2S_{1,1}(x^2)-2S_{1,1}(1-x^2))+8x^2(1-x^2-2x^4)ln(x)`$ (7)
$`+2(1-x^2)^2(5+4x^2)ln(1-x^2)-(1-x^2)(5+9x^2-6x^4)`$ (8)
where $`S_{1,1}(x)`$ is the Nielsen polylogarithm. The resulting $`Q^2`$ distribution is shown in Figure 1.
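At tree level (dropping the $`O(\alpha _s)`$ term of Eq. (5)), the $`Q^2`$ of the virtual $`W`$ can be generated by accept/reject on $`F_0(x)`$, which is bounded by $`F_0(0)=2`$ on $`0\le x\le 1`$. The sketch below illustrates this step of the decay chain; the $`m_b`$ value is the central one quoted in Section 3.1.

```python
import numpy as np

m_b = 4.80  # GeV/c^2 (central value used in Section 3)

def F0(x):
    """Tree-level function of Eq. (6); x^2 = Q^2 / m_b^2."""
    return 2.0 * (1.0 - x**2)**2 * (1.0 + 2.0 * x**2)

# Draw the virtual-W mass squared from the tree-level spectrum by accept/reject;
# F0 <= F0(0) = 2 on [0, 1], so a flat envelope of height 2 suffices
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 200000)
x = x[rng.uniform(0.0, 2.0, x.size) < F0(x)]
Q2 = (x * m_b)**2
print(f"<Q^2> = {Q2.mean():.2f} GeV^2 from {x.size} accepted decays")
```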
The virtual $`W`$ is forced to decay to the lepton-neutrino pair. Due to the spin of the $`W`$, its decay is not isotropic and the angular distribution of the lepton varies as $`1+\mathrm{cos}^2\theta `$. In computing the lepton and neutrino momenta in the $`W`$ rest frame the lepton mass is taken into account. After the decay the lepton and neutrino are boosted back to the $`B`$ rest frame. The $`u`$ quark is also boosted to the same frame, at which point the hadronic system is treated as described below.
### 2.5 The hadronic system
The energy and invariant mass of the hadronic system correspond to those of the $`u\overline{q}`$ quark pair. The energy and invariant mass at the parton level can be compared with predictions obtained in QCD and the heavy quark expansion. The observable final states are then generated by applying different prescriptions for describing the evolution of the quark fragmentation, as discussed in the next section. These include a fully inclusive hadronisation scheme according to the JETSET parton shower model and exclusive descriptions of the final states in the $`b\rightarrow u\ell \overline{\nu }`$ transition.
### 2.6 The JETSET 7.4 interface
The generator can be used to produce individual $`B`$ meson decays for dedicated studies. Timings for the generation of individual decays on different platforms are given in Table 1. BTOOL is also interfaced as an option of the LUDECY subroutine of JETSET to handle $`b\rightarrow u\ell \overline{\nu }`$ decays in the generation of $`e^+e^{-}\rightarrow Z^0/\gamma \rightarrow b\overline{b}`$ events. The BTOOL generator can be activated by setting the MDME(IDC,2) flag in the LUDAT3 common block of JETSET, where IDC refers to the $`b\rightarrow u\ell \overline{\nu }`$ decay channel.
## 3 Results and Comparisons
The main interest of the simulation study of $`b\rightarrow u\ell \overline{\nu }`$ decays is to determine the invariant mass spectrum of the hadronic system and its correlation with the lepton energy, and then to define the uncertainties of these distributions due to the choice of the model and of its input parameters. Three different models have been applied for the definition of the kinematics of the $`b`$ and spectator $`\overline{q}`$ inside the hadrons and two models for the generation of the hadronic final states. The criteria chosen for the input parameters and their range of variation are discussed in the following subsection. The results for the hadronic system mass and multiplicities are presented in Sections 3.2 and 3.3.
### 3.1 The Choice of Parameters
The ACCMM model introduces two free parameters: i) the Fermi momentum $`p_F`$ and ii) the spectator quark mass $`m_{sp}`$. There have been attempts to extract the values of these parameters from fits to experimental observables such as the momentum spectrum of leptons from $`b\rightarrow c\ell \overline{\nu }`$ decay , the photon spectrum in $`b\rightarrow s\gamma `$ decay and the $`J/\psi `$ momentum distribution in $`B\rightarrow J/\psi X`$ . Most of these determinations point to a value of $`p_F\sim `$ 0.5 GeV/c (Table 2).
However it has been pointed out that the value of $`p_F`$ obtained in a fit to $`b\rightarrow c`$ transitions may not be appropriate in the description of $`b\rightarrow u`$ decays . At the same time it has also been shown that the ACCMM model is consistent with the QCD description of $`b\rightarrow u\ell \overline{\nu }`$ and $`b\rightarrow s\gamma `$ transitions and that the corresponding parameters can therefore be related. This is the direction followed in this study.
The effective $`b`$ quark mass $`m_b`$ depends on $`p_F`$ and $`m_{sp}`$ as:
$`m_b^2=m_b^2(p_b)=m_B^2+m_{sp}^2-2m_B\sqrt{p_b^2+m_{sp}^2}`$ (9)
where $`p_b`$ is the momentum of the heavy quark in the hadron and $`m_B`$ is the $`B`$ hadron mass. The value of $`m_B`$ can be taken as a parameter, tunable such that the ACCMM model corresponding to an average $`b`$ mass $`<m_b>`$ can be compared with theory predictions obtained for a given value of $`m_b`$. Estimated values for the $`b`$ quark pole mass are in the range 4.72 GeV/c<sup>2</sup> $`<m_b<`$ 4.92 GeV/c<sup>2</sup> .
The value of $`p_F`$ is directly related to the average squared momentum of the $`b`$ quark in the hadron since:
$`<p_b^2>={\displaystyle \int _0^{+\infty }}dp_b\,p_b^2\left(\varphi (p_b)p_b^2\right)={\displaystyle \frac{3}{2}}p_F^2`$ (10)
where $`\varphi (p)`$ is given by Eq. (1). Through the two above equations the ACCMM model parameters are related to those of the QCD description of the heavy quark inside the hadron. In this framework $`<p_b^2>`$ corresponds to the expectation value $`\mu _\pi ^2`$ of the kinetic operator. The value $`p_F`$ = 0.5 GeV/c corresponds to $`<p_b^2>`$ = 0.37 GeV<sup>2</sup>. Estimates of $`<p_b^2>`$ have been obtained both from theory and fits to measured spectra in $`B`$ decays as discussed below.
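Both relations are straightforward to verify numerically. The sketch below evaluates $`<m_b>`$ from Eq. (9) and $`<p_b^2>`$ from Eq. (10) for assumed values of $`p_F`$ and $`m_{sp}`$; the integration is cut at the momentum where Eq. (9) would give $`m_b^2<0`$, which is kinematically forbidden in the model.

```python
import numpy as np
from scipy import integrate

m_B, m_sp, p_F = 5.28, 0.15, 0.50   # GeV/c^2, GeV/c^2, GeV/c (m_sp, p_F assumed)

phi = lambda p: 4.0 / (np.sqrt(np.pi) * p_F**3) * np.exp(-p**2 / p_F**2)
m_b = lambda p: np.sqrt(m_B**2 + m_sp**2 - 2.0 * m_B * np.sqrt(p**2 + m_sp**2))

# beyond p_max Eq. (9) gives m_b^2 < 0; the Gaussian weight there is ~e^-28
p_max = np.sqrt(((m_B**2 + m_sp**2) / (2.0 * m_B))**2 - m_sp**2)

avg_mb  = integrate.quad(lambda p: p**2 * phi(p) * m_b(p), 0.0, p_max)[0]
avg_pb2 = integrate.quad(lambda p: p**2 * phi(p) * p**2,   0.0, p_max)[0]
print(f"<m_b> = {avg_mb:.2f} GeV/c^2,  <p_b^2> = {avg_pb2:.3f} GeV^2 "
      f"(3/2 p_F^2 = {1.5 * p_F**2:.3f})")
```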
For the Parton Model, the fragmentation function for $`b`$ quarks has been measured at LEP. Averaging over the ALEPH, DELPHI and OPAL results, the fraction of the $`b`$ quark energy taken by the beauty hadron is $`<x_B>=0.702\pm 0.008`$ . Also the observed shape in the preliminary DELPHI analysis was compatible with that of the Peterson function. These results point to a value for the $`\epsilon _b`$ parameter of $`\epsilon _b`$ = 0.0040. In the simulation of decays with the parton model, the spectator quark mass $`m_q`$ was set to zero and the $`b`$ quark mass was varied in the range 4.72 GeV/c<sup>2</sup> $`<m_b<`$ 4.92 GeV/c<sup>2</sup> as for the ACCMM model. It is interesting to point out that, by using the central value for $`\epsilon _b`$, the parton model gave $`<p_b^2>=`$ 0.35 GeV<sup>2</sup> for $`m_b`$ = 4.72 GeV/c<sup>2</sup>, consistent with the value obtained in the ACCMM model for $`p_F`$ = 0.5 GeV/c.
The use of the QCD universal structure function $`f(k_+)`$ allows a consistent comparison with the results obtained in the framework of QCD and Heavy Quark expansion. The normalised moments $`a_n=\frac{A_n}{\overline{\mathrm{\Lambda }}^n}`$ of $`f(k_+)`$, given by
$`a_n={\displaystyle \frac{1}{\overline{\mathrm{\Lambda }}^n}}{\displaystyle \int dk_+\,k_+^nf(k_+)}`$ (11)
relate the function parameters to those of the theory. In particular the first two moments define the function normalisation and the third is proportional to the expectation value of the kinetic energy operator :
$`a_0=1`$ (12)
$`a_1=0`$ (13)
$`a_2={\displaystyle \frac{3\mu _\pi ^2}{\overline{\mathrm{\Lambda }}^2}}`$ (14)
These relationships define the values of the parameters $`a`$ and $`c`$ in Eq. 4 as a function of the values of $`\mu _\pi ^2`$ and $`\overline{\mathrm{\Lambda }}`$. There have been several evaluations of $`\mu _\pi ^2`$ and a selection of recent results is given in Table 3. Results are scheme dependent and, depending on the method used in their derivation, they point to the values of $`\mu _\pi ^2`$ of 0.4 GeV<sup>2</sup> or 0.2 GeV<sup>2</sup>.
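The sketch below shows how the moment conditions can be imposed in practice. It assumes the support $`0<z<1/c`$ for the ansatz of Eq. (4) (an assumption of this sketch, not spelled out in the text) and fixes $`a`$ by requiring $`a_1=0`$ at a trial value of $`c`$; a full implementation would then iterate $`c`$ until $`a_2`$ matches Eq. (14).

```python
import numpy as np
from scipy import integrate, optimize

def moments(a, c, nmax=2):
    """Normalised moments a_n of Eq. (11) for the ansatz of Eq. (4), with
    z = 1 - k+/Lambda-bar; the support 0 < z < 1/c (where f stays positive)
    is an assumption of this sketch."""
    f = lambda z: z**a * (1.0 - c * z) * np.exp(c * z)
    zmax = 1.0 / c
    norm = integrate.quad(f, 0.0, zmax)[0]
    # k+/Lambda-bar = 1 - z, so a_n = <(1 - z)^n>
    return [integrate.quad(lambda z: (1.0 - z)**n * f(z), 0.0, zmax)[0] / norm
            for n in range(nmax + 1)]

c = 0.4                                                      # trial value of c
a = optimize.brentq(lambda a: moments(a, c)[1], 0.0, 10.0)   # impose a_1 = 0
a0, a1, a2 = moments(a, c)
print(f"a = {a:.3f}:  a0 = {a0:.3f},  a1 = {a1:.1e},  a2 = {a2:.3f}")
# next step (not shown): scan c until a_2 matches 3*mu_pi^2/Lambda-bar^2
```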
The two values of $`\mu _\pi ^2=`$ 0.2 GeV<sup>2</sup> and 0.4 GeV<sup>2</sup> were chosen while $`\overline{\mathrm{\Lambda }}`$ ranged between 0.36 GeV/c<sup>2</sup> and 0.56 GeV/c<sup>2</sup> for 4.72 GeV/c<sup>2</sup> $`<m_b<`$ 4.92 GeV/c<sup>2</sup> and $`m_B`$ = 5.28 GeV/c<sup>2</sup>. The masses of the light quarks, $`u`$ and spectator quark, were set to zero.
### 3.2 The Hadronic System Mass
The precise determination of the fraction of $`b\rightarrow u\ell \overline{\nu }`$ transitions yielding a hadronic system with mass $`M_X`$ below a given $`M_{cut}`$ value is crucial in the estimation of $`|V_{ub}|`$.
As discussed above, at the quark level the hadronic mass corresponds to the mass of the $`u\overline{q}`$ system. This depends mainly on the masses and motion of the heavy and spectator quarks in the $`B`$ hadron. On the contrary the experimentally measurable hadronic mass is strongly affected by the resonant decomposition of the hadronic final states. In the simulation the hadronic system is analyzed in two steps.
Firstly, the $`u\overline{q}`$ pair is analyzed. This gives the hadronic mass distribution before resonant states are taken into account (see Figure 2) and can be compared with that computed using QCD and heavy quark expansion .
Results are expressed in terms of the fraction $`F_u(M_{cut})`$ of the $`b\rightarrow u\ell \overline{\nu }`$ transitions resulting in a mass of the $`u\overline{q}`$ pair below a given cut value. The kinematics of the $`b`$ and spectator quark have been defined using the ACCMM model, the QCD universal structure function and the Parton Model. In order to compare the results, parameters have been chosen such that they correspond to $`m_b`$ = 4.80 $`\pm `$ 0.10 GeV/c<sup>2</sup> and 0.2 GeV<sup>2</sup> $`<`$ $`<p_b^2>`$ $`<`$ 0.4 GeV<sup>2</sup> as discussed above.
Table 4 summarizes the results for different choices of the input parameters. The first observation is that the universal function implemented in BTOOL reproduces the prediction from QCD and heavy quark expansion from . Furthermore, the ACCMM model has been found to reproduce these results to better than 15$`\%`$, for equivalent values of $`m_b`$ and $`<p_b^2>`$. This study confirms that the sensitivity to the value of the $`b`$ quark mass is significant in the region of low hadronic invariant masses. In this region also the model dependence is more pronounced. In order to estimate the overall uncertainty in the estimate of the fraction of $`b\rightarrow u\ell \overline{\nu }`$ decays with hadronic mass below a given cut value $`M_{cut}`$, the different sources of systematics have been combined. The chosen range of variation of the parameters is $`\pm \sigma (m_b)`$ = $`\pm `$ 0.10 GeV/c<sup>2</sup> and $`\pm \sigma (<p_b^2>)`$ = $`\pm `$ 0.1 GeV<sup>2</sup>. The corresponding relative systematic errors are summarized in Table 5.
Secondly, the hadronic final states corresponding to a given mass and energy of the $`u\overline{q}`$ pair can be predicted by a variety of methods ranging from the fully inclusive quark fragmentation approach to exclusive models.
At large enough recoil $`u`$ quark energies, the $`u\overline{q}`$ system moves away fast and this picture is similar to that of the evolution of a jet initiated by a light quark $`q`$ in $`e^+e^{-}\rightarrow q\overline{q}`$ annihilation. This is simulated by first arranging the $`u\overline{q}`$ system in a string configuration and then making it fragment according to the parton shower model. Exclusive models compute the decay amplitudes from the heavy-to-light form factors and the quark hadronic wave functions. The so-called ISGW2 model approximates the inclusive $`b\rightarrow u\ell \overline{\nu }`$ decay width by the sum over resonant final states, taking into account leading corrections to the heavy quark symmetry limit.
The range of applicability of the inclusive and exclusive models is restricted to particular regions of the accessible kinematic configurations. Figure 3 a) shows the physical region in the $`q^2`$ versus $`2m_Bq_0`$ plane where $`q^2`$ is the effective mass squared of the virtual $`W`$ and $`q_0`$ its energy in the $`B`$ rest frame. In this plot states of equal hadronic invariant mass $`M_X`$ correspond to lines $`M_X^2=m_B^2-2m_Bq_0+q^2`$. Systems of large invariant mass correspond to low $`q^2`$ and $`q_0`$ values, i.e. large $`u`$ recoil as shown in Figure 3 b). In these cases the $`u`$ quark energy is typically large enough compared with that of the spectator quark that the analogy with jet fragmentation is justifiable. Conversely at low $`u`$ recoil energy, i.e. close to the upper kinematical limit in Figure 3 a), the relative momentum of the $`u\overline{q}`$ pair is small and they are therefore likely to form a bound state.
A satisfactory description of the inclusive $`b\rightarrow u\ell \overline{\nu }`$ decay must take these characteristics into account. Hybrid models have been proposed for this purpose . The main feature of a hybrid model is to define a kinematical region in which the exclusive model is valid and its complement that can be treated by an inclusive fragmentation model. The two regions must be chosen in order to have well behaved matching conditions for a set of relevant kinematical variables. Further constraints can be derived from the branching ratios for $`B\rightarrow \pi \ell \overline{\nu }`$ and $`B\rightarrow \rho \ell \overline{\nu }`$ decays measured by CLEO . Since the aim of this study is the definition of the systematic uncertainties in the description of the hadronic mass spectrum, results have been derived for the two extreme cases of fully inclusive and exclusive models. In the inclusive model, the probabilities for generating light vector and axial resonances have been tuned in JETSET in order to agree with the measured rates in $`Z^0`$ decays. For the exclusive model the ISGW2 model has been used. The results are presented in Figure 4 in terms of the fraction of $`b\rightarrow u\ell \overline{\nu }`$ decays with the hadronic final state below a given mass value. The comparison of the predictions from the two models shows significant differences in the predicted mass spectra due to the relative importance of resonant and non-resonant final states. For the region of $`M_{cut}>`$ 1.5 GeV/$`c^2`$, the relative difference of the two models corresponds to 10 - 15%, showing that the hadronic system fragmentation introduces an uncertainty of the same order as that from the $`b`$-quark mass and the heavy hadron kinematics.
### 3.3 The Hadronic System Multiplicity
An additional source of uncertainty in the modelling of the decay arises from the multiplicity of the hadronic system. This is of special relevance since the efficiency for reconstructing the decay depends on this multiplicity. In order to study this uncertainty, the different prescriptions for describing the final states discussed above have been analyzed in terms of the resulting decay multiplicity.
The charged multiplicity of the hadronic system from the $`b\rightarrow u\ell \overline{\nu }`$ decay can be compared with one half of that of a $`q\overline{q}`$ event at $`\sqrt{s}=2E_{had}`$. The data on the event charged multiplicity from Adone and Mark II at 2 GeV $`<\sqrt{s}<`$ 8 GeV can be described with a function $`<n_{ch}>=a+b\mathrm{ln}s`$ with $`a=2.67\pm 0.04`$ and $`b=0.48\pm 0.02`$, shown by the long thick line in Figure 5. These data have also been compared with the results of the simulation where a $`u`$ quark of energy $`\sqrt{s}/2`$ is paired with the spectator quark. The quark is either given Fermi motion according to the value of $`p_F`$ but no transverse momentum, or kept at rest (i.e. $`p_F=0`$). The results are shown by the shorter thinner lines in Figure 5. The parton shower model reproduces reasonably well both the multiplicity and its scaling with the quark energy. The best agreement with the data is obtained by imposing $`p_F`$ = 0.2 GeV/c which gives a fit with $`a`$=2.71 and $`b`$=0.44.
Multiplicities in semileptonic $`B`$ decays have also been predicted using the quark-gluon string model (QGSM), which also reproduces fairly precisely the same data on the charged event multiplicity in low energy $`e^+e^{-}`$ collisions . The multiplicities obtained by the decay generator using the hybrid model are compared with the predictions from QGSM and ISGW2 in Table 6. The average charged multiplicity in the decay $`<n_{ch}>`$ agrees for the three models within $`\pm `$ 0.16. This multiplicity is also quite close to that measured for $`D`$ meson decays , showing that the reconstruction of the hadronic system may be performed with comparable efficiency in semileptonic $`b\rightarrow u`$ and $`b\rightarrow c`$ decays.
## 4 The Lepton Spectrum
As already mentioned, the lepton spectrum is sensitive to the mass of the quark produced in the semileptonic $`b`$ decay. While the lepton yield in the region of lepton energies above the kinematical limit $`\frac{M_B^2-M_D^2}{2M_B}`$ for $`b\rightarrow c\ell \overline{\nu }`$ transitions is subject to significant model dependences, a combined study of the mass of the hadronic system $`M_X`$ and the energy of the lepton in the $`B`$ rest frame $`E_{\ell }^{*}`$ may allow an extraction of $`|V_{ub}|`$ with good sensitivity and improved control of the systematics.
Figure 6 shows the correlation between the values of $`E_{\ell }^{*}`$ and $`M_X`$ before and after the hadronisation of the $`u\overline{q}`$ system. By selecting decays with low hadronic invariant mass, the lepton spectrum is depleted at its lower end without significantly affecting the region of lepton energies above 1.5 GeV that is relevant for separating $`b\rightarrow u\ell \overline{\nu }`$ from $`b\rightarrow c\ell \overline{\nu }`$ decays.
## 5 Conclusion
A generator for inclusive $`b\rightarrow u\ell \overline{\nu }`$ decays has been developed and used for studying the invariant mass and resonance decomposition of the hadronic system produced in the decay. These studies are of special relevance for the extraction of $`|V_{ub}|`$ from semileptonic $`B`$ decays at LEP and at the $`B`$ factories. The low invariant mass of the hadronic system emitted in these decays can be used to separate them from the CKM favoured $`b\rightarrow c\ell \overline{\nu }`$ transitions. Different models for defining the kinematics of the heavy quark inside the hadron and the $`u\overline{q}`$ system hadronisation have been compared. Systematic uncertainties in the fraction of decays giving a hadronic system with mass below a given cut arise from the value of the $`b`$ quark pole mass, the momentum distribution of the heavy and spectator quark inside the hadron and the modelling of the $`u\overline{q}`$ hadronisation. This analysis confirmed the observation that model dependencies and the systematic uncertainties from the $`b`$-quark mass and the heavy quark kinematics can be kept at the 10% level, or below, if the study of $`b\rightarrow u\ell \overline{\nu }`$ is performed including decays with hadronic final state masses up to $`\sim `$ 1.6 GeV/c<sup>2</sup> or above. A comparable uncertainty arises from the hadronisation model when comparing an inclusive to a fully exclusive model. The combined analysis of the hadronic mass and lepton energy spectra may provide an optimal separation of $`b\rightarrow u\ell \overline{\nu }`$ from $`b\rightarrow c\ell \overline{\nu }`$ decays.
Acknowledgements
I would like to thank M. Neubert and N. Uraltsev for extensive discussions. I am also grateful to I. Bigi and C.S. Kim for pointing out the relationships between ACCMM and QCD parameters, to D. Lange and A. Ryd for providing me with results from a Monte Carlo implementation of the ISGW2 model, to T. Sjรถstrand for his advice on interfacing this generator with JETSET and to W. Venus for his comments on the manuscript.
# Solving the Coincidence Problem: Tracking Oscillating Energy
## Abstract
Recent cosmological observations strongly suggest that the universe is dominated by an unknown form of energy with negative pressure. Why is this dark energy density of order the critical density today? We propose that the dark energy has periodically dominated in the past so that its preponderance today is natural. We illustrate this paradigm with a model potential and show that its predictions are consistent with all observations.
Introduction. A variety of evidence accumulated over the last several years points to the existence of an unknown, unclumped form of energy in the Universe. First was an apparent concordance of different measurements: the age of the Universe; the Hubble constant; the baryon fraction in clusters; and the shape of the galaxy power spectrum. Second came the stunning observations of tens of distant Type Ia Supernovae, which found a distance-redshift relation in accord with a cosmological constant, but in strong disagreement with a matter dominated Universe. Finally, this past year has seen analyses of the experiments measuring anisotropies in the CMB. Taken together, the CMB experiments plot out a rough shape for the power spectrum, one that is in accord with a flat Universe, but in disagreement with an open Universe. If we believe the estimates of matter density coming from observations of clusters , the only way to get a flat Universe, and hence account for the CMB measurements, is to have an unclumped form of energy density pervading the Universe.
Perhaps the simplest explanation of these data is that the unclumped form of energy density corresponds to a positive cosmological constant. A non-zero but tiny constant vacuum energy density (cosmological constant) could conceivably be explained by some unknown string theory symmetry (that sets the vacuum energy density to zero) being broken by a small amount. However, to explain in this way a constant vacuum energy density of $`2\times 10^{-59}\mathrm{TeV}^4`$, which is not only small but is also just the right value for it to be just beginning to dominate the energy density of the Universe now, would require an unbelievable coincidence. A different possibility is to give up the dream of finding a mechanism which would set the vacuum energy density to exactly zero and resort to believing that anthropic considerations select amongst $`10^{100}`$ string vacua to find one with a vacuum energy density sufficiently fine-tuned for life. Although this anthropic selection mechanism is logically consistent and even predicts a small but observable cosmological constant, one might think that nature would have found a more efficient mechanism to obtain a sufficiently small cosmological constant than such extreme brute force application of anthropic selection.
An alternative is to assume that the true vacuum energy density is zero, and to work with the idea that the unknown, unclumped energy is due to a scalar field $`\varphi `$ which has not yet reached its ground state. This idea, which is called dynamical lambda or quintessence, has received much attention over the last several years. However, two problems still remain. First, the field's mass has to be extremely small, less than or of order the Hubble constant today $`\sim 10^{-33}\mathrm{eV}`$, to ensure that it is still rolling to its vacuum configuration. This is in general difficult because scalar fields tend to acquire masses greater than or of order the scale of supersymmetry breaking suppressed by at most the Planck scale: $`m\sim F/m_{\mathrm{Pl}}\sim \mathrm{TeV}^2/m_{\mathrm{Pl}}\sim 10^{-3}\mathrm{eV}`$. Although difficult, this could be achieved using pseudo-Nambu-Goldstone bosons . Another more speculative way to achieve this would be to use the hypothetical symmetry (perhaps some sort of hidden supersymmetry) that ensures that the true vacuum energy density is zero to also protect the flat directions in scalar field space that would correspond to the very light scalar fields necessary for quintessence. The second, and perhaps even more serious problem is that almost all of these models require that we live in a special epoch today, when the quintessence is just starting to dominate the energy density of the Universe, and furthermore this specialness cannot even be justified by use of anthropic arguments.
In recent years a lot of progress has been made in understanding the behavior of quintessence fields. A broad class of solutions, called tracker solutions , has been discovered in which the final value of the quintessence energy density is insensitive to the initial conditions. For example, potentials like $`V=V_0\varphi ^{-n}`$ or $`V=V_0\mathrm{exp}(1/\varphi )`$ can, for suitable choices of $`V_0`$, catch up with the critical density late in the evolution of the Universe for a wide range of initial conditions and thus provide a natural setting for explaining the current acceleration of the Universe. However, the suitable choice of $`V_0`$ must be of the order of the critical energy density today, i.e., we are back to the problem of living at a special epoch today and not even being able to use anthropic arguments to justify this specialness.
In a subset of these tracking models, which we call the exact tracker solutions , the scalar field energy density is always related to the ambient energy density in the Universe: if the dominant component in the Universe is radiation, then the tracking field's energy density also falls off as $`a^{-4}`$, where $`a`$ is the scale factor of the Universe. If the dominant component is matter, then the field's energy density scales as $`a^{-3}`$. This behavior arises from an exponential potential for $`\varphi `$ (regardless of the value of $`V_0`$). Since the energy density in this field is always comparable to the background density, we are not living at a special epoch: any observer in the distant past or future would also see the tracking field's energy density at a comparable level. However, these tracking solutions run into two problems. First, if their energy density today truly is dominant, then it should also have been dominant at the time of Big Bang Nucleosynthesis (BBN). Constraints from observations of light element abundances preclude such an additional form of energy density at early times. Second, tracking models have the wrong equation of state at present since the tracking field behaves like matter, with zero pressure, instead of having the necessary negative pressure to accelerate the Universe.
In this letter we ask the question, what if the Universe has been accelerating periodically in the past? Then the fact that the Universe is accelerating today would not be surprising. It would merely reflect that the period is such that the Universe is accelerating today. Of course, if it turned out that to achieve a presently accelerating Universe the period had to be excessively fine-tuned, then this scenario would not be worth considering. However, note that the assumption that there is nothing special about the present time itself argues for the robustness of such a scenario. If the Universe does accelerate periodically, then there is no reason why it should not be accelerating today; indeed, it is then reasonable to expect it to be doing so.
To judge the merits of this scenario in a concrete manner, we adopt an ad-hoc potential. Though worked out for this specific potential, the predictions outlined here are the generic predictions of a periodically accelerating Universe. The model we adopt for study is a modification of the exponential potential (which leads to the exact tracker solution). The modification to the potential is a sinusoidal modulation, which induces the tracker field to oscillate about the ambient energy density. We show that such a potential can satisfy the BBN constraints, can produce the right equation of state today and leads to testable features in the CMB and matter power spectra. We call this type of energy Tracking, Oscillating Energy, or TOE.
The potential and the field evolution. Consider a scalar field $`\varphi `$ with potential $`V(\varphi )=V_0\mathrm{exp}(-\lambda \varphi \sqrt{8\pi G})`$. It is well-known that such a potential leads to an attractor solution with $`\mathrm{\Omega }_\varphi \equiv \rho _\varphi /(\rho _\varphi +\rho _o)=n/\lambda ^2`$ where $`\rho _o`$ is the energy density in the other component of the Universe, which is assumed to scale as $`a^{-n}`$. Thus, no matter what the initial conditions are for $`\varphi `$, it always evolves so that it tracks the rest of the density in the Universe.
Now consider the potential
$$V(\varphi )=V_0\mathrm{exp}\left(-\lambda \varphi \sqrt{8\pi G}\right)\left[1+A\mathrm{sin}(\nu \varphi \sqrt{8\pi G})\right].$$
(1)
This potential serves to modulate the tracking behavior. Figure 1 shows the resultant evolution of $`\varphi `$ and its energy density for a particular set of the parameters $`A,\nu `$. (The normalization $`V_0`$ can be set to $`G^{-2}`$ by shifting the initial value of $`\varphi `$.) Also shown is the tracking solution for this particular value of $`\lambda `$ without the modulation. As expected, the sinusoidal term in the potential leads to oscillations about this tracking behavior. One can obtain analytic solutions for the dynamics of the potential in Eq. (1) during radiation ($`n=4`$) or matter ($`n=3`$) domination in the limit that $`A`$ is small by perturbing about the corresponding exact tracker model which has $`\varphi \sqrt{8\pi G}=\frac{n}{\lambda }\mathrm{ln}a`$. The sine in Eq. (1) provides a periodic forcing term with period $`\mathrm{ln}a=\frac{2\pi \lambda }{n\nu }`$, while the natural period of the damped oscillations about the exact tracker solution is $`\mathrm{ln}a=8\pi \lambda /\sqrt{(6-n)[3(3n-2)\lambda ^2-8n^2]}`$ with decay $`e`$-life $`\mathrm{ln}a=4/(6-n)`$. Although the above results are strictly valid only for small $`A`$, they account remarkably well for the behavior shown in Figure 1. The forced period corresponds to the longer period of $`5.4`$ units ($`n=4`$) and somewhere between $`5.4`$ units and $`7.1`$ units ($`n=3`$), while the natural period corresponds to the shorter period of $`1.6`$ units ($`n=4,3`$) of the damped oscillations which are presumably excited by the non-linear effects that appear when $`A`$ is not small.
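A minimal sketch of this background evolution is given below, in units with $`8\pi G=1`$. The field equation is integrated in $`N=\mathrm{ln}a`$ against a perfect fluid scaling as $`a^{-n}`$. The parameter values are assumptions chosen for illustration: $`\lambda \sim 4.4`$ is roughly what the quoted natural period suggests, while $`A`$ and $`\nu `$ are guesses, not the values behind Figure 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, A, nu, n = 4.4, 0.3, 0.55, 4          # n = 4: radiation background
V  = lambda f: np.exp(-lam * f) * (1.0 + A * np.sin(nu * f))
dV = lambda f: np.exp(-lam * f) * (A * nu * np.cos(nu * f)
                                   - lam * (1.0 + A * np.sin(nu * f)))

def rhs(N, y):
    """Field equation in N = ln a against a fluid with rho_o = exp(-n N)."""
    f, x = y                                # x = dphi/dN
    rho_o = np.exp(-n * N)
    H2 = (rho_o + V(f)) / (3.0 - 0.5 * x**2)        # Friedmann constraint
    dlnH = -0.5 * ((n / 3.0) * rho_o / H2 + x**2)   # from the H-dot equation
    return [x, -(3.0 + dlnH) * x - dV(f) / H2]

sol = solve_ivp(rhs, [0.0, 25.0], [0.0, 0.1], max_step=0.01, rtol=1e-8)
f, x = sol.y
H2 = (np.exp(-n * sol.t) + V(f)) / (3.0 - 0.5 * x**2)
Omega = (0.5 * H2 * x**2 + V(f)) / (3.0 * H2)       # rho_phi / rho_total
print(f"Omega_phi oscillates over {Omega[200:].min():.2f}-{Omega[200:].max():.2f}"
      f" about the exact-tracker value n/lambda^2 = {n / lam**2:.2f}")
```

The run shows $`\mathrm{\Omega }_\varphi `$ oscillating about the exact-tracker value, the behavior described above.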
The energy density due to $`\varphi `$ is relatively small at the time of BBN and relatively large today for the parameter set in Figure 1. It is, of course, clear that in order to get the right behavior at BBN and today, one has to pick the "correct" parameter sets. This involves a bit of fine-tuning which, as we argue below, is quite reasonable and natural. If one thinks of the parameter set as being randomly selected, then there is a finite probability that the Universe will be accelerating today and that the energy density of $`\varphi `$ will be sub-dominant at BBN. What is this probability? If one selects $`A`$, $`\nu `$ and $`\lambda `$ randomly, the chance of getting a Universe like ours is of the order of 1 in 100. The exact number (for this potential) depends on how stringently we define "a Universe like ours". For example the tight constraints $`0.4<\mathrm{\Omega }_\varphi <0.8`$, $`w_\varphi <-0.5`$, and $`(\rho _\varphi /\rho _0)_{\mathrm{BBN}}<0.1`$ give a probability of 1 in 450, while the relaxed constraints $`0.1<\mathrm{\Omega }_\varphi <0.9`$ and $`w_\varphi <-0.25`$ and $`(\rho _\varphi /\rho _0)_{\mathrm{BBN}}<0.2`$ give a probability of 1 in 26. It is also very important to note that whatever the extent of fine-tuning, all of it is in dimensionless numbers. There are no energy scales in this scenario which are to be set by the present expansion rate of the Universe.
Power Spectra. To compare with CMB and large scale structure observations, we compute the power spectra of the perturbations in a TOE model. Perturbations evolve differently in the presence of the scalar field energy density. For example, perturbations typically grow only when the Universe is matter dominated. Therefore, we expect a non-zero $`\mathrm{\Omega }_\varphi `$ to lead directly to power suppression on the scales inside the horizon, with increased suppression for larger $`\mathrm{\Omega }_\varphi `$.
The prediction for the CMB angular power spectrum is plotted in Figure 2. The primeval power spectrum is scale-invariant with adiabatic initial conditions. Also plotted for comparison is a model ($`\mathrm{\Lambda }`$CDM) with cosmological constant $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{\Omega }_\varphi `$ today and the rest of the cosmological parameters also being the same. In further discussions we will contrast the results from the TOE model against this $`\mathrm{\Lambda }`$CDM model. A noteworthy feature in Figure 2 is the increase in the heights of the first two peaks compared to that of the $`\mathrm{\Lambda }`$CDM model. This stems from the fact that the gravitational potential decays more in the presence of the additional quintessence energy density. The decay of the potential at and after recombination (the so-called Integrated Sachs-Wolfe , or ISW, effect) leads to enhanced power on scales $`l\lesssim 600`$, after which the potential becomes irrelevant. Note that the increase in the amplitude of both the first and second peak cannot be mimicked by adding more baryons, which raise the odd peaks but lower the even ones.
On smaller scales $`(l\gtrsim 600)`$, the TOE model has smaller anisotropies. Here there are two competing effects. First, the difference between the TOE and the $`\mathrm{\Lambda }`$CDM models (around recombination when $`\mathrm{\Lambda }`$ is insignificant) is the presence of the extra quintessence energy density, which leads to the expansion rate in the two models being related as
$$H_{\mathrm{TOE}}(a)=H_{\mathrm{\Lambda }\mathrm{CDM}}(a)\times \left(1-\mathrm{\Omega }_\varphi (a)\right)^{-1/2}.$$
(2)
Eq. 2 implies that all the relevant scales at recombination (which occurs at $`a_r\sim 10^{-3}`$) are smaller in the TOE model by a factor of about $`\sqrt{1-\mathrm{\Omega }_\varphi (a_r)}`$. In particular, the damping scale is smaller, which increases the power on small scales for the TOE model relative to the $`\mathrm{\Lambda }`$CDM model. The second effect is the large scale normalization of the two models , and this second effect more than compensates for the first. COBE normalization is sensitive to scales around $`\ell =10`$ for which the differences in the two models with regard to the late-ISW effect is important. In particular, since $`\mathrm{\Lambda }`$ domination occurs very late, the ISW contribution around $`\ell =10`$ is much larger in the TOE model. This in turn implies that the normalization of the primeval power spectrum is smaller, a fact noticeable in the smaller amplitude of the photon power spectrum for the TOE model at small scales (and also the matter power spectrum, as we will soon see). One last effect that is worth pointing out concerns the difference in the peak positions in the two models (though unlike the peak amplitudes, it is probably not easily discerned). In particular, the TOE model has the acoustic features in its angular power spectrum shifted to smaller scales. This directly traces to the decrease in the angular diameter distance to the last scattering surface, for the TOE model. Of course, there is also the competing effect of the decrease in the size of the sound horizon at last scattering for the TOE model, which minimizes the effect.
The prediction for the matter power spectrum is plotted in Figure 3. The difference in power at the largest scales is due to COBE normalization and the difference in the super-horizon growth factor (which is sensitive to the equation of state of the cosmic fluid) for the perturbation. As one moves to smaller scales, which entered the horizon well before the present, the differences in the evolution of the matter perturbation become more pronounced. The presence of the extra quintessence energy stunts the growth of perturbation once a mode enters the horizon. So, the earlier the mode enters the horizon, the larger the growth suppression relative to the $`\mathrm{\Lambda }`$CDM model. In other words, smaller modes are monotonically more suppressed (something that may not be noticeable in the log plot) compared to the same modes in the $`\mathrm{\Lambda }`$CDM model. It might also be surprising that the $`\varphi `$ domination around $`a=10^{-6}`$ does not cause a more appreciable feature (i.e., suppression) in the power spectrum. The reason is that the smallest scales in Figure 3 have just entered the horizon at the time of $`\varphi `$ domination ($`a\sim 10^{-6}`$).
The normalization on the small scales is generally quoted in terms of $`\sigma _8`$, the rms mass fluctuation within an $`8h^{-1}\mathrm{Mpc}`$ sphere. For the parameters in Figure 1, the TOE model has $`\sigma _8=0.4`$. This is several sigma smaller than the preferred value (see e.g. ) of $`0.8`$, but could be rectified by a small blue-shift in the primordial spectrum .
Conclusions. We have constructed a model wherein the energy density tracks the dominant component in the Universe; satisfies the BBN constraints; and has the proper equation of state today. Further, this model makes definite predictions for large scale structure and for the CMB.
Perhaps the greatest drawback of this class of models is the arbitrariness of the potential. In particular we know of no theory which predicts a potential of the form given in Eq. (1). Nonetheless, we feel that the testable predictions of the model and the aesthetic quality it preserves that we do not live in a special epoch are of sufficient interest to warrant further study.
We thank Limin Wang for helpful discussions. The CMB spectra used in this work were generated by an amended version of CMBFAST . This work was supported by the DOE and the NASA grant NAG 5-7092 at Fermilab. EDS acknowledges support by the KOSEF Interdisciplinary Research Program grant 1999-2-111-002-5 and the Brain Korea 21 Project. |
# Observations of Faint, Hard-Band X-ray Sources in the Field of CRSS J0030.5+2618 with the Chandra X-ray Observatory and the Hobby-Eberly Telescope

Based on observations obtained with the Hobby-Eberly Telescope, which is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen.
## 1 Introduction
About 70% of the extragalactic X-ray background in the soft 0.5–2 keV band has been resolved into discrete sources by pencil-beam surveys with the ROSAT satellite (e.g., Hasinger et al. (1998)). The 0.5–2 keV source counts have reached a surface density of $`\sim 1000`$ deg<sup>-2</sup> at a discrete source detection limit of $`10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, and simple extrapolations suggest that all of the 0.5–2 keV extragalactic X-ray background will be resolved by a flux limit of $`2\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup> at which the surface density will be $`\sim 3000`$ deg<sup>-2</sup>. Optical identification programs (e.g., Schmidt et al. (1998)) have established that type 1 Active Galactic Nuclei (AGN), such as Seyfert 1 galaxies and Quasi-Stellar Objects (QSOs), are the dominant contributors above a 0.5–2 keV flux of $`5\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. A non-negligible number (about 16%) of type 2 AGN are seen as well.
Largely due to instrumental limitations, the nature of the sources that produce the $`>2`$ keV X-ray background is much less certain at present. It is important to solve this mystery since most of the energy density in the X-ray background is located above the ROSAT band. The best current constraints on the sources of the 2–10 keV X-ray background have come from the ASCA and BeppoSAX satellites. Long observations with these satellites have reached discrete source detection limits of $`\sim `$ (3–5)$`\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> and have resolved $`\sim 30`$% of the 2–10 keV background into discrete sources (e.g., Ogasaka et al. 1998; Giommi, Fiore & Perri 1999). The integrated number of sources, $`N`$, is consistent with the law $`N(>S)\propto S^{-3/2}`$ expected for a uniform distribution of sources in Euclidean space. The deepest 2–10 keV source counts to date have resolved $`\sim 60`$ sources deg<sup>-2</sup>. The faintest 2–10 keV sources appear to have flatter spectra (with energy indices of $`\alpha \simeq 0.5\pm 0.2`$) than those of typical unabsorbed AGN (e.g., Ueda et al. 1998), suggesting that a population of sources with spectra similar to that of the integrated X-ray background dominates above 2 keV. The population is thought to be at least partially composed of obscured AGN, and some of these hard sources have indeed been associated with such objects (e.g., Fiore et al. 1999; Akiyama et al. 2000). An important result, however, is that the majority of the hard sources found thus far appear to have counterparts in the soft X-ray band (Giommi, Fiore & Perri 1998). Obscured AGN might still create soft X-ray emission via electron-scattered X-rays or due to non-nuclear X-ray emission (e.g. starburst activity).
The arcsecond imaging quality and high-energy sensitivity of the Chandra X-ray Observatory (Weisskopf, O'Dell & van Speybroeck (1996)) promises to revolutionize our understanding of the X-ray background above 2 keV. The source confusion and misidentification problems that have dogged earlier hard X-ray surveys will be eliminated. In this paper, we use data from Chandra, the 8-m class Hobby-Eberly Telescope (HET), and public archives to study a small but well-defined sample of faint X-ray sources in the 2–8 keV band. Our sample contains several of the faintest $`>2`$ keV sources yet detected and identified. We address (1) the number counts at faint hard X-ray fluxes, (2) the nature of the faint hard X-ray sources, and (3) the issue of whether most faint hard X-ray sources have soft X-ray counterparts. Our X-ray sources lie in the PG 0027+260 (an eclipsing cataclysmic variable) field of the Cambridge-Cambridge ROSAT Serendipity Survey (e.g., Boyle, Wilkes & Elvis 1997). This field contains CRSS J0030.5+2618, a $`z=0.516`$ cluster of galaxies, and it was observed by Chandra for 44 ks during its first month of calibration-phase operations. The Galactic column density along this line of sight is $`(3.9\pm 0.4)\times 10^{20}`$ cm<sup>-2</sup> (Stark et al. 1992), corresponding to an optical depth of $`\tau <0.02`$ for the 2–8 keV band of primary interest here. In this paper we assume $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=\frac{1}{2}`$.
## 2 X-ray Observations and Data Analysis
### 2.1 ACIS Observation Details and Image Creation
The field containing CRSS J0030.5+2618 was observed with the Chandra Advanced CCD Imaging Spectrometer (ACIS; Garmire & Nousek 1999a ; Garmire et al. 2000, in preparation) for a total exposure time of 44 ks on 1999 August 17. ACIS consists of ten CCDs designed for efficient X-ray detection and spectroscopy. Four of the CCDs (ACIS-I; CCDs I0–I3) are arranged in a $`2\times 2`$ array with each CCD tipped slightly to approximate the curved focal surface of the Chandra High Resolution Mirror Assembly (HRMA). The remaining six CCDs (ACIS-S; CCDs S0–S5) are set in a linear array and tipped to approximate the Rowland circle of the objective gratings that can be inserted behind the HRMA. The CCD which lies on-axis in ACIS-S (S3) is orthogonal to the HRMA optical axis. It is a back-illuminated CCD that is well suited to imaging soft X-ray sources. Each CCD subtends an $`8.3^{\prime }\times 8.3^{\prime }`$ square on the sky. The individual pixels of the CCDs subtend $`0.5^{\prime \prime }\times 0.5^{\prime \prime }`$ on the sky. The on-axis image quality of the telescope is approximately $`0.5^{\prime \prime }`$ (FWHM); this quantity increases to $`1.0^{\prime \prime }`$ (critical sampling on the detector) at an off-axis angle of $`2^{\prime }`$. The image size also has a weak energy dependence, with poorer quality at higher energy.
The observation was performed in two segments (observation ID numbers 1190 and 1226), separated by 1.0 ks. CRSS J0030.5+2618 was placed at the aim point for the ACIS-S array (on CCD S3) during the observation. The aim point position was $`\alpha _{2000}=00^\mathrm{h}30^\mathrm{m}32.5^\mathrm{s}`$, $`\delta _{2000}=+26^{\circ }18^{\prime }13.4^{\prime \prime }`$. The focal plane temperature was $`-99.3^{\circ }`$C. Faint mode was used for the event telemetry format, and ASCA grade 7 events were rejected on orbit to prevent telemetry saturation (see §5.7 of the AXAF Observatory Guide for a discussion of grades). Only one 3.3 s frame was "dropped" from the telemetry.
Here we will focus on the data from CCD S3 since it has not shown the charge transfer inefficiency (CTI) increase that has affected the front-illuminated CCDs (see Garmire & Nousek 1999b ). To avoid problems associated with the dither of Chandra (see §4.9.2 of the AXAF Observatory Guide), we also neglect data within $`20^{\prime \prime }`$ of the edge of S3. Our search area thus comprises 63.5 square arcminutes or 92% of S3. The two observation segments were co-added using the event browser software (Broos et al. (1999)). We used the CIAO datamodel software, provided by the Chandra X-ray Center, to create $`0.5^{\prime \prime }`$ pixel<sup>-1</sup> images in the "full" (0.2–8 keV), "soft" (0.2–2 keV), and "hard" (2–8 keV) bands (neglecting the 8–10 keV data improves the signal-to-noise ratio in the hard band; e.g., Baganoff 1999). Our 0.2, 2, and 8 keV band boundaries have uncertainties of 80 eV, 20 eV and 160 eV, respectively. These uncertainties are smaller than or comparable to the S3 spectral resolution, and the 0.2 and 8 keV uncertainties are innocuous due to the small effective area of HRMA/ACIS below 0.3 keV and above 8 keV. The 2 keV band boundary is furthermore convenient because it is close to the energy of the HRMA response drop due to the iridium M-edge.
We have only used events with ACIS grades of 0, 2, 8, 16 and 64. For the background level during this observation, this conservative grade set appears to provide the best overall performance when trying to detect faint, hard sources on S3, although we explore other grade set choices in §2.4. For this 44 ks observation, the average background in the hard band varies across S3 in the range $`\sim `$ 0.03–0.05 count pixel<sup>-1</sup>.
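The band selection and grade screening described above amount to simple event-list filtering. The sketch below illustrates the logic on a synthetic event list with numpy; it is not the CIAO datamodel syntax, and the column names are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic event list standing in for the level-2 products; the column
# names ('x', 'y', 'energy', 'grade') are hypothetical
ev = np.zeros(50000, dtype=[('x', 'f4'), ('y', 'f4'),
                            ('energy', 'f4'), ('grade', 'i4')])
ev['x'], ev['y'] = rng.uniform(0.0, 1024.0, (2, ev.size))
ev['energy'] = rng.uniform(0.2, 10.0, ev.size)
ev['grade'] = rng.choice([0, 2, 8, 16, 64, 255], ev.size)

GOOD_GRADES = [0, 2, 8, 16, 64]          # the conservative set used in the text

def band_image(ev, lo, hi, shape=(1024, 1024)):
    """Bin events with lo <= E < hi keV and an accepted grade into an image."""
    keep = ((ev['energy'] >= lo) & (ev['energy'] < hi)
            & np.isin(ev['grade'], GOOD_GRADES))
    img, _, _ = np.histogram2d(ev['y'][keep], ev['x'][keep], bins=shape,
                               range=[[0, shape[0]], [0, shape[1]]])
    return img

soft, hard = band_image(ev, 0.2, 2.0), band_image(ev, 2.0, 8.0)
print(f"soft: {int(soft.sum())} events, hard: {int(hard.sum())} events")
```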
We corrected the Chandra astrometry by comparison with ROSAT HRI, Palomar Optical Sky Survey (POSS), and Isaac Newton Telescope data (see below). The QSO CRSS J0030.6+2620 ($`z=0.493`$) and the Seyfert 2 CRSS J0030.7+2616 ($`z=0.247`$) were particularly useful in this regard (see Boyle et al. 1995 and Boyle et al. 1997).
### 2.2 Source Searching
We used the CIAO wavdetect software (Dobrzycki et al. 1999; Freeman et al. 2000) to search our images for sources. Our primary interest is in 2–8 keV point sources. We used a significance threshold of $`1\times 10^{-6}`$ and computed 5 scaled transforms for wavelet scale sizes of 1, 2, 4, 6 and 8 pixels. In our hard band image, we have detected 9 point sources on S3 (we also detect a source at $`\alpha _{2000}=00^\mathrm{h}30^\mathrm{m}57.8^\mathrm{s}`$, $`\delta _{2000}=+26^{\circ }17^{\prime }44.3^{\prime \prime }`$, but this source lies $`19^{\prime \prime }`$ from the edge of S3 and is hence excluded from consideration). These are listed in Table 1 and shown in Figures 1 and 2. We estimate that our absolute positions in the hard band are good to within $`3^{\prime \prime }`$. They are therefore of comparable quality to those used in the ROSAT High-Resolution Imager survey of the Lockman Hole (Hasinger et al. 1998), and they are much better than earlier positions in the hard X-ray band. All of these sources, except source 5, were also detected in the independent soft-band image (usually with a significantly higher number of counts), giving us confidence in their reality. The relative positional agreement between hard-band and soft-band sources was typically better than $`1^{\prime \prime }`$. Given this relative positional agreement, the probability that any given source is a false match is $`\sim 0.2`$%. While the background in the soft band has significant spatial structure due to instrumental effects (e.g., node boundaries) and the presence of the cluster, this does not appear to affect our matching of hard and soft sources. Source 7 lies at a position where the soft-band background is elevated by $`\sim 25`$% by the central node boundary. This elevated background arises due to cosmic rays interacting with the physically separated frame store regions of the S3 CCD, and it is blurred when the Chandra dither is removed by the pipeline software. The slightly elevated background does not compromise the detection of source 7, but our soft-band photometry may have a small systematic error in addition to the statistical error given in Table 1. In addition, we are confident that none of our sources is due to a "hot pixel" because these would show the characteristic Lissajous dither pattern from the spacecraft aspect. We have compared the angular extents of our sources with the 90% encircled energy radius of the Chandra PSF (see §3.8 of the AXAF Observatory Guide), and although the photon statistics are limited we find no clear evidence for anomalous extent.
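For a spatially random (Poisson) field of soft-band sources, the chance that an unrelated source falls within the matching radius of a hard-band position follows directly from the soft-source surface density. The sketch below shows the arithmetic; the density and radius used here are assumptions for illustration, not necessarily the exact inputs behind the quoted probability.

```python
import numpy as np

area_arcsec2 = 63.5 * 3600.0       # S3 search area (63.5 arcmin^2)
n_soft, r_match = 26, 1.0          # assumed soft-band sources; radius (arcsec)

# Chance of an unrelated source inside the matching circle of one position,
# for a spatially random (Poisson) soft-source field
p_false = 1.0 - np.exp(-(n_soft / area_arcsec2) * np.pi * r_match**2)
print(f"P(false match) = {100.0 * p_false:.3f}% per hard-band source")
```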
We have compared our number counts in the soft band with those of Hasinger et al. (1998) as a rough consistency check. Hasinger et al. (1998) found $`940\pm 170`$ sources deg<sup>-2</sup> at a 0.5–2 keV flux level of $`1\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. Our source density in the soft band is $`1500\pm 300`$ sources deg<sup>-2</sup> at a 0.2–2 keV flux level of $`6\times 10^{-16}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. For plausible bandpass corrections, our number counts are roughly consistent with a simple extrapolation of the Hasinger et al. (1998) $`\mathrm{log}N`$–$`\mathrm{log}S`$ relation. We note that there is probably an enhancement in our number counts due to the presence of the cluster CRSS J0030.5+2618 (see §4 for further discussion).
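The conversion from a handful of detections to a surface density with small-number Poisson errors uses the Gehrels (1986) approximations. The sketch below reproduces the arithmetic for an assumed number of soft-band detections on S3.

```python
import numpy as np

n_src, area_arcmin2 = 26, 63.5    # n_src assumed so as to give ~1500 deg^-2

# Gehrels (1986) approximate 1-sigma Poisson limits for small counts
upper = n_src + 1.0 + np.sqrt(n_src + 0.75)
lower = n_src * (1.0 - 1.0 / (9.0 * n_src) - 1.0 / (3.0 * np.sqrt(n_src)))**3

to_deg2 = 3600.0 / area_arcmin2
print(f"density = {n_src * to_deg2:.0f} "
      f"(+{(upper - n_src) * to_deg2:.0f} / "
      f"-{(n_src - lower) * to_deg2:.0f}) deg^-2")
```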
The minimum detectable 2–8 keV flux varies across S3 due to effects such as point spread function (PSF) broadening, vignetting, and spatially dependent CTI. Within $`3.5^{\prime }`$ of the aim point, we estimate our flux limit to be (4–5)$`\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (corresponding to $`\sim 6`$ counts), while at larger off-axis angles our flux limit increases fairly quickly (e.g., see Figure 6.5 of the AXAF Proposers' Guide). While this spatially dependent flux limit should be kept in mind, none of our main results below is sensitive to the precise details of our flux limit. Even at the locations on S3 furthest from the aim point, the observation is $`\gtrsim 5`$ times deeper than previous hard X-ray surveys.
### 2.3 Source Parameterization
wavdetect performs photometry on detected sources, and we have cross checked the wavdetect results with manual aperture photometry. We find good agreement between the two techniques for all sources other than source 2, where the wavdetect photometry clearly has failed (wavdetect finds source 2 but claims it only has 0.9 counts in the hard band). In Table 1 we quote our manual photometry results for source 2 and wavdetect photometry results for all other sources. These have not been corrected for vignetting. We also quote the "band ratio" defined as the ratio of hard-band to soft-band counts. The errors for our band ratio values have been computed following the "numerical method" described in §1.7.3 of Lyons (1991); in the Poisson limit this method is more reliable than the standard approximate variance formula (e.g., see §3.3 of Bevington & Robinson 1992). In Figure 3 we compare our band ratios to power-law models with varying amounts of neutral absorption. Several of our sources appear likely to suffer significant internal X-ray absorption. In addition, wavdetect reports a significance level for each source, defined as the number of source counts divided by the Gehrels (1986) standard deviation of the number of background counts (see Table 1). For source 2 we have calculated the 2–8 keV significance using our manual photometry. We do not report a 0.2–2 keV significance for source 5 because it is not detected in this band.
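For sources with only a handful of counts, the band-ratio uncertainty can be propagated by direct Monte Carlo over the Poisson fluctuations, in the spirit of the "numerical method" of Lyons (1991). The sketch below neglects background and uses assumed counts, so it illustrates the procedure rather than reproducing our exact calculation.

```python
import numpy as np

def band_ratio_interval(n_hard, n_soft, n_mc=100000, seed=0):
    """Monte-Carlo propagation of Poisson scatter into the hard/soft band
    ratio; background is neglected in this sketch."""
    rng = np.random.default_rng(seed)
    h = rng.poisson(n_hard, n_mc).astype(float)
    s = rng.poisson(n_soft, n_mc).astype(float)
    r = h[s > 0] / s[s > 0]          # drop unusable zero-count soft draws
    lo, hi = np.percentile(r, [15.87, 84.13])    # central 68.3% interval
    return n_hard / n_soft, lo, hi

ratio, lo, hi = band_ratio_interval(8, 15)       # assumed counts
print(f"band ratio = {ratio:.2f}  (+{hi - ratio:.2f} / -{ratio - lo:.2f})")
```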
We have used event browser to create full-band light curves for our sources. We analyzed these for variability using a Kolmogorov-Smirnov test. Most of our sources do not show significant evidence for variability. Source 8 may show variability by a factor of $`\sim 2`$ on a timescale of $`\sim 10000`$ s. The fact that the photon arrival times for our sources are distributed fairly evenly throughout the observation length is a further argument against some brief, transient effect (e.g., a transient "hot pixel" or a cosmic ray) producing spurious sources. We have also examined the energy and ACIS grade distributions for our sources and find no anomalies that might indicate spurious instrumental effects.
We have calculated vignetting-corrected 0.2–2 keV and 2–8 keV fluxes for our sources using the counts from Table 1, and these are given in Table 2. We assume a $`\mathrm{\Gamma }=1.9`$ power-law model with the Galactic column density, and our fluxes have not been corrected for Galactic or internal absorption. Our 2–8 keV observed fluxes (those of primary scientific interest here) are quite insensitive to the assumed column density for $`N_\mathrm{H}\lesssim 10^{22}`$ cm<sup>-2</sup> (compare with Figure 3). Our 0.2–2 keV fluxes are somewhat more sensitive to the assumed column density; for our sources with large band ratios the 0.2–2 keV fluxes calculated with Galactic absorption are 5–20% lower than those calculated using $`\mathrm{\Gamma }=1.9`$ and column densities estimated from Figure 3. For our flux calculations, we have used the calibration-phase ACIS redistribution matrix files (rmfs) from the ACIS calibration group (S. Buczkowski & N. Schulz, private communication), and we have also used the calibration-phase ancillary response files (arfs; N. Schulz, private communication). These spectral responses are for a focal plane temperature of $`-100^{\circ }`$C, and they assume filtering upon ASCA grades 0, 2, 3, 4 and 6. To correct for our more conservative choice of grades, we have multiplied our 0.2–2 keV fluxes by a factor of $`1.23\pm 0.12`$ and our 2–8 keV fluxes by a factor of $`1.53\pm 0.23`$ (statistical errors only). These factors have been determined by comparing the numbers of events for our sources obtained with the two different grade filtering methods (source 6 is excluded in these comparisons since it would otherwise dominate the results; we compute separate factors for source 6 of $`1.39\pm 0.06`$ and $`1.80\pm 0.16`$). We estimate that our fluxes have calibration uncertainties of $`\sim 30`$%, but it is clear that we are detecting 2–8 keV sources much fainter than were detected with ASCA and BeppoSAX.
In addition to the sources described above, we will introduce two new sources below: source AG1 (found on S3 when ASCA grade filtering is used; see §2.4) and source I3 (found on ACIS CCD I3; see §3.2). To compute fluxes for source AG1, we have followed the method of the previous paragraph but have not made the grade correction (since we use ASCA grade filtering for this source). To compute fluxes for source I3, we again followed the method of the previous paragraph. However, ACIS I3 spectral responses are only available at present for a focal plane temperature of $`-90^{\circ }`$C. We have used these but recognize that this may introduce systematic error into our flux calculations for source I3. Therefore, we do not use the fluxes for source I3 in any of our subsequent analysis.
### 2.4 Additional Safety Checks
The background level in the S3 CCD shows significant flaring during the observation due to "space weather" (primarily soft electrons interacting with the CCD). We have repeated the analysis above after editing out the 8.0 ks when the background level was highest. We find the same 2–8 keV sources as those listed in Table 1 except that source 4 is not detected in the edited data set. Source 4 is detected in the 0.2–2 keV edited data, and we therefore believe that it is reliably detected in the hard band in the unedited data set (see Figure 2).
We have also performed source searches on images where we relax our grade screening so that we accept ASCA grades 0, 2, 3, 4 and 6. In these searches we detect most of the sources discussed in §2.2, but we fail to detect sources 2, 4 and 5 in the hard band. We thus infer a generally lower source detection efficiency with this grade screening. Our average background across S3 with this grade screening varies from $`\sim `$ 0.07–0.09 count pixel<sup>-1</sup>, a factor of $`\sim 2`$ higher than in §2.1 and §2.2. However, we do detect one new hard band source, which we will hereafter refer to as "source AG1" ("AG" is for "ASCA grade"). We consider this source to be reliable since it is also detected in our independent soft band image, and we give its properties in Tables 1 and 2 (also see Figure 1). This source appears to have been missed by our source searching in §2.2 because several of its counts were rejected by our conservative grade filtering prescription. This highlights that it is difficult to choose a single optimal grade filtering criterion when dealing with sources with few counts; chance fluctuations in source grades can be important in this limit.
We have investigated if our choice of wavdetect wavelet scale sizes affects our results, and we find no evidence for this. We have repeated the searching of §2.2 using wavelet scale sizes of 1, 1.414, 2, 2.828, 4, 5.657, 8, 11.314 and 16 pixels (a "$`\sqrt{2}`$ sequence"), and we find the same sources as in §2.2.
We are developing a matched filter code, based on the HRMA PSF, which we have used to check our wavdetect source detections. This preliminary matched filter code finds the same 9 hard-band sources on S3 that we have discussed in §2.1 and §2.2, and, like wavdetect, it finds soft-band counterparts for all sources other than source 5 (see §2.2).
We have also examined the spatial distribution of 2–8 keV sources on S3 to see if we can detect any spatial non-uniformity. We have used a two-dimensional "Kolmogorov-Smirnov test" (see §14.7 of Press et al. 1992), and we have performed Monte-Carlo simulations to compute significance values for small numbers of sources. The 9 hard-band sources of §2.2 are found to be consistent with a uniform distribution. We have also performed the test including source AG1 (see above), and we found this sample of 10 sources to be consistent with a uniform distribution. However, we note that the two-dimensional "Kolmogorov-Smirnov test" has limited statistical power for only 9–10 sources. In fact, we might have expected some spatial nonuniformity due to the fact that our sensitivity decreases away from the aim point; this may partially explain the absence of sources toward the lower-left part of Figure 1.
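A schematic version of this Monte-Carlo calibration is sketched below, assuming (for simplicity only) uniform sensitivity over a unit square; a real test would have to use the actual detector geometry and source positions:

```python
import numpy as np

rng = np.random.default_rng(1)

def d_stat(pts):
    # Fasano & Franceschini-style one-sample statistic against a uniform
    # model on the unit square: max over data points and quadrants of the
    # |empirical fraction - model fraction| difference
    d = 0.0
    for x, y in pts:
        for mask, p in [((pts[:, 0] < x) & (pts[:, 1] < y), x * y),
                        ((pts[:, 0] < x) & (pts[:, 1] > y), x * (1 - y)),
                        ((pts[:, 0] > x) & (pts[:, 1] < y), (1 - x) * y),
                        ((pts[:, 0] > x) & (pts[:, 1] > y), (1 - x) * (1 - y))]:
            d = max(d, abs(mask.mean() - p))
    return d

def mc_significance(pts, n_mc=2000):
    # calibrate the statistic by simulating uniform samples of the same size
    d_obs = d_stat(pts)
    sims = np.array([d_stat(rng.random(pts.shape)) for _ in range(n_mc)])
    return (sims >= d_obs).mean()

positions = rng.random((9, 2))   # stand-in for the 9 normalized positions
print(mc_significance(positions))
```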
We have searched for spatial correspondences between our hard-band source positions and instrumental features, and we find none. In particular, we stress that sources 2, 3, 5 and 7 (the four blank-field sources of §3) are not linearly distributed along the dithered central node boundary (see §2.2 for a discussion of the node boundary). Source 7 is the closest of these four to the node boundary, and it is still $`\sim 9^{\prime \prime }`$ from it (much larger than the PSF size at this position).
## 3 Optical Observations and Data Analysis
### 3.1 Source Matching and Optical Photometry
We have compared the positions of our 2–8 keV sources with optical sources on the Palomar Optical Sky Survey (POSS) plates and an archival 600 s $`R`$-band image taken with the 2.5-m Isaac Newton Telescope (INT) on 19 October 1995 (see Figure 4 and §2.5 of Boyle et al. 1997). The INT image is sensitive down to $`R\sim 21.7`$. We take an optical source to be positionally coincident with a Chandra source when its centroid is within $`3^{\prime \prime }`$ of the Chandra position in Table 1. Five of our nine sources from §2.2 are detected on the POSS plates. Two of these are AGN that have been previously identified by Boyle et al. (1997): the QSO CRSS J0030.6+2620 at $`z=0.493`$ (our source 6) and the Seyfert 2 CRSS J0030.7+2616 at $`z=0.247`$ (our source 8). The other four sources are not detected either on the POSS plates or in the deeper INT image, and these are henceforth referred to as "blank-field sources." The number of blank-field sources we have obtained appears reasonable when compared with an extrapolation to lower hard X-ray fluxes of the $`R`$-magnitude versus 2–10 keV flux relation shown in Figure 3 of Akiyama et al. (2000); we would expect optical counterparts with $`R\sim `$ 18–23.5. It is also generally consistent with the results of deep soft X-ray surveys (compare with Figure 3 of Hasinger et al. 1999). We have used the APM catalog (McMahon & Irwin (1992)) and the INT image to determine $`R`$ magnitudes or $`R`$-magnitude limits for our sources; these are given in Table 2.
Source AG1 from §2.4 is not detected on the POSS plates, but it is coincident with a faint ($`R=21.5`$) object seen in the INT image.
Using the INT image, we find an $`R`$-band source density of 10.3 per square arcminute down to $`R=21.7`$. Given this source density and our $`3^{\prime \prime }`$ error circles, the probability that any given 2–8 keV source has a false optical counterpart is 0.08. However, we note that most of the counterparts have $`R`$ magnitudes that are substantially brighter than the detection limit for the INT image, and their identifications are correspondingly more secure. The HET spectroscopy below combined with the rarity of AGN on the sky supports the correctness of our optical matching (see §4.1 of Schmidt et al. 1998 for details).
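The quoted false-match probability follows from elementary Poisson statistics; a one-line check reproduces the 0.08 value:

```python
import numpy as np

density = 10.3 / 3600.0            # R-band sources per square arcsec
radius = 3.0                       # matching radius in arcsec
p_false = 1.0 - np.exp(-density * np.pi * radius**2)
print(round(p_false, 3))           # -> 0.078, i.e. the quoted 0.08
```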
We have searched for NRAO VLA Sky Survey (NVSS; Condon et al. 1998) sources coincident with our X-ray sources and find none. This area of sky has not been covered by the VLA FIRST survey (Becker, White & Helfand 1995).
### 3.2 Hobby-Eberly Telescope Spectroscopy
We used the HET to obtain spectra for the three unidentified 2–8 keV sources from §2.2 with optical counterparts on the POSS plates. We also obtained a spectrum of the $`R=19.1`$ optical counterpart to a 2–8 keV source located on ACIS-I CCD I3 (see Table 1 for source details). We have not yet attempted to obtain an optical spectrum for source AG1.
The HET, located at McDonald Observatory, is the first optical/infrared 8-m class telescope to employ a fixed-altitude (Arecibo-type) design (Ramsey et al. (1998)). All spectra were obtained in October 1999 with the Marcario Low Resolution Spectrograph (LRS; Hill et al. 1998; Hill et al. 2000; Schneider et al. 2000) mounted at the prime focus of the HET. A $`2.0^{\prime \prime }`$ slit and 300 line mm<sup>-1</sup> grism/GG385 blocking filter produced spectra from 4400 Å to 9000 Å at 24 Å resolution. The exposure time per source ranged from 20–30 minutes. The image quality as delivered on the detector was typically $`2.5^{\prime \prime }`$ (FWHM). Wavelength calibration was performed with a fourth-order polynomial fit to a set of Cd/Hg/Ne/Zn lines; the rms of the fit was 0.8 Å. Observations of the spectrophotometric standards of Oke & Gunn (1983) were used to perform the relative flux calibration. Spectra of the four objects are displayed in Figure 5.
Source 1: Source 1 has H$`\alpha `$ and \[O iii\] emission at $`z=0.269`$, with a derived absolute magnitude of $`M_\mathrm{B}=-21.9`$. It has a large Balmer decrement with $`H\alpha /H\beta \stackrel{>}{}10`$, and its optical continuum slope is red (for an AGN) with $`\alpha =-2.5\pm 0.4`$.<sup>3</sup><sup>3</sup>3$`F(\nu )\propto \nu ^\alpha `$. Typical "blue" quasars have $`-1.3\stackrel{<}{}\alpha \stackrel{<}{}+0.1`$. Optical continuum slopes in this paper are for 5500–8800 Å in the observed frame. The H$`\alpha `$ line is resolved with a FWHM of 1400 km s<sup>-1</sup>.
Source 4: Source 4 is definitely a $`z=0.247`$ galaxy (H$`\alpha `$, \[O iii\], and \[O ii\] emission, plus a strong Mg b absorption feature) with $`M_\mathrm{B}=-21.3`$. Unfortunately, our spectrum did not permit a search for \[Ne v\] emission at 3426 Å (compare with §4 of Schmidt et al. 1998). Its Balmer decrement is $`H\alpha /H\beta \stackrel{>}{}3`$, and its optical continuum slope is $`\alpha =-1.8\pm 0.4`$. The optical continuum emission is dominated by star light. The H$`\alpha `$ line is unresolved with a FWHM of $`<900`$ km s<sup>-1</sup>. With a 2–8 keV flux of $`4.1\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, this is the faintest 2–8 keV source yet identified to our knowledge. Somewhat surprisingly, the redshift of this source is the same as that of CRSS J0030.7+2616 (our source 8), but we do not have any reason to suspect identification problems. A comparison of our HET spectrum (see Figure 5) and the spectrum for CRSS J0030.7+2616 given in Figure 1 of Boyle et al. (1995) shows that the equivalent width of H$`\alpha `$ in CRSS J0030.7+2616 is $`\stackrel{>}{}2`$ times larger than that of source 4.
Source 9: Source 9 is difficult to interpret. The brightest optical source in the X-ray error circle is clearly extended on the INT image (see Figure 4). The optical counterpart is off the X-ray position by $`3^{\prime \prime }`$, which is by far the largest discrepancy of any of the optical identifications shown in Figure 4. The HET spectrum for this source shows one strong narrow line at 7411 Å that is most likely H$`\alpha `$ at $`z=0.129`$. The line is unresolved with a FWHM of $`<900`$ km s<sup>-1</sup>, and the optical continuum is red with $`\alpha =-2.9\pm 0.4`$. The residual ripple in the continuum below 7000 Å is an artifact due to incomplete cancellation of fringing from a pellicle by the flat field. The relatively large difference between the optical and X-ray positions suggests that this may not be the correct identification. Another possibility is that this galaxy is a member of a small group and that the X-ray emission arises from the general environment and not an individual galaxy. However, the hard X-ray spectral shape would be difficult to understand in this case.
Source I3: Source I3 is a strong-lined quasar at $`z=1.665`$ with $`M_\mathrm{B}=-24.7`$. The lines shown in Figure 5 have FWHM of $`\sim 5000`$ km s<sup>-1</sup>, and the optical continuum slope of $`\alpha =-1.2\pm 0.4`$ is consistent with that of "normal" blue quasars.
Using our HET spectrum, we estimate $`(V-R)\approx +0.3`$ for source I3. For the other sources we estimate $`(V-R)\approx +0.5`$.
### 3.3 The Blank-Field Sources
We have compared the properties of the blank-field sources to those of the other sources to gain clues to their nature. Examination of Figure 6 shows that the blank-field sources are not the faintest 2–8 keV sources in our sample; we have obtained successful HET identifications for sources with comparable or smaller 2–8 keV fluxes. This is comforting in that it suggests that our blank-field sources are indeed reliable X-ray detections. Figures 6a and 6b also show that the blank-field sources have larger X-ray to $`V`$-band flux ratios than the other sources, as expected. However, these X-ray to $`V`$-band flux ratios are still consistent with those expected for AGN (compare with Figure 1 of Maccacaro et al. 1988). Figure 6c suggests that the blank-field sources may be somewhat harder than the other sources, but we do not consider this result to be statistically significant at present.
If the blank-field X-ray sources are in normal $`L^{*}`$ galaxies (Kirshner et al. 1983; Efstathiou, Ellis & Peterson 1988), they must be at moderately high redshifts to explain their nondetections in the INT image (corresponding to $`R>21.7`$). To avoid detection in the INT image, an $`L^{*}`$ Scd galaxy must have $`z\stackrel{>}{}0.75`$, and an $`L^{*}`$ elliptical, $`z\stackrel{>}{}0.55`$. Typical $`L^{*}`$ galaxies would thus need to be at higher redshifts than that of the cluster CRSS J0030.5+2618. The moderately high redshifts required for host galaxies also serve to rule out single extragalactic X-ray binaries and other low-luminosity X-ray sources associated with galaxies from creating the observed X-ray emission (unless the host galaxies have extremely low optical luminosities; we would have detected a host galaxy that is sub-$`L^{*}`$ by two magnitudes to $`z=0.2`$). Large hard X-ray luminosities of $`L_{2-8}\stackrel{>}{}5\times 10^{41}`$ erg s<sup>-1</sup> are required for $`z\stackrel{>}{}0.2`$. Such hard X-ray luminosities are commonly seen among local AGN. They might also be generated by the most extreme "pure" starburst galaxies, although even the most X-ray luminous starbursts known at present have hard X-ray luminosities $`\stackrel{<}{}3\times 10^{41}`$ erg s<sup>-1</sup> (e.g., Moran, Lehnert & Helfand 2000). In addition, the X-ray spectra of the blank-field sources are significantly harder than those seen for "pure" starbursts at low redshift.
If the blank-field X-ray sources are "bona-fide" quasars, with $`M_\mathrm{B}<-22.3`$ for our adopted cosmology, they would need to have $`z>1.75`$ to avoid detection on the INT image. Quasars with $`z<3`$ and $`M_\mathrm{B}<-23.2`$ would have been seen in the INT image, and a quasar as luminous as 3C273 would have been detected at redshifts greater than 4.
## 4 Discussion and Conclusions
Our results, obtained from a small but well-defined sample of 2–8 keV sources, extend previous X-ray background studies in several ways. First, we have detected and securely identified hard X-ray sources about an order of magnitude fainter than has previously been possible. At our 2–8 keV flux limit of $`\sim `$ (4–8)$`\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (spatially dependent; see §2.2), we find ten sources (including AG1) in our 63.5 arcmin<sup>2</sup> search area, corresponding to a 2–8 keV source density of $`570\pm 180`$ deg<sup>-2</sup>. Even allowing for the possibility of one spurious source detection, this source density is still $`\sim 10`$ times larger than previous number counts in this energy band (e.g., Ogasaka et al. 1998; Giommi et al. 1998), and down to $`2\times 10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> our number counts appear consistent with the ASCA fluctuation analyses of Gendreau et al. (1998). However, our source counts may be somewhat enhanced due to the presence of the cluster CRSS J0030.5+2618 (see below). The fact that we detect sources down to $`4\times 10^{-15}`$ erg cm<sup>-2</sup> s<sup>-1</sup> suggests that the number counts versus flux relation departs from the Euclidean form (the X-ray background would be resolved at $`10^{-14}`$ erg cm<sup>-2</sup> s<sup>-1</sup> without a break in the $`\mathrm{log}N`$–$`\mathrm{log}S`$ slope), although further data are clearly needed to quantify the break parameters.
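The quoted source density and its Poisson error follow directly from the numbers in this paragraph:

```python
import numpy as np

n_src, area = 10, 63.5                 # sources found, arcmin^2 searched
scale = 3600.0 / area                  # converts a count in 63.5 arcmin^2 to deg^-2
print(n_src * scale)                   # -> ~567 deg^-2
print(np.sqrt(n_src) * scale)          # -> ~179 deg^-2, i.e. 570 +/- 180
```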
Four of our five S3 sources with optical spectroscopy have $`z<0.3`$, which clearly excludes them from being associated with the cluster CRSS J0030.5+2618 at $`z=0.516`$. The fifth, the QSO CRSS J0030.6+2620, differs in redshift from the cluster by $`\mathrm{\Delta }z=0.023`$ corresponding to a line-of-sight separation of $`\sim 100`$ Mpc. While this QSO is certainly not a bound member of the cluster, it could be associated with the large-scale cosmic structure producing the cluster. As discussed in §3.3, if any of our blank-field sources (or AG1) lie in the cluster, they must be sub-$`L^{*}`$ galaxies producing large hard X-ray luminosities of $`L_{2-8}\stackrel{>}{}4\times 10^{42}`$ erg s<sup>-1</sup>. Even in the most conservative (and unlikely) case, where we allow all objects with unknown $`z`$ to lie in the cluster, our cluster-corrected source density is still a factor of $`\sim 4.5`$ times higher than previously attained by ASCA and BeppoSAX. The same statement obtains for possible gravitational lensing effects by the cluster.
We detect nine of our ten 2–8 keV sources in the 0.2–2 keV band. While our statistics are admittedly limited, this result is consistent with the finding by Giommi, Fiore & Perri (1998) that most hard X-ray sources have soft X-ray counterparts, and it extends this result downward in flux by about an order of magnitude (see §1 for discussion). Down to our flux limit, we can show with $`>90`$% confidence that hard-band only sources comprise $`<40`$% of the total hard-band source population. Deeper Chandra observations are needed to determine if a large population of hard-band only sources emerges at still fainter flux levels.
We thank C.S. Crawford and A.C. Fabian for providing the archival INT image, G.M. Hill and M. Shetrone for help with the HET data acquisition, P.S. Broos, A.C. Fabian, E.D. Feigelson and G. Hasinger for helpful discussions, and D.H. Saxe for valuable computer support. We thank all the members of the Chandra team for their enormous efforts. We gratefully acknowledge the financial support of NASA grant NAS 8-38252 (GPG, PI), NASA LTSA grant NAG5-8107 (WNB), NASA GSRP grant NGT5-50247 (AEH), and NSF grant AST99-00703 (DPS). The HET is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universität München, and Georg-August-Universität Göttingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. The Marcario LRS is a joint project of the University of Texas at Austin, the Instituto de Astronomia de la Universidad Nacional Autonoma de Mexico, Ludwig-Maximilians-Universität München, Georg-August-Universität Göttingen, Stanford University, and the Pennsylvania State University. This research is partially based upon data from the Isaac Newton Group archive.
no-problem/0002/hep-ph0002059.html | ar5iv | text | # A new approach to the parametrization of the Cabibbo-Kobayashi-Maskawa matrix
## Abstract
The CKM-matrix V is written as a linear combination of the unit matrix I and a matrix U which causes intergenerational-mixing. It is shown that such a V results from a class of quark-mass matrices. The matrix U has to be hermitian and unitary and therefore can depend at most on 4 real parameters. The available data on the CKM-matrix including CP-violation can be reproduced by $`V=(I+iU)/\sqrt{2}`$. This is also true for the special case when U depends on only 2 real parameters. There is no CP-violating phase in this parametrization. Also, for such a V the invariant phase $`\mathrm{\Phi }\equiv \varphi _{12}+\varphi _{23}-\varphi _{13}`$ satisfies a criterion suggested for "maximal" CP-violation.
It is more than twenty-five years since the first explicit parametrization for the six quark case was given for the so-called Cabibbo-Kobayashi-Maskawa (CKM) matrix. Since then many different parametrizations have been suggested. In this note, we wish to suggest a new approach to parametrizing the unitary CKM matrix V. For this purpose, we write V as a linear combination of the unit matrix $`I`$ and another matrix $`U`$, so that
$$V(\theta )=\mathrm{cos}\theta I+i\mathrm{sin}\theta U$$
(1)
It is clear that for V to be unitary, U has to be both hermitian and unitary. Here $`\theta `$ is a parameter which will be fixed later. In Eq. (1), for the first term the physical (or the quark mass-eigenstate) and the gauge bases are the same. The second term, through U, represents the difference in the two bases. It also causes inter-generational mixing and makes it possible for V to give CP-violating processes. The break-up of V in two parts makes it possible to have a simple parametrization. We now show that knowing $`V(\theta )`$ allows us to construct the quark-mass matrices in terms of the parameters of V and the quark-masses.
Form of the quark-mass matrices. In the gauge-basis, the part of the standard model Lagrangian relevant for us can be written as
$$=\overline{q}_{uL}^{\prime }M_uq_{uR}^{\prime }+\overline{q}_{dL}^{\prime }M_dq_{dR}^{\prime }+\frac{g}{\sqrt{2}}\overline{q}_{uL}^{\prime }\gamma _\mu q_{dL}^{\prime }W_\mu ^++H.c.$$
(2)
where $`q_u^{\prime }=(u^{\prime },c^{\prime },t^{\prime })`$ and $`q_d^{\prime }=(d^{\prime },s^{\prime },b^{\prime })`$. By suitable redefinition of the right-handed quark fields one can make the quark-mass matrices $`M_u`$ and $`M_d`$ hermitian. Let the diagonal forms of the hermitian $`M_u`$ and $`M_d`$ be given by
$$\widehat{M}_u=V_u^{\dagger }M_uV_u,\widehat{M}_d=V_d^{\dagger }M_dV_d.$$
(3)
In the physical basis, defined by $`q_\alpha =V_\alpha ^{\dagger }q_\alpha ^{\prime }`$ ($`\alpha =u`$ or $`d`$), one has
$$=\sum _{\alpha =u,d}\overline{q}_{\alpha L}\widehat{M}_\alpha q_{\alpha R}+\frac{g}{\sqrt{2}}\overline{q}_{uL}\gamma _\mu Vq_{dL}W_\mu ^++H.c.$$
(4)
where
$$V=V_u^{\dagger }V_d$$
(5)
is the CKM-matrix.
For a V given by Eq. (1), one can easily find $`V_u`$ and $`V_d`$ which satisfy Eq. (5). In general,
$$V_u=V(-\theta _u)=\mathrm{cos}\theta _uI-i\mathrm{sin}\theta _uU$$
(6)
$$V_d=V(\theta _d)=\mathrm{cos}\theta _dI+i\mathrm{sin}\theta _dU$$
(7)
will give $`V(\theta )`$ provided $`\theta _u+\theta _d=\theta `$. This is so since $`V(\theta _1)V(\theta _2)=V(\theta _1+\theta _2)`$ because $`U=U^{\dagger }`$ and $`U^2=I`$.
Given these $`V_u`$ and $`V_d`$, Eq.(3) then determines $`M_u`$ and $`M_d`$ in terms of the quark masses and the experimentally accessible parameters of the CKM-matrix. More formally, this means that in the spectral decomposition of $`M_u(M_d)`$ the projectors depend only on the parameters in $`V(\theta )`$ and $`\theta _u(\theta _d)`$. There is a freedom in the choice of the values $`\theta _u`$ and $`\theta _d`$ as only their sum $`\theta _u+\theta _d=\theta `$ is determined from knowing $`V(\theta )`$.
It is clear that our form of $`V(\theta )`$ provides an explicit solution for a class of quark mass matrices.
Form of U in the standard model. To determine the general form of the hermitian and unitary $`3\times 3`$ matrix $`U`$ we start with a general hermitian matrix
$$U=\left(\begin{array}{ccc}u_1& \alpha ^{*}& \beta ^{*}\\ \alpha & u_2& \gamma ^{*}\\ \beta & \gamma & u_3\end{array}\right)$$
(8)
where $`u_i(i=1,2,3)`$ are real and $`\alpha ,\beta `$ and $`\gamma `$ are complex numbers. Requiring $`U`$ to be unitary as well implies that $`U^2=I.`$ Explicitly this gives
$$u_1^2+\left|\alpha \right|^2+\left|\beta \right|^2=1,$$
(9)
$$u_2^2+\left|\alpha \right|^2+\left|\gamma \right|^2=1,$$
(10)
$$u_3^2+\left|\beta \right|^2+\left|\gamma \right|^2=1;$$
(11)
and
$$\left|\alpha \right|\left(u_1+u_2\right)+\left|\beta \gamma \right|\mathrm{exp}(i\varphi )=0,$$
(12)
$$\left|\beta \right|\left(u_1+u_3\right)+\left|\alpha \gamma \right|\mathrm{exp}(i\varphi )=0,$$
(13)
$$\left|\gamma \right|\left(u_2+u_3\right)+\left|\alpha \beta \right|\mathrm{exp}(i\varphi )=0.$$
(14)
Here $`\varphi \equiv \varphi _\alpha -\varphi _\beta +\varphi _\gamma `$ while $`\varphi _\alpha ,`$ $`\varphi _\beta `$ and $`\varphi _\gamma `$ are the phases of $`\alpha ,`$ $`\beta `$ and $`\gamma .`$ Eqs. (12-14) immediately imply that $`\mathrm{sin}\varphi =0`$ or $`\varphi =0`$ or $`\pi .`$ The resulting $`U`$ in the two cases differ by an overall sign. For definiteness we consider the case $`\varphi =0.`$ Eqs. (12-14) determine the diagonal elements in terms of $`\left|\alpha \right|,`$ $`\left|\beta \right|`$ and $`\left|\gamma \right|`$ and substituting these in Eqs. (9-11) gives the constraint
$$\left|\frac{\alpha \beta }{\gamma }\right|+\left|\frac{\alpha \gamma }{\beta }\right|+\left|\frac{\beta \gamma }{\alpha }\right|=2.$$
(15)
Using this one has
$$\begin{array}{ccc}u_1=\left|\frac{\alpha \beta }{\gamma }\right|-1,& u_2=\left|\frac{\alpha \gamma }{\beta }\right|-1& \text{and }u_3=\left|\frac{\beta \gamma }{\alpha }\right|-1.\end{array}$$
(16)
For a more convenient form of $`U,`$ we put
$$\begin{array}{ccc}\alpha =-2bc^{*},& \beta =-2ac,& \text{and }\gamma =-2a^{*}b.\end{array}$$
(17)
Since $`\varphi _\alpha =(\varphi _b-\varphi _c)+\pi `$ etc., the condition $`\varphi =0`$ translates into
$$\varphi _a-\varphi _b+\varphi _c=\frac{\pi }{2},$$
(18)
where $`\varphi _a,`$ $`\varphi _b`$ and $`\varphi _c`$ are the phases of the complex numbers $`a,`$ $`b`$ and $`c.`$ The constraint of Eq. (15) becomes
$$\left|a\right|^2+\left|b\right|^2+\left|c\right|^2=1.$$
(19)
The general expression of the hermitian and unitary $`U`$ in terms of $`a,`$ $`b`$ and $`c`$ is
$$U=I-2\left(\begin{array}{ccc}|a|^2+|b|^2& b^{*}c& a^{*}c^{*}\\ bc^{*}& |a|^2+|c|^2& ab^{*}\\ ac& a^{*}b& |b|^2+|c|^2\end{array}\right)$$
(20)
Given the two constraints in Eq. (18) and Eq. (19), we note that a general hermitian and unitary $`3\times 3`$ matrix depends on at most four real parameters. This is the form of $`U`$ we will use.
The Jarlskog invariant for $`U`$, viz. $`J(U)=\mathrm{Im}(U_{11}U_{22}U_{12}^{*}U_{21}^{*})=0.`$ However, the $`V(\theta )`$ in Eq. (1) does give CP-violation, since
$$J(V(\theta ))=8\mathrm{cos}\theta \mathrm{sin}^3\theta |abc|^2=\mathrm{cos}\theta |V_{12}||V_{13}||V_{23}|$$
(21)
In our case, there is no "CP-violating phase" which governs the finiteness of $`J`$. One of the off-diagonal elements of $`V(\theta )`$ has to be zero for $`J`$ to vanish. Note that $`J`$ is just given in terms of the $`|V_{ij}|`$ $`(i\ne j)`$, unlike usual parametrizations. It is interesting to note that even when $`a`$, $`b`$, $`c`$ are pure imaginary so that $`V(\theta )`$ depends on only 3 real parameters, $`J(V(\theta ))`$ is non-zero. In this case, $`U`$ becomes real and symmetric and the only complex number in $`V(\theta )`$ is $`i`$ in Eq.(1) !
Since $`U`$ is hermitian, Eq. (1) requires that $`|V_{ij}|=|V_{ji}|`$ for $`V(\theta )`$. The experimentally determined CKM-matrix $`V_{EX}`$ given by the Particle Data Group is
$$V_{EX}=\left(\begin{array}{ccc}0.9745\text{–}0.9760& 0.2170\text{–}0.2240& 0.0018\text{–}0.0045\\ 0.2170\text{–}0.2240& 0.9737\text{–}0.9753& 0.0360\text{–}0.0420\\ 0.0040\text{–}0.0130& 0.0350\text{–}0.0420& 0.9991\text{–}0.9994\end{array}\right)$$
(22)
The entries correspond to the ranges for the moduli of the matrix elements. It is clear that $`|V_{12}|=|V_{21}|`$ and $`|V_{23}|=|V_{32}|`$ are satisfied for the whole range, while the equality $`|V_{13}|=|V_{31}|`$ is suggested by the data. Given the fact that $`|V_{13}|`$ and $`|V_{31}|`$ are the hardest to determine experimentally, it is possible they might turn out to be equal. We adopt a common numerical value viz. $`|V_{13}|=|V_{31}|=0.005825\pm 0.002925.`$ This numerical value is obtained by first converting the range of values in $`V_{EX}`$ into a central value with errors, so that $`|V_{13}|=0.00315\pm 0.00135`$ and $`|V_{31}|=0.0085\pm 0.0045.`$ The average of these two gives the common numerical value above. Ranges for other moduli also are converted into a central value with errors.
To confront $`V(\theta )`$ with experiment we need to specify $`\theta `$. A physically appealing choice is to give equal weight to the generation mixing term $`(U)`$ and the generation diagonal term $`(I)`$ in $`V(\theta )`$, so that $`\theta =\pi /4`$ and the CKM-matrix
$$V(\pi /4)=\frac{1}{\sqrt{2}}(I+iU).$$
(23)
We use this for numerical work.
Numerical results. Experimentally, $`|V_{12}|`$ and $`|V_{23}|`$ are well determined. We take their average (or central) value in the range given in Eq. (22) as inputs; that is, $`|V_{12}|=|V_{21}|=0.2205`$ and $`|V_{23}|=|V_{32}|=0.039`$. Given these, one has
$$|a|=|V_{23}|/(2\mathrm{sin}\theta |b|),$$
(24)
$$|c|=|V_{12}|/(2\mathrm{sin}\theta |b|).$$
(25)
The constraint, Eq. (19), gives a quadratic equation for $`|b|^2`$ with the solutions,
$$|b|^2=\frac{1}{2}\left[1\pm \sqrt{1-(|V_{12}|^2+|V_{23}|^2)\mathrm{csc}^2\theta }\right].$$
(26)
Note, for real $`|b|^2`$, the above input implies $`\mathrm{sin}^2\theta \ge 0.05014`$ or $`\theta \ge 12.94^{\circ }`$. Since $`|V_{12}|>|V_{23}|>|V_{13}|`$ it is clear we need the positive sign in Eq. (26) so that $`|b|>|c|>|a|`$. For $`\theta =\pi /4`$, Eqs. (24-26) yield,
$$|a|=0.02794,|b|=0.98705,|c|=0.15796.$$
(27)
The values of the $`|V_{ij}|`$ for $`V(\pi /4)`$ in Eq. (23) are given in Table I. The values in the table should be compared with the average values of $`|V_{ij}|`$ obtained from $`V_{EX}`$. For example, the average of $`|V_{11}|`$ from Eq. (22) is $`\frac{1}{2}(0.9745+0.9760)=0.97525`$. This is given as $`0.97525\pm 0.00075`$. The "error" indicates the range for $`|V_{11}|`$. The experimental $`|V_{ij}|`$ are given in column 2, while the calculated values are given in column 3. The agreement is quite satisfactory, suggesting that a CKM-matrix with $`|V_{ij}|=|V_{ji}|`$ may fit the data. We did not attempt a best fit in view of our assumption $`|V_{13}|=|V_{31}|`$.
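The numbers in Table I can be reproduced directly from Eqs. (20) and (23)-(26). The following sketch (illustrative only, using the pure-imaginary choice for $`a`$, $`b`$, $`c`$ discussed below, so that Eq. (18) holds with all phases equal to $`\pi /2`$) is one way to do so:

```python
import numpy as np

theta = np.pi / 4
V12, V23 = 0.2205, 0.039                       # inputs to Eqs. (24)-(25)

b = np.sqrt(0.5 * (1 + np.sqrt(1 - (V12**2 + V23**2) / np.sin(theta)**2)))
a = V23 / (2 * np.sin(theta) * b)              # Eq. (24)
c = V12 / (2 * np.sin(theta) * b)              # Eq. (25)
print(a, b, c)                                 # 0.02794, 0.98705, 0.15796

A, B, C = 1j * a, 1j * b, 1j * c               # pure imaginary a, b, c
M = np.array([
    [abs(A)**2 + abs(B)**2, np.conj(B) * C,        np.conj(A) * np.conj(C)],
    [B * np.conj(C),        abs(A)**2 + abs(C)**2, A * np.conj(B)],
    [A * C,                 np.conj(A) * B,        abs(B)**2 + abs(C)**2],
])
U = np.eye(3) - 2 * M                          # Eq. (20); here real symmetric
V = (np.eye(3) + 1j * U) / np.sqrt(2)          # Eq. (23)
print(np.round(np.abs(V), 6))                  # reproduces column 3 of Table I
J = (V[0, 0] * V[1, 1] * np.conj(V[0, 1]) * np.conj(V[1, 0])).imag
print(J)                                       # ~3.8e-5, as in Table I
```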
The values of $`J`$ for $`V_{EX}`$ and $`V(\pi /4)`$ are also given in the Table. $`J(V_{EX})`$ was calculated using the formula
$$J^2=|V_{11}V_{22}V_{12}V_{21}|^2-\frac{1}{4}[1-|V_{11}|^2-|V_{22}|^2-|V_{12}|^2-|V_{21}|^2+|V_{11}V_{22}|^2+|V_{12}V_{21}|^2]^2$$
(28)
with the central values of $`|V_{ij}|`$, $`i,j=1,2`$, since these four are best measured. The value $`J(V(\pi /4))`$ was calculated using Eq. (21) and is about 3–4 times smaller. This is reasonable considering the slight differences in values of $`|V_{ij}|`$, $`i,j=1,2`$, in the two cases and also since there is a strong numerical cancellation between the two terms on the r.h.s of Eq. (28).
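A quick numerical check of Eq. (28) with the central values makes this cancellation explicit (the two terms are each $`\sim 2\times 10^{-3}`$ while their difference is $`\sim 2\times 10^{-8}`$):

```python
import numpy as np

V11, V22, V12, V21 = 0.97525, 0.9745, 0.2205, 0.2205   # central values
term1 = (V11 * V22 * V12 * V21) ** 2
bracket = (1 - V11**2 - V22**2 - V12**2 - V21**2
           + (V11 * V22) ** 2 + (V12 * V21) ** 2)
J2 = term1 - 0.25 * bracket ** 2
print(np.sqrt(J2))   # ~1.4e-4, as quoted in Table I
```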
It is important to note that the calculated values require only the knowledge of $`|a|`$, $`|b|`$ and $`|c|`$. Thus, the numerical results are valid even when $`a`$, $`b`$ and $`c`$ are pure imaginary and $`V(\pi /4)`$ depends on only 2 real parameters.
Concluding remarks. Apart from providing a good numerical fit with 4 or possibly 2 parameters, the CKM-matrix $`V(\pi /4)`$ has an interesting feature connected with a criterion for "maximal" CP-violation.
It was noted that physically the relevant phase for CP-violation in the CKM-matrix $`V`$ is $`\mathrm{\Phi }=\varphi _{12}+\varphi _{23}-\varphi _{13}`$, where $`\varphi _{ij}`$ is the phase of the matrix element $`V_{ij}`$. The reason for this is because $`\mathrm{\Phi }`$ is invariant under re-phasing transformations of $`V`$. So, a value of $`|\mathrm{\Phi }|=\pi /2`$ was suggested as corresponding to "maximal" CP-violation. This is so in our case because of the constraint in Eq. (18), since $`\mathrm{\Phi }=2(\varphi _a+\varphi _c-\varphi _b)-\pi /2=\pi /2`$. So, $`\mathrm{cos}\mathrm{\Phi }=0`$ for $`V(\pi /4)`$. Note that $`\mathrm{\Phi }=\pi /2`$ is automatic when $`a`$, $`b`$ and $`c`$ are pure imaginary and in that case $`V(\pi /4)`$ depends on only 2 real parameters.
It is remarkable that $`V(\pi /4)`$ with only 2 real parameters fits the available data. This may be because only the absolute values $`|V_{ij}|`$ are known at present. Future information on the full $`V_{ij}`$ will tell us if the relations implied by the two parameter parametrization given here are viable or the more general four parameter parametrization would be needed. It would be very interesting if the symmetry relations $`|V_{ij}|=|V_{ji}|(ij)`$ are confirmed experimentally.
Acknowledgments. I am grateful to Dr. A.O. Bouzas for critical comments and discussion. I am thankful to Vicente Antonio Perez, Elena Salazar and Yuri Nahmad for their help in preparing the manuscript.
| Quantity | Experiment | Theory |
| --- | --- | --- |
| $`|V_{12}|=|V_{21}|`$ | $`0.2205\pm 0.0035`$ | $`0.2205`$ (input) |
| $`|V_{23}|=|V_{32}|`$ | $`0.0390\pm 0.0030`$ | $`0.039`$ (input) |
| $`|V_{13}|=|V_{31}|`$ | $`0.005825\pm 0.002925`$ | $`0.00624`$ |
| $`|V_{11}|`$ | $`0.97525\pm 0.00075`$ | $`0.975367`$ |
| $`|V_{22}|`$ | $`0.9745\pm 0.0008`$ | $`0.974607`$ |
| $`|V_{33}|`$ | $`0.99925\pm 0.00015`$ | $`0.99922`$ |
| $`J`$ | $`1.414\times 10^{-4}`$ | $`3.795\times 10^{-5}`$ |
Table I. Numerical values of the moduli of the matrix elements of $`V(\theta )`$ for $`\theta =\pi /4`$. Experimental values are average values obtained from $`V_{EX}`$ in Eq. (22). The "errors" reflect the range of values for $`|V_{ij}|`$. Note, since $`|V_{13}|=0.00315\pm 0.00135`$ and $`|V_{31}|=0.0085\pm 0.0045`$, we quote the average of these in the Table. $`J`$ is the Jarlskog invariant (see text)
no-problem/0002/hep-th0002166.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Black holes are interpreted microscopically in string theory as bound states of explicitly specified constituents. It is therefore an important theoretical challenge to identify examples where the quantum bound state problem can be analyzed. The canonical example is the bound state of D1- and D5-branes where the microscopic theory is known in great detail. In this and many other cases the asymptotic degeneracy of states has been determined to agree with the Bekenstein-Hawking formula for the black hole entropy. In all cases where such an agreement has been established with precision the near-horizon geometry of the black hole contains an $`AdS_3`$ factor; and this feature underlies the agreement. It remains an open problem to similarly understand black holes with less symmetric near-horizon geometries; especially black holes with no supersymmetry. The present work reports on progress in this direction, for a specific case.
Consider the original Kaluza-Klein theory in four dimensions, obtained by compactification of five-dimensional pure gravity on a circle. The field content of this theory is a $`U(1)`$ gauge field, a scalar field, and gravity. Stationary black holes solutions to the theory are parametrized by their electric (Q) and magnetic (P) gauge charges, as well as their mass (M) and angular momentum (J). There is a simple embedding of the system into string theory, as follows. First, add six compact dimensions, e.g. a Calabi-Yau three-fold, or a six-torus. Next, interpret the original Kaluza-Klein direction as the M-theory circle so that the electric Kaluza-Klein charge is identified as D0-brane charge, and the magnetic Kaluza-Klein charge similarly becomes the charge of a D6-brane, fully wrapped around the six inert dimensions. The black hole is therefore interpreted at weak coupling as a bound state of D0-branes and D6-branes.
In string theory it is the norm to consider black holes with many independent charges excited simultaneously. The electric and magnetic charges are thus generalized to vectors. It has been the experience that black holes with non-orthogonal charge vectors pose special difficulties: the microscopic description is less constrained, and the corresponding classical solutions are much more complicated. An elementary property of the Kaluza-Klein dyon considered here is that its electric and magnetic charge vectors (being numbers) have nonvanishing inner-product, $`\stackrel{}{P}\cdot \stackrel{}{Q}=PQ\ne 0`$. The Kaluza-Klein black hole is therefore a simple setting where these problems can be analyzed. In fact, to the present author, this was the original motivation for considering the problem. The study of Kaluza-Klein black holes is further motivated by the "no-frill" character of the system, by fundamental string theory interest in the D0/D6 bound state, and as a means to study black holes.
The black hole metric can be constructed explicitly for arbitrary $`(M,J,Q,P)`$. This will not be repeated here. Instead the emphasis will be on the qualitative properties of the system. The presentation is based on an earlier article, except for a new result (given in section 3): the microscopic interpretation of the mass formula for the bound state.
## 2 Properties of Extreme Black Holes
For a given value of the conserved charges $`Q,P,J`$ there is a lowest possible value of the mass $`M`$ consistent with regularity. The three parameter family of solutions saturating this bound are the extremal black holes. The physical interpretation is that extremal black holes in some sense are in their ground states. This presentation considers only the extremal case.
### 2.1 Basic Parameters
#### Black hole mass:
First, assume that the rotation is limited according to $`G_4J<PQ`$. In this โslow rotationโ case the mass formula is:
$$2G_4M=(Q^{2/3}+P^{2/3})^{3/2}.$$
(1)
This mass formula has been known for some time, for $`J=0`$ ; the striking point emphasized here is the independence of angular momentum. In other words, the system can carry angular momentum at no cost in energy. Other features of the extremal geometry do depend on the angular momentum, as expected.
Next, assume that $`G_4J>PQ`$. In this โfast rotationโ case the mass formula is more complicated, the solution of a quartic equation. The mass formula now depends on the angular momentum as well as the charges. It satisfies:
$$2G_4M>(Q^{2/3}+P^{2/3})^{3/2}.$$
(2)
The two branches of extremal black holes are joined by a two-parameter family of black holes satisfying $`G_4J=PQ`$. The geometry degenerates in this critical limit; for example, the black hole entropy approaches zero.
#### Black hole entropy
is also sensitive to the boundary at $`G_4J=PQ`$. Indeed, for slow rotation $`G_4J<PQ`$:
$$S=2\pi \sqrt{\frac{P^2Q^2}{G_4^2}J^2},$$
(3)
while for fast rotation $`G_4J>PQ`$:
$$S=2\pi \sqrt{J^2\frac{P^2Q^2}{G_4^2}}.$$
(4)
The only change is thus the overall sign under the square root. The extremal Kerr-Newman black hole and its four-parameter generalization canonically considered in string theory coincide for special choices of parameters with the fast rotation case (4). Here the main interest is the slow rotation case (3).
#### The black hole temperature
vanishes in the extremal limit for all values of the angular momentum. This conforms with general expectations for extremal black holes.
### 2.2 Comparison with Supersymmetric Black Holes
The fast rotating Kaluza-Klein black holes are very similar to extremal Kerr-Newman black holes in four dimensions. A more surprising analogy is between the slowly rotating Kaluza-Klein black holes and the rotating BPS black holes in five dimensions, interpreted as excitations of D1/D5-brane bound states. The striking similarities include:
#### 1)
In the D1/D5-case the energy of a supersymmetric ground state is related to its momentum by supersymmetry, ensuring that the black hole mass is independent of the angular momentum. The D1/D5-system therefore also has the property that it can carry angular momentum at no cost in energy.
#### 2)
The supersymmetric ground states of the D1/D5 system are generally charged under the R-charge of the supersymmetry algebra, which is identified with the spacetime angular momentum. The projection onto a given value of the R-charge restricts the available phase space, and so decreases the entropy; it vanishes when the angular momentum is so large that all states are forced to have identical projection of the angular momentum. The black hole entropy has the form:
$$S=2\pi \sqrt{\frac{1}{4}J_3-J^2},$$
(5)
where $`J_3`$ is the unique cubic invariant of $`E_{6(6)}`$. This should be compared with the entropy (3), or more generally the U-duality invariant expression:
$$S=2\pi \sqrt{\frac{1}{4}J_4-J^2},$$
(6)
where $`J_4`$ is the unique quartic invariant of $`E_{7(7)}`$. The similarity suggests that the Kaluza-Klein black holes are described by a supersymmetric conformal field theory with a structure similar to the one familiar from the D1/D5-system. Specifically, the angular momentum should be identified with an R-charge in such a description.
#### 3)
For slowly rotating Kaluza-Klein black holes the angular velocity of the horizon vanishes, $`\mathrm{\Omega }_H=0`$. The physical interpretation is that the angular momentum is carried by the field surrounding the black hole, rather than by its interior. The unfamiliar combination of angular momentum, but no angular velocity, occurs also for the rotating BPS black holes in five dimensions. For fast rotation, the horizon velocity remains finite in the extremal limit, $`\mathrm{\Omega }_H\ne 0`$, as for the Kerr black hole.
#### 4)
Outside the horizon of a rotating black hole there is an ergosphere. This is a region where observers cannot remain at rest relative to the asymptotic geometry, because the drag of the geometry forces them to rotate along with the black hole. Such observers are nevertheless free to escape to infinity. An important consequence of the ergosphere is that it allows the black hole to shed rotational energy classically by superradiance. This effect renders e.g. the standard extremal Kerr black hole in four dimensions unstable. The D1/D5-system corresponds to rotating black holes in five dimensions and for these, remarkably, the ergosphere disappears in the extremal limit. This saves the stability of the system required by supersymmetry. Interestingly, the ergosphere of the Kaluza-Klein black hole also disappears in the extremal limit, for slow rotation. On the other hand, for fast rotation there is an ergosphere; so the black hole decays classically, even though it is extremal. In a sense, the mass (2) on the large rotation branch is too large, and the black hole seeks to reach the lower bound (1) which apparently is more stable.
At this point it may appear that slowly rotating Kaluza-Klein black holes are precisely analogous to the D1/D5-system. That is far from the truth. The D1/D5 system is supersymmetric; indeed, it is the only case familiar to me where rotation is consistent with the BPS condition. Many of the remarkable properties discussed above follow from this fact. It is thus significant to emphasize that Kaluza-Klein black holes are not supersymmetric.
To see this, embed the simple Kaluza-Klein theory in a theory with at least N=2 supersymmetry. The supersymmetry algebra then implies:
$$2G_4M\ge \sqrt{Q^2+P^2},$$
(7)
with the inequality saturated if and only if the black hole preserves a part of the supersymmetry. The mass formulae (1-2) satisfy this condition; however, they never saturate it when both electric and magnetic charges are present. Kaluza-Klein black holes are therefore not supersymmetric.
Without supersymmetry, the question of stability should be considered seriously. The energy of two widely separated fragments, each carrying either the electric or the magnetic charge is:
$$2G_4M\ge Q+P.$$
(8)
This inequality is also satisfied by the mass formulae (1-2); so spontaneous fragmentation of the black hole into two parts is consistent with energy conservation.
However, in the present system it is important to consider also angular momentum conservation. The electric and magnetic fragments are charged with respect to the same $`U(1)`$ gauge field; so the total angular momentum of the final state satisfies Diracโs bound:
$$J\ge \frac{PQ}{G_4}.$$
(9)
The lower bound coincides precisely with the one classifying Kaluza-Klein black holes as having slow or fast rotation. It is interesting that the evident qualitative distinction between slow and fast rotation is related to the Dirac bound on the angular momentum: the geometry โknowsโ about the Dirac bound. Concretely, the bound implies that angular momentum conservation forbids decay of the slowly rotating black holes into two widely separated electric and magnetic fragments; but the fast rotating ones do decay in this way.
It is possible that the slowly rotating black hole instead decays into two widely separated dyons, with charge assignments $`(Q_1,P_1)`$ and $`(Q_2,P_2)`$, respectively. The Dirac bound (9) on the angular momentum of the fragments is then replaced by:
$$J\ge |P_1Q_2-P_2Q_1|/G_4.$$
(10)
For example two identical dyons can have vanishing angular momentum. There are still large classes of black holes that have no possible decays; for example non-rotating black holes with mutually prime quantized charges. In fact, standard stability arguments, using supersymmetry, are similarly subject to conditions on the quantum numbers of the state. The possible decay into two dyons is therefore consistent with the analogy between BPS states and the slowly rotating branch.
These results do not imply that the slowly rotating black holes are absolutely stable. For example, the angular momentum could be carried away by one of the decay products. An example that realizes this possibility is the Callan-Rubakov effect; here a charged spin-1/2 fermion interacts with a monopole, but the combined system nevertheless supports a spin-0 mode. An alternative decay channel involves a third particle carrying spin, but arbitrarily low energy; e.g. a graviton. Despite the existence of allowed decay channels, it is evident that the slowly rotating extremal black holes exhibit a remarkable degree of stability; in particular, it is suggestive that the most obvious decay channel is forbidden. It would be interesting to make a stronger and more precise statement on this issue.
### 2.3 Quantization Rules
Up to this point, the electric and magnetic charges have been arbitrary parameters. After embedding into quantum theory they are quantized:
$`Q`$ $`=`$ $`2G_4M_0n_Q,`$ (11)
$`P`$ $`=`$ $`2G_4M_6n_P,`$ (12)
where $`n_Q`$ and $`n_P`$ are integral. In the D0/D6 interpretation discussed in the introduction:
$`M_0`$ $`=`$ $`{\displaystyle \frac{1}{l_sg_s}},`$ (13)
$`M_6`$ $`=`$ $`{\displaystyle \frac{V_6}{(2\pi )^6l_s^7g_s}},`$ (14)
where the string units are defined so $`l_s=\sqrt{\alpha ^{\prime }}`$ and $`V_6`$ is the volume of the six compact dimensions wrapped by the $`D6`$-brane; as always $`G_4=\frac{1}{8}(2\pi )^6l_s^8g_s^2/V_6`$. This gives the relation $`8G_4M_0M_6=1`$ (which in fact is expected from general principles). Thus:
$$\frac{2PQ}{G_4}=n_Qn_P.$$
(15)
As a check on normalizations note that, after this quantization condition is taken into account, the lower bound in (9) quantizes the angular momentum as a half-integer. A related point is that the entropy (3-4) simplifies. After the quantization condition is taken into account it is expressed in terms of pure numbers, i.e. the moduli cancel out. This is promising for a connection to microscopic ideas.
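These relations are easy to verify numerically; in the sketch below the moduli $`l_s`$, $`g_s`$, $`V_6`$ and the integers $`n_Q`$, $`n_P`$ are arbitrary illustrative values, not fits to anything:

```python
import numpy as np

ls, gs, V6 = 1.0, 0.3, 50.0                     # illustrative moduli only
M0 = 1 / (ls * gs)                              # Eq. (13)
M6 = V6 / ((2 * np.pi)**6 * ls**7 * gs)         # Eq. (14)
G4 = (2 * np.pi)**6 * ls**8 * gs**2 / (8 * V6)  # with the 1/V6 made explicit
print(8 * G4 * M0 * M6)                         # -> 1.0, as required

nQ, nP = 3, 5
Q, P = 2 * G4 * M0 * nQ, 2 * G4 * M6 * nP       # Eqs. (11)-(12)
print(2 * P * Q / G4, nQ * nP)                  # both 15: Eq. (15); J >= nQ*nP/2
```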
## 3 The Microscopic Description
The analogy with BPS black holes suggests that it is possible to describe Kaluza-Klein black holes precisely in the underlying string theory. As discussed in the introduction, the microscopic interpretation of the Kaluza-Klein black hole is a bound state of $`k=n_Q`$ D0-branes and $`N=n_P`$ D6-branes. The theory on the D6-branes is a field theory in 6+1 dimensions. The field content is the same as maximally supersymmetric Yang-Mills theory with $`SU(N)`$ gauge group. D0-branes are described in this theory as excitations with third Chern-class equal to the number of D0-branes, and vanishing first and second Chern-classes. Assuming that the compact dimensions span a six-torus, it is simple to construct examples of this kind using time-independent field strengths of the form:
$$F_{12}=f\mu _1;F_{34}=f\mu _2;F_{56}=f\mu _3,$$
(16)
where the $`SU(N)`$ matrices $`\mu _i`$ satisfy:
$$\mu _i^2=I;\mu _1\mu _2\mu _3=I;\mathrm{Tr}\mu _i=0;\mathrm{Tr}\mu _i\mu _j=0.$$
(17)
The first and second Chern-classes vanish because the trace of $`F`$, and also of $`F\wedge F`$, vanish along all cycles. The third Chern-class, and so the number of D0-branes, is:
$$k=\frac{1}{6(2\pi )^3}\int \mathrm{Tr}\,F\wedge F\wedge F=\frac{1}{(2\pi )^3}NV_6f^3.$$
(18)
It is convenient to use (11-14) and rewrite this relation as $`(2\pi )^3l_s^6f^3=Q/P`$.
The D6-brane wraps a small compact manifold, so it is legitimate to ignore higher derivatives in the action. The interactions of the theory are therefore given by the Born-Infeld Lagrangean. For static configurations the corresponding mass functional is:
$$M=T_6\int d^6x\,\mathrm{Tr}\sqrt{\mathrm{det}\left(1+2\pi l_s^2F\right)},$$
(19)
where $`T_6`$ is the tension of the $`D6`$-brane, i.e. its mass density. For the explicit configurations given above the mass becomes:
$$M=T_6V_6N\left(1+(2\pi )^2l_s^4f^2\right)^{3/2}=\left(P^{2/3}+Q^{2/3}\right)^{3/2}/(2G_4).$$
(20)
This is precisely the mass formula (1). A similar computation was presented previously, for charges $`N=k=4`$ and moduli chosen such that $`P=Q`$. The generalization given here shows that the full functional dependence is reproduced by the microscopic considerations.
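The agreement can be checked numerically for arbitrary charges and moduli; the values below are illustrative only, and $`T_6V_6=M_6`$ is used for the mass of one wrapped D6-brane:

```python
import numpy as np

ls, gs, V6 = 1.0, 0.3, 50.0                       # illustrative values only
G4 = (2 * np.pi)**6 * ls**8 * gs**2 / (8 * V6)
M0 = 1 / (ls * gs)
M6 = V6 / ((2 * np.pi)**6 * ls**7 * gs)
N, k = 5, 3                                       # n_P D6-branes, n_Q D0-branes
Q, P = 2 * G4 * M0 * k, 2 * G4 * M6 * N           # Eqs. (11)-(12)

f = (Q / P)**(1 / 3) / (2 * np.pi * ls**2)        # from (2*pi)^3 ls^6 f^3 = Q/P
M_micro = N * M6 * (1 + (2 * np.pi)**2 * ls**4 * f**2)**1.5   # Eq. (20)
M_bh = (P**(2 / 3) + Q**(2 / 3))**1.5 / (2 * G4)              # Eq. (1)
print(M_micro / M_bh)                             # -> 1.0
```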
## 4 Discussion
The D0/D6-interpretation of the system is valid at weak coupling, i.e. when the ambient spacetime is mildly curved. In contrast, the black hole description applies at strong coupling. The agreement between the mass formulae obtained in the two mutually exclusive regimes therefore suggests a duality. Indeed, although Kaluza-Klein theory does not admit an $`SL(2,\text{Z}\text{Z})`$ duality group, it preserves a $`\text{Z}\text{Z}_2`$ subgroup interchanging electric and magnetic charges, as well as weak and strong coupling. The mass formula is therefore not necessarily invariant under extrapolation from weak to strong coupling; but the two regimes are related by a discrete symmetry.
It is simple to construct explicit microscopic configurations of the form (16-17); indeed, there are numerous ways to do so. Moreover, if the ansatz is relaxed, there are additional possibilities. The system therefore has considerable microscopic degeneracy which is presumably related to the black hole entropy. However, a precise confirmation of this idea has not yet been achieved.
Another open problem concerns the world-volume interpretation of the angular momentum. As discussed after (6), the black hole entropy formula suggests a relation to the R-charge of a superconformal algebra; or at least some $`U(1)`$ current in a 2D conformal field theory. Unfortunately it seems difficult to identify a specific current with the required properties.
Acknowledgments: I thank V. Balasubramanian, A. Goldhaber, P. Kraus, J. Harvey, J. Maldacena, E. Martinec, and A. Peet for discussions; and the Niels Bohr Institute for hospitality during the preparation of the manuscript. This work was supported by DOE grant DE-FG02-90ER-40560 and by a Robert R. McCormick Fellowship. |
no-problem/0002/astro-ph0002380.html | ar5iv | text | # The Dependence of Tidally-Induced Star Formation on Cluster Density
## 1. Introduction
The remarkable changes in cluster disk galaxy populations between intermediate redshifts (z $`\sim `$ 0.5) and the present are well known. Up to 50% of the population of intermediate redshift clusters are comprised of blue, star-forming galaxies, which have been shown to be predominantly normal spiral and irregular galaxies, a fraction of which are interacting or obviously disturbed. By the present epoch, this population has been depleted by a factor of 2 and replaced by a population of S0 galaxies. However the processes by which this has occurred are still not fully understood (cf. Dressler 1980; Dressler et al. 1997).
The processes which cause the transformation of the cluster spiral galaxy population to S0s are expected to have very significant effects on cluster galaxy star formation rates (SFRs). We have undertaken a comparison of SFRs between field and cluster spirals in 8 low redshift clusters in order to investigate whether these SFRs show evidence of continuing morphological transformation of disk galaxies at the present epoch. The comparison survey, details of which are published elsewhere (Moss, Whittle & Irwin 1988; Moss & Whittle 1993; Moss, Whittle & Pesce 1998; Moss & Whittle 1999) uses H$`\alpha `$ emission, resolved into disk and circumnuclear emission, as an estimator of the SFRs.
## 2. Comparison of Star Formation Rates in Cluster and Field Spirals
Our cluster sample (viz. galaxies in 8 clusters Abell 262, 347, 400, 426, 569, 779, 1367 and 1656), and our field sample (viz. galaxies in adjacent supercluster fields) were observed in an identical manner, thus eliminating systematic effects in detection efficiency. Furthermore, both (supercluster) field and cluster samples approximated volume limited samples, largely eliminating the systematic bias between cluster and field detection rates which may have been present in many earlier comparison studies (cf. Biviano et al. 1997).
A difficult question is which criterion to choose to normalise field and cluster disk galaxy samples. Some authors (e.g. Hashimoto et al. 1998; Balogh et al. 1998) have chosen to use bulge to disk (B/D) ratio on the grounds that it is a less subjective and star formation contaminated normalisation parameter. However the relation between B/D ratio and Hubble T-type has considerable scatter (Baugh, Cole & Frenk 1996; Simien & de Vaucouleurs 1986; de Jong 1995) such that an increase in the S0/S ratio in clusters is likely to mask systematic changes of SFR between field and cluster spirals. Since the latter changes are of interest for the present study, the B/D ratio is not a suitable normalisation parameter.
Accordingly we have chosen Hubble type as the normalisation parameter, and further restricted field and cluster samples to a total of 320 spirals (Sa and later) and peculiars. Some 39% of spirals and 75% of peculiars were detected in H$`\alpha `$ emission. It is estimated that emission detection is 90% complete to an equivalent width limit of 20 Å, and $`\sim `$ 29% efficient below this limit (cf. Moss et al. 1998). The detected emission divides approximately equally between diffuse and compact emission which is identified with disk emission and circumnuclear starburst emission respectively. Examples of both types of emission are given in Figure 1.
A particular difficulty which arises in adopting the Hubble type as the normalisation parameter is the relation between the star formation properties of the galaxy and its Hubble type. In particular, a decrease in disk star formation rate may shift a galaxy to an earlier type. This shift in type may not be detected in any comparison of field and cluster spirals (Hashimoto et al. 1998). This makes any comparison of disk emission between field and cluster spirals uncertain. By contrast, circumnuclear emission is relatively independent of type (Kennicutt 1998), and in this case a reliable comparison of field and cluster spirals is possible.
As is characteristic of circumnuclear starburst emission (cf. Kennicutt 1998), the detected compact emission correlates with both a disturbed morphology of the galaxy (significance level, 8.7$`\sigma `$) indicative of tidally-induced star formation, and with the presence of a bar (significance level, 3.1$`\sigma `$). Furthermore this emission correlates with both local galaxy surface density (significance level, 3.9$`\sigma `$) and cluster central galaxy space density (significance level, 5.3$`\sigma `$). Since there is no significant difference in the incidence of galaxy bars between the field and cluster samples, whereas disturbed galaxies are more common in the cluster environment, it is considered that the observed enhancement of circumnuclear emission is due to tidally-induced star formation, whether from galaxy–galaxy, galaxy–group or galaxy–cluster interactions.
Finally, the enhancement of circumnuclear starburst emission with increasing cluster density is not wholly accounted for by the correlation of this emission with local galaxy surface density. A Kendall partial rank correlation test shows an additional "cluster effect" (significance level, 3.3$`\sigma `$) such that there is a higher incidence of circumnuclear emission for galaxies in a region of a given local galaxy surface density in richer clusters, as compared to that for galaxies in regions of the same surface density in poorer clusters.
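For reference, the first-order partial Kendall statistic used in such a test has the standard form $`\tau _{xy.z}=(\tau _{xy}-\tau _{xz}\tau _{yz})/\sqrt{(1-\tau _{xz}^2)(1-\tau _{yz}^2)}`$. A minimal sketch (the data arrays are placeholders, not the actual galaxy sample):

```python
import numpy as np
from scipy.stats import kendalltau

def partial_kendall(x, y, z):
    # first-order partial Kendall rank correlation of x and y given z
    txy = kendalltau(x, y).correlation
    txz = kendalltau(x, z).correlation
    tyz = kendalltau(y, z).correlation
    return (txy - txz * tyz) / np.sqrt((1 - txz**2) * (1 - tyz**2))

rng = np.random.default_rng(0)
emission = rng.random(100)          # placeholder for emission strength
local_density = rng.random(100)     # placeholder for local surface density
cluster_density = rng.random(100)   # placeholder for cluster space density
print(partial_kendall(emission, cluster_density, local_density))
```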
## 3. Discussion
Although Lavery & Henry (1988) first proposed that the Butcher-Oemler effect could be explained as star formation triggered by galaxy–galaxy interactions in intermediate redshift clusters, it was long considered that the typical cluster velocity dispersion ($`\sim `$ 1000 $`\mathrm{km}`$ $`\mathrm{s}^{-1}`$) was too high for strong tidal interactions to occur. However recent work (e.g. Gnedin 1999) has shown that a non-static cluster potential can enhance tidal interactions for cluster galaxies. Such a non-static potential can arise in sub-cluster merging, and indeed there is evidence that the two richest clusters in our sample (Abell 1367 and 1656) are recent post-merger systems (Donnelly et al. 1998; Honda et al. 1996). It appears that the merger events in these clusters, leading to a rapidly varying cluster potential, may have caused an increase in galaxy tidal interactions and the associated observed enhancement of circumnuclear starburst emission.
Tidal interactions of galaxies in clusters are likely to be an effective mechanism for the transformation of spirals to S0 galaxies (e.g. Gnedin 1999). It was noted above that the observed enhancement of circumnuclear emission in spirals with increasing cluster density is not wholly accounted for by that due to increasing local galaxy surface density. This implies that tidal interactions, and the associated morphological transformation of spirals to S0s, proceed faster in richer as compared to poorer clusters, perhaps because sub-cluster merging is more common for the former. This in turn may explain the hitherto anomalous absence of a type–local galaxy surface density ($`T`$–$`\mathrm{\Sigma }`$) relation in irregular clusters at intermediate redshift (Dressler 1980; Dressler et al. 1997). Whereas significant morphological transformation of the cluster disk galaxy population may be expected for regular (rich) clusters at $`z\sim 0.5`$ (for which the timescale for transformation is shorter) and for irregular (poor) clusters at $`z\sim 0`$ (for which a longer time duration for transformation is available), for irregular clusters at $`z\sim 0.5`$ there has been insufficient time for this to take place, leading to the observed absence of a $`T`$–$`\mathrm{\Sigma }`$ relation.
### Acknowledgments.
We thank S.M. Bennett for preparation of the Figure.
## References
Balogh, M.L., Schade, D., Morris, S.L., Yee, H.K.C., Carlberg, R.G., Ellingson, E. 1998, ApJ, 504, L75
Baugh, C.M., Cole, S., Frenk, C.S. 1996, MNRAS, 283, 1361
Biviano, A., Katgert, P., Mazure, A., Moles, M., den Hartog, R., Perea, J., Focardi, P. 1997, A&A, 321, 84
de Jong, R.S. 1995, PhD thesis, Univ. Groningen
Donnelly, R.H., Markevitch, M., Forman, W., Jones, C., David, L.P., Churazov, E., Gilfanov, M. 1998, ApJ, 500, 138
Dressler, A. 1980, ApJ, 236, 351
Dressler, A., Oemler, A., Couch, W. J., Smail, I., Ellis, R. S., Barger, A., Butcher, H., Poggianti, B. M., Sharples, R. M. 1997, ApJ, 490, 577
Gnedin, O.Y. 1999, PhD thesis, Princeton Univ.
Hashimoto, Y., Oemler, A., Lin, H., Tucker, D.L. 1998, ApJ, 499, 589
Honda, H., Hirayama, M., Watanabe, M., Kunieda, H., Tawara, Y., Yamashita, K., Ohashi, T., Hughes, J. P., Henry, J. P. 1996, ApJ, 473, L71
Kennicutt, R.C. 1998, ARA&A, 36, 189
Lavery, R.J., Henry, J.P. 1988, ApJ, 330, 596
Moss, C., Whittle, M. 1993, ApJ, 407, L17
Moss, C., Whittle, M. 1999, MNRAS, submitted
Moss, C., Whittle, M., Irwin, M.J. 1988, MNRAS, 232, 381
Moss, C., Whittle, M., Pesce, J.E. 1998, MNRAS, 300, 205
Simien, F., de Vaucouleurs, G. 1986, ApJ, 302, 564 |
# Unusually High Thermal Conductivity of Carbon Nanotubes
## Abstract
Combining equilibrium and non-equilibrium molecular dynamics simulations with accurate carbon potentials, we determine the thermal conductivity $`\lambda `$ of carbon nanotubes and its dependence on temperature. Our results suggest an unusually high value $`\lambda \approx 6,600`$ W/m$`\cdot `$K for an isolated $`(10,10)`$ nanotube at room temperature, comparable to the thermal conductivity of a hypothetical isolated graphene monolayer or diamond. Our results suggest that these high values of $`\lambda `$ are associated with the large phonon mean free paths in these systems; substantially lower values are predicted and observed for the basal plane of bulk graphite.
With the continually decreasing size of electronic and micromechanical devices, there is an increasing interest in materials that conduct heat efficiently, thus preventing structural damage. The stiff $`sp^3`$ bonds, resulting in a high speed of sound, make monocrystalline diamond one of the best thermal conductors. An unusually high thermal conductance should also be expected in carbon nanotubes, which are held together by even stronger $`sp^2`$ bonds. These systems, consisting of seamless and atomically perfect graphitic cylinders a few nanometers in diameter, are self-supporting. The rigidity of these systems, combined with the virtual absence of atomic defects or coupling to soft phonon modes of the embedding medium, should make isolated nanotubes very good candidates for efficient thermal conductors. This conjecture has been confirmed by experimental data that are consistent with a very high thermal conductivity for nanotubes.
In the following, we will present results of molecular dynamics simulations using the Tersoff potential , augmented by Van der Waals interactions in graphite, for the temperature dependence of the thermal conductivity of nanotubes and other carbon allotropes. We will show that isolated nanotubes are at least as good heat conductors as high-purity diamond. Our comparison with graphitic carbon shows that inter-layer coupling reduces thermal conductivity of graphite within the basal plane by one order of magnitude with respect to the nanotube value which lies close to that for a hypothetical isolated graphene monolayer.
The thermal conductivity $`\lambda `$ of a solid along a particular direction, taken here as the $`z`$ axis, is related to the heat flowing down a long rod with a temperature gradient $`dT/dz`$ by
$$\frac{1}{A}\frac{dQ}{dt}=-\lambda \frac{dT}{dz},$$
(1)
where $`dQ`$ is the energy transmitted across the area $`A`$ in the time interval $`dt`$. In solids where the phonon contribution to the heat conductance dominates, $`\lambda `$ is proportional to $`Cvl`$, the product of the heat capacity per unit volume $`C`$, the speed of sound $`v`$, and the phonon mean free path $`l`$. The latter quantity is limited by scattering from sample boundaries (related to grain sizes), point defects, and by umklapp processes. In the experiment, the strong dependence of the thermal conductivity $`\lambda `$ on $`l`$ translates into an unusual sensitivity to isotopic and other atomic defects. This is best illustrated by the reported thermal conductivity values in the basal plane of graphite, which scatter by nearly two orders of magnitude. As similar uncertainties may be associated with thermal conductivity measurements in "mats" of nanotubes , we decided to determine this quantity using molecular dynamics simulations.
The first approach used to calculate $`\lambda `$ was based on a direct molecular dynamics simulation. Heat exchange with a periodic array of hot and cold regions along the nanotube has been achieved by velocity rescaling, following a method that had been successfully applied to the thermal conductivity of glasses . Unlike glasses, however, nanotubes exhibit an unusually high degree of long-range order over hundreds of nanometers. The perturbations imposed by the heat transfer reduce the effective phonon mean free path to below the unit cell size. We found it hard to achieve convergence, since the phonon mean free path in nanotubes is significantly larger than unit cell sizes tractable in molecular dynamics simulations.
As an alternate approach to determine the thermal conductivity, we used equilibrium molecular dynamics simulations based on the Green-Kubo expression that relates this quantity to the integral over time $`t`$ of the heat flux autocorrelation function by
$$\lambda =\frac{1}{3Vk_BT^2}\int _0^{\mathrm{\infty }}<\mathbf{J}(t)\cdot \mathbf{J}(0)>dt.$$
(2)
Here, $`k_B`$ is the Boltzmann constant, $`V`$ is the volume, $`T`$ the temperature of the sample, and the angled brackets denote an ensemble average. The heat flux vector $`\mathbf{J}(t)`$ is defined by
$`\mathbf{J}(t)`$ $`=`$ $`{\displaystyle \frac{d}{dt}}{\displaystyle \sum _i}\mathbf{r}_i\mathrm{\Delta }e_i`$ (3)
$`=`$ $`{\displaystyle \sum _i}\mathbf{v}_i\mathrm{\Delta }e_i-{\displaystyle \sum _i}{\displaystyle \sum _{j(\ne i)}}\mathbf{r}_{ij}(\mathbf{F}_{ij}\cdot \mathbf{v}_i),`$ (4)
where $`\mathrm{\Delta }e_i=e_i-<e>`$ is the excess energy of atom $`i`$ with respect to the average energy per atom $`<e>`$. $`\mathbf{r}_i`$ is the position and $`\mathbf{v}_i`$ the velocity of atom $`i`$, and $`\mathbf{r}_{ij}=\mathbf{r}_j-\mathbf{r}_i`$. Assuming that the total potential energy $`U=\sum _iu_i`$ can be expressed as a sum of binding energies $`u_i`$ of individual atoms, then $`\mathbf{F}_{ij}=-\nabla _iu_j`$, where $`\nabla _i`$ is the gradient with respect to the position of atom $`i`$.
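In a pairwise-force picture, Eq. (4) translates into a per-snapshot accumulation such as the following sketch; the positions, velocities, per-atom energies and pair forces are random placeholders for what an MD engine using the carbon potential would supply.

```python
# Sketch of the heat-flux vector of Eq. (4) for a single MD snapshot.
# Positions r, velocities v, per-atom energies e and the pair forces
# F_ij would come from the MD engine; random placeholders are used here.
import numpy as np

rng = np.random.default_rng(1)
N = 400
r = rng.uniform(0.0, 10.0, (N, 3))
v = rng.normal(0.0, 1.0, (N, 3))
e = rng.normal(0.0, 1.0, N)
de = e - e.mean()                     # excess energy per atom

def pair_force(i, j):
    """Placeholder for F_ij = -grad_i u_j from the carbon potential."""
    return -0.01 * (r[j] - r[i])      # toy linear restoring force

J = (v * de[:, None]).sum(axis=0)     # convective term of Eq. (4)
for i in range(N):                    # pair term of Eq. (4)
    for j in range(N):
        if i != j:
            J -= (r[j] - r[i]) * np.dot(pair_force(i, j), v[i])
print("J =", J)
```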
In low-dimensional systems, such as nanotubes or graphene monolayers, we infer the volume from the way these systems pack in space (nanotubes form bundles and graphite a layered structure, both with an inter-wall separation of $`3.4`$ Å) in order to convert the thermal conductance of a system to the thermal conductivity of a material.
Once $`\mathbf{J}(t)`$ is known, the thermal conductivity can be calculated using Eq. (2). We found, however, that these results depend sensitively on the initial conditions of each simulation, thus necessitating a large ensemble of simulations. This high computational demand was further increased by the slow convergence of the autocorrelation function, requiring long integration time periods.
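To illustrate how Eq. (2) is evaluated in practice, the following sketch integrates a heat-flux autocorrelation function averaged over time origins; the flux time series is synthetic and the constants are placeholders, not values from our simulations.

```python
# Sketch of a Green-Kubo estimate of lambda from Eq. (2). The heat-flux
# series J is synthetic noise with a short correlation time; T, V and
# the sampling interval dt are placeholders, not our simulation values.
import numpy as np

kB = 1.380649e-23                 # J/K
dt = 5.0e-16                      # sampling interval (s)
T, V = 300.0, 1.0e-26             # temperature (K), cell volume (m^3)

rng = np.random.default_rng(2)
n = 50000
J = np.zeros((n, 3))
for i in range(1, n):             # correlated (AR(1)-like) flux series
    J[i] = 0.99 * J[i - 1] + rng.normal(size=3)

def autocorr(J, max_lag):
    """<J(t).J(0)>, averaged over time origins, for lags 0..max_lag-1."""
    return np.array([np.sum(J[: n - lag] * J[lag:]) / (n - lag)
                     for lag in range(max_lag)])

C = autocorr(J, 2000)
lam = np.trapz(C, dx=dt) / (3.0 * V * kB * T**2)
print("lambda =", lam, "(toy units)")
```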
These disadvantages have been shown to be strongly reduced in an alternate approach that combines the Green-Kubo formula with nonequilibrium thermodynamics in a computationally efficient manner . In this approach, the thermal conductivity along the $`z`$ axis is given by
$$\lambda =\underset{\mathbf{F}_e\to 0}{lim}\underset{t\to \mathrm{\infty }}{lim}\frac{<J_z(\mathbf{F}_e,t)>}{F_eTV},$$
(5)
where $`T`$ is the temperature of the sample, regulated by a Nosé-Hoover thermostat , and $`V`$ is the volume of the sample. $`J_z(\mathbf{F}_e,t)`$ is the $`z`$ component of the heat flux vector for a particular time $`t`$. $`\mathbf{F}_e`$ is a small fictitious "thermal force" (with a dimension of inverse length) that is applied to individual atoms. This fictitious force $`\mathbf{F}_e`$ and the Nosé-Hoover thermostat impose an additional force $`\mathrm{\Delta }\mathbf{F}_i`$ on each atom $`i`$. This additional force modifies the gradient of the potential energy and is given by
$`\mathrm{\Delta }\mathbf{F}_i`$ $`=`$ $`\mathrm{\Delta }e_i\mathbf{F}_e-{\displaystyle \sum _{j(\ne i)}}\mathbf{F}_{ij}(\mathbf{r}_{ij}\cdot \mathbf{F}_e)`$ (7)
$`+{\displaystyle \frac{1}{N}}{\displaystyle \sum _j}{\displaystyle \sum _{k(\ne j)}}\mathbf{F}_{jk}(\mathbf{r}_{jk}\cdot \mathbf{F}_e)-\alpha \mathbf{p}_i.`$
Here, $`\alpha `$ is the Nosé-Hoover thermostat multiplier acting on the momentum $`\mathbf{p}_i`$ of atom $`i`$. $`\alpha `$ is calculated using the time integral of the difference between the instantaneous kinetic temperature $`T`$ of the system and the heat bath temperature $`T_{eq}`$, from $`\dot{\alpha }=(T-T_{eq})/Q`$, where $`Q`$ is the thermal inertia. The third term in Eq. (7) guarantees that the net force acting on the entire $`N`$-atom system vanishes.
In Fig. 1 we present the results of our nonequilibrium molecular dynamics simulations for the thermal conductance of an isolated $`(10,10)`$ nanotube aligned along the $`z`$ axis. In our calculation, we consider 400 atoms per unit cell, and use periodic boundary conditions. Each molecular dynamics simulation run consists of 50,000 time steps of $`5.0\times 10^{-16}`$ s. Our results for the time dependence of the heat current for the particular value $`F_e=0.2`$ Å<sup>-1</sup>, shown in Fig. 1(a), suggest that $`J_z(t)`$ converges within the first few picoseconds to its limiting value for $`t\to \mathrm{\infty }`$ in the temperature range below 400 K. The same is true for the quantity $`J_z(t)/T`$, shown in Fig. 1(b), the average of which is proportional to the thermal conductivity $`\lambda `$ according to Eq. (5). Our molecular dynamics simulations have been performed for a total time length of $`25`$ ps to represent well the long-time behavior.
In Fig. 1(c) we show the dependence of the quantity
$$\stackrel{~}{\lambda }\equiv \underset{t\to \mathrm{\infty }}{lim}\frac{<J_z(\mathbf{F}_e,t)>}{F_eTV}$$
(8)
on $`F_e`$. We have found that direct calculations of $`\stackrel{~}{\lambda }`$ for very small thermal forces carry a substantial error, as they require a division of two very small numbers in Eq. (8). We base our calculations of the thermal conductivity at each temperature on 16 simulation runs, with $`F_e`$ values ranging from $`0.4`$ to $`0.05`$ Å<sup>-1</sup>. As shown in Fig. 1(c), data for $`\stackrel{~}{\lambda }`$ can be extrapolated analytically for $`\mathbf{F}_e\to 0`$ to yield the thermal conductivity $`\lambda `$, shown in Fig. 2.
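The $`F_e\to 0`$ extrapolation in Fig. 1(c) amounts to fitting the finite-force data and reading off the intercept; a minimal version, with invented data points, could look like this:

```python
# Minimal sketch of the F_e -> 0 extrapolation of Eq. (8). The
# lam_tilde values are invented placeholders, not our simulation data.
import numpy as np

F_e = np.array([0.4, 0.3, 0.2, 0.1, 0.05])         # 1/Angstrom
lam_tilde = np.array([2800.0, 3600.0, 4600.0, 5800.0, 6300.0])

coeffs = np.polyfit(F_e, lam_tilde, deg=2)         # low-order fit
print("lambda(F_e -> 0) =", np.polyval(coeffs, 0.0), "W/m.K (toy)")
```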
Our results for the temperature dependence of the thermal conductivity of an isolated $`(10,10)`$ carbon nanotube, shown in Fig. 2, reflect the fact that $`\lambda `$ is proportional to the heat capacity $`C`$ and the phonon mean free path $`l`$. At low temperatures, $`l`$ is nearly constant, and the temperature dependence of $`\lambda `$ follows that of the specific heat. At high temperatures, where the specific heat is constant, $`\lambda `$ decreases as the phonon mean free path becomes smaller due to umklapp processes. Our calculations suggest that at $`T=100`$ K, carbon nanotubes show an unusually high thermal conductivity value of $`37,000`$ W/m$`\cdot `$K. This value lies very close to the highest value observed in any solid, $`\lambda =41,000`$ W/m$`\cdot `$K, that has been reported for a 99.9% pure <sup>12</sup>C crystal at 104 K. In spite of the decrease of $`\lambda `$ above 100 K, the room temperature value of $`6,600`$ W/m$`\cdot `$K is still very high, exceeding the reported thermal conductivity value of $`3,320`$ W/m$`\cdot `$K for nearly isotopically pure diamond.
We found it useful to compare the thermal conductivity of a $`(10,10)`$ nanotube to that of an isolated graphene monolayer as well as bulk graphite. For the graphene monolayer, we unrolled the 400-atom unit cell of the $`(10,10)`$ nanotube into a plane. The periodically repeated unit cell used in the bulk graphite calculation contained 720 atoms, arranged in three layers. The results of our calculations, presented in Fig. 3, suggest that an isolated nanotube shows a very similar thermal transport behavior to that of a hypothetical isolated graphene monolayer, in general agreement with available experimental data . While an even larger thermal conductivity might be expected for a monolayer than for a nanotube, we must consider that, unlike the nanotube, a graphene monolayer is not self-supporting in vacuum. For all carbon allotropes considered here, we also find that the thermal conductivity decreases with increasing temperature in the range depicted in Fig. 3.
It is very interesting that once graphene layers are stacked in graphite, the inter-layer interactions quench the thermal conductivity of this system by nearly one order of magnitude. For the latter case of crystalline graphite, we also found our calculated thermal conductivity values to be confirmed by corresponding observations in the basal plane of highest-purity synthetic graphite, which are also reproduced in the figure. We would like to note that experimental data suggest that the thermal conductivity in the basal plane of graphite peaks near 100 K, similar to our nanotube results.
Based on the above described difference in the conductivity between a graphene monolayer and graphite, we should expect a similar reduction of the thermal conductivity when a nanotube is brought into contact with other systems. This should occur when nanotubes form a bundle or rope, become nested in multi-wall nanotubes, or interact with other nanotubes in the "nanotube mat" of "bucky-paper", and could be verified experimentally. Consistent with our conjecture is the low value of $`\lambda \approx 0.7`$ W/m$`\cdot `$K reported for the bulk nanotube mat at room temperature .
In summary, we combined results of equilibrium and non-equilibrium molecular dynamics simulations with accurate carbon potentials to determine the thermal conductivity $`\lambda `$ of carbon nanotubes and its dependence on temperature. Our results suggest an unusually high value $`\lambda \approx 6,600`$ W/m$`\cdot `$K for an isolated $`(10,10)`$ nanotube at room temperature, comparable to the thermal conductivity of a hypothetical isolated graphene monolayer or diamond. We believe that these high values of $`\lambda `$ are associated with the large phonon mean free paths in these systems. Our numerical data indicate that in the presence of inter-layer coupling in graphite and related systems, the thermal conductivity is reduced significantly, falling into the experimentally observed value range.
This work was supported by the Office of Naval Research and DARPA under Grant No. N00014-99-1-0252. |
# Transport Theoretical Approach to the Nucleon Spectral Function in Nuclear Matter
(Work supported by BMBF, DFG and GSI Darmstadt)
## Abstract
The nucleon spectral function in infinite nuclear matter is calculated in a quantum transport theoretical approach. Exploiting the known relation between collision rates and correlation functions the spectral function is derived self-consistently. By re-inserting the spectral functions into the collision integrals the description of hard processes from the high-momentum components of wave functions and interactions is improved iteratively until convergence is achieved. The momentum and energy distributions and the nuclear matter occupation probabilities are in very good agreement with the results obtained from many-body theory.
PACS numbers: 21.65.+f, 24.10.Cn
Keywords: nuclear matter, many-body theory, nucleon spectral function
A longstanding problem of nuclear many-body theory is the question to what extent short-range correlations contribute to the properties of nuclear matter. While the bulk properties of nuclear matter are mainly affected by long-range mean-field dynamics, the picture changes, however, if nuclei are probed at large energy and momentum transfer. The spectral functions, obtained for example from $`A(e,e^{\prime }p)X`$ experiments , are spread much more widely in energy and momentum than predicted by mean-field dynamics. Obviously, the high momentum and energy structure of spectral functions has important consequences for dynamical processes, e.g. sub-threshold particle production on nuclei.
An overall measure of short-range correlations in nuclear matter is the depletion of occupation probabilities in ground state momentum distributions by about 10%. The processes behind this number are such that states from inside the Fermi sphere are scattered into high momentum configurations which clearly are not of mean-field nature. As a result, a momentum distribution with a long high momentum tail is generated extending much beyond the Fermi surface. An important finding is that the magnitude and the shape of the high momentum component is almost independent of the system under consideration while the inner parts, especially in light nuclei, are affected by the shell structure and finite size effects. Hence, the high momentum tails of the spectral functions are likely to reflect a universal property of nuclear many-body dynamics at short distances.
Theoretically, many attempts have been made to understand short-range correlations in nuclei. Approaches based on nuclear many-body theory up to explanations referring to the QCD aspects of strong interactions have been proposed. Obviously, before conclusions on non-standard phenomena can be drawn the many-body theoretical aspects of short-range correlations must be understood in detail. In fact, the results obtained from many-body theory are describing the available data rather satisfactorily. In recent years the theoretical results have converged, at least for the depletion of nuclear matter occupation probabilities. The majority of the model calculations are using Brueckner and Dirac-Brueckner techniques, see e.g. . In a correlation dynamical treatment was applied. Most of the approaches use the quasi-particle approximation, i.e. a sharp energy distribution for the nucleons is assumed (see e.g. ). Occupation probabilities in finite nuclei could be well described in a second RPA approach and by polarization self-energies , respectively. The Dirac-Brueckner calculations in , including hole-hole propagation, led to an extended and numerically rather involved energy-momentum structure of self-energies. However, the net effect on binding energies and occupation probabilities was only moderate, but improving the agreement with empirical data.
Here, we investigate spectral functions in nuclear matter by quantum transport theory , thereby taking up a proposal of Danielewicz and Bertsch . Indeed, the present study is motivated by our recent implementation of off-shell effects in a transport theoretical treatment of heavy-ion and other nuclear collisions . A sound theoretical basis for how to treat off-shell effects in transport equations has been given in (see also ). A central result of transport theory is that collision rates and correlation functions are directly related: the calculation of either of the two quantities requires the knowledge of the other one. Theoretically, this corresponds to a rather involved self-consistency problem for which a direct solution apparently does not exist. Similar to , a practical approach is obtained by an iterative procedure. Successive approximations for self-energies, spectral functions and collision integrals are obtained by re-inserting the corresponding quantities from previous cycles of the calculation until convergence is achieved. The method is discussed below. Results are presented and compared to the work of Benhar et al., who calculated the nuclear matter spectral function within the framework of correlation-basis theory, and of Ciofi degli Atti et al., who derived a global parameterization of spectral functions for finite nuclei and nuclear matter.
In quantum transport theory dynamical processes are described by the one-particle correlation functions
$`g^>(1,1^{\prime })`$ $`=`$ $`-i<\mathrm{\Psi }(1)\mathrm{\Psi }^{\dagger }(1^{\prime })>`$
$`g^<(1,1^{\prime })`$ $`=`$ $`i<\mathrm{\Psi }^{\dagger }(1^{\prime })\mathrm{\Psi }(1)>,`$ (1)
where $`\mathrm{\Psi }`$ are the nucleon field operators in Heisenberg representation. They account for the non-stationary processes which introduce a coupling between causal and anti-causal single particle propagation. In other words, states from below and above the Fermi surface are dynamically mixed as discussed above. Correspondingly, in an interacting quantum system the single particle self-energy operator includes correlation self-energies $`\mathrm{\Sigma }^{<>}`$ which couple particle and hole degrees of freedom . Obviously, $`g^{<>}`$ and $`\mathrm{\Sigma }^{<>}`$ are closely related to each other and must be determined by those parts of the fundamental interactions producing non-stationary effects. The wanted relation is obtained from transport theory. After a Fourier transformation to energy-momentum representation one finds for the self energies
$`\mathrm{\Sigma }^>(\omega ,p)`$ $`=`$ $`g{\displaystyle \int \frac{d^3p_2d\omega _2}{(2\pi )^4}\frac{d^3p_3d\omega _3}{(2\pi )^4}\frac{d^3p_4d\omega _4}{(2\pi )^4}(2\pi )^4\delta ^4(p+p_2-p_3-p_4)\overline{|\mathcal{T}|^2}}`$ (2)
$`\times g^<(\omega _2,p_2)g^>(\omega _3,p_3)g^>(\omega _4,p_4)`$
$`\mathrm{\Sigma }^<(\omega ,p)`$ $`=`$ $`g{\displaystyle \int \frac{d^3p_2d\omega _2}{(2\pi )^4}\frac{d^3p_3d\omega _3}{(2\pi )^4}\frac{d^3p_4d\omega _4}{(2\pi )^4}(2\pi )^4\delta ^4(p+p_2-p_3-p_4)\overline{|\mathcal{T}|^2}}`$ (3)
$`\times g^>(\omega _2,p_2)g^<(\omega _3,p_3)g^<(\omega _4,p_4),`$
where $`g=4`$ is the spin-isospin degeneracy factor and $`\overline{|\mathcal{T}|^2}`$ denotes the square of the nucleon-nucleon scattering amplitude, averaged over spin and isospin of the incoming nucleons and summed over spin and isospin of the outgoing nucleons. Since both $`g^{<>}`$ and $`\mathrm{\Sigma }^{<>}`$ describe the correlation dynamics, the spectral function can be obtained from either of the two quantities as the difference over the cut along the energy real axis. In terms of the correlation propagators,
$$a(\omega ,p)=i\left(g^>(\omega ,p)-g^<(\omega ,p)\right).$$
(4)
In a non-relativistic formulation the single particle spectral function is found explicitly as
$$a(\omega ,p)=\frac{\mathrm{\Gamma }(\omega ,p)}{(\omega -\frac{p^2}{2m_N}-\text{Re}\mathrm{\Sigma }(\omega ,p))^2+\frac{1}{4}\mathrm{\Gamma }^2(\omega ,p)},$$
(5)
including the particle and hole nucleon self-energy $`\mathrm{\Sigma }`$, which accounts for long-range mean-field dynamics. The width of the spectral distribution is determined by the imaginary part of the self-energy,
$$\mathrm{\Gamma }(\omega ,p)=-2\text{Im}\mathrm{\Sigma }(\omega ,p)=i(\mathrm{\Sigma }^>(\omega ,p)-\mathrm{\Sigma }^<(\omega ,p)).$$
(6)
From Eqs. (2), (3) it is apparent that the high momentum, i.e. short range, components of nuclear interactions are of primary importance for the energy-momentum spreading of the single particle strength. Obviously, the $`\delta `$-shaped quasi-particle distribution is recovered for $`\text{Im}\mathrm{\Sigma }\to 0`$, i.e. vanishing correlations.
The correlation propagators $`g^{<>}`$ are given by
$`g^<(\omega ,p)`$ $`=ia(\omega ,p)f(\omega ,p),`$ (7)
$`g^>(\omega ,p)`$ $`=-ia(\omega ,p)(1-f(\omega ,p))`$ (8)
in terms of the energy-momentum phase space distribution function $`f(\omega ,p)`$. Since we are dealing with a system at $`T=0`$, $`f`$ reduces to
$$f(\omega ,p)=\mathrm{\Theta }(\omega _F-\omega )$$
(9)
with the Fermi energy $`\omega _F`$. Therefore,
$`\mathrm{\Sigma }^>(\omega ,p)=0,\mathrm{\Gamma }(\omega ,p)=-i\mathrm{\Sigma }^<(\omega ,p)\text{ for }\omega \le \omega _F`$ (10)
$`\mathrm{\Sigma }^<(\omega ,p)=0,\mathrm{\Gamma }(\omega ,p)=i\mathrm{\Sigma }^>(\omega ,p)\text{ for }\omega \ge \omega _F.`$ (11)
In order to obtain $`\mathrm{\Gamma }`$, we have to calculate the self-energies $`\mathrm{\Sigma }^>`$ and $`\mathrm{\Sigma }^<`$. Since these quantities themselves depend on $`\mathrm{\Gamma }`$ via the spectral function $`a(\omega ,p)`$, the calculation has to be done iteratively.
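Schematically, the resulting self-consistency cycle can be organized as in the sketch below; the collision integrals that yield $`\mathrm{\Gamma }`$ are replaced by a smooth placeholder, so this shows only the structure of the iteration, not our production code.

```python
# Structural sketch of the self-consistent iteration for a(omega, p).
# The collision integrals of Eqs. (12)-(13) that give Gamma are mocked
# by a smooth placeholder; energies in MeV, Re Sigma set to zero.
import numpy as np

omega = np.linspace(-500.0, 500.0, 201)       # MeV
p = np.linspace(0.0, 1250.0, 251)             # MeV
W, P = np.meshgrid(omega, p, indexing="ij")
m_N, omega_F = 939.0, -16.0                   # MeV

def spectral(gamma):
    """Eq. (5) with the constant Re Sigma absorbed into omega."""
    return gamma / ((W - P**2 / (2 * m_N))**2 + 0.25 * gamma**2)

def width_from(a):
    """Placeholder for Gamma from Sigma^> and Sigma^< built with a."""
    return 1.0 + 50.0 * np.abs(np.tanh((W - omega_F) / 200.0))

gamma = np.full_like(W, 0.1)                  # near-quasiparticle start
for it in range(20):
    a = spectral(gamma)
    gamma_new = width_from(a)
    if np.max(np.abs(gamma_new - gamma)) < 1e-6:
        break
    gamma = gamma_new
print("converged after", it + 1, "iterations")
```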
Diagrammatically, the particle- and hole-type transition rates $`\mathrm{\Sigma }^>`$ and $`\mathrm{\Sigma }^<`$, Eqs. (2) and (3), are of two-particle-one-hole (2p1h) and one-particle-two-hole (1p2h) structure, respectively. In this respect, they are of the same basic structure as the polarization self-energies considered in many-body theoretical descriptions. However, while in many-body theory the polarization self-energies are typically included perturbatively in lowest order only by performing the integrations over intermediate 2p1h and 1p2h states with quasi-particle spectral functions, e.g. in Ref. and also Refs. , we apply a more extended scheme accounting for higher order effects.
A first attempt to extend the many-body scheme to higher order calculations was made in . Since, as expected, the computational effort was found to increase considerably, approximations on spectral functions of particle intermediate states had to be invoked. An important aspect of the present transport theoretical approach is the fully self-consistent treatment of particle and hole strength functions in each stage of the calculation. Within our iterative approach this is achieved by calculating the correlation self-energies with the self-consistently obtained spectral functions. Hence, the converged results include a non-perturbative summation of a whole series of $`n`$p $`m`$h diagrams.
Dynamically, the strength of the transition rates is determined by the in-medium (off-shell) nucleon-nucleon scattering amplitude $`\mathcal{T}`$. We now make the extreme assumption of a constant transition amplitude, as is appropriate for the short-range part of the $`NN`$ interaction. Neglecting the energy-momentum dependence of the amplitude clearly simplifies the computations. Then, $`\mathcal{T}`$ contributes only as an overall multiplicative factor to the transition rates which, at a given density, may be treated as an empirical parameter $`\overline{\mathcal{T}}`$. The self-energies $`\mathrm{\Sigma }^>`$ and $`\mathrm{\Sigma }^<`$ are then given by
$`\mathrm{\Sigma }^>(\omega ,p)`$ $`=`$ $`-4i{\displaystyle \frac{\overline{|\mathcal{T}|^2}}{(2\pi )^6}}{\displaystyle \int d\omega _3d\omega _2dp_3p_3^2dp_2p_2^2\frac{d\mathrm{cos}\vartheta _2}{p_{\text{tot}}p_3}a(\omega _2,p_2)}`$ (12)
$`\times f(\omega _2,p_2)a(\omega _3,p_3)(1-f(\omega _3,p_3)){\displaystyle \int dp_4p_4a(\omega _4,p_4)(1-f(\omega _4,p_4))}`$
and
$`\mathrm{\Sigma }^<(\omega ,p)`$ $`=`$ $`4i{\displaystyle \frac{\overline{|\mathcal{T}|^2}}{(2\pi )^6}}{\displaystyle \int d\omega _3d\omega _2dp_3p_3^2dp_2p_2^2\frac{d\mathrm{cos}\vartheta _2}{p_{\text{tot}}p_3}a(\omega _2,p_2)}`$ (13)
$`\times (1-f(\omega _2,p_2))a(\omega _3,p_3)f(\omega _3,p_3){\displaystyle \int dp_4p_4a(\omega _4,p_4)f(\omega _4,p_4)}`$
with $`p_{\text{tot}}=|\stackrel{}{p}+\stackrel{}{p}_2|`$, where four integrations due to the delta function and two over the azimuthal angles $`\phi _2,\phi _3`$ have already been carried out. The remaining six-dimensional integrals were calculated on an $`(\omega ,p)`$-grid with boundaries $`|\omega |\le 0.5`$ GeV and $`p\le p_{\text{max}}=1.25`$ GeV and a mesh size of 5 MeV in both directions.
In the present study we work in infinite nuclear matter at equilibrium with the Fermi momentum $`p_F=1.33`$ fm<sup>-1</sup>, density $`\rho =0.16`$ $`\text{fm}^{-3}`$ and a binding energy per particle of $`\omega _F=-16`$ MeV. The real part of the self-energy Re$`\mathrm{\Sigma }`$ is chosen to be independent of momentum and energy. In fact, a constant Re$`\mathrm{\Sigma }`$ only serves to define the scale for the excitation energy $`\omega `$, which in our case is given by $`\omega -\omega _F`$. The energy and momentum dependent pieces of Re$`\mathrm{\Sigma }`$ play a more important role because they will modify the pole structure of the propagators. Inside the Fermi sphere typically a quadratic momentum dependence is found, leading to a scaling of the kinetic energy by an effective mass and a global compression of spectra. Hence, leaving out the momentum dependence will affect the final results only to a minor degree. Neglecting the energy dependence might introduce larger uncertainties because this amounts to a violation of analyticity as expressed by the dispersion relation between real and imaginary parts of polarization self-energies. Since we do obtain an energy dependent imaginary part, analyticity could in principle be restored by calculating Re$`\mathrm{\Sigma }`$ dispersively. However, using a constant, energy independent transition matrix element clearly fails for large positive energies because the imaginary part would continue to increase beyond any limit with the level density of two-particle states. Thus, the imaginary part of the self-energy is not available over the full range of energies necessary for solving the dispersion equation. Rather than introducing an energy cut-off as an additional free model parameter, we decide at the present stage to completely neglect the energy dependence of Re$`\mathrm{\Sigma }`$.
The average amplitude was adjusted such that the spectral functions obtained with many-body theoretical methods by Benhar et al. are reproduced, leading to $`\left(\overline{|\mathcal{T}|^2}\right)^{1/2}`$=207 MeV fm<sup>3</sup>. Relating this value to on-shell processes would correspond to a constant cross section of about 20 mb. In Fig. 1 the full energy-momentum structure of the resulting nucleon spectral function
$$P(\omega ,p)=Na(\omega ,p)$$
(14)
is displayed, where $`N`$ is a normalization constant. The same normalization as in the work of Benhar et al. is used. Our results are in remarkable agreement with the calculations of Ciofi degli Atti et al. as seen by comparing to Fig. 10 in that reference.
For a more quantitative comparison, cuts for several momenta are shown in Fig. 2 as a function of energy $`\omega `$ below the Fermi surface and compared to the results of Ref. . The agreement is surprisingly good considering the seemingly drastic approximation invoked for $`\mathcal{T}`$ and the neglect of analyticity.
We conclude from this agreement that the spectral function of nuclear matter in the hole sector is rather insensitive to the specific energy and momentum structure of the interaction once a fully self-consistent calculation is performed. The effect of self-consistency is illustrated in Fig. 2, where spectral functions obtained after the first iteration are compared to the fully converged results. The iteration scheme was initialized by choosing spectral functions with a constant width of 0.1 MeV as the start value. At a level of about 30%, global features, e.g. the location and height of the peak structures, are already described by the first iteration. At first sight, this seems to support the perturbative approach usually applied in many-body theoretical treatments. However, during the self-consistency cycle strength is re-distributed from the peak into the tail regions. Hence, higher order polarization effects act similarly to an increase of the effective coupling strength. The results show that the full transport theoretical calculation apparently leads to a self-consistent adjustment of spectral functions which ultimately is dominated by phase space effects.
The agreement with the many-body calculations of extends also to the single nucleon momentum distribution in nuclear matter defined as
$$n(p)=\int _{-\mathrm{\infty }}^{\omega _F}d\omega P(\omega ,p).$$
(15)
Results are displayed in Fig. 3. Again the agreement is close to perfect, except for the increase of the distribution towards $`p_F`$, which is in contrast to the calculations of . Calculations in a schematic model using an empirical form for $`\mathrm{\Gamma }(\omega ,p)`$ with parameters given in Ref. show that this behaviour is a direct consequence of the missing analyticity in the self-energies. The schematic approach also confirms that the momentum dependence of Re$`\mathrm{\Sigma }`$ indeed leads to minor effects in the bulk behaviour of the spectral functions except close to the Fermi surface. The momentum distribution is affected mainly in the extreme momentum tail, where a steeper slope is found. In this context it is important to note that $`n(p)`$ must fall off faster than $`\mathcal{O}(\frac{1}{p^4})`$ in order to have a convergent result for the kinetic energy distribution of the correlated ground state. The transport theoretical results in Fig. 3 fulfill this constraint.
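As an illustration of Eq. (15), the sketch below integrates a toy Lorentzian spectral function over energies below $`\omega _F`$; all parameter values are placeholders chosen only to show the bookkeeping.

```python
# Sketch of Eq. (15): n(p) as the energy integral of a (normalized)
# spectral function below omega_F. Toy Lorentzian line shape; MeV units.
import numpy as np

omega = np.linspace(-500.0, 500.0, 2001)
p = np.linspace(0.0, 1250.0, 126)
m_N, omega_F, gamma = 939.0, -16.0, 20.0

def P_spec(w, q):
    a = gamma / ((w - q**2 / (2 * m_N))**2 + 0.25 * gamma**2)
    return a / np.trapz(a, w)            # normalize over the energy grid

hole = omega <= omega_F
n_p = np.array([np.trapz(P_spec(omega, q)[hole], omega[hole]) for q in p])
print("n(p=0) =", n_p[0], " n(p_max) =", n_p[-1])
```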
Finally, we remark that the shape of the momentum distribution is closely related to the value of $`\overline{\mathcal{T}}`$: increasing the value, corresponding to a stronger interaction among the nucleons, would increase the occupation of states above $`p_F`$ and soften the Fermi edge, leading to a better description of the data in Fig. 3 for $`p>p_F`$.
Summarizing, correlations in nuclear matter have been described by an approach that allows one to go beyond the perturbative level to which conventional many-body methods are constrained in practice. In a first application, the transition rates were calculated with a momentum independent nucleon-nucleon scattering amplitude. That a transport theoretical description with a simple approximation for the scattering amplitude could reproduce the momentum distribution obtained in state-of-the-art many-body calculations had already been observed by the authors of . The results here show in addition that also the spectral functions are dominated by phase space effects rather than by the off-shell momentum structure of interactions. The excellent agreement of the nucleon spectral function at far-off-shell energies and momenta with those obtained in sophisticated state-of-the-art nuclear many-body calculations also provides some a-posteriori justification for the method to include off-shell effects in nuclear transport calculations developed in Ref. .
Acknowledgements:
We are grateful to S. Fantoni for providing us with his spectral functions and to C. Ciofi degli Atti for helpful comments. |
# A Measurement of the Mass and Full-Width of the $`\eta _c`$ Meson
## Abstract
In a sample of 7.8 million $`J/\psi `$ decays collected in the Beijing Spectrometer, the process $`J/\psi \to \gamma \eta _c`$ is observed for five different $`\eta _c`$ decay channels: $`K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`K^\pm K_S^0\pi ^{\mp }`$ (with $`K_S^0\to \pi ^+\pi ^{-}`$), $`\varphi \varphi `$ (with $`\varphi \to K^+K^{-}`$) and $`K^+K^{-}\pi ^0`$. From these signals, we determine the mass of the $`\eta _c`$ to be $`2976.6\pm 2.9\pm 1.3`$ MeV. Combining this result with a previously reported result from a similar study using $`\psi (2S)\to \gamma \eta _c`$ detected in the same spectrometer gives $`m_{\eta _c}=2976.3\pm 2.3\pm 1.2`$ MeV. For the combined samples, we obtain $`\mathrm{\Gamma }_{\eta _c}=11.0\pm 8.1\pm 4.1`$ MeV.
A precise knowledge of the mass difference between the $`J/\psi (1^{--})`$ and $`\eta _c(0^{-+})`$ charmonium states is useful for the determination of the strength of the spin-spin interaction term in non-relativistic potential models. While the $`J/\psi `$ mass has been determined with high accuracy (1 part in $`10^5`$) to be $`3096.88\pm 0.04`$ MeV, the mass of the $`\eta _c`$ is less well measured. The Particle Data Group (PDG) average of $`m_{\eta _c}=2979.8\pm 2.1`$ MeV is based on experiments using the reactions $`e^+e^{-}\to J/\psi \to \gamma \eta _c`$ , $`e^+e^{-}\to \psi (2S)\to \gamma \eta _c`$ and $`p\overline{p}\to \gamma \gamma `$ . These measurements have poor internal consistency, and the PDG fit to the measurements has a confidence level of only 0.001. The most recent result from Fermilab experiment E760 disagrees with the result from the DM2 group by almost four standard deviations. Measurements of the full width of the $`\eta _c`$ have been made by four groups: E760 reports a result of $`\mathrm{\Gamma }_{\eta _c}=23.9_{-7.1}^{+12.6}`$ MeV , which is larger than the results from SPEC ($`7.0_{-7.0}^{+7.5}`$ MeV) , Mark III ($`10.1_{-8.2}^{+33.0}`$ MeV) and Crystal Ball ($`11.5\pm 4.5`$ MeV). Additional measurements for both $`m_{\eta _c}`$ and $`\mathrm{\Gamma }_{\eta _c}`$ are needed to improve the situation. An $`\eta _c`$ mass value of $`m_{\eta _c}=2975.8\pm 3.9\pm 1.2`$ MeV was reported earlier by the Beijing Spectrometer (BES) collaboration based on an analysis of the reaction $`\psi (2S)\to \gamma \eta _c`$ . In this paper we report a measurement of the mass of the $`\eta _c`$ based on a data sample of 7.8 million $`J/\psi `$ events collected in BES. The reactions $`J/\psi \to \gamma \eta _c`$, $`\eta _c\to K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`K^\pm K_S^0\pi ^{\mp }`$ (with $`K_S^0\to \pi ^+\pi ^{-}`$), $`\varphi \varphi `$ (with $`\varphi \to K^+K^{-}`$) and $`K^+K^{-}\pi ^0`$ have been used to determine the mass and width of the $`\eta _c`$.
The Beijing Spectrometer has been described in detail in Ref. . Here we describe briefly those detector elements essential to this measurement. Charged particle tracking is provided by a 10 superlayer main drift chamber (MDC). Each superlayer contains four cylindrical layers of sense wires that measure both the position and the ionization energy loss (dE/d$`x`$) of charged particles. The momentum resolution is $`\sigma _P/P=1.7\%\sqrt{1+P^2}`$, where $`P`$ is in GeV/$`c`$. The dE/d$`x`$ resolution is $`9\%`$ and provides good $`\pi /K`$ separation in the low momentum region. An array of 48 scintillation counters surrounding the MDC measures the time-of-flight (TOF) of charged tracks with a resolution of 330 ps for hadrons. Outside of the TOF system is an electromagnetic calorimeter comprised of streamer tubes and lead sheets with a $`z`$ position resolution of 4 cm. The energy resolution of the shower counter scales as $`\sigma _E/E=22\%/\sqrt{E}`$, where $`E`$ is in GeV. Outside the shower counter is a solenoidal magnet that produces a 0.4 Tesla magnetic field.
The event selection criteria for each channel are described in detail in previous papers . Here we repeat only the essential information and emphasize those considerations that are special to the $`m_{\eta _c}`$ measurement. Candidates are selected by requiring the correct number of charged track candidates for the given hypothesis. These tracks must be well fit to a helix in the polar angle range $`-0.8<\mathrm{cos}\theta <0.8`$ and have a transverse momentum above $`60`$ MeV/c. For the four-charged-track channels, at least one photon with energy $`E_\gamma >30`$ MeV is required in the barrel shower counter; for the $`K^+K^{-}\pi ^0`$ channel, at least three $`E_\gamma >30`$ MeV photons are required. Showers that can be associated with charged tracks are not considered. Events are fitted kinematically with four constraints (4C) to the hypotheses: $`J/\psi \to \gamma K^+K^{-}\pi ^+\pi ^{-}`$, $`J/\psi \to \gamma \pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`J/\psi \to \gamma K^\pm \pi ^{\mp }\pi ^+\pi ^{-}`$, $`J/\psi \to \gamma \gamma \gamma K^+K^{-}`$. A one-constraint (1C) fit is done for the $`J/\psi \to \gamma _{miss}K^+K^{-}K^+K^{-}`$ hypothesis, where $`\gamma _{miss}`$ indicates that this photon is not detected. We select those events for each particular channel that have a confidence level greater than 5%. A cut on the variable $`|U_{miss}|=|E_{miss}-P_{miss}|<0.10`$ GeV (for $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$), $`<0.12`$ GeV (for $`K^+K^{-}\pi ^+\pi ^{-}`$), $`<0.15`$ GeV (for $`K^\pm K_S^0\pi ^{\mp }`$) and $`<0.15`$ GeV (for $`\varphi \varphi `$) is imposed to reject events with multiphotons and misidentified charged particles. Here, $`E_{miss}`$ and $`P_{miss}`$ are, respectively, the missing energy and missing momentum calculated using the measured quantities for the charged tracks. Another cut, on the variable $`P_{t\gamma }^2=4|P_{miss}|^2\mathrm{sin}^2(\theta _{t\gamma }/2)<0.006(\text{GeV}/c)^2`$ (for $`K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$ and $`K^\pm K_S^0\pi ^{\mp }`$), is used to reduce the backgrounds from $`\pi ^0`$'s, where $`\theta _{t\gamma }`$ is the angle between the missing momentum and the photon direction. For the $`K^+K^{-}\pi ^+\pi ^{-}`$ and $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$ channels, $`|M_{\pi ^+\pi ^{-}\pi ^0}-M_\omega |>30`$ MeV is required to remove the background from $`J/\psi \to \omega \pi ^+\pi ^{-}`$ and $`J/\psi \to \omega K^+K^{-}`$, where a $`\pi ^0`$ is associated with the missing momentum. For the $`K^\pm K_S^0\pi ^{\mp }`$ (with $`K_S^0\to \pi ^+\pi ^{-}`$) channel, the $`\pi ^+\pi ^{-}`$ invariant mass for the $`K_S^0`$ candidate is required to be within 25 MeV of $`M_{K_S^0}`$. For the $`\varphi \varphi `$ (with $`\varphi \to K^+K^{-}`$) channel, the invariant masses of both candidate $`\varphi `$'s corresponding to $`K^+K^{-}`$ pairs are required to be within 30 MeV of the $`\varphi `$ mass. For the $`K^+K^{-}\pi ^0`$ channel, at least one of the three $`\gamma \gamma `$ invariant mass combinations is required to be within 40 MeV of the $`\pi ^0`$ mass; for events where this happens for more than one combination, the one with invariant mass closest to the $`\pi ^0`$ mass is taken to be the candidate $`\pi ^0`$.
Using the event selection criteria described above, we determine the invariant mass spectra for each decay mode shown in Figs. 1(a) to 1(e). The curve in each figure indicates the result of a likelihood fit using a Breit-Wigner line shape convoluted with a Gaussian mass resolution function for the $`\eta _c`$, plus a polynomial function to represent the background. In these fits, the $`\eta _c`$ total width is fixed at its PDG central value of $`\mathrm{\Gamma }=13.2`$ MeV, and the resolution at the Monte Carlo determined value. The number of fitted events and the mass of the $`\eta _c`$ determined for each of the channels are listed in Table I. The experimental resolution, which varies from channel to channel, is also listed in the table.
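The signal line shape used in these fits, a Breit-Wigner convolved with a Gaussian resolution (a Voigt profile), can be evaluated compactly; the sketch below uses illustrative parameter values and omits the polynomial background.

```python
# Sketch of the signal line shape: a Breit-Wigner of full width Gamma
# convolved with a Gaussian resolution sigma (a Voigt profile),
# evaluated via the Faddeeva function. Illustrative values, in MeV.
import numpy as np
from scipy.special import wofz

def voigt(m, m0, gamma, sigma):
    """BW(FWHM = gamma) convolved with Gauss(sigma), unit area."""
    z = ((m - m0) + 1j * gamma / 2.0) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

m = np.linspace(2850.0, 3100.0, 501)
shape = voigt(m, m0=2976.7, gamma=13.2, sigma=13.3)
print("peak at", m[np.argmax(shape)], "MeV")
```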
Figure 1(f) shows the combined four-charged-track invariant mass distributions in the $`\eta _c`$ mass region for the $`K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`K^\pm K_S^0\pi ^{\mp }`$ and $`\varphi \varphi `$ channels, which are those with similar mass resolution. Here, a likelihood fit using a $`\mathrm{\Gamma }`$ fixed at the PDG value and a mass resolution ($`\sigma `$) fixed at the value averaged over the four channels ($`\sigma _{avg}=13.3`$ MeV) gives a total of $`100.9\pm 19.8`$ $`\eta _c`$ events and a mass $`m_{\eta _c}=2976.7\pm 3.4`$ MeV with a $`\chi ^2/\text{dof}=14.2/20`$, corresponding to a confidence level of 81.9%. If $`\sigma =13.3`$ MeV is fixed and the mass, number of events, and $`\mathrm{\Gamma }`$ are allowed to float, the resulting mass value and number of events are $`m_{\eta _c}=2976.7\pm 3.0`$ MeV and $`91.5\pm 21.2`$, respectively.
The main systematic errors associated with the $`m_{\eta _c}`$ determination arise from the mass-scale calibration, the detection efficiency, and the uncertainties associated with the selection of the cut values. In the case of the $`\psi (2S)`$ measurement , the level of the systematic error on the overall mass scale of BES was estimated as 0.8 MeV by comparing the masses of the $`\chi _{c1}`$ and $`\chi _{c2}`$ charmonium states, detected in the same decay channels, with their PDG values. These masses have been measured in a number of experiments, and the reported values have good internal consistency. The systematic error caused by the detection efficiency was determined to be 0.7 MeV by using a Monte Carlo simulation. Systematic errors originating from the cut conditions are mainly from the confidence-level cuts for the constrained kinematic fits and the photon minimum energy requirement. For example, when the accepted confidence level probability is varied between 1% and 10%, the central value of $`m_{\eta _c}`$ shifts by 0.7 MeV. When the minimum energy of the photon is changed from 30 MeV to 50 MeV, the central value of $`m_{\eta _c}`$ shifts by 0.2 MeV. The systematic errors associated with the uncertainties in the experimental mass resolution and the full width of the $`\eta _c`$ are small. When the experimental mass resolution is varied between the extreme values of 11.0 and 15.0 MeV, and the full width is changed from 10.0 to 16.0 MeV, we find that shifts of the mass are less than 0.2 MeV. The total overall systematic error of this measurement is taken to be 1.3 MeV, the sum in quadrature of all contributions.
Combining the weighted average with the result for the $`K^+K^{-}\pi ^0`$ decay channel (see Table I), we obtain the result $`m_{\eta _c}=2976.6\pm 2.9\pm 1.3`$ MeV for the five channels. Combining this result with that from the BES analysis of $`\psi (2S)\to \gamma \eta _c`$, namely $`m_{\eta _c}=2975.8\pm 3.9\pm 1.2`$ MeV , we obtain a weighted average $`m_{\eta _c}=2976.3\pm 2.3\pm 1.2`$ MeV. Here, since most of the systematic error in the mass scale is common between the $`J/\psi `$ and $`\psi (2S)`$ measurements, we take the systematic error of the combined measurement to be that from the $`\psi (2S)`$ measurement.
The full width of the $`\eta _c`$ was determined from a fit to the combined $`J/\psi `$ and $`\psi (2S)`$ event samples. Figure 2 shows the combined four-charged-track invariant mass distribution in the $`\eta _c`$ mass region for $`J/\psi \to \gamma \eta _c`$ (with $`\eta _c\to K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`K^\pm K_S^0\pi ^{\mp }`$ and $`\varphi \varphi `$) and $`\psi (2S)\to \gamma \eta _c`$ (with $`\eta _c\to K^+K^{-}\pi ^+\pi ^{-}`$, $`\pi ^+\pi ^{-}\pi ^+\pi ^{-}`$, $`K^\pm K_S^0\pi ^{\mp }`$ and $`K^+K^{-}K^+K^{-}`$). An $`\eta _c`$ full width of $`\mathrm{\Gamma }=11.0\pm 8.1`$ MeV is given by a likelihood fit performed with the resolution fixed at $`\sigma =13.3`$ MeV. This fit gives a total of $`168.3\pm 26.8`$ $`\eta _c`$ events with a $`\chi ^2/\text{dof}=15.0/21`$, corresponding to a confidence level of 82.1%. The systematic error of the width measurement is 4.1 MeV, which includes the sum in quadrature of the uncertainty in the mass resolution $`\sigma `$ (2.5 MeV), the uncertainty associated with the choice of selection cuts (2.5 MeV), and the mass dependence of the detection efficiency (2.0 MeV).
In summary, we have used the BES 7.8 million $`J/\psi `$ data sample to observe the $`\eta _c`$ in five different decay modes and determine the $`\eta _c`$ mass to be $`2976.6\pm 2.9\pm 1.3`$ MeV. Combining this result with a prior BES analysis of $`\psi (2S)\to \gamma \eta _c`$, we find $`m_{\eta _c}=2976.3\pm 2.3\pm 1.2`$ MeV. Combining the two samples, we also obtain $`\mathrm{\Gamma }_{\eta _c}=11.0\pm 8.1\pm 4.1`$ MeV. The mass measurement of the $`\eta _c`$ from BES is in good agreement with the PDG value of $`2979.8\pm 2.1`$ MeV, but 3.8$`\sigma `$ below the E760 result of $`2988.3_{-3.1}^{+3.3}`$ MeV. Figure 3 shows the BES results together with the four previous measurements with the smallest errors. The curve in Fig. 3 allows a determination of the values of $`\chi ^2`$ versus $`m_{\eta _c}`$ for a fit including all existing measurements. The minimum value, $`\chi ^2/\text{dof}=22.2/8`$, occurs at $`2979.2\pm 0.9`$ MeV. The high $`\chi ^2`$ value is predominantly due to the poor agreement between the DM2 and E760 measurements. The two BES measurements reduce the new world average for the mass by 0.6 MeV.
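The combination underlying Fig. 3 is essentially an inverse-variance weighted average with a $`\chi ^2`$ scan. As a rough illustration only (asymmetric errors symmetrized, statistical and systematic errors added in quadrature, and using just the four values quoted above rather than all existing measurements):

```python
# Rough sketch of the inverse-variance weighted average and chi^2 for
# the eta_c mass; stat/syst errors added in quadrature, asymmetric
# errors symmetrized, and only the four values quoted above are used.
import numpy as np

masses = np.array([2979.8, 2988.3, 2976.6, 2975.8])
errors = np.array([2.1, 3.2, np.hypot(2.9, 1.3), np.hypot(3.9, 1.2)])

w = 1.0 / errors**2
m_avg = np.sum(w * masses) / np.sum(w)
chi2 = np.sum(((masses - m_avg) / errors)**2)
print("m = %.1f +/- %.1f MeV, chi2/dof = %.1f/%d"
      % (m_avg, 1.0 / np.sqrt(np.sum(w)), chi2, len(masses) - 1))
```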
The BES collaboration acknowledges financial support from the Chinese Academy of Sciences, the National Natural Science Foundation of China, the U.S. Department of Energy and the Ministry of Science & Technology of Korea. It thanks the staff of BEPC for their hard efforts. This work is supported in part by the National Natural Science Foundation of China under contracts Nos. 19991480, 19825116 and 19605007 and the Chinese Academy of Sciences under contract No. KJ 95T-03(IHEP); by the Department of Energy under Contract Nos. DE-FG03-92ER40701 (Caltech), DE-FG03-93ER40788 (Colorado State University), DE-AC03-76SF00515 (SLAC), DE-FG03-91ER40679 (UC Irvine), DE-FG03-94ER40833 (U Hawaii), DE-FG03-95ER40925 (UT Dallas); and by the Ministry of Science and Technology of Korea under Contract KISTEP I-03-037(Korea). We also acknowledge Prof. D. V. Bugg, Prof. B. S. Zou and Prof. S. F. Tuan for helpful suggestions and discussions. |
# Constraining reionization using the thermal history of the baryons
## 1. Introduction
Quasars have provided us with a unique probe of the high redshift universe. These bright point sources shine like a flashlight through space, revealing the presence of baryonic matter through the light it absorbs. Thus every quasar spectrum contains a one-dimensional map of the distribution of matter along the line of sight. The extraordinary quality of the spectra obtainable with the HIRES spectrograph on the Keck telescope, enables us to extract the wealth of information that has been collected by the quasarโs light along its journey through space and time. Computer simulations of structure formation have been remarkably successful in reproducing these observations. They show that the physics governing the high redshift intergalactic medium (IGM), which is responsible for the low column density absorption lines (the so-called Ly$`\alpha `$ forest) is relatively simple. The IGM, which contains most of the baryons in the universe, is photoionized and photoheated by the collective UV radiation from young stars and quasars. On large scales its dynamics are determined by the gravitational field of the dark matter, while on small scales gas pressure is important. The availability of superb data and a detailed physical model, have made the Ly$`\alpha `$ forest into a powerful probe of the high-redshift universe.
Since shock heating is unimportant in the low-density IGM, most of the gas follows a simple temperature-density relation which is the result of the interplay of photoionization heating and adiabatic cooling due to the expansion of the universe. For densities around the cosmic mean, this relation is well-described by a power-law, $`T=T_0(\rho /\overline{\rho })^{\gamma -1}`$ (Hui & Gnedin 1997). At reionization the gas is reheated, resulting in an increase in $`T_0`$ and a decrease in $`\gamma `$ (provided that the gas is reionized on a timescale short compared to the Hubble time). In ionization equilibrium, $`T_0`$ decreases and the slope of the effective equation of state steepens (i.e. $`\gamma `$ increases). However, because the timescale for recombination is long, the gas retains some memory of how and when it was reionized (Miralda-Escudé & Rees 1994).
The distribution of line widths ($`b`$-parameters) depends on various mechanisms. Thermal motions of the hydrogen atoms broaden the H I absorption lines and other processes, such as the differential Hubble flow across the absorbing structure and bulk flows, also contribute to the line widths. However, the minimum line width is set by the temperature of the gas, which in turn depends on the density. A standard way of analyzing Ly$`\alpha `$ forest spectra is by decomposing them into a set of Voigt profiles. Since the minimum line width ($`b`$-parameter) depends on the temperature, and since column density ($`N`$) correlates strongly with physical density, there is a cut-off in the $`b(N)`$ distribution which traces the effective equation of state of the IGM (Schaye et al. 1999; Ricotti, Gnedin & Shull 2000; Bryan & Machacek 2000). We have used this relation to measure the thermal evolution of the IGM from a set of nine Ly$`\alpha `$ quasar absorption line spectra.
This work is more fully described and discussed in a forthcoming publication (Schaye et al. 2000).
## 2. Method
We have measured the $`b(N)`$ cut-off for a set of nine high-quality Ly$`\alpha `$ forest spectra, spanning the redshift range 2.0-4.5, eight of which were taken with the HIRES spectrograph of the Keck telescope. We used hydrodynamic simulations to calibrate the relations between the $`b(N)`$ cut-off and the temperature-density relation. Except for the two lowest redshift quasars, the Ly$`\alpha `$ forest spectra were split in two in order to take into account the significant redshift evolution ($`\mathrm{\Delta }z\sim 0.5`$) and signal-to-noise variation across a single spectrum. The calibration was done separately for each half of each observed spectrum. The synthetic spectra were processed to give them identical characteristics (resolution, pixel size, noise properties, mean absorption) as the real data. The same Voigt profile fitting package (an automated version of VPFIT (Webb 1987)) was used for both the simulated and the observed spectra.
## 3. Results and discussion
The measured evolution of the temperature at the mean density and the slope of the effective equation of state are plotted in Figure 1. From $`z\sim 3.5`$ to $`z\sim 3.0`$, $`T_0`$ increases and the gas becomes close to isothermal ($`\gamma \sim 1.0`$). This behavior differs drastically from that predicted by models in which helium is fully reionized at higher redshift. For example, the solid lines correspond to a simulation that uses a uniform metagalactic UV-background from quasars as computed by Haardt & Madau (1996) and which assumes the gas to be optically thin. In this simulation, both hydrogen and helium are fully reionized by $`z\sim 4.5`$ and the temperature of the IGM declines slowly as the universe expands. Such a model can clearly not account for the peak in the temperature at $`z\sim 3`$ (reduced $`\chi ^2`$ for the solid curves are 6.9 for $`T_0`$ and 3.6 for $`\gamma `$). Instead, we associate the peak in $`T_0`$ and the low value of $`\gamma `$ with reheating due to the second reionization of helium (He II $`\to `$ He III). This interpretation is supported by measurements of the Si IV/C IV ratio (Songaila 1998, but see also Boksenberg, Rauch, & Sargent 1998 and Giroux & Shull 1997) and direct measurements of the He II opacity (Heap et al. 2000 and references therein).
The dashed lines in Figure 1 are for a model that was designed to fit the data (reduced $`\chi ^2`$ is 0.24 for $`T_0`$ and 1.38 for $`\gamma `$). In this simulation, which has a much softer UV-background at high redshift, He II reionizes at $`z\sim 3.2`$. Before reionization, when the gas is optically thick to ionizing photons, the mean energy per photoionization is much higher than in the optically thin limit (Abel & Haehnelt 1999). We have approximated this effect in this simulation by enhancing the photoheating rates during reionization, so raising the temperature of the IGM.
Since the simulation assumes a uniform ionizing background, the temperature has to increase abruptly (i.e. much faster than the gas can recombine) in order to make $`\gamma `$ as small as observed. In reality, the low-density gas may be reionized by harder photons, which will be the first ionizing photons to escape from the dense regions surrounding the sources. This would lead to a larger temperature increase in the more dilute, cooler regions, resulting in a decrease of $`\gamma `$ even for a more gradual reionization. Furthermore, although reionization may proceed fast locally (as in our small simulation box), it may be patchy and take some time to complete. Hence the steep temperature jump indicated by the dashed line, although compatible with the data, should be regarded as illustrative only. The globally averaged $`T_0`$ could well increase more gradually which would also be consistent with the data. More data at $`z>3`$ is needed to determine whether the temperature rise is sharp or gradual. On the theoretical side, more realistic models should include radiative transfer effects, which are important during reionization.
Together with measurements of the He II opacity, which probe the ionization state in the voids, the thermal history of the IGM provides important constraints on models of helium reionization. Furthermore, the temperature of the IGM before the onset of helium reionization can be used to constrain the redshift of hydrogen reionization, which marks the end of the dark ages of cosmic history.
### Acknowledgments.
We are grateful to Bob Carswell and Sara Ellison for giving us permission to use their spectra of the quasars Q1100$``$264 and APM 08279+5255 respectively.
## References
Abel, T., & Haehnelt, M. G. 1999, ApJ, 520, L13
Boksenberg, A., Sargent, W. L. W., & Rauch, M. 1998, preprint (astro-ph/9810502)
Bryan, G. L., & Machacek, M. E. 2000, ApJ, submitted (astro-ph/9906459)
Giroux, M. L., & Shull, J. M. 1997, AJ, 113, 1505
Heap, S. R., Williger, G. M., Smette, A., Hubeny, I., Sahu, M., Jenkins, E. B., Tripp, T. M. & Winkler, J. N. 2000, ApJ, in press (astro-ph/9812429)
Haardt, F., & Madau, P. 1996, ApJ, 461, 20
Hui, L. & Gnedin, N. Y. 1997, MNRAS, 292, 27
Miralda-Escudé, J., & Rees, M. J. 1994, MNRAS, 266, 343
Ricotti, M., Gnedin, N. Y., & Shull, J. M. 2000, ApJ, in press (astro-ph/9906413)
Schaye, J., Theuns, T., Leonard, A., & Efstathiou, G. 1999, MNRAS, 310, 57
Schaye, J., Theuns, T., Rauch, M., Efstathiou, G., & Sargent, W. L. W. 2000, MNRAS, submitted (astro-ph/9912432)
Songaila, A. 1998, AJ, 115, 2184
Webb, J. K. 1987, Ph.D. thesis, Univ. Cambridge |
no-problem/0002/hep-ph0002230.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Particle accelerators are designed and built, based essentially upon the classical theory of point charges interacting with electromagnetism. Nevertheless, particles are described by wave-functions, and diffractive limits must exist as to how well they can be localized in a given optical apparatus. The first quantum mechanical effects to arise in a potentially limiting way might be expected to be diffractive in nature. In this paper we take a first look at the problem of estimating the quantum diffractive limits of accelerators. We begin with an important system, an NLC-class machine. We are inspired to consider this because the desired goals for the NLC beam spot size are ambitious. To achieve the desired luminosity requires a $`\sim 5`$ nm beam spot in one transverse dimension (the vertical or $`y`$ direction in the NLC reports). We will find that this criterion is about two orders of magnitude above the quantum limit. Indeed, we will describe how to estimate the classical design result for the beam spot size itself from quantum mechanics, obtaining rough agreement with the NLC specifications.
We obtain a conceptually simple result. The diffractive limit on the beam spot size in the $`x_{\perp }`$ direction is given by the Rayleigh formula for a (massless) wave of energy $`E`$ which has passed through an effective "aperture" $`\delta _0`$ and focused over a focal length $`f`$. That is:
$$\mathrm{\Delta }x\approx \frac{\hbar cf}{E\delta _0}$$
(1.1)
where $`f`$ is the final focal length, $`E`$ the beam energy (the result varies somewhat in a compound lens system, see Section 3). We emphasize that the "aperture" $`\delta _0`$ is not a mechanical aperture, e.g., it is not the beam pipe size. $`\delta _0`$ is actually the initial state Gaussian width of the transverse quantum wave-function as it enters the linac upstream from the damping rings, where the wave-function has been prepared (we assume an "ideal linac," in which there is negligible further synchrotron radiation downstream; this is not necessarily a good approximation, and corrections to the effective $`\delta _0`$ are expected). The initial state can be considered to be an ensemble of particles, each in simple harmonic oscillator (SHO) transverse wave-functions, where the Gaussian envelope (groundstate) width is determined by the damping ring wiggler system. This is given by:
$$\delta _0=\sqrt{\frac{\hbar c}{eB}}$$
(1.2)
where $`B`$ is the typical magnetic field in the damping system, of order $`1`$ Tesla. Taking $`f=2`$ m, $`E=250`$ GeV, $`B=1`$ Tesla yields $`\delta _0\approx 25.7`$ nm, and thus $`\mathrm{\Delta }x\gtrsim 0.062`$ nm as a diffractive limit. Hence, the NLC would appear to be safely above the quantum limit by about two orders of magnitude. We remark, however, that this is the extremal lower limit which saturates the Heisenberg uncertainty relationship, and holds in our idealized limit.
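A quick numerical cross-check of eqs. (1.1) and (1.2), as a minimal sketch in SI units with the parameter values quoted above:

```python
import math

hbar = 1.0545718e-34   # J s
c = 2.99792458e8       # m / s
e = 1.602176634e-19    # C

B = 1.0                # T, damping-ring field
f = 2.0                # m, final focal length
E = 250e9 * e          # J, beam energy of 250 GeV

# eq. (1.2): sqrt(hbar c / (e B c)) = sqrt(hbar / (e B)) in SI units
delta0 = math.sqrt(hbar / (e * B))
# eq. (1.1): diffractive spot size at the focus
dx = hbar * c * f / (E * delta0)

print(f"delta0 = {delta0 * 1e9:.1f} nm")  # ~25.7 nm
print(f"dx     = {dx * 1e9:.3f} nm")      # ~0.062 nm
```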
In general the individual particle initial state is an excited SHO transverse wave-function of average principal quantum number $`\overline{n}`$. This increases the expected diffractive spot size to $`\mathrm{\Delta }x\sim \sqrt{\overline{n}}f/E\delta _0`$. In fact, since $`\overline{n}\propto 1/\hbar `$ it is easily seen that this result is independent of $`\hbar `$, and therefore should be equivalent to a classical derivation of the beam spot size. $`\overline{n}`$ can be crudely estimated from radiation relaxation following the original arguments of Sands, and others. This yields a result of $`\mathrm{\Delta }x\approx 2`$ nm, roughly consistent with the NLC design report calculations for the vertical beam spot.
The subject of quantum beam dynamics for particle accelerators is fairly novel . Gaussian optics is a preferred formalism for tackling the problem considered here. Presently we construct a transverse Gaussian wave-packet, with a longitudinal plane wave structure, and propagate it through an optical system. Gaussians extremalize the Heisenberg uncertainty relation, and they are also the groundstate solutions in continuous linear focusing channels, and e.g., magnetic lenses, wigglers, to a reasonable approximation, etc., and can be approached by synchrotron radiation relaxation , , , . Remarkably, Gaussian transverse wave-functions, which solve the quantum Schroedinger equation for propagation through the optical system, (neglecting synchrotron radiation), are controlled entirely by the classical lens matrices of the system. While Gaussian optics is a standard formalism in treating electron microscopy , , , to our knowledge, the behavior of a quantum Gaussian beam in a synchrotron has not been previously formulated, and we will indicate the self-replicating solution to a synchrotron by an application of lens matrix methods.
First consider the problem of a relativistic electron wave-function passing through a lens. Spin is an inessential complication, so we can use the Klein-Gordon (KG) equation. Assume for simplicity that there is only one spatial transverse dimension, $`x_{\perp }`$, and let $`z`$ be the longitudinal spatial dimension. In the KG equation we include a transverse simple harmonic oscillator (SHO) potential term which is dependent upon $`z`$ (for the analysis, we set $`\hbar =c=1`$):
$$\partial ^2\varphi +m^2\varphi +\stackrel{~}{K}(z)x_{\perp }^2\varphi =0$$
(1.3)
Then, with $`\varphi =\mathrm{exp}[-i(Et-p_zz)]\widehat{\varphi }`$ and $`E^2=p_z^2+m^2`$, the KG equation becomes the transverse Schroedinger equation:
$$i\frac{\partial \widehat{\varphi }}{\partial z}+\frac{1}{2E}\left(\vec{\nabla }_{\perp }\right)^2\widehat{\varphi }-\frac{K(z)x^2}{2}\widehat{\varphi }=0$$
(1.4)
where:
$$K\stackrel{~}{K}/E.$$
(1.5)
This is a standard construction in optics, and $`\widehat{\varphi }(z)`$ has the conventional interpretation with $`z`$ replacing time. The parameter $`K(z)`$ is $`z`$-dependent, corresponding to the finite longitudinal structure of the lens system. For a single thin lens we take $`K=0`$ for $`z<-\delta z`$ and $`z>0`$, and $`K=K_0`$ for $`-\delta z\le z\le 0`$.
Let us now postulate a Gaussian form for the wave-function centered at the transverse position $`x_{}`$, carrying a transverse momentum $`p_{}`$:
$$\widehat{\varphi }=\mathrm{exp}\left(-\frac{1}{2}A(z)(x-x_{\perp })^2+ip_{\perp }x+C\right)$$
(1.6)
In this expression, $`A(z)`$ is the complex Gaussian kernel, $`x_{\perp }`$ and $`p_{\perp }`$ are real, and $`C`$ simply parameterizes the overall normalization. Hence, the Gaussian wave-function has four real parameters. After substituting this ansatz, the Schroedinger equation, eq.(1.4), yields the following equations of motion for the width:
$$i\frac{\partial A}{\partial z}=\frac{A^2}{E}-K(z)$$
(1.7)
and for $`x_{\perp }`$ and $`p_{\perp }`$ we obtain the classical Hamilton equations:
$$\frac{\partial p_{\perp }}{\partial z}=-Kx_{\perp };\qquad \frac{\partial x_{\perp }}{\partial z}=\frac{p_{\perp }}{E};$$
(1.8)
Note that the centroid $`x_{\perp }`$ and centroid momentum $`p_{\perp }`$ motions are decoupled from that of the Gaussian kernel $`A(z)`$ and vice versa (anharmonic effects would generally couple these quantities). Moreover, the boundary conditions on $`A(z)`$ and of the centroid $`x_{\perp }`$ and centroid momentum $`p_{\perp }`$ are independent. Note that the last equation is just the "$`z`$-velocity" expressed in terms of the momentum for a particle of "mass" $`E`$. Remarkably, eq.(1.7) can be seen to be equivalent to the classical Hamilton equations by identifying $`A(z)\equiv -iP(z)/X(z)`$ where $`P(z)`$ and $`X(z)`$ are generalized (complex) momenta and positions which satisfy the eqs.(1.8).
Now, we impose an initial condition at $`z=-L`$ that the particle has been prepared into a transverse Gaussian wave-packet, specified to have a pure real width $`\delta _0`$ given by:
$$A_0=Re[A(-L)]=1/(\delta _0)^2;\qquad Im[A(-L)]=0$$
(1.9)
We assume that the centroid of the initial wave-packet is moving parallel to the $`z`$-axis, thus $`p_{\perp }=0`$, and $`x_{\perp }=x_0`$ is initially arbitrary at $`z=-L`$.
The wave-packet enters the lens at $`z=-\delta z`$ and exits at $`z=0`$. Upon entry of the lens $`A(-\delta z)`$ is given by the free drift solution of eq.(1.7) from $`z=-L`$ to the lens, over the drift distance $`L`$:
$$A(-\delta z)=\frac{A_0}{(1+iA_0L/E)}$$
(1.10)
where we assume a thin lens, $`\delta z/L<<1`$.
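The free-drift solution above is easy to verify directly; the following minimal sketch (units $`\hbar =c=1`$, with arbitrary illustrative values of $`E`$, $`\delta _0`$, and $`L`$) integrates eq. (1.7) with $`K=0`$ and compares to the closed form (1.10):

```python
E, delta0, L = 1.0, 0.3, 5.0
A0 = 1.0 / delta0**2
A = complex(A0)                    # real initial kernel

# integrate i dA/dz = A^2/E, i.e. dA/dz = -i A^2/E, by small Euler steps
dz, z = 1e-4, 0.0
while z < L:
    A += -1j * A * A / E * dz
    z += dz

A_closed = A0 / (1 + 1j * A0 * L / E)   # eq. (1.10)
print(A, A_closed)                       # agree to integration accuracy
```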
In the thin lens, to a good approximation for small $`\delta z`$, we have from eq.(1.7):
$$A(0)=A(-\delta z)+i\delta zK_0$$
(1.11)
Here we neglected the term $`-i\delta zA^2/E`$ in the differential equation for $`A(z)`$ which only gives negligible free particle spreading in the lens. Moreover, the classical centroid motion of the wave-packet is found from eq.(1.8):
$$p_{\perp }(0)=-K_0x_0\delta z;\qquad x_{\perp }(0)=x_0.$$
(1.12)
Upon exiting the lens the particle propagates again in free space a distance $`\ell `$ with $`K=0`$. Hence we find:
$$A(\ell )=\frac{A(0)}{1+iA(0)\ell /E}$$
(1.13)
and:
$$p_{\perp }(\ell )=-K_0x_0\delta z;\qquad x_{\perp }(\ell )=x_0+p_{\perp }\ell /E.$$
(1.14)
Note that the classical trajectory of the off-axis particle is deflected back toward the lens axis, $`x_{\perp }=0`$.
The focal length $`f`$ is defined such that $`x_{\perp }(f)=0`$; hence:
$$f=\frac{E}{K_0\delta z}$$
(1.15)
The kernel of the wave-packet can now be obtained by solving eqs.(1.9,1.10,1.11,1.13) recursively to obtain:
$$A(\ell )=\left[\frac{(1-L/f)+i\delta _0^2E/f}{\delta _0^2(1-\ell /f)+i(\ell /E+L/E-L\ell /(Ef))}\right]$$
(1.16)
Note that the Gaussian kernel has an imaginary part which changes from positive (focusing) to negative (defocusing) upon passage of the geometrical focal point, $`\ell >f`$. Thus, the transverse probability distribution becomes:
$$|\widehat{\varphi }|^2=\mathcal{N}(z)\mathrm{exp}\left\{-\left(\frac{(x-x_{\perp })^2}{\delta _0^2(1-\ell /f)^2+(\ell /E\delta _0)^2[1-L(\ell -f)/f\ell ]^2}\right)\right\}$$
(1.17)
and, the transverse size of the wave-packet is given by:
$$\delta ^2(\ell )=\delta _0^2(1-\ell /f)^2+(\ell /E\delta _0)^2[1-L(\ell -f)/f\ell ]^2$$
(1.18)
In Figure 1 we give a numerical integration of the Schroedinger equation in the preceding discussion, which confirms the validity of our solution. Note that for finite $`L`$ the Gaussian width is focused to a minimum at $`z=f+f^2/L+\mathrm{\cdots }`$. In the limit $`L\rightarrow \mathrm{\infty }`$ the transverse size of the wave-packet reaches a minimum at the focal point $`\ell =f`$, where the new effective transverse size is:
$$\mathrm{\Delta }x=\frac{f}{E\delta _0}$$
(1.19)
This is the usual Rayleigh diffractive minimum, $`\sim f\lambda /a`$, if we regard $`a\approx \delta _0`$ as an "effective aperture size" through which the beam has passed, and $`E=p_zc=\hbar c/\lambda `$, the usual quantum wavelength of the particle.
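The drift-lens-drift chain can be exercised end to end; a short sketch (units $`\hbar =c=1`$, illustrative parameter values) propagates the kernel with eqs. (1.10), (1.11), and (1.13) and recovers the spot size (1.19) at $`\ell =f`$:

```python
import numpy as np

E, delta0, f, L = 1.0, 0.5, 2.0, 50.0   # arbitrary units
A = 1.0 / delta0**2 + 0j                # prepared real Gaussian kernel

A = A / (1 + 1j * A * L / E)            # free drift over L, eq. (1.10)
A = A + 1j * E / f                      # thin lens, eq. (1.11) with K0*dz = E/f
A = A / (1 + 1j * A * f / E)            # drift to the classical focus, l = f

print(1.0 / np.sqrt(A.real))            # 4.0 = f/(E*delta0), eq. (1.19)
```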
What if the initial prepared wave-function is not the groundstate of a SHO (pure Gaussian), but is rather an excited eigenstate of principal quantum number $`n`$? Hence, at $`z=-L`$, neglecting $`x_{\perp }`$ and $`p_{\perp }`$, we assume:
$$\widehat{\psi }(-L)=H_n(x/\sqrt{2}\delta _0)\mathrm{exp}\left(-\frac{1}{2}(x/\delta _0)^2+C\right)$$
(1.20)
where $`H_n(\xi )`$ is the $`n`$th Hermite polynomial.
First, we note that the Gaussian solution eq.(1.6) contains the generating function for Hermite polynomials :
$$\widehat{\varphi }=\mathrm{exp}\left(-\frac{1}{2}A(z)x^2+ip_{\perp }(z)x+C\right)\left(\sum _{n=0}^{\mathrm{\infty }}\frac{H_n(\sqrt{A(z)/2}x)}{n!}[\sqrt{A(z)/2}x_{\perp }(z)]^n\right)$$
(1.21)
For a freely drifting particle, if we choose $`p_{\perp }(-L)=0`$, and initial $`x_{\perp }(-L)=x_0`$, in $`\widehat{\varphi }`$, then we see that our solution $`\widehat{\psi }(z)`$ is determined for any $`z`$:
$$\widehat{\psi }(z)=\frac{\partial ^n}{\partial x_0^n}\varphi (z)|_{x_0=0}$$
(1.22)
After passing through an arbitrary lens system, the solution for $`\widehat{\psi }(z)`$ becomes messy, and in general $`x_{\perp }(z)`$ and $`p_{\perp }(z)`$ are arbitrary, and we cannot so easily differentiate with respect to $`x_0`$ to pull out our solution. However, both $`x_{\perp }(z)`$ and $`p_{\perp }(z)`$ are proportional to $`x_0`$ by the linearity of the Hamilton equations. At a focal point we have $`x_{\perp }(f)=0`$ (for any $`x_0`$, owing to linearity), and $`p_{\perp }(f)\propto x_0`$. Hence, the solution at a focal point simplifies:
$`\widehat{\psi }(f)`$ $`=`$ $`\frac{\partial ^n}{\partial x_0^n}\varphi (f)|_{x_0=0}`$ (1.23)
$`=`$ $`\frac{\partial ^n}{\partial x_0^n}\mathrm{exp}\left(-\frac{1}{2}A(z)x^2+ip_{\perp }(z)x+C\right)|_{x_0=0}`$
$`\propto `$ $`x^n\mathrm{exp}\left(-\frac{1}{2}A(z)x^2\right)`$
and thus, the arbitrary solution is focused to a Gaussian times a power of $`x`$. This gives a focal spot size:
$$\mathrm{\Delta }x=\frac{\sqrt{n}f}{E\delta _0}$$
(1.24)
Now this result may seem counterintuitive; we are starting with a broader initial distribution by the factor $`\sqrt{n}`$, and we might guess that this would produce a smaller focal point by an amount $`1/\sqrt{n}`$. The wave-function, however, is not smooth in $`x`$, i.e., the Hermite polynomial yields a distribution of transverse momentum, and the initial state has "ears", each of typical Gaussian width $`\delta _0`$, but displaced off the optical axis by $`\sqrt{n}\delta _0`$. These produce the $`\sqrt{n}`$ enhancement of the focal spot. Yet another way to see this is to note that one can make a classical off-axis centroid motion of the groundstate Gaussian by superimposing large $`n`$ states, and the Gaussian width will yield the minimal $`f/\delta _0E`$ result. Of course, the quantum state of interest to us will typically have a large value of $`n`$ determined by radiative relaxation. We consider this in the next section.
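The $`\sqrt{n}`$ growth of the transverse extent is a standard property of the SHO eigenfunctions; a small numerical sketch (dimensionless units) checks that the rms width of the $`n`$-th state scales as $`\sqrt{n+1/2}`$ Gaussian widths:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-12.0, 12.0, 24001)
for n in (0, 4, 16):
    c = np.zeros(n + 1)
    c[n] = 1.0
    prob = hermval(x, c) ** 2 * np.exp(-x**2)   # |H_n(x) exp(-x^2/2)|^2
    var = (x**2 * prob).sum() / prob.sum()      # <x^2> on the uniform grid
    print(n, np.sqrt(var) / np.sqrt(n + 0.5))   # ratio -> 1 for each n
```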
The actual linear acceleration phase is inconsequential to this result. The above discussion assumed a uniform drift in the longitudinal $`z`$-direction, i.e., constant energy $`E`$. If the particle is accelerating linearly, then $`E`$ becomes $`z`$-dependent, $`E(z)=(E_f-E_0)z/L+E_0`$. It is easily seen that the only effect on our solution is to replace $`L`$ by $`L\mathrm{ln}((E_f-E_0)/E_0)`$, where $`E_0`$ is the initial energy, $`E_f`$ the final energy, and $`E`$ in the above expressions is everywhere replaced by $`E_f`$. For the first NLC, we have $`E_f\approx 250`$ GeV $`>>E_0\approx 2`$ GeV. The linear acceleration phase is thus equivalent to free drift through an effective distance of $`L\mathrm{ln}(E_f/E_0)\approx 45`$ km where $`L=10`$ km. The amount by which a wave-packet of initial size of $`\approx 25`$ nm spreads throughout the NLC acceleration phase is about a factor of $`6`$. However, this spreading is irrelevant to computing the final diffractive limit as seen in eq.(1.19) where the free drift length $`L`$ has completely cancelled from the expression at the classical focal point, and only the initial quantity $`\delta _0`$ (together with the local quantities $`f`$ and $`E`$) controls the diffractive limit.<sup>1</sup><sup>1</sup>1We remark that the proper way to view the quantum spreading in the transverse phase-space is to use Wigner functions, which depend upon both $`x`$ and a quantum momentum $`p`$. The Wigner function isocontours deform in a manner that is conformal to the classical emittance envelope, so while the wave-functions spread in $`x`$ the Wigner envelopes actually shear in $`x`$ and $`p`$ and remain contained in the transverse phase-space.
Thus, the ultimate diffractive limit is controlled by the initial boundary conditions on the wave-function size, i.e. by $`\delta _0`$, and not by the intervening unitary lens system. What in general determines $`\delta _0`$? For the NLC the initial wave-function, as well as the initial classical distribution, is prepared in the "damping rings." Damping rings are essentially a system of magnets arranged as wigglers which induce synchrotron radiation and cool the classical beam bunches of electrons. They are designed to produce roughly a four-order-of-magnitude reduction in one of the transverse dimension phase space volumes, i.e., $`\mathrm{\Delta }x\mathrm{\Delta }p_x`$ (the transverse emittance). As the system cools classically, it is also relaxing quantum mechanically. This occurs because the particles in the wiggler chain experience a transverse SHO potential, and synchrotron radiation pushes highly excited wave-functions toward the Gaussian groundstate in this potential. However, there are also re-excitation transitions which eventually come into equilibrium, and a typical average SHO principal quantum number is established. While this is certainly an oversimplified view of the actual system, we will use it as a starting point to estimate $`\delta _0`$.
## 2 Magnetic Focusing and Damping
We now summarize the details of the motion of a transverse wave-packet in a magnetic field. This is discussed in detail in the classic work of Sokolov and Ternov. We will use the more transparent WKB approximation, expanding about the classical radius of motion. Hence, one should use caution in comparing solutions, e.g., principal quantum numbers refer to different things. For example, large $`n`$ in the usual framework corresponds to small $`n`$, but large classical radius $`j_z`$ presently.
Consider a particle moving in a planar orbit in a uniform magnetic field, aligned in the $`\widehat{z}`$ direction, $`\stackrel{}{B}=\widehat{z}B_0`$ in a cylindrical coordinate system $`(r,\varphi ,z)`$. The vector potential can be chosen as $`A_\varphi =rB_0/2`$ with $`A_r=0`$ and $`A_z=0`$. We examine the transverse motion of a relativistic electron in the plane $`z=0`$. For an ansatz of the form $`e^{-iEt/\hbar }\psi (r,\varphi )`$, the KG equation becomes:
$$\left[-E^2+m^2-\frac{\partial ^2}{\partial r^2}-\frac{1}{r}\frac{\partial }{\partial r}+\left(-\frac{i}{r}\frac{\partial }{\partial \varphi }+eA_\varphi \right)^2\right]\psi =0$$
(2.25)
where a possible momentum component $`p_z`$ has been set to zero. Consider a state with a large "pseudo-angular momentum," $`\ell `$, and scale out a factor of $`1/\sqrt{r}`$:
$$\psi =\frac{1}{\sqrt{r}}e^{i\ell \varphi }\widehat{\chi }(r,t).$$
(2.26)
($`\ell `$ is not the physical angular momentum because it is gauge dependent due to the presence of the vector potential; the physical angular momentum in the present case is $`2\ell `$, as we will see below). Hence:
$$\left[-E^2+m^2-\frac{\partial ^2}{\partial r^2}+\left(\frac{\ell }{r}+\frac{1}{2}erB_0\right)^2+\frac{1}{4r^2}\right]\widehat{\chi }=0$$
(2.27)
This now has the apparent form of a one-dimensional Schroedinger equation with an effective potential:
$$V(r)=\left[\left(\frac{\ell }{r}+\frac{1}{2}erB_0\right)^2+\frac{1}{4r^2}\right]$$
(2.28)
The potential has a minimum at:
$$r=R_0\equiv \left(\frac{\sqrt{4\ell ^2+1}}{eB_0}\right)^{1/2}\simeq \sqrt{\frac{2\ell }{eB_0}}$$
(2.29)
where the latter expression corresponds to $`\ell >>1`$. Consider henceforth $`\ell >>1`$. We consider small radial fluctuations around the large orbital radius $`R_0`$ as $`r=R_0+x`$ and expand:
$$\left[-E^2+m^2-\frac{\partial ^2}{\partial x^2}+V(R_0)+\frac{1}{2}x^2V^{\prime \prime }(R_0)\right]\widehat{\chi }=0$$
(2.30)
and:
$$V(R_0)=2eB_0\ell ;\qquad V^{\prime \prime }(R_0)=2e^2B_0^2$$
(2.31)
Thus the high orbital angular momentum Landau levels are approximate eigenstates of the SHO potential defined by $`\widehat{V}(x)=\frac{1}{2}x^2V^{\prime \prime }(R_0)`$, or $`K=e^2B_0^2/E`$. The states are labelled by $`(n,\ell )`$ where $`n`$ is a principal SHO quantum number; the energy eigenvalues of these levels are given by:
$$E^2=m^2+eB_0(2\ell +n+\frac{1}{2})$$
(2.32)
In the presence of the gauge interaction the physical angular momentum is $`L_z=-i\partial /\partial \varphi +eR_0A_\varphi (R_0)`$. Hence, the physical angular momentum is:
$$j_z=\ell +\frac{1}{2}eB_0R_0^2=2\ell $$
(2.33)
where we use the explicit solution for $`R_0`$ from eq.(2.29) in the latter expression. Therefore, to make the classical correspondence, we identify the angular momentum with that of an entering beam particle of momentum $`p_\varphi `$, to obtain $`R_0p_\varphi =j_z=2\ell `$. This yields consistency with the familiar expression for the classical orbital radius and the total energy:
$$R_0=\frac{p_\varphi }{eB_0};\qquad E^2=m^2+eB_0(j_z+n+\frac{1}{2})$$
(2.34)
Transitions that increase $`n`$, but decrease $`j_z`$ are allowed; hence synchrotron radiation can be excitatory as well as relaxational. The fact that the energy is degenerate, depending upon the combination $`j_z+n`$, is a consequence of the symmetry in the choice of the classical orbital center. (Note that the solution formed with the ansatz $`e^{i\ell \varphi }`$ for large $`\ell `$ is actually a solution of vanishing physical momentum; it is a zero-mode associated with the translational invariance of the center of the particle's orbit).
The groundstate in the transverse dimension is a Gaussian with $`A=|eB_0|`$, given by:
$$\delta _0=\frac{1}{\sqrt{Re(A)}}=(eB_0/\hbar c)^{-1/2}.$$
(2.35)
For a typical field strength of $`1`$ Tesla we obtain $`\delta _0\approx 25`$ nm. The "spring constant" is $`O(e^2)`$, hence we say that the dipole magnet is weakly focusing (for quadrupoles $`V^{\prime \prime }(R_0)\sim eB_0E/a`$, where $`a`$ defines a gradient, hence "strong focusing"). This description applies to wigglers, even though the dipole magnet field is alternating in $`z`$, if the magnitude of the $`B`$ field is roughly constant.
It has been known for a long time that an equilibrium between deexcitatory and excitatory transitions for a particle in a damping system (or synchrotron) will be established, and there will be an equilibrium value of $`n`$. This value is roughly estimated as follows. The typical energy of synchrotron radiated photons is:
$$E_\gamma \sim \frac{1}{R_0}\left(\frac{E}{m_e}\right)^3$$
(2.36)
A unit step in a quantum number $`n`$ or $`j_z`$ produces only a small energy change, $`eB_0/E\sim 1/R_0`$. The dipole approximation selection rules imply large allowed changes in $`j_z`$, but only unit steps in $`n`$:
$$\mathrm{\Delta }j_z\sim \frac{E}{R_0eB_0}\left(\frac{E}{m_e}\right)^3\sim \left(\frac{E}{m_e}\right)^3;\qquad \mathrm{\Delta }n=\pm 1$$
(2.37)
Bear in mind that we are treating $`n`$ as the principal quantum number in the WKB focusing channel defined by expanding about $`R_0`$, and dipole transitions involving the operator $`\stackrel{}{A}`$ will change $`n`$ by a unit (these can be excitatory). Then $`\mathrm{\Delta }j_z/R_0`$ is essentially the change in longitudinal electron momentum, imparted to the photon. In transitions, though $`R_0`$ changes, there is no sudden translation in the transverse position of the electron wave-function, only a transition in motion, i.e., the virtual center of the orbit changes.
Over a radiative energy loss time interval the number of emitted photons is:
$$n_\gamma \sim \frac{E}{E_\gamma }\sim \frac{m_e^3}{eB_0E}\sim R_0\left(\frac{m_e^3}{E^2}\right)$$
(2.38)
The principal quantum number $`n`$ undergoes a random walk by roughly $`\sqrt{n_\gamma }`$, hence the equilibrium $`\overline{n}`$ is of order $`\sqrt{n_\gamma }`$. Using $`E\approx 2`$ GeV, and $`B_0\approx 1`$ Tesla, whence $`R_0\approx 6.6`$ m, we find $`n_\gamma =1.12\times 10^6`$ and $`\overline{n}\approx 1.06\times 10^3`$.
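The quoted numbers follow directly from eq. (2.38); a minimal sketch converting $`eB_0`$ to GeV<sup>2</sup> via the magnetic length:

```python
import math

hbarc = 1.9733e-16                 # meters per GeV^-1
me, E, B = 0.511e-3, 2.0, 1.0      # electron mass (GeV), beam energy (GeV), field (T)

l_B = math.sqrt(1.0546e-34 / (1.602e-19 * B))   # magnetic length in meters
eB = (hbarc / l_B) ** 2                          # e*B0 in GeV^2

n_gamma = me**3 / (eB * E)                       # eq. (2.38)
print(f"n_gamma ~ {n_gamma:.3g}")                # ~1.1e6
print(f"n_bar   ~ {math.sqrt(n_gamma):.0f}")     # ~1.1e3
```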
Hence, our diffractive limit is now increased by $`\sqrt{\overline{n}}\approx 0.33\times 10^2`$, and we thus have a beam spot size $`\sqrt{\overline{n}}\times 0.06\approx 2.0`$ nm. Why is this result so close to the design goals of the NLC that are obtained by classical physics? Indeed, we believe that this result is a quantum derivation of the classical result! The quantum number $`n`$ scales as $`1/\hbar `$, while our diffractive limit scales as $`\mathrm{\Delta }x\propto \sqrt{\hbar }`$, hence the product $`\sqrt{n}\mathrm{\Delta }x`$ is independent of $`\hbar `$. This, moreover, assures us that the ultimate quantum limit is of order $`1/\sqrt{\overline{n}}`$ smaller than the minimal classical analysis. (We note that the Oide effect may be understood as a blowing up of $`\overline{n}`$ in intense final focus magnets, where large transverse energy photons are radiated).
A more detailed discussion of synchrotron radiation relaxation is beyond the scope of the present paper. Excellent treatments can be found in , , and the pioneering work of .
## 3 Quantum Particle in a Synchrotron
The solution to the Schroedinger equation for passage of a free particle through a lens, eq.(1.4), can be completely described by simple classical optical matrix methods. If one passes a classical ray moving in the $`\widehat{z}`$ direction through a lens system, the outgoing state of the transverse $`\widehat{x}`$ canonical variables may be written as:
$$\left(\begin{array}{c}x(\ell )\\ p(\ell )/E\end{array}\right)_{out}=\mathcal{M}\left(\begin{array}{c}x(-\delta z)\\ p(-\delta z)/E\end{array}\right)_{in}\text{where: }det\mathcal{M}=1.$$
(3.39)
The unimodular matrix $`\mathcal{M}`$ for a compound sequence of lens elements is the corresponding sequential product of the individual matrices of the elements. For example, a sequence of free propagation (distance $`L`$), followed by a defocusing lens (focal length $`f`$), followed by a space $`a`$, followed by a focusing lens (focal length $`f`$), followed by free propagation (distance $`\ell `$) yields the result:
$$\left(\begin{array}{c}x(\ell )\\ p(\ell )/E\end{array}\right)=\left(\begin{array}{cc}1+\frac{a}{f}-\frac{a\ell }{f^2}& (L+a+\ell )+\frac{aL}{f}-\frac{a\ell }{f}-\frac{aL\ell }{f^2}\\ -\frac{a}{f^2}& 1-\frac{a}{f}-\frac{aL}{f^2}\end{array}\right)\left(\begin{array}{c}x(-\delta z)\\ p(-\delta z)/E\end{array}\right)$$
(3.40)
The zero of the $`(11)`$ matrix element in $`\ell =F\equiv f+f^2/a`$ implies the system is net focusing with composite focal length $`F`$ (e.g., see ref. ).
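The compound matrix is simply the product of the standard drift and thin-lens matrices; a small numerical sketch (illustrative lengths) verifies unimodularity and the composite focal length:

```python
import numpy as np

drift = lambda d: np.array([[1.0, d], [0.0, 1.0]])
lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

L, f, a = 3.0, 1.0, 0.5
ell = f + f**2 / a                   # composite focal length F = 3.0
# rightmost element acts first: drift L, defocus -f, drift a, focus f, drift ell
M = drift(ell) @ lens(f) @ drift(a) @ lens(-f) @ drift(L)

print(np.linalg.det(M))              # 1.0: unimodular
print(M[0, 0])                       # ~0: net focusing at ell = f + f^2/a
```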
The effect of this particular lens system in quantum mechanics, e.g., on the Gaussian kernel $`A`$ as defined in eq.(1.6), can be easily derived from the Schroedinger equation:
$$A(\ell )=\frac{A_0(1-a/f-aL/f^2)+iEa/f^2}{1+a/f-a\ell /f^2+iA_0/E[L+a+\ell +aL/f-a\ell /f-aL\ell /f^2]}$$
(3.41)
The focal length, $`F`$, is where $`\mathcal{M}_{11}=0`$, and, at the focal length we obtain the width:
$$\delta (F)=\frac{f^2}{aE\delta _0}\qquad \text{where: }Re(A(0))=\frac{1}{\delta _0^2}$$
(3.42)
This result is the minimal diffractive quantum limit for the composite lens system, and it is again determined by the initial width of the quantum state.<sup>2</sup><sup>2</sup>2Here we might imagine taking $`a\rightarrow \mathrm{\infty }`$ holding $`f`$ fixed to cause $`\delta (F)\rightarrow 0`$; however, for $`a>>f`$ the longitudinal dimension ($`\mathrm{\Delta }z`$) of the focal point becomes small as $`f/a`$ owing to the $`aL/f`$ and $`a\ell L/f^2`$ terms in $`\mathcal{M}_{12}`$, and the finite longitudinal distribution of the beams becomes problematic; we have not looked in detail at optimization of this.
Of course, beyond $`\delta _0`$, there is actually no new information in the above formula for $`A`$ than is already present in the lens matrix for the classical ray optics. If the lens matrix is $`\mathcal{M}_{ij}`$, then we see by comparison with eq.(3.41) the general result for the Gaussian kernel:
$$\text{Theorem I: }A_{out}=\frac{\mathcal{M}_{22}A_{in}-i\mathcal{M}_{21}E}{\mathcal{M}_{11}+i\mathcal{M}_{12}A_{in}/E}$$
(3.43)
That is, if we write the out-amplitude as:
$$A_{out}=-iE\left(\frac{\mathcal{P}}{\mathcal{X}}\right)$$
(3.44)
then:
$$\left(\begin{array}{c}\mathcal{X}\\ \mathcal{P}\end{array}\right)=\left(\begin{array}{cc}\mathcal{M}_{11}& \mathcal{M}_{12}\\ \mathcal{M}_{21}& \mathcal{M}_{22}\end{array}\right)\left(\begin{array}{c}-iE\\ A_{in}\end{array}\right)$$
(3.45)
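Theorem I is a Möbius map in $`A`$, so composing element by element must agree with applying the full product matrix at once; a quick sketch (arbitrary units) checks this:

```python
import numpy as np

def A_out(M, A_in, E=1.0):
    # Theorem I: the Gaussian kernel transforms via the classical ray matrix
    return (M[1, 1] * A_in - 1j * M[1, 0] * E) / (M[0, 0] + 1j * M[0, 1] * A_in / E)

drift = lambda d: np.array([[1.0, d], [0.0, 1.0]])
lens = lambda f: np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

A0 = 4.0 + 0j
stepwise = A_out(lens(2.0), A_out(drift(3.0), A0))   # element by element
at_once = A_out(lens(2.0) @ drift(3.0), A0)          # composite matrix at once
print(stepwise, at_once)                              # identical
```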
From this result we can easily derive the quantum limit on beam size at the classical focal length. The classical focal length occurs where $`\mathcal{M}_{11}=0`$. At this point we have $`\mathcal{M}_{12}=-\mathcal{M}_{21}^{-1}`$. Hence we readily find:
$$\text{Theorem II: }ReA_f=\frac{E^2ReA_0}{\mathcal{M}_{12}^2[(ReA_0)^2+(ImA_0)^2]}=\frac{1}{(\mathrm{\Delta }x)^2}.$$
(3.46)
If $`A_0=1/\delta _0^2`$ is pure real, then:
$$\mathrm{\Delta }x=\frac{\mathcal{M}_{12}}{E\delta _0}\equiv \frac{\stackrel{~}{F}}{E\delta _0}$$
(3.47)
noting that $`\mathcal{M}_{12}\equiv \stackrel{~}{F}\sim f`$ is a length scale comparable to the focal length at the classical focal length, e.g., $`\stackrel{~}{F}=f^2/a`$ in our previous compound lens example (see previous footnote).
Now, if the magnet system is periodic, as in a synchrotron, we expect quantum states that are approximately periodic solutions in the matrix. Periodic solutions must be eigenstates of the matrix $`\mathcal{M}`$. Consider first the motion within a very thick lens, i.e., a continuous transverse SHO potential. For an infinitesimal displacement in the $`z`$-direction, the lens matrix is:
$$\mathcal{M}_{SHO}=\left(\begin{array}{cc}1& \delta z\\ -\frac{K\delta z}{E}& 1\end{array}\right)$$
(3.48)
which is unimodular to $`\mathcal{O}((\delta z)^2)`$. The eigenvalues of $`\mathcal{M}_{SHO}`$ are $`\lambda _\pm =1\pm i\delta z\sqrt{K/E}`$. The stable, quantum solutions in the lens are therefore the eigenvectors:
$$\lambda _\pm \left(\begin{array}{c}-iE\\ A_0\end{array}\right)=\left(\begin{array}{cc}\mathcal{M}_{11}& \mathcal{M}_{12}\\ \mathcal{M}_{21}& \mathcal{M}_{22}\end{array}\right)\left(\begin{array}{c}-iE\\ A_0\end{array}\right)$$
(3.49)
hence we find:
$$A_0=\pm \sqrt{KE}$$
(3.50)
Hence, the stable solution in the linear focusing channel is, indeed, the Gaussian groundstate solution in the SHO potential. We have simply recovered the usual Gaussian groundstate in this limit.
Consider now a "synchrotron," i.e., a periodic magnet lens lattice based upon the above lens configuration. We assume an infinite series of alternating dipoles with spacing $`a`$ and focal lengths $`\pm f`$. Replacing $`L=a-\ell `$ in the matrix elements $`\mathcal{M}_{ij}`$ of eq.(3.40) gives the lens matrix for the synchrotron:
$$\mathcal{M}=\left(\begin{array}{cc}1+\frac{a}{f}-\frac{a\ell }{f^2}& 2a+\frac{a^2}{f}-2\frac{a\ell }{f}-\frac{a^2\ell }{f^2}+\frac{a\ell ^2}{f^2}\\ -\frac{a}{f^2}& 1-\frac{a}{f}-\frac{a^2}{f^2}+\frac{a\ell }{f^2}\end{array}\right)$$
(3.51)
The condition that we have a periodic solution is:
$$A_{\ell }=\frac{\mathcal{M}_{22}A_{\ell }-i\mathcal{M}_{21}E}{\mathcal{M}_{11}+i\mathcal{M}_{12}A_{\ell }/E}$$
(3.52)
Using $`det\mathcal{M}=1`$ we find:
$$A_{\ell }=\frac{-iE}{2\mathcal{M}_{12}}\left[\mathcal{M}_{22}-\mathcal{M}_{11}\pm \left((\mathcal{M}_{22}+\mathcal{M}_{11})^2-4\right)^{1/2}\right]$$
(3.53)
Stable quantum solutions (solutions that are normalizable Gaussians for all $`\ell `$) therefore require:
$$-1\le \frac{1}{2}Tr(\mathcal{M})\le 1$$
(3.54)
This is, of course, the familiar stability condition for the classical motion. Note that $`Tr(\mathcal{M})=2-a^2/f^2`$, which is $`\ell `$ independent, thus when the condition is met for particular choices of $`f`$ and $`a`$ it holds everywhere. The stability condition is the usual one, $`f\ge a/2`$.
The solution for $`A(\ell )`$ is:
$$A(\ell )=\frac{E}{[(2f+a)(1-\ell /f)+\ell ^2/f]}\left[\left(1-\frac{a^2}{4f^2}\right)^{1/2}+i\left(1+\frac{a}{2f}-\frac{\ell }{f}\right)\right]$$
(3.55)
where $`f\ge a/2`$ and $`0\le \ell \le a`$.
Consider the special case of a system in which $`f=a`$. We see that the minimum Gaussian width occurs at $`\ell =a`$, given by:
$$min(1/ReA(\ell ))=\frac{2f}{\sqrt{3}E}\equiv (\mathrm{\Delta }x)^2$$
(3.56)
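These results can be checked by building the one-period matrix explicitly and solving for the self-replicating kernel; a sketch for the case $`f=a`$ (arbitrary units):

```python
import numpy as np

E, a = 1.0, 1.0
f = a                                   # special case f = a

drift = lambda d: np.array([[1.0, d], [0.0, 1.0]])
lens = lambda fl: np.array([[1.0, 0.0], [-1.0 / fl, 1.0]])

def periodic_A(ell):
    # one full period, observed a distance ell past the focusing lens
    M = drift(ell) @ lens(f) @ drift(a) @ lens(-f) @ drift(a - ell)
    T = M[1, 1] - M[0, 0]
    disc = np.sqrt(complex((M[1, 1] + M[0, 0]) ** 2 - 4.0))
    roots = [-1j * E / (2 * M[0, 1]) * (T + s * disc) for s in (1, -1)]
    return max(roots, key=lambda A: A.real)   # normalizable branch, eq. (3.53)

widths2 = [1.0 / periodic_A(ell).real for ell in np.linspace(0.01, a, 400)]
print(min(widths2), 2 * f / (np.sqrt(3) * E))  # both ~ 2f/(sqrt(3)E), eq. (3.56)
```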
This implies that the minimum achievable beam spot size in a synchrotron is $`\sim \sqrt{f/E}`$. If a focusing magnet of focal length $`f^{}`$ is inserted into the synchrotron magnet lattice, then we obtain the minimum spot size $`f^{}/E\delta _0`$, i.e., $`\mathrm{\Delta }x\sim f^{}/\sqrt{Ef}`$. There is no initial parameter $`\delta _0`$ since we have assumed that synchrotron radiation relaxes the quantum state into the stable, periodic solution. Here we see a potential advantage of a linear collider over a synchrotron, in that the linear collider has a much larger $`\delta _0\sim \sqrt{f/E_0}`$ prepared in the low energy damping ring, which makes the quantum diffractive limit smaller, $`f/E\delta _0`$, while in the synchrotron $`\delta _0\sim \sqrt{f/E}`$, where $`E`$ is the larger beam energy, thus giving a larger diffractive limit $`\sim \sqrt{f/E}`$.
## 4 Conclusion
In conclusion, we estimate the minimal quantum beam spot size achievable in a linear collider to be given by:
$$\mathrm{\Delta }x\approx \frac{\hbar cf}{E\delta _0}$$
(4.57)
where $`f`$ is the final focal length, $`E`$ the beam energy, and $`\delta _0`$ is the initial transverse size of the wave-functions prior to acceleration. This may be viewed as a direct transcription of the Heisenberg uncertainty principle. $`\delta _0`$ is prepared in the synchronous damping rings, typically wigglers, and $`\delta _0\sim 1/\sqrt{eB/\hbar c}`$ where $`B`$ is the magnetic field strength. We have for an NLC-class machine, $`B\approx 1`$ Tesla, $`f\approx 2`$ m, $`E\approx 250`$ GeV, hence $`\delta _0\approx 25`$ nm, and $`\mathrm{\Delta }x\sim \mathcal{O}(0.06)`$ nm.
Radiation damping implies that the initial state wave-function is not a groundstate, and has an average equilibrium principal quantum number $`\overline{n}`$. Then our result is modified:
$$\mathrm{\Delta }x=\sqrt{\overline{n}}\frac{\hbar cf}{E\delta _0}$$
(4.58)
$`\overline{n}`$ is estimated to be
$$\overline{n}\sim \left(\frac{m_e^3}{eB_0E}\right)^{1/2}$$
(4.59)
or, for the above parameters, $`\overline{n}\approx 0.11\times 10^4`$. Eq.(4.59) is essentially classical, and yields $`\mathrm{\Delta }x\approx 2`$ nm, roughly consistent with the classical vertical final focus beam spot size of the NLC, $`\sim 5`$ nm. A more precise analysis of this latter effect is, however, certainly required. This may be a useful way to approach other phenomena, such as the Oide effect, in which large fluctuations in $`\overline{n}`$ in strong final focus magnets can occur, broadening the beam spot size.
We have examined the quantum solution in a simple FODO synchrotron model. In a synchrotron information about the initial $`\delta _0`$ is lost, and the minimal transverse beam spot size is:
$$\mathrm{\Delta }x=\sqrt{\frac{\hbar cf}{E}}$$
(4.60)
which is $`\mathcal{O}(1)`$ nm for most high energy synchrotrons, e.g., LEP and Tevatron, in operation at present. Again, a factor of $`\sqrt{\overline{n}}`$ would yield the classical result. Presumably proton synchrotrons are far from equilibrium, with $`n>>\overline{n}`$.
A tantalizing question is: can quantum diffractive effects be observed? More generally, our discussion has been motivated by the belief that quantum optics may be the preferred way to analyze futuristic machines. A more general formalism, more symmetrical in $`p_{\perp }`$ and $`x_{\perp }`$, for the study of the quantum phase space, perhaps based upon Wigner's formulation of quantum mechanics, is desired.
Acknowledgements
We wish to thank W. Bardeen, D. Burke, J. D. Jackson, R. Noble, C. Quigg, A. Tollestrup, and especially P. Chen, D. Finley, L. Michelotti, and R. Raja for useful discussions.
no-problem/0002/astro-ph0002039.html | ar5iv | text | # Keck Speckle Imaging of the White Dwarf G29-38: No Brown Dwarf Companion Detected
## 1 Introduction
Zuckerman and Becklin (1987) discovered that the white dwarf Giclas 29-38 has a large infrared excess and proposed that the excess could be due to a brown dwarf companion. This suggestion inspired discussion of brown dwarfs as white dwarf companions (Stringfellow, Black & Bodenheimer 1990), oscillating brown dwarfs (Marley, Lunine & Hubbard 1990), and other possible cool companions that could explain the excess (Greenstein 1988). Later photometry by Tokunaga et al. (1990) and Telesco, Joy & Sisk (1990) suggested that the 10 micron excess greatly exceeds that expected from a brown dwarf companion, leading to the interpretation that the mid-infrared excess originates from a cloud of circumstellar dust. However, new data from ISOCAM (Chary, Zuckerman & Becklin 1998) show that the 7 and 15 micron excesses are in agreement with a 1000 K blackbody fit to the excess at other wavelengths. The source of the infrared excess of G29-38 remains uncertain.
Direct searches for a companion have produced mixed results. Tokunaga et al. (1988) imaged G29-38 at H and K bands and limited the extent of the source to a diameter of 400 milliarcseconds (mas) or 5.64 AU. Tokunaga et al. (1988) and Tokunaga et al. (1990) took near-infrared spectra of the object and found no evidence for absorption features due to a brown dwarf. Haas and Leinert (1990) took slit scans of G29-38 in 1988, and found a North-South extension at K-band that was well fit by a binary model with a flux ratio of 1:1 and a separation of $`230\pm 40`$ mas ($`3.24\pm 0.56`$ AU). However, when Haas and Lienert repeated their observations the following year under better seeing conditions, the object appeared unextended. Shelton, Becklin and Zuckerman (1998) took slit scans of G29-38 in the J and K bands at the Lick 3-meter telescope in October of 1989 to look for the centroid shift that would arise if, as the photometry suggests, the hypothetical cool companion is brighter in K and the white dwarf is brighter in J. They did not see this effect. They place an upper limit of $`40`$ mas (0.56 AU) on the North-South binary separation, and an upper limit of $`120`$ mas (1.69 AU) on the East-West separation.
Attempts to find the radial velocity signature of a companion to G29-38 have also proven frustrating. Barnbaum & Zuckerman (1992) combined their own spectroscopy with radial velocity data by Graham et al. (1990), Graham, Reid, & Rich (1991, personal communication reported in Graham et al. 1990), Liebert & Saffer (1989, personal communication reported in Graham et al. 1990) and Liebert, Saffer, & Pilachowski (1989), and reported a probable radial velocity variation with a period of 11.2 months and an amplitude of $`10\mathrm{km}\mathrm{s}^{-1}`$. Kleinman et al. (1994), however, argued based on extensive astroseismological observations that the radial velocity variation due to a binary companion must be less than $`\pm 0.65\mathrm{km}\mathrm{s}^{-1}`$ assuming a $`\sim `$1 year period.
Hoping to find another clue to the mystery of the infrared excess, we imaged G29-38 at K band on the 10-m W. M. Keck telescope using speckle interferometry to search for a resolved companion at the diffraction limit.
## 2 Observations
We imaged G29-38 at K band with NIRC (the Near-Infrared Camera; Matthews & Soifer 1994) on the W. M. Keck telescope on December 15, 1997. The seeing was extraordinary; we used 0.5 second integrations and saw about 5 speckles and a diffraction-limited core. We took 12 sets of 100 frames of G29-38. Among observations of G29-38 we interspersed observations of two nearby, presumably unresolved calibrator stars, S23291+0515 and S23292+0521, which we observed in the same manner as G29-38, for a total of 6 sets of calibrator frames. We used a version of the speckle reduction software described in Koresko et al. (1991) adapted for use with NIRC. We chose a $`128\times 128`$ pixel subframe centered on the object, and constructed $`128\times 128`$ pixel sky frames from the corners of the $`256\times 256`$ pixel NIRC images. From each set of object and sky frames we computed a power spectrum, and a bi-spectrum, and re-constructed Fourier phases and amplitudes. We divided the Fourier components from each target set by the Fourier components from a few different calibrator sets to correct for the telescope-aperture transfer function, and in this way assembled 18 calibrated images and 18 calibrated power spectra.
Figure 1 shows the mean of the images, compared to a simulated image of a point sourceโthe Fourier transform of the Gaussian$`\times `$Hanning apodizing function used to synthesize the speckle images. The plate scale is 20.57 mas per pixel. Figure 2 shows the azimuthal average of the arithmetic mean of the calibrated power spectra, where we normalized each power spectrum by dividing it by the geometric mean of the first 15 data points after the zero-frequency component. The error bars represent the 1-$`\sigma `$ variations among the 18 power spectra. The $`\lambda /D`$ diffraction limit of Keck at K-band is 55 mas. The noise increases at high frequencies because the power in the images decreases near the diffraction limit. The low frequency spike probably occurs because of seeing noise, the change in seeing between observations of G29-38 and observations of the calibrators. Because the final image closely resembles a point source and the power spectrum is consistent with a constant, the power spectrum of a $`\delta `$-function, we conclude that we did not resolve G29-38.
## 3 Discussion
The K-band flux of G29-38 is $`5.46\pm 0.15`$ mJy; 2.05 mJy of this is in excess of Greenstein's (1988) white dwarf model (Tokunaga, Becklin & Zuckerman 1990). We computed the power spectrum of a binary system consisting of a Greenstein white dwarf and a point-like companion which supplies all the excess flux. The only free parameter for this binary model is the angular separation of the components. We fit the model to the observed power spectrum, and derive a best fit binary separation of 20 mas. The maximum deviation of the power spectrum from a straight line, however, is consistent with typical deviations due to time variations of the atmosphere-telescope point-spread function. In figure 2, we compare the 20 mas model with the observed power spectrum and a model with the same flux ratio but a 30 mas separation. The latter model is marginally inconsistent with our observations, so we report 30 mas as an upper limit to the binary separation.
At G29-38's distance of 14.1 pc (Tokunaga et al. 1990), 30 mas corresponds to a transverse separation of 0.42 AU. Assuming that G29-38 is 0.61 $`M_{\odot }`$ (Bergeron et al. 1995), a 0.06 $`M_{\odot }`$ brown dwarf orbiting the star at 0.42 AU would have a period of about 0.33 years and would create a reflex motion in G29-38 that would have been detectable to Kleinman et al. (1994) if the orbit were inclined more than 10 degrees from face-on. The statistical likelihood of an inclination $`\le 10`$ degrees is 1.5%. Closer orbits would be easier to detect from reflex motion.
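The quoted period and inclination probability follow from Kepler's third law and from assuming random orbit orientations; a quick sketch:

```python
import math

M_total = 0.61 + 0.06            # white dwarf + brown dwarf, solar masses
a = 0.42                         # AU, separation at the resolution limit

P = math.sqrt(a**3 / M_total)                 # years (Kepler III, solar units)
p_face_on = 1.0 - math.cos(math.radians(10))  # P(inclination <= 10 deg)

print(f"P = {P:.2f} yr")                      # ~0.33 yr
print(f"P(i <= 10 deg) = {p_face_on:.3f}")    # ~0.015
```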
Perhaps a brown dwarf orbits G29-38 with a long period that would be hard to identify in reflex motion and the brown dwarf happened to pass in front of the star or behind it when we observed it on December 15, 1997. For instance, Kleinman et al. (1994) saw a long-term trend in their radial velocity data which could be interpreted as a companion with an $`\sim 8`$ year period causing radial velocity variations on the order of 0.8 km/s. Such a companion would have a semimajor axis of $`\sim 3.4`$ AU. If the orbit had a semi-major axis $`a`$, and were edge-on, the fraction of the time the brown dwarf would spend in the region where we couldn't resolve it is $`\frac{2}{\pi }\mathrm{sin}^{-1}\frac{0.42AU}{a}`$; for $`a=3.4`$ AU, there is a $`<8`$% chance that the brown dwarf would have been hidden from us. Since Shelton et al. (1998) also missed the hypothetical edge-on brown dwarf in 1989 as it passed close to the star, we find this scenario unlikely.
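Evaluating that hidden-time fraction for the quoted semimajor axis:

```python
import math

a = 3.4      # AU, semimajor axis of the hypothetical edge-on orbit
r = 0.42     # AU, projected separation below which it is unresolved

frac = (2.0 / math.pi) * math.asin(r / a)
print(f"hidden fraction ~ {100 * frac:.1f}%")   # ~7.9%, i.e. < 8%
```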
A companion in an eccentric orbit is easier to detect from reflex motion than a companion in a circular orbit with the same semi-major axis. Therefore such a companion would have to be farther away from the star on average for Kleinman et al. (1994) to have missed it, making it even more unlikely that it would have been hidden from us, Haas & Lienert (1990), and Shelton et al. (1998). A companion in an eccentric, face-on orbit would spend relatively little time close to the star, and probably would not have been missed by both us and Shelton et al. (1998).
The infrared excess may represent thermal radiation from a cloud of dust rather than a cool companion (Zuckerman & Becklin 1987). We can place no constraints on the concentration or geometry of such a cloud. Dust radiating thermally at 1–15 microns heated by radiation from the white dwarf alone would be far too close to the star ($`<10^{-3}`$ AU) for us to resolve.
## 4 Conclusions
We conclude that the infrared excess of G29-38 is not due to a single orbiting companion. If there were a single companion producing the excess, it would have to orbit almost face-on and closer than 0.4 AU; or it could orbit roughly edge on, with a period of several years, in such a way that it happened to appear at a minimum angular separation from the star in December, 1997 when we observed it and in the fall of 1989 when Haas & Lienert (1990) and Shelton et al. (1998) observed it. Either case is highly improbable. This result supports the hypothesis that source of the near-infrared excess is not a cool companion but a dust cloud (Zuckerman & Becklin 1987; Wickramasinghe et al. 1987; Graham et al. 1990; Koester et al. 1997).
We thank Eugene Chiang, Chris Clemens, Peter Goldreich, and Ben Zuckerman for inspiration and helpful discussions. The observations reported here were obtained at the W. M. Keck Observatory, which is operated by the California Association for Research in Astronomy, a scientific partnership among California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. It was made possible by the generous financial support of the W. M. Keck Foundation. |
no-problem/0002/nlin0002001.html | ar5iv | text | # Discreteness effects on soliton dynamics: a simple experiment.
## I A brief tutorial on collective diffusion.
Before describing the apparatus and experimental results, let us recall some basic ideas on the diffusion of particles over a periodic potential under the effect of an applied force. The motion of a single particle can be easily analyzed if one considers only the low temperature situation where the thermal fluctuations can be neglected. In the absence of a driving force, the particle is trapped into a potential well. Applying a driving force is equivalent to tilting the potential and, above a critical angle $`\alpha _0`$, the minimum disappears and the particle starts to slide on the washboard potential .
Instead of a single particle let us now consider a chain of particles coupled by harmonic springs with an equilibrium length equal to the distance between the potential minima: this state is called a commensurate state. Its ground state is reached when all the particles are in the potential minima (fig. 1.a) and the situation is comparable to the case of a single particle: the motion induced by an external force will only start when the minima of the total potential (including the force term) have disappeared. If a defect is introduced in the chain, the situation becomes very interesting and is equivalent to creating a dislocation in a crystal. Consider for instance the case of a missing particle obtained by moving one half of the chain by one unit, i.e. a kink in the particle positions. In the vicinity of the defect, the competition between the elastic energy of the springs and the substrate potential energy displaces the particles with respect to the minima of the potential (fig. 1.b). As a result the particles next to the defect are easier to move with an external force than a particle sitting at the bottom of the well. For a sinusoidal potential of period $`a`$, $`V(x)=V_0[1-\mathrm{cos}(2\pi x/a)]`$, when the coupling between the particles is strong, the position of the n<sup>th</sup> particle is $`x_n=a[n+\frac{2}{\pi }\mathrm{tan}^{-1}\mathrm{exp}(n/\ell )]`$, with $`\ell =\frac{a}{2\pi }\sqrt{k/V_0}`$ where $`k`$ is the spring elastic constant. No analytical solution for the structure of the defect is known in the case of weak coupling.
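The strong-coupling kink profile above is easy to evaluate; a short sketch (illustrative values of $`k`$ and $`V_0`$):

```python
import numpy as np

a, k, V0 = 1.0, 10.0, 1.0
ell = a / (2 * np.pi) * np.sqrt(k / V0)   # kink width in units of the spacing

n = np.arange(-10, 11)
x = a * (n + (2 / np.pi) * np.arctan(np.exp(n / ell)))
print(np.round(x - a * n, 3))  # displacement rises from 0 to a across the kink
```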
When the coupling between the particles is very strong, the defect is extended. The displacements of the particles vary progressively from zero to one lattice spacing across the defect. Consequently, there are particles at any level of the substrate potential, including on the maximum. This is the situation that is realized in the chain of strongly coupled pendula described by Scott . In this quasi-continuum situation, it is easy to understand why the defect (or soliton in the terminology of nonlinear science) can move freely. When it is translated some particles have to climb over the substrate potential barriers but simultaneously others move downward in the potential and the overall translation does not require any energy. In a continuum model, the system is invariant by any translation and the free motion of the soliton is simply a manifestation of this translational symmetry (Goldstone mode).
In the weakly coupled situation that we consider here, the situation is different. The defect is highly localized and even though some particles are slightly displaced from the potential minima as in Fig. 1(c), the springs are not strong enough to maintain particles at the top of the substrate potential barriers. Now, in order to translate the defect, one has to move particles up on the substrate potential: there is a barrier to the free translation of the defect, which is well known in dislocation theory as the Peierls-Nabarro (PN) barrier. The weak coupling case appears therefore as a natural intermediate case between the case of individual particles which must overcome the full potential barrier of the substrate potential and the continuum case where the soliton is completely free to move. In this intermediate case, the defect moves as a collective excitation over an effective potential, the PN potential, which has the period of the lattice and an amplitude which is much lower than the individual potential barrier of the substrate. This analysis in terms of a collective object explains why the critical stress for the plastic deformation of a crystal is several orders of magnitude lower than the stress that would translate a full atomic plane above another .
The case of atoms adsorbed over an atomic surface is even more interesting because usually the equilibrium distance $`b`$ that the atoms would select if they were free is not commensurate with the period $`a`$ of the crystalline substrate. As a result the interaction forces compete with the substrate potential to determine the particle positions. The atomic layer minimizes the energy by letting the particles drop near the bottom of the substrate potential wells almost everywhere and compensating for the mismatch between $`a`$ and $`b`$ by creating local discommensurations which are very similar to the isolated defect described above for the commensurate case. The only difference is that, depending on the commensurability ratio, there is a hierarchy of defects with different shapes and different barriers. In the case of an irrational ratio, this hierarchy is complete and some defects have a vanishingly small PN barrier: in the discrete lattice (provided the coupling is not too weak ), a vanishingly small external force can cause mass transport in the system. The application of an external driving force to an interacting chain of atoms exhibits this hierarchy of depinning transitions . For very low force, no motion can be detected. Then, the geometrical kinks (the discommensurations due to the concentration of particles related to the number of minima of the potential) start to move whereas the atoms stay static. For larger external forces, additional defects (kink-antikinks pairs) are created giving rise to an increase of the mobility. Finally, for high enough forces, all atoms are moving with the mobility of single Brownian particles.
## II Experiment
Let us now describe the experimental apparatus sketched in Figs. (2). The system under study is a chain of steel cylinders, each one 60 mm in length and 8 mm in diameter. The cylinders sit on a washboard potential cut in a block of plastic (approximate sine-shape potential). The height of the valleys is 10 mm and the lattice spacing is 20 mm. The cylinders are coupled one to an other with an elastic string; the first cylinder is fixed whereas the other ones are free to move. In the example shown in Figs. (2), a 3 mm slot was made along the center of the plastic support (represented by the two dotted lines in Fig. (2b)) in order to let the elastic move freely.
The concentration of cylinders, which determines the presence and structure of the discommensurations, is defined as $`\theta =M/N`$ where $`M`$ is the number of cylinders and $`N`$ the number of lattice spacings. The defects shown in Fig. 1.b and 1.c correspond to $`\theta =15/16`$ while figure 2 shows schematically the case $`\theta =2/3`$. In the experiment the length $`b`$ of the elastic strings is adjusted for each concentration by imposing the condition $`\theta =a/b`$ which means that, in the absence of the substrate potential the cylinders would be equally spaced in such a way that $`M`$ cylinders would cover $`N`$ lattice spacings, achieving the desired concentration.
The apparatus can be used to test the static and dynamical properties of this model system. First one can measure the depinning force which is required to move a lattice with a given concentration above the substrate. This is done by progressively inclining the system, i.e. increasing very slowly the angle $`\alpha `$ and determining the critical angle $`\alpha _c`$ above which the initial distribution of cylinders is unstable, i.e. above which at least one cylinder begins to slide. The results of this experiment are shown on Fig 3. For $`\theta =1/q`$ (with $`q`$=1,2,..), the system has a trivial ground state with one cylinder at the bottom of the substrate potential wells every $`q`$ wells. In these cases, all cylinders start to move simultaneously. As discussed above these commensurate cases should be the hardest to depin and this is confirmed by the experiment. Moreover the behavior should not depend on the number of empty wells that might separate two cylinders, i.e. we expect the same critical angle for $`q=1`$ or $`q=2`$. This is confirmed by the results shown on Fig 3. When $`\theta =p/q`$ is a rational number with $`q>p`$, $`q\ne 1`$, such as $`\theta =2/3`$ shown in Figs. (2), the ground state involves defects and Fig 3 shows that their cooperative motion (cases $`\theta =2/3`$, $`\theta =3/4`$) occurs for lower angles than the individual motion. This illustrates therefore the depinning hierarchy, the Peierls-Nabarro barrier being lower than the substrate barrier. Figure 3 also shows that the PN barrier depends on the commensurability ratio which governs the structure of the kink-defects; higher order rational numbers result in a lower PN barrier: $`\alpha _c(2/3)>\alpha _c(3/4)\Rightarrow E_{\text{PN}}(2/3)>E_{\text{PN}}(3/4)`$. With a system as small as the one we are using we cannot study other rationals such as 3/5, 5/8, 8/13 that should lead to lower barriers. A truly incommensurate case, leading to a vanishing PN barrier, corresponds to an irrational ratio and therefore it cannot be obtained in an experiment since $`M`$ and $`N`$ are necessarily integers. This case can however be approached by rational numbers with numerators and denominators chosen in a Fibonacci sequence, but it would require a model much longer than the one we have built.
The second class of experiments that can be performed tests the dynamical properties of the system by measuring the mobility of the defect as a function of the applied force. This is done by artificially holding the defect above the critical angle $`\alpha _c`$ while the potential is tilted, and then letting it go. When the constraint is released, the defect slides on the washboard potential. Using a high speed CCD camera to record the fast motion, we measure the time $`\mathrm{\Delta }t`$ for the propagation of the defect over $`n`$ lattice-spacings.
Since the velocity is $`v=na/\mathrm{\Delta }t`$ and the external force is due to gravity, $`F=mg\mathrm{sin}(\alpha )`$, the mobility is by definition
$$B(\alpha )=v/F=\frac{na/\mathrm{\Delta }t}{mg\mathrm{sin}(\alpha )}.$$
(1)
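For concreteness, the mobility extracted from one run would be computed as follows (all numbers here are hypothetical placeholders, not measured values):

```python
import math

n, a = 5, 0.020        # lattice spacings traversed; spacing in meters
dt = 0.8               # s, propagation time from the camera (hypothetical)
m = 0.030              # kg, cylinder mass (hypothetical)
alpha = math.radians(12.0)

v = n * a / dt
B = v / (m * 9.81 * math.sin(alpha))   # mobility, eq. (1)
print(B)                                # in m s^-1 N^-1
```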
The results, obtained for the concentration $`\theta =2/3`$ sketched in Fig. 2, are plotted in Fig. 4: they clearly show a plateau corresponding to the kink-running state, obtained when only the defect moves, leading to the first contribution to mass transport. The final transition to the sliding state, where all cylinders slide on the washboard potential, is reached for higher external forces.
## III Conclusion.
In this brief report, we have presented a simple teaching experiment stressing the kink-concept in a discrete system. One is able to illustrate different theoretical ideas in a simple way using this experiment. The Peierls-Nabarro potential, usually presented in the context of dislocation theory, and its role are not only clearly emphasized but one shows that it is a function of the concentration, i.e. for atoms adsorbed on a crystal, it varies with the coverage. Moreover, this experiment explains the recently developed idea about two-dimensional diffusion of atoms. One easily detects a hierarchy of depinnings: first the defects (kinks or antikinks) are moving, then for higher forces, we have the individual motion. The present apparatus is too short to illustrate experimentally the existence of a hysteresis phenomenon in the force driving the diffusion: the chain starts to slide for a force $`F_1`$ but stops only when the force has been lowered below $`F_2<F_1`$ because, once the motion has been initiated, the kinetic energy allows the particles, or the defect, to overcome a small potential barrier. It would be very interesting to build a much longer chain, so that few defects could coexist and of course interact. The critical angle of the different concentration regions would also be different.
This experiment has however some important differences with the physical problem of atomic diffusion. The elastic strings apply a force to the cylinders only when they are extended (not in compression) and the unavoidable solid friction has no simple equivalent at the microscopic level. It is therefore important not to overemphasize these results. Nevertheless, as recent very nice experiments in Josephson junction arrays have confirmed discreteness effects on soliton-like structures (see review by Peyrard in Ref. ), we think that the present experiment, conceptually and materially more appropriate for teaching purpose, could be an excellent tool to present the soliton concept in the framework of discreteness to non-specialists.
Fig. 1
Fig. 2
Fig. 3
Fig. 4 |
no-problem/0002/astro-ph0002393.html | ar5iv | text | # 1 Young radio-loud AGN
## 1 Young radio-loud AGN
Although radio-loud Active Galactic Nuclei (AGN) have been studied for several decades, still not much is known about their birth and subsequent evolution. The recent identification of a class of very young radio sources can be considered as a major breakthrough in this respect, since it has opened many unique opportunities for radio source evolution studies.
Unfortunately, the nomenclature and use of acronyms in this field of research is rather confusing. This is mainly caused by the different ways in which young radio sources are selected. Selection of young sources is made in two ways, the first based on their broadband radio spectra, and the second based on their compact morphology. A convex shaped spectrum, peaking at about 1 GHz distinguishes young radio sources from other classes of compact radio sources. In this case they are called Gigahertz Peaked Spectrum (GPS) radio sources (eg. OโDea etal. 1991, OโDea 1998). Similar objects, which are typically an order of magnitude larger in size, have their spectral turnovers shifted to the $`10100`$ MHz regime, causing them to be dominated at cm wavelengths by the optically thin parts of their spectra. These are called Compact Steep Spectrum (CSS) radio sources to distinguish them from the general population of extended steep spectrum sources (eg. Fanti et al. 1991).
On the other hand, young radio sources are found in multi-frequency VLBI surveys, in which they can be recognised by compact jet/lobe-like structures on both sides of their central core. They are called Compact Symmetric Objects (CSO, Wilkinson et al. 1994). Their double sided structures clearly distinguish them from the large majority of compact sources showing one-sided core-jet morphologies. This implies that the luminosities of CSO are unlikely to be substantially enhanced by Doppler boosting. Larger versions of CSOs are subsequently called Medium Symmetric Objects (MSO) and Large Symmetric Objects (LSO).
The overlap between the classes of CSO and GPS galaxies is large and we believe that they can be considered to be identical objects. However, note that a substantial fraction of GPS sources are optically identified with high redshift quasars, which in general show core-jet structures (Stanghellini et al. 1997). The relationship between GPS quasars and GPS galaxies/CSO is not clear and under debate (Snellen et al. 1999). We therefore believe it is wise to restrict evolution studies to GPS galaxies and CSOs.
### Evidence for youth
Although it was always speculated that GPS sources were young objects, only recently has strong evidence been found to support this hypothesis. Monitoring several GPS sources over a decade or more using VLBI, allowed Owsianik & Conway (1998) and Owsianik, Conway & Polatidis (1998) to measure the hotspot advance speeds of several prototype GPS sources to be $`0.1h^1c`$. These imply dynamical ages of typically $`10^{23}`$ years.
Additional proof for youth comes from analysis of the overall radio spectra of the somewhat larger CSS sources. Murgia et al (1999) show that their spectra can be fitted with synchrotron aging models, implying ages of typically $`10^{35}`$ years.
The work of these authors shows that GPS/CSO sources are very young and most likely the progenitors of large, extended radio sources. This makes them key objects for radio source evolution studies.
### Tools for radio source evolution studies
Several authors have used number count statistics and linear size distributions to constrain the luminosity evolution of radio sources (Fanti et al. 1995; Readhead et al. 1996, OโDea & Baum 1997). All these studies find an excess of young objects in relation to the number of old, extended radio sources. This over-abundance of GPS and CSS sources has generally been explained by assuming that a radio source significantly decreases in luminosity over its lifetime. In this way, sources are more likely to contribute to flux density limited samples at young than at old age, causing the apparent excess.
However, in addition to their over-abundance, GPS galaxies are found to be significantly more biased towards high redshift than large extended radio galaxies (Snellen & Schilizzi, 2000). This is puzzling since classes of sources representing similar objects at different stages of their evolution are expected to have similar birth functions and redshift distributions. Furthermore, it suggests that the interpretation of their number count statistics, which are averaged over a large redshift range, is not so straightforward. We have postulated a simple evolution scenario which can resolve these puzzles. We argue that the luminosity evolution of a radio-loud AGN during its first $`10^5`$ years is qualitatively very different from that during the rest of its lifetime. This may be caused by a turnover in the density profile of the interstellar/intergalactic medium at the core-radius of the host galaxy, resulting in an increase in luminosity for young, and a decrease in luminosity for old radio-loud AGN with time. Such a luminosity evolution results in a flatter collective luminosity function for the young objects, causing their bias towards higher redshifts, and their over-abundance at bright flux density levels (Snellen et al. 2000).
An alternative explanation is that GPS sources are indeed young AGN, but mainly short-lived objects, which will never evolve into extended radio sources (Readhead et al. 1994). In that case, the two populations are not directly connected, and no similar cosmological evolution or redshift distribution is necessary.
## 2 Space-VLBI observations of GPS sources
In general, the angular resolution of VLBI observations at a certain observing frequency is limited by the size of the earth. The combination of ground VLBI stations with the Japanese satellite HALCA (part of the VLBI Space Observatory Programme VSOP), achieves a resolution typically 3 times higher than this ($`1.5`$ mas and $`0.5`$ mas at 1.6 and 5 GHz respectively). In particular, the study of GPS sources benefits from VSOP, since observing at a higher frequency to achieve a similar resolution is often not an option, because of their steep fall-off in flux density towards high frequency. Furthermore, their physical properties are most interesting around their spectral turnover, where differences in spectral indices within the source are more prominent than at high frequency.
We have been awarded VSOP observing time for 11 and 8 of the brightest and most compact GPS sources at 5.0 and 1.6 GHz respectively. Details and status of the observations are listed in Table 1. At the time of writing, all targets at 5 GHz, and 6 of the 8 sources at 1.6 GHz have been observed.
### First results and discussion
A large fraction of the sources have now been imaged. Some examples are shown in figures 1 and 2. Additional observations have been taken at 15 GHz with the VLBA to match the 5 GHz VSOP data in resolution, which will allow detailed spectral decompositions of the objects. In particular, this may sched new light on the nature of the GPS quasars and the role of Doppler boosting in these sources.
One of the first results of these observations are the high brightness temperatures observed of typically $`10^{10.511}`$ Kelvin. This indicates that these objects must be near their synchrotron self absorption (SSA) turnover at the observed frequency, making it very likely that indeed SSA is the cause of their spectral peaks. This is in agreement with the statistical arguments of Snellen et al. (2000), who found that among samples of GPS and CSS sources, the ratio of component size, as derived from the spectral peak assuming SSA, and overall angular size, are constant and very similar to those found for large extended radio sources. This not only implies self-similar evolution, but also provides strong evidence for SSA. Note however, that several authors argue that free-free absorption can not be ruled out for the smallest GPS galaxies (Kameno et al., this volume; Marr et al., this volume)
A valuable spin-off from these high angular resolution VSOP observations come from their comparison with ground-based VLBI images taken at an earlier epoch. Following the method of Owsianik & Conway (1998), we use these to derive dynamical ages for GPS sources. In this way, we find that the two dominant components at 5 GHz of 2021+614 (fig 2), have a larger separation at the epoch of the VSOP observations, compared to data from Conway et al. (1994) taken in 1982 and 1987. The increase in separation indicates a hotspot advance speed of $`0.1`$c, which implies an age of $`400`$ years for these components (Tschager et al. 2000). Preliminary analysis of 0108+388 (fig 1) shows an advance speed of 15 $`\mu as`$/yr, consistent with what is found by Owsianik, Conway & Polatidis (1998; 9 $`\mu as/yr`$). These observations confirm the young ages of a few hundred years for the most compact GPS galaxies.
## 3 Summary
GPS galaxies and CSO are now identified as classes of young radio sources. They form a key element in the investigation of the evolution of radio-loud AGN. We report on VSOP observations of 11 and 8 bright GPS sources at 5.0 and 1.6 GHz frequency respectively. First analysis indicates high brightness temperatures consistent with synchrotron self absorption as the cause of their spectral turnover. Comparison with ground-based VLBI datasets taken at earlier epochs confirm the very young ages for the most compact GPS galaxies of a few hundred years.
#### Acknowledgements.
We gratefully acknowledge the VSOP Project, which is led by the Japanese Institute of Space and Astronautical Science in cooperation with many organizations and radio telescopes around the world.
## References
Conway J.E., Myers S.T., Pearson T.J., Readhead C.S., Unwin S.C., & Xu W., 1994, ApJ, 425, 568
Fanti R., Fanti C., Schilizzi R.T., Spencer R.E., Nan Rendong, Parma P., Van Breugel W.J.M., Venturi T., 1990, A&A, 231, 333
Fanti C., Fanti R., Dallacasa D., Schilizzi R.T., Spencer R.E., Stanghellini C., 1995, A&A, 302, 317
Kameno et al., this volume
Marr et al., this volume
Murgia M., Fanti C., Fanti R., Gregorini L., Klein U., Mack K-H., Vigotti M., 1999, A&A, 345, 769
OโDea C.P., Baum S.A., Stanghellini C., 1991, ApJ, 380, 66
OโDea C.P., Baum S.A., 1997, AJ, 113, 148
OโDea C.P., 1998, PASP, 110, 493
Owsianik I., Conway J.E., 1998, A&A, 337, 69
Owsianik I., Conway, J.E., Polatidis, A.G., 1998, A&A, 336, L37
Readhead A.C.S, Xu W., Pearson T.J., 1994, in Compact Extragalactic Radio Sources, eds Zensus & Kellerman, p19
Readhead A.C.S., Taylor G.B., Xu W., Pearson T.J., Wilkinson P.N., 1996, ApJ, 460, 634
Snellen I.A.G., Schilizzi R.T., Bremer M.N, Miley G.K., de Bruyn A.G., Rรถttgering H.J.A., 1999a, MNRAS, 307, 149
Snellen I.A.G., & Schilizzi R.T., proc. of โLifecycles of Radio Galaxiesโ workshop, ed J. Biretta et al., to appear in New Astronomy Reviews.
Snellen I.A.G., Schilizzi R.T., Miley G.K., de Bruyn A.G., Bremer, M.N. & Rรถttgering H.J.A., 2000, MNRAS, submitted
Stanghellini C., OโDea C.P., Baum S.A., Dallacasa D., Fanti R., Fanti C., 1997a, A&A, 325, 943
Tschager W., Schilizzi R.T., Rรถttgering, H.J.A., Snellen I.A.G., Miley, G.K., 2000, submitted to A&A
Wilkinson P.N., Polatidis A.G., Readhead A.C.S., Xu W., Pearson T.J., 1994, ApJ, 432, L87 |
no-problem/0002/math-ph0002044.html | ar5iv | text | # Continuous time evolution from iterated maps and Carleman linearization
## I Introduction
There are well known examples of how to relate a continuous time differential system
$$\dot{x}(t)=F(x(t)),$$
where $`F:๐^k๐^k`$, to an iteration of a map of $`๐^s`$ into itself. We only recall the Eulerโs broken line method and the Poincarรฉ map. An example of the inverse procedure to the discretization has been recently discussed in . Namely, a precise meaning has been given therein to the notion of a โcontinuous iterationโ of a mapping, that is a continuous counterpart of the iterates
$$f^n(x)=f(f^{n1}(x)).$$
The purpose of this work is to introduce an alternative formalism for the study of the continuous iterations based on the classical Carleman linearization technique which is more general than the approach taken up in . In section II we briefly introduce the Carleman linearization technique. Section III is devoted to the detailed analysis of the properties of the Carleman embedding matrix which are crucial for the actual treatment. Based on the observations of section III we find in section IV an explicit formula on continuous iterations of a mapping and show its connection with a linearization transformation for a corresponding recurrence. The theory is illustrated by an example of the logistic equation. In section V we introduce the ordinary differential equation referring to the continuous iteration and find a simple relation between the Carleman embedding matrices corresponding to the continuous and discrete time cases.
## II The Carleman linearization
We begin by recalling the Carleman linearization technique . Consider the system
$$\dot{x}=F(x),$$
(2.1)
where $`F:๐^k๐^k`$ and $`F`$ is analytic in $`x`$. Having in mind the applications of the Carleman technique in the study of the iterated one-dimensional maps discussed in this work we restrict for brevity to the case with $`k=1`$, i.e. the ordinary differential equation (2.1). On making the ansatz
$$x_j:=x^j,j=1,\mathrm{\hspace{0.17em}2},\mathrm{},$$
(2.2)
where $`x`$ fulfils (2.1) we arrive at the infinite linear system
$$\dot{x}_j=\underset{k=0}{\overset{\mathrm{}}{}}L_{jk}x_k,$$
(2.3)
with the constant coefficient matrix $`L_{jk}`$. Clearly, in view of (2.2) the finite system (2.1) is embedded into the infinite system (2.3). Therefore, the Carleman linearization is also referred to as the Carleman embedding technique. Recently, the Carleman approach has been succesfully applied to the solution of numerous nonlinear problems (see and references therein). We only recall the application of the Carleman linearization technique for calculating Lyapunov exponents and finding first integrals for the Lorenz system .
As shown by Steeb the Carleman embedding can be easily generalized to the case with nonlinear recurrences of the form
$$x_{n+1}=f(x_n),$$
(2.4)
where $`f`$ is analytic in $`x_n`$. Indeed, in analogy with (2.2) we set
$$x_{jn}:=x_n^j,$$
(2.5)
where $`x_n`$ fulfils (2.4), which leads to the infinite-dimensional linear system of difference equations such that
$$x_{jn+1}=\underset{k=0}{\overset{\mathrm{}}{}}M_{jk}x_{kn}.$$
(2.6)
As with the case of the ordinary differential equations the finite-dimensional recurrence (2.4) is embedded into the infinite linear system (2.5).
## III The Carleman embedding matrix for nonlinear recurrences
In this section we study the properties of the Carleman embedding matrix for nonlinear recurrences specified by (2.6) which are utilized in the actual formalism. We now return to (2.6). Let $`M(f)`$ designate the Carleman matrix referring to the recurrence (2.4). Equations (2.4), (2.5) and (2.6) taken together yield
$$(f(x))^j=\underset{k=0}{\overset{\mathrm{}}{}}M_{jk}(f)x^k,$$
(3.7)
so
$$M_{jk}(f)=\frac{1}{k!}\frac{d^k(f(x))^j}{dx^k}|_{x=0}.$$
(3.8)
Let
$$f(x)=\underset{k=0}{\overset{\mathrm{}}{}}f_kx^k.$$
(3.9)
The matrix $`M(f)`$ can be alternatively defined with the help of the coefficients of the expansion (3.3) as
$$M_{jk}(f)=\{\begin{array}{cc}\delta _{0k}\hfill & \text{ }\text{for }j=0,\hfill \\ \underset{m_1+m_2+\mathrm{}+m_j=k}{}f_{m_1}f_{m_2}\mathrm{}f_{m_j}\hfill & \text{ }\text{for }j1.\hfill \end{array}$$
(3.10)
The first few elements of the matrix $`M(f)`$ are
$$M(f)=\left(\begin{array}{ccccc}1\hfill & 0\hfill & 0\hfill & 0\hfill & \mathrm{}\hfill \\ f_0\hfill & f_1\hfill & f_2\hfill & f_3\hfill & \mathrm{}\hfill \\ f_0^2\hfill & 2f_0f_1\hfill & 2f_0f_2+f_1^2\hfill & 2(f_0f_3+f_1f_2)\hfill & \mathrm{}\hfill \\ f_0^3\hfill & 3f_0^2f_1\hfill & 3(f_0^2f_2+f_0f_1^2)\hfill & 3f_0^2f_3+6f_0f_1f_2+f_1^3\hfill & \mathrm{}\hfill \\ \mathrm{}\hfill & \mathrm{}\hfill & \mathrm{}\hfill & \mathrm{}\hfill & \end{array}\right).$$
(3.11)
We remark that usage of the formula (3.2) or (3.4) is not the most effective way of calculating the elements of the matrix $`M(f)`$. The simpler possibility is to apply the relation
$$M_{jk}(f)=\frac{1}{2\pi }\underset{0}{\overset{2\pi }{}}e^{\mathrm{i}k\phi }(f(e^{\mathrm{i}\phi }))^j๐\phi ,$$
(3.12)
following directly from (3.1) and the well-known fact that the functions of the form $`e^{\mathrm{i}n\phi }`$, where $`n=0,\mathrm{\hspace{0.17em}1},\mathrm{\hspace{0.17em}2}`$$`\mathrm{}`$, form the orthonormal basis of the space of the square integrable functions on a unit circle.
Example: Consider the logistic equation
$$x_{n+1}=\mu x_n(1x_n).$$
(3.13)
Using the relation (3.6) we easily obtain the following formula on the elements of the corresponding Carleman matrix $`M`$:
$$M_{jk}=(1)^{kj}\left(\genfrac{}{}{0pt}{}{j}{kj}\right)\mu ^j$$
(3.14)
(the vanishing of $`M_{jk}`$ in the case with $`2j<k`$ is understood).
We now focus our attention on the mapping
$$fM(f).$$
(3.15)
A remarkable property of (3.9) is that it provides a representation of a semigroup of analytic functions with multiplication defined as the composition operation, that is
$$M(fg)=M(f)M(g).$$
(3.16)
The relation (3.10) is an immediate consequence of (3.1). Clearly,
$$M(\mathrm{id})=I,$$
(3.17)
where $`\mathrm{id}(x)x`$ is an identity function playing the role of a neutral element for the semigroup of the analytic functions, $`I`$ is the identity matrix, and whenever exists the inverse $`f^1`$ of $`f`$, then
$$M(f^1)=M^1(f).$$
(3.18)
Finally, it is clear in view of (3.10) that iterations $`f^n`$ of a function $`f`$ are represented by matrix powers, i.e.
$$M(f^n)=M^n(f).$$
(3.19)
Having in mind the form of the relation (3.13) it is plausible to define the continuous iterations $`f^t`$ of $`f`$, where $`t`$ is a real parameter, by
$$M(f^t)=M^t(f).$$
(3.20)
Thus, the problem of the precise definition of continuous iterations can be reduced to finding the powers $`M^t`$ of the matrix $`M`$ for a non-integer $`t`$.
We recall that the infinite-dimensional (anti)representations of the formal power series were originally studied in . The counterpart of the formula (3.10) introduced therein describing the anti-representation is of the form
$$B(fg)=B(g)B(f),$$
(3.21)
where $`B(f)`$ are the Bell matrices specified by
$$B_{jk}(f)=\frac{1}{k!}\frac{d^j(f(x))^k}{dx^k}|_{x=0},j,k=1,\mathrm{\hspace{0.17em}2},\mathrm{},$$
(3.22)
and it is assumed that the function $`f`$ given by the formal power series satisfies $`f(0)=0`$. We point out that such assumption is rather restrictive one and it is not satisfied in such important cases as for example $`f(z)=z^2+c`$, related to the celebrated Mandelbrot fractal. Evidently, we have
$$B(f^n)=B^n(f).$$
(3.23)
As with (3.13) the relation (3.17) was the point of departure in to define the continuous iterations. Nevertheless, the approach taken up therein is less general and it seems to be more complicated than that introduced in the next section of this work. We finally remark that in opposition to the actual treatment there is no interpretation of the Bell matrices $`B(f)`$ provided in connected with an infinite-dimensional linearization of the original nonlinear recurrence (2.4).
## IV Powers of the Carleman embedding matrices and continuous iterations
As mentioned in the previous section (see formula (3.14)) the problem of the precise definition of continuous iterations reduces to finding the non-integer powers $`M^t`$ of the matrix $`M(f)`$. Our purpose now is to discuss this point in a more detail. We first observe that the problem under investigation can be furthermore cast into determining the transformation diagonalizing $`M`$ such that
$$M(f)=U^1\mathrm{\Lambda }U,$$
(4.24)
where $`\mathrm{\Lambda }`$ is diagonal, i.e. $`\mathrm{\Lambda }_{jk}=\lambda _j\delta _{jk}`$. In fact, (4.1) leads to the following formula on the powers of the matrix $`M`$:
$$M^t(f)=U^1\mathrm{\Lambda }^tU,$$
(4.25)
where $`(\mathrm{\Lambda }^t)_{jk}=\lambda _j^t\delta _{jk}`$.
In order to diagonalize the matrix $`M`$ we first bring it down to the triangular form. Consider (2.4). Let us assume that $`x_n=x_{}`$ is a stationary solution to (2.4), that is $`x_{}`$ is a fixed point of $`f`$ such that
$$f(x_{})=x_{}.$$
(4.26)
As with ordinary differential equations we can switch over to new variables
$$x_n^{}=x_nx_{},$$
(4.27)
so that the resulting nonlinear recurrence
$$x_{n+1}^{}=g(x_n^{}),$$
(4.28)
where $`g(x_n^{})=f(x_n^{}+x_{})x_{}`$, obeys
$$g(0)=0.$$
(4.29)
Using the definition (3.4) one can easily check that the condition (4.6) leads to the upper triangular Carleman embedding matrix $`M(g)`$. Further, we have
$$g=hfh^1,$$
(4.30)
where
$$h(x)=xx_{}.$$
(4.31)
Using (3.10), (4.7) and (3.2) we arrive at the matrix relation of the form
$$M(g)=T_x_{}M(f)T_x_{}^1,$$
(4.32)
where $`T_x_{}=M(h)`$, and
$$(T_x_{})_{ij}=\{\begin{array}{cc}\left(\genfrac{}{}{0pt}{}{j}{k}\right)(x_{})^{jk}\hfill & \text{ }\text{for }jk,\hfill \\ 0\hfill & \text{ }\text{for }j<k.\hfill \end{array}$$
(4.33)
Evidently,
$$(T_x_{}^1)_{ij}=\{\begin{array}{cc}\left(\genfrac{}{}{0pt}{}{j}{k}\right)x_{}^{jk}\hfill & \text{ }\text{for }jk,\hfill \\ 0\hfill & \text{ }\text{for }j<k.\hfill \end{array}$$
(4.34)
We have thus shown that the Carleman embedding matrix $`M(f)`$ corresponding to (2.4), where $`f`$ fulfils (4.3) can be reduced by means of the transformation (4.4) to the upper triangular form $`M(g)`$.
Now let
$$g(x)=\underset{k=0}{\overset{\mathrm{}}{}}g_kx^k.$$
(4.35)
Taking into account (3.4) and (4.12) we find that the diagonal elements of the matrix $`M(g)`$ are
$$M_{ii}(g)=g_1^i=\left(\frac{dg(0)}{dx}\right)^i=\left(\frac{df(x_{})}{dx}\right)^i,i=0,\mathrm{\hspace{0.17em}1},\mathrm{\hspace{0.17em}2},\mathrm{}.$$
(4.36)
These elements coincide with the eigenvalues of the matrix $`M(g)`$ specified by
$$\psi M(g)=\lambda \psi ,$$
(4.37)
where $`\psi `$ is an infinite row-vector. Suppose now that the eigenvalues $`\lambda _i=M_{ii}(g)`$ of the matrix $`M(g)`$ are mutually different. Since $`\lambda _i=g_1^i\lambda ^i`$, therefore we then have the restrictive conditions $`\lambda 0`$ and $`\lambda \sqrt[n]{1}`$. It is easy to verify that the transformation diagonalizing $`M(g)`$ of the form
$$M(g)=V^1\mathrm{\Lambda }V,$$
(4.38)
where $`\mathrm{\Lambda }`$ is diagonal, is given by the following recursive relations:
$`V_{jk}=\{\begin{array}{cc}(\lambda ^j\lambda ^k)^1{\displaystyle \underset{l=j}{\overset{k1}{}}}V_{jl}M_{lk}(g)\hfill & \text{ }\text{for }j<k,\hfill \\ 1\hfill & \text{ }\text{for }j=k,\hfill \\ 0\hfill & \text{ }\text{for }j>k,\hfill \end{array}`$ (4.42)
$`V_{jk}^1=\{\begin{array}{cc}(\lambda ^j\lambda ^k)^1{\displaystyle \underset{l=j+1}{\overset{k}{}}}V_{lk}^1M_{jl}(g)\hfill & \text{ }\text{for }j<k,\hfill \\ 1\hfill & \text{ }\text{for }j=k,\hfill \\ 0\hfill & \text{ }\text{for }j>k,\hfill \end{array}`$ (4.46)
and
$$\mathrm{\Lambda }_{jk}=\lambda ^j\delta _{jk}.$$
(4.47)
Finally, combining (4.15) and (4.9) we find that the transformation diagonalizing $`M(f)`$ can be expressed by (4.1), with
$$U=VT_x_{}.$$
(4.48)
We are now in a position to define the desired continuous iteration $`f^t`$. Indeed, eqs. (3.14), (3.1) and (4.1) taken together yield the following formula on $`f^t`$:
$`f^t(x)`$ $`=`$ $`{\displaystyle \underset{k=0}{\overset{\mathrm{}}{}}}(M^t)_{1k}(f)x^k={\displaystyle \underset{k=0}{\overset{\mathrm{}}{}}}(U^1\mathrm{\Lambda }^tU)_{1k}x^k`$ (4.49)
$`=`$ $`{\displaystyle \underset{jklm}{}}(T_x_{}^1)_{1j}V_{jk}^1\lambda ^{kt}V_{kl}(T_x_{})_{lm}x^m.`$ (4.50)
On introducing the functions $`\phi _k(x)`$ such that
$$\phi _k(x):=\{\begin{array}{cc}_{l=0}^{\mathrm{}}V_{1k}^1V_{kl}(xx_{})^l\hfill & \text{ }\text{for }k>0,\hfill \\ x_{}\hfill & \text{ }\text{for }k=0,\hfill \end{array}$$
(4.51)
we can write (4.20) in a more compact form. It follows that
$$f^t(x)=\underset{k=0}{\overset{\mathrm{}}{}}\lambda ^{kt}\phi _k(x).$$
(4.52)
Thus it turns out that the problem of the definition of continuous iterations can be brought down to the solution of the eigenvalue equation (4.14). Further, it is straightforward to show that the matrix $`V`$ satisfying (4.15) satisfies the relation (3.4). Using this we find that the matrix $`V`$ can be expressed by the solution to the eigenvalue equation (4.14) with the help of the following relation:
$$V_{jk}=\underset{l=0}{\overset{k}{}}V_{j1l}\psi _{kl},$$
(4.53)
where $`\psi _k`$ are the coordinates of the vector $`\psi `$ . Now it is not difficult to check that besides of the matrix $`V`$ also the diagonal matrix $`\mathrm{\Lambda }`$ specified by (4.18) fulfils the relation (3.4). On passing with the use of (4.1) and (4.19) from matrices to functions we arrive at the following functional equation:
$$f(x)=u^1(\lambda u(x)).$$
(4.54)
where $`u=vh`$, and $`V=M(v)`$. Clearly, the functional equivalent of (4.2) is of the form
$$f^t(x)=u^1(\lambda ^tu(x)).$$
(4.55)
It thus appears that the problem of the definition of the continuous iterations can be alternatively cast into the solution of the functional equation
$$u(f(x))=\lambda u(x).$$
(4.56)
The equation (4.26) is known in the literature. For example in it was used for finding explicit solutions to nonlinear recurrences (2.4). We recall that the solution $`u(x)`$ of (4.26) is simply the linearization transformation
$$x_n^{}=u(x_n),$$
(4.57)
reducing the solution of the nonlinear recurrence (2.4) to the linear one
$$x_{n+1}^{}=\lambda x_n^{}.$$
(4.58)
We point out that the connection of the functional equation (4.26) with an infinite-dimensional eigenvalue problem was found for the first time by the second author in the context of the Hilbert space description of nonlinear recurrences (2.4). The formula on continuous iterations analogous to (4.22) in the particular case of $`f(0)=0`$ was originally obtained in with the use of the finite-dimensional truncations of the Bell matrices. We finally remark that our experience indicates that the recursive relation (4.23) on the matrix $`V`$ is of practical importance for the numerical solution of the functional equation (4.26).
Example: Consider as an illustrative example the well-known exactly solvable case of the logistic equation (3.7) with $`\mu =4`$
$$x_{n+1}=4x_n(1x_n).$$
(4.59)
Using the identity
$$\mathrm{arccos}(2x^21)=2\mathrm{arccos}x,0x1,$$
(4.60)
we get the solution to the functional equation (4.26), where $`f(x)=4x(1x)`$, of the form
$`\lambda `$ $`=`$ $`4,`$ (62)
$`u(x)`$ $`=`$ $`{\scriptscriptstyle \frac{1}{4}}[\mathrm{arccos}(12x)]^2.`$ (63)
Finally, taking into account (4.25) we get the desired formula on the continuous iterations such that
$$f^t(x)={\scriptscriptstyle \frac{1}{2}}\{1\mathrm{cos}[2^t\mathrm{arccos}(12x)]\}.$$
(4.64)
Referring back to (4.22) we find
$$x_{}=0,h(x)=x,\phi _0=0,\phi _k(x)={\scriptscriptstyle \frac{1}{2}}(1)^{k+1}\frac{[\mathrm{arccos}(12x)]^{2k}}{(2k!)},k1.$$
(4.65)
Thus the well-known solution to (4.29) of the form $`x_n(x_0)`$ = $`f^n(x_0)`$, where $`f^n(x_0)`$ is given by (4.32) corresponds to the fixed point $`x_{}=0`$ of $`f(x)`$. We now discuss the solution corresponding to the second fixed point $`x_{}=\frac{3}{4}`$ of $`f(x)`$. On utilizing the identity
$$2\pi \mathrm{arccos}(2x^21)=2\mathrm{arccos}x,1x<0,$$
(4.66)
we arrive at the solution to (4.26), where
$`\lambda `$ $`=`$ $`2,`$ (68)
$`u(x)`$ $`=`$ $`{\scriptscriptstyle \frac{1}{2}}\mathrm{arccos}(12x){\scriptscriptstyle \frac{\pi }{3}}.`$ (69)
Taking into account (4.25) and (4.35) we obtain
$$f^t(x)={\scriptscriptstyle \frac{1}{2}}\{1\mathrm{cos}\{(2)^t[\mathrm{arccos}(12x){\scriptscriptstyle \frac{2\pi }{3}}]+{\scriptscriptstyle \frac{2\pi }{3}}\}\}.$$
(4.70)
It can be checked with the use of (4.22), where $`x_{}=\frac{3}{4}`$ and $`h(x)=x\frac{3}{4}`$, that the obtained solution really refers to the fixed point $`x_{}=\frac{3}{4}`$. Furthermore, a straightforward calculation shows that the function (4.32) is equivalent to (4.36) for non-negative integer $`t`$, that is the solutions of the logistic equation (4.29) corresponding to (4.32) and (4.36), respectively, such that $`x_n(x_0)`$ = $`f^n(x_0)`$, coincide. Clearly, (4.32) is different from (4.36) for non-integer $`t`$. We conclude that in the case with the logistic equation (4.29) the continuous iteration is not unique. It should be noted that such ambiguity was not recognized in . In fact, the solution (4.36) cannot be obtained by means of the approach introduced in . We also remark that uniqueness of the continuous iterations referring to (4.29) is violated by the existence of the multiple equilibria for (4.29). It is suggested that it is the case for the general recurrence (2.4). We finally point out that due to the term $`(2)^t`$ the function $`f^t(x)`$ given by (4.36) is complex-valued.
## V From iterated maps to continuous time evolution
In the previous section we have investigated the continuous iterations $`f^t(x)`$ referring to the recurrence (2.4). Since we can interpret the continuous parameter $`t`$ as a โtime variableโ, therefore the question naturally arises on the dynamics of $`f^t(x)`$. Consider the continuous iterations $`f^t(x)`$ and the corresponding powers of the Carleman embedding matrix $`M^t(f)`$. We have
$`(f^t(x))^j`$ $`=`$ $`{\displaystyle \underset{k=0}{\overset{\mathrm{}}{}}}(M^t)_{jk}x^k,`$ (5.71)
$`{\displaystyle \frac{d}{dt}}M^t`$ $`=`$ $`\mathrm{ln}MM^t.`$ (5.72)
Eqs. (5.1) and (5.2) taken together yield
$$\frac{d}{dt}f^t(x)=G(f^t(x)),$$
(5.73)
where
$$G(x)=\underset{k=0}{\overset{\mathrm{}}{}}(\mathrm{ln}M)_{1k}x^k.$$
(5.74)
We have thus shown that the continuous iterations $`f^t(x)`$ satisfy the ordinary differential equation (5.3) subject to the initial condition $`f^0(x)=x`$, i.e. $`f^t`$ is the flow corresponding to (5.3). Of course, since $`f^t(x)`$ is a flow therefore the vector field $`G`$ can be expressed by $`f^t(x)`$ with the help of the well-known relation
$$G(x)=\frac{df^t(x)}{dt}|_{t=0}.$$
(5.75)
An alternative form of the vector field $`G`$ can be obtained directly from (4.25). Indeed, differentiating both sides of (4.25) with respect to time we get (5.3) with
$$G(x)=\mathrm{ln}\lambda \frac{du^1(u(x))}{dx}u(x).$$
(5.76)
Interestingly, there exists a remarkably simple relation between the Carleman linearization of the differential equation (5.3) and the Carleman linearization of the original recurrence (2.4). In fact, on setting
$$x_j(t):=(f^t(x))^j,$$
(5.77)
and using (3.1), (3.14) and (5.2) we arrive at the infinite linear system
$$\dot{x}_j=\underset{k=0}{\overset{\mathrm{}}{}}L_{jk}x_k,$$
(5.78)
where
$$L=\mathrm{ln}M.$$
(5.79)
Thus the logarithm of the Carleman embedding matrix related to the linearization of the recurrence (2.4) is simply the Carleman embedding matrix describing the linearization of the differential equation (5.3). We finally remark that whenever the solution $`f^n(x)`$ to (2.4) is chaotic then it is plausible to expect that the solution $`f^t(x)`$ of the differential equation (5.3) is also chaotic. As is well-known, the definition of chaotic systems presents a delicate problem, nevertheless it seems incredible that the one-dimensional autonomous system (5.3) would exhibit in any sense the chaotic behavior. The following example provides a possible solution of the problem.
Example: Consider the continuous iteration (4.32) corresponding to the solution of the logistic equation (4.29). An immediate consequence of (5.5) or (5.6) and (4.31) is the following differential equation satisfied by $`f^t(x)`$:
$$\frac{df^t(x)}{dt}={\scriptscriptstyle \frac{1}{2}}\mathrm{ln}2\mathrm{sin}\{\mathrm{arccos}[12f^t(x)]\}\mathrm{arccos}[12f^t(x)].$$
(5.80)
Notice that the principal part $`\mathrm{arccos}(x)`$ of the inverse cosine obeys $`0\mathrm{arccos}(x)\pi `$, therefore the right-hand side of (5.10), that is $`\frac{df^t(x)}{dt}`$, is non-negative. On the contrary, in view of (4.32) $`f^t(x)`$ oscillates. We conclude that $`f^t(x)`$ cannot satisfy (5.10) for arbitrary $`t`$. Indeed, an easy inspection shows that $`f^t(x)`$ satisfies the following differential equation:
$$\frac{df^t(x)}{dt}=\stackrel{~}{G}(t,x,f^t(x)),f^0(x)=x,0x1,$$
(5.81)
where
$`\stackrel{~}{G}(t,x,f^t(x))`$ (5.82)
$`=\{\begin{array}{cc}{\scriptscriptstyle \frac{1}{2}}\mathrm{ln}2\mathrm{sin}\{\mathrm{arccos}[12f^t(x)]\}\mathrm{arccos}[12f^t(x)]\hfill & \text{ }\text{for }0t<\frac{\mathrm{ln}\frac{\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2},\hfill \\ {\scriptscriptstyle \frac{1}{2}}\mathrm{ln}2\mathrm{sin}\{\mathrm{arccos}[2f^t(x)1]\}\{\pi +\mathrm{arccos}[2f^t(x)1]\}\hfill & \text{ }\text{for }\frac{\mathrm{ln}\frac{\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2}<t<\frac{\mathrm{ln}\frac{2\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2},\hfill \\ \mathrm{}\hfill & \text{ }\mathrm{}\hfill \\ {\scriptscriptstyle \frac{1}{2}}\mathrm{ln}2\mathrm{sin}\{\mathrm{arccos}[12f^t(x)]\}\{2k\pi +\mathrm{arccos}[12f^t(x)]\}\hfill & \text{ }\text{for }\frac{\mathrm{ln}\frac{2k\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2}<t<\frac{\mathrm{ln}\frac{(2k+1)\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2},\hfill \\ {\scriptscriptstyle \frac{1}{2}}\mathrm{ln}2\mathrm{sin}\{\mathrm{arccos}[2f^t(x)1]\}\{(2k+1)\pi +\mathrm{arccos}[2f^t(x)1]\}\hfill & \text{ }\text{for }\frac{\mathrm{ln}\frac{(2k+1)\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2}<t<\frac{\mathrm{ln}\frac{2(k+1)\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2},\hfill \\ \mathrm{}\hfill & \text{ }\mathrm{}\hfill \\ 0\hfill & \text{ }\text{for }t=\frac{\mathrm{ln}\frac{l\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2},\hfill \\ 0\hfill & \text{ }\text{for }x=0,(5.12)\hfill \end{array}`$ (5.91)
where $`k,l=1,\mathrm{\hspace{0.17em}2},\mathrm{}`$. Thus, it turns out that (5.10) holds only for $`0t<\frac{\mathrm{ln}\frac{\pi }{\mathrm{arccos}(12x)}}{\mathrm{ln}2}`$. Nevertheless, in view of (5.12) the form of $`\stackrel{~}{G}`$ is determined completely by (5.10). In fact, the continuation of (5.10) expressed by (5.12) is evidently implied by (5.10) and the fact that the inverse cosine is the infinitely-many-valued function. It seems that the adequate denomination for (5.10) would be the โprincipal partโ of (5.11). As with (4.29) the differential equation (5.11) is also chaotic. In fact, by (4.32) $`0f^t(x)1`$, and the Lyapunov exponent $`\sigma `$ is
$$\sigma =\underset{t\mathrm{}}{lim}\frac{1}{t}\mathrm{ln}\left|\frac{f^t(x)}{x}\right|=\mathrm{ln}2.$$
(5.92)
As expected, this exponent coincides with the Lyapunov exponent for (4.29) . Notice that the chaoticity of (5.11) does not contradict the well-known fact based on the Poincarรฉ-Bendixon theorem that the minimal dimension of the phase space of the chaotic autonomous system is three. As a matter of fact the nonautonomous equation (5.11) refers to the two-dimensional phase space with coordinates $`f^t`$ and $`t`$. Nevertheless, becouse of the non-periodic dependence of the vector field $`\stackrel{~}{G}`$ on the time variable $`t`$, the volume of the phase space occupied by the trajectories of the autonomous system corresponding to (5.11) is, in contrast to the assumption of the Poincarรฉ-Bendixon theorem, infinite one.
## VI Conclusion
In this work it is shown that the Carleman linearization of nonlinear recurrences defines the matrix representation of analytic functions. Such representation enables a sound definition of continuous iterations. On introducing the infinite-dimensional eigenvalue equation (4.14) related to the problem of the definition of continuous iterations and translating it back into the language of the composition of functions, we have arrived at the functional equivalent (4.26) of (4.14). As we have mentioned earlier that equation can be met in the literature. Nevertheless its application in the context of continuous iterations as with (4.25) is most probably new. In opposition to the alternative approach introduced by Aldrovandi and Freitas the interpretation is provided in the actual treatment of the matrices representing functions in terms of the infinite dimensional linearization of the original nonlinear recurrence. We have also identified the (finite) dynamical system corresponding to continuous iterations, and we have found a simple formula (5.9) relating the Carleman embedding matrices in the discrete and continuous time cases. We note that the naive approach to the continuous iterations corresponding to (2.4) relying on the formal replacement of the discrete variable $`n`$ by the continuous one $`t`$ suggests only the delayed equation $`x(t+1)=f(x(t))`$ which is equivalent to the infinite dimensional system of ordinary differential equations. The simplicity of the approach taken up herein suggests that it would be a useful tool in the study of nonlinear recurrences as well as their continuous counterparts. |
no-problem/0002/quant-ph0002031.html | ar5iv | text | # Experimental Test of Relativistic Quantum State Collapse with Moving Reference Frames
## Figure captions
1. Schematic of the experiments that consist of a photon pair source and two analyzers separated by 10.6 km, see . The absorbing surface A and the rotating wheel are at equal distances from the source. The detectors APD3 and APD4 are connected with longer fibers such that each photon meets first the absorber, next the detector. In a second experiment the absorbers are replaced by two photon counters APD1 and APD2, again at exactly the same distance. We obtain typically 2 kcts/s single count rates and a mean value of about 3 coincidences per second (incl. 2 accidentals), for details, see text and .
2. 2-photon interference fringes measured over 6 hours, each data point corresponds to a time interval of 100s. The difference of the optical path lengths is is varying from - 8 to + 1.3 ps. Negative values mean that the detections occurs first in Bernex in the Geneva-Bernex reference frame. In the moving Bellevue reference frame the detections happen first in Bellevue over the entire scan range, as indicated on the upper time scale. Despite this different time ordering no reduced visibility is observed. |
no-problem/0002/astro-ph0002361.html | ar5iv | text | # Effect of Beam-Plasma Instabilities on Accretion Disk Flares
## 1 Introduction
The origin of fluctuations in the emission from Active Galactic Nuclei (AGN) and binary X-ray sources is an important and long-standing problem. One frequently considered possibility employs flares in the coronae around accretion disks to produce rapid energy release, particle acceleration and radiation (e.g., Galeev, Rosner & Vaiana galeev (1979); Kuperus & Ionson kuperus (1985); for a review, see Kuijpers kuijpers (1995)). These models usually build upon our understanding of solar flare physics.
A particular model of this type has been proposed by de Vries & Kuijpers (deVries (1992); hereafter dVK), and was specifically applied by them to X-ray variability of AGNs. Their model is an elaboration on typical flare scenarios, in that, as usual, the source of energy is stored in magnetic fields in coronae of accretion disks. They estimate the power released in flares in a radiation pressure dominated corona, which they stress is a different environment from the gas pressure dominated solar corona. They argue this leads to a situation where beams of relativistic electrons are produced in the corona and then lose essentially all of their energy through inverse Compton scattering on UV disk photons before they can stream back to the disk. They further argue that these inverse Compton (IC) photons produce the X-ray variability seen in Seyfert galaxies, and are able to calculate spectral power-densities in reasonable agreement with observations.
However, the dVK model does not take into account other mechanisms that might vitiate some of their key assumptions. We note that dVK briefly argue that, particularly if radiation pressure dominates the energy density in the corona, as is indeed likely around standard thin accretion disks (e.g., Shakura & Sunyaev shakura (1973)) which they assume, energy losses through scattering on plasma waves are unimportant; then the dominant losses will be to IC scattering. However, it is well known that an electron beam-plasma system is often susceptible to the excitation of beam-plasma instabilities which usually have large growth rates (Sturrock sturrock (1964)). Here we argue that when these beam-plasma instabilities (BPIs) are taken into account, the rate of loss of energy by the electrons for the accretion disk coronae conditions suggested by dVK is typically much higher than the rate of gain of energy through direct acceleration by the electric fields, which are presumed to arise in reconnection events. Therefore beams of electrons usually will not reach the high Lorentz factors needed to produce most X-rays by the IC process. In many accretion disk models X-rays are usually produced through IC scattering of soft photons on hot thermal electrons (e.g. Shapiro, Lightman & Eardley shapiro (1976); Liang & Price liang (1977)). In such a situation beam-plasma instabilities are not excited, and only thermal spontaneously excited plasma waves should exist. These will have energy densities less than the thermal energy density of the plasma, which in turn is much less than the radiation energy density. In this case, the argument of dVK would be valid, but, once they assume a beam is present, then beam-plasma instability effects must be included.
## 2 Growth of Beam-Plasma Instabilities
The key assumptions of the dVK model are that: 1) relaxation of magnetic structures efficiently produce relativistic electron beams; 2) the particle beam is a mono-energetic stream of electrons with an initial Lorentz factor $`\gamma _0`$; 3) the ambient radiation is from a quasi-infinite disk and can be considered as uniform and isotropic, with a radiation density $`u_{\mathrm{rad}}`$; 4) the beam is optically thin, so multiple scattering of photons can be ignored. Although (3) is an approximation, it is a reasonable one, and (4) is certainly plausible under many circumstances. But the core of their argument hinges on the ability of the neutral sheet in the reconnection process to quickly accelerate electrons via a direct electric field. During this acceleration process dVK claim the equation for the acceleration of an single electron suffering IC losses is
$$\frac{d\gamma }{dt}=\chi _1\frac{(\gamma ^21)^{1/2}}{\gamma }\chi _2(\gamma ^21),$$
(1)
where, $`\chi _1=eE/m_ec`$ and $`\chi _2=4\sigma _Tu_{rad}/3m_ec`$, with all symbols having their usual meanings. In that the first (positive) term starts out substantially greater in magnitude than the second (negative) one, acceleration will ensue until a limiting Lorentz factor is reached when the two terms balance:
$$\gamma _{\mathrm{}}=2^{1/2}[1+(1+4\chi _1^2/\chi _2^2)^{1/2}]^{1/2}.$$
(2)
The electric field is reasonably taken by dVK to be the Dreicer value, which we take as: $`E_D=6\pi n_pe^3\mathrm{ln}\mathrm{\Lambda }/(k_BT_e)`$, where $`n_p`$ is the electron density of the ambient plasma, $`\mathrm{ln}\mathrm{\Lambda }20`$ is the Coulomb logarithm, and all other symbols have their usual meanings. With typical AGN values ($`n_p10^{10}`$ cm<sup>-3</sup>, $`T_e10^6`$K, and $`T_{rad}10^5`$K) they find $`\gamma _{\mathrm{}}(\chi _1/\chi _2)^{1/2}30`$. They then conclude that the electrons will all reach this terminal Lorentz factor before the acceleration terminates and the electrons then lose their energy against the disk photons providing the background radiation field.
We now show that since a BPI is excited, it will dominate the energy losses for the beam and actually prevent the electrons from reaching the high Lorentz factors calculated by dVK. Under these circumstances there will be very little IC radiation, so that, while a great deal of energy may be released through magnetic reconnection, the bulk of the energy will probably provide heating to the corona (e.g., Liang & Price liang (1977)) but is unlikely to yield the bulk of the X-rays directly through IC emission.
The dominant growth rate of the BPI depends on the relative magnitudes of the bulk velocity of the beam, $`v_b`$, and the mean thermal velocity in the beam, $`v_{Tb}`$; under some conditions, $`v_{Te}`$, the mean thermal velocity of the ambient electrons, also must be taken into account. The standard formula for the BPI growth rate, valid for $`v_b>(n_p/n_b)^{1/3}v_{Tb}`$, is our Case 1 (e.g., Mikhailovskii mikhailovskii (1974))
$$\mathrm{\Gamma }_{bp}=0.7\left(\frac{n_b}{n_p}\right)^{1/3}\omega _{pe},$$
(3)
where $`n_b`$ is the beam density, $`n_p`$ is the ambient plasma density (here, in the disk corona), and $`\omega _{pe}=5.47\times 10^4n_e^{1/2}`$ is the plasma frequency in terms of the ambient electron number density in cgs units. The frequency at which this mode grows is $`\omega _{pe}(10.4(n_b/n_p)^{1/3})`$.
If the beam starts out very slowly, with $`v_b<(n_p/n_b)^{1/3}v_{Tb}`$, then the โweakโ version of the BPI is relevant, and this is our Case 2 (e.g., Benz benz (1993))
$$\mathrm{\Gamma }_{bp,w}=\left(\frac{n_b}{2n_p}\right)\left(\frac{v_b}{v_{Te}}\right)^2\omega _{pe},$$
(4)
and the frequency at which this dominant mode is excited is $`\omega _{pe}`$. Under the limited circumstances that $`v_{Te}>v_b>v_{Tb}`$, the โhot-electronโ Case 3 yields (e.g., Mikhailovskii mikhailovskii (1974)),
$$\mathrm{\Gamma }_{bp,he}=\left(\frac{n_b}{n_p}\right)^{1/2}\frac{v_{Te}}{v_b}\omega _{pe},$$
(5)
where this dominant mode is at a frequency of $`(v_b/v_{Te})\omega _{pe}`$.
The AGN corona values of dVK for $`n_p=10^{10}`$ cm<sup>-3</sup> and $`T=10^6`$ K, which we also believe are reasonable, will be adopted here. There are, however, additional parameters that must be considered now (basically in lieu of the radiation temperature, or $`u_{rad}`$, needed by dVK). First, $`\zeta n_b/n_p`$; for solar flares this value is $`10^6`$$`10^4`$ (Benz benz (1993)); however, we will bear in mind the possibility that this ratio may be higher in this type of radiation dominated plasma. We also need initial values of $`v_b`$ and $`v_{Tb}`$, to determine which of the three Cases defined above should be considered. For us to say that a beam actually exists we must always demand that $`v_b>v_{Tb}`$.
Note that the BPI directly gives the rate of growth of an electric field in the plasma, and the energy loss goes as the square of the field strength. Then we find that when the relativistic effects that arise if the Lorentz factors really could become large are included, the rate of change of energy of electrons in the beam is,
$$\frac{d\gamma }{dt}=\chi _1\frac{(\gamma ^21)^{1/2}}{\gamma }2\alpha \gamma \mathrm{\Gamma }_{bp}(\gamma ),$$
(6)
where $`\alpha W/E=W/\gamma n_bmc^2`$, is the ratio between the wave energy density, $`W`$, and the electron beam energy density, $`E`$. In order to determine $`W`$, knowledge of the saturation mechanisms of the wave field are needed. Often, in order to avoid a detailed discussion of the saturation mechanisms, which tend to operate in multiplicity in a plasma, the condition of equipartition of energy between the waves and the beam particles is used (Treumann & Baumjohann treumann (1997)). In that case $`\alpha `$ is approximated to unity, and we consider this situation first. Case 4, where $`\alpha 1`$, and the saturation occurs earlier by trapping, will then be addressed.
In Eqn. (6) we have ignored the IC term appearing in Eqn. (1), having replaced it with a generic form of the BPI growth rate; the fact that the BPI term is much bigger than the $`\chi _2`$ term for all reasonable circumstances will soon become evident. The dominant dependence of $`\mathrm{\Gamma }_{bp}`$ upon $`\gamma `$ for the first three cases arises through the replacement: $`n_bn_b/\gamma ^3`$ (e.g., Walsh walsh (1980); Krishan 1999), which effectively modifies $`\zeta `$, which is defined as the density ratio at non-relativistic relative velocities. In Cases (2) and (3) we must also write $`v_b/c=(\gamma ^21)^{1/2}/\gamma `$.
Now, for Cases 1, 2 and 3, respectively, we have:
$$\frac{d\gamma }{dt}=\chi _1\frac{(\gamma ^21)^{1/2}}{\gamma }2A_1,$$
(7)
with $`A_1=0.7\zeta ^{1/3}\omega _{pe}8\times 10^7\zeta _5^{1/3}n_{e,10}^{1/2}`$, where the common notation, $`X_n=X/10^n`$, has been employed so that the physical parameters will be of order unity;
$$\frac{d\gamma }{dt}=\chi _1\frac{(\gamma ^21)^{1/2}}{\gamma }2\frac{(\gamma ^21)}{\gamma ^4}A_2,$$
(8)
with $`A_2=0.5(c/v_{Tb})^2\zeta \omega _{pe}3\times 10^8\eta _{b,2}^2\zeta _5n_{e,10}^{1/2}`$, where we have now defined $`\eta _bv_{Tb}/c0.01`$;
$$\frac{d\gamma }{dt}=\chi _1\frac{(\gamma ^21)^{1/2}}{\gamma }\frac{2\gamma A_3}{(\gamma ^3\gamma )^{1/2}},$$
(9)
with $`A_3=\zeta ^{1/2}(v_{Te}/c)\omega _{pe}2\times 10^5\zeta _5^{1/2}\eta _{e,2}n_{e,10}^{1/2}`$, where now, $`\eta _ev_{Te}/c0.01`$.
Under any of these situations we have $`\chi _1=5.2\times 10^1n_{e,10}T_{e,6}^1`$ with our definition of $`E_D`$ (which is slightly larger than that of dVK, thereby only strengthening our argument). For any plausible initial value of $`\gamma 1`$ the different dependences of Eqns. (7โ9) upon $`\gamma `$ are not important. What is important is that $`A_1,A_2,A_3\chi _1\chi _2`$; i.e., the energy loss term arising from any form of the BPI completely dominates over the energy gain term from direct electric field acceleration.
We now consider Case 4, where equipartition is not established. Under these circumstances, the growth of the Langmuir waves for the fastest initial beam situation, Case 1, is arrested by the trapping of the beam electrons. In this case, the ratio $`\alpha `$ is eventually given by the saturated value (Melrose melrose (1986); Krishan krishan (1999)), $`\alpha =9/2[n_b/(2n_p\gamma ^3)]^{2/3}`$, and it can be a rather small number that reduces the loss rate significantly. This gives a chance for the situation envisioned by dVK to occur. In addition, $`\alpha `$ initially can start out below the saturation value as it arises from thermal fluctuations, and thus it could allow an initial thermal runaway. The detailed spatial and temporal structure of the reconnection sites will determine if this initial acceleration can play a significant role.
In spite of these uncertainties, we can obtain a reasonable estimate of the influence of BPI in the situation where equipartition is not established. We again consider all three cases discussed above, but we now include electron trapping and assume $`\alpha `$ to take the saturation value. Here the competition between the IC losses represented by $`\chi _2`$ and the BPI losses represented by the Melrose $`\alpha `$ has to be considered carefully.
The inverse Compton term increases with $`\gamma `$ whereas the $`\alpha `$ factor modifying the BPI term decreases with $`\gamma `$. Thus demanding that the BPI term is smaller than the IC term fixes the minimum value of $`\gamma `$ necessary to validate the dVK proposal. A detailed calculation yields the results for the three cases as follows: case (1a), $`\gamma _{min}=54`$; case (2a), $`\gamma _{min}=21`$; case (3a), $`\gamma _{min}=9.16`$. Thus it is clear that only in case (3a), is it likely that the IC term dominates and hence the dVK proposal is valid. This requires rather special conditions for the flare models to work.
This type of runaway acceleration has been observed in the laboratory under specific circumstances which lead to a very weak beam plasma instability. In laboratory experiments, the runaway electrons are observed detached from the main body of the plasma, as for example in a stellarator. If the runaway electrons hit the tungsten aperture, they generate X-rays which can be detected. Provided the conditions are right, the runaway electrons undergo instabilities producing plasma oscillations which then couple to the ions. This principle is applied in the design of some electron tube oscillators (Rose & Clark rose (1961)). Thus the runaway electrons can stably propagate under certain circumstances, but will be affected by a BPI if they do satisfy the conditions for it. These conditions are essentially on the velocity of the beam and its thermal spread, as we have already discussed for the first three cases above.
Often it is found that a regime of strong Langmuir waves is quickly reached and these waves are further subjected to modulational instabilities. Thus, different saturation mechanisms operate at different stages of the development of the instability, depending on beam plasma parameters. However, under the circumstances and parameters proposed by dVK, the damping is severe.
## 3 Discussion and Conclusions
We thus conclude that the mechanism proposed by dVK should not generally work unless much greater densities are possible in the coronae at the same time that the temperatures are lower, since $`\chi _1`$ rises faster with $`n_p`$ than does any form of $`\mathrm{\Gamma }_{bp}`$, and declines faster with temperature. While denser coronae should be available around the accretion disks in X-ray binaries, the ambient temperatures will also be a good deal higher, so we cannot suggest a physically interesting situation where the BPIs do not dominate. If one could somehow begin with very large $`\gamma `$ values, then the growth rate of the beam-plasma instabilities are reduced. For Case 1 this does not help, and no solutions for large $`\gamma `$ are possible; however, for Cases 2 and 3, the relativistic decreases in the BPI rates are so substantial that high asymptotic $`\gamma `$ values are allowed. This is also true for Case 4, where the saturation reduces the effectiveness of the BPI; however, even then the BPI can prevent much acceleration unless the beam already starts with a substantial value of $`\gamma `$ or has such a low density in comparison to the ambient medium that it could not carry significant power. Moreover, we see no way to achieve these initially high $`\gamma `$ values: that is what the dVK mechanism was supposed to accomplish, but now appears to be incapable of achieving.
Filamentation, which could produce denser beam fragments, could play a role by raising $`\zeta `$ locally. If any analogy can be drawn with solar flares, then the presence of rapid irregularities within the Type 3 radio bursts strongly indicates that the flux tubes are filamentary during the acceleration phase (e.g., Vlahos & Raoult vlahos (1995)). However, this possibility is still insufficient to salvage this mechanism for AGN coronae, since even with $`\zeta 1`$ the ratios of $`A_{1,2,3}/\chi _1>1`$. In Case 4, where saturation is important in principle, the large value of $`\zeta `$ implies that $`\alpha 1`$ too (for initial $`\gamma 1`$) so the loss term still would dominate.
Nonetheless, even with much of the energy going into wave turbulence, as we have argued, significant IC emission can be possible. This is because (as pointed out by the referee) trapping and other nonlinear effects can roughly heat the electrons up to $`kT_ee\varphi `$, with $`\varphi `$ the electrostatic amplitude of the waves. Since the energy gain term (the first on the RHS of Eq. ) is essentially a constant, these โthermalized/trappedโ electrons can attain nearly the same energy as in the dVK picture. However this energy will not be in the form of a beam, as argued by dVK, but rather, will be present in an isotropic distribution. Then the IC process still works, and one of the points made by dVK, that much of the energy is lost by IC hard X-rays instead of โsoftโ X-rays from material evaporated from the disk, can remain valid, as already noted at the end of ยง1. In order to see if the inverse Compton losses actually dominate, detailed computations of these effects should be undertaken under various circumstances.
It is well known that in the case of the solar corona, the directly accelerated beams should be thermalized within a very short time through BPI (e.g., Sturrock sturrock (1964)). In the standard picture, this produces Langmuir waves which then manifest themselves as various types of radio bursts if non-linear effects or transport from faster to slower electrons within the beam could dominate (e.g., Vlahos & Raoult vlahos (1995)). However, energetic electrons have been observed in satellite measurements in near-earth orbit, and the outstanding question of the maintenance of these beams through their propagation from the sun to the earth has given rise to more complex models involving complex profiles of the electron beams (Vlahos & Raoult vlahos (1995)). Instead of producing X-ray flares via a primary process as proposed by dVK, these secondary processes involving energy input to the plasma could contribute to variability in the radio band.
###### Acknowledgements.
We thank the anonymous referee for pointing out the incompleteness of our analysis in the original version. This work was supported in part by NASA grant NAG 5-3098 and RPI and Strategic International Initiative funds at GSU.
# Collisional versus collisionless dark matter
## 1. Introduction
The nature of dark matter is still far from being resolved. Primordial nucleosynthesis and observational data suggest that the baryonic material accounts for just a fraction of the matter density in the universe. Fundamental particles remain the most likely candidate for the dark matter, and much effort has been devoted to the study of weakly interacting, collisionless dark matter (CDM) (e.g., Davis et al. 1985). However, the hierarchical gravitational collapse of cold collisionless particles leads to dense, singular dark matter halos, a result that is central to several fundamental problems with this model on small scales (e.g. Hogan & Dalcanton 2000 and references therein).
It may be possible to solve the current problems with CDM by appealing to extreme astrophysical processes. Alternatively, we can explore other dark matter candidates that behave differently on non-linear scales. One possibility is strongly self-interacting dark matter (hereafter SIDM). Originally proposed to suppress small scale power in the standard CDM model (Carlson et al. 1992, Machaceck et al. 1994, de Laix et al. 1995), SIDM was recently revived by Spergel & Steinhardt (1999) to solve some of the outstanding problems with CDM. The behaviour of this component depends on the particles' collisional cross-section. Large cross-sections imply short mean free paths, so that the dark matter can be described as a fluid that does not cool but can shock-heat. Particles with a mean free path of order the scale length of a dark matter halo offer the possibility of conductive heat transfer to the halo cores (Spergel & Steinhardt 1999). In this Letter we contrast the dynamics and structure of "halos within halos" between collisional and collisionless dark matter and compare predictions with current observational constraints.
## 2. Simulating the structure of SIDM halos
In this section we present the first numerical calculations of the structure of dark matter halos in which the particles have a large interaction cross-section. Self-interacting dark matter behaves like a collisional gas and its evolution can be simulated using standard computational fluid dynamics techniques. We model the collisional dark matter fluid by approximating its behaviour as an ideal gas with ratio of specific heats 5/3. We use the smoothed-particle hydrodynamics (SPH) code Hydra (Couchman et al. 1995) to follow the hierarchical growth of a massive dark matter halo. For added confidence in the robustness of key results, we perform independent collapse tests using an evolution of the Benz-Navarro SPH code (cf. Gelato & Sommer-Larsen 1999).
Our cosmological initial conditions were adapted from the "cluster comparison" simulation (Frenk et al. 1999), in which a massive dark matter halo forms within a 64 Mpc box of a critical density universe. (We adopt $`H_0=50\text{km}\text{s}^{-1}\text{Mpc}^{-1}`$ throughout.) We carry out two simulations; the first is CDM plus 10% non-radiative gas, and the second is 100% non-radiative gas (i.e., SIDM). The particle mass is approximately $`8.6\times 10^9M_{\odot }`$ and the effective force resolution is 0.3% of the virial radius of the final cluster $`r_{\mathrm{vir}}=2.7`$ Mpc (see Figure 1).
The CDM run behaves as expected and as characterised by many previous authors (e.g., Barnes & Efstathiou 1987, Frenk et al. 1999). One interesting point to highlight from this and similar simulations is that the gas ends up with a shallower density profile than the dark matter (cf. Figure 2.2). This is due to energy transfer between the two components and the fact that the entropy of the gas can increase through shocks that occur during the gravitational collapse. On large scales the SIDM run is similar to the CDM run although we note that the filaments appear narrower. On non-linear scales the two models behave very differently and we now discuss the salient features in more detail.
### 2.1. Density profiles
The final density profile of the most massive SIDM halo is shown next to its collisionless counterpart in Figure 2.2. This halo has more than $`10^5`$ particles within its virial radius. The profile is close to a singular isothermal sphere with slope $`\rho (r)\propto r^{-2}`$, even in the very central region. The hierarchical collapse imparts thermal energy to the particles, which leads to a small amount of pressure support; however, this is not sufficient to flatten their inner profiles.
To check these results we performed 3D spherical collapses of power-law spheres with zero initial kinetic energy and density profiles $`\rho (r)\propto r^n`$ with $`n=-1,0,+1`$. We found consistent results with 100 and 5000 particles, indicating that the singular profile in the cosmological SIDM simulation is *not* purely an artifact of the high-redshift progenitor collapses being inadequately resolved. The collapse with $`n=-1`$ leads to a singular spherical isothermal structure. In this case the central particles are not strongly shocked and stay at a low entropy. The $`n=0`$ and $`n=+1`$ collapses generate much higher entropies throughout the system. SIDM particles fall in from larger radii, achieving higher velocities, and significant thermal energy is generated during the collapse, resulting in a pressure supported constant density core. A similar point has been made by Bertschinger (1985). Our cosmological Gaussian fluctuations resemble the former collapse, which results in singular isothermal structures; what is needed is a mechanism that prevents low entropy material from surviving, such as we find in more violent collapses.
### 2.2. Ram pressure truncation and viscous drag
Halos of SIDM suffer ram-pressure truncation and ram-pressure/viscous drag; however, dynamical friction is largely suppressed in SIDM models since the bow shocks and the collisional nature of the fluid inhibit the formation of trailing density wakes. A good approximation is to adopt isothermal profiles for the substructure halo (subscript $`s`$) and parent halo (subscript $`p`$) such that $`\rho (r)=v^2/(4\pi Gr^2)`$. The ram pressure, $`\rho (r_p)v_p^2`$, can be equated to the force required to retain a shell of material at radius $`r_s`$ from the centre of the substructure halo, $`F\simeq m_sv_s^2/r_s`$. Thus the stripping radius at position $`r_p`$ in the parent halo is $`r_{\mathrm{strip}}=kr_p(v_s/v_p)^2`$, where $`k`$ is a constant of order $`\pi `$. This can be contrasted with the tidal radius of embedded isothermal halos, $`r_{\mathrm{tidal}}=r_p(v_s/v_p)`$. Therefore, substructure halos of SIDM will be stripped to substantially smaller sizes than their CDM counterparts.
It is also interesting to compare the timescale for a substructure halo to sink to the centre of a larger system due to hydrodynamical drag, $`F_{\mathrm{drag}}=\rho (r_p)v_p^24\pi r_s^2`$. For a circular orbit, $`L=r_pv_p`$, and the rate of specific angular momentum loss is $`dL/dt=-r_pF/m_s`$, therefore $`r_p^{-1}dr_p/dt=-F/(m_sv_p)`$. As the substructure is dragged deeper into the central potential, its radius decreases as calculated above and we can substitute for $`v_s`$. Thus we find $`dr_p/dt=-kv_p`$, such that the drag timescale is simply of order the crossing time, $`t_{\mathrm{drag}}=k^{-1}r_p/v_p`$. All SIDM substructure halos sink at a similar rate, independent of their mass, and on a timescale that is typically faster than that due to dynamical friction.
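These scalings are simple enough to evaluate directly. The following is a minimal sketch (our own illustration, not code from the simulations; the example satellite parameters are arbitrary) of the stripping radius, tidal radius and drag timescale for embedded isothermal halos:

```python
import math

KPC_IN_KM = 3.0857e16  # kilometres per kiloparsec

def r_strip(r_p, v_s, v_p, k=math.pi):
    """Ram-pressure stripping radius of an isothermal subhalo (units of r_p)."""
    return k * r_p * (v_s / v_p) ** 2

def r_tidal(r_p, v_s, v_p):
    """Tidal radius of an embedded isothermal subhalo (units of r_p)."""
    return r_p * (v_s / v_p)

def t_drag(r_p, v_p, k=math.pi):
    """Sinking time from radius r_p [kpc] at circular speed v_p [km/s], in Gyr,
       read off from dr_p/dt = -k v_p."""
    seconds = (r_p * KPC_IN_KM) / (k * v_p)
    return seconds / 3.156e16  # seconds -> Gyr

if __name__ == "__main__":
    # Illustrative subhalo: v_s = 30 km/s at r_p = 100 kpc in a v_p = 220 km/s halo
    print(r_tidal(100.0, 30.0, 220.0))   # ~13.6 kpc
    print(r_strip(100.0, 30.0, 220.0))   # ~5.8 kpc, well inside the tidal radius
    print(t_drag(100.0, 220.0))          # ~0.14 Gyr, of order the crossing time
```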
### 2.3. Orbital and velocity bias
These results have fascinating implications for biasing and the survival of substructure within dense environments. In dynamically old objects, such as galaxy halos, there may have been time for most of their substructure to sink to the centre. Any surviving substructure that passes close to the Galactic disk will be stripped to a negligible mass; therefore disk heating is not a problem in SIDM models. Hydrodynamical destruction may be happening to the Sagittarius dwarf right now: its current SIDM halo radius would be approximately 100 pc. We may also expect that galaxies orbiting through the central regions of rich clusters will have lost most of their dark matter halos. Younger systems, such as galaxy clusters, have only had sufficient time to concentrate and bias their "satellites" towards the central regions. SIDM satellites suffer significant velocity bias due to drag: an analysis of the 20 most massive satellites within the largest dark matter halo yields $`\sigma _{_{\mathrm{SIDM}}}/\sigma _{_{\mathrm{CDM}}}=0.85`$.
The orbits of the Milky Way's satellites with known proper motions are surprisingly circular (e.g., Grebel et al. 1998, van den Bosch et al. 1999), whereas circular orbits are rare in CDM models (Ghigna et al. 1998). We find that the anisotropy parameter for SIDM satellites is $`\beta _{_{\mathrm{SIDM}}}=0.5`$, compared with $`\beta _{_{\mathrm{CDM}}}=0.32`$ (where $`\beta =v_t^2/(v_t^2+v_r^2)`$), which results from the efficient angular momentum loss of satellites at pericentre. SIDM may also account for the "Holmberg-Zaritsky" effect (Holmberg 1969, Zaritsky et al. 1997). The angular momentum of SIDM halos is re-distributed differently than in the CDM halos, leading to a rotationally flattened central core. The baryons are most likely to dissipate into this plane, which aligns with the large scale filamentary structure. It is material that flows from these cold filaments into the larger halos that spins up the dark matter: satellites infalling along this "special" plane will rapidly sink once they make contact with the SIDM galaxy halos. Furthermore, those satellites sinking in the retrograde direction to the parent halo's angular momentum will be preferentially destroyed due to the enhanced drag, which scales as $`v^2`$ (cf. Figure 1b).
### 2.4. Halo shapes
The shapes of dark matter halos provide another clear discriminant between SIDM and CDM. The typical ratio of short to long axis for CDM halos is 0.5 with a log-normal distribution (Barnes & Efstathiou 1987). Figure 2.4 shows the ratio of short to long axis, $`c/a`$, and intermediate to long axis, $`b/a`$, as a function of radius for a well resolved halo in the simulation. The virialised part of the halo is rotationally flattened into an oblate shape such that $`\epsilon _{\mathrm{max}}\simeq 0.2`$. This is typical of the other SIDM halos which are generally flattened in the range $`0.0<\epsilon <0.2`$. For comparison we also show the shape of the same halo in the collisionless CDM simulation, which has a prolate configuration with $`c/a\simeq b/a\simeq 0.6`$ within $`r_{\mathrm{vir}}`$.
Analyses of polar ring galaxies and X-ray isophotes tend to give flattened dark matter potentials, whereas techniques that use disk flaring and the precession of warps yield spherical mass distributions (Olling & Merrifield 1998). Ultimately, gravitational lensing will resolve this issue, but for now we note that a lensing study of CL0024+1645 constrains the asymmetry of the projected mass distribution to be less than 3% (Tyson et al. 1998). With the notion that collisional halos should be spherical, Miralda-Escude (2000) argued that the cluster MS2137-23 rules out SIDM, since analysis of its gravitational arcs demonstrates that its mass distribution must be flattened such that $`\epsilon >0.1`$ in the central region. At the moment, SIDM and CDM are both consistent with these data.
### 2.5. The extent of halos within halos
The dwarf satellites of the Milky Way have internal velocities of order 10-30 $`\text{km}\text{s}^{-1}`$, that in isolation would extend to 10-30 kpc but are tidally limited according to their orbits within the Milky Way's potential. Numerical simulations confirm this simple expectation (Ghigna et al. 1998). For example, the dark matter halo surrounding the Carina satellite would be truncated to $`r_{\mathrm{tidal}}\approx r_{\mathrm{peri}}(v_{_{\mathrm{Carina}}}/v_{_{\mathrm{MW}}})\approx 2.7`$ kpc (for a pericentric distance $`r_{\mathrm{peri}}=50`$ kpc) at its current position. In an SIDM universe, the halo of Carina would be reduced to a size $`r_{\mathrm{strip}}\approx 400`$ pc.
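As a rough cross-check of these numbers (our arithmetic, assuming a circular velocity of about 12 km s<sup>-1</sup> for Carina and 220 km s<sup>-1</sup> for the Milky Way), both truncation radii follow directly from the formulae of §2.2:

```python
import math

v_car, v_mw, r_peri = 12.0, 220.0, 50.0  # km/s, km/s, kpc (assumed values)

r_tid = r_peri * (v_car / v_mw)                 # tidal radius, kpc
r_str = math.pi * r_peri * (v_car / v_mw) ** 2  # ram-pressure stripping radius, kpc

print(f"r_tidal ~ {r_tid:.1f} kpc")       # ~2.7 kpc, as quoted
print(f"r_strip ~ {1e3 * r_str:.0f} pc")  # ~470 pc, i.e. ~400 pc to this accuracy
```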
Observations of stars escaping from satellites constrain the extent of their dark matter halos (Moore 1996, Burkert 1997). Tidal streams have recently been spectroscopically confirmed for Carina (Majewski et al. 1999) and are also claimed for Draco and Ursa Minor (Irwin & Hatzidimitriou 1993). These observations imply that the dark matter extends only as far as the optical radii, about 300 parsecs for all of these satellites, much smaller than the sizes expected if they had halos of CDM.
Similarly, the dark matter halos of cluster galaxies are truncated by the global cluster potential and their sizes can be constrained by quantifying their effects on strongly and weakly lensed images of background galaxies. Natarajan et al. (1999) have analysed several of the clusters imaged by the Hubble Space Telescope and claim that the dark matter halos of bright cluster galaxies are severely truncated to between 15 and 30 kpc. These galaxies have typical internal velocity dispersions of $`150\text{km}\text{s}^{-1}`$ and sample the projected central 500 kpc region of the clusters (else they wouldn't lie in the HST frames). Thus we expect $`r_{\mathrm{tidal}}\approx `$ 30-60 kpc from gravitational stripping, but $`r_{\mathrm{strip}}\approx `$ 10-30 kpc from maximal collisional stripping.
## 3. Discussion
The properties of dark matter halos of strongly interacting particles are markedly different from their collisionless counterparts. SIDM halos are close to spherical with a modest degree of rotational flattening. Observations of halo shapes cannot currently distinguish between the models examined here; however, future lensing observations will determine if SIDM is a viable dark matter candidate. Halos within halos suffer ram-pressure truncation that decreases their sizes to less than the tidal radius. Current observational data on galactic halos in clusters and satellite galaxies in the Galactic halo are naturally reproduced in SIDM models: the extent of Carina's halo is an order of magnitude smaller than predicted by CDM. Ram-pressure drag creates significant velocity and orbital bias in the substructure halos, which sink on a short timescale, of order the crossing time, independent of their mass. Another positive feature of SIDM is the ability to produce satellite systems on near circular orbits, which are very rare in CDM models.
Both CDM and SIDM with a large cross-section fail to reproduce observed rotation curves of dwarf and LSB galaxies. We have seen that the final density profiles are sensitive to the shape of the initial fluctuations: more violent collapses end up with constant density cores. Alternatively, SIDM with a mean free path between kiloparsec and megaparsec scales may solve this problem (Spergel & Steinhardt 1999). In this case, particles could transfer heat to the cold central regions that occur in standard CDM collapses, creating an initial expanding phase with lower central density. It is not obvious that a cold core would be generated and maintained in a hierarchical scenario, since the dense mini-halos collapsing at high redshift may form singular isothermal structures. The dense substructure halos would rapidly sink to the centres of the parent halos by hydrodynamical drag, depositing high density, low entropy material and conserving isothermal profiles.
Simulating intermediate mean free paths is relatively straightforward. One technique would be to use the neighbour lists to choose random particles to collide (Burkert 2000). Simulations in progress will demonstrate whether SIDM can reproduce the observed rotation curves of dwarf galaxies. A solution to this problem will naturally resolve the abundance of dark matter substructure in the Galactic halo since substructure with shallow potentials would be easily disrupted.
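A schematic of such a neighbour-list collision scheme (a sketch under our own assumptions, not Burkert's implementation; the scattering probability, neighbour search and kernel are deliberately simplified) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_step(pos, vel, mass, sigma_over_m, h, dt):
    """Toy Monte Carlo SIDM scattering step.
       Each particle scatters with probability P = rho * (sigma/m) * v_rel * dt,
       against a randomly chosen neighbour within the smoothing length h;
       collisions are elastic and isotropic in the pair's centre of mass."""
    n = len(pos)
    for i in range(n):
        # crude O(n) neighbour search; a real code would reuse SPH neighbour lists
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((d < h) & (d > 0))[0]
        if nbrs.size == 0:
            continue
        j = rng.choice(nbrs)
        v_rel = np.linalg.norm(vel[i] - vel[j])
        rho = mass * nbrs.size / (4.0 / 3.0 * np.pi * h**3)
        if rng.random() < rho * sigma_over_m * v_rel * dt:
            v_cm = 0.5 * (vel[i] + vel[j])
            u = rng.normal(size=3)
            u /= np.linalg.norm(u)            # random direction on the sphere
            vel[i] = v_cm + 0.5 * v_rel * u   # equal particle masses assumed
            vel[j] = v_cm - 0.5 * v_rel * u
    return vel
```

Momentum and kinetic energy are conserved pair by pair, so the scheme interpolates between the collisionless ($`\sigma /m0`$) and fluid (short mean free path) limits discussed above.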
BM would like to thank Marc Davis for many discussions of the astrophysical consequences of strongly interacting dark matter while a NATO fellow in Berkeley and the Royal Society for support. VQ is a Marie Curie research fellow (grant HPMF-CT-1999-00052). Computations were carried out as part of the Virgo consortium.
# BILEPTON RESONANCE IN ELECTRON-ELECTRON SCATTERING
PAUL H. FRAMPTON
Department of Physics and Astronomy,
University of North Carolina, Chapel Hill, NC 27599-3255
E-mail: frampton@physics.unc.edu
## Abstract
Theoretical background for bileptonic gauge bosons is reviewed, both the SU(15) GUT model and the 3-3-1 model. Mass limits on bileptons are discussed coming from $`e^+e^{}`$ scattering, polarized muon decay and muonium-antimuonium conversion. Discovery in $`e^{}e^{}`$ at a linear collider at low energy (100 GeV) and high luminosity ($`10^{33}/cm^2/s`$) is emphasised.
Introduction.
It is a stunning historical fact that $`e^{}e^{}`$ collisions have never been studied at a center of mass energy above 1.12 GeV as published in 1971 by Richter et al. There were plans to explore $`e^{}e^{}`$ at DESY but these were abandoned when money ran out.
The three large projects in HEP for the US (and internationally) for the foreseeable future are: NLC, MC and VLHC. Of these the NLC is for the first decade of the twenty-first century; the other two are for the second decade. The NLC is presently a multi-billion dollar project primarily aimed at $`e^+e^{}`$.
A topic of this workshop is: should it also have $`e^{}e^{}`$ capability?
Why has $`e^{}e^{}`$ been so neglected? Firstly, $`e^+e^{}`$ is where a $`Z^{\prime }`$ can be found - often cited as the most conservative extension of the Standard Model (SM). By contrast $`e^{}e^{}`$ is an exotic, empty channel because it has double electric charge and lepton number $`L=2`$. Surely, $`e^{}e^{}`$ would allow only checks of higher-order quantum electrodynamics. But physics is an experimental science!
$`e^{}e^{}`$ Resonance.
Such a resonance must have $`L=2`$ and $`Q=-2`$. It must be a boson. For a spin-zero state, a doubly-charged Higgs scalar, the coupling is a free parameter and is generically small. For a spin-one gauge boson, the coupling is large and prescribed. Bilepton gauge bosons give a pronounced peak at $`s=M^2`$. But, as our main emphasis here, the resonance tail is detectable at much lower energy.
Bilepton gauge bosons were first suggested in the context of SU(15) grand unification.
First recall that in $`SU(5)`$ grand unification with families each in $`5+\overline{10}`$ the reason for $`B`$ violation is that the second rank tensor $`\overline{10}`$ has indefinite $`B`$ and $`L`$ quantum numbers.
If $`SU(5)`$ had fermions only in the $`5`$ then $`B`$ and $`L`$ would necessarily be conserved perturbatively.
The presence of the $`\overline{10}`$ is what causes the indeterminacy of $`B`$ and $`L`$ and allows mediation of proton decay in the gauge sector.
Since proton decay remains elusive, the idea in $`SU(15)`$ is to prohibit it in the gauge sector. The 15 helicity states in each family are assigned to a $`15`$ of $`SU(15)`$, whereupon each gauge boson has definite $`B`$ and $`L`$, according to the pair of fundamental fermions to which it couples.
The first family is assigned to:
$$15=(u_L^R,u_L^G,u_L^B,d_L^R,d_L^G,d_L^B;\overline{u}_L^R,\overline{u}_L^G,\overline{u}_L^B,\overline{d}_L^R,\overline{d}_L^G,\overline{d}_L^B;e_L^+,\nu _{eL},e_L^{})$$
and similarly for the second and third families.
It is clear that all of the 224 gauge bosons of $`SU(15)`$ have definite $`B`$ and $`L`$.
Anomaly cancellation is by mirror fermions - disfavored aesthetically but not phenomenologically.
The pattern of spontaneous symmetry breaking is:
$$SU(15)\stackrel{M_G}{}SU(12)_q\times SU(3)_l$$
$$\stackrel{M_B}{}SU(6)_L\times SU(6)_R\times U(1)_B\times SU(3)_l$$
$$\stackrel{M_A}{}SU(3)_C\times SU(2)_L\times U(1)_Y$$
In the breaking at $`M_A`$ color $`SU(3)_C`$ is embedded in $`SU(6)_L\times SU(6)_R`$ as $`(3+3,1)+(1,\overline{3}+\overline{3})`$.
$`SU(2)_L`$ is embedded in $`SU(6)_L\times SU(3)_l`$ with $`6_L=3(2)_L`$ and $`3_L=2_L+1_L`$
$`U(1)_Y`$ is contained in $`SU(6)_R\times U(1)_B\times SU(3)_l`$ according to:
$$Y=\sqrt{3}\mathrm{\Lambda }+\sqrt{\frac{2}{3}}B+\sqrt{3}\mathcal{Y}$$
with $`\mathrm{\Lambda }`$, $`B`$ and $`\mathcal{Y}`$ generators of $`SU(6)_R`$, $`U(1)_B`$ and $`SU(3)_l`$, respectively, normalized as $`SU(15)`$ matrices with
$$Tr(\mathrm{\Lambda }^a\mathrm{\Lambda }^b)=2\delta ^{ab}.$$
Explicitly, these normalized $`SU(15)`$ generators are
$$\mathrm{\Lambda }=\frac{1}{\sqrt{3}}\mathrm{diag}(0,0,0,0,0,0,-1,-1,-1,1,1,1,0,0,0)$$
$$B=\sqrt{\frac{3}{2}}\mathrm{diag}(\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{1}{3},\frac{1}{3},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3},0,0,0)$$
$$\mathrm{and}\mathcal{Y}=\frac{1}{\sqrt{3}}\mathrm{diag}(0,0,0,0,0,0,0,0,0,0,0,0,2,-1,-1)$$
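The sign pattern above is fixed by requiring traceless generators and the correct hypercharges. The following short check (ours, assuming the convention $`Q=T_3+Y/2`$ for the hypercharge values) verifies the normalization and that $`Y`$ reproduces the standard-model assignments:

```python
import numpy as np

s3 = np.sqrt(3.0)

# Diagonal generators as reconstructed above:
Lam = np.diag([0]*6 + [-1, -1, -1, 1, 1, 1] + [0]*3) / s3
B   = np.sqrt(1.5) * np.diag([1/3]*6 + [-1/3]*6 + [0]*3)
Yl  = np.diag([0]*12 + [2, -1, -1]) / s3

for M in (Lam, B, Yl):
    assert abs(np.trace(M)) < 1e-12            # traceless SU(15) generator
    assert abs(np.trace(M @ M) - 2.0) < 1e-12  # Tr(Lambda^a Lambda^b) = 2 delta^ab

Y = s3 * Lam + np.sqrt(2/3) * B + s3 * Yl
# Hypercharges in the order of the 15 above: q_L sextet, ubar, dbar, (e+, nu, e-)
expected = [1/3]*6 + [-4/3]*3 + [2/3]*3 + [2, -1, -1]
assert np.allclose(np.diag(Y), expected)
print("normalization and SM hypercharges verified")
```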
RENORMALIZATION GROUP
$$\mu d\alpha _i(\mu )/d\mu =B_i\alpha _i^2(\mu )$$
with matching conditions, at $`M_A`$:
$$\alpha _{3C}^1(M_A)=\frac{1}{2}\alpha _{6L}^1(M_A)+\frac{1}{2}\alpha _{6R}^1(M_A)$$
$$\alpha _{2L}^1(M_A)=\frac{3}{4}\alpha _{6L}^1(M_A)+\frac{1}{4}\alpha _{3l}^1(M_A)$$
$$\alpha _{1Y}^1(M_A)=\frac{9}{20}\alpha _{6R}^1(M_A)+\frac{1}{10}\alpha _B^1(M_A)+\frac{9}{20}\alpha _{3l}^1(M_A)$$
at $`M_B`$:
$$\alpha _{6L}(M_B)=\alpha _{6R}(M_B)=\alpha _B(M_B)=\alpha _{12q}(M_B)$$
and at $`M_G`$:
$$\alpha _{12q}(M_G)=\alpha _{3l}(M_G)=\alpha _{15}(M_G)$$
The results can be tabulated, as shown in this Table of typical values for the three breaking scales of $`SU(15)`$
| $`M_A(GeV)`$ | $`M_B(GeV)`$ | $`M_G(GeV)`$ |
| --- | --- | --- |
| $`250`$ | $`4.0\times 10^6`$ | $`6.0\times 10^6`$ |
| $`500`$ | $`5.8\times 10^6`$ | $`8.9\times 10^6`$ |
| $`10^3`$ | $`8.3\times 10^6`$ | $`1.3\times 10^7`$ |
| $`2\times 10^3`$ | $`1.2\times 10^7`$ | $`1.9\times 10^7`$ |
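The structure behind this Table is a chain of one-loop runnings joined by the matching conditions above. The sketch below is purely illustrative scaffolding: the one-loop coefficients $`B_i`$ depend on the Higgs and mirror-fermion content and are not quoted in the text, so the values used here are placeholders, not those behind the Table.

```python
import numpy as np

def run_inv_alpha(inv_alpha0, B, mu0, mu):
    """One-loop running: mu d(alpha)/d(mu) = B alpha^2
       =>  1/alpha(mu) = 1/alpha(mu0) - B ln(mu/mu0)."""
    return inv_alpha0 - B * np.log(mu / mu0)

def match_at_MA(inv_a6L, inv_a6R, inv_aB, inv_a3l):
    """The matching conditions at M_A quoted above, for inverse couplings."""
    inv_a3C = 0.5 * inv_a6L + 0.5 * inv_a6R
    inv_a2L = 0.75 * inv_a6L + 0.25 * inv_a3l
    inv_a1Y = 0.45 * inv_a6R + 0.10 * inv_aB + 0.45 * inv_a3l
    return inv_a3C, inv_a2L, inv_a1Y

# Placeholder one-loop coefficients (NOT the model's actual values) and the
# Table row with M_A = 500 GeV:
B12q, B3l, B6L, B6R, BB = -0.8, 0.3, -0.5, -0.5, 0.2
MG, MB, MA = 8.9e6, 5.8e6, 500.0   # GeV
inv_a15 = 25.0                     # assumed unified inverse coupling at M_G

inv_12q  = run_inv_alpha(inv_a15, B12q, MG, MB)   # SU(12)_q down to M_B
inv_3l_B = run_inv_alpha(inv_a15, B3l, MG, MB)    # SU(3)_l down to M_B
# At M_B: alpha_6L = alpha_6R = alpha_B = alpha_12q, then run down to M_A:
inv_6L = run_inv_alpha(inv_12q, B6L, MB, MA)
inv_6R = run_inv_alpha(inv_12q, B6R, MB, MA)
inv_B  = run_inv_alpha(inv_12q, BB,  MB, MA)
inv_3l = run_inv_alpha(inv_3l_B, B3l, MB, MA)
print(match_at_MA(inv_6L, inv_6R, inv_B, inv_3l))
```

In the actual analysis one inverts this chain: the measured SM couplings at $`M_A`$ are the inputs, and $`M_B`$ and $`M_G`$ are the outputs, as the Table indicates.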
There is one input parameter, say $`M_A`$.
$`M_B`$ and $`M_G`$ are outputs.
At low energies (of order $`M_A`$) the gauge bosons under $`SU(6)_L\times SU(6)_R\times U(1)_B\times SU(3)_l`$ are, with respect to the standard model:
$$35_L=(8,3)_0+(8,1)_0+(1,3)_0$$
$$35_R=2(8,1)_0+(8,1)_{\pm 1}+(1,1)_0+(1,1)_{\pm 1}$$
$$1_B=(1,1)_0$$
$$8_l=(1,3)_0+(1,1)_0+(1,2)_{\pm 3/2}$$
All are interesting, but the last-listed $`(1,2)_{\pm 3/2}`$ are the bileptonic gauge bosons, which can show up in Moller scattering (e.g., $`e^{}e^{}\to \mu ^{}\mu ^{}`$).
Clearly such bileptons are a general feature of the embedding
$$SU(2)_L\subset SU(3)$$
and have the electric charges $`(Y^{--},Y^{-})`$ ($`L=+2`$), with antiparticles $`(Y^{++},Y^{+})`$ ($`L=-2`$).
This feature of $`SU(15)`$ grand unification re-emerges in the $`331`$ model to which we now turn.
The $`331`$ model is more economical, and its anomaly cancellation is more elegant, compared to $`SU(15)`$.
To introduce the 3-3-1 model, the following are motivating factors:
1. Consistency of a gauge theory requires cancellation of all chiral anomalies. Such cancellation occurs for a quark-lepton family and is enough (almost) to fix all charges.
2. This does not explain $`N_f>1`$ but is sufficiently impressive to suggest that $`N_f=3`$ may be explicable by anomaly cancellation in an extension. This requires that the extended families have non-zero anomalies and that not all three families be treated similarly.
3. The third family is exceptional because of the top quark mass, and suggests a +1 +1 -2 cancellation.
4. There is such a -2 in the SM as the ratio of quark charges.
5. Extension of $`SU(2)_L`$ to $`SU(3)_L`$ gives bileptons with the same lepton couplings as in $`SU(15)`$.
For the 3-3-1 model the gauge group is:
$`SU(3)_C\times SU(3)_L\times U(1)_X`$
The first family quarks are assigned to
$$\left(\begin{array}{c}u\\ d\\ D\end{array}\right)_L\overline{u}_L\overline{d}_L\overline{D}_L$$
The triplet is a 3 of $`SU(3)_L`$.
The second family of quarks is assigned similarly:
$$\left(\begin{array}{c}c\\ s\\ S\end{array}\right)_L\overline{c}_L\overline{s}_L\overline{S}_L$$
The third family of quarks is assigned differently:
$$\left(\begin{array}{c}T\\ t\\ b\end{array}\right)_L\overline{T}_L\overline{t}_L\overline{b}_L$$
The triplet in this case is a 3\* of $`SU(3)_L`$.
The X quantum numbers of the triplets are equal to the electric charges of the central members. That is, for the three families of quarks, $`X=-\frac{1}{3},-\frac{1}{3},+\frac{2}{3}`$.
The leptons are assigned to 3\*'s as follows:
$$\left(\begin{array}{c}e^+\\ \nu _e\\ e^{}\end{array}\right)_L\left(\begin{array}{c}\mu ^+\\ \nu _\mu \\ \mu ^{}\end{array}\right)_L\left(\begin{array}{c}\tau ^+\\ \nu _\tau \\ \tau ^{}\end{array}\right)_L$$
These three antitriplets have $`X=0`$.
Let us see how anomalies cancel. Recall that anomaly cancellation is crucial in many situations of model-building beyond the standard model, e.g., chiral color and string theory.
The color anomaly $`(3_C)^3`$ cancels because QCD is vectorlike.
The anomaly $`(3_L)^3`$ is non-trivial. Taking $`N_C`$ colors and $`N_l`$ light neutrinos, the anomaly cancels only if $`N_C=N_l=3`$.
The remaining anomalies
$`(3_C)^2X,(3_L)^2X,X^3,X(T_{\mu \nu })^2`$
also all cancel.
In particular, each family has a non-zero anomaly for $`X^3`$, $`(3_L)^2X`$ and $`(3_L)^3`$; in each case the anomalies cancel proportionately to $`+1+12`$, as anticipated in the earlier discussion.
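This bookkeeping is easy to verify explicitly. The sketch below is our own check; the X charges of the conjugate singlets are not listed above, so we assume $`X=-Q`$ with the minimal-model exotic charges $`Q(D)=Q(S)=-4/3`$ and $`Q(T)=+5/3`$. It confirms that the per-family anomalies cancel in the ratio +1 +1 -2:

```python
from fractions import Fraction as F

# Left-handed fields per family: (X, signed SU(3)_L dim: +3 triplet / -3
# antitriplet / 1 singlet, colour multiplicity).  Conjugate-singlet X charges
# assume X = -Q, with Q(D) = Q(S) = -4/3 and Q(T) = +5/3 (our assumption).
fam12 = [(F(-1, 3), +3, 3),                                    # (u,d,D)_L or (c,s,S)_L
         (F(-2, 3), 1, 3), (F(1, 3), 1, 3), (F(4, 3), 1, 3),   # ubar, dbar, Dbar
         (F(0), -3, 1)]                                        # lepton antitriplet, X = 0
fam3  = [(F(2, 3), -3, 3),                                     # (T,t,b)_L antitriplet
         (F(-5, 3), 1, 3), (F(-2, 3), 1, 3), (F(1, 3), 1, 3),  # Tbar, tbar, bbar
         (F(0), -3, 1)]

def anomalies(fam):
    a3L3  = sum((d // 3) * c for X, d, c in fam)           # SU(3)_L^3
    a3L2X = sum(X * c for X, d, c in fam if abs(d) == 3)   # SU(3)_L^2 U(1)_X
    aX3   = sum(abs(d) * c * X**3 for X, d, c in fam)      # U(1)_X^3
    agrav = sum(abs(d) * c * X for X, d, c in fam)         # grav^2 U(1)_X
    return a3L3, a3L2X, aX3, agrav

per_family = [anomalies(f) for f in (fam12, fam12, fam3)]
print(per_family)   # e.g. the X^3 anomalies are (6, 6, -12): the +1 +1 -2 pattern
assert all(sum(col) == 0 for col in zip(*per_family))
```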
To break the symmetry requires several Higgs multiplets.
First, an $`X=+1`$ triplet $`\mathrm{\Phi }`$ with VEV $`<\mathrm{\Phi }>=(0,0,U)`$ breaks 331 to 321 and gives masses to the D, S and T quarks as well as to the gauge bosons $`Z^{\prime }`$ and $`Y`$. The scale $`U`$ sets the scale for the new physics.
Electroweak symmetry breaking requires two further triplets $`\varphi `$ and $`\varphi ^{}`$ with $`X=0`$ and $`X=1`$ respectively. Their VEVs give mass to d, s, t and to u, c, b respectively. The first VEV also gives a family-antisymmetric contribution to the charged leptons. To obtain a general mass matrix for charged leptons necessitates adding a sextet with $`X=0`$.
THE NEW PHYSICS SCALE U
There is a lower bound from precision electroweak data:
$`ZZ^{\prime }`$ mixing dictates $`M(Z^{\prime })>300`$ GeV.
FCNC limits give a similar bound. For FCNC it is crucial that the third family be the one treated asymmetrically. Otherwise the FCNC disagree with experiment.
UPPER BOUND ON U:
A bound on $`U`$ arises because the embedding of 321 in 331 requires $`\mathrm{sin}^2\theta <1/4`$: at $`\mathrm{sin}^2\theta =1/4`$ the coupling $`g_X`$ diverges. This fixes $`U<3`$ TeV using $`\mathrm{sin}^2\theta (M_Z)=0.231`$. Hence $`M(Y)`$ cannot be higher than 1.5 TeV.
LEP data:
The highest precision high-energy data is from LEP. It gives $`M(Y)>120`$ GeV.
The best lower bounds come from low energy experiments:
(1) Polarized muon decay:
$`M(Y^\pm )>230`$ GeV.
(2) Muonium-Antimuonium conversion:
$`M(Y^{\pm \pm })>850`$ GeV.
Just to recapitulate some of the points made at the beginning:
$`e^{}e^{}`$ collisions have never been studied above c.o.m. energy 1.12 GeV. An NLC should have $`e^{}e^{}`$ capability.
Accomplishment of $`e^{}e^{}`$ Collisions at NLC.
In the post-SSC era it is desirable to avoid a third comma in the cost C, i.e. $`C<\$1B`$.
How can this be achieved?
The cost of an NLC is roughly linear in the energy.
A 500 GeV NLC was costed last year at $7.9B, although I have been told informally that the cost might be lowered below $5B. Thus 100 GeV could come in below $1B.
Therefore the first fundable step could focus on luminosity rather than energy and be a 100 GeV machine with luminosity $`10^{33}/cm^2/s`$. This is sufficiently beyond LEP to give a Giga-Z and allows an opportunity to do new machine physics.
Acknowledgements
It is a pleasure to thank Clemens Heusch for organizing a pleasant meeting and the US Department of Energy for support under Grant No. DE-FG01-97ER41036.
Note Added
In a recent work \[P.H. Frampton and A. Rasin, UNC Report IFP-781-UNC (February 2000)\] we have updated the cross-section estimates for $`e^{}e^{}\to \mu ^{}\mu ^{}`$, which used the $`SU(15)`$ model. In the simpler 331-model the cross-section is about one order of magnitude higher than those results.
# The Extreme Compact Starburst in MRK 273
## 1 Introduction
Luminous infrared galaxies are the most numerous sources with luminosities $`\gtrsim 10^{11}L_{\odot }`$ in the nearby universe (Sanders and Mirabel 1996). The bulk of the luminosity from these sources is infrared emission from warm dust. A critical question concerning these sources is whether the dust is heated by an active nucleus or a starburst. Recent studies using near IR spectroscopy suggest that the dominant dust heating mechanism in most luminous infrared galaxies (80$`\%`$) is star formation (Genzel et al. 1998), although AGN heating may become significant for the highest luminosity sources ($`\gtrsim 10^{12.3}L_{\odot }`$; Veilleux et al. 1999). This question has taken on new significance due to the recent discovery of a population of luminous infrared galaxies at high redshift seen in deep sub-millimeter and millimeter imaging surveys. If these high $`z`$ sources are starbursts, then they may dominate the cosmic star formation rate at $`z>2`$ (Smail, Ivison, and Blain 1997, Barger et al. 1998, Hughes et al. 1998, Blain et al. 1999, Eales et al. 1999, Bertoldi et al. 1999).
The most direct evidence to date of a dominant starburst in a luminous infrared galaxy is the discovery of a population of radio supernovae in the nuclear regions of Arp 220 by Smith et al. (1998) using high resolution imaging at 1.4 GHz. Radio observations are unique in this regard, since they are unobscured by dust and allow for imaging with mas resolution. We have begun a program of imaging the radio continuum emission and HI 21cm absorption in luminous infrared galaxies using the Very Long Baseline Array and the Very Large Array at resolutions ranging from 1 to 100 mas. Results on the Seyfert 1 galaxy Mrk 231 have been presented in Carilli, Wrobel, and Ulvestad (1998), Taylor et al. (1999), and Ulvestad, Carilli, and Wrobel (1998). Those data revealed the presence of an AGN driven radio-jet source on pc-scales at the center of a (possibly star forming) gas disk with a diameter of a few hundred pc, with about half the radio continuum emission coming from the disk.
In this letter we present the results on Mrk 273 at z = 0.0377. Mrk 273 has an infrared luminosity of $`\mathrm{L}_{\mathrm{FIR}}=1.3\times 10^{12}\mathrm{L}_{\odot }`$ (as defined in Condon 1992), where we assume H<sub>o</sub> = 75 km s<sup>-1</sup> Mpc<sup>-1</sup>. The optical galaxy has been classified as a Seyfert 2, a LINER, and both (Baan et al. 1998, Colina, Arribas, and Borne 1999, Goldader et al. 1995), and it has a disturbed morphology on kpc-scales, with tidal tails indicating a merger event within the last 10<sup>8</sup> years (Knapen et al. 1998).
The nuclear regions in Mrk 273 on sub-arcsecond scales are complex, with a double nucleus on a scale of $`2^{\prime \prime }`$ seen in the near IR (Knapen et al. 1998, Majewski et al. 1993, Armus et al. 1990), and in the radio continuum (Ulvestad and Wilson 1984, Condon et al. 1991, Knapen et al. 1998, Coles et al. 1999). The most peculiar aspect of Mrk 273 is that only one of the nuclei (the northern source) is seen in both the radio continuum and the near IR. The southeast nucleus is detected in the radio continuum, but is very faint in the near IR, although there may be a faint blue "star cluster" at this position (Scoville et al. 2000). The southwest nucleus is seen in the near IR, but shows only very weak, extended radio continuum emission (Knapen et al. 1998). High resolution near IR imaging with the HST shows that both the north and southwest IR peaks are redder than the surrounding galaxy, and that the northern nucleus is redder than the southwestern nucleus (Scoville et al. 2000).
Imaging of CO emission from Mrk 273 shows a peak at the northern nucleus, with faint extended emission on scales of a few arcseconds (Downes and Solomon 1998). Downes and Solomon derive a molecular gas mass of $`1\times 10^9M_{\odot }`$ for the northern nucleus, and find that the CO is most likely in a disk with size $`<0.6^{\prime \prime }`$. From these data they conclude that the northern nucleus of Mrk 273 is an extreme compact starburst, with an IR luminosity of $`6\times 10^{11}L_{\odot }`$ emitted from a region $`<`$ 400 pc in diameter. This conclusion is supported by the $`0.2^{\prime \prime }`$ resolution images of the HI 21cm absorption presented in Coles et al. (1999), which reveal a velocity gradient along the major axis of the northern nucleus.
In this letter we present high resolution imaging (10 mas to 50 mas) of the HI 21cm absorption and radio continuum emission from Mrk 273. These data confirm the existence of a rotating gas disk with a diameter of 350 pc, and reveal a population of compact sources, possibly composed of luminous radio supernovae and/or nested radio supernova remnants.
## 2 Observations
Observations of Mrk 273 were made on May 31 and June 6, 1999 with the Very Long Baseline Array (VLBA), including the phased Very Large Array (VLA) as an element in the very long baseline array. The pass band was centered at the frequency of the neutral hydrogen 21cm line at a heliocentric redshift of z = 0.0377, or cz = 11300 km s<sup>-1</sup>. The total bandwidth was 16 MHz, using two orthogonal polarizations, 256 spectral channels, and 2 bit correlation. The total on-source observing time was 13.4 hrs.
Data reduction was performed using the Astronomical Image Processing System (AIPS) and AIPS++. Standard a priori gain calibration was performed using the measured gains and system temperatures of each antenna. The compact radio source J1337+550 was observed every 5 minutes, and this source was used to determine the initial fringe rates and delays. The source 3C 345 was used to calibrate the frequency dependent gains (band pass calibration). The source J1400+621 was used to check the absolute gain calibration. The results showed agreement of observed and expected flux densities to within 3$`\%`$.
After application of the delay and rate solutions, and band pass calibration, a continuum data set for Mrk 273 was generated by averaging off-line channels. This continuum data set was then used for the hybrid imaging process, which involves iterative imaging and self-calibration of the antenna-based complex gains (Walker 1985). The final iteration involved both phase and amplitude calibration with a 3 minute averaging time for phases and 15 minutes for amplitudes. The self-calibration solutions were applied to the spectral line data set. The spectral line data were then analyzed at various spatial and spectral resolutions by tapering the visibility data, and by smoothing in frequency. The continuum emission was subtracted from the spectral line visibility data using UVLIN. Images of the line and continuum data were deconvolved using the Clark โCLEANโ algorithm as implemented in IMAGR. For the radio continuum images we also employed the multi-resolution CLEAN algorithm as implemented in AIPS++ (Holdaway and Cornwell 1999). Results were consistent for all image reconstruction algorithms, and we present the naturally weighted Clark CLEAN continuum images in the analysis below. The full resolution of the naturally weighted images is 10 mas. We also present images at 50 mas resolution made using a Gaussian taper of the visibilities.
## 3 Results and Analysis
The 1.368 GHz continuum image of Mrk 273 at 50 mas resolution is displayed in Figure 1. The image shows that the northern nucleus is extended, with a major axis of $`0.5^{\prime \prime }`$ and a minor axis of $`0.3^{\prime \prime }`$. The region shows two peaks separated by $`0.11^{\prime \prime }`$. We designate the western peak N1 and the eastern peak N2. These two peaks can also be seen in near IR images of Mrk 273 (Knapen et al. 1998). The total flux density from this region is 86$`\pm `$9 mJy. The southeastern source, which we designate SE, is also extended over about $`0.3^{\prime \prime }`$, with a total flux density of 40$`\pm `$4 mJy.
Figure 2 shows the 1.368 GHz continuum images of the northern and southeastern nuclei of Mrk 273 at 10 mas resolution. The northern source is highly resolved, consisting of a diffuse component extending over $`0.5^{\prime \prime }`$, punctuated by a number of compact sources. Table 1 lists the positions and surface brightnesses at 10 mas resolution of the six sources with surface brightnesses $`\geq 0.5`$ mJy beam<sup>-1</sup>. Positions are relative to the peak surface brightness, corresponding to N1. The nominal position of N1 in Figure 2 is (J2000): $`13^h44^m42.119^s`$, $`55^o53^{\prime }13.48^{\prime \prime }`$, based on phase-referencing observations using the celestial calibrator J1337+550 with a 5 minute cycle time. Note that the minimum error in the absolute astrometry is 12 mas, as set by the uncertainty in the calibrator source position (see Wilkinson et al. 1998 and references therein). The true error after phase transfer is likely to be significantly higher than this (Fomalont 1995, Beasley and Conway 1995).
Given the incomplete Fourier spacing coverage for VLBI imaging, in particular for short spacings, it is possible that the CLEAN algorithm has generated spurious point sources when trying to deconvolve extended emission regions. Conversely, we cannot rule out the possibility that the extended emission is composed of mostly faint point sources. The use of multiresolution CLEAN mitigates these problems, and the sources listed in Table 1 all reproduce with essentially the same surface brightnesses for images made with the Clark CLEAN, multi-resolution CLEAN, and for images made with different visibility weighting schemes. The brightness temperatures of these sources are all $`\gtrsim 3\times 10^6`$ K, indicating non-thermal emission. The southeastern nucleus is also resolved, with high surface brightness emission occurring over a scale of 50 mas. We set a 4$`\sigma `$ limit of 0.14 mJy to any compact radio source associated with the southwestern peak (large cross in Figure 1) seen at near IR wavelengths (Knapen et al. 1998).
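The quoted brightness temperatures follow from the standard Rayleigh-Jeans conversion; a quick check (our own arithmetic, assuming an unresolved source filling the 10 mas beam):

```python
def t_brightness(s_mjy, nu_ghz, bmaj_as, bmin_as):
    """Rayleigh-Jeans brightness temperature [K] of a Gaussian beam:
       T_B ~ 1222 * S[mJy] / (nu[GHz]^2 * bmaj * bmin [arcsec^2])."""
    return 1222.0 * s_mjy / (nu_ghz**2 * bmaj_as * bmin_as)

# Faintest catalogued compact source: 0.5 mJy/beam in a 10 mas beam at 1.368 GHz
print(f"{t_brightness(0.5, 1.368, 0.010, 0.010):.2e} K")  # ~3.3e6 K
```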
Spectra of the HI 21cm absorption toward SE, N1, and N2 at 50 mas resolution are shown in Figure 3. The spectrum of SE shows a double peaked profile, with the two lines separated by 400 km s<sup>-1</sup>, each with a Full Width at Half Maximum (FWHM) of about 280 km s<sup>-1</sup>. There is marginal evidence that each component has velocity sub-structure, but the SNR of these data is insufficient to make a firm conclusion on this point. The peak optical depth of each line is about 0.12$`\pm `$0.02, and the implied HI column density in each component is then N(HI) = 6.4$`\pm `$1.1 $`\times `$10<sup>19</sup> $`\times `$ $`T_s`$ cm<sup>-2</sup>, where $`T_s`$ is the HI spin temperature in K.
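These columns follow from the standard optically thin HI relation; a quick check (our arithmetic, assuming Gaussian line profiles):

```python
def nhi_over_ts(tau_peak, fwhm_kms):
    """N(HI)/T_s in cm^-2 K^-1 for a Gaussian profile:
       N(HI) = 1.823e18 * T_s * integral(tau dv), integral ~ 1.064 * tau_peak * FWHM."""
    return 1.823e18 * 1.064 * tau_peak * fwhm_kms

print(f"{nhi_over_ts(0.12, 280):.1e}")  # ~6.5e19, matching the SE components
print(f"{nhi_over_ts(0.59, 160):.1e}")  # ~1.8e20, matching N2 below
```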
An interesting comparison is made with the MERLIN absorption spectra at $`0.2^{\prime \prime }`$ resolution toward the SE component (Coles et al. 1999). At this resolution, MERLIN detects 19 mJy of continuum emission, and shows a 3 mJy absorption line at about 11200 km s<sup>-1</sup>, and weaker absorption of about 1 mJy at 11400 km s<sup>-1</sup>. The VLBA data show a peak continuum surface brightness of 10 mJy beam<sup>-1</sup> at 50 mas resolution, and absorption line depths of 1 mJy at both velocities. This suggests that the absorption at 11200 km s<sup>-1</sup> is due to extended gas covering both the compact and extended continuum emitting regions, while the 11400 km s<sup>-1</sup> absorption is due to a small cloud ($`\sim 40`$ pc) covering only the high surface brightness continuum emission. Assuming 11200 km s<sup>-1</sup> indicates the systemic velocity of the gas at that location in the galaxy disk (Coles et al. 1999), then the higher velocity system would be infalling at 200 km s<sup>-1</sup>.
The spectrum of N2 shows a relatively narrow absorption line, with a FWHM = 160 km s<sup>-1</sup>, a peak optical depth of 0.59$`\pm `$0.06, and an HI column density of $`1.8\pm 0.2\times 10^{20}`$ $`\times `$ $`T_s`$ cm<sup>-2</sup>. The spectrum of N1 shows a broad, flat absorption profile with FWHM = 540 km s<sup>-1</sup>, with optical depths ranging from 0.1 to 0.4$`\pm `$0.04 across the line profile. Again, there is marginal evidence for a few narrower, higher optical depth components. The total HI column density is $`1.8\pm 0.3\times 10^{20}\times T_s`$ cm<sup>-2</sup>. The velocity range of the HI absorption toward N1 is comparable to that seen for the OH megamaser emission (Baan, Haschick, and Schmelz 1985, Staveley-Smith et al. 1987).
Figure 4 shows the position-velocity (P-V) diagram for the HI 21cm absorption along the major axis of the northern nucleus. There is a velocity gradient from east to west of about 450 km s<sup>-1</sup> across 300 mas, plus an apparent flattening of the velocity distribution at larger radii. The P-V distribution is confused somewhat by the broad absorption seen toward N1 (at position $`-90`$ mas in Figure 4). The east-west velocity gradient of the HI absorption across the northern source is consistent with results from MERLIN HI 21cm imaging at $`0.2^{\prime \prime }`$ resolution (Coles et al. 1999), and with the velocity field derived from CO emission observations at $`0.6^{\prime \prime }`$ resolution (Downes and Solomon 1998).
## 4 Discussion
The most significant result from our high resolution radio continuum imaging of Mrk 273 is that the emission from the northern nucleus extends over a region of $`0.3^{\prime \prime }\times 0.5^{\prime \prime }`$ (220$`\times `$370 pc), punctuated by a number of compact sources with flux densities between 0.5 and 3 mJy. This morphology resembles those of the starburst nuclei of NGC 253 and M82 (Ulvestad and Antonucci 1997, Muxlow et al. 1994), on a similar spatial scale. However, the total radio luminosity is an order of magnitude larger in Mrk 273. The physical conditions in this region are extreme, with a minimum pressure of 10<sup>-9</sup> dynes cm<sup>-2</sup>, and corresponding magnetic fields of 100 $`\mu `$G.
The 1.4 GHz radio continuum emission from nuclear starburst galaxies is thought to be primarily synchrotron radiation from relativistic electrons spiraling in interstellar magnetic fields, with the electrons being accelerated in supernova remnant shocks (Condon 1992, Duric 1988). The compact sources are then individual supernovae or supernova remnants, while the diffuse emission is thought to be from electrons that have diffused away from the supernova remnant shocks. Our high resolution images provide strong support for the hypothesis of Downes and Solomon (1998) that the northern nucleus of Mrk 273 is an extreme compact starburst, with a massive star formation rate of 60 $`M_{\odot }`$ year<sup>-1</sup>, as derived from the radio continuum luminosity (Condon 1992), and occurring in a region of only 370 pc diameter. From their detailed analysis of the CO emission from Mrk 273, Downes and Solomon (1998) propose that the star formation occurs in a disk with scale height of 21 pc and a total gas mass of $`1\times 10^9M_{\odot }`$.
The nature of the weak, compact radio continuum sources in Mrk 273 is not clear, but given the similarity in morphology with the starburst nuclei in M82 and NGC 253, it is likely that these sources are a combination of nested supernova remnants and/or luminous radio supernovae. These sources have radio spectral luminosities $`\sim 10^{28}`$ ergs s<sup>-1</sup> Hz<sup>-1</sup> at 1.4 GHz, which is an order of magnitude higher than the brightest radio supernova remnants seen in M82 (Muxlow et al. 1994), and are comparable in luminosity to the rare class of extreme luminosity radio supernovae characterized by SNe 1986J (Rupen et al. 1987) and 1979C (Weiler and Sramek 1988). A substantial population of such luminous supernovae has been discovered in the starburst nucleus of Arp 220 by Smith et al. (1998), who suggest that the high luminosities of those supernovae may indicate a denser local environment relative to typical supernovae, by a factor 3 or so (Chevalier 1984). If the compact sources in Mrk 273 are nested supernova remnants, then each would require 10 or more of the most luminous M82-type supernova remnants in a region less than 7 pc in size. Future high resolution imaging of Mrk 273 is required to clarify the nature of these compact sources.
It is possible that the brightest of the compact sources, coincident with N1, indicates the presence of a weak radio AGN. Supporting evidence for this conclusion is the broad HI absorption line observed toward N1. This component contributes only 3.8$`\%`$ to the total radio luminosity at 1.4 GHz of the northern nuclear regions.
From flattening of the radio spectrum between 1.6 and 5 GHz, Knapen et al. (1998) suggested that there may be a dominant, synchrotron self-absorbed radio-loud AGN in the northern nucleus of Mrk 273. The images presented herein clearly preclude this hypothesis. We feel a more likely explanation for the low frequency flattening is free-free absorption. We are currently analyzing images with sub-arcsecond resolution between 327 MHz and 22 GHz in order to determine the origin of this low frequency flattening.
The gas disk hypothesis for the northern nucleus of Mrk 273 is supported by the observed velocity gradient in the HI 21cm absorption along the major axis. The rotational velocity at a radius of 110 pc (half the 300 mas extent of the velocity gradient) is 280 km s<sup>-1</sup>, assuming an inclination angle of 53<sup>o</sup>. Assuming Keplerian rotation, the enclosed mass inside this radius is then 2$`\times `$10<sup>9</sup> $`M_{\odot }`$, comparable to the molecular gas mass observed on this scale.
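A one-line check of this mass (our arithmetic; the radius is taken as half the 300 mas gradient extent, which reproduces the quoted value):

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m

v = 280e3              # m/s, deprojected rotation speed
r = 110 * PC           # m

m_enc = v**2 * r / G
print(f"{m_enc / M_SUN:.1e} Msun")  # ~2.0e9, as quoted
```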
Overall, these data support the idea that the dominant energy source in the northern nuclear region in Mrk 273 is a starburst and not an AGN. However, the presence of an AGN somewhere in the inner $`2^{\prime \prime }`$ of Mrk 273 is still suggested, based on the high ionization near IR lines (Genzel et al. 1998), the (possible) hard X-ray component (Iwasawa 1999), and the Seyfert II optical spectrum, although Condon et al. (1991) argue that a Seyfert II spectrum is not necessarily a conclusive AGN indicator. It is possible that the AGN is located at either the SE radio nucleus, or the SW near IR nucleus.
The SE radio nucleus presents a number of peculiarities, the most important of which is the weakness of the near IR emission (Knapen et al. 1998, Scoville et al. 2000). Knapen et al. (1998) suggested that this source may simply be the chance projection of a background radio source. However, the probability of a chance projection of a 40 mJy source within $`1^{\prime \prime }`$ of the northern nucleus is only $`4\times 10^{-7}`$ (Langston et al. 1990, Richards et al. 1999). This low probability, and the fact that we see evidence for gas infall into the SE nucleus in the HI 21cm absorption images, effectively preclude the background source hypothesis. The radio morphology is consistent with an amorphous jet, or a very compact starburst, although the lack of CO emission from this region argues for an AGN. One possible cause for the lack of near IR emission is that the active region is still obscured at 2.2 $`\mu `$m. The HI 21cm absorption column density is 6.4$`\pm `$1.1 $`\times `$10<sup>22</sup> $`\times `$ ($`\frac{\mathrm{T}_\mathrm{s}}{10^3\mathrm{K}}`$) cm<sup>-2</sup>, while the absorption column derived from the hard X-ray spectrum may be as large as $`4\times 10^{23}`$ cm<sup>-2</sup>, depending on the X-ray powerlaw index. Using the HI 21cm column leads to A<sub>v</sub> = $`40\times (\frac{\mathrm{T}_\mathrm{s}}{10^3\mathrm{K}})`$, assuming a Galactic dust-to-gas ratio. This is comparable to the extinction responsible for the obscuration in the near IR of the AGN in the powerful radio galaxy Cygnus A (Ward 1996). Imaging at wavelengths of 10 $`\mu `$m or longer, with sub-arcsecond resolution, is required to address this interesting question.
We do not detect any high surface brightness radio emission associated with the SW near IR nucleus. This could simply mean that this region harbours a radio quiet AGN. An alternative possibility is that this is a star forming region in which the star formation is very recent, commencing less than 10<sup>6</sup> years ago, such that a substantial population of radio supernovae and supernova remnants has not yet had time to develop.
We thank J. Wrobel, J. Ulvestad, and K. Menten for useful discussions and comments. This research made use of the NASA/IPAC Extragalactic Data Base (NED), which is operated by the Jet Propulsion Lab, Caltech, under contract with NASA. The VLA and VLBA are operated by the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. CLC acknowledges support from the Alexander von Humboldt Society and the Max Planck Institute for Radio Astronomy.
References
Armus, L., Heckman, T.M., and Miley, G.K. 1990, ApJ, 364, 471
Baan, W.A., Haschick, A.D., and Schmelz, J.T 1985, ApJ (letters), 298, 51
Baan, W.A., Salzer, J.J., and Lewinter, R.D. 1998, ApJ, 509, 633
Beasley, A.J. and Conway, J.E. 1995, in Very Long Baseline Interferometry, eds. J. Zensus, P. Diamond, and P. Napier, p. 327
Bertoldi, F. et al. 1999, A&A (letters), in preparation
Blain, A., Smail, I., Ivison, R.J., & Kneib, J.-P. 1999, MNRAS, 302, 623
Chevalier, R.A. 1984, ApJ (letters), 285, 63
Coles, G.H., Pedlar, A., Holloway, A.J., and Mundell, C.G. 1999, MNRAS, 310, 1033
Colina, L., Arribas, S., and Borne, K.D. 1999, ApJ (letters), 527, 13
Condon, J.J. 1992, ARAA, 30, 575
Condon, J.J., Huang, Z.P., Yin, Q.F., and Thuan, T.X. 1991, ApJ 378, 65
Downes, D. and Solomon, P. 1998, ApJ, 507, 615
Duric, Neb 1988, Space Science Reviews, 48, 73
Iwasawa, K. 1999, MNRAS, 302, 961
Eales, S., et al. 1999, ApJ, 515, 518
Fomalont, E. 1995, in Very Long Baseline Interferometry, eds. J. Zensus, P. Diamond, and P. Napier, p. 363
Genzel, R. et al. 1998, ApJ, 498, 579
Goldader, J.D., Joseph, R.D., Doyon, R., and Sanders, D.B. 1995, ApJ, 444, 97
Holdaway, M. and Cornwell, T. 1999, in preparation
Hughes, D. et al. 1998, Nature, 394, 341
Knapen, J.H. et al. 1998, ApJ (letters), 490, 29
Langston, G.I., Conner, S.R., Heflin, M.B., Lehar, J., and Burke, B.F. 1990, ApJ, 353, 34
Lutz, D., Spoon, H.W., Rigopoulou, D., Moorwood, A.F., and Genzel, R. 1998, ApJ (letters), 505, 103
Majewski, S.R., Hereld, M., Koo, D.C., Illingworth, G.D., and Heckman, T.M. 1993, ApJ, 402, 125
Muxlow, T.W., Pedlar, A., Wilkinson, P.N., Axon, D.J., Sanders, E.M., and de Bruyn, A.G. 1994, MNRAS, 266, 455
Richards, E. 1999, ApJ, in press
Rupen, M.P., van Gorkom, J.H., Knapp, G.R., Gunn, J.E., and Schneider, D.P. 1987, AJ, 94, 61
Sanders, D.B. and Mirabel, I.F. 1996, ARAA, 34, 749
Schmelz, J.T., Baan, W.A., and Haschick, A.D. 1988, ApJ, 329, 142
Scoville, N.Z. et al. 2000, AJ, in press (astro-ph/9912246)
Smail, I., Ivison, R., & Blain, A. 1997, ApJ (letters), 490, 5
Smith, H.E., Lonsdale, C.J., Lonsdale, C.J., and Diamond, P.J. 1998, ApJ (letters), 493, 17
Staveley-Smith, L., Cohen, R.J., Chapman, J.M., Pointon, L., and Unger, S.W. 1987, MNRAS, 226, 689
Taylor, G.B., Silver, C.S., Ulvestad, J.S., and Carilli, C.L. 1999, ApJ, 519, 185
Ulvestad, J.S., Wrobel, J.M., and Carilli, C.L. 1999, ApJ, 516, 127
Ulvestad, J.S. and Antonucci, R.J. 1997, ApJ, 488, 621
Ulvestad, J.S. and Wilson, A.S. 1984, ApJ, 278, 544
Veilleux, S., Sanders, D.B., and Kim, D.-C. 1999, ApJ, 522, 113
Walker, R.C. 1985, in Aperture Synthesis in Radio Astronomy, eds. R. Perley, F. Schwab, and A. Bridle (NRAO: Green Bank), p. 189
Ward, M.J. 1996, in Cygnus A, eds. C. Carilli and D. Harris, (Cambridge University Press), p. 43
Weiler, K.W. and Sramek, R.A. 1988, ARAA, 26, 295
Wilkinson, P.N., Browne, I.W.A., Patnaik, A.R., Wrobel, J.M., and Sorathia, B. 1998, MNRAS, 300, 790
Figure Captions
Figure 1: An image of Mrk 273 at 1.368 GHz at 50 mas (37 pc) resolution. The contour levels are a geometric progression in the square root of two, hence every two contours implies a factor two rise in surface brightness. The first contour level is 0.25 mJy beam<sup>-1</sup>. The peak surface brightness is 10 mJy beam<sup>-1</sup> and the off-source rms is 85$`\mu `$Jy beam<sup>-1</sup>. The reference position (0,0) corresponds to (J2000): $`13^h44^m42.142^s`$, $`55^o53^{\prime }13.15^{\prime \prime }`$, based on phase-referencing observations using the celestial calibrator J1337+550 with a 5 minute cycle time. The cross in the SW corner indicates the position of the SW near IR nucleus.
Figure 2a: An image of the northern nuclear regions of Mrk 273 at 1.368 GHz at 10 mas (7.3 pc) resolution. The contours are linear with an increment of 0.1 mJy beam<sup>-1</sup>, starting at 0.1 mJy beam<sup>-1</sup>. The peak surface brightness is 3.05 mJy beam<sup>-1</sup> and the off-source rms is 36 $`\mu `$Jy beam<sup>-1</sup>.
Figure 2b: The same as Figure 2a, but now for the southeastern nuclear regions. The peak surface brightness is 1.35 mJy beam<sup>-1</sup>.
Figure 3: The HI 21cm absorption spectra toward Mrk 273 from images at 50 mas resolution. Figure 3a is the spectrum of N1. The peak surface brightness of 7.8 mJy beam<sup>-1</sup> has been subtracted. Figure 3b is the spectrum of N2. The peak surface brightness of 6.7 mJy beam<sup>-1</sup> has been subtracted. Both these spectra have a velocity resolution of 29 km s<sup>-1</sup>. Figure 3c is the spectrum of SE at a velocity resolution of 58 km s<sup>-1</sup>. The peak surface brightness of 10 mJy beam<sup>-1</sup> has been subtracted. The zero point on the velocity scale corresponds to a heliocentric velocity of 11300 km s<sup>-1</sup> in all spectra.
Figure 4: The position-velocity plot for the HI 21cm absorption across the major axis of the northern nucleus of Mrk 273 at a spatial resolution of 50 mas (37 pc) and a velocity resolution of 60 km s<sup>-1</sup>. The contour levels (in absorption) are: 0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8 mJy beam<sup>-1</sup>. The position of continuum component N1 corresponds to $`-90`$ mas, while N2 corresponds to +20 mas. The zero point on the velocity scale corresponds to a heliocentric velocity of 11300 km s<sup>-1</sup>.
# Coupling to spin fluctuations from conductivity scattering rates
## Abstract
A recent analysis of optical conductivity data, which provided evidence for coupling of the charge carriers to the $`41`$meV spin resonance seen in the superconducting state of optimally doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.95</sub> (Y123), is extended to other systems. We find that the corresponding spin resonance is considerably broader in Tl<sub>2</sub>Ba<sub>2</sub>CuO<sub>6+δ</sub> (Tl2201) and YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> (Y124) than it is in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub> (Bi2212), and that there is no resonance in overdoped Tl2201 with $`T_c=23`$K. The effective charge-spin spectral density is temperature dependent and contains feedback effects that further stabilize superconductivity as $`T`$ is reduced.
For a conventional electron-phonon system an isotropic (on the Fermi surface) spectral density can be introduced which is essentially temperature independent below $`T_c`$. This spectral density, $`\alpha ^2F(\omega )`$, can be determined from tunneling data in the superconducting state and has been used with great success to understand the deviations from BCS universal laws observed in many conventional superconductors. In principle, information on $`\alpha ^2F(\omega )`$ can also be obtained through inversion of optical data although, to our knowledge, this has only been accomplished for Pb.
Recently Marsiglio et al. introduced a dimensionless function $`W(\omega )`$ which is defined as the second derivative of the normal state optical scattering rate $`\tau ^{-1}(\omega )=(\mathrm{\Omega }_p^2/4\pi )\mathrm{Re}\,\sigma _N^{-1}(\omega )`$ multiplied by frequency $`\omega `$. Here $`\mathrm{\Omega }_p`$ is the plasma frequency and $`\sigma _N(\omega )`$ the normal state optical conductivity. Specifically,
$$W(\omega )=\frac{1}{2\pi }\frac{d^2}{d\omega ^2}\left[\frac{\omega }{\tau (\omega )}\right]$$
(1)
which follows directly from experiment, provided the data on $`\sigma _N(\omega )`$ is sufficiently accurate that a meaningful second derivative can be taken, possibly after smoothing. Marsiglio et al. made the very important observation that within the phonon range $`W(\omega )\approx \alpha ^2F(\omega )`$, at least for those spectral densities studied. Beyond the phonon range $`W(\omega )`$ can be negative, but this does not detract from the fact that $`W(\omega )`$ can be used to get the shape and magnitude of $`\alpha ^2F(\omega )`$. Application of Eq. (1) to the normal state conductivity of K<sub>3</sub>C<sub>60</sub> gave an $`\alpha ^2F(\omega )`$ (provided negative regions in $`W(\omega )`$ are simply ignored) in excellent agreement with incoherent inelastic neutron scattering data on the phonon frequency distribution $`F(\omega )`$, and gave sufficient coupling strength to obtain the measured value of $`T_c`$. This leaves little doubt that K<sub>3</sub>C<sub>60</sub> is an $`s`$-wave, electron-phonon superconductor, even though correlation effects are likely to be quite important.
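In practice Eq. (1) must be applied to discretely sampled, noisy data. A minimal numerical sketch (ours, not the inversion code used in the works cited; the smoothing is deliberately crude) is:

```python
import numpy as np

def W_from_scattering_rate(omega, inv_tau, smooth_pts=9):
    """W(w) = (1/2pi) d^2/dw^2 [ w / tau(w) ]  (Eq. 1), from sampled data.
       A simple moving average stands in for the smoothing mentioned in the
       text; real data would need a more careful differentiation scheme."""
    y = omega * inv_tau
    if smooth_pts > 1:
        kernel = np.ones(smooth_pts) / smooth_pts
        y = np.convolve(y, kernel, mode="same")
    return np.gradient(np.gradient(y, omega), omega) / (2.0 * np.pi)

if __name__ == "__main__":
    w = np.linspace(1.0, 400.0, 800)          # frequency grid in meV (illustrative)
    inv_tau = 2.0 * w**2 / (w**2 + 100.0**2)  # toy scattering rate
    print(W_from_scattering_rate(w, inv_tau)[:5])
```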
More recently Carbotte et al. have extended the method of Ref. 3 to spin fluctuation exchange systems and to the superconducting state with $`d`$-wave symmetry. The charge carriers are coupled to the spin fluctuations through the spin susceptibility which is strongly peaked at $`(\pi ,\pi )`$ in the two dimensional Brillouin zone of the CuO<sub>2</sub> plane of the high $`T_c`$ oxides. In this case the momentum dependence of the interaction is very important and cannot be pinned to the Fermi surface and there are cold and hot spots. Nevertheless, the resulting in-plane infrared conductivity is isotropic for tetragonal systems and Eq. (1) can still be applied and the resulting $`W(\omega )`$ interpreted as an effective spectral density for the electron-spin fluctuation exchange interaction. In contrast to the electron-phonon case this effective interaction resides in the system of electrons and, due to correlation effects, can be temperature dependent. In particular, it can undergo major changes when the electrons condense into the superconducting state. Such feedback effects are generic to any electronic mechanism. They have been studied theoretically within a Hubbard model by Dahm and Tewordt who also review the work of others.
Optical conductivity calculations for a $`d`$-wave superconductor within a spin fluctuation mechanism by Carbotte et al. established that $`W(\omega )`$ of Eq. (1) still gives a good approximation to the spectral density $`I^2\chi (\omega )`$ provided it is divided by two and shifted by the gap $`\mathrm{\Delta }_0`$. Calculations of $`W(\omega )`$ from the data of Basov et al. in optimally doped Y123 revealed strong coupling of the charge carriers to the $`41`$meV spin resonance seen below $`T_c`$ in spin polarized inelastic neutron scattering experiments. The coupling to this resonance was found to be large enough to stabilize the observed superconducting state. For underdoped Y123 the spin resonance remains in the optics even above $`T_c`$ up to a pseudogap temperature, in agreement with the neutron work by Dai et al. A quantitative analysis is not attempted in this case, however, because of the added complications of the pseudogap.
Here we extend our previous work to other materials and, in contrast to what was done in Ref. 6, we proceed here without any reference to neutron data. In Fig. 1 we show results for the coupling to the spin resonance in Y124 ($`T_c=82`$K, solid line), Tl2201 ($`T_c=90`$K, dashed line), and Bi2212 ($`T_c=90`$K, dotted line) derived from optical data measured at $`T=10`$K. These results were obtained from a direct application of Eq. (1) to the optical data of Puchkov et al. After shifting by the gap, which is determined by the method discussed in detail later on, the resonances are at 38, 43, and $`46`$meV respectively, with a considerably larger width in the first two than in Bi2212. On the other hand, the spin resonance in Bi2212 was observed by Fong et al. at $`43`$meV using neutron scattering. No neutron data exist, to our knowledge, for Y124 and Tl2201 and, therefore, our results represent a falsifiable prediction. Another prediction that we develop later is that overdoped Tl2201 ($`T_c=23`$K) will show no resonance.
To extract more information from optical data we need to consider a more specific mechanism, namely spin fluctuation exchange. At the simplest level in the normal state, we describe the corresponding spectral density by a two parameter form
$$I^2\chi (\omega )=I^2\frac{\omega \omega _{SF}}{\omega ^2+\omega _{SF}^2},$$
(2)
where $`I^2`$ is the coupling between spin excitations and the charge carriers and $`\omega _{SF}`$ sets the energy scale for the spin fluctuations. Both parameters can be derived from a fit to the normal state optical scattering rates as a function of frequency. A fit for Tl2201 ($`T_c=90`$K) is shown in the top frame of Fig. 2. The fit to the $`T=300`$K data with $`\omega _{SF}=100`$meV and a high energy cutoff at $`400`$meV is excellent and lowering $`\omega _{SF}`$ to $`30`$meV does not give an acceptable fit. The formalism we use to relate spectral density to conductivity is standard and $`\sigma _N(\omega )`$ follows from a knowledge of the self energy $`\mathrm{\Sigma }(\omega )`$. As a check on the accuracy of the inversion procedure embodied in Eq. (1), we show in the central frame of Fig. 2 our results for the function $`W(\omega )`$ obtained from our theoretical normal state optical scattering rate $`\tau ^{-1}(\omega )`$ based on our input spectral density $`I^2\chi (\omega )`$ given in Eq. (2) and shown as the grayed squares. We see that at $`T=10`$K the inversion matches almost perfectly the input spectral density except for small wiggles in the inverted curve (solid line). This excellent agreement between $`W(\omega )`$ and $`I^2\chi (\omega )`$ is not limited to simple, smooth forms. In the bottom frame of Fig. 2 we show results obtained for a structured spectrum, namely a spectrum which is proportional to the one used by Schachinger and Carbotte to analyze the optical properties of superconducting Tl2201. Except for some oscillations at higher frequencies the normal state $`W(\omega )`$ (solid curve) is close to the input spectral function $`I^2\chi (\omega )`$ (grayed squares).
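This closed loop (assume a spectral density, compute the scattering rate, invert with Eq. (1)) can be illustrated schematically. At $`T=0`$ the normal state optical scattering rate generated by a spectral density is often approximated by the standard expression $`\tau ^{-1}(\omega )=(2\pi /\omega )\int _0^\omega d\mathrm{\Omega }(\omega -\mathrm{\Omega })I^2\chi (\mathrm{\Omega })`$; we use this simple zero-temperature form only for illustration here, and it is not meant to reproduce the full finite-temperature formalism of the actual calculations.

```python
import numpy as np

def mmp_spectrum(omega, I2=1.0, w_sf=100.0, cutoff=400.0):
    """Eq. (2) with a high-energy cutoff (energies in meV)."""
    chi = I2 * omega * w_sf / (omega**2 + w_sf**2)
    return np.where(omega <= cutoff, chi, 0.0)

def tau_inv_T0(omega, spectrum):
    """Normal-state optical scattering rate at T = 0 from a spectral
    density: 1/tau(w) = (2pi/w) * int_0^w dW (w - W) * spectrum(W)."""
    rate = np.zeros_like(omega)
    for i, w in enumerate(omega):
        grid = np.linspace(0.0, w, 200)
        rate[i] = (2.0 * np.pi / w) * np.trapz((w - grid) * spectrum(grid), grid)
    return rate

omega = np.linspace(1.0, 400.0, 400)     # meV grid
rate = tau_inv_T0(omega, mmp_spectrum)
# rate can now be fed back into W_from_scattering_rate() above;
# the recovered curve should track mmp_spectrum(omega).
```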
Normal state conductivity data is not available at low temperatures in the high $`T_c`$ oxides and it is necessary to devise an inversion technique which applies in the superconducting state. Also, the spectral density can depend on temperature and on the state of the system. This requires a formalism which relates the spectral density $`I^2\chi (\omega )`$ to the superconducting state conductivity. This was provided in the work of Schachinger et al. who calculated the conductivity of a $`d`$-wave superconductor within an Eliashberg formalism. As previously stated, using this formalism Carbotte et al. established that in this case $`W(\omega )/2`$ agrees fairly well with $`I^2\chi (\omega )`$ provided it is shifted by the gap amplitude $`\mathrm{\Delta }_0`$. This is shown clearly in Fig. 3 which is similar to the bottom frame of Fig. 2 except that now the superconducting state conductivity has been employed and the material is Bi2212 with $`T_c=90`$K rather than Tl2201. The grayed squares are the input spectral density shifted by the theoretical gap $`\mathrm{\Delta }_0=28`$meV and the dashed line shows the results for the inversion $`W(\omega )/2`$ vs. $`\omega `$ based on the calculated $`\sigma _S(\omega )`$. A simple $`d`$-wave gap model was used, and a parameter $`g`$ was introduced which gives the relative weight of the spin fluctuation spectral density in the gap channel as compared to its value in the renormalization channel. Details can be found in Ref. 12. For Bi2212, $`g=0.725`$ gives the measured value of $`T_c`$ when the normal state spectral density of Eq. (2) is used in the linearized self energy equations at $`T=T_c=90`$K. This value of $`g`$, which is considerably less than one, could be interpreted as an indication that a second, subdominant scattering mechanism (for example phonons) is also operative. The theoretical gap, on the other hand, is calculated from the solution of the $`d`$-wave Eliashberg equations for a temperature $`T=10`$K and is defined as the peak in the quasiparticle density of states. It is to be noted that this gap is a bit smaller than the gap of $`31`$meV suggested from the inversion data of Fig. 1 for Bi2212 (dotted line) using the experimentally observed position of the resonance peak at $`43`$meV. Nevertheless, the agreement is excellent and the theoretical value of $`28`$meV is within the experimentally observed range.
The agreement between $`W(\omega )/2`$ and $`I^2\chi (\omega )`$ in the top frame of Fig. 3 as to size and shape of the main peak is excellent. However, a negative piece is introduced in $`W(\omega )`$ right above the spin resonance peak which is not part of the spectral density. Nevertheless, at higher energies, $`W(\omega )/2`$ does recover and shows long tails extending to several $`100`$meV although they are underestimated. Additional evidence for the existence of this high energy background is found from our fit to the normal state data shown in the bottom frame of Fig. 3. The grayed lines give the optical scattering rate in Bi2212 at $`T=300`$K. The dashed curve is the fit to this data (solid line) and gives a normal state spin fluctuation frequency $`\omega _{SF}=100`$meV in Eq. (2) and an area under $`I^2\chi (\omega )`$ of $`95`$meV. From application of Eq. (1) to the superconducting state data we have already established the existence of coupling of charge carriers to a resonance peak as seen in Fig. 1 which gives its size and position in energy and this is reproduced as the solid curve in the top frame of Fig. 3. To get the superconducting state spectral density (grayed squares of the top frame in Fig. 3) the low frequency part of the normal state response is replaced by the resonant peak.
There is no known sum rule on the spectral weight $`I^2\chi (\omega )`$ and we find that the area under this function increases from $`95`$meV in the normal state to $`115`$meV in the superconducting state. The increase is due to the appearance of the spin resonance. Part of this spectral weight could come from a transfer from higher energies but our resolution at such energies is not sufficient to confirm this. In the bottom frame of Fig. 3 we show the fit to the superconducting state optical scattering rate obtained from our model $`I^2\chi (\omega )`$. The agreement is very good and since no new parameters were introduced to obtain the black dashed curve, which agrees remarkably well with the black solid curve in the region $`0\leq \omega \leq 250`$meV, this is taken to be a strong consistency check on our work.
We extend our analysis to the material Y124 ($`T_c=82\mathrm{K}`$) where we predict from Fig. 1 a spin resonance to exist at $`38`$meV. Moreover, this spin resonance is much broader than the one observed in Bi2212 or Y123. Results are presented in Fig. 4. The top frame of this figure demonstrates the agreement between $`W(\omega )/2`$ and $`I^2\chi (\omega )`$ which was shifted by the theoretical gap $`\mathrm{\Delta }_0=24`$meV which is another prediction of our calculations as, to our knowledge, no experimental data exist for this material. The bottom frame of Fig. 4 presents our comparison between experimental and theoretical optical scattering rates. As in the case of Bi2212 the normal state scattering rate (grayed lines) at $`T=300`$K gives evidence for the existence of a high energy background as the experimental data (solid line) are best fit by a spin fluctuation spectrum of the type described by Eq. (2) with $`\omega _{SF}=80`$meV and a high energy cutoff of $`400`$meV (dashed line). The black lines compare the theoretical results (dashed line) to experiment (solid line) in the superconducting state at $`T=10`$K. The signature of the spin resonance, the sharp rise in $`\tau ^{-1}(\omega )`$ starting around $`50`$meV, is correctly reproduced by theory. For $`\omega >120`$meV the experimental scattering rate shows only a weak energy dependence and the theoretical prediction starts to deviate from experiment. This is in contrast to our results for Bi2212 (bottom frame of Fig. 3) and Tl2201 and could be related to the fact that the Y124 sample used by Puchkov et al. was slightly underdoped, a situation not covered by our theory.
To conclude, we obtained theoretical gap amplitudes $`\mathrm{\Delta }_0=24,26,`$ and $`28`$meV for Y124, Tl2201, and Bi2212 respectively. Experimental values are in the range of $`30`$meV for Bi2212 and $`28`$meV for Tl2201. The theoretical values correspond to ratios $`2\mathrm{\Delta }_0/k_BT_c`$ of 6.8, 6.7, and 7.2, much larger than the BCS value of $`4.3`$, which proves that feedback effects, not present in BCS, stabilize the superconducting state as $`T`$ is reduced (a result also supported by the theoretical study of Dahm and Tewordt).
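For completeness, the quoted ratios follow directly from the gap amplitudes and transition temperatures; with $`k_B\approx 0.0862`$meV/K the Y124 numbers, for example, give

$$\frac{2\mathrm{\Delta }_0}{k_BT_c}=\frac{2\times 24\mathrm{meV}}{0.0862\mathrm{meV}\mathrm{K}^{-1}\times 82\mathrm{K}}\approx 6.8.$$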
Next we consider the case of overdoped Tl2201 with $`T_c=23`$K. The data of Puchkov et al. are reproduced as the solid curves of Fig. 5. The grayed curves are at $`T=300`$K in the normal state and the black curves apply to the superconducting state at $`T=10`$K. Nowhere is there a large rapid rise in the $`T=10`$K curve at an energy which would correspond to the sum of $`\mathrm{\Delta }_0`$ plus some resonant frequency. This is in striking contrast to the sharp rise seen in Bi2212 and Y124 (bottom frames of Figs. 3 and 4, solid curves). For this overdoped sample no spin resonance forms. In fact, a fit of Eq. (2) with $`\omega _{SF}=300`$meV to the $`T=300`$K data which gives the grayed dashed curve in good agreement with the data (grayed, solid line) also gives the black dashed curve when used in a superconducting state calculation. The agreement with the solid black curve is quite good. No adjustment of any kind was made. Finally, we note in passing that $`\tau ^{-1}(\omega )`$ stays finite (but very small) in the limit $`\omega \rightarrow 0`$ in all calculations presented here.
Optical conductivity data in Bi2212, Y124, and Tl2201 indicate that in the superconducting state the charge carriers are strongly coupled to a spin resonance which forms only in this state. No such resonance is seen in overdoped Tl2201 with $`T_c=23`$K. These results confirm that the coupling to the spin resonance first seen in optimally doped Y123 is a general feature of several, but not all, of the high $`T_c`$ oxides. The feature that corresponds to the resonance is a sharp rise in the optical scattering rate at a frequency equal to the sum of the gap plus the spin resonance frequency. Inversion of the optical data gives information on the absolute strength of the coupling between charge carriers and the spin resonance, and on its width. The resonance is found to be considerably broader in Tl2201 and Y124 than it is in Bi2212. In the systems considered here, at $`T_c`$, there is only coupling to the background spin fluctuations which extend to high energies. Below $`T_c`$ a spin resonance forms at low $`\omega `$ and this leads to increased coupling to the spin degrees of freedom which further stabilizes the superconducting state. This feedback effect leads to a ratio of twice the gap to $`T_c`$ of order 6-8, much larger than the value predicted in weak coupling BCS for a $`d`$-wave gap $`(4.3)`$. Overdoped Tl2201 with $`T_c=23`$K provides an example for which no spin resonance forms below $`T_c`$, and this system has optical properties close to those expected for a Fermi liquid.
Research supported in part by NSERC (Natural Sciences and Engineering Research Council of Canada) and by CIAR (Canadian Institute for Advanced Research). We thank D.N. Basov for continued interest in this work and discussions. |
no-problem/0002/astro-ph0002354.html | ar5iv | text | # X-ray flares on zero-age- and pre-main sequence stars in Taurus-Auriga-Perseus
## 1 Introduction
The Taurus-Auriga-Perseus region offers the opportunity to study the X-ray emission of young stars at several evolutionary stages. The youngest stars observed by ROSAT in this portion of the sky are the T Tauri Stars (TTSs) of the Taurus-Auriga and Perseus star forming regions, late-type pre-main sequence (PMS) stars of $`M\lesssim 3\mathrm{M}_{\odot }`$ with an estimated age of $`10^5`$–$`10^7\mathrm{yrs}`$. Two young star clusters, the Pleiades and Hyades, are also located in this region of the sky at ages of $`10^8\mathrm{yrs}`$ and $`6\times 10^8\mathrm{yrs}`$, respectively. They consist mostly of zero-age main-sequence (ZAMS) stars, except for some higher mass post-main sequence stars and brown dwarfs, which are not studied here.
From the early observations by the Einstein satellite it was concluded that the X-ray emission of young stars arises in an optically thin, hot plasma at temperatures above $`10^6\mathrm{K}`$ (Feigelson & DeCampli (1981)). The emission region has been associated with the stellar corona where the X-rays are produced โ more or less analogous to the solar X-ray emission โ through a stellar $`\alpha `$-$`\mathrm{\Omega }`$-dynamo. The dynamo is driven by the combination of rotation and convective motions. Correlations between the X-ray emission of late-type stars and the stellar rotation support the notion that dynamo-generated magnetic fields are responsible for heating the coronae (Pallavicini et al. (1981)). But successful direct measurements of the magnetic fields of TTSs have been performed only recently (see e.g. Guenther et al. (1999)). The details of the heating mechanism are still not well understood.
The correlation between stellar rotation and X-ray emission of late-type stars suggests that the rotational evolution of young stars determines the development of stellar activity. The rotational evolution of low-mass PMS stars partly depends on the circumstellar environment. While classical TTSs (hereafter cTTSs) are surrounded by a circumstellar disk, inferred from IR dust emission (Bertout et al. (1988), Strom et al. (1989), and Beckwith et al. (1990)) and more recently from direct imaging (e.g. McCaughrean & OโDell (1996)), weak-line TTSs (wTTSs) lack such a disk, or at least the disk is not optically thick. Owing to contraction wTTSs spin up as they approach the main sequence. For cTTSs, on the other hand, coupling between the disk and the star may prevent spin-up (Bouvier et al. (1993)). The period observed on the ZAMS depends on the time the star has spent in the cTTS phase. After the main-sequence is reached, the rotation rate decreases again (see Bouvier et al. (1997a)). As a consequence of their slower rotation, stars on the ZAMS and main sequence (MS) should on average show less X-ray activity than PMS stars.
Earlier investigations of X-ray observations of young late-type stars were mostly concerned with the quiescent emission (see Neuhäuser et al. (1995), Stauffer et al. (1994), Gagné et al. (1995), Hodgkin et al. (1995), Micela et al. (1996), 1999, Pye et al. (1994), and Stern et al. (1994)). In contrast to these studies we focus on the occurrence of X-ray flares. Furthermore we discuss a larger sample than most of the previous studies by using all currently available observations from the ROSAT Public Data Archive that contain any TTS, Pleiad or Hyad in the field of view.
X-ray flares may be used as a diagnostic of stellar activity. They are thought to originate in magnetic loops. In contrast to findings from quasi-static loop modeling, the only direct determination of the size of a flaring region (Schmitt & Favata (1999)) shows that the emitting region is very compact. In the loops which confine the coronal plasma, magnetic reconnection suddenly frees large amounts of energy which is dissipated into heat and thus leads to a temporary enhancement of the X-ray emission. The decay of the lightcurve is accompanied by a corresponding (exponential) decay of the temperature and emission measure, which are obtained from one- or two-temperature spectral models for an optically thin, thermal plasma (Raymond & Smith (1977), Mewe et al. (1985), 1986).
The most powerful X-ray flares have been observed on the youngest objects, notably a flare on the infrared Class I protostar YLW 15 in $`\rho `$ Oph which has been presented by Grosso et al. (1997). X-ray flares on TTSs observed so far (see Montmerle et al. (1983), Preibisch et al. (1993), Strom & Strom (1994), Preibisch et al. (1995), Gagné et al. (1995), Skinner et al. (1997), Tsuboi et al. (1998)) exceed the maximum emission observed from solar flares by a factor of $`10^3`$ and more. Some extreme events have shown X-ray luminosities of $`L_\mathrm{x}=10^{33}\mathrm{erg}/\mathrm{s}`$. Although some of the strongest X-ray flares ever observed were detected on TTSs, to date no systematic search for TTS flares has been undertaken.
This paper is devoted to a study of the relation between X-ray flare activity and other stellar parameters, such as age, rotation rate, and multiplicity. For this purpose we perform a statistical investigation of ROSAT observations. We develop a method for the flare detection based on our conception of the typical shape of a flare lightcurve, where the term "typical shape" refers to the characteristics of the X-ray lightcurve described above, i.e. a significant rise and subsequent decay of the lightcurve to the previous emission level. The database and source detection is described in Sect. 2. In Sect. 3 we describe how the lightcurves are generated. Our flare detection algorithm is explained in Sect. 4, where we also present all flare parameters derived from the X-ray lightcurves. Then we describe the influence of observational restrictions on the data analysis and how the related biases can be overcome (Sect. 5). In Sect. 6 we compare the flare characteristics of different samples of flaring stars selected by their age, rotation rate, and multiplicity. We present luminosity functions for TTSs, Pleiads, and Hyads during flare and quiescence. Luminosity functions of the non-active state of these stars have been presented before (see e.g. Pye et al. (1994), Hodgkin et al. (1995), Neuhäuser (1997)) and some of the flares discussed here have been discussed by Gagné et al. (1995), Strom & Strom (1994), and Preibisch et al. (1993). However, this is the first statistical evaluation of flare luminosities. Flare rates comparing stellar subgroups with different properties (such as age, $`v\mathrm{sin}i`$, and stellar multiplicity) are compiled in Sect. 7. Because of lack of sufficient statistics for a detailed spectral analysis, hardness ratios are used to describe the spectral properties of the flares. In Sect. 8 we present the observed relations between hardness ratios measured during different activity phases and between hardness and X-ray luminosity. Finally, we discuss and summarize our results in Sect. 9 and Sect. 10.
## 2 Database and data reduction
In this section we introduce the stellar sample and explain the analysis of the raw data. Details about our membership lists for TTSs, Pleiads, and Hyads are given below (Sect. 2.1). We have retrieved all pointed ROSAT PSPC observations from the archive that contain at least one of the stars from these lists in their field. The observations are listed in Table 1. After performing source detection on all of these pointings, we have cross correlated the membership lists with the detected X-ray sources and identified individual TTSs, Pleiads, and Hyads in the X-ray image. The process of source detection and identification is described in Sect. 2.2.
### 2.1 The stellar sample
The analysis presented here is confined to the Taurus-Auriga-Perseus region. This portion of the sky includes the Taurus-Auriga complex, the MBM 12 cloud, and the Perseus molecular clouds with the reflection nebula NGC 1333 and the young cluster IC 348. Two open clusters containing mostly ZAMS stars are found nearby the above mentioned star forming regions, the Pleiades and the Hyades. The choice of this specific sky region thus enables us to compare the X-ray emission of young stars at different ages.
Our sample of low-mass PMS stars in and around Taurus consists of all TTSs which are either on or very close to the Taurus star forming clouds or off the clouds at locations where they can still be linked with the Taurus clouds (see e.g. Neuhäuser et al. (1997)). We restrict our Taurus sample to objects between $`\alpha _{2000}=2^\mathrm{h}`$ and $`5^\mathrm{h}`$ and $`\delta _{2000}=10^{\circ }`$ and $`40^{\circ }`$. The TTSs in Taurus comprise those listed in the Herbig-Bell catalog (Herbig & Bell (1988); HBC), in Neuhäuser et al. (1995) or in Kenyon & Hartmann (1995). In addition we include TTSs newly identified either as counterparts to previously unidentified ROSAT sources (Strom & Strom (1994), Wichmann et al. (1996), Magazzù et al. (1997), Neuhäuser et al. (1997), Zickgraf et al. (1998), Li & Hu (1998), Briceño et al. (1998)) or by other means (Torres et al. (1995), Oppenheimer et al. (1997), Briceño et al. (1998), Reid & Hawley (1999), Gizis & Reid (1999)). TTSs in the molecular cloud MBM 12 (see Hearty et al. (2000)) are also in the examined sky region.
In addition to TTSs from Tau-Aur we include those from the Perseus molecular cloud complex, mainly IC 348 and NGC 1333, in our analysis. Our list of TTSs in IC 348 comprises X-ray detections identified with H$`\alpha `$ emission stars or with proper motion members (Tables 4 and 5 in Preibisch et al. (1996)), and emission line stars from Herbig (1998), and Luhman (1999). TTS members of NGC 1333 are listed in Preibisch (1997).
All objects with low lithium strength are excluded, because it is dubious whether they are young. We accept only those objects as PMS stars which show more lithium than Pleiades stars of the same spectral type, i.e. we exclude all those with $`W_\lambda `$(Li) lower than $`0.2`$ Å for F- and G-type stars and lower than $`0.3`$ Å for K-type stars. When applying this criterion, we always use the spectrum with the best resolution and best S/N, i.e. the high-resolution spectra from Wichmann et al. (in preparation). If no high-resolution spectra are available, we use the medium-resolution spectra from Martín & Magazzù (1999), Neuhäuser et al. (1997), or Magazzù et al. (1997).
Members of the Pleiades and Hyades clusters are selected from the Open Cluster database compiled by C. Prosser and colleagues (available at http://cfa-www.harvard.edu/~stauffer/opencl/index.html). The tables collected in Prosser's database provide a summary of membership classification based on different methods, such as photometry, spectra, radial velocity and H$`\alpha `$ emission. In addition a final membership determination is given, which we use to define our membership lists.
Fig. 1 shows a sky map with the positions of the selected PSPC observations.
### 2.2 Source detection and identification
Source detection is performed on all observations given in Table 1 and shown in Fig. 1 using a combined local and map source detection algorithm based on a maximum likelihood method (Cruddace et al. (1988)). All detections with $`ML\geq 7.4`$ (corresponding to $`3.5`$ Gaussian $`\sigma `$, determined as best choice by Neuhäuser et al. (1995)) are written to a source list, which is subsequently cross-correlated with the membership lists introduced above. The maximum distance $`\mathrm{\Delta }`$ between optical and X-ray position to be allowed in this identification process depends on the off-axis angle of the source because the positional accuracy of the PSPC is worse at larger distances from the center due to the broader point spread function (PSF). From distributions of the normalized cumulative number of identifications versus offset $`\mathrm{\Delta }`$ for different off-axis ranges we have determined the optimum cross-correlation radius for all detector positions (similar to Neuhäuser et al. (1995)). A detailed description of this process together with a table providing $`\mathrm{\Delta }`$ for different off-axis ranges will be given in Stelzer et al. (in preparation).
Observations which are characterized by strong background variations (200020, 200008-0, and 200442), as well as observations consisting of two very short intervals separated by a gap of $`>100\mathrm{h}`$ (200068-0, 200914) are omitted from the flare detection and flare analysis. Furthermore, we neglect observations with a total duration of less than 1000 s. All observations which have been ignored in the analysis presented in this paper are marked with an asterisk in Table 1.
## 3 Lightcurves
Using the arrival time information of the photons counted within a pre-defined source circle, lightcurves are generated for each of the ROSAT sources that have been identified with a TTS, Pleiad or Hyad from the membership lists.
For the source extraction radius we have used the 99% quantile of the Point Spread Function (PSF) at 1 keV, i.e. the radius containing 99% of the 1 keV photons at the respective off-axis angle. In contrast to the standard EXSAS source radius of 2.5 FWHM, which becomes unreasonably large for off-axis sources due to the extended wings of the PSF, this choice of extraction radius limits the source size. Close to the detector center, some bright sources slightly overshine the nominal 99% quantile of the PSF, probably due to small deviations from the assumed 1 keV spectrum. We have therefore checked all images for such bright sources and determined a larger source radius for these cases based on visual inspection. In crowded regions, where the PSFs of several sources overlap, the measured counts are upper limits to the actual emission of the sources. None of the overlapping sources showed a flare, however, so that the overestimation of the count rate in these cases requires no further attention.
The events measured within the circular source region are binned into 400 s intervals. Since the typical duration of a flare is less than one hour, significantly longer integration times would lead to a loss of information about the structure of the lightcurve, while for shorter bin lengths the lightcurves are dominated by the low statistics. Furthermore, the choice of 400 s integration time guarantees that no additional variability is introduced by the telescope motion (wobble).
Due to the earth eclipses the data stream is interrupted at periodic time intervals. Depending on the phase used for the time integration, at the beginning and/or end of each data segment the 400 s intervals are only partly exposed. For the flare detection only bins with full 400 s of exposure are used. To gain independence of the binning we generate lightcurves with different phasing of the 400 s intervals: First, in order to divide the given observing time into as many 400 s exposures as possible, a lightcurve is binned in such a way that a new 400 s interval starts after each observation gap. Thus, data are lost only at the end of each data segment, because the last bin remains incomplete. Secondly, lightcurves are built by simply splitting the total observing time into 400 s intervals beginning from the start of the observation, regardless of data gaps. In this case data are rejected at the beginning and end of each data segment.
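A schematic of this double binning might look as follows; the function and variable names are ours, and real PSPC event handling (good-time intervals, dead-time corrections) is more involved than this sketch.

```python
import numpy as np

def bin_lightcurve(times, seg_starts, seg_stops, dt=400.0, per_segment=True):
    """Bin photon arrival times into dt intervals. per_segment=True
    restarts the bin grid after every observation gap (first phasing);
    per_segment=False uses one grid anchored at the observation start
    (second phasing). Only fully exposed bins are kept."""
    if per_segment:
        anchors = list(zip(seg_starts, seg_stops))
    else:
        anchors = [(seg_starts[0], seg_stops[-1])]
    mids, rates = [], []
    for t0, t1 in anchors:
        edges = np.arange(t0, t1 + 1e-6, dt)
        if len(edges) < 2:          # segment shorter than one bin
            continue
        counts, _ = np.histogram(times, bins=edges)
        for c, lo, hi in zip(counts, edges[:-1], edges[1:]):
            # a bin is kept only if it lies entirely inside one data segment
            if any(lo >= a and hi <= b for a, b in zip(seg_starts, seg_stops)):
                mids.append(0.5 * (lo + hi))
                rates.append(c / dt)
    return np.array(mids), np.array(rates)
```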
The number of background counts falling in the source circle is determined from the smoothed background image, which is created by cutting out the detected sources and then performing a spline fit to the resulting image. This method of background acquisition is advantageous in crowded fields, where an annulus around the source position (the most widely used method for estimating the background) is likely to be contaminated by other sources. The background count rate is found by dividing the number of background counts in the source circle by the exposure time extracted from the standard ROSAT exposure map. To take account of possible time variations in the background count rate, the background is determined separately for each data segment and subtracted from the measured count rate in the respective data interval. (When referring to "data segments" we mean parts of the lightcurve that are separated from each other by gaps of at least 0.5 h.)
## 4 Flare detection
### 4.1 The method
One of the major elements of a flare by customary definition is a significant increase in count rate, after which the initial level of intensity is reached again. Therefore, our flare detection is based on the deviation of the count rate from the (previously determined) mean quiescent level of the source. To ensure that the quiescent count rate contains no contribution from flares, in the first step, we determine mean count rates for all data segments of each lightcurve and define the quiescent level as the lowest mean measured in any of these data segments.
We define a flare as an event which is characterized by two or more consecutive time bins that constitute a sequence of either rising or falling count rates, corresponding to rise and decay phase of the flare. In addition, to ensure the significance of our flare detections, we define the upper standard deviation of the quiescent level as a point of reference and require that (a) all bins which are part of the flare are characterized by count rates higher than this level, and that (b) the sum of the deviations of all these bins is more than 5 $`\sigma `$ from this level. A rise immediately followed by a decay is counted as one flare. Since the shape of a lightcurve is influenced to some degree by the binning used, we accept only flares that are detected in lightcurves with both bin phasings (see Sect. 3).
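In pseudocode-like form these criteria read as follows. This is our own schematic rendering (working on bin indices rather than times, with sigma_q the standard deviation of the quiescent level), and it omits the additional requirement that an event must be recovered in both bin phasings.

```python
import numpy as np

def detect_flares(rate, quiescent, sigma_q):
    """Flag candidate flares in a binned lightcurve: two or more
    consecutive bins forming a monotone rise and/or decay, all above
    quiescent + 1 sigma, with summed excess above that reference level
    exceeding 5 sigma (Sect. 4.1)."""
    level = quiescent + sigma_q
    flares, i, n = [], 0, len(rate)
    while i < n:
        if rate[i] <= level:
            i += 1
            continue
        j = i
        while j + 1 < n and rate[j + 1] > level and rate[j + 1] >= rate[j]:
            j += 1                                 # rise phase
        while j + 1 < n and rate[j + 1] > level and rate[j + 1] <= rate[j]:
            j += 1                                 # immediately following decay
        if j > i and np.sum(rate[i:j + 1] - level) > 5.0 * sigma_q:
            flares.append((i, j))                  # (start_bin, end_bin)
        i = j + 1
    return flares
```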
Detections of more than one flare in a single lightcurve are possible. To estimate the contribution of each event properly, after detection the decay of the first flare in each lightcurve is modeled by an exponential function, and a new lightcurve is generated by subtracting the fit function from the data. Having removed the first flare, we search for further flares in the reduced, โflare-subtractedโ lightcurve using the same criteria as before. This procedure is repeated until no additional flares are detected.
Since many of the investigated sources are highly variable X-ray emitters on timescales shorter than resolvable by our method, the mean count rate used until now in some cases is not a good estimate for the quiescent emission. With the knowledge obtained about the times at which flares have occurred we therefore redetermine the quiescent count rate taking the mean from the remaining data after removal of all flare contributions. Using this new mean count rate we repeat the flare detection procedure.
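The fit-subtract-repeat loop of the last two paragraphs can be sketched like this; the exponential model follows the description above, while the function names and the initial guesses passed to the fitter are arbitrary choices of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def subtract_flare(t, rate, t_peak, quiescent):
    """Fit an exponential decay to the lightcurve after a flare peak and
    return the flare-subtracted lightcurve plus the decay time, so that
    the detection can be re-run on the residual."""
    model = lambda tt, A, tau: quiescent + A * np.exp(-(tt - t_peak) / tau)
    sel = t >= t_peak
    p0 = [max(rate[sel][0] - quiescent, 1e-3), 3600.0]  # amplitude, tau [s]
    (A, tau), _ = curve_fit(model, t[sel], rate[sel], p0=p0)
    cleaned = rate.copy()
    cleaned[sel] -= A * np.exp(-(t[sel] - t_peak) / tau)
    return cleaned, tau
```

After all flares are removed in this way, the quiescent level is recomputed from the cleaned lightcurve and the search is repeated, as described above.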
### 4.2 Flare Parameters
With the detection procedure described in the previous subsection we have found 52 flares. We have always identified the nearest optical position with the X-ray source. In one flare, however, two possible optical counterparts, DD Tau and CZ Tau, are close by (at 6<sup>′′</sup> and 24<sup>′′</sup> respectively), so that we cannot be sure which star flared. Fifteen events were observed on TTSs, 20 on Pleiads, and 17 on Hyads. On two TTSs (RXJ 0437.5+1851 and T Tau) and two Hyads (VA 334 and VB 141) two flares occurred in the same observation. VB 141 showed a third event during a different ROSAT exposure.
Hyades stars above $`2\mathrm{M}_{\odot }`$, which have already evolved off the main sequence, are not considered in the statistical analysis if they showed a flare. Brown dwarfs in the Hyades and Pleiades are not on the main sequence by definition, but they are also too faint for X-ray detection (Neuhäuser et al. (1999)). Thus we discuss only the ZAMS stars from the Pleiades and Hyades.
A complete list of all TTSs, Pleiads, and Hyads on which at least one flare was detected is given in Table 2. Column 1 gives the designation of the flaring star. Column 2 is the distance estimate used for the count-to-energy-conversion. For TTSs in Taurus-Auriga we adopt a value of $`140\mathrm{pc}`$ (Elias (1978), Wichmann et al. (1998)), while the TTSs in MBM 12 are located at $`65\mathrm{pc}`$ (Hearty et al. (2000)), and those in Perseus are located at $`350\mathrm{pc}`$ (NGC 1333; Herbig & Bell (1988)) and $`300\mathrm{pc}`$ (IC 348; Cernicharo et al. (1985)). Pleiads are assumed to be at a distance of $`116\mathrm{pc}`$, the value derived by Mermilliod et al. (1997). Finally, we use the individual Hipparcos parallaxes for Hyades stars if available, and otherwise the mean value of $`46\mathrm{pc}`$ (Perryman et al. (1998)). We give spectral type, $`v\mathrm{sin}i`$, multiplicity, and binary separation of the stars and their respective references in columns 3–9. For TTSs additional columns specify whether the star is a cTTS or a wTTS.
The lightcurves of all flares have been analyzed by fitting an exponential to the decay of each flare. In Figs. 2, 3, and 4 the fit function and measured mean quiescent count rate are displayed together with the data. In Tables 3, 4, and 5 we give the results of the modeling. Column 1 and column 2 contain the stellar identification of the X-ray source and the ROSAT observation request number (ROR). The mean quiescent count rate is given in column 3, the maximum count rate inferred from the exponential fit to the lightcurve in column 4, and the decay timescale $`\tau _{\mathrm{dec}}`$ from the fit in column 5. For flares with poor data sampling we did not determine the errors of $`\tau _{\mathrm{dec}}`$. Column 6 is the estimated rise time of the flare. Due to data gaps, in most cases no reasonable estimate can be given. Luminosities are listed in columns 7 and 8: quiescent luminosity $`L_{\mathrm{qui}}`$, and maximum luminosity during the flare $`L_{\mathrm{max}}`$. We assume that all stars in the system contribute the same level of X-ray emission during quiescence, but that only one component flares at any one time. Therefore, for all multiple stars the observed quiescent count rate from column 3 has been divided by the number of components before the conversion to luminosity and energy.
For the conversion from count rates to luminosities we have used the mean ROSAT PSPC energy-conversion-factor (ECF) from Neuhäuser et al. (1995), i.e. $`ECF=1.1\times 10^{11}\mathrm{cts}\mathrm{cm}^2/\mathrm{erg}`$, and the distances given in Table 2. In order to eliminate uncertainties in the distance estimate we have computed ratios of luminosity (given in column 9). Here, $`L_\mathrm{F}`$ denotes the luminosity emitted during the flare, i.e. $`L_\mathrm{F}=L_{\mathrm{max}}-L_{\mathrm{qui}}`$. The total emitted energy during quiescence $`E_{\mathrm{qui}}`$ (column 10) and during the flare alone $`E_\mathrm{F}`$ (column 11) are inferred from the integration of the lightcurve between $`t_{\mathrm{max}}`$ and $`t_{\mathrm{max}}+\tau `$. The last column gives the reference for flares that have been published previously. In the last two rows of Tables 3, 4, and 5 we have listed the mean and median for each of the given parameters, except $`\tau _{\mathrm{ris}}`$ which is not well constrained. The means and medians have been computed with the ASURV Kaplan-Meier estimator (see Feigelson & Nelson (1985)), taking account of upper/lower limits. Lower limits of $`L_{\mathrm{max}}`$ occur when there is doubt about whether the maximum emission of the flare has been observed (due to a data gap near the observed maximum). Upper limits for $`\tau _{\mathrm{dec}}`$ occur when the decay is not observed because of a data gap between maximum and post-flare quiescent count rate. Both luminosity and decay timescale determine the flare energy, but the limits of these two parameters carry opposite signs. We therefore consider all values of $`E_\mathrm{F}`$ uncertain where $`\tau _{\mathrm{dec}}`$ or $`L_{\mathrm{max}}`$ is a limit (indicated by colons in Tables 3, 4, and 5) and have not included them in the computation of the mean and median.
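For reference, the count-rate-to-luminosity conversion amounts to a flux division by the ECF and a distance scaling; a one-function sketch (with our own names) is:

```python
import numpy as np

PC_IN_CM = 3.086e18    # one parsec in cm
ECF = 1.1e11           # mean ROSAT PSPC value quoted above, cts cm^2 / erg

def xray_luminosity(count_rate, d_pc, n_components=1):
    """Convert a background-subtracted PSPC count rate (cts/s) into an
    X-ray luminosity (erg/s); the quiescent rate of a multiple system
    is split evenly among its components, as described in the text."""
    flux = (count_rate / n_components) / ECF      # erg cm^-2 s^-1
    return 4.0 * np.pi * (d_pc * PC_IN_CM)**2 * flux

# e.g. a single star with 0.1 cts/s at the Taurus distance of 140 pc:
# xray_luminosity(0.1, 140.0) is about 2.1e30 erg/s
```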
## 5 Observational selection effects
It is the purpose of this paper to compare the flare activity of different stars, and thus some attention has to be drawn to observational selection effects. In this section, we will discuss how observational restrictions influence the search for flares. At several points during the data analysis, we are confronted with the problem of finding a representation of the data which is free from these biases.
The major difficulty with the statistical evaluation of flares on different stars is that the sensitivity of the flare detection process depends on the measured (quiescent) count rate, which determines the signal-to-noise ratio (S/N), and hence on the distance to the star. The observational bias consists in the fact that for bright stars ($`L_{\mathrm{qui}}`$ large) the minimum luminosity $`L_\mathrm{F}`$ of a detectable flare is higher than for a faint star. One consequence is that it cannot be decided at first hand whether any observed correlation between $`L_{\mathrm{qui}}`$ and $`L_\mathrm{F}`$ is real or produced by this effect. In Fig. 5 we have plotted the flare luminosity $`L_\mathrm{F}`$ against the quiescent luminosity $`L_{\mathrm{qui}}`$.
The contribution of the observational bias to this correlation can be estimated as follows: For each quiescent count rate $`I_{\mathrm{qui}}`$ we can determine the minimum strength $`L_\mathrm{F}/L_{\mathrm{qui}}`$ needed for a flare to be detected, if we assume that a flare is found whenever there is a rise in count rate of at least 3 $`\sigma `$ within one 400 s time bin. (In our actual flare search we were even more conservative; see Sect. 4.) Hypothetical events of that kind obey a detection threshold curve for $`L_\mathrm{F}/L_{\mathrm{qui}}`$ as shown in Fig. 6. As mentioned above, the minimum flare luminosity needed for detection of a flare becomes larger with increasing quiescent brightness. In contrast, the required luminosity ratio, i.e. the relative strength of the events, decreases when $`I_{\mathrm{qui}}`$ increases. Note also that the curve in Fig. 6 is distance independent. But the relation between $`L_{\mathrm{qui}}`$ and the corresponding minimum $`L_\mathrm{F}`$ of a detectable flare differs for stars at different distances. In Fig. 5 we have overplotted the theoretical threshold for detection of a flare on a star at 140 pc distance. Note that the slope of the data in Fig. 5 is somewhat steeper than the increase of the threshold imposed by the S/N. This seems to indicate an intrinsic correlation between quiescent and flare luminosity. We have subtracted the theoretical threshold value for $`L_\mathrm{F}`$ from the observed flare luminosity for each of the stars from Fig. 5. Correlation tests of the difference between threshold and observed value of $`L_\mathrm{F}`$, $`(L_{\mathrm{F},\mathrm{theo}}-L_{\mathrm{F},\mathrm{obs}})`$, with $`L_{\mathrm{qui}}`$ show that the correlation is of low significance, $`\alpha =0.05`$. The data points below the theoretical curve are all Pleiads or Hyads. They do not contradict the threshold curve, since Pleiades and Hyades stars are closer than 140 pc and therefore have a lower flare detection threshold.
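For Poisson counting statistics this threshold curve has a closed form: in a bin of length $`\mathrm{\Delta }t`$ the 1 $`\sigma `$ error of the rate is $`\sqrt{I_{\mathrm{qui}}\mathrm{\Delta }t}/\mathrm{\Delta }t`$, so the 3 $`\sigma `$ criterion translates into the distance-independent minimum strength sketched below (our notation):

```python
import numpy as np

def min_flare_strength(I_qui, dt=400.0, nsigma=3.0):
    """Smallest detectable relative flare amplitude L_F/L_qui for a
    source with quiescent count rate I_qui (cts/s), assuming a flare
    is flagged whenever a single dt bin rises by nsigma Poisson errors:
    L_F/L_qui >= nsigma / sqrt(I_qui * dt)."""
    return nsigma / np.sqrt(I_qui * dt)
```

The corresponding minimum flare luminosity then scales as $`L_{\mathrm{F},\mathrm{min}}\propto \sqrt{I_{\mathrm{qui}}}`$, which reproduces the two trends stated above: the minimum luminosity grows, while the required relative strength falls, with increasing quiescent brightness.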
## 6 Statistical comparison of the flaring stars
We present now a statistical analysis of the X-ray flares from Tables 3, 4, and 5. A detailed discussion of the (quiescent) X-ray properties of all detected and undetected stars is postponed to a later paper (Stelzer et al., in preparation).
In this section, different flare parameters will be checked for dependence on age, circumstellar environment, and rotation rate (Sects. 6.2, 6.3, and 6.4) to see whether any of these properties has an effect on the characteristic luminosity and time scales of coronal activity. For the statistical comparison of the flaring stars the ASURV package version 1.2 (Feigelson & Nelson (1985)) was used.
First we compare the flaring populations of TTSs, Pleiads, and Hyads concerning their effective temperatures. We have converted spectral types to effective temperatures using the conversion given in Kenyon & Hartmann (1995) for PMS stars earlier than M0, and Luhman (1999) for PMS M-type stars intermediate between dwarfs and giants. For Pleiades and Hyades stars we have used the conversion of Schmidt-Kaler (1982). We have applied two-sample tests to each pair of $`T_{\mathrm{eff}}`$-distributions to reveal possible differences between flaring stars of the three groups. Henceforth, we denote the probability that the distributions are similar by $`\alpha `$. In all but one of the comparisons we found $`\alpha >0.2`$, and therefore no significant differences in $`T_{\mathrm{eff}}`$. The exception is the logrank test between TTSs and Hyads where $`\alpha =0.03`$.
Most flares occurred on G, K and M stars. However, some events were observed on A and F stars. Stars of intermediate spectral type, lacking both a convection-driven dynamo and a strong stellar wind, seem to have no efficient mechanism to generate X-ray flares. Therefore, it is often assumed that X-ray emission apparently seen on A or B stars can be attributed to an (unknown) late-type companion (see e.g. Stauffer et al. (1994), Gagné & Caillault (1994), Panzera et al. (1999)). The same arguments can be applied to explain X-ray flares on these stars. In any case, the emission mechanism of early-type stars is different from that of late-type stars. From the sharp onset of rotation-activity relations in dwarf stars Walter (1983) has argued that the onset of solar-like dynamo activity occurs abruptly at about spectral type F5. To ensure that no stars with X-ray generation mechanisms other than stellar dynamos are included, we have excluded the stars of spectral type F and earlier from the statistical analysis presented in this paper, i.e. we have restricted the flare sample to events on G, K, and M stars. This limitation provides samples which have similar $`T_{\mathrm{eff}}`$ distributions, i.e. $`\alpha >0.2`$ also for the two-sample test between TTSs and Hyads (see Table 6 for the detailed results). This justifies combining all flaring late-type stars for the statistical analysis. In the following the stellar sample is restricted to G, K, and M stars. If not explicitly mentioned, the two flares on known white-dwarf systems (on V471 Tau and VA 673) are excluded from the sample, since the white dwarf could be responsible for the X-ray event instead of its late-type companion.
### 6.1 Flare frequency of MS stars and spectral type
It is interesting to ask whether the depth of the convection zone has any influence on the occurrence of surface flares. Since the relative size of the convection zone increases for later spectral types, the distribution of flares onto stars of different spectral types may help to solve this question. What we really want to check is whether the flare frequency depends on stellar mass, which corresponds to spectral type on the MS. Because PMS stars still evolve through the Hertzsprung-Russell diagram (HRD), i.e. change their spectral type, we exclude the TTSs from this part of the analysis. Flaring Pleiades and Hyades stars are combined to increase the sample size.
We have studied the spectral type distribution of flares by comparing the number of flares on stars of a certain spectral type to the total number of detected stars of that spectral type. The detection sensitivity for flares of a given strength $`L_\mathrm{F}/L_{\mathrm{qui}}`$ is different for each star because it depends on the level of quiescent emission $`I_{\mathrm{qui}}`$ (see Sect. 5), and $`I_{\mathrm{qui}}`$ depends on the spectral type of the star. For this reason, a simple comparison between numbers of flares and numbers of detected stars of each spectral type would be misleading. The observational bias can, however, be eliminated if the numbers (of flares and detections) are evaluated above a certain threshold $`L_\mathrm{F}/L_{\mathrm{qui}}`$. We compare the number of flares with measured luminosity ratio above a critical value $`(L_\mathrm{F}/L_{\mathrm{qui}})_{\mathrm{crit}}`$ to the number of detected stars for which $`I_{\mathrm{qui}}`$ exceeds the minimum value needed for detection of a flare of that critical strength. We have compiled these numbers for a reasonable range of values $`L_\mathrm{F}/L_{\mathrm{qui}}`$, and show the result in Fig. 7. Plotted are the number of flares exceeding $`L_\mathrm{F}/L_{\mathrm{qui}}`$ divided by the number of detected stars that are bright enough for detection of flares with that value of $`L_\mathrm{F}/L_{\mathrm{qui}}`$. G stars clearly show the smallest rate of events throughout the observed range of flare strengths.
### 6.2 Age of flaring stars (Luminosity functions)
To study how the flare activity of young late-type stars evolves with stellar age we have computed luminosity distribution functions (LDF) and performed two-sample tests for three subsamples of stars: TTSs, Pleiads, and Hyads.
Maximum likelihood distributions for TTSs, Pleiads, and Hyads are presented in Fig. 8 for both the flare luminosity $`L_\mathrm{F}`$ and the mean luminosity during the quiescent part of flare observations, $`L_{\mathrm{qui}}`$. Note that Fig. 8 (b) contains no upper limits because only stars which have shown a flare are included, and $`L_{\mathrm{qui}}`$ during flare observations can be extracted from the lightcurves. The flare luminosity in Fig. 8 (a) includes upper limits. Since $`L_\mathrm{F}=L_{\mathrm{max}}-L_{\mathrm{qui}}`$, upper limits for $`L_{\mathrm{max}}`$ (see Tables 3, 4, and 5) translate to upper limits for $`L_\mathrm{F}`$. LDFs for all non-flaring stars (detections and non-detections) will be shown elsewhere (Stelzer et al., in preparation).
Two-sample tests were applied to each pair of LDFs to search for differences. The results are given in Table 7 (for $`L_\mathrm{F}`$ and $`L_{\mathrm{qui}}`$). The null hypothesis of two samples being the same is rejected for all pairs of flare luminosity distributions at significance levels $`\alpha <0.05`$. The quiescent luminosity of flaring TTSs is different from both the quiescent luminosity of flaring Pleiads and flaring Hyads. Usually the quiescent luminosity functions of Pleiades and Hyades stars are also found to be distinct (see e.g. Caillault (1996)). However, we find no difference ($`\alpha >0.61`$) between the quiescent luminosities of the flaring stars in these two clusters. Using the relation between $`I_{\mathrm{qui}}`$ and the threshold for $`L_\mathrm{F}/L_{\mathrm{qui}}`$ (see Fig. 6) we have determined that more than 90% of the detected Hyades stars are bright enough for detection of a flare whose strength $`L_\mathrm{F}/L_{\mathrm{qui}}`$ is equal to the mean observed for flares on late-type Hyads, i.e. $`L_\mathrm{F}/L_{\mathrm{qui}}=4.527`$. The fact that mostly X-ray bright Hyades stars display flaring activity is therefore not a selection effect. Instead, flaring Hyads indeed are overluminous compared to the non-flaring Hyades stars detected by the ROSAT PSPC.
The mean luminosities of TTSs, Pleiads, and Hyads and their standard deviations derived with inclusion of upper limits are given in Table 8.
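ASURV itself is a FORTRAN package; for readers who want to reproduce such censored means, the underlying product-limit estimate with upper limits can be sketched via the customary "flipping" transformation that turns left-censoring into the standard right-censored Kaplan-Meier problem (illustrative code of ours, not ASURV itself; cf. Feigelson & Nelson (1985)).

```python
import numpy as np

def km_mean_with_upper_limits(x, is_limit):
    """Kaplan-Meier (restricted) sample mean for data containing upper
    limits. Flipping about a constant M > max(x) converts the upper
    limits into right-censored points for the product-limit estimator."""
    x = np.asarray(x, dtype=float)
    det = ~np.asarray(is_limit)
    M = 1.1 * x.max()
    t = M - x                           # flipped data
    order = np.argsort(t)
    t, det = t[order], det[order]
    n = len(t)
    surv, s = 1.0, []
    for i in range(n):
        if det[i]:
            surv *= 1.0 - 1.0 / (n - i) # product-limit step at each detection
        s.append(surv)
    s = np.array(s)
    # restricted mean = area under the survival step function
    mean_flipped = t[0] + np.sum(s[:-1] * np.diff(t))
    return M - mean_flipped             # flip the mean back
```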
### 6.3 Flaring cTTSs and wTTSs
So far we have not distinguished between cTTSs and wTTSs, because it is a matter of debate whether all cTTSs are younger than wTTSs. However, they are clearly distinguished by their circumstellar environment. The disks of cTTSs may influence flare activity. We have, therefore, compared cTTSs and wTTSs with respect to several flare parameters (see Table 9 for the results). Significant differences are found in the flare luminosity $`L_\mathrm{F}`$ and relative strength of the flare $`L_\mathrm{F}/L_{\mathrm{qui}}`$. The decay timescale does not depend on the type of TTS. The values given in Table 9 have been derived from all flares on TTSs except the one on DD Tau / CZ Tau. DD Tau is a cTTS binary and CZ Tau a wTTS binary. The four stars are not resolved in the PSPC image. It is therefore impossible to classify this flare concerning the type of TTS. We have performed two further series of two-sample tests in which the event is included. In one of these series of tests the flare is attributed to DD Tau, and in the other to CZ Tau. The significance of the results did not change.
The mean flare and quiescent luminosities of cTTSs and wTTSs are given in Table 8. cTTSs, although characterized by lower quiescent emission, show stronger flares than wTTSs.
### 6.4 $`v\mathrm{sin}i`$ of the flaring stars
Stellar rotation is one of the necessary conditions for magnetic activity. We have, therefore, examined the influence of the stellar rotation rate on the characteristics of X-ray flares. The relation between flare parameters and projected rotational velocity, $`v\mathrm{sin}i`$, is shown in Fig. 9.
The statistical significance for correlations between some flare parameters and $`v\mathrm{sin}i`$ is given in Table 10 (columns 2 and 3). The weak correlation between luminosity, both $`L_\mathrm{F}`$ and $`L_{\mathrm{qui}}`$, and $`v\mathrm{sin}i`$ is significant. The decay time $`\tau _{\mathrm{dec}}`$ and the relative flare strength $`L_\mathrm{F}/L_{\mathrm{qui}}`$, on the other hand, are not related to $`v\mathrm{sin}i`$.
We have studied the flaring population in terms of differences in flare characteristics between slow and fast rotators. The boundary was set to 20 km/s because this choice gives two samples of about equal size: 16 slow and 12 fast rotators showed an X-ray flare. Stars from Tables 3, 4, and 5 for which no measurement of $`v\mathrm{sin}i`$ is available are ignored. The results of two-sample tests for the parameters $`L_\mathrm{F}`$, $`L_{\mathrm{qui}}`$, $`\tau _{\mathrm{dec}}`$, and $`L_\mathrm{F}/L_{\mathrm{qui}}`$ are presented in the remaining columns of Table 10. In no case was the null hypothesis that slow and fast rotators are drawn from the same distribution rejected at significance level $`\alpha <0.05`$.
## 7 Flare rates
In this section, we will derive flare rates as a means to determine the activity level for a stellar sample with distinct properties. The characteristic properties which will be examined are (a) stellar age (comparing TTSs, Pleiads, and Hyads), (b) stellar rotation (comparing slow and fast rotators), and (c) stellar multiplicity (comparing close binaries to other stars). Flare rates will be computed separately for each group of stars.
We assume that the duration of the active state is represented by the decay timescale $`\tau _{\mathrm{dec}}`$, i.e. the generally poorly restricted rise times $`\tau _{\mathrm{ris}}`$ are neglected. This is certainly wrong for the flare on hcg 144, which seems to have a reversed character (slow rise and rapid decay). However, hcg 144 is a star of unknown spectral type and therefore not part of the group to be examined here. To compile the flare rates, $`F=\left(\sum _i\tau _i\right)/T_{\mathrm{obs}}`$, we have added up the decay timescales $`\tau _{\mathrm{dec}}`$ of the flares and divided this sum by the total observing time, $`T_{\mathrm{obs}}`$, of all detections (flaring and non-flaring stars). Only the nearest identification of each X-ray source has been considered in the compilation of $`T_{\mathrm{obs}}`$. But for multiple systems we have multiplied the observing time by the number of components. For the compilation of $`T_{\mathrm{obs}}`$ we have eliminated data gaps larger than 1 h, the typical flare duration. This provides the fraction of the total observing time during which the stars are observed in the active state.
In practice $`\sum _i\tau _i`$ is computed from the sample mean $`\overline{\tau }`$ returned by ASURV's Kaplan-Meier estimator. This way we ensure that upper limits to $`\tau _{\mathrm{dec}}`$ are taken into account. The Kaplan-Meier estimator also returns the uncertainty of $`\overline{\tau }`$. To include the spread of the data in the estimation of $`F`$ we have converted this uncertainty of the mean to the sample variance $`\sigma _\tau `$. Consequently:
$$F=\frac{\overline{\tau }N}{T_{\mathrm{obs}}}\pm \frac{\sigma _\tau \sqrt{N}}{T_{\mathrm{obs}}}$$
(1)
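Expressed as code, Eq. (1) is a one-liner; the sketch below (our own wrapper) just makes the inputs explicit:

```python
import numpy as np

def flare_rate(tau_mean, tau_sigma, n_flares, t_obs):
    """Flare rate F and its uncertainty according to Eq. (1):
    F = tau_mean*N/T_obs +- tau_sigma*sqrt(N)/T_obs, with tau_mean and
    tau_sigma the Kaplan-Meier mean decay time and sample standard
    deviation, and t_obs the summed observing time of all detections
    (gaps > 1 h removed; multiples counted once per component)."""
    return tau_mean * n_flares / t_obs, tau_sigma * np.sqrt(n_flares) / t_obs
```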
### 7.1 Flare rate and stellar age
The evolution of flare rates with stellar age is examined by comparing the flare frequency of TTSs to that of the Pleiades and the Hyades. 15 flares have occurred on TTSs, 14 on late-type Pleiads, and 11 on late-type Hyads. We have derived the following values for the flare rate $`F`$: $`0.86\pm 0.16`$% (TTSs), $`0.67\pm 0.13`$% (Pleiads), and $`0.86\pm 0.32`$% (Hyads). When the white-dwarf binaries are excluded, the flare rate for the Hyades declines to $`F_{\mathrm{H},\mathrm{noWD}}=0.71\pm 0.30`$%.
The flare rates are biased for several reasons, which will be explained next. First, the flare detection limit is determined by the S/N, which in turn depends on the distance to the star. Therefore, flare rates of TTSs, Pleiads, and Hyads are only comparable above a limiting minimum strength of the flare, expressed by a threshold $`L_\mathrm{F}/L_{\mathrm{qui}}`$. Second, incomplete data sampling might lead to wrong conclusions about the decay timescale of individual events and thus contaminate the resulting $`F`$.
To solve the first problem we have scaled the quiescent count rate of all flaring stars to a distance of 140 pc, the distance of most of the TTSs, i.e. we have multiplied all quiescent count rates by a factor $`(\frac{d}{140\mathrm{pc}})^2`$. These theoretical values of $`I_{\mathrm{qui}}`$ correspond to higher flare detection thresholds $`L_\mathrm{F}/L_{\mathrm{qui}}`$ for all stars except the ones in Perseus. All flares from Perseus stars would be detected at 140 pc, since they are further away than this distance. The observed luminosity ratios of all flaring stars have then been compared to the theoretical threshold needed if the star were at 140 pc. All flares for which the observed value is below this requirement should be neglected when the flare rates are computed. It turns out that all flares on Pleiads remain above the 140 pc threshold. But only 7 out of 11 flares on Hyads (one of the 7 is a white-dwarf system) have $`L_\mathrm{F}/L_{\mathrm{qui}}`$ high enough to be detected at a distance of 140 pc. Now the comparison of our different samples is free from the sensitivity bias, and we derive flare rates $`F`$ of $`0.86\pm 0.16`$% (TTSs), $`0.67\pm 0.13`$% (Pleiads), and $`0.46\pm 0.19`$% (Hyads). Without the white-dwarf binary, $`F`$ for the Hyades decreases to $`0.32\pm 0.17`$%.
The uncertainties in the measurement of the flare duration are less easy to overcome. The large flare rate of TTSs is partially due to two extraordinarily long events of duration $`>8\mathrm{h}`$ (see Table 3). The decay times of both of these flares are considered to be upper limits. If these two events are discarded from the sample of flares, $`F_{\mathrm{TTS}}=0.74\pm 0.14`$%.
We have also compiled $`F`$ for cTTSs and wTTSs separately to see whether the circumstellar environment has any influence on the frequency of the flare activity. Among the events on TTSs, 6 are observed on cTTSs and 8 on wTTSs. An additional flare was seen from the unresolved stars DD Tau / CZ Tau. The classification of this event within the subgroups of TTSs therefore remains unclear, which complicates the comparison of $`F`$ for the two classes of TTSs. First, the event on DD Tau / CZ Tau has been eliminated from the sample, so that 6 flares on cTTSs stand against 8 flares on wTTSs. The respective flare rates are $`F_\mathrm{c}=1.09\pm 0.39`$% and $`F_\mathrm{w}=0.65\pm 0.16`$%. When the ambiguous event is counted on the side of the cTTSs, $`F_\mathrm{c}`$ rises to $`1.28\pm 0.37`$%. When it is attributed to the wTTS CZ Tau instead, $`F_\mathrm{w}`$ becomes $`0.76\pm 0.16`$%. Note that even though the number of flares on wTTSs is higher than the number of flares on cTTSs, the flare rate for wTTSs is lower than the flare rate for cTTSs. This is possible because of differences in the total observing time.
$`F`$ as a function of stellar age is displayed in Fig. 10. The decline of the flare rate with stellar age is obvious. Rates for cTTSs and wTTSs are symbolized by diamonds and triangles, respectively. The location of the lower diamond and triangle describes the flare rates without the event on DD Tau / CZ Tau. The upper diamond and triangle are values for $`F`$ if this flare is included in the respective group of TTSs.
### 7.2 Flare rate and rotational velocity
Here we examine whether the flare frequency depends on rotation. This is done by computing $`F`$ (defined as before) for fast rotators on the one hand ($`v\mathrm{sin}i>20\mathrm{km}/\mathrm{s}`$) and slow rotators on the other hand ($`v\mathrm{sin}i<20\mathrm{km}/\mathrm{s}`$). Again, only late-type stars are considered. The resulting rates are $`F_{\mathrm{slow}}=0.55\pm 0.10`$% and $`F_{\mathrm{fast}}=1.55\pm 0.38`$%. Thus there is a clear trend towards an increase of flare activity with increasing rotational velocity.
### 7.3 Flare rate and multiplicity
Another interesting question is whether the circumstellar surroundings have any influence on the flare frequency. The coronal activity may e.g. change if there are interactions between the magnetic fields of binaries. Such interactions are expected to take place only in close binaries. To search for such a connection we therefore discriminate between spectroscopic binaries on the one hand and all others, i.e. singles or visual multiples, on the other. The flare rate $`F`$ is computed in the same way as before. Since the observation time of each stellar system has been multiplied by the number of components, the flare rates should be about equal for both samples if the underlying physics is the same. However, we find that the flare rate of spectroscopic binaries is enhanced by more than a factor of two: $`F_{\mathrm{non}\mathrm{SB}}=0.64\pm 0.12`$% and $`F_{\mathrm{SB}}=1.43\pm 0.25`$%. Note that the study of individual flare parameters (similar to the analysis of Sect. 6) has shown no difference between these two samples.
## 8 Hardness Ratios
For most of the flaring sources not enough counts are collected by the PSPC to compare the different levels of X-ray emission in a detailed spectral analysis. Therefore, we use hardness ratios to mark spectral changes. ROSAT PSPC hardness ratios are defined by:
$$HR\mathrm{\hspace{0.17em}1}=\frac{H-S}{H+S}\qquad HR\mathrm{\hspace{0.17em}2}=\frac{H_2-H_1}{H_2+H_1}$$
(2)
where $`S`$, $`H`$, $`H_1`$, and $`H_2`$ denote the count rates in the ROSAT PSPC soft (0.1-0.4 keV), hard (0.5-2.0 keV), hard1 (0.5-0.9 keV) and hard2 (0.9-2.0 keV) bands, respectively. For each flare observation $`HR\mathrm{\hspace{0.17em}1}`$ and $`HR\mathrm{\hspace{0.17em}2}`$ are computed for three activity stages representing the quiescent state (pre- and post-flare), the rise, and the decay, respectively. Sometimes no counts are measured in one or more of the energy bands. Whenever this is the case we have derived upper limits for the hardness ratio, making use of the background counts in that energy band at the source location.
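A minimal sketch of Eq. (2), with the substitution of background counts for empty bands as our simplified reading of the limit procedure, is:

```python
def band_ratio(soft, hard, bkg_soft=None, bkg_hard=None):
    """(hard - soft)/(hard + soft); if a band has zero counts, the local
    background estimate is substituted, which turns the result into a
    limit rather than a measurement."""
    if soft == 0 and bkg_soft is not None:
        soft = bkg_soft        # yields a lower limit to the ratio
    if hard == 0 and bkg_hard is not None:
        hard = bkg_hard        # yields an upper limit to the ratio
    return (hard - soft) / (hard + soft)

# HR1 from the soft and hard bands, HR2 from the hard1 and hard2 bands:
# hr1 = band_ratio(S, H);  hr2 = band_ratio(H1, H2)
```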
The observed hardness ratios, $`HR\mathrm{\hspace{0.17em}1}`$ and $`HR\mathrm{\hspace{0.17em}2}`$, are plotted in Fig. 11. The plots comparing quiescent and flare state show marginal evidence that most of the stars lie below the diagonal in the hardness plot (see lower left panel of Fig. 11) and thus are harder during the flare intervals as compared to their quiescence. No significant difference in hardness is observed between flare rise and flare decay. When impulsive heating takes place before the outburst the plasma cools quickly by radiation and conduction to the chromosphere. Therefore, the similar hardness observed during rise and decay phase suggests that heating takes place throughout the decay.
To quantify the differences in hardness between different flare stages we have computed mean hardness ratios for each of the stellar groups. In Table 11 we show the mean hardness for each activity stage (quiescence, rise, and decay) and each sample of stars (TTSs, Pleiads, and Hyads). The hardness changes systematically when the three groups are compared to each other: TTSs display the hardest spectra, followed by Pleiads, which in turn are characterized by higher hardness ratios than the Hyades stars. This is also manifest in the hardness plots of Fig. 11 where the three samples occupy different regions. In Sect. 6.2 it was shown that the flare luminosity declines with stellar age. As a consequence, the spectral hardness and the flare luminosity are correlated. The relation between hardness ratios and $`L_\mathrm{F}`$ is displayed in Fig. 12 and suggests that the more luminous flares are associated with hotter plasma.
## 9 Discussion
### 9.1 Methods for flare detection
Using binned data to detect flares introduces observational restrictions. The sensitivity for the detection of small flares is lower, and very short flares remain unobserved due to the time binning. Apart from these limitations, our flare detection produces reliable results, as verified by comparison to both an alternative approach using Bayesian statistics and, where possible, previous detections of flares by visual inspection reported in the literature.
The importance of Bayesian statistics to astronomical time series analysis has been described by Scargle (1998) and first applied to ROSAT observations of flare stars by Hambaryan et al. (1999). This approach, unlike the โclassicalโ method used here, works on the raw, unbinned data and therefore has a time resolution which is only limited by the instrument clock.
We have performed a detailed comparison of the events recognized by the two methods. For the flare detection with the Bayesian algorithm the prior odds ratio, $`O_{\mathrm{pri}}`$, was set to 1. This means that a priori, one-rate and two-rate Poisson processes are assumed to be equally probable descriptions of the data set. The significance of any detection of variability is then given by the value of the posterior odds ratio, $`O_{21}`$. Applied to our data, 62 events are found at the $`5\sigma `$ level, and 95 events have $`O_{21}`$ corresponding to at least $`3\sigma `$. All but 5 of the flares discussed in this paper were among the $`5\sigma `$ detections; the remaining ones are detected at $`>3\sigma `$. Note, however, that with the Bayesian method we find variability in 182 lightcurves (in contrast to our 52 flares).
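For readers unfamiliar with the odds-ratio formalism, the toy sketch below computes $`O_{21}`$ for binned counts with a conjugate Gamma prior on the Poisson rates and a uniform prior over changepoint positions. This is a simplification of Scargle's unbinned formulation; all priors and function names are our own choices, not the algorithm used for the actual analysis.

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(counts, dt, alpha=1.0, beta=1.0):
    """log marginal likelihood of a single-rate Poisson model for binned
    counts, with a Gamma(alpha, beta) prior on the rate (our choice)."""
    counts = np.asarray(counts)
    C, n = counts.sum(), len(counts)
    return (alpha * np.log(beta) - gammaln(alpha)
            + gammaln(alpha + C) - (alpha + C) * np.log(beta + n * dt)
            + C * np.log(dt) - gammaln(counts + 1).sum())

def posterior_odds(counts, dt, o_pri=1.0):
    """O21 = odds of a two-rate vs a one-rate process, marginalizing the
    two-rate evidence over all possible changepoint positions."""
    counts = np.asarray(counts)
    lm1 = log_evidence(counts, dt)
    lm2 = np.array([log_evidence(counts[:k], dt) + log_evidence(counts[k:], dt)
                    for k in range(1, len(counts))])
    lm2 = np.logaddexp.reduce(lm2) - np.log(len(lm2))
    return o_pri * np.exp(lm2 - lm1)
```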
Although the Bayesian approach is sensitive to short events, we have retained the criteria explained in Sect. 4.1 for two reasons: (i) While Bayesian statistics are sensitive to all kinds of temporal variability, we are here interested in large flare events only. This makes an additional selection process necessary. (ii) Comparison with the classical flare search used in this paper has shown that the Bayesian method needs further refinement. For example, the outcome of the present flare-search algorithm depends sensitively on the value of the prior odds ratio.
### 9.2 Interpretation of the results
Before flares on different stellar groups are compared, it must be checked whether the composition of these samples is similar. The X-ray luminosity of MS stars depends on their spectral type. Therefore it would be desirable to investigate the flare activity of stars of different spectral types separately. However, this is hindered by the low flare statistics. We have performed statistical tests in which the flaring TTSs, Pleiads, and Hyads have been compared with regard to their $`T_{\mathrm{eff}}`$ and hence spectral types. These tests have shown that it is justified to jointly analyse flares on all late-type stars, i.e. stars with spectral types G, K, and M.
We have shown that the relative number of flares increases when going from spectral type G to K (see Fig. 7). In doing so, we have taken into account that the detection sensitivity for flares depends on the level of measured quiescent emission and hence on the spectral type. An interpretation is that deeper convection zones are favorable to the occurrence of surface flares.
#### 9.2.1 Age
We found that in terms of absolute flare luminosity and energy output TTSs surpass both Pleiads and Hyads. The mean flare luminosity of TTSs ($`L_{\mathrm{F},\mathrm{TTS}}=1.13\times 10^{31}\mathrm{erg}/\mathrm{s}`$) is almost an order of magnitude higher than that of Hyads ($`L_{\mathrm{F},\mathrm{Hya}}=1.15\times 10^{30}\mathrm{erg}/\mathrm{s}`$). The mean Pleiades flare luminosity, $`L_{\mathrm{F},\mathrm{Ple}}=3.26\times 10^{30}\mathrm{erg}/\mathrm{s}`$, is intermediate between those of TTSs and Hyades stars (see also Fig. 5 and Table 8). This is partly due to the different distances of our stellar samples, which result in different detection sensitivities for flares. Note, however, that this effect can explain only why no events with small $`L_\mathrm{F}`$ are observed on TTSs; the lack of large events on Hyades stars is real. In Sect. 7 flare rates for TTSs, Pleiads, and Hyads have been established from an evaluation of the observed flare durations and the total observing time. Both flare rate and mean flare luminosity decline with increasing stellar age.
The quiescent luminosity of the Hyades stars which showed a flare is larger than the average $`L_{\mathrm{qui}}`$ of Hyads (see Sect. 6.2). More than $`90`$% of the detected Hyades stars are bright enough for the detection of an average Hyads flare. Therefore, this result is not a selection effect, and we can conclude that only the most X-ray luminous Hyades stars exhibit X-ray flares. The interesting question whether the enhanced X-ray luminosity of flaring Hyades stars can be explained by their rotation rate cannot be pursued with this set of data, because measurements of $`v\mathrm{sin}i`$ are available for only half of the flaring Hyades stars.
#### 9.2.2 Circumstellar Envelope
If magnetic interactions between star and disk take place, the field lines will constantly become twisted by differential rotation (Montmerle et al. 2000). This may provide an environment favorable for magnetic reconnection and related flare activity.
Six of the observed flare events can be attributed to cTTSs and 8 events to wTTSs. One of the flares on TTSs occurred either on DD Tau, a cTTS, or on CZ Tau, a wTTS, both of which are not resolved in the ROSAT PSPC observations. Two-sample tests show clear indications that flares on cTTSs are more X-ray luminous than those on wTTSs (see Table 7). This holds no matter on which side the ambiguous event is counted. The flare rate is also slightly higher for cTTSs than for wTTSs, however with low significance. Given that the quiescent X-ray emission of wTTSs is stronger than that of cTTSs, this observation is surprising. A possible interpretation is that the stronger flare events on cTTSs may be due to violent interaction with their disks.
#### 9.2.3 Multiple Flares
During four observations a second flare followed the first one (see lightcurves of VA 334, VB 141, RXJ 0437.5+1851B, and T Tau in Figs. 2 and 4). From the number of observed flares and the total observing time the average time between two flare events is estimated to be $`>100\mathrm{h}`$. Therefore, from a statistical point of view it is very unlikely to observe so many unrelated "double events". We note that double flares have also been reported in the optical, and Guenther & Ball (1999) have presented two flares that occurred within a few hours of each other on the wTTS V819 Tau.
A possible interpretation of multiple flares is the star-disk scenario proposed by Montmerle et al. (2000) and mentioned in the previous subsection. However, this model does not seem to be applicable to our objects, which are more evolved and in part are known not to possess disks.
#### 9.2.4 Projected Rotational Velocity
The statistical tests we have performed to discriminate between slow and fast rotators (with boundary drawn at 20 km/s) reveal no dependence of individual flare parameters $`L_\mathrm{F}`$, $`L_{\mathrm{qui}}`$, $`\tau _{\mathrm{dec}}`$, and $`L_\mathrm{F}/L_{\mathrm{qui}}`$ on the rotation rate. However, the flare frequency is about three times higher for fast rotators as compared to slow rotators: $`F_{\mathrm{slow}}=0.55\pm 0.10`$% and $`F_{\mathrm{fast}}=1.55\pm 0.38`$%.
#### 9.2.5 Binary Interactions
We have searched for evidence of binary interaction during X-ray flares by dividing our sample of flares into spectroscopic binaries and all other systems, i.e. wide (or visual) multiples and single stars, in which such interactions cannot take place. The comparison of flare rates $`F`$ showed that large X-ray flares are significantly more frequent on spectroscopic binaries: $`F_{\mathrm{SB}}=1.43\pm 0.25`$% and $`F_{\mathrm{non}\mathrm{SB}}=0.64\pm 0.12`$%. We have taken account of all components in multiple systems when evaluating the flare rate. Therefore, the difference in $`F`$ between close binaries and other stars seems indeed to indicate that magnetic interactions within close binaries lead to increased flare activity. Note, however, that interbinary events are expected to have longer durations because of the larger scale of the magnetic configuration; our statistics did not show an increase of the time scales for spectroscopic binaries.
#### 9.2.6 Spectral signatures during flares
From the lower panels of Fig. 11 it can be concluded that for most of the observed events the spectral hardness has increased during the flare. Due to the large uncertainties, however, the changes in the mean hardness are only marginal. Note, however, that the uncertainties represent the standard deviation (computed by taking into account upper/lower limits to the hardness) and thus reproduce the spread in the data.
We think that the X-ray emission of TTSs is harder than that of Pleiades and Hyades stars (see Table 11) for two reasons: (i) Because of their circumstellar envelope TTSs suffer from much stronger absorption than Pleiads and Hyads, and absorption is stronger for Pleiads than for Hyads due to the larger distance of the former, (ii) the younger the stars, the stronger the activity, and therefore the harder the spectrum.
## 10 Conclusions
We have determined flare rates for PMS stars, Pleiades and Hyades on a large data set and found that all stars are observed during flares for less than 1% of the observing time. Both frequency and strength of large X-ray flares decline after the PMS phase.
To probe whether the activity changes in the presence of a circumstellar disk, e.g. as a result of magnetic interactions between the star and the disk, we have compared flares on cTTSs and wTTSs. We find that flares on cTTSs are stronger and more frequent.
A comparison of flares on spectroscopic binaries to flares on all other stars of our sample shows that the flare rate is by a factor of $`2`$ higher for the close binaries.
The flare rate of fast rotators is enhanced by a factor of $``$ 3 as compared to slowly rotating stars.
To summarize, our analysis confirms that age and rotation influence the magnetic activity of late-type stars. All previous studies in this field have focused on the quiescent X-ray emission. Now, for the first time the rotation-activity-age connection has been examined for X-ray flares. Furthermore, from the sample of flares investigated here we find evidence that magnetic activity goes beyond solar-type coronal activity: On young stars interactions between the star and a circumstellar disk or the magnetic fields of close binary stars may play a role.
###### Acknowledgements.
We made use of the Open Cluster Database, compiled by C.F. Prosser and J.R. Stauffer. We thank S. Wolk and W. Brandner for useful discussions and an anonymous referee for valuable comments. RN acknowledges grants from the Deutsche Forschungsgemeinschaft (Schwerpunktprogramm "Physics of star formation"). The ROSAT project is supported by the Max-Planck-Gesellschaft and Germany's federal government (BMBF/DLR).
# Optical Counterparts to Damped Lyman Alpha Systems
## 1. Summary
We have used the Semi Analytic Models (SAMs) of Somerville & Primack (1999) to determine the distribution of galaxies in a dark matter halo, and the amount of cold gas in each galaxy. The SAMs also contain the star formation history of each galaxy, so we can explore the optical properties of the galaxies in the halos that give rise to DLAS. Here we present results on the optical counterparts from the two models discussed in Maller (1999), in which we matched the kinematic data with either thicker, less radially extended gas disks or thinner, more radially extended ones. We show both models here only to demonstrate that the optical properties are not highly sensitive to the details of the gas modeling. We refer only to optical counterparts that reside in the same virialized halo that produces the DLAS. The contribution of LBGs in neighboring dark matter halos will be explored in future work, but is expected to be relatively unimportant.
The properties of the optical counterparts will place strong constraints on DLAS models. One constraint is the number of DLAS with optical counterparts. Figure 1a shows the distribution that we see in our models: eighty percent of DLAS do not have an optical counterpart with magnitude $`<25.5`$, while a rare five percent contain two or more such galaxies in the same halo. Lastly, we show the distribution of the optical impact parameter (Figure 1b) in our models. The optical impact parameter is the physical distance between the line of sight to the quasar and the centroid of the light distribution of the LBG. We obtain a broad distribution of optical impact parameter values in our model. Because of the large radial extent of our gas disks, the DLAS are often many stellar disk scale lengths from the center of the light distribution. Also, in the MDM (multiple disk model) scenario, with many galaxies in a single halo, sometimes the galaxy bright enough to be identified as an optical counterpart is not one of the galaxies giving rise to the DLAS: in this case very large separations are possible. Thus we expect the predictions about the optical impact parameter to be unique to the multiple disk model, and a useful way of distinguishing it from other models.
## References
Djorgovski, S. G. 1997, in Structure and Evolution of the Intergalactic Medium from QSO Absorption Line System, Proceedings of the 13th IAP Astrophysics Colloquium (Paris: Editions Frontieres), 303
Kauffmann, G. 1996, MNRAS, 281, 475
Maller, A. H. 1999, PhD thesis, Univ. California, Santa Cruz
Maller, A. H., Somerville, R. S., Prochaska, J. X. & Primack, J. R. 1999, in After the Dark Ages: When Galaxies were Young, ed. S. Holt & E. Smith (AIP Press), 102
Prochaska, J. X. & Wolfe, A. M. 1998, ApJ, 507, 113
Somerville, R. S. & Primack, J. R. 1999, MNRAS, in press
# New γ Doradus Stars from the Hipparcos Mission and Geneva Photometry
## 1. Introduction
The $`\gamma `$ Dor stars have amplitude variations of up to 0.1 mag in Johnson V and periods ranging from 0.4 to 3 days (Kaye, these proceedings). We searched two databases to find new members of this class of variable stars. The first one is the Hipparcos main mission photometric database. It contains a mean of 110 measurements for 118 204 stars brighter than magnitude 12.4 and is magnitude complete up to 7.3-7.9, depending on the galactic latitude $`b`$. As the sampling is ruled by the scanning law of the satellite, it is not affected by the aliasing around 1/day which might otherwise be a problem for detecting $`\gamma `$ Dor stars.
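As an aside, such a search can be reproduced today with a Lomb-Scargle periodogram restricted to the $`\gamma `$ Dor range; the sketch below uses astropy with an illustrative frequency grid and is not the tool used in the original analysis.

```python
import numpy as np
from astropy.timeseries import LombScargle

def gdor_candidate_period(t, mag, dmag):
    """Strongest peak in the gamma Dor period range (0.4-3 d).

    t, mag, dmag : epochs [d], magnitudes and their uncertainties
    """
    freq = np.linspace(1.0 / 3.0, 1.0 / 0.4, 20000)   # cycles per day
    power = LombScargle(t, mag, dmag).power(freq)
    best = np.argmax(power)
    return 1.0 / freq[best], power[best]              # period [d], peak power
```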
The second scanned database is the Geneva photometric catalogue (Burki & Kienzle, these proceedings); it contains 48 000 stars and 345 000 measurements in a seven colour system. The content of the Geneva catalogue is the union of more than 200 scientific programmes, including notably the Bright Star Catalogue south of $`\delta <+20^{\circ }`$.
## 2. Hipparcos main mission
Thousands of new variable stars were discovered by the Hipparcos satellite. During the analysis of the Hipparcos photometry, stars from the Periodic Catalogue having accurate parallaxes and colours were plotted in the HR diagram (Eyer 1998). A clump of stars just at the cool lower edge of the $`\delta `$ Scuti instability strip was present and gave rise to a list of 15 candidates (excluding redundant cases from the other studies). A clump is also present when plotting the variable stars of the Hipparcos Unsolved Catalogue.
## 3. Hipparcos main mission and Geneva photometry
A systematic search for new $`\gamma `$ Dor stars was undertaken in the Hipparcos periodic variable star catalogue, also using Geneva photometry and performing a multivariate discriminant analysis. This study led to a list of 14 new $`\gamma `$ Dor stars (Aerts et al. 1998). This method is stricter since information on amplitude, period, physical parameters and multiperiodic behaviour was taken into account.
## 4. The search in Geneva photometry
Finally, the Geneva photometric database was scanned to find F dwarf stars with high standard deviation. Eleven candidates were then measured with the 70-cm Swiss telescope, resulting in about 1000 photometric measurements, which are under study (Eyer & Aerts, in preparation). It turns out that about half of the suspected stars might be constant.
## 5. Spectroscopic measurements
In order to confirm the pulsational character of these stars, new spectra have been taken with the CORALIE spectrograph on the 1.2-m Swiss telescope at ESO-La Silla Observatory. The photometry and spectroscopy are necessary steps since we want to establish a robust list of new $`\gamma `$ Dor stars. Up to now 22 stars have been measured; the strategy consists of taking at least five spectra of each candidate. Among the stars, some are binaries, some are fast rotators and some show clear line profile variations (cf. Fig. 1). Some stars are too faint for the telescope size, thus correlation techniques are used to lower the noise level. The following step is to accumulate photometry and spectra for promising candidates in order to perform mode identification.
## References
Aerts, C., Eyer, L., & Kestens, E. 1998, A&A, 337, 790
Eyer, L. 1998, PhD Thesis, Geneva University
# The Formation and Fragmentation of Primordial Molecular Clouds
## 1 Introduction
Saslaw and Zipoy (1967) realized the importance of gas phase H<sub>2</sub> molecule formation in primordial gas for the formation of proto-galactic objects. Employing this mechanism in Jeans unstable clouds, Peebles and Dicke (1968) formulated their model for the formation of primordial globular clusters. Further pioneering studies in this subject were carried out by Takeda et al. (1969), Matsuda et al. (1969), and Hirasawa et al. (1969), who followed in detail the gas kinetics in collapsing objects and studied the possible formation of very massive objects (VMOs). In the 1980s the possible cosmological consequences of population III star formation were assessed (Rees and Kashlinsky 1983; Carr et al. 1984; Couchman and Rees 1986). In particular Couchman and Rees (1986) discussed first structure formation within the standard cold dark matter model. Their main conclusions were that the first objects might reheat and reionize the universe, raise the Jeans mass and thereby influence subsequent structure formation.
Early studies focused on the chemical evolution and cooling of primordial clouds by solving a chemical reaction network within highly idealized collapse models (cf. Hirasawa 1969; Hutchins 1976; Palla et al. 1983; MacLow and Shull 1986; Puy et al. 1996; Tegmark et al. 1997). Some hydrodynamic aspects of the problem were studied in spherical symmetry by Bodenheimer (1986) and Haiman, Thoul and Loeb (1996). Recently multi-dimensional studies of first structure formation have become computationally feasible (Abel 1995; Anninos & Norman 1996; Zhang et al. 1997; Gnedin & Ostriker 1997; Abel et al. 1998a, 1998b; Bromm et al. 1999). These investigations have provided new insights into the inherently multidimensional, nonlinear, nonequilibrium physics which determines the collapse and fragmentation of gravitationally and thermally unstable primordial gas clouds.
In Abel, Anninos, Norman & Zhang 1998a (hereafter AANZ) we presented the first self-consistent 3D cosmological hydrodynamical simulations of first structure formation in a standard cold dark matter dominated (SCDM) universe. These simulations included a careful treatment of the formation and destruction of H<sub>2</sub> , the dominant coolant in low mass halos ($`M_{tot}=10^5`$-$`10^8M_{\odot }`$) which collapse at high redshifts ($`z\sim 30`$-$`50`$). Among the principal findings of that study were: (1) appreciable cooling only occurs in the cores of the high density spherical knots located at the intersection of filaments; (2) good agreement was found with semi-analytic predictions (Abel 1995; Tegmark et al. 1997) of the minimum halo mass able to cool and collapse to higher densities; (3) only a small fraction ($`<10\%`$) of the bound baryons are able to cool promptly, implying that primordial Pop III star clusters may have very low mass. Due to the limited spatial resolution of those simulations ($`\sim 1`$ kpc comoving), we were unable to study the collapse to stellar densities and address the nature of the first objects formed.
In this paper we present new, higher-resolution results using the powerful numerical technique of adaptive mesh refinement (AMR, Bryan & Norman 1997; Norman & Bryan 1999) which has shed some light on how the cooling gas fragments. With an effective dynamic range of $`262,144`$ the numerical simulations presented here are the highest resolution simulations in cosmological hydrodynamics to date. Although we are not yet able to form individual protostars, we are able to resolve the collapsing protostellar cloud cores which must inevitably form them. We find the cores have typical masses $`\sim 200M_{\odot }`$, sizes $`\sim 0.3`$ pc, and number densities $`n\sim 10^5`$ cm<sup>-3</sup>, similar to dense molecular cloud cores in the Milky Way with one vital difference: the molecular hydrogen fraction is $`\sim 5\times 10^{4}`$, meaning the cores evolve very differently from Galactic cores.
The plan of this paper is as follows. The simulations are briefly described in Sec. 2. Results are presented in Sec. 3. The properties and fate of the primordial protostellar cloud are discussed in Sec. 4. Conclusions follow in Sec. 5. Results of a broader survey of simulations will be reported in Abel, Bryan & Norman (1999).
## 2 Simulations
The three dimensional adaptive mesh refinement calculations presented here use for the hydrodynamic portion an algorithm very similar to the one described by Berger and Colella (1989). The code utilizes an adaptive hierarchy of grid patches at various levels of resolution. Each rectangular grid patch covers some region of space in its parent grid needing higher resolution, and may itself become the parent grid to an even higher resolution child grid. Our general implementation of AMR places no restriction on the number of grids at a given level of refinement, or the number of levels of refinement. However, we do restrict the refinement factor, the ratio of parent to child mesh spacing, to be an integer (chosen to be 2 in this work). The dark matter is followed with methods similar to the ones presented by Couchman (1991). Furthermore, the algorithm of Anninos et al. (1997) is used to solve the time-dependent chemistry and cooling equations for primordial gas given in Abel et al. (1997). More detailed descriptions of the code are given in Bryan & Norman (1997, 1999), and Norman & Bryan (1999).
The simulations are initialized at redshift 100 with density perturbations of a SCDM model with $`\mathrm{\Omega }_B=0.06`$, $`h=0.5`$, and $`\sigma _8=0.7`$. The abundances of the 9 chemical species (H, H<sup>+</sup>, H<sup>-</sup>, He, He<sup>+</sup>, He<sup>++</sup>, H<sub>2</sub>, H<sub>2</sub><sup>+</sup>, e<sup>-</sup>) and the temperature are initialized as discussed in Anninos and Norman (1996). After a collapsing high-$`\sigma `$ peak has been identified in a low resolution run, the simulation is reinitialized with multiple refinement levels covering the Lagrangian volume of the collapsing structure. The mass resolution in the initial conditions within this region is $`0.53`$ ($`8.96`$) $`M_{\odot }`$ in the gas (dark matter). The refinement criteria ensure that: (1) the local Jeans length is resolved by at least 4 grid zones, and (2) no cell contains more than 4 times the initial mass element ($`0.53M_{\odot }`$). We limit the refinement to 12 levels within a $`64^3`$ top grid, which translates to a maximum dynamic range of $`64\times 2^{12}=262,144`$.
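In schematic form, the two refinement criteria amount to the following per-cell test (constants in cgs; the mean molecular weight and the isothermal sound speed are our illustrative simplifications, not the exact expressions used in the code):

```python
import numpy as np

G, KB, MH, MSUN = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33

def needs_refinement(rho, T, dx, m_cell, mu=1.22, m0=0.53 * MSUN):
    """Flag a cell for refinement: (1) the local Jeans length must be
    resolved by at least 4 zones, and (2) no cell may contain more than
    4 times the initial gas mass element."""
    cs = np.sqrt(KB * T / (mu * MH))                  # isothermal sound speed
    jeans_length = np.sqrt(np.pi * cs**2 / (G * rho))
    return (jeans_length < 4.0 * dx) | (m_cell > 4.0 * m0)
```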
## 3 Results
We find that primordial molecular clouds are only formed at the intersection of filaments, in agreement with the results of AANZ. The evolution of these primordial molecular clouds is marked by frequent mergers yielding highly complex velocity and density fields within the "virial" radius. In the following three sections, we first describe the evolution of these objects and then their morphology and structure.
### 3.1 Formation of the First Objects
To illustrate the physical mechanisms at work during the formation of the first cosmological object in our simulation, we show the evolution of various quantities in Figure 1. The top panel of this plot shows the virial mass of the largest object in the simulation volume. We divide the evolution up into four intervals. In the first, before a redshift of about 35, the Jeans mass in the baryonic component is larger than the mass of any non-linear perturbation. Therefore, the only collapsed objects are dark matter dominated, and the baryonic field is quite smooth. (We remind the reader that a change in the adopted cosmological model would modify the timing, but not the nature, of the collapse.)
In the second epoch, $`23<z<35`$, as the non-linear mass increases, the first baryonic objects collapse. However, these cannot efficiently cool and the primordial entropy of the gas prevents dense cores from forming. This is shown in the second frame of figure 1 by a large gap between the central baryonic and dark matter densities (note that while the dark matter density is limited by resolution, the baryonic is not, so the true difference is even larger). As mergers continue and the mass of the largest clump increases, its temperature also grows, as shown in the third panel of this figure. The H<sub>2</sub> fraction also increases (bottom panel).
By $`z\sim 23`$, enough H<sub>2</sub> has formed (a few $`\times 10^{4}`$), and the temperature has grown sufficiently high that cooling begins to be important. During this third phase, the central temperature decreases and the gas density increases. However, the collapse is somewhat protracted because around this point in the evolution the central density reaches $`n\sim 10^4`$ cm<sup>-3</sup>, and the excited states of H<sub>2</sub> are in LTE. This results in a cooling time which is nearly independent of density, rather than scaling as in the low-density limit where $`t_{cool}\propto \rho ^{1}`$ (e.g. Lepp & Shull 1983).
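The density dependence of the cooling time can be made explicit with a one-zone toy estimate. The interpolation between the low-density and LTE regimes below is a standard approximation, not the detailed rates used in the simulation; the LTE cooling rate per molecule and the critical density are temperature-dependent inputs.

```python
import numpy as np

KB = 1.381e-16   # erg/K

def t_cool_h2(n, T, x_h2, lam_lte, n_cr=1.0e4):
    """H2 cooling time with Lambda_per_molecule = Lambda_LTE/(1 + n_cr/n).
    For n << n_cr the rate per molecule scales with n, so t_cool ~ 1/n;
    for n >> n_cr it saturates and t_cool becomes density-independent."""
    lam = lam_lte / (1.0 + n_cr / n)             # erg/s per H2 molecule
    return 1.5 * n * KB * T / (x_h2 * n * lam)   # thermal energy / loss rate
```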
Finally, at $`z\sim 19`$, a very small dense core forms and reaches the highest resolution that we allowed the code to produce. It is important to note that at this point the maximum gas density in the simulations exceeds $`10^8\mathrm{cm}^{3}`$, and at these densities 3-body formation of molecular hydrogen will become dominant (see Palla et al. 1983). Also, the assumption of optically thin cooling begins to break down and radiative transfer effects become important. Therefore, only simulation results at and above this redshift will be discussed. It is worthwhile to note that the simulations presented here are physics rather than resolution limited.
### 3.2 Morphology
The increase in dynamic range by $`\sim 1000`$ in the simulations presented here as compared to AANZ allows us to investigate the fragmentation process in detail. Visualizations of the gas density and temperature on two different scales at $`z=19.1`$ are shown in color plate 3. In the upper left panel the velocity field is shown superimposed on the density. The $`5\times 10^5M_{\odot }`$ structure forms at the intersection of two filaments with overdensities of $`\sim 10`$. Most of the mass accretion occurs along these filaments. The complexity of the velocity field is evident; the accretion shock is highly aspherical and of varying strength. Within the virial radius ($`r=106`$ pc), there are a number of other cooling regions. The right-hand panels zoom in on the collapsing fragment. (Note that the smallest resolution element ($`0.02\mathrm{pc}`$) in the simulations is still 1600 times smaller than the slice shown in the right panels.) The small fragment in the center of this image has a typical overdensity of $`\gtrsim 10^6`$ and a mass of $`\sim 200M_{\odot }`$.
### 3.3 Profiles
Despite the complex structure of the primordial molecular clouds, much of their structure can be understood from spherical profiles of the physical quantities, particularly for the dense central core which is nearly spherical. Figure 2 shows mass-weighted, spherical averages of various quantities around the densest cell found in the simulation at redshift 19.1. Panel a) plots the baryon number density, enclosed baryon mass, and local Bonnor-Ebert mass versus radius; the Bonnor-Ebert mass is the analog of the Jeans mass but assumes an isothermal ($`\rho \propto r^{2}`$) instead of a uniform density distribution, $`M_{BE}\simeq 27M_{\odot }T_K^{1.5}/\sqrt{n}`$. Panel b) plots the abundances of H<sub>2</sub> and free electrons. Panel c) compares three timescales defined locally: the H<sub>2</sub> cooling time $`t_{H_2}`$, the freefall time $`t_{ff}=[3\pi /(32G\rho )]^{1/2}`$, and the sound crossing time $`t_{cross}=r/c_s=7.6\times 10^6r_{pc}/\sqrt{T_K}`$ yrs. In panel d) we identify two distinct regions, labeled I and II, as defined by the temperature profile. Region I ranges from outside the virial radius to $`r_{T_{min}}\sim 5\mathrm{pc}`$, the radius at which the infalling material has cooled down to $`T_{min}\sim 200\mathrm{K}`$, near the minimum temperature allowed by H<sub>2</sub> cooling. Within region I, the temperature profile reflects, in order of decreasing radius, cosmic infall, shock virialization, adiabatic heating in a settling zone, and an H<sub>2</sub> cooling flow. In region II, the temperature slowly rises from $`T_{min}`$ to $`\sim 400`$ K due to adiabatic heating.
For most of region I the H<sub>2</sub> cooling time $`t_{H_2}`$ is comparable to the free-fall time, as is illustrated in panel c) of Figure 2. The H<sub>2</sub> number fraction rises from $`7\times 10^{6}`$ to $`2\times 10^{4}`$ as the free electron fraction drops from $`2\times 10^{4}`$ to $`2\times 10^{5}`$. At $`r_{T_{min}}`$, the sound crossing time becomes substantially shorter than the cooling time. This suggests that region II is contracting quasi-hydrostatically on the cooling time scale, which approaches its constant high-density value at small radii. This constant cooling time of $`\sim 10^5`$ years sets the time scale of the evolution of the fragment until it can turn fully molecular via three-body associations. Inside $`r\sim 0.3`$ pc, the enclosed baryonic mass of $`\sim 200M_{\odot }`$ exceeds the local Bonnor-Ebert mass, implying that this material is gravitationally unstable. However, due to the inefficient cooling, its collapse is subsonic (panel e). The radius where $`M>M_{BE}`$ defines our protostellar cloud core.
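The diagnostics of Figure 2 are easily reproduced from the expressions quoted above. The sketch below evaluates the three timescales and the Bonnor-Ebert mass along the profiles; the mean molecular weight and the unit conversions are our own assumptions.

```python
import numpy as np

G, YR = 6.674e-8, 3.156e7
MU_MH = 1.22 * 1.673e-24     # assumed mean mass per particle [g]

def core_diagnostics(r_pc, n, T, m_enc):
    """r_pc [pc], n [cm^-3], T [K], m_enc [Msun] along the profile.
    Returns the free-fall time and sound crossing time [yr], the
    Bonnor-Ebert mass [Msun], and a mask of gravitationally unstable radii."""
    rho = MU_MH * n
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / YR
    t_cross = 7.6e6 * r_pc / np.sqrt(T)
    m_be = 27.0 * T**1.5 / np.sqrt(n)
    return t_ff, t_cross, m_be, m_enc > m_be
```

With $`T_{min}\sim 200`$ K and $`n\sim 10^5\mathrm{cm}^{3}`$, the last expression gives $`M_{BE}\approx 240M_{\odot }`$, the characteristic core mass discussed below.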
## 4 Discussion
Many interesting features of the collapsing and fragmenting "primordial molecular cloud" are identified. Most notable is the formation of an initially quasi-hydrostatically contracting core of $`\sim 200M_{\odot }`$ which becomes gravitationally unstable. We argue that this is a characteristic mass scale for core formation mediated by H<sub>2</sub> cooling: substituting $`T_{min}`$ and $`n_{LTE}`$ into the formula for the Bonnor-Ebert mass we get $`240M_{\odot }`$.
What will be the fate of the collapsing core? Within the core the number densities increase from $`10^5`$ to $`10^8\mathrm{cm}^{3}`$. For densities $`\gtrsim 10^8\mathrm{cm}^{3}`$, however, three-body formation of H<sub>2</sub> will become the dominant formation mechanism, transforming all hydrogen into its molecular form (Palla et al. 1983). Our chemical reaction network does not include this reaction and the solution cannot be correct at $`r\lesssim 0.1\mathrm{pc}`$. The most interesting effect of the three-body reaction is that it will increase the cooling rate by a factor of $`\sim 10^3`$, leading to a further dramatic density enhancement within the core. This will decrease the dynamical timescales to $`\sim 100`$ years, effectively decoupling the evolution of the fragment from the evolution of its host primordial molecular cloud. Therefore, it is a firm conclusion that only the gas within these cores can participate in population III star formation.
Omukai & Nishi (1998) have simulated the evolution of a collapsing, spherically symmetric primordial cloud to stellar density including all relevant physical processes. Coincidentally, their initial conditions are very close to our final state. Based on their results, we can say that if the cloud does not break up, a massive star will be formed. Adding a small amount of angular momentum to the core does not change this conclusion (Bate 1998). A third possibility is that the cloud breaks up into low mass stars via thermal instability in the quasi-hydrostatic phase. Silk (1983) has argued that, due to the enhanced cooling from the 3-body produced H<sub>2</sub>, fragmentation of this core might continue until individual fragments are opacity limited (i.e. they become opaque to their cooling radiation). Exploring which of these scenarios is correct will have to await yet higher resolution simulations including the effects of radiative transfer. It will also be interesting to examine the possible effects of molecular HD which, although much less abundant, is a much more efficient coolant at low temperatures.
How many cores are formed in our halo? Because our timestep contracts rapidly once the first core forms, we are not yet able to answer this question definitively. An earlier, less well resolved simulation yielded 5-6 cores by $`z=16.5`$, suggesting that multiple cores do form. We speculate that the total number of cores to eventually form will be proportional to the total amount of cooled gas. However, the first star in a given halo will most likely always be formed close to its center, where the dynamical timescale is shortest. The cooling timescale at $`r_{T_{min}}`$ in Figure 2 of $`\gtrsim 10^6`$ years should roughly correspond to the typical formation time of fragments. During this time the product of the first collapsed fragment might already be an important source of feedback. Hence, even for the question of the efficiency of fragmentation it seems that feedback physics has to be included.
Let us assume the first $`200M_{\odot }`$ cores fragment to form stars with 100% efficiency. If the number of UV photons produced per solar mass is the same as in present day star clusters, then about $`6\times 10^{63}`$ UV photons would be liberated during the average lifetime of a massive star ($`5\times 10^7`$ years). This is about a hundred times more than the $`4\times 10^{61}`$ hydrogen atoms within the virial radius. However, the average recombination time $`(nk_{rec})^{1}\sim 5\times 10^5`$ years within the virial radius is a factor of 100 less than the average lifetime of a massive star. Hence, very small or zero UV escape fractions for these objects are plausible. However, the first supernovae and winds from massive stars will substantially change the subsequent hydrodynamic and chemical evolution as well as the star formation history of these objects. A more detailed understanding of the role of such local feedback will have to await yet more detailed simulations that include the poorly understood physics of stellar feedback mechanisms.
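This photon-counting argument is straightforward to quantify. The sketch below adopts a common power-law fit for the case-B recombination coefficient (an assumption on our part, not a value from the paper) together with the numbers quoted above:

```python
def uv_escape_budget(n_H=0.25, T=1.0e4, N_uv=6.0e63, N_H=4.0e61,
                     t_star_yr=5.0e7):
    """Ratio of UV photons produced to recombinations over a massive
    star's lifetime; a value near or below 1 implies a small escape
    fraction.  n_H is chosen to reproduce the ~5e5 yr recombination
    time quoted in the text."""
    alpha_B = 2.6e-13 * (T / 1.0e4) ** -0.7     # case-B coefficient [cm^3/s]
    t_rec_yr = 1.0 / (n_H * alpha_B) / 3.156e7
    recombinations = N_H * t_star_yr / t_rec_yr
    return N_uv / recombinations
```

With these numbers the photon supply exceeds the recombination demand by only a factor of about 1.5, consistent with the plausibility of very small escape fractions.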
Since the collapsing core evolves on much faster timescales than the rest of the halo, it seems plausible that the first star (or star cluster) will have a mass less than or of the order of the core mass. It also seems quite clear that the radiative feedback from this star (or these stars) will eventually halt further accretion. As a consequence, this might suggest that the formation of very massive objects or supermassive black holes is unlikely. This latter speculation will be tested by the yet higher resolution simulations we are currently working on.
Recently Bromm, Coppi, and Larson (1999) have studied the fragmentation of the first objects in the universe. The results of their simulations, which use a smoothed particle hydrodynamics technique with isolated boundary conditions, disagree with the results presented here. Their objects collapse to a disk which then fragments quickly to form many fragments throughout the rotationally supported disk. The efficiently fragmenting disk in those simulations originates from the assumed idealized initial conditions: these authors simulated top-hat spheres that initially rotate as solid bodies, on which smaller density fluctuations were imposed. Naturally they find a disk. It is also clear that for a top-hat, if the disk breaks up, it will do so everywhere almost simultaneously. Our results with realistic initial conditions do not lead to a disk and form the first fragment close to the center of the halo.
## 5 Conclusions
We have reported first results from an ongoing project that studies the physics of fragmentation and primordial star formation in a cosmological context. The results clearly illustrate the advantages and power of structured adaptive mesh refinement cosmological hydrodynamic methods to cover a wide range of mass, length and timescales. All findings of AANZ are confirmed in this study. Among other things, these are that 1) a significant number fraction of hydrogen molecules is only formed in virialized halos at the intersection of filaments, and 2) only a few percent of the halo gas has cooled to $`T\ll T_{vir}`$.
The improvement of a factor of $`\sim 1000`$ in resolution over AANZ has given new insights into the details of the fragmentation process and constraints on the possible nature of the first structures: 1) Only $`\lesssim 1\%`$ of the baryons within a virialized object can participate in population III star formation. 2) The formation of supermassive black holes or very massive objects in small halos seems very unlikely. 3) Fragmentation via Bonnor-Ebert instability yields a $`\sim 200M_{\odot }`$ core within one virialized object. 4) If the gas were able to fragment further through 3-body H<sub>2</sub> association and/or opacity limited fragmentation, only a small fraction of all baryons in the universe would be converted into small mass objects. 5) The escape fraction of UV photons above the Lyman limit should initially be small due to the high column densities of HI ($`N_{HI}\sim 10^{23}\mathrm{cm}^{2}`$) of the parent primordial molecular cloud. 6) The first star in the universe is most likely born close to the center of its parent halo of $`\gtrsim 10^5M_{\odot }`$.
This work is supported in part by NSF grant AST-9803137 under the auspices of the Grand Challenge Cosmology Consortium (GC<sup>3</sup>). NASA also supported this work through Hubble Fellowship grant HF-0110401-98A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc under NASA contract NAS5-26555. Tom Abel acknowledges support from NASA grant NAG5-3923 and useful discussions with Karsten Jedamzik, Martin Rees, Zoltan Haiman, and Simon White.
# The HST view of the FR I / FR II dichotomy

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555 and by STScI grant GO-3594.01-91A.
## 1 Introduction
The original classification of extended radio galaxies by Fanaroff & Riley (1974) is based on a morphological criterion, i.e. edge darkened (FR I) vs edge brightened (FR II) radio structure. It was later discovered that this dichotomy corresponds to a (continuous) transition in total radio luminosity (at 178 MHz) which formally occurs at $`L_{178}=2\times 10^{33}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>. The presence of radio sources with intermediate morphology, in which typical FR I structures (such as extended plumes and tails) are seen together with features characteristic of FR II sources (narrow jets and hot spots) (see e.g. Parma et al. 1987, Capetti et al. 1995), argues in favour of a continuity between the two classes.
From the optical point of view both FR I and FR II are associated with various sub-classes of elliptical-like galaxies, but statistically their populations are different (Zirbel 1996). Owen (1993) found that the FR I/FR II division is also linked to the optical magnitude of the host galaxy, possibly suggesting that the environment plays an important role in producing different extended radio morphologies. Moreover, FR II are generally found in regions of lower galaxy density and are more often associated with galaxy interactions with respect to FR I (Prestage & Peacock 1988, Zirbel 1997). Differences are also observed in the optical spectra: while FR I are generally classified as weak-lined radio galaxies, strong (narrow and broad) emission lines are often found in FR II (Morganti et al. 1992, Zirbel & Baum 1995), although a sub-class of weak-lined FR II is also present (Hine & Longair 1979).
Within the unification scheme for radio-loud AGN (for a review, see Urry & Padovani 1995), FR I and FR II radio galaxies are thought to represent the parent population of BL Lac objects and radio-loud quasars, respectively (Antonucci & Ulvestad 1985, Barthel 1989). In order to explain the lack of broad lines in the "mis-oriented" (narrow-lined) FR II-type objects, obscuration by a thick torus is invoked. A combination of obscuration and beaming is therefore necessary at least for the FR II-quasar unification (e.g. Antonucci & Barvainis 1990). However, there is evidence that this simple picture is probably inadequate: some radio-selected BL Lacs, among the most powerful sources in the class, display an extended radio structure and luminosity typical of FR II (Kollgaard et al. 1992, Murphy et al. 1993), and broad (although weak) lines have been observed in some BL Lacs. Moreover, Owen et al. (1996) noted that the lack of BL Lacs in a sample of radio galaxies located in Abell clusters can be an effect of their selection criteria if the parent population of BL Lacs includes both FR I and FR II. This idea is also consistent with a recently proposed modification of the unification scheme, which claims that the weak-lined FR II are indeed associated with BL Lac objects (Jackson & Wall 1999). These observations can, however, be reconciled with the unification scenario once continuity between the weak and powerful radio-loud sources is allowed, and thus transition objects are expected.
In Chiaberge et al. (1999, hereafter Paper I) we studied HST images of all FR I radio galaxies belonging to the 3CR catalogue, finding that unresolved nuclear sources are commonly present in these objects. A strong linear correlation is found between this optical and the radio core emission, extending over four orders of magnitude in luminosity. This, together with spectral information, strongly argues for a common non-thermal origin, and suggests that the optical cores can be identified with synchrotron radiation produced in a relativistic jet, qualitatively supporting the unifying model for FR I and BL Lacs. Furthermore, the high detection rate ($`\gtrsim 85\%`$) of optical cores in the complete sample indicates that a standard pc-scale geometrically thick torus is not present in these low-luminosity radio galaxies. Any absorption structure, if present, must be geometrically thin, and thus the lack of broad lines in FR I cannot be attributed to obscuration. Alternatively, thick tori are present only in a minority of FR I. Given the dominance of non-thermal emission, the optical core luminosity also represents a firm upper limit to any thermal component, suggesting that accretion might take place in a low efficiency radiative regime.
The picture which emerges from this analysis is that FR Is lack substantial thermal (disc) emission, Broad Line Regions and obscuring tori, which are usually associated with radio-quiet and powerful radio-loud AGN.
As a natural extension of Paper I, here we study the HST images of a sample of low redshift FR II radio galaxies, in order to explore how the differences in radio morphology are related to the optical nuclear properties. In particular, one of the most important questions is whether the FR I/FR II dichotomy is generated by two different manifestations of the same astrophysical phenomenon, and the transition between the two classes is indeed continuous, or instead it reflects fundamental differences in the innermost structure of the central engine.
The selection of the sample is presented and discussed in Sect. 2, while in Sect. 3 we describe the HST observations. In Sect. 4 we focus on the detection and photometry of the optical cores. Finally, in Sect. 5 we discuss our findings.
## 2 The sample
The sample considered here comprises all radio galaxies belonging to the 3CR catalogue (Spinrad 1985) with redshift $`z<0.1`$, morphologically classified as FR II. We directly checked their classification for erroneous or doubtful identifications by searching the literature for the most recent radio maps. The final list (see Table 1) constitutes a complete, flux and redshift limited sample of 26 FR II radio galaxies.
We searched for optical spectral classification and/or optical spectra, in order to differentiate our sources on the basis of the presence of broad or narrow emission lines. For only one source (namely 3C 136.1) we could not find spectral information in the literature. All spectral types usually associated with FR II galaxies are represented in the sample: five objects are BLRG, fifteen are classified as NLRG, while four show only weak lines in their optical spectrum (WLRG). The remaining source, namely 3C~371, has been classified as a BL Lac object. In Table 1 redshifts and radio data are reported, as taken from the literature, together with the optical spectral classifications.
In Fig. 1 we show the redshift vs total radio luminosity diagram for the sample of FR II galaxies, together with the sample of FR I discussed in Paper I, but limited to sources with $`z<0.1`$ for coherence with the FR II sample. FR II have a median redshift $`z=0.06`$, and total radio luminosities at 178 MHz between $`10^{32}`$ and $`10^{34}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> ($`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$ are adopted hereafter). Notice that whereas the two samples are selected at the same limits of redshift and flux, FR II are, on average, more luminous and distant than FR I.
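For reference, the conversion from an observed 178 MHz flux density to $`L_{178}`$ in the adopted $`q_0=0.5`$ (Einstein-de Sitter) cosmology can be sketched as follows; the spectral index entering the K-correction is an assumption, since the value used is not specified here.

```python
import numpy as np

C_KM_S, MPC_CM = 2.998e5, 3.086e24

def L178(S_jy, z, H0=75.0, alpha=0.8):
    """Rest-frame 178 MHz luminosity [erg/s/Hz] for S_nu ~ nu^-alpha."""
    # Einstein-de Sitter luminosity distance in Mpc
    d_L = (2.0 * C_KM_S / H0) * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z))
    d_cm = d_L * MPC_CM
    return 4.0 * np.pi * d_cm**2 * (S_jy * 1.0e-23) * (1.0 + z) ** (alpha - 1.0)
```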
## 3 HST observations
HST observations of the FR II sources are available in the public archive (up to April 1999) for 25 out of the 26 sources (only 3C 33 and 3C 105 have not been observed). The HST images were taken using the Wide Field and Planetary Camera 2 (WFPC2). The whole sample was observed using the F702W filter as part of the HST snapshot survey of 3C radio galaxies (Martel et al. 1999, De Koff et al. 1996). For 3C 192 we used an F555W image, as this source was not observed with the F702W filter. Exposure times are in the range 140-300 s. The data have been processed through the standard PODPS (Post Observation Data Processing System) pipeline for bias removal and flat fielding (Biretta et al. 1996). Individual exposures in each filter were combined to remove cosmic ray events.
## 4 Optical cores in FR II
In Paper I we adopted a simple operative approach, based on the analysis of the nuclear brightness profile, in order to establish when an optical core is present in a radio galaxy. As in the case of FR I sources, the FWHM fall into two very distinct regimes: in 11 cases we measured FWHM = 0.05″-0.08″, i.e. indicative of the presence of an unresolved source at the HST resolution, while in 8 cases we found widths larger than 0.2″. We therefore believe that no ambiguity exists on whether or not a central unresolved source is present.
In three sources (3C 382, 3C 390.3 and 3C 445) the central regions are saturated. While, on the one hand, this prevents us from deriving their brightness profile, on the other it is by itself a clear indication of a point-like source. In fact, diffuse emission would produce saturation with our instrument configuration and exposure times only for surface brightness $`<13`$ mag arcsec<sup>-2</sup> in the R band, much brighter than typically observed in the central regions of radio galaxies at this redshift. Furthermore, in all these sources we observe diffraction rings and spikes, the characteristic hallmarks of the HST Point Spread Function.
We performed aperture photometry of these components. The background level is evaluated, as in Paper I, by measuring the intensity at a distance of $`5`$ pixels ($`0.23^{\prime \prime }`$) from the center. The dominant photometric error is thus the determination of the background in regions of steep brightness gradients, especially for the faintest cores, resulting in a typical error of $`\sim 10\%`$. For the saturated cores we evaluated the fluxes by comparing the PSF wings with those of several bright stars seen in archival HST images taken with the same filter. This method leads to somewhat larger uncertainties, $`\sim 25\%`$, as estimated from the scatter of measures obtained with different reference stars, which arise from the time dependent structure of the HST PSF. In Table 1 we report fluxes and luminosities of the optical cores.
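A schematic version of this photometry, with the aperture and background definitions simplified for illustration, is:

```python
import numpy as np

def core_photometry(img, x0, y0, r_bkg=5.0, box=1):
    """Sum the counts in a small box around the core and subtract a
    background measured on a ring r_bkg pixels from the centre; the
    ~10% error quoted in the text is adopted as a flat fraction."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    bkg = np.median(img[np.abs(r - r_bkg) < 0.5])     # local background level
    aperture = (np.abs(xx - x0) <= box) & (np.abs(yy - y0) <= box)
    flux = np.sum(img[aperture] - bkg)
    return flux, 0.10 * flux
```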
All of the images were taken using broad band filters, which include emission lines. In particular, the F702W transmission curve covers the wavelength range 5900-8200 Å and thus within our redshift range includes the H$`\alpha `$ and \[N II\] emission lines. Unfortunately, no HST narrow band images are available for the NLRG and WLRG; however, we expect the line contamination to be small, due to the wide spectral region covered by the filters used ($`\sim 2000`$ Å) with respect to typical line equivalent widths. We correct the broad band fluxes only in the case of BLRG, where the emission of broad lines is probably co-spatial with the optical core, using ground-based data taken from the literature (Zirbel & Baum 1995). The resulting emission line contribution is typically 25-30% of the total flux measured in the F702W filter.
In 8 cases the nuclear regions show only diffuse emission: for such galaxies we estimate upper limits to a possible central component by evaluating the light excess of the central 3x3 pixels with respect to the surrounding galaxy background. The remaining 3 sources show complex morphologies, e.g. with dust lanes obscuring the central regions, and no photometry was performed.
FR II cores span a wide range of optical luminosities $`L_o`$ (from $`10^{25.5}`$ up to $`10^{30}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>). In Fig. 3 we report the optical core versus radio (5 GHz) core luminosity for the FR II sample, superimposed on the data (as from Paper I) for FR I galaxies, limiting ourselves, for consistency, to those with redshift $`z<0.1`$. FR II show a complex behavior, which however seems to be related to their optical spectral classification.
Let us first consider the blazar, 3C 371. As one might expect, since its emission is dominated by beamed synchrotron radiation, it is among the brightest sources in both the radio and the optical band (see Fig. 3). In the $`L_r`$ vs $`L_o`$ plane it falls at the low luminosity end of the region defined by radio selected blazars (Chiaberge et al., in preparation).
The second group of sources is represented by the BLRG (3C 111, 3C 227, 3C 382, 3C 390.3 and 3C 445): all of them have very luminous optical cores ($`L_o>10^{28}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>), being, together with 3C 371, the most powerful objects of the sample, and they clearly separate from the other FR II in the diagram. Notice that they have an optical excess (or radio deficiency) of up to 2 orders of magnitude with respect to the radio-optical core luminosity correlation found for FR I.
In 5 WLRG and NLRG, namely 3C 88, 3C 285, 3C 388, 3C 402 and 3C 403, we detected optical cores which share the same region of the luminosity plane as FR I sources, with luminosities from $`L_o=10^{25.5}`$ up to more than $`10^{27.5}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>.
The 8 upper limits, all associated with WLRG or NLRG, are also plotted. Four objects (3C 35, 3C 98, 3C 192 and 3C 326) lie close to the correlation defined by FR I. Conversely, 3C 15, 3C 353, 3C 452 and 3C 236 show, given their radio core emission, an optical luminosity deficit of one to two orders of magnitude with respect to FR I.
No radio core data have been found in the literature for 3C 136.1, 3C 198 and 3C 318.1.
## 5 Discussion
In the 24 nearby FR II radio galaxies (out of a complete sample of 26) studied in this paper, 13 show an unresolved optical core; in 8 cases we can only set an upper limit to its luminosity; and 3 sources present a complex nuclear morphology. The location in the optical-radio core luminosity plane is clearly connected to the optical spectral classification.
Conversely, cores in FR I radio galaxies show a linear correlation between their radio and optical luminosities, which strongly argues for a common non-thermal origin. The presence of such a correlation provides a useful benchmark to investigate the origin of optical cores in FR II.
In the following we consider each FR II group separately.
### 5.1 Broad Line Radio Galaxies
Let us first focus on BLRG. These sources present, overall, a strong optical excess (or radio deficit), of up to two orders of magnitude, with respect to the correlation defined by the FR I cores (see Fig. 3). Note that in the sample of nearby FR I, the only source lying well above the radio-optical correlation is 3C 386, which also shows a broad H$`\alpha `$ line.
BLRG are objects in which the innermost nuclear regions are thought to be unobscured along our line of sight (Barthel 1989). We therefore expect the presence of a thermal/disc component: indeed this might dominate over any synchrotron jet radiation and thus be responsible for the observed emission. The idea that we see directly an accretion disc is supported by several observations: in the case of 3C 390.3, a bump in the spectral energy distribution has been interpreted as radiation emitted by a disc component with intermediate inclination (Edelson & Malkan 1986). Furthermore, the broad and double peaked H$`\beta `$ line observed in this source as well as in other BLRG (see e.g. Eracleous & Halpern 1994) can be accounted for within a relativistic accretion disc model (Perez et al. 1988).
The location of 3C 111 is puzzling, as it lies along the correlation. However, several pieces of evidence point to the idea that beamed radiation from the relativistic jet contributes significantly in this source: it has the largest core dominance among the BLRG of our sample; superluminal motions with apparent speed $`v`$ ≈ 3.4$`c`$ have been revealed in the inner jet (Vermeulen & Cohen 1994), implying that the angle between the line of sight and the jet axis is smaller than 30°; the radio core is strongly variable and polarized (Leahy et al. 1997). Furthermore a broad K$`\alpha `$ iron line is detected in the X-ray band, but with a relatively small equivalent width, which can be explained if the continuum emission is diluted by a beamed component (Reynolds et al. 1998). However, its total radio extent of ∼250 kpc argues against a viewing angle typical of blazars. Thus 3C 111 appears to be a transition source between radio galaxies and blazars, seen at an angle sufficiently small that jet beaming already affects its nuclear properties.
### 5.2 WLRG and NLRG with optical cores
Let us now concentrate on the NLRG and WLRG in which we detected optical cores. These objects (2 WLRG and 3 NLRG), all with FR II radio morphology, have cores with radio and optical emission properties that are completely consistent with those found in FR I. This suggests that in these sources the nuclear emission is similarly dominated by synchrotron radiation from the inner jet.
In the case of FR I, based on the high fraction of detected nuclei, we suggested that any obscuring material must be geometrically thin and thus the absence of broad lines and the relative weakness of any thermal (disc) component with respect to the synchrotron emission cannot be ascribed to extinction.
For FR II with FR I-like nuclei (of which we do not have enough statistics) there is an alternative possibility, namely that the optical core (jet) emission is produced outside the obscuring torus (and thus outside the BLR). In this sense they would represent transition objects seen at an intermediate angle between the completely obscured and unobscured ones. However, VLBI observations show that radio cores are unresolved on scales of ∼0.1 pc in nearby radio galaxies, and this suggests that their optical counterparts have a similar extent, as already discussed in Paper I. Furthermore, a symmetric jet-counterjet structure has been observed in several radio sources, implying that they lie essentially in the plane of the sky (Giovannini et al. 1998). If the core emission is indeed produced outside the torus, at a distance of, say, ∼1 pc from the central black hole, a clear separation between the two sides of the jet (and no stationary core) should be observed in these highly misoriented objects (although, at present, symmetric jets have been found only in FR I). This ad hoc geometrical model does not seem to be viable, but a conclusive test requires spectropolarimetry looking for polarized scattered broad lines. A further indication could be obtained from the comparison of the nuclear infrared (reprocessed?) luminosity of FR Is and FR IIs with FR I-like nuclei of similar optical luminosity.
We conclude that these FR II are intrinsically narrow-lined objects which are, in every aspect except their extended radio morphology, similar to FR I. Note that the presence of an FR I-like nucleus in an FR II does not seem to be connected with the total (radio) luminosity or redshift of the galaxy. In fact, the total power of such sources spans the range $`L_{178}=10^{32}`$–$`10^{33.5}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> and the redshifts are between $`z=0.025`$ and $`0.091`$, completely overlapping with the entire sample and not limited, as one might expect, to the low luminosity end.
Conversely, it appears that a possible relationship exists between the occurrence of FR I-like nuclei in FR II and the environment, as all these 5 galaxies reside in clusters. This result can be particularly important, as it is known that FR I and FR II inhabit different environments, with FR II generally avoiding rich groups, especially at low redshifts (Zirbel 1997), while FR I are usually located in rich clusters. However, before any firm conclusion can be drawn about this issue, a larger sample of objects has to be considered.
### 5.3 WLRG and NLRG without optical cores
In 8 galaxies, all WLRG or NLRG, we do not detect the presence of an unresolved nuclear component. Four sources are located above or very close to the FR I correlation. They are consistent with being objects in which an optical counterpart to the radio core is present, but it is too faint to be seen against the bright background of the host galaxy.
The remaining 4 objects (3 NLRG and 1 WLRG) are certainly more interesting, since they lie 1–2 orders of magnitude below the correlation. They therefore lack not only a BLR, but also the expected optical counterpart of the radio core. However, note that these sources have radio core luminosities which cover the same range as the BLRG. According to the prescriptions of the unification schemes, they can well be the obscured counterparts of BLRG. Noticeably, excluding the blazar 3C 371, the BLRG and these obscured sources clearly distinguish themselves by having the brightest radio cores among the FR II.
## 6 Conclusions
In Chiaberge et al. (1999) we discovered that FR I nuclei lie in the radio-optical luminosity plane along a tight linear correlation. We argued that this is due to a common synchrotron origin for both the radio and optical emission. FR I nuclei must also be unobscured and intrinsically lacking a BLR and significant thermal emission from any powerful accretion disc.
In order to explore how the differences in radio morphology are related to the optical nuclear properties, we analyzed HST images of 24 extended radio galaxies morphologically classified as FR II, belonging to the 3C catalog and with $`z<0.1`$. We detected optical cores in 13 sources, which implies that the covering fraction of any obscuring material is less than $`0.54`$ or, equivalently, that the torus has an opening angle of 63°. This can be even larger if at least some of the upper limits are actually just below the detection threshold. Notice that our determination of this critical angle is inconsistent with the division between higher redshift ($`0.5<z<1`$) 3CR quasars and radio galaxies, which has been found to occur at θ ≈ 45° (Barthel 1989). This might be a problem; however, the low redshift selection of our sources does not allow us to derive any firm conclusion. We are currently studying a larger and higher redshift sample in order to further investigate this issue (Chiaberge et al. in preparation).
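The quoted opening angle follows from simple biconical geometry: if the detected fraction of optical cores, 13/24 ≈ 0.54, is read as the unobscured solid-angle fraction of two polar cones of half-opening angle θ, then 1 − cos θ = 13/24 gives θ ≈ 63°. A one-line check (the biconical geometry is the usual assumption here, not spelled out in the text):

```python
import math

f_unobscured = 13.0 / 24.0            # detected optical cores / observed sample
theta = math.degrees(math.acos(1.0 - f_unobscured))
print(f"torus half-opening angle = {theta:.0f} deg")   # ~63
```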
Our results suggest that the radio morphology is not uniquely connected with the optical properties of the innermost structure of radio galaxies. In fact, at least at low redshifts, there is not a single homogeneous population of FR II: unlike FR I, they show a complex behavior, which is however clearly related to their optical spectral classification.
In BLRG the optical nuclei are likely to be dominated by thermal (disc) emission. As discussed above, emission line contamination cannot account for this excess. In agreement with the current unification scheme of radio loud AGNs, we also identify their possible obscured counterparts. It seems that broad lines and obscuring tori are closely linked, and that both are present only in association with radiatively efficient accretion.
We also find five FR II sources, spectrally identified as narrow lined objects, which harbor nuclei essentially indistinguishable from those seen in FR I. By analogy with FR I, we argue that their optical nuclear emission is produced primarily by synchrotron radiation, that they are not obscured along our line of sight, and that they therefore intrinsically lack a BLR.
Clearly, a classification based on the optical nuclear properties, as seen in these HST images, is more likely to reflect true similarities (or differences) in the nature of the central engine (such as, e.g., the rate of radiative dissipation in the accretion disc) than the traditional dichotomy of radio morphology.
From our data and within the limits of the available statistics, we find no evidence of a continuous transition between the two classes (FR I and FR II), as they are well separated in the $`L_r`$ vs $`L_o`$ plane. At this stage we only point out that sources with core luminosities $`L_o<10^{27.5}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> (or equivalently $`L_r<10^{31}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>) have FR I like nuclei, while FR II start above this threshold.
It is of particular interest that a significant fraction of FR II (at least 30%, but possibly as large as 50%, depending on the nature of the sources without detected optical nuclei) have FR I-like nuclei. The fact that all of these are located in clusters, an environment typical of FR I, might represent an important hint on the origin of the different flavours of radio galaxies, worth exploring through the study of a larger sample of objects.
These results also have interesting bearings from the point of view of the unified models. In fact, this picture argues against the idea that all FR II radio galaxies constitute the parent population of radio-loud quasars. We propose instead that galaxies with FR II morphology and an FR I-like core are possibly misaligned counterparts of BL Lac objects. This can account for the observation that some radio-selected BL Lacs show radio morphologies more consistent with FR II than with FR I (e.g. Kollgaard et al. 1992).
To conclude, we note that all of the galaxies included in our sample are low redshift objects with total radio powers not exceeding $`L_{178}`$ ∼ $`10^{27}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>: thus a crucial observational issue is to understand whether these results hold for higher power/redshift samples or are limited to low luminosity FR II. This will be explored in a forthcoming paper.
###### Acknowledgements.
We thank Jim Pringle for insightful suggestions and Edo Trussoni for useful comments on the manuscript. The authors acknowledge the Italian MURST for financial support. This research was supported in part by the National Science Foundation under Grant No. PHY94-07194 (A. Celotti).
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# Disorder and Interaction in 2D: Exact diagonalization study of the Anderson-Hubbard-Mott model
## Abstract
We investigate, by numerically calculating the charge stiffness, the effects of random diagonal disorder and electron-electron interaction on the nature of the ground state in the 2D Hubbard model through the finite size exact diagonalization technique. By comparing with the corresponding 1D Hubbard model results and by using heuristic arguments we conclude that it is unlikely that there is a 2D metal-insulator quantum phase transition although the effect of interaction in some range of parameters is to substantially enhance the non-interacting charge stiffness.
Understanding the nature of the ground state in an interacting disordered electron system is one of the most formidable and interesting challenges in condensed matter physics. A question of fundamental importance is whether the ground state of an interacting disordered electron system is a metal, an insulator, or some other state (e.g. superconductor). This question takes on particular significance in two-dimensional (2D) systems where it is generally accepted that (1) the disordered 2D system in the absence of any interaction is a localized (weakly localized for weak disorder) insulator, and (2) the interacting clean 2D system (without any disorder) is a Fermi liquid metal at high electron densities (and a Wigner crystal at low electron densities). Little is known about the disordered interacting system when both disorder and interaction are strong and of comparable magnitudes so that neither may be treated as a perturbation. A notable attempt by Finkelstein to analytically explore the nature of the disordered interacting electron system remains inconclusive as the theory flows toward strong coupling. Recently renewed interest has developed in this subject with much of the current motivation arising from a set of experimental measurements on the low temperature transport properties of low density 2D electron (or hole) systems confined in Si MOSFETs and GaAs heterostructures. These transport measurements (carried out as a function of carrier density) have been interpreted by many (but not all) as exhibiting evidence for a 2D metal-insulator quantum phase transition (M-I-T) with the system being a metal at high density $`n(>n_c)`$ and an insulator at low density $`(n<n_c)`$ with $`n_c`$ as the critical density separating the two phases. If true this would be a striking example of an interaction driven quantum phase transition since changing density $`(n)`$ is equivalent to tuning the effective ratio of the interaction energy to the non-interacting kinetic energy of the system. Much interest has naturally focused on this possible 2D M-I-T quantum phase transition, particularly because the corresponding non-interacting disordered 2D electron system is thought on rather firm grounds to be always localized (Anderson localization) and therefore strictly an insulator at T=0 in the thermodynamic limit. (Much of the debate in interpreting the experimental data relating to the 2D M-I-T phenomenon arises from the fact that the experiments are necessarily done at finite temperatures and in finite systems, whereas the theoretical quantum phase transition is a T=0 infinite system phenomenon.) If such a 2D MIT exists it is of great interest because the metallic phase must be a non-Fermi liquid since it cannot be adiabatically connected to the corresponding insulating noninteracting 2D disordered system.
In this letter we address the nature of the ground state of a disordered interacting 2D electron system numerically by exactly diagonalizing the few particle 2D interacting Hamiltonian and doing a disorder averaging. We use the extensively studied 2D Hubbard model and its natural extensions for our exact diagonalization calculations. We study the effects of both on-site (as in the standard Hubbard model) and longer range interactions, whereas the disorder in our model is a random on-site disorder of strength $`W`$ (with $`W`$ denoting the width of the square distribution from which the on-site disorder energy is randomly chosen). Without interaction, our model is the 2D Anderson model, which has a localized insulating ground state, whereas without disorder our model is the Mott-Hubbard model, which has an extended metallic ground state away from half filling. We restrict ourselves to low "metallic" filling factors (typically less than quarter filling) because our central interest is in understanding the continuum systems, and also because we want to stay below half filling where the 2D Hubbard model has an interaction driven Mott transition. Our typical exact diagonalization study uses the Lanczos technique for $`N=6`$ electrons (with spin) on a $`4\times 4`$ 2D lattice, corresponding to a filling of $`\nu =6/32=3/16`$. This involves the diagonalization of matrices of size $`313600\times 313600`$. We typically average over 10 disorder realizations. Following standard notations, three parameters $`t`$ (the hopping amplitude), $`U`$ (the on-site interaction strength), and $`W`$ (disorder strength) parametrize our minimal Anderson-Hubbard model. We carry out our exact diagonalization in the subspace of fixed total electron number $`N`$ and total spin component $`S_z=[M-(N-M)]/2`$, with $`M`$ the number of spin up electrons. We note that the Hilbert space grows exponentially with the system size, and the results presented in this work are at the essential current limit of what can be achieved via the exact diagonalization technique for this problem.
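The quoted matrix dimension follows from simple combinatorics: with the 6 electrons split as 3 spin-up and 3 spin-down on the 16 lattice sites, each spin species can be placed in $`\binom{16}{3}=560`$ ways, so the $`S_z=0`$ sector has dimension $`560\times 560=313600`$. A quick check in Python:

```python
from math import comb

n_sites = 16             # 4x4 lattice
n_up = n_dn = 3          # N = 6 electrons in the S_z = 0 subspace
dim = comb(n_sites, n_up) * comb(n_sites, n_dn)
print(dim)               # 313600: the Hamiltonian matrix is 313600 x 313600
```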
To characterize the nature of the ground state, i.e. its localization properties, we use the technique suggested by Kohn a long time ago and calculate the charge stiffness $`D_c`$, sometimes also referred to as the Drude weight, of the finite system. We calculate the charge stiffness for each individual disorder realization exactly through our finite size diagonalization, and then obtain the root mean square average by averaging over a number of disorder realizations. The charge stiffness $`D_c`$, which is simply related to the persistent current, is the zero frequency weight of the long wavelength conductance (i.e. the Drude weight) in the system. As such, it is finite for a metal or a conductor and is zero for an insulator or a localized system in the thermodynamic limit. Thus $`D_c`$ is an eminently reasonable "order parameter" for studying metal-insulator or localization transitions, although the fact that in finite systems $`D_c`$ must necessarily be finite introduces complications in interpreting numerical results. We mention that the strict classification of a metal (finite $`D_c`$) or an insulator ($`D_c=0`$) based on the charge stiffness applies in the thermodynamic limit only in the absence of disorder, because diffusive metallic electrons (in the presence of finite disorder) have algebraically vanishing $`D_c`$ in the thermodynamic limit (whereas $`D_c`$ vanishes exponentially in the thermodynamic limit for an insulator). But this is only of academic interest in finite cluster studies, where no strict distinction between metals and insulators exists in any case, since all finite systems, by definition, have finite conductance (and are therefore "metals" in a trivial sense) by virtue of finite size effects. Charge stiffness (or persistent current magnitude) has been extensively used in the literature in finite size numerical localization studies (see, for example, ref. 7 in the context of the 2D M-I-T problem) of disordered interacting systems, and it is empirically well known that the calculation of $`D_c`$ in finite systems is an extremely effective way of numerically studying the localization problem in the presence of both interaction and disorder. This is mainly because the charge stiffness is closely related to the phase sensitivity of the system to boundary conditions, which is an operationally effective way of distinguishing a metal from an insulator.
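To illustrate how boundary-condition sensitivity defines $`D_c`$, the sketch below evaluates $`E_0(\mathrm{\Phi })`$ for free spinless fermions on a flux-threaded ring and takes the second flux derivative numerically. Both the non-interacting limit and the normalization $`D_c=(L/2)\partial ^2E_0/\partial \mathrm{\Phi }^2`$ (one common convention) are illustrative assumptions; this is not the interacting Lanczos calculation of the paper:

```python
import numpy as np

def E0(phi, L=12, N=5, t=1.0):
    """Ground-state energy of N free spinless fermions on an L-site ring
    threaded by a flux phi (twisted boundary conditions).  N is chosen odd
    so the Fermi level is non-degenerate and E0 is smooth at phi = 0."""
    k = 2.0 * np.pi * np.arange(L) / L
    levels = -2.0 * t * np.cos(k + phi / L)      # single-particle spectrum
    return np.sort(levels)[:N].sum()

L, dphi = 12, 1e-4
# second flux derivative at the energy minimum (phi = 0 here), central differences
d2E = (E0(dphi, L) - 2.0 * E0(0.0, L) + E0(-dphi, L)) / dphi**2
Dc = 0.5 * L * d2E
print(f"charge stiffness D_c = {Dc:.4f}")        # positive: a 'conducting' ring
```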
In Fig. 1 we show our calculated disorder-averaged charge stiffness for the $`4\times 4`$ 2D Hubbard cluster (with $`6`$ electrons) as a function of the on-site repulsion $`U`$ for various values of the disorder strength $`W`$. In the absence of any disorder ($`W=0`$), the clean 2D Hubbard model away from half filling is expected to be a metal with a finite value of $`D_c`$, whereas the corresponding 2D Anderson model ($`U=0`$, $`W\ne 0`$) is expected to be a weakly localized insulator for small $`W`$ (crossing over to an exponentially strongly localized insulator for large $`W`$). The numerical results for these limiting cases ($`W=0`$, $`U\ne 0`$ and $`W\ne 0`$, $`U=0`$) are also shown in Fig. 1 for the sake of comparison and completeness.
The most important generic feature of the results shown in Fig. 1 is the peak in the charge stiffness at an intermediate value $`U=U_c\sim W`$, where the calculated charge stiffness for the finite 2D cluster has a maximum for a given disorder strength $`W`$. The charge stiffness $`D_c`$ appears to decrease from this peak value (for a given $`W`$) on both sides of $`U_c`$. Note that $`D_c`$ increases sharply from $`U=0`$ to $`U=U_c`$, and then decreases slowly for $`U>U_c`$. This peak or maximum in $`D_c`$ is rather manifest in Fig. 1 for $`W/t=5`$ and $`3`$ (i.e. for strong disorder), whereas for weak disorder (e.g. $`W/t=0.5`$ in Fig. 1) the peak occurs at somewhat larger values of $`U/W\sim 2`$ and is not so obvious from Fig. 1 (we have explicitly verified that the peak exists for $`W/t=0.5`$ also). The actual value of $`U_c/t`$ clearly depends on the disorder strength $`W`$, increasing with $`W/t`$ from $`U_c/t\sim 0.95`$ for $`W/t=0.5`$ through $`U_c/t\sim 3.0`$ for $`W/t=3`$ to $`U_c/t\sim 3.5`$ for $`W/t=5`$. The qualitative behavior of our results is explained by the competition between $`U`$ and $`W`$ in the Anderson-Hubbard model. In a disordered system the random potential $`W`$ favors a maximal occupation (double occupation for our spin $`1/2`$ electrons) of the lowest energy sites. The on-site repulsion $`U`$, on the other hand, opposes double occupancy and favors configurations with a minimal number of doubly occupied sites. This competition between $`W`$ and $`U`$, where $`W`$ tends to localize the charge density and $`U`$ tends to homogenize the charge density, is a well-known feature of the disordered Hubbard model. The results shown in Fig. 1, in particular the increase of $`D_c`$ as $`U`$ increases from zero for a fixed disorder strength $`W`$, are a direct result of this competition. Note that the actual crossover behavior of $`D_c(U,W,t)`$ shown in Fig. 1 cannot be parametrized by the single parameter $`U/W`$: $`D_c`$ depends on both $`U/t`$ and $`W/t`$. We have carried out similar calculations in an "extended" Anderson-Hubbard model with a long-range interaction (in addition to $`U`$), with results qualitatively similar to those shown in Fig. 1.
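This competition is already visible in the smallest possible toy version of the model: two sites with energy offset $`W`$, two electrons, and on-site repulsion $`U`$. The sketch below (a hypothetical illustration, not the $`4\times 4`$ Lanczos calculation of the paper; parameter values are arbitrary) exactly diagonalizes the two-site Anderson-Hubbard problem and shows the disorder-favored double occupancy being suppressed as $`U`$ grows:

```python
import numpy as np
from functools import reduce

def annihilators(n):
    """Jordan-Wigner fermion annihilation operators for n spin-orbitals."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])      # maps |1> -> |0> on one orbital
    return [reduce(np.kron, [Z] * j + [a] + [I] * (n - 1 - j)) for j in range(n)]

# spin-orbital order: (site1,up), (site1,dn), (site2,up), (site2,dn)
c = annihilators(4)
cd = [m.T for m in c]
n_op = [cd[j] @ c[j] for j in range(4)]

t, W = 1.0, 3.0
eps = [-W / 2, -W / 2, +W / 2, +W / 2]          # site 1 is the deep site
docc = n_op[0] @ n_op[1] + n_op[2] @ n_op[3]    # double-occupancy operator

# H conserves particle number; restrict to the 6-dim two-electron sector
idx = np.where(np.isclose(np.diag(sum(n_op)), 2.0))[0]
D2 = docc[np.ix_(idx, idx)]

for U in (0.0, 1.0, 3.0, 6.0):
    H = sum(e * nj for e, nj in zip(eps, n_op)) + U * docc
    for s in (0, 1):                            # hopping, one term per spin
        H -= t * (cd[s] @ c[s + 2] + cd[s + 2] @ c[s])
    w, v = np.linalg.eigh(H[np.ix_(idx, idx)])
    g = v[:, 0]                                  # two-electron ground state
    d = g @ D2 @ g
    print(f"U = {U:3.1f}   E0 = {w[0]:+.3f}   <double occ.> = {d:.3f}")
```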
The direct interpretation of our exact finite size results shown in Fig. 1 is that the conductance of a finite disordered 2D system increases when the interaction is turned on (at fixed disorder), reaching a maximum for $`U=U_c\sim W`$, and then decreasing slowly with still increasing $`U`$. The issue of applying these numerical results, based on $`4\times 4`$ 2D clusters, to the fundamental question of the 2D M-I-T is, however, extremely tricky. For example, one popular recent line of thinking, based mostly on numerical work involving spinless electrons in finite 2D systems, has been to interpret equivalent results on interaction-enhanced conductance as evidence in favor of a 2D M-I-T, with the peak in $`D_c`$ at $`U\sim U_c`$ being interpreted as an intermediate metallic phase. We disagree with this interpretation for reasons discussed below. We emphasize that any conclusion about the existence of a true quantum phase transition, based entirely on small system numerical results of the type shown in Fig. 1, is fundamentally problematic, since all systems (whether they are conductors or insulators in the thermodynamic limit) have finite $`D_c`$ at finite size. In principle, a finite size scaling analysis of the numerical results is capable of determining the existence of a quantum phase transition (i.e. the 2D M-I-T), but in practice, of course, one does not have anywhere near the number of data points (for various 2D system sizes) minimally required to carry out a meaningful finite size scaling analysis in this problem.
Our conclusion that the charge stiffness results depicted in Fig. 1 do not indicate the existence of a true 2D M-I-T, but instead show a crossover from an Anderson insulator at small $`U/W`$ to a disordered Mott insulator (a "Wigner glass" phase) at large $`U/W`$, with an intermediate crossover regime (around $`U=U_c\sim W`$) of interaction-enhanced finite size conductance (or equivalently, an enhanced localization length) which is not a thermodynamic "metallic phase", is based on two complementary sets of arguments: (1) comparison with the corresponding one dimensional (1D) results; and (2) strong circumstantial evidence based on heuristic theoretical arguments.
To better understand the nature of the 2D disordered Hubbard model we have carried out an identical finite system charge stiffness calculation on the corresponding 1D disordered Hubbard model (1D Hubbard rings). We show the corresponding 1D Anderson-Hubbard model results in Fig. 2 for $`6`$ electrons on a $`12`$ site ring (corresponding to quarter filling). The 1D results of Fig. 2 are qualitatively identical to the 2D results of Fig. 1: $`D_c`$ in the disordered 1D Hubbard model initially increases as a function of $`U/W`$ for fixed $`W`$, showing a maximum at $`U=U_c\sim W`$, and then decreases slowly for large $`U>U_c`$, exactly as in the 2D system. The "critical" $`U_c/t`$ for the charge stiffness peak in the 1D system is $`U_c/t\sim 0.7`$, $`3.3`$, $`4.5`$ for $`W/t=0.5`$, $`3`$, $`5`$ respectively (values not very different from the corresponding 2D results at $`3/16`$ filling).
Noting that the charge stiffness results shown in Figs. 1 and 2 in the 2D and 1D disordered Hubbard models respectively are essentially indistinguishable (i.e. just by looking at the results of Figs. 1 and 2 one does not know which one corresponds to 1D and which to 2D, since the results are qualitatively identical), one is forced to conclude that if the results of Fig. 1 are interpreted as exhibiting evidence for a 2D M-I-T, then one must, based on the results of Fig. 2, infer that there is also a 1D M-I-T in the disordered 1D Hubbard model as a function of the interaction strength. We mention in this context that we have verified that the 1D disordered extended Hubbard model (with additional long range interaction) produces results qualitatively similar to those in the corresponding 2D system; thus the equivalence between 1D and 2D charge stiffness results is valid for finite and long range interactions also.
There are, however, very compelling theoretical grounds to believe that 1D disordered systems are localized even in the presence of interaction. Thus, the results of Fig. 2 cannot be interpreted as evidence for a 1D M-I-T; instead the maximum in $`D_c`$ as a function of $`U`$ only indicates the interaction-induced enhancement of the localization length (or, equivalently, the persistent current), which, in a finite system, increases the Drude conductance or the charge stiffness. Based on the striking qualitative similarity between the 1D (Fig. 2) and the 2D (Fig. 1) results, and the fact that both systems have strictly localized or insulating ground states in the disordered ($`W\ne 0`$), non-interacting ($`U=0`$) limit, we therefore conclude that the 2D results of Fig. 1 do not indicate a 2D M-I-T; they only indicate an interaction-induced enhancement of the 2D localization length for intermediate interaction strengths $`U\sim U_c`$. Note that while the intermediate-interaction crossover regime ($`U\sim U_c`$) is not a new quantum phase (it is still an insulator), the interaction-induced enhancement of the 2D localization length may be extremely large, and even the experimental 2D systems showing the so-called 2D M-I-T may actually be "effective" metals, since the enhanced localization lengths may be larger than the actual system size (or the phase breaking length at finite temperatures).
In addition to the above empirical argument for the non-existence of a 2D M-I-T, based on the comparison between 1D and 2D exact diagonalization results, we have a heuristic theoretical argument which points to the same conclusion. The small $`U`$ ($`U\rightarrow 0`$) and large $`U`$ ($`U\rightarrow \infty `$) interaction limits of the disordered 2D Hubbard model are believed to be insulating or localized on theoretical grounds. The non-interacting ($`U\rightarrow 0`$) disordered 2D system is known to be localized for any finite disorder (the localization length is exponentially large, the so-called weak localization regime, for small disorder) by virtue of the scaling theory of localization. The localized large $`U`$ ($`U\rightarrow \infty `$) regime arises from the fact that the pure Hubbard ground state (in the absence of disorder) must have strong ferromagnetic correlations in the large-$`U`$ limit in order to minimize the interaction energy. In fact, it is known that the large $`U`$ ground state of a Hubbard-type model with an additional next-nearest neighbor hopping term is ferromagnetic (the same is true for the pure Hubbard model at fillings close to half). In this limit, therefore, interaction tends to become less relevant since the electrons, being spin polarized, avoid each other. The system in this large-$`U`$ limit may thus be approximately equivalent to a non-interacting or weakly interacting system (albeit a spin-polarized one), and the introduction of any disorder ($`W\ne 0`$) necessarily localizes this 2D "effectively non-interacting" Hubbard system. The weakly localized (for small disorder) large $`U`$ ($`U\rightarrow \infty `$) 2D system has, however, an exponentially longer localization length (which explains the enhanced $`D_c`$ for large $`U`$ in Fig. 1) than the usual non-interacting ($`U\rightarrow 0`$) disordered limit, because the ferromagnetic spin-polarized phase has a larger Fermi energy, which exponentially enhances the localization length. Thus, both the small $`U`$ and the large $`U`$ regimes are necessarily localized, and the enhancement of $`D_c`$ in the intermediate-$`U`$ ($`U\sim U_c`$) regime must either indicate a crossover between an Anderson insulator ($`U\rightarrow 0`$) and a disordered Mott insulator (equivalently a Mott glass, a "Wigner glass" in the corresponding continuum system) for $`1/U\rightarrow 0`$, or involve two quantum phase transitions: one from the low-$`U`$ Anderson insulator phase to the intermediate ($`U\sim U_c`$) "metallic" phase with enhanced $`D_c`$, and then again from this intermediate "metallic" phase to the large-$`U`$ Mott glass phase. We see absolutely no features in our 2D or 1D numerical results which could be indicative of such a double or re-entrant insulator ($`U\rightarrow 0`$) - "metal" ($`U\sim U_c`$) - insulator ($`1/U\rightarrow 0`$) quantum phase transition.
We conclude with a critical discussion of the recent low temperature experimental results in low density, high mobility 2D systems which have motivated the current resurgence of the issue of the 2D M-I-T in disordered and interacting electron systems. Experimentally one finds that the high density regime ($`n>n_c`$) is "metallic" in the sense of having a positive temperature coefficient ($`\frac{d\rho }{dT}>0`$) of the resistivity $`\rho `$, and the low density regime ($`n<n_c`$) is insulating with $`\frac{d\rho }{dT}<0`$. This has been interpreted by many (but not all) as clear evidence of an interaction-driven M-I-T occurring at a critical density $`n_c`$. The standard interpretation of these experimental observations as a 2D M-I-T is, however, problematic because the high density phase (i.e. the less interacting phase) is the nominal "metallic" phase according to this interpretation. This makes little sense since the non-interacting or weakly interacting very-high density phase must be a weakly localized 2D insulator based on the scaling theory. Thus, very similar to the conclusion we reached for our exact diagonalization numerical results, the experimental situation must correspond to either a double quantum phase transition (the very high density phase is a weakly localized insulator, with the intermediate regime, corresponding to our peak in $`D_c`$ around $`U\sim U_c`$, being a novel interaction-induced "metallic" phase) or just a sharp crossover from a high density weakly localized insulator to a low density strongly localized insulator occurring around $`n\sim n_c`$. Logically, there can be either two quantum phase transitions (insulator $`\rightarrow `$ metal $`\rightarrow `$ insulator) or none, based on our knowledge that the asymptotic high and low density phases are both insulating phases, with the high density phase being the standard weakly localized phase and the low density phase being a strongly localized phase. Experimentally, there is little evidence for two quantum phase transitions (note that there must be two quantum phase transitions or none; it cannot be one quantum phase transition and one crossover). Therefore we believe, based on arguments similar to those we use to interpret our theoretical results presented in this paper, that the experimental observations indicate a very sharp crossover (around $`n\sim n_c`$) from a weakly to a strongly localized 2D insulator as $`n`$ decreases, and that the high density regime ($`n>n_c`$) is only an effective "metal" because the effective system size (the phase breaking length at finite $`T`$) is smaller than the localization length, which may have been substantially enhanced by interaction effects as we show in this paper. There is some very recent experimental support for this scenario.
We emphasize that the interaction induced enhancement of $`D_c`$ for $`0<U/W\lesssim 1`$ in Fig. 1 should not be considered as evidence in favor of a 2D M-I-T (as was recently done in ref. 7 based on finite system studies of spinless electrons using smaller system sizes), particularly since (1) the interaction enhancement is only effective for very large disorder strength ($`W/t>1`$), where the system is likely to be localized anyway (note that for weak disorder, $`W/t<1`$, there is essentially no interaction induced $`D_c`$ enhancement; if there is indeed an interaction-driven 2D metallic phase, it is likely to be in the low disorder regime, where Fig. 1 indicates little interaction enhancement), and (2) the actual interaction-enhanced $`D_c`$ values in the strong disorder regime in Fig. 1 are still extremely small in magnitude (and are much smaller than the corresponding $`D_c`$ values for the non-interacting weak disorder system, which is still known to be weakly localized by virtue of scaling localization). We therefore conclude that the interaction enhancement of $`D_c`$ seen in Figs. 1 and 2 indicates an interaction-driven enhancement of the localization length in the strong disorder regime, and not a 2D M-I-T.
We thank Eugene Demler for stimulating discussions. We also acknowledge helpful correspondence with Andy Millis, Charles Stafford and Dieter Vollhardt. This work is supported by the US-ONR.
# Discovery of an Obscured Broad Line Region in the High Redshift Radio Galaxy MRC 2025-218
## 1. Introduction
Deep radio surveys have proven to be one of the best methods for finding high redshift galaxies. Most evidence suggests that these powerful radio sources are the precursors of local giant ellipticals (e.g. Pentericci, et al. 1999). Many have irregular and complex morphologies suggestive of mergers, and they are often surrounded by an overdensity of compact sources, presumably sub-galactic clumps (e.g. van Breugel et al. 1998). At both low and high redshifts, radio galaxies usually have strong optical emission lines, especially \[OIII\] at 5007 Å. It is strongly debated, however, whether the emission lines arise by the same mechanism as the radio jets. Several authors (e.g. Rawlings & Saunders 1992, Eales & Rawlings 1993, and Evans 1998) have demonstrated a strong correlation between radio luminosity and \[OIII\] luminosity, but as Evans showed, there is a strong selection effect based on the detection limits as a function of distance, and this may explain much of the correlation. Since the galaxies are often disturbed, star formation, large scale shocks and a central AGN are all possible sources of the line emission. Active galaxy unification models suggest that radio galaxies are quasars with obscured broad line regions (e.g. Antonucci 1993). Eales & Rawlings (1993, 1996) and Evans (1998) have been successful at using infrared spectrographs on 4-meter class telescopes to measure a few of the brightest lines in small samples of radio galaxies in the redshift range 2.2 to 2.6, and they find line ratios most consistent with Seyfert 2 (obscured AGN) nuclei. Independent of our current efforts, a team has also successfully used the ISAAC instrument on the VLT to observe high redshift radio galaxies (HzRGs) including MRC 2025-218 (McCarthy, personal communication).
An additional unexplained phenomenon is that at high redshifts (z$`>`$0.6) the radio, optical continuum, infrared continuum and emission line structures tend to be closely aligned (Chambers, Miley, & van Breugel 1987, and McCarthy et al. 1987). This is probably not seen in lower redshift targets because the central activity tends to be a smaller fraction of the total luminosity than in high redshift sources. Among the proposed explanations are that the emission lines arise from shock induced star formation (De Young 1989, Rees 1989) or that it is scattered light originating from the central nucleus (Fabian 1991). This is a crucial question in our understanding of how and when most star formation occurred in giant elliptical galaxies and in clusters in general.
MRC 2025-218 (z=2.630) has a compact near infrared and optical continuum morphology (van Breugel et al. 1998), but extended Ly$`\alpha `$ emission (5<sup>′′</sup>) aligned with its radio axis (McCarthy et al. 1992). The extended UV emission has significant (8.3$`\pm `$2.3 %) linear polarization perpendicular to the UV axis (Cimatti et al. 1996), suggesting that scattering plays a significant role. McCarthy et al. also found three extremely red galaxies (EROs: R-K $`>`$ 6 mag) within 20<sup>′′</sup> of the radio galaxy. This is a large overdensity of such objects and strongly suggests an association between the EROs and the active galaxy. In this paper we present infrared spectra taken with a long slit oriented close to the radio axis and including one of the EROs. The ERO spectra will be described in a future paper. For all calculations we have assumed a cosmology with $`\mathrm{\Lambda }`$=0, q<sub>0</sub>=0.1 and H<sub>0</sub>=75 km s<sup>-1</sup> Mpc<sup>-1</sup>. For a redshift of 2.63 this yields a luminosity distance of 2.1$`\times `$10<sup>4</sup> Mpc and an angular scale of 7.7 kpc per arcsecond.
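For reference, the quoted distances follow from the standard Mattig relation for a $`\mathrm{\Lambda }`$=0 Friedmann model; the short check below reproduces them to within rounding (the Mattig formula is standard, the rounding conventions are ours):

```python
import math

H0, q0, z = 75.0, 0.1, 2.630        # km/s/Mpc, deceleration parameter, redshift
c_kms = 2.998e5

# Mattig relation, valid for Lambda = 0 Friedmann models
D_L = (c_kms / (H0 * q0**2)) * (q0 * z + (q0 - 1.0) * (math.sqrt(1.0 + 2.0 * q0 * z) - 1.0))
D_A = D_L / (1.0 + z)**2            # angular diameter distance
scale = D_A * 1.0e3 / 206265.0      # kpc per arcsec (206265 arcsec per radian)

print(f"D_L = {D_L:.3g} Mpc, scale = {scale:.2f} kpc/arcsec")  # ~2.1e4 Mpc, ~7.5 kpc/arcsec
```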
## 2. Observations and Data Reduction
The field of MRC 2025-218 was observed on 4 Jun, 1999 (UT) with the near infrared spectrograph NIRSPEC (McLean, et al. 1998 and McLean, et al. 2000) on the Keck II Telescope during its commissioning. First the field was imaged in the K-band with the slit-viewing camera, which is a HgCdTe PICNIC detector (256<sup>2</sup> pixels) sensitive from 1 to 2.5 microns. Figure 1 shows the reduced image of the field with a total integration time of 540 seconds and a FWHM of 0.54<sup>′′</sup>. As shown in the figure, the slit (42<sup>′′</sup> long and 0.57<sup>′′</sup> wide) was placed on both the radio galaxy and the extremely red galaxy dubbed ERO-A by McCarthy et al. (1992). This corresponded to a slit position angle of -7 degrees.
Figure 1. - K band image of the MRC 2025-218 field.
For spectroscopy, the telescope was repeatedly moved roughly 20 arcseconds to center the objects first in the upper portion of the slit and then in the lower portion. Four 300 second exposures were taken in both the H-band ($``$1.6$`\mu `$m) and K-band ($``$2.2$`\mu `$m), yielding an effective integration time on MRC 2025-218 of 20 minutes in each band. For guiding, NIRSPEC's optical guide camera was used to actively track a bright star roughly 2 arcminutes from MRC 2025-218.
Arc lamp and flat lamp spectra were taken at each setting prior to changing mechanism setups. The 7.6 magnitude A0 star PPM 272233 was also observed at the same settings in order to remove telluric absorption effects from the atmosphere. The calibrator star was reduced first. For each band the spectral pair was subtracted and divided by a reduced flat field lamp spectrum. Bad pixels were then identified and removed by medianing the four nearest neighbors. The spectra were spatially rectified using a quadratic polynomial at each row, then spectrally rectified with a quadratic at each column. The negative spectrum of the star was then shifted and subtracted from the positive spectrum, producing a combined spectrum with residual atmospheric lines removed. The stellar spectrum was extracted by averaging the central 3 pixels along the 2-d spectrum. A synthetic black body spectrum was divided into the stellar spectrum, and residual hydrogen absorption lines from the Brackett series were interpolated over. The spectra of the radio galaxy were reduced in a similar way except that they were divided by the reduced calibration star spectrum instead of a black body. For extraction of the galaxy spectra, a 6 pixel spatial aperture (1.14<sup>′′</sup>) was used. Spectrophotometry was obtained by determining the equivalent widths of the emission lines within a 1.5<sup>′′</sup> aperture in the spectra and comparing this to the broad band fluxes of the galaxy in a 1.5<sup>′′</sup> circular aperture in the slit viewing camera images.
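Schematically, the reduction chain described above amounts to a few array operations. The sketch below is a hypothetical outline of that order of operations (pair subtraction, flat fielding, telluric division with the standard star's blackbody continuum restored), not the actual reduction code; the 9500 K A0V temperature and all names are illustrative assumptions:

```python
import numpy as np

def planck(lam_um, T=9500.0):
    """Blackbody B_lambda (arbitrary units); T ~ 9500 K is a typical A0V
    effective temperature (an assumption), used to restore the standard's
    continuum after division."""
    lam = lam_um * 1e-6
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1.0))

def reduce_pair(frame_a, frame_b, flat):
    """Nod-pair subtraction then flat fielding; sky and bias cancel in A - B."""
    return (frame_a - frame_b) / flat

def telluric_correct(spec_obj, spec_std, lam_um):
    """Divide by the A0 standard (telluric features divide out), then multiply
    back a blackbody so the standard's own continuum shape is removed."""
    return spec_obj / spec_std * planck(lam_um)

# illustrative call on dummy 1-D extractions (K-band wavelength grid, microns)
lam = np.linspace(1.95, 2.40, 1024)
corrected = telluric_correct(np.ones_like(lam), np.ones_like(lam), lam)
```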
Figure 2. - H band spectrum of MRC 2025-218. It is dominated by \[OIII\] at rest wavelength 5007 Å. Also present are the other member of this doublet (\[OIII\], 4959 Å) and a weak H$`\beta `$ emission line.
## 3. Results
Figure 2 shows the H-band spectrum of MRC 2025-218. By far the most dominant line is \[OIII\] (5007 Å) redshifted to 1.82 $`\mu `$m. This line is highlighted in figure 3, where the complete position velocity map of this line is presented. Panel (a) of figure 3 is stretched to highlight the spectrally double nature of the nuclear emission ($`\mathrm{\Delta }`$v ∼ 200 km s<sup>-1</sup>). Panel (b) shows three faint emission knots at large angular separations (1<sup>′′</sup>–2<sup>′′</sup>) and/or high kinematic velocities (∼400 km s<sup>-1</sup>). Although faint, these structures repeat in the individual spectra that cover the \[OIII\] line. Two knots appear at essentially 0 km s<sup>-1</sup> relative velocity, but 1.8<sup>′′</sup> North and 2.4<sup>′′</sup> South of the nucleus. A high speed clump appears 1<sup>′′</sup> North of the nucleus and at a redshifted relative velocity of 410 km s<sup>-1</sup>. This high speed clump is also the brightest within our slit, with a flux of roughly 1$`\times `$10<sup>-16</sup> ergs s<sup>-1</sup> cm<sup>-2</sup>. Also detected in the H-band spectrum are the other member of the \[OIII\] doublet at 4959 Å, and H$`\beta `$. The ratio of \[OIII\] / H$`\beta `$ is extremely large at 17$`\pm `$7. The H$`\beta `$ line has a total nuclear flux of only 5 $`\times `$ 10<sup>-17</sup> ergs cm<sup>-2</sup> s<sup>-1</sup>. Table 1 gives all detected fluxes and line widths.
Figure 4 shows the K-band spectrum, which is dominated by a broad H$`\alpha `$ emission line. The spectrum has had a median filter passed over it to improve the appearance of the fainter lines. The H$`\alpha `$ is well modeled by a pair of Gaussians having line widths of 9300$`\pm `$900 km/s and 730$`\pm `$100 km/s. The narrow component is consistent with the Ly$`\alpha `$ line width of 700 km/s found by Villar-Martin et al. 1999. After subtracting away the two H$`\alpha `$ components, the middle graph in figure 4 shows the strong \[NII\] (6548/6583 Å) emission lines as well as weaker features from \[OI\] (6300 Å) and \[SII\] (6716/6731 Å). The line fluxes and widths are also given in table 1.
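The two-component decomposition of H$`\alpha `$ can be reproduced with a standard least-squares fit of two Gaussians sharing a common center. The sketch below, using synthetic data and scipy's curve_fit, is only a generic illustration of the procedure, not the fitting code actually used; velocity widths are converted to wavelength widths at the observed wavelength 6563 Å × 3.63 ≈ 2.38 $`\mu `$m:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5
LAM0 = 6563.0 * (1.0 + 2.630)                 # observed H-alpha wavelength, Angstrom

def gauss(lam, amp, center, fwhm_kms):
    sigma = center * (fwhm_kms / C_KMS) / 2.3548   # velocity FWHM -> wavelength sigma
    return amp * np.exp(-0.5 * ((lam - center) / sigma) ** 2)

def two_gauss(lam, a_broad, a_narrow, center, w_broad, w_narrow):
    """Broad plus narrow component sharing a common center."""
    return gauss(lam, a_broad, center, w_broad) + gauss(lam, a_narrow, center, w_narrow)

# synthetic line with widths similar to the measured ones, plus noise
lam = np.linspace(LAM0 - 1500.0, LAM0 + 1500.0, 600)
rng = np.random.default_rng(0)
data = two_gauss(lam, 1.0, 2.0, LAM0, 9300.0, 730.0) + 0.02 * rng.standard_normal(lam.size)

popt, _ = curve_fit(two_gauss, lam, data, p0=[0.5, 1.0, LAM0, 5000.0, 500.0])
print("broad FWHM = %.0f km/s, narrow FWHM = %.0f km/s" % (popt[3], popt[4]))
```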
Figure 3. - Position velocity plots for \[OIII\] (5007 Å). Panel (a) is stretched to show the double nuclear peak. Panel (b) highlights three extended emission regions circled in white. The \[OIII\] line is highly disturbed, with several different kinematic and spatial components including a kinematically split nucleus and a high velocity (400 km/s) knot located 2<sup>′′</sup> off nucleus. Nearby OH lines from the Earth's atmosphere are labeled.
| TABLE 1 | | | |
| --- | --- | --- | --- |
| Emission Line Strengths | | | |
| | Rest | Flux | Line Width |
| Line | $`\lambda `$ (Å) | ($`\times 10^{-16}`$ ergs s<sup>-1</sup> cm<sup>-2</sup>) | (km s<sup>-1</sup>) |
| SII | 6731 | 0.4$`\pm `$0.3 | 200$`\pm `$100 |
| SII | 6716 | 0.6$`\pm `$0.3 | 200$`\pm `$100 |
| NII | 6583 | 1.3$`\pm `$0.3 | 880$`\pm `$100 |
| NII | 6548 | 1.3$`\pm `$0.3 | 880$`\pm `$100 |
| H$`\alpha `$(narrow) | 6563 | 2.7$`\pm `$0.4 | 730$`\pm `$100 |
| H$`\alpha `$(broad) | 6563 | 18$`\pm `$2 | 9300$`\pm `$900 |
| OI | 6300 | 0.8$`\pm `$0.3 | 800$`\pm `$400 |
| OIII | 5007 | 8.4$`\pm `$1.6 | 600$`\pm `$200 |
| OIII | 4959 | 2.1$`\pm `$0.4 | 600$`\pm `$200 |
| H$`\beta `$ | 4861 | 0.5$`\pm `$0.3 | 600$`\pm `$200 |
## 4. Discussion
### 4.1. Nuclear Spectrum
The nuclear spectrum of the HzRG MRC 2025-218 is clearly dominated by emission lines from a central AGN. The broad H$`\alpha `$ line width is 9300 km/s, which is only seen in type I AGN (unobscured broad line regions). This line width is very close to the mean H$`\beta `$ line width of 9870$`\pm `$950 km/s measured for radio loud quasars in the redshift range 2.0 to 2.5 by McIntosh et al. (1999). The ratio of \[OIII\]/H$`\beta `$ is 17, which is also only seen in AGN, and the ratios of \[NII\]/H$`\alpha `$ and \[OI\]/H$`\alpha `$ are likewise consistent with AGN excitation (Osterbrock, 1989).
From the H$`\alpha `$/H$`\beta `$ narrow line ratio of 5.4 we derive an optical extinction A<sub>V</sub>=1.4 mag. In this calculation we've assumed an intrinsic ratio of H$`\alpha `$/H$`\beta `$ = 3.1, as seen in local AGN (Osterbrock, 1989), and the interstellar extinction law of Cardelli et al. (1989). This must be treated as an upper limit, however, since radio loud objects may have elevated H$`\alpha `$ due to collisional excitation (e.g. Baker et al. 1994). If the broad line ratio of H$`\alpha `$/H$`\beta `$ were similar to the narrow line ratio, then broad H$`\beta `$ should have been marginally detected in our H-band spectrum. We therefore feel safe in the assumption that the extinction to the broad line region is similar to the value for the narrow line region (1.4 mag), but not necessarily significantly greater. This extinction is also sufficient to explain the lack of broad Ly$`\alpha `$ detections in McCarthy et al. (1990) and Villar-Martin et al. (1999b). Without extinction our broad line emission would predict a Ly$`\alpha `$ broad line flux of 2.6$`\times `$10<sup>-15</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> in the Villar-Martin slit, which would have been easily detected, but with A<sub>V</sub>=1.4 mag this is reduced to less than 3$`\times `$10<sup>-16</sup> ergs s<sup>-1</sup> cm<sup>-2</sup>, which would have been marginally detected at best.
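The extinction estimate follows from the Balmer decrement. The arithmetic is sketched below; the curve coefficients k(H$`\beta `$) and k(H$`\alpha `$) are approximate values we assume for a Cardelli et al. (1989) law with R<sub>V</sub>=3.1, and with them the observed decrement of 5.4 formally gives A<sub>V</sub> ≈ 1.7 mag, consistent with the quoted 1.4 mag given the large (∼60%) uncertainty on the weak H$`\beta `$ flux:

```python
import math

R_obs, R_int = 5.4, 3.1        # observed and intrinsic narrow Halpha/Hbeta
k_hb, k_ha = 3.61, 2.53        # assumed Cardelli-like curve values at Hbeta, Halpha
R_V = 3.1

EBV = 2.5 / (k_hb - k_ha) * math.log10(R_obs / R_int)
A_V = R_V * EBV
print(f"E(B-V) = {EBV:.2f} mag, A_V = {A_V:.2f} mag")   # ~0.56 and ~1.7
```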
Figure 4. - The K band spectrum of MRC 2025-218 is dominated by a very wide (9300 km s<sup>-1</sup>) strong emission line of H$`\alpha `$. The upper graph is the reduced spectrum overlaid with the H$`\alpha `$ profiles. The dashed line is the x-axis for this graph. The middle graph has had the continuum and H$`\alpha `$ emission lines subtracted to emphasize the weaker lines of NII. The bottom graph shows the residuals after subtracting Gaussians for each emission line.
Given the similarities in line width with radio loud quasars, we now try to determine whether extinction could explain the observed differences between MRC 2025-218 and radio loud quasars. If we correct the H$`\alpha `$ flux for A<sub>V</sub>=1.4 mag (the upper limit to the narrow line extinction), then the broad line flux becomes 5.2$`\times `$10<sup>-15</sup> ergs s<sup>-1</sup> cm<sup>-2</sup>, or a broad line H$`\alpha `$ luminosity of 2.6$`\times `$10<sup>44</sup> ergs s<sup>-1</sup>. We used the sample of quasars of McIntosh et al. (1999) to derive a mean H$`\alpha `$ luminosity of 6.11$`\times `$10<sup>44</sup> ergs s<sup>-1</sup>, based on the mean H$`\beta `$ equivalent width of their sample, no extinction, and an intrinsic ratio of 3.1 between H$`\alpha `$ and H$`\beta `$. The one sigma dispersion in this value is only 10% in their sample. Our extinction corrected H$`\alpha `$ luminosity is then weaker than their mean by a factor of 2.6, suggesting the central engines are very similar. If we go a step further and assume that the intrinsic luminosities are the same, then the broad line extinction would need to be A<sub>V</sub>=3.5 mag instead of the A<sub>V</sub>=1.4 mag derived above for the narrow line region.
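These numbers can be checked directly. Assuming A(H$`\alpha `$) ≈ 0.82 A<sub>V</sub> (an approximate extinction-curve value, our assumption) and the adopted luminosity distance, the corrected flux and luminosity come out as quoted to within rounding:

```python
import math

F_broad = 18e-16                    # broad Halpha flux, erg/s/cm^2 (Table 1)
A_V = 1.4
A_Ha = 0.82 * A_V                   # assumed A(6563 A)/A_V for an R_V = 3.1 curve
F_corr = F_broad * 10.0 ** (0.4 * A_Ha)

D_L = 2.1e4 * 3.086e24              # luminosity distance in cm
L_Ha = 4.0 * math.pi * D_L**2 * F_corr

print(f"F_corr = {F_corr:.2e} erg/s/cm^2")   # ~5.2e-15
print(f"L(Halpha) = {L_Ha:.2e} erg/s")       # ~2.7e44, vs. the quasar mean 6.11e44
```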
A remaining difference between MRC 2025-218 and the quasars in the McIntosh sample is the H-band magnitude. MRC 2025-218 has a broad band magnitude of H=19.1, while the mean quasar H-band magnitude is 15.16. After correcting for the different redshifts (quasar mean z=2.2), MRC 2025-218 is 3.2 magnitudes fainter than the quasars at a rest wavelength of 4550 Å. If the broad band flux of MRC 2025-218 is dominated by the AGN, then it would require A<sub>V</sub>=4.2 mag to make it equal to the quasar sample. This is surprisingly close to the value of 3.5 mag required to match the broad H$`\alpha `$ fluxes. The assumption that the AGN dominates the broad band flux in a radio galaxy, however, is not obvious and may be in conflict with the empirically determined K magnitude versus redshift relation observed in both low redshift and high redshift objects (Eales et al. 1997). MRC 2025-218 is consistent with the K vs. Z relation both with and without taking the line emission into account.
Villar-Martin et al. (1999) find that MRC 2025-218 has large ratios of \[NV\]/HeII and \[NV\]/\[CIV\] and suggest that the most likely explanation is that N is overabundant. They held out the possibility, however, that contamination from a broad line region was enhancing this line in comparison to their other radio galaxies. But they argued against this due to the lack of any broad lines including \[CIII\]. From our broad H$`\alpha `$ detection, however, we clearly see that the broad line region is only partially obscured, and the strong NV emission is probably not indicative of high metallicity. This is further corroborated by the relatively low ratios of \[NII\]/H$`\alpha `$(narrow).
### 4.2. Spectral Shape and Extended Emission
The double spectral peak found in \[OIII\] could be due to a high velocity (200 km s<sup>-1</sup>) cloud of gas or possibly a double active nucleus. The unsmoothed H$`\alpha `$ narrow line is quite noisy but also shows a double profile with a separation of 200 km s<sup>-1</sup>. Due to the noise, however, we are not confident in the second H$`\alpha `$ peak. If the second peak were due to a star forming region it would be unlikely that the \[OIII\] line would be double as well since the OIII/H$`\beta `$ ratio should be much lower for a starburst.
The off nucleus knots seen in \[OIII\] are difficult to understand. Extended \[OIII\] has been observed in other radio galaxies aligned with the radio axis (Armus et al. 1998), but no line ratios have been determined for this gas. If we assume that the emission is from starbursts, then our brightest knot (1$`\times `$10<sup>-16</sup> ergs s<sup>-1</sup> cm<sup>-2</sup>) would have an \[OIII\]/H$`\alpha `$ ratio less than 1.0. This would make the H$`\alpha `$ flux greater than 1$`\times `$10<sup>-16</sup> ergs s<sup>-1</sup> cm<sup>-2</sup> and the luminosity more than 5$`\times `$10<sup>42</sup> ergs s<sup>-1</sup>. Assuming the relationship of Kennicutt (1983), that the star formation rate is equal to L(H$`\alpha `$) divided by 1.12$`\times 10^{41}`$ ergs/s, we derive a star formation rate of 45 M<sub>☉</sub> yr<sup>-1</sup>. This is comparable to the rates seen in Pettini et al. (1998), where they studied 5 star forming galaxies in the redshift range 2.2 to 3.3. This is also close to the estimated star formation rate of the Lyman Break Galaxy MS1512-cB58. As calculated in Teplitz et al. (2000), cB58 has a SFR of 620 M<sub>☉</sub> yr<sup>-1</sup>, but after removing a factor of 30 for gravitational lensing this becomes 21 M<sub>☉</sub> yr<sup>-1</sup>.
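The star formation rate is simple arithmetic once the luminosity distance is fixed; a check, with the adopted D<sub>L</sub> = 2.1$`\times `$10<sup>4</sup> Mpc:

```python
import math

F_Ha = 1.0e-16                        # Halpha flux implied for the knot, erg/s/cm^2
D_L = 2.1e4 * 3.086e24                # luminosity distance, cm
L_Ha = 4.0 * math.pi * D_L**2 * F_Ha  # ~5e42 erg/s

SFR = L_Ha / 1.12e41                  # Kennicutt (1983) calibration, Msun/yr
print(f"L(Halpha) = {L_Ha:.1e} erg/s  ->  SFR = {SFR:.0f} Msun/yr")   # ~47
```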
## 5. Conclusions
We have obtained the most sensitive infrared spectra ever taken of a high redshift radio galaxy. The galaxy has very strong emission lines with ratios and line widths consistent with an obscured quasar. The narrow line region appears to be partially obscured with A<sub>V</sub> around 1.4 mag, but from comparisons with high redshift quasars, we estimate that the extinction to the broad line region is between 3 and 5 magnitudes. Since other radio galaxies in the same redshift range don't show broad emission lines, we suggest that MRC 2025-218 is further along in its evolution towards an unobscured quasar. We cannot rule out any of the proposed mechanisms for the production of the aligned emission. But based on the \[OIII\] line strength, if the majority of the emission is due to star formation, we find that the star formation rate is comparable to that of Lyman Break Galaxies at similar redshifts. We urge even deeper observations of this and other similar radio galaxies in order to measure additional extended line emission.
It is a pleasure to acknowledge the hard work of past and present members of the NIRSPEC instrument team at UCLA: Maryanne Angliongto, Oddvar Bendiksen, George Brims, Leah Buchholz, John Canfield, Kim Chin, Jonah Hare, Fred Lacayanga, Samuel B. Larson, Tim Liu, Nick Magnone, Gunnar Skulason, Michael Spencer, Jason Weiss and Woon Wong. In addition, we thank the Keck Director Fred Chaffee, CARA instrument specialist Thomas A. Bida, and all the CARA staff involved in the commissioning of NIRSPEC. We also want to thank Lee Armus for many useful discussions. We are also grateful for a very careful review from our anonymous referee. Data presented herein were obtained at the W.M. Keck Observatory, which was made possible by the generous financial support of the W.M. Keck Foundation.
# Partial Dynamical Symmetry in a Fermion System
## Abstract
The relevance of the partial dynamical symmetry concept for an interacting fermion system is demonstrated. Hamiltonians with partial SU(3) symmetry are presented in the framework of the symplectic shell-model of nuclei and shown to be closely related to the quadrupole-quadrupole interaction. Implications are discussed for the deformed light nucleus <sup>20</sup>Ne.
PACS numbers: 21.60.Fw, 21.10.-k, 21.60.Cs, 27.30.+t
Symmetries play an important role in dynamical systems. They provide labels for the classification of states, determine selection rules, and simplify the relevant Hamiltonian matrices. Algebraic, symmetry-based models offer significant simplifications when the Hamiltonian under consideration commutes with all the generators of a particular group ("exact symmetry") or when it is written in terms of the Casimir operators of a chain of nested groups ("dynamical symmetry") . In both cases basis states belonging to inequivalent irreducible representations (irreps) of the relevant groups do not mix, the Hamiltonian matrix has block structure, and all properties of the system can be expressed in closed form. An exact or dynamical symmetry not only facilitates the numerical treatment of the Hamiltonian, but also its interpretation and thus provides considerable insight into the physics of a given system.
Naturally, the application of exact or dynamical symmetries to realistic situations has its limitations. Usually the assumed symmetry is only approximately fulfilled, and imposing certain symmetry requirements on the Hamiltonian might result in constraints which are too severe and incompatible with experimentally observed features of the system. The standard approach in such situations is to break the symmetry. Partial Dynamical Symmetry (PDS) corresponds to a particular symmetry breaking for which the Hamiltonian is not invariant under the symmetry group and hence various irreps are mixed in its eigenstates, yet it possesses a subset of "special" solvable states which respect the symmetry. This new scheme has recently been introduced in bosonic systems and has been applied to the spectroscopy of deformed nuclei and to the study of mixed systems with coexisting regularity and chaos . It is the purpose of this Letter to demonstrate the relevance of the partial dynamical symmetry concept to fermion systems. More specifically, in the framework of the symplectic shell-model of nuclei , we will prove the existence of a family of fermionic Hamiltonians with partial SU(3) symmetry. The PDS Hamiltonians are rotationally invariant and closely related to the quadrupole-quadrupole interaction; hence our study will shed new light on this important interaction. We will compare the spectra and eigenstates of the quadrupole-quadrupole and PDS Hamiltonians for the deformed light nucleus <sup>20</sup>Ne.
The quadrupole-quadrupole interaction is an important ingredient in models that aim at reproducing quadrupole collective properties of nuclei. A model which is able to fully accommodate the action of the collective quadrupole operator, $`Q_{2m}=\sqrt{\frac{16\pi }{5}}\sum _sr_s^2Y_{2m}(\widehat{r}_s)`$, is the symplectic shell model (SSM), an algebraic scheme which respects the Pauli exclusion principle . In the SSM, this operator takes the form $`Q_{2m}=\sqrt{3}(\widehat{C}_{2m}^{(11)}+\widehat{A}_{2m}^{(20)}+\widehat{B}_{2m}^{(02)})`$, where $`\widehat{A}_{lm}^{(20)},\widehat{B}_{lm}^{(02)}`$, and $`\widehat{C}_{lm}^{(11)}`$ are symplectic generators with good SU(3) \[superscript $`(\lambda ,\mu )`$\] and SO(3) \[subscript $`l,m`$\] tensorial properties. The $`\widehat{A}_{lm}^{(20)}`$ ($`\widehat{B}_{lm}^{(02)}`$), $`l`$ = 0 or 2, create (annihilate) $`2\hbar \omega `$ excitations in the system. The $`\widehat{C}_{lm}^{(11)}`$, $`l`$ = 1 or 2, generate a SU(3) subgroup and act only within one harmonic oscillator (h.o.) shell ($`\sqrt{3}\widehat{C}_{2m}^{(11)}=`$ $`Q_{2m}^E`$, the symmetrized quadrupole operator of Elliott, which does not couple different h.o. shells , and $`\widehat{C}_{1m}^{(11)}=\widehat{L}_m`$, the orbital angular momentum operator). A fermion realization of these generators is given in .
A basis for the symplectic model is generated by applying symmetrically coupled products of the $`2\hbar \omega `$ raising operator $`\widehat{A}^{(20)}`$ with itself to the usual $`0\hbar \omega `$ many-particle shell-model states. Each $`0\hbar \omega `$ starting configuration is characterized by the distribution of oscillator quanta into the three cartesian directions, $`\{\sigma _1,\sigma _2,\sigma _3\}`$ ($`\sigma _1\ge \sigma _2\ge \sigma _3`$), or, equivalently, by its U(1)$`\times `$SU(3) quantum numbers $`N_\sigma (\lambda _\sigma ,\mu _\sigma )`$. Here $`\lambda _\sigma =\sigma _1-\sigma _2`$, $`\mu _\sigma =\sigma _2-\sigma _3`$ are the Elliott SU(3) labels, and $`N_\sigma =\sigma _1+\sigma _2+\sigma _3`$ is related to the eigenvalue of the oscillator number operator. The product of $`N/2`$ raising operators $`\widehat{A}^{(20)}`$ generates $`N\hbar \omega `$ excitations for each starting irrep $`N_\sigma (\lambda _\sigma ,\mu _\sigma )`$. Each such product operator $`\mathcal{P}^{N(\lambda _n,\mu _n)}`$, labeled according to its SU(3) content, $`(\lambda _n,\mu _n)`$, is coupled with $`|N_\sigma (\lambda _\sigma ,\mu _\sigma )\rangle `$ to good SU(3) symmetry $`\rho (\lambda ,\mu )`$, with $`\rho `$ denoting the multiplicity of the coupling $`(\lambda _n,\mu _n)\otimes (\lambda _\sigma ,\mu _\sigma )`$. The quanta distribution in the resulting state is given by $`\{\omega _1,\omega _2,\omega _3\}`$, with $`N_\sigma +N=\omega _1+\omega _2+\omega _3`$, $`\omega _1\ge \omega _2\ge \omega _3`$, and $`\lambda =\omega _1-\omega _2`$, $`\mu =\omega _2-\omega _3`$. The basis state construction is schematically illustrated in Fig. 1 for a typical Elliott starting state with $`(\lambda _\sigma ,\mu _\sigma )=(\lambda ,0)`$. <sup>20</sup>Ne, for instance, has $`N_\sigma `$ = 48.5 (after removal of the center-of-mass contribution) and $`(\lambda _\sigma ,\mu _\sigma )`$ = (8,0) . To complete the basis state labeling, additional quantum numbers $`\alpha =\kappa LM`$ are required, where $`L`$ denotes the angular momentum with projection $`M`$, and $`\kappa `$ is a multiplicity index, which enumerates multiple occurrences of a particular $`L`$ value in the SU(3) irrep $`(\lambda ,\mu )`$ from 1 to $`\kappa _L^{max}(\lambda ,\mu )=[(\lambda +\mu +2-L)/2]-[(\lambda +1-L)/2]-[(\mu +1-L)/2]`$, where $`[\mathrm{\dots }]`$ is the greatest non-negative integer function . The group chain corresponding to this labeling scheme is Sp(6,R) $`\supset `$ SU(3) $`\supset `$ SO(3), which defines a dynamical symmetry basis.
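As an illustration, the multiplicity formula just quoted can be sketched in a few lines of Python; the bracket is implemented as the greatest non-negative integer function, as stated above.

```python
from math import floor

def kappa_max(lam: int, mu: int, L: int) -> int:
    """Multiplicity of angular momentum L in the SU(3) irrep (lam, mu),
    with [x] taken as the greatest non-negative integer function."""
    def bracket(x: float) -> int:
        return max(0, floor(x))
    return (bracket((lam + mu + 2 - L) / 2)
            - bracket((lam + 1 - L) / 2)
            - bracket((mu + 1 - L) / 2))

# The 20Ne ground-state irrep (8,0) contains L = 0, 2, 4, 6, 8 once each:
print([L for L in range(9) if kappa_max(8, 0, L) == 1])  # [0, 2, 4, 6, 8]
```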
The quadrupole-quadrupole interaction connects h.o. states differing in energy by $`0\hbar \omega `$, $`\pm 2\hbar \omega `$, and $`\pm 4\hbar \omega `$, and may be written as
$`Q_2Q_2`$ $`=`$ $`9\widehat{C}_{SU3}-3\widehat{C}_{Sp6}+\widehat{H}_0^2-2\widehat{H}_0-3\widehat{L}^2-6\widehat{A}_0\widehat{B}_0`$ (2)
$`+\{\text{terms coupling different h.o. shells}\},`$
where $`\widehat{C}_{SU3}`$ and $`\widehat{C}_{Sp6}`$ are the quadratic Casimir invariants of SU(3) and Sp(6,R) with eigenvalues $`2(\lambda ^2+\mu ^2+\lambda \mu +3\lambda +3\mu )/3`$ and $`2(\lambda _\sigma ^2+\mu _\sigma ^2+\lambda _\sigma \mu _\sigma +3\lambda _\sigma +3\mu _\sigma )/3+N_\sigma ^2/3-4N_\sigma `$, respectively. These operators, as well as the h.o. $`\widehat{H}_0`$ and $`\widehat{L}^2`$ terms, are diagonal in the dynamical symmetry basis. Unlike the Elliott quadrupole-quadrupole interaction, $`Q_2^EQ_2^E`$ $`=6\widehat{C}_{SU3}-3\widehat{L}^2`$, the $`Q_2Q_2`$ interaction of Eq. (2) breaks SU(3) symmetry within each h.o. shell since the term $`\widehat{A}_0\widehat{B}_0\equiv \widehat{A}_0^{(20)}\widehat{B}_0^{(02)}=(\{\widehat{A}\times \widehat{B}\}_0^{(00)}-\sqrt{5}\{\widehat{A}\times \widehat{B}\}_0^{(22)})/\sqrt{6}`$ mixes different SU(3) irreps. In order to study the action of $`Q_2Q_2`$ within such a shell, we consider the following family of Hamiltonians:
$`H(\beta _0,\beta _2)=\beta _0\widehat{A}_0\widehat{B}_0+\beta _2\widehat{A}_2\widehat{B}_2`$ (4)
$`={\displaystyle \frac{\beta _2}{18}}(9\widehat{C}_{SU3}-9\widehat{C}_{Sp6}+3\widehat{H}_0^2-36\widehat{H}_0)+(\beta _0-\beta _2)\widehat{A}_0\widehat{B}_0.`$
For $`\beta _0=\beta _2`$, one recovers the dynamical symmetry, and with the special choice $`\beta _0=12`$, $`\beta _2=18`$, one obtains $`Q_2Q_2=H(\beta _0=12,\beta _2=18)+const(N)-3\widehat{L}^2`$ + terms coupling different shells, where $`const(N)`$ is constant for a given h.o. $`N\hbar \omega `$ excitation.
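For concreteness, the Casimir eigenvalues entering Eqs. (2) and (4) are easily evaluated; the short Python sketch below uses the <sup>20</sup>Ne labels $`N_\sigma =48.5`$ and $`(\lambda _\sigma ,\mu _\sigma )=(8,0)`$ quoted in the text.

```python
def casimir_su3(lam, mu):
    """Quadratic SU(3) Casimir eigenvalue quoted above."""
    return 2.0 * (lam**2 + mu**2 + lam*mu + 3*lam + 3*mu) / 3.0

def casimir_sp6(lam_s, mu_s, N_s):
    """Quadratic Sp(6,R) Casimir eigenvalue for the irrep with
    lowest-weight labels N_s(lam_s, mu_s)."""
    return casimir_su3(lam_s, mu_s) + N_s**2 / 3.0 - 4.0 * N_s

print(casimir_su3(8, 0))        # ~58.67
print(casimir_sp6(8, 0, 48.5))  # 648.75
```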
From Eq. (4) it follows that $`H(\beta _0,\beta _2)`$ is not SU(3) invariant. We will now show that $`H(\beta _0,\beta _2)`$ exhibits partial SU(3) symmetry. Specifically, we claim that among the eigenstates of $`H(\beta _0,\beta _2)`$, there exists a subset of solvable pure-SU(3) states, the SU(3) $`\supset `$ SO(3) classification of which depends on both the Elliott labels $`(\lambda _\sigma ,\mu _\sigma )`$ of the starting state and the symplectic excitation $`N`$. In general, we find that all $`L`$-states in the starting configuration ($`N=0`$) are solvable with good SU(3) symmetry $`(\lambda _\sigma ,\mu _\sigma )`$. For excited configurations ($`N>0`$ and even) we distinguish between two possible cases:
* $`\lambda _\sigma >\mu _\sigma `$: the pure states belong to $`(\lambda ,\mu )=(\lambda _\sigma -N,\mu _\sigma +N)`$ and have $`L=\mu _\sigma +N,\mu _\sigma +N+1,\mathrm{\dots },\lambda _\sigma -N+1`$ with $`N=2,4,\mathrm{\dots }`$ subject to $`2N\le \lambda _\sigma -\mu _\sigma +1`$.
* $`\lambda _\sigma \le \mu _\sigma `$: the special states belong to $`(\lambda ,\mu )=(\lambda _\sigma +N,\mu _\sigma )`$ and have $`L=\lambda _\sigma +N,\lambda _\sigma +N+1,\mathrm{\dots },\lambda _\sigma +N+\mu _\sigma `$ with $`N=2,4,\mathrm{\dots }`$.
To prove the claim, it suffices to show that $`\widehat{B}_0`$ annihilates the states in question. For $`N=0`$ this follows immediately from the fact that the $`0\hbar \omega `$ starting configuration is a Sp(6,R) lowest weight which, by definition, is annihilated by the lowering operators of the Sp(6,R) algebra. The latter include the generators $`\widehat{B}_{lm}^{(02)}`$. For $`N>0`$, let $`\{\sigma _1,\sigma _2,\sigma _3\}`$ be the quanta distribution for a $`0\hbar \omega `$ state with $`\lambda _\sigma >\mu _\sigma `$. Adding $`N`$ quanta to the 2-direction yields a $`N\hbar \omega `$ state with quanta distribution $`\{\sigma _1,\sigma _2+N,\sigma _3\}`$, that is $`(\lambda ,\mu )=(\lambda _\sigma -N,\mu _\sigma +N)`$. Acting with the rotational invariant $`\widehat{B}_0`$ on such a state does not affect the angular momentum, but removes two quanta from the 2-direction, giving a $`(N-2)\hbar \omega `$ state with $`(\lambda ^{},\mu ^{})=(\lambda _\sigma -N+2,\mu _\sigma +N-2)`$. (The symplectic generator $`\widehat{B}_0`$ cannot remove quanta from the other two directions of this particular state, since this would yield a state belonging to a different symplectic irrep.) Comparing the number of $`L`$ occurrences in $`(\lambda ,\mu )`$ and $`(\lambda ^{},\mu ^{})`$, one finds that as long as $`\lambda _\sigma -N+1\ge \mu _\sigma +N`$, $`\mathrm{\Delta }_L(N)\equiv \kappa _L^{max}(\lambda ,\mu )-\kappa _L^{max}(\lambda ^{},\mu ^{})=1`$ for $`L=\mu _\sigma +N,\mu _\sigma +N+1,\mathrm{\dots },\lambda _\sigma -N+1`$, and $`\mathrm{\Delta }_L(N)=0`$ otherwise. When $`\mathrm{\Delta }_L(N)`$=1, a linear combination $`|\varphi _L(N)\rangle =\sum _\kappa c_\kappa |N\hbar \omega (\lambda _\sigma -N,\mu _\sigma +N)\kappa LM\rangle `$ exists such that $`\widehat{B}_0|\varphi _L(N)\rangle =0`$, and thus our claim for family (a) holds. The proof for family (b) can be carried out analogously if one considers adding $`N`$ quanta to the 1-direction of the starting irrep. In this case there is no restriction on $`N`$, hence family (b) is infinite.
The special states have well defined symmetry Sp(6,R) $`\supset `$ SU(3) $`\supset `$ SO(3) and are annihilated by $`\widehat{B}_0`$. This ensures that they are solvable eigenstates of $`H(\beta _0,\beta _2)`$ with eigenvalues $`E(N=0)=0`$, $`E(N)=\beta _2N(N_\sigma -\lambda _\sigma +\mu _\sigma -6+3N/2)/3`$ for family (a), and $`E(N)=\beta _2N(N_\sigma +2\lambda _\sigma +\mu _\sigma -3+3N/2)/3`$ for family (b). All $`0\hbar \omega `$ states are unmixed and span the entire $`(\lambda _\sigma ,\mu _\sigma )`$ irrep. In contrast, for the excited levels ($`N>0`$), the pure states span only part of the corresponding SU(3) irreps. There are other states at each excited level which do not preserve the SU(3) symmetry and therefore contain a mixture of SU(3) irreps. The partial SU(3) symmetry of $`H(\beta _0,\beta _2)`$ is converted into partial dynamical SU(3) symmetry by adding to it SO(3) rotation terms which lead to $`L(L+1)`$-type splitting but do not affect the wave functions. The solvable states then form rotational bands, and since their wave functions are known, one can evaluate the E2 rates between them . It is of interest to note that both the fermion Hamiltonian presented here and the boson Hamiltonian of exhibit partial SU(3) symmetry and involve a SU(3) tensor of the form $`[(2,0)\times (0,2)]^{(2,2)}L=0`$.
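A minimal numerical sketch of this counting is given below (Python, family (a) only); it uses the closed-form $`E(N)`$ quoted above with the <sup>20</sup>Ne labels from the text, and $`\beta _2`$ is left as an overall scale.

```python
def solvable_family_a(lam_s, mu_s, N_s):
    """Solvable pure-SU(3) states of family (a) (lam_s > mu_s) and
    their energies E(N)/beta2, from the closed forms in the text."""
    states, N = [], 2
    while 2 * N <= lam_s - mu_s + 1:
        labels = (lam_s - N, mu_s + N)
        L_values = list(range(mu_s + N, lam_s - N + 2))
        energy = N * (N_s - lam_s + mu_s - 6.0 + 1.5 * N) / 3.0
        states.append((N, labels, L_values, energy))
        N += 2
    return states

# 20Ne: N_sigma = 48.5, (lam_s, mu_s) = (8, 0)
for N, lm, Ls, E in solvable_family_a(8, 0, 48.5):
    print(N, lm, Ls, round(E, 1))
# 2 (6, 2) [2, 3, 4, 5, 6, 7] 25.0
# 4 (4, 4) [4, 5] 54.0
```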
To illustrate that the PDS Hamiltonians of Eq. (4) are physically relevant, we compare the eigenstates of $`H_{PDS}=h(N)+\xi H(\beta _0=12,\beta _2=18)+\gamma _2\widehat{L}^2+\gamma _4\widehat{L}^4`$ to those of the symplectic Hamiltonian $`H_{Sp6}=\widehat{H}_0-\chi Q_2Q_2+d_2\widehat{L}^2+d_4\widehat{L}^4`$. Here the function $`h(N)`$ is simply a constant for a given $`N\hbar \omega `$ excitation and contains the h.o. term $`\widehat{H}_0`$. Least squares fits to measured energies and B(E2) values of the ground band of <sup>20</sup>Ne were carried out for $`2\hbar \omega `$, $`4\hbar \omega `$, $`6\hbar \omega `$, and $`8\hbar \omega `$ symplectic model spaces. The resulting energies and transition rates converge to values which agree with the data, Fig. 2 and Table I. The parameters $`\gamma _2`$ and $`\gamma _4`$ in $`H_{PDS}`$ were determined by the energy splitting between states of the ground band, $`\xi `$ was adjusted to reproduce the relative positions of the resonance bandheads, and $`h(N)`$ was fixed by the energy difference $`[E(0_2^+)-E(0_1^+)]`$. Fig. 2 and Table I demonstrate the level of agreement between the PDS and symplectic results.
An analysis of the structure of the ground and resonance bands reveals the amount of mixing in the $`8\hbar \omega `$ symplectic ($`Q_2Q_2`$) wave functions. Fig. 3 shows the decomposition for representative ($`2^+`$) states of the five lowest rotational bands. Ground band (K=$`0_1`$) states are found to have a strong $`0\hbar \omega `$ component ($`64\%`$), and three of the four resonance bands are clearly dominated ($`60\%`$) by $`2\hbar \omega `$ configurations. States of the first resonance band (K=$`0_2`$), however, contain significant contributions from all but the highest $`N\hbar \omega `$ excitations. The relative strengths of the SU(3) irreps within the $`2\hbar \omega `$ space are shown as well: states are found to be dominated by one representation \[(10,0) for the K=$`0_2`$ band, (8,1) for K=$`1_1`$, (6,2)$`\kappa =2`$ for K=$`2_1`$, and (6,2)$`\kappa =1`$ for K=$`0_3`$, where $`\kappa =1`$ and 2 correspond here to Vergados basis labels 0 and 2, respectively \], while the other irreps contribute only a few percent. Such trends are present also in the more realistic symplectic calculations of .
The PDS Hamiltonian $`H_{PDS}`$ acts only within one oscillator shell, hence its eigenfunctions do not contain admixtures from different $`N\hbar \omega `$ configurations. As expected, $`H_{PDS}`$ has families of pure SU(3) eigenstates which can be organized into rotational bands. The ground band belongs entirely to $`N=0`$, $`(\lambda ,\mu )=(8,0)`$, and all states of the K=$`2_1`$ band have quantum labels $`N=2`$, $`(\lambda ,\mu )=(6,2)`$, $`\kappa =2`$. A comparison with the symplectic case shows that the $`N\hbar \omega `$ level to which a particular PDS band belongs is also dominant in the corresponding symplectic band. In addition, within this dominant excitation, eigenstates of $`H_{PDS}`$ and $`H_{Sp6}`$ have similar SU(3) distributions; in particular, both Hamiltonians favor the same $`(\lambda ,\mu )\kappa `$ values. Significant differences in the structure of the wave functions appear, however, for the K=$`0_2`$ resonance band. In the $`8\hbar \omega `$ symplectic calculation, this band contains almost equal contributions from the $`0\hbar \omega `$, $`2\hbar \omega `$, and $`4\hbar \omega `$ levels, with additional admixtures of $`6\hbar \omega `$ and $`8\hbar \omega `$ configurations, while in the PDS calculation, it belongs entirely to the $`2\hbar \omega `$ level. These structural differences are also evident in the interband transition rates, e.g. B(E2; K=$`0_1`$, L=$`2^+`$ $`\rightarrow `$ K=$`0_2`$, L=$`0^+`$) = 2.93 (5.69) W.u. and B(E2; K=$`0_2`$, L=$`2^+`$ $`\rightarrow `$ K=$`0_1`$, L=$`0^+`$) = 5.84 (12.6) W.u. in the $`8\hbar \omega `$ (PDS) calculation, and reflect the action of the inter-shell coupling terms in Eq. (2). Increasing the strength $`\chi `$ of $`Q_2Q_2`$ in $`H_{Sp6}`$ will also spread the other resonance bands over many $`N\hbar \omega `$ excitations. The K=$`2_1`$ band (which is pure in the PDS scheme) is found to resist this spreading more strongly than the other resonances. For physically relevant values of $`\chi `$, the low-lying bands have the structure shown in Fig. 3.
In summary, we have introduced a family of fermionic Hamiltonians with partial SU(3) symmetry. Using the framework of the symplectic shell model, we have proven that these Hamiltonians possess both mixed-symmetry and solvable pure-SU(3) rotational bands. For the deformed light nucleus <sup>20</sup>Ne, we have shown that various features of the quadrupole-quadrupole interaction can be reproduced with a particular parameterization of the PDS Hamiltonians. For both the ground and the resonance bands, PDS eigenstates were seen to approximately reproduce the structure of the exact $`Q_2Q_2`$ eigenstates within the $`0\hbar \omega `$ and $`2\hbar \omega `$ spaces, respectively. In particular, for each pure state of the PDS scheme we found a corresponding eigenstate of the quadrupole-quadrupole interaction, which was dominated by the same SU(3) irrep. Moreover, for reasonable interaction parameters, each rotational band was primarily located in one level of excitation, with the exception of the lowest K=$`0_2`$ resonance band, which was spread over many $`N\hbar \omega `$ excitations. Implications of the structural differences between the various resonance bands for giant monopole and quadrupole transitions remain to be investigated. The occurrence of partial symmetries for fermions, as shown in this work, and for bosons, as presented in previous works , highlights their relevance to dynamical systems and motivates their further study.
The authors acknowledge valuable suggestions by D.J. Rowe and constructive comments by J.P. Elliott and G. Rosensteel. This work is supported by a grant from the Israel Science Foundation. We thank the Institute for Nuclear Theory at the University of Washington for its hospitality and the Department of Energy for partial support during the completion of this work.
## 1 Introduction
It is of crucial importance to know the precise value of the $`\pi NN`$ coupling constant, both in nuclear and in particle physics. Together with the pion mass, this coupling sets the scale of the nuclear interaction. Through the Goldberger-Treiman relation it tests chiral symmetry. From this last relation one expects an accuracy in its value of about 1%, as has been discussed in detail in Ref. . Some further recent considerations on this relation and its implications for the $`\pi NN`$ coupling can be found, for instance, in Ref. .
In the 1980s, the $`\pi NN`$ coupling constant was believed to be well known. The analysis of $`\pi ^\pm p`$ scattering data gave a value of $`14.28\pm 0.18`$ for the charged pion coupling constant. Forward dispersion relation analysis of pp scattering data led to $`g_{\pi ^0}^2/4\pi =14.52\pm 0.40`$ for the neutral pion coupling constant. The Nijmegen group , in the 1990s and on the basis of energy-dependent partial-wave analyses (PWA) of nucleon-nucleon ($`NN`$) scattering data, found smaller values. They obtained $`g_{\pi ^0}^2/4\pi =13.47\pm 0.11`$ and $`g_{\pi ^\pm }^2/4\pi =13.58\pm 0.05`$. These values were confirmed in their more recent $`NN`$ PWA analyses . The Virginia Polytechnic Institute (VPI) group, from analysis of both $`\pi ^\pm N`$ and $`NN`$ data, has also obtained low values around $`g_\pi ^2/4\pi =13.7`$. From a PWA for the $`\pi ^+`$p reaction a value of 13.45(14) was recently obtained . The very recent $`\pi N`$ and pion-photoproduction PWA VPI analyses give values of $`13.73\pm 0.10`$ and $`14.00\pm 0.13`$ respectively. Let us mention that some charge dependence has also been considered . All the determinations which rely on the analysis of large data bases from a great number of experiments, with some of the data rejected according to certain criteria, have a very good statistical accuracy. It is however difficult to assign to them a clear systematic uncertainty.
A more direct determination is the use of the Goldberger-Miyazawa-Oehme (GMO) sum rule which, in principle, depends directly on physical observables. This was applied in particular in Ref. , giving a value of $`g_{\pi ^\pm }^2/4\pi =13.75\pm 0.15`$, and very recently in Ref. , leading to $`g_{\pi ^\pm }^2/4\pi =14.17\pm 0.17`$. Another direct method is based on the extrapolation to the pion pole of precise data on single-energy backward differential np cross sections. Both determinations allow a systematic discussion of statistical and systematic uncertainties. The use of the recent backward np Uppsala data at 162 MeV and of a novel extrapolation method, the difference method, gives $`g_{\pi ^\pm }^2/4\pi =14.52\pm 0.26`$ . A fairly new analysis of the recent PSI backward np data, with the classical Chew extrapolation and a conformal mapping method, leads to $`g_{\pi ^\pm }^2/4\pi =13.84\pm 0.43`$ . A summary of values for the coupling constant is given in Table 1. It can be seen that between the smallest and largest value there is a discrepancy of about 7%. The dispersion of the different $`g_{\pi ^\pm }^2/4\pi `$ values can also be judged from Fig. 1.
Let us recall that, when using the np backward data, the experimental normalisation of the cross section is crucial for the sensitivity. It is both the shape of the angular distribution at the most backward angles and the absolute normalisation of the data that are of crucial importance . The Nijmegen group has strongly criticised the Uppsala data at 162 MeV and its extrapolated result via the difference method . In particular, in Ref. it is claimed that there is a very strong model dependence of the difference method. It is the purpose of the present study to check the accuracy of the difference method; in particular, when applied to the precise 162 MeV Uppsala data, we shall demonstrate, by choosing different models, that the model dependence is small and that for good reference models this method leads to a precision better than 2%.
The evidence that the backward peak of the np angular distribution is dominated by the one-pion exchange will be discussed in Sect. 2. The determination of the $`\pi NN`$ coupling constant through the extrapolation to the pion pole from models and np data is studied in Sect. 3, and some conclusions are given in Sect. 4.
## 2 Evidence for the one-pion exchange
It was realized early on that the one-pion exchange (OPE) contributes importantly to the np charge exchange (CEX) at small momentum transfer. In Fig. 2 we have plotted the recent np CEX experimental differential cross section data measured at the Svedberg Laboratory at 162 MeV as a function of $`q^2`$, the square of the momentum transfer from the neutron to the proton. It can be seen that there is a strong peak at very small $`q^2`$. Note that in Fig. 2 $`q^2`$ is expressed in units of $`m_\pi ^2`$, where $`m_\pi `$ is the charged pion mass. This differential cross section has a "Coulomb like" behaviour: it diverges, not at $`q^2=0`$ as in the photon-exchange case, but at $`q^2=-m_\pi ^2`$, which corresponds to the pion pole of the OPE. The presence of this OPE pole very close to the physical region led Chew in 1958 to suggest a model-independent extrapolation to this pole which would allow a determination of the $`\pi NN`$ coupling constant . We refer the interested reader to the very detailed and well documented discussion concerning the OPE given in Ref. . We shall now recall the different methods of extrapolation which were considered in that same reference.
## 3 Extrapolation to the pion pole
### 3.1 Methods of extrapolation
The idea of extrapolating to the pion pole is to study a smooth physical function, the Chew function, built by multiplying the cross section by $`(q^2+m_\pi ^2)^2`$. This removes the pole term, and the extrapolation can be made more safely and controllably. This function can then be fitted to the data in the physical region by a polynomial and extrapolated to the pole. One considers,
$$y(x)=\frac{(4\pi )^2sx^2}{m_\pi ^4g_R^4}\frac{\mathrm{d}\sigma }{\mathrm{d}\mathrm{\Omega }}(x)=\underset{i=0}{\overset{n-1}{\sum }}a_ix^i.$$
(1)
Here $`x=q^2+m_\pi ^2`$ and $`s`$ is the square of the total energy. At the pion pole $`x=0`$ and
$$y(0)\equiv a_0=g_{\pi ^\pm }^4/g_R^4$$
(2)
where the pseudoscalar coupling constant $`g_{\pi ^\pm }^2/4\pi \approx 14`$. The quantity $`g_R^2`$ is a reference scale for the coupling chosen for convenience. The model-independent extrapolation requires accurate data with absolute normalisation of the differential cross section. If the experimental differential cross section is incorrectly normalised by a factor $`N`$, the extrapolation determines $`\sqrt{N}g_{\pi ^\pm }^2/4\pi `$. This is one of the most important sources of uncertainty when extrapolating the data. The Chew method, which has been the most used in the past, requires at least 5 terms in the $`q^2`$ expansion .
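A sketch of how this extrapolation can be organized numerically is given below (Python/NumPy; the simple weighted polynomial fit stands in for the full covariance treatment of the published analyses, and variable names are illustrative).

```python
import numpy as np

def chew_extrapolate(x, y, sigma_y, n):
    """Fit the Chew function y(x) by an n-term polynomial (degree n-1)
    with weights 1/sigma and extrapolate to the pion pole x = 0."""
    coef, cov = np.polyfit(x, y, deg=n - 1, w=1.0 / sigma_y, cov="unscaled")
    a0 = coef[-1]                    # constant term = y(0)
    return a0, np.sqrt(cov[-1, -1])  # value and its standard error

# Eq. (2): a0 = (g^2/g_R^2)^2, so g^2/4pi = (g_R^2/4pi) * sqrt(a0).
```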
A second method, which should improve the convergence of the Chew extrapolation, is the Ashmore method . It parameterises $`\mathrm{d}\sigma /\mathrm{d}\mathrm{\Omega }(x)`$ in terms of a pion Born amplitude with the addition of a background term. Here we use the regularised pion Born amplitudes as given in Ref. . One also expects an important contribution to the np CEX from the $`\rho `$-meson exchange. We then use for the Ashmore background amplitude a pole term with adjustable strength simulating this $`\rho `$-meson exchange. This expression is fitted to the data and gives in principle a model-independent result for the coupling constant. More physics is built into the procedure, so fewer terms should be needed. More detailed expressions can be found in Ref. .
In order to obtain an improvement in the extrapolation we have introduced the Difference Method . It is based on the Chew function, but it uses the fact that an important part of the cross section behaviour is described by models with exactly known values for the coupling constant. It applies the Chew method to the difference between the function $`y(x)`$ of a model and that of the experimental data, i.e.,
$$y_{Model}(x)-y_{Exp}(x)=\underset{i=0}{\overset{n-1}{\sum }}d_ix^i.$$
(3)
If $`g_R`$ of Eq. (1) is replaced by the model value $`g_{Model}`$, one has at the pion pole,
$$y_{Model}(0)-y_{Exp}(0)\equiv d_0=\frac{g_{Model}^4-g_{\pi ^\pm }^4}{g_{Model}^4}.$$
(4)
This should decrease systematic extrapolation uncertainties and remove a substantial part of the non-OPE information at large momentum transfers. We have formally not introduced a model dependence by using such a comparison function and such procedures are used in many contexts of physics to obtain better transparency and precision. One has to calibrate the method, that is, to find the precision to which the coupling constant can be determined and the possible systematic uncertainties that are associated with the extrapolation procedure.
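The corresponding numerical step can be sketched as follows (Python/NumPy; variable names are illustrative), with Eq. (4) inverted to obtain the coupling from the fitted $`d_0`$.

```python
import numpy as np

def difference_extrapolate(x, y_model, y_exp, sigma_exp, g2_model, n):
    """Fit y_Model - y_Exp by an n-term polynomial (Eq. 3) and invert
    Eq. (4), d0 = (g_model^4 - g^4)/g_model^4, for g^2."""
    coef, cov = np.polyfit(x, y_model - y_exp, deg=n - 1,
                           w=1.0 / sigma_exp, cov="unscaled")
    d0, sd0 = coef[-1], np.sqrt(cov[-1, -1])
    g2 = g2_model * np.sqrt(1.0 - d0)
    sg2 = g2_model * sd0 / (2.0 * np.sqrt(1.0 - d0))  # error propagation
    return g2, sg2
```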
### 3.2 Application to models
We now want to cross-check to what precision the difference method can work in the extrapolation to the pion pole. We shall first apply it to models where the coupling is exactly known. This allows us to investigate its properties and in particular its systematics. In order to determine the systematic uncertainties in the procedures, we have generated pseudo-data with uncertainties corresponding to the Uppsala 162 MeV experiment from 10000 computer simulations, using exact data points from different models with a Gaussian, random error distribution . One has for a given pseudo-measurement m,
$$y_m^{Pseudodata}(x)=y^{Model}(x)+\mathrm{\Delta }y_m(x)$$
(5)
with
$$\mathrm{\Delta }y_m(x)=\mathrm{\Delta }y^{Uppsala}(x)\sqrt{-2\mathrm{Log}(R_{1m})}\mathrm{cos}(\pi R_{2m}).$$
(6)
In Eq. (6) $`R_{im}`$, for i=1, 2, are uniform random numbers between 0 and 1, with m varying from 1 to 10000.
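The generation of the pseudo-experiments of Eqs. (5)-(6) can be sketched as follows (Python/NumPy; the seed and the vectorization over all 10000 runs are implementation choices, not part of the original procedure).

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_experiments(y_model, dy_uppsala, n_runs=10000):
    """Model values plus the Uppsala error bars times standard normal
    deviates built from two uniforms, the Box-Muller form of Eq. (6)."""
    shape = (n_runs, y_model.size)
    R1 = 1.0 - rng.uniform(size=shape)  # in (0, 1], keeps the log finite
    R2 = rng.uniform(size=shape)
    noise = np.sqrt(-2.0 * np.log(R1)) * np.cos(np.pi * R2)
    return y_model + dy_uppsala * noise
```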
In the context of the workshop we asked R. A. Arndt to provide us with different models from his $`NN`$ energy dependent PWA. One of the models, which we call A13.75, corresponds to the energy-dependent PWA of the $`pp`$ and np data from 0 to 400 MeV with a minimisation on $`g_{\pi ^\pm }^2/4\pi `$ . The minimum $`\chi ^2`$ on the $`NN`$ data is obtained for a coupling constant of 13.75. The second model we consider is built from the previous one with all parameters kept fixed, except the value of the $`\pi NN`$ coupling constant, which is lowered from 13.75 to 12.83; let us denote this model A12.83g. A third model, denoted DA99, was also given to us, but with an unknown coupling constant. We shall here apply the three methods described in section 3.1 in order to attempt to determine this coupling constant, considering the predictions of this model as pseudo-data. The differential cross section at 162 MeV of the model DA99 is compared to that of the Uppsala experiment in Fig. 2.
Results are listed in Tables 2 and 3, $`n`$ being the number of terms in the polynomial fit of the "pseudo-data". Since for each calculation we performed 10000 pseudo-experiments, $`\chi ^2/N_{df}`$ is the average $`\chi ^2`$ per degree of freedom, $`g_{\pi ^\pm }^2/4\pi `$ is the mean value of the coupling constant, and the errors quoted are the standard deviations, which, in fact, are very close to the average value of the error of every pseudo-experiment. We also give, for the models A12.83g and A13.75, the systematic deviation $`\delta g_{\pi ^\pm }^2`$ of the mean value from the true value in the model. We then have control over systematic extrapolation errors and can calibrate the corresponding corrections. The data are grouped in two intervals, the first one with $`0<q^2<4m_\pi ^2`$, called "reduced range", with 31 data points, corresponding to the previous Uppsala experiment, and the second one with $`0<q^2<10.1m_\pi ^2`$, denoted "full range", with 54 data points, corresponding to the latest experiment . This allows one to examine the sensitivity and stability of the extrapolation to a given cut in momentum transfer and to check that it is the small $`q^2`$ region that carries an important part of the pion pole information. As a function of $`n`$ the behaviour of $`\chi ^2/N_{df}`$ is characteristic: it drops quickly with increasing $`n`$ to a value close to unity. Additional terms give only small benefits, and the data become over-parameterised. One can then adopt different statistical strategies leading to similar results. One is to take results at the minimum $`\chi ^2/N_{df}`$. This minimum is usually a shallow one, and values of $`n`$ close to the $`n`$ of the $`\chi ^2/N_{df}`$ minimum are almost equally probable statistically. Another possibility is to pick up $`g_{\pi ^\pm }^2/4\pi `$ from one of the smallest values of $`n`$ consistent with a $`\chi ^2/N_{df}`$ well within the range expected from the experimental sample. We remind the reader that here there is about a 47% probability for the experimental $`\chi ^2/N_{df}`$ to be larger than unity, and about 25% for it to be larger than 1.15.
For the Chew method a good fit is obtained with a fourth order polynomial in $`q^2`$ for the reduced range, but with a large systematic downward shift of 0.71 for both the "A12.83g" and "A13.75" PWA's as compared to the original model values. With a third order polynomial fit the statistical error becomes smaller, but the systematic shift is unreasonably large, viz. 2.43 and 2.48 respectively. In the full range, one or two more terms are needed to obtain a good fit. In any case a systematic shift remains even when a perfect fit is obtained, but at the minimum $`\chi ^2`$ it is always less than the statistical and extrapolation uncertainty. The "DA99" pseudo-data for both ranges give slightly different results for $`g_{\pi ^\pm }^2/4\pi `$ at minimum $`\chi ^2`$, but if one applies the corresponding systematic shift one obtains the same value of 14.27(86). The statistical and extrapolation error is rather large, so we do not obtain a precise determination of the coupling constant using this method. Fig. 3 shows the fit of the model DA99 in the reduced range together with its Chew-function (Eq. 1) extrapolations for n=4 (dotted line) and n=5 (solid line). The n=5 fit is better for $`q^2`$ above 2 $`m_\pi ^2`$.
For the Ashmore method a good description is achieved in the reduced range with one term less in the expansion. This shows that the physics beyond the $`\pi `$-exchange is reasonably described by the $`\rho `$-exchange, as anticipated in section 3.1. The systematic shifts are similar to those of the Chew method. The statistical and extrapolation error, when one stays close to the minimum $`\chi ^2`$, is however smaller. Once corrected by the mean value of the systematic shifts of the models, the pseudo-data give a $`g_{\pi ^\pm }^2/4\pi `$ of 14.25(47), with a central value very close to the previous result. In the full range the number of terms needed to get a good $`\chi ^2`$ is also smaller, and the corrected value for n=5 is 14.37(35). Although the statistical and extrapolation accuracy has improved, this method also appears to lack the high accuracy we would like to have. The excellent fit to the model DA99 is drawn in Fig. 4 for the full range with the Ashmore parameterisation with 5 terms.
The Difference Method should need fewer terms in the polynomial expansion than the two above methods, and this will give a smaller statistical extrapolation error. Recall that the statistical extrapolation errors are only meaningful if $`\chi ^2/N_{df}`$ is close to 1. The fact that the angular distributions from the pseudo-data and models might be alike can help, in particular for large $`q^2`$. This can add more physical information without introducing in principle any model dependence. We use the two reference models studied above. Results are given in Tables 2 and 3 for the reduced and full ranges, respectively. The pion-pole extrapolations of the $`n`$-term polynomial fits of the Difference Method, on the reduced and full ranges for the model DA99, are shown in Figs. 5 and 6 for the comparison models A12.83g and A13.75, respectively. The error bars increase at large $`x`$, which is due to the multiplication of the cross section by $`x^2`$, giving a smaller weight to the large $`q^2`$ region when extrapolating. The difference behaviour is slightly smoother with the reference model "A13.75"; however it can be seen that in all cases 4 and 5 terms are necessary to have a good fit in the reduced and full range, respectively. In the full range, as one has more points, one gets a better statistical extrapolation error; this applies also to the difference between reference models, which gives a check on the systematic uncertainties in the extrapolation. Both ranges have systematic shifts below 1%.
Averaging the values at minimum $`\chi ^2`$ (values in boldface in Tables 2 and 3) from the Difference-Method extrapolations one gets, over the reduced range,
$`\sqrt{N}g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`14.07\pm 0.32(\text{statistical + extrapolation})`$
$`\pm 0.11(\text{systematic})\pm 0.16(\text{normalisation})`$
$`=`$ $`14.07\pm 0.37,`$
i.e. an accuracy of 2.6 %, and over the full range,
$`\sqrt{N}g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`14.14\pm 0.25(\text{stat. + extr.})\pm 0.08(\text{syst.})\pm 0.16(\text{norm.})`$
$`=`$ $`14.14\pm 0.31,`$
which corresponds to an accuracy of 2.2 %. The results are fairly close, which substantiates our statement that the relevant information lies nearly entirely at low $`q^2`$. In view of the somewhat larger extrapolation uncertainty in the case of the reduced range, we take the full range value, 14.14(31), which compares quite well with the exact value of the model DA99, which is 14.28. Using the experimentally given statistics of the 162 MeV Uppsala data, the difference method has allowed us to determine the unknown DA99 coupling to less than 1 %, within an uncertainty of 2.2 %.
### 3.3 Application to data
We shall now apply the difference method to the 162 MeV Uppsala data to cross-check the precision of the $`g_{\pi ^\pm }^2/4\pi `$ determination of Ref. . We here consider different reference models than those used in . Besides the three comparison models considered in the previous section we also use two more PWA fits of the $`pp`$ and np data from 0 to 400 MeV with $`g_{\pi ^\pm }^2/4\pi `$ fixed to 12.83 (model A12.83) and 14.28 (model A14.28) . In Table 4 we give the total $`\chi ^2`$ and the $`\chi ^2`$/data on the 3747 np data considered for all the models we study here. We also recall their $`g_{\pi ^\pm }^2/4\pi `$ together with the corresponding pseudovector coupling $`f_c^2/4\pi `$, related to $`g_{\pi ^\pm }^2/4\pi `$ by $`g_{\pi ^\pm }^2=f_c^2(2M_N/m_{\pi ^\pm })^2`$ with $`M_N`$ the average proton-neutron mass. As for the model A12.83g, the model A14.28g (DA99) was obtained from the model A13.75 with all parameters kept fixed but increasing the $`\pi NN`$ coupling from 13.75 to 14.28. Predictions of all these comparison models for the 162 MeV Uppsala data are shown in Fig. 2.
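As a consistency check of Table 4, the pseudoscalar-pseudovector conversion is readily evaluated; the masses below are standard values inserted here for illustration.

```python
# g^2/4pi = (f_c^2/4pi) * (2 M_N / m_pi)^2, masses in MeV
M_N, M_PI = 938.92, 139.57  # average nucleon and charged-pion masses

def g2_from_f2(f2):
    return f2 * (2.0 * M_N / M_PI) ** 2

print(round(g2_from_f2(0.0800), 2))  # 14.48
print(round(g2_from_f2(0.0710), 2))  # 12.85
```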
Results for the reduced and full range are listed in Tables 5 and 6, respectively. The A12.83g comparison model requires 4 terms in the reduced range and 5 in the full one. Figure 7 shows that there is some edge effect in the reduced range at $`q^2=4m_\pi ^2`$. For the model A12.83 one needs also 4 terms in the reduced range. The obtained value, 14.37(31), is quite consistent with those determined in the model A12.83g, either in the reduced range, 14.69(31), or in the full range, 14.41(24). In the full range the minimum is very shallow and the $`n=4`$ ($`\chi ^2/N_{df}=1.18`$) and $`n=5`$ ($`\chi ^2/N_{df}=1.16`$) values, 13.65(14) and 13.94(25), respectively, are compatible. The lower value, 13.65(14), is however not compatible with the value, 14.37(31), of the reduced range at the $`\chi ^2/N_{df}`$ minimum. We shall then retain, in the full range for the A12.83 model, the $`n=5`$ determination, 13.94(25). Figure 8 shows as before the necessity to have at least 4 or 5 terms to obtain a good fit. In the reduced range 3 terms are here sufficient with the reference model A13.75, leading to $`g_{\pi ^\pm }^2/4\pi =14.50(12)`$, while in the full range the $`\chi ^2`$ minimum is reached with 4 terms, with a value of 14.38(14). Figure 9 shows in both ranges a smoother behaviour than before. In the A14.28 and A14.28g cases one needs the same number of terms as for A13.75, but the corresponding $`g_{\pi ^\pm }^2/4\pi `$ are somewhat larger and not always compatible with previous values. Curves for the extrapolation are shown in Figs. 10 and 11. Applying the difference method between models (see also Tables 2 and 3) shows relatively large systematics, which allows one to understand this dispersion.
Let us here remind the reader that the difference method analysis is quite consistent. One can check, in Tables 2, 3, 5 and 6, that, at a given $`n`$ value, the difference between the results of the difference method for the reference models and the data is very close to the systematic shift between the two models. One has,
$`g_{\pi ^\pm }^2/4\pi (n,\text{Model A}-\text{Uppsala})`$ $`-`$ $`g_{\pi ^\pm }^2/4\pi (n,\text{Model B}-\text{Uppsala})`$ (7)
$`\simeq `$ $`\delta g_{\pi ^\pm }^2(n,\text{Model A}-\text{Model B}).`$
For instance, if we compare in the full range the $`n=4`$ (A13.75 $`-`$ Uppsala) result to that of (A12.83g $`-`$ Uppsala), we have a difference (see Table 6) of 0.59 (14.38 - 13.79), which is close to the systematic shift of 0.65 between these models (see Table 3). In Ref. , where the comparison models were the Nijmegen potential , the Nijmegen (NI93) and Virginia (SM95) energy dependent PWA's, the dispersion of results was smaller. This could be traced to the fact that these models, where $`g_{\pi ^\pm }^2/4\pi `$ has been minimised with respect to the $`NN`$ data, have a high-$`q^2`$ behaviour more similar to that of the Uppsala data.
Summarising these results, we take for $`g_{\pi ^\pm }^2/4\pi `$ the following average in the reduced range
$`g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`{\displaystyle \frac{1}{5}}[14.69(31)+14.37(31)+14.50(12)+14.96(12)+14.94(12)]`$ (8)
$`=`$ $`14.69(20)`$
It is here only the second value (from A12.83 $`-`$ Uppsala) which is slightly outside the range of the two last values from A14.28 and A14.28g.
Taking half of the average of $`|\delta g_{\pi ^\pm }^2|`$ between models at $`\chi ^2/N_{df}=1.00`$ for the estimation of the systematic uncertainty (see Table 5) we obtain
$`g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`14.69\pm 0.20\text{(stat. + extr.)}\pm 0.15\text{(syst.)}\pm 0.17\text{(norm.)}`$ (9)
$`=`$ $`14.69(30)`$
i.e. an accuracy of 2 %. For the full range (see Table 6) we have
$`g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`{\displaystyle \frac{1}{5}}[14.41(24)+13.94(25)+14.38(14)+14.83(13)+14.75(13)]`$ (10)
$`=`$ $`14.46(18)`$
Adding the systematic (estimated as above) and normalisation errors we obtain
$`g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`14.46\pm 0.18\text{(stat.+ extr.)}\pm 0.15\text{(syst.)}\pm 0.17\text{(norm.)}`$ (11)
$`=`$ $`14.46(29)`$
i.e. again an accuracy of 2%. Note that for A12.83 $`-`$ A14.28 we need to go to $`n=7`$ to get $`\chi ^2/N_{df}=1.00`$, with a $`\delta g_{\pi ^\pm }^2`$ of $`-`$0.36. The value of Eq. (11) is to be compared to the value we determined in Ref. , viz.
$`g_{\pi ^\pm }^2/4\pi `$ $`=`$ $`14.52\pm 0.13\text{(stat.+ extr.)}\pm 0.15\text{(syst.)}\pm 0.17\text{(norm.)}`$ (12)
$`=`$ $`14.52(26)`$
i.e. an accuracy of 1.8%. It is seen that both determinations, Eqs. (11) and (12), are very close. The determination of Ref. , Eq. (12), has a better statistical extrapolation error, which can be understood since those comparison models possibly have a better determined high-$`q^2`$ behaviour, as just mentioned above.
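The averages and quadrature combinations of Eqs. (8)-(12) are easily reproduced; the sketch below redoes the full-range numbers of Eqs. (10) and (11), assuming, as the quoted figures suggest, that the statistical entry is the plain mean of the individual errors.

```python
import numpy as np

values = [14.41, 13.94, 14.38, 14.83, 14.75]  # full range, Eq. (10)
errors = [0.24, 0.25, 0.14, 0.13, 0.13]

mean = np.mean(values)                        # 14.46
stat = np.mean(errors)                        # ~0.18
total = np.sqrt(stat**2 + 0.15**2 + 0.17**2)  # add syst. and norm. in quadrature
print(round(mean, 2), round(stat, 2), round(total, 2))  # 14.46 0.18 0.29
```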
## 4 Conclusions
This analysis of the precise 162 MeV Uppsala experiment on np charge exchange demonstrates once again that such data can be used for a direct and accurate determination of the $`\pi NN`$ coupling constant. We reproduce the original coupling constant using the present procedures for pseudo-data from PWA's, as exemplified with the model DA99. The extrapolation error increases with the number of parameters. Our value is 7% larger than the Nijmegen result $`g_{\pi ^\pm }^2/4\pi =13.58\pm 0.05`$, but it is consistent with values given in earlier data compilations based on the analysis of $`\pi N`$ and $`NN`$ scattering data .
The data have been used to determine a precise value for the charged $`\pi NN`$ coupling constant using extrapolation to the pion pole. Using the most accurate extrapolation method, the Difference Method, we find $`\sqrt{N}g_{\pi ^\pm }^2/4\pi =14.46\pm 0.18`$ ($`f_{\pi ^\pm }^2=0.0800\pm 0.0010`$) with a systematic error of about $`\pm 0.15`$ ($`\pm 0.0008`$) and a normalisation uncertainty of $`\pm 0.17`$ ($`\pm 0.0009`$). We do reproduce the input coupling constants of models using equivalent pseudo-data. The practical usefulness of the method has been demonstrated; its precision and its relative insensitivity to systematics appear to be under control. The pseudo-data demonstrate that considerable precision is achieved statistically at a single energy. The absolute normalisation of the data is nevertheless crucial. The precision of the method used here has not yet reached its theoretical limit, but we can point out the key information necessary for this in the $`NN`$ sector. We need unpolarised differential cross sections as precise as possible, with an absolute normalisation of 1 to 2%, to reach a precision of about 1% in the coupling constant. For the Uppsala data an accurate normalisation of 2.3 % was obtained using integration over the angular distribution. It was performed on the part they measured (from 72° to 180°) and on the remaining one calculated from PWA's and models. The result was scaled to the experimentally well known total cross section . It is important to extend the angular range of data, to be able to achieve an improved normalisation.
For that purpose there is, as we have heard in this workshop, an experiment in progress at The Svedberg Laboratory. It will measure the np differential cross section in the forward hemisphere. This should hopefully determine the normalisation to 1 % and thus reduce the uncertainty on $`g_{\pi ^\pm }^2/4\pi `$ from 1.8 % to 1.5 %. To the question asked at this workshop, "Does the difference method work at the 1 % level?", the answer is so far no, but as we have demonstrated it can work to better than 2 %; in particular, with good reference models we obtained a precision of 1.8 %. Here, with different reference models, the precision reached was 2 %. Contrary to what was claimed in Ref. , we do not see a large model dependence, even using "extreme" models such as A12.83g with $`g_{\pi ^\pm }^2/4\pi `$=12.83 or $`f_c^2/4\pi `$=0.071. The model dependence is relatively small, as can be judged from a comparison of the result obtained here with that of Ref. : while using different reference models, the results agree to within less than 0.5 %.
Let us also mention, as we were told in this workshop, that np backward measurements with tagged neutron beams (which lead to absolute normalisation) are in progress at IUCF for 185 $`\le T_{lab}\le `$ 195 MeV and $`90^{\circ }<\theta <180^{\circ }`$. If the steeper very backward shape of the Uppsala data as compared with earlier data is confirmed, together with its absolute normalisation, then the $`\pi NN`$ coupling constant cannot be as low as found, for instance, by the Nijmegen group. If the Uppsala data are correct and if the coupling constant is small, then to reconcile the two, either the total cross section could be off or the angular distribution in the forward hemisphere could be different from what has been believed so far from PWA analyses. Hopefully the two above mentioned experiments in progress will help to clarify the situation. There will also be pp and np spin-transfer measurements which could be used to further constrain the $`\pi NN`$ coupling constant .
In principle, an experiment at one single energy is enough to determine $`g_{\pi ^\pm }^2/4\pi `$, since all energies contain similar information. Although the method of analysis seems to work well, it is nevertheless useful to deduce the coupling constant from data at several energies. This would increase confidence that no unexpected systematic effect influences the conclusion.
We thank the The Svedberg Laboratory crew for the data. We are also grateful to J. Blomgren, M. Lacombe and N. Olsson for helpful discussions and to W.R. Gibbs for advice on producing pseudo-data from models. TE acknowledges an interesting discussion with M. Rentmeester and BL the hospitality of the The Svedberg Laboratory.
This work has been financially supported by the Swedish Natural Science Research Council and, through the Scientific and Technical Service of the French Embassy in Stockholm, Sweden, by the French Ministry of Foreign Affairs.
# Nature of 45° vortex lattice reorientation in tetragonal superconductors
## Abstract
The transformation of the vortex lattice in a tetragonal superconductor, which consists of its 45° reorientation relative to the crystal axes, is studied using the nonlocal London model. It is shown that the reorientation occurs as two successive second order (continuous) phase transitions. The transition magnetic fields are calculated for a range of parameters relevant for borocarbide superconductors, in which the reorientation has been observed.
Properties of the vortex matter have recently attracted great attention due to diversity of phases and novel phenomena associated with them. One of the main research goals is determination of the phase diagram. In high temperature superconductors the vortex matter phases include the vortex liquid and various vortex solids which exist due to the competition of intervortex interactions with fluctuations both thermal and those due to the quenched disorder. On the other hand, in borocarbide superconductors a rich variety of quite perfect vortex crystals has been observed. The experimental information comes from such different measurements as neutron diffraction, decoration, and scanning tunneling microscopy. For these near isotropic materials the entropy contribution to the free energy is small and phase transitions in the vortex lattice are governed by competition between intervortex interactions of different symmetry.
The borocarbides are materials of the tetragonal symmetry. Interactions of this symmetry should exist for any physical subsystem of the crystal. In particular, in the mixed state with the field along the fourfold tetragonal axis they would favor a square vortex lattice. However, the standard magnetic repulsion of vortices is isotropic in this case. The isotropic interaction becomes dominant when the intervortex distance is large enough and a sparse lattice is close to hexagonal, the most closely packed two dimensional lattice. One, therefore, expects that the interplay of the interactions of different symmetries may result in structural transformations of the vortex lattices, observed in borocarbides.
For the applied magnetic field along the fourfold symmetry axis, these transformations are as follows. With decreasing magnetic field, the lattice undergoes a second order phase transition, at which the square structure loses stability and becomes a rhombic (distorted hexagonal) vortex lattice. As the field further decreases, the rhombic lattice changes its orientation relative to the underlying crystal by 45°, which has been classified as a first order transition. For the field along the twofold axis, a 90° reorientation has been reported.
In this paper we study in detail the 45° reorientation and clarify its nature. We show that this reorientation proceeds as two successive second order (continuous) transitions and not as an abrupt first order transition, the scenario assumed before. Instead of considering a limited class of rhombic lattices, we study the general class of arbitrary lattices. We find that in the field region between the two second order phase transitions, the lattice with the lowest possible symmetry is realized (with the inversion being the only symmetry element). This intermediate region is quite narrow, and the structural evolution in this field domain might be difficult to discern experimentally. However, the thermodynamic characteristics of the superconductor are different for the two scenarios, and this can be tested. In particular, no latent heat is expected during the lattice reorientation. We also predict a peak in the critical current in the transition region if the pinning is of a weak collective type. Below we describe the London model with nonlocal corrections relevant for the mixed state of borocarbides. Then, the numerical procedure is outlined and the results are presented.
A fruitful approach to the problem of the vortex lattice phases is the extended London model. We start here with London equations corrected for nonlocality:
$`{\displaystyle \frac{4\pi }{c}}j_i(\mathbf{k})`$ $`=`$ $`-{\displaystyle \frac{1}{\lambda ^2}}q_{ij}(\mathbf{k})a_j(\mathbf{k})`$ (1)
$`=`$ $`-{\displaystyle \frac{1}{\lambda ^2}}\left(m_{ij}^{-1}-\lambda ^2n_{ijlm}k_lk_m\right)a_j(\mathbf{k}).`$ (2)
Here, $`a_j=A_j+(\mathrm{\Phi }_0/2\pi )\nabla _j\theta `$, $`A_j`$ is the vector potential, $`\theta `$ is the order parameter phase, and $`\mathrm{\Phi }_0`$ is the flux quantum. The nonlocal response kernel $`q_{ij}(\mathbf{k})`$ is expanded up to the second order terms in the wave vector $`\mathbf{k}`$. The tensor $`n_{ijlm}\propto \langle v_iv_jv_lv_m\rangle \gamma (T,\ell )`$, where $`\mathbf{v}`$ is the Fermi velocity and the function $`\gamma `$ decreases somewhat with temperature and drops fast for short mean-free paths $`\ell `$. It is difficult to accurately estimate the components of $`\widehat{n}`$ because of uncertainties in the determination of Fermi velocities and, in particular, of the mean-free path. At low temperatures, $`\widehat{n}\sim \gamma /\kappa ^2`$, where $`\kappa `$ is the Ginzburg-Landau parameter. Since good crystals of borocarbides are clean materials with $`\kappa `$ between 10 and 15, one expects the components of $`\widehat{n}`$ to be of the order $`10^{-2}`$. Note also that for the problem of vortex lattices in fields well under the upper critical field, the correction $`\lambda ^2\widehat{n}k^2\sim \xi _0^2k^2\ll 1`$ ($`\xi _0`$ is the zero-$`T`$ coherence length). Therefore, for strong type-II superconductors, the corrections to the standard London equations and the truncation in the expansion (2) are well justified.
For the tetragonal symmetry, the tensor $`\widehat{n}`$ in the crystal frame has four independent components $`n_{xxxx}`$, $`n_{xxyy}`$, $`n_{zzzz}`$, and $`n_{xxzz}`$. The inverse mass tensor has two different components $`m_{xx}^{-1}=m_{yy}^{-1}`$ and $`m_{zz}^{-1}`$. The London free energy functional corresponding to Eq. (2) reads
$$F=\frac{1}{8\pi }\int \frac{\mathrm{d}\mathbf{k}}{4\pi ^2}\left(|\mathbf{h}|^2+\lambda ^2\epsilon _{ijk}\epsilon _{lmn}k_jk_mq_{lk}^{-1}h_nh_i\right)$$
(3)
where $`\mathbf{h}(\mathbf{k})`$ is the magnetic field and $`\epsilon _{ijk}`$ is the unit antisymmetric tensor. The nonlocal corrections preserve linearity of the London equations and do not change the standard London result that the interaction of two vortices is proportional to the field of one of them at the location of the other. As usual, the free energy density of a vortex lattice is given by $`F=\left(B^2/8\pi \mathrm{\Phi }_0\right)\sum _{\mathbf{G}}h_z(\mathbf{G})`$ where $`B`$ is the magnetic induction, $`\mathbf{G}`$ is a vector of the reciprocal lattice and $`h_z`$ is the component of the single vortex field along the vortex axes. We are interested in the field along the fourfold symmetry axis $`z`$. Solving Eq. (2) for a single vortex one can bring the free energy density to the form
$$F=\underset{\mathbf{G}}{\sum }\frac{B^2/8\pi }{1+\lambda ^2g^2+\lambda ^4(ng^4+dg_x^2g_y^2)},$$
(4)
where $`n=n_{xxyy}`$ and $`d=2(n_{xxxx}-3n_{xxyy})`$. The free energy $`F(B,T)`$ is the thermodynamic potential, which is minimal in equilibrium for a superconducting slab in a perpendicular applied field. The temperature enters $`F(B,T)`$ via the $`T`$ dependent parameters $`\lambda (T)`$, $`n(T)`$ and $`d(T)`$ that can, in principle, be calculated using a microscopic model. Note that besides the factor $`B^2`$, the induction enters via the area of the primitive lattice cell. We determine the stable lattice by numerical minimization of $`F(B,T;\mathbf{G})`$ with respect to the lattice structure specified by a given set of $`\mathbf{G}`$'s.
The vortex lattice is completely defined by the basis vectors $`\mathbf{a}_1`$ and $`\mathbf{a}_2`$, i.e., by four parameters. Since a unit cell accommodates one flux quantum, $`a_1a_2\mathrm{sin}\beta =\mathrm{\Phi }_0/B`$, three parameters suffice. Following Ref. we choose $`\alpha `$, $`\rho \equiv (a_2/a_1)\mathrm{cos}\beta `$, and $`\sigma \equiv (a_2/a_1)\mathrm{sin}\beta `$ as the needed three (see Fig. 1 for definitions of $`\alpha `$ and $`\beta `$). The parameters $`\rho `$ and $`\sigma `$ are convenient because one can select a domain of their variation, each point of which corresponds to a lattice with various equivalent choices of the basis vectors $`\mathbf{a}_{1,2}`$. Thus, the minimization of $`F`$ is done at fixed $`B`$, $`n`$, and $`d`$ with respect to $`\rho ,\sigma `$ and $`\alpha `$ for $`0<\alpha <\pi `$, $`0\le \rho \le 0.5`$, and $`\rho ^2+\sigma ^2\ge 1`$. The minima of $`F`$ are often located on the boundaries of this domain; we use the "Amoeba" numerical routine, convenient in such circumstances. The cutoff factor $`\mathrm{exp}\left(-\xi ^2g^2\right)`$ was introduced inside the sum (4) to properly account for the failure of the London model in the vortex core. Changing the parameters $`B`$ and $`n,d`$, we obtain the phase diagram.
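A minimal numerical sketch of this procedure is given below (Python with SciPy's Nelder-Mead, the modern counterpart of the "Amoeba" routine). Units are $`\lambda =1`$ with $`B`$ in $`\mathrm{\Phi }_0/(2\pi \lambda )^2`$; the cutoff $`\xi =0.1\lambda `$ and the starting point are illustrative assumptions, and the domain constraints discussed above are not enforced here.

```python
import numpy as np
from scipy.optimize import minimize

PHI0 = (2.0 * np.pi) ** 2  # flux quantum for lambda = 1, B in Phi0/(2 pi lambda)^2
XI = 0.1                   # assumed core cutoff xi/lambda (kappa ~ 10)

def free_energy(params, B, n, d, mmax=30):
    """Lattice sum of Eq. (4), in units of B^2/(8 pi), with the Gaussian
    core cutoff exp(-xi^2 g^2); params = (rho, sigma, alpha)."""
    rho, sigma, alpha = params
    if sigma <= 0.0:
        return np.inf                      # outside the physical domain
    a1 = np.sqrt(PHI0 / (B * sigma))       # from cell area a1^2 sigma = Phi0/B
    A1 = a1 * np.array([np.cos(alpha), np.sin(alpha)])
    beta = np.arctan2(sigma, rho)          # angle between a1 and a2
    A2 = a1 * np.hypot(rho, sigma) * np.array([np.cos(alpha + beta),
                                               np.sin(alpha + beta)])
    S = PHI0 / B                           # unit-cell area
    B1 = (2.0 * np.pi / S) * np.array([A2[1], -A2[0]])  # reciprocal basis
    B2 = (2.0 * np.pi / S) * np.array([-A1[1], A1[0]])
    m, k = np.meshgrid(np.arange(-mmax, mmax + 1), np.arange(-mmax, mmax + 1))
    gx = m * B1[0] + k * B2[0]
    gy = m * B1[1] + k * B2[1]
    g2 = gx**2 + gy**2
    denom = 1.0 + g2 + n * g2**2 + d * gx**2 * gy**2
    return np.sum(np.exp(-XI**2 * g2) / denom)

# Nelder-Mead ("Amoeba") minimization at fixed B, n, d
res = minimize(free_energy, x0=np.array([0.4, 0.9, 0.5]),
               args=(3.2, 0.015, 0.05), method="Nelder-Mead")
print(res.x)  # equilibrium (rho, sigma, alpha)
```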
The main finding of this work is that the reorientation of the lattice proceeds in two steps. Figure 2 shows the transition lines on the $`B,d`$ plane for a fixed $`n=0.015`$. The equilibrium lattices both before and after the reorientation have the rhombic symmetry $`D_{2h}`$. Their symmetry axes, which coincide with the diagonals of a rhombic unit cell (with the appropriate choice of such a cell, see Fig. 2), are aligned with $`[110]`$ and $`[1\overline{1}0]`$ at lower magnetic inductions, whereas the symmetry axes are along $`[100]`$ and $`[0\overline{1}0]`$ for higher $`B`$'s. This result is in accordance with data for $`YNi_2B_2C`$. In a narrow region between the two rhombic phases, a less symmetric lattice is stable. Here, the unit cell is a general parallelogram. All in-plane symmetry elements disappear except the inversion, and the symmetry group reduces to $`C_{2h}.`$
One could describe the reorientation process as a gradual rotation of the unit cell accompanied by a slight deformation. Figure 3 shows how the angles $`\alpha `$ and $`\beta `$ change when $`B`$ increases in the vicinity of the reorientation (for $`d=0.05`$ and $`n=0.015`$). The transitions at $`B\approx 3.18\,\mathrm{\Phi }_0/(2\pi \lambda )^2`$ and $`B\approx 3.27\,\mathrm{\Phi }_0/(2\pi \lambda )^2`$ are clearly seen. The angles $`\alpha `$, $`\beta `$, and the other lattice parameters are continuous at the two transition fields. We conclude that both phase transitions that occur during the reorientation are of the second order. The sequence of symmetry changes with decreasing field is $`D_{2h}\rightarrow C_{2h}\rightarrow D_{2h}.`$ While the symmetry is lowered at the first step of the reorientation, it increases again at the second. Correspondingly, the ground state is doubly degenerate in both $`D_{2h}`$ phases; the degenerate vacua (two equilibrium structures of the same energy) are related by rotations by $`90^{\circ }`$. The structure becomes fourfold degenerate in the intermediate $`C_{2h}`$ phase (rotations by $`\pm 45^{\circ }`$ and $`90^{\circ }`$). In practice, this may lead to apparently increased disorder in the $`C_{2h}`$ phase.
It is worth noting that the relative energy differences between the equilibrium $`C_{2h}`$ lattice and the rhombic ones are exceedingly small. As an example, we quote the numbers for $`d=0.05`$ and $`n=0.015`$: the relative difference between the energies of the rhombic lattice at the transition point and of the lattice in the middle of the field domain of the $`C_{2h}`$ structure is of the order $`10^{-7}`$. This is much smaller than the $`10^{-2}`$ relative energy differences usually cited for triangular and square lattices within the standard London or Ginzburg–Landau models.
The location of the phase transition lines is sensitive to both $`n`$ (the coefficient of the isotropic correction in Eq. (4)) and $`d`$ (that of the four-fold symmetric correction). Figure 4 shows the phase diagram of the vortex lattice on the $`B,d`$ and $`B,n`$ planes in the region of the reorientation process. The region of stability of the monoclinic lattice is broader for smaller $`n`$'s and larger $`d`$'s. Still, as seen in Fig. 4, this field region is narrow for the values of $`n`$ and $`d`$ adopted in our simulations. For example, for LuNi<sub>2</sub>B<sub>2</sub>C with $`\lambda \approx 710`$ Å, the field unit $`\mathrm{\Phi }_0/(2\pi \lambda )^2`$ is about $`100`$ G.
The new scenario of the lattice reorientation in a tetragonal superconductor found in this paper has implications for the thermodynamic characteristics of the vortex lattice and for its dynamic behavior. We have found, and this is our main result, that the reorientation proceeds as two successive phase transitions of the second order when the applied field or the temperature varies. Therefore, a continuous variation of the entropy (i.e., no latent heat) and of the reversible magnetization is expected during the reorientation. In contrast, the old scenario of a first-order transition implied discontinuous jumps of the above quantities.
As seen in Fig. 4, for small values of $`n`$ and $`d`$ the domain of the monoclinic phase shrinks. It would then be difficult to distinguish this situation experimentally from a first-order transition, because the entropy would change rapidly with $`B`$ during the reorientation (for a $`B`$-sweep at fixed $`T`$). Still, one should not observe the hysteresis characteristic of first-order transitions. If the sequence of transitions we suggest here is found, it would be of interest to suppress $`n`$ and $`d`$ by making the mean-free path shorter and to see how the transition evolves (as has been done by doping Lu-based borocarbide crystals with Co).
Both the upper and lower phase transitions ($`D_{2h}\leftrightarrow C_{2h}`$) of the reorientation process cause uniform spontaneous deformations of the vortex lattice. As a result, a particular combination of the elastic lattice moduli vanishes at the transitions. It has recently been shown that a change of elastic properties of this type generally leads to peculiarities in the critical current, provided weak collective pinning operates in the material. Therefore, the reorientation of the vortex lattice in borocarbides may lead to a peak in the critical current.
Finally, we would like to point out other possible applications of our results. The London model we employed properly reflects the symmetry of the system. It was originally derived for an anisotropic Fermi surface and isotropic superconducting pairing. However, d-wave pairing also leads to a similar effective London model (at not too low temperatures; at lower temperatures the effects of the order-parameter nodes become essential). The reorientation of the vortex lattice has indeed been found theoretically in this case, and was characterized as a first-order transition. It would be of interest to check whether or not our scenario of the reorientation applies to this case as well.
This work is supported by NSC of Taiwan through the grants #89-2112-M-009-0016 and #89-2112-M-009-039. |
# The NASA Astrophysics Data System: Architecture
## 1 Introduction
The Astrophysics Data System (ADS) Abstract Service was originally designed as a search and retrieval system offering astronomers and research librarians sophisticated bibliographic search capabilities. Over time, the system has evolved to include full-text scans of the scholarly astronomical literature and an ever-increasing number of links to resources available from other information providers, taking full advantage of the capabilities offered by the emerging technology of the World-Wide Web (WWW).
As new data and functionality were incorporated in the ADS, the design of its system components evolved as well, driven by the desire to strike a balance between simplicity in the operation of the system and richness in its features. Over time, we favored design approaches promising long-term rewards over short-term gains, within the limits allowed by our resources. The approach we followed in software development has always been very pragmatic and data-driven, in the sense that specialized software components were designed to work efficiently with the existing datasets, rather than attempting to use general-purpose, monolithic software packages.
This paper gives an overview of the architecture of the Astrophysics Data System bibliographic services and discusses in detail the design of the underlying data structures and the implementation of its key software components. In conjunction with three other ADS papers in this volume, it is intended to give a complete description of the current state and capabilities of the ADS. An overview of the history and current use of the system is given in Kurtz et al. (2000) (OVERVIEW from here on); details on the datasets in the ADS, their creation and maintenance is given in Grant et al. (2000) (DATA); a complete description of the ADS search engine and its user interface is given in Eichhorn et al. (2000) (SEARCH).
Section 2 discusses the methodological approach used in the management of bibliographic records, their representation in the system, and the procedures used for data exchange with our collaborators. Section 3 describes the structure of the index files used by the ADS search engine, the implementation of the procedures that create them, and the use of discipline-specific knowledge to improve search results. Section 4 details the design and implementation of general procedures for the creation and management of properties associated with bibliographic records, and their use in the creation of links to internal and external resources. Section 5 discusses the set of procedures used to clone the ADS bibliographic services to the current mirror sites and the level of system independence necessary for their operation. In section 6 we describe how the recent developments in technology and collaborations among astronomical data centers may affect the evolution of the ADS.
## 2 Creation of Bibliographic Records
The bibliographic records maintained by the ADS project consist of a corpus of structured documents describing scientific publications. Each record is assigned a unique identifier in the system and all data gathered about the record are stored in a single text file, named after its identifier. The set of all bibliographic records available to the ADS is partitioned into four main data sets: Astronomy, Instrumentation, Physics and Astronomy Preprints (DATA). This division of documents into separate groups reflects the discipline-specific nature of the ADS databases, as discussed in DATA and section 3.2.
Since we receive bibliographic records from a large number of different sources and in a variety of formats (DATA), the creation and management of these records require a system that can parse, identify, and merge bibliographic data in a reliable way. In this section we describe the framework used to implement such a system and some of its design principles. Section 2.1 details the methodology behind our approach. Section 2.2 describes the file format adopted to represent the bibliographic records. Section 2.3 outlines the procedures used to automate data exchange between our system and our collaborators. Details about the pragmatic aspects of creating and managing the bibliographic records are described in DATA.
### 2.1 Methodology
When the ADS abstract service was first introduced to the astronomical community (Kurtz et al. (1993)), the system was built on bibliographic data obtained from a single source (the NASA STI project, also known as RECON) and in a well-defined format (structured ASCII records). The activity of entering these data into the ADS database consisted simply of parsing the individual records, identifying the different bibliographic fields in them, and reformatting the contents of these fields into the ones used in our system. Bibliographic records were created as text files named after STI's accession numbers (DATA), which the project used to uniquely identify records in the system.
As the desire for greater inter-operability with other data services grew (OVERVIEW), the ADS adopted the bibliographic code ("bibcode" from here on) as the unique identifier for a bibliographic entry (DATA). This permitted immediate access to the astronomical databases maintained by the Strasbourg Data Center (CDS), and allowed integration of SIMBAD's object name resolution (Egret & Wenger (1988)) within the ADS abstract service (OVERVIEW).
As more journal publishers and data centers became providers of bibliographic data to our project, a unified approach to the creation of bibliographic records became necessary. What makes the management of these records challenging is the fact that we often receive data about the same bibliographic entry from different sources, in some cases with incomplete or conflicting information (e.g. ordering or truncation of the author list). Even when the data received is semantically consistent, there may be differences in the way the information has been represented in the data file. For instance, while most journal publishers provide us with properly encoded entities for accented characters and mathematical symbols, the legacy data currently found in our databases and provided to us by some sources only contain plain ASCII characters. In other, more subtle and yet significant cases, the slightly different conventions adopted by different groups in the creation of bibcodes (DATA) make it necessary to have "special case" provisions in our system that take these differences into account when matching records generated from these sources.
The paradigm currently followed for the creation of bibliographic records in our system is illustrated in figure 1. The different action boxes and tests displayed in the diagram represent modular procedures, most of which have been implemented as PERL (Wall, Christiansen & Schwartz (1996)) software modules. More details about each of the software components can be found in DATA.
As the holdings of the ADS databases have grown over time, additional metadata about the literature covered in our databases has been collected and is currently being used by many of our software modules for a variety of tasks. Among them it is worth mentioning two activities which are significant in the context discussed here:
1) Identification of publication sources. This is the activity of associating the name of the publication with the standard abbreviation used to compose bibliographic codes, and allows us to compute a bibcode for each record submitted to our system.
2) Data consistency checks. For all major serials and conference series in our databases, we maintain tables correlating the volume, issue, and page ranges with publication dates. We have also recently started to maintain "completeness" tables describing in analytical form what range of years or volumes is completely abstracted in our system for each publication. This allows us to flag as errors those records referring to publications for which the ADS has complete coverage, but which do not match any entry in our system (a sketch of such a check follows). The availability of this feature is particularly significant for reference resolution, as discussed later in this paper.
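A minimal sketch of how such a check might work is shown below; the table contents, journal names, and volume ranges are invented for illustration and do not reflect the actual ADS data structures.

```
# Hypothetical completeness check; journal names and ranges are made up.
COMPLETE = {"ApJ": (1, 500), "A&A": (1, 350)}   # fully abstracted volumes

def classify_unmatched(journal, volume):
    """Classify a record that did not match any entry in the system."""
    if journal in COMPLETE:
        lo, hi = COMPLETE[journal]
        if lo <= volume <= hi:
            return "error"       # coverage is complete: the record is bad
    return "unverified"          # outside complete coverage: cannot decide

print(classify_unmatched("ApJ", 450))   # error
```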
### 2.2 Data Representation
From the inception of the ADS databases until recently, each bibliographic record has been represented as a single entity consisting of a number of different fields (e.g. authors, title, keywords). This information was stored in the database as an ASCII file containing pairs of field names and values. While this model has allowed us to keep a structured representation of each record, over the years its limitations have become apparent.
First of all, the issue of dealing with multiple records referring to the same bibliographic entry arose. As previously mentioned, while much of the information present in these records is the same, certain fields may only appear in one of them (for example, keywords assigned by the publisher). Therefore the capability of managing bibliographic fields supplied by different sources became desirable, which could not be easily accomplished with the file format being used.
Secondly, the problem of maintaining ancillary information about a particular bibliographic entry or even an individual bibliographic field surfaced. Information such as the time-stamp indicating when a bibliographic entry was created or modified, which data provider submitted it, and what is the identifier assigned to the record by the publisher can be used to decide how this data should be merged into our system or how hyperlinks to this resource should be created. Even more importantly, it is often necessary to attach semantic information to individual records. For instance, if keywords are assigned to a particular journal article, it is important to know what keyword system or thesaurus was used in order to effectively use this information for document classification and retrieval (Lee, Dubin & Kurtz (1999)).
Thirdly, the issue of properly structuring the bibliographic fields had to be considered. Some of these fields contain simply plaintext words, and as such can be easily represented by unformatted character strings. Others, however, consist of lists of items (e.g. keywords or astronomical objects), or may contain structured information within their contents (e.g. an abstract containing tables or math formulae). The simple tagged format we had adopted did not allow us to easily create hierarchical structures containing subfields within a bibliographic field.
Finally, there was the problem of representing relationships among bibliographical entries (e.g. an erratum referring the original paper), or among bibliographic fields (e.g. an author corresponding to an affiliation). While we had been using ASCII identifiers to cross-correlate authors and affiliations in our records, the adopted scheme was very limited in its capabilities (e.g. multiple affiliations for an author could not be expressed using the syntax we implemented).
Given the shortcomings of the bibliographic record representation detailed above, we recently started reformatting all our bibliographic records as XML (Extensible Markup Language) documents. XML is a markup language which is receiving widespread endorsement as a standard for data representation and exchange. Using this format, a single XML document was created for each bibliographic entry in our system. Each bibliographic field is represented as an XML element, and may in turn consist of sub-elements (see DATA for an example of such a file). Ancillary information about the record is stored as metadata elements within the document. Information about an individual field within the record is stored as attributes of the element representing it. Relationships among fields are expressed as links between the corresponding XML elements.
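The fragment below sketches what such a record might look like when built programmatically; the element and attribute names are illustrative only and do not reproduce the actual ADS schema (which is shown in DATA).

```
# A minimal, hypothetical record; element and attribute names are
# illustrative and do not reproduce the actual ADS schema (see DATA).
import xml.etree.ElementTree as ET

rec = ET.Element("record", {"bibcode": "1999ApJ...999..999Z",
                            "origin": "ApJ", "created": "1999-11-30"})
ET.SubElement(rec, "title").text = "An Example Paper"
author = ET.SubElement(rec, "author", {"aff": "a1"})   # link to affiliation
author.text = "Doe, J."
aff = ET.SubElement(rec, "affiliation", {"id": "a1"})
aff.text = "Example Observatory"
keywords = ET.SubElement(rec, "keywords", {"system": "AAS"})  # tagged system
ET.SubElement(keywords, "keyword").text = "surveys"
print(ET.tostring(rec).decode())
```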
While it is beyond the scope of this paper to describe the characteristics that make XML a desirable language for representing structured documents, we will point out the main reasons why XML was selected over other formats in our environment. The reader should note that most of these remarks not only apply to XML, but also to its "parent" language, SGML (Standard Generalized Markup Language).
XML can be used to represent precise, possibly non-textual information organized in data structures, and as such can be used as a formal language for expressing complex data records and their relationships. In our case, this means that bibliographic fields can be described in as much detail as necessary. For instance, the publication information for a conference proceedings volume can be composed of the conference title, the conference series name and number, the names of the editors, the name of the publisher, the place of publication, and the ISBN number for the printed book. While all this information has been stored in the past in a single bibliographic field, the obvious representation for it is a structured record where items such as conference title and editors are clearly identified and tagged. This makes it possible, among other things, to properly identify individual bibliographic items when formatting the record for a particular application (e.g. when citing a work in an article).
A second important feature which XML offers is the possibility of representing any amount of ancillary information (the "metadata") along with the actual contents of a document. This makes it possible, among other things, to tag bibliographic records, or even individual fields, with any relevant piece of information. For instance, an attribute can be assigned to the bibliographic field listing a set of keywords, describing what keyword system they belong to.
Other important characteristics of XML are: the adoption of Unicode (Unicode Consortium (1996)) for character data representation, allowing uniform treatment of all international characters and most scientific symbols; and the support for standard mechanisms for managing complex relationships among different documents through hyperlinking.
Some of the practical advantages of adopting XML over other SGML variants simply come from the wide acceptance of the language in the scientific community as well as in the software industry. There is currently great interest among the astronomical data centers in creating interfaces capable of seamlessly exchanging XML data (Shaya et al. (1999); Murtagh & Guillaume (1998)). It is our hope that as our implementation of an XML-based markup language for bibliographic data evolves, it can be integrated in the emerging Astronomical Markup Language (Murtagh & Guillaume (1998)). As many of the technologies in the field of document management change rapidly, it is important for a project of our scope to adopt the ones which offer the greatest promise of longevity. In this sense, we feel that the level of abstraction and dataset independence that XML imposes on programmers and data specialists justifies the added complexity.
### 2.3 Data Harvesting
Of vital importance to the operation of the ADS is the issue of data exchange with collaborators, in particular the capability to efficiently retrieve data produced by publishers and data providers. The process of collecting and entering new bibliographic records in our databases has benefitted from three main developments: the adoption by all publishers of electronic production systems from the earliest stages of their publication process; the almost exclusive use of SGML and LaTeX as the formats for document production; and the pervasive use of the Internet as the medium for data exchange.
An overview of the procedures used to collect bibliographic data in the daily interactions between ADS staff and data providers is presented in DATA. In this section we discuss how the use of automated procedures has benefitted the activities of data retrieval and entry in the operations of the ADS. Two approaches are presented: the "push" paradigm, in which data is sent from the data provider to the ADS, and the "pull" paradigm, in which data is retrieved from the data provider.
#### 2.3.1 Data Push
The "push" approach has received much attention since the introduction of web-based broadcasting technologies in 1997 (Miles (1998)), to the point that many people consider both push and web broadcasting to have the same meaning. Here we refer to the concept of data "push" in its original meaning, i.e. the activity of electronic data submission to one or more recipients. The primary means used by ADS users and collaborators to send us electronic data are: FTP upload, e-mail, and submission through a web browser (DATA). While these three mechanisms are conceptually similar (data is sent from a user to a computer server using one of several well-established Internet protocols), the one we have found most amenable to receiving "pushed" data is the e-mail approach. This is primarily due to the fact that modern electronic mail transport and delivery agents offer many of the features necessary to implement reliable data delivery, including content encoding, error handling, data retransmission and acknowledgement. Additional features such as strong authentication and encryption can be implemented at a higher level through the use of proper software agents after data delivery has been completed. In the rest of the section we describe the implementation of an email-based data submission service used by the ADS, although the system operation can be easily adapted to work under other delivery mechanisms such as FTP or HTTP.
In an attempt to streamline the management of the increasing amount of bibliographic data sent to us, we have put in place procedures to automatically filter and process messages sent to an e-mail address which has been created as a general-purpose submission mechanism. This activity is implemented by using the procmail filter package. Procmail is a very flexible software tool that has been used in the past to automatically process submission of electronic documents by a number of institutes (Bell (1999); Bell et al. (1996)). Our procmail filter has been configured to analyze the input message, verify its origin, identify which dataset it belongs to, and archive the body of the message in the proper dataset-specific directory. Optionally, the filter can be set up so that one or more procedures are executed after archival. Most of the submissions received this way are simply archived and later loaded into the databases by the ADS administrators during a periodic update (DATA). Using this paradigm, the email filter allows us to efficiently manage submissions from different collaborators by enforcing authentication of the submitterโs email address and by properly filing the message body. This procedure is currently used to archive the IAU Circulars and the Minor Planet Electronic Circulars.
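The actual filter is implemented with procmail; the following Python sketch mirrors the same logic (origin verification, dataset identification, archival of the message body) with made-up addresses and file paths.

```
# Sketch of the submission filter logic; the ADS uses procmail for this.
# Sender addresses, dataset names, and paths below are made up.
import email, email.utils, os, sys, time

DATASETS = {                        # authorized sender -> target dataset
    "iauc@example.edu": "circulars",
    "mpec@example.edu": "mpec",
}

def archive(raw):
    msg = email.message_from_string(raw)
    sender = email.utils.parseaddr(msg.get("From", ""))[1]
    if sender not in DATASETS:                  # verify the message origin
        raise ValueError("unauthorized submitter: %r" % sender)
    dirname = os.path.join("/data/incoming", DATASETS[sender])
    fname = os.path.join(dirname, time.strftime("%Y%m%d%H%M%S"))
    with open(fname, "w") as f:                 # archive the message body
        f.write(msg.get_payload())
    # dataset-specific post-archival procedures could be triggered here

archive(sys.stdin.read())
```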
By defining additional actions to be performed after archival of a submitted e-mail message, automated database updates can be implemented. We currently use this procedure to allow automated submission and updating of our institution's preprint database, which is currently maintained by the ADS project as a local resource for scientists working at the Center for Astrophysics. The person responsible for maintaining the database contents simply sends a properly formatted email message to the ADS manager account and an update operation on the database is automatically triggered; when the updating is completed, the submitter is notified of the success or failure of the procedure. We expect to make increasing use of this capability as the electronic publication time-lines have been steadily decreasing.
#### 2.3.2 Data Pull
"Data pull" is the activity of retrieving data from one or more remote network locations. According to this model, the retrieval is initiated by the receiving side, which simply downloads the data from the remote site and stores it in one or more local files. We have been using this approach for a number of years to retrieve electronic records made available online by many of our collaborators. For instance, the ADS LANL astronomy preprint database (SEARCH) is updated every night by a procedure that retrieves the latest submissions of astronomy preprints from the Los Alamos National Laboratory (LANL) archive, creates a properly formatted copy of them in the ADS database, and then runs an updating procedure that recreates the index files used by the search engine (section 3). This nightly procedure has been running in an unsupervised fashion since the beginning of 1997.
The pull approach is best used to periodically harvest data that may have changed. By using procedures that are capable of saving and comparing the original timestamps generated by web servers we can avoid retrieving a network resource unless it has been updated, making efficient use of the bandwidth and resources available. Section 4.2 discusses the application of these techniques to the management of distributed bibliographic resources.
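The sketch below illustrates this technique with a hypothetical URL: the saved Last-Modified timestamp is sent back to the server as an If-Modified-Since header, and the server replies with a 304 status (and no body) when the resource is unchanged.

```
# Sketch of a timestamp-aware pull; the URL and local paths are placeholders.
import os, urllib.request, urllib.error

URL = "http://example.org/preprints/listing"
STAMP = "/data/preprints/.last-modified"

req = urllib.request.Request(URL)
if os.path.exists(STAMP):
    with open(STAMP) as f:                      # replay saved timestamp
        req.add_header("If-Modified-Since", f.read().strip())
try:
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        stamp = resp.headers.get("Last-Modified", "")
    with open("/data/preprints/listing", "wb") as f:
        f.write(body)
    with open(STAMP, "w") as f:                 # remember the new timestamp
        f.write(stamp)
except urllib.error.HTTPError as err:
    if err.code != 304:          # 304 Not Modified: keep the local copy
        raise
```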
## 3 Indexing of Bibliographic Records
In the classic model of information retrieval (Salton & McGill (1983); Belkin & Croft (1992)), the function of a document indexing engine is: the extraction of relevant items from the collection of text; the translation of such items into words belonging to the so-called Indexing Language (Salton & McGill (1983)); and the arrangement of these words into data structures that support efficient search and retrieval capabilities. Similarly, the function of a search engine is: the translation of queries into words from the Indexing Language; the comparison of such words with the representations of the documents in the Indexing Language; and the evaluation and presentation of the results to the user.
The heterogeneous nature of the bibliographic data entered into our database (DATA), and the need to effectively deal with the imprecision in them, led us to design a system in which a large set of discipline-specific interpretations is made. For instance, to cope with the different use of abstract keywords by the publishers, and to correct possible spelling errors in the text, sets of words have been grouped together as synonyms for the purpose of searching the databases. Also, many astronomical object names cited in the literature are translated in a uniform fashion when indexing and searching the database to improve recall and accuracy.
In order to achieve a high level of software portability and database independence, the decision was made to write general-purpose indexing and searching engines and incorporate discipline-specific knowledge in a set of configuration and ancillary files external to the software itself. For instance, the determination of what parsing algorithm or program should be used to extract tokens indexed in a particular bibliographic field was left as a configurable option to the indexing procedure. This allowed us, among other things, to reuse the same code for parsing text both at search and index time, guaranteeing consistency of results.
The remainder of this section describes the design and implementation of the document indexing system used by the ADS: section 3.1 provides an overview of indexing procedures; section 3.2 details the organization of the knowledge base used during indexing; section 3.3 discusses the implementation of the indexing engine. Details on the search engine and user interface can be found in SEARCH.
### 3.1 Overview of the Indexing Engine
The model we followed for providing search capabilities to the ADS bibliographic databases makes use of data structures commonly referred to as inverted files or inverted indices (Knuth (1973); Frakes & Baeza-Yates (1992)). To allow the implementation of fielded queries, an inverted file structure is created for each searchable field, as described in section 3.3. (In the following we will refer to "bibliographic fields" as the elements composing a bibliographic record described in the previous section, e.g. authors, affiliations, abstract, and to "search fields" as all the possible searchable entities implemented in the query interface and described in detail in SEARCH, e.g. author, exact author, and text). In general the mapping between search fields and index files is one-to-one, while the mapping between inverted files and bibliographic fields is one-to-many. For instance, in our current implementation, the "author" index consists of the tokens extracted from the authors field, while the "text" index is created by joining the contents of the following fields: abstract, title, keywords, comments, and objects. The complete mapping between bibliographic fields and search fields is described in section 3.3.
During the creation of the inverted files, the indexing engine makes use of several techniques commonly used in Natural Language Processing (Efthimiadis (1996)) to improve retrieval accuracy and to implement sophisticated search options. These transformations provide the mapping between the input data and the words belonging to the Index Language. Some of them are described below.
Normalization: This procedure converts different morphological variants of a term into a single format. The aim of normalization is to reduce redundancy in the input data and to standardize the format of some particular expressions. This step is particularly important when treating data from heterogeneous sources which may contain textual representations of mathematical expressions, chemical formulae, astronomical object names, compound words, etc. A description of how this is implemented via morphological translation rules is provided in section 3.2.1.
Tokenization: This procedure takes an input character string and returns an array of elements considered words belonging to the Index Language. While the tokenization of well-structured fields such as author or object names is straightforward, parsing and tokenizing portions of free-text data is not a trivial matter. For instance, the decision on how to split into individual tokens expressions such as "non-N.A.S.A." or designations for an astronomical object such as "PSR 1913+16" is often both discipline and context-specific. To ensure consistency of the search interface and index files, the same software used to scan text words at search time is used to parse the bibliographic records at indexing time. A detailed description of the text tokenizer is presented in SEARCH.
Case folding: Converting the case of words during indexing is a standard procedure in the creation of indices and allows the reduction in size of most index files by removing redundancy in the input data. For example, converting all text to uppercase both at indexing and search time allows us to map the strings "SuperNova," "Supernova," and "supernova" to the canonical uppercase form "SUPERNOVA." In our implementation the feature of folding case has been set as an option which can be selected on a field-by-field basis, since case is significant in some rare but important circumstances (e.g. the list of planetary objects). Details on the treatment of case in fielded queries are discussed in SEARCH.
Stop word removal: The process of eliminating high-frequency function words commonly used in the literature also contributes to reduce the amount of non-discriminating information that is parsed and indexed (Salton & McGill (1983)). The use of case-sensitive stop words (described in section 3.2.3) allows us to keep those words in which case alone can discriminate the semantics of the expression.
Synonym expansion: By grouping words in synonym classes we can implement a so-called "query expansion" by returning not only the documents containing one particular search item, but also the ones containing any of its synonyms. Using a well-defined set of synonyms rather than relying on grouping words by stemming algorithms to perform query expansion provides much greater control in the implementation of query expansion and can yield a much greater level of accuracy in the results. This powerful feature of the ADS indexing and search engines is described more fully in section 3.2.2.
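The toy pipeline below strings these transformations together in order; the translation rule, stop words, and synonym classes shown are tiny made-up samples standing in for the much larger tables described in section 3.2.

```
# Toy version of the mapping into the Index Language; the rule, stop word,
# and synonym tables are tiny illustrative samples, not the ADS tables.
import re

RULES = [(re.compile(r"\bx[- ]?ray\b", re.I), "XRAY")]   # normalization
STOP = {"THE", "OF", "AND"}                              # stop words
SYNONYMS = {"QUASISTELLAR": "QSO", "QUASARS": "QSO"}     # canonical forms

def to_index_language(text):
    for pat, repl in RULES:                       # morphological translation
        text = pat.sub(repl, text)
    tokens = re.findall(r"[\w-]+", text.upper())  # tokenize, fold case
    tokens = [t for t in tokens if t not in STOP] # remove stop words
    return [SYNONYMS.get(t, t) for t in tokens]   # synonym expansion

print(to_index_language("X-ray emission of quasars"))
# -> ['XRAY', 'EMISSION', 'QSO']
```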
### 3.2 Discipline-specific Knowledge Base
The operation of the indexing engine is driven by a set of ancillary files representing a knowledge base (Hayes-Roth, Waterman & Lenat (1983)) which is specific to the domain of the data being indexed. This means that in general different ancillary files are used when indexing data in the different databases, although in practice much of the metadata used is shared among them.
Since the input bibliographies consist of a collection of fielded entries and each field contains terms with distinct and well-defined syntax and semantics, the processing applied to each field has to be tailored to its contents. The following subsections describe the different components of the knowledge base in use.
#### 3.2.1 Morphological Translation Rules
Morphological translation rules are syntactic operations designed to convert different representations of the same basic literal expression into a common format (Salton & McGill (1983)). This is most commonly done with astronomical object names (e.g. "M 31" vs. "M31"), as well as some composite words (e.g. "X RAY", "X-RAY" and "XRAY"). The translations are specified as pairs of antecedent and consequent patterns, and are applied in a case-insensitive way both at indexing and searching time. The antecedent of the translation is usually a POSIX (IEEE (1995)) regular expression, which should be matched against the input data being indexed or searched. The consequent is an expression that replaces the antecedent if a match occurs, and which may contain back-references to substrings matched by the antecedent.
The table of translation rules used by the indexing and search engine uses two sets of replacement expressions for maximum flexibility in the specification of the translations, one to be used during indexing and the other one for searching. This allows, for instance, the contraction of two words into a single expression while still allowing indexing of the two separate words. For example, the expression "Be stars" is translated into "Bestars" when searching and "Bestars stars" when indexing, so that a search for "stars" would still find the record containing this expression. Note that if we had not used the translation rule described above to create the compound word "bestars," the word "Be" would have been removed since it is a stop word, and the search would have just returned all documents containing "stars." The complete list of translation rules currently in use is displayed in table 1.
To avoid the performance penalties associated with matching large amounts of literal data against the translation rules, the regular expressions are "compiled" into resident RAM when the ADS services are started, making the application of regular expressions to the input stream very efficient (SEARCH).
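The following sketch shows how a rule with separate index-time and search-time consequents could be compiled once and applied in either mode, using the "Be stars" example above; the rule table format is illustrative.

```
# Sketch of a translation rule with separate index- and search-time
# consequents, using the "Be stars" example from the text.
import re

RULES = [
    # antecedent (compiled once)      search-time   index-time
    (re.compile(r"\bBe\s+stars\b", re.I), "Bestars", "Bestars stars"),
]

def translate(text, mode="search"):
    for pat, s_repl, i_repl in RULES:
        text = pat.sub(s_repl if mode == "search" else i_repl, text)
    return text

print(translate("properties of Be stars", "index"))   # ...Bestars stars
print(translate("properties of Be stars", "search"))  # ...Bestars
```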
Despite the extensive use of synonyms in our databases, there are cases in which the words in an input query cannot be found in the field-specific inverted files. In order to provide additional search functionality, two options have been implemented in the ADS databases, one aimed at improving matching of English text and a second one aimed at matching of author names.
During the creation of the text and title indices, all words found in the database are truncated to their stem according to the Porter stemmer algorithm (Harman (1991)). Those stems that do not already appear in the text and title index are added to the index files and point to the list of terms that generated the stem. Upon searching the database and not finding a match, the search engine proceeds to apply the same stemming rules to the input term(s) and then repeat the search. Thus word stemming is used as a "last-resort" measure in an attempt to match the input query to a group of words that may be related to it. For searches that require an exact match, no stemming of the input query takes place. The limited use of stemming techniques during indexing and searching text in the ADS system derives from the observation that these algorithms only allow minor improvements in the selection and ranking of search results (Harman (1991); Xu & Croft (1993)).
To aid in searches on author names, the option to match words which are phonetically similar was added in 1996 and is currently available through one of the ADS user interfaces. In this case, a secondary inverted file consisting of the different phonetic representations of author last names allows a user to generate lists of last names that can be used to query the database. Two phonetic retrieval algorithms have been implemented, based on the "soundex" (Gadd (1988)) and "phonix" (Gadd (1990)) algorithms.
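For reference, a minimal implementation of the classic soundex code is sketched below; the variants actually used in the ADS (and the phonix algorithm) may differ in their details.

```
# Minimal implementation of the classic soundex code; the ADS variant
# (and phonix) may differ in details.
CODES = {c: d for d, letters in
         enumerate(["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1)
         for c in letters}

def soundex(name):
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    code = name[0]                      # first letter is kept verbatim
    prev = CODES.get(name[0], 0)
    for c in name[1:]:
        if c in "HW":                   # H and W do not separate codes
            continue
        d = CODES.get(c, 0)
        if d and d != prev:             # skip repeats of the same code
            code += str(d)
        prev = d                        # vowels reset the previous code
    return (code + "000")[:4]

print(soundex("Eichhorn"), soundex("Accomazzi"))   # E265 A252
```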
#### 3.2.2 Synonym Expansion
A variety of techniques have been used in information retrieval to increase recall by retrieving documents containing not only the words specified in the query but also their synonyms (Efthimiadis (1996)). By grouping individual words appearing in a bibliographic database into sets of synonyms, it becomes possible to use this information either at indexing or searching time to perform a so-called "synonym expansion."
Typically, this procedure has been used as an alternative to text stemming techniques to automatically search for different forms of a word (singular vs. plural, name vs. adjective, differences in spelling and typographical errors). However, since the specification of the synonyms is database- and field-specific, our paradigm has allowed us to easily extend the use of synonyms to other search fields such as authors and planetary objects (SEARCH). Additionally, during the creation of the text synonym groups we were able to incorporate discipline-specific knowledge which would otherwise be missed. In this sense, the use of synonym expansion in ADS adds a layer of semantic information that can be used to improve search results. For instance, the following words are listed as synonyms within the ADS:
```
circumquasar
miniquasar
nonquasar
protoquasars
qso
qsos
qsr
qsrs
qsrss
qss
quarsars
quasar
quasare
quasaren
quasargalaxie
quasargalaxien
quasarhaufung
quasarlike
quasarpaar
quasars
quasers
quasistellar
```
During indexing and searching, by default any words which are part of the same synonym group are considered to be "equivalent" for the purpose of finding matching documents. Therefore a title search for "quasar" will also return papers which contained the word "quasistellar" in their title. Of course, our user interface allows the user to disable synonym expansion on a field-by-field as well as on a word-by-word basis.
It is the extensive work that has gone into compiling such a list that makes searches in the ADS so powerful. To give an idea of the magnitude of the task, it should suffice to say that currently the synonyms database consists of over 55,000 words grouped into 9,266 sets. Over the years, the clustering of terms in synonym groups has incorporated data from different sources, including the Multi-Lingual supplement to the Astronomy Thesaurus (Shobbrook (1995)).
Despite the fact that the implementation of query expansion through the use of synonyms illustrated above has shown to be an effective tool in searching and ranking of results, we are currently in the process of reviewing the contents and format of the synonym database to improve its functionality. First of all, as we have added more and more bibliographic references from historical and foreign sources, the number of non-English words in our database has been slowly but steadily increasing. As a result, we intend to merge the proper foreign language words with each group of English synonyms in a systematic fashion (Oard & Diekema (1997); Grefenstette (1998)).
Secondly, we intend to review and correct the current foreign words in our synonym classes to include, where appropriate, their proper representation according to the Unicode standard (Unicode Consortium (1996)), which provides the foundation for internationalization and localization of textual data. By identifying entries in our synonym file that were created by transliterating words that require an expanded character set into ASCII, we can simply add the Unicode representation of the word to the synonym group, therefore ensuring that both forms will be properly indexed and found when either form is used in a search.
Finally, we are implementing a more flexible group structure for the synonyms which allows us to specify hierarchical groupings and relationships among groups rather than simple equivalence among words. This last feature allows us to effectively implement the use of a limited thesaurus for search purposes (Miller (1997)). Instead of simply grouping words together in a flat structure as detailed above, we first create separate groups of words, each representing a distinct and well-defined concept. Words representing the concept are then assigned to one such group and are considered "equivalent" instantiations of the concept. A word can only belong to one group but groups can contain subgroups, representing instances of "sub-concepts." The following XML fragment shows how grouping of synonyms is being implemented under this new paradigm:
```
<syngroup id="00751">
<subgroup rel="instanceof">00752</subgroup>
<subgroup rel="instanceof">00753</subgroup>
<subgroup rel="instanceof">00754</subgroup>
<subgroup rel="instanceof">00755</subgroup>
<subgroup rel="oppositeof">00756</subgroup>
<syn>qso</syn>
<syn>qsos</syn>
...
<syn>quasistellar</syn>
<syn lang="de">quasare</syn>
<syn lang="de">quasaren</syn>
<syn lang="de">quasargalaxie</syn>
<syn lang="de">quasargalaxien</syn>
</syngroup>
<syngroup id="00752">
<syn>circumquasar</syn>
<syn>circumquasars</syn>
</syngroup>
<syngroup id="00753">
<syn>miniquasar</syn>
<syn>miniquasars</syn>
<syn>microquasar</syn>
<syn>microquasars</syn>
</syngroup>
<syngroup id="00754">
<syn>protoquasar</syn>
<syn>protoquasars</syn>
</syngroup>
<syngroup id="00755">
<syn>quasar cluster</syn>
<syn>quasar clusters</syn>
<syn lang="de">quasarhäufung</syn>
<syn lang="de">quasarhäufungen</syn>
</syngroup>
<syngroup id="00756">
<syn>nonquasar</syn>
<syn>nonquasars</syn>
</syngroup>
<syngroup id="01033">
...
<subgroup rel="instanceof">00755</subgroup>
...
<syn>cluster</syn>
<syn lang="de">häufung</syn>
...
</syngroup>
```
The new approach allows a much more sophisticated implementation of query expansion through the use of synonyms. Some of its advantages are:
1) Hierarchical subgrouping of synonyms: every group may contain one or more subgroups representing "sub-concepts" related to the group in question. Currently the two relations we make use of are the ones representing instantiation and opposition. This capability allows us to break down a particular concept at any level of detail, grouping synonyms at each level and then "including" subgroups as appropriate.
2) Multiple group membership: each subgroup may be an instance of one or more synonym groups. For instance, the synonyms "quasarhäufung" and "quasar cluster" are in a subgroup that belongs to both the "qso" and the "cluster" groups.
3) Use of multi-word sequences in synonym groups: in certain cases, individual words referring to a concept correspond to a sequence of several words in other languages or contexts. Allowing declarations of multi-word synonyms enables us to correctly identify most terms.
4) Multilingual grouping: words belonging to a language other than English are tagged with the standard international identifier for that language. This permits us to use the synonyms in a context-sensitive way, so that if the same word were to exist in two languages with different meanings, the proper synonym group would be used when reading documents in each language.
The synonym database described above is used at indexing time to create common lists of document identifiers for words belonging to the same synonym group or any of its subgroups. The effect of this procedure is that when use of synonyms is enabled, searches specifying a word that belongs to a synonym group will result in the list of records containing that word as well as any other word in the synonym group or its subgroups. In the example given above, a search for "qso" would have listed all documents containing "qso," its other synonyms, as well as subgroup members such as "miniquasar" and "protoquasar." On the other hand, a search for "miniquasar" would have only returned the list of documents containing either "miniquasar" or "microquasar," narrowing significantly the search results.
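The resolution step can be sketched as a simple recursion over "instanceof" subgroups; the data structures below are toy versions of the syngroup records and document lists shown above.

```
# Sketch of resolving a synonym group and its "instanceof" subgroups into
# one document list, mirroring the qso example above (toy data structures).
GROUPS = {
    "00751": {"syns": ["qso", "quasar"], "subgroups": ["00752", "00753"]},
    "00752": {"syns": ["circumquasar"], "subgroups": []},
    "00753": {"syns": ["miniquasar", "microquasar"], "subgroups": []},
}
POSTINGS = {"qso": {1, 2}, "quasar": {2, 3}, "circumquasar": {4},
            "miniquasar": {5}, "microquasar": {5, 6}}

def expand(group_id, docs=None):
    docs = set() if docs is None else docs
    g = GROUPS[group_id]
    for w in g["syns"]:
        docs |= POSTINGS.get(w, set())
    for sub in g["subgroups"]:        # recurse until nesting is exhausted
        expand(sub, docs)
    return docs

print(sorted(expand("00751")))   # [1, 2, 3, 4, 5, 6]
```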
#### 3.2.3 Stop Words
A number of words considered "irrelevant" with respect to the searches of the particular field and database at hand are ignored during indexing and searching. These words (commonly referred to as "stop words") consist primarily of terms used in the English language with great frequency, as well as adverbs, prepositions and any other words not carrying a significant meaning when used in the context under consideration (Salton & McGill (1983)). Such words are removed both at indexing and searching time, decreasing the number of irrelevant searches and disregarding search terms that would not yield significant results.
The use of both case-sensitive and case-insensitive stop words during indexing allows us to single out those instances of terms that may have different meanings depending on their case. For instance, the words "he" and "He" usually represent different concepts in the scientific literature (the second one being the symbol for the element Helium). By selectively eliminating all instances of "he" when indexing the bibliographies, we stand a good chance that the remaining instances of the word refer to the element Helium.
The effort currently underway to create a structured synonym database will be used to group and maintain the list of stop words in use. By simply clustering stop words in synonym groups and properly tagging the group as containing stop words, we can use the same software that is currently being developed to create and maintain the list of synonyms in our database. An example of the resulting records is shown below:
```
<syngroup id="00037" type="stop">
<!-- he is used in case-sensitive way to avoid
removing "He" (element helium) from index -->
<syn case="mixed">he</syn>
<syn>she</syn>
<syn lang="de">er</syn>
<syn lang="de">sie</syn>
<syn lang="fr">il</syn>
<syn lang="fr">elle</syn>
<syn lang="es">él</syn>
<!-- as above, but without proper accenting -->
<syn lang="es">el</syn>
<syn lang="es">ella</syn>
<syn lang="it">lui</syn>
<syn lang="it">lei</syn>
</syngroup>
```
This paradigm allows us to treat stop words as a special case of synonyms (which are identified by the indexing and search engines as being of type "stop").
### 3.3 The Indexing Engine
General-purpose indexing engines and relational databases were used as part of the abstract service in its first implementation (Kurtz et al. (1993)), but they were eventually dropped in favor of a custom system as the desire for better performance and additional features grew with time (Accomazzi et al. (1995)), as is often necessary in the creation of discipline-specific information retrieval systems (van Rijsbergen (1979)). The approach used to implement the data indexing portion of the database can be considered "data-driven" in the sense that parsing, matching and processing of input text data is controlled by a single configuration file (described below) and by the discipline-specific files described in section 3.2.
The inverted files used by the search engine are the products of a pipeline of data processing steps that has evolved with time. To allow maximum flexibility in the definition of the different processing steps, we have found it useful to break down the indexing procedure into a sequence of smaller and simpler tasks that are general enough to be used for the creation of all the files required by the search engine. A key design element which has helped generalize the indexing process is the use of a configuration file which describes all the field-specific processing necessary to create the index files. The configuration file currently in use is displayed in table 2. For each search field listed in the table, an inverted file structure is created by the indexing engine.
The first step performed by the indexing software is the creation of a list containing the document identifiers to be indexed. This usually consists of the entire set of documents included in a particular database but may be specified as a subset of it if necessary (for instance when creating an update to the index, see section 3.3.3). The list of document identifiers is then given as input to an "indexer" program, which proceeds to create, for each search field, an inverted file containing the tokens extracted from the input documents and the document identifiers (bibcodes) where such words occur. (In the following discussion we will refer to the tokens extracted by the indexer simply as "words," although they may not be actual words in the common sense of the term. For instance, during the creation of the author index, the "words" being indexed are author names.) After all the inverted files have been created, each one of them is processed by a second procedure which generates two separate files used by the search engine: an "index" file, containing the list of words along with pointers to a list of document identifiers, and a "list" file, containing compact representations of the lists of document identifiers corresponding to each word.
The following subsections describe the procedures used during the different indexing steps: section 3.3.1 details the creation of the inverted files; section 3.3.2 describes the creation of the index and list files; section 3.3.3 describes the procedures used to update the index and list files; section 3.3.4 discusses some of the advantages and shortfalls of the implemented indexing scheme.
#### 3.3.1 Creation of Inverted Files
An inverted file (van Rijsbergen (1979); Frakes & Baeza-Yates (1992)) is a table consisting of two columns: the first column contains the instances of words belonging to the indexing language, and the second column contains the list of document identifiers in which those words were found. The transformation of a document into its indexing language is performed in the following steps:
1) parsing of the document contents and extraction of all the bibliographic elements needed for the creation of one or more search fields;

2) joining of bibliographic elements that should be indexed together to produce a list of strings;

3) application of translation rules (if any) to the list of strings;

4) itemization of the list of strings into an array of words to be indexed;

5) removal of stop words from the list of words to be indexed (either case sensitively or insensitively);

6) folding of case for each of the words (if requested);

7) creation or addition of an entry for each word in a hash table correlating the word indexed with the document identifiers where it appears.
The indexer keeps a separate inverted file for each set of indexing fields to be created (see table 2, column 1). Each inverted file is simply implemented as a sorted ASCII table, with tab-separated columns. Given the current size of our databases, the creation of these tables takes place incrementally. A pre-set number of documents is read and processed by the indexer, an occurrence hash table for these documents is computed in memory, and an ASCII dump of the hash is then written to a disk file as a set of keys (the words being indexed) followed by a list of document identifiers containing such words. The global inverted file is then created by simply joining the partial inverted files using a variation of the standard UNIX join command.
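In memory, the net effect of steps 1)–7) for one search field can be sketched as follows (a toy version that ignores the incremental dumps and the merging of partial files):

```
# Toy construction of an inverted file for one search field: a sorted map
# from each indexed word to the identifiers of the documents containing it.
from collections import defaultdict

def build_inverted_file(docs):
    """docs maps a document identifier to its list of indexed words."""
    inv = defaultdict(set)
    for doc_id, words in docs.items():
        for w in words:
            inv[w].add(doc_id)
    return {w: sorted(ids) for w, ids in sorted(inv.items())}

print(build_inverted_file({"1999ApJ...111..111A": ["QSO", "XRAY"],
                           "1999A&A...222..222B": ["QSO"]}))
```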
Once the occurrence tables for the primary search fields listed in table 2 have been created, a set of derived fields is computed if necessary. Currently this step is used to create the "authors" occurrence table from the "exact authors" one by parsing and formatting entries in it so that all names are reduced to the forms "Lastname, F" (where "F" stands for the first name initial) and "Lastname." This allows efficient searching for the standard author citation format.
#### 3.3.2 Creation of the Index and List Files
After all the primary and derived inverted files have been generated, a separate program is used to produce, for each table, two separate files which are used by the search engine: an inverted index file (here simply called "index" file) and a document list file ("list" file, see Salton (1989)). The index file is an ASCII table which contains the complete list of words appearing in the inverted file and, for each word, two sets of numerical values, the first set used for exact word searches, the second one for synonym searches. The list file is a binary file containing blocks of document identifiers in which a particular word was found. Each set of numerical values specified in the index file consists of: the relative "weight" of the word (or group of synonyms) in the database, as defined below; the length of the group of document identifiers in the list file, in bytes; the position of the group of document identifiers in the list file, defined as the byte offset from the beginning of the list file.
The value chosen to express the weight $`W(w)`$ of a word $`w`$ is a variation of the inverse document frequency (Salton & Buckley (1988)):
$$W(w)=K\times \mathrm{log}_{10}\left(N/df(w)\right)$$
where $`K`$ is a constant, $`N`$ is the total number of documents in the database, and $`df(w)`$ is the document frequency of the word $`w`$, i.e. the number of documents in which the word appears (Salton & Buckley (1988)). The choice of a suitable value for the constant $`K`$ (currently set to $`K=10^4`$) allows the indexing and search engine to perform most of the operations in integer arithmetic. To avoid performing slow log computations during the creation of the index files, the function that maps $`df(w)`$ to $`W(w)`$ is cached in an associative array so that when repeated integer values of $`df(w)`$ are encountered, the pre-computed values are used.
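A sketch of the cached, integer-valued weight computation follows; the database size $`N`$ used here is an assumed figure for illustration.

```
# Sketch of the weight computation with the integer-valued cache
# described above (K = 10**4); N is an assumed database size.
import math

K, N = 10**4, 500000
_cache = {}

def weight(df):
    if df not in _cache:      # df values repeat, so cache the mapping
        _cache[df] = int(K * math.log10(N / df))
    return _cache[df]

print(weight(25))             # 43010 for N = 5e5
```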
The document identifiers which are stored in the list files are 32-bit integers (from here on called sequential identifiers) corresponding to line numbers in the list of bibliographic codes which have been indexed. The search engine resolves all queries on index files by performing binary searches on the words appearing in the index file, then reading the corresponding list of sequential identifiers in the list files, combining results, and finally resolving the sequential identifiers into bibcodes (see figure 2).
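Schematically, the lookup works as below; the on-disk layout is simplified, but the roles of the length and offset values are as just described.

```
# Schematic search-time lookup: binary-search the sorted index for a word,
# then read its block of 32-bit sequential identifiers from the list file.
# The actual on-disk layout is simplified here.
import bisect, struct

def lookup(word, index, listfile):
    # index: sorted list of (word, weight, length, offset) tuples
    words = [entry[0] for entry in index]
    i = bisect.bisect_left(words, word)
    if i == len(words) or words[i] != word:
        return ()
    _, weight, length, offset = index[i]
    with open(listfile, "rb") as f:
        f.seek(offset)                           # byte offset into list file
        data = f.read(length)                    # block length in bytes
    return struct.unpack("<%di" % (len(data)//4), data)
    # the returned sequential identifiers are resolved into bibcodes later
```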
The procedure for the creation of the index and list files reads the inverted file associated with each search field and performs the following steps:
1) read all entries from the list of document identifiers (bibcode list) and create a hash table associating each bibcode with its corresponding sequential identifier;
2) if synonym grouping is to be used for this field, read the synonym file for this field and create a hash table associating each entry in the synonym group with the word with the highest frequency in the group;
3) for each word in the inverted file translate the list of bibcodes associated with it into the corresponding list of integer line numbers, and mark word as being processed;
4) if word belongs to a group of synonyms, sequentially find and process all other words in the same group, marking them as processed, then iteratively process all words in any of the subgroups until nesting of subgroups is exhausted; if no synonyms are in use, the same procedure is used with the provision that the group of synonyms is considered to be composed only of the word itself;
5) join, sort and unique the lists of sequential identifiers for all the words in the current group of synonyms;
6) write to the list file the sorted list of sequential identifiers for each word in the group of synonyms, followed by the cumulative list of sequential identifiers for the entire group of synonyms;
7) for each word in the group of synonyms, write to the index file an entry containing the word itself and the two sets of numerical values (weight, length, and offset) for exact word and synonym searches.
Figure 3 illustrates the creation of entries for two words in the "text" index and list file from the text inverted file.
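A compressed PERL rendering of steps 5-7 for a single synonym group may make the bookkeeping clearer; variable names, the 4-byte identifier layout and the weight() helper from the earlier sketch are illustrative, not the actual indexing code:

```
# %ids_of maps each word to its list of sequential identifiers;
# @group holds the words of the current synonym group.
my $offset = tell($listfh);
my (%exact, @group_ids);
foreach my $word (@group) {
    my @ids = sort { $a <=> $b } @{ $ids_of{$word} };
    print $listfh pack("L*", @ids);                   # step 6, per word
    $exact{$word} = [ weight(scalar @ids), 4 * @ids, $offset ];
    $offset += 4 * @ids;
    push @group_ids, @ids;
}
my %seen;                                             # step 5: join,
my @syn_ids = grep { !$seen{$_}++ }                   # sort and unique
              sort { $a <=> $b } @group_ids;
print $listfh pack("L*", @syn_ids);                   # cumulative block
my @syn = ( weight(scalar @syn_ids), 4 * @syn_ids, $offset );
foreach my $word (@group) {                           # step 7
    print $indexfh join("\t", $word, @{ $exact{$word} }, @syn), "\n";
}
```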
#### 3.3.3 Index Updates
The separation of the indexing activity into two separate parts offers different options when it comes to updating an index. New documents which are added to the database can be processed by the indexer and merged into the inverted file quickly, and a new set of index and list files can then be generated from it. Similarly, since the synonym grouping is performed after the creation of the inverted files, a change in the synonym database can be propagated to the files used by the search engine by recreating the index and list files, avoiding a complete re-indexing of the database.
Despite the steps that have been taken in optimizing the code used in the creation of the index and list files from the occurrence tables, this procedure still takes close to two hours to complete when run on the complete set of bibliographies in the astronomy database using the hardware and software at our disposal. In order to allow rapid and incremental updating of the index and list files, a separate scheme has been devised requiring only in-place modification of these files rather than their complete re-computation.
During a so-called "quick update" of an operational set of index files used by the search engine, a new indexing procedure is run on the documents that have been added to the database since the last full indexing took place. The indexing procedure produces new sets of incremental index and list files as described above, with the obvious difference that these files only contain words that appear in the new bibliographic records added to the database. A separate procedure is then used to merge the new set of index and list files into the global index and list files used by the operational search engine, making the new records immediately available to the user. The procedure is implemented in the following steps (see figure 4):
1) Compute new sequential identifiers for the list of bibcodes in the incremental index by adding to each of them the number of entries in the operational bibcode list. This guarantees that the mapping between bibcodes and sequential identifiers is still unique after the new bibcodes have been merged into the operational index.
2) Append the list of sequential identifiers found in the incremental list file to the operational list file. In the case of identifiers corresponding to a new entry in the index file, their block of values is simply appended to the end of the operational list file. In the case of identifiers corresponding to an entry already present in the operational index file, the original list of identifiers ("main block") needs to be merged with the new list of identifiers. In order to avoid clobbering existing data in the operational list file, the list of identifiers from the incremental index is appended to the end of the global list file, creating an extension of the main block of identifiers that we call an "extension block." To link main and extension blocks, the last sequential identifier in a main block is overwritten with a negative value whose absolute value is the extension block's offset from the beginning of the list file. An extension block contains as its first integer value the size of the extension block in bytes, followed by the identifier that was overwritten in the main block, followed by the sequential identifiers from the incremental index (see figure 4). When the search engine finds a negative number as the last document identifier value, it seeks to the specified offset, reads a single integer entry corresponding to the number of bytes composing the extension block, and then proceeds to read the specified number of identifiers (a schematic reader for this layout is sketched after this list). Note that because of the way extension blocks are created, the list of sequential identifiers obtained by concatenating the entries in the extension block to the entries in the main block is always sorted.
3) For each entry in each incremental index file, determine if a corresponding entry exists in the operational index file. If an entry is found, no modification of the index file is necessary, otherwise the index file is updated by inserting the entry in it. The values of the weights and offsets are corrected by taking into account the total number of documents in the operational index and the size of the list file.
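The following PERL fragment sketches how a reader traverses a main block and its extension blocks; the signed 32-bit layout and the chaining behaviour are modelled on the description above, not on the actual engine code:

```
# Return all sequential identifiers of a block, following any
# extension blocks linked through a trailing negative offset.
sub read_ids {
    my ($listfh, $offset, $length) = @_;
    my @ids;
    while (1) {
        seek($listfh, $offset, 0) or die "seek failed: $!";
        read($listfh, my $buf, $length) == $length or die "short read";
        push @ids, unpack("l*", $buf);      # signed 32-bit integers
        last if !@ids || $ids[-1] >= 0;     # no (further) extension
        $offset = -pop @ids;                # negated extension offset
        seek($listfh, $offset, 0) or die "seek failed: $!";
        read($listfh, my $szbuf, 4) == 4 or die "short read";
        $length = unpack("l", $szbuf);      # bytes in the extension
        $offset += 4;                       # identifiers follow the size
    }
    return @ids;
}
```

Since the overwritten identifier is stored as the first identifier of the extension block, the concatenated list comes back complete and, by construction, sorted.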
#### 3.3.4 Remarks on the Adopted Indexing Scheme
One of the advantages of using separated index and list files is that the size of the files that are accessed most frequently by the search engine (the list of bibcodes and the index files) is kept small so that their contents can be loaded in random access memory and searched efficiently (SEARCH). For instance, the size of the text index file for the astronomy database is approximately 16 MB, and once the numerical entries are converted into binary representation when loaded in memory by the search engine, the actual amount of memory used is less than 10 MB.
The use of integer sequential identifiers in the list files allows more compact storage of the document identifiers as well as implementation of fast algorithms for merging search results (since all the operations are executed in 32-bit integer arithmetic rather than having to operate on 19-character strings). For instance, recent indexing of the ADS astronomy database produces text inverted files which have sizes approaching 500 MB, while the size of the text list file is about 140 MB.
The choice of a word weight which is a function of only the document frequency allows us to store word weights as part of the index files. It has been shown that a better measure for the relevance of a document with respect to a query word is obtained by taking into account both the document frequency $`df`$ and the term frequency $`tf`$, defined as the frequency of the word in each document in which it appears (Salton & Buckley (1988)), normalized to the total number of words in the document. The reasoning behind this is that a word occurring with high relative frequency in a document and not as frequently in the rest of the database is a good discriminant element for that document. Although we had originally envisioned incorporating document-specific weights in the list files to take into account the relative term frequency of each word, we found that little improvement was gained in document ranking. This is probably due to the fact that the collection of documents in our databases is rather homogeneous as far as document length and characteristics are concerned. Eventually the choice was made to adopt the simpler weighting scheme described above.
The procedures used to create the inverted files scale well with the size of the database, since the global inverted file is always created by joining together partial inverted files. This allows us to limit the number of hash entries used by the indexer program during the computation of the inverted files. According to Heaps' law (Heaps (1978)), and as verified experimentally in our databases, a body of $`n`$ words typically generates a vocabulary of size $`V=Kn^\beta `$ where $`K`$ is a constant and $`\beta `$ lies between 0.4 and 0.6 for English text (Navarro (1998)); taking, for illustration, $`K=40`$ and $`\beta =0.5`$, a body of $`10^9`$ words would generate a vocabulary of roughly $`1.3\times 10^6`$ distinct words. Since the size of the vocabulary $`V`$ corresponds to the number of entries in a global hash table used by the indexing software, an ever-increasing amount of hardware resources would otherwise be necessary to hold the vocabulary in memory; our choice of a partial indexing scheme avoids this problem. Furthermore, the incremental indexing model is quite suitable for use in a distributed computing environment where different processors can be used in parallel to generate the partial inverted files, as has been recently shown by Kitajima et al. (1997).
The procedures used to create the list and index files make use of memory sparingly, so that processing of entries from the occurrence tables is essentially sequential. The only exception to this is the handling of groups of synonyms. In that case, the data structures used to maintain the entries for the words in the current synonym group are kept in memory while the cumulative list of sequential identifiers for the entire group is built. The memory is released as soon as the entries for the current synonym group are written to the list and index files.
## 4 Management of Bibliographic Properties
By combining bibliographic data and metadata available from several sources in a single database and by maintaining a list of what properties and resources are available for each bibliography, the ADS system allows users to formulate complex queries such as: "show me all the papers that cite any paper ever written about the object M87 and the subject 'globular clusters' and which are available online as full-text documents." This query is possible thanks to the collection and fusion of data from several sources:
1) The astronomical object databases, which maintain a collection of object names and bibliographies in which they appear. This search is performed through a peer-to-peer network connection with the SIMBAD (Egret & Wenger (1988)) and NED (Helou & Madore (1988)) database servers, as described in OVERVIEW and SEARCH. This first step allows us to find the set of bibliographies on M87.
2) The ADS abstract service indices, which allow a search of all astronomical papers containing the words "globular cluster" or their synonyms. This part of the search is performed by the ADS search engine and makes use of the local files generated by indexing the bibliographic databases as described in section 3. This step allows us to discard any bibliographic entry which does not contain the words "globular cluster" in its text index.
3) The list of citations in the ADS databases, which maintains, for each astronomical paper, an updated list of the papers referenced in it. This allows us to look up the list of papers that have cited the selected bibliographic entries, and then proceed to join the results.
4) The list of papers available electronically from either the astronomical journal publishers or the ADS article service, both of which provide access to full-text articles online.
The query given above illustrates how knowing whether a particular bibliographic entry possesses a particular property (e.g. whether it has been cited) and what values may be associated with that property (e.g. the list of citing papers) can be used as a method for selection and ranking of query results. Additionally, the availability of remote resources for a particular bibliographic entry can be described as being one of its properties, which in turn allows an additional filtering of the result lists.
As new data regarding a bibliographic entry become available, its record is updated in the ADS database by merging the new information with the existing entry and possibly by updating its relevance within the database and its relation with respect to other internal and external resources. For instance, when a new paper is published which references an existing bibliography, the record for the cited paper needs to be updated by establishing a link between the two papers; at the same time, the "citation relevance measure" for the cited paper, computed as the number of times it has been cited in the literature, also needs to be updated.
The procedures used in the creation and management of bibliographic properties (simply called "properties" from here on) in the ADS databases are a result of the need for managing resources related to bibliographies which may or may not be available locally. The main characteristics of the property sets as defined in our system can be summarized in the following list:
1) Some properties simply denote the fact that an entry belongs to a certain dataset (e.g. whether a paper is refereed or not), while others may have values associated with them (e.g. "is available online electronically" will have as its value the URL of the full-text paper). In general, the knowledge of whether an entry in the database has a certain property allows the search engine to select it for further consideration when executing a database query, while the value(s) assumed by this property do not need to be taken into account until later.
2) The lists of bibliographic identifiers and their properties may be defined as being either "static" or "dynamic." Static properties are those that once defined do not change in time (e.g. whether a paper is refereed), while dynamic properties may change their value with time (e.g. the list of citations for a paper).
3) Some properties may depend on each other (e.g. references and citations), hence the creation and updating order for these properties is significant.
Currently the ADS has defined a set of 21 different properties which are applicable to its bibliographies. Some of them are listed in table 3.
In the rest of this section we will discuss the approach we followed in implementing the database structures allowing query and selection based on properties of bibliographies. In section 4.1 we describe the implementation used to associate properties and attributes to entries in the database and the procedures maintaining relational links among them. In section 4.2 we describe the framework used to automatically update and merge bibliographic data with information submitted to the ADS.
### 4.1 Representation of Properties
The creation and updating of properties in the ADS system is the result of merging entries provided by different data sources and individuals at different times and in different formats. The procedures used to maintain the property database are therefore structured to be as general as possible (so that defining a new property is a simple task) while still allowing as much customization as necessary to deal with a variety of sources and formats. The representation of properties allows the search engine to efficiently filter results based on whether a bibliographic entry possesses a particular property. It also allows fast access to the values associated with a particular bibliographic property, so that the search interface can quickly access the information as required.
Instead of representing these properties as a single relational table where each bibliographic entry is associated with the ordered set of property values, a different approach was chosen where each property is represented by a separate table. The following definition was adopted:
"A bibliographic entry $`b`$ possesses property $`p`$ if the unique identifier for $`b`$ appears in the property table associated with $`p`$, $`T_p`$. If $`p`$ is a property that can have one or more values associated with it, the entry for $`b`$ in table $`T_p`$ will contain the $`n`$-tuple of such values next to it."
As an example, a possible entry in table $`T_{data}`$ for a bibliographic entry which has a $`data`$ property associated with it could be:
```
1999A&A...341..121S
http://cdsweb.u-strasbg.fr/htbin/myqcat3?
J/A+A/341/121/
http://adc.gsfc.nasa.gov/adc-cgi/cat.pl?
/journal_tables/A+A/341/121/
```
The first column contains the bibliographic identifier for the property, while the second column contains the values of the $`data`$ property, in this case a list of URLs of electronic data tables published in the paper. (Note that this record has been split on several lines for editorial reasons.)
The file structure most amenable to representing these property tables is again an inverted file, which allows fast binary searches on the bibcode identifiers. As is the case for the inverted files used to perform fielded searches on the contents of the bibliographic entries in our database (see section 3), each property table is decomposed into two parts, an index file and a list file. Since the records in the index file contain only bibcodes, which have a fixed length, we can create a binary index file where each record consists of one bibcode identifier (which is the sort key in the file), a pointer into the list file, and the number of property values associated with the bibcode. Entries in the list file are variable-length, newline-separated records, each record corresponding to a property value.
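Since all records have the same length, lookups reduce to a binary search with direct seeks. The sketch below assumes a 19-byte bibcode followed by two 32-bit integers (27 bytes per record), a layout chosen here for illustration rather than taken from the actual files:

```
use constant RECLEN => 27;    # 19-byte bibcode + offset + value count

# Return (offset, count) for a bibcode, or nothing if the entry
# does not possess the property.
sub property_entry {
    my ($idxfh, $bibcode) = @_;
    my ($lo, $hi) = (0, (-s $idxfh) / RECLEN - 1);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        seek($idxfh, $mid * RECLEN, 0) or die "seek failed: $!";
        read($idxfh, my $rec, RECLEN) == RECLEN or die "short read";
        my ($code, $offset, $count) = unpack("A19 L L", $rec);
        if    ($code lt $bibcode) { $lo = $mid + 1; }
        elsif ($code gt $bibcode) { $hi = $mid - 1; }
        else  { return ($offset, $count); }
    }
    return;
}
```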
In addition to the index and list files, a database-specific file is generated for each property containing the list of all bibcodes in that particular database which possess that property. When the data structures used by the search engine are loaded into random access memory, these lists of bibcodes are read and for each bibliographic entry a binary array containing the list of properties which it possesses is created. By storing this information as part of the memory-resident data structures used by the search engine, selection and filtering of bibliographic entries based on their properties becomes a very efficient operation. The current implementation uses a 32-bit integer to represent the binary array of properties, where the $`n`$-th bit is set if and only if the bibliographic entry possesses the $`n`$-th property.
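Testing such a bitmask is a single integer operation; the bit assignments below are hypothetical, chosen only to show the idiom:

```
use constant PROP_REFEREED => 0;    # illustrative bit numbering
use constant PROP_DATA     => 5;

sub has_property {
    my ($mask, $bit) = @_;
    return ($mask & (1 << $bit)) != 0;
}

# e.g. keep only refereed entries with associated data tables:
# @hits = grep { has_property($mask{$_}, PROP_REFEREED)
#             && has_property($mask{$_}, PROP_DATA) } @results;
```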
### 4.2 Implementation of the Property Database Management Software
To provide the capability of merging properties and values generated from separate sources and in different formats, we devised a framework consisting of a hierarchical set of files and software utilities which are used to implement an efficient processing pipeline (see figure 5). The approach we follow may be regarded as being bottom-up, because the property files are always created from smaller, independently updated datasets. Updating of such datasets is typically event-driven, as described below.
A top-level directory is created which contains one subdirectory for each property in the database. Each of these subdirectories in turn contains files representing different datasets which need to be merged together. The nature and content of such files is determined by their extension, according to the following conventions:
.tab: files containing identifiers and properties as provided by different data centers and users; these entries will need to be translated to the standard format used by scripts managed by the ADS staff
.bib: files containing lists of tab-separated identifier and value pairs; these entries are suitable to be merged into a single property file used by the ADS search engine
.fmt: executable procedures which generate .bib files from their respective .tab files; these procedures contain format- and domain-specific knowledge about the source of the particular dataset and the mapping of entries from the .tab file into the .bib file
.uri: file containing the URLs of documents which should be downloaded from the network and merged to create a .tab file; these URLs may correspond to static or dynamic documents generated by other service providers listing the bibliographic properties available on their web site
.flt: executable procedures which generate .bib files by filtering the complete list of bibliographic identifiers according to some data-specific criteria; one example of such filter is the one which produces the list of all refereed bibcodes from the list of all bibcodes by checking the journal abbreviation
.kill: file containing the list of bibcodes which should $`not`$ be listed as possessing a particular property; these are typically used to implement "exceptions to the rule"; for example, we use a kill file to remove bibcodes corresponding to editorial notices from the global list of papers appearing in a refereed journal.
Data retrieval and formatting scripts, designed after the GNU "make" utility, limit the creation and processing of data to what is strictly necessary. In particular, data sources that are specified as URLs are downloaded only if their timestamp is more recent than that of their local copy. This obviously applies to network protocols that support the notion of time-stamping, e.g. HTTP and FTP. Similarly, scripts that are used to format input tables into lists of bibcodes and corresponding URLs are only executed if the timestamp of the relevant tables indicates that they have been modified more recently than their corresponding target file.
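The freshness test at the heart of this make-like behaviour is a simple timestamp comparison; the file names below are illustrative:

```
# Rebuild a .bib target only when its .tab source is newer.
sub needs_rebuild {
    my ($target, $source) = @_;
    return 1 unless -e $target;                        # never built
    return (stat($source))[9] > (stat($target))[9];    # compare mtimes
}

if ( needs_rebuild("citations.bib", "citations.tab") ) {
    system("./citations.fmt") == 0
        or die "citations.fmt failed: $?";
}
```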
## 5 Database Mirroring
All of the software development and data processing in the ADS has been carried out over the last 6 years in a UNIX environment. During the life of the project, the workgroup-class server used to host the ADS services has been upgraded twice to meet the increasing use of the system. The original dual processor Sun 4/690 used at the inception of the project was replaced by a SparcServer 1000E with two 85MHz Supersparc CPU modules in 1995, and subsequently an Ultra Enterprise 450 with two 300MHz Ultrasparc CPUs was purchased in 1997. The last two machines are currently used to host the ADS article and abstract services, respectively.
Soon after the inception of the article service in 1995 it became clear that for most ADS users the limiting factor when retrieving data from our computers was bandwidth rather than raw processing power. With the creation of the first mirror site hosted by the CDS in late 1996, users in different parts of the world started being able to select the most convenient database server when using the ADS services, making best use of bandwidth available to them. At the time of this writing, there are seven mirror sites located on four different continents, and more institutions have already expressed interest in hosting additional sites. The administration of the increasing number of mirror sites requires a scalable set of software tools which can be used by the ADS staff to replicate and update the ADS services both in an interactive and in an unsupervised fashion.
The cloning of our databases on remote sites has presented new challenges to the ADS project, imposing additional constraints on the organization and operation of our system. In order to make it possible to replicate a complex database system elsewhere, the database management system and the underlying data sets have to be independent of the local file structure, operating system, and hardware architecture. Additionally, networked services which rely on links with both internal and external web resources (possibly available on different mirror sites) need to be capable of deciding how the links should be created, giving users the option to review and modify the systemโs linking strategy. Finally, a reliable and efficient mechanism should be in place to allow unsupervised database updates, especially for those applications involving the publication of time-critical data.
In the next sections we describe the implementation of an efficient model for the replication of our databases to the ADS mirror sites. In section 5.1 we describe how system independence has been achieved through the parameterization of site-specific variables and the use of portable software tools. In section 5.2 we describe the approach we followed in abstracting the availability of network resources through the implementation of user-selectable preferences and the definition of site-specific default values. In section 5.3 we describe in more detail the paradigm used to implement the synchronization of different parts of the ADS databases. We conclude with section 5.4 where we discuss possible enhancements to the current design.
### 5.1 System Independence
The database management software and the search engine used by the ADS bibliographic services have been written to be independent from system-specific attributes to provide maximum flexibility in the choice of hardware and software in use on different mirror sites. We are currently supporting the following hardware architectures: Sparc/Solaris, Alpha/Tru64 (formerly Digital Unix), IBM RS6000/AIX, and x86/Linux. Given the current trends in hardware and operating systems, we expect to standardize to GNU/Linux systems in the future.
Hardware independence was made possible by writing portable software that can be either compiled under a standard compiler and environment framework (e.g. the GNU programming tools, Loukides & Oram (1996)) or interpreted by a standard language (e.g. PERL version 5, Wall, Christiansen & Schwartz (1996)). Under this scheme, the software used by the ADS mirrors is first compiled from a common source tree for the different hardware platforms on the main ADS server, and then the appropriate binary distributions are mirrored to the remote sites.
One aspect of our databases which is affected by the specific server hardware is the use of binary data in the list files, since binary integer representations depend on the native byte ordering supported by the hardware. With the introduction of a mirror site running Digital UNIX in the summer of 1999, we were faced with having to decide whether it was better to start maintaining two versions of the binary data files used in our indices or if the two integer implementations should be handled in software. While we have chosen to perform the integer conversion in software for the time being given the adequate speed of the hardware in use, we may revisit the issue if the number of mirror sites with different byte ordering increases with time.
Operating System independence is achieved by using a standard set of public domain tools abiding by well-defined POSIX standards (IEEE (1995)). Any additional enhancements to the standard software tools provided by the local operating system are achieved by cloning more advanced software utilities (e.g. the GNU shell-utils package) and using them as necessary. Specific operating system settings which control kernel parameters are modified when appropriate to increase system performance and/or compatibility among different operating systems (e.g. the parameters controlling access to the system's shared memory). This is usually an operation that needs to be done only once when a new mirror site is configured.
File-system independence is made possible by organizing the data files for a specific database under a single directory tree, and creating configuration files with parameters pointing to the location of these top-level directories. Similarly, host name independence is achieved by storing the host names of ADS servers in a set of configuration files.
### 5.2 Site Independence
While the creation of the ADS mirror sites makes it virtually impossible for users to notice any difference when accessing the bibliographic databases on different sites, the network topology of a mirror site and its connectivity with the rest of the Internet play an important role in the way external resources are linked to and from the ADS services. With the proliferation of mirror sites for several networked services in the field of astronomy and electronic publishing, the capability to create hyperlinks to resources external to the ADS based on the individual userโs network connectivity has become an important issue.
The strategy used to generate links to networked services external to the ADS which are available on more than one site follows a two-tiered approach. First, a "default" mirror can be specified in a configuration file by the ADS administrator (see figure 6). The configuration file defines a set of parameters used to compose URLs for different classes of resources, lists all the possible values that these parameters may assume, and then defines a default value for each parameter. Since these configuration files are site-specific, the appropriate defaults can be chosen for each of the ADS mirror sites depending on their location. ADS users are then allowed to override these defaults by using the "Preference Settings" system (SEARCH) to select any of the resources listed under a category as their default one. Their selection is stored in a site-specific user preference database which uses an HTTP cookie as an ID correlating users with their preferences (SEARCH).
In order to create links to external resources which are a function of a userโs preferences, we store the parametrized version of their URLs in the property databases. The search engine expands the parameter when the resource is requested by a user according to the userโs preferences. For instance, the parametrized URL for the electronic paper associated with the bibliographic entry 1997ApJ...486...42G can be expressed as $UCP$/cgi-bin/resolve?1997ApJ...486...42G. Assuming the user has selected the first entry as the default server for this resource, the search engine will expand the URL to the expression:
```
http://www.journals.uchicago.edu/cgi-bin/resolve?
1997ApJ...486...42G
```
This effectively allows us to implement simple name resolution for a variety of resources that we link to. While more sophisticated ways to create dynamic links have been proposed and are being used by other institutions (Van de Sompel & Hochstenbach (1999); Fernique, Ochsenbein & Wenger (1998)), there is currently no reliable way to automatically choose the "best" mirror site for a particular user, since this depends on the connectivity between the user and the external resource rather than the connectivity between the ADS mirror site and the resource. By saving these settings in a user preference database indexed on the user HTTP cookie ID (SEARCH), users only need to define their preferences once and our interface will retrieve and use the appropriate settings as necessary.
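The expansion itself is a one-line substitution; in the sketch below %$prefs holds the user's stored preferences and %defaults the site-specific configuration, with parameter names and URLs chosen for illustration:

```
my %defaults = ( UCP => 'http://www.journals.uchicago.edu' );

# Expand $NAME$ parameters in a stored URL, preferring the user's
# settings and falling back to the site defaults.
sub expand_url {
    my ($url, $prefs) = @_;
    $url =~ s/\$(\w+)\$/$prefs->{$1} || $defaults{$1}/ge;
    return $url;
}
```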
### 5.3 Mirroring software
The software used to perform the actual mirroring of the databases consists of a main program running on the ADS master site initiating the mirroring procedure, and a number of scripts, run on the mirror sites, which perform the transfer of files and software necessary to update the database. The paradigm we adopted in creating the tools used to maintain the mirror sites in sync is based on a "push" approach: updates are always started on the ADS main site. This allows mirroring to be easily controlled by the ADS administrator and enables us to implement event-triggered updating of the databases. The main mirroring program, which can be run either from the command line or through the Common Gateway Interface (CGI), is a script that initiates a remote shell session on the remote sites to be updated, sets up the environment by evaluating the mirror sites' and master site's configuration files, and then runs scripts on the remote sites that synchronize the local datasets with the ADS main site. An example of the menu-driven CGI interface and a mirroring session are shown in figure 7.
The updating procedures are specialized scripts which check and update different parts of the database and database management software (including the procedures themselves). For each component of the database that needs to be updated, synchronization takes place in two steps, namely the transfer of changed files to a staging directory, and the action of making these new files operational. This separation of mirroring procedures has allowed us to enforce the proper checks on integrity and consistency of a data set before it is made operational.
The actual comparison and data transfer for each of the files to be updated is done by using a public domain implementation of the rsync algorithm (Tridgell 1999a ). The advantages of using rsync to update data files rather than using more traditional data replication packages are summarized below.
1) Incremental updates: rsync updates individual files by scanning their contents, computing and comparing checksums on blocks of data within them, and copying across the network only those blocks that differ. Since during our updates only a small part of the data files actually changes, this has proven to be a great advantage. Recent implementations of the rsync algorithms also allow partial transfer of files, which we found useful when transferring the large index files used by the search engine. In case the network connection is lost or times out while a large file is transferred, the partial file is kept on the receiving side so that transfer of additional chunks of that file can continue where it left off on the next invocation of rsync.
2) Data integrity: rsync provides several options that can be used to decide whether a file needs updating without having to compare its contents byte by byte. The default behavior is to initiate a block by block comparison only if there is a difference in the basic file attributes (time stamp and file size). The program however can be forced to perform a file integrity check by also requesting a match on the 128-bit MD4 checksum for the files.
3) Data compression: rsync supports internal compression of the data stream sent between the master and mirror hosts by using the zlib library (Deutsch & Gailly (1996)).
4) Encryption and authentication: rsync can be used in conjunction with the Secure Shell package (Ylonen et al. (1999)) to enforce authentication between rsync client and server host and to transfer the data in an encrypted way for added security. Unfortunately, since all of the ADS mirror sites are outside of the U.S., transfer of encrypted data could not be performed at this time due to restrictions and regulations on the use of encryption technology.
5) Access control: the use of rsync allows the remote mirror sites to retrieve data from the master ADS site using the so-called anonymous rsync protocol. This allows the master site to exercise significant control over which hosts are allowed to access the rsync server, what datasets can be mirrored, and does not require remote shell access to the main ADS site, which has always been the source of great security problems.
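In practice, one step of a mirror-side update reduces to an rsync invocation wrapped in PERL. In the sketch below the host, module and target paths are illustrative, while the flags (archive mode, compression, partial transfers, timeout) are standard rsync options:

```
my $src     = 'rsync://ads.harvard.edu/ads/index/';   # anonymous rsync
my $staging = '/ads/staging/index/';                  # staging area

system('rsync', '-az', '--partial', '--timeout=600', $src, $staging) == 0
    or die "rsync failed: $?";
```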
During a typical weekly update of the ADS astronomy database, as many as 1% of the text files may be added or updated, while the index files are completely recreated. By checking the attributes of the individual files and transferring only the ones for which either timestamp or size has changed, the actual data which gets transferred when updating the collection of text files is of the order of 1.7% of the total file size (12MB vs. 700MB). By using the incremental update features of rsync when mirroring a new set of index files, the total amount of data being transferred is of the order of 38% (250MB vs. 650MB).
### 5.4 Planned Enhancements
While the adoption of the rsync protocol has made it possible to dramatically decrease the time required to update a remote database, there are several areas where additional improvements could be made to the current scheme in an effort to reduce the amount of redundant processing and network transfers on the main ADS server. Some of the planned improvements are discussed below.
Given the CPU-intensive activity of computing lists of file signatures and checksums for files selected as potential targets for a transfer, the rsync server running on the main ADS site is often under a heavy load when the weekly updates of our bibliographic databases are simultaneously mirrored to the remote sites. Under the current implementation of the rsync server software, each request from a mirror site is handled by a separate process which creates the list of files and directories being checked. Therefore, the load on the server increases linearly with the number of remote hosts being updated, although much of the processing requested by the separate rsync connections is in common and takes place at the same time. By adding an option to cache the data signatures generated by the rsync server and exchanged with each client, most of the processing involved could be avoided. This option, first suggested by the author of the rsync package (Tridgell 1999b ) but never implemented, would significantly benefit busy sites such as the ADS main host. A similar approach has been used by Dempsey & Weiss (1999) to implement an experimental replication mechanism based on rsync. We hope that a stable and general approach to this caching issue can be adopted soon and are collaborating with the maintainers of the package on its development.
A second improvement that would significantly reduce the bandwidth currently used during remote updating of the ADS mirror sites is the implementation of a multicasting or cascading mirroring model (see figure 8). Internet multicasting is still a technology under development (Miller et al. (1998)) and efficient implementations require special software support at the IP (Internet Protocol) level, over which we have no control. The cascading model can instead be implemented at the application level using current software tools. Under this model, the administrator of the main server to be cloned defines a tree in which the nodes represent the mirror sites, with the root of the tree being the main site. Data mirroring is then implemented by having each node in the tree "push" data to its subordinate nodes. This approach trades off the simplicity of simultaneous updating for all mirror sites from a central host in favor of a sequence of cascading updates, which is a sensible solution once the number of mirror sites becomes large. We are currently experimenting with this model on a prototype system and plan to make it operational by the end of 1999 if the design proves advantageous.
## 6 Future Developments
By all accounts, the ADS project has been very successful in providing bibliographical services to the astronomer and research librarian. Much of the system's strength has been its role as part of a network of services designed to provide advanced search and retrieval capabilities to the scientific community at large. Given the rapid changes in the fields of electronic publishing, resource linking, and digital library research, it is of great importance for our project to adapt its operations to this ever-changing environment and its underlying technologies.
In this last section we analyze some of the promises and challenges that we expect to face over the next several years and we discuss how they may affect the evolution of our system. In section 6.1 we describe the new datasets that are becoming available to our project and the changes necessary for their integration in the existing system architecture. Section 6.2 describes the effect of expected technological changes on the operations of the ADS. Finally, section 6.3 discusses how increased collaboration and inter-operability among data providers can lead to the creation of a more integrated environment making better use of information discovery and electronic publishing technologies.
### 6.1 New Data
From the user's perspective, one of the most significant changes will be the completion of our full-text coverage and abstracting for the scholarly astronomical literature. Over the next year we expect to complete the digitization of all astronomical journals back to volume 1 (DATA). The availability of such a large body of scanned publications allows us to pursue some important goals through the use of Optical Character Recognition (OCR) technology: the creation of full-text documents and the extraction of abstract and citation information from them.
The full text of an article produced by OCR programs can be used by the indexing and search engine to provide better retrieval capabilities. However, the current indexing model has been developed to work well with a homogeneous set of bibliographic data with little variation in document length and content model; extending the scope of our databases to include the full-text of articles may therefore require a new approach to the entire architecture behind the indexing and search engines. Furthermore, since the output generated by OCR packages is known to contain incorrectly recognized characters and words, new strategies may be required to manage this level of uncertainty during indexing and searching.
The extraction and OCRing of important document fragments such as abstracts and references is currently an ongoing process which holds great promise (DATA). Essentially, the combination of pattern recognition and OCR techniques allows us to identify areas in a scanned document corresponding to the abstract or reference section of a paper. The text extracted from an abstract section is then reformatted and inserted into the bibliographic record for that paper. Periodic analysis of the text index has been necessary to identify and correct misinterpreted characters and words produced by the OCR software. The increased amount of human checks on our data set as a quality assurance measure has been the price to pay for integrating these additional abstracts in our bibliographic records.
Text extracted from a reference section is analyzed by programs making use of natural language processing techniques to identify the individual works cited in the article and add them to our citation database. The challenge we are facing in this case is creating a robust system capable of correctly parsing and matching the cited reference strings with bibliographic records in our database (Accomazzi et al. (1999)), with the additional complication that the input text may contain characters incorrectly recognized by the OCR software.
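Once a reference string has been parsed into its components, matching reduces to composing candidate bibcodes and looking them up in the database. A schematic composer for the 19-character code (year, 5-character journal abbreviation, right-justified volume and page, first-author initial, dots as padding, with the single qualifier column left as a dot here) is sketched below; the production matcher is considerably more tolerant of OCR noise and format variations:

```
# e.g. make_bibcode(1999, "A&A", 341, 121, "s") -> "1999A&A...341..121S"
sub make_bibcode {
    my ($year, $journal, $volume, $page, $initial) = @_;
    my $bibcode = sprintf("%4d%-5s%4s%1s%4s%1s",
                          $year, $journal, $volume, ".", $page,
                          uc $initial);
    $bibcode =~ tr/ /./;    # pad the variable-width fields with dots
    return $bibcode;
}
```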
### 6.2 New Technologies
The latest developments in Electronic Data Interchange and User Interfaces advocate the adoption of a model of data representation where there is clear separation between content, metadata, and style. The widespread endorsement of XML and related proposals such as the XLink language, the Extensible Style Language (XSL), and the Document Object Model (DOM), seems to indicate that we will see pervasive use of XML across platforms and implementations. While this raises hopes that data exchange among different astronomical data centers and institutions can be streamlined, it is not clear at this point that a unique framework describing all resources in astronomy can be defined, nor that such a system is necessary yet. However, the adoption of XML as the "lingua franca" for data interchange can help remove the initial obstacles preventing more widespread creation of peer-to-peer connections between information providers and can help speed up the creation of "federated" services (Murtagh & Guillaume (1998)).
In this context, we hope to leverage the wide deployment of XML-based applications to generalize and extend the services currently offered to our collaborators and users. This involves modifying the implemented APIs (SEARCH) to allow output of structured XML documents containing both metadata and bibliographic data. We have already started adopting this paradigm while implementing new and experimental services which require the exchange of data and metadata structures between client and server, such as the ADS reference resolver (Accomazzi et al. (1999)).
Another issue related to data interchange which is currently receiving much attention is the definition of persistent identifiers for bibliographic resources available on the Internet. This issue is a particular instance of a more general problem, which is the need to define common naming schemes for digital objects and distributed locator services allowing their resolution. For a number of years this has been recognized as one of the most important infrastructure components necessary for the large-scale development of digital library systems (Lynch & Molina-Garcia (1996)). Today most publishers are providing location services which are based on the traditional paradigm of identifying a published work by journal, volume and page. It is becoming increasingly clear that a more general mechanism will have to be adopted in the future since this model does not extend well into the digital era. For instance, a publication may be available only in electronic form (as is already the case for some "e-journals" such as EPJdirect and ZPhys-e from Springer-Verlag), or may correspond to a multimedia object rather than a traditional text document; in these cases, the concept of pagination loses its meaning. The Digital Object Identifier (DOI; Paskin (1999)), which has been proposed by an international consortium of publishers, holds the promise of becoming the universal identifier suitable for naming digital objects. Unfortunately, the required registration procedures, the management of DOI space, and the limited support for its location services seem to have discouraged its widespread adoption so far (Davidson & Douglas (1998)).
The ADS has already extended the use of the bibcode identifier in different ways to account for the existence of electronic-only publications (DATA), but it is becoming increasingly more difficult to map new document identifiers into a model that was designed to describe printed material only. It is likely that over the next few years our project will need to adopt new notations for identifying bibliographic records, while still maintaining backward compatibility with the existing bibcodes for printed work. In this sense, it is likely that ADS will be able to help the astronomical community in the transition from print-based to electronic publishing by providing resolving services for astronomical bibliographies and related resources.
### 6.3 New Services
The adoption of common technologies and protocols by data providers has helped create a low level of inter-operability among different data services (in the sense that users can simply browse across different web sites by following links between them). However, with the exponential increase of documents and services available on the web, the problem of providing an integrated tool for locating information of interest to a researcher has remained unsolved. While well-organized repositories and archives with good search interfaces exist for a variety of data sets, a scientist who needs to consult several such archives is left having to query each one separately and then organize the results collected from each one of them. It is fortunate that the creation of the ADS and its ongoing collaboration with other data providers has reduced (if not completely eliminated) this problem for astronomers, but this is not the case for scientists in other disciplines or for those researchers whose work spans across the conventional boundaries of scientific research fields.
The problem of providing a unified search mechanism across datasets is being tackled both within the individual disciplines (Heikkila, McGlynn & White (1999); Fernique, Ochsenbein & Wenger (1998); Murtagh & Guillaume (1998)) and at the architectural level (Schatz (1997)). A proposed solution to this problem is the creation of federated services composed by "clustering" the combined assets and search capabilities of several independent data centers. A common set of metadata elements describing the local search domain and interface can be used to translate generic queries into site-specific ones, and then merge and present the results to the users. While this type of approach is known to work within well-restricted research domains, the broader problem of querying databases belonging to different research fields is far more complex and requires the creation of systems capable of implementing semantic inter-operability (Schatz (1997); Lynch & Molina-Garcia (1996)). While the ADS has been offering direct access to its search engine since 1996 (SEARCH), in order for the ADS to become part of such a federated system, we will need to provide an increased level of abstraction and access to the capabilities of our search interfaces. Additionally, the emerging standards for site- and database-specific resource descriptions will require the creation and maintenance of a body of metadata defining both the extent of our databases and the supported query interfaces. Hanisch (2000) has recently proposed the creation of such a distributed system for Astronomy and the Space Sciences.
Another important aspect of services increasing inter-operability between data providers is cross-linking of online resources. While most publishers of scientific journals have been able to create electronic versions of their journals relatively quickly soon after the explosion in popularity of the web, only a few of them have taken advantage of the new capabilities that the technology has to offer, namely the possibility to create hyperlinks between online documents and related resources. In this respect, electronic publishing in astronomy was ahead of its time with the publication by the University of Chicago Press in late 1996 of the electronic version of the Astrophysical Journal, which contained hyperlinks from the reference section of articles to bibliographic records in the ADS. The early implementation of this feature became possible thanks to the close collaboration between the publisher, the ADS staff, and the visionary leadership provided by the American Astronomical Society (AAS). Similarly, editors and publishers have now made it their policy to submit electronic versions of data tables appearing in astronomical papers to the CDS and Astronomical Data Center (ADC) archives, allowing ADS to easily maintain links to these datasets in its bibliographic records. This practice was established back in 1990 with an agreement between the CDS and the editors of the journal Astronomy & Astrophysics.
While reference and object linking has today become more commonplace (Hitchcock et al. (1998)), there are a number of unresolved problems that limit its usefulness. The issue of linking a reference to an instance of the document it refers to can be viewed as a two-step process (Caplan & Arms (1999)): (1) resolution of a reference string into a document identifier; and (2) resolution of the document identifier into one or more URLs. In the current use of the ADS reference resolver (Accomazzi et al. (1999)), step (1) is accomplished by the publisher during the last stages of the electronic publication process, and links are created only if a reference string is found to correspond to a valid bibcode in ADS ("static linking"). The step of document resolution (2) is another example of the problem of object resolution mentioned in section 6.2. In this case, a bibcode needs to be mapped into the "best" URL corresponding to it; this mapping is typically implemented as a site-specific resolution activity, so that, for example, the CDS mirror of the University of Chicago journals will link to the CDS mirror of the ADS bibliographic services.
While this model has worked well for many astronomical journals, it has some shortcomings. First of all, the computation of static links at publication time does not allow for the possibility that one of the works cited in the reference section may become available at a later date (e.g. if the coverage of the literature has been extended or if a more accurate resolution of the reference is later implemented). From a theoretical point of view, a better approach to the problem would be the use of "dynamic linking," in which links are created when the document is downloaded (Van de Sompel & Hochstenbach (1999)). It is likely that most publishers will move towards a mixed model in which on-line documents are periodically reprocessed for the purpose of updating links between them and external resources that may have become available, or to provide options for forward-looking citation queries into bibliographical databases.
As far as the issue of bibcode resolution is concerned, it is clear that a better approach than site-specific settings would be to allow real-time resolution of bibcode identifiers based on the preferences of the individual users and the current availability of relevant resources. The approach we follow when resolving links to external resources (SEARCH) does account for user preferences, but does not take into account real-time availability of the possible instances of the resource. This is in contrast with the approach followed by Fernique, Ochsenbein & Wenger (1998), where the opposite is true. It is clear that in order to create a reliable system for resolving astronomical resources, an integration of both approaches is necessary, so that a global user profile can be used to specify preferences while a global resource database can be used to specify the availability and location of these resources on the network. The implementation of such a system is greatly complicated by the increasingly complex organization of networks, with firewalls and proxy servers acting as intermediary agents in the activity of resource resolution. Hopefully these issues will be solved over the next few years by the adoption of standard practices and software tools.
## 7 Conclusions
The design and implementation of the ADS bibliographic services has been driven by the desire to provide flexible search capabilities to the astronomical community. The original decision to create our own suite of software tools for indexing and searching the databases has proven to be an important one as it has given us the freedom to continuously enhance and tailor the software to our users' needs. With freedom, however, also came the responsibility of maintaining a complex system which has now been ported to a variety of hardware and software platforms. Fortunately, the adoption of standard programming languages and coding techniques has greatly facilitated the task.
Over the years, the ADS has evolved from being a user-oriented system to becoming an open service for the discovery and retrieval of bibliographic data, allowing integration of our capabilities in the operation of other information providers. At the same time, our system was expanded from being simply a searchable archive of bibliographic references to being a service offering relational links among records within our system and to resources available elsewhere. In this respect, the design of a hierarchical framework for the management of bibliographic resources has provided the required level of flexibility and extensibility. With the recent proliferation of mirror sites for popular resources in astronomy, we have adopted a simple yet powerful mechanism for the resolution of links to resources available at multiple locations, adding user customization to the resolution process.
With the completion of full-text coverage of the astronomical literature over the next few years, the ADS will be able to significantly increase the holdings of its citation database and provide full-text search and retrieval capabilities. With the adoption of new technologies and standards in electronic data interchange, the ADS will likely continue to play an important role in the integration of network services in astronomy.
###### Acknowledgements.
The usefulness of a bibliographic service is only as good as the quality and quantity of the data it contains. The ADS project has been lucky in benefitting from the skills and dedication of several people who have significantly contributed to the creation and management of the underlying datasets. In particular, we would like to acknowledge the work of Elizabeth Bohlen, Donna Thompson, Markus Demleitner, and Joyce Watson. Funding for this project has been provided by NASA under grant NCC5-189.
ISO spectroscopy of circumstellar dust in the Herbig Ae systems AB Aur and HD 163296

Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA, and on observations collected at the European Southern Observatory, La Silla, Chile.
## 1 Introduction
Herbig Ae/Be stars are intermediate-mass pre-main sequence stars surrounded by disks of gas and dust which might be the site of on-going planet formation (see Waters & Waelkens 1998 for a recent review). AB Aur (A0Ve) and HD 163296 (A3Ve) are among the best-studied Herbig stars and are sometimes considered prototypical for the entire class. As early as 1933, Merrill & Burwell remarked upon the similarity of both systems, which was confirmed by numerous subsequent authors. Apart from the fact that the stellar mass, effective temperature and age of AB Aur and HD 163296 are nearly identical (van den Ancker et al. 1997, 1998), the similarity between the two systems also extends to their circumstellar environment: both AB Aur and HD 163296 are examples of relatively isolated star formation, and are not hindered by confusion with other sources (Henning et al. 1998; Di Francesco et al. 1998).
Both AB Aur and HD 163296 show a rich, variable, emission-line spectrum from the ultraviolet to the optical. With a few exceptions, most notably the observed \[O i\] emission, these lines have been successfully modelled as arising in an inhomogeneous stellar wind (Catala et al. 1989; Bรถhm et al. 1996; Bouret et al. 1997). Infall of material has been detected in both HD 163296 and AB Aur through monitoring of UV and optical absorption and emission lines and can be explained by the presence of infalling evaporating exocomets (Grady et al. 1996, 1999). In the infrared, both stars are among the sources with the strongest 10 $`\mu `$m silicate feature in emission (Cohen 1980; Sitko 1981; Sorrell 1990). Sitko et al. (1999) compared the 10 $`\mu `$m silicate feature in HD 163296 with that of solar-system comets and found a striking resemblance with that of comet Hale-Bopp. Neither star shows the 3.29 $`\mu `$m UIR emission band present toward many Herbig Ae/Be stars (Brooke et al. 1993; Sitko et al. 1999). Basic astrophysical parameters of both stars are listed in Table 1.
There is strong evidence for the presence of a circumstellar disk in both AB Aur and HD 163296. Bjorkman et al. (1995) detected a 90ยฐ flip of the polarization angle between the optical and the ultraviolet in HD 163296, which they interpreted as evidence for a flattened, disk-like structure. Mannings & Sargent (1997) resolved the gaseous disks surrounding AB Aur and HD 163296 using CO millimeter wave aperture synthesis imaging. Using continuum measurements at 1.3 mm, the same authors also resolved the circumstellar dust disk of HD 163296. The AB Aur dust disk has also been resolved in the infrared, and shows a surprisingly strong dependence of disk diameter on wavelength, ranging from 0.0065 arcsec (0.94 AU) at 2.2 $`\mu `$m (Millan-Gabet et al. 1999), through 0.24 arcsec (35 AU) at 11.7 $`\mu `$m to 0.49 arcsec (70 AU) at 17.9 $`\mu `$m (Marsh et al. 1995).
In this paper we present new infrared spectra of AB Aur and HD 163296 obtained with the Short- and Long Wavelength Spectrometers on board the Infrared Space Observatory (ISO; Kessler et al. 1996). We will discuss these spectra and their implications for the evolution of dust in Herbig systems. In a subsequent paper (Bouwman et al. 2000), we will describe a model for the circumstellar dust disks of AB Aur and HD 163296 and apply it to these data.
## 2 Observations
ISO Short Wavelength (2.4–45 $`\mu `$m) Spectrometer (SWS; de Graauw et al. 1996) and Long Wavelength (43–197 $`\mu `$m) Spectrometer (LWS; Clegg et al. 1996) full grating scans of AB Aur were obtained in ISO revolutions 680 (at JD 2450717.747) and 835 (JD 2450872.380), respectively. An SWS full grating scan of HD 163296 was made in revolution 329 (JD 2450367.398). Observing times were 3666 seconds for the SWS and 2741 seconds for the LWS observations. Data were reduced in a standard fashion using calibration files corresponding to OLP version 7.0 (SWS) or 6.0 (LWS), after which they were corrected for remaining fringing and glitches. To increase the S/N in the final spectra, statistical outliers were removed and the detectors were aligned, after which the spectra were rebinned to a lower spectral resolution. The resulting spectra are shown in Fig. 1.
$`N`$-band (10.1 $`\mu `$m) images of HD 163296 were obtained on July 25, 1997 (at JD 2450654.608) using TIMMI on the ESO 3.6m telescope at La Silla. Total integration time was 65 minutes. The pixel size was 0.336 arcsec, with a total field of view of 21.5 $`\times `$ 21.5 arcsec. After a standard reduction procedure, the HD 163296 image was indistinguishable from that of the standard star $`\eta `$ Sgr. After deconvolution of the HD 163296 image with that of $`\eta `$ Sgr, the resulting stellar image stretched across 2 pixels. We conclude that the bulk of the 10 $`\mu `$m flux of HD 163296 comes from an area less than 0.7 arcsec (90 AU at 122 pc) in diameter.
## 3 Contents of spectra
The infrared spectra of AB Aur and HD 163296 (Fig. 1) show marked differences: whereas AB Aur shows the cool, strong continuum expected for a Herbig star, for HD 163296 the continuum appears to be so weak that the entire SWS spectrum is dominated by solid-state emission features. The IRAS fluxes also plotted in Fig. 1 suggest that an underlying continuum in HD 163296 is present, but peaks longward of 100 $`\mu `$m and therefore is much cooler than the $`\sim `$40 K continuum in AB Aur.
Both AB Aur and HD 163296 show a strong 9.7 $`\mu `$m amorphous silicate feature in emission together with a broad emission complex ranging from 14 to 38 $`\mu `$m. The emissivities for various dust components in the spectra are also included in Fig. 1. In the HD 163296 spectrum the broad emission complex ranging from 14 to 38 $`\mu `$m is too broad and too intense to be solely attributed to the 19 $`\mu `$m feature due to the O–Si–O bending mode. We tentatively attribute this feature to a blend of silicates and iron oxide. Lab spectra of FeO also show a strongly rising emissivity in the short-wavelength range of the SWS (Henning et al. 1995). When folded with a $`\sim `$800 K blackbody, this naturally produces a broad emission feature peaking around 3 microns, which is present in both stars.
The large degree of redundancy in the SWS data makes it possible to assess the reality of weak spectral features which at first glance may appear to be lost in the noise. Each part of the spectrum was scanned twice by twelve detectors, so by checking whether a particular feature is seen in all detectors and in both scan directions, it is possible to disentangle real features from noise. The features identified in this way are listed in Table 2. AB Aur clearly shows the familiar 6.2 and 7.7 and possibly also the 8.6 and 11.2 $`\mu `$m UIR bands usually attributed to emission by polycyclic aromatic hydrocarbons (PAHs), as well as a new UIR band at 15.9 $`\mu `$m. The 3.29 $`\mu `$m UIR band is absent.
The HD 163296 spectrum shows a number of small emission features at wavelengths corresponding to those of crystalline olivines ((Mg<sub>x</sub>Fe<sub>1-x</sub>)<sub>2</sub>SiO<sub>4</sub>). Remarkably, the amorphous silicate feature has a higher band-strength relative to the continuum in our ISO data than in the ground-based 8–13 $`\mu `$m spectrum of HD 163296 by Sitko et al. (1999). In contrast, the 11.2 $`\mu `$m shoulder, due to crystalline silicates, appears weaker in our spectrum, suggesting a significant time variability of both components.
In addition to this, HD 163296 shows emission from the 44 $`\mu `$m H<sub>2</sub>O ice feature. The relative location of the IRAS 60 $`\mu `$m measurement in comparison to the SWS spectrum suggests that the long-wavelength H<sub>2</sub>O ice feature around 69 $`\mu `$m as well as the broad unidentified feature longward of 100 $`\mu `$m, observed in HD 100546 and HD 142527 (Malfait et al. 1998, 1999), might also be very prominent in HD 163296. PAHs are very weak or absent in HD 163296.
In addition to the solid-state features, both AB Aur and HD 163296 also contain a number of H i recombination lines at shorter wavelengths. All lines from the Brackett and Pfund series included in the SWS wavelength range are present, while the higher H i series are not detected, possibly due to the combined effect of a lower instrumental sensitivity and a higher background in this part of the spectrum. These recombination line data will be discussed in more detail in a forthcoming paper.
The LWS spectrum of AB Aur is relatively smooth and featureless. The only line that is clearly visible in the spectrum is the \[C ii\] line at 157.7 $`\mu `$m. The strength of this line (8.6 $`\times `$ 10<sup>-16</sup> W m<sup>-2</sup>) is compatible with it originating in the background rather than being circumstellar. As can be seen from Fig. 1, there is a $`\sim `$25% difference in the flux scales between the AB Aur SWS and LWS spectra in the overlapping region. Although within the formal errors of the absolute flux calibration for SWS and LWS, this discrepancy is larger than that found in other sources. The difference cannot be attributed to the different aperture sizes (33″ $`\times `$ 20″ for SWS versus a circular 80″ FWHM for LWS), or to confusion with extended emission, since then the LWS spectrum would have to have a higher flux level than the SWS spectrum. It is interesting to note that in the time interval between the SWS and LWS measurements, AB Aur did show an optical photometric event which could have also affected the infrared brightness (van den Ancker et al. 1999 and references therein), so the difference in flux between SWS and LWS might in fact be due to real variability. In Fig. 1 we also plotted the IRAS fluxes of AB Aur. Although compatible with both spectra, the IRAS 60 $`\mu `$m flux agrees better with the SWS spectrum, so we rather arbitrarily choose to adopt the SWS flux calibration for the region around 45 $`\mu `$m.
## 4 Spectral energy distributions
Spectral Energy Distributions (SEDs) of AB Aur and HD 163296 were constructed from literature data as well as our new ISO spectra and newly obtained VLA photometry for HD 163296 and are shown in Fig. 2. All the submm fluxes in these SEDs refer to single-dish measurements. As can be seen from Fig. 2, the SED can be naturally decomposed in three parts: the optical wavelength range, where the total system flux is dominated by the stellar photosphere, the infrared to submm, where emission originates from the circumstellar dust disk, and the radio, where free-free emission from the stellar wind becomes dominant.
The difference in behaviour of the dust component in HD 163296 and AB Aur is striking: after a nearly flat energy distribution in the infrared, the sub-mm and mm fluxes of AB Aur drop rapidly ($`\lambda F_\lambda \propto \lambda ^{-4.3}`$), indicative of the dust becoming optically thin at these wavelengths, whereas toward HD 163296 the slope of the sub-mm to mm fluxes ($`\propto \lambda ^{-2.9}`$) is within errors equal to that of the Rayleigh-Jeans tail of a black body ($`\propto \lambda ^{-3}`$). The new radio points at 1.3, 3.6 and 6 cm for HD 163296 do not follow the simple power-law dependence expected if these were solely due to free-free radiation. This demonstrates that even at wavelengths as long as 1.3 cm, a significant fraction of the system flux is due to circumstellar dust. The 3.6 and 6 cm fluxes are probably dominated by free-free emission.
The energy distribution of a circumstellar dust disk is governed by its temperature profile, the density distribution and the dust properties (chemical composition and size distribution). Since the circumstellar disks of AB Aur and HD 163296 are expected to be passive (Waters & Waelkens 1998) and the properties of the central stars are nearly identical, the temperature profiles in the disks are expected to be similar as well. One possibility to explain the different sub-mm to mm slope for AB Aur and HD 163296 could be a much flatter density distribution for HD 163296. However, with a standard sub-mm dust emissivity ($`\beta `$ = 2) the inferred dust mass for HD 163296 would become implausibly large. A better explanation may be that the dust properties of AB Aur and HD 163296 are different, a fact already concluded independently from the ISO spectra. To be able to radiate efficiently, dust particles must have a size similar to (or larger than) the wavelength, pointing to the existence of a population of mm- to cm-sized cold dust grains in the circumstellar environment of HD 163296, whereas those in AB Aur must be micron-sized. The ISO spectrum of HD 163296 also contains warm ($`\sim `$800 K) dust, suggesting a significant lack of emission from dust of intermediate temperatures.
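The link between the sub-mm slope and grain size can be made explicit with the standard optically thin modified-blackbody relation; the $`\beta `$ values quoted here are our own inference from the slopes above, not fits made in this paper. For dust with opacity $`\kappa _\nu \propto \nu ^\beta `$ in the Rayleigh-Jeans regime,
$$\lambda F_\lambda =\nu F_\nu \propto \nu \,\kappa _\nu B_\nu (T)\propto \nu ^{3+\beta }\propto \lambda ^{-(3+\beta )},$$
so the $`\lambda ^{-4.3}`$ slope of AB Aur corresponds to $`\beta \approx 1.3`$, as expected for small grains, while the $`\lambda ^{-2.9}`$ slope of HD 163296 corresponds to $`\beta \approx 0`$, the grey emission of grains much larger than the observing wavelength.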
## 5 Discussion and conclusions
We have shown that the main difference between the AB Aur and HD 163296 systems is that HD 163296 contains a population of very large (mm to cm-sized), cold, partially crystalline, dust grains, which is absent in AB Aur. AB Aur contains a population of small dust grains (PAHs), which is absent in HD 163296. In view of the fact that the stellar parameters are nearly identical except for stellar rotation, these differences are remarkable. This must mean that either the evolution of the dust composition in protoplanetary disks happens within the error in the age determination of both systems ($`2_{-1}^{+2}`$ Myr for AB Aur vs. $`4_{-2}^{+4}`$ Myr for HD 163296; van den Ancker et al. 1998), or that the evolution of the dust is dominated by external factors.
We have also shown that in AB Aur and HD 163296 iron oxide is a constituent of the circumstellar dust mixture and can also be responsible for the observed excess emission near 3 $`\mu `$m. Since the near-infrared excess exhibited by our programme stars is by no means unusual for a Herbig Ae/Be star, this means that the same applies for the entire group of Herbig stars. Therefore the results of models attributing this near infrared excess emission to very hot dust from an actively accreting disk (e.g. Hillenbrand et al. 1992) must be regarded with some caution. The case of HD 163296 demonstrates that infrared broad-band photometry can be completely dominated by emission from solid-state features, which must also be taken into account in any future modelling of the energy distributions of Herbig stars.
The detection of PAHs in AB Aur shows that ground-based surveys (e.g. Brooke et al. 1993) have underestimated the fraction of Herbig stars containing PAHs. The case of HD 163296 shows that the fraction of Herbig stars showing PAH emission will not go up to 100%, so models depending on the presence of very small dust grains to explain the observed near-infrared excess in Herbig Ae/Be stars (Natta et al. 1993; Natta & Krügel 1995) will not be successful in all cases. We believe iron oxide to be a more plausible explanation for this near-infrared excess.
It is difficult to attribute the absence of the 3.29 $`\mu `$m feature in AB Aur to an unusual temperature of the PAHs. Since the 6.2 and 7.7 $`\mu `$m C–C stretches are strong, while the bands due to C–H bonds are weak or absent, a more promising possibility seems a very low hydrogen covering factor of the PAHs in AB Aur or the presence of a population of large ($`>`$ 100 C atoms) PAHs (Schutte et al. 1993). If the new 15.9 $`\mu `$m UIR band in AB Aur is also caused by PAHs, this suggests that it is also due to a C–C bond.
To gain more insight in the evolution of dust in protoplanetary disks, it is useful to compare the spectra presented here to those of other Herbig Ae stars. Except for the absence of crystalline material, the AB Aur spectrum and energy distribution are nearly identical to those of its older counterpart HD 100546 (Malfait et al. 1998). In the case of AB Aur (2.5 M<sub>☉</sub>), any possible crystallization of circumstellar dust must therefore occur at a stellar age older than 2 $`\times `$ 10<sup>6</sup> years. The differences between HD 100546 and HD 163296 are larger: the crystalline dust in HD 100546 is much more prominent than that in HD 163296 and the population of large cold dust grains seen in HD 163296 is absent in HD 100546.
As these cases of AB Aur, HD 163296 and HD 100546 demonstrate, the age of the central star and the degree of crystallization do not show a one-to-one correspondence, and the processes of grain growth and crystallization in protoplanetary disks are also not necessarily coupled. Studying a larger sample of Herbig Ae/Be stars might shed more light on what causes this large observed diversity in dust properties in systems which appear very similar in other aspects.
###### Acknowledgements.
This paper is based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA and on observations collected at the European Southern Observatory, La Silla, Chile. The authors would like to thank the SWS IDT for their help with the SWS observations and Norman Trams and Michelle Creech-Eakman for their help with the LWS observations. Koen Malfait and Xander Tielens are gratefully acknowledged for reading of the manuscript of this paper prior to publication.
# Topological approach to Luttinger's theorem and the Fermi surface of a Kondo lattice
## Abstract
A non-perturbative proof of Luttingerโs theorem, based on a topological argument, is given for Fermi liquids in arbitrary dimensions. Application to the Kondo lattice shows that even the completely localized spins do contribute to the Fermi sea volume as electrons, whenever the system can be described as a Fermi liquid.
Landau's Fermi liquid theory is among the most important theories in the quantum many-body problem. At zero temperature, a Fermi liquid has a Fermi surface, just as noninteracting fermions do. One of the most fundamental results on the Fermi liquid is Luttinger's theorem, which states that the volume inside the Fermi surface is invariant under the interaction, if the number of particles is held fixed. Luttinger argued, in his 1960 paper, that the correction to the volume vanishes order by order in the perturbation expansion.
Recently, there has been renewed interest in Luttinger's theorem. Since Luttinger's original proof was based on perturbation theory, Luttinger's theorem could be violated by non-perturbative effects. In fact, several claims of a possible breakdown of Luttinger's theorem have been reported recently. On the other hand, such non-perturbative effects can invalidate the Fermi liquid theory itself. In fact, the Fermi liquid theory is known to be generally invalid in one dimension, where the Tomonaga-Luttinger (TL) liquid is the generic behavior. Although not a Fermi liquid, a TL liquid in one dimension has a well-defined Fermi surface (actually Fermi points in one dimension). Thus the question of the validity of Luttinger's "theorem" still exists in this case, where Luttinger's original proof certainly does not apply. This question was answered recently by a perturbative proof and a more general non-perturbative proof, which can be applied to one-dimensional TL liquids. However, the question in higher (especially two) dimensions remains unanswered. In fact, it is not clear whether a Fermi liquid which violates Luttinger's theorem can exist.
Another interesting problem, which is not answered by Luttinger's perturbative proof, is the Fermi surface of the Kondo lattice. The Kondo lattice contains a periodic array of localized spins which are coupled to conduction electrons. The Kondo lattice is believed to belong to the Fermi liquid (or TL liquid, in one dimension) in some region of the phase diagram. Even if we assume Luttinger's theorem to be valid, there is a problem in how to count the number of particles. It is rather difficult, by conventional methods, to clarify whether a localized spin should be counted as an electron ("large Fermi surface" picture) or not ("small Fermi surface" picture). In one dimension, the non-perturbative proof of Luttinger's theorem was also applied to the Kondo lattice, to show that the localized spins do participate in the Fermi sea. (See also for numerical evidence.) On the other hand, there has been no definite answer for higher dimensions, although there are several results supporting the "large Fermi surface" picture.
The argument in Ref. is a generalization of the Lieb-Schultz-Mattis (LSM) theorem, which was given at about the same time as Luttinger's apparently unrelated theorem. Since the LSM argument itself cannot be applied to higher dimensions, the discussion in Ref. was restricted to one dimension. However, very recently the LSM argument was combined with Laughlin's gauge invariance argument on the Quantum Hall Effect (QHE) and extended to higher dimensions. Inspired by this observation, we will extend the non-perturbative proof of Luttinger's theorem to arbitrary dimensions in the present letter.
We consider an interacting fermion system on a $`D`$-dimensional lattice with periodic boundary conditions. We start from a finite system of size $`L_x\times L_y\times \cdots \times L_D`$, where the length is defined so that the unit cell has the size $`1\times 1\times \cdots \times 1`$. The number of fermions is assumed to be conserved. If the system satisfies a commensurability condition, it can have a finite excitation gap. In this letter, we will rather focus on the gapless case, which is expected for a general incommensurate particle density. For simplicity, let us first start with the case of spinless fermions of a single species. We introduce a fictitious electric charge $`e`$ for each particle, and a coupling to an externally controlled fictitious electromagnetic field. Because of the periodic boundary conditions, the system is topologically equivalent to a torus. Following Refs., we consider an adiabatic increase of a (fictitious) magnetic flux $`\mathrm{\Phi }`$ piercing through the "hole" of the torus so that a uniform electric field is induced, say, in the $`x`$-direction.
While in general the Hamiltonian of the system $`H(\mathrm{\Phi })`$ depends on the flux $`\mathrm{\Phi }`$, reflecting the Aharonov-Bohm (AB) effect, the AB effect is absent when the flux reaches the unit flux quantum $`\mathrm{\Phi }_0=hc/e`$. We consider the adiabatic increase of the flux from $`\mathrm{\Phi }=0`$ to $`\mathrm{\Phi }=\mathrm{\Phi }_0`$. In the following, we will consider how the total momentum of the system is changed during the adiabatic process in two different ways, and compare the results. In the remainder of this letter, we take units in which $`\hbar =1`$, for simplicity.
First, we analyze the momentum change in a system of interacting fermions in general. We remind the reader that the momentum itself is a gauge-dependent quantity in the presence of the gauge field; a meaningful comparison between momenta can only be made under the same gauge choice. In the simplest gauge choice, the AB flux $`\mathrm{\Phi }`$ is represented by the uniform vector potential $`A_x=\mathrm{\Phi }/L_x`$ in the $`x`$-direction. In this gauge, the Hamiltonian always commutes with the translation operator $`T_x`$ in the $`x`$-direction. We further assume that the translation symmetry is not spontaneously broken, as it should not be in a Fermi liquid. Thus the ground state is an eigenstate of the total momentum $`P_x`$: $`P_x|\mathrm{\Psi }_0\rangle =P_x^0|\mathrm{\Psi }_0\rangle `$ with the eigenvalue $`P_x^0`$. The $`x`$-component of the total momentum $`P_x`$ is related to $`T_x`$ as $`T_x=e^{iP_x}`$. After the adiabatic process, the original ground state $`|\mathrm{\Psi }_0\rangle `$ evolves into some state $`|\mathrm{\Psi }_0^{\prime }\rangle `$. While the state $`|\mathrm{\Psi }_0^{\prime }\rangle `$ could be different from $`|\mathrm{\Psi }_0\rangle `$, it belongs to the same eigenvalue $`P_x^0`$ of $`P_x`$, because the Hamiltonian always commutes with $`T_x`$ (and thus $`P_x`$) in the uniform gauge during the adiabatic process. Although this naively means that the momentum is unchanged after the adiabatic process, that is not so. The Hamiltonian $`H(\mathrm{\Phi }_0)`$ with the unit flux quantum in the uniform gauge is different from the original one $`H(0)`$, although the spectrum should be identical. Namely, they correspond to different choices of the gauge for the same physics. In order to get back to the original gauge, we must perform a large gauge transformation
$$U=\mathrm{exp}\left[\frac{2\pi i}{L_x}\sum _{\vec{r}}x\,n_{\vec{r}}\right],$$
(1)
where $`n_{\vec{r}}`$ is the particle number operator at site $`\vec{r}`$, and $`x`$ is the $`x`$-coordinate of $`\vec{r}`$. This transforms the Hamiltonian $`H(\mathrm{\Phi }_0)`$ back to the original one: $`UH(\mathrm{\Phi }_0)U^{-1}=H(0)`$. After this gauge transformation, the adiabatic evolution of the ground state becomes $`U|\mathrm{\Psi }_0^{\prime }\rangle `$.
Now we can examine the total momentum $`P_x`$ of this state, and compare it with the original one $`P_x^0`$. Here we can employ the arguments used in the LSM theorem and its generalizations. By using the identity
$$U^{-1}T_xU=T_x\mathrm{exp}\left[2\pi i\sum _{\vec{r}}\frac{n_{\vec{r}}}{L_x}\right]$$
(2)
we see that $`U|\mathrm{\Psi }_0^{\prime }\rangle `$ is an eigenstate of $`P_x`$ with
$$P_x=P_x^0+2\pi \nu L_yL_z\cdots L_D,$$
(3)
where $`\nu `$ is the particle density (number of particles per unit cell). This result is valid regardless of the interaction strength.
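The step from (2) to (3) can be spelled out explicitly (we record it here for the reader's convenience). Identity (2) itself follows because conjugating $`U`$ by $`T_x`$ shifts the coordinate $`x`$ by one unit in the exponent of (1), the wrap-around at the boundary contributing only a trivial factor $`e^{2\pi i\times (\text{integer})}`$ for integer site occupations. Then, acting with $`T_x`$ on $`U|\mathrm{\Psi }_0^{\prime }\rangle `$ and using the fixed particle number $`N=\nu L_xL_y\cdots L_D`$,
$$T_x\left(U|\mathrm{\Psi }_0^{\prime }\rangle \right)=U\left(U^{-1}T_xU\right)|\mathrm{\Psi }_0^{\prime }\rangle =e^{iP_x^0}e^{2\pi iN/L_x}\,U|\mathrm{\Psi }_0^{\prime }\rangle ,$$
which is precisely the eigenvalue shift (3), defined modulo $`2\pi `$.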
Next, we analyze the momentum change assuming that the system is a Fermi liquid. The Fermi liquid is described in terms of quasiparticles, which are almost non-interacting. More precisely, the low-energy effective Hamiltonian of a Fermi liquid is given by
$$\sum _{\vec{k}}\epsilon (\vec{k})\tilde{n}_{\vec{k}}+\sum _{\vec{k},\vec{k}^{\prime }}f(\vec{k},\vec{k}^{\prime })\tilde{n}_{\vec{k}}\tilde{n}_{\vec{k}^{\prime }},$$
(4)
where $`\tilde{n}_{\vec{k}}`$ is the quasiparticle number operator of momentum $`\vec{k}`$. Namely, there is an interaction energy due to the second term but no scattering between the quasiparticles. Thus the eigenstates of $`\tilde{n}_{\vec{k}}`$ are also eigenstates of the Hamiltonian. In the ground state, the Fermi sea (the region inside the Fermi surface) is completely filled with quasiparticles, while the outside is empty in terms of quasiparticles. Excitations on the ground state are given by quasiparticles outside the Fermi sea and/or quasiholes inside the Fermi sea. In fact, the quasiparticles (or quasiholes) are free from scattering only in the vicinity of the Fermi surface; the very notion of a quasiparticle/hole is useful only in this case. The Fermi liquid theory is valid for low-energy phenomena, in which the relevant excitations consist only of quasiparticles (quasiholes) near the Fermi surface.
Let us define the Fermi sea volume $`V_F^{(L)}`$ in the finite size system $`L_x\times L_y\times \cdots \times L_D`$. The quasiparticles are scattering free, and their momenta are discretized as in the case of free particles. Thus we can define the Fermi sea volume $`V_F^{(L)}`$ by an integer "occupation number" of the quasiparticles $`N_F^{(L)}`$:
$$V_F^{(L)}=\frac{(2\pi )^DN_F^{(L)}}{L_xL_y\cdots L_D}.$$
(5)
Although the quasiparticles are not free from scattering (and thus are not meaningful) away from the Fermi surface, this expression is still valid because the Fermi sea volume is uniquely determined by its surface. $`V_F^{(L)}`$ should approach the true volume of the Fermi sea $`V_F`$ in the thermodynamic limit $`L_j\to \infty `$.
The adiabatic evolution is determined by the low-energy effective Hamiltonian (4). In the Fermi liquid theory, the charge of the quasiparticle is identical to that of the original particle $`e`$. The coupling of the quasiparticles to the uniform vector potential $`A_x`$ is thus given by the substitution of the momentum $`k_x\to k_x+eA_x/c`$ in the Hamiltonian. After the adiabatic insertion of the unit flux quantum, and getting back to the original Hamiltonian by the gauge transformation, each quasiparticle gets a momentum shift: $`k_x`$ is increased by $`2\pi /L_x`$. This produces quasiparticles on one side of the Fermi surface, and quasiholes on the opposite side.
Since the result of the adiabatic process is equivalent to the shift of the whole Fermi sea by $`2\pi /L_x`$, the change of the $`x`$-component of total momentum $`P_x`$ of the system during the adiabatic process is given by
$$\mathrm{\Delta }P_x=\frac{2\pi }{L_x}N_F^{(L)}$$
(6)
We note that the only changes after the adiabatic process involve the quasiparticles and quasiholes near the Fermi surface, so that the Fermi liquid theory is still valid. To violate eq. (6), the system must break some of the properties of the Fermi liquid used in the present argument. For example, if a quasiparticle had a charge $`e^{\prime }`$ which is different from the charge $`e`$ of the original particle, we would obtain a different result.
Now, comparing the two results eqs. (3) and (6) obtained with different arguments, we obtain $`N_F^{(L)}/L_x-\nu L_yL_z\cdots L_D=\text{(integer)}`$, where we have used the fact that each component of momenta is defined modulo $`2\pi `$. Let us choose the system size so that $`L_x,L_y,\mathrm{\ldots }`$ and $`L_D`$ are mutually prime with the others. We also assume $`L_x=ql_x`$ where $`l_x`$ is an integer. (It should be recalled that the system size should be an integral multiple of $`q`$, to allow the filling factor $`\nu =p/q`$.) Then, from eq. (5) we obtain $`N_F^{(L)}-pl_xL_yL_z\cdots L_D=L_x\times \text{(integer)}`$.
Furthermore, we can consider other adiabatic processes, in which the gauge field is induced in one of the other directions $`y,z,\mathrm{\ldots }`$, instead of $`x`$. Similar calculations for these cases lead to $`N_F^{(L)}-pl_xL_yL_z\cdots L_D=L_\alpha \times \text{(integer)}`$, where $`\alpha =y,z,\mathrm{\ldots },D`$. Because we have chosen the lengths $`L_j`$'s mutually prime, we conclude that $`N_F^{(L)}-pl_xL_yL_z\cdots L_D=nL_xL_yL_z\cdots L_D`$, where $`n`$ is an integer. Writing this in terms of the Fermi sea volume, we arrive at
$$\frac{V_F}{(2\pi )^D}-\nu =n,$$
(7)
where we have replaced the Fermi sea volume $`V_F^{(L)}`$ for the finite size system by its thermodynamic limit $`V_F`$, because this relation is exact already for the finite system. The thermodynamic limit $`V_F`$ should be independent of our special (mutually prime) choice of $`L_j`$'s, if $`V_F`$ is well-defined.
The relation (7) is nothing but the statement of Luttinger's theorem. The integer $`n`$ corresponds to the number of completely filled bands. It is valid also when the Fermi sea consists of several disjoint regions, if $`V_F`$ is understood as the sum of the volumes of all the regions. Our proof is much simpler than the original one. Moreover, in contrast to Ref., our argument is non-perturbative and relies only on some of the basic properties of a Fermi liquid.
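A concrete illustration of the number-theoretic step may be helpful (the numbers are our own example). Take $`D=2`$, $`\nu =p/q=1/3`$, $`L_x=9`$ and $`L_y=8`$, so that $`l_x=3`$ and $`pl_xL_y=24`$. The two adiabatic processes give
$$N_F^{(L)}-24=9\times (\text{integer})=8\times (\text{integer}),$$
and since $`\mathrm{gcd}(8,9)=1`$ this forces $`N_F^{(L)}-24=72n`$, i.e. $`V_F/(2\pi )^2-\frac{1}{3}=n`$, in accordance with (7).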
It is straightforward to extend our argument to spinful electrons. When the numbers of up-spin electrons and down-spin electrons are conserved separately, we consider the fictitious electromagnetic field coupled only to the up-spin (or down-spin) electrons. Assuming a spinful Fermi liquid, the volume of the Fermi sea $`V_F^\sigma `$ for spin $`\sigma `$ is given by $`V_F^\sigma =(2\pi )^D\nu _\sigma `$, where $`\nu _\sigma `$ is the number of particles with spin $`\sigma `$ per unit cell. For the spin-symmetric case $`\nu _{\uparrow }=\nu _{\downarrow }`$, it reads $`V_F=V_F^{\uparrow }=V_F^{\downarrow }=(2\pi )^D\nu /2`$ where $`\nu `$ is the total particle density $`\nu _{\uparrow }+\nu _{\downarrow }`$.
As a nontrivial application, let us consider the Kondo lattice. Luttinger's original perturbative proof does not apply to this case, and the question of the volume of the Fermi sea has remained open. For the sake of clarity, we consider the Kondo lattice model given by the Hamiltonian
$$H=\sum _{j,k}t_{jk}c_{j\sigma }^{\dagger }c_{k\sigma }+\text{h.c.}+\sum _jU_jc_{j\uparrow }^{\dagger }c_{j\uparrow }c_{j\downarrow }^{\dagger }c_{j\downarrow }+\sum _lJ_l\vec{s}_l\cdot \vec{S}_l,$$
(8)
where $`c_{j\sigma }^{}`$ and $`c_{j\sigma }`$ are standard Fermion creation/annihilation operators at site $`j`$ with spin $`\sigma `$, $`\stackrel{}{s}_l=c_{l\alpha }^{}\stackrel{}{\sigma }^{\alpha \beta }c_{l\beta }/2`$ is the spin operator of the conduction electron, and $`\stackrel{}{S}_l`$ is the localized spin at site $`l`$. As in the previous case, we couple the fictitious electromagnetic field only to the up-spin electrons. After the adiabatic insertion of the AB flux of unit flux quantum, we make the gauge transformation as in the previous cases. However, the naive one $`U_{}^e=\mathrm{exp}[\frac{2\pi i}{L_x}_\stackrel{}{r}xn_{\stackrel{}{r}\sigma }]`$, does not bring the Hamiltonian back to the original one, because it changes the Kondo coupling. In order to recover the original Hamiltonian, we must also twist the localized spins. The transformation
$$U_{\uparrow }=\mathrm{exp}\left[\frac{2\pi i}{L_x}\sum _{\vec{r}}x(n_{\vec{r}\uparrow }+S_{\vec{r}}^z)\right]$$
(9)
does the required job. We obtain the total momentum after the adiabatic process as
$$P_x=P_x^0+2\pi [\nu _{\uparrow }+N_s(S+m)]L_yL_z\cdots L_D,$$
(10)
where $`N_s`$ is the number of localized spins per unit cell and $`m`$ is the magnetization per single localized spin. The special contribution proportional to $`S`$ comes from the boundary term $`\mathrm{exp}(2\pi iN_sS_1^z)`$ appearing in $`U_{\uparrow }^{-1}T_xU_{\uparrow }`$, similarly to the one dimensional case.
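The mechanism behind this boundary term is worth one extra line (a standard observation, spelled out here): since $`S_{\vec{r}}^z+S`$ has integer eigenvalues,
$$e^{2\pi iS_{\vec{r}}^z}=e^{2\pi i(S_{\vec{r}}^z+S)}e^{-2\pi iS}=e^{-2\pi iS},$$
so the wrap-around column in $`U_{\uparrow }^{-1}T_xU_{\uparrow }`$ contributes a pure c-number phase $`e^{-2\pi iSN_sL_yL_z\cdots L_D}`$ (the sign depends on conventions). Unlike the fermionic wrap-around, which is unity for integer charges, this phase is nontrivial for half-integer $`S`$ and produces the term $`2\pi N_sSL_yL_z\cdots L_D`$ in (10), everything being defined modulo $`2\pi `$.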
Thus, provided that the system belongs to a spinful Fermi liquid, the volume of the Fermi sea is given by $`V_F^\sigma =(2\pi )^D[\nu _\sigma +N_s(S\pm m)]`$, where $`\pm `$ takes $`+`$ for $`\sigma =\uparrow `$ and $`-`$ for $`\sigma =\downarrow `$. For the spin-symmetric case $`\nu _{\uparrow }=\nu _{\downarrow }`$ and $`m=0`$, we obtain
$$V_F=V_F^{\uparrow }=V_F^{\downarrow }=\frac{(2\pi )^D}{2}[\nu +2N_sS],$$
(11)
for the total particle density $`\nu =\nu _{\uparrow }+\nu _{\downarrow }=2\nu _{\uparrow }`$. This is exactly what we obtain if we apply Luttinger's theorem to the Anderson-type model in which the localized spins are represented by electrons. It means that the localized spin $`S`$ does contribute to the Fermi sea volume as $`2S`$ electrons, even though it is completely immobile. This is the picture conventionally called the "large Fermi surface".
It should be noted that we did not answer the non-trivial question whether (or when) the Kondo lattice belongs to the Fermi liquid. We have only proved that, if the Kondo lattice is a Fermi liquid (as it is believed to be true in some region of the phase diagram), the localized spins participate in the Fermi sea.
Finally, let us comment on claims of the violation of Luttinger's theorem. There are several possibilities regarding the apparent contradiction with our non-perturbative proof. Of course, it should be checked whether our argument applies to the model under consideration. However, our argument does apply to a very wide range of lattice models, including the Hubbard and $`t`$-$`J`$ models, for which violations of Luttinger's theorem have also been proposed. A possibility is that the system is not a Fermi liquid in these cases. In other words, a violation of Luttinger's theorem requires the system to be a non-Fermi liquid. We note, however, that merely not being a Fermi liquid is insufficient, as the TL liquid in one dimension does satisfy Luttinger's theorem. Our approach could be extended to a non-Fermi liquid which has an appropriately defined Fermi surface, if such a liquid does exist. Our argument reveals a rigid relationship between the structure of low-energy excitations and the Fermi sea volume.
Another possibility is that the claimed violation of Luttinger's theorem is actually incorrect. In particular, numerical results are only available for restricted system size and/or temperature, and can miss the possibly small singularity at the true Fermi surface. On the other hand, even if they are incorrect in identifying the true Fermi surface, they might still be of physical relevance because actual experiments are also done at a finite energy scale; the experimentally measured "Fermi surface" could be different from the true Fermi surface defined in the low-energy limit, to which our argument applies. In any case, our definite result on the Fermi surface of the Fermi liquid in the low-energy limit would be useful as a guideline. Claims of the violation of Luttinger's theorem should be examined in the light of the present result.
During the forty years after Luttinger's paper, several examples of "quantization" of a physical quantity have been found in many-body physics. Namely, despite the complexity of the interacting many-body states, some physical quantity takes a special value which is stable against various perturbations such as the interaction strength. Presumably the most natural understanding of such a quantization is given by a topological argument. Indeed, typical examples of quantization, the QHE and the quantized magnetization plateaus, have been related to topological mechanisms.
Luttinger's theorem perhaps does not look like a quantization, because the volume of the Fermi sea takes continuous values depending on the particle density. However, the insensitivity to the interaction resembles other quantization phenomena, and may well be regarded as a certain kind of quantization, especially when written as in eq. (7). In fact, we have revealed a close theoretical relationship among Luttinger's theorem, the QHE and magnetization plateaus. In addition, our argument can also be related to the chiral anomaly in quantum field theory. Luttinger's theorem might actually be the first example of topological quantization discovered in the quantum many-body problem, although the topological understanding has been missing for a long time.
I would like to thank Ian Affleck, Hal Tasaki and Masanori Yamanaka for stimulating discussions which were essential for the present work. I am also grateful to an anonymous referee for pointing out a logical gap in the original manuscript. This work is supported by a Grant-in-Aid from the Ministry of Education, Science, Sports and Culture of Japan.
# Phase transition in the ground state of a particle in a double-well potential and a magnetic field
## I Introduction
It is a well-known fact that in the absence of a magnetic field the ground state of bosons is non-degenerate, and therefore has the symmetry of the hamiltonian. Mathematically this results from the fact that the kernel of the operator $`e^{-tH}`$ is positive. This last property no longer holds in the presence of a magnetic field, so that degeneracy of the ground state may be expected, as well as symmetry breaking in it. One-body systems may already show this phenomenon. Indeed Lavine and O'Carrol proved the existence of spherically symmetric potentials for which, in the presence of a magnetic field, the ground state has a non-vanishing value for the $`z`$ component of angular momentum, so that the rotational symmetry is broken.
Further examples were provided by Avron, Herbst and Simon. On the opposite side, these authors were able to prove that the symmetry is not broken for the hydrogen atom, as well as in the case where the potential is monotonically increasing with the distance. These authors, however, being mainly concerned with problems of atomic physics, did not discuss the degeneracy and its physical significance.
On the other hand, two of us, analysing the problem of a particle confined to a disc or an annulus in the presence of a magnetic field, found that the ground state was degenerate in the case of an annulus and for a disc with Neumann boundary conditions (with Dirichlet boundary conditions in the disc case the degeneracy disappears). The degeneracy appears each time the magnetic field reaches a critical value, and the magnetisation jumps at these critical values, which form a discrete set.
Motivated by these results we consider in this article a class of systems for which similar phenomena occur. Namely we analyse the ground state of a particle in three dimensions moving in a double-well type potential, cylindrically symmetric, and subjected to a constant magnetic field in the $`z`$ direction.
We find that the ground state has an azimuthal momentum $`\hbar m`$ taking increasing values $`m=0,1,2,\mathrm{\ldots }`$ when we increase the magnetic field $`B`$. At critical values of $`B`$ $`(B_m)`$ the ground state is twice degenerate between the $`m`$ and the $`m+1`$ state. Moreover the magnetisation jumps at these critical values and shows in general an oscillatory behaviour reminiscent of the well-known de Haas–van Alphen oscillations in solid state physics.
We show that this phenomenon can be understood by an analysis of the minima of the potential energy, fixing however the angular momentum to its quantised value $`\hbar m`$. In the two-dimensional case we can use the WKB method and obtain bounds on the energy in order to estimate the critical fields. But in general, we had to compute the energies numerically and compare them to estimates based on trial wave functions. The agreement is quite good in general.
Concerning possible experimental verifications of these effects, which basically require a potential with a minimum sufficiently far from the origin, we can think of two cases. The first one would be certain molecules where proton dynamics could be described by such an effective potential. The second one, more thrilling, would be the case of charged bosons undergoing Bose-Einstein condensation. Our results suggest that in this case the bosons would undergo a phase transition in their condensate when an increasing magnetic field is applied. This phase transition would manifest itself by the appearance of oscillations in the magnetisation, which would jump at certain critical values of the magnetic field.
## II The Model
We will consider the case of a particle of mass $`\mu `$ and charge $`q`$, in a potential $`V`$ with cylindrical symmetry, subjected to a magnetic field $`\tilde{B}`$ in the $`z`$ direction. We do not consider the effect of the spin of the particle. We choose units of energy $`V_0`$ and length $`r_0`$, both characteristic of the potential. The dimensionless hamiltonian reads, with $`r=\sqrt{x^2+y^2}`$,
$$\left(-i\epsilon \vec{\nabla }-\vec{A}\right)^2+V(r,z)$$
(1)
where
$$\epsilon =\frac{\hbar }{r_0\sqrt{2\mu V_0}}$$
(2)
measures the importance of the quantum effects and the vector potential in the symmetric gauge is given by
$$\vec{A}=\left(-\frac{By}{2},\frac{Bx}{2},0\right)$$
(3)
$`B={\displaystyle \frac{q}{c}}{\displaystyle \frac{r_0}{\sqrt{2\mu V_0}}}\tilde{B}`$ being the dimensionless magnetic field.
Thanks to the cylindrical symmetry, we can replace the $`z`$ component of the angular momentum $`L_z`$ by its eigenvalue $`\epsilon m`$, so that the reduced hamiltonian reads
$$H_m=-\epsilon ^2\left[\frac{1}{r}\frac{\partial }{\partial r}r\frac{\partial }{\partial r}+\frac{\partial ^2}{\partial z^2}\right]+\left(\frac{\epsilon m}{r}-\frac{rB}{2}\right)^2+V(r,z)$$
(4)
The ground state energy of this hamiltonian and the corresponding eigenfunction will be denoted $`E_m`$ and $`\psi _m`$.
It remains to specify $`V`$. We will basically consider a double-well potential of the form:
$$V(r,z)=r^4+z^4-2(r^2+z^2)+vr^2z^2$$
(5)
with $`v`$ satisfying $`v\ge -2`$, so that $`V`$ is bounded from below. If $`v`$ is equal to $`0`$ we can decouple the motion in the $`z`$ direction from the one in the plane perpendicular to the magnetic field. This is what we will call the two-dimensional case. If $`v=2`$, we have in three dimensions a potential with spherical symmetry.
We have chosen this double-well form because if we had taken the simple well $`V=r^4+z^4+2(r^2+z^2)+vr^2z^2`$ with $`v\ge 0`$, it follows from the results quoted in the introduction that the ground state is not degenerate and corresponds to $`m=0`$.
A physical quantity of interest is the magnetisation in the ground state
$$M=-\frac{\partial E}{\partial B}$$
(6)
in units of $`\frac{q}{c}r_0\sqrt{\frac{V_0}{2\mu }}`$.
We will denote by $`e_m`$ the ground state energy of the hamiltonian
$$h_m=-\epsilon ^2\left[\frac{1}{r}\frac{\partial }{\partial r}r\frac{\partial }{\partial r}+\frac{\partial ^2}{\partial z^2}\right]+V_m(r,z)$$
(7)
with
$$V_m=\frac{(\epsilon m)^2}{r^2}+\frac{B^2}{4}r^2+V$$
(8)
and by
$$E_m=e_m-\epsilon mB$$
(9)
the ground state energy of $`H_m`$ given in (4), so that the real ground state energy is given by
$$E=\inf _{m\ge 0}E_m$$
(10)
since obviously negative $`m`$ give a larger energy.
Finally we will use the following useful scaling property of the energy $`e_m`$
$$e_m(\epsilon ,\lambda ,v)=s^2e_m\left(\frac{\epsilon }{s^{3/2}},\frac{\lambda }{s},v\right),\qquad s>0$$
(11)
where
$$\lambda =\frac{B^2}{4}-2$$
(12)
is the parameter multiplying $`r^2`$ in the potential. Equation (11) follows simply from the scaling transformation $`r^2\to sr^2`$ and $`z^2\to sz^2`$. This relation shows that we have effectively a two-parameter dependence of the energy $`e_m`$ in general and a one-parameter dependence in the two-dimensional case.
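In the two-dimensional case ($`v=0`$, $`z`$ suppressed) the check is immediate; we write it out for completeness. Substituting $`r\to s^{1/2}r`$ in $`h_m`$,
$$-\epsilon ^2\frac{1}{r}\frac{\partial }{\partial r}r\frac{\partial }{\partial r}+\frac{(\epsilon m)^2}{r^2}+r^4+\lambda r^2\;\longrightarrow \;s^2\left[-\left(\frac{\epsilon }{s^{3/2}}\right)^2\frac{1}{r}\frac{\partial }{\partial r}r\frac{\partial }{\partial r}+\frac{(\epsilon m/s^{3/2})^2}{r^2}+r^4+\frac{\lambda }{s}r^2\right],$$
so the spectrum obeys (11) with the same $`m`$ and the rescaled $`\epsilon /s^{3/2}`$ and $`\lambda /s`$.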
The choice $`s=|\lambda |`$ or $`s=m^{\frac{1}{3}}`$ $`(m\ge 1)`$ shows that large magnetic fields or large angular momenta correspond to the semi-classical limit. In fact we shall see that in the classical limit $`\epsilon \to 0`$ ground states with $`m\ne 0`$ are favoured, inducing ground state degeneracies at some values of the magnetic field. It thus appears that the tendency to have a ground state with the same symmetry as the hamiltonian, and therefore non-degenerate, is an effect due to quantum mechanics.
## III The classical limit
One can gain some qualitative understanding of the problem by looking at the classical limit of it. This means that we neglect the quantum kinetic energy and define the ground state energy as
$$E=\inf _{m\ge 0}\,\inf _{(r,z)}\left[V_m(r,z)-\epsilon mB\right]$$
(13)
where
$$V_m=\frac{(\epsilon m)^2}{r^2}+r^4+z^4+\frac{B^2}{4}r^2-2r^2-2z^2+vr^2z^2$$
(14)
and consider that $`m`$ is an integer.
Two cases need to be considered separately: $`|v|<2`$ and $`v\ge 2`$. If $`|v|<2`$ we denote by $`x_m`$ and $`t_m`$ respectively the values of $`r^2`$ and $`z^2`$ which minimise the potential $`V_m`$, and we find
$$t_m=1-\frac{vx_m}{2}$$
(15)
$$\left(2-\frac{v^2}{2}\right)x_m+\left(v-2+\frac{B^2}{4}\right)=\frac{(\epsilon m)^2}{x_m^2}$$
(16)
On the other hand, considering for a while $`m`$ as a continuous variable, the absolute minimum of $`V_m-\epsilon mB`$ is given by
$$\epsilon \widehat{m}=\frac{B}{2}x_{\widehat{m}}$$
(17)
From (15) this gives an absolute minimum of $`V_m-\epsilon mB`$ given by
$$x_{\widehat{m}}=t_{\widehat{m}}=\frac{1}{1+\frac{v}{2}}$$
(18)
and therefore
$$\epsilon \widehat{m}=\frac{B}{2}\frac{1}{1+\frac{v}{2}}$$
(19)
In considering the variable $`m`$ as a continuous one we have treated the problem purely classically, and the corresponding "ground state" energy is
$$E^{cl}=-\frac{2}{1+\frac{v}{2}}$$
(20)
We know that $`m`$ is a discrete variable, but for consistency we must consider $`\epsilon `$ as a small number. Then if $`m`$ designates the integer part of $`\widehat{m}`$, we have $`\widehat{m}=m+\theta `$, and if $`0\le \theta <\frac{1}{2}`$ the ground state has the quantum number $`m`$, whereas if $`\frac{1}{2}<\theta <1`$ it has $`m+1`$.
From this analysis we conclude that if $`B_{m-1}<B<B_m`$ where
$$B_m=\epsilon \left(1+\frac{v}{2}\right)(2m+1)$$
(21)
the ground state has the quantum number $`m`$. Hence we see that by increasing the magnetic field, we find in increasing order the values of $`m=0,1,2,\mathrm{\ldots }`$, and an infinite set of critical values $`B_m`$ of the magnetic field exist for which the ground state is twice degenerate, being both $`m`$ and $`m+1`$.
This picture is entirely confirmed by the numerical results in the quantum case. It is also quite interesting to look at the magnetisation. In the state whose quantum number is $`m`$, we have
$$M_m=\epsilon m-\frac{B}{2}x_m$$
(22)
so that using (15)
$$M_m=\left[\epsilon m-\frac{B}{2}\frac{1}{1+\frac{v}{2}}\right]\left[\frac{1-\frac{v}{2}}{1-\frac{v}{2}+\frac{B^2}{4}}\right]$$
(23)
when $`B_{m-1}<B<B_m`$.
This shows that the magnetisation has an "oscillatory" type of behaviour reminiscent of the familiar de Haas–van Alphen one in solid state physics, and that the magnetisation jumps at the critical values of the magnetic field, the jump being given by
$$\mathrm{\Delta }M_m=\epsilon \frac{1-\frac{v}{2}}{1-\frac{v}{2}+\frac{B^2}{4}}$$
(24)
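Before turning to the quantum results, we note that this classical level scheme is easy to tabulate with a few lines of code (our own sketch, written in Python; the values of $`\epsilon `$ and $`v`$ below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Classical level scheme of this section: critical fields from Eq. (21)
# and the linearised magnetisation of Eq. (23).
eps, v = 0.03, 0.0          # hypothetical illustration values

def m_ground(B):
    """Integer m minimising V_m - eps*m*B; see Eqs. (17)-(19)."""
    m_hat = (B / (2.0 * eps)) / (1.0 + v / 2.0)
    return int(np.floor(m_hat + 0.5))       # nearest integer to m_hat

def magnetisation(B):
    """Classical magnetisation M_m of Eq. (23) in the ground state."""
    m = m_ground(B)
    x0 = 1.0 / (1.0 + v / 2.0)
    slope = (1.0 - v / 2.0) / (1.0 - v / 2.0 + B**2 / 4.0)
    return (eps * m - 0.5 * B * x0) * slope

B_crit = [eps * (1.0 + v / 2.0) * (2 * m + 1) for m in range(5)]  # Eq. (21)
print("critical fields B_m:", np.round(B_crit, 4))
for B in np.linspace(0.01, 0.4, 8):
    print(f"B={B:.3f}  m={m_ground(B)}  M={magnetisation(B):+.5f}")
```

Scanning $`B`$ in this way reproduces the sawtooth pattern of the magnetisation and the jumps (24) at the critical fields $`B_m`$.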
Once again this general behaviour is reproduced by the numerical results in the quantum case, and the spacing between the values of the critical field is rather well represented by formula (21) when $`m\ge 1`$. In the two-dimensional case, i.e. $`v=0`$ and neglecting the trivial $`z`$ dependence, we can proceed further and look at a genuinely semi-classical approximation, namely WKB, for the ground state energy
$$\int _{r_{-}}^{r_+}\mathrm{d}r\sqrt{e_m-V_m(r)}=\frac{\epsilon \pi }{2}$$
(25)
where
$$V_m(r)=\frac{(\epsilon m)^2}{r^2}+r^4+\left(-2+\frac{B^2}{4}\right)r^2$$
(26)
and the ground state energy is $`E_m=e_m-\epsilon mB`$.
In fact this WKB approximation will give the best analytical results, apart from the variational estimates for the energy, which unfortunately only give rigorous upper bounds on the energy.
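For concreteness, here is a minimal numerical implementation of the quantisation condition (25) (our own sketch; $`\epsilon `$ and $`B`$ are hypothetical test values, not entries of the tables):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# WKB quantisation (25)-(26) for the two-dimensional ground state energies.
eps, B = 0.5, 1.0                     # hypothetical test values
lam = B**2 / 4.0 - 2.0

def V(r, m):
    return (eps * m)**2 / r**2 + r**4 + lam * r**2

def wkb_residual(e, m, rmax=3.0, n=4000):
    rs = np.linspace(1e-4, rmax, n)
    inside = np.where(V(rs, m) < e)[0]
    i, j = inside[0], inside[-1]
    # classical turning points r_- and r_+ of e = V_m(r)
    rm = brentq(lambda r: V(r, m) - e, rs[i - 1], rs[i]) if i > 0 else rs[0]
    rp = brentq(lambda r: V(r, m) - e, rs[j], rs[j + 1])
    I, _ = quad(lambda r: np.sqrt(max(e - V(r, m), 0.0)), rm, rp)
    return I - eps * np.pi / 2.0      # Eq. (25)

for m in range(3):
    rs = np.linspace(1e-4, 3.0, 4000)
    e_bottom = V(rs, m).min() + 1e-6  # just above the well bottom
    e_m = brentq(lambda e: wkb_residual(e, m), e_bottom, 5.0)
    print(f"m={m}:  e_m = {e_m:.4f},  E_m = {e_m - eps*m*B:.4f}")
```

The root of the residual in the energy is the WKB estimate of $`e_m`$, from which $`E_m=e_m-\epsilon mB`$ follows.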
When the potential has spherical symmetry $`(v=2)`$, quantum effects are much more important and the classical analysis gives only that the ground state has $`m=0`$ if $`B<2\epsilon `$, is degenerate between $`m=0`$ and $`m=1`$ when $`2\epsilon \le B<4\epsilon `$, has possibly $`m=0,1,2`$ for $`4\epsilon \le B<6\epsilon `$, and so on. This only suggests that we have again the increasing sequence of $`m`$ when we increase the magnetic field, and that critical values appear near $`2\epsilon m`$.
When $`v>2`$, we find that $`m=0`$ is the ground state except when $`B=2\epsilon m`$, where it is degenerate between $`m`$ and $`0`$. We may note however that the classical ground state corresponds to the points $`(r=0,z=\pm 1)`$ in configuration space for $`m=0`$, whereas it corresponds to two circles $`(r=\frac{\epsilon }{2B},z=\pm \sqrt{1-\frac{\epsilon }{2B}})`$ for $`m=1`$ and $`2\epsilon <B<4\epsilon `$, so that the wave function can be more spread out in the $`m=1`$ state than in the $`m=0`$ state, and the kinetic energy of the $`m=1`$ state is lower, favouring the $`m=1`$ state. A similar argument can be given for the higher values of $`m`$.
Finally, it is worth noticing that if we had taken a simple well type potential
$$V(r,z)=r^4+z^4+2(r^2+z^2)+vr^2z^2$$
(27)
the classical analysis gives a ground state with $`m=0`$, at least when $`v\ge -1`$. This is a correct result when $`v\ge 0`$ at the quantum level.
## IV Numerical results and variational bounds
It is quite useful to undertake a numerical analysis of this problem. We have used a finite element method, choosing for the basis a product of two triangle functions. We discuss separately the two-dimensional problem and the three-dimensional ones.
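As an independent cross-check of such computations, the two-dimensional problem ($`v=0`$, $`z`$ suppressed) can also be diagonalised with a simple finite-difference scheme; the sketch below is our own minimal version, not the finite-element code used for the figures, and $`\epsilon `$ and $`B`$ are hypothetical test values. With $`u(r)=\sqrt{r}\,\psi (r)`$ the radial problem (7)-(8) becomes a one-dimensional Schrödinger equation with effective potential $`\epsilon ^2(m^2-\frac{1}{4})/r^2+r^4+\lambda r^2`$.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

eps, B = 0.2, 0.5                 # hypothetical test values
lam = B**2 / 4.0 - 2.0
n, rmax = 3000, 4.0
r = np.linspace(rmax / n, rmax, n)
dr = r[1] - r[0]

def E_ground(m):
    # effective 1D potential after the substitution u = sqrt(r)*psi
    veff = eps**2 * (m**2 - 0.25) / r**2 + r**4 + lam * r**2
    H = diags([-eps**2 / dr**2 * np.ones(n - 1),
               2.0 * eps**2 / dr**2 + veff,
               -eps**2 / dr**2 * np.ones(n - 1)], [-1, 0, 1])
    e_m = eigsh(H.tocsc(), k=1, which='SA')[0][0]
    return e_m - eps * m * B      # Eq. (9)

# scanning B and comparing E_ground(m) for successive m locates the
# level crossings B_m where the ground state is twice degenerate
for m in range(4):
    print(f"m={m}:  E_m = {E_ground(m):.5f}")
```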
### A Two dimensions
We first give pictures of the ground state energy for two typical values of $`\epsilon `$, a small one ($`\epsilon =0.03`$) and a large one ($`\epsilon =0.5`$), as a function of the magnetic field $`B`$ (figure 1). The cusps at the critical values of $`B`$ indicate a jump of the corresponding magnetisation.
This last quantity first shows a diamagnetic behaviour at small field, but then a paramagnetic–diamagnetic oscillation, at least when $`\epsilon \le 0.3`$. Beyond this value the magnetisation is entirely negative (figure 1, bottom right). We can also note that when $`B`$ becomes large the magnetisation tends to $`-\epsilon `$, its value in the Landau regime.
The results clearly indicate that we go progressively through the states with $`m=0,1,2,\mathrm{\ldots }`$ by increasing the magnetic field, and that the magnetisation jumps at the critical values. The effect is more pronounced in the classical regime. All these results are in qualitative agreement with the classical picture presented before, and the agreement is even quantitative when $`\epsilon =0.03`$, for example.
The jumps of the magnetisation given by formula (24) are reproduced (figure 2) with a precision of less than 1 percent when $`\epsilon =0.03`$, and the spacing between the critical values of the magnetic field
$$\frac{B_{m+1}-B_m}{\epsilon }=2+\mathrm{\Delta }_m$$
(28)
is given by $`\mathrm{\Delta }_m\simeq 0.04`$ if $`m\ge 1`$ and $`\epsilon =0.1`$. $`\mathrm{\Delta }_m`$ decreases when $`m`$ increases, in agreement with the scaling relation $`B_m=(2m+1)\epsilon `$, so that the simple classical formula reproduces the results rather well. By contrast, the jump between the $`m=0`$ and the $`m=1`$ state is largely of quantum mechanical origin, as well as the precise values of the critical fields.
Figure 3 describes the various regions in the $`\epsilon `$-$`B`$ plane. We can note that even when $`\epsilon >0.25`$ a linear relation exists between $`B_m`$ and $`\epsilon `$, as in the classical regime, which is a bit surprising.
It is also interesting to look at the eigenfunctions when the magnetic field reaches its critical value. In figure 4 we give pictures of them at the critical value between the $`m=0`$ and $`m=1`$ states when $`\epsilon =0.2`$. We see that their maxima are located very near the minimum of the potential.
Finally we compare the results with two theoretical estimates: first of all the WKB one, and a variational one. This last estimate is based on the following two-parameter trial wave function
$$\psi _m=r^me^{-\alpha r^2-\beta (r-1)^2}$$
(29)
The variational upper bound on the energy can be expressed in terms of Weber cylindrical functions, but we directly computed the corresponding integrals.
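The same minimisation is easy to reproduce numerically (a sketch in Python of our own; the authors evaluated these integrals via Weber functions, and $`\epsilon `$, $`B`$ and $`m`$ below are hypothetical test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

eps, B, m = 0.5, 1.0, 1           # hypothetical test values
lam = B**2 / 4.0 - 2.0

def e_var(params):
    # Rayleigh quotient of h_m with the trial function (29)
    a, b = params
    psi  = lambda r: r**m * np.exp(-a * r**2 - b * (r - 1.0)**2)
    dpsi = lambda r: (m / r - 2.0*a*r - 2.0*b*(r - 1.0)) * psi(r)
    V    = lambda r: (eps * m)**2 / r**2 + r**4 + lam * r**2
    num, _  = quad(lambda r: r * (eps**2 * dpsi(r)**2 + V(r) * psi(r)**2),
                   1e-9, 8.0)
    norm, _ = quad(lambda r: r * psi(r)**2, 1e-9, 8.0)
    return num / norm             # upper bound on e_m

res = minimize(e_var, x0=[0.5, 5.0], method='Nelder-Mead')
print("variational bound on e_m:", res.fun)
print("upper bound on E_m      :", res.fun - eps * m * B)   # Eq. (9)
```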
Tables I, II, III and IV give a comparison of the results for two values of the parameter $`\epsilon `$, and for the critical fields. Excellent agreement is found for the variational method (maximal error of the order of 2% when $`\epsilon =0.5`$). WKB works quite well when $`\epsilon `$ is small ($`\epsilon =0.03`$) as expected, but even better on the energies when $`\epsilon =0.5`$, where the error does not exceed 1%.
### B Three dimensions
For the spherically symmetric potential $`(v=2)`$, figure 5 gives the ground state energies as well as the corresponding magnetisation for two different values of $`\epsilon `$: 0.03 and 0.5.
Once again we see that the value of $`m`$ in the ground state increases with $`B`$, and that the magnetisation jumps at critical values $`B_m`$ of the magnetic field, where the ground state is twice degenerate. These results are in qualitative agreement with the classical analysis. Figure 6 summarizes the results in the $`\epsilon `$-$`B`$ plane. Notice that in this case, already when $`\epsilon \ge 0.1`$ the relation between $`B_m`$ and $`\epsilon `$ is no longer linear. On the other hand the spacing between the critical values of $`B`$ predicted by the crude classical estimate
$$\mathrm{\Delta }B_m=B_{m+1}-B_m\simeq 2\epsilon $$
(30)
is satisfied with a precision of $`25\%`$ at $`m=1`$ and becomes more accurate when $`m`$ increases, at least in the range $`\epsilon \le 0.1`$.
Our best variational estimate for the energy was made with a three-parameter trial wave function
$$\psi _{\alpha ,\beta ,\zeta }=r^me^{-\alpha r^2-\beta (\sqrt{r^2+z^2}-\zeta )^2}$$
(31)
Table V gives the values of the critical field $`B_m`$ and Table VI the corresponding ground state energies for $`\epsilon =0.05`$, estimated by the variational method and computed with the simulation.
Obviously there is very good agreement, since the largest error for $`B_m`$ is less than $`2\%`$ and for $`E_m`$ less than $`0.7\%`$. Table VI gives the same but for $`\epsilon =0.5`$. Again we see good agreement (error less than 5%). When $`\epsilon `$ increases we found that $`\alpha `$ increases while $`\beta `$ and $`\zeta `$ decrease, and our trial wave function becomes less accurate, because the double-well nature of the potential is less important compared to the kinetic energy.
Figure 7 describes the situation in the $`v`$-$`B`$ plane for $`m=0,1,\mathrm{\ldots },10`$ and different values of $`\epsilon `$. We notice that when $`v`$ is less than $`2`$ and $`\epsilon `$ is not too large ($`\epsilon \le 0.2`$), the situation is similar to the one already discussed, but that there is an abrupt change at $`v=2`$ when $`\epsilon `$ is small, in agreement with the classical analysis. However when $`\epsilon >0.2`$ the ground state $`m=0`$ is definitely favoured as $`v`$ increases.
Figure 8 shows the energies for the first five $`m`$ values computed with three different $`v`$: two-dimensional ($`v=0`$), spherical potential ($`v=2`$), and $`v=3`$. We can see a new crossing between the $`m=0`$ and the other $`m`$ levels when $`v`$ becomes larger than $`2`$, although this does not concern the ground state.
## V Bounds on the critical field in the two dimensional case
One might desire to get rigorous upper and lower bounds on the critical fields. One possible approach would consist in getting upper and lower bounds on the ground state energies $`E_m`$. Whereas we have seen that one can obtain very good variational upper bounds, it is rather difficult to get good lower ones. In order to test these results, we analysed only the two-dimensional case.
First we want to obtain conditions under which $`m=0`$ is the ground state. Using the inequality
$$\frac{l^2}{x}+x^2x^2\frac{l^2}{a^2}x+\frac{2l^2}{a}$$
(32)
valid for any $`x`$ and $`a`$ positive, we deduce that
$$e_0[\lambda ]\frac{2(ฯตm)^2}{a}+e_0\left[\lambda \left(\frac{ฯตm}{a}\right)^2\right]$$
(33)
On the other hand
$$e_0[\lambda ]e_0\left[\lambda \left(\frac{ฯตm}{a}\right)^2\right]=_{\lambda \left(\frac{ฯตm}{a}\right)^2}^\lambda d\lambda ^{}r^2_0(\lambda ^{})$$
(34)
$$\left(\frac{ฯตm}{a}\right)^2r^2_0\left[\lambda \left(\frac{ฯตm}{a}\right)^2\right]$$
(35)
since $`r^2_0(\lambda )`$ is decreasing in $`\lambda `$.
But
$$\left|r^2_0[\lambda ]\frac{\lambda }{2}\right|\left[e_0[\lambda ]+\left(\frac{\lambda }{2}\right)^2\right]^{\frac{1}{2}}$$
(36)
The scaling relation and the fact that $`e_0`$ is increasing in $`ฯต`$ imply that when $`\frac{\lambda }{2}1`$
$$e_0[\lambda ]+\left(\frac{\lambda }{2}\right)^2\left(\frac{\lambda }{2}\right)^2(e_0[2]+1)$$
(37)
Taking now $`a`$ such that $`\frac{ฯตm}{a}\frac{B}{2}`$ $`(m1)`$ we get combining these inequalities that
$$E_0E_mm1$$
(38)
if we can find $`t>\frac{B}{2}`$ such that
$$t^2\left\{1+\frac{1}{2}\left(t^2\frac{B^2}{4}\right)\right\}\delta 2ฯต\left(t\frac{B}{2}\right)<0$$
(39)
where $`\delta =1+\sqrt{e_o[2]+1}`$
In the estimate for $`\delta `$ we can use our best variational upper bound. Inequality (39) will be satisfied if $`B`$ is less than some value $`B_0`$, so that in this range $`m=0`$ is the ground state. In order to see when $`m0`$ is a ground state, we use the following trial wave function $`\psi (r)`$ for a state with angular momentum $`m^{}`$.
$$\psi (r)=r^{m^{}m}\psi _m(r)m^{}m$$
(40)
where $`\psi _m(r)`$ is the exact ground state wave function for the state with angular momentum $`m`$. An integration by parts shows that
$$_0^{\mathrm{}}drr\left[\psi _{m}^{}{}_{}{}^{2}r^{2(m^{}m)}+2(m^{}m)r^{2(m^{}m)1}\psi _m^{}\psi _m\right]$$
(41)
$$=_0^{\mathrm{}}drr^{2(m^{}m)}\psi _m(r\psi _m)^{}$$
(42)
Therefore if we use the fact that
$$\frac{ฯต^2}{r}(r\psi _m^{})^{}=[V_m(r)e_m]\psi _m$$
(43)
We see that
$`{\displaystyle _0^{\mathrm{}}}`$ $`\mathrm{d}rr\left[ฯต^2\psi _{}^{}{}_{}{}^{2}+V_m^{}(r)\psi ^2\right]=e_m{\displaystyle _0^{\mathrm{}}}drr\psi ^2`$ (44)
$`+ฯต^22m^{}(m^{}m){\displaystyle _0^{\mathrm{}}}drr^{2(m^{}m)1}\psi _m^2`$ (45)
and we conclude that
$$e_m^{}e_m+ฯต^22m^{}(m^{}m)\frac{r^{2(m^{}m1)}_m}{r^{2(m^{}m)}_m}$$
(46)
In particular
$$e_1e_0+2ฯต^2\frac{1}{r^2_0}$$
(47)
If we have a lower bound $`c`$ on $`r^2_0`$ then we see that
$$E_1<E_0$$
(48)
if
$$B>\frac{2ฯต}{c}$$
(49)
We can use for the lower bound $`c`$ the one given in equation (36)
$$c=\frac{\lambda }{2}\sqrt{e_0[\lambda ]+(\frac{\lambda }{2})^2}$$
(50)
which is satisfactory when $`B`$ is not too large, but which becomes negative for large $`B`$. We can repair this by using the fact that if $`f`$ is an increasing function of $`r`$, its expectation value in the ground state is lowered by adding to the potential a new increasing potential. We can find a useful comparison potential
$$W=a_1r^2+a_2r^4+a_3r^6V$$
(51)
which has a ground state wave function of the form
$$\psi =e^{b_1r^2b_2r^4}b_2>0$$
(52)
so that $`r^2_W`$ can be computed explicitly for this potential and we can take $`c=r^2_W`$ in equation (49), which gives a more satisfactory result for large $`B`$.
In any case we see that the state $`m=1`$ if favoured over the state $`m=0`$ if $`B`$ is larger than some value, and by continuity there must exist a field for which both states have equal energy. But in order to prove that the ground state is $`m=1`$ when $`B`$ is in some range requires to show that $`E_m>E_1`$ $`m2`$. For this purpose let us consider $`m`$ as a continuous parameter. Then
$$\frac{E_m}{m}=\mathrm{\hspace{0.17em}2}ฯต^2m\frac{1}{r^2}_mฯตB$$
(53)
If we can show that $`\frac{E_m}{m}0`$ for all $`m1`$, then we will have shown that $`E_m>E_1`$. When $`m1`$ we have
$$\frac{1}{r^2}_m\frac{1}{r^2_m}$$
(54)
and
$$\frac{(ฯตm)^2}{r^2_m}+r^2_m^2+\lambda r^2_me_m$$
(55)
In order to get a variational bound on $`e_m`$ we can use the trial wave function $`\psi =r^me^{ar^2}`$, which gives
$$e_m\frac{m+1}{m+2}V_{m+2}(x_{m+2})$$
(56)
where $`x_m`$ is the value of $`x`$ which minimises
$$V_m(x)=\frac{(ฯตm)^2}{x}+x^2+\lambda x$$
(57)
Noting that equation (55) implies that
$$r^2_mx_m+\sqrt{e_m+V_m(x_m)}$$
(58)
one can see by combining equations (53), (54), (56) and (58) that $`E_mE_1`$ for all $`m2`$ if
$$\frac{B^2}{8}<\frac{1}{1+c^2}$$
(59)
with
$$c^2=\frac{1}{ฯต^2}|\lambda |\left[x_1+\sqrt{V_{1+2m}(x_{1+2m})V_1(x_1)}\right]^2$$
(60)
which implies that $`B`$ should be less than some value.
We give in the table VII some numerical values for the bounds obtained by these methods.
They show that whereas the range of values of $`B`$ for which $`E_0<E_1`$ and $`E_1<E_0`$ is reasonably estimated for $`ฯต0.1`$, there is no range of values of $`B`$ for which our bounds show that $`m=1`$ is the ground state except when $`ฯต`$ is very small (0.01) But in this range WKB works perfectly well. Obviously we have too poorly estimated the effect of the kinetic energy and that of the centrifugal barrier. Numerical computations for example show that the replacement of $`\frac{1}{r^2}_1`$ by $`\frac{1}{r^2_1}`$ is not appropriate when $`ฯต`$ or $`B`$ are too large.
In conclusion, even in two dimensions improved rigorous bounds on the critical values of the magnetic field are needed, and the WKB method for which we have no estimate of the error gives the best analytic results.
## VI conclusion
It could be of course quite interesting to see an experimental verification of these surprising effects of the magnetic field. Even though we have found them in the case of a double-well, we think that the details of the potential do not matter too much. What is needed is a potential whose minimum is taken sufficiently far from the origin.
We have thought of two possible fields where one could observe such effects. The first one is molecular physics where often the dynamics of electrons or protons is modelled by the motion of a quantum particle in a double-well (although admittedly often a one-dimensional one.) If we consider the case of the electron in the rotationally symmetric double-well, the smallest value of the critical field where the $`m=1`$ and $`m=0`$ states are degenerate is about $`15`$ Tesla if we take for the depth of the potential $`1`$ eV and for the distance to the origin of the minimum 2 $`\AA `$. For protons the situation is more favourable since a field of $`5`$ Tesla can create a degeneracy when the depth is kept to $`1`$ eV and the minimum is at a distance of $`1.5`$ $`\AA `$. Obviously a more detailed investigation is needed if one wants to see these unusual effects (like a change from diamagnietism to paramagnetism) in molecules.
The other field is that of Bose-Einstein condensates of very cold atoms, which recently has made spectacular progress. If we consider free charged bosons in a magnetic field and in a potential $`V(\frac{r}{r_0})`$ one can show that there is a Bose-Einstein condensation in the ground state in three dimensions, in the limit $`r_0`$ going to infinity, for all potentials which have a quadratic dependence of $`r`$ near the origin. Our result supports therefore that free charged bosons in their condensate would show a phase transition when one varies the magnetic field. This transition would manifest itself by jumps of the magnetisation at some critical values of the magnetic field. The phenomenon would probably persist in a dilute gas of charged bosons in a neutralising background. It is however probably quite difficult to create such a jellium in the laboratory and this remains a challenging task.
## VII Acknowledgements
We thank Ph. Martin and N. Datta for some useful discussions on the Bose-Einstein condensation in the presence of a magnetic field. |
no-problem/0002/physics0002011.html | ar5iv | text | # A classical Over Barrier Model to compute charge exchange between ions and oneโopticalโelectron atoms
## I Introduction
The electron capture process in collisions of slow, highly charged ions with neutral atoms and molecules is of great importance not only in basic atomic physics but also in applied fields such as fusion plasmas and astrophysics. The process under study can be written as:
$$A^{+q}+BA^{(qj)+}+B^{j+}.$$
(1)
Theoretical models are regularly developed and/or improved to solve (1) from first principles for a variety of choices of target $`A`$ and the projectile $`B`$, and their predictions are compared with the results of ever more refined experiments.
In principle, one could compute all the quantities of interest by writing the time-dependent Schrรถdinger equation for the system (1) and programming a computer to solve it. This task can be performed on presentโdays supercomputers for moderately complicated systems. Notwithstanding this, simple approximate models are still valuable: (i) they allow to get analytical estimates which are easy to adapt to particular cases; (ii) allow to get physical insight on the features of the problem by looking at the analytical formulas; (iii) finally, they can be the only tools available when the complexity of the problem overcomes the capabilities of the computers. For this reason new models are being still developed .
The present author has presented in a recent paper a study attempting to develop a more accurate OBM by adding some quantal features. The model so developed was therefore called a semiโclassical OBM. Its results showed somewhat an improvement with respect to other OBMs, but not a dramatic one.
In this paper we aim to present an OBM for dealing with one of the simplest processes (1): that between an ion and a target provided with a single active electron. Unlike the former one , this model is entirely developed within the framework of a classical model, previously studied in (see also ), but with some important amendments and improvements which, as we shall see, allow a quite good accordance with experiments.
The paper is organized as follows: a first version of the model is presented and discussed in section II. In section III we will test our model against a first test case. From the comparison a further improvement to the model is proposed (section IV) and tested against the same case, as well as other data in section V. It will be shown that predictions with this correction are in much better agreement.
## II The model: first picture
We consider the standard scattering experiment and label T, P, and e respectively the target ion, the projectile and the electron. The system T \+ e is the initial neutral atom. Let r be the electron vector relative to T and R the internuclear vector between T and P. In the spirit of classical OBM models, all particles are considered as classical objects.
Let us consider the plane $`๐ซ`$ containing all the three particles and use cylindrical polar coordinates $`(\rho ,z,\varphi )`$ to describe the position of the electron within this plane. We can arbitrarily choose to set the angle $`\varphi =0`$, and assign the $`z`$ axis to the direction along the internuclear axis.
The total energy of the electron is (atomic units will be used unless otherwise stated):
$$E=\frac{p^2}{2}+U=\frac{p^2}{2}\frac{Z_t}{\sqrt{\rho ^2+z^2}}\frac{Z_p}{\sqrt{\rho ^2+(Rz)^2}}.$$
(2)
$`Z_p`$ and $`Z_t`$ are the effective charge of the projectile and of the target seen by the electron, respectively. Notice that we are considering hydrogenlike approximations for both the target and the projectile. We assigne an effective charge $`Z_t=1`$ to the target and an effective quantum number $`n`$ to label the binding energy of the electron:$`E_n=Z_t^2/2n^2=1/2n^2`$.
As long as the electron is bound to T, we can also approximate $`E`$ as
$$E(R)=E_n\frac{Z_p}{R}.$$
(3)
This expression is used throughout all calculations in (I); however, we notice that it is asimptotically correct as long as as $`R\mathrm{}`$. In the limit of small $`R`$, instead, $`E(R)`$ must converge to a finite limit:
$$E(R)(Z_p+1)^2E_n$$
(4)
(united atom limit). For the moment we will assume that $`R`$ is sufficiently large so that eq . (3) holds, but later we will consider the limit (4), too.
On the plane $`๐ซ`$ we can draw a section of the equipotential surface
$$U(z,\rho ,R)=E_n\frac{Z_p}{R}.$$
(5)
This represents the limit of the region classically allowed to the electron. When $`R\mathrm{}`$ this region is divided into two disconnected circles centered around each of the two nuclei. Initial conditions determine which of the two regions actually the electron lives in. As $`R`$ diminishes there can be eventually an instant where the two regions become connected. In fig. 1 we give an example for this.
In the spirit of OBMs it is the opening of the equipotential curve between P and T which leads to a leakage of electrons from one nucleus to another, and therefore to charge exchange. We make here the no-return hypothesis: once crossed the barrier, the electron does not return to the target. It is well justified if $`Z_p>>1`$. As we shall see just below, this hypothesis has important consequences.
It is easy to solve eq. (5) for $`R`$ by imposing a vanishing width of the opening ($`\rho _m=0`$); furthermore, by imposing also that there be an unique solution for $`z`$ in the range $`0<z<R`$:
$$R_m=\frac{(1+\sqrt{Z_p})^2Z_p}{E_n}.$$
(6)
In the region of the opening the potential $`U`$ has a saddle structure: along the internuclear axis it has a maximum at
$$z=z_0=R\frac{1}{\sqrt{Z_p}+1}$$
(7)
while this is a minimum along the orthogonal direction.
Charge exchange occurs provided the electron is able to cross this potential barrier. Let $`N_\mathrm{\Omega }`$ be the fraction of trajectories which lead to electron loss at the time $`t`$. It is clear from the discussion above that it must be function of the solid opening angle angle $`\mathrm{\Omega }`$, whose projection on the plane is the $`\pm \theta _m`$ angle. The exact expression for $`N_\mathrm{\Omega }`$ will be given below. Further, be $`W(t)`$ the probability for the electron to be still bound to the target, always at time $`t`$. Its rate of change is given by
$$dW(t)=N_\mathrm{\Omega }dt\frac{2}{T_{em}}W(t),$$
(8)
with $`T_{em}`$ the period of the electron motion along its orbit.
It is important to discuss the factor $`dt(2/T_{em})`$ since it is an important difference with (I), where just half of this value was used. The meaning of this factor is to account for the fraction of electrons which, within the time interval $`[t,t+dt]`$ reach and cross the potential saddle. In (I) it was guessed that it should be equal to $`dt/T_{em}`$, on the basis of an uniform distribution of the classical phases of the electrons. However, let us read again what the rhs of eq. (8) does mean: it says that the probability of loss is given by the total number of available electrons within the loss cone ($`W(t)\times N_\mathrm{\Omega }`$), multiplied by the fraction of electrons which reach the potential saddle. However, on the basis of the noโreturn hypothesis, only outgoing electrons can contribute to this term: an electron which is within the loss cone and is returning to the target from the projectile is not allowed, it should already have been captured and therefore would not be in the set $`W`$. It is clear, therefore, that the effective period is $`T_{em}/2`$, corresponding to the outgoing part of the trajectory.
A simple integration yields the leakage probability
$$\begin{array}{cc}\hfill P_l=P(+\mathrm{})& =1W(+\mathrm{})=\hfill \\ & =1\mathrm{exp}\left(\frac{2}{T_{em}}_{t_m}^{+t_m}N_\mathrm{\Omega }๐t\right).\hfill \end{array}$$
(9)
In order to actually integrate Eq. (9) we need to know the collision trajectory; an unperturbed straight line with $`b`$ impact parameter is assumed:
$$R=\sqrt{b^2+(vt)^2}.$$
(10)
The extrema $`\pm t_m`$ in the integral (9) are the maximal values of $`t`$ at which charge exchange can occur. If we identify this instant with the birth of the opening, using eq. (6) and (10), we find
$$t_m=\frac{\sqrt{R_m^2b^2}}{v}.$$
(11)
At this point it is necessary to give an explicit expression for $`N_\mathrm{\Omega }`$. To this end, we will consider first the case of an electron with zero angular momentum ($`l=0`$), and then will extend to nonzero values.
In absence of the projectile, the classical electron trajectories, with zero angular momentum, are ellipses squeezed onto the target nucleus. We are thus considering an electron moving essentially in one dimension. Its hamiltonian can be written as
$$\frac{p^2}{2}\frac{1}{r}=E_n.$$
(12)
The electron has a turning point at
$$r_c=\frac{1}{E_n}.$$
(13)
Obviously the approaching of the projectile modifies these trajectories. However, in order to make computations feasible, we make the following hypothesis: electron trajectories are considered as essentially unperturbed in the region between the target and the saddle point. The only trajectories which are thus allowed to escape are those whose aphelia are directed towards the opening within the solid angle whose projection on the $`๐ซ`$ plane is $`\pm \theta _m`$ (see fig. 1) provided that the turning point of the electron is greater than the saddle-point distance: $`r_cz_0`$. The validity of these approximations can be questionable, particularly if we are studying the collision with highlyโcharged ions, which could deeply affect the electron trajectory. We limit to observe that it is necessary in order to make analytical calculations. A posteriori, we shall check the amount of error introduced by such an approximation.
The angular integration is now easily done, supposing a uniform distribution for the directions of the electrons:
$$N_\mathrm{\Omega }=\frac{1}{2}(1\mathrm{cos}\theta _m).$$
(14)
In order to give an expression for $`\theta _m`$ we notice that $`\mathrm{cos}\theta _m=z_0/(\rho _m^2+z_0^2)^{1/2}`$, with $`\rho _m`$ root of
$$E(R)=\left(\rho _m^2+\frac{R^2}{(\sqrt{Z_p}+1)^2}\right)^{1/2}+Z_p\left(\rho _m^2+\frac{Z_pR^2}{(\sqrt{Z_p}+1)^2}\right)^{1/2}.$$
(15)
It is easy to recognize that, in the right-hand side, the first term is the potential due to the electronโtarget interaction, and the second is the electronโprojectile contribution. Eq. (15) cannot be solved analytically for $`\rho _m`$ except for the particular case $`Z_p=1`$, for which case:
$$\rho _m^2=\left(\frac{2}{E(R)}\right)^2\left(\frac{R}{2}\right)^2.$$
(16)
The form of $`E(R)`$ function of $`R`$ cannot be given analytically, even though can be quite easily computed numerically . In order to deal with expressions amenable to algebraic manipulations, we do therefore the approximation: first of all, divide the space in the two regions $`R<R_u,R>R_u`$, where $`R_u`$ is the internuclear distance at which the energy given by eq. (3) becomes comparable with its unitedโatom form:
$$E_n+\frac{Z_p}{R_u}=(Z_p+1)^2E_nR_u=\frac{Z_p}{(Z_p+1)^21}\frac{1}{E_n}.$$
(17)
We use then for $`E(R)`$ the unitedโatom form for $`R<R_u`$, and the asymptotic form otherwise:
$$\begin{array}{cc}\hfill E(R)& =E_n+\frac{Z_p}{R},R>R_u\hfill \\ & =(Z_p+1)^2E_n,R<R_u\hfill \end{array}$$
(18)
It is worthwhile explicitly rewriting eq. (16) for the two cases:
$$\begin{array}{cc}\hfill \rho _m^2& =R^2\left(\frac{4}{(E_nR+1)^2}\frac{1}{4}\right),R>R_u\hfill \\ & =\frac{1}{4}\left(\frac{1}{E_n^2}R^2\right),R<R_u\hfill \end{array}$$
(19)
and the corresponding expressions for $`N_\mathrm{\Omega }`$ are:
$$\begin{array}{cc}\hfill N_\mathrm{\Omega }=\frac{1\mathrm{cos}\theta _m}{2}& =\frac{1}{8}(3E_nR),R>R_u\hfill \\ & =\frac{1}{2}(1E_nR),R<R_u.\hfill \end{array}$$
(20)
Note that $`N_\mathrm{\Omega }=1/2`$ for $`R=0`$. This is a check on the correctness of the model, since, for symmetrical scattering at low velocity and small distances we expect the electrons to be equally shared between the two nuclei.
When $`Z_p>1`$ we have to consider two distinct limits: when $`R\mathrm{}`$ we know that eventually $`\rho _m0`$ (eq. 6). It is reasonable therefore to expand (15) in series of powers of $`\rho _m/R`$ and, retaining only terms up to second order:
$$\rho _m^2\frac{2\sqrt{Z_p}}{\left(\sqrt{Z_p}+1\right)^4}R^2\left[\left(\sqrt{Z_p}+1\right)^2Z_pE_nR\right].$$
(21)
Consistently with the limit $`R\mathrm{}`$, we have used the largeโ$`R`$ expression for $`E(R)`$.
The limit $`R0`$ is quite delicate to deal with: a straightforward solution of eq. (15) would give
$$\rho _m\frac{1}{(Z_p+1)E_n}+๐ช(R),$$
(22)
but calculating $`\mathrm{cos}\theta _m`$ and eventually $`N_\mathrm{\Omega }`$ from this expression gives wrong results: it is easy to work out the result $`N_\mathrm{\Omega }=1/2,R0`$. This is wrong because, obviously, the limit $`N_\mathrm{\Omega }1,Z_p\mathrm{}`$ must hold. The reason of the failure lies in the coupling of eq. (15) with the unitedโatom form for $`E(R)`$: one can notice that the expression thus written is perfectly simmetrical with respect to the interchange projectileโtarget. Because of this symmetry, electrons are forced to be equally shared between the two nuclei. This is good when dealing with symmetrical collisions, $`Z_p=Z_t=1`$, and is actually an improvement with respect to (I), where eq. (21) was used even for small $`R`$โs and one recovered the erroneous value $`N_\mathrm{\Omega }(R=0)=3/8`$. But when $`Z_p>1`$ the asymmetry must be retained in the equations. The only way we have to do this is to extend eq. (21) to small $`R`$, obtaining
$$1\mathrm{cos}\theta _m\frac{\sqrt{Z_p}}{(\sqrt{Z_p}+1)^2}\left[(\sqrt{Z_p}+1)^2Z_pE_nR\right].$$
(23)
It is straightforward to evaluate eq. (23) in the limit $`Z_p\mathrm{},R0`$, and find the sought result, 2.
We notice that, from the numerical point of view, it is not a great error using eq. (21) everywhere: the approximation it is based upon breaks down when $`R`$ is of the order of $`R_u`$ or lesser, which is quite a small range with respect to all other lengths involved when $`Z_p>1`$, while even for the case $`Z_p=1`$ it is easy to recover (see equations below) that the relative error thus introduced on $`P_l`$ is $`\mathrm{\Delta }P_l/P_l=1/24`$ for small $`b`$ (andโobviouslyโit is exactly null for large $`b`$). Therefore, eq. (21) could be used safely in all situations. However, we think that the rigorous altough quite lengthy derivation given above was needed since it is not satisfactory working with a model which does not comply with the very basic requirements required by the symmetries of the problem at hand.
We have now to take into account that the maximum escursion for the electron is finite. If we put $`r_c=z_0`$ and use for $`z_0`$, $`r_c`$ respectively the expressions given by (7) and (13), we obtain an equation which can be easily solved for $`R`$:
$$R=R_{}^{}{}_{m}{}^{}=(\sqrt{Z_p}+1)r_c.$$
(24)
The $`R_{}^{}{}_{m}{}^{}`$ thus computed is the maximum internuclear distance at which charge exchange is allowed under the present assumptions. Since $`R_{}^{}{}_{m}{}^{}<R_m`$ (compare the previous result with that of eq. 6 ) we have to reduce accordingly the limits in the integration in eq. (9): it must be performed between $`\pm t_{}^{}{}_{m}{}^{}`$, with the definition of $`t_{}^{}{}_{m}{}^{}`$ the same as $`t_m`$ but for the replacement $`R_mR_{}^{}{}_{m}{}^{}`$.
The result for the leakage probability is:
$$P_l=1\mathrm{exp}\left(2\frac{F(u_m)+G_Z}{T_{em}}\right),$$
(25)
where we have defined
$$\begin{array}{cc}\hfill F(u)& =\frac{\sqrt{Z_p}}{(\sqrt{Z_p}+1)^2}\left[\left((\sqrt{Z_p}+1)^2Z_p\right)\frac{b}{v}u\left(\frac{E_nb^2}{2v}\right)\left(u\sqrt{1+u^2}+\mathrm{arcsinh}(u)\right)\right],\hfill \\ \hfill G_Z& =(3F(u_u)2t_u)(Z_p=1)\hfill \\ & =0(Z_p>1),\hfill \\ \hfill u_m& =vt_m^{}/b,\hfill \\ \hfill u_u& =vt_u/b,\hfill \\ \hfill t_u& =\frac{\sqrt{R_u^2b^2}}{v}.\hfill \end{array}$$
(26)
The period can be easily computed by
$$T_{em}=2_0^{1/E_n}\frac{dr}{p}=\sqrt{2}_0^{1/E_n}\frac{dr}{\sqrt{\frac{1}{r}E_n}}=2\pi n^3$$
(27)
(this result could be found also in ).
The cross section can be finally obtained after integrating over the impact parameter (this last integration must be done numerically):
$$\sigma =2\pi _0^{b_m}bP_l(b)๐b.$$
(28)
Again, we have used the fact that the range of interaction is finite: the maximum allowable impact parameter $`b_m`$ is set equal to $`R_{}^{}{}_{m}{}^{}`$.
Finally, we consider the case when the angular momentum is different from zero. Now, orbits are ellipses whose minor semiaxis has finite length. We can still write the hamiltonian as function of just $`(r,p)`$:
$$\frac{p^2}{2}\frac{1}{r}+\frac{L^2}{2r^2}=E_n.$$
(29)
$`L`$ is the usual term: $`L^2=l(l+1)`$. The turning points are now
$$r_c^\pm =\frac{1\pm \sqrt{12E_nL^2}}{2E_n}.$$
(30)
and $`R_m^{}=(\sqrt{Z_p}+1)r_c^+`$.
Now the fraction of trajectories entering the loss cone is much more difficult to estimate. In principle, it can still be determined: it is equal to the fraction of ellipses which have intersection with the opening. Actual computations can be rather cumbersome. Thus, we use the following approximation, which holds for low angular momenta $`l<<n`$ (with $`n`$ principal quantum number): ellipses are approximated as straight lines (as for the $`l=0`$ case), but their turning point is correctly estimated using eq. (30). Note that also the period is modified: its correct expression is
$$T_{em}=\sqrt{2}_r^{}^{r^+}\frac{dr}{\sqrt{\frac{1}{r}E_n\frac{l(l+1)}{2r^2}}}.$$
(31)
## III A test case
As a first test case we consider the inelastic scattering $`\mathrm{Na}^++\mathrm{Na}(28\mathrm{d},29\mathrm{s})`$. We investigate this sytem since: (i) it has been studied experimentally in ; (ii) some numerical simulations using the Classical Trajectory Monte Carlo (CTMC) method have also been done on it , allowing to have detailed informations about the capture probability $`P_l`$ function of the impact parameter, and not simply integrated cross sections; (iii) finally, it has been used as test case in (I), thus allowing to assess the relative quality of the fits.
In fig. (2) we plot the normalized cross section $`\stackrel{~}{\sigma }=\sigma /n^4`$ versus the normalized impact velocity $`\stackrel{~}{v}=vn`$ for both collisions $`nl=`$ 28d and $`nl=`$ 29s (solid line). The two curves are very close to each other, reflecting the fact that the two orbits have very similar properties: the energies of the two states differ by a very small amount, and in both cases $`E_nL^2<<1`$. The two curves show reversed with respect to experiment: $`\sigma `$(28d) it is greater than $`\sigma `$(29s). The reason is that the parameter $`r_c`$ is larger in the former case than in the latter.
We can distinguish three regions: the first is at reduced velocity around 0.2, where a steep increase of cross section appears while going towards lower velocities. Overโbarrier models do not appear to fully account for this trend: they have a behaviour at low speed which is ruled approximately by the $`1/v`$ law, consequence of the straight-line impact trajectory approximation: it is well possible that this approximation too becomes unadequate in this region.
The second region covers roughly the range 0.3 $`รท`$ 1.0. Here the $`nl=`$ 29s data are rather well simulated while the present model overestimates the data for $`nl=`$ 28d. The bad agreement for $`nl=`$ 28d was already clear to Ostrovsky which attributed it to a deficiency of the model to modelize $`l`$-changing processes. It seems clear that neither our treatment of the angular momentum is sufficient to cure this defect.
Finally, there is the region at $`\stackrel{~}{v}>1`$, where again the OBM, as it stands, is not able to correctly reproduce the data. The reason for this discrepancy can be traced back to the finite velocity of the electron: the classical electron velocity is $`v_e=1/n`$, so $`\stackrel{~}{v}`$ can be given the meaning of the ratio between the projectile and the electron velocity. When $`\stackrel{~}{v}1`$ the projectile is less effective at collecting electrons in its outgoing part of the trajectory (i.e. when it has gone beyond the point of closest approach). In simple terms: an electron is slower than the projectile; when it is left behind, it cannot any longer reach and cross the potential barrier.
## IV Corrections to the model
This picture suggests a straightforward remedy: a term must be inserted in eq. (8) to account for the diminished capture efficiency. This is accomplished formally through rewriting $`N_\mathrm{\Omega }w(t,\stackrel{~}{v})N_\mathrm{\Omega }`$, with $`w1`$. We have put into evidence that $`w`$ can in principle be function of time and of the impact velocity. The simplest correction is made by assuming a perfect efficiency for $`\stackrel{~}{v}<1`$, $`w(t,\stackrel{~}{v}<1)=1`$, while, for $`\stackrel{~}{v}>1`$, no electrons can be collected after that the distance of minimum approach has been reached: $`w^+w(t>0,\stackrel{~}{v}>1)=0`$. This can appear too strong an assumption, since those electrons which are by the same side of the projectile with respect to the nucleus, and which are close to their turning point may still be captured. In fig. (2) we can compare the original data with those for $`w^+=0`$ (dashed line). The sharp variation of $`\sigma `$ at $`\stackrel{~}{v}=1`$ is obviously a consequence of the crude approximations done choosing $`w`$ which has a stepโlike behaviour with $`v`$.
To get further insight, we plot in fig. 3 the quantity $`bP_l(b)`$ versus $`b`$ for the collision $`\mathrm{Na}^++\mathrm{Na}(28\mathrm{d})`$. The impact velocity is $`\stackrel{~}{v}=1`$. The symbols are the CTMC results of ref. . Solid line is the model result for $`w^+=1`$; dotted line, the result for $`w^+=0`$; dashed line, an intermediate situation, with $`w^+=1/2`$. Striking features are, for all curves, the nearly perfect accordance of the value $`b3000`$ at which $`P_l=0`$ (it is $`b_m`$ according to our definition). The behaviour at small $`b`$โs ($`P_l1/2`$) is well reproduced for $`w^+=1`$ while it is slightly underestimated by the two other curves. On the other hands, only by setting $`w^+=0`$ it is possible to avoid the gross overestimate of $`P_l`$ near its maximum.
It is thus evident that the agreement is somewhat improved in the region $`\stackrel{~}{v}1`$ by letting $`w^+=0`$. However, the highโvelocity behaviour is still missed by the model, which predicts a powerโlaw behaviour $`\sigma v^1`$, while the actual exponent is higher. Within our picture, this suggests that also the capture efficiency $`w^{}=w(t<0)`$ must be a decreasing function of $`\stackrel{~}{v}`$. An accurate modelization of the processes which affect this term is difficult, and we were not able to provide it. However, some semiโqualitative arguments can be given. Let us review again the process of capture as described in section II and shown in fig. (1): if $`\stackrel{~}{v}>1`$, an electron at time $`t`$ can be in the loss cone and still not to be lost, since within a time span $`\mathrm{\Delta }t\rho _m/v`$ the position of the loss cone has shifted of such an amount that only those electrons which were closer to the saddle point than a distance $`v_e\mathrm{\Delta }t`$ could be caught. The fraction of these electrons is $`\mathrm{\Delta }t(2/T_{em})\rho _m(2/vT_{em})`$. This correction gives an additional $`1/v`$ dependence, thus now $`\sigma 1/v^2`$.
As an exercise, we try to fit experimental data using $`w`$ as a free parameter instead that a function to be determined by first principles. We choose one of the simplest functional forms:
$$w=\frac{1+|\beta |^m}{1+|\stackrel{~}{v}\beta |^m},$$
(32)
with $`\beta ,m`$ free parameters to be adjusted. This form gives the two correct limits: $`w1,\stackrel{~}{v}0`$, and $`w0,\stackrel{~}{v}\mathrm{}`$. The parameter $`\beta `$ is not really needed; it has been added to reach a better fit. Its meaning is that of a treshold velocity, at which the capture efficiency begins to diminish. In fig. (2) we plot the fit obtained with $`\beta =0.2,m=4`$ (dotted line): this is not meant to be the best fit, just a choice of parameters which gives a very good agreement with data. We see that the suggested corrections are still not enough to give the right powerโlaw, if one needs to go to some extent beyond the region $`\stackrel{~}{v}=1`$.
## V Other comparisons
### A Iodine - Cesium collisions
We apply now our model to the process of electron capture
$$\mathrm{I}^{q+}+\mathrm{Cs}\mathrm{I}^{(q1)+}+\mathrm{Cs}^+$$
(33)
with $`q=6รท30`$. This scattering process has been studied experimentally in . It is particularly interesting to study in this context since it has revealed untractable by a number of other OBMโs, including that of (I) (for a discussion and results, see ). The impact energy is chosen equal to $`1.5\times Z_p`$ keV: since it corresponds to $`\stackrel{~}{v}<<1`$, we can safely assume $`w=1`$. The Cesium atom is in its ground state with the optical electron in a $`s`$ state.
In fig. 4 we plot the experimental points together with our estimates. In this case the fit is excellent. It is important to notice that this agreement is entirely consequence of our choice of limiting integration to $`R`$ given by eq. (24): to understand this point, observe that because of the very high charge of the projectile, the exponential term in eq. (25) is small ($`F`$, by direct inspection, is increasing with $`Z_p`$) and thus $`P_l1`$. The details of the model which are in $`F`$ are therefore of no relevance. The only surviving parameter, and that which determines $`\sigma `$, is $`R_m^{}`$. It can be checked by directly comparing our fig. 4 with fig. 1 of ref. , where results from model (I) are shown, which differ from ours just in replacing eq. (24) with eq. (6). There, the disagreement is severe.
### B Ion - Na($`n=3`$) collisions
As a final test case we present the results for collisions HโNa(3s,3p). They are part of a set of experiments as well as numerical simulations involving also other singlyโcharged ions: He, Ne, and Ar (see and the references therein and in particular ; ref. presents numerical calculations for the same system). In fig. 5 we plot the results of our model together with those of ref. . Again, we find that only by neglecting $`w^+`$ some accordance is found. The lowโenergy wing of the curve is strongly underestimated for Na(3s), while the agreement is somewhat better for Na(3p). Again, the slope of $`\sigma `$ for relative velocities higher than 1 could not be reproduced.
We do not show results for other ions: they can be found in fig. 3 of ref. . What is important to note is that differencies of a factor two (and even larger for 3s states) appear between light (H<sup>+</sup>, He<sup>+</sup>) and heavy (Ne<sup>+</sup>, Ar<sup>+</sup>) ions which our model is unable to predict. We can reasonably conclude therefore: (i) that the present model is not satisfactory for $`v/v_e<<1`$ (it was already pointed out in sec. IV) and for $`v/v_e>1`$ ; (ii) the structure of the projectile must be incorporated into the model otherwise different ions with the same charge should cause the same effect, at odds with experiments. As emphasized in the energy defect $`\mathrm{\Delta }E`$ of the process is a crucial parameter: captures to states with $`\mathrm{\Delta }E0`$ are strongly preferred. Obviously, the value of $`\mathrm{\Delta }E`$ depends on the energy levels structure of the recombining ion.
## VI Summary and conclusions
We have developed in this paper a classical OBM for single charge exchange between ions and atoms. The accuracy of the model has been tested against three cases, with results going from moderateโtoโgood (sec. III and IV), excellent (sec. V.A), and poorโtoโmoderate (sec. V.B). As a rule of thumb, the model can be stated to be very well suited for collisions involving highly charged ions at low velocities.
The model is based upon a previous work , and adds to it a number of features, which we go to recall and discuss: (i) the finite excursion from the nucleus permitted to the electrons; (ii) the redefinition of the fraction of lost electrons $`dt/T_{em}dt(2/T_{em})`$; (iii) a more accurate treatment of the small impact parameter region for symmetrical collisions; (iv) the explicit-altough still somewhat approximate-treatment of the capture from $`l>0`$ states; (v) a correction to the capture probability due finite impact velocity. Let us discuss briefly each of these points:
Point (i) and (ii) contribute a major correction: in particular, (i) is essential to recover that excellent agreement found in section V.A, while (ii) accounts for the correct $`bP_l`$ behaviour at small $`b`$โs (see fig. 2).
Point (iii) is unimportant for actual computations, but corrects an inconsistency of the model.
Point (iv) has been studied in less detail, in part for the lack of experimental data on which doing comparisons.
Point (v): a good theoretical estimate of $`w`$ should be of the outmost importance for developing a really accurate model of collision at medium-to-high impact velocity. In this paper we have just attempted a step towards this direction which, however, has allowed to recover definitely better results.
Finally we recall from sec. V.B that the treatment of the projectileโor better the process of the electron-projectile bindingโis an aspect which probably awaits for main improvements. We just observe that it is a shortcoming of all classical methods, that they cannot easily deal with quantized energy levels.
## Acknowledgments
It is a pleasure to thank the staff at National Institute for Fusion Science (Nagoya), and in particular Prof. H. Tawara and Dr. K. Hosaka for providing the data of ref. . |
no-problem/0002/astro-ph0002530.html | ar5iv | text | # The Discovery of an Embedded Cluster of High-Mass Stars Near SGR 1900+14
## 1 Introduction
The Hartmann et al. (1996, V96); Vrba et al. (1996, V96) survey of the original Network Synthesis Localization (NSL) of SGR 1900+14 (Hurley et al., 1994) found a pair of nearly identical M5 supergiant stars, separated by 3.3 arcsec, and at an estimated distance of 12-15 kpc. While just outside of the original NSL, they lie within the ROSAT HRI localization of the quiescent Xโray source RX J190717+0919.3 thought to be associated with SGR 1900+14 (Hurley et al., 1996). On the basis of the small probability that even one supergiant would lie within the ROSAT error circle and that at least one other supergiant had been associated with an SGR (1806โ20; van Kerkwijk et al. (1995); Kulkarni et al. (1995)), V96 proposed that the M star pair may be associated with the SGR 1900+14 source. The position of the M star pair has continued to be consistent with more recent Xโray and gammaโray observations which, taken together, have narrowed considerably the actual location of SGR 1900+14 from the original NSL area of 5 arcmin<sup>2</sup>. These recent Xโray and gammaโray observations have also detected variations with a period of 5.16 sec (Hurley et al., 1999a; Murakami et al., 1999; Kouveliotou et al., 1999) and a deceleration of $`\dot{P}10^{10}`$ sec/sec. Taken together, these are interpreted as evidence that the SGR source is a magnetar, though there remains some uncertainty in this interpretation (Marsden, Rothschild, & Lingenfelter, 1999).
Additionally, a variable and fading radio source was detected shortly after the 27 August SGR 1900+14 superburst by Frail, Kulkarni, & Bloom (1999), providing strong evidence that it was the radio counterpart to the SGR. Its subarcsec accurate position is located only a few arcseconds from the M stars. These positional coincidences, the lack of a plerionic radio source, and, despite arguments for SNR G42.8+0.6 in the literature, the lack of a coincident supernova remnant, suggest that the system of proximate, highโmass M stars should not yet be dismissed as an evolutionary companion to the pulsating Xโray source associated with the SGR.
Finding direct evidence that the M star pair may be associated with SGR 1900+14 has proven elusive as summarized by Guenther, Klose, & Vrba (2000). Also difficult is a theoretical understanding of how isolated, albeit high mass, stars could play a role in the formation of a pulsating Xโray source, despite the presence of a high mass luminous blue variable (LBV) very near the SGR 1806โ20 localization position, a remarkably similar situation to that for SGR 1900+14. Recent nearโ and midโinfrared observations of SGR 1806โ20 (Fuchs et al., 1999, F99), however, have revealed the LBV to be only the most luminous member of a compact cluster of massive stars. Such proximate regions of recent star formation provide a natural location for the birth of such pulsating Xโray sources, which cannot be very old, without the need for invoking enormous space velocities from the nearest supernova remnants.
In this paper we present evidence for a similar compact cluster of high-mass stars which has heretofore been hidden in the glare of its brightest components, the pair of M5 supergiant stars.
## 2 Observations
The 1998 outburst season of SGR 1900+14 presented an opportunity to search for optical and near-infrared variability of the double M stars, or other sources within the ROSAT HRI error circle, which might be correlated to the SGR outbursts via some process such as mass transfer to a compact object. Beginning in early May and continuing through midโJuly 1998 we carried out an Iโ and Jโband monitoring campaign at the U.S. Naval Observatory, Flagstaff Station (NOFS) which eventually comprised 2025 short exposure frames of data with 54,460 seconds of open shutter time during 16 nights, intended to sample variability timescales down to a few seconds. The results of this work found no variablity for any object within the ROSAT HRI error circle and are presented more fully in Vrba et al. (2000).
However, it was recognized that the numerous short Iโband exposures constituted several hours of total exposure time, which could be stacked to form a deep Iโband image to search for a counterpart at the position of the Frail, Kulkarni, & Bloom (1999) variable radio source. To the 1998 data were added additional short exposure frames from 1995 and 1999. In all, 217 frames of individual exposure time between 1 and 10 minutes were coadded to form a net image of about 6.5 hours total exposure. All frames were obtained with one of two Tektronix 2K CCDs on the 1.55โm Strand Astrometric Telescope at the USNOFS. It was additionally recognized that, since the exposures used were short enough not to saturate the three bright M stars (A, B,and C of V96), their light could largely be removed by PSF subtraction.
Figure 1 is an approximately 45 x 45 arcsec portion of the medianโfiltered composite Iโband image centered on the V96 M stars, with a limiting detection magnitude of I $``$ 26.5. In this image the M stars ABC have been removed, although their positions are still apparent due to imperfect subtraction. The position of the variable radio source is shown, but no counterpart is visible to I $``$ 26.5, which is consistent with the non-detections in the near infrared of Eikenberry & Dror (1999). Unfortunately, none of the nearly 200 frames from 1998 were obtained simultaneously with a gammaโray burst from SGR 1900+14.
Of greater interest is that the subtracted Iโband image shows what appears to be a cluster of stars, and possibly nebulosity, centered on the position of the M stars. The IRAS source found at this location by van Paradijs et al. (1996) shows a steeply rising energy spectrum that can be interpreted as warm dust in the cluster region. Figure 1 also shows identification numbers of the possible cluster stars. On UT 1999 October 28 we used the ASTROCAM IR imager, which employs an SBRC 1024<sup>2</sup> InSb detector, at the 1.55โm telescope to obtain a 1600 second net exposure Jโband image of this region. An approximately 45 x 45 arcsec region of this image is shown in Figure 2, where again the M stars were somewhat successfully PSFโsubtracted.
We obtained photometry for the cluster stars from the Iโ and Jโband frames, calibrated with several Iโ and Jโband local standards which had previously been set up for our variability monitoring program. The photometric results are presented in Table 1 where the results for stars 5 and 6 are presented together as they could not be separated in the Jโband observations. The observed (IโJ) $``$ 7 colors are far larger than for any unreddened star and indicate that they suffer extremely high extinction.
## 3 Nature of the Cluster
Assuming that the cluster stars are at the same distance and suffer the same extinction as the M supergiant stars (12-15 kpc; A<sub>V</sub> = 19.2$`\pm `$1.0; V96) we placed all stars in an $`M_I`$ vs. (IโJ) CM diagram (Figure 3), assuming normal interstellar extinction (Bessel & Brett, 1988), and where the error bars include the ranges in distance and A<sub>V</sub> values given above. The solid curves show the approximate loci for supergiants and dwarfs later than A0 and for giants later than G0 while the dashed lines show the M0 (IโJ) colors, for reference. The large uncertainty of the intrinsic (IโJ) colors of the stars after subtracting a huge baseline of extinction renders them essentially useless in estimating their spectral types. However, at this assumed distance and extinction the stars have luminosities far greater than that of main sequence stars. We note that even assuming the stars are at a much closer distance (for instance d $``$ 5 kpc as has often been quoted by association with the SNR G42.8+0.6) has little affect on the conclusion that these are highly luminous stars.
Several examples of compact high mass young clusters serve as templates for these objects: NGC 3603 (Moffat, Drissen, & Shara, 1994), W43 (Blum, Damineli, & Conti, 1999), and several clusters summarized in Figer, McLean, & Morris (1999). These clusters are characterized by 10 โ 30 cluster members, radii of 0.2 โ 1.0 pc, and ages of 1 โ 10 Myr. The SGR 1900+14 cluster has at least 13 members (including stars A and B) and an approximate 7 arcsec radius which, at a distance of 12 โ 15 kpc, corresponds to a cluster radius of $``$ 0.4 pc. A remarkably similar example to that of the SGR 1900+14 cluster is described by Moffat (1976) in which a group of 12 luminous stars surround the M3 I supergiant star HD143183 within a cluster radius of 0.6 pc. These examples support the idea that the small cluster of stars near SGR 1900+14 and dominated by the M5 supergiants is likely a real association. A formal astrometric solution, not previously presented, for the positions of the M supergiants based on 21 USNO-A2.0 stars gives the result ($`\pm `$ 0.1 arcsec):
Star A: $`\alpha `$ = 19<sup>h</sup> 07<sup>m</sup> 15.35<sup>s</sup>, $`\delta `$ = +09<sup>d</sup> 19โ 21.4โ (J2000)
Star B: $`\alpha `$ = 19<sup>h</sup> 07<sup>m</sup> 15.13<sup>s</sup>, $`\delta `$ = +09<sup>d</sup> 19โ 20.7โ (J2000)
## 4 Discussion
If the cluster was the birthplace of SGR 1900+14, this essentially excludes SNR G42.8+0.6 as playing any role in the SGR. Although one can envision scenarios in which the SNR progenitor was ejected from the cluster by dynamical interaction or a much earlier supernova, this leaves the necessity of the neutron star having been kicked back to almost exactly its place of origin by the supernova that formed SNR G42.8+0.6 (since the cluster and SGR localizations are coincident), an unlikely coincidence both in space and timing. However, despite the association of G42.8+0.6 with SGR 1900+14 in the literature, there has been no evidence supporting this association offered, such as the probablity of finding any SNR within a given distance, based on the number density of SNRs in the Galactic plane.
A more plausable scenario is one in which the cluster and associated dense gas/dust cloud hides a recent supernova. Evidence for this cloud comes from Figures 1 and 2 and the coincident extended strong farโinfrared source indicating compact warm and extended cool dust (see V96). Optical extinction from this cloud combined with a 12โ15 kpc distance explains why the supernova would not have been noticed historically. A very young SNR expanding into the dense windโblown bubble due to mass loss from the supergiant stars in the cluster would be consistent with the otherwise unexplained persistent Xโray source at this position, RX J190717+0919.3 (Hurley et al., 1996). While no quiescent radio source is known at this position, a combination of selfโabsorption within the dense medium and rapid decay (Reynolds, 1988) could account for this. The supernova remnant evolutionary calculations of Truelove & McKee (1999) indicate that for an ejecta mass of 1 M, and an external density medium of 10 cm<sup>-3</sup>, one finds characteristic sizes of $``$ 1 pc at t = 1000 yr; similar to that of the cluster dimensions at the estimated M supergiant distances.
The most likely position for the SGR itself is the Frail, Kulkarni, & Bloom (1999) fading radio source located at $`\alpha `$ = 19<sup>h</sup> 07<sup>m</sup> 14.33<sup>s</sup>, $`\delta `$ = +09<sup>d</sup> 19โ 21.1โ (J2000), with positional accuracy of $`\pm `$ 0.15 arcsec in each coordinate. With these astrometric positions we estimate the approximate distances from the center and edge of the cluster to the radio position as 12 arcsec (0.7โ0.9 pc) and 5 arcsec (0.3โ0.4 pc), respectively, based on the 12โ15 kpc distance estimate. Thus, even at the extreme minimum age of the SGR based on the simplest magnetar physics ($``$ 700 yr; Kouveliotou et al. 1999) this implies a tangential velocity of $``$ 420 km s<sup>-1</sup> from the near edge of the cluster. While still an ample velocity for the runaway neutron star, it obviates the enormous space velocities implied by associating it with G42.8+0.6 (Kouveliotou et al., 1999), which is about 12 arcmin away (Hurley et al., 1999b).
While an isolated instance of the compact, high mass cluster found at/near SGR 1900+14 would be dismissed as a chance superposition, its striking similarity to the cluster found near SGR 1806-20 by Fuchs et al. (1999) must be recognized. In that case, an LBV supergiant is found associated with a cluster of at least another four massive young stars enshrouded in a bright dust cloud as imaged by ISO and located only 7 arcsec from the SGR gammaโray localization. With an approximate cluster radius of 8 arcsec and an estimated distance of 14.5 kpc, this implies a cluster radius of $``$ 0.6 pc. Now that similar compact clusters have been found near the positions of the two best studied SGRs (1806-20 and 1900+14) the possiblity that young SGR neutron stars have their origins in compact clusters should be considered seriously. |
no-problem/0002/hep-ph0002111.html | ar5iv | text | # FUTURE PROSPECTS FOR CP VIOLATION IN HADRON MACHINES
## 1 Introduction
While the first statistically significant observation of CP violation in the B meson system will most likely take place in one of the $`e^+e^{}`$ $`b`$-factory experiments (BaBar, Belle), this talk will try to prove that the contribution from experiments in hadron colliders will be dominant in the future, starting in 2001.
Figure 1 shows the standard CP violation triangle. The angle $`\beta `$ is the most easily accessible, in particular through the asymmetry in the decays $`B^0/\overline{B}^0J/\mathrm{\Psi }K_S`$. First determinations by CDF , OPAL and ALEPH have recently become available. When combined, they give a two-standard deviation measurement of $`\mathrm{sin}2\beta `$:
$$\mathrm{sin}2\beta =0.78\pm 0.37.$$
(1)
The combination is totally dominated by the CDF result. By summer 2000, the electro-positron colliders should get a measurement with an uncertainty about $`\pm 0.15`$ . The task is substantially more involved for the other two angles, $`\alpha `$ and $`\gamma `$. It is not obvious when will BaBar or Belle be able to obtain a measurement.
Hadronic uncertainties diminish the usefulness of a determination of the length of the side of the triangle opposite $`\beta `$. The length of the side opposite $`\gamma `$, can be obtained from the determination of $`\mathrm{\Delta }m_s`$, the frequency of the oscillations in the $`B_s^0\overline{B}_s^0`$ system. The current limit from LEP is :
$$x_s=\frac{\mathrm{\Delta }m_s}{\mathrm{\Gamma }_s}>21.8.$$
(2)
Standard Model expectations for $`x_s`$ are in the range 20โ30 . When the determination of $`x_s`$ will become available, it will be possible to combine it with $`x_d`$, the similar parameter in the $`B_d`$ system, now measured to be
$$x_d=\frac{\mathrm{\Delta }m_d}{\mathrm{\Gamma }_d}=0.717\pm 0.026,$$
(3)
in order to obtain the length of the side of the triangle opposite $`\gamma `$ through the relation:
$$\frac{\mathrm{\Delta }m_d}{\mathrm{\Delta }m_s}\frac{\left|V_{td}\right|}{\left|V_{ts}\right|}\frac{\left|V_{td}\right|}{\left|V_{cb}\right|}.$$
(4)
Since the lepton machines are only scheduled to run at the energy of the $`\mathrm{{\rm Y}}`$(4S) resonance, no $`B_s`$ mesons will be produced. Therefore, $`x_s`$ will only be determined in hadron machine experiments.
The standard CP triangle in fig. 1 is the only one obtained from the standard Wolfenstein parametrization of the CKM matrix . However, one should remember that this is just an expansion of the complete matrix in powers of $`\lambda `$, the sinus of the Cabibbo angle, valid up to order $`๐ช(\lambda ^3)`$. If one proceeds with the expansion up to $`\lambda ^4`$, one encounters a new imaginary contribution to $`V_{ts}`$, which gives raise to a new unitarity triangle. A new angle, $`\delta \gamma `$, appears, which in the Standard Model is proportional to $`\lambda ^2`$, and, therefore, very small, of order $`๐ช(10^2)`$. It is easily accessible through the decay $`B_sJ/\mathrm{\Psi }\varphi `$, the $`B_s`$ equivalent to the โgolden channelโ $`B_dJ/\mathrm{\Psi }K_S`$. Again, this is not possible in the lepton machines, which do not produce $`B_s`$.
The experimental program in the coming years has two clear goals:
* First, to observe for the first time CP violation in the B system. As explained above, this will most likely be achieved by BaBar and/or Belle, followed by HERA-B at DESY, CDF and D0.
* Then, to test whether CP violation is generated by the Standard Model. Here, all the experiments mentioned before will contribute but, most probably, the second-generation dedicated experiments, LHCb and BTeV, will be needed.
In the remaining of the talk, the capabilities of the experiments foreseen in future hadron colliders will be reviewed. Section 2 will cover the near future: CDF and D0 at the Fermilab Tevatron, scheduled to start physics data-taking in spring 2001. The more distant future, from 2005 on, will be covered in Section 3. ATLAS, CMS and LHCb are approved to take data at CERNโs LHC, while BTeV, if approved, will run at the Tevatron. Special emphasis will be put in the two dedicated $`b`$-experiments, LHCb and BTeV. Finally, section 4 will give a short summary of the talk.
## 2 The Near Future: CDF and D0
### 2.1 The detector upgrades
The upgraded Tevatron at Fermilab is scheduled to start collider physics operation in spring 2001 with two upgraded detectors, CDF and D0. The luminosity delivered in the first two years of operation is expected to reach about 2 fb<sup>-1</sup>, which is equivalent to a production of about 10<sup>11</sup> $`b`$ pairs per year. The challenge in a hadron collider experiments is not producing the $`b`$ pairs, but rather triggering on them, selecting them and getting rid of the background. CDF has proven to be able to do all this with their recent determination
$$\mathrm{sin}2\beta =0.79_{0.44}^{+0.41}$$
(5)
using the $`J/\mathrm{\Psi }`$ channel.
Both CDF and D0 are undergoing substantial upgrades . Among other improvements, CDF is getting a new, longer, vertex detector, that will provide 3D coordinates in a larger acceptance region. A new time-of-flight system will allow kaon identification at low momentum, and can, therefore, be used for kaon tagging: using the kaon charge to decide on the flavor of the decaying $`b`$ hadron. More importantly, a new pipelined, deadtimeless trigger system will allow purely hadronic events (no leptons) to be triggered efficiently. This will vastly increase the physics capabilities of CDF, as it will be shown later. The new all-hadron trigger requires two high transverse momentum tracks at level 1 and, using vertex detector information, finds their vertex at level 2 and requires the decay length to be positive.
D0 upgrades are not less important. The detector will have for the first time a solenoidal coil, with a 2 Tesla field that, together, with the new scintillating-fiber central tracking system, will provide precise momentum measurements for all charged tracks. Furthermore, a new four-layer vertex detector will also be added. These changes should allow D0 to reconstruct and tag B decays using standard displaced vertes techniques.
A comparison between the $`b`$-physics capabilities of CDF and D0 shows a clear advantage for the former, due, in particular, to its unique all-hadron trigger. In order to trigger on hadronic B decays, D0 has to rely on semileptonic decays of the opposite-side B hadron, therefore paying the price of the 10% semileptonic branching ratio.
### 2.2 $`B_s`$ oscillations
As mentioned in the introduction, the measurement of the mass difference of the two $`B_s`$ states, $`\mathrm{\Delta }m_s`$, is needed in order to determine the length of the side of the standard CKM triangle opposite the angle $`\gamma `$. The mass difference is obtained from the difference between mixed and unmixed $`B_s`$ decays as a function of proper time:
$$\frac{N_{unmixed}(t)-N_{mixed}(t)}{N_{unmixed}(t)+N_{mixed}(t)}=D\mathrm{cos}\left(\mathrm{\Delta }m_st\right),$$
(6)
where $`D`$ is the so-called dilution factor, explained later. In order to be able to perform the measurement one needs to determine:
* The proper decay time. Hence good decay length resolution is needed.
* The $`B_s`$ flavor at production time (tagging). The dilution factor $`D`$ in the previous equation is $`D=1-2p_{mistag}`$, where $`p_{mistag}`$ is the probability of getting the flavor wrong.
* The $`B_s`$ flavor at decay time, using, for instance, flavor-specific decays.
CDF has studied the flavor-specific decay channel $`B_s^0\to D_s^{-}\pi ^+`$ or $`B_s^0\to D_s^{-}\pi ^+\pi ^{-}\pi ^+`$ with $`D_s^{-}\to \varphi \pi ^{-},K^{*0}K^{-}`$. As one can see, there are no leptons and, therefore, the purely hadronic trigger is mandatory. CDF expects about 20000 selected events in the first two years of operation with a signal-to-background ratio between 1/2 and 2 . The significance of the measurement of $`\mathrm{\Delta }m_s`$ (in number of standard deviations), or of the related dimensionless parameter $`x_s`$ introduced above, is given by
$$Sig(x_s)=\sqrt{\frac{N\epsilon _{tag}D^2}{2}}\mathrm{exp}\left(-\left(x_s\sigma _t/\tau \right)^2/2\right)\sqrt{\frac{S}{S+B}},$$
(7)
where $`N`$ is the number of selected events, $`\epsilon _{tag}`$ is the tagging efficiency, $`D`$ is the dilution factor, $`\sigma _t`$ is the proper time resolution, $`\tau `$ is the $`B_s`$ lifetime and $`S`$ and $`B`$ are the numbers of signal and background events selected, respectively. Looking at the formula, one sees that the "quality factor" $`\epsilon _{tag}D^2`$ gives the effective tagging efficiency and that good proper time resolution is, clearly, crucial. No matter how many events one can collect, the reach in $`\mathrm{\Delta }m_s`$ is going to be limited by the time resolution to something of order $`\mathcal{O}\left((1{-}3)/\sigma _t\right)`$. CDF expects a time resolution around 50 fs and a quality factor $`\epsilon _{tag}D^2=0.113`$, which translates into a sensitivity to $`x_s<63`$ at the five standard deviation level . This goes well beyond the current LEP limit of 21.8 and the expected range in the Standard Model, from 20 to 30. In summary, CDF should be able to determine $`x_s`$ quite precisely unless it is much larger than expected.
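To make eq. (7) concrete, the sketch below evaluates it with the CDF inputs quoted in the text ($`N`$ = 20000, $`\epsilon _{tag}D^2`$ = 0.113, $`\sigma _t`$ = 50 fs); the $`B_s`$ lifetime $`\tau `$ of about 1.5 ps and S/B = 1 are assumptions added here only to produce numbers, and the reach depends steeply on them.

```python
# Numerical sketch of eq. (7).  N, eps*D^2 and sigma_t are the CDF
# projections quoted in the text; tau ~ 1.5 ps and S/B = 1 are assumed
# here for illustration.
import numpy as np

def significance(x_s, N=20000, epsD2=0.113, sigma_t=0.050, tau=1.5,
                 s_over_b=1.0):
    """Eq. (7); sigma_t and tau in ps, x_s dimensionless."""
    purity = s_over_b / (1.0 + s_over_b)            # S/(S+B)
    return (np.sqrt(N * epsD2 / 2.0)
            * np.exp(-0.5 * (x_s * sigma_t / tau) ** 2)
            * np.sqrt(purity))

for xs in (20, 40, 60):
    print(xs, float(significance(xs)))
# The Gaussian factor makes the reach collapse once x_s*sigma_t/tau
# grows past ~2, which is why sigma_t is the critical parameter.
```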
### 2.3 Measurement of $`\mathrm{sin}2\beta `$
The standard procedure to determine the angle $`\beta `$ uses the time-integrated asymmetry in the $`B^0/\overline{B}^0`$ decays to $`J/\mathrm{\Psi }K_s`$:
$`A_{CP}`$ $`=`$ $`{\displaystyle \frac{N(B^0\to J/\mathrm{\Psi }K_s)-N(\overline{B}^0\to J/\mathrm{\Psi }K_s)}{N(B^0\to J/\mathrm{\Psi }K_s)+N(\overline{B}^0\to J/\mathrm{\Psi }K_s)}}`$
$`=`$ $`D_{mix}\mathrm{sin}2\beta ,\text{with}`$
$`D_{mix}`$ $`=`$ $`x_d/\left(1+x_d^2\right)\simeq 0.47.`$ (8)
Here $`D_{mix}`$ is a dilution factor due to $`B_d^0\overline{B}_d^0`$ mixing. Since $`x_s`$ is much larger than $`x_d`$ this dilution factor is very small in the $`B_s^0`$ asymmetries, so that time-dependent measurements are needed.
The experimentally observed asymmetry is further diluted by mistagging and background:
$`A_{obs}`$ $`=`$ $`D_{tag}D_{bgd}A_{CP}`$
$`D_{tag}`$ $`=`$ $`1-2p_{mistag},\quad D_{bgd}=\sqrt{S/\left(S+B\right)},`$ (9)
so that the final uncertainty on $`\mathrm{sin}2\beta `$ can be written as
$$\delta \left(\mathrm{sin}2\beta \right)=\frac{1}{D_{mix}D_{tag}}\frac{1}{\sqrt{\epsilon _{tag}N}}\sqrt{\frac{S+B}{S}}.$$
(10)
CDF believes they can obtain a total $`\epsilon _{tag}D_{tag}^2`$ around 9.1%, including the kaon tagging using the proposed time-of-flight system. With the statistics available in the first two years of data-taking, that would imply $`\delta (\mathrm{sin}2\beta )\sim 0.07`$ . On the other hand, the total $`\epsilon _{tag}D_{tag}^2`$ for D0 is expected to be around 5.0%, implying $`\delta (\mathrm{sin}2\beta )\sim 0.15`$, using only the muon decay of the $`J/\mathrm{\Psi }`$ . Analyses are underway to quantify the precision attainable in the electron channel.
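As a cross-check of eq. (10), one can verify that the quoted precision corresponds to a plausible event yield; the roughly 10<sup>4</sup>-event, background-free sample is an assumption of this sketch, not a number from the CDF study.

```python
# Quick check of eq. (10) against the quoted delta(sin2beta) ~ 0.07.
# eps*D^2 = 0.091 and D_mix = 0.47 are from the text; the assumed yield
# N ~ 1e4 and S/(S+B) = 1 are illustrative.
import math

def dsin2beta(N, epsD2=0.091, D_mix=0.47, purity=1.0):
    """Eq. (10) with D_tag folded into the quality factor eps*D^2."""
    return 1.0 / (D_mix * math.sqrt(epsD2 * N) * math.sqrt(purity))

print(f"{dsin2beta(10_000):.3f}")  # ~0.07 for a clean 10k-event sample
```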
### 2.4 Other CP Angles
Determination of CP angles other than $`\beta `$ can also be attempted by CDF. It will be more difficult for D0 because of the lack of a fully hadronic trigger. A quick summary of the CDF capabilities follows:
* $`\mathrm{sin}2\alpha `$ could, in principle, be obtained from the $`B^0\overline{B}^0`$ asymmetry in the $`\pi ^+\pi ^{-}`$ decay. However, the large penguin contribution, with a different phase, may preclude the effective extraction of $`\alpha `$ from this channel. Some other methods will be described in section 3. In any case, CDF can have a quality factor around 9.1% for this channel, leading to a precision in the determination of the asymmetry of $`\delta A(\pi ^+\pi ^{-})\sim 0.09`$ .
* There are several ways of studying the angle $`\gamma `$. CDF has explored, for instance, the use of the asymmetries in the decays $`B_s^0/\overline{B}_s^0\to D_s^\pm K^{\mp }`$ and $`B^\pm \to K^\pm D_{CP}^0`$, where $`D_{CP}^0`$ is the $`CP=1`$ eigenstate of the $`D^0`$ system. For the moment, however, there are no quantitative estimates of the possible $`\mathrm{sin}2\gamma `$ reach.
* Finally, the asymmetry in the decays $`B_s^0/\overline{B}_s^0\to J/\mathrm{\Psi }\varphi `$, the "golden decay" equivalent in the $`B_s`$ sector, can be determined to about 10% by CDF, depending on the value of $`x_s`$ . This asymmetry provides a direct determination of $`\delta \gamma `$. They expect about 6000 events with $`\epsilon _{tag}D_{tag}\sim 9.7\%`$. While only an asymmetry of order a few percent is expected in the Standard Model, a larger value, if found, would be a sign of new physics. It should be noted that, since in this channel the triggering is based on the leptonic decays of the $`J/\mathrm{\Psi }`$, D0 should be able to do a similar job.
## 3 The More Distant Future: ATLAS, CMS, LHCb, BTeV
### 3.1 The situation in 2005
In about 2005 ATLAS, CMS and LHCb will start taking data at the LHC accelerator at CERN. At about the same time, BTeV, if approved, will start at the Tevatron. By then a lot of progress will have been made by the earlier experiments, both at hadron machines and at the electron-positron factories. In particular, one could expect $`\mathrm{sin}2\beta `$ to be measured with a precision around 0.04 by a combination of BaBar, Belle, HERA-B, CDF and D0. The length of the side opposite $`\gamma `$ will be known thanks to the $`B_s`$ oscillation measurements at HERA-B, CDF and D0, but the amount of information that will be gathered about $`\alpha `$ and $`\gamma `$ is rather unclear. The outcome of the combination of all the different measurements should be one of the following:
* either a clear inconsistency with the Standard Model predictions will be seen in the precise measurements ($`\beta `$ and $`B_s`$ mixing);
* or a hint of inconsistency with the less precise measurements ($`\alpha `$, kaon results) will appear;
* or all measurements will look consistent with the Standard Model.
In any case, the next generation of experiments will be needed for a full understanding of the CP violation mechanism. That will involve more precise measurements of the same parameters in the same channels, measurements of the same parameters in different channels, and first-time determinations of some parameters, notably the angle $`\gamma `$.
### 3.2 The detectors
Around the year 2005, four new hadron-machine experiments with interesting capabilities in $`b`$ physics will start taking data. Two of them are specifically designed for $`b`$ physics, LHCb and BTeV . Both of them cover only the forward region, since most of the $`b\overline{b}`$ pairs are produced in this region. BTeV actually covers also the backward region, thus doubling the acceptance. The larger cross section available at LHC energies (500 $`\mu `$b vs. 100 $`\mu `$b) more than compensates for the smaller acceptance of LHCb. Covering only the forward/backward regions has clear advantages:
* $`b\overline{b}`$ production peaks forward;
* limited solid angle coverage leads to limited cost;
* the $`b`$ hadrons have higher momentum which leads to easier vertex finding and improved decay time resolution;
* open geometry allows for easy installation and maintenance.
These advantages compensate for the higher minimum bias background, occupancies and radiation dose that have to be overcome at low angles.
The main advantages that LHCb and BTeV have over the all-purpose detectors, ATLAS and CMS, are in particle identification and, especially, in the triggering capabilities. LHCb and BTeV have RICH systems that allow $`\pi /K`$ separation in a large momentum range ($`1<p<150`$ GeV for LHCb, $`3<p<70`$ GeV for BTeV). This is mandatory to separate signal from background in the $`B\to \pi \pi ,KK,K\pi `$ channels and adds the possibility of using kaon tagging, as mentioned in section 2 for CDF.
Both LHCb and BTeV can trigger efficiently on all-hadron events, thanks to their vertex triggers. In contrast, ATLAS and CMS need a high-$`p_T`$ lepton to start the triggering sequence. This considerably reduces the $`b`$-physics capabilities of the two multipurpose detectors.
### 3.3 Physics Reach
#### 3.3.1 $`\mathrm{sin}2\beta `$
The angle $`\beta `$ can be measured very precisely by the second generation $`b`$ experiments using the same "golden" decay used by CDF and D0: $`B^0/\overline{B}^0\to J/\mathrm{\Psi }K_S`$. Since the $`J/\mathrm{\Psi }`$ is only reconstructed through its lepton-pair decays, the triggering and particle-id advantages of LHCb and BTeV are of no importance, and the kind of precision that can be achieved by the four experiments is very similar, ranging from $`\mathrm{\Delta }\mathrm{sin}2\beta =\pm 0.015`$ for LHCb to $`\pm 0.025`$ for CMS , with ATLAS and BTeV somewhere in between.
#### 3.3.2 $`x_s`$ reach
The $`B_s`$ mixing analysis is much improved by the availability of a pure hadron trigger and, therefore, the differences between the reach of the experiments are large. Whereas ATLAS and CMS could only reach $`x_s`$ values around 50 , lower than what can be explored already at CDF with their hadron trigger, the dedicated experiments can go up to 60 (BTeV) or 75 (LHCb) . It should be said, however, that Standard Model expectations for $`x_s`$ are in the range 20-30, easily accessible to the first generation experiments.
#### 3.3.3 $`\mathrm{sin}2\alpha `$
The asymmetry in the $`B_d^0\to \pi ^+\pi ^{-}`$ channel has already been mentioned in section 2 as a possibility for measuring $`\alpha `$, although, at present, its usefulness is not clear due to the large penguin contribution with a different phase. In any event, LHCb and BTeV can measure the asymmetry in this all-hadron channel with a $`\pm 0.025`$ precision .
More promising, if more involved, seems to be the $`B_d^0/\overline{B}_d^0\to \pi ^+\pi ^{-}\pi ^0`$ channel, which proceeds through $`\rho `$ intermediate states. Here, all amplitudes can be determined and the remaining ambiguities can be resolved using the interference regions in the Dalitz plot . The analysis seems feasible, but no experiment has yet made public the precision it could achieve in $`\alpha `$.
#### 3.3.4 $`\mathrm{sin}2\gamma `$
There are several ways of getting at the angle $`\gamma `$, all of them rather challenging. For example, it can be obtained from the simultaneous measurement of six time-integrated decay rates in the $`B_d^0\to D^0K^{*0}`$ channel: $`B_d^0\to D^0K^{*0},B_d^0\to \overline{D}^0K^{*0},B_d^0\to D_{CP}^0K^{*0},\overline{B}_d^0\to \overline{D}^0\overline{K}^{*0},\overline{B}_d^0\to D^0\overline{K}^{*0},\overline{B}_d^0\to D_{CP}^0\overline{K}^{*0}`$, with $`D^0`$ decaying to $`K^{-}\pi ^+`$, $`\overline{D}^0`$ to $`K^+\pi ^{-}`$, and $`D_{CP}^0`$ to either $`K^+K^{-}`$ or $`\pi ^+\pi ^{-}`$. The final state, therefore, consists in all cases of four charged kaons or pions resonating at different masses. It is clear that proper pion/kaon separation is crucial. LHCb claims a precision in $`\gamma `$ around 10<sup>o</sup> from this channel .
BTeV has studied a similar channel: $`B^{-}\to D^0/\overline{D}^0K^{-}`$, where $`D^0`$ and $`\overline{D}^0`$ go to the same final state, and the corresponding $`B^+`$ decays. Using now nine time-integrated decays, the precision in $`\gamma `$ is found to be about 13<sup>o</sup> .
#### 3.3.5 Other CP angles
Apart from $`\alpha `$, $`\beta `$ and $`\gamma `$, other CP angles are also accessible to the second generation CP experiments:
* The small angle $`\delta \gamma `$ introduced in section 1 can be measured precisely using the $`B_s^0/\overline{B}_s^0\to J/\mathrm{\Psi }\varphi `$ asymmetry, as already mentioned above. Precisions around $`0.5`$-$`0.9^\mathrm{o}`$ seem possible, even for ATLAS and CMS , because of the leptonic decays of the $`J/\mathrm{\Psi }`$.
* The combination $`\gamma -2\delta \gamma `$ can be determined with a precision around 10<sup>o</sup> by both BTeV and LHCb through the decay $`B_s^0\to D_s^{-}K^+`$ and its charge conjugate. Since $`\delta \gamma `$ is expected to be small in the Standard Model, this can be viewed as yet another way of getting $`\gamma `$, only in this case through $`B_s`$ decays, and is, hence, a very interesting check.
* LHCb has also studied the extraction of $`2\beta +\gamma =\pi +\beta -\alpha `$ from $`B^0\to D^+\pi ^{-}`$. The precision that can be obtained is around 9<sup>o</sup> . Again, since $`\beta `$ will be well known, this method could be used to obtain $`\gamma `$ or $`\alpha `$.
The ability to measure a single angle with different processes can be very useful in checking whether the Standard Model by itself is able to explain CP violation.
## 4 Summary
This talk has tried to convey the message that hadron machines have a very comprehensive program for understanding the origin of CP violation in the period 2001-2015 or so, at the Tevatron and the LHC.
In a first phase, starting in 2001, CDF and, to a lesser extent, D0 will be able to measure $`\mathrm{sin}2\beta `$ to about 0.07 and study $`B_s`$ mixing for values of the mixing parameter $`x_s`$ up to 63.
In a second phase, from 2005 on, LHCb and, if approved, BTeV, plus ATLAS and CMS, will improve on $`\beta `$ and $`x_s`$ and will be able to determine both $`\alpha `$ and $`\gamma `$ to about 10<sup>o</sup>, thus putting strong constraints on the ability of the Standard Model to explain all the phenomenology of CP violation in the $`b`$ sector.
## 5 Acknowledgements
It is a pleasure to thank Giorgio Capon and the rest of the organizers of the workshop for the kind invitation to give this talk and for their patience while waiting for me to finish writing this manuscript.
HZPP-0003
Feb. 25, 2000
Anisotropy of Dynamical Fluctuations as a Probe
for Soft and Hard Processes in High Energy Collisions<sup>1</sup>
<sup>1</sup>This work is supported in part by the NSFC under project 19975021.
Liu Lianshou, Chen Gang and Fu Jinghua
Institute of Particle Physics, Huazhong Normal University, Wuhan 430079 China
ABSTRACT
It is shown using Lund Monte Carlo that, unlike the average properties of the hadronic system inside jets, the anisotropy of dynamical fluctuations in these systems changes abruptly with the variation of the cut parameter $`y_{\mathrm{cut}}`$. A transition point exists, where the dynamical fluctuations in the hadronic system inside a jet behave like those in soft hadronic collisions. Thus the anisotropy of the dynamical fluctuations can serve as a probe for the soft and hard processes in high energy collisions.
PACS number: 13.85 Hd
Keywords: dynamical fluctuations, hadronic jet, hard and soft processes
As is well known, the presently most promising theory of the strong interaction, Quantum Chromo-Dynamics (QCD), has the special property of both asymptotic freedom and colour confinement. For this reason, in any process, even when the energy scale $`Q^2`$ is large enough for a perturbative QCD (pQCD) calculation, there must be a non-perturbative hadronization phase before the final state particles can be observed. Therefore, the transition or interplay between hard and soft processes is a very important problem.
In the current literature, this transition is determined by some cut parameter. For example, in theoretical calculations a parameter $`Q_0^2`$ is introduced. When $`Q^2>Q_0^2`$ perturbative QCD is assumed to be applicable and the process is hard, while when $`Q^2<Q_0^2`$ the perturbative calculation is not allowed and the process becomes soft (non-perturbative). However, the value of $`Q_0^2`$ is not determined exactly. It decreases steadily with the development of perturbative techniques.
In experimental data analysis people use some "jet algorithm" (e.g. the Jade or Durham ones) to combine the final-state particles into "jets". Each jet is assumed to originate from a hard parton, and the hadrons in the jet are produced softly from this hard parton. Thus the transition between hard and soft processes is described as the production of hard partons and the subsequent hadronization of these partons. In this formalism there is also a parameter, $`y_{\mathrm{cut}}`$. The value of this parameter determines how the hadrons are grouped into jets, and whether an event is a "2-jet event", a "3-jet event", a "4-jet event", etc.
Let us concentrate on the 2-jet events. By definition, these two jets should develop softly from two hard partons, with no hard process involved in the evolution. If there is any hard process in the development, then we say that a third jet appears. Historically, it was the observation of the third jet in e<sup>+</sup>e<sup>-</sup> collisions that confirmed the existence of the gluon . In this sense, there should be a definite value of $`y_{\mathrm{cut}}`$ which is consistent with the physical meaning of "jet".
On the other hand, due to the success of pQCD calculations of jets, people sometimes take the number of jets in an event as indefinite, depending on the value of $`y_{\mathrm{cut}}`$, which can be chosen arbitrarily. Their emphasis is on utilizing this dependence to confront the pQCD calculation with experiments. From this point of view, the physical meaning of "jet" and the associated concepts "soft" and "hard" are neglected. Whether a process is hard or soft is then judged not physically, but through the technical question of whether the process can be calculated by perturbative QCD.
Let us recall that, physically, soft and hard are distinguished through the magnitude of transverse momentum. In hadron-hadron collisions at energies below top-ISR most of the final-state hadrons have low transverse momenta and the process is soft. At collider energies high-transverse-momentum jets, coming from hard parton collisions, start to appear . The transverse momenta of these jets are higher than 10-20 GeV/$`c`$. Besides, there are also mini-jets with transverse momenta higher than about 4-5 GeV/$`c`$ , which are generally referred to as semi-hard. The critical value of transverse momentum for the transition between soft and hard (semi-hard) is about 4-5 GeV/$`c`$.
The following important questions arise: 1) Does the number of jets in an event possess any definite meaning? If yes, how is this number to be determined, i.e. how should the correct value of $`y_{\mathrm{cut}}`$ be chosen? 2) Is it in principle possible to locate the transition between soft and hard processes in the hadronic final states of high energy e<sup>+</sup>e<sup>-</sup> collisions? If yes, how?
In order to answer these questions, let us recall that the qualitative difference between the typical soft process, moderate energy hadron-hadron collisions, and the typical hard process, high energy e<sup>+</sup>e<sup>-</sup> collisions, can be observed most clearly in the properties of the dynamical fluctuations therein. It has recently been found that, in spite of the similarities in the average properties, the dynamical fluctuations in the hadronic systems from these two processes are qualitatively different: the former are anisotropic in the longitudinal-transverse planes and isotropic in the transverse plane, while the latter are isotropic in three-dimensional phase space.
This observation inspired us to think that the dynamical-fluctuation properties may provide a probe for the transition between soft and hard processes inside the hadronic final state of high energy e<sup>+</sup>e<sup>-</sup> collisions. In the present letter we show, using Lund Monte Carlo simulations, that this is indeed the case.
In total 500 000 events are generated for 91.2 GeV e<sup>+</sup>e<sup>-</sup> collisions using JETSET7.4. The resulting hadronic systems are analysed using the Durham and/or Jade jet algorithms. The fractions $`R_2`$, $`R_3`$, $`R_4`$ of the 2-, 3-, 4-jet events in the whole sample are plotted versus the value of $`y_{\mathrm{cut}}`$ for both the Durham and Jade algorithms in Fig.1. It can be seen clearly from the figures that the definition of "jet" depends strongly on the value of $`y_{\mathrm{cut}}`$. When $`y_{\mathrm{cut}}`$ is big, most of the events are taken to be "2-jet" events. In the limit of very large $`y_{\mathrm{cut}}`$, the whole sample consists of only "2-jet" events. On the contrary, when the value of $`y_{\mathrm{cut}}`$ decreases continuously, the jets are divided further and further, and gradually most of the events become "multi-jet" (more than two jets) ones.
At the energy in consideration, it is certainly impossible that all the events are 2-jet ones. Neither is it possible that most of the events are multi-jet ones. In order to determine a reasonable value of $`y_{\mathrm{cut}}`$, we have to use the dependence of some physical property of the system on $`y_{\mathrm{cut}}`$. As an example, we show in Fig.2 the dependence of the average charged multiplicity $`N_{\mathrm{ch}}`$ and the average ellipticity $`e`$ on $`y_{\mathrm{cut}}`$ for the "2-jet" sample determined by the Durham algorithm. The ellipticity $`e`$ is an event-shape parameter defined as the ratio of the minor $`T_3`$ to the major $`T_2`$ in thrust analysis
$$e=T_3/T_2.$$
(1)
By definition $`e\le 1`$. When $`e=1`$ the jet cone is circular in momentum space. It is expected that, when $`y_{\mathrm{cut}}`$ increases, more and more "impurities" (multi-jet events) are mixed into the "2-jet" event sample, and the jet cone will deviate more and more from being circular, so the average ellipticity $`e`$ will decrease with increasing $`y_{\mathrm{cut}}`$. It can be seen from the figure that this is indeed the case. However, the value of $`e`$ changes smoothly with $`y_{\mathrm{cut}}`$, and it is hard to obtain a probe for a reasonable value of $`y_{\mathrm{cut}}`$ by using $`e`$. The same holds for $`N_{\mathrm{ch}}`$ and other average quantities.
Fig.1 The ratio of 2-, 3-, 4-jet events as function of $`y_{\mathrm{cut}}`$
Fig.2 Average charged multiplicity and ellipticity as function of $`y_{\mathrm{cut}}`$
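Since thrust-type quantities may be unfamiliar, the following sketch evaluates $`T_2`$, $`T_3`$ and $`e`$ of Eq.(1) for a toy momentum list by a crude random-direction search; real thrust analyses use an exact maximisation, so this is only meant to fix the definitions, and the toy event is an assumption of ours.

```python
# Sketch of the ellipticity e = T3/T2 of Eq.(1).  Thrust-type values
# T = max_n sum_i |p_i . n| / sum_i |p_i|, with n scanned over random
# directions here instead of an exact maximisation.
import numpy as np

def unit_vectors(n=2000, rng=np.random.default_rng(1)):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def best_axis(p, axes):
    proj = np.abs(p @ axes.T).sum(axis=0)            # sum_i |p_i . n|
    k = int(np.argmax(proj))
    return proj[k] / np.linalg.norm(p, axis=1).sum(), axes[k]

def ellipticity(p):
    axes = unit_vectors()
    _, n1 = best_axis(p, axes)                       # thrust axis
    perp = axes - (axes @ n1)[:, None] * n1          # axes in plane perp. to n1
    perp = perp[np.linalg.norm(perp, axis=1) > 1e-6]
    perp /= np.linalg.norm(perp, axis=1, keepdims=True)
    T2, n2 = best_axis(p, perp)                      # major
    n3 = np.cross(n1, n2)                            # minor direction
    T3 = np.abs(p @ n3).sum() / np.linalg.norm(p, axis=1).sum()
    return T3 / T2

# A cylindrically symmetric 2-jet toy event gives e close to 1.
rng = np.random.default_rng(0)
p = rng.normal(0, 5, (200, 1)) * np.array([[0.0, 0.0, 1.0]])
p += rng.normal(0, 0.3, (200, 3))
print(ellipticity(p))
```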
Let us turn now to consider the dynamical fluctuations. These fluctuations can be characterized by the anomalous scaling of factorial moments (FM) :
$`F_q(M)`$ $`=`$ $`{\displaystyle \frac{1}{M}}{\displaystyle \sum _{m=1}^{M}}{\displaystyle \frac{\langle n_m(n_m-1)\mathrm{\cdots }(n_m-q+1)\rangle }{\langle n_m\rangle ^q}}`$
$`\propto `$ $`(M)^{\varphi _q}\quad (M\to \mathrm{\infty }),`$
where a region $`\mathrm{\Delta }`$ in 1-, 2- or 3-dimensional phase space is divided into $`M`$ cells, $`n_m`$ is the multiplicity in the $`m`$th cell, and $`\langle \mathrm{}\rangle `$ denotes a vertical average over the event sample. Note that when the fluctuations exist in a higher-dimensional (2-D or 3-D) space the projection effect will cause the second-order 1-D FM to go to saturation according to the rule<sup>2</sup> (<sup>2</sup>In order to eliminate the influence of momentum conservation , the first few points ($`M=1,2`$ or 3) should be omitted when fitting the data to Eq.(3)):
$$F_2^{(a)}(M_a)=A_a-B_aM_a^{-\gamma _a},$$
(3)
where $`a=1,2,3`$ denotes the different 1-D variables. The parameter $`\gamma _a`$ describes the rate at which the FM in direction $`a`$ goes to saturation and is the most important characteristic of the higher-dimensional dynamical fluctuations. If $`\gamma _a=\gamma _b`$ the fluctuations are isotropic in the $`(a,b)`$ plane, while if $`\gamma _a\ne \gamma _b`$ the fluctuations are anisotropic in this plane. The degree of anisotropy is characterized by the Hurst exponent $`H_{ab}`$, which can be obtained from the values of $`\gamma _a`$ and $`\gamma _b`$ as
$$H_{ab}=\frac{1+\gamma _b}{1+\gamma _a}.$$
(4)
The dynamical fluctuations are isotropic when $`H_{ab}=1`$, and anisotropic when $`H_{ab}\ne 1`$.
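The whole chain of Eqs.(2)-(4) (binning, second-order FM, saturation fit, Hurst exponent) fits in a few lines; the sketch below runs it on toy "events" (Gaussian blobs, chosen by us only so that $`F_2`$ rises and then saturates), not on JETSET output.

```python
# Sketch of the F2 / gamma / Hurst chain of Eqs.(2)-(4) on toy events.
import numpy as np
from scipy.optimize import curve_fit

def F2(events, M):
    """Second-order factorial moment for M bins on [0,1], Eq.(2)."""
    counts = np.stack([np.histogram(ev, bins=M, range=(0, 1))[0]
                       for ev in events])
    num = (counts * (counts - 1)).mean(axis=0)   # <n_m(n_m-1)>
    den = counts.mean(axis=0) ** 2               # <n_m>^2
    return float(np.mean(num / den))             # (1/M) sum over bins

def gamma(events, Ms):
    """Fit F2(M) = A - B*M**(-gamma), Eq.(3)."""
    f2 = [F2(events, M) for M in Ms]
    popt, _ = curve_fit(lambda M, A, B, g: A - B * M ** (-g),
                        np.array(Ms, float), f2,
                        p0=(max(f2), 1.0, 1.0), maxfev=20000)
    return popt[2]

def toy_sample(rng, n_events=5000, mult=20, width=0.1):
    """Each event is one Gaussian blob: F2 rises, then saturates once
    the bin size drops below the blob width."""
    return [(rng.random() + width * rng.standard_normal(mult)) % 1.0
            for _ in range(n_events)]

rng = np.random.default_rng(0)
Ms = [3, 4, 6, 8, 12, 16, 24, 32, 48, 64]
g_a, g_b = gamma(toy_sample(rng), Ms), gamma(toy_sample(rng), Ms)
print("H_ab =", (1 + g_b) / (1 + g_a))           # ~1: isotropic toy, Eq.(4)
```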
For the 250 GeV/$`c`$ $`\pi `$(K)-p collisions from NA22 the Hurst exponents are found to be :
$$H_{p_\mathrm{t}\phi }=0.99\pm 0.01,H_{yp_\mathrm{t}}=0.48\pm 0.06,H_{y\phi }=0.47\pm 0.06,$$
(5)
which means that the dynamical fluctuations in these moderate energy hadron-hadron collisions are isotropic in the transverse plane and anisotropic in the longitudinal-transverse planes. This is as it should be, because there are almost no hard collisions at this energy and the direction of motion of the incident hadrons (the longitudinal direction) is privileged. Note that the special role of the longitudinal direction in these soft processes is present both in the magnitude of the average momentum and in the dynamical fluctuations in phase space.
In high energy e<sup>+</sup>e<sup>-</sup> collisions, the longitudinal direction is chosen along the thrust axis, which is the direction of motion of the primary quark-antiquark pair. Since this pair of quark and antiquark move back to back with very high momenta, the magnitude of the average momentum of the final state hadrons is also anisotropic due to momentum conservation. However, the dynamical fluctuations in this case come from the QCD branching of partons , which is isotropic in nature. Therefore, although the momentum distribution still has an elongated shape, the dynamical fluctuations in this case should be isotropic in 3-D phase space.
A Monte Carlo study for e<sup>+</sup>e<sup>-</sup> collisions at 91.2 GeV confirms this assertion . The dynamical fluctuations are approximately isotropic in the 3-D phase space, the corresponding Hurst exponents being
$$H_{p_\mathrm{t}\phi }=1.18\pm 0.03,H_{yp_\mathrm{t}}=0.95\pm 0.02,H_{y\phi }=1.11\pm 0.02.$$
(6)
The presently available experimental data for e<sup>+</sup>e<sup>-</sup> collisions at 91.2 GeV also show isotropic dynamical fluctuations in 3-D .
Now we apply this technique to the "2-jet" sample obtained from a certain, e.g. Durham, jet algorithm with some definite value of $`y_{\mathrm{cut}}`$. Doing the analysis for different values of $`y_{\mathrm{cut}}`$, the dependence of the dynamical-fluctuation properties of the "2-jet" sample on the value of $`y_{\mathrm{cut}}`$ can be investigated.
Let us discuss what results can be expected.
As we have shown in Fig.1, when $`y_{\mathrm{cut}}`$ is very big the "2-jet" sample coincides with the whole event sample, $`R_2=1`$. In this case, the fluctuations are known to be isotropic in the 3-D phase space, cf. Eq.(6), i.e. the parameters $`\gamma _a`$ for the three 1-D variables ($`y,p_\mathrm{t},\phi `$) are equal to each other ($`\gamma _{p_\mathrm{t}}=\gamma _\phi =\gamma _y`$).
Fig.3 The variation of $`\gamma `$ with $`R_2`$ and $`y_{\mathrm{cut}}`$: ($`a`$) Durham algorithm; ($`b`$) Jade algorithm
As $`y_{\mathrm{cut}}`$ decreases, the multi-jet events which contaminate the "2-jet" sample will gradually be cleared away, and at a certain value of $`y_{\mathrm{cut}}`$ a "pure" 2-jet sample will be formed. The word "pure" is used here to indicate that these two jets have developed softly from the initial partons and no other jet has been mixed in.
It can be expected that the dynamical fluctuations in the "pure" 2-jet sample will mimic those in soft hadronic collisions, i.e. be isotropic in the transverse plane and anisotropic in the longitudinal-transverse planes ($`\gamma _{p_\mathrm{t}}=\gamma _\phi \ne \gamma _y`$).
Thus the variation of the $`\gamma `$'s with decreasing $`y_{\mathrm{cut}}`$ (or decreasing $`R_2`$) is expected to be the following. At first, when $`y_{\mathrm{cut}}`$ is very big, the "2-jet" sample is identical to the whole event sample ($`R_2=1`$), and the three $`\gamma `$'s are equal to each other. As $`y_{\mathrm{cut}}`$ (and with it $`R_2`$) decreases, the three $`\gamma `$'s depart from each other and become, at a certain value of $`y_{\mathrm{cut}}`$, isotropic in ($`p_\mathrm{t},\phi `$) and anisotropic in ($`y,p_\mathrm{t}`$) and ($`y,\phi `$), i.e. $`\gamma _{p_\mathrm{t}}=\gamma _\phi \ne \gamma _y`$.
The results of the simulation are presented in Fig.3($`a`$). It can be seen from the figure that the above expectation is borne out. The characteristic behaviour $`\gamma _{p_\mathrm{t}}=\gamma _\phi \ne \gamma _y`$ is reached at $`y_{\mathrm{cut}}\approx 0.0048`$ ($`R_2\approx 0.48`$). The values of the $`\gamma `$'s and the corresponding Hurst exponents at this point are listed in Table I. For convenience we will call this point, where $`\gamma _{p_\mathrm{t}}=\gamma _\phi \ne \gamma _y`$, the transition point.
Table I Parameter $`\gamma `$ and Hurst exponents at the transition point
| $`y_{\mathrm{cut}}=0.0048`$ (Durham), $`R_{2\mathrm{jet}}=0.48`$ | | | | | |
| --- | --- | --- | --- | --- | --- |
| $`\gamma _y`$ | $`\gamma _{p_\mathrm{t}}`$ | $`\gamma _\phi `$ | $`H_{yp_\mathrm{t}}`$ | $`H_{y\phi }`$ | $`H_{p_\mathrm{t}\phi }`$ |
| 1.074$`\pm `$0.037 | 0.514$`\pm `$0.080 | 0.461$`\pm `$0.021 | 0.73$`\pm `$0.06 | 0.70$`\pm `$0.06 | 0.96$`\pm `$0.10 |
Fig.4 Comparison of the speed of going to saturation of $`F_2`$ for different 1-D variables at different $`R_2`$
Note that in the Durham algorithm that we are using, the test variable $`y`$ is essentially the relative transverse momentum $`k_{\perp }`$ squared . The transition point $`y_{\mathrm{cut}}\approx 0.0048`$ corresponds to $`k_{\perp }\approx 4`$ GeV/c, which is consistent with the critical value of transverse momentum between the soft and hard (semi-hard) components in hadron-hadron collisions.
It is also instructive to follow the evolution of the $`\gamma `$'s with increasing $`y_{\mathrm{cut}}`$ (increasing $`R_2`$).
It can be seen from Fig.3($`a`$) that, when $`y_{\mathrm{cut}}`$ ($`R_2`$) is very small, where the two "jets" are highly undeveloped and each consists mainly of one hard parton, $`\gamma _\phi `$ is consistent with zero, i.e. there is no dynamical fluctuation in $`\phi `$ at all. On the other hand, at this point $`\gamma _{p_\mathrm{t}}`$ is almost as large as $`\gamma _y`$, showing that the dynamical fluctuations in this undeveloped "2-jet" system behave as an isotropic 2-D fractal in the ($`y,p_\mathrm{t}`$) plane.
When $`y_{\mathrm{cut}}`$ ($`R_2`$) increases, $`\gamma _{p_\mathrm{t}}`$ departs from $`\gamma _y`$ and approaches $`\gamma _\phi `$. What is important is that $`\gamma _{p_\mathrm{t}}`$ and $`\gamma _\phi `$, instead of rising in parallel, cross over each other, turning from $`\gamma _\phi <\gamma _{p_\mathrm{t}}`$ to $`\gamma _\phi >\gamma _{p_\mathrm{t}}`$ and resulting in a sharp transition point. After that, the three $`\gamma `$'s eventually approach a common value, and the "2-jet" sample approaches the whole event sample.
In order to show the evolution of the anisotropy of the dynamical fluctuations with the variation of $`y_{\mathrm{cut}}`$ ($`R_2`$) more clearly, we take three typical points: $`(A)`$ $`R_2=0.18`$, $`(B)`$ $`R_2=0.48`$, $`(C)`$ $`R_2=1`$, indicated by arrows in Fig.3($`a`$). Point $`A`$ corresponds to the case of undeveloped jets, $`B`$ is the transition point and $`C`$ is the whole sample. Since the anisotropy of the dynamical fluctuations is determined solely by the rate at which the FM approach saturation, which is characterized by the parameter $`\gamma `$, we rescale $`F_2(p_\mathrm{t})`$ and $`F_2(\phi )`$ appropriately, letting them coincide with $`F_2(y)`$ at $`M=3`$ and arrive at a common saturation height with $`F_2(y)`$. The results are shown in Fig.4. It can be seen from the figure that when $`R_2=0.18`$, $`F_2(\phi )`$ does not increase with $`M`$ at all, i.e. there is no dynamical fluctuation in $`\phi `$, while at this point $`F_2(p_\mathrm{t})`$ and $`F_2(y)`$ go to saturation at almost the same speed. When $`R_2=0.48`$ (the transition point), $`F_2(p_\mathrm{t})`$ and $`F_2(\phi )`$ go to saturation at almost the same speed, much more slowly than $`F_2(y)`$ does. When $`R_2=1`$ (the whole sample) all three $`F_2`$ coincide and go to saturation at an identical speed.
For comparison, we have also done the same analysis using the Jade algorithm. The results, shown in Fig.3($`b`$), are qualitatively the same: at small $`y_{\mathrm{cut}}`$ ($`R_2`$), $`\gamma _\phi `$ vanishes and $`\gamma _{p_\mathrm{t}}\approx \gamma _y`$; as $`y_{\mathrm{cut}}`$ ($`R_2`$) increases, $`\gamma _{p_\mathrm{t}}`$ and $`\gamma _\phi `$ approach each other and cross over at $`y_{\mathrm{cut}}\approx 0.158`$ ($`R_2\approx 0.39`$). This is the transition point for the Jade algorithm. The parameter $`\gamma `$'s at this point are $`\gamma _y=1.22\pm 0.04`$, $`\gamma _{p_\mathrm{t}}=0.51\pm 0.09`$, $`\gamma _\phi =0.59\pm 0.08`$.
In this letter we have shown using Lund Monte Carlo that, unlike the smooth change of the average properties of the hadronic system inside jets, the anisotropy of the dynamical fluctuations in these systems changes abruptly with the variation of the cut parameter $`y_{\mathrm{cut}}`$. At $`\sqrt{s}=91.2`$ GeV, the dynamical fluctuations in the whole e<sup>+</sup>e<sup>-</sup> collision sample (large $`y_{\mathrm{cut}}`$ limit) are fully isotropic in the 3-D phase space, and become highly anisotropic (almost no fluctuation at all in $`\phi `$) for small $`y_{\mathrm{cut}}`$, where the "jet" is highly undeveloped. A transition point exists, where the hadronic system inside a jet behaves like that of soft hadronic collisions, i.e. the dynamical fluctuations are isotropic in the transverse plane and anisotropic in the longitudinal-transverse planes. The corresponding relative transverse momentum at the transition point is about $`k_{\perp }\approx 4`$ GeV/c, which is consistent with the critical value of transverse momentum between the soft and hard (semi-hard) components in hadron-hadron collisions. Thus the transition point determines the physically meaningful value of $`y_{\mathrm{cut}}`$, and thereby gives the number of jets in an event. The anisotropy of the dynamical fluctuations can serve as a sensitive probe for hard and soft processes.
This observation is meaningful not only for the study of jets in e<sup>+</sup>e<sup>-</sup> collisions but also for jet physics in relativistic heavy-ion collisions, which will become important after the new generation of colliders at BNL (RHIC) and CERN (LHC) come into operation.
Acknowledgement
The authors are grateful to Wu Yuanfang and Xie Qubin for valuable discussions.
## Abstract
We present preliminary results of the first $`\pi `$ interferometry (HBT) excitation function at intermediate AGS energies. The beam energy evolution of the correlations' dependence on $`m_T`$, centrality, and emission angle with respect to the reaction plane is discussed. Comparisons with predictions of the RQMD cascade model are made.
Two-particle intensity interferometry (HBT) measurements have long been used to study the geometry and dynamics of heavy ion collisions (see, e.g. ). Pion correlation functions are sensitive to the pion source size, shape, decay-time, and long-lived particle (e.g. $`\mathrm{\Lambda }`$) production. In addition, dynamic effects such as flow produce space-momentum correlations resulting in dependences of the correlation functions on $`\pi `$ momentum.
In this paper, we discuss an excitation function (2-8 AGeV) of $`\pi ^{-}`$ HBT measurements. Studying the evolution of the correlations as a function of $`E_{beam}`$ is important for two reasons. Firstly, a sudden increase, at some $`E_{beam}`$, in the lifetime of the hadronic fireball has long been proposed as a robust signal of the onset of QGP formation . Secondly, the sensitivity of correlation functions to the underlying physics makes such measurements potent tools to test the dynamics of microscopic models of heavy ion collisions. Many models attempt to extrapolate to RHIC energies. Confidence in the ability to extrapolate (determined by the correct underlying physics and its evolution with energy) would be enhanced if the model reproduces an excitation function of detailed HBT systematics.
Using the large-acceptance EOS Time Projection Chamber , the E895 collaboration measured charged particles from Au+Au collisions at 2, 4, 6, and 8 AGeV at the Brookhaven AGS. Good particle identification minimized $`e^{-}`$ contamination of the $`\pi ^{-}`$ sample. Momentum resolution, largely due to multiple Coulomb scattering and straggling in the 3% interaction length target, was on the order of 1.5-3%. The experimental correlation functions have been corrected for the momentum resolution with an iterative method similar to that employed by the NA44 collaboration . This correction typically increases the fitted $`\lambda `$ parameter by 15%, and the radii by 5%.
Track merging and splitting effects were eliminated by a two-track geometrical cut based on the tracking algorithm. As expected, this cut discards some pairs (in the "real" and event-mixed distributions) at low relative momentum $`q`$. However, due to the detector geometry, this cut preferentially discards low-$`q`$ pairs at high $`p_T`$; thus, to minimize phase-space bias effects, we restrict our analysis to low $`p_T`$ and use narrow windows in $`p_T`$.
A full Coulomb-wave integration over a spatial source of 5-fm Gaussian radius was used to generate the Coulomb correction, which was applied pair-wise to the event-mixed denominator of the correlation function. Identical Coulomb correction functions were applied to data and to correlation functions from the RQMD (see below). Further details of the analysis have been reported previously .
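For orientation, the structure of such a pair-wise Coulomb weight can be sketched with the point-source Gamow factor; this is a much cruder object than the finite-size Coulomb-wave integration actually used here, and the formula below (with reduced mass $`m_\pi /2`$) is shown only to illustrate how the event-mixed denominator is reweighted pair by pair.

```python
# Point-source Coulomb (Gamow) weight for like-sign pion pairs, shown
# only as a crude stand-in for the 5-fm Coulomb-wave integration used
# in the analysis.  k_star is the pair relative momentum in MeV/c.
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant
M_PION = 139.57         # MeV

def gamow_like_sign(k_star):
    eta = (M_PION / 2.0) * ALPHA / k_star   # Sommerfeld parameter
    x = 2.0 * np.pi * eta
    return x / np.expm1(x)                  # suppression at small k*

# Applied pair-wise to the mixed-event denominator:
#   denominator[i] *= gamow_like_sign(kstar[i])
print(gamow_like_sign(np.array([5.0, 20.0, 100.0])))
```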
The high quality of the data is seen in Figure 1, where projections in the Bertsch-Pratt (BP), or out-side-long, system are shown for midrapidity pions from central events at each bombarding energy. The relative momentum $`q`$ was calculated in the fixed Au+Au c.m. frame. The functional form
$$C(q_{out},q_{side},q_{long})=1+\lambda e^{-R_{out}^2q_{out}^2-R_{side}^2q_{side}^2-R_{long}^2q_{long}^2-2R_{ol}^2q_{out}q_{long}}$$
(1)
was fit to the data, using a maximum-likelihood technique . The cross-term $`R_{ol}^2`$ was consistent with zero in all cases and uncorrelated with the other parameters.
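To make the parametrisation of eq. (1) concrete, the sketch below fits it to a toy binned correlation function by least squares; the actual analysis maximises a likelihood on pair counts, and the toy radii and noise level here are arbitrary choices of ours.

```python
# Least-squares sketch of the Bertsch-Pratt fit of eq. (1) on toy data.
# q in fm^-1 and R in fm, so R*q is dimensionless; the "measurement" is
# the model at arbitrary true parameters plus Gaussian noise.
import numpy as np
from scipy.optimize import curve_fit

def bp_model(q, lam, Ro, Rs, Rl, Rol):
    qo, qs, ql = q
    return 1.0 + lam * np.exp(-Ro**2 * qo**2 - Rs**2 * qs**2
                              - Rl**2 * ql**2 - 2.0 * Rol**2 * qo * ql)

rng = np.random.default_rng(0)
q = rng.uniform(0.02, 0.5, size=(3, 4000))        # (q_out, q_side, q_long)
data = bp_model(q, 0.6, 5.0, 5.5, 6.0, 0.0)
data += rng.normal(0.0, 0.01, size=data.shape)

popt, _ = curve_fit(bp_model, q, data, p0=(0.5, 4.0, 4.0, 4.0, 0.0))
print(popt)   # recovers lambda and the radii; R_ol consistent with 0
```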
Correlation functions were also constructed in the Yano-Koonin-Podgoretskiĭ (YKP) decomposition ; here, the effective lifetime $`R_0`$ is fit more directly. We fit to the form
$$C(q_0,q_{\perp },q_{\parallel })=1+\lambda e^{-R_{\perp }^2q_{\perp }^2-R_{\parallel }^2(q_{\parallel }^2-q_0^2)-(R_0^2+R_{\parallel }^2)(q\cdot U)^2}$$
(2)
The excitation function of the fit results is presented in Figure 3. Also shown are results of fits to correlation functions generated by using the $`\pi ^{-}`$ freeze-out points from the RQMD (v2.3) model as input to the two-particle correlator code CRAB . Both the data and the model show a decrease in the $`\lambda `$ parameter, due to increased production of long-lived $`\pi ^{-}`$-emitting particles at higher energy .
While the longitudinal radii, $`R_{long}`$ and $`R_{\parallel }`$, display little dependence on beam energy, the observed decrease of the transverse radii $`R_{side}`$ and $`R_{out}`$ comes as something of a surprise. At low energy, the YKP fits suggest that the model produces a $`\pi ^{-}`$ source that is too small and too long-lived; this leads in the BP decomposition to an underprediction of $`R_{side}`$, but a reasonable agreement in $`R_{out}`$, as the space and time effects partially cancel.
Part of the reason for the decrease in the apparent transverse size is revealed in Figure 3. It is clear that the $`m_T`$-dependence of $`R_{side}`$ (and the space-momentum correlation that causes it) becomes stronger with $`E_{beam}`$ in the data, while the model suggests an $`m_T`$-dependence rather independent of energy, and reminiscent of trends at higher energy. The observed trend may suggest that collective transverse flow builds with bombarding energy, and is only strong enough to affect $`R_{side}(m_T)`$ above $`E_{beam}\sim 4`$ AGeV.
Although dynamics determines the $`m_T`$-dependence of the radii, it is worthwhile to check that the HBT radii track somewhat with geometry. Figure 5 shows the impact parameter ($`b`$) dependence of the HBT radii ($`b`$ was estimated from the charged particle multiplicity). $`\lambda `$ increases with $`b`$, as the production of long-lived $`\pi `$-emitting particles is suppressed relative to direct pions. The transverse radii ($`R_{side}`$ and $`R_{out}`$) decrease for more peripheral collisions, as expected, while $`R_{long}`$ shows little $`b`$-dependence. The trends suggest that the measured pion source reflects the overlap volume of the colliding nuclei.
More detailed information may be obtained by studying the HBT signal as a function of the $`\pi ^{-}`$ emission angle with respect to the reaction plane. The reaction plane is calculated only from the momenta of $`Z\ge 2`$ nuclei, for every event , so auto-correlations are not an issue. From an overlap-volume picture, one expects an anisotropic apparent shape in the transverse direction for non-central collisions, with a larger spatial scale perpendicular to the reaction plane. Deviations may reflect the non-isotropic flow dynamics of the system prior to freeze-out, or may carry information concerning the opacity of the source .
Preliminary results for 2, 4, and 6 AGeV collisions at $`b=5`$-$`7`$ fm are shown in Fig. 5. At the lower energy, $`R_{side}`$ (the radius most closely related to geometry ) exhibits a $`\varphi _{rp}`$-dependence consistent with geometric considerations. While RQMD simulations (with perfect reaction plane resolution) display similar trends at all energies, the $`R_{side}`$ oscillation is not seen in our data at higher energy. Since the radii are not corrected for the finite dispersion, this is due at least in part to the worsening resolution with which the reaction plane is measured. Further study of this novel HBT signal is required.
In summary, we are mapping out the systematics of pion correlations in the energy range between the Bevalac and the maximum AGS energy. Large jumps in source size or lifetime at some collision energy, which might indicate the onset of QGP formation, are not observed. Surprisingly, the apparent source size is larger at the lower beam energies; this appears to be largely a consequence of weaker space-momentum correlations there. The RQMD model, with or without mean-field effects, does not reproduce the data; in the model at low energy, the effective size is too small, the lifetime too large, and $`R(m_T)`$ does not evolve with beam energy. The impact parameter dependence of the radii follows naive expectations from geometry. An HBT analysis correlated with the event-wise reaction plane reveals a significant oscillation in $`R_{side}`$ at low beam energy.
## 1. Introduction and statement of the results
Let $`X`$ be a connected smooth compact manifold with a smooth left action of a compact connected Lie group $`G`$. Our aim is to study liftings of the action of $`G`$ to complex line bundles $`L\to X`$. Of course, it is not always possible to find such a lift. For example, if $`x\in X`$ has trivial stabiliser, then the restriction $`L|_{Gx}`$ has to be topologically trivial.
The problem which we consider is very natural and has been already studied by several people. The general question on lifting of smooth actions to principal bundles was considered for example by R. Palais and T. Stewart \[PS, S\]. The more concrete problem of lifting actions to complex line bundles was studied for example by B. Kostant in \[Ko\], where he proved that if $`G`$ is simply connected, $`X`$ is symplectic and the action of $`G`$ is Hamiltonian, then there is always some lift of the action to $`L`$ (see Theorem 4.5.1 in \[Ko\]), and by A. Hattori and T. Yoshida \[HY\] and R. Lashof \[L\].
Let $`EG\to BG`$ be the universal $`G`$-principal bundle. Fix a point $`x_0\in BG`$, and denote by $`\iota :X\to X_G`$ the inclusion of the fibre over $`x_0\in BG`$ of the Borel construction $`X_G=EG\times _GX\to BG`$.
Our first result is a new proof of the following theorem, which was proved in \[HY\]. Our method of proof is however different from theirs.
###### Theorem 1.1.
Let $`L\to X`$ be a line bundle. The action of $`G`$ on $`X`$ lifts to a linear action on $`L`$ if and only if
$$c_1(L)\in \iota ^{*}H_G^2(X;\mathbb{Z}).$$
Furthermore, if $`c_1(L)\in \iota ^{*}H_G^2(X;\mathbb{Z})`$, then the different lifts of the action are classified by $`\iota ^{*-1}(c_1(L))`$.
Fix from now on a metric on $`L`$. All the connections we will take on $`L`$ will be assumed to be unitary with respect to this metric, and all the actions of $`G`$ on $`L`$ will keep the metric fixed.
Let $`\mathfrak{g}=\mathrm{Lie}(G)`$. To describe lifts of the action of $`G`$ and invariant connections on $`L`$ we will use the Cartan model for real equivariant cohomology (see section 2 for the necessary definitions). Suppose that $`c_1(L)=\iota ^{*}l`$ for some $`l\in H_G^2(X;\mathbb{Z})`$, and let $`\alpha -\mu `$ be a closed element in the Cartan complex $`\mathrm{\Omega }_G^{*}(X;i\mathbb{R})`$ representing the class $`2\pi il\in H_G^2(X;i\mathbb{R})`$, where $`\alpha \in \mathrm{\Omega }^2(X;i\mathbb{R})^G`$ and $`\mu \in \mathrm{\Omega }^0(X;i\mathfrak{g}^{*})^G`$. Any connection $`\nabla `$ on $`L`$ whose curvature is $`\alpha `$ can be combined with $`\mu `$ to obtain an infinitesimal lift of the action (see section 3 for definitions and Theorem 3.1), and we will study whether there is a connection $`\nabla `$ which defines an infinitesimal lift that exponentiates to an action of $`G`$. Let $`\mathcal{A}^\alpha `$ be the set of gauge equivalence classes of connections on $`L`$ whose curvature is $`\alpha `$, and let $`a_1:H_1(G;\mathbb{Z})\to H_1(X;\mathbb{Z})`$ be the map induced by the map $`a_1(x):G\ni g\mapsto gx\in X`$ for any $`x`$ (since $`X`$ is connected, the map in homology is independent of $`x`$).
###### Theorem 1.2.
The set $`\mathcal{A}_\alpha ^G(\mu )`$ of gauge equivalence classes of connections with curvature $`\alpha `$ which, combined with $`\mu `$, define an infinitesimal lift which exponentiates to an action of $`G`$, is a subtorus of $`\mathcal{A}^\alpha `$ of dimension $`b_1(X)-dim(\mathrm{Im}a_1\otimes \mathbb{R})`$, where $`b_1(X)=dimH^1(X;\mathbb{R})`$. In particular, if $`X`$ is symplectic and the action of $`G`$ is Hamiltonian, then $`\mathcal{A}_\alpha ^G(\mu )=\mathcal{A}^\alpha `$.
In fact, we get in this way all the possible lifts of the action. This is due to the following reasons: (1) given any lift, there is some invariant connection (just take any connection and average); (2) using the construction of differential forms in the Cartan complex representing the first equivariant Chern class $`c_1^G(L)`$ of $`L`$ (see \[BV\]) we get a closed element $`\alpha -\mu `$ representing the class $`2\pi ic_1^G(L)`$; and (3), the group $`G`$ is connected, so a lift of the action is uniquely determined by the infinitesimal action on $`L`$.
The results and techniques in this paper have interesting consequences in the case in which $`X`$ is symplectic and the action of $`G`$ is Hamiltonian. The following corollary is an analogue of a well known result in Geometric Invariant Theory (see Corollary 1.6 in \[MFK\]).
###### Corollary 1.3.
Let $`G`$ act on a symplectic manifold $`(X,\omega )`$ in a Hamiltonian fashion. Then there exists an integer $`d\ge 1`$ with the following property. For any line bundle $`L\to X`$ with a connection $`\nabla `$ whose curvature is $`G`$-invariant, there is a lift of the action of $`G`$ to $`L^d`$ such that the induced connection $`\nabla ^d`$ on $`L^d`$ is $`G`$-invariant.
###### Corollary 1.4.
Let $`G`$ act on a symplectic manifold $`(X,\omega )`$ in a Hamiltonian fashion. Let $`\epsilon >0`$. There exists a symplectic form $`\omega ^{\prime }`$ such that
(1) $`\omega ^{\prime }`$ is preserved by $`G`$,
(2) $`|\omega -\omega ^{\prime }|_{C^0}<\epsilon `$,
(3) the action of $`G`$ on $`(X,\omega ^{\prime })`$ is Hamiltonian,
(4) there is a natural number $`k>0`$, a Hermitian line bundle $`L\to X`$ with a unitary connection $`\nabla `$ with curvature $`ik\omega ^{\prime }`$ and a linear action of $`G`$ lifting the action on $`X`$ and preserving $`\nabla `$.
This paper is organized as follows: in section 2 we recall some facts on equivariant cohomology which we will need; in section 3 we state the relation between infinitesimal lifts and 2-forms in the Cartan complex; in section 4 we define and prove some key properties of the monodromy map $`M_\gamma `$, which measures how far an infinitesimal lift is from exponentiating to an action of $`G`$; in section 5 we study how to choose connections which provide liftings of the action of $`G`$; finally, in section 6 we give the proofs of Theorems 1.1 and 1.2, and Corollaries 1.3 and 1.4.
Acknowledgements. I thank M. Vergne for having pointed out to me the paper of Kostant \[Ko\]. I also thank O. García-Prada for some comments on the paper.
## 2. Equivariant cohomology
In this section we recall some basic facts on equivariant cohomology. For more information the reader is referred to \[AB, BGV, GS\].
Let $`\pi :X_G=EG\times _GX\to BG`$ be the Borel construction of $`X`$. The equivariant cohomology of $`X`$ is by definition the singular cohomology of $`X_G`$, and is denoted, for any ring $`R`$, as
$$H_G^{*}(X;R):=H^{*}(X_G;R).$$
Let $`\mathfrak{g}`$ be the Lie algebra of $`G`$. Let $`\mathrm{\Omega }^{*}(X)`$ be the complex of differential forms on $`X`$. Denote by $`X:\mathfrak{g}\to \mathrm{\Gamma }(TX)`$ the map which assigns to any $`s\in \mathfrak{g}`$ the vector field on $`X`$ generated by the infinitesimal action of $`s`$. So, if $`f\in \mathrm{\Omega }^0(X)`$, $`x\in X`$ and $`s\in \mathfrak{g}`$, then $`X(s)(f)(x)=lim_{\epsilon \to 0}\epsilon ^{-1}(f(e^{\epsilon s}x)-f(x))`$. Consider the complex
$$\mathrm{\Omega }_G^{*}(X)=(\mathrm{\Omega }^{*}(X)\otimes \mathbb{R}[\mathfrak{g}])^G$$
(as usual the superscript <sup>G</sup> means $`G`$-invariant elements) with the grading obtained from the usual grading in $`\mathrm{\Omega }^{*}(X)`$ and twice the grading in $`\mathbb{R}[\mathfrak{g}]`$ given by the degree, together with the differential $`d_\mathfrak{g}`$ defined by
$$d_\mathfrak{g}(\eta )(s)=d(\eta (s))+\iota _{X(s)}\eta (s),$$
where $`\eta \in \mathrm{\Omega }_G^{*}(X)`$, $`s\in \mathfrak{g}`$, and $`\iota _v:\mathrm{\Omega }^{*}(X)\to \mathrm{\Omega }^{*-1}(X)`$ is the contraction map. One can check that $`d_\mathfrak{g}^2=0`$, and the complex $`(\mathrm{\Omega }_G^{*}(X),d_\mathfrak{g})`$ is called the Cartan complex. It will be our main tool in this paper. Note that in this paper we sometimes consider elements of $`\mathrm{\Omega }_G^{*}(X;i\mathbb{R})=i\mathrm{\Omega }_G^{*}(X)`$.
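The verification that $`d_\mathfrak{g}^2=0`$ is short enough to record here (it is not spelled out in the text); it is the standard argument via Cartan's magic formula, using the invariance of $`\eta `$:

```latex
% d_g squared on an invariant element, for s in g:
d_{\mathfrak{g}}^{2}(\eta)(s)
  = \bigl(d + \iota_{X(s)}\bigr)^{2}\,\eta(s)
  = \bigl(d\,\iota_{X(s)} + \iota_{X(s)}\,d\bigr)\,\eta(s)
  = L_{X(s)}\,\eta(s) = 0,
```

since $`d^2=0`$ and $`\iota _{X(s)}^2=0`$, Cartan's formula gives the Lie derivative, and for invariant $`\eta `$ the flow of $`X(s)`$ preserves $`\eta (s)`$ (note that $`\mathrm{Ad}_{\mathrm{exp}(ts)}s=s`$).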
The following classical theorem (which is a generalisation of de Rham's theorem) is proved for example as Theorem 2.5.1 in \[GS\].
###### Theorem 2.1.
There is a natural isomorphism $`H_G^{*}(X;\mathbb{R})\cong H^{*}(\mathrm{\Omega }_G^{*}(X),d_\mathfrak{g})`$.
We will need an explicit description of the isomorphism given by the theorem in degree 2, as given by the following lemma. The proof can easily be deduced from the proof of Theorem 2.5.1 in \[GS\].
###### Lemma 2.2.
Let $`\alpha -\mu \in \mathrm{\Omega }_G^2(X)`$ satisfy $`d_\mathfrak{g}(\alpha -\mu )=0`$. Let $`f:\mathrm{\Sigma }\to X_G`$ be a continuous map. Let $`P_f:=(\pi \circ f)^{*}EG\to \mathrm{\Sigma }`$, and let $`\varphi _f`$ be the induced section of $`\pi _f:X_f=P_f\times _GX=(\pi \circ f)^{*}X_G\to \mathrm{\Sigma }`$. Suppose that both $`P_f`$ and $`\varphi _f`$ are smooth, and let $`A`$ be a connection on $`P_f`$. We then get a projection $`\pi _A:TX_f\to \mathrm{Ker}d\pi _f`$, which allows us to pull back $`\alpha `$ to a form $`\pi _A^{*}\alpha \in \mathrm{\Omega }^2(X_f)`$ vanishing on the tangent vectors which are horizontal with respect to $`A`$. Then
$$\langle [\mathrm{\Sigma }],f^{*}[\alpha -\mu ]\rangle =\int _\mathrm{\Sigma }\varphi _f^{*}(\pi _A^{*}\alpha )-\langle \mu ,F_A\rangle ,$$
where $`F_A\in \mathrm{\Omega }^2(P_f\times _{\mathrm{Ad}}\mathfrak{g})`$ is the curvature of $`A`$.
Observe that a constant and equivariant map $`\mu :X\to \mathfrak{g}^{*}`$ represents an integral element $`[\mu ]\in H_G^2(X;\mathbb{R})`$ if and only if for any $`u\in \mathfrak{g}`$ such that $`\mathrm{exp}(u)=1`$ we have $`\langle \mu ,u\rangle \in \mathbb{Z}`$.
Finally, the map $`\iota ^{*}:H_G^2(X;\mathbb{R})\to H^2(X;\mathbb{R})`$ can be written as follows using the Cartan and de Rham complexes: if $`\alpha -\mu `$ is a closed element of $`\mathrm{\Omega }_G^2(X)`$, then
(1)
$$\iota ^{*}[\alpha -\mu ]=[\alpha ]\in H^2(X;\mathbb{R})$$
(of course, since $`d_\mathfrak{g}(\alpha -\mu )=0`$ we have $`d\alpha =0`$).
## 3. Infinitesimal lift of the action
Let $`\nabla `$ be a connection on $`L`$ whose curvature $`\alpha =\nabla ^2\in \mathrm{\Omega }^2(X;i\mathbb{R})`$ is $`G`$-invariant. We will call an infinitesimal lift of the action of $`G`$ on $`X`$ to an action on $`L`$ any map $`\stackrel{~}{X}:\mathfrak{g}\to \mathrm{\Gamma }(TL)`$ which satisfies<sup>1</sup> (<sup>1</sup>Since the action of $`G`$ is on the left, for any $`s,s^{\prime }\in \mathfrak{g}`$ we have $`X([s,s^{\prime }])=-[X(s),X(s^{\prime })]`$.) $`\stackrel{~}{X}([s,s^{\prime }])=-[\stackrel{~}{X}(s),\stackrel{~}{X}(s^{\prime })]`$ for any $`s,s^{\prime }\in \mathfrak{g}`$ and such that
$$d\pi \circ \stackrel{~}{X}=X.$$
Let $`u=2\pi i\in i\mathbb{R}=\mathrm{Lie}(S^1)`$. Let $`U_L\in \mathrm{\Gamma }(\mathrm{Ker}d\pi )`$ be the vertical tangent field generated by the infinitesimal action of $`u`$ on $`L`$ given by fibrewise multiplication.
Let $`X^{\nabla }:\mathfrak{g}\to \mathrm{\Gamma }(TL)`$ be the map which assigns to $`s\in \mathfrak{g}`$ the horizontal lift of $`X(s)`$ obtained using $`\nabla `$. The following is well known (see for example Section 3 in \[Ko\]).
###### Theorem 3.1.
Let $`\mu :X\to i\mathfrak{g}^{*}`$ be a map which satisfies $`d\mu (s)=\iota _{X(s)}\alpha `$ for any $`s\in \mathfrak{g}`$. Then $`\stackrel{~}{X}^{\nabla ,\mu }:=X^{\nabla }+i\mu U_L`$ is an infinitesimal lift of the action of $`G`$ on $`X`$.
Conversely, for any infinitesimal lift $`\stackrel{~}{X}`$ which leaves $`\nabla `$ invariant, the function $`\mu :X\to i\mathfrak{g}^{*}`$ defined by $`\stackrel{~}{X}:=X^{\nabla }+i\mu U_L`$ satisfies $`d\mu (s)=\iota _{X(s)}\alpha `$ for any $`s\in \mathfrak{g}`$.
Note that the condition $`d\mu (s)=\iota _{X(s)}\alpha \text{ for any }s\in \mathfrak{g}`$ is equivalent to asking
$$d_\mathfrak{g}(\alpha -\mu )=0.$$
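Spelling out this equivalence (a one-line computation added here for the reader's convenience): since the contraction of the $`0`$-form $`\mu (s)`$ vanishes,

```latex
% Expanding d_g on alpha - mu at s in g:
d_{\mathfrak{g}}(\alpha-\mu)(s)
  = d\alpha - d\mu(s) + \iota_{X(s)}\alpha - \iota_{X(s)}\mu(s)
  = d\alpha + \bigl(\iota_{X(s)}\alpha - d\mu(s)\bigr),
```

and the $`3`$-form part $`d\alpha `$ vanishes automatically because $`\alpha `$ is a curvature, so closedness is exactly the condition $`d\mu (s)=\iota _{X(s)}\alpha `$ for all $`s`$.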
## 4. The monodromy map $`M_\gamma `$
Let $`(L,\nabla )\to X`$ be as in the preceding section, let $`\alpha =\nabla ^2`$ be the curvature of $`\nabla `$, and let $`\mu :X\to i\mathfrak{g}^{*}`$ be a map which satisfies $`d\mu (s)=\iota _{X(s)}\alpha `$ for any $`s\in \mathfrak{g}`$. Let $`\stackrel{~}{X}=\stackrel{~}{X}^{\nabla ,\mu }:\mathfrak{g}\to \mathrm{\Gamma }(TL)`$ be the corresponding infinitesimal lift.
Let $`\gamma :S^1\to G`$ be any representation. As before, let $`u=2\pi i\in i\mathbb{R}=\mathrm{Lie}(S^1)`$. For any $`x\in X`$ and $`y\in L_x`$, let $`\nu _x:[0,1]\to L`$ be the integral line of the vector field $`\stackrel{~}{X}(\gamma _{*}(u))`$ with initial value $`\nu _x(0)=y`$. We then have $`\nu _x(1)\in L_x`$, and there is a unique $`M_\gamma (x)=M_\gamma ^{\nabla ,\mu }(x)\in S^1`$ (independent of $`y`$) such that $`\nu _x(1)=M_\gamma (x)\nu _x(0)`$.
The map $`M_\gamma :X\to S^1`$ which we have defined measures the extent to which the infinitesimal action given by $`(\nabla ,\mu )`$ exponentiates to an action of the group. We will see below (viz. Lemma 5.2) that the condition $`M_\gamma =1`$ for any $`\gamma `$ is enough to ensure that the infinitesimal action exponentiates.
Let $`x\in X`$ be any point, and let $`\mathrm{Mon}^{\nabla }(S^1x)\in S^1`$ be the monodromy of parallel transport using $`\nabla `$ along the path $`[0,1]\ni t\mapsto \gamma (e^{2\pi it})x`$. The following formula for $`M`$ can be easily proved using coordinates in a neighbourhood of $`S^1x`$ (see also Theorem 2.10.1 in \[Ko\]):
(2)
$$M_\gamma ^{\nabla ,\mu }(x)=\mathrm{Mon}^{\nabla }(S^1x)\mathrm{exp}(2\pi \langle \mu (x),\gamma _{*}(u)\rangle ).$$
An easy consequence of this formula is that $`M`$ is gauge invariant, i.e., for any gauge transformation $`g:L\to L`$,
(3)
$$M_\gamma ^{g^{*}\nabla ,\mu }(x)=M_\gamma ^{\nabla ,\mu }(x).$$
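A short justification of (3), not spelled out in the text: the monodromy of a loop based at $`x`$ is conjugated by the value of the gauge transformation at the base point, and for the abelian group $`S^1`$ this conjugation is trivial,

```latex
\mathrm{Mon}^{g^{*}\nabla}(S^{1}x)
  = g(x)^{-1}\,\mathrm{Mon}^{\nabla}(S^{1}x)\,g(x)
  = \mathrm{Mon}^{\nabla}(S^{1}x),
```

while $`\mu `$ is untouched by $`g`$, so (3) follows from (2).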
Let $`\eta \in \mathrm{\Omega }^1(X;i\mathbb{R})`$, so that $`\nabla +\eta `$ is another connection on $`L`$. Then we also deduce from (2) that
(4)
$$M_\gamma ^{\nabla +\eta ,\mu }(x)=M_\gamma ^{\nabla ,\mu }(x)\mathrm{exp}\left(\int _{S^1}\gamma _x^{*}\eta \right),$$
where $`\gamma _x:S^1\to X`$ maps $`\theta `$ to $`\gamma (\theta )x\in X`$.
### 4.1. Cohomological interpretation of $`M`$
In this subsection we assume $`G=S^1`$ and $`\gamma =\mathrm{id}`$, and we denote $`M=M_\gamma `$. We identify $`i\mathbb{R}`$ with $`(i\mathbb{R})^{*}`$ by assigning to $`\alpha \in i\mathbb{R}`$ the map $`i\mathbb{R}\ni a\mapsto \langle \alpha ,a\rangle =\alpha a/2\pi `$. In particular, $`\mu \in \mathrm{\Omega }^0(X;\mathbb{R})^{S^1}`$. Let us suppose that the action of $`S^1`$ on $`X`$ is generically free, i.e., the isotropy group is trivial for generic $`x\in X`$ (if this is not true, then either the action of $`S^1`$ on $`X`$ is trivial, and the results in this section are obvious, or there is a biggest common stabiliser $`\{1\}\ne Z\subset S^1`$, so that $`S^1/Z`$ acts generically freely, and we replace $`X`$ by $`X/Z`$ and $`S^1`$ by $`S^1/Z`$).
###### Lemma 4.1.
Let $`xX`$ be a point with trivial stabiliser, and let $`O_x:S^1โX`$ be the map $`O_x(\theta )=\theta x`$. Assume that $`O_x(S^1)`$ is homologous to zero. Then, if $`\frac{i}{2\pi }[\alpha -\mu ]โH_{S^1}^2(X;โค)`$, we have $`M(x)=1`$.
###### Proof.
Let $`\mathrm{\Sigma }_0`$ be a compact surface with a fixed isomorphism $`\mathrm{\Sigma }_0S^1`$ and compatible orientation, and let $`b_0:\mathrm{\Sigma }_0X`$ be a map such that $`b_0|_{\mathrm{\Sigma }_0}=O_x`$. Note that $`\iota b_0(\mathrm{\Sigma }_0)=\iota O_x(S^1)`$ is contained in $`ES^1\times _{S^1}(S^1x)X_{S^1}`$. Since the action of $`S^1`$ on $`S^1xX`$ is free, $`ES^1\times _{S^1}(S^1x)`$ is contractible. So, denoting the unit disk by $`๐ป`$, we may take a map $`B_1:๐ปES^1\times _{S^1}(S^1x)`$ such that $`cB_1|_{S^1}=\iota b_0|_{\mathrm{\Sigma }_0}`$, where $`c(\theta )=\theta ^{-1}`$ for $`\theta S^1`$ (such a $`B_1`$ is unique up to homotopy rel $`S^1`$). Let us patch together the maps $`B_0:=\iota b_0`$ and $`B_1`$ to get a map $`B:\mathrm{\Sigma }:=\mathrm{\Sigma }_0_{S^1}(๐ป)X_{S^1}`$, where the minus sign refers to reversed orientation, so that we take the isomorphism $`c`$ to identify $`(๐ป)`$ with $`S^1`$. We claim that
(5)
$$M(x)=\mathrm{exp}โจ[\mathrm{\Sigma }],B^{}[\alpha +\mu ]โฉ.$$
Clearly, once we check the claim, the lemma is proved. To prove (5), we will apply Lemma 2.2 to the map $`B`$. Let $`\pi :X_{S^1}BS^1`$ denote the projection. Then we have $`\pi B_0=\{x_0\}`$, so that $`P_B|_{\mathrm{\Sigma }_0}`$ (we use here the notation of Lemma 2.2) is the trivial bundle. Let us fix a trivialisation $`P_B|_{\mathrm{\Sigma }_0}\mathrm{\Sigma }_0\times S^1`$ of it. Using the induced trivialisation $`X_B|_{\mathrm{\Sigma }_0}=\mathrm{\Sigma }_0\times X`$, the restriction $`\varphi _B|_{\mathrm{\Sigma }_0}`$ is given by $`(\mathrm{id},b_0)`$. Since $`๐ป`$ is contractible, $`P_B`$ is obtained by patching the trivial bundles over $`\mathrm{\Sigma }_0`$ and $`๐ป`$ through a gluing map $`\rho :\mathrm{\Sigma }_0=S^1S^1`$. The map $`b_0|_{\mathrm{\Sigma }_0}`$ has winding number $`1`$, so the section $`\varphi _B|_{\mathrm{\Sigma }_0}`$ will only glue with a section of the trivial bundle $`(๐ป)\times (S^1x)(๐ป)\times X`$ if the gluing map $`\rho `$ is the identity $`\rho (\theta )=\theta `$. Hence, the bundle $`P_B`$ must have degree $`1`$.
Take now a connection $`A`$ on $`P_B`$ which coincides over $`\mathrm{\Sigma }_0`$ with the flat connection induced by the chosen trivialisation of $`P_B|_{\mathrm{\Sigma }_0}`$. Then the curvature of $`A`$ is supported in $`๐ป`$. By Lemma 2.2, the RHS of (5) is equal to
$$_\mathrm{\Sigma }\varphi _B^{}(\pi _A^{}\alpha )+\mu ,F_A.$$
Now, by our choice of $`A`$, $`_\mathrm{\Sigma }\mu ,F_A=_๐ป\mu ,F_A`$. On the other hand, the image of the restriction $`\varphi _B|_๐ป`$ is contained in $`P_B\times _{S^1}(S^1x)X_B`$, and since $`\mu `$ is $`S^1`$ equivariant, we may write $`_๐ป\mu ,F_A=\frac{1}{2\pi }\mu (x)_๐ปF_A`$. Finally, by ChernโWeil $`_๐ปF_A=2\pi i\mathrm{deg}(P_B)=2\pi i`$.
Using again that $`\varphi _B(๐ป)P_B\times _{S^1}(S^1x)`$, we deduce that $`\varphi _B^{}(\pi _A^{}\alpha )`$ vanishes on $`๐ป`$. Indeed, the vertical part of any tangent vector to $`P_B\times _{S^1}(S^1x)`$ lies in $`\mathrm{Ker}d\pi _B|_{P_B\times _{S^1}(S^1x)}`$, which is a real line bundle, so for any $`y๐ป`$ and $`u,vT_y๐ป`$, $`\pi _A(d\varphi _B(u))`$ and $`\pi _A(d\varphi _B(v))`$ are linearly dependent, and hence $`\alpha (\pi _A(d\varphi _B(u)),\pi _A(d\varphi _B(v)))=0`$. So $`_\mathrm{\Sigma }\varphi _B^{}(\pi _A^{}\alpha )=_{\mathrm{\Sigma }_0}\varphi _B^{}(\pi _A^{}\alpha )`$. And this is equal to $`_{\mathrm{\Sigma }_0}b_0^{}\alpha `$. Taking into account that $`\alpha `$ is the curvature of a connection $``$ on the line bundle $`LX`$, one can check that
$$\mathrm{exp}\left(_{\mathrm{\Sigma }_0}b_0^{}\alpha \right)=\mathrm{Mon}^{}(S^1x)$$
(see Theorem 1.8.1 in \[Ko\]). Now, using (2) and the preceding computations we deduce
$`M(x)`$ $`=\mathrm{exp}\left(\left({\displaystyle _\mathrm{\Sigma }}\varphi _B^{}(\pi _A^{}\alpha )\right)i\mu (x)\right)`$
$`=\mathrm{exp}\left({\displaystyle _\mathrm{\Sigma }}\varphi _B^{}(\pi _A^{}\alpha )+\mu (x),F_A\right)\text{ (since }F_A=2\pi i\text{)}`$
$`=\mathrm{exp}[\mathrm{\Sigma }],B^{}[\alpha +\mu ].`$
###### Corollary 4.2.
Assume that for some $`xX`$, $`O_x(S^1)`$ is homologous to zero. Then, if $`\frac{i}{2\pi }[\alpha -\mu ]โH_{S^1}^2(X;โค)`$, we have $`M=1`$.
###### Proof.
Indeed, the condition of $`O_x(S^1)`$ being homologous to zero is independent of $`x`$, the function $`M`$ is continuous, and by assumption the set of $`xX`$ with trivial stabiliser is dense. โ
When the orbit $`S^1x`$ is not homologous to zero, the map $`M(x)`$ will depend on the connection $``$ (and not only on $`\alpha `$ and $`\mu `$), as we will see below.
###### Lemma 4.3.
Let $`x,x^{}X`$ be two points with trivial stabiliser. Then $`M(x)=M(x^{})`$.
###### Proof.
This can be proved either with local coordinates or using the same technique as above. We sketch the second strategy. For that, let $`\rho :[0,1]X`$ be a path such that $`\rho (0)=x`$ and $`\rho (1)=x^{}`$, let $`\mathrm{\Sigma }_0=[0,1]\times S^1`$ and let $`b_0:\mathrm{\Sigma }_0(t,\theta )\theta \rho (t)`$. Glue two disks $`๐ป_0`$ and $`๐ป_1`$ to the boundary of $`\mathrm{\Sigma }_0`$ with suitable orientations to get a closed oriented surface $`\mathrm{\Sigma }`$, and extend the map $`\iota b_0`$ to a map $`B:\mathrm{\Sigma }X_{S^1}`$ just as in the preceding lemma (i.e., so that the image of $`๐ป_0`$ is contained in $`ES^1\times _{S^1}(S^1x)`$ and that of $`๐ป_1`$ in $`ES^1\times _{S^1}(S^1x^{})`$). As before one can check that
$$M(x)M(x^{})^{-1}=\mathrm{exp}โจ[\mathrm{\Sigma }],B^{}[\alpha +\mu ]โฉ.$$
Now, however, the map $`B`$ is homotopic to the trivial map, and from this the result follows. โ
###### Corollary 4.4.
The map $`M:XโS^1`$ is constant.
For the last lemma of this section, we return to the general situation, in which $`G`$ is any compact connected Lie group.
###### Lemma 4.5.
Let $`\gamma :S^1G`$ be a morphism, and let $`gG`$. We then have
$$M_\gamma =M_{g\gamma g^{-1}}.$$
###### Proof.
Let $`\rho :S^1G`$ be a smooth map such that $`\rho (1)=1`$ and $`\rho (-1)=g`$. Consider on $`X\times S^1`$ the action of $`S^1`$ given by
$$\theta (x,\alpha )=\rho (\alpha )\gamma (\theta )\rho (\alpha )^{-1}x\text{ for }\theta S^1\text{ and }(x,\alpha )X\times S^1\text{.}$$
Let $`\pi _1:X\times S^1X`$ be the projection, and take on $`X\times S^1`$ the bundle $`\pi _1^{}L`$ with the connection $`โ_{X\times S^1}=\pi _1^{}โ`$. Finally, let $`\mu _{X\times S^1}(x,\alpha )=โจ\mu (x),\mathrm{Ad}(\rho (\alpha ))\gamma _{}(u)โฉ`$. The monodromy $`N=M^{โ_{X\times S^1},\mu _{X\times S^1}}`$ satisfies
$$N|_{X\times \{\alpha \}}=M_{\rho (\alpha )\gamma \rho (\alpha )^{-1}}.$$
Applying Corollary 4.4 to $`N`$, we deduce our result. โ
## 5. The choice of the connection
Recall that the map $`a_1:H_1(G;)H_1(X;)`$ is induced from the map $`a_1(x):GggxX`$, where $`xX`$ is an arbitrary point. Through this section we will make the following topological assumption:
(6)
$$\mathrm{Im}a_1โฉ\mathrm{Tor}H_1(X;โค)=0.$$
Let $`\alpha โ\mathrm{\Omega }^2(X;iโ)^G`$ be an invariant $`2`$-form representing $`2\pi ic_1(L)`$. Let $`\mu โ\mathrm{\Omega }^0(X;i๐ค^{})^G`$ satisfy $`d\mu (s)=\iota _{X(s)}\alpha `$ for any $`s๐ค`$, so that $`\alpha -\mu โ\mathrm{\Omega }_G^2(X;iโ)`$ is a closed form in the Cartan complex, and hence represents an equivariant cohomology class $`[\alpha -\mu ]โH_G^2(X;iโ)`$.
###### Lemma 5.1.
Suppose that $`\frac{i}{2\pi }[\alpha -\mu ]โH_G^2(X;โค)`$. Then one can choose a connection $`โ`$ on $`L`$ whose curvature is $`\alpha `$ and such that for any morphism $`\gamma :S^1G`$ we have $`M_\gamma ^{โ,\mu }=1`$. More precisely, the set $`๐_\alpha ^G(\mu )`$ of gauge equivalence classes of connections satisfying this property is a torus of dimension $`b_1(X)-dim(\mathrm{Im}a_1โโ)`$.
###### Proof.
Let $`A_\alpha `$ be the set of connections on $`L`$ whose curvature is $`\alpha `$. Let $`TโG`$ be a maximal torus. By Lemma 4.5 it is enough to consider $`M_\gamma `$ for $`\gamma :S^1T`$, since for any $`\gamma :S^1G`$ there exists $`gG`$ such that $`g\gamma g^{-1}(S^1)โT`$.
Let $`๐ฑ=\mathrm{Lie}T`$, and let $`\mathrm{\Lambda }=\mathrm{Ker}(\mathrm{exp}:๐ฑT)`$, so that $`T=๐ฑ/\mathrm{\Lambda }`$. The morphisms $`\gamma :S^1T`$ are in 1โ1 correspondence with elements of $`\mathrm{\Lambda }`$. For any $`\gamma ,\gamma ^{}\mathrm{\Lambda }`$ we have
(7)
$$M_{\gamma ^{}+\gamma }^{,\mu }=M_\gamma ^{}^{,\mu }M_\gamma ^{,\mu }.$$
To see this, observe the following. Let $`yL`$, and let $`\nu (y;):๐ฑL`$ be the map defined as follows. For any $`s๐ฑ`$, let $`\nu _s^y:[0,1]L`$ be the path such that $`\nu _s^y(0)=y`$ and $`\nu _{s}^{y}{}_{}{}^{}=\stackrel{~}{X}^{,\mu }(s)(\nu _s^y)`$. Then we set $`\nu (y;s)=\nu _s^y(1)`$. With this definition, if $`s๐ฑ`$ and $`vT_s๐ฑ๐ฑ`$ (use the canonical isomorphism) then $`D\nu (y;s)(v)=\stackrel{~}{X}^{,\mu }(v)(s)`$ (this is a consequence of $`[\stackrel{~}{X}^{,\mu }(v),\stackrel{~}{X}^{,\mu }(v^{})]=0`$ for any $`v,v^{}๐ฑ`$). From this it follows that
(8)
$$\nu (\nu (y;s);s^{})=\nu (y;s+s^{}),$$
which clearly implies (7).
Consider now the map
$$c:\mathrm{\Lambda }H^1(X;)$$
which sends $`\gamma \mathrm{\Lambda }`$ to the homology class $`[\gamma (S^1)]`$ represented by any orbit of the $`S^1`$ action on $`X`$ induced by $`\gamma :S^1T`$. Let $`\mathrm{\Lambda }_0=\mathrm{Ker}c`$. Using condition (6), we deduce from Lemma 4.1 that for any $`\gamma \mathrm{\Lambda }_0`$ and any connection $`A_\alpha `$ we have $`M_\gamma ^{,\mu }=1`$. (Note that the map $`\gamma ^{}:H_G^2(X;)H_{S^1}^2(X;)`$ induced by $`\gamma `$ lifts to the Cartan complex as $`\gamma ^{}(\alpha \mu )=\alpha \gamma ^{}(\mu )`$. Let now $`\mathrm{\Lambda }_1=\mathrm{\Lambda }_0^{}`$. This is a free abelian module. Let $`e_1,\mathrm{},e_r\mathrm{\Lambda }_1`$ be a basis. By (7), if a connection $`A_\alpha `$ satisfies $`M_{e_j}^{,\mu }=1`$ for any $`1jr`$, then $`M_\gamma ^{,\mu }=1`$ for all $`\gamma \mathrm{\Lambda }`$.
Finally, by gauge invariance of $`M_\gamma ^{โ,\mu }(x)`$ (3), we can consider gauge classes of connections on $`L`$ rather than connections. So let $`๐_\alpha =A_\alpha /\mathrm{Map}(X,S^1)`$ be the gauge equivalence classes of connections on $`L`$ with curvature $`\alpha `$. Picking a base connection $`โโA_\alpha `$, we can identify
$$๐_\alpha =โ+H^1(X;iโ)/H^1(X;2\pi iโค).$$
Furthermore, formula (4) implies that if $`\eta โ\mathrm{\Omega }^1(X;iโ)`$ satisfies $`d\eta =0`$, then
$$M_{e_j}^{โ+\eta ,\mu }=M_{e_j}^{โ,\mu }\mathrm{exp}โจ[\eta ],c(e_j)โฉ,$$
where $`[\eta ]โH^1(X;iโ)`$ is the class represented by $`\eta `$. On the other hand, the images by $`c`$ of $`e_1,\mathrm{},e_r`$ are all linearly independent. Hence, $`โจc(e_1),\mathrm{},c(e_r)โฉ`$ is a space of dimension $`r`$, so the set of gauge equivalence classes of connections $`[โ]โ๐_\alpha ^G(\mu )`$ such that $`M_\gamma ^{โ,\mu }=1`$ for all $`\gamma `$ is the image under the quotient
$$H^1(X;iโ)โH^1(X;iโ)/H^1(X;2\pi iโค)$$
of an affine subspace of codimension $`r`$. On the other hand, we have $`r=dim(\mathrm{Im}a_1โโ)`$ so $`rโคb_1(X)=dimH^1(X;โ)`$, and hence this set is nonempty. More precisely, the set $`๐_\alpha ^G(\mu )โ๐_\alpha `$ is a torus of dimension $`b_1(X)-rโฅ0`$. โ
###### Lemma 5.2.
The infinitesimal lift $`\stackrel{~}{X}`$ defined by $`(,\mu )`$ exponentiates to give a linear action of $`G`$ on $`L`$ if and only if for any representation $`\gamma :S^1G`$ we have $`M_\gamma =1`$.
###### Proof.
Given $`yL`$ and $`gG`$, we define $`gyL`$ in the obvious way: let $`g=\mathrm{exp}(s)`$, where $`s๐ค`$, let $`\nu _s^y:[0,1]L`$ be the integral curve of the vector field $`\stackrel{~}{X}^{,\mu }(s)`$ with initial value $`\nu _s^y(0)=y`$; then $`gy:=\nu (y;s)=\nu _s^y(1)`$.
There are two things to check: that $`gy`$ is well defined and that the resulting map $`G\times LL`$ is indeed an action of $`G`$ on $`L`$. Observe first that both things are clear when $`G=T`$ is a torus (see formula (8)). We now sketch how to deal with the general case. Suppose that $`s,s^{}๐ค`$ satisfy $`\mathrm{exp}(s)=\mathrm{exp}(s^{})`$. We want to check that, for any $`yL`$, $`\nu _s^y(1)=\nu _s^{}^y(1)`$. Now, it is easy to prove that there exists some $`s^{\prime \prime }๐ค`$ such that $`\mathrm{exp}(s^{\prime \prime })=\mathrm{exp}(s)=\mathrm{exp}(s^{})`$ and such that $`[s,s^{\prime \prime }]=[s^{},s^{\prime \prime }]=0`$. Then, applying (8) to some tori $`T,T^{}`$ such that $`s,s^{\prime \prime }\mathrm{Lie}T`$ and $`s^{},s^{\prime \prime }\mathrm{Lie}T^{}`$, we deduce that $`\nu (y;s)=\nu (y;s^{\prime \prime })=\nu (y;s^{})`$. This proves well-definedness. Finally, by Baker-Campbell-Hausdorff, $`\nu `$ satisfies $`\nu (\nu (y;s);s^{})=\nu (y;\mathrm{log}(\mathrm{exp}(s)\mathrm{exp}(s^{})))`$ for $`s,s^{}`$ small enough, and from this it follows easily that $`\nu `$ defines an action of $`G`$ on $`L`$. โ
## 6. Proofs of the results
We prove the theorems in two steps. First we assume that condition (6) is satisfied. Then we deduce the results in the general case.
### 6.1. Proofs of the theorems when (6) holds
#### 6.1.1. Proof of Theorem 1.2
Combine Lemma 5.2 with the ideas at the end of the proof of Lemma 5.1.
#### 6.1.2. Proof of Theorem 1.1
If the action of $`G`$ lifts to $`L`$, then the first equivariant Chern class $`c_1^G(L)`$ of $`L`$ is an integral class and provides a lift of $`c_1(L)`$. Now suppose that $`c_1(L)=\iota ^{}(l)`$, where $`lH_G^2(X;)`$. Take $`\frac{i}{2\pi }(\alpha \mu )\mathrm{\Omega }_G^2(X)`$ whose cohomology class is equal in $`H_G^2(X;)`$ to $`l`$. By Theorem 1.2, there is some connection $``$ on $`L`$ which defines a lift of the action of $`G`$ to $`L`$. Let $`L_\alpha `$ denote the line bundle $`L`$ with the action of $`G`$. Applying the ChernโWeil construction to equivariant bundles as defined in \[BV\] for the connection $``$, we deduce that the form $`\frac{i}{2\pi }(\alpha \mu )`$ represents $`c_1^G(L)`$. Now, since we have used de Rham theory, we have lost control of torsion, so that all we know in principle is that
$$c_1^G(L)-lโ\mathrm{Tor}H_G^2(X;โค).$$
To deduce that $`c_1^G(L)=l`$, we observe that the restriction of $`\iota ^{}`$ to $`\mathrm{Tor}H_G^2(X;โค)`$ is an injection (indeed, $`\mathrm{Tor}H_G^2(X;โค)=\mathrm{Ext}(H_1(X_G;โค),โค)`$, $`\mathrm{Tor}H^2(X;โค)=\mathrm{Ext}(H_1(X;โค),โค)`$ and, since $`G`$ is connected, $`\pi _1(BG)=0`$, so the long exact sequence of homotopy groups for $`XโX_GโBG`$ tells us that $`\pi _1(X)โ\pi _1(X_G)`$ is surjective). So, from $`\iota ^{}(c_1^G(L)-l)=0`$ we deduce that $`c_1^G(L)=l`$ in $`H_G^2(X;โค)`$.
To prove that $`\iota ^{-1}(c_1(L))`$ classifies the lifts of the action to $`L`$ it is enough to check that if $`G`$ acts on $`L`$ and $`c_1^G(L)=0`$, then $`L`$ can be $`G`$-equivariantly trivialised, i.e., there is an equivariant nowhere vanishing section of $`L`$. So assume that $`G`$ acts on $`L`$ and $`c_1^G(L)=0`$. Take a $`G`$-invariant connection $`โ`$ on $`L`$, let $`\alpha =โ^2`$ and let $`\mu `$ be the map given by Theorem 3.1. Now, by assumption $`[\alpha -\mu ]=0โH_G^2(X;iโ)`$. Since the set of forms representing a fixed cohomology class is connected, we can join $`\alpha -\mu `$ to $`0โ\mathrm{\Omega }_G^2(X;iโ)`$ through a path $`\gamma โ\mathrm{\Omega }_G^2(X;iโ)`$ all of whose forms represent $`0โH_G^2(X;iโ)`$. Fix a trivialisation $`Lโ
X\times โ`$. It is easy to see, using the proof of Lemma 5.1, that $`\gamma `$ can be lifted continuously to give, for each $`t`$, a connection $`โ_t`$ defining a lift to $`L`$ of the action with Chern-Weil form equal to $`\gamma (t)`$, in such a way that $`โ_1`$ is the trivial connection on $`L`$. So we get a homotopy between the initial action of $`G`$ on $`L`$ and the trivial action defined from a trivialisation $`Lโ
X\times โ`$. Since $`G`$ is compact, this implies that the initial action of $`G`$ on $`L`$ is equivalent to the trivial one.
### 6.2. Proof of the theorems in the general case
Suppose that $`T=\mathrm{Im}a_1โฉ\mathrm{Tor}H_1(X;โค)`$ is nonzero. Let $`T^{}โH_1(X;โค)`$ be a complementary submodule of $`T`$, and let $`G_T`$ be the connected Lie group which fits in the exact sequence
$$1โTโG_T\stackrel{q}{โ}Gโ1$$
with $`q_{}H_1(G_T;)=T^{}`$. The action of $`G`$ induces an action of $`G_T`$ on $`X`$, and we have a commutative diagram
Now, the action of $`G_T`$ clearly satisfies condition (6), so we may apply the results obtained in the preceding subsection and get lifts of the action of $`G_T`$ to $`L`$, together with invariant connections.
To prove Theorems 1.1 and 1.2 for the action of $`G`$ it is enough to check that, if $`L_\alpha `$ is a $`G_T`$ bundle isomorphic to $`L`$ (as bundles over $`X`$) such that
$$c_1^{G_T}(L_\alpha )โq^{}H_G^2(X;โค)$$
then the action of $`G_T`$ on $`L_\alpha `$ descends to an action of $`G`$, or, equivalently, the action of $`TG_T`$ on $`L_\alpha `$ is trivial (note that, on the other hand, $`q^{}`$ is injective). This follows from the sequence of maps
$$H_G^2(X;โค)\stackrel{q^{}}{โ}H_{G_T}^2(X;โค)\stackrel{r^{}}{โ}H^2(BT;โค),$$
which is induced by the fibration $`BTโX_{G_T}โX_G`$, and consequently satisfies $`r^{}q^{}=0`$. The map $`r^{}`$ is obtained from the $`T`$-equivariant inclusion $`x_0โX`$ (where $`x_0`$ is any point). And, since a representation $`\rho :Tโโ^{*}`$ is trivial if and only if $`c_1^T(ET\times _\rho โ)=0`$, we deduce that if $`c_1^{G_T}(L_\alpha )โq^{}H_G^2(X;โค)`$ then $`r^{}c_1^{G_T}(L_\alpha )=0`$ and hence $`T`$ acts trivially on $`L_\alpha `$.
### 6.3. Proof of Corollary 1.3
A theorem of Kirwan (see Proposition 5.8 in \[Ki\]) says that if $`G`$ acts in a Hamiltonian fashion on $`X`$ then there is an isomorphism $`H_G^{}(X;โ)โ
H^{}(X;โ)โH^{}(BG;โ)`$. In particular, this means that there exists an integer $`dโฅ1`$ such that if $`aโH^2(X;โค)`$ then $`daโ\iota ^{}H_G^2(X;โค)`$. So, given the line bundle $`L`$, there exists $`lโH_G^2(X;โค)`$ such that $`c_1(L^d)=\iota ^{}l`$. Let now $`\frac{i}{2\pi }(\alpha ^{}-\mu ^{})โ\mathrm{\Omega }_G^2(X)`$ represent $`l`$ in $`H_G^2(X;โ)`$, and let $`โ_d`$ be a connection on $`L^d`$ whose curvature is $`\alpha ^{}`$. Let $`\eta :=(โ^d-โ_d)^G`$, where the superscript $`G`$ means the projection to the invariant subspace $`\mathrm{\Omega }^1(X;iโ)^G`$ using the standard averaging trick: $`\zeta ^G=\frac{1}{|G|}โซ_{gโG}g\zeta `$. Then
$$\alpha -\mu :=(\alpha ^{}-\mu ^{})+d_๐ค\eta โ\mathrm{\Omega }_G^2(X;iโ)$$
represents $`2\pi il`$, and the curvature of $``$ is $`\alpha `$. On the other hand, $`\mathrm{Im}a_1=0`$, since any Hamiltonian action of $`S^1`$ on a compact manifold has fixed points, and hence the orbits are contractible. Consequently, by Theorem 1.2, the lift $`\stackrel{~}{X}^{^d,\mu }`$ exponentiates to an action of $`G`$ on $`L`$ which leaves $``$ fixed.
### 6.4. Proof of Corollary 1.4
Let $`\mu :Xโ๐ค^{}`$ be a moment map for the action of $`G`$ on $`X`$. By Corollary 1.3, it suffices to take any closed $`\omega ^{}-\mu ^{}โ\mathrm{\Omega }_G^2(X)`$ near $`\omega -\mu `$ and representing a class $`[\omega ^{}-\mu ^{}]โH_G^2(X;2\pi โค)`$. |
no-problem/0002/hep-ph0002092.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Our present world picture is based on two theories: the Standard Model of particle physics and general relativity, the theory of gravity. These two theories have scored astonishing successes. It is therefore quite striking when one learns that this picture of the laws of physics is inconsistent. The inconsistency comes from treating one part, the Standard Model, as a quantum theory while treating the other, gravity, as a classical theory. At first sight it seems that all we need to do is to quantize general relativity. If one performs an expansion in terms of Feynman diagrams, the technique used in the Standard Model, then one finds infinities that cannot be absorbed in a renormalization of the Newton constant (and the cosmological constant). In fact, as one goes to higher and higher orders in perturbation theory one has to include more and more counterterms. In other words, the theory is not renormalizable, so the principle that is so crucial for constructing the Standard Model fails. It is then all the more surprising that an inconsistent theory could agree so well with experiment! What happens is that quantum gravity effects are usually very small due to the weakness of gravity relative to other forces. Since the effects of gravity are proportional to the mass, or the energy, of the particle, they grow at high energies. At energies of the order of $`Eโผ10^{19}`$ GeV gravity would have a strength comparable with that of the other Standard Model interactions. (This is an energy scale where we should definitely see new physics; it is nevertheless possible that quantum gravity becomes relevant at much lower energies, $`E`$ โผ 1โ10 TeV.)

We should also remember that physics as we understand it now cannot explain the most important โexperimentโ that ever happened: the Big Bang. It is quite interesting that Big-Bang theory links high energy physics and cosmology, and in order to understand what happened in the beginning it seems that we need to understand quantum gravity. There are also esthetic reasons for wanting a theory beyond the Standard Model. We would like to explain the origin of the gauge group, the relations between the three couplings, the rest of the parameters of the Standard Model, why we have three generations, etc. It is very suggestive that if one extrapolates the running of the couplings they seem to meet at energies close to the energy where quantum gravity becomes important, suggesting that a grand unified theory, based on a bigger gauge group, would in any case lie close to the Planck scale.

The present situation is analogous to the one particle physics was in when we only had Fermiโs theory of weak interactions: a theory that agreed well with experiments performed at low energies but was not internally consistent. Renormalizability, or mathematical consistency, was a crucial clue for the discovery of the Standard Model.
The challenges we face can be separated, in degree of difficulty, in the following three:
I) Formulate an internally consistent theory of quantum gravity. By this we mean a theory which reduces at low energies, $`Eโช10^{19}`$ GeV, to general relativity but in which we can perform quantum calculations to any order we wish. This theory should solve some fundamental problems with quantum gravity such as explaining the origin of black hole entropy, etc. These are questions about gravity which do not directly involve the fact that we also have the particle physics that we see in nature.
II) Be capable of incorporating the Standard Model. So the theory should be such that at low energies it can contain chiral gauge fields, fermions, etc.
III) Explain the Big-Bang and the parameters of the Standard Model. We should understand the resolution of the initial singularity in cosmology and we should understand why we have the Standard Model. We should understand how the standard model parameters arise, which parameters are related, and which parameters (if any) arise as a โhistoricalโ accident.
We now have a theory, called string theory (or M-theory), which has already been able to provide a solution to the first two challenges. Unfortunately we do not yet know how to solve the third challenge. Maybe string theory is the solution and we just have to understand it better, or maybe we have to modify it in some way. String theory is a theory under construction. We know several limits and aspects of the theory, but we still do not know the fundamental axioms of the theory that would enable us to approach the third challenge.
String theory is based on the idea that fundamental objects are not point particles, as in particle theories, but one dimensional objects called strings. Let us first review how we construct a theory of interacting particles. We start with a set of free particles, for example electrons, photons, quarks, gluons. These particles can have different states; e.g. they could have the spin pointing up or down. Then we consider interactions. These are introduced by allowing particles to split into two other particles with some probability amplitude $`g`$. $`g`$ will be the interaction strength or coupling constant. For example, an electron could emit a photon. So in order to compute a scattering amplitude we have to sum over all paths of the particles and all ways in which they could emit other particles, etc. These sums are performed via Feynman diagrams. In figure 1(a,b) we see some examples of Feynman diagrams. String theory is constructed in a completely analogous way. We first start with free strings. Strings can be open or closed. Let us just consider a theory with only closed strings. The strings are โrelativisticโ, meaning that their tension is equal to their mass per unit length. If we had a stretched string, an oscillation would propagate along it at the speed of light. The tension is a dimensionful quantity which we can parameterize in terms of a distance scale $`T=1/l_s^2`$. Strings can oscillate. These oscillations can be decomposed in normal modes. Since strings are quantum mechanical objects, each normal mode will have a certain occupation number. The total energy of the oscillating string will be quantized. The total mass of an oscillating string will be equal to the total energy contained in the oscillations. When we view this oscillating string from far away it looks like a pointlike object. These different oscillatory states of the string are analogous to the different polarization states of the particles, except that now the mass of the string state itself depends on the โpolarizationโ state. Some of these oscillatory states of the string will have zero energy and will thus be massless particles. There is one state with spin two which can be viewed as the graviton. The masses of the massive string states are of the order $`mโผ1/l_s`$. String interactions are introduced by allowing strings that touch to recombine into one string. These are splitting and joining interactions as shown in figure 1 (c,d). The amplitude for this process defines the string coupling $`g`$. In order to compute any process in string theory we have to sum over all possible splitting and joining interactions. The simplest string theories are those that live in ten dimensions and are supersymmetric. The sum over string theory Feynman diagrams can be performed and yields finite results. At low energies, i.e. energies below the masses of the massive string states, $`Eโช1/l_s`$, the only excitations we will have are gravitons and other massless particles. The interactions of these particles are those of Einstein gravity plus some other massless fields. In this way string theory manages to quantize gravity. What we have described here amounts to a perturbative quantization of the theory, in the same way that the Feynman diagram expansion in particle physics is a perturbative quantization of the field theory. But there are non-perturbative aspects of the theory that are not captured by the perturbative theory. One example is soliton solutions, like magnetic monopoles of grand unified theories.
These are collective excitations that are stable, typically for some topological reason. Their masses go as $`mโผ1/g^2`$ and in the weak coupling approximation we can study them as solutions of the classical field theory action. In field theories we can also have extended solitons, like cosmic strings or domain walls. In string theory we also have solitons. These solitons are called D-branes. D-branes are solitons with different dimensionalities. They can be pointlike (D-0-brane), one dimensional (D-1-brane), two-dimensional (D-2-brane), etc. These solitons have a very precise description in string theory . Their excitations are described by open strings that end on them. At low energies some of the open string modes are massless, have spin one and give rise to gauge fields. When we put many branes together the open strings have two indices $`i,j`$ labeling the brane where they start and the brane where they end, see figure 2. These two indices become the indices of non-abelian $`U(N)`$ gauge fields.
It might be surprising that we were discussing a ten dimensional theory while our world is โobviouslyโ four dimensional. What we really see is that the world is four dimensional at long distances; we do not really know the dimensionality of the world at short enough distances. In string theory we assume that we are living in a world that has four large dimensions (the ones we see) and six very small dimensions, see figure 3. It is a familiar phenomenon in condensed matter that if an electron is confined to move on a very narrow layer then the electron will behave as if it was moving in only two dimensions. Similarly particles that move in a ten dimensional space with six small dimensions will behave at low energies as if they were moving only in four dimensions. What is the size of these extra dimensions? The traditional view is that they are all small, of the order of $`10^{-33}cm`$. But recently it was realized that some dimensions could be as big as $`1mm`$ . In that case all the standard model fields would have to be confined to live on a D-brane that is extended along the four extended dimensions that we see but transverse to the large extra dimensions. Picking different manifolds or brane configurations we can have different particles at low energies. In both cases the Standard Model parameters would depend on the details of the internal manifold or brane configuration. Compactifications which preserve 8, 4 or 2 supersymmetries at low energies are fairly well understood. The case where we preserve only one supersymmetry is not so well understood and we do not understand how supersymmetry can be broken, as it is in the real world, without generating a huge cosmological constant, of the order of the supersymmetry breaking scale. This seems to be the most important obstacle in understanding precisely how the Standard Model is embedded in string theory.
Recent progress in string theory was based on the idea of dualities. It is familiar that classical electromagnetism is invariant under the interchange of electric and magnetic fields $`\stackrel{}{E}โ\stackrel{}{B}`$, $`\stackrel{}{B}โ-\stackrel{}{E}`$. This exchanges electric charges with magnetic charges. In field theories electric charges are carried by fundamental particles and magnetic charges by solitons. So this duality exchanges elementary particles with solitons. This can be achieved by changing the coupling constant as $`gโ1/g`$ so that solitons that were heavy become light, like elementary particles. In many string theories we have dualities of this type. When the coupling becomes strong in terms of some variables the theory has an equivalent description in terms of some dual variables that can be weakly coupled. In this fashion many string theories are connected. These dualities are hard to check since one has to solve the strongly coupled theory in order to show that it is the same as some dual weakly coupled theory. In supersymmetric cases there are several quantities that one can calculate which do not depend on the coupling. They can be calculated at weak coupling, extrapolated to strong coupling, and then compared with the corresponding result in the dual theory. For this reason dualities have been checked mostly in supersymmetric theories. Examples of quantities that are protected by supersymmetry and can be calculated are: 1) the low energy effective action; 2) the number and masses of various โprotectedโ special states; these are typically certain charged particles. These states could be elementary in one theory and solitons in the dual theory. Books on string theory include
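As a quick check of the classical statement (a standard computation, not taken from the text; source-free Maxwell equations in units with $`c=1`$):

$$\mathrm{div}\stackrel{}{E}=0,\mathrm{div}\stackrel{}{B}=0,\mathrm{curl}\stackrel{}{E}=-\dot{\stackrel{}{B}},\mathrm{curl}\stackrel{}{B}=\dot{\stackrel{}{E}};$$

the substitution $`\stackrel{}{E}โ\stackrel{}{B}`$, $`\stackrel{}{B}โ-\stackrel{}{E}`$ maps the two divergence equations into each other and interchanges the two curl equations, so the vacuum system is invariant. With sources present the symmetry only survives if magnetic charges are introduced as well, which is why the duality exchanges electric charges with magnetic monopoles.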
## 2 Black hole entropy
Black holes are one of the most intriguing objects that general relativity predicts. In classical general relativity black holes have a horizon, which is a surface in spacetime such that if somebody crosses it he/she cannot come back out. This surface, however, does not look special at all to the observer who is falling in. Black holes became even more intriguing when Hawking showed that they emit thermal radiation due to quantum effects. So a black hole is more like a black body, which will emit thermal radiation. For a black hole in four dimensions the temperature is inversely proportional to its radius, $`Tโผ1/rโผ1/(G_NM)โผ10^{-4}K(M_{sun}/M)`$. We see that for black holes that are produced in normal astrophysical processes, whose mass is always bigger than the mass of the sun, this temperature is too small to be detected, since it is much lower than the temperature of the cosmic microwave background radiation. The fact that they are thermal objects raises very interesting and very important theoretical puzzles; solving these puzzles is one of the challenges of a theory of quantum gravity. We are used to the fact that when we encounter a thermal object we can explain its temperature as arising from the motion of the internal constituents. So the question becomes: what are the internal constituents of the black hole that explain its temperature? This question is often phrased in terms of explaining the microscopic origin of the entropy. The entropy can be defined through the first law of thermodynamics as $`dM=TdS`$. The entropy comes out to be $`S=A_H/(4G_N)`$. In other words, the entropy is proportional to the area of the horizon in Planck units. Any theory of quantum gravity, such as string theory, should explain this entropy. In string theory it is hard to calculate this entropy directly since strings describe small fluctuations around flat space while a black hole represents a large deviation from Minkowski space. Recently, when the dynamics of D-branes was understood, it became possible to calculate this entropy for some special cases . Consider a compactification of string theory down to four dimensions that preserves two supersymmetries. In such a theory we could consider charged black holes. In general charged black holes should satisfy a constraint on the mass that looks like $`MโฅQ`$ in order to avoid naked singularities, i.e. singularities which are not covered by a horizon. In these theories with two supersymmetries this constraint coincides with the so called BPS bound. This is a bound coming from the supersymmetry algebra. The charge $`Q`$ appears in the right hand side of the supersymmetry algebra and the BPS bound comes from demanding unitarity of the representations. Furthermore, the states with $`M=Q`$ lie in a smaller representation of the supersymmetry algebra and the number of states in such representations does not depend on the coupling or other continuous parameters in the theory. (More precisely, the number of states that cannot be combined into larger representations, such as the ones with $`M>Q`$, remains invariant.) Black holes with $`M=Q`$ are also special from the point of view of the gravity theory: they are called extremal black holes, and for them the Hawking temperature vanishes. In these supersymmetric theories it is possible to change parameters so that the black hole configuration becomes a weakly coupled system of D-branes and strings whose entropy one can calculate fairly easily, see figure 4.
The answer, of course, comes out to be the same as the area of the corresponding black hole solution. Since the number of BPS states does not change when we do this transformation, this provides a derivation of black hole entropy for these special black holes in these supergravity theories. The entropy of more general black holes, including near extremal black holes ($`M>Q`$ but $`M-QโชQ`$), can be computed using the AdS/CFT correspondence as will be explained below. The entropy of general black holes in completely general string backgrounds cannot be calculated with the present techniques.
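As a rough numerical illustration of the scalings quoted above (our own back-of-the-envelope sketch in Python: the SI constants and the standard prefactors in the Hawking and Bekenstein-Hawking formulas are assumptions added here, since the text itself quotes only orders of magnitude):

```python
import math

G, hbar, c, kB = 6.674e-11, 1.055e-34, 2.998e8, 1.381e-23   # SI values
M_sun = 1.989e30                                            # kg

def hawking_temperature(M):
    """T = hbar c^3 / (8 pi G M k_B): the T ~ 1/(G_N M) scaling, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def bh_entropy(M):
    """Dimensionless S/k_B = A_H c^3/(4 G hbar), with A_H = 16 pi (G M/c^2)^2."""
    A_H = 16 * math.pi * (G * M / c**2)**2
    return A_H * c**3 / (4 * G * hbar)

print(hawking_temperature(M_sun))   # ~6e-8 K: far below the 2.7 K microwave background
print(bh_entropy(M_sun))            # ~1e77: an enormous entropy to account for
```

The point of the printed numbers is simply that the temperature of an astrophysical black hole is unobservably small while its entropy is enormous, which is what makes the microscopic counting described above such a stringent test.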
## 3 Conformal field theories and Anti-de-Sitter spacetimes
Although string theory was described above as a theory of quantum gravity, it originated as an attempt to describe hadrons. The string description explained some features of the hadron spectrum such as Regge trajectories, etc. We now know that hadrons are described by QCD, but it is still quite hard to do computations at low energies due to strong coupling problems. In fact we expect confinement. Confinement is thought to arise from the fact that the color electric field lines form narrow bundles in going from a quark to an anti-quark, see figure 5. These fluxes look at low energies like strings, and one might expect that at low energies a description in terms of strings might be valid. It was shown by โt Hooft that the proper way to understand these strings is to take the limit of a large number of colors $`Nโโ`$, keeping $`g_{YM}^2N`$ fixed. In this limit only planar Feynman diagrams survive. These planar Feynman diagrams seem to be giving a discretization of a string worldsheet. However, it was not clear what kind of string theory this would be.
For the case of Yang Mills theory with $`๐ฉ=4`$ supersymmetries and a large number of colors $`N`$ it has been conjectured that these gauge strings are the same as the fundamental strings described above but moving in a particular curved spacetime: the product of five-dimensional Anti-de-Sitter space and a five-sphere . Five dimensional AdS has a boundary which is four dimensional. The field theory is defined on this four dimensional boundary. In figure 6 we can see the Penrose diagram of AdS.
The radius of curvature of this spacetime is proportional to $`(g_{YM}^2N)^{1/4}`$ and the string coupling is $`g_s=g_{YM}^2`$, which goes like $`1/N`$ for fixed $`g_{YM}^2N`$, as expected from โt Hooftโs general argument. There have been a large number of checks for this correspondence. Many checks are possible due to the large number of supersymmetries. The simplest check is the observation that both theories have the same symmetries. $`๐ฉ=4`$ supersymmetric Yang Mills theory is scale invariant: the coupling does not run; it is independent of the energy. Non supersymmetric Yang Mills is classically conformal invariant, but quantum corrections introduce a length scale through the running of the coupling. In conformal theories the symmetry group includes translations, rotations, scale transformations and the so called special conformal transformations. All these form the group mathematically known as $`SO(2,4)`$. This group is the group of isometries of AdS. Similarly the Yang Mills theory has an $`SO(6)`$ global symmetry group, which is the same as the group of rotations of $`S^5`$. In fact when we consider string theory on $`AdS_5\times S^5`$ we also have the same supersymmetries as the gauge theory. Other checks include the comparison of the spectrum of BPS particles, renormalization group flows that partially break supersymmetry, etc. Many extensions of this correspondence were suggested, for other theories with less or no supersymmetry and for theories that are not conformally invariant. For a detailed account of the checks and a more extensive list of references on this subject see . A puzzling aspect is that the bulk theory contains gravity while the field theory does not contain gravity. Gravity in the bulk is related to the stress tensor of the boundary theory in the following way. Correlation functions for the stress tensor in the field theory are equated with the amplitude for propagation of gravitons between prescribed points at the boundary. Correlation functions of operators in the field theory can be calculated, using the correspondence, as the amplitudes that particles propagate through the bulk between prescribed points at the boundary , see figure 7. Each operator corresponds to a particle, or more precisely a string mode, propagating in $`AdS`$. The mass of the particle in AdS is related to the conformal dimension $`\mathrm{\Delta }`$ of the field through
$$\mathrm{\Delta }=2+\sqrt{4+(mR_{AdS})^2}$$
Similarly we can calculate the quark-antiquark potential by considering a string that goes between two points on the boundary, see figure 8. In this picture we can see that the strings that move in the ten dimensional space are precisely the strings representing fluxes of color fields. We can thus say that strings made of gluons look very much like ordinary fundamental strings when $`g_{YM}^2N`$ is large. It is a general feature of dualities that some excitations that look fundamental in one picture are collective excitations in the dual theory. If we compute the quark-antiquark potential by computing the energy of the string in this curved background we find that it goes like $`V(L)โผ1/L`$ where $`L`$ is the separation in the four dimensional theory. This potential is not confining, and it should not be, since the field theory is conformal. It is possible to deform the field theory in such a way that one destroys conformal invariance and supersymmetry at low energies. The resulting theory is expected to be confining. Indeed one can construct the corresponding supergravity solution and find that the geometry is deformed: the quark-antiquark potential is now confining and the theory has a mass gap . Even though the theory is confining it is not pure Yang-Mills; it is a strongly coupled version of it. In order to find the large $`N`$ limit of pure Yang-Mills one needs to consider strings propagating in a curved spacetime whose curvature is of the order of the string scale. In this situation the gravity approximation would not be good enough. Treating strings in these small spaces is a challenging problem, which is being explored. What we have seen so far are applications of the correspondence to the study of large $`N`$, strongly coupled field theories. The correspondence can also be used to learn about gravity. It is harder to use it in this direction since the field theory is strongly coupled and therefore hard to solve. There are, however, some general statements that one can make. One of the most mysterious objects in a gravity theory is a black hole. One can consider a black hole in $`AdS`$. This black hole is, in principle, described by some thermal state in the boundary theory. If we have a big black hole, of the order of the size of the $`AdS`$ radius, its entropy will be inversely proportional to the Newton constant, which is of order $`1/N^2`$. So the entropy will be of order $`N^2`$, which agrees with the number that we would naively expect in the field theory since we have $`N^2`$ gluons. In the case of $`AdS_3`$ it is possible to calculate the entropy in the field theory, and the result agrees precisely with gravity, see the review . An aspect of gravity that is manifest via this correspondence is holography . Holography says that in a quantum theory of gravity we should be able to describe physics in some region of space by a theory with at most one degree of freedom per unit Planck area. Notice that the number of degrees of freedom would then increase with the area and not with the volume, as we are normally used to. Of course, for all physical systems that we normally encounter the number of degrees of freedom is much smaller than the area, since the Planck length is so small. It is called โholographyโ because it would be analogous to a hologram, which can store a three dimensional image in a two dimensional surface. In this case we represent the physics of the five dimensional Anti-de-Sitter spacetime with a theory that lives on its boundary.
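A tiny helper restating the displayed mass/dimension relation (our own illustrative code; the formula as quoted applies to scalar modes on $`AdS_5`$, with the mass measured in units of the AdS radius):

```python
import math

def dimension_from_mass(m_R):
    """Delta = 2 + sqrt(4 + (m R_AdS)^2) for a bulk scalar of mass m."""
    return 2.0 + math.sqrt(4.0 + m_R**2)

def mass_from_dimension(delta):
    """Inverse relation, (m R_AdS)^2 = Delta*(Delta - 4); real here for Delta >= 4,
    although AdS also tolerates the tachyonic window m^2 R^2 >= -4."""
    return math.sqrt(delta * (delta - 4.0))

print(dimension_from_mass(0.0))  # massless bulk scalar <-> Delta = 4, a marginal operator
```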
It is a concrete example of holography. Understanding it better might lead to more insights about quantum gravity.
## 4 Conclusions
We have learned that the laws of physics as we know them are not consistent, since we do not treat gravity using quantum mechanics. Formulating the correct theory of quantum gravity would enable us to understand the big bang and most probably the parameters of the standard model. String theory enables us to study a whole class of phenomena that we expect in a theory of quantum gravity. The theory is not yet developed to the point of making definite experimental predictions, but is understood well enough to explain some quantum gravity phenomena like black hole entropy and evaporation, topology change, etc. Supersymmetric phases of string theory are fairly well explored. One of the main challenges is to find a supersymmetry breaking mechanism that does not generate a large cosmological constant. Many connections were found between different string theories and between string theory and field theory. It has been shown that string theory reduces, in certain circumstances, to ordinary four dimensional field theories and vice-versa. We hope that in the near future we will understand the theory better so as to make contact with experiment.
I am grateful to the organizers and participants for an interesting conference. This work was supported in part by DOE grant DE-FG02-91ER40654 and the Sloan and Packard foundations. |
no-problem/0002/astro-ph0002328.html | ar5iv | text | # An analytic model for the epoch of halo creation
## 1 Introduction
The hierarchical build-up of self-gravitating dark matter is thought to drive evolution in the observable universe. The formation of clumps of dark matter precipitates the formation of galaxies by providing a potential well into which gas can fall and subsequently cool. Violent mergers between equally sized halos and their associated galaxies are thought to be important for starbursts and quasar activation. In order to model and understand the observable universe it is therefore essential to understand the build-up of the dark structure.
The most widely used analytic model for the distribution of mass in isolated halos at any epoch comes from Press-Schechter (PS) theory \[Press & Schechter 1974\]. By smoothing the initial field of density fluctuations on different scales, information on the distribution of perturbation sizes can be obtained. Linking the time at which these perturbations collapse to the initial overdensities using the simplified spherical top-hat collapse model allows the distribution of mass in isolated halos at any epoch to be determined \[Press & Schechter 1974, Peacock & Heavens 1990, Bond et al. 1991\].
In Percival & Miller \[Percival & Miller 1999\] (hereafter paper I), we used the tenets of PS theory to model the related, but distinct problem of determining the distribution of times at which halos of a given mass are created. Here, โcreationโ is defined as the epoch at which non-linear collapse is predicted. Two derivations were given, one of which directly used the trajectories invoked in PS theory \[Peacock & Heavens 1990, Bond et al. 1991\], and one of which used Bayesโ theorem to convert from the PS mass function to a time distribution. The second derivation required the prior for the creation time which was calculated by examining the trajectories model.
In this paper we extend the Bayesian link between the mass function and the creation time distribution to cover any mass function. This is important, not only because it is known that standard PS theory is wrong in detail (e.g. Sheth & Tormen 1999), but especially because the new extension applies to mass functions derived from more general density fields including non-Gaussian models (e.g. Matarrese, Verde & Jimenez 2000).
First, we adopt the assumption that all clumps monotonically increase in mass on the cosmological time scales of interest. This monotonic growth is an inevitable aspect of gravitational instability. Every epoch should now be thought of as a creation time for a given clump, and we need not make the distinction between the creation time distribution and the distribution of times at which a given halo exists.
In order to convert from a mass function to a distribution in time we require the prior for the creation time. This is the rate at which creation events occur, given no information about the halo mass. In this work we use the spherical top-hat collapse (STHC) model to provide a simple mechanism for determining the required rate. In Section 2 we derive the link between collapse time and the overdensity at an early epoch for the STHC model within any Friedmann cosmology. Having determined that this relation is independent of halo mass, this leads directly to an approximation to the prior for the creation time, described in Section 3. This is the second major assumption adopted in this paper: that the prior for the creation time is well approximated by this simple model for the break-away of structure from linear expansion. This means that following the two simple assumptions detailed above, we are able to convert any mass function to give the distribution of epochs at which halos of a given mass are created.
Simple models of cosmologically evolving phenomena often adopt an important mass range rather than a specific halo mass (e.g. paper I, Granato et al. 1999). In order to use the work presented here in these models, the joint distribution of halos in mass and creation time is required. Although calculating the required joint probability is formally impossible because the equations cannot be properly normalised, a formula with the correct shape can be determined and is presented in Section 5.
So far we have not made a distinction between the slow accretion of mass onto a halo and major mergers between halos. Such a distinction is important because only major mergers are thought to play a vital role in starbursts and quasar activation (see paper I). The time distribution calculated in this paper determines when halos existed (or were created by any mechanism assuming monotonic clump growth) which is not necessarily equal to the distribution of merger events. This is discussed in Section 6.
Finally, we compare the analytic link between the mass function and the creation rate to the results from three numerical simulations of structure formation in different cosmological models. An analytic fit to the mass function as described by Sheth & Tormen \[Sheth & Tormen 1999\] is adopted and is converted into a creation rate using the STHC model. This model is compared with and shown to be in good agreement with the numerical results.
## 2 The Spherical Top-Hat Collapse Model
In this Section we analyse the STHC model, which is the simplest model for the way in which clumps of dark matter break free from linear growth and undergo non-linear collapse. We present the derivation of the link between the initial overdensity and the collapse time $`t_{\mathrm{coll}}`$ in a form which clearly shows that this link is independent of the mass of the overdensity. The derivation also demonstrates a method for calculating this link within any Friedmann cosmology. Similar derivations have been previously discussed for various subsets of cosmological parameter space: for an Einstein-de Sitter model, a derivation is given by Gunn & Gott \[Gunn & Gott 1972\], for an open $`\mathrm{\Omega }_V=0`$ model by Lacey & Cole \[Lacey & Cole 1993\], and for a flat $`\mathrm{\Omega }_Vโ 0`$ universe by Eke, Cole & Frenk \[Eke, Cole & Frenk 1996\]. A summary of these results is given in Kitayama & Suto \[Kitayama & Suto 1996\]. A numerical prescription for the calculation of the overdensity in any cosmology has also been developed \[Somerville & Primack 1999\].
These derivations all use the same basic idea which is adopted in this work: the behaviour of two spheres of equal mass is compared within the cosmological framework. One of the spheres evolves with the background density $`\rho _b(t)`$, while the other is perturbed by a uniform excess density $`\mathrm{\Delta }\rho (t)`$. In subsequent analysis, a subscript โ$`b`$โ denotes that a quantity relates to the sphere with background density, and โ$`p`$โ to the perturbation.
Matter is assumed to be an ideal fluid with no pressure and the universe is modelled as spherically symmetric around the perturbation. As a consequence of Birkhoffโs theorem, the gravitational field of both the perturbation and the background is described by a Robertson-Walker (RW) metric with curvature constant $`K`$, and RW scale factor $`a(t)`$. The behaviour of such perturbations is governed by Friedmannโs equation which we will consider in the form:
$$\left(\frac{da}{dt}\right)^2+K=\frac{2GM}{a}+(H_0^2\mathrm{\Omega }_V)a^2$$
(1)
where $`M`$ is the mass inside the sphere. Note that in order to compare spheres with different behaviour, we do not normalise the scale factor $`a(t)`$ to equal the curvature scale (by dividing by $`\sqrt{|K|}`$) so $`K`$ is allowed to take any real value.
To calculate the behaviour of the overdensity at an early time, we note that a series solution for $`a(t)`$ in the limit $`tโ0`$ can be obtained for Equation 1. This is given by $`a=\alpha t^{2/3}+\beta t^{4/3}+O(t^{6/3})`$, where:
$$\alpha =\left(\frac{9GM}{2}\right)^{1/3},\beta =-\frac{3K}{20}\left(\frac{6}{GM}\right)^{1/3}.$$
(2)
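Equation 2 is easy to verify symbolically (our own check, using sympy; the substitution $`t=s^3`$ keeps the fractional powers polynomial, and the series is inspected through the orders that $`\alpha `$ and $`\beta `$ are designed to cancel):

```python
import sympy as sp

s, G, M, K, H0, OmV = sp.symbols('s G M K H_0 Omega_V', positive=True)
t = s**3                                          # so t**(1/3) = s
alpha = sp.cbrt(sp.Rational(9, 2) * G * M)
beta = -sp.Rational(3, 20) * K * sp.cbrt(6 / (G * M))
a = alpha * s**2 + beta * s**4                    # a = alpha*t^(2/3) + beta*t^(4/3)
adot = sp.diff(a, s) / sp.diff(t, s)              # da/dt via the chain rule
resid = adot**2 + K - 2*G*M/a - H0**2 * OmV * a**2
# the t^(-2/3) and t^0 terms of the Friedmann residual must vanish:
print(sp.simplify(sp.series(resid, s, 0, 2).removeO()))   # -> 0
```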
Using the fact that the spheres contain equal mass, the behaviour of $`\delta (t)โก\mathrm{\Delta }\rho (t)/\rho _b(t)`$ in the limit $`tโ0`$ is given by:
$`\underset{tโ0}{lim}\left({\displaystyle \frac{\mathrm{\Delta }\rho (t)}{\rho _b(t)}}\right)`$ $`=`$ $`\underset{tโ0}{lim}\left({\displaystyle \frac{a_b(t)^3}{a_p(t)^3}}-1\right)`$ (3)
$`=`$ $`-{\displaystyle \frac{3}{\alpha }}(\beta _p-\beta _b)t^{2/3}+O(t^{4/3}).`$
Defining:
$$ฯต=\frac{K}{(GMH_0)^{2/3}},$$
(4)
the present day normalisation of Equation 1 gives that for a sphere of uniform background density:
$$ฯต_b=(\mathrm{\Omega }_M+\mathrm{\Omega }_V-1)\left(\frac{2}{\mathrm{\Omega }_M}\right)^{2/3},$$
(5)
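For completeness, the normalisation behind Equation 5 is a one-line computation: evaluating Equation 1 today, where $`a=a_0`$ and $`da/dt=H_0a_0`$, and using $`2GM=\mathrm{\Omega }_MH_0^2a_0^3`$ for the unperturbed sphere, gives

$$K=(\mathrm{\Omega }_M+\mathrm{\Omega }_V-1)H_0^2a_0^2,\text{ so that }ฯต_b=K/(GMH_0)^{2/3}=(\mathrm{\Omega }_M+\mathrm{\Omega }_V-1)(2/\mathrm{\Omega }_M)^{2/3}.$$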
We can now combine Equations 2, 3 & 5 to determine the behaviour of $`\delta (t)`$ in the limit $`tโ0`$ as a function of $`ฯต_p`$:
$$\underset{tโ0}{lim}\delta (t)=-\frac{9}{20}\left(\frac{4}{3}\right)^{1/3}$$
$$\times \left[(\mathrm{\Omega }_M+\mathrm{\Omega }_V-1)\left(\frac{2}{\mathrm{\Omega }_M}\right)^{2/3}-ฯต_p\right](H_0t)^{2/3}$$
$$+O[(H_0t)^{4/3}]$$
(6)
If the field of perturbations is linearly extrapolated to present day and normalised here, the approximation of Carroll, Press & Turner \[Carroll, Press & Turner 1992\] for the ratio of the current linear amplitude to the Einstein-de Sitter model can be used to extrapolate the limiting behaviour of $`\delta `$ to this epoch. The extrapolated limit, $`\delta _{\mathrm{lim}}`$, is related to $`ฯต_p`$ by:
$$\delta _{\mathrm{lim}}โ-\frac{3}{8}\left(4\mathrm{\Omega }_M\right)^{2/3}\left[(\mathrm{\Omega }_M+\mathrm{\Omega }_V-1)\left(\frac{2}{\mathrm{\Omega }_M}\right)^{2/3}-ฯต_p\right]$$
$$\times \left[\mathrm{\Omega }_M^{4/7}-\mathrm{\Omega }_V+\left(1+\frac{1}{2}\mathrm{\Omega }_M\right)\left(1+\frac{1}{70}\mathrm{\Omega }_V\right)\right]^{-1}.$$
(7)
A similar formula is possible if the field of fluctuations is normalised at any other epoch. Note that $`\delta _{\mathrm{lim}}โ(ฯต_p+\mathrm{constant})`$ and the time dependence of $`\delta _{\mathrm{lim}}`$ is given by that of $`ฯต_p`$.
For the perturbation, the radius of maximum expansion can be calculated from Equation 1: this radius corresponds to the first positive root of the equation $`2GM+H_0^2\mathrm{\Omega }_Va^3-Ka=0`$, denoted by $`a_{\mathrm{max}}`$. This leads to a necessary and sufficient condition for the perturbation to collapse: that such a (finite) root exists. Because of the symmetry in Equation 1, this model predicts that the perturbation will collapse to a singularity at a time equal to twice the time required to reach maximal expansion:
$$H_0t_{\mathrm{coll}}=2โซ_0^{a_{\mathrm{max}}^{}}\left(\frac{2}{a^{}}+\mathrm{\Omega }_V(a^{})^2-ฯต_p\right)^{-1/2}๐a^{}$$
(8)
where we have changed from $`a`$ to $`a^{}=aH_0^{2/3}/(GM)^{1/3}`$, and $`a_{\mathrm{max}}^{}`$ is the first positive root of the equation:
$$2+\mathrm{\Omega }_V(a^{})^3-ฯต_pa^{}=0.$$
(9)
Although collapse to a singularity does not occur in practice, the virialisation epoch is assumed to be similar to $`t_{\mathrm{coll}}`$.
For perturbations that collapse, $`\delta _{\mathrm{lim}}`$ is called the โcriticalโ density and is denoted $`\delta _c`$. Equation 7 then gives $`\delta _c(ฯต_p)`$. Equations 8 & 9 give $`t_{\mathrm{coll}}(ฯต_p)`$, and the combination of these three Equations gives the required link between $`\delta _c`$ and the collapse time. Note that these Equations are independent of the perturbation mass, and therefore so is the link between the initial overdensity and the collapse time.
In practice we wish to use these Equations to calculate $`\delta _c(z_{\mathrm{coll}},\mathrm{\Omega }_M,\mathrm{\Omega }_V)`$ or $`d\delta _c(z_{\mathrm{coll}},\mathrm{\Omega }_M,\mathrm{\Omega }_V)/dt`$ where $`z_{\mathrm{coll}}`$ is the collapse redshift. Unfortunately this is not easy as Equations 8 & 9 cannot be inverted to give $`ฯต_p(t_{\mathrm{coll}})`$. The procedure adopted is as follows: the collapse time can be numerically determined from $`z_{\mathrm{coll}}`$ using the Friedmann equation for the background cosmology. $`ฯต_p`$ can be determined numerically using Equations 8 & 9, and $`\delta _c`$ can be calculated using Equation 7. $`d\delta _c/dt`$ can be calculated numerically from $`\delta _c(t_{\mathrm{coll}})`$ and is discussed further in the next Section.
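A minimal numerical sketch of this procedure (our own illustration; the function names, bracketing values and the choice of scipy's quad and brentq are ours, not the paper's):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def t_of_z(z, Om, Ov):
    """Dimensionless age H0*t at redshift z from the background Friedmann equation."""
    E = lambda a: np.sqrt(Om / a**3 + Ov + (1.0 - Om - Ov) / a**2)
    return quad(lambda a: 1.0 / (a * E(a)), 0.0, 1.0 / (1.0 + z))[0]

def a_max(eps, Ov):
    """First positive root of 2 + Ov*a'^3 - eps*a' = 0 (Equation 9)."""
    if Ov == 0.0:
        return 2.0 / eps
    a_star = np.sqrt(eps / (3.0 * Ov))            # where the cubic is minimised
    return brentq(lambda a: 2.0 + Ov * a**3 - eps * a, 1e-10, a_star)

def t_coll(eps, Ov):
    """Dimensionless collapse time H0*t_coll of Equation 8; the substitution
    a' = am - w^2 removes the inverse-square-root singularity at a' = am."""
    am = a_max(eps, Ov)
    f = lambda w: 2.0 * w / np.sqrt(2.0 / (am - w * w) + Ov * (am - w * w)**2 - eps)
    return 2.0 * quad(f, 0.0, np.sqrt(am))[0]

def delta_c(z_coll, Om, Ov):
    """Critical linear overdensity, extrapolated to z=0, for collapse at z_coll."""
    target = t_of_z(z_coll, Om, Ov)
    eps_b = (Om + Ov - 1.0) * (2.0 / Om)**(2.0 / 3.0)
    # smallest eps that recollapses (small offset: the marginal case diverges)
    lo = 3.0 * Ov**(1.0 / 3.0) + 1e-6 if Ov > 0 else 1e-6
    eps_p = brentq(lambda e: t_coll(e, Ov) - target, lo, 1e4)   # invert Equation 8
    g = Om**(4.0 / 7.0) - Ov + (1.0 + 0.5 * Om) * (1.0 + Ov / 70.0)
    return -(3.0 / 8.0) * (4.0 * Om)**(2.0 / 3.0) * (eps_b - eps_p) / g  # Equation 7

print(delta_c(0.0, 1.0, 0.0))   # Einstein-de Sitter today: ~1.686
```

The printed Einstein-de Sitter value reproduces the familiar $`\delta _cโ1.686`$, which provides a check on the general machinery.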
For the subset of cosmological models with $`\mathrm{\Omega }_V=0`$, the above procedure is simplified and analytic formula can be obtained for $`\delta _c`$. In this case, Equation 8 reduces to:
$$H_0t_{\mathrm{coll}}=2_0^{2/ฯต_p}\left(\frac{2}{a^{}}ฯต_p\right)^{1/2}๐a^{}.$$
(10)
Making the substitution $`\mathrm{tan}(\theta )=(2/a^{}ฯต_p)^{1/2}`$, this integral can be solved to give:
$$H_0t_{\mathrm{coll}}=\frac{2\pi }{ฯต_p^{3/2}}.$$
(11)
We now show that these Equations provide the result of Gunn & Gott \[Gunn & Gott 1972\] for an Einstein-de Sitter cosmology. In this case, substituting Equation 11 into Equation 7 gives that:
$$\delta _c(t_{\mathrm{coll}})=\frac{3}{20}\left(\frac{8\pi }{H_0t_{\mathrm{coll}}}\right)^{2/3}.$$
(12)
We can now change from collapse time to collapse redshift to give:
$`\delta _c(z_{\mathrm{coll}})`$ $`=`$ $`{\displaystyle \frac{3}{20}}(12\pi )^{2/3}(1+z_{\mathrm{coll}})`$ (13)
$``$ $`1.69(1+z_{\mathrm{coll}}),`$
which is the equation of Gunn & Gott \[Gunn & Gott 1972\].
## 3 From a mass function to a time distribution
In this Section we show how to convert from a mass function to the distribution of times at which isolated halos of a given mass exist. First, we make the assumption that the mass of any clump is a monotonically increasing function of time so that the mass will increase between any two epochs. This is true for Press-Schechter theory (see paper I). Note that this mass growth is not constrained to be continuous and the mass is allowed to undergo instantaneous finite increases, or โmass jumpsโ. Following this assumption, every epoch at which a halo exists should also be considered as a โcreationโ epoch: every halo is a new isolated halo of some mass. The distribution of โcreation eventsโ is therefore the same as the distribution of times at which the halos exist. Note that by definition this only applies to isolated halos which have not been subsumed into larger objects.
In this paper we have called this epoch the โcreationโ time of a halo in order to avoid confusion with other authors definitions of the โformationโ time of a halo. Note that this semantic change was not adopted in paper I. The โformationโ time of a halo was defined by Lacey & Cole \[Lacey & Cole 1993\] as the latest time when the largest progenitor of a halo has a mass less than half that of the final halo. This definition makes sense if we are discussing a non-evolving quantity, say the existence of a galaxy halo, and wish to know when it was formed given that it exists at present day. However, suppose we do not know anything about the build-up of a halo before or after it has mass $`M`$ and only wish to know when it was likely to have existed. This Lacey & Cole definition of โformationโ cannot help us for we do not know the time and mass from which to determine progenitors: progenitors of what?
In order to calculate the probability density function (pdf) of the times at which halos exist, we consider the set of all possible times and all possible halo masses. This is the โsample spaceโ of our โexperimentโ. The experiment consists of choosing a particle, or small mass element, and an โeventโ is given by any subset of the sample space: for instance that the particle is part of a halo of mass $`M_1<M<M_2`$ created at time $`t_1<t<t_2`$, or that the particle inhabits a halo of mass $`M`$, created at time $`t`$.
Denoting a generic pdf by the function $`f`$, the mass function is given by $`f(M|t)dM`$, the distribution of halo masses at a given epoch. This is equal to $`Mn(M)/\rho `$ where $`n(M)`$ is the number density of halos. The pdf we wish to determine is given by $`f(t|M)dt`$, the distribution of times at which halos of mass $`M`$ were created. Note that our assumption of monotonic mass growth means that $`t`$ is the same variable in $`f(M|t)dM`$ and $`f(t|M)dt`$. These pdfs are then related by the following formula, based on Bayesโ theorem:
$$f(t|M)dt=\frac{f(M|t)dMf(t)dt}{_0^{\mathrm{}}\left[f(M|t)dMf(t)\right]๐t},$$
(14)
where $`f(t)dt`$ is the normalised prior for time, or the distribution of creation events in time given no information about the mass of halo. In paper I we calculated the prior using the Brownian random walks invoked in PS theory with a sharp $`k`$-space filter. In order not to bias the distribution of up-crossings within this model, we assumed a uniform prior for $`\delta _c`$.
The reason the prior is uniform in $`\delta _c`$ follows from the STHC model. Within this model, it is the density $`\delta `$ associated with a particle that is important, and the barrier has to move from $`\delta `$ to $`\delta d\delta `$ for โhalo creationโ to have occurred. Given that the mass of all clumps monotonically increases, all particles will be associated with creation events at any $`\delta _c`$. Following these two observations, any two equal width intervals in $`\delta _c`$ should contain equal โnumbersโ of halo creation events.
The derivation presented in the previous Section showed that for the STHC model, the link between the critical overdensity and the collapse time is independent of the perturbation mass. Therefore, given no information about the mass contained within a perturbation, the pdf for the time at which the perturbation collapses should be assumed to be proportional to the time derivative of $`\delta _c(t)`$. This gives the rate at which the collapse threshold $`\delta _c(t)`$ crosses the initial overdensities. This can be calculated numerically from the following formula:
$$\frac{d\delta _c}{dt}\frac{dฯต_p}{dt}=$$
$$\left[\frac{d}{dฯต_p}\left(\underset{0}{\overset{a_{\mathrm{max}}^{}(ฯต_p)}{}}\left(\frac{2}{a^{}}+\mathrm{\Omega }_V(a^{})^2ฯต_p\right)^{\frac{1}{2}}๐a^{}\right)\right]^1,$$
(15)
where $`a_{\mathrm{max}}^{}`$ is the first positive root of the equation $`2+\mathrm{\Omega }_V(a^{})^3ฯต_pa^{}=0`$. Note that for cosmologies with $`\mathrm{\Omega }_V=0`$, the above equation can be analytically solved as for Equation 11, and the derivative $`d\delta /dt`$ is proportional to $`t^{5/3}`$.
Unfortunately, $`d\delta _c(t)/dt`$ cannot be normalised so that it integrates over all time to give unity. This means that we cannot simply take a multiple of $`d\delta _c(t)/dt`$ as the prior for the collapse time. However, we can still use Equation 14 by making use of a mathematical trick and placing an arbitrary upper limit on $`t`$, $`t_u`$, which can be removed later without affecting the result. This gives that:
$$f(t|M)dt=\underset{t_u\mathrm{}}{lim}\left[\frac{f(M|t)dMf(t,t_u)dt}{_0^{t_u}\left[f(M|t)dMf(t,t_u)\right]๐t}\right]$$
(16)
The connection between the mass function and the creation rate of halos presented in this Section is consistent with that of paper I: the prior for time used is exactly the same. We have merely shown that adopting the STHC model for the rate at which structures are created allows any mass function to be converted to give the pdf of the time at which a halo of a given mass is created.
## 4 The relation with the multiplicity function
Changing variables from mass to a function of
$$\nu \frac{\delta _c}{\sigma _M}$$
(17)
alters the form of the standard PS mass function to one which is invariant with respect to time. Here $`\sigma _M`$ is the rms fluctuation of the initial density field smoothed with a top-hat filter on a scale related to mass $`M`$. Unless stated otherwise we change variables in the mass function from M to $`\mathrm{ln}\nu (M,t)`$. The normalised pdf $`f(\mathrm{ln}\nu |t)`$ is called the multiplicity function and is related to the mass function by:
$$f(M|t)=Af(\mathrm{ln}\nu |t)\frac{\mathrm{ln}\nu }{M}|_t,$$
(18)
where $`A`$ is a normalisation constant. Note that we have retained the condition on time in $`f(\mathrm{ln}\nu |t)`$, to emphasise that we are still concerned with the distribution of halos at a particular epoch. Although the multiplicity function has a form which is invariant with respect to time, it still gives the distribution of $`\mathrm{ln}\nu `$ we would expect for halos given a particular time. This is not the same as the the distribution of $`\mathrm{ln}\nu `$ we would obtain if we chose halos at random in both mass and time, or the distribution of $`\mathrm{ln}\nu `$ we would obtain if we chose halos only of a particular mass.
If the mass function can be written in a form which is independent of time as described above, then under the same change of variables, the creation rate becomes independent of halo mass. The resulting pdf $`f(\mathrm{ln}\nu |M)`$ is now only valid if we are examining the distribution of halos at fixed mass. Following the notation adopted above, this is given by:
$$f(t|M)=Af(\mathrm{ln}\nu |M)\frac{\mathrm{ln}\nu }{t}|_M.$$
(19)
## 5 The joint distribution of halos in mass and time
Because the mass of each halo is assumed to monotonically increase with time, within any interval of mass and time, an infinite number of โcreation eventsโ occur. This means that the joint probability of the existence of a halo in both mass and time cannot be properly normalised.
Equation 16 gives the link between two pdfs, the mass function and the creation rate using a mathematical trick to cope with an un-normalised prior in time. The numerator of this equation is the joint distribution of halos in mass and time, $`f(M,t)dMdt=f(M|t)dMf(t)dt`$. The denominator is not a function of time: it only normalises the resulting formula so $`f(t|M)dt`$ integrates to unity. Following this argument, given a mass function, multiplying by $`d\delta _c(t)/dt`$ creates a function with both the correct mass and time behaviour. This joint distribution function (not a pdf) has the same mass dependence as $`f(M|t)`$ and the same time dependence as $`f(t|M)`$.
As an example we consider the fitting function of Sheth & Tormen \[Sheth & Tormen 1999\] to the multiplicity function determined from the results of N-body simulations for different cosmological parameters:
$$\frac{Mn(M)}{\rho }dM=f(M|t)dM=f(\mathrm{ln}\nu |t)d\mathrm{ln}\nu $$
$$=A\sqrt{\frac{2}{\pi }}\left(1+\frac{1}{\nu ^{2p}}\right)\nu ^{}e^{\nu ^2/2}d\mathrm{ln}\nu ,$$
(20)
where $`\nu ^{}=a^{1/2}\nu `$ and $`a`$ & $`p`$ are parameters. Note that Sheth & Tormen displayed this formula using a different notation to that adopted here, although parameters $`a`$ and $`p`$ are the same in both cases. $`A`$ is determined by requiring that the integral of $`f(\mathrm{ln}\nu |t)`$ over all $`\mathrm{ln}\nu `$ gives unity. Sheth & Tormen found best fit parameters $`a=0.707`$ and $`p=0.3`$ for their simulations and group finding algorithm. The standard PS multiplicity function has $`a=1`$, $`p=0`$ and $`A=1/2`$. Unless stated otherwise, by standard PS theory, we refer to the adoption of this multiplicity function combined with top-hat filtering (to calculate $`\sigma _M^2`$). In order to convert this function to provide a model of both the time and mass of halo creation events, all we need to do is to multiply by $`d\delta _c/dt`$.
For standard PS theory, writing $`\nu `$ explicitly in terms of $`\sigma _M^2`$ and $`\delta _c`$ we find that the joint distribution of the existence of a halo in mass and time reduces to:
$$f(M,t)dMdt=\frac{\delta _c}{(2\pi )^{1/2}\sigma _M^3}$$
$$\times \mathrm{exp}\left(\frac{\delta _c^2}{2\sigma _M^2}\right)\left|\frac{d\sigma _M^2}{dM}\right|\left|\frac{d\delta _c}{dt}\right|dMdt,$$
(21)
Although not normalised, such a formula integrated over any two areas of the mass-time plane will provide the correct relative number densities.
Note that this is not the same formula as obtained by simply multiplying the mass function with the creation time distribution at fixed mass. This would be inconsistent within a Bayesian framework and would produce a joint density function which lacks the correct mass and time behaviour: the form of each conditional pdf is altered by the other. Care should therefore be taken when using the creation rate in models which also include the mass function.
## 6 The relation with merger events
So far, we have only been concerned with the epoch at which a halo is created. However, there is an important distinction between major mergers and the slow accretion of mass when applying the results in models of certain cosmological phenomena. For instance, only violent merger events are thought to be important for starbursts and quasar activation. In paper I, we showed that for standard PS theory with a sharp $`k`$-space filter, if mass jumps in a particular trajectory correspond to merger events, then the distribution of mergers is the same as that of the build-up of matter from all types of creation event. This is because the trajectories are Brownian random walks which have the special property that their form is independent of the initial point.
Given only the mass function and the assumptions outlined above it is not possible to determine how each clump increases in mass, only the distribution of times at which it reaches a certain mass. More information about the build-up of individual clumps is required before the distribution of major mergers can be determined. Such information is available in PS theory and follows from the argument that each trajectory gives the history of the halo masses in which a particular small mass element resides.
## 7 Description of the Numerical Simulations
A direct approach to modelling structure formation is to simulate the evolution of the mass density of the Universe using a distribution of softened particles. We have run three such simulations using the Hydra N-body, hydrodynamics code \[Couchman, Thomas & Pearce 1995\] with $`128^3`$ dark matter particles to model the build-up of halos for three different cosmological models, described in Table 1.
In order to determine the rate at which halos are created within these simulations, we output particle positions at a large number of times. For the $`\mathrm{\Gamma }`$CDM simulation, we output particle positions at 362 different epochs, separated by approximately equal intervals in time. For the OCDM simulation the number of outputs was 345 and for the $`\mathrm{\Lambda }`$CDM simulation, 499. The box size chosen was 100 $`h^1`$Mpc for all three simulations which gave a particle mass of $`2.6\times 10^{11}`$ $`M_{}`$ for $`\mathrm{\Gamma }`$CDM and $`7.9\times 10^{10}`$ $`M_{}`$ for the other two simulations. Groups of particles were found for each output using a standard friends-of-friends algorithm with linking length set to $`b=0.2`$ times the mean interparticle separation.
## 8 Fitting to the Mass Function
The multiplicity function averaged over all output times is presented from each of the simulations in Fig. 1. Here we have only considered groups containing over 45 particles in order to limit the number of false detections due to numerical effects. In compiling the data in this way, we have assumed that converting from mass to $`\mathrm{ln}\nu `$ does indeed convert the form of the mass function into one which is independent of epoch. This Figure has been produced in such a way as to be directly comparable with figure 2 of Sheth & Tormen \[Sheth & Tormen 1999\]. For comparison we also plot their best fit model and the predictions of standard PS theory.
We have also plotted the model of Sheth & Tormen \[Sheth & Tormen 1999\] (Equation 20) after allowing the parameters to vary to simultaneously fit the data from all three simulations. We find slightly different best fit parameters to those of Sheth & Tormen. Our best fit parameters are $`a=0.774,p=0.274`$, compared to standard PS theory $`a=1,p=0`$ and Sheth & Tormen $`a=0.707,p=0.3`$. Note that the difference between our best fit values and those of Sheth & Tormen could be explained by the different group finding algorithms used.
## 9 Comparison between the Analytic and Numerical Halo Creation Rates
Although we have argued that the monotonic increase in mass means that all epochs are โcreationโ times for a given halo, we cannot simply compare the creation rate formulae with the distribution of halo numbers at different epochs: each halo should only be counted once. To determine the distribution of creation times of halos of mass $`M`$, we therefore sequentially analysed the FOF output from $`z=50`$ to present day. All halos of mass $`>M`$ were examined at each epoch to determine whether they were โnewโ. The definition of โnewโ adopted was that at least half of the particles in a halo were not included in any halo of mass $`>M`$ at a previous output time. The number of these halos in the required mass range was taken to be the minimum number which could have been created between that output time and the previous one. In order not to miss creation events where a halo was created and subsumed into a larger halo all within the time interval between two outputs, we analysed the progenitors of all new halos with mass greater than the required range. Those with a progenitor distribution at the previous step which could sum to a halo of the required mass were recorded as a possible halo of the required mass. In this way we determined the minimum and maximum mass which could have been created in each time interval between output from the simulation.
In Fig. 2 we plot the creation rate for halos within two narrow mass ranges. In order to obtain the maximum number of creation events, we have used relative low numbers of particles in each group. Data are plotted for groups of between $`4550`$ and $`100110`$ particles. These distributions are compared with the three multiplicity functions plotted in Fig. 2, converted into creation rates by multiplying by $`d\delta /dt`$ for halos of mass equivalent to 45 or 100 particles. These curves have been normalised to the low redshift data.
All of the models reproduce the decrease in creation events to present day seen in the simulations. As output from the simulation occured after approximately equal intervals of time, the high redshift data suffers as the intervals contain relatively more creation events. This means that we cannot precisely follow the build-up of the clumps, and the difference between the maximum and minimum mass which could have been created in each bin is increased. This is particularly noticable in the OCDM simulation where halos are created at earlier times and we have fewer outputs from the simulation.
However, there is evidence that the solid line (calculated from the best fit to the mass function) also fits the creation rate data the best out of the three models plotted. As a rough guide to this, the root mean square value between the plotted data points and the model is 3.4 for this curve, compared to 6.7 for standard PS theory, and 5.0 for the best fit model of Sheth & Tormen \[Sheth & Tormen 1999\]. Note that the form of the creation rate is strongly dependent on the parameter $`a`$ in Equation 20, and only weakly dependent on parameter $`p`$. This is consistent with the importance of these parameters for the mass function: parameter $`a`$ controls the position of the high-mass cut-off, whereas parameter $`p`$ controls the low-mass tail of the distribution.
## 10 Conclusions
We have demonstrated a simple method for linking any mass function to the corresponding distribution of times at which isolated halos of a given mass are created. In order to provide this link we adopted the assumption that the time scales of interest are those over which the mass of every clump can be thought of as monotonically increasing. The prior for the collapse time was estimated using the STHC model which ties in directly with PS theory, although the method does not use any of PS theory beyond that of the STHC model. We have presented a new derivation of the link between the collapse time and initial overdensity for this model which explicitly shows that this link is independent of the halo mass and is applicable in any Friedmann cosmology. Multiplying the mass function by a function with no mass dependence and proportional to the time derivative of the critical overdensity then provides a joint density function with the correct behaviour for the creation of a halo in mass and time. Integrating over the resulting joint density function will give the correct relative number densities of halos within different mass and time intervals.
We have extended the analysis of N-body simulation results presented in paper I to cover three simulations of the build-up of dark matter within different cosmological models. Rather than using PS theory, we have demonstrated how a fit to the mass function may be converted to give a creation rate. Out of the three functions we have compared to the mass function data, the best fit model for these data when converted to a creation rate also fits the creation rate data the best. This gives us confidence that the formalism presented here is sound, and should give accurate results in more general situations, in particular non-Gaussian models.
## 11 Acknowledgements
We are grateful for the use of the Hydra N-body code \[Couchman, Thomas & Pearce 1995\] kindly provided by the Hydra consortium. |
no-problem/0002/astro-ph0002004.html | ar5iv | text | # Extracting Energy from a Black Hole through Its Disk
## 1 Introduction
Extraction of energy from a black hole or an accretion disk through magnetic braking has been investigated by many people. As a rotating black hole is threaded by magnetic field lines which connect with remote astrophysical loads, energy and angular momentum are extracted from the black hole and transported to the remote loads via Poynting flux (Blandford & Znajek 1977; Macdonald & Thorne 1982; Phinney 1983). This is usually called the Blandford-Znajek mechanism and has been suggested to be a plausible process for powering jets in active galactic nuclei (Rees, Begelman, Blandford, & Phinney 1982; Begelman, Blandford, & Rees 1984) and gamma ray bursts (Paczyลski 1993; Lee, Wijers, & Brown 1999). Similar process can happen to an accretion disk when some of magnetic field lines threading the disk are open and connect with remote astrophysical loads (Blandford 1976; Blandford & Znajek 1977; Macdonald & Thorne 1982; Livio, Ogilvie, & Pringle 1999; Li 1999).
In this paper we investigate the effects of magnetic field lines connecting a Kerr black hole with a disk surrounding it. This kind of magnetic field lines are expected to exist and have important effects (Macdonald & Thorne 1982; Blandford 1999, 2000; Gruzinov 1999). We find that, with the existence of such magnetic coupling between the black hole and the disk, energy and angular momentum are transfered between them. If the black hole rotates faster than the disk, energy and angular momentum are extracted from the black hole and transferred to the disk via Poynting flux. This is the case when $`a/M_H>0.36`$ for a thin Keplerian disk, where $`M_H`$ is the mass of the black hole and $`aM_H`$ is the angular momentum of the black hole. Throughout the paper we use the geometric units with $`G=c=1`$. The energy deposited into the disk by the black hole is eventually radiated to infinity by the disk. This provides a way for extracting energy from a black hole through its disk. If the disk has no accretion (or the accretion rate is very low), the power of the disk essentially comes from the rotational energy of the black hole. We will show that the magnetic coupling between the black hole and the disk has a higher efficiency in extracting energy from a Kerr black hole than the Blandford-Znajek mechanism.
## 2 Transfer of Energy and Angular Momentum between a Black Hole and Its Disk by Magnetic Coupling
Suppose a bunch of magnetic field lines connect a rotating black hole with a disk surrounding it. Due to the rotation of the black hole and the disk, electromotive forces are induced on both the black holeโs horizon and the disk (Macdonald & Thorne 1982; Li 1999)
$`_H={\displaystyle \frac{1}{2\pi }}\mathrm{\Omega }_H\mathrm{\Delta }\mathrm{\Psi },_D={\displaystyle \frac{1}{2\pi }}\mathrm{\Omega }_D\mathrm{\Delta }\mathrm{\Psi },`$ (1)
where $`\mathrm{\Omega }_H`$ is the angular velocity of the black hole, $`\mathrm{\Omega }_D`$ is the angular velocity of the disk, $`\mathrm{\Delta }\mathrm{\Psi }`$ is the magnetic flux connecting the black hole with the disk. The black hole and the disk form a closed electric circuit, the electric current flows through the magnetic field lines connecting them. Suppose the disk and the black hole rotates in the same direction, then $`_H`$ and $`_D`$ have opposite signs. This means that energy and angular momentum are transferred either from the black hole to the disk or from the disk to the black hole, the direction of transfer is determined by the sign of $`_H+_D`$. By the Ohmโs law, the current is $`I=(_H+_D)/Z_H=\mathrm{\Delta }\mathrm{\Psi }(\mathrm{\Omega }_H\mathrm{\Omega }_D)/(2\pi Z_H)`$, where $`Z_H`$ is the resistance of the black hole which is of several hundred Ohms (the disk is perfectly conducting so its resistance is zero). The power deposited into the disk by the black hole is
$`P_{HD}=I_D=\left({\displaystyle \frac{\mathrm{\Delta }\mathrm{\Psi }}{2\pi }}\right)^2{\displaystyle \frac{\mathrm{\Omega }_D\left(\mathrm{\Omega }_H\mathrm{\Omega }_D\right)}{Z_H}}.`$ (2)
The torque on the disk produced by the black hole is
$`T_{HD}={\displaystyle \frac{I}{2\pi }}\mathrm{\Delta }\mathrm{\Psi }=\left({\displaystyle \frac{\mathrm{\Delta }\mathrm{\Psi }}{2\pi }}\right)^2{\displaystyle \frac{\left(\mathrm{\Omega }_H\mathrm{\Omega }_D\right)}{Z_H}}.`$ (3)
As expected, we have $`P_{BH}=T_{BH}\mathrm{\Omega }_D`$.
The signs of $`P_{HD}`$ and $`T_{HD}`$ are determined by the sign of $`\mathrm{\Omega }_H\mathrm{\Omega }_D`$. When $`\mathrm{\Omega }_H>\mathrm{\Omega }_D`$, we have $`P_{HD}>0`$ and $`T_{HD}>0`$, energy and angular momentum are transferred from the black hole to the disk. When $`\mathrm{\Omega }_H<\mathrm{\Omega }_D`$, we have $`P_{HD}<0`$ and $`T_{HD}<0`$, energy and angular momentum are transferred from the disk to the black hole so the black hole is spun up. For a disk with non-rigid rotation, $`\mathrm{\Omega }_D`$ varies with radius. For fixed values of $`\mathrm{\Delta }\mathrm{\Psi }`$, $`\mathrm{\Omega }_H`$, and $`Z_H`$, $`P_{HD}`$ peaks at $`\mathrm{\Omega }_D=\mathrm{\Omega }_H/2`$. However for realistic cases which is most important is when the magnetic field lines touch the disk close to the inner boundary, so $`\mathrm{\Omega }_D`$ in Eq. (2) and Eq. (3) can be taken to be the value at the inner boundary of the disk. According to Gruzinov (1999) the magnetic fields will be more unstable against screw instability if the foot-points of the field lines on the disk are far from the inner boundary of the disk.
For a thin Keplerian disk around a Kerr black hole in the equatorial plane, the angular velocity of the disk is (Novikov & Thorne 1973)
$`\mathrm{\Omega }_D(r)=\left({\displaystyle \frac{M_H}{r^3}}\right)^{1/2}{\displaystyle \frac{1}{1+a\left(M_H/r^3\right)^{1/2}}},`$ (4)
where $`r`$ is the Boyer-Lindquist radius in Kerr spacetime. $`\mathrm{\Omega }_D(r)`$ decreases with increasing $`r`$. The angular velocity of a Kerr black hole is
$`\mathrm{\Omega }_H={\displaystyle \frac{a}{2M_Hr_H}},`$ (5)
where $`r_H=M_H+\sqrt{M_H^2a^2}`$ is the radius of the event horizon. $`\mathrm{\Omega }_H`$ is constant on the horizon. The inner boundary of a Keplerian disk is usually assumed to be at the marginally stable orbit with radius (Novikov & Thorne 1973)
$`r_{ms}=M_H\left\{3+z_2\left[(3z_1)(3+z_1+2z_2)\right]^{1/2}\right\},`$ (6)
where
$`z_1=1+\left(1a^2/M_H^2\right)^{1/3}\left[\left(1+a/M_H\right)^{1/3}+\left(1a/M_H\right)^{1/3}\right],`$ (7)
and
$`z_2=\left(3a^2/M_H^2+z_1^2\right)^{1/2}.`$ (8)
Inserting Eq. (6) into Eq. (4), we obtain the angular velocity of the disk at its inner boundary: $`\mathrm{\Omega }_{ms}=\mathrm{\Omega }_D(r_{ms})`$. For the Schwarzschild case (i.e $`a=0`$) we have $`r_{ms}=6M_H`$ and $`\mathrm{\Omega }_{ms}=6^{3/2}M_H^1\mathrm{\Omega }_0`$.
Assuming the magnetic field lines touch the disk close to the inner boundary, we have $`P_{HD}P_0f`$ where
$`P_0=\left({\displaystyle \frac{\mathrm{\Delta }\mathrm{\Psi }}{2\pi }}\right)^2{\displaystyle \frac{\mathrm{\Omega }_0^2}{Z_H}}`$ (9)
is the value of $`P_{HD}`$ for the Schwarzschild case, and
$`f={\displaystyle \frac{\mathrm{\Omega }_{ms}\left(\mathrm{\Omega }_H\mathrm{\Omega }_{ms}\right)}{\mathrm{\Omega }_0^2}}`$ (10)
is a function of $`a/M_H`$ only. The variation of $`P_{HD}`$ with $`a/M_H`$ is shown in Fig. 4. We see that $`P_{HD}>0`$ when $`0.36<a/M_H<1`$, $`P_{HD}<0`$ when $`0a/M_H<0.36`$. $`P_{HD}=0`$ at $`a/M_H0.36`$ and $`a/M_H=1`$ since $`P_{HD}\mathrm{\Omega }_H\mathrm{\Omega }_{ms}`$ and $`\mathrm{\Omega }_H=\mathrm{\Omega }_{ms}`$ when $`a/M_H0.36`$ and $`a/M_H=1`$. For fixed $`\mathrm{\Delta }\mathrm{\Psi }`$, $`M_H`$, and $`Z_H`$, $`P_{HD}`$ peaks at $`a/M_H0.981`$. $`T_{HD}`$ always has the same sign as $`P_{HD}`$ since $`P_{HD}=T_{HD}\mathrm{\Omega }_D`$ for a perfectly conducting disk.
## 3 Extracting Energy from a Black Hole through Its Disk
When $`a/M_H>0.36`$, energy and angular momentum are extracted from the black hole and transferred to the disk. So a fast rotating black hole can pump its rotational energy into a disk surrounding it through magnetic coupling between them. Once the energy gets into the disk, it can be radiated to infinity either in the form of Poynting flux associated with jets or winds, or in the form of thermal radiation associated with dissipative processes in the disk. If the disk is not accreting or its accretion rate is very low, then the diskโs power comes from the rotational energy of the black hole. This provides a way for indirectly extracting energy from a rotating black hole. Note, that the Blandford-Znajek mechanism is a way for directly extracting energy from a rotating black hole to the remote load.
It is possible that the Blandford-Znajek mechanism provides a very โcleanโ energy beam, while energy extracted from the disk is โdirtyโ, contaminated by matter from the disk corona (R. D. Blandford 1999a, private communication). However, we must keep in mind that there exists no quantitative model demonstrating how to generate clean energy with the Blandford-Znajek process.
Let us consider again our case, in which Kerr black hole loses its energy and angular momentum through the magnetic interaction with a thin Keplerian disk, with the magnetic field lines touching the disk close to the marginally stable orbit. The evolution of the mass and angular momentum of the black hole are given by
$`{\displaystyle \frac{dM_H}{dt}}=2P_{HD},{\displaystyle \frac{dJ_H}{dt}}=2T_{HD},`$ (11)
where $`P_{HD}`$ and $`T_{HD}`$ are given by Eq. (2) and Eq. (3) respectively, the factors $`2`$ come from the fact that a disk has two faces. From Eq. (11) we obtain $`dJ_H/dM_H=1/\mathrm{\Omega }_{ms}`$, where we have used $`P_{HD}T_{HD}\mathrm{\Omega }_{ms}`$. Define the spin of a Kerr black hole by $`sa/M_H=J_H/M_H^2`$, then we have
$`{\displaystyle \frac{ds}{d\mathrm{ln}M_H}}={\displaystyle \frac{1}{\omega }}2s,`$ (12)
where $`\omega M_H\mathrm{\Omega }_{ms}`$ is a function of $`s`$ only. Eq. (12) can be integrated
$`M_H(s)=M_{H,0}\mathrm{exp}{\displaystyle _{s_0}^s}{\displaystyle \frac{ds}{\omega ^12s}},`$ (13)
where $`M_{H,0}=M_H(s=s_0)`$. Consider a Kerr black hole with initial mass $`M_H`$ and the initial spin $`s=0.998`$, which is the maximum value of $`s`$ that an astrophysical black hole can have (Thorne 1974). As the black hole spins down to $`s=0.36`$, the total amount of energy extracted from the black hole by the disk can be calculated with Eq. (13): $`\mathrm{\Delta }E0.15M_H`$. This amount of energy will eventually be transported to infinity by the disk. In a realistic case the magnetic field lines touch the disk not exactly at the marginally stable orbit, the averaged angular velocity of the disk will be somewhat smaller than $`\mathrm{\Omega }_{ms}`$, then the total amount of energy that can be extracted from the black hole should be somewhat smaller than $`0.15M_H`$.
For comparison letโs calculate the amount of energy that can be extracted from a Kerr black hole by the Blandford-Znajek mechanism in the optimal case i.e. when the impedance matching condition is satisfied (cf. Macdonald & Thorne 1982). To do so, we only need to replace $`\mathrm{\Omega }_{ms}`$ with $`\mathrm{\Omega }_H/2`$ in Eq. (13), since the power and torque of the black hole are related by $`P_H=T_H\mathrm{\Omega }_F`$ where $`\mathrm{\Omega }_F`$ is the angular velocity of magnetic field lines, and in the optimal case $`\mathrm{\Omega }_F=\mathrm{\Omega }_H/2`$. Then we obtain that as the black hole spins down from $`s=0.998`$ to $`s=0`$ the total energy extracted from the black hole by the Blandford-Znajek mechanism is $`0.09M_H`$.
We find that the magnetic coupling between a black hole and a disk has a higher efficiency in extracting energy from the black hole than the Blandford-Znajek mechanism (see Fig. 4). This is because the energy extracted from the black hole by the magnetic coupling to the disk has a larger ratio of energy to angular momentum than is the case for the Blandford-Znajek mechanism.
## 4 Conclusions
When a black hole rotates faster than the disk, which is the case if $`a/M_H>0.36`$ for a Kerr black hole with a thin Keplerian disk, then the black hole exerts a torque at the inner edge of the disk. The torque transfers energy and angular momentum from the black hole to the disk. This is similar to the โpropellerโ mechanism in the case of a magnetized neutron star with a disk (Illarionov & Sunyaev 1975). The energy transfered to the disk is eventually radiated to infinity by the disk. This provides a mechanism for extracting energy from a black hole through its disk. For a Kerr black hole with the initial mass $`M_H`$ and spin $`a/M_H=0.998`$, the total amount of energy that can be extracted by a thin Keplerian disk is $`0.15M_H`$. Therefore, this is more efficient than the Blandford-Znajek mechanism which can extract only $`0.09M_H`$.
When the black hole rotates slower than the disk, i.e. $`0a/M_H<0.36`$, energy and angular momentum are transferred from the disk to the black hole, and the disk accretes onto the black hole.
I am very grateful to Bohdan Paczyลski for encouraging and stimulating discussions. This work was supported by the NASA grant NAG5-7016. |
no-problem/0002/hep-ph0002284.html | ar5iv | text | # References
UCRHEP-T271
February 2000
Neutrino Exotica in the Skew E<sub>6</sub> Left-Right Model
Ernest Ma
Physics Department, Univeristy of California, Riverside, CA 92521, USA
## Abstract
With the particle content of the 27 representation of E<sub>6</sub>, a skew left-right supersymmetric gauge model was proposed many years ago, with a variety of interesting phenomenological implications. The neutrino sector of this model offers a natural framework for obtaining small Majorana masses for $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$, with the added bonus of accommodating 2 light sterile neutrinos.
With the advent of superstring theory, it was recognized early on that the gauge symmetry E<sub>6</sub> may be relevant for discussing low-energy particle physics phenomenology. There are two ideas: (1) the particle content of the Minimal Supersymmetric Standard Model (MSSM) may be extended to include all particles contained in the fundamental 27 representation of E<sub>6</sub>; and (2) the standard-model gauge group may be extended as well. The most actively pursued such approach is to add an extra U(1).
A very different and unique alternative was proposed many years ago, which considers instead an unconventional $`SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)`$ decomposition of the 27 representation of E<sub>6</sub>, resulting in a variety of interesting phenomenological implications. Among these are the natural absence of flavor-changing neutral currents at tree level despite the presence of $`SU(2)_R`$, the possibility of breaking $`SU(2)_R`$ at or below the TeV scale with only Higgs doublets and bidoublets without conflicting with present phenomenology, and the appearance of an effective two-doublet Higgs sector different from that of the MSSM, with the interesting (and currently very relevant) property that the tree-level upper bound of the lightest neutral Higgs-boson mass is raised from $`M_Z`$ in the MSSM to $`\sqrt{2}M_W`$ in this case.
Two possible deviations from the standard model have recently been observed. One is a new determination of the weak charge of atomic cesium. The other is a new analysis of the hadronic peak cross section at the $`Z`$ resonance. Based on these data, it has now been shown that the model of Ref. is in fact the most favored of all known gauge extensions of the standard model.
This paper deals with another aspect of this remarkable model, i.e. that of its neutrinos. In the original proposal, neutrino masses were assumed to be zero for simplicity. \[Recall that in 1986, neutrino oscillations were not clearly established.\] However, such is not an essential feature of this model. It is in fact more natural that $`\nu _e`$, $`\nu _\mu `$, and $`\nu _\tau `$ acquire small Majorana masses through the usual sesaw mechanism, and that 2 light sterile neutrinos are accommodated in this model, resulting in a number of exotic phenomena which may be tested in future experiments.
The skew E<sub>6</sub> left-right model is based on the observation that there are two ways of identifying the standard-model content of the 27 representation of E<sub>6</sub>. Written in its \[$`SO(10)`$, $`SU(5)`$\] decomposition, we have
$`\mathrm{๐๐}`$ $`=`$ $`(16,5^{})+(16,10)+(16,1)`$ (1)
$`+`$ $`(10,5^{})+(10,5)+(1,1).`$
The usual assumption is that the standard-model particles are contained in the $`(16,5^{})`$ and $`(16,10)`$ multiplets. On the other hand, if we switch $`(16,5^{})`$ with $`(10,5^{})`$ and $`(16,1)`$ with $`(1,1)`$, the standard model remains the same. The difference between the 2 options only appears if the gauge group is extended. In particular, a very different and unique model emerges if the gauge group becomes $`SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)`$. In this scenario, the particle assignments are as follows.
$`(u,d)_L(3,2,1,1/6),d_L^c(3^{},1,1,1/3),`$ (2)
$`(h^c,u^c)_L(3^{},1,2,1/6),h_L(3,1,1,1/3),`$ (3)
$`\left(\begin{array}{cc}\nu & E^c\\ e& \psi ^0\end{array}\right)_L(1,2,2,0),(e^c,S)_L(1,1,2,1/2),`$ (6)
$`(\xi ^0,E)_L(1,2,1,1/2),N_L(1,1,1,0),`$ (7)
where the convention is that all fields are considered left-handed.
The notion of $`R`$ parity is an important ingredient of this construction. The usual quarks and leptons, i.e. $`u`$, $`d`$, $`u^c`$, $`d^c`$, $`\nu `$, $`e`$, and $`e^c`$, with the addition of $`N`$, have $`R=+1`$ and their scalar supersymmetric partners have $`R=1`$ as in the MSSM. The other fermions, i.e. $`h`$, $`h^c`$, $`E`$, $`E^c`$, $`\psi ^0`$, $`\xi ^0`$, and $`S`$ have $`R=1`$ and their scalar supersymmetric partners have $`R=+1`$. Furthermore, all gauge bosons have $`R=+1`$ and gauge fermions have $`R=1`$, except $`W_R^\pm `$ which has $`R=1`$ and $`\stackrel{~}{W}_R^\pm `$ which has $`R=+1`$. This unusual feature is the origin of many desirable and interesting properties of this model which sets it apart from all other gauge extensions of the standard model.
Consider the $`R=+1`$ neutral fermion sector, i.e. the usual neutrinos $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$, and the 3 $`N`$โs. They are linked by the Yukawa terms $`\nu _iN_j\stackrel{~}{\psi }_k^0`$, where one linear combination of $`\stackrel{~}{\psi }_k^0`$ may be identified with the usual Higgs scalar $`h_2^0`$ which acquires the vacuum expectation value $`v_2`$. Furthermore, since $`N_j`$ transforms trivially under $`SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)`$, it is allowed to have a nonzero Majorana mass which is presumably large. \[In U(1) extensions of the MSSM within the context of E<sub>6</sub>, the requirement that $`N`$ transform trivially under $`SU(3)_C\times SU(2)_L\times U(1)_Y\times U(1)`$ uniquely determines it to be a particular linear combination of $`U(1)_\psi `$ and $`U(1)_\chi `$ with mixing angle $`\alpha =\mathrm{tan}^1\sqrt{1/15}`$, where $`Q_\alpha =Q_\psi \mathrm{cos}\alpha Q_\chi \mathrm{sin}\alpha `$. This is referred to as $`U(1)_N`$ or $`U(1)_\nu `$. However, with two U(1) gauge factors, kinetic mixing must be considered, a complication which is absent in a left-right model.\] The resulting $`6\times 6`$ neutrino mass matrix is
$$_\nu =\left(\begin{array}{cc}0& m_D\\ m_D^T& m_N\end{array}\right),$$
(8)
where $`m_D`$ and $`m_N`$ are themselves $`3\times 3`$ matrices. Thus the usual neutrinos acquire small Majorana masses through the canonical seesaw mechanism without any problem. In other gauge extensions, this mechanism is often not available.
The $`SU(2)_R\times U(1)`$ of this model breaks down to the standard-model $`U(1)_Y`$ through the vacuum expectation value $`v_3`$ of a linear combination of the $`\stackrel{~}{S}`$โs. Let us define that to be $`\stackrel{~}{S}_3`$. Because of the allowed Yukawa terms linking $`hh^c`$, $`EE^c`$, and $`\xi ^0\psi ^0`$ to $`\stackrel{~}{S}_3`$, these exotic fermions have masses proportional to $`v_3`$. However, only 1 of the 3 $`S`$โs gets a mass at this stage, i.e. $`S_3`$, as it is linked to a particular linear combination of the two neutral gauge fermions corresponding to $`SU(2)_R`$ and $`U(1)`$ through $`\stackrel{~}{S}_3`$.
Electroweak symmetry breaking proceeds as in the MSSM, with $`\stackrel{~}{\xi }_3`$ identified as $`h_1^0`$ and $`\stackrel{~}{\psi }_3`$ as $`h_2^0`$, having vacuum expectation values $`v_1`$ and $`v_2`$ respectively. Now $`S_{1,2}`$ are no longer massless and if they are light, they could well be called sterile neutrinos.
Consider now the $`R=1`$ neutral fermion sector, i.e. $`\xi _i^0`$, $`\psi _i^0`$, $`S_i`$, and the 3 gauge fermions $`\stackrel{~}{W}_L^0`$, $`\stackrel{~}{W}_R^0`$, and $`\stackrel{~}{B}`$ corresponding to $`SU(2)_L`$, $`SU(2)_R`$, and $`U(1)`$ respectively. They are linked by the
$$f_{ijk}(\xi _i^0e_je_k^cE_i\nu _je_k^c\xi _i^0\psi _j^0S_k+E_iE_j^cS_k)$$
(9)
terms of the superpotential as well as the gauge interaction terms together with the soft supersymmetry-breaking Majorana mass terms $`m_L`$, $`m_R`$, $`m_B`$ of the gauge fermions. The resulting $`12\times 12`$ mass matrix is
$$=\left[\begin{array}{cccc}0& m_{EE^c}& f_{i3j}v_2& m_1\\ m_{EE^c}^T& 0& m_{ee^c}& m_2\\ f_{j3i}v_2& m_{ee^c}^T& 0& m_3\\ m_1^T& m_2^T& m_3^T& \stackrel{~}{m}\end{array}\right],$$
(10)
where
$`m_1={\displaystyle \frac{v_1}{\sqrt{2}}}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ g_L& 0& g_B\end{array}\right),`$ $`m_2={\displaystyle \frac{v_2}{\sqrt{2}}}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ g_L& g_R& 0\end{array}\right),`$ (17)
$`m_3={\displaystyle \frac{v_3}{\sqrt{2}}}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& g_R& g_B\end{array}\right),`$ $`\stackrel{~}{m}=\left(\begin{array}{ccc}m_L& 0& 0\\ 0& m_R& 0\\ 0& 0& m_B\end{array}\right).`$ (24)
The gauge couplings $`g_L`$, $`g_R`$, and $`g_B`$ are related to the electromagnetic coupling $`e`$ by
$$\frac{1}{e^2}=\frac{1}{g_L^2}+\frac{1}{g_R^2}+\frac{1}{g_B^2}.$$
(25)
Assuming that $`g_L=g_R`$, we then have
$$g_L^2=g_R^2=\frac{e^2}{\mathrm{sin}^2\theta _W},g_B^2=\frac{e^2}{12\mathrm{sin}^2\theta _W},$$
(26)
where $`\theta _W`$ is the usual electroweak mixing angle.
It is clear from the above that the $`10\times 10`$ mass submatrix spanning $`\xi _i^0`$, $`\psi _i^0`$, $`S_3`$, and the 3 gauge fermions will have 8 eigenvalues of order $`v_3`$ and 2 eigenvalues of order $`m_{L,R,B}`$. The $`2\times 2`$ effective mass matrix spanning $`S_{1,2}`$ is then given by a generalized seesaw formula, i.e.
$$(_S)_{ij}=\underset{k=1}{\overset{3}{}}\underset{l=1}{\overset{3}{}}\frac{2f_{k3i}v_2(m_{ee^c})_{lj}}{(m_{EE^c})_{kl}}.$$
(27)
If $`f_{k3i}`$ are small enough, say less than $`m_{ee^c}/v_2`$, then $`S_{1,2}`$ may indeed be light enough to be considered as sterile neutrinos. Note that under the standard $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ gauge group, $`S(1,1,0)`$ is indeed a singlet.
We now have 5 light neutrinos, the usual ones $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$ with $`R=+1`$, and the sterile ones $`S_{1,2}`$ with $`R=1`$. Since $`R`$ parity is still strictly conserved, they do not mix. Hence $`S_{1,2}`$ would not be a factor in considering the phenomenology of neutrino oscillations. However, as a discrete symmetry, $`R`$ parity may be broken by soft terms without affecting other essential properties of the unbroken theory. Remarkably, exactly such a soft term is allowed by the $`SU(3)_C\times SU(2)_L\times SU(2)_R\times U(1)`$ gauge symmetry of this model, namely
$$m_{ij}^{}(\nu _i\psi _j^0e_iE_j^c).$$
(28)
Hence the $`3\times 2`$ matrix linking $`\nu _e`$, $`\nu _\mu `$, $`\nu _\tau `$ with $`S_{1,2}`$ is given by
$$(_{\nu S})_{ij}=\underset{k=1}{\overset{3}{}}\underset{l=1}{\overset{3}{}}\frac{m_{il}^{}f_{k3j}v_2}{(m_{EE^c})_{kl}},$$
(29)
and we have a general theoretical framework for considering 3 active and 2 sterile neutrinos. Of course, further assumptions would be necessary to obtain a desirable pattern to explain the present observations of neutrino oscillations.
Given that $`S_{1,2}`$ are light, the fact that they are connected to $`e_i^c`$ through $`W_R^\pm `$ means that there are many modifications to the phenomenology of lepton weak interactions. One possibility is that $`v_3`$ is very large, then all such deviations are negligible. On the other hand, the analysis of Ref. shows that it is possible to have $`v_3`$ of order 1 TeV or less. This would allow experiments in the near future to observe a number of exotic phenomena as discussed below.
Whereas the neutral gauge bosons of this model are flavor-diagonal in their couplings, the scalar bosons are not. Hence there will be some flavor-changing interactions, most of which are suppressed by Yukawa couplings. However, there is one important exception, i.e. those terms proportional to the top-quark mass. They appear in the superpotential as the following gauge-invariant combination:
$$u_iu_j^c\psi _k^0d_iu_j^cE_k^cu_ih_j^ce_k+d_ih_j^c\nu _k.$$
(30)
It was pointed out a long time ago that $`\stackrel{~}{h}^c`$ exchange would then contribute significantly to the rare decay $`K^+\pi ^+\nu \overline{\nu }`$. As it happens, the first measurement of this branching fraction, i.e.
$$B(K^+\pi ^+\nu \overline{\nu })=4.2\begin{array}{c}+9.7\\ 3.5\end{array}\times 10^{10},$$
(31)
is in fact somewhat larger than the standard-model expectation, $`(0.82\pm 0.32)\times 10^{10}`$. If that is the correct interpretation, then using the results of Ref., another prediction of this model is
$$\frac{B(bs\nu \overline{\nu })}{B(K^+\pi ^+\nu \overline{\nu })}2.4\frac{|V_{tb}V_{us}|^2}{|V_{cb}V_{td}|^2},$$
(32)
which is of order $`10^5`$. Hence the branching fraction of $`bs\nu \overline{\nu }`$ should be about $`10^5`$, which is several orders of magnitude above the standard-model expectation.
Whereas the 2 light sterile neutrinos $`S_{1,2}`$ do not have standard-model interactions, they do transform nontrivially under $`SU(2)_R\times U(1)`$. Hence they must interact with the new heavy gauge bosons $`W_R^\pm `$ and
$$Z^{}=\left(\frac{12x}{1x}\right)^{\frac{1}{2}}W_R^0+\left(\frac{x}{1x}\right)^{\frac{1}{2}}B,$$
(33)
where $`x\mathrm{sin}^2\theta _W`$. Consequently, there are several essential features of this model involving $`S_{1,2}`$.
(A) The fundamental weak decay $`\mu ^{}e^{}\nu _\mu \overline{\nu }_e`$ is now supplemented with $`\mu ^{}e^{}S_i\overline{S}_j`$ from $`W_R^\pm `$ exchange. The latter is constrained by present data through the limit
$$|g_{RR}^V|=\left(1U_{\mu 3}^2\right)^{\frac{1}{2}}\left(1U_{e3}^2\right)^{\frac{1}{2}}(m_{W_L}^2/m_{W_R}^2)<0.033,$$
(34)
where $`S_e=U_{e1}S_1+U_{e2}S_2+U_{e3}S_3`$, etc. and the matrix $`U`$ has been assumed real for simplicity.
(B) The flavor-changing decay $`\mu eee`$ gets tree-level contributions from Eq. (7) through $`\stackrel{~}{\xi }_i^0`$ exchange. However, they may be very small because the $`f_{ijk}`$โs may be chosen arbitrarily to supress any such effect in this case. In contrast, there is an unavoidable one-loop contribution from the exchange of $`W_R^\pm `$ and $`S_i`$, which is the analog of the standard-model case of $`W_L^\pm `$ and $`\nu _i`$. Whereas the latter is totally negligible because it is proportional to neutrino mass-squared differences, the former is important because the mass of $`S_3`$ is comparable to $`m_{W_R}`$ but $`S_{1,2}`$ are essentially massless.
The largest exotic contribution to $`\mu eee`$ actually comes from the effective $`Z\mu \overline{e}`$ vertex. The reason is that in this model,
$$Z=(1x)^{\frac{1}{2}}W_L^0\left(\frac{x^2}{1x}\right)^{\frac{1}{2}}W_R^0\left(\frac{x2x^2}{1x}\right)^{\frac{1}{2}}B,$$
(35)
hence a new vertex $`ZW_R^+W_R^{}`$ appears, in analogy to $`ZW_L^+W_L^{}`$ of the standard model. The calculation of the one-loop $`Z\mu \overline{e}`$ vertex is similar to that of $`Zd\overline{s}`$ in the standard model. The result is $`g_{Z\mu \overline{e}}Z^\lambda \overline{e}[\gamma _\lambda (1+\gamma _5)/2]\mu `$, with
$$g_{Z\mu \overline{e}}=\frac{e^3U_{\mu 3}U_{e3}}{16\pi ^2x^{\frac{1}{2}}(1x)^{\frac{1}{2}}}\left[\frac{r_3}{1r_3}+\frac{r_3^2\mathrm{ln}r_3}{(1r_3)^2}\right],$$
(36)
where
$$r_3m_{S_3}^2/m_{W_R}^2=\left(\frac{1x}{12x}\right)\left(1+\frac{v_2^2}{v_3^2}\right)^1.$$
(37)
Hence this vertex is not suppressed at all, and its contribution to the $`\mu eee`$ decay amplitude is proportional to $`1/m_Z^2`$, so it is much larger than that of the box diagram from $`W_R^\pm `$ and $`S_i`$ exchange, which is proportional to $`1/m_{W_R}^2`$.
Connecting the $`Z\mu \overline{e}`$ vertex with the standard-model $`Ze\overline{e}`$ vertex, the decay branching fraction of $`\mu eee`$ is then given by
$$B(\mu eee)=2x(1x)(14x+12x^2)g_{Z\mu \overline{e}}^2/e^2.$$
(38)
Since the present experimental upper limit of this is $`1.0\times 10^{12}`$, the following constraint is obtained:
$$U_{\mu 3}U_{e3}<2.3\times 10^3,$$
(39)
where the $`v_2^2/v_3^2`$ term of Eq. (23) has been neglected.
(C) Because of Eq. (22), the rare decay $`Z\mu ^{}e^++e^{}\mu ^+`$ is also predicted. However, because of Eq. (25), its branching fraction is less than $`8.6\times 10^{13}`$ which is of course totally negligible. The analogous decays $`Z\tau ^{}e^++e^{}\tau ^+`$ and $`Z\tau ^{}\mu ^++\mu ^{}\tau ^+`$ are related to $`\tau eee`$, $`\tau e\mu \mu `$, $`\tau \mu ee`$, and $`\tau \mu \mu \mu `$, with upper limits of order $`10^5`$ and $`10^6`$ on their branching fractions respectively. All are predicted in this model to have branching fractions of order $`10^7`$ multiplied by $`U_{\tau 3}^2U_{e3}^2`$ or $`U_{\tau 3}^2U_{\mu 3}^2`$.
(D) The archetypal rare decay $`\mu e\gamma `$ is also predicted in this model, again through $`W_R^\pm `$ and $`S_i`$ exchange. The one-loop diagrams are analogous to the usual ones in the standard model, resulting in a decay branching fraction
$$B(\mu e\gamma )=\frac{3\alpha }{32\pi }\left(\frac{m_{W_L}}{m_{W_R}}\right)^4U_{\mu 3}^2U_{e3}^2F^2(r_3),$$
(40)
where the function $`F`$ is given by
$$F(r_3)=\frac{r_3(1+5r_3+2r_3^2)}{(1r_3)^3}+\frac{6r_3^3\mathrm{ln}r_3}{(1r_3)^4}.$$
(41)
Using the most recent experimental upper bound of $`1.2\times 10^{11}`$ on $`B`$, we obtain
$$U_{\mu 3}U_{e3}(m_{W_L}^2/m_{W_R}^2)<3.8\times 10^4.$$
(42)
(E) If the lightest exotic quark, call it $`h_1`$, is lighter than $`W_R^\pm `$, then its decay is predominantly given by
$$h_1u_ie_jS_k.$$
(43)
Since $`S_{1,2}`$ are light and undetected, this mimics the ordinary semileptonic decay of a heavy quark, but without any nonleptonic component.
In conclusion, the skew E<sub>6</sub> left-right model proposed many years ago, favored by recent atomic physics and $`Z`$ resonance data, has been shown to be a natural framework for 3 active and 2 sterile light neutrinos. The constraints from low-energy data, as given by Eqs. (20), (25), and (28), require at worst ($`U_{\mu 3}=U_{e3}`$) only that $`m_{W_R}>442`$ GeV, or equivalently $`m_{S_3}(=m_Z^{})>528`$ GeV. Hence the new physics of this model is accessible to experimental verification in the not-so-distant future.
This work was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837. |
no-problem/0002/hep-ex0002060.html | ar5iv | text | # Study of RPC gas mixtures for the ARGO-YBJ experiment
## 1 Introduction
The ARGO-YBJ experiment is under way over the next few years at the Yangbajing High Altitude Cosmic Ray Laboratory (4300 m a.s.l., 606 $`g/cm^2`$), 90 km North to Lhasa (Tibet, P.R. China). The aim of the ARGO-YBJ experiment is the study of fundamental issues in Cosmic Ray and Astroparticle Physics including $`\gamma `$-ray astronomy, GRBs physics at 100 GeV threshold energy and the measurement of the $`\overline{p}/p`$ at TeV energies. The apparatus consists of a full coverage detector of dimension $`71\times 74`$ $`m^2`$ realized with a single layer of Resistive Plate Counters (RPCs). A guard ring partially (about 50 $`\%`$) instrumented with RPCs, surrounds the central detector, up to $`100\times 100`$ $`m^2`$; it improves the apparatus performance by enlarging the fiducial area for the detection of showers with the core outside the full coverage carpet.
A lead converter 0.5 cm thick will cover uniformly the RPC plane in order to increase the number of charged particles by conversion of shower photons and to reduce the time spread of the shower particles. The measurement technique, namely the timing on the shower front with a few tens of particles, requires RPC operation with 1 ns time resolution, low strip multiplicity for good energy estimation at low energies, high efficiency and low single counting rate to trigger efficiently at low multiplicity.
Keeping in mind all these needs and the low operating pressure (about 600 mbar) at Yangbajing we started to investigate different gas mixtures, at sea level, in order to optimize the detector performance. In fact previous studies have shown that the performance of the detector may be heavily affected by the reduced pressure . Three gas components were used: Argon, iso-Butane C4H10 and TetraFluoroEthane C2H2F4 that will be indicated in the following as Ar, i-But and TFE respectively.
The set-up used for this study consists of a small telescope of 4 RPCs $`50\times 50`$ $`cm^2`$ area with 16 pick-up strips 3 cm wide connected to the front-end electronics board . The front-end circuit contains 16 discriminators, with about 70 mV voltage threshold, and provides a FAST-OR signal with the same input-to-output delay (10 ns) for all the channels. The 4 RPCs were overlapped one on the other, 3 out of them were used to define a cosmic ray beam by means of a triple coincidence of their FAST-OR signals, the fourth one was used as test RPC. The three RPCs providing the trigger were operated with a gas mixture of $`60\%`$ Ar, $`37\%`$ i-But and $`3\%`$ TFE. At any trigger occurrence the time provided by the test RPC was read by means of a LECROY TDC of 0.25 ns time bin, operated in common START mode; the number of fired strips was read by means of a CAEN module C187. The single counting rate was read by a CAEN scaler C243.
## 2 RPC performance
The RPCs were operated in streamer mode, as foreseen for the experiment, at the ARGO laboratory of the Physics Department of the Naples University.
Many gas mixtures have been tested, including mixtures with a high percentage of Ar ($`40\%`$ \- $`60\%`$) which represent a reference point for the performance of the detector. The results are shown in Fig. 1 where in the first column a) are reported the three mixture with Ar kept at $`60\%`$; the numbers associated to each line like 60/30/10 refer respectively to Ar/i-But/TFE.
It can be seen that changing the relative percentage of the two quenching components, i-But and TFE, does not result in any strong or evident difference of the detector performance. In fact the operating voltage does not chance substantially, neither the efficiency; the time resolution stabilizes around 1 ns and the strip multiplicity ranges in 1.1 - 1.3 depending on the applied voltage. This is confirmed by the set of measurements done with Ar kept at $`40\%`$ (column b). Only the mixture 40/50/10 shows some difference, but it is likely that there was some uncontrolled change or problem during the data taking.
In order to check gas mixtures with i-But content around and below the flammability threshold ($`10\%`$) we have performed measurements with mixtures where the i-But fraction was kept at $`10\%`$ in one set and $`2\%`$ in another set. The results are reported in column a) and b) of Fig. 2.
Both samples of measurements show that the Ar percentage plays an important role in defining the operating voltage (about 800 V above the knee-voltage $`V_{knee}`$) and influences the strip multiplicity. The mean value of the strip multiplicity remains anyway below 1.3; also the single rate changes according to the Ar percentage and this can be understood as we work at fixed discrimination threshold. Efficiency and time resolution are not affected. This means that i-But, traditionally used to โquenchโ the discharge by asbsorption of ultraviolet photons, can be substituted almost completely by TFE which is electronegative and captures electrons of the plasma.
A solution to the problem of low pressure is to increase the density of the gas mixture. An increase of the TFE concentration at the expense of the Ar concentration should therefore increase the primary ionization, thus compensating for the $`40\%`$ reduction caused by the lower gas target pressure (600 $`mbar`$), and should also reduce the afterpulse probability. We have tested two mixtures with a high content of TFE, namely Ar/i-But/TFE : 15/10/75 and 15/5/80; the second one is a non-flammable mixture. The experimental results (Fig. 3) do not exhibit evident differences between them.
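The compensation argument can be made quantitative with a rough estimate. In the sketch below the specific primary ionization values (clusters/cm at about 1013 mbar) are representative textbook numbers assumed for illustration (they are not taken from this study), and the density is scaled linearly with pressure.

```python
# Back-of-envelope check of the compensation argument.  The primary
# ionization figures (clusters/cm at ~1013 mbar) are representative
# values assumed for illustration, not numbers from this study.

N_PRIM = {"Ar": 25.0, "iBut": 90.0, "TFE": 55.0}

def clusters_per_cm(mix, pressure_mbar, p0_mbar=1013.0):
    """Mixture-weighted primary ionization, scaled linearly with pressure."""
    return (pressure_mbar / p0_mbar) * sum(
        frac * N_PRIM[gas] for gas, frac in mix.items())

for name, mix in [("60/37/3",  {"Ar": 0.60, "iBut": 0.37, "TFE": 0.03}),
                  ("15/10/75", {"Ar": 0.15, "iBut": 0.10, "TFE": 0.75})]:
    print(name, round(clusters_per_cm(mix, 600.0), 1), "clusters/cm at 600 mbar")
```

With these assumed inputs the TFE-rich mixture recovers part of the primary ionization lost at 600 mbar, which is the qualitative content of the argument above.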
The reduction of the Argon concentration in favour of TFE results in a clear increase of the operating voltage, as expected from the strong quenching action of TFE. The efficiency, $`98\%`$, is comparable to the values obtained with the mixtures having a low percentage of TFE; it might even be higher, but given the experimental fluctuations this cannot be stated with certainty. Moreover, the time resolution does not show any worsening, the single rate stays below 400 $`Hz/m^2`$ and the strip multiplicity stays below 1.1. The mixture 15/10/75 has already been used successfully at Yangbajing in the ARGO test .
## 3 Conclusions
We have studied the performance of RPCs with gas mixtures made of Ar, i-But and TFE. We checked mixtures with a high percentage of Ar, mixtures with a low content of i-But, and finally mixtures with low and high percentages of TFE. For all of them, apart from those with a high percentage of TFE, we can summarize as follows: 1) the efficiency is 95-97%; 2) the voltage where the efficiency plateau starts and the working voltage (typically $`V_{knee}`$ + 800 V) depend strongly on the Ar fraction, showing an increase of about 500 V for every $`10\%`$ Ar reduction; 3) the single rate shows a plateau in the frequency range 400 $`Hz/m^2`$ - 600 $`Hz/m^2`$; 4) the time resolution at the working voltage is typically 1.0-1.3 $`ns`$; 5) the mean value of the strip multiplicity at the working voltage is 1.15-1.25, depending on the Ar percentage.
The mixtures with a higher TFE content give slightly better performance: $`98\%`$ efficiency, lower strip multiplicity and lower single rate. Apart from this, we could not find any parameter able to discriminate among the different mixtures. Preliminary measurements show that in mixtures with a higher TFE fraction a smaller charge per track is developed. Tests are in progress to investigate the analog read-out of the detector operated in streamer mode.
# Time Resolved GRB Spectroscopy

Paper presented at the 5th Huntsville Symposium, Huntsville (Alabama), 19 - 22 October 1999.
## Introduction
We study a sample of 43 GRBs selected for the high quality of their time-resolved spectra obtained with the BATSE Spectroscopy Detectors (sensitive in the energy range 25-1800 keV). The time over which each spectrum was accumulated was varied so that the signal-to-noise ratio was greater than 15 (in the hard X-ray energy band). These data provide excellent temporal resolution: in many cases we obtain more than 10 spectra per burst, with accumulation times as short as 256 ms.
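A minimal sketch of this adaptive accumulation (our reconstruction, not the actual BATSE pipeline; the signal-to-noise estimate $`S/\sqrt{S+B}`$ is a simplifying assumption):

```python
# Merge consecutive time bins until the accumulated spectrum reaches the
# target signal-to-noise ratio.  SNR = S / sqrt(S + B) is an assumed,
# simplified Poisson estimate.

import math

def accumulate(counts, background, snr_target=15.0):
    groups, s, b, start = [], 0.0, 0.0, 0
    for i, (c, bg) in enumerate(zip(counts, background)):
        s, b = s + c, b + bg
        if s + b > 0 and s / math.sqrt(s + b) >= snr_target:
            groups.append((start, i))          # one time-resolved spectrum
            s, b, start = 0.0, 0.0, i + 1
    return groups

# toy light curve in 256 ms bins with a flat background
print(accumulate([150, 200, 2500, 900, 80, 90, 120], [100.0] * 7))
```

On this toy input the first three bins are merged into one spectrum while the bright fourth bin stands alone, illustrating how accumulation times shrink during intense emission.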
## Spectral Models
We fitted each GRB time-resolved spectrum with two models: (1) the Band model , and (2) the Optically Thin Synchrotron Shock Model (OTSSM) \[A:mtav:3; A:mtav:4\]. The (purely phenomenological) 4-parameter Band model \[A:mtav:1\] consists of two power-law components (of spectral indices $`\alpha `$ and $`\beta `$) joined smoothly by an exponential roll-over near a break energy $`E_0`$.
$`N(E)`$ $`=`$ $`A\left({\displaystyle \frac{E}{100\mathrm{keV}}}\right)^\alpha \mathrm{exp}\left(-{\displaystyle \frac{E}{E_0}}\right)\quad \text{for }E\le \left(\alpha -\beta \right)E_0`$ (1)

$`N(E)`$ $`=`$ $`\left[A\left({\displaystyle \frac{\left(\alpha -\beta \right)E_0}{100\mathrm{keV}}}\right)^{\alpha -\beta }\mathrm{exp}\left(\beta -\alpha \right)\right]\left({\displaystyle \frac{E}{100\mathrm{keV}}}\right)^\beta \quad \text{for }E\ge \left(\alpha -\beta \right)E_0`$ (2)
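As a quick consistency check (ours, not part of the original text), the two branches (1) and (2) agree at the break energy $`E_b=(\alpha -\beta )E_0`$: substituting there,

$$A\left(\frac{(\alpha -\beta )E_0}{100\mathrm{keV}}\right)^{\alpha -\beta }e^{\beta -\alpha }\left(\frac{E_b}{100\mathrm{keV}}\right)^\beta =A\left(\frac{E_b}{100\mathrm{keV}}\right)^\alpha e^{-E_b/E_0},$$

so $`N(E)`$ is continuous across the break.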
We used the (three-parameter) OTSSM of Refs. \[A:mtav:3; A:mtav:4\]. We performed independent spectral fits with both the Band model and the OTSSM for each of the time-resolved spectra of all GRBs in our sample. For each GRB we thus obtain the 4 (Band) or 3 (OTSSM) best-fit parameters as functions of time, representing the complete spectral evolution.
## Results
We find GRB spectral evolutions of two types: (1) a "tracking behaviour", with spectral parameters in approximate one-to-one correspondence with the changing energy flux, and (2) a "hard-to-soft evolution", with spectral parameters evolving independently of the energy flux (see, e.g., ref. \[A:mtav:2\]).
Fig.1 shows the distribution of the low-energy spectral index $`\alpha `$ for all collected time-resolved spectra. A few bursts show values $`\alpha >-2/3`$, typically during the initial rising part of their most intense pulses. The high-energy spectral index $`\beta `$ is less constrained, and in some cases varies substantially over consecutive spectra within the same burst. The $`\beta `$ distribution (Fig.2, left panel) is peaked near $`-2`$ for the Band model representation, and is broader for the OTSSM fits. Break energies $`E_0`$ are typically well below 500 keV. Interestingly, we find that the OTSSM provides a very good representation of the time-resolved spectral data. Fig.2 (right panel) shows the cumulative distribution of the reduced $`\chi ^2`$ for the Band and OTSSM models.
## Discussion
We studied 43 GRBs from the BATSE spectral archive selected by their large signal-to-noise ratios. We collected information for a total of 1046 spectra.
Our results indicate that the OTSSM is quite successful in describing the majority of GRB spectra. Fig.3 shows the spectral evolution of the remarkable GRB 990123, demonstrating the validity of the OTSSM for very intense bursts. However, violations of the simple OTSSM are apparent in about $`15\%`$ of our time-resolved spectra (at the $`3\sigma `$ level). These violations (typically with a low-energy index $`\alpha >-2/3`$) always occur at the beginning of major GRB pulses (as in Fig.4).
The OTSSM was derived \[A:mtav:3\] for idealized plasma and hydrodynamic conditions that are most likely valid far from the central source. Several plasma and dynamical conditions (probably involving emission sites close to a central object) may produce the apparent suppression of soft photons at the beginning of some GRB pulses.
# On the homotopy theory of arrangements, II
## 1. Combinatorial and topological structure
One significant change in the study of the homotopy theory of arrangements since the publication of has been the introduction of matroid-theoretic terminology and techniques into the subject. In this section we review this approach and describe progress toward the topological classification of hyperplane complements. Refer to for further details on matroids.
### 1.1. The matroid of an arrangement
Let $`V=\mathbb{C}^{\mathrm{}}`$ and let $`๐=\{H_1,\mathrm{},H_n\}`$ be a central arrangement of hyperplanes in $`V`$. For each hyperplane $`H_i`$ choose a linear form $`\alpha _i\in V^{}`$ with $`H_i=\mathrm{ker}(\alpha _i)`$. The product $`Q(๐)=\prod _{i=1}^n\alpha _i`$ is the defining polynomial of the arrangement.
The underlying matroid $`G(๐)`$ of $`๐`$ is by definition the collection of subsets of $`[n]:=\{1,\mathrm{},n\}`$ given by
$$G(๐)=\{S\subseteq [n]\mid \{\alpha _i\mid i\in S\}\text{ is linearly dependent}\}.$$
Elements of $`G=G(๐)`$ are called dependent sets. Minimal dependent sets are called circuits. Independent sets and bases are defined in the obvious way. The rank $`\mathrm{rk}(S)`$ of a set $`S\subseteq [n]`$ is the size of a maximal independent subset of $`S`$. The rank of $`G`$ (or $`๐`$) is $`\mathrm{rk}([n])`$. The closure $`\overline{S}`$ of a set $`S`$ is defined by
$$\overline{S}=\bigcup \{T\subseteq [n]\mid T\supseteq S\text{ and }\mathrm{rk}(T)=\mathrm{rk}(S)\}.$$
A set $`S`$ is closed if $`\overline{S}=S`$. Closed sets are also called flats. The collection of closed sets, ordered by inclusion, forms a geometric lattice $`L(G)`$ which is isomorphic to the intersection lattice $`L(๐)`$ defined and studied in . The isomorphism $`L(G)\to L(๐)`$ is given by $`S\mapsto \bigcap _{i\in S}H_i`$.
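As a toy illustration (ours, not from the text), take $`๐`$ in $`\mathbb{C}^2`$ with defining forms $`\alpha _1=x`$, $`\alpha _2=y`$, $`\alpha _3=x+y`$. The unique circuit is $`\{1,2,3\}`$, every pair is a basis, and since $`\mathrm{rk}(\{1,2\})=\mathrm{rk}(\{1,2,3\})=2`$,

$$\overline{\{1,2\}}=\{1,2,3\},\qquad L(G)=\{\varnothing ,\{1\},\{2\},\{3\},\{1,2,3\}\},$$

matching the intersection lattice of three distinct lines through the origin.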
Thus the matroid $`G(๐)`$ contains the same information as the intersection lattice $`L(๐)`$. One of the simple advantages of the matroid-theoretic approach is the fact that the matroid $`G(๐)`$ is determined uniquely by any of a number of different pieces of data besides the set of flats. For instance, the set of circuits, the rank function, or the set of bases each determines the matroid, and thus the intersection lattice. Besides giving a nice conceptual framework for the combinatorial structure of arrangements, techniques and deep results from the matroid theory literature have been applied with some benefit in the study of the topology of arrangements.
The line generated by $`\alpha _i`$ in $`V^{}`$ depends only on $`H_i`$, and thus $`๐`$ determines a unique point configuration $`๐^{}`$ in the projective space $`\mathbb{P}(V^{})\cong \mathbb{C}P^{\mathrm{}-1}`$. The dual point configuration $`๐^{}`$ can be used to depict the combinatorial structure of an arrangement in case $`\mathrm{rk}(๐)\le 4`$, if the defining forms $`\alpha _i`$ have real coefficients. (In this case $`๐`$ is called a complexified arrangement.) One merely plots the points $`\alpha _i`$ in a suitably chosen affine chart $`\mathbb{R}^{\mathrm{}-1}`$ in the real projective space $`\mathbb{R}P^{\mathrm{}-1}`$, for instance by scaling the $`\alpha _i`$ so that the coefficient of $`x_1`$ in each is equal to 1, and then ignoring this coefficient. Dependent flats of rank two (or three) are seen in these affine configurations as lines (or planes) containing more than two (or three) points. These lines and planes are usually explicitly indicated in the picture. This is especially useful for arrangements of rank four. Since the hyperplanes are indicated by points in $`\mathbb{R}^3`$, they don't obscure the internal structure as a collection of affine planes in $`\mathbb{R}^3`$ would (see Figure 5). These depictions of projective point configurations are generalized to give affine diagrams of arbitrary matroids. Dependent flats are again explicitly indicated with "lines" or "planes," which in the general case may not be straight or flat in the euclidean sense. It is common to refer to flats of rank one, two, or three in an arbitrary matroid as points, lines, or planes respectively. These diagrams are useful for the study of arrangements which are not complexified real arrangements (see Figures 1 and 2).
### 1.2. Basic topological results
The seminal result in the homotopy theory of arrangements is the calculation of the cohomology algebra of the complement $`M=M(๐):=\mathbb{C}^{\mathrm{}}\setminus \bigcup _{i=1}^nH_i`$ by Orlik and Solomon . Motivated by work of Arnol'd , and using tools established by Brieskorn , they gave a presentation of $`H^{}(M)`$ in terms of generators and relations. The presentation $`A(๐)`$ depends only on the underlying matroid $`G=G(๐)`$, and is now called the Orlik-Solomon (or $`OS`$) algebra of $`G`$. Henceforth we will refer to the $`OS`$ algebra $`A(๐)`$ rather than the cohomology ring $`H^{}(M)`$. The algebra $`A(๐)`$ is defined as the quotient of the exterior algebra on generators $`e_1,\mathrm{},e_n`$ by the ideal $`I`$ generated by "boundaries" of dependent sets of $`G`$. See for a precise definition.
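For example (a standard computation, included here only for concreteness), a circuit $`\{1,2,3\}`$ of $`G`$ contributes to $`I`$ the "boundary"

$$\partial (e_1e_2e_3)=e_2e_3-e_1e_3+e_1e_2,$$

so that in $`A(๐)`$ the monomials $`e_1e_2`$, $`e_1e_3`$, $`e_2e_3`$ span only a two-dimensional subspace in degree two.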
This result of gave rise to a collection of "homotopy type" conjectures, which assert that various homotopy invariants of the complement depend only on $`G(๐)`$. A great deal of research in the homotopy theory of arrangements has been focused on conjectures of this type. Note that such conjectures may have "weak" or "strong" solutions: one may show that the invariant depends only on the matroid, or one may give an algorithm to compute the invariant from matroidal data.
The major positive result in this direction is the lattice-isotopy theorem, proved by the second author in . It asserts that the homotopy type, indeed the diffeomorphism type of the complement remains constant through a "lattice-isotopy," that is, a one-parameter family of arrangements in which the intersection lattice, or equivalently, the underlying matroid remains constant.
This result is often recast in terms of matroid realization spaces, which are related to the well-known "matroid stratification" of the Grassmannian. We describe this connection. The defining forms $`\alpha _i`$ of $`๐`$ can be identified with row vectors, and thus the arrangement $`๐`$ can be identified with an $`n\times \mathrm{}`$ matrix $`R`$ over $`\mathbb{C}`$. This matrix is called a realization of the underlying matroid. Two realizations $`R`$ and $`R^{}`$ are equivalent if there is a nonsingular diagonal $`n\times n`$ matrix $`S`$ and a nonsingular $`\mathrm{}\times \mathrm{}`$ matrix $`T`$ such that $`R^{}=SRT`$. The corresponding arrangements will then be linearly isomorphic. The set of equivalence classes of realizations of a fixed matroid $`G`$ is called the (projective) realization space $`\mathcal{R}(G)`$ of $`G`$. Now assume the matrix $`R`$ has rank $`\mathrm{}`$, i.e., that $`๐`$ is an essential arrangement. Then the column space of $`R`$ is an $`\mathrm{}`$-plane $`P_R`$ (sometimes denoted $`P_๐`$) in $`\mathbb{C}^n`$. Note that an isomorphic copy of the arrangement $`๐`$ inside $`P_R`$ is formed by the intersection of $`P_R`$ with the coordinate hyperplanes in $`\mathbb{C}^n`$. Postmultiplying $`R`$ by a nonsingular matrix does not affect $`P_R`$. Thus we see that the realization space $`\mathcal{R}(G)`$ can be identified with a subset $`\mathrm{\Gamma }(G)`$ of the space of orbits of the diagonal $`(\mathbb{C}^{})^n`$ action on the Grassmannian $`๐ข_{\mathrm{}}(\mathbb{C}^n)`$ of $`\mathrm{}`$-planes in $`\mathbb{C}^n`$. The subsets $`\widehat{\mathrm{\Gamma }}(G)=\{P_R\mid R\text{ is a realization of }G\}\subseteq ๐ข_{\mathrm{}}(\mathbb{C}^n)`$ are called matroid strata, although they do not comprise a stratification in the usual sense, since the closure of a stratum may not be a union of strata . These strata play a central role in the theory of generalized hypergeometric functions, especially when the original arrangement $`๐`$ is generic. The topology of the strata themselves can be as complicated as that of arbitrary affine varieties over $`\mathbb{Z}`$, even for matroids of rank three, by a celebrated theorem of Mnëv . These strata are connected by "deletion maps," whose fibers are themselves complements of arrangements .
Realizations in $`\mathrm{\Gamma }(G)`$ correspond to arrangements which have the same underlying matroid $`G`$, as determined by the arbitrary ordering of the hyperplanes. Thus, for the study of homotopy type as a function of intrinsic combinatorial structure (i.e., without regard to labelling), the true "moduli space" for arrangements should be the quotient of $`๐ข_{\mathrm{}}(\mathbb{C}^n)`$ by the action of $`S_n\times (\mathbb{C}^{})^n`$. Then linear isomorphism classes of arrangements with isomorphic underlying matroids (or isomorphic intersection lattices) correspond to points of the orbit space $`\mathrm{\Gamma }(G)/\mathrm{Aut}(G)`$.
Randell's lattice-isotopy theorem can be reformulated as follows: two arrangements which are connected by a path in $`\widehat{\mathrm{\Gamma }}(G)`$ (or $`\mathrm{\Gamma }(G)`$) have diffeomorphic complements. Thus one is led to the difficult problem of understanding the set of path components of $`\mathrm{\Gamma }(G)/\mathrm{Aut}(G)`$.
More detailed combinatorial data will suffice to uniquely determine the homotopy type of the complement. For instance, in the case of complexified real arrangements, the defining forms $`\alpha _i`$, $`1\le i\le n`$, determine an underlying oriented matroid. This is most easily described in terms of bases: the matroid $`G(๐)`$ is determined by the collection $`\mathcal{B}`$ of maximal independent subsets $`B\subseteq [n]`$. These can naturally be identified with ordered subsets of $`[n]`$. The oriented matroid $`\widehat{G}(๐)`$ is then a partition $`\mathcal{B}=\mathcal{B}_+\sqcup \mathcal{B}_{-}`$ of the set of ordered bases of $`G(๐)`$ into positive and negative bases, corresponding to the sign of the (nonzero) determinant of the corresponding ordered sets of linear forms. The work of Salvetti , as refined by Gelfand and Rybnikov , shows that the underlying oriented matroid of a complexified real arrangement uniquely determines the homotopy type of the complement. In fact one can construct a partially ordered set $`\mathcal{K}(\widehat{G})`$ directly from the oriented matroid $`\widehat{G}`$ whose "nerve", or collection of linearly ordered subsets, forms a simplicial complex homotopy equivalent to the complement. In subsequent work, Björner and Ziegler (see also Orlik ) generalized the construction to arbitrary arrangements (or arrangements of subspaces), in terms of combinatorial structures called 2-matroids or complex oriented matroids . They showed that this detailed combinatorial data determines the complement up to piecewise-linear homeomorphism.
The relation between Randell's lattice-isotopy theorem and the combinatorial complexes of has not been fully explored. In particular, it would be interesting to cast the notion of lattice-isotopy in combinatorial terms, i.e., as a sequence of elementary "isotopy moves" on the posets $`\mathcal{K}(\widehat{G})`$ which leave the homotopy type of the nerve unchanged. A first step in this direction was accomplished in . We pose this as our first open problem.
###### Problem 1.1.
Prove a combinatorial lattice-isotopy theorem: that "isotopic" (complex) oriented matroids (with the same underlying matroid) determine homotopy equivalent cell complexes.
### 1.3. Homotopy classification
The fundamental question whether the homotopy type of $`M(๐)`$ is uniquely determined by $`G(๐)`$ was answered in the negative by Rybnikov in . The basic building block of his construction is the MacLane matroid, whose affine diagram is pictured in Figure 1.
For this matroid $`G`$, the realization space $`\mathcal{R}(G)`$ consists of two conjugate complex realizations $`R`$ and $`\overline{R}`$, corresponding to arrangements $`๐`$ and $`\overline{๐}`$. One can "amalgamate" these realizations along one of the three-point lines (rank-two flats) to form arrangements $`๐๐`$ and $`๐\overline{๐}`$ of rank four with thirteen hyperplanes. These arrangements have the same underlying matroid, of rank four on 13 points, pictured in Figure 2.
Rybnikov establishes some special properties of this matroid, for instance, that any automorphism of the $`OS`$ algebra arises from a matroid automorphism, which must preserve or interchange the factors of the amalgamation. Using these he is able to show that the arrangements $`๐๐`$ and $`๐\overline{๐}`$ have nonisomorphic fundamental groups, since the first has an automorphism which switches the factors preserving orientations of the natural generators, while the only automorphism of the second which switches factors must reverse orientations. Refer to Section 4.1 for a more detailed description of the fundamental group. Rybnikov actually uses the rank-three truncation of this matroid, and 3-dimensional generic sections of these arrangements, but this operation does not affect the fundamental group.
The last part of Rybnikovโs argument is quite delicate and very specialized. None of the known invariants of fundamental groups, for instance those described elsewhere in this paper, will distinguish these two groups.
###### Problem 1.2.
Find a general invariant of arrangement groups that distinguishes the two Rybnikov arrangements, and generalize his construction.
To date this is the only known example of this phenomenon. In particular it is not known if this behavior is exhibited by complexified arrangements.
###### Problem 1.3.
Prove that the underlying matroid of a complexified arrangement determines the homotopy type, or find a counter-example.
Partial results along these lines were obtained by Jiang and Yau and by Cordovil . In a condition on the underlying matroid $`G`$ is given which implies that the realization space of $`G`$ is path-connected, so that any two arrangements realizing $`G`$ have diffeomorphic complements by the lattice-isotopy theorem. In it is shown that complexified arrangements whose underlying matroids are isomorphic via a correspondence which preserves a (geometrically defined) "shelling order" will have identical braid-monodromy groups.
The extent to which arrangements with non-isomorphic matroids can have homotopy equivalent complements has also been studied (see, e.g., ) with some degree of success. One approach to this problem is purely combinatorial, namely to classify $`OS`$ algebras up to graded algebra isomorphism. This approach is adopted in . A powerful invariant is developed in , sufficient to distinguish all known non-trivial examples which are not already known to be isomorphic.
At this point all known examples of matroids with isomorphic $`OS`$ algebras can be explained by two simple operations . The first of these is a construction involving a well-known equivalence of affine arrangements arising from the "cone-decone" construction \[65, Prop. 5.1\], along with the trivial fact that the complement of the direct sum of affine arrangements, denoted $``$ in , is diffeomorphic to the cartesian product of the complements of the factors. In fact this construction can be applied to arbitrary pairs of matroids to yield central arrangements with non-isomorphic matroids and diffeomorphic complements . This construction always yields arrangements with non-connected (i.e., nontrivial direct sum) matroids. Jiang and Yau show that this phenomenon cannot occur in rank three, that is, the diffeomorphism type of the complement of a rank-three arrangement uniquely determines the underlying matroid. Thus the rank-three examples of , which have non-isomorphic underlying matroids, have complements which are homotopy equivalent but not diffeomorphic.
The second operation which yields isomorphic $`OS`$ algebras is truncation. It is shown in that the truncations of two matroids with isomorphic $`OS`$ algebras will have the same property. (It is not known whether truncation preserves homotopy equivalence.) These two "moves" suffice to explain the examples produced in , indeed all known examples of this phenomenon. Thus it seems an orderly classification of $`OS`$ algebras may be within reach.
###### Problem 1.4.
Classify $`OS`$ algebras up to graded isomorphism.
In the alternative, we suggest the following.
###### Problem 1.5.
Find a pair of arrangements with homotopy equivalent complements and whose underlying matroids are non-isomorphic, connected, and inerectible (i.e., not truncations).
Cohen and Suciu in approach this same problem of homotopy classification using invariants of the fundamental group. Their approach has the advantage that it may also be used to distinguish the complements of arrangements with the same underlying matroid. Some of this work is described elsewhere in this paper. Here we merely remark on the surprising connection described in between the characteristic varieties of , arising from the Alexander invariant of the fundamental group, and the resonant varieties of , which arise from the $`OS`$ algebra.
## 2. Algebraic properties of the group of an arrangement
The topology of hyperplane complements seems to be to a large extent controlled by the fundamental group. These "arrangement groups" have relatively simple global structure, being pieced together out of free groups in a fairly straightforward way (see Sections 4.1 and 3.3), but have surprisingly delicate fine structure. At the time of the writing of there was a great deal of activity around the study of the lower central series of these groups, and connections with rational homotopy theory and Chen's theory of iterated integrals. In this section we report on progress in these areas in the intervening years.
### 2.1. The LCS formula, quadratic algebras, rational $`K(\pi ,1)`$ and parallel arrangements
Discoveries of Kohno and the authors showed that Witt's formula for the lower central series of finitely generated free Lie algebras (or, equivalently, free groups) generalized to a wide class of hyperplane complements. The so-called LCS formula reads
$$\underset{n\ge 1}{\prod }(1-t^n)^{\varphi _n}=\underset{i\ge 0}{\sum }b_i(-t)^i,$$
relating the ranks $`\varphi _n`$ of the factors in the lower central series of the fundamental group $`\pi _1(M)`$ to the Betti numbers $`b_i=\mathrm{dim}(A^i(๐))`$ of $`M`$. In it is shown that this formula holds for all fiber-type arrangements. These are arrangements whose underlying matroids are supersolvable . This result was ostensibly extended to rational $`K(\pi ,1)`$ arrangements in . (See also Section 2.2.) We refer the reader to for a precise definition of rational $`K(\pi ,1)`$ arrangement. Briefly, if $`๐ฎ`$ is the 1-minimal model of $`M`$ (or, equivalently, of $`A(๐)`$), then $`๐`$ is rational $`K(\pi ,1)`$ if $`H^{}(๐ฎ)\cong A(๐)`$. It is shown in that fiber-type arrangements are rational $`K(\pi ,1)`$.
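As a sanity check on the formula, consider the smallest example (standard, and not taken from the text): $`๐`$ consisting of two points in $`\mathbb{C}^1`$, so $`M`$ is the twice-punctured plane, $`\pi _1(M)\cong F_2`$, and $`b_0=1`$, $`b_1=2`$. Witt's formula gives $`\varphi _1=2`$, $`\varphi _2=1`$, $`\varphi _3=2`$, and one checks through degree three that

$$(1-t)^2(1-t^2)(1-t^3)^2\cdots =1-2t=\underset{i\ge 0}{\sum }b_i(-t)^i.$$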
The technical results of were used in to show that fundamental groups of fiber-type arrangements (in particular, the pure braid group) are residually nilpotent. This result turned out to be important for the theory of knot invariants of finite type .
The situation surrounding the LCS formula was very much in flux during the preparation of , a fact reflected in the equivocal footnotes in the table of implications in that paper. The situation has been clarified somewhat in the meantime. Our purpose here is to briefly summarize the current understanding of these issues.
Recall that an arrangement of rank three is parallel if for any four hyperplanes of $`๐`$ in general position, there is a fifth hyperplane in $`๐`$ containing two of the six pairwise intersections. The $`OS`$ algebra $`A(๐)`$ is quadratic if the relation ideal $`I`$ (defined in Section 1.2) is generated by its elements of degree two. We will sometimes say $`๐`$ is quadratic. This is a combinatorial condition, which will be discussed in further detail in Section 3.2. In general the quotient of the exterior algebra $`\mathrm{\Lambda }(e_1,\mathrm{},e_n)`$ by the ideal generated by the degree two elements of $`I`$ is called the quadratic closure of $`A(๐)`$, denoted $`\overline{A}(๐)`$. Here is a summary of cogent results established in .
1. If $`๐`$ is a rational $`K(\pi ,1)`$ arrangement, then $`๐`$ is quadratic.
2. Every parallel arrangement is quadratic.
3. Every rational $`K(\pi ,1)`$ arrangement satisfies the LCS formula.
4. Every quadratic arrangement satisfies the LCS formula at least to third degree.
In we cited an unpublished note which claimed that every parallel arrangement is a rational $`K(\pi ,1)`$. Using the construction of , in 1994 Falk wrote a Mathematica program to compute $`\varphi _4`$, and checked the smallest example of a parallel, non-fiber-type arrangement of rank 3. This arrangement, labelled $`X_2`$ in , consists of the planes $`x\pm z=0,y\pm z=0,x+y\pm 2z=0`$, and $`z=0`$, and is pictured in Figure 3. We obtained the result $`\varphi _4=15`$, whereas the LCS formula would predict $`\varphi _4=10`$.
So the implications
$$\text{parallel}\Rightarrow \text{rational }K(\pi ,1),$$

$$\text{quadratic}\Rightarrow \text{rational }K(\pi ,1),$$

$$\text{parallel}\Rightarrow \text{LCS},$$

and

$$\text{quadratic}\Rightarrow \text{LCS}$$
recorded in are all false.
Subsequently, work of Shelton-Yuzvinsky , and Papadima-Yuzvinsky provided further clarification. Let $`\mathfrak{g}`$ denote the holonomy Lie algebra of $`M`$, the quotient of the free Lie algebra on generators $`x_1,\mathrm{},x_n`$ by the image of the map $`H_2(M)\to \mathrm{\Lambda }^2(H_1(M))`$ dual to the cup product. Let $`U=U(๐)`$ be its universal enveloping algebra, a dual object to the 1-minimal model $`๐ฎ`$. The Hilbert series of $`U`$ is $`\prod _{n\ge 1}(1-t^n)^{-\varphi _n}`$. Kohno constructs a chain complex $`(R,\delta )`$ which, when exact, forms a resolution of $`\mathbb{C}`$ as a trivial $`U`$-module. In this case $`๐`$ is a rational $`K(\pi ,1)`$ arrangement, and the LCS formula holds.
Shelton and Yuzvinsky realized that $`U(๐)`$ is the Koszul dual of the quadratic closure of $`A(๐)`$. We refer the reader to for a precise definition; loosely speaking, the defining relations for the Koszul dual $`U`$ form the orthogonal complement to those of $`\overline{A}(๐)`$ inside the tensor product $`T_2(A^1(๐))`$. They observed that the Aomoto-Kohno complex $`(R,\delta )`$ is the usual Koszul complex of $`U`$, and thus is exact if and only if $`U`$ is a Koszul algebra: $`U`$ is Koszul iff $`\mathrm{Ext}_U^{p,q}(\mathbb{C},\mathbb{C})=0`$ unless $`p=q`$. It follows from this that $`A(๐)`$ is a quadratic algebra. (This observation was also made by Hain .) The LCS formula is then a consequence of Koszul duality. They give a combinatorial proof that $`A(๐)`$ is quadratic and that $`U(๐)`$ is Koszul if $`๐`$ is a supersolvable arrangement.
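In numerical terms (a standard consequence of Koszulness, spelled out here for the reader's convenience), Koszul duality pairs the Hilbert series of the two algebras:

$$\mathrm{Hilb}(\overline{A}(๐),-t)\cdot \mathrm{Hilb}(U,t)=1,\qquad \mathrm{Hilb}(U,t)=\underset{n\ge 1}{\prod }(1-t^n)^{-\varphi _n},$$

so when $`A(๐)=\overline{A}(๐)`$ is Koszul the first identity is precisely the LCS formula displayed above.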
The results of were strengthened and extended in to give a description of $`H^{}(๐ฎ)`$ in terms of Koszul algebra theory, for more general spaces. In particular, it is shown in that $`๐`$ is rational $`K(\pi ,1)`$ if and only if the $`OS`$ algebra is Koszul. In addition, Papadima and Yuzvinsky gave an alternate proof that the arrangement $`X_2`$ above fails the LCS formula. Finally, using a "central-to-affine" reduction argument, they were able to prove the following.
###### Theorem 2.1.
For arrangements of rank three, the LCS formula holds if and only if the arrangement is fiber-type.
Peeva applies techniques of commutative algebra and Grรถbner basis theory to obtain a short proof that supersolvable arrangements satisfy the LCS formula, in addition to other related computational results.
In research closely related to the lower central series of arrangement groups, Kohno used the iterated integral/holonomy Lie algebra approach to construct representations of the (pure) braid group, and more generally to study the monodromy of local systems over hyperplane complements. This work is also closely tied to the theory of generalized hypergeometric functions. See for a description of these developments. Cohen and Suciu pursued similar ideas using methods more closely connected to those of in .
### 2.2. The $`D_n`$ reflection arrangements
The fundamental groups of the reflection arrangements of type $`D_n`$ have been studied using some of the technical machinery of . Note that these arrangements, for $`n>3`$, are not supersolvable. The author of constructs a presentation which he claims exhibits these fundamental groups as "almost direct products" in the sense of . He used this to show that these groups are residually nilpotent. In 1994 we tried to use this presentation to get more precise calculations for the lower central series of these groups, at least for $`n=4`$. In fact we found that the presentation in is not correct. Even for the $`D_3`$ arrangement, which is supersolvable, the results one deduces from do not jibe with the LCS formula, which is known to hold for $`D_3`$. In Liebman and Markushevich adopt a different approach and derive a different presentation to show that the $`D_n`$ arrangement groups are residually nilpotent.
It was in the course of this research that we started computing $`\varphi _4`$ by machine. In addition to finding the counterexample $`X_2`$ described above, we also computed $`\varphi _4=183`$ for the $`D_4`$ reflection arrangement. The LCS formula yields $`\varphi _4=186`$. So the $`D_4`$ arrangement fails the LCS formula, contrary to another assertion reported on in .
The work of Shelton and Yuzvinsky makes it clear why the argument of for the LCS formula for the $`D_n`$ reflection arrangements fails: these arrangements, for $`n>3`$, do not have quadratic $`OS`$ algebras, by . Hence the Aomoto-Kohno complex $`R_{}`$ cannot be exact for these arrangements.
So we are left with no examples of arrangements which are not supersolvable, yet are rational $`K(\pi ,1)`$, and no examples of arrangements satisfying the LCS formula which are not rational $`K(\pi ,1)`$.
###### Problem 2.2.
Find examples of non-supersolvable or non-rational $`K(\pi ,1)`$ arrangements satisfying the LCS formula, or prove that such examples do not exist.
### 2.3. Work of Cohen and Suciu on the Chen groups
As noted above, the ranks of the lower central series quotients of the groups of fiber-type arrangements are determined by the Betti numbers of the complement. From this point of view, the pure braid groups look like products of free groups (though they are not; see ). In the last few years, Cohen and Suciu have introduced the Chen groups into the study of arrangements, providing a computable tool for distinguishing similar arrangements.
The Chen groups of a group $`G`$ are the lower central series quotients of $`G`$ modulo its second commutator subgroup $`G^{\prime \prime }`$. If for any group $`G`$ we let $`\mathrm{\Gamma }_k(G)`$ denote the $`k^{th}`$ lower central series subgroup, then the homomorphism $`G\to G/G^{\prime \prime }`$ induces an epimorphism
$$\frac{\mathrm{\Gamma }_k(G)}{\mathrm{\Gamma }_{k+1}(G)}\twoheadrightarrow \frac{\mathrm{\Gamma }_k(G/G^{\prime \prime })}{\mathrm{\Gamma }_{k+1}(G/G^{\prime \prime })}=k^{th}\text{ Chen group}$$
Thus the ranks $`\varphi _k`$ of quotients of lower central series groups are no less than the corresponding ranks $`\theta _k`$ of Chen groups. In the case of the pure braid group, the ranks $`\theta _k`$ are determined in ; they are given by the generating function
$$\underset{k=2}{\overset{\mathrm{\infty }}{\sum }}\theta _kt^{k-2}=\left(\genfrac{}{}{0pt}{}{n+1}{4}\right)\frac{1}{(1-t)^2}-\left(\genfrac{}{}{0pt}{}{n}{4}\right)$$
In particular, these numbers differ from those for the product of free groups, providing a tidy proof that the pure braid groups are not such products.
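For instance (an easy check, using the standard fact that the Chen ranks of the free group $`F_2`$ are $`\theta _k=k-1`$ for $`k\ge 2`$), setting $`n=3`$ gives $`\left(\genfrac{}{}{0pt}{}{4}{4}\right)=1`$ and $`\left(\genfrac{}{}{0pt}{}{3}{4}\right)=0`$, so

$$\underset{k=2}{\overset{\mathrm{\infty }}{\sum }}\theta _kt^{k-2}=\frac{1}{(1-t)^2}=\underset{j\ge 0}{\sum }(j+1)t^j,$$

i.e. $`\theta _k=k-1`$, exactly as expected from the isomorphism $`P_3\cong F_2\times \mathbb{Z}`$, since the central $`\mathbb{Z}`$ factor contributes nothing in degrees $`k\ge 2`$.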
Cohen and Suciu provide a detailed study of these groups including a method for their computation from a presentation of the Alexander invariant (see the discussion of presentations of the fundamental group below.) It is interesting that while these groups are very effective in distinguishing similar groups, there is not yet an example of combinatorially equivalent arrangements with different Chen ranks. In particular, they do not distinguish the examples of Rybnikov of combinatorially equivalent, homotopically different arrangements (see Section 1.3).
### 2.4. Cohomological properties of the fundamental group
In 1972 Deligne proved that for a complexification of a real simplicial arrangement, the complement $`M`$ is aspherical (also expressed by saying that $`M`$ is a $`K(\pi ,1)`$ space.) That is, the universal cover of $`M`$ is contractible. Since all real reflection arrangements are simplicial, this solved a question raised and partially answered by Brieskorn in . The original study of this sort of problem was the work of Fadell and Neuwirth on the pure braid group. Following , the authors introduced in the notion of fiber-type arrangement and observed that for this class $`M`$ is aspherical, essentially by the iterated fibration argument of Fadell and Neuwirth. So it is natural to ask: for what arrangements is $`M`$ aspherical? It is known by work of Hattori that not all are โ the arrangement defined by $`Q=xyz(x+y+z)`$ is the simplest example.
Here we wish to touch upon the algebraic consequences of asphericity. Now if $`M`$ is aspherical, the (known) cohomology of $`M`$ is isomorphic to the cohomology of the group. Since $`M`$ has cohomological dimension $`\mathrm{rk}(๐)<\mathrm{\infty }`$, $`\pi _1(M)`$ does also. In addition, $`\pi _1(M)`$ has no torsion, and there is a $`K(\pi ,1)`$ space, $`\pi =\pi _1(M)`$, with the homotopy type of a finite complex (namely, $`M`$). So here is another open problem:
###### Problem 2.3.
Are all arrangement groups torsion-free?
The answer is of course yes for real reflection arrangements and for fiber-type (or supersolvable) arrangements. One approach to this question is to show that all arrangement groups are orderable. Here we say a group $`G`$ is *orderable* provided that there is a linear order $`<`$ on $`G`$ so that $`g<h`$ implies $`cg<ch`$ for all $`c\in G`$. It follows easily that an orderable group has no torsion. The braid group was shown orderable by Dehornoy in ; at the Tokyo meeting L. Paris proved that the group of a fiber-type arrangement is orderable . It is not known whether all arrangement groups are orderable. Note that the group of an arrangement has a finite presentation of a fairly restricted type, as described in Section 4.1, and that the relators all lie in the commutator subgroup.
There are some useful observations concerning these ideas in . For instance, we have the following theorem.
###### Theorem 2.4.
For $`j\ge 2`$ the Hurewicz map
$$\varphi :\pi _j(M)\to H_j(M)$$
is trivial.
As a consequence, the second homology of $`\pi _1(M)`$ is isomorphic to $`H_2(M)`$. In addition, it is mentioned there that the arrangement defined by
$$Q=xyz(y+z)(x-z)(2x+y)$$
has the property that there is no arrangement with aspherical complement with the same intersection lattice in ranks one and two. The following result is also proved in .
###### Theorem 2.5.
The complement of a central arrangement of rank three is aspherical provided that the fundamental group has cohomological dimension three and is of type FL.
A group $`\pi `$ is of type FL provided that $`\mathbb{Z}`$ (as a trivial $`\mathbb{Z}[\pi ]`$-module) has a finite resolution by free $`\mathbb{Z}[\pi ]`$-modules. An equivalent statement is that there should exist a finite CW complex which is a $`K(\pi ,1)`$-space. Theorem 2.5 shows that for central rank-three arrangements asphericity is determined by the fundamental group.
## 3. Arrangements with aspherical complements
Much of the early history of the topology of arrangements revolves around the "$`K(\pi ,1)`$ problem," the problem of determining which arrangements have aspherical complements. (Such an arrangement is called a $`K(\pi ,1)`$ arrangement.) This history is described in some detail in (see also Section 2.4). In addition, we proved an ad hoc necessary condition \[37, Thm. 3.1\] for asphericity involving "simple triangles," and introduced the notion of formal arrangement, which was shown to be a necessary condition for $`K(\pi ,1)`$ and rational $`K(\pi ,1)`$ arrangements. A great deal of progress was made in these areas in the intervening years, which we report on in this section.
### 3.1. Free arrangements are not aspherical
In our earlier survey, we highlighted the Saito conjecture, that all free arrangements are aspherical. In 1995 Edelman and Reiner provided counterexamples, which we briefly describe.
Let $`S`$ denote the polynomial ring of $`V`$. A linear map $`\theta :S\to S`$ is a derivation if for $`f,g\in S`$ we have $`\theta (fg)=f\theta (g)+g\theta (f)`$. The module of $`๐`$-derivations is defined by
$$D(๐)=\{\theta \mid \theta (Q)\in QS\}$$
where $`Q`$ is the defining polynomial of the arrangement. Then the arrangement is *free* provided that $`D(๐)`$ is a free $`S`$-module.
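The Euler derivation provides the standard first example (recalled here for concreteness):

$$\theta _E=\underset{i=1}{\overset{\mathrm{}}{\sum }}x_i\frac{\partial }{\partial x_i},\qquad \theta _E(Q)=(\mathrm{deg}Q)Q\in QS,$$

so $`\theta _E\in D(๐)`$ for every arrangement. For the Boolean arrangement $`Q=x_1\mathrm{}x_{\mathrm{}}`$ the module $`D(๐)`$ is free with basis $`x_1\frac{\partial }{\partial x_1},\mathrm{},x_{\mathrm{}}\frac{\partial }{\partial x_{\mathrm{}}}`$, the simplest instance of freeness.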
It is known that reflection arrangements are free; for their many pleasant properties see . In 1975 K. Saito conjectured that free arrangements should be aspherical. In their study of tilings of centrally symmetric octagons in , Edelman and Reiner found the family of arrangements given by
$$Q(๐_\alpha )=xyz(x-y)(x-z)(y-z)(x-\alpha y)(x-\alpha z)(y-\alpha z)$$
with $`\alpha \in \mathbb{Z}`$. They proved that the corresponding arrangements are free for all $`\alpha `$, while they are not aspherical for $`\alpha \ne -1,0,1`$. The proof of freeness is direct, using addition-deletion \[65, Theorem 4.51\], while the non-asphericity follows from the "simple triangle" criterion of . The counter-example $`๐_2`$ is pictured in Figure 4.
### 3.2. Formality and related concepts
The fundamental group of an arrangement complement is determined by a generic 3-dimensional section. Based on the idea that $`K(\pi ,1)`$ arrangements should be extremal in some sense, we developed the notion of formal arrangement in . This has been the subject of several papers since , which provide a better understanding of the concept. Here is a "modern" definition, equivalent to the original from .
Let $`\mathrm{\Phi }:\mathbb{C}^n\to V^{}`$ be given by $`\mathrm{\Phi }(x)=\sum _{i=1}^nx_i\alpha _i`$, where the $`\alpha _i`$ are the defining forms for $`๐`$. Let $`K=\mathrm{ker}(\mathrm{\Phi })`$ and let $`F`$ be the subspace of $`K`$ spanned by its elements of weight three (i.e., having three nonzero entries). Then the arrangement $`๐`$ is formal if $`F=K`$.
The orthogonal complement $`K^{}\subseteq \mathbb{C}^n`$ coincides with the point $`P_๐\in ๐ข_{\mathrm{}}(\mathbb{C}^n)`$ defined in Section 1.2. Thus the arrangement $`๐`$ is isomorphic to the arrangement in $`K^{}`$ formed by the coordinate hyperplanes. In the same way, the orthogonal complement $`F^{}\supseteq K^{}`$ defines an arrangement $`๐_F`$, called the formalization of $`๐`$. So $`๐`$ is formal if and only if $`๐=๐_F`$. If $`๐`$ is not formal, $`๐_F`$ has strictly greater rank, and $`๐`$ is a (not necessarily generic) section of $`๐_F`$. Also, $`๐`$ and $`๐_F`$ have isomorphic generic "planar" (i.e., rank-three) sections.
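A small example of our own may help: for the rank-two arrangement with defining forms $`\alpha _1=x`$, $`\alpha _2=y`$, $`\alpha _3=x+y`$, $`\alpha _4=x-y`$, the kernel $`K=\mathrm{ker}(\mathrm{\Phi })\subseteq \mathbb{C}^4`$ is two-dimensional and is already spanned by vectors of weight three,

$$K=\mathrm{span}\{(1,1,-1,0),(1,-1,0,-1)\}=F,$$

so $`๐`$ is formal (as is every central arrangement of rank at most two).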
These properties of formalization were asserted in , but the arguments we had in mind were not correct. The clarification described here is due to Yuzvinsky . Examples in show that non-formal arrangements need not be generic sections of their formalizations. The arrangement of Example 2.19 of has the property that the free erection of the underlying matroid is not realizable, but (contrary to the assertion in ) there is nevertheless a realizable (formal) erection. Matroid "erection" is the reverse of (corank one) truncation; truncation is the matroid-theoretic analogue of generic section. The free erection of an erectible matroid is the unique erection in "the most general position"; see .
These observations are enough to establish the following results from . The third assertion follows immediately from the second.
1. If $`๐`$ is a $`K(\pi ,1)`$ arrangement, then $`๐`$ is formal.
2. If $`๐`$ is quadratic, then $`๐`$ is formal.
3. If $`๐`$ is a rational $`K(\pi ,1)`$ arrangement, then $`๐`$ is formal.
We asked whether free arrangements are also necessarily formal. This was established by Yuzvinsky.
###### Theorem 3.1.
If $`๐`$ is a free arrangement, then $`๐`$ is formal.
The preceding result was generalized by Brandt and Terao . They define the notion of $`k`$-formal arrangement. A formal arrangement has the property that all relations among the defining equations are consequences of relations which are "localized" at rank-two flats, in the sense that an element of $`K`$ of weight three gives rise to a three-element subset of a rank-two flat. A formal arrangement is $`3`$-formal if all relations among these local generators of $`F=K`$ are themselves consequences of relations which are localized at rank-three flats of $`๐`$. This construction is iterated to define the notion of $`k`$-formal arrangement for every $`k\ge 2`$. See for the precise definition. An arrangement of rank $`r`$ is automatically $`k`$-formal for every $`k\ge r`$. The original notion of formality coincides with the case $`k=2`$.
###### Theorem 3.2.
If $`๐`$ is a free arrangement of rank $`r`$, then $`๐`$ is $`k`$-formal for every $`2\le k<r`$.
The converse is false .
Related work appears in , where the authors show that the discriminantal arrangements of Manin and Schechtman (see Section 3.4.2) are formal, and the "very generic" discriminantal arrangements are $`3`$-formal, though none are free.
An arrangement is locally formal if, for every flat $`X\subseteq [n]`$, the arrangement $`๐_X=\{H_i\mid i\in X\}`$ is formal. Since freeness, quadraticity, and $`K(\pi ,1)`$-ness are all "hereditary properties," in that they are inherited by the localizations $`๐_X`$, one has that every free, quadratic, or $`K(\pi ,1)`$ arrangement is locally formal.
We asked in whether formality is a "combinatorial property", depending only on the underlying matroid. Yuzvinsky constructed counter-examples in .
###### Theorem 3.3.
There exist arrangements $`๐_1`$ and $`๐_2`$ with the same underlying matroid, such that $`๐_1`$ is formal and $`๐_2`$ is not formal.
In Figure 5 are the dual point configurations of Yuzvinsky's arrangements. The dotted line in Figure 5(b) indicates where to "fold" the configuration to erect it to a rank-four configuration. The nontrivial planes in the erection are
$$12389,12456,13458,13678,14579,23567,24789,25689,\text{ and }34679.$$
Note that these two configurations are lattice-isotopic (over $`\mathbb{C}`$), so neither is free nor $`K(\pi ,1)`$.
If $`๐`$ is not formal, then the underlying matroid of $`๐`$ is a strong map image (under the identity map) of that of $`๐_F`$ (see for the general definition), and the two matroids have the same rank-three truncations. These combinatorial properties gave rise to several attempts to replace the notion of formality with some clearly matroidal condition, and strengthen Theorem 3.1 and assertion (i) above. For example one can ask for conditions on a matroid $`G`$ so that every (complex) realization of $`G`$ is formal. One is naturally led to the notion of line-closure.
Let $`G`$ be a matroid on ground set $`[n]`$. The line-closure of a subset $`S`$ of $`[n]`$ is the smallest subset of $`[n]`$ which contains every line (that is, rank-two flat) spanned by points of $`S`$. A set is line-closed if it is equal to its line-closure. The matroid $`G`$ is line-closed if every line-closed subset of $`[n]`$ is a flat of $`G`$. In his current work in progress , the first author has established the following result.
###### Theorem 3.4.
An arrangement $`๐`$ is quadratic only if the underlying matroid $`G(๐)`$ is line-closed.
###### Corollary 3.5.
The underlying matroid of a rational $`K(\pi ,1)`$ arrangement is necessarily line-closed.
The converse of Theorem 3.4, that $`๐`$ is quadratic when $`G(๐)`$ is line-closed, is very likely also true. A crucial step in the proof is yet to be completed, however, so this assertion remains an open problem.
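The line-closure of a set is easy to compute in practice. The following is a minimal sketch (our own code, with an assumed labelling of the rank-three braid arrangement), which simply absorbs lines until nothing new appears.

```python
# Minimal sketch of line-closure: grow S until it contains every line
# (rank-two flat) spanned by two of its points.  Two-point lines can be
# omitted, since they never enlarge the closure.

def line_closure(points, lines):
    closed = set(points)
    changed = True
    while changed:
        changed = False
        for line in lines:
            if len(closed & line) >= 2 and not line <= closed:
                closed |= line
                changed = True
    return closed

# Nontrivial lines of the rank-3 braid arrangement, with the hyperplanes
# x_i - x_j labelled 1..6 as 12, 13, 14, 23, 24, 34 (our labelling):
lines = [frozenset(l) for l in ({1, 2, 4}, {1, 3, 5}, {2, 3, 6}, {4, 5, 6})]

print(sorted(line_closure({1, 2}, lines)))   # -> [1, 2, 4], a flat
```

In this example every line-closed set is a flat, consistent with the fact that supersolvable matroids satisfy the conclusion of Theorem 3.4.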
Yuzvinsky defined a formal matroid to be a matroid $`G`$ possessing a basis (of $`\mathrm{rk}(G)`$ points) whose line-closure is $`[n]`$. Every line-closed matroid is formal in this sense. In fact a matroid $`G`$ is line-closed if and only if the line-closure of every basis of each flat $`X`$ is equal to $`X`$. Every realization of a formal matroid is formal.
In we define a matroid $`G`$ to be taut if $`G`$ is not a strong map image of a matroid $`G^{}`$ of greater rank with the same points and lines, and locally taut if every flat of $`G`$ is taut. Every line-closed matroid is locally taut, in fact every formal matroid is taut. Every realization of a (locally) taut matroid is (locally) formal. There exist matroids which are taut but not formal . A weak version of the first part of the following problem was suggested by Yuzvinsky in his talk .
###### Problem 3.6.
Prove that the matroid of a free or $`K(\pi ,1)`$ arrangement is necessarily taut.
Joseph Kung has pointed out to us that a locally taut matroid is uniquely determined by its points and lines, which suggests the following interesting problem.
###### Problem 3.7.
Prove that the underlying matroid of a locally formal arrangement (e.g. a free or $`K(\pi ,1)`$ arrangement) is uniquely determined by its points and lines.
This last problem is a variant on the following questions from , the first of which is Teraoโs Conjecture, and both of which remain open.
###### Problem 3.8.
Prove that freeness and $`K(\pi ,1)`$-ness of arrangements are matroidal properties.
We will refrain from discussing Teraoโs Conjecture further, except to pose a weak version which fits the spirit of this paper, and is interesting in its own right.
###### Problem 3.9.
Prove that freeness is preserved under lattice-isotopy.
### 3.3. Tests for asphericity
Some progress was also made on the problem of finding sufficient conditions for an arrangement to be $`K(\pi ,1)`$. The main results are the weight test of and its application to factored arrangements by Paris . A new technique involving modular flats was recently discovered and presented at the conference .
The complement $`M`$ of a 2-dimensional affine arrangement $`๐`$ is built up out of $`K(\pi ,1)`$ spaces, specifically $`(r,r)`$ torus link complements, in a relatively simple way, as is reflected in the Randell-Salvetti-Arvola presentations (see Section 4.1). In fact this structure mirrors precisely constructions from geometric group theory related to complexes of groups. This observation allows one to construct a relatively well-behaved cell complex which has the homotopy type of the universal cover of $`M`$, and to apply the weight test of Gersten and Stallings to derive a test for asphericity of $`M`$.
###### Theorem 3.10.
If $`๐`$ is a complexified affine arrangement in $`^2`$ that admits an $`๐`$-admissible, aspherical system of weights, then $`๐`$ is a $`K(\pi ,1)`$ arrangement.
The question remains what an $`๐`$-admissible, aspherical system of weights is. This involves the complex $`B`$ of bounded faces in the subdivision of $`\mathbb{R}^2`$ determined by $`๐`$. A weight system is an assignment of a real number weight to each "corner" of each 2-cell in $`B`$. The system is aspherical if the sum of the weights around any $`d`$-gon is at most $`d-2`$. The system is $`๐`$-admissible if certain sums of weights at vertices of $`\mathrm{\Gamma }`$ are at least $`2\pi `$. See for more detail.
The universal cover complex constructed in may be used in some cases to construct explicit essential spheres showing that $`M`$ is not aspherical. Radloff used this method to prove some necessary conditions for $`K(\pi ,1)`$-ness, along the lines of the โsimple triangleโ test of , and found several new examples of non-$`K(\pi ,1)`$ arrangements.
Falk and Jambu introduced the notion of factored arrangement in , originally in an attempt to find a combinatorial criterion for freeness. A factorization of an arrangement $`๐`$ is a partition of $`[n]`$ such that each flat of $`G(๐)`$ of rank $`p`$ meets precisely $`p`$ blocks, and meets one of them in a singleton, for each $`p`$. This property is necessary and sufficient for the $`OS`$ algebra $`A(๐)`$ to have a complete tensor product factorization - see . When $`๐`$ has a factorization, we say $`๐`$ is factored.
Paris realized that a factorization of a rank-three arrangement provides a template for a very simple $`๐`$-admissible, aspherical weight system.
###### Theorem 3.11.
If $`๐`$ is a factored, complexified arrangement in $`^3`$, then $`๐`$ is a $`K(\pi ,1)`$ arrangement.
Every supersolvable arrangement is factored, so this result provides a new, wider class of $`K(\pi ,1)`$ arrangements, at least in rank three.
###### Problem 3.12.
Show that factored arrangements of arbitrary rank are $`K(\pi ,1)`$.
A flat $`X`$ of a matroid $`G`$ is modular if $`\mathrm{rk}(X\vee Y)+\mathrm{rk}(X\wedge Y)=\mathrm{rk}(X)+\mathrm{rk}(Y)`$ for every flat $`Y`$. The following result was discovered independently by Paris and Falk-Proudfoot .
###### Theorem 3.13.
If $`X`$ is a modular flat of arbitrary rank in $`G(๐)`$, then there is a topological fibration $`M(๐)\to M(๐_X)`$ whose fiber is the complement of a projective arrangement.
This generalizes the corank-one case, which gives rise to fiber-type arrangements, established in . The new result can be used to construct or recognize $`K(\pi ,1)`$ arrangements if the base (whose matroid is the modular flat $`X`$) and fiber (whose matroid is the complete principal truncation of $`G(๐)`$ along $`X`$) are known to be $`K(\pi ,1)`$. This method is used to construct some interesting new examples in . Refer to Paris' paper in this volume for more details.
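A familiar special case may help fix ideas (our illustration; the fibration itself is the classical Fadell-Neuwirth map). In the rank-three braid arrangement with hyperplanes $`H_{ij}=\{x_i=x_j\}`$, $`1\le i<j\le 4`$, the flat $`X=\{H_{12},H_{13},H_{23}\}`$ is modular; for instance, taking $`Y=\{H_{14}\}`$,

$$\mathrm{rk}(X\vee Y)+\mathrm{rk}(X\wedge Y)=3+0=2+1=\mathrm{rk}(X)+\mathrm{rk}(Y),$$

and the fibration of Theorem 3.13 is the map forgetting the coordinate $`x_4`$, with fiber a thrice-punctured plane.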
### 3.4. Some crucial examples
In this section we want to briefly discuss some specific and interesting types of arrangements for which the $`K(\pi ,1)`$ problem is unsolved. These might be regarded as test subjects for new techniques; they qualify as "the first unknown cases."
First we cite another improvement to the table of implications in . Recall the definition of parallel arrangement from Section 2.1. In we had listed the implication "parallel $`\Rightarrow `$ $`K(\pi ,1)`$" as "not known, of significant interest." In unpublished work, Luis Paris has shown this implication to be false. Specifically, he showed that the Kohno arrangement $`X_2`$ (defined in Section 2.1) is not $`K(\pi ,1)`$. The proof establishes that the fundamental group contains a subgroup isomorphic to $`\mathbb{Z}^4`$; the result then follows from \[37, Thm. 3.2\]. The copy of $`\mathbb{Z}^4`$ is generated by $`a,b,c,`$ and the commutator $`[d,e]`$, where $`a,b,c,d,`$ and $`e`$ are the canonical generators corresponding to the hyperplanes $`x\pm z=0,z=0,`$ and $`x+y\pm 2z=0`$ respectively.
#### 3.4.1. Complex reflection arrangements
Fadell and Neuwirth showed in 1962 that the complement of the $`A_{\mathrm{}}`$ reflection (or braid) arrangement is $`K(\pi ,1)`$. In 1973 Brieskorn proved this for many real reflection arrangements, followed soon thereafter by Deligne's proof of the general case. Orlik and Solomon extensively studied arrangements of hyperplanes invariant under finite groups generated by complex reflections (see \[65, Chapter 6\]). It is natural to ask if all such arrangements are aspherical. We believe the conjecture that they are is due to Orlik, though it was proposed long before it ever appeared in print. It is known that the answer is affirmative in all cases except six exceptional, non-complexified arrangements, some of which have rank three. The proofs for the known cases use a variety of techniques, and essentially proceed from the Shephard-Todd classification of irreducible unitary reflection groups (see, e.g., ). What seems to be missing is a unifying property, similar to the simplicial property for real reflection arrangements exploited by Deligne. The closest approach to this goal is the work reported in \[65, p. 265\] which proves the asphericity of arrangements associated to Shephard groups (symmetry groups of regular convex polytopes). Here the problem is reduced to the (already solved) problem for an associated real reflection arrangement.
###### Problem 3.14.
Give a uniform proof that all unitary reflection arrangements are $`K(\pi ,1)`$.
#### 3.4.2. Discriminantal arrangements
Experience seems to show us that questions involving asphericity are quite complex for all arrangements but tractable for restricted classes (reflection, fiber-type, generic). One interesting class is that of the discriminantal arrangements introduced by Manin and Schechtman . Rather than give the full definition here we will describe the rank three examples, where the problem is already interesting.
Consider a real affine arrangement of lines in the plane, obtained by taking a collection of $`n`$ points, no three of which are collinear, and drawing all $`\left(\genfrac{}{}{0pt}{}{n}{2}\right)`$ lines through pairs of these points. Then embed this configuration in the plane $`z=1`$ in three-space and cone over the origin to obtain a central real three-arrangement. Then complexify.
This process can result in arrangements with distinct matroidal and topological structure, even for fixed $`n`$ . The discriminantal arrangements are obtained from "very generic" collections of points, for which no three of the $`\left(\genfrac{}{}{0pt}{}{n}{2}\right)`$ lines are concurrent except at the original $`n`$ points.
The arrangement $`C(4)`$ is linearly equivalent to the braid arrangement of rank three. An easy calculation shows that the Poincaré polynomial associated to the cohomology of $`C(n)`$ does not factor over $`\mathbb{Z}`$ for $`n\ge 5`$, so that these arrangements are not free and are not of fiber-type. Also $`C(n)`$ is not simplicial for $`n\ge 5`$. The arrangements $`C(n)`$ for $`n\ge 6`$ are not aspherical, by \[37, Thm. 3.1\].
For $`n=5`$, one obtains a complexified central three-arrangement of $`10`$ planes. This arrangement is not factored. More generally $`C(5)`$ does not support an admissible, aspherical system of weights, so the weight test fails. On the other hand, all of the standard necessary conditions for asphericity hold.
###### Problem 3.15.
Determine whether the discriminantal arrangement $`C(5)`$ is $`K(\pi ,1)`$.
A solution to this problem would also determine whether the space of configurations of six points in general position in $`\mathbb{C}P^2`$ is aspherical , a result which would be of significant interest.
#### 3.4.3. Deformations of reflection arrangements
A "deformation" of a reflection arrangement is an affine arrangement with defining equations of the form
$$\alpha _i(x_1,\mathrm{},x_{\mathrm{}})=c_{ij},$$
where the $`\alpha _i`$ are the positive roots in some root system, and $`c_{ij}\in \mathbb{R}`$. This class of arrangements is of great interest to combinatorialists, and is the subject of the paper of Athanasiadis in this volume .
As is our custom, we "cone" to obtain a central arrangement. For instance, based on the root system of type $`B_2`$, we obtain the $`B_2`$ Shi arrangement, defined by the polynomial
$$Q=xyz(x+y)(x-y)(x-z)(y-z)(x+y-z)(x-y-z).$$
(Shi arrangements are obtained by setting $`c_{i1}=0`$ and $`c_{i2}=1`$ for all $`i`$.) This nine-line complexified arrangement has a factorization, given by the partition
$$\{\{4\},\{1,2,5,7\},\{3,6,8,9\}\},$$
and is therefore a $`K(\pi ,1)`$ arrangement. On the other hand, the Shi arrangement constructed in a similar way from the root system of type $`G_2`$ is not factored or simplicial, and has no simple triangle.
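One can verify the factorization numerically from the defining planes alone. The sketch below (our own check) groups pairs of the nine planes into rank-two flats via cross products of integer normals, assembles the characteristic polynomial, and confirms that the exponents are $`\{1,4,4\}`$, matching the block sizes of the partition above.

```python
import itertools
from math import gcd
from sympy import symbols, expand

# normals of z, x, y, x+y, x-y, x-z, y-z, x+y-z, x-y-z
normals = [(0, 0, 1), (1, 0, 0), (0, 1, 0), (1, 1, 0), (1, -1, 0),
           (1, 0, -1), (0, 1, -1), (1, 1, -1), (1, -1, -1)]

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def canon(v):
    g = gcd(gcd(abs(v[0]), abs(v[1])), abs(v[2]))
    v = tuple(c // g for c in v)
    return v if v > tuple(-c for c in v) else tuple(-c for c in v)

flats = {}                          # common line -> set of planes through it
for i, j in itertools.combinations(range(9), 2):
    flats.setdefault(canon(cross(normals[i], normals[j])), set()).update((i, j))

n = 9
b2 = sum(len(s) - 1 for s in flats.values())   # sum over rank-2 flats of (mult - 1)
b3 = 1 - n + b2                                # central arrangement: chi(1) = 0
t = symbols('t')
chi = t**3 - n*t**2 + b2*t - b3
print(expand(chi - (t - 1)*(t - 4)**2))        # 0, so the exponents are 1, 4, 4
```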
###### Problem 3.16.
Decide whether the $`G_2`$ Shi arrangement is $`K(\pi ,1)`$.
More generally, we propose the following.
###### Problem 3.17.
Decide which Shi arrangements are $`K(\pi ,1)`$.
## 4. Topological properties of the group of an arrangement
At the time of the publication of , a presentation of the fundamental group of the complement of a complexified arrangement had already been derived. In the meantime, a similar presentation was found for arbitrary complex arrangements, and several different "spines" for the complement, some of them modelled on group presentations, were constructed. These group presentations have been used to study the Milnor fibration and Alexander invariants of the complement. We report briefly on these ideas here.
### 4.1. Presentations of $`\pi _1`$
We have seen earlier in the discussion of the lower central series, Chen groups and group cohomology that certain classes of arrangements (fiber-type, simplicial) have well-behaved fundamental groups. Due to work of Arvola, Randell and Salvetti, an explicit presentation of $`\pi _1(M)`$ can be written. See \[65, Section 5.3\] for a clear exposition of Arvola's presentation for any complex arrangement, and for the explicit presentation and some applications of Randell's presentation, which holds for complexified arrangements and is naturally simpler than the general case. A different approach, using the notion of "labyrinth," is adopted by Dung and Vui to arrive at similar presentations for arbitrary arrangement groups.
In these presentations one first takes a planar section (or, more precisely, the projective image), so that one is working with an affine arrangement in $`\mathbb{C}^2`$. Then there is one generator for each line of the arrangement, and one set of relations for each intersection. In all cases the relations consist entirely of commutators, but to date this has not shed much light on the questions of group cohomology, torsion in the fundamental group, or other properties (such as orderability) of the fundamental group. A general theme for questions is: to what extent do arrangement groups mimic the properties of the pure braid groups?
The concept of braid monodromy was introduced by B. Moishezon. Libgober showed that the braid monodromy presentation of the fundamental group yields a two-complex with the homotopy type of the complement of an algebraic curve (e.g., a line arrangement) transverse to the line at infinity.
Motivated in part by this result, the first author showed that for arbitrary line arrangements the 2-complex modelled on the braid monodromy presentation serves as an efficient model for constructing the homotopy type of the complement (in the case of 3-arrangements). This construction was then used to produce a number of examples with different intersection lattices but the same homotopy type (see also Section 1.3).
In related work Cohen and Suciu have given an explicit description of the braid monodromy of a complex arrangement, using Hansen's theory of polynomial covering maps. They show that the resulting presentation of the fundamental group is equivalent to the Randell-Arvola presentation via Tietze transformations that do not affect the homotopy type of the associated 2-complex. It follows that the complement is homotopy equivalent to the 2-complex modelled on either of these presentations, generalizing Libgober's result. For this work Cohen and Suciu used extensively the concept of braided wiring diagram, which we briefly describe below. The notion of braided wiring diagram generalizes Goodman's concept of wiring diagram, and was considered earlier for arrangements. (Wiring diagrams appear in combinatorics as geometric models for rank-three oriented matroids.) The presentations described above use versions of this idea. In brief, the braided wiring diagram can be thought of as a template for the fundamental group (or, for line arrangements, the homotopy type).
Here is a sketch of the construction. For examples and further details, in particular a beautiful derivation using polynomial covering space theory, see the paper of Cohen and Suciu. Since we are interested in the fundamental group, consider an affine arrangement $`\mathcal{A}`$ in $`\mathbb{C}^2`$. Choose coordinates in $`\mathbb{C}^2`$ so that the projection to the first coordinate is generic. Suppose that the images $`y_1,\ldots ,y_n`$ of the intersections of the lines have distinct real parts. Choose a basepoint $`y_0\notin \{y_1,\ldots ,y_n\}`$, and assume the real parts of the $`y_i`$ are decreasing with $`i`$. Let $`\xi `$ be a smooth path which begins at $`y_0`$ and passes in order through the $`y_i`$, horizontal near each $`y_i`$. Then the braided wiring diagram is $`\mathcal{W}=\{(x,z)\in \xi \times \mathbb{C}\mid Q(x,z)=0\}.`$ (Recall that $`Q`$ is the defining polynomial of the arrangement.)
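For real arrangements the combinatorial core of this construction is easy to compute. A rough sketch (our own, for real lines only, so no braiding information is recorded): list the crossing abscissae and the vertical order of the "wires" between consecutive crossings.

```python
import itertools

lines = [(1.0, 0.0), (-1.0, 2.0), (0.0, 1.0), (2.0, -3.0)]   # (slope, intercept)

events = sorted({(ci - cj) / (mj - mi)          # x-coordinates of crossings
                 for (mi, ci), (mj, cj) in itertools.combinations(lines, 2)
                 if mi != mj})

def order_at(x):
    # index order of the wires, bottom to top, at abscissa x
    return [k for _, k in sorted((m * x + c, k) for k, (m, c) in enumerate(lines))]

xs = [events[0] - 1] + [(a + b) / 2 for a, b in zip(events, events[1:])] + [events[-1] + 1]
print("crossing abscissae:", [round(e, 3) for e in events])
for x in xs:
    print(f"x = {x:+.3f}: wire order {order_at(x)}")
```

For complex arrangements one would additionally record, along the path $`\xi `$, the braiding of the strands between nodes; the sketch above only captures the real, unbraided case.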
This braided wiring diagram should be viewed as a picture of the braid monodromy of the fundamental group of the arrangement (or as a picture of the fundamental group itself). In a sense, it carries the attaching (or amalgamating) information as one computes the fundamental group using the Seifert-Van Kampen theorem. Each actual node in the wiring diagram gives a set of relators, as does each crossing. In particular, it is shown there that the braided wiring diagram recovers the Arvola or Randell presentation of $`\pi _1(M)`$. Indeed, in the real case, the braided wiring diagram can be identified with the usual drawing of the arrangement in $`\mathbb{R}^2`$.
As is the case with ordinary braids, there are "Markov moves" with which one can modify such a wiring diagram to realize any braid-equivalence of the underlying braid monodromies. These are given explicitly by Cohen and Suciu. Rudimentary moves of this type, called "flips," appeared first in earlier work. Among the consequences we note the following results, which relate braid monodromy and braided wiring diagrams to lattice isotopy of line arrangements (that is, arrangements in $`\mathbb{C}^2`$).
###### Theorem 4.1.
Lattice-isotopic arrangements in $`\mathbb{C}^2`$ have braided wiring diagrams which are related by a finite sequence of Markov moves and their inverses.
###### Theorem 4.2.
Line arrangements with braid-equivalent monodromies have isomorphic underlying matroids.
### 4.2. The Milnor fiber
The defining polynomial $`Q=\prod _{i=1}^n\alpha _i`$ is homogeneous of degree $`n`$ and can be considered as a map
$$Q:M\to \mathbb{C}^{\ast }$$
It is well-known that this map is the projection of a fiber bundle, called the Milnor fibration, and that the Milnor fiber $`F=Q^{-1}(1)`$ should be of interest. It was shown that this Milnor fibration is constant in a lattice-isotopic family, so that the Milnor fiber is indeed an invariant of lattice-isotopy. Because of this we propose the following definition, analogous to the definition made in the theory of knots.
###### Definition 4.3.
Two arrangements are called *(topologically) equivalent* if they are lattice-isotopic. We say the arrangements have the same *(topological) type.*
Thus, arrangements are topologically equivalent if and only if they lie in the same path component of some matroid stratum in the Grassmannian. With this terminology, we have the following result.
###### Theorem 4.4.
The Milnor fiber and fibration are invariants of topological type.
Now, $`F`$ is simply an $`n`$-fold cover of the complement of the projectivized arrangement in $`\mathbb{CP}^{\ell -1}`$. Since the algorithms of the previous section work to compute the fundamental group of this latter space, questions involving the fundamental group and cohomology of $`F`$ are also questions involving the group of the arrangement. In particular, while the cohomology of $`M`$ is determined by the intersection lattice, that of $`F`$ may not be. The situation is analogous to that of plane curves, where work going back to Zariski shows that not only the type but the position of the singularities affects the irregularity. (The irregularity here is simply half the "excess" in the first Betti number of $`F`$.)
Early results concerning the Milnor fiber of an arrangement (often in the general context of plane curves) appear in work of Libgober and Randell, particularly with respect to Alexander invariants. Libgober's work gave considerable information about the homology of the Milnor fiber in relation to the number and type of singularities of the arrangement, their position and the number of lines. It was also observed that the Alexander polynomial is equal to the characteristic polynomial of the monodromy on the Milnor fiber.
The paper of Artal-Bartolo included an interesting example: for the rank three braid arrangement $`A_3`$ the first Betti number of the Milnor fiber is seven, an excess of two over the five "predicted" by the number of lines. (This result can be obtained as an interesting exercise by applying the Reidemeister-Schreier rewriting algorithm to the presentations of the fundamental group.) Orlik and Randell showed that in the generic case the cohomology of the Milnor fiber is minimal, given by the number of lines, below the middle dimension.
Cohen and Suciu carry the study of the Milnor fiber forward. Using the group presentation and methods of Fox calculus they give twisted chain complexes whose homology gives that of the Milnor fiber. Their methods are effective, and several explicit examples are given. The monodromy action on the Milnor fiber is of course crucial, and this monodromy is determined as well.
Finally, we note the following problem, which remains open after many years.
###### Problem 4.5.
Prove that the homology of the Milnor fiber of $`\mathcal{A}`$ depends only on the underlying matroid.
###### Acknowledgements .
The idea to hold a birthday conference in honor of Peter Orlik was initially suggested by the second author in 1995. We would like to thank Mutsuo Oka and Hiroaki Terao for their hard work in organizing the meeting. We also wish to thank the referee for his helpful observations concerning deformations of arrangements.
The two authors were both students of Peter Orlik in Madison, Randell in the early 1970โs and Falk in the early 1980โs. We are happy to have the chance to thank him, in print, for introducing us to the field of hyperplane arrangements and for his enthusiasm, friendship and support through the years. |
no-problem/0002/hep-th0002182.html | ar5iv | text | # A consistent electromagnetic duality
## Abstract
We present a new view for duality in classical electromagnetic theory, based on the physical properties of a dual theory, eliminating the problems of the usual treatment of the subject.
Keywords : electromagnetic duality, Heaviside duality, magnetic charges. PACS numbers : 11.15.-q, 11.30.-j, 12.15.-y
The concept of duality has received considerable attention in gauge theories. It provides useful tools to construct solutions to the field equations, namely those which are self-dual or anti-self-dual, or allows one to study regimes of the theory in which perturbation expansions cannot be used.
Duality in classical electromagnetic theory was discovered by Heaviside a century ago for the Maxwell equations in vacuum. He saw that
$$\begin{array}{cc}\hfill \nabla \times \mathbf{E}=& -\frac{\partial \mathbf{B}}{\partial t}\hfill \\ & \\ \hfill \nabla \times \mathbf{B}=& \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t}\hfill \end{array}\}.$$
(1)
exchanged among themselves under the replacements
$$\mathbf{E}\to c\mathbf{B},\qquad c\mathbf{B}\to -\mathbf{E}.$$
(2)
This symmetry of the system, duality (we shall refer to it as Heaviside duality), gave rise to much speculation about its meaning: is the electric field equivalent to the magnetic induction field, and vice versa? With the advent of nonabelian gauge theories for elementary particle physics, a lot of work in Physics and Mathematics has been performed to clarify the meaning and applicability of duality, in the form exposed above or in its modern nonabelian versions.
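As a quick symbolic sanity check (our illustration, not part of the original argument), one can verify with a vacuum plane wave that the replacements (2) map solutions of (1) into solutions:

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', real=True, positive=True)
f = sp.Function('f')

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def solves_vacuum_maxwell(E, B):
    eq1 = sp.simplify(curl(E) + sp.diff(B, t))          # curl E = -dB/dt
    eq2 = sp.simplify(curl(B) - sp.diff(E, t) / c**2)   # curl B = (1/c^2) dE/dt
    return eq1.is_zero_matrix and eq2.is_zero_matrix

E = sp.Matrix([f(z - c*t), 0, 0])        # plane wave moving along +z
B = sp.Matrix([0, f(z - c*t)/c, 0])
print(solves_vacuum_maxwell(E, B))       # True
print(solves_vacuum_maxwell(c*B, -E/c))  # True: the Heaviside dual also solves (1)
```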
Let us recall that the original fields in the Maxwell equations have electric charges or currents and/or time-varying fields as their sources. Even in vacuum, electric fields are the ones that accelerate electric charges parallel to their direction, whereas magnetic fields provide transverse acceleration for electric charges.
Magnetic materials are related to elementary magnetic dipoles, but single isolated magnetic charges have never been observed. Dirac introduced magnetic monopoles in this framework to provide sources for the magnetic induction field, i.e.,
$$\nabla \cdot \mathbf{B}=g\mu _0\delta (\mathbf{x})$$
(3)
To preserve the relation with the magnetic potential,
$$\mathbf{B}=\nabla \times \mathbf{A}$$
(4)
a topological structure had to be included. This determined the famous Dirac relation between the electric charge of a test particle and the strength of the monopole field:
$$qg=2\pi n\frac{\hbar }{\mu _0}.$$
(5)
It is usually assumed that the introduction of magnetic monopoles is related to Heaviside duality, though the relation is somewhat vague.<sup>2</sup><sup>2</sup>2It is worth pointing out that the same quantum of flux appears here as in the proposed Aharonov-Bohm effect.
There remain unwanted aspects of the theory at the electromagnetic level. Chief among them is the fact that the lagrangian of the theory changes sign under duality; that is, Heaviside duality is a symmetry of the equations of motion but not of the lagrangian providing them. Neglecting sources,
$$L\{\mathbf{E},\mathbf{B}\}=\frac{\epsilon _0}{2}\int d^3x\left(\mathbf{E}^2-c^2\mathbf{B}^2\right)$$
(6)
Heaviside duality may be extended to a continuous variation. The generator of the infinitesimal transformation has been proposed to be non-local, or to be constructed at the price of breaking explicit Lorentz invariance.
In this article we present a new interpretation of duality which provides a consistent physical picture and avoids all the problems alluded to above. Although the matter is largely speculative, our central point is the physical basis of the symmetry.
Let us look at the equations of a would-be classical magnetodynamics, the dual theory to real electrodynamics. They are:
$$\begin{array}{cc}\hfill \nabla \cdot \mathbf{E}^{\prime }=& 0\hfill \\ & \\ \hfill \nabla \cdot \mathbf{B}^{\prime }=& \mu _0\rho _m\hfill \\ & \\ \hfill \nabla \times \mathbf{E}^{\prime }=& -\frac{1}{\epsilon _0}\mathbf{j}_m-\frac{\partial \mathbf{B}^{\prime }}{\partial t}\hfill \\ & \\ \hfill \nabla \times \mathbf{B}^{\prime }=& \frac{1}{c^2}\frac{\partial \mathbf{E}^{\prime }}{\partial t}\hfill \end{array}\}.$$
(7)
The primes indicate that, although the symbols appear in the same positions as in the original Maxwell equations of electrodynamics, they satisfy different equations. The physical content of the equations is different: the sources for the magnetic field are now magnetic charges and/or currents, and time variations of the electric field. The latter, in turn, is generated only by magnetic currents or by the time variation of the magnetic field.
By the same arguments as in the usual case, an axial-vector electric potential and a pseudo-scalar magnetic potential may be introduced from the homogeneous equations:
$$\begin{array}{cc}\hfill \mathbf{E}^{\prime }=& \nabla \times \mathbf{C}\hfill \\ & \\ \hfill \mathbf{B}^{\prime }=& -\nabla \psi +\frac{1}{c^2}\frac{\partial \mathbf{C}}{\partial t}\hfill \end{array}\}.$$
(8)
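As another small consistency check (ours), fields built from these potentials automatically satisfy the two homogeneous equations of (7):

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', real=True, positive=True)
psi = sp.Function('psi')(x, y, z, t)
C = sp.Matrix([sp.Function(n)(x, y, z, t) for n in ('C1', 'C2', 'C3')])

grad = lambda s: sp.Matrix([sp.diff(s, v) for v in (x, y, z)])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))
curl = lambda F: sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                            sp.diff(F[0], z) - sp.diff(F[2], x),
                            sp.diff(F[1], x) - sp.diff(F[0], y)])

Ep = curl(C)                                   # E' = curl C
Bp = -grad(psi) + sp.diff(C, t) / c**2         # B' = -grad psi + (1/c^2) dC/dt

print(sp.simplify(div(Ep)))                            # 0
print(sp.simplify(curl(Bp) - sp.diff(Ep, t) / c**2))   # zero vector
```

The remaining two equations of (7) then follow as the Euler-Lagrange equations of the lagrangian (9) below.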
With these potentials, a second order lagrangian may be constructed from which the Euler-Lagrange equations are the ones above for the fields and their magnetic charges and currents. In three dimensional notation, the lagrangian is:
$$L\{\mathbf{E}^{\prime },\mathbf{B}^{\prime }\}=\frac{\epsilon _0}{2}\int d^3x\left(c^2\mathbf{B}^{\prime 2}-\mathbf{E}^{\prime 2}\right)-\int d^3x\left(\rho _m\psi +\mathbf{j}_m\cdot \mathbf{C}\right)$$
(9)
One can easily obtain in covariant four-dimensional notation the corresponding lagrangian, involving an anti-symmetric tensor of second rank, $`G_{\mu \nu }`$. Its space-space components are the components of the electric (primed) field, while the time-space ones are
$$G_{0k}=cB_k^{\prime }$$
(10)
This exchange of components with respect to the corresponding tensor in electrodynamics, $`F_{\mu \nu }`$, makes it tempting to identify one as the dual of the other. This is not allowed: the corresponding fields satisfy different differential equations and have correspondingly different physical content. It may be correct to say that in the magnetodynamic case the lagrangian density is written in terms of the dual of what would be the field intensity tensor built with the same fields.
We are now in position to analyze duality. The wave equations for the fields in electrodynamics and magnetodynamics are the same. That is, in this harmonic sector (in the sense of differential forms) the exchange in the original Heaviside duality acquires the meaning of an exchange between the magnetodynamic and electrodynamic equations:
$$\mathbf{E}\to c\mathbf{B}^{\prime },\qquad c\mathbf{B}\to -\mathbf{E}^{\prime }$$
(11)
and the corresponding ones for the other fields.
One can write a lagrangian density with satisfactory dual symmetry under the exchange of electric and magnetic fields in the electrodynamic and the magnetodynamic sectors. It is
$$\mathcal{L}=-\frac{\epsilon _0}{4}F_{\mu \nu }F^{\mu \nu }-\frac{\epsilon _0}{4}G_{\mu \nu }G^{\mu \nu }-\frac{1}{c}j_{e\mu }A^\mu -\frac{1}{c}j_{m\mu }W^\mu +(\text{gauge fixing terms}).$$
(12)
Our extended version of duality exchanges the terms in the lagrangian. Of course, for the free-space equations one has Heaviside duality in both sectors. One can also add monopoles by looking for topologically non-trivial configurations of the potentials. Monopoles are thus seen to be not directly related to the extended electromagnetic duality.
The generator of the continuous duality transformations may be easily obtained. From the lagrangian, the momenta conjugate to the potentials $`A^\mu `$ and $`W^\mu `$ are
$$\begin{array}{cccc}\hfill \mathrm{\Pi }_\mu =& \frac{\partial \mathcal{L}}{\partial (\partial ^0A^\mu )}=\hfill & \hfill \epsilon _0E_k\delta _{k\mu }& \\ \hfill \mathrm{\Pi }_\mu ^{\prime }=& \frac{\partial \mathcal{L}}{\partial (\partial ^0W^\mu )}=\hfill & \hfill \epsilon _0cB_k^{\prime }\delta _{k\mu }& \end{array}\}.$$
(13)
In terms of the fields and their momenta, the continuous transformations, parametrized by the angle $`\eta `$ as
$$\begin{array}{cc}\hfill \stackrel{~}{\mathbf{E}}=& \mathrm{cos}\eta \mathbf{E}-\mathrm{sin}\eta c\mathbf{B}^{\prime }\hfill \\ & \\ \hfill c\stackrel{~}{\mathbf{B}}=& \mathrm{sin}\eta \mathbf{E}^{\prime }+\mathrm{cos}\eta c\mathbf{B}\hfill \end{array}\}$$
(14)
and the corresponding ones for the other pair of fields, are obtained from the generator:
$$M=\int d^3x\left[A^\mu \mathrm{\Pi }_\mu ^{\prime }-\mathrm{\Pi }_\mu W^\mu \right]$$
(15)
which is perfectly local and covariantly written.
In conclusion, to make a consistent mathematical treatment of electromagnetic duality, preserving locality, Lorentz invariance and positivity of the energy, the physical content of the dual system must be correctly enlarged.
Our proposal looks very similar to what is known in mathematical terms as the Hodge lemma: the space of second-rank tensors on a compact manifold (we work in non-compact four-dimensional spacetime) decomposes into harmonic terms (solutions of the wave equation), curls of vectors (A, in our case) and divergences of higher-rank tensors (or of their duals, W in our case). The spaces of this decomposition are mutually orthogonal.
It is true that Nature presents us with only electric charges and currents, but duality points to a complementary world closely related to the familiar one. The fact that this symmetry is not apparent in the real world points to some kind of breaking, and may be a fruitful field of research.
Acknowledgments We acknowledge partial support from CNPq during part of this work, and discussions with Prof. C. J. Wotzasek. |
no-problem/0002/astro-ph0002392.html | ar5iv | text | # The role of outflows and star formation efficiency in the evolution of early-type cluster galaxies
## 1. Introduction
One of the long-standing problems in astrophysics is the process of star formation in galaxies. The standard scenario assumes stars to form from gas that falls into the potential wells of dark matter halos. Subsequent interacting or merging stages among galaxies might trigger additional bursts of star formation. The complex nature of star formation makes this problem largely intractable from an analytical point of view, so that the best approach towards understanding the distribution of stellar populations in galaxies requires heavy use of rough approximations and all-too-often dangerous generalizations. It is the purpose of current phenomenological models describing the formation and evolution of the stellar component in galaxies to reveal the mechanisms which generate the wide range of galaxy colors and luminosities as well as their connection to morphology. The current status of the determination of the ages of the stellar populations in galaxies is rather controversial due to the degeneracy between age and metallicity (Worthey 1994). Observations of early-type galaxies by two different groups, using similar techniques targeting narrow spectral indices to infer a luminosity-weighted age, give contradictory results. While Trager et al. (2000) find a large age spread in the sample of field and group early-type systems of González (1993), Kuntschner (2000) reports a large metallicity spread in Fornax cluster ellipticals. So far, any observational measurement of age is plagued by many degeneracies which render a direct estimate uncertain. An alternative approach, modelling the formation and chemical enrichment of the stellar component of galaxies, is needed in order to reveal the actual scenario of galaxy formation.
## 2. Modelling chemical enrichment
The model presented here describes the process of star formation in early-type galaxies in terms of four parameters: star formation efficiency ($`C_{\mathrm{eff}}`$), ejected gas fraction in outflows ($`B_{\mathrm{out}}`$), formation redshift ($`z_F`$) and infall timescale ($`\tau _f`$). The latter two parameters set the epoch of maximum infall ($`t=t(z_F)`$) and the spread of a Gaussian profile for the infalling gas, i.e.:
$$f(t)\propto e^{-\left(t-t(z_F)\right)^2/2\tau _f^2}$$
(1)
This gas will be turned into stars according to a linear Schmidt-type law, where the proportionality constant is the star formation efficiency parameter. The model is described in more detail in Ferreras & Silk (2000a,b). This generic description allows us to include multi-burst scenarios in galaxies undergoing several merging stages with enough gas to fuel star formation at each merging event. Figure 1 shows a comparison between two star formation histories: one with three equally strong starburst events, and a second one that simplifies the former using our approach. The top panel shows the evolution of the gas mass, which is proportional to the star formation rate. The bottom panel shows the evolution of the metallicity, and the top inset traces the evolution of rest frame $`U-V`$ color for both scenarios. One can see that at times after the last bursting episode, the evolution in both cases is roughly indistinguishable. Hence, we conclude that our four-parameter model can account not only for a standard "monolithic" scenario but also for multi-burst formation histories. Furthermore, a Gaussian profile for infall avoids the overproduction of low metallicity stars. In fact, a suitable choice of infall parameters ($`\tau _f`$,$`z_F`$) can reproduce the local metallicity distribution of stars (Rocha-Pinto & Maciel 1996).
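A toy one-zone version of such a model is easy to integrate numerically. The sketch below is our own illustration (the actual model is more elaborate); the instantaneous recycled fraction $`R`$ and all numerical values are assumptions for the example only.

```python
import numpy as np

def evolve(C_eff=0.5, B_out=0.3, t_F=1.0, tau_f=2.0, R=0.3,
           t_end=14.0, dt=1e-3):
    """C_eff in Gyr^-1; Gaussian infall, linear Schmidt law, outflows."""
    t = np.arange(0.0, t_end, dt)
    infall = np.exp(-((t - t_F) ** 2) / (2.0 * tau_f ** 2))
    infall /= infall.sum() * dt          # normalize to unit infalling mass
    Mg, Ms = 0.0, 0.0
    gas = np.empty_like(t)
    for k in range(t.size):
        sfr = C_eff * Mg                 # linear Schmidt-type law
        ret = R * sfr                    # gas returned by dying stars
        Mg += dt * (infall[k] - sfr + (1.0 - B_out) * ret)  # outflow ejects B_out
        Ms += dt * (sfr - ret)
        gas[k] = Mg
    return t, gas, Ms

t, gas, Ms = evolve()
print(f"final stellar mass: {Ms:.3f}, final gas mass: {gas[-1]:.4f}")
```

Superposing several Gaussian infall episodes in `infall` reproduces the multi-burst histories compared in Figure 1.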
For a given set of four parameters ($`B_{\mathrm{out}}`$,$`C_{\mathrm{eff}}`$,$`z_F`$,$`\tau _f`$) we can trace a star formation history and convolve it in age and metallicity with the simple stellar populations of Bruzual & Charlot (in preparation). Hereafter a flat cosmology ($`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`H_0=60`$ km s<sup>-1</sup> Mpc<sup>-1</sup>), and a hybrid Initial Mass Function between Scalo and Salpeter (Ferreras & Silk 2000b) are used. Out of these four parameters, we find that only $`B_{\mathrm{out}}`$ and $`C_{\mathrm{eff}}`$ can generate the range of observed $`U-V`$ colors in nearby early-type cluster galaxies (Bower, Lucey & Ellis 1992). Hence, the luminosity sequence of these systems could be explained either by a range of outflows ($`B_{\mathrm{out}}`$-sequence), by a range of star formation efficiencies ($`C_{\mathrm{eff}}`$-sequence), or by some combination thereof. Figure 2 shows the predicted color-magnitude relation (CMR) of Coma galaxies at the redshift of cluster Cl0016+16 ($`z=0.55`$, Ellis et al. 1997) assuming a $`B_{\mathrm{out}}`$-sequence (left) or a $`C_{\mathrm{eff}}`$-sequence (right), and a range of infall parameters ($`z_F`$,$`\tau _f`$). A sequence driven by $`C_{\mathrm{eff}}`$ results in an age spread for the stellar populations. This causes the remarkable departure of the predictions from the observed CMR (shaded area) for extended star formation histories ($`z_F=10`$, $`\tau _f=2`$ Gyr). However, because of the age-metallicity degeneracy, we find that quite a large range of the parameters agrees with the observations within error bars. Hence, we cannot use photometric measurements of moderate redshift clusters to determine whether age ($`C_{\mathrm{eff}}`$) or metallicity ($`B_{\mathrm{out}}`$) drives the CMR.
## 3. $`M/L`$ ratio as tracer of age evolution
One of the most age-sensitive observables is the mass-to-light ratio. Hence, the predicted evolution of $`M/L`$ with lookback time should be different for sequences driven by age or by metallicity. We consider the evolution with redshift of the slope of the correlation between $`M/L`$ in rest frame $`B`$-band and stellar mass. This slope change is parametrized by $`\eta _B`$ defined as follows:
$$\eta _B(z)\equiv \frac{\mathrm{\Delta }\mathrm{log}M/L_B}{\mathrm{\Delta }\mathrm{log}M}|_z-\frac{\mathrm{\Delta }\mathrm{log}M/L_B}{\mathrm{\Delta }\mathrm{log}M}|_{z=0}$$
(2)
For a $`B_{\mathrm{out}}`$-sequence (driven by outflows), the range of luminosities is associated with a spread in metallicities. As we evolve the cluster to higher redshifts, the mass-to-light ratio will decrease uniformly across the luminosity sequence because of lookback time, and there will also be a relative change of $`M/L`$ among early-type galaxies caused by its very weak metallicity dependence, which makes the decrease in mass-to-light ratio slightly larger in galaxies with a higher metallicity, thereby flattening the slope of $`M/L`$ vs $`M`$ (i.e. $`\eta _B\lesssim 0`$). On the other hand, a $`C_{\mathrm{eff}}`$-sequence (driven by efficiency) will introduce a significant age spread which varies with galaxy mass, so that $`M/L`$ at the fainter end (which has a lower efficiency and thus a larger age spread) will decrease more than at the bright end, steepening the slope (i.e. $`\eta _B>0`$, see figure 5 in Ferreras & Silk 2000b).
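As a worked example of definition (2), with invented numbers (ours):

```python
import numpy as np

def slope(logM, logML):
    return (logML[1] - logML[0]) / (logM[1] - logM[0])

logM = np.array([10.0, 11.5])          # faint and bright stellar masses
logML_z0 = np.array([0.55, 0.80])      # log M/L_B today (illustrative values)
logML_z = np.array([0.25, 0.60])       # the same pair of galaxies at redshift z

eta_B = slope(logM, logML_z) - slope(logM, logML_z0)
print(f"eta_B = {eta_B:+.3f}")         # positive: the slope steepens with z
```

A positive $`\eta _B`$, as in this made-up example, is the signature of an efficiency-driven (age) sequence.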
This behavior makes the evolution of $`M/L`$ with redshift a suitable candidate for inferring the star formation history of early-type cluster galaxies. Unfortunately, this observable still carries a long string of uncertainties which prevent it from providing a clear-cut way of breaking the degeneracy between age and metallicity: mass-to-light ratios require time-consuming spectral observations in order to measure velocity dispersions, which can only be achieved with 10m-class telescopes for clusters at moderate and high redshifts. Furthermore, the measured $`M/L`$ ratios (inferred from observations of velocity dispersions, surface brightnesses and galaxy sizes) rely on a set of assumptions about the structure of the galaxy. Any correlation between galaxy structure and mass or luminosity will add systematic errors which are hard to estimate. However, alternative age-dependent observables such as Balmer spectral indices are also plagued by model-dependent uncertainties. Despite all these caveats, the study of the evolution of $`M/L`$ with lookback time is still one of the best methods to determine the stellar demography in galaxies.
## References
Bower, R. G., Lucey, J. R. & Ellis, R. S. 1992, MNRAS, 254, 601
Ferreras, I. & Silk, J. 2000a, ApJ, March 20, astro-ph/9910385
Ferreras, I. & Silk, J. 2000b, MNRAS, in press
González, J. J. 1993, Ph.D. thesis, University of California
Kuntschner, H. 2000, MNRAS, in press (astro-ph/0001210)
Rocha-Pinto, H. J. & Maciel, W. J. 1996, MNRAS, 279, 447
Trager, S. C., et al. 2000, AJ, in press (astro-ph/0001072)
Van Dokkum, P. G., et al. 1998, ApJ, 504, L17
Worthey, G. 1994, ApJS, 95, 107 |
no-problem/0002/physics0002041.html | ar5iv | text | # Josephson effects in dilute Bose-Einstein condensates
## Abstract
We propose an experiment that would demonstrate the โdcโ and โacโ Josephson effects in two weakly linked Bose-Einstein condensates. We consider a time-dependent barrier, moving adiabatically across the trapping potential. The phase dynamics are governed by a โdriven-pendulumโ equation, as in current-driven superconducting Josephson junctions. At a critical velocity of the barrier (proportional to the critical tunneling current), there is a sharp transition between the โdcโ and โacโ regimes. The signature is a sudden jump of a large fraction of the relative condensate population. Analytical predictions are compared with a full numerical solution of the time dependent Gross-Pitaevskii equation, in an experimentally realistic situation.
The Josephson effects (JE's) are a paradigm of the manifestation of phase coherence in a macroscopic quantum system. Observed early on in superconductors, JE's have been demonstrated in two weakly linked superfluid <sup>3</sup>He-B reservoirs. Weakly interacting Bose-Einstein condensate (BEC) gases provide a further (and different) context for JE's. Indeed, magnetic and optical traps can be tailored and biased (by time-dependent external probes) with high accuracy, allowing the investigation of dynamical regimes that might not be accessible with other superconducting/superfluid systems. The macroscopic coherence of BECs has been demonstrated by interference experiments, and the first evidence of coherent tunneling in an atomic array, related to the "ac" JE, has recently been reported.
A superconducting Josephson junction (SJJ) is usually biased by an external circuit that typically includes a current drive $`I_{ext}`$. The striking signatures of the Josephson effects in an SJJ are contained in the voltage-current characteristic ($`V`$-$`I_{ext}`$), where usually one can distinguish between the superconductive branch or "dc"-branch ($`V=0`$, $`I_{ext}\ne 0`$), and the resistive branch or "ac"-branch ($`V\simeq RI_{ext}`$). External circuits and current sources are absent in two weakly linked Bose condensates, and the Josephson effects have been related, so far, to coherent density oscillations between condensates in two traps or between condensates in two different hyperfine levels. This collective dynamical behavior is described by a non-rigid pendulum equation, predicting a new class of phenomena not observable with SJJ's.
Now the following question arises: can two weakly linked condensates exhibit the analog of the voltage-current characteristic of an SJJ? Although BECs are obviously neutral, the answer is positive. A dc current-biased SJJ can be simulated by considering a tunneling barrier moving with constant velocity across the trap. At a critical velocity of the barrier, a sharp transition between the "dc" and "ac" (boson) Josephson regimes occurs. This transition is associated with a macroscopic jump in the population difference, which can be easily monitored experimentally by destructive or non-destructive techniques.
In the following we will briefly introduce the phenomenological equations of the resistively shunted junction (RSJ) model for the SJJ. We will describe the corresponding experiment for two weakly linked BECs and show that the relevant equations are formally equivalent to the RSJ equations. Then we compare the analytical results with a numerical integration of the Gross-Pitaevskii equation in a realistic 3D setup.
In the RSJ model, the SJJ is described by an equivalent circuit in which the current balance equation is
$$I_c\mathrm{sin}(\theta )+GV+C\dot{V}=I_{ext}$$
(1)
where $`I_c`$ is the upper bound of the Josephson supercurrent $`I`$ (which is represented, in the ideal case, by the sinusoidal current-phase relation $`I=I_c\mathrm{sin}(\theta )`$); $`G`$ is an effective conductance (offered by the quasiparticles and the circuit shunt resistor), and $`C`$ is the junction capacitance. The voltage difference $`V`$ across the junction is related to the relative phase $`\theta `$ by
$$\dot{\theta }=2eV/\hbar .$$
(2)
In the low conductance limit $`G\ll \omega _pC`$, where $`\omega _p=\sqrt{2eI_c/\hbar C}`$ is the Josephson plasma frequency, combining equations (1) and (2) leads to the "driven pendulum" equation
$$\ddot{\theta }=-\omega _p^2\frac{\partial }{\partial \theta }U\left(\theta \right)$$
(3)
where $`U`$ is the tilted โwashboardโ potential:
$$U\left(\theta \right)=1-\mathrm{cos}(\theta )+i\theta $$
(4)
with $`i=I_{ext}/I_c`$. This equation describes the transient behavior before the stationary dissipative behavior is reached (resistive branch). If we start from equilibrium, with $`i=0`$, and adiabatically increase the current, no voltage drop develops until the critical value $`i=1`$ is reached (neglecting secondary quantum effects). At this point $`V`$ continuously develops until a stationary asymptotic dissipative behavior is reached on a time scale of order $`C/G`$. Similar phenomenology may occur in BECs, and we will derive equations formally identical to Equations (3) and (4).
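The dc/ac transition in (3)-(4) is easy to see numerically. A sketch (ours, in units with $`\omega _p=1`$): starting from $`\theta =0`$, $`\dot{\theta }=0`$, the phase stays trapped (zero mean phase-winding rate, the dc branch) for small $`i`$ and runs for large $`i`$; with this sudden start the transition occurs near $`i\simeq 0.725`$ rather than $`i=1`$, the same reduction factor invoked later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, yv, i):
    theta, v = yv
    return [v, -(np.sin(theta) + i)]      # theta_ddot = -w_p^2 dU/dtheta, w_p = 1

for i in (0.5, 0.7, 0.75, 1.0):
    sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], args=(i,),
                    max_step=0.05, rtol=1e-8)
    rate = (sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])
    print(f"i = {i:.2f}: mean d(theta)/dt = {rate:+.3f}")
```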
A weak link between two condensates can be created by focusing a blue-detuned far-off-resonant laser sheet into the center of the magnetic trap. The weak link can be tailored by tuning the width and/or the height of the laser sheet. Raman transitions between two condensates in different hyperfine levels provide a different weak link, in analogy with the "internal Josephson effect" observed in the 1970s with <sup>3</sup>He-A.
Here we consider a double well potential in which the laser sheet slowly moves across the magnetic trap with velocity $`v`$ (but our framework can be easily adapted to investigate the internal Josephson effect). In the limit of very low $`v`$, the two condensates remain in equilibrium, i.e. in their instantaneous ground state, because of the non-zero tunneling current that can be supported by the barrier. In fact, an average net current, proportional to the velocity of the laser sheet, flows through the barrier, sustained by a constant relative phase between the two condensates. This keeps the chemical potential difference between the two subsystems locked to zero, as in the SJJ dc-branch. However, the superfluid component of the current flowing through the barrier is bounded by a critical value $`I_c`$. As a consequence there exists a critical velocity $`v_c`$, above which a non-zero chemical potential difference develops across the junction. This regime is characterized by a running-phase mode, and provides the analog of the ac-branch in SJJโs.
The โdcโ and โacโ BEC regimes are governed by a phase-equation similar to the current-driven pendulum equations (3) and (4). Such equations together with the sinusoidal current-phase relation $`I=I_c\mathrm{sin}(\theta )`$ describe the phase difference and current dynamics. The dimensionless current $`i`$ is related to the barrier velocity by
$$i=v/v_c$$
(5)
with the critical velocity $`v_c`$ given by
$$v_c=\frac{\hbar \omega _p^2}{F}$$
(6)
where $`F`$ is to a good approximation represented by double the average force exerted by the magnetic trap on single atoms in one well.
Equations (3)-(6) can be derived by a time-dependent variational approximation and have also been verified, as we discuss below, by the full numerical integration of the Gross-Pitaevskii equation (GPE) . The GPE describes the collective dynamics of a dilute Bose gas at zero temperature:
$$i\hbar \frac{\partial }{\partial t}\mathrm{\Psi }=\left[H_0\left(t\right)+g|\mathrm{\Psi }|^2\right]\mathrm{\Psi }$$
(7)
where $`H_0\left(t\right)=-\frac{\hbar ^2}{2m}\nabla ^2+V_{ext}(\mathbf{r},t)`$ is the non-interacting Hamiltonian and where $`g=4\pi \hbar ^2a/m`$, with $`a`$ the scattering length and $`m`$ the atomic mass. The order parameter $`\mathrm{\Psi }=\mathrm{\Psi }(\mathbf{r},t)`$ is normalized as $`\int d\mathbf{r}|\mathrm{\Psi }(\mathbf{r},t)|^2=N`$, with $`N`$ the total number of atoms. The external potential is given by the magnetic trap and the laser barrier, $`V_{ext}(\mathbf{r},t)=V_{trap}\left(\mathbf{r}\right)+V_{laser}(z,t)`$. We consider a harmonic, cylindrically symmetric trap $`V_{trap}\left(\mathbf{r}\right)=\frac{1}{2}m\omega _r^2\left(x^2+y^2\right)+\frac{1}{2}m\omega _0^2z^2`$ where $`\omega _r`$ and $`\omega _0`$ are the radial and longitudinal frequencies, respectively. The barrier is provided by a Gaussian shaped laser sheet, focused near the center of the trap, $`V_{laser}\left(z\right)=V_0\mathrm{exp}\left(-(z-l_z)^2/\lambda ^2\right)`$ with the coordinate $`l_z(t)`$ describing the laser motion and $`v=dl_z/dt`$ its velocity.
The equations (3) to (6) can be derived by solving variationally the GPE using the ansatz $`\mathrm{\Psi }(\mathbf{r},t)=c_1(t)\psi _1\left(\mathbf{r}\right)+c_2(t)\psi _2\left(\mathbf{r}\right)`$, where $`c_n=\sqrt{N_n(t)}\mathrm{exp}\left(i\theta _n(t)\right)`$ are complex time-dependent amplitudes of the left $`n=1`$ and right $`n=2`$ condensates (see also ). The trial wave functions $`\psi _{1,2}\left(\mathbf{r}\right)`$ are orthonormal and can be interpreted as approximate ground state solutions of the GPE of the left and right wells. The equations of motion for the relative population $`\eta =(N_2-N_1)/N`$ and phase $`\theta =\theta _2-\theta _1`$ between the two symmetric traps are
$`\hbar \dot{\eta }`$ $`=`$ $`(2E_J/N)\sqrt{1-\eta ^2}\mathrm{sin}\left(\theta \right),`$ (8)
$`\hbar \dot{\theta }`$ $`=`$ $`-Fl_z(t)-\frac{2E_J}{N}\frac{\eta }{\sqrt{1-\eta ^2}}\mathrm{cos}\left(\theta \right)-\frac{NE_c}{2}\eta ,`$ (9)
where $`E_c=2g\int d\mathbf{r}\,\psi _1(\mathbf{r})^4`$ is the variational analog of the capacitive energy in SJJ, while $`E_J=N\int d\mathbf{r}\,\psi _1(\mathbf{r})\left[H_0+gN\psi _1^2(\mathbf{r})\right]\psi _2(\mathbf{r})`$ is the Josephson coupling energy. The current-phase relation $`I=I_c\sqrt{1-\eta ^2}\mathrm{sin}(\theta )`$ is directly related to Eq. (8), where the critical current is given by $`I_c=E_J/\hbar `$. $`Fl_z(t)`$ represents the contribution to the chemical potential difference in the two wells due to the laser displacement $`l_z`$ (after linearizing in $`l_z`$), where $`F=\int d\mathbf{r}\left(\psi _1(\mathbf{r})^2-\psi _2(\mathbf{r})^2\right)\frac{\partial V_{laser}}{\partial l_z}\simeq m\omega _0^2\int d\mathbf{r}\,z\left(\psi _1(\mathbf{r})^2-\psi _2(\mathbf{r})^2\right)`$. The above variational method provides a simple and useful interpolating scheme between the low interacting limit $`N^2E_c\ll E_J`$ and the opposite limit $`N^2E_c\gg E_J`$. In the last case, and with $`\eta \ll 1`$, we recover the driven-pendulum phase equation (3) and the critical velocity relations (5) and (6) with $`\hbar \omega _p=\sqrt{E_JE_c}`$. In particular, it is legitimate to consider the Josephson coupling as a perturbation, with the phase dynamics entirely determined by the difference in the chemical potentials $`\mu _1(N_1,l_z)`$ and $`\mu _2(N_2,l_z)`$ in the two wells. In this case $`E_c`$ corresponds to $`2\left(\partial \mu _1/\partial N_1\right)_{l_z}`$ and $`\hbar ^2\omega _p^2=E_J\left(\partial \mu _1/\partial N_1\right)_{l_z}`$. The critical velocity is proportional to the critical current: $`v_c=\left(\frac{dN_1}{dl_z}\right)^{-1}I_c`$, with
$$\left(\frac{dN_1}{dl_z}\right)^{-1}=\left(\frac{\partial \mu _1}{\partial l_z}\right)_{N_1}^{-1}\left(\frac{\partial \mu _1}{\partial N_1}\right)_{l_z}$$
(10)
and $`\left(\partial \mu _1/\partial l_z\right)_{N_1}`$ being $`F/2`$ in Eq. (6). These derivatives can be computed numerically. In the Thomas-Fermi (TF) limit they reduce to
$$\left(\frac{\partial \mu _1}{\partial N_1}\right)_{l_z}=\frac{g}{V_{TF}}$$
(11)
and
$$\left(\frac{\partial \mu _1}{\partial l_z}\right)_{N_1}=\frac{1}{V_{TF}}\int _{V_{TF}}d\mathbf{r}\frac{\partial V_{laser}}{\partial l_z}$$
(12)
where $`V_{TF}`$ is the volume of the region in which $`\mathrm{\Psi }_1`$ is different from zero (in the TF approximation).
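Equations (8) and (9) themselves are straightforward to integrate for a barrier moving at constant velocity. The sketch below is ours, with parameters of the order of those quoted later in the text: below the critical velocity the population difference tracks its instantaneous equilibrium value (dc branch); above it the phase runs and the population transfer stops on average (ac branch).

```python
import numpy as np
from scipy.integrate import solve_ivp

EcN = 2.46        # E_c N / hbar, ms^-1
EJN = 2.41e-4     # E_J / (N hbar), ms^-1
F = 1.060         # F / hbar, ms^-1 um^-1

def rhs(t, yv, v):
    eta, theta = yv
    s = np.sqrt(1.0 - eta**2)
    deta = 2.0 * EJN * s * np.sin(theta)                       # Eq. (8)
    dtheta = -F * v * t - 2.0 * EJN * (eta / s) * np.cos(theta) \
             - 0.5 * EcN * eta                                 # Eq. (9), l_z = v t
    return [deta, dtheta]

for v in (3e-4, 5e-4):        # um/ms; 0.725 * hbar w_p^2 / F ~ 4.1e-4 um/ms
    sol = solve_ivp(rhs, (0.0, 1000.0), [0.0, 0.0], args=(v,),
                    max_step=1.0, rtol=1e-8)
    print(f"v = {v:.1e} um/ms -> eta(1 s) = {sol.y[0, -1]:+.3f}")
```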
We make the comparison of Eqs. (8) and (9) with a full numerical integration of the GPE in an experimentally realistic geometry, relative to the limit $`N^2E_c\gg E_J`$. In particular, we show that Eq. (6), derived in the limit $`\eta \ll 1`$, still remains a good approximation even for $`\eta \simeq 0.4`$. The details of the numerical calculation are given elsewhere.
We have considered the JILA setup, with $`N=5\times 10^4`$ Rb atoms in a cylindrically symmetric harmonic trap, having the longitudinal frequency $`\omega _0=50`$ s<sup>-1</sup> and the radial frequency $`\omega _r=17.68`$ s<sup>-1</sup>. The value of the scattering length considered is $`a=58.19`$ Å. A Gaussian shaped laser sheet is focused in the center of the trap, cutting it into two parts. We assume that the (longitudinal) $`1/e^2`$ half-width of the laser barrier is $`3.5`$ $`\mu `$m and the barrier height $`V_0/\hbar =650`$ s<sup>-1</sup>.
Although the lifetime of a trapped condensate can be as long as minutes, we have made a quite conservative choice, by considering a time scale on the order of one second. The possibility to perform experiments on a longer time scale will improve the observability of the phenomena we are discussing. With this choice of time scale, which corresponds to only a few plasma oscillations, an adiabatic increase of the velocity is not possible; therefore we proceed as follows. For $`t<0`$ the laser is at rest in the middle of the trap, $`l_z=0`$, and the two condensates are in equilibrium. For $`t>0`$ the laser moves across the trap, with constant velocity, and the relative atomic population is observed at $`t_f=1s`$. With this initial condition, which introduces small plasma oscillations in the relative population, the critical current is expected, in the absence of dissipation, to be slightly reduced by the numerical factor $`0.725`$ (a general property of the driven pendulum equation).
In Fig.1 we show the relative condensate population $`\eta =(N_2-N_1)/N`$, calculated after $`1`$ second, for different values of the laser velocity $`v`$. The crosses are the results obtained with the full numerical integration of the time-dependent GPE (7). The dot-dashed line shows the equilibrium values $`\eta _{eq}`$ of the relative population calculated with the stationary GPE and with the laser at rest in the "final" position $`l_z=vt_f`$. The displacement of $`\eta (t_f)`$ from $`\eta _{eq}`$ is a measure of the chemical potential difference, being $`\mathrm{\Delta }\mu =\mu _2-\mu _1\simeq NE_c(\eta (t_f)-\eta _{eq})/2`$.
For $`v<0.42\mu m/s`$, the atoms tunnel through the barrier in order to keep the chemical potential difference $`\mathrm{\Delta }\mu `$ locked around zero. The dc component of the tunneling current is accounted for by an averaged constant phase difference between the two condensates. This is the close analog of the dc Josephson effect in superconducting Josephson junctions. The small deviations between the dashed line and the crosses are due to the presence of plasma oscillations (induced by our initial condition). At $`v\simeq 0.42\mu m/s`$ there is a sharp transition, connected with the crossover from the dc-branch to the ac-branch in SJJ. For $`v>0.42\mu m/s`$, the phase difference starts running and the population difference, after a transient time, remains on average fixed. A macroscopic chemical potential difference is established across the junction. In this regime ac oscillations in the population difference are observed. The frequency of such oscillations is approximately given by $`\mathrm{\Delta }\mu (t)/\hbar `$ (not visible in the figure).
The solid line of Fig.1 corresponds to the solutions of Eqs. (8) and (9) in which the values of the energy integrals $`E_cN/\hbar =2.46`$ ms<sup>-1</sup> and $`E_J/N\hbar =2.41\times 10^{-4}`$ ms<sup>-1</sup> are chosen in order to give the correct value of $`\omega _p=2.44\times 10^{-2}`$ ms<sup>-1</sup> and $`I_c=12.1`$ ms<sup>-1</sup>. The values $`\omega _p`$, $`I_c`$ are calculated numerically studying the frequency of small oscillations around equilibrium and the current-phase relation, respectively. The force integral is $`F/\hbar =1.060`$ ms<sup>-1</sup> $`\mu `$m<sup>-1</sup>. The parameters $`\omega _p`$, $`I_c`$ and $`F`$ are calculated with the laser at rest ($`v=0`$) in $`l_z=0`$. Using these values in Eq. (6) and taking into account the reducing factor $`0.725`$ we obtain the value $`0.407`$ $`\mu `$m s<sup>-1</sup> for the critical velocity, in agreement with the value observed in the simulation.
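Plugging in these numbers is a one-line check of Eq. (6) with the 0.725 reduction factor:

```python
wp = 2.44e-2            # ms^-1
F_over_hbar = 1.060     # ms^-1 um^-1
v_c = 0.725 * wp**2 / F_over_hbar      # um/ms
print(f"v_c = {v_c * 1e3:.3f} um/s")   # ~0.407 um/s, as quoted above
```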
Small deviations between the variational solutions (full line in Fig.1) and the numerical results (crosses in Fig.1), above the critical velocity, are due to "level-crossing" effects. Numerical results show that when the condensate ground state of the "upper" well is aligned with the excited collective dipole state in the "lower" well, a finite number of atoms goes from the "upper" well to the "lower" well. Close to this tunneling resonance it is possible to control, by tuning the barrier velocity to a fraction of $`v_c`$, the dc flux of atoms from the ground state condensate in the "upper" well to the longitudinal intrawell collective dipole mode of the condensate in the "lower" well. This effect is directly observable in the macroscopic longitudinal oscillations of the two condensates (at frequency $`\omega _0`$).
Concerning a possible realization of the phenomenon described in this work, we note that for small barrier velocities $`v`$, the motion of the laser sheet with respect to the magnetic trap with velocity $`v`$ or, vice versa, the motion of the magnetic trap with velocity $`-v`$, are equivalent, there being negligible corrections due to different initial accelerations.
Thus far we have discussed the zero temperature limit. At finite temperature dissipation can arise due to incoherent exchange of thermal atoms between the two wells. This can be described phenomenologically by including a damping term $`E_cG\dot{\theta }/\omega _p^2`$ in Eq. (3), where $`G`$ is the conductance. Dissipation will be negligible as long as the characteristic time scale $`(E_cG/\hbar )^{-1}\simeq (20/G)`$ s is bigger than the time scale of the experiment ($`1`$ s).
To conclude we note that while it could be difficult to measure directly the plasma oscillations, since their amplitude is limited by $`\mathrm{\Delta }\eta <\frac{4}{N}\sqrt{\frac{E_J}{E_c}}`$, the macroscopic change in the population difference may be easily detected with standard techniques. Moreover the framework that we have discussed can be easily adapted to investigate the internal Josephson effect.
Our phenomenological equations are similar to the driven pendulum equation governing the Josephson effects in SJJs. As a consequence, within this framework we can study the โsecondary quantum phenomenaโ, such as the Macroscopic Quantum Tunneling between different local minima of the washboard potential (see for instance ).
It is a pleasure to thank L. P. Pitaevskii, S. Raghavan and S. R. Shenoy for many fruitful discussions. |
no-problem/0002/astro-ph0002078.html | ar5iv | text | # NICMOS Narrow-band Infrared Photometry of TW Hya Association Stars
## 1 Introduction
For more than a decade, the young star TW Hya has been an enigma since it lies in a region of sky apparently devoid of the raw materials to form stars, nearly 13° from the nearest dark cloud, yet it is unambiguously a classical T Tauri star (Rucinski & Krautter (1983)) surrounded by a great deal of cold dust (Weintraub, Sandell & Duncan (1989)) and gas (Zuckerman et al. (1995); Kastner et al. (1997)). Recently, the TW Hya mystery was solved: TW Hya, along with other T Tauri stars found in an area of $`{\sim}100`$ square degrees of the southern sky (de la Reza et al. (1989); Gregorio-Hetem et al. (1992)), compose a uniquely close association of young stars known as the TW Hya Association (Kastner et al. (1997)).
At a mean distance of only $`{\sim}55`$ pc, the TW Hya Association (hereafter TWA) is almost three times closer than the next nearest known region of recent star formation. Given the likely age ($`{\sim}10`$ MY) of the TWA, these stars could harbor very young planetary systems with fully formed giant planets or low mass, brown dwarf companions, and may still be surrounded by circumstellar disks. In fact, there is substantial evidence for circumstellar gas and dust around several of these stars (Weintraub, Sandell & Duncan (1989); Zuckerman & Becklin (1993); Zuckerman et al. (1995); Kastner et al. (1997)). The relative proximity and the absence of significant interstellar or intra-molecular cloud extinction in the direction of the TWA make the prospects for detecting substellar companions around these nearest T Tauri stars much better than the prospects for similar searches for young low mass companions around T Tauri stars in Taurus-Auriga, Chamaeleon, Lupus or Ophiuchus, the next closest regions of star formation. In a recent study of the TWA, Webb et al. (1999) identified a total of at least 17 sources as members of the TWA. In addition, Lowrance et al. (1999) and Webb et al. have reported the discovery of a likely low mass brown dwarf companion (M $`{\sim}0.02`$ $`M_{\odot }`$) to TWA 5A (= CD$`-33^{\circ }7795`$) in a combination of ground-based and Hubble Space Telescope (HST) observations. TWA 5B is found almost 2$`^{\prime \prime }`$ from TWA 5A; thus, despite its relative physical proximity ($`{\sim}100`$ AU) to the primary, TWA 5B is amenable to spectroscopic and astrometric studies, uncontaminated by light from TWA 5A.
In this paper, we report results from imaging the fields around five stars in the TWA, including the TWA 5 system, using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) and the HST. The goal of this program was to search for companions around these stars. Our choice of three narrowband filters centered at 1.64, 1.90 and 2.15 $`\mu `$m was designed to enable us to identify cool and low surface gravity objects, including substellar mass companions, through their likely strong signatures of H<sub>2</sub>O absorption at 1.9 $`\mu `$m.
## 2 Observations
We obtained images of five star systems (Table 1) using camera 1 (NIC1) and camera 2 (NIC2) of NICMOS between 1998 May 30 and July 12 (U.T.). Observations of each of the five stars were made identically. Using NIC1 and filter F164N, we imaged each target in a four position, spiral dither pattern, with an integration time per position of 33.894 s. Three images were obtained at each position for a total integration time of 406.73 s. We carried out identical observations using NIC1 and filter F190N, with an integration time per position of 43.864 s and a total integration time of 526.37 s. Switching to NIC2 and filter F215N, we again obtained three sets of four-position dithered image suites; however, for the F215N observations we changed the starting position for the dithered image suites in order to obtain a better median filtered image for subtraction of the thermal background. The integration time per position was 15.948 s for the NIC2 images, for a total integration time of 191.38 s.
## 3 Results
### 3.1 Imaging
We find no sources in any of our images other than the previously known five primaries and four secondaries, to limiting magnitudes of 18.3, 18.4 and 17.5 in the F164N, F190N and F215N images, respectively, at distances beyond $`{\sim}`$1$`^{\prime \prime }.`$3 at 1.64 and 1.90 $`\mu `$m and 2$`^{\prime \prime }.`$3 at 2.15 $`\mu `$m. The images of TWA 5 (Fig. 1) reveal how easy it is to detect and image young, intermediate mass, brown dwarf companions around stars in the TWA, even without a coronagraph. In addition, all nine imaged objects appear as point sources (with FWHM of 0$`^{\prime \prime }.`$14, 0$`^{\prime \prime }.`$16, and 0$`^{\prime \prime }.`$18 in the F164N, F190N and F215N images, respectively), with no evidence (after deconvolutions performed with point spread functions \[PSFs\] generated using the software package Tiny Tim<sup>1</sup><sup>1</sup>1http://scivax.stsci.edu/~krist/tinytim.html, direct subtractions of PSFs, and examinations of azimuthally averaged radial intensity profiles \[Fig. 2\]) of extended emission around any of them. Thus, although some of these stars appear to be surrounded by circumstellar material (e.g., TWA 1 is surrounded by a circumstellar disk of radius $`{\sim}3^{\prime \prime }`$ that is viewed nearly face-on; Weinberger et al. (1999); Krist et al. (1999)), we conclude that these direct, narrow band images are insufficiently sensitive to image circumstellar disks around these stars.
### 3.2 Photometry
We report our photometry for these observations in Table 1. The factors used to convert from NICMOS count rates to absolute fluxes and magnitudes<sup>2</sup><sup>2</sup>2http://www.stsci.edu/ftp/instrument\_news/NICMOS/NICMOS\_phot/keywords.html, version 1998, December 1 were 5.376665 $`\times `$ 10<sup>-5</sup> Jy sec ADU<sup>-1</sup> for the F164N filter, 4.866353 $`\times `$ 10<sup>-5</sup> Jy sec ADU<sup>-1</sup> for the F190N filter, and 3.974405 $`\times `$ 10<sup>-5</sup> Jy sec ADU<sup>-1</sup> for the F215N filter with zero point flux densities of 1033 Jy, 862 Jy, and 690 Jy, respectively. Photometry was obtained by measuring the total counts within a 0$`^{\prime \prime }.`$5 radius aperture and then applying a correction factor of 1.15 to compensate for the flux that falls outside of this radius<sup>3</sup><sup>3</sup>3 http://www.stsci.edu/ftp/instrument\_news/NICMOS/nicmos\_doc\_phot.html.
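The conversion just described amounts to a few lines of arithmetic. A sketch (ours; the count rate is an invented example, while the conversion factors, zero points and aperture correction are the ones quoted above):

```python
import math

PHOTF = {'F164N': 5.376665e-5, 'F190N': 4.866353e-5, 'F215N': 3.974405e-5}  # Jy s/ADU
ZP = {'F164N': 1033.0, 'F190N': 862.0, 'F215N': 690.0}                      # Jy

def nicmos_mag(count_rate_adu_per_s, filt, aperture_corr=1.15):
    """Magnitude from the count rate measured in a 0.5 arcsec radius aperture."""
    flux_jy = count_rate_adu_per_s * aperture_corr * PHOTF[filt]
    return -2.5 * math.log10(flux_jy / ZP[filt])

print(f"{nicmos_mag(250.0, 'F164N'):.2f} mag")   # e.g. a 250 ADU/s source
```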
Figure 3 shows how the photometry of TWA stars through the $`J`$, $`H`$, and $`K`$ broadband and the F164N, F190N and F215N narrowband filters relates to the near infrared spectral characteristics of late-type stars. Stars with lower effective temperatures have increasingly strong water absorption bands centered at $`1.4\mu `$m and $`1.9\mu `$m which are very effectively probed by this combination of broad and narrowband filters.
We find no systematic differences between the $`K`$ and F215N photometry (see Fig. 3, right panel), despite the slight difference in central wavelengths and large difference in bandwidth. On the other hand the stars are systematically brighter at F164N than at $`H`$-band, the most extreme case being TWA 5B. Finally, TWA 5B is the only star with an absolute flux that is clearly lower at 1.9 than at 2.15 $`\mu `$m.
The left hand panel of Fig. 3 clearly shows how water absorption in the 1.35-1.55 and 1.7-2.1 $`\mu `$m regions will strongly affect broad band $`H`$ measurements but will have no effect on observations with the F164N filter (see also the library of near-infrared spectra published by Lançon & Rocca-Volmerange 1992). Thus, the $`H`$-band and F164N observations reveal the presence of different amounts of water vapor absorption in the spectra of most of the stars in our sample.
### 3.3 Astrometry
For the four binary systems in our sample, we measured the intensity centroids for each binary component and transformed the Cartesian positions on the array into offsets in Right Ascension and Declination of the secondaries from the primaries (Table 2). Except for TWA 8, for which the binary separation is such that the companion only appeared in the NIC2 images, the results presented in Table 2 are those obtained using only the NIC1 images, since the spatial resolution is highest when using the NIC1 array. The NIC1-based results in Table 2 are the statistical averages and standard deviations based on measurements of the F164N and F190N images. The pixel to RA and Dec conversions were done using the plate scale measurement ephemeris generated by the NICMOS instrument team<sup>4</sup><sup>4</sup>4 http://www.stsci.edu/ftp/instrument\_news/NICMOS/nicmos\_doc\_platescale.html. Comparison of the results for the F164N and F190N images indicates that, in most cases, we can determine image separations to an accuracy of 0.02 pixels ($`<`$ 1 milli-arcsec).
Because our astrometric results are obtained from unocculted HST images and with the highest resolution camera in NICMOS, these results are much more precise than offsets previously reported for these binaries. They are, however, consistent with previous results (Table 2). In the case of TWA 5B, Lowrance, Weinberger & Schneider (1999) recently independently determined that the offsets reported in Lowrance et al. (1999) and Webb et al. (1999) have a sign error in RA; the corrected values are reported in Table 2.
## 4 TWA 5B
### 4.1 The age of the TWA
We have constructed an H-R diagram for the TWA (Fig. 4) using the pre-main sequence tracks of Baraffe et al. (1998). This H-R diagram is quite similar to that presented by Webb et al. (1999), which is based on the pre-main sequence tracks of D'Antona & Mazzitelli (1997); however, Fig. 4 appears to constrain the cluster age more tightly than does previous work on the TWA, presumably because of improved physics included in the Baraffe et al. tracks (see Baraffe et al. 1997 for a discussion). Specifically, virtually all of the stars, including TWA 5B, fall between the 3 and 10 MY isochrones. In comparison, the H-R diagrams of Webb et al. and Lowrance et al. (1999) indicate that the TWA stars have ages in the range 1-100 MY while Kastner et al. (1997) suggested that the likely age of the TWA stars is 10-30 MY, based on lithium studies (upper limit) and X-ray luminosities (lower limit). TWA 6 and TWA 9A, which lie together almost on the 30 MY isochrone, and TWA 9B, which falls near the 100 MY isochrone, are mild outliers in our and the Webb et al. HR diagrams and appear to be older than the other TWA stars (however, see Webb et al. for other possible explanations).
What other information do we have to constrain the ages of the TWA stars? Soderblom et al. (1998) used the lithium abundance to place an age range of 5–20 MY and a most probable age of 10 $`\pm `$ 3 MY on TWA 4 (HD 98800; EW(Li $`\lambda `$6708) = 0.36 Å); Stauffer, Hartmann, & Barrado y Navascues (1995) used the strength of the Li line to assign an upper limit of 9–11 MY to TWA 11B (HR 4796B) while Jayawardhana et al. (1998) assigned an isochronal age of 8 $`\pm `$ 3 MY to this star; and Webb et al. (1999) measured similar Li EW strengths for 14 of the 17 stars identified as members of the TWA and, on this basis, suggested that they are all less than $`\sim `$10 MY. The excellent agreement between the ages estimated from the Li EWs and those obtained from photometry and pre-main sequence evolutionary tracks suggests that the age of the TWA is well constrained to be in the range 5–15 MY.
### 4.2 Mass and evolutionary status of TWA 5A and 5B
TWA 5A is an M1.5 star (Webb et al. 1999) with $`T_{\mathrm{eff}}=3700\pm 150`$K (Leggett et al. 1996). The distance to the TWA 5 system is presently unknown but can be estimated as $`55\pm 9`$pc from the measured parallaxes of four members of the association (Webb et al. 1999). The range of distances is consistent with the approximate angular dimension of the association. With $`d=55\pm 9`$pc and $`K=6.8\pm 0.1`$, we find $`M_K=3.10\pm 0.41`$. By comparing these values of $`T_{\mathrm{eff}}`$ and $`M_K`$ with the evolution sequences of Baraffe et al. (1998), we find $`M=0.75\pm 0.15M_{\odot }`$ and an age of 2.5 to 6 MY for TWA 5A, assuming it is a single, pre-main sequence star (Fig. 4). On the other hand, Webb et al. report that TWA 5A is suspected to be a spectroscopic binary. If we assume that TWA 5A is binary with equal mass components, the mass of each component decreases to $`0.7\pm 0.15M_{\odot }`$ and the age range becomes 6 to 18 MY.
Since the Baraffe et al. (1998) sequence does not extend to substellar masses, we analyze the photometric measurements of TWA 5B with evolutionary models computed by Saumon & Burrows (unpublished). These models use the same interior physics as Saumon et al. (1996) and Burrows et al. (1997) with the distinction that the surface boundary condition is provided by the "NextGen" sequence of atmosphere models computed by Allard and Hauschildt for cool stars (Allard et al. 1996, Hauschildt, Allard & Baron 1999). The atmospheric structures provide a surface boundary condition for the interior models by giving a relation between the interior entropy (where the convective zone becomes essentially adiabatic at depth) and the surface parameters $`S(T_{\mathrm{eff}},g)`$. This relation plays a central role in controlling the evolution of fully convective stars. Colors are computed from the synthetic spectra, and are therefore fully consistent with the evolution calculation. This evolution sequence was calculated for objects with solar compositions and masses between 0.01 and 0.3$`M_{\odot }`$, and is very similar to that of Baraffe et al. (1998) since it uses the same input physics (equation of state, atmosphere models, nuclear reaction screening factors, etc.). A limitation of the "NextGen" atmospheres is that they do not include dust opacity, which becomes significant for $`T_{\mathrm{eff}}<2600`$K.
Figure 5 shows the evolution of the absolute magnitudes at $`I`$, $`J`$, $`H`$ and $`K`$ bands, from 1 to 100 MY, based on the models of Saumon & Burrows (unpublished). Each curve shows the evolution for a fixed mass. The two dashed lines highlight the 0.02 and 0.03$`M_{\odot }`$ models. The boxes show the photometric measurements for TWA 5B (Webb et al. 1999, Lowrance et al. 1999), with the height of the box representing the $`\pm 1\sigma `$ photometric error and the width showing the 5–15 MY estimated age of the association. The absolute magnitudes of TWA 5B assume a distance of 55 pc and the $`\pm 9`$pc uncertainty is shown by the error bar in the upper right corner. All four bandpasses indicate that the mass of TWA 5B is between 0.02 and 0.03$`M_{\odot }`$ with an upper limit of $`0.06M_{\odot }`$ if the TWA 5 system lies on the far side of the association and near the upper limit of our age estimate. Given this mass range and the estimated age of TWA 5B, the models indicate that its surface gravity is $`3.8<\mathrm{log}g(\mathrm{cm}/\mathrm{s}^2)<4.0`$.
Stellar and substellar objects with $`M\gtrsim 0.012M_{\odot }`$ undergo a phase of nearly constant luminosity which corresponds to the fusion of their primordial deuterium content (D'Antona & Mazzitelli 1985, Saumon et al. 1996). This phase lasts for 2–20 MY and contraction, with a consequent steady decrease in luminosity, resumes once the deuterium is exhausted. Figure 5 shows that TWA 5B is almost certainly in the deuterium burning phase of its evolution.
### 4.3 Colors of TWA 5B
The $`IJK`$ colors of TWA 5B are consistent with its dM8.5–dM9 spectral classification based on 0.65–0.75 $`\mu `$m spectra (Webb et al. 1999; Leggett, Allard & Hauschildt 1998), and thus a temperature of $`T_{\mathrm{eff}}=2600\pm 150`$K (Luhman, Liebert & Rieke 1997; Leggett et al. 1996). In a $`J-H`$ vs. $`H-K`$ diagram, TWA 5B falls well outside of the observed sequence of very-low mass stars and brown dwarf candidates in the field (Leggett, Allard & Hauschildt 1998), while all other members of the association fall along the observed sequence of field stars. This indicates that the $`H`$ magnitude for TWA 5B may be erroneous (by $`\sim `$ 1$`\sigma `$) or that its relatively low surface gravity results in a redder $`H-K`$ color.
The narrowband infrared colors are shown in Fig. 6 along with the synthetic colors from the "NextGen" spectra. Each curve shows the colors for $`T_{\mathrm{eff}}=2600`$, 2800, 3000 and 3200 K (from left to right) for a fixed gravity. The colors of TWA 5B are shown by the triangle with error bars. For the estimated $`T_{\mathrm{eff}}=2600\pm 150`$K and $`\mathrm{log}g=3.9\pm 0.1`$, there is a reasonable agreement for the F164N$`-`$F215N color but the models are $`\sim 0.4`$ magnitude too blue in F164N$`-`$F190N. Consequently, TWA 5B is brighter at 1.9 $`\mu `$m than predicted by the models.
The F190N bandpass falls in the middle of a strong H<sub>2</sub>O absorption band (Fig. 3) whose strength is probably overestimated by the "NextGen" models. Allard et al. (1997) compare a sequence of near-infrared spectra of late M dwarfs with their synthetic spectra. In all cases, the models overestimate the depth of the H<sub>2</sub>O band, an effect which increases for later spectral types. While an inadequate H<sub>2</sub>O opacity may be partly responsible for this effect, Tsuji, Ohnaka, & Aoki (1996) have shown that the condensation of dust in atmospheres of low $`T_{\mathrm{eff}}`$ results in a source of continuum opacity which decreases the depth of the water absorption bands. New atmosphere models including dust opacity (Tsuji, Ohnaka, & Aoki 1996; Leggett, Allard & Hauschildt 1998) indicate that its effects on the spectrum (and on broadband colors) become discernible for $`T_{\mathrm{eff}}<2800`$K but remain moderate ($`\lesssim 0.1`$ mag) at the effective temperature of TWA 5B ($`\sim `$2600 K). While current models including dust opacity may not fully account for the relatively high F190N flux of TWA 5B, the F164N$`-`$F190N color of TWA 5B is a strong indication of the presence of dust in its atmosphere.
### 4.4 Astrometry of TWA 5B
At a distance of 55 $`\pm `$ 9 pc, TWA 5B lies at a projected distance from TWA 5A of 108 astronomical units. TWA 5A has a spectral type of M1.5 and a likely mass for the central binary of $`\sim `$1.4 $`M_{\odot }`$ while TWA 5B has an estimated spectral type of M8.5 (Webb et al. 1999) and a likely mass of $`\sim `$25 M<sub>jup</sub>. Given this information about the TWA 5 system, the orbital period P of TWA 5B should be P $`\sim `$ 1000 years. Thus, the angular motion of TWA 5B, assuming a circular orbit of radius 1$`^{\prime \prime }.`$96 viewed nearly pole-on, would be 0$`^{\prime \prime }.`$013 yr<sup>-1</sup> (or 0$`^{\prime \prime }.`$010 yr<sup>-1</sup> if TWA 5A is a single star with a mass of 0.7 $`M_{\odot }`$ and P = 1300 years).
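These numbers follow from Kepler's third law; a quick back-of-the-envelope check of ours, assuming a face-on circular orbit and the masses quoted above:

```python
import math

a_au = 108.0             # projected separation in AU
m_total = 1.4 + 0.025    # central binary + TWA 5B, in solar masses

# Kepler's third law with a in AU and M in solar masses gives P in years
period_yr = math.sqrt(a_au**3 / m_total)

sep_arcsec = 1.96        # circular orbit of radius 1".96 seen pole-on
motion = 2.0 * math.pi * sep_arcsec / period_yr

print(f"P ~ {period_yr:.0f} yr")           # ~940 yr, i.e. of order 1000 yr
print(f"motion ~ {motion:.4f} arcsec/yr")  # ~0.013 arcsec/yr
```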
The NICMOS observations of TWA 5 were obtained on 25 April (Lowrance et al. 1999) and 12 July 1998 (this paper), a difference of $`\sim `$0.21 years. In only one-fifth of a year, the orbital motion of TWA 5B would have changed its position relative to TWA 5A by only $`\sim `$0$`^{\prime \prime }.`$0027, too small for the positional difference to be measured at these two epochs. Thus, the differences between the positions we measured and the corrected positions from earlier epoch observations reported by Lowrance, Weinberger, & Schneider (1999) are strictly due to the relative accuracies of the different measurements.
Although we have demonstrated that the positional change for TWA 5B is measurement error, not orbital motion, we also have shown that the position reported in this paper is a very accurate "starting" position for TWA 5B. In addition, our results show that it is possible to measure the relative separation of these two objects to an accuracy of only a few thousandths of an arcsec. Thus, the orbital motion of TWA 5B should be measurable to a fairly high degree of accuracy with ground-based observing facilities equipped with adaptive optics or with the refurbished NICMOS camera.
## 5 Summary
To the sensitivity limits of these data, our images reveal no detectable circumstellar disks or infrared reflection nebulae, and no low mass stellar or substellar companions around stars in the five studied TWA systems other than the previously discovered TWA 5B. As for TWA 5B itself, our results suggest that this object has a mass in the range of 0.02–0.03 $`M_{\odot }`$, in good agreement with the work of Lowrance et al. (1999). Finally, while our single epoch observations cannot demonstrate or measure the orbital motion of TWA 5B, they are more than accurate enough to permit the measurement of this motion in combination with future-epoch HST or ground-based adaptive optics observations, with a baseline of only about a year.
We thank the referee for thoughtful suggestions which improved the clarity of the manuscript, A. Burrows for computing the evolutionary sequences used in this work, and F. Allard, P.H. Hauschildt, I. Baraffe and G. Chabrier for making their synthetic spectra and models available. This research was supported by NSF grant AST93-18970 and NASA grants NAG5-4988 and GO07861.01-96A and is based on observations obtained with the NASA/ESA Hubble Space Telescope at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# Magnetic properties of frustrated spin ladder
## I Introduction
The frustration of the antiferromagnetic exchange interaction brings about many interesting phenomena in the quantum spin systems, because it generally enhances the quantum fluctuation. It would be valuable to consider the effect of the frustration on the spin ladder, like the materials SrCu<sub>2</sub>O<sub>3</sub> (Ref.), Cu<sub>2</sub>(C<sub>2</sub>H<sub>12</sub>N<sub>2</sub>)<sub>2</sub>Cl<sub>4</sub> (Refs.) and La<sub>6</sub>Ca<sub>8</sub>Cu<sub>24</sub>O<sub>41</sub> (Ref.). They are strongly quantized and have the spin gap. When the next-nearest-neighbor (NNN) exchange interaction appears, the frustration takes place in the system. In the classical limit it is easily shown that the system has two different ordered phases depending on the strength of the NNN exchange and the phase boundary does not change even under external magnetic field. In the quantum system, however, some modifications should exist in the ground state phase diagram, because the spin ladder has no long range order even at $`T=0`$. In this paper, we investigate the frustrated spin ladder by the exact diagonalization of the finite clusters to determine the magnetic phase diagram, even under external field. In addition we consider the possibility of the magnetization plateau , which is predicted by a strong coupling approach.
## II Model and numerical method
The $`S=1/2`$ spin ladder with NNN coupling is described by the Hamiltonian
$$\mathcal{H}=J_1\sum _i^L(\mathbf{S}_{1,i}\cdot \mathbf{S}_{1,i+1}+\mathbf{S}_{2,i}\cdot \mathbf{S}_{2,i+1})+J_{\perp }\sum _i^L\mathbf{S}_{1,i}\cdot \mathbf{S}_{2,i}+J_2\sum _i^L(\mathbf{S}_{1,i}\cdot \mathbf{S}_{2,i+1}+\mathbf{S}_{2,i}\cdot \mathbf{S}_{1,i+1}),$$

(1)
where $`J_1`$, $`J_2`$ and $`J_{\perp }`$ are the coupling constants of the leg, NNN (diagonal) and rung exchange interactions, respectively. We put $`J_{\perp }=1`$ in the following. Using the Lanczos algorithm we numerically solved for the ground state of finite clusters. We also calculated the lowest energy of $`\mathcal{H}`$ in the sector $`\sum _i^L(S_{1,i}^z+S_{2,i}^z)=M`$, which we denote $`E(M)`$. Using $`E(M)`$, we investigate the magnetic state with $`m\equiv M/L`$ under the external field described by $`\mathcal{H}_Z=-H\sum _i^L(S_{1,i}^z+S_{2,i}^z)`$.
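As an illustration of this step, the sparse Hamiltonian in a fixed-$`M`$ sector can be built and diagonalized with a Lanczos-type solver in a few lines. The sketch below is our own minimal implementation (it assumes periodic legs and uses SciPy's Lanczos routine), not the production code used for the results:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def lowest_energy(L, J1, J2, Jperp=1.0, M=0):
    """Lowest energy E(M) of the frustrated S=1/2 ladder (2L sites,
    periodic legs) in the sector sum_i (S^z_{1,i}+S^z_{2,i}) = M."""
    n = 2 * L                                   # site (i, leg) -> bit 2*i+leg
    bonds = [(2 * i, 2 * i + 1, Jperp) for i in range(L)]            # rungs
    for i in range(L):
        j = (i + 1) % L
        bonds += [(2 * i, 2 * j, J1), (2 * i + 1, 2 * j + 1, J1),    # legs
                  (2 * i, 2 * j + 1, J2), (2 * i + 1, 2 * j, J2)]    # diagonals
    states = [s for s in range(1 << n) if bin(s).count("1") == L + M]
    index = {s: k for k, s in enumerate(states)}
    H = lil_matrix((len(states), len(states)))
    for k, s in enumerate(states):
        for a, b, J in bonds:
            sa, sb = (s >> a) & 1, (s >> b) & 1
            H[k, k] += J * (0.25 if sa == sb else -0.25)   # S^z S^z part
            if sa != sb:                                   # (S^+S^- + S^-S^+)/2
                H[index[s ^ (1 << a) ^ (1 << b)], k] += 0.5 * J
    return eigsh(H.tocsr(), k=1, which="SA")[0][0]

print(lowest_energy(6, J1=0.4, J2=0.1))   # ground state energy, M = 0 sector
```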
## III Two magnetic phases
Consider the nonmagnetic ground state at first. In the classical limit the system has two different ordered phases divided by the first-order phase boundary $`J_2=J_{\perp }/2(=1/2)`$ shown as a dashed line in Fig. 1. $`J_1=1/2`$ is also a boundary because the phase diagram should be symmetric under the exchange of $`J_1`$ and $`J_2`$ (the reflection with respect to the dot-dashed line in Fig. 1).
In the quantum $`S=1/2`$ system we should distinguish the two phases based on the dimer picture; the dimers form along the rung and the diagonal, respectively. The former is realized for $`J_2\lesssim J_{\perp }/2`$, while the latter for $`J_2\gtrsim J_{\perp }/2`$. In the latter phase each pair of spins coupled by the rung is expected to behave like an effective $`S=1`$ (triplet) object. Thus we call the two phases "rung-dimer" and "rung-triplet", respectively. The phase boundary is easily detected as a level crossing point in the ground state even in small finite clusters. Since the boundary is almost independent of $`L`$, we show only the result of $`L=12`$ as circles in Fig. 1. Our study of the spin correlation function along the rung also supported the above argument and suggested that the boundary is first-order. The results are completely consistent with the recent analysis by the density matrix renormalization group. (It also indicated a crossover of the phase boundary from first-order to second-order for $`J_2<0.287J_1`$, but we don't consider such a parameter region in this paper.)
Even in the magnetic state under external field the two phases can still be identified, in the classical system, by the different canted Néel orders shown in Figs. 2(a) and (b), respectively. The phase boundary is the same as for the nonmagnetic ground state. The quantum system is gapless for $`0<m<1`$ and it might be difficult to distinguish the two phases by the dimer picture. In this case the classical picture is useful because the gapless phase is characterized by the power-law decay of the dominant spin correlation function corresponding to the classical order. Thus the quantum system should also have two phases like the classical limit. The same analysis as for the nonmagnetic state indicated a first-order boundary for finite $`m`$. We show the boundaries for $`m`$=1/6, 1/3, 1/2, 2/3 and 5/6 ($`L`$=12) in Fig. 1. They exhibit a small $`m`$ dependence, but it is not large enough for a field-induced transition between the two phases to be expected in any realistic situation. As the magnetization increases, the boundary tends to approach the classical limit for most magnetizations. For $`m=1/2`$, however, the boundary exhibits a quite different behavior and it is close to the nonmagnetic one. It implies that the quantum fluctuation is enhanced by the frustration particularly at $`m=1/2`$. Thus we consider the possibility of another spin gap induced by external field, that is observed as a plateau in the magnetization curve at $`m=1/2`$.
## IV Magnetization plateau
We consider the magnetization plateau at $`m=1/2`$. The plateau length $`\mathrm{\Delta }\equiv E(M+1)+E(M-1)-2E(M)`$ is one of the useful order parameters to investigate the boundary between the gapless and plateau phases. Since $`\mathrm{\Delta }`$ is the low-lying energy gap, it should obey the relation $`\mathrm{\Delta }\sim 1/L`$ in the gapless phase. The scaled plateau $`L\mathrm{\Delta }`$ for several $`L`$ is plotted versus $`J_2`$ with $`J_1`$ fixed to 0.4 in Fig. 3 (a). It suggests that a gapless-gapful transition occurs at $`J_2\sim 0.2`$. To clarify the nature of the transition, we investigate the central charge $`c`$ of the conformal field theory (CFT) and the critical exponent $`\eta `$. $`\eta `$ is defined by the asymptotic behavior of the spin correlation function $`\langle S_0^+S_r^{-}\rangle \sim (-1)^rr^{-\eta }`$. CFT enables us to estimate $`c`$ and $`\eta `$ from the low-lying energy spectra of finite clusters, using the forms $`E(M)/L\simeq \epsilon (m)-\pi cv_s/6L^2`$ and $`\mathrm{\Delta }\simeq \pi v_s\eta /L`$ $`(L\to \infty )`$, where $`v_s`$ is the sound velocity, given by the gradient of the dispersion curve at the origin. After extrapolation to the infinite length limit, we show the results of $`c`$ and $`\eta `$ for $`J_1=0.4`$ in Fig. 3(b). They confirm that the phase boundary is of the Kosterlitz-Thouless (KT) type, with $`c=1`$ in the gapless phase and $`\eta =1`$ at the critical point. Thus we determine the phase boundary as the points with $`\eta =1`$ in the $`J_1`$-$`J_2`$ plane. The resulting KT line is shown as solid symbols in Fig. 4 together with the first-order boundary indicated as open symbols. Fig. 4 is a complete phase diagram at $`m=1/2`$. The plateau phase is surrounded by the KT line and the first-order line. The intersection of the two lines is expected to be a tri-critical point. The present analysis suggests that the plateau appears only in the rung-dimer phase. The rung-triplet phase reasonably has no plateau, because it is equivalent to the uniform $`S=1`$ chain.
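With an $`E(M)`$ routine such as the sketch above, the scaled plateau $`L\mathrm{\Delta }`$ at $`m=1/2`$ (i.e. $`M=L/2`$, for even $`L`$) is obtained directly; a hypothetical driver:

```python
def scaled_plateau(L, J1, J2):
    # L*Delta with Delta = E(M+1) + E(M-1) - 2E(M) at M = L/2
    M = L // 2
    E = {d: lowest_energy(L, J1, J2, M=M + d) for d in (-1, 0, 1)}
    return L * (E[1] + E[-1] - 2 * E[0])

for L in (4, 6, 8):
    print(L, scaled_plateau(L, J1=0.4, J2=0.3))
```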
A necessary condition for the presence of the plateau in general 1D systems was rigorously given: $`Q(S-m)`$ must be an integer. Here $`Q`$ is the periodicity of the ground state and $`S`$ is the total spin of the unit cell. In the present case $`Q=2`$ must hold in the plateau phase at $`m=1/2`$. It suggests that the frustration stabilizes a structure in which singlet and triplet rung bonds alternate, as in the case of the zigzag ladder.
Finally we present the magnetization curves for ($`J_1`$,$`J_2`$)= (0.5,0), (0.5,0.3) and (0.5,0.4) in Fig. 5. They were obtained by the size scaling in Ref. applied to the calculated energy spectra of finite systems up to $`L=16`$. The plateau clearly appears at $`m=1/2`$ in the latter two cases.
## V Summary
The antiferromagnetic spin ladder with NNN coupling is investigated by the exact diagonalization of finite clusters. It indicated the existence of the two magnetic phases; the rung-dimer and rung-triplet phases, not only for $`m=0`$ but also in the magnetic state. It is also found that the magnetization plateau possibly appears at $`m=1/2`$ only in the rung-dimer phase. |
# On the accuracy and running time of GSAT
## 1 Introduction
The problem of deciding satisfiability of a boolean formula is extensively studied in computer science. It appears prominently, as a prototypical NP-complete problem, in the investigations of computational complexity classes. It is studied by the automated theorem proving community. It is also of substantial interest to the AI community due to its applications in several areas including knowledge representation, diagnosis and planning.
Deciding satisfiability of a boolean formula is an NP-complete problem. Thus, it is unlikely that sound and complete algorithms running in polynomial time exist. However, recent years brought several significant advances. First, fast (although, clearly, still exponential in the worst case) implementations of the celebrated Davis-Putnam procedure \[DP60\] were found. These implementations are able to determine in a matter of seconds the satisfiability of critically constrained CNF formulas with 300 variables and thousands of clauses \[DABC96\]. Second, several fast randomized algorithms were proposed and thoroughly studied \[SLM92, SKC96, SK93, MSG97, Spe96\]. These algorithms randomly generate valuations and then apply some local improvement method in an attempt to reach a satisfying assignment. They are often very fast but they provide no guarantee that, given a satisfiable formula, a satisfying assignment will be found. That is, randomized algorithms, while often fast, are not complete. Still, they were shown to be quite effective and solved several practical large-scale satisfiability problems \[KS92\].
One of the most extensively studied randomized algorithms recently is GSAT \[SLM92\]. GSAT was shown to outperform the Davis-Putnam procedure on randomly generated 3-CNF formulas from the crossover region \[SLM92\]. However, GSAT's performance on structured formulas (encoding coloring and planning problems) was poorer \[SKC96, SK93, SKC94\]. The basic GSAT algorithm would often become trapped within local minima and never reach a solution. To remedy this, several strategies for escaping from local minima were added to GSAT yielding its variants: GSAT with averaging, GSAT with clause weighting, GSAT with random walk strategy (RWS-GSAT), among others \[SK93, SKC94\]. GSAT with random walk strategy was shown to perform especially well. These studies, while conducted on a wide range of classes of formulas, rarely address a critical issue: the likelihood that GSAT will find a satisfying assignment, if one exists; the running time is studied without a reference to this likelihood. Notable exceptions are \[Spe96\], where RWS-GSAT is compared with a simulated annealing algorithm SASAT, and \[MSG97\], where RWS-GSAT is compared to a tabu search method.
In this paper, we propose a systematic approach for studying the quality of randomized algorithms. To this end, we introduce the concepts of the accuracy and of the running time relative to the accuracy. The accuracy measures how likely it is that a randomized algorithm finds a satisfying assignment, assuming that the input formula is satisfiable. It is clear that the accuracy of GSAT (and any other similar randomized algorithm) grows as a function of time: the longer we let the algorithm run, the better the chance that it will find a satisfying valuation (if one exists). In this paper, we present experimental results that allow us to quantify this intuition and get insights into the rate of growth of the accuracy.
The notion of the running time of a randomized algorithm has not been rigorously studied. First, in most cases, a randomized algorithm has its running time determined by the choice of parameters that specify the number of random guesses, the number of random steps in a local improvement process, etc. Second, in practical applications, randomized algorithms are often used in an interactive way. The algorithm is allowed to run until it finds a solution or the user decides not to wait any more, stops the execution, modifies the parameters of the algorithm or modifies the problem, and tries again. Finally, since randomized algorithms are not complete, they may make errors by not finding satisfying assignments when such assignments exist. Algorithms that are faster may be less accurate and the trade-off must be taken into consideration \[Spe96\].
It all points to the problems that arise when attempting to systematically study the running times of randomized algorithms and extrapolate their asymptotic behavior. In this paper, we define the concept of a running time relative to the accuracy. The relative running time is, intuitively, the time needed by a randomized algorithm to guarantee a postulated accuracy. We show in the paper that the relative running time is a useful performance measure for randomized satisfiability testing algorithms. In particular, we show that the running time of GSAT relative to a prescribed accuracy grows exponentially with the size of the problem.
Related work where the emphasis has been on fine tuning parameter settings \[PW96, GW95\] has shown somewhat different results in regard to the increase in time as the size of the problems grows. The growth shown by \[PW96\] is the retrospective variation of maxflips rather than the total number of flips. The number of variables for the 3-CNF randomized instances reported in \[GW95\] is $`50,70,100`$. Although our results are also limited by the ability of complete algorithms to determine satisfiable instances, we have results for $`50,100,\dots ,400`$ variable instances in the crossover region. The focus in our work is on maintaining accuracy as the size of the problems increases.
Second, we study the dependence of the accuracy and the relative running time on the number of satisfying assignments that the input formula admits. Intuitively, the more satisfying assignments the input formula has, the better the chance that a randomized algorithm finds one of them, and the shorter the time needed to do so. Again, our results quantify these intuitions. We show that the performance of GSAT increases exponentially with the growth in the number of satisfying assignments.
These results have interesting implications for the problem of constructing sets of test cases for experimenting with satisfiability algorithms. It is now commonly accepted that random $`k`$-CNF formulas from the cross-over region are "difficult" from the point of view of deciding their satisfiability. Consequently, they are good candidates for testing satisfiability algorithms. These claims are based on the studies of the performance of the Davis-Putnam procedure. Indeed, on average, it takes the most time to decide satisfiability of CNF formulas randomly generated from the cross-over region. However, the suitability of formulas generated randomly from the cross-over region for the studies of the performance of randomized algorithms is less clear. Our results indicate that the performance of randomized algorithms critically depends on the number of satisfying assignments and much less on the density of the problem. Both under-constrained and over-constrained problems with a small number of satisfying assignments turn out to be hard for randomized algorithms. At the same time, the Davis-Putnam procedure, while sensitive to the density, is quite robust with respect to the number of satisfying truth assignments.
On the other hand, there are classes of problems that are "easy" for the Davis-Putnam procedure. For instance, the Davis-Putnam procedure is very effective in finding 3-colorings of graphs from special classes such as 2-trees. Thus, they are not appropriate benchmarks for Davis-Putnam type algorithms. However, a common intuition is that structured problems are "hard" for randomized algorithms \[SKC96, SK93, SKC94\]. In this paper we study this claim for the formulas that encode the 3- and 4-coloring problems for 2-trees. We show that GSAT's running time relative to a given accuracy grows exponentially with the size of a graph. This provides formal evidence for the "hardness" claim for this class of problems and implies that, while not useful in the studies of complete algorithms such as the Davis-Putnam method, they are excellent benchmarks for studying the performance of randomized algorithms.
The main contribution of our paper is not as much a discovery of an unexpected behavior of randomized algorithms for testing satisfiability as it is a proposed methodology for studying them. Our concepts of the accuracy and the relative running time allow us to quantify claims that are often accepted on the basis of intuitive arguments but have not been formally pinpointed.
In the paper, we apply our approach to the algorithm RWS-GSAT from \[SK93, SKC94\]. This algorithm is commonly regarded as one of the best randomized algorithms for satisfiability testing to date. For our experiments we used walksat version 35 downloaded from ftp.research.att.com/dist/ai and run on a SPARC Station 20.
## 2 Accuracy and running time
In this section, we will formally introduce the notion of the accuracy of a randomized algorithm $`A`$. We will then define the concept of the running time relative to accuracy.
Let $`\mathcal{F}`$ be a finite set of satisfiable CNF formulas and let $`\mathcal{P}`$ be a probability distribution defined on $`\mathcal{F}`$. Let $`A`$ be a sound algorithm (randomized or not) to test satisfiability. By the accuracy of $`A`$ (relative to the probability space $`(\mathcal{F},\mathcal{P})`$), we mean the probability that $`A`$ finds a satisfying assignment for a formula generated from $`\mathcal{F}`$ according to the distribution $`\mathcal{P}`$. Clearly, the accuracy of complete algorithms (for all possible spaces of satisfiable formulas) is 1 and, intuitively, the higher the accuracy, the more "complete" is the algorithm for the space $`(\mathcal{F},\mathcal{P})`$.
When studying and comparing randomized algorithms that are not complete, accuracy seems to be an important characteristic. It needs to be taken into account, in addition to the running time. Clearly, very fast algorithms that often return no satisfying assignments, even if they exist, are not satisfactory. In fact, most of the work on developing better randomized algorithms can be viewed as aimed at increasing the accuracy of these algorithms. Despite this, the accuracy is rarely explicitly mentioned and studied (see \[Spe96, MSG97\]).
We now propose an approach through which the running times of randomized satisfiability testing algorithms can be compared. We will restrict our considerations to the class of randomized algorithms designed according to the following general pattern. These algorithms consist of a series of tries. In each try, a truth assignment is randomly generated. This truth assignment is then subject to a series of local improvement steps aimed at, eventually, reaching a satisfying assignment. The maximum number of tries the algorithm will attempt and the length of each try are the parameters of the algorithm. They are usually specified by the user. We will denote by $`MT`$ the maximum number of tries and by $`MF`$ the maximum number of local improvement steps. Algorithms designed according to this pattern differ, besides possible differences in the values $`MT`$ and $`MF`$, in the specific definition of the local improvement process. The class of algorithms of this structure is quite wide and contains, in particular, the GSAT family of algorithms, as well as algorithms based on the simulated annealing approach.
Let $`A`$ be a randomized algorithm falling into the class described above. Clearly, its average running time on instances from the space $`(\mathcal{F},\mathcal{P})`$ of satisfiable formulas depends, to a large degree, on the particular choices for $`MT`$ and $`MF`$. To get an objective measure of the running time, independent of $`MT`$ and $`MF`$, when defining time, we require that a postulated accuracy be met. Formally, let $`a`$, $`0<a\le 1`$, be a real number (a postulated accuracy). Define the running time of $`A`$ relative to accuracy $`a`$, $`t^a`$, to be the minimum time $`t`$ such that for some positive integers $`MT`$ and $`MF`$, the algorithm $`A`$ with the maximum of $`MT`$ tries and with the maximum of $`MF`$ local improvement steps per try satisfies:
1. the average running time on instances from $`(\mathcal{F},\mathcal{P})`$ is at most $`t`$, and
2. the accuracy of $`A`$ on $`(\mathcal{F},\mathcal{P})`$ is at least $`a`$.
Intuitively, $`t^a`$ is the minimum expected time that guarantees accuracy $`a`$. In Section 3, we describe an experimental approach that can be used to estimate the relative running time.
The concepts of accuracy and of running time relative to accuracy open a number of important (and, undoubtedly, very difficult) theoretical problems. However, in this paper we will focus on an experimental study of accuracy and relative running time for a GSAT-type algorithm. These algorithms share the following general pattern for the local improvement process. Given a truth assignment, GSAT selects a variable such that after its truth value is flipped (changed to the opposite one) the number of unsatisfied clauses is minimum. Then, the flip is actually made depending on the result of some additional (often again random) procedure.
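As a concrete (if deliberately unoptimized) illustration of this pattern, a minimal GSAT with random walk can be sketched as below; the walk probability p_walk = 0.5 is a placeholder of ours, not a value taken from the cited papers:

```python
import random

def rws_gsat(clauses, n_vars, max_tries, max_flips, p_walk=0.5):
    """Minimal sketch of GSAT with random walk strategy (RWS-GSAT).
    clauses: list of clauses, each a list of DIMACS-style literals (v or -v).
    Returns a satisfying assignment (dict) or None."""
    def unsat(assign):
        return [c for c in clauses
                if not any(assign[abs(l)] == (l > 0) for l in c)]
    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            bad = unsat(assign)
            if not bad:
                return assign
            if random.random() < p_walk:
                # walk move: flip a variable occurring in an unsatisfied clause
                var = abs(random.choice(random.choice(bad)))
            else:
                # greedy GSAT move: flip the variable that leaves the fewest
                # unsatisfied clauses
                def score(v):
                    assign[v] = not assign[v]
                    s = len(unsat(assign))
                    assign[v] = not assign[v]
                    return s
                var = min(range(1, n_vars + 1), key=score)
            assign[var] = not assign[var]
    return None
```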
In our experiments, we used two types of data sets. Data sets of the first type consist of randomly generated 3-CNF formulas \[MSL92\]. Data sets of the second type consist of CNF formulas encoding the $`k`$-colorability problem for randomly generated 2-trees. These two classes of data sets, as well as the results of the experiments, are described in detail in the next two sections.
## 3 Random 3-CNF formulas
Consider a randomly generated 3-CNF formula $`F`$, with $`N`$ variables and the ratio of clauses to variables equal to $`L`$. Intuitively, when $`L`$ increases, the probability that $`F`$ is satisfiable should decrease. It is indeed so \[MSL92\]. What is more surprising, it switches from being close to one to being close to zero very abruptly in a very small range from $`L`$ approximately $`4.25`$ to $`4.3`$. The set of 3-CNF formulas at the cross-over region will be denoted by $`CR(N)`$. Implementations of the Davis-Putnam procedure take, on average, the most time on 3-CNF formulas generated (according to a uniform probability distribution) from the cross-over regions. Thus, these formulas are commonly regarded as good test cases for experimental studies of the performance of satisfiability algorithms \[CA93, Fre96\].
We used seven sets of satisfiable 3-CNF formulas generated from the cross-over regions $`CR(N)`$, $`N=100,150,\mathrm{},400`$. These data sets are denoted by $`DS(N)`$. Each data set $`DS(N)`$ was obtained by generating randomly 3-CNF formulas with $`N`$ variables and $`L=4.30`$ (for $`N=100`$) and $`L=4.25`$ (for $`N150`$) clauses. For each formula, the Davis-Putnam algorithm was then used to decide its satisfiability. The first one thousand satisfiable formulas found in this way were chosen to form the data set.
The random algorithms are often used with much larger values of $`N`$ than we have reported in this paper. The importance of accuracy in this study required that we have only satisfiable formulas (otherwise, the accuracy cannot be reliably estimated). This limited the size of randomly generated 3-CNF formulas used in our study since we had to use a complete satisfiability testing procedure to discard those randomly generated formulas that were not satisfiable. In Section 5, we discuss ways in which hard test cases for randomized algorithms can be generated that are not subject to the size limitation.
For each data set $`DS(N)`$, we determined values for $`MF`$, say $`MF_1,\dots ,MF_m`$, and $`MT_1,\dots ,MT_n`$ for use with RWS-GSAT, big enough to result in an accuracy of at least 0.98. For instance, for $`N=100`$, $`MF`$ ranged from $`100`$ to $`1000`$, with an increment of 100, and $`MT`$ ranged from 5 to 50, with an increment of 5. Next, for each combination of $`MF`$ and $`MT`$, we ran RWS-GSAT on all formulas in $`DS(N)`$ and tabulated both the running time and the percentage of problems for which a satisfying assignment was found (this quantity was used as an estimate of the accuracy). These estimates and average running times for the data set $`DS(100)`$ are shown in the tables in Figure 1.
Fixing a required accuracy, say at a level of $`a`$, we then looked for the best time which resulted in this (or higher) accuracy. We used this time as an experimental estimate for $`t^a`$. For instance, there are 12 entries in the accuracy table with accuracy $`0.99`$ or more. The lowest value from the corresponding entries in the running time table is 0.03 sec. and it is used as an estimate for $`t^{0.99}`$.
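This selection step is mechanical enough to state as code. Given the tabulated grid of $`(MT,MF)`$ settings with their measured mean times and accuracies, the estimate of $`t^a`$ is the smallest mean time among the settings that meet the accuracy threshold (a sketch with our own data layout):

```python
def relative_running_time(grid, a):
    """grid: iterable of (MT, MF, mean_time, accuracy) tuples, one entry per
    parameter setting; returns the estimate of t^a, or None if no setting
    reaches accuracy a."""
    times = [t for (_, _, t, acc) in grid if acc >= a]
    return min(times) if times else None

# e.g., for the DS(100) tables: relative_running_time(grid, 0.99) -> 0.03
```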
The relative running times $`t^a`$ for RWS-GSAT run on the data sets $`DS(N)`$, $`N=100,150,\dots ,400`$, and for $`a=0.90`$ and $`a=0.95`$, are shown in Figure 2. Both graphs demonstrate exponential growth, with the running time increasing by a factor of 1.5–2 for every 50 additional variables in the input problems. Thus, while GSAT outperforms the Davis-Putnam procedure for instances generated from the critical regions, if we prescribe the accuracy, it is still exponential and, thus, will quickly reach the limits of its applicability. We did not extend our results beyond formulas with 400 variables due to the limitations of the Davis-Putnam procedure (or any other complete method to test satisfiability). For problems of this size, GSAT is still extremely effective (it takes only about 2.5 seconds). Data sets used in Section 5 do not have this limitation (we know all formulas in these sets are satisfiable and there is no need to refer to complete satisfiability testing programs). The results presented there also illustrate the exponential growth of the relative running time and are consistent with those discussed here.
## 4 Number of satisfying assignments
It seems intuitive that accuracy and running time would be dependent on the number of possible satisfying assignments. Studies using randomly generated 3-CNF formulas \[CFG<sup>+</sup>96\] and 3-CNF formulas generated randomly with parameters allowing the user to control the number of satisfiable solutions for each instance \[CI95\] show this correlation.
In the same way as for the data sets $`DS(N)`$, we constructed data sets $`DS(100,p_{k-1},p_k)`$, where $`p_0=1`$, and $`p_k=2^{k-3}\cdot 100`$, $`k=2,\dots ,11`$. Each data set $`DS(100,p_{k-1},p_k)`$ consists of 100 satisfiable 3-CNF formulas generated from the cross-over region $`CR(100)`$ and having more than $`p_{k-1}`$ and no more than $`p_k`$ satisfying assignments. Each data set was formed by randomly generating 3-CNF formulas from the cross-over region $`CR(100)`$ and by selecting the first 100 formulas with the number of satisfying assignments falling in the prescribed range (again, we used the Davis-Putnam procedure here).
For each data set we ran the RWS-GSAT algorithm with $`MF=500`$ and $`MT=50`$, thus allowing the same upper limits for the number of random steps for all data sets (these values resulted in an accuracy of 0.99 in our experiments with the data set $`DS(100)`$ discussed earlier). Figure 3 summarizes our findings. It shows that there is a strong relationship between accuracy and the number of possible satisfying assignments. Generally, instances with a small number of solutions are much harder for RWS-GSAT than those with large numbers of solutions. Moreover, this observation is not affected by how constrained the input formulas are. We observed the same general behavior when we repeated the experiment for data sets of 3-CNF formulas generated from the under-constrained region (100 variables, 410 clauses) and over-constrained region (100 variables, 450 clauses), with under-constrained instances with few solutions being the hardest.
These results indicate that, when generating data sets for experimental studies of randomized algorithms, it is more important to ensure that they have few solutions rather than that they come from the critically constrained region.
## 5 CNF formulas encoding $`k`$-colorability
To expand the scope of applicability of our results and argue their robustness, we also used in our study data sets consisting of CNF formulas encoding the $`k`$-colorability problem for graphs. While easy for the Davis-Putnam procedure (which resolves their satisfiability in polynomial time), formulas of this type are believed to be "hard" for randomized algorithms and were used in the past in the experimental studies of their performance. In particular, it was reported in \[SK93\] that RWS-GSAT does not perform well on such inputs (see also \[JAMS91\]).
Given a graph $`G`$ with the vertex set $`V=\{v_1,\mathrm{},v_n\}`$ and the edge set $`E=\{e_1,\mathrm{},e_m\}`$, we construct the CNF formula $`COL(G,k)`$ as follows. First, we introduce new propositional variables $`col(v,i)`$, $`vV`$ and $`i=1,\mathrm{},k`$. The variable $`col(v,i)`$ expresses the fact that the vertex $`v`$ is colored with the color $`i`$. Now, we define $`COL(G,k)`$ to consist of the following clauses:
1. $`\neg col(x,i)\vee \neg col(y,i)`$, for every edge $`\{x,y\}`$ from $`G`$,
2. $`col(x,1)\vee \dots \vee col(x,k)`$, for every vertex $`x`$ of $`G`$,
3. $`\neg col(x,i)\vee \neg col(x,j)`$, for every vertex $`x`$ of $`G`$ and for every $`i,j`$, $`1\le i<j\le k`$.
It is easy to see that there is a one-to-one correspondence between $`k`$-colorings of $`G`$ and satisfying assignments for $`COL(k,G)`$. To generate formulas for experimenting with RWS-GSAT (and other satisfiability testing procedures) it is, then, enough to generate graphs $`G`$ and produce formulas $`COL(G,k)`$.
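A direct transcription of this encoding into code could look as follows (a sketch; the variable numbering col(v,i) -> (v-1)*k + i is our own choice):

```python
from itertools import combinations

def col_cnf(edges, n_vertices, k):
    """CNF encoding COL(G,k): clauses as lists of DIMACS-style literals,
    with col(v,i) mapped to the integer (v-1)*k + i."""
    var = lambda v, i: (v - 1) * k + i
    cnf = []
    for x, y in edges:                          # adjacent vertices differ
        cnf += [[-var(x, i), -var(y, i)] for i in range(1, k + 1)]
    for x in range(1, n_vertices + 1):
        cnf.append([var(x, i) for i in range(1, k + 1)])   # some color
        for i, j in combinations(range(1, k + 1), 2):      # at most one color
            cnf.append([-var(x, i), -var(x, j)])
    return cnf
```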
In our experiments, we used formulas that encode $`3`$-colorings for graphs known as $`2`$-trees. The class of 2-trees is defined inductively as follows:
1. A complete graph on three vertices (a "triangle") is a 2-tree
2. If $`T`$ is a 2-tree then a graph obtained by selecting an edge $`\{x,y\}`$ in $`T`$, adding to $`T`$ a new vertex $`z`$ and joining $`z`$ to $`x`$ and $`y`$ is also a 2-tree.
A 2-tree with 6 vertices is shown in Fig. 4. The vertices of the original triangle are labeled 1, 2 and 3. The remaining vertices are labeled according to the order they were added.
The concept of 2-trees can be generalized to $`k`$-trees, for an arbitrary $`k2`$. Graphs in these classes are important. They have bounded tree-width and, consequently, many NP-complete problems can be solved for them in polynomial time \[AP89\].
We can generate 2-trees randomly by simulating the definition given above and by selecting an edge for "expansion" randomly in the current 2-tree $`T`$. We generated in this way families $`G(p)`$, for $`p=50,60,\dots ,150`$, each consisting of one hundred randomly generated 2-trees with $`p`$ vertices. Then, we created sets of CNF formulas $`C(p,3)=\{COL(T,3):T\in G(p)\}`$, for $`p=50,60,\dots ,150`$. Each formula in a set $`C(p,3)`$ has exactly 6 satisfying assignments (since each 2-tree has exactly 6 different 3-colorings). Thus, they are appropriate for testing the accuracy of RWS-GSAT.
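A sketch of this generator; combined with the col_cnf encoder above it yields instances of the kind collected in $`C(p,3)`$:

```python
import random

def random_2tree(p):
    """Random 2-tree on p >= 3 vertices: start from a triangle and repeatedly
    expand a randomly chosen edge with a new vertex."""
    edges = [(1, 2), (1, 3), (2, 3)]
    for z in range(4, p + 1):
        x, y = random.choice(edges)
        edges += [(x, z), (y, z)]
    return edges

# e.g., one instance from C(50,3): col_cnf(random_2tree(50), 50, 3)
```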
Using CNF formulas of this type has an important benefit. Data sets can be prepared without the need to use complete (but very inefficient for large inputs) satisfiability testing procedures. By appropriately choosing the underlying graphs, we can guarantee the satisfiability of the resulting formulas and, often, we also have some control over the number of solutions (for instance, in the case of 3-colorability of 2-trees there are exactly 6 solutions).
We used the same methodology as the one described in the previous section to tabulate the accuracy and the running time of RWS-GSAT for a large range of choices for the parameters $`MF`$ and $`MT`$. Based on these tables, as before, we computed estimates for the times $`t^a`$ for $`a=0.95`$, for each of the data sets. The results that present the running time $`t^a`$ as a function of the number of vertices in a graph (which is of the same order as the number of variables in the corresponding CNF formula) are gathered in Figure 5. They show that RWS-GSAT's performance deteriorates exponentially (time grows by a factor of 3–4 for every 50 additional vertices).
An important question is: how to approach constraint satisfaction problems if they seem to be beyond the scope of applicability of randomized algorithms? A common approach is to relax some constraints. It often works because the resulting constraint sets (theories) are "easier" to satisfy (admit more satisfying assignments). We have already discussed the issue of the number of solutions in the previous section. Now, we will illustrate the effect of increasing the number of solutions (relaxing the constraints) in the case of the colorability problem. To this end, we will consider formulas from the spaces $`C(p,4)`$, representing 4-colorability of 2-trees. These formulas have exponentially many satisfying truth assignments (a 2-tree with $`p`$ vertices has exactly $`3\times 2^p`$ 4-colorings). For these formulas we also tabulated the times $`t^a`$, for $`a=0.95`$, as a function of the number of vertices in the graph. The results are shown in Figure 6.
Thus, despite the fact that the size of a formula from $`C(p,4)`$ is larger than the size of a formula from $`C(p,3)`$ by a factor of $`1.6`$, RWS-GSAT's running times are much lower. In particular, within 0.5 seconds RWS-GSAT can find a 4-coloring of randomly generated 2-trees with 500 vertices. As demonstrated by Figure 5, RWS-GSAT would require thousands of seconds for 2-trees of this size to guarantee the same accuracy when finding 3-colorings. Thus, even a rather modest relaxation of constraints can increase the number of satisfying assignments substantially enough to lead to noticeable speed-ups. On the other hand, even though "easier", the theories encoding the 4-colorability problem for 2-trees are still hard to solve by GSAT, as the rate of growth of the relative running time is exponential (Fig. 6).
The results of this section further confirm and provide quantitative insights into our earlier claims about the exponential behavior of the relative running time for GSAT and on the dependence of the relative running time on the number of solutions. However, they also point out that by selecting a class of graphs (we selected the class of 2-trees here but there are, clearly, many other possibilities) and a graph problem (we focused on colorability but there are many other problems such as hamiltonicity, existence of vertex covers, cliques, etc.) then encoding these problems for graphs from the selected class yields a family of formulas that can be used in testing satisfiability algorithms. The main benefit of the approach is that by selecting a suitable class of graphs, we can guarantee satisfiability of the resulting formulas and can control the number of solutions, thus eliminating the need to resort to complete satisfiability procedures when preparing the test cases. We intend to further pursue this direction.
## 6 Conclusions
In the paper we formally stated the definitions of the accuracy of a randomized algorithm and of its running time relative to a prescribed accuracy. We showed that these notions enable objective studies and comparisons of the performance and quality of randomized algorithms. We applied our approach to study the RWS-GSAT algorithm. We showed that, given a prescribed accuracy, the running time of RWS-GSAT was exponential in the number of variables for several classes of randomly generated CNF formulas. We also showed that the accuracy (and, consequently, the running time relative to the accuracy) strongly depended on the number of satisfying assignments: the bigger this number, the easier the problem for RWS-GSAT. This observation is independent of the "density" of the input formula. The results suggest that satisfiable CNF formulas with few satisfying assignments are hard for RWS-GSAT and should be used for comparisons and benchmarking. One such class of formulas, CNF encodings of the 3-colorability problem for 2-trees, was described in the paper and used in our study of RWS-GSAT.
Exponential behavior of RWS-GSAT points to the limitations of randomized algorithms. However, our results, indicating that input formulas with more solutions are "easier" for RWS-GSAT to deal with, explain RWS-GSAT's success in solving some large practical problems. They can be made "easy" for RWS-GSAT by relaxing some of the constraints.
## Introduction
The Standard Model (SM) is very successful in confronting the data coming from the highest energy accelerators. Still, there are theoretical reasons to expect that it is not complete, and one of the first questions in the quest for new physics is what is the relevant scale, where new phenomena can give experimental signatures. Recently, a radical proposal has been put forward for the solution of the hierarchy problem, which brings close the electroweak scale $`\mathrm{m}_{\mathrm{EW}}1\mathrm{TeV}`$ and the Planck scale $`\mathrm{M}_{\mathrm{Pl}}=\frac{1}{\sqrt{\mathrm{G}_\mathrm{N}}}10^{15}\mathrm{TeV}`$. In this framework the effective four-dimensional $`\mathrm{M}_{\mathrm{Pl}}`$ is connected to a new $`\mathrm{M}_{\mathrm{Pl}(4+\mathrm{n})}`$ scale in a (4+n) dimensional theory:
$$\mathrm{M}_{\mathrm{Pl}}^2\mathrm{M}_{\mathrm{Pl}(4+\mathrm{n})}^{2+\mathrm{n}}\mathrm{R}^\mathrm{n}$$
(1)
where there are n extra compact spatial dimensions of radius $`\mathrm{R}`$. This can explain the observed weakness of gravity at large distances. At the same time, quantum gravity becomes strong at a scale M of the order of 1 TeV and could have observable signatures at present and future colliders.
The first experimental searches for large extra dimensions have concentrated on the effects of real and virtual graviton emission<sup>1</sup><sup>1</sup>1For searches in Bhabha scattering see e.g. .. In a string theory of quantum gravity there are additional modifications of Standard Model amplitudes and new phenomenological consequences. Effective contact interactions caused by massive string mode oscillations might compete with or even become stronger than those due to virtual exchange of Kaluza-Klein excitations of gravitons, and thus provide the first signature of low scale gravity or a lower bound on the string scale.
Bhabha scattering above the Z resonance offers a reach hunting field for new phenomena . It can be used to search for manifestations of contact interactions and as a very sensitive probe of the point-like structure of the electron.
This paper is organized as follows. In sections 2 and 3 the experimental data and the analysis technique are presented. In the following section, we describe the search for effects of TeV strings in Bhabha scattering. In sections 5 and 6 we use the data to obtain limits on the scale of different contact interaction models, and on the size of electrons respectively. We conclude with a discussion of the results.
## Experimental Data
Data on fermion-pair production at 183 or 189 GeV centre-of-mass energies from the LEP2 collider has become available recently. In the following we will concentrate on the measurements of Bhabha scattering at these two highest energy points, where large data samples have been accumulated during the very successful LEP runs in 1997 and 1998.
The ALEPH , L3 and OPAL collaborations have presented results for the differential cross section of Bhabha scattering. In the case of L3 and OPAL the results are for both energy points and the scattering angle $`\theta `$ is the angle between the incoming and the outgoing electrons in the laboratory frame. In the ALEPH case the measurements are at 183 GeV and the scattering angle is defined in the outgoing $`\mathrm{e}^+\mathrm{e}^{}`$ rest frame. The acceptance is given by the angular range $`|\mathrm{cos}\theta |<0.9`$ for the ALEPH and OPAL measurements and by $`44^{}<\theta <136^{}`$ for the L3 measurement.
The experiments use different strategies to isolate the high energy sample, where the interactions take place at energies close to the full available centre-of-mass energy. This sample is the main search field for new physics. L3 and OPAL apply an acollinearity cut of $`25^{}`$ and $`10^{}`$ respectively. ALEPH defines the effective energy, $`\sqrt{\mathrm{s}^{}}`$, as the invariant mass of the outgoing fermion pair. It is determined from the angles of the outgoing fermions. For details of the selection procedures, the statistical and systematic errors we refer the reader to the publications of the LEP experiments.
## Analysis Method
The Standard Model predictions for the differential cross sections of Bhabha scattering at 183 and 189 GeV are computed with the Monte Carlo generator BHWIDE . We assign a theory uncertainty of 1.5 % on the absolute scale of the predictions. In all cases the individual experimental cuts of the selection procedures and the isolation of the high energy samples are taken into account. The results are cross-checked with the semi-analytic program TOPAZ0 .
The effects of new phenomena are computed as a function of a generic parameter $`\epsilon `$, defined for each individual case in the corresponding section. Initial-state radiation (ISR) changes the effective centre-of-mass energy in a large fraction of the observed events. We take these effects into account by computing the first order exponentiated differential cross section following . Other QED and electroweak corrections give smaller effects and are neglected.
In total we have 47 data points: 28 from the 3 differential spectra at 183 GeV and 19 from the L3 and OPAL spectra at 189 GeV. A fitting procedure similar to the one in is applied.
A negative log-likelihood function is constructed by combining all data points at the two centre-of-mass energies:
$$\mathrm{log}=\underset{\mathrm{r}=1}{\overset{\mathrm{n}}{}}\left(\frac{(\mathrm{Prediction}(\mathrm{SM},\epsilon )\mathrm{Measurement})^2}{2\mathrm{\Delta }_{\mathrm{Measurement}}^2}\right)_\mathrm{r}$$
(2)
$`\mathrm{\Delta }_{\mathrm{Measurement}}`$ $`=`$ $`\mathrm{error}(\mathrm{Prediction}(\mathrm{SM},\epsilon )\mathrm{Measurement})`$ (3)
where $`Prediction(SM,\epsilon )`$ is the SM expectation for a given measurement (a point in the differential spectra) combined with the additional effect of new phenomena as a function of the mass scale or electron size, and $`Measurement`$ is the corresponding measured quantity. The index $`\mathrm{r}`$ runs over all data points. The error on a deviation consists of three parts, which are combined in quadrature: a statistical error and a systematic error (as given by the experiments) and the theoretical error assigned above. The systematic errors account for small correlations between data points.
## TeV Strings in Bhabha Scattering
In the authors develop a model to study the effects of string Regge excitations on physical cross sections by a simple embedding of the Quantum Electrodynamics of electrons and photons into string theory. They use only one gauge group and only vector-like couplings, in order to avoid complications but grasp the general phenomenological picture. The results are model-dependent.
The effects of TeV scale strings on Bhabha scattering are computed from the leading-order scattering amplitudes. All amplitudes are multiplied by a common form-factor
$$๐ฎ(\mathrm{s},\mathrm{t})=\frac{\mathrm{\Gamma }(1\frac{\mathrm{s}}{\mathrm{M}_\mathrm{S}^2})\mathrm{\Gamma }(1\frac{\mathrm{t}}{\mathrm{M}_\mathrm{S}^2})}{\mathrm{\Gamma }(1\frac{\mathrm{s}}{\mathrm{M}_\mathrm{S}^2}\frac{\mathrm{t}}{\mathrm{M}_\mathrm{S}^2})}.$$
(4)
In the case where the string scale $`\mathrm{M}_\mathrm{S}`$ is close to or smaller than the centre-of-mass energy, the Gamma-functions in this form-factor produce a very reach and complicated resonance structure. On the other hand, in the limit where the Mandelstam variables s and t are much smaller than $`\mathrm{M}_\mathrm{S}`$, we have
$$๐ฎ(\mathrm{s},\mathrm{t})=(1\frac{\pi ^2}{6}\frac{\mathrm{st}}{\mathrm{M}_\mathrm{S}^4}+\mathrm{}).$$
(5)
So in this model the leading corrections are proportional to $`\mathrm{M}_\mathrm{S}^4`$, corresponding to an operator of dimension 8.
To compare the string predictions to the data on Bhabha scattering above the Z resonance one has to handle also the contributions due to Z exchange and the interference with photon exchange amplitudes. The Z is not part of the string QED model developed in , but as all QED Bhabha scattering amplitudes are multiplied by the common factor $`๐ฎ(\mathrm{s},\mathrm{t})`$, the authors suggest to compare the differential cross section to the simple formula
$$\frac{\mathrm{d}\sigma }{\mathrm{d}\mathrm{cos}\theta }=(\frac{\mathrm{d}\sigma }{\mathrm{d}\mathrm{cos}\theta })_{\mathrm{SM}}|๐ฎ(\mathrm{s},\mathrm{t})|^2.$$
(6)
The data from the LEP collaborations at 183 and 189 GeV show no statistically significant deviations from the SM predictions due to string effects. In their absence, we use the log-likelihood method, which after proper normalization gives the confidence level for any value of the scale $`\mathrm{M}_\mathrm{S}`$ in the physically allowed region. The exact definition can be found in . The one-sided lower limit on the scale $`\mathrm{M}_\mathrm{S}`$ at 95% confidence level is:
$$\mathrm{M}_\mathrm{S}=0.631\mathrm{TeV}.$$
(7)
Examples of the data analysis at 189 GeV are shown in Figure 1 and Figure 2, where the SM predictions and the expectations from several manifestations of new phenomena are compared to the measurements of the L3 and OPAL collaborations, respectively. In these figures we plot the combined statistical and systematic errors; the theory uncertainty is not shown. In the area of the forward peak the theory uncertainty in the SM prediction starts to limit the precision of our study.
## Contact Interactions
The standard framework, used in searches for deviations from the SM predictions, is the most general combination of helicity conserving dimension-6 operators . In this scheme, new interactions beyond the Standard Model are characterised by a coupling strength, $`g`$, and by an energy scale, $`\mathrm{\Lambda }`$, which can be viewed as the scale of compositeness. At energies much lower than $`\mathrm{\Lambda }`$, we have an effective Lagrangian leading to four-fermion contact interactions.
The differential cross section for fermion-pair production in $`\mathrm{e}^+\mathrm{e}^{}`$ collisions can be decomposed in the usual way as:
$$\frac{\mathrm{d}\sigma }{\mathrm{d}\mathrm{\Omega }}=\mathrm{SM}(\mathrm{s},\mathrm{t})+\epsilon \mathrm{C}_{\mathrm{Int}}(\mathrm{s},\mathrm{t})+\epsilon ^2\mathrm{C}_{\mathrm{CI}}(\mathrm{s},\mathrm{t})$$
(8)
where $`\mathrm{SM}(\mathrm{s},\mathrm{t})`$ is the Standard Model contribution, $`\mathrm{C}_{\mathrm{CI}}(\mathrm{s},\mathrm{t})`$ comes from the contact interaction amplitude and $`\mathrm{C}_{\mathrm{Int}}(\mathrm{s},\mathrm{t})`$ is the interference between the SM and the contact interaction terms. The exact form of these functions is given in . By convention $`\frac{\mathrm{g}^2}{4\pi }=1`$ and $`|\eta _{\mathrm{ij}}|\le 1`$, where $`(\mathrm{i},\mathrm{j}=\mathrm{L},\mathrm{R})`$ labels the helicity of the incoming and outgoing fermions. We define
$$\epsilon =\frac{\mathrm{g}^2}{4\pi }\frac{\mathrm{sign}(\eta )}{\mathrm{\Lambda }^2}$$
(9)
where the sign of $`\eta `$ enables us to study both the cases of positive and negative interference.
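The translation between $`\mathrm{\Lambda }`$ and $`\epsilon `$ is then a one-line conversion; the following fragment simply encodes Eq. (9) under the conventional $`\frac{\mathrm{g}^2}{4\pi }=1`$:

```python
import math

def epsilon_from_lambda(lam_TeV, sign=+1):
    """Eq. (9) with g^2/(4 pi) = 1; lam_TeV in TeV, result in TeV^-2."""
    return sign / lam_TeV**2

def lambda_from_epsilon(eps):
    return 1.0 / math.sqrt(abs(eps))

print(epsilon_from_lambda(10.0))   # eps = 0.01 TeV^-2 for Lambda = 10 TeV
print(lambda_from_epsilon(0.01))   # recovers Lambda = 10 TeV
```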
As discussed in the previous section, the data from the LEP collaborations at 183 and 189 GeV show no statistically significant deviations from the SM predictions. In their absence, using the same technique we derive one-sided lower limits on the scale $`\mathrm{\Lambda }`$ of contact interactions at 95% confidence level. They are summarized in Table 1 and Figure 3. The results presented here improve on the limits obtained by individual LEP experiments .
## Electron Size
In the Standard Model leptons, quarks and gauge bosons are considered as point-like particles. A possible substructure or new interactions at as yet unexplored very high energies could manifest themselves as finite radii and anomalous magnetic dipole moments of these particles.
The high precision measurements of the magnetic dipole moment $`(\mathrm{g}-2)_\mathrm{e}`$ of the electron can be used to put stringent limits on the electron radius $`\mathrm{r}_\mathrm{e}`$ . If non-standard contributions to $`(\mathrm{g}-2)_\mathrm{e}`$ scale linearly with the electron mass, the bound is $`\mathrm{r}_\mathrm{e}\lesssim 2\times 10^{-23}\mathrm{m}`$. On the other hand, if they scale quadratically with the electron mass, which is a natural consequence of chiral symmetry , the bound is reduced to $`\mathrm{r}_\mathrm{e}\lesssim 3\times 10^{-18}\mathrm{m}`$. In , the authors perform an analysis of the high precision data on the Z resonance, noting that while the assumption of elementary photons is quite natural, the same is less obvious for the very massive Z bosons. In the pure electron case the limit is not competitive with the $`(\mathrm{g}-2)_\mathrm{e}`$ results.
Here we perform a new analysis based on the LEP2 data on Bhabha scattering, where again the photon exchange gives the dominating amplitudes both in the t- and s-channels, and good sensitivity to electron substructure can be expected. The differential cross section for fermion-pair production in $`\mathrm{e}^+\mathrm{e}^{}`$ collisions far above the Z is modified as:
$$\frac{\mathrm{d}\sigma }{\mathrm{dQ}^2}=(\frac{\mathrm{d}\sigma }{\mathrm{dQ}^2})_{\mathrm{SM}}\mathrm{F}_\mathrm{e}^2(\mathrm{Q}^2)\mathrm{F}_\mathrm{f}^2(\mathrm{Q}^2)$$
(10)
where $`\mathrm{F}_\mathrm{e}`$ and $`\mathrm{F}_\mathrm{f}`$ are the form-factors of the initial- and final-state fermions. They are parametrized in the standard way as:
$$\mathrm{F}(\mathrm{Q}^2)=1+\frac{1}{6}\mathrm{Q}^2\mathrm{r}^2$$
(11)
where $`\mathrm{Q}^2`$ is the Mandelstam variable s or t for s- or t-channel exchange, and $`\mathrm{r}^2`$ is the mean-square radius of the fermions. This formalism is a convenient way to estimate the electron size in the case where the product $`\mathrm{Q}^2\mathrm{r}^2`$ is small.
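As a rough numerical illustration (assuming the small-$`\mathrm{Q}^2\mathrm{r}^2`$ regime and converting lengths with $`\mathrm{\hslash }c\approx 0.1973`$ GeV fm), the following fragment evaluates Eq. (11) at LEP2 energies for the electron-radius limit quoted below:

```python
def form_factor(Q2, r2):
    """Eq. (11): F(Q^2) = 1 + Q^2 r^2 / 6, valid for small Q^2 r^2.
    Q2 in GeV^2, r2 in GeV^-2."""
    return 1.0 + Q2 * r2 / 6.0

hbarc_m = 0.1973e-15          # hbar*c in GeV*m
r_e = 2.8e-19                 # the 95% CL limit quoted below, in metres
r2 = (r_e / hbarc_m) ** 2     # converted to GeV^-2
Q2 = 189.0**2                 # s-channel at sqrt(s) = 189 GeV
print(form_factor(Q2, r2))    # about 1.01: a per-cent-level modification
```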
From the data of the LEP collaborations at 183 and 189 GeV we extract the following upper limit on the electron radius at 95% confidence level:
$$\mathrm{r}_\mathrm{e}<2.8\times 10^{-19}\mathrm{m}.$$
(12)
This limit is one order of magnitude lower than the limit derived from $`(\mathrm{g}-2)_\mathrm{e}`$ measurements in the case where the deviations from the SM of the magnetic dipole moment of the electron depend quadratically on its mass.
High energy analyses have been performed in interactions involving electrons and quarks, assuming a single form-factor for all fermions. The H1 collaboration at HERA uses deep inelastic scattering and obtains a limit of $`\mathrm{r}<26\times 10^{-19}\mathrm{m}`$ at 95 % confidence level . The CDF collaboration at the TEVATRON studies the Drell-Yan process to put a limit of $`\mathrm{r}<5.6\times 10^{-19}\mathrm{m}`$ at 95 % confidence level .
## Discussion
The search for TeV strings motivates a fresh look at Bhabha scattering. In the model analyzed here the string realization of quantum gravity is manifested as a form-factor which modifies the differential cross section. The lower limit obtained in our analysis of LEP2 data is $`\mathrm{M}_\mathrm{S}=0.631\mathrm{TeV}`$. In , from the study of virtual graviton exchange in gravity models with large extra dimensions, we obtained a lower limit on their scale of $`\mathrm{\Lambda }_\mathrm{T}=1.412\mathrm{TeV}`$ for positive interference ($`\lambda =+1`$). (This value of $`\mathrm{\Lambda }_\mathrm{T}`$ corresponds, depending on the convention, also to a gravity scale $`\mathrm{M}_\mathrm{s}=1.261\mathrm{TeV}`$; the gravity scale with subscript small s should not be confused with the string scale $`\mathrm{M}_\mathrm{S}`$ studied here.) As noted in , the gravity scale is between $`1.6`$ and $`3.0\mathrm{M}_\mathrm{S}`$, depending on the coupling strength. The results on the gravity scale from and on the string scale from this analysis agree well with each other.
It is interesting to note that our study of the electron size also leads to form-factors modifying the differential cross section, but with opposite sign. The limit derived here, $`\mathrm{r}_\mathrm{e}<2.8\times 10^{-19}\mathrm{m}`$, becomes $`\mathrm{M}_\mathrm{r}>0.705\mathrm{TeV}`$, if translated to a mass scale. This is a reflection of the similar magnitude of the effects at LEP2 energies in both cases, even if the physics mechanisms involved are different.
In the framework of contact interactions very stringent bounds exceeding 10 TeV are obtained. When interpreting the physical meaning of these limits, we should remember that a strong coupling $`\frac{\mathrm{g}^2}{4\pi }=1`$ for the novel interactions is postulated by convention. If we assume a coupling of electromagnetic strength, the limits can be translated:
$$\mathrm{\Lambda }^{}=\sqrt{\alpha _{\mathrm{QED}}}\mathrm{\Lambda }=0.085\mathrm{\Lambda }$$
(13)
where we have used the value of the fine structure constant and ignored the small effect of a running $`\alpha _{\mathrm{QED}}`$. For instance the VV model with positive interference gives effects similar to the ones resulting from a finite electron size, as shown in Figure 1 and Figure 2. The limit for the VV model translates as follows:
$$\mathrm{\Lambda }_+=13.0\mathrm{TeV}\rightarrow \mathrm{\Lambda }^{}=1.1\mathrm{TeV}\rightarrow \mathrm{r}=1.8\times 10^{-19}\mathrm{m}.$$
(14)
This result is comparable with the upper limit for electron substructure, derived using form-factors.
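The arithmetic of this chain of translations can be checked directly; the short check below assumes the simple identification of the mass scale with $`\mathrm{\hslash }c/\mathrm{r}`$ (an assumption on our part, but one that reproduces the numbers in Eq. (14)):

```python
import math

alpha_qed = 1.0 / 137.036
hbarc_m = 0.1973e-15                       # hbar*c in GeV*m

lam_plus = 13.0e3                          # VV limit Lambda_+ in GeV
lam_em = math.sqrt(alpha_qed) * lam_plus   # Eq. (13): ~0.085 * Lambda
r = hbarc_m / lam_em                       # naive r = hbar*c / M
print(lam_em / 1e3, r)                     # ~1.1 TeV and ~1.8e-19 m
```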
The measurements of Bhabha scattering above the Z resonance confirm the predictions of the Standard Model and already reach a level of precision similar to that of the best theoretical tools available. In order to fully exploit the physics potential of the large data samples collected during the LEP running in 1999 and expected in 2000, improved theory predictions are very desirable. Bhabha scattering is a probe sensitive enough to provide a first window to new physics at the TeV scale.
## Acknowledgements
The author is grateful to A. Böhm, M. Peskin and I. Antoniadis for valuable discussions.
# Evidence of self-interacting cold dark matter from galactic to galaxy cluster scales
## 1 Introduction
Constraining the nature of dark matter is presently one of the most relevant problems in cosmology and particle physics. The current most popular scenarios for structure formation in the universe are based on the inflationary CDM theory, according to which cosmic structures arise from small Gaussian density fluctuations composed mostly of non-relativistic collisionless particles. Luminous galaxies are thought to form by gas cooling and condensing into the dark matter haloes which grow by gravitational accretion and merging in a hierarchical fashion.
The question on the inner density profiles of the virialized dark matter haloes is at present controversial. In the last few years much observational and theoretical effort has been employed in investigating the inner structure of dark haloes. On galactic scales, the rotation curves of dwarf galaxies offer a way to study the inner mass distribution of their dark haloes directly since these galaxies are dominated by dark matter. By analysing the rotation curves of some nearby dwarf galaxies, Moore (1994), Flores $`\&`$ Primack (1994) and Burkert (1995) have shown that the central mass distribution of their dark haloes is soft, i.e. the haloes have a constant density core. A similar result concerns low surface brightness galaxies (LSB, hereafter) (de Blok $`\&`$ McGaugh 1997) even though the uncertainty in the observational data is larger than in the case of dwarf spirals. Hernández $`\&`$ Gilmore (1998) showed that the observed rotation curves of both LSB and normal large galaxies are consistent with a fixed initial halo shape, characterized by a significant soft core inner region. On scales of clusters of galaxies, unfortunately there is not much information available. Recently, from strong gravitational lensing observations, Tyson, Kochanski, & Dell'Antonio (1998) have obtained an unprecedented high-resolution mass map for the cluster CL0024+1654, which does not have a central cD galaxy, and found the existence of a soft core. Taken together these studies suggest the universality of constant density cores across both large mass scales and galactic types.
On the theoretical side, the structure of the CDM haloes was studied over a wide range of masses by means of high-resolution N-body cosmological simulations (e.g., Navarro, Frenk, & White 1997; NFW hereafter) and semi-analytical approaches (e.g., Avila-Reese, Firmani, & Hernández 1998). It was found that the universal density profile first introduced by NFW describes very well the mass distribution of most of the CDM haloes. This profile is uniquely determined by the mass, and in the centre diverges as $`\rho \propto r^{-1}`$ producing a cusp in the core. Recent high-resolution N-body simulations have shown that, as the numerical resolution is increased, the inner profiles turn out to be even steeper than $`r^{-1}`$ (e.g., Moore et al. 1999b), making the CDM haloes more cuspy than in the case of the NFW profile.
So far, the predicted inner density profile of the CDM haloes seems to be in conflict with the observations. Another potential difficulty for the CDM models was recently reported: the N-body simulations predict an overly large number of haloes within group-like systems compared to observations (Klypin et al. 1999; Moore et al. 1999a). In light of these difficulties, the current stance of the hierarchical CDM-based scenario of structure formation remains somewhat confusing because, in fact, this scenario successfully accounts for: the distribution of matter at large scales (Bahcall et al. 1999), the uniformity of the cosmic microwave radiation and its small temperature anisotropies, and the observationally inferred cosmological parameters.
The aim of this letter is to analyse the halo core properties inferred from observations, which might suggest explanations for the origin of the soft halo cores and clarify the discrepancies that appear on small scales within the hierarchical scenario of structure formation. We investigate whether some modifications of the initial conditions of this scenario are able to improve the results with respect to the observations. We demonstrate that the introduction of self-interaction in the CDM particles, as suggested by Spergel & Steinhardt (1999), offers the most viable solution to the core problem in a context that preserves the hierarchical CDM-based scenario.
## 2 Halo central density from observations
We select from the literature dwarf and LSB galaxies with accurately measured rotation curves and clearly dominated by dark matter. These restrictions considerably reduce ambiguities in the estimates of the dark matter mass distribution due to uncertain stellar mass-to-light ($`M/L`$) ratios and modifications of the original halo profile produced by the gravitational drag of baryons during disc formation. Hence the dark haloes of these galaxies can be rightly assumed almost "virgin". These constraints reduce the sample to six dwarf galaxies: DDO154 (Carignan et al. 1998), DDO170 (Lake et al. 1990), DDO105 (Schramm 1992, quoted by Moore 1994), NGC3109 (Jobin et al. 1990), IC2574 (Martimbeau et al. 1994), NGC5585 (Côté et al. 1991). Six LSB galaxies are selected with the same criterion from a published sample: F568-v1, F571-8, F574-1; F583-1, F583-4, UGC5999 (de Blok $`\&`$ McGaugh 1997). The rotation curves measured for all these galaxies were used by the different authors to estimate the halo parameters, particularly the central density.
Our analysis also includes the density profile obtained for the cluster CL0024+1654 from a high resolution mass map derived using strong lensing techniques (Tyson et al. 1998). Because of the lack of a massive cD galaxy in the core, this cluster can be assumed to be dark matter dominated at the centre. Two clusters of galaxies, CL1455+22 and CL0016+16, with evident shallow mass profiles in the inner regions obtained by weak gravitational lensing studies (Smail et al. 1995) have also been considered, even though the uncertainty of the observational data is larger in these cases.
In Figure 1 we plot a very suggestive result: for a broad range of masses, the central density of the dark haloes is independent of mass (or circular velocity). Most dwarf galaxies (filled squares), LSB galaxies (open squares) and clusters (circles) indicate an average halo core density close to $`\rho _c=0.02\,\mathrm{M}_{\odot }/\mathrm{pc}^3`$. The arrow shows a fiducial value derived from a published sample of LSB galaxies (de Blok $`\&`$ McGaugh 1997). The galaxy error bars are based on the observational uncertainty, and when possible from the range given by the maximum and minimum disc models. The cluster error bars take into account the uncertainty in observations and in a normalization factor of three in going from strong to weak lensing techniques (Wu et al. 1998).
This observational evidence makes the cosmological puzzle quite complex: how can one explain the origin of soft halo cores with roughly the same central density over the entire mass range sampled?
## 3 Shallow cores from collisionless cold dark matter
As was discussed above, observations seem to show that the inner density profile of the dark matter haloes is (i) shallow, and (ii) with a central value independent of the total halo mass (or maximum circular velocity $`V_m`$). These facts disagree with the predictions of the hierarchical CDM models. Now, we investigate some alternatives which might alleviate these difficulties within the cosmological context. For this we have performed a quantitative study of the CDM halo profiles using a semi-numerical method (Avila-Reese, Firmani, & Hernández 1998) aimed at calculating the collapse and virialization of spherically symmetric density fluctuations starting from an arbitrary mass aggregation history. Results obtained with this method are in excellent agreement with those of the N-body simulations (see Avila-Reese et al. 1999; Firmani & Avila-Reese 2000). The method is based on a generalization of the secondary infall model where non-radial motions and adiabatic invariance are taken into account. The only free parameter is the orbital parameter of particles (the perihelion to aphelion ratio) which regulates the thermal orbital energy of the system. This parameter is fixed independently of the halo mass and is constant during halo formation. Cosmological N-body simulations suggest $`r_{\mathrm{peri}}/r_{\mathrm{apo}}\approx 0.2`$–$`0.3`$ (see Ghigna et al. 1998).
Recently, Moore et al. (1999b) have simulated CDM haloes formed by monolithic collapse with N-body simulations introducing for this a lower cut-off at some wavelength in the power spectrum of fluctuations which suppresses substructures. The result was that the steep inner density profile of the haloes persisted. We suggest that this result might be partially a consequence of the lack of thermal orbital energy. In a monolithic collapse scenario the thermal orbital energy plays a significant role in producing soft cores: as $`r_{\mathrm{peri}}/r_{\mathrm{apo}}`$ increases a larger soft core is obtained. The density profiles of our haloes obtained for a CDM model with a lower cut-off in the variance of the power spectrum and a non zero initial thermal content, present soft cores. However, these models are unable to predict the observed central density trend shown in Figure 1 (Avila-Reese et al. 1998). In fact, the central density $`\rho _c`$ increases with $`V_m`$ in such a way that if $`\rho _c`$ is reproduced on galactic scales, it overshoots the observed value on cluster scales by more than an order of magnitude. A hypothetical injection of thermal energy to the dark matter at a specific time in the life of the universe leads to a similar negative result.
An interesting way to produce soft halo cores in agreement with observations is to simply truncate the hierarchical halo mass aggregation histories at a given redshift towards the past. This may be done assuming that the halo mass fraction instantaneously collapses with some thermal energy (monolithic thermal collapse), while the rest of the mass is aggregated at the normal hierarchical rate. We have calculated the density profiles for haloes whose mass aggregation histories correspond to a hierarchical flat $`\mathrm{\Lambda }`$CDM model ($`\mathrm{\Omega }_m=0.3`$, h=0.7, $`\sigma _8=1`$) from $`z=5`$ and $`r_{peri}/r_{apo}=0.3`$; before this epoch the hierarchical aggregation was truncated. The results for this toy model are in good agreement with the observations: the haloes have a soft core, the core densities are independent of the mass and have a value similar to what the observational inferences indicate. It is interesting to note that the most distant QSOs and galaxies are at redshifts $`z\approx 5`$. Although the toy model presented here might look attractive, it is difficult to imagine a physical process capable of delaying the collapse of the central parts of the CDM haloes until $`z\approx 5`$.
## 4 Shallow cores from self-interacting cold dark matter
Self-interacting dark matter has been proposed as a possible solution for two potential conflicts of the hierarchical CDM models (Spergel & Steinhardt 1999; Hannestad 1999): the shallow core of the haloes and the dearth of dwarf galaxies in the Local Group. Astrophysical consequences of collisional dark matter have been pointed out by Ostriker (1999). It is easy to show that a configuration with the NFW density distribution is very far from thermal equilibrium: the inner velocity dispersion (temperature) has a positive gradient. Consequently, the presence of some self-interaction in the CDM particles introduces in the dark haloes a process of thermalization with heat transfer inwards, avoiding the formation of a cuspy profile. The heat capacity of the core is negative, a typical property of self-gravitating systems, like the interiors of stars. For this reason, the heat transfer inwards cools the core, exacerbating the temperature gradient even more. The inward heat transfer then increases, causing the core to expand and cool due to gravothermal instability, leading to runaway core expansion. This physical mechanism is the key point for core expansion if self-interaction is effective. This process is similar to the post-collapse gravothermal instability well-known in dynamical studies of globular clusters (Bettwieser & Sugimoto 1983) where the minimum central density is reached roughly after a thermalization time.
The expansion of the core does not last forever. Since as the core expands the central density decreases, this would make the self-interaction less efficient and the core formation mechanism a self-limiting process. Although attractive, this mechanism is difficult to investigate because of our lack of knowledge regarding the cross section of the self-interacting dark matter particles. For this reason we start our analysis with a thermodynamical approach: we shall estimate the central density of CDM haloes assuming a thermodynamical equilibrium is reached due to strong self-interaction of the CDM particles. The final result will be the formation in the CDM halo of a central isothermal non-singular density profile established by competition between 1) mass and energy hierarchical aggregation, and 2) the thermalization due to self-interaction. The hierarchical mass and energy aggregation tends to establish an NFW density profile (with the corresponding heat transfer inwards) while the self-interaction process tends to lead the system to a thermal equilibrium with the corresponding formation of a shallow core. For a given mass, the halo formed by a hierarchical mass aggregation identifies a gravitational binding energy (or $`V_m`$). Using this mass and binding energy to rescale a thermodynamical equilibrium configuration it is easy to find:
$$\rho _c=\alpha \frac{V_m^6}{M^2}\mathrm{M}_{\odot }/\mathrm{pc}^3$$
(1)
where $`V_m`$ is in km/s, $`M`$ is the halo mass in $`\mathrm{M}_{\odot }`$ and $`\alpha `$ is a constant given by the detailed shape of the final equilibrium configuration. Since for the CDM haloes a tight relationship between their mass and circular velocity of the kind $`M\propto V_m^n`$ with $`n\approx 3.2`$ is predicted (Avila-Reese et al. 1998,1999), eq. (1) implies that $`\rho _c`$ is roughly invariant with respect to the mass or $`V_m`$, as observations point out (Fig. 1). This strongly suggests that indeed a thermalization process due to dark matter self-interaction is acting in the CDM haloes.
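A quick numerical illustration of this near-invariance (Python; the normalisation $`k`$ of the $`M`$-$`V_m`$ relation is an assumption, tuned here so that $`\rho _c\approx 0.02`$ at $`V_m=100`$ km/s):

```python
import numpy as np

# Eq. (1) combined with M = k * Vm^n (n ~ 3.2) gives rho_c ~ Vm^(6-2n),
# i.e. Vm^(-0.4): almost flat from dwarf to cluster scales.
n, alpha = 3.2, 1.3e9            # alpha for the King maximum-entropy case
k = 1.0e5                        # assumed normalisation of M(Vm), in Msun
Vm = np.array([30.0, 100.0, 1000.0])   # km/s: dwarf, LSB and cluster scales
M = k * Vm**n                    # halo masses in Msun
rho_c = alpha * Vm**6 / M**2     # Msun/pc^3
print(rho_c)                     # ~0.03, 0.02, 0.008: a factor of a few,
                                 # versus the factor ~1e9 spanned by Vm^6
```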
Unfortunately, there is not a single final thermal equilibrium configuration, and as Lynden-Bell & Wood (1968) pointed out, some of the configurations are even unstable. The King and Woolley configurations are examples of systems that have reached thermal equilibrium. They are characterized by a form parameter that may be related to the entropy of the system. A fiducial value for the central density of the CDM haloes with self-interaction may be estimated using a King or a Woolley profile at the state of maximum entropy (Lynden-Bell $`\&`$ Wood 1968). For these cases we derive respectively $`\alpha =1.3\times 10^9`$ (short-dashed line in Fig. 1) and $`\alpha =2.6\times 10^9`$ (long-dashed line in Fig. 1) in the appropriate units. The case of maximum entropy for a King profile corresponds to a value of the form parameter of $`8.5`$. A lower limit for the density may be roughly estimated from the dynamical evolution of globular clusters based on the Fokker-Planck approximation (Spitzer $`\&`$ Thuan 1972), starting from a uniform spherical distribution (this initial condition will lead to a central density lower than the density reached by the thermalization of a steep initial profile). The rescaling for this model taken at the first thermal equilibrium state gives us $`\alpha =1.7\times 10^8`$ (dotted curve in Fig.1).
Global thermal equilibrium is reached when the self-interaction cross section is sufficiently large for the characteristic time scale of interactions across the overall halo to be shorter than the halo lifetime. An opposite situation of minimum cross section is given when self-interaction induces thermal equilibrium only in the region of the shallow core. In this case the central isothermal core appears surrounded by a matter distribution characterised by an NFW profile. From the observational data it is now possible to infer an estimate of the self-interaction cross section. If $`n`$ is the dark particle number density, $`\sigma `$ the cross section and $`v`$ the dispersion velocity, assuming the collision time in the core $`\tau =1/(n\sigma v)`$ close to the Hubble time we obtain:
$$\frac{\sigma }{m_x}\approx 4\times 10^{-25}\left(\frac{0.02\,\mathrm{M}_{\odot }\,\mathrm{pc}^{-3}}{\rho _c}\right)\left(\frac{100\,\mathrm{km}\,\mathrm{s}^{-1}}{v}\right)\mathrm{cm}^2/\mathrm{GeV}$$
(2)
with $`m_x`$ the mass of the dark matter particle and $`\rho _c`$ the central density. It is interesting to point out that for velocity dispersions corresponding to galaxy clusters this value is close to the upper limit estimated by Miralda-Escudé (2000) from the observationally inferred ellipticity of the cluster MS2137-23.
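The numerical content of Eq. (2) can be verified directly (Python; the Hubble time of $`4.3\times 10^{17}`$ s is an assumed round value):

```python
# Check of Eq. (2): sigma/m from tau = 1/(n sigma v) ~ Hubble time, n = rho/m.
Msun_g, pc_cm, GeV_g = 1.989e33, 3.086e18, 1.783e-24
rho_c = 0.02 * Msun_g / pc_cm**3          # central density in g/cm^3
v = 100.0e5                               # 100 km/s in cm/s
t_H = 4.3e17                              # assumed Hubble time in s
sigma_over_m = 1.0 / (rho_c * v * t_H)    # cm^2/g
print(sigma_over_m * GeV_g)               # ~3e-25 cm^2/GeV, consistent with
                                          # the ~4e-25 of Eq. (2)
```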
## 5 Summary
The discovery of a soft core in the cluster of galaxies CL0024+1654 by strong gravitational lensing measurements and the rotation curves of dark-matter dominated dwarf and LSB galaxies indicate that dark matter haloes have shallow inner density profiles from galactic to cluster scales. Studying in detail the observational data available for these cosmic objects, we found that the halo central density is nearly invariant with respect to the mass from galactic to cluster sizes.
We investigated different mechanisms and models for halo core formation within the hierarchical CDM scenario. We have shown that a lower cut-off at some wavelength in the CDM power spectrum and the assumption of high particle orbital thermal energies produce soft cores in the haloes, but the invariance of $`\rho _c`$ with respect to the mass is not reproduced. A more viable solution to the core problem is the introduction of self-interaction in the CDM particles. This being the case, we proposed gravothermal expansion as the mechanism responsible for the formation of soft cores in a hierarchical CDM scenario.
Using a thermodynamical approach we have estimated the central density of haloes in the case of maximum efficiency for self-interaction and found good agreement with the values inferred from observations. The central density in this case scales with the halo mass and its maximum circular velocity as $`\rho _c\propto V_m^6/M^2`$. This result implies that $`\rho _c`$ is roughly constant because for the CDM haloes $`M\propto V_m^n`$ with $`n\approx 3.2`$. If thermal equilibrium is restricted to the core, then the cross section given by eq.(2) may be derived consistently with observations. The cases analysed here, corresponding to a global and a local thermal equilibrium respectively, represent two limiting cases between which dark matter self-interaction may generate isothermal cores compatible with observations. We exclude from our analysis the extreme case of a very strong self-interaction, which may lead the core to a gravothermal catastrophe with a central density profile steeper than NFW. Such an extreme assumption of a large cross section may be immediately ruled out because a singular isothermal core would be produced, in contradiction with observations.
We stress the relevance that confirming the existence of soft cores with scale-invariant densities would have. In particular, the construction of high-resolution mass maps with gravitational lensing techniques for the inner regions of clusters is of great interest.
## Acknowledgments
ED thanks Fondazione CARIPLO for financial support.
# Effect of field tilting on the vortices in irradiated Bi-2212
## Abstract
We report on transport measurements in a Bi-$`2212`$ single crystal with columnar defects parallel to the c-axis. The tilt of the magnetic field away from the direction of the tracks is studied for filling factors $`f=B_z/B_\varphi <1`$. Near the Bose Glass transition temperature $`T_{BG}`$, the angular scaling laws are verified and we find the field-independent critical exponents $`\nu ^{\prime }=1.1`$ and $`z^{\prime }=5.30`$. Finally, above $`H_C`$ we evidence the signature of a smectic-A like vortex phase. These experimental results provide support for the Bose Glass theory.
Columnar defects have been introduced in HTSC in order to avoid dissipation due to vortex motion. In the case of parallel tracks Nelson and Vinokur have predicted a transition between a so-called Bose Glass (BG) and a vortex liquid, at a critical temperature $`T=T_{BG}`$. An important result is that field tilting does not destabilize the BG phase. Above a threshold transverse field $`H_C(T)`$ perpendicular to the columnar defects (CD), the flux lines accommodate simultaneously to the CD and to the transverse field direction, defining a smectic-A like phase as proposed by Hwa and Nelson . Increasing $`H_{\perp }`$ further leads to a vortex-liquid state . Recently, Grigera et al. have evidenced a threshold transverse field $`H_C(T)`$ with resistivity measurements performed on a twinned Y-123 single crystal.
In this paper, we present results obtained on a Bi-2212 single crystal, irradiated with CD parallel to the c-axis with 5.8 GeV $`Pb`$ ions at GANIL (Caen, France). The CD density corresponds to a matching field $`B_\varphi =0.75`$ T. Isothermal I-V curves have been obtained varying the tilt angle between the magnetic field and the c-axis of the crystal. Measurements were performed keeping the $`B_z`$ component constant, where the z-axis is along the c-axis of the sample. In the insert of Fig. 1, the log-log plot of isothermal ohmic resistance versus $`H_{\perp }/H_z`$ is displayed above and below $`T_{BG}`$. For $`T>T_{BG}`$, ohmic behaviour is detected, within the experimental sensitivity, even for vanishing transverse field. In contrast, for $`T<T_{BG}`$ the ohmic resistance goes to zero at some critical tilt, showing the existence of a critical transverse magnetic field $`H_C(T)`$.
$$R\propto |t|^{\nu ^{\prime }(z^{\prime }-2)}f_\pm \left((H_{\perp }/H_z)|t|^{-3\nu ^{\prime }}\right).$$
(1)
where $`t=(T-T_{BG})/T_{BG}`$ is the reduced temperature, and $`\nu ^{\prime }`$ and $`z^{\prime }`$ are critical exponents. Fig. 1 displays such scaling properties. We find that the scaling functions $`f_+`$ and $`f_{-}`$ collapse for all the filling fractions $`f=B_z/B_\varphi <1`$ investigated in our experiment. Moreover, we obtain field-independent critical exponents, $`\nu ^{\prime }=1.1\pm 0.1`$ and $`z^{\prime }=5.30\pm 0.05`$. These results are in good agreement with the BG theory and other experimental results .
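A minimal sketch of this scaling collapse (Python, with a hypothetical isotherm; the exponents are those quoted above):

```python
import numpy as np

def collapse(T, h, R, T_BG, nu=1.1, z=5.30):
    """Rescale an isotherm as in Eq. (1): h is H_perp/H_z and R the ohmic
    resistance; points from all temperatures should fall on f+ or f-."""
    t = (T - T_BG) / T_BG
    x = h * np.abs(t) ** (-3 * nu)
    y = R * np.abs(t) ** (-nu * (z - 2))
    return x, y

# hypothetical isotherm slightly above T_BG
T_BG = 20.0
h = np.logspace(-3, -1, 20)
R = np.logspace(-6, -4, 20)
x, y = collapse(20.5, h, R, T_BG)
print(x[0], y[0])
```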
We shall now consider the case $`T<T_{BG}`$. For $`H_{\perp }>H_C(T)`$, the vortex motion is mediated by kinks aligned in chains in the direction of the transverse magnetic field $`H_{\perp }`$, in such a way that the chain density is directly related to the linear resistance:
$$R\propto n_{chain}\propto (H_{\perp }-H_C(T))^{3/2}.$$
(2)
Scaling arguments lead to a critical transverse magnetic field $`H_C(T)`$ vanishing as $`T\to T_{BG}^{-}`$ as:
$$H_C(T)\propto |t|^{3\nu ^{\prime }}.$$
(3)
Figure 2 displays a plot of $`R^{\frac{2}{3}}`$ versus $`H_{\perp }/H_z`$. The solid lines are successful fits of Eq. 2 to the data. This result does support the existence of a smectic-A behaviour. The intersection of a linear fit with the abscissa-axis directly gives the critical transverse magnetic field $`H_C(T)`$ at a given temperature. The insert of Fig. 2 shows a log-log plot of $`H_C(T)`$ thus obtained versus the reduced temperature $`t=(T-T_{BG})/T_{BG}`$. The solid line represents a least-square fit of Eq. 3 to the data. According to Eq. 3, we find therefrom the critical exponent value $`\nu ^{\prime }=1.1\pm 0.1`$. Note that this value is consistent with the one we found above by another way, as predicted by Nelson and Vinokur .
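The fit procedure can be sketched as follows (Python, with synthetic data in place of the measured isotherms):

```python
import numpy as np

def critical_tilt(h, R):
    """Fit R^(2/3) versus h = H_perp/H_z to a straight line, as suggested by
    Eq. (2); the x-intercept estimates the critical tilt h_c = H_C/H_z."""
    slope, intercept = np.polyfit(h, R ** (2.0 / 3.0), 1)
    return -intercept / slope

# synthetic isotherm below T_BG with a true critical tilt of 0.05
h = np.linspace(0.06, 0.20, 15)
R = (h - 0.05) ** 1.5
print(critical_tilt(h, R))   # recovers ~0.05
```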
In conclusion, we have investigated the Bose Glass phase transition on an irradiated Bi-2212 single crystal versus both the filling fraction $`f=B_z/B_\varphi <1`$ and the magnetic field tilt. We have shown that the Bose Glass transition in the presence of a tilted field verifies the scaling rules predicted by Nelson and Vinokur with both field-independent scaling functions and critical exponents $`\nu ^{\prime }=1.1`$ and $`z^{\prime }=5.30`$. A smectic-A like behaviour has been evidenced. Finally, the critical tilt $`H_C/H_z`$, separating this phase from the Bose Glass one, has been found to vary as $`H_C/H_z\propto |t|^{3\nu ^{\prime }}`$, in agreement with the theoretical expectation of a sharp cusp in the $`T`$ vs. $`H_{\perp }/H_z`$ phase diagram.
# Vortex Pinning and Dynamics in Layered Superconductors with Periodic Pinning Arrays
## Abstract
We examine vortex dynamics and pinning in layered superconductors using three-dimensional molecular dynamics simulations of magnetically interacting pancake vortices. Our model treats the magnetic interactions of the pancakes exactly, with long-range logarithmic interactions both within and between planes. At the matching field the vortices are aligned with the pinning array. As a function of tilt angle for the pinning arrays a series of commensuration effects occur, seen as peaks in the critical current, due to pancakes finding a favorable alignment.
In superconductors with periodic pinning arrays interesting commensurability effects occur when the periodicity of the vortex lattice matches the periodicity of the pinning lattice. Experiments and simulations so far have been done with thin film superconductors where the vortex lattice and pinning can be considered two-dimensional. The case of vortex lattices interacting with a periodic pinning array in a layered 3D superconductor has not been studied. Such a system would correspond to an anisotropic superconductor such as BSCCO with a periodic arrangement of columnar defects. In this system the $`z`$-direction becomes important as the applied field or the pinning array is tilted. The dynamical effects of vortices moving in periodic pinning arrays in such a system have not been examined, in particular how the vortex lattice structure of the moving state differs from that of the pinned state. To study vortex pinning and dynamics in layered superconductors, we have developed a simulation containing the correct magnetic interactions between pancakes . This interaction is long range both in and between planes, and is treated using a rapidly converging summation method .
The overdamped equation of motion, for $`T=0`$, for vortex $`i`$ is given by $`\mathbf{f}_i=-\sum _{j=1}^{N_v}\mathrm{\nabla }U(\rho _{i,j},z_{i,j})+\mathbf{f}_i^{vp}+\mathbf{f}_d=\mathbf{v}_i`$, where $`N_v`$ is the number of vortices and $`\rho `$ and $`z`$ are the distances between pancakes in cylindrical coordinates.
$`U(\rho _{i,j},0)=2d\epsilon _0\left((1-\frac{d}{2\lambda })\mathrm{ln}\frac{R}{\rho }+\frac{d}{2\lambda }E_1(\rho )\right)`$
$`U(\rho _{i,j},z)=-\frac{d^2\epsilon _0}{\lambda }\left(\mathrm{exp}(-z/\lambda )\mathrm{ln}\frac{R}{\rho }-E_1(R)\right)`$
where $`R=\sqrt{z^2+\rho ^2}`$, $`E_1(\rho )=\int _\rho ^{\mathrm{}}\mathrm{exp}(-\rho ^{}/\lambda )\,\mathrm{d}\rho ^{}/\rho ^{}`$ and $`\epsilon _0=\mathrm{\Phi }_0^2/(4\pi \xi )^2`$. The pinning is placed in a square array of parabolic traps with a radius $`r_p`$ much smaller than the distance between pins. The location of the pinning sites is the same in every layer corresponding to correlated defects. A driving force $`f_d`$ is slowly increased and the vortex velocities are measured. Here we consider the first matching field case where the number of vortices $`N_v`$ equals the number of pinning sites $`N_p`$. We conduct a series of simulations in which the pinning sites are tilted at an increasing angle with respect to the $`z`$-axis. We will only consider driving that produces vortex motion transverse to the direction of the tilt angle. We examine systems with 8 layers containing 64 vortices and pins in each layer. Work for larger systems, varied fields and coupling strength will be presented elsewhere .
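A minimal sketch of one integration step of such an overdamped simulation is given below (Python). The pair force here is a placeholder $`1/\rho `$ repulsion within a single layer rather than the full in- and inter-plane interaction, and all parameters are illustrative:

```python
import numpy as np

def step(pos, pins, f_d, dt=0.01, r_p=0.1, f_p=1.0):
    """One overdamped step (v = f): placeholder 1/rho pair repulsion plus
    parabolic pinning traps of radius r_p and a uniform drive f_d along x."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - np.delete(pos, i, axis=0)
        r = np.linalg.norm(d, axis=1)
        f[i] += np.sum(d / r[:, None] ** 2, axis=0)     # repulsive 1/rho force
        dp = pins - pos[i]
        rp = np.linalg.norm(dp, axis=1)
        inside = rp < r_p                               # parabolic trap: force
        f[i] += f_p / r_p * np.sum(dp[inside], axis=0)  # proportional to dp
    f[:, 0] += f_d
    return pos + dt * f

rng = np.random.default_rng(1)
pos, pins = rng.random((64, 2)), rng.random((64, 2))
pos = step(pos, pins, f_d=0.1)
```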
In Fig. 1(a) we present the critical depinning force $`f_{dp}^c`$ as a function of tilt angle $`\theta `$. Here $`f_{dp}^c`$ peaks at $`\theta =0^{\circ }`$ when the pancakes are aligned with pins on all layers. As $`\theta `$ is increased $`f_{dp}^c`$ drops. For small tilt angles $`\theta <5^{\circ }`$ the vortex lines tilt with the pins. For larger angles the vortex lines realign in the $`z`$ direction. The depinning force $`f_{dp}^c`$ will then remain low as only one pancake in the straight vortex line will be sitting at a pinning site. At $`\theta =45^{\circ }`$ $`f_{dp}^c`$ shows a peak of the same magnitude as the peak at $`\theta =0`$. At this tilt angle, and also for any angle satisfying $`\theta =\mathrm{tan}^{-1}(n)`$ where $`n`$ is an integer, the pinning sites are again aligned in the $`z`$-direction so that a vortex line can be formed that is also aligned in the $`z`$-direction, with all the pancakes in a single vortex being able to sit in a pinning site. There are also peaks in $`f_{dp}^c`$ at $`\theta =26.6^{\circ }`$ and $`56.3^{\circ }`$. At these angles the pancakes again sit on all the pinning sites. The individual vortex lines now consist of half the number of pancakes as at $`\theta =0.0^{\circ }`$; however, there are now twice as many vortex lines, with the pancakes from an individual vortex line being coupled in every other layer. The view from the $`z`$-direction as shown in Fig. 1 for these angles indicates that the vortex lattice is now rectangular with twice as many vortex lines as at the other angles. At $`\theta =36.9^{\circ }`$ a smaller peak is observed. The vortex structure at this angle will be presented elsewhere .
In (b) and (c) we show the vortex structures for the pinned phase and moving phase for $`\theta =1.5^{\circ }`$ as seen from the $`z`$-direction. In (b) the vortices can be seen to stay aligned with the pins. In (c), for $`f_d>f_{dp}^c`$, the vortices realign with the $`z`$-direction. Such a transition from a tilted to a straight vortex lattice as a function of drive may be visible in neutron scattering experiments.
We acknowledge helpful discussions with L. N. Bulaevskii, A. Kolton, R.T. Scalettar, and G. T. Zimányi. This work was supported by CLC and CULAR (LANL/UC) and by the Director, Office of Adv. Scientific Comp. Res., Div. of Math., Information and Comp. Sciences, U.S. DoE contract DE-AC03-76SF00098.
# Simple dynamical models of the Sagittarius dwarf galaxy
## 1 Introduction
The Sagittarius dwarf galaxy is the closest satellite of the Milky Way (Ibata, Gilmore & Irwin 1994, 1995, hereafter IGI95). Soon after its discovery, several groups carried out simulations to see if its properties are consistent with the disruption of an object similar to the other dwarf companions of the Milky Way, but none produced a model in full agreement with both the age and the structure of the observed system (Johnston, Spergel & Hernquist 1995; Velázquez & White 1995; Edelsohn & Elmegreen 1997; Ibata et al. 1997, hereafter I97; Gómez-Flechoso, Fux & Martinet 1999). All groups assumed light to trace mass and an initial system similar to observed dwarf spheroidals. All found the simulated galaxy to disrupt after one or two orbits whereas the observed system has apparently completed ten or more. Most considered this to be a problem (but cf Velázquez & White 1995). As a result, several unconventional models were proposed to explain the survival and structure of Sagittarius. In an extensive numerical study, Ibata & Lewis (1998) concluded that Sagittarius must have a stiff and extended dark matter halo if it is to survive with 25% of its initial mass still bound today. Since an extended halo cannot remain undistorted in the Galaxy's tidal field for any conventional form of dark matter, it is unclear how this idea should be interpreted. Furthermore, it produces an uncomfortably large mass-to-light ratio ($`\sim 100`$), it cannot reproduce the observed elongation, and it suggests that little tidal debris will be liberated, in apparent conflict with the observations of Mateo, Olszewski & Morrison (1998), and Majewski et al. (1999) (see also Johnston et al. 1999). A somewhat less unorthodox model was proposed by Zhao (1998), where Sagittarius was scattered onto its current tightly bound orbit by an encounter with the Magellanic Clouds about 2 Gyr ago. This appears physically possible but requires careful tuning of the orbits of the two systems (see Ibata & Lewis 1998; and Jiang & Binney 2000). Another mechanism by which the dwarf could have moved to a short-period orbit is dynamical friction, which can be important only if Sagittarius has lost a lot of mass in the past. Jiang & Binney (2000) found a one-parameter family of initial configurations that evolve into something like the present system over a Hubble time. Their initial systems have masses of $`10^{10}`$–$`10^{11}\,\mathrm{M}_{\odot }`$ and start from a Galactocentric radius of $`\sim 200`$ kpc.
Driven by this apparent puzzle, we decided to search more thoroughly for a self-consistent model of the disruption of Sagittarius, which, after a Hubble time, has similar characteristics to those observed. (See Table 1 for a summary of the observed properties of the system.) Below we present two models which meet these requirements.
## 2 Method
In our numerical simulations, we represent the Galaxy by a fixed potential with three components: a dark logarithmic halo
$$\mathrm{\Phi }_{\mathrm{halo}}=v_{\mathrm{halo}}^2\mathrm{ln}(r^2+d^2),$$
(1)
a Miyamoto-Nagai disk
$$\mathrm{\Phi }_{\mathrm{disk}}=-\frac{GM_{\mathrm{disk}}}{\sqrt{R^2+(a+\sqrt{z^2+b^2})^2}},$$
(2)
and a spherical Hernquist bulge
$$\mathrm{\Phi }_{\mathrm{bulge}}=-\frac{GM_{\mathrm{bulge}}}{r+c},$$
(3)
where $`d`$=12 kpc and $`v_{\mathrm{halo}}`$ = 131.5 $`\mathrm{km}\,\mathrm{s}^{-1}`$; $`M_{\mathrm{disk}}=10^{11}\mathrm{M}_{\odot }`$, $`a`$ = 6.5 kpc and $`b`$ = 0.26 kpc; $`M_{\mathrm{bulge}}=3.4\times 10^{10}\mathrm{M}_{\odot }`$ and $`c`$ = 0.7 kpc. This choice of parameters gives a flat rotation curve with an asymptotic circular velocity of 186 $`\mathrm{km}\,\mathrm{s}^{-1}`$. The mass of the dark-matter halo within 16 kpc is 7.87 $`\times 10^{10}\mathrm{M}_{\odot }`$ in this model.
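As a quick consistency check, the circular velocity curve implied by these three components can be evaluated directly (Python, in the disk plane; the code below is an illustration, not taken from the original analysis):

```python
import numpy as np

G = 4.301e-6   # kpc (km/s)^2 / Msun

def vcirc(R):
    """Circular velocity at z = 0 from halo, disk and bulge, R in kpc."""
    v_halo, d = 131.5, 12.0
    Md, a, b = 1.0e11, 6.5, 0.26
    Mb, c = 3.4e10, 0.7
    v2_halo = 2 * v_halo**2 * R**2 / (R**2 + d**2)        # R dPhi/dR
    v2_disk = G * Md * R**2 / (R**2 + (a + b)**2) ** 1.5
    v2_bulge = G * Mb * R / (R + c) ** 2
    return np.sqrt(v2_halo + v2_disk + v2_bulge)

print(vcirc(np.array([8.0, 50.0, 200.0])))
# flattens towards sqrt(2) * 131.5 ~ 186 km/s at large R, as stated above
```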
We represent the satellite galaxy by a collection of $`10^5`$ particles and model their self-gravity by a multipole expansion of the internal potential to fourth order \[White 1983, Zaritsky & White 1988\]. This type of code has the advantage that a large number of particles can be followed in a relatively small amount of computer time. Hence a substantial parameter space can be explored while retaining considerable detail on the structure of the disrupted system. In this quadrupole expansion, higher than monopole terms are softened more strongly. We choose $`\epsilon _1\approx 0.2`$–$`0.25\,r_c`$ for the monopole term ($`r_c`$ is the core radius of the system) and $`\epsilon _2=2\epsilon _1`$ for dipole and higher terms and for the centre of expansion. The centre of expansion is a particle which, in practice, follows the density maximum of the satellite closely at all times.
For the stellar distribution of the pre-disruption dwarf we choose a King model (King 1966), since this is a good representation of the distant dwarf spheroidals. King models are defined by a combination of three parameters: $`\mathrm{\Psi }(r=0)`$ (depth of the potential well of the system), $`\sigma ^2`$ (measure of the central velocity dispersion), and $`\rho _0`$ (central density) or $`r_0`$ (King radius). The ratio $`\mathrm{\Psi }(r=0)/\sigma ^2`$ defines how centrally concentrated the system is, and for any value of this parameter, a set of homologous models with different central densities and core (or King) radii may be found. We assume that the progenitor of Sagittarius obeys the known metallicity-luminosity relation for the Local Group dSph \[Mateo 1998\]. The metallicity determinations for Sagittarius \[I97\] indicate $`[\mathrm{Fe}/\mathrm{H}]\approx -1`$, corresponding to a total luminosity in the range $`3.5\times 10^7`$–$`3.5\times 10^8\,\mathrm{L}_{\odot }`$. To obtain an initial guess for the mass of the system, we transform this luminosity into a mass assuming a mass-to-light ratio $`\approx 2`$. The relevant initial stellar mass interval is then $`7\times 10^7`$–$`7\times 10^8\,\mathrm{M}_{\odot }`$.
Note that our choice of a fixed potential to represent our Galaxy means that we neglect any exchange of energy between the satellite and the Galactic halo. This is an excellent approximation for the range of orbits and satellite masses that we consider, since these imply dynamical friction decay times substantially in excess of the Hubble time. The orbits are also sufficiently large that impulsive heating during disk passages can be neglected.
The orbit of Sagittarius is relatively well constrained \[I97\]. The heliocentric distance $`d=25\pm 2`$ kpc and position $`(l,b)=(5.6^{\circ },-14^{\circ })`$ of the galaxy core are well-determined; the heliocentric radial velocity $`v_r^{\mathrm{sun}}=140\pm 2\,\mathrm{km}\,\mathrm{s}^{-1}`$, and its variation across the satellite, are also accurately measured. Outside the main body ($`b<-20^{\circ }`$) the radial velocity shows a small gradient $`\mathrm{d}v_r/\mathrm{d}b\approx 3\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{deg}^{-1}`$, but no gradient is detected across the main body itself. The proper motion measurements are not very accurate; $`\mu _b=2.1\pm 0.7\,\mathrm{mas}\,\mathrm{yr}^{-1}`$, and no measurement is available in the $`l`$-direction. On the other hand the strong North-South elongation of the system suggests that it has little motion in the $`l`$-direction, thus implying that the orbit should be close to polar. We generate a range of possible orbits satisfying these constraints and concentrate on those with relatively long periods in order to maximise the survival chances of our satellite. We begin all our simulations half a radial period after the Big Bang to allow for the initial expansion. We place the initial satellite at apocentre, then we integrate forward until $`13`$ Gyr. The orbits are chosen so that at this time the position and velocity of the satellite core correspond to those observed. We allow ourselves some slight freedom in choosing the final time in order to fit the observed data as well as possible.
## 3 Results
Figure 1 gives an example of an orbit which is consistent with all the current data on Sagittarius. It has a pericentre of $`16.3`$ kpc, an apocentre of $`68.3`$ kpc, and a radial period of $`0.85`$ Gyr. We use similar orbits for all the simulations described below. Note that the slow precession about the Galactic rotation axis is in part due to the quasi-polar nature of the orbit and in part to the fact we have assumed the Milky Wayโs dark halo to be spherical.
After letting our satellite relax in isolation, we integrate each simulation for $`13`$ Gyr. In practice we needed to run a large number of simulations, and test each to see if it satisfies the observational constraints at the present time. Since it remains uncertain whether dwarf spheroidals have extended dark halos (e.g. Klessen & Kroupa 1998), we have considered both purely stellar models and models in which the initial stellar system is embedded in a more massive and more extended dark halo.
### 3.1 Constant mass-to-light ratio: A purely stellar model
Our preferred purely stellar model (Model I) initially has a core radius of $`r_c`$ = 0.44 kpc, a total velocity dispersion of 18.9 $`\mathrm{km}\,\mathrm{s}^{-1}`$, and a concentration parameter $`c=\mathrm{log}_{10}(r_t/r_c)\approx 0.83`$. This implies a total mass of $`M=4.66\times 10^8\mathrm{M}_{\odot }`$. For a satellite to survive for about 10 Gyr on an orbit with pericentre $`\approx 15`$ kpc, apocentre $`\approx 70`$ kpc, and period $`\approx 1`$ Gyr (for which the observational constraints are satisfied) its initial central density has to be $`\rho _0\approx 0.36`$–$`0.4\,\mathrm{M}_{\odot }\,\mathrm{pc}^{-3}`$. Satellites with significantly smaller initial densities do not survive long enough.
In Figure 2 we plot heliocentric distance as a function of galactic latitude for stars projected near the main remnant 12.5 Gyr after infall. Streams of particles are visible at all latitudes over a broad range in distance. Sagittarius has been orbiting long enough for its debris streams to be wrapped several times around the Galaxy. (See also Figure 8.)
The remnant galaxy, i.e. the central region of the satellite's debris, is similar to the real system. In Figure 3 we plot its mass surface density. The transformation from observed surface brightness to mass surface density (which is what the simulations give us) can be done as follows. The observed mass surface density $`\mathrm{\Sigma }`$ for an assumed mass-to-light ratio $`\mathrm{{\rm Y}}`$ is
$$\mathrm{\Sigma }=\frac{N_XL_X}{f_X}\mathrm{{\rm Y}}\left[\frac{\mathrm{M}_{\odot }}{\mathrm{deg}^2}\right],$$
(4)
where $`N_X`$ is the number of observed stars of type $`X`$ per square degree, $`L_X`$ is their luminosity, and $`f_X`$ is the fraction of the total luminosity in stars of type $`X`$. In IGI95 the spatial structure of Sagittarius was determined from the excess of counts at the apparent magnitude of the horizontal branch. Uncertainties in the result are due primarily to contamination by sources in the Galactic bulge. Their lowest isodensity contour is at $`\mathrm{\Sigma }_{\mathrm{min}}\approx 5\times 10^5\frac{\mathrm{M}_{\odot }}{\mathrm{deg}^2}`$, assuming $`\mathrm{{\rm Y}}\approx 2.25`$ and \[Fe/H\] $`\approx -1`$ (Bergbusch & vandenBerg 1992), and has an extent of $`7.5^{\circ }\times 3^{\circ }`$. This same isodensity contour is shown in Figure 3 as a thick line. It has an extent of $`8^{\circ }\times 4.8^{\circ }`$, in reasonable agreement with the observations given the uncertainties. In I97 isodensity contours were derived from counts of main sequence stars close to the turn-off, roughly one magnitude above the plate limit. The minimum contour in this case corresponds to $`\mathrm{\Sigma }_{\mathrm{min}}\approx 10^5\frac{\mathrm{M}_{\odot }}{\mathrm{deg}^2}`$, and has an extent of roughly $`15^{\circ }\times 7^{\circ }`$. In Fig. 3 this contour is shown as a dashed line, and has an extent of $`21^{\circ }\times 6.5^{\circ }`$, also in good agreement with the observations.
Note that the isophotes (or isodensity contours) become rounder towards the centre of the satellite. Its angular core radius is $`R_c\approx 1.29^{\circ }`$, which for a distance of 26 kpc (derived from the simulations) corresponds to 0.58 kpc, again in good agreement with the observations.
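This is just the small-angle conversion, e.g.:

```python
import math

def angular_to_physical(theta_deg, distance_kpc):
    """Small-angle conversion from angular to physical size."""
    return distance_kpc * math.radians(theta_deg)

print(angular_to_physical(1.29, 26.0))   # ~0.58 kpc, the quoted core radius
```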
The kinematic properties of the remnant galaxy are more difficult to compare with observations because a substantial amount of mass from debris streams is projected on top of the main body. Like I97, we measure the radial velocity across the system considering only particles for which $`100\,\mathrm{km}\,\mathrm{s}^{-1}\le v_r^{\mathrm{sun}}\le 180\,\mathrm{km}\,\mathrm{s}^{-1}`$. In the left panel of Figure 4 we plot the heliocentric radial velocity, and in the right panel we plot its dispersion as a function of Galactic latitude. For comparison, we analysed the observations of I97 at CTIO in the same way (their Table 2b); these data have a precision of a few $`\mathrm{km}\,\mathrm{s}^{-1}`$ (triangles in Figure 4). Our model is consistent with the observed kinematics; we obtain a heliocentric radial velocity of $`139.5\,\mathrm{km}\,\mathrm{s}^{-1}`$ and an internal velocity dispersion in the radial direction of $`11\,\mathrm{km}\,\mathrm{s}^{-1}`$ for the main body. However, when the radial velocity restrictions for inclusion in this calculation are relaxed, we find much larger velocity dispersions because of the contribution of stars from other streams. It is important to consider this problem when determining which stars should be considered members of Sagittarius.
### 3.2 Varying mass-to-light ratio: A model with a dark halo
The observational data for Sagittarius mainly refer to the current remnant system, which corresponds to the innermost regions of the progenitor satellite. As a consequence, models that are initially dark matter dominated in their outskirts are relatively poorly constrained.
As an example we focus on a progenitor with a mass distribution which is similar to that of Model I in its inner regions, but is considerably more extended. We take the mass distribution to be a (heavy) King model with $`r_c=0.54`$ kpc and $`r_t=10.4`$ kpc, with an initial total velocity dispersion of 25.2 $`\mathrm{km}\,\mathrm{s}^{-1}`$, and total mass of $`M=1.7\times 10^9\mathrm{M}_{\odot }`$. For an orbit like that of Model I this produces a suitable remnant after 12 Gyr. The mass distribution of this remnant satisfies many of the observational constraints of Table 1. Its core radius is slightly larger, $`r_c\approx 0.65`$ kpc, and the radial velocity dispersion in the main body is 12.1 $`\mathrm{km}\,\mathrm{s}^{-1}`$.
We will construct a two-component satellite with this mass distribution by solving for the dependence of mass-to-light ratio on initial binding energy that produces the initial light profile of Model I. We choose the mass-to-light ratio of satellite material to be a decreasing function of binding energy, so that the most bound particles have near "stellar" mass-to-light ratios, whereas weakly bound particles are almost entirely "dark". From the energy distribution of the heavy King model, and that of a King model with $`r_0=0.095`$ kpc and $`\sigma =25.6\,\mathrm{km}\,\mathrm{s}^{-1}`$, we can derive the mass-to-light ratio as a function of binding energy as
$$\mathrm{{\rm Y}}(\epsilon )=\mathrm{{\rm Y}}_{*}\frac{\mathrm{d}M/\mathrm{d}\epsilon \,(\epsilon )}{\mathrm{d}M_{*}/\mathrm{d}\epsilon \,(\epsilon =\epsilon _{*}+\epsilon _{\mathrm{max}}-\epsilon _{*,\mathrm{max}})}$$
(5)
where $`\mathrm{{\rm Y}}_{*}`$ is the mass-to-light ratio of a stellar population. The energies $`\epsilon _{*}`$ of the lighter King model have been shifted by a fixed amount $`\epsilon _{\mathrm{max}}-\epsilon _{*,\mathrm{max}}`$, to be on the same scale as that of the heavier King model. The resulting mass-to-light ratio is shown in Figure 5.
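A sketch of this construction (Python, with mock energy samples in place of the two King models, and equal-mass particles so that the weights drop out; purely illustrative):

```python
import numpy as np

def mass_to_light(eps, eps_heavy, eps_star, Y_star=1.5, nbins=40):
    """Sketch of Eq. (5): ratio of the differential energy distributions of
    the heavy (total mass) and light (stellar) models, after shifting the
    stellar energies onto the scale of the heavy model."""
    lo = min(eps_heavy.min(), eps.min())
    hi = max(eps_heavy.max(), eps.max())
    bins = np.linspace(lo, hi, nbins + 1)
    dM, _ = np.histogram(eps_heavy, bins=bins)
    shift = eps_heavy.max() - eps_star.max()    # eps_max - eps_*,max
    dMs, _ = np.histogram(eps_star + shift, bins=bins)
    Y = Y_star * dM / np.maximum(dMs, 1)        # guard against empty bins
    centres = 0.5 * (bins[:-1] + bins[1:])
    return np.interp(eps, centres, Y)

rng = np.random.default_rng(0)
eps_heavy = rng.normal(0.0, 1.0, 100000)        # mock, not a real King model
eps_star = rng.normal(-1.0, 0.5, 100000)
print(mass_to_light(np.array([-2.0, 0.0, 2.0]), eps_heavy, eps_star))
```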
In Figure 6 we show the surface mass densities normalized to their central values for Model I (only stars), for the heavy King model and for the two-component model ("stars" and dark-matter). We shall refer to this two-component model as Model II, which is obtained by weighting each simulation particle by $`\mathrm{{\rm Y}}(\epsilon )^{-1}`$.
If we require that the central stellar mass surface densities of Model I and Model II be the same, we find that the total mass in stars in Model II is $`1.69\times 10^8\mathrm{M}_{\odot }`$. To match the Sagittarius surface brightness, we choose the central stellar mass-to-light ratio $`\mathrm{{\rm Y}}_{*}=1.5`$. Thus, the total luminosity of Model II is $`1.13\times 10^8\mathrm{L}_{\odot }`$, implying a mass-to-light ratio of 15.1. Its initial velocity dispersion is 23 $`\mathrm{km}\,\mathrm{s}^{-1}`$. The visible extent of the remnant has properties which are almost identical to those of Model I, and we find its velocity dispersion to be $`11.1\,\mathrm{km}\,\mathrm{s}^{-1}`$. Both results are again in good agreement with the observations.
The two initial satellites (Models I and II) have the same stellar mass distributions in their inner regions, differing only in that one has an extended dark halo. We may thus conclude that the presence of a dark halo does not affect the final structure of the remnant, which is very similar in both models. However there is a significant difference in the properties of their debris streams. In Model I the unbound debris streams are predicted to contain 5.2 times the light in the main body of the remnant ($`M_V=-14.1`$), as defined by the dotted contour in Figure 3, whereas in Model II ($`M_V=-13.4`$) this ratio is 4.85. If we had chosen Model II to be a constant mass-to-light ratio model, we would have got an almost equally good fit to the main body of Sagittarius, but would have predicted the streams to contain 19 times the light in the main body of the remnant. In this last case, Sagittarius would have contributed $`4.56\times 10^8\mathrm{L}_{\odot }`$ to the Galactic stellar halo in the form of debris stars (for $`\mathrm{{\rm Y}}=3.5`$). Thus we see that the observed properties of the main remnant do not usefully constrain the number of stars that may be present in the debris streams, but that the different models can be better constrained from the properties of their debris streams, as we exemplify below.
### 3.3 Discussion
#### 3.3.1 Some predictions
In this section we concentrate for simplicity on Model I. We can use it to predict star counts as a function of distance and radial velocity at different points on the sky. We focus on fields along the path defined by the orbit of Sgr, which is where we expect to find debris streams. This is illustrated in Figure 7, where the number counts are normalized to their values on the main body of our simulated Sagittarius, as shown in the first row. We assume fields which are $`1^{\circ }\times 1^{\circ }`$. For the distance, we use 5 kpc bins, whereas for the radial velocity we take 25 $`\mathrm{km}\,\mathrm{s}^{-1}`$ bins. Note that the contrast of structures in the radial velocity counts is generally larger than in the distance counts, indicating that it should be easier to detect streams in velocity space rather than as density inhomogeneities (see also Helmi & White 1999). This is particularly true considering the much greater relative precision of the velocity measurements. Space density enhancements often occur near the orbital turning points; several are seen as sharp features in the central panel of Fig. 8.
Our model can also be used to predict where streams originating in different mass loss events should be found. This is illustrated in Figure 8 where different colours indicate material lost at different pericentric passages. Note that since the surface brightness of the unbound material decreases with time, material lost in early passages is considerably more difficult to detect than recent mass loss (for an axisymmetric potential the time dependence is $`1/t^2`$, but if the potential may be considered as nearly spherical the surface density will effectively decrease as $`1/t`$; see Helmi & White 1999). The central panel (latitude vs. heliocentric distance) explains why Sagittarius streams have been more difficult to detect above the Galactic plane than below it, even though the density contrast is higher for the northern streams (as shown in the second and third panels of Fig. 7). From the left panel ($`-90^{\circ }\le l\le 90^{\circ }`$), we see that the stream of stars lost in the previous pericentric passage (shown in blue) becomes more distant as we go north. For example, at $`b=40^{\circ }`$, the stream is located approximately 50 kpc from the Sun. The red giant clump visual magnitude at this distance would be roughly 19.3<sup>m</sup>, compared to the 17.85<sup>m</sup> observed in the main body of Sagittarius.
#### 3.3.2 Comparison to data outside the main body of Sagittarius
Even though we have constructed our models to reproduce the properties of the main body of Sagittarius, it is nevertheless worthwhile to compare our simulations to data sets which have claimed detections of Sagittarius debris.
##### Outer Structure of Sagittarius.
Mateo et al. (1998) have traced Sagittarius material out to 30 degrees from its nucleus: the globular cluster M54. They obtained deep photometric data along the southeast extension of the major axis of Sagittarius. In Figure 9 we show the particle counts in our simulation for the strip $`3^{\circ }`$ to $`10^{\circ }`$ in longitude, spanning about $`30^{\circ }`$ in latitude outside the main remnant body. For comparison we plot the data by Mateo and collaborators, shifted a few degrees in latitude, and arbitrarily offset in number counts. Thus we qualitatively reproduce the break in the number counts profile. This change in slope is indicative of the transition between material which is still bound today and that lost in the last pericentric passage.
##### Star counts at $`b=-40^{\circ }`$.
Majewski et al. (1999) have claimed a detection of a possible stream from Sagittarius at $`b=-40^{\circ }`$ and $`l=11^{\circ }`$, at a slightly smaller heliocentric distance of $`\sim 23`$ kpc and with a radial velocity of the order of $`30\mathrm{km}\mathrm{s}^{-1}`$. As they discuss, this velocity may be strongly affected by contamination by other Galactic components. We note, however, that we would predict a stream of stars (shown in blue) going through this latitude and longitude with roughly the observed distance, and with a radial velocity of $`55\mathrm{km}\mathrm{s}^{-1}`$. (See the central and bottom left panels of Fig. 8, $`-90^{\circ }\le l\le 90^{\circ }`$.) As mentioned above, this stream is formed mostly by material lost in the previous pericentric passage and not three passages ago, as in the model of Johnston et al. (1999). This difference reflects the different orbital timescales in the two models. The surface density of stars may be able to distinguish between them; it is predicted to be higher in our case.
Unfortunately, Majewski and collaborators could not detect the northern stream. They either did not reach the magnitude limit of $`19.3^m`$ expected for the red giant clump, or were offset by a few degrees from its expected location. Thus, for example, Majewski et al. (1999) had a limiting magnitude of $`\sim 21`$ at $`b=41^{\circ }`$ and $`l=-6^{\circ }`$, but $`V\sim 19`$ at $`b=41^{\circ }`$ and $`l=6^{\circ }`$. The actual stream in our model is predicted to go through $`l\sim 1^{\circ }`$ and to be about $`2^{\circ }`$ wide. Note that the width prediction is more secure than the location, since the motion of Sagittarius in the $`l`$-direction is poorly constrained at present, although a flattened halo would make the streams wider.
##### RR Lyrae found by the Sloan Digital Sky Survey.
The Sloan Digital Sky Survey (SDSS) commissioning data has detected 148 candidate RR Lyrae stars in about 100 deg<sup>2</sup> of sky, along the celestial equator ($`-1.27^{\circ }\le \delta \le 1.27^{\circ }`$), and from $`\alpha =160.5^{\circ }`$ to $`\alpha =236.5^{\circ }`$ (Ivezic et al., 2000). Although the faint-magnitude limit of the SDSS would allow them to detect RR Lyrae stars to large Galactocentric distances, they find no candidates fainter than r\*$`\sim `$20, i.e., farther than 65 kpc from the Galactic center. The distribution of stars in their sample is very inhomogeneous and shows a clump of over 50 stars at about 45 kpc from the Galactic centre, which is also detected in the distribution of nonvariable objects with RR Lyrae star colors.
By carefully studying Figure 8, and from our previous discussion, we are naturally led to believe that this substructure could be associated with the northern streams of Sagittarius. In the upper left panel of Figure 10 we see how, in our simulations of Model I, a stream of material intersects the area observed by the SDSS. The positions of the particles in our simulations are in excellent agreement with those of the RR Lyrae candidates belonging to the reported substructure. The upper right panel shows the visual magnitude of the particles falling in the region of the sky analysed by the SDSS. We note here that there are basically two substructures in this region: one at $`V\sim 19.5^m`$, and a second one at a fainter magnitude $`V\sim 20.5^m`$ (for $`M_V=0.7^m`$ characteristic of RR Lyrae stars, e.g. Layden et al. (1996)). The first lump could clearly correspond to the substructure observed in the SDSS data. The material in this lump is mostly formed by particles that were lost in recent pericentric passages (i.e., 1–3 Gyr ago), as shown in the bottom left panel of Figure 10.
As Ivezic et al. (2000) discuss, they do not find any RR Lyrae stars fainter than $`V\sim 20^m`$. This would be in apparent contradiction with our results (e.g., top right panel of Fig. 10). However, we need to estimate how much material we find in each lump, calibrate this number with respect to the number of RR Lyrae in the lump observed by the SDSS, and thereby determine how many RR Lyrae the SDSS could have missed. In the first lump we find 1264 particles, whereas the second has 362 particles. According to Ivezic et al. (2000) the detection efficiency decreases rapidly between $`V=20^m`$, where it is fifty per cent, and $`V=21^m`$, where it is zero. Here we assume that for stars of $`V\sim 20.5^m`$ this efficiency is about 15%, which means that only 54 of the 362 particles could, in principle, have been observed. Therefore, we estimate that the ratio of detectable faint debris material to bright debris material is 0.043 in this region of the sky. Thus if the SDSS found $`\sim 50`$ RR Lyrae belonging to the first substructure, it should have detected $`2.14\pm 1.46`$ RR Lyrae in the fainter magnitude range. This means that the failure to detect fainter RR Lyrae in this region of the sky is barely significant in this context. From this perspective we cannot rule out that a second stream of debris material is located at much larger distances (typically between 80 and 100 kpc from the Sun, as shown in Figure 8).
Nevertheless, the absence of a visible stream may be indicating that this material could be dark-matter dominated. This second stream is formed by particles that became unbound more than 7 Gyr ago. It therefore corresponds to particles orbiting the outskirts of the progenitor of Sagittarius. If this region of the system was dark-matter dominated, such streams would remain unobservable. Fainter data ($`V\sim 20`$–$`21^m`$) in this region of the sky could be crucial to constrain the initial properties of the system, e.g. size and total luminosity. This particular region of the sky should thus be explored further!
##### Carbon stars by the APM.
The APM survey has detected about 75 high latitude carbon giants, presumably belonging to the halo. These stars, being of intermediate age, could trace streams that have recently become unbound from Sagittarius or from other Galactic satellites. Ibata et al. (2000) have proposed that a large fraction of the observed halo carbon stars belong to Sagittarius tidal debris, since they preferentially occur near the great circle of its orbit. Even though there are large uncertainties in the determination of distances to these carbon stars, and the survey is not complete, particularly in regions where we expect Sagittarius streams to be present, this proposal clearly fits within the expectations for the models we have developed here.
## 4 Conclusions
We have found viable models for the Sagittarius dwarf galaxy with a wide range of total luminosities and masses, and both with and without extended dark halos. A purely stellar progenitor could be a King model with a total velocity dispersion of 18.9 $`\mathrm{km}\mathrm{s}^{-1}`$, a core radius of 0.44 kpc and a tidal radius of 3 kpc. For the case where the progenitor is embedded in an extended massive halo, the initial stellar distribution follows a King profile with the same core radius, a slightly larger total velocity dispersion of $`23\mathrm{km}\mathrm{s}^{-1}`$ and similar extent. The dark matter is more extended. The data available at present only weakly constrain the total initial extent either of the light or of the mass. The observed metallicity data, for example, are consistent with an initial galaxy similar to either of our detailed models, both of which would lie within the scatter of the luminosity–size–velocity dispersion–metallicity distribution for more distant dwarf spheroidal galaxies in the Local Group. Thus we see no indication that Sagittarius is in any way anomalous. Further work on the debris streams of Sagittarius is needed to constrain better its initial total luminosity, and to distinguish between purely stellar or dark-matter dominated progenitors.
It is certainly encouraging that our models could reproduce the data available both on the main body and on the debris streams. We wish to stress, however, that this does not mean that we have found the "ultimate" model. Other models with similar characteristics may also exist. Alternatives would include progenitors with smaller stellar masses or larger dark halos; flattened systems or systems with anisotropic velocity distributions; or systems with a stellar disk and a spherical dark halo (as proposed for the progenitors of dSph by Mayer et al. (2000)). Moreover, our assumption of a rigid Galactic potential, which does not vary in time over 12 Gyr, is clearly simplistic in view of current models for the formation of structure in the Universe. Only when we have a better estimate of the total luminosity of Sagittarius, both in its main body and in its streams, will we be able to model it in greater detail. The present interest in the debris streams of Sagittarius will help us understand not only the properties of what has turned out to be just another dwarf spheroidal, but also the formation history of our Galaxy. A complete map of the streams will, for example, allow us to derive the Galactic potential (Johnston et al. 1999). If these streams are less smooth or broader than expected, this may indicate smaller scale structure present in the halo either now or when it was assembled.
## Acknowledgments
We thank Pavel Kroupa and James Binney for comments on earlier versions of this manuscript. We have enjoyed discussions with Heather Morrison and Paul Harding. CONICET, Fundación Antorchas, DAAD–Fundación Antorchas, and EARA are acknowledged for financial support.
# Critical behavior of n-vector model with quenched randomness
## 1 Introduction
Phase transitions and critical phenomena are among the most widely investigated topics in modern physics. Nevertheless, only a limited number of exact and rigorous results are available, and they refer mainly to two-dimensional systems and fractals . Rigorous results have also been obtained in four dimensions, based on an exact renormalization group (RG) technique . The RG method, obviously, provides exact results at $`d>4`$ (where $`d`$ is the spatial dimensionality), but this case is somewhat trivial from the point of view of critical phenomena. In three dimensions, approximate methods based on perturbation theory are usually used.
Here we present a particular result obtained within the diagrammatic perturbation theory. The Ginzburg-Landau phase transition model with $`O(n)`$ symmetry (i.e., the $`n`$-vector model) is considered, which includes a quenched random temperature disorder. The usual prediction of the perturbative RG field theory is that, in the case of spatial dimensionality $`d<4`$ and small enough $`n`$ (at $`n=1`$ and $`n\to 0`$, in particular), the critical behavior of the $`n`$-component vector model is essentially changed by the quenched randomness. Here we challenge this conventional point of view, based on a mathematical proof. We have proven rigorously that, within the diagrammatic perturbation theory, the critical exponents in the actually considered model cannot be changed by the quenched randomness at $`n\to 0`$.
## 2 The model
We consider a model with the Ginzburg-Landau Hamiltonian
$`H/T={\displaystyle \int \left[\left(r_0+\sqrt{u}f(\mathbf{x})\right)\phi ^2(\mathbf{x})+c\left(\nabla \phi (\mathbf{x})\right)^2\right]d\mathbf{x}}`$
$`+uV^{-1}{\displaystyle \underset{i,j,\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3}{\sum }}\phi _i(\mathbf{k}_1)\phi _i(\mathbf{k}_2)u_{\mathbf{k}_1+\mathbf{k}_2}\phi _j(\mathbf{k}_3)\phi _j(-\mathbf{k}_1-\mathbf{k}_2-\mathbf{k}_3)`$ (1)
which includes a random temperature (or random mass) disorder represented by the term $`\sqrt{u}f(\mathbf{x})\phi ^2(\mathbf{x})`$. For convenience, we call this model the random model. In Eq. (1) $`\phi (\mathbf{x})`$ is an $`n`$-component vector (the order parameter field) with components $`\phi _i(\mathbf{x})=V^{-1/2}\sum _{\mathbf{k}}\phi _i(\mathbf{k})e^{i\mathbf{k}\mathbf{x}}`$, depending on the coordinate $`\mathbf{x}`$, and $`f(\mathbf{x})=V^{-1/2}\sum _{\mathbf{k}}f_{\mathbf{k}}e^{i\mathbf{k}\mathbf{x}}`$ is a random variable with the Fourier components $`f_{\mathbf{k}}=V^{-1/2}\int f(\mathbf{x})e^{-i\mathbf{k}\mathbf{x}}d\mathbf{x}`$. Here $`V`$ is the volume of the system, $`T`$ is the temperature, and $`\phi _i(\mathbf{k})`$ is the Fourier transform of $`\phi _i(\mathbf{x})`$. An upper limit of the magnitude of the wave vector, $`k_0`$, is fixed. It means that the only allowed configurations of the order parameter field are those corresponding to $`\phi _i(\mathbf{k})=0`$ at $`k>k_0`$. This is the limiting case $`m\to \mathrm{\infty }`$ ($`m`$ is integer) of the model where all configurations of $`\phi (\mathbf{x})`$ are allowed, but Hamiltonian (1) is completed by the term $`\underset{i,\mathbf{k}}{\sum }(k/k_0)^{2m}|\phi _i(\mathbf{k})|^2`$.
The perturbation expansions of various physical quantities in powers of the coupling constant $`u`$ are of interest. In this case $`n`$ may be considered as a continuous parameter. In particular, the case $`n\to 0`$ has a physical meaning, describing the statistics of polymers .
The system is characterized by the two-point correlation function $`G_i(\mathbf{k})`$ defined by the equation
$$\left\langle \phi _i(\mathbf{k})\phi _j(-\mathbf{k})\right\rangle =\delta _{i,j}G_i(\mathbf{k}).$$
(2)
Because of the $`O(n)`$ symmetry of the considered model, we have $`G_i(\mathbf{k})\equiv G(\mathbf{k})`$, i.e., the index $`i`$ may be omitted. It is supposed that the averaging is performed over the $`f(\mathbf{x})`$ configurations with a Gaussian distribution for the Fourier components $`f_{\mathbf{k}}`$, i.e., the configuration $`\{f_{\mathbf{k}}\}`$ is taken with the weight function
$$P(\{f_{\mathbf{k}}\})=Z_1^{-1}\mathrm{exp}\left(-\underset{\mathbf{k}}{\sum }b(\mathbf{k})f_{\mathbf{k}}^2\right),$$
(3)
where
$$Z_1=\int \mathrm{exp}\left(-\underset{\mathbf{k}}{\sum }b(\mathbf{k})f_{\mathbf{k}}^2\right)D(f),$$
(4)
and $`b(\mathbf{k})`$ is a positive function of $`k`$. Eq. (1) defines the simplest random model considered in (according to the universality hypothesis, the factor $`\sqrt{u}`$ does not make an important difference). Our random model describes a quenched randomness, since the distribution over the configurations $`\{f_{\mathbf{k}}\}`$ of the random variable is given (by Eqs. (3) and (4)) and depends neither on the temperature nor on the configuration $`\{\phi _i(\mathbf{k})\}`$ of the order parameter field. More precisely, the joint distribution over the configurations $`\{f_{\mathbf{k}}\}`$ and $`\{\phi _i(\mathbf{k})\}`$ is given by
$$P(\{f_{\mathbf{k}}\},\{\phi _i(\mathbf{k})\})=P(\{f_{\mathbf{k}}\})\times Z_2^{-1}(\{f_{\mathbf{k}}\})\mathrm{exp}(-H/T),$$
(5)
where $`Z_2(\{f_{\mathbf{k}}\})=\int \mathrm{exp}(-H/T)D(\phi )`$ and $`H`$ is defined by Eq. (1). For comparison, in the case of annealed randomness, which is sometimes considered in the literature, the joint distribution is the equilibrium Gibbs distribution.
## 3 A basic theorem
We have proven the following theorem.
Theorem. In the limit $`n\to 0`$, the perturbation expansion of the correlation function $`G(\mathbf{k})`$ in $`u`$ power series for the random model with the Hamiltonian (1) is identical to the perturbation expansion for the corresponding model with the Hamiltonian
$`H/T={\displaystyle \int \left[r_0\phi ^2(\mathbf{x})+c\left(\nabla \phi (\mathbf{x})\right)^2\right]d\mathbf{x}}`$
$`+uV^{-1}{\displaystyle \underset{i,j,\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3}{\sum }}\phi _i(\mathbf{k}_1)\phi _i(\mathbf{k}_2)\stackrel{~}{u}_{\mathbf{k}_1+\mathbf{k}_2}\phi _j(\mathbf{k}_3)\phi _j(-\mathbf{k}_1-\mathbf{k}_2-\mathbf{k}_3)`$ (6)
where $`\stackrel{~}{u}_{\mathbf{k}}=u_{\mathbf{k}}-\frac{1}{2}\left\langle f_{\mathbf{k}}^2\right\rangle `$, with $`\left\langle \mathrm{}\right\rangle `$ denoting the average over the distribution (3).
For convenience, we call the model without the term $`\sqrt{u}f(\mathbf{x})\phi ^2(\mathbf{x})`$ the pure model, since this term simulates the effect of random impurities .
Proof of the theorem. According to the rules of the diagram technique, the formal expansion for $`G(\mathbf{k})`$ involves all connected diagrams with two fixed outer solid lines. In the case of the pure model, the diagrams are constructed of four-leg vertices whose two pairs of legs are joined by a zigzag line, with a factor $`uV^{-1}\stackrel{~}{u}_{\mathbf{k}}`$ related to any zigzag line with wave vector $`\mathbf{k}`$. The solid lines are related to the correlation function in the Gaussian approximation $`G_0(\mathbf{k})=1/\left(2r_0+2ck^2\right)`$. Summation over the components $`\phi _i(\mathbf{k})`$ of the vector $`\phi (\mathbf{k})`$ yields a factor $`n`$ corresponding to each closed loop of solid lines in the diagrams. According to this, the formal perturbation expansion is defined at arbitrary $`n`$. In the limit $`n\to 0`$, all diagrams of $`G(\mathbf{k})`$ vanish except those which do not contain closed loops. In such a way, for the pure model we obtain the expansion
\[Equation (7): the diagrammatic expansion of $`G(\mathbf{k})`$ for the pure model, a sum of loopless diagrams built from zigzag-line vertices; the original diagrams are pictorial and are not reproduced here.\]
In the case of the random model, the diagrams are constructed of two kinds of vertices: the quartic vertex, whose pairs of legs are joined by a dashed line, and the disorder vertex, with one pair of solid legs attached to a dotted line. Besides, it is important that only those diagrams give a nonzero contribution in which each dotted line is coupled to another dotted line. The factors $`-uV^{-1}\left\langle f_{\mathbf{k}}^2\right\rangle `$ correspond to the coupled dotted lines and the factors $`uV^{-1}u_{\mathbf{k}}`$ correspond to the dashed lines. Thus, we have
\[Equation (8): the corresponding expansion of $`G(\mathbf{k})`$ for the random model, containing the same diagrams with each zigzag line replaced either by a dashed line or by a pair of coupled dotted lines; the original diagrams are pictorial and are not reproduced here.\]
In our notation the combinatorial factor corresponding to any specific diagram is not given explicitly, but is implied in the diagram itself. In the random model, the correlation function $`G(\mathbf{k})`$ is first calculated at a fixed $`\{f_{\mathbf{k}}\}`$ according to the distribution $`Z_2^{-1}\left(\{f_{\mathbf{k}}\}\right)\mathrm{exp}(-H/T)`$ (which corresponds to diagrams where solid lines are coupled, but the dotted lines with factors $`\sqrt{u}V^{-1/2}f_{\mathbf{k}}`$ are not coupled), the averaging with the weight (3) over the configurations of the random variable (i.e., the coupling of the dotted lines) being performed afterwards. According to this procedure, the diagrams of the random model in general (not only at $`n\to 0`$) do not contain parts that are attached to the remainder of a diagram through coupled dotted lines alone. Such parts would appear after the coupling of dotted lines only if unconnected diagrams (i.e., diagrams consisting of separate parts) at fixed $`\{f_{\mathbf{k}}\}`$ were considered.
Thus, in the considered random model, the term of the $`l`$-th order in the perturbation expansion of $`G(\mathbf{k})`$ in $`u`$ power series is represented by diagrams constructed of a number $`M_1`$ of dashed-line vertices and an even number $`M_2`$ of dotted-line vertices (i.e., $`M_2/2`$ double-vertices), such that $`l=M_1+M_2/2`$. In the pure model, defined by Eq. (6), this term is represented by diagrams constructed of $`l`$ zigzag-line vertices. It is evident from Eqs. (7) and (8) that all diagrams of the random model are obtained from those of the pure model if any of the zigzag lines is replaced either by a dashed line or by a pair of coupled dotted lines, performing summation over all such possibilities. Note that such a method is valid in the limit $`n\to 0`$, but not in general. The problem is that, except in the case $`n\to 0`$, the diagrams of the pure model contain parts in which closed loops of solid lines are attached to the rest of the diagram by zigzag lines. If all such zigzag lines are replaced by dotted lines, then we obtain diagrams which are not allowed in the random model, as explained before. At $`n\to 0`$, the only problem is to determine the combinatorial factors for the diagrams obtained by the above replacements. For a diagram constructed of $`M_1`$ dashed-line vertices and $`M_2`$ dotted-line vertices the combinatorial factor is the number of possible different couplings of lines, corresponding to the given topological picture, divided by $`M_1!M_2!`$.
It is convenient to make a systematic grouping of the diagrams of the random model. The following consideration is valid not only for the diagrams of the two-point correlation function, but also for free energy diagrams (connected diagrams without outer lines) and for the diagrams of the $`2m`$-point correlation function (i.e., the diagrams with $`2m`$ fixed outer solid lines, containing no separate parts unconnected to these lines). It is supposed that at $`n\to 0`$ the main terms are retained, which means that the free energy diagrams contain a single loop of solid lines. We define that all diagrams which can be obtained from the $`i`$-th diagram (i.e., the diagram of the $`i`$-th topology) of the pure model belong to the $`i`$-th group. The sum of the diagrams of the $`i`$-th group can be found by the following algorithm.
1. First, the $`i`$-th diagram of the pure model is depicted in an a priori defined way.
2. Each zigzag-line vertex is replaced either by a dashed-line vertex or by a double-vertex (a coupled pair of dotted-line vertices), performing the summation over all possibilities. Besides, all dashed-line and dotted-line vertices and all lines are numbered before coupling, and all the distributions of the numbered vertices and lines over the numbered positions (arranged according to the given picture defined in step 1 and according to the actually considered choice, which defines which of the vertices must be replaced by dashed-line vertices and which by double-vertices) are counted as different. Each specific realization is summed over with the weight $`1/(M_1!M_2!)`$.
3. To ensure that each specific diagram is counted with the correct combinatorial factor, the result of the summation in step 2 is divided by the number of independent symmetry transformations (including the identical transformation) $`S_i`$ for the considered $`i`$-th diagram constructed of zigzag-line vertices, where the symmetry transformation of a diagram is defined as any possible redistribution (such that the outer solid lines are fixed) of vertices and coupled lines not changing the given picture. Really, the coupling of lines is not changed if any of the symmetry transformations is applied to any of the specific diagrams of the $`i`$-th group, whereas, according to the algorithm of step 2, original and transformed diagrams are counted as different.
It is convenient to modify step 2 as follows. Choose any one replacement of the zigzag-line vertices by dashed-line vertices and double-vertices, and perform the summation over all such choices. For any specific choice, consider only one of the possible $`M_1!M_2!`$ distributions of the numbered $`M_1`$ dashed-line vertices and $`M_2`$ dotted-line vertices over the fixed numbered positions, and make the summation with weight $`1`$ instead of the summation over $`M_1!M_2!`$ equivalent (i.e., equally contributing) distributions with the weight $`1/(M_1!M_2!)`$.
Note that the location of any dashed-line vertex is defined by fixing the position of the dashed line, the orientation of which is not fixed. According to this, the summation over all possible distributions of lines (numbered before coupling) for one fixed location of vertices (as consistent with the modified step 2) yields a factor $`8^{M_1}4^{M_2/2}`$. The $`i`$-th diagram of the pure model also can be calculated by such an algorithm. In this case the summation over all possible line distributions yields a factor of $`8^l`$, where $`l=M_1+M_2/2`$ is the total number of vertices in the $`i`$-th diagram. Obviously, the summation of diagrams of the $`i`$-th group can be performed with factors $`8^l`$ instead of $`8^{M_1}4^{M_2/2}`$, but in this case factors $`-\frac{1}{2}uV^{-1}\left\langle f_{\mathbf{k}}^2\right\rangle `$ must be related to the coupled dotted lines instead of $`-uV^{-1}\left\langle f_{\mathbf{k}}^2\right\rangle `$. In this case the summation over all possibilities where zigzag lines are replaced by dashed lines with factors $`uV^{-1}u_{\mathbf{k}}`$ and by dotted lines with factors $`-\frac{1}{2}uV^{-1}\left\langle f_{\mathbf{k}}^2\right\rangle `$, obviously, yields a factor $`uV^{-1}\left(u_{\mathbf{k}}-\frac{1}{2}\left\langle f_{\mathbf{k}}^2\right\rangle \right)\equiv uV^{-1}\stackrel{~}{u}_{\mathbf{k}}`$ corresponding to each zigzag line with wave vector $`\mathbf{k}`$. Thus, the sum over the diagrams of the $`i`$-th group is identical to the $`i`$-th diagram of the pure model defined by Eq. (6). This proves the theorem.
## 4 Remarks and conclusions
The theorem has been formulated for the two-point correlation function, but the proof is, in fact, more general as regards the relation between the diagrams of the random and pure models. Thus, the statement of the theorem is true also for the free energy and for the $`2m`$-point correlation function.
According to the proven theorem and this remark, at $`n\to 0`$ the considered pure and random models cannot be distinguished within the diagrammatic perturbation theory. Thus, if, in principle, critical exponents can be determined from the diagrammatic perturbation theory at $`n\to 0`$, then, in this limit, the critical exponents for the random model are the same as for the pure model. We think that in reality correct values of critical exponents can be determined from the diagrammatic perturbation theory; therefore the quenched random temperature disorder does not change the universality class at $`n\to 0`$. This conclusion is consistent with the results of some other authors. In particular, there is good evidence that the universality class is not changed by the quenched randomness at $`n=1`$. It has been shown by extensive Monte Carlo simulations of two-dimensional dilute Ising models that the critical exponent of the defect magnetization is a continuous function of the defect coupling. By analyzing the stability conditions, it has been concluded in Ref. that the critical exponent $`\nu `$ of the bulk correlation length of the random Ising model does not depend on the dilution, i.e., $`\nu =1`$ at $`d=2`$ both for diluted and undiluted Ising models. The standard (perturbative) RG method predicts a change of the universality class caused by the quenched randomness. We think this is a false prediction. The fact that the standard RG method provides an incorrect result is not surprising, since it has been demonstrated (in fact, proven) in Ref. that this method is not valid at $`d<4`$.
# Constraints on Cosmological Parameters from Future Galaxy Cluster Surveys
## 1. Introduction
It has long been realized that clusters of galaxies provide a uniquely useful probe of the fundamental cosmological parameters. The formation of the large-scale dark matter (DM) potential wells of clusters is likely independent of complex gas dynamical processes, star formation, and feedback, and involves only gravitational physics. As a result, the abundance of clusters $`N_{\mathrm{tot}}`$ and their distribution in redshift $`dN/dz`$ should be determined purely by the geometry of the universe and the power spectrum of initial density fluctuations. Exploiting this relation, the observed abundance of nearby clusters has been used to constrain the amplitude $`\sigma _8`$ of the power spectrum on cluster scales to an accuracy of $`\sim 25\%`$ (e.g. White, Efstathiou & Frenk (1993); Viana & Liddle (1996)). The value of $`\sigma _8`$ in these studies depends on the assumed underlying cosmology, especially on the density parameters $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. Subsequent works (Bahcall & Fan (1998); Blanchard & Bartlett (1998); Viana & Liddle (1999)) have shown that the redshift evolution of the observed cluster abundance places useful constraints on these two cosmological parameters.
In the above studies, the equation of state for the $`\mathrm{\Lambda }`$-component has been implicitly assumed to be $`p=w\rho `$ with $`w=-1`$. The recent suggestion that $`w`$ might be different from $`-1`$, or even redshift dependent (Turner & White (1997); Caldwell, Dave & Steinhardt (1998)), has inspired several studies of cosmologies with a component of dark energy. From a particle physics point of view, such $`w>-1`$ can arise in a number of theories (see Freese, Adams & Frieman (1987); Ratra & Peebles (1988); Turner & White (1997); Caldwell, Dave & Steinhardt (1998) and references therein). It is therefore of considerable interest to search for possible astrophysical signatures of the equation of state, especially those that distinguish $`w=-1`$ from $`w>-1`$. Wang et al. (2000) have summarized current astrophysical constraints that suggest $`-1\le w<-0.2`$; while recent observations of Type Ia SNe suggest the stronger constraint $`w<-0.6`$ (Perlmutter, Turner & White (1999)).
The galaxy cluster abundance provides a natural test of models that include a dark energy component with $`w\ne -1`$, because $`w`$ directly affects the linear growth of fluctuations $`D_z`$, as well as the cosmological volume element $`dV/dzd\mathrm{\Omega }`$. Furthermore, because of the dependence of the angular diameter distance $`d_A`$ on $`w`$, the experimental detection limits for individual clusters, e.g., from the Sunyaev-Zel'dovich effect (SZE) decrement or the X-ray luminosity, depend on $`w`$. Wang & Steinhardt (1998, hereafter WS98) studied the constraints on $`w`$ from a combination of measurements of the cluster abundance and Cosmic Microwave Background (CMB) anisotropies. Their work has shown that the slope of the comoving abundance $`dN/dz`$ between $`0<z<1`$ depends sensitively on $`w`$, an effect that can break the degeneracies between $`w`$ and combinations of other parameters $`(h,\mathrm{\Omega },n)`$ in the CMB anisotropy alone.
Here we consider in greater detail the constraints on $`w`$, and other cosmological parameters, from cluster abundance evolution. Our main goals are: (1) to quantify the statistical accuracy to which $`w\ne -1`$ models can be distinguished from standard $`\mathrm{\Lambda }`$ Cold Dark Matter (CDM) cosmologies using cluster abundance evolution; (2) to assess these accuracies in two specific cluster surveys: a deep SZE survey (Carlstrom et al. (1999)) and a large solid angle X-ray survey; and (3) to contrast constraints from cluster abundance with those from CMB anisotropy measurements and from luminosity distances to high-redshift Supernovae (Schmidt et al. (1998); Perlmutter et al. (1999)).
Our work differs from the analysis of WS98 in several ways. We examine the surface density of clusters $`dN/dzd\mathrm{\Omega }`$, rather than the comoving number density $`n(z)`$. This is important from an observational point of view, because the former, directly measurable quantity inevitably includes the additional cosmology dependence from the volume element $`dV/dzd\mathrm{\Omega }`$. We incorporate the cosmology-dependent mass limits expected from both types of surveys. Because the SZE survey has a nearly $`z`$-independent sensitivity, we find that high-redshift clusters at $`z>1`$ yield useful constraints, in addition to those studied by WS98 in the range $`0<z<1`$. Finally, we quantify the statistical significance of differences in the models by applying a combination of a Kolmogorov-Smirnov (KS) test and a Poisson test to $`dN/dzd\mathrm{\Omega }`$, and obtain constraints using a grid of models for a wide range of cosmological parameters.
This paper is organized as follows. In § 2, we describe the main features of the proposed SZE and X-ray surveys relevant to this work. In § 3 we briefly summarize our modeling methods and assumptions. In § 4, we quantify the effect of individual variations of $`w`$ and of other parameters on cluster abundance and evolution. In § 5, we obtain the constraints on these parameters by considering a grid of different cosmological models. In § 6, we discuss our results and the implications of this work. Finally, in § 7, we summarize our conclusions.
## 2. Cluster Surveys
The observational samples available for studies of cluster abundance evolution will improve enormously over the coming decade. The present samples of tens of intermediate redshift clusters (e.g., Gioia et al. (1990); Vikhlinin et al. (1998)) will be replaced by samples of thousands of intermediate redshift and hundreds of high redshift ($`z>1`$) clusters. At a minimum, the analysis of the European Space Agency X-ray Multi-mirror Mission (XMM) archive for serendipitously detected clusters will yield hundreds, and perhaps thousands, of new clusters with emission weighted mean temperature measurements (Romer et al. (2000)). Dedicated X-ray and SZE surveys could likely surpass the XMM sample in areal coverage, number of detected clusters or redshift depth. The imminent improvement of distant cluster data motivates us to estimate the cosmological power of these future surveys. Note that in practice, the only survey details we utilize in our analyses are the virial mass of the least massive, detectable cluster (as a function of redshift and cosmological parameters), and the solid angle of the survey. We include here a brief description of two representative surveys.
### 2.1. A Sunyaev-Zel'dovich Effect Survey
The SZE survey we consider is that proposed by Carlstrom and collaborators (Carlstrom et al. (1999)). This interferometric survey is particularly promising, because it will detect clusters more massive than $`2\times 10^{14}M_{\odot }`$, nearly independent of their redshift. Combined, this low mass threshold and its redshift independence produce a cluster sample which extends, depending on cosmology, to redshifts $`z\sim 3`$. The proposed survey will cover 12 deg<sup>2</sup> in a year; it will be carried out using ten 2.5 m telescopes and an 8 GHz bandwidth digital correlator operating at cm wavelengths (Mohr et al. (1999)). The detection limit as a function of redshift and cosmology $`M_{\mathrm{min}}(z,\mathrm{\Omega }_m,h)`$ for this survey has been studied using mock observations of simulated galaxy clusters (Holder et al. (2000)), and we draw on those results here.
Optical and near infrared followup observations will be required to determine the redshifts of SZE clusters. Given the relatively small solid angle of the survey, it will be straightforward to obtain deep, multiband imaging. We expect that the spectroscopic followup will require access to a multiobject spectrograph on a 10 m class telescope. The ongoing development of infrared spectrographs may greatly enhance our ability to effectively measure redshifts for the most distant clusters detected in the SZE survey.
### 2.2. A Deep, Large Solid Angle X-ray Survey
We also consider the cosmological sensitivity of a large solid angle, deep X-ray imaging survey. The characteristics of our survey are similar to those of a proposed Small Explorer class mission, called the Cosmology Explorer, spear-headed by G. Ricker and D. Lamb. The survey depth is $`3.6\times 10^6`$ cm<sup>2</sup>s at 1.5 keV, and the coverage is 10<sup>4</sup> deg<sup>2</sup> (approximately half the available unobscured sky). We assume that the imaging characteristics of the survey are sufficient to allow separation of the $`10\%`$ clusters from the $`90\%`$ AGNs and galactic stars. We focus on clusters which produce 500 detected source counts in the 0.5:6.0 keV band, sufficient to reliably estimate the emission weighted mean temperature in a survey of this depth (the external and internal backgrounds sum to $`1.4`$ cts/arcmin<sup>2</sup>).
To compute the number of photons detected from a cluster of a particular flux, we assume the clusters emit Raymond-Smith spectra (Raymond & Smith (1977)) with $`\frac{1}{3}`$ solar abundance, and we model the effects of Galactic absorption using a constant column density of $`n_H=4\times 10^{20}`$ cm<sup>-2</sup>. The metallicity and Galactic absorption we've chosen are representative for a cluster studied in regions of high Galactic latitude; when analyzing a real cluster one would, of course, use the Galactic $`n_H`$ appropriate at the location of the cluster. Cluster metallicities vary, but for the 0.5:6 keV band, line emission contributes very little flux for clusters with temperatures above 2 keV. For example, if the cluster metallicity were doubled to $`\frac{2}{3}`$ solar, the conversion between flux and the observed counts in the 0.5:6 keV band for this particular survey would vary by $`1.4`$% and $`0.1`$% for Raymond-Smith spectral models with temperatures $`kT=2`$ keV and 10 keV, respectively. We assume that the detectors have a quantum efficiency similar to the ACIS detectors (Bautz et al. (1998); Chartas et al. (1998)) on the Chandra X-ray Observatory, and the energy dependence of the mirror effective area mimics that of the mirror modules on ABRIXAS (Friedrich et al. (1998)).
The X-ray survey could be combined with the Sloan Digital Sky Survey (SDSS) to obtain redshifts for the clusters โ the redshift distribution of the clusters which produce 500 photons in the survey described above is well sampled at the SDSS photometric redshift limit.
### 2.3. Determining the Survey Limiting Mass $`M_{\mathrm{min}}`$
For our analysis, the most important aspect of both surveys is the limiting halo mass $`M_{\mathrm{min}}(z,\mathrm{\Omega }_m,w,h)`$, as a function of redshift and cosmological parameters. More specifically, we seek the relation between the detection limit of the survey and the corresponding limiting "virial mass". In our modeling below, we will be using the mass function of dark halos obtained in large scale cosmological simulations (Jenkins et al. (2000)). In these simulations, halos are identified as those regions whose mean spherical overdensity exceeds the fixed value $`\delta \rho /\rho _b=180`$ (with respect to the background density $`\rho _b`$, and irrespective of cosmology; see discussion below). In what follows, we adopt the same definition for the mass of dark halos associated with galaxy clusters.
In the X-ray survey, $`M_{\mathrm{min}}`$ follows from the cluster X-ray luminosity-virial mass relation and the details of the survey. We adopt the relation between virial mass and temperature obtained in hydrodynamical simulations by Bryan & Norman (1998),
$$M_{\mathrm{vir}}=a\frac{T^{3/2}}{E(z)\sqrt{\mathrm{\Delta }_c(z)}},$$
(1)
where $`H(z)=H_0E(z)`$ is the Hubble parameter at redshift $`z`$, $`a=1.08`$ is a normalization determined from the hydrodynamical simulations, and $`\mathrm{\Delta }_c`$ is the enclosed overdensity (relative to the critical density) which defines the cluster virial region. The normalization $`a`$ is found to be relatively insensitive to cosmological parameters, and the redshift evolution of Equation 1 appears to be consistent with the hydrodynamical simulations in those models where it has been tested (Bryan & Norman (1998)). Here we assume that Equation 1 holds in all cosmologies with the same value of $`a`$ (see § 6.2 for a discussion of the effects of errors in the mass-temperature relation), and use the fitting formulae for $`\mathrm{\Delta }_c`$ provided by WS98, which include the case $`w\ne -1`$. Finally, we convert $`M_{\mathrm{vir}}`$ from Equation 1 to the mass $`M_{180}`$ enclosed within the spherical overdensity of $`\delta \rho /\rho =180`$ (with respect to the background density), assuming that the halo profile is well described by the NFW model with concentration $`c=5`$ (Navarro, Frenk & White 1997, hereafter NFW).
We next utilize Equation 1, together with the relation between bolometric luminosity and temperature found by Arnaud & Evrard (1999), to find the limiting mass of a cluster that produces 500 photons in the 0.5:6.0 keV band in a survey exposure. For these calculations we assume that the luminosity-temperature relation does not evolve with redshift, consistent with the currently available observations (Mushotzky & Scharf (1997); relaxing this assumption is discussed below in § 6).
For an interferometric SZE survey, the relevant observable is the cluster visibility $`V`$, which is the Fourier transform of the cluster SZE brightness distribution on the sky as seen by the interferometer. The visibility is proportional to the total SZE flux decrement $`S_\nu `$,
$$V\propto S_\nu (M,z)\propto f_{ICM}\frac{M\left\langle T_e\right\rangle _n}{d_A^2(z)}$$
(2)
where $`\left\langle T_e\right\rangle _n`$ is the electron density weighted mean temperature, $`M`$ is the virial mass, $`f_{ICM}`$ is the intracluster medium mass fraction and $`d_A`$ is the angular diameter distance. We normalize this relation using mock observations of numerical cluster simulations (see Mohr & Evrard (1997) and Mohr, Mathiesen & Evrard (1999)) carried out in three different cosmological models, including noise characteristics appropriate to the proposed SZE array (see Holder et al. (2000) for more details). The ICM mass fraction is set to $`f_{ICM}=0.12`$ in all three cosmological models. This mass fraction is consistent with analyses of X-ray emission from well defined samples if $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, our fiducial value. Note that we use the same $`f_{ICM}=0.12`$ in all our cosmological models rather than varying it with the $`H_0`$ scaling appropriate for analyses of cluster X-ray emission. In the discussion which follows, this choice allows us to focus solely on the cosmological discriminatory power of cluster surveys; naturally, in interpreting a real cluster survey one would likely allow $`f_{ICM}`$ to vary with $`H_0`$.
Note that for a flux limited survey, the limiting mass in equation 2 is sensitive to cosmology through its dependence on $`d_A`$ and the definition of the virial mass $`M`$. We adopt the simulation-normalized value of $`M_{\mathrm{min}}^{*}(z)`$ in our fiducial cosmology as a template, and then we rescale this relation to determine $`M_{\mathrm{min}}(z)`$ in the model of interest using the relation
$$M_{\mathrm{min}}(z)=M_{\mathrm{min}}^{*}(z)\frac{h^{*}}{h}\left[\frac{hd_A(z)}{h^{*}d_A^{*}(z)}\right]^{6/5}$$
(3)
Here the superscript $`*`$ refers to quantities in the $`\mathrm{\Lambda }`$CDM reference cosmology, and we have used the scaling of virial mass with temperature (Eqn. 1): $`M\propto \left\langle T_e\right\rangle _n^{3/2}`$. We tested this scaling by comparing it to mock observations in simulations of two different cosmologies (open CDM and standard CDM), and found that agreement was better than $`10\%`$ in the redshift range $`0<z<3`$. Finally, in the numerical simulations used to calibrate Equation 2, the halo mass was defined to be the total mass enclosed within a region whose mean spherical interior density is 200 times the critical density. As in the X-ray case, we convert $`M_{\mathrm{min}}(z)`$ from Equation 3 to the desired mass $`M_{180}`$ by assuming that the halo profile follows NFW with concentration $`c=5`$.
The mass limits we derive for both surveys are shown in the redshift range $`0<z<3`$ in Figure 2.3, both for $`\mathrm{\Lambda }`$CDM and for a $`w=-0.5`$ universe. The SZE mass limit is nearly independent of redshift, and changes little with cosmology. As a result, the cluster sample can extend to $`z\sim 3`$. In comparison, the X-ray mass limit is a stronger function of $`w`$, and it rises rapidly with redshift. For the X-ray survey considered here, the number of detected clusters beyond $`z\sim 1`$ is negligible.
These mass limits incorporate some simplifying assumptions that have not been tested in detail (although we consider small variations of the mass limits below). Our goal is to capture the scaling with cosmological parameters and redshift as well as is presently possible. However, we emphasize that further theoretical studies of the sensitivity of these scalings to, for example, energy injection during galaxy formation will be critical to interpreting the survey data. In the case of the X-ray survey, the cluster sample will have measured temperatures, allowing the limiting mass to be estimated independent of the cluster luminosity. In the case of the SZE survey, deep X-ray followup or multifrequency SZE followup observations should yield direct measurements of the limiting mass.
## 3. Estimating the Cluster Survey Yield
To derive cosmological constraints from the observed number and redshift distribution of galaxy clusters, the fundamental quantity we need to predict is the comoving cluster mass function. The Press-Schechter formalism (Press & Schechter (1974); hereafter PS), which directly predicts this quantity in any cosmology, has been shown to be in reasonably good agreement (i.e., to within a factor of two) with results of N-body simulations, in cosmologies and halo mass ranges where it has been tested (Lacey & Cole (1994); Gross et al (1998); Lee & Shandarin (1999)). Numerical simulations have only recently reached the large size required to accurately determine the mass function of the rarest, most massive objects, such as galaxy clusters with $`M>10^{15}M_{\odot }`$.
In this paper, we adopt the halo mass function found in a series of recent large-scale cosmological simulations by Jenkins et al. 2000. The results of these simulations are particularly well-suited for the present application. The large simulated volumes allow a statistically accurate determination of the halo mass function; for halo masses of interest here, to better than $`\lesssim 30\%`$. In addition, the mass function is computed in three different cosmologies at a range of redshifts, and found to obey a simple "universal" fitting formula. Although this does not guarantee that the same scaling holds in other, untested cosmologies, we make this simplifying assumption in the present paper. In the future, the validity of this assumption has to be tested by studying the numerical mass function across a wider range of cosmologies.
Generally, the simulation mass function predicts a significantly larger abundance of massive clusters than does the PS formula. For the sake of definiteness, we note that in the simulations, halos are identified as those regions whose mean spherical overdensity exceeds the fixed value $`\delta \rho /\rho _b=180`$ with respect to the background density $`\rho _b`$. This is somewhat different from the typical halo definition within the context of the PS formalism, where the overdensity, relative to the critical density, is taken to be that of a collapsing spherical top-hat at virialization.
Following Jenkins et al. 2000, we assume that the comoving number density $`(dn/dM)dM`$ of clusters at redshift $`z`$ with mass $`M\pm dM/2`$ is given by the formula,
$$\frac{dn}{dM}(z,M)=-0.315\frac{\rho _0}{M}\frac{1}{\sigma _M}\frac{d\sigma _M}{dM}\mathrm{exp}\left[-\left|0.61-\mathrm{log}(D_z\sigma _M)\right|^{3.8}\right],$$
(4)
where $`\sigma _M`$ is the r.m.s. density fluctuation, computed on mass scale $`M`$ from the present-day linear power spectrum (Eisenstein & Hu (1998)), $`D_z`$ is the linear growth function, and $`\rho _0`$ is the present-day mass density. The directly observable quantity, i.e., the average number of clusters with mass above $`M_{\mathrm{min}}`$ at redshift $`z\pm dz/2`$ observed in a solid angle $`d\mathrm{\Omega }`$, is then simply given by
$$\frac{dN}{dzd\mathrm{\Omega }}\left(z\right)=\left[\frac{dV}{dzd\mathrm{\Omega }}\left(z\right)\int _{M_{\mathrm{min}}(z)}^{\mathrm{\infty }}dM\frac{dn}{dM}\right]$$
(5)
where $`dV/dzd\mathrm{\Omega }`$ is the cosmological volume element, and $`M_{\mathrm{min}}(z)`$ is the limiting mass as discussed in section 2.3. Equations 4 and 5 depend on the cosmological parameters through $`\rho _0`$, $`D_z`$, and $`dV/dzd\mathrm{\Omega }`$, in addition to the mild dependence of $`\sigma _M`$ on these parameters through the power spectrum (although the dependence on the power spectrum is more pronounced in the X-ray survey, where the limiting mass varies strongly with redshift). Note that the comoving abundance $`dn/dM`$ is exponentially sensitive to the growth function $`D_z`$. We use convenient expressions for $`dV/dzd\mathrm{\Omega }`$ and $`D_z`$ in open and flat $`\mathrm{\Omega }_\mathrm{\Lambda }`$ cosmologies available in the literature (Peebles (1980); Carroll, Press & Turner (1992); Eisenstein (1996)). In the case of cosmologies with $`w\ne -1`$, we have evaluated $`dV/dzd\mathrm{\Omega }`$ numerically, but used the fitting formulae for $`D_z`$ obtained by WS98, which are accurate to better than 0.3% for the cases of constant $`w`$'s considered here.
### 3.1. Normalizing to Local Cluster Abundance
To compute $`dN/dzd\mathrm{\Omega }`$ from equation 5, we must choose a normalization for the density fluctuations $`\sigma _M`$. This is commonly expressed by $`\sigma _8`$, the present-epoch, linearly extrapolated rms variation in the density field filtered on scales of $`8h^{-1}`$ Mpc. To be consistent in our analysis, we choose the normalization for each cosmology by fixing the local cluster abundance above a given mass $`M_{\mathrm{nm}}=10^{14}h^{-1}M_{\odot }`$. In all models considered, we set the local abundance to be $`1.03\times 10^{-5}(h/0.65)^3\mathrm{Mpc}^{-3}`$, the value derived in our fiducial $`\mathrm{\Lambda }`$CDM model (see below). We have chosen to normalize using the local cluster abundance (up to a factor of $`h^3`$) above mass $`M_{\mathrm{nm}}`$ rather than above a particular emission weighted mean temperature $`kT_{\mathrm{nm}}`$, because this removes the somewhat uncertain cosmological sensitivity of the virial mass-temperature ($`M`$-$`T_x`$) relation from the normalization process; spherical top-hat calculations suggest a significant offset in the $`M`$-$`T_x`$ normalization of the open and flat $`\mathrm{\Omega }_m=0.3`$ models which hydrodynamical simulations do not seem to reproduce (Evrard, Metzler & Navarro (1996); Bryan & Norman (1998); Viana & Liddle (1999)).
An alternative approach to the above is to regard $`\sigma _8`$ as a "free" parameter, on equal footing with the other parameters we let float below. This possibility will be discussed further in § 6. Here we note that our normalization approach is sensible, because the number density of nearby clusters can be measured to within a factor of $`h^3`$, and the masses of nearby clusters can be measured directly through several independent means; these include the assumption of hydrostatic equilibrium using X-ray images and intracluster medium (ICM) temperature profiles, weak lensing, or galaxy dynamical mass estimates. The only cosmological sensitivity of these mass estimators is their dependence on the Hubble parameter $`h`$; we include this $`h`$ dependence when normalizing our cosmological models. Note that previous derivations of $`\sigma _8`$ (e.g. Viana & Liddle 1993; Pen 1998) in various cosmologies from the local cluster abundance $`N(>kT)`$ above a fixed threshold temperature $`kT_{\mathrm{min}}\approx 7`$ keV yielded a constraint with the approximate scaling $`\sigma _8\mathrm{\Omega }_m^{1/2}\approx 0.5`$. We find a similar relation when varying $`\mathrm{\Omega }_m`$ away from our fiducial cosmology; however, we note that if a $`\sim 5`$ times smaller threshold temperature were used, the constrained combination would be quite different, $`\sigma _8\mathrm{\Omega }_m\approx \mathrm{constant}`$. Since our adopted normalization is based on mass, rather than temperature, in general, we find still different scalings. As an example, when $`h=0.65`$ and $`w=-1`$ are kept fixed, our normalization procedure translates into $`\sigma _8(\mathrm{\Omega }_m/0.3)^{0.85}\approx 0.9`$.
### 3.2. Fiducial Cosmological Model
The parameters we choose for our fiducial cosmological model are $`(\mathrm{\Omega }_\mathrm{\Lambda },\mathrm{\Omega }_\mathrm{m},h,\sigma _8,n)=(0.7,0.3,0.65,0.9,1)`$. This flat $`\mathrm{\Lambda }`$CDM model is motivated as a "best-fit" model that produces a local cluster abundance consistent with observations (Viana & Liddle (1999)), and satisfies the current constraints from CMB anisotropy (Lange et al. (2000), see also White, Scott & Pierpaoli (2000)), high-$`z`$ SNe, and other observations (Bahcall et al. 1999). We have assumed a baryon density of $`\mathrm{\Omega }_bh^2=0.02`$, consistent with recent D/H measurements (e.g. Burles & Tytler 1998). Note that the power spectrum index $`n`$ is not important for the analysis presented here, because we normalize on cluster scales ($`\sigma _8`$), and we find that this minimizes the effect of varying $`n`$ on the density fluctuations relevant to cluster formation.
## 4. Exploring the Cosmological Sensitivity
In this section, we describe how variations of the individual parameters $`\mathrm{\Omega }`$, $`w`$, and $`h`$, as well as the cosmological dependence of the limiting mass $`M_{\mathrm{min}}`$, affect the cluster abundance and redshift distribution. This will be useful in understanding the results of the next section, when a full grid of different cosmologies is considered. We then describe our method of quantifying the statistical significance of differences between the distributions $`dN/dz`$ in a pair of different cosmologies.
### 4.1. Single Parameter Variations
The surface density of clusters more massive than $`M_{\mathrm{min}}`$ depends on the assumed cosmology mainly through the growth function $`D(z)`$ and volume element $`dV/dzd\mathrm{\Omega }`$, as well as through the cosmology dependence of the limiting mass $`M_{\mathrm{min}}`$ itself. In the approach described in section 3, once a cosmology is specified, the normalization of the power spectrum $`\sigma _8`$ is found by keeping the abundance of clusters at $`z=0`$ constant. We therefore consider only three "free" parameters, $`w`$, $`h`$, $`\mathrm{\Omega }_m`$, specifying the cosmology. We assume the universe to be either flat ($`\mathrm{\Omega }_Q=1-\mathrm{\Omega }_m`$), or open with $`\mathrm{\Omega }_Q=0`$.
#### 4.1.1 Changing $`\mathrm{\Omega }_m`$
The effects of changing $`\mathrm{\Omega }_m`$ are demonstrated in Figure 4.1. The curves correspond to a flat $`\mathrm{\Lambda }`$CDM universe with ($`h=0.65,w=-1`$), and $`\mathrm{\Omega }_m=0.27`$ (dotted), $`\mathrm{\Omega }_m=0.30`$ (solid), and $`\mathrm{\Omega }_m=0.33`$ (short-dashed). In addition, the long-dashed curves show the same three models (top to bottom), assuming open CDM with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. The top left panel shows the total number of clusters in a 12 square degree field, detectable down to the constant SZE decrement $`S_{\mathrm{min}}`$. As discussed in section 2.3 above, a constant $`S_{\mathrm{min}}`$ implies a redshift and cosmology-dependent limiting mass $`M_{\mathrm{min}}`$. In the SZE case, we find that if we had not included this effect, the sensitivity to $`\mathrm{\Omega }_m`$ would have been somewhat stronger. Several conclusions can be drawn from Figure 4.1. Overall, the top left panel shows that a decrease in $`\mathrm{\Omega }_m`$ increases the number of clusters (and vice versa) at all redshifts. Note that the dependence is strong: for instance, a $`10\%`$ decrease in $`\mathrm{\Omega }_m`$ increases the total number of clusters by $`\sim 30\%`$ in either $`\mathrm{\Lambda }`$CDM or OCDM cosmologies. As emphasized by Bahcall & Fan (1998), Viana & Liddle (1999) and others, this makes it possible to estimate an upper limit on $`\mathrm{\Omega }_m`$ using current, sparse data on cluster abundances (i.e., only a few high-$`z`$ clusters). A second important feature seen in the top left panel is that the shape of the redshift distribution is not changed significantly, a conclusion that holds both in $`\mathrm{\Lambda }`$CDM and OCDM. Finally, the remaining three panels reveal that the effects of $`\mathrm{\Omega }_m`$ arise mainly from the changes in the comoving abundance (bottom left panel). In flat $`\mathrm{\Lambda }`$CDM, $`\mathrm{\Omega }_m`$ has relatively little effect on the volume or the growth function, and the comoving abundance is determined by the value of $`\sigma _8`$ that keeps the local abundance constant at $`z=0`$ (we find $`\sigma _8=0.83`$ for $`\mathrm{\Omega }_m=0.33`$ and $`\sigma _8=1.00`$ for $`\mathrm{\Omega }_m=0.27`$). In addition, we find that the change in the shape of the underlying power spectrum with $`\mathrm{\Omega }_m`$ enhances the differences caused by $`\mathrm{\Omega }_m`$ (when we artificially keep the power spectrum at its $`\mathrm{\Omega }_m=0.3`$ shape, we find $`\sigma _8=0.84`$ for $`\mathrm{\Omega }_m=0.33`$). We also note that the volume element and the comoving abundance act in the same direction: a lower $`\mathrm{\Omega }_m`$ increases both the comoving abundance and the volume element. In OCDM, the growth function has a larger effect, and relative to $`\mathrm{\Lambda }`$CDM, the redshift distribution is much flatter.
#### 4.1.2 Changing $`w`$
The effects of changing $`w`$ are demonstrated in Figure 4.1. The figure shows models with ($`\mathrm{\Omega }_m=0.3,h=0.65`$) and with three different $`w`$'s: $`w=-1`$ (solid curve), $`w=-0.6`$ (dotted curve), and $`w=-0.2`$ (short-dashed curve). In addition, we show the result from an open CDM model with ($`\mathrm{\Omega }=0.3,h=0.65`$; long-dashed curve). The figure reveals that increasing $`w`$ above $`w=-1`$ causes the slope of the redshift distribution above $`z\approx 0.5`$ to flatten, increasing the number of high-$`z`$ clusters. Furthermore, "opening" the universe has an effect similar to increasing $`w`$. The other three panels demonstrate the reason for these scalings. The top right panel shows that the growth function is flatter in higher $`w`$ models, significantly increasing the comoving number density of high-redshift clusters (bottom left panel). The volume element (bottom right panel) has the opposite behavior, in the sense that the volume in higher-$`w`$ models is smaller, which tends to balance the increase in the comoving abundance caused by the growth function in the range $`0<z\lesssim 0.5`$; but for higher redshifts, the growth function "wins". An important conclusion seen from Figure 4.1 is that both the total number of clusters and the shape of their redshift distribution depend significantly on $`w`$. We also note that in the SZE case, our sensitivity to $`w`$ has been enhanced by the cosmological dependence of the mass limit (opposite to what we found for the $`\mathrm{\Omega }_m`$-sensitivity, which was weakened by the same effect).
#### 4.1.3 Changing $`h`$
Figure 4.1.1 demonstrates the effects of changing $`h`$. Three $`\mathrm{\Lambda }`$CDM models are shown with ($`\mathrm{\Omega }_m=0.30,w=-1`$), and $`h=0.55`$ (dotted curve), $`h=0.65`$ (solid curve), and $`h=0.80`$ (short-dashed curves). The long-dashed curves correspond to OCDM models with the same parameters (top to bottom). Comparing the top right panel with that of Figure 4.1, the qualitative behavior of $`dN/dz`$ under changes in $`h`$ and $`\mathrm{\Omega }_m`$ is similar: decreasing $`h`$ increases the total number of clusters, but does not considerably change their redshift distribution. However, the sensitivity to $`h`$ is significantly less: the total number of clusters is seen to increase by $`25\%`$ only when $`h`$ is decreased by the same percentage. Note that the growth function is not affected by $`h`$, and the $`h`$ sensitivity is driven by our normalization process, which fixes the abundance at $`z=0`$ (see § 3.1). Since the volume scales as $`h^{-3}`$, we fix the comoving abundance to be proportional to $`h^3`$. As a result, $`dN/dzd\mathrm{\Omega }`$ is nearly independent of $`h`$. In fact, the entire $`h`$-dependence is attributable to the small change caused by $`h`$ in the shape of the power spectrum (for a pure power-law spectrum, there would be no $`h`$-dependence, and the three curves for the flat universe in the top left panel of Figure 4.1.1 would look identical).
#### 4.1.4 Abundances in the X-ray Survey
The evolution of the cluster abundance, and its sensitivity to $`\mathrm{\Omega }_m`$ and $`w`$ in the X-ray survey, are shown in Figure 4.2. Because of the much larger solid angle surveyed, the number of clusters is significantly larger than in the SZE case, despite the higher limiting mass (cf. Fig. 2.3). Nevertheless, the general trends that can be identified in the X-ray sample are similar to those in the SZE case. Raising $`w`$ increases the total number of clusters, and flattens their redshift distribution. As in the SZE survey, raising $`\mathrm{\Omega }_m`$ decreases the total number of clusters.
### 4.2. Effects of the Limiting Mass Function
Finally, we examine the extent to which the above conclusions depend on the cosmology and redshift-dependence of the limiting mass $`M_{\mathrm{min}}`$.
#### 4.2.1 The SZE Survey
We first compute cluster abundances above the fixed mass $`M_{\mathrm{min}}=10^{14}h^{-1}\mathrm{M}_{\odot }`$, characteristic of the SZE survey detection threshold in the range of cosmologies and redshifts considered here. The results are shown in Figure 4.1.3: the bottom panels show the surface density and comoving abundance when $`\mathrm{\Omega }_m`$ is changed (the models are the same as in Figure 4.1), and the top panels show the same quantities under changes in $`w`$ (the cosmological models are the same as in Figure 4.1). A comparison between Figures 4.1.3 and 4.1 gives an idea of the importance of the mass limit. The general trend seen in Figure 4.1 remains true, i.e. increasing $`w`$ flattens the redshift distribution at high-$`z`$. However, when a constant $`M_{\mathrm{min}}`$ is assumed, the "pivot point" moves to slightly higher redshift, and the total number of clusters becomes less sensitive to $`w`$. Similar conclusions can be drawn from a comparison of Figure 4.1 with the bottom two panels of Figure 4.1.3: under changes in $`\mathrm{\Omega }_m`$ the general trends are once again similar, but the differences between the different models are amplified when a constant $`M_{\mathrm{min}}`$ is used. In summary, we conclude that in the SZE case (1) the variation of the mass limit with redshift and cosmology is of secondary importance, and (2) it weakens the $`\mathrm{\Omega }_m`$ dependence, but strengthens the $`w`$ dependence.
#### 4.2.2 The X-ray Survey
In comparison to the SZE survey, the Xโray mass limit is not only higher, but is also significantly more dependent on cosmology (cf. Fig 2.3). On the other hand, the Xโray sample goes out only to the relatively low redshift $`z=1`$, where the growth functions in the different cosmologies diverge relatively little. This suggests that in the Xโray case the mass limit is more important than in the SZE survey. In order to separate the effects of the changing mass limit from the change in the growth function and the volume element, in Figure 4.2.2 we show the sensitivity of $`dN/dz`$ to changes in $`\mathrm{\Omega }_m`$ and $`w`$, without including the effects from the mass limit. The same models are shown as in Figure 4.2, except we have artificially kept the mass limit at its value in the fiducial cosmology. The figure reveals that essentially all of the $`w`$โsensitivity seen in Figure 4.2 is caused by the changing mass limit; when $`M_{\mathrm{min}}`$ is kept fixed, the cluster abundances change very little. On the other hand, comparing the bottom panels of Figures 4.2 and 4.2.2 shows that including the scaling of the mass limit somewhat reduces the $`\mathrm{\Omega }_m`$ dependence, just as in the SZE case.
### 4.3. Overview of Cosmological Sensitivity
In summary, we conclude that changes in $`w`$ modify both the normalization and the shape of the redshift distribution of clusters, while changes in $`\mathrm{\Omega }_m`$ or $`h`$ affect essentially only the overall amplitude. This suggests that changes in $`w`$ can not be fully degenerate with changes in either $`\mathrm{\Omega }_m`$ or $`h`$ (or a combination), making it possible to measure $`w`$ from cluster abundances alone. These conclusions hold either for clusters above a fixed detection threshold in an SZE or X-ray survey, or for a sample of clusters above a fixed mass. We find that the sensitivity to $`\mathrm{\Omega }_m`$ arises mostly through the growth function, both in the SZE and X-ray surveys. This sensitivity is slightly weakened by the scaling of the limiting mass $`M_{\mathrm{min}}`$ with $`\mathrm{\Omega }_m`$. We find that the $`w`$ sensitivity is also dominated by the growth function in the SZE survey, which goes out to relatively high redshifts; but the sensitivity to $`w`$ is enhanced by the $`w`$-dependence of $`M_{\mathrm{min}}`$. In comparison, in the X-ray survey, which only probes relatively low redshifts, nearly all of the $`w`$-sensitivity is caused by the cosmology-dependence of the limiting mass, rather than the growth function.
## 5. Constraints on Cosmological Parameters
We derive cosmological constraints by considering a 3-dimensional grid of models in $`\mathrm{\Omega }_m,h`$, and $`w`$. As described above, we first find $`\sigma _8`$ in each model, so that all models are normalized to produce the same local cluster abundance at $`z=0`$. We then compute $`dN/dzd\mathrm{\Omega }`$ in these models for $`0.2\le \mathrm{\Omega }_m\le 0.5`$, $`0.5\le h\le 0.9`$, and $`-1\le w\le -0.2`$. The range for $`w`$ corresponds to that allowed by current astrophysical observations (Wang et al. (2000)), although recent observations of Type Ia SNe suggest the stronger constraint $`w<-0.6`$ (Perlmutter, Turner & White (1999)).
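Schematically, the normalization step amounts to a one-dimensional root-find in $`\sigma _8`$ at each grid point, as in the sketch below. The function `local_abundance` and the target value `N_LOCAL_OBS` are illustrative stand-ins for the actual mass-function prediction and the observed local abundance.

```python
import numpy as np
from scipy.optimize import brentq

N_LOCAL_OBS = 1.0e-5  # assumed observed local abundance [Mpc^-3]; illustrative

def local_abundance(sigma8, om, h, w):
    """Placeholder for the predicted z~0 abundance; monotonic in sigma8."""
    return 1.0e-5 * (sigma8 / 0.9)**3 * (om / 0.3)

def normalize_sigma8(om, h, w):
    """Solve local_abundance(sigma8) = N_LOCAL_OBS for sigma8."""
    return brentq(lambda s8: local_abundance(s8, om, h, w) - N_LOCAL_OBS, 0.3, 2.0)

# Scan the 3-d grid over the parameter ranges quoted above.
for om in np.linspace(0.2, 0.5, 4):
    for h in np.linspace(0.5, 0.9, 3):
        for w in np.linspace(-1.0, -0.2, 3):
            s8 = normalize_sigma8(om, h, w)
            # ... compute dN/dz dOmega for (om, h, w, s8) and store it ...
```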
### 5.1. Comparing $`dN/dz`$ in Two Different Cosmologies
The main goal of this paper is to quantify the accuracy to which $`w`$ can be measured in future SZE and Xโray surveys. To do this, we must answer the following question: given a hypothetical sample of $`N_{\mathrm{tot}}`$ clusters (with measured redshifts) obeying the distribution $`dN_A/dz`$ of the test model (A) cosmology, what is the probability $`P_{\mathrm{tot}}(A,B)`$ that the same sample of clusters is detected in the fiducial (B) cosmology, with distribution $`dN_B/dz`$? We have seen in section 4.1 that the overall amplitude, and the shape of $`dN/dz`$ are both important. Motivated by this, we define
$$P_{\mathrm{tot}}(A,B)=P_0(A,B)\times P_z(A,B)$$
(6)
where $`P_0(A,B)`$ is the probability of detecting $`N_{A,\mathrm{tot}}`$ clusters when the mean number is $`N_{B,\mathrm{tot}}`$, and $`P_z(A,B)`$ is the probability of measuring the redshift distribution of model (A) if the true parent distribution is that of model (B). We assume $`P_0`$ is given by the Poisson distribution, and we use the Kolmogorov-Smirnov (KS) test to compute $`P_z(A,B)`$ (Press et al. (1992)). The main advantage of this approach, when compared to the usual $`\chi ^2`$ tests, is that we do not need to bin the data in redshift.
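The combined test can be coded compactly, as sketched below. The two-sided Poisson tail used for $`P_0`$ is one reasonable convention, not necessarily the exact definition adopted here; likewise, a two-sample KS test is shown, whereas comparing a sample against a parent distribution would use the one-sample form (e.g. `scipy.stats.kstest` with the model CDF). The last function converts a two-sided probability into a Gaussian-equivalent significance.

```python
import numpy as np
from scipy.stats import poisson, ks_2samp
from scipy.special import erfcinv

def p_total(n_a, n_b_mean, z_a, z_b):
    """Eq. (6): Poisson probability of the total count times KS probability
    of the redshift-distribution shape (no binning in z is required)."""
    if n_a >= n_b_mean:  # two-sided Poisson tail probability
        p0 = min(1.0, 2.0 * poisson.sf(n_a - 1, n_b_mean))
    else:
        p0 = min(1.0, 2.0 * poisson.cdf(n_a, n_b_mean))
    pz = ks_2samp(z_a, z_b).pvalue
    return p0 * pz

def n_sigma(p):
    """Gaussian-equivalent significance of a two-sided probability p."""
    return np.sqrt(2.0) * erfcinv(p)

print(n_sigma(0.025))  # ~2.2, i.e. the ~2.3 sigma quoted below for P_tot = 0.025
```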
For reference, it is useful to quote here some examples for the probabilities, taking ($`\mathrm{\Omega }_m=0.3,h=0.65,w=-1`$) as the fiducial (B) model. For example, closest to this model in Figure 4.1 is the one with $`w=-0.6`$. For this case, we find $`P_0=0.25`$ and $`P_z=0.1`$ for a total probability of $`P_{\mathrm{tot}}=0.025`$. In other words, the two cosmologies could be distinguished at a likelihood of $`1.2\sigma `$ using only the total number of clusters, at $`1.6\sigma `$ using only the shape of the redshift distribution, and at the $`2.3\sigma `$ level using both pieces of information. In this case, the distinction is made primarily by the different redshift distributions, rather than the total number of detected clusters. Taking the $`\mathrm{\Omega }_m=0.33`$ $`\mathrm{\Lambda }`$CDM cosmology from Figure 4.1 as another example for model (A), we find $`P_0=0.0075`$ (=$`2.7\sigma `$), $`P_z=0.78`$ (=$`0.3\sigma `$), and a total probability of $`P_{\mathrm{tot}}=0.0058`$ (=$`2.8\sigma `$). Not surprisingly, the shape of the redshift distribution does not add significantly to the statistical difference between these two models, which differ primarily by the total number of clusters.
### 5.2. Expectations from the Sunyaev-Zel'dovich Survey
Figure 1 shows contours of 1, 2, and 3$`\sigma `$ for the total probability $`P_{\mathrm{tot}}`$ for models compared to the fiducial flat $`\mathrm{\Lambda }`$CDM model. For reference, we note that the total number of clusters in the SZE survey in our fiducial model is $`100`$, located between $`0<z<3`$. The three panels show three different cross-sections of the investigated 3-dimensional $`\mathrm{\Omega }_m,h,w`$ parameter space, taken at constant values of $`h=`$ 0.55, 0.65, and 0.80, spanning the range of values preferred by other observations. The most striking feature in this figure is the direction of the contours, which turn upwards in the $`w,\mathrm{\Omega }_m`$ plane, and become narrower for larger values of $`w`$. We find that the trough of maximum probability for fixed $`h=0.65`$ is well described by
$$\mathrm{\Omega }_m-0.3=0.1(w+1)^{5/2},$$
(7)
with further constant shifts in $`\mathrm{\Omega }_m`$ caused by changing $`h`$. The $`\pm 3\sigma `$ width enclosed by the contours around this relation is relatively narrow in $`\mathrm{\Omega }_m`$ ($`\pm 10\%`$). In the $`\mathrm{\Lambda }`$CDM case, even when a large range of values is considered for $`h`$ ($`0.45<h<0.90`$), the constraint $`0.26<\mathrm{\Omega }_m\lesssim 0.36`$ follows; when $`w>-1`$ is considered, the allowed range widens to $`0.27<\mathrm{\Omega }_m\lesssim 0.41`$. On the other hand, a wide range of $`w`$'s is seen to be consistent with $`w=-1`$: the largest value shown, $`w=-0.2`$, is approximately $`3\sigma `$ away from $`w=-1`$, and $`w=-0.6`$ is allowed at $`1\sigma `$. Note that $`h`$ is not well determined, i.e. the contours look similar for all three values of $`h`$, and 1$`\sigma `$ models exist for any value of $`h`$ in the range $`0.5<h\lesssim 0.9`$. This is not surprising, as Figure 4.1.1 shows $`dN/dzd\mathrm{\Omega }`$ is insensitive to the value of $`h`$, with only a mild $`h`$-dependence through the non-power law shape of the power spectrum.
### 5.3. Expectations from the X-ray Survey
The total number of clusters in the X-ray survey in our fiducial model is $`1000`$, ten times that in the SZE survey; all X-ray clusters are located between $`0<z<1`$. Figure 2 contains expectations for the X-ray survey; we show contours of 1, 2, and 3$`\sigma `$ probabilities relative to the fiducial $`\mathrm{\Lambda }`$CDM model. The qualitative features are similar to those in the SZE case, but owing to the larger number of clusters, the constraints are significantly stronger and the contours are narrower. However, the contours extend further along the $`w`$ axis, and the largest value of $`w`$ allowed at a probability better than $`3\sigma `$ is $`w\approx -0.2`$ (assuming that the values of $`\mathrm{\Omega }_m`$ and $`h`$ are not known). Although the contours are narrower than in the SZE case, assuming that $`h`$ and $`w`$ are unknown, the allowed range of $`\mathrm{\Omega }_m`$ is similar to that in the SZE case, $`0.26<\mathrm{\Omega }_m\lesssim 0.42`$. Note that because of the shape and direction of the likelihood contours, a knowledge of $`h`$ would not significantly improve this constraint (although if $`h`$ is found to be low, then the lower limit on $`\mathrm{\Omega }_m`$ would increase). Finally, assuming that both $`h`$ and $`\mathrm{\Omega }_m`$ are known to high accuracy ($`3\%`$), the allowed $`3\sigma `$ range on $`w`$ could be reduced to $`-1\le w<-0.85`$.
## 6. Results and Discussion
### 6.1. Total Number vs. the Redshift Distribution
Our main results are presented in Figures 1 and 2, which show the probabilities of various models relative to a fiducial $`\mathrm{\Lambda }`$CDM model in the SZE and X-ray surveys. As demonstrated by these figures, the cluster data determine a combination of $`\mathrm{\Omega }_m`$ and $`w`$. In the absence of external constraints on $`\mathrm{\Omega }_m`$ and $`h`$, $`w`$ as large as $`-0.2`$ differs from $`w=-1`$ by $`3\sigma `$, while $`w=-0.6`$ would be $`1\sigma `$ away from our fiducial $`\mathrm{\Lambda }`$CDM cosmology. Owing to the larger number of clusters in the X-ray survey, the constrained combination of $`\mathrm{\Omega }_m`$ and $`w`$ is significantly narrower than in the SZE survey; the direction of the contours is also somewhat different. As a result, analysis of the X-ray survey could distinguish a $`w=-0.85`$ model from $`\mathrm{\Lambda }`$CDM at $`3\sigma `$ significance, provided $`\mathrm{\Omega }_m`$ is known to an accuracy of $`3\%`$ from other studies.
It is interesting to ask whether these constraints arise mainly from the total number of detected clusters, or from their redshift distribution. To address this issue, in Figure 6.1 we show separate likelihood contours for the probability $`P_0`$ (total number of clusters, left panels), and for the probability $`P_z`$ (shape of redshift distribution, right panels). In the SZE case, the contours of likelihood from the shape information alone are broad, and adding these constraints to the Poisson probability plays almost no role in the range $`w<-0.7`$ (the contours of $`P_{\mathrm{tot}}`$ and $`P_0`$ are very similar). However, at larger $`w`$, the shape becomes increasingly important. Adding in this information significantly reduces the allowed region relative to the Poisson probability alone at $`w>-0.7`$. It is the combination of the $`P_0`$ and $`P_z`$ contours that allows ruling out $`w>-0.2`$ at the $`3\sigma `$ level. Note that the difference in shapes arises mostly from the high-redshift ($`z\gtrsim 1`$) clusters (cf. Fig. 4.1).
In the X-ray case (bottom panels in Fig. 6.1), the situation is different, because the contours of $`P_0`$ and $`P_z`$ are both much narrower. As a result, the contours for the combined likelihood are somewhat reduced, but they still reach to $`w\approx -0.2`$ (at $`2\sigma `$). Note that as in the SZE survey, the redshift distribution (of clusters primarily in the $`0<z<1`$ range) plays an important role. As Figures 4.1.1 and 4.1 show, the total number of clusters can be adjusted by changing $`\mathrm{\Omega }_m`$ and $`h`$. In terms of the total number of clusters, $`w`$ is therefore degenerate both with $`\mathrm{\Omega }_m`$ and $`h`$: raising $`w`$ raises the total number, but this can always be offset by a change in $`\mathrm{\Omega }_m`$ and/or $`h`$. The bottom left panel in Fig. 6.1 reveals that based on $`P_0`$ alone, $`w=-0.2`$ (and $`\mathrm{\Omega }_m=0.43`$) can not be distinguished from $`\mathrm{\Lambda }`$CDM even at the $`1\sigma `$ level. On the other hand, the middle panel in Fig. 2 shows that when the shape information is added, $`w<-0.2`$ follows to $`2\sigma `$ significance.
### 6.2. Discussion of Possible Systematic Uncertainties
Our results imply that the cluster abundances in the SZE and Xโray surveys can provide useful constraints on cosmological parameters, based on statistical differences expected among different cosmologies. The purpose of this section is to summarize and quantify the various systematic uncertainties that can affect these constraints.
Knowledge of the Limiting Mass $`M_{\mathrm{min}}`$. Our conclusions above are dependent on the chosen limiting mass, which is a function of both redshift and cosmology. From the discussion in § 4.1 we have seen that the limiting mass plays a secondary role in the SZE survey, where the bulk of the constraint comes from the growth function. In comparison, we find that $`M_{\mathrm{min}}`$ plays an important role in the X-ray survey. To demonstrate the importance of the mass limit explicitly, in Figure 6.2 we show the likelihood contours in the $`\mathrm{\Omega }_m`$-$`w`$ plane when the variations of the limiting mass with cosmology are not taken into account. Not surprisingly, this makes the contours somewhat narrower, but nearly parallel to the $`w`$ axis; this is consistent with our finding in Figure 4.2.2 that the mass limit accounts for nearly all of the $`w`$-dependence, but it reduces the $`\mathrm{\Omega }_m`$ dependence. Figure 6.2 demonstrates the need to accurately know the limiting mass $`M_{\mathrm{min}}`$, and its cosmological scaling, in the X-ray survey.
Because our proposed cluster sample will have measured X-ray temperatures, the uncertainty in our knowledge of the limiting mass will likely be dominated by the theoretical uncertainties of the $`M`$-$`T`$ relation. In order to quantify the effect of such errors, we have performed a set of simple modifications to our modeling of the constraints from the X-ray survey. In all cases, we adopt the same $`M`$-$`T`$ relations as we did before (cf. eq. 1). However, in the fiducial model, we use a limiting mass that is altered by either $`\pm 5\%`$ or $`\pm 10\%`$ from the mass inferred from this $`M`$-$`T`$ relation. This mimics a situation where the theoretical $`M`$-$`T`$ relation we apply is either 5% or 10% away from the relation in the real universe. In a second set of calculations, we mimic a situation where the slope of the $`M`$-$`T`$ relation is incorrectly modeled; i.e. we alter this slope in the fiducial model to $`\alpha =1.5\pm 0.05`$. The deviations to the likelihood contours caused by these offsets are demonstrated in Figure 3, which shows the effects of the offset in the $`M`$-$`T`$ normalization, and in Figure 4, which shows the effects of the offsets in the slope. As the figures reveal, the contours shift relatively little under these changes. We conclude that the results we derive are robust, as long as we can predict the $`M`$-$`T`$ relation to within $`10\%`$.
In our approach, we have attempted to utilize the whole observed cluster sample, down to the detection threshold: we therefore had to include the above cosmological dependencies. In principle, measured cluster velocity dispersions and X-ray temperatures (both of which are cosmology independent) could be utilized to improve the constraints, i.e. by selecting sub-samples that maximize the differences between models. Further work is needed to clarify the feasibility of this approach, as well as to quantify the accuracy to which the dependence of $`M_{\mathrm{min}}`$ on $`\mathrm{\Omega }_m`$, $`h`$, $`w`$, and $`z`$ can be predicted.
Evolution of Internal Cluster Structure. Further work is also required to test the cluster structural evolution models we use. For the X-ray survey, we have assumed that the cluster luminosity-temperature relation does not evolve, consistent with current observations (Mushotzky & Scharf (1997)), and in the SZE survey, we have adopted the structural evolution found in state-of-the-art hydrodynamical simulations. Because of the sensitivity of the survey yields to the limiting mass, cluster structural evolution which changes the observability of high redshift clusters can introduce systematic errors in cosmological constraints: for example, both low $`\mathrm{\Omega }_m`$ cosmologies and positive evolution of the cluster luminosity-temperature relation increase the cluster yield in an X-ray survey. SZE surveys are generally less sensitive to evolution than X-ray surveys, because the X-ray luminosity is heavily dependent on the core structure (e.g., the presence or absence of cooling instabilities), whereas the SZE visibility depends on the integral of the ICM pressure over the entire cluster (Eqn. 2). We are testing these assertions with a new suite of hydrodynamical simulations in scenarios where galaxy formation at high redshift preheats the intergalactic gas before it collapses to form clusters (Bialek, Evrard & Mohr (2000); Mohr et al. in prep). However, most importantly, we emphasize that because of the sensitivity of X-ray surveys to evolution, we have only used those clusters which produce enough photons to measure an emission weighted mean temperature. In this case, one can directly extract the minimum temperature $`T_{lim}(z)`$ of detected clusters as a function of redshift. Correctly interpreting such a survey requires mapping $`T_{lim}(z)\to M_{lim}(z)`$ using the mass-temperature relation; the evolution of the mass-temperature relation is less sensitive to the details of preheating than the luminosity-temperature relation. Thus, in a survey constructed in this manner, it should be possible to disentangle the cosmological effects from those caused by the evolution of cluster structure.
Cluster Mass Function. In our treatment, we have relied on the mass function inferred from the large scale numerical simulations of Jenkins et al. (2000). Although we do not expect the results presented here to change qualitatively, changes in $`dN/dM`$ by up to the quoted accuracy of $`30\%`$ could affect the exact shape of the likelihood contours shown in Figures 1 and 2. It is important to test the scaling of the mass function with cosmological parameters in future simulations. We have further ignored the effects of galaxy formation and feedback on the limiting mass. In principle, the relation between the cluster SZE decrement and virial mass in the lowest mass clusters could be affected by these processes. In addition, the dependence of both the SZE decrement and the X-ray flux on cluster mass likely exhibits a non-negligible intrinsic scatter. The SZE decrement to virial mass relation is found to have a small scatter in numerical simulations (Metzler 1998), and to cause a negligible increase in the total cluster yields (Holder et al. 1999). However, the presence of scatter could effectively lower the limiting masses in our treatment of the X-ray survey.
Local Cluster Abundance. Perhaps the most critical assumption is that the local cluster abundance is known to high accuracy. We have used this assumption to determine $`\sigma _8`$, i.e. to eliminate one free parameter, effectively assigning "infinite weight" to the cluster abundance near $`z=0`$. This approach is appropriate for several reasons. The cosmological parameters make little difference to the cluster abundance at $`z\approx 0`$, other than the volume being proportional to $`h^{-3}`$. Similarly, the study of local cluster masses is cosmologically independent (up to a factor of $`h`$). In a $`10^4`$ square degree survey, we find that the total number of clusters between $`0<z<0.1`$, down to a limiting mass of $`2\times 10^{14}h^{-1}\mathrm{M}_{\odot }`$, is $`2500`$, with a random error of only $`\pm 2\%`$. We have experimented with our models, assuming that the normalization at $`z=0`$ is incorrectly determined by a fraction of 2%. In Figure 5, we show the shift in the usual likelihood contour in the X-ray survey, caused by errors in the local abundance at this level. As the figure shows, the shift is relatively small (by about the width of the $`1\sigma `$ region). In similar calculations with errors of $`\pm 4\%`$, we find shifts that are approximately twice as significant. We conclude that for our normalization procedure to be valid, the local cluster abundance has to be known to an accuracy of about $`\lesssim 10\%`$.
Although such an accuracy can be achieved by only $`600`$ nearby clusters (which can be provided, for example, by an analysis of the SDSS data or perhaps the 2MASS survey), it is interesting to consider a different approach, where $`\sigma _8`$ is treated as another free parameter in addition to $`\mathrm{\Omega }_m,h,`$ and $`w`$. The result of such a calculation over a 4-dimensional grid is displayed in Figure 6.2. This figure shows the likelihood contours along the slice $`h=0.65`$ through this parameter space, but in projection along the $`\sigma _8`$ axis, to be compared directly with the middle panel of Figure 2. Allowing $`\sigma _8`$ to vary results in a range of values $`0.70<\sigma _8<0.97`$, and considerably expands the allowed likelihood region. The shape of the contours stays nearly unchanged, but their widths along the $`\mathrm{\Omega }_m`$ direction expand by approximately a factor of $`4`$, and their lengths along the $`w`$ direction increase by about a factor of 2. We conclude that our constraints would be significantly weakened without the local normalization (but would still be potentially useful when combined with other data; see below).
More General Cosmologies. In section 5, we restricted our range of models to flat CDM models. We find that the redshift distribution of clusters in open CDM models typically resembles that in models with high $`w`$. This is demonstrated in Figure 4.1: both in the $`w=-0.2`$ and the OCDM model, the redshift distributions are flatter and extend to higher $`z`$ than in $`\mathrm{\Lambda }`$CDM. We find that OCDM models with suitably adjusted values of $`\mathrm{\Omega }_m`$ and $`h`$ are typically difficult to distinguish from those with $`w>-0.5`$, but the flat shape of $`dN/dzd\mathrm{\Omega }`$ makes OCDM easily distinguishable from $`\mathrm{\Lambda }`$CDM. Note that open CDM models appear inconsistent with the recent CMB anisotropy data from the Boomerang and Maxima experiments (e.g. Lange et al. (2000); White, Scott & Pierpaoli (2000); Bond et al. (2000)). A broader study of different cosmological models, including those with both dark energy and curvature, time-dependent $`w`$, and those with non-Gaussian initial conditions could reveal new degeneracies, and will be studied elsewhere.
### 6.3. Clusters versus CMB Anisotropy and High-$`z`$ SNe
A useful generic feature of the likelihood contours presented here is their difference from those expected in CMB anisotropy or Supernovae data. Two different cosmologies produce the same location (spherical harmonic index $`\ell _{\mathrm{peak}}`$) for the first Doppler peak of the CMB temperature anisotropy, provided they have the same comoving distance to the surface of last scattering (cf. Wang & Steinhardt (1998); White (1998); Huey et al. (1999)). Note that this is only the most prominent constraint that can be obtained from the CMB data, with considerably more information once the location and height of the second and higher Doppler peaks are measured. Similarly, the apparent magnitudes of the observed SNe constrain the luminosity distance $`d_L(z)`$ at $`0<z\lesssim 1`$ (Schmidt et al. (1998); Perlmutter et al. (1999)). In general, both of these types of observations will determine a combination of cosmological parameters that is different from the cluster constraints derived here.
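A short sketch makes the SNe-type degeneracy explicit: in a flat universe, pairs ($`\mathrm{\Omega }_m,w`$) that yield the same dimensionless luminosity distance at $`z=1`$ are indistinguishable to this test ($`H_0`$ cancels in the relative comparison below).

```python
import numpy as np
from scipy.integrate import quad

def dl(z, om, w):
    """Dimensionless luminosity distance H0*d_L/c for a flat (Omega_m, w) model."""
    einv = lambda zp: 1.0 / np.sqrt(om * (1 + zp)**3
                                    + (1 - om) * (1 + zp)**(3 * (1 + w)))
    d_c, _ = quad(einv, 0.0, z)
    return (1 + z) * d_c

ref = dl(1.0, 0.30, -1.0)  # fiducial LambdaCDM
for om, w in [(0.30, -1.0), (0.25, -0.8), (0.20, -0.6)]:
    print(om, w, dl(1.0, om, w) / ref - 1.0)  # fractional offset from fiducial
```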
In Figure 6.3, we zoom in on the relevant region of the $`\mathrm{\Omega }_m`$-$`w`$ plane in the X-ray survey, and compare the cluster constraints to those expected from CMB anisotropy or high-$`z`$ SNe. The three dashed curves correspond to the CMB constraints: the middle curve shows a combination of $`\mathrm{\Omega }_m`$ and $`w`$ that produces the constant $`\ell _{\mathrm{peak}}\approx 243`$ obtained in our fiducial $`\mathrm{\Lambda }`$CDM model (using the fitting formulae from White 1998 for the physical scale $`k_{\mathrm{peak}}`$); the other two dashed curves bracket a $`\pm 1\%`$ range around this value. Similarly, the dotted curves correspond to the constraints from SNe. The middle curve shows a line of constant $`d_L`$ at $`z=1`$ that agrees with the $`\mathrm{\Lambda }`$CDM model; the two other curves produce a $`d_L`$ that differs from the fiducial value by $`\pm 1\%`$. As the figure shows, the lines of CMB and SNe parameter degeneracies run somewhat unfavorably parallel to each other; however, both of those degeneracies are much more complementary to the direction of the parameter degeneracy in cluster abundance studies. In particular, the maximum allowed value of $`w`$, using either the CMB or the SNe data, is $`w\approx -0.8`$, while this is reduced to $`w\approx -0.95`$ when the cluster constraints are added. Note that in Figure 6.3, we have assumed a fixed value of $`h=0.65`$; however, we find that relaxing this assumption does not significantly change the above conclusion. The CMB and SNe constraints depend more sensitively on $`h`$ than the cluster constraints do: as a result, the confidence regions do not overlap significantly even in the three-dimensional ($`w,\mathrm{\Omega }_m,h`$) space.
The high complementarity of the cluster constraints to those from the other two methods can be understood based on the discussions in § 4.1. To remain consistent with the CMB and SNe Ia constraints, an increase in $`w`$ must be coupled with a decrease in $`\mathrm{\Omega }_m`$; however, both increasing $`w`$ and lowering $`\mathrm{\Omega }_m`$ raise the number of detected clusters. To keep the total number of clusters constant, an increase in $`w`$ must instead be balanced by an increase in $`\mathrm{\Omega }_m`$. Note that this statement is true both for the SZE and the X-ray surveys. Combining the cluster constraints with the CMB and SNe Ia constraints will therefore likely result in improved estimates of the cosmological parameters, and we do not expect this conclusion to rely on the details of the two surveys considered here.
## 7. Conclusions
We studied the expected evolution of the galaxy cluster abundance from $`0<z\lesssim 3`$ in different cosmologies, including the effects of variations in the cosmic equation of state parameter $`w\equiv p/\rho `$. By considering a range of cosmological models, we quantified the accuracy to which $`\mathrm{\Omega }_m`$, $`w`$, and $`h`$ can be determined in the future, using a 12 deg<sup>2</sup> Sunyaev-Zel'dovich Effect survey and a deep 10<sup>4</sup> deg<sup>2</sup> X-ray survey. In our analysis, we have assumed that the local cluster abundance is known accurately: we find that in practice, an accuracy of $`5\%`$ is sufficient for our results to be valid.
We find that raising $`w`$ significantly flattens the redshift distribution; this can not be mimicked by variations in either $`\mathrm{\Omega }_m`$ or $`h`$, which affect essentially only the normalization of the redshift distribution. As a result, both surveys will be able to improve present constraints on $`w`$. In the $`\mathrm{\Omega }_m`$-$`w`$ plane, both the SZE and X-ray surveys yield constraints that are highly complementary to those obtained from the CMB anisotropy and high-$`z`$ SNe. Note that the SZE and X-ray surveys are themselves somewhat complementary. In combination with these data, the SZE survey can determine both $`w`$ and $`\mathrm{\Omega }_m`$ to an accuracy of $`10\%`$ at $`3\sigma `$ significance. Further improvements will be possible from the X-ray survey. The large number of clusters further alleviates the degeneracy between $`w`$ and both $`\mathrm{\Omega }_m`$ and $`h`$, and, as a result, the X-ray sample can determine $`w`$ to $`10\%`$ and $`\mathrm{\Omega }_m`$ to $`5\%`$ accuracy, in combination with either the CMB or the SN data.
Our work focuses primarily on the statistics of cluster surveys. We have provided an estimate of the scale of various systematic uncertainties. Further work is needed to clarify the role of these uncertainties, arising especially from the analytic estimates of the scaling of the mass limits with cosmology, the dependence of the cluster mass function on cosmology, and our neglect of issues such as galaxy formation in the lowest mass clusters. However, our findings suggest that, in a flat universe, the cluster data lead to tight constraints on a combination of $`\mathrm{\Omega }_m`$ and $`w`$, especially valuable because of their high complementarity to those obtained from the CMB anisotropy or Hubble diagrams using SNe as standard candles.
We thank L. Hui for useful discussions, D. Eisenstein, M. Turner, D. Spergel and the anonymous referee for useful comments, and J. Carlstrom and the COSMEX team for providing access to instrument characteristics required to estimate the yields from their planned surveys. ZH is supported by the DOE and the NASA grant NAG 5-7092 at Fermilab, and by NASA through the Hubble Fellowship grant HF-01119.01-99A, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA under contract NAS 5-26555. JJM is supported by Chandra Fellowship grant PF8-1003, awarded through the Chandra Science Center. The Chandra Science Center is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-39073.
IFT 2000/2
Heavy Quark Production at a Linear $`e^+e^{-}`$ and Photon Collider and its Sensitivity to the Gluon Content of the Photon
P. Jankowski and M. Krawczyk
Institute of Theoretical Physics, Warsaw University
ul. Hoża 69, 00-681 Warsaw, Poland
A. De Roeck
CERN, 1211 Geneva 23, Switzerland
## Abstract
A high energy linear $`e^+e^{-}`$ collider (LC) can also be used as a Photon Collider (PC), using Compton scattering of laser photons on the $`e^+/e^{-}`$ beams. The leading order cross-section for the production of heavy quarks, $`e^+e^{-}\to e^+e^{-}Q(\overline{Q})X`$, at high transverse momenta is calculated for both LC and PC modes. The sensitivity of this process to the parton distribution parametrizations of real photons, especially the gluon content, is tested for both modes.
For the study of a future electron-positron Linear Collider (LC) it is important to examine the physics potential of its main and possible derived options. The so-called Photon (or Compton) Collider (PC) is an option in which high energy real photons can be obtained by backscattering photons from a laser beam off the electron or positron beam. In this way an excellent tool for the study of $`\gamma \gamma `$ collisions at high energies can be constructed.
In high energy $`e^+e^{-}`$ collisions the hadronic final state is predominantly produced in $`\gamma ^{*}\gamma ^{*}`$ interactions where the virtual photons are almost on mass shell. These processes can be described by an effective (real) photon energy spectrum, i.e. using the Weizsäcker-Williams (WW) approximation. A Photon Collider based on Compton scattering, however, provides beams of real photons, which can be produced in a definite polarization state and with high monochromaticity. Moreover, the resulting photon spectrum (denoted as LASER) is much harder than the WW one. A comparison of the photon spectra used in this analysis (see Appendix) is presented in Fig. 1.
The main goal of this work is to compare the LC and PC opportunities for probing the gluon distribution in the photon, without making use of polarization. Heavy quark production in unpolarized electron-positron scattering, $`e^+e^{-}\to e^+e^{-}Q(\overline{Q})X`$, is a promising process for such a study, see e.g. and also , where a related topic is considered. The measurement of this process is also an important test of QCD by itself. Heavy quarks can be produced in $`\gamma \gamma `$ collisions through three mechanisms. Direct (DD) production occurs when both photons couple directly to the $`Q\overline{Q}`$ pair. In single resolved photoproduction processes (DR) one of the photons interacts via its partonic structure with the second photon. When both photons split into a flux of quarks and gluons, the process is labelled a double resolved photon (RR) process.
Calculations for processes involving heavy quarks are performed in two schemes. They differ by the number of quark flavours which are considered to be part of the structure of the photon, and thus can take part in the process as partons. In the massive scenario, the so-called Fixed Flavour Number Scheme (FFNS), the photon "consists" only of light quarks and gluons, which may interact, and massive heavy quarks can only be created, e.g. via gluon-gluon fusion. The massless Variable Flavour Number Scheme (VFNS) considers, apart from the gluons and $`u,d`$ and $`s`$ quarks, also heavy quarks as active flavours, which are all treated as massless. This scheme is expected to be valid only for $`p_T\gg m_{c(b)}`$. All the partonic reactions contributing in LO to heavy quark production in these schemes are shown in Table 1.
We calculate in LO QCD the production rates for $`c`$ and $`b`$ quarks produced with large $`p_T`$ for the $`e^+e^{-}`$ colliders LEP and LC at energies of 180 GeV and of 300, 500 and 800 GeV, respectively, and for the $`\gamma \gamma `$ PC based on the corresponding ($`e^+e^{-}`$) LC collider. We test the sensitivity of the considered processes to the gluonic content of the photon by using two different parton density parametrizations for the real photon: GRV and SaS1d . Both these parametrizations were extracted from QCD fits to photon structure function data measured in $`e\gamma `$ collisions from $`e^+e^{-}`$ interactions, but make different assumptions about the gluon content, which is only weakly constrained by these measurements. The GRV and SaS1d distributions both start the evolution from a small starting scale, $`Q_0^2=0.25`$ GeV<sup>2</sup> and 0.36 GeV<sup>2</sup>, respectively, a procedure which has turned out to be quite successful for the parton densities in the proton. Consequently both parton densities predict a rise of the gluon density at small $`x`$. The different treatment of the vector meson valence quark distributions leads to a larger gluon component of the photon at small $`x`$ for GRV compared to SaS1d.
In the massive (FFNS) calculations the number of active flavours ($`N_f`$) is taken to be 3. In the massless (VFNS) scheme it varies from 3 to 5 depending on the value of the hard (factorization, renormalization) scale $`\mu `$. Heavy quarks are included in the computation provided that $`\mu >m_c`$ ($`m_b`$), with $`m_c=1.6`$ GeV ($`m_b=4.5`$ GeV) being the mass of the $`c`$ ($`b`$) quark. When charm production is calculated in the VFNS the bottom quarks are always excluded, hence $`N_f=3`$ or $`N_f=4`$. Also the QCD energy scale $`\mathrm{\Lambda }_{QCD}`$, which appears in the one-loop formula for the strong coupling constant $`\alpha _s`$, is affected by the change of the number of active flavours. Therefore we denote it as $`\mathrm{\Lambda }_{QCD}^{N_f}`$. We take this scale to be:
$$\mathrm{\Lambda }_{QCD}^3=232\;\mathrm{MeV},\qquad \mathrm{\Lambda }_{QCD}^4=200\;\mathrm{MeV},\qquad \mathrm{\Lambda }_{QCD}^5=153\;\mathrm{MeV}$$
(1)
as in . If not stated otherwise, the hard scale $`\mu `$ in the calculation of the cross-section is taken to be the transverse mass of the produced heavy quark, $`m_T=\sqrt{m_Q^2+p_T^2(Q)}`$.
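For concreteness, a minimal implementation of this running coupling, with flavour thresholds at $`m_c`$ and $`m_b`$ and the matched $`\mathrm{\Lambda }_{QCD}^{N_f}`$ values of eq. (1), could look as follows; placing the thresholds exactly at the quark masses is our simplifying assumption.

```python
import math

M_C, M_B = 1.6, 4.5                      # quark masses [GeV]
LAMBDA = {3: 0.232, 4: 0.200, 5: 0.153}  # Lambda_QCD^{N_f} [GeV], eq. (1)

def n_flavours(mu, nf_max=5):
    """Active flavours at scale mu; nf_max=4 caps N_f for charm in the VFNS."""
    if mu > M_B:
        nf = 5
    elif mu > M_C:
        nf = 4
    else:
        nf = 3
    return min(nf, nf_max)

def alpha_s(mu, nf_max=5):
    """LO (one-loop) strong coupling at scale mu [GeV]."""
    nf = n_flavours(mu, nf_max)
    lam = LAMBDA[nf]
    return 12.0 * math.pi / ((33.0 - 2.0 * nf) * math.log(mu**2 / lam**2))

# e.g. at the transverse mass of a charm quark with p_T = 10 GeV:
mt = math.sqrt(M_C**2 + 10.0**2)
print(alpha_s(mt))  # ~0.2 at mu ~ 10 GeV
```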
Both resolved photon contributions (DR and RR) to the process $`e^+e^{-}\to e^+e^{-}Q(\overline{Q})X`$ are dominated by reactions initiated by gluons, especially in the PC kinematic regime. This can be seen in Fig. 2, where the individual contributions to the differential LO cross-section are presented for charm quark production. We study $`\frac{d^2\sigma }{dp_T^2dy}`$, with $`y=\frac{1}{2}\mathrm{ln}\frac{E+p_L}{E-p_L}`$ being the rapidity of the produced heavy quark and $`p_T`$ its transverse momentum. The results were obtained in the VFNS scheme for both types of initial photon spectra. A fixed energy $`\sqrt{s}`$=500 GeV and $`p_T`$=10 GeV for the charm quark were assumed. The calculation was performed using the GRV parton parametrization. The dominance of the gluon over the quark contribution is larger for the PC spectrum; it is also larger in the FFNS scheme (not shown) compared to the VFNS one.
Fig. 3 shows the differential cross-sections for charm quark production with $`p_T`$=10 GeV for the direct, single resolved, double resolved and total contributions. The results were obtained in the VFNS scheme using the GRV parton distribution. An interesting pattern is observed. In the case of the $`e^+e^{-}`$ LC with a WW photon spectrum, either the process is dominated by the direct coupling of photons to heavy quarks, or the resolved and direct contributions are found to be of the same importance. The DR and RR contributions increase with increasing energy. Nevertheless, in the range of the anticipated LC energies, the gluon induced reactions do not play the dominant role for heavy quark production. The opposite is found for a PC: heavy quark production is always dominated by resolved photon interactions. The direct contribution becomes even less important with increasing centre of mass system energy. Hence, the charm production cross-section is much more sensitive to the parton distribution parametrization of the photon for a PC compared to a LC. Since for the PC option the resolved photon contributions are clearly dominated by processes involving gluons (Fig. 2), this option offers an excellent tool for measuring the gluonic content of the photon.
An important feature of the results is the observed rise of the resolved photon process contribution, and therefore also an increasing sensitivity to gluons (see below), with increasing energy. This results from the fact that higher energies explore regions of small Bjorken-$`x_\gamma `$ values. The minimal $`x_\gamma `$ value reached for $`p_T=10`$ GeV varies from $`0.01`$ for $`\sqrt{s}=180`$ GeV to $`0.0006`$ for $`\sqrt{s}=800`$ GeV. At the same time, the differences between the gluon distributions of the GRV and SaS1d parametrizations are large at small $`x_\gamma `$ values. These results are not affected by the choice of the scheme for the heavy quark calculation (see Fig. 4): for both the FFNS and VFNS the cross-section is 20-30 times larger for the PC than for the LC.
The sensitivity of the considered process to the gluon distribution is studied further by comparing the predictions obtained using two different parton parametrizations for the photon. In Figs. 5 and 6 the relative difference of the cross-sections $`\frac{d^2\sigma }{dp_T^2dy}`$, obtained using the GRV and SaS1d parton distribution parametrizations, is presented in the VFNS and FFNS schemes. As expected, the PC photon spectrum leads to a larger sensitivity than the WW spectrum for a given energy of the $`e^+e^{-}`$ collider: the difference between the two structure function parametrizations shown is 5-20% for a WW and 25-40% for a PC photon spectrum.
We have presented here only the results for $`c`$ quark production. The corresponding $`b`$-quark production (not shown) exhibits all the features listed above, though the difference in sensitivity to the gluon distribution between a PC and a LC is smaller. All calculated cross-sections for beauty production are found to be even more sensitive to the gluonic content of the photon than the corresponding ones for charm production, but the cross-sections are smaller; see also below.
Our calculation predicts a large number of heavy quarks, $`c`$ and $`b`$, produced at the considered centre of mass energies of the $`e^+e^{-}`$ collider of 300, 500 and 800 GeV for both the LC and PC options. The event numbers are given in Table 2, assuming an $`e^+e^{-}`$ integrated luminosity of 100 $`fb^1`$, which could be achieved at a high luminosity LC with one year of running, and for $`p_T>`$ 10 GeV. In practice, charm with a $`p_T>10`$ GeV, produced e.g. via $`D^{}`$ decays, can be detected in a generic LC detector without dedicated detectors in the rapidity range $`|y|<1.5`$-2. The charm detection efficiency, including the fragmentation fraction and branching ratios, is typically around a few times 10<sup>-3</sup> . Hence the number of detected events from charm with such high $`p_T`$ will be approximately a few thousand for the LC and several tens of thousands for the PC. The latter will clearly allow for precision measurements of charm production and the gluon distribution in the photon. Some of the advantage of the PC over the LC is lost, however, due to charm production at large $`y`$ in the case of the PC (see Fig. 4), which will go undetected with the presently planned detectors. The statistical precision of the measurements of $`\frac{d^2\sigma }{dp_T^2dy}`$ will be approximately 5-10% at the LC and a few % at the PC.
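The order of magnitude of these numbers follows from simple counting, as in the sketch below; the cross-section value is a placeholder chosen only to reproduce the quoted LC yield, and the efficiency is the few times $`10^3`$ quoted above (per-bin precision is correspondingly worse than the full-sample figure printed here).

```python
# Rough counting behind the quoted event numbers (all inputs illustrative).
LUMI_PB = 100.0e3   # 100 fb^-1 expressed in pb^-1
SIGMA_PB = 10.0     # assumed visible charm cross-section, p_T > 10 GeV [pb]
EFF = 3.0e-3        # detection efficiency (fragmentation x BR x acceptance)

n_produced = LUMI_PB * SIGMA_PB
n_detected = EFF * n_produced
print(n_produced, n_detected, n_detected**-0.5)  # ~1e6 produced, ~3e3 detected
```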
In conclusion, the calculated cross-sections for heavy quark ($`c`$ and $`b`$) production in two photon collisions show a much higher sensitivity to the parton distribution parametrization of the structure of the photon in the case of a Photon Collider compared to an $`e^+e^{-}`$ Linear Collider. This does not depend on the particular scheme used to calculate the heavy quark cross-sections. Since the resolved photon contribution is to a large extent dominated by gluon induced processes, especially for a high energy PC, we conclude that heavy quark production provides indeed a sensitive probe of the gluon content of the photon. Combining the above features with the much larger cross-sections achieved at a PC for given $`e^+e^{-}`$ collision energies favours this option for future photon structure research. A high luminosity $`e^+e^{}`$ collider which drives the PC collider will however be essential.
Appendix
The simplest Weizsäcker-Williams formula of the Equivalent Photon Approximation is used:
$$f_\gamma (x)=\frac{\alpha }{2\pi }(\frac{2}{x}-2+x)\mathrm{log}(\frac{\mu ^2}{4m_e^2})$$
(2)
where $`x=\frac{E_\gamma }{E_e}`$, $`m_e`$ is the mass of the electron and $`\mu `$ is the energy scale of the process.
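A direct transcription of this flux, with $`\mu `$ set for instance to the transverse mass used elsewhere in the text, reads:

```python
import math

ALPHA_EM = 1.0 / 137.036
M_E = 0.511e-3  # electron mass [GeV]

def f_ww(x, mu):
    """Weizsacker-Williams photon flux f_gamma(x) at scale mu [GeV]."""
    if not 0.0 < x < 1.0:
        return 0.0
    return (ALPHA_EM / (2.0 * math.pi)) * (2.0 / x - 2.0 + x) \
        * math.log(mu**2 / (4.0 * M_E**2))

print(f_ww(0.1, 10.0))  # soft photons dominate: the flux grows ~1/x at small x
```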
In the case of the Compton (LASER) mode we use the original energy spectrum of unpolarized photons :
$`f_\gamma (x)={\displaystyle \frac{1}{\sigma _c^{np}}}[{\displaystyle \frac{1}{1-x}}+1-x-4r(1-r)],`$
$`\sigma _c^{np}=(1-{\displaystyle \frac{4}{\kappa }}-{\displaystyle \frac{8}{\kappa ^2}})\mathrm{ln}(\kappa +1)+{\displaystyle \frac{1}{2}}+{\displaystyle \frac{8}{\kappa }}-{\displaystyle \frac{1}{2(\kappa +1)^2}},`$ (3)
$`r={\displaystyle \frac{x}{\kappa (1-x)}},`$
where $`\kappa `$ is a parameter restricting the allowed range of $`x`$: $`x<\frac{\kappa }{1+\kappa }`$. It is argued that the optimal value of $`\kappa `$ is 4.83, which gives a cut-off $`x_{max}=0.83`$. We have chosen these values for this analysis. Note that the part of the spectrum with $`x<0.6`$ is very sensitive to the technical parameters of the PC, such as the size of the beam.
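The LASER spectrum of eq. (3) can be transcribed in the same way; with $`\kappa =4.83`$ the support ends at $`x_{max}=\kappa /(1+\kappa )0.83`$, and the spectrum is indeed much harder than the WW one:

```python
import math

def f_laser(x, kappa=4.83):
    """Backscattered-laser photon spectrum f_gamma(x) of eq. (3)."""
    x_max = kappa / (1.0 + kappa)
    if not 0.0 <= x < x_max:
        return 0.0
    sigma_np = ((1.0 - 4.0 / kappa - 8.0 / kappa**2) * math.log(kappa + 1.0)
                + 0.5 + 8.0 / kappa - 1.0 / (2.0 * (kappa + 1.0)**2))
    r = x / (kappa * (1.0 - x))
    return (1.0 / sigma_np) * (1.0 / (1.0 - x) + 1.0 - x - 4.0 * r * (1.0 - r))

print(f_laser(0.8))  # the spectrum peaks near x_max ~ 0.83
```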
M. Krawczyk has been partly supported by the Polish State Committee for Scientific Research (grant 2003B 01414, 1999-2000).
## I Introduction
The La<sub>1-x</sub>A<sub>x</sub>MnO<sub>3+δ</sub> systems exhibit a wide range of different phenomena depending on the concentration, x, of the divalent substitutional atom, A (A = Ca, Ba, Sr, Pb, etc.) and the O concentration. These include ferromagnetism, antiferromagnetism, charge ordering, a metal-insulator transition, and large magnetoresistance effects. For x roughly in the range 0.2-0.5, these systems have a ferromagnetic transition at a transition temperature, T<sub>c</sub>, a metal/insulator transition (MI) at T=T<sub>MI</sub>, and a "colossal" magnetoresistance (CMR) which reaches its maximum at T<sub>MR</sub>, with T<sub>MI</sub> ∼ T<sub>MR</sub> ∼ T<sub>c</sub> in many cases. The substitution of a divalent ion for La<sup>+3</sup> formally changes the average Mn valence to 3+x (the Mn valence is +3 in LaMnO<sub>3</sub> and +4 in CaMnO<sub>3</sub>), and is usually thought to introduce holes into the narrow e<sub>g</sub> band of Mn 3d-electrons, which are also hybridized with O 2p states. It is the changing occupation of this hybridized band with x that leads to many of the observed properties. Excess O in LaMnO<sub>3+δ</sub> (actually Mn and La vacancies) or La vacancies can also increase the formal Mn valence, thereby adding carriers to the system.
The coupling between charge and magnetism has been modeled using the double exchange (DE) mechanism plus strong electron-phonon coupling. In the ferromagnetic metallic (FM) state, well below T<sub>c</sub>, the charge carriers are assumed to be highly delocalized (large polarons) and spread out over several unit cells for CMR samples.
In the paramagnetic (PM) state above T<sub>c</sub> for CMR samples, there is a significant increase in the distortions about the Mn atoms compared to the low temperature data. It is generally assumed that this distortion is a result of the charge carriers becoming localized on the Mn atoms. However, mobile holes could also be located more on the O atoms, as is the case for the cuprates. Consequently, there might be very little change in the charge localized on the Mn atoms above and below T<sub>c</sub>. This raises the question as to how the energy shift of the absorption edge relates to valence and the local environment in these materials. Is there a mixture of "ionic-like" +3 and +4 states, an average valence, as in a metal where all Mn atoms are equivalent, or something in between?
Experimentally, the Mn $`K`$-edge absorption for the Ca substituted manganites is sharp with relatively little structure, and shifts almost uniformly with dopant concentration, consistent with an average valence state of v = 3+x. The sharpness of the edge is suggestive of a transition into a state that is uniform throughout the sample, and initially we interpreted this result to mean that all Mn sites have comparable local charge densities. This is difficult to reconcile with the usual assumption of a mixture of purely local ionic Mn<sup>+3</sup> and Mn<sup>+4</sup> sites. For example, below we show explicitly that the observed $`K`$-edge cannot be modeled as a weighted sum of the edges of the end compounds LaMnO<sub>3</sub> and CaMnO<sub>3</sub> for the charge ordered (CO) material with 65% Ca. Note however that even with similar charge densities, the d-electron wavefunctions and the local environment need not be identical at each site. In addition, Tyson et al. have investigated the K<sub>β</sub> emission which probes Mn 3d states through the 3p-1s decay, and report that these spectra for the substituted manganite materials can be modeled as a weighted sum of the end compounds although the shifts with valence are small.
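The weighted-sum test referred to above reduces to a one-parameter least-squares fit. The sketch below assumes the spectra have already been aligned in energy and normalized to a unit edge step; a large residual relative to the noise level signals that the measured edge is sharper than any ionic Mn<sup>+3</sup>/Mn<sup>+4</sup> superposition.

```python
import numpy as np

def best_mixture(mu_x, mu_3, mu_4):
    """Least-squares mixing fraction x' and rms residual for a two-component
    model mu_x(E) ~ (1-x')*mu_3(E) + x'*mu_4(E), on a common energy grid."""
    d = mu_4 - mu_3                    # difference of end-member edges
    xp = np.dot(mu_x - mu_3, d) / np.dot(d, d)
    model = (1.0 - xp) * mu_3 + xp * mu_4
    resid = np.sqrt(np.mean((mu_x - model)**2))
    return xp, resid
```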
The pre-edge structure for the Mn $`K`$-edge consists of 2-3 small peaks labeled A<sub>1</sub>-A<sub>3</sub> which have Mn 3d character. These features are observed for all the transition metals and are generally ascribed to a mixture of 1s-3d quadrupole and 1s-p dipole transitions (the latter made weakly allowed by a hybridization between 3d states and p-states). Although the latter are assumed to be dominant, the interpretation of the A<sub>i</sub> peaks is still controversial. Two important issues are: 1) How large is the quadrupole contribution and when is it important? and 2) How are the dipole transitions made allowed, since in many instances the local environment has inversion symmetry, in which case the transition is symmetry forbidden? There have been a large number of papers in the last five years addressing these issues for many of the transition metals, not all of which are in agreement. However some questions have been answered. Since quadrupole-allowed pre-edge features have a strong angular dependence, in contrast to the dipole-allowed transitions, measurements on single crystals as a function of angle can separate the two contributions. Such studies have shown that quadrupole transitions contribute to the A<sub>i</sub> peaks in Ti, V, Ni, and Fe, with the largest contribution at the lowest energies of the pre-edge. The amplitude can be as large as ∼4% of the absorption edge height for some systems at optimum orientations; but more generally it is of order 1%, and could be smaller in powdered samples which are orientational averages. The dipole-allowed A<sub>i</sub> peaks are often in the 5-15% range and often do dominate, but not always. For example, for Ti in rutile (TiO<sub>2</sub>), the small A<sub>1</sub> peak appears to be primarily a quadrupole feature.
Early Mn XANES work assumed that the A<sub>1</sub>-A<sub>2</sub> splitting is produced by the crystal field parameter, often called the 10Dq parameter, which splits the t<sub>2g</sub> and e<sub>g</sub> states. These investigations did not consider the possibility of a large on-site Coulomb term, U. Recent work, using the Local Spin Density Approximation (LSDA or sometimes LDA) with and without U, and including the Hund's rule exchange parameter, J<sub>H</sub>, finds a Coulomb splitting of both the t<sub>2g</sub> and e<sub>g</sub> states, with the e<sub>g</sub> states further split by the Jahn-Teller (J-T) interaction.
Pickett et al. (LSDA model) suggest that these systems are half metallic, with a gap between the O band and a minority spin d-band. They also point out that near 25% Ca, all Mn sites could be essentially identical if the Ca were uniformly distributed such that there are two Ca and six La second neighbors to each Mn. Thus for the concentration range 20-30%, the local environment for each Mn may be very similar. Anisimov et al. and Mizokawa et al. suggest that a large fraction of the d-electrons are found on the Mn atoms rather than being transferred to the O atoms as in an ionic solid (thereby leaving holes in the O band). These calculations yield nearly the same electron density on each Mn atom, for sites associated with formal Mn<sup>+3</sup> and Mn<sup>+4</sup> valences. Other recent papers have also stressed the importance of O, and the question of charge localization on the O atoms or on the Mn atoms has been considered.
Some promising calculations for interpreting the pre-edge features are those of Elfimov et al. These calculations indicate that, in addition to U and J<sub>H</sub>, there are appreciable higher order Coulomb terms that must be included and that strong hybridization occurs between the Mn 4p orbitals and the Mn 3d states on neighboring Mn atoms. The resulting splitting of the majority and minority e<sub>g</sub> spin states leads to a splitting of the Mn pre-edge features. We consider these calculations together with some of the new results on pre-edges in the discussion section.
In this paper we address the valence question and probe the Mn 3d bands using the near edge structure. Specifically we show there is no Mn $`K`$-edge shift (within 0.04 eV) through T<sub>c</sub>. We also compare the main edge, which is too narrow to arise from a mixture of ionic Mn<sup>+3</sup> and Mn<sup>+4</sup>, with the edge for a material, Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub>, that does have a mixture of these ions. In addition, using a careful subtraction method, we show that there is indeed a small structure in the main edge that correlates with T<sub>c</sub> for the CMR samples and with T<sub>CO</sub> for the charge ordered material. The structure for the CMR material is out of phase with that for the CO sample, which suggests that there is a distortion for the CO sample that increases at low T. We also note that the pre-edge structure has a temperature dependence which again correlates with T<sub>c</sub> for CMR samples. The splitting of the pre-edge peaks decreases in the ferromagnetic phase, which may indicate a change in covalency. Finally, our interpretation of the XANES differs from earlier work on Mn $`K`$-edges but is consistent with recent studies of other transition metal atoms.
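The subtraction method mentioned above amounts to interpolating normalized scans onto a common energy grid and differencing against a reference temperature; a schematic version (the function and variable names are ours, not a standard package API) is:

```python
import numpy as np

def edge_difference(E_grid, spectra):
    """spectra: dict {T: (E, mu)} of normalized XANES scans.
    Returns {T: mu(T) - mu(T_ref)} with T_ref the lowest temperature."""
    interp = {T: np.interp(E_grid, E, mu) for T, (E, mu) in spectra.items()}
    T_ref = min(interp)
    return {T: interp[T] - interp[T_ref] for T in interp}

# The amplitude of the resulting difference feature, tracked versus T,
# is what correlates with T_c (CMR samples) or T_CO (the x = 0.65 sample).
```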
In Sec II we summarize the samples and experimental setup; some details were given earlier. Then in Sec III, we provide a more extensive discussion of the shift of the Mn $`K`$-edge as a function of concentration and temperature. Here we also present the pre-edge results. We consider the implications of these results in Sec IV.
## II Experimental details
Many samples are used in this study, with the average Mn valence changed in a variety of ways: divalent substitutions for La<sup>+3</sup> and changes in the La<sup>+3</sup> or O concentrations. Powder samples of La<sub>1-x</sub>A<sub>x</sub>MnO<sub>3</sub> were prepared by solid state reaction of La<sub>2</sub>O<sub>3</sub>, MnO<sub>2</sub>, and a dopant compound - CaCO<sub>3</sub>, PbO, or BaO - for the various divalent atoms, A. Ca substitutions are $`x`$=0.0, 0.12, 0.21, 0.25, 0.3, 0.65 and 1.0, and Ba and Pb are 0.33. Several firings with repeated grindings were carried out using temperatures up to 1400C, with in some cases a final slow cool at 1C per minute. The dc magnetization was measured using a commercial SQUID magnetometer. The end compounds, CaMnO<sub>3</sub> and LaMnO<sub>3.006</sub>, show antiferromagnetic transitions at $`\sim 130`$ and 125 K, respectively, while the $`x`$=0.65 sample showed features consistent with a charge ordered (CO) transition at 270 K and an AF transition at $`\sim 140`$ K. Similar measurements on the substituted manganites indicate that they are all orthorhombic. The average Mn valence for several Ca substituted samples was also determined by titration (Sec. III A). See Refs. for further details.
The LaMnO<sub>3.006</sub> sample was prepared by grinding stoichiometric amounts of La<sub>2</sub>O<sub>3</sub> (Alfa Aesar Reacton 99.99%) and MnO<sub>2</sub> (Alfa Aesar Puratronic 99.999%) in an Al<sub>2</sub>O<sub>3</sub> mortar and pestle under acetone until well mixed. The powder sample was formed into a 3/4" diameter pellet using uniaxial pressure (1000 lbs), and fired in an Al<sub>2</sub>O<sub>3</sub> boat under pure oxygen for 12 hours at 1200-1250C. Next the sample was cooled to 800C, re-ground, re-pelletized, and refired at 1200-1250C for an additional 24 hours. This process was repeated until a single-phase, rhombohedral XRD trace was obtained. The reground powder was placed in an Al<sub>2</sub>O<sub>3</sub> boat and post-annealed in UHP Ar at 1000C for 24 hrs. The oxygen partial pressure was about 60 ppm (determined using an Ametek oxygen analyzer). The sample was then quenched to room temperature. Diffraction, titration and TGA measurements indicate this sample is essentially stoichiometric, with an oxygen content of 3.006.
Additional LaMnO<sub>3+y</sub> specimens with various average Mn valences were prepared at 1300 C in air, followed by three intermediate regrindings. The original specimen was removed from the furnace at 1300 C and has a Mn valence of 3.150. A piece of this specimen was reacted overnight at 1000 C and removed from the furnace, producing a sample with an average Mn valence of 3.206. A nearly stoichiometric specimen with average Mn valence of 3.063 was prepared at temperatures up to 1350 C with 4 intermediate regrindings in flowing helium gas. Finally, the nonstoichiometric La<sub>0.9</sub>MnO<sub>3</sub> specimen was prepared at temperatures of up to 1350 C with three intermediate regrindings. It was slow-cooled in air at 1.5 C/min to room temperature and had an average Mn valence of 3.312. For each of these samples the valence was determined by titration.
A sample that should have isolated Mn<sup>+3</sup> and Mn<sup>+4</sup> species is also needed for comparison purposes; such a material is Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub>. The two species are due to the oxygen defect structure that puts vacancies into the MnO<sub>2</sub> planes to form mixtures of square pyramids and octahedra. This highly insulating material can then be understood from chemical reasoning to be Mn<sup>+3</sup> (square pyramids) and Mn<sup>+4</sup> (octahedra). Some further justification for this assignment comes from the compound Ca<sub>2</sub>MnO<sub>3.5</sub>, which is all Mn<sup>+3</sup> and has only square pyramids with vacancies in the MnO<sub>2</sub> planes; it is an ordered superstructure of the single-layer compound. Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> was synthesized by firing a stoichiometric mixture of SrCO<sub>3</sub> and MnO<sub>2</sub> at 1650 C for 12 hr followed by rapid quenching into dry ice. This procedure is essential to prevent decomposition into $`\alpha `$-Sr<sub>2</sub>MnO<sub>4</sub> and Sr<sub>4</sub>Mn<sub>3</sub>O<sub>10</sub> on cooling and to prevent oxidation to Sr<sub>3</sub>Mn<sub>2</sub>O<sub>7</sub>. The oxygen content was measured independently by iodometric titration and by thermogravimetric analysis, both techniques yielding 6.55(1) oxygen atoms per formula unit.
All XAFS data were collected at the Stanford Synchrotron Radiation Laboratory (SSRL). For all samples, most Mn $`K`$-edge data were collected on beam line 2-3 using Si(220) double-monochromator crystals. Some data were collected on beam line 4-3 using Si(111) crystals, while most of the Mn $`K`$-edge data for the Ba substituted sample were collected on beam line 10-2 using Si(111). The manganite powders were reground, passed through a 400-mesh sieve, and brushed onto scotch tape. Layers of tape were stacked to obtain absorption lengths $`\mu _{\mathrm{Mn}}t\approx 1`$ ($`\mu _{\mathrm{Mn}}`$ is the Mn contribution to the absorption coefficient and $`t`$ the sample thickness) for each sample. Samples were placed in an Oxford LHe flow cryostat, and temperatures were regulated to within 0.1 K. All data were collected in transmission mode. A powdered Mn metal sample was used as an energy reference for each scan. The pre-edge absorption (absorption from other excitations) was removed by fitting the data to a Victoreen formula, and a simple cubic spline (7 knots at constant intervals $`\approx 140`$ eV in $`E`$) was used to simulate the embedded-atom absorption, $`\mu _0`$, above the edge.
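This data-reduction step is compact enough to sketch in code. The following is a minimal illustration, not the original analysis; the energy windows, knot placement, and array names are assumptions chosen only to show the two fits:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def remove_pre_edge(E, mu, e_fit_max=6500.0):
    """Subtract a Victoreen background mu_b(E) = C/E^3 - D/E^4 fitted below the edge.
    E in eV (sorted, increasing), mu in absorption units; e_fit_max is illustrative."""
    sel = E < e_fit_max
    A = np.column_stack([E[sel] ** -3, -(E[sel] ** -4)])   # linear in (C, D)
    (C, D), *_ = np.linalg.lstsq(A, mu[sel], rcond=None)
    return mu - (C * E ** -3 - D * E ** -4)

def embedded_atom_mu0(E, mu_net, e_min=6570.0, knot_spacing=140.0):
    """Cubic spline through the post-edge region, approximating mu_0(E)."""
    sel = E > e_min
    x, y = E[sel], mu_net[sel]
    knots = np.arange(x[0] + knot_spacing, x[-1], knot_spacing)[:7]  # up to 7 knots
    return LSQUnivariateSpline(x, y, knots, k=3)
```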
The edge shifts are reported relative to a Mn powdered metal foil for which we take the position of the first inflection point to be 6537.4 eV. For each scan, the position of the reference edge was determined by fitting the edge to that of a fiducial scan. This provided a correction to the relative edge position consistent within $`\pm `$ 0.015 eV - see next section.
In the pre-edge region there is a remnant of the La L<sub>I</sub> XAFS that must be considered; the oscillation amplitude is about 0.3% of the Mn step height just before the pre-edge. However, the La $`K`$-edge XAFS show that there is a "beat" in the XAFS from about 8.4-10 Å<sup>-1</sup>, which for the La L<sub>I</sub> XAFS corresponds to the range of the Mn XANES. In this beat region the La L<sub>I</sub> XAFS is reduced by another factor of 4; thus the La oscillations underlying the Mn XANES region have an amplitude of about 0.08%, much smaller than the changes we investigate. In addition, this oscillation varies slowly with energy and would at most produce a slowly varying background. Consequently any remaining La L<sub>I</sub> XAFS is not a problem for the Mn XANES study.
## III Near edge results
### A Main edge
In Fig. 1 we show the Mn absorption $`K`$-edge for several concentrations of Ca, 33% Ba and Pb, La<sub>0.35</sub>Pr<sub>0.35</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>, a Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> sample that should have a nearly uniform mixture of ionic Mn<sup>+3</sup> and Mn<sup>+4</sup>, some O excess samples, and a La deficient sample.
For the Ca substituted samples several points are immediately obvious: (1) To first order the main absorption edges (ignoring pre-edge structures for now) have almost the same shape for each dopant concentration and shift nearly rigidly to higher energy as the concentration is increased, (2) the edges for the manganite samples are very sharp, roughly half as wide as the edge for the Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> sample, (3) there is no obvious kink or structure in the sharp edges for the substituted (La,Ca) manganite samples that would indicate a simple mixture of Mn<sup>+3</sup> and Mn<sup>+4</sup> ions, (4) however, there is a tiny shape change, visible in Fig. 1 for samples of different concentration, which shifts the position of the inflection point on the edge relative to the half height position.
The data for the La<sub>0.35</sub>Pr<sub>0.35</sub>Ca<sub>0.3</sub>MnO<sub>3</sub> sample look very similar to those for La<sub>0.7</sub>Ca<sub>0.3</sub>MnO<sub>3</sub>, indicating that replacing some of the La by Pr does not change the local electronic configuration on the Mn. The O excess and La deficient samples show a similar edge shape to LaMnO<sub>3</sub>, but the edge shift is considerably smaller than expected based on the Mn valence obtained from TGA. The shifts for the O excess data are inconsistent with data from other groups and are included here to show the sharpness of the edge. However, such data suggest that the position of the Mn $`K`$-edge is determined by several factors, and that using the Mn valence and O content obtained from TGA may not be sufficient.
In contrast to the Ca substituted materials, Ba and Pb substitution results in a significantly broader edge, more comparable to the edges of other Mn oxides and the Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> sample. There is relatively more weight in the lower part of the edge.
In addition to the shift of the inflection point position on the edge with Ca concentration, as noted above, the region of steepest slope is also quite broad. Consequently, using the position of the peak in the first derivative curve as a measure of the average edge position (as we and others have done previously) is only an approximate measure of the average edge shift. Using the derivative peak yields a roughly linear shift with concentration. Our data and those of Subías et al. have the same edge shift per valence unit, while the shift reported by Croft et al. is smaller. This may be the result of different O content in the samples.
To obtain a better estimate of the average edge shift with concentration (at room temperature), we have fit the LaMnO<sub>3</sub> edge data (or the CaMnO<sub>3</sub> data) to that for each of the other samples, over the main part of the edge (above the pre-edge structure). In this procedure it is important that when the absorption from other atoms is removed, the data base-line below the pre-edge structure be at zero. Each edge is also normalized using some feature of the data; for the data at different concentrations, we normalized over a range of energies well above the edge, where the XAFS oscillations are small. Similarly we fit the corresponding reference edges (Mn foil) to a reference scan to obtain a net overall edge shift. Several examples of these fits are shown in Fig. 2. Although there is a change in shape between LaMnO<sub>3</sub> and CaMnO<sub>3</sub>, the relative shifts determined with either end compound are nearly identical - less than 0.02 eV difference over the entire concentration range.
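As a concrete illustration of this fitting procedure, a minimal sketch is given below (not the original code); the fit window and starting values are assumptions, and both spectra are taken to be background-subtracted and step-normalized as described above:

```python
import numpy as np
from scipy.optimize import least_squares

def edge_shift(E_ref, mu_ref, E, mu, window=(6545.0, 6565.0)):
    """Fit the reference edge (e.g. LaMnO3), shifted by dE and scaled by a,
    to a sample edge over the main rise; returns (dE, a)."""
    sel = (E >= window[0]) & (E <= window[1])

    def resid(p):
        dE, a = p
        # if the edge shifts rigidly, sample(E) = a * reference(E - dE)
        return a * np.interp(E[sel] - dE, E_ref, mu_ref) - mu[sel]

    return least_squares(resid, x0=[0.0, 1.0]).x
```

The same routine applied to the Mn-foil reference scans gives the monochromator correction that is subtracted to obtain the net shift.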
In Fig. 3a we plot the relative shifts obtained from fits to LaMnO<sub>3</sub> at room temperature. The shift is roughly linear with concentration, with a net shift from 0 to 100% Ca of $`\sim 3`$ eV. This is considerably smaller than the value of 4.2 eV obtained from the derivative peak and illustrates the effect of the shift of the inflection point relative to the half height. However, over the straight part of the plot from x = 0.3-1.0, the slope is 3.3 eV/valence unit, quite close to the 3.5 eV/valence unit obtained by Ressler et al. for MnO, Mn<sub>2</sub>O<sub>3</sub>, and MnO<sub>2</sub>. The point at x = 0.12 is anomalous, but titration measurements give about the same Mn valence for the 12 and 21% samples, which agrees with the comparable edge shifts. The same data are re-plotted as a function of the titrated valence in Fig. 3b; in this case the variation with valence is smoother, but slightly non-linear. The different values of the titrated valence, compared to the values expected from the Ca concentration, may indicate that there are slight variations in O content in some samples.
We have also used a similar analysis to investigate any possible edge shift as a function of temperature, by fitting the entire edge of the 50K data for a given sample to all the higher temperature data files. Two examples are shown in Fig. 2c,d. This figure shows that changes in the shape of the main edge, above and below T<sub>c</sub>, are quite small (although measurable). The largest relative change is in the pre-edge peaks, to be discussed later. In Fig. 4 we show the shift of the edge position as a function of temperature up to 320K for several sets of samples. (We have similar data out to nearly 500K for the Ba sample and CaMnO<sub>3</sub>.) Variations in the fit values for the net shift, $`\mathrm{\Delta }`$E<sub>o</sub>, for several traces at the same temperature are less than $`\pm `$ 0.02 eV, and fluctuations about the small average shift with T are comparable for a given experimental run. Differences between experimental set-ups, or between Si(111) and Si(220) monochromators, are less than 0.1 eV. For T less than 300K, the net shift for each sample is very small (less than 0.04 eV), but nearly all samples appear to have a slight decrease at high T.
### B Pre-edge region
In Fig. 5 we plot the pre-edge region as a function of temperature on an expanded scale for CaMnO<sub>3</sub>, a CMR sample with 21% Ca, and the 33% Ba sample. Data for LaMnO<sub>3</sub>, the CO sample with 65% Ca, and another CMR sample have recently been published in a short paper. For these systems the main features are the lower three peaks labeled A<sub>1</sub>-A<sub>3</sub> (near 6539, 6541, and 6544 eV) and the B peak. The lower two peaks A<sub>1</sub> and A<sub>2</sub> are common to all materials, although they are not resolved for the 33% Ba data collected using Si(111) crystals, which have lower energy resolution. The comparison of the two Ba data sets in this figure illustrates the importance of using high energy resolution. The A<sub>3</sub> peak is not obviously present in most samples. In Fig. 6 we compare the data for the 30% Ca CMR sample with the Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> sample and also show the pre-edge for the Pb sample, all on a more expanded scale.
There are several features to note: all the pre-edge features start at very nearly the same position regardless of doping, and the amplitude of the pre-edge features labeled A increases with average Mn valence (Ca concentration), as observed in other Mn compounds and in a previous manganite study. There are, however, small shifts of these features with Ca concentration, as shown in Fig. 7. The A<sub>1</sub> peak energy increases slightly from LaMnO<sub>3</sub> to CaMnO<sub>3</sub>, and the A<sub>1</sub>-A<sub>2</sub> splitting decreases from 2.2 to 1.8 eV. (The exception is the 65% sample, but here the A<sub>i</sub> peaks are poorly resolved.) For the substituted samples, the leading edge of the A<sub>1</sub> peak remains steep for all concentrations except the 65% sample. Consequently, the pre-edge for the intermediate concentrations (CMR samples) cannot be modeled as a simple weighted sum of the end compounds LaMnO<sub>3</sub> and CaMnO<sub>3</sub>. Note that the leading edge for the Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> sample (see Fig. 6) is broader, consistent with a mixture of Mn<sup>+3</sup> and Mn<sup>+4</sup> ions, and also has a significant A<sub>3</sub> peak. The latter is not present in the data for 30% Ca.
The Ba and Pb pre-edges are slightly different; the Ba pre-edge features are not as well resolved even for the higher energy resolution data while the A<sub>1</sub> peak is largest for the Pb sample (Compare Fig. 6 with Fig. 5).
The most striking feature in Figs. 5 and 6 is the variation in the intensity of the pre-edge peaks and the shift of A<sub>2</sub> as T increases through T<sub>c</sub> for the CMR samples. In contrast, the change for LaMnO<sub>3</sub> is small up to 300K. For the 21% Ca sample in Fig. 5 the A<sub>1</sub> peak decreases in amplitude while the A<sub>2</sub> and B peaks increase with increasing T; the A<sub>2</sub> peak is sharpest at 300K and clearly shifts downward below T<sub>c</sub> (0.4 - 0.5 eV, depending on the background function used). See the solid triangles in Fig. 7. The change in the A peaks for the 33% Ba sample (using the high resolution monochromator) appears to follow the same trend as observed for the Ca data (Fig. 6), but the A<sub>2</sub> peak is not as well resolved.
The largest temperature dependence is observed for the CaMnO<sub>3</sub> sample above 300K (see Fig. 5a), with the largest increase occurring for the B peak. Also, the amplitude of the peak at the top of the edge, commonly called the "white line" (see Fig. 1c at 6554 eV, for example), decreases slightly at high T. These effects become much larger at only slightly higher temperatures and will be treated in a separate paper. For the CMR samples, we associate the temperature dependent changes in the amplitude of the pre-edge features with changes in charge localization/hybridization.
### C Difference Spectra
More detailed information can be obtained by examining the change in the shape of the XANES region as a function of temperature. The files are first shifted to correct for any small changes in the energy of the monochromator and all spectra are carefully normalized as discussed earlier. The difference spectra are obtained by subtracting the data at 300K from all the data files (at different temperatures) for a given sample. This approach was used originally to investigate the pre-edge region for the 21% sample, but considerable structure was found at energies corresponding to the main edge, for both the CMR and CO (x=0.65) samples. Several examples of these difference spectra are shown in Fig. 8 for LaMnO<sub>3</sub>, CaMnO<sub>3</sub>, and the 21, 30, and 65 % Ca substituted samples.
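Schematically, the construction of the difference spectra amounts to the following sketch (illustrative only; the dictionary layout and the sign convention for the monochromator correction are assumptions):

```python
import numpy as np

def difference_spectra(E, scans, e_shifts, T_ref=300):
    """scans: {T: mu sampled on the common grid E}, normalized as in the text;
    e_shifts: {T: energy correction in eV from the reference-foil fits}.
    Returns {T: mu_T - mu_Tref}."""
    def aligned(T):
        # placing mu on the grid E + shift moves the spectrum up by 'shift'
        return np.interp(E, E + e_shifts[T], scans[T])

    ref = aligned(T_ref)
    return {T: aligned(T) - ref for T in scans if T != T_ref}
```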
In the pre-edge region, the temperature variation of the A<sub>1</sub> and A<sub>2</sub> peaks for the 21 and 30% Ca (CMR) samples is very clearly visible in Fig. 8c,d; it begins at T<sub>c</sub>, with most of the change occurring over a 60K range just below T<sub>c</sub>. The temperature-dependent changes of the pre-edge are comparable in both samples, with the magnitude of the change of the A<sub>2</sub> peak being roughly 50-70% that of the A<sub>1</sub> peak. For the 65% Ca sample, changes of the A<sub>i</sub> with T are also observed in the difference spectra, but the amplitudes are considerably smaller, and interestingly, the phase is inverted - the A<sub>1</sub> difference peak decreases instead of increasing. For the LaMnO<sub>3</sub> sample (Fig. 8a) there is essentially no structure in the difference spectra over the pre-edge energy range, but surprisingly there are small peaks in this range for CaMnO<sub>3</sub> (see lower part of Fig. 8a), with the largest peak in the difference spectra occurring between A<sub>2</sub> and A<sub>3</sub> - this suggests that there are in reality more than three pre-edge peaks.
There is also well defined structure in the difference spectra over the energy range of the main edge, although it is only a few percent of the edge in amplitude. For LaMnO<sub>3</sub> there is a broad feature over most of the edge region which increases as T is lowered. CaMnO<sub>3</sub> has a similar feature but it is larger and narrower (Fig. 8a). Both appear to correspond to the temperature dependent peak near the top of the edge (the "white line" mentioned earlier) which is sharpest at low T. For the CMR samples, there is additional structure on top of this broad peak - a dip at 6551 and a peak at 6553 eV (2 eV apart). Another dip/peak occurs just above the edge at 6555-6556.5 eV. The CO sample also shows structure over this energy range, but again the phase is inverted relative to the CMR samples (i.e. a peak/dip at 6552 and 6554 eV) - this phase inversion thus extends over the entire near-edge region.
## IV Discussion
### A Main edge
For Mn atoms the main $`K`$-absorption edge represents transitions mainly from the atomic 1s state to the empty Mn 4p band. The XANES results show that this edge is very sharp for the Ca-substituted samples, the O-excess samples and the La deficient sample. The width of the edge (roughly 5-6 eV) is narrower than the edge for most other Mn compounds, and the shift in edge position is $`\sim 3`$ eV for a valence change of +1. No obvious indication of a step or double edge structure is present that would indicate two distinct valence states. If completely localized Mn<sup>+3</sup> and Mn<sup>+4</sup> ions were present on time scales of 10<sup>-14</sup> sec, the edge should have a smaller average slope and generally be broader, as would be expected for a fine-powder mixture of LaMnO<sub>3</sub> and CaMnO<sub>3</sub>. To model this explicitly, we compare in Fig. 9 the experimental edge for the 65% Ca (CO) sample and a weighted sum of the +3 and +4 end compounds; clearly the experimental edge is much sharper, as noted previously.
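The comparison in Fig. 9 is easy to reproduce schematically; the sketch below (an illustration, with hypothetical array names) builds the ionic two-valence model edge and a simple 10%-90% width measure of the sharpness:

```python
import numpy as np

def ionic_mixture_edge(mu_p3, mu_p4, x):
    """Edge expected for fully localized ions at Ca fraction x: a weighted sum
    of the end-compound edges (LaMnO3 ~ Mn+3, CaMnO3 ~ Mn+4, same energy grid)."""
    return (1.0 - x) * mu_p3 + x * mu_p4

def edge_width(E, mu, lo=0.1, hi=0.9):
    """10%-90% width of a step-normalized edge; assumes mu rises monotonically
    through the restricted main-edge region passed in."""
    return np.interp(hi, mu, E) - np.interp(lo, mu, E)
```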
In contrast to the Ca-doped samples, the edge for Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> is much broader (Fig. 1a), with a width of 10-12 eV. There is also a change of slope of the main edge for this sample that is consistent with two valence states, but the shape is more complicated. Note that a combination of two edges, each $`\sim 5`$ eV wide and separated by $`\sim 3`$ eV (the separation for a valence change of 1), would yield an edge of width $`\sim 13`$ eV. Thus the width and structure of the Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub> edge are both consistent with the expectation that two valence states are present in this sample - Mn<sup>+3</sup> and Mn<sup>+4</sup>. Similarly, our data for Mn<sub>3</sub>O<sub>4</sub>, which has a mixture of +2 and +3 valence states, show a very broad edge of 13-14 eV (not shown).
The edges for the Ba and Pb samples are also broader; they have a break in slope and more amplitude in the lower part of the edge that might suggest two valence states. In this regard they are quite different from the Ca substituted samples, for which the shape of the main edge does not change much from that of LaMnO<sub>3</sub>. In addition, the net shifts of the edges for the 33% Ba and Pb samples are smaller than expected; the additional structure near 6546-48 eV shifts the lower part of the edge down in energy while the top of the edge is close to the position for the 30% Ca samples. The net result is a very small overall average edge shift compared to LaMnO<sub>3</sub>.
We have also shown, by fitting over most of the edge, that there is no significant change in the average edge position for any of the substituted manganite samples near T<sub>c</sub>. This agrees with our earlier result in which we averaged data points above and below T<sub>c</sub>. The new analysis also indicates that there is consistently a slight decrease in edge position at the highest temperatures, which is largest for CaMnO<sub>3</sub> and Sr<sub>3</sub>Mn<sub>2</sub>O<sub>6.55</sub>. The reason for this downward shift is not yet clear but may be related to the temperature dependence of the B peak (since an increase in the B-peak intensity effectively shifts the lower section of the edge to lower energy).
However, the lack of any temperature dependence below T<sub>c</sub> disagrees with the earlier work of Subías et al., who report a 0.1 eV decrease in edge position up to T<sub>c</sub> for La<sub>0.6</sub>Y<sub>0.07</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> and then a 0.09 eV increase up to 210K. The small but important change in the shape of the main edge in Fig. 8 provides a partial explanation for this discrepancy. Subías et al. assumed no change in edge shape and calculated difference spectra for each temperature. Under this assumption, the amplitude of the peak in the difference spectra would be proportional to the energy shift of the edge. The additional structure observed in the difference spectra indicates there is a shape change rather than an overall edge shift.
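To make the logic of this argument explicit (a one-line expansion, not taken from the earlier work): for a rigid shift $`\delta E(T)`$ of an otherwise fixed edge shape, the difference spectrum is, to first order,

$$\mu (E,T)-\mu (E,T_{ref})\approx -\delta E(T)\frac{d\mu }{dE},$$

i.e. it has the fixed shape of the edge derivative with an amplitude proportional to $`\delta E(T)`$. The dip/peak structure observed here cannot be written in this form, which is why we attribute it to a shape change rather than a shift.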
The difference spectra (Fig. 8) also show clearly that the additional structure is tied to T<sub>c</sub>. For the CMR samples, the dip-peak structure, superimposed on the peak observed for the end compounds, begins to be observable near T<sub>c</sub> and grows rapidly in the 60-100K range just below T<sub>c</sub>. This structure means that, compared to the edge at low temperatures (undistorted Mn-O bonds), the edge above T<sub>c</sub> (distorted Mn-O bonds) has its upper part shifted upward in energy while the lower part is shifted downwards. The separation between the dip and the peak is about 2 eV (see vertical dotted lines).
This raises several questions - for the CMR samples, is there a mixture of +3 and +4 sites as usually assumed? If so, why is the edge structure so small? Can the small structure observed in the difference spectra be explained in some other way? One aspect that must be included is the very large width of the Mn 4p density of states (DOS), roughly 15 eV, found in two quite different recent calculations. The main edge is due to transitions into this band, and the calculated absorption edge (broadened by the core-hole lifetime) is very similar to that observed experimentally. The broad width means that the 4p states are extended and not localized on one Mn atom. Consequently, the $`K`$-edge will correspond to a Mn valence partially averaged over several Mn atoms, and thus will be less sensitive to variations in local charge on different Mn sites.
Another possibility is that the system is more covalent and that there are some partial holes in the O 2p band, which is hybridized with the Mn 3d states. This is supported by several calculations and by the observation of holes in the O 2p band in absorption studies. Such holes may play an important role in the unusual transport of these materials. In calculations, Anisimov et al. and Mizokawa and Fujimori obtain two types of Mn e<sub>g</sub> configurations with almost identical local charge densities. In both calculations there are distortions of the Mn-O bond distances. For the calculation of Anisimov et al., one configuration is symmetric in the $`ab`$ plane with four small equal lobes directed towards O, while the other has two large (and two small) lobes, again directed towards O atoms in the $`ab`$ plane. The more symmetric case is associated with a formal Mn<sup>+4</sup> site and the other state with Mn<sup>+3</sup>, but because the charge densities are comparable, the two configurations would not lead to significantly different edge shifts. The recent calculations of Elfimov et al. are also relevant. To fit the observed splittings of the A<sub>i</sub> peaks (2.2 eV for LaMnO<sub>3</sub> and 1.8 eV for CaMnO<sub>3</sub>), U and J<sub>H</sub> had to be lowered from the values in the first calculation - to 4 eV and 0.7 eV, respectively - which implies higher covalency.
The remaining question to be answered about the main edge for the CMR samples is: what explains the small structure in the difference spectra as T is lowered below T<sub>c</sub>? A possible answer is again found in the calculations of Elfimov et al. They find that the position of the 4p partial DOS is bond-length dependent - it occurs at a lower energy when the Mn-O bond lengthens (p<sub>x</sub> orbitals in their paper) and at a higher energy for shorter bond lengths (p<sub>y</sub> and p<sub>z</sub>). Such a shift is expected; in polarized XAFS experiments on high T<sub>c</sub> materials we have observed edge shifts between the c- and a-axes. In addition, studies of molecules show that the edge shifts to higher energy when the bond length shortens. The separation between the partial DOS for p<sub>x</sub> and p<sub>y</sub> is about 2 eV in Elfimov et al.'s calculation when they use distortions similar to those observed in LaMnO<sub>3</sub>; we expect to see some evidence of this splitting in the experimental absorption edge, although it is lifetime broadened and the 4p states are extended. We propose that the tiny dip-peak structure observed for the CMR materials is the result of the different positions of the partial DOS for p<sub>x</sub> and p<sub>y</sub>. The dip-peak splitting is also about 2 eV, but it is not clear whether this is significant or a coincidence.
For the CO sample there is also structure in the edge, but the phase is inverted. If the above explanation for the dip-peak structure in the difference spectra for CMR samples is correct, then it suggests that the peak-dip feature for the CO sample is also produced by local distortions - but in this case by a local distortion that starts at the charge ordering temperature, T<sub>CO</sub>=270K, and increases as T is lowered. Such a model then provides a simple interpretation for the unusual lack of temperature dependence (reported but not explained) of $`\sigma ^2`$ for this sample. The surprise is that at least the thermal phonon broadening should have caused some increase in $`\sigma ^2`$ with T. However, if there is a distortion associated with the CO state, then there must be an associated broadening contribution to the Mn-O pair distribution function, $`\sigma _{CO}^2`$, that is zero above T<sub>CO</sub> and increases as T is lowered below T<sub>CO</sub>. Then the total variance for the Mn-O bond, $`\sigma _{MnO}^2`$, will be given by
$$\sigma _{MnO}^2(T)=\sigma _{phonon}^2(T)+\sigma _{CO}^2(T)+\sigma _{static}^2$$
(1)
where $`\sigma _{phonon}^2(T)`$ is the phonon contribution, and $`\sigma _{static}^2`$ is a static (temperature-independent) contribution from disorder. $`\sigma _{phonon}^2(T)`$ should be comparable to that for CaMnO<sub>3</sub>, since we see the same phonon component for both pure CaMnO<sub>3</sub> and La<sub>0.79</sub>Ca<sub>0.21</sub>MnO<sub>3</sub> above T<sub>c</sub>. To make $`\sigma _{MnO}^2`$ nearly independent of T, the temperature dependences of $`\sigma _{CO}^2(T)`$ and $`\sigma _{phonon}^2(T)`$ must almost cancel for temperatures below 300K.
We can extract the CO contribution following the method in Ref. for calculating $`\mathrm{\Delta }\sigma ^2`$ - we fit the two highest T data points (from Ref. ) to $`\sigma _{phonon}^2(T)`$ + $`\sigma _{static}^2`$ and then subtract these contributions from the data. In Fig. 10 we plot the result of this analysis for the 65% Ca sample. From this figure, the maximum value of $`\sigma _{CO}^2(T)`$ is roughly 10% of that associated with polaron formation for the CMR samples. Such an increased distortion for charge ordered material makes sense - as the sample becomes charge or orbital ordered, there is more room for the longer Mn-O bonds to lengthen, while for a random arrangement of orbitals, the series of long and short Mn-O bonds is more constrained.
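The subtraction is simple enough to sketch; the following illustration (not the original analysis) assumes an Einstein form for $`\sigma _{phonon}^2(T)`$ with an assumed, illustrative Einstein temperature, and uses the two highest-T points to fix the amplitude and the static offset exactly:

```python
import numpy as np

def sigma2_phonon(T, A, theta_E):
    # Einstein model: sigma^2_phonon(T) = A * coth(theta_E / 2T)
    return A / np.tanh(theta_E / (2.0 * np.asarray(T, dtype=float)))

def sigma2_co(T, s2, theta_E=900.0):
    """T (K, sorted ascending) and s2 (A^2): measured sigma^2_MnO(T).
    theta_E is an assumed value, not taken from Ref."""
    c1 = 1.0 / np.tanh(theta_E / (2.0 * T[-2]))
    c2 = 1.0 / np.tanh(theta_E / (2.0 * T[-1]))
    A = (s2[-1] - s2[-2]) / (c2 - c1)        # two-point solution for the amplitude
    s2_static = s2[-1] - A * c2              # ... and for the static term
    return s2 - sigma2_phonon(T, A, theta_E) - s2_static
```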
### B Pre-edge region
#### 1 Background
The pre-edge results provide additional information about the nature of the electronic states. For many of the transition elements, 1-3 pre-edge peaks, A<sub>i</sub>, occur well below the main edge ($`\sim 15`$ eV below) and are assigned to transitions to empty states with d-like character, i.e. these are 1s to 3d<sup>(n+1)</sup> transitions, where n is the initial number of d electrons and n+1 includes the excited electron in the final state, which usually includes the effect of a core hole. The 1s-3d transitions are directly allowed through the very weak quadrupole transition, or allowed via an admixture of 3d and 4p states. In the pre-edge region there may also be some hybridization with the O 2p states. If the metal site is centrosymmetric, there is no mixing of 3d and 4p states on the excited atom and 1s-3d dipole transitions are strictly forbidden; however, local distortions can make such 1s-3d transitions very weakly dipole allowed. Three aspects need to be recognized in considering the Mn pre-edge in the present study.
* Quadrupole interactions
Although the quadrupole interaction is weak, it has a clear signature through the angular dependence of the absorption process. Based on recent studies on oriented single crystals, we estimate that quadrupole-allowed peaks will be at most $`\sim 1`$% of the edge height in powder samples, which is considerably smaller than the A<sub>i</sub> peaks observed for the substituted manganites, but perhaps not negligible. A small quadrupole component, as seen for example in FeO, may well be present.
* Dipole allowed via 3d-4p mixing on the absorbing atom
If the Mn site lacks inversion symmetry, then in principle there will be mixing of the Mn 3d and 4p states on the central atom. Consider a system that is nearly cubic but has a small distortion that removes the inversion symmetry - i.e. the metal atom is slightly displaced such that the bonds on opposite sides of the metal atom are slightly different. Then the mixing parameter is $`\sim \delta _l/r_o`$ and the matrix element will be proportional to $`(\delta _l/r_o)^2`$, where $`\delta _l`$ is the difference in opposite bond lengths and $`r_o`$ is the average bond length. An example is the V site in V<sub>2</sub>O<sub>5</sub>; here the VO<sub>6</sub> octahedron is strongly distorted, and along the c-axis the two V-O bond lengths are 1.577 and 2.791 Å, respectively. Experimentally there is a large pre-edge peak for the V $`K`$-edge that can be modeled assuming 3d-4p mixing on the absorbing atom plus the effect of a core hole.
* Dipole allowed via 4p mixing with neighboring metal atom 3d states
In several cases, a significant pre-edge peak is observed in a cubic crystal, which cannot be explained by the above 3d-4p mixing on the excited atom. Multiple-scattering calculations for such systems often show that large clusters are needed before the pre-edge features are produced - scattering paths are needed which include many further neighbors, particularly the second neighbor metal atoms. An equivalent result emerges from band theory calculations, where the hybridization of extended states is important. Here the dipole transition can be made allowed via mixing of the 4p state on the central atom with the 3d states on neighboring atoms. Projections of the density of states with p character (p-DOS) for such systems show small features at the energies of the 3d states; such features are not observed in the p-DOS when the pre-edge feature is a quadrupole transition. In the limit of multiple-scattering calculations with very large clusters, the two approaches (band theory and multiple scattering) should be equivalent.
To have a mixing of 3d with the 4p states (to make a state of p character), one needs a combination of 3d states that has odd parity, as pointed out by Elfimov et al. It is easy to obtain such a state if a linear combination of 3d states on two neighboring Mn atoms is used and the p states are extended enough to partially overlap them. Specifically, consider 3 Mn atoms in a line - a central excited atom (0) and left (L) and right (R) atoms - with $`\mathrm{\Psi }_{4p}(0)`$ being the 4p state on the central atom, and $`\mathrm{\Psi }_{3d;x^2-y^2}(R_R)`$ and $`\mathrm{\Psi }_{3d;x^2-y^2}(R_L)`$ being the $`3d_{x^2-y^2}`$ states centered on the right and left atoms. Then a state with odd symmetry about the central atom is given by
$$\mathrm{\Psi }_{total}=\alpha \mathrm{\Psi }_{4p}(0)+\frac{\beta }{\sqrt{2}}\left(\mathrm{\Psi }_{3d;x^2-y^2}(R_L)-\mathrm{\Psi }_{3d;x^2-y^2}(R_R)\right)$$
(2)
where $`\alpha `$ is essentially 1.0 and we ignore the intervening O atom via which the hybridization occurs. The small parameter, $`\beta `$, is a measure of the hybridization and is strongly dependent on the overlap of the 4p and 3d wavefunctions on different Mn atoms, and hence on the distance between them.
#### 2 Application to the substituted manganites
The pre-edge for the substituted manganites follows the general trends observed for other Mn systems quite well. Three A peaks are observed for CaMnO<sub>3</sub>; A<sub>2</sub> is larger than the A<sub>1</sub> peak, and the A<sub>2</sub>-A<sub>1</sub> splitting is smaller (high valence, +4) than for other samples. The LaMnO<sub>3</sub> case is similar; the A<sub>2</sub>-A<sub>1</sub> splitting is largest (lower valence, +3) and the overall A peak amplitude is smallest. However, the A<sub>2</sub> peak is larger than expected from the literature for Mn<sup>+3</sup> states in other compounds, possibly because of increased local distortions in this compound.
However, there are difficulties with some of the earlier interpretations in which the dipole allowed transitions are assumed to originate from a 3d-4p mixing on the excited atom. First, the A<sub>i</sub> peaks appear for both distorted and undistorted systems. Second, the amplitude (particularly for the relatively undistorted system CaMnO<sub>3</sub>) is too large to be a 1s-3d transition made allowed by a slight breaking of inversion symmetry about the excited Mn atom. Recently, based on the calculations of Elfimov et al., we have interpreted A<sub>1</sub> and A<sub>2</sub> as dipole allowed via a mixing of Mn 4p states with Mn 3d states on neighboring metal atoms. The projected p-DOS in the calculations of Elfimov et al. shows two features in the pre-edge region, which indicates that dipole-allowed transitions should be present. In addition, the broad Mn 4p band obtained in that work and by Benfatto et al. also implies that the 4p states are indeed extended - a necessary requirement for mixing with the 3d states on the neighboring metal atoms. Similar interpretations have been given recently for other transition metal systems that are cubic or very nearly so: Fe in FeO and Ti in rutile. A mixing with the 3d states on neighboring Ti atoms was also reported in the layered disulfide TiS<sub>2</sub>.
The calculations of Elfimov et al. also show that there is a splitting of the unfilled 3d bands - the lowest is the majority e<sub>g</sub> band (which may be partially filled via doping); the next two are the minority e<sub>g</sub> and t<sub>2g</sub> bands, which partially overlap. The coupling with the t<sub>2g</sub> is expected to be smaller since these orbitals are of the form d<sub>xy</sub>, which has reduced overlap with the Mn 4p in a $`\pi `$ bonding configuration. The splitting of these e<sub>g</sub> bands depends both on J<sub>H</sub> and on the degree of covalency/hybridization. As reported recently, adjusting the parameters in this calculation so that the theoretical splitting is close to the 2 eV observed experimentally resulted in U=4 eV and J<sub>H</sub>=0.7 eV. These lower values also suggest an increase in covalency and hence that the charge is shared between Mn and O. Consequently, there is a non-zero density of holes in the O bands, in agreement with Ju et al., and these O holes may play an important role in the unusual transport of these materials. For the CMR samples, the additional decrease in the A<sub>1</sub>-A<sub>2</sub> splitting for T$`<`$T<sub>c</sub> may suggest a further increase in covalency.
Finally the temperature dependence of the A<sub>i</sub> peak amplitudes is still not explained. Comparing the pre-edges of distorted LaMnO<sub>3</sub> with almost undistorted CaMnO<sub>3</sub> (See Fig. 5) would suggest that as the CMR samples change from distorted above T<sub>c</sub> to ordered at low T, the A<sub>2</sub> peak would increase relative to A<sub>1</sub>. Experimentally the reverse is true. However, we still suggest that the observed temperature dependence arises from the change in local structure, based on the fact that the changes for the CMR and CO samples are out of phase for both the pre-edge features and the structure in the main edge.
#### 3 Other Models
Another general feature that emerges from our data is that although the main changes occur just below T<sub>c</sub>, there is also a gradual change towards the fully ordered state as the sample is cooled well below T<sub>c</sub>, and the local structure continues to change down to 50K and below. Consequently there may be clusters formed at T<sub>c</sub> that grow as T is lowered. We have interpreted our local distortion results earlier in terms of a two component model. Within that model, one of these components (fluids) would correspond to delocalized states - these could be either delocalized holes or delocalized electrons. We also point out that the decreasing distortions observed in EXAFS as T is decreased below T<sub>c</sub> and the corresponding increase in resistivity suggest a changing average mobility of the charge carriers. Within the model we have suggested, the fraction of delocalized carriers would increase as T is lowered. However, one of these components might also correspond to the Mn atoms in a cluster, the positions of which are dominated by small variations in dopant concentration or O vacancies, possibly leading to a regime with phase separation. Such inhomogeneities likely play an important role in these materials. In addition, Jaime et al. have successfully modeled their resistivity and thermoelectric measurements using a two component system of localized and itinerant carriers. The recent calculations using the Kondo model also stress phase separation, but it is not clear how to compare with their results.
## V Conclusions
We have addressed several issues related to the Mn valence in the substituted LaMnO<sub>3</sub> materials. Although discussions of these systems often assume isolated Mn<sup>+3</sup> and Mn<sup>+4</sup> states, we observe no change ($`<`$ 0.02 eV) in the average edge position through the ferromagnetic transition for the CMR systems (Ca, Ba or Pb doped), and in all cases the total edge shift from 0 to 300K is $`<`$ 0.04 eV. Although there is no obvious step or kink in the edge of the kind expected for two well-defined valence states, there is a very small shape change that can be observed by taking the difference of data files at different temperatures. A dip/peak structure develops as T drops below T<sub>c</sub> for the CMR samples; the dip/peak separation is $`\sim 2`$ eV and is consistent with the splitting calculated for the p<sub>x</sub> and p<sub>y</sub> partial DOS when the manganite structure changes from an undistorted to a distorted (LaMnO<sub>3</sub>) lattice. For CMR samples, such changes in the local distortions below T<sub>c</sub> were deduced earlier from EXAFS data. At low T the CMR samples are very well ordered, but as T increases there is a rapid increase in the local distortions up to T=T<sub>c</sub>; above T<sub>c</sub> the disorder changes slowly. The rapid change just below T<sub>c</sub> has been associated with the formation of polarons. These distortions, now observed in both the XANES and EXAFS data, indicate some change in the local charge distribution. However, the small size of the effect in the XANES spectra needs to be understood. In part it can be attributed to the extended nature of the broad Mn 4p band, which tends to make the 1s-4p edge transition an average over several Mn atoms. However, a change in covalency - specifically a transfer of charge between Mn 3d and O 2p states - might also be associated with this structural change, but produce little change in the edge. Support for this possibility is obtained from the pre-edge results, summarized below.
For the CO sample we observe a similar behavior, but in this case the structure in the difference spectra is inverted relative to that for the CMR samples. This indicates that the local distortions increase in the CO state below T<sub>CO</sub>.
The pre-edge structure provides additional information about the 3d-bands in these materials. Two or three peaks are observed, labeled A<sub>1</sub>-A<sub>3</sub>. A<sub>2</sub> at 300K is essentially independent of concentration while A<sub>1</sub> increases slowly with x; A<sub>3</sub> is only observed for high Ca concentrations. Following the work of Elfimov et al. we attribute these peaks to a hybridization of Mn 4p on the excited atom with an ungerade combination of 3d states on neighboring Mn atoms, i.e. they are not the result of splittings of atomic multiplets on the excited atom as is often assumed. Similar explanations for the pre-edge region have recently been proposed for several other transition metal $`K`$-edges. Consequently the splittings observed are essentially unaffected by the presence of the core hole and should be a good measure of the splittings of the e<sub>g</sub> bands which are influenced by the hybridization of the Mn 3d and O 2p states. This interpretation of the pre-edge does not depend on small distortions of the crystal and therefore also provides a simple explanation for the large pre-edge features observed in the more ordered CaMnO<sub>3</sub> material.
In the calculations of Elfimov et al., the two lowest empty bands are the majority and minority spin e<sub>g</sub> bands; the minority spin t<sub>2g</sub> band overlaps the latter but is expected to be more weakly coupled. U and J<sub>H</sub> must be reduced slightly to fit the experimental splitting (2 eV) of the A<sub>1</sub> and A<sub>2</sub> peaks: U=4eV and J<sub>H</sub>=0.7eV. This indicates an increase in the covalency. The additional small decrease in the A<sub>1</sub>-A<sub>2</sub> splitting below T<sub>c</sub> may suggest a further change in covalency or hybridization.
Thus the picture that emerges is that there is considerable hybridization of the energy states (Mn 4p and 3d, and O 2p), with some hole density in the O bands and possibly only small differences in the charge localized on Mn atoms which have different types of e<sub>g</sub> orbitals. The possibility of distinct types of orbitals can lead to orbital ordering, with displacements of the O atoms forming J-T-like Mn-O bond distortions when the hopping charge is localized for times of order the optical phonon periods. As a result, the possibility that part of the transport takes place via hole density in the O bands needs to be considered. Note that slowly hopping holes on the O sites would lead to distorted Mn-O bonds while rapid hopping (faster than phonons) would leave the O atom at an average undistorted position.
###### Acknowledgements.
The authors wish to thank G. Brown, C. Brouder, D. Dessau, T. Geballe, J. Rehr, G. Sawatzky, and T. Tyson for useful discussions and comments. FB thanks K. Terakura for sending some of their unpublished results. The experiments were performed at the Stanford Synchrotron Radiation Laboratory, which is operated by the U.S. Department of Energy, Division of Chemical Sciences, and by the NIH, Biomedical Resource Technology Program, Division of Research Resources. Some experiments were carried out on UC/National Laboratories PRT beam time. The work is supported in part by NSF grant DMR-97-05117. |
no-problem/0002/hep-ex0002002.html | ar5iv | text | # 1 Introduction
## 1 Introduction
The internal target of the HERA-B experiment consists of eight ribbons positioned around the proton beam of the HERA storage ring at a distance of 3-6 rms beam widths. Protons drifting away from the beam core interact with these targets and are expected to produce, among other hadrons, B-mesons. Since 1992 we have studied the problems of basic importance for the experiment, namely the achievable interaction rate and its fluctuations, the target efficiency, the spatial distribution of the interaction vertices, and the interference with the beams, as well as the background induced by the wire target at the location of other experiments running in parallel to HERA-B. It turns out that the design goals of a target efficiency of about 50% and an interaction rate of about 40 MHz can be achieved.
The HERA-B detector with its readout and different trigger levels relies on the close correlation between the time of an interaction and the bunch crossing signal of the proton beam. Therefore this correlation was studied in some detail. Besides the interactions due to the bunched protons, an unexpectedly high background from bunch-uncorrelated interactions was observed, which showed a strong asymmetry in the transverse plane. In this paper we summarize these observations, quantify them and discuss possible sources of this background.
## 2 Coasting Beam
A schematic view of the target is shown in fig.1. Protons in the halo or close to the beam core interact with the target wires, which are positioned inside/outside ("inner" and "outer" wire) and up/down ("upper" and "lower" wire) with respect to the center of the storage ring and the beam, respectively. A sketch of the bunch structure of the HERA proton beam is shown in fig.2. The bunches are organized in $`3\times 6`$ trains, each consisting of 10 bunches of $`1ns`$ length. The bunch spacing within a train corresponds to $`96ns`$. The trains are separated by an empty bunch. 6 trains correspond to one fill of the PETRA preaccelerator; they are separated from the consecutive 6 trains by a gap of $`480ns`$. The last 15 buckets are empty to enable a safe beam dump (kicker gap). A complete revolution of a fill corresponds to $`220\times 96ns=21.12\mu s`$.
A FADC system is used to record separately the contribution of each single bunch to the interactions produced at the target, measured with scintillator hodoscopes. The FADC samples at four times the bunch crossing rate, i.e. every $`24ns`$ a signal is recorded. Per readout cycle of the FADC system 880 bytes are recorded, allowing the interactions to be studied in a time slice of 21.12 $`\mu s`$, which corresponds to the time needed by a proton to cycle once around the HERA ring. A readout rate of 150 Hz is achieved. About 5000 successive measurements are summed to accumulate reasonable statistics; hence the FADC rates presented in this paper are averaged over a time of $`\sim 30s`$. As demonstrated by fig.3a, the target-beam halo interactions indeed show the bunch structure of the proton beam if the protons interact with an inner target wire. The time structure for the outer target wires (fig.3b,c) differs qualitatively from the inner one, though the data were collected consecutively within a short time interval. While the inner wire provides a clear bunched signal, for the outer wire a continuous background is observed in addition to the bunch correlated events (fig.3b). Even in the regions of the empty RF buckets and of the kicker gap a strong signal shows up if the outer wire is positioned in the beam halo. If the outer target is hit by protons at a distance of $`5\sigma `$ from the beam center, only the $`dc`$ component of the beam contributes (fig.3c). We have convinced ourselves that the observed interactions from the continuous component have the same signature as the bunch correlated events: exploiting the HERA-B Si-vertex detector it can be shown that both the bunched and the $`dc`$ component of the proton beam interact with the wire; moreover, the relative rates of different detector components are equal for the two types of events.
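The timing numbers quoted above are mutually consistent, as the following short check shows (all values copied from the text):

```python
BUCKET_NS = 96            # bunch spacing within a train
N_BUCKETS = 220           # RF buckets per revolution
FADC_SAMPLE_NS = 24       # four FADC samples per bucket

turn_ns = N_BUCKETS * BUCKET_NS                # 21120 ns = 21.12 us per turn
samples_per_turn = turn_ns // FADC_SAMPLE_NS   # 880 samples -> 880 bytes per cycle
averaging_time_s = 5000 / 150.0                # ~33 s for 5000 cycles at 150 Hz

print(turn_ns, samples_per_turn, round(averaging_time_s))   # 21120 880 33
```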
The difference in the time distributions for protons interacting with the inner and outer wire, respectively, hints at a $`dc`$ component of the machine current with an energy smaller than that of the synchronous protons $`\left(\mathrm{\Delta }E/E<0\right)`$: at the position of the target the horizontal dispersion $`D_x`$ of the HERA proton ring is negative, and therefore the offset $`\mathrm{\Delta }x`$ of the beam due to its energy deviation $`\frac{\mathrm{\Delta }E}{E}`$ is positive, $`\mathrm{\Delta }x=D_x\frac{\mathrm{\Delta }E}{E}>0`$. The ratio of the integrated continuous background to the total interaction rate is smallest for the inner wire, followed by the lower and upper wires, and is largest for the outer wire (fig.4).
These observations are explained most easily by discussing the particle trajectories in longitudinal phase space (fig.5; note that the HERA proton ring is operated with cavities excited at two different RF frequencies, 52 MHz and 208 MHz, and that the figure is a simplified sketch for the case of a single frequency), where $`\mathrm{\Psi }`$ describes the longitudinal phase of a particle in a bunch and $`\dot{\mathrm{\Psi }}\propto \frac{\delta p}{p}`$ the longitudinal momentum deviation of a particle with respect to the centroid. The boundary (separatrix) between the regions of stable and unstable motion is characterized by the invariant
$$I\equiv \left(\frac{\delta p}{p}\right)^2-\left(\frac{2Q_s}{h\alpha _p}\mathrm{cos}\frac{\mathrm{\Psi }}{2}\right)^2=0$$
| where | $`Q_s`$ | synchrotron tune |
| --- | --- | --- |
| | $`h`$ | harmonic number |
| | $`\alpha _p=\frac{\delta L/L}{\delta p/p}`$ | momentum compaction factor |
| | $`L`$ | path length of a proton for one turn |
Particles in the stable region $`\left(I<0\right)`$ stay bunched, while particles outside the stable region $`\left(I>0\right)`$ are debunched. In the case of the HERA storage ring these unbunched protons deviate, depending on the voltage of the RF system, by more than $`(2-3)\times 10^{-4}`$ in $`\mathrm{\Delta }E/E`$ from the centroid particle. Note that the dynamic energy acceptance of the machine, $`\mathrm{\Delta }E/E\approx 10^{-3}`$, is much larger than $`\left(\frac{\mathrm{\Delta }E}{E}\right)_S=(2-3)\times 10^{-4}`$, the energy deviation of typical protons close to the separatrix. The results shown in figs.3, 4 demonstrate the existence of an unbunched beam component with $`\mathrm{\Delta }E<0`$, the so-called coasting beam.
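For illustration, the bunched/debunched classification implied by the invariant can be written directly from the separatrix equation; the bucket half-height used below is an assumed value taken from the $`(2-3)\times 10^{-4}`$ quoted above:

```python
import numpy as np

def separatrix_dp(psi, dp_max=2.5e-4):
    """I = 0 contour: dp/p = +/- dp_max * cos(psi/2),
    with dp_max = 2*Q_s/(h*alpha_p) the bucket half-height (assumed here)."""
    return dp_max * np.cos(psi / 2.0)

def is_bunched(psi, dp, dp_max=2.5e-4):
    # I < 0: inside the bucket (stays bunched); I > 0: debunched
    return dp ** 2 < separatrix_dp(psi, dp_max) ** 2

print(is_bunched(0.0, 1.0e-4), is_bunched(0.0, 4.0e-4))  # True False
```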
Since these protons have smaller energies than the bunched particles, an energy loss mechanism has to exist which forces bunched protons to cross the separatrix and debunch. Besides machine-inherent sources such as synchrotron radiation ($`\sim 10eV`$ per turn), noise of the RF system and intrabeam scattering, the energy loss of protons in the target can produce the coasting beam. As far as these losses can be quantified, they are much smaller than the maximum energy deviation of $`\mathrm{\Delta }E=0.27GeV`$ allowed for the stable longitudinal phase space at the standard RF voltage. Hence these energy losses are expected only to force protons near the separatrix to cross the phase space boundary of stable bunched beams.
### 2.1 Source of coasting beam
The first study was performed with a virgin beam: no electrons were stored in the electron ring of the HERA storage ring complex, and the measurement started a few minutes after stable beam conditions were declared. In fig.6b,c the position of the "outer" target with respect to the beam center of gravity is shown; the measured total interaction rate is plotted in fig.6a. At different times, indicated by the arrows on the time axis, FADC spectra were analyzed which allow the bunch contribution to be separated from the coasting beam. In fig.7 the FADC spectra are plotted for the different time intervals. The data shown in fig.7a were collected when the target wire touched the beam for the first time. In this case the continuous coasting beam contribution to the interactions dominates. Since before this measurement no target wire had touched the beam, this measurement unquestionably proves that the machine itself produces a coasting beam. Approaching the beam core further, the bunch contribution starts to develop.
In the next step the target was withdrawn from the beam core and placed at a fixed position (fig.6b,c). As shown by fig.6a, the total interaction rate is strongly reduced but after a short time starts to increase again. Figs.7d,e demonstrate that the coasting beam "diffuses" faster towards the outer target than the bunched component, since no bunch correlated contribution is detected.
Further evidence for a machine-induced coasting beam component follows from a second measurement. The total interaction rate and the positions of the wires (inner I and outer II) at different times are shown in fig.8. At 19:15 the outer wire is moved towards the beam. An increase of the interaction rate and of its fluctuations is observed. At 19:30 the outer wire is retracted; the interaction rate decreases strongly and recovers within the next half hour to a level of several MHz. This behaviour was already observed in fig.6. Starting at 20:18 the inner wire is moved towards the beam (fig.8c). As expected, the interaction rate increases. At 20:35 the inner wire is retracted by $`2mm`$ from the beam core, and the interaction rate decreases strongly to the level observed for $`t<`$ 20:18. This behaviour is reproducible, as shown by the measurements performed in the time interval between 20:42 and 21:12.
These results demonstrate that the energy loss of bunched particles due to interactions of the protons with the (inner) target wire is not a major source of the coasting beam. Comparing the interaction rates measured at $`t`$ = 20:18 and $`t`$ = 20:48, it follows that about $`15\%`$ of the coasting beam interacting with the outer target was produced by energy loss of bunched protons in the inner target.
A similar conclusion can be drawn from fig.9. In this case the coasting beam component is detected (fig.9c) by a $`dc`$ current monitor. This current stays constant to a good approximation for 15:00 $`\le t\le `$ 19:00, when the inner target was positioned at $`5.4\sigma `$ to $`3.9\sigma `$ from the beam core (fig.9b) and produced a high interaction rate of $`>`$ 30 MHz (fig.9a). Note, however, that in other measurements a slight increase of the current is observed for the same setup of the wire target. As demonstrated by fig.9c, for $`t\ge `$ 19:00 an increase of the $`dc`$ current is detected though the wire is pulled away from the beam (fig.9b). Considering these effects one arrives again at a target contribution to the coasting beam of $`<`$ 20 %. Moreover, these observations stress the sensitivity of the observations to details of the machine setup, collimator positions etc.
Further details concerning the properties of the coasting beam follow from the measurement performed during machine studies where HERA was filled with 10 proton bunches. The contribution of these protons interacting with the target is shown in fig.10 in the time interval corresponding to channels 60 to 100. The coasting beam was excited by a kicker magnet in a narrow time slice around channel 460 every $`21.12\mu s`$, i.e. once per full turn of a fill. The measured distribution of the interaction rate (fig.10) shows an exponential increase of the rate which starts at channel $`300`$, peaks at channel $`460`$ and is followed by a sharp decrease.
The slope of the exponential corresponds to a characteristic "lifetime" of the protons of $`0.5\mu s`$ which can be explained as follows. The fraction of the coasting beam excited by the kicker produces a continuous beam of protons with large transverse emittance traveling around the storage ring with a slightly shorter circulation time than the synchronous protons since for coasting beam protons $`\alpha _p>0`$. Due to interactions of the protons with the target, the intensity of the excited beam component decreases with time.
A proton interacts in the target after typically $`10^5`$ revolutions. From fig.10 one concludes that at this time it has advanced by $`0.5\mu s`$ with respect to the synchronous protons if it belongs to the coasting beam. Therefore after $`n=(21.12\mu s/0.5\mu s)\times 10^5\approx 4\times 10^6`$ turns, corresponding to a typical scale of $`t=21.12\mu s\times n\approx 90s`$, the non-interacting excited protons in the coasting beam are smeared homogeneously around the ring. This time has to be compared to the time needed by a proton close to the separatrix to advance by a full turn with respect to a centroid proton. The order of magnitude is given by
$$t=\frac{L}{\mathrm{\Delta }L}\times 21.12\mu s=\frac{1}{\alpha _p\frac{\mathrm{\Delta }E}{E}}\times 21.12\mu s\approx 80s$$
where $`\frac{\mathrm{\Delta }L}{L}`$ is the relative deviation of the path length for coasting protons, $`\alpha _p=1.3\times 10^{-3}`$ is the momentum compaction factor of HERA and $`\left(\frac{\mathrm{\Delta }E}{E}\right)_S\approx (2\dots 3)\times 10^{-4}`$ the energy deviation of typical protons close to the separatrix. Given the fact that the estimates are quite rough, the two time scales are in remarkable agreement.
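As a quick cross-check of these estimates, both time scales can be evaluated numerically. The short Python sketch below is our own back-of-the-envelope check; the mid-range value $`\mathrm{\Delta }E/E=2.5\times 10^{-4}`$ is an assumption.

```python
# Back-of-the-envelope check of the two time-scale estimates discussed above.
alpha_p = 1.3e-3      # momentum compaction factor of HERA
dE_over_E = 2.5e-4    # assumed mid-range value of (Delta E/E)_S
T_rev = 21.12e-6      # revolution time [s]

# Slip time for a proton near the separatrix to advance by one full turn:
t_slip = T_rev / (alpha_p * dE_over_E)
print(f"slip time ~ {t_slip:.0f} s")       # ~65 s, i.e. of order 80 s

# Smearing time from the kicker data: 0.5 us advance per 1e5 turns.
n_turns = (T_rev / 0.5e-6) * 1e5           # turns to advance by a full turn
t_smear = T_rev * n_turns
print(f"smearing time ~ {t_smear:.0f} s")  # ~89 s
```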
### 2.2 Impact of coasting beam on target operation
Measurements performed with an old fill are shown in figs.11, 12. Until t=14:12 the inner target was operated with a constant interaction rate of $`\approx `$ 10 MHz. The continuous background due to coasting beams is negligible (fig.12a), and the contribution of bunch correlated interactions dominates. At the time 14:12, the inner target was retracted and the outer target was inserted (fig.11b). The results shown in fig.12b-d demonstrate that now nearly 100 % of the interaction rate measured under these conditions is due to coasting beam protons. Initially, strong fluctuations of the interaction rate are observed while the wire is gradually scraping away the halo and moving towards the beam core (fig.11a). At $`t`$ = 15:00 the fluctuations disappear abruptly and the wire stops moving towards the beam. At the same time, the coasting beam background drops by a factor of two, as can be seen from data taken at the times 14:59 (fig.12d) and 15:03 (fig.12e). With further operation of the outer target wire at constant interaction rate, the coasting beam background thereafter continues to decrease slowly (fig.12f). These effects can be interpreted as the transition of the wire from the pure coasting beam halo into the beam core region. In this sense the outer wire acts effectively like a scraper for the coasting beam halo.
Similarly to the measurements described in section 2.1, a steady increase of the rate with time (fig.11b) is observed, indicating the repopulation of the halo, if the outer wire is retracted by $`0.5mm`$ (fig.11a). The time structure of the proton bunches disappears completely and the interactions are again caused by the remaining coasting beam component at large betatron amplitudes.
Note that the time dependences of the two rate measurements presented above are similar and reproducible. They are indicative of the inherent beam dynamics, leading to beam diffusion describable by the Fokker-Planck equation . The sudden drop of the coasting beam contribution observed at $`t=`$ 15:00 in the measurement discussed above deserves special attention and will be further analyzed by time resolved measurements which are presently being prepared.
## 3 Conclusions
We have detected a continuous current of protons in the HERA machine which produces nonbunched background at the HERA-B target. Since the experiment is positioned at a location with a negative dispersion, this observation can be attributed to protons with a smaller energy than the synchronous ones. The measurement of the circulation time difference between protons of the coasting beam and synchronous protons, and its quantitative agreement with estimates from linear beam optics, supports this interpretation. We have shown that the coasting beam exists already for a virgin fill of the proton ring which is not disturbed in advance by proton interactions with the target. The interactions of bunched protons with the target increase the intensity of the coasting beam only marginally.
Moreover, the protons of the coasting beam diffuse faster into the halo region than the bunch-correlated protons. A surprising observation is the simultaneous sudden decrease of the rate fluctuations and the coasting beam contribution at a characteristic wire-beam distance, indicating a transition from the pure coasting beam halo into the (mainly bunched) beam core. With an improved setup we plan to study the effects with higher time resolution.
### Acknowledgement
This work was supported by the BMBF Bonn under contract numbers 05 7Do55P and 057Bu35I. The observations presented in this paper and their understanding have been obtained in close and fruitful collaboration with the HERA machine physicists. All further progress in understanding and overcoming the problems related to the coasting beam at HERA still requires close cooperation with, and the support of, the HERA machine group.
We would like to express our cordial thanks to the HERA crew for the friendly collaboration, their support and their assistance in discussing our observations and in carrying out dedicated machine studies to investigate and improve the situation.
It is impossible to name all of them; special thanks are owed to Jim Ellis, B. Holzer, Jens Kluthe, Helmut Mais, Chr. Montag, Mark Lomperski, Tanaji Sen, F. Willeke and Mari Paz Zorzano.
# Antiferromagnetic Correlations versus Superfluid Density in La2-xSrxCuO4
## Abstract
We have performed muon spin relaxation and low field $`ac`$-susceptibility measurements in a series of high quality samples of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> ($`x=0.08`$–$`0.24`$) as a function of temperature. Superconductivity is found to coexist with low temperature spin glass order up to the optimally doped region where the normal state pseudogap also closes. The systematic depletion of the superfluid density with the enhancement of antiferromagnetic correlations upon underdoping indicates a $`competition`$ between antiferromagnetic correlations and superconductivity.
Establishing and understanding the phase diagram of the high-$`T_c`$ superconductors (HTS) versus temperature and doping has been one of the major challenges in modern solid state physics. The parent compound La<sub>2</sub>CuO<sub>4</sub> of the first HTS family to be discovered, La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, is an insulator exhibiting long range antiferromagnetic (AF) order, which is eventually destroyed as carriers are doped into the CuO<sub>2</sub> planes. After passing through a spin glass (SG) state, superconductivity emerges near 0.06 holes/planar-Cu-atom and follows an approximately parabolic doping dependence until it disappears at $`x\approx 0.30`$. Spectroscopic evidence, however, indicates that AF correlations do not cease to exist with the emergence of superconductivity; instead a short range ordered AF state persists in the superconducting state . This observation raises fundamental questions as to how far into the superconducting regime of the phase diagram these AF correlations persist and how, if at all, they affect the superconductivity.
In this short paper we briefly present new experimental findings in which the signatures of antiferromagnetic correlations are observed to persist up to $`x\approx 0.17`$. We find a close correlation between the doping dependence of the AF correlations and the absolute value of the superfluid density, $`\rho ^s(0)`$. The latter is almost constant for approximately $`x>0.17`$ and drops with the enhancement of the AF fluctuations, undergoing an increased depletion in the vicinity of the stripe phase region ($`x=0.125`$). These results suggest that the AF order parameter competes with the superconducting counterpart in more than $`50\%`$ of the superconducting region of the phase diagram.
The samples studied were single-phase polycrystalline La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> (LSCO) ($`x=`$ 0.08, 0.10, 0.125, 0.15, 0.17, 0.20, 0.22, 0.24) prepared using solid-state reaction procedures. No other phases were detected by powder x-ray diffraction and the phase purity is thought to be better than $`1\%`$. High field magnetic susceptibility measurements showed no signatures of excess paramagnetic centres, and the measured values of $`T_c`$ and lattice parameters are also in very good agreement with published work . The samples have been extensively characterised by several transport, magnetic and spectroscopic techniques, all indicating their high quality. Zero-field (ZF) and transverse-field (TF) $`\mu `$SR experiments were performed at the pulsed muon source, ISIS Facility, Rutherford Appleton Laboratory. The samples were mounted on a silver plate either on the cold stage of a dilution refrigerator or in a variable temperature helium cryostat, enabling spectra to be collected over the temperature range $`40mK`$ to $`50K`$. The in-plane magnetic penetration depth, $`\lambda _{ab}`$ ($`\lambda _{ab}^{-2}\propto \rho ^s`$), was determined both by a low-field $`ac`$-susceptibility technique (typically at $`1G`$ and $`333Hz`$) on grain-aligned powders and from the analysis of TF-cooled $`\mu `$SR measurements. The latter were carried out on unaligned powders in a field of $`400G`$. Details for deriving $`\lambda _{ab}`$ in HTS from the measured low-field $`ac`$-susceptibility and TF-$`\mu `$SR spectra can be found elsewhere .
In Fig.1 we present the time evolution of the ZF muon asymmetry for $`x=0.08`$ ($`T_c=21K`$) as a function of temperature. In all samples the high temperature form of the depolarisation is Gaussian, consistent with dipolar interactions between the muons and their near neighbour nuclear moments. This was verified by applying a $`50G`$ longitudinal field, which completely suppressed the depolarisation. As the temperature is lowered the onset of dynamical relaxation processes becomes apparent in the change in the shape of the depolarisation function. The samples with $`x=0.08,0.10`$ and $`0.125`$ follow the same pattern, which is indicative of the onset of spin glass ordering at low temperature. For simplicity, we have chosen to parametrise the form of the depolarisation function as a stretched exponential, $`G_z(t)=A_1\mathrm{exp}[-(\lambda t)^\beta ]+A_2`$. The constant term $`A_2`$ accounts for a small time independent background arising from muons stopping in the silver backing plate. At high temperatures we find that $`\beta \approx 2`$, but for samples with $`x<0.15`$ it decreases smoothly to a value approaching $`0.5`$ at low temperatures. This "root exponential" behaviour is widely found in the temperature regime just above the glass temperature in spin glasses . The temperature dependence of $`\beta `$ for the present samples is shown in Fig. 2. We have used the temperature at which the value of $`\beta `$ drops below 2 as the onset temperature for AF correlations, $`T_{sf}`$, and the temperature where $`\beta \approx 0.5`$ as a measure of $`T_{sg}`$, the spin glass freezing temperature. We find, for the $`x=0.08`$ sample for example, that the relaxation rate parameter $`\lambda ^\beta `$ is peaked close to $`T_{sg}`$ (Fig. 3). At temperatures below $`T_{sg}`$ the form of the depolarisation function changes: there is an initial rapid decay of $`G_z(t)`$, followed by a slowly damped tail (see left hand panel in Fig. 1). This behaviour is very characteristic of spin glasses below $`T_{sg}`$, and has been attributed by Uemura $`et`$ $`al.`$ to the effects of the static distribution of field, combined with dynamical processes in the frozen spin glass state.
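For illustration, the stretched-exponential parametrisation can be fitted with a standard nonlinear least-squares routine. The sketch below (our own, in Python with scipy) runs on synthetic stand-in data, not on the actual $`\mu `$SR spectra:

```python
# Sketch of the stretched-exponential fit G_z(t) = A1*exp(-(lam*t)**beta) + A2.
# The synthetic data and parameter values are our own illustration only.
import numpy as np
from scipy.optimize import curve_fit

def g_z(t, a1, lam, beta, a2):
    return a1 * np.exp(-(lam * t) ** beta) + a2

t = np.linspace(0.01, 10.0, 200)              # time, e.g. in microseconds
data = g_z(t, 0.25, 1.2, 0.7, 0.02)           # stand-in for measured asymmetry
data += np.random.normal(0.0, 0.003, t.size)  # mimic counting noise

p0 = (0.2, 1.0, 1.0, 0.0)                     # initial guess (A1, lam, beta, A2)
popt, pcov = curve_fit(g_z, t, data, p0=p0)
a1, lam, beta, a2 = popt
print(f"lambda = {lam:.3f} /us,  beta = {beta:.3f}")
```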
The present data show that SG freezing persists up to and beyond $`x=0.125`$. Indeed the onset of AF correlations for $`x=0.125`$ occurs at a higher temperature than for $`x=0.10`$. This is probably associated with the formation of stripe domains in this range of concentration . For $`x=0.15`$ and $`0.17`$ the trends of $`\beta (T)`$ suggest a very small value of the spin glass temperature $`T_{sg}`$ ($`<45mK`$), with fluctuations setting in just below 8K and 2K, respectively. This suggests that for higher dopings the AF correlations are absent or at least beyond the experimental range.
In Fig. 4 we summarise the essence of this work by comparing the doping dependence of $`T_{sf}`$ and $`T_{sg}`$ (including data for $`T_{sg}`$ from ref. ) with that of $`\lambda _{ab}(0)^{-2}\propto \rho ^s(0)`$. We note that our values of $`T_{sg}`$ for $`x=0.08`$ and $`0.10`$ are in excellent agreement with those reported in ref. . Figure 4 indicates that although the freezing of spins occurs at very low temperatures, $`T_{sg}\ll T_c`$, magnetic fluctuations are apparent at significantly higher temperatures ($`e.g.`$, $`T_{sf}\approx 0.5T_c`$ for $`x=0.10`$ and $`T_{sf}\approx 0.2T_c`$ for $`x=0.15`$). Therefore, a large fraction of the superconducting region of the phase diagram coexists with AF correlations, which pick up near $`x=1/8`$ and eventually disappear in the lightly overdoped region where the normal state (or pseudo) gap, $`\mathrm{\Delta }_N`$, is known to close , suggesting a connection between $`\mathrm{\Delta }_N`$ and antiferromagnetic correlations.
As shown in Fig. 4, the superfluid density is doping independent in the region where $`T_{sg}`$ = $`T_{sf}`$ $`=0`$ and is gradually reduced as the AF correlations develop (i.e., for $`x<0.20`$), undergoing a local dip in the $`1/8`$ region where AF is enhanced, possibly due to the stripe phase. We would like to note the striking parallel changes of $`\rho ^s(0)`$ with $`T_{sg}`$ and $`T_{sf}`$, emphasising the intimate connection of $`\rho ^s(0)`$ with the AF background rather than simply with $`T_c`$.
The present results suggest that AF coexists with superconductivity in the underdoped and slightly overdoped samples and becomes undetectable in the heavily overdoped regime where the pseudogap also disappears. The systematic depletion of the superfluid density with increasing AF correlations (Fig. 4) indicates that the latter $`compete`$ with superconductivity and that $`\rho ^s(0)`$ is not simply a function of $`T_c`$.
We are grateful to Dr A.D. Taylor of the ISIS Facility, Rutherford Appleton Laboratory for the allocation of muon beam time. C.P. thanks Tao Xiang for useful discussions and Trinity College, Cambridge for financial support.
FIGURE CAPTIONS
Figure 1. Typical zero-field $`\mu `$SR spectra of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x=0.08`$ measured at $`1.3,5`$ and $`9K`$. The solid lines are fits of the stretched exponential, $`G_z(t)=A_1\mathrm{exp}[-(\lambda t)^\beta ]+A_2`$ (see text for details).
Figure 2. Temperature dependence of the exponent $`\beta `$ of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x=0.08,0.10,0.125,0.15,0.17`$.
Figure 3. Temperature dependence of the relaxation rate parameter $`\lambda ^\beta `$ of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> for $`x=0.08,0.10,0.125,0.15,0.17`$.
Figure 4. Phase diagram of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> showing the doping dependence of $`T_c`$ (closed lower triangles), $`T_{sg}`$ (closed circles), $`T_{sf}`$ (closed squares), and $`\lambda _{ab}(0)^{-2}\propto \rho ^s(0)`$ (closed upper triangles). Open circles are data for $`T_{sg}`$ taken from ref. . A schematic variation of the Néel temperature $`T_N`$ is also shown as a broken line.
# Confinement and the AdS/CFT Correspondence
## I Introduction
It was realized many years ago that in the large N limit of Yang-Mills theory a remarkable simplification takes place: the physics is dominated by "planar" graphs, Feynman diagrams with no line-crossings. In this limit, the gauge theory ought to be described by a "QCD string," and it was a hope that such a simplification might shed light on some of the mysteries of nonabelian gauge theories, notably the puzzle of confinement.
The AdS/CFT correspondence makes explicit this relation between gauge fields and strings. Specifically, the correspondence says that IIB string theory in a background of five-dimensional anti-de Sitter space times a five-sphere is dual to the large N limit of $`\mathcal{N}=4`$ supersymmetric Yang-Mills theory in four dimensions. This is a conformal theory with no confinement; however, the thermal theory in finite volume could still have a confined phase.
An important ingredient in the correspondence is the principle of holography , the notion that the physics of a gravitational theory is dual to a different theory in one lower dimension. Conversely, given the dual theory on a boundary, we must consider all the possible bulk manifolds whose boundaries have the same intrinsic geometry as the background of the dual theory . For thermal super Yang-Mills on $`S^3`$, there are at least two distinct Einstein manifolds with the requisite boundary geometry: thermal anti-de Sitter space and a Schwarzschild-like black hole in AdS . These classical bulk solutions are a sort of master field of the gauge theory. The distinct geometries are interpreted in the gauge theory as different phases in the strong 't Hooft coupling limit; the thermodynamics of the black hole corresponds to the thermodynamics of strong-coupling SYM in the unconfined phase while thermal AdS is seen as dual to the confined phase of the gauge theory.
In this paper, we investigate the thermodynamics of the different phases of super Yang-Mills at finite volume on $`S^3`$ using the AdS/CFT correspondence. In particular, we examine the conditions for phase change in a microcanonical framework. Formally, a phase transition cannot occur in finite volume at finite N. However, a crossover between these qualitatively different phases can still occur when their weights in the partition sum are the same. In the dual picture, the black hole dominates the path integral when the horizon is large compared to the inverse AdS curvature, and the thermal AdS geometry dominates for sufficiently low temperatures. The crossover between these two geometries is known as the Hawking-Page transition and corresponds in the field theory to a transition between the confining and unconfined phases. Since a microcanonical framework requires that energy be conserved during any transition, we shall consider not empty AdS with thermal identifications, but rather thermally-identified AdS with a thermal gas in it. The energy of the system is then measured with respect to the thermal AdS background.
This paper is structured as follows. We begin, in Section II, by reviewing the thermal properties of AdS black holes and their interpretation in light of the AdS/CFT correspondence. Then, in Section III, we consider a thermal bath in AdS. Finally, in Section IV, we determine the necessary conditions for a crossover between the black hole and radiation-dominated phases.
## II AdS Black Holes
The five-dimensional Einstein-Hilbert action with a cosmological constant is given by
$$I_{\mathrm{BH}}=\frac{1}{16\pi G_5}\int d^5x\sqrt{g}\left(R+12l^2\right),$$
(1)
where $`G_5`$ is the five-dimensional Newton constant, $`R`$ is the Ricci scalar, the cosmological constant is $`\mathrm{\Lambda }=-6l^2`$, and we have neglected a surface term at infinity. Anti-de Sitter solutions derived from this action can be embedded in ten-dimensional IIB supergravity such that the supergravity background is of the form $`AdS_5\times S^5`$.
The line element of a "Schwarzschild" black hole in anti-de Sitter space in five spacetime dimensions can be written as
$$ds^2=\left(1\frac{2MG_5}{r^2}+r^2l^2\right)dt^2+\left(1\frac{2MG_5}{r^2}+r^2l^2\right)^1dr^2+r^2d\mathrm{\Omega }_3^2,$$
(2)
where $`l`$ is the inverse radius of AdS space. This solution has a horizon at $`r=r_+`$ where
$$r_+^2=\frac{1}{2l^2}\left(-1+\sqrt{1+8MG_5l^2}\right).$$
(3)
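One can verify symbolically that this root solves the horizon condition $`g_{tt}=0`$. A minimal sketch of our own, using sympy:

```python
# Check that Eq. (3) solves 1 - 2*M*G5/u + u*l**2 = 0, with u standing for r^2.
import sympy as sp

u, M, G5, l = sp.symbols('u M G5 l', positive=True)
horizon = 1 - 2*M*G5/u + u*l**2
rp2 = (-1 + sp.sqrt(1 + 8*M*G5*l**2)) / (2*l**2)   # Eq. (3) for r_+^2

print(sp.simplify(horizon.subs(u, rp2)))            # -> 0
```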
When $`r_+l\lesssim 1`$, the black hole could become unstable to localization on the $`S^5`$ by an analog of the Gregory-Laflamme mechanism . As a rule, one may determine a necessary (though not sufficient) condition for instability from entropic considerations. A straightforward computation then shows that localization instability could occur for very small black holes with $`r_+l\ll 1`$ . Here we shall work with black holes with $`r_+l>1`$, for which we do not expect such an instability.
To study the black hole's thermodynamics, we Euclideanize the metric. The substitution $`\tau =it`$ makes the metric positive definite and, by the usual removal of the conical singularity at $`r_+`$, yields a periodicity in $`\tau `$ of
$$\beta _{\mathrm{BH}}=\frac{2\pi r_+}{1+2r_+^2l^2},$$
(4)
which is identified with the inverse temperature of the black hole. The entropy is given by
$$S_{\mathrm{BH}}=\frac{A}{4G_5}=\frac{\pi ^2r_+^3}{2G_5},$$
(5)
where $`A`$ is the "area" (that is, three-volume) of the horizon. The mass above the anti-de Sitter background is
$$U_{\mathrm{BH}}=\frac{3\pi }{4}M=\frac{3\pi }{8G_5}r_+^2\left(1+r_+^2l^2\right).$$
(6)
This is the AdS equivalent of the ADM mass, or energy at infinity. (Actually if the black hole is to be considered at thermal equilibrium it should properly be regarded as being surrounded by a thermal envelope of Hawking particles. Because of the infinite blueshift at the horizon, the envelope contributes a formally infinite energy. Here we shall neglect this infinite energy as unphysical, absorbed perhaps by a renormalization of the Newton constant.) We can now also write down the free energy:
$$F_{\mathrm{BH}}=\frac{\pi r_+^2}{8G_5}\left(1r_+^2l^2\right).$$
(7)
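As a consistency check, Eqs. (4)-(7) can be verified to obey the first law, $`dU_{\mathrm{BH}}=T_{\mathrm{BH}}dS_{\mathrm{BH}}`$, together with $`F_{\mathrm{BH}}=U_{\mathrm{BH}}-T_{\mathrm{BH}}S_{\mathrm{BH}}`$. A short symbolic sketch of our own (sympy), with $`r_+`$ as the independent variable:

```python
# First-law check of Eqs. (4)-(7): dU = T dS and F = U - T*S along r_+.
import sympy as sp

rp, G5, l = sp.symbols('rp G5 l', positive=True)    # rp stands for r_+
T = (1 + 2*rp**2*l**2) / (2*sp.pi*rp)               # inverse of Eq. (4)
S = sp.pi**2 * rp**3 / (2*G5)                       # Eq. (5)
U = 3*sp.pi/(8*G5) * rp**2 * (1 + rp**2*l**2)       # Eq. (6)
F = sp.pi*rp**2/(8*G5) * (1 - rp**2*l**2)           # Eq. (7)

print(sp.simplify(sp.diff(U, rp) - T*sp.diff(S, rp)))  # -> 0
print(sp.simplify(F - (U - T*S)))                      # -> 0
```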
Eqs. (4)-(7) thus satisfy the first law of thermodynamics. To express them in terms of the gauge theory parameters $`N`$, $`T_{\mathrm{CFT}}`$, and $`V_{\mathrm{CFT}}`$, we substitute physical data taken from the boundary of the black hole spacetime. At a fixed radius $`r=r_0\gg r_+`$, the boundary line element tends to
$$ds^2\to r_0^2\left[l^2dt^2+d\mathrm{\Omega }_3^2\right],$$
(8)
giving a volume of
$$V_{\mathrm{CFT}}=2\pi ^2r_0^3.$$
(9)
The field theory temperature is the physical temperature at the boundary:
$$T_{\mathrm{CFT}}=\frac{T_{\mathrm{BH}}}{\sqrt{g_{tt}}}\approx \frac{T_{\mathrm{BH}}}{lr_0}.$$
(10)
To obtain an expression for $`N`$, we invoke the AdS/CFT correspondence. This relates $`N`$ to the radius of $`S^5`$ and the cosmological constant:
$$R_{S^5}^2=\sqrt{4\pi g_s\alpha ^{\prime 2}N}=\frac{1}{l^2}.$$
(11)
Then, since
$$(2\pi )^7g_s^2\alpha ^{\prime 4}=16\pi G_{10}=16\frac{\pi ^4}{l^5}G_5,$$
(12)
we have
$$N^2=\frac{\pi }{2l^3G_5}.$$
(13)
With these substitutions, we see that in the limit $`r_+l\gg 1`$, the black hole entropy can be expressed in terms of conformal field theory parameters as
$$S_{\mathrm{BH}}=\frac{\pi ^2}{2}N^2V_{\mathrm{CFT}}T_{\mathrm{CFT}}^3.$$
(14)
The dimensionful terms in this expression are in accord with expectations for a conformal theory. The matching has been extended to rotating black holes and their field theory dual, Yang-Mills with angular momentum. The dependence on $`N^2`$ indicates that the conformal field theory is in its unconfined phase; the $`N^2`$ species of free gluons make independent contributions to the free energy. We shall see in the next section that the thermodynamics of the confined phase is rather different.
## III A hot bath in AdS
Now consider a gas of thermal radiation in anti-de Sitter space. The energy eigenstates of $`AdS_5`$ are :
$$\mathrm{\Psi }_{\omega jmn}(r,t,\theta ,\varphi ,\psi )=N_{\omega j}\mathrm{exp}\left(i\omega lt\right)\mathrm{sin}^j\rho \,C_{\omega -j-1}^{j+1}(\mathrm{cos}\rho )\,Y_j^{mn}(\theta ,\varphi ,\psi ),$$
(15)
with the condition $`\omega -1\ge j\ge |m|,|n|`$, where $`C_q^p(x)`$ are Gegenbauer polynomials, $`Y_j^{mn}(\theta ,\varphi ,\psi )`$ are the spherical harmonics in five-dimensional spacetime (with total angular momentum number $`j`$), and $`\rho \equiv \mathrm{arctan}(rl)`$. Here $`\omega `$ is an integer and hence the spectrum is quantized in units of $`l`$, the inverse "radius" of AdS. Since this is also the quantum of excitations of the five-sphere, we should consider thermodynamics over the full ten-dimensional space. The appropriate line element is therefore
$$ds^2=\left(1+r^2l^2\right)dt^2+\left(1+r^2l^2\right)^1dr^2+r^2d\mathrm{\Omega }_3^2+l^2d\mathrm{\Omega }_5^2.$$
(16)
To obtain a thermal field theory, we again Euclideanize the metric. The periodicity of $`\tau =it`$ is then the inverse (asymptotic) temperature, $`T_{\mathrm{AdS}}^{-1}`$, of the theory; the absence of a horizon means that $`T_{\mathrm{AdS}}`$ is an arbitrary parameter. However, the relevant temperature for thermodynamics in the bulk is not $`T_{\mathrm{AdS}}`$, but the local, redshifted, temperature:
$$T_{\mathrm{local}}=\frac{T_{\mathrm{AdS}}}{\sqrt{g_{tt}}}=\frac{T_{\mathrm{AdS}}}{\sqrt{1+r^2l^2}}.$$
(17)
To calculate thermodynamic quantities we foliate spacetime into (timelike) slices of constant local temperature. Extensive thermodynamic quantities are then computed by adding the contribution of each such hypersurface.
The local five-dimensional energy density of the thermal gas of radiation can be written as
$$\rho _{\mathrm{local}}=\sigma \frac{\pi ^3}{l^5}T_{\mathrm{local}}^{10},$$
(18)
where we have neglected infrared effects due to curvature or nonconformality. Here $`\sigma `$ is the ten-dimensional supersymmetric generalization of the Stefan-Boltzmann constant, which is approximated by its flat space value:
$$\sigma =\frac{62}{105}\pi ^5,$$
(19)
where we have included a factor of 128, the number of massless bosonic physical degrees of freedom of IIB supergravity.
The total "ADM" energy-at-infinity of a gas contained in a ball of radius $`r_0`$ is then
$$U_{\mathrm{gas}}^{\mathrm{\infty }}=\int \sigma \frac{\pi ^3}{l^4}T_{\mathrm{local}}^{10}\sqrt{g_{tt}}\sqrt{g_{rr}}\,r^3\,dr\,d\mathrm{\Omega }_3=\frac{2\pi ^5}{l^5}\sigma T_{\mathrm{AdS}}^{10}\int _0^{r_0}\frac{r^3\,dr}{(1+r^2l^2)^5}\equiv \sigma V_{\mathrm{eff}}(r_0)T_{\mathrm{AdS}}^{10}.$$
(20)
Here the additional blueshift factor of $`\sqrt{g_{tt}}`$ converts the local (fiducial) energy into an ADM-type energy, comparable to Eq. (6). We have also defined an effective volume,
$$V_{\mathrm{eff}}(r_0)=\frac{2\pi ^5}{l^9}\left(\frac{2}{3}-\frac{2+3(r_0l)^2}{3\left(1+(r_0l)^2\right)^{3/2}}\right),$$
(21)
which, as $`r_0\to \mathrm{\infty }`$, approaches
$$\frac{4\pi ^5}{3l^9}.$$
(22)
Thermodynamically, anti-de Sitter space behaves as if it had a finite volume.
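The finite limit of Eq. (21) is easy to confirm symbolically; a small sketch of our own (sympy), writing $`x=r_0l`$:

```python
# Verify the r0 -> infinity limit of Eq. (21), reproducing Eq. (22).
import sympy as sp

x, l = sp.symbols('x l', positive=True)             # x = r0 * l
V = 2*sp.pi**5/l**9 * (sp.Rational(2, 3)
        - (2 + 3*x**2) / (3*(1 + x**2)**sp.Rational(3, 2)))

print(sp.limit(V, x, sp.oo))                        # -> 4*pi**5/(3*l**9)
```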
Similarly, the other thermodynamic quantities of the thermal bath are
$$F=-\frac{\sigma }{9}V_{\mathrm{eff}}T_{\mathrm{AdS}}^{10},\qquad S=\frac{10}{9}\sigma V_{\mathrm{eff}}T_{\mathrm{AdS}}^9,$$
(23)
consistent with the first law of thermodynamics. The absence of a $`G_5`$ in the free energy indicates, from the CFT point of view, that the free energy is of order $`N^0`$. This is the confined phase of the theory: the free energy is of order $`N^0`$ because the $`N^2`$ species of gluons have condensed into hadronic color singlets. (Curiously, attempts to formulate confinement in terms of anti-de Sitter space date back at least to the 1970's .)
The appearance of nine spatial dimensions in the volume is somewhat puzzling for a three-dimensional gauge theory. It reflects the fact that the QCD (SYM) string is really a type IIB string which naturally lives in nine spatial dimensions. It has been suggested that the extra dimensions in which the open string worldsheet bounded by a Wilson loop can extend are akin to Liouville dimensions .
## IV The Crossover
A field theory living on a manifold, $`S^3\times S^1`$ with three-sphere radius $`r_0`$, is dual to those five-dimensional Einstein manifolds that have the same geometry at $`r_0`$. In the microcanonical approach that we shall follow, the contributions to the partition function come from both the black hole and the gas in thermally-identified AdS where the energy of the gas and black hole are taken to be the same:
$$Z(U)=e^{-I_{\mathrm{BH}}(U)}+e^{-I_{\mathrm{gas}}(U)}.$$
(24)
Which of these two thermodynamic phases the system is found in is determined, in the saddle point approximation, by the relative values of the respective Euclidean classical actions. The action of the black hole is simply the Einstein-Hilbert action, Eq. (1). This is proportional to the volume of the spacetime and so needs to be regulated. A finite action is obtained by subtracting the (also infinite) action for thermally-identified anti-de Sitter space in which the hypersurface at a constant large radius has the same intrinsic geometry as a hypersurface at the same radius in the black hole background . The regularized black hole action is
$$I_{\mathrm{BH}}=\frac{\pi ^2r_+^3}{4G_5}\frac{1-r_+^2l^2}{1+2r_+^2l^2}.$$
(25)
Subtracting the anti-de Sitter background is equivalent to choosing the ground state of the theory. Then the comparable value of the action for the gas in thermally-identified AdS should be just the action of the gas itself, namely $`F/T`$. Thus
$$I_{\mathrm{gas}}=-\frac{1}{9}\sigma V_{\mathrm{eff}}T_{\mathrm{AdS}}^9.$$
(26)
The qualitative thermodynamic behavior of the system is determined by the action which dominates the partition function Eq. (24). At the crossover between the two phases, the action for the gas and the black hole are the same. Moreover, energy must be conserved. Hence one may determine the conditions for a smooth crossover from the following equations:
$$U_{\mathrm{gas}}^{\mathrm{local}}=U_{\mathrm{BH}}^{\mathrm{local}},I_{\mathrm{gas}}=I_{\mathrm{BH}}.$$
(27)
Note that, since the two phases cannot be in physical contact, the physical temperature does not have to be the same for the two phases. (The physical temperatures are the same at $`r_+l=\sqrt{7}`$.)
Solving these equations yields $`N^2`$ as a function of the dimensionless quantity $`x\equiv r_+l`$ at the crossover:
$$N^2=\frac{31}{2^5\,3^{13}\,5\cdot 7}\frac{(1+x^2)^9(1+2x^2)^{10}}{(x^2-1)^{10}x^{12}}$$
(28)
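A quick numerical sketch of this crossover curve (our own evaluation of Eq. (28) as printed; it is only meaningful for $`x>1`$):

```python
# Numerical evaluation of the crossover relation Eq. (28).
import numpy as np

def N_crossover(x):
    pref = 31.0 / (2**5 * 3**13 * 5 * 7)
    N2 = pref * (1 + x**2)**9 * (1 + 2*x**2)**10 / ((x**2 - 1)**10 * x**12)
    return np.sqrt(N2)

for x in (1.1, 1.2, 1.5, 2.0):
    print(f"x = {x}:  N ~ {N_crossover(x):.1f}")
# N diverges as x -> 1+ and falls off quickly, so at large N the crossover
# sits very close to r_+ l = 1, as noted below.
```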
How accurate is this equation? In using Eq. (17), we have omitted the back-reaction of the gas on the metric. One may estimate this. Consider a spherically symmetric stationary spacetime with a cosmological constant and massive matter fields. The equation of hydrostatic equilibrium (equally, the $`tt`$ Einstein equation) reads
$$\frac{d}{dr}m(r)=8\pi ^2r^3\left(G_5\rho +\mathrm{\Lambda }\right),$$
(29)
where $`g_{tt}\equiv 1-m(r)/r^2`$. Back-reaction can reliably be neglected when the matter term in the parenthesis is (much) smaller than the cosmological term. For an energy density given by Eq. (18) and a total energy matched to that of the black hole phase, Eq. (6), the condition $`G_5\rho <|\mathrm{\Lambda }|`$ amounts to
$$\frac{9\pi }{32}x^2(1+x^2)l^2<6l^2,$$
(30)
and we see that the matter term becomes dominant at large $`x`$, and is not entirely negligible even near $`x=1`$. At high temperature, therefore, Eq. (28) becomes unreliable; this limit has been studied elsewhere . To accommodate the effect of matter, one might try to seek a solution to the linearized Einstein equations, perturbed around the $`AdS_5`$ background. The exact form of Eq. (28) would also be modified by including the correct Stefan-Boltzmann constant for anti-de Sitter space. And finally, when $`N`$ is small, the supergravity approximation itself breaks down.
Despite these caveats, Eq. (28) seems to capture the correct qualitative behavior. In Fig. 1, we plot $`N`$ near the crossover for $`x\gtrsim 1`$. The region below the crossover curve is dominated by the confined or AdS gas phase, whereas the region above is dominated by the unconfined or black hole phase. Note that $`x`$ grows roughly like the dimensionless product $`T_{\mathrm{phys}}r_0`$. As the temperature/volume increases, the graph confirms our expectation that the theory becomes conformal. As $`N`$ goes to infinity, we recover the result that the transition occurs at $`r_+l=1`$. This is in fact a very good approximation for finite but large N.
Acknowledgments
We would like to thank José Barbón, Gerard 't Hooft, Shiraz Minwalla, and Erik Verlinde for helpful discussions. D. B. is supported by European Commission TMR programme ERBFMRX-CT96-0045, and would like to acknowledge the hospitality of the Spinoza Institute, CERN and the support of ITF, Utrecht. M. P. is supported by the Netherlands Organization for Scientific Research (NWO).
## 1 Introduction
Turbulence is the state of vortical fluid flow where velocity, pressure, and other flow field properties vary in time and space sharply and irregularly, and can be assumed to be random. The experimental investigation of individual realizations of such flows is impossible because the results are irreproducible: Experiments repeated under identical external conditions produce a different outcome. Therefore experimental investigations of turbulent flows can only provide their average properties.
Less trivial and not always recognized is the following: What is of greatest interest in these experiments are intermediate asymptotic states of wider classes of flows, i.e., coherent, self-consistent fragments common to many different flows. Other measurements reflect special properties of a set-up, which cannot be reproduced in other experiments. It may sometimes be useful to investigate a particular device (e.g. an atomizer) for an immediate practical purpose, but one should be cautious in transferring the results to different flows.
Typical examples of intermediate-asymptotic flows are shear flows, where the flow is homogeneous in the direction of mean velocity which depends only on the coordinate perpendicular to the mean flow. A well known example of shear flows occurs in smooth cylindrical circular pipes far from the entrance and outlet. Another example is the zero-pressure gradient boundary layer above a smooth flat plate far from its tip. In spite of their apparent simplicity experiments with such flows require high experimental culture and are expensive, and therefore relatively rare. When they are successful, like, for instance, the experiments of Nikuradze \[<sup>1</sup>\] with flows in smooth pipes in the range of Reynolds numbers up to $`3.2410^6`$, performed 70 years ago under the guidance of L.Prandtl, they become milestones in turbulence studies. They are used to check theories based on special hypotheses valid for special classes of flows. Not enough is known now about the solutions of Navier-Stokes equations to avoid such hypotheses.
The responsibility of experimentalists who perform such experiments and process the data is therefore very high. They are to be very careful in their conclusions. We showed, for instance \[<sup>2,3</sup>\] that the experiments of Princeton group (A.Smits, M.V.Zagarola) presented in the thesis of M.V.Zagarola \[<sup>4</sup>\], which attempted to increase the range of Reynolds numbers achieved by Nikuradze by an order of magnitude, had a flaw. Starting from $`Re=10^6`$, well below the upper boundary of Nikuradze experiments, their data were influenced by roughness โ insufficient polishing of the pipe walls.
In the present work we analyze the experiments with zero-pressure-gradient boundary layers presented in the thesis of J.M.รsterlund \[<sup>5</sup>\]. Like the earlier theses of M.V.Zagarola \[<sup>4</sup>\] and M.H.Hites \[<sup>6</sup>\], the thesis of J.M.รsterlund presents the results of long-time work of a group headed by a senior scientist (in this case A.V.Johansson), using a complicated, expensive and unique facility. The experimental data and their interpretation presented in this thesis might be accepted by some readers, as in the case of the thesis by Zagarola, without precautions. This was the motivation of our analysis of this thesis.
The number of runs (70 measurements of mean velocity profiles) reported in the thesis \[<sup>5</sup>\] is larger than in previously reported series, although the range of Reynolds numbers covered is not as wide; less than in the previous thesis of M.H.Hites \[<sup>6</sup>\]: $`2500<Re_\theta <27,500`$. In the thesis \[<sup>5</sup>\] the authors make very definite statements: they claim that their experiments confirm the classical two-layer theory, in particular the Reynolds number-independent universal logarithmic law, and, exhibiting no Reynolds number-dependence, disagree with the alternative theory based on the Reynolds number-dependent scaling law.
J.M.รsterlund presented the data of 70 mean velocity measurements on the Internet (www.mesh.kth.se/$``$jens/zpg/ ). We present here the results of the processing of all these data. We demonstrate that, properly processed, these data lead to the opposite conclusion: they confirm the Reynolds number-dependent scaling law and disagree with the conclusion of Reynolds number-independence.
## 2 Background
According to the classical two-layer theory of wall-bounded turbulent shear flows at large Reynolds numbers, the distribution of average velocity $`u`$ across the flow in the basic intermediate region adjacent to the viscous sublayer is represented in the form of universal (Reynolds number-independent) von Kรกrmรกn-Prandtl logarithmic law
$$\varphi =\frac{1}{\kappa }\mathrm{ln}\eta +C.$$
(2.1)
Here we use classical Nikuradze-Schlichting et al. notations:
$$\varphi =\frac{u}{u_{}},u_{}=\left(\frac{\tau }{\rho }\right)^{\frac{1}{2}},\eta =\frac{u_{}y}{\nu },$$
(2.2)
where $`\tau `$ is the shear stress at the wall, $`y`$ the distance from the wall; $`\nu `$ and $`\rho `$ are the fluidโs kinematic viscosity and density. In the thesis more modern notations are used: $`\overline{U}^+`$ instead of $`\varphi `$, $`y^+`$ instead of $`\eta `$. So, the von Kรกrmรกn-Prandtl universal law (2.1) is represented in the thesis in the form
$$\overline{U}^+=\frac{1}{\kappa }\mathrm{ln}(y^+)+B.$$
(2.3)
Here $`\kappa `$ (von Kรกrmรกn constant) and $`B`$, according to the logic of the derivation, should be universal constants identical in all high quality experiments. It is known from the literature however that various experiments give substantially different values of these constants. Nikuradze \[<sup>1</sup>\] determined $`\kappa =0.417`$, $`B=5.84`$, Monin and Yaglom \[<sup>7</sup>\] give the values $`\kappa =0.40`$, $`B=5.1`$; Schlichting \[<sup>8</sup>\] gives the values $`\kappa =0.40`$, $`B=5.5`$. The difference is substantial, and for many years doubts have accumulated on the validity of the universal logarithmic law.
In our papers (see e.g. \[<sup>9</sup>\],\[<sup>10</sup>\]) the derivation of the universal logarithmic law was reconsidered. It was shown that one of the basic assumptions is not quite correct, and on the basis of an alternative assumption, a different โscalingโ (power) law was proposed:
$$\overline{U}^+=(C_1\mathrm{ln}Re+C_2)(y^+)^{c/\mathrm{ln}Re}$$
(2.4)
where the constants $`C_1,C_2,c`$ should be universal and Reynolds number-independent. Comparison with the experimental data of Nikuradze has given the following values of the constants:
$$C_1=\frac{1}{\sqrt{3}},C_2=\frac{5}{2},c=\frac{3}{2},$$
(2.5)
so that the law (2.4) is presented in the form
$$\varphi =\left(\frac{\sqrt{3}+5\alpha }{2\alpha }\right)\eta ^\alpha ,\alpha =\frac{3}{2\mathrm{ln}Re}$$
(2.6)
or, using the notation of รsterlundโs thesis \[<sup>5</sup>\]
$$\overline{U}^+=\left(\frac{1}{\sqrt{3}}\mathrm{ln}Re+\frac{5}{2}\right)(y^+)^{\frac{3}{2\mathrm{ln}Re}}.$$
(2.7)
Asymptotically, at $`Re\mathrm{}`$, the specific choice of $`Re`$ is of no importance: $`Re`$ can be replaced by $`Re^{}=\text{Const}Re`$, and the asymptotic form of (2.7) will remain the same. However, for practical large, but not too large, values of $`Re`$, its actual expression is significant. It should be remembered that in our comparison with Nikuradzeโs data \[<sup>1</sup>\] we used his definition of the Reynolds number: $`Re=\overline{u}d/\nu `$, where $`\overline{u}`$ is the mean flow velocity (bulk discharge rate divided by cross-section area), and $`d`$ is the diameter of the pipe. Furthermore, it was recognized from the beginning \[<sup>11</sup>\] that the law (2.7) is asymptotic (in the parameter $`1/\mathrm{ln}Re`$). It should be valid at large $`Re`$, but at lesser values higher terms in the expansion of the coefficients could be significant.
The experimental data of the Stockholm group (J.M.รsterlund, A.V.Johansson) are presented in the form of graphs in the $`\mathrm{ln}y^+`$, $`\overline{U}^+`$ plane suggested by the universal logarithmic law (2.3) (see Figure 5 on page 43 and Figures 13, 14 on pages 152โ153 of the thesis \[<sup>5</sup>\]). They became available on the Internet (www.mesh.kth.se/$``$jens/zpg/ ) in complete digital form. The data are presented in our table, parameterized by the authors by the parameter $`Re_\theta =U\theta /\nu `$. Here $`U`$ is the free stream velocity, $`\theta `$ is the momentum thickness, a quantity measurable a posteriori, after the run is performed.
## 3 Processing of the mean velocity data
The first question to be answered was as follows: are the mean velocity data presented on the Internet compatible with some scaling law, not necessarily the law (2.7). Therefore the data were plotted in the double-logarithmic coordinates $`(\mathrm{lg}y^+,\mathrm{lg}\overline{U}^+)`$. The result was instructive: the data outside the viscous sublayer form a characteristic shape of a broken line (see Figures 1โ70). This shape is similar to the shape obtained for the experiments of the first group according to our classification \[<sup>12</sup>\] where there was no influence neither of the external turbulence of the free stream nor of roughness. The Stockholm authors place the lower boundary of the intermediate region at $`y^+=200`$. We found this value generally too high, and the standard value $`y^+=70รท100`$ seems to be more appropriate, however we marked the line $`y^+=200`$ on all Figures 1โ70.
Thus, all experiments revealed two straight lines forming a broken line in the $`\mathrm{lg}y^+,\mathrm{lg}U^+`$ plane. These straight lines correspond to the scaling laws
$$\text{(I)}\overline{U^+}=A(y^+)^\alpha $$
(3.1)
(in the intermediate region adjacent to the viscous sublayer), and
$$\text{(II)}\overline{U^+}=B(y^+)^\beta $$
(3.2)
(in the intermediate region adjacent to the free stream). The coefficients $`A,\alpha ,B,\beta `$ were obtained by standard statistical processing of data (see the table). The coefficients $`A,\alpha `$ of the straight line (3.1) representing the scaling law for the mean velocity distribution in the basic intermediate region adjacent to the viscous sublayer are obviously Reynolds-number-dependent: For us this was not unexpected, because previous processing of all available experimental data for a much wider range of Reynolds number led us to the same conclusion (see paper \[<sup>12</sup>\] which was known to the Stockholm group and referenced by them). Therefore we conclude that the validity of some Reynolds-number-dependent scaling law for mean velocity distribution is unquestionably confirmed by the experiments of the Stockholm group as well.
Note that because the Reynolds number range covered by the Stockholm group was not large, substantially less than the range covered by the other groups, in particular the Chicago group \[<sup>6</sup>\], there would be a danger that they would not notice the Reynolds-number-dependence because the governing parameter is $`\mathrm{ln}Re`$, not $`Re`$ itself. This is not the case: The $`Re`$-dependence of the Stockholm experimental data is sufficiently strong to be revealed by proper processing.
By the way, the authors could notice that their values $`\kappa =0.38`$ and $`\beta =4.1`$ are substantially less than those presented in the literature as standard. However, if the law is universal Reynolds-number-independent, these parameters should be identical for all experiments of sufficient quality!
The argument against the power law used by the authors (see paper \[<sup>13</sup>\] reproduced in the thesis) is the following. They introduce the โdiagnostic functionโ
$$\mathrm{\Gamma }=\frac{y^+}{\overline{U}^+}\frac{d\overline{U}^+}{dy^+}.$$
Their statement, โThe function $`\mathrm{\Gamma }`$ should be a constant in a region governed by a power lawโ is correct for a fixed Reynolds number. However, this is not true for the โdiagnostic function averaged for KTH dataโ, which is shown in their Figure 6.
We invite the reader to look at any of the Figures 1โ70. It is clear that for each run $`\mathrm{\Gamma }`$ is a constant โ look at the straight lines in the first intermediate region! However, this constant is different for different runs because the slopes of straight lines is $`Re`$-dependent! Indeed, the slope in the first region decays with growing Reynolds number. It is clear why $`\mathrm{\Gamma }`$ obtained by the authors is decreasing: the runs with larger Reynolds number and smaller slopes contribute more to larger $`y^+`$.
Now, when the validity of some Reynolds-number-dependent scaling law for the experiments of the Stockholm group is unquestionably established, we have to investigate whether this scaling law can be represented in the same form (2.7) obtained for flows in pipes. But what is Re? We cannot assume it arbitrarily to be equal to $`Re_\theta `$.
This effective Reynolds number $`Re`$ should have the form $`Re=U\mathrm{\Lambda }/\nu `$, where $`U`$ is the free stream velocity, $`\nu `$ the kinematic viscosity, and $`\mathrm{\Lambda }`$ a length scale which we cannot ร priori identify with the momentum thickness $`\theta `$, as there is no rationale for such identification. So, the basic question is whether one can find for each run such length scale $`\mathrm{\Lambda }`$ so that the scaling law (2.7) will be valid for the mean velocity distribution in the first intermediate region. A priori the very existence of such a length scale is under question, but if it does exist, this means that the law (2.7) is not specific to flows in pipes but can be a general law for wall-bounded shear flows at large Reynolds numbers.
To answer this question one has to take the values $`A`$ and $`\alpha `$ for each run, obtained by standard statistical processing of the experimental data in the first intermediate scaling region, and then calculate two values $`\mathrm{ln}Re_1`$ and $`\mathrm{ln}Re_2`$ by solving two equations suggested by the scaling law (2.7),
$$\frac{1}{\sqrt{3}}\mathrm{ln}Re_1+\frac{5}{2}=A,\qquad \frac{3}{2\mathrm{ln}Re_2}=\alpha .$$
(3.3)
If the values $`\mathrm{ln}Re_1`$ and $`\mathrm{ln}Re_2`$ obtained by solving the two different equations (3.3) are close, i.e. if they coincide within experimental accuracy, it will mean that the unique length scale $`\mathrm{\Lambda }`$ can be determined so that the experimental scaling law in the region (3.1), whose existence was proved before, coincides with the basic scaling law (2.7). The table shows that indeed these values are close: for all $`Re_\theta >10,000`$, the difference $`\mathrm{ln}Re_1-\mathrm{ln}Re_2`$ does not exceed 3%. This allows one to introduce for large Reynolds numbers the effective Reynolds number $`Re`$ according to the relation
$$\mathrm{ln}Re=\frac{1}{2}(\mathrm{ln}Re_1+\mathrm{ln}Re_2),\qquad \text{or}\qquad Re=\sqrt{Re_1Re_2},$$
(3.4)
i.e., the geometric mean of $`Re_1`$ and $`Re_2`$. This Reynolds number allows the definition of the effective length scale $`\mathrm{\Lambda }`$, which plays a similar role in the scaling law for the boundary layer flow as does the pipe diameter for flow in pipes. We remind the reader that the momentum thickness is calculated by integration of the velocity profile obtained experimentally: the calculation of the length scale on the basis of the measured velocity profile is not more complicated. Naturally the ratio of two length scales $`\theta /\mathrm{\Lambda }`$ is different for different runs: both these quantities depend upon the details of flows, in particular they can depend in principle upon the distance between the tip of the plate and the point of observation.
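The inversion of Eqs. (3.3)-(3.4) is a one-line computation per run; below is a sketch of our own in Python, with illustrative (not measured) values of $`A`$ and $`\alpha `$.

```python
# Recover the effective Reynolds number from the fitted coefficients of region (I).
import math

def effective_Re(A, alpha):
    ln_Re1 = math.sqrt(3.0) * (A - 2.5)       # from A = ln(Re1)/sqrt(3) + 5/2
    ln_Re2 = 1.5 / alpha                      # from alpha = 3/(2 ln Re2)
    Re = math.exp(0.5 * (ln_Re1 + ln_Re2))    # geometric mean, Eq. (3.4)
    return ln_Re1, ln_Re2, Re

ln1, ln2, Re = effective_Re(A=8.5, alpha=0.145)   # illustrative values only
print(f"ln Re1 = {ln1:.2f}, ln Re2 = {ln2:.2f}, Re = {Re:.3g}")
```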
## 4 Checking universality
The scaling law (2.7) can be reduced to a universal form
$$\psi =\frac{1}{\alpha }\mathrm{ln}\left(\frac{2\alpha \overline{U}^+}{\sqrt{3}+5\alpha }\right)=\mathrm{ln}y^+$$
(4.1)
where $`\alpha =\frac{3}{2\mathrm{ln}Re}`$. This formula gives another way to check the applicability of the Reynolds-number-dependent scaling law (2.7) in the intermediate region (3.1). Indeed, according to (4.1), in the coordinates $`\mathrm{ln}y^+,\psi `$, all experimental points should collapse onto the bisectrix of the first quadrant. Figure 71 shows that all data for large Reynolds numbers ($`Re_\theta >15,000`$, 24 runs) presented on the Internet collapse onto the bisectrix with accuracy sufficient to give an additional confirmation of the Reynolds-number-dependent scaling law (2.7). For lesser values of $`Re_\theta `$ a systematic parallel shift is observed (Figures 72-74). Apparently in this case the choice of $`Re`$ according to (3.4) is insufficient because at small Reynolds numbers the higher terms of the expansion could have some influence (see the paper by Radhakrishnan Srinivasan \[<sup>13</sup>\]).
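The mapping behind this collapse test is easily scripted; the sketch below (our own, in Python) applies it to placeholder points that obey (2.7) exactly, so they land on the bisectrix by construction.

```python
# Map a velocity profile (y+, U+) to (ln y+, psi) as in Eq. (4.1).
import numpy as np

def psi(u_plus, Re):
    alpha = 1.5 / np.log(Re)
    return np.log(2.0 * alpha * u_plus / (np.sqrt(3.0) + 5.0 * alpha)) / alpha

Re = 3.2e4                                         # assumed effective Re of a run
alpha = 1.5 / np.log(Re)
y_plus = np.array([100.0, 200.0, 400.0, 800.0])    # placeholder grid
u_plus = (np.sqrt(3.0) + 5.0*alpha) / (2.0*alpha) * y_plus**alpha  # obeys (2.7)

print(np.allclose(psi(u_plus, Re), np.log(y_plus)))  # -> True: on the bisectrix
```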
## 5 Conclusion
The thesis of J.Österlund contains the following statements:
1. (p.22 of the thesis), "The classical two layer theory was confirmed and constant values of the slope of the logarithmic overlap region (i.e. von Kármán constant) and additive constants were found and estimated to $`\kappa =0.38`$, $`B=4.1`$, and $`B_1=3.6`$."
2. (p.29 of the thesis), in fact the Introduction to the paper : "Contrary to the conclusions of some earlier publications, careful analysis of the data reveals no significant Reynolds number dependence for the parameters describing the overlap region using the classical logarithmic relation."
These statements are not correct. Our careful analysis, performed in the present work, of the experimental data presented in the thesis leads to the opposite conclusions.
> $`1^{\prime }.`$ The results contradict the classical two-layer theory. The estimates of the constants obtained by the authors are substantially different from the standard values, and this reason alone is enough to reject the assumption of universality, the cornerstone of the classical theory.
>
> $`2^{\prime }.`$ In full agreement with our earlier publications, careful analysis of the data reveals significant Reynolds number dependence for the parameters describing the "overlap" region and confirms the Reynolds-number-dependent scaling law.
The thesis of J.Österlund was not the first investigation of this kind; many similar experimental investigations were performed earlier covering a much larger range of Reynolds number. (In the thesis only one decade of $`Re_\theta `$ was covered: $`2,500<Re_\theta <27,000`$; in previous investigations, in particular in those reflected in the instructive review \[<sup>15</sup>\], the range $`1,000<Re_\theta <200,000`$ was covered). However, as we showed above, the accuracy of the experimental data is sufficient to reveal the Reynolds number dependence and the correspondence of their data to the Reynolds-number-dependent scaling law (2.7) proposed by us.
The experiments of Österlund et al. allow an additional confirmation of the separation of the basic part of the flow into two self-similar regions (I) and (II) governed by the laws (3.1) and (3.2). It is important that these two self-similar regions cover the whole boundary layer and not a small (1/6!) part of it where the universal law is expected by the authors to be valid. These experiments reveal a weak $`Re`$-number dependence of the parameter $`\beta `$ (Figure 75): it decreases with growing $`Re`$. The data are not sufficient to come to a final decision, but they are in approximate agreement with the correlation
$$\beta =\frac{2}{\mathrm{ln}Re}+0.01.$$
(5.1)
Acknowledgments. This work was supported in part by the Applied Mathematics subprogram of the U.S. Department of Energy under contract DE-AC03-76-SF00098, and in part by the National Science Foundation under grants DMS 94-16431 and DMS 97-32710.
References
1. Nikuradze, J. 1932. Zur turbulenten Strömung in glatten Rohren. VDI Forschungsheft, no. 356.
2. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. 1997. Scaling laws in fully developed turbulent pipe flow: discussion of experimental data. Proceedings of the National Academy of Sciences USA, vol. 94, pp.773-776.
3. Barenblatt, G.I. and Chorin, A.J. 1998. Scaling of the intermediate region of wall-bounded turbulence: The power law. Physics of Fluids, vol. 10, no. 4, pp.1043-1044.
4. Zagarola, M.V. 1996. Mean-flow scaling of turbulent pipe flow. Doctoral Thesis, Princeton University, Princeton, New Jersey.
5. Österlund, J.M. 1999. Experimental studies of zero pressure-gradient turbulent boundary layer flow. Doctoral Thesis, Royal Institute of Technology, Stockholm.
6. Nagib, H. 1997. Scaling of high Reynolds number turbulent boundary layers in the National Diagnostic Facility. Doctoral Thesis, Illinois Institute of Technology, Chicago.
7. Monin, A.S. and Yaglom, A.M. 1971. Statistical Fluid Mechanics, vol. 1, MIT Press, Boston.
8. Schlichting, H. 1968. Boundary Layer Theory, McGraw-Hill, New York.
9. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. 1997. Scaling laws in fully developed turbulent pipe flow. Applied Mechanics Reviews, vol. 50, no. 7, pp.413-429.
10. Chorin, A.J. 1998. New perspectives in turbulence. Quarterly of Applied Mathematics, vol. LVI, no. 4, pp.767-785.
11. Barenblatt, G.I. 1993. Scaling laws for fully developed turbulent shear flows. Part 1: Basic hypotheses and analysis. Journal of Fluid Mechanics, vol. 248, pp.513-520.
12. Barenblatt, G.I., Chorin, A.J., and Prostokishin, V.M. 1999. Self-similar intermediate structures in turbulent boundary layers at large Reynolds numbers. UC Berkeley CPAM preprint 775. Journal of Fluid Mechanics, in press.
13. Srinivasan, Radhakrishnan. 1998. The importance of higher-order effects in the Barenblatt-Chorin theory of wall-bounded fully developed turbulent shear flows. Physics of Fluids, vol. 10, no. 4, pp.1037-1039.
14. Österlund, J.M., Johansson, A.V., Nagib, H.M. and Hites, M.H. 2000. A note on the overlap region in turbulent boundary layers. Physics of Fluids, vol. 12, no. 1, pp.1-4.
15. Fernholz, H.H. and Finley, P.J. 1996. The incompressible zero-pressure-gradient turbulent boundary layer: an assessment of the data. Progr. Aerospace Sci., vol. 32, pp.245-311.
# Linear and Nonlinear Experimental Regimes of Stochastic Resonance
## I Introduction.
The renewed interest of the last two decades in stochastic processes modeling different phenomena of physics, chemistry and engineering sciences has led to the discovery of noise-induced phenomena in nonlinear systems away from equilibrium. In these systems a variation of the level of external noise can qualitatively change the response of the system. The paradigmatic example of these noise-induced phenomena is stochastic resonance (for a recent review see ). Other examples of noise-induced phenomena comprise resonant activation, noise-induced transitions and noise enhanced stability.
Stochastic resonance (SR) manifests itself as an enhancement of the system response for certain finite values of the noise strength. In particular the signal-to-noise ratio (SNR) shows a maximum as a function of the noise intensity. In other words, a statistical synchronization of the random transitions between the two metastable states of the nonlinear system takes place in the presence of an external weak periodic force and noise. Such a physical system presents a time-scale matching condition, which can be observed by tuning the noise level to such a value that the period of the driving force approximately equals twice the noise-induced escape time. The SR phenomenon appears in a large variety of physical systems and has been observed in different systems, ranging from sets of neurons, to lasers and to solid-state devices, like SQUIDs and tunnel diodes.
SR is a well investigated phenomenon both from a theoretical and an experimental point of view. However, few studies systematically analyze the SR phenomenon for different values of the frequency and amplitude of the modulating signal. In this article we systematically study the SR phenomenon as a function of the frequency and amplitude of the modulating signal in a physical bistable system based on a tunnel diode. Our experimental set-up allows us to investigate this phenomenon in a range of amplitude and frequency spanning several orders of magnitude. By varying the amplitude and the frequency of the modulating signal we detect both the regime of the SR phenomenon described by the linear response theory and the nonlinear deviation from it. In the linear regime we observe the customary behavior of stochastic resonance, whereas in the nonlinear regime we detect a saturation of the power spectral density measured at the frequency of the modulating signal and a depletion of the power spectral density of the noise at the same frequency. When the noise depletion takes place we observe phase and frequency synchronization between the stochastic output and the deterministic input.
The paper is organized as follows. In section II we describe the experimental apparatus and we discuss the stochastic differential equation associated with the electronic circuit based on a tunnel diode. In section III we present our experimental results for the power amplification and SNR as a function of the noise intensity for different values of the amplitude and frequency of the modulating signal. In this section we discuss the detected deviations from the predictions of the linear response theory and we present evidence of phase and frequency synchronization detected in the nonlinear regime of high values of the amplitude of the modulating signal. In section IV we briefly draw our conclusions.
## II Experimental apparatus and the tunnel diode
The experimental setup used for investigating the SR phenomenon is a bistable electronic system based on a tunnel diode. The physical system is a series of a resistor (tunable to a desired value) and a tunnel diode in parallel to a capacitor. The tunnel diode is a highly doped semiconductor device with a typical current-voltage characteristic showing a region of negative differential resistance, which is due to a tunneling current from the valence band of the n-doped region to the conducting band of the p-doped region. There are two stable states and one unstable state. For details about this experimental set-up see ref. .
A network of general purpose very low-noise wide band operational amplifiers is used to sum the driving periodic signal and the noise signal. The noise signal is the output of a commercial digital noise source, whose spectral density is approximately flat up to 20 MHz and whose root mean square voltage $`V_{rms}`$ may be selected within the range from $`0.133`$ to $`5.5`$ V with a 27 mV resolution. At the output of the operational amplifier the statistical properties of the noise are altered by the filtering of the operational amplifier. We measure the noise $`v_n(t)`$ at the output of the operational amplifier and we observe that it is a Gaussian noise characterized by a spectral density which is flat at low frequency ($`f<1.25`$ MHz). By defining the correlation time of the Gaussian noise as the time at which the normalized autocorrelation function assumes the value $`1/e`$, we measure $`\tau _n=120`$ ns. In our measurements we vary the amplitude and the frequency of the driving periodic signal $`v_s(t)=V_s\mathrm{cos}(2\pi f_st)`$. Specifically, we vary the amplitude $`V_s`$ from $`0.0067`$ to $`1.00`$ V, and the frequency $`f_s`$ from $`1`$ Hz to $`1`$ MHz. The output voltage across the diode $`v_d`$ is detected by a digital oscilloscope and transferred to a PC. A typical time series has 4096 records. The digitized time series are analyzed on-line by using a fast Fourier transform (FFT) routine. The result of the FFT routine provides the power spectral density of the $`v_d`$ time signal.
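A minimal sketch of this on-line analysis: estimating the power spectral density of a digitized record with an FFT and integrating it around $`f_s`$. The synthetic record, sampling rate and peak width below are illustrative assumptions, not the actual experimental settings beyond the values quoted in the text:

```python
import numpy as np

# Synthetic stand-in for a digitized v_d record: 4096 samples of a noisy
# periodic signal at f_s (illustrative amplitudes and sampling rate).
f_s, f_sample, n = 10.0, 4096.0, 4096
t = np.arange(n) / f_sample
v_d = 0.1 * np.cos(2 * np.pi * f_s * t) + 0.05 * np.random.randn(n)

# One-sided power spectral density estimate (windowed periodogram).
w = np.hanning(n)
spectrum = np.fft.rfft(v_d * w)
psd = np.abs(spectrum) ** 2 / (f_sample * np.sum(w ** 2))
freqs = np.fft.rfftfreq(n, d=1.0 / f_sample)

# P_1: integrate the PSD over the delta-like peak at f_s (peak bin and
# its immediate neighbours).
k = np.argmin(np.abs(freqs - f_s))
P1 = psd[k - 1:k + 2].sum() * (freqs[1] - freqs[0])
print(f"P_1 around f_s = {f_s} Hz: {P1:.4g}")
```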
We model our electronic circuit by writing down its differential equation. This equation is
$$\frac{dv_d}{dt}=-\frac{dU(v_d,t)}{dv_d}+\frac{1}{RC}v_n(t),$$
(1)
which is formally equivalent to a stochastic differential equation describing the position of an overdamped random particle moving in a generalized potential. In this equation $`R`$ is the biasing resistor, $`C`$ is the parallel capacitor (in our case 45 pF) of the circuit and $`v_n(t)`$ is a noise voltage mimicking the presence of a finite temperature in the corresponding overdamped system with a physical particle. The generalized potential $`U(v_d,t)`$ associated to our physical system is
$$U(v_d,t)=-\frac{V_bv_d}{RC}+\frac{v_d^2}{2RC}+\frac{1}{C}\int _0^{v_d}I(v)dv-\frac{V_sv_d}{RC}\mathrm{sin}(\omega _st),$$
(2)
where $`V_b`$ is the biasing voltage of the electronic bistable network.
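To make the dynamics of Eq. (1) concrete, the following sketch integrates a dimensionless overdamped double-well stand-in with an Euler-Maruyama scheme; the quartic potential, drive and noise parameters are illustrative assumptions and do not reproduce the measured diode characteristic $`I(v)`$, which would enter through $`dU/dv_d`$:

```python
import numpy as np

# Euler-Maruyama integration of an overdamped particle in a double-well
# potential U0(v) = v**4/4 - v**2/2 with a weak periodic drive; a
# schematic, dimensionless stand-in for Eq. (1).
A, f_s, D = 0.2, 0.01, 0.12   # drive amplitude, frequency, noise (arb. units)
dt, n_steps = 1e-2, 200_000
rng = np.random.default_rng(1)

v = np.empty(n_steps)
v[0] = -1.0                    # start in the left well
for i in range(1, n_steps):
    t = i * dt
    drift = v[i - 1] - v[i - 1] ** 3 + A * np.cos(2 * np.pi * f_s * t)
    v[i] = (v[i - 1] + drift * dt
            + np.sqrt(2 * D * dt) * rng.standard_normal())
```

The resulting trajectory switches randomly between the two wells, and its switching statistics can be analyzed exactly as for the measured $`v_d(t)`$.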
Our circuit presents two control parameters affecting the shape of the associated generalized potential. They are the biasing resistance $`R`$ and the biasing voltage $`V_b`$. We control both of them independently. We perform our experiments by ensuring a symmetric escape from one potential well to the other. This is done by selecting the values of the two control parameters (R=770 $`\mathrm{\Omega }`$ and $`V_b=6.76`$ V) in such a way that we do not detect power spectral density of the output voltage $`v_d`$ above the noise level at even harmonics of the frequency of the modulating signal. We also verify that for this choice of control parameters the experimentally measured residence times have approximately the same value in the two potential wells.
The aim of this study is to investigate the SR phenomenon over a wide interval of the frequency $`f_s`$ of the modulating signal. To perform such a task, we have to overcome two conflicting experimental constraints: (i) we are forced to set the time constant of our system $`\tau \equiv RC`$ to a low value satisfying the inequality $`f_s\ll 1/(2\pi \tau )`$ and (ii) we need to use a high value of $`\tau `$ to maintain the ratio $`\tau _n/\tau `$ as low as possible to conduct our experiments in the "white noise" limit of $`v_n(t)`$. The best compromise we find is to set $`\tau \equiv RC=34.6`$ ns. With this choice $`1/(2\pi \tau )\simeq 4f_s^{max}`$ and $`\tau _n/\tau \simeq 3.47`$. In other words we guarantee the investigation of the SR phenomenon over a rather wide range of $`f_s`$ by performing our experiments in a regime of moderately colored noise.
## III Stochastic Resonance for different values of frequency and amplitude of the modulating signal
A bistable system based on a tunnel diode provides a versatile physical system in which stochastic resonance can be investigated. In this study we perform an investigation of the SR phenomenon as a function of a wide range of amplitudes and frequencies of the modulating signal. The first investigation concerns the power $`P_1`$ of the output signal $`v_d(t)`$ localized around the frequency of the modulating signal. This quantity is obtained by integrating the spectral density $`S(\omega )`$ over the delta-like peak observed at angular frequency $`2\pi f_s`$. The signal "power" $`P_1`$ used in the theory of linear signal processing, obtained by integrating the spectral density over the $`\delta `$ peaks at $`f_s`$ and $`-f_s`$, is
$$P_1=4\pi |M_1|^2,$$
(3)
here $`|M_1|`$ is the magnitude of the coefficient of the Fourier series $`<v_d(t)>=\sum _{n=-\mathrm{\infty }}^{+\mathrm{\infty }}M_n\mathrm{exp}(in2\pi f_st)`$ taken at the frequency of the modulating signal. The linear response theory for stochastic resonance predicts that $`|M_1|`$ is proportional to $`V_s`$ at a fixed value of $`V_n`$. We test this dependence over a large interval of values of the signal amplitude $`V_s`$. In Fig. 1 we show the measured values of $`|M_1|`$ as a function of $`V_s`$ varying in the interval from 0.0067 to 1.00 V. The measurements are done by setting $`f_s=10`$ Hz and $`V_{rms}=1.89`$ V. From the figure it is evident that the prediction $`|M_1|\propto V_s`$ obtained by using the linear response theory is valid only within the amplitude interval $`0.017<V_s<0.067`$ V. The deviation observed for the lowest investigated value of the modulating amplitude signal ($`V_s=0.0067`$ V) is probably due to experimental detection problems related with the low value of the signal, whereas the deviation observed when $`V_s>0.067`$ V is entirely ascribed to a deviation of the physical system from the behavior predicted by the linear response theory. In particular a saturation of the power localized at $`f_s`$ is detected for large values of the amplitude of the modulating signal.
The second investigation concerns the frequency dependence of $`|M_1|`$. Within the framework of the linear response theory, at fixed values of $`V_n`$, $`|M_1|`$ is related to $`V_s`$, $`f_s`$ and $`V_n`$ through the relation
$$|M_1|\propto <v_d^2>\frac{V_s}{V_{rms}^2}\frac{\lambda _{min}}{(\lambda _{min}^2+(2\pi f_s)^2)^{1/2}},$$
(4)
where $`<v_d^2>`$ denotes a stationary mean value of the unperturbed system and $`\lambda _{min}`$ is the smallest non-vanishing eigenvalue of the Fokker-Planck operator of the system without periodic driving . This quantity is an exponential function of the noise amplitude $`V_{rms}`$ under the hypothesis of white noise. We set $`V_s`$=0.067 V to ensure that we are in a region of parameters where the linear response theory may apply and we perform our experiments as a function of $`f_s`$ for various values of $`V_{rms}`$. In Fig. 2 we show the results obtained. The general trend predicted by Eq. (4) is observed. When $`f_s\ll \lambda _{min}/2\pi `$ a constant value of $`|M_1|`$ is detected, whereas in the opposite regime the value of $`|M_1|`$ decreases as a function of the frequency. Concerning the functional form of $`|M_1|`$, we observe that $`|M_1|\propto f_s^{-1.3}`$. This is close to but not coincident with the behavior expected from the linear response theory, $`|M_1|\propto f_s^{-1}`$. The observed deviation might be ascribed to one or more of the following possibilities: (i) a distortion introduced by the noise background present in our measurements; (ii) an additional frequency dependence which is present through the term $`<v_d^2>`$ of Eq. (4); and (iii) the colored nature of the noise $`v_n(t)`$.
One key aspect of the SR phenomenon is the statistical synchronization that takes place when the Kramers time $`T_K(V_{rms})`$ between two successive noise-induced inter-well transitions is of the order of half the period of the periodic forcing. In other words statistical synchronization occurs when $`f_s=1/[2T_K(V_{rms})]`$. In our measurements, we verify the validity of this description with the following procedure. We set $`V_s=0.067`$ V and we measure the SNR of the output signal $`v_d(t)`$ at the frequency of the modulating signal for six values of $`f_s`$ ranging from 1 to $`10^5`$ Hz. We use these experimental results to single out for which value of $`V_{rms}\equiv V_{rms}^{*}(f_s)`$ a maximum of the SNR is detected. This is of course the state of maximal statistical noise-induced synchronization. We then compare $`V_{rms}^{*}(f_s)`$ with the function $`y(V_{rms})=r_K(V_{rms})/2`$, where the Kramers rate $`r_K(V_{rms})`$ is measured in the absence of a modulating signal. The results are shown in Fig. 3. From the figure it is clear that statistical synchronization is observed for all the investigated frequencies, supporting the traditional interpretation of the stochastic resonance mechanism over a frequency range of the modulating signal spanning six orders of magnitude. It is worth pointing out that synchronization is observed in spite of the fact that our experiments are performed in a regime of moderately colored noise.
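The matching condition $`f_s=r_K/2`$ can be checked numerically on a two-state trajectory such as the one produced by the earlier Langevin sketch with the drive switched off; a minimal sketch, where the hysteresis band, the synthetic telegraph signal and all numerical values are illustrative assumptions:

```python
import numpy as np

def kramers_rate(v, dt, threshold=0.0, hysteresis=0.5):
    """Estimate the inter-well transition rate of a two-state trajectory
    by counting switching events; the hysteresis band prevents intra-well
    noise from being counted as transitions."""
    state = 1 if v[0] > threshold else -1
    n_switch = 0
    for x in v:
        if state < 0 and x > threshold + hysteresis:
            state, n_switch = 1, n_switch + 1
        elif state > 0 and x < threshold - hysteresis:
            state, n_switch = -1, n_switch + 1
    return n_switch / (len(v) * dt)

# Demonstration on a synthetic random-telegraph trajectory:
rng = np.random.default_rng(2)
dt = 1e-2
jumps = rng.random(100_000) < 1e-3        # about 0.1 switches per unit time
v = np.where(np.cumsum(jumps) % 2 == 0, -1.0, 1.0)
v = v + 0.2 * rng.standard_normal(v.size)
r_K = kramers_rate(v, dt)
print(f"r_K = {r_K:.3f}; maximal synchronization expected near f_s = {r_K / 2:.3f}")
```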
We now investigate the SR phenomenon by studying both the signal power amplification
$$\eta =\frac{P_1}{P_{in}}=4\left[\frac{|M_1|}{V_s}\right]^2$$
(5)
and the signal to noise ratio
$$SNR=10\mathrm{log}_{10}\left[\frac{P_1}{N_1}\right].$$
(6)
The $`SNR`$ is customarily given in dB and is obtained by dividing the output signal power level $`P_1`$ by the noise level $`N_1`$. Both quantities are measured at the frequency of the modulating signal.
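Once $`P_1`$ and $`N_1`$ have been extracted from the power spectral density at $`f_s`$ (as in the FFT sketch above), Eqs. (5) and (6) reduce to a few lines; the numbers in the example call are placeholders:

```python
import math

def power_amplification(M1_abs, V_s):
    # Eq. (5): eta = 4 * (|M_1| / V_s)**2.
    return 4.0 * (M1_abs / V_s) ** 2

def snr_db(P1, N1):
    # Eq. (6): SNR in dB at the modulating frequency.
    return 10.0 * math.log10(P1 / N1)

print(power_amplification(0.01, 0.067), snr_db(1e-4, 1e-6))
```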
We investigate both the effect of varying the amplitude and the frequency of the modulating signal on $`\eta `$ and $`SNR`$. We first consider the role of the frequency of the modulating signal. Specifically we investigate the SR phenomenon as a function of $`V_{rms}`$ by keeping $`V_s`$ constant (we choose $`V_s=0.067`$ V) and by varying $`f_s`$ from 1 to $`10^6`$ Hz. For each pair of the control parameters $`V_s`$ and $`f_s`$, we vary $`V_{rms}`$ from 0.67 to 5.33 V. The measured values of the power amplification $`\eta `$ are collected in Fig. 4, where we show $`\eta `$ as a function of $`V_{rms}`$ for 7 different values of $`f_s`$, which are 1, 10, 100, $`10^3`$, $`10^4`$, $`10^5`$ and $`10^6`$ Hz. The classical profile of the SR phenomenon is observed for the lowest values of $`f_s`$. For higher values of $`f_s`$, $`\eta `$ deviates from the canonical SR profile by lowering and broadening its maximum. These results are in qualitative agreement with the explicit results theoretically obtained for the signal power amplification in a model bistable system.
The next investigation of the power amplification $`\eta `$ concerns the study performed by keeping $`f_s`$ constant while $`V_s`$ is varied. We set $`f_s=10`$ Hz and we vary $`V_s`$ from 0.0067 to 1.00 V. For all the selected values of $`V_s`$ we check that the amplitude of the modulating signal alone is not sufficient to induce deterministic jumps between the two wells. In Fig. 5 we show the experimental values of $`\eta `$ obtained for 8 different values of $`V_s`$. The selected values are 0.0067, 0.017, 0.033, 0.067, 0.167, 0.333, 0.667 and 1.00 V. In the figure the top line corresponds to $`V_s=0.0067`$ V whereas the bottom line refers to the value $`V_s=1.00`$ V. The profile of $`\eta `$ becomes progressively sharper around the value $`V_{rms}\simeq 1.5`$ V for decreasing values of $`V_s`$. This experimental finding is in qualitative agreement with the results of theoretical calculations of Ref. . In the figure, the curves measured for the smaller values of $`V_s`$ tend to collapse into a unique curve, which the theory indicates as the limit behavior predicted by the linear response theory for negligible values of the amplitude of the modulating signal. On the other hand, a difference is detected when one considers the very lowest values of $`V_s`$. In these cases we experimentally detect a deviation from the expected limit curve. These deviations are observed in our experiment because for these values of $`V_s`$ (0.0067, 0.017 and 0.033 V) the signal $`P_1`$ becomes of the same level as the noise level $`N_1`$ and is therefore indistinguishable from it. One can quickly verify the above statements by inspecting Fig. 6, where we present the $`SNR`$ measured under the same conditions as Fig. 5. In Fig. 6 the bottom curve refers to the case $`V_s=0.0067`$ V whereas the top curve is obtained by setting $`V_s=1.00`$ V. From the figure it is evident that for $`V_s`$ equal to 0.0067, 0.017 and 0.033 V, the $`SNR`$ becomes zero within the experimental errors for a wide range of values of $`V_{rms}`$. This effect is reflected in the deviation of $`\eta `$ from the limit curve observed in Fig. 5.
A simultaneous inspection of Figs. 5 and 6 shows that the results obtained with the highest values of $`V_s`$ are associated with high values of the $`SNR`$ but are at the same time seriously affected by nonlinear distortion. This nonlinear distortion manifests itself in the broadening of the SNR curve. In other words, by using high values of $`V_s`$ it is possible to detect a wide interval of $`V_{rms}`$ where the SR phenomenon occurs; however, this interval is not well described in terms of linear response theory. On the other hand, by using low values of $`V_s`$ one experimentally observes SR over a more limited interval of $`V_{rms}`$, but within this interval the experimental results are well described by a linear response theory. Hence from an experimental point of view the most straightforward investigation of the SR phenomenon requires the selection of a value of the amplitude of the modulating signal which allows the detection of a large but undistorted signal. In our case this condition is attained when $`V_s\simeq 0.067`$ V.
We also investigate the detailed behavior of the output noise power level $`N_1`$ at the frequency of the modulating signal. In Fig. 7 we show $`N_1`$ as a function of $`V_{rms}`$ for several values of $`V_s`$ ranging from $`0.0067`$ to $`1.00`$ V. In this investigation $`f_s`$ is kept constant at the value of 10 Hz. We observe that the noise power $`N_1`$ slightly decreases in an interval of values of $`V_{rms}`$ by increasing $`V_s`$. The noise level $`N_1`$ sharply increases at the onset of the SR phenomenon, reaches a maximum and then decreases. Depending on the value of $`V_s`$, the noise level may decrease monotonically to the asymptotic value observed for high values of $`V_{rms}`$ or reach a minimum value and then increase until reaching the same asymptotic value. In other words, we detect a dip in the noise level at a finite value of $`V_{rms}`$ for high values of $`V_s`$. The dip is shown in the inset of Fig. 7 for the measurements done by setting $`f_s=10`$ Hz. The noise dip is more pronounced for high values of the signal amplitude. A similar behavior is also observed for values of $`f_s`$ satisfying the condition $`f_s<1`$ kHz. By taking into account the results previously obtained concerning the deviation from the behavior predicted in terms of linear response theory, we conclude that this behavior belongs to the nonlinear response regime of stochastic resonance. In this regime, sometimes called the weak-noise limit, the amplitude of the periodic signal can not be regarded as weak with respect to the noise intensity ($`V_s\sim V_{rms}`$) and the linear-response theory or the perturbation theory is no longer valid.
In the nonlinear response regime we experimentally detect the phenomenon of phase and frequency locking. Specifically, by increasing the value of the amplitude of the modulating signal one observes jumps between the two stable states occurring at phases which are progressively more synchronized with the phase of the modulating signal. Moreover a locking between the period of the output signal and the period of the input signal is also observed for given values of the noise amplitude. We refer to this last phenomenon as frequency locking. An example of phase and frequency locking is shown in Fig. 8 where we show the digitized time series of $`v_d(t)`$ and $`v_s(t)`$ recorded by setting $`f_s=10`$ Hz, $`V_s=0.667`$ V and $`V_{rms}=1.33,2.00`$ and $`4.67`$ V for the top, upper-middle and lower-middle time series of Fig. 8. The bottom time series of Fig. 8 shows the time evolution of the modulating signal. All the time series are digitized synchronously. By inspecting Fig. 8 one notes that for low levels of noise amplitude ($`V_{rms}=1.33`$ V, top time series) the system jumps randomly from one state to the other but the jumps are statistically synchronized in phase. In fact they occur preferentially at times $`t=nT/2`$, where $`T`$ is the period of the modulating signal and $`n`$ is an integer. When the noise amplitude is increased ($`V_{rms}=2.00`$ V, upper-middle time series) we still observe a phase synchronization, but in this case jumps occur preferentially at times $`t=nT/2+T/4`$. Moreover for this value of the noise amplitude jumps occur with probability almost one in each period. This means that in addition to the phase synchronization we also observe frequency synchronization. It is worth pointing out that the noise amplitude at which we observe phase and frequency synchronization always falls in the region of the dip observed in the noise level of Fig. 7. In other words the dip of the noise may be interpreted as a manifestation of the fact that phase and frequency locking are simultaneously present. By increasing the noise amplitude ($`V_{rms}=4.67`$ V, lower-middle time series) jumps become very frequent inside a single period of the modulating signal so that phase and frequency synchronization is progressively lost. A similar effect has been theoretically considered in the literature recently.
## IV Conclusions
We report an experimental study of stochastic resonance in a physical system. Our physical system, which is characterized by versatility and high stability, allows us to investigate with high precision the SR phenomenon in a wide range of parameters, such as the frequency and the amplitude of the modulating signal and the noise amplitude. In the experiments presented here, the frequency range spans up to seven orders of magnitude whereas the amplitude range spans more than two orders of magnitude.
Theoretical and experimental investigations have mainly focused on the linear regime of SR. However, for a complete description of the SR phenomenon it is also important to investigate the nonlinear response regime of SR. We experimentally investigate the degree of consistency of our experimental results with the results expected in terms of the linear response theory. We find a range of experimental parameters within which the linear response theory describes quite well the investigated dynamics. However, outside these intervals, nonlinear deviations from the predictions of the linear response theory are clearly detected. These deviations primarily manifest themselves (i) in a saturation of the output power spectral density signal and of the signal amplification and (ii) in a non-monotonic behavior of the output noise level associated with a high degree of phase and frequency synchronization.
We wish to thank ASI, INFM and MURST for financial support.
Single and Paired Point Defects in a 2D Wigner Crystal
## Acknowledgments
PP acknowledges stimulating discussions with Antonio Castro-Neto and partial funding from DMR98-12422. LC acknowledges useful discussions and computational help with Dr. G. Bauer and Dr. B. Militzer and support by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP). DC is supported by DMR98-02373. Computations were performed at the National Computational Science Alliance (NCSA).
Resonances in Fock Space: Optimization of a SASER device
## Abstract
We model the Fock space for the electronic resonant tunneling through a double barrier including the coherent effects of the electron-phonon interaction. The geometry is optimized to achieve the maximal optical phonon emission required by a SASER (ultrasound emitter) device. PACS numbers: 73.20.Dx, 73.40.Gk, 73.50.Rb
The possibility of generating coherent phonons in a double barrier semiconductor heterostructure was first proposed a few years ago. This is the basis of a SASER device, which transforms the electric potential energy into a single vibrational mode of the lattice. This is facilitated by the electronic confinement in a double barrier structure. The phonon emission appears when the energy of the resonant state is one quantum $`\hbar \omega _0`$ (the LO phonon energy) below the energy of the incoming electrons. As in laser devices this is enhanced if the first excited state of the well lies below the Fermi energy and becomes overpopulated. According to ref. the emitted LO phonons decay coherently into a pair of LO and TA phonons, the latter being the useful ones in a SASER device.
In this paper we want to explore the case in which the well's ground state mediates the decay of the emitter states into the collector's ones plus a phonon. This feature represents a resonance in the electron-phonon Fock space and is observed as a satellite peak in the current. This resonant condition is tuned directly by the applied voltage and we expect that its optimization could also provide enough emission of primary phonons to allow for SASER operation. We carry out the modeling of the electronic structure and the electron-phonon interaction to get a minimal structure in the Fock space. Thus, the optimization of the phonon emission for different geometries of the device (height and width of the barriers, field intensity) can be discussed in simple terms.
We consider a one-dimensional model for a double barrier including the interaction with LO phonons in the well, neglecting the effects derived from the accumulated charge. This will give results comparable to the 3-D case when $`\epsilon _F`$ is small, thus limiting the number of transverse modes; or in the presence of a high magnetic field perpendicular to the plane of the barriers, which quantizes these modes into Landau levels. We do not consider the phonon-phonon interaction that leads to the decay of the LO phonons.
The Hamiltonian is a sum of an electronic contribution, a phonon contribution and an electron-phonon interaction term.
$$\mathcal{H}=\mathcal{H}_e+\mathcal{H}_p+\mathcal{H}_{ep}$$

$$\mathcal{H}_e=\underset{j}{\sum }E_jc_j^+c_j-\underset{j,k}{\sum }V_{j,k}(c_j^+c_k+c_k^+c_j),$$

$$\mathcal{H}_p=\hbar \omega _0\underset{k\in [\mathrm{well}]}{\sum }b_k^+b_k,\quad \mathrm{and}\quad \mathcal{H}_{ep}=V_g\underset{k\in [\mathrm{well}]}{\sum }c_k^+c_k(b_k^++b_k).$$
where $`c_j^+`$ and $`c_j`$ are electron operators on site $`j`$, $`E_j`$ is the site dependent diagonal energy and $`V_{j,k}=V\delta _{j\pm 1,k}`$ are the hopping parameters. We assume that the potential drop $`eV`$ is linear through the double barrier and limited to it. $`N_L`$ and $`N_R`$ are the number of sites in the left and right barriers and $`N_w`$ are those in the well; the associated lengths are $`L_i=N_i\times 2.825`$ Å. There is a single well state in the energy range of interest.
Since the most important interaction between electrons and phonons in polar semiconductors involves longitudinal optical (LO) phonons, only one phonon mode with frequency $`\omega _0`$ is considered. The electron-phonon interaction is limited to the well region and the coupling to the phonons is denoted with $`V_g`$. The model is represented schematically in figure 1.
For simplicity we restrict the problem to the case in which we have either $`0`$ or $`1`$ phonons, with no phonons in the well before the scattering process. By modifying $`V_g\rightarrow V_g\sqrt{n+1}`$ this also represents a finite temperature emission $`n\rightarrow n+1`$. The effective mass is taken to be $`0.067`$ $`m_e`$, the LO phonon energy $`\hbar \omega _0=36`$ meV and the value of the hopping parameter $`V=7.1018`$ eV. $`V_g`$ is taken to be $`10`$ meV, which gives a typical electron-phonon interaction strength $`g=(V_g/\hbar \omega _0)^2\simeq 0.1`$. The barrier heights are $`300`$ meV and the Fermi energy $`\epsilon _F`$ is taken between $`10`$ and $`20`$ meV.
This discrete model is solved exactly using a decimation procedure for the sites in the barriers and the well. The leads are taken into account by adding a proper self-energy. The transmittances are computed from the Green's functions for the system.
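A schematic sketch of this calculation follows: not the decimation algorithm itself, but a direct Green's-function evaluation of the same quantities on the truncated (0- and 1-phonon) Fock space. The lead self-energy formula, the energy reference (lead band centre at zero) and the example geometry are our own simplifying assumptions; parameter values follow the text:

```python
import numpy as np

hbar_w0 = 0.036    # LO phonon quantum [eV]
V = 7.1018         # hopping parameter [eV]
Vg = 0.010         # electron-phonon coupling [eV]

def lead_self_energy(E):
    # Retarded surface self-energy of a semi-infinite 1D tight-binding
    # lead (hopping V, band centre at E = 0); valid inside the band.
    return 0.5 * (E - 1j * np.sqrt(4 * V**2 - E**2 + 0j))

def transmissions(E, onsite, well):
    """T_00 and T_10 for site energies `onsite` (electron incoming from
    the left lead; 0- and 1-phonon outgoing channels on the right)."""
    n = len(onsite)
    H = np.zeros((2 * n, 2 * n), dtype=complex)
    for p in (0, 1):                          # phonon sectors 0 and 1
        s = p * n
        H[s:s+n, s:s+n] = np.diag(onsite + p * hbar_w0)
        for j in range(n - 1):
            H[s+j, s+j+1] = H[s+j+1, s+j] = -V
    for k in well:                            # phonon emission/absorption
        H[k, n + k] = H[n + k, k] = Vg
    sig0 = lead_self_energy(E)                # elastic channel
    sig1 = lead_self_energy(E - hbar_w0)      # one-phonon channel
    Heff = H.copy()
    Heff[0, 0] += sig0;        Heff[n - 1, n - 1] += sig0
    Heff[n, n] += sig1;        Heff[2*n - 1, 2*n - 1] += sig1
    G = np.linalg.inv(E * np.eye(2 * n) - Heff)
    gam = lambda s: -2.0 * s.imag
    T00 = gam(sig0) * gam(sig0) * abs(G[n - 1, 0]) ** 2
    T10 = gam(sig0) * gam(sig1) * abs(G[2 * n - 1, 0]) ** 2
    return T00, T10

# Example: two 2-site barriers (0.3 eV) enclosing a 3-site well, probed
# near the lowest well resonance (all values illustrative):
onsite = np.zeros(7); onsite[[0, 1, 5, 6]] = 0.3
print(transmissions(-np.sqrt(2) * V + 0.01, onsite, well=[2, 3, 4]))
```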
Let us denote with $`T_{0,0}^{RL}`$ ($`T_{0,0}^{LR}`$) and $`T_{1,0}^{RL}`$ ($`T_{1,0}^{LR}`$) the transmission coefficients from left (right) to right (left), where the subscripts $`0`$ and $`1`$ denote the number of phonons in the outgoing (first subscript) and in the incoming (second subscript) channel. The total current is a sum of an elastic current $`I_{\mathrm{el}}`$ and an inelastic current $`I_{\mathrm{in}}`$ (with the emission of one phonon during the scattering process). These currents can be calculated from the following expressions
$`I_{el}`$ $`=`$ $`(2e/h){\displaystyle \int }[T_{0,0}^{RL}f_L(\epsilon )-T_{0,0}^{LR}f_R(\epsilon )]d\epsilon ,`$
$`I_{in}`$ $`=`$ $`(2e/h){\displaystyle \int }[T_{1,0}^{RL}f_L(\epsilon )-T_{1,0}^{LR}f_R(\epsilon )]d\epsilon ;`$
where $`f_L(\epsilon )`$ and $`f_R(\epsilon )`$ are the Fermi functions for the left and right leads.
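With transmission coefficients such as those returned by the sketch above, the current integrals reduce to a quadrature; a minimal sketch (finite thermal broadening with an assumed $`kT`$, uniform energy grid, units of $`2e/h`$):

```python
import numpy as np

def fermi(E, mu, kT):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

def currents(E, T00_RL, T00_LR, T10_RL, T10_LR, muL, muR, kT=1e-3):
    """Elastic and inelastic currents in units of 2e/h; the transmission
    arrays are sampled on the uniform energy grid E (energies in eV)."""
    fL, fR = fermi(E, muL, kT), fermi(E, muR, kT)
    dE = E[1] - E[0]
    I_el = np.sum(T00_RL * fL - T00_LR * fR) * dE
    I_in = np.sum(T10_RL * fL - T10_LR * fR) * dE
    return I_el, I_in
```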
For a given configuration of the system a curve of inelastic current vs. applied bias is obtained and its maximum value $`I_{\mathrm{in}}^{\mathrm{max}}`$ can be extracted. Figure 2 shows $`I_{\mathrm{in}}`$-V curves as we change $`N_R`$. The peaks in these curves correspond to the inelastic contributions to the main peak and to the satellite peak in the total current, respectively. This figure also shows that the peaks are shifted to higher voltages as $`N_R`$ is increased, which reveals a strong renormalization of the resonant energies due to the electrodes. In figure 3 we present $`I_{\mathrm{in}}^{\mathrm{max}}`$ vs. $`N_R`$ curves for different values of $`N_L`$. These curves exhibit a maximum of $`I_{\mathrm{in}}^{\mathrm{max}}`$ as a function of $`N_R`$. The optimal configurations correspond to asymmetric structures with wider right barriers. This can be understood by means of the following argument. Increasing the lifetime of the electrons in the well favors the electron-phonon interaction and thus increases the inelastic current. This can be done by choosing wider (or higher) barriers. In spite of this, as an effect of the asymmetry produced by the applied bias, the lifetime is still controlled mainly by the right barrier. On the other hand, increasing the length of the barriers increases the reflectivity of the device, diminishing the currents; here it is the left barrier which plays the main role. There is thus a trade-off between these two effects that maximizes the phonon emission.
In summary, we have used a simple model to show that the asymmetry in double barrier structures plays an important role in the $`I_{\mathrm{in}}`$-V characteristics and to predict how it can be controlled to optimize LO phonon emission. In particular we show that the optimal configuration corresponds to a collector barrier with a length which doubles that of the emitter.
We acknowledge financial support from CONICET, SeCyT-U.N.C., ANPCyT and an international grant from Andes-Vitae-Antorchas.
Simulating the effects of intergalactic grey dust
## 1. Introduction
Recent observations of Type IA Supernovae (SNe) at redshifts up to $`z\sim 0.8`$ (Riess et al. (1998); Perlmutter et al. 1999, hereafter P (99)) have made possible classical cosmological tests that require standard candles, such as the magnitude-redshift relation. The most dramatic result is that these SNe appear dimmer (by $`\sim 0.2`$ magnitudes) at high redshift than would be predicted in a non-accelerating universe, suggesting at face value that we live in an accelerating universe. However, other explanations are possible, including the one we consider here, namely that distant SNe are dimmer due to extinction by intergalactic dust.
As distant standard candles, SNe are sensitive probes of extinction in the intergalactic medium (IGM). The distribution of matter in the IGM may now be modelled accurately in the context of modern cosmology using hydrodynamic simulations (see e.g., Cen et al. (1994); Hernquist et al. (1996); Davé et al. (1999)). The resulting IGM is not smooth, but rather traces large-scale structure. If such structure contains not only dark matter and gas but also dust, this would result in significant variations in SNe brightnesses due to intervening extinction. Observationally, the distribution of Type Ia SNe magnitudes has a very small dispersion (P (99)). Thus by comparing simulations to the distribution of observed brightnesses, we can set limits on the amount of dust extinction and possibly constrain its spatial distribution with respect to intergalactic gas. In this Letter we present a technique for doing this, and apply it to the SNe observations of P (99).
Intergalactic grey dust has been examined in a series of papers by Aguirre (1999a,b; hereafter A (99)) and Aguirre & Haiman (1999), who develop a scenario in which small grains are preferentially destroyed during ejection from galaxies, polluting the IGM with large dust grains that are effectively grey in the bandpasses of the SNe data. This greyness is necessary in order not to violate tight limits on reddening from SNe data, which imply that galactic-type dust would provide negligible absorption (P (99)). Furthermore, a significant fraction of the dust must reside in the IGM. If the grey dust causing extinction were present only in the ISM of the supernova host galaxy, this would introduce too large a dispersion in observed SNe magnitudes (Riess et al. (1998)). In this study, we assume that grey dust blends smoothly from galaxies into the surrounding IGM. Our analysis is insensitive to the intrinsic properties of the dust, such as its opacity and grain size, since we use the simulations to directly translate the observed SNe magnitude distribution into a dust extinction in magnitudes. It is, however, sensitive to the way in which dust traces the distribution of gas in the IGM, and we will consider several simple but plausible variations of this relation.
In ยง 2 we describe our simulations of the IGM, and of dust extinction. In ยง 3 we describe our analysis method and results, including constraints on grey dust afforded by current SN observations. In ยง 4 we discuss systematic uncertainties, and the implications of our results.
## 2. Simulations of Intergalactic Dust
We employ a hydrodynamic simulation of a $`\mathrm{\Lambda }`$-dominated cold dark matter model, with $`\mathrm{\Omega }_m=0.4`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$, $`\mathrm{\Omega }_b=0.02h^2`$, $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, and $`\sigma _8=0.8`$. Our simulation volume is $`50h^{-1}`$ Mpc with $`10h^{-1}`$ kpc spatial resolution, having $`144^3`$ dark matter and $`144^3`$ gas particles, and was evolved from $`z=490`$ using Parallel TreeSPH (Davé, Dubinski & Hernquist (1997)).
In order to obtain dust column densities along lines of sight, we consider three different ways that dust may trace intergalactic hydrogen gas:
1. $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}`$: Dust traces gas linearly.
2. $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{metal}}`$: Dust traces metals linearly.
3. $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}^2`$: Dust traces gas quadratically.
As our simulation makes no direct prediction for the metallicity of gas, we adopt a heuristic prescription (cf. Cen & Ostriker (1999)) in the second case above. We assume that the metallicity is $`10^{-2}`$ solar if the gas overdensity is less than 10, solar if the overdensity is greater than 1000, and log-linear in between.
Figure 1: Dust extinction maps in $`2.2^{\circ }\times 2.2^{\circ }`$ patches of sky out to $`z=0.05`$ (left) and $`z=0.5`$ (right), for (a) $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}`$ and (b) $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}^2`$. The median extinction magnitude to $`z=0.5`$ is set equal to $`0.4`$ for both (a) and (b). The maps correspond to what would be seen against a white background.
To extract dust extinction values from the simulations, we assume that the gas associated with each particle is spread over its SPH smoothing volume (see e.g., Hernquist & Katz (1989)). We perform a numerical integration of gas column density along 5000 rays cast through these volumes, at the same time applying one of the three transformations given above to relate gas to dust densities. To reach the required path lengths out to $`z0.5`$, we follow rays through 26 simulation volumes, each ray entering through a random point on a random face. This yields the column density of dust to $`z=0.5`$ along each line of sight.
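A schematic version of this ray-casting step is sketched below: accumulating dust column density through periodically replicated volumes. The gridded lognormal density field, the axis-aligned paths and all numerical values are illustrative stand-ins for the SPH smoothing-volume integration and random-face ray geometry used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_grid, n_boxes = 64, 26
rho_gas = rng.lognormal(mean=0.0, sigma=1.0, size=(n_grid,) * 3)

def dust_density(rho, model):
    # The three dust prescriptions considered in the text.
    if model == "linear":
        return rho
    if model == "quadratic":
        return rho ** 2
    if model == "metal":
        over = rho / rho.mean()                   # overdensity
        logZ = np.interp(np.log10(over).ravel(), [1.0, 3.0], [-2.0, 0.0])
        return rho * 10.0 ** logZ.reshape(rho.shape)
    raise ValueError(model)

def column_density(model, rng):
    """Dust column along one sight-line: a random entry point and a simple
    axis-aligned path through each of the n_boxes replicated volumes."""
    rho_d = dust_density(rho_gas, model)
    total = 0.0
    for _ in range(n_boxes):
        x0, y0 = rng.integers(0, n_grid, size=2)
        total += rho_d[x0, y0, :].sum()
    return total / (n_boxes * n_grid)             # mean density along ray

sightlines = np.array([column_density("quadratic", rng) for _ in range(200)])
print(f"median = {np.median(sightlines):.3f}, rms scatter = {sightlines.std():.3f}")
```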
In Figure 1 we show extinction maps of $`2.2^{\circ }\times 2.2^{\circ }`$ patches of sky, for (a) $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}`$ and (b) $`\rho _{\mathrm{dust}}\propto \rho _{\mathrm{gas}}^2`$. The median extinction to $`z=0.5`$ was set equal (to $`0.4`$ mag) for cases (a) and (b). In Figure 2, we show how the mean dust extinction and its dispersion vary with redshift. Here, the dispersion in all three panels was set to be the same small value, equal to the difference in quadrature of the dispersion in SNe magnitudes at high ($`\sigma _{z=0.5}=0.157`$) and low ($`\sigma _{z=0.05}=0.154`$) redshifts observed by P (99), namely 0.03 mag. Figs. 1 and 2 show that the mean extinction is much greater when the dust is more smoothly distributed. We will now quantitatively explore the constraints that can be put on the dust extinction by using the observed distribution of SNe magnitudes.
Figure 2: Mean and dispersion of the dust extinction as a function of redshift. Filled squares show mean, open circles with line show the dispersion. Values have been scaled to a dispersion of 0.03 magnitudes at $`z=0.5`$.
## 3. Comparison with observations
We make use of two characteristics of the observed SN data in our comparison: the change with redshift of the dispersion in SNe magnitudes, and the shape of the histogram of SNe magnitudes. As mentioned above, P (99) found little difference in the dispersions of two samples with $`\overline{z}\sim 0.05`$ and $`\overline{z}\sim 0.5`$. As they stated, this leaves little room for dispersion due to dust, as this dispersion is expected to increase for longer path lengths (see Fig. 2). In order to quantify this, and the effect of the distribution shape, we generate simulated SNe magnitudes, and compare them to the P (99) data using a maximum likelihood approach.
The observational datasets we use are both taken from P (99) (their Tables 1 and 2): the high-$`z`$ SNe of the Supernova Cosmology Project, and the low-$`z`$ sample of the Calán-Tololo SNe survey (Hamuy et al. (1996)). We use 40 of the former (non-reddened) SNe, between $`z=0.172`$ and $`z=0.83`$ ($`\overline{z}\sim 0.5`$), and 16 of the latter SNe, which lie between $`z=0.02`$ and $`z=0.101`$ ($`\overline{z}\sim 0.05`$).
We generate simulated datasets for each dust model described in § 2. For each dust model, we vary two parameters; first, $`M_C`$, a cosmological magnitude shift applied to all simulated SNe at a given $`z`$, normalized so that $`M_C=0`$ at $`z=0.5`$ corresponds to the best fitting cosmology found by P (99) (with $`\mathrm{\Omega }_m=0.28`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.72`$); and second, $`A_V`$, the median V-band magnitude of dust extinction out to $`z=0.5`$ (we use the median in order to be less sensitive to long tails of the distribution, cf. Figure 3, when renormalizing extinction values). $`M_C\simeq 0.2`$ then corresponds to an open model with $`\mathrm{\Omega }_m\simeq 0.3`$, and $`M_C\simeq 0.4`$ to an $`\mathrm{\Omega }_m=1`$ model. We generate the simulated datasets as follows (a schematic sketch of these steps is given after the list):
(1) We renormalize the 5000 simulated lines-of-sight so that the median dust extinction to $`z=0.5`$ equals $`A_V`$.
(2) We add the cosmological shift, $`M_C`$.
(3) We broaden the magnitude distribution, which involves convolving it separately with a Gaussian of width given by the observational error of each SN, and then adding these distributions together. When doing this, we include an "intrinsic" SN dispersion of $`\sigma _{\mathrm{int}}=0.17`$ mag (P (99)). Varying this (by $`\pm 0.1`$ mag) makes little difference to the results (see §4).
(4) We truncate the distribution at a magnitude difference $`\mathrm{\Delta }_M=1`$, to roughly account for the fact that SNe along lines of sight passing through high extinction regions would not make it into these magnitude-limited samples.
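A compact sketch of steps (1)-(4); the sight-line extinctions and observational errors fed in below are synthetic placeholders, and the per-SN convolution of step (3) is replaced here by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_magnitudes(extinctions, A_V, M_C, obs_errors,
                         sigma_int=0.17, cutoff=1.0):
    """Steps (1)-(4): renormalize the sight-line extinctions to median
    A_V, add the cosmological shift M_C, broaden by the observational
    and intrinsic dispersions, and truncate at Delta_M = cutoff."""
    dm = extinctions * (A_V / np.median(extinctions))         # step (1)
    dm = dm + M_C                                             # step (2)
    sigma = np.sqrt(rng.choice(obs_errors, size=dm.size) ** 2
                    + sigma_int ** 2)                         # step (3)
    dm = dm + sigma * rng.standard_normal(dm.size)
    return dm[dm < cutoff]                                    # step (4)

# Illustrative use with synthetic inputs:
ext = rng.lognormal(-1.5, 1.0, size=5000)     # stand-in sight-lines
errs = np.full(40, 0.2)                       # stand-in SN errors [mag]
dm = simulated_magnitudes(ext, A_V=0.4, M_C=0.4, obs_errors=errs)
```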
Figure 3: The PDF of magnitude differences, $`\mathrm{\Delta }_M`$, for 5000 simulated SNe. The shaded region is included in the calculation of the dispersion; non-shaded region indicates heavily obscured SNe that are not included in the PDF normalization. The histogram shows 40 of the high-$`z`$ SNe of P (99).
We derive the probability distribution function (PDF) of SNe magnitude differences $`\mathrm{\Delta }_M`$ predicted by the simulation, so that the predicted number of SNe between $`\mathrm{\Delta }_M`$ and $`\mathrm{\Delta }_M+d\mathrm{\Delta }_M`$ is $`NP(\mathrm{\Delta }_M)d\mathrm{\Delta }_M`$, where $`N`$ is the number of observed SNe. Figure 3 shows histograms of the PDF of $`\mathrm{\Delta }_M`$, for $`A_V=0.4`$ mag and $`M_C=0.4`$. We also plot the observational data of P (99). In effect, for this plot, we have brightened the simulated SNe to mimic an $`\mathrm{\Omega }_m=1`$ model and then dimmed them with dust. We can see that in panel (a), where the dust is fairly smoothly distributed, the simulated PDF is not too different from the observations. In the other panels, which have clumpier dust, there is a skewness not seen in the observed data, due to a long tail of high extinction lines-of-sight.
For each set of simulated lines of sight, we form the relative likelihood of drawing all the observed SNe magnitudes, $`\mathcal{L}=\prod _i^NP(\mathrm{\Delta }_{M,i})`$, where $`\mathrm{\Delta }_{M,i}`$ is the magnitude difference of SN $`i`$. We define the quantity $`S=-2\mathrm{ln}\mathcal{L}`$ and assume that $`S`$ follows a $`\chi ^2`$ distribution in order to derive confidence limits on the parameters $`M_C`$ and $`A_V`$. In the present analysis, we use results at two different redshifts ($`z=0.05`$ and $`z=0.5`$), and combine the two by adding the values of $`S`$. It is this step, combining the likelihoods at two different redshifts, which constrains the amount of additional dispersion (or the change in the shape of the magnitude distribution) due to grey dust between low and high redshifts. Contours of $`\mathrm{\Delta }S=2.3,6.2`$, and $`11.8`$ (the difference in $`S`$ from its minimum value), representing $`1,2`$ and $`3\sigma `$ intervals of joint confidence, are plotted in Figure 4.
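A minimal sketch of the statistic: the PDF is estimated as a normalized histogram of simulated magnitudes, $`S`$ is evaluated on it, and the grid scan over $`(M_C,A_V)`$ is indicated schematically (the histogram binning, the floor on $`P`$, and the hypothetical `sim` helper are our own assumptions):

```python
import numpy as np

def S_statistic(observed_dm, simulated_dm, bins=60, span=(-1.0, 1.0)):
    """S = -2 ln L, with P(Delta_M) estimated as a normalized histogram
    of the simulated magnitude differences."""
    pdf, edges = np.histogram(simulated_dm, bins=bins, range=span,
                              density=True)
    idx = np.clip(np.digitize(observed_dm, edges) - 1, 0, bins - 1)
    p = np.maximum(pdf[idx], 1e-12)        # floor to avoid log(0)
    return -2.0 * np.log(p).sum()

# Joint confidence contours (schematic; `sim(A_V, M_C, z)` is a
# hypothetical helper returning simulated magnitudes as above):
# S_grid = np.array([[S_statistic(obs_lo, sim(a, m, 0.05))
#                     + S_statistic(obs_hi, sim(a, m, 0.5))
#                     for m in M_C_grid] for a in A_V_grid])
# levels = S_grid.min() + np.array([2.3, 6.2, 11.8])
```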
Figure 4: 1, 2 and 3$`\sigma `$ contours for the cosmological magnitude shift, $`M_C`$, with respect to a model with $`\mathrm{\Omega }_\mathrm{\Lambda }=0.72`$ and $`\mathrm{\Omega }_M=0.28`$, and the median extinction due to grey dust ($`A_V`$).
Figure 4 shows that the smoothest distribution of dust easily accommodates enough extinction to reconcile a non-accelerating ($`M_C=0.2`$) or flat ($`M_C=0.4`$) universe, as there is a strong degeneracy between the cosmological magnitude shift and dust extinction. The other cases, where dust traces metals and $`\rho _{\mathrm{gas}}^2`$, are coincidentally quite similar. Such a distribution of dust would be mildly disfavored in a non-accelerating open universe and ruled out at $`99\%`$ confidence in an Einstein-de Sitter universe. The constraints arise mostly from the shape of the distribution; the fact that the observed dispersions are similar at $`z=0.05`$ and $`z=0.5`$ is relatively unimportant, only making a noticeable difference in panel (a), where the shape is similar to the observed distribution.
## 4. Discussion
We have found that current SNe datasets appear to have some power to constrain grey dust models. We find it somewhat unlikely (at the $`1.5`$ to $`2\sigma `$ level) that sufficient grey dust could be distributed in the relatively clumpy fashion expected for the metal distribution (cf. Gnedin & Ostriker (1997), Davé et al. (1998), Cen & Ostriker (1999)) to reconcile the P (99) data with a non-accelerating universe. This suggests that any substantial grey dust component must be largely segregated from the galaxies where it was formed. In the specific grey dust model of A (99), sputtering does destroy dust more effectively in denser regions, but once in the IGM the large grains have long lifetimes, so that dust is still likely to trace the metals. The question of how the dust and metals are distributed can be answered self-consistently by modelling the relevant physical processes (dust and metal ejection from galaxies) directly in the simulations (e.g., Aguirre et al., in preparation).
There are many alternative explanations for SNe appearing dimmer in the past. For example, there may be metallicity effects in the host galaxy (Höflich et al. (1999)), intrinsic evolution (Riess et al. (1999)), observational selection differences between CCDs used for distant SNe and photographic plates used for nearby samples (Howell, Wang & Wheeler (1999)), or time evolution of the gravitational constant (Amendola, Corasaniti & Occhionero (1999); Garcia-Berro et al. (1999)). Such effects would alter the interpretation of our parameter $`M_C`$. Gravitational lensing magnification (Metcalf (1999)) would also change the magnitude dispersion at higher redshifts.
The study we have presented is reasonably general, in that our analysis is not dependent on the (unknown) microscopic properties of this hypothetical intergalactic dust. Still, there are some possible systematic uncertainties, which we now consider. We assume (as do P (99)) that the observed dispersion in SNe magnitudes in excess of the estimated observational errors is an intrinsic property of SNe and does not vary with redshift. One could envision scenarios in which the intrinsic dispersion is lower at high redshift, thus allowing more dispersion from dust. However, as mentioned earlier, most of the statistical power of our analysis comes from the shape of the distributions, which is not significantly affected by a change in the intrinsic dispersion. Also, we find that when we decrease the intrinsic SN dispersion ($`\sigma _{\mathrm{int}}`$), models with more dust fit slightly better at high-$`z`$, but the low-$`z`$ fit becomes worse, a trade-off which means that the overall results hardly change. The distribution shape depends on our assumption that the intrinsic dispersion has a Gaussian distribution in magnitudes, whereas a distribution skewed to fainter magnitudes could weaken our constraints. With a larger sample of low-z SNe we might be in a position to test this, by using the distribution of low-z SN magnitudes to make a simulated high-z sample with the correct intrinsic distribution shape. Also, we have made a simplifying approximation in our simulated high-z samples, by using dust extinction magnitudes that result from integrating the dust contribution from $`z=0`$ to exactly $`z=0.5`$. The real SNe are at a range of redshifts; our constraints are effectively conservative because of this. We have also assumed that the total mass of dust does not change with redshift from $`z=0.5`$ to $`z=0`$; dust increasing with time would strengthen our constraints, while a decrease seems implausible. Finally, the truncation of our simulated magnitude distribution at $`+1`$ mag is a rather approximate procedure. We have tried changing the cutoff, and find that with none (an unrealistic case) the constraints become stronger, as we might expect. With a lower cutoff, the right side of the observed distribution is not reproduced. One additional degree of freedom would involve changing the functional form of the cutoff. In the future, as observations improve, we plan to simulate such observational selection effects more carefully.
On the simulation side, our relatively low mass resolution ($`m_{\mathrm{gas}}=8.5\times 10^8M_{\odot }`$) means that we miss fluctuations in the extinction that occur on smaller mass scales. If we had higher resolution, this should have the effect of making our dust constraints tighter. We find that the dispersion in projected magnitudes largely depends on small scale fluctuations. We tested this by splitting the simulation into small sub-volumes and then shuffling them to remove large scale correlations (see e.g., Bhavsar & Ling (1988)) before projection, and we obtained similar results. Changing the assumed cosmological model could have some effects, although it is difficult to conceive of models which have less power on small scales while fitting other constraints (see e.g., White & Croft (2000)).
The models of A (99) have grains that do produce some reddening, typically in the infrared. A complementary approach to ours is therefore to use information from different color bands. Such an approach was used by P (99), for example, to constrain normal galactic-type dust. Recent near-IR observations by Riess et al. (2000) have made reddening constraints tight even for non-standard large dust grains. Our approach does not make any use of color information, and so constrains the most extreme scenario in which the dust is totally "grey".
In summary, we have used cosmological hydrodynamic simulations to explore how intergalactic grey dust could affect observations of high redshift supernovae, and how supernova data can constrain grey dust extinction. We conclude that only a fairly smooth distribution of dust could readily mimic the effect of an accelerating Universe. Such a distribution would be strange, as the dust would be strongly biased away from metal producing regions. More realistic dust distributions are mildly disfavored, but upcoming samples of SNe (at current redshifts) should enable us to put tighter limits as they more precisely determine the shape of the brightness distribution. At higher redshifts ($`z\sim 1`$), grey dust predicts that SNe should show increased dimming, while decreased dimming would be strong evidence for a cosmological constant.
We thank Anthony Aguirre, Bruce Draine, and Bob Kirshner for useful discussions, and David Weinberg for helpful comments on the manuscript.
Time Dependent Theory for Random Lasers
The interplay of localization and amplification is an old and interesting topic in physics research. With promising properties, mirror-less random laser systems are widely studied both experimentally and theoretically. Recently new observations of laser-like emission were reported and showed new interesting properties of amplifying media with strong randomness. First, sharp lasing peaks appear when the gain or the length of the system is over a well defined threshold value. Although a drastic spectral narrowing has been previously observed, discrete lasing modes were missing. Second, more peaks appear when the gain or the system size further increases over the threshold. Third, the spectra of the lasing system are direction dependent, not isotropic. To fully explain such an unusual behavior of stimulated emission in random systems with gain, we are in need of new theoretical ideas.
Theoretically, a lot of methods have been used to discuss the properties of such random lasing systems. Based on the time-dependent diffusion equation, earlier work of Letokhov predicted the possibility of lasing in a random system and Zyuzin discussed the fluctuation properties near the lasing threshold. Recently, John and Pang studied the random lasing system by combining the electron number equations of energy levels with the diffusion equation. Such a consideration predicted a reduction in the threshold gain for a laser action due to the increased optical path from diffusion. It also verified the narrowing of the output spectrum when approaching the gain threshold. By using the diffusion approach is not possible to explain the lasing peaks observed in the recent experiments in both semiconductor powders and in organic materials. The diffusive description of photon transport in gain media neglects the phase coherence of the wave, so it gives limited information for the wave propagation in the gain media. Another approach which is based on the time-independent wave equations for the random gain media can go beyond the diffusive description . But as was shown recently , the time-independent method is only useful in determining the lasing threshold. When the gain or the length of system is larger than the threshold value, the time-independent description will give a totally unphysical picture for such a system. To fully understand the random lasing system, we have to deal with time-dependent wave equations in random systems. On the other hand, the laser physics community has developed some phenomenological theories to deal with gain media which were overlooked by the researchers working on random systems.
In this paper we introduce a model by combining these semi-classical laser theories with Maxwell equations. By incorporating a well-established FDTD ( finite-difference time-domain) method we calculate the wave propagation in random media with gain. Because this model couples electronic number equations at different levels with field equations, the amplification is nonlinear and saturated, so stable state solutions can be obtained after a long relaxation time. The advantages of this FDTD model are obvious, since one can follow the evolution of the electric field and electron numbers inside the system. From the field distribution inside the system, one can clearly distinguish the localized modes from the extended ones. One can also examine the time dependence of the electric field inside and just outside the system. Then after Fourier transformation, the emission spectra and the modes inside the system can be obtained.
Our system is essentially a one-dimensional simplification of the real experiments . It consists of many dielectric layers of real dielectric constant of fixed thickness, sandwiched between two surfaces, with the spacing between the dielectric layers filled with gain media (such as the solution of dye molecules). The distance between the neighboring dielectric layers is assumed to be a random variable. The overall length of the system is $`L`$.
Our results can be summarized as follows: ($`i`$) As expected for periodic and $`short`$ ($`L<\xi `$, $`\xi `$ is the localization length) random system, an extended mode dominates the field and the spectra. ($`ii`$) For either strong disorder or the long ($`L\xi `$) system, we obtain a low threshold value for lasing. By increasing the length or the gain (higher gain can be achieved by increasing the pumping intensity) more peaks appear in the spectra. By examining the field distribution inside the system, one can clearly see that these lasing peaks are coming from localized modes. ($`iii`$) When the gain or the pumping intensity increases even further, the number of lasing modes do not increase further, but saturate to a constant value, which is proportional to the length of system for a given randomness. And ($`i`$v) the emission spectra are not same for different output directions which show that the emission is not isotropic. These findings are in agreement with recent experiments and also make new predictions.
The binary layers of the system are made of dielectric materials with dielectric constant of $`\epsilon _1=\epsilon _0`$ and $`\epsilon _2=4\times \epsilon _0`$ respectively. The thickness of the first layer, which simulates the gain medium, is a random variable $`a_n=a_0(1+W\gamma )`$ where $`a_0=300`$nm, $`W`$ is the strength of randomness and $`\gamma `$ is a random value in the range \[-0.5, 0.5\]. The thickness of second layer, which simulates the scatterers, is a constant $`b=180`$nm. In the first layer, there is a four-level electronic material mixed inside. An external mechanism pumps electrons from ground level ($`N_0`$) to third level ($`N_3`$) at certain pumping rate $`P_r`$, which is proportional to the pumping intensity in experiments. After a short lifetime $`\tau _{32}`$, electrons can non-radiative transfer to the second level ($`N_2`$). The second level ($`N_2`$) and the first level ($`N_1`$) are called the upper and the lower lasing levels. Electrons can be transfered from the upper to the lower level by both spontaneous and stimulated emission. At last, electrons can non-radiative transfer from the first level ($`N_1`$) back to the ground level ($`N_0`$). The lifetimes and energies of upper and lower lasing levels are $`\tau _{21}`$, $`E_2`$ and $`\tau _{10}`$, $`E_1`$ respectively. The center frequency of radiation is $`\omega _a=(E_2E_1)/\mathrm{}`$ which is chosen to be equal to $`2\pi \times 610^{14}`$ $`Hz`$ ($`\lambda =499.7`$ $`nm`$). Based on real materials , the parameters $`\tau _{32}`$, $`\tau _{21}`$ and $`\tau _{10}`$ are chosen to be $`1\times 10^{13}`$s, $`1\times 10^9`$s and $`1\times 10^{11}`$s. The total electron density $`N_0^0=N_0+N_1+N_2+N_3`$ and the pump rate $`P_r`$ are the controlled variables according to the experiments .
The time-dependent Maxwell equations are given by $`\nabla \times \mathbf{E}=-\partial \mathbf{B}/\partial t`$ and $`\nabla \times \mathbf{H}=\epsilon \partial \mathbf{E}/\partial t+\partial \mathbf{P}/\partial t`$, where $`\mathbf{B}=\mu \mathbf{H}`$ and $`\mathbf{P}`$ is the electric polarization density, from which the amplification or gain can be obtained. Following the single-electron case, one can show that the polarization density in the presence of an electric field obeys the following equation of motion:
$$\frac{d^2P(t)}{dt^2}+\mathrm{\Delta }\omega _a\frac{dP(t)}{dt}+\omega _a^2P(t)=\frac{\gamma _r}{\gamma _c}\frac{e^2}{m}\mathrm{\Delta }N(t)E(t)$$
(1)
where $`\mathrm{\Delta }\omega _a=1/\tau _{21}+2/T_2`$ is the full width at half maximum linewidth of the atomic transition, $`T_2`$ is the mean time between dephasing events, taken to be $`2.18\times 10^{14}`$ s, $`\mathrm{\Delta }N(t)=N_1(t)-N_2(t)`$, $`\gamma _r=1/\tau _{21}`$ is the real decay rate of the second level and $`\gamma _c=e^2\omega _a^2/6\pi \epsilon _0mc^3`$ is the classical rate. It is easy to derive from Eq. (1) that the amplification line shape is Lorentzian and homogeneously broadened. Eq. (1) can be thought of as a quantum mechanically correct equation for the induced polarization density $`P(t)`$ in a real atomic system.
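As an illustration of how Eq. (1) enters an FDTD scheme, the following sketch advances the polarization with a standard central-difference discretization, using the parameter values quoted above; the variable and function names are our own illustrative choices, not part of the original formulation.

```python
import numpy as np

# Parameter values quoted in the text; the variable names are illustrative.
dt    = 1e-17                     # FDTD time step [s]
wa    = 2 * np.pi * 6e14          # atomic transition frequency [rad/s]
tau21 = 1e-9                      # upper lasing level lifetime [s]
T2    = 2.18e-14                  # mean time between dephasing events [s]
dwa   = 1.0 / tau21 + 2.0 / T2    # FWHM linewidth of the transition
e, m     = 1.602e-19, 9.109e-31   # electron charge [C] and mass [kg]
eps0, c0 = 8.854e-12, 2.998e8
gamma_r  = 1.0 / tau21                                    # real decay rate
gamma_c  = e**2 * wa**2 / (6 * np.pi * eps0 * m * c0**3)  # classical rate
kappa    = (gamma_r / gamma_c) * e**2 / m                 # RHS prefactor

def step_polarization(P_now, P_prev, dN, E):
    """Advance Eq. (1) by one time step with central differences."""
    a = 1.0 + 0.5 * dwa * dt
    return ((2.0 - wa**2 * dt**2) * P_now
            - (1.0 - 0.5 * dwa * dt) * P_prev
            + dt**2 * kappa * dN * E) / a
```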
The equations giving the number of electrons on every level can be expressed as follows:
$`{\displaystyle \frac{dN_3(t)}{dt}}`$ $`=`$ $`P_rN_0(t)-{\displaystyle \frac{N_3(t)}{\tau _{32}}}`$ (2)
$`{\displaystyle \frac{dN_2(t)}{dt}}`$ $`=`$ $`{\displaystyle \frac{N_3(t)}{\tau _{32}}}+{\displaystyle \frac{1}{\hbar \omega _a}}E(t){\displaystyle \frac{dP(t)}{dt}}-{\displaystyle \frac{N_2(t)}{\tau _{21}}}`$ (3)
$`{\displaystyle \frac{dN_1(t)}{dt}}`$ $`=`$ $`{\displaystyle \frac{N_2(t)}{\tau _{21}}}-{\displaystyle \frac{1}{\hbar \omega _a}}E(t){\displaystyle \frac{dP(t)}{dt}}-{\displaystyle \frac{N_1(t)}{\tau _{10}}}`$ (4)
$`{\displaystyle \frac{dN_0(t)}{dt}}`$ $`=`$ $`{\displaystyle \frac{N_1(t)}{\tau _{10}}}-P_rN_0(t)`$ (5)
where $`\frac{1}{\hbar \omega _a}E(t)\frac{dP(t)}{dt}`$ is the induced radiation rate from level 2 to level 1, or the excitation rate from level 1 to level 2, depending on its sign.
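The population equations can then be advanced in a similar spirit; the sketch below uses a simple forward-Euler step (a real implementation might prefer a higher-order treatment), again with our own naming conventions.

```python
import numpy as np

dt, hbar = 1e-17, 1.0546e-34
wa = 2 * np.pi * 6e14
tau32, tau21, tau10 = 1e-13, 1e-9, 1e-11

def step_populations(N, E, dPdt, Pr):
    """One forward-Euler step of Eqs. (2)-(5); N = [N0, N1, N2, N3]."""
    N0, N1, N2, N3 = N
    stim = E * dPdt / (hbar * wa)     # induced emission/excitation rate
    dN3 = Pr * N0 - N3 / tau32
    dN2 = N3 / tau32 + stim - N2 / tau21
    dN1 = N2 / tau21 - stim - N1 / tau10
    dN0 = N1 / tau10 - Pr * N0
    return np.array([N0 + dt * dN0, N1 + dt * dN1,
                     N2 + dt * dN2, N3 + dt * dN3])
```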
To excite the system, we must introduce sources. To simulate a real laser system, we introduce sources homogeneously distributed throughout the system to model the spontaneous emission. We make sure that the distance $`L_s`$ between two neighboring sources is smaller than the localization length $`\xi `$. Each source generates waves with a Lorentzian frequency distribution centered at $`\omega _a`$, with an amplitude that depends on $`N_2`$. In real lasers, spontaneous emission is the most fundamental noise, but it is generally submerged in other, much larger technical noise. In our system, the simulated spontaneous emission is the $`only`$ noise present, and it is treated self-consistently. This is the reason for the small background in the emission spectra shown below.
There are two leads, both with a width of 3000 nm, at the right and left sides of the system, and at the ends of the leads we use the Liao method to impose absorbing boundary conditions (ABC). In the FDTD calculation, the discrete time and space steps are chosen to be $`10^{17}`$ s and $`10^{9}`$ m, respectively. From the values at previous time steps we calculate those at the next time step ($`n+1`$). First we obtain the electric polarization density $`P`$ at step $`n+1`$ using Eq. (1); then the electric and magnetic fields at step $`n+1`$ are obtained from the Maxwell equations; and finally the electron numbers at each level at step $`n+1`$ are calculated from Eqs. (2)-(5). The initial state has all electrons in the ground state, so there is no field, no polarization and no spontaneous emission. Then the electrons are pumped and the system begins to evolve according to the above equations.
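Putting the pieces together, one update cycle of the kind described here might be organized as in the sketch below. This is a schematic 1D arrangement (with $`\mu =\mu _0`$ and one common sign convention for the curl equations), reusing the helper functions sketched after Eqs. (1) and (5); grid staggering details and the Liao boundary treatment are omitted.

```python
import numpy as np

dt, dx = 1e-17, 1e-9             # discretization quoted in the text
mu0 = 4e-7 * np.pi

def fdtd_cycle(E, H, P, P_prev, N, eps, Pr):
    """One update cycle: P from Eq. (1), then the fields, then the
    populations.  E and P are length-nx arrays, H lives on the nx-1
    half-grid points, N has shape (4, nx), eps is the permittivity
    profile.  step_polarization / step_populations are the helpers
    sketched above."""
    dN = N[1] - N[2]                           # Delta N = N1 - N2
    P_new = step_polarization(P, P_prev, dN, E)
    dPdt = (P_new - P_prev) / (2.0 * dt)       # centered time derivative
    H -= (dt / (mu0 * dx)) * np.diff(E)        # dH/dt = -(1/mu) dE/dx
    # interior E nodes driven by curl(H) and the polarization current
    E[1:-1] -= (dt / eps[1:-1]) * (np.diff(H) / dx + dPdt[1:-1])
    N = step_populations(N, E, dPdt, Pr)
    return E, H, P_new, P, N
```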
We have performed numerical simulations for periodic and random systems. First, for all the systems, a well defined lasing threshold exists. As expected, when the randomness becomes stronger, the threshold intensity decreases because localization effects make the paths of waves propagating inside the gain medium much longer.
For a periodic or $`short`$ ($`L<\xi `$) random system, generally only one mode dominates the whole system, even if the gain increases far above the threshold. This is due to the fact that the first mode can extend over the whole system, and its strong electric field forces almost all the electrons of the upper level $`N_2`$ to jump down to the $`N_1`$ level quickly by stimulated emission. This leaves very few upper-level electrons for stimulated emission into the other modes. In other words, all the other modes are suppressed by the first lasing mode, even though their threshold values are only a little bit higher than that of the first one. This phenomenon also exists in common lasers.
For $`long`$ ($`L\gg \xi `$) random systems, richer behavior is observed. First we find that all the lasing modes are localized and stable around their localization centers after a long time. Each mode has its own specific frequency and corresponds to a peak in the spectrum inside the system. When the gain increases beyond the threshold, the electric field pattern (see Fig. 1a) shows that more localized lasing modes appear in the system, and the spectrum inside the system (see Fig. 1b) exhibits more sharp peaks, just as observed in the experiments. This is clearly seen in Figs. 1a and 1b for an 80 cell random system above threshold. In Figs. 1c and 1d similar results are shown for the 160 cell system. Notice that both the number of localized modes of the field (Fig. 1c) and the number of lasing peaks (Fig. 1d) are larger now. The exact positions of the lasing peaks depend on the random configuration. Notice that the lasing peaks are much narrower than the experimental ones. This is due to the 1d nature of our model: in the present case only two escape channels exist, so it is more difficult for the wave to get out of the system, which therefore has a higher quality factor. When the gain is very large, we find that the number of lasing modes does not increase any more, so a saturated number $`N_m`$ of lasing modes exists for a long random system. This is clearly seen in Fig. 2, where we plot the number of modes $`N_m`$ vs the pumping rate $`P_r`$. In Fig. 3, we plot the spectral intensity vs the wavelength for different pumping rates. Notice that these results are in qualitative agreement with the experimental results shown in Fig. 2 of the paper of Cao et al.
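As an aside, the extraction of the mode count $`N_m`$ from a recorded time series can be automated along the following lines; the window and the relative peak threshold are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import find_peaks

def count_lasing_modes(E_t, dt, rel_height=0.05):
    """Number of peaks in the emission spectrum of a recorded field E(t).
    The Hanning window and the 5% relative threshold are arbitrary."""
    spec = np.abs(np.fft.rfft(E_t * np.hanning(len(E_t))))**2
    peaks, _ = find_peaks(spec, height=rel_height * spec.max())
    return len(peaks)
```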
These multi-lasing peaks and the saturated-mode-number phenomena are due to the interplay between localization and amplification. Localization makes a lasing mode strong around its localization center and exponentially small away from it, so that it only suppresses the modes in this area by reducing $`N_2`$. When a mode lases, only those modes which are $`far`$ $`enough`$ from it can lase afterwards. So more than one mode can appear in a long system, and the modes seem to $`repel`$ one another. Because every lasing mode dominates a certain area and is separated from the other modes, only a limited number of lasing modes can exist in a finite long system, even in the case of large amplification. We therefore expect the number of surviving lasing modes $`N_m`$ to be proportional to the length of the system $`L`$ when the amplification is very large. Since the "mode-repulsion" property comes from the localization of the modes, we expect the average mode length $`L_m=L/N_m`$ to be proportional to the localization length $`\xi `$ as well. In Fig. 4, we plot $`N_m`$ vs the length of the system $`L`$ as we increase the length from 80 cells to 320 cells, keeping all other parameters the same. In Fig. 4, we also plot the average mode length $`L_m`$ vs the localization length $`\xi `$ as we change the randomness strength $`W`$ for a 320 cell system. The localization lengths are calculated using the transfer-matrix method by averaging over 10,000 random configurations. These results confirm that indeed $`N_m\propto L`$ and $`L_m\propto \xi `$. It would be very interesting if these predictions could be checked experimentally.
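For completeness, a minimal transfer-matrix estimate of $`\xi `$ for the binary stack described above might look as follows; it propagates $`(E,dE/dx)`$ through the passive layers and identifies $`\xi `$ with the inverse Lyapunov exponent, averaging over far fewer configurations than the 10,000 quoted in the text.

```python
import numpy as np

def localization_length(W, n_cells=2000, n_conf=100, lam=499.7e-9,
                        a0=300e-9, b=180e-9, n1=1.0, n2=2.0):
    """Lyapunov-exponent estimate of xi for the binary random stack
    (eps2 = 4 eps0 gives n2 = 2); W is the randomness strength."""
    k0 = 2 * np.pi / lam
    rng = np.random.default_rng()
    gammas = []
    for _ in range(n_conf):
        v = np.array([1.0, 0.0])          # (E, dE/dx) at the left edge
        log_norm, length = 0.0, 0.0
        for _ in range(n_cells):
            a = a0 * (1.0 + W * rng.uniform(-0.5, 0.5))
            for n, d in ((n1, a), (n2, b)):
                k = n * k0                # wavenumber inside the layer
                M = np.array([[np.cos(k * d), np.sin(k * d) / k],
                              [-k * np.sin(k * d), np.cos(k * d)]])
                v = M @ v
            nv = np.linalg.norm(v)        # renormalize to avoid overflow
            log_norm += np.log(nv)
            v /= nv
            length += a + b
        gammas.append(log_norm / length)
    return 1.0 / np.mean(gammas)          # xi = 1 / (Lyapunov exponent)
```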
The emission spectra at the right and left sides of the system are quite different. This can be explained from the field patterns shown in Fig. 1a and Fig. 1c. Notice that the localized modes are not the same on the two sides of the system. This is the reason for the difference in the output spectra. In Fig. 1d we denote by $`l`$ and $`r`$ the output modes from the left and right sides of our 1d system, respectively. The non-isotropic output spectra of real $`3D`$ experiments might be explained by assuming that every localized mode has its intrinsic direction, strength and position, and that the output spectra detected in experiments at different directions are the overlap of contributions from many modes; so generally they should be different. Most of the modes are not able to escape in our model, because of the 1D localization effects and the exchange of energy between modes.
In summary, by using an FDTD method we have constructed a random four-level lasing model to study the interplay of localization and amplification. Unlike time-independent models, the present formulation calculates the field evolution beyond the threshold amplification. This model allows us to obtain the field patterns and spectra of the localized lasing modes inside the system. For random systems, we can explain the multiple peaks and the non-isotropic properties of the emission spectra seen experimentally. Our numerical results predict the "mode-repulsion" property, the saturated number of lasing modes and the average mode length. We also observed the exchange of energy between the localized modes, which is quite different from common lasers and is essential for further research on mode competition and evolution in random lasers. All of these properties arise from the interplay of localization and amplification, where new physical phenomena can be found.
Ames Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82. This work was supported by the Director for Energy Research, Office of Basic Energy Sciences.
# Searching for cluster substructure using APM and ROSAT data
## 1 Introduction
Galaxy clusters occupy a special position in the hierarchy of cosmic structure in many respects. Being the largest physical laboratories in the universe, they appear to be ideal tools for studying large-scale structure, testing theories of structure formation and extracting invaluable cosmological information, especially regarding the Hubble parameter and the value of $`\mathrm{\Omega }_{\circ }`$ (cf. Böhringer 1995; West, Jones & Forman 1995; Buote 1998; Schindler 1999).
One of the most significant properties of galaxy clusters is the relation between their dynamical state and the underlying cosmology. In an open universe, clustering effectively freezes at high redshifts ($`z\simeq \mathrm{\Omega }_{\circ }^{-1}-1`$) and clusters today should be more relaxed, with weak or no indications of substructure. Instead, in a critical density model, such systems continue to form even today and should appear to be dynamically active. Even in a small but well-controlled (free from any kind of selection or other biases) cluster sample, the percentage and morphologies of disordered and perturbed objects could lead to constraints on the $`\mathrm{\Omega }_{\circ }`$ and $`\mathrm{\Lambda }`$ parameters, especially if combined with N-body/gasdynamic numerical simulations spanning different dark matter (DM) scenarios (cf. Richstone, Loeb & Turner 1992 hereafter RLT92; Evrard et al. 1993; Lacey & Cole 1993).
The above pioneering works were the first to set some limits on $`\mathrm{\Omega }_{\circ }`$ using the rate of cluster formation for various cosmologies and led the way to a plethora of research works in this direction (see Figure 2 of RLT92). Since then, a large number of relevant analyses have been devoted to this study and an accordingly varying and large number of optical and X-ray cluster compilations have been utilised to this aim. All the methods employed in each case are quite different and all the related studies find an appreciable percentage of dynamically active galaxy clusters (see Forman & Jones 1990; Böhringer 1995; West 1995; Thomas et al. 1998 for good reviews of the subject). However, these studies do disagree on the precise number of clusters exhibiting significant dynamical activity, which varies between 30% and 80% of the total number of clusters studied, and which seems also to depend on the techniques employed in each analysis.
We quote only some of the most recent studies together with the sample used and the preference of the method given in each case. Geller & Beers (1982) analysed the iso-intensity contour maps of 65 optical cluster maps. Forman et al. (1981) studied 4 Einstein clusters. Dressler & Shectman (1988) studied velocity dispersion diagnostics for 15 optical clusters. Rhee, van Haarlem & Katgert (1991) and Solanes, Salvador-Sole & Gonzalez-Casado (1999) utilised 107 and 67 ENACS clusters respectively and a variety of 2D and 3D statistical tests to quantify the significance of substructure. Similarly, Dutta (1995), Crone, Evrard & Richstone (1996 hereafter CER96), Pinkney et al. (1996), Thomas et al. (1998 hereafter T98) and Jones & Forman (1999 hereafter JF99; 208 ROSAT X-ray clusters) proposed a variety of substructure tests depending on the level of information available (1D, 2D, 3D) and applied these to different N-body cluster data using several DM models. CER96 and T98 have explicitly argued that variations in the cluster center-of-mass as a function of distance or density (overdensity) threshold are one of the best possible substructure measures (see also West & Bothun 1990). The latter has previously been adopted by Mohr, Fabricant & Geller (1993; 5 X-ray clusters), Mohr et al. (1995; 65 Einstein clusters) and Rizza et al. (1998; 11 ROSAT HRI distant clusters). Evrard et al. (1993) and Mohr et al. (1993; 1995) were the first to make extensive use of the surface-brightness moments of the X-ray cluster distribution (eg. center-of-mass shifts). Gomez et al. (1997; 9 Abell clusters) have also employed the same methods and claimed that variations in the cluster ellipticity and orientation as a function of distance from the cluster center (isophotal twisting) should be considered as one of the prime substructure diagnostics. Bird (1994), in her sample of 25 Abell clusters, has utilised galaxy peculiar velocities to locate and identify subclumps in the galaxy distribution. Kriessler & Beers (1997; 56 Abell and other clusters) have searched for cluster substructure signatures using the innovative KMM algorithm on the surface-density galaxy maps and quantified their findings using N-body simulations. Furthermore, Buote & Tsai (1995; 1996 hereafter BT95; BT96) and Valdarnini, Ghizzardi & Bonometto (1999) used the 2D gravitational potential moments (power ratio method) to characterise the dynamical state of clusters. Serna & Gerbal (1996; 2 Abell clusters) have developed the so-called hierarchical method to define and identify substructure in Abell clusters, while Slezak et al. (1994; 11 X-ray clusters) and Lazzati et al. (1998; 2 X-ray clusters) have adopted wavelet transform techniques in similar attempts.
As is evident from all the above, there is neither agreement on the methods utilised nor on the exact frequency of perturbed clusters. It seems that identifying significant dynamical activity within galaxy clusters, in close relation to the underlying cosmology and the density parameter, still remains an open issue. Not only do we need a large, statistically complete cluster sample, but also an objective and reliable definition of what substructure is, upon which our study should be based.
The large majority of the analyses carried out so far have made use of either optical (Abell) or X-ray (ROSAT, Einstein) cluster data. However, Mohr et al. (1993) and Rizza et al. (1998) have investigated cluster substructure using optical and X-ray data in a complementary fashion. In the present work, we extend this approach using a sample of 22 galaxy clusters for which we have data in both the optical and the X-ray part of the spectrum (APM and ROSAT respectively). Our sample size is twice as large as that of Rizza et al. (1998) and more than four times larger than that of Mohr et al. (1993). The advantage of using X-ray data is that the X-ray emission is proportional to the square of the gas density (rather than just the density, as in the optical) and emanates mostly from the central cluster region, a fact which minimises projection effects (cf. Sarazin 1988; Schindler 1998). The advantage of using optical data is the sheer size of the available cluster catalogues and thus the statistical significance of the emanating results. Subsequently, we are not only interested in comparing optical to X-ray cluster data regarding the various substructure tools, but also in calculating and calibrating different biases using the superior X-ray data, with which we could easily measure and test similar substructure diagnostics in a large, solely optical dataset as well. Therefore it is of great importance to address the following two questions, which we attempt to do in our present study:
* Is substructure in the optical also corroborated by the X-ray observations, and in what percentage?
* What is the confirmed percentage of systems depicting significant indications of internal activity?
To this aim we compare optical to X-ray morphological cluster parameters (position angles, ellipticities, centroid shifts and group statistics) in an attempt to classify objects according to their dynamical state, and we compute the relative frequency of substructure.
We proceed by presenting the optical and X-ray datasets in section 2. In section 3 we describe our methods of analysis, in section 4 we present a comparison of the optical and X-ray cluster images, while in section 5 we present the results of our substructure analysis. Finally, we draw our conclusions in section 6. Also, there is an extensive appendix at the end of this paper, where morphological and dynamical information for each individual cluster can be found, as well as a comparison with other similar works on the common clusters.
## 2 The data
The present dataset follows from a double cross-correlation between very rich ACO (Abell, Corwin & Olowin 1989) clusters ($`\mathrm{R}=`$1,2,3) with the APM cluster catalogue (Dalton et al. 1997 and references therein) and the X-ray (0.1 - 2.4) keV ROSAT pointed observations archive. The first correlation results in 329 common optical galaxy clusters (ACO/APM) in the southern sky ($`b\leq -40^{\circ },\delta \leq -17^{\circ }`$), while the second correlation results in 27 common clusters. Due to problematic regions of the APM catalogue, low signal-to-noise X-ray observations, contamination by known foreground or background objects and even double entries, we exclude 5 clusters (A4038, A122, A3264, A3049 and A2462), reducing our cluster sample to 22 systems.
Furthermore, the ROSAT (Trümper 1983) data we have used come from both the PSPC (Position Sensitive Proportional Counter; Pfefferman & Briel 1986) and the HRI (High Resolution Imager) detectors, both operating in the (0.1 - 2.4) keV band. The HRI has an excellent spatial resolution (FWHM $`\simeq `$ 5 arcsec) but no spectral resolution. The PSPC has an energy resolution of 0.5 keV at 1 keV but poorer spatial resolution (30 arcsec FWHM) compared to the HRI. We note that the PSPC is quite efficient in detecting extended emission, in contrast with the HRI, which has a lower sensitivity due to its higher background.
The cluster redshifts of our sample span the range $`0.04\lesssim z\lesssim 0.13`$, with mean $`z\simeq 0.074`$ and median $`0.069`$. For the needs of our analysis we transform cluster redshifts to cluster distances using the well-known angular-diameter distance formula with $`q_{\circ }=0.5`$ (ie., critical density universe) as:
$$r=\frac{2c}{H_{\circ }}(1+z)^{-1/2}\left[1-(1+z)^{-1/2}\right],$$
(1)
with $`H_{\circ }=100h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. In Table 1 we give all the relevant details for the present cluster sample.
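In code, Eq. (1) reduces to a one-line function; the reconstruction of the formula above, and hence this sketch, assumes the stated $`q_{\circ }=0.5`$ (Einstein-de Sitter) geometry.

```python
def cluster_distance(z, h=1.0):
    """Eq. (1): distance in Mpc for q0 = 0.5, H0 = 100 h km/s/Mpc."""
    c, H0 = 2.998e5, 100.0 * h
    return (2 * c / H0) * (1 + z)**-0.5 * (1 - (1 + z)**-0.5)
```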
## 3 The methodology
In this section we provide an account of the techniques used to define the cluster morphological parameters as well as the substructure measures. We first present how to process the cluster images and reduce the noise from point-like sources.
### 3.1 Processing the cluster images
Our X-ray cluster images are retrieved from the ROSAT pointed observations archive, and we have used the image processing package XIMAGE, which is designed to display and reduce the available data. Each X-ray image is embedded in a $`512\times 512`$ grid, each cell of which has a size of 15 arcseconds. We subtract all known point-like sources around the clusters (cf. West 1995; JF99) and replace them with the average background counts. In doing so, we have mostly encountered radio point-like sources (see Appendix for details).
Our optical data consist of all APM galaxies that fall within a radius of $`1.8h^{1}`$ Mpc from each optical APM cluster center. We then transform the galaxy and X-ray grid-cell angular coordinates into physical units, centering the X-ray data on the optical cluster centers. In order to construct a common comparison base, we create a continuous density field for both optical and X-ray data by using a Gaussian kernel and the same smoothing length ($`R_{\mathrm{sm}}`$). To this end we utilise a $`N\times N`$ grid, where the typical size of each grid cell is $`0.065h^{1}`$ Mpc. In order to take into account the reduction of the number of galaxy cluster members as a function of distance (due to the APM magnitude limit), and thus the corresponding increase of discreteness effects, we have investigated, using Monte-Carlo cluster simulations, the necessary increase in size of $`N`$ and $`R_{\mathrm{sm}}`$, as a function of distance, in order to minimise such effects and optimize the performance of our procedure (for details see Basilakos, Plionis & Maddox 2000; hereafter BPM).
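A minimal version of this gridding-plus-smoothing step is sketched below; the field size and grid dimension are chosen to reproduce the quoted cell size of $`0.065h^{1}`$ Mpc, while the smoothing length is an illustrative value rather than the distance-dependent one actually adopted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_field(x, y, box=3.6, n_grid=56, r_sm=0.26):
    """Grid the positions (optical galaxies or X-ray counts) and smooth
    with a Gaussian kernel; lengths in h^-1 Mpc.  The cell size
    box/n_grid ~ 0.065 matches the text; r_sm is illustrative."""
    grid, _, _ = np.histogram2d(
        x, y, bins=n_grid, range=[[-box / 2, box / 2], [-box / 2, box / 2]])
    return gaussian_filter(grid, sigma=r_sm / (box / n_grid))
```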
### 3.2 Cluster Shape Parameters
We compute the optical and X-ray cluster shape parameters utilising the method of moments of inertia (cf. Carter & Metcalfe 1980; Plionis, Barrow & Frenk 1991; BPM). We can then write the moments as follows:
$$I_{11}=\sum _{i=1}^{N}\rho _\mathrm{i}(r_\mathrm{i}^2-x_\mathrm{i}^2)$$
(2)
$$I_{22}=\sum _{i=1}^{N}\rho _\mathrm{i}(r_\mathrm{i}^2-y_\mathrm{i}^2)$$
(3)
$$I_{12}=I_{21}=-\sum _{i=1}^{N}\rho _\mathrm{i}x_\mathrm{i}y_\mathrm{i},$$
(4)
where $`\rho _\mathrm{i}`$ is the cell density, $`x_\mathrm{i}`$, $`y_\mathrm{i}`$ are the Cartesian coordinates of the grid cells and $`r_\mathrm{i}=\sqrt{x_\mathrm{i}^2+y_\mathrm{i}^2}`$. If we now diagonalise the inertia tensor, $`det(I_{\mathrm{ij}}-\lambda ^2M_2)=0`$ ($`M_2`$ being the $`2\times 2`$ unit matrix), we can obtain the eigenvalues $`\lambda _1,\lambda _2`$, from which the ellipticity of each object can be estimated as $`\epsilon =1-\lambda _2/\lambda _1`$, with $`\lambda _1>\lambda _2`$. The eigenvectors corresponding to these eigenvalues provide us with the cluster orientations. The major axis orientation with respect to the North, in the anticlockwise direction, is the so-called cluster position angle ($`\theta `$ hereafter).
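The following sketch implements Eqs. (2)-(4) and the diagonalisation. Note that with this definition of the inertia tensor the elongation (major) axis corresponds to the eigenvector of the smallest eigenvalue; the axis conventions used for the position angle are our assumptions.

```python
import numpy as np

def shape_parameters(rho, x, y, rho_t):
    """Ellipticity and position angle from the inertia tensor of all
    cells above the density threshold rho_t (Eqs. 2-4)."""
    m = rho > rho_t
    w, xs, ys = rho[m], x[m], y[m]
    r2 = xs**2 + ys**2
    I = np.array([[np.sum(w * (r2 - xs**2)), -np.sum(w * xs * ys)],
                  [-np.sum(w * xs * ys), np.sum(w * (r2 - ys**2))]])
    evals, evecs = np.linalg.eigh(I)      # eigenvalues in ascending order
    lam2, lam1 = np.sqrt(evals)           # lam1 > lam2
    eps = 1.0 - lam2 / lam1
    # with this tensor the elongation axis is the eigenvector of the
    # SMALLEST eigenvalue; angle measured from the y (North) axis
    major = evecs[:, 0]
    theta = np.degrees(np.arctan2(major[0], major[1])) % 180.0
    return eps, theta
```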
The shape parameters are estimated using all cells that have densities above three thresholds. These are defined as the average density of all cells that fall within a chosen radius. The three radii used are $`r_\rho =0.3,0.45`$ and $`0.6h^{1}`$ Mpc. The choice of such a step size is not arbitrary, however. We have tested the robustness of our procedure using different step sizes, ranging from $`0.1h^{1}`$ Mpc to $`0.3h^{1}`$ Mpc, to find that the former is too small, since it will only locate the highest density peaks in the galaxy or hot gas distribution, while the latter is somewhat too large, since it typically registers very low density fluctuations, comparable to the background level of the cluster image. The above spatially defined procedure overcomes the difficulty of determining the density thresholds in the two intrinsically different (optical and X-ray) density distributions.
Note that for each cluster we find the highest cluster density-peak $`(x_\mathrm{p},y_\mathrm{p})`$ within a radius of $`0.5h^{1}`$ Mpc around the original APM cluster center. We then redefine the cluster center as being $`(x_\mathrm{p},y_\mathrm{p})`$ and estimate all shape parameters around this new coordinate center. Typically, this coincides with the registered APM cluster center.
We can get an idea of the regions of the clusters sampled by our procedure by inspecting Figure 1, where we plot the resulting major axes of the fitted ellipses in each distribution and for the three density thresholds. It is evident that the regions sampled in the X-ray images are more centrally concentrated, as expected.
### 3.3 Substructure Measures
#### 3.3.1 Ellipticity
It is expected that the existence of significant substructure affects the shape of the cluster in the direction of producing large ellipticities (McMillan et al. 1989; Davis & Mushotzky 1993; West et al. 1995; Gomez et al. 1997). Although this appears to be a robust prediction, a small $`\epsilon `$ does not always endorse the lack of substructure. The reasons are that (a) small-scale structure could develop symmetrically around the cluster core, and (b) a possible merger could happen along the line of sight. In both cases a small ellipticity would be measured.
#### 3.3.2 Cluster centroid shift
Evrard et al. (1993) and Mohr et al. (1993) have suggested as an indicator of cluster substructure the shift of the center-of-mass position as a function of the density threshold above which it is estimated (see also CER96; T98). Following their suggestion, we define as centroid-shift ($`sc`$) the distance between the cluster center-of-mass, $`(x_\mathrm{o},y_\mathrm{o})`$, where $`x_\mathrm{o}=\sum x_\mathrm{i}\rho _\mathrm{i}/\sum \rho _\mathrm{i}`$, $`y_\mathrm{o}=\sum y_\mathrm{i}\rho _\mathrm{i}/\sum \rho _\mathrm{i}`$, and the highest cluster density-peak, ie.,
$$sc=\sqrt{(x_\mathrm{o}-x_\mathrm{p})^2+(y_\mathrm{o}-y_\mathrm{p})^2}.$$
(5)
Notice here, that while the cluster center-of-mass changes as a function of density threshold ($`\rho _\mathrm{t}`$), above which we define the cluster shape parameters, the position $`(x_\mathrm{p},y_\mathrm{p})`$ remains unchanged. A large value of $`sc`$ may therefore furnish a first clear indication of substructure.
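In practice Eq. (5) amounts to only a few lines; the names below are illustrative.

```python
import numpy as np

def centroid_shift(rho, x, y, xp, yp, rho_t):
    """Eq. (5): offset between the density-weighted centre computed
    above rho_t and the fixed highest-density peak (xp, yp)."""
    m = rho > rho_t
    xo = np.sum(x[m] * rho[m]) / np.sum(rho[m])
    yo = np.sum(y[m] * rho[m]) / np.sum(rho[m])
    return np.hypot(xo - xp, yo - yp)
```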
In order to quantify the significance of such centroid variations against background contamination and random density fluctuations, we carry out, in a fashion similar to BPM, a series of Monte Carlo cluster simulations in which we have, by construction, no substructure. For each cluster we produce a series of simulated clusters having the same number of observed galaxies, as well as a random distribution of background galaxies determined by the distance of the cluster and the APM selection function. Furthermore, the simulated galaxy distribution follows a King-like profile:
$$\mathrm{\Sigma }(r)\propto \left[1+\left(\frac{r}{r_\mathrm{c}}\right)^2\right]^{-\alpha },$$
(6)
where $`r_\mathrm{c}`$ is the core radius, $`\alpha =(3\beta -1)/2`$ and $`\beta `$ is the ratio of the specific energy in the galaxies to the specific thermal energy in the gas. We use the mean, weighted by the sample size, of the most recent $`r_c`$ and $`\alpha `$ determinations (cf. Girardi et al. 1995; 1998), i.e., $`r_\mathrm{c}=0.085h^{1}`$ Mpc and $`\alpha =0.7`$. We do test the robustness of our results for a plausible range of these parameters. In general we find that the significance of the $`sc`$ measure decreases as $`r_c`$ increases. This is to be expected, since using a large value of $`r_c`$, for the same number of core galaxies, will increase the random density fluctuations and thus the $`sc`$ measure. Naturally, we expect our simulated clusters to generate small $`sc`$'s and in any case insignificant shifts. Therefore, for each optical cluster in our sample we perform 1000 such simulations and derive $`sc_{\mathrm{sim}}`$ as a function of the same thresholds, $`\rho _\mathrm{t}`$, as in the real cluster case. Then, within a search radius of $`0.75h^{1}`$ Mpc from the simulated highest cluster peak, we calculate the quantity:
$$\sigma =\frac{sc_\mathrm{o}-sc_{\mathrm{sim}}}{\sigma _{\mathrm{sim}}},$$
(7)
in order to measure the significance of real centroid shifts as compared to those of relaxed, mock objects. Note that $`sc_\mathrm{o}`$ is the centroid shift of the real cluster, averaged over the three density thresholds. A sketch of how such substructure-free mock clusters can be generated is given below.
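The following sketch draws one substructure-free realisation, using inverse-CDF sampling of Eq. (6) truncated at the field radius; the truncation radius and the uniform circular background are our simplifications.

```python
import numpy as np

def mock_cluster(n_gal, n_bkg, rc=0.085, alpha=0.7, rmax=1.8, rng=None):
    """Substructure-free mock: King-like profile (Eq. 6), truncated at
    rmax, plus a uniform background over the same circular field."""
    rng = rng or np.random.default_rng()
    # inverse-CDF sampling of the 2D profile Sigma(r) ~ [1+(r/rc)^2]^-alpha
    smax = (1.0 + (rmax / rc)**2)**(1.0 - alpha)
    s = 1.0 + rng.uniform(0.0, 1.0, n_gal) * (smax - 1.0)
    r = rc * np.sqrt(s**(1.0 / (1.0 - alpha)) - 1.0)
    rb = rmax * np.sqrt(rng.uniform(0.0, 1.0, n_bkg))    # uniform disc
    phi = rng.uniform(0.0, 2.0 * np.pi, n_gal + n_bkg)
    rad = np.concatenate([r, rb])
    return rad * np.cos(phi), rad * np.sin(phi)
```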
#### 3.3.3 Subgroup classes
We also utilise a friend-of-friends algorithm to categorise the observed substructure (see also section 3.2 of Rhee et al. 1991 for details). We join all cells having common boundaries that fall above each density threshold. We therefore create and register all subgroups as a function of $`\rho _\mathrm{t}`$ and rank substructure events according to the following 3 categories (see BPM and Plionis et al. 2000 in preparation):
(1) No substructure: Clusters with only one group at all density thresholds (regular, spherical systems). Systems with two unequal subgroups, where the second is much smaller than the first ($`\lesssim 10\%`$), also fall in this level.
(2) Weak or moderate substructure: Objects with one clump at the first $`\rho _\mathrm{t}`$ and multiple clumps at the next two levels (second or third threshold), where the second in size group is greater than 10% and $`\lesssim `$ 25% of the total cluster size (see also RLT92).
(3) Strong substructure: Like in (2), but now the second in size subgroup is $`\gtrsim `$ 25% of the total cluster size. Complex systems with multiple condensations at all density thresholds are naturally included in this category. Finally, a cluster with two large clumps (bimodal) at the highest density level, of which one is $`>40\%`$ of the other, falls in this category as well.
Note that in order to be consistent with the scale within which APM clusters are reliably identified and constructed (Dalton et al. 1997), we have used a radius of $`0.75h^{1}`$ Mpc as the maximum radius within which we search for subgroup statistics. We have also carried out the same analysis increasing the radius to $`1h^{1}`$ Mpc, but the results do not change appreciably. A schematic implementation of the cell linking and the classification is sketched below.
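For illustration, the cell-linking step and a deliberately simplified version of the above classification can be written as follows; the classifier keys only on the second-largest group fraction and ignores the special bimodal rule of category (3).

```python
import numpy as np
from scipy.ndimage import label

def subgroup_fractions(rho, rho_t):
    """Link cells above rho_t that share common boundaries and return
    the group sizes as fractions of the total, largest first."""
    groups, n = label(rho > rho_t)
    sizes = np.sort(np.bincount(groups.ravel())[1:])[::-1]
    return sizes / sizes.sum() if n else sizes

def substructure_class(fracs_per_threshold):
    """Simplified version of the three categories, keyed only on the
    second-largest group fraction over the three thresholds."""
    second = max((f[1] if len(f) > 1 else 0.0) for f in fracs_per_threshold)
    if second < 0.10:
        return 1          # no substructure
    return 2 if second < 0.25 else 3
```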
## 4 Comparison of Optical & X-ray Cluster Images
We investigate here the compatibility between the X-ray and optical cluster data by visually inspecting their smoothed density distributions as well as by correlating their respective shape parameters (ellipticities and position angles). The aim of our procedure is to evaluate how well the optical APM data trace the cluster potential and therefore how reliable the optical data can be in deriving the structural and dynamical parameters of clusters. This is of paramount importance, since large cluster samples exist mostly in the optical and their analysis can provide important constraints on theories of galaxy formation.
A similar comparison of optical and X-ray cluster data has been performed by Miller, Melott & Nichol (2000), who found that the optical total cluster luminosity is directly proportional to the total X-ray luminosity over several orders of magnitude in mass and luminosity (from poor groups to rich clusters). These results point in the direction that data from both parts of the spectrum can uniquely characterise the dynamical state of a cluster.
### 4.1 Isodensity maps
We plot in Figure 2 the smoothed APM galaxy distribution as greyscale maps and the X-ray data as isophote contour maps. The three X-ray contours correspond to the density thresholds defined in section 3.2. As expected, clusters in the X-ray appear to be more concentrated around the central potential wells, thus having rounder contours than their optical counterparts. Instead, the optical data depict more distinct structure (in the form of groups of galaxies) around the core, although this could, in some cases, be due to projection effects. A careful comparison with the X-ray maps shows that a few of the clusters in the optical may suffer from such problems, although in some cases galaxy groups, visible only in the optical, may be weak X-ray emitters and hence absent from the X-ray images. Nevertheless, the majority of our clusters show a nice agreement between both parts of the spectrum. In 17 out of 22 systems, the gross features (double and secondary components, elliptical structures, multimodal objects and single, relaxed configurations) of the mass distribution are apparent in both images. This is an indication that galaxies and groups of galaxies do trace the hot gas distribution in most of the cases ($`\sim 80\%`$). In only five systems (A2717, A3112, A3897, A3093, A3921) do we observe significant apparent substructure in the optical which is (almost) undetectable in the X-rays.
In some of the 17 clusters with relatively good optical and X-ray image correspondence, there is evidence of recent merger events. Zabludoff & Zaritsky (1995) and Baier et al. (1996) suggest that a substantial spatial difference between the optical and the X-ray peak positions, together with the X-ray peak being distorted in an orthogonal direction with respect to the line connecting the two main optical cluster clumps, signify two undeniable collision vestiges. The reason is that gas, unlike galaxies and DM, is collisional. This has been borne out also by simulations (Evrard 1990; Roettiger et al. 1993), which have shown that during the collision of two groups the gas is stripped and remains in disequilibrium for $`\sim 1h^{1}`$ Gyr afterwards. Therefore, the criterion of a spatial displacement between the optical and X-ray peak positions seems to be particularly sensitive to the value of $`\mathrm{\Omega }_{\circ }`$, since it overcomes the known uncertainty of the cluster relaxation time which hampers the cosmological interpretation of the existence of cluster substructure. Such indications, with varying strength, are apparent in A3128, A2804, A3223 and A500, where the corresponding differences between the optical and X-ray cluster peaks ($`dp`$) are of order $`0.42h^{1}`$ Mpc, $`0.58h^{1}`$ Mpc, $`0.62h^{1}`$ Mpc and $`0.3h^{1}`$ Mpc respectively. The argument of orthogonality suits A3128 best, as is further evident from the large misalignment angles at all $`\rho _\mathrm{t}`$'s between the optical and the X-ray mass distributions.
### 4.2 Position angles
As a first quantitative test of the compatibility between the X-ray and optical images, we correlate in Figure 3 their respective cluster major axis orientations. It is evident that there is a good correlation, with coefficient $`\sim 0.8`$ and probability of no correlation $`\mathcal{P}\sim 10^{5}`$. No cluster has $`\delta \theta \gtrsim 60^{\circ }`$, while the mean misalignment angle between the cluster optical and X-ray defined position angles is $`\delta \theta \simeq 28^{\circ }\pm 18^{\circ }`$. Excluding the five clusters that we have identified as having discordant optical and X-ray morphological features, we find a similar correlation coefficient but with a slightly lower significance ($`\mathcal{P}\sim 10^{4}`$), which is mostly due to the reduction of the sample.
Davis & Mushotzky (1993) have argued that considerable variations of cluster orientations as a function of distance from the cluster center could be considered as evidence of merger events. If, however, subgroups dominantly develop along the filamentary structure in which the cluster is embedded, then variations in the cluster position angle could be negligible and thus significant subclumping can escape detection (see West 1995; West et al. 1995). In Figure 4 we examine possible variations in cluster orientations as a function of $`\rho _\mathrm{t}`$ and thus as a function of distance from the cluster center. For each cluster we connect the position angles estimated at the three density thresholds, and thus at three different distances from the cluster center. It is evident that only a few clusters exhibit evidence for such an effect in their X-ray images, which however should be attributed to their weak position angle determination (due to their extremely small ellipticity).
### 4.3 Ellipticities
We calculate cluster ellipticities in both data sets as a function of $`\rho _\mathrm{t}`$. The cluster $`\epsilon `$'s defined in the X-rays are generally slightly smaller than their optical counterparts, corresponding to more spherical configurations (especially for single-component clusters). This is expected when the inter-cluster gas is in hydrostatic equilibrium and the dark matter is distributed like the galaxies. However, in the cases where we have evidence of recent merger events (A3128, A2804, A3223), we observe that the X-ray ellipticity is typically larger than the corresponding optical one. Due to these conflicting trends, the correlation between optical and X-ray ellipticities is rather weak, as can be seen in Figure 5. Excluding the five systems with discordant optical and X-ray morphologies (A2717, A3112, A3897, A3093, A3921) and A3128, in which due to a recent merger there is a strong differentiation between its X-ray and optical morphology, we find a correlation coefficient of $`\sim 0.7`$, with the probability of zero correlation being $`\mathcal{P}\simeq 3\times 10^{3}`$.
Another characteristic is that for most clusters (17/22) the optical $`\epsilon `$ is a weakly decreasing function of cluster-centric distance, as can be seen in Figure 6, which should be attributed mostly to the effect of background galaxies projected onto the area covered by the cluster. In the X-ray images such an effect is evident in only 9 clusters, most of which, however, are those having large ellipticity. Visual inspection of these clusters shows that the effect is not artificial and should be attributed to bimodality at the highest threshold or to the existence of significant substructure located in the central parts of the clusters (cf. A3128, A3223, A2933 etc.).
For A2717, A3112, A3897 and A3921 the large $`\epsilon _\mathrm{o}`$ does not correspond to a similarly large $`\epsilon _\mathrm{x}`$, supporting the view that these clusters may suffer from optical projection effects that have altered their true 2D structure (see also section 4.1).
A particularly interesting example is the noisy and apparently highly unrelaxed A3144 cluster. However, its relatively small value of $`\epsilon `$ could possibly be attributed to the development of small-scale structures symmetrically around the cluster core or to the existence of significant noise.
## 5 Substructure Results
We present in this part the results of all the substructure tests that we have applied to the optical and X-ray cluster data. The results are tabulated in Table 2, together with the cluster shape parameters discussed in the previous section.
### 5.1 Centroid shifts
We calculate $`sc`$, ie., the difference between the position of the weighted cluster center and the highest density peak, as a function of $`\rho _\mathrm{t}`$ within a radius of $`0.75h^{1}`$ Mpc (equation 5). In Figure 7 we correlate the optical and X-ray estimated centroid shifts. We observe a very good correlation between the relative cluster centroid variations for the 17 clusters that have concordant morphological features in both the optical and X-ray data (correlation coefficient $`\sim 0.8`$ and $`\mathcal{P}\simeq 3\times 10^{4}`$). As expected, the significance of the optical $`sc`$ (equation 7) is well correlated with the value of the optical and X-ray $`sc`$ ($`R\simeq 0.9`$ and $`\simeq 0.8`$, respectively).
These results indicate that the $`sc`$ measure and its significance are useful tools to characterize the degree of cluster substructure using optical or X-ray data.
Most of the five clusters that are suspected of being affected by projection effects in the optical (A2717, A3112, A3921) have a large $`sc`$ value (only in the optical) and a correspondingly high-significance indication of subclustering. This fact further suggests that these clusters suffer from projection effects. The cases of A3897 and A3093 are somewhat different, as is evident from Figure 2. Although there is evidence of projection effects, it appears that substructure emerges in a symmetric fashion around the optical cluster core. Such an effect is more pronounced in the optical than in the X-ray images, thus producing, as expected, a small $`sc`$ value.
### 5.2 Subgroup statistics
Using the categories defined in section 3.3.3 and also a maximum search radius of $`0.75h^{1}`$ Mpc, we obtain 6 clusters falling into the first category of relaxed objects, 6 clusters in the category of systems displaying partial (weak) substructure, and 10 clusters belonging to the third case, within which clusters exhibit obvious and concrete indications of subclumping events. Note that 4 clusters of the strong substructure case are the ones affected by projections (A3093, A3112, A3897, A3921). In the last column of Table 2 we show the subgroup index corresponding to each cluster.
The results of this analysis are in good broad agreement with the other substructure measures ($`\epsilon `$'s and $`sc`$'s), with differences only in a few cases. The vast majority of clusters (19 out of 22) show that their subgroup index does agree, at least qualitatively, with the moments analysis that we have applied in this work (section 3). Nevertheless, in 3 cases (A3266, A3301, A2384) the results imply discrepancies between our percolation-like algorithm and the method of moments. On checking the parameter space of both techniques, we have found that while the latter stays firm, the former is extremely sensitive to slight alterations of the geometric definition of our classes (section 3.3.3).
In line with the above arguments, we suggest that it is wise to use these results on an advisory level and not as definitive.
### 5.3 Comparison of different substructure tests
In Figure 8 we correlate ellipticities and $`sc`$'s for all 22 clusters of our sample. The upper panel corresponds to the optical and the lower to the X-ray data. It is evident that the two substructure measures are correlated in both sets of data. The correlation coefficients are $`\sim 0.8`$ and $`\sim 0.5`$ for the X-ray and optical data respectively, with the probability of zero correlation being $`\mathcal{P}\sim 10^{4}`$ for the X-ray and $`\mathcal{P}\simeq 0.05`$ for the optical data, respectively.
Furthermore, cross-correlating, in Figure 9, the optical $`sc`$'s and X-ray ellipticities, we also find a significant correlation, with $`\mathcal{P}\lesssim 10^{4}`$ and $`R\simeq 0.82`$, if we exclude the clusters with discordant morphologies. We find that the line of best fit is given by:
$$\epsilon _\mathrm{x}\simeq 2.55(\pm 0.47)sc_\mathrm{o}+0.07(\pm 0.06),$$
from which we can deduce the shape of the X-ray emitting ICM region directly from optical cluster data.
These results imply that the flattening of clusters, as evident in the X-rays as well as in the optical, is a result of their dynamical activity and not due to initial conditions; for example, even high peaks of an underlying Gaussian random field are aspherical, although less so than lower density peaks (cf. Bardeen et al. 1986).
### 5.4 Main Results
From the total of 22 clusters, at least in 8 cases we observe strong substructure signatures, verified by all available methods and data (A2804, A2933, A3128, A3144, A3223, A3266, A514, A2384). Of the remaining population we have confirmed, using all our substructure indications, that 9 clusters (A2734, A2811, A133, A3111, A500, A3158, A3301, A2580, A4059) show no or weak substructure activity in both parts of the spectrum (although there is some evidence of a recent merger event in A500). We have also found that 5 clusters (A2717, A3112, A3921, A3093, A3897) appear distinctly bimodal (or even multimodal), although their X-ray contour maps are almost relaxed, which we attribute to projection effects in the optical. Of these, A2717, A3112 and A3093 seem to be single-component (relaxed) clusters (based on their X-ray images), while A3921 and A3897 show clear indications of elliptical systems. Therefore we find that $`\gtrsim 45\%`$ of clusters exhibit significant evidence of substructure, which is in good general accordance with most substructure analysis results (see references in the Introduction).
Note that out of our 22 clusters, 11 (50%) have been examined for substructure signals elsewhere in the literature (A2717, A133, A3158, A3128, A3266, A500, A514, A2384, A3897, A3921, A4059). We find that our results are in general good qualitative and quantitative agreement with those of other studies (see Appendix). Although mostly different methods have been employed in these studies (wavelet transform analysis, isophotal maps, power ratios, kinematical and velocity dispersion estimators), we do not detect any serious disparities regarding the dynamical and shape parameters for the X-ray clusters that we have in common. For example, we find our position angles always within $`10^{\circ }`$ of the other determinations.
### 5.5 Morphological and dynamical classification
We can now proceed to classify the present sample according to a scheme that is close to the one developed by JF99 (see also Forman & Jones 1990; Jones & Forman 1992; Forman & Jones 1994 and Girardi et al. 1997, hereafter G97). To be consistent with the largest and most complete of the above analyses (JF99), we will restrict ourselves only to the X-ray cluster images, which we regard as more reliable in terms of the current assessment (see Table 3).
Clusters with no or marginal evidence of subclumping events are dubbed single, relaxed objects and given the symbol U, featuring unimodality. In this category we count 12 X-ray clusters. Looking at the strong substructure cases, we have 3 bimodal (B) objects (A2804, A2933 and A3128), of which the first two show their respective optical doubles, while the optical A3128 appears to be somewhat closer to a complex system. We characterise as complex or multimodal (M) systems that display more than two clumps in their contour maps. Typical such examples are A3144, A3223 and A514, also authenticated by inspection in the optical. Objects showing apparent deviations from sphericity, also having large $`\epsilon `$'s, without depicting obvious small-scale structures, are flagged as elliptical (A3266, A2384 and A3921). The latter three clusters, in the absence of any other substructure characteristic, are tagged as E's. Finally, A3897 is tagged as P, characteristic of a large central region (primary component) associated with a small secondary structure (left of the X-ray image), visible in both data sets. Note here that its optical counterpart would have been flagged as complex, since it shows two extra small-scale clumps at the top and right of the contour plot (see Figure 2).
## 6 Concluding remarks
We have investigated a sample of 22 galaxy clusters using optical (APM) and X-ray (ROSAT) data with the aim of addressing two questions: (a) Do optical and X-ray data reveal the same cluster morphological features? (b) What is the percentage of relaxed and dynamically active clusters?
Our cluster sample results from a cross-correlation between the APM, ACO and ROSAT pointed-observation data and has a depth distribution with $`z\lesssim 0.13`$. We have examined our cluster sample utilising several cluster-morphology diagnostics, such as isodensity maps, orientations, ellipticities, centroid variations and subgroup statistics. Looking at the isodensity contour maps, the cluster orientations and the ellipticities, we observe a remarkable 1-to-1 correspondence between the X-ray and optical data in $`\sim 80\%`$ of our sample, regarding the gross cluster characteristics (prime structures, elongations, multimodality, collision signatures). We quantify this by correlating the optical and X-ray cluster ellipticities and orientations, and we find high and statistically significant correlations. In an attempt to quantify the compatibility of the different substructure measures that we have used, we correlate the $`\epsilon `$'s and $`sc`$'s in the optical and in the X-rays separately, and we also cross-correlate them, finding significant and strong correlations. This implies that the flattening of clusters is indeed due to their dynamical activity.
From our substructure analysis we find that 10 out of 22 systems ($`\sim 45\%`$) display strong substructure indications, visible in both parts of the spectrum. We also find that 5 clusters ($`\sim 22\%`$) show clear disparities between the optical and X-ray maps, with apparent substructure in the optical not corroborated by the available X-ray data. This is most probably due to optical projection effects. Our results on the frequency of disordered clusters concur with most of the relevant studies published to date.
On the understanding that our catalogue is rather inadequate for drawing significant cosmological conclusions, we would prefer to be cautious when it comes to such a precarious task. Nevertheless, we observe that our present analysis is compatible with that of RLT92 (see their Figure 2) regarding the cluster substructure frequency, setting a rather frail lower limit on the density parameter ($`\mathrm{\Omega }_{\circ }\gtrsim 0.5`$; see also West 1995; West et al. 1995). Furthermore, the relatively large fraction (4/22) of recent mergers that we have identified is again indicative of a high-$`\mathrm{\Omega }_{\circ }`$ Universe (see discussion of Zabludoff & Zaritsky 1995).
In the near future we plan to apply the methodology of this work to $`\sim 900`$ APM galaxy clusters, in order to investigate in more detail the issue of cluster substructure.
## Acknowledgements
Both S. Basilakos and V. Kolokotronis acknowledge financial support from the Greek State Fellowship Foundation. M. Plionis acknowledges the hospitality of the Astrophysics Group of Imperial College, where this work was completed. This research work has made use of the NASA Extragalactic Database (NED). The cluster data have been obtained through the LEDAS online service, provided by the University of Leicester.
## Appendix A Details on individual clusters
A2717: Obvious similarity of the primary cluster peak ($`0.4h^{1}`$ Mpc) but disparity in the secondary structure, which is only distinct in the optical image. Suspect of being affected by projection effects. Eight X-ray and radio sources have been removed due to their point-like nature (see also Slezak et al. 1994; Mohr et al. 1995; BT96; G97).
A2734: Apparently good correspondence between the two images. Two radio point-like sources (center and upper left) have been subtracted from the X-ray image.
A2804: Highly elongated cluster with large and significant $`sc`$'s in both images. A collision signature is visible, since the primary optical cluster structure seems to be displaced with respect to its X-ray analogue by $`0.58h^{1}`$ Mpc. Two radio point-like sources have been excised from the X-ray image.
A2811: A relaxed, single-component system corroborated by both data sets, typical of a unimodal configuration. No sources have been removed here.
A133: Probably a confirmed unimodal cluster by all substructure measures (Mohr et al. 1995; BT96), showing small values of ellipticities and $`sc`$'s. Optical and X-ray $`\theta `$'s differ by more than $`64^{\circ }`$. One projected X-ray source at $`z\simeq 0.235`$ has been subtracted (EXO 0059.8-2218).
A2933: This is the archetype of a bimodal cluster, as computed from both data sets. Highly significant $`sc`$'s, large ellipticities ($`\simeq 0.5`$) and similar $`\theta `$'s, granting excellent 1-to-1 correspondence. One point-like source has been removed from the upper right of the X-ray image.
A3093: Four point-like sources have been excised from the X-ray data (center bottom and left). This system has mediocre ellipticities and non-significant centroid shifts. It also yields $`\delta \theta \simeq 54^{\circ }`$. Suspect of being affected by projection effects.
A3111: Partial substructure activity is present at very low density thresholds. If taken at face value, the system would have been flagged as complex. The isodensity contour plots are very similar, whereas its respective $`\delta \theta `$ is more than $`45^{\circ }`$. Like the previous one, and in the absence of any distinct activity, it is regarded as a relaxed object. Five point-like sources have been removed from the X-ray cluster map.
A3112: No sources were subtracted here. This is the archetype of image disparity. Obvious bimodality in the optical (a $`5.2\sigma `$ $`sc`$ event) corresponds to definite unimodality in the X-rays. Excluded from the cross-correlation statistical analysis (together with A2717, A3897, A3921 and A3093) as being suspect of optical projections.
A3128: One radio point-like source (PMN J0331-5242) has been removed (lower right of the X-ray map). Dissimilar cluster orientations and ellipticities are typical of a collision event between the prime cluster structures. Despite that, it exhibits highly significant $`sc`$'s ($`\sim 3\sigma `$) in both data sets. In the G97 analysis this cluster is dubbed a unimodal object on a scale of $`1h^{1}`$ Mpc. Notice that the latter study is based on an entirely different approach than the one developed here.
A3144: This is a typical complex system. Multiple peaks associated with marginally significant $`sc`$'s and medium $`\epsilon `$'s, due to symmetrically developed structures around the central cluster potential wells. No sources have been excluded here.
A3158: A single-component cluster, well aligned and also showing small values of ellipticities and insignificant $`sc`$'s (Slezak et al. 1994; Mohr et al. 1995; BT96; G97). The optical $`sc\simeq 0.14h^{1}`$ Mpc is only a $`2\sigma `$ substructure event. No sources have been removed.
A3223: Typical complex system displaying a large $`\epsilon \simeq 0.5`$ and the largest $`sc\simeq 0.38h^{1}`$ Mpc of the whole sample. The optical peak appears to be largely displaced with respect to the X-ray one ($`dp\simeq 0.62h^{1}`$ Mpc), a fact that signifies a collision vestige. Four radio point-like sources have been subtracted from the X-ray map (upper left, right and lower left).
A3266: We have classified this cluster as elliptical in the absence of other distinct features. The optical and X-ray images are well aligned. No sources were excised (see also Mohr et al. 1993; 1995; BT96; G97; de Grandi & Molendi 1999).
A500: Good correspondence between the optical and X-ray isodensity maps. It shows weak substructure which is rather insignificant. However, there are indications of a possible recent merger (displacement of the optical and X-ray peaks by $`dp\simeq 0.3h^{1}`$ Mpc). No sources have been removed from the X-ray map. Similar studies (Mohr et al. 1995; BT96) show evidence of a unimodal cluster, at least within $`0.5h^{1}`$ Mpc, in all substructure properties.
A514: Another classical multimodal system with distinct density peaks in both maps. Large ellipticities are accompanied by analogously large and statistically significant $`sc`$'s ($`\simeq 5\sigma `$). Two X-ray and two radio point-like sources have been omitted here. This is the complex archetype in most of the published works up till now (see also West et al. 1995; BT96; Bliton et al. 1998; JF99).
A3301: Two point-like sources have been subtracted from the left of the X-ray map. Definitely unimodal in the X-rays but slightly elongated and multimodal in the optical, although with an insignificant $`sc`$. Relatively good 1-to-1 correspondence of the contour maps.
A2384: This is the archetype of an elliptical cluster. A high quality HRI image ($`t_{\mathrm{exp}}>`$ 7 hrs) which exhibits large $`sc`$'s and $`\epsilon `$'s in both images. However, the $`sc`$ values are only marginally significant. No sources have been removed in this case (see also McMillan et al. 1989; West et al. 1995).
A3897: An object with distinct substructure in both data sets, typical of category P (see the X-ray map). Notwithstanding that, it seems multimodal in the optical. The optical $`sc`$ is non-significant, but it is apparent that it is somewhat underestimated due to symmetric and equally-sized structures developing around the central core. The same reasoning fully explains the low $`\epsilon _\mathrm{o}`$. Suspect of being affected by projection effects. Three point-like sources have been excised at the lower left of the X-ray image (see also Gomez et al. 1997).
A3921: A definitely optically complex object which is seemingly unimodal in the X-rays. There are, however, traces of elongation in the X-ray contour plot, only at the lower $`\rho _\mathrm{t}`$. This extension appears to be in the exact direction of the secondary optical structure, which is not visible in the X-ray map. As a result, both data maps seem to be well aligned, and we have therefore classified A3921 as an elliptical object, but we have also considered it as being affected by projection effects in the optical. Three point-like sources (center and lower left) have been removed from the X-ray cluster (see also Mohr et al. 1995; BT96).
$`\mathrm{๐๐๐๐๐}`$: Within $`0.5h^{-1}`$ Mpc of the highest cluster peak, this cluster appears relaxed and unimodal in both images. There is some evidence of substructure from the optical $`sc`$ (non-significant) and the ellipticity, and the maps appear slightly misaligned ($`\delta \theta \sim 47^{\circ }`$). No sources have been subtracted from this system.
$`\mathrm{๐๐๐๐๐}`$: This is another typical unimodal system: a low-$`sc`$ object which displays remarkable accordance between the isodensity maps and $`\delta \theta \sim 27^{\circ }`$. No sources have been removed. It is also classified as a single-component cluster by other analyses (cf. Slezak et al. 1994; Mohr et al. 1995; BT96; G97).
# The HI shell G132.6−0.7−25.3: A Supernova Remnant or an Old Wind-Blown Bubble?
## 1 Introduction
Hot, massive stars have a major impact on the surrounding interstellar medium (ISM), not only at the end of their lives when they become supernovae but also throughout their more stable phases of evolution via their strong stellar winds. The winds of O-type stars can inject as much energy into the ISM over their main-sequence life as their final explosion does, and should therefore have an equally great impact on the structure and energization of the ISM.
Typically, wind-blown shells have been found by looking at the environments of stars known to have strong stellar winds, e.g. O stars and Wolf-Rayet stars (e.g. Benaglia & Cappa 1999, Marston 1997, Miller & Chu 1993, Dubner et al. 1990), but a neutral shell may continue to exist after the central star has evolved off the main sequence and lost the power to ionize it. Supernova remnants (SNRs) are thought to be visible in the radio continuum for only a few to several tens of thousands of years (Braun et al. 1989, Frail et al. 1994). As this is considerably less than the time they take to merge with the ambient ISM ($`>1`$ Myr), there should be many SNRs consisting of shells of neutral gas. Large-scale, low-resolution surveys have revealed the presence of HI supershells (e.g. Heiles 1984), most likely created by stellar clusters and associations, but could not bring to light the smaller yet likely numerous neutral shells created by single stars, which are an important part of the Galaxy's zoology.
The Canadian Galactic Plane Survey (CGPS; Higgs 1999 and Taylor 1999) offers the first opportunity to study a large collection of HI shells, as opposed to supershells, to determine their dynamics and how they relate to and impact on the surrounding ISM. A few such objects were serendipitously discovered in the pilot project (Normandeau et al. 1997; hereafter NTD97). One of these, G132.6−0.7−25.3, will be presented in detail here as an illustrative case study of this class of objects. It is a striking feature within the HI data cube, developing over several spectral channels, at velocities generally associated with interarm gas.
The following section briefly outlines the observations and processing of the data. Section 3 provides a description of the structure at several wavelengths. In §4, the possible location of the shell is discussed. The next section considers the stars present in this vicinity as possible energy sources for a wind-blown bubble. In §6, all these elements are brought together for analysis and conjecture. A summary and conclusions are given in §7.
## 2 The data: observations and processing
Radio continuum data at 408 MHz and 1420 MHz as well as 21 cm spectral line data were obtained at the Dominion Radio Astrophysical Observatory (DRAO) as part of the CGPS pilot project. The pilot project covered an 8° × 6° area of the sky, encompassing all of the W3/W4/W5/HB3 Galactic complex. Observations were carried out in June, July, November and December of 1993. Details of observations and data reduction are given in NTD97, except for the 1420 MHz continuum polarisation data, which are treated by Gray et al. (1999). Table 1 summarizes the observational parameters for the DRAO data.
The CGPS also comprises other data sets which have been reprojected and regridded to match the DRAO images. Among them is the FCRAO CO Survey of the Outer Galaxy which is described by Heyer et al. (1998).
## 3 Description
### 3.1 The HI structure
In the HI images at velocities of approximately −25 km s$`^{-1}`$ there is a well-defined ring of enhanced emission, presumably a shell of atomic hydrogen. This shell is centred at (l, b) = (132.62°, −0.72°) and will henceforth be referred to as G132.6−0.7−25.3. Figure 1 presents a subsection of the HI mosaics for the relevant velocity interval.
For a complete expanding shell, the varying line-of-sight component of the expansion velocity from projected centre to rim will result in the constant-velocity images in a data cube showing a progression from a small filled ellipse (the receding cap), through annuli of progressively larger radii, then decreasing radii back to a small filled ellipse (the approaching cap). G132.6−0.7−25.3 appears to develop from a cap at $`v=-15.37`$ km s$`^{-1}`$ to a complete ring at −25.27 km s$`^{-1}`$. As velocities become more negative it does not progress back to a cap.
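This cap-to-ring progression is pure projection geometry: for an idealized thin shell of radius $`R`$ expanding at $`v_{\mathrm{exp}}`$ about a systemic velocity $`v_0`$, the emission in the channel at velocity $`v`$ falls on a ring of projected radius $`r(v)=R\sqrt{1-[(v-v_0)/v_{\mathrm{exp}}]^2}`$. The sketch below is only an illustration of this relation, using the shell parameters estimated later in this paper (systemic velocity −25.27 km s$`^{-1}`$, expansion 9.9 km s$`^{-1}`$, radius ≈35 pc at 2.2 kpc); it is not part of the original analysis.

```python
import math

def ring_radius(v, v0, v_exp, R):
    """Projected ring radius of a thin shell expanding at v_exp about
    systemic velocity v0, as seen in the spectral channel at velocity v."""
    x = (v - v0) / v_exp                     # line-of-sight fraction of v_exp
    return R * math.sqrt(max(1.0 - x * x, 0.0))

# Assumed shell parameters (estimated elsewhere in this paper):
v0, v_exp, R = -25.27, 9.9, 35.0             # km/s, km/s, pc
for v in (-15.4, -18.0, -21.0, -25.27):      # selected channel velocities
    print(f"v = {v:7.2f} km/s -> r = {ring_radius(v, v0, v_exp, R):5.1f} pc")
# The ring shrinks to a cap near v0 + v_exp = -15.37 km/s and reaches full
# extent at the systemic velocity, as observed for G132.6-0.7-25.3.
```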
At maximum extent G132.6−0.7−25.3 is approximately elliptical. The major axis, which is perpendicular to the Galactic plane, measures 110.4 arcmin and the minor axis 95.2 arcmin; this is equivalent to 71 pc by 61 pc for a distance of 2.2 kpc (but see §4 for a discussion of possible distances). There is a hint that the structure is slightly ovoid, being wider nearer the plane. For the half of the shell from −15.37 km s$`^{-1}`$ to −25.27 km s$`^{-1}`$, the total flux above the background level is $`390\pm 10`$ Jy.
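As a check on the quoted linear sizes, the conversion is simply $`\theta d`$ in the small-angle limit; a minimal sketch (the helper function is ours, not part of the CGPS pipeline):

```python
import math

def arcmin_to_pc(theta_arcmin, d_kpc):
    """Small-angle conversion of an angular size to a linear size in pc."""
    theta_rad = theta_arcmin * math.pi / (180.0 * 60.0)
    return theta_rad * d_kpc * 1000.0        # 1 kpc = 1000 pc

print(arcmin_to_pc(110.4, 2.2))   # major axis: ~70.7 pc, i.e. the quoted 71 pc
print(arcmin_to_pc(95.2, 2.2))    # minor axis: ~60.9 pc, i.e. the quoted 61 pc
```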
Within the shell, at velocities where it is at full extent, there is an HI filament. At −25.27 km s$`^{-1}`$ it diagonally traverses most of the shell. At more negative velocities it persists, along with a section of the western edge of the shell, forming a U-shaped structure. It is centred on (132.3°, −0.8°).
### 3.2 Counterparts at other frequencies
#### 3.2.1 Radio continuum
Figure 2 shows the corresponding area from mosaics of the radio continuum emission at 408 MHz and 1420 MHz. There is no corresponding ring structure in the radio continuum at either frequency.
Figure 3 shows images of the polarized radio emission from the region of the shell. The emission is displayed in two equivalent forms: Stokes Q and U images, and polarized intensity and polarization angle. Highly structured emission is seen within G132.6−0.7−25.3. Outside the ring, to the northeast and northwest, the polarised intensity vanishes.
Gray et al. (1999) discuss the observations of the polarised emission from this region. Polarized structures on arcminute to degree scales are shown to arise from line-of-sight variations in the Faraday rotation of the diffuse Galactic synchrotron radiation field. The Faraday screen of varying magnetic field strength and ionized gas density is located primarily in the diffuse interstellar medium of the Perseus arm. The emergent radiation exhibits angular structure in the polarisation angle of the polarised component. The area to the north of G132.6−0.7−25.3 is depolarized due to the high electron density (and thus Rotation Measure) in the ionized halo of the W3/W4 HII region complex (Gray et al. 1999). The appearance of polarised structures along lines of sight within the ring suggests that the bubble is isolated from the depolarising effects of W3/W4, perhaps because of the surrounding protective shell of neutral gas at the rim.
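For reference, the Faraday rotation invoked here follows the standard law $`\chi (\lambda )=\chi _0+\mathrm{RM}\lambda ^2`$, with $`\mathrm{RM}=0.812n_eB_{}L`$ rad m$`^{-2}`$ for a uniform screen ($`n_e`$ in cm$`^{-3}`$, line-of-sight field in $`\mu `$G, path length in pc). The numbers below are purely illustrative (they are not fitted values from Gray et al. 1999) but show how a modest electron-density enhancement produces large angle rotations at 21 cm:

```python
import math

def rotation_measure(n_e, B_par, L_pc):
    """RM [rad/m^2] of a uniform Faraday screen: n_e [cm^-3], B_par [uG], L [pc]."""
    return 0.812 * n_e * B_par * L_pc

lam = 0.21                      # m, wavelength of the 1420 MHz observations
for n_e in (0.03, 0.1, 0.3):    # illustrative electron densities only
    rm = rotation_measure(n_e, 2.0, 100.0)   # assumed 2 uG field over 100 pc
    print(f"n_e = {n_e:4.2f} cm^-3: RM = {rm:6.1f} rad/m^2, "
          f"rotation at 21 cm = {math.degrees(rm * lam**2):6.1f} deg")
```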
It is noteworthy that the polarised emission shows structures that are elongated in the northeast-southwest direction and coincident with the HI filament that crosses the centre of the bubble (see Figures 1 and 6). This similarity suggests either a diffuse electron component mixed in with the HI filament or that magnetic fields play a role in the structure of this low-density environment.
#### 3.2.2 Infrared
The IRAS infrared data have been searched for dust counterparts to the atomic hydrogen structure. The 60 $`\mu `$m and 100 $`\mu `$m images of this area are shown in Figure 4. Within the shell, in projection, there is a clumpy plateau of infrared emission at 60 $`\mu `$m and 100 $`\mu `$m which falls off rapidly at the HI boundary.
#### 3.2.3 Molecular
Within the velocity interval and region of the shell there are no extended molecular gas structures present in the FCRAO CO Survey of the Outer Galaxy. There are relatively compact sources projected onto the eastern rim of the shell at (133.19°, −0.34°) and (133.23°, −0.62°), at velocities of approximately −25.2 km s$`^{-1}`$. There is also a compact molecular cloud in the lower rim of the shell at (132.47°, −1.37°, −28.5 km s$`^{-1}`$). The velocities suggest that these are not merely along the line-of-sight towards the rim but are in fact within it.
### 3.3 Summary of morphology
It can always be argued that any 'object' seen in HI images is but a chance superposition of unrelated regions of emission; in this case, however, the accumulated evidence is reassuring. The gradual progression from cap to full extent in the HI images, the fall-off of infrared emission outside the shell (except to the northeast), and the clear difference between the inside and the outside of the shell in the polarisation images (particularly in the polarised intensity) all combine to show that G132.6−0.7−25.3 is indeed a single, coherent structure.
## 4 Distance
Assigning a distance, or even a relative position along the line-of-sight, to G132.6−0.7−25.3 is not an easy task. Different possibilities emerge depending on the observational facts considered and the assumptions made.
From the average velocity-longitude plot in Fig. 5 (top panel), the shell's velocity would place it in the interarm region if each of the main bands of emission is identified with an arm. However, the HI distribution varies significantly over the latitude range covered by the pilot project. Judging from the velocity-longitude plot for b = −1.0°, the shell would be at the outer edge of the Local HI. As for kinematic distances, the shell is completely developed at −25.27 km s$`^{-1}`$. Assuming a flat rotation curve with $`A=14`$ km s$`^{-1}`$ kpc$`^{-1}`$ and $`R_0=8.5`$ kpc, one finds $`d_{\mathrm{kin}}=1.7`$ kpc (Burton 1988). Using the best-fit rotation curve from Fich et al. (1989), one finds 2.0 kpc.
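A sketch of the kinematic-distance arithmetic, under the linear (Oort) approximation $`v_{\mathrm{LSR}}\approx -2A(R-R_0)\mathrm{sin}l`$ to a flat rotation curve (the approximation is our choice for illustration; Burton 1988 gives the full treatment), reproduces the 1.7 kpc figure:

```python
import math

A, R0 = 14.0, 8.5          # Oort constant [km/s/kpc] and solar radius [kpc]
l = math.radians(132.62)   # Galactic longitude of the shell centre
v = -25.27                 # LSR velocity [km/s] at full extent

# Linear approximation to a flat rotation curve: v = -2A (R - R0) sin(l)
R = R0 - v / (2.0 * A * math.sin(l))
# Heliocentric distance from R^2 = R0^2 + d^2 - 2 R0 d cos(l), outer-Galaxy root
d = R0 * math.cos(l) + math.sqrt(R**2 - (R0 * math.sin(l))**2)
print(f"R = {R:.2f} kpc, d_kin = {d:.2f} kpc")   # -> R ~ 9.7 kpc, d ~ 1.7 kpc
```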
As was mentioned in the previous section, G132.6−0.7−25.3 develops from a cap to full extent over a range of velocities but does not progress back to a cap, i.e. only half a shell is seen. The missing second half implies that the shell must lie on a density gradient along the line-of-sight. A location on the edge of a spiral arm could account for the absence of the second half: the shell would have expanded more freely in that direction and would have fragmented and dispersed into the less dense medium. In this context, if the shell is now static, the missing second half indicates that it is on the outer edge of the Local arm. However, if the shell is expanding, then the less negative velocities correspond to material which is moving away from us, and the suggestion is then that G132.6−0.7−25.3 is on the near side of the Perseus arm. Alternately, the missing second half of the shell could indicate that the expansion in that direction was forestalled when it encountered a region of higher density. If this is the case, then the shell would either be near the edge of the Local arm, the higher density of which would have halted the expansion in that direction, or just past the high-density shocked region of the Perseus arm. A cap would not be visible in this scenario because the material in the second half of the shell would be indistinguishable from the 'wall' of higher density material which prevented its expansion.
There is absorption associated with the W3 HII region out to −50 km s$`^{-1}`$ (see e.g. Normandeau 1999), approximately twice the velocity of the shell at full extent. If both the shell and the absorbing gas at −50 km s$`^{-1}`$ are following the rotation curve of the Galaxy, then W3 is further away and therefore G132.6−0.7−25.3 is at a distance substantially less than 2.2 kpc, the distance adopted here for W3. If, on the other hand, the gas producing the absorption at −50 km s$`^{-1}`$ is the shocked gas prescribed by the Two-Armed Spiral Shock model (TASS; Roberts 1972), then the shell is slightly further than W3, assuming that it follows the rotation of the Galaxy along with the gas at the position of W3 at these velocities. In the TASS model, the high-density gas in the Perseus arm has been accelerated from its standard rotation curve velocity of approximately −20 km s$`^{-1}`$, and gas at −25 km s$`^{-1}`$ or so would be undisturbed gas located a little farther than the shocked gas; the presence of emission rather than absorption at −25 km s$`^{-1}`$ in the spectrum towards W3 supports this idea (Normandeau 1999).
At the velocity where the shell is most clearly seen there is also interaction between the western edge of the W5 HII region and the HI, and there is HI apparently associated with HB3 from −25.27 km s$`^{-1}`$ to −28.00 km s$`^{-1}`$ and perhaps at −30.21 km s$`^{-1}`$ (see NTD97). If this apparently interacting HI and the HI forming the shell are all at the distance of W5 and HB3, then G132.6−0.7−25.3 would be at $`\sim `$2.2 kpc.
Table 2 summarizes this rather confusing state of affairs. In what follows, all quantities are given with their dependence on distance explicitly stated and with the value for a distance of 2.2 kpc in brackets. This value is preferred for a combination of reasons. Kinematic distances have shown themselves to be unreliable towards the Perseus arm (eliminating entries I and II of the table), tied to the fact that the standard rotation curve does not apply because of observed streaming motions (eliminating entry IV, as well as VI and IX, both of which implicitly assume that all the HI follows the rotation curve, i.e. that decreasing velocity corresponds to increasing distance). The TASS model is a more promising description of the behaviour of gas towards these longitudes, favoring entry V (slightly more than 2.2 kpc), which places G132.6−0.7−25.3 slightly past the main ridge of the Perseus arm (entry XI). This is also in accord with the inference that it is at the same distance as W5 and HB3 (entry III).
## 5 Stars
There are no visible, catalogued, energetic main-sequence stars within the shell at present (according to the Simbad database). This is not surprising: if there were energetic stars present, there should be an inner shell of ionized gas visible in the Stokes I images. Figure 6 shows the positions of the 74 catalogued O and B stars in the vicinity (in projection) of G132.6−0.7−25.3 with reference to the shell at full extent. The area searched using the Simbad database is the one displayed in the figure. The concentration of stars in the lower left-hand corner of the plot is the open cluster Stock 2, which is at 303 pc (Mermillod 1999). A cautionary note should be sounded: if G132.6−0.7−25.3 is behind the main ridge of the Perseus arm, as argued above, then some stars may have been lost to obscuration. However, at these longitudes the plane is at higher latitudes, near b = 1°, and most of the observed dust emission seems to be associated with the shell rather than being in the foreground.
The most promising candidate energy source for the shell, if it is wind-blown, is BD+60 447. This B1 Ia star lies almost exactly at the centre of the shell in projection, and according to Humphreys (1970) it is at a distance of 1.55 kpc, determined spectro-photometrically from previously published data; this distance is not inconsistent with the various estimates for G132.6−0.7−25.3. No uncertainty was quoted by Humphreys, and no radial velocity was listed. While on the main sequence BD+60 447 was most probably a late O star, which means it would have had stellar winds strong enough to blow a bubble even though, in its present state, it is no longer capable of maintaining the growth of the shell or its ionization.
Other stars within the shell in projection include main-sequence B9 and B7 stars, and four unclassified B stars. While these do not provide stellar winds, if they are inside the shell they may be contributing to the expansion of G132.6−0.7−25.3 through radiation pressure, as discussed by Elmegreen & Chiang (1982). These authors contend that once a shell has grown sufficiently to include many field stars, their radiation pressure will cause the shell's expansion to accelerate.
Based on the evolutionary tracks of Maeder (1990) and using the luminosity and effective temperature given by Lang (1991) for a B1 I star, it would appear that BD+60 447 had a mass between 20 M⊙ and 25 M⊙ while on the main sequence. According to Table 3 of Howarth & Prinja (1989; hereafter HP89), this implies that it was an O9.5 V or an O9.0 V star. As a lower limit on the energy that could have been input by stellar winds during its hydrogen-burning phase, consider an O9.5 V star. Based on the empirical relations derived by HP89, such a star would have a mass-loss rate of $`10^{-7.36}`$ M⊙ yr$`^{-1}`$ and a terminal wind velocity of 2000 km s$`^{-1}`$, giving a stellar wind luminosity of $`6\times 10^{34}`$ erg s$`^{-1}`$. From Stothers (1972), the main-sequence lifetime of such a star would be 11.2 Myr, and therefore the total kinetic energy output would be $`2\times 10^{49}`$ erg.
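These figures follow from $`L_w=\frac{1}{2}\dot{M}v_{\mathrm{}}^2`$ integrated over the main-sequence lifetime; a quick check of the arithmetic quoted above:

```python
MSUN = 1.989e33   # g
YR   = 3.156e7    # s

mdot  = 10**(-7.36) * MSUN / YR   # HP89 mass-loss rate, converted to g/s
v_inf = 2000.0e5                  # terminal wind velocity [cm/s]

L_w   = 0.5 * mdot * v_inf**2     # wind kinetic luminosity [erg/s]
E_tot = L_w * 11.2e6 * YR         # integrated over the 11.2 Myr main sequence
print(f"L_w = {L_w:.1e} erg/s, E_tot = {E_tot:.1e} erg")
# -> ~5.5e34 erg/s and ~1.9e49 erg, i.e. the quoted 6e34 erg/s and 2e49 erg
```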
## 6 Analysis and conjecture
It will be assumed that the shell is expanding. This is the most likely scenario in view of the varying morphology seen in the HI images. It is unlikely that there exists, in the ISM, a long enough, stationary cylinder or funnel to account for the aspect of the HI in the different channels.
### 6.1 Kinetic energy of G132.6−0.7−25.3
From its integrated flux, from which a twisted-plane background was subtracted, the average column density for the well-defined first half of the shell is $`2.7\times 10^{20}`$ cm$`^{-2}`$. This implies an HI mass of $`1.8d_{\mathrm{kpc}}^2\times 10^3`$ M⊙ \[$`9\times 10^3`$ M⊙\]. This is in agreement with the statistical observation of Heiles (1984) that the mass swept up by a shell is very approximately $`8.5R_{\mathrm{sh}}^2`$ M⊙, where $`R_{\mathrm{sh}}`$ is in parsecs, which in this case would predict $`\mathrm{M}\sim 1.9d_{\mathrm{kpc}}^2\times 10^3`$ M⊙ \[$`9\times 10^3`$ M⊙\].
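The mass estimate is essentially $`N_{\mathrm{HI}}\times `$ (projected area) $`\times m_\mathrm{H}`$. A sketch treating the shell as the projected ellipse quoted in §3.1 (the actual integration area used in the paper may differ, so this only reproduces the order of magnitude):

```python
import math

PC   = 3.086e18   # cm
MSUN = 1.989e33   # g
M_H  = 1.673e-24  # g

N_HI  = 2.7e20    # mean HI column density [cm^-2]
d_kpc = 2.2
a = 55.2 / 60 * math.radians(1) * d_kpc * 1e3   # semi-major axis [pc]
b = 47.6 / 60 * math.radians(1) * d_kpc * 1e3   # semi-minor axis [pc]

area = math.pi * a * b * PC**2                  # projected ellipse [cm^2]
M = N_HI * area * M_H / MSUN
print(f"a = {a:.1f} pc, b = {b:.1f} pc, M_HI ~ {M:.1e} Msun")
# -> ~7e3 Msun, the same order as the quoted 9e3 Msun at 2.2 kpc
```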
Assuming that the cap of the shell is seen at a velocity of −15.37 km s$`^{-1}`$ and that it has reached full extent at −25.27 km s$`^{-1}`$, an expansion velocity of approximately 9.9 km s$`^{-1}`$ is found. This is barely larger than the turbulent velocity standardly assumed for the ISM, implying that the shell should soon begin to dissipate into the ambient gas, though the low density of the latter, as evidenced in the image of the shell at full extent, will make the process slower than in denser surroundings. For a complete shell (one with twice the mass of the observed fore half) to expand with this velocity would require the injection of $`2d_{\mathrm{kpc}}^2\times 10^{48}`$ erg \[$`10^{49}`$ erg\]. Note that this is the same order of magnitude as the stellar wind kinetic energy of BD+60 447 during its main-sequence life. It is also low, though not unreasonably so, for a supernova remnant, especially considering that energy would have been lost by now; $`5d_{\mathrm{kpc}}^2\times 10^{49}`$ erg \[$`2\times 10^{50}`$ erg\] may have been lost to recombination if all the HI currently associated with G132.6−0.7−25.3 was previously part of an ionized shell.
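Both energy figures are one-line estimates: $`E_{\mathrm{kin}}=\frac{1}{2}Mv^2`$ for a complete shell of twice the observed mass, and 13.6 eV per hydrogen atom for the recombination loss. A check at 2.2 kpc:

```python
MSUN = 1.989e33   # g
M_H  = 1.673e-24  # g
EV   = 1.602e-12  # erg

M_half = 9.0e3 * MSUN     # mass of the observed (fore) half of the shell
v_exp  = 9.9e5            # expansion velocity [cm/s]

E_kin = 0.5 * (2.0 * M_half) * v_exp**2   # complete shell, twice the mass
E_rec = (M_half / M_H) * 13.6 * EV        # one recombination per observed atom
print(f"E_kin ~ {E_kin:.1e} erg, E_rec ~ {E_rec:.1e} erg")
# -> ~1.8e49 erg and ~2.3e50 erg, matching the quoted 1e49 and 2e50 erg
```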
### 6.2 Age and expansion velocity
If the winds from BD+60 447 created and sustained the shell, then G132.6−0.7−25.3 would be approximately the same age as the star. Given that the fusion stages past hydrogen burning are estimated to last 0.1 times as long as the main sequence (Meynet et al. 1994), the age of the bubble cannot be much greater than the main-sequence lifetime of BD+60 447, estimated earlier to be 11.2 Myr on the assumption that it was of type O9.5 while on the main sequence. It should be noted that the shell's low expansion velocity and the current evolutionary phase of the central star are in accord with the fact that, in the standard model for wind-blown bubbles (Weaver et al. 1977), the time to dissipation into the ISM is approximately equal to the main-sequence lifetime of the source of the wind.
The shell should at present be in the momentum driven bubble phase, but the transition to this phase would only have been a recent event and therefore it would be best to consider the previous phase, a bubble with a radiative outer shock. According to the standard model, for a bubble with a radiative outer shock the radius varies as
$$R_2(t)=28\left(\frac{L_{36}}{n_0}\right)^{1/5}t_6^{3/5}\mathrm{\ pc},\qquad (1)$$
where $`L_{36}`$ is the wind luminosity in units of $`10^{36}`$ erg s$`^{-1}`$, $`n_0`$ is the ambient number density in cm$`^{-3}`$, and the time $`t_6`$ is given in units of $`10^6`$ yr.
By taking the derivative of the above equation and using the radius and age estimates, one can calculate a predicted expansion velocity for the shell. A velocity of $`0.7d_{\mathrm{kpc}}`$ km s$`^{-1}`$ \[1.6 km s$`^{-1}`$\] is predicted, and the shell would only have slowed down further as it continued into the momentum-driven bubble phase, barring other energy inputs. Not only is this significantly less than the observed value, but it is also less than the turbulent velocity of the ISM, so the shell should already have dissipated. The shell has too high an expansion velocity for its radius and assumed age if it is wind-blown.
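Differentiating eq. (1) gives $`v=\mathrm{d}R_2/\mathrm{d}t=\frac{3}{5}R_2/t`$, so the prediction needs only the radius and the assumed age:

```python
PC_PER_MYR_IN_KMS = 0.978   # 1 pc/Myr expressed in km/s

R = 30.5    # shell semi-minor axis [pc] at 2.2 kpc
t = 11.2    # assumed age [Myr], the main-sequence lifetime of BD+60 447

v = 0.6 * R / t * PC_PER_MYR_IN_KMS
print(f"predicted v = {v:.1f} km/s")   # ~1.6 km/s, far below the observed 9.9
```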
The kinematic age of the bubble ($`R_{\mathrm{sh}}/v_{\mathrm{exp}}`$), for a constant expansion velocity of 9.9 km s$`^{-1}`$, is $`1.4d_{\mathrm{kpc}}`$ Myr \[3.1 Myr\]. This should be an upper limit to the age of the bubble, independent of wind-blown models, as long as there has been no acceleration. The age estimated from the main-sequence lifetime of the assumed stellar wind source is much greater than this kinematic age. BD+60 447 may have been of a somewhat earlier type when on the main sequence, perhaps as early as O8, but this does not solve the problem: the age estimate would still be too high, at slightly more than 7.1 Myr (Stothers 1972), which would require an uncertainty of over 100% in the kinematic age for there to be agreement. This is unlikely, considering that there is little uncertainty in the radius; as for the expansion velocity, the smooth variation of the morphology from channel to channel argues against the estimate of 9.9 km s$`^{-1}`$ being significantly off.
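The kinematic age itself is simple unit bookkeeping:

```python
KMS_IN_PC_PER_MYR = 1.023   # 1 km/s expressed in pc/Myr

R     = 30.5    # shell radius [pc] at 2.2 kpc
v_exp = 9.9     # observed expansion velocity [km/s]

t_kin = R / (v_exp * KMS_IN_PC_PER_MYR)
print(f"t_kin = {t_kin:.1f} Myr")   # ~3 Myr, versus >7 Myr for the star
```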
This age disagreement indicates that the stellar winds from this star alone cannot be responsible for the present state of G132.6−0.7−25.3. If G132.6−0.7−25.3 was mainly formed by BD+60 447's wind, some other factor must have caused it to accelerate. Expansion into a density gradient, with consequent acceleration (see next section), may explain the discrepancy between the observed velocity and that predicted for expansion into a uniform medium. Another possible contributing factor is the radiation pressure from the ordinary stars now within the shell, as mentioned in §5.
For the SNR hypothesis, there is no candidate energy source and therefore no presumed age for G132.6−0.7−25.3. Thus the kinematic age and the expansion velocity do not pose a problem if the shell is a SNR rather than a stellar wind bubble.
### 6.3 Shape of the shell and scale height
As stated in §3.1, G132.6−0.7−25.3 is elongated in the direction perpendicular to the Galactic plane, and slightly wider at the base. This type of shape is expected for a bubble evolving in a density gradient. The minor axis is 95.2 arcmin (61 pc for 2.2 kpc). Since any model of bubble evolution in a stratified atmosphere (e.g. Kompaneets 1960, Tomisaka & Ikeuchi 1986, Mac Low & McCray 1988) predicts near-spherical evolution at early times, with significant elongation only when the radius exceeds the scale height (enabling the bubble to sense the ambient stratification), the observed elongation of this shell implies $`H<b/2=13.8d_{\mathrm{kpc}}`$ pc \[30.5 pc\].
This extremely small value for the scale height is reminiscent of the low value of H (25 pc) found by Basu et al. (1999) for the nearby (in projection at least) W4 superbubble, and of the scale height (22 pc) used by Shelton et al. (1999) when modelling W44. A more precise value for the scale height can be found by fitting the analytic Kompaneets (1960) solution to the shell. The Kompaneets solution consists of an analytic expression for the bubble shape at various stages of its evolution in an exponential atmosphere. The observed ratio of major to minor axis can be matched to a Kompaneets model at a particular stage of evolution, yielding the ratio of the current radius to the ambient scale height. Details of the Kompaneets solution and of this technique for determining the scale height can be found in Basu et al. (1999). For G132.6−0.7−25.3, we find that the best-fit Kompaneets model has a semi-minor axis of $`1.76H`$. This yields $`H=7.9d_{\mathrm{kpc}}`$ pc \[17.3 pc\]. The general point is that the scale height must be 30 pc or less in this environment, if the elongation is due to a density gradient.
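In the Kompaneets solution the two semi-axes have closed forms in the dimensionless expansion parameter $`\widehat{y}=y/2H`$: the semi-major (vertical) axis is $`2H\mathrm{artanh}\widehat{y}`$ and the semi-minor axis is $`2H\mathrm{arcsin}\widehat{y}`$. These expressions follow from the standard Kompaneets shape used by Basu et al. (1999); the bisection solver below is our own sketch, matching the observed axis ratio to fix $`\widehat{y}`$ and hence $`H`$:

```python
import math

ratio_obs = 110.4 / 95.2    # observed major/minor axis ratio of the shell

def axis_ratio(yhat):
    """Kompaneets semi-major/semi-minor axis ratio at expansion parameter yhat."""
    return math.atanh(yhat) / math.asin(yhat)

# axis_ratio is monotonically increasing on (0, 1), so bisection suffices
lo, hi = 1e-6, 1.0 - 1e-9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if axis_ratio(mid) < ratio_obs:
        lo = mid
    else:
        hi = mid
yhat = 0.5 * (lo + hi)

b_semi = 95.2 / 2 / 60 * math.radians(1) * 2200.0   # semi-minor axis [pc] at 2.2 kpc
H = b_semi / (2.0 * math.asin(yhat))
print(f"yhat = {yhat:.3f}, semi-minor = {2*math.asin(yhat):.2f} H, H = {H:.1f} pc")
# -> semi-minor ~ 1.76 H and H ~ 17 pc, as quoted in the text
```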
This value is, of course, valid for the ISM local to G132.6−0.7−25.3, just as the values relating to studies of W44 and the W4 superbubble were applicable to the environments of those objects. They do not invalidate the much greater values ($`>`$ 100 pc) found for the global, Galactic scale height, though this is perhaps an indication that the global scale height is determined by very different processes than those that govern the equilibrium of gas on smaller scales.
### 6.4 Within the shell
As was noted in the description of the HI emission, there is an HI 'U' within the shell: a partial shell within a shell, as it were. In the crook of this 'U' there is a compact radio continuum source which is positionally coincident with an IRAS compact source having the colours of an HII region (see Hughes & MacLeod 1989 for colour selection criteria), namely IRAS-02044+6031. The infrared colours, the radio continuum spectral index of +0.55, and the morphological indication of HI surrounding the compact source combine to suggest that this is an HII region within a layer of dissociated gas, located on the periphery of the shell.
Though originally thought to be a planetary nebula (Acker et al. 1983), this identification of IRAS-02044+6031 has since been found to be in error (Sabbadin 1986, Acker et al. 1986, Zijlstra et al. 1990). No maser emission has been detected despite several searches (6.7 GHz methanol by MacLeod et al. 1998, 5 cm OH lines by Baudry et al. 1997, H₂O maser lines by Codella et al. 1996 and by Brand et al. 1994), indicating either that the geometry is simply inappropriate for maser detection or that the region has evolved sufficiently that there is no longer maser activity. Considering the possibility that the HI $`U`$ is related dissociated gas, the latter explanation is not unreasonable. It should be noted, however, that the velocity interval sampled by the maser searches did not always extend to velocities as high as those of the HI discussed here.
There is no indication of a bright star which could account for the ionized gas. In fact, on the POSS images, coincident with the IRAS source, there is a compact region of increased extinction. This suggests that it is a young HII region. Perhaps its formation was triggered by the expansion of the G132.6−0.7−25.3 shell. It should be noted, however, that Wouterloot & Brand (1989) associate this IRAS source with CO emission at −55.7 km s$`^{-1}`$, which would be unlikely to be related to the HI seen at −25.27 km s$`^{-1}`$; the geometry of the region of CO emission at −55 km s$`^{-1}`$ is also not suggestive of an association with the HI.
To summarize: in the context of the larger shell described in this paper, this small U-shaped structure is proposed to be HI formed through dissociation by the stars within a compact HII region. The HII region itself was perhaps formed when the expansion of G132.6−0.7−25.3 compressed gas in its periphery sufficiently to induce star formation. The U would then be second-generation HI gas related to G132.6−0.7−25.3.
## 7 Summary and conclusions
An HI shell has been found near the very active W3/W4 HII region complex. The lack of a radio continuum counterpart has been interpreted as indicative of the advanced age of the shell, be it a wind-blown shell or a SNR. If it is a wind-blown shell, then the most likely powering source is the B1 supergiant BD+60 447. This is based on the position of the star (at the centre in projection and at a reasonable distance) and its spectral type (stellar winds strong enough while on the main sequence to blow a bubble; no longer capable of maintaining the ionization of the shell, in accord with the lack of a continuum counterpart). The age of the star and the kinematic age of the shell are, however, discrepant, the former being greater than the latter. This could point to the shell being a member of the observed class of 'high velocity' shells (Oey 1996), which have somehow been reaccelerated, but it could also be taken to indicate that the shell was not created by BD+60 447 (there are no other catalogued stars present which could have blown the shell through stellar winds) but is in fact a SNR. Based on the available data, it is not possible to distinguish between the two possibilities.
Regardless of the origin of G132.6−0.7−25.3, if the elongation of the shell is due to the density gradient of its surroundings, the Kompaneets model can be used to determine the scale height of the ambient ISM. In this case, the aspect ratio of the shell indicates that the local scale height is 17.3 pc for a distance of 2.2 kpc. While surprisingly low, analyses of other regions have also pointed to small scale heights (W4 by Basu et al. 1999; W44 by Shelton et al. 1999).
At the edge of the shell, there is a smaller U-shaped HI structure curving around a compact radio continuum and infrared source. The thermal spectral index of the compact source and its infrared colours imply that it is an HII region; the HI then corresponds to an encircling photodissociation region. However, the FCRAO Outer Galaxy Survey shows no indication of a coincident molecular cloud at similar velocities. It has been hypothesized that this HII region could be the result of star formation triggered by the expansion of G132.6−0.7−25.3. Compact ¹²CO clouds at other locations along the shell's perimeter could also be triggered or enhanced condensations.
The data from the CGPS are likely to be rife with such structures, as their identification requires arcminute resolution coupled with coverage of wide angular scales. With an analysis of this sort carried out for each shell, our picture of the star-ISM feedback mechanisms will be more complete.
M.N. thanks Brad Wallace for useful input, as well as James Graham and Carl Heiles for comments on drafts of this paper. The Dominion Radio Astrophysical Observatory's synthesis telescope is operated by the National Research Council of Canada as a national facility. The Canadian Galactic Plane Survey is a Canadian project with international partners, and is supported by a grant from the Natural Sciences and Engineering Research Council of Canada. This research made extensive use of the Simbad database, operated at CDS, Strasbourg, France, and of NASA's Astrophysics Data System Astrophysics Science Information and Abstract Service.